Linux: k8s, Deployment, Pod

  1. Declarative configuration file: declares that a given resource in the cluster must be in a specified state.
  2. Which resources in the cluster can be managed? (see the command sketch after this list)
  3. Controller: used to control the number of Pods and their runtime parameters.
  4. Deployment: flexible management; Pods do not need to be created, deleted, run, or updated directly, only the Deployment's configuration needs to be updated.
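
A minimal sketch of that declarative workflow, using the manifest created later in this section (kubectl apply, diff, and api-resources are standard kubectl subcommands):

# Hand the desired state to the API server; the controllers reconcile the cluster toward it.
kubectl apply -f nginx-deployment.yml     # create or update the resource declaratively
kubectl diff -f nginx-deployment.yml      # preview what would change before applying
kubectl api-resources                     # list the resource kinds the cluster can manage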

1. How to control Pods

deployment ----> ReplicaSet (controls the number of Pods run from the template) ----> creates the Pods the service needs
[root@control ~]# cat nginx-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
[root@control ~]# source  .kube/k8s_bash_completion
[root@control ~]# kubectl create -f  nginx-deployment.yml
deployment.apps/nginx-deployment created
[root@control ~]# kubectl get deployments.apps -l app=nginx
No resources found in default namespace.
// No match: the Deployment object itself carries no labels (Labels: <none> in the describe output below); only its Pod template is labeled app=nginx
[root@control ~]# kubectl get deployments.apps
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx           2/2     2            2           3d21h
nginx-deployment   3/3     3            3           29s
[root@control ~]# kubectl describe deployments.apps nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Tue, 24 Sep 2024 14:27:32 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:         nginx:latest
    Port:          80/TCP
    Host Port:     0/TCP
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-bf56f49c (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  85s   deployment-controller  Scaled up replica set nginx-deployment-bf56f49c to 3
[root@control ~]# kubectl get rs  nginx-deployment-bf56f49c
NAME                        DESIRED   CURRENT   READY   AGE
nginx-deployment-bf56f49c   3         3         3       2m22s
[root@control ~]# kubectl get pods -l app=nginx
NAME                              READY   STATUS    RESTARTS   AGE
nginx-deployment-bf56f49c-9c8gw   1/1     Running   0          6m48s
nginx-deployment-bf56f49c-n59hg   1/1     Running   0          6m48s
nginx-deployment-bf56f49c-v887p   1/1     Running   0          6m48s
[root@control ~]# kubectl get pods -l app=nginx --show-labels
NAME                              READY   STATUS    RESTARTS   AGE     LABELS
nginx-deployment-bf56f49c-9c8gw   1/1     Running   0          7m32s   app=nginx,pod-template-hash=bf56f49c
nginx-deployment-bf56f49c-n59hg   1/1     Running   0          7m32s   app=nginx,pod-template-hash=bf56f49c
nginx-deployment-bf56f49c-v887p   1/1     Running   0          7m32s   app=nginx,pod-template-hash=bf56f49c
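
The Deployment ----> ReplicaSet ----> Pod chain can be confirmed from the ownerReferences of each object. A hedged sketch, reusing one of the Pod names listed above:

# Which ReplicaSet owns this Pod?
kubectl get pod nginx-deployment-bf56f49c-9c8gw \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
# Which Deployment owns that ReplicaSet?
kubectl get rs nginx-deployment-bf56f49c \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'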
// To make sure the worker nodes already have the image needed for the update, manually copy the image to them
[root@control ~]# docker save -o nginx-19.1.tar nginx:1.19.1
[root@control ~]# scp nginx-19.1.tar root@node1:/root
root@node1's password:
nginx-19.1.tar                                                                                                                                         100%  130MB
[root@control ~]# scp nginx-19.1.tar root@node2:/root
root@node2's password:
nginx-19.1.tar                                                                                                                                         100%  130MB  51.7MB/s   00:02
[root@node1 ~]# ctr -n k8s.io image import  nginx-19.1.tar
[root@node2 ~]# ctr -n k8s.io image import  nginx-19.1.tar
// Check the images stored on the worker node
[root@node2 ~]# crictl -r unix:///var/run/containerd/containerd.sock images
Back on the control node, perform the update:
// Update the image
[root@control ~]# kubectl set image deployments nginx-deployment nginx=nginx:1.19.1
deployment.apps/nginx-deployment image updated
[root@control ~]# kubectl get pods -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7f588fbd68-bjd67   1/1     Running   0          9s
nginx-deployment-7f588fbd68-n2fls   1/1     Running   0          5s
nginx-deployment-7f588fbd68-qnwdn   1/1     Running   0          8s
[root@control ~]# kubectl describe deployments.apps nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Tue, 24 Sep 2024 14:27:32 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:         nginx:1.19.1
    Port:          80/TCP
    Host Port:     0/TCP
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  nginx-deployment-bf56f49c (0/0 replicas created)
NewReplicaSet:   nginx-deployment-7f588fbd68 (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  15m   deployment-controller  Scaled up replica set nginx-deployment-bf56f49c to 3
  Normal  ScalingReplicaSet  49s   deployment-controller  Scaled up replica set nginx-deployment-7f588fbd68 to 1
  Normal  ScalingReplicaSet  48s   deployment-controller  Scaled down replica set nginx-deployment-bf56f49c to 2 from 3
  Normal  ScalingReplicaSet  48s   deployment-controller  Scaled up replica set nginx-deployment-7f588fbd68 to 2 from 1
  Normal  ScalingReplicaSet  45s   deployment-controller  Scaled down replica set nginx-deployment-bf56f49c to 1 from 2
  Normal  ScalingReplicaSet  45s   deployment-controller  Scaled up replica set nginx-deployment-7f588fbd68 to 3 from 2
  Normal  ScalingReplicaSet  44s   deployment-controller  Scaled down replica set nginx-deployment-bf56f49c to 0 from 1
// After the update, the Deployment creates a new rs to start the new containers and keeps the old rs scaled down to 0
// The number of old rs kept for rollback is controlled by .spec.revisionHistoryLimit (10 by default unless configured otherwise; see the sketch after the listing below)
[root@control ~]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
my-nginx-7549dd6888           2         2         2       3d21h
nginx-deployment-7f588fbd68   3         3         3       2m18s
nginx-deployment-bf56f49c     0         0         0       16m
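
How many old rs are retained is tunable per Deployment. A minimal sketch of the relevant field (not part of the manifest above, shown only as an assumption of how it would be set):

spec:
  revisionHistoryLimit: 2   # keep only the 2 most recent old ReplicaSets available for rollback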
[root@control ~]# kubectl edit deployments.apps nginx-deployment

deployment.apps/nginx-deployment edited
// Show the rollout progress
// This update stalls because the nginx:1.191 image (set via kubectl edit) cannot be pulled
[root@control ~]# kubectl rollout status deployment nginx-deployment  
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
^C
// One Pod is stuck in an error state while the three old Pods keep running
[root@control ~]# kubectl get pods -l app=nginx
NAME                                READY   STATUS             RESTARTS   AGE
nginx-deployment-6dc44567c6-85glr   0/1     ImagePullBackOff   0          54s
nginx-deployment-7f588fbd68-bjd67   1/1     Running            0          8m58s
nginx-deployment-7f588fbd68-n2fls   1/1     Running            0          8m54s
nginx-deployment-7f588fbd68-qnwdn   1/1     Running            0          8m57s
[root@control ~]# kubectl get deployments.apps nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     1            3           23m
// View the rollout history
[root@control ~]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
// View the parameters of revision 2
[root@control ~]# kubectl rollout history deployment nginx-deployment --revision=2
deployment.apps/nginx-deployment with revision #2
Pod Template:
  Labels:       app=nginx
                pod-template-hash=7f588fbd68
  Containers:
   nginx:
    Image:      nginx:1.19.1
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
  Node-Selectors:       <none>
  Tolerations:  <none>
// Roll back the failed update
[root@control ~]# kubectl rollout undo deployment nginx-deployment
deployment.apps/nginx-deployment rolled back
[root@control ~]# kubectl get deployments.apps nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           27m
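
Beyond undoing to the previous revision, a rollback can target any revision shown by "kubectl rollout history", and a change-cause annotation fills in the CHANGE-CAUSE column that was <none> above. A hedged sketch (the annotation text is arbitrary):

kubectl rollout undo deployment nginx-deployment --to-revision=2   # roll back to a specific revision
kubectl annotate deployment nginx-deployment \
  kubernetes.io/change-cause="update image to nginx:1.19.1"        # recorded in the rollout history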

Deployment is the most common way to deploy stateless services. Typical use cases include:

  1. Use a Deployment to create a ReplicaSet. The ReplicaSet creates the Pods in the background. Check the rollout status to see whether it succeeded or failed.
  2. Then declare a new state for the Pods by updating the Deployment's PodTemplateSpec. This creates a new ReplicaSet, and the Deployment moves Pods from the old ReplicaSet to the new one at a controlled rate.
  3. If the current state is unstable, roll back to an earlier Deployment revision. Each rollback updates the Deployment's revision.
  4. Scale the Deployment out to handle higher load (see the command sketch after this list).
  5. Pause the Deployment to apply several fixes to the PodTemplateSpec, then resume to roll them out.
  6. Use the Deployment's status to judge whether a rollout has hung.
  7. Clean up old ReplicaSets that are no longer needed.
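
The scaling, pausing, and hang-detection cases above map onto a few kubectl subcommands. A minimal sketch against the nginx-deployment used in this section:

kubectl scale deployment nginx-deployment --replicas=5              # scale out for higher load
kubectl rollout pause deployment nginx-deployment                   # batch several PodTemplateSpec changes
kubectl set image deployment nginx-deployment nginx=nginx:1.19.1    # changes queue up while paused
kubectl rollout resume deployment nginx-deployment                  # roll out the batched changes at once
kubectl rollout status deployment nginx-deployment --timeout=120s   # non-zero exit code if the rollout hangs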

The most direct differences between a stateless service and a stateful service are:

  1. whether the Pod needs fixed, dedicated storage
  2. whether the Pod needs fixed network parameters
  3. whether the Pods must start in a specific order

Stateful services are usually implemented with a StatefulSet. Typical use cases:

  1. Stable persistent storage: a Pod can still reach the same persistent data after being rescheduled, implemented with PVCs, e.g. a relational database.
  2. Stable network identity: a Pod keeps the same PodName and HostName after being rescheduled, implemented with a headless Service (a Service without a Cluster IP).
  3. Ordered deployment and ordered scaling: Pods are ordered, and during deployment or scale-up they are created strictly in sequence (from 0 to N-1; every earlier Pod must be Running and Ready before the next one starts), implemented with init containers.
  4. Ordered scale-down and ordered deletion (from N-1 to 0).

Using nginx as a reverse proxy // stateless service: it holds no real application data

Running a web application on an LNMP stack // stateful service: it contains the application code, and the database stores the application data, which therefore must be persisted

Exercise: running nginx as a StatefulSet:

[root@control ~]# cat nginx-stateful.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: local-storage
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 4Gi
[root@control ~]# cat pv-1.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 4Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/lv/swap
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
[root@control ~]# cat pv-2.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv2
spec:
  capacity:
    storage: 4Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/lv/swap
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
[root@control ~]# cat local_storage.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
[root@control ~]# kubectl create -f local_storage.yml
storageclass.storage.k8s.io/local-storage created
[root@control ~]# kubectl get storageclasses.storage.k8s.io
NAME            PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  2s
On node1 and node2:
[root@node1 ~]# mkfs.xfs -f /dev/cs_bogon/swap
meta-data=/dev/cs_bogon/swap     isize=512    agcount=4, agsize=256768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=1027072, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node1 ~]# mkdir /mnt/lv/swap -p
[root@node1 ~]# mount /dev/cs_bogon/swap /mnt/lv/swap/
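A plain mount does not survive a reboot, so the local PV path would come up empty after a node restart. A hedged sketch of making the mount persistent (same device and mount point as above; run on each node that hosts a local PV):

echo '/dev/cs_bogon/swap /mnt/lv/swap xfs defaults 0 0' >> /etc/fstab   # persist the mount
mount -a                                                                # re-read fstab and verify it mounts cleanly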
Back on the control node:
// Create the persistent volumes
[root@control ~]# kubectl create -f pv-1.yml
persistentvolume/example-pv created
[root@control ~]# kubectl create -f pv-2.yml
Error from server (AlreadyExists): error when creating "pv-2.yml": persistentvolumes "example-pv" already exists
[root@control ~]# vim pv-2.yml
[root@control ~]# kubectl create -f pv-2.yml
persistentvolume/example-pv2 created
[root@control ~]# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    VOLUMEATTRIBUTESCLASS   REASON   AGE
example-pv    4Gi        RWO            Delete           Available           local-storage   <unset>                          25s
example-pv2   4Gi        RWO            Delete           Available           local-storage   <unset>                          6s
[root@control ~]# kubectl apply -f nginx-stateful.yml
service/nginx created
statefulset.apps/web created
[root@control ~]# kubectl get statefulsets.apps
NAME   READY   AGE
web    2/2     7s
[root@control ~]# kubectl get pvc
NAME        STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
www-web-0   Bound    example-pv    4Gi        RWO            local-storage   <unset>                 16s
www-web-1   Bound    example-pv2   4Gi        RWO            local-storage   <unset>                 14s
[root@control ~]# kubectl describe statefulsets.apps web
Name:               web
Namespace:          default
CreationTimestamp:  Tue, 24 Sep 2024 17:18:48 +0800
Selector:           app=nginx
Labels:             <none>
Annotations:        <none>
Replicas:           2 desired | 2 total
Update Strategy:    RollingUpdate
  Partition:        0
Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.19.1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /usr/share/nginx/html from www (rw)
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Volume Claims:
  Name:          www
  StorageClass:  local-storage
  Labels:        <none>
  Annotations:   <none>
  Capacity:      4Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  55s   statefulset-controller  create Claim www-web-0 Pod web-0 in StatefulSet web success
  Normal  SuccessfulCreate  55s   statefulset-controller  create Pod web-0 in StatefulSet web successful
  Normal  SuccessfulCreate  53s   statefulset-controller  create Claim www-web-1 Pod web-1 in StatefulSet web success
  Normal  SuccessfulCreate  53s   statefulset-controller  create Pod web-1 in StatefulSet web successful
[root@control ~]# kubectl get pods --watch -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7f588fbd68-bjd67   1/1     Running   0          160m
nginx-deployment-7f588fbd68-n2fls   1/1     Running   0          160m
nginx-deployment-7f588fbd68-qnwdn   1/1     Running   0          160m
web-0                               1/1     Running   0          3m15s
web-1                               1/1     Running   0          3m13s
^C
[root@control ~]# kubectl delete pod -l app=nginx
pod "nginx-deployment-7f588fbd68-bjd67" deleted
pod "nginx-deployment-7f588fbd68-n2fls" deleted
pod "nginx-deployment-7f588fbd68-qnwdn" deleted
pod "web-0" deleted
pod "web-1" deleted[root@control ~]#
[root@control ~]# kubectl get statefulsets.apps
NAME   READY   AGE
web    2/2     4m43s
[root@control ~]# kubectl get pods  -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7f588fbd68-4lrb5   1/1     Running   0          36s
nginx-deployment-7f588fbd68-8jnm5   1/1     Running   0          36s
nginx-deployment-7f588fbd68-mvgzg   1/1     Running   0          36s
web-0                               1/1     Running   0          35s
web-1                               1/1     Running   0          33s
[root@control ~]# kubectl describe statefulsets.apps web
Name:               web
Namespace:          default
CreationTimestamp:  Tue, 24 Sep 2024 17:18:48 +0800
Selector:           app=nginx
Labels:             <none>
Annotations:        <none>
Replicas:           2 desired | 2 total
Update Strategy:    RollingUpdate
  Partition:        0
Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.19.1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /usr/share/nginx/html from www (rw)
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Volume Claims:
  Name:          www
  StorageClass:  local-storage
  Labels:        <none>
  Annotations:   <none>
  Capacity:      4Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age                  From                    Message
  ----    ------            ----                 ----                    -------
  Normal  SuccessfulCreate  5m16s                statefulset-controller  create Claim www-web-0 Pod web-0 in StatefulSet web success
  Normal  SuccessfulCreate  5m14s                statefulset-controller  create Claim www-web-1 Pod web-1 in StatefulSet web success
  Normal  SuccessfulCreate  48s (x2 over 5m16s)  statefulset-controller  create Pod web-0 in StatefulSet web successful
  Normal  SuccessfulCreate  46s (x2 over 5m14s)  statefulset-controller  create Pod web-1 in StatefulSet web successful
[root@control ~]# for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done
web-0
web-1

Conclusion: the Pod names and their bindings to the persistent volumes never change, so both the data and the communication endpoints stay fixed (no matter how the Pods are replaced, other workloads can always reach them by Pod name).
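
Because the Service above is headless (clusterIP: None), each Pod also gets a stable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local, which is what "communicating by Pod name" relies on in practice. A hedged sketch of checking this from a throwaway Pod (the busybox image and the default cluster domain are assumptions):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup web-0.nginx.default.svc.cluster.local   # resolves to web-0's Pod IP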
