Table of Contents
1. Resources in Kubernetes
1.1 Introduction to Resource Management
1.2 Resource Management Approaches
1.2.1 Imperative Object Management
1.2.2 Resource Types
1.2.3 Basic Command Examples
1.2.4 Run and Debug Command Examples
1.2.5 Advanced Command Examples
2. What Is a Pod
2.1 Creating Standalone Pods (not recommended in production)
2.2 Managing Pods with Controllers (recommended)
2.3 Updating Application Versions
2.4 Deploying Applications with YAML Files
2.4.1 Advantages of YAML-based deployment
2.4.2 Resource Manifest Parameters
2.4.3 Getting Help on Resources
2.4.4 Worked Examples
2.4.4.1 Example 1: Running a simple single-container pod
2.4.4.2 Example 2: Running a multi-container pod
2.4.4.3 Example 3: Understanding pod-internal networking
2.4.4.4 Example 4: Port mapping
2.4.4.5 Example 5: Setting environment variables
2.4.4.6 Example 6: Resource limits
2.4.4.7 Example 7: Container start-up management
2.4.4.8 Example 8: Selecting the node to run on
2.4.4.9 Example 9: Sharing the host network
3. The Pod Lifecycle
3.1 Init Containers
31.1 Functions of Init containers
3.1.2 Init container example
3.2 Probes
3.2.1 Probe examples
3.2.1.1 Liveness probe example
3.2.1.2 Readiness probe example
1. Resources in Kubernetes
1.1 Introduction to Resource Management
- In Kubernetes, everything is abstracted as a resource; users manage Kubernetes by operating on resources.
- Kubernetes is essentially a cluster system in which users can deploy all kinds of services.
- Deploying a service really means running containers in the Kubernetes cluster and running the specified programs inside those containers.
- The smallest management unit in Kubernetes is the Pod, not the container; containers can only be placed inside Pods.
- Access to the services inside a Pod is provided by the Kubernetes Service resource.
- Persistence of a Pod's application data is provided by the various storage systems that Kubernetes supports.
1.2 Resource Management Approaches
Imperative object management: operate on Kubernetes resources directly with commands
kubectl run nginx-pod --image=nginx:latest --port=80
Imperative object configuration: operate on Kubernetes resources with commands plus configuration files
kubectl create/patch -f nginx-pod.yaml
Declarative object configuration: operate on Kubernetes resources with the apply command plus configuration files
kubectl apply -f nginx-pod.yaml
Approach | Suitable environment | Pros | Cons |
Imperative object management | Testing | Simple | Can only operate on live objects; no audit trail or tracking |
Imperative object configuration | Development | Auditable and trackable | Many config files in large projects; cumbersome to operate |
Declarative object configuration | Development | Supports operating on directories | Hard to debug when something unexpected happens |
1.2.1 Imperative Object Management
kubectl is the command-line tool for a Kubernetes cluster. With it you can manage the cluster itself and install and deploy containerized applications on the cluster.
The syntax of a kubectl command is:
kubectl [command] [type] [name] [flags]
command: the operation to perform on the resource, e.g. create, get, delete
type: the resource type, e.g. deployment, pod, service
name: the name of the resource; names are case-sensitive
flags: additional optional flags
# List all pods
kubectl get pod
# Show a specific pod
kubectl get pod pod_name
# Show a specific pod in YAML format
kubectl get pod pod_name -o yaml
1.2.2 Resource Types
Everything in Kubernetes is abstracted as a resource. To list the resource types the cluster knows about:
kubectl api-resources
Commonly used resource types
Common kubectl operations
1.2.3 Basic Command Examples
Detailed kubectl documentation: Kubectl Reference Docs
# Show the cluster version
[root@k8s-master ~]# kubectl version
Client Version: v1.30.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
# Show cluster information
[root@k8s-master ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.10.100:6443
CoreDNS is running at https://192.168.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
# Create a webcluster controller with 3 pods
[root@k8s-master ~]# kubectl create deployment webcluseter --image nginx --replicas 3
deployment.apps/webcluseter created
# List controllers
[root@k8s-master ~]# kubectl get deployments.apps
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
webcluseter   3/3     3            3           43s
# Get help on a resource
[root@k8s-master ~]# kubectl explain deployment
GROUP:      apps
KIND:       Deployment
VERSION:    v1

DESCRIPTION:
    Deployment enables declarative updates for Pods and ReplicaSets.

FIELDS:
  apiVersion    <string>
    APIVersion defines the versioned schema of this representation of an object.
    Servers should convert recognized schemas to the latest internal value, and
    may reject unrecognized values. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

  kind    <string>
    Kind is a string value representing the REST resource this object
    represents. Servers may infer this from the endpoint the client submits
    requests to. Cannot be updated. In CamelCase. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

  metadata    <ObjectMeta>
    Standard object's metadata. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

  spec    <DeploymentSpec>
    Specification of the desired behavior of the Deployment.

  status    <DeploymentStatus>
    Most recently observed status of the Deployment.

# Get help on a controller's parameters
[root@k8s-master ~]# kubectl explain deployment.spec
GROUP:      apps
KIND:       Deployment
VERSION:    v1

FIELD: spec <DeploymentSpec>

DESCRIPTION:
    Specification of the desired behavior of the Deployment.
    DeploymentSpec is the specification of the desired behavior of the
    Deployment.

FIELDS:
  minReadySeconds    <integer>
    Minimum number of seconds for which a newly created pod should be ready
    without any of its container crashing, for it to be considered available.
    Defaults to 0 (pod will be considered available as soon as it is ready)

  paused    <boolean>
    Indicates that the deployment is paused.

  progressDeadlineSeconds    <integer>
    The maximum time in seconds for a deployment to make progress before it is
    considered to be failed. The deployment controller will continue to process
    failed deployments and a condition with a ProgressDeadlineExceeded reason
    will be surfaced in the deployment status. Note that progress will not be
    estimated during the time a deployment is paused. Defaults to 600s.

  replicas    <integer>
    Number of desired pods. This is a pointer to distinguish between explicit
    zero and not specified. Defaults to 1.

  revisionHistoryLimit    <integer>
    The number of old ReplicaSets to retain to allow rollback. This is a pointer
    to distinguish between explicit zero and not specified. Defaults to 10.

  selector    <LabelSelector> -required-
    Label selector for pods. Existing ReplicaSets whose pods are selected by
    this will be the ones affected by this deployment. It must match the pod
    template's labels.

  strategy    <DeploymentStrategy>
    The deployment strategy to use to replace existing pods with new ones.

  template    <PodTemplateSpec> -required-
    Template describes the pods that will be created. The only allowed
    template.spec.restartPolicy value is "Always".
# Edit a controller's configuration
[root@k8s-master ~]# kubectl edit deployments.apps webcluseter
.....
spec:
  progressDeadlineSeconds: 600
  replicas: 2

[root@k8s-master ~]# kubectl get deployments.apps
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
webcluseter   2/2     2            2           16m
# Change a controller's configuration with a patch
[root@k8s-master ~]# kubectl patch deployments.apps webcluseter -p '{"spec":{"replicas":4}}'
deployment.apps/webcluseter patched
[root@k8s-master ~]# kubectl get deployments.apps
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
webcluseter   4/4     4            4           5m24s
# Delete a resource
[root@k8s-master ~]# kubectl delete deployments.apps webcluseter
deployment.apps "webcluseter" deleted
[root@k8s-master ~]# kubectl get deployments.apps
No resources found in default namespace.
1.2.4 Run and Debug Command Examples
# Run a pod
[root@k8s-master ~]# kubectl run testpod --image reg.timinglee.org/library/nginx:latest
pod/testpod created
[root@k8s-master ~]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
testpod   1/1     Running   0          7s
# Expose a port
[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   24d
[root@k8s-master ~]# kubectl expose pod testpod --port 80 --target-port 80
service/testpod exposed
[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   24d
testpod      ClusterIP   10.102.209.96   <none>        80/TCP    9s
[root@k8s-master ~]# curl 10.102.209.96
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
# Show detailed resource information
[root@k8s-master ~]# kubectl describe pods testpod
Name: testpod
Namespace: default
Priority: 0
Service Account: default
Node: k8s-node/192.168.10.10
Start Time: Wed, 09 Oct 2024 14:22:16 +0800
Labels: run=testpod
Annotations: <none>
Status: Running
IP: 10.244.1.4
IPs:
IP: 10.244.1.4
Containers:
testpod:
Container ID: docker://7bc1cf842607dcd01ec383399d5dbe16e374933c71ffb61fb0881c911f7e8f32
Image: reg.timinglee.org/library/nginx:latest
Image ID: docker-pullable://reg.timinglee.org/library/nginx@sha256:127262f8c4c716652d0e7863bba3b8c45bc9214a57d13786c854272102f7c945
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 09 Oct 2024 14:22:17 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8vxhh (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-8vxhh:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m7s default-scheduler Successfully assigned default/testpod to k8s-node
Normal Pulling 9m6s kubelet Pulling image "reg.timinglee.org/library/nginx:latest"
Normal Pulled 9m6s kubelet Successfully pulled image "reg.timinglee.org/library/nginx:latest" in 188ms (188ms including waiting). Image size: 187694648 bytes.
Normal Created 9m6s kubelet Created container testpod
Normal Started 9m6s kubelet Started container testpod
[root@k8s-master ~]#
# Show a resource's logs
[root@k8s-master ~]# kubectl logs pods/testpod
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2024/10/09 06:22:17 [notice] 1#1: using the "epoll" event method
2024/10/09 06:22:17 [notice] 1#1: nginx/1.27.1
2024/10/09 06:22:17 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2024/10/09 06:22:17 [notice] 1#1: OS: Linux 5.14.0-162.6.1.el9_1.x86_64
2024/10/09 06:22:17 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1073741816:1073741816
2024/10/09 06:22:17 [notice] 1#1: start worker processes
2024/10/09 06:22:17 [notice] 1#1: start worker process 29
10.244.0.0 - - [09/Oct/2024:06:26:50 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.76.1" "-"
10.244.1.1 - - [09/Oct/2024:06:29:08 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.76.1" "-"
[root@k8s-master ~]#
# Run an interactive pod
[root@k8s-master ~]# kubectl run -it testpod --image reg.timinglee.org/library/busybox
If you don't see a command prompt, try pressing enter.
/ #
/ #
/ #    # Ctrl+P,Q detaches without stopping the pod
# Run a non-interactive pod
[root@k8s-master ~]# kubectl run nginx --image reg.timinglee.org/library/nginx
pod/nginx created
# Attach to an already-running container that has an interactive environment
[root@k8s-master ~]# kubectl attach pods/testpod -it
If you don't see a command prompt, try pressing enter.
/ #
/ #
/ #
# Run a command in an already-running pod
[root@k8s-master ~]# kubectl exec -it pods/nginx /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx:/#
# Copy a file from the host into the pod
[root@k8s-master ~]# kubectl cp anaconda-ks.cfg nginx:/
[root@k8s-master ~]# kubectl exec -it pods/nginx /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx:/# ls
anaconda-ks.cfg docker-entrypoint.d lib opt sbin usr
bin docker-entrypoint.sh lib64 proc srv var
boot etc media root sys
dev home mnt run tmp
root@nginx:/#
# Copy a file from the pod back to the local host
[root@k8s-master ~]# ls    # the original file has been deleted
[root@k8s-master ~]# kubectl cp nginx:/anaconda-ks.cfg anaconda-ks.cfg
tar: Removing leading `/' from member names
[root@k8s-master ~]# ls    # check the copied file
anaconda-ks.cfg
1.2.5 Advanced Command Examples
# Generate a YAML template file from a command
[root@k8s-master ~]# kubectl create deployment --image reg.timinglee.org/library/nginx webcluster --dry-run=client -o yaml > webcluster.yml
# Create a resource from the YAML file
[root@k8s-master ~]# vim webcluster.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: webcluster
  name: webcluster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webcluster
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: webcluster
    spec:
      containers:
      - image: reg.timinglee.org/library/nginx
        name: nginx
        resources: {}
status: {}

[root@k8s-master ~]# kubectl apply -f webcluster.yml
deployment.apps/webcluster created
[root@k8s-master ~]# kubectl get deployments.apps
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
webcluster   1/1     1            1           10s
[root@k8s-master ~]# kubectl delete -f webcluster.yml
deployment.apps "webcluster" deleted
# Manage resource labels
[root@k8s-master ~]# kubectl run nginx --image reg.timinglee.org/library/nginx
pod/nginx created
[root@k8s-master ~]# kubectl get pods --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
nginx   1/1     Running   0          20s   run=nginx
[root@k8s-master ~]# kubectl label pods nginx app=lee
pod/nginx labeled
[root@k8s-master ~]# kubectl get pods --show-labels
NAME    READY   STATUS    RESTARTS   AGE     LABELS
nginx   1/1     Running   0          2m12s   app=lee,run=nginx
# Change a label
[root@k8s-master ~]# kubectl label pods nginx app=webcluster --overwrite
pod/nginx labeled
[root@k8s-master ~]# kubectl get pods --show-labels
NAME    READY   STATUS    RESTARTS   AGE     LABELS
nginx   1/1     Running   0          3m41s   app=webcluster,run=nginx
# Delete a label
[root@k8s-master ~]# kubectl label pods nginx app-
pod/nginx unlabeled
[root@k8s-master ~]# kubectl get pods --show-labels
NAME    READY   STATUS    RESTARTS   AGE     LABELS
nginx   1/1     Running   0          4m44s   run=nginx
# Labels are how a controller identifies the pods it owns
[root@k8s-master ~]# kubectl apply -f webcluster.yml
deployment.apps/webcluster created
[root@k8s-master ~]# kubectl label pods webcluster-6695789bbf-rqt5b app=webcluster
pod/webcluster-6695789bbf-rqt5b not labeled
[root@k8s-master ~]# kubectl get pods --show-labels
NAME                          READY   STATUS    RESTARTS   AGE   LABELS
nginx                         1/1     Running   0          10m   run=nginx
webcluster-6695789bbf-rqt5b   1/1     Running   0          2m    app=webcluster,pod-template-hash=6695789bbf
# Delete the label from the pod
[root@k8s-master ~]# kubectl label pods webcluster-6695789bbf-rqt5b app-
pod/webcluster-6695789bbf-rqt5b unlabeled
# The controller starts a new pod to replace it
[root@k8s-master ~]# kubectl get pods --show-labels
NAME                          READY   STATUS    RESTARTS   AGE     LABELS
nginx                         1/1     Running   0          11m     run=nginx
webcluster-6695789bbf-rqt5b   1/1     Running   0          3m16s   pod-template-hash=6695789bbf
webcluster-6695789bbf-v47qs   1/1     Running   0          19s     app=webcluster,pod-template-hash=6695789bbf
2. What Is a Pod
- A Pod is the smallest deployable unit of computing that you can create and manage in Kubernetes.
- A Pod represents a process running in the cluster, and each Pod has a unique IP address.
- A Pod is like a pea pod: it contains one or more containers (typically Docker containers).
- The containers in a Pod share the IPC, Network, and UTS namespaces.
2.1 Creating Standalone Pods (not recommended in production)
Advantages:
High flexibility:
- You can precisely control every configuration parameter of the Pod, including the container image, resource limits, environment variables, commands, and arguments, to meet an application's specific needs.
Convenient for learning and debugging:
- Manually creating Pods is very helpful for learning how Kubernetes works, giving a deep understanding of Pod structure and configuration. When debugging, you can observe and adjust a Pod's settings more directly.
Suitable for special scenarios:
- In special cases, such as one-off tasks, quick proofs of concept, or specific configurations in resource-constrained environments, manually created Pods can be an effective approach.
Disadvantages:
Complex to manage:
- Manually creating and maintaining a large number of Pods quickly becomes tedious and time-consuming, and automation such as scaling and failure recovery is hard to achieve.
Missing advanced features:
- You don't automatically get Kubernetes' advanced features such as automated deployment, rolling updates, and service discovery, which lowers deployment and management efficiency.
Poor maintainability:
- Manually created Pods require manual intervention when updating application versions or changing configuration; this is error-prone and hard to keep consistent. By contrast, declarative configuration or Kubernetes deployment tooling makes maintenance and updates much easier.
# List all pods
[root@k8s-master ~]# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
nginx                         1/1     Running   0          17m
webcluster-6695789bbf-rqt5b   1/1     Running   0          8m52s
webcluster-6695789bbf-v47qs   1/1     Running   0          5m55s
# Create a pod named timinglee
[root@k8s-master ~]# kubectl run timinglee --image reg.timinglee.org/library/nginx
pod/timinglee created
[root@k8s-master ~]# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
timinglee   1/1     Running   0          8s
# Show more detailed information about the pod
[root@k8s-master ~]# kubectl get pods timinglee -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
timinglee   1/1     Running   0          92s   10.244.2.8   k8s-node2   <none>           <none>
2.2 Managing Pods with Controllers (recommended)
High availability and reliability:
- Automatic failure recovery: if a Pod fails or is deleted, the controller automatically creates a new Pod to maintain the desired replica count, keeping the application available and reducing outages caused by single-Pod failures.
- Health checks and self-healing: controllers can be configured with health checks for Pods (such as liveness and readiness probes). If a Pod is unhealthy, the controller takes appropriate action, such as restarting it or deleting and recreating it, to keep the application running normally.
Scalability:
- Easy scaling: the number of Pods can be increased or decreased with a simple command or configuration change to match the workload; for example, scale out quickly during traffic peaks and scale in during quiet periods to save resources.
- Horizontal Pod Autoscaling (HPA): the Pod count can be adjusted automatically based on metrics (such as CPU utilization, memory usage, or application-specific metrics), giving dynamic resource allocation and cost optimization.
Version management and updates:
- Rolling updates: controllers such as Deployment can perform rolling updates that gradually replace old-version Pods with new ones, keeping the application available throughout. The update rate and strategy can be tuned to minimize user impact.
- Rollback: if an update goes wrong, you can easily roll back to the previous stable version, preserving stability and reliability.
Declarative configuration:
- Concise configuration: the desired deployment is described in declarative YAML or JSON files, which are easy to understand, maintain, and version-control, and convenient for team collaboration.
- Desired-state management: you only declare the desired state (replica count, container image, and so on); the controller automatically reconciles the actual state toward it, with no need to manage individual Pod creation and deletion by hand.
Service discovery and load balancing:
- Automatic registration and discovery: a Kubernetes Service automatically discovers the Pods managed by a controller and routes traffic to them, making service discovery and load balancing simple and reliable without manually configuring a load balancer.
- Traffic distribution: requests can be distributed across Pods according to different strategies (round-robin, random, and so on), improving performance and availability.
Consistency across environments:
- Consistent deployment: the same controllers and configuration can be used to deploy the application in different environments (development, testing, production), ensuring consistent behavior, reducing deployment drift and errors, and improving development and operations efficiency.
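The HPA mentioned above is itself a Kubernetes resource. A minimal sketch, assuming a Deployment named `timinglee` exists and the cluster has a metrics source such as metrics-server (the name and the 60% CPU target are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: timinglee-hpa           # hypothetical name
spec:
  scaleTargetRef:               # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: timinglee             # assumed to exist
  minReplicas: 2
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60  # scale out when average CPU exceeds 60%
```

Applied with `kubectl apply -f`, the controller then adjusts `spec.replicas` of the Deployment between 2 and 6 automatically.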
# Create a controller, which runs pods automatically
[root@k8s-master ~]# kubectl create deployment timinglee --image reg.timinglee.org/library/nginx
deployment.apps/timinglee created
[root@k8s-master ~]# kubectl get pods timinglee-55f64d74bf-kddwb
NAME                         READY   STATUS    RESTARTS   AGE
timinglee-55f64d74bf-kddwb   1/1     Running   0          19s
# Scale timinglee out
[root@k8s-master ~]# kubectl scale deployment timinglee --replicas 6
deployment.apps/timinglee scaled
[root@k8s-master ~]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
timinglee-55f64d74bf-6lrph   1/1     Running   0          74s
timinglee-55f64d74bf-jr5l8   1/1     Running   0          74s
timinglee-55f64d74bf-kddwb   1/1     Running   0          3m54s
timinglee-55f64d74bf-wl4c4   1/1     Running   0          74s
timinglee-55f64d74bf-zvdp9   1/1     Running   0          74s
timinglee-55f64d74bf-zvkpq   1/1     Running   0          74s
# Scale timinglee in
[root@k8s-master ~]# kubectl scale deployment timinglee --replicas 2
deployment.apps/timinglee scaled
[root@k8s-master ~]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
timinglee-55f64d74bf-jr5l8   1/1     Running   0          3m49s
timinglee-55f64d74bf-kddwb   1/1     Running   0          6m29s
# Delete the timinglee deployment
[root@k8s-master ~]# kubectl delete deployments.apps timinglee
deployment.apps "timinglee" deleted
2.3 Updating Application Versions
# Create pods via a controller
[root@k8s-master ~]# kubectl create deployment timinglee --image reg.timinglee.org/library/myapp:v1 --replicas 2
deployment.apps/timinglee created
[root@k8s-master ~]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
timinglee-56f99b7f4b-sl68k   1/1     Running   0          6s
timinglee-56f99b7f4b-wpdsx   1/1     Running   0          6s
# Expose the port
[root@k8s-master ~]# kubectl expose deployment timinglee --port 80 --target-port 80
service/timinglee exposed
[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   25d
timinglee    ClusterIP   10.106.32.183   <none>        80/TCP    71
# Access the service
[root@k8s-master ~]# curl 10.106.32.183
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.106.32.183
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.106.32.183
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#
# Show the revision history
[root@k8s-master ~]# kubectl rollout history deployment timinglee
deployment.apps/timinglee
REVISION  CHANGE-CAUSE
1         <none>
# Update the controller's image version
[root@k8s-master ~]# kubectl set image deployments/timinglee myapp=reg.timinglee.org/library/myapp:v2
deployment.apps/timinglee image updated
# Show the revision history
[root@k8s-master ~]# kubectl rollout history deployment timinglee
deployment.apps/timinglee
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
# Test the served content
[root@k8s-master ~]# curl 10.106.32.183
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.106.32.183
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.106.32.183
# Roll back to a previous revision
[root@k8s-master ~]# kubectl rollout undo deployment timinglee --to-revision 1
deployment.apps/timinglee rolled back
[root@k8s-master ~]# curl 10.106.32.183
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
# Show the revision history
[root@k8s-master ~]# kubectl rollout history deployment timinglee
deployment.apps/timinglee
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
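The CHANGE-CAUSE column in the rollout history stays `<none>` unless a change cause is recorded. One way to populate it is the `kubernetes.io/change-cause` annotation on the Deployment; a sketch (the annotation value is an illustrative assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: timinglee
  annotations:
    kubernetes.io/change-cause: "upgrade myapp to v2"   # shown in rollout history
spec:
  replicas: 2
  selector:
    matchLabels:
      app: timinglee
  template:
    metadata:
      labels:
        app: timinglee
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v2
        name: myapp
```

After applying this, `kubectl rollout history deployment timinglee` shows the annotation text instead of `<none>` for that revision.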
2.4 Deploying Applications with YAML Files
2.4.1 Advantages of YAML-based deployment
Declarative configuration:
- Clear expression of desired state: the deployment requirements of the application, including replica counts, container configuration, and network settings, are described declaratively. This makes the configuration easy to understand and maintain, and makes the intended state of the application easy to inspect.
- Repeatability and version control: configuration files can be version-controlled to ensure consistent deployments across environments. You can easily roll back to earlier versions or reuse the same configuration in other environments.
- Team collaboration: configuration files are easy to share among team members, who can review and modify them, improving the reliability and stability of deployments.
Flexibility and extensibility:
- Rich configuration options: YAML files allow detailed configuration of all kinds of Kubernetes resources, such as Deployment, Service, ConfigMap, and Secret, enabling deep customization for an application's needs.
- Composition and extension: the configuration for multiple resources can be combined in one or more YAML files to express complex deployment architectures, and new resources can easily be added or existing ones modified as requirements evolve.
Tool integration:
- CI/CD integration: YAML configuration files integrate with continuous integration and continuous deployment (CI/CD) tooling for automated deployments; for example, a code commit can trigger a pipeline that uses these files to deploy the application to different environments.
- Command-line support: kubectl has excellent support for YAML configuration files, making it easy to apply, update, and delete configurations; other tools can validate and analyze the files to ensure correctness and security.
2.4.2 Resource Manifest Parameters
2.4.3 Getting Help on Resources
[root@k8s-master ~]# kubectl explain pod.spec.containers
2.4.4 Worked Examples
2.4.4.1 Example 1: Running a simple single-container pod
# Generate a YAML template with a command
[root@k8s-master ~]# kubectl run timinglee --image reg.timinglee.org/library/myapp:v1 --dry-run=client -o yaml > pod1.yml
[root@k8s-master ~]# cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: timinglee                               # pod label
  name: timinglee                                # pod name
spec:
  containers:
  - image: reg.timinglee.org/library/myapp:v1    # pod image
    name: timinglee                              # container name
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
2.4.4.2 Example 2: Running a multi-container pod
Note:
When multiple containers run in one pod, they share resources, which also means they can interfere with each other when they use the same resource, for example a port.
# An example of port interference:
[root@k8s-master ~]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: timinglee
spec:
  containers:
  - image: reg.timinglee.org/library/myapp:v1
    name: timinglee1
  - image: reg.timinglee.org/library/myapp:v1
    name: timinglee2
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/timinglee created
[root@k8s-master ~]# kubectl get pods
NAME        READY   STATUS   RESTARTS     AGE
timinglee   1/2     Error    1 (6s ago)   10s
# Check the logs
[root@k8s-master ~]# kubectl logs timinglee timinglee2
2024/10/09 09:21:27 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2024/10/09 09:21:27 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2024/10/09 09:21:27 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2024/10/09 09:21:27 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2024/10/09 09:21:27 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2024/10/09 09:21:27 [emerg] 1#1: still could not bind()
nginx: [emerg] still could not bind()
Note:
When running multiple containers in one pod, make sure the containers cannot interfere with one another.
[root@k8s-master ~]# cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: timinglee
spec:
  containers:
  - image: reg.timinglee.org/library/myapp:v1
    name: timinglee1
  - image: reg.timinglee.org/library/nginx:latest
    name: web
    command: ["/bin/sh","-c","sleep 10000000"]
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/timinglee created
[root@k8s-master ~]# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
timinglee   2/2     Running   0          8s
2.4.4.3 Example 3: Understanding pod-internal networking
[root@k8s-master ~]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
  - image: reg.timinglee.org/library/myapp:v1
    name: myapp1
  - image: reg.timinglee.org/library/busyboxplus:latest
    name: busyboxplus
    command: ["/bin/sh","-c","sleep 10000000"]
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/test created
[root@k8s-master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
test   2/2     Running   0          9s
# The containers share the pod's network namespace, so busyboxplus reaches myapp1 on localhost
[root@k8s-master ~]# kubectl exec test -c busyboxplus -- curl -s localhost
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
2.4.4.4 Example 4: Port mapping
[root@k8s-master ~]# cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
  - image: reg.timinglee.org/library/myapp:v1
    name: myapp1
    ports:
    - name: http
      containerPort: 80
      hostPort: 80
      protocol: TCP
# Test
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/test created
[root@k8s-master ~]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
test   1/1     Running   0          12s   10.244.1.24   k8s-node   <none>           <none>
[root@k8s-master ~]# curl k8s-node
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
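hostPort, as used above, binds the pod to port 80 of whichever node the pod lands on. For exposure that does not depend on pod placement, a NodePort Service is the more common choice; a minimal sketch (the Service name and nodePort value are illustrative assumptions, and the selector must match the pod's labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport          # hypothetical name
spec:
  type: NodePort
  selector:
    run: timinglee              # matches the pod label used in this example
  ports:
  - port: 80                    # Service port inside the cluster
    targetPort: 80              # container port
    nodePort: 30080             # exposed on every node (default range 30000-32767)
```

With this in place the application answers on `<any-node-ip>:30080`, regardless of which node runs the pod.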
2.4.4.5 Example 5: Setting environment variables
[root@k8s-master ~]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
  - image: reg.timinglee.org/library/busybox:latest
    name: busybox
    command: ["/bin/sh","-c","echo $NAME;sleep 30000000"]
    env:
    - name: NAME
      value: timinglee
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/test created
[root@k8s-master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
test   1/1     Running   0          11s
[root@k8s-master ~]# kubectl logs pods/test busybox
timinglee
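Beyond literal `value` entries, environment variables can also be pulled from a ConfigMap, which keeps configuration out of the pod spec. A sketch assuming a hypothetical ConfigMap named `test-config`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config             # hypothetical ConfigMap
data:
  NAME: timinglee
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: reg.timinglee.org/library/busybox:latest
    name: busybox
    command: ["/bin/sh","-c","echo $NAME;sleep 30000000"]
    envFrom:
    - configMapRef:
        name: test-config       # every key in the ConfigMap becomes an env variable
```

Changing the ConfigMap then updates the configuration source without editing the pod definition (a restart is still needed for the container to see new values).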
2.4.4.6 Example 6: Resource limits
QoS (Quality of Service) classes:
Resource settings | QoS class |
No resource limits set | BestEffort |
Limits set, with limits and requests differing | Burstable |
Limits set, with limits and requests equal | Guaranteed |
[root@k8s-master ~]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
  - image: reg.timinglee.org/library/myapp:v1
    name: myapp
    resources:
      limits:          # the maximum resources the pod may use
        cpu: 500m
        memory: 100M
      requests:        # the resources the pod expects to use; must not exceed limits
        cpu: 500m
        memory: 100M
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/test created
[root@k8s-master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
test   1/1     Running   0          106s
[root@k8s-master ~]# kubectl describe pods test
    Limits:
      cpu:     500m
      memory:  100M
    Requests:
      cpu:     500m
      memory:  100M
QoS Class:  Guaranteed
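For comparison with the Guaranteed pod above, a Burstable pod only needs requests lower than limits; a sketch with illustrative values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-burstable          # hypothetical name
spec:
  containers:
  - image: reg.timinglee.org/library/myapp:v1
    name: myapp
    resources:
      limits:                   # ceiling the container may burst up to
        cpu: 500m
        memory: 200M
      requests:                 # lower than limits, so the QoS class becomes Burstable
        cpu: 250m
        memory: 100M
```

Under memory pressure the kubelet evicts BestEffort pods first, then Burstable pods exceeding their requests, and Guaranteed pods last, which is why the class matters in production.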
2.4.4.7 Example 7: Container start-up management
[root@k8s-master ~]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  restartPolicy: Always
  containers:
  - image: reg.timinglee.org/library/myapp:v1
    name: myapp
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/test created
[root@k8s-master ~]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
test   1/1     Running   0          14s   10.244.1.27   k8s-node   <none>           <none>
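restartPolicy: Always (the default) restarts the container whenever it exits. For one-shot workloads, OnFailure or Never is usually more appropriate; a sketch (the pod name and command are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oneshot                 # hypothetical name
spec:
  restartPolicy: OnFailure      # restart only on a non-zero exit code
  containers:
  - image: reg.timinglee.org/library/busybox:latest
    name: task
    command: ["/bin/sh","-c","echo done"]   # exits 0, so the pod ends in Completed
```

With Always, the same container would be restarted endlessly even after a successful exit, which is rarely what a batch task wants.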
2.4.4.8 Example 8: Selecting the node to run on
[root@k8s-master ~]# cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-node
  restartPolicy: Always
  containers:
  - image: reg.timinglee.org/library/myapp:v1
    name: myapp
[root@k8s-master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
test   1/1     Running   0          7s
[root@k8s-master ~]# kubectl get pods test -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
test   1/1     Running   0          21s   10.244.1.28   k8s-node   <none>           <none>
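nodeSelector is not limited to built-in labels such as kubernetes.io/hostname; any node label works. A sketch assuming a node has first been labeled with `kubectl label nodes k8s-node disktype=ssd` (the label key/value and pod name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-ssd                # hypothetical name
spec:
  nodeSelector:
    disktype: ssd               # only nodes carrying this label are eligible
  containers:
  - image: reg.timinglee.org/library/myapp:v1
    name: myapp
```

If no node carries the label, the pod stays Pending until a matching node appears.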
2.4.4.9 Example 9: Sharing the host network
[root@k8s-master ~]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  hostNetwork: true
  restartPolicy: Always
  containers:
  - image: reg.timinglee.org/library/busybox:latest
    name: busybox
    command: ["/bin/sh","-c","sleep 1000000"]
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/test created
[root@k8s-master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
test   1/1     Running   0          6s
# With hostNetwork: true the container sees the node's network interfaces
[root@k8s-master ~]# kubectl exec -it pods/test -c busybox -- /bin/sh
/ # ifconfig
cni0      Link encap:Ethernet  HWaddr DE:DD:C7:E6:69:61
          inet addr:10.244.2.1  Bcast:10.244.2.255  Mask:255.255.255.0
          inet6 addr: fe80::dcdd:c7ff:fee6:6961/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:276 errors:0 dropped:0 overruns:0 frame:0
          TX packets:114 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:18807 (18.3 KiB)  TX bytes:11736 (11.4 KiB)

docker0   Link encap:Ethernet  HWaddr 02:42:16:DD:DD:10
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 00:0C:29:25:3B:0B
          inet addr:192.168.10.20  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::7ae4:7301:9341:544a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:107616 errors:0 dropped:0 overruns:0 frame:0
          TX packets:38866 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:109264910 (104.2 MiB)  TX bytes:4347521 (4.1 MiB)

flannel.1 Link encap:Ethernet  HWaddr 7E:C0:50:F7:CE:22
          inet addr:10.244.2.0  Bcast:0.0.0.0  Mask:255.255.255.255
          inet6 addr: fe80::7cc0:50ff:fef7:ce22/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:49 errors:0 dropped:0 overruns:0 frame:0
          TX packets:35 errors:0 dropped:56 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3143 (3.0 KiB)  TX bytes:3983 (3.8 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:1033 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1033 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:96395 (94.1 KiB)  TX bytes:96395 (94.1 KiB)

/ # exit
3. The Pod Lifecycle
3.1 Init Containers
- A Pod can contain multiple containers in which the application runs, and it can also have one or more Init containers that start before the application containers.
- Init containers are very much like regular containers, except for two points:
  - They always run to completion.
  - Init containers do not support readiness probes, because they must complete before the Pod can be ready; each Init container must succeed before the next one can run.
- If a Pod's Init container fails, Kubernetes restarts the Pod repeatedly until the Init container succeeds. However, if the Pod's restartPolicy is Never, the Pod is not restarted.
3.1.1 Functions of Init containers
- Init containers can contain utilities or custom setup code that is not present in the application image.
- Init containers can run these tools safely, so that the tools do not reduce the security of the application image.
- The builder and the deployer of an application image can work independently, with no need to jointly build a single combined image.
- Init containers can run with a different filesystem view than the application containers in the same Pod; for example, they can be granted access to Secrets that the application containers cannot access.
- Because Init containers must complete before any application container starts, they provide a mechanism to block or delay application start-up until a set of preconditions is met. Once the preconditions are satisfied, all application containers in the Pod start in parallel.
3.1.2 Init container example
[root@k8s-master ~]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: initpod
  name: initpod
spec:
  containers:
  - image: reg.timinglee.org/library/myapp:v1
    name: myapp
  initContainers:
  - name: init-myservice
    image: reg.timinglee.org/library/busybox:latest
    command: ["sh","-c","until test -e /testfile;do echo wating for myservice;sleep 2;done"]
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/initpod created
[root@k8s-master ~]# kubectl get pods
NAME      READY   STATUS     RESTARTS   AGE
initpod   0/1     Init:0/1   0          5s
[root@k8s-master ~]# kubectl logs pods/initpod init-myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
# Create the file the init container is waiting for; the pod then starts
[root@k8s-master ~]# kubectl exec pods/initpod -c init-myservice -- /bin/sh -c "touch /testfile"
[root@k8s-master ~]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
initpod   1/1     Running   0          2m49s
3.2 Probes
A probe is a periodic diagnostic performed by the kubelet on a container:
- ExecAction: execute a specified command inside the container. The diagnostic succeeds if the command exits with status code 0.
- TCPSocketAction: perform a TCP check against the container's IP address on a specified port. The diagnostic succeeds if the port is open.
- HTTPGetAction: perform an HTTP GET request against the container's IP address on a specified port and path. The diagnostic succeeds if the response status code is greater than or equal to 200 and less than 400.
Each probe yields one of three results:
- Success: the container passed the diagnostic.
- Failure: the container failed the diagnostic.
- Unknown: the diagnostic itself failed, so no action is taken.
The kubelet can optionally run and react to three kinds of probes on a container:
- livenessProbe: indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is then subject to its restart policy. If the container does not provide a liveness probe, the default state is Success.
- readinessProbe: indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The readiness state before the initial delay defaults to Failure. If the container does not provide a readiness probe, the default state is Success.
- startupProbe: indicates whether the application in the container has started. If a startup probe is provided, all other probes are disabled until it succeeds. If the startup probe fails, the kubelet kills the container, and the container is subject to its restart policy. If the container does not provide a startup probe, the default state is Success.
Difference between ReadinessProbe and LivenessProbe:
- When a ReadinessProbe fails, the Pod's IP:Port is removed from the corresponding Endpoint list.
- When a LivenessProbe fails, the container is killed and the action taken is determined by the Pod's restart policy.
Difference between StartupProbe and ReadinessProbe/LivenessProbe:
- If all three probes are present, the StartupProbe runs first; the other two are temporarily disabled until the pod satisfies the StartupProbe's condition, at which point they start. If the condition is never met, the container is restarted according to its policy.
- The other two probes keep probing on their configured schedule after the container starts, until the container terminates; the StartupProbe only needs to succeed once after container start and does not probe again afterwards.
3.2.1 Probe examples
3.2.1.1 Liveness probe example:
[root@k8s-master ~]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: liveness
  name: liveness
spec:
  containers:
  - image: reg.timinglee.org/library/myapp:v1
    name: myapp
    livenessProbe:
      tcpSocket:                # check whether the port is open
        port: 8080
      initialDelaySeconds: 3    # seconds to wait after container start before probing; default 0
      periodSeconds: 1          # interval between probes; default 10s
      timeoutSeconds: 1         # how long to wait for a probe response; default 1s
# Test:
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/liveness created
[root@k8s-master ~]# kubectl get pods
NAME       READY   STATUS    RESTARTS     AGE
liveness   1/1     Running   1 (2s ago)   7s
[root@k8s-master ~]# kubectl describe pods
Name: liveness
Namespace: default
Priority: 0
Service Account: default
Node: k8s-node2/192.168.10.20
Start Time: Wed, 09 Oct 2024 18:48:58 +0800
Labels: name=liveness
Annotations: <none>
Status: Running
IP: 10.244.2.26
IPs:
IP: 10.244.2.26
Containers:
myapp:
Container ID: docker://127309a1c4f1ff1e3d1d9012355d27a57ce60807da406208dc9fea92ec3041d2
Image: reg.timinglee.org/library/myapp:v1
Image ID: docker-pullable://reg.timinglee.org/library/myapp@sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 09 Oct 2024 18:49:26 +0800
Finished: Wed, 09 Oct 2024 18:49:31 +0800
Ready: False
Restart Count: 3
Liveness: tcp-socket :8080 delay=3s timeout=1s period=1s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4k7nw (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-4k7nw:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50s default-scheduler Successfully assigned default/liveness to k8s-node2
Normal Started 40s (x3 over 50s) kubelet Started container myapp
Warning Unhealthy 35s (x9 over 47s) kubelet Liveness probe failed: dial tcp 10.244.2.26:8080: connect: connection refused
Normal Killing 35s (x3 over 45s) kubelet Container myapp failed liveness probe, will be restarted
Warning BackOff 34s (x2 over 35s) kubelet Back-off restarting failed container myapp in pod liveness_default(44d7e71c-e4f1-4145-9eb0-594a42f60613)
Normal Pulled 22s (x4 over 50s) kubelet Container image "reg.timinglee.org/library/myapp:v1" already present on machine
Normal Created 22s (x4 over 50s) kubelet Created container myapp
[root@k8s-master ~]#
3.2.1.2 Readiness probe example:
[root@k8s-master ~]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: readiness
  name: readiness
spec:
  containers:
  - image: reg.timinglee.org/library/myapp:v1
    name: myapp
    readinessProbe:
      httpGet:
        path: /test.html
        port: 80
      initialDelaySeconds: 1
      periodSeconds: 3
      timeoutSeconds: 1
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/readiness created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness 0/1 Running 0 8s
[root@k8s-master ~]# kubectl expose pod readiness --port 80 --target-port 80
service/readiness exposed
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness   0/1     Running   0          40s
[root@k8s-master ~]# kubectl describe pods readiness
Warning Unhealthy 27s (x22 over 86s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 404
[root@k8s-master ~]# kubectl describe service readiness
Name: readiness
Namespace: default
Labels: name=readiness
Annotations: <none>
Selector: name=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.111.85.242
IPs: 10.111.85.242
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints:                        #no endpoints: the readiness probe has not succeeded, so the port is not exposed
Session Affinity: None
Events: <none>[root@k8s-master ~]# kubectl exec pods/readiness -c myapp -- /bin/sh -c "echo test > /usr/share/nginx/html/test.html"
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness 1/1 Running 0 5m13s
[root@k8s-master ~]# kubectl describe service readiness
Name: readiness
Namespace: default
Labels: name=readiness
Annotations: <none>
Selector: name=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.111.85.242
IPs: 10.111.85.242
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints:         10.244.1.30:80   #probe condition met, port exposed
Session Affinity: None
Events: <none>