97. Prometheus YAML files

Command review
[root@master01 ~]# kubectl explain ingress
KIND:     Ingress
VERSION:  networking.k8s.io/v1

DESCRIPTION:
     Ingress is a collection of rules that allow inbound connections to reach
     the endpoints defined by a backend. An Ingress can be configured to give
     services externally-reachable urls, load balance traffic, terminate SSL,
     offer name based virtual hosting etc.

FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind	<string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata	<Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec	<Object>
     Spec is the desired state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

   status	<Object>
     Status is the current state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

[root@master01 ~]# kubectl describe ingress
Name:             nginx-daemon-ingress
Namespace:        default
Address:          10.96.183.19
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  tls.secret terminates www.xy102.com
Rules:
  Host           Path  Backends
  ----           ----  --------
  www.xy102.com  /     nginx-daemon-svc:80 (<none>)
Annotations:     <none>
Events:          <none>

1. Prometheus

node_exporter

The per-node metrics collector.

DaemonSet --------> guarantees one collector runs on every node

prometheus -------> the main monitoring server

grafana -------> graphical dashboards

alertmanager ----> the alerting module

Installing the node_exporter component

[root@master01 opt]# mkdir prometheus
[root@master01 opt]# cd prometheus/
[root@master01 prometheus]# vim node_exporter.yaml
[root@master01 prometheus]# 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitor-sa
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
      name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter
        ports:
        - containerPort: 9100
        resources:
          limits:
            cpu: "0.5"
        securityContext:
          privileged: true
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '"^/(sys|proc|dev|host|etc)($|/)"'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: dev
        hostPath:
          path: /dev
      - name: sys
        hostPath:
          path: /sys
      - name: rootfs
        hostPath:
          path: /

[root@master01 ~]# cd /opt/
[root@master01 opt]# kubectl create ns monitor-sa
namespace/monitor-sa created
[root@master01 opt]# ls
cni                                 ingress
cni_bak                             jenkins-2.396-1.1.noarch.rpm
cni-plugins-linux-amd64-v0.8.6.tgz  k8s-yaml
configmap                           kube-flannel.yml
containerd                          nginx-de.yaml
data1                               secret
flannel.tar                         test
ingree.contro-0.30.0.tar            update-kubeadm-cert.sh
ingree.contro-0.30.0.tar.gz
[root@master01 opt]# mkdir prometheus
[root@master01 opt]# cd prometheus/
[root@master01 prometheus]# vim node_exporter.yaml
[root@master01 prometheus]# kubectl apply -f node_exporter.yaml 
daemonset.apps/node-exporter created
[root@master01 prometheus]# kubectl get pod -n monitor-sa -o wide
NAME                  READY   STATUS             RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
node-exporter-7mfnf   0/1     ErrImagePull       0          2m29s   192.168.168.81   master01   <none>           <none>
node-exporter-c6hq2   0/1     ImagePullBackOff   0          13m     192.168.168.82   node01     <none>           <none>
node-exporter-jgz96   0/1     ImagePullBackOff   0          13m     192.168.168.83   node02     <none>           <none>

## image pull failed: the image cannot be pulled from the registry
Import the image
[root@master01 prometheus]# rz -E
rz waiting to receive.
[root@master01 prometheus]# ls
node_exporter.yaml  node.tar
[root@master01 prometheus]# docker load -i node.tar    ## load the image on every node
[root@node01 opt]# mkdir prometheus
[root@node01 opt]# rz -E
rz waiting to receive.
[root@node01 opt]# docker load -i node.tar
1e604deea57d: Loading layer  1.458MB/1.458MB
6b83872188a9: Loading layer  2.455MB/2.455MB
4f3f7dd00054: Loading layer   20.5MB/20.5MB
Loaded image: prom/node-exporter:v1
[root@node02 ~]# cd /opt/
[root@node02 opt]# mkdir prometheus
[root@node02 opt]# cd prometheus/
[root@node02 prometheus]# rz -E
rz waiting to receive.
[root@node02 prometheus]# docker load -i node.tar
1e604deea57d: Loading layer  1.458MB/1.458MB
6b83872188a9: Loading layer  2.455MB/2.455MB
4f3f7dd00054: Loading layer   20.5MB/20.5MB
Loaded image: prom/node-exporter:v1

[root@master01 prometheus]# vim node_exporter.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitor-sa
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
      name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1
        ports:
        - containerPort: 9100
        resources:
          limits:
            cpu: "0.5"
        securityContext:
          privileged: true
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '"^/(sys|proc|dev|host|etc)($|/)"'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: dev
        hostPath:
          path: /dev
      - name: sys
        hostPath:
          path: /sys
      - name: rootfs
        hostPath:
          path: /
[root@master01 prometheus]# kubectl get pod -n monitor-sa -o wide
NAME                  READY   STATUS             RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
node-exporter-7mfnf   0/1     ErrImagePull       0          2m29s   192.168.168.81   master01   <none>           <none>
node-exporter-c6hq2   0/1     ImagePullBackOff   0          13m     192.168.168.82   node01     <none>           <none>
node-exporter-jgz96   0/1     ImagePullBackOff   0          13m     192.168.168.83   node02     <none>           <none>

## the image has now been loaded, so restart the pod
[root@master01 prometheus]# kubectl delete pod node-exporter-7mfnf -n monitor-sa 
pod "node-exporter-7mfnf" deleted
[root@master01 prometheus]# kubectl get pod -n monitor-sa -o wide
NAME                  READY   STATUS             RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
node-exporter-76nkz   1/1     Running            0          26s   192.168.168.81   master01   <none>           <none>
node-exporter-c6hq2   0/1     ImagePullBackOff   0          14m   192.168.168.82   node01     <none>           <none>
node-exporter-jgz96   0/1     ImagePullBackOff   0          13m   192.168.168.83   node02     <none>           <none>

## the image has now been loaded, so restart the pod
[root@master01 prometheus]# kubectl delete pod node-exporter-c6hq2 -n monitor-sa 
pod "node-exporter-c6hq2" deleted
[root@master01 prometheus]# kubectl get pod -n monitor-sa -o wide
NAME                  READY   STATUS              RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
node-exporter-487lb   1/1     Running             0          55s   192.168.168.82   node01     <none>           <none>
node-exporter-76nkz   1/1     Running             0          98s   192.168.168.81   master01   <none>           <none>
node-exporter-jj92l   0/1     ContainerCreating   0          10s   192.168.168.83   node02     <none>           <none>

## the image has now been loaded, so restart the pod
[root@master01 prometheus]# kubectl delete pod node-exporter-jgz96 -n monitor-sa 
pod "node-exporter-jgz96" deleted
[root@master01 prometheus]# kubectl get pod -n monitor-sa -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
node-exporter-487lb   1/1     Running   0          12m   192.168.168.82   node01     <none>           <none>
node-exporter-76nkz   1/1     Running   0          13m   192.168.168.81   master01   <none>           <none>
node-exporter-jj92l   1/1     Running   0          12m   192.168.168.83   node02     <none>           <none>

Each node now serves its metrics at port 9100, e.g. http://192.168.168.81:9100/metrics

Create a ServiceAccount for Prometheus and bind it to the cluster-admin role:

[root@master01 prometheus]# kubectl create serviceaccount monitor -n monitor-sa
serviceaccount/monitor created
[root@master01 prometheus]# kubectl create clusterrolebinding monitor-clusterrolebinding -n monitor-sa --clusterrole=cluster-admin --serviceaccount=monitor-sa:monitor
clusterrolebinding.rbac.authorization.k8s.io/monitor-clusterrolebinding created
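The /metrics endpoint returns the Prometheus plain-text exposition format: one `name{labels} value` sample per line, with `# HELP`/`# TYPE` comments in between. As a rough illustration of what Prometheus scrapes (the two-sample payload below is made up, not real node_exporter output):

```python
def parse_metrics(text):
    # Parse Prometheus text exposition format into {series: value}.
    # Minimal sketch: skips HELP/TYPE comment lines and uses the full
    # series string (metric name plus label set) as the dictionary key.
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        series, _, value = line.rpartition(" ")
        try:
            metrics[series] = float(value)
        except ValueError:
            continue
    return metrics

# Hypothetical payload in the same shape as node_exporter output:
sample = """\
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.27
node_cpu_seconds_total{cpu="0",mode="idle"} 12345.6
"""

print(parse_metrics(sample)["node_load1"])
```

Real scrapers (and Prometheus itself) additionally parse label sets, timestamps, and escaping; this only shows the overall line format.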


Configure the alerting rules

[root@master01 prometheus]# rz -E
rz waiting to receive.
[root@master01 prometheus]# ls
node_exporter.yaml  node.tar  prometheus-alertmanager-cfg.yaml
[root@master01 prometheus]# vim prometheus-alertmanager-cfg.yaml
120       - targets: ['192.168.168.81:10251']
121     - job_name: 'kubernetes-controller-manager'
122       scrape_interval: 5s
123       static_configs:
124       - targets: ['192.168.168.81:10252']
125     - job_name: 'kubernetes-kube-proxy'
126       scrape_interval: 5s
127       static_configs:
128       - targets: ['192.168.168.81:10249','192.168.168.82:10249','192.168.168.83:10249']
137       - targets: ['192.168.168.81:2379']
221       - alert: kube-state-metrics CPU usage above 90%
222         expr: rate(process_cpu_seconds_total{k8s_app=~"kube-state-metrics"}[1m]) * 100 > 90

An existing annotation in the file, for reference:

        description: "{{ $labels.mountpoint }} partition usage is above 80% (currently {{ $value }}%)"

The custom rule added for this exercise:

      - alert: HighPodCpuUsage     # title of the alert mail
        expr: sum(rate(container_cpu_usage_seconds_total{namespace="default", pod=~".+"}[5m])) by (pod) > 0.9     # the metric expression to evaluate
        for: 5m     # alert only after CPU usage has stayed above 90% for 5 minutes
        labels:
          severity: warning
        annotations:     # body of the alert
          description: "CPU usage of {{ $labels.pod }} is above 90%."
          summary: "Pod {{ $labels.pod }} has high CPU usage"

[root@master01 prometheus]# kubectl apply -f prometheus-alertmanager-cfg.yaml 
configmap/prometheus-config created
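PromQL's rate() in the expressions above is the per-second increase of a counter over the lookback window. A rough Python sketch of the idea behind `rate(process_cpu_seconds_total[1m]) * 100 > 90` (the sample values are made up; Prometheus itself also extrapolates to the window edges and handles counter resets, which this ignores):

```python
def rate_per_second(samples):
    # samples: list of (timestamp_seconds, counter_value) pairs.
    # Approximates PromQL rate(): total increase over the window
    # divided by the window length.
    (t0, v0) = samples[0]
    (t1, v1) = samples[-1]
    return (v1 - v0) / (t1 - t0)

# Made-up samples: process_cpu_seconds_total grew from 100.0 to 157.0
# over a 60-second window, i.e. the process used 95% of one core.
cpu_percent = rate_per_second([(0, 100.0), (60, 157.0)]) * 100

print(cpu_percent > 90)   # the alert condition `... * 100 > 90` fires
```

The `for: 5m` clause then requires this condition to evaluate true continuously for five minutes before the alert actually fires.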


Mail settings for alerting

The Prometheus Service

The Alertmanager Service

Deploying Prometheus itself (NodePort)

Creating the Secret resource

The Grafana YAML file

[root@master01 prometheus]# vim alter-mail.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: monitor-sa
data:
  alertmanager.yml: |-
    global:
      resolve_timeout: 1m
      smtp_smarthost: 'smtp.qq.com:25'
      smtp_from: '1435678619@qq.com'
      smtp_auth_username: '1435678619@qq.com'
      smtp_auth_password: 'yniumbpaclkggfcc'
      smtp_require_tls: false
    route:
      group_by: [alertname]
      group_wait: 10s
      group_interval: 10s
      repeat_interval: 10m
      receiver: default-receiver
    receivers:
    - name: 'default-receiver'
      email_configs:
      - to: '1435678619@qq.com'
        send_resolved: true
[root@master01 prometheus]# kubectl apply -f alter-mail.yaml 
configmap/alertmanager created

## the Prometheus Service
[root@master01 prometheus]# vim prometheus-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    protocol: TCP
  selector:
    app: prometheus
    component: server

## the Alertmanager Service
[root@master01 prometheus]# vim prometheus-alter.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: prometheus
    kubernetes.io/cluster-service: 'true'
  name: alertmanager
  namespace: monitor-sa
spec:
  ports:
  - name: alertmanager
    nodePort: 30066
    port: 9093
    protocol: TCP
    targetPort: 9093
  selector:
    app: prometheus
  sessionAffinity: None
  type: NodePort

## deploy Prometheus itself, exposed via NodePort
[root@master01 prometheus]# vim prometheus-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      component: server
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      serviceAccountName: monitor
      initContainers:
      - name: init-chmod
        image: busybox:latest
        command: ['sh','-c','chmod -R 777 /prometheus;chmod -R 777 /etc']
        volumeMounts:
        - mountPath: /prometheus
          name: prometheus-storage-volume
        - mountPath: /etc/localtime
          name: timezone
      containers:
      - name: prometheus
        image: prom/prometheus:v2.45.0
        command:
        - prometheus
        - --config.file=/etc/prometheus/prometheus.yml
        - --storage.tsdb.path=/prometheus
        - --storage.tsdb.retention=720h
        - --web.enable-lifecycle
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: prometheus-config
          mountPath: /etc/prometheus/
        - mountPath: /prometheus/
          name: prometheus-storage-volume
        - name: timezone
          mountPath: /etc/localtime
        - name: k8s-certs
          mountPath: /var/run/secrets/kubernetes.io/k8s-certs/etcd/
      - name: alertmanager
        image: prom/alertmanager:v0.20.0
        args:
        - "--config.file=/etc/alertmanager/alertmanager.yml"
        - "--log.level=debug"
        ports:
        - containerPort: 9093
          protocol: TCP
          name: alertmanager
        volumeMounts:
        - name: alertmanager-config
          mountPath: /etc/alertmanager
        - name: alertmanager-storage
          mountPath: /alertmanager
        - name: localtime
          mountPath: /etc/localtime
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-config
          defaultMode: 0777
      - name: prometheus-storage-volume
        hostPath:
          path: /data
          type: DirectoryOrCreate
      - name: k8s-certs
        secret:
          secretName: etcd-certs
      - name: timezone
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: alertmanager-config
        configMap:
          name: alertmanager
      - name: alertmanager-storage
        hostPath:
          path: /data/alertmanager
          type: DirectoryOrCreate
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
[root@master01 prometheus]# kubectl apply -f prometheus-deploy.yaml 
deployment.apps/prometheus-server created
[root@master01 prometheus]# kubectl apply -f prometheus-svc.yaml 
service/prometheus created
[root@master01 prometheus]# kubectl apply -f prometheus-alter.yaml 
service/alertmanager created

## create the Secret resource holding the etcd certificates
[root@master01 prometheus]# kubectl -n monitor-sa create secret generic etcd-certs --from-file=/etc/kubernetes/pki/etcd/server.key --from-file=/etc/kubernetes/pki/etcd/server.crt --from-file=/etc/kubernetes/pki/etcd/ca.crt
secret/etcd-certs created
[root@master01 prometheus]# kubectl describe pod -n monitor-sa     ## check how Prometheus started
[root@master01 prometheus]# kubectl get pod -n monitor-sa 
NAME                                 READY   STATUS    RESTARTS   AGE
node-exporter-487lb                  1/1     Running   0          3h50m
node-exporter-76nkz                  1/1     Running   0          3h51m
node-exporter-jj92l                  1/1     Running   0          3h50m
prometheus-server-55d866cb44-6n2bf   2/2     Running   0          4m4s

## check the NodePorts in the namespace
[root@master01 prometheus]# kubectl get svc -n monitor-sa 
NAME           TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
alertmanager   NodePort   10.96.54.65   <none>        9093:30066/TCP   5m25s
prometheus     NodePort   10.96.29.5    <none>        9090:30493/TCP   5m40s

## the Grafana YAML file
[root@master01 prometheus]# vim pro-gra.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana
  namespace: kube-system
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs-client-storageclass
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:7.5.11
        securityContext:
          runAsUser: 104
          runAsGroup: 107
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: false
        - mountPath: /var
          name: grafana-storage
        - mountPath: /var/lib/grafana
          name: graf-test
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
      - name: graf-test
        persistentVolumeClaim:
          claimName: grafana
---
apiVersion: v1
kind: Service
metadata:
  labels:
  name: monitoring-grafana
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
[root@master01 prometheus]# kubectl apply -f pro-gra.yaml 
persistentvolumeclaim/grafana created
deployment.apps/monitoring-grafana created
service/monitoring-grafana created
[root@master01 prometheus]# kubectl get svc -n kube-system 
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns             ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   23d
monitoring-grafana   NodePort    10.96.131.109   <none>        80:30901/TCP             39s

Alertmanager UI: http://192.168.168.81:30066/#/alerts

Prometheus UI: http://192.168.168.81:30493/

Grafana UI: http://192.168.168.81:30901/

## fix kube-proxy so Prometheus can scrape it
[root@master01 prometheus]# kubectl edit configmap kube-proxy -n kube-system
......
metricsBindAddress: "0.0.0.0:10249"
......
# kube-proxy's metrics port 10249 listens on 127.0.0.1 by default; change it to listen on the node address
configmap/kube-proxy edited

# restart kube-proxy by deleting its pods
[root@master01 prometheus]# kubectl get pods -n kube-system | grep kube-proxy | awk '{print $1}' | xargs kubectl delete pods -n kube-system
pod "kube-proxy-d5fnf" deleted
pod "kube-proxy-kpvs2" deleted
pod "kube-proxy-nrszf" deleted

In Grafana, add a Prometheus data source using the in-cluster service address:

http://prometheus.monitor-sa.svc:9090

Stress test

[root@master01 prometheus]# vim ylcs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-test
  labels:
    hpa: test
spec:
  replicas: 1
  selector:
    matchLabels:
      hpa: test
  template:
    metadata:
      labels:
        hpa: test
    spec:
      containers:
      - name: centos
        image: centos:7
        command: ["/bin/bash", "-c", "yum install -y stress --nogpgcheck && sleep 3600"]
        volumeMounts:
        - name: yum
          mountPath: /etc/yum.repos.d/
      volumes:
      - name: yum
        hostPath:
          path: /etc/yum.repos.d/
[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS             RESTARTS   AGE
hpa-test-c9b658d84-7pvc8   0/1     CrashLoopBackOff   6          10m
nfs1-76f66b958-68wpl       1/1     Running            1          13d
[root@master01 prometheus]# kubectl logs -f hpa-test-c9b658d84-7pvc8 
Loaded plugins: fastestmirror, ovl
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Determining fastest mirrors
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container error was
14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Unknown error"

 One of the configured repositories failed (Unknown),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=<repoid> ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable <repoid>
        or
            subscription-manager repos --disable=<repoid>

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot find a valid baseurl for repo: base/7/x86_64
[root@master01 prometheus]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
--2024-09-19 14:31:18--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 114.232.93.242, 58.218.92.241, 114.232.93.243, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|114.232.93.242|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: '/etc/yum.repos.d/CentOS-Base.repo'

100%[==================================>] 2,523       --.-K/s    in 0s      

2024-09-19 14:31:18 (106 MB/s) - '/etc/yum.repos.d/CentOS-Base.repo' saved [2523/2523]

Fixing the error

[root@master01 prometheus]# kubectl delete -f ylcs.yaml 
deployment.apps "hpa-test" deleted
[root@master01 prometheus]# kubectl apply -f ylcs.yaml 
deployment.apps/hpa-test created
[root@master01 prometheus]# cd /etc/yum.repos.d/
[root@master01 yum.repos.d]# ls
backup            CentOS-Debuginfo.repo  CentOS-Vault.repo  kubernetes.repo
Centos-7.repo     CentOS-fasttrack.repo  docker-ce.repo     local.repo
CentOS-Base.repo  CentOS-Media.repo      epel.repo
CentOS-CR.repo    CentOS-Sources.repo    epel-testing.repo
[root@master01 yum.repos.d]# rm -rf local.repo 
[root@master01 yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
--2024-09-19 14:38:36--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 114.232.93.240, 58.218.92.243, 114.232.93.241, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|114.232.93.240|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: '/etc/yum.repos.d/CentOS-Base.repo'

100%[==================================>] 2,523       --.-K/s    in 0s      

2024-09-19 14:38:36 (73.3 MB/s) - '/etc/yum.repos.d/CentOS-Base.repo' saved [2523/2523]

[root@master01 yum.repos.d]# cd -
/opt/prometheus
[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
hpa-test-c9b658d84-bs457   0/1     Error     3          50s
nfs1-76f66b958-68wpl       1/1     Running   1          13d
[root@master01 prometheus]# kubectl delete -f ylcs.yaml 
deployment.apps "hpa-test" deleted
[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS        RESTARTS   AGE
hpa-test-c9b658d84-bs457   0/1     Terminating   3          56s
nfs1-76f66b958-68wpl       1/1     Running       1          13d
[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS        RESTARTS   AGE
hpa-test-c9b658d84-bs457   0/1     Terminating   3          57s
nfs1-76f66b958-68wpl       1/1     Running       1          13d
[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS        RESTARTS   AGE
hpa-test-c9b658d84-bs457   0/1     Terminating   3          58s
nfs1-76f66b958-68wpl       1/1     Running       1          13d
[root@master01 prometheus]# kubectl get pod
NAME                   READY   STATUS    RESTARTS   AGE
nfs1-76f66b958-68wpl   1/1     Running   1          13d
[root@master01 prometheus]# kubectl apply -f ylcs.yaml 
deployment.apps/hpa-test created
[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
hpa-test-c9b658d84-h9xvf   1/1     Running   0          1s
nfs1-76f66b958-68wpl       1/1     Running   1          13d
[root@node01 ~]# cd /etc/yum.repos.d/
[root@node01 yum.repos.d]# ls
backup            CentOS-Debuginfo.repo  CentOS-Vault.repo  kubernetes.repo
Centos-7.repo     CentOS-fasttrack.repo  docker-ce.repo     local.repo
CentOS-Base.repo  CentOS-Media.repo      epel.repo
CentOS-CR.repo    CentOS-Sources.repo    epel-testing.repo
[root@node01 yum.repos.d]# rm -rf local.repo 
[root@node02 ~]# cd /etc/yum.repos.d/
[root@node02 yum.repos.d]# ls
backup            CentOS-Debuginfo.repo  CentOS-Vault.repo  kubernetes.repo
Centos-7.repo     CentOS-fasttrack.repo  docker-ce.repo     local.repo
CentOS-Base.repo  CentOS-Media.repo      epel.repo
CentOS-CR.repo    CentOS-Sources.repo    epel-testing.repo
[root@node02 yum.repos.d]# rm -rf local.repo 
[root@node02 yum.repos.d]# ls
[root@node01 yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@node02 yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@master01 prometheus]# kubectl apply -f ylcs.yaml 
deployment.apps/hpa-test created
[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
hpa-test-c9b658d84-cqklr   1/1     Running   0          3s
nfs1-76f66b958-68wpl       1/1     Running   1          13d
[root@master01 prometheus]# kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP             NODE     NOMINATED NODE   READINESS GATES
hpa-test-c9b658d84-cqklr   1/1     Running   0          110s   10.244.2.251   node02   <none>           <none>
nfs1-76f66b958-68wpl       1/1     Running   1          13d    10.244.2.173   node02   <none>           <none>

## watch top on node02 while stressing
[root@node02 yum.repos.d]#


[root@master01 prometheus]# kubectl exec -it hpa-test-c9b658d84-cqklr bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@hpa-test-c9b658d84-cqklr /]# stress -c 4
stress: info: [64] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd
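`stress -c 4` simply dispatches four workers, each spinning a busy loop so that four CPU cores stay pegged; that is what drives the HighPodCpuUsage alert above. A minimal Python equivalent of that behavior (illustrative only; the pod actually uses the stress package installed from yum):

```python
import multiprocessing
import time

def burn(seconds):
    # Busy-loop so one CPU core stays pegged for the given duration.
    deadline = time.time() + seconds
    while time.time() < deadline:
        pass

def spawn_hogs(n, seconds=0.2):
    # Mirror `stress -c N`: start N CPU-hog processes and wait for them.
    procs = [multiprocessing.Process(target=burn, args=(seconds,))
             for _ in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return len(procs)

if __name__ == "__main__":
    print(spawn_hogs(4))
```

With the hogs running, the pod's container_cpu_usage_seconds_total counter climbs steeply, and after the 5-minute `for:` window the alert fires.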


Source: http://www.xdnf.cn/news/144448.html