Offline Deployment of K8s and KubeSphere on a Domestic Server Platform (Including a New Offline Deployment Approach)


"信创:鲲鹏+麒麟,ARM64架构,实现K8s和Kubesphere的离线部署,全新方式助力企业高效运维。"

   

本文将深入探讨如何借助鲲鹏CPU(arm64)和操作系统Kylin V10 SP2/SP3,通过KubeKey制作KubeSphere与Kubernetes的离线安装包,并实践部署KubeSphere 3.3.1与Kubernetes 1.22.12集群。


| Environment | Hostname | IP | CPU | OS |
| --- | --- | --- | --- | --- |
| Offline environment, KubeSphere/k8s master | master-2 | 192.168.10.3 | Kunpeng-920 | Kylin V10 SP2 |
| Offline environment, KubeSphere/k8s master | master-3 | 192.168.10.4 | Kunpeng-920 | Kylin V10 SP2 |
| Internet-connected host for building the offline package | deploy | 192.168.200.7 | Kunpeng-920 | Kylin V10 SP3 |

Software versions used in this walkthrough:

  • Server CPU: Kunpeng-920
  • Operating system: Kylin V10 SP2/SP3 aarch64
  • Docker: 24.0.7
  • Harbor: v2.7.1
  • KubeSphere: v3.3.1
  • Kubernetes: v1.22.12
  • KubeKey: v2.3.1

About this article

This article explains how to build the artifacts for, and perform an offline deployment of, a KubeSphere and Kubernetes cluster on Kylin V10 aarch64 servers. Using KubeKey, the installer developed by the KubeSphere team, we deployed a minimal Kubernetes cluster in high-availability mode plus KubeSphere across three servers.

The main difference between deploying KubeSphere and Kubernetes on ARM versus x86 servers lies in the architecture of the container images every service runs on. The open-source edition of KubeSphere supports ARM out of the box for KubeSphere-Core, which is enough for a minimal KubeSphere installation on top of a complete Kubernetes cluster.

Once pluggable KubeSphere components are enabled, some of them fail to deploy, and you have to manually substitute ARM images provided by the project or third parties, or build ARM images yourself from the official sources. For an out-of-the-box experience and broader support, you would need the enterprise edition of KubeSphere.

1.1 Verify the operating system configuration

Before starting on the tasks below, confirm the relevant OS settings.

  • OS type

```
[root@localhost ~]# cat /etc/os-release
Kylin Linux Advanced Server V10 (Halberd)
ID="kylin"
VERSION_ID="V10"
PRETTY_NAME="Kylin Linux Advanced Server V10 (Halberd)"
ANSI_COLOR="0;31"
```

  • OS kernel

```
[root@node1 ~]# uname -a
Linux 4.19.90-52.22.v2207.ky10.aarch64
```

  • Server CPU information

```
[root@node1 ~]# lscpu
Architecture:                    aarch64
CPU op-mode(s):                  64-bit
Byte Order:                      Little Endian
CPU(s):                          32
On-line CPU(s) list:             0-31
Thread(s) per core:              1
Core(s) per socket:              1
Socket(s):                       32
NUMA node(s):                    2
Vendor ID:                       HiSilicon
Model:                           0
Model name:                      Kunpeng-920
Stepping:                        0x1
BogoMIPS:                        200.00
NUMA node0 CPU(s):               0-15
NUMA node1 CPU(s):               16-31
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm
```

2. Install the K8s dependencies

The offline resource package is built on the internet-connected deploy node. Because Harbor does not officially support ARM, we first install KubeSphere online and later treat the files KubeKey generates as a pseudo-artifact for the offline deployment. KubeSphere is therefore deployed in single-node form on the 192.168.200.7 server.

The deployment below is broken into stages to make assembling the offline installation package easier.

2.1 Deploy docker and docker-compose

For details, see the first part of this article:

天行1st, WeChat public account "编码如写诗": 鲲鹏+欧拉部署KubeSphere3.4

Here we use a pre-built installation package: upload it, extract it, and run the bundled install.sh, as sketched below.
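A minimal sketch of this step; it mirrors the install_docker function in the one-click script shown in section 4.3, whose file listing also gives the package name:

```
# Upload docker-24.0.7-arm.tar.gz to the node, then:
tar zxf docker-24.0.7-arm.tar.gz
cd docker && ./install.sh && cd -
docker version   # verify the daemon responds
```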

2.2 Deploy Harbor

Upload the Harbor package, extract it, and run the bundled install.sh; a sketch, plus the project creation it enables, follows.
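A sketch of this step (it mirrors the install_harbor function in section 4.3's script). Since images are later pushed into a kubesphereio project (section 2.6), that project must exist first; create_project_harbor.sh, listed in section 4.3 but never shown in the original, presumably creates it through Harbor's v2 REST API, roughly like this:

```
# Unpack and install Harbor
tar zxf harbor-arm.tar.gz
cd harbor && ./install.sh && cd -

# Hypothetical sketch of create_project_harbor.sh: pre-create the
# kubesphereio project so the pushes in section 2.6 have somewhere to
# land. URL and credentials follow the cluster config in section 3.3.
url="https://dockerhub.kubekey.local"
curl -u "admin:Harbor12345" -X POST -H "Content-Type: application/json" \
  -k "${url}/api/v2.0/projects" \
  -d '{"project_name": "kubesphereio", "public": true}'
```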



2.3 Download the Kylin K8s dependency packages

```
mkdir -p /root/kubesphere/k8s-init
# Download the dependencies into /root/kubesphere/k8s-init
yum -y install openssl socat conntrack ipset ebtables chrony ipvsadm --downloadonly --downloaddir /root/kubesphere/k8s-init

# The install script
cat install.sh
#!/bin/bash
rpm -ivh *.rpm --force --nodeps

# Pack everything into a tarball for offline use
tar -czvf k8s-init-Kylin_V10-arm.tar.gz ./k8s-init/*
```


2.4 Download the KubeSphere images

Download the ARM images required by KubeSphere 3.3.1:

```
#!/bin/bash
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:latest
docker pull kubesphere/fluent-bit:v2.0.6
```

Downloads from the Aliyun mirror occasionally fail; see the 运维有术 articles for workarounds. Images that cannot be fetched from the Aliyun mirror can usually be pulled locally from hub.docker.com instead.

```
docker pull kubesphere/fluent-bit:v2.0.6 --platform arm64
# The official ks-console:v3.3.1 ARM image does not run on Kylin; according to
# 运维有术 it needs a node14 base image. Building it ourselves on the Kunpeng
# server failed (the taobao npm HTTPS source had expired, and
# https://registry.npmmirror.com also errored), so we gave up on building and
# use this community 3.3.0 image instead, retagged as 3.3.1.
docker pull zl862520682/ks-console:v3.3.0
docker tag zl862520682/ks-console:v3.3.0 dockerhub.kubekey.local/kubesphereio/ks-console:v3.3.1

## mc and minio also need to be re-pulled and retagged
docker pull minio/minio:RELEASE.2020-11-25T22-36-25Z-arm64
docker tag minio/minio:RELEASE.2020-11-25T22-36-25Z-arm64 dockerhub.kubekey.local/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
```

2.5 Retag the images

Retag the images for the private registry:

```
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.3 dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.3 dockerhub.kubekey.local/kubesphereio/cni:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.3 dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.3 dockerhub.kubekey.local/kubesphereio/node:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.1 dockerhub.kubekey.local/kubesphereio/ks-console:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14 dockerhub.kubekey.local/kubesphereio/alpine:3.14
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20 dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.22.20
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.1 dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.1 dockerhub.kubekey.local/kubesphereio/ks-installer:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.1 dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.1 dockerhub.kubekey.local/kubesphereio/openpitrix-jobs:v3.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.22.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0 dockerhub.kubekey.local/kubesphereio/provisioner-localpv:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0 dockerhub.kubekey.local/kubesphereio/linux-utils:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0 dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.5.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11 dockerhub.kubekey.local/kubesphereio/fluent-bit:v1.8.11
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1 dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1 dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2 dockerhub.kubekey.local/kubesphereio/thanos:v0.25.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0 dockerhub.kubekey.local/kubesphereio/prometheus:v2.34.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0 dockerhub.kubekey.local/kubesphereio/fluentbit-operator:v0.13.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1 dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0 dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0 dockerhub.kubekey.local/kubesphereio/notification-manager:v1.4.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0 dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0 dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v1.4.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0 dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0 dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03 dockerhub.kubekey.local/kubesphereio/docker:19.03
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2 dockerhub.kubekey.local/kubesphereio/metrics-server:v0.4.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5 dockerhub.kubekey.local/kubesphereio/pause:3.5
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0 dockerhub.kubekey.local/kubesphereio/configmap-reload:v0.5.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0 dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z dockerhub.kubekey.local/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z dockerhub.kubekey.local/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0 dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.8.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0 dockerhub.kubekey.local/kubesphereio/coredns:1.8.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1 dockerhub.kubekey.local/kubesphereio/log-sidecar-injector:1.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4 dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20 dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2 dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2 dockerhub.kubekey.local/kubesphereio/cni:v3.23.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2 dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.23.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2 dockerhub.kubekey.local/kubesphereio/node:v3.23.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0 dockerhub.kubekey.local/kubesphereio/opensearch:2.6.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:latest dockerhub.kubekey.local/kubesphereio/busybox:latest
docker tag kubesphere/fluent-bit:v2.0.6 dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6  # could also be retagged v1.8.11 to save editing the fluent-bit YAML later; here we patch it afterwards instead
```
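Maintaining this list by hand is error-prone. As an alternative sketch (the loop is ours, not from the original; the registry and namespace names are taken from the commands above), the bulk retagging can be scripted:

```
#!/bin/bash
# Sketch: retag every image pulled from the Aliyun mirror into the
# private Harbor registry namespace used above.
SRC=registry.cn-beijing.aliyuncs.com/kubesphereio
DST=dockerhub.kubekey.local/kubesphereio

docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep "^${SRC}/" \
  | while read -r image; do
      docker tag "${image}" "${DST}/${image#${SRC}/}"
    done
```

The special-cased images from section 2.4 (ks-console, minio, fluent-bit) come from other sources and still need their manual tags.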

2.6 Push the images to the private Harbor registry

```
#!/bin/bash

docker load < ks3.3.1-images.tar.gz
docker login -u admin -p Harbor12345 dockerhub.kubekey.local

docker push dockerhub.kubekey.local/kubesphereio/ks-console:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/ks-installer:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/openpitrix-jobs:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/alpine:3.14
docker push dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.22.12
docker push dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.22.12
docker push dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.22.12
docker push dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.22.12
docker push dockerhub.kubekey.local/kubesphereio/provisioner-localpv:3.3.0
docker push dockerhub.kubekey.local/kubesphereio/linux-utils:3.3.0
docker push dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2
docker push dockerhub.kubekey.local/kubesphereio/cni:v3.23.2
docker push dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.23.2
docker push dockerhub.kubekey.local/kubesphereio/node:v3.23.2
docker push dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.5.0
docker push dockerhub.kubekey.local/kubesphereio/fluent-bit:v1.8.11
docker push dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1
docker push dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1
docker push dockerhub.kubekey.local/kubesphereio/thanos:v0.25.2
docker push dockerhub.kubekey.local/kubesphereio/prometheus:v2.34.0
docker push dockerhub.kubekey.local/kubesphereio/fluentbit-operator:v0.13.0
docker push dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1
docker push dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0
docker push dockerhub.kubekey.local/kubesphereio/notification-manager:v1.4.0
docker push dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0
docker push dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v1.4.0
docker push dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0
docker push dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0
docker push dockerhub.kubekey.local/kubesphereio/docker:19.03
docker push dockerhub.kubekey.local/kubesphereio/pause:3.5
docker push dockerhub.kubekey.local/kubesphereio/configmap-reload:v0.5.0
docker push dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0
docker push dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.8.0
docker push dockerhub.kubekey.local/kubesphereio/coredns:1.8.0
docker push dockerhub.kubekey.local/kubesphereio/log-sidecar-injector:1.1
docker push dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
docker push dockerhub.kubekey.local/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
docker push dockerhub.kubekey.local/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
docker push dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4
docker push dockerhub.kubekey.local/kubesphereio/redis:5.0.14-alpine
docker push dockerhub.kubekey.local/kubesphereio/haproxy:2.3
docker push dockerhub.kubekey.local/kubesphereio/opensearch:2.6.0
docker push dockerhub.kubekey.local/kubesphereio/busybox:latest
docker push dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6
```
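The explicit list can also be replaced with a loop, in the spirit of the push-images.sh referenced in section 4.3 (the loop itself is our own sketch, not the original script):

```
#!/bin/bash
# Sketch: push everything that was retagged into the
# dockerhub.kubekey.local/kubesphereio namespace.
DST=dockerhub.kubekey.local/kubesphereio

docker login -u admin -p Harbor12345 dockerhub.kubekey.local

docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep "^${DST}/" \
  | while read -r image; do
      docker push "${image}"
    done
```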


3. Deploy KubeSphere with kk

3.1 Remove the Podman that ships with Kylin

Podman is the lightweight container engine bundled with Kylin. To avoid conflicts with Docker, uninstall it outright; this helps coredns/nodelocaldns start correctly later and avoids Docker permission issues.

```
yum remove podman
```

3.2 Download KubeKey

Download kubekey-v2.3.1-linux-arm64.tar.gz. Specific version numbers are listed on the KubeKey releases page.

  • Method 1

```
cd kubesphere/
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io/v2.3.1/kubekey-v2.3.1-linux-arm64.tar.gz | tar xzf -
```

The commands above switch into the kubesphere/ directory, set KKZONE=cn so the download goes through the China mirror, then fetch KubeKey v2.3.1 with curl and unpack it into the current directory. Because of network restrictions, the command sometimes has to be retried several times before it succeeds.

  • Method 2

On your local machine, download KubeKey directly from GitHub Releases · kubesphere/kubekey.

Upload it to the server's /root/kubesphere directory and extract it.

3.3 Generate the cluster configuration file

In this example we create a cluster configuration for KubeSphere v3.3.1 and Kubernetes v1.22.12:

```
./kk create config -f kubesphere-v331-v12212.yaml --with-kubernetes v1.22.12 --with-kubesphere v3.3.1
```

After the command succeeds, a configuration file named kubesphere-v331-v12212.yaml is generated in the current directory.

In the offline environment, all three nodes double as control-plane, etcd, and worker; the single-node example below is for the internet-connected deploy host.

Edit kubesphere-v331-v12212.yaml, adjusting mainly the Cluster and ClusterConfiguration sections.


请调整"Cluster"小节中的"hosts"和"roleGroups"等设置,详细修改说明如下。

  • hosts:指定节点的 IP、ssh 用户、ssh 密码、ssh 端口。特别注意:一定要手工指定 arch: arm64,否则部署的时候会安装 X86 架构的软件包。

    在 hosts 文件中,设置节点的 IP、SSH 用户名、密码和端口。请注意:务必手动指定 arch 为 arm64,否则部署时将安装 X86 架构的软件包。
  • 在Kubernetes中,通过roleGroups可以实现将3个etcd和control-plane节点设置为复用相同机器的worker节点。这样可以提高集群的可扩展性和资源利用率。
  • domain:自定义了一个 opsman.top
  • containerManager:使用了 containerd
  • 请将 `storage.openebs.basePath` 配置项设置为默认存储路径 `/data/openebs/local`,以便在 OpenEBS 中使用。

The modified example follows.

```
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.200.7, internalAddress: 192.168.200.7, user: root, password: "123456", arch: arm64}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    registry:
    - node1
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.22.12
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    auths:
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []
```


```
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
          - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
```


3.4 Run the installation

```
./kk create cluster -f kubesphere-v331-v12212.yaml
```

The point of this online KubeSphere installation is that, along the way, KubeKey automatically populates a kubekey folder, downloading every dependency Kubernetes needs in one pass. From then on, the offline deployment works mainly from this kubekey folder plus the accompanying scripts, in place of a conventional KubeKey artifact.
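For orientation: KubeKey v2.x caches what it downloads under ./kubekey next to the configuration file. On our understanding the layout looks roughly like the sketch below (an assumption, not output from the original article — binary versions and exact paths vary by KubeKey release; verify with ls -R kubekey):

```
kubekey/
├── cni/<version>/arm64/        # CNI plugin archive
├── crictl/<version>/arm64/     # crictl archive
├── etcd/<version>/arm64/       # etcd archive
├── helm/<version>/arm64/       # helm binary
└── kube/v1.22.12/arm64/        # kubeadm, kubelet, kubectl
```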

4. Build the offline deployment resources

4.1 Export the K8s base dependency packages

```
# Download the required packages
yum -y install openssl socat conntrack ipset ebtables chrony ipvsadm --downloadonly --downloaddir /root/kubesphere/k8s-init

# Pack the downloaded files into a tarball
tar -czvf k8s-init-Kylin_V10-arm.tar.gz /root/kubesphere/k8s-init/*
```

4.2 Export the images KubeSphere needs

Export the KubeSphere-related images into ks3.3.1-images.tar:

```
docker save -o ks3.3.1-images.tar dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.27.3 dockerhub.kubekey.local/kubesphereio/cni:v3.27.3 dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.27.3 dockerhub.kubekey.local/kubesphereio/node:v3.27.3 dockerhub.kubekey.local/kubesphereio/ks-console:v3.3.1 dockerhub.kubekey.local/kubesphereio/alpine:3.14 dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.22.20 dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.3.1 dockerhub.kubekey.local/kubesphereio/ks-installer:v3.3.1 dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.3.1 dockerhub.kubekey.local/kubesphereio/openpitrix-jobs:v3.3.1 dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.22.12 dockerhub.kubekey.local/kubesphereio/provisioner-localpv:3.3.0 dockerhub.kubekey.local/kubesphereio/linux-utils:3.3.0 dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.5.0 dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6 dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1 dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1 dockerhub.kubekey.local/kubesphereio/thanos:v0.25.2 dockerhub.kubekey.local/kubesphereio/prometheus:v2.34.0 dockerhub.kubekey.local/kubesphereio/fluentbit-operator:v0.13.0 dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1 dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0 dockerhub.kubekey.local/kubesphereio/notification-manager:v1.4.0 dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0 dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v1.4.0 dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0 dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0 dockerhub.kubekey.local/kubesphereio/docker:19.03 dockerhub.kubekey.local/kubesphereio/metrics-server:v0.4.2 dockerhub.kubekey.local/kubesphereio/pause:3.5 dockerhub.kubekey.local/kubesphereio/configmap-reload:v0.5.0 dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0 dockerhub.kubekey.local/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z dockerhub.kubekey.local/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.8.0 dockerhub.kubekey.local/kubesphereio/coredns:1.8.0 dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4 dockerhub.kubekey.local/kubesphereio/redis:5.0.14-alpine dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12 dockerhub.kubekey.local/kubesphereio/node:v3.23.2 dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.23.2 dockerhub.kubekey.local/kubesphereio/cni:v3.23.2 dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2 dockerhub.kubekey.local/kubesphereio/haproxy:2.3 dockerhub.kubekey.local/kubesphereio/busybox:latest dockerhub.kubekey.local/kubesphereio/opensearch:2.6.0 dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6
```

Compress it:

```
gzip ks3.3.1-images.tar
```

4.3 Export the kubesphere folder

```
[root@node1 ~]# cd /root/kubesphere
[root@node1 kubesphere]# ls
create_project_harbor.sh  docker-24.0.7-arm.tar.gz  fluent-bit-daemonset.yaml  harbor-arm.tar.gz  harbor.tar.gz  install.sh  k8s-init-Kylin_V10-arm.tar.gz  ks3.3.1-images.tar.gz  ks3.3.1-offline  push-images.sh
```

```
tar -czvf kubesphere.tar.gz ./kubesphere/*
```

Write an install.sh for the later one-click offline installation of kk:

```
#!/usr/bin/env bash

read -p "Edit the IP addresses in ks3.3.1-offline/kubesphere-v331-v12212.yaml first. Already done? (yes/no) " B

do_k8s_init(){
  echo "-------- initializing dependency packages ------"
  yum remove podman -y
  tar zxf k8s-init-Kylin_V10-arm.tar.gz
  cd k8s-init && ./install.sh
  cd -
  rm -rf k8s-init
}

install_docker(){
  echo "-------- installing docker --------"
  tar zxf docker-24.0.7-arm.tar.gz
  cd docker && ./install.sh
  cd -
}

install_harbor(){
  echo "------- installing harbor ----------"
  tar zxf harbor-arm.tar.gz
  cd harbor && ./install.sh
  cd -
  echo "-------- pushing images ----------"
  source create_project_harbor.sh
  source push-images.sh
  echo "-------- image push complete --------"
}

install_ks(){
  echo "-------- installing kubesphere --------"
  # tar zxf ks3.3.1-offline.tar.gz
  cd ks3.3.1-offline && ./install.sh
}

if [ "$B" = "yes" ] || [ "$B" = "y" ]; then
  do_k8s_init
  install_docker
  install_harbor
  install_ks
else
  echo "Please edit the cluster configuration file first."
  exit 1
fi
```

5. Install K8s and KubeSphere in the offline environment

5.1 Remove Podman and install the K8s dependencies

This must be done on every node.

Upload k8s-init-Kylin_V10-arm.tar.gz, extract it, and run its install.sh, as sketched below. For a single-node offline deployment you can go straight to the next step.

```
yum remove podman -y
```
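A per-node sketch of this step, using the bundle built in sections 2.3/4.1 (step order follows the do_k8s_init function in section 4.3's script):

```
# Run on every node:
yum remove podman -y
tar zxf k8s-init-Kylin_V10-arm.tar.gz
cd k8s-init && ./install.sh && cd -
```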

5.2 Install the KubeSphere cluster
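On the offline deploy node, the whole flow is driven by the one-click script exported in section 4.3. A usage sketch (file names follow that section):

```
# Upload kubesphere.tar.gz to the offline deploy node, then:
tar zxf kubesphere.tar.gz
cd kubesphere && ./install.sh
```

When the installation finishes, the installer prints the access information: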

```
**************************************************
Waiting for all tasks to be completed ...
task alerting status is successful  (1/6)
task network status is successful  (2/6)
task multicluster status is successful  (3/6)
task openpitrix status is successful  (4/6)
task logging status is successful  (5/6)
task monitoring status is successful  (6/6)
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.10.2:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2024-07-03 11:10:11
#####################################################
```

5.3 Other tweaks

Change the fluent-bit image tag from v1.8.11 to v2.0.6 in the logging DaemonSet:

```
kubectl edit daemonsets fluent-bit -n kubesphere-logging-system
```
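The same edit can also be applied non-interactively; a sketch (it assumes the container inside the DaemonSet is named fluent-bit — check with kubectl get ds fluent-bit -n kubesphere-logging-system -o yaml first):

```
kubectl -n kubesphere-logging-system set image daemonset/fluent-bit \
  fluent-bit=dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6
```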

If you don't need logging, disable the log plugin in the cluster configuration before creating the cluster; the image list can then be trimmed accordingly.

6. Verification

6.1 Check cluster status

```
[root@node1 ~]# kubectl get nodes
NAME    STATUS   ROLES                         AGE   VERSION
node1   Ready    control-plane,master,worker   25h   v1.22.12
node2   Ready    control-plane,master,worker   25h   v1.22.12
node3   Ready    control-plane,master,worker   25h   v1.22.12
```

Console view:


7. Summary

This article demonstrated, hands-on, an offline deployment of K8s and KubeSphere on ARM Kylin V10 servers: downloading the required dependencies, Docker images, and Harbor in an online environment, reusing the packages KubeKey fetches during an online install, and wiring everything together into a one-click installation. A few simple shell scripts make installing K8s and KubeSphere in an offline environment straightforward and repeatable.

Key points of the offline installation:

  • Remove podman
  • Install the K8s dependency packages
  • Install Docker
  • Install Harbor
  • Push the images K8s and KubeSphere need to Harbor
  • Deploy the cluster with kk


What's your take on this? Feel free to discuss and share in the comments.
