Installing and Deploying a Kubernetes (k8s) Cluster on CentOS 7 (Part 2)

Posted by 三岁 on 2023-08-06
Link to Part 1: Installing and Deploying a Kubernetes (k8s) Cluster on CentOS 7 (Part 1)

4.2 Configure the basic node environment

Start with the basic environment. All three nodes need the same setup; k8scloude1 is used as the example throughout.

Set the hostname first:

  [root@localhost ~]# vim /etc/hostname
  [root@localhost ~]# cat /etc/hostname
  k8scloude1
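Since all three nodes need identical changes, steps like this can be scripted from one machine. A minimal sketch, assuming passwordless root ssh to every node; the IP/hostname pairs are the ones used in this tutorial, and the commands are only printed (dry-run) so nothing is changed by accident:

```shell
# Build the per-node command; $1 is an "ip:hostname" pair
join_cmd() {
  echo "ssh root@${1%%:*} hostnamectl set-hostname ${1##*:}"
}

# Dry-run: print the command for each node instead of executing it
for pair in 192.168.110.130:k8scloude1 192.168.110.129:k8scloude2 192.168.110.128:k8scloude3; do
  join_cmd "$pair"
done
```

Dropping the `echo` inside `join_cmd` and running the output through `sh` would apply the hostnames for real; `hostnamectl set-hostname` also avoids the reboot that editing /etc/hostname by hand requires.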


Configure the node's IP address (optional):
  [root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens32
  [root@k8scloude1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32
  TYPE=Ethernet
  BOOTPROTO=static
  NAME=ens32
  DEVICE=ens32
  ONBOOT=yes
  DNS1=114.114.114.114
  IPADDR=192.168.110.130
  NETMASK=255.255.255.0
  GATEWAY=192.168.110.2
  ZONE=trusted


Restart the network:
  [root@localhost ~]# service network restart
  Restarting network (via systemctl):                        [  OK  ]
  [root@localhost ~]# systemctl restart NetworkManager

After a reboot the hostname becomes k8scloude1. Test that the machine can reach the network:
  [root@k8scloude1 ~]# ping www.baidu.com
  PING www.a.shifen.com (14.215.177.38) 56(84) bytes of data.
  64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=1 ttl=128 time=25.9 ms
  64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=2 ttl=128 time=26.7 ms
  64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=3 ttl=128 time=26.4 ms
  ^C
  --- www.a.shifen.com ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2004ms
  rtt min/avg/max/mdev = 25.960/26.393/26.724/0.320 ms


Map IPs to hostnames:

  [root@k8scloude1 ~]# vim /etc/hosts
  [root@k8scloude1 ~]# cat /etc/hosts
  127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
  192.168.110.130 k8scloude1
  192.168.110.129 k8scloude2
  192.168.110.128 k8scloude3
  # Copy /etc/hosts to the other two nodes
  [root@k8scloude1 ~]# scp /etc/hosts 192.168.110.129:/etc/hosts
  [root@k8scloude1 ~]# scp /etc/hosts 192.168.110.128:/etc/hosts
  # If all three hostnames can be pinged, the mapping works
  [root@k8scloude1 ~]# ping k8scloude1
  PING k8scloude1 (192.168.110.130) 56(84) bytes of data.
  64 bytes from k8scloude1 (192.168.110.130): icmp_seq=1 ttl=64 time=0.044 ms
  64 bytes from k8scloude1 (192.168.110.130): icmp_seq=2 ttl=64 time=0.053 ms
  ^C
  --- k8scloude1 ping statistics ---
  2 packets transmitted, 2 received, 0% packet loss, time 999ms
  rtt min/avg/max/mdev = 0.044/0.048/0.053/0.008 ms
  [root@k8scloude1 ~]# ping k8scloude2
  PING k8scloude2 (192.168.110.129) 56(84) bytes of data.
  64 bytes from k8scloude2 (192.168.110.129): icmp_seq=1 ttl=64 time=0.297 ms
  64 bytes from k8scloude2 (192.168.110.129): icmp_seq=2 ttl=64 time=1.05 ms
  64 bytes from k8scloude2 (192.168.110.129): icmp_seq=3 ttl=64 time=0.254 ms
  ^C
  --- k8scloude2 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2001ms
  rtt min/avg/max/mdev = 0.254/0.536/1.057/0.368 ms
  [root@k8scloude1 ~]# ping k8scloude3
  PING k8scloude3 (192.168.110.128) 56(84) bytes of data.
  64 bytes from k8scloude3 (192.168.110.128): icmp_seq=1 ttl=64 time=0.285 ms
  64 bytes from k8scloude3 (192.168.110.128): icmp_seq=2 ttl=64 time=0.513 ms
  64 bytes from k8scloude3 (192.168.110.128): icmp_seq=3 ttl=64 time=0.390 ms
  ^C
  --- k8scloude3 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2002ms
  rtt min/avg/max/mdev = 0.285/0.396/0.513/0.093 ms


Turn off the console screensaver (optional):
  [root@k8scloude1 ~]# setterm -blank 0


Download new yum repo files:

  [root@k8scloude1 ~]# rm -rf /etc/yum.repos.d/* ;wget ftp://ftp.rhce.cc/k8s/* -P /etc/yum.repos.d/
  --2022-01-07 17:07:28--  ftp://ftp.rhce.cc/k8s/*
             => "/etc/yum.repos.d/.listing"
  Resolving ftp.rhce.cc (ftp.rhce.cc)... 101.37.152.41
  Connecting to ftp.rhce.cc (ftp.rhce.cc)|101.37.152.41|:21... connected.
  Logging in as anonymous ... Logged in!
  ==> SYST ... done.   ==> PWD ... done.
  ......
  100%[===========================================>] 276         --.-K/s   in 0s
  2022-01-07 17:07:29 (81.9 MB/s) - "/etc/yum.repos.d/k8s.repo" saved [276]
  # The new repo files:
  [root@k8scloude1 ~]# ls /etc/yum.repos.d/
  CentOS-Base.repo  docker-ce.repo  epel.repo  k8s.repo


Disable SELinux by setting SELINUX=disabled:
  [root@k8scloude1 ~]# cat /etc/selinux/config
  # This file controls the state of SELinux on the system.
  # SELINUX= can take one of these three values:
  #     enforcing - SELinux security policy is enforced.
  #     permissive - SELinux prints warnings instead of enforcing.
  #     disabled - No SELinux policy is loaded.
  SELINUX=disabled
  # SELINUXTYPE= can take one of three values:
  #     targeted - Targeted processes are protected,
  #     minimum - Modification of targeted policy. Only selected processes are protected.
  #     mls - Multi Level Security protection.
  SELINUXTYPE=targeted
  [root@k8scloude1 ~]# getenforce
  Disabled
  [root@k8scloude1 ~]# setenforce 0
  setenforce: SELinux is disabled


Set the firewall to allow all traffic through:
  [root@k8scloude1 ~]# firewall-cmd --set-default-zone=trusted
  Warning: ZONE_ALREADY_SET: trusted
  success
  [root@k8scloude1 ~]# firewall-cmd --get-default-zone
  trusted
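Setting the default zone to trusted is the simplest option, but it disables filtering entirely. A hedged alternative sketch: open only the ports that kubeadm's preflight warning later mentions (6443 for the API server, 10250 for the kubelet). The commands are printed here as a dry-run rather than executed:

```shell
# Build the firewall-cmd invocation for one TCP port
open_port_cmd() {
  echo "firewall-cmd --permanent --add-port=$1/tcp"
}

# Dry-run: print the commands for the ports kubeadm warns about
for port in 6443 10250; do
  open_port_cmd "$port"
done
echo "firewall-cmd --reload"
```

Running the printed commands (without the echoes) would keep firewalld active while still letting the control-plane traffic through.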


The Linux swapoff command turns off the system swap area.

Note: if swap is left on, kubeadm will fail during initialization with: "[ERROR Swap]: running with swap on is not supported. Please disable swap".

  [root@k8scloude1 ~]# swapoff -a ;sed -i '/swap/d' /etc/fstab
  [root@k8scloude1 ~]# cat /etc/fstab
  # /etc/fstab
  # Created by anaconda on Thu Oct 18 23:09:54 2018
  #
  # Accessible filesystems, by reference, are maintained under '/dev/disk'
  # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
  #
  UUID=9875fa5e-2eea-4fcc-a83e-5528c7d0f6a5 /                       xfs     defaults        0 0
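Swap is fully off when `swapon --summary` prints nothing and /etc/fstab no longer mentions swap. A small check sketch, parameterized so the logic can be shown without touching a live system:

```shell
# $1: contents of /etc/fstab, $2: output of `swapon --summary`
swap_disabled() {
  if [ -z "$2" ] && ! printf '%s\n' "$1" | grep -q swap; then
    echo yes
  else
    echo no
  fi
}

# Real usage on a node would be:
#   swap_disabled "$(cat /etc/fstab)" "$(swapon --summary)"
```

If this prints "no" after a reboot, a swap entry survived in /etc/fstab and was re-enabled at boot.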


4.3 Install Docker on each node and configure it

Kubernetes is a container orchestration tool and needs a container runtime underneath, so install Docker on all three nodes at the same time; k8scloude1 again serves as the example.

Install docker:
  [root@k8scloude1 ~]# yum -y install docker-ce
  Loaded plugins: fastestmirror
  base                                                                                                                                           | 3.6 kB  00:00:00
  ......
  Installed:
    docker-ce.x86_64 3:20.10.12-3.el7
  ......
  Complete!


Enable docker at boot and start it now:
  [root@k8scloude1 ~]# systemctl enable docker --now
  Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
  [root@k8scloude1 ~]# systemctl status docker
  ● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
     Active: active (running) since Sat 2022-01-08 22:10:38 CST; 18s ago
       Docs: https://docs.docker.com
   Main PID: 1377 (dockerd)
     Memory: 30.8M
     CGroup: /system.slice/docker.service
             └─1377 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock


Check the docker version:
  [root@k8scloude1 ~]# docker --version
  Docker version 20.10.12, build e91ed57


Configure a docker registry mirror (accelerator):
  [root@k8scloude1 ~]# cat > /etc/docker/daemon.json <<EOF
  > {
  > "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"]
  > }
  > EOF
  [root@k8scloude1 ~]# cat /etc/docker/daemon.json
  {
  "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"]
  }
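Optional: kubeadm's preflight check later in this tutorial warns that Docker is using the "cgroupfs" cgroup driver while "systemd" is recommended. The tutorial proceeds with the warning only, but if you want to silence it, the driver can be set in the same daemon.json before the restart below. A sketch of the combined file (the mirror URL is the one from this tutorial; `exec-opts` is Docker's daemon option for the cgroup driver):

```json
{
  "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

If you change the driver, do so before `kubeadm init`, since the kubelet must use the same cgroup driver as the container runtime.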


Restart docker:
  [root@k8scloude1 ~]# systemctl restart docker
  [root@k8scloude1 ~]# systemctl status docker
  ● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
     Active: active (running) since Sat 2022-01-08 22:17:45 CST; 8s ago
       Docs: https://docs.docker.com
   Main PID: 1529 (dockerd)
     Memory: 32.4M
     CGroup: /system.slice/docker.service
             └─1529 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock


Let iptables see bridged traffic and enable IP forwarding:

  [root@k8scloude1 ~]# cat <<EOF> /etc/sysctl.d/k8s.conf
  > net.bridge.bridge-nf-call-ip6tables = 1
  > net.bridge.bridge-nf-call-iptables = 1
  > net.ipv4.ip_forward = 1
  > EOF
  # Apply the settings
  [root@k8scloude1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  net.ipv4.ip_forward = 1
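The two net.bridge.* keys only exist while the br_netfilter kernel module is loaded; on a fresh CentOS 7 install, `sysctl -p` can fail with "No such file or directory" until you run `modprobe br_netfilter`. A small verification sketch, parameterized so the parsing logic can be shown without root:

```shell
# $1: output of `sysctl -p /etc/sysctl.d/k8s.conf`
# Prints "yes" if every key came back as 1, "no" otherwise
all_ones() {
  printf '%s\n' "$1" | awk -F' = ' '$2 != "1" {bad=1} END {print (bad ? "no" : "yes")}'
}

# Real usage on a node would be:
#   modprobe br_netfilter && all_ones "$(sysctl -p /etc/sysctl.d/k8s.conf)"
```

To make the module survive reboots, it can also be listed in a file under /etc/modules-load.d/.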


4.4 Install kubelet, kubeadm, and kubectl

Install kubelet, kubeadm, and kubectl on all three nodes:

    kubelet is the node agent that runs on every Kubernetes worker node
    kubeadm is a tool for quickly bootstrapping a Kubernetes (k8s) cluster; it provides the kubeadm init and kubeadm join commands, performing the steps needed to bring up a minimal viable cluster
    kubectl is the Kubernetes command-line tool; with it you can manage the cluster itself and deploy containerized applications onto it
  # --disableexcludes=kubernetes: ignore 'exclude' directives everywhere except the kubernetes repo
  [root@k8scloude1 ~]# yum -y install kubelet-1.21.0-0 kubeadm-1.21.0-0 kubectl-1.21.0-0 --disableexcludes=kubernetes
  Loaded plugins: fastestmirror
  Loading mirror speeds from cached hostfile
  Resolving Dependencies
  --> Running transaction check
  ---> Package kubeadm.x86_64.0.1.21.0-0 will be installed
  ......
  Installed:
    kubeadm.x86_64 0:1.21.0-0                              kubectl.x86_64 0:1.21.0-0                              kubelet.x86_64 0:1.21.0-0
  ......
  Complete!

Enable kubelet at boot and start it now:
  [root@k8scloude1 ~]# systemctl enable kubelet --now
  Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
  # kubelet cannot fully start yet; it keeps restarting until kubeadm init has run
  [root@k8scloude1 ~]# systemctl status kubelet
  ● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Sat 2022-01-08 22:35:33 CST; 3s ago
       Docs: https://kubernetes.io/docs/
    Process: 1722 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
   Main PID: 1722 (code=exited, status=1/FAILURE)
  Jan 08 22:35:33 k8scloude1 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
  Jan 08 22:35:33 k8scloude1 systemd[1]: Unit kubelet.service entered failed state.
  Jan 08 22:35:33 k8scloude1 systemd[1]: kubelet.service failed.


4.5 kubeadm initialization

List the kubeadm versions that are available:
  [root@k8scloude2 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes
  Loaded plugins: fastestmirror
  Loading mirror speeds from cached hostfile
  Installed Packages
  kubeadm.x86_64                                                                  1.21.0-0                                                                   @kubernetes
  Available Packages
  kubeadm.x86_64                                                                  1.6.0-0                                                                    kubernetes
  kubeadm.x86_64                                                                  1.6.1-0                                                                    kubernetes
  kubeadm.x86_64                                                                  1.6.2-0                                                                    kubernetes
  ......
  kubeadm.x86_64                                                                  1.23.0-0                                                                   kubernetes
  kubeadm.x86_64                                                                  1.23.1-0


kubeadm init: initialize the Kubernetes control plane on the master node, k8scloude1.

  # Run kubeadm init
  # --image-repository registry.aliyuncs.com/google_containers: use the Aliyun mirror, otherwise some images cannot be pulled
  # --kubernetes-version=v1.21.0: pin the k8s version
  # --pod-network-cidr=10.244.0.0/16: set the pod network CIDR
  # The error below: registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 cannot be pulled because the image was renamed to coredns/coredns; pull coredns manually instead
  # coredns is an open-source DNS server written in Go
  [root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
  [init] Using Kubernetes version: v1.21.0
  [preflight] Running pre-flight checks
          [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
          [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  error execution phase preflight: [preflight] Some fatal errors occurred:
          [ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0: output: Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
  , error: exit status 1
  [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
  To see the stack trace of this error execute with --v=5 or higher


Pull the coredns image manually:
  [root@k8scloude1 ~]# docker pull coredns/coredns:1.8.0
  1.8.0: Pulling from coredns/coredns
  c6568d217a00: Pull complete
  5984b6d55edf: Pull complete
  Digest: sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e
  Status: Downloaded newer image for coredns/coredns:1.8.0
  docker.io/coredns/coredns:1.8.0


The coredns image has to be retagged, otherwise kubeadm will not recognize it:

  [root@k8scloude1 ~]# docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
  # Remove the original coredns/coredns:1.8.0 tag
  [root@k8scloude1 ~]# docker rmi coredns/coredns:1.8.0


k8scloude1 now has all 7 required images; if even one image is missing, kubeadm init cannot succeed:
  [root@k8scloude1 ~]# docker images
  REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
  registry.aliyuncs.com/google_containers/kube-apiserver            v1.21.0    4d217480042e   9 months ago    126MB
  registry.aliyuncs.com/google_containers/kube-proxy                v1.21.0    38ddd85fe90e   9 months ago    122MB
  registry.aliyuncs.com/google_containers/kube-controller-manager   v1.21.0    09708983cc37   9 months ago    120MB
  registry.aliyuncs.com/google_containers/kube-scheduler            v1.21.0    62ad3129eca8   9 months ago    50.6MB
  registry.aliyuncs.com/google_containers/pause                     3.4.1      0f8457a4c2ec   12 months ago   683kB
  registry.aliyuncs.com/google_containers/coredns/coredns           v1.8.0     296a6d5035e2   14 months ago   42.5MB
  registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   16 months ago   253MB


Run kubeadm init again:
  [root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
  [init] Using Kubernetes version: v1.21.0
  [preflight] Running pre-flight checks
          [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
          [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Generating "ca" certificate and key
  [certs] Generating "apiserver" certificate and key
  [certs] apiserver serving cert is signed for DNS names [k8scloude1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.110.130]
  [certs] Generating "apiserver-kubelet-client" certificate and key
  [certs] Generating "front-proxy-ca" certificate and key
  [certs] Generating "front-proxy-client" certificate and key
  [certs] Generating "etcd/ca" certificate and key
  [certs] Generating "etcd/server" certificate and key
  [certs] etcd/server serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
  [certs] Generating "etcd/peer" certificate and key
  [certs] etcd/peer serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
  [certs] Generating "etcd/healthcheck-client" certificate and key
  [certs] Generating "apiserver-etcd-client" certificate and key
  [certs] Generating "sa" key and public key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [kubeconfig] Writing "admin.conf" kubeconfig file
  [kubeconfig] Writing "kubelet.conf" kubeconfig file
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [kubeconfig] Writing "scheduler.conf" kubeconfig file
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Starting the kubelet
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [kubelet-check] Initial timeout of 40s passed.
  [apiclient] All control plane components are healthy after 65.002757 seconds
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
  [upload-certs] Skipping phase. Please see --upload-certs
  [mark-control-plane] Marking the node k8scloude1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
  [mark-control-plane] Marking the node k8scloude1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [bootstrap-token] Using token: nta3x4.3e54l2dqtmj9tlry
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy
  Your Kubernetes control-plane has initialized successfully!
  To start using your cluster, you need to run the following as a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  Alternatively, if you are the root user, you can run:
    export KUBECONFIG=/etc/kubernetes/admin.conf
  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/
  Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join 192.168.110.130:6443 --token nta3x4.3e54l2dqtmj9tlry \
          --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
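Note that the bootstrap token in the join command above is valid for 24 hours by default. If it has expired by the time a worker joins, a fresh join command can be generated on the control-plane node; a dry-run sketch (the command is printed here rather than executed):

```shell
# kubeadm can print a complete, ready-to-paste join command with a new token
regen_join() {
  echo "kubeadm token create --print-join-command"
}
regen_join   # on a real control plane, run the printed command as root
```

Existing tokens can be inspected with `kubeadm token list`.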


Create the directory and kubeconfig as instructed:
  [root@k8scloude1 ~]# mkdir -p $HOME/.kube
  [root@k8scloude1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  [root@k8scloude1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config


The master node is now visible:

  [root@k8scloude1 ~]# kubectl get node
  NAME         STATUS     ROLES                  AGE     VERSION
  k8scloude1   NotReady   control-plane,master   5m54s   v1.21.0
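The NotReady status is expected at this point: no pod network add-on has been deployed yet (Part 3 covers that). A small sketch that lists the nodes whose STATUS column is not Ready, parameterized so the parsing can be shown without a live cluster:

```shell
# $1: output of `kubectl get node`; prints the name of each non-Ready node
not_ready() {
  printf '%s\n' "$1" | awk 'NR > 1 && $2 != "Ready" {print $1}'
}

# Real usage on the master would be:
#   not_ready "$(kubectl get node)"
```

Once the network add-on is installed and its pods are running, this should print nothing.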


Link to Part 3: Installing and Deploying a Kubernetes (k8s) Cluster on CentOS 7 (Part 3)

[Last edited by 三岁 on 2023-08-06 17:15]