Link to Part 1 of this tutorial:
Installing and Deploying a Kubernetes (k8s) Cluster on CentOS 7 (Part 1)
4.2 Configure the basic environment on the nodes
Configure the basic environment on all 3 nodes at the same time; k8scloude1 is used as the example here.
First, set the hostname
- [root@localhost ~]# vim /etc/hostname
- [root@localhost ~]# cat /etc/hostname
- k8scloude1
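The same hostname step has to be repeated on k8scloude2 and k8scloude3. As a minimal sketch (node names and IPs are the ones used in this tutorial; the ssh invocation is illustrative and assumes root SSH access), a loop can produce the `hostnamectl` command for each node:

```shell
# Map each node name to its IP (values from this tutorial; adjust as needed).
NODES="k8scloude1:192.168.110.130 k8scloude2:192.168.110.129 k8scloude3:192.168.110.128"
for pair in $NODES; do
  name=${pair%%:*}   # part before the colon: the hostname
  ip=${pair##*:}     # part after the colon: the IP address
  # On a real run, execute this over SSH instead of echoing it:
  echo "ssh root@$ip hostnamectl set-hostname $name"
done
```

`hostnamectl set-hostname` writes /etc/hostname for you, so the manual vim edit becomes unnecessary.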
Configure the node IP address (optional)
- [root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens32
- [root@k8scloude1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32
- TYPE=Ethernet
- BOOTPROTO=static
- NAME=ens32
- DEVICE=ens32
- ONBOOT=yes
- DNS1=114.114.114.114
- IPADDR=192.168.110.130
- NETMASK=255.255.255.0
- GATEWAY=192.168.110.2
- ZONE=trusted
Restart the network
- [root@localhost ~]# service network restart
- Restarting network (via systemctl): [ OK ]
- [root@localhost ~]# systemctl restart NetworkManager
After rebooting, the hostname becomes k8scloude1. Test whether the machine can reach the network
- [root@k8scloude1 ~]# ping www.baidu.com
- PING www.a.shifen.com (14.215.177.38) 56(84) bytes of data.
- 64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=1 ttl=128 time=25.9 ms
- 64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=2 ttl=128 time=26.7 ms
- 64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=3 ttl=128 time=26.4 ms
- ^C
- --- www.a.shifen.com ping statistics ---
- 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
- rtt min/avg/max/mdev = 25.960/26.393/26.724/0.320 ms
Map IPs to hostnames
- [root@k8scloude1 ~]# vim /etc/hosts
- [root@k8scloude1 ~]# cat /etc/hosts
- 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
- ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
- 192.168.110.130 k8scloude1
- 192.168.110.129 k8scloude2
- 192.168.110.128 k8scloude3
- #Copy /etc/hosts to the other two nodes
- [root@k8scloude1 ~]# scp /etc/hosts 192.168.110.129:/etc/hosts
-
- [root@k8scloude1 ~]# scp /etc/hosts 192.168.110.128:/etc/hosts
- #If the other two nodes can be pinged, the mapping works
- [root@k8scloude1 ~]# ping k8scloude1
- PING k8scloude1 (192.168.110.130) 56(84) bytes of data.
- 64 bytes from k8scloude1 (192.168.110.130): icmp_seq=1 ttl=64 time=0.044 ms
- 64 bytes from k8scloude1 (192.168.110.130): icmp_seq=2 ttl=64 time=0.053 ms
- ^C
- --- k8scloude1 ping statistics ---
- 2 packets transmitted, 2 received, 0% packet loss, time 999ms
- rtt min/avg/max/mdev = 0.044/0.048/0.053/0.008 ms
- [root@k8scloude1 ~]# ping k8scloude2
- PING k8scloude2 (192.168.110.129) 56(84) bytes of data.
- 64 bytes from k8scloude2 (192.168.110.129): icmp_seq=1 ttl=64 time=0.297 ms
- 64 bytes from k8scloude2 (192.168.110.129): icmp_seq=2 ttl=64 time=1.05 ms
- 64 bytes from k8scloude2 (192.168.110.129): icmp_seq=3 ttl=64 time=0.254 ms
- ^C
- --- k8scloude2 ping statistics ---
- 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
- rtt min/avg/max/mdev = 0.254/0.536/1.057/0.368 ms
- [root@k8scloude1 ~]# ping k8scloude3
- PING k8scloude3 (192.168.110.128) 56(84) bytes of data.
- 64 bytes from k8scloude3 (192.168.110.128): icmp_seq=1 ttl=64 time=0.285 ms
- 64 bytes from k8scloude3 (192.168.110.128): icmp_seq=2 ttl=64 time=0.513 ms
- 64 bytes from k8scloude3 (192.168.110.128): icmp_seq=3 ttl=64 time=0.390 ms
- ^C
- --- k8scloude3 ping statistics ---
- 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
- rtt min/avg/max/mdev = 0.285/0.396/0.513/0.093 ms
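Instead of editing /etc/hosts by hand and copying it around, the three mappings can be appended idempotently, so re-running the setup never duplicates lines. A sketch, shown against a temporary copy rather than the real /etc/hosts:

```shell
# Work on a throwaway copy; point HOSTS at /etc/hosts on a real node.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n' > "$HOSTS"
for entry in \
  "192.168.110.130 k8scloude1" \
  "192.168.110.129 k8scloude2" \
  "192.168.110.128 k8scloude3"
do
  # -qxF: quiet, whole-line, fixed-string match; append only when missing
  grep -qxF "$entry" "$HOSTS" || echo "$entry" >> "$HOSTS"
done
```

Run it twice and the file still contains each mapping exactly once.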
Disable console screen blanking (optional)
- [root@k8scloude1 ~]# setterm -blank 0
Download new yum repositories
- [root@k8scloude1 ~]# rm -rf /etc/yum.repos.d/* ;wget ftp://ftp.rhce.cc/k8s/* -P /etc/yum.repos.d/
- --2022-01-07 17:07:28-- ftp://ftp.rhce.cc/k8s/*
- => "/etc/yum.repos.d/.listing"
- Resolving ftp.rhce.cc (ftp.rhce.cc)... 101.37.152.41
- Connecting to ftp.rhce.cc (ftp.rhce.cc)|101.37.152.41|:21... connected.
- Logging in as anonymous ... Logged in!
- ==> SYST ... done. ==> PWD ... done.
- ......
- 100%[=======================================================================================================================================================================>] 276 --.-K/s in 0s
- 2022-01-07 17:07:29 (81.9 MB/s) - "/etc/yum.repos.d/k8s.repo" saved [276]
- #The new repo files are as follows
- [root@k8scloude1 ~]# ls /etc/yum.repos.d/
- CentOS-Base.repo docker-ce.repo epel.repo k8s.repo
Disable SELinux by setting SELINUX=disabled
- [root@k8scloude1 ~]# cat /etc/selinux/config
- # This file controls the state of SELinux on the system.
- # SELINUX= can take one of these three values:
- # enforcing - SELinux security policy is enforced.
- # permissive - SELinux prints warnings instead of enforcing.
- # disabled - No SELinux policy is loaded.
- SELINUX=disabled
- # SELINUXTYPE= can take one of three values:
- # targeted - Targeted processes are protected,
- # minimum - Modification of targeted policy. Only selected processes are protected.
- # mls - Multi Level Security protection.
- SELINUXTYPE=targeted
- [root@k8scloude1 ~]# getenforce
- Disabled
- [root@k8scloude1 ~]# setenforce 0
- setenforce: SELinux is disabled
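The config edit can also be done non-interactively with sed, with `setenforce 0` applying it immediately on a system where SELinux is still running (here it just reports "SELinux is disabled" because it already was). A sketch of the sed step, run against a temporary copy of the file rather than the real one:

```shell
# Demonstrate the edit on a copy; on a real node target /etc/selinux/config.
CFG=$(mktemp)
echo 'SELINUX=enforcing' > "$CFG"
# Anchored substitution, so comment lines that mention SELINUX= stay untouched.
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$CFG"
```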
Configure the firewall to allow all packets through
- [root@k8scloude1 ~]# firewall-cmd --set-default-zone=trusted
- Warning: ZONE_ALREADY_SET: trusted
- success
- [root@k8scloude1 ~]# firewall-cmd --get-default-zone
- trusted
The Linux swapoff command disables the system swap area.
Note: if swap is left enabled, kubeadm will fail during initialization with: "[ERROR Swap]: running with swap on is not supported. Please disable swap"
- [root@k8scloude1 ~]# swapoff -a ;sed -i '/swap/d' /etc/fstab
- [root@k8scloude1 ~]# cat /etc/fstab
- # /etc/fstab
- # Created by anaconda on Thu Oct 18 23:09:54 2018
- #
- # Accessible filesystems, by reference, are maintained under '/dev/disk'
- # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
- #
- UUID=9875fa5e-2eea-4fcc-a83e-5528c7d0f6a5 / xfs defaults 0 0
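The effect of the sed expression above can be checked safely on a copy before touching the real /etc/fstab; the sample entries below are illustrative:

```shell
# Throwaway copy of an fstab with one swap entry.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
UUID=9875fa5e-2eea-4fcc-a83e-5528c7d0f6a5 / xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Same expression the tutorial runs on /etc/fstab: delete any line containing "swap".
sed -i '/swap/d' "$FSTAB"
```

Note that `/swap/d` deletes any line containing the string "swap", including comments; on an fstab that has such comments a stricter pattern would be safer.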
4.3 Install docker on the nodes and configure it
Kubernetes is a container orchestration tool and needs a container engine underneath, so install docker on all three nodes at the same time; k8scloude1 is again the example.
Install docker
- [root@k8scloude1 ~]# yum -y install docker-ce
- Loaded plugins: fastestmirror
- base | 3.6 kB 00:00:00
- ......
- Installed:
- docker-ce.x86_64 3:20.10.12-3.el7
- ......
- Complete!
Enable docker at boot and start it now
- [root@k8scloude1 ~]# systemctl enable docker --now
- Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
- [root@k8scloude1 ~]# systemctl status docker
- ● docker.service - Docker Application Container Engine
- Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
- Active: active (running) since Sat 2022-01-08 22:10:38 CST; 18s ago
- Docs: https://docs.docker.com
- Main PID: 1377 (dockerd)
- Memory: 30.8M
- CGroup: /system.slice/docker.service
- └─1377 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Check the docker version
- [root@k8scloude1 ~]# docker --version
- Docker version 20.10.12, build e91ed57
Configure a docker registry mirror
- [root@k8scloude1 ~]# cat > /etc/docker/daemon.json <<EOF
- > {
- > "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"]
- > }
- > EOF
- [root@k8scloude1 ~]# cat /etc/docker/daemon.json
- {
- "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"]
- }
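A malformed daemon.json prevents dockerd from starting at all, so it is worth validating the JSON before the restart. A sketch, assuming python3 is available and using a temporary copy of the file:

```shell
# Recreate the file in a temp location; on a real node point at /etc/docker/daemon.json.
DAEMON_JSON=$(mktemp)
cat > "$DAEMON_JSON" <<'EOF'
{
  "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"]
}
EOF
# json.tool exits non-zero on invalid JSON, so this doubles as a syntax check.
python3 -m json.tool "$DAEMON_JSON" > /dev/null && echo "daemon.json is valid JSON"
```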
Restart docker
- [root@k8scloude1 ~]# systemctl restart docker
- [root@k8scloude1 ~]# systemctl status docker
- ● docker.service - Docker Application Container Engine
- Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
- Active: active (running) since Sat 2022-01-08 22:17:45 CST; 8s ago
- Docs: https://docs.docker.com
- Main PID: 1529 (dockerd)
- Memory: 32.4M
- CGroup: /system.slice/docker.service
- └─1529 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Make bridged traffic visible to iptables (the bridge-nf-call settings) and enable IP forwarding
- [root@k8scloude1 ~]# cat <<EOF> /etc/sysctl.d/k8s.conf
- > net.bridge.bridge-nf-call-ip6tables = 1
- > net.bridge.bridge-nf-call-iptables = 1
- > net.ipv4.ip_forward = 1
- > EOF
- #Apply the settings
- [root@k8scloude1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
- net.bridge.bridge-nf-call-ip6tables = 1
- net.bridge.bridge-nf-call-iptables = 1
- net.ipv4.ip_forward = 1
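One caveat: the net.bridge.* keys only exist while the br_netfilter kernel module is loaded, so run `modprobe br_netfilter` first if `sysctl -p` complains about them. A small sketch that double-checks all three settings are present in the conf file, shown against a temporary copy:

```shell
# Recreate the conf in a temp file; on a real node point at /etc/sysctl.d/k8s.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
missing=0
for key in net.bridge.bridge-nf-call-ip6tables net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward; do
  # Dots in $key match any character under grep, which is close enough for this check.
  grep -q "^$key = 1" "$CONF" || missing=$((missing+1))
done
echo "missing settings: $missing"
```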
4.4 Install kubelet, kubeadm, and kubectl
Install kubelet, kubeadm, and kubectl on all three nodes:
kubelet is the agent component of Kubernetes that runs on every node.
kubeadm is a tool for quickly bootstrapping a Kubernetes (k8s) cluster; it provides the kubeadm init and kubeadm join commands to create a cluster, performing the steps needed to bring up a minimal viable cluster.
kubectl is the command-line tool for a Kubernetes cluster; with kubectl you can manage the cluster itself and deploy containerized applications onto it.
- #--disableexcludes=kubernetes: disable the exclude= directives defined for the kubernetes repo, so its packages can be installed
- [root@k8scloude1 ~]# yum -y install kubelet-1.21.0-0 kubeadm-1.21.0-0 kubectl-1.21.0-0 --disableexcludes=kubernetes
- Loaded plugins: fastestmirror
- Loading mirror speeds from cached hostfile
- Resolving Dependencies
- --> Running transaction check
- ---> Package kubeadm.x86_64.0.1.21.0-0 will be installed
- ......
- Installed:
- kubeadm.x86_64 0:1.21.0-0 kubectl.x86_64 0:1.21.0-0 kubelet.x86_64 0:1.21.0-0
- ......
- Complete!
Enable kubelet at boot and start it now
- [root@k8scloude1 ~]# systemctl enable kubelet --now
- Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
- #kubelet cannot start successfully yet; it keeps auto-restarting until kubeadm init has run
- [root@k8scloude1 ~]# systemctl status kubelet
- ● kubelet.service - kubelet: The Kubernetes Node Agent
- Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
- Drop-In: /usr/lib/systemd/system/kubelet.service.d
- └─10-kubeadm.conf
- Active: activating (auto-restart) (Result: exit-code) since Sat 2022-01-08 22:35:33 CST; 3s ago
- Docs: https://kubernetes.io/docs/
- Process: 1722 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
- Main PID: 1722 (code=exited, status=1/FAILURE)
- Jan 08 22:35:33 k8scloude1 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
- Jan 08 22:35:33 k8scloude1 systemd[1]: Unit kubelet.service entered failed state.
- Jan 08 22:35:33 k8scloude1 systemd[1]: kubelet.service failed.
4.5 Initialize the cluster with kubeadm
Check which kubeadm versions are available
- [root@k8scloude2 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes
- Loaded plugins: fastestmirror
- Loading mirror speeds from cached hostfile
- Installed Packages
- kubeadm.x86_64 1.21.0-0 @kubernetes
- Available Packages
- kubeadm.x86_64 1.6.0-0 kubernetes
- kubeadm.x86_64 1.6.1-0 kubernetes
- kubeadm.x86_64 1.6.2-0 kubernetes
- ......
- kubeadm.x86_64 1.23.0-0 kubernetes
- kubeadm.x86_64 1.23.1-0
kubeadm init: initialize the Kubernetes control-plane on the master node k8scloude1
- #Run the kubeadm initialization
- #--image-repository registry.aliyuncs.com/google_containers: pull from the Aliyun mirror registry, otherwise some images cannot be downloaded
- #--kubernetes-version=v1.21.0: specify the k8s version
- #--pod-network-cidr=10.244.0.0/16: specify the pod network CIDR
- #The run below fails: registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 cannot be pulled, because the image was renamed to coredns/coredns; pull coredns manually instead
- #coredns is an open-source DNS server written in Go
- [root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
- [init] Using Kubernetes version: v1.21.0
- [preflight] Running pre-flight checks
- [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
- [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
- [preflight] Pulling images required for setting up a Kubernetes cluster
- [preflight] This might take a minute or two, depending on the speed of your internet connection
- [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
- error execution phase preflight: [preflight] Some fatal errors occurred:
- [ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0: output: Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
- , error: exit status 1
- [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
- To see the stack trace of this error execute with --v=5 or higher
Manually pull the coredns image
- [root@k8scloude1 ~]# docker pull coredns/coredns:1.8.0
- 1.8.0: Pulling from coredns/coredns
- c6568d217a00: Pull complete
- 5984b6d55edf: Pull complete
- Digest: sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e
- Status: Downloaded newer image for coredns/coredns:1.8.0
- docker.io/coredns/coredns:1.8.0
The coredns image must be retagged, otherwise kubeadm will not recognize it
- [root@k8scloude1 ~]# docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
- #Remove the coredns/coredns:1.8.0 tag
- [root@k8scloude1 ~]# docker rmi coredns/coredns:1.8.0
k8scloude1 now has the 7 required images; if even one of them were missing, kubeadm init could not succeed
- [root@k8scloude1 ~]# docker images
- REPOSITORY TAG IMAGE ID CREATED SIZE
- registry.aliyuncs.com/google_containers/kube-apiserver v1.21.0 4d217480042e 9 months ago 126MB
- registry.aliyuncs.com/google_containers/kube-proxy v1.21.0 38ddd85fe90e 9 months ago 122MB
- registry.aliyuncs.com/google_containers/kube-controller-manager v1.21.0 09708983cc37 9 months ago 120MB
- registry.aliyuncs.com/google_containers/kube-scheduler v1.21.0 62ad3129eca8 9 months ago 50.6MB
- registry.aliyuncs.com/google_containers/pause 3.4.1 0f8457a4c2ec 12 months ago 683kB
- registry.aliyuncs.com/google_containers/coredns/coredns v1.8.0 296a6d5035e2 14 months ago 42.5MB
- registry.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ff 16 months ago 253MB
Run kubeadm init again
- [root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
- [init] Using Kubernetes version: v1.21.0
- [preflight] Running pre-flight checks
- [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
- [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
- [preflight] Pulling images required for setting up a Kubernetes cluster
- [preflight] This might take a minute or two, depending on the speed of your internet connection
- [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
- [certs] Using certificateDir folder "/etc/kubernetes/pki"
- [certs] Generating "ca" certificate and key
- [certs] Generating "apiserver" certificate and key
- [certs] apiserver serving cert is signed for DNS names [k8scloude1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.110.130]
- [certs] Generating "apiserver-kubelet-client" certificate and key
- [certs] Generating "front-proxy-ca" certificate and key
- [certs] Generating "front-proxy-client" certificate and key
- [certs] Generating "etcd/ca" certificate and key
- [certs] Generating "etcd/server" certificate and key
- [certs] etcd/server serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
- [certs] Generating "etcd/peer" certificate and key
- [certs] etcd/peer serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
- [certs] Generating "etcd/healthcheck-client" certificate and key
- [certs] Generating "apiserver-etcd-client" certificate and key
- [certs] Generating "sa" key and public key
- [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
- [kubeconfig] Writing "admin.conf" kubeconfig file
- [kubeconfig] Writing "kubelet.conf" kubeconfig file
- [kubeconfig] Writing "controller-manager.conf" kubeconfig file
- [kubeconfig] Writing "scheduler.conf" kubeconfig file
- [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
- [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
- [kubelet-start] Starting the kubelet
- [control-plane] Using manifest folder "/etc/kubernetes/manifests"
- [control-plane] Creating static Pod manifest for "kube-apiserver"
- [control-plane] Creating static Pod manifest for "kube-controller-manager"
- [control-plane] Creating static Pod manifest for "kube-scheduler"
- [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
- [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
- [kubelet-check] Initial timeout of 40s passed.
- [apiclient] All control plane components are healthy after 65.002757 seconds
- [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
- [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
- [upload-certs] Skipping phase. Please see --upload-certs
- [mark-control-plane] Marking the node k8scloude1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
- [mark-control-plane] Marking the node k8scloude1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
- [bootstrap-token] Using token: nta3x4.3e54l2dqtmj9tlry
- [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
- [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
- [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
- [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
- [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
- [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
- [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
- [addons] Applied essential addon: CoreDNS
- [addons] Applied essential addon: kube-proxy
- Your Kubernetes control-plane has initialized successfully!
- To start using your cluster, you need to run the following as a regular user:
- mkdir -p $HOME/.kube
- sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Alternatively, if you are the root user, you can run:
- export KUBECONFIG=/etc/kubernetes/admin.conf
- You should now deploy a pod network to the cluster.
- Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
- https://kubernetes.io/docs/concepts/cluster-administration/addons/
- Then you can join any number of worker nodes by running the following on each as root:
- kubeadm join 192.168.110.130:6443 --token nta3x4.3e54l2dqtmj9tlry \
- --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
Create the directory and kubeconfig file as instructed by the output
- [root@k8scloude1 ~]# mkdir -p $HOME/.kube
- [root@k8scloude1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- [root@k8scloude1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
The master node is now visible; its STATUS is NotReady because no pod network add-on has been installed yet
- [root@k8scloude1 ~]# kubectl get node
- NAME STATUS ROLES AGE VERSION
- k8scloude1 NotReady control-plane,master 5m54s v1.21.0
Link to Part 3 of this tutorial:
Installing and Deploying a Kubernetes (k8s) Cluster on CentOS 7 (Part 3)