
Deploying a Kubernetes 1.29.4 Cluster on openEuler

1. Kubernetes Cluster Node Preparation

1.1 Host Operating System

No.   Operating System and Version     Notes
1     CentOS 7u9 or openEuler 22.03

1.2 Host Hardware Configuration

CPU   Memory   Disk     Role           Hostname
8C    8G       1024GB   master         k8s-master01
8C    16G      1024GB   worker(node)   k8s-worker01
8C    16G      1024GB   worker(node)   k8s-worker02

1.3 Host Configuration

1.3.1 Hostname Configuration

This deployment uses three hosts: one master node named k8s-master01, and two worker nodes named k8s-worker01 and k8s-worker02.

# master node
hostnamectl set-hostname k8s-master01

# worker01 node
hostnamectl set-hostname k8s-worker01

# worker02 node
hostnamectl set-hostname k8s-worker02

1.3.2 IP Addresses, Name Resolution, and Mutual SSH Trust

# IP address configuration is not covered here

# Name resolution configuration:
[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.11 k8s-master01
192.168.0.12 k8s-worker01
192.168.0.13 k8s-worker02

# Mutual SSH trust configuration
[root@k8s-master01 ~]# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:Rr6W4rdnY350fzMeszeWFR/jUJt0VOZ3yZECp5VJJQA root@k8s-master01
The key's randomart image is:
+---[RSA 3072]----+
|         E.o+=++*|
|            ++o*+|
|        .  .  +oB|
|       o     . *o|
|        S     o =|
|       . o  . ..o|
|      . +  . . +o|
|     . o. = .  *B|
|      ...*.o  oo*|
+----[SHA256]-----+
[root@k8s-master01 ~]# for i in {11..13};do ssh-copy-id 192.168.0.${i};done

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.0.11 (192.168.0.11)' can't be established.
ED25519 key fingerprint is SHA256:s2R582xDIla4wyNozHa/HEmRR7LOU4WAciEcAw57U/Q.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Authorized users only. All activities may be monitored and reported.
root@192.168.0.11's password: 

Number of key(s) added: 1
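
With mutual trust in place, the name resolution file can be pushed to the other nodes in one pass. A minimal sketch, assuming /etc/hosts has already been prepared on the master as shown above:

# Distribute /etc/hosts from the master to the worker nodes
for i in {12..13};do scp /etc/hosts 192.168.0.${i}:/etc/hosts ;done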

1.3.3 Firewall Configuration

Perform on all hosts.

Disable the existing firewalld service:

# systemctl disable firewalld
# systemctl stop firewalld

Or in a single command:

systemctl disable --now firewalld

Check the firewalld state:

# firewall-cmd --state

not running

Reference run:

[root@k8s-master01 ~]# for i in {11..13};do ssh  192.168.0.${i} 'systemctl disable --now firewalld' ;done

Authorized users only. All activities may be monitored and reported.

Authorized users only. All activities may be monitored and reported.

Authorized users only. All activities may be monitored and reported.
[root@k8s-master01 ~]# for i in {11..13};do ssh  192.168.0.${i} 'firewall-cmd --state' ;done

Authorized users only. All activities may be monitored and reported.
not running

Authorized users only. All activities may be monitored and reported.
not running

Authorized users only. All activities may be monitored and reported.
not running
 

1.3.4 SELinux Configuration

Perform on all hosts. The SELinux configuration change takes effect only after the operating system is rebooted.

# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# sestatus

Reference run:

[root@k8s-master01 ~]# for i in {11..13};do ssh 192.168.0.${i} "sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config" ;done

Authorized users only. All activities may be monitored and reported.

Authorized users only. All activities may be monitored and reported.

Authorized users only. All activities may be monitored and reported.
 
[root@k8s-master01 ~]# for i in {11..13};do ssh  192.168.0.${i} 'sestatus' ;done

Authorized users only. All activities may be monitored and reported.
SELinux status:                 disabled

Authorized users only. All activities may be monitored and reported.
SELinux status:                 disabled

Authorized users only. All activities may be monitored and reported.
SELinux status:                 disabled
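
To avoid rebooting right away, SELinux can also be switched to permissive for the current session. A sketch following the same loop pattern (takes effect immediately, lasts until reboot):

# Set SELinux to permissive mode immediately
[root@k8s-master01 ~]# for i in {11..13};do ssh 192.168.0.${i} 'setenforce 0' ;done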
 

1.3.5 Time Synchronization

Perform on all hosts. On a minimal installation, the ntpdate package must be installed first.

# crontab -l
0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com

# Entries in /etc/crontab require a user field, hence the "root" below
for i in {11..13};do ssh 192.168.0.${i} "echo '0 */1 * * * root /usr/sbin/ntpdate time1.aliyun.com' >> /etc/crontab" ;done

# Set the Shanghai time zone (UTC+8)

timedatectl set-timezone Asia/Shanghai

for i in {11..13};do ssh  192.168.0.${i} ' timedatectl set-timezone Asia/Shanghai' ;done
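
A quick sanity check that all nodes agree on the time and time zone (sketch):

# Verify time and time zone on all nodes
for i in {11..13};do ssh 192.168.0.${i} 'date' ;done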
 

1.3.6 Upgrading the Operating System Kernel

CentOS requires a kernel upgrade (search online for the procedure); openEuler 22.03 does not.

1.3.7 Kernel IP Forwarding and Bridge Filtering

所有主机均需要操作。

Add the bridge filtering and kernel forwarding configuration file:

sed -i 's/net.ipv4.ip_forward=0/net.ipv4.ip_forward=1/g' /etc/sysctl.conf

# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# Configure the overlay and br_netfilter modules to load at boot

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Load the br_netfilter and overlay modules now
modprobe br_netfilter
modprobe overlay
# Verify the module is loaded

# lsmod | grep br_netfilter

br_netfilter           22256  0
bridge                151336  1 br_netfilter

# Apply all sysctl configuration files
sysctl --system

# Or apply only the default configuration file
sysctl -p

# Or apply only the newly added configuration file
sysctl -p /etc/sysctl.d/k8s.conf
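
To confirm the settings took effect on every host, the relevant keys can be queried directly. A sketch using the same loop pattern; each value should print as 1:

# Verify forwarding and bridge filtering on all hosts
for i in {11..13};do ssh 192.168.0.${i} 'sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables' ;done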

1.3.8 Installing ipset and ipvsadm

Perform on all hosts.

Install ipset and ipvsadm:

# yum -y install ipset ipvsadm

Configure how the ipvs modules are loaded by listing the modules that need to be loaded:

# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
 
Make the script executable, run it, and check that the modules are loaded:
chmod 755 /etc/sysconfig/modules/ipvs.modules && /etc/sysconfig/modules/ipvs.modules

Verify the modules loaded successfully (on kernels 4.19 and later, nf_conntrack_ipv4 has been merged into nf_conntrack):
# lsmod | grep -e ip_vs -e nf_conntrack
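
Since every host needs the packages and modules, the steps above can be applied in one pass. A sketch, assuming the ipvs.modules script has been created on each host as shown above:

# Install the packages, load the modules, and verify on all hosts
for i in {11..13};do ssh 192.168.0.${i} 'yum -y install ipset ipvsadm && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack' ;done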
Kubernetes clusters use the iptables proxy mode for kube-proxy by default. If a deployed cluster is running in iptables mode, it can be switched to ipvs mode as follows.

1. On the master node, edit the kube-proxy ConfigMap:
# kubectl edit cm kube-proxy -n kube-system
...
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"   # change this to ipvs; empty by default
    nodePortAddresses: null

...

 
2. Check the current kube-proxy pods:
# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-84c476996d-8kz5d   1/1     Running   0             62m
calico-node-8tb29                          1/1     Running   0             62m
calico-node-9dkpd                          1/1     Running   0             62m
calico-node-wnlgv                          1/1     Running   0             62m
coredns-74586cf9b6-jgtlq                   1/1     Running   0             84m
coredns-74586cf9b6-nvkz4                   1/1     Running   0             84m
etcd-k8s-master01                          1/1     Running   2             84m
kube-apiserver-k8s-master01                1/1     Running   0             84m
kube-controller-manager-k8s-master01       1/1     Running   1 (69m ago)   84m
kube-proxy-l2vfq                           1/1     Running   0             45m
kube-proxy-v4drh                           1/1     Running   0             45m
kube-proxy-xvtnh                           1/1     Running   0             45m
kube-scheduler-k8s-master01                1/1     Running   1 (69m ago)   84m
 
3. Delete the current kube-proxy pods so they are recreated in ipvs mode (substitute the pod names from your own cluster):
# kubectl delete pod kube-proxy-f7rcx kube-proxy-ggchx kube-proxy-hbt94 -n kube-system
pod "kube-proxy-f7rcx" deleted
pod "kube-proxy-ggchx" deleted
pod "kube-proxy-hbt94" deleted
 
4. Check the newly recreated kube-proxy pods:
# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-74586cf9b6-5bfk7             1/1     Running   0          77m
coredns-74586cf9b6-d29mj             1/1     Running   0          77m
etcd-master-140                      1/1     Running   0          78m
kube-apiserver-master-140            1/1     Running   0          78m
kube-controller-manager-master-140   1/1     Running   0          78m
kube-proxy-7859q                     1/1     Running   0          44s
kube-proxy-l4gqx                     1/1     Running   0          43s
kube-proxy-nnjr2                     1/1     Running   0          43s
kube-scheduler-master-140            1/1     Running   0          78m
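
To confirm that kube-proxy really switched to ipvs mode, the kernel's virtual server table should now contain entries for the cluster service IPs. A sketch, using the ipvsadm tool installed earlier:

# List the ipvs virtual server table on a node
ipvsadm -Ln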
 

1.3.9 Disabling the Swap Partition

To disable the swap partition permanently, comment out its entry in /etc/fstab and reboot the operating system. If a reboot is not convenient right away, swap can be disabled temporarily with swapoff -a.

# cat /etc/fstab

......

# /dev/mapper/centos-swap swap                    swap    defaults        0 0

Add a # at the beginning of the swap line shown above.
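
The same change can be applied to all hosts in one pass. A sketch; the sed pattern assumes the swap entry in /etc/fstab is not commented out yet:

# Turn swap off now and comment out the swap entry permanently
for i in {11..13};do ssh 192.168.0.${i} "swapoff -a && sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab" ;done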

2. Installing the containerd Container Runtime

2.1 Installing the containerd Packages

Perform on all hosts.

# Download the packaged offline files

for i in {11..13};do ssh  192.168.0.${i} ' wget https://blog-source-mkt.oss-cn-chengdu.aliyuncs.com/resources/k8s/kubeadm%20init/k8s1.29.tar.gz'; done

# Extract the bundle and install containerd
for i in {11..13};do ssh  192.168.0.${i} ' tar -zxvf /root/k8s1.29.tar.gz'; done

for i in {11..13};do ssh  192.168.0.${i} ' tar -zxvf /root/workdir/containerd-1.7.11-linux-amd64.tar.gz && mv /root/bin/* /usr/local/bin/ && rm -rf /root/bin'; done
# Create the systemd service; perform on all hosts
cat << EOF > /usr/lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
# Start the containerd service
for i in {11..13};do ssh  192.168.0.${i} 'systemctl daemon-reload && systemctl enable --now containerd '; done

# Install runc
for i in {11..13};do ssh  192.168.0.${i} 'install -m 755 /root/workdir/runc.amd64 /usr/local/sbin/runc '; done
# Install the CNI plugins
for i in {11..13};do ssh  192.168.0.${i} 'mkdir -p /opt/cni/bin && tar -xzvf  /root/workdir/cni-plugins-linux-amd64-v1.4.0.tgz -C /opt/cni/bin/ '; done
# Generate the default containerd configuration file (to be modified next)
for i in {11..13};do ssh  192.168.0.${i} 'mkdir -p /etc/containerd && containerd config default | sudo tee /etc/containerd/config.toml '; done 
 
# Change the sandbox (pause) image; perform on all hosts

sed -i 's#sandbox_image = "registry.k8s.io/pause:.*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
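
kubelet is configured below to use the systemd cgroup driver, so containerd should match. A sketch that flips the default value in the config.toml generated above; perform on all hosts:

# Use the systemd cgroup driver in containerd's runc options
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml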
# Restart containerd
systemctl restart containerd
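
A quick health check across all hosts (sketch):

# Confirm containerd is active and answering on every host
for i in {11..13};do ssh 192.168.0.${i} 'systemctl is-active containerd && ctr version' ;done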

2.2 Installing Kubernetes Components

# Configure the Kubernetes v1.29 repository; required on all nodes
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/repodata/repomd.xml.key
EOF
# Install the Kubernetes tools; required on all nodes
yum clean all && yum makecache

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
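
Since the image bundle used below contains only v1.29.4 images, it can be safer to pin the package versions to match. A sketch; the exact release suffix available in the repository may differ:

# Optionally pin the tool versions to the bundled image version
yum install -y kubelet-1.29.4 kubeadm-1.29.4 kubectl-1.29.4 --disableexcludes=kubernetes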
# Configure kubelet so that its cgroup driver matches the one used by the container runtime (containerd); recommended on all nodes

# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

Or use the following command:
echo 'KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"' > /etc/sysconfig/kubelet
systemctl enable kubelet 

# Note: do not start kubelet manually; kubeadm starts it automatically. If it is already running, the init step will report errors.

# Initialize the cluster; run on the master node only. The image bundle here contains v1.29.4 images only.

kubeadm init --apiserver-advertise-address=192.168.0.11  --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.29.4 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.224.0.0/16
# Finally, run the following so kubectl can reach the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Alternatively, for the root user:
export KUBECONFIG=/etc/kubernetes/admin.conf
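
kubeadm init prints a join command at the end of its output, which must be run on each worker node to add it to the cluster. The token and hash below are placeholders, not real values; use the ones printed by your own kubeadm init:

# Run on k8s-worker01 and k8s-worker02
kubeadm join 192.168.0.11:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>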

2.3 Installing the Calico Network Plugin

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
# Finally, check the status of the nodes and pods

kubectl get nodes
 
kubectl get pods -A
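
Calico pods can take a few minutes to pull and start; one way to block until the cluster is usable (a sketch):

# Wait until every node reports Ready (up to 5 minutes)
kubectl wait --for=condition=Ready nodes --all --timeout=300s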

