
K8s Operations - Cluster Upgrade -- Upgrading a kubeadm v1.20 Installation


Upgrading a kubeadm-based installation

To upgrade a k8s cluster you must first upgrade kubeadm to the target k8s version; in other words, kubeadm is the admission ticket for the k8s upgrade.
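A quick sanity check before starting is to confirm that the kubeadm binary already matches the version you intend to upgrade to. A minimal sketch, assuming kubeadm and kubectl are on the PATH; the TARGET variable is only for illustration:

#Hypothetical pre-flight check, not part of the original procedure
TARGET=v1.20.15
kubeadm version -o short                 # must print ${TARGET} (or newer) before 'kubeadm upgrade apply'
kubectl version --short | grep Server    # current cluster version, for comparison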

1.1 Upgrade preparation

Upgrade the components on all k8s master nodes: the control-plane services kube-controller-manager, kube-apiserver, kube-scheduler and kube-proxy will be brought to the new version.
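To see which image versions these components are currently running, a hedged one-liner (the pod names below are the kubeadm defaults; adjust the grep if your cluster names them differently):

#Hypothetical check, not part of the original procedure
kubectl -n kube-system get pods \
  -o custom-columns='NAME:.metadata.name,IMAGE:.spec.containers[*].image' \
  | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)'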

1.1.1 Verify the current k8s master version

[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.14", GitCommit:"57a3aa3f13699cf3db9c52d228c18db94fa81876", GitTreeState:"clean", BuildDate:"2021-12-15T14:51:22Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

1.1.2 Verify the current k8s node versions

[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
k8s-master01.example.local   Ready    control-plane,master   20h   v1.20.14
k8s-master02.example.local   Ready    control-plane,master   20h   v1.20.14
k8s-master03.example.local   Ready    control-plane,master   20h   v1.20.14
k8s-node01.example.local     Ready    <none>                 20h   v1.20.14
k8s-node02.example.local     Ready    <none>                 20h   v1.20.14
k8s-node03.example.local     Ready    <none>                 20h   v1.20.14

1.2 Upgrade the k8s master nodes

Upgrade the version on each k8s master node, one node at a time.

1.2.1 View the upgrade plan

[root@k8s-master01 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.20.14
[upgrade/versions] kubeadm version: v1.21.9
W0125 18:50:01.026004  119208 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get "https://storage.googleapis.com/kubernetes-release/release/stable.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0125 18:50:01.026067  119208 version.go:103] falling back to the local client version: v1.21.9
[upgrade/versions] Target version: v1.21.9
[upgrade/versions] Latest version in the v1.20 series: v1.20.15

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT        TARGET
kubelet     6 x v1.20.14   v1.20.15

Upgrade to the latest version in the v1.20 series:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.20.14   v1.20.15
kube-controller-manager   v1.20.14   v1.20.15
kube-scheduler            v1.20.14   v1.20.15
kube-proxy                v1.20.14   v1.20.15
CoreDNS                   1.7.0      v1.8.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.20.15

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT        TARGET
kubelet     6 x v1.20.14   v1.21.9

Upgrade to the latest stable version:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.20.14   v1.21.9
kube-controller-manager   v1.20.14   v1.21.9
kube-scheduler            v1.20.14   v1.21.9
kube-proxy                v1.20.14   v1.21.9
CoreDNS                   1.7.0      v1.8.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.21.9

_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________


1.2.2 Upgrade the k8s master nodes

master01

#CentOS
[root@k8s-master01 ~]# yum list kubeadm.x86_64 --showduplicates | grep 1.20.*
Repository base is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
kubeadm.x86_64                       1.20.0-0                        kubernetes
kubeadm.x86_64                       1.20.1-0                        kubernetes
kubeadm.x86_64                       1.20.2-0                        kubernetes
kubeadm.x86_64                       1.20.4-0                        kubernetes
kubeadm.x86_64                       1.20.5-0                        kubernetes
kubeadm.x86_64                       1.20.6-0                        kubernetes
kubeadm.x86_64                       1.20.7-0                        kubernetes
kubeadm.x86_64                       1.20.8-0                        kubernetes
kubeadm.x86_64                       1.20.9-0                        kubernetes
kubeadm.x86_64                       1.20.10-0                       kubernetes
kubeadm.x86_64                       1.20.11-0                       kubernetes
kubeadm.x86_64                       1.20.12-0                       kubernetes
kubeadm.x86_64                       1.20.13-0                       kubernetes
kubeadm.x86_64                       1.20.14-0                       kubernetes
kubeadm.x86_64                       1.20.15-0                       kubernetes

#Ubuntu
root@k8s-master01:~# apt-cache madison kubeadm |grep 1.20.*
   kubeadm | 1.20.15-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.14-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.13-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.12-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.11-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.10-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.9-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.8-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.7-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.20.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
#Install on ha01 and ha02
#CentOS
[root@k8s-ha01 ~]# yum -y install socat

#Ubuntu
root@k8s-ha01:~# apt -y install socat

#Take master01 out of the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo 'disable server kubernetes-6443/172.31.3.101' | socat stdio /var/lib/haproxy/haproxy.sock"
#CentOS
[root@k8s-master01 ~]# yum -y install kubeadm-1.20.15 kubelet-1.20.15 kubectl-1.20.15

#Ubuntu
root@k8s-master01:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00

[root@k8s-master01 ~]# kubeadm config images list --kubernetes-version v1.20.15
k8s.gcr.io/kube-apiserver:v1.20.15
k8s.gcr.io/kube-controller-manager:v1.20.15
k8s.gcr.io/kube-scheduler:v1.20.15
k8s.gcr.io/kube-proxy:v1.20.15
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
[root@k8s-master01 ~]# cat download_kubeadm_images_1.20-2.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:        knowclub
#FileName:      download_kubeadm_images.sh
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

KUBEADM_VERSION=1.20.15
images=$(kubeadm config images list --kubernetes-version=v${KUBEADM_VERSION} | awk -F "/" '{print $NF}')
HARBOR_DOMAIN=harbor.knowclub.cc

images_download(){
    ${COLOR}"开始下载Kubeadm镜像"${END}
    for i in ${images};do
        docker pull registry.aliyuncs.com/google_containers/$i
        docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.aliyuncs.com/google_containers/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Kubeadm镜像下载完成"${END}
}

images_download
[root@k8s-master01 ~]# bash download_kubeadm_images_1.20-2.sh
[root@k8s-master01 ~]# kubeadm upgrade apply v1.20.15
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.20.15"
[upgrade/versions] Cluster version: v1.20.14
[upgrade/versions] kubeadm version: v1.20.15
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.20.15"...
Static pod: kube-apiserver-k8s-master01.example.local hash: 2580447932d6e4d10decb9e9e70593ce
Static pod: kube-controller-manager-k8s-master01.example.local hash: 55914312d14f1d04628f11b3b59de589
Static pod: kube-scheduler-k8s-master01.example.local hash: 0f18c3e9299e8083d6aff52dd9fcab53
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-master01.example.local hash: 658c8782b9bd9c52da0b94be666192d6
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests401207376"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-01-25-19-07-06/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master01.example.local hash: 2580447932d6e4d10decb9e9e70593ce
...
Static pod: kube-apiserver-k8s-master01.example.local hash: 9b2053cdff6353cc35c3abf3a2e091b2
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-01-25-19-07-06/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master01.example.local hash: 55914312d14f1d04628f11b3b59de589
...
Static pod: kube-controller-manager-k8s-master01.example.local hash: 4d05e725d6cdb548bee78744c22e0fb8
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-01-25-19-07-06/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master01.example.local hash: 0f18c3e9299e8083d6aff52dd9fcab53
...
Static pod: kube-scheduler-k8s-master01.example.local hash: aeca07c98134c1fa9650088b47670d3c
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.20.15". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart kubelet

#Bring master01 back into the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo 'enable server kubernetes-6443/172.31.3.101' | socat stdio /var/lib/haproxy/haproxy.sock"
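The disable/enable dance against the HAProxy admin socket is repeated for every master below. A small hedged helper could keep it in one place; this is only a sketch, assuming the same backend name (kubernetes-6443) and socket path used above:

#Hypothetical helper, not part of the original procedure
lb_server() {    # usage: lb_server disable|enable <master-ip>
    local action=$1 ip=$2
    ssh -o StrictHostKeyChecking=no root@k8s-lb \
        "echo '${action} server kubernetes-6443/${ip}' | socat stdio /var/lib/haproxy/haproxy.sock"
}
# Example: lb_server disable 172.31.3.102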

master02

#Take master02 out of the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo 'disable server kubernetes-6443/172.31.3.102' | socat stdio /var/lib/haproxy/haproxy.sock"

#CentOS
[root@k8s-master02 ~]# yum -y install kubeadm-1.20.15 kubelet-1.20.15 kubectl-1.20.15

#Ubuntu
root@k8s-master02:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00
[root@k8s-master02 ~]# cat download_kubeadm_images_1.20-3.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:        knowclub
#FileName:      download_kubeadm_images.sh
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

KUBEADM_VERSION=1.20.15
images=$(kubeadm config images list --kubernetes-version=v${KUBEADM_VERSION} | awk -F "/" '{print $NF}')
HARBOR_DOMAIN=harbor.knowclub.cc

images_download(){
    ${COLOR}"开始下载Kubeadm镜像"${END}
    for i in ${images};do
        docker pull ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Kubeadm镜像下载完成"${END}
}

images_download
[root@k8s-master02 ~]# bash download_kubeadm_images_1.20-3.sh
[root@k8s-master02 ~]# kubeadm upgrade apply v1.20.15
[root@k8s-master02 ~]# systemctl daemon-reload
[root@k8s-master02 ~]# systemctl restart kubelet

#Bring master02 back into the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo 'enable server kubernetes-6443/172.31.3.102' | socat stdio /var/lib/haproxy/haproxy.sock"

master03

#Take master03 out of the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo 'disable server kubernetes-6443/172.31.3.103' | socat stdio /var/lib/haproxy/haproxy.sock"

#CentOS
[root@k8s-master03 ~]# yum -y install kubeadm-1.20.15 kubelet-1.20.15 kubectl-1.20.15

#Ubuntu
root@k8s-master03:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00 kubectl=1.20.15-00
[root@k8s-master03 ~]# bash download_kubeadm_images_1.20-3.sh
[root@k8s-master03 ~]# kubeadm upgrade apply v1.20.15
[root@k8s-master03 ~]# systemctl daemon-reload
[root@k8s-master03 ~]# systemctl restart kubelet

#Bring master03 back into the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo 'enable server kubernetes-6443/172.31.3.103' | socat stdio /var/lib/haproxy/haproxy.sock"
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
k8s-master01.example.local   Ready    control-plane,master   4d    v1.20.15
k8s-master02.example.local   Ready    control-plane,master   4d    v1.20.15
k8s-master03.example.local   Ready    control-plane,master   4d    v1.20.15
k8s-node01.example.local     Ready    <none>                 4d    v1.20.14
k8s-node02.example.local     Ready    <none>                 4d    v1.20.14
k8s-node03.example.local     Ready    <none>                 4d    v1.20.14

[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:26:37Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master01 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:27:39Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:23:01Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.20.15

1.3 Upgrade Calico

[root@k8s-master01 ~]# curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -O
[root@k8s-master01 ~]# vim calico-etcd.yaml
...
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: OnDelete    #Change this: with OnDelete, calico is not rolling-updated; pods only pick up the new version after the kubelet restarts them (i.e. after the pod is deleted)
  template:
    metadata:
      labels:
        k8s-app: calico-node
...
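If the DaemonSet is already deployed, the same switch can also be flipped in place instead of editing the manifest. A hedged one-liner sketch, assuming the default DaemonSet name calico-node in kube-system:

#Hypothetical alternative, not part of the original procedure
kubectl -n kube-system patch daemonset calico-node --type merge \
  -p '{"spec":{"updateStrategy":{"type":"OnDelete"}}}'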

Modify the following places in calico-etcd.yaml.

[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml
  etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
[root@k8s-master01 ~]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"#g' calico-etcd.yaml
[root@k8s-master01 ~]# grep "etcd_endpoints:.*" calico-etcd.yaml
  etcd_endpoints: "https://172.31.3.101:2379,https://172.31.3.102:2379,https://172.31.3.103:2379"

[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null

[root@k8s-master01 ~]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
[root@k8s-master01 ~]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
[root@k8s-master01 ~]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
[root@k8s-master01 ~]# grep -E "(.*etcd-key:.*|.*etcd-cert:.*|.*etcd-ca:.*)" calico-etcd.yaml
  etcd-key: 
  etcd-cert: 
  etcd-ca: 

[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml
  etcd_ca: ""    # "/calico-secrets/etcd-ca"
  etcd_cert: ""  # "/calico-secrets/etcd-cert"
  etcd_key: ""   # "/calico-secrets/etcd-key"
[root@k8s-master01 ~]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
[root@k8s-master01 ~]# grep -E "(.*etcd_ca:.*|.*etcd_cert:.*|.*etcd_key:.*)" calico-etcd.yaml
  etcd_ca: "/calico-secrets/etcd-ca"      # "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"  # "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"    # "/calico-secrets/etcd-key"

[root@k8s-master01 ~]# POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
[root@k8s-master01 ~]# echo $POD_SUBNET
192.168.0.0/12

Note: the next step changes the subnet under CALICO_IPV4POOL_CIDR in calico-etcd.yaml to your own Pod subnet, i.e. replaces 192.168.x.x/16 with your cluster's Pod CIDR, and uncomments the setting:

[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
[root@k8s-master01 ~]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
[root@k8s-master01 ~]# grep -E "(.*CALICO_IPV4POOL_CIDR.*|.*192.168.0.0.*)" calico-etcd.yaml
            - name: CALICO_IPV4POOL_CIDR
              value: 192.168.0.0/12

[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml
          image: docker.io/calico/cni:v3.21.4
          image: docker.io/calico/pod2daemon-flexvol:v3.21.4
          image: docker.io/calico/node:v3.21.4
          image: docker.io/calico/kube-controllers:v3.21.4

Download the Calico images and push them to Harbor.

[root@k8s-master01 ~]# cat download_calico_images.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:        knowclub
#FileName:      download_calico_images.sh
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/" '/image:/{print $NF}' calico-etcd.yaml)
HARBOR_DOMAIN=harbor.knowclub.cc

images_download(){
    ${COLOR}"开始下载Calico镜像"${END}
    for i in ${images};do
        docker pull registry.cn-beijing.aliyuncs.com/$i
        docker tag registry.cn-beijing.aliyuncs.com/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.cn-beijing.aliyuncs.com/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Calico镜像下载完成"${END}
}

images_download
[root@k8s-master01 ~]# bash download_calico_images.sh
[root@k8s-master01 ~]# sed -ri 's@(.*image:) docker.io/calico(/.*)@\1 harbor.knowclub.cc/google_containers\2@g' calico-etcd.yaml
[root@k8s-master01 ~]# grep "image:" calico-etcd.yaml
          image: harbor.knowclub.cc/google_containers/cni:v3.21.4
          image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.21.4
          image: harbor.knowclub.cc/google_containers/node:v3.21.4
          image: harbor.knowclub.cc/google_containers/kube-controllers:v3.21.4

[root@k8s-master01 ~]# kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

#Take master01 out of the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo 'disable server kubernetes-6443/172.31.3.101' | socat stdio /var/lib/haproxy/haproxy.sock"

[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master01
calico-node-q4dg7   1/1   Running   0   65m   172.31.3.101   k8s-master01   <none>   <none>

[root@k8s-master01 ~]# kubectl get pod calico-node-q4dg7 -n kube-system -o yaml |grep "image:"
      f:image: {}
      f:image: {}
      f:image: {}
    image: harbor.knowclub.cc/google_containers/node:v3.15.3
    image: harbor.knowclub.cc/google_containers/cni:v3.15.3
  - image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.15.3
    image: harbor.knowclub.cc/google_containers/node:v3.15.3
    image: harbor.knowclub.cc/google_containers/cni:v3.15.3
    image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.15.3
#The images have not been upgraded yet

[root@k8s-master01 ~]# kubectl delete pod calico-node-q4dg7 -n kube-system
pod "calico-node-q4dg7" deleted
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master01
calico-node-xngd8   0/1   PodInitializing   0   4s   172.31.3.101   k8s-master01   <none>   <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-xngd8 -n kube-system -o yaml |grep "image:"
[root@k8s-master01 ~]# kubectl get pod calico-node-xngd8 -n kube-system -o yaml |grep "image:"
      f:image: {}
      f:image: {}
      f:image: {}
    image: harbor.knowclub.cc/google_containers/node:v3.21.4
    image: harbor.knowclub.cc/google_containers/cni:v3.21.4
  - image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.21.4
    image: harbor.knowclub.cc/google_containers/node:v3.21.4
    image: harbor.knowclub.cc/google_containers/cni:v3.21.4
    image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.21.4

#Bring master01 back into the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo 'enable server kubernetes-6443/172.31.3.101' | socat stdio /var/lib/haproxy/haproxy.sock"

#Take master02 out of the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo 'disable server kubernetes-6443/172.31.3.102' | socat stdio /var/lib/haproxy/haproxy.sock"

[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master02
calico-node-v8nqp   1/1   Running   0   69m   172.31.3.102   k8s-master02.example.local   <none>   <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-v8nqp -n kube-system -o yaml |grep "image:"
      f:image: {}
      f:image: {}
      f:image: {}
    image: harbor.knowclub.cc/google_containers/node:v3.15.3
    image: harbor.knowclub.cc/google_containers/cni:v3.15.3
  - image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.15.3
    image: harbor.knowclub.cc/google_containers/node:v3.15.3
    image: harbor.knowclub.cc/google_containers/cni:v3.15.3
    image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.15.3
[root@k8s-master01 ~]# kubectl delete pod calico-node-v8nqp -n kube-system
pod "calico-node-v8nqp" deleted
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master02
calico-node-n76qk   1/1   Running   0   27s   172.31.3.102   k8s-master02.example.local   <none>   <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-n76qk -n kube-system -o yaml |grep "image:"
      f:image: {}
      f:image: {}
      f:image: {}
    image: harbor.knowclub.cc/google_containers/node:v3.21.4
    image: harbor.knowclub.cc/google_containers/cni:v3.21.4
  - image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.21.4
    image: harbor.knowclub.cc/google_containers/node:v3.21.4
    image: harbor.knowclub.cc/google_containers/cni:v3.21.4
    image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.21.4

#Bring master02 back into the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo 'enable server kubernetes-6443/172.31.3.102' | socat stdio /var/lib/haproxy/haproxy.sock"

#Take master03 out of the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo 'disable server kubernetes-6443/172.31.3.103' | socat stdio /var/lib/haproxy/haproxy.sock"

[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master03
calico-node-4mdv6   1/1   Running   0   71m   172.31.3.103   k8s-master03.example.local   <none>   <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-4mdv6 -n kube-system -o yaml |grep "image:"
      f:image: {}
      f:image: {}
      f:image: {}
    image: harbor.knowclub.cc/google_containers/node:v3.15.3
    image: harbor.knowclub.cc/google_containers/cni:v3.15.3
  - image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.15.3
    image: harbor.knowclub.cc/google_containers/node:v3.15.3
    image: harbor.knowclub.cc/google_containers/cni:v3.15.3
    image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.15.3
[root@k8s-master01 ~]# kubectl delete pod calico-node-4mdv6 -n kube-system
pod "calico-node-4mdv6" deleted
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide |grep calico |grep master03
calico-node-qr67n   0/1   Init:0/2   0   4s   172.31.3.103   k8s-master03.example.local   <none>   <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-qr67n -n kube-system -o yaml |grep "image:"
      f:image: {}
      f:image: {}
      f:image: {}
    image: harbor.knowclub.cc/google_containers/node:v3.21.4
    image: harbor.knowclub.cc/google_containers/cni:v3.21.4
  - image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.21.4
    image: harbor.knowclub.cc/google_containers/node:v3.21.4
    image: harbor.knowclub.cc/google_containers/cni:v3.21.4
    image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.21.4

#Bring master03 back into the load balancer
[root@k8s-master01 ~]# ssh -o StrictHostKeyChecking=no root@k8s-lb "echo 'enable server kubernetes-6443/172.31.3.103' | socat stdio /var/lib/haproxy/haproxy.sock"
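Because the DaemonSet uses the OnDelete strategy, every calico-node pod has to be deleted by hand before it picks up the v3.21.4 images. A hedged helper that rolls the pod on one node at a time could reduce the repetition; this is only a sketch, assuming the manifest's default label k8s-app=calico-node:

#Hypothetical helper, not part of the original procedure
roll_calico() {   # usage: roll_calico <node-name>
    kubectl -n kube-system delete pod -l k8s-app=calico-node \
        --field-selector spec.nodeName="$1"
    # then watch until the replacement pod on that node is Running again
    kubectl -n kube-system get pod -l k8s-app=calico-node -o wide | grep "$1"
}
# Example: roll_calico k8s-node01.example.local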

1.4 Upgrade the k8s node (worker) versions

[root@k8s-master01 ~]# kubectl drain k8s-node01.example.local --delete-emptydir-data --force --ignore-daemonsets
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS                     ROLES                  AGE   VERSION
k8s-master01.example.local   Ready                      control-plane,master   4d    v1.20.15
k8s-master02.example.local   Ready                      control-plane,master   4d    v1.20.15
k8s-master03.example.local   Ready                      control-plane,master   4d    v1.20.15
k8s-node01.example.local     Ready,SchedulingDisabled   <none>                 4d    v1.20.14
k8s-node02.example.local     Ready                      <none>                 4d    v1.20.14
k8s-node03.example.local     Ready                      <none>                 4d    v1.20.14

#CentOS
[root@k8s-node01 ~]# yum -y install kubeadm-1.20.15 kubelet-1.20.15

#Ubuntu
root@k8s-node01:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00

[root@k8s-node01 ~]# systemctl daemon-reload
[root@k8s-node01 ~]# systemctl restart kubelet

[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node01
kube-system   calico-node-fzgq7   1/1   Running   0   46m   172.31.3.108   k8s-node01.example.local   <none>   <none>

[root@k8s-master01 ~]# kubectl get pod calico-node-fzgq7 -n kube-system -o yaml |grep "image:"
      f:image: {}
      f:image: {}
      f:image: {}
    image: harbor.knowclub.cc/google_containers/node:v3.15.3
    image: harbor.knowclub.cc/google_containers/cni:v3.15.3
  - image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.15.3
    image: harbor.knowclub.cc/google_containers/node:v3.15.3
    image: harbor.knowclub.cc/google_containers/cni:v3.15.3
    image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.15.3
[root@k8s-master01 ~]# kubectl delete pod calico-node-fzgq7 -n kube-system
pod "calico-node-fzgq7" deleted
[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node01
kube-system   calico-node-dqk5p   0/1   Init:1/2   0   7s   172.31.3.108   k8s-node01.example.local   <none>   <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-dqk5p -n kube-system -o yaml |grep "image:"
      f:image: {}
      f:image: {}
      f:image: {}
    image: harbor.knowclub.cc/google_containers/node:v3.21.4
    image: harbor.knowclub.cc/google_containers/cni:v3.21.4
  - image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.21.4
    image: harbor.knowclub.cc/google_containers/node:v3.21.4
    image: harbor.knowclub.cc/google_containers/cni:v3.21.4
    image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.21.4

[root@k8s-master01 ~]# kubectl uncordon k8s-node01.example.local
node/k8s-node01.example.local uncordoned
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
k8s-master01.example.local   Ready    control-plane,master   4d    v1.20.15
k8s-master02.example.local   Ready    control-plane,master   4d    v1.20.15
k8s-master03.example.local   Ready    control-plane,master   4d    v1.20.15
k8s-node01.example.local     Ready    <none>                 4d    v1.20.15
k8s-node02.example.local     Ready    <none>                 4d    v1.20.14
k8s-node03.example.local     Ready    <none>                 4d    v1.20.14
[root@k8s-master01 ~]# kubectl drain k8s-node02.example.local --delete-emptydir-data --force --ignore-daemonsets
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS                     ROLES                  AGE   VERSION
k8s-master01.example.local   Ready                      control-plane,master   4d    v1.20.15
k8s-master02.example.local   Ready                      control-plane,master   4d    v1.20.15
k8s-master03.example.local   Ready                      control-plane,master   4d    v1.20.15
k8s-node01.example.local     Ready                      <none>                 4d    v1.20.15
k8s-node02.example.local     Ready,SchedulingDisabled   <none>                 4d    v1.20.14
k8s-node03.example.local     Ready                      <none>                 4d    v1.20.14

#CentOS
[root@k8s-node02 ~]# yum -y install kubeadm-1.20.15 kubelet-1.20.15
#Ubuntu
root@k8s-node02:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00
[root@k8s-node02 ~]# systemctl daemon-reload
[root@k8s-node02 ~]# systemctl restart kubelet

[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node02 | tail -n1
kube-system   calico-node-ktmc9   1/1   Running   0   48m   172.31.3.109   k8s-node02.example.local   <none>   <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-ktmc9 -n kube-system -o yaml| grep "image:"
      f:image: {}
      f:image: {}
      f:image: {}
    image: harbor.knowclub.cc/google_containers/node:v3.15.3
    image: harbor.knowclub.cc/google_containers/cni:v3.15.3
  - image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.15.3
    image: harbor.knowclub.cc/google_containers/node:v3.15.3
    image: harbor.knowclub.cc/google_containers/cni:v3.15.3
    image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.15.3

[root@k8s-master01 ~]# kubectl delete pod calico-node-ktmc9 -n kube-system
pod "calico-node-ktmc9" deleted
[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node02 | tail -n1
kube-system   calico-node-p8czc   0/1   PodInitializing   0   8s   172.31.3.109   k8s-node02.example.local   <none>   <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-p8czc -n kube-system -o yaml| grep "image:"
      f:image: {}
      f:image: {}
      f:image: {}
    image: harbor.knowclub.cc/google_containers/node:v3.21.4
    image: harbor.knowclub.cc/google_containers/cni:v3.21.4
  - image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.21.4
    image: harbor.knowclub.cc/google_containers/node:v3.21.4
    image: harbor.knowclub.cc/google_containers/cni:v3.21.4
    image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.21.4

[root@k8s-master01 ~]# kubectl uncordon k8s-node02.example.local
node/k8s-node02.example.local uncordoned
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
k8s-master01.example.local   Ready    control-plane,master   4d    v1.20.15
k8s-master02.example.local   Ready    control-plane,master   4d    v1.20.15
k8s-master03.example.local   Ready    control-plane,master   4d    v1.20.15
k8s-node01.example.local     Ready    <none>                 4d    v1.20.15
k8s-node02.example.local     Ready    <none>                 4d    v1.20.15
k8s-node03.example.local     Ready    <none>                 4d    v1.20.14
[root@k8s-master01 ~]# kubectl drain k8s-node03.example.local --delete-emptydir-data --force --ignore-daemonsets
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS                     ROLES                  AGE   VERSION
k8s-master01.example.local   Ready                      control-plane,master   4d    v1.20.15
k8s-master02.example.local   Ready                      control-plane,master   4d    v1.20.15
k8s-master03.example.local   Ready                      control-plane,master   4d    v1.20.15
k8s-node01.example.local     Ready                      <none>                 4d    v1.20.15
k8s-node02.example.local     Ready                      <none>                 4d    v1.20.15
k8s-node03.example.local     Ready,SchedulingDisabled   <none>                 4d    v1.20.14

#CentOS
[root@k8s-node03 ~]# yum -y install kubeadm-1.20.15 kubelet-1.20.15

#Ubuntu
root@k8s-node03:~# apt -y install kubeadm=1.20.15-00 kubelet=1.20.15-00

[root@k8s-node03 ~]# systemctl daemon-reload
[root@k8s-node03 ~]# systemctl restart kubelet

[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node03 | tail -n1
kube-system   calico-node-922s8   1/1   Running   0   51m   172.31.3.110   k8s-node03.example.local   <none>   <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-922s8 -n kube-system -o yaml| grep "image:"
      f:image: {}
      f:image: {}
      f:image: {}
    image: harbor.knowclub.cc/google_containers/node:v3.15.3
    image: harbor.knowclub.cc/google_containers/cni:v3.15.3
  - image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.15.3
    image: harbor.knowclub.cc/google_containers/node:v3.15.3
    image: harbor.knowclub.cc/google_containers/cni:v3.15.3
    image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.15.3

[root@k8s-master01 ~]# kubectl delete pod calico-node-922s8 -n kube-system
pod "calico-node-922s8" deleted
[root@k8s-master01 ~]# kubectl get pod -A -o wide|grep calico |grep node03 | tail -n1
kube-system   calico-node-j9f2s   0/1   Init:0/2   0   5s   172.31.3.110   k8s-node03.example.local   <none>   <none>
[root@k8s-master01 ~]# kubectl get pod calico-node-j9f2s -n kube-system -o yaml| grep "image:"
      f:image: {}
      f:image: {}
      f:image: {}
    image: harbor.knowclub.cc/google_containers/node:v3.21.4
    image: harbor.knowclub.cc/google_containers/cni:v3.21.4
  - image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.21.4
    image: harbor.knowclub.cc/google_containers/node:v3.21.4
    image: harbor.knowclub.cc/google_containers/cni:v3.21.4
    image: harbor.knowclub.cc/google_containers/pod2daemon-flexvol:v3.21.4

[root@k8s-master01 ~]# kubectl uncordon k8s-node03.example.local
node/k8s-node03.example.local uncordoned
[root@k8s-master01 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
k8s-master01.example.local   Ready    control-plane,master   4d    v1.20.15
k8s-master02.example.local   Ready    control-plane,master   4d    v1.20.15
k8s-master03.example.local   Ready    control-plane,master   4d    v1.20.15
k8s-node01.example.local     Ready    <none>                 4d    v1.20.15
k8s-node02.example.local     Ready    <none>                 4d    v1.20.15
k8s-node03.example.local     Ready    <none>                 4d    v1.20.15
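The same drain / install / restart kubelet / uncordon sequence is repeated verbatim for every worker. A hedged sketch of scripting it from master01 (assumes passwordless SSH to the workers and CentOS, i.e. yum, on them; adjust the package command for Ubuntu):

#Hypothetical wrapper, not part of the original procedure
VERSION=1.20.15
for node in k8s-node01 k8s-node02 k8s-node03; do
    fqdn=${node}.example.local
    kubectl drain ${fqdn} --delete-emptydir-data --force --ignore-daemonsets
    ssh -o StrictHostKeyChecking=no root@${node} \
        "yum -y install kubeadm-${VERSION} kubelet-${VERSION} && systemctl daemon-reload && systemctl restart kubelet"
    kubectl uncordon ${fqdn}
done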

1.5 Upgrade metrics-server

[root@k8s-master01 ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
#Edit it to look like the following
[root@k8s-master01 ~]# vim components.yaml
...
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls    #Add this line
...
[root@k8s-master01 ~]# grep "image:" components.yaml
        image: k8s.gcr.io/metrics-server/metrics-server:v0.5.2

[root@k8s-master01 ~]# cat download_metrics_images.sh
#!/bin/bash
#
#**********************************************************************************************
#Author:        knowclub
#FileName:      download_metrics_images.sh
#Description:   The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'

images=$(awk -F "/" '/image:/{print $NF}' components.yaml)
HARBOR_DOMAIN=harbor.knowclub.cc

images_download(){
    ${COLOR}"开始下载Metrics镜像"${END}
    for i in ${images};do
        docker pull registry.aliyuncs.com/google_containers/$i
        docker tag registry.aliyuncs.com/google_containers/$i ${HARBOR_DOMAIN}/google_containers/$i
        docker rmi registry.aliyuncs.com/google_containers/$i
        docker push ${HARBOR_DOMAIN}/google_containers/$i
    done
    ${COLOR}"Metrics镜像下载完成"${END}
}

images_download
[root@k8s-master01 ~]# bash download_metrics_images.sh
[root@k8s-master01 ~]# docker images |grep metrics
harbor.knowclub.cc/google_containers/metrics-server   v0.5.2   f73640fb5061   8 weeks ago   64.3MB
[root@k8s-master01 ~]# sed -ri 's@(.*image:) k8s.gcr.io/metrics-server(/.*)@\1 harbor.knowclub.cc/google_containers\2@g' components.yaml
[root@k8s-master01 ~]# grep "image:" components.yaml
        image: harbor.knowclub.cc/google_containers/metrics-server:v0.5.2

[root@k8s-master01 ~]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Check the status.

[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep metrics
metrics-server-545b8b99c6-25csw            1/1     Running   0          45s

[root@k8s-master01 ~]# kubectl top nodes
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01.example.local   152m         7%     1066Mi          27%
k8s-master02.example.local   136m         6%     1002Mi          26%
k8s-master03.example.local   143m         7%     1127Mi          29%
k8s-node01.example.local     65m          3%     651Mi           17%
k8s-node02.example.local     83m          4%     700Mi           18%
k8s-node03.example.local     76m          3%     666Mi           17%
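If kubectl top returns data, the metrics pipeline is working end to end; an additional hedged check (the APIService name below is the one created by the components.yaml applied above):

#Hypothetical extra check, not part of the original procedure
kubectl get apiservice v1beta1.metrics.k8s.io
# The AVAILABLE column should read True once metrics-server is serving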



