Fixing Unhealthy status of the k8s controller-manager and scheduler components


After the master finishes initializing, the following two components still report a status of Unhealthy:

root@master1:~$ sudo kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}
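
The failing probe can also be reproduced directly on the master, e.g. for the controller-manager port (typical curl output shown for illustration):

root@master1:~$ curl http://127.0.0.1:10252/healthz
curl: (7) Failed to connect to 127.0.0.1 port 10252: Connection refused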

The fix suggested in various posts online is to edit the static pod manifests and remove the --port=0 flag. That flag disables the components' insecure HTTP serving ports (10251 for the scheduler, 10252 for the controller-manager), which are exactly the endpoints that kubectl get cs probes for /healthz.

root@master1:~$ ls /etc/kubernetes/manifests/
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml

Edit the manifest files and remove the --port=0 line. Before modifying a manifest, make a backup first.

Note:
When backing up the manifests, do not put the backup copies directly in the same directory (/etc/kubernetes/manifests). Back them up somewhere else, or create a subdirectory there such as /etc/kubernetes/manifests/bak.

The kubelet treats every regular file in the manifests directory as a static pod manifest, so a backup copy left there keeps the old pod definition alive. In that case, even after the steps below, the master node still won't listen on ports 10251 and 10252, and the component status won't return to Healthy.
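
A minimal backup could look like this, using the bak subdirectory mentioned above (the kubelet skips subdirectories when scanning for manifests, so these copies are not loaded):

root@master1:~$ sudo mkdir -p /etc/kubernetes/manifests/bak
root@master1:~$ sudo cp /etc/kubernetes/manifests/kube-controller-manager.yaml /etc/kubernetes/manifests/bak/
root@master1:~$ sudo cp /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/manifests/bak/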

root@master1:~$ vim /etc/kubernetes/manifests/kube-controller-manager.yaml
- command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    - --port=0 ########################## delete this line #########
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true

root@master1:~$ vim /etc/kubernetes/manifests/kube-scheduler.yaml
- command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --port=0   ########### delete this line #################
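
As an alternative to editing with vim, the same deletion can be scripted with sed (a convenience sketch, assuming the default kubeadm manifests shown above and that backups were already taken):

root@master1:~$ sudo sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml
root@master1:~$ sudo sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml

Each command deletes every line matching --port=0 in place.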

Restart the kubelet service so it re-reads the manifests and re-creates the two static pods. This does not affect other running pods.

root@master1:~$ systemctl restart kubelet
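
To confirm the two static pods came back after the restart, you can list them in the kube-system namespace (pod names carry the node name as a suffix):

root@master1:~$ kubectl get pods -n kube-system | grep -E 'kube-(controller-manager|scheduler)'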

Check the listening ports and the component status again:

root@master1:~$ ss -tanlp | grep '10251\|10252'
LISTEN   0         128                        *:10251                  *:*       users:(("kube-scheduler",pid=51054,fd=5))
LISTEN   0         128                        *:10252                  *:*       users:(("kube-controller",pid=51100,fd=5))

root@master1:~$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
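
As a final check, the /healthz endpoints now answer over the restored insecure ports (illustrative output; these endpoints simply return ok):

root@master1:~$ curl http://127.0.0.1:10252/healthz
ok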
