Kubernetes fails to set up networking after kubeadm reset

I initialized Kubernetes with kubeadm init, and when I noticed that --pod-network-cidr was wrong, I reset it with kubeadm reset. After correcting the option, I tried to start Kubernetes again with kubeadm:

    kubeadm init --use-kubernetes-version v1.5.1 --external-etcd-endpoints=http://10.111.125.131:2379 --pod-network-cidr=10.244.0.0/16

Then I got errors like the following on some nodes:

    Dec 28 15:30:55 ydtf-node-137 kubelet[13333]: E1228 15:30:55.838700 13333 cni.go:255] Error adding network: no IP addresses available in network: cbr0
    Dec 28 15:30:55 ydtf-node-137 kubelet[13333]: E1228 15:30:55.838727 13333 cni.go:209] Error while adding to cni network: no IP addresses available in network: cbr0
    Dec 28 15:30:55 ydtf-node-137 kubelet[13333]: E1228 15:30:55.838781 13333 docker_manager.go:2201] Failed to setup network for pod "test-701078429-tl3j2_default(6945191b-ccce-11e6-b53d-78acc0f9504e)" using network plugins "cni": no IP addresses available in network: cbr0; Skipping pod
    Dec 28 15:30:56 ydtf-node-137 kubelet[13333]: E1228 15:30:56.205596 13333 pod_workers.go:184] Error syncing pod 6945191b-ccce-11e6-b53d-78acc0f9504e, skipping: failed to "SetupNetwork" for "test-701078429-tl3j2_default" with SetupNetworkError: "Failed to setup network for pod \"test-701078429-tl3j2_default(6945191b-ccce-11e6-b53d-78acc0f9504e)\" using network plugins \"cni\": no IP addresses available in network: cbr0; Skipping pod"

or

    Dec 29 10:20:02 ydtf-node-137 kubelet: E1229 10:20:02.065142 22259 pod_workers.go:184] Error syncing pod 235cd9c6-cd6c-11e6-a9cd-78acc0f9504e, skipping: failed to "SetupNetwork" for "test-701078429-zmkdf_default" with SetupNetworkError: "Failed to setup network for pod \"test-701078429-zmkdf_default(235cd9c6-cd6c-11e6-a9cd-78acc0f9504e)\" using network plugins \"cni\": \"cni0\" already has an IP address different from 10.244.1.1/24; Skipping pod"

Why can't networking be set up for the new pods?

By the way, I am using flannel as the network provider, and it was working fine before.

    [root@ydtf-master-131 k8s151]# kubectl get pods --all-namespaces -o wide
    NAMESPACE     NAME                                      READY   STATUS              RESTARTS   AGE   IP               NODE
    default       test-701078429-tl3j2                      0/1     ContainerCreating   0          2h    <none>           ydtf-node-137
    kube-system   dummy-2088944543-hd7b7                    1/1     Running             0          2h    10.111.125.131   ydtf-master-131
    kube-system   kube-apiserver-ydtf-master-131            1/1     Running             7          2h    10.111.125.131   ydtf-master-131
    kube-system   kube-controller-manager-ydtf-master-131   1/1     Running             0          2h    10.111.125.131   ydtf-master-131
    kube-system   kube-discovery-1769846148-bjgp8           1/1     Running             0          2h    10.111.125.131   ydtf-master-131
    kube-system   kube-dns-2924299975-q8x2m                 4/4     Running             0          2h    10.244.0.3       ydtf-master-131
    kube-system   kube-flannel-ds-3fsjh                     2/2     Running             0          2h    10.111.125.137   ydtf-node-137
    kube-system   kube-flannel-ds-89r72                     2/2     Running             0          2h    10.111.125.131   ydtf-master-131
    kube-system   kube-proxy-7w8c4                          1/1     Running             0          2h    10.111.125.137   ydtf-node-137
    kube-system   kube-proxy-jk6z6                          1/1     Running             0          2h    10.111.125.131   ydtf-master-131
    kube-system   kube-scheduler-ydtf-master-131            1/1     Running             0          2h    10.111.125.131   ydtf-master-131

I figured it out: if you change --pod-network-cidr when re-initializing Kubernetes via kubeadm init, you have to delete some things that were created automatically. Just follow the steps below before executing kubeadm init again:

  1. Execute kubeadm reset on the master node.

  2. Execute etcdctl rm --recursive registry to reset the data in etcd.

  3. rm -rf /var/lib/cni on the master node.
  4. rm -rf /run/flannel on the master node.
  5. rm -rf /etc/cni on the master node.
  6. ifconfig cni0 down on the master and the nodes.
  7. brctl delbr cni0 on the master and the nodes.
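The steps above can be sketched as one script. This is my own sketch, not part of the original answer: it assumes flannel as the CNI provider, `registry` as the etcd key prefix used in step 2, and `cni0` as the leftover bridge name, and it only prints each command unless `APPLY=1` is set, so it can be reviewed before doing anything destructive.

```shell
#!/bin/sh
# Sketch of the cleanup steps above (assumptions: flannel as CNI provider,
# "registry" as the etcd prefix, cni0 as the stale bridge).
# Prints each command instead of executing it unless APPLY=1 is set.
run() {
  if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

cleanup() {
  run kubeadm reset                               # 1. on the master node
  run etcdctl rm --recursive registry             # 2. wipe old cluster state in etcd
  run rm -rf /var/lib/cni /run/flannel /etc/cni   # 3-5. stale CNI/flannel state
  run ifconfig cni0 down                          # 6. on master and nodes
  run brctl delbr cni0                            # 7. on master and nodes
}

cleanup
```

Run it once without `APPLY` to see the command list, then `APPLY=1 sh cleanup.sh` on the master (and the bridge steps on each node).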

Now, my Kubernetes works fine :)
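The second error in the question (`"cni0" already has an IP address different from 10.244.1.1/24`) can be confirmed before cleaning up: flannel records the subnet it allocated for the host in /run/flannel/subnet.env, and a stale cni0 bridge keeps an address from the old CIDR. A minimal sketch for reading out what flannel expects (the helper function name is my own; the file path is flannel's default):

```shell
#!/bin/sh
# Print the FLANNEL_SUBNET flannel allocated on this host (my own helper).
# A cni0 bridge whose address lies outside this subnet is stale and triggers
# the "already has an IP address different from ..." error in the question.
flannel_subnet() {
  sed -n 's/^FLANNEL_SUBNET=//p' "${1:-/run/flannel/subnet.env}"
}

# Compare by hand on an affected node:
#   ip addr show cni0    # current bridge address, possibly from the old CIDR
#   flannel_subnet       # what flannel expects, e.g. 10.244.1.0/24
```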

I had a similar problem, and in that case the fix was to apply the flannel pod network to the cluster:

    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    kubectl apply -f kube-flannel.yml

I changed --pod-network-cidr. The join reported success, but no node was added. kubeadm reset and re-joining had no effect. Resolved via apt-get remove kubelet kubeadm kubectl kubernetes-cni after the reset, followed by a docker and/or machine restart, then a reinstall, and then joining again.
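That recovery sequence, written out as a sketch (package names are the standard apt packages; the join token and the master address are placeholders, with the address taken from the question; it only prints the commands unless `APPLY=1` is set):

```shell
#!/bin/sh
# Sketch of the reinstall-and-rejoin sequence described above.
# Prints each command instead of executing it unless APPLY=1 is set.
run() {
  if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

rejoin() {
  TOKEN="<your-token>"       # placeholder: the token printed by kubeadm init
  MASTER="10.111.125.131"    # master address from the question
  run kubeadm reset
  run apt-get remove -y kubelet kubeadm kubectl kubernetes-cni
  run systemctl restart docker   # or reboot the machine
  run apt-get install -y kubelet kubeadm kubectl kubernetes-cni
  run kubeadm join --token "$TOKEN" "$MASTER"
}

rejoin
```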