level=fatal msg="ipset failed: ipset v6.29: Set cannot be destroyed: it is in use by a kernel component\n: exit status 1"

  • Basic information:

    1. System:

      # cat /proc/version
      Linux version 3.10.0-514.2.2.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 SMP Tue Dec 6 23:06:41 UTC 2016
    2. Kubeadm version:

       # kubeadm version
       kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
    3. Kubectl version:

       # kubectl version
       Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
       Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:52:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
    4. Docker version:

       # docker version
       Client:
        Version:      1.12.5
        API version:  1.24
        Go version:   go1.6.4
        Git commit:   7392c3b
        Built:        Fri Dec 16 02:23:59 2016
        OS/Arch:      linux/amd64

       Server:
        Version:      1.12.5
        API version:  1.24
        Go version:   go1.6.4
        Git commit:   7392c3b
        Built:        Fri Dec 16 02:23:59 2016
        OS/Arch:      linux/amd64
    5. Weave images:

       REPOSITORY              TAG     IMAGE ID       CREATED       SIZE
       weaveworks/weave-npc    1.8.2   c91ef3f4642b   4 weeks ago   68.77 MB
       weaveworks/weave-kube   1.8.2   a4740ae55aae   4 weeks ago   166.7 MB
  • Problem

    I am deploying k8s with kubeadm. The strange thing is that the first time, on a fresh VM, weave worked fine together with kube-dns, but after running kubeadm reset and initializing again, weave no longer works.

    • kubectl get pods

       [root@192-168-1-177 pod_network]# kubectl get pods -o wide --all-namespaces
       NAMESPACE     NAME                                           READY   STATUS             RESTARTS   AGE   IP              NODE
       kube-system   dummy-2088944543-tdxck                         1/1     Running            0          59m   192.168.1.177   192-168-1-177.master
       kube-system   etcd-192-168-1-177.master                      1/1     Running            0          59m   192.168.1.177   192-168-1-177.master
       kube-system   kube-apiserver-192-168-1-177.master            1/1     Running            0          59m   192.168.1.177   192-168-1-177.master
       kube-system   kube-controller-manager-192-168-1-177.master   1/1     Running            0          59m   192.168.1.177   192-168-1-177.master
       kube-system   kube-discovery-1769846148-87pgm                1/1     Running            0          59m   192.168.1.177   192-168-1-177.master
       kube-system   kube-dns-2924299975-82sb6                      4/4     Running            0          59m   10.32.0.2       192-168-1-177.master
       kube-system   kube-proxy-8xprh                               1/1     Running            0          59m   192.168.1.177   192-168-1-177.master
       kube-system   kube-scheduler-192-168-1-177.master            1/1     Running            0          59m   192.168.1.177   192-168-1-177.master
       kube-system   weave-net-ssqtd                                1/2     CrashLoopBackOff   16         58m   192.168.1.177   192-168-1-177.master
    • kubectl logs

        # kubectl logs $(kubectl get pods --all-namespaces | grep weave-net | awk '{print $2}') -n kube-system weave-npc
        time="2017-01-09T11:11:17Z" level=info msg="Starting Weaveworks NPC 1.8.2"
        time="2017-01-09T11:11:17Z" level=info msg="Serving /metrics on :6781"
        Mon Jan 9 11:11:17 2017 <5> ulogd.c:843 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
        time="2017-01-09T11:11:17Z" level=fatal msg="ipset [destroy] failed: ipset v6.29: Set cannot be destroyed: it is in use by a kernel component\n: exit status 1"
  • Basic operations

    • kubeadm init

       kubeadm init --api-advertise-addresses 192.168.1.177 --use-kubernetes-version v1.5.1 
    • Apply weave

       kubectl apply -f https://git.io/weave-kube 
    • kubeadm reset

       kubeadm reset
       docker rm `docker ps -a -q`
       find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v
       rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd
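For convenience, the reset steps above can be collected into one script. This is only a sketch: the commands and paths are exactly the ones listed above, while the `run` wrapper and its dry-run default are additions for previewing the steps safely before wiping a node.

```shell
#!/bin/sh
# Teardown sketch for the kubeadm reset steps above.
# Defaults to a dry run that only prints each step; set DRY_RUN=0 to execute.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else sh -c "$*"; fi
}

run 'kubeadm reset'
# Remove all containers left over from the previous cluster
run 'docker rm $(docker ps -a -q)'
# Unmount the tmpfs volumes the kubelet mounted under /var/lib/kubelet
run 'find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v'
# Remove the remaining cluster state
run 'rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd'
```

Review the printed plan first, then rerun with `DRY_RUN=0` as root on the node being reset.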

I solved the problem myself simply by rebooting the VM.

Before the reboot:

  • ipset list

     # ipset list
     Name: weave-k?Z;25^M}|1s7P3|H9i;*;MhG
     Type: hash:ip
     Revision: 1
     Header: family inet hashsize 1024 maxelem 65536
     Size in memory: 16528
     References: 0
     Members:

     Name: weave-iuZcey(5DeXbzgRFs8Szo]<@p
     Type: hash:ip
     Revision: 1
     Header: family inet hashsize 1024 maxelem 65536
     Size in memory: 16528
     References: 0
     Members:

     Name: weave-#Of<X6ofOD9U?jkdAmmuY.VL(
     Type: hash:ip
     Revision: 1
     Header: family inet hashsize 1024 maxelem 65536
     Size in memory: 16528
     References: 0
     Members:

     Name: felix-calico-hosts-4
     Type: hash:ip
     Revision: 1
     Header: family inet hashsize 1024 maxelem 1048576
     Size in memory: 16528
     References: 1
     Members:

     Name: felix-all-ipam-pools
     Type: hash:net
     Revision: 3
     Header: family inet hashsize 1024 maxelem 1048576
     Size in memory: 16784
     References: 1
     Members:

     Name: felix-masq-ipam-pools
     Type: hash:net
     Revision: 3
     Header: family inet hashsize 1024 maxelem 1048576
     Size in memory: 16784
     References: 1
     Members:
  • ipset destroy

     # ipset destroy
     ipset v6.19: Set cannot be destroyed: it is in use by a kernel component
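For reference, a reboot is not the only way out. "in use by a kernel component" usually means an iptables rule still references the set via `--match-set`, so removing those rules first normally lets `ipset destroy` succeed. The sketch below encodes that idea; it defaults to printing the steps rather than running them, and the assumption that every offending rule and set has "weave" in its name should be checked first with `iptables-save | grep -i weave`.

```shell
#!/bin/sh
# No-reboot cleanup sketch. Assumption: the kernel references come from
# iptables rules matching on the weave-* ipsets.
# Prints the plan by default; set APPLY=1 to actually run the commands.
plan() {
    if [ "${APPLY:-0}" = "1" ]; then sh -c "$*"; else echo "would run: $*"; fi
}

# 1. Drop every iptables rule/chain that mentions weave, releasing the set references.
plan 'iptables-save | grep -iv weave | iptables-restore'
# 2. Destroy the now-unreferenced weave sets.
plan 'for s in $(ipset -n list | grep "^weave-"); do ipset destroy "$s"; done'
```

This tears down weave's network policy rules, so it is only appropriate as part of a full reset like the one above, not on a node where weave should keep running.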

After the reboot:

  • ipset list

     # ipset list
     Name: weave-iuZcey(5DeXbzgRFs8Szo]<@p
     Type: hash:ip
     Revision: 1
     Header: family inet hashsize 1024 maxelem 65536
     Size in memory: 16544
     References: 1
     Members:
     10.32.0.2

     Name: weave-k?Z;25^M}|1s7P3|H9i;*;MhG
     Type: hash:ip
     Revision: 1
     Header: family inet hashsize 1024 maxelem 65536
     Size in memory: 16528
     References: 1
     Members:

Everything works fine now.

  • kubectl get pods

     # kubectl get pods -o wide --all-namespaces
     NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE   IP              NODE
     default       busybox                                        1/1     Running   0          6s    10.44.0.1       192-168-1-178.node
     kube-system   dummy-2088944543-05kj3                         1/1     Running   0          19m   192.168.1.177   192-168-1-177.master
     kube-system   etcd-192-168-1-177.master                      1/1     Running   0          18m   192.168.1.177   192-168-1-177.master
     kube-system   kube-apiserver-192-168-1-177.master            1/1     Running   0          18m   192.168.1.177   192-168-1-177.master
     kube-system   kube-controller-manager-192-168-1-177.master   1/1     Running   0          17m   192.168.1.177   192-168-1-177.master
     kube-system   kube-discovery-1769846148-3t242                1/1     Running   0          19m   192.168.1.177   192-168-1-177.master
     kube-system   kube-dns-2924299975-6bv1x                      4/4     Running   0          19m   10.32.0.2       192-168-1-177.master
     kube-system   kube-proxy-4jqzb                               1/1     Running   0          19m   192.168.1.177   192-168-1-177.master
     kube-system   kube-proxy-kxkxm                               1/1     Running   0          10m   192.168.1.178   192-168-1-178.node
     kube-system   kube-scheduler-192-168-1-177.master            1/1     Running   0          18m   192.168.1.177   192-168-1-177.master
     kube-system   weave-net-jgwwt                                2/2     Running   0          10m   192.168.1.178   192-168-1-178.node
     kube-system   weave-net-s4w7w                                2/2     Running   0          17m   192.168.1.177   192-168-1-177.master