Kubernetes flannel networking not working as expected

I am running into a very strange Kubernetes networking problem on a kubeadm install with flannel. Could you please help?

I have 3 nodes, 1 master and 2 minion nodes, with 4 pods running.

Listing all nodes (a # column is added to simplify the explanation):

 [root@snap460c04 ~]# kubectl get nodes
 #   NAME         STATUS         AGE
 1   snap460c03   Ready          11h
 2   snap460c04   Ready,master   11h
 3   snap460c06   Ready          11h

Listing all pods (a # column is added to simplify the explanation):

 [root@snap460c04 ~]# kubectl get pods -o wide -n eium1
 #   NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE         Node#
 1   demo-1229769353-7gf70   1/1     Running   0          10h   192.168.2.4   snap460c03   1
 2   demo-1229769353-93xwm   1/1     Running   0          10h   192.168.1.4   snap460c06   3
 3   demo-1229769353-kxzs9   1/1     Running   0          10h   192.168.1.5   snap460c06   3
 4   demo-1229769353-ljvtg   1/1     Running   0          10h   192.168.2.3   snap460c03   1

I ran two tests: one node -> pod, the other pod -> pod.

In the node -> pod test, the results were:

Test 1: node => pod

From node #1 (c03) => why can it only ping pods on the local node?

 Ping POD #1: OK  (ping 192.168.2.4)
 Ping POD #2: NOK (ping 192.168.1.4)
 Ping POD #3: NOK (ping 192.168.1.5)
 Ping POD #4: OK  (ping 192.168.2.3)

From node #2 (c04) => all pods are remote; why can't it ping the pods on node #3?

 Ping POD #1: OK  (ping 192.168.2.4)
 Ping POD #2: NOK (ping 192.168.1.4)
 Ping POD #3: NOK (ping 192.168.1.5)
 Ping POD #4: OK  (ping 192.168.2.3)

From node #3 (c06) => this is the expected result

 Ping POD #1: OK (ping 192.168.2.4)
 Ping POD #2: OK (ping 192.168.1.4)
 Ping POD #3: OK (ping 192.168.1.5)
 Ping POD #4: OK (ping 192.168.2.3)

Test 2: pod => pod => why can a pod only ping pods on its local node?

From POD #1 on node #1

 Ping POD #1: OK  (kubectl -n eium1 exec demo-1229769353-7gf70 ping 192.168.2.4)
 Ping POD #2: NOK (kubectl -n eium1 exec demo-1229769353-7gf70 ping 192.168.1.4)
 Ping POD #3: NOK (kubectl -n eium1 exec demo-1229769353-7gf70 ping 192.168.1.5)
 Ping POD #4: OK  (kubectl -n eium1 exec demo-1229769353-7gf70 ping 192.168.2.3)

From POD #2 on node #3

 Ping POD #1: NOK (kubectl -n eium1 exec demo-1229769353-93xwm ping 192.168.2.4)
 Ping POD #2: OK  (kubectl -n eium1 exec demo-1229769353-93xwm ping 192.168.1.4)
 Ping POD #3: OK  (kubectl -n eium1 exec demo-1229769353-93xwm ping 192.168.1.5)
 Ping POD #4: NOK (kubectl -n eium1 exec demo-1229769353-93xwm ping 192.168.2.3)

From POD #3 on node #3

 Ping POD #1: NOK (kubectl -n eium1 exec demo-1229769353-kxzs9 ping 192.168.2.4)
 Ping POD #2: OK  (kubectl -n eium1 exec demo-1229769353-kxzs9 ping 192.168.1.4)
 Ping POD #3: OK  (kubectl -n eium1 exec demo-1229769353-kxzs9 ping 192.168.1.5)
 Ping POD #4: NOK (kubectl -n eium1 exec demo-1229769353-kxzs9 ping 192.168.2.3)

From POD #4 on node #1

 Ping POD #1: OK  (kubectl -n eium1 exec demo-1229769353-ljvtg ping 192.168.2.4)
 Ping POD #2: NOK (kubectl -n eium1 exec demo-1229769353-ljvtg ping 192.168.1.4)
 Ping POD #3: NOK (kubectl -n eium1 exec demo-1229769353-ljvtg ping 192.168.1.5)
 Ping POD #4: OK  (kubectl -n eium1 exec demo-1229769353-ljvtg ping 192.168.2.3)
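Instead of running each ping by hand, the pod -> pod matrix can be scripted. This is a sketch using POD #1 and the pod IPs from the tables above; `KUBECTL` is a hypothetical override hook so the loop can be dry-run without a cluster:

```shell
# Sketch: replay the "POD #1 -> all pods" ping matrix in one loop.
# KUBECTL can be overridden for a dry run; it defaults to kubectl.
KUBECTL=${KUBECTL:-kubectl}
result=""
for ip in 192.168.2.4 192.168.1.4 192.168.1.5 192.168.2.3; do
  if $KUBECTL -n eium1 exec demo-1229769353-7gf70 -- ping -c1 -W2 "$ip" >/dev/null 2>&1; then
    result="$result $ip:OK"
  else
    result="$result $ip:NOK"
  fi
done
echo "$result"
```

The same loop can be repeated from each pod by changing the pod name, which makes it easy to spot the local-node-only pattern.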

Environment information

K8s version

 [root@snap460c04 ~]# kubectl version
 Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
 Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.0", GitCommit:"58b7c16a52c03e4a849874602be42ee71afdcab1", GitTreeState:"clean", BuildDate:"2016-12-12T23:31:15Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Flannel pods

 [root@snap460c04 ~]# kubectl get pods -o wide
 NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE
 kube-flannel-ds-03w6l   2/2     Running   0          11h   15.114.116.126   snap460c04
 kube-flannel-ds-fdgdh   2/2     Running   0          11h   15.114.116.125   snap460c03
 kube-flannel-ds-xnzx3   2/2     Running   0          11h   15.114.116.128   snap460c06

System pods

 [root@snap460c04 ~]# kubectl get pods -o wide -n kube-system
 NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE
 dummy-2088944543-kcj44                  1/1     Running   0          11h   15.114.116.126   snap460c04
 etcd-snap460c04                         1/1     Running   19         11h   15.114.116.126   snap460c04
 kube-apiserver-snap460c04               1/1     Running   0          11h   15.114.116.126   snap460c04
 kube-controller-manager-snap460c04      1/1     Running   0          11h   15.114.116.126   snap460c04
 kube-discovery-1769846148-5x4gr         1/1     Running   0          11h   15.114.116.126   snap460c04
 kube-dns-2924299975-9tdl9               4/4     Running   0          11h   192.168.0.2      snap460c04
 kube-proxy-7wtr4                        1/1     Running   0          11h   15.114.116.128   snap460c06
 kube-proxy-j0h4g                        1/1     Running   0          11h   15.114.116.126   snap460c04
 kube-proxy-knbrl                        1/1     Running   0          11h   15.114.116.125   snap460c03
 kube-scheduler-snap460c04               1/1     Running   0          11h   15.114.116.126   snap460c04
 kubernetes-dashboard-3203831700-1nw59   1/1     Running   0          10h   192.168.0.4      snap460c04

Flannel was installed following the guide:

 kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml 

Networking information for node #1 (c03)

 [root@snap460c03 ~]# iptables-save # Generated by iptables-save v1.4.21 on Wed Feb 22 18:01:12 2017 *nat :PREROUTING ACCEPT [1:78] :INPUT ACCEPT [1:78] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] :DOCKER - [0:0] :KUBE-MARK-DROP - [0:0] :KUBE-MARK-MASQ - [0:0] :KUBE-NODEPORTS - [0:0] :KUBE-POSTROUTING - [0:0] :KUBE-SEP-2MEKZI7PJUEHR67T - [0:0] :KUBE-SEP-3OT3I6HGM4K7SHGI - [0:0] :KUBE-SEP-6TVSO4B75FMUOZPV - [0:0] :KUBE-SEP-6YIOEPRBG6LZYDNQ - [0:0] :KUBE-SEP-A6J4YW3AMR2ZVZMA - [0:0] :KUBE-SEP-DBP5C3QJN36XNYPX - [0:0] :KUBE-SEP-ES7Q53Y6P2YLIO4O - [0:0] :KUBE-SEP-FWJIKOY3NRVP7HUX - [0:0] :KUBE-SEP-JTN4UBVS7OG5RONX - [0:0] :KUBE-SEP-PNOYUP2SIIHRG34N - [0:0] :KUBE-SEP-UPZX2EM3TRFH2ASL - [0:0] :KUBE-SEP-X7MGMJMV5H5T4NJN - [0:0] :KUBE-SEP-ZZLC6ELJT43VDXYQ - [0:0] :KUBE-SERVICES - [0:0] :KUBE-SVC-5J5TVDDOSFKU7A7D - [0:0] :KUBE-SVC-5RKFNKIUXDFI3AVK - [0:0] :KUBE-SVC-EP4VGANCYXDST444 - [0:0] :KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0] :KUBE-SVC-KOBH2JYY2L2SF2XK - [0:0] :KUBE-SVC-NGBEVGRJNPASKNGR - [0:0] :KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0] :KUBE-SVC-OL65KRZ5QEUS2RPN - [0:0] :KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0] :KUBE-SVC-XGLOHA7QRQ3V22RZ - [0:0] -A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE -A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING -A POSTROUTING -s 172.17.0.87/32 -d 172.17.0.87/32 -p tcp -m tcp --dport 8158 -j MASQUERADE -A POSTROUTING -s 172.17.0.87/32 -d 172.17.0.87/32 -p tcp -m tcp --dport 8159 -j MASQUERADE -A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN -A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE -A POSTROUTING ! 
-s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE -A DOCKER -i docker0 -j RETURN -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000 -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000 -A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:db" -m tcp --dport 30156 -j KUBE-MARK-MASQ -A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:db" -m tcp --dport 30156 -j KUBE-SVC-5RKFNKIUXDFI3AVK -A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 32180 -j KUBE-MARK-MASQ -A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 32180 -j KUBE-SVC-XGLOHA7QRQ3V22RZ -A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:repo" -m tcp --dport 30157 -j KUBE-MARK-MASQ -A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:repo" -m tcp --dport 30157 -j KUBE-SVC-OL65KRZ5QEUS2RPN -A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:ior" -m tcp --dport 30158 -j KUBE-MARK-MASQ -A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:ior" -m tcp --dport 30158 -j KUBE-SVC-KOBH2JYY2L2SF2XK -A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:web" -m tcp --dport 30159 -j KUBE-MARK-MASQ -A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:web" -m tcp --dport 30159 -j KUBE-SVC-NGBEVGRJNPASKNGR -A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:vnc" -m tcp --dport 30160 -j KUBE-MARK-MASQ -A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:vnc" -m tcp --dport 30160 -j KUBE-SVC-5J5TVDDOSFKU7A7D -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE -A KUBE-SEP-2MEKZI7PJUEHR67T -s 192.168.0.4/32 -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-MARK-MASQ -A KUBE-SEP-2MEKZI7PJUEHR67T -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp -j DNAT --to-destination 192.168.0.4:9090 -A KUBE-SEP-3OT3I6HGM4K7SHGI -s 192.168.1.5/32 -m comment --comment "eium1/demo:ro" -j KUBE-MARK-MASQ -A 
KUBE-SEP-3OT3I6HGM4K7SHGI -p tcp -m comment --comment "eium1/demo:ro" -m tcp -j DNAT --to-destination 192.168.1.5:9901 -A KUBE-SEP-6TVSO4B75FMUOZPV -s 15.114.116.128/32 -m comment --comment "eium1/ems:db" -j KUBE-MARK-MASQ -A KUBE-SEP-6TVSO4B75FMUOZPV -p tcp -m comment --comment "eium1/ems:db" -m tcp -j DNAT --to-destination 15.114.116.128:3306 -A KUBE-SEP-6YIOEPRBG6LZYDNQ -s 15.114.116.128/32 -m comment --comment "eium1/ems:vnc" -j KUBE-MARK-MASQ -A KUBE-SEP-6YIOEPRBG6LZYDNQ -p tcp -m comment --comment "eium1/ems:vnc" -m tcp -j DNAT --to-destination 15.114.116.128:5911 -A KUBE-SEP-A6J4YW3AMR2ZVZMA -s 192.168.1.4/32 -m comment --comment "eium1/demo:ro" -j KUBE-MARK-MASQ -A KUBE-SEP-A6J4YW3AMR2ZVZMA -p tcp -m comment --comment "eium1/demo:ro" -m tcp -j DNAT --to-destination 192.168.1.4:9901 -A KUBE-SEP-DBP5C3QJN36XNYPX -s 15.114.116.128/32 -m comment --comment "eium1/ems:ior" -j KUBE-MARK-MASQ -A KUBE-SEP-DBP5C3QJN36XNYPX -p tcp -m comment --comment "eium1/ems:ior" -m tcp -j DNAT --to-destination 15.114.116.128:8158 -A KUBE-SEP-ES7Q53Y6P2YLIO4O -s 15.114.116.128/32 -m comment --comment "eium1/ems:web" -j KUBE-MARK-MASQ -A KUBE-SEP-ES7Q53Y6P2YLIO4O -p tcp -m comment --comment "eium1/ems:web" -m tcp -j DNAT --to-destination 15.114.116.128:8159 -A KUBE-SEP-FWJIKOY3NRVP7HUX -s 15.114.116.126/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ -A KUBE-SEP-FWJIKOY3NRVP7HUX -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-FWJIKOY3NRVP7HUX --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 15.114.116.126:6443 -A KUBE-SEP-JTN4UBVS7OG5RONX -s 192.168.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ -A KUBE-SEP-JTN4UBVS7OG5RONX -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 192.168.0.2:53 -A KUBE-SEP-PNOYUP2SIIHRG34N -s 192.168.2.4/32 -m comment --comment "eium1/demo:ro" -j KUBE-MARK-MASQ -A KUBE-SEP-PNOYUP2SIIHRG34N -p tcp -m comment --comment 
"eium1/demo:ro" -m tcp -j DNAT --to-destination 192.168.2.4:9901 -A KUBE-SEP-UPZX2EM3TRFH2ASL -s 192.168.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ -A KUBE-SEP-UPZX2EM3TRFH2ASL -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 192.168.0.2:53 -A KUBE-SEP-X7MGMJMV5H5T4NJN -s 192.168.2.3/32 -m comment --comment "eium1/demo:ro" -j KUBE-MARK-MASQ -A KUBE-SEP-X7MGMJMV5H5T4NJN -p tcp -m comment --comment "eium1/demo:ro" -m tcp -j DNAT --to-destination 192.168.2.3:9901 -A KUBE-SEP-ZZLC6ELJT43VDXYQ -s 15.114.116.128/32 -m comment --comment "eium1/ems:repo" -j KUBE-MARK-MASQ -A KUBE-SEP-ZZLC6ELJT43VDXYQ -p tcp -m comment --comment "eium1/ems:repo" -m tcp -j DNAT --to-destination 15.114.116.128:8300 -A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU -A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y -A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:db cluster IP" -m tcp --dport 3306 -j KUBE-SVC-5RKFNKIUXDFI3AVK -A KUBE-SERVICES -d 10.102.162.2/32 -p tcp -m comment --comment "eium1/demo:ro cluster IP" -m tcp --dport 9901 -j KUBE-SVC-EP4VGANCYXDST444 -A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4 -A KUBE-SERVICES -d 10.108.36.183/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 80 -j KUBE-SVC-XGLOHA7QRQ3V22RZ -A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:repo cluster IP" -m tcp --dport 8300 -j KUBE-SVC-OL65KRZ5QEUS2RPN -A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:ior cluster IP" -m tcp --dport 8158 -j KUBE-SVC-KOBH2JYY2L2SF2XK -A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment 
--comment "eium1/ems:web cluster IP" -m tcp --dport 8159 -j KUBE-SVC-NGBEVGRJNPASKNGR -A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:vnc cluster IP" -m tcp --dport 5911 -j KUBE-SVC-5J5TVDDOSFKU7A7D -A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS -A KUBE-SVC-5J5TVDDOSFKU7A7D -m comment --comment "eium1/ems:vnc" -j KUBE-SEP-6YIOEPRBG6LZYDNQ -A KUBE-SVC-5RKFNKIUXDFI3AVK -m comment --comment "eium1/ems:db" -j KUBE-SEP-6TVSO4B75FMUOZPV -A KUBE-SVC-EP4VGANCYXDST444 -m comment --comment "eium1/demo:ro" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-A6J4YW3AMR2ZVZMA -A KUBE-SVC-EP4VGANCYXDST444 -m comment --comment "eium1/demo:ro" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-3OT3I6HGM4K7SHGI -A KUBE-SVC-EP4VGANCYXDST444 -m comment --comment "eium1/demo:ro" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-X7MGMJMV5H5T4NJN -A KUBE-SVC-EP4VGANCYXDST444 -m comment --comment "eium1/demo:ro" -j KUBE-SEP-PNOYUP2SIIHRG34N -A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-UPZX2EM3TRFH2ASL -A KUBE-SVC-KOBH2JYY2L2SF2XK -m comment --comment "eium1/ems:ior" -j KUBE-SEP-DBP5C3QJN36XNYPX -A KUBE-SVC-NGBEVGRJNPASKNGR -m comment --comment "eium1/ems:web" -j KUBE-SEP-ES7Q53Y6P2YLIO4O -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-FWJIKOY3NRVP7HUX --mask 255.255.255.255 --rsource -j KUBE-SEP-FWJIKOY3NRVP7HUX -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-FWJIKOY3NRVP7HUX -A KUBE-SVC-OL65KRZ5QEUS2RPN -m comment --comment "eium1/ems:repo" -j KUBE-SEP-ZZLC6ELJT43VDXYQ -A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-JTN4UBVS7OG5RONX -A KUBE-SVC-XGLOHA7QRQ3V22RZ -m comment --comment 
"kube-system/kubernetes-dashboard:" -j KUBE-SEP-2MEKZI7PJUEHR67T COMMIT # Completed on Wed Feb 22 18:01:12 2017 # Generated by iptables-save v1.4.21 on Wed Feb 22 18:01:12 2017 *filter :INPUT ACCEPT [147:180978] :FORWARD ACCEPT [16:1344] :OUTPUT ACCEPT [20:11774] :DOCKER - [0:0] :DOCKER-ISOLATION - [0:0] :KUBE-FIREWALL - [0:0] :KUBE-SERVICES - [0:0] -A INPUT -j KUBE-FIREWALL -A FORWARD -j DOCKER-ISOLATION -A FORWARD -o docker0 -j DOCKER -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -i docker0 ! -o docker0 -j ACCEPT -A FORWARD -i docker0 -o docker0 -j ACCEPT -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT -j KUBE-FIREWALL -A DOCKER-ISOLATION -j RETURN -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP COMMIT # Completed on Wed Feb 22 18:01:12 2017 [root@snap460c03 ~]# ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 link/ether 98:4b:e1:62:14:00 brd ff:ff:ff:ff:ff:ff inet 15.114.116.125/22 brd 15.114.119.255 scope global enp2s0f0 valid_lft forever preferred_lft forever inet6 2002:109d:45fd:b:9a4b:e1ff:fe62:1400/64 scope global dynamic valid_lft 6703sec preferred_lft 1303sec inet6 fec0::b:9a4b:e1ff:fe62:1400/64 scope site dynamic valid_lft 6703sec preferred_lft 1303sec inet6 fe80::9a4b:e1ff:fe62:1400/64 scope link valid_lft forever preferred_lft forever 3: enp2s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000 link/ether 98:4b:e1:62:14:04 brd ff:ff:ff:ff:ff:ff 4: enp2s0f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000 link/ether 98:4b:e1:62:14:01 brd 
ff:ff:ff:ff:ff:ff 5: enp2s0f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000 link/ether 98:4b:e1:62:14:05 brd ff:ff:ff:ff:ff:ff 6: enp2s0f4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000 link/ether 98:4b:e1:62:14:02 brd ff:ff:ff:ff:ff:ff 7: enp2s0f5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000 link/ether 98:4b:e1:62:14:06 brd ff:ff:ff:ff:ff:ff 8: enp2s0f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000 link/ether 98:4b:e1:62:14:03 brd ff:ff:ff:ff:ff:ff 9: enp2s0f7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000 link/ether 98:4b:e1:62:14:07 brd ff:ff:ff:ff:ff:ff 10: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff inet 172.17.42.1/16 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::5484:7aff:fefe:9799/64 scope link valid_lft forever preferred_lft forever 1822: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP link/ether 0a:58:c0:a8:02:01 brd ff:ff:ff:ff:ff:ff inet 192.168.2.1/24 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::858:c0ff:fea8:201/64 scope link valid_lft forever preferred_lft forever 1824: veth6c162dff: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP link/ether 36:2b:f9:cf:1d:aa brd ff:ff:ff:ff:ff:ff inet6 fe80::342b:f9ff:fecf:1daa/64 scope link valid_lft forever preferred_lft forever 1825: veth34ca824a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP link/ether ae:79:85:50:0b:da brd ff:ff:ff:ff:ff:ff inet6 fe80::ac79:85ff:fe50:bda/64 scope link valid_lft forever preferred_lft forever 916: vethab43ed7: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP link/ether da:5a:e9:f2:6b:0a brd ff:ff:ff:ff:ff:ff inet6 fe80::d85a:e9ff:fef2:6b0a/64 scope link valid_lft forever preferred_lft forever 
918: veth1bbb133: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP link/ether 56:3c:47:e1:5a:c0 brd ff:ff:ff:ff:ff:ff inet6 fe80::543c:47ff:fee1:5ac0/64 scope link valid_lft forever preferred_lft forever 921: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN link/ether 8e:8a:81:67:a6:92 brd ff:ff:ff:ff:ff:ff inet 192.168.2.0/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::8c8a:81ff:fe67:a692/64 scope link valid_lft forever preferred_lft forever [root@snap460c03 ~]# [root@snap460c03 ~]# ip -s link 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 RX: bytes packets errors dropped overrun mcast 1652461277938 1256303971 0 0 0 0 TX: bytes packets errors dropped carrier collsns 1652461277938 1256303971 0 0 0 0 2: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000 link/ether 98:4b:e1:62:14:00 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 273943938058 464934981 0 4475811 0 4994783 TX: bytes packets errors dropped carrier collsns 112001303439 313492490 0 0 0 0 3: enp2s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000 link/ether 98:4b:e1:62:14:04 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 0 0 0 0 0 0 TX: bytes packets errors dropped carrier collsns 0 0 0 0 0 0 4: enp2s0f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000 link/ether 98:4b:e1:62:14:01 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 0 0 0 0 0 0 TX: bytes packets errors dropped carrier collsns 0 0 0 0 0 0 5: enp2s0f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000 link/ether 98:4b:e1:62:14:05 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 0 0 0 0 0 0 TX: bytes packets errors dropped carrier 
collsns 0 0 0 0 0 0 6: enp2s0f4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000 link/ether 98:4b:e1:62:14:02 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 0 0 0 0 0 0 TX: bytes packets errors dropped carrier collsns 0 0 0 0 0 0 7: enp2s0f5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000 link/ether 98:4b:e1:62:14:06 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 0 0 0 0 0 0 TX: bytes packets errors dropped carrier collsns 0 0 0 0 0 0 8: enp2s0f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000 link/ether 98:4b:e1:62:14:03 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 0 0 0 0 0 0 TX: bytes packets errors dropped carrier collsns 0 0 0 0 0 0 9: enp2s0f7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000 link/ether 98:4b:e1:62:14:07 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 0 0 0 0 0 0 TX: bytes packets errors dropped carrier collsns 0 0 0 0 0 0 10: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 326660431 4780762 0 0 0 0 TX: bytes packets errors dropped carrier collsns 3574619827 5529921 0 0 0 0 1822: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT link/ether 0a:58:c0:a8:02:01 brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 12473828 150176 0 0 0 0 TX: bytes packets errors dropped carrier collsns 116444 2577 0 0 0 0 1824: veth6c162dff: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT link/ether 36:2b:f9:cf:1d:aa brd ff:ff:ff:ff:ff:ff RX: bytes packets errors dropped overrun mcast 14089148 145722 0 0 0 0 TX: bytes packets errors dropped carrier collsns 7131026 74713 0 0 0 0 1825: 
veth34ca824a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT link/ether ae:79:85:50:0b:da brd ff:ff:ff:ff:ff:ff
    RX: bytes packets errors dropped overrun mcast 14647882 151667 0 0 0 0
    TX: bytes packets errors dropped carrier collsns 7149198 75141 0 0 0 0
916: vethab43ed7: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT link/ether da:5a:e9:f2:6b:0a brd ff:ff:ff:ff:ff:ff
    RX: bytes packets errors dropped overrun mcast 66752218 734347 0 0 0 0
    TX: bytes packets errors dropped carrier collsns 43439443 733394 0 0 0 0
918: veth1bbb133: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT link/ether 56:3c:47:e1:5a:c0 brd ff:ff:ff:ff:ff:ff
    RX: bytes packets errors dropped overrun mcast 66755200 734343 0 0 0 0
    TX: bytes packets errors dropped carrier collsns 43434663 733264 0 0 0 0
921: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT link/ether 8e:8a:81:67:a6:92 brd ff:ff:ff:ff:ff:ff
    RX: bytes packets errors dropped overrun mcast 82703829917 59818766 0 0 0 0
    TX: bytes packets errors dropped carrier collsns 940475554 9717823 0 898 0 0
[root@snap460c03 ~]# ip route
default via 15.114.116.1 dev enp2s0f0 proto static metric 100
15.114.116.0/22 dev enp2s0f0 proto kernel scope link src 15.114.116.125 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
192.168.0.0/16 dev flannel.1
192.168.2.0/24 dev cni0 proto kernel scope link src 192.168.2.1
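One thing stands out in the dump above: the flannel POSTROUTING/MASQUERADE rules match on 10.244.0.0/16, while the pods and the flannel.1/cni0 routes live in 192.168.0.0/16. A quick sketch check that a pod IP is outside the default flannel CIDR (the IP is POD #1 from the table above):

```shell
# The kube-flannel.yml default network is 10.244.0.0/16, but this cluster's
# pod IPs are 192.168.x.x, so the 10.244.0.0/16 masquerade rules never match
# the pod traffic as intended.
pod_ip="192.168.2.4"   # POD #1
case "$pod_ip" in
  10.244.*) echo "inside the default flannel CIDR" ;;
  *)        echo "outside the default flannel CIDR" ;;
esac
```

This mismatch is consistent with cross-node pod traffic being NATed/dropped rather than forwarded over the overlay.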

Which parameters did you provide to kubeadm?

If you want to use flannel as the pod network and you use the daemonset manifest above, you must specify --pod-network-cidr 10.244.0.0/16 when running kubeadm init. Note, however, that no network other than Flannel requires this.
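A minimal bootstrap sketch of what that looks like (assumption: the master is re-initialized from scratch; `kubeadm reset` wipes the node's existing state, so only do this on a cluster you can rebuild):

```shell
# Sketch: re-initialize the master with the pod CIDR flannel's manifest expects.
kubeadm reset
kubeadm init --pod-network-cidr 10.244.0.0/16
# Then install flannel, as in the question:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```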

Run these commands on every node:

 sysctl -w net.ipv4.ip_forward=1
 sysctl -w net.bridge.bridge-nf-call-ip6tables=1
 sysctl -w net.bridge.bridge-nf-call-iptables=1
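The `sysctl -w` settings above do not survive a reboot. To persist them, the same keys can go into a sysctl config file (a sketch; `/etc/sysctl.d/99-kubernetes.conf` is an assumed, conventional path, adjust for your distro):

```shell
# /etc/sysctl.d/99-kubernetes.conf -- applied at boot, or on demand via `sysctl --system`
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```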