Kubernetes pod cannot access its own Service

Overview

A Pod cannot reach its own Service through the cluster IP in a single-node cluster (the connection times out).

  • The OS is Debian 8
  • The cloud is DigitalOcean or AWS (reproduced on both)
  • The Kubernetes version is 1.5.4
  • kube-proxy uses iptables mode
  • Kubernetes was installed manually
  • I am not using overlay networking such as Weave or Flannel

I have worked around this by making the Service headless, but I would like to find the real cause behind it.
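For context, the headless workaround amounts to setting `clusterIP: None` on the Service, so DNS resolves the service name straight to the Pod IPs and kube-proxy's iptables DNAT is bypassed entirely. A minimal sketch of that variant (field values mirror the Service shown below; this is an illustration, not the exact manifest used):

```yaml
# Hypothetical headless variant of anon-svc: with clusterIP set to None,
# no virtual IP is allocated and no KUBE-SVC/KUBE-SEP iptables rules are
# generated for this service; clients connect to Pod IPs directly.
apiVersion: v1
kind: Service
metadata:
  name: anon-svc
  namespace: anon
spec:
  clusterIP: None
  selector:
    name: anon-svc
  ports:
    - name: agent
      port: 8125
      targetPort: agent
```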

It works fine on a GCP Compute Engine node (!?). Following the `--proxy-mode=userspace` suggestion here may also make it work.

More details

The Service

 {
   "apiVersion": "v1",
   "kind": "Service",
   "metadata": {
     "creationTimestamp": "2017-04-13T05:29:18Z",
     "labels": {
       "name": "anon-svc"
     },
     "name": "anon-svc",
     "namespace": "anon",
     "resourceVersion": "280",
     "selfLink": "/api/v1/namespaces/anon/services/anon-svc",
     "uid": "23d178dd-200a-11e7-ba08-42010a8e000a"
   },
   "spec": {
     "clusterIP": "172.23.6.158",
     "ports": [
       {
         "name": "agent",
         "port": 8125,
         "protocol": "TCP",
         "targetPort": "agent"
       }
     ],
     "selector": {
       "name": "anon-svc"
     },
     "sessionAffinity": "None",
     "type": "ClusterIP"
   },
   "status": {
     "loadBalancer": {}
   }
 }
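Note that `"targetPort": "agent"` is a named port reference: it only resolves if the backing Pod declares a container port with `name: agent`. A sketch of the Pod template fragment that would be required (the container name and image here are assumptions, not taken from the cluster):

```yaml
# Hypothetical Pod template fragment: the Service's targetPort "agent"
# resolves to whichever containerPort carries name: agent.
spec:
  containers:
    - name: anon-svc          # assumed container name
      image: example/agent    # placeholder image
      ports:
        - name: agent         # must match the Service's targetPort
          containerPort: 8125
```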

kube-proxy service (systemd)

 [Unit]
 After=kube-apiserver.service
 Requires=kube-apiserver.service

 [Service]
 ExecStart=/opt/kubernetes/bin/hyperkube proxy \
     --master=127.0.0.1:8080 \
     --proxy-mode=iptables \
     --logtostderr=true
 Restart=always
 RestartSec=10

 [Install]
 WantedBy=multi-user.target
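For comparison, the `--proxy-mode=userspace` workaround mentioned above would only change the `ExecStart` line, along these lines:

```ini
[Service]
# userspace mode proxies connections through the kube-proxy process
# itself instead of relying purely on iptables DNAT, which sidesteps
# the hairpin path on the bridge
ExecStart=/opt/kubernetes/bin/hyperkube proxy \
    --master=127.0.0.1:8080 \
    --proxy-mode=userspace \
    --logtostderr=true
```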

Output from the nodes: GCP (where it works) and DO (DigitalOcean, where it does not).

$ iptables-save

GCP:

 # Generated by iptables-save v1.4.21 on Thu Apr 13 05:30:33 2017 *nat :PREROUTING ACCEPT [4:364] :INPUT ACCEPT [1:60] :OUTPUT ACCEPT [7:420] :POSTROUTING ACCEPT [19:1460] :DOCKER - [0:0] :KUBE-MARK-DROP - [0:0] :KUBE-MARK-MASQ - [0:0] :KUBE-NODEPORTS - [0:0] :KUBE-POSTROUTING - [0:0] :KUBE-SEP-2UBKOACGE36HHR6Q - [0:0] :KUBE-SEP-5LOF5ZUWMDRFZ2LI - [0:0] :KUBE-SEP-5T3UFOYBS7JA45MK - [0:0] :KUBE-SEP-YBFG2OLQ4DHWIGIM - [0:0] :KUBE-SEP-ZSS7W6PQOP26CZ6F - [0:0] :KUBE-SERVICES - [0:0] :KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0] :KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0] :KUBE-SVC-R6UZIZCIT2GFGDFT - [0:0] :KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0] :KUBE-SVC-TF3HNH35HFDYKE6V - [0:0] -A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER -A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE -A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE -A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE -A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE -A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE -A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 443 -j MASQUERADE -A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 80 -j MASQUERADE -A DOCKER -i docker0 -j RETURN -A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379 -A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379 -A DOCKER -d 172.17.0.1/32 ! 
-i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379 -A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379 -A DOCKER ! -i docker0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.17.0.3:443 -A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.3:80 -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000 -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000 -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE -A KUBE-SEP-2UBKOACGE36HHR6Q -s 10.142.0.10/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ -A KUBE-SEP-2UBKOACGE36HHR6Q -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-2UBKOACGE36HHR6Q --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.142.0.10:6443 -A KUBE-SEP-5LOF5ZUWMDRFZ2LI -s 172.17.0.4/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ -A KUBE-SEP-5LOF5ZUWMDRFZ2LI -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.4:53 -A KUBE-SEP-5T3UFOYBS7JA45MK -s 172.17.0.4/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ -A KUBE-SEP-5T3UFOYBS7JA45MK -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.4:53 -A KUBE-SEP-YBFG2OLQ4DHWIGIM -s 172.17.0.3/32 -m comment --comment "anon/anon-svc:agent" -j KUBE-MARK-MASQ -A KUBE-SEP-YBFG2OLQ4DHWIGIM -p tcp -m comment --comment "anon/anon-svc:agent" -m tcp -j DNAT --to-destination 172.17.0.3:8125 -A KUBE-SEP-ZSS7W6PQOP26CZ6F -s 172.17.0.1/32 -m comment --comment "anon/etcd:etcd" -j KUBE-MARK-MASQ -A KUBE-SEP-ZSS7W6PQOP26CZ6F -p tcp -m comment --comment "anon/etcd:etcd" -m tcp -j DNAT --to-destination 172.17.0.1:4001 -A KUBE-SERVICES -d 172.20.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j 
KUBE-SVC-ERIFXISQEP7F7OF4 -A KUBE-SERVICES -d 172.23.6.157/32 -p tcp -m comment --comment "anon/etcd:etcd cluster IP" -m tcp --dport 4001 -j KUBE-SVC-R6UZIZCIT2GFGDFT -A KUBE-SERVICES -d 172.23.6.158/32 -p tcp -m comment --comment "anon/anon-svc:agent cluster IP" -m tcp --dport 8125 -j KUBE-SVC-TF3HNH35HFDYKE6V -A KUBE-SERVICES -d 172.20.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y -A KUBE-SERVICES -d 172.20.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU -A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS -A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-5T3UFOYBS7JA45MK -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-2UBKOACGE36HHR6Q --mask 255.255.255.255 --rsource -j KUBE-SEP-2UBKOACGE36HHR6Q -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-2UBKOACGE36HHR6Q -A KUBE-SVC-R6UZIZCIT2GFGDFT -m comment --comment "anon/etcd:etcd" -j KUBE-SEP-ZSS7W6PQOP26CZ6F -A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-5LOF5ZUWMDRFZ2LI -A KUBE-SVC-TF3HNH35HFDYKE6V -m comment --comment "anon/anon-svc:agent" -j KUBE-SEP-YBFG2OLQ4DHWIGIM COMMIT # Completed on Thu Apr 13 05:30:33 2017 # Generated by iptables-save v1.4.21 on Thu Apr 13 05:30:33 2017 *filter :INPUT ACCEPT [1250:625646] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [1325:478496] :DOCKER - [0:0] :DOCKER-ISOLATION - [0:0] :KUBE-FIREWALL - [0:0] :KUBE-SERVICES - [0:0] -A INPUT -j KUBE-FIREWALL -A FORWARD -j DOCKER-ISOLATION -A FORWARD -o docker0 -j DOCKER -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -i docker0 ! 
-o docker0 -j ACCEPT -A FORWARD -i docker0 -o docker0 -j ACCEPT -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT -j KUBE-FIREWALL -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT -A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT -A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT -A DOCKER-ISOLATION -j RETURN -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP COMMIT # Completed on Thu Apr 13 05:30:33 2017 

DO:

 # Generated by iptables-save v1.4.21 on Thu Apr 13 05:38:05 2017 *nat :PREROUTING ACCEPT [1:52] :INPUT ACCEPT [1:52] :OUTPUT ACCEPT [13:798] :POSTROUTING ACCEPT [13:798] :DOCKER - [0:0] :KUBE-MARK-DROP - [0:0] :KUBE-MARK-MASQ - [0:0] :KUBE-NODEPORTS - [0:0] :KUBE-POSTROUTING - [0:0] :KUBE-SEP-3VWUJCZC3MSW5W32 - [0:0] :KUBE-SEP-CPJSBS35VMSBOKH6 - [0:0] :KUBE-SEP-K7JQ5XSWBQ7MTKDL - [0:0] :KUBE-SEP-WOG5WH7F5TFFOT4E - [0:0] :KUBE-SEP-ZSS7W6PQOP26CZ6F - [0:0] :KUBE-SERVICES - [0:0] :KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0] :KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0] :KUBE-SVC-R6UZIZCIT2GFGDFT - [0:0] :KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0] :KUBE-SVC-TF3HNH35HFDYKE6V - [0:0] -A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER -A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE -A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE -A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE -A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE -A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE -A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 443 -j MASQUERADE -A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 80 -j MASQUERADE -A DOCKER -i docker0 -j RETURN -A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379 -A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379 -A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379 -A DOCKER -d 127.0.0.1/32 ! 
-i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379 -A DOCKER ! -i docker0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.17.0.4:443 -A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.4:80 -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000 -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000 -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE -A KUBE-SEP-3VWUJCZC3MSW5W32 -s 67.205.156.80/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ -A KUBE-SEP-3VWUJCZC3MSW5W32 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-3VWUJCZC3MSW5W32 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 67.205.156.80:6443 -A KUBE-SEP-CPJSBS35VMSBOKH6 -s 172.17.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ -A KUBE-SEP-CPJSBS35VMSBOKH6 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.3:53 -A KUBE-SEP-K7JQ5XSWBQ7MTKDL -s 172.17.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ -A KUBE-SEP-K7JQ5XSWBQ7MTKDL -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.3:53 -A KUBE-SEP-WOG5WH7F5TFFOT4E -s 172.17.0.4/32 -m comment --comment "anon/anon-svc:agent" -j KUBE-MARK-MASQ -A KUBE-SEP-WOG5WH7F5TFFOT4E -p tcp -m comment --comment "anon/anon-svc:agent" -m tcp -j DNAT --to-destination 172.17.0.4:8125 -A KUBE-SEP-ZSS7W6PQOP26CZ6F -s 172.17.0.1/32 -m comment --comment "anon/etcd:etcd" -j KUBE-MARK-MASQ -A KUBE-SEP-ZSS7W6PQOP26CZ6F -p tcp -m comment --comment "anon/etcd:etcd" -m tcp -j DNAT --to-destination 172.17.0.1:4001 -A KUBE-SERVICES -d 172.20.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU -A KUBE-SERVICES -d 172.20.0.10/32 -p tcp -m comment --comment 
"kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4 -A KUBE-SERVICES -d 172.23.6.158/32 -p tcp -m comment --comment "anon/anon-svc:agent cluster IP" -m tcp --dport 8125 -j KUBE-SVC-TF3HNH35HFDYKE6V -A KUBE-SERVICES -d 172.23.6.157/32 -p tcp -m comment --comment "anon/etcd:etcd cluster IP" -m tcp --dport 4001 -j KUBE-SVC-R6UZIZCIT2GFGDFT -A KUBE-SERVICES -d 172.20.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y -A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS -A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-CPJSBS35VMSBOKH6 -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-3VWUJCZC3MSW5W32 --mask 255.255.255.255 --rsource -j KUBE-SEP-3VWUJCZC3MSW5W32 -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-3VWUJCZC3MSW5W32 -A KUBE-SVC-R6UZIZCIT2GFGDFT -m comment --comment "anon/etcd:etcd" -j KUBE-SEP-ZSS7W6PQOP26CZ6F -A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-K7JQ5XSWBQ7MTKDL -A KUBE-SVC-TF3HNH35HFDYKE6V -m comment --comment "anon/anon-svc:agent" -j KUBE-SEP-WOG5WH7F5TFFOT4E COMMIT # Completed on Thu Apr 13 05:38:05 2017 # Generated by iptables-save v1.4.21 on Thu Apr 13 05:38:05 2017 *filter :INPUT ACCEPT [1127:469861] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [1181:392136] :DOCKER - [0:0] :DOCKER-ISOLATION - [0:0] :KUBE-FIREWALL - [0:0] :KUBE-SERVICES - [0:0] -A INPUT -j KUBE-FIREWALL -A FORWARD -j DOCKER-ISOLATION -A FORWARD -o docker0 -j DOCKER -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -i docker0 ! 
-o docker0 -j ACCEPT -A FORWARD -i docker0 -o docker0 -j ACCEPT -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT -j KUBE-FIREWALL -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT -A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT -A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT -A DOCKER-ISOLATION -j RETURN -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP COMMIT # Completed on Thu Apr 13 05:38:05 2017 

$ ip route show table local

GCP:

 local 10.142.0.10 dev eth0 proto kernel scope host src 10.142.0.10
 broadcast 10.142.0.10 dev eth0 proto kernel scope link src 10.142.0.10
 broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
 local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
 local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
 broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
 broadcast 172.17.0.0 dev docker0 proto kernel scope link src 172.17.0.1
 local 172.17.0.1 dev docker0 proto kernel scope host src 172.17.0.1
 broadcast 172.17.255.255 dev docker0 proto kernel scope link src 172.17.0.1

DO:

 broadcast 10.10.0.0 dev eth0 proto kernel scope link src 10.10.0.5
 local 10.10.0.5 dev eth0 proto kernel scope host src 10.10.0.5
 broadcast 10.10.255.255 dev eth0 proto kernel scope link src 10.10.0.5
 broadcast 67.205.144.0 dev eth0 proto kernel scope link src 67.205.156.80
 local 67.205.156.80 dev eth0 proto kernel scope host src 67.205.156.80
 broadcast 67.205.159.255 dev eth0 proto kernel scope link src 67.205.156.80
 broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
 local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
 local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
 broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
 broadcast 172.17.0.0 dev docker0 proto kernel scope link src 172.17.0.1
 local 172.17.0.1 dev docker0 proto kernel scope host src 172.17.0.1
 broadcast 172.17.255.255 dev docker0 proto kernel scope link src 172.17.0.1

$ ip addr show

GCP:

 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     inet 127.0.0.1/8 scope host lo
        valid_lft forever preferred_lft forever
     inet6 ::1/128 scope host
        valid_lft forever preferred_lft forever
 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
     link/ether 42:01:0a:8e:00:0a brd ff:ff:ff:ff:ff:ff
     inet 10.142.0.10/32 brd 10.142.0.10 scope global eth0
        valid_lft forever preferred_lft forever
 3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
     link/ether 02:42:d0:6d:28:52 brd ff:ff:ff:ff:ff:ff
     inet 172.17.0.1/16 scope global docker0
        valid_lft forever preferred_lft forever
 5: veth1219894: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
     link/ether a6:4e:d4:48:4c:ff brd ff:ff:ff:ff:ff:ff
 7: vetha516dc6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
     link/ether ce:f2:e7:5d:34:d2 brd ff:ff:ff:ff:ff:ff
 9: veth4a6b171: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
     link/ether ee:42:d4:d8:ca:d4 brd ff:ff:ff:ff:ff:ff

DO:

 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     inet 127.0.0.1/8 scope host lo
        valid_lft forever preferred_lft forever
     inet6 ::1/128 scope host
        valid_lft forever preferred_lft forever
 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
     link/ether da:74:7c:ad:9d:4d brd ff:ff:ff:ff:ff:ff
     inet 67.205.156.80/20 brd 67.205.159.255 scope global eth0
        valid_lft forever preferred_lft forever
     inet 10.10.0.5/16 scope global eth0
        valid_lft forever preferred_lft forever
     inet6 fe80::d874:7cff:fead:9d4d/64 scope link
        valid_lft forever preferred_lft forever
 3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
     link/ether 76:66:0a:15:cb:a6 brd ff:ff:ff:ff:ff:ff
 4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
     link/ether 02:42:85:21:28:00 brd ff:ff:ff:ff:ff:ff
     inet 172.17.0.1/16 scope global docker0
        valid_lft forever preferred_lft forever
     inet6 fe80::42:85ff:fe21:2800/64 scope link
        valid_lft forever preferred_lft forever
 6: veth95a5fdf: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
     link/ether 12:2c:b9:80:6c:60 brd ff:ff:ff:ff:ff:ff
     inet6 fe80::102c:b9ff:fe80:6c60/64 scope link
        valid_lft forever preferred_lft forever
 8: veth3fd8422: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
     link/ether 56:98:c1:96:0c:83 brd ff:ff:ff:ff:ff:ff
     inet6 fe80::5498:c1ff:fe96:c83/64 scope link
        valid_lft forever preferred_lft forever
 10: veth3984136: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
     link/ether ae:35:39:1c:bd:c1 brd ff:ff:ff:ff:ff:ff
     inet6 fe80::ac35:39ff:fe1c:bdc1/64 scope link
        valid_lft forever preferred_lft forever

Please let me know if you need more information.

"targetPort": "agent"

I don't think that is a valid style in a normal Service YAML. You could change it to a normal port number such as 8080 and then try again.

Could you also share the Deployment that this svc refers to? Make sure the selector in the svc points at the same thing — in your case the name label.
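The alignment being asked about looks like this in the Deployment (a sketch based on the Service above, not the actual manifest):

```yaml
# The Deployment's Pod template must carry labels matching the Service
# selector (name: anon-svc); otherwise the Endpoints object stays empty
# and every connection to the cluster IP times out.
spec:
  template:
    metadata:
      labels:
        name: anon-svc   # must match the Service's .spec.selector
```

If the selector matches, `kubectl get endpoints anon-svc -n anon` should list the Pod IP.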

I have posted another similar question that is not tied to a cloud provider. When iptables is used as the proxy mode, this (a pod being unable to access its own service) seems to be the default behavior.
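A common cause of exactly this symptom with `--proxy-mode=iptables` is hairpin NAT: the DNAT rule sends the packet back out the same veth it arrived on, and the Linux bridge drops such traffic unless hairpin mode is enabled on that port. A diagnostic sketch (the bridge name `docker0` is taken from the dumps above; adjust if yours differs):

```shell
# Print hairpin_mode for every port attached to the docker0 bridge.
# A value of 0 on the pod's veth means hairpinned traffic is dropped,
# which would explain the pod timing out on its own Service.
for port in /sys/class/net/docker0/brif/*; do
  [ -e "$port/hairpin_mode" ] || continue   # skip if bridge/ports absent
  printf '%s: hairpin_mode=%s\n' "$(basename "$port")" "$(cat "$port/hairpin_mode")"
done
echo "done"
```

If the veths show `hairpin_mode=0`, the kubelet's hairpin handling (or the userspace proxy mode) is the usual remedy.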