Nginx and Ingress with Kubernetes are not routing my requests

I have Docker, Kubernetes (1.7), and Nginx all running on my RHEL7 server, with my own service in a Docker container that gets picked up by Kubernetes. I know Kubernetes is working with Docker because I can make GET requests to the Kubernetes pods directly using their own IP:PORT addresses. I have a default backend set up for Nginx, and all of that works; I know this from running the get pods and get svc commands, and everything is running as it should. When I create the Ingress, I know Nginx is picking it up, because when I run kubectl describe pods {NGINX-CONTROLLER} I can see that it updated its ingress and even logged the name I gave it. Now I get the Kubernetes master's IP address with kubectl cluster-info and try to call my service with that IP, along the lines of http://KUBEIPADDRESS/PATH/TO/MY/SERVICE, with no port number, but it doesn't work. I have no idea what is going on. Can someone help me figure out why Ingress and/or Nginx is not routing to my service correctly? My ingress and nginx files are below.
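Roughly, these are the checks described above (the pod IP shown here is just a placeholder for one of my actual pods):

    # pods and services are all up and running
    kubectl get pods
    kubectl get svc

    # a GET straight against a pod's own IP:PORT works (placeholder IP)
    curl http://10.244.1.23:9001/customer

    # the nginx controller picked up the ingress (visible in the pod description/logs)
    kubectl describe pods {NGINX-CONTROLLER}

    # the master address, and the call that does NOT work
    kubectl cluster-info
    curl http://KUBEIPADDRESS/PATH/TO/MY/SERVICE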

(Note: in the nginx yaml file, the deployment of the nginx controller is all the way at the bottom.)

Ingress yaml

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: gateway-ingress
      annotations:
        kubernetes.io/ingress.class: nginx
        ingress.kubernetes.io/rewrite-target: /
    spec:
      backend:
        serviceName: default-http-backend
        servicePort: 80
      rules:
      - host: testhost
        http:
          paths:
          - path: /customer
            backend:
              serviceName: customer
              servicePort: 9001

nginx controller yaml

    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRole
    metadata:
      name: ingress
    rules:
    - apiGroups:
      - ""
      - "extensions"
      resources:
      - configmaps
      - secrets
      - services
      - endpoints
      - ingresses
      - nodes
      - pods
      verbs:
      - list
      - watch
    - apiGroups:
      - "extensions"
      resources:
      - ingresses
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - events
      - services
      verbs:
      - create
      - list
      - update
      - get
    - apiGroups:
      - "extensions"
      resources:
      - ingresses/status
      - ingresses
      verbs:
      - update
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: Role
    metadata:
      name: ingress-ns
      namespace: kube-system
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      verbs:
      - list
    - apiGroups:
      - ""
      resources:
      - services
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - endpoints
      verbs:
      - get
      - create
      - update
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: RoleBinding
    metadata:
      name: ingress-ns-binding
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: ingress-ns
    subjects:
    - kind: ServiceAccount
      name: ingress
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: ingress-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: ingress
    subjects:
    - kind: ServiceAccount
      name: ingress
      namespace: kube-system
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: default-http-backend
      labels:
        k8s-app: default-http-backend
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            k8s-app: default-http-backend
        spec:
          terminationGracePeriodSeconds: 60
          containers:
          - name: default-http-backend
            # Any image is permissable as long as:
            # 1. It serves a 404 page at /
            # 2. It serves 200 on a /healthz endpoint
            image: gcr.io/google_containers/defaultbackend:1.0
            livenessProbe:
              httpGet:
                path: /healthz
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 30
              timeoutSeconds: 5
            ports:
            - containerPort: 8080
            resources:
              limits:
                cpu: 10m
                memory: 20Mi
              requests:
                cpu: 10m
                memory: 20Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: default-http-backend
      namespace: kube-system
      labels:
        k8s-app: default-http-backend
    spec:
      ports:
      - port: 80
        targetPort: 8080
      selector:
        k8s-app: default-http-backend
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: ingress
      namespace: kube-system
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx-ingress-controller
      labels:
        k8s-app: nginx-ingress-controller
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            k8s-app: nginx-ingress-controller
        spec:
          # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
          # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
          # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
          # like with kubeadm
          hostNetwork: true
          terminationGracePeriodSeconds: 60
          serviceAccountName: ingress
          containers:
          - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.3
            name: nginx-ingress-controller
            readinessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
            livenessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 10
              timeoutSeconds: 1
            ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend

When I run kubectl describe ing I get

    Name:             gateway-ingress
    Namespace:        default
    Address:
    Default backend:  default-http-backend:80 (<none>)
    Rules:
      Host      Path        Backends
      ----      ----        --------
      testhost
                /customer   customer:9001 ({IP}:9001,{IP}:9001)
    Annotations:
      rewrite-target:  /
    Events:  <none>

Here are my deployment and customer service, in case they are needed

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: customer
      labels:
        run: customer
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            run: customer
        spec:
          containers:
          - name: customer
            image: customer
            imagePullPolicy: Always
            ports:
            - containerPort: 9001
              protocol: TCP
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: customer
    spec:
      selector:
        run: customer
      type: NodePort
      ports:
      - name: port1
        protocol: TCP
        port: 9001
        targetPort: 9001

As far as I can see, there are a couple of issues with your setup:

  • The KUBEIPADDRESS in the URL you are calling: that IP address won't work, because you configured your Ingress to serve the host testhost. So you need to call http://testhost/customer and configure your network so that testhost resolves to the correct IP address (see the sketch after this list).

  • 但是什么是正确的IP地址? 您正试图在端口80上使用k8s master。如果没有进一步的configuration,这将无法正常工作。 为此,您需要为Ingress Controller使用一个NodePort服务,该端口在端口80(大概是433)上公开。 为了使用低端口,您需要使用kube-apiserver选项,请参阅--service-node-port-range 。 一旦有效,你可以使用你的k8s群集的任何节点的任何IP地址作为testhost 。 注意:确保没有其他应用程序在任何节点上使用这些端口!