No nodes available to schedule pods – running Kubernetes locally without a VM

I'm new to Kubernetes – so far I've been using docker-compose (on a single machine). Now I want to scale my work out to a cluster of nodes and get the Kubernetes features (service discovery, load balancing, health checks, and so on).

I'm working on a local server (RHEL7), trying to bring up my first Kubernetes environment (following this document).

I run:

hack/local-up-cluster.sh 

Then (in another terminal):

 cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
 cluster/kubectl.sh config set-context local --cluster=local
 cluster/kubectl.sh config use-context local
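The three commands above should leave the local kubeconfig looking roughly like this (a sketch of the expected result, not captured from the machine):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://127.0.0.1:8080
    insecure-skip-tls-verify: true
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
```

Running `cluster/kubectl.sh config view` is an easy way to confirm the context actually took effect before creating anything.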

And:

 cluster/kubectl.sh create -f run-aii.yaml 

My run-aii.yaml:

 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: aii
 spec:
   replicas: 1
   template:
     metadata:
       labels:
         run: aii
     spec:
       containers:
       - name: aii
         image: localhost:5000/dev/aii
         ports:
         - containerPort: 5144
         env:
         - name: KAFKA_IP
           value: kafka
         volumeMounts:
         - mountPath: /root/script
           name: scripts-data
           readOnly: true
         - mountPath: /home/aii/core
           name: core-aii
           readOnly: true
         - mountPath: /home/aii/genome
           name: genome-aii
           readOnly: true
         - mountPath: /home/aii/main
           name: main-aii
           readOnly: true
       - name: kafka
         image: localhost:5000/dev/kafkazoo
         volumeMounts:
         - mountPath: /root/script
           name: scripts-data
           readOnly: true
         - mountPath: /root/config
           name: config-data
           readOnly: true
       - name: ws
         image: localhost:5000/dev/ws
         ports:
         - containerPort: 3000
       volumes:
       - name: scripts-data
         hostPath:
           path: /home/aii/general/infra/script
       - name: config-data
         hostPath:
           path: /home/aii/general/infra/config
       - name: core-aii
         hostPath:
           path: /home/aii/general/core
       - name: genome-aii
         hostPath:
           path: /home/aii/general/genome
       - name: main-aii
         hostPath:
           path: /home/aii/general/main
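When a pod stays Pending, it can help to try a minimal deployment that doesn't depend on the locally built images – if this one also fails to schedule, the problem is the node, not the spec. A sketch using the stock nginx image (already present per the docker images listing); the name nginx-test is made up for this check:

```yaml
# Minimal scheduling smoke test (hypothetical name; stock nginx image)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

Create it with `cluster/kubectl.sh create -f nginx-test.yaml` and watch whether it also stays Pending with the same FailedScheduling event.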

Additional information:

 [aii@localhost kubernetes]$ cluster/kubectl.sh describe pod aii-4073165096-nkdq6
 Name:          aii-4073165096-nkdq6
 Namespace:     default
 Node:          /
 Labels:        pod-template-hash=4073165096,run=aii
 Status:        Pending
 IP:
 Controllers:   ReplicaSet/aii-4073165096
 Containers:
   aii:
     Image:  localhost:5000/dev/aii
     Port:   5144/TCP
     QoS Tier:
       cpu:    BestEffort
       memory: BestEffort
     Environment Variables:
       KAFKA_IP: kafka
   kafka:
     Image:  localhost:5000/dev/kafkazoo
     Port:
     QoS Tier:
       cpu:    BestEffort
       memory: BestEffort
     Environment Variables:
   ws:
     Image:  localhost:5000/dev/ws
     Port:   3000/TCP
     QoS Tier:
       cpu:    BestEffort
       memory: BestEffort
     Environment Variables:
 Volumes:
   scripts-data:
     Type: HostPath (bare host directory volume)
     Path: /home/aii/general/infra/script
   config-data:
     Type: HostPath (bare host directory volume)
     Path: /home/aii/general/infra/config
   core-aii:
     Type: HostPath (bare host directory volume)
     Path: /home/aii/general/core
   genome-aii:
     Type: HostPath (bare host directory volume)
     Path: /home/aii/general/genome
   main-aii:
     Type: HostPath (bare host directory volume)
     Path: /home/aii/general/main
   default-token-hiwwo:
     Type:       Secret (a volume populated by a Secret)
     SecretName: default-token-hiwwo
 Events:
   FirstSeen  LastSeen  Count  From                 SubobjectPath  Type     Reason            Message
   ---------  --------  -----  ----                 -------------  ----     ------            -------
   37s        6s        6      {default-scheduler }                Warning  FailedScheduling  no nodes available to schedule pods

Docker images:

 [aii@localhost kubernetes]$ docker images
 REPOSITORY                                       TAG               IMAGE ID       CREATED        SIZE
 kube-build                                       build-47381c8eab  f221edba30ed   25 hours ago   1.628 GB
 aii                                              latest            1026cd920723   4 days ago     1.427 GB
 localhost:5000/dev/aii                           latest            1026cd920723   4 days ago     1.427 GB
 registry                                         2                 34bccec54793   4 days ago     171.2 MB
 localhost:5000/dev/ws                            latest            fa7c5f6ef83a   12 days ago    706.8 MB
 ws                                               latest            fa7c5f6ef83a   12 days ago    706.8 MB
 kafkazoo                                         latest            84c687b0bd74   2 weeks ago    697.7 MB
 localhost:5000/dev/kafkazoo                      latest            84c687b0bd74   2 weeks ago    697.7 MB
 node                                             4.4               1a93433cee73   2 weeks ago    647 MB
 gcr.io/google_containers/hyperkube-amd64         v1.2.4            3c4f38def75b   2 weeks ago    316.7 MB
 nginx                                            latest            3edcc5de5a79   2 weeks ago    182.7 MB
 gcr.io/google_containers/debian-iptables-arm     v3                aca727a3023c   5 weeks ago    120.5 MB
 gcr.io/google_containers/debian-iptables-amd64   v3                49b5e076215b   6 weeks ago    129.4 MB
 spotify/kafka                                    latest            30d3cef1fe8e   3 months ago   421.6 MB
 gcr.io/google_containers/kube-cross              v1.4.2-1          8d2874b4f7e9   3 months ago   1.551 GB
 wurstmeister/zookeeper                           latest            dc00f1198a44   4 months ago   468.7 MB
 centos                                           latest            61b442687d68   5 months ago   196.6 MB
 centos                                           centos7.2.1511    38ea04e19303   5 months ago   194.6 MB
 hypriot/armhf-busybox                            latest            d7ae69033898   6 months ago   1.267 MB
 gcr.io/google_containers/etcd                    2.2.1             a6cd91debed1   6 months ago   28.19 MB
 gcr.io/google_containers/pause                   2.0               2b58359142b0   7 months ago   350.2 kB
 gcr.io/google_containers/kube-registry-proxy     0.3               b86ac3f11a0c   9 months ago   151.2 MB

What does 'no nodes available to schedule pods' mean? Where should I configure/define nodes? Where and how do I specify the IP address of the physical machine?

EDIT:

 [aii@localhost kubernetes]$ kubectl get nodes
 NAME        STATUS    AGE
 127.0.0.1   Ready     1m

And:

 [aii@localhost kubernetes]$ kubectl describe nodes
 Name:               127.0.0.1
 Labels:             kubernetes.io/hostname=127.0.0.1
 CreationTimestamp:  Tue, 24 May 2016 09:58:00 +0300
 Phase:
 Conditions:
   Type       Status  LastHeartbeatTime                LastTransitionTime               Reason            Message
   ----       ------  -----------------                ------------------               ------            -------
   OutOfDisk  True    Tue, 24 May 2016 09:59:50 +0300  Tue, 24 May 2016 09:58:10 +0300  KubeletOutOfDisk  out of disk space
   Ready      True    Tue, 24 May 2016 09:59:50 +0300  Tue, 24 May 2016 09:58:10 +0300  KubeletReady      kubelet is posting ready status
 Addresses: 127.0.0.1,127.0.0.1
 Capacity:
   pods:   110
   cpu:    4
   memory: 8010896Ki
 System Info:
   Machine ID:                 b939b024448040469dfdbd3dd3c3e314
   System UUID:                59FF2897-234D-4069-A5D4-B68648FC7D38
   Boot ID:                    0153b84d-90e1-4fd1-9afa-f4312e89613e
   Kernel Version:             3.10.0-327.4.5.el7.x86_64
   OS Image:                   Red Hat Enterprise Linux
   Container Runtime Version:  docker://1.10.3
   Kubelet Version:            v1.2.4
   Kube-Proxy Version:         v1.2.4
 ExternalID: 127.0.0.1
 Non-terminated Pods: (0 in total)
   Namespace  Name  CPU Requests  CPU Limits  Memory Requests  Memory Limits
   ---------  ----  ------------  ----------  ---------------  -------------
 Allocated resources:
   (Total limits may be over 100%, ie, overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
   CPU Requests  CPU Limits  Memory Requests  Memory Limits
   ------------  ----------  ---------------  -------------
   0 (0%)        0 (0%)      0 (0%)           0 (0%)
 Events:
   FirstSeen  LastSeen  Count  From                    SubobjectPath  Type    Reason                 Message
   ---------  --------  -----  ----                    -------------  ----    ------                 -------
   1m         1m        1      {kube-proxy 127.0.0.1}                 Normal  Starting               Starting kube-proxy.
   1m         1m        1      {kubelet 127.0.0.1}                    Normal  Starting               Starting kubelet.
   1m         1m        1      {kubelet 127.0.0.1}                    Normal  NodeHasSufficientDisk  Node 127.0.0.1 status is now: NodeHasSufficientDisk
   1m         1m        1      {controllermanager }                   Normal  RegisteredNode         Node 127.0.0.1 event: Registered Node 127.0.0.1 in NodeController
   1m         1m        1      {kubelet 127.0.0.1}                    Normal  NodeOutOfDisk          Node 127.0.0.1 status is now: NodeOutOfDisk
   1m         1m        1      {kubelet 127.0.0.1}                    Normal  NodeReady              Node 127.0.0.1 status is now: NodeReady

But I do have some free space:

 [aii@localhost kubernetes]$ df -h
 Filesystem             Size  Used  Avail  Use%  Mounted on
 /dev/mapper/rhel-root  47G   42G   3.2G   93%   /
 devtmpfs               3.9G  0     3.9G   0%    /dev
 tmpfs                  3.9G  3.7M  3.9G   1%    /dev/shm
 tmpfs                  3.9G  17M   3.9G   1%    /run
 tmpfs                  3.9G  0     3.9G   0%    /sys/fs/cgroup
 /dev/mapper/rhel-var   485M  288M  198M   60%   /var
 /dev/sda1              509M  265M  245M   52%   /boot
 tmpfs                  783M  44K   783M   1%    /run/user/1000
 /dev/sr0               56M   56M   0      100%  /run/media/aii/VBOXADDITIONS_5.0.18_106667

How much disk space is needed? (I'm working in a VM, so I don't have much.)
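A sketch of why the node can report OutOfDisk despite "free" space: the v1.2 kubelet marks the node OutOfDisk when free space on the Docker/kubelet root (typically under /var) drops below the `--low-diskspace-threshold-mb` flag, which defaults to 256 MB. The flag name and default here are from memory of the v1.2 kubelet and worth double-checking against your build:

```shell
# The df -h output above shows /dev/mapper/rhel-var with only 198M available,
# which is below the assumed 256 MB default threshold.
avail_mb=198       # available MB on /var, from df -h
threshold_mb=256   # assumed kubelet default for --low-diskspace-threshold-mb
if [ "$avail_mb" -lt "$threshold_mb" ]; then
  echo "below threshold: kubelet reports OutOfDisk"
fi
```

If that is the cause, freeing space under /var (or moving the Docker root to a larger partition) should clear the condition and let the pod schedule.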

That means there are no nodes available in the system to schedule pods onto. Could you provide the output of kubectl get nodes and kubectl describe nodes?

Following the steps described in the local cluster documentation should give you a single node. If your node is there (it should be) but isn't ready, you can look at the logs in /tmp/kubelet.log (in the future, if you aren't using a local cluster, look in /var/log/kubelet.log) to figure out the likely cause.
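One way to pull the disk-related lines out of the kubelet log, simulated here with a temp file so the example is self-contained; against the real cluster you would point grep at /tmp/kubelet.log:

```shell
# Simulated log; for a local-up-cluster.sh run the real file is /tmp/kubelet.log.
log=$(mktemp)
echo "kubelet: Node 127.0.0.1 status is now: NodeOutOfDisk" > "$log"

# Count disk-related lines; on the real log you would run something like:
#   grep -i "disk" /tmp/kubelet.log | tail -n 20
grep -ic "outofdisk" "$log"   # → 1
rm -f "$log"
```

Any NodeOutOfDisk or low-disk messages there point back at the full root filesystem (93% used) and the small /var partition shown in the df -h output.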