Should Docker Swarm not remove the old container when a crashed node rejoins after rescheduling?

I have been experimenting quite a bit with Docker Swarm. One thing I find particularly interesting is the rescheduling policies added in Swarm 1.2.0. My current setup looks like this:

    $ docker info
    Containers: 17
     Running: 17
     Paused: 0
     Stopped: 0
    Images: 51
    Server Version: swarm/1.2.3
    Role: replica
    Primary: 192.168.137.114:3376
    Strategy: spread
    Filters: health, port, containerslots, dependency, affinity, constraint
    Nodes: 4
     pi1-swarm: 192.168.137.111:2376
      ID: HFIY:FDBC:QT4K:HEJ6:VAP4:4ZUR:FC55:PVDX:STST:R457:OLA4:QTEB
      Status: Healthy
      Containers: 4
      Reserved CPUs: 0 / 4
      Reserved Memory: 0 B / 971.8 MiB
      Labels: executiondriver=, kernelversion=4.1.17-hypriotos-v7+, operatingsystem=Raspbian GNU/Linux 8 (jessie), provider=generic, storagedriver=overlay
      UpdatedAt: 2016-06-15T14:36:03Z
      ServerVersion: 1.11.1
     pi2-swarm: 192.168.137.112:2376
      ID: MRID:YM7F:JAJN:FFAL:J4CF:FMTG:FY5Y:HISE:6OTV:BLT7:CFC3:YHQV
      Status: Healthy
      Containers: 5
      Reserved CPUs: 0 / 4
      Reserved Memory: 0 B / 971.8 MiB
      Labels: executiondriver=, kernelversion=4.1.17-hypriotos-v7+, operatingsystem=Raspbian GNU/Linux 8 (jessie), provider=generic, storagedriver=overlay
      UpdatedAt: 2016-06-15T14:36:36Z
      ServerVersion: 1.11.1
     pi3-swarm: 192.168.137.113:2376
      ID: L52C:DNZN:NWKT:VRLY:4I2G:7DPY:CBWJ:TXBI:PCZK:6TQ3:HKHE:7EYE
      Status: Healthy
      Containers: 4
      Reserved CPUs: 0 / 4
      Reserved Memory: 0 B / 971.8 MiB
      Labels: executiondriver=, kernelversion=4.1.17-hypriotos-v7+, operatingsystem=Raspbian GNU/Linux 8 (jessie), provider=generic, storagedriver=overlay
      UpdatedAt: 2016-06-15T14:36:14Z
      ServerVersion: 1.11.1
     pi4-swarm: 192.168.137.114:2376
      ID: FVO2:DVKX:HVUR:2EP6:TYYU:OWWR:TEXW:44CD:AXLL:37VC:QCQJ:XTAF
      Status: Healthy
      Containers: 4
      Reserved CPUs: 0 / 4
      Reserved Memory: 0 B / 971.8 MiB
      Labels: executiondriver=, kernelversion=4.1.17-hypriotos-v7+, operatingsystem=Raspbian GNU/Linux 8 (jessie), provider=generic, storagedriver=overlay
      UpdatedAt: 2016-06-15T14:36:38Z
      ServerVersion: 1.11.1
    Plugins:
     Volume:
     Network:
    Kernel Version: 4.1.17-hypriotos-v7+
    Operating System: linux
    Architecture: arm
    CPUs: 16
    Total Memory: 3.796 GiB
    Name: pi3-swarm

Output of docker ps against the Swarm manager:

    CONTAINER ID   IMAGE                                  COMMAND                  CREATED             STATUS              PORTS                        NAMES
    0f84b1a5f912   hypriot/rpi-busybox-httpd              "/bin/busybox httpd -"   29 minutes ago      Up 28 minutes       192.168.137.112:80->80/tcp   pi2-swarm/small_meninsky
    2b8e8a53238f   hypriot/rpi-swarm:latest               "/swarm join --advert"   About an hour ago   Up About an hour                                 pi1-swarm/swarm-agent
    3e5e2abc0a80   hypriot/rpi-swarm:latest               "/swarm manage --tlsv"   About an hour ago   Up About an hour                                 pi1-swarm/swarm-agent-master
    7611766de6df   rpi_consul                             "/bin/consul agent -s"   About an hour ago   Up About an hour                                 pi4-swarm/rpi4_consul
    d32d5b47de98   rpi_consul                             "/bin/consul agent -s"   About an hour ago   Up About an hour                                 pi3-swarm/rpi3_consul
    93242f63307c   rpi_consul                             "/bin/consul agent -s"   About an hour ago   Up 28 minutes                                    pi2-swarm/rpi2_consul
    c2c05cd1341e   rpi_consul                             "/bin/consul agent -s"   About an hour ago   Up About an hour                                 pi1-swarm/rpi1_consul
    d878e069e127   hypriot/rpi-swarm:latest               "/swarm join --advert"   About an hour ago   Up About an hour                                 pi4-swarm/swarm-agent
    2861b315a2fb   hypriot/rpi-swarm:latest               "/swarm manage --tlsv"   About an hour ago   Up About an hour                                 pi4-swarm/swarm-agent-master
    210cd2755545   hypriot/rpi-swarm:latest               "/swarm join --advert"   About an hour ago   Up 28 minutes                                    pi2-swarm/swarm-agent
    9b7698a068d7   hypriot/rpi-swarm:latest               "/swarm manage --tlsv"   About an hour ago   Up 28 minutes                                    pi2-swarm/swarm-agent-master
    917aad7e4d6e   hypriot/rpi-swarm:latest               "/swarm join --advert"   About an hour ago   Up About an hour                                 pi3-swarm/swarm-agent
    043ba968c341   hypriot/rpi-swarm:latest               "/swarm manage --tlsv"   About an hour ago   Up About an hour                                 pi3-swarm/swarm-agent-master
    41b936e7d7c2   nimblestratus/rpi-registrator:latest   "/bin/registrator -tt"   3 weeks ago         Up About an hour                                 pi4-swarm/registrator4
    f887095e994c   nimblestratus/rpi-registrator:latest   "/bin/registrator -tt"   3 weeks ago         Up About an hour                                 pi3-swarm/registrator3
    82e5b2fa75f6   nimblestratus/rpi-registrator:latest   "/bin/registrator -tt"   3 weeks ago         Up 27 minutes                                    pi2-swarm/registrator2
    10d1a2b7372e   nimblestratus/rpi-registrator:latest   "/bin/registrator -tt"   3 weeks ago         Up About an hour                                 pi1-swarm/registrator1

All of this works fine: Consul UI screenshot

The busybox-httpd container was started like this:

    docker run -d --restart=always \
        -l 'com.docker.swarm.reschedule-policies=["on-node-failure"]' \
        -p 80:80 hypriot/rpi-busybox-httpd

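As an aside, if I read the Swarm 1.2 documentation correctly, the same policy can also be attached through an environment variable instead of the label, so the following should be an equivalent way to start the container (a sketch, not something I have verified on this cluster):

    docker run -d --restart=always \
        -e reschedule:on-node-failure \
        -p 80:80 hypriot/rpi-busybox-httpd
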
Now, when I disconnect, power off, or otherwise make the node running busybox unavailable (see the sketch after this list for how I simulate this), this is what I would expect Swarm to do:

  • Recognize that the node is unavailable
  • Recognize that busybox needs to be rescheduled
  • Create the busybox container on an available node
  • Start the new busybox container
  • Remove the old busybox container if the crashed node rejoins

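For reference, a minimal sketch of one way to simulate the failure and watch the manager react. It assumes SSH access to the node, that HypriotOS/Raspbian jessie runs the Docker engine under systemd, and that the TLS client certificates for the manager at 192.168.137.114:3376 are in the default location:

    # Make pi2-swarm unavailable by stopping its Docker engine
    # (unplugging the network cable or powering the Pi off works too):
    ssh pi2-swarm 'sudo systemctl stop docker'

    # Watch the Swarm manager notice the failure and reschedule the container:
    watch 'docker -H tcp://192.168.137.114:3376 --tlsverify ps --filter ancestor=hypriot/rpi-busybox-httpd'
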
Everything works exactly as I expect except for the last step. When the node rejoins the Swarm, I suddenly have two busybox containers running; the old one is never removed.
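
Until it is clear whether this is intended, the leftover duplicate can be cleaned up by hand through the manager. A minimal sketch, under the same TLS assumptions as above; <old-container-id> is a hypothetical placeholder for whichever copy is stale on the rejoined node:

    # List both copies of the httpd container across the cluster:
    docker -H tcp://192.168.137.114:3376 --tlsverify ps \
        --filter ancestor=hypriot/rpi-busybox-httpd \
        --format '{{.ID}}\t{{.Names}}\t{{.Status}}'

    # Remove the stale copy that came back with the rejoined node:
    docker -H tcp://192.168.137.114:3376 --tlsverify rm -f <old-container-id>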

Is this the expected behavior, or am I missing something here?