docker-compose v3 + apache spark, connection refused on port 7077

I'm not sure whether this is 100% a programming question or more of a sysadmin question.

I'm trying to set up a docker-compose file in version 3 format, for docker swarm (Docker 1.13), to test Spark in my local workflow.

Unfortunately, port 7077 only binds to localhost on my swarm cluster and so is not reachable from the outside world, where my Spark applications try to connect to it.

Does anyone have an idea how to get docker-compose in swarm mode to bind to all interfaces?

I do publish my ports, and this works fine for, say, 8080, but not for 7077.
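
For context, these are the relevant ports entries, excerpted from the full service definition shown further down:

    ports:
      - 8080:8080
      - 7077:7077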

nmap output:

    Starting Nmap 7.01 ( https://nmap.org ) at 2017-03-02 11:27 PST
    Nmap scan report for localhost (127.0.0.1)
    Host is up (0.000096s latency).
    Other addresses for localhost (not scanned): ::1
    Not shown: 994 closed ports
    PORT     STATE SERVICE
    22/tcp   open  ssh
    80/tcp   open  http
    443/tcp  open  https
    8080/tcp open  http-proxy
    8081/tcp open  blackice-icecap
    8888/tcp open  sun-answerbook

Port explanation:

    8081 is my spark worker
    8080 is my spark master frontend
    8888 is the spark hue frontend

nmap does not list 7077.
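
(Note that by default nmap only probes its list of most common ports, so 7077 may simply not be in the scanned set. A targeted scan, which is my own suggestion and not part of the original diagnostics, would look like:

    nmap -p 7077 localhost

)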

With netstat:

    tcp        0      0 0.0.0.0:22      0.0.0.0:*       LISTEN      1641/sshd
    tcp6       0      0 :::4040         :::*            LISTEN      1634/dockerd
    tcp6       0      0 :::2377         :::*            LISTEN      1634/dockerd
    tcp6       0      0 :::7946         :::*            LISTEN      1634/dockerd
    tcp6       0      0 :::80           :::*            LISTEN      1634/dockerd
    tcp6       0      0 :::8080         :::*            LISTEN      1634/dockerd
    tcp6       0      0 :::8081         :::*            LISTEN      1634/dockerd
    tcp6       0      0 :::6066         :::*            LISTEN      1634/dockerd
    tcp6       0      0 :::22           :::*            LISTEN      1641/sshd
    tcp6       0      0 :::8888         :::*            LISTEN      1634/dockerd
    tcp6       0      0 :::443          :::*            LISTEN      1634/dockerd
    tcp6       0      0 :::7077         :::*            LISTEN      1634/dockerd
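
(For reference, listening-socket output in this shape typically comes from an invocation along these lines; the exact flags are my assumption:

    sudo netstat -tlnp

)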

I can connect to 7077 via telnet on localhost without any problem, but from outside localhost I get a connection refused error.
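
The check looked roughly like this (a sketch; the external hostname is borrowed from the SPARK_PUBLIC_DNS value below and stands in for any machine outside the node):

    # on the swarm node itself: connects fine
    telnet localhost 7077

    # from any other machine: connection refused
    telnet blonde.fiehnlab.ucdavis.edu 7077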

At this point (please bear with me, I'm not a sysadmin, I'm a software guy), I started to get the feeling that this is related to the docker mesh networking.

The docker-compose configuration for my master:

    #the spark master, having to run on the frontend of the cluster
    master:
      image: eros.fiehnlab.ucdavis.edu/spark
      command: bin/spark-class org.apache.spark.deploy.master.Master -h master
      hostname: master
      environment:
        MASTER: spark://master:7077
        SPARK_CONF_DIR: /conf
        SPARK_PUBLIC_DNS: blonde.fiehnlab.ucdavis.edu
      ports:
        - 4040:4040
        - 6066:6066
        - 8080:8080
        - 7077:7077
      volumes:
        - /tmp:/tmp/data
      networks:
        - spark
        - frontends
      deploy:
        placement:
          #only run on manager node
          constraints:
            - node.role == manager

The spark and frontends networks are both overlay networks.
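
The compose file's top-level networks section isn't shown in the post; a minimal sketch of what it would look like, assuming default overlay settings:

    networks:
      spark:
        driver: overlay
      frontends:
        driver: overlay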

The problem was a configuration error in the docker-compose file. The -h master in the original configuration always bound the service to the localhost interface, even after specifying a SPARK_LOCAL_IP value. The working configuration, with that flag removed:

    master:
      image: eros.fiehnlab.ucdavis.edu/spark:latest
      command: bin/spark-class org.apache.spark.deploy.master.Master
      hostname: master
      environment:
        SPARK_CONF_DIR: /conf
        SPARK_PUBLIC_DNS: blonde.fiehnlab.ucdavis.edu
        SPARK_LOCAL_IP: 0.0.0.0
      ports:
        - 4040:4040
        - 6066:6066
        - 8080:8080
        - 7077:7077
      volumes:
        - /tmp:/tmp/data
      deploy:
        placement:
          #only run on manager node
          constraints:
            - node.role == manager
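
After redeploying the stack, the earlier checks can be repeated to confirm the fix (a sketch, using the same hostname assumption as above):

    # dockerd should still be listening on :::7077
    sudo netstat -tlnp | grep 7077

    # and the connection should now succeed from outside the node
    telnet blonde.fiehnlab.ucdavis.edu 7077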