org.apache.spark.SparkException: Invalid master URL: spark://tasks.501393358-spark-master:7077

I have 2 Spark clusters – a global Spark master and a 100 Spark master. I created two Spark workers for the global Spark master and one Spark worker for the 100 Spark master. All of them are created on a single node.

The global Spark worker comes up and attaches to the global Spark master, but the 100 Spark worker does not come up, and I get the exception below.

How do I resolve this?

Exception in thread "main" org.apache.spark.SparkException: Invalid master URL: spark://tasks.100-spark-master:7077
    at org.apache.spark.util.Utils$.extractHostPortFromSparkUrl(Utils.scala:2330)
    at org.apache.spark.rpc.RpcAddress$.fromSparkURL(RpcAddress.scala:47)
    at org.apache.spark.deploy.worker.Worker$$anonfun$13.apply(Worker.scala:714)
    at org.apache.spark.deploy.worker.Worker$$anonfun$13.apply(Worker.scala:714)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
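For context, `Utils$.extractHostPortFromSparkUrl` (top of the stack trace) parses the master URL with `java.net.URI`, which follows RFC 2396 hostname rules: the rightmost DNS label must start with a letter. A minimal sketch of what I believe happens with these two hostnames (the class name `SparkUrlCheck` is just for illustration):

```java
import java.net.URI;

public class SparkUrlCheck {
    public static void main(String[] args) throws Exception {
        // The global master's hostname parses normally as a server-based authority:
        URI ok = new URI("spark://tasks.global-spark-master:7077");
        System.out.println(ok.getHost());  // tasks.global-spark-master

        // In "tasks.100-spark-master" the rightmost label starts with a digit,
        // which java.net.URI's hostname grammar rejects; the authority falls
        // back to registry-based parsing and getHost() returns null, which is
        // what makes Spark report "Invalid master URL".
        URI bad = new URI("spark://tasks.100-spark-master:7077");
        System.out.println(bad.getHost());  // null
    }
}
```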

The way I create these services is:

Global network:

docker service create --name global-spark-master --limit-cpu 8 --limit-memory 24GB --reserve-cpu 4 --reserve-memory 12GB --network global --network xyzservice --with-registry-auth pricecluster1:5000/nimbus/xinnici_spark:2.0.2 sh -c '/opt/spark/bin/spark-class org.apache.spark.deploy.master.Master -i tasks.global-spark-master'

docker service create --name global-spark-worker --limit-cpu 8 --limit-memory 24GB --reserve-cpu 4 --reserve-memory 12GB --network global --network xyzservice --with-registry-auth pricecluster1:5000/nimbus/xinnici_spark:2.0.2 sh -c '/opt/spark/bin/spark-class org.apache.spark.deploy.worker.Worker spark://tasks.global-spark-master:7077'

Specific network:

docker service create --name 100-spark-master --limit-cpu 2 --limit-memory 12GB --reserve-cpu 2 --reserve-memory 6GB --network 100 --network xyzservice --with-registry-auth pricecluster1:5000/nimbus/xinnici_spark:2.0.2 sh -c '/opt/spark/bin/spark-class org.apache.spark.deploy.master.Master -i tasks.100-spark-master'

docker service create --name 100-spark-worker --limit-cpu 2 --limit-memory 12GB --reserve-cpu 1 --reserve-memory 6GB --network 100 --network xyzservice --with-registry-auth pricecluster1:5000/nimbus/xinnici_spark:2.0.2 sh -c '/opt/spark/bin/spark-class org.apache.spark.deploy.worker.Worker spark://tasks.100-spark-master:7077'