How to connect to a remote Spark cluster from Python inside Docker

I have Spark 2.0.0 and Python 3 installed in a container, running as user docker-user. Standalone (local) mode appears to work.

We have a Spark cluster set up on AWS with Hadoop. With the VPN running, from my laptop I can ssh to the "internal IP" like so:

 ssh ubuntu@1.1.1.1 

This logs me in. Then:

 cd /opt/spark/bin
 ./pyspark

This shows Spark 2.0.0 with Python 2.7.6. A naive parallelize example works.
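
By a naive parallelize example I mean a quick smoke test of roughly this shape, run inside that pyspark shell (where sc is predefined; the exact numbers don't matter):

 # distribute a small list and pull a result back through the executors
 sc.parallelize(range(100)).sum()   # expect 4950 if the executors are healthy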

Now, in a Jupyter Notebook backed by Docker, I do:

 from pyspark import SparkConf, SparkContext
 conf = SparkConf().setAppName('hello').setMaster('spark://1.1.1.1:7077').setSparkHome('/opt/spark/')
 sc = SparkContext(conf=conf)

This apparently reaches the cluster, because I can see the application "hello" in the Spark dashboard at 1.1.1.1:8080. What puzzles me is that it gets this far from inside Docker without bothering with ssh, passwords, and so on.
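
If it helps diagnose things, the context can be inspected from the notebook with standard SparkContext attributes (a sketch; sparkUser() may be relevant to the user mismatch I mention below):

 # quick sanity checks on the driver side
 print(sc.master)       # should show spark://1.1.1.1:7077
 print(sc.version)      # should show 2.0.0
 print(sc.sparkUser())  # the user name Spark thinks it is running as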

Now try a naive parallelize example:

 x = ['spark', 'rdd', 'example', 'sample', 'example']
 y = sc.parallelize(x)

That looks fine. Then:

 y.collect() 

It just hangs there.

In the "Executor Summary" table on the dashboard I don't know what to look for. But one worker, whose state is EXITED, has stderr like this:

 16/08/16 17:37:01 INFO SignalUtils: Registered signal handler for TERM
 16/08/16 17:37:01 INFO SignalUtils: Registered signal handler for HUP
 16/08/16 17:37:01 INFO SignalUtils: Registered signal handler for INT
 16/08/16 17:37:02 INFO SecurityManager: Changing view acls to: ubuntu,docker-user
 16/08/16 17:37:02 INFO SecurityManager: Changing modify acls to: ubuntu,docker-user
 16/08/16 17:37:02 INFO SecurityManager: Changing view acls groups to:
 16/08/16 17:37:02 INFO SecurityManager: Changing modify acls groups to:
 16/08/16 17:37:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ubuntu, docker-user); groups with view permissions: Set(); users with modify permissions: Set(ubuntu, docker-user); groups with modify permissions: Set()
 Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
     at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:70)
     at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:166)
     at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:262)
     at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
 Caused by: org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout
     at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
     at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
     at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
     at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
     at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:216)
     at scala.util.Try$.apply(Try.scala:192)
     at scala.util.Failure.recover(Try.scala:216)
     at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
     at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
     at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
     at org.spark_project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
     at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
     at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
     at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
     at scala.concurrent.Promise$class.complete(Promise.scala:55)
     at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
     at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
     at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
     at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
     at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
     at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
     at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
     at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
     at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
     at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
     at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
     at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
     at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
     at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
     at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
     at scala.concurrent.Promise$class.tryFailure(Promise.scala:112)
     at scala.concurrent.impl.Promise$DefaultPromise.tryFailure(Promise.scala:153)
     at org.apache.spark.rpc.netty.NettyRpcEnv.org$apache$spark$rpc$netty$NettyRpcEnv$$onFailure$1(NettyRpcEnv.scala:205)
     at org.apache.spark.rpc.netty.NettyRpcEnv$$anon$1.run(NettyRpcEnv.scala:239)
     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:745)
 Caused by: java.util.concurrent.TimeoutException: Cannot receive any reply in 120 seconds
     ... 8 more
 java.lang.IllegalArgumentException: requirement failed: TransportClient has not yet been set.
     at scala.Predef$.require(Predef.scala:224)
     at org.apache.spark.rpc.netty.RpcOutboxMessage.onTimeout(Outbox.scala:70)
     at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$ask$1.applyOrElse(NettyRpcEnv.scala:232)
     at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$ask$1.applyOrElse(NettyRpcEnv.scala:231)
     at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:138)
     at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:136)
     at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
     at org.spark_project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
     at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
     at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
     at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
     at scala.concurrent.Promise$class.tryFailure(Promise.scala:112)
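
If I read the RpcTimeoutException correctly, the executors may be failing to open a connection back to the driver inside the container, though I am not sure. A sketch of the driver-side network settings I imagine would be relevant (the IP and ports here are hypothetical, and the matching ports would have to be published with docker run -p):

 from pyspark import SparkConf

 conf = (SparkConf()
         .setAppName('hello')
         .setMaster('spark://1.1.1.1:7077')
         .set('spark.driver.host', '2.2.2.2')       # hypothetical: an address the workers can reach
         .set('spark.driver.port', '5001')          # pin the driver RPC port instead of a random one
         .set('spark.blockManager.port', '5002'))   # likewise for the block manager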

Note that the Docker user docker-user could be a problem, since the server machines over there expect ubuntu. There may be other issues as well.
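
If the user mismatch matters, one thing that might help (a guess, not something I have confirmed) is setting the SPARK_USER environment variable before creating the context, since Spark consults it when deciding the user name to report:

 import os
 # Spark falls back to the OS user only when SPARK_USER is unset, so this
 # might make the cluster see 'ubuntu' instead of 'docker-user' (untested)
 os.environ['SPARK_USER'] = 'ubuntu'

 from pyspark import SparkConf, SparkContext
 conf = SparkConf().setAppName('hello').setMaster('spark://1.1.1.1:7077').setSparkHome('/opt/spark/')
 sc = SparkContext(conf=conf)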

Would the Python package paramiko help? I know how to use paramiko to create a client object and issue commands through it, etc., as if I had logged into the server. But I don't know how to combine that with SparkConf/SparkContext.
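
For concreteness, this is the kind of paramiko usage I mean (a sketch; the key path is made up):

 import paramiko

 # connect the same way I would by hand over the VPN
 client = paramiko.SSHClient()
 client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
 client.connect('1.1.1.1', username='ubuntu', key_filename='/path/to/key.pem')  # hypothetical key path

 # run a command on the server and read its output
 stdin, stdout, stderr = client.exec_command('ls /opt/spark/bin')
 print(stdout.read().decode())
 client.close()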

It is as if SparkConf().setMaster('spark://1.1.1.1:7077') just worked on its own. I had believed that some hoops involving logins, passwords, SSH, or authentication would be unavoidable.
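
If some authentication step really is required, I assume it would go through SparkConf rather than ssh, along the lines of the following (purely illustrative; the executor log above says authentication is disabled on this cluster, and the secret is made up):

 from pyspark import SparkConf

 conf = (SparkConf()
         .setAppName('hello')
         .setMaster('spark://1.1.1.1:7077')
         # Spark's shared-secret mechanism, shown only to illustrate where
         # credentials would go if they were needed (they seem not to be here)
         .set('spark.authenticate', 'true')
         .set('spark.authenticate.secret', 'my-secret'))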

Thanks!