Spark Docker configuration

I'm using https://github.com/gettyimages/docker-spark/blob/master/docker-compose.yml to bring up a Spark environment with Docker Compose. It appears to start successfully – I can connect to http://master:8080, and it reports the Spark master as spark://master:7077. 'master' is mapped to the Docker container's IP in Windows/System32/drivers/etc/hosts.

I have a Java application:

  SparkSession spark = SparkSession
      .builder()
      .appName("Java Spark SQL basic example")
      .master("spark://master:7077")
      .getOrCreate();
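Since the failure below happens after the TCP connection is established, one low-level sanity check is to verify from the Windows host exactly which ports on `master` accept connections. A minimal sketch in plain Java (the host name `master` and port 7077 are taken from the setup above; everything else is illustrative):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // Unresolvable host, refused connection, or timeout all land here.
            return false;
        }
    }

    public static void main(String[] args) {
        // Values from the question; adjust to your hosts-file mapping.
        System.out.println("master:7077 reachable: " + reachable("master", 7077, 2000));
        System.out.println("master:8080 reachable: " + reachable("master", 8080, 2000));
    }
}
```

If the port is reachable but registration still fails with a corrupted stream header, the bytes on the wire are typically not what the Spark RPC layer expects – for example, the client is talking to a different service on that port, or to a Spark build of a different version than the local spark-core dependency – so comparing the client and container Spark versions is a reasonable next check.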

I get the following error:

  19:20:18.240 [netty-rpc-connection-0] DEBUG o.a.s.n.c.TransportClientFactory - Connection to master/192.168.99.100:7077 successful, running bootstraps...
  19:20:18.240 [netty-rpc-connection-0] INFO  o.a.s.n.c.TransportClientFactory - Successfully created connection to master/192.168.99.100:7077 after 18 ms (0 ms spent in bootstraps)
  19:20:18.244 [netty-rpc-connection-0] DEBUG i.n.u.Recycler - -Dio.netty.recycler.maxCapacity.default: 32768
  19:20:18.244 [netty-rpc-connection-0] DEBUG i.n.u.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
  19:20:18.244 [netty-rpc-connection-0] DEBUG i.n.u.Recycler - -Dio.netty.recycler.linkCapacity: 16
  19:20:18.244 [netty-rpc-connection-0] DEBUG i.n.u.Recycler - -Dio.netty.recycler.ratio: 8
  19:20:18.265 [appclient-register-master-threadpool-0] WARN  o.a.s.d.c.StandaloneAppClient$ClientEndpoint - Failed to connect to master master:7077
  org.apache.spark.SparkException: Exception thrown in awaitResult:
      at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205) ~[spark-core_2.11-2.2.0.jar:2.2.0]
      at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) ~[spark-core_2.11-2.2.0.jar:2.2.0]
      at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:100) ~[spark-core_2.11-2.2.0.jar:2.2.0]
      at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:108) ~[spark-core_2.11-2.2.0.jar:2.2.0]
      at org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1$$anon$1.run(StandaloneAppClient.scala:106) ~[spark-core_2.11-2.2.0.jar:2.2.0]
      at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [na:1.8.0_121]
      at java.util.concurrent.FutureTask.run(Unknown Source) [na:1.8.0_121]
      at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0_121]
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0_121]
  Caused by: java.lang.RuntimeException: java.io.StreamCorruptedException: invalid stream header: 01000B31
      at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:857)
      at java.io.ObjectInputStream.<init>(ObjectInputStream.java:349)
      at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.<init>(JavaSerializer.scala:63)
      at org.apache.spark.serializer.JavaDeserializationStream.<init>(JavaSerializer.scala:63)
      at org.apache.spark.serializer.JavaSerializerInstance.deserializeStream(JavaSerializer.scala:122)
      at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:107)
      at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$deserialize$1$$anonfun$apply$1.apply(NettyRpcEnv.scala:259)
      at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
      at org.apache.spark.rpc.netty.NettyRpcEnv.deserialize(NettyRpcEnv.scala:308)
      at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$deserialize$1.apply(NettyRpcEnv.scala:257)
      at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
      at org.apache.spark.rpc.netty.NettyRpcEnv.deserialize(NettyRpcEnv.scala:257)
      at org.apache.spark.rpc.netty.NettyRpcHandler.internalReceive(NettyRpcEnv.scala:577)
      at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:562)
      at org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:159)
      at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:107)
      at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
      at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
      at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
      at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
      at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
      at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
      at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
      at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:652)
      at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:575)
      at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:489)
      at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:451)
      at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
      at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
      at java.lang.Thread.run(Thread.java:748)

      at org.apache.spark.network.client.TransportResponseHandler.handle(TransportResponseHandler.java:207) ~[spark-network-common_2.11-2.2.0.jar:2.2.0]
      at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:120) ~[spark-network-common_2.11-2.2.0.jar:2.2.0]
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85) ~[spark-network-common_2.11-2.2.0.jar:2.2.0]
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) ~[netty-all-4.0.43.Final.jar:4.0.43.Final]
      ... 1 common frames omitted

Any ideas?