YARN in Docker – __spark_libs__.zip does not exist

I have already looked at this StackOverflow post, but it did not help me much.

I am trying to get YARN working on an existing cluster. Up to now we have been using the Spark standalone manager as our resource allocator, and it has been working as expected.

Here is a basic overview of our architecture. Everything inside the white boxes runs in Docker containers. (architecture diagram)

On master-machine I can run the following command inside the yarn resource manager container and get a spark shell that runs on YARN:

    ./pyspark --master yarn --driver-memory 1G --executor-memory 1G --executor-cores 1 --conf "spark.yarn.am.memory=1G"

However, if I try to run the same command from inside the jupyter container on the client-machine, I get the following error in the YARN UI:

    Application application_1512999329660_0001 failed 2 times due to AM Container for appattempt_1512999329660_0001_000002 exited with exitCode: -1000
    For more detailed output, check application tracking page: http://master-machine:5000/proxy/application_1512999329660_0001/ Then, click on links to logs of each attempt.
    Diagnostics: File file:/sparktmp/spark-58732bb2-f513-4aff-b1f0-27f0a8d79947/__spark_libs__5915104925224729874.zip does not exist
    java.io.FileNotFoundException: File file:/sparktmp/spark-58732bb2-f513-4aff-b1f0-27f0a8d79947/__spark_libs__5915104925224729874.zip does not exist

I can find file:/sparktmp/spark-58732bb2-f513-4aff-b1f0-27f0a8d79947/ on the client-machine, but I cannot find spark-58732bb2-f513-4aff-b1f0-27f0a8d79947 anywhere on the master machine.
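One sanity check that narrows this down is to compare which filesystem each container actually resolves, since the missing file is referenced with a file:/ scheme. This is just a sketch and assumes the hadoop CLI is on the PATH inside both the jupyter container and the resource manager container:

    # Run inside both the jupyter container and the resource manager container.
    # If the client falls back to the local filesystem (file:///) instead of
    # hdfs://master-machine:54310, the __spark_libs__ upload stays on the
    # client's local /sparktmp and the NodeManagers can never see it.
    hdfs getconf -confKey fs.defaultFS
    ls /sparktmp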

It is worth noting that spark-shell does run fine from the client-machine when it points at the standalone Spark manager on the master machine.

No logs are printed to the YARN log directories on the worker machines.

If I run spark-submit on spark/examples/src/main/python/pi.py, I get the same error as above.
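For completeness, the spark-submit invocation I mean is roughly the following (a sketch: it mirrors the pyspark flags above and uses SPARK_HOME=/usr/spark from the environment script further down; the trailing 10 is just an arbitrary argument for pi.py):

    # Same flags as the pyspark invocation above; 10 partitions is arbitrary.
    spark-submit \
      --master yarn \
      --deploy-mode client \
      --driver-memory 1G \
      --executor-memory 1G \
      --executor-cores 1 \
      --conf "spark.yarn.am.memory=1G" \
      $SPARK_HOME/examples/src/main/python/pi.py 10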

Here is the yarn-site.xml:

    <configuration>
      <property>
        <description>YARN hostname</description>
        <name>yarn.resourcemanager.hostname</name>
        <value>master-machine</value>
      </property>
      <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
        <!-- <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler</value> -->
        <!-- <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value> -->
      </property>
      <property>
        <description>The address of the RM web application.</description>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>${yarn.resourcemanager.hostname}:5000</value>
      </property>
      <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>${yarn.resourcemanager.hostname}:8031</value>
      </property>
      <property>
        <description>The address of the scheduler interface.</description>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>${yarn.resourcemanager.hostname}:8030</value>
      </property>
      <property>
        <description>The address of the applications manager interface in the RM.</description>
        <name>yarn.resourcemanager.address</name>
        <value>${yarn.resourcemanager.hostname}:8032</value>
      </property>
      <property>
        <description>The address of the RM admin interface.</description>
        <name>yarn.resourcemanager.admin.address</name>
        <value>${yarn.resourcemanager.hostname}:8033</value>
      </property>
      <property>
        <description>Set to false, to avoid ip check</description>
        <name>hadoop.security.token.service.use_ip</name>
        <value>false</value>
      </property>
      <property>
        <name>yarn.scheduler.capacity.maximum-applications</name>
        <value>1000</value>
        <description>Maximum number of applications in the system which can be concurrently active both running and pending</description>
      </property>
      <property>
        <description>Whether to use preemption. Note that preemption is experimental in the current version. Defaults to false.</description>
        <name>yarn.scheduler.fair.preemption</name>
        <value>true</value>
      </property>
      <property>
        <description>Whether to allow multiple container assignments in one heartbeat. Defaults to false.</description>
        <name>yarn.scheduler.fair.assignmultiple</name>
        <value>true</value>
      </property>
      <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
      </property>
      <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
      </property>
    </configuration>

Here is spark.conf:

    # Default system properties included when running spark-submit.
    # This is useful for setting default environmental settings.

    # DRIVER PROPERTIES
    spark.driver.port 7011
    spark.fileserver.port 7021
    spark.broadcast.port 7031
    spark.replClassServer.port 7041
    spark.akka.threads 6
    spark.driver.cores 4
    spark.driver.memory 32g
    spark.master yarn
    spark.deploy.mode client

    # DRIVER AND EXECUTORS
    spark.blockManager.port 7051

    # EXECUTORS
    spark.executor.port 7101

    # GENERAL
    spark.broadcast.factory=org.apache.spark.broadcast.HttpBroadcastFactory
    spark.port.maxRetries 10
    spark.local.dir /sparktmp
    spark.scheduler.mode FAIR

    # SPARK UI
    spark.ui.port 4140

    # DYNAMIC ALLOCATION AND SHUFFLE SERVICE
    # http://spark.apache.org/docs/latest/configuration.html#dynamic-allocation
    spark.dynamicAllocation.enabled false
    spark.shuffle.service.enabled false
    spark.shuffle.service.port 7061
    spark.dynamicAllocation.initialExecutors 5
    spark.dynamicAllocation.minExecutors 0
    spark.dynamicAllocation.maxExecutors 8
    spark.dynamicAllocation.executorIdleTimeout 60s

    # LOGGING
    spark.executor.logs.rolling.maxRetainedFiles 5
    spark.executor.logs.rolling.strategy size
    spark.executor.logs.rolling.maxSize 100000000

    # JMX
    # Testing
    # spark.driver.extraJavaOptions -Dcom.sun.management.jmxremote.port=8897 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false

    # Spark Yarn Configs
    spark.hadoop.yarn.resourcemanager.address <master-machine IP>:8032
    spark.hadoop.yarn.resourcemanager.hostname master-machine

This shell script is run on all of the machines:

    # The main ones
    export CONDA_DIR=/cluster/conda
    export HADOOP_HOME=/usr/hadoop
    export SPARK_HOME=/usr/spark
    export JAVA_HOME=/usr/java/latest
    export PATH=$PATH:$SPARK_HOME/bin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$CONDA_DIR/bin:/cluster/libs-python:/cluster/batch
    export PYTHONPATH=/cluster/libs-python:$SPARK_HOME/python:$PY4JPATH:$PYTHONPATH
    export SPARK_CLASSPATH=/cluster/libs-java/*:/cluster/libs-python:$SPARK_CLASSPATH

    # Core spark configuration
    export PYSPARK_PYTHON="/cluster/conda/bin/python"
    export SPARK_MASTER_PORT=7077
    export SPARK_WORKER_PORT=7078
    export SPARK_MASTER_WEBUI_PORT=7080
    export SPARK_WORKER_WEBUI_PORT=7081
    export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Duser.timezone=UTC+02:00"
    export SPARK_WORKER_DIR="/sparktmp"
    export SPARK_WORKER_CORES=22
    export SPARK_WORKER_MEMORY=43G
    export SPARK_DAEMON_MEMORY=1G
    export SPARK_WORKER_INSTANCES=1
    export SPARK_EXECUTOR_INSTANCES=2
    export SPARK_EXECUTOR_MEMORY=4G
    export SPARK_EXECUTOR_CORES=2
    export SPARK_LOCAL_IP=$(hostname -I | cut -f1 -d " ")
    export SPARK_PUBLIC_DNS=$(hostname -I | cut -f1 -d " ")
    export SPARK_MASTER_OPTS="-Duser.timezone=UTC+02:00"

Here is the hdfs-site.xml on the master machine (namenode):

    <configuration>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>/hdfs</value>
      </property>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/hdfs/name</value>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>2</value>
      </property>
      <property>
        <name>dfs.replication.max</name>
        <value>3</value>
      </property>
      <property>
        <name>dfs.replication.min</name>
        <value>1</value>
      </property>
      <property>
        <name>dfs.permissions.superusergroup</name>
        <value>supergroup</value>
      </property>
      <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
      </property>
      <property>
        <name>dfs.permissions.enabled</name>
        <value>true</value>
      </property>
      <property>
        <name>fs.permissions.umask-mode</name>
        <value>002</value>
      </property>
      <property>
        <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
        <value>false</value>
      </property>
      <property>
        <!-- 1000Mbit/s -->
        <name>dfs.balance.bandwidthPerSec</name>
        <value>125000000</value>
      </property>
      <property>
        <name>dfs.hosts.exclude</name>
        <value>/cluster/config/hadoopconf/namenode/dfs.hosts.exclude</value>
        <final>true</final>
      </property>
      <property>
        <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
        <value>10</value>
      </property>
      <property>
        <name>dfs.namenode.replication.max-streams</name>
        <value>50</value>
      </property>
      <property>
        <name>dfs.namenode.replication.max-streams-hard-limit</name>
        <value>100</value>
      </property>
    </configuration>

And here is the hdfs-site.xml on the worker-machines (datanodes):

    <configuration>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>/hdfs,/hdfs2,/hdfs3</value>
      </property>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/hdfs/name</value>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>2</value>
      </property>
      <property>
        <name>dfs.replication.max</name>
        <value>3</value>
      </property>
      <property>
        <name>dfs.replication.min</name>
        <value>1</value>
      </property>
      <property>
        <name>dfs.permissions.superusergroup</name>
        <value>supergroup</value>
      </property>
      <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
      </property>
      <property>
        <name>dfs.permissions.enabled</name>
        <value>true</value>
      </property>
      <property>
        <name>fs.permissions.umask-mode</name>
        <value>002</value>
      </property>
      <property>
        <!-- 1000Mbit/s -->
        <name>dfs.balance.bandwidthPerSec</name>
        <value>125000000</value>
      </property>
    </configuration>

Here is the core-site.xml on the worker-machines (datanodes):

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master-machine:54310/</value>
      </property>
    </configuration>

And here is the core-site.xml on the master machine (namenode):

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master-machine:54310/</value>
      </property>
    </configuration>
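Given the fs.defaultFS above, once a client actually picks up this core-site.xml, Spark should upload __spark_libs__*.zip into its YARN staging directory on HDFS rather than leaving it under file:/sparktmp. A rough way to confirm that, assuming the default staging location under the submitting user's HDFS home directory:

    # Look for the staged Spark libs on HDFS; .sparkStaging is the default
    # location, and the user's home directory here is an assumption.
    hdfs dfs -ls hdfs://master-machine:54310/user/$USER/.sparkStaging/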

After a lot of debugging I was able to determine that, for some reason, the jupyter container was not looking in the correct hadoop conf directory, even though the HADOOP_HOME environment variable pointed to the correct location. To resolve the problem above, all I had to do was point HADOOP_CONF_DIR at the correct directory, and everything started working again.
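Concretely, the fix amounted to exporting HADOOP_CONF_DIR in the jupyter container before launching pyspark/spark-submit. The exact path below is an assumption based on HADOOP_HOME=/usr/hadoop from the environment script above; it should be whatever directory actually contains core-site.xml and yarn-site.xml:

    # Point Hadoop/Spark clients at the directory that really holds the cluster
    # config files (path assumed from HADOOP_HOME above; adjust to your layout).
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop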