
I am setting up Spark with Hadoop 2.3.0 on Mesos 0.21.0. When I try to run Spark on the master, I get these error messages from the stderr of the Mesos slave:

WARNING: Logging before InitGoogleLogging() is written to STDERR

I1229 12:34:45.923665 8571 fetcher.cpp:76] Fetching URI 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz'

I1229 12:34:45.925240 8571 fetcher.cpp:105] Downloading resource from 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz' to '/tmp/mesos/slaves/20141226-161203-701475338-5050-6942-S0/frameworks/20141229-111020-701475338-5050-985-0001/executors/20141226-161203-701475338-5050-6942-S0/runs/8ef30e72-d8cf-4218-8a62-bccdf673b5aa/spark-1.2.0.tar.gz'

E1229 12:34:45.927089 8571 fetcher.cpp:109] HDFS copyToLocal failed: hadoop fs -copyToLocal 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz' '/tmp/mesos/slaves/20141226-161203-701475338-5050-6942-S0/frameworks/20141229-111020-701475338-5050-985-0001/executors/20141226-161203-701475338-5050-6942-S0/runs/8ef30e72-d8cf-4218-8a62-bccdf673b5aa/spark-1.2.0.tar.gz'

sh: 1: hadoop: not found

Failed to fetch: hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz

Failed to synchronize with slave (it's probably exited)

The interesting thing is that when I switch to the slave node and run the same command:

hadoop fs -copyToLocal 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz' '/tmp/mesos/slaves/20141226-161203-701475338-5050-6942-S0/frameworks/20141229-111020-701475338-5050-985-0001/executors/20141226-161203-701475338-5050-6942-S0/runs/8ef30e72-d8cf-4218-8a62-bccdf673b5aa/spark-1.2.0.tar.gz'

it completes successfully.

  • I've checked the thread [Hadoop 2.5.0 on Mesos 0.21.0 with library 0.0.8 executor error](http://stackoverflow.com/questions/27254666/hadoop-2-5-0-on-mesos-0-21-0-with-library-0-0-8-executor-error), which can not solve my problem – fei_che_che Dec 29 '14 at 05:10
  • What user is mesos-slave run as? Does that user have `hadoop` in their PATH, and execute permission? – Adam Dec 29 '14 at 08:14
  • It's root, and it has `hadoop` in its PATH – fei_che_che Dec 29 '14 at 09:25
  • (And root has execute permission on the actual hadoop binary?) – Adam Jan 08 '15 at 09:25
  • I had the same error in logs. It turned out that I had forgotten to restart a slave that did not have HADOOP_HOME set or hadoop on the path. Once I restarted the master and slave, mesos found hadoop. – jlb Feb 05 '15 at 04:14
  • Thanks, @jlb. It is strange: if I start mesos-slave from the command line with HADOOP_HOME specified, it works. However, if I start mesos-slave via the service command, I get the error I mentioned. – fei_che_che Feb 11 '15 at 08:38
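The service-vs-command-line difference described in the comments is consistent with the init script not sourcing the login shell, so HADOOP_HOME and PATH set in ~/.bashrc never reach the slave process. A minimal sketch of one workaround, assuming a Debian/Ubuntu-style package where the service reads its environment from /etc/default/mesos-slave (that path, and /usr/local/hadoop, are assumptions; adjust for your distribution):

```sh
# Hypothetical sketch: make hadoop visible to a slave started as a service.
# A service-started slave does not source ~/.bashrc, so export the variables
# in the service's own environment file instead.
echo 'HADOOP_HOME=/usr/local/hadoop' >> /etc/default/mesos-slave
echo 'PATH=/usr/local/hadoop/bin:/usr/sbin:/usr/bin:/sbin:/bin' >> /etc/default/mesos-slave

service mesos-slave restart
```

As jlb notes above, the slave must be restarted after the environment changes for the fetcher to pick up `hadoop`.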

1 Answer


When starting the Mesos slave, you have to specify the path to your Hadoop installation through the following parameter:

--hadoop_home=/path/to/hadoop

Without that it just didn't work for me, even though I had the HADOOP_HOME environment variable set up.
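For illustration, a sketch of the full invocation (the master address matches the one in the question's logs; the Hadoop prefix /usr/local/hadoop is an assumption, substitute your own):

```sh
# Illustrative invocation; adjust the master address and Hadoop prefix.
mesos-slave --master=10.170.207.41:5050 \
            --hadoop_home=/usr/local/hadoop
```

With --hadoop_home set, the fetcher resolves the `hadoop` binary from that prefix instead of relying on the slave process's PATH.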