
This is Spark 1.6.1.

When I run the following from spark/bin:

$ ./spark-shell --master yarn-client

I get the following error.

[screenshot of the error]

I checked the hostname in /etc/hosts and also in the Hadoop configs, and they are assigned the same hostname. Any ideas?
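The hostname check described above can be sketched as follows. This is a minimal illustration using a made-up hosts file and the placeholder name `master` — not the asker's actual `/etc/hosts`:

```shell
# Minimal sketch: any hostname referenced in yarn-site.xml (e.g. `master`)
# must resolve on the machine that launches spark-shell.
# Illustrated with a made-up hosts file, not the real /etc/hosts.
cat > /tmp/hosts.example <<'EOF'
127.0.0.1    localhost
192.168.0.10 master
EOF

# The name should appear with a reachable IP:
grep -w 'master' /tmp/hosts.example
# → 192.168.0.10 master
```

On the real machine the equivalent checks would be `grep -w master /etc/hosts` and `ping -c 1 master` from the host where `spark-shell` is run.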

Judy Kim
  • What's your `yarn-site.xml` and other Hadoop config files? What's the Spark version? – Jacek Laskowski Apr 17 '17 at 15:45
  • The yarn-site.xml configuration part looks like: `yarn.nodemanager.aux-services.mapreduce.shuffle.class` = `org.apache.hadoop.mapred.ShuffleHandler`, `yarn.resourcemanager.resource-tracker.address` = `localhost:8025`, `yarn.resourcemanager.scheduler.address` = `localhost:8030`, `yarn.resourcemanager.address` = `master:8035` – Judy Kim Apr 18 '17 at 03:35
  • The hdfs-site.xml configuration part looks like: `dfs.namenode.name.dir` = `file:/home/bohyun/data/hadoop/dfs/name` (true), `dfs.datanode.data.dir` = `file:/home/bohyun/data/hadoop/dfs/data` (true), `dfs.permissions` = `false` – Judy Kim Apr 18 '17 at 03:41
  • 2
    Why do have `yarn.resourcemanager.address` value alone set to `master` host whereas it is `localhost` in other properties? And do you have an entry for `master` in `/etc/hosts`? – franklinsijo Apr 18 '17 at 05:53
  • Can you do `ping master` from the machine where you execute `spark-shell`? – Jacek Laskowski Apr 18 '17 at 07:55
  • Can you add `yarn-site.xml` to the question? – Jacek Laskowski Apr 18 '17 at 07:56
  • 1
    Thank you @franklinsijo & Jacek Laskowski. I went through all xml files as you mentioned and edited paths all over again and it works! Thank you so much for your help! – Judy Kim Apr 18 '17 at 08:56
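As the comments suggest, the likely culprit was the hostname mismatch between `master` and `localhost` in yarn-site.xml. A consistent version would point all three ResourceManager addresses at the same host — a minimal sketch, assuming `master` is the intended ResourceManager host (hostname and ports taken from the comments above, not a verified config):

```xml
<!-- Hypothetical excerpt: all ResourceManager addresses use the same host -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8035</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8025</value>
</property>
```

Whichever name is chosen, it must resolve to the same machine in `/etc/hosts` on every node and on the client running `spark-shell`.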

0 Answers