I am trying to submit a Giraph job to a Hadoop 1.2.1 cluster. The cluster has a name node master, a MapReduce master (job tracker), and four slaves. The job fails with the following exception:
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: checkLocalJobRunnerConfiguration: When using LocalJobRunner, must have only one worker since only 1 task at a time!
However, here is my mapred-site.xml file:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>job.tracker.private.ip:9001</value>
  </property>
  <property>
    <name>mapreduce.job.counters.limit</name>
    <value>1000</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>50</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>50</value>
  </property>
</configuration>
and my core-site.xml file:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://name.node.private.ip:9000</value>
  </property>
</configuration>
Additionally, on both the job tracker and the name node, the masters file contains that machine's own private IP and the slaves file lists the private IPs of the four slaves.
I thought that setting mapred.job.tracker to the address of the MapReduce master would make the Hadoop client submit to the remote job tracker instead of the LocalJobRunner, but apparently it doesn't. How can I fix this?
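For reference, a sketch of how I submit the job (the jar, computation class, and input/output paths below are placeholders, not my exact command):

    # Placeholder jar/class/paths -- a typical Giraph submission on Hadoop 1.x.
    # Passing mapred.job.tracker with -D on the command line should force the
    # client to use the remote tracker even if its own mapred-site.xml is not
    # the one shown above.
    hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
      -Dmapred.job.tracker=job.tracker.private.ip:9001 \
      org.apache.giraph.examples.SimpleShortestPathsComputation \
      -vip /input/graph.txt \
      -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
      -op /output/shortestpaths \
      -w 4

The -w 4 flag asks Giraph for four workers, which is exactly what the checkLocalJobRunnerConfiguration error rejects when the client thinks it is running against the LocalJobRunner.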