
I installed hadoop-2.3.0 and tried to run the wordcount example, but the job starts and then sits idle:

hadoop@ubuntu:~$ $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar    wordcount /myprg outputfile1
14/04/30 13:20:40 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/04/30 13:20:51 INFO input.FileInputFormat: Total input paths to process : 1
14/04/30 13:20:53 INFO mapreduce.JobSubmitter: number of splits:1
14/04/30 13:21:02 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1398885280814_0004
14/04/30 13:21:07 INFO impl.YarnClientImpl: Submitted application application_1398885280814_0004
14/04/30 13:21:09 INFO mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1398885280814_0004/
14/04/30 13:21:09 INFO mapreduce.Job: Running job: job_1398885280814_0004

The tracking URL for application_1398885280814_0004 shows the job sitting with no progress. [ResourceManager UI screenshots omitted]

I did not hit this issue with previous versions; I was able to run the wordcount example there. I followed these steps to install hadoop-2.3.0.

Please suggest.

USB
  • Have you looked at the `url to track the job: http://ubuntu:8088/proxy/application_1398885280814_0003/`? – Mike Park Apr 30 '14 at 20:58
  • @climbage: I updated my question with a screenshot of the tracking URL – USB Apr 30 '14 at 21:25
  • What else do you have going on in your cluster? Do you have any active nodes in the `Nodes` section of the tracking URL? Your job is not being assigned which leads me to believe you don't have any nodes. – Mike Park Apr 30 '14 at 21:44
  • Yes, I do. I am running the cluster in pseudo-distributed mode, and the web UI lists 1 live node and no dead nodes, so the node is definitely up. But why is the job not executing? – USB May 01 '14 at 04:19
  • It's hard to tell without more information from the ResourceManager logs – Mike Park May 01 '14 at 13:47

2 Answers


I had the exact same situation a while back when switching to YARN. MRv1 had the concept of task slots, while MRv2 has containers, and the two differ greatly in how tasks are scheduled and run on the nodes.

The reason your job is stuck is that it is unable to find/start a container. If you go through the full logs of the ResourceManager, ApplicationMaster, and the other daemons, you may find that nothing happens after it starts to allocate a new container.
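For reference, here is how you might pull those logs in a pseudo-distributed setup. The daemon log path assumes the default $HADOOP_HOME/logs layout (the exact filename varies with your username and hostname), and `yarn logs` only works after the application finishes, and only if log aggregation is enabled:

    # Tail the ResourceManager daemon log while the job is stuck
    tail -f $HADOOP_HOME/logs/yarn-*-resourcemanager-*.log

    # After the application finishes (or is killed), fetch the aggregated
    # container logs (requires yarn.log-aggregation-enable=true in yarn-site.xml)
    $HADOOP_HOME/bin/yarn logs -applicationId application_1398885280814_0004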

To solve the problem, tweak the memory settings in yarn-site.xml and mapred-site.xml. While doing this myself, I found this and this tutorial especially helpful. I suggest starting with very basic memory settings, as sketched below, and optimizing them later; check with the wordcount example first, then move on to more complex jobs.
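For illustration, a minimal sketch of the kind of settings those tutorials arrive at. The property names are the standard YARN/MRv2 ones, but the values are assumptions sized for a small pseudo-distributed VM and should be adjusted to your machine's RAM:

    <!-- yarn-site.xml: total memory a NodeManager may hand out, plus container bounds -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>256</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>2048</value>
    </property>

    <!-- mapred-site.xml: per-task and ApplicationMaster container sizes;
         each request must fit within the bounds declared above -->
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>512</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>512</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>512</value>
    </property>

Note in particular that the MapReduce ApplicationMaster requests 1536 MB by default: if yarn.nodemanager.resource.memory-mb or yarn.scheduler.maximum-allocation-mb is below that, YARN can never allocate the AM container and the job hangs right after submission, which is exactly the symptom above.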

Gaurav Kumar

I was facing the same issue. I added the following property to my yarn-site.xml and it solved the problem.

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>Hostname-of-your-RM</value>
        <description>The hostname of the RM.</description>
    </property>

Without the ResourceManager hostname, things go awry in a multi-node setup: each node defaults to looking for a ResourceManager locally and never announces its resources to the master node. Your MapReduce execution request probably found no mappers to run in, because the request was sent to the master and the master did not know about the slaves' capacity.
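As a quick sanity check (using the standard Hadoop 2.x CLI), after setting the property and restarting YARN you can confirm that the NodeManagers actually registered with the ResourceManager:

    # Lists NodeManagers registered with the RM; an empty list means no
    # containers can ever be allocated and jobs will hang unassigned
    $HADOOP_HOME/bin/yarn node -list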

Reference: http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide/

Shash