
I have installed Hadoop on a VMware VM and built my PageRank jar file. Running the following command:

hadoop jar PageRank-1.0.0.jar PageRankDriver init input output 2

I get the following error:

Failing this attempt.Diagnostics: [2017-12-01 12:55:58.278]Exception from container-launch.
Container id: container_1512069161738_0011_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:994)
        at org.apache.hadoop.util.Shell.run(Shell.java:887)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1212)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:295)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:457)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:277)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:90)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

[2017-12-01 12:55:58.278]
[2017-12-01 12:55:58.279]Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Please check whether your etc/hadoop/mapred-site.xml contains the below configuration:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
[2017-12-01 12:55:58.279]
For more detailed output, check the application tracking page: http://number9.cs.stevens.edu:8088/cluster/app/application_1512069161738_0011 Then click on links to logs of each attempt.
. Failing the application.
2017-12-01 12:55:59,219 INFO mapreduce.Job: Counters: 0
Init Job Error

Does anyone have any idea how I can resolve this problem?


2 Answers


Make the following configuration changes in mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
</configuration>
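
As a quick sanity check (assuming the `hadoop` command is on your PATH and you are using a Bash shell), you can verify that HADOOP_HOME is actually set and that the MapReduce jars containing `MRAppMaster` are resolvable:

# Print the Hadoop install directory that the ${HADOOP_HOME} placeholder relies on
echo $HADOOP_HOME

# List classpath entries and look for the mapreduce jars that contain MRAppMaster
hadoop classpath | tr ':' '\n' | grep mapreduce

If `echo $HADOOP_HOME` prints nothing, the `${HADOOP_HOME}` placeholder has nothing to expand to; either export the variable (see the comments below) or replace it with the absolute path of your Hadoop distribution directory in the three values above, as the error message itself suggests.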
  • I have the same issue; I did as described, but the problem persists. – Hugo Oshiro Mar 19 '18 at 22:16
  • @HugoOshiro Try stopping all Hadoop daemons using `stop-all.sh` and then starting them all again with `start-all.sh`; that worked for me. – Aashish Kumar Jul 17 '18 at 05:48
  • @HugoOshiro You should also define HADOOP_HOME in your bash profile (.bashrc or .bash_profile), e.g. `export HADOOP_HOME="/usr/local/hadoop"`. – Jagan Jan 16 '19 at 19:26
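
To make the last comment concrete, here is a minimal sketch of the relevant ~/.bashrc entries (the path /usr/local/hadoop is only an example; substitute your own install directory):

# Example ~/.bashrc entries; adjust the path to your Hadoop installation
export HADOOP_HOME=/usr/local/hadoop
# Put the hadoop binary and the start/stop scripts on the PATH
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Run `source ~/.bashrc` (or open a new shell) afterwards so the change takes effect before restarting the daemons.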

The answer above worked for me, but after editing the configuration you need to restart all the services using:

stop-all.sh
start-all.sh
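
Note that `stop-all.sh` and `start-all.sh` are deprecated in recent Hadoop releases; the per-service scripts in $HADOOP_HOME/sbin do the same restart:

# Stop YARN and HDFS, then bring them back up
stop-yarn.sh
stop-dfs.sh
start-dfs.sh
start-yarn.sh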