8

I tried to execute a sample MapReduce program from Apache Hadoop and got the exception below while the MapReduce job was running. I tried hdfs dfs -chmod 777 / but that didn't fix the issue.

15/03/10 13:13:10 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/03/10 13:13:10 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
15/03/10 13:13:10 INFO input.FileInputFormat: Total input paths to process : 2
15/03/10 13:13:11 INFO mapreduce.JobSubmitter: number of splits:2
15/03/10 13:13:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1425973278169_0001
15/03/10 13:13:12 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
15/03/10 13:13:12 INFO impl.YarnClientImpl: Submitted application application_1425973278169_0001
15/03/10 13:13:12 INFO mapreduce.Job: The url to track the job: http://B2ML10803:8088/proxy/application_1425973278169_0001/
15/03/10 13:13:12 INFO mapreduce.Job: Running job: job_1425973278169_0001
15/03/10 13:13:18 INFO mapreduce.Job: Job job_1425973278169_0001 running in uber mode : false
15/03/10 13:13:18 INFO mapreduce.Job:  map 0% reduce 0%
15/03/10 13:13:18 INFO mapreduce.Job: Job job_1425973278169_0001 failed with state FAILED due to: Application application_1425973278169_0001 failed 2 times due to AM Container for appattempt_1425973278169_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://B2ML10803:8088/proxy/application_1425973278169_0001/ Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1425973278169_0001_02_000001
Exit code: 1
Exception message: CreateSymbolicLink error (1314): A required privilege is not held by the client.

Stack trace:

ExitCodeException exitCode=1: CreateSymbolicLink error (1314): A required privilege is not held by the client.

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Shell output:

1 file(s) moved.

Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
15/03/10 13:13:18 INFO mapreduce.Job: Counters: 0
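
For reference, the two WARN lines at the top of the log point at the driver code: the job was submitted without implementing the Tool interface and without a job jar being set. A minimal driver sketch that would address both warnings (class, job, and path names here are hypothetical) could look like this:

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical driver: implementing Tool lets ToolRunner parse the generic
// Hadoop options, and setJarByClass sets the job jar so the
// "No job jar file set" warning goes away.
public class WordCountDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(getConf(), "word count");
        job.setJarByClass(WordCountDriver.class);   // ships the containing jar with the job
        // The sample program's Mapper/Reducer classes would be set here,
        // e.g. job.setMapperClass(...) and job.setReducerClass(...).
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner handles -D, -conf, -libjars, etc. before calling run().
        System.exit(ToolRunner.run(new WordCountDriver(), args));
    }
}

These warnings are separate from the CreateSymbolicLink failure below, which is a Windows privilege issue.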
Sylvester Daniel
  • Having the same problem under Windows 7. I don't think it's an HDFS problem; it looks more like running it under Windows causes the issue. Made it work under Mac OS without trouble. – ametiste Mar 20 '15 at 09:42

7 Answers

17

Win 8.1 + Hadoop 2.7.0 (built from sources)

  1. run Command Prompt in admin mode

  2. execute etc\hadoop\hadoop-env.cmd

  3. run sbin\start-dfs.cmd

  4. run sbin\start-yarn.cmd

  5. now try to run your job

Mariusz
8

I recently met exactly the same problem. I tried reformatting the namenode, but it didn't work, and I don't believe it would solve the problem permanently anyway. With the reference from @aoetalks, I solved this problem on Windows Server 2012 R2 by looking into the Local Group Policy.

In conclusion, try the following steps:

  1. open Local Group Policy (press Win+R to open "Run..." - type gpedit.msc)
  2. expand "Computer Configuration" - "Windows Settings" - "Security Settings" - "Local Policies" - "User Rights Assignment"
  3. find "Create symbolic links" on the right, and see whether your user is included. If not, add your user into it.
  4. This takes effect at your next logon, so log out and log back in.

If this still doesn't work, perhaps it's because you are using an Administrator account. In that case you'll have to disable "User Account Control: Run all administrators in Admin Approval Mode" (also under Local Policies in the Group Policy editor), then restart the computer for the change to take effect.

Reference: https://superuser.com/questions/104845/permission-to-make-symbolic-links-in-windows-7
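
The call that fails in the log is Windows' CreateSymbolicLink (error 1314), which the container launch uses when localizing resources (see DefaultContainerExecutor in the stack trace). After changing the policy and logging back in, you can verify the privilege with a small standalone Java check; the file names here are just examples:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Quick check: create a symlink the same way a native Windows call would need to.
// If the "Create symbolic links" privilege is still missing, this throws a
// FileSystemException with "A required privilege is not held by the client".
public class SymlinkCheck {
    public static void main(String[] args) throws Exception {
        Path target = Files.createTempFile("symlink-target", ".txt"); // throwaway target file
        Path link = Paths.get(target.getParent().toString(), "symlink-test.lnk");
        Files.deleteIfExists(link);
        Files.createSymbolicLink(link, target);
        System.out.println("Symlink created OK: " + link);
        // Clean up the test files.
        Files.deleteIfExists(link);
        Files.deleteIfExists(target);
    }
}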

DarkZero
1

I encountered the same problem. We solved it by checking the Java environment:

  1. Check that the java and javac versions match.
  2. Ensure that every computer in the cluster has the same Java environment.
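
If it helps, the same check can be scripted as a tiny Java program run on each node, so the outputs can be compared side by side (just a convenience sketch):

// Prints the JVM details of the machine it runs on; compile and run it on
// every node and compare the output to spot mismatched Java environments.
public class JvmInfo {
    public static void main(String[] args) {
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
        System.out.println("os.name      = " + System.getProperty("os.name"));
    }
}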
zThanks
0

I don't know the cause of the error, but reformatting the NameNode solved it for me on Windows 8.

  1. Delete all old logs: clean the folders C:\hadoop\logs and C:\hadoop\logs\userlogs.
  2. Clean the folders C:\hadoop\data\dfs\datanode and C:\hadoop\data\dfs\namenode.
  3. Reformat the NameNode by running this command in administrator mode:
     c:\hadoop\bin>hdfs namenode -format
Yuliia Ashomok
0

See this for a solution and this for an explanation. Basically, symbolic links can be a security risk and the design of UAC prevents users (even users who are part of the Administrators group) from creating symlinks unless they are running in elevated mode.

Long story short, try reformatting your name node and starting Hadoop and all Hadoop jobs from an elevated command prompt.

aoetalks
0

In Windows, change the configuration in hdfs-site.xml as follows:

<configuration>
   <property>
       <name>dfs.replication</name>
       <value>1</value>
   </property>
   <property>
       <name>dfs.namenode.name.dir</name>
       <value>file:///C:/hadoop-2.7.2/data/namenode</value>
   </property>
   <property>
       <name>dfs.datanode.data.dir</name>
       <value>file:///C:/hadoop-2.7.2/data/datanode</value>
   </property>
</configuration>

Open cmd in admin mode and run these commands:

  • stop-all.cmd
  • hdfs namenode -format
  • start-all.cmd

Then run the final jar in admin mode:
hadoop jar C:\Hadoop_Demo\wordCount\target\wordCount-0.0.1-SNAPSHOT.jar file:///C:/Hadoop/input.txt file:///C:/Hadoop/output

0

I solved the same problem by choosing "Run as administrator" when opening Command Prompt.

vuminh91