
I just installed Hadoop successfully on a small cluster. Now I'm trying to run the wordcount example but I'm getting this error:

hdfs://localhost:54310/user/myname/test11
12/04/24 13:26:45 INFO input.FileInputFormat: Total input paths to process : 1
12/04/24 13:26:45 INFO mapred.JobClient: Running job: job_201204241257_0003
12/04/24 13:26:46 INFO mapred.JobClient:  map 0% reduce 0%
12/04/24 13:26:50 INFO mapred.JobClient: Task Id : attempt_201204241257_0003_m_000002_0, Status : FAILED
Error initializing attempt_201204241257_0003_m_000002_0:
java.io.IOException: Exception reading file:/tmp/mapred/local/ttprivate/taskTracker/myname/jobcache/job_201204241257_0003/jobToken
    at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:135)
    at org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(TokenCache.java:165)
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1179)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1116)
    at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2404)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.FileNotFoundException: File file:/tmp/mapred/local/ttprivate/taskTracker/myname/jobcache/job_201204241257_0003/jobToken does not exist.
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
    at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:129)
    ... 5 more

Any help?
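
For context, the wordcount example on a Hadoop 1.x install is normally launched with the bundled examples jar, along the lines below (the jar name and output path are placeholders; the input path is the one from the log above):

hadoop jar $HADOOP_HOME/hadoop-examples-*.jar wordcount /user/myname/test11 /user/myname/test11-out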

  • Does the path `/tmp/mapred/local` exist, and does the user under which the hadoop services run have permission to write to this directory? – Chris White Apr 24 '12 at 18:30
  • IIRC you have to chown that dir or be a user in a group with those permissions. Otherwise you will get fnf – apesa Apr 24 '12 at 21:50
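
A quick way to check what the comments ask about (the hduser account and hadoop group below are just examples; substitute whatever user actually runs the TaskTracker):

# does the local mapred directory exist, and who owns it?
ls -ld /tmp/mapred /tmp/mapred/local

# if it is owned by the wrong user, hand it to the account running
# the Hadoop daemons, and make sure it is writable
sudo chown -R hduser:hadoop /tmp/mapred
sudo chmod -R 755 /tmp/mapred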

2 Answers


I just worked through this same error. Setting the permissions recursively on my Hadoop directory didn't help. Following Mohyt's recommendation here, I modified core-site.xml (in the hadoop/conf/ directory) to remove the entry where I had specified the temp directory (hadoop.tmp.dir in the XML). After allowing Hadoop to create its own temp directory, I'm running error-free.
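
The block to remove from core-site.xml looks roughly like this (the path value here is only a placeholder; whatever custom path was configured is what goes away):

 <property>
  <name>hadoop.tmp.dir</name>
  <value>/some/custom/tmp/path</value>
 </property>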

s3cur3

It is better to create your own temp directory and point hadoop.tmp.dir at it in core-site.xml:

<configuration>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/home/unmesha/mytmpfolder/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>
.....

And give the directory permissions:

unmesha@unmesha-virtual-machine:~$ chmod 750 /home/unmesha/mytmpfolder/tmp

Check this for core-site.xml configuration.
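
A minimal core-site.xml for a single-node Hadoop 1.x setup, assuming the NameNode address from the question's log, would then look roughly like this:

<configuration>
 <property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
 </property>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/home/unmesha/mytmpfolder/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>
</configuration>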

USB