I'm trying to run the word count tutorial on a single-node setup: http://hadoop.apache.org/docs/stable/mapred_tutorial.html

Here's my terminal output:

> hadoop jar wordcount.jar org.myorg.WordCount input output
13/08/13 16:26:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/08/13 16:26:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/08/13 16:26:59 WARN snappy.LoadSnappy: Snappy native library not loaded
13/08/13 16:26:59 INFO mapred.FileInputFormat: Total input paths to process : 2
13/08/13 16:26:59 INFO mapred.JobClient: Running job: job_local955318185_0001
13/08/13 16:26:59 INFO mapred.LocalJobRunner: Waiting for map tasks
13/08/13 16:26:59 INFO mapred.LocalJobRunner: Starting task: attempt_local955318185_0001_m_000000_0
13/08/13 16:26:59 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
13/08/13 16:26:59 INFO mapred.MapTask: Processing split: file:/Users/jfk/work/hadoop/2_word/input/file02:0+24
13/08/13 16:26:59 INFO mapred.MapTask: numReduceTasks: 1
13/08/13 16:26:59 INFO mapred.MapTask: io.sort.mb = 100
13/08/13 16:27:00 INFO mapred.MapTask: data buffer = 79691776/99614720
13/08/13 16:27:00 INFO mapred.MapTask: record buffer = 262144/327680
13/08/13 16:27:00 INFO mapred.MapTask: Starting flush of map output
13/08/13 16:27:00 INFO mapred.JobClient:  map 0% reduce 0%
13/08/13 16:27:05 INFO mapred.LocalJobRunner: file:/Users/jfk/work/hadoop/2_word/input/file02:0+24
13/08/13 16:27:06 INFO mapred.JobClient:  map 50% reduce 0%

And it's just stuck there. All I can do is Ctrl-C. How can I debug this?
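
One thing I can try while it hangs is taking a thread dump of the job's JVM to see where it is blocked. A minimal sketch using the JDK's jps and jstack tools; <pid> is a placeholder for whatever jps prints for the job:

# list running JVMs; a "hadoop jar" run shows up as RunJar (or similar)
jps -l

# dump that JVM's thread stacks to see where it is blocked
jstack <pid>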

Here is the content of logs/userlogs:

-> ls -l
total 0
drwx--x---  16 jfk  admin  544 Aug  7 21:51 job_201308072147_0001
drwx--x---  16 jfk  admin  544 Aug  9 10:18 job_201308091015_0001
drwx--x---   9 jfk  admin  306 Aug 13 14:59 job_201308131457_0001
drwx--x---   7 jfk  admin  238 Aug 13 14:59 job_201308131457_0002
drwx--x---   9 jfk  admin  306 Aug 13 15:02 job_201308131457_0003
drwx--x---   9 jfk  admin  306 Aug 13 15:04 job_201308131457_0005
drwx--x---   9 jfk  admin  306 Aug 13 15:13 job_201308131457_0007
drwx--x---   9 jfk  admin  306 Aug 13 15:14 job_201308131457_0009
drwx--x---   9 jfk  admin  306 Aug 13 15:15 job_201308131457_0011
drwx--x---   7 jfk  admin  238 Aug 13 15:16 job_201308131457_0012
drwx--x---  15 jfk  admin  510 Aug 13 15:28 job_201308131457_0014
drwx--x---   7 jfk  admin  238 Aug 13 15:28 job_201308131457_0015
drwx--x---  15 jfk  admin  510 Aug 13 16:20 job_201308131549_0001
drwx--x---  11 jfk  admin  374 Aug 13 16:20 job_201308131549_0002
drwx--x---   4 jfk  admin  136 Aug 13 16:13 job_201308131549_0004

Content of job_201308131549_0004/attempt_201308131549_0004_r_000002_0/stderr:

2013-08-13 16:13:10.401 java[7378:1203] Unable to load realm info from SCDynamicStore

== UPDATE ==

Googling the error message "Unable to load realm info from SCDynamicStore" turns up several people who hit the same problem running Hadoop on OS X. The following solution seems to work for some of them, but unfortunately not for me: Hadoop on OSX "Unable to load realm info from SCDynamicStore"
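
For reference, the workaround suggested there amounts to passing Kerberos realm/KDC system properties to the JVM via conf/hadoop-env.sh. A sketch of the commonly cited variant with empty values (exact values differ between answers):

# conf/hadoop-env.sh -- workaround for "Unable to load realm info from SCDynamicStore"
export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc="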

1 Answer

Go to http://localhost:50030/jobtracker.jsp and select the stuck job under "Running Jobs". Find which map task is stuck, open it, and click the "all" link on the right side, which will point you to the exact error. Not sure if this helps you.
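
If the JobTracker page does not list the job (the job_local955318185_0001 ID in your output suggests it is running under the LocalJobRunner rather than a JobTracker), the same information can be dug out of logs/userlogs from the shell; a rough sketch using the paths shown in the question:

# list attempt log directories, newest first
ls -ltd logs/userlogs/*/attempt_* | head

# then read the newest attempt's stderr (path from the question as an example)
cat logs/userlogs/job_201308131549_0004/attempt_201308131549_0004_r_000002_0/stderr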