
I am new to the world of MapReduce. I have run a job and it seems to be taking forever to complete, given that it is a relatively small task, so I am guessing something has not gone according to plan. I am using Hadoop version 2.6; here is some info I gathered that I thought could help. The MapReduce programs themselves are straightforward, so I won't bother adding them here unless someone really wants more insight: the Python code for the mapper and reducer is identical to the one here - http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/. If someone can give a clue as to what has gone wrong, or why, that would be great. Thanks in advance.

Name:   streamjob1669011192523346656.jar
Application Type:   MAPREDUCE
Application Tags:   
State:  ACCEPTED
FinalStatus:    UNDEFINED
Started:    3-Jul-2015 00:17:10
Elapsed:    20mins, 57sec
Tracking URL:   UNASSIGNED
Diagnostics:    

This is what I get when running the program:

bin/hadoop jar share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar -file python-files/mapper.py -mapper python-files/mapper.py -file python-files/reducer.py -reducer python-files/reducer.py -input /user/input/* -output /user/output
15/07/03 00:16:41 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
2015-07-03 00:16:43.510 java[3708:1b03] Unable to load realm info from SCDynamicStore
15/07/03 00:16:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
packageJobJar: [python-files/mapper.py, python-files/reducer.py, /var/folders/4x/v16lrvy91ld4t8rqvnzbr83m0000gn/T/hadoop-unjar8212926403009053963/] [] /var/folders/4x/v16lrvy91ld4t8rqvnzbr83m0000gn/T/streamjob1669011192523346656.jar tmpDir=null
15/07/03 00:16:53 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/07/03 00:16:55 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/07/03 00:17:05 INFO mapred.FileInputFormat: Total input paths to process : 1
15/07/03 00:17:06 INFO mapreduce.JobSubmitter: number of splits:2
15/07/03 00:17:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1435852353333_0003
15/07/03 00:17:11 INFO impl.YarnClientImpl: Submitted application application_1435852353333_0003
15/07/03 00:17:11 INFO mapreduce.Job: The url to track the job: http://mymacbook.home:8088/proxy/application_1435852353333_0003/
15/07/03 00:17:11 INFO mapreduce.Job: Running job: job_1435852353333_0003
godzilla
  • What is your cluster setup? What data are you trying to process, and how does MR say it has broken down your job (e.g. from your tracker URL)? – Peter Brittain Jul 05 '15 at 18:43
  • Go to the YARN ResourceManager web UI main page (http://mymacbook.home:8088/) and post the values of Active Nodes, Memory Total, Memory Used, VCores Total, and VCores Used. – SachinJose Jul 05 '15 at 21:57

2 Answers


If a job stays in the ACCEPTED state for a long time and does not change to the RUNNING state, it could be due to one of the following reasons.

The NodeManager (the slave service) is either dead or unable to communicate with the ResourceManager. If Active Nodes on the YARN ResourceManager web UI main page is zero, that confirms no NodeManagers are connected to the ResourceManager. If so, you need to start a NodeManager; see the commands sketched below.
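
For a quick check, assuming the default Hadoop 2.6 scripts and paths shown in the question (run from the Hadoop installation directory), something like the following should reveal whether a NodeManager is registered, and start one if it is not:

# List NodeManagers registered with the ResourceManager; empty output means none are connected
bin/yarn node -list
# Show which Hadoop daemons are running locally (look for NodeManager in the list)
jps
# Start a NodeManager on this machine using the Hadoop 2.x daemon script
sbin/yarn-daemon.sh start nodemanager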

Another reason is that there might be other jobs running that occupy the available slots, leaving no room for new jobs. Check the values of Memory Total, Memory Used, VCores Total, and VCores Used on the ResourceManager web UI main page.
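
The same numbers can also be pulled from the command line, assuming the ResourceManager web UI is on its default port 8088, as in the question's tracking URL:

# Cluster capacity and usage (total/used memory and vcores, active nodes) via the ResourceManager REST API
curl http://localhost:8088/ws/v1/cluster/metrics
# List running applications that may be occupying the available containers
bin/yarn application -list -appStates RUNNING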

SachinJose

Have you partitioned your data the same way you query it? Basically, you don't want to scan your entire data set, which may be what is happening at the moment. That could explain why the job is taking such a long time to run.

You want to query a subset of your whole data set. For instance, if you partition by date, you really want to write queries with a date constraint; otherwise the query will take forever to run.

If you can, write your query with a constraint on the variable(s) used to partition your data, as in the sketch below.
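
As a rough sketch of what that could look like for the streaming job in the question, assuming a hypothetical input layout partitioned by date under /user/input/dt=YYYY-MM-DD/ (the dt= directories are an illustration, not part of the original setup), you would point -input at a single partition instead of the whole dataset:

# Run the job over one date partition rather than /user/input/*
bin/hadoop jar share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar \
  -files python-files/mapper.py,python-files/reducer.py \
  -mapper mapper.py -reducer reducer.py \
  -input '/user/input/dt=2015-07-03/*' \
  -output /user/output/dt=2015-07-03

This also uses the generic -files option instead of the deprecated -file, as the warning in the job output suggests.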