
I am attempting to run a simple word-count program in Python using Hadoop and mrjob. I have a pseudo-distributed Hadoop 2.7.3 installation on a single t2.micro EC2 instance. The program is run as:

python mr_word_count.py -r hadoop hdfs:///user/ubuntu/input/lorem.txt -o output

but it fails with the following error:

Using configs in /home/ubuntu/.mrjob.conf
Looking for hadoop binary in /home/ubuntu/hadoop/hadoop-2.7.3/bin...
Found hadoop binary: /home/ubuntu/hadoop/hadoop-2.7.3/bin/hadoop
Using Hadoop version 2.7.3
Creating temp directory /tmp/mr_word_count.ubuntu.20210403.013125.236375
uploading working dir files to hdfs:///user/ubuntu/tmp/mrjob/mr_word_count.ubuntu.20210403.013125.236375/files/wd...
Copying other local files to hdfs:///user/ubuntu/tmp/mrjob/mr_word_count.ubuntu.20210403.013125.236375/files/
Running step 1 of 1...
  Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  session.id is deprecated. Instead, use dfs.metrics.session-id
  Initializing JVM Metrics with processName=JobTracker, sessionId=
  Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
  Cleaning up the staging area file:/tmp/mapred/staging/ubuntu1155540475/.staging/job_local1155540475_0001
  Error launching job , bad input path : File does not exist: /tmp/mapred/staging/ubuntu1155540475/.staging/job_local1155540475_0001/files/mr_word_count.py#mr_word_count.py
  Streaming Command Failed!
Attempting to fetch counters from logs...
Can't fetch history log; missing job ID
No counters found
Scanning logs for probable cause of failure...
Can't fetch history log; missing job ID
Can't fetch task logs; missing application ID
Step 1 of 1 failed: Command '['/home/ubuntu/hadoop/hadoop-2.7.3/bin/hadoop', 'jar', '/home/ubuntu/hadoop/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar', '-files', 'hdfs:///user/ubuntu/tmp/mrjob/mr_word_count.ubuntu.20210403.013125.236375/files/wd/mr_word_count.py#mr_word_count.py,hdfs:///user/ubuntu/tmp/mrjob/mr_word_count.ubuntu.20210403.013125.236375/files/wd/mrjob.zip#mrjob.zip,hdfs:///user/ubuntu/tmp/mrjob/mr_word_count.ubuntu.20210403.013125.236375/files/wd/setup-wrapper.sh#setup-wrapper.sh', '-input', 'hdfs:///user/ubuntu/input/lorem.txt', '-output', 'hdfs:///user/ubuntu/output', '-mapper', '/bin/sh -ex setup-wrapper.sh python3 mr_word_count.py --step-num=0 --mapper', '-combiner', '/bin/sh -ex setup-wrapper.sh python3 mr_word_count.py --step-num=0 --combiner', '-reducer', '/bin/sh -ex setup-wrapper.sh python3 mr_word_count.py --step-num=0 --reducer']' returned non-zero exit status 512.

It seems the runner should be copying my program into the staging directory under /tmp/mapred/staging/, but isn't. Notably, the job ID job_local1155540475_0001 and the file:/tmp/... staging path in the log suggest the streaming job is being handed to the local job runner rather than to YARN, even though mrjob uploaded the files to HDFS, so I suspect I'm missing a configuration somewhere. The Python code exists only on the local filesystem; the input file is in HDFS.
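
My .mrjob.conf is minimal; the hadoop-runner section is along these lines (the binary and streaming-jar paths match what the log found, whether set explicitly or discovered via $HADOOP_HOME):

runners:
  hadoop:
    hadoop_bin: /home/ubuntu/hadoop/hadoop-2.7.3/bin/hadoop
    hadoop_streaming_jar: /home/ubuntu/hadoop/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar
    python_bin: python3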

I've seen a bunch of questions here with practically the same error (particularly this and this), but none of the suggested changes to the configuration XMLs have fixed it. The job works if I run it in local (-r local) or inline (-r inline) mode, but not with the Hadoop runner (-r hadoop).

This is the program I'm trying to run: https://gist.github.com/k4v/5d0d1425977fe7e228e7a1e538f72d68
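
In case the gist is unavailable: it is essentially the canonical mrjob word-frequency count (the class name here is illustrative):

from mrjob.job import MRJob
import re

WORD_RE = re.compile(r"[\w']+")

class MRWordCount(MRJob):

    # Emit (word, 1) for every word in the input line
    def mapper(self, _, line):
        for word in WORD_RE.findall(line):
            yield word.lower(), 1

    # Combiner and reducer both sum the counts per word
    def combiner(self, word, counts):
        yield word, sum(counts)

    def reducer(self, word, counts):
        yield word, sum(counts)

if __name__ == '__main__':
    MRWordCount.run()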

Hadoop configuration files:
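
(The full set is the usual core-site.xml / hdfs-site.xml / mapred-site.xml / yarn-site.xml.) For reference, in a pseudo-distributed setup the property that routes MapReduce jobs to YARN lives in mapred-site.xml; if it is left at its default value of "local", streaming jobs run under the LocalJobRunner:

<configuration>
  <!-- Without this, Hadoop 2.x defaults mapreduce.framework.name to "local" -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>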

The following processes are running:

$ jps
23283 Jps
21846 NodeManager
21545 SecondaryNameNode
21674 ResourceManager
21325 DataNode
21149 NameNode
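
And the input file is where the job expects it (a quick sanity check; output elided):

$ hdfs dfs -ls /user/ubuntu/input/lorem.txt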

Since jps shows the NameNode, DataNode, ResourceManager, and NodeManager are all up, the cluster daemons themselves seem healthy. Please help me figure out what I'm missing. Thank you.

Karthik V