I'm new to Hadoop and MapReduce. I'm trying to write a MapReduce job that outputs the 10 most frequent words from a word-count text file.
My txt file 'q2_result.txt' looks like:
yourself 268
yourselves 73
yoursnot 1
youst 1
youth 270
youthat 1
youthful 31
youths 9
youtli 1
youwell 1
youwondrous 1
youyou 1
zanies 1
zany 1
zeal 32
zealous 6
zeals 1
Mapper:
#!/usr/bin/env python
import sys

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue  # skip blank lines so split() doesn't fail
    word, count = line.split()
    print "%s\t%s" % (word, count)
Reducer:
#!/usr/bin/env python
import sys

top_n = 0
for line in sys.stdin:
    line = line.strip()
    word, count = line.split('\t', 1)
    top_n += 1
    if top_n > 10:
        break  # stop after emitting the first 10 lines
    print '%s\t%s' % (word, count)
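Before submitting the job, the two scripts can be sanity-checked locally by simulating the streaming pipeline (cat input | mapper | sort | reducer). The sketch below does that in plain Python; the function names and the tiny sample are illustrative, not part of Hadoop, and the sort mimics what a count-descending comparator would do:

```python
# Local simulation of: cat q2_result.txt | mapper.py | sort -k2,2nr | reducer.py
# map_lines / reduce_top_n are hypothetical names for this sketch.

def map_lines(lines):
    # mapper: re-emit each non-empty "word count" line as "word<TAB>count"
    out = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        word, count = line.split()
        out.append("%s\t%s" % (word, count))
    return out

def reduce_top_n(lines, n=10):
    # reducer: pass through the first n lines of the (already sorted) input
    return lines[:n]

sample = ["yourself 268", "youth 270", "zeal 32", "youths 9"]
mapped = map_lines(sample)
# sort descending on the numeric count field, as the shuffle comparator would
mapped.sort(key=lambda kv: int(kv.split("\t")[1]), reverse=True)
print(reduce_top_n(mapped, 2))
```

If this prints the expected top lines locally, any remaining failure is in the Hadoop invocation (shebang, file permissions, paths) rather than in the map/reduce logic itself.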
I know you can pass a flag via the -D option in the hadoop jar command so it sorts on the key you want (in my case the count, i.e. -k2,2). For now I'm just using a simple command first:
hadoop jar /usr/hdp/2.5.0.0-1245/hadoop-mapreduce/hadoop-streaming-2.7.3.2.5.0.0-1245.jar -file /root/LAB3/mapper.py -mapper mapper.py -file /root/LAB3/reducer.py -reducer reducer.py -input /user/root/lab3/q2_result.txt -output /user/root/lab3/test_out
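For reference, the -D variant I have in mind would look roughly like the sketch below (untested on this exact HDP build; the property names are the Hadoop 2.x ones for KeyFieldBasedComparator, and the paths are copied from the command above). A single reducer is forced so the first 10 lines it sees are the global top 10:

```shell
# Sketch: sort the shuffle numerically descending on field 2 (the count),
# so the reducer's first 10 lines are the 10 highest counts.
hadoop jar /usr/hdp/2.5.0.0-1245/hadoop-mapreduce/hadoop-streaming-2.7.3.2.5.0.0-1245.jar \
  -D stream.num.map.output.key.fields=2 \
  -D mapreduce.job.output.key.comparator.class=org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator \
  -D mapreduce.partition.keycomparator.options=-k2,2nr \
  -numReduceTasks 1 \
  -file /root/LAB3/mapper.py -mapper mapper.py \
  -file /root/LAB3/reducer.py -reducer reducer.py \
  -input /user/root/lab3/q2_result.txt -output /user/root/lab3/test_out
```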
So I thought such a simple mapper and reducer shouldn't give me errors, but they do and I can't figure out why. Errors here: http://pastebin.com/PvY4d89c
(I'm using the Hortonworks HDP Sandbox in VirtualBox on Ubuntu 16.04.)