Today I started working with the rhdfs and rmr2 packages.
The mapreduce() function worked as expected on a 1D vector. Here is my code for the 1D vector:
a1 <- to.dfs(1:20)
a2 <- mapreduce(input=a1, map=function(k,v) keyval(v, v^2))
a3 <- as.data.frame(from.dfs(a2))
It returns the following data frame:
   key val
1    1   1
2   10 100
3   11 121
4   12 144
5   13 169
6   14 196
7   15 225
8   16 256
9   17 289
10  18 324
11  19 361
12   2   4
13  20 400
14   3   9
15   4  16
16   5  25
17   6  36
18   7  49
19   8  64
20   9  81
Up to this point, everything was fine.
But when I ran mapreduce() on the mtcars dataset, I got the error message below. I am unable to debug it further; kindly give me some clue to move ahead.
My code:
rs1 <- mapreduce(input=mtcars,
                 map=function(k, v) {
                   if (mtcars$hp > 150) keyval("Bigger", 1)
                 },
                 reduce=function(k, v) keyval(k, sum(v)))
Error message produced by the above code:
13/09/21 07:24:49 ERROR streaming.StreamJob: Missing required option: input
Usage: $HADOOP_HOME/bin/hadoop jar \
$HADOOP_HOME/hadoop-streaming.jar [options]
Options:
-input <path> DFS input file(s) for the Map step
-output <path> DFS output directory for the Reduce step
-mapper <cmd|JavaClassName> The streaming command to run
-combiner <cmd|JavaClassName> The streaming command to run
-reducer <cmd|JavaClassName> The streaming command to run
-file <file> File/dir to be shipped in the Job jar file
-inputformat TextInputFormat(default)|SequenceFileAsTextInputFormat|JavaClassName Optional.
-outputformat TextOutputFormat(default)|JavaClassName Optional.
-partitioner JavaClassName Optional.
-numReduceTasks <num> Optional.
-inputreader <spec> Optional.
-cmdenv <n>=<v> Optional. Pass env.var to streaming commands
-mapdebug <path> Optional. To run this script when a map task fails
-reducedebug <path> Optional. To run this script when a reduce task fails
-io <identifier> Optional.
-verbose
Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|jobtracker:port> specify a job tracker
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.
The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
For more details about these options:
Use $HADOOP_HOME/bin/hadoop jar build/hadoop-streaming.jar -info
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
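
Comparing this with the 1D example, my guess is that the streaming job never received an -input path because I passed the in-memory mtcars data frame directly, instead of first writing it to HDFS with to.dfs(), and that the map function should look at its v argument rather than the global mtcars. Below is a rough, untested sketch of what I mean (the to.dfs() step and the sum(v$hp > 150) expression are only my assumptions, not something I have confirmed):

library(rmr2)

# write mtcars to HDFS first, the same way the 1D vector was handled
cars.dfs <- to.dfs(mtcars)

rs1 <- mapreduce(
  input  = cars.dfs,
  # v should be the chunk of rows handed to this mapper;
  # emit one count of cars with hp > 150 per chunk
  map    = function(k, v) keyval("Bigger", sum(v$hp > 150)),
  # add up the per-chunk counts for each key
  reduce = function(k, v) keyval(k, sum(v))
)

from.dfs(rs1)

Is this the right direction, or is the "Missing required option: input" error caused by something else entirely?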
Quick and detailed responses are highly appreciated...