I am trying to write a data frame from R to HDFS using the rmr2 package in RStudio on Amazon EMR. The tutorial I am following is http://blogs.aws.amazon.com/bigdata/post/Tx37RSKRFDQNTSL/Statistical-Analysis-with-Open-Source-R-and-RStudio-on-Amazon-EMR
The code I have written is:
Sys.setenv(HADOOP_CMD="/home/hadoop/bin/hadoop")
Sys.setenv(HADOOP_STREAMING="/home/hadoop/contrib/streaming/hadoop-streaming.jar")
Sys.setenv(JAVA_HOME="/usr/java/latest/jre")
# load libraries
library(rmr2)
library(rhdfs)
library(plyrmr)
# initiate rhdfs package
hdfs.init()
# a very simple plyrmr example to test the package
# running the code locally
bind.cols(mtcars, carb.per.cyl = carb/cyl)
# same example on the Hadoop cluster: first write the data frame to HDFS
to.dfs(mtcars, output="/tmp/mtcars")
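For reference, the local plyrmr call above is just a column-wise transform; a base-R equivalent (my own sketch, not from the tutorial) is:

```r
# Base-R equivalent of bind.cols(mtcars, carb.per.cyl = carb/cyl),
# shown only to illustrate what the local step computes.
local_result <- transform(mtcars, carb.per.cyl = carb / cyl)
head(local_result[, c("carb", "cyl", "carb.per.cyl")])
```

This runs without any Hadoop dependency, which is why only the to.dfs() step fails for me.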
My code is based on this example script: https://github.com/awslabs/emr-bootstrap-actions/blob/master/R/Hadoop/examples/biganalyses_example.R
The Hadoop version is Cloudera CDH5. I have also set the environment variables appropriately.
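To double-check the environment, I ran the following sanity checks from the R console before calling hdfs.init() (the paths are the ones on my cluster and may differ on yours):

```r
# Verify that the Hadoop environment variables point at real files;
# rhdfs and the rmr2 streaming backend both depend on them.
hadoop_cmd    <- Sys.getenv("HADOOP_CMD")
streaming_jar <- Sys.getenv("HADOOP_STREAMING")
cat("HADOOP_CMD:", hadoop_cmd, "- exists:", file.exists(hadoop_cmd), "\n")
cat("HADOOP_STREAMING:", streaming_jar, "- exists:", file.exists(streaming_jar), "\n")
```

Both files exist on my cluster, so I do not think the variables themselves are the problem.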
On running the above code (the log below is from an equivalent to.dfs call on my own data frame), I get the following error:
> to.dfs(data,output="/tmp/cust_seg")
15/03/09 20:00:21 ERROR streaming.StreamJob: Missing required options: input, output
Usage: $HADOOP_HOME/bin/hadoop jar \
$HADOOP_HOME/hadoop-streaming.jar [options]
Options:
-input <path> DFS input file(s) for the Map step
-output <path> DFS output directory for the Reduce step
-mapper <cmd|JavaClassName> The streaming command to run
-combiner <JavaClassName> Combiner has to be a Java class
-reducer <cmd|JavaClassName> The streaming command to run
-file <file> File/dir to be shipped in the Job jar file
-inputformat TextInputFormat(default)|SequenceFileAsTextInputFormat|JavaClassName Optional.
-outputformat TextOutputFormat(default)|JavaClassName Optional.
-partitioner JavaClassName Optional.
-numReduceTasks <num> Optional.
-inputreader <spec> Optional.
-cmdenv <n>=<v> Optional. Pass env.var to streaming commands
-mapdebug <path> Optional. To run this script when a map task fails
-reducedebug <path> Optional. To run this script when a reduce task fails
-verbose
Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|resourcemanager:port> specify a ResourceManager
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.
The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
For more details about these options:
Use $HADOOP_HOME/bin/hadoop jar build/hadoop-streaming.jar -info
Streaming Job Failed!
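To rule out rmr2 itself, I also tried its local backend, which runs everything in-process and bypasses Hadoop streaming entirely (my own diagnostic, not part of the tutorial). That round trip works for me, so the failure seems to be in how the streaming jar is invoked:

```r
# Diagnostic sketch: rmr2's local backend avoids Hadoop streaming altogether.
library(rmr2)
rmr.options(backend = "local")
kv <- from.dfs(to.dfs(mtcars))   # round trip through a local temp file
rmr.options(backend = "hadoop")  # switch back before re-testing on the cluster
```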
I can't figure out what is causing this issue and would appreciate any help.