
I am trying to install Open edX Insights on an Azure instance. Both the LMS and Insights are on the same box. As part of the installation, I have installed Hadoop, Hive, etc. through the YAML script. The next instruction is to test the Hadoop installation, and for that the documentation asks to compute the value of "pi". It gives the following command:

hadoop jar hadoop*/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar pi 2 100

But when I run this command, it gives the following error:

hadoop@MillionEdx:~$ hadoop jar hadoop*/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar pi 2 100
Unknown program 'hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar' chosen.

Valid program names are:
  aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
  aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
  bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
  dbcount: An example job that count the pageview counts from a database.
  distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
  grep: A map/reduce program that counts the matches of a regex in the input.
  join: A job that effects a join over sorted, equally partitioned datasets
  multifilewc: A job that counts words from several files.
  pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
  randomwriter: A map/reduce program that writes 10GB of random data per node.
  secondarysort: An example defining a secondary sort to the reduce.
  sort: A map/reduce program that sorts the data written by the random writer.
  sudoku: A sudoku solver.
  teragen: Generate data for the terasort
  terasort: Run the terasort
  teravalidate: Checking results of terasort
  wordcount: A map/reduce program that counts the words in the input files.
  wordmean: A map/reduce program that counts the average length of the words in the input files.
  wordmedian: A map/reduce program that counts the median length of the words in the input files.
  wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.

I tried many things, like giving the fully qualified name for pi, but I never got the value of pi. Please suggest a workaround. Thanks in advance.
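For reference, one of the variants I tried was pointing at a single examples jar by an explicit path instead of the glob. The install directory below is just an assumption about my layout (the version 2.7.2 comes from the error message above); the general shape of the command was:

# hypothetical explicit path -- /home/hadoop/hadoop-2.7.2 stands in for the actual Hadoop home
hadoop jar /home/hadoop/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 2 100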

1 Answer


You need to check that your Java version is compatible with Hadoop MapReduce 2.7.2.

This can happen when the wrong jar file is used with respect to the Java version.
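A quick way to see which versions are actually in play is to run the standard version commands (nothing here is specific to the Insights setup):

java -version
hadoop version

As far as I know, Hadoop 2.7.x requires Java 7 or later, so if the reported Java version is older than that, updating the JDK (or pointing JAVA_HOME in hadoop-env.sh at a newer one) would be the first thing to try.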