I am stuck on a strange problem. Pentaho Data Integration provides a sample job, "Word Count Job", to help understand MapReduce jobs.
I am learning MapReduce and I am really lost with one strange error.
The error is:
"Caused by: java.io.IOException: Cannot initialize Cluster.
Please check your configuration for mapreduce.framework.name
and the correspond server addresses."
I have tried everything in my repertoire to resolve it, from changing the "plugin-properties" file in Pentaho Data Integration to re-installing the Pentaho shim, but to no avail.
As per the job's flow, the file is correctly transferred to HDFS from my local machine (where Pentaho Data Integration is running), but the moment the MapReduce job starts, it throws this error.
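
For context, my understanding is that this exception is raised by Hadoop's job client when it cannot pick a framework matching mapreduce.framework.name (for example because the property is missing or the matching client jars are not on the classpath). Below is a minimal standalone Java sketch of the settings the error refers to; the hostnames and ports are placeholders, not my actual cluster, and this is not Pentaho's own code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class WordCountConfigCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Placeholder values for a typical YARN setup; the real values must match
        // the cluster that the Pentaho shim points at.
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.address", "resourcemanager-host:8032");
        conf.set("fs.defaultFS", "hdfs://namenode-host:8020");

        // Creating the Job does not contact the cluster yet; submitting it does,
        // and that is the point where "Cannot initialize Cluster" would be thrown
        // if the framework name or server addresses cannot be resolved.
        Job job = Job.getInstance(conf, "word count config check");

        System.out.println("mapreduce.framework.name = "
                + job.getConfiguration().get("mapreduce.framework.name"));
    }
}

Does anyone know what else I should check in the Pentaho shim or Hadoop configuration to get past this?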