
Hi, I have implemented my average word count in Java on the Cloudera VM 4.2.1, converted it to a JAR file, and ran the command:

hadoop jar averagewordlength.jar stubs.AvgWordLength shakespeare wordleng

Next: the job ran correctly on the Shakespeare input, but I am unable to run it on my own file (which I created: newfile). It throws an exception:

Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://0.0.0.0:8020/user/training/newfile
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1064)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1081)
    at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:993)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:946)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java)

Please guide me on which path I should put newfile so I can check my solution.

Pramod Gharu
Pretham

2 Answers


It seems your Hadoop configuration is incorrect.

hdfs://0.0.0.0 is not a valid NameNode address; the client's default filesystem should point to a real hostname, such as hdfs://localhost:8020.

Cloudera VM 4.2.1? Try downloading the newer CDH 5.x VM.
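One way to check which filesystem address the client is actually using (a sketch, assuming the standard hdfs and hadoop CLIs are on the VM's PATH):

```shell
# Print the NameNode address the client is configured with
# (fs.defaultFS; older CDH releases use the key fs.default.name)
hdfs getconf -confKey fs.defaultFS

# Confirm connectivity by listing the training user's HDFS home directory
hadoop fs -ls /user/training
```

If the first command prints hdfs://0.0.0.0:8020, the default filesystem in core-site.xml needs to be corrected before jobs can resolve input paths.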

OneCricketeer

I got it working by copying the file into HDFS with the command:

hadoop fs -put localpath

Pretham