
I used this command to run the wordcount example in Hadoop:

hadoop jar /usr/local/Cellar/hadoop/3.0.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount inputWiki/Wiki_data_100MB outputWiki0301

and I got an error message like the one below:

2018-03-01 18:54:14,845 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-03-01 18:54:16,107 INFO beanutils.FluentPropertyBeanIntrospector: Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.

I ran that command on a similar file before and it worked well. Could anyone help me with this?

Update: full output below:

pal-nat186-66-224:bin xujingjing$ hadoop jar /usr/local/Cellar/hadoop/3.0.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount inputGurtenberg0302/gurtenberg.txt outputGurtenberg0302
2018-03-02 17:23:58,961 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-03-02 17:24:00,164 INFO beanutils.FluentPropertyBeanIntrospector: Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.
2018-03-02 17:24:00,226 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-03-02 17:24:00,396 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2018-03-02 17:24:00,397 INFO impl.MetricsSystemImpl: JobTracker metrics system started
2018-03-02 17:24:00,781 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/tmp/hadoop/mapred/staging/xujingjing1314852612/.staging/job_local1314852612_0001
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:8020/user/xujingjing/inputGurtenberg0302/gurtenberg.txt
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:330)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:272)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:394)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:313)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:330)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:203)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
pal-nat186-66-224:bin xujingjing$

JDDD
  • [Why is “Can someone help me?” not an actual question?](http://meta.stackoverflow.com/q/284236/18157) – Jim Garrison Mar 02 '18 at 00:30
  • I mean I would appreciate if you could give me any information on how to resolve it – JDDD Mar 02 '18 at 00:32
  • **Not** an error message. INFO and WARN can be ignored and do not prevent applications from running. That first line has several results if you just search it – OneCricketeer Mar 02 '18 at 00:52
  • Also a [mcve] should be provided so we can reproduce your issues. How is Hadoop installed? What versions? What code are you using? Input files? `mapreduce-examples-2.6.5.jar` probably shouldn't be ran within a Hadoop 3.0 installation – OneCricketeer Mar 02 '18 at 00:54
  • @cricket_007 yes I understand the first line, while I got different result after running the same command. – JDDD Mar 02 '18 at 00:59
  • @cricket_007 yes I installed Hadoop 3.0 on my macOS, and I am using the basic command to run the word count example. I used mapreduce-examples-2.6.5 because I can't find the corresponding word count embedded in version 3.0. The input file is a 100 MB txt file I got by using the Twitter API with Python – JDDD Mar 02 '18 at 01:13
  • Can you show the full output of the command? The code will error if the output folder already exists – OneCricketeer Mar 02 '18 at 01:15
  • Please [edit] the question, don't put it below – OneCricketeer Mar 02 '18 at 22:28
  • @cricket_007 yes I just edited the question. – JDDD Mar 02 '18 at 22:32
  • @cricket_007 what I did is I used -put to upload my local txt file to the HDFS file system as input, then I ran mapreduce-examples-2.6.5 and gave it a new folder as output. – JDDD Mar 02 '18 at 22:37

1 Answer


The error is this:

Input path does not exist: hdfs://localhost:8020/user/xujingjing/inputGurtenberg0302/gurtenberg.txt

So create the directory and upload the file:

hdfs dfs -mkdir -p /user/xujingjing/inputGurtenberg0302/
hdfs dfs -copyFromLocal \
   /path/to/gurtenberg.txt \
   /user/xujingjing/inputGurtenberg0302/
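
After uploading, it's worth verifying the file actually landed where the job will look for it before re-running. A quick check, assuming the same user directory as in the error message:

```shell
# List the HDFS input directory; the file should appear here
hdfs dfs -ls /user/xujingjing/inputGurtenberg0302/

# Relative paths in the wordcount command resolve against /user/<username>,
# which is why "inputGurtenberg0302/gurtenberg.txt" became
# hdfs://localhost:8020/user/xujingjing/inputGurtenberg0302/gurtenberg.txt
```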

> I used that command ran similar file before

The command in your initial post uses a completely different file.
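
As noted in the comments, a 2.6.5 examples jar probably shouldn't be run against a Hadoop 3.0 installation; 3.0 ships its own examples jar under the same `share/hadoop/mapreduce` directory. A sketch, assuming the Homebrew layout from the question (the exact 3.0.0 jar filename is an assumption; check the directory for the real name):

```shell
# Find the examples jar that matches the installed Hadoop version
ls /usr/local/Cellar/hadoop/3.0.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar

# Re-run wordcount with that jar; the output directory must NOT already
# exist, or the job will fail, so use a fresh name each run
hadoop jar /usr/local/Cellar/hadoop/3.0.0/libexec/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0.jar \
    wordcount /user/xujingjing/inputGurtenberg0302 outputGurtenberg0302b
```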

OneCricketeer