
I've been able to kick off job flows using the elastic-mapreduce Ruby library just fine. Now I have an instance that is still 'alive' after its jobs have finished. I've logged in to it using SSH and would like to start another job, but each of my attempts has failed because Hadoop can't find the input file. I've tried storing the input file both locally and on S3.

How can I create new hadoop jobs directly from within my SSH session?

The errors from my attempts:

(first attempt, using local files that I'd uploaded over SFTP):

hadoop jar hadoop-0.20-streaming.jar \
-input /home/hadoop/mystic/search_sets/test_sample.txt \
-output /home/hadoop/mystic/search_sets/test_sample_output.txt \
-mapper /home/hadoop/mystic/ctmp1_mapper.py \
-reducer /home/hadoop/mystic/ctmp1_reducer.py \
-file /home/hadoop/mystic/ctmp1_mapper.py \
-file /home/hadoop/mystic/ctmp1_reducer.py

11/10/04 22:33:57 ERROR streaming.StreamJob: Error Launching job :Input path does not exist: hdfs://ip-xx-xxx-xxx-xxx.us-west-1.compute.internal:9000/home/hadoop/mystic/search_sets/test_sample.txt

(second attempt, using S3):

hadoop jar hadoop-0.20-streaming.jar \
-input s3n://xxxbucket1/test_sample.txt \
-output /home/hadoop/mystic/search_sets/test_sample_output.txt \
-mapper /home/hadoop/mystic/ctmp1_mapper.py \
-reducer /home/hadoop/mystic/ctmp1_reducer.py \
-file /home/hadoop/mystic/ctmp1_mapper.py \
-file /home/hadoop/mystic/ctmp1_reducer.py

11/10/04 22:26:45 ERROR streaming.StreamJob: Error Launching job : Input path does not exist: s3n://xxxbucket1/test_sample.txt
Trindaz

1 Answer


The first will not work. Hadoop will look for that location in HDFS, not local storage. It might work if you use the file:// prefix, like this:

-input file:///home/hadoop/mystic/search_sets/test_sample.txt

I've never tried this with streaming input, though, and it probably isn't the best idea even if it does work.
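
If you want to try it anyway, the only change is the -input URI. Here's a sketch based on the command from the question (I've also dropped the .txt from -output, since -output names a directory that must not already exist, not a file):

hadoop jar hadoop-0.20-streaming.jar \
-input file:///home/hadoop/mystic/search_sets/test_sample.txt \
-output /home/hadoop/mystic/search_sets/test_sample_output \
-mapper /home/hadoop/mystic/ctmp1_mapper.py \
-reducer /home/hadoop/mystic/ctmp1_reducer.py \
-file /home/hadoop/mystic/ctmp1_mapper.py \
-file /home/hadoop/mystic/ctmp1_reducer.py

Note that without a scheme on the -output path, the output directory will still be created in HDFS.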

The second (S3) should work. We do this all the time. Make sure the file actually exists with:

hadoop dfs -ls s3n://xxxbucket1/test_sample.txt

Alternately, you could put the file in HDFS and use it normally. For jobs in EMR, though, I usually find S3 to be the most convenient.
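
For the HDFS route, a minimal sketch (the /user/hadoop destination path is just an example; pick whatever layout you like):

# copy the local file into HDFS
hadoop dfs -mkdir /user/hadoop/search_sets
hadoop dfs -put /home/hadoop/mystic/search_sets/test_sample.txt /user/hadoop/search_sets/

# then point the streaming job at the HDFS copy
-input /user/hadoop/search_sets/test_sample.txt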

ajduff574
  • file:/// works a treat. I also found that changing s3n:// to s3:// got the S3 input working. – Trindaz Oct 05 '11 at 18:04
  • Ah, maybe you uploaded it with s3, rather than s3n? I don't think the two are compatible. http://wiki.apache.org/hadoop/AmazonS3 – ajduff574 Oct 07 '11 at 15:43
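
Regarding the s3:// vs s3n:// point in the comments, a quick way to see which scheme can actually read the object is to list it with both (bucket name taken from the question; adjust to yours):

hadoop dfs -ls s3://xxxbucket1/test_sample.txt
hadoop dfs -ls s3n://xxxbucket1/test_sample.txt

Whichever command finds the file is the scheme to use in -input.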