
I am trying to run a job on an AWS EMR cluster. The problem I'm getting is the following:

aws java.io.IOException: No FileSystem for scheme: hdfs

I don't know where exactly my problem resides (in my Java JAR or in the configuration of the job).

In my S3 bucket I create a folder (input) and put a bunch of files with my data in it. Then in the job arguments I pass the path to the input folder, and that same path is set as the input via FileInputFormat.addInputPath(job, new Path(args[0])).
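For context, the relevant part of my driver looks roughly like this (class and job names are simplified placeholders, not my exact code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyJobDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "my-emr-job");
        job.setJarByClass(MyJobDriver.class);
        // args[0] is the S3 input folder, args[1] the output location
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}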

My first question is: will the job grab all the files in the input folder and process them all, or do I have to supply the path of each file individually?

My second question is: how can I resolve the exception above?

Thanks

1 Answer


Keep your input files in S3, e.g. s3://mybucket/input/. Put all the files to be processed in the input folder under your bucket.

In your MapReduce driver, add the input path as below:

FileInputFormat.addInputPath(job, new Path("s3n://mybucket/input/"));

This will automatically process all files under the input folder; you do not need to list each file separately.
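A minimal sketch of the relevant driver lines, assuming the example bucket above and a standard org.apache.hadoop.mapreduce.Job named job:

// Point the job at the whole folder; Hadoop enumerates every file inside it.
FileInputFormat.addInputPath(job, new Path("s3n://mybucket/input/"));
// Equivalently, several locations can be added as one comma-separated string:
// FileInputFormat.addInputPaths(job, "s3n://mybucket/input/,s3n://mybucket/other/");

Note that reading directly from the s3n:// location also sidesteps the hdfs scheme entirely. In general, "No FileSystem for scheme: hdfs" means the HDFS FileSystem implementation (hadoop-hdfs) is not registered on the job's classpath, which commonly happens when a fat JAR overwrites the META-INF/services/org.apache.hadoop.fs.FileSystem registration during packaging.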

Sandesh Deshmane