
I am creating a program to analyze PDF, DOC and DOCX files. These files are stored in HDFS.

When I start my MapReduce job, I want the map function to receive the filename as the key and the binary contents as the value. I then want to create a stream reader that I can pass to the PDF parser library. How can I make the key/value pair for the map phase be filename/file contents?

I am using Hadoop 0.20.2

This is the code I currently have to start a job (it uses the old JobConf API):

public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(PdfReader.class);
    conf.setJobName("pdfreader");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(Map.class);
    conf.setReducerClass(Reduce.class);

    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
}

I know there are other InputFormat types, but is there one that does exactly what I want? I find the documentation quite vague. If one is available, what should the map function's input types look like?

Thanks in advance!

Christophe
  • For new readers: take a look at Hadoop Archives (HAR) and Sequence files, which are more suitable for input to Hadoop. – Christophe May 16 '12 at 14:12

3 Answers


The solution is to create your own FileInputFormat class that does this. You can get the name of the input file from the FileSplit that this FileInputFormat receives (getPath). Be sure to override isSplitable in your FileInputFormat so that it always returns false.
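Here is a minimal sketch of such an input format, using the old "mapred" API that matches your driver (the class names WholeFileInputFormat and WholeFileRecordReader are illustrative, not from a library):

import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

// One whole file per record: filename as key, raw bytes as value.
public class WholeFileInputFormat extends FileInputFormat<Text, BytesWritable> {

    @Override
    protected boolean isSplitable(FileSystem fs, Path filename) {
        return false; // never split: each file goes to exactly one mapper
    }

    @Override
    public RecordReader<Text, BytesWritable> getRecordReader(
            InputSplit split, JobConf job, Reporter reporter) throws IOException {
        return new WholeFileRecordReader((FileSplit) split, job);
    }
}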

You will also need a custom RecordReader that returns the entire file as a single "Record" value.
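A matching RecordReader sketch: it reads the whole split into memory once and hands it back as a single key/value pair:

import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;

public class WholeFileRecordReader implements RecordReader<Text, BytesWritable> {

    private final FileSplit split;
    private final JobConf job;
    private boolean processed = false;

    public WholeFileRecordReader(FileSplit split, JobConf job) {
        this.split = split;
        this.job = job;
    }

    public boolean next(Text key, BytesWritable value) throws IOException {
        if (processed) return false; // exactly one record per split
        Path file = split.getPath();
        key.set(file.getName()); // the filename becomes the map key

        // Read the entire file into memory in one go.
        byte[] contents = new byte[(int) split.getLength()];
        FileSystem fs = file.getFileSystem(job);
        FSDataInputStream in = fs.open(file);
        try {
            in.readFully(0, contents);
        } finally {
            IOUtils.closeStream(in);
        }
        value.set(contents, 0, contents.length);
        processed = true;
        return true;
    }

    public Text createKey() { return new Text(); }
    public BytesWritable createValue() { return new BytesWritable(); }
    public long getPos() { return processed ? split.getLength() : 0; }
    public float getProgress() { return processed ? 1.0f : 0.0f; }
    public void close() throws IOException { }
}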

Be careful with files that are too big: this approach effectively loads the entire file into RAM, and the default heap for a Hadoop child task is only 200 MB.
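With that in place, the map function's input types are exactly the pair you asked for. A hypothetical mapper, with the parsing itself left out:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public static class Map extends MapReduceBase
        implements Mapper<Text, BytesWritable, Text, IntWritable> {

    public void map(Text fileName, BytesWritable contents,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        // Wrap the raw bytes in a stream for the PDF/DOC parser library.
        InputStream in = new ByteArrayInputStream(
                contents.getBytes(), 0, contents.getLength());
        // ... feed 'in' to the parser and collect results here ...
    }
}

In the driver you would then replace TextInputFormat with conf.setInputFormat(WholeFileInputFormat.class).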

Niels Basjes

You can use WholeFileInputFormat (https://code.google.com/p/hadoop-course/source/browse/HadoopSamples/src/main/java/mr/wholeFile/?r=3)

In the mapper, you can get the name of the file like this:

public void map(NullWritable key, BytesWritable value, Context context)
        throws IOException, InterruptedException {

    // The current input split identifies the file this map call is processing.
    Path filePath = ((FileSplit) context.getInputSplit()).getPath();
    String fileNameString = filePath.getName();

    // getBytes() returns the padded backing array, so copy only the valid
    // portion up to getLength() (requires java.util.Arrays).
    byte[] fileContent = Arrays.copyOf(value.getBytes(), value.getLength());
}
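The linked class uses the newer "mapreduce" API, so the driver would be wired up roughly like this (a sketch; MyMapper is a stand-in for the mapper above, and FileInputFormat/FileOutputFormat here are the org.apache.hadoop.mapreduce.lib versions):

Configuration conf = new Configuration();
Job job = new Job(conf, "pdfreader");
job.setJarByClass(PdfReader.class);
job.setInputFormatClass(WholeFileInputFormat.class); // whole file as one record
job.setMapperClass(MyMapper.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.waitForCompletion(true);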
Markovich

As an alternative to your approach, maybe add the binary files to HDFS directly. Then create an input file that contains the HDFS paths of all the binary files; this input file could be generated dynamically using Hadoop's FileSystem class. Lastly, create a mapper that processes that input by opening input streams, again using FileSystem.
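A sketch of that last step, using the old "mapred" API to match your driver (PathListMapper and the Text output types are illustrative; each input line is assumed to hold one HDFS path):

import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class PathListMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

    private JobConf conf;

    @Override
    public void configure(JobConf conf) {
        this.conf = conf; // keep the job conf to resolve the FileSystem later
    }

    public void map(LongWritable key, Text value,
            OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        // Each input line is the HDFS path of one binary file.
        Path file = new Path(value.toString().trim());
        FileSystem fs = file.getFileSystem(conf);
        FSDataInputStream in = fs.open(file);
        try {
            // ... pass 'in' to the PDF/DOC parser and emit results here ...
        } finally {
            in.close();
        }
    }
}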

Brent Worden
  • Hey Brent thanks for your answer. I am gonna use it if I don't find a better alternative! I might lose the _rack awareness_ features that come with Hadoop. Opening a dfs filename might mean opening a file that resides 'far away' while another file might be close. (I need to prove scalability and factors of speedup to a certain degree) – Christophe Apr 19 '11 at 13:58