
I'm trying to set up a MapReduce task that uses the Parallel Scan feature of DynamoDB.

Basically, I want each Mapper class to take a tuple as the input value.

Every example I've seen so far sets this:

FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));

Can I set the input format for the job to be a HashMap instead?


1 Answer


I think you want to read your file as key-value pairs rather than in the standard way, where each input split is read with the byte offset as the key and the line as the value. If that is what you are asking, you can use KeyValueTextInputFormat. The description below is from Hadoop: The Definitive Guide:

KeyValueTextInputFormat

TextInputFormat’s keys, being simply the offset within the file, are not normally very useful. It is common for each line in a file to be a key-value pair, separated by a delimiter such as a tab character. For example, this is the output produced by TextOutputFormat, Hadoop’s default OutputFormat. To interpret such files correctly, KeyValueTextInputFormat is appropriate.

You can specify the separator via the key.value.separator.in.input.line property. It is a tab character by default. Consider the following input file, where → represents a (horizontal) tab character:

line1→On the top of the Crumpetty Tree
line2→The Quangle Wangle sat,
line3→But his face you could not see,
line4→On account of his Beaver Hat.
Like in the TextInputFormat case, the input is in a single split comprising four records, although this time the keys are the Text sequences before the tab in each line:

(line1, On the top of the Crumpetty Tree)
(line2, The Quangle Wangle sat,)
(line3, But his face you could not see,)
(line4, On account of his Beaver Hat.)
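
For example, here is a minimal driver sketch, assuming Hadoop 2.x with the new (org.apache.hadoop.mapreduce) API, where KeyValueTextInputFormat is available; the class names and the job name are just placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class KeyValueDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The separator defaults to a tab. This is the property name
        // quoted above; newer releases use
        // "mapreduce.input.keyvaluelinerecordreader.key.value.separator".
        conf.set("key.value.separator.in.input.line", "\t");

        Job job = Job.getInstance(conf, "key-value input example");
        job.setJarByClass(KeyValueDriver.class);

        // Each line becomes a (Text, Text) record, split on the first separator
        job.setInputFormatClass(KeyValueTextInputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(TupleMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}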
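
With KeyValueTextInputFormat both the key and the value arrive as Text, so your mapper's input types change accordingly. A minimal identity mapper sketch (TupleMapper is a hypothetical name matching the driver above):

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TupleMapper extends Mapper<Text, Text, Text, Text> {
    @Override
    protected void map(Text key, Text value, Context context)
            throws IOException, InterruptedException {
        // key is e.g. "line1", value is the rest of the line after the tab
        context.write(key, value); // identity pass-through for illustration
    }
}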