
I wrote the following code to sort text with MapReduce:

public static class SortMapper extends Mapper<Object, Text, Text, Text> {
    private Text citizenship = new Text();

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        // key each line by its 12th comma-separated field (citizenship)
        citizenship.set(value.toString().split(",")[11]);
        context.write(citizenship, value);
    }
}

public static class PrintReducer extends Reducer<Text, Text, NullWritable, Text> {

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        Iterator<Text> valIt = values.iterator();

        while (valIt.hasNext()) {
            Text value = valIt.next();
            context.write(NullWritable.get(), value);
        }
    }
}

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "Football Sort");
    job.setJarByClass(FootballSort.class);
    job.setMapperClass(SortMapper.class);
    job.setCombinerClass(PrintReducer.class);
    job.setReducerClass(PrintReducer.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);
    job.setOutputKeyClass(NullWritable.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}

but it always throws:

IOException in lines 26, 34 reason: class org.apache.hadoop.io.NullWritable is not class org.apache.hadoop.io.Text

asked by Jack Loki, edited by OneCricketeer

2 Answers


Your output types do not match your job configuration. In your main method you set the map output types to Text:

job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);

but in public static class PrintReducer extends Reducer<Text, Text, NullWritable, Text> you declare the output types as NullWritable and Text.
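Side by side, the type parameters from the question's code look like this (a sketch annotating that code, not new API):

```
// Driver: map output types (K2, V2)
job.setMapOutputKeyClass(Text.class);    // K2 = Text
job.setMapOutputValueClass(Text.class);  // V2 = Text

// PrintReducer declares: Reducer<Text, Text, NullWritable, Text>
//                                K2    V2    K3            V3
// Used as the reducer, this is fine: (K3, V3) only has to match
// setOutputKeyClass / setOutputValueClass.
// Used as the combiner, it is not: a combiner's output is fed back into
// the shuffle, so its output types must equal the map output types
// (K2, V2) = (Text, Text), not (NullWritable, Text).
```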

answered by HbnKing
  • But I need the reduce output to contain only values, without keys, as described here: https://stackoverflow.com/questions/23601380/hadoop-mapreduce-how-to-store-only-values-in-hdfs – Jack Loki Nov 07 '17 at 22:07
  • My mapper output is set as Text, Text: job.setMapOutputKeyClass(Text.class); job.setMapOutputValueClass(Text.class); // My reducer (job) output is set as NullWritable, Text: job.setOutputKeyClass(NullWritable.class); job.setOutputValueClass(Text.class); – Jack Loki Nov 07 '17 at 23:37
  • The map output key cannot be null. If it is, the map output and the reduce input will not match, and the job will fail with an error. – HbnKing Nov 14 '17 at 10:00

As explained in this comment on a similar question: "@Abhinay: You cannot make use of combiners in this case. Combiners are mini reducers whose operation is commutative and associative, and a combiner's signature should match the reducer's. If the combiner signature is " ", you will get an error, as the reducer input key and value are Text and IntWritable, but the combiner's output key and value classes are Text and NullWritable." – Unmesha SreeVeni Dec 28 '15 at 5:51
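The commutative/associative requirement can be seen with a small plain-Java sketch (no Hadoop needed; `CombinerContract`, `sum`, and `avg` are illustrative names, not part of the question's code):

```java
import java.util.List;

public class CombinerContract {
    // A combiner may run zero, one, or many times over arbitrary groupings
    // of the map output, so its operation must be commutative and
    // associative, and its output must be valid reducer input.

    // Summing is safe: combining partial sums equals one global sum.
    public static int sum(List<Integer> xs) {
        return xs.stream().mapToInt(Integer::intValue).sum();
    }

    // A naive average is NOT safe: an average of partial averages
    // generally differs from the true average.
    public static double avg(List<Integer> xs) {
        return xs.stream().mapToInt(Integer::intValue).average().orElse(0);
    }

    public static void main(String[] args) {
        List<Integer> values = List.of(1, 2, 3, 4, 5, 6);

        int direct = sum(values);
        int viaPartials = sum(List.of(sum(values.subList(0, 2)),
                                      sum(values.subList(2, 6))));
        System.out.println(direct + " == " + viaPartials); // 21 == 21

        double trueAvg = avg(values); // 3.5
        double avgOfAvgs = avg(List.of((int) avg(values.subList(0, 2)),
                                       (int) avg(values.subList(2, 6))));
        System.out.println(trueAvg + " != " + avgOfAvgs); // 3.5 != 2.5
    }
}
```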

The fix is to comment out or delete this line:

//job.setCombinerClass(PrintReducer.class);
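For completeness, here is the driver from the question with only the combiner line removed (a sketch, assuming the same FootballSort class and imports as above):

```
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "Football Sort");
    job.setJarByClass(FootballSort.class);
    job.setMapperClass(SortMapper.class);
    // no combiner: PrintReducer emits (NullWritable, Text), which cannot be
    // fed back into the shuffle expecting the map output types (Text, Text)
    job.setReducerClass(PrintReducer.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);
    job.setOutputKeyClass(NullWritable.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
```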

answered by Jack Loki