I am using Hadoop CDH 4.1.2, and my mapper program is almost an echo of its input data. But in my job status page, I saw
FILE: Number of bytes written 3,040,552,298,327
which is almost equal to
FILE: Number of bytes read 3,363,917,397,416
for the mappers, even though I have already set
conf.set("mapred.compress.map.output", "true");
It seems the compression algorithm is not being applied to my job. Why is this?
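For reference, here is a hedged sketch of how I imagine the configuration could be made more explicit. CDH4 carries both the old-style (`mapred.*`) and new-style (`mapreduce.*`) property names, and I am assuming that setting both, along with an explicit codec, would rule out the property name being silently ignored; the Snappy codec below is an assumption, not something my job requires:

```java
// Sketch (assumptions): set map-output compression under both the
// old-style (MRv1) and new-style (MRv2) property names, and pin an
// explicit codec so the default codec choice is not a factor.
Configuration conf = new Configuration();
conf.setBoolean("mapred.compress.map.output", true);     // old-style name
conf.setBoolean("mapreduce.map.output.compress", true);  // new-style name
conf.set("mapred.map.output.compression.codec",
         "org.apache.hadoop.io.compress.SnappyCodec");
conf.set("mapreduce.map.output.compress.codec",
         "org.apache.hadoop.io.compress.SnappyCodec");
```

If the old-style key alone were honored, I would expect the FILE bytes written counter to shrink, since it should reflect the compressed spill files.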