
I'm currently processing about 300 GB of log files on a 10-server Hadoop cluster. My data is saved in folders named YYYYMMDD so each day can be accessed quickly.

My problem is that I just found out today that the timestamps in my log files are in daylight saving time (GMT-0400) instead of UTC as expected. In short, this means that logs/20110926/*.log.lzo contains elements from 2011-09-26 04:00 to 2011-09-27 20:00, and it's pretty much ruining any map/reduce done on that data (e.g. generating statistics).

Is there a way to do a map/reduce job to re-split every log file correctly? From what I can tell, there doesn't seem to be a way, using streaming, to send certain records to output file A and the rest to output file B.

Here is the command I currently use:

/opt/hadoop/bin/hadoop jar /opt/hadoop/contrib/streaming/hadoop-streaming-0.20.2-cdh3u1.jar \
-D mapred.reduce.tasks=15 -D mapred.output.compress=true \
-D mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec \
-mapper map-ppi.php -reducer reduce-ppi.php \
-inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat \
-file map-ppi.php -file reduce-ppi.php \
-input "logs/20110922/*.lzo" -output "logs-processed/20110922/"

I don't know anything about Java and/or creating custom classes. I did try the code posted at http://blog.aggregateknowledge.com/2011/08/30/custom-inputoutput-formats-in-hadoop-streaming/ (pretty much copy/pasted what was there), but I couldn't get it to work at all. No matter what I tried, I would get a "-outputformat : class not found" error.

Thank you very much for your time and help :).

Pierre

2 Answers


From what I can tell, there doesn't seem to be a way, using streaming, to send certain records to output file A and the rest to output file B.

By using a custom Partitioner, you can specify which key goes to which reducer. By default, the HashPartitioner is used. It looks like the only other Partitioner that Streaming supports is the KeyFieldBasedPartitioner.

You can find more details about the KeyFieldBasedPartitioner in the context of Streaming here. You need not know Java to configure the KeyFieldBasedPartitioner with Streaming.
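
For example, assuming a (hypothetical) map-resplit.php that emits the corrected UTC day as the first tab-separated field and the full timestamp as the second, options along these lines should send each day's records to a single reducer, sorted by timestamp within the day:

# Sketch only: map-resplit.php and reduce-resplit.php are placeholder scripts.
# The first two tab-separated fields form the key; -k1,1 partitions on the
# day field alone, so all records for a given UTC day land on one reducer.
/opt/hadoop/bin/hadoop jar /opt/hadoop/contrib/streaming/hadoop-streaming-0.20.2-cdh3u1.jar \
-D mapred.reduce.tasks=15 \
-D stream.num.map.output.key.fields=2 \
-D mapred.text.key.partitioner.options=-k1,1 \
-partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner \
-mapper map-resplit.php -reducer reduce-resplit.php \
-inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat \
-file map-resplit.php -file reduce-resplit.php \
-input "logs/20110926/*.lzo" -output "logs-resplit/20110926/"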

Is there a way to do a map/reduce job to re-split every log file correctly?

You should be able to write an MR job to re-split the files, but I think a Partitioner should solve the problem.
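
For what it's worth, the mapper side of such a re-split job mostly amounts to shifting each timestamp by the offset and emitting the corrected UTC day as the key. A minimal sketch, assuming GNU date and tab-delimited records that start with the local timestamp (your real layout may differ):

#!/bin/sh
# Re-split mapper sketch: convert each record's EDT (-0400) timestamp to UTC
# and emit the UTC day as the first field, keeping the original data intact.
TAB=$(printf '\t')
while IFS="$TAB" read -r ts rest; do
  day=$(date -u -d "$ts -0400" +%Y%m%d)   # e.g. "2011-09-26 23:30" -> 20110927
  printf '%s\t%s\t%s\n' "$day" "$ts" "$rest"
done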

Praveen Sripati
  • I don't see how a custom partitioner will solve my problem of saving my data in separate files. – Pierre Sep 27 '11 at 13:02
  • Using a custom partitioner, all the Key1 pairs can be sent to Reduce1 and all the Key2 pairs to Reduce2, and each reducer will create a separate output file. So K1 and K2 will end up in separate files as reducer output. Based on your requirement, you need to partition the keys (in your problem, the time) accordingly. – Praveen Sripati Sep 27 '11 at 14:12

A custom MultipleOutputFormat and Partitioner seems like the correct way to split your data by day.

As the author of that post, I'm sorry you had such a rough time. If you were getting a "class not found" error, there was likely some issue with your custom output format not being found after you included it with "-libjars".
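
If you take another crack at it, two things worth checking: Hadoop's generic options (including -libjars) must appear before the streaming-specific options, and the jar may also need to be on the client classpath (e.g. via HADOOP_CLASSPATH) for the -outputformat class to resolve when the job is submitted. A rough sketch with placeholder jar and class names:

# All names below are placeholders; the point is option ordering and classpath.
export HADOOP_CLASSPATH=/path/to/custom-formats.jar
/opt/hadoop/bin/hadoop jar /opt/hadoop/contrib/streaming/hadoop-streaming-0.20.2-cdh3u1.jar \
-libjars /path/to/custom-formats.jar \
-D mapred.reduce.tasks=15 \
-mapper map-ppi.php -reducer reduce-ppi.php \
-file map-ppi.php -file reduce-ppi.php \
-outputformat com.example.DateSplitTextOutputFormat \
-input "logs/20110922/*.lzo" -output "logs-processed/20110922/"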

blinsay