According to *Hadoop: The Definitive Guide*:

The new API supports both a “push” and a “pull” style of iteration. In both APIs, key-value record pairs are pushed to the mapper, but in addition, the new API allows a mapper to pull records from within the map() method. The same goes for the reducer. An example of how the “pull” style can be useful is processing records in batches, rather than one by one.

Has anyone pulled data in the Map/Reduce functions? I am interested in the API or example for the same.

Brian Tompsett - 汤莱恩
Praveen Sripati

1 Answer

I posted a query to mapreduce-user@hadoop.apache.org and got the answer.

The next key-value pair can be retrieved by calling nextKeyValue() on the Context object that is passed to the mapper. So, in the new API, you can pull the next record from within the map() method.
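To make the pull style concrete without requiring a Hadoop cluster, here is a minimal, Hadoop-free sketch. The method names nextKeyValue(), getCurrentKey(), and getCurrentValue() match the real new-API Context (org.apache.hadoop.mapreduce); the PullContext class, runInBatches() method, and the batching scheme are stand-ins invented for illustration, not part of Hadoop. The idea mirrors what an overridden Mapper.run() could do: keep pulling records and process them in fixed-size batches rather than one at a time.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hadoop-free sketch of the new API's "pull" iteration style.
public class PullStyleDemo {

    // Mimics the slice of Mapper.Context used for pulling records.
    static class PullContext {
        private final List<String> keys;
        private final List<String> values;
        private int pos = -1;

        PullContext(List<String> keys, List<String> values) {
            this.keys = keys;
            this.values = values;
        }

        boolean nextKeyValue()   { return ++pos < keys.size(); }
        String getCurrentKey()   { return keys.get(pos); }
        String getCurrentValue() { return values.get(pos); }
    }

    // Pull records from the context and group them into fixed-size
    // batches, the way a mapper's overridden run() method could.
    static List<List<String>> runInBatches(PullContext ctx, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        List<String> batch = new ArrayList<>();
        while (ctx.nextKeyValue()) {
            batch.add(ctx.getCurrentKey() + "=" + ctx.getCurrentValue());
            if (batch.size() == batchSize) {
                batches.add(batch);
                batch = new ArrayList<>();
            }
        }
        if (!batch.isEmpty()) {
            batches.add(batch); // flush the final partial batch
        }
        return batches;
    }

    public static void main(String[] args) {
        PullContext ctx = new PullContext(
            Arrays.asList("k1", "k2", "k3", "k4", "k5"),
            Arrays.asList("a", "b", "c", "d", "e"));
        System.out.println(runInBatches(ctx, 2));
        // [[k1=a, k2=b], [k3=c, k4=d], [k5=e]]
    }
}
```

In real Hadoop code the same loop would live in an overridden run(Context) of a Mapper subclass, with map() either unused or called once per batch.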

Is the performance of pull better than push in this scenario? And in which scenarios is pulling useful?

Praveen Sripati
  • I think this scenario will be useful when processing the current key/value pair depends on the next key/value pair. – London guy Jul 26 '12 at 12:00
  • It could have been done with both the old and the new API. The challenge, though, is handling the situation when the data is spread across blocks. – Praveen Sripati Jul 26 '12 at 14:08
  • How can you do it with the old API? – London guy Jul 27 '12 at 08:37
  • Push vs. pull: if there are no limits on the handlers and time is of the essence, you want to push data to a handler as soon as it is available. Otherwise, you want handlers to pull data when they are ready to process it. Pulling happens periodically, so a handling delay can occur. – AlikElzin-kilaka Nov 15 '17 at 16:16