
What are the key differences between doing map/reduce work on MongoDB using Hadoop map/reduce versus MongoDB's built-in map/reduce?

When should I pick which map/reduce engine? What are the pros and cons of each engine for working on data stored in MongoDB?

– iCode

4 Answers


My answer is based on knowledge and experience of Hadoop MR and on learning MongoDB MR. Let's see what the major differences are and then try to define criteria for selection. The differences are:

  1. Hadoop's MR can be written in Java, while MongoDB's is in JavaScript.
  2. Hadoop's MR is capable of utilizing all cores, while MongoDB's is single-threaded.
  3. Hadoop MR will not be colocated with the data, while MongoDB's will be colocated.
  4. Hadoop MR has millions of engine-hours behind it and can cope with many corner cases: massive output sizes, data skew, etc.
  5. There are higher-level frameworks like Pig, Hive, and Cascading built on top of the Hadoop MR engine.
  6. Hadoop MR is mainstream, and a lot of community support is available.

From the above I can suggest the following criteria for selection:
Select MongoDB MR if you need simple group-by and filtering and do not expect heavy shuffling between map and reduce. In other words - something simple.
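For instance, a simple filtered group-by in MongoDB's map/reduce might look roughly like the sketch below (the "orders" collection and its fields are made up for illustration, not taken from the question):

    // Minimal sketch: total order amount per customer, for shipped orders only.
    // Collection and field names (orders, custId, amount, status) are hypothetical.
    var mapFn = function () {
        emit(this.custId, this.amount);      // key: customer, value: order amount
    };

    var reduceFn = function (key, values) {
        return Array.sum(values);            // sum the amounts emitted per customer
    };

    db.orders.mapReduce(mapFn, reduceFn, {
        query: { status: "shipped" },        // filtering happens before the map phase
        out: { inline: 1 }                   // return results inline (small result sets only)
    });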

Select Hadoop MR if you're going to do complicated, computationally intensive MR jobs (for example, some regression calculations). Having a large or unpredictable amount of data between map and reduce also suggests Hadoop MR.

Java is a stronger language with more libraries, especially statistical ones. That should be taken into account.

– David Gruzman
  • Great points, thank you. Do you think keeping the data in Mongo and NOT using HDFS is going to be a big bottleneck? My data size is around 10 TB and highly structured, and my computations are both simple and complex. Keeping the data in Mongo gives us many benefits, but I am not sure whether not using HDFS could be problematic at all. – iCode Feb 15 '12 at 12:19
  • And one more question: is it safe to say Hadoop will be faster even on simple M/R jobs? – iCode Feb 15 '12 at 12:21
  • My knowledge of MongoDB is limited. To the best of my understanding, this system is built for random access, around indexing - it is a system built for online serving. At the same time, HDFS is built for sequential access and heavy scans, and all trade-offs are made in that direction. Therefore I do not expect MongoDB to be good at scans... With this size of data it is a tough question, and I think more information is needed to decide - specifically, whether affecting MongoDB performance is critical. – David Gruzman Feb 15 '12 at 12:53
  • Regarding performance on simple queries - Hadoop is not efficient; it has several layers, and the lightweight MR implementation of MongoDB, working inside the system, might have the edge. We can connect and discuss what would be the right way to make a test. – David Gruzman Feb 15 '12 at 12:55
  • Great points, let's actually do so and connect, as this could be valuable testing. – iCode Feb 15 '12 at 21:57

As of MongoDB 2.4, MapReduce jobs are no longer single-threaded.

Also, see the Aggregation Framework for a higher-performance, declarative way to perform aggregates and other analytical workloads in MongoDB.
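For example, a filtered group-by like the map/reduce sketch in the answer above could be expressed as an aggregation pipeline (again using the hypothetical "orders" collection):

    // Rough aggregation-framework equivalent of the earlier map/reduce sketch;
    // it runs in native code rather than the JavaScript engine.
    db.orders.aggregate([
        { $match: { status: "shipped" } },                   // filter first
        { $group: { _id: "$custId",                          // group by customer
                    total: { $sum: "$amount" } } }           // sum amounts per customer
    ]);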

– kstirman

Item 3 is certainly incorrect when it comes to Hadoop. Colocating processing with the data is part of the foundation of Hadoop.

– vfisher

I don't have a lot of experience with Hadoop MR, but my impression is that it only works on HDFS, so you would have to duplicate all of your Mongo data in HDFS. If you are willing to duplicate all of your data, I would guess Hadoop MR is much faster and more robust than Mongo MR.

– nnythm
  • That is actually not the case. This project, https://github.com/mongodb/mongo-hadoop, helps you run Hadoop directly on Mongo data; there is no need to move it to HDFS. – iCode Feb 15 '12 at 09:52
  • Hadoop MR can work with any data source that you can access from Java, not just HDFS. – Marquez Jun 13 '13 at 16:03