After installing MongoDB on a Linux server, I noticed that the mapReduce function is much slower than db.count()
and db.find()
. For example, I execute the following script
for (var i = 0; i < 4789302; ++i) {
    db.collection5.insert({
        item: "journal",
        qty: 25,
        tags: ["blank", "red"],
        size: { h: 16, w: 21, uom: "cm" }
    })
}
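For reference, the mapReduce call I am timing looks roughly like this (a sketch, not my exact code — the "h16" key name is just illustrative; this only runs inside the mongo shell against a live server):

```javascript
db.collection5.mapReduce(
    // map: runs once per document, in MongoDB's JavaScript engine
    function () {
        if (this.size.h === 16) emit("h16", 1);
    },
    // reduce: merges the emitted values for each key
    function (key, values) {
        return Array.sum(values);
    },
    { out: { inline: 1 } }
)
```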
to insert some small JSON documents into a collection. The execution time for db.collection5.count({"size.h": 16})
is around 1.66 seconds, but the equivalent mapReduce over the same data (a 478 MB dataset with 2394651 rows) costs around 28 seconds. Moreover, if I add indexes on the relevant fields, db.collection5.find()
and db.collection5.count()
become much faster, but the execution time for mapReduce stays the same.
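My rough mental model of the difference (a plain Node.js sketch, not mongo shell — the collection here is a hypothetical in-memory array, and the map/reduce signatures are simplified): count() can be answered by a native scan or an index walk, while mapReduce has to invoke user-supplied JavaScript for every single document and then merge the emitted values, and it cannot use an index on size.h for the match.

```javascript
// Hypothetical in-memory stand-in for collection5, shaped like the inserted docs.
const docs = Array.from({ length: 1000 }, () => ({
    item: "journal",
    qty: 25,
    tags: ["blank", "red"],
    size: { h: 16, w: 21, uom: "cm" },
}));

// count({"size.h": 16}): one native-style pass, no per-document user code.
const count = docs.filter((d) => d.size.h === 16).length;

// Simplified mapReduce: the user `map` function is called once per document,
// emitting key/value pairs, then `reduce` merges the values for each key.
function mapReduce(collection, map, reduce) {
    const emitted = new Map();
    for (const doc of collection) {
        map(doc, (key, value) => {
            if (!emitted.has(key)) emitted.set(key, []);
            emitted.get(key).push(value);
        });
    }
    const out = {};
    for (const [key, values] of emitted) out[key] = reduce(key, values);
    return out;
}

const result = mapReduce(
    docs,
    (doc, emit) => { if (doc.size.h === 16) emit("h16", 1); },
    (key, values) => values.reduce((a, b) => a + b, 0)
);

// Both give the same answer, but the mapReduce path paid two JavaScript
// function calls per matching document instead of a native scan.
console.log(count, result.h16); // → 1000 1000
```

This is only my guess at the mechanism, which is what I would like confirmed or corrected.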
Can someone explain the reason for this behavior?