I recently ran some evaluations on CouchDB and found that memory consumption is quite high, both for view construction (map/reduce) and for importing larger JSON documents. I evaluated view construction on an Ubuntu system (4 cores, Intel® Xeon® CPU E3-1240 v5 @ 3.50GHz). Here are the results:
- four hundred 100 KB documents (≈40 MB total) used around 683 MB of memory;
- one 80 MB document used around 2.5 GB of memory;
- four 80 MB documents (320 MB total) used around 10 GB of memory.
Based on these numbers, memory consumption is roughly 17-31 times the size of the original JSON. At that rate, a 1 GB dataset would need tens of GB, and CouchDB would run out of memory on this machine. Does anyone know why memory consumption is so high?
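For context, the kind of view construction I'm measuring follows the general pattern below. This is a minimal sketch rather than my exact setup: the server URL, credentials, database name, and the map/reduce bodies are placeholders.

```python
# Minimal sketch of building a CouchDB map/reduce view over HTTP.
# The server URL, credentials, and database name are placeholders.
import requests

COUCH = "http://admin:password@localhost:5984"
DB = "testdb"

# Design document with a simple map/reduce view (placeholder functions):
# a JavaScript map function plus CouchDB's built-in _count reduce.
design_doc = {
    "_id": "_design/example",
    "views": {
        "by_type": {
            "map": "function (doc) { emit(doc.type, 1); }",
            "reduce": "_count",
        }
    },
}

# Upload the design document.
requests.put(f"{COUCH}/{DB}/{design_doc['_id']}", json=design_doc).raise_for_status()

# CouchDB builds the view index lazily, so querying the view is what
# actually triggers index construction (and the memory spike I see).
resp = requests.get(
    f"{COUCH}/{DB}/_design/example/_view/by_type",
    params={"group": "true"},
)
resp.raise_for_status()
print(resp.json())
```

Many thanks!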