We use MongoDB 3.4.14 on a machine with 8 cores and 32 GB RAM. I ran a load test with JMeter: with 70 threads the results are acceptable, but as the load increases the response time grows exponentially and throughput drops drastically. I have already tried increasing the ulimit.
Sharding is the next step; apart from that, is there any other performance optimization I can do?
Update
@Jeet, here are the findings:
- Are there a lot of aggregation queries? What kind of collection structure do you have?
The load test runs a single aggregation query, and every document has the same set of fields. Would fixing the document size help? How can I do that?
- Are there a lot of nested arrays?
Answer: No nested arrays.
- Is it a single instance or a replica set? Try a replica set with reads and writes directed to different nodes.
For now we want to run on a single node only.
- Are the queries returning data from multiple collections?
No, only 1 collection.
- What percentage of operations is your instance page-faulting on?
With a load of 500 users I don't see many page faults, only double-digit counts.
- Check your logs for operations with high nscanned or scanAndOrder during periods of high lock/queue, and index accordingly.
How can I check that?
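One way to check this (a sketch, assuming MongoDB 3.x and the 3.x Java driver; the connection details and database name `mydb` are placeholders) is to enable the database profiler and look at `system.profile`. Note that in 3.x the old `nscanned`/`scanAndOrder` fields appear as `keysExamined`/`docsExamined` and `hasSortStage`; a `planSummary` of `COLLSCAN` means no index was used at all:

```java
import com.mongodb.MongoClient;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class SlowOpCheck {
    public static void main(String[] args) {
        // Placeholder connection and database name.
        try (MongoClient client = new MongoClient("localhost", 27017)) {
            MongoDatabase db = client.getDatabase("mydb");

            // Profile level 1: record operations slower than 100 ms.
            db.runCommand(new Document("profile", 1).append("slowms", 100));

            // Inspect recorded slow operations: COLLSCAN means no index was used.
            // Also compare docsExamined with nreturned, and look for hasSortStage
            // (an in-memory sort, the 3.x equivalent of scanAndOrder).
            for (Document op : db.getCollection("system.profile")
                                 .find(new Document("planSummary", "COLLSCAN"))) {
                System.out.println(op.toJson());
            }
        }
    }
}
```

The same information also shows up in the mongod log for any operation slower than `slowms`, so grepping the log for `COLLSCAN` is a quicker first pass.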
- Check your queries for CPU-intensive operators like $all, $push/$pop/$addToSet, as well as updates to large documents, and especially updates to documents with large arrays (or large subdocument arrays).
Yes, under the above load the CPU is saturated and responses are delayed. We do a group-by ($group) followed by a sort with a limit.
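For reference, a group-then-sort-with-limit pipeline in the 3.x Java driver looks like the sketch below (the field name `total`, the grouping key `userId`, and the database/collection names are placeholder assumptions). A sort on a computed field can never use an index, so keeping the limit small matters: MongoDB coalesces adjacent $sort + $limit into a top-k sort that only keeps `limit` documents in memory. Passing `allowDiskUse` prevents a large $group from failing at the 100 MB per-stage memory limit:

```java
import static java.util.Arrays.asList;

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Accumulators;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Sorts;
import org.bson.Document;

public class GroupSortLimit {
    public static void main(String[] args) {
        try (MongoClient client = new MongoClient("localhost", 27017)) {
            // Placeholder database/collection names.
            MongoCollection<Document> collection =
                    client.getDatabase("mydb").getCollection("mycoll");

            // $group by userId, then $sort by the computed count, then $limit.
            for (Document d : collection.aggregate(asList(
                        Aggregates.group("$userId", Accumulators.sum("total", 1)),
                        Aggregates.sort(Sorts.descending("total")),
                        Aggregates.limit(10)))
                    .allowDiskUse(true)) {
                System.out.println(d.toJson());
            }
        }
    }
}
```

If the pipeline starts with a $match or a $sort on a stored (not computed) field, putting that stage first lets it use an index; stages after $group cannot.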
- if your database is write-heavy, keep in mind that only one CPU per database can write at a time (owing to that thread holding the write lock). Consider moving part of that data into its own database.
Our database is mostly read-heavy; the collection is populated once a day.
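Since the data is written once a day and then only read, it is cheap to keep the read path fully indexed; building or confirming indexes right after the daily load avoids any index-maintenance cost during the read-heavy hours. A sketch, assuming the 3.x Java driver (the database/collection names are placeholders; `userId` matches the query used in the test further below):

```java
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class EnsureIndexes {
    public static void main(String[] args) {
        try (MongoClient client = new MongoClient("localhost", 27017)) {
            // Placeholder database/collection names.
            MongoCollection<Document> collection =
                    client.getDatabase("mydb").getCollection("mycoll");

            // Build (or confirm) the index after the daily load, so every
            // userId lookup is an index scan instead of a COLLSCAN.
            // createIndex is a no-op if the index already exists.
            collection.createIndex(Indexes.ascending("userId"));
        }
    }
}
```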
Apart from this, I tried a simple test by putting the code below in a for loop:

    Document findQuery = new Document("userId", "Sham");
    try (MongoCursor<Document> iterator = collection.find(findQuery).iterator()) {
        while (iterator.hasNext()) {
            iterator.next();   // actually consume the results; find() alone is lazy
        }
    }

I used an executor to run the loop concurrently:

    ExecutorService executorService = Executors.newFixedThreadPool(100);

Even with this the performance is slow; each call takes around 900 ms to return.
1 concurrent request: ~150 ms per request
100 concurrent requests: ~900 ms per request
When I look at mongostat during the 500-user run, it shows:
insert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn time
*0 *0 *0 *0 0 1|0 0.0% 0.0% 0 317M 28.0M 0|0 0|0 156b 45.1k 3 Oct 12 15:31:19.644
*0 *0 *0 *0 0 1|0 0.0% 0.0% 0 317M 28.0M 0|0 0|0 156b 45.1k 3 Oct 12 15:31:20.650
*0 *0 *0 *0 0 3|0 0.0% 0.0% 0 317M 28.0M 0|0 0|0 218b 46.1k 3 Oct 12 15:31:21.638
*0 *0 *0 *0 0 2|0 0.0% 0.0% 0 317M 28.0M 0|0 0|0 158b 45.4k 3 Oct 12 15:31:22.638
*0 *0 *0 *0 0 1|0 0.0% 0.0% 0 317M 28.0M 0|0 0|0 157b 45.4k 3 Oct 12 15:31:23.638
*0 376 *0 *0 0 112|0 0.0% 0.0% 0 340M 30.0M 0|0 0|0 64.9k 23.6m 26 Oct 12 15:31:24.724
*0 98 *0 *0 0 531|0 0.0% 0.0% 0 317M 27.0M 0|0 0|0 109k 6.38m 3 Oct 12 15:31:25.646
*0 *0 *0 *0 0 2|0 0.0% 0.0% 0 317M 27.0M 0|0 0|0 215b 45.6k 3 Oct 12 15:31:26.646
*0 *0 *0 *0 0 1|0 0.0% 0.0% 0 317M 27.0M 0|0 0|0 157b 45.1k 3 Oct 12 15:31:27.651
*0 *0 *0 *0 0 2|0 0.0% 0.0% 0 317M 27.0M 0|0 0|0 159b 45.8k 3 Oct 12 15:31:28.642