
Since yesterday I have been having huge trouble with my MongoDB. I have a 100GB collection with over 50M documents. Running a find() and a .count() on them was never an issue until yesterday.

Somehow, since yesterday, a simple find-and-count query takes over 4 hours (it used to take about 10 seconds). The index seems to exist, and on my server only the RAM seems to be filled (85GB out of 96GB are used).

My first thought is that it tries to load the 100GB into RAM, and since it can't, it stops using the index... Can this happen?
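
If it helps to verify that hypothesis, this is roughly how index usage can be checked from the mongo shell (the collection and field names are taken from my query below; in the winning plan, IXSCAN means the index is used, COLLSCAN means a full collection scan):

db.getCollection('users').getIndexes()  // confirm the index actually exists
db.getCollection('users').find({"sessions.webInstanceId": 123456873, "sessions.timeStart": {$gt: 1500940800}}).explain("queryPlanner").queryPlanner.winningPlan  // show only the chosen plan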

Does anyone have an idea what could have happened?

Thanks in advance.

EDIT:

Mongo Version: v4.0.3
Ubuntu kernel version: 4.15.0-36-generic
Explain for the following query (I had to put a limit on it, or else it was never-ending...):

db.getCollection('users').find({"sessions.webInstanceId": 123456873, "sessions.timeStart": {$gt: 1500940800}}).limit(10000).explain(true)

Output of explain: https://pastebin.com/TqMUyKag

EDIT 2: With more research, I also noticed I am having issues with IO wait. [Screenshots: IO wait graph, iotop output]

Could this be the reason my queries are slow? My disks use ext4, which I see is not best practice.
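
As a rough sanity check on cache pressure from inside mongod (a sketch only; the field names below are from the standard serverStatus output, and whether the last counter rises quickly between calls indicates heavy reads from disk):

var cache = db.serverStatus().wiredTiger.cache
cache["maximum bytes configured"]      // configured WiredTiger cache size
cache["bytes currently in the cache"]  // how full the cache currently is
cache["pages read into cache"]         // compare across calls: fast growth means disk reads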

  • Please edit your question to include the specific version of MongoDB server (`x.y.z`) and O/S as well as the output of `explain(true)` for one of your slow queries. Does your collection include 100GB of data or is that the size of the collection on disk? Also, how are you measuring the RAM usage? – Stennie Jan 22 '19 at 08:32
  • Thank you for your help in advance. I added the explain output to the original post. According to Compass, the data in the collection seems to be 178.6GB, with an average document size of 9.6KB, whereas the backup of the collection is 104GB. I am measuring the RAM with a simple htop; mongod is the only service on the server. – Bastian Jakobsen Jan 22 '19 at 13:51
  • @Stennie Any ideas? – Bastian Jakobsen Jan 28 '19 at 16:58
