
There is a database with 9 million rows covering 3 million distinct entities. It is loaded into MongoDB every day using the Perl driver. The first load runs smoothly, but from the second load onward the process slows down dramatically: it blocks for long stretches every now and then.
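The load is essentially one upsert per input line. A minimal sketch of that kind of loop, assuming a tab-separated input file and using placeholder names (mydb, entities, entity_id are not the real schema):

    use strict;
    use warnings;
    use MongoDB;

    my $client     = MongoDB::MongoClient->new(host => 'mongodb://localhost:27017');
    my $collection = $client->get_database('mydb')->get_collection('entities');

    while (my $line = <STDIN>) {
        chomp $line;
        my ($entity_id, $value) = split /\t/, $line;

        # Upsert: update the entity if it already exists, insert it otherwise.
        $collection->update(
            { entity_id => $entity_id },
            { '$set' => { value => $value } },
            { upsert => 1 },
        );
    }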

I initially thought this was caused by the automatic flush to disk every 60 seconds, so I tried setting syncdelay to 0 and running with the nojournal option. I have also indexed the fields used for the upsert. The blocking is inconsistent and does not always happen at the same line of the input.
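For reference, the index on the upsert key was created along these lines (a sketch only, again with the placeholder field name entity_id):

    # Ensure the field used as the upsert criterion is indexed,
    # so each upsert does an index lookup rather than a collection scan.
    $collection->ensure_index({ entity_id => 1 });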

I have 17 GB of RAM and plenty of disk space. I am replicating to two servers with one arbiter. There are no significant processes running in the background. Is there a solution or explanation for this blocking?

UPDATE: mongostat reports around 3.6 GB in the 'res' column.

Sai
  • How much of your 17 GB is actually used by mongo? Did you try running mongostat? What's in the 'res' column? – Asya Kamsky Jul 12 '13 at 03:28
  • Btw, when you do the second and third loads, are you emptying the DB first or loading "on top" of existing entries? If the latter, that would explain the slowdown: you are populating and searching through a much larger dataset. – Asya Kamsky Jul 12 '13 at 03:29
  • @AsyaKamsky Around 5 GB of my RAM is being used by Mongo; I have no idea how to increase it. I will try running mongostat and update the question. Thank you very much for the insight. :) – Sai Jul 14 '13 at 02:01
  • When loading the second and subsequent files I am updating the existing data set, as you said, because I want it to reflect each entity's most recent data whether or not it appeared in the latest file (the total number of records rarely grows beyond 4 million). I could accept that as the reason for the slowdown, but I doubt it, because the pauses are inconsistent (sometimes one minute, sometimes ten) regardless of the data. If a growing dataset were the cause, shouldn't every line just be slower, rather than the load pausing at random? Thanks – Sai Jul 14 '13 at 02:01
  • Are you running mongod with any particular special options (other than nojournal and syncdelay 0)? What version is this and what OS? It might be easier to discuss this on the mongodb-users google group. – Asya Kamsky Jul 14 '13 at 06:24
  • @AsyaKamsky No, nothing other than nojournal and syncdelay 0. This is MongoDB 2.4 and the OS is SUSE Linux Enterprise Server. :) I will try posting it on the google group too. Thank you very much. – Sai Jul 14 '13 at 17:08
  • @AsyaKamsky I have updated the question with the 'res' column from mongostat. Thanks – Sai Jul 15 '13 at 16:49

0 Answers