
Is there a way to tell mongodump or mongod to free the currently used RAM?

I have an instance running a MongoDB server with a couple of databases that total around 2 GB. The instance has 5 GB of RAM. Every night a backup cron job runs mongodump. I have also set up a 5 GB swap file. Every other night the OOM killer kills mongod: memory drops to ~30%, spikes back up to ~60% at backup time, and stays there until the next backup spikes it again and the OOM killer kicks in.
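The backup job itself is just a cron entry running mongodump, roughly of this form (the schedule, output path, and log file below are placeholders, not my exact setup):

    # nightly mongodump of all databases; schedule and paths are illustrative
    0 2 * * * /usr/bin/mongodump --out /backup/mongodump-$(date +\%F) >> /var/log/mongodump.log 2>&1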

Before this I had 3.75 GB and no swap, and mongod was getting killed every night and sometimes during the day when it was in use. I added more RAM and the swap file a few days ago, which has improved things, but it still gets killed every other day and memory after the backup sits at ~60%. I'm also paying for extra RAM that is only used for these spikes during backups.

If I run mongostat I can see that mongod increases its used RAM during the backup but does not free it afterwards. Is there a way to free it, short of stopping and starting mongod?
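For reference, the cache numbers behind mongostat's "used" column can also be read from serverStatus; a minimal way to check (mongosh shown here, the legacy mongo shell accepts the same --eval; assumes a local, unauthenticated instance):

    mongosh --quiet --eval '
        // WiredTiger cache stats that the mongostat "used" percentage is derived from
        const c = db.serverStatus().wiredTiger.cache;
        print("maximum bytes configured  :", c["maximum bytes configured"]);
        print("bytes currently in cache  :", c["bytes currently in the cache"]);
    '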

Before backup:

insert query update delete getmore command dirty used flushes vsize   res qrw arw net_in net_out conn                time
    *0    *0     *0     *0       0     3|0  0.0% 1.0%       0 1.10G  100M 0|0 1|0   212b   71.0k    3 May  3 16:14:41.461

During backup:

insert query update delete getmore command dirty  used flushes vsize   res qrw arw net_in net_out conn                time
    *0    *0     *0     *0       1     1|0  0.0% 81.2%       0 2.61G 1.55G 0|0 2|0   348b   33.5m    7 May  3 16:16:01.464

After backup:

insert query update delete getmore command dirty  used flushes vsize   res qrw arw net_in net_out conn                time
    *0    *0     *0     *0       0     2|0  0.0% 79.7%       0 2.65G 1.62G 0|0 1|0   158b   71.1k    4 May  3 16:29:18.015
wren
  • How large have you configured the cache? – Joe May 04 '21 at 02:46
  • It's at 1.93 GB, which I'm guessing is the default ½ of (RAM − 1 GB). Do you reckon it should be different? Thanks – wren May 04 '21 at 19:14
  • I run into this as well on a 3 member replica set in k8s. The only solution I've come up with is to stepdown the primary then terminate that instance after the backup. – James Nov 10 '22 at 14:12
  • Also, there is an open issue on jira to add a feature which limits total memory usage: https://jira.mongodb.org/browse/SERVER-39402 – James Nov 10 '22 at 14:14
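Following up on the cache-size discussion above: with 5 GB of RAM the default WiredTiger cache works out to 0.5 × (5 GB − 1 GB) = 2 GB, which lines up with the ~1.93 GB figure. If capping the cache turns out to help, it can be set either at startup or on a running instance; a sketch with placeholder sizes (not a confirmed fix for the OOM kills):

    # cap the WiredTiger cache at startup (1.5 GB here is a placeholder value)
    mongod --wiredTigerCacheSizeGB 1.5 --config /etc/mongod.conf

    # or adjust it on a running mongod without a restart
    mongosh --quiet --eval 'db.adminCommand({
        setParameter: 1,
        wiredTigerEngineRuntimeConfig: "cache_size=1500M"
    })'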

0 Answers