
I am trying to figure out how to limit the memory usage of the mongo daemon (mongod) to 4 GB.

I thought of using limits.conf with memlock, but I am not sure that's the right way to do it.

From the limits.conf man page I understood that ulimits apply to users rather than to individual processes, and the definition of "memlock" is also not clear to me:

memlock
  maximum locked-in-memory address space (KB)
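
For context, the kind of entry I was considering looks something like this (a sketch only; the mongodb user name and the 4 GB value are just my example):

    # /etc/security/limits.conf: limits apply per user (or group), not per process
    mongodb    hard    memlock    4194304    # 4 GB, expressed in KB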

How can I limit the process memory usage?

smintz
  • Setting hard limits on the memory of processes through the OS seems like a bad idea. Unless the program was written with a strict memory limit in mind, it's likely to behave in unexpected ways when approaching that limit (crashes, strange bugs, etc.). I think this is better controlled within the application; the only reason I can think of to do this is to prevent an untrusted process from consuming too much memory (like a fork bomb). – Dana the Sane Sep 04 '11 at 14:48
  • Solution for MongoDB on Ubuntu 14.04 http://stackoverflow.com/a/37015518/1241725 – brainsucker May 03 '16 at 22:46

2 Answers


MongoDB uses memory-mapped files for all storage, and it appears that you can limit the size of these files.
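
For example, a sketch assuming an MMAPv1-era mongod, where the smallfiles option applies:

    # /etc/mongodb.conf
    # smallfiles caps each data file at 512 MB instead of letting them grow to 2 GB
    smallfiles = true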

However, if you're concerned about the large memory numbers displayed in top, they don't correspond (exactly) with the physical memory usage of Mongo. Some of that memory will be memory-mapped disk, depending on your I/O cache settings; see Checking Memory Usage (MongoDB).
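
To see Mongo's own breakdown, something like this should work (assumes the legacy mongo shell; sizes are reported in MB):

    # serverStatus separates resident (physical) memory from virtual/mapped memory
    mongo --eval 'printjson(db.serverStatus().mem)'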

Dana the Sane

memlock limits a user's use of pages that cannot be swapped out, e.g. huge pages. It is not what you are looking for.

You don't want to use ulimit -v, since with MongoDB your VSS will include your entire memory-mapped dataset. You could try ulimit -m to limit the RSS, but that has no effect on Linux 2.6 and newer. Even if it did, hitting the limit could result in strange behavior as the program's attempts to allocate memory fail.
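
To illustrate (the 4 GB figure is just the number from the question, expressed in KB):

    # Caps virtual address space, but with mmap'd storage this counts the whole
    # mapped dataset, so mongod would be starved long before using 4 GB of RAM
    ulimit -v 4194304
    # Caps RSS in theory, but is silently ignored on Linux 2.6 and newer
    ulimit -m 4194304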

A better approach is to use cgroups; jlebar has a tutorial on this, and a minimal sketch follows the list below. The two key advantages of cgroups over ulimit are that

  1. It works
  2. When RSS usage nears the limit, Linux's normal memory reclamation algorithms kick in. For MongoDB, I think this will result in cached data being dropped.
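
A minimal sketch of that approach (cgroups v1; the paths and the mongod group name are assumptions, and your distro may differ):

    # Create a memory cgroup and cap its RSS + page cache at 4 GB
    sudo mkdir -p /sys/fs/cgroup/memory/mongod
    echo 4G | sudo tee /sys/fs/cgroup/memory/mongod/memory.limit_in_bytes
    # Move the running mongod into the group (assumes a single mongod PID)
    echo "$(pidof mongod)" | sudo tee /sys/fs/cgroup/memory/mongod/tasks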
sciurus