2

We set up a dedicated Graylog2 server (with a Rails Unicorn app, MongoDB, and Elasticsearch) on a virtual machine with 2 GB of RAM a couple of days ago.

RAM consumption just keeps climbing, and I'm getting high-consumption alerts quite frequently.

I'm trying to estimate how much RAM I'll need to centralize all the syslogs and Rails logs for 25+ servers. Does anyone have experience with this?

Alternatively, does anyone have a way to keep Graylog2 and its dependent applications (Unicorn, MongoDB, Elasticsearch) under 2 GB of RAM?

EDIT 2013-02-20: It turns out RAM is not really the problem after a small boost to 2.25 GB. The problem is now CPU load: graylog-server is consuming almost 100% of all 8 CPU cores.

Raphael
  • 69
  • 2
  • 11
  • Can you give us more details about this "high consumption alert"? Is it warning about consumption of physical RAM? Or high resident set size on a process? Or what? – David Schwartz Feb 07 '13 at 16:47
  • Raphael, Is the setup performing poorly or are you just alarmed by the memory usage? – draper7 Feb 07 '13 at 16:49
  • I have Monit keeping an eye on the hardware (virtual hardware, actually). It was sending me alerts for RAM consumption over 75% and an average load of 4 or 5. That's Monit's default alarm, but I think a machine should be able to use much more than 50% of its resources in normal use. I just added a little more RAM and CPU (+250 MB and +500 MHz). – Raphael Feb 11 '13 at 08:10
  • if you are using 0.11.0, change in /etc/graylog2.conf, set `processor_wait_strategy = blocking` – HVNSweeting Mar 30 '13 at 04:52

1 Answer

2

MongoDB will tend towards 100% resident memory over time as long as the data set (data plus indexes) exceeds the available RAM. It will eventually find a "steady state" whereby new (recently touched) data is paged into or kept in RAM and old data (Least Recently Used) is paged out. The only way to avoid this is to have a data set that is smaller than available memory, otherwise it will happen eventually (though it may take hours/days/weeks/months depending on how quickly you access the data).

This is nothing to be worried about - it is similar to the erroneous reporting you see around memory-mapped files and memory consumption in general. The kernel manages the memory allocation and will page out MongoDB data if other processes need it. It is worth bearing in mind when writing things like high-memory-utilization alarms, however: for most MongoDB systems those alarms are meaningless. You would be far better off looking at page fault rates or disk I/O as a proxy instead (see the metrics in MMS for more).
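To illustrate the page-fault approach: MongoDB exposes a cumulative `extra_info.page_faults` counter in its `serverStatus` output (on Linux), so a rate can be derived from two snapshots taken some interval apart. This is only a rough sketch - the snapshot values below are made up for illustration, and in practice you would fetch real snapshots (e.g. via `db.command("serverStatus")` with pymongo) rather than hard-coding them:

```python
# Sketch: derive a page-fault rate from two MongoDB serverStatus snapshots.
# "extra_info.page_faults" is a real serverStatus field on Linux; the
# sample numbers below are hypothetical, for illustration only.

def page_fault_rate(status_then, status_now, interval_secs):
    """Return page faults per second between two serverStatus snapshots."""
    then = status_then["extra_info"]["page_faults"]
    now = status_now["extra_info"]["page_faults"]
    return (now - then) / float(interval_secs)

# Hypothetical snapshots taken 60 seconds apart:
sample_a = {"extra_info": {"page_faults": 120000}}
sample_b = {"extra_info": {"page_faults": 126000}}

rate = page_fault_rate(sample_a, sample_b, 60)
print("page faults/sec: %.1f" % rate)  # 6000 faults over 60 s -> 100.0
```

An alarm on a sustained high fault rate (or on disk I/O) tells you the working set no longer fits in RAM, which is the condition that actually hurts performance - unlike resident memory, which is expected to stay near 100%.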

Adam C
  • 5,222
  • 2
  • 30
  • 52
  • MongoDB is not consuming more than 11 MB of RAM. Shortly after a reboot, Elasticsearch is consuming 43% of the RAM, and graylog-server 60% of the CPU. – Raphael Feb 20 '13 at 09:54
  • Then there's very little data in MongoDB on that system - it may still grow over time as you add data, so I was basically warning about that behavior. I can only speak to that part of it, I'm afraid - I've never used either Graylog or Elasticsearch. – Adam C Feb 20 '13 at 10:18