
I have a very odd issue on one of my OpenVZ containers. The memory usage reported by top, htop, free and the OpenVZ tools is ~4 GB out of the allocated 10 GB.

When I list the processes by memory usage or use the ps_mem.py script, I only get ~800 MB of memory usage. Similarly, when I browse the process list in htop, I am unable to pinpoint the memory-hogging offender.
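
For reference, this is roughly how I compare the numbers from inside the container (exact flags may vary with your distro's procps build):

    # total usage as the container sees it
    free -m
    # resident memory per process, biggest first
    ps aux --sort=-rss | head -n 20
    # summed per-process usage (run as root for accurate numbers)
    python ps_mem.py

The per-process figures never come anywhere near the ~4 GB that free reports.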

There is definitely a process leaking RAM in my container, but even when usage hits critical levels and I stop everything in that container (except for ssh, init and the shells), I cannot reclaim the RAM. Only restarting the container helps; otherwise the OOM killer eventually starts kicking in inside the container.

I was under the assumption that a leaky process releases all of its RAM when killed, and that you can observe its misbehavior via top or similar tools.

If anyone has ever experienced behavior like this, I would be grateful for any hints. The container is running Icinga 2 (which I suspect of leaking RAM), although most of the time the monitoring process sits idle, as it manages to execute all its scheduled checks in a more than timely manner - so I'd expect the RAM usage to drop at those times. It doesn't, though.

kovalsky

1 Answer


I had a similar issue in the past, and in the end it was solved by the hosting company where I had my OpenVZ container. I think the best approach would be to open a support ticket with your hoster, explain the problem to them and ask them to investigate. Maybe they are running an outdated kernel version, or they made changes on the server that affect your OVZ container.
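
One thing that may help narrow it down before (or while) the hoster investigates is the container's beancounter accounting. A minimal check, assuming a stock OpenVZ kernel (column names and layout can differ between kernel versions):

    # inside the container, as root
    cat /proc/user_beancounters
    # 'held'/'maxheld' are current and peak usage per resource,
    # 'failcnt' counts how often the corresponding limit was hit
    grep dcachesize /proc/user_beancounters

A resource whose held value keeps growing, or whose failcnt is nonzero, is usually the one triggering the OOM behaviour inside the container.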

Bogdan Stoica
  • I figured it out - OpenVZ has something called dcache. It grows and grows, but eventually takes up so much RAM that the OOM killer starts going crazy inside the container. I reconfigured the container with a fixed dcache size and suddenly there is no phantom memory usage. I have no clue why it would misbehave like this. – kovalsky Feb 22 '17 at 12:05
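
For anyone hitting the same symptom: the counter involved is the dcachesize beancounter. A rough sketch of pinning it to a fixed barrier/limit, assuming a classic UBC-style setup (the CTID and byte values below are placeholders, and the command is run on the hardware node, not inside the container):

    # give container 101 a fixed dcache barrier:limit (values in bytes)
    vzctl set 101 --dcachesize 268435456:295279001 --save
    # --save persists it as DCACHESIZE="..." in /etc/vz/conf/101.conf
    # afterwards, usage can be watched from inside the container:
    grep dcachesize /proc/user_beancounters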