
I had a very unfortunate situation where a bash script containing a subtle error went wild, took all available memory, and then started killing other tasks (production apps!) to get even more.

How can I future-proof invocations of this and other scripts so that when they reach the memory limit they fail themselves instead of killing other apps?

I'd prefer something I could incorporate into the text of the script.

Imaskar

3 Answers


On Linux, https://www.kernel.org/doc/Documentation/sysctl/vm.txt documents a variety of tunables for the virtual memory system.

For example, `vm.oom_kill_allocating_task=1`:

If this is set to non-zero, the OOM killer simply kills the task that triggered the out-of-memory condition. This avoids the expensive tasklist scan.

There is no guarantee your production apps won't be the ones to trigger the OOM condition, but a runaway allocation is more likely to hit it first.
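A minimal sketch of setting it at runtime and persisting it across reboots (the `90-oom.conf` file name is an arbitrary choice):

```
# Apply immediately
sudo sysctl -w vm.oom_kill_allocating_task=1

# Persist across reboots
echo 'vm.oom_kill_allocating_task = 1' | sudo tee /etc/sysctl.d/90-oom.conf
sudo sysctl --system
```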

There is also a per-process score you can tweak to make specific processes more or less likely to be killed for their memory: `/proc/$PID/oom_score_adj` (the older `/proc/$PID/oom_adj` interface is deprecated). You probably want to set it in your init scripts; systemd.exec has `OOMScoreAdjust`.
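For instance, from inside the script itself, a minimal sketch (1000 makes the process the preferred OOM victim; -1000 would exempt it entirely):

```
# Make this shell and its children the preferred OOM-killer victims
echo 1000 > /proc/self/oom_score_adj
```

In a systemd unit the equivalent is `OOMScoreAdjust=1000` in the `[Service]` section.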

You can also disable the OOM killer entirely, but under extreme memory pressure the system may become unresponsive or panic.

John Mahowald

`ulimit -m` will let you set a resident set size (RSS) limit for a process, and `ulimit -v` will let you do likewise for its virtual memory footprint.
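Since you want something you can put in the script itself, a minimal sketch (the 1 GiB figure is just an assumed example; `ulimit -v` takes a value in KiB):

```
#!/bin/bash
# Cap this script's virtual memory at ~1 GiB. Allocations beyond the limit
# fail, so the script dies instead of starving the rest of the system.
ulimit -v 1048576

# ... rest of the script ...
```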

MadHatter
  • So, I ended up with a mix of @MadHatter's and @JohnMahowald's answers. First, `ulimit -m` is deprecated (see https://unix.stackexchange.com/a/129592/285241); second, `vm.oom_kill_allocating_task=1` won't help, because both the offending script and the prod apps allocate actively. That led me to use `ulimit -v` and `echo 1000 > /proc/self/oom_score_adj`. – Imaskar Aug 01 '18 at 05:13

You can use supervisord to manage the process; Supervisor can enforce a memory quota for it.
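A minimal sketch of such a setup, assuming the superlance `memmon` event listener (supervisord on its own does not enforce memory limits; the program name, command path, and 200 MB threshold below are placeholders):

```
[program:myscript]
command=/usr/local/bin/myscript.sh
autorestart=true

[eventlistener:memmon]
; memmon is installed separately (pip install superlance); it restarts
; myscript whenever its memory use exceeds 200 MB.
command=memmon -p myscript=200MB
events=TICK_60
```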