
I'm running Ubuntu Server 12.04 LTS in EC2. I have several node.js daemons running as services under upstart, along with the usual init stuff. After every deploy, during which all the node.js daemons restart, the "init" process starts growing at about 0.5MB/min. If I restart a particular one of my daemons, init goes back to <50MB.

What could my process be doing to cause upstart to eat my RAM?

Output from top:

Aug 1 23:51 UTC

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND          
17627 root      20   0  307m  90m 3444 S    0  5.3 146:18.06 chef-client        
    1 root      20   0 67680  44m 1140 S    1  2.6  59:11.04 init               
17857 appserve  20   0  927m  30m 7024 S    4  1.8   2:01.79 node               
17963 appserve  20   0  732m  26m 6504 S    2  1.6   0:36.03 node               
18363 appserve  20   0  728m  21m 6316 S    0  1.3   0:00.71 node               
14798 postgres  20   0  533m  20m  19m S    0  1.2   1:38.83 postgres           
18091 appserve  20   0  726m  16m 6320 S    0  1.0   0:00.66 node               
14801 postgres  20   0  533m  16m  15m S    0  1.0   4:07.21 postgres           
17993 postgres  20   0  538m  16m  12m S    0  1.0   0:09.56 postgres           
17865 postgres  20   0  537m  16m  12m S    0  0.9   0:15.00 postgres          

Aug 2 01:32 UTC

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND          
    1 root      20   0  116m  94m 1140 S    0  5.6  59:51.25 init               
17627 root      20   0  304m  87m 3444 S    0  5.2 147:04.41 chef-client        
17963 appserve  20   0  737m  35m 7192 S    1  2.1   1:25.47 node               
17857 appserve  20   0  926m  27m 7028 S    3  1.6   5:41.82 node               
18363 appserve  20   0  728m  22m 6316 S    0  1.3   0:00.98 node               
14798 postgres  20   0  533m  20m  19m S    0  1.2   1:39.29 postgres           
18091 appserve  20   0  726m  16m 6320 S    0  1.0   0:00.66 node               
14801 postgres  20   0  534m  16m  15m S    0  1.0   4:08.34 postgres           
17993 postgres  20   0  538m  16m  12m S    0  1.0   0:23.08 postgres           
17865 postgres  20   0  537m  16m  13m S    0  1.0   0:30.20 postgres          

**Update:** Looks like it was too much spew to stdout. Thanks for your help, guys!
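For anyone hitting the same symptom: on the Upstart version shipped with Ubuntu 12.04, jobs default to `console log`, which makes init itself capture each job's stdout/stderr, so a very chatty daemon can inflate init's memory use. A hedged sketch of a job file that stops init from capturing output (the service name, paths, and user here are made up for illustration):

```
# /etc/init/myapp.conf -- hypothetical upstart job
description "node.js app server"
respawn

# Upstart 1.4+ defaults to "console log": init captures the job's
# stdout/stderr into /var/log/upstart/. Disable that capture and
# have the process manage its own log file instead.
console none
exec su -s /bin/sh -c 'exec node /srv/app/server.js >> /var/log/myapp.log 2>&1' appserve
```

Alternatively, reduce what the daemon writes to stdout in the first place, which is what the update above suggests solved it.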

wolak

1 Answer


The short answer is that upstart is eating all of your RAM because the system has nothing else to do with the RAM. Your system isn't under any memory pressure, so it pretty much just leaves RAM used wherever it winds up. It takes effort to reclaim memory and so long as the system has no need, it simply doesn't bother.

David Schwartz
  • Exactly what I was going to say. I've never understood the fascination with having lots of "free" RAM in *any* modern operating system. RAM the computer can't find a use for is RAM you wasted money buying. – Rob Moir Aug 02 '12 at 07:11
  • So let me get this straight: The _resident size_ of an _individual process_ will only decrease under pressure on Linux? As far as I've understood, the only way the OS can reduce the _resident_ footprint of a process is to swap out pages to disk. – wolak Aug 02 '12 at 17:17
  • @epall: That's correct. And your understanding is incorrect. The resident footprint of a process can be reduced other ways, for example when clean, unmodified pages are discarded. (For example, pages containing code from shared libraries.) – David Schwartz Aug 02 '12 at 18:38
  • If the pages were swapped in and not written to, and thus have a copy on disk, either in swap space, or from the original mapped files, they are "not dirty" and thus can be quickly reclaimed without a swap-out. – Skaperen Aug 02 '12 at 18:43
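To make the point in these comments concrete: a process's resident size can drop without any swap-out when the kernel simply unmaps pages it can cheaply recreate. A small sketch (assuming Linux and Python 3.8+, where `mmap.madvise` exists; not from the original thread) that watches its own RSS fall after `MADV_DONTNEED`:

```python
import mmap

def rss_kb():
    # Read this process's resident set size from /proc (Linux-specific).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # value is in kB

size = 64 * 1024 * 1024  # 64 MiB private anonymous mapping
mm = mmap.mmap(-1, size, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)

before = rss_kb()
# Touch one byte per page so every page becomes resident.
for off in range(0, size, mmap.PAGESIZE):
    mm[off] = 0xFF
peak = rss_kb()

# Tell the kernel we no longer need the contents; it can reclaim the
# pages immediately. No swap I/O happens -- later accesses just get
# zero-fill-on-demand pages.
mm.madvise(mmap.MADV_DONTNEED)
after = rss_kb()

print(f"RSS before: {before} kB, after touching: {peak} kB, after DONTNEED: {after} kB")
mm.close()
```

Run on a Linux box, the middle figure should sit roughly 64 MiB above the other two, showing resident pages being reclaimed with zero swap activity.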