
I have a container that appears to be leaking memory; or at least, its reported memory consumption climbs quickly. If I run top, I get this:

top - 16:56:51 up 6 days, 17:25,  0 users,  load average: 0.16, 0.27, 0.31
Tasks:   4 total,   1 running,   3 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.3 us,  0.7 sy,  0.0 ni, 98.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   7676380 total,  4089380 used,  3587000 free,   675164 buffers
KiB Swap:        0 total,        0 used,        0 free.  2586496 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    1 root      20   0   46924  15196   6456 S   0.0  0.2   0:15.54 supervisord
    8 root      20   0 3526084  47976  29660 S   0.0  0.6   0:59.15 dotnet
  568 root      20   0   20364   3332   2728 S   0.0  0.0   0:00.09 bash
 2800 root      20   0   21956   2420   2032 R   0.0  0.0   0:00.00 top

The figure reported for the container at this moment is around 90M, which doesn't add up. I'm fairly sure /sys/fs/cgroup/memory/memory.usage_in_bytes is the value being reported:

> cd /sys/fs/cgroup/memory/
> cat cgroup.procs
1
8
568
2494

> cat memory.usage_in_bytes
92282880

> pmap -p 1 | tail -n 1
 total            46924K

> pmap -p 8 | tail -n 1
 total          3599848K

> pmap -p 568 | tail -n 1
 total            20364K

> ps 2494
  PID TTY      STAT   TIME COMMAND

To my eyes there's a significant amount of memory 'missing' here. If I cat memory.usage_in_bytes again, after the time it's taken me to type this:

> cat memory.usage_in_bytes
112291840

> pmap -p 1 | tail -n 1
 total            46924K

> pmap -p 8 | tail -n 1
 total          3452320K

> pmap -p 568 | tail -n 1
 total            20368K
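
In case it's useful for comparison, summing the resident set of everything in the cgroup (rather than the pmap virtual totals above) would look roughly like this; just a sketch against the same cgroup v1 paths, not output I've captured:

for pid in $(cat /sys/fs/cgroup/memory/cgroup.procs); do
  # VmRSS is in kB; silently skip pids that have already exited
  awk '/^VmRSS:/ {print $2}' "/proc/$pid/status" 2>/dev/null
done | awk '{sum += $1} END {print sum " kB resident in total"}'
cat /sys/fs/cgroup/memory/memory.usage_in_bytes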

Nothing obviously accounts for this memory usage. If I look at memory.stat I see this:

# cat memory.stat
cache 89698304
rss 30699520
rss_huge 0
mapped_file 1552384
writeback 0
pgpgin 102007
pgpgout 72613
pgfault 115021
pgmajfault 8
inactive_anon 1519616
active_anon 30789632
inactive_file 417792
active_file 87654400
unevictable 4096
hierarchical_memory_limit 18446744073709551615
total_cache 89698304
total_rss 30699520
total_rss_huge 0
total_mapped_file 1552384
total_writeback 0
total_pgpgin 102007
total_pgpgout 72613
total_pgfault 115021
total_pgmajfault 8
total_inactive_anon 1519616
total_active_anon 30789632
total_inactive_file 417792
total_active_file 87654400
total_unevictable 4096

Then a short moment later:

# cat memory.stat
cache 89972736
rss 30777344
rss_huge 0
mapped_file 1552384
writeback 0
pgpgin 102316
pgpgout 72836
pgfault 115674
pgmajfault 8
inactive_anon 1519616
active_anon 30867456
inactive_file 417792
active_file 87928832
unevictable 4096
hierarchical_memory_limit 18446744073709551615
total_cache 89972736
total_rss 30777344
total_rss_huge 0
total_mapped_file 1552384
total_writeback 0
total_pgpgin 102316
total_pgpgout 72836
total_pgfault 115674
total_pgmajfault 8
total_inactive_anon 1519616
total_active_anon 30867456
total_inactive_file 417792
total_active_file 87928832
total_unevictable 4096

But I'll be honest: I don't really know what I'm looking at here. active_file looks suspicious to me, but again, I'm not sure how to interpret it.
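
To keep an eye on the counters that actually move, I can watch usage alongside the relevant memory.stat lines with something like this (just a rough loop, same cgroup v1 paths):

while sleep 10; do
  # print usage plus the cache/rss/file counters every 10 seconds
  date
  cat /sys/fs/cgroup/memory/memory.usage_in_bytes
  grep -E '^(cache|rss|active_file|inactive_file) ' /sys/fs/cgroup/memory/memory.stat
done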

Some notes and observations:

  • The container is scheduled by Kubernetes.
  • This program writes a lot of data to stdout.
  • Reducing the amount of data written to stdout to near zero makes the reported memory leak go away.
  • Deploying a program which only writes lots of data to stdout does not seem to exhibit the same memory leak(!) (see the sketch just below)
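
By "a program which only writes lots of data to stdout" I mean something roughly like this trivial stand-in (a sketch, not the exact deployment):

while true; do
  # generate a steady stream of stdout without doing anything else
  echo "$(date -u) filler log line to generate stdout volume"
done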

So! How should I go about finding where this memory is getting consumed? Is there anything obvious to anybody - perhaps something's staring me in the face or I'm not looking at something I should be?

Thanks!

ledneb

1 Answer


In short: memory.usage_in_bytes includes the page cache, so I should have been measuring usage minus the cache value from memory.stat. The actual issue, and the perceived memory leak, was that supervisord was writing my program's stdout to a log file, and those writes were piling up in the page cache.
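
A minimal sketch of that calculation, assuming the same cgroup v1 paths as in the question:

# "working set" is (roughly) total usage minus the page cache
usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
cache=$(awk '$1 == "cache" {print $2}' /sys/fs/cgroup/memory/memory.stat)
echo "$(( (usage - cache) / 1024 / 1024 )) MiB actually in use"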

ledneb