
After shutting down a Java application (which makes heavy use of DirectByteBuffer) running inside a Docker container, a large amount of used memory remains unaccounted for:

$ free -hg
               total        used        free      shared  buff/cache   available
 Mem:           755G        305G        449G         17M        448M        448G
 Swap:          4.0G          0B        4.0G

Doing something like this:

$ ps -e -o pid= -o comm= -o rss= | awk 'BEGIN{rss_total = 0} {rss_total = rss_total + $3} END {print "RSS total(GB): " rss_total/1024/1024}'
RSS total(GB): 0.698826
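
A rough cross-check that avoids double counting shared pages is to sum Pss from /proc/<pid>/smaps_rollup instead of RSS (a sketch; smaps_rollup needs kernel 4.14 or newer, it has to run as root to read every process, and processes may vanish while the glob is being read):

$ awk '/^Pss:/ {total += $2} END {print "PSS total(GB): " total/1024/1024}' /proc/[0-9]*/smaps_rollup 2>/dev/null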

I am missing around 300GB(!) of memory.

I am aware that the total sum of RSS won't match the numbers reported by free (due to shared pages, buffers, caches, etc.), but still... 300GB?

Contents of /proc/meminfo follow:

MemTotal:       792419424 kB
MemFree:        471276328 kB
MemAvailable:   470340492 kB
Buffers:              52 kB
Cached:           272672 kB
SwapCached:            0 kB
Active:           477128 kB
Inactive:         170892 kB
Active(anon):     365132 kB
Inactive(anon):    27764 kB
Active(file):     111996 kB
Inactive(file):   143128 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       4194300 kB
SwapFree:        4194300 kB
Dirty:                 8 kB
Writeback:             0 kB
AnonPages:        375316 kB
Mapped:           216800 kB
Shmem:             17596 kB
Slab:             191792 kB
SReclaimable:      58096 kB
SUnreclaim:       133696 kB
KernelStack:       14512 kB
PageTables:        13808 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    400404012 kB
Committed_AS:    2241808 kB
VmallocTotal:   34359738367 kB
VmallocUsed:     1787248 kB
VmallocChunk:   33821919228 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      450496 kB
DirectMap2M:    804855808 kB
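
Summing the obvious kernel-side and shared-memory consumers from /proc/meminfo is another sanity check (a rough sketch; the field selection is approximate, and VmallocUsed may include mappings that are not backed by physical RAM):

$ awk '/^(Shmem|Slab|KernelStack|PageTables|VmallocUsed):/ {sum += $2} END {print "kernel+shmem(GB): " sum/1024/1024}' /proc/meminfo
$ grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo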

The application uses Active Pivot, an in-memory database.

The server runs with vm.overcommit_memory = 1; this is a requirement of the Active Pivot component.

The container image is built on top of anapsix/alpine-java:8u192b12_server-jre.
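
For completeness, it may also be worth checking whether a stopped or paused container still holds on to the memory, and whether anything is left behind in tmpfs or SysV shared memory (a sketch; the relevant mounts and container names depend on the setup):

$ docker ps -a        # is the container really gone, or only stopped/paused?
$ ipcs -m             # leftover SysV shared memory segments
$ df -h -t tmpfs      # RAM-backed tmpfs mounts (counted under Shmem/Cached)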

  • How about checking if any individual process is still hanging on to the RAM rather than summarising it? – tink Dec 15 '19 at 16:46
  • I could not find anything. – JoGa Dec 16 '19 at 06:44
  • How did you run your docker container? If the container is still alive, or if it is only paused, it may be retaining the allocated memory to be able to resume from the same point. – Kineolyan Dec 23 '19 at 20:34

0 Answers