
So I have a system with Linux kernel 4.14.73, and I am using values from /proc/meminfo in a program that shows system specs, including memory used and memory reserved. All was well until I saw something really weird: the total committed memory is less than the used memory (or, in /proc/meminfo terms, Committed_AS < MemTotal - MemAvailable). Here is the output of /proc/meminfo:

# cat /proc/meminfo     
MemTotal:       32911616 kB
MemFree:        32322628 kB
MemAvailable:   32360768 kB
Buffers:            4604 kB
Cached:           304088 kB
SwapCached:            0 kB
Active:            83876 kB
Inactive:         263204 kB
Active(anon):      46680 kB
Inactive(anon):      152 kB
Active(file):      37196 kB
Inactive(file):   263052 kB
Unevictable:       83788 kB
Mlocked:           83788 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:        122204 kB
Mapped:            22348 kB
Shmem:              1328 kB
Slab:              52696 kB
SReclaimable:      28548 kB
SUnreclaim:        24148 kB
KernelStack:        2896 kB
PageTables:         2348 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    32911616 kB
Committed_AS:     366544 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       47004 kB
DirectMap2M:     4050944 kB
DirectMap1G:    29360128 kB

So this gives me ~538 MB of used memory but only ~358 MB of committed memory! How is it possible that the total allocated memory in the system is less than the used memory? Or can someone point out if I'm doing something wrong here?
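
For reference, the comparison in my program boils down to roughly this (a minimal Python sketch; the real program just reads these fields):

    # Parse /proc/meminfo into a dict of kB values
    def meminfo():
        fields = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":")
                fields[key.strip()] = int(value.split()[0])  # drop the "kB" suffix
        return fields

    mi = meminfo()
    used = mi["MemTotal"] - mi["MemAvailable"]  # ~538 MB on this system
    print("used:", used, "kB / committed:", mi["Committed_AS"], "kB")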

Any pointers on what is going on here would be greatly appreciated!

igalvez
1 Answer


On Linux, Committed_AS is an estimate of user-space commit only. If you add in the kernel-side consumers, Cached, Slab, KernelStack, and PageTables, that accounts for most of the "missing" few hundred MB.
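
With your numbers, that looks roughly like this (a quick sketch; no exact identity is claimed, since Committed_AS also counts pages that were never faulted in, and MemAvailable already discounts reclaimable cache):

    # Print the "missing" memory next to the kernel-side consumers
    def meminfo(field):
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])  # value in kB

    used = meminfo("MemTotal") - meminfo("MemAvailable")
    gap = used - meminfo("Committed_AS")
    kernel_side = sum(meminfo(f) for f in
                      ("Cached", "Slab", "KernelStack", "PageTables"))
    print(f"used - Committed_AS:                {gap} kB")
    print(f"Cached+Slab+KernelStack+PageTables: {kernel_side} kB")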

Programs don't use all of what they allocate. So the kernel plays clever overcommit games, and hopes it doesn't go bankrupt if everyone fills their allocations with actual data.
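
You can watch that accounting happen (a Linux-only sketch): an untouched anonymous mapping raises Committed_AS immediately, but MemAvailable only drops once the pages are actually written.

    import mmap

    PAGE = 4096
    SIZE = 256 * 1024 * 1024  # 256 MiB

    def meminfo(field):
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])  # kB

    c0, a0 = meminfo("Committed_AS"), meminfo("MemAvailable")
    m = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE)  # reserve, don't touch
    c1, a1 = meminfo("Committed_AS"), meminfo("MemAvailable")
    for off in range(0, SIZE, PAGE):  # now fault every page in
        m[off] = 1
    c2, a2 = meminfo("Committed_AS"), meminfo("MemAvailable")
    m.close()

    print(f"after mmap:  Committed_AS +{c1 - c0} kB, MemAvailable -{a0 - a1} kB")
    print(f"after touch: Committed_AS +{c2 - c0} kB, MemAvailable -{a0 - a2} kB")

On a mostly idle system like yours, the first line should show the full commit jump with almost no change in MemAvailable.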

On many systems, those with most of their memory allocated in user space, Committed_AS can approach MemTotal relatively safely, although far exceeding it leads to paging out and bad performance in general. To be safe, my capacity planning target is to keep Committed_AS below MemTotal.

But this system is well under that threshold, at about 2% memory utilization. (Very underutilized; no capacity concern here.) User-space allocations are barely larger than the kernel's, so the incorrect assumption that Committed_AS covers all user plus kernel allocations no longer fits the data.

John Mahowald
  • Thanks for your reply. I forgot to mention that overcommit is disabled on this system, so the kernel will never allow the allocated memory to surpass the total memory. Anyway, from what you are saying, is the correct way to get the total allocated memory then to sum up the Committed_AS field with Cached, Slab, KernelStack, and PageTables? – igalvez Mar 27 '20 at 12:35
  • Not exactly; that was to illustrate the misconception about how Committed_AS works. What you should do for capacity planning is a rough estimate of workload based on the number of processes and their malloc patterns, shared memory sizes, a bit for the kernel, that kind of thing. Do not be concerned until you use up most of that 98% free memory. Be concerned if Committed_AS is much greater than MemTotal or there is significant page out and in to disk. Between those two extremes the kernel is pretty good at managing its virtual memory. – John Mahowald Mar 27 '20 at 12:56
  • The thing is, I need a tool to inform the user how much memory is being used and how much has been allocated by the system (because overcommit is disabled, this is useful info), as this system is used to run VMs, which can reserve quite a lot of memory even when not much memory is actually being used. I did some tests adding up Committed_AS with the other fields you mentioned, but under heavy allocation the sum of these values sometimes goes above the total memory; probably not a surprise, since apparently this is just a rough estimate? – igalvez Mar 31 '20 at 16:17
  • I was just wondering if there was a more accurate way of getting the total allocated memory... – igalvez Mar 31 '20 at 16:17
  • Committed_AS going higher than MemTotal is the kernel being clever/lazy in assuming not all pages will be used. A safe way to do capacity planning is to not overprovision guest memory. So a 32 GB host might support 7x 4 GB guests. Capacity planning could be its own question. – John Mahowald Mar 31 '20 at 20:54