I have a Kubernetes Pod with:
- Requested memory of 1500 MiB
- Memory limit of 2048 MiB
Two containers run inside this pod: the actual application (a heavy Java app) and a lightweight log shipper.
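For context, the resources section looks roughly like this. This is a minimal sketch, assuming the pod-level request is split between the two containers; the names, images, and the exact split are placeholders, not our actual manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod                                      # placeholder name
spec:
  containers:
    - name: app                                      # the heavy Java application
      image: registry.example.com/java-app:latest    # placeholder image
      resources:
        requests:
          memory: "1400Mi"   # app's share of the pod's 1500Mi total request
        limits:
          memory: "1948Mi"   # app's share of the pod's 2048Mi total limit
    - name: log-shipper                              # lightweight sidecar
      image: registry.example.com/log-shipper:latest # placeholder image
      resources:
        requests:
          memory: "100Mi"
        limits:
          memory: "100Mi"
```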
The pod consistently reports memory usage of 1.9-2 GiB. Because of this, the deployment scales out (the autoscaling configuration, sketched below the chart, adds pods when memory consumption exceeds 80%), naturally resulting in more pods and more cost.
[Chart: the yellow line represents the application's memory usage.]
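The autoscaling configuration is roughly equivalent to the following. Again a sketch, assuming a HorizontalPodAutoscaler targeting memory utilization; the names and replica bounds are placeholders:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa                # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-deployment       # placeholder deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80   # scale out when average memory usage > 80%
```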
However, on deeper investigation, this is what I found.
On `exec`-ing into the application container, I ran the `top` command, and it reports a total of 16431508 KiB, or roughly 16 GiB, of memory available, which is the memory available on the underlying node.
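As a sanity check, the container's own limit and usage can be read from the cgroup filesystem instead of `top` (which reads the host-wide `/proc/meminfo`); the paths depend on whether the node runs cgroup v1 or v2:

```bash
# Inside the container: top/free show the node's 16 GiB because they
# read /proc/meminfo, which is not namespaced per container.

# cgroup v2 (most modern nodes):
cat /sys/fs/cgroup/memory.max       # container memory limit in bytes
cat /sys/fs/cgroup/memory.current   # current usage charged to this cgroup

# cgroup v1 (older nodes):
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
```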
There are 3 processes running inside the application container, out of which the root process (the application) takes 5.9% of memory, which roughly comes out to 0.92 GiB. The log shipper takes just 6 MiB of memory.
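For comparison, the per-container usage that the cluster itself reports can be pulled from the metrics API (assuming metrics-server is installed; the pod name and namespace below are placeholders, and `jq` is only there for readability):

```bash
# Per-container memory usage as reported by metrics-server
kubectl top pod app-pod --containers

# Raw working-set figures straight from the metrics API
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/app-pod" | jq .
```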
Now, what I don't understand is WHY my pod consistently reports such high usage metrics. Am I missing something? We're incurring significant costs due to the unintended autoscaling and would like to fix this.