
I would like to ask about a strange memory behavior that we encountered in some of our clusters.

After a spike in the memory consumption of the API server, RAM usage stays at the level of the top of the spike, which suggests that the kube-apiserver does not free any memory.

Is this behavior normal? Can you point us to a document that describes the kube-apiserver memory cleanup mechanism?

Cluster information:

Kubernetes version: OpenShift 4.6.35 / Kubernetes 1.19
Cloud being used: OpenStack 13
Installation method: OpenShift IPI installation
Host OS: CoreOS

UPDATE: We upgraded the cluster to OpenShift 4.8 and the API server now frees up memory.

  • Overall RAM usage is a rather meaningless stat by itself. When you run the `free` command on a node, you can see the buff/cache column, which represents filesystem caches. Do you have monitoring set up? Is the RAM really used by processes, or is it mostly occupied by cache? Remember, free RAM = wasted RAM, so Linux always tries to use as much of it as it can – Andrew Feb 16 '22 at 21:15
  • The master nodes show 79% memory consumption in Splunk for ctrl-1. Checking `top` on the server, we see 79% (~50 GB) utilization, plus ~12 GB more in buff/cache, which leaves about 2 GB free. I understand what you say about Linux RAM management, but the kube API server is written in Go, which uses its own garbage collector. Is keeping as much RAM as possible really the strategy for the kube API server? (See the Go sketch after these comments.) – Ron Megini Feb 16 '22 at 21:32
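
Since the kube-apiserver is a Go binary, one possible explanation (not confirmed in this thread) is that the Go runtime holds on to heap memory the garbage collector has already reclaimed, so the process RSS reported by `top` stays near the spike even though the memory is not leaked. Below is a minimal, self-contained sketch of how a Go program can inspect this distinction via the runtime's own memory stats; the field names are real Go runtime APIs, but the program and its output thresholds are illustrative only and are not part of the kube-apiserver code.

```go
// Sketch: distinguish heap memory in use from heap memory the Go runtime
// has freed but not yet returned to the OS. Illustrative only.
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func printHeapStats(label string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// HeapInuse: spans holding live (or recently allocated) objects.
	// HeapIdle - HeapReleased: memory the GC reclaimed but the runtime still
	// holds; the OS keeps counting it in the process RSS.
	fmt.Printf("%s: HeapInuse=%d MiB HeapIdle=%d MiB HeapReleased=%d MiB\n",
		label,
		m.HeapInuse/1024/1024,
		m.HeapIdle/1024/1024,
		m.HeapReleased/1024/1024)
}

func main() {
	printHeapStats("before")

	// Force a GC cycle and return as much idle memory to the OS as possible.
	// Normally unnecessary: the runtime releases idle memory on its own over
	// time, which is why high RSS after a spike is not the same as a leak.
	debug.FreeOSMemory()

	printHeapStats("after FreeOSMemory")
}
```

The same kind of data is exposed by the apiserver's Prometheus metrics (the standard `go_memstats_*` series), which is usually the easier way to check whether a high RSS is live heap or memory the runtime simply has not handed back yet.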

0 Answers