
Looking at my GKE 1.23 cluster under "Observability", I see memory usage of 200%+. The breakdown shows that most of it comes from the v2k-system namespace, which AFAIK is part of GKE's internals. Why does it use over 2x the memory it actually requests? My own pods are failing to get memory, and I suspect it's because the v2k-system pods take up all the memory.

Sagi Mann

1 Answer


Yes, you're correct: the value can exceed 100%, and those pods are part of GKE's internals.

The figure is the total memory usage of all containers in the cluster divided by the total memory requested by those containers, so it can exceed 100% whenever actual usage is higher than what was requested.
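As a concrete illustration of how that ratio is computed (the numbers here are made up, not from your cluster):

```python
# Hypothetical numbers, just to illustrate the ratio behind the chart.
total_usage_gib = 9.0     # actual memory used by all containers
total_requests_gib = 4.0  # sum of memory *requests* across those containers

utilization = total_usage_gib / total_requests_gib
print(f"request utilization: {utilization:.0%}")  # -> request utilization: 225%
```

Containers are allowed to use more than they request (up to their limit, or up to node capacity if no limit is set), which is exactly how the ratio climbs past 100%.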


Memory utilization: The memory utilization of containers that can be attributed to a resource within the selected time span.

The metric behind this chart is kubernetes.io/container/memory/request_utilization, collected by Cloud Monitoring. Separately, the Kubernetes Metrics API offers a basic set of metrics to support autoscaling and similar use cases: it exposes resource usage (CPU and memory) for nodes and pods. If the Metrics API is deployed into your cluster, clients of the Kubernetes API can query that information.
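If you want to inspect that metric outside the console, a rough sketch with the Cloud Monitoring Python client might look like the following. The project ID is a placeholder, and the filter assumes the k8s_container resource type; adjust both for your cluster:

```python
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/YOUR_PROJECT_ID"  # placeholder

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)

# Pull the last hour of request_utilization samples for the namespace
# from the question.
results = client.list_time_series(
    request={
        "name": project_name,
        "filter": (
            'metric.type = "kubernetes.io/container/memory/request_utilization" '
            'AND resource.label.namespace_name = "v2k-system"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    container = series.resource.labels["container_name"]
    latest = series.points[0].value.double_value  # ratio, e.g. 2.25 for 225%
    print(f"{container}: {latest:.0%} of requested memory")
```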

There are two types of memory to be aware of here:

1) Evictable memory: memory that can be reclaimed from the resource if usage becomes too high.

2) Non-evictable memory: memory that cannot be reclaimed; if non-evictable memory usage exceeds the limit, the container may be terminated. For more information, see Requests and limits; setting these on your own pods may help resolve your issue.

A container is guaranteed to get as much memory as it requests, but is not allowed to use more memory than its limit. See Assign Memory Resources to Containers and Pods for more information.
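For example, here is a minimal sketch with the official Kubernetes Python client showing where requests and limits live on a container spec (the names, image, and sizes are placeholders, not values from your cluster):

```python
from kubernetes import client

# A container guaranteed 256Mi (request) that may be OOM-killed
# if it tries to use more than 512Mi (limit).
container = client.V1Container(
    name="app",
    image="nginx:1.25",
    resources=client.V1ResourceRequirements(
        requests={"memory": "256Mi", "cpu": "250m"},
        limits={"memory": "512Mi", "cpu": "500m"},
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="memory-demo"),
    spec=client.V1PodSpec(containers=[container]),
)
```

Since the scheduler places pods based on their requests, setting a realistic memory request on your own pods is what actually reserves room for them, regardless of what the system namespace's utilization percentage shows.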

Veera Nagireddy