192

I am trying to see how much memory and CPU are utilized by a Kubernetes pod. I ran the following command for this:

kubectl top pod podname --namespace=default

I am getting the following error:

W0205 15:14:47.248366    2767 top_pod.go:190] Metrics not available for pod default/podname, age: 190h57m1.248339485s
error: Metrics not available for pod default/podname, age: 190h57m1.248339485s
  1. What do I do about this error? Is there any other way to get CPU and memory usage of the pod?
  2. I saw the sample output of this command which shows CPU as 250m. How is this to be interpreted?

  3. Do we get the same output if we enter the pod and run the linux top command?

mirekphd
aniztar
    If you run top inside the pod, it will be like you run it on the host system because the pod is using kernel of the host system. https://stackoverflow.com/a/51656039/429476 – Alex Punnen Jul 14 '20 at 10:04

19 Answers

195

CHECK WITHOUT METRICS SERVER or ANY THIRD PARTY TOOL


If you want to check a pod's CPU/memory usage without installing any third-party tool, you can read it directly from the pod's cgroup.

  1. Exec into the pod: kubectl exec -it pod_name -n namespace -- /bin/bash
  2. Run cat /sys/fs/cgroup/cpu/cpuacct.usage for cpu usage
  3. Run cat /sys/fs/cgroup/memory/memory.usage_in_bytes for memory usage

Make sure you have added the resources section (requests and limits) to the deployment, so that the container's cgroup limits are actually set and the container respects the limits defined at the pod level.

NOTE: memory.usage_in_bytes is reported in bytes, while cpuacct.usage is cumulative CPU time in nanoseconds. These values vary with pod activity and change frequently.
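To turn cpuacct.usage into a utilization figure, you can sample it twice and divide the delta by the elapsed wall-clock time. A minimal sketch, assuming cgroup v1 paths and run inside the container:

start=$(cat /sys/fs/cgroup/cpu/cpuacct.usage)   # cumulative CPU time in nanoseconds
sleep 5
end=$(cat /sys/fs/cgroup/cpu/cpuacct.usage)
# delta (ns) * 100 / elapsed time (5 s = 5,000,000,000 ns) = % of one CPU core over the window
echo $(( (end - start) * 100 / 5000000000 ))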

Yash Kumar Verma
Dashrath Mundkar
    How can I calculate percentage of CPU used from this value or is there a way I can determine percentage of CPU used allocated to a pod/deployment? – Jaraws Jul 30 '20 at 15:54
    For copy paste: `cat /sys/fs/cgroup/memory/memory.usage_in_bytes` & `cat /sys/fs/cgroup/cpu/cpuacct.usage` – Roman Mar 03 '21 at 22:20
    As a note, this method could show different memory values from `kubectl top`, as this is checking memory *usage*, which is _used + cache_, while `top` reports _used_ only. More at: https://www.ibm.com/support/pages/kubectl-top-pods-and-docker-stats-show-different-memory-statistics – Rafael Aguilar Mar 05 '21 at 11:43
  • `cat /sys/fs/cgroup/cpu/cpuacct.usage` doesn't work for me ("no such file or directory"). I had to use `cat /sys/fs/cgroup/cpuacct/cpuacct.usage`, and apparently `cat /sys/fs/cgroup/cpuacct.usage` might also work. Not quite sure why though. – Lebbers Sep 13 '21 at 20:52
    You can use `cat /sys/fs/cgroup/memory/memory.usage_in_bytes | numfmt --to=iec` to get numbers in Kb/Mb/Gb. – teegaar Oct 25 '21 at 10:43
    How should you interpret the CPU value? What is the unit? – justin.m.chase Feb 24 '22 at 18:30
    Another way to get the MB usage if you don't have numfmt available: `cat /sys/fs/cgroup/memory/memory.usage_in_bytes | awk '{ foo = $1 / 1024 / 1024 ; print foo "MB" }'` – KNejad Mar 13 '22 at 22:28
  • I get only `cat: /sys/fs/cgroup/cpu/cpuacct.usage: No such file or directory` and `cat: /sys/fs/cgroup/memory/memory.usage_in_bytes: No such file or directory`. – Kris Jun 07 '23 at 15:37
179

kubectl top pod <pod-name> -n <namespace> --containers

FYI, this is on v1.16.2
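Illustrative output (pod and container names here are hypothetical, values will vary):

POD          NAME         CPU(cores)   MEMORY(bytes)
podname      app          5m           64Mi
podname      sidecar      1m           12Mi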

Umakant
    I understand that metrics server must first be installed: `$ kubectl top pod mypod -n mynamespace --containers Error from server (NotFound): podmetrics.metrics.k8s.io "mynamespace/mypod" not found` – user9074332 Sep 08 '20 at 20:48
    @user9074332, Yes you need metrics server installed first. You can do so by executing following commands: `wget https://raw.githubusercontent.com/pythianarora/total-practice/master/sample-kubernetes-code/metrics-server.yaml kubectl create -f metrics-server.yaml` – Umakant Mar 25 '21 at 17:10
  • "kubectl get pods --namespace product | grep Running | awk '{print $1}' | kubectl top pod $1 --namespace product --containers" for overall output instead of running for each pod – Mehmet Gökalp Jun 14 '22 at 14:08
    not adding namespace will give pod not found error – Yash Kumar Verma Oct 10 '22 at 14:10
    Use watch if you want to execute the top command periodically. Example for watch interval of 5 sec : `watch -n5 kubectl top pod -n --containers` . [watch man page](https://man7.org/linux/man-pages/man1/watch.1.html) – Raunak Kapoor Apr 28 '23 at 18:56
69

Use k9s for a super easy way to check all your resources' cpu and memory usage.

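A minimal way to try it (assuming k9s is installed and your kubeconfig points at the cluster; the CPU/MEM columns require metrics-server):

k9s -n default     # open k9s scoped to the default namespace
# inside k9s, type :pods to jump to the pod view and read the CPU and MEM columns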

Nick
48
  1. As described in the docs, you should install metrics-server

  2. 250m means 250 millicores (milliCPU). CPU resources are measured in CPU units; in Kubernetes, 1 CPU unit is equivalent to:

    • 1 AWS vCPU
    • 1 GCP Core
    • 1 Azure vCore
    • 1 Hyperthread on a bare-metal Intel processor with Hyperthreading

    Fractional values are allowed. A Container that requests 0.5 CPU is guaranteed half as much CPU as a Container that requests 1 CPU. You can use the suffix m to mean milli. For example 100m CPU, 100 milliCPU, and 0.1 CPU are all the same. Precision finer than 1m is not allowed.

    CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine (see the example pod spec at the end of this answer).

  3. No. kubectl top pod podname shows metrics for the given pod, while Linux top and free run inside the container and report metrics based on the information in the virtual filesystem /proc/; they are not aware of the cgroup the container runs in.

    There are more details on this in the Kubernetes documentation.
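To illustrate point 2, here is a minimal (hypothetical) pod spec requesting 250m CPU, which the scheduler treats exactly the same as 0.25 CPU:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"         # 250 millicores = 0.25 CPU
        memory: "64Mi"
      limits:
        cpu: "500m"
        memory: "128Mi"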

Diego Mendes
  • For the 3rd point, the link you gave tells that running `top` inside pod is same as running it on the host system. But when i tried it, the outputs don't match – aniztar Feb 15 '19 at 04:38
  • Actually the statement is wrong, it does not report the same thing, but they work the same way. The main difference is that the contents on `/proc/` filesystem of the container are different from the host then the results won't be the same. I've added another link with more detailed information. – Diego Mendes Feb 15 '19 at 11:51
37

A quick way to check CPU/Memory is by using the following kubectl command. I found it very useful.

kubectl describe PodMetrics <pod_name>

replace <pod_name> with the pod name you get by using

kubectl get pod
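If describe works in your cluster (i.e. metrics-server is installed), you can also list the same resource for a whole namespace:

kubectl get PodMetrics -n default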
Suvoraj Biswas
    error: the server doesn't have a resource type "PodMetrics" – JRichardsz Sep 16 '21 at 00:26
    @JRichardsz you need to install the k8s metrics server first `kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml` – Presidenten Jan 04 '22 at 09:45
  • Can you believe that I've ran the describe command over a pod but not details related to memory or cpu used was displayed. Some specific feature shoul be enabled ? – Manuel Lazo Feb 07 '23 at 14:42
30

You need to run the metrics server to make the commands below return correct data:

  1. kubectl get hpa
  2. kubectl top node
  3. kubectl top pods

Without the metrics server, go into the pod and read the cgroup files directly:

  1. kubectl exec -it pods/{pod_name} -- sh
  2. cat /sys/fs/cgroup/memory/memory.usage_in_bytes

You will get memory usage of pod in bytes.
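To express this as a percentage of the container's limit, you can also read memory.limit_in_bytes (a sketch, assuming cgroup v1 and that a memory limit is actually set; without a limit, limit_in_bytes is a huge placeholder value):

used=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
limit=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
echo $(( used * 100 / limit ))   # memory usage as % of the limit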

chetan mahajan
    To add, there should also be a file tells you the memory limit /sys/fs/cgroup/memory/memory.limit_in_bytes . With these files you can calculate the memory usage percentage on that Pod. Preferably have some script on the Pod itself calculates the memory percentage and writes to a file. Then it will be simple as kubectl exec pod/ -- cat to get its memory load. – Z_K Jun 09 '21 at 18:58
16

Not sure why it's not here

  1. To see all pods with their age: kubectl get pods --all-namespaces
  2. To see memory and CPU: kubectl top pods --all-namespaces
Guy Luz
shimi_tap
    It's not there because the metrics API server is simply not installed by default on kubernetes, at least not on vanilla. – Lethargos Sep 01 '22 at 14:54
10

If you use Prometheus operator or VictoriaMetrics operator for Kubernetes monitoring, then the following PromQL queries can be used for determining per-container, per-pod and per-node resource usage:

  • Per-container memory usage in bytes:
sum(container_memory_usage_bytes{container!~"POD|"}) by (namespace,pod,container)
  • Per-container CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!~"POD|"}[5m])) by (namespace,pod,container)
  • Per-pod memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (namespace,pod)
  • Per-pod CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace,pod)
  • Per-node memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (node)
  • Per-node CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
  • Per-node memory usage percentage:
100 * (
  sum(container_memory_usage_bytes{container!=""}) by (node)
    / on(node)
  kube_node_status_capacity{resource="memory"}
)
  • Per-node CPU usage percentage:
100 * (
  sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
    / on(node)
  kube_node_status_capacity{resource="cpu"}
)
valyala
  • Okay so I tried per-pod memory usage in bytes and it doesn't compare with what is reported by `kubectl top` or manually adding the memory usage after getting it via api. – Anupam Srivastava Jun 13 '22 at 12:49
  • @AnupamSrivastava, it would be great if you could provide an example pod with its memory usage reported by the query and memory usage returned by `kubectl top`? – valyala Jun 13 '22 at 19:10
8

Heapster is deprecated and will not receive any future releases, so you should install metrics-server instead.

You can install metrics-server in the following way:

  1. Clone the metrics-server GitHub repo: git clone https://github.com/kubernetes-incubator/metrics-server.git

  2. Edit the deploy/1.8+/metrics-server-deployment.yaml file so that the command section includes the following flags:

- command:
     - /metrics-server
     - --metric-resolution=30s
     - --kubelet-insecure-tls
     - --kubelet-preferred-address-types=InternalIP

  3. Run the following command: kubectl apply -f deploy/1.8+

It will install all the requirements you need for metrics server.
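Once the metrics-server pod is up, you can verify that metrics are being served (assuming the default manifest names; it can take a minute before data appears):

kubectl get deployment metrics-server -n kube-system
kubectl top nodes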

For more info, please have a look at my following answer:

How to Enable KubeAPI server for HPA Autoscaling Metrics

gp.
Prafull Ladha
4

An alternative approach without having to install the metrics server.

It requires crictl to be installed on the worker nodes where the pods are running; there is a Kubernetes task for this in the official docs.

Once you have installed it properly, you can use the commands below. (I had to use sudo in my case, but it may not be required depending on your Kubernetes cluster installation.)

  1. Find the container ID of your pod: sudo crictl ps
  2. Use stats to get CPU and RAM: sudo crictl stats <CONTAINERID>

Sample output for reference:

CONTAINER           CPU %               MEM                 DISK                INODES
873f04b6cef94       0.50                54.16MB             28.67kB             8
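If you know the container name, the two steps can be combined (a sketch; mycontainer is a hypothetical name, and crictl ps -q prints only the container IDs matching the --name filter):

sudo crictl stats $(sudo crictl ps -q --name mycontainer)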
prasun
    Worth noting that this assumes that kubernetes is running containerd, and not dockerd, used by earlier releases – SiHa Jun 29 '22 at 06:45
3

To check the usage of individual pods, run the following commands in a terminal on the node where the pod is scheduled (this assumes the node uses Docker as its container runtime):

$ docker ps | grep <pod_name>

This will give you the list of running containers. Then check CPU and memory utilization with:

$ docker stats <container_id>

CONTAINER_ID  NAME   CPU%   MEM   USAGE/LIMIT   MEM%   NET_I/O   BLOCK_I/O   PIDS
Ambir
2

You need to deploy Heapster (now deprecated) or the metrics server to see the CPU and memory usage of the pods.

P Ekambaram
1

You can use the Metrics API (metrics.k8s.io) directly:

For example:

kubectl -n default get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh | jq

{
  "kind": "PodMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "name": "nginx-7fb5bc5df-b6pzh",
    "namespace": "default",
    "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh",
    "creationTimestamp": "2021-06-14T07:54:31Z"
  },
  "timestamp": "2021-06-14T07:53:54Z",
  "window": "30s",
  "containers": [
    {
      "name": "nginx",
      "usage": {
        "cpu": "33239n",
        "memory": "13148Ki"
      }
    },
    {
      "name": "git-repo-syncer",
      "usage": {
        "cpu": "0",
        "memory": "6204Ki"
      }
    }
  ]
}

Where nginx-7fb5bc5df-b6pzh is pod's name.

Note that CPU is reported in nanocores (the n suffix), where 10^9 nanocores = 1 CPU.
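To pull out just the per-container usage, you can pipe the same request through jq (illustrative, using the same pod name as above):

kubectl -n default get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh | jq '.containers[] | {name, usage}'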

drFunJohn
1

I know this is an old thread, but I just found it trying to do something similar. In the end, I found I can just use the Visual Studio Code Kubernetes plugin. This is what I did:

  • Select the cluster and open the Workloads/Pods section, find the pod you want to monitor (you can reach the pod through any other grouping in the Workloads section)
  • Right-click on the pod and select "Terminal"
  • Now you can either cat the files described above or use the "top" command to monitor CPU and memory in real-time.

Hope it helps

1

To complement Dashrath Mundkar's answer, this can be run without entering the pod (from your own command prompt):

kubectl exec pod_name -n namespace -- cat /sys/fs/cgroup/cpu/cpuacct.usage
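The same approach works for the memory figure (again assuming cgroup v1 paths inside the container):

kubectl exec pod_name -n namespace -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes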

Tireuuuuu
1

In my use case I wanted to aggregate memory/CPU usage per namespace, to see how heavy or lightweight a Harbor system running in my small K3s cluster would be, so I wrote this Python script using the kubernetes Python client:

from kubernetes import client, config
import matplotlib.pyplot as plt
import pandas as pd

def cpu_n(cpu_str: str):
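    # cpu_str comes from the metrics.k8s.io API, e.g. "12345678n" (nanocores).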
    if cpu_str == "0":
        return 0.0
    assert cpu_str.endswith("n")
    return float(cpu_str[:-1])

def mem_Mi(mem_str: str):
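    # mem_str comes from the metrics.k8s.io API, e.g. "13148Ki" or "64Mi"; returns MiB.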
    if mem_str == "0":
        return 0.0
    assert mem_str.endswith("Ki") or mem_str.endswith("Mi")
    val = float(mem_str[:-2])
    if mem_str.endswith("Ki"):
        return val / 1024.0
    if mem_str.endswith("Mi"):
        return val

config.load_kube_config()
api = client.CustomObjectsApi()
v1 = client.CoreV1Api()
cpu_usage_pct = {}
mem_usage_mb = {}
namespaces = [item.metadata.name for item in v1.list_namespace().items]
for ns in namespaces:
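    # Query the Metrics API (metrics.k8s.io/v1beta1) for all pod metrics in this namespace.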
    resource = api.list_namespaced_custom_object(group="metrics.k8s.io", version="v1beta1", namespace=ns, plural="pods")
    cpu_total_n = 0.0
    mem_total_Mi = 0.0
    for pod in resource["items"]:
        for container in pod["containers"]:
            usage = container["usage"]
            cpu_total_n += cpu_n(usage["cpu"])
            mem_total_Mi += mem_Mi(usage["memory"])
    if mem_total_Mi > 0:
        mem_usage_mb[ns] = mem_total_Mi
    if cpu_total_n > 0:
        cpu_usage_pct[ns] = cpu_total_n * 100 / 10**9

df_mem = pd.DataFrame({"ns": mem_usage_mb.keys(), "memory_mbi": mem_usage_mb.values()})
df_mem.sort_values("memory_mbi", inplace=True)

_, [ax1, ax2] = plt.subplots(2, 1, figsize=(12, 12))

ax1.barh("ns", "memory_mbi", data=df_mem)
ax1.set_ylabel("Namespace", size=14)
ax1.set_xlabel("Memory Usage [MBi]", size=14)
total_memory_used_Mi = round(sum(mem_usage_mb.values()))
ax1.set_title(f"Memory usage by namespace [{total_memory_used_Mi}Mi total]", size=16)

df_cpu = pd.DataFrame({"ns": cpu_usage_pct.keys(), "cpu_pct": cpu_usage_pct.values()})
df_cpu.sort_values("cpu_pct", inplace=True)
ax2.barh("ns", "cpu_pct", data=df_cpu)
ax2.set_ylabel("Namespace", size=14)
ax2.set_xlabel("CPU Usage [%]", size=14)
total_cpu_usage_pct = round(sum(cpu_usage_pct.values()))
ax2.set_title(f"CPU usage by namespace [{total_cpu_usage_pct}% total]", size=16)

plt.show()

Sample output: two horizontal bar charts showing memory and CPU usage per namespace.

Of course, keep in mind that it is just a snapshot of your system's memory and CPU usage, it could vary a lot as workloads become more or less active.

Darien Pardinas
0

In case you are using minikube, you can enable the metrics-server addon; this will show the information in the dashboard.
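For example (once the addon's metrics-server pod is ready, kubectl top also starts returning data):

minikube addons enable metrics-server
kubectl top pod podname --namespace=default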

Javier Aviles
0

If you exec into your pod, using sh or bash, you can run the top command, which will give you some stats about resource utilisation that update every few seconds.


Ian Robertson
  • I'm getting error `bash: top: command not found` – Alexey Sh. Jul 30 '21 at 17:09
  • You might have to use your package manager to install it – Ian Robertson Jul 31 '21 at 14:52
    Pods that use images created from "scratch" image in general does not have installed "top". – drFunJohn Aug 01 '21 at 13:41
  • See my previous comment, you might have to use your package manager to install it... – Ian Robertson Aug 02 '21 at 17:12
    just to consider, this method wont give you independent stats of a given pod.,it will show stats of the cluster. – uajov6 Sep 03 '21 at 12:51
  • No... if you exec into a running container, then you are using the shell of that container. We are not running top via kubectl, so it is not cluster based. – Ian Robertson Sep 06 '21 at 06:34
  • > just to consider, this method wont give you independent stats of a given pod.,it will show stats of the cluster. | This is correct this will give you the resource of the node where your pod is. See https://stackoverflow.com/a/51656039/5697747 – ordem Aug 03 '23 at 10:41
0

Metrics are available only if the metrics server is enabled or a third-party solution such as Prometheus is configured. Otherwise you need to look at /sys/fs/cgroup/cpu/cpuacct.usage for CPU usage, which is the total CPU time consumed by the cgroup/container, and at /sys/fs/cgroup/memory/memory.usage_in_bytes for memory usage, which is the total memory consumed by all processes in the cgroup/container.

Also don't forget another beast called QoS, whose classes are Guaranteed, Burstable and BestEffort. If your pod is Burstable, it can be OOM-killed under node memory pressure even if it has not breached its own CPU or memory limits.

Kubernetes is FUN!!!

Ajit Surendran