This is a follow-up to my previous post: How to get metrics from Kubernetes Metrics-server with a specific window parameter
The answer there helped me get results from my query to Prometheus. However, Prometheus (correct me if I'm wrong) does not seem to let me query the metrics I need (in my case the MIN/MAX/AVG of the API server's CPU and memory) by specifying a namespace, pod, and start and end time. With the following command I can get the CPU and memory usage of all three Kubernetes API server pods, but only over the default 5-minute window:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-apiserver/pods/" | jq
Here is the output:
{
  "kind": "PodMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {},
  "items": [
    {
      "metadata": {
        "name": "kube-apiserver-guard-master-0.ocp9.pd.f5net.com",
        "namespace": "openshift-kube-apiserver",
        "creationTimestamp": "2023-05-30T23:15:10Z",
        "labels": {
          "app": "guard"
        }
      },
      "timestamp": "2023-05-30T23:15:10Z",
      "window": "5m0s",
      "containers": [
        {
          "name": "guard",
          "usage": {
            "cpu": "0",
            "memory": "868Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "kube-apiserver-guard-master-1.ocp9.pd.f5net.com",
        "namespace": "openshift-kube-apiserver",
        "creationTimestamp": "2023-05-30T23:15:10Z",
        "labels": {
          "app": "guard"
        }
      },
      "timestamp": "2023-05-30T23:15:10Z",
      "window": "5m0s",
      "containers": [
        {
          "name": "guard",
          "usage": {
            "cpu": "0",
            "memory": "868Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "kube-apiserver-guard-master-2.ocp9.pd.f5net.com",
        "namespace": "openshift-kube-apiserver",
        "creationTimestamp": "2023-05-30T23:15:10Z",
        "labels": {
          "app": "guard"
        }
      },
      "timestamp": "2023-05-30T23:15:10Z",
      "window": "5m0s",
      "containers": [
        {
          "name": "guard",
          "usage": {
            "cpu": "0",
            "memory": "864Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "kube-apiserver-master-0.ocp9.pd.f5net.com",
        "namespace": "openshift-kube-apiserver",
        "creationTimestamp": "2023-05-30T23:15:10Z",
        "labels": {
          "apiserver": "true",
          "app": "openshift-kube-apiserver",
          "revision": "19"
        }
      },
      "timestamp": "2023-05-30T23:15:10Z",
      "window": "5m0s",
      "containers": [
        {
          "name": "kube-apiserver",
          "usage": {
            "cpu": "404m",
            "memory": "3013592Ki"
          }
        },
        {
          "name": "kube-apiserver-cert-regeneration-controller",
          "usage": {
            "cpu": "0",
            "memory": "15188Ki"
          }
        },
        {
          "name": "kube-apiserver-cert-syncer",
          "usage": {
            "cpu": "0",
            "memory": "26016Ki"
          }
        },
        {
          "name": "kube-apiserver-check-endpoints",
          "usage": {
            "cpu": "2m",
            "memory": "48108Ki"
          }
        },
        {
          "name": "kube-apiserver-insecure-readyz",
          "usage": {
            "cpu": "0",
            "memory": "12940Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "kube-apiserver-master-1.ocp9.pd.f5net.com",
        "namespace": "openshift-kube-apiserver",
        "creationTimestamp": "2023-05-30T23:15:10Z",
        "labels": {
          "apiserver": "true",
          "app": "openshift-kube-apiserver",
          "revision": "19"
        }
      },
      "timestamp": "2023-05-30T23:15:10Z",
      "window": "5m0s",
      "containers": [
        {
          "name": "kube-apiserver",
          "usage": {
            "cpu": "337m",
            "memory": "3181316Ki"
          }
        },
        {
          "name": "kube-apiserver-cert-regeneration-controller",
          "usage": {
            "cpu": "3m",
            "memory": "33596Ki"
          }
        },
        {
          "name": "kube-apiserver-cert-syncer",
          "usage": {
            "cpu": "0",
            "memory": "25576Ki"
          }
        },
        {
          "name": "kube-apiserver-check-endpoints",
          "usage": {
            "cpu": "5m",
            "memory": "48516Ki"
          }
        },
        {
          "name": "kube-apiserver-insecure-readyz",
          "usage": {
            "cpu": "0",
            "memory": "12744Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "kube-apiserver-master-2.ocp9.pd.f5net.com",
        "namespace": "openshift-kube-apiserver",
        "creationTimestamp": "2023-05-30T23:15:10Z",
        "labels": {
          "apiserver": "true",
          "app": "openshift-kube-apiserver",
          "revision": "19"
        }
      },
      "timestamp": "2023-05-30T23:15:10Z",
      "window": "5m0s",
      "containers": [
        {
          "name": "kube-apiserver",
          "usage": {
            "cpu": "246m",
            "memory": "2414920Ki"
          }
        },
        {
          "name": "kube-apiserver-cert-regeneration-controller",
          "usage": {
            "cpu": "0",
            "memory": "15764Ki"
          }
        },
        {
          "name": "kube-apiserver-cert-syncer",
          "usage": {
            "cpu": "1m",
            "memory": "24760Ki"
          }
        },
        {
          "name": "kube-apiserver-check-endpoints",
          "usage": {
            "cpu": "2m",
            "memory": "58352Ki"
          }
        },
        {
          "name": "kube-apiserver-insecure-readyz",
          "usage": {
            "cpu": "0",
            "memory": "14684Ki"
          }
        }
      ]
    }
  ]
}
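To make that response easier to work with, a jq filter along these lines (just a sketch based on the output above, nothing more) flattens it into one usage line per container:

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-apiserver/pods/" \
  | jq -r '.items[] | .metadata.name as $pod | .containers[] | [$pod, .name, .usage.cpu, .usage.memory] | @tsv'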
I need to get the metrics from this query's output, but for a specific start and end time (or a custom window would work as well). The values in this output are also aggregated: I believe they are the sum, or maybe the average, of the samples within the default 5-minute window. What I need is a query that returns all samples within a window I specify, at a given step, so that I can loop through them and compute the MIN/MAX/AVG of CPU and memory. I don't think Prometheus has these values, so I'm thinking I most likely need to curl the metrics-server directly.
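To clarify what I'm after, this is roughly the kind of loop I want to end up with. It's only a sketch: it polls the live metrics API at a fixed step going forward, rather than querying a historical start/end range, which is exactly the limitation I'm trying to get around (sample count, step, and the container name filter are just illustrative values):

# Poll the kube-apiserver container CPU usage every 30s for 10 samples,
# strip the "m" (millicore) suffix, then compute MIN/MAX/AVG with awk.
samples=10
step=30
for i in $(seq "$samples"); do
  kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-apiserver/pods/" \
    | jq -r '.items[].containers[] | select(.name == "kube-apiserver") | .usage.cpu' \
    | sed 's/m$//'
  sleep "$step"
done | awk 'NR==1{min=$1; max=$1} {sum+=$1; if($1<min)min=$1; if($1>max)max=$1} END{print "min="min" max="max" avg="sum/NR}'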