
I installed the stable/prometheus Helm chart. By default, the job kubernetes-service-endpoints scrapes both node-exporter and kube-state-metrics, distinguished by the component label. I added the configuration below to prometheus.yml to include namespace, pod and node labels.

      - source_labels: [__meta_kubernetes_namespace]
        separator: ;
        regex: (.*)
        target_label: namespace
        replacement: $1
        action: replace
      - source_labels: [__meta_kubernetes_pod_name]
        separator: ;
        regex: (.*)
        target_label: pod
        replacement: $1
        action: replace
      - source_labels: [__meta_kubernetes_pod_node_name]
        separator: ;
        regex: (.*)
        target_label: node
        replacement: $1
        action: replace

kube_pod_info{component="kube-state-metrics"} already had namespace, pod and node labels of its own, so the clashing scraped labels were renamed to exported_namespace, exported_pod and exported_node. The metric node_cpu_seconds_total{component="node-exporter"}, on the other hand, now correctly has the namespace, pod and node labels.

To keep these labels consistent, I need all three labels present on both of the above metrics. To achieve that, I want to override the target labels with the values of the exported_* labels. I tried adding the config below, but to no avail.

      - source_labels: [__name__, exported_pod]
        regex: "kube_pod_info;(.+)"
        target_label: pod
      - source_labels: [__name__, exported_namespace]
        regex: "kube_pod_info;(.+)"
        target_label: namespace
      - source_labels: [__name__, exported_node]
        regex: "kube_pod_info;(.+)"
        target_label: node

A similar approach was mentioned here. I can't see the issue with my config. Any direction to resolve this would be very helpful.

Update (adding the complete job):

    - job_name: kubernetes-service-endpoints
      kubernetes_sd_configs:
      - role: endpoints

      metric_relabel_configs:
      - source_labels: [__name__, exported_pod]
        regex: "kube_pod_info;(.+)"
        target_label: pod
      - source_labels: [__name__, exported_namespace]
        regex: "kube_pod_info;(.+)"
        target_label: namespace
      - source_labels: [__name__, exported_node]
        regex: "kube_pod_info;(.+)"
        target_label: node

      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_scrape
      - action: replace
        regex: (https?)
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_scheme
        target_label: __scheme__
      - action: replace
        regex: (.+)
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_path
        target_label: __metrics_path__
      - action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        source_labels:
        - __address__
        - __meta_kubernetes_service_annotation_prometheus_io_port
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        regex: (.*)
        replacement: $1
        separator: ;
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_node_name
        target_label: node

And the result from PromQL:

[screenshot: kube_pod_info results]

Ankit Nayan
  • Labels starting with `__` are meta labels that are automatically added by the service discovery process. There are different sets of meta labels for the different types of [Kubernetes service discovery configs](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config). You can see in the "Service Discovery" page of the Prometheus UI which exact meta labels your targets have and then you can start replacing from there. – weibeld Nov 26 '19 at 05:15
  • @object recognition Have you found the above comment helpful? – Wytrzymały Wiktor Nov 26 '19 at 14:25
  • Hi @weibeld and @ohhimark, `__name__` is not present in the Service Discovery page of the Prometheus UI, and it also should not be, since it does not have a specific value. However, the query `{__name__=~".+"}` correctly returns all non-stale time series. I still could not figure out why the above relabelling config does not work. – Ankit Nayan Nov 27 '19 at 05:50

1 Answer


So your goal is to rename the metric labels exported_pod to pod, etc., for the kube_pod_info metric?

In that case, you need metric relabelling which is done when metrics are fetched from targets:

- job_name: 'kubernetes-service-endpoints'

  kubernetes_sd_configs:
    - role: endpoints

  metric_relabel_configs:
    - source_labels: [__name__, exported_pod]
      regex: "kube_pod_info;(.+)"
      target_label: pod
    - source_labels: [__name__, exported_namespace]
      regex: "kube_pod_info;(.+)"
      target_label: namespace
    - source_labels: [__name__, exported_node]
      regex: "kube_pod_info;(.+)"
      target_label: node
  relabel_configs:
    # Insert the same relabel_configs you have so far

Background:

Normal relabelling (relabel_configs) is applied at service discovery time to the labels that the service discovery process automatically attaches to a target; it defines the definitive target labels. At scrape time, these target labels are added to the metric labels of all metrics scraped from the target. Normal relabelling can therefore only operate on target labels, which at service discovery time are mostly meta labels starting with __.

Metric relabelling (metric_relabel_configs) is applied to the metric labels at scrape time. So, this can be used to rename labels that are defined by the applications exposing the metrics themselves.
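To make the ordering concrete, here is a rough Python simulation (with hypothetical label values) of what happens to a kube_pod_info sample when honor_labels is false: the target labels win, the clashing scraped labels are kept under an exported_ prefix, and the metric_relabel_configs above can then copy them back. This is a sketch of the behaviour described above, not Prometheus's actual implementation:

```python
import re

def merge_target_labels(metric_labels, target_labels):
    """honor_labels: false - target labels win; clashing scraped
    labels are kept under an 'exported_' prefix."""
    merged = dict(metric_labels)
    for name, value in target_labels.items():
        if name in merged and merged[name] != value:
            merged["exported_" + name] = merged.pop(name)
        merged[name] = value
    return merged

def metric_relabel(labels, rules):
    """Apply a list of 'replace'-style metric_relabel_configs rules."""
    for source_labels, regex, target in rules:
        joined = ";".join(labels.get(name, "") for name in source_labels)
        match = re.fullmatch(regex, joined)
        if match:
            labels[target] = match.group(1)
    return labels

# Scraped sample (labels set by kube-state-metrics itself)
scraped = {"__name__": "kube_pod_info", "namespace": "default",
           "pod": "my-app-pod", "node": "worker-1"}
# Target labels attached by service discovery / relabel_configs
target = {"namespace": "monitoring", "pod": "kube-state-metrics-abc",
          "node": "worker-2"}

labels = merge_target_labels(scraped, target)
# Now: pod="kube-state-metrics-abc", exported_pod="my-app-pod", etc.

rules = [(["__name__", "exported_pod"], r"kube_pod_info;(.+)", "pod"),
         (["__name__", "exported_namespace"], r"kube_pod_info;(.+)", "namespace"),
         (["__name__", "exported_node"], r"kube_pod_info;(.+)", "node")]
labels = metric_relabel(labels, rules)
print(labels["pod"])  # prints "my-app-pod": the original value is restored
```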

weibeld
  • This makes sense but unfortunately still no luck.. I have added the complete job config and the results screenshot in the question. Let me know if you see anything there. – Ankit Nayan Nov 28 '19 at 11:00
  • Can you post a screenshot of the `10.12.0.4:8080` target of the `kubernetes-service-endpoints` job in the Service Discovery page of the Prometheus UI? – weibeld Nov 28 '19 at 12:08
  • Sure.. the IP has changed due to a restart, but I captured what you are looking for in this link: https://ibb.co/kHwttm3 – Ankit Nayan Nov 29 '19 at 06:46
  • I need also the second column (Target Labels). Can you capture first and second column together? – weibeld Nov 29 '19 at 08:21
  • In this target, the `pod`, `node`, and `namespace` labels are correctly recognised. What's exactly the problem you're trying to fix? – weibeld Nov 30 '19 at 06:08
  • job `kubernetes-service-endpoints` has `node-exporter` and `kube-state-metrics` as components. I want to have consistent labels node, pod and namespace across all components. I need to use `kube_pod_info` and `node_cpu_seconds_total` in some recording rule. `node_cpu_seconds_total` does not have labels pod, node and namespace but `kube_pod_info` has the labels. I added `__meta__` labels to include these labels in `node_cpu_seconds_total` which is correctly there now. But since `kube_pod_info` already had those labels, it generated exported_labels which I don't want. – Ankit Nayan Nov 30 '19 at 07:42
  • Thus I was trying to override `exported_<>` labels in `kube_pod_info` – Ankit Nayan Nov 30 '19 at 07:42