
I am trying to monitor Strimzi using the kube-prometheus-stack helm chart. I have set it up following the tutorial from the official Strimzi documentation. In this tutorial, both PodMonitors and an additional Prometheus scrape config are used to collect metrics. But I do not quite understand why I need to set up a PodMonitor for some metrics and add jobs in `prometheus.prometheusSpec.additionalScrapeConfigs` for others. Could someone explain the difference to me?
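For context, the two mechanisms end up in the chart values roughly like this (a minimal sketch; the selector flag and job name are illustrative, not my exact values):

```yaml
# values.yaml for kube-prometheus-stack (illustrative sketch)
prometheus:
  prometheusSpec:
    # pick up PodMonitors even if they don't carry the chart's release label
    podMonitorSelectorNilUsesHelmValues: false
    # extra "raw" scrape jobs appended to the generated Prometheus config
    additionalScrapeConfigs:
      - job_name: kubernetes-nodes-kubelet   # hypothetical job name
        kubernetes_sd_configs:
          - role: node
```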

Manuel

1 Answer


The PodMonitor(s) are used to select the metrics from the pods created for the Strimzi-provided custom resources, such as Kafka, ZooKeeper, KafkaBridge, and so on. The Prometheus Operator translates that configuration into corresponding scrape jobs with a `kubernetes_sd_configs` entry using `role: pod`.

The prometheus-additional.yaml file used for the additional scrape configs field contains "raw" job configurations for Kubernetes-related metrics exposed directly by the nodes via cAdvisor and the kubelet (i.e. volume disk space, CPU and memory usage). The Prometheus Operator has no corresponding resource for `role: node`; there is no such thing as a NodeMonitor. I hope that makes more sense now.
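To make the distinction concrete, here is a minimal sketch of the two styles of configuration. The resource name, namespace, label values, and port name are assumptions based on the typical Strimzi examples, not something taken from your setup:

```yaml
# PodMonitor: the Prometheus Operator turns this into a generated scrape job
# with kubernetes_sd_configs role: pod, selecting the Strimzi-managed pods.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: kafka-resources-metrics        # illustrative name
  labels:
    app: strimzi
spec:
  selector:
    matchExpressions:
      - key: "strimzi.io/kind"
        operator: In
        values: ["Kafka", "KafkaConnect"]
  namespaceSelector:
    matchNames:
      - myproject                      # assumed namespace
  podMetricsEndpoints:
    - path: /metrics
      port: tcp-prometheus             # assumed metrics port name
```

```yaml
# "Raw" additional scrape config: there is no NodeMonitor resource, so
# node-level metrics (kubelet/cAdvisor) are scraped with role: node instead.
- job_name: kubernetes-cadvisor
  scheme: https
  kubernetes_sd_configs:
    - role: node
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - target_label: __address__
      replacement: kubernetes.default.svc:443
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
```

The first manifest is reconciled by the operator into generated scrape configuration; the second is appended as-is to the Prometheus configuration via the additional scrape configs field.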

ppatierno
  • Yes, that makes it clearer. Why is the `prometheus-additional.yaml` then needed, as volume disk space, CPU, and memory usage can also be collected by other Prometheus jobs independent of Strimzi? (I am using a Prometheus helm chart where some targets are already defined.) Or are these additional metrics in any way special? – Manuel Dec 17 '20 at 10:32
  • If you have these metrics already available then you don't need to apply the additional YAML. For example, in OpenShift 4.x these metrics are available through the already running Prometheus instance, so we don't need the additional ones. – ppatierno Dec 17 '20 at 10:37