
Is it possible to set up Prometheus/Grafana running on CentOS to monitor several K8s clusters in the lab? The architecture can be similar to the one below, although it is not strictly required. Right now, the Kubernetes clusters we have do not have Prometheus or Grafana installed. The documentation is not very clear on whether an additional remote-push component/agent is required, or on how the central Prometheus and the K8s clusters need to be configured to achieve this. Thanks.

Required architecture

  • You need to somehow expose the metrics endpoints to clients outside the cluster, or have a prometheus-node-exporter connect to a remote Prometheus in another cluster. Among other options, maybe Thanos (https://thanos.io/) can help? You can aggregate multiple distributed instances into a unified store and view. For example, each cluster could have its own Prometheus setup collecting short-term metrics that are aggregated into a remote cluster for long-term storage. – JulioHM Nov 03 '21 at 12:40
  • As the image shows, it uses Grafana to view an aggregated Cortex node of all Prometheus servers. What issues are you having replicating this setup? – OneCricketeer Nov 03 '21 at 21:17

1 Answer


You have several solutions for implementing your use case:

  1. You can use Prometheus federation. This allows you to have a central Prometheus server that scrapes samples from the other Prometheus servers.
  2. You can use the remote_write configuration. This allows each Prometheus server to send its samples to a remote endpoint. You can also apply relabeling rules with this configuration.
  3. As @JulioHM said in the comments, you can use another tool like Thanos or Cortex. Those tools are great and let you do more than just write to a remote endpoint: you can implement horizontal scaling of your Prometheus servers, long-term storage, etc.
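For options 1 and 2, the configuration lives in each server's `prometheus.yml`. Below is a minimal sketch of both approaches; the hostnames, job names, and metric selectors are placeholders you would replace for your environment:

```yaml
# --- Option 1: federation ---
# On the central Prometheus (CentOS host): scrape the /federate
# endpoint of each in-cluster Prometheus. The 'match[]' selectors
# decide which series are pulled; honor_labels preserves the
# original labels from the source servers.
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 30s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="kubernetes-nodes"}'   # example selector
    static_configs:
      - targets:
          - 'prometheus.cluster-a.example.com:9090'  # placeholder
          - 'prometheus.cluster-b.example.com:9090'  # placeholder

# --- Option 2: remote_write ---
# On each in-cluster Prometheus: push samples to the central endpoint.
# write_relabel_configs lets you filter or rewrite series before sending.
remote_write:
  - url: 'http://central.example.com:9090/api/v1/write'  # placeholder
    write_relabel_configs:
      - source_labels: [__name__]
        regex: 'container_.*'   # example: keep only container metrics
        action: keep
```

Note that with federation the in-cluster Prometheus endpoints must be reachable from outside the cluster (e.g. via an Ingress or NodePort), while with remote_write the receiving endpoint must accept remote-write requests (for a central Prometheus, this means enabling its remote-write receiver).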
Marc ABOUCHACRA