please advise
I am using the Pushgateway to monitor a custom service. As far as I understand, all the metric measurements we send are stored by the Pushgateway in its own internal DB and are then exposed to Prometheus (a rough sketch of the push step is below the list). If this is true, it has the following implications (which I have tested, except for the first one, and that is how it works for me now):
- Prometheus has to process the full set of measurements on every scrape, including the old ones
- If a measurement is deleted or replaced in the Pushgateway, it is no longer available in Prometheus either
- If the Pushgateway is down (it is especially fun that it can go down because inconsistent metrics were pushed to it, so anyone can bring it down with a single PUT request if the port is open), the measurements are again unavailable in Prometheus
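For reference, the push step looks roughly like this (a minimal sketch using the Python prometheus_client library, not my exact code; the metric, job, and host names are placeholders):

```python
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

# Placeholder metric for a custom service.
registry = CollectorRegistry()
g = Gauge('my_service_last_run_duration_seconds',
          'Duration of the last run of my custom service',
          registry=registry)
g.set(4.2)

# Replaces the metric group for job="my_custom_service" on the Pushgateway.
# The Pushgateway keeps only the latest pushed values and re-exposes them
# to Prometheus on every scrape.
push_to_gateway('pushgateway.example.com:9091',
                job='my_custom_service',
                registry=registry)
```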
This situation seems beyond stupid from my point of view, considering that Prometheus is more or less the production-ready industry standard when it comes to TSDBs, so I very much hope this is simply my own misunderstanding.
I would like to have the following configuration:
- a. a measurement is sent to the Pushgateway
- b. Prometheus reads the measurement from the Pushgateway and writes it into its own database
- c. the Pushgateway deletes the metric once it has been read by Prometheus
So far I have been unable to achieve the above.
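The closest I could get to step c is deleting the metric group myself after pushing, which is of course not the same as deleting it only once Prometheus has read it (again a sketch with prometheus_client; host and job names are placeholders):

```python
from prometheus_client import delete_from_gateway

# Removes the whole metric group for job="my_custom_service" from the Pushgateway.
# If this runs before Prometheus has scraped the gateway, the measurement is
# simply lost, which is exactly the problem described above.
delete_from_gateway('pushgateway.example.com:9091', job='my_custom_service')
```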
I have also tried to configure text-based scraping through node_exporter (the textfile collector) with the same result: the metrics and measurements are available as long as I keep the text file on the node. If I delete the text data, the measurements disappear from Prometheus, so it does not seem to store them on its own.
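For reference, this is roughly how I generate the text file (a sketch with prometheus_client; the output path is whatever directory node_exporter's --collector.textfile.directory flag points at in your setup):

```python
from prometheus_client import CollectorRegistry, Gauge, write_to_textfile

registry = CollectorRegistry()
g = Gauge('my_service_last_run_duration_seconds',
          'Duration of the last run of my custom service',
          registry=registry)
g.set(4.2)

# Writes the metrics in the Prometheus text format; node_exporter's textfile
# collector picks up *.prom files from its configured directory on each scrape.
write_to_textfile('/var/lib/node_exporter/textfile_collector/my_service.prom',
                  registry)
```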
UPD Actually, I was able to find the old metrics I had scraped from the text file - I just needed to query a past time range instead of "now". So this works: Prometheus does write the metrics from the text files into its own database, and I will use this solution.
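To see the old samples, I query a past time range through the Prometheus HTTP API instead of an instant query at "now" (a sketch with the requests library; host, metric name, and time window are placeholders):

```python
import time
import requests

end = time.time()
start = end - 6 * 3600  # look 6 hours back instead of only at "now"

resp = requests.get(
    'http://prometheus.example.com:9090/api/v1/query_range',
    params={
        'query': 'my_service_last_run_duration_seconds',
        'start': start,
        'end': end,
        'step': '60s',
    },
)
resp.raise_for_status()

# Each series comes back with its labels and a list of [timestamp, value] pairs.
for series in resp.json()['data']['result']:
    print(series['metric'], series['values'])
```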