I have a Quart app running with Hypercorn in production, configured to start eight Hypercorn worker processes. My objective is to collect application performance metrics such as latency and throughput using Prometheus. From the Quart app I increment/update counters and histograms per event using the aioprometheus library, and an endpoint /myapp/metrics is exposed in the application for the metrics to be scraped.
Now the problem is that each time the scraping agent hits this endpoint, it collects data only from whichever process the request happens to be routed to. For example, if one process has seen 6 hits for event E1 and another process has seen 7 hits for the same event, I need a total of 13 hits as the response from the metrics endpoint, but with the current setup it returns either 6 or 7 depending on which process handles the request.
Can someone please suggest how to get the metrics for my entire application in this multi-process Hypercorn model? One solution would be to have all the processes update a common data source and have the metrics endpoint read from that data source. But before I do that, I want to find out whether a Hypercorn-specific solution exists.
Edit: I have seen a similar question, but it is about Gunicorn and Flask.