I use MongoDB, in which the data changes (is updated) frequently, roughly every minute. The data is fetched from MongoDB through a third-party API application via HTTP. That API also aggregates the data before returning it, for example summing the views of the last X days for page N.
Because the amount of data is constantly growing (a few of these collections are between 6 GB and 14 GB), in some cases it takes 2-7 seconds until the API returns the aggregated data. For a web application such a delay is too long, and I want to reduce it somehow.
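To make the setup concrete, here is a minimal sketch (pymongo) of the kind of on-demand aggregation the API does today. The database, collection and field names (analytics, page_views, page_id, timestamp, views) are assumptions for illustration only, not my actual schema:

```python
from datetime import datetime, timedelta, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["analytics"]  # database name is an assumption


def views_last_x_days(page_id, days):
    """Sum the view counts of one page over the last `days` days."""
    since = datetime.now(timezone.utc) - timedelta(days=days)
    pipeline = [
        # filter to one page and the requested time window
        {"$match": {"page_id": page_id, "timestamp": {"$gte": since}}},
        # sum the per-document view counts
        {"$group": {"_id": "$page_id", "total_views": {"$sum": "$views"}}},
    ]
    result = list(db.page_views.aggregate(pipeline))
    return result[0]["total_views"] if result else 0
```

On collections of several gigabytes this pipeline runs over a large part of the collection on every request, which is where the 2-7 second delays come from.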
Which models/patterns are used in situations like the one I described? Maybe I should first of all drop the HTTP API idea and move all the API logic to the server side?
My own ideas and considerations:
Maybe there should be two separate data "processors":
1) The first "processor" does all the aggregation jobs and just writes the results to the second one.
2) The second "processor" just returns the data without any internal calculations or aggregations.
But there can also be a bottleneck when the first one writes to the second data store: there has to be logic for updating new and old data, which also impacts performance (a rough sketch of this split is below).
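A rough sketch of that two-"processor" split, under the same assumed schema as above: a background job pre-computes the aggregates and upserts them into a separate collection (page_view_totals is a hypothetical name), and the API only reads the ready-made document. This is only one possible way to implement the idea, not a definitive design:

```python
from datetime import datetime, timedelta, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["analytics"]  # same assumed database as in the previous sketch


def refresh_totals(days=7):
    """Processor 1: run periodically (e.g. every minute) to rebuild the aggregates."""
    since = datetime.now(timezone.utc) - timedelta(days=days)
    pipeline = [
        {"$match": {"timestamp": {"$gte": since}}},
        {"$group": {"_id": "$page_id", "total_views": {"$sum": "$views"}}},
    ]
    for doc in db.page_views.aggregate(pipeline):
        # upsert updates already-known pages and inserts new ones
        db.page_view_totals.replace_one(
            {"_id": doc["_id"]},
            {
                "_id": doc["_id"],
                "total_views": doc["total_views"],
                "updated_at": datetime.now(timezone.utc),
            },
            upsert=True,
        )


def get_total_views(page_id):
    """Processor 2: the API call becomes a single lookup by _id, no aggregation."""
    doc = db.page_view_totals.find_one({"_id": page_id})
    return doc["total_views"] if doc else 0
```

The read side then answers in the time of one indexed lookup, but the refresh job and the upsert loop become the new place where the cost (and the "update new and old data" logic I worried about) lives.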