I'm designing a high-performance system whose main function is to update a product inventory. Each product has a unique product ID, and we can add or subtract a number of items of that product in the inventory. To improve performance, I don't want to hit the database for every single add/subtract request; instead I apply the changes in an application server's memory and periodically flush them to the database. The tradeoff of that approach is that if the application server dies, I lose all the buffered data. How could I improve the system to overcome this?
Removed Kafka tag since question doesn't seem to be asking anything about it – OneCricketeer Nov 14 '22 at 14:27
1 Answer
If you want transactional consistency, you'll need to hit the database.
Otherwise, don't write to memory. Write a request log to disk (persistent storage). After a period of time, compact those logs and compute the aggregate request to send to the database.
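A minimal sketch of that idea, assuming a JSON-lines log file on local disk (the file path and the `append_request`/`compact` function names are illustrative, not a specific library API): each request is durably appended before it is acknowledged, and compaction later collapses the log into one net delta per product for a single database batch.

```python
import json
import os

LOG_PATH = "inventory_requests.log"  # hypothetical location on persistent storage

def append_request(product_id, delta, log_path=LOG_PATH):
    """Durably record one add/subtract request before acknowledging it."""
    with open(log_path, "a") as f:
        f.write(json.dumps({"product_id": product_id, "delta": delta}) + "\n")
        f.flush()
        os.fsync(f.fileno())  # force the write to disk so a server crash can't lose it

def compact(log_path=LOG_PATH):
    """Collapse the log into a net delta per product for one aggregate DB update."""
    totals = {}
    with open(log_path) as f:
        for line in f:
            req = json.loads(line)
            totals[req["product_id"]] = totals.get(req["product_id"], 0) + req["delta"]
    return totals
```

After a crash, replaying the surviving log with `compact` reproduces the pending aggregate, which is exactly what the in-memory approach would have lost. The `fsync` per request is the durability/latency knob: batching several requests per sync trades a small loss window for throughput.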
Keep in mind that if you distribute this process and have additional validation such as "the database amount cannot be negative at any time", then several subtraction events may be emitted at the same time and violate that requirement.
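One way to guard against that race is to enforce the invariant in the database itself with a conditional UPDATE, so a concurrent aggregate that would drive the quantity negative is rejected rather than applied. A sketch using SQLite for brevity (the `inventory` table and `apply_delta` helper are illustrative assumptions):

```python
import sqlite3

def apply_delta(conn, product_id, delta):
    """Apply an aggregated delta only if the resulting quantity stays non-negative.

    Returns True if the update was applied, False if it was rejected."""
    cur = conn.execute(
        "UPDATE inventory SET quantity = quantity + ? "
        "WHERE product_id = ? AND quantity + ? >= 0",
        (delta, product_id, delta),
    )
    conn.commit()
    # rowcount is 0 when the guard in the WHERE clause rejected the change
    return cur.rowcount == 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (product_id TEXT PRIMARY KEY, quantity INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('p1', 10)")
```

A rejected batch then has to be handled by the application (retried, split, or surfaced as an out-of-stock error), but the invariant itself can no longer be violated by concurrent writers.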

OneCricketeer