Is there a mechanism for working with shared data under high concurrency?
Initially we used MongoDB, and its atomic updates solved the problem. But the update frequency grew to about 1000/second, so we set up Redis in front of Mongo and wrote synchronization between them. It works well, except that now we have a concurrency problem with Redis.
For example:
- The first request arrives at 0.01 ms; its process exits at 0.04 ms.
- The second request arrives at 0.02 ms and exits at 0.03 ms.
Both requests read the same object, change its data, and write the whole object back on exit, so the later write silently overwrites the earlier one (a classic lost update).
When we used MongoDB we could do partial updates on the object (e.g. $set / $inc on individual fields), but with Redis, where the object is stored as one serialized value, we cannot.
Is it possible to manipulate the same object (data) from multiple processes at the same time and update only part of it, instead of overwriting the whole thing?
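One answer, as far as I understand Redis: store the object as a hash instead of one serialized blob. Then HSET overwrites a single field and HINCRBY increments a numeric field, and each command executes atomically on the single-threaded Redis server, so two processes touching different fields never clobber each other. With redis-py that would be roughly `r.hincrby("user:42", "visits", 1)` (key and field names here are hypothetical). A minimal in-process sketch of those semantics, using a dict plus a lock to stand in for the Redis server:

```python
import threading

class HashStore:
    """Toy model of Redis hash commands: each operation is atomic,
    like a single command executed by the single-threaded Redis server."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def hset(self, key, field, value):        # Redis: HSET key field value
        with self._lock:
            self._data.setdefault(key, {})[field] = value

    def hincrby(self, key, field, amount=1):  # Redis: HINCRBY key field amount
        with self._lock:
            h = self._data.setdefault(key, {})
            h[field] = h.get(field, 0) + amount
            return h[field]

    def hgetall(self, key):                   # Redis: HGETALL key
        with self._lock:
            return dict(self._data.get(key, {}))

store = HashStore()

def worker():
    # Each request updates only the field it cares about,
    # never the whole object, so nothing is overwritten.
    for _ in range(1000):
        store.hincrby("user:42", "visits", 1)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store.hgetall("user:42")["visits"])  # 4000: no lost updates
```

With real Redis the same pattern needs no client-side lock at all, because the server serializes the commands itself.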
The only way I have found so far is to implement a lock mechanism: a process must wait while the lock exists before it can fetch the object again.
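Besides an explicit lock, Redis offers optimistic locking: WATCH the key, read it, queue the write inside MULTI/EXEC; if another process modified the key in between, EXEC aborts and you simply retry, so no process ever blocks holding a lock. A pure-Python sketch of that check-and-set retry loop, with a version counter standing in for WATCH (all names here are illustrative, not a real Redis API):

```python
import json
import threading

class VersionedStore:
    """Toy check-and-set store: compare_and_set succeeds only if the value
    was not changed since it was read, like WATCH + MULTI/EXEC in Redis."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = json.dumps({"counter": 0})
        self._version = 0

    def get(self):
        with self._lock:
            return self._value, self._version

    def compare_and_set(self, new_value, expected_version):
        with self._lock:
            if self._version != expected_version:
                return False  # someone wrote in between, like EXEC returning nil
            self._value = new_value
            self._version += 1
            return True

store = VersionedStore()

def update_part(field, delta):
    while True:  # retry loop, like re-issuing WATCH after an aborted EXEC
        raw, version = store.get()
        obj = json.loads(raw)                  # read...
        obj[field] = obj.get(field, 0) + delta  # ...modify one field...
        if store.compare_and_set(json.dumps(obj), version):
            return                             # ...write succeeded, stop retrying

threads = [threading.Thread(target=update_part, args=("counter", 1))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(json.loads(store.get()[0])["counter"])  # 100: no update is lost
```

A server-side Lua script (EVAL) is another option: the whole read-modify-write runs atomically inside Redis, so there is nothing to retry.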