Scenario: I have a database that accepts writes from a set of geographically distributed clients over unreliable links. The clients only perform write operations, plus occasional reads of their own last two or three writes. Older data is archived off to a data warehouse.
Problem: The clients connect to the database over unreliable networks and cannot write while the data links between them and the server are down. Since the clients are essentially data entry tools, this wastes a large number of man-hours. Improving the connectivity of the networks is not an option.
Possible solution: Run a caching database proxy on each client node that queues writes locally while the data link is down, then pushes them to the main database once the link comes back up.
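In case it clarifies what I mean, here is a minimal sketch of the client-side piece, assuming a hypothetical `send_to_server(sql, params)` call that raises `OSError` when the link is down (that function and the error convention are my own placeholders, not part of any real library). Writes are journaled into a local SQLite file first, and a flush pass replays the journal in insertion order once the link is back; the journal would also incidentally cover reads of the client's last few writes:

```python
import json
import sqlite3

class WriteBuffer:
    """Journals writes locally, replays them to the server when possible."""

    def __init__(self, path="write_buffer.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS pending ("
            " id INTEGER PRIMARY KEY AUTOINCREMENT,"
            " sql TEXT NOT NULL,"
            " params TEXT NOT NULL)"
        )
        self.conn.commit()

    def write(self, sql, params):
        # Always journal locally first, so the write survives a dead link.
        self.conn.execute(
            "INSERT INTO pending (sql, params) VALUES (?, ?)",
            (sql, json.dumps(params)),
        )
        self.conn.commit()

    def flush(self, send_to_server):
        # Replay journaled writes in insertion order; stop at the first
        # failure so ordering is preserved across retries.
        rows = self.conn.execute(
            "SELECT id, sql, params FROM pending ORDER BY id"
        ).fetchall()
        for row_id, sql, params in rows:
            try:
                # send_to_server is a stand-in for whatever actually
                # talks to the main database.
                send_to_server(sql, json.loads(params))
            except OSError:
                break  # link is still down; retry on the next flush
            self.conn.execute("DELETE FROM pending WHERE id = ?", (row_id,))
            self.conn.commit()
```

I realize this glosses over conflict handling and server-side deduplication; it is only meant to show the store-and-forward shape of the idea.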
Question: Does such a system already exist (and if so, for which database), or am I stuck writing one of my own?
Notes:
- The database is relational in nature. It may be possible to change it to a NoSQL-based structure, but the effort would set the project back by at least 6 months.
- The same applies to using a distributed message queue system.
Disclaimer: Google was no help, other than turning up a link to Google F1.