
Scenario: I have a database that accepts writes from a set of geographically distributed clients over unreliable links. The clients perform only write operations, plus occasional reads of their own last two or three writes. Older data is archived off to a data warehouse.

Problem: The clients connect to the database over unreliable networks and are unable to write when the data links between them and the server are down. This wastes a large number of man-hours, as the clients are essentially data entry tools. It is not possible to improve the connectivity of the networks.

Possible solution: Run a caching database proxy on each client node that caches the writes locally when the data link is down. When the link comes back up, it pushes all writes to the main database.
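
To make the idea concrete, here is a minimal sketch of what I have in mind, using SQLite (Python's built-in sqlite3 module) as the durable local cache. The queue file name and the push_to_main_db callable are placeholders for illustration, not an existing product:

    import sqlite3
    import time

    LOCAL_QUEUE_DB = "write_queue.sqlite"  # placeholder path for the local cache

    def init_queue(path=LOCAL_QUEUE_DB):
        """Open (or create) the durable local queue of pending writes."""
        conn = sqlite3.connect(path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS pending_writes ("
            " id INTEGER PRIMARY KEY AUTOINCREMENT,"
            " created_at REAL NOT NULL,"
            " statement TEXT NOT NULL)"
        )
        conn.commit()
        return conn

    def enqueue_write(conn, statement):
        """Called for every client write; always succeeds, link or no link."""
        conn.execute(
            "INSERT INTO pending_writes (created_at, statement) VALUES (?, ?)",
            (time.time(), statement),
        )
        conn.commit()

    def flush_queue(conn, push_to_main_db):
        """Replay queued writes in order once the link is back up.

        push_to_main_db is a placeholder callable that executes one
        statement against the central database and raises on failure.
        """
        rows = conn.execute(
            "SELECT id, statement FROM pending_writes ORDER BY id"
        ).fetchall()
        for row_id, statement in rows:
            push_to_main_db(statement)  # raises if the link is still down
            # Delete only after the upstream write succeeds, so a crash
            # mid-flush re-sends rather than loses writes (at-least-once).
            conn.execute("DELETE FROM pending_writes WHERE id = ?", (row_id,))
            conn.commit()

Note the delete-after-push ordering gives at-least-once delivery, so the upstream writes would need to be idempotent or deduplicated by a client-supplied key.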

Question: Does any such system exist (and if so, for which database), or am I stuck writing such a system myself?

Notes:

  • The database is relational in nature. It may be possible to change it to a NoSQL-based structure, but the effort would set the project back by at least 6 months.
  • The same applies to using a distributed message queue system.

Disclaimer: Google was no help, other than providing a link to Google F1.

Samveen
  • Have you looked into message queues (e.g. MSMQ)? The client could write to a local message queue, which would guarantee delivery to a remote message queue (and you read the remote one to put data into the database). They are designed for exactly this sort of thing. – RB. Oct 01 '13 at 12:30
  • @RB. I have not considered message queues as a solution. I'll look into them (a sketch of that pattern follows below), but using them will probably affect my project timelines as well. – Samveen Oct 01 '13 at 12:34
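
Following up on RB.'s comment, here is a minimal sketch of the local-queue-to-database pattern, using RabbitMQ via the pika client as a stand-in for MSMQ. The queue name and the apply_to_db callable are placeholders; this shows only the client and database ends, since the store-and-forward hop between a local and a remote queue is the broker's job (MSMQ does it natively, RabbitMQ would use something like the Shovel plugin):

    import pika

    QUEUE = "client_writes"  # placeholder queue name

    def publish_write(statement):
        """Client side: hand the write to the local broker and return."""
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        ch = conn.channel()
        ch.queue_declare(queue=QUEUE, durable=True)  # survive broker restarts
        ch.basic_publish(
            exchange="",
            routing_key=QUEUE,
            body=statement,
            properties=pika.BasicProperties(delivery_mode=2),  # persist message
        )
        conn.close()

    def consume_writes(apply_to_db):
        """Database side: apply each queued write, acking only on success."""
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        ch = conn.channel()
        ch.queue_declare(queue=QUEUE, durable=True)

        def handle(channel, method, properties, body):
            apply_to_db(body.decode())  # placeholder: run the write upstream
            channel.basic_ack(delivery_tag=method.delivery_tag)  # ack on success

        ch.basic_consume(queue=QUEUE, on_message_callback=handle)
        ch.start_consuming()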

0 Answers