4

So the scenario is this:

I have a MySQL database on a local server running Windows Server 2008. The server is only meant to be accessible to users on our network and contains our company's production schedule information. I have what is essentially the same database running on a hosted Linux server, which is meant to be accessible online so our customers can connect to it and update their orders.

What I want to do is a two-way sync of two tables in the database so that the orders are current in both databases, and a one-way sync from our server to the hosted one for the data in the other tables. The front end to the database is written in PHP. I'll describe what I am working with so far, and I would appreciate it if people could let me know whether I am on the right track or barking up the wrong tree, and hopefully point me in the right direction.

My first idea is to have the PHP scripts that change the orders tables finish by exporting the changes that have been made, perhaps using SELECT ... INTO OUTFILE with a WHERE account = ... clause or something similar. This would keep the size of the file small rather than exporting the entire orders table. What I am hung up on is how to (A) export this as an SQL file rather than a CSV, (B) include information about what has been deleted as well as what has been inserted, and (C) fetch this file on the other server and execute the SQL statements.
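Roughly what I have in mind so far (table/column names and the path are just placeholders, and as I understand it INTO OUTFILE writes delimited text rather than SQL, which is problem (A)):

    <?php
    // Sketch only: dump just the rows for one account to a file on the
    // database server's filesystem.  Requires the FILE privilege.
    $db = new mysqli('localhost', 'user', 'pass', 'production');

    $account = (int) $_POST['account'];   // whatever identifies the changed rows
    $outfile = 'C:/dumps/orders_' . $account . '_' . time() . '.csv';

    $sql = "SELECT * FROM orders
            WHERE account = $account
            INTO OUTFILE '" . $db->real_escape_string($outfile) . "'
            FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
            LINES TERMINATED BY '\\n'";

    if (!$db->query($sql)) {
        error_log('export failed: ' . $db->error);
    }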

I am currently looking into SSH and PowerShell but can't seem to form a solid picture of exactly how this will work. I am also looking into cron jobs and Windows scheduled tasks. Ideally the updates would simply happen whenever there is a change, rather than on a schedule, so the two stay synced in near real time, but I can't quite figure that one out. I'd want to run the scheduled task/cron job at least once every few minutes, though I guess all it would need to do is check whether there are any dump files that need to be pushed to the opposing server, and not sync anything if nothing has changed.
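For the scheduled side, this is the kind of script I picture cron or Task Scheduler running every few minutes (all paths, hostnames and credentials are placeholders):

    <?php
    // Sketch only: apply any pending dump files to the other server,
    // then move them out of the way.  Run every few minutes.
    $pending = glob('/var/sync/pending/*.sql');
    if (!$pending) {
        exit;   // nothing changed since the last run
    }

    $remote = new mysqli('hosted.example.com', 'syncuser', 'pass', 'production');

    foreach ($pending as $file) {
        if ($remote->multi_query(file_get_contents($file))) {
            // drain every result set so the connection can be reused
            while ($remote->more_results() && $remote->next_result()) { ; }
            rename($file, '/var/sync/done/' . basename($file));
        } else {
            error_log("sync failed for $file: " . $remote->error);
            break;   // leave the file for the next run
        }
    }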

Has anyone ever done something like this? We are talking about changing/adding/removing anywhere from 1 to 160 rows in the tables at a time. I'd love to hear people's thoughts on this whole thing as I continue researching my options. Thanks.

Also, just to clarify, I'm not sure that one of these is really a master or a slave. There isn't one that always has the accurate data; it's more that the most recent data needs to be in both.

One more note: another thing I am thinking about now is to add, at the end of the order-updating script on one side, another config/connect script pointing to the other server's database, and then rerun the exact same queries, since the two have identical structures. Now that just sounds too easy... Thoughts?
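In other words, something along these lines (hostnames, credentials and the query are placeholders; error handling is deliberately minimal):

    <?php
    // Sketch only: run the same write against both databases.
    $local  = new mysqli('localhost',          'user', 'pass', 'production');
    $remote = new mysqli('hosted.example.com', 'user', 'pass', 'production');

    $sql = "UPDATE orders SET qty = 5 WHERE id = 42";   // whatever the order script built

    foreach (array($local, $remote) as $db) {
        if (!$db->query($sql)) {
            // If the remote write fails, the two databases are now out of
            // sync -- this is the weak point of the approach.
            error_log('sync write failed: ' . $db->error);
        }
    }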

Johnny Mnemonic

2 Answers

3

You may not be aware that MySQL itself can be configured with databases on separate servers that opportunistically sync to each other. See here for some details; also, search around for MySQL ring replication. The setup is slightly brittle and will require you to learn a bit about MySQL replication. Or you can build a cluster; much higher learning curve but less brittle.
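Roughly, the configuration boils down to a handful of settings in each server's my.cnf / my.ini, plus pointing each server at the other with CHANGE MASTER TO. Something like the following (server IDs and the database name are placeholders, not a drop-in config):

    # Server A (the Windows box) -- my.ini
    [mysqld]
    server-id                = 1
    log-bin                  = mysql-bin
    binlog-do-db             = production
    auto_increment_increment = 2
    auto_increment_offset    = 1

    # Server B (the hosted Linux box) -- my.cnf
    [mysqld]
    server-id                = 2
    log-bin                  = mysql-bin
    binlog-do-db             = production
    auto_increment_increment = 2
    auto_increment_offset    = 2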

If you really want to roll it yourself, you have quite an adventure in design ahead of you. The biggest problem you have to solve is not how to make it work, it's how to make it work correctly after one of the servers goes down for an hour or your DSL modem melts or a hard drive fills up or...

David O'Riva
  • Oh yeah, I'm already on the adventure :-) !!! Two months ago I'd never heard of PHP or mySQL, our production scheduling system consisted of an excel spreadsheet and post it notes all over the place, and our online ordering was called GoogleDocs. We've all come a long way. – Johnny Mnemonic Nov 03 '11 at 03:16
  • For where we are at the moment I'm going with my last idea... I'll simply run the query on the local server, then reconnect to the remote server and run the query again. That is, until I've figured all of the rest of this replication stuff out... – Johnny Mnemonic Nov 03 '11 at 03:29
  • 1
    Replication sounds a lot scarier than it is. Basically, they're doing exactly what you're proposing, but they've figured out and handled all (or at least most) of the edge and failure cases. You add about 10 incantations to mysql.ini on each side, tune up your schemas, make sure the firewalls are open, and it should start working and repair itself automatically after most failures. There are details for securing it and whatnot, but you'll have to deal with those no matter how your system ends up working. – David O'Riva Nov 03 '11 at 04:01
2

Running a query on both the local and the remote server can be a problem if the connection breaks. It is better to store each query locally in a file, named something like YYYY-MM-DD-HH.sql, and then send the data every hour, once that hour has passed. The update period can be reduced to 5 minutes, for example.

In this way, if the connection breaks, the re-sync after it is re-established simply picks up all of the left-over files. At the end of each file, insert a CRC for checking the content.
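A rough sketch of what I mean in PHP (paths and the function names are just placeholders):

    <?php
    // Sketch only: append every order-changing query to a file named after
    // the current hour; seal the file with a CRC line before it is sent.
    function log_query_for_sync($sql)
    {
        $file = '/var/sync/pending/' . date('Y-m-d-H') . '.sql';
        file_put_contents($file, $sql . ";\n", FILE_APPEND | LOCK_EX);
    }

    function seal_sync_file($file)
    {
        // The receiving side recomputes the CRC32 to verify the content.
        $crc = sprintf('%u', crc32(file_get_contents($file)));
        file_put_contents($file, "-- CRC32: $crc\n", FILE_APPEND);
    }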