
We have a vendor product that supports only relational databases (Oracle and MS SQL). For operational efficiency and cost, we are planning to use MS SQL. However, we face a challenge deploying the product in Active-Active mode across two AWS regions (East and West): as far as I know, neither AWS RDS for MS SQL nor an IaaS MS SQL Always On availability group allows multi-region writes. Also, due to latency concerns, we cannot point the product at the database in the other region.

We want a locally available database instance in each region that the product can read from and write to. Is there a way to deploy MS SQL Server in a multi-region, multi-master topology, where the database servers in both regions can accept write transactions?

P.S. This product is end-user facing, with a huge number of transactions, and it makes decisions based on users' transactional data.

Indranil
  • Even taking the added complexity of doing this on AWS out of the equation, having more than one replica in a topology able to accept writes is challenging. There's peer-to-peer replication, but the db schema really has to be designed to use it (i.e. it's not something you can just put on top of an existing schema and call it done). – Ben Thul Sep 30 '21 at 23:38
  • Hi @BenThul, can you please share more information on how to set up peer-to-peer replication? – Indranil Oct 04 '21 at 20:43
  • I have no practical experience with it, so whatever I'd direct you towards would just be an uninformed regurgitation of Google results. I'd really make sure that you need local writes in multiple locations before going down this road, though. There are a *lot* of high-transaction websites that do everything from one datacenter (with a cold- to warm-DR site) and work just fine. The overhead in setting up for multiple writers is high. – Ben Thul Oct 04 '21 at 22:06
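On the peer-to-peer setup question above: the configuration is done with SQL Server's standard replication stored procedures. Below is a minimal sketch run on the first node, using hypothetical names (AppDB, AppPub, WEST-SQL01) and omitting distributor configuration, agent jobs, and initialization; peer-to-peer also requires Enterprise Edition and an identical schema on every node, so treat this as an outline rather than a complete recipe.

    -- Run on the first node; repeat on each peer with the server names swapped.
    USE AppDB;
    GO

    -- Enable the database for transactional publishing.
    EXEC sp_replicationdboption
        @dbname  = N'AppDB',
        @optname = N'publish',
        @value   = N'true';

    -- Create the publication with peer-to-peer enabled.
    EXEC sp_addpublication
        @publication = N'AppPub',
        @enabled_for_p2p = N'true',
        @allow_initialize_from_backup = N'true',
        @status = N'active';

    -- Add each table as an article (the schema must match on all peers).
    EXEC sp_addarticle
        @publication   = N'AppPub',
        @article       = N'Orders',
        @source_object = N'Orders';

    -- Point this node at the peer in the other region; the peer database is
    -- initialized from a restored backup rather than a snapshot.
    EXEC sp_addsubscription
        @publication       = N'AppPub',
        @subscriber        = N'WEST-SQL01',
        @destination_db    = N'AppDB',
        @subscription_type = N'push',
        @sync_type         = N'replication support only';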

1 Answer


MS SQL Server and Oracle do not allow multiple masters. Furthermore, the transactional replication underlying SQL Server merge replication is as slow as 1-2 rows of data per second when the databases are 10,000 miles apart. Looking deeper into the subscriber-distributor-publisher model shows that the distributor opens a dataset remotely over a Native Client 11 connection, which is intended for LAN conditions, not WAN. Believe me, I have seen it, I have set it up, and I have talked with Premier Support about it. This is the worst replication ever: even if you tune the line conditions with endpoint optimizers (e.g. Riverbed), you end up with about 300 rows of data replicated per second. I have to repeat: it reads row by row, as a raw SQL text transfer, with a forward-only cursor.

One more point: our application has a feature that duplicates data, like insert into table ... select * from table where ID=a and key=b;. But merge replication in MS SQL Server adds a rowguid column, and it is forbidden to have two identical rowguids. That stops the distributor when it copies such data from a client. Many applications do this, so I have to say: forget about it.

Multiple masters with low latency means Aurora with the MySQL engine ONLY (or partially with the Postgres engine). The replication cycle in merge replication, which is what basically allows multiple synched databases, is 60 seconds; theoretically it can be smaller, but that comes with more overhead, like establishing the connection and exchanging certificates. If it is simple web stuff, you can possibly change the SQL client in its scripting language; PHP, for example, supports MS SQL, MySQL, Postgres, and more. If it is a big thing with billions of US$ of turnover, think about a lift-and-shift to AWS and Aurora using the proper engines. MS SQL Server is not (yet) made for that, at least not for real-time sync or large amounts of data. There are also options with Ingres, which has a feature to check out single data sets to another Ingres engine, but that won't fit your needs.
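To make the rowguid problem above concrete (the table and column names here are hypothetical): merge replication adds a uniqueidentifier column with a unique index to every published table, so an INSERT ... SELECT * copies the existing GUIDs along with the data, and the duplicates stop the agent. One workaround is to list the columns explicitly so the column default generates fresh GUIDs:

    -- Merge replication has added something like this to the table:
    --   ALTER TABLE dbo.Orders ADD rowguid uniqueidentifier ROWGUIDCOL
    --       NOT NULL DEFAULT NEWSEQUENTIALID();

    DECLARE @a int = 1, @b int = 2;

    -- The application's pattern: SELECT * now includes rowguid, so the
    -- copied rows carry duplicate GUIDs and replication breaks.
    INSERT INTO dbo.Orders
    SELECT * FROM dbo.Orders WHERE ID = @a AND [key] = @b;

    -- Workaround: name the columns and leave rowguid out, letting the
    -- default generate a new GUID for every copied row.
    INSERT INTO dbo.Orders (ID, [key], Amount, CreatedAt)
    SELECT ID, [key], Amount, CreatedAt
    FROM dbo.Orders WHERE ID = @a AND [key] = @b;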