
Need a sanity check. Imagine a single SQL Server instance on a beefy system (e.g. 48 GB of RAM and plenty of storage). There inevitably comes a point where it gets hammered when lots of jobs run at once. These jobs and the database are part of a third-party piece of software and cannot be controlled or modified by us directly.

Now, when these jobs run, besides the queries probably being inefficient, they effectively bring the database down: everything becomes very slow, so any "regular" users get slow responses.

The immediate thing I can think of is replication of some kind, where the "secondary" database would be the one these jobs point to and hammer, leaving the primary available and active but still receiving any updates from the secondary for data consistency/integrity.

Would this be the right thing to do? Ultimately I want the load to be elsewhere, but I want the primary to be aware of updates and apply them to itself without going down or becoming very slow.

What is this called in MS SQL Server? Does such a thing exist? FYI, the jobs do both reads and writes.

Ahmed ilyas

1 Answer


There are numerous approaches to this, all of which are native to SQL Server, but I think you should look into Transactional Replication:

https://learn.microsoft.com/en-us/sql/relational-databases/replication/transactional/transactional-replication?view=sql-server-ver16

It effectively creates a replica (treated as read-only by default) by continuously delivering committed transactions read from the publisher's transaction log; for reporting purposes it is practically real time.

From the documentation:

"By default, Subscribers to transactional publications should be treated as read-only, because changes are not propagated back to the Publisher. However, transactional replication does offer options that allow updates at the Subscriber."

Your scenario likely has nuances I don't know about, but you can use various flavors of SQL Replication, custom triggers, linked servers, three-part-name queries, etc. to fill in the holes.
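To make the idea concrete, here is a minimal sketch of setting up a transactional publication with T-SQL. All names (`MyVendorDb`, `VendorDbPub`, `Patients`, `REPORTSRV`, `MyVendorDbReplica`) are placeholders, not anything from your environment, and a real setup also needs the Distributor, Snapshot Agent, and Log Reader Agent configured:

```sql
-- Sketch only: database, publication, article, and server names are assumptions.
-- Run at the Publisher, in the database to be published.
USE MyVendorDb;
GO

-- Enable the database for transactional publishing.
EXEC sp_replicationdboption
    @dbname  = N'MyVendorDb',
    @optname = N'publish',
    @value   = N'true';

-- Create a continuously replicating transactional publication.
EXEC sp_addpublication
    @publication = N'VendorDbPub',
    @status      = N'active',
    @repl_freq   = N'continuous';

-- Publish one table as an article (repeat per table of interest).
EXEC sp_addarticle
    @publication   = N'VendorDbPub',
    @article       = N'Patients',
    @source_object = N'Patients';

-- Create a push subscription to the secondary server.
EXEC sp_addsubscription
    @publication       = N'VendorDbPub',
    @subscriber        = N'REPORTSRV',
    @destination_db    = N'MyVendorDbReplica',
    @subscription_type = N'push';
```

In practice most people drive this through the New Publication wizard in SSMS rather than raw procedures, but the script shows which pieces are involved.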

DanielG
  • Thanks! This is what I was thinking too. The problem is that I feel (still investigating) that these jobs read from and write to the DB instead of just reading. So you could replicate to the secondary and point the jobs at the secondary, but if they update the secondary, the primary needs to be aware of those updates... – Ahmed ilyas Aug 12 '22 at 17:57
  • Yes, I feel your pain. Exactly how you would engineer this has tons of nuance I will never know, but I've done similar things using the techniques/technologies I listed. It really depends on what these jobs are doing, how many tables they impact, etc. If the set of tables impacted is manageable, then a step at the end that upserts (SQL MERGE) into the primary production DB using three-part table qualifiers could push the affected rows back. It's just hard for me to know how to respond in a more specific way. :-) – DanielG Aug 12 '22 at 18:33
  • I totally feel and hear you, Daniel! I appreciate it. Yeah, this is a big beast. This particular application is from a third party, and it's a beast of a database (all to do with medical stuff). It's also hard for me to say what the application and jobs are doing since it is third-party software, and I'm left on my own to figure out the best way of making it performant and available. This is the only thing I could think of from the infra/SQL-offerings angle, rather than digging into the actual SPROCS (which would violate the terms of the contract!). – Ahmed ilyas Aug 12 '22 at 21:06
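The upsert step DanielG mentions might look something like the following. This is a sketch under assumptions: the table, key, and timestamp columns (`Patients`, `PatientId`, `ModifiedAt`) are hypothetical, and it presumes both databases sit on the same instance so a three-part name works (a linked server would need a four-part name instead):

```sql
-- Sketch only: database, table, and column names are hypothetical.
-- Push rows changed on the secondary back into the primary.
MERGE PrimaryDb.dbo.Patients AS tgt
USING SecondaryDb.dbo.Patients AS src
    ON tgt.PatientId = src.PatientId
WHEN MATCHED AND src.ModifiedAt > tgt.ModifiedAt THEN
    -- Row exists in both: take the newer secondary version.
    UPDATE SET tgt.Name       = src.Name,
               tgt.ModifiedAt = src.ModifiedAt
WHEN NOT MATCHED BY TARGET THEN
    -- Row only on the secondary: insert it into the primary.
    INSERT (PatientId, Name, ModifiedAt)
    VALUES (src.PatientId, src.Name, src.ModifiedAt);
```

You would run one such statement per affected table, typically as the final step of the job chain or on a schedule.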