We just implemented a new job on a medium-traffic database that is involved in transactional replication. The job processes inputs from an Excel spreadsheet and calls a stored procedure to create records in various tables in that database.

One of the fields involved uses the text data type and can range from 10,000 to 30,000 characters. Within the job, a custom SQL Server function parses a URL out of the large HTML body and writes it to a table.
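For context, the parsing function is conceptually similar to this minimal sketch (the function name, the single-URL assumption, and the crude end-of-URL matching are all illustrative; our real function is more involved):

    -- Illustrative only: extracts the first URL from an HTML body.
    -- The name, the NVARCHAR(MAX) parameter (the column itself is text),
    -- and the matching logic are placeholders for our actual function.
    CREATE FUNCTION dbo.ExtractFirstUrl (@Body NVARCHAR(MAX))
    RETURNS NVARCHAR(2048)
    AS
    BEGIN
        DECLARE @Start INT = CHARINDEX('http', @Body);
        IF @Start = 0
            RETURN NULL;

        -- Take up to 2048 characters and cut at the first delimiter
        -- that plausibly ends a URL inside HTML.
        DECLARE @Rest NVARCHAR(2048) = SUBSTRING(@Body, @Start, 2048);
        DECLARE @End  INT = PATINDEX('%[ "''<>)]%', @Rest);

        RETURN CASE WHEN @End = 0 THEN @Rest
                    ELSE LEFT(@Rest, @End - 1) END;
    END;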

Because the process appears to be so intensive, we only process 35 records (rows) from the spreadsheet per job execution, every half hour. What we are seeing is that the transaction log is growing at an alarming rate, which in turn causes considerable latency from the publisher to the subscriber: the Log Reader Agent scans the entire transaction log and finds zero records for replication.
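For what it is worth, we have been watching the latency with standard commands like these (nothing custom; the database name is a placeholder):

    -- On the publisher: log reader throughput and latency per published database.
    EXEC sp_replcounters;

    -- What is holding the log open: when replication is the blocker,
    -- this column reports REPLICATION. 'YourPublishedDb' is a placeholder.
    SELECT name, log_reuse_wait_desc
    FROM   sys.databases
    WHERE  name = N'YourPublishedDb';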

To put it into perspective, the log sits at around 30 GB in size and cannot be made much smaller, because running the job 5-6 times will grow it past that size. We take frequent backups of the distribution database (every 5 minutes) and of the transaction log (every 15 minutes).
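Those sizes come from the standard log-space commands, shown here just to be explicit about how we measured:

    -- Log size and percent used for every database on the instance.
    DBCC SQLPERF(LOGSPACE);

    -- Per-database view of the same numbers (SQL Server 2012 and later);
    -- run in the published database.
    SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
           used_log_space_in_bytes / 1048576.0 AS used_log_mb,
           used_log_space_in_percent
    FROM   sys.dm_db_log_space_usage;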

We are in dire need of a solution that decreases the latency, or at least the amount of reading (I/O) against the log. I realize that some work may need to happen on the app/job side, but is there anything within the replication configuration that can be done to help with this latency and wasted log scanning?
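The only replication-side knob we have identified so far is tuning the Log Reader Agent's documented parameters (ReadBatchSize, PollingInterval, MaxCmdsInTran) through a custom agent profile. Is something like the following sketch the right direction? The profile name and the ReadBatchSize value are untested guesses on our part:

    -- Run in msdb on the distributor. Creates a custom Log Reader Agent
    -- profile; the profile still has to be assigned to the agent
    -- (e.g. in Replication Monitor) before it takes effect.
    DECLARE @pid INT;

    EXEC msdb.dbo.sp_add_agent_profile
         @profile_id   = @pid OUTPUT,
         @profile_name = N'Log Reader - larger read batches',  -- placeholder name
         @agent_type   = 2,    -- 2 = Log Reader Agent
         @description  = N'Experiment: reduce scan overhead per cycle';

    EXEC msdb.dbo.sp_add_agent_parameter
         @profile_id      = @pid,
         @parameter_name  = N'ReadBatchSize',
         @parameter_value = N'1000';   -- default is 500 transactions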

Any help would be much appreciated. Thank you.
