
I have a job that runs a nightly backup of all attached databases, but it can bog the server down if someone is connected at the time. I have TONS of CPU overhead to work with (overall use is barely above 5%; of the 8 cores, the busiest peaks at maybe 30% and the rest sit idle); my issue is that I am pegging my I/O. Looking at Resource Monitor, I noticed that the reads from the .mdf and the writes to the .bak file both run at Normal I/O priority.

Is there a way to make my backups run at Background priority?

Scott Chamberlain

4 Answers


You can back up databases to a UNC path (manually or by creating a backup device), so you could try that, but you'll be trading disk I/O for network I/O.
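
A minimal sketch of both approaches, assuming a hypothetical \\BACKUPHOST\SQLBackups share that the SQL Server service account can write to (MyDatabase is a placeholder name):

    -- One-off backup straight to a UNC path
    BACKUP DATABASE MyDatabase
    TO DISK = N'\\BACKUPHOST\SQLBackups\MyDatabase.bak'
    WITH INIT;

    -- Or register the path once as a backup device and reuse it
    EXEC sp_addumpdevice 'disk', 'MyDatabaseBackup',
        N'\\BACKUPHOST\SQLBackups\MyDatabase.bak';
    BACKUP DATABASE MyDatabase TO MyDatabaseBackup WITH INIT;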

joeqwerty
  • Hmm, I do immediately transfer the files to another computer anyway. This may be my solution, but I will leave this open for a day or two in case better suggestions come along. – Scott Chamberlain Aug 26 '11 at 17:44
  • There are all sorts of memory problems that go with streaming large database backups to a remote server. Basically this causes all the memory to be assigned to the system cache and taken away from SQL Server. I highly recommend never doing this. – mrdenny Aug 26 '11 at 17:47
  • If I mounted an iSCSI device, do the same issues apply? – Scott Chamberlain Aug 26 '11 at 17:52
  • If Windows sees it as a local disk on the SQL Server it doesn't apply; it's only an issue when going to a network path or a mapped drive (mapped drives do not count as local disks). – mrdenny Aug 26 '11 at 17:55

This doesn't actually answer the question (how to tweak I/O priority for a backup operation), but it may help significantly speed up backups and reduce their effect on other activity:

If you can, write your backups to drives that are not holding the database's data and log files. With spinning disks this improves performance considerably, because the backup operation is not constantly forcing the heads to flip between the area holding the live data (to read pages) and the area the backup is being written to (to store those pages).

This may mean adding an extra physical drive (or multiple drives so you can use RAID for redundancy) to the machine for the backups to go to, particularly if you only have one drive or array that holds everything. If you have your data and log files on different disks/arrays but don't have a third drive/array to write the backups to and are unable to add one, writing the backups to the drive with the logs is usually faster than writing them to the drive with the data files (as the active pages in the log files are far fewer except in unusual cases).
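
As a rough sketch, assuming hypothetical E: and F: drives that hold neither the data nor the log files (database name and paths are placeholders):

    -- Back up to a dedicated drive
    BACKUP DATABASE MyDatabase
    TO DISK = N'E:\Backups\MyDatabase.bak'
    WITH INIT;

    -- With two dedicated drives, striping the backup across both
    -- files spreads the write load
    BACKUP DATABASE MyDatabase
    TO DISK = N'E:\Backups\MyDatabase_1.bak',
       DISK = N'F:\Backups\MyDatabase_2.bak'
    WITH INIT;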

David Spillett
  • Unfortunately this is a 1U server, all the bays are full, and they are used in a RAID configuration. There is also an issue with the USB controller not showing up in hardware devices (that is the reason it is running on the same disk; it used to go to an external drive, but now it gets backed up locally and then FTPed to another machine). – Scott Chamberlain Aug 26 '11 at 17:28
  • Is there another machine it can see over the local network to back up to? That would separate the source and destination of the backup process onto different spindle sets, and if the network is only 100Mbit it will also artificially limit the I/O load imposed by the operation in any given second. – David Spillett Aug 26 '11 at 20:58
  • Doing it over the network looks like the solution I will end up doing. – Scott Chamberlain Aug 26 '11 at 21:01

No, there is no supported way of changing one thread to a different priority level. You can try using something like LiteSpeed and/or HyperBac to compress the backups, which will reduce the write I/O requirements.
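
As a hedged illustration of the HyperBac approach: HyperBac acts as a filter that transparently compresses backup files written with its registered extensions, so a plain BACKUP statement targeting (by my recollection of its defaults) a .hbc file gets compressed on the way to disk; verify the extension against the product's documentation:

    -- Standard T-SQL backup; with HyperBac installed, the .hbc
    -- extension (assumed default) triggers transparent compression
    BACKUP DATABASE MyDatabase
    TO DISK = N'D:\Backups\MyDatabase.hbc'
    WITH INIT;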

mrdenny
  • Would using windows file and folder compression free up I/O resources at the expense of CPU resources, or would it not make a difference? – Scott Chamberlain Aug 26 '11 at 17:30
  • I'm not sure if NTFS compression is supported for SQL backups. I know that NTFS compression isn't very good, so it probably wouldn't help much. HyperBac isn't that expensive and has a very good ROI. Here's a real-world post I wrote up about it: http://itknowledgeexchange.techtarget.com/sql-server/how-hyperbac-worked-for-me/ – mrdenny Aug 26 '11 at 17:35
  • I just ran it by the higher-ups and they'd rather have it go slow than spend money on making the backups run faster :( – Scott Chamberlain Aug 26 '11 at 17:44
  • Really, they can't fork over $795? Ouch. – mrdenny Aug 26 '11 at 17:46
  • For a batch job running at 2 AM that maybe inconveniences one person a week, yeah. – Scott Chamberlain Aug 26 '11 at 17:50
  • OK, I can see that. It would be hard to justify spending money for one person, unless that person's wasted time cost more than the software. – mrdenny Aug 26 '11 at 17:54

Upgrade to SQL Server 2008 Enterprise or SQL Server 2008 R2 Standard or higher, make use of native backup compression, and trade some of those spare CPU cycles for a potentially much shorter backup time.
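
A minimal sketch of a natively compressed backup (database name and path are placeholders):

    -- Trades CPU cycles for fewer bytes written, so less I/O
    BACKUP DATABASE MyDatabase
    TO DISK = N'D:\Backups\MyDatabase.bak'
    WITH COMPRESSION, INIT;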

Peter Schofield