2

I have to back up all databases back to back (3.25 GB of data across all .BAK files combined).

Will it go faster if I configure the script to set each database to read-only before backing it up?

I am using the RESTORE VERIFYONLY option to check the integrity of the backups. I'd also like to know if there is anything that can speed that process up; I've found it takes longer than the actual backup does!
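
For context, the verify step looks something like this (the path is a placeholder, not my actual file name):

-- Illustrative verify step; path is a placeholder.
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\MyDatabase.bak';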

Bart De Vos
Michael
  • Are the backups under time constraints? – joeqwerty Aug 10 '15 at 17:52
  • @joeqwerty You can say that, yes. – Michael Aug 10 '15 at 18:00
  • You can say that or you are saying that? Why does everyone have to be cheeky and clever? Simply saying yes or no would have been more efficient and clearer. So you `do` have a time constraint? – joeqwerty Aug 10 '15 at 18:45
  • @joeqwerty OP can neither confirm nor deny that his specific interpretation of "constraint" and/or "time" may or may not be in line with your interpretation of the same. Also, there probably aren't any backups, and everyone is using notpades and crayon. – Wesley Aug 10 '15 at 18:48
  • @joeqwerty YES @ Wesley Where can I purchase 'notpades'? An attempt to be smartassed but only accomplished dumbassed. – Michael Aug 10 '15 at 19:22
  • @Michael: My apologies if my comment came off as being harsh. Your comment came off to me as being a bit vague and ambiguous. You either have a time constraint or you don't. It's not a matter of conjecture or debate. Thanks for clarifying that you do have a time constraint. – joeqwerty Aug 10 '15 at 19:31
  • @joeqwerty Did you have any value to provide to the post besides asking that question? – Michael Aug 11 '15 at 11:25

3 Answers

2

Setting it to READ ONLY might help you, insofar as you are removing one drop of water out of an ocean.

Backups take active transactions into account during the time the backup is being taken.
This means that yes, when you put your database into READ ONLY you are removing a tiny bit of overhead, which technically does help, but it really isn't worth it.
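
For completeness, the toggle would look roughly like this (the database name is a placeholder):

-- Hypothetical database name; WITH ROLLBACK IMMEDIATE rolls back any open transactions first.
ALTER DATABASE MyDatabase SET READ_ONLY WITH ROLLBACK IMMEDIATE;
-- ... run the backup here ...
ALTER DATABASE MyDatabase SET READ_WRITE;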

Improving the performance of backups is well-known territory.
There are plenty of blog posts and even a TechNet article about how to improve the performance of your backups. None of them mentions READ ONLY because, honestly, you won't notice any difference.

To improve your backup performance:

  1. Use multiple media or devices (see the striped-backup sketch after this list)
  2. Research optimization options for your specific type of backup.
    Full and differential backups have different optimizations than log backups, and restoring a backup has different optimizations again.
  3. Improve read performance on the source platform
  4. Improve write performance on the destination platform
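
As a minimal sketch of points 1 and 2, a backup striped across two disks with explicit tuning knobs might look like the following; the paths and tuning values are illustrative assumptions, not recommendations:

-- Stripe the backup across files on two different physical disks (hypothetical paths).
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase_1.bak',
   DISK = N'E:\Backups\MyDatabase_2.bak'
WITH BUFFERCOUNT = 50,          -- example value; benchmark before adopting
     MAXTRANSFERSIZE = 4194304; -- 4 MB per transfer; also an example value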

So to recap:
Yes, it helps, but not enough to be measurable by a sane human being.
There are ways to improve backup performance; try looking into those.

Reaces
1

Setting it to read-only will not help you. You would also create downtime for any application that needs write access.

Look at where your bottleneck is now. Most of the time it's one of two things:

If CPU is the bottleneck: don't compress your backup.
If disk is the bottleneck: compress your backup.
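
In T-SQL terms, that choice is one option on the BACKUP command (database name and path are placeholders):

-- Disk-bound: trade CPU cycles for fewer bytes written.
BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase.bak' WITH COMPRESSION;
-- CPU-bound: skip compression.
BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase.bak' WITH NO_COMPRESSION;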

If you want to dive deeper, run this T-SQL script while the backup is running:

-- Show progress, elapsed time, and estimated completion for any running
-- backup or restore, along with what (if anything) it is waiting on.
SELECT re.command,
       sh.text,
       re.start_time,
       re.percent_complete,
       CAST(DATEDIFF(s, re.start_time, GETDATE()) / 3600 AS varchar) + ' hour(s), '
             + CAST((DATEDIFF(s, re.start_time, GETDATE()) % 3600) / 60 AS varchar) + ' min, '
             + CAST(DATEDIFF(s, re.start_time, GETDATE()) % 60 AS varchar) + ' sec' AS running_time,
       -- estimated_completion_time is in milliseconds
       CAST(re.estimated_completion_time / 3600000 AS varchar) + ' hour(s), '
             + CAST((re.estimated_completion_time % 3600000) / 60000 AS varchar) + ' min, '
             + CAST((re.estimated_completion_time % 60000) / 1000 AS varchar) + ' sec' AS est_time_to_go,
       DATEADD(second, re.estimated_completion_time / 1000, GETDATE()) AS est_completion_time,
       re.status, re.blocking_session_id, re.wait_type, re.wait_time,
       re.last_wait_type, re.wait_resource, re.reads, re.writes, re.cpu_time
FROM sys.dm_exec_requests re
CROSS APPLY sys.dm_exec_sql_text(re.sql_handle) sh
WHERE re.command IN ('RESTORE DATABASE', 'BACKUP DATABASE', 'RESTORE LOG', 'BACKUP LOG');

(The percent_complete column won't work under 2014; I still need to find a solution for that.)

In the column wait_type you will see what is causing the backup to wait; you can look it up in this list. If it has to wait for another process, that process will show up in blocking_session_id. You can then get more details about it with sp_who2 xxx.
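
For example, if blocking_session_id turned out to be 53 (a made-up session id):

EXEC sp_who2 53; -- 53 is a hypothetical session id taken from blocking_session_id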

If your server (disk/CPU) can handle it, you can schedule the backups to run concurrently. If you really want performance, back up to multiple files spanned over multiple disks.

Bart De Vos
  • Down vote because you are wrong. I was going to make that a comment, but then decided to put it up as an answer. Plenty of room for someone with sql server specific knowledge to do better though. Try again. – mc0e Sep 03 '15 at 10:20
  • Ok, cool. My area of expertise is MS SQL. Setting your database to READ ONLY will almost never give you more performance for your read queries. You need above 3-4000 transactions/sec to start noticing a difference. Since the size is only a few gigs that would be very unlikely. Also, backups don't cause locks for user objects in SQL Server. – Bart De Vos Sep 03 '15 at 10:28
  • While true in practice, not true when being technically pedantic. Regardless, +1. – Reaces Sep 03 '15 at 10:56
  • No, it's not the (read only) backup process creating locks that's the issue. Transaction duration (with exclusive lock) is more to the point than transactions per second. How on earth can you infer transaction rate from database size? – mc0e Sep 03 '15 at 11:26
-2

@BartDeVos is right to say that it depends on what your bottleneck is, but he's missed the key point, which is that IF competing write locks are your bottleneck, then yes, making the database read-only can help. (And if not, then don't worry about it.)

My database expertise is not with Microsoft products, so I can't comment on the particular engine you are using, and there's room for quite a bit of difference in the implementation details between database engines and backup systems.

EDIT: Read the comments. IF the backup system being used is the one assumed by the other respondents, then the speed of the backup would not be affected, and I'm guessing the OP's reference to .bak files tends to support that.

mc0e
  • So, you have no expertise in SQL Server but you know I'm wrong. Nice. – Bart De Vos Sep 03 '15 at 10:48
  • OK, so what does happen when the backup process encounters an exclusive lock? If you are right, and the backup process doesn't wait, then wouldn't that mean that the backup process would have to take a snapshot as of before the oldest such lock, or accept a lack of consistency? – mc0e Sep 03 '15 at 11:31
  • @mc0e I covered this in my answer. SQL Server backups are transaction-aware and keep a backup of the log and active transactions at the time of the backup. As such, when backing up they can just back up the files as-is, and consistency is guaranteed by the log of the transactions during the backup process. – Reaces Sep 03 '15 at 11:44
  • SQL Server takes a dirty copy of all pages on disk. If those pages are inconsistent (concurrent activity, activity without checkpoints, ...), it then gets the consistent state out of the transaction logs. – Bart De Vos Sep 03 '15 at 11:45
  • So it sounds like effectively you get a snapshot as of before the oldest exclusive lock? (or the backup process has to wait for that transaction to complete). Granted this is irrelevant for many people as with horizontally scaled systems you avoid long held exclusive locks like the plague, but the OP hasn't indicated the sort of environment that's involved. Also, unless you wait for the locking transaction to complete, then you're right that backup speed would be unaffected. – mc0e Sep 03 '15 at 11:57
  • It's not a snapshot, because it doesn't freeze anything. It simply copies each page _as it is at the time that it is reading the page_. Each change to the page is recorded in the transaction log which is then also backed up to allow for a consistent backup. – Reaces Sep 03 '15 at 12:38
  • You don't need the copy-on-write type mechanism of a snapshot because the log does the same thing, but there's presumably still some point in time to which the state would be recovered if the database is rebuilt from the backup? What I'm getting at is when that point in time is. – mc0e Sep 03 '15 at 13:37