Setting it to read-only will not help you. You would also create downtime for any application that needs write access.
Look at where your bottleneck is. Most of the time it's one of two things:
CPU is the bottleneck: Don't compress your backup.
Disk is the bottleneck: Compress your backup.
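For example, a minimal sketch of both options (the database name and backup path are placeholders, not from the question; STATS = 10 just prints progress every 10 percent):

BACKUP DATABASE YourDatabase
TO DISK = N'D:\Backup\YourDatabase.bak'
WITH COMPRESSION, STATS = 10;    -- trades CPU time for less data written to disk

BACKUP DATABASE YourDatabase
TO DISK = N'D:\Backup\YourDatabase.bak'
WITH NO_COMPRESSION, STATS = 10; -- avoids the CPU overhead when the disk is fast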
If you want to dive deeper, run this T-SQL script while the backup is running:
-- Shows progress, elapsed time, estimated time to go and the current
-- waits for any running backup or restore.
SELECT re.command,
       sh.text,
       re.start_time,
       re.percent_complete,
       CAST(DATEDIFF(s, re.start_time, GETDATE()) / 3600 AS varchar) + ' hour(s), '
       + CAST((DATEDIFF(s, re.start_time, GETDATE()) % 3600) / 60 AS varchar) + ' min, '
       + CAST(DATEDIFF(s, re.start_time, GETDATE()) % 60 AS varchar) + ' sec' AS running_time,
       CAST(re.estimated_completion_time / 3600000 AS varchar) + ' hour(s), '
       + CAST((re.estimated_completion_time % 3600000) / 60000 AS varchar) + ' min, '
       + CAST((re.estimated_completion_time % 60000) / 1000 AS varchar) + ' sec' AS est_time_to_go,
       DATEADD(second, re.estimated_completion_time / 1000, GETDATE()) AS est_completion_time,
       re.status, re.blocking_session_id, re.wait_type, re.wait_time,
       re.last_wait_type, re.wait_resource, re.reads, re.writes, re.cpu_time
FROM sys.dm_exec_requests re
CROSS APPLY sys.dm_exec_sql_text(re.sql_handle) sh
WHERE re.command IN ('RESTORE DATABASE', 'BACKUP DATABASE', 'RESTORE LOG', 'BACKUP LOG');
(The percent_complete column doesn't work under SQL Server 2014; I still need to find a solution for that.)
In the wait_type column you will see what is causing the backup to wait; you can look the value up in the list of documented wait types. If the backup has to wait for another process, that session will show up in blocking_session_id, and you can then get more details about it with sp_who2 <session_id>.
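For example (the session id 64 is just an illustration; use the value you found in blocking_session_id):

EXEC sp_who2 64;  -- lists login, host, command and status for session 64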
If your server (disk/CPU) can handle it, you can schedule the backups to run concurrently. If you really want performance, back up to multiple files spanned over multiple disks.
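A sketch of such a striped backup, assuming three separate physical disks behind the D:, E: and F: drives:

BACKUP DATABASE YourDatabase
TO DISK = N'D:\Backup\YourDatabase_1.bak',
   DISK = N'E:\Backup\YourDatabase_2.bak',
   DISK = N'F:\Backup\YourDatabase_3.bak'
WITH COMPRESSION, STATS = 10;
-- SQL Server writes to all files in parallel, so the backup is only
-- as slow as the slowest of the three disks.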