
We have a production server running approximately 50 customer production databases. We are reviewing the recovery models and would like your recommendations on which recovery model to use.

I have been doing some research, but the responses on which model is best have been mixed.

The server scenario is:

  • Full server backups are performed once daily (handled by the cloud provider)
  • None of the databases are currently backed up individually on a regular basis (they used to be, until approx. 6 months ago)
  • The amount and type of data stored differs per database, according to each customer's needs

All of the databases are currently set to the SIMPLE recovery model. This was decided when the log files began to grow excessively, which caused disk-space problems: free space ran so low that connections to the databases could no longer be made. Switching to SIMPLE has stopped the disk issues, but now we have to consider what recovery of the databases would look like if a disaster were to happen.
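To get a clear picture before deciding, here is a minimal T-SQL sketch (the database name below is hypothetical) that lists the current recovery model of every database and shows how a single database would be switched to FULL:

```sql
-- List the current recovery model of every database on the instance
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;

-- Switch one database to FULL (CustomerDB01 is a placeholder name).
-- Take a full backup immediately afterwards: until the first full backup,
-- the database still behaves as if it were in SIMPLE (auto-truncate) mode.
ALTER DATABASE [CustomerDB01] SET RECOVERY FULL;
```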

I look forward to your response/recommendations!

Thank you in advance

  • If you use the simple recovery model there is minimal logging. You can shrink the log file and it will remain small; however, there is some risk. Here is a nice article on the different recovery types --> https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/recovery-models-sql-server – Ross Bush Apr 12 '17 at 02:21
  • Thank you Ross. I had been looking into all that information before posting here. What I am after is more along the lines of recommendations/tips on best practices. – Tyler Apr 12 '17 at 03:07
  • The main question is: ***how critical*** is your data? What if the database server crashes and trashes its disk - can you (and your business) survive if all you have is the backup from last night, and potentially a whole day's work is lost? If **yes**, then you're fine with the `SIMPLE` recovery model. If you need the ability to restore to a point closer to the catastrophic failure - losing only a few minutes or hours of work - then you need to switch to the `FULL` recovery model and set up proper transaction log backup procedures to keep those tx logs from exploding in size (a backup sketch follows these comments) – marc_s Apr 12 '17 at 04:59
  • Thank you marc. Another question that comes up: is taking regular transaction log backups heavy on resources (CPU, memory, etc.)? I just don't want to cause problems during peak times. – Tyler Apr 13 '17 at 00:23
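Building on marc_s's comment, here is a minimal sketch of what the backup routine could look like under `FULL` recovery (the database name, paths, and schedule below are assumptions for illustration, not taken from the question). On the resource question: log backups mostly read the log file sequentially, so CPU and memory overhead is usually modest; the main cost is backup I/O, which is why writing backups to a separate volume is common practice.

```sql
-- Daily full backup: the starting point of the restore chain
BACKUP DATABASE [CustomerDB01]
TO DISK = N'D:\Backups\CustomerDB01_full_20170413.bak';

-- Transaction log backup, scheduled (e.g. every 15 minutes) via SQL Server Agent.
-- Each log backup truncates the inactive portion of the log, which is what
-- keeps the log file from growing without bound under FULL recovery.
BACKUP LOG [CustomerDB01]
TO DISK = N'D:\Backups\CustomerDB01_log_20170413_0915.trn';
```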

1 Answer


**Simple** - Work loss exposure: changes since the most recent backup are unprotected. In the event of a disaster, those changes must be redone.

**Full** - Work loss exposure: normally none. If the tail of the log is damaged, changes since the most recent log backup must be redone.

So the question to answer is: if you lose everything since your last backup, is that acceptable or not?
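To make that trade-off concrete, here is a sketch of the restore path that `FULL` recovery enables (all names, paths, and the `STOPAT` time are placeholders). Under `SIMPLE`, only the first statement is possible, so everything after the last full backup is lost; under `FULL`, the log backups can be rolled forward to a chosen point in time:

```sql
-- Restore the most recent full backup, leaving the database ready for log restores
RESTORE DATABASE [CustomerDB01]
FROM DISK = N'D:\Backups\CustomerDB01_full_20170413.bak'
WITH NORECOVERY, REPLACE;

-- Roll forward through the log backups, stopping just before the failure
RESTORE LOG [CustomerDB01]
FROM DISK = N'D:\Backups\CustomerDB01_log_20170413_0915.trn'
WITH STOPAT = '2017-04-13T09:12:00', RECOVERY;
```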

Datendenker