
I recently removed table data from a batch of production databases, and I've been shrinking the database files using the 'Shrink File' feature in SSMS. Some of the databases will shrink by more than 50 GB. I noticed that this process takes a very long time to complete and that there is no cancel button while it is running. I am connecting to all the database servers remotely via RDP and cannot guarantee that they won't be power-cycled during the shrink process.

If the shrink process is running and the server is power-cycled, will I experience database corruption?

user1288850

1 Answer


I think it entirely depends on what pages were in motion at the time of the power outage. You might come out okay, but I certainly wouldn't want to roll the dice here if it were my data.

Instead of just running a single shrink operation (and especially instead of using the UI for this), you can use a DBCC SHRINKFILE command to shrink the file a small amount at a time. Since this option allows you to set a target size, you can repeatedly run the command in short bursts.
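A minimal sketch of that incremental approach, assuming a hypothetical database named MyDatabase with a logical data file named MyDatabase_Data (substitute your own names):

    USE MyDatabase;  -- hypothetical database name

    DECLARE @FinalTargetMB int = 50 * 1024,  -- final size you want, e.g. 50 GB
            @StepMB        int = 1024,       -- shrink 1 GB at a time
            @NextTargetMB  int;

    -- size in sys.database_files is reported in 8 KB pages; /128 converts to MB
    SELECT @NextTargetMB = size / 128
    FROM sys.database_files
    WHERE name = N'MyDatabase_Data';  -- hypothetical logical file name

    WHILE @NextTargetMB > @FinalTargetMB
    BEGIN
        -- step down toward the final target without overshooting it
        SET @NextTargetMB = CASE
            WHEN @NextTargetMB - @StepMB < @FinalTargetMB THEN @FinalTargetMB
            ELSE @NextTargetMB - @StepMB
        END;

        -- each burst is short, so an unexpected reboot loses very little work
        DBCC SHRINKFILE (N'MyDatabase_Data', @NextTargetMB);
    END;

You can tune the step size to control how long each burst runs; the smaller the step, the smaller the window in which a power cycle could interrupt an in-flight page move.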

While this reduces your exposure to corruption, it can cause other harm, e.g. increased fragmentation. You may get better mileage out of rebuilding your indexes first and then running DBCC SHRINKFILE with TRUNCATEONLY. This clears out the "end" of the file after the rebuilds have presumably moved all of the data together toward the beginning of the file.
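A sketch of that rebuild-then-truncate sequence, again with hypothetical database, file, and table names:

    USE MyDatabase;  -- hypothetical database name

    -- Rebuild the indexes so their pages are packed together toward the
    -- front of the file (repeat for each large table)
    ALTER INDEX ALL ON dbo.BigTable REBUILD;

    -- TRUNCATEONLY releases only the unused space at the end of the file;
    -- no pages are moved, so it introduces no new fragmentation
    DBCC SHRINKFILE (N'MyDatabase_Data', TRUNCATEONLY);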

An even better solution might be to add a second data file in its own filegroup and rebuild all of your indexes onto the new filegroup; then shrinking the primary file will reduce it to the size of just the system/metadata objects that live there.
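One way that could look, as a sketch (the filegroup, file path, table, index, and key column names are all hypothetical):

    USE master;

    -- Create the new filegroup and give it its own data file
    ALTER DATABASE MyDatabase ADD FILEGROUP SecondaryFG;

    ALTER DATABASE MyDatabase ADD FILE
    (
        NAME = N'MyDatabase_Data2',
        FILENAME = N'D:\SQLData\MyDatabase_Data2.ndf',
        SIZE = 10GB,
        FILEGROWTH = 1GB
    ) TO FILEGROUP SecondaryFG;
    GO

    USE MyDatabase;
    GO

    -- Rebuild each clustered index onto the new filegroup; the table's data
    -- moves with it. DROP_EXISTING keeps the index definition intact.
    -- Repeat for each table (and for nonclustered indexes).
    CREATE UNIQUE CLUSTERED INDEX PK_BigTable
        ON dbo.BigTable (BigTableID)
        WITH (DROP_EXISTING = ON)
        ON SecondaryFG;

    -- With the data gone, the primary file can shrink down to roughly the
    -- system/metadata objects that must remain in PRIMARY
    DBCC SHRINKFILE (N'MyDatabase_Data', 1024);

This also gives you a natural stopping point between each rebuild, so a power cycle between steps leaves you in a clean, recoverable state.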

Aaron Bertrand