
We have a database with multiple filegroups, and one of the filegroups is huge: its mdf file is about 20 TB, and it reports around 15 TB of unused space. We want to shrink that file, but the shrink command below would take forever:

USE UserDB;
GO
DBCC SHRINKFILE (DataFile1, SizeInMB);
GO

I would like to know how we can shrink the 20 TB file in smaller parts of, say, 500 GB each. If someone can help me with T-SQL code, or explain how to do the same, that would be great. Thank you.
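For readers looking for a concrete starting point, here is a minimal T-SQL sketch of a chunked shrink loop. It assumes the logical file name `DataFile1` and a hypothetical final target of 5 TB; adjust the names, target, and step size for your environment, and test on a non-production copy first:

```sql
USE UserDB;
GO

DECLARE @TargetSizeMB  int = 5 * 1024 * 1024;   -- assumed final size: ~5 TB
DECLARE @StepMB        int = 500 * 1024;        -- shrink 500 GB per pass
DECLARE @CurrentSizeMB int;
DECLARE @NextMB        int;
DECLARE @NewSizeMB     int;

-- current size of the data file, in MB (sys.database_files.size is in 8 KB pages)
SELECT @CurrentSizeMB = size / 128
FROM sys.database_files
WHERE name = N'DataFile1';

WHILE @CurrentSizeMB > @TargetSizeMB
BEGIN
    -- next intermediate target, never below the final target
    SET @NextMB = CASE WHEN @CurrentSizeMB - @StepMB > @TargetSizeMB
                       THEN @CurrentSizeMB - @StepMB
                       ELSE @TargetSizeMB
                  END;

    DBCC SHRINKFILE (DataFile1, @NextMB);

    -- re-read the actual size in case the shrink fell short of the target
    SELECT @NewSizeMB = size / 128
    FROM sys.database_files
    WHERE name = N'DataFile1';

    -- stop if no progress was made, to avoid spinning forever
    IF @NewSizeMB >= @CurrentSizeMB
        BREAK;

    SET @CurrentSizeMB = @NewSizeMB;
END;
```

Each `DBCC SHRINKFILE` pass is an independent, interruptible operation, so you can stop the loop at any point (e.g. when a maintenance window closes) and resume later without losing progress.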

prasad540
  • Normally, you wouldn't shrink a DB, but I had a case similar to yours where a bug in the code blew up the DB size way beyond what it would ever be under normal circumstances. What is your typical usage? Do you have times of relatively low usage during off hours? The answers to these questions will determine the best approach. – Robert Sievers Jul 24 '20 at 15:09
  • I've had better luck not trying to remove 15 TB all at once but rather in small chunks. Shrink it to 19.5 TB, then 19 TB, then 18.5, ... Another approach would be to create a new, say, 10 TB filegroup, and rebuild the table(s) on that new filegroup. You don't want to make just a 5 TB filegroup because surely that will need to grow _some_ amount later. – Aaron Bertrand Jul 24 '20 at 15:22
  • 1
    Also see [this recent Brent Ozar post](https://www.brentozar.com/archive/2020/07/what-if-you-really-do-need-to-shrink-a-database/). – Aaron Bertrand Jul 24 '20 at 15:30
  • @RobertSievers: Thank you for responding. Normally we do not have any low-usage hours on this server; it is client-facing. A couple of hours at night is probably when we can run them. – prasad540 Jul 24 '20 at 16:25
  • @RobertSievers Initially we will test this on a test server to analyse how long it takes and how much the file shrinks, so that we can do the same on prod. We tried shrinking the other, smaller filegroups in the test environment using the command above; shrinking a 5 TB file took us around 4-5 days, hence we want to know how to work through the 20 TB file in small parts, because the same command would take nearly a month. Doing this on prod is not workable with all the blocking and locking. – prasad540 Jul 24 '20 at 16:25
  • @AaronBertrand: Thank you for your response. If I wanted to shrink the file like you said (shrink it to 19.5 TB, then 19 TB, then 18.5, ...), how would I write the T-SQL for this process? – prasad540 Jul 24 '20 at 16:28
  • How did you write the T-SQL for doing it once? – Aaron Bertrand Jul 24 '20 at 16:31
  • USE MDX; GO DBCC SHRINKFILE (MDXfile-02, 0); GO – prasad540 Jul 24 '20 at 17:53
  • @prasad540 I almost hate to say this publicly because it will make me look silly. We had a 600 GB DB to shrink to 200 GB. It took 12 hours. So I shrank it 1 GB at a time, which took 10-30 seconds per step. If you multiply it out, it's significantly less time. If you test out manually what size is large enough to shrink at once versus how many shrink operations you need, you can find the sweet spot. If the chunks are big enough, you can just do it manually. If not, hold a value in a table somewhere, and run an Agent job every night to knock the DB size down by the set amount. – Robert Sievers Jul 24 '20 at 20:47
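The table-driven nightly approach described in the last comment could be sketched as follows. The table `dbo.ShrinkProgress`, the file name `DataFile1`, and the sizes are all hypothetical placeholders; the second batch would be the body of a nightly SQL Server Agent job step:

```sql
-- one-time setup: record the final target and per-night step for each file
CREATE TABLE dbo.ShrinkProgress
(
    LogicalFileName sysname NOT NULL PRIMARY KEY,
    TargetSizeMB    int     NOT NULL,
    StepMB          int     NOT NULL
);

INSERT dbo.ShrinkProgress (LogicalFileName, TargetSizeMB, StepMB)
VALUES (N'DataFile1', 5 * 1024 * 1024, 500 * 1024);  -- 5 TB target, 500 GB/night
GO

-- nightly Agent job step: take one bite out of the file, then stop
DECLARE @CurrentSizeMB int, @TargetSizeMB int, @StepMB int, @NextMB int;

SELECT @TargetSizeMB = TargetSizeMB,
       @StepMB       = StepMB
FROM dbo.ShrinkProgress
WHERE LogicalFileName = N'DataFile1';

SELECT @CurrentSizeMB = size / 128   -- size is stored in 8 KB pages
FROM sys.database_files
WHERE name = N'DataFile1';

IF @CurrentSizeMB > @TargetSizeMB
BEGIN
    SET @NextMB = CASE WHEN @CurrentSizeMB - @StepMB > @TargetSizeMB
                       THEN @CurrentSizeMB - @StepMB
                       ELSE @TargetSizeMB
                  END;

    DBCC SHRINKFILE (DataFile1, @NextMB);
END;
GO
```

Because the job only ever takes one step per run, each night's impact is bounded, and the job naturally becomes a no-op once the target size is reached.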

0 Answers