
Question: In SQL Server 2012, what is the best way to reclaim as much reserved space as possible while causing as little fragmentation as possible?

Background:

Our SQL Server is running low on disk space, and as part of a HW+SW upgrade we are going to move the data files to a different server. For this reason we want to reduce the size of the data files (to avoid unnecessarily moving 'reserved space' - we are talking about terabytes). I also want to do this partition by partition, so it can run overnight and limit production impact.

One approach I tried (per partition, on the heaviest table, which has a single index):

ALTER TABLE <Tbl>
REBUILD PARTITION = <PartitionId> WITH (DATA_COMPRESSION =  PAGE) 
GO

--I know this is bad practice - but I need to reclaim space to speed up moving
DBCC SHRINKFILE(N'<LogicalFileName>', 10)  -- target size in MB; SHRINKFILE takes the size positionally
GO

-- This is to mitigate the impact of shrinkfile
ALTER TABLE <Tbl>
REBUILD PARTITION = <PartitionId>
GO


--Run this in the end (and I run also between the individual tasks) to see impact on index fragmentation
SELECT * FROM sys.dm_db_index_physical_stats  
    (DB_ID(<DbName>), OBJECT_ID(<TblName>), <IndexId>, <PartitionId>, 'SAMPLED');  
GO

In the test environment this yielded great results for some partitions (0% fragmentation and near-0% 'wasted' reserved space - wasted in the sense that the next stage is moving the data over the wire). But I have one partition where SHRINKFILE reduces the size significantly yet causes 99.99% fragmentation; the REBUILD fixes the fragmentation, but doubles the filegroup size (half of it reserved space) - which is probably expected, as a rebuild creates the index from scratch. If I shrink afterwards I can reclaim the space, but again end up with heavy fragmentation. And this can go in circles.
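To quantify how much reserved space each step actually frees, a query like this sketch (object names are placeholders) shows reserved vs. used pages per partition:

```sql
-- Sketch: compare reserved and used space per partition before/after each step.
-- <TblName> is a placeholder for the actual table name.
SELECT partition_number,
       index_id,
       reserved_page_count * 8 / 1024 AS reserved_mb,  -- pages are 8 KB
       used_page_count     * 8 / 1024 AS used_mb,
       row_count
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID(N'<TblName>')
ORDER BY partition_number, index_id;
```

The gap between reserved_mb and used_mb is the space that would be moved over the wire for nothing.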

I'm now trying to run REORGANIZE on the shrunk filegroup:

ALTER INDEX <IdxName> on <Tbl> REORGANIZE PARTITION = <PartitionId> 

This should hopefully fix the index fragmentation without growing the data file. However:

  • is it a good idea to run REORGANIZE on a 99.99% fragmented index?
  • will the result be comparable, inferior, or superior to running a rebuild?

Another option I'm considering is to rebuild the partition to brand new filegroup, but this would require manipulating partition schema - and I want to keep the process as simple as possible.

Jan

3 Answers


What about backing up the database using compression and restoring it to the new server? Backups do not include unused space.
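As a sketch (all names and paths are placeholders): the backup file itself skips unallocated extents, so it is what travels over the wire; note that RESTORE recreates the data files at their recorded size on the target.

```sql
-- Sketch: compressed backup keeps the transfer file small.
BACKUP DATABASE <DbName>
    TO DISK = N'D:\Backups\<DbName>.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 5;

-- On the new server; MOVE relocates the files to the new drives.
RESTORE DATABASE <DbName>
    FROM DISK = N'D:\Backups\<DbName>.bak'
    WITH MOVE N'<LogicalDataFile>' TO N'E:\Data\<DbName>.mdf',
         MOVE N'<LogicalLogFile>'  TO N'F:\Log\<DbName>_log.ldf',
         STATS = 5;
```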

Sean Pearce
  • This is a very interesting tip I didn't know about! I'll wait to see if there are other suggestions. The reason is that I'd prefer a solution that can prepare the data for transfer with minimal additional storage cost (for various reasons we let the storage fill to 14T out of 16T, and it's in a remote location where we currently have no additional storage - so I'm trying to prepare as much as possible upfront, remotely, to minimize the in-person maintenance time; originally we planned to 'compact' the DB in place and copy it via detach-attach) – Jan Jun 21 '16 at 11:48

This is not the best answer, but it is what I did, as it best addressed my specific circumstances - especially the lack of any additional free space (so I couldn't even use the backup-restore approach) and the ability to perform the operation in smaller (overnight) batches.

I wanted to post it in case someone would find it helpful.

However - it's definitely better to make sure you always have at least as much free space as your DB currently occupies; then you can use a more appropriate solution, e.g. the compressed-backup suggestion that I marked as the answer.

-- This is just so that nothing interacts with the table during the entire process.
-- The reorganize is done online, but I want the process to finish as fast as possible,
--  and the app logic is resilient to not seeing the table for a while.
exec sp_rename  <tbl_orig>, <tbl>
GO

print 'starting compressing: ' + CAST(GETDATE() AS NVARCHAR)
GO

-- this is to enable compression on the partition
ALTER TABLE <tbl> 
REBUILD PARTITION = <PartitionId> WITH (DATA_COMPRESSION =  PAGE) 
GO

print 'Compressing done: ' + CAST(GETDATE() AS NVARCHAR)
GO

-- reclaiming all free space; potentially very bad fragmentation can result
DBCC SHRINKFILE(N'<DataFile>', 10)  -- target size in MB; SHRINKFILE takes the size positionally
GO

print 'shrinking done: ' + CAST(GETDATE() AS NVARCHAR)
GO

-- solve the fragmentation without giving up on any reclaimed space. Rebuild would use some additional space.
-- This assumes that my partitions are stored in dedicated filegroups (which is always a good idea)
ALTER INDEX <IdxName> on <tbl> REORGANIZE PARTITION = <PartitionId>
GO

print 'index reorganizing done: ' + CAST(GETDATE() AS NVARCHAR)
GO

-- see the stats
SELECT * FROM sys.dm_db_index_physical_stats  
    (DB_ID(<DBName>), OBJECT_ID(<Tbl>), 1, <PartitionId> , 'SAMPLED');  
GO

print 'DONE: ' + CAST(GETDATE() AS NVARCHAR)
GO

-- show the table back to app logic
-- note: this renames back in the opposite direction of the first rename
exec sp_rename  <tbl>, <tbl_orig>
GO
Jan

You can rebuild the partition over to a new filegroup. This will result in perfectly contiguous pages and a perfectly filled file. This is in general a very nice way of defragmenting. You can automate this.
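A sketch of the idea, assuming placeholder names throughout; note that for a partitioned table, rebuilding with DROP_EXISTING directly onto a filegroup un-partitions the index, so a true per-partition move would instead need a new partition scheme:

```sql
-- Sketch: create a fresh filegroup and rebuild the clustered index into it.
ALTER DATABASE <DbName> ADD FILEGROUP FG_Defrag;

ALTER DATABASE <DbName> ADD FILE
    (NAME = N'Defrag1', FILENAME = N'E:\Data\Defrag1.ndf', SIZE = 10GB)
    TO FILEGROUP FG_Defrag;

-- Rebuild into the empty filegroup: pages are written contiguously,
-- since the allocator has no pre-existing holes to scatter them into.
CREATE UNIQUE CLUSTERED INDEX <IdxName>
    ON <Tbl> (<KeyColumn>)
    WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = ON)
    ON FG_Defrag;
```

Afterwards the old filegroup can be emptied and removed, reclaiming its space entirely.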

Defragmenting by rebuilding in place has some issues, as you found out. You need a lot of temporary space, and your newly allocated b-tree will be squished into lots of free-space holes by SQL Server's allocation algorithm. The allocation algorithm is not smart: it will not try to find big holes, and it is happy spreading the new tree over tiny holes if they exist. That's why you can end up fragmented directly after rebuilding. (Interestingly, NTFS has the same issue. If you just write a 100GB file sequentially, it might end up extremely fragmented.)

I believe this issue is not widely understood in the SQL Server community.

usr