6

I'm a little worried about the default retention period in SQL Server 2008 Change Tracking (which is 2 days).

Is it a good idea to set this period to e.g. 100 years and turn auto cleanup off, or will it bite me back in the future with excessive storage usage and/or performance degradation? Does anyone have experience in this matter?
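For reference, the change I have in mind is roughly the following (the database name is just a placeholder, and 36500 days is my stand-in for "100 years"):

-- MyDatabase is a placeholder; 36500 days approximates 100 years.
ALTER DATABASE MyDatabase
SET CHANGE_TRACKING (CHANGE_RETENTION = 36500 DAYS, AUTO_CLEANUP = OFF);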

Marek Stój

1 Answer

8

If you set auto cleanup off, it's best to periodically go through and remove the change tracking information yourself, by disabling and then re-enabling change tracking for each table. Otherwise, yes, the tracking data will continue to grow and grow.
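A rough sketch of that per-table purge, assuming a table named dbo.MyTable (keep whatever TRACK_COLUMNS_UPDATED setting you originally used):

-- dbo.MyTable is just an example name. Disabling change tracking discards the
-- accumulated tracking rows for that table; re-enabling starts tracking fresh,
-- so any subscribers will need to re-initialize from a full copy of the data.
ALTER TABLE dbo.MyTable DISABLE CHANGE_TRACKING;
ALTER TABLE dbo.MyTable ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = OFF);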

You can't query the underlying tables directly, but you can poke at their metadata. The following query shows relative row counts:

select 
  s.name as schema_name
, t.name as table_name
, (select sum(x.rows)
     from sys.partitions x
    where x.object_id = o.parent_object_id
      and x.index_id in (0, 1)        -- heap or clustered index only, to avoid double-counting
  ) as rows_in_base_table
, o.name as tracking_table
, p.rows as rows_in_tracking_table
from sys.objects o                                        -- the internal change tracking tables
join sys.tables t on o.parent_object_id = t.object_id     -- the base table each one tracks
join sys.schemas s on t.schema_id = s.schema_id
join sys.partitions p on o.object_id = p.object_id
                     and p.index_id in (0, 1)
where o.name like 'change[_]tracking%'
  and o.schema_id = schema_id('sys')
order by schema_name, table_name

Run that in your database, and you should get a rough sense of current overhead.

The change tracking tables all follow a standard schema. For example:

select 
  c.name, c.column_id
, type_name(c.user_type_id) as type_name
, c.max_length, c.precision, c.scale
, c.is_nullable, c.is_identity
from sys.columns c
where c.object_id = (
  -- any one of the internal tracking tables will do; apart from the key
  -- columns they all share the same layout
  select top 1 object_id from sys.objects o
  where o.name like 'change[_]tracking%'
    and o.schema_id = schema_id('sys')
  )
order by c.column_id

The k_% columns vary by table and correspond to the primary keys of the tracked table. You are looking at a base minimum overhead of 18 bytes + (primary key length) per row. That adds up!

For example, I'm tracking some skinny base tables that are only 15 bytes wide, with a 7-byte composite key. That makes the tracking tables 18+7=25 bytes wide!

Peter Radocchia
  • Even a 100-byte row would only get you 100 MB after a million rows, so I don't see your example being _too_ critical, although I guess that might be a lot if it were per tracked table. Please correct me if I'm wrong. – Dave Cousineau Jul 30 '11 at 20:05
  • @Sahuagin: in my case, assuming at most 10% of the table gets updated in a two day window, I incur a 0.1*(25/15) = 16.7% storage overhead for change tracking. No, it's not horrible--by comparison, any index would be at least (7/15) = 46.7%. Indefinite retention, though--that would cost 166%. Also, I don't know off the top of my head if change tables use data compression, but if they don't, and if my table *is* compressed, then the percentages get much larger. – Peter Radocchia Jul 31 '11 at 04:50