
I have to update a table's primary key identity column, which is also the clustered index key, from INT to BIGINT.

Background information:

  • The column is an identity column
  • The column is a primary key
  • The index is a clustered index
  • Table is partitioned
  • There are currently 111 partitions
  • This is the column that the table is partitioned on
  • Just under 1TB data space taken up
  • Just under 400GB index space taken up
  • Actual row count stands at just under 1.4B rows, but the current identity value is higher because some rows have been deleted (see the check query after this list)
  • Currently it is SQL Server 2008 R2 but plan to have it on SQL Server 2016 at the time of the change
  • There are over 40 foreign keys in other tables referencing this column
  • Obviously since it is partitioned it is Enterprise Edition
  • Rows are pretty wide with 81 other columns
  • I would like to minimize the downtime as much as possible
  • Values cannot change due to the 40+ foreign keys, but also because there are a lot of manually inserted values in other tables that do not have an FK relationship
  • I have some extra space I can work with; there is a lot, but with over 100TB of total files I have to be careful
  • I can do this in a maintenance window
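
As a quick illustration of that identity gap, here is a minimal check, assuming the production table name from the snippet below:

-- Compare the last identity value generated against the actual row count;
-- the identity value is higher here because of deleted rows.
SELECT IDENT_CURRENT('dbo.MyDataRecord') AS last_identity_value
SELECT COUNT_BIG(*) AS actual_row_count FROM [dbo].[MyDataRecord]
GO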

My current plan is to use the process outlined in https://dba.stackexchange.com/a/159251/44556 and covered in detail by Aaron Bertrand here: https://sqlperformance.com/2016/08/sql-indexes/widening-identity-column-4
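
For reference, a minimal sketch of the shape of that side-by-side approach from the linked posts. All new names here (MyDataRecord_New, ps_MyDataRecord_bigint) are hypothetical, the real script must recreate all 81 columns and the 40+ foreign keys, and the final swap happens in the maintenance window:

-- New table with a BIGINT key on a new BIGINT partition scheme (sketched
-- further down); hypothetical names throughout.
CREATE TABLE [dbo].[MyDataRecord_New]
(
    [MyDataRecord_ID] [bigint] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
    -- ... 81 other columns ...
    CONSTRAINT [PK_MyDataRecord_New_ID]
        PRIMARY KEY CLUSTERED ([MyDataRecord_ID] ASC)
) ON ps_MyDataRecord_bigint ([MyDataRecord_ID])
GO

-- Backfill in keyed batches, preserving the existing key values
SET IDENTITY_INSERT [dbo].[MyDataRecord_New] ON
-- INSERT INTO dbo.MyDataRecord_New (MyDataRecord_ID, ...) SELECT ... FROM dbo.MyDataRecord WHERE ...
SET IDENTITY_INSERT [dbo].[MyDataRecord_New] OFF
GO

-- Cutover: swap names inside a transaction during the maintenance window
BEGIN TRANSACTION
EXEC sp_rename 'dbo.MyDataRecord', 'MyDataRecord_Old'
EXEC sp_rename 'dbo.MyDataRecord_New', 'MyDataRecord'
COMMIT
GO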

Questions I have:

  1. Can I use the process outlined in the above link?
  2. Will this process need to differ given that the table is partitioned on that column?
  3. What else do I need to be aware of?
  4. Is there some alternate process I might consider that would be better? If so, why is it better?

There are a good number of posts regarding how to do the actual update process, but I did not note any that referenced partitioned tables.

Snippet of the table definition (artificially modified to protect the innocent). Note this is the SQL Server 2008 R2 version, prior to the upgrade to 2016:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO

CREATE TABLE [dbo].[MyDataRecord]
(
    [MyDataRecord_ID] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
    ... 81 other columns
    CONSTRAINT [PK_MyDataRecord_ID] 
        PRIMARY KEY CLUSTERED ([MyDataRecord_ID] ASC)
)
... other stuff like FK etc
Mark Schultheiss
  • You are asking if some random person has a better approach to converting an int identity to a bigint identity than Aaron Bertrand on his blog? You asked 5 questions, well really only 4. 1) yes, 2) probably not, 3) uuhhhhmmmm, 4) probably not. I have one question for you. Why do you not simply try this on a test system and find out? Surely you aren't just going to flip the switch in production right?? – Sean Lange May 16 '18 at 19:26
  • You'll need a new partition function and scheme for the side-by-side migration since the table is partitioned on the column you're changing (see the sketch after these comments). – Dan Guzman May 16 '18 at 19:28
  • @SeanLange - yes it will be tested, just looking for anything related to partitioned vs not primarily as that added complication to the process. – Mark Schultheiss May 16 '18 at 19:43
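
Following Dan Guzman's comment, a hedged sketch of what the new BIGINT partition objects might look like. The names and boundary values are hypothetical; the real boundaries would mirror the existing INT partition function (111 partitions means 110 boundary values), and RANGE RIGHT is an assumption to be matched against whatever the current function uses:

CREATE PARTITION FUNCTION pf_MyDataRecord_bigint (bigint)
AS RANGE RIGHT FOR VALUES (20000000, 40000000 /* ... remaining 108 boundaries ... */)
GO

CREATE PARTITION SCHEME ps_MyDataRecord_bigint
AS PARTITION pf_MyDataRecord_bigint
ALL TO ([PRIMARY]) -- or map each partition to its own filegroup, matching the existing scheme
GO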

1 Answer


This may not look like a solution, but rather an alternative method.

Usually the need to upgrade INT keys to BIGINT arises when the key values approach the max INT value (2,147,483,647). In your case you still have room for growth, and you can extend it at little cost by reseeding the keys to the minimum negative value, -2,147,483,648, so you'll have plenty of time until it reaches zero. And if you archive the data periodically, you may never need to change the column to BIGINT.

RESEED is a very quick and simple command, with no real disadvantages and no downtime. You just need to check whether you have any app logic based on comparisons of the keys, like >=, <=, ORDER BY, etc. Even in such cases it's quite easy to fix that logic.
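
A minimal sketch of the reseed, using the table name from the question. Note that with rows present, the next insert uses the reseed value plus the increment, so the first new key would be -2,147,483,647; and since the table is partitioned on this column, all negative values will fall into the first (lowest) partition unless the partition function is changed:

DBCC CHECKIDENT ('dbo.MyDataRecord', RESEED, -2147483648)
GO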

Anton