
I have an Azure SQL Database table with about 700k records and 9 columns. Most of those columns were initially set to NVARCHAR(MAX).

I then decided to run a SQL script to limit the column sizes, as I felt this should improve performance. I ran the following statements in Azure Data Studio (I'm on a Mac), one for each of the columns:

  ALTER TABLE [MyTable] ALTER COLUMN [ColumnInQuestion1] nvarchar(500) NULL;
  ALTER TABLE [MyTable] ALTER COLUMN [ColumnInQuestion2] nvarchar(500) NULL;
  etc etc...

These queries took around 20 minutes to complete. However, when I looked at my database afterwards, I noticed its size in Azure had increased by almost 30%, from around 1.5 GB to just over 1.9 GB.

Why did this happen? Does Azure/Data Studio keep some kind of backup of the table during query execution that it doesn't get rid of afterwards, or does limiting column length really increase the database size?


1 Answer


Moving from "MAX" to "500" column size changed how SQL allocates the data in your table rows. Pages (logical chunks of data) might be left over from how the rows were handled with the "MAX" columns.

Rebuild any index that includes the altered column(s).
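For example, a minimal sketch that rebuilds every index on the table (assuming the [MyTable] name from the question; you can also rebuild individual indexes by name):

  -- rebuild all indexes on the table, rewriting their pages compactly
  ALTER INDEX ALL ON [MyTable] REBUILD;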

Run the DBCC CLEANTABLE command on your table. https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-cleantable-transact-sql?view=sql-server-ver16
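A minimal sketch, again assuming [MyTable] from the question; passing 0 as the first argument targets the current database, and omitting the batch size processes the whole table in one transaction:

  -- reclaim space left behind by the variable-length column change
  DBCC CLEANTABLE (0, N'MyTable');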
