
I wonder if something like the following is possible to achieve:

ALTER TABLE dbo.[Foo] 
    ALTER COLUMN * NVARCHAR(500) 
    WHERE Columns NVARCHAR(n) < NVARCHAR(500);
  • In brief: no. `ALTER TABLE... ALTER COLUMN` needs to be applied **for each column** individually - no way around this, I'm afraid – marc_s Jan 24 '22 at 15:32
  • It can only be done by building a dynamic SQL query and running that with EXECUTE. – Peter B Jan 24 '22 at 15:32
  • The fact that you want to change all the columns to be 500 characters in length, however, sounds like an [XY Problem](//xyproblem.info). That'll put all of your columns over the 900-byte index key size limit. – Thom A Jan 24 '22 at 15:36
  • Not only that, but it's highly unlikely you suddenly have an actual requirement to cater for data up to 500 characters in all these columns; it sounds more like a quick cop-out. Bear in mind that many parts of query execution make educated guesses about the expected data throughput from the size of the columns, which drives memory grants and can rob your server of precious memory where it's not actually required. – Stu Jan 24 '22 at 15:43
  • @Larnu I'm afraid your XY problem assumption is absolutely right. Currently, I am stuck at a data ingestion issue in which one of the input columns exceeds the limit of the output one. However, the error does not tell me which column. This is the issue I am referring to: https://stackoverflow.com/questions/69895530/dataflow-error-data-source-cannot-be-converted – basquiatraphaeu Jan 24 '22 at 16:05
  • Considering that in that question neither of the tables has an `nvarchar`, let alone one that is 255 characters long, this strongly suggests the problem is your pipeline, @basquiat . Like you confirmed, this is most certainly an XY Problem (note, I am not familiar with Azure Pipelines, so I cannot answer on that). – Thom A Jan 24 '22 at 16:09

1 Answer


You can use INFORMATION_SCHEMA.COLUMNS to build a dynamic SQL query that generates the ALTER statements. You will still need to copy the results and run them manually.

select 'alter table '+TABLE_SCHEMA+'.'+TABLE_NAME+' alter column '+COLUMN_NAME+' NVARCHAR(500)' 
from INFORMATION_SCHEMA.columns 
where DATA_TYPE = 'nvarchar' 
and CHARACTER_MAXIMUM_LENGTH < 500 
and table_name = 'foo' 
and TABLE_SCHEMA = 'dbo'
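
Against a hypothetical dbo.Foo with, say, an nvarchar(255) column named Bar (the column name here is just for illustration), the query above emits one row per matching column, such as:

alter table dbo.Foo alter column Bar NVARCHAR(500)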

Edit:

Use the code below to also emit each column's existing NULL/NOT NULL setting:

select 'alter table ['+TABLE_SCHEMA+'].['+TABLE_NAME+'] alter column ['+COLUMN_NAME+'] NVARCHAR(500) '
    + case when IS_NULLABLE = 'YES' then 'NULL' else 'NOT NULL' end
from INFORMATION_SCHEMA.columns 
where DATA_TYPE = 'nvarchar' 
and CHARACTER_MAXIMUM_LENGTH < 500 
and table_name = 'foo' 
and TABLE_SCHEMA = 'dbo'
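
Rather than copying the output by hand, you can also follow Peter B's suggestion above and let SQL Server execute the generated statements directly. Here is a minimal sketch of that approach (it additionally uses QUOTENAME to delimit identifiers, as suggested in the comments; inspect @sql with PRINT before running it for real):

-- concatenate every generated ALTER statement into one batch
declare @sql nvarchar(max) = N'';

select @sql = @sql
    + N'alter table ' + QUOTENAME(TABLE_SCHEMA) + N'.' + QUOTENAME(TABLE_NAME)
    + N' alter column ' + QUOTENAME(COLUMN_NAME) + N' NVARCHAR(500) '
    + case when IS_NULLABLE = 'YES' then N'NULL' else N'NOT NULL' end
    + N'; '
from INFORMATION_SCHEMA.columns
where DATA_TYPE = 'nvarchar'
and CHARACTER_MAXIMUM_LENGTH < 500
and table_name = 'foo'
and TABLE_SCHEMA = 'dbo';

-- PRINT @sql;  -- review the batch first
exec sys.sp_executesql @sql;  -- run all the ALTERs in one go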
  • Also need to add `NULL` or `NOT NULL` to avoid changing the existing column nullability. – Dan Guzman Jan 24 '22 at 15:45
  • Some `QUOTENAME`s wouldn't go amiss either, in the event that the OP has column names that need to be delimited identifiers. – Thom A Jan 24 '22 at 15:50
  • And consider constraints and indexes that reference the column will need to be dropped beforehand and recreated afterwards. – Dan Guzman Jan 24 '22 at 15:51
  • @DanGuzman That *shouldn't* be needed, as the data type is staying the same and not getting smaller. Though, as I mention in my original comment, 500 `nvarchar` characters is above the 900-byte limit for an index. [db<>fiddle](https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=24ebb538fbe376320cd8bc2d07d5ded9) – Thom A Jan 24 '22 at 15:54
  • Modified the code to accommodate Dan Guzman's comment. – Moulitharan M Jan 24 '22 at 15:56
  • And you shouldn't use `INFORMATION_SCHEMA` anyway; it's for compatibility. Instead use `sys.columns`. – Charlieface Jan 24 '22 at 18:43
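
Following Charlieface's suggestion, a sketch of the same generator written against the catalog views instead of INFORMATION_SCHEMA might look like this. Note that sys.columns.max_length is measured in bytes, so nvarchar(500) is 1000 bytes, and nvarchar(max) is stored as -1:

select 'alter table ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
    + ' alter column ' + QUOTENAME(c.name) + ' NVARCHAR(500) '
    + case when c.is_nullable = 1 then 'NULL' else 'NOT NULL' end
from sys.columns c
join sys.tables t on t.object_id = c.object_id
join sys.schemas s on s.schema_id = t.schema_id
join sys.types ty on ty.user_type_id = c.user_type_id
where ty.name = 'nvarchar'
and c.max_length > 0      -- skip nvarchar(max), stored as -1
and c.max_length < 1000   -- bytes, not characters: 500 chars = 1,000 bytes
and t.name = 'foo'
and s.name = 'dbo'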