
The code below results in an error when a table [mytable] already exists and has a columnstore index, even though the table will have been dropped and recreated without a columnstore index before page compression is applied to it.

drop table if exists [mytable];

select top 0
    *
into [mytable]
from [myexternaltable];

alter table [mytable] rebuild partition = all with (data_compression = page);

The error thrown:

This is not a valid data compression setting for a columnstore index. Please choose COLUMNSTORE or COLUMNSTORE_ARCHIVE compression.

At this point [mytable] has not been dropped, so SQL Server has apparently not started executing any of the code.

The code runs just fine when I run the drop table statement first and the rest of the code afterwards, as separate batches. SQL Server seemingly stops with an error prematurely when, at the start of a batch, it detects an inconsistency with an existing table, even one that will not necessarily persist once the batch runs. Yet it is perfectly happy when table [mytable] does not exist at all, although a non-existent table can hardly be considered consistent with applying compression to it. SQL Server's consistency checking does not look particularly consistent itself.
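
For illustration, the following two-batch version runs without error (assuming an SSMS or sqlcmd session; go is a client-side batch separator and is not available inside a stored procedure):

drop table if exists [mytable];
go

select top 0
    *
into [mytable]
from [myexternaltable];

alter table [mytable] rebuild partition = all with (data_compression = page);

The second batch is compiled only after [mytable] has been dropped, so the non-existent table falls under deferred name resolution and the alter statement compiles without complaint.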

I recall having had similar issues when referencing columns that did not exist yet but were to be created by the code itself, if only SQL Server would allow the code to run instead of terminating on a wrongly predicted error.
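
A minimal sketch of that situation, assuming an existing table [mytable] that does not yet have a column [newcol] (both names hypothetical):

alter table [mytable] add [newcol] int;
update [mytable] set [newcol] = 0;

This batch fails at compile time with "Invalid column name 'newcol'", even though the alter statement would have added the column before the update runs.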

What would be the most straightforward solution to this issue? I would not mind suppressing the error altogether, if that is possible, since it is obviously wrong.
I am trying to avoid workarounds such as running the code as two separate batches, putting part of the code in an EXEC statement, or catching the error in a TRY...CATCH block. The code is used in hundreds of stored procedures, so the simpler the solution the better.
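
For reference, the exec workaround I am trying to avoid would look roughly like this; wrapping the alter statement in exec defers its compilation until the table has already been recreated without the columnstore index:

drop table if exists [mytable];

select top 0
    *
into [mytable]
from [myexternaltable];

-- the string is compiled only when exec runs, after [mytable] has been recreated
exec (N'alter table [mytable] rebuild partition = all with (data_compression = page);');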

    looks like you might have an XY problem: why are you dropping and recreating the table(s)? "The code is being used in hundreds of stored procedures" - that smells funny – Mitch Wheat Nov 16 '22 at 13:06
  • Looks like a compilation error too, so the entire batch isn't being run. – Thom A Nov 16 '22 at 13:10
  • I agree with the XY problem. Does your production code have a semi-colon `;` at the end of each statement? (your example code is missing statement terminators). Perhaps try `CREATE TABLE` before doing the `INSERT` and `ALTER`. The `INSERT INTO` method is the lazy-man's way of creating a table and not caring what columns are coming from the other table. But, sometimes we have to bite the bullet and start caring. – Tim Jarosz Nov 16 '22 at 15:40
  • @MitchWheat: The use case may be atypical, but I have run into the issue on a number of occasions and I wonder if there is an elegant solution to it. In this example every procedure fully reloads an external table into a local table and is used only sporadically. The local tables are normally loaded incrementally on a daily basis. The procedures I am mentioning are used whenever a full reload is needed. In this case because the local tables had been tampered with. – shortski Nov 16 '22 at 18:23
  • @TimJarosz: the purpose of the procedure is to produce an exact local copy of an external table, whatever the columns in the external table. A `SELECT INTO` may be a lazy solution, but in this case it is exactly what needs to be done, without me having to construct a table creation statement based on the metadata of the external table. Anyway, I doubt SQL Server has a problem with laziness :-). The error seems to be due to a built-in consistency check that can be useful but gets it wrong at times. – shortski Nov 16 '22 at 18:48
  • have you considered TRUNCATE ? – Mitch Wheat Nov 16 '22 at 23:57
  • I did. The procedure however needs to recreate the table since the structure of an external table can change. – shortski Nov 17 '22 at 08:25
