Reading about clustered columnstore indexes in SQL Server 2014, I'm wondering whether having a table with a huge number of columns is still an anti-pattern. Currently I use vertical partitioning to alleviate the problem of a single table with lots of columns, but with a clustered columnstore index available this shouldn't be necessary anymore. Is this correct, or am I missing something?
Example: let's take a log of performance counters; the raw data might have the following structure:
╔══════════════════╦═══════╦═══════╦═════╦═════╦═════╦══════════╗
║ Time             ║ Perf1 ║ Perf2 ║ ... ║ ... ║ ... ║ Perf1000 ║
╠══════════════════╬═══════╬═══════╬═════╬═════╬═════╬══════════╣
║ 2013-11-05 00:01 ║     1 ║     5 ║     ║     ║     ║        9 ║
║ 2013-11-05 00:01 ║     2 ║     9 ║     ║     ║     ║        9 ║
║ 2013-11-05 00:01 ║     3 ║     2 ║     ║     ║     ║        9 ║
║ 2013-11-05 00:01 ║     4 ║     3 ║     ║     ║     ║        9 ║
╚══════════════════╩═══════╩═══════╩═════╩═════╩═════╩══════════╝
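For concreteness, a row-store version of this log might be declared roughly as follows. The table name, column types, and everything else here are my own placeholders, with the middle counters elided:

    -- Hypothetical wide row-store table: ~1,000 fixed-width counter columns.
    -- Depending on the counter data types, a single row can approach or even
    -- hit the 8,060-byte in-row limit of a SQL Server data page.
    CREATE TABLE dbo.PerfLog
    (
        [Time]   datetime2(0) NOT NULL,
        Perf1    int          NULL,
        Perf2    int          NULL,
        -- ... Perf3 through Perf999 ...
        Perf1000 int          NULL
    );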
Having such a table with 1,000 columns is evil: a single row will most likely approach (or exceed) the size of a page, and even though a query typically cares about only a few of the measures, it still incurs the I/O cost of reading entire rows. Vertical partitioning usually helps here; for example, the performance counters could be split into separate tables by category (CPU, RAM, etc.), as sketched below.
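A vertical-partitioning layout along those lines might look like this (the table names, categories, and column names are illustrative only):

    -- One narrower table per counter category, all sharing the [Time] key.
    -- Queries hit only the category tables they need; cross-category
    -- reports join the pieces back together on [Time].
    CREATE TABLE dbo.PerfLog_CPU
    (
        [Time]    datetime2(0) NOT NULL,
        CpuPerf1  int NULL,
        CpuPerf2  int NULL
        -- ... remaining CPU counters ...
    );

    CREATE TABLE dbo.PerfLog_RAM
    (
        [Time]    datetime2(0) NOT NULL,
        RamPerf1  int NULL,
        RamPerf2  int NULL
        -- ... remaining memory counters ...
    );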
Conversely, storing the same data as a clustered columnstore index should not be such a problem: the data is stored column-wise, so the I/O for any given query covers only the columns it actually requests, regardless of the total number of columns in the table.
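As an illustration, in SQL Server 2014 the wide table from the first sketch could be converted like this (assuming dbo.PerfLog exists as above and has no other indexes, which the 2014 clustered columnstore requires):

    -- Turn the heap into a clustered columnstore index: every column is
    -- stored and compressed in its own segments.
    CREATE CLUSTERED COLUMNSTORE INDEX CCI_PerfLog
        ON dbo.PerfLog;

    -- A query like this should only read the segments for [Time] and Perf42,
    -- no matter how many other counter columns the table has.
    SELECT [Time], AVG(Perf42) AS AvgPerf42
    FROM dbo.PerfLog
    GROUP BY [Time];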