I have a table with 100M+ rows, and query performance is getting very bad as the data grows. I noticed that the row size is quite large (10190 bytes), and I suspect this is hurting index size and performance.
The table has a number of columns declared with wider data types than necessary (lots of ints where tinyints would do, etc.), so I went through and changed what I could.
That brought the row size from 10190 down to 10090 bytes just by narrowing ints to smallint or tinyint.
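The type changes themselves were plain ALTERs along these lines (the table and column names here are placeholders, not my real schema):

    -- narrowing an int column; SQL Server has to check that every
    -- existing value fits in the smaller type, so this scans the table
    ALTER TABLE dbo.BigTable ALTER COLUMN StatusCode tinyint NOT NULL;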
There are also two columns declared as varchar(2048). I changed those to varchar(max), and the reported row size dropped to about 6000 bytes.
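The varchar changes were the same kind of statement (again, placeholder names). I have not run the sp_tableoption part, but as I understand it varchar(max) values still live in-row when they fit, and that option is what forces them off-row:

    ALTER TABLE dbo.BigTable ALTER COLUMN LongText1 varchar(max) NULL;
    ALTER TABLE dbo.BigTable ALTER COLUMN LongText2 varchar(max) NULL;

    -- optional: push max-type values off-row even when they would fit in-row
    -- (existing values only move when they are next updated)
    EXEC sp_tableoption 'dbo.BigTable', 'large value types out of row', 1;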
I got the row-size numbers using the queries found here: http://www.sqlservercentral.com/Forums/Topic226699-5-1.aspx
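That is not the exact script I ran, but the gist of what I was measuring is below. One thing I noticed is that varchar(max) columns report max_length = -1 in sys.columns, so a naive declared-size sum stops counting them (10090 - 2 x 2048 is roughly 6000, which may be all the "drop" I saw):

    -- declared max in-row size from the catalog
    SELECT SUM(CASE WHEN max_length = -1 THEN 0 ELSE max_length END) AS declared_bytes
    FROM sys.columns
    WHERE object_id = OBJECT_ID('dbo.BigTable');

    -- actual record sizes on disk (SAMPLED or DETAILED mode populates these columns)
    SELECT index_id, avg_record_size_in_bytes, max_record_size_in_bytes
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigTable'), NULL, NULL, 'SAMPLED');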
My questions: does changing the varchar(2048) columns to varchar(max) actually help indexing/performance when those columns are rarely used? And is there a real benefit to getting the row size below 8000 bytes, i.e. under the 8060-byte in-row page limit?