
Whenever I look through SQL table descriptions, I pretty much always see column sizes that are powers of two (e.g. 8, 16, 64, 256). I know that due to the nature of binary, and the fact that a byte is 8 bits, 2^n values are common in computing. But does it actually make any difference when defining a SQL table?

If I have a column in a table called "FULL_NAME", and I want to give some room for a fairly long name, I would do

FULL_NAME VARCHAR(50) NOT NULL,

in my create table statement. But I rarely see this in other people's tables. For this example, I would probably see

FULL_NAME VARCHAR(64) NOT NULL,

used instead.

Is there some actual advantage to using the next-highest 2^n value? It seems like that is pointlessly allocating more space to a column just for the sake of having a number that feels like a computer would like it better. I would imagine the column space allocation would be smart enough to not have to waste a bunch of space every time you don't enter a size of 2^n.
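To make the intuition concrete, here is a minimal Python sketch of how a MySQL/InnoDB-style VARCHAR is laid out: the content bytes plus a 1-byte length prefix when the declared maximum fits in 255 bytes, 2 bytes otherwise. This is a simplified model (it ignores per-row overhead and multi-byte charset expansion), and the helper name `varchar_storage_bytes` is just for illustration:

```python
def varchar_storage_bytes(value: str, declared_max: int) -> int:
    """Approximate bytes used to store `value` in a VARCHAR(declared_max)
    column: the content bytes plus a 1- or 2-byte length prefix."""
    content = len(value.encode("utf-8"))
    length_prefix = 1 if declared_max <= 255 else 2
    return content + length_prefix

# The declared maximum only caps the length; it does not pad storage.
name = "Ada Lovelace"
print(varchar_storage_bytes(name, 50))  # 13
print(varchar_storage_bytes(name, 64))  # 13 -- identical either way
```

Under this model, VARCHAR(50) and VARCHAR(64) store the same name in exactly the same number of bytes, which is what makes the power-of-two choice look cosmetic.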

jros
  • With varchar, no extra space would be wasted anyway; but yeah, as far as I know I would classify such behavior as superstitious optimization... **especially when it comes to VARCHAR**. – Uueerdo May 17 '16 at 16:34
  • For VARCHAR, it doesn't make any difference because the storage varies with the content. "50" means anything from 0–50 bytes plus 1–2 bytes of length prefix, and "64" means anything from 0–64 bytes plus the same prefix. On the other hand, CHAR is fixed: CHAR(64) uses 64 bytes regardless of the content. So I'd think if it's CHAR, a power of 2 may be better? If you care very much about the space, there is the MySQL internals manual - it explains the page and record levels http://dev.mysql.com/doc/internals/en/innodb.html. There are additional bytes used internally. – Tin May 17 '16 at 18:50
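The CHAR-vs-VARCHAR distinction in the comment above can be sketched the same way. Again this is a simplified model assuming a single-byte charset and ignoring InnoDB's per-row overhead; both helper names are illustrative:

```python
def char_storage_bytes(value: str, declared_len: int) -> int:
    """CHAR(n) is fixed-width: n bytes regardless of the content
    (assuming a single-byte character set for simplicity)."""
    return declared_len

def varchar_storage_bytes(value: str, declared_max: int) -> int:
    """VARCHAR(n): content bytes plus a 1- or 2-byte length prefix."""
    content = len(value.encode("utf-8"))
    return content + (1 if declared_max <= 255 else 2)

name = "Ada Lovelace"                    # 12 bytes of content
print(char_storage_bytes(name, 64))      # 64 -- padded to full width
print(varchar_storage_bytes(name, 64))   # 13 -- only what's needed
```

So the padding cost the comment warns about only arises for CHAR; for VARCHAR the declared maximum never inflates stored rows.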

0 Answers