There are often times when you know for a fact that your loop will never run more than x times, where x is small enough to fit in a byte or a short, basically a data type smaller than an int.
Why do we use an int, which takes up 32 bits in most languages, when something like a byte, which is only 8 bits, would suffice?
I know we have 32-bit and 64-bit processors, so the value can be fetched in a single trip either way, but it still consumes more memory. Or what am I missing here?
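To make the question concrete, here is a minimal Java sketch (the class name, loop bound, and byte-vs-int contrast are just illustrative assumptions, not taken from any particular codebase):

```java
public class LoopCounterDemo {
    public static void main(String[] args) {
        // The counter never exceeds 100, so a byte (8 bits) can hold it.
        for (byte i = 0; i < 100; i++) {
            System.out.print(i + " ");
        }
        System.out.println();

        // Yet the conventional version declares the counter as an int (32 bits),
        // even though the extra 24 bits are never needed.
        for (int i = 0; i < 100; i++) {
            System.out.print(i + " ");
        }
        System.out.println();
    }
}
```

Both loops produce the same output; the only difference is the declared width of the counter.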
UPDATE: Just to clarify: I am aware that there is no difference speed-wise. I am asking about the impact on memory consumption.