Recently, during a refactoring session, I was looking over some code I wrote and noticed several things:
- I had functions that used `unsigned char` to enforce values in the interval [0, 255].
- Other functions used `int` or `long` data types with `if` statements inside the functions to silently clamp the values to valid ranges.
- Values contained in classes and/or declared as arguments to functions that had an unknown upper bound but a known, definite non-negative lower bound were declared as an `unsigned` data type (`int` or `long`, depending on whether the upper bound could go above 4,000,000,000). All three patterns are sketched below.
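Roughly, the three patterns looked like this; the names (`setBrightness`, `clampPercent`, `Histogram`) are invented for illustration, not taken from the actual code:

```cpp
// Pattern 1: the unsigned char type itself restricts the value to [0, 255];
// an out-of-range argument wraps modulo 256 instead of being rejected.
unsigned char gBrightness = 0;

void setBrightness(unsigned char level) {
    gBrightness = level; // e.g. a caller passing 300 silently stores 44
}

// Pattern 2: a wider signed type with if statements that silently clamp.
int clampPercent(int value) {
    if (value < 0)   return 0;
    if (value > 100) return 100;
    return value;
}

// Pattern 3: known non-negative lower bound, unknown upper bound,
// so an unsigned type; unsigned long here because the count might
// exceed 4,000,000,000.
class Histogram {
public:
    explicit Histogram(unsigned long sampleCount) : sampleCount_(sampleCount) {}
private:
    unsigned long sampleCount_;
};
```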
The inconsistency is unnerving. Is this a good practice that I should continue? Should I rethink the logic and stick to using `int` or `long` with appropriate non-notifying clamping?
A note on the use of "appropriate": there are cases where I use `signed` data types and throw notifying exceptions when the values go out of range, but these are reserved for divide by zero and constructors.
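For example, a minimal sketch of that exception-throwing case, using an invented `Fraction` class:

```cpp
#include <stdexcept>

// A signed type combined with a notifying exception: the constructor
// validates its arguments and throws rather than silently clamping.
class Fraction {
public:
    Fraction(long numerator, long denominator)
        : numerator_(numerator), denominator_(denominator) {
        if (denominator_ == 0)
            throw std::invalid_argument("denominator must not be zero");
    }
private:
    long numerator_;
    long denominator_;
};
```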