By "intuitive" I mean given
int a = -1;
unsigned int b = 3;
the expression (a < b) should evaluate to 1.
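For reference, what actually happens under the usual arithmetic conversions is that a is converted to unsigned int, so the comparison evaluates to 0; a minimal demonstration:

#include <stdio.h>

int main(void)
{
    int a = -1;
    unsigned int b = 3;
    /* a is converted to unsigned int and becomes UINT_MAX,
       so UINT_MAX < 3 is false */
    printf("%d\n", a < b);  /* prints 0, not 1 */
    return 0;
}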
There are a number of questions on Stack Overflow already asking why, in this or that particular case, the C compiler complains about a signed/unsigned comparison. The answers boil down to the integer conversion rules and such. Yet there does not seem to be a rationale for why the compiler has to be so exceptionally dumb when comparing signed and unsigned integers. Using the declarations above, why is an expression like
(a < b)
not automatically substituted with
(a < 0 || (unsigned int)a < b)
if there is no single machine instruction to do it properly?
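For illustration, the substitution can of course be written out by hand; the helper name lt_int_uint below is just made up for this sketch:

#include <stdbool.h>

/* Hand-written version of the "intuitive" comparison: a negative
   signed value is less than any unsigned value; otherwise the cast
   to unsigned int preserves the value and the comparison is exact. */
static bool lt_int_uint(int a, unsigned int b)
{
    return a < 0 || (unsigned int)a < b;
}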
Now, there have been comments on previous questions along the lines of "if you have to mix signed and unsigned integers, there is something wrong with your program". I don't buy that, since libc itself makes it impossible to live in a signed-only or unsigned-only world (e.g. the sprintf() family of functions returns int as the number of bytes written, send() returns ssize_t, and so on).
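A typical case where the mix is forced on you is checking the int returned by snprintf() against a size_t buffer size (a sketch, with a made-up helper name):

#include <stdio.h>
#include <stddef.h>

/* Returns nonzero if the formatted value fit into the buffer;
   the signed int from snprintf() has to be compared to a size_t. */
static int format_fits(char *buf, size_t size, int value)
{
    int n = snprintf(buf, size, "%d", value);
    return n >= 0 && (size_t)n < size;
}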
I also don't buy the idea, expressed in comments below, that the implicit conversion of a signed integer to unsigned in a comparison (the (d - '0' < 10U) "idiom") bestows some additional power on the C programmer compared to an explicit cast ((unsigned int)(d - '0') < 10U). But it sure opens wide opportunities to screw up.
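For context, the idiom in question is typically a one-comparison digit test, along these lines:

#include <stdbool.h>

/* Relies on the implicit conversion: if d < '0', then d - '0' is
   negative, wraps around to a huge unsigned value and fails the
   comparison, so both bounds are checked at once. */
static bool is_ascii_digit(int d)
{
    /* equivalent explicit form: (unsigned int)(d - '0') < 10U */
    return d - '0' < 10U;
}

Both forms behave identically; the implicit one is exactly the kind of comparison that -Wsign-compare (enabled by -Wextra) complains about.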
And yes, I'm happy that the compiler warns me that it cannot do it (unfortunately, only if I ask it explicitly). The question is: why can't it? There are usually good reasons behind the standard's rules, so I'm wondering whether there are any here.