As we know, code like this will generate a warning:
for (int i = 0; i < v.size(); ++i)
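For concreteness, a complete program that triggers it (built with -Wall, which enables -Wsign-compare for C++ on GCC; the warning text in the comment is GCC's wording, Clang's differs slightly):

    #include <vector>

    int main()
    {
        std::vector<int> v{1, 2, 3};

        // warning: comparison of integer expressions of different
        // signedness: 'int' and 'std::vector<int>::size_type' [-Wsign-compare]
        for (int i = 0; i < v.size(); ++i)
            (void)v[i];  // the loop body doesn't matter
    }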
The usual solution is to give the loop counter an unsigned type: auto i = 0u;, decltype(v.size()) i = 0;, or std::vector<int>::size_type i = 0; (all three are sketched below).
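Each of these compiles cleanly because the counter's type is already unsigned (a minimal sketch reusing v from above):

    #include <vector>

    int main()
    {
        std::vector<int> v{1, 2, 3};

        for (auto i = 0u; i < v.size(); ++i) { }                        // unsigned literal
        for (decltype(v.size()) i = 0; i < v.size(); ++i) { }           // deduce the exact size type
        for (std::vector<int>::size_type i = 0; i < v.size(); ++i) { }  // spell it out
    }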
But pretend that we're forced to compare a signed and an unsigned value. The compiler will implicitly convert the int to an unsigned int (the actual types don't matter). Using an explicit cast, static_cast<unsigned int>(i), makes the warning go away, but this is bad: it does exactly what the compiler did implicitly and merely silences an important warning!
The better solution is:
if ((i < 0) || (static_cast<unsigned int>(i) < v.size()))
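Worth noting: since C++20 the standard library packages exactly this check as std::cmp_less (and friends) in <utility>, which compares the mathematical values of a signed and an unsigned integer without the dangerous conversion. A short sketch:

    #include <cassert>
    #include <utility>
    #include <vector>

    int main()
    {
        std::vector<int> v{1, 2, 3};
        int i = -1;

        // The naive comparison converts i to an unsigned type, so -1
        // becomes a huge value and (i < v.size()) is false here.

        // std::cmp_less compares the mathematical values instead:
        assert(std::cmp_less(i, v.size()));  // true: -1 < 3
    }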
Understandably, C is "closer to the metal" and consequently less safe. But in C++ there's no excuse for this behavior. C++ and C have been diverging for many years, and hundreds of improvements to C++ have increased safety along the way. I highly doubt a change like this would hurt performance either.
Is there a reason why compilers don't do this automatically?
N.B.: this DOES happen in the real world. See Vulnerability Note VU#159523:
This vulnerability in Adobe Flash arises because Flash passes a signed integer to calloc(). An attacker has control over this integer and can send negative numbers. Because calloc() takes size_t, which is unsigned, the negative number is converted to a very large number, which is generally too big to allocate, and as a result calloc() returns NULL causing the vulnerability to exist.
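To illustrate the mechanism (a contrived sketch, not Flash's actual code), passing a negative int where calloc() expects a size_t wraps it around to an enormous value:

    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        int count = -1;  // imagine attacker-controlled input

        // calloc takes (size_t, size_t), so count is implicitly
        // converted: -1 becomes SIZE_MAX. The allocation is far too
        // big, and calloc returns NULL.
        void* p = std::calloc(count, sizeof(int));

        if (p == nullptr)
            std::puts("calloc failed: the negative count wrapped to a huge size_t");

        std::free(p);
    }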