If we multiply two uint32_t values on a system where the type int has 63 value bits and one sign bit, then those values are converted to int (integer promotions), multiplied, and the result is converted back to uint32_t. The intermediate result cannot be represented by an int: (2^32-1) * (2^32-1) = 2^64 - 2^33 + 1 > 2^63 - 1, so the multiplication triggers a signed integer overflow, causing undefined behavior.
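To make the implicit conversions visible, here is a minimal sketch of what the abstract machine effectively evaluates on such a platform; the names x, y, z and the explicit casts are mine, added only for illustration:

#include <stdint.h>

int main(void) {
    uint32_t x = UINT32_MAX;
    uint32_t y = UINT32_MAX;
    /* With a 64-bit int, the integer promotions turn x * y into a signed
       multiplication; the product does not fit in int, so the multiply is
       undefined behavior before any conversion back to uint32_t happens. */
    uint32_t z = (uint32_t)((int)x * (int)y);
    (void)z;
    return 0;
}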
The value of UINT32_MAX is 2^32-1, since uint32_t is guaranteed to have exactly 32 value bits.
uint32_t a = UINT32_MAX;
uint32_t b = UINT32_MAX;
uint32_t c = a * b;  /* on such a platform, a and b are promoted to int here */
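For contrast, a hedged sketch of one way to keep the multiplication in unsigned arithmetic so it wraps as intended; the cast to uint64_t and the name c_safe are my additions, not part of the original snippet:

/* Casting one operand to uint64_t keeps the arithmetic unsigned even when
   int is 64 bits wide; the product is well defined and the assignment
   reduces it modulo 2^32, giving the wrapped result the programmer expected. */
uint32_t c_safe = (uint32_t)((uint64_t)a * b);

Multiplying by 1u first (a * 1u * b) is another common idiom with the same effect, because unsigned int is never narrower than int.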
Since platforms where int is 64 bits wide do exist, is my conclusion correct? The programmer expects the result to wrap, since the type is unsigned, but the integer promotions cause undefined behavior because of signed overflow.