2

e.g.

uint8_t value = 256;

debug output:

0

I've read that it does some sort of truncation, but I'm not seeing exactly how. Any links are appreciated.

Womble
    dupe/related: http://stackoverflow.com/questions/16056758/c-c-unsigned-integer-overflow – NathanOliver Mar 29 '16 at 19:10
  • Who is voting to close this as *Questions asking us to recommend or find a book, tool, software library, tutorial or other off-site resource*? – NathanOliver Mar 29 '16 at 21:06

3 Answers

3

According to [conv.integral]

If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [ Note: In a two’s complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). — end note ]

If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.

So, for your example, you would reliably get zero; if you used int8_t instead of uint8_t, the result would be implementation-defined. (In contrast, if an operation on signed integers overflows, the result is undefined behaviour. Why the inconsistency? I don't know.)

Brian Bi
  • As for the "Why?" see: http://stackoverflow.com/questions/18195715/why-is-unsigned-integer-overflow-defined-behavior-but-signed-integer-overflow-is – NathanOliver Mar 29 '16 at 19:15
3

I'll try to make sense of it along with you.

uint8_t is an 8-bit data type, i.e. one byte. It has 8 bits, each of which can be 1 or 0. 1111 1111 is 255. If you add one to that, the carry keeps propagating: 255 + 1 in binary is 1 0000 0000, but since the type can only store 8 bits, the leading 1 is dropped and you're left with 0000 0000, which is the integer value 0.

At least, that's how I understand it works.

Zackeezy
1

In the case of unsigned integral types, only the lowest n bits of the value are stored in the variable, where n is the width of the type. (Brian's answer encompasses everything that I say here.)

For example, unsigned char a = 257 would result in a=1.

The compiler (gcc in this case) should warn you when you do such assignments, e.g. filename.c:line:column: warning: overflow in implicit constant conversion [-Woverflow].

blazs