#include <stdio.h>

int main()
{
    unsigned int x = 1;
    signed char y = -1;
    unsigned int sum = x + y;
    printf("%u", sum);
}

In the above program I expected the signed char to be converted to unsigned int, and hence sum to be x + y = 1 + (2^32 - 1) = 2^32. But surprisingly it prints 0.

Previously, I had tried printing (x > y) and got false (0) as the output. I can't figure out what's going on here; could someone explain how the conversions work in such cases?

Archer
  • `y` is probably being promoted to `int`. – Fiddling Bits Mar 04 '20 at 20:40
  • This link has something to do with your question: https://github.com/LambdaSchool/CS-Wiki/wiki/Casting-Signed-to-Unsigned-in-C – Michael Heidelberg Mar 04 '20 at 20:43
  • `y` is converted to `int`. But note that `2^32` is `1 << 32`, which means zero because `unsigned int` only has 32 bits and the [shifted] value is 0x100000000, which can't be contained in 32 bits and is truncated to zero. – Craig Estey Mar 04 '20 at 20:58
  • I get a warning on this code: `error: conversion to 'unsigned int' from 'signed char' may change the sign of the result [-Werror=sign-conversion]`. I think this is the kind of thing it's warning about. – Fred Larson Mar 04 '20 at 21:01
  • @FredLarson `-Wsign-conversion` warns about a lot of things that are actually well-defined; if you use it in combination with `-Werror`, then the compiler is no longer conforming. – M.M Mar 04 '20 at 21:04
  • Quote: "= 2^32. But surprisingly it prints 0." Hmm... how did you expect to store 2^32 in a 32-bit unsigned? – Support Ukraine Mar 04 '20 at 21:30
  • Yet another reason to always enable warnings when compiling, then fix those warnings. – user3629249 Mar 05 '20 at 07:38

1 Answer


It shouldn't be surprising that computing 2^32 as an unsigned int results in 0. On a machine with 32-bit ints, UINT_MAX is 2^32 - 1, so 2^32 is out of range. As with any other unsigned arithmetic, the out-of-range value is reduced modulo UINT_MAX + 1 (i.e., 2^32), resulting in 0.
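
A minimal sketch of that wraparound, assuming a 32-bit unsigned int as in the question:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int max = UINT_MAX;   /* 2^32 - 1 = 4294967295 on a 32-bit unsigned int */
    printf("%u\n", max);           /* 4294967295 */
    printf("%u\n", max + 1u);      /* reduced modulo 2^32, prints 0 */
}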

Specifically, in the evaluation of x + y:

First, y is converted to a (signed) int, as per the "integer promotions". This doesn't change the value of y; it is still -1.

Then, as per the "usual arithmetic conversions", since unsigned int and int have the same rank, y is converted to unsigned int, making its value 2^32 - 1.

Finally, the addition is computed using unsigned int arithmetic. That results in 0, as above.
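
Here is a small sketch of that sequence, again assuming 32-bit int and unsigned int; the intermediate variables (promoted, converted) are only there to make each step visible:

#include <stdio.h>

int main(void)
{
    unsigned int x = 1;
    signed char y = -1;

    int promoted = y;                  /* integer promotions: still -1 */
    unsigned int converted = promoted; /* usual arithmetic conversions: 2^32 - 1 = 4294967295 */
    unsigned int sum = x + converted;  /* 1 + 4294967295 wraps to 0 */

    printf("%d %u %u\n", promoted, converted, sum);  /* prints: -1 4294967295 0 */
}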

This exact same sequence is followed for the evaluation of x > y. Since y has been converted to an unsigned int before the comparison is evaluated, the result is (perhaps unexpectedly) false. That's why some compilers will warn about comparison between signed and unsigned values.
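
For example, the comparison from the question (a sketch assuming 32-bit int; GCC and Clang will typically flag this with -Wsign-compare, which -Wextra enables):

#include <stdio.h>

int main(void)
{
    unsigned int x = 1;
    signed char y = -1;

    /* y is promoted to int, then converted to unsigned int (4294967295),
       so the comparison is 1 > 4294967295, which is false */
    printf("%d\n", x > y);   /* prints 0 */
}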

Also, the type of the variable being assigned to does not alter the computation; only after the result has been computed is any consideration given to what will be done with it. If, for example, sum had been declared unsigned long long int, the computation would be done identically and sum would still be 0. For the extra precision to be useful, you would have to first convert y to unsigned int, and then ensure that the addition is computed with extra precision by casting one of the operands of + to the wider type:

unsigned int y_as_int = y;
unsigned long long sum = x + (unsigned long long)y_as_int;
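
Putting that into a complete program (a sketch; it assumes 32-bit unsigned int, and unsigned long long is at least 64 bits), the result is the 2^32 the question expected:

#include <stdio.h>

int main(void)
{
    unsigned int x = 1;
    signed char y = -1;

    unsigned int y_as_int = y;   /* -1 converted to unsigned int: 4294967295 */
    unsigned long long sum = x + (unsigned long long)y_as_int;

    printf("%llu\n", sum);       /* prints 4294967296, i.e. 2^32 */
}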
rici