
I know that the most significant bit holds the sign of a number in signed types. But I found something strange (to me): if the number is negative and we use the short type, it prints as 0xffff####. How can this be? A short contains only 2 bytes, yet 0xffff#### shows 4 whole bytes. Why do 16 extra bits become ones in the printed representation? Please explain how this works.

For example,

short s = 0x8008;
printf("%x", s);

Output:
ffff8008
Igorka
    Calling a function with a variadic argument list promotes integer types that are smaller than `int` to `int`. – Pete Becker Sep 23 '22 at 17:01
  • [Documentation link for above explanation](https://en.cppreference.com/w/cpp/language/variadic_arguments#Default_conversions) – user4581301 Sep 23 '22 at 17:04
  • @PeteBecker, thanks a lot. I spent an hour trying to find the answer, and it was so simple. – Igorka Sep 23 '22 at 17:06
  • Use "%hx" instead. – Hans Passant Sep 23 '22 at 17:27
  • [This answer](https://stackoverflow.com/a/28097654/12002570) explains this. Here is another dupe: [Why is compiler converting char to int in printf?](https://stackoverflow.com/questions/67163712/why-is-compiler-converting-char-to-int-in-printf). – Jason Sep 23 '22 at 17:40
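
A minimal sketch tying the comments above together (this assumes a 16-bit short and a two's-complement representation, which is what common platforms use):

#include <cstdio>

int main() {
    short s = 0x8008;  // 0x8008 does not fit in a 16-bit short; on common
                       // platforms the stored value is -32760

    // printf is variadic, so s is promoted to int before the call. Sign
    // extension copies the sign bit into the upper 16 bits, turning the
    // pattern 0x8008 into 0xffff8008.
    std::printf("%x\n", s);                  // prints ffff8008

    // The "h" length modifier tells printf the argument was a short, so it
    // converts the promoted value back to unsigned short before printing.
    std::printf("%hx\n", s);                 // prints 8008

    // An explicit cast works too: 0x8008 fits in int as a positive value,
    // so the promotion no longer sign-extends.
    std::printf("%x\n", (unsigned short)s);  // prints 8008
}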

1 Answer


As @PeteBecker says, the problem is the implicit conversion to int: printf is variadic, so a short argument is promoted to int, and for a negative value the sign extension fills the upper 16 bits with ones. If you try the same thing with C++ iostreams you get the output you expect, because operator<< has an overload that takes short directly:

#include <iostream>

int main() { 

    short foo = 0x8008;
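    // 0x8008 (32776) does not fit in a 16-bit short; on common platforms
    // the stored value is -32760. The short overload of operator<< is
    // selected here, so no promotion to int takes place.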
    std::cout << std::hex << foo << "\n";

    short bar = -32760;
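    // the same 16-bit pattern, written as a negative decimal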
    std::cout << std::hex << bar << "\n";

}
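
For reference, assuming a 16-bit two's-complement short, both lines print the same four hex digits, with no sign extension:

Output:
8008
8008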
jwezorek