
I'm asking this question because I can't understand how ASCII characters from 0 to 255 can be represented with a signed char if its range is -128 to 127.

Since sizeof(char) == 1 byte, it also seems reasonable to think that it could easily represent values up to a maximum of 255.

So why is there nothing wrong with the assignment char a = 128, and why shouldn't I use unsigned char for it?

Thank you in advance!

rici
  • Possible duplicate of [C- why char c=129 will convert into -127?](https://stackoverflow.com/q/20756626/11683) – GSerg Nov 21 '18 at 17:35
  • Possible duplicate of [C- why char c=129 will convert into -127?](https://stackoverflow.com/questions/20756626/c-why-char-c-129-will-convert-into-127) – Patrick Artner Nov 21 '18 at 17:38
  • I read the comments on the possible duplicate, but I'm still in doubt. If I try printf("%c", a), it outputs the character ç, so char a effectively yields the value 128, something that only an unsigned char could represent. So basically the overflow is managed not by assigning a negative value to the variable, but by converting it to an unsigned char? – Franco Bosi Nov 21 '18 at 18:01
  • This seems like a general question about character encodings that use code unit values in the range 128 to 255. The ASCII character encoding is not one of them. Also, ç is not even in the ASCII character set. – Tom Blodget Nov 21 '18 at 20:04

2 Answers


char c = 128; by itself is correct in C. The standard says that a char contains CHAR_BIT bits, which can be greater than 8. Also, whether char is signed or unsigned is implementation-defined, and an unsigned char must be able to hold at least the range [0, 255].

So on an implementation where char is wider than 8 bits, or where char is unsigned by default, this line is valid and meaningful.

Even on a common implementation with an 8-bit signed char, the conversion of 128 to char is implementation-defined rather than undefined behavior, so there is no problem.

In practice, the compiler will often issue a warning for this; clang, for example:
`warning: implicit conversion from 'int' to 'char' changes value from 128 to -128 [-Wconstant-conversion]`

ElderBug

Signed or unsigned, it takes 8 bits, and 8 bits can hold 256 distinct values. It's just a question of how we interpret them.

AndrewF
  • Basically, what you're saying is that since 8 bits can represent 256 values, if you assign 128 to a signed char, even though its range only goes up to 127, C will manage the overflow by treating the value as if it were an unsigned char, so it effectively holds 128? – Franco Bosi Nov 21 '18 at 17:59
  • I'm a C# developer. I mean, the compiler probably just takes a byte/8 bits (never mind whether it's 'a', 128, or -15) and sets that value at an address in memory. And Google says char in C++ can be -128/127 or 0/255. – AndrewF Nov 21 '18 at 18:24