Why does the following code work?

char c = 'A';
printf("%d - size: %d", c, sizeof(c));

Prints out:

65 - size: 1

Why is the output not garbage, given that an int is usually 4 bytes long and we can clearly see that the char is 1 byte long? Does the compiler do an implicit conversion?

NightRain23

2 Answers


Any integer type with a rank lower than int is promoted to either int or unsigned int anytime it is used in an expression. This is specified in section 6.3.1.1p2 of the C standard:

The following may be used in an expression wherever an int or unsigned int may be used:

  • An object or expression with an integer type (other than int or unsigned int) whose integer conversion rank is less than or equal to the rank of int and unsigned int.
  • A bit-field of type _Bool, int, signed int, or unsigned int.

If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions.

All other types are unchanged by the integer promotions.

That's what is happening in this case: printf is a variadic function, so the types of its extra arguments aren't declared in its prototype, and those arguments undergo these promotions (as part of the default argument promotions). The char argument is therefore promoted to int, and using %d to format it is valid.
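You can also watch the integer promotions happen with sizeof. A minimal sketch (the sizes printed are typical, not guaranteed; unary + is just a convenient way to force the promotions):

#include <stdio.h>

int main(void)
{
    char c = 'A';

    /* sizeof(c) is the size of char itself, always 1 */
    printf("sizeof(c)     = %zu\n", sizeof(c));

    /* Unary + applies the integer promotions, so +c has type int */
    printf("sizeof(+c)    = %zu\n", sizeof(+c));    /* typically 4 */

    /* Arithmetic on two chars is also done in int */
    printf("sizeof(c + c) = %zu\n", sizeof(c + c)); /* typically 4 */

    return 0;
}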

dbush
    Any integer type with a rank lower than `int` is promoted to `int` **if** `int` can hold all values of the type. Otherwise, it's promoted to `unsigned int`. For example, if `unsigned short` and `unsigned int` are the same size, then `unsigned short` promotes to `unsigned int`, not to `int`. (That won't matter for `char` except on exotic systems where `CHAR_MAX > INT_MAX`, which can only happen if `CHAR_BIT >= 16`.) – Keith Thompson Oct 23 '18 at 20:00

There is a special rule for functions with variable-length argument lists, like printf. In the variable-length portion of the argument list, all integral arguments smaller than int are promoted to int, and float is promoted to double. So it turns out it's perfectly fine to print a character (or a short) using %d.
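As a minimal sketch of what that means (the function show and its arguments here are purely illustrative): inside a variadic function, a char argument has to be retrieved as int and a float argument as double, because those are the types that actually arrive.

#include <stdarg.h>
#include <stdio.h>

/* Illustrative variadic function: expects one small integer
   argument and one floating-point argument after the label. */
static void show(const char *label, ...)
{
    va_list ap;
    va_start(ap, label);

    /* A char or short argument arrives as int, and a float arrives
       as double, so that's how va_arg must retrieve them. */
    int i = va_arg(ap, int);
    double d = va_arg(ap, double);
    va_end(ap);

    printf("%s: %d %f\n", label, i, d);
}

int main(void)
{
    char c = 'A';
    float f = 1.5f;
    show("promoted", c, f);  /* prints: promoted: 65 1.500000 */
    return 0;
}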

These default argument promotions end up accounting for a number of anomalies in printf. You might think that the correct format specifiers for char, short, int, float, and double are %hhd, %hd, %d, %f, and %lf, respectively. But in fact you can get away with %d, %d, %d, %f, and %f. printf basically ignores the l modifier for floating point, and it seems to ignore the h modifier for integers. (Actually h can make a difference in obscure cases, as chux explains in a comment.)
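For example, all of the following calls are well-defined and print the same value for each variable, since every argument arrives as an int anyway; the h and hh modifiers merely tell printf to convert that int back to short or signed char before printing (a small sketch, with typical output in the comments):

#include <stdio.h>

int main(void)
{
    char c = 'A';
    short s = 123;

    /* Each argument is promoted to int before printf sees it;
       %hhd and %hd just convert that int back before printing. */
    printf("%hhd %hd %d\n", c, c, c);  /* 65 65 65 */
    printf("%hhd %hd %d\n", s, s, s);  /* 123 123 123 */

    return 0;
}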

Steve Summit
  • The `"h"` modifier cases the `int` argument (regardless it was originally `int`, `short`, `signed char`) to be converted to `short` before printing. The `"hh"` modifier cases the `int` argument to be converted to `signed char` before printing. So "essentially ignore" is true if the original value passed was in range of the finite converted type. – chux - Reinstate Monica Oct 23 '18 at 20:05
  • @chux "essentially ignore" was a bit too strong. Reworded. – Steve Summit Oct 23 '18 at 20:41
  • Is there any documentation of this special rule? – NightRain23 Oct 24 '18 at 06:02
  • @NightRain23 Yes. They are called the *default argument promotions*; a web search should find more details. – Steve Summit Oct 24 '18 at 11:53