65

I am just learning C and I have a little knowledge of Objective-C due to dabbling in iOS development. In Objective-C, I was using NSLog(@"%i", x); to print the variable x to the console. However, I have been reading a few C tutorials and they are saying to use %d instead of %i.

printf("%d", x); and printf("%i", x); both print x to the console correctly.

Both seem to get me to the same place, so which is preferred? Is one more semantically correct than the other?

Dummy Code

6 Answers

80

They are completely equivalent when used with printf(). Personally, I prefer %d. It's used more often (should I say "it's the idiomatic conversion specifier for int"?).

(One difference between %i and %d is that when used with scanf(), then %d always expects a decimal integer, whereas %i recognizes the 0 and 0x prefixes as octal and hexadecimal, but no sane programmer uses scanf() anyway, so this should not be a concern.)
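
For example, a minimal check (any standard C compiler should do) produces identical output for %d and %i in printf(), even with flags and field widths:

#include <stdio.h>

int main(void)
{
    int x = -42;

    /* %d and %i print identically, with or without flags and widths. */
    printf("%d %i\n", x, x);     /* -42 -42 */
    printf("%05d %05i\n", x, x); /* -0042 -0042 */
    return 0;
}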

  • Thanks :) So more just a developer's preference? – Dummy Code Jun 26 '13 at 20:20
  • @HenryHarris Yes, but if you take my advice, you use `%d` ;) – Jun 26 '13 at 20:20
  • This may be extremely late, but what is wrong with using scanf()? – Spellbinder2050 Sep 28 '14 at 21:12
  • @Spellbinder2050, see http://c-faq.com/stdio/scanfprobs.html. Basically, scanf() does not respond well to unexpected input. – Chad Nov 23 '15 at 16:23
  • Did you check out the link Chad provided? `"It's nearly impossible to deal gracefully with all of these potential problems when using scanf; it's far easier to read entire lines (with fgets or the like), ... (Functions like strtol, strtok, and atoi are often useful)"`. Man, I hate this kind of comment where people say _"Oh, everyone else is shitty"_ but don't read through the resources provided, just like OiciTrap. – Kyle Chadha Jul 12 '18 at 19:25
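
As a rough sketch of the approach the last comment mentions (read a whole line with fgets, then convert with strtol; the buffer size and messages here are arbitrary choices, not from the linked FAQ):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char line[64];

    /* Read a whole line first, then convert it separately,
       so malformed input never desynchronizes the stream. */
    if (fgets(line, sizeof line, stdin) != NULL) {
        char *end;
        long value = strtol(line, &end, 10); /* base 0 instead of 10 would accept 0x/0 prefixes, like %i */

        if (end == line)
            printf("not a number\n");
        else
            printf("read %ld\n", value);
    }
    return 0;
}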
25

I am just adding an example here, because I think examples make it easier to understand.

In printf(), they behave identically, so you can use either %d or %i. But they behave differently in scanf().

For example:

#include <stdio.h>

int main(void)
{
    int num, num2;
    scanf("%d%i", &num, &num2); // Reading num using %d and num2 using %i

    printf("%d\t%d\n", num, num2);
    return 0;
}

Output:

(Screenshot of the program's output: entering 010 for both values prints 10 and 8.)

You can see the different results for identical inputs.

num:

We read num using %d, so when we enter 010, the leading 0 is insignificant and the value is read as decimal 10.

num2:

We are reading num2 using %i.

That means it will treat decimals, octals, and hexadecimals differently.

When we give num2 the input 010, scanf() sees the leading 0 and parses it as octal.

When we then print it using %d, we get the decimal equivalent of octal 010, which is 8.
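
The same behavior can be reproduced non-interactively with sscanf (a small variation on the example above; the input strings 10, 010, and 0x10 are just sample values):

#include <stdio.h>

int main(void)
{
    int dec = 0, oct = 0, hex = 0;

    /* %i picks the base from the prefix: none = decimal, 0 = octal, 0x = hex. */
    sscanf("10",   "%i", &dec);
    sscanf("010",  "%i", &oct);
    sscanf("0x10", "%i", &hex);

    printf("%d %d %d\n", dec, oct, hex); /* prints: 10 8 16 */
    return 0;
}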

A.s. Bhullar
  • Upvoting you, but I'd have preferred to see a complete answer rather than patchwork over another answer (which is exactly what the StackOverflow project is fighting against) – vog Apr 01 '15 at 08:58
4

The d and i conversion specifiers behave the same with fprintf, but they behave differently for fscanf.

As others wrote in their answers, the idiomatic way to print an int is to use the d conversion specifier.

Regarding the i specifier and fprintf, the C99 Rationale says:

The %i conversion specifier was added in C89 for programmer convenience to provide symmetry with fscanf’s %i conversion specifier, even though it has exactly the same meaning as the %d conversion specifier when used with fprintf.

ouah
2

Both %d and %i can be used to print an integer.

%d stands for "decimal" and %i for "integer". You can use %x to print in hexadecimal, and %o to print in octal.

You can use %i as a synonym for %d, if you prefer to indicate "integer" instead of "decimal."

On input, using scanf(), you can use both %i and %d as well. %i means parse it as an integer in any base (octal, hexadecimal, or decimal, as indicated by a 0 or 0x prefix), while %d means parse it as a decimal integer.
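
For example, printing the same value with each of these specifiers (255 is just a sample value, and this only shows the output side):

#include <stdio.h>

int main(void)
{
    int n = 255;

    printf("%d\n", n); /* 255 - decimal */
    printf("%i\n", n); /* 255 - identical to %d in printf */
    printf("%x\n", n); /* ff  - hexadecimal */
    printf("%o\n", n); /* 377 - octal */
    return 0;
}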

Check here for more explanation:

Why does %d stand for Integer?

Priyatham51
1

%d seems to be the norm for printing integers; I never figured out why, since they behave identically.

Stephan
0

As others said, they produce identical output with printf, but behave differently with scanf. I would prefer %d over %i for this reason. A number printed with %d can be read back in with %d and you will get the same number. That is not always true with %i, if you ever choose to use zero padding. Because it is common to copy printf format strings into scanf format strings, I would avoid %i, since it could introduce a surprising bug:

I write fprintf("%i ...", ...);

You copy it and write fscanf("%i ...", ...);

I decide I want to align columns more nicely, so that alphabetical order of the output matches numeric order: fprintf("%03i ...", ...); (or %04d)

Now when you read my numbers back, anything between 10 and 99 gets a leading zero and is interpreted as octal (and anything containing an 8 or 9 digit won't even parse fully). Oops.

If you want decimal formatting, just say so.
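
A minimal round trip illustrates the problem (42 and the three-digit width are arbitrary choices):

#include <stdio.h>

int main(void)
{
    char buf[16];
    int out = 42, back_i = 0, back_d = 0;

    snprintf(buf, sizeof buf, "%03i", out); /* zero-padded: "042" */

    sscanf(buf, "%i", &back_i); /* leading 0 -> read back as octal: 34 */
    sscanf(buf, "%d", &back_d); /* read back as decimal, as intended: 42 */

    printf("%s -> %%i: %d, %%d: %d\n", buf, back_i, back_d);
    /* prints: 042 -> %i: 34, %d: 42 */
    return 0;
}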

David Roundy