
I have a program in C as follows:

#include <stdio.h>

int main() {

   int sum = 17, count = 5;
   double mean;
   printf("Value of mean (without casting): %f\n", sum/count);
   mean = (double) sum / count;
   printf("Value of mean (with casting): %f\n", mean );
   return (0);
}

For the above program, I'm getting the following output:

Value of mean (without casting): 0.000000
Value of mean (with casting): 3.400000

I don't understand why I'm getting 0.000000 before performing the typecast, even though sum/count should return a decimal (floating-point) value, so I believe both values should come out the same. Any help would be highly appreciated. Thanks!

AnonSar

2 Answers


That's the result of using an improper format specifier when displaying the value with printf(). Notice your code:

printf("Value of mean (without casting): %f\n", sum / count);

Here you're dividing sum by count, which is integer division and evaluates to the integer 3 (because sum and count are both ints, the fractional part is truncated).
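For instance, here is a minimal sketch of the difference, reusing the names from your question:

#include <stdio.h>

int main(void) {

   int sum = 17, count = 5;
   int q = sum / count;             /* integer division: 17 / 5 == 3, remainder discarded */
   double m = (double) sum / count; /* one operand converted first, so the division is done in double: 3.4 */
   printf("%d\n", q);   /* prints 3 */
   printf("%f\n", m);   /* prints 3.400000 */
   return 0;
}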

On the other hand, if you enable compiler warnings with the -Wformat flag, you'll get a warning:

main.cpp:8:46: warning: format '%f' expects argument of type 'double', but argument 2 has type 'int' [-Wformat=]
    8 |    printf("Value of mean (without casting): %f\n", (sum / count));
      |                                             ~^     ~~~~~~~~~~~~~
      |                                              |          |
      |                                              double     int
      |                                             %d

Using the correct format specifier here, which is %d for integers, makes the problem go away. Or, if you'd rather not change the format specifier, change the expression instead to:

((float)sum / count)

which will solve your problem as well.
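Putting both options together, the corrected calls could look like this (a sketch based on your original program; only the format specifier or the cast changes):

#include <stdio.h>

int main(void) {

   int sum = 17, count = 5;
   /* Option 1: keep the integer division and print it with %d */
   printf("Value of mean (without casting): %d\n", sum / count);
   /* Option 2: keep %f and cast one operand so the division is done in floating point */
   printf("Value of mean (with casting): %f\n", (float) sum / count);
   return 0;
}

Note that the float result is promoted to double when passed to the variadic printf, so %f matches it correctly.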

Rohan Bari
printf("Value of mean (without casting): %f\n", sum/count);

You are pushing an integer expression (sum/count) onto the stack, but telling printf to pop it off as a double (%f) and interpret the bits as such.

There are two problems with this. First, the integer expression is likely pushing 4 bytes (sizeof(int)) onto the stack, but printf is popping 8 bytes (sizeof(double)) as a result of being passed %f. That is undefined behavior. Second, even if the sizes matched up, the bits of a floating-point value are arranged completely differently from those of an integer, so they would be interpreted differently and garbage would be printed.
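As a rough illustration of that second point, you can copy an int's bytes into a double and print the result; this is only a sketch to show how differently the two bit patterns are interpreted, not something to do in real code:

#include <stdio.h>
#include <string.h>

int main(void) {

   int i = 3;        /* the value sum / count actually produces */
   double d = 0.0;
   /* Copy the int's 4 bytes into the double's storage. On a typical
      little-endian machine this leaves a tiny subnormal bit pattern,
      so printf("%f") shows 0.000000 rather than 3.0. */
   memcpy(&d, &i, sizeof i);
   printf("int bits read as double: %f\n", d);
   return 0;
}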

selbie