2

In my code, the output is 2.000000, whereas it's supposed to be 2.111111:

#include <stdio.h>

int main(void) {
    int i, j;
    float r;
    i = 19;
    j = 9;
    r = i / j;
    printf(" %f ", r);
    return 0;
}

Why isn't it working?

user2779715
  • 33
  • 1
  • 4

6 Answers

8

Because i / j is integer division (both operands are integers). It doesn't make any difference that you store the result in a float.

You would get the desired result if one of i and j were a float, e.g.:

r = ((float)i) / j;
Jon
  • 428,835
  • 81
  • 738
  • 806
2

Change

r = i / j;

to

r = i / (float) j;

i and j are integers, so i / j is an integer division.

ouah
  • 142,963
  • 15
  • 272
  • 331
2

This is because the division is done in integers before being assigned to a float. Change the type of i and j to float to get it fixed:

#include <stdio.h>

int main(void) {
    float i, j, r;
    i = 19;
    j = 9;
    r = i / j;
    printf(" %f ", r);
    return 0;
}

Alternatively, you can cast i to float in the division, like this:

r = ((float)i) / j;
Sergey Kalinichenko
  • 714,442
  • 84
  • 1,110
  • 1,523
0

Dividing two integers yields an integer (in your case 2, which is later converted to a float).

Just convert i or j to float (or declare them as floats to begin with):

r = ((float) i) / j;

Working example

Anthony Accioly
  • 21,918
  • 9
  • 70
  • 118
0

Here, the int result was converted to float by default after the RHS (an integer division, whose result is the quotient, an integer) was evaluated. That's why you get 2.000000. Typecast any operand on the RHS to get the result you seek.

You need to understand type conversions in C, and then casting; check them in any good source. Generally, automatic conversions are those that convert a narrower operand into a wider one without loss of information, for example converting an integer to floating point in an expression like float + integer (from Wikipedia).

e.g. float x = 7.8; int k = 9; int j = k + x; gives j = 16.

Here, k is first converted to float, then the addition gives 16.8, and finally the result is truncated to an integer when stored in j.

Rafed Nole
  • 112
  • 1
  • 4
  • 16
0

Because i / j is integer division (both operands are integers), the division results in an integer, with the fractional part discarded. That integer is then assigned to the float variable r, so the compiler implicitly converts it back to float, which is why you see 2.000000.

You would get the desired result if one of i and j were a float, e.g.:

r = ((float)i) / j; // explicit typecast

r_goyal
  • 1,117
  • 1
  • 11
  • 20