I want to create a program that uses decimal numbers, so I thought I would need to use the float type, but I don't understand how this type behaves. I made a test:
#include <stdio.h>
#include <float.h>
int main(void)
{
    float fl;
    fl = 5 - 100000000;
    printf("%f\n", fl);
    fl = FLT_MAX - FLT_MAX * 2;
    printf("%f\n", fl);
    fl = -100000000000000;
    printf("%f\n", fl);
    return 0;
}
Output:
-99999992.000000 // I expected it to be -99999995.000000
-inf // I expected it to be -340282346638528859811704183484516925440.000000
-100000000376832.000000 // I expected it to be -100000000000000
Why are the results different from what I expected?
EDIT: Thanks to the people who didn't just downvote my question for some reason and actually tried to help me. However, what I have learned in this thread doesn't help me understand why some float variables containing integers (ending with .000000) behave strangely.
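
To show more precisely what confuses me, here is a follow-up test I put together (a minimal sketch assuming an IEEE-754 single-precision float, using nextafterf() from <math.h>; the exact values could differ on another system). It prints the next representable float below each of the results I got:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Around 1e8, consecutive floats seem to be 8 apart,
       so a value like -99999995 would fall in a gap. */
    float a = -99999992.0f;
    printf("%f -> %f\n", a, nextafterf(a, -INFINITY));

    /* Around 1e14 the gap seems to grow to 8388608 (2^23), which would
       explain getting -100000000376832 instead of -100000000000000. */
    float b = -100000000376832.0f;
    printf("%f -> %f\n", b, nextafterf(b, -INFINITY));

    return 0;
}

If I understand correctly, this should print -99999992.000000 -> -100000000.000000 and -100000000376832.000000 -> -100000008765440.000000 (I had to link with -lm when compiling with gcc). Is this gap between neighbouring floats really the reason for the results above?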