I was wondering why floating point numbers show precision problems for some values but not for others. Consider this program:
#include <iostream>
#include <iomanip>

int main()
{
    std::cout << std::setprecision(20);

    double d1(1.0);
    std::cout << d1 << std::endl;

    double d2(0.1);
    std::cout << d2 << std::endl;

    return 0;
}
The output of this program is:
- 1
- 0.10000000000000000555
If both numbers are of type double (which generally has precision problems), why doesn't the compiler find any problem with the value 1.0, yet does find one with the value 0.1? One more thing that is not clear to me: if the precision is set to 20 digits, why do I get a number that contains 21 digits as the result of d2?
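For reference, here is a small variant of the same program I put together for comparison (the extra values 0.5, 0.25, 0.2, and 1.0/3.0 are just examples I added, not part of the original question):

#include <iostream>
#include <iomanip>

int main()
{
    std::cout << std::setprecision(20);

    // Fractions whose denominators are powers of two
    // can be stored exactly in binary:
    std::cout << 0.5 << std::endl;   // prints 0.5
    std::cout << 0.25 << std::endl;  // prints 0.25

    // Other fractions cannot, so the nearest double is printed:
    std::cout << 0.1 << std::endl;   // prints 0.10000000000000000555
    std::cout << 0.2 << std::endl;   // prints 0.2000000000000000111

    // A smaller precision makes the digit counting easier to see:
    std::cout << std::setprecision(5);
    std::cout << 1.0 / 3.0 << std::endl;  // prints 0.33333 (5 significant digits)

    return 0;
}

Running this, the values that are exact binary fractions print back exactly, while the others do not, which seems related to whatever is going on with 1.0 versus 0.1 above.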