I have a question about how precision works with the double type.
I created a sample in which I assigned the double values below:
double d = 2.0126161;
double d1 = 2.0126162;
double d2 = 2.0126163;
double d3 = 2.0126164;
double d4 = 2.0126165;
Current Output:
If I set a breakpoint and inspect the values with the "po" command in lldb, they show up as follows:
(lldb) po d
2.0126160999999998
(lldb) po d1
2.0126162000000001
(lldb) po d2
2.0126162999999999
(lldb) po d3
2.0126164000000002
(lldb) po d4
2.0126165
I am assigning these doubles to a CLLocation object's latitude and longitude, which are also of type double.
As you can see in the output, the “po” command for d4 prints “2.0126165”, which is exactly the value explicitly assigned to d4; that is perfectly fine and what I want.
Issue:
However, “po” for d, d1, d2, and d3 shows values that differ from the ones explicitly assigned.
These changes in d, d1, d2, and d3 alter the results of calculations involving them, and the cumulative effect produces significant differences from the output we expect.
I don't want to round the values to a specific number of decimal places, as the number of places varies from value to value.
How can I make d, d1, d2, and d3 keep exactly the values they were initialized with, without any change in precision, while still using the double type (see the expected output below)?
Expected Output:
(lldb) po d
2.0126161
(lldb) po d1
2.0126162
(lldb) po d2
2.0126163
(lldb) po d3
2.0126164
(lldb) po d4
2.0126165
Note: I don't want to display these values on screen; I am using them in mathematical calculations, so converting to an NSString with a "%.7f" format specifier is not an option.