DECLARE @f as float
SELECT @f = 0.2
SELECT @f
The above code returns
----------------------
0.2
(1 row(s) affected)
No surprise there, except that binary floating-point cannot represent 0.2
exactly.
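To illustrate the point, here is a sketch of one way to look at the stored value (assuming SQL Server 2016 or later, where CONVERT style 3 renders a float with all 17 significant digits rather than the default rounded display):

DECLARE @f AS float
SELECT @f = 0.2
-- Style 3 shows the full 17-digit representation of the stored
-- double-precision value instead of the default short display
SELECT CONVERT(varchar(30), @f, 3)
-- returns something like 2.0000000000000001e-001

That suggests the stored value is the usual binary approximation, which makes the plain "0.2" output above all the more puzzling.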
So what is SQL Server doing here? Is it using floating-point with a base of 10? I was under the impression that it always handled floating-point in base 2 (floating-point in base 10 would essentially be DECIMAL, wouldn't it?).
Is it doing some rounding?