For decimal numbers of the form [x.00, x.99], where x is any arbitrary integer (you can probably guess part of the context here), floating-point datatypes are generally considered to be fairly inaccurate and sometimes dangerous. Instead, in languages that have decimal datatypes, decimals are often preferred. SQL is one such language.
That being said, here's an interesting little experiment in SQL Server:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Table_1](
[SomeFloat] [float] NOT NULL
) ON [PRIMARY]
GO
INSERT INTO Table_1 VALUES (1.1)
SELECT SomeFloat FROM Table_1
SELECT SomeFloat + SomeFloat FROM Table_1
UPDATE Table_1 SET SomeFloat = SomeFloat + SomeFloat
SELECT SomeFloat FROM Table_1
This returns the following:
1.1
2.2
2.2
That shouldn't be, right? x.1 isn't supposed to be storable exactly as a floating-point number - not in SQL, not in a good many other languages. Shouldn't it be rounded off, due to the limitations of working in base-2? But it's not. Neither is anything else of the form [x.00, x.99]. They're all represented accurately, despite using a float datatype.
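To make this even more confusing, here's a small probe against the same Table_1 (just a sketch - the column alias is made up): if the stored value really were an exact 2.2, multiplying it by 3 ought to compare equal to 6.6.

SELECT SomeFloat,
       SomeFloat * 3,
       CASE WHEN SomeFloat * 3 = 6.6   -- 6.6 is a decimal literal, implicitly converted to float for the comparison
            THEN 'equal'
            ELSE 'not equal'
       END AS ExactlyTwoPointTwo
FROM Table_1

My understanding of IEEE 754 doubles (which is what float is in SQL Server, as far as I know) is that this would come back 'not equal', because the stored value is really the nearest binary fraction to 2.2 rather than 2.2 itself - yet the plain SELECT above still shows a clean 2.2.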
I can't change these columns from float to decimal without first being able to justify that there's even a problem in the first place. I'm using SQL Server 2012. Does it automatically pick up on issues like this and convert such numbers to varchar internally, or something else along those lines? How reliable is this? The sort of test I have in mind for demonstrating a problem is sketched below. Thanks.
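This is only a sketch in a standalone batch (the variable names are made up, and nothing from the table above is used): it adds 0.1 as a float ten times and then compares the running total with 1.0.

DECLARE @Total float = 0;
DECLARE @i int = 1;

-- add a tenth, ten times, entirely in float arithmetic
WHILE @i <= 10
BEGIN
    SET @Total = @Total + 0.1;   -- the 0.1 literal is decimal, converted to float for the addition
    SET @i = @i + 1;
END;

SELECT @Total AS RunningTotal,
       CASE WHEN @Total = 1.0 THEN 'equal' ELSE 'not equal' END AS AddsUpToOne;

If float really kept every value of this form exact, that comparison would be 'equal'; if it reports 'not equal', that would at least be concrete evidence that the values aren't stored exactly, whatever the displayed results suggest.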