If you look at the Microsoft SQL Server 2008 R2 documentation for the Point data type, it expects latitude and longitude to be given as floats.
However, this post on SO suggests that the better data type for storing lat/long is decimal(9,6).
So why is Microsoft using floats?
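To illustrate the precision difference I mean (the sample value here is arbitrary):

-- decimal(9,6) rounds to 6 decimal places (~0.11 m of latitude),
-- while float is double precision with ~15 significant digits.
DECLARE @AsFloat float = 47.123456789;
DECLARE @AsDecimal decimal(9, 6) = 47.123456789;
SELECT @AsFloat AS FloatLat, @AsDecimal AS DecimalLat;
-- FloatLat = 47.123456789, DecimalLat = 47.123457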
Update: In the following example, where the lat/long values are denormalized to improve performance, should I match the float data type of the point, or keep the decimal?
CREATE TABLE [dbo].[GeoTest](
    [Id] [INT] IDENTITY(1,1) NOT NULL,
    [Lat] [decimal](9, 6) NOT NULL,    -- denormalized copy of GeoPoint.Lat
    [Long] [decimal](9, 6) NOT NULL,   -- denormalized copy of GeoPoint.Long
    [GeoPoint] [GEOGRAPHY] NOT NULL,
    CONSTRAINT [PK_GeoTest] PRIMARY KEY CLUSTERED
    (
        [Id] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
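Rows get the same coordinates written to both places, along these lines (values are just examples, reusing the coordinates from the queries below):

INSERT INTO [dbo].[GeoTest] (Lat, Long, GeoPoint)
VALUES (47.647500, -122.139300, GEOGRAPHY::Point(47.6475, -122.1393, 4326));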
The schema above allows me to do this:
-- STDistance returns meters; dividing by 1609.344 converts to miles (a 20-mile radius)
SELECT Lat, Long FROM GeoTest WHERE GeoPoint.STDistance(GEOGRAPHY::Point(47.6475, -122.1393, 4326))/1609.344 <= 20
Rather than this:
-- Here Lat/Long are read off the geography instance itself at query time
SELECT GeoPoint.Lat, GeoPoint.Long FROM GeoTest WHERE GeoPoint.STDistance(GEOGRAPHY::Point(47.6475, -122.1393, 4326))/1609.344 <= 20
The second query is slower. It struck me as odd that the two data types were different.