
I have an SQL Server 2008 table that can be illustrated as such:

CREATE TABLE tbl (dtIn DATETIME2, dtOut DATETIME2);
INSERT INTO tbl VALUES
('2012-09-12 03:21:22', '2012-09-12 03:32:15'),
('2012-09-12 03:58:52', '2012-09-12 04:00:47'),
('2012-09-12 04:02:00', '2012-09-12 04:03:26'),
('2012-09-12 04:04:34', '2012-09-12 04:17:03'),
('2012-09-12 04:22:37', '2012-09-12 04:24:18'),
('2012-09-12 05:35:27', '2012-09-12 05:36:26'),
('2012-09-12 05:37:00', '2012-09-12 05:38:08'),
('2012-09-12 05:38:36', '2012-09-12 05:39:40'),
('2012-09-12 05:44:22', '2012-09-12 21:40:21'),
('2012-09-12 21:41:28', '2012-09-12 21:44:19'),
('2012-09-12 22:25:40', '2012-09-12 22:30:25'),
('2012-09-12 22:30:40', '2012-09-12 22:34:06'),
('2012-09-12 22:37:53', '2012-09-12 22:40:12'),
('2012-09-12 22:40:17', '2012-09-12 23:59:59')   -- and so on

I then need to execute a query like this (to calculate duration in minutes):

WITH ctx AS(
  SELECT datediff(minute, dtIn, dtOut) AS d FROM tbl
  )
SELECT SUM(d) FROM ctx

The issue with the query above is that I lose precision. For the table above, the result is 1 second off, and the error grows with more rows, to the point where I actually lose tens of minutes on a large dataset.
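
To illustrate (this example is mine, not from the original post): DATEDIFF counts datepart boundary crossings rather than elapsed time, so two timestamps only 2 seconds apart can register as a full minute:

SELECT DATEDIFF(minute, '2012-09-12 03:59:59', '2012-09-12 04:00:01')
-- returns 1, even though only 2 seconds elapsed

That per-row error of up to nearly a minute in either direction is what accumulates across many rows.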

I'm thinking of replacing it with one of two alternatives, but I'm not sure which one is more efficient.

Option one, with division:

WITH ctx AS(
  SELECT datediff(second, dtIn, dtOut) / 60.0 AS d FROM tbl
  )
SELECT SUM(d) FROM ctx
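
For reference (my example, not from the original post): in T-SQL, dividing an INT by the DECIMAL literal 60.0 yields a DECIMAL result, so the division itself does not truncate:

SELECT 115 / 60   AS int_division,     -- 1 (integer division truncates)
       115 / 60.0 AS decimal_division  -- 1.916666...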

Option two, with a cast (to prevent INT overflow, which a running total of seconds reaches at roughly 68 years):

WITH ctx AS(
  SELECT datediff(second, dtIn, dtOut) AS d FROM tbl
  )
SELECT SUM(CAST(d AS BIGINT)) FROM ctx

1 Answer


Between your two queries, there really isn't much difference. It can be said that integer math is marginally faster than decimal arithmetic, so you could prefer the latter if reporting in seconds is acceptable.
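
If the report still has to come out in minutes, a sketch in the same spirit (mine, not part of the original answer) keeps every row in integer seconds and performs a single decimal division on the final sum:

WITH ctx AS(
  SELECT CAST(DATEDIFF(second, dtIn, dtOut) AS BIGINT) AS d FROM tbl
  )
SELECT SUM(d) / 60.0 AS total_minutes FROM ctx

This way only one division happens per query instead of one per row.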

RichardTheKiwi
  • Thanks. Also, wouldn't doing divisions on N records (say, 100,000) be slower than converting to BIGINT 100,000 times? – c00000fd Sep 30 '12 at 03:23
  • Casting from int (int32) to bigint isn't really as expensive as you might imagine, so yes, that also weighs in favor of the latter. – RichardTheKiwi Sep 30 '12 at 03:24
  • Also, about your comment that "reporting by SECONDS is acceptable": it is not merely "acceptable", it is a necessity! I lose precision big time if I don't do it. Here's an example: http://stackoverflow.com/questions/12657680/datediff-with-minute-does-not-return-expected-value – c00000fd Sep 30 '12 at 03:25
  • Yes, that's what the latter example will do behind the scenes, in C# code (not SQL). – c00000fd Sep 30 '12 at 03:27