I have a query that returns several thousand records, joined across several tables. In the WHERE clause, a date is checked to be no more than two months in the past. At first, the query had this date boundary set up in a variable, and the WHERE clause used that:
DECLARE @startdate as DATE = DATEADD(MONTH, -2, SYSDATETIME())
select [....]
where dateinquestion >= @startdate
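Stripped down to something self-contained (dbo.SomeTable is a placeholder for the real multi-table join; only dateinquestion is from the actual query), the first variant looks like this; selecting the variable back is just there to show the concrete cutoff value:

DECLARE @startdate as DATE = DATEADD(MONTH, -2, SYSDATETIME())

select @startdate as startdate    -- shows the cutoff, i.e. two months before today

select t.*                        -- real column list omitted
from dbo.SomeTable as t           -- placeholder for the joined tables
where t.dateinquestion >= @startdate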
This runs just fine (it returns the expected result in under 4 seconds), but I wanted to get rid of the variable declaration and move the expression into the WHERE clause itself:
select [....]
where dateinquestion >= DATEADD(MONTH, -2, SYSDATETIME())
This runs for over a minute and eats all the CPU. I killed the query to stop straining the server, so no results came back. Switching to GETDATE() (I don't need the precision of SYSDATETIME() anyway) speeds things up again:
select [....]
where dateinquestion >= DATEADD(MONTH, -2, GETDATE())
The results (and the runtime) are comparable to the first query.
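For completeness, this is roughly how the three variants could be timed against each other. SET STATISTICS TIME ON makes SQL Server report CPU and elapsed time per statement, and COUNT(*) is used here only so the size of the result set doesn't dominate the comparison (dbo.SomeTable is again a placeholder for the real join):

SET STATISTICS TIME ON;

DECLARE @startdate as DATE = DATEADD(MONTH, -2, SYSDATETIME())

-- variant 1: variable
select COUNT(*) from dbo.SomeTable where dateinquestion >= @startdate

-- variant 2: inline SYSDATETIME()
select COUNT(*) from dbo.SomeTable where dateinquestion >= DATEADD(MONTH, -2, SYSDATETIME())

-- variant 3: inline GETDATE()
select COUNT(*) from dbo.SomeTable where dateinquestion >= DATEADD(MONTH, -2, GETDATE())

SET STATISTICS TIME OFF;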
I believe this is because SYSDATETIME() is evaluated on a per-row basis: processing a single row costs several nanoseconds, which is significant at SYSDATETIME()'s precision, so its value keeps changing. GETDATE(), however, having a coarser change threshold, is unaffected (or less affected) and does not change, or need re-evaluating, from row to row.
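If that assumption is right, the function's value should actually differ between rows. A crude way to check, which may well be inconclusive if the optimizer treats these functions specially, would be to capture one value per row and count the distinct results (sys.all_objects is just used as a convenient row source here):

-- capture one value of each function per row into a temp table
select SYSDATETIME() as sysdt, GETDATE() as getdt
into #per_row
from sys.all_objects

-- more than one distinct value would suggest per-row evaluation
select COUNT(DISTINCT sysdt) as distinct_sysdatetime,
       COUNT(DISTINCT getdt) as distinct_getdate
from #per_row

drop table #per_row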
Can you confirm? Is my assumption concerning this behaviour correct?
I've searched for this, but couldn't find anything except this question, which only concerns assigning SYSDATETIME() to a variable, not using it in a WHERE clause: Does SYSDATETIME() cost more than GETDATE()?
There's also this one, but it only uses GETDATE() in the example: TSQL datetimes functions in where clause