I have a rather troublesome table that looks like this:
EventTimeLog
Id (bigint) | Time (datetime) | LogId (FK to the Log table's Id column)
This table records the times a Log event occurred. That way each unique log event is only written once, and the times it occurred can be derived from this table.
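For reference, this is roughly what the two tables look like (the Log column types are approximate; the column names come from the query further down):

CREATE TABLE [Log] (
    Id bigint IDENTITY PRIMARY KEY,
    UserId bigint,            -- assumed type
    [User] nvarchar(100),     -- assumed type
    Ip varchar(45),           -- assumed type
    AccountId bigint          -- assumed type
);

CREATE TABLE EventTimeLog (
    Id bigint IDENTITY PRIMARY KEY,
    Time datetime NOT NULL,
    LogId bigint NOT NULL REFERENCES [Log] (Id)
);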
The problem is the times are very granular, yet they can still contain duplicate values. For example (Time | LogId):
2015-08-03 23:54:58.000 | 1983
2015-08-03 23:54:58.000 | 1934
2015-08-03 23:54:56.000 | 1647
After some time it becomes very difficult to query. Usually around 500k rows or so it starts to chug, even if I put an index on LogId and Time. By the time I hit the 1 million range and up, queries slow to a crawl...
I do need these specific times, so aggregating by the start of the hour or day is not an option. I also doubt adding a Count column for duplicate time values would help much, since the times are so granular. The index would still have to jump through all those specific times, which is very slow.
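The index I mentioned looks roughly like this (the exact name doesn't matter; this is the shape of it):

CREATE NONCLUSTERED INDEX IX_EventTimeLog_LogId_Time
ON EventTimeLog (LogId, Time);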
I'm unsure how to make this table more scalable. Maybe break it out into monthly tables?
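By monthly tables I mean either literal per-month tables like EventTimeLog_2015_08, or perhaps SQL Server's built-in table partitioning, something along these lines (boundary dates and names are just placeholders to show the idea):

CREATE PARTITION FUNCTION pfEventTimeByMonth (datetime)
AS RANGE RIGHT FOR VALUES ('2015-07-01', '2015-08-01', '2015-09-01');

CREATE PARTITION SCHEME psEventTimeByMonth
AS PARTITION pfEventTimeByMonth ALL TO ([PRIMARY]);

-- EventTimeLog (or at least its clustered index) would then live on the scheme:
-- CREATE CLUSTERED INDEX CIX_EventTimeLog_Time
-- ON EventTimeLog (Time) ON psEventTimeByMonth (Time);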
As requested, here is the query that starts chugging:
SELECT b.[User], b.Ip, b.AccountId, a.Time
FROM EventTimeLog a
INNER JOIN [Log] b ON a.LogId = b.Id
WHERE b.UserId = '<param>'
  AND a.Time >= '<param>'
  AND a.Time <= '<param>'
If the time range is more than 2 days it chugs like crazy. And yes, I have an index on Log for UserId.
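That Log index is on UserId, roughly:

-- approximate definition of the existing index
CREATE NONCLUSTERED INDEX IX_Log_UserId
ON [Log] (UserId);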