I have a SQL Server instance (hosted via Azure SQL) where a certain table currently gets lots of writes (around 10/sec on average), but no reads.
I now have a requirement to read some of this data, where the read involves performing aggregate functions (SUM, COUNT, MAX, etc.) on a bunch of these rows (0-100ish) at a time.
Note: these read operations will mainly be invoked on a schedule, e.g. daily reports.
I'm worried about lock contention here, so I'm looking for advice on how to split this table out and reduce deadlocks, slow queries, etc.
Off the top of my head, this is what comes to mind:
- Perform nightly replication (snapshot) to another DB, where the replicated data is aggregated in the way I need for the reads.
- Run a nightly SQL job on the DB, aggregating the data into another table.
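For option 2, the nightly job could be a sketch like the following. The table and column names (dbo.PostEvents, dbo.PostDailyStats) are assumptions based on my schema below, and the event-type mapping (1 = detail view, 2 = search view) is inferred from the sample data:

```sql
-- Sketch of option 2: nightly job rolling up yesterday's events into a summary table.
-- Names are assumptions; the date window assumes UTC timestamps.
INSERT INTO dbo.PostDailyStats (PostId, [Date], DetailViews, SearchViews)
SELECT
    PostId,
    CAST([Timestamp] AS date),
    SUM(CASE WHEN EventType = 1 THEN 1 ELSE 0 END),  -- detail views
    SUM(CASE WHEN EventType = 2 THEN 1 ELSE 0 END)   -- search views
FROM dbo.PostEvents
WHERE [Timestamp] >= DATEADD(day, -1, CAST(SYSUTCDATETIME() AS date))
  AND [Timestamp] <  CAST(SYSUTCDATETIME() AS date)
GROUP BY PostId, CAST([Timestamp] AS date);
```

Since the job only scans yesterday's rows, it shouldn't hold locks on the range currently being written to, especially if the table is clustered on the timestamp.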
I'm thinking option 1 makes the most sense. Is that the right approach?
If it matters, this is the data structure at the moment, as written to the DB:
PostId  Timestamp            EventType
--------------------------------------
1       2017-01-01 10:00:00  1
1       2017-01-01 10:30:00  1
1       2017-01-01 11:30:00  2
2       2017-01-01 11:30:00  1
Basically, a tracking/event table.
Here's what I want to aggregate to:
PostId  Date        DetailViews  SearchViews
--------------------------------------------
1       2017-01-01  2            1
2       2017-01-01  1            0
So I'm just trying to aggregate each individual tracking record into daily stats. Pretty simple.
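The aggregation itself is just a GROUP BY over the post and the date part of the timestamp. A sketch (table name dbo.PostEvents is an assumption, and 1 = detail view, 2 = search view is inferred from the sample rows above):

```sql
-- Sketch: roll the raw event rows up into daily per-post stats.
SELECT
    PostId,
    CAST([Timestamp] AS date) AS [Date],
    SUM(CASE WHEN EventType = 1 THEN 1 ELSE 0 END) AS DetailViews,
    SUM(CASE WHEN EventType = 2 THEN 1 ELSE 0 END) AS SearchViews
FROM dbo.PostEvents
GROUP BY PostId, CAST([Timestamp] AS date);
```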
Can someone let me know the best approach here, to get the most acceptable read performance? Am I overthinking this, and would a simple indexed view do? Thanks in advance.
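For reference, the indexed-view version I had in mind would look roughly like this (names are assumptions; SQL Server requires SCHEMABINDING and a COUNT_BIG(*) column when the view uses GROUP BY):

```sql
-- Sketch of the indexed-view alternative (table name dbo.PostEvents is an assumption).
CREATE VIEW dbo.vPostDailyStats
WITH SCHEMABINDING
AS
SELECT
    PostId,
    CAST([Timestamp] AS date) AS [Date],
    SUM(CASE WHEN EventType = 1 THEN 1 ELSE 0 END) AS DetailViews,
    SUM(CASE WHEN EventType = 2 THEN 1 ELSE 0 END) AS SearchViews,
    COUNT_BIG(*) AS RowCnt   -- required for an indexed view with GROUP BY
FROM dbo.PostEvents
GROUP BY PostId, CAST([Timestamp] AS date);
GO
CREATE UNIQUE CLUSTERED INDEX IX_vPostDailyStats
    ON dbo.vPostDailyStats (PostId, [Date]);
```

The trade-off I'm aware of: an indexed view is maintained synchronously on every insert, so at ~10 writes/sec it adds some overhead to the hot table, in exchange for the reads being pre-aggregated.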
EDIT
Here is the output of sys.dm_db_wait_stats. Can anyone spot anything?