Solution 1: Use better indexing
Another option is to create an index on the id column and poll the table from your code, always keeping track of the last id you have processed (persisted somewhere durable, so that you can pick up where you left off even if your application fails).
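A minimal sketch of that polling step, assuming an ever-increasing id column with an index on it and a hypothetical dbo.PollerState table that holds the last processed id between runs (all names here are placeholders):

DECLARE @LastProcessedId bigint, @NewLastId bigint;

-- load the last processed id from persistent storage
SELECT @LastProcessedId = LastId FROM dbo.PollerState;

-- fetch only the rows inserted since the last poll;
-- the index on id turns this into a cheap range seek
SELECT id, user_id, user_name
INTO #Batch
FROM dbo.MyTableName
WHERE id > @LastProcessedId;

-- ... hand #Batch to the application and process it ...

-- remember the highest id that was actually fetched
SELECT @NewLastId = MAX(id) FROM #Batch;
IF @NewLastId IS NOT NULL
    UPDATE dbo.PollerState SET LastId = @NewLastId;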
Solution 2: Change Data Capture
This IS the safest way to go. The change tables are populated from the transaction log, with minimal to no impact on performance or on the insert transaction itself.
You can also configure automatic cleanup so that the change table does not get bloated. Your throughput is not that high, and CDC will handle it easily with sensible cleanup intervals.
The downside is that with a cleanup/retention interval that is too short, you can lose changes before you have processed them.
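A hedged sketch of what enabling CDC looks like; dbo.MyTableName is a placeholder, and dbo_MyTableName is simply the default capture-instance name SQL Server generates for it:

-- once per database (requires sysadmin)
EXEC sys.sp_cdc_enable_db;

-- enable capture for the source table
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTableName',
    @role_name     = NULL;

-- read the changes recorded so far; in practice you would persist
-- the last LSN you processed and use it as @from_lsn on the next poll
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_MyTableName');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_MyTableName(@from_lsn, @to_lsn, N'all');

The retention used by the automatic cleanup job can be adjusted with sys.sp_cdc_change_job (@job_type = 'cleanup', @retention in minutes).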
Solution 3: Triggers
By far the best option, in my opinion. Create a duplicate table (with the same columns), e.g. MyTableName_LiveData.
Create a trigger on the original table that inserts the same row into the new table (see: SQL Server trigger insert values from new row into another table).
CREATE TRIGGER yourNewTrigger ON yourSourcetable
FOR INSERT
AS
    -- "inserted" holds the rows of the current INSERT statement;
    -- copy the same columns into the duplicate table
    INSERT INTO yourDestinationTable
        (col1, col2, col3, user_id, user_name)
    SELECT
        col1, col2, col3, user_id, user_name
    FROM inserted;
GO
When you have read a row and processed it, delete it from the duplicate table so that the table itself does not grow out of control.
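One way to do the "read and then delete" step in a single statement is a DELETE with an OUTPUT clause; the batch size and table name below are only illustrative:

-- take up to 100 rows and remove them in one atomic statement;
-- the OUTPUT clause returns the deleted rows to the caller
DELETE TOP (100)
FROM dbo.MyTableName_LiveData
OUTPUT deleted.*;

Note that TOP without an ORDER BY removes an arbitrary set of rows, which is usually fine for queue-style consumption; if strict ordering matters, delete by id ranges instead.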
The downsides are:
- If the trigger fails, the insert into the original table fails too
- Any schema change must be applied to both tables
- The inserts will take slightly longer (negligible, but worth mentioning)