I have an insert that runs every 2 seconds (20 columns, 15,000 rows of sensor data, coming from outside SQL Server); it completes in about 200 ms. I would like to keep only the most recent 10 minutes of data (~4,500,000 rows) in this table, and move (archive) the earliest 5 minutes to an archive table (which will store 50 days of data, billions of rows). The stored procedure for archiving:
BEGIN TRAN;

-- Oldest timestamp currently in the live table
DECLARE @UTC_min datetime2(2) = (SELECT TOP 1 UTCLogDateTime FROM table1 ORDER BY UTCLogDateTime ASC);
-- Archive cutoff: everything older than the oldest row + 5 minutes
DECLARE @UTC_copy datetime2(2) = DATEADD(MINUTE, 5, @UTC_min);

-- Copy the oldest 5 minutes of rows into the archive table
INSERT INTO archive_table
SELECT *
FROM table1
WHERE UTCLogDateTime < @UTC_copy;

-- Delete the copied rows in batches of 100,000;
-- @@ROWCOUNT drops to 0 once no rows remain below the cutoff
DELETE TOP (100000) FROM table1 WHERE UTCLogDateTime < @UTC_copy;
WHILE @@ROWCOUNT > 0
BEGIN
    DELETE TOP (100000) FROM table1 WHERE UTCLogDateTime < @UTC_copy;
END;

COMMIT;
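For context, this script is deployed as a stored procedure and run on a schedule. A minimal sketch of the wrapper is below; only the body above is the actual code, while the procedure name, the XACT_ABORT safeguard, and the Agent-job scheduling are assumptions:

-- Hypothetical wrapper; only the body above is taken from the real procedure.
CREATE OR ALTER PROCEDURE dbo.usp_ArchiveSensorData
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;  -- abort and roll back the whole transaction on any error
    -- ... archiving body shown above ...
END;
GO

-- Executed periodically (e.g. every 5 minutes via a SQL Server Agent job):
EXEC dbo.usp_ArchiveSensorData;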
I would like to ensure that the insert always runs flawlessly and as fast as possible, without being blocked by this archiving process. Right now, when archiving starts, the insert's runtime grows to 4-5 seconds. There will also be a live query (Power BI) reading this table every 2 seconds.
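For reference, the Power BI query is a plain read over recent rows in table1; a rough sketch of its likely shape follows (the sensor column names and the window length are assumptions, not taken from the actual report):

-- Hypothetical shape of the 2-second live query (columns and window assumed)
SELECT UTCLogDateTime, SensorValue1, SensorValue2
FROM table1
WHERE UTCLogDateTime >= DATEADD(SECOND, -10, SYSUTCDATETIME())
ORDER BY UTCLogDateTime;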
Currently I have a clustered index on the UTCLogDateTime column for both tables.
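To make the physical design concrete, here is a sketch of the assumed table and index definitions; only UTCLogDateTime, the 20-column width, and the clustered indexes are stated above, so every other name and type is a placeholder:

-- Assumed schema sketch; columns other than UTCLogDateTime are placeholders
CREATE TABLE dbo.table1
(
    UTCLogDateTime datetime2(2) NOT NULL,
    SensorValue1   float        NULL
    -- ... 18 more sensor columns, 20 in total ...
);
CREATE CLUSTERED INDEX CX_table1_UTCLogDateTime
    ON dbo.table1 (UTCLogDateTime);

CREATE TABLE dbo.archive_table
(
    UTCLogDateTime datetime2(2) NOT NULL,
    SensorValue1   float        NULL
    -- ... same column list as table1 ...
);
CREATE CLUSTERED INDEX CX_archive_UTCLogDateTime
    ON dbo.archive_table (UTCLogDateTime);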
All of these processes have to run seamlessly, without blocking each other on this table. Do you have any suggestions on how I can achieve this?