
I have an insert that runs every 2 seconds (20 columns, 15,000 rows, coming from outside SQL Server [sensor data]); it completes in about 200 ms. I would like to keep only 10 minutes of data (~4,500,000 rows) in this table and then move (archive) the earliest 5 minutes to another archive table (which will store 50 days, billions of rows). The stored procedure for archiving:

BEGIN TRAN

-- oldest timestamp currently in the live table
DECLARE @UTC_min datetime2(2) = (SELECT TOP 1 UTCLogDateTime FROM table1 ORDER BY UTCLogDateTime ASC)
-- archive everything older than the oldest 5 minutes
DECLARE @UTC_copy datetime2(2) = DATEADD(MINUTE, 5, @UTC_min)

-- copy the 5-minute slice into the archive
INSERT INTO archive_table
SELECT *
FROM table1
WHERE UTCLogDateTime < @UTC_copy

-- then delete the copied rows in 100,000-row chunks
DELETE TOP (100000) FROM table1 WHERE UTCLogDateTime < @UTC_copy
WHILE @@ROWCOUNT > 0
BEGIN
    DELETE TOP (100000) FROM table1 WHERE UTCLogDateTime < @UTC_copy
END

COMMIT

I would like to ensure that the insert runs flawlessly and as fast as possible, without being blocked by this archiving process. When archiving starts, the runtime of the insert grows to 4-5 seconds. I will also have a live Power BI query that reads this table every 2 seconds.

Currently I have a clustered index on the UTCLogDateTime column for both tables.

All of these processes have to run seamlessly, without blocking each other. Do you have any suggestions on how I can achieve this?

MRK_S
  • If you insert, update or delete *more than 5000* rows in a single transaction, SQL Server will give up on row-level locking and will do a **lock escalation** to exclusively lock the entire table(s) involved. So to avoid this, update in batches clearly smaller than 5000 rows at once (see the sketch after these comments). – marc_s Oct 08 '19 at 15:38
  • can you share the code that is inserting the sensor data? Why not first insert your sensor data into "table1" then insert the same data into the archive? This should reduce pages touched. You would then do your delete the same as you are but only on "table1". Just leave the archive growing. marc_s is right though. You can go that route as well. – jamie Oct 08 '19 at 16:58
  • Why not archive the oldest 15,000 rows every two seconds right after the sensor adds the newest 15,000 rows. That should allow the archival to run quickly before the next insertion. – Robert Sievers Oct 08 '19 at 20:44
  • Tried archiving with <5000 rows every 2 sec (without a transaction, via Python); it is super fast the first time (250 ms), but gradually gets slower, after ~50 runs it is 3 sec and growing. Unfortunately I can't modify the code of the sensor data insert. – MRK_S Oct 10 '19 at 15:05
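
A minimal sketch of the batching advice from the comments, assuming the table1, archive_table and UTCLogDateTime names from the question. The 4,500-row batch size is an illustrative value just under the 5,000-row escalation threshold, and DELETE ... OUTPUT INTO is used here so each batch is copied and removed in a single statement (this assumes archive_table has the same column layout and no triggers or foreign keys that would block OUTPUT INTO):

DECLARE @UTC_min  datetime2(2) = (SELECT TOP 1 UTCLogDateTime FROM table1 ORDER BY UTCLogDateTime ASC)
DECLARE @UTC_copy datetime2(2) = DATEADD(MINUTE, 5, @UTC_min)
DECLARE @moved int = 1

WHILE @moved > 0
BEGIN
    -- one short transaction per batch, so locks are released between batches
    BEGIN TRAN

    -- copy one batch into the archive and remove it from the live table in a single statement
    DELETE TOP (4500) FROM table1
    OUTPUT deleted.* INTO archive_table
    WHERE UTCLogDateTime < @UTC_copy

    SET @moved = @@ROWCOUNT

    COMMIT
END

Keeping each transaction well under 5,000 rows should avoid the table-level lock escalation marc_s describes, so the 2-second insert and the Power BI reader only compete for row/page locks on a small slice of the table at any one time.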

1 Answer


If you are using SQL Server 2016 or above, you can use TRUNCATE TABLE ... WITH (PARTITIONS (...)). This takes far fewer locks than DELETE. Worth trying partitioning the table in a TEST environment first.
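
As a rough illustration only (the partition function name, boundary values and partition numbers below are made-up placeholders, not taken from the question): table1 would be partitioned on UTCLogDateTime, and the oldest slices could then be emptied with a metadata-only truncate instead of row-by-row deletes:

-- hypothetical partitioning of table1 by time slice (boundary values are examples only)
CREATE PARTITION FUNCTION pf_UTCLog (datetime2(2))
AS RANGE RIGHT FOR VALUES ('2019-10-08 10:00', '2019-10-08 10:05', '2019-10-08 10:10')

CREATE PARTITION SCHEME ps_UTCLog
AS PARTITION pf_UTCLog ALL TO ([PRIMARY])

-- with table1 (re)built on ps_UTCLog(UTCLogDateTime), old slices can be dropped cheaply (SQL Server 2016+)
TRUNCATE TABLE table1 WITH (PARTITIONS (1 TO 2))

Note that TRUNCATE only removes the rows; they would still have to be copied (or switched out with ALTER TABLE ... SWITCH PARTITION) into the archive table first.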

VB_isYoung
  • In my understanding, partitions need to be specified in advance, so I can't use them on a dynamic table. – MRK_S Oct 10 '19 at 15:06