
I have the following statement:

UPDATE Table SET Column=Value WHERE TableID IN ({0})

I have a comma-delimited list of TableIDs (used to replace {0}) that can be pretty lengthy. I've found that this is faster than using a SqlDataAdapter; however, I've also noticed that if the command text is too long, the SqlCommand can perform poorly.

Any ideas?

This is inside a CLR trigger. Each SqlCommand execution incurs some overhead. I've determined that the above command is better than SqlDataAdapter.Update(), because Update() updates individual records, causing a separate SQL statement to be executed for each one.

...I ended up doing the following (trigger time went from 0.7 to 0.25 seconds):

UPDATE T SET Column=Value FROM Table T INNER JOIN INSERTED AS I ON (I.TableID=T.TableID)
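
For reference, a minimal sketch of how that single set-based statement might be issued from inside the CLR trigger over the context connection; the trigger attribute values and the Table/Column/Value names are placeholders from the question, not the real schema:

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public partial class Triggers
{
    // Placeholder names throughout; "Table", "Column" and "Value" stand in
    // for the real schema from the question.
    [SqlTrigger(Name = "trg_Table_AfterUpdate", Target = "Table", Event = "FOR UPDATE")]
    public static void TableAfterUpdate()
    {
        // The context connection runs inside the firing statement's
        // transaction and can see the INSERTED pseudo-table.
        using (var conn = new SqlConnection("context connection=true"))
        using (var cmd = new SqlCommand(
            "UPDATE T SET Column = Value " +
            "FROM Table T INNER JOIN INSERTED AS I ON I.TableID = T.TableID",
            conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery(); // one set-based statement instead of one per row
        }
    }
}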

4 Answers


When the list is long, the execution plan is probably using an index scan instead of an index seek. In that case, you are probably better off limiting the list to several items and calling the update command repeatedly until all items in the list have been processed.

George Mastros
  • What is a good length? Is the length of the text more important or the number of IDs? Would I be better off using CommandParameters? –  Mar 18 '13 at 19:50
  • Unfortunately, the answer is... it depends. Scanning a large table is awful for performance, so the correct answer is whatever prevents scans and allows for seeks. It really depends on the selectivity of the column you are filtering on in the where clause. The length doesn't really matter; the number of items is more important. I recommend trial and error to determine what is best for your situation. – George Mastros Mar 18 '13 at 21:01

Maybe split your list of IDs into batches. I assume you have the list of ID numbers in a collection and you're building up the {0} string, so perhaps update 20 or 100 at a time.

Wrap it in a transaction and perform all the updates before calling Commit()
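
A rough sketch of that suggestion against an ordinary open connection; the batch size of 100 and the Table/Column/Value names are placeholders (on a trigger's context connection the explicit transaction wouldn't be needed, since the work already runs inside the firing statement's transaction):

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

static class BatchUpdater
{
    // Sketch only: chunk the IDs, run one UPDATE per chunk, commit once at the end.
    public static void UpdateInBatches(SqlConnection conn, IReadOnlyList<int> ids)
    {
        const int batchSize = 100; // arbitrary; tune by trial and error

        using (SqlTransaction tx = conn.BeginTransaction())
        {
            for (int i = 0; i < ids.Count; i += batchSize)
            {
                string inList = string.Join(",", ids.Skip(i).Take(batchSize));

                using (var cmd = new SqlCommand(
                    "UPDATE Table SET Column = Value WHERE TableID IN (" + inList + ")",
                    conn, tx))
                {
                    cmd.ExecuteNonQuery();
                }
            }

            tx.Commit(); // all batches succeed or fail together
        }
    }
}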

Eoin Campbell

If you can use a stored procedure, you could use a MERGE statement instead.

MERGE INTO Table AS target
USING @TableIDList AS source
ON target.TableID = source.ID
WHEN MATCHED THEN UPDATE SET Column = source.Value;

where @TableIDList is a user-defined table type sent from your code as a table-valued parameter containing the IDs (and possibly the Values) you need.
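
On the C# side, passing the IDs and values as a table-valued parameter might look roughly like this; it assumes a user-defined table type such as dbo.TableIDList with ID and Value columns and a stored procedure dbo.UpdateTableFromList that wraps the MERGE, all of which are illustrative names:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

static class TvpUpdater
{
    // Sketch: build a DataTable whose columns match the table type, then send it
    // as a structured (table-valued) parameter to the procedure containing the MERGE.
    public static void UpdateViaTvp(SqlConnection conn, IEnumerable<int> ids, string value)
    {
        var table = new DataTable();
        table.Columns.Add("ID", typeof(int));
        table.Columns.Add("Value", typeof(string));

        foreach (int id in ids)
            table.Rows.Add(id, value);

        using (var cmd = new SqlCommand("dbo.UpdateTableFromList", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            SqlParameter p = cmd.Parameters.AddWithValue("@TableIDList", table);
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.TableIDList"; // the user-defined table type in the database

            cmd.ExecuteNonQuery();
        }
    }
}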

David S.

If this is a stored procedure, I would use a table-valued parameter. If this is an ad hoc batch, then consider populating a temporary table and joining to it in your batch. Your IN clause is expanded into a series of OR comparisons, which can quite easily negate the use of an index. With a JOIN you may get a better plan from the optimizer.

DECLARE @Value VARCHAR(100) = 'Some value';
CREATE TABLE #Table (TableID INT PRIMARY KEY);
INSERT INTO #Table VALUES (1),(2),(3),(n)...;
MERGE INTO Schema.Table AS target
USING #Table AS source
ON target.TableID = source.TableID
WHEN MATCHED THEN UPDATE SET Column = @Value;