
I have a stored procedure that accepts a table-valued parameter. The underlying table type has a single column and is used to pass in a list of primary keys to query against.

The table type definition is:

CREATE TYPE PrimaryKeyList AS TABLE
(
    PrimaryKey INT NOT NULL
)

The stored procedure definition is something like:

CREATE PROCEDURE MyStoredProc
    @idList AS PrimaryKeyList READONLY
AS
BEGIN
    SELECT id, col1, col2
    FROM myTable mt
    JOIN @idList ids ON ids.PrimaryKey = mt.id
END

The table myTable has several million rows and the stored procedure will encounter PrimaryKeyLists of varying lengths (between 1 and about 10,000).

How do I optimize the table variable structure and the stored procedure?

Would it make sense to add a clustered index to PrimaryKeyList? It will be reused elsewhere for the same purpose.

The PrimaryKeyList parameter is built in C# as a System.Data.DataTable object and the stored procedure is invoked via ADO.NET.

How would the performance differ between an indexed and non-indexed PrimaryKeyList table variable?

Is there a better approach?

– Andrew
Comment: A `TABLE` variable can morph into a temporary table. You could create an index this way: `DECLARE @x TABLE (id int, INDEX IX_id CLUSTERED(id));` – JohnyL Jul 26 '18 at 19:37
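
Applied to the question's column name, the comment's suggestion might look like this (a sketch only, assuming SQL Server 2014 or later, where inline index definitions on table variables are supported):

-- Table variable with an inline clustered index (SQL Server 2014+ syntax),
-- using the same column name as the PrimaryKeyList type from the question
DECLARE @idList TABLE
(
    PrimaryKey INT NOT NULL,
    INDEX IX_PrimaryKey CLUSTERED (PrimaryKey)
);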

1 Answer


Adding a primary key to the table type will improve performance dramatically. There is no need to build an index separately: declare the primary key inline in the type definition and it will be created as a clustered index by default (the NOT NULL can be dropped, since PRIMARY KEY already implies it). Naturally, myTable.id should also have a clustered index, ideally defined as the table's primary key.

CREATE TYPE PrimaryKeyList AS TABLE(
    PrimaryKey INT PRIMARY KEY
)
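
For illustration, a minimal sketch of declaring, populating, and passing the redefined type to the procedure from the question (the key values are placeholders):

-- Declare a variable of the user-defined table type
DECLARE @ids PrimaryKeyList;

-- Populate it with the keys to look up (placeholder values)
INSERT INTO @ids (PrimaryKey)
VALUES (1), (2), (3);

-- Pass it to the stored procedure as the READONLY table-valued parameter
EXEC MyStoredProc @idList = @ids;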
– revoua