
I'm having the well-known problem with a trigger and @@IDENTITY. I created a new Auditing table and a trigger that inserts auditing rows into it. We use software that relies on @@IDENTITY, and this conflicts with the identity value generated when the trigger inserts a new row. I do not have access to the code that uses @@IDENTITY.

I need ideas on how I can generate the identity value myself. I cannot use a GUID because the order is important to me. If I replace the Id column with a datetime column that defaults to GETDATE(), is it guaranteed to be unique?

Thank you

ilabrada

2 Answers


GETDATE() won't be unique. Its accuracy is such that multiple near-concurrent events can be given the same time.
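
To see why, here is a rough, hypothetical demonstration (the temp table name is made up): DATETIME is only accurate to roughly 3 ms, so a tight loop of inserts produces many identical timestamps.

-- Hypothetical demo: DATETIME resolution is roughly 3.33 ms, so rapid
-- inserts frequently receive identical GETDATE() values.
CREATE TABLE #GetDateTest (ts DATETIME NOT NULL DEFAULT GETDATE());

DECLARE @i INT;
SET @i = 0;
WHILE @i < 1000
BEGIN
  INSERT INTO #GetDateTest DEFAULT VALUES;
  SET @i = @i + 1;
END;

-- Any rows returned here are timestamps shared by more than one insert.
SELECT ts, COUNT(*) AS rows_sharing_timestamp
FROM #GetDateTest
GROUP BY ts
HAVING COUNT(*) > 1;

DROP TABLE #GetDateTest;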

If you are forced to generate your own identity values, so as not to interfere with @@IDENTITY, then you can do the following...

INSERT INTO
  myTable (
    id,
    field1,
    field2
  )
SELECT
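  -- WITH(TABLOCKX) stops concurrent processes reading the same MAX(id); see the EDIT below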
  (SELECT ISNULL(MAX(id), 0) FROM myTable WITH(TABLOCKX)) + 1,
  @p1,
  @p2

This is implicitly within its own transaction and will guarantee unique values.


EDIT

My original comment was going to be that this will NOT work when inserting multiple records, and that instead you would need to iterate through the source records individually, inserting them one at a time.
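
For completeness, a sketch of that row-by-row iteration as it might look inside the trigger body (the column names and VARCHAR types are assumptions carried over from the example above; the set-based version below is generally preferable):

-- Hypothetical row-by-row version; myTable, field1/field2 and their
-- types are assumptions.
DECLARE @field1 VARCHAR(50), @field2 VARCHAR(50);

DECLARE source_rows CURSOR LOCAL FAST_FORWARD FOR
  SELECT field1, field2 FROM inserted;

OPEN source_rows;
FETCH NEXT FROM source_rows INTO @field1, @field2;

WHILE @@FETCH_STATUS = 0
BEGIN
  -- Each insert recalculates MAX(id); the TABLOCKX lock taken here is held
  -- until the outer transaction ends, so the generated ids stay unique.
  INSERT INTO myTable (id, field1, field2)
  SELECT
    (SELECT ISNULL(MAX(id), 0) FROM myTable WITH(TABLOCKX)) + 1,
    @field1,
    @field2;

  FETCH NEXT FROM source_rows INTO @field1, @field2;
END;

CLOSE source_rows;
DEALLOCATE source_rows;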

The following example, however, may be suitable for processing sets of data...

WITH
  sorted_data AS
(
  SELECT
    ROW_NUMBER() OVER (ORDER BY field1) AS set_id,   -- DO NOT include a PARTITION here
    *
  FROM
    inserted
)
INSERT INTO
  myTable (
    id,
    field1,
    field2
  )
SELECT
  (SELECT ISNULL(MAX(id), 0) FROM myTable WITH(TABLOCKX)) + set_id,
  field1,
  field2
FROM
  sorted_data

This will both generate unique IDs for each row, and be safe against concurrent processes using the same code.

EDIT

I've added WITH(TABLOCKX) to prevent other processes from reading the table while it's being updated. This stops concurrent processes from establishing the same MAX(id) and then trying to insert duplicate ids in new records.

(The single-query structure already prevented records from being altered after they had been read, but it did not prevent other processes from reading the table 'between' the MAX(id) being read and all the new records being inserted.)
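
Put together, a rough sketch of how this might sit inside the audit trigger from the question (the source table name, the trigger name, and the audit table's columns are all invented for illustration):

-- Hypothetical trigger wiring for the set-based statement above;
-- mySourceTable and the column list are assumptions.
CREATE TRIGGER trg_mySourceTable_Audit
ON mySourceTable
AFTER INSERT
AS
BEGIN
  SET NOCOUNT ON;

  WITH sorted_data AS
  (
    SELECT
      ROW_NUMBER() OVER (ORDER BY field1) AS set_id,   -- no PARTITION BY
      field1,
      field2
    FROM inserted
  )
  INSERT INTO myTable (id, field1, field2)
  SELECT
    (SELECT ISNULL(MAX(id), 0) FROM myTable WITH(TABLOCKX)) + set_id,
    field1,
    field2
  FROM sorted_data;
END;

Because the audit table's id is generated manually rather than by an IDENTITY column, the trigger's insert no longer disturbs the @@IDENTITY value seen by the calling software.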

MatBailie
  • Is your code concurrent safe? Even if the trigger is inserting multiple rows? – ilabrada Oct 31 '11 at 16:04
  • Yes; `This is implicitly within its own transaction and will guarantee unique values.` – MatBailie Oct 31 '11 at 16:06
  • Even if the trigger is inserting multiple rows? – ilabrada Oct 31 '11 at 16:07
  • I'm curious because I'm not entirely sure how the implicit transaction works, but what would prevent overlapping sets of IDs from being generated if two insert statements are executed at the same time since there's no lock on myTable when the MAX(ID) is evaluated? – Michael Fredrickson Oct 31 '11 at 16:37
  • If the `MAX(ID)` was acquired in one statement, then used in another statement, a race condition is created unless transactions and table/row locking are used. When the `MAX(ID)` is acquired within a sub-query, however, the whole query is a single statement and so is implicitly already in its own transaction. – MatBailie Oct 31 '11 at 16:56
  • In writing that comment something occurred to me... The ACID nature of a single statement will guarantee something doesn't change the data being read to generate the MAX(id), but it doesn't guarantee that something else doesn't read from that table between finding MAX(id) and finishing inserting the new records. This means that a locking hint is still needed to prevent READs from the table. I've updated the answer and am eating my hat. – MatBailie Oct 31 '11 at 17:09

I know you likely will not be able to change things, but the problem is that the software is using @@IDENTITY, which is not limited to the current scope: an insert into ANY table, including one made by a trigger, will change @@IDENTITY. The software should be using SCOPE_IDENTITY() instead.
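
To illustrate the difference (all object names here are invented), a small sketch where a trigger on one identity table inserts into a second identity table:

-- Hypothetical tables: Orders has an audit trigger inserting into OrderAudit,
-- and both have IDENTITY columns.
CREATE TABLE Orders     (id INT IDENTITY(1,1) PRIMARY KEY, amount MONEY);
CREATE TABLE OrderAudit (id INT IDENTITY(1,1) PRIMARY KEY, order_id INT);
GO

CREATE TRIGGER trg_Orders_Audit
ON Orders
AFTER INSERT
AS
BEGIN
  INSERT INTO OrderAudit (order_id) SELECT id FROM inserted;
END;
GO

INSERT INTO Orders (amount) VALUES (9.99);

SELECT
  @@IDENTITY       AS last_identity_any_scope,      -- value generated by the trigger's insert into OrderAudit
  SCOPE_IDENTITY() AS last_identity_current_scope;  -- value generated by the insert into Orders

The software in the question reads the first of these, which is why the new audit table breaks it; SCOPE_IDENTITY() would return the value it actually wants.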

datagod