1

Is there a way to block parallel inserts into a table as a whole, and not just at the row-lock level?

The insert itself is very fast (millisecond level), but I want some sort of guarantee that only one row can be inserted for any given millisecond.

By design, the table already ensures the data can never be inconsistent (see load_id_by_date):

CREATE TABLE my_table
(
    load_id          uniqueidentifier NOT NULL,
    load_date        datetime NOT NULL DEFAULT (GETDATE()),
    load_id_by_date  bigint NOT NULL DEFAULT (CAST(GETDATE() as decimal(19,9)) * 1000000000) UNIQUE,
    is_processed     bit DEFAULT (0),
    PRIMARY KEY (load_id_by_date)
)

But I was wondering whether there is a way to stop parallel inserts from happening across multi-threaded calls. A simple (single-threaded) simulation below reproduces the issue:

-- TO TEST:
WHILE (1=1)
BEGIN
    INSERT INTO my_table (load_id)
    SELECT NEWID()
END

which eventually fails with

Msg 2627, Level 14, State 1, Line 6
Violation of UNIQUE KEY constraint 'UQ__config_l__A307163DB6D0D819'. Cannot insert duplicate key in object 'my_table.config_load_id_toprocess'. The duplicate key value is (43507564143441).

But now I am thinking that relying on timestamp uniqueness might be the wrong approach. The actual calls will not be that fast (at most one every 2 seconds), but they are multi-threaded.
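One of the comments below suggests `with (tablockx)`. As a sketch of an alternative (not from the original post; the lock name `my_table_insert` is arbitrary), an application lock via `sp_getapplock` serializes the inserting sessions without taking a table lock, so readers are unaffected:

```sql
-- Sketch: serialize inserts with an app lock so only one session at a time
-- evaluates the GETDATE()-based DEFAULT. Requires an open transaction
-- when @LockOwner = 'Transaction'.
BEGIN TRANSACTION;

EXEC sp_getapplock @Resource    = 'my_table_insert',  -- arbitrary lock name
                   @LockMode    = 'Exclusive',
                   @LockOwner   = 'Transaction',
                   @LockTimeout = 5000;               -- ms to wait before giving up

INSERT INTO my_table (load_id)
VALUES (NEWID());

COMMIT TRANSACTION;  -- committing releases the app lock
```

Note this only blocks callers that also take the same app lock; it does not stop a session that inserts directly without it.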

Earl
    `with (tablockx)` – Dale K Feb 13 '19 at 01:36
  • That will still fail if a single INSERT inserts two rows. See this answer for a possible approach: https://stackoverflow.com/questions/51459805/generate-a-unique-time-based-id-on-a-table-in-sql-server/51460731#51460731 – David Browne - Microsoft Feb 13 '19 at 01:57
  • Actually, there is a guarantee at least on the insert side that it will only insert one row only per call. My only concern (however improbable) that parallel calls occur that result in same values from GETDATE() occur. so if parallel calls occur, i would like there would be a blocking delay of the smallest millisecond possible. I have updated my question to reflect the error that doesnt even need a parallel calls. – Earl Feb 13 '19 at 02:31
    "but I want to have some sort of guarantee that only 1 row can be inserted in a particular millisecond entry." - sounds like you have an XY problem.... – Mitch Wheat Feb 13 '19 at 02:35
  • First time I have heard of the XY problem. Interesting concept; love it. I'm rethinking the approach now, and it might be just plain wrong. OK, the real problem: the primary key is a bigint in an external routine I have no control over. My PK is a uniqueidentifier, but I still need to pass a unique bigint. I am using GETDATE() converted to decimal(19,9), converted to bigint, as I can guarantee it to be unique at the millisecond level. – Earl Feb 13 '19 at 02:39

1 Answer

1

Thanks @Mitch Wheat for pointing out the XY problem. I have narrowed down what I needed to do.

The load_id_by_int column (formerly load_id_by_date) is now generated from a bigint representation of NEWID(). The chance of a collision is now acceptable (at least in my opinion). Thanks to everyone who commented.

CREATE TABLE my_table
(
    load_id          uniqueidentifier NOT NULL,
    load_date        datetime NOT NULL DEFAULT (GETDATE()),
    load_id_by_int   bigint NOT NULL DEFAULT (ABS(convert(bigint, convert (varbinary(8), NEWID(), 1)))),
    is_processed     bit DEFAULT (0),
    PRIMARY KEY (load_id_by_int)
)

The concept was derived from Convert from UniqueIdentifier to BigInt and Back?
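For reference, a sketch of that conversion in isolation (the variable name `@id` is mine). Converting a uniqueidentifier to varbinary(8) truncates it to 8 of its 16 bytes, and the bigint conversion reinterprets those bytes as an integer. Two caveats worth hedging: ABS() would raise an error in the astronomically unlikely case the bigint comes out as exactly -9223372036854775808, and with 63-bit effectively random values a collision becomes likely only after roughly three billion rows (birthday bound).

```sql
-- Sketch: the same expression as the DEFAULT above, run standalone.
DECLARE @id uniqueidentifier = NEWID();

-- uniqueidentifier -> varbinary(8) keeps 8 of the 16 GUID bytes;
-- bigint reinterprets those bytes, and ABS() folds the sign away.
SELECT ABS(convert(bigint, convert(varbinary(8), @id, 1))) AS load_id_by_int;
```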

Earl