
I'm encountering an issue with a function that needs serialized access under certain circumstances. Advisory locks seemed like a good fit. However, under fairly heavy load, I'm finding that access isn't being serialized and the function is running concurrently.

The function is intended to provide "inventory control" for an event: it limits concurrent ticket purchases so that a given event is not oversold. These are the only advisory locks used within the application/database.

I'm finding that occasionally an event ends up with more tickets than the eventTicketMax value, which shouldn't be possible given the advisory locks. When testing at low volume (or with manually introduced delays, such as a pg_sleep after acquiring the lock), everything works as expected.

CREATE OR REPLACE FUNCTION createTicket(
        userId int,
        eventId int,
        eventTicketMax int
    ) RETURNS integer AS $$
        DECLARE insertedId int;
        DECLARE numTickets int;
    BEGIN
            -- first get the event lock
            PERFORM pg_advisory_lock(eventId);

            -- make sure we aren't over ticket max
            numTickets := (SELECT count(*) FROM api_ticket
                WHERE event_id = eventId and status <> 'x');

            IF numTickets >= eventTicketMax THEN
                -- raise an exception if this puts us over the max
                -- and bail
                PERFORM pg_advisory_unlock(eventId);
                RAISE EXCEPTION 'Maximum entries number for this event has been reached.';
            END IF;

            -- create the ticket
            INSERT INTO api_ticket (
                user_id,
                event_id,
                created_ts
            )
            VALUES (
                userId,
                eventId,
                now()
            )
            RETURNING id INTO insertedId;

            -- update the ticket count
            UPDATE api_event SET ticket_count = numTickets + 1 WHERE id = eventId;

            -- release the event lock
            PERFORM pg_advisory_unlock(eventId);

        RETURN insertedId;
    END;
    $$ LANGUAGE plpgsql;

Here's my environment setup:

  • Django 1.8.1 (django.db.backends.postgresql_psycopg2 w/ CONN_MAX_AGE 300)
  • PGBouncer 1.7.2 (session mode)
  • Postgres 9.3.10 on Amazon RDS

Additional variables which I tried tuning:

  • Setting CONN_MAX_AGE to 0
  • Removing PgBouncer and connecting directly to the DB

In my testing, I have noticed that in cases where an event was oversold, the tickets were purchased from different webservers, so I don't think a shared session is to blame, but I can't say for sure.

Brock Haywood

1 Answer


As soon as PERFORM pg_advisory_unlock(eventId) is executed, another session can grab that lock; but since session #1's INSERT is not yet committed, it is not counted in session #2's COUNT(*), resulting in the over-booking.
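The race can be seen in an interleaving like this (the event id and counts are made up for illustration):

```sql
-- Session 1                          Session 2
-- pg_advisory_lock(42)
-- count(*) = 99  (max = 100, OK)
-- INSERT ticket  -- not yet committed
-- pg_advisory_unlock(42)
--                                    pg_advisory_lock(42)
--                                    count(*) = 99  -- session 1's row is
--                                                   -- still invisible here
--                                    INSERT ticket
-- COMMIT                             COMMIT
-- event now has 101 tickets: oversold
```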

If you keep the advisory-lock strategy, you must use transaction-level advisory locks (pg_advisory_xact_lock) rather than session-level ones. Transaction-level locks are released automatically at COMMIT (or ROLLBACK) time, so the lock is not given up until the INSERT is visible to other sessions.
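Applied to the function from the question, the fix is essentially a one-call change (a sketch, reusing the question's table and column names; note that no explicit unlock is needed, since the lock lifetime is tied to the transaction):

```sql
CREATE OR REPLACE FUNCTION createTicket(
        userId int,
        eventId int,
        eventTicketMax int
    ) RETURNS integer AS $$
        DECLARE insertedId int;
        DECLARE numTickets int;
    BEGIN
            -- transaction-level lock: held until COMMIT/ROLLBACK,
            -- so the count below stays protected until the INSERT commits
            PERFORM pg_advisory_xact_lock(eventId);

            numTickets := (SELECT count(*) FROM api_ticket
                WHERE event_id = eventId AND status <> 'x');

            IF numTickets >= eventTicketMax THEN
                -- no explicit unlock: the lock is released automatically
                -- when the exception aborts the transaction
                RAISE EXCEPTION 'Maximum entries number for this event has been reached.';
            END IF;

            INSERT INTO api_ticket (user_id, event_id, created_ts)
            VALUES (userId, eventId, now())
            RETURNING id INTO insertedId;

            UPDATE api_event SET ticket_count = numTickets + 1 WHERE id = eventId;

        RETURN insertedId;
    END;
    $$ LANGUAGE plpgsql;
```

This also fixes a secondary problem in the original: if the INSERT or UPDATE raised an error, the session-level lock was never released and would leak for the life of the (possibly pooled) connection.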

Daniel Vérité
  • Yes, this is exactly the problem. Switching to transaction level locks has resolved the issue. Thank you! – Brock Haywood May 05 '16 at 21:15
  • Am I missing something? If the isolation level is "repeatable read" or "serializable", isn't session #2 going to wait inside the transaction and keep using the old (stale) data (= repeatable read) after the lock is acquired? – Mikko Rantalainen Mar 30 '20 at 09:56
  • 1
    @MikkoRantalainen: you're right. I've removed the reference to "Repeatable Read" which on reread, appears to be incorrect (the default isolation level is "Read Committed" in PG) and uncalled-for in this question. – Daniel Vérité Mar 30 '20 at 11:31