
I have a table that acts like a queue (let's call it queue) and has a sequence from 1..N.

Triggers insert into this queue (the triggers run inside transactions).

External machines keep track of the last sequence number they saw and ask the remote database: give me all sequences greater than 10 (for example).

The problem:

Sometimes transactions 1 and 2 begin (the numbers are examples), but transaction 2 commits before transaction 1. If a host queries the queue in between, it only sees transaction 2's rows, records their sequence numbers as its new high-water mark, and transaction 1's sequences are skipped.

How to prevent this?
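To make the race concrete, here is a schematic timeline, assuming a hypothetical table queue(seq bigserial, payload text):

```sql
-- T1: BEGIN; INSERT INTO queue (payload) VALUES ('a');  -- allocates seq = 101
-- T2: BEGIN; INSERT INTO queue (payload) VALUES ('b');  -- allocates seq = 102
-- T2: COMMIT;                                    -- seq 102 becomes visible first
-- Consumer: SELECT * FROM queue WHERE seq > 100; -- sees only 102, remembers 102
-- T1: COMMIT;                                    -- seq 101 becomes visible too late
-- Consumer: SELECT * FROM queue WHERE seq > 102; -- seq 101 is skipped forever
```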

Thiago Sayão
  • This may be difficult in PostgreSQL, because PostgreSQL cannot do the kind of pessimistic locking that would serialize the transactions. Try explicit ACCESS EXCLUSIVE table locking. – SQLpro May 11 '21 at 12:42
  • That would work I think, but it seems like a huge performance hit; it would serialize all transactions, wouldn't it? – Thiago Sayão May 11 '21 at 13:16
  • I am thinking of a double-queue solution where a job locks queue1 and copies to queue2. Then queue2 would not skip records, and the locking would be quicker. – Thiago Sayão May 11 '21 at 14:25
  • You can take inspiration from Microsoft SQL Server, which has queue tables for Service Broker and implements a RECEIVE pseudo-SQL statement that discards rows as it reads them (a simultaneous SELECT and DELETE)... By the way, @Thiago Sayão does the same in a reversed manner but keeps the rows in a purging table... – SQLpro May 11 '21 at 14:42
  • By the way, recently PG has a process that seems to be similar in a new contrib.... A way to explore : https://pgxn.org/dist/pg_message_queue/ – SQLpro May 11 '21 at 14:44
  • I don't think it will work, because the copying phase can be executed twice or more by different users simultaneously... A read is made on table 1, and that read can run concurrently. The only option is an exclusive lock while reading, to prevent any simultaneous read! – SQLpro May 11 '21 at 14:53
  • What sequence numbers do the external machines have? How did they get them? And what is the big picture of what you are trying to achieve? – jjanes May 11 '21 at 16:27

1 Answer


I would proceed like this:

  • add a column state to the table that you change as soon as you process an entry

  • get the next entry with

    SELECT ... FROM queuetab
    WHERE state = 'new'        -- only unprocessed entries
    ORDER BY seq               -- oldest first
    LIMIT 1
    FOR UPDATE SKIP LOCKED;    -- skip rows locked by other consumers
    
  • update state in the row you found and process it

As long as you perform the last two actions in a single transaction, you are never blocked, always get the first available entry, and never skip one.
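Put together, a consumer transaction could look like this sketch (it reuses queuetab, state, and seq from above; the payload column and the literal seq value 42 are illustrative assumptions):

```sql
BEGIN;

-- grab the oldest unprocessed entry, skipping rows
-- that other consumers currently hold locked
SELECT seq, payload FROM queuetab
WHERE state = 'new'
ORDER BY seq
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- mark the entry as taken (42 stands for the seq returned above)
UPDATE queuetab
SET state = 'done'
WHERE seq = 42;

-- process the entry in the application, then:
COMMIT;
```

The row lock taken by FOR UPDATE is held until COMMIT, which is why the SELECT and the UPDATE must share one transaction: no other consumer can grab the row in between.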

Laurenz Albe
  • The problem is that the queue has N consumers, so I can't have a state column. – Thiago Sayão May 11 '21 at 16:35
  • That should work with any number of concurrent consumers without locking anybody. The state is independent of which consumer grabs an entry (but of course you can update the state to reflect the consumer that processed it). – Laurenz Albe May 12 '21 at 04:47