I want to make a persistent job queue in PostgreSQL, so that multiple workers can each select one job from the queue (using select for update skip locked), process it and then delete it from the queue. I have a table:
create table queue (
    id serial primary key,
    some_job_param1 text not null,
    some_job_param2 text not null
);
Now if there are two jobs then it works fine: worker1 starts a transaction and selects the first job:
begin;
select * from queue for update skip locked limit 1;
then starts the processing. worker2 does the same thing and selects the second job with the same query.
After worker1 does its job, it deletes it from the queue and commits the transaction:
delete from queue where id=$1;
commit;
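Putting worker1's steps together, the whole cycle for one job looks like this ($1 is bound to the id returned by the select):

begin;
select * from queue for update skip locked limit 1;
-- ... process the job outside the database ...
delete from queue where id = $1;
commit;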
Then worker1 is ready for a new job, so it does the same thing: begins a new transaction and selects a job that isn't locked. But the problem is that there are no more jobs, so the query returns zero rows.
Ideally, the query would block until there is a new job and then return a result. Is that somehow possible? Or am I going in the wrong direction?
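To make the desired behaviour concrete, what I'm hoping for is something like this (pseudocode; blocking_select is a hypothetical call that waits until an unlocked row exists instead of returning zero rows):

job = tx.blocking_select("select * from queue for update skip locked limit 1")
// returns only once a job is available, instead of returning an empty result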
EDIT:
the workers are external processes, so if a worker dies, the session dies and so does its transaction. The selected job is then no longer locked and is ready for another worker. The pseudocode would look like this:
while (true) {
    tx = db.new_transaction()
    job = tx.query("select * from queue for update skip locked limit 1")
    if (job is empty) {
        tx.rollback()
        continue  // queue is empty; ideally this would block instead of spinning
    }
    try {
        some_long_processing(job.param1, job.param2)
        tx.query("delete from queue where id = $1", job.id)
        tx.commit()
    } catch {
        tx.rollback()  // job stays in the queue and its lock is released
    }
}