
I already know that transactions at the serializable level should be retried in case of a serialization failure.

I'm just curious about two scenarios where a 40001 error is raised:

  1. When running an INSERT command.
  2. When issuing a PREPARE TRANSACTION statement.

What could be the root cause of a serialization issue in each case? What is your experience with these particular cases?

I would like to try to reduce these issues by looking at the concurrent transactions.

TIA.

UPDATE 1

From the PostgreSQL wiki I've learned that among concurrent serializable transactions, the first COMMIT wins. PREPARE seems to behave like a COMMIT, so I can understand why it may fail.
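
A minimal sketch of that second scenario (the table t and the transaction identifier are hypothetical, and max_prepared_transactions must be non-zero): the serialization checks that normally run at COMMIT run at PREPARE TRANSACTION instead, so that is where the 40001 surfaces.

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    INSERT INTO t VALUES (1);
    -- The commit-time serialization checks run here, so this
    -- statement, not the later COMMIT PREPARED, is what can
    -- fail with SQLSTATE 40001:
    PREPARE TRANSACTION 'txn_1';
    COMMIT PREPARED 'txn_1';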

But for the INSERT case, if there is no concurrent serializable transaction that performed a SELECT, I'm missing the point. From what I've seen so far there are only multiple concurrent INSERTs.

UPDATE 2

I've found out that I have concurrent SELECT ... FOR SHARE queries on the same table.
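
That would explain it: under SERIALIZABLE, the SELECT ... FOR SHARE records a predicate read, and a concurrent INSERT matching that predicate creates a read/write dependency. A hedged sketch of the interleaving (the jobs table and its columns are hypothetical; a plain SELECT would record the same predicate read):

    -- Hypothetical table:
    CREATE TABLE jobs (id serial PRIMARY KEY, batch int, payload text);

    -- Session 1:
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT * FROM jobs WHERE batch = 1 FOR SHARE;

    -- Session 2:
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT * FROM jobs WHERE batch = 1 FOR SHARE;

    -- Session 1: writes into the predicate session 2 has read
    INSERT INTO jobs (batch, payload) VALUES (1, 'a');
    COMMIT;                           -- first committer wins

    -- Session 2: writes into the predicate session 1 has read;
    -- this INSERT or the following COMMIT fails with
    -- ERROR: could not serialize access due to read/write
    -- dependencies among transactions (SQLSTATE 40001)
    INSERT INTO jobs (batch, payload) VALUES (1, 'b');
    COMMIT;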

UPDATE 3

There seems to be a pattern here: the transaction lasts longer than usual when it fails. This is because we are processing files, which may take longer than expected. I'll try to reduce the duration of the transaction by reordering the various tasks.

1 Answer

The docs have a rather good discussion of how a 40001 can occur:

http://www.postgresql.org/docs/current/static/transaction-iso.html

Quoting, rather than poorly paraphrasing at length, the applicable dos and don'ts:

Don't put more into a single transaction than needed for integrity purposes.

Don't leave connections dangling "idle in transaction" longer than necessary.

When the system is forced to combine multiple page-level predicate locks into a single relation-level predicate lock because the predicate lock table is short of memory, an increase in the rate of serialization failures may occur. You can avoid this by increasing max_pred_locks_per_transaction.

A sequential scan will always necessitate a relation-level predicate lock. This can result in an increased rate of serialization failures. It may be helpful to encourage the use of index scans by reducing random_page_cost and/or increasing cpu_tuple_cost. Be sure to weigh any decrease in transaction rollbacks and restarts against any overall change in query execution time.
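
To act on that last point, you can check whether a hot query uses a sequential scan and, if so, experiment with the planner costs. A sketch, reusing the hypothetical jobs table from the question above; the values are illustrative, not recommendations:

    -- A Seq Scan in the plan means a relation-level predicate
    -- lock under SERIALIZABLE:
    EXPLAIN SELECT * FROM jobs WHERE batch = 1;

    -- Nudge the planner toward index scans (defaults are 4.0
    -- and 0.01); benchmark before adopting these values:
    SET random_page_cost = 1.1;
    SET cpu_tuple_cost = 0.03;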

Put another way, commit as quickly as you can when writing, and avoid massive queries that read or write all over the place.
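
If you suspect the lock-promotion case mentioned above, SSI predicate locks are visible in pg_locks with mode SIReadLock, so you can watch their granularity under load. A sketch, with an illustrative value for the setting; changing it requires a server restart (via ALTER SYSTEM on 9.4+, or postgresql.conf on older versions):

    -- Many 'relation'-level entries for a busy table suggest
    -- that page-level predicate locks were promoted because
    -- the predicate lock table ran out of slots:
    SELECT locktype, count(*)
    FROM pg_locks
    WHERE mode = 'SIReadLock'
    GROUP BY locktype;

    -- Illustrative value; takes effect only after a restart:
    ALTER SYSTEM SET max_pred_locks_per_transaction = 128;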

– Denis de Bernardy
  • From the doc, we learn that "The shared predicate lock table tracks locks on max_pred_locks_per_transaction * (max_connections + max_prepared_transactions) objects (e.g., tables); hence, no more than this many distinct objects can be locked at any one time." max_connections is set to 2000, max_prepared_transactions is set to 30, and max_pred_locks_per_transaction is set to 64. The predicate lock table shouldn't be short of memory. –  Oct 07 '13 at 07:10