
While performing write/read operations against a CockroachDB database with Spring Boot, we intermittently get the error below. Any solutions are appreciated. Thanks.

Caused by:

org.postgresql.util.PSQLException: ERROR: restart transaction: TransactionRetryWithProtoRefreshError: ReadWithinUncertaintyIntervalError: read at time 1640760553.619962171,0 encountered previous write with future timestamp

eshirvana
DK93

1 Answer


The documentation states that this error is a sign of contention. It also suggests four ways of solving it:

  • Be prepared to retry on uncertainty (and other) errors, as described in client-side retry handling.
  • Use historical reads with SELECT ... AS OF SYSTEM TIME.
  • Design your schema and queries to reduce contention. For more information about how contention occurs and how to avoid it, see Understanding and avoiding transaction contention. In particular, if you are able to send all of the statements in your transaction in a single batch, CockroachDB can usually automatically retry the entire transaction for you.
  • If you trust your clocks, you can try lowering the --max-offset option to cockroach start, which provides an upper limit on how long a transaction can continue to restart due to uncertainty.
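The first suggestion, client-side retry handling, can be sketched as a loop that re-runs the transaction whenever CockroachDB signals a retryable error (SQLSTATE `40001`). This is a minimal illustration, not Spring-specific: the `runWithRetry` helper, the retry limit, and the backoff policy are all made up for the example, and in a real application the `Callable` would open a connection and execute the transaction.

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

public class RetryExample {
    // CockroachDB reports retryable transaction errors with SQLSTATE 40001.
    static final String RETRY_SQL_STATE = "40001";

    // Re-run the given transaction body until it succeeds, a non-retryable
    // error occurs, or maxRetries attempts are exhausted.
    static <T> T runWithRetry(Callable<T> txn, int maxRetries) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return txn.call();
            } catch (SQLException e) {
                if (!RETRY_SQL_STATE.equals(e.getSQLState()) || attempt >= maxRetries) {
                    throw e; // not retryable, or out of attempts
                }
                // Simple exponential backoff (illustrative policy).
                Thread.sleep((long) (Math.pow(2, attempt) * 100));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate a transaction that fails once with a retryable error,
        // then succeeds on the second attempt.
        int[] calls = {0};
        String result = runWithRetry(() -> {
            if (calls[0]++ == 0) {
                throw new SQLException("restart transaction", RETRY_SQL_STATE);
            }
            return "committed";
        }, 3);
        System.out.println(result); // prints "committed"
    }
}
```

With Spring, roughly the same effect can be had declaratively, e.g. by configuring a retry policy around the transactional method, as long as it retries on SQLSTATE 40001.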

Did you already try these?

SebDieBln
  • Yes, I did read the documentation, but I didn't understand how the timestamp cache is handled in CockroachDB. Does this mean that when transaction t1 reads a specific record from a range and t2 then tries to read the same record, t2 is put on hold until t1 completes its transaction? I understand this applies to writes, but does it also apply to concurrent reads? – DK93 Jan 09 '22 at 02:38
  • @Dheeraj Your question and the error message you posted mention **write** operations. You would not get this error message if all transactions only read the data, because then "*previous write with future timestamp*" cannot happen. Maybe some of your transactions write data without you knowing it? – SebDieBln Jan 09 '22 at 13:58
  • Sorry, I deviated slightly from the question context. But to answer you: yes, we have write operations within the same transaction. And per the CockroachDB documentation, it internally tries to retry transactions when it notices stale data before throwing an exception to the client. So I wanted to understand how long the second transaction waits in the queue for the lock to be released by the first transaction. – DK93 Jan 09 '22 at 15:35
  • Are you sure the server retries the transaction? Could you post a link to the documentation? I assumed it is always the client that is responsible for retrying, although that might be hidden in some library code and could therefore be transparent to the application code. – SebDieBln Jan 09 '22 at 19:50