I'm not familiar with Postgres concurrency and locking models, so I assume there might be a solution to my problem that doesn't resort to explicit locks in code. I'm integrating a payment gateway service. Every time a payment is made, I receive a notification and add the sum to the user's balance. Here's the problem: there's a slim but theoretical chance that I receive two messages with identical payloads, that is, two payment notifications for a single transaction. I need to avoid adding the amount to the user's balance twice.
Here's the code:
var deposit = await _repository.GetDepositByGatewayDepositId(ipn.DepositId);
var oldStatus = default(DepositStatus);

if (deposit == null)
{
    // First notification for this transaction: create the deposit record.
    deposit = _converter.ToDeposit(model, status, ipn.Id, user.Id);
}
else
{
    // Follow-up notification: remember the previous status before overwriting.
    oldStatus = deposit.Status;
    deposit.IpnId = ipn.Id;
    deposit.Updated = DateTime.UtcNow;
    deposit.Status = status;
}

await _repository.SaveAsync(deposit);

// Credit the balance only on the first transition to Completed.
if (deposit.Status == DepositStatus.Completed && oldStatus != DepositStatus.Completed)
{
    await _repository.AddBalance(user.Id, ipn.Amount);
}
I'm updating the user's balance only if the previous payment status (the oldStatus variable) was not Completed (i.e. was still Pending). However, I read oldStatus from the database, so if a duplicate request arrived while the first transaction was still in progress, I would end up with a race condition.
So my question is: is there an efficient way to solve this without using locks in code? Maybe I can tell Postgres to use a row-level lock that prevents other transactions from reading the row until the first one finishes?
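From what I understand, a row-level lock in Postgres wouldn't block plain reads (MVCC readers don't wait for writers), but SELECT ... FOR UPDATE would make a second transaction that tries to lock or update the same deposit row wait until the first one commits, which sounds like what I need. Here's a rough sketch of what I have in mind with Npgsql (table and column names are made up, and newStatus/connectionString come from the surrounding handler):

using Npgsql;

// Sketch only: "deposits"/"balances" and their columns are hypothetical.
await using var conn = new NpgsqlConnection(connectionString);
await conn.OpenAsync();
// One transaction, so the row lock is held until the status change and
// the balance update commit together.
await using var tx = await conn.BeginTransactionAsync();

// FOR UPDATE takes a row-level lock: a concurrent transaction running
// this same statement for the same deposit blocks here until we commit,
// and then sees the status written below.
await using var select = new NpgsqlCommand(
    "SELECT status FROM deposits WHERE gateway_deposit_id = @id FOR UPDATE",
    conn, tx);
select.Parameters.AddWithValue("id", ipn.DepositId);
var oldStatus = (int?)await select.ExecuteScalarAsync();

var wasCompleted = oldStatus >= 100; // gateway: >= 100 means success
var isCompleted = newStatus >= 100;

// ... INSERT or UPDATE the deposit row here, inside the same tx ...

if (isCompleted && !wasCompleted)
{
    await using var credit = new NpgsqlCommand(
        "UPDATE balances SET amount = amount + @sum WHERE user_id = @uid",
        conn, tx);
    credit.Parameters.AddWithValue("sum", ipn.Amount);
    credit.Parameters.AddWithValue("uid", user.Id);
    await credit.ExecuteNonQueryAsync();
}

await tx.CommitAsync();

One gap I can see: FOR UPDATE only helps once the row exists, so two concurrent first notifications would both see no row; a unique constraint on gateway_deposit_id would still be needed to close that window.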
Update: Thanks everyone for the replies. Let me add a few more details. I might receive multiple messages for the same transaction (same Id, that is) as its status changes. The good part is that the messages are digitally signed, so I have already added a unique constraint on the signature field. But the payment gateway states that any status >= 100 is a success status, so if they send me status 100 and then 101, those are two different messages with two different signatures, the unique constraint doesn't catch them, and in theory they could race with financial consequences. The chances might be very low, but I'd rather be safe than sorry.
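One lock-free idea for the 100-then-101 case: fold the status check into the UPDATE itself, so the "first to complete" decision is made atomically by Postgres instead of by comparing a previously read oldStatus. A sketch, reusing the hypothetical schema and the conn/tx from the sketch above:

// Compare-and-set: only one concurrent request can match the WHERE
// clause and perform the "not completed -> completed" transition.
await using var complete = new NpgsqlCommand(
    @"UPDATE deposits
         SET status = @new, updated = now()
       WHERE gateway_deposit_id = @id
         AND status < 100",
    conn, tx);
complete.Parameters.AddWithValue("new", newStatus);
complete.Parameters.AddWithValue("id", ipn.DepositId);

var rows = await complete.ExecuteNonQueryAsync();
if (rows == 1)
{
    // We won the transition: credit the balance exactly once,
    // inside the same transaction as the status change.
    await using var credit = new NpgsqlCommand(
        "UPDATE balances SET amount = amount + @sum WHERE user_id = @uid",
        conn, tx);
    credit.Parameters.AddWithValue("sum", ipn.Amount);
    credit.Parameters.AddWithValue("uid", user.Id);
    await credit.ExecuteNonQueryAsync();
}
// rows == 0: another request already completed this deposit,
// so the second success status credits nothing.

Under the default READ COMMITTED level the second UPDATE waits on the row lock, re-checks the WHERE clause after the first one commits, matches zero rows, and the duplicate credit never happens.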
Based on the replies, the most elegant solution seems to be optimistic locking. However, for this particular case I think distributed locking might be a better option because it seems simpler: acquire the lock for one specific customer only, so a lock for customer A doesn't make customer B wait.
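One way to get that per-customer lock without extra infrastructure seems to be Postgres advisory locks: pg_advisory_xact_lock(key) blocks until the lock for that bigint key is free and releases it automatically at commit or rollback. A sketch keyed on the user's id, with the same assumed names as above:

// Sketch only. Serializes notification handling per customer: two
// requests for the same user.Id queue up on the lock, while requests
// for different users proceed in parallel.
await using var conn = new NpgsqlConnection(connectionString);
await conn.OpenAsync();
await using var tx = await conn.BeginTransactionAsync();

await using (var lockCmd = new NpgsqlCommand(
    "SELECT pg_advisory_xact_lock(@key)", conn, tx))
{
    lockCmd.Parameters.AddWithValue("key", (long)user.Id);
    // Blocks until the per-user lock is free; released at commit/rollback.
    await lockCmd.ExecuteNonQueryAsync();
}

// From here until commit no other transaction can hold this user's key,
// so the original read-check-update-credit code can run unchanged.

await tx.CommitAsync();

One caveat: advisory lock keys are global to the database, so if anything else uses advisory locks, the two-argument form pg_advisory_xact_lock(namespace, id) avoids collisions.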
Thanks