
We're using lockForUpdate like this:

DB::beginTransaction();

try {
    // Lock user 2's balance row until the transaction ends, then credit 100
    $balance = UserBalance::where('user_id', 2)
                    ->lockForUpdate()
                    ->first();
    $balance->usd += 100;
    $balance->save();

    // A LOT MORE LOGIC HERE

    // Lock user 7's balance row, then debit 100
    $balance = UserBalance::where('user_id', 7)
                    ->lockForUpdate()
                    ->first();
    $balance->usd -= 100;
    $balance->save();

} catch (\Exception $e) {
    DB::rollBack();
    return json_encode([
        'success' => false,
        'message' => 'error',
    ]);
}
DB::commit();

Everything looks OK: I tried adding a sleep(20) before the commit and sent another request (without the sleep), and the row really is locked. But we're facing a problem: when we run this from a cron multiple times, the requests seem to run at exactly the same millisecond and then the lock doesn't seem to work. Is this possible? Is there any other solution besides using a queue?
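
For reference, the manual test was roughly this (same code as above, with the sleep(20) added just before the commit):

DB::beginTransaction();

try {
    $balance = UserBalance::where('user_id', 2)
                    ->lockForUpdate()
                    ->first();
    $balance->usd += 100;
    $balance->save();

    // ... same logic as above ...

} catch (\Exception $e) {
    DB::rollBack();
    return json_encode(['success' => false, 'message' => 'error']);
}

// Hold the row lock for 20 seconds so the second request,
// sent by hand, can be observed waiting on it
sleep(20);
DB::commit();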

The CRON is just calling a route multiple times like this:

* * * * * curl http://test.com.br/test
* * * * * curl http://test.com.br/test
* * * * * curl http://test.com.br/test

1 Answer

I don't understand why you need to use a DB::transaction here, since this operation is an atomic one.

You could simply do UserBalance::where('user_id', 2)->increment('usd', 100).

This sends a single query to the database, incrementing the value of usd by 100.
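
A minimal sketch of how the transfer from the question could look with that approach (assuming the same UserBalance model and the hard-coded user IDs from the question):

// Single-query, atomic updates; the database adds/subtracts the value
// itself, so no read-modify-write cycle happens in PHP
UserBalance::where('user_id', 2)->increment('usd', 100);

// A LOT MORE LOGIC HERE

UserBalance::where('user_id', 7)->decrement('usd', 100);

Each call issues one UPDATE that adjusts the current column value directly in the database, so concurrent requests cannot read a stale balance between the read and the write.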
