15

Let's say I have two variables, protected_var1 and protected_var2. Let's further assume that these variables are updated via multiple threads, and are fairly independent in that usually one or the other but not both is worked on - so they both have their own mutex guard for efficiency.

Assuming:

- I always lock the mutexes in order (mutex1, then mutex2) in regions of my code where both locks are required.

- Both mutexes are also used in many other places by themselves (e.g. lock just mutex1, or just mutex2).

Does the order in which I unlock the mutexes at the end of a function using both make a difference in this situation?

void foo()
{
    pthread_mutex_lock(&mutex1);
    pthread_mutex_lock(&mutex2);

    int x = protected_var1 + protected_var2;

    pthread_mutex_unlock(&mutex1); //Does the order of the next two lines matter?
    pthread_mutex_unlock(&mutex2);
}

I was asked a question a long time ago in an interview regarding this situation, and I came out feeling that the answer was yes - the order of those two unlocks does matter. I cannot for the life of me figure out how a deadlock could result from this though if the locks are always obtained in the same order wherever both are used.

John Humphreys
  • ...but where they are NOT both used, it may be possible for two threads to deadlock on each other. To be perfectly safe you should lock both in the same order. Yes, it may serialize things that don't need it, but otherwise you'd have to make sure there's no place where threads locking individually could collide with a place where both are locked. – stu Dec 08 '19 at 19:35

9 Answers

18

The order shouldn't matter, as long as you don't attempt to acquire another lock between the releases. The important thing is to always acquire the locks in the same order; otherwise, you risk a deadlock.

EDIT:

To expand on the constraint: you must establish a strict ordering among the mutexes, e.g. mutex1 precedes mutex2 (the rule extends to any number of mutexes). You may only request a lock on a mutex if you do not hold a mutex that comes after it in that order; e.g. you may not request a lock on mutex1 while you hold a lock on mutex2. As long as these rules are respected, you are safe. With regard to releasing: if you release mutex1 and then try to reacquire it before releasing mutex2, you have violated the rule. In this regard, there can be some advantage to respecting a stack-like order, where the last mutex acquired is the first released. But that is an indirect effect: the real rule is that you cannot request a lock on mutex1 while holding one on mutex2, regardless of whether you held a lock on mutex1 at the moment you acquired the lock on mutex2.

James Kanze
  • Very **very** important point. I knew that there was a reason for unlock ordering. Please expand this a little so people see it. Consider unlocking `mutex1`, doing something, and then grabbing `mutex1` again. I remember having a deadlock that was resolved by reverse ordering the mutex release and that was the cause. – D.Shawley Feb 23 '12 at 15:10
  • @D.Shawley That **would** cause a deadlock. (I seem to recall analyzing this in more detail in a recent answer, to a question concerning using a mutex on a container _and_ on the individual elements in the container.) But that violates the ordering constraint: if you acquire both mutexes, you must acquire `mutex1` first. – James Kanze Feb 23 '12 at 15:18
9

I cannot for the life of me figure out how a deadlock could result from this though if the locks are always obtained in the same order wherever both are used.

In these circumstances, I don't think the order of unlocking the mutexes could be the cause of a deadlock.

Since pthread_mutex_unlock() doesn't block, both mutexes would always get unlocked regardless of the order of the two calls.

Note that if you attempt to acquire any locks between the two unlock calls, this can change the picture completely.

NPE
5

It doesn't matter for correctness of locking. The reason is that, even supposing some other thread is waiting to lock mutex1 and then mutex2, the worst case is that it gets immediately scheduled as soon as you release mutex1 (and acquires mutex1). It then blocks waiting for mutex2, which the thread you're asking about will release as soon as it gets scheduled again, and there's no reason that shouldn't happen soon (immediately, if these are the only two threads in play).

So there might be a small performance cost in that exact situation, compared with releasing mutex2 first so that only one rescheduling operation is needed. It's nothing you'd normally expect to predict or worry about, though; it's all within the boundaries of "scheduling often isn't deterministic".

The order you release the locks could certainly affect scheduling in general, though. Suppose that there are two threads waiting for your thread, and one of them is blocked on mutex1 while the other is blocked on mutex2. It might turn out that whichever lock you release first, that thread gets to run first, simply because your thread has outlived its welcome (consumed more than an entire time-slice), and hence gets descheduled as soon as anything else is runnable. But that can't cause a fault in an otherwise-correct program: you aren't allowed to rely on your thread being descheduled as soon as it releases the first lock. So either order of those two waiting threads running, both running simultaneously if you have multiple cores, or the two alternating on one core, must all be equally safe whichever order you release the locks.

Steve Jessop
1

The order of unlocking cannot cause deadlocks. However, given the opportunity, I recommend unlocking in reverse locking order. This has a negligible effect on the running of the code, but developers are used to thinking in terms of scopes, and scopes "close" in reverse order, so releasing in reverse order lets you think of the locks as scoped. This brings me to the second point: in most cases it is safest to replace the direct calls to lock and unlock with a stack-based guard that calls them for you. Doing so yields the greatest degree of correctness for the least mental effort, and it is also safe in the presence of exceptions (which can really muck with a manual unlock)!

A simple guard (there are many out there... this is just a quick roll-your-own):

class StMutexLock
{
    public:
        StMutexLock(pthread_mutex_t* inMutex)
        : mMutex(inMutex)
        {
            pthread_mutex_lock(mMutex);
        }

        ~StMutexLock()
        {
            pthread_mutex_unlock(mMutex);
        }
    private:
        pthread_mutex_t*   mMutex;
};

{
    StMutexLock lock1(&mutex1);  // locked first, unlocked last
    StMutexLock lock2(&mutex2);  // destructors run in reverse order

    int x = protected_var1 + protected_var2;
    doProtectedVar1ThingThatCouldThrow(); // exceptions are no problem!
    // no explicit unlock required.  Destructors take care of everything
}
Cort Ammon
0
void * threadHandle (void *arg) 
{
   // Try to lock the first mutex...
   pthread_mutex_lock (&mutex_1);

   // Try to lock the second mutex...
   while (pthread_mutex_trylock(&mutex_2) != 0)  // Test if already locked   
   {
      // Second mutex is locked by some other thread.  Unlock the first mutex so that other threads won't starve or deadlock
      pthread_mutex_unlock (&mutex_1);  

      // stall here
      usleep (100);

      // Try to lock the first mutex again
      pthread_mutex_lock(&mutex_1);
   }

   // If you are here, that means both mutexes are locked by this thread

   // Modify the global data
   count++;

   // Unlock both mutexes!!!

   pthread_mutex_unlock (&mutex_1);

   pthread_mutex_unlock (&mutex_2);
   return NULL;
}

I guess this can prevent the deadlock: the thread never blocks on mutex_2 while it holds mutex_1; if mutex_2 is busy, it releases mutex_1, sleeps briefly, and retries, so no circular wait can form.

  • In addition to the code, please provide some details as to why your code works and what you did to make it work. – buczek Sep 23 '16 at 19:37
0

No, it doesn't matter. A deadlock cannot result from this; both unlock operators are guaranteed to succeed (bar heap corruption or similar problems).

Collin Dauphinee
0

The order of unlocking isn't a problem here, but the order of locking can be a problem.

Consider:

void foo()
{
    pthread_mutex_lock(&mutex1);
    pthread_mutex_lock(&mutex2);

    int x = protected_var1 + protected_var2;

    pthread_mutex_unlock(&mutex1);
    pthread_mutex_unlock(&mutex2);
}

void bar()
{
    pthread_mutex_lock(&mutex2);
    pthread_mutex_lock(&mutex1);

    int x = protected_var1 + protected_var2;

    pthread_mutex_unlock(&mutex1);
    pthread_mutex_unlock(&mutex2);
}

This can lead to a deadlock since foo might have locked mutex1 and now waits for mutex2 while bar has locked mutex2 and now waits for mutex1. So it's a good idea to somehow ensure that nested mutex locks always get locked in the same order.

DarkDust
0

What I can see is that if another thread takes mutex2 and holds it for a long time, your foo() function will block at pthread_mutex_lock(&mutex2) while still holding mutex1, so anything else waiting on mutex1 stalls too, and you will probably take a performance hit.

Gui13
0

So long as, whenever var1 and var2 are both locked, they are locked in the same order, you are safe regardless of the release order. In fact, releasing in the reverse of the order they were locked is the RAII behavior you get from the scoped locks in the STL and Boost.

Unknown1987