
I have an application in which several threads share a mutex.

std::lock_guard< std::recursive_mutex > lock(globalMutex_);

One thread (T1) takes the lock intensively, the others (T2, T3, ...) far less often. I have a case in which the threads that need the lock less often are blocked for 100 seconds before they successfully acquire it.

The thread (T1) that acquires the lock often does it in the following way:

void func()
{
  std::lock_guard< std::recursive_mutex > lock(globalMutex_);
  processing();
}

globalMutex_ is therefore released periodically.

Strange behavior:

T1 gets the lock systematically during a total period of 100 seconds while the other threads do not get the lock at all.

(The other threads use the same pattern, but their func is called less often.)

Question: what can explain this? Is it normal behavior?

Context: Windows 10 / latest version of Visual Studio / 64-bit / GUI application

Note: even if I give T2 a high priority, the situation is the same.

Jason Aller
Guillaume Paris
  • If the first thread never yields and relocks the mutex quickly you get a sort of live lock. – François Andrieux Jun 14 '18 at 14:36
  • 1
    There is [`std::this_thread::yield`](http://en.cppreference.com/w/cpp/thread/yield) but it's non-binding and your millage may vary : *"Provides a hint to the implementation to reschedule the execution of threads, allowing other threads to run."* – François Andrieux Jun 14 '18 at 14:37
  • @FrançoisAndrieux Well it's non-binding, but by the same argument the behavior that the asker observes may well be correct as far as the standard is concerned. – Max Langhof Jun 14 '18 at 14:39
  • @MaxLanghof Yes, OP's observed behavior is likely correct though undesirable. `yield` might solve his problem but it might not. That's why I added "your mileage may vary". – François Andrieux Jun 14 '18 at 14:40
  • The default mutex implementation is typically fast and simple, not perfectly fair. If you want more "fairness", you might want to manage the wait line yourself with your own mutex... – curiousguy Jun 14 '18 at 14:53
  • Is it normal that a change of thread priority doesn't force T2 to get the lock? – Guillaume Paris Jun 14 '18 at 15:08
  • What is "high priority"? – curiousguy Jun 14 '18 at 15:12
  • the highest priority a thread can have on Windows – Guillaume Paris Jun 14 '18 at 15:41
  • 1
    Locks are typically knowingly unfair. If a thread is swapped in (running on a core) it is actually likely to get the lock back before swapped-out threads are swapped in and given a chance. The C++ standard requires some super loose guarantee that all threads will make eventual process. But by the letter of the law every 100 seconds is eventually! Writing a decent fair lock is quite tricky but what you need if the application can't be restructured. I couldn't find a good fair lock online. I'm sure it's out there... – Persixty Jun 14 '18 at 16:10
  • @Guillaume07 I'm not familiar with the priorities of Windows. Does that mean that this thread could starve other threads, if it doesn't yield? Is that what you want? – curiousguy Jun 15 '18 at 02:30
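The yield hint discussed in the comments could be sketched like this (an illustrative sketch, not code from the question: `processing()` and the `callCount` counter are stand-ins, and `yield` remains a non-binding scheduling hint):

```cpp
#include <mutex>
#include <thread>

std::recursive_mutex globalMutex_;
int callCount = 0;                   // illustrative shared state, protected by globalMutex_

void processing() { ++callCount; }   // stand-in for the real work

void func()
{
    {
        std::lock_guard<std::recursive_mutex> lock(globalMutex_);
        processing();
    }
    // After releasing the mutex, hint the scheduler to run another thread.
    // Non-binding: the OS may keep running this thread anyway.
    std::this_thread::yield();
}
```

Because the hint may be ignored, this can reduce starvation but cannot guarantee fairness.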

3 Answers


std::mutex provides no guarantee that threads acquire the lock in the order they call lock(). When a thread releases the lock and quickly relocks it, it is likely to succeed in regaining the lock unless another thread is already waiting on it and actually running at that moment.

The simplest solution is to hold locks for as short a time as possible and to make sure that each thread spends at least some time without the mutex locked.

The more involved solution is to write your own mutex class that does provide some guarantee about the order of lock/unlock. You could implement it with a combination of std::mutex and std::condition_variable.
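One way such an ordered mutex might look, as a sketch (the `fair_mutex` name and layout are my own illustration, not a standard facility): each lock() call takes a ticket, and tickets are served strictly in arrival order, so a thread that unlocks and immediately relocks goes to the back of the queue.

```cpp
#include <condition_variable>
#include <mutex>

// Illustrative FIFO "ticket" mutex: lock() requests are served in arrival order.
class fair_mutex {
public:
    void lock() {
        std::unique_lock<std::mutex> lk(m_);
        const unsigned long ticket = next_ticket_++;          // take a number
        cv_.wait(lk, [&] { return ticket == now_serving_; }); // wait for our turn
    }
    void unlock() {
        {
            std::lock_guard<std::mutex> lk(m_);
            ++now_serving_;                                   // serve the next ticket
        }
        cv_.notify_all();  // wake all waiters; only the matching ticket proceeds
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    unsigned long next_ticket_ = 0;
    unsigned long now_serving_ = 0;
};
```

It satisfies the BasicLockable requirements, so it can be used with `std::lock_guard<fair_mutex>`. notify_all is used because there is no way to wake the one specific thread holding the next ticket with notify_one.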

Alan Birtles
  • I think programmers keep critical sections or other access-control devices (I don't know the generic name) locked just as long as necessary. – curiousguy Jun 14 '18 at 15:01
  • @curiousguy then the programmers you know are far better than me and the ones I know. Generally, in my experience, there is often room to reduce the scope of locks in programs. – Alan Birtles Jun 14 '18 at 15:07

This looks like a mistake:

{
  std::lock_guard< std::recursive_mutex > lock(globalMutex_);
  processing();
}

What does processing() do? If it takes more than a few microseconds, there's probably a more efficient way to solve your problem. Sometimes it looks like this:

bool success = false;
while (!success) {
    // Do the expensive work outside the lock, on a private copy.
    auto result = speculative_processing();
    {
        std::lock_guard< std::recursive_mutex > lock(globalMutex_);
        // Publish only if no other thread changed the shared state meanwhile.
        success = attempt_to_publish(result);
    }
}

It's often the case that individual threads in a multi-threaded program have to do extra work in order to keep out of each other's way. But by keeping out of each other's way, they are better able to exploit multiple processors, and they get the whole job done more quickly.

Solomon Slow
  • yes, I agree; but it is an old application, not nicely optimized. I just have to deploy it with a newer Visual Studio compiler and an up-to-date Boost library, but now I get this behavior... – Guillaume Paris Jun 14 '18 at 15:54
  • @curiousguy, Instead of doing a complicated, in-place update on some shared data structure, it might copy the data, update the copy, and then return a pointer to the copy---all without locking any lock. Then, the `attempt_...`, function would either substitute the new, updated copy for the old structure and return `true`, or else it would discover that another thread had snuck in some other, incompatible change, and it would return `false` without swapping the pointers. The copying is extra work, and the occasional failure is more extra work, but it pays off by allowing more concurrency. – Solomon Slow Jun 15 '18 at 13:44
  • @jameslarge So essentially a lock free idiom using a mutex? What is "more concurrency"? – curiousguy Jun 15 '18 at 14:20
  • Sorry, "concurrency" is the wrong word. I should have said "parallelism." My example assumes a data structure with many readers and a few writers. If we can minimize the amount of time that a single writer keeps the data structure locked, that maximizes the amount of time that the many readers are able to _simultaneously_ read the data and get work done. – Solomon Slow Jun 15 '18 at 14:47
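The copy-then-publish idea described in these comments might be sketched as follows (an illustration under my own assumptions: `Data`, `try_update`, and the version check are hypothetical names, not from the original code):

```cpp
#include <memory>
#include <mutex>
#include <vector>

struct Data {
    std::vector<int> values;
    unsigned long version = 0;  // bumped on every successful publish
};

std::mutex globalMutex_;
std::shared_ptr<const Data> current = std::make_shared<Data>();

bool try_update(int newValue)
{
    // Take a snapshot under the lock (cheap: just a pointer copy).
    std::shared_ptr<const Data> snapshot;
    {
        std::lock_guard<std::mutex> lock(globalMutex_);
        snapshot = current;
    }

    // The expensive part: copy and modify, with no lock held.
    auto updated = std::make_shared<Data>(*snapshot);
    updated->values.push_back(newValue);
    updated->version = snapshot->version + 1;

    // Publish under the lock, but only if nobody published meanwhile.
    std::lock_guard<std::mutex> lock(globalMutex_);
    if (current->version != snapshot->version)
        return false;  // another writer snuck in; the caller should retry
    current = std::move(updated);
    return true;
}
```

Callers retry on `false`, matching the `while (!success)` loop in the answer; the lock is held only for pointer copies and a version compare, never for the copying work itself.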

You can achieve your goal with a condition_variable:

std::mutex globalMutex_;
std::condition_variable cv;
bool busy = false;

void func()
{
  {
    std::unique_lock<std::mutex> lk(globalMutex_);
    cv.wait(lk, []{ return !busy; });
    busy = true;
  }
  processing(); // runs outside the mutex, guarded by the busy flag
  {
    std::lock_guard<std::mutex> lk(globalMutex_);
    busy = false; // must be written under the lock: wait() reads it under the same lock
  }
  cv.notify_one();
}
Alon