This code demonstrates that the mutex is being shared between two threads, but something weird is going on with the scoping block around thread_lock.

(I have a variation of this code in another question, but this seems like a second mystery.)

    #include <thread>
    #include <mutex>
    #include <iostream>

    #include <unistd.h>

    int main ()
    {
        std::mutex m;

        std::thread t ([&] ()
        {
            while (true)
            {
                {
                    std::lock_guard <std::mutex> thread_lock (m);

                    usleep (10*1000); // or whatever
                }

                std::cerr << "#";
                std::cerr.flush ();
            }
        });

        while (true)
        {
            std::lock_guard <std::mutex> main_lock (m);
            std::cerr << ".";
            std::cerr.flush ();
        }
    }

This basically works, as it is, but the scoping block around thread_lock should theoretically not be necessary. However, if you comment it out...

    #include <thread>
    #include <mutex>
    #include <iostream>

    #include <unistd.h>

    int main ()
    {
        std::mutex m;

        std::thread t ([&] ()
        {
            while (true)
            {
    //          {
                    std::lock_guard <std::mutex> thread_lock (m);

                    usleep (10*1000); // or whatever
    //          }

                std::cerr << "#";
                std::cerr.flush ();
            }
        });

        while (true)
        {
            std::lock_guard <std::mutex> main_lock (m);
            std::cerr << ".";
            std::cerr.flush ();
        }
    }

The output is like this:

........########################################################################################################################################################################################################################################################################################################################################################################################################################################################################################

i.e., it seems like the thread_lock NEVER yields to main_lock.

Why does thread_lock always gain the lock and main_lock always wait, if the redundant scoping block is removed?

spraff
  • There's no guarantee of fair scheduling. Both behaviours are permitted by the standard. – n. m. could be an AI Oct 30 '18 at 16:30
  • Shouldn't there be a guarantee of *some* scheduling? – spraff Oct 30 '18 at 16:31
  • ```main_lock``` is starved. In the first version of your code there was an I/O operation between releasing the lock and re-acquiring it, which gave your main thread time to grab the mutex. Mutexes are not fair, so if there is only a tiny gap between releasing the lock and acquiring it again (as in the second version of your code), there is a good chance that your thread ```t``` will be the first to lock the mutex again. – Michał Łoś Oct 30 '18 at 16:31
  • @spraff guarantees can be costly, and it's not the C++ way to give you a costly solution just "because" ; ). – Michał Łoś Oct 30 '18 at 16:33
  • There is scheduling, it works like this: the scheduler takes a thread which is ready to run and runs it for some time, then repeats. There's no guarantee it will not pick the same thread every time. If you don't want this, it's up to you to make sure the same thread is not always ready to run. – n. m. could be an AI Oct 30 '18 at 16:35
  • the scoping block is not "redundant". The mutex is locked until the end of `thread_lock`'s scope, so it does matter where `thread_lock`'s scope ends – 463035818_is_not_an_ai Oct 30 '18 at 16:45
  • @MichałŁoś Why did you write this as a comment and not an answer? – Max Langhof Oct 30 '18 at 17:07
  • @MaxLanghof: I didn't feel that this is exhaustive enough for an answer – Michał Łoś Oct 31 '18 at 10:10

2 Answers

I tested your code (with the block scope removed) on Linux with GCC 7.3.0 using pthreads and got results similar to yours. The main thread is starved, although if I waited long enough I would occasionally see the main thread do some work.

However, I ran the same code on Windows with MSVC (19.15) and no thread was starved.

It looks like you're using POSIX, so I'd guess your standard library uses pthreads on the back end? (I have to link against pthreads even with C++11.) Pthreads mutexes don't guarantee fairness. But that's only half the story: your output seems to be related to the usleep call.
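
For what it's worth, "fair" here means waiters get the mutex in roughly FIFO order. A quick way to see the difference is to swap std::mutex for a toy lock that is fair by construction, something like the ticket lock sketched below. This is only an illustration (ticket_lock is a made-up name, not a standard type), not production code:

    // Toy FIFO ("ticket") lock, for illustration only: each thread takes a
    // ticket and spins until its number is served, so the lock is handed
    // over in strict arrival order.
    #include <atomic>
    #include <thread>

    class ticket_lock
    {
        std::atomic<unsigned> next    {0};  // next ticket to hand out
        std::atomic<unsigned> serving {0};  // ticket currently allowed in

    public:
        void lock ()
        {
            const unsigned ticket = next.fetch_add (1);
            while (serving.load () != ticket)
                std::this_thread::yield (); // busy-wait for our turn
        }

        void unlock ()
        {
            serving.fetch_add (1);          // admit the next waiter
        }
    };

    // Works with lock_guard because it provides lock()/unlock():
    //     ticket_lock m;
    //     std::lock_guard <ticket_lock> guard (m);

With a lock like that in place of std::mutex the starvation should disappear, at the cost of busy-waiting.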

If I take out the usleep, I see fairness (Linux):

    // fair again
    while (true)
    {
        std::lock_guard <std::mutex> thread_lock (m);
        std::cerr << "#";
        std::cerr.flush ();
    }

My guess is that, because the auxiliary thread sleeps for so long while holding the mutex, the main thread is virtually guaranteed to be as deeply blocked as it can be. At first the main thread might try to spin in the hope that the mutex will become available soon; after a while, it might get put on a wait list.

In the auxiliary thread, the lock_guard object is destroyed at the end of the loop body, so the mutex is released. That wakes the main thread, but the auxiliary thread immediately constructs a new lock_guard, which locks the mutex again. The main thread is unlikely to grab the mutex in that instant, because it has only just been woken and may not have been scheduled yet. So unless a context switch occurs in this small window, the auxiliary thread will probably get the mutex again.

In the code with the scope block, the auxiliary thread releases the mutex before the I/O call. Printing to the screen takes a long time, so there is plenty of opportunity for the main thread to grab the mutex.
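
To spell out where the lock is and isn't held, here is the loop from the question again, just annotated:

    // Without the inner scope: the mutex is released at the end of the loop
    // body and re-acquired almost immediately at the top of the next
    // iteration, so the unlocked window is tiny.
    while (true)
    {
        std::lock_guard <std::mutex> thread_lock (m); // locked from here...
        usleep (10*1000);
        std::cerr << "#";
        std::cerr.flush ();
    }                                                 // ...to here, then relocked at once

    // With the inner scope: the mutex is released before the I/O, so the
    // main thread has the whole (slow) print as a window in which to grab it.
    while (true)
    {
        {
            std::lock_guard <std::mutex> thread_lock (m); // locked from here...
            usleep (10*1000);
        }                                                 // ...to here
        std::cerr << "#";   // mutex is free during the print
        std::cerr.flush ();
    }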

As @Ted Lyngmo said in his answer, if you add a sleep before the lock_guard is created, it makes starvation much less likely.

    while (true)
    {
        usleep (1); // short sleep while the mutex is free, so the main thread gets a window
        std::lock_guard <std::mutex> thread_lock (m);
        usleep (10*1000);
        std::cerr << "#";
        std::cerr.flush ();
    }

I also tried this with yield instead of the sleep, but I needed something like five or more yields to make it noticeably fairer, which leads me to believe that there are other nuances in the actual library implementation, the OS scheduler, and caching and memory-subsystem effects.
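
For reference, the yield variant I mean looks roughly like this; the count of five is just what happened to work for me, so treat it as illustrative rather than meaningful:

    while (true)
    {
        // a handful of yields while the mutex is free; one alone wasn't enough here
        for (int i = 0; i < 5; ++i)
            std::this_thread::yield ();

        std::lock_guard <std::mutex> thread_lock (m);
        usleep (10*1000);
        std::cerr << "#";
        std::cerr.flush ();
    }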

By the way, thanks for a great question. It was really easy to test and play around with.

Humphrey Winnebago

You can give the scheduler a hint to reschedule by yielding the threads (or by sleeping) while not owning the mutex. The rather long sleep below will probably cause it to output #.#.#.#. perfectly. If you switch to yielding you'll probably get blocks of ############..............., but roughly 50/50 in the long run.

    #include <thread>
    #include <mutex>
    #include <iostream>

    #include <unistd.h>

    int main ()
    {
        std::mutex m;

        std::thread t ([&] ()
        {
            while (true)
            {
                usleep (10000);
                //std::this_thread::yield();
                std::lock_guard <std::mutex> thread_lock (m);

                std::cerr << "#" << std::flush;
            }
        });

        while (true)
        {
            usleep (10000);
            //std::this_thread::yield();
            std::lock_guard <std::mutex> main_lock (m);
            std::cerr << "." << std::flush;
        }
    }

Ted Lyngmo