Trying to expand on my two previous questions, Move operations for a class with a thread as member variable and Call function inside a lambda passed to a thread.

I don't understand why the thread waiting on the condition variable is sometimes not notified, which results in a deadlock. cppreference says this about condition variables (http://en.cppreference.com/w/cpp/thread/condition_variable/notify_one):

The notifying thread does not need to hold the lock on the same mutex as the one held by the waiting thread(s); in fact doing so is a pessimization, since the notified thread would immediately block again, waiting for the notifying thread to release the lock.
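
As I understand it, the pattern that page describes looks roughly like this (a reduced sketch with illustrative names, not my actual code):

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool flag = false;

void waiter() {
    std::unique_lock<std::mutex> lock(m);
    // The predicate is re-checked under the mutex on every wakeup.
    cv.wait(lock, [] { return flag; });
}

void notifier() {
    {
        std::lock_guard<std::mutex> lock(m);
        flag = true;  // the shared predicate is modified under the mutex
    }
    cv.notify_one();  // the notification itself does not require holding the lock
}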

MCVE below; the commented line explains what changes if I hold the lock, but I don't understand why:

#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>

#include <iostream>

using namespace std;

class worker {
public:
    template <class Fn, class... Args>
    explicit worker(Fn func, Args... args) {
        t = std::thread(
            [&func, this](Args... cargs) -> void {
                std::unique_lock<std::mutex> lock(mtx);
                while (true) {
                    cond.wait(lock, [this]() -> bool { return ready; });
                    if (terminate) {
                        break;
                    }

                    func(cargs...);
                    ready = false;
                }
            },
            std::move(args)...);
    }

    ~worker() {
        terminate = true;
        if (t.joinable()) {
            run_once();
            t.join();
        }
    }

    void run_once() {
        // If I don't hold this mutex, the thread is never notified of ready
        // being true.
        std::unique_lock<std::mutex> lock(mtx);
        ready = true;
        cout << "ready run once " << ready << endl;
        cond.notify_all();
    }

    bool done() { return (!ready.load()); }

private:
    std::thread t;
    std::atomic<bool> terminate{false};
    std::atomic<bool> ready{false};
    std::mutex mtx;
    std::condition_variable cond;
};

// main.cpp

void foo() {
    worker t([]() -> void { cout << "Bark" << endl; });
    t.run_once();
    while (!t.done()) {
    }
}

int main() {
    while (true) {
        foo();
    }
    return 0;
}
aram
  • It's "ready = true;" that must happen under the mutex. It doesn't matter that it's atomic. – Cubbi Jul 28 '16 at 13:15
  • @user2079303 Fixed it! Still, it's hard to see if execution "expires". – aram Jul 28 '16 at 14:07
  • @user2079303 It's not an instant deadlock, at least not on my computer. That's why I put in a loop, so I could call the function several times. – aram Jul 28 '16 at 14:17
  • @user2079303 Yes, very misleading, my bad, again. Sorry. I fixed it in case someone else has the same problem. – aram Jul 28 '16 at 14:45

1 Answer

You need a memory barrier to make sure that the other thread will see the modified "ready" value. "ready" being atomic only ensures that the memory access is ordered so that modifications that happened before the atomic access are actually flushed to main memory. This does not guarantee that the other threads will see that memory, since these threads may have their own cache of the memory. Therefore, to ensure that the other thread sees the "ready" modification, the mutex is required.

{
  std::unique_lock<std::mutex> lock(mtx);
  ready = true;
}
Sven Nilsson
  • The `load` method of `std::atomic` has a parameter to determine the visibility of atomic values across threads. From the documentation for the `load` method: "If an atomic store in thread A is tagged memory_order_release and an atomic load in thread B from the same variable is tagged memory_order_acquire, all memory writes (non-atomic and relaxed atomic) that happened-before the atomic store from the point of view of thread A become visible side-effects in thread B; that is, once the atomic load is completed, thread B is guaranteed to see everything thread A wrote to memory." – keith Jul 28 '16 at 15:14
  • The reasoning in the answer is wrong. As keith points out, the atomicity guarantees that the write *is* in fact visible in the other thread. The problem is not caching, but a race with the scheduling order of the different threads. The solution is correct. – eerorika Jul 28 '16 at 15:31
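
Putting the answer and the comments together, a fixed run_once for the worker class in the question might look roughly like this (a sketch based on the discussion above, not code from the thread):

void run_once() {
    {
        // Setting the predicate while holding the mutex closes the race:
        // the worker cannot be sitting between its predicate check and the
        // actual wait while ready is flipped here, so the notification
        // cannot be lost.
        std::lock_guard<std::mutex> lock(mtx);
        ready = true;
    }
    // Per the cppreference quote in the question, the notification itself
    // may be issued without holding the lock.
    cond.notify_all();
}

Unlocking before calling notify_all avoids waking the worker only to have it block again immediately on the still-held mutex.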