
I have written the producer-consumer as follows:

#include <condition_variable>
#include <cstdlib>
#include <future>
#include <iostream>
#include <mutex>
#include <queue>

// Shared state used by the snippet (declarations implied by the rest of the post):
std::mutex mtx;
std::condition_variable cv;
std::queue<int> Q;
bool notified = false;
bool done = false;

void producer(int n)
{
    std::unique_lock<std::mutex> lock(mtx);
    for (int i = 0; i < n; ++i) 
    {
        cv.wait(lock, [] {return !notified; });
        std::cout << "Producing.." << i << std::endl;
        Q.push(i);
        notified = true;
        cv.notify_one();
    }
    done = true;
}

void consumer()
{
    std::unique_lock<std::mutex> lock(mtx);
    while (!done) 
    {
        cv.wait(lock, [] {return notified; });
        while (!Q.empty()) 
        {
            std::cout << "Consuming.." << Q.front() << std::endl;
            Q.pop();
            cv.notify_all();
            notified = false;
        }
    }
}

int main()
{
    auto fut1 = std::async(producer, 20);
    auto fut2 = std::async(consumer);
    fut1.get();
    fut2.get();
    system("pause");
}

I consider this a somewhat unusual version (home-made ^_^). The difference from well-known sources like cppreference is that in the producer I do not force the thread to sleep; instead I wait for a notification from the consumer, and in the consumer I don't wrap the cv.wait into a while(!notified) loop. This works perfectly.
Is there something wrong with this approach? Did I miss something?

Eduard Rostomyan
  • The only thing you really missed here is that, as written, there could only be a maximum of one object in the queue. The producer will not produce another element until the consumer consumes the first one it produces. – Sam Varshavchik May 28 '18 at 01:46
  • correct, is this the whole meaning of this problem? – Eduard Rostomyan May 28 '18 at 01:47
  • Not really. In general, producers and consumers are independent of each other. If a producer can temporarily produce things faster than the consumer, it should. That's the whole reason for independently executing threads to exist. Otherwise there is no real reason to have threads here. Just produce one value, then consume it, then produce the next one, in a single execution thread. – Sam Varshavchik May 28 '18 at 01:49
  • Actually, it is not. Why would you use a queue if there would be only one element? – ichramm May 28 '18 at 01:49
  • Oh I see now @SamVarshavchik, the full definition is this: The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue. The producer's job is to generate data, put it into the buffer, and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer), one piece at a time. The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer. – Eduard Rostomyan May 28 '18 at 01:52
  • Also, that code will work only with one consumer; adding more consumers will cause a race condition with the variable `notified` (dormant workers would be awoken by the call to `notify_all`). – ichramm May 28 '18 at 01:52
  • Eduard, you are right! The producer must wait if the queue is at its maximum capacity, and the consumer must wait if the queue is empty. If your class `Queue` has a `size` method you may not need the additional boolean (having an `empty` would be a plus). Another thing to keep in mind is to release the lock after consuming the element from the queue and BEFORE doing some work on that element; it is not good practice to hold a lock for too long (a sketch along these lines follows these comments). – ichramm May 28 '18 at 01:58
  • Well, then, your code fails the requirements. Your producer will not try to make full use of the buffer by filling it up; it will always add one value at a time and wait until the consumer consumes it before adding the next value. – Sam Varshavchik May 28 '18 at 02:00
  • thanks guys, I am one step closer to understanding this whole concept now! – Eduard Rostomyan May 28 '18 at 02:04

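Following up on the comments, here is a minimal sketch of the bounded-buffer variant they describe. It is only an illustration of the suggestions above, not the poster's code: MAX_SIZE, the two condition variables cv_not_full/cv_not_empty, and the second consumer are assumed names and choices. The wait predicates inspect the queue itself rather than a notified flag, so the producer can run ahead of the consumers up to the buffer capacity, several consumers can share the queue, and the lock is released before the item is "worked on" (printed).

#include <condition_variable>
#include <cstddef>
#include <future>
#include <iostream>
#include <mutex>
#include <queue>

std::mutex mtx;
std::condition_variable cv_not_full;   // producer waits here while the buffer is full
std::condition_variable cv_not_empty;  // consumers wait here while the buffer is empty
std::queue<int> Q;
bool done = false;
constexpr std::size_t MAX_SIZE = 5;    // assumed fixed buffer capacity

void producer(int n)
{
    for (int i = 0; i < n; ++i)
    {
        std::unique_lock<std::mutex> lock(mtx);
        // Block only while the buffer is full, so the producer can run ahead of the consumers.
        cv_not_full.wait(lock, [] { return Q.size() < MAX_SIZE; });
        Q.push(i);
        std::cout << "Producing.." << i << std::endl;
        lock.unlock();
        cv_not_empty.notify_one();
    }
    {
        std::lock_guard<std::mutex> lock(mtx);
        done = true;
    }
    cv_not_empty.notify_all();  // wake every consumer so it can observe `done`
}

void consumer()
{
    for (;;)
    {
        std::unique_lock<std::mutex> lock(mtx);
        // The predicate inspects the queue itself, so any number of consumers can share it.
        cv_not_empty.wait(lock, [] { return !Q.empty() || done; });
        if (Q.empty())          // here `done` must be true: nothing left to consume
            return;
        int item = Q.front();
        Q.pop();
        lock.unlock();          // release the lock before "working" on the item
        cv_not_full.notify_one();
        std::cout << "Consuming.." << item << std::endl;  // output may interleave with two consumers
    }
}

int main()
{
    auto fut1 = std::async(std::launch::async, producer, 20);
    auto fut2 = std::async(std::launch::async, consumer);
    auto fut3 = std::async(std::launch::async, consumer);  // a second consumer now works as well
    fut1.get();
    fut2.get();
    fut3.get();
}

Two condition variables are used so that a consumer signalling "space available" never needlessly wakes other consumers; a single condition variable with notify_all would also work, at the cost of extra spurious wake-ups.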
0 Answers