
I use concurrency::task from ppltasks.h heavily in my codebase.

I would like to find an awaitable queue, where I can do `co_await my_queue.pop()`. Has anyone implemented one?

Details: I have one producer thread that pushes elements to a queue, and a receiver thread that waits and wakes up when elements arrive in the queue. This receiving thread might also wait on, and wake up to handle, other tasks in the meantime (using pplpp::when_any).

I don't want a queue with an interface where I have to poll a try_pop method, as that is slow, and I don't want a blocking_pop method, as that means I can't handle other ready tasks in the meantime.

petke
    check [condition variables](http://en.cppreference.com/w/cpp/thread/condition_variable). They can work with std::queue.empty() for an "awaitable queue". I'm not sure how you don't want to block and don't want to poll... – stefaanv Jun 13 '17 at 12:19
  • I'm sorry but... how exactly do you want the queue to work with two+ threads, of which one is a writer, if you don't want to use try_{insert, getter}() methods and don't want blocking methods (which basically means mutexes, I assume)? In essence, any lock-free structure that I know of still uses a spin lock to do the atomic operations it needs to get and set elements, but, in an ideal implementation, the overhead for this is minimal and you will generally acquire the lock on the first try – George Jun 13 '17 at 12:27
  • I use "pplpp::when_any" in a while loop. That way I can wait on many tasks at once. It wakes up when any one task is ready. So any one task does not block the others. And I don't have to poll any ready flag. – petke Jun 13 '17 at 12:54
  • Also known as a _blocking queue_. – Solomon Slow Jun 13 '17 at 17:37

2 Answers


This is basically your standard thread-safe queue implementation, but instead of a condition_variable you will have to use futures to coordinate the different threads. You can then co_await the future returned by pop until it becomes ready.

The queue's implementation will need to keep a list of the promises that correspond to the outstanding pop calls. In case the queue is not empty when popping, you can return a ready future immediately. You can use a plain old std::mutex to synchronize concurrent access to the underlying data structures.

I don't know of any implementation that already does this, but it shouldn't be too hard to pull off. Note though that managing all the futures will introduce some additional overhead, so your queue will probably be slightly less efficient than the classic condition_variable-based approach.
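Here is a minimal sketch of that idea, using `concurrency::task_completion_event` from ppltasks.h in place of a plain std::promise, so that the task returned by `pop` can be `co_await`ed (via pplawait.h). The class name `awaitable_queue` and its members are made up for illustration:

```
// Hypothetical awaitable_queue: pop() returns a concurrency::task<T> that is
// either already completed (an element was waiting) or completes later when
// push() supplies a value. A plain std::mutex guards the internal state.
#include <ppltasks.h>
#include <deque>
#include <mutex>
#include <utility>

template <typename T>
class awaitable_queue {
public:
    concurrency::task<T> pop() {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (!m_items.empty()) {
            // An element is already available: hand back a ready task.
            T value = std::move(m_items.front());
            m_items.pop_front();
            return concurrency::task_from_result(std::move(value));
        }
        // Otherwise remember a completion event for this outstanding pop.
        concurrency::task_completion_event<T> tce;
        m_waiters.push_back(tce);
        return concurrency::task<T>(tce);
    }

    void push(T value) {
        concurrency::task_completion_event<T> tce;
        bool had_waiter = false;
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            if (m_waiters.empty()) {
                m_items.push_back(std::move(value));
            } else {
                tce = m_waiters.front();
                m_waiters.pop_front();
                had_waiter = true;
            }
        }
        if (had_waiter) {
            tce.set(std::move(value)); // completes the waiting pop() outside the lock
        }
    }

private:
    std::mutex m_mutex;
    std::deque<T> m_items;
    std::deque<concurrency::task_completion_event<T>> m_waiters;
};
```

The consumer can then write `co_await queue.pop()` inside a coroutine, or pass the returned task to something like pplpp::when_any together with other tasks, which is the usage pattern from the question.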

ComicSansMS

Posted a comment, but I might as well write this as an answer since it's long and I need formatting.

Basically, your two options are:

Lock-free queues, the most popular of which is this:

https://github.com/cameron314/concurrentqueue

They do have try_pop, because the queue uses atomic pointers, and any atomic method (e.g. std::atomic_compare_exchange_weak) can and will "fail" and return false at times, so you are forced to have a spin-lock over them.

You may find queues that abstract this inside a "pop" which just calls "try_pop" until it works, but that is the same overhead in the background.
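For illustration, such a wrapper over moodycamel's queue could look roughly like this (a sketch; `blocking_pop` is a made-up name, the library itself only exposes non-blocking calls such as `try_dequeue`):

```
// Hypothetical wrapper: keeps calling try_dequeue until an element is
// available, i.e. the "pop that spins on try_pop" pattern described above.
#include "concurrentqueue.h" // from the linked repository
#include <thread>

template <typename T>
T blocking_pop(moodycamel::ConcurrentQueue<T>& queue) {
    T item;
    while (!queue.try_dequeue(item)) {
        std::this_thread::yield(); // back off a little between attempts
    }
    return item;
}
```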

Lock-based queues:

These are easier to do on your own, without a third-party library: just wrap every method you need in locks. If you want to 'peek' very often, look into using std::shared_lock; otherwise just std::lock_guard should be enough to guard all the wrappers. However, this is what you may call a 'blocking' queue, since during an access, whether it is a read or a write, the whole queue will be locked.
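For example, a minimal lock-based wrapper around std::queue might look like this (a sketch; the name `locked_queue` is made up, and pop() blocks on a condition_variable until an element arrives, as suggested in the comments):

```
// Hypothetical locked_queue: every method takes the mutex, and pop() blocks
// the calling thread on a condition_variable until an element is available.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

template <typename T>
class locked_queue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_items.push(std::move(value));
        }
        m_cv.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return !m_items.empty(); });
        T value = std::move(m_items.front());
        m_items.pop();
        return value;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<T> m_items;
};
```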

There are no thread-safe alternatives to these two implementations. If you are in need of a really large queue (e.g. hundreds of GBs of memory worth of objects) under heavy usage, you can consider writing some custom hybrid data structure, but for most use cases moodycamel's queue will be more than sufficient.

George
  • How would you realize the `co_await` invocation from the question with either of those queues? – ComicSansMS Jun 13 '17 at 13:10
  • Also please don't confuse spin and spin-lock. You might want to spin on a lock-free queue, but you almost certainly don't want to put a spin-lock on it, as that would cancel out all the progress guarantees of lock-freedom. – ComicSansMS Jun 13 '17 at 13:11
  • Isn't a spin the same as a spin-lock? The only difference is that in the case of a lock-free queue the spin lock is on a single atomic pointer rather than the whole queue. But it's still a lock... unless spin-locking means something else; I always assumed it meant waiting in a loop for a condition to be met (e.g. atomic_exchange_swap to return true) – George Jun 13 '17 at 13:26
  • No, they are not the same, but the difference is somewhat subtle. A spin on a lock-free queue can only be caused (leaving aside spurious failures due to hardware effects) by another thread accessing the queue at the same time. The lock-free property guarantees you that if either of you has to spin, the other will always make progress, so you will never *both* spin. The same is not true for spin-locks. For instance, in the extreme case with spin locks you could just deadlock, which means everybody keeps spinning indefinitely and there is no progress at all. – ComicSansMS Jun 13 '17 at 14:19