
I am working on a task that involves listening to a web service, which sends XML data through a push service. The data has to undergo some calculation and is then displayed.

I plan to use a queue: the service listener stores the data in it, and the business logic code reads the data from it. It is a pure single-producer single-consumer queue.

Since the data arrives as a web service push, the listener must always be ready to receive it and push it to the queue. I thought of using boost::lockfree::spsc_queue, because with a lockable queue the listener might have to wait a while to acquire the lock, whereas boost::lockfree::spsc_queue does not need any locks.

The data I am going to store is

struct MemoryStruct {
    char *memory;
    size_t size;
};

And the queue is

boost::lockfree::spsc_queue<MemoryStruct*> lockFreeQ{100};
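Roughly, this is how I imagine the two sides using the queue (onPush is a placeholder name for my actual listener callback, and the processing step stands in for the calculation and display):

// Producer side, called on the web service listener thread.
// onPush is a placeholder for the callback the push service invokes.
void onPush(MemoryStruct* msg)
{
    while (!lockFreeQ.push(msg)) {
        // push() returns false when the queue is full:
        // decide whether to drop the message, retry, or slow the listener down
    }
}

// Consumer side, called on the business logic thread.
void consumeAvailable()
{
    MemoryStruct* msg = nullptr;
    while (lockFreeQ.pop(msg)) {          // pop() returns false when the queue is empty
        // calculate(msg); display(msg);  // placeholders for the real processing,
        // then release msg->memory and msg
    }
}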

After reading the performance section of the Boost.Lockfree documentation, I got a bit confused.

Is it safe to use boost::lockfree::spsc_queue for production purposes, or should I use a standard (C++11) queue with locks?

Thanks

  • Yes, it is safe for production use - as long as you abide by the usage requirements. What section made you confused? – sehe Dec 06 '16 at 13:36
  • You use that `MemoryStruct` but are worried that `boost` will screw up your program? – nwp Dec 06 '16 at 13:39
  • Is there anything wrong with using MemoryStruct? – Kid Dec 06 '16 at 13:44
  • @sehe This section (http://www.boost.org/doc/libs/1_55_0/doc/html/lockfree.html#lockfree.introduction___motivation.introduction__amp__terminology): "When discussing the performance of non-blocking data structures, one has to distinguish between amortized and worst-case costs. The definition of 'lock-free' and 'wait-free' only mention the upper bound of an operation. Therefore lock-free data structures are not necessarily the best choice for every use case. In order to maximise the throughput of an application one should consider high-performance concurrent data structures" – Kid Dec 06 '16 at 13:47

1 Answer


Yes, it is safe for production use. Bear in mind, though, that a lock-free consumer typically busy-waits when the queue is empty; if the load does not saturate your CPU, that spinning will just bump the electricity bill. The usual mitigation is exponential back-off. ¹
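A rough sketch of what exponential back-off could look like on the consumer side (the timing constants are arbitrary, and lockFreeQ is the queue from the question):

#include <chrono>
#include <thread>

void consumerLoop()
{
    auto delay = std::chrono::microseconds{1};
    const auto maxDelay = std::chrono::milliseconds{10};

    for (;;) {
        MemoryStruct* msg = nullptr;
        if (lockFreeQ.pop(msg)) {
            // process(msg);                       // placeholder for the real work
            delay = std::chrono::microseconds{1};  // reset the back-off after useful work
        } else {
            std::this_thread::sleep_for(delay);    // back off instead of burning CPU
            if (delay < maxDelay) delay *= 2;      // grow the wait exponentially, capped
        }
    }
}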

If you're not at all sure about this, then this smells a lot like premature optimization, and you can probably use a locking queue.
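If you take the locking route, a plain std::queue guarded by a mutex and a condition variable is usually all you need. A minimal sketch (the class name is just illustrative, and this is only one of several reasonable ways to write it):

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class BlockingQueue {
    std::queue<T>           items_;
    std::mutex              mtx_;
    std::condition_variable cv_;
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            items_.push(std::move(value));
        }
        cv_.notify_one();                    // wake the consumer if it is waiting
    }

    T pop() {                                // blocks until an element is available
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return !items_.empty(); });
        T value = std::move(items_.front());
        items_.pop();
        return value;
    }
};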

You could still make sure your usage patterns make it easy to swap in a lock-free implementation later: write your own blocking pop() function that wraps the wait logic in the case of a lock-free implementation.
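For example (just a sketch, with made-up names), you could give the lock-free queue the same blocking pop() interface as the locking queue above, so the calling code never needs to know which implementation sits behind it:

#include <boost/lockfree/spsc_queue.hpp>
#include <thread>

struct MemoryStruct;   // as defined in the question

// Same "pop() blocks until data is available" interface as the locking queue,
// but backed by the lock-free spsc_queue; the wait logic lives inside the wrapper.
class LockFreeBlockingQueue {
    boost::lockfree::spsc_queue<MemoryStruct*> q_{100};
public:
    bool push(MemoryStruct* msg) { return q_.push(msg); }   // returns false when full

    MemoryStruct* pop() {
        MemoryStruct* msg = nullptr;
        while (!q_.pop(msg))
            std::this_thread::yield();   // or sleep / exponential back-off, as above
        return msg;
    }
};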

¹ see e.g. http://kukuruku.co/hub/cpp/lock-free-data-structures-the-evolution-of-a-stack

sehe
  • I am sorry, I am still a bit unclear about your statement "Make your own blocking pop() function that would wrap the wait logic in case of a lock-free implementation." – Kid Dec 06 '16 at 13:59
  • Making my own blocking pop() means surrounding pop() with lock() and unlock(). What is meant by wrapping wait logic in case of lock-free? Lock-free means wait-free, right? Then why should I wrap a lock-free implementation in wait logic? – Kid Dec 06 '16 at 14:01
  • TL;DR use a locking queue. Optionally, play around with lock-free queues so you see the difference(s) in interface (e.g. there'd be no `size()` or `empty()` calls). That will make it easier for you to migrate to the lock-less queue in a future version should you find out you need one after all. Just, one step at a time. Look at the things you do understand, expand your knowledge _one step at a time_ - follow your curiosity. Making jumps just leads to holes in your experience/understanding. – sehe Dec 06 '16 at 14:03
  • Re. "Making my own booking pop() means surround pop() using lock() and unlock()" - certainly not. That would be using an unlocked queue, by synchronizing access externally. That's not what a blocking queue does. It locks internally. **Forget about my wrapping suggestion, see previous comment.** – sehe Dec 06 '16 at 14:04
  • Hi, thanks for the reply and for introducing me to this jargon. I will read more about it and come back. – Kid Dec 07 '16 at 09:12
  • Hi, I have done some googling on blocking pop() and I tried to implement it: `void sharedQueue::lockFreeConsume() { for (int i = 0; i < 100; ++i) { while (lockFreeQ.read_available() == 0) { std::this_thread::yield(); } lockFreeQ.pop(); } }` Is this right? (Sorry for the poor formatting) – Kid Dec 07 '16 at 14:44