
There is a kind of lockless ring buffer that uses the compare-and-swap (CAS) atomic operation to avoid taking a lock. The claim is that this lockless ring buffer improves performance. But if you check the implementation of the lockless algorithm, you will find that each user still has to spin in a while loop, waiting for the compare-and-swap to succeed before it can continue. I understand that this is, in essence, still a kind of lock. So why does this technique supersede the traditional locking method?
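For concreteness, here is a minimal sketch of the retry pattern I mean (hypothetical C++, not taken from any particular library): multiple producers claim a slot index by looping on CAS. It only shows the index claim, not the full publish/consume protocol of a real ring buffer.

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical illustration of the CAS retry loop described above;
// not any specific library's implementation.
constexpr std::size_t kCapacity = 1024;

std::array<int, kCapacity> slots;             // ring storage
std::atomic<std::size_t> write_index{0};      // next free global index

// Claim the next slot by looping on compare-and-swap. The loop only
// repeats when another producer claimed the same index first.
std::size_t claim_slot() {
    std::size_t idx = write_index.load(std::memory_order_relaxed);
    while (!write_index.compare_exchange_weak(
            idx, idx + 1,
            std::memory_order_acq_rel,
            std::memory_order_relaxed)) {
        // On failure, idx has been reloaded with the current value; retry.
    }
    return idx % kCapacity;
}

int main() {
    std::vector<std::thread> producers;
    for (int t = 0; t < 4; ++t) {
        producers.emplace_back([t] {
            // 4 * 200 = 800 claims, so indices never wrap in this toy run
            // and every thread writes to a distinct slot.
            for (int i = 0; i < 200; ++i) {
                slots[claim_slot()] = t;
            }
        });
    }
    for (auto& p : producers) p.join();
    std::cout << "final write_index = " << write_index.load() << '\n';
}
```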

riveridea
  • Because the vast majority of accesses will not loop, or will only loop a couple of times. This is optimistic concurrency: you assume that no other thread conflicts, then handle the rare conflicting case separately. – user1937198 Aug 13 '17 at 09:35
  • It has not superseded it. It's another tool in the box that can improve performance in some cases on multi-core systems, specifically when it's highly likely that locks will only be held for short intervals, so that retries on any failed lock attempt will succeed quickly. It's a tradeoff between the costs of kernel locking (kernel call, ring change, context switch on failure) and the costs of looping on a failed lock ('slow' hardware-locked atomic ops, CPU used continually during the lock, memory bandwidth used continually during the lock). – Martin James Aug 13 '17 at 13:15
  • So, the basic advantage of the lockless ring buffer is not to avoid the conflict fundamentally (which is impossible), but to remove the overhead caused by OS-level switching. Right? – riveridea Aug 14 '17 at 19:22
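A toy sketch contrasting the two costs mentioned in the comments above: blocking on a mutex (which may involve the kernel under contention) versus spinning briefly on a CAS in user space. The numbers it prints are illustrative only and depend heavily on hardware and contention level.

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// Toy comparison only; not a rigorous benchmark.
constexpr int kThreads = 4;
constexpr int kIters = 500000;

long counter_locked = 0;
std::mutex mtx;
std::atomic<long> counter_cas{0};

void locked_worker() {
    for (int i = 0; i < kIters; ++i) {
        std::lock_guard<std::mutex> g(mtx);  // may block, possibly via the kernel
        ++counter_locked;
    }
}

void cas_worker() {
    for (int i = 0; i < kIters; ++i) {
        long cur = counter_cas.load(std::memory_order_relaxed);
        // Spin in user space until the CAS succeeds; usually one or two tries.
        while (!counter_cas.compare_exchange_weak(cur, cur + 1,
                                                  std::memory_order_relaxed)) {
        }
    }
}

template <typename F>
long long run_ms(F f) {
    auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> ts;
    for (int i = 0; i < kThreads; ++i) ts.emplace_back(f);
    for (auto& t : ts) t.join();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
}

int main() {
    std::cout << "mutex: " << run_ms(locked_worker)
              << " ms, counter = " << counter_locked << '\n';
    std::cout << "cas:   " << run_ms(cas_worker)
              << " ms, counter = " << counter_cas.load() << '\n';
}
```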

0 Answers