
std::atomic is a new feature introduced by C++11, but I can't find many tutorials on how to use it correctly. Are the following practices common and efficient?

One practice I use: we have a buffer and I want to CAS on some of its bytes, so what I did was:

uint8_t *buf = ....
auto ptr = reinterpret_cast<std::atomic<uint8_t>*>(&buf[index]);
uint8_t oldValue, newValue;
do {
  oldValue = ptr->load();
  // Do some computation and calculate the newValue;
  newValue = f(oldValue);
} while (!ptr->compare_exchange_strong(oldValue, newValue));

So my questions are:

  1. The above code uses an ugly reinterpret_cast. Is this the correct way to obtain an atomic pointer that refers to the location &buf[index]?
  2. Is a CAS on a single byte significantly slower than a CAS on a machine word, such that I should avoid it? The alternative (load a whole word, extract the byte, compute the new byte, splice it back into the word, and CAS the whole word, as sketched below) makes the code more complicated and also forces me to handle address alignment myself.

EDIT: If those questions are processor/architecture dependent, what is the conclusion for x86/x64 processors?
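
For concreteness, the word-wide alternative I describe in question 2 would look roughly like this (an untested sketch; it assumes a little-endian machine such as x86, a buffer that is at least 4-byte aligned, and it still relies on the same questionable cast):

size_t base = index & ~size_t(3);                 // start of the 32-bit word containing buf[index]
unsigned shift = (index & 3) * 8;                 // bit offset of that byte within the word (little-endian)
auto word = reinterpret_cast<std::atomic<uint32_t>*>(&buf[base]);
uint32_t oldWord, newWord;
do {
  oldWord = word->load();
  uint8_t oldByte = (oldWord >> shift) & 0xFF;    // extract the byte
  uint8_t newByte = f(oldByte);                   // same per-byte update as before
  newWord = (oldWord & ~(uint32_t(0xFF) << shift)) | (uint32_t(newByte) << shift);
} while (!word->compare_exchange_strong(oldWord, newWord));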

Kan Li
  • C++ Concurrency in Action [(early access)](http://www.manning.com/williams/), [(amazon)](http://www.amazon.com/gp/product/1933988770/ref=as_li_qf_sp_asin_tl?ie=UTF8&tag=gummadoon-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=1933988770) is probably the best book on this topic right now, or rather, will be. – Cubbi Jan 05 '12 at 20:37
  • There aren't many tutorials on atomics because, other than for a few simple cases like atomic flags, it's a minefield. Watching "The Hurt Locker" should be a prerequisite to using atomics. Use locks! – Bartosz Milewski Jan 05 '12 at 23:51
  • You want `compare_exchange_weak` since it's in a retry loop anyway. In C++20, `std::atomic_ref` will let you do an atomic operation on a C++ object that isn't always atomic. – Peter Cordes Feb 21 '23 at 23:46
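
For reference, the C++20 std::atomic_ref approach mentioned in the last comment would look roughly like this (a hypothetical sketch, not from the original post; it assumes buf[index] satisfies std::atomic_ref<uint8_t>::required_alignment and that nothing accesses the byte non-atomically while the atomic_ref is in use):

std::atomic_ref<uint8_t> ref(buf[index]);                 // C++20 only
uint8_t oldValue = ref.load();
uint8_t newValue;
do {
  newValue = f(oldValue);
} while (!ref.compare_exchange_weak(oldValue, newValue)); // updates oldValue on failure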

3 Answers

  1. The reinterpret_cast will yield undefined behaviour. Your variable is either a std::atomic<uint8_t> or a plain uint8_t; you cannot cast between them. The size and alignment requirements may be different: for example, some platforms only provide atomic operations on words, so std::atomic<uint8_t> will use a full machine word where a plain uint8_t can just use a byte. Non-atomic operations may also be optimized in all sorts of ways, including being significantly reordered with surrounding operations, and combined with other operations on adjacent memory locations where that can improve performance.

    This does mean that if you want atomic operations on some data then you have to know that in advance, and create suitable std::atomic<> objects rather than just allocating a generic buffer. Of course, you could allocate a buffer and then use placement new to initialize your atomic variable in that buffer (see the sketch after this list), but you'd have to ensure the size and alignment were correct, and you wouldn't be able to use non-atomic operations on that object.

    If you really don't care about ordering constraints on your atomic object then use memory_order_relaxed on what would otherwise be the non-atomic operations. However, be aware that this is highly specialized, and requires great care. For example, writes to distinct variables may be read by other threads in a different order than they were written, and different threads may read the values in different orders to each other, even within the same execution of the program.

  2. If CAS is slower for a byte than a word, you may be better off using std::atomic<unsigned>, but this will have a space penalty, and you certainly can't just use std::atomic<unsigned> to access a sequence of raw bytes --- all operations on that data must be through the same std::atomic<unsigned> object. You are generally better off writing code that does what you need and letting the compiler figure out the best way to do that.
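
A minimal sketch of the placement-new option from point 1 (the buffer size, slot layout, and helper name are invented for illustration, not part of any standard API):

#include <atomic>
#include <new>
#include <cstdint>
#include <cstddef>

// Raw storage with the alignment std::atomic<uint8_t> requires.
alignas(std::atomic<uint8_t>) unsigned char storage[64 * sizeof(std::atomic<uint8_t>)];

// Construct (exactly once per slot) an atomic byte inside that storage.
std::atomic<uint8_t>* make_atomic_at(std::size_t slot) {
  void* where = &storage[slot * sizeof(std::atomic<uint8_t>)];
  return new (where) std::atomic<uint8_t>(0);
}

// From then on, every access to that byte must go through the returned pointer;
// non-atomic access to the underlying storage is no longer allowed.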

For x86/x64, with a std::atomic<unsigned> variable a, a.load(std::memory_order_acquire) and a.store(new_value,std::memory_order_release) are no more expensive than loads and stores to non-atomic variables as far as the actual instructions go, but they do limit the compiler optimizations. If you use the default std::memory_order_seq_cst then one or both of these operations will incur the synchronization cost of a LOCKed instruction or a fence (my implementation puts the price on the store, but other implementations may choose differently). However, memory_order_seq_cst operations are easier to reason about due to the "single total ordering" constraint they impose.
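
To make that concrete, here is a hedged sketch (variable names invented) of the acquire/release pairing described above; on x86/x64 both marked operations compile to ordinary loads and stores, whereas a seq_cst store would cost a LOCKed instruction or a fence:

std::atomic<unsigned> data(0);
std::atomic<bool> ready(false);

void producer() {
  data.store(42, std::memory_order_relaxed);
  ready.store(true, std::memory_order_release);      // plain store on x86/x64
}

void consumer() {
  while (!ready.load(std::memory_order_acquire))     // plain load on x86/x64
    ;                                                // spin until the flag is set
  unsigned d = data.load(std::memory_order_relaxed); // guaranteed to observe 42
  (void)d;
}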

In many cases it is just as fast, and a lot less error-prone, to use locks rather than atomic operations. If the overhead of a mutex lock is significant due to contention then you might need to rethink your data access patterns --- cache ping pong may well hit you with atomics anyway.

Anthony Williams

Your code is certainly wrong and bound to do something funny; if things go really badly it might even do what you think it is intended to do. I won't go as far as explaining how to properly use e.g. CAS, but you would use std::atomic<T> something like this:

std::atomic<uint8_t> value(0); 
uint8_t oldvalue, newvalue;
do
{
    oldvalue = value.load();
    newvalue = f(oldvalue);
}
while (!value.compare_exchange_strong(oldvalue, newvalue));

So far my personal policy is to stay away from any of this lock-free stuff and leave it to people who know what they are doing. I would use atomic_flag and possibly counters, and that is about as far as I'd go. Conceptually I understand how this lock-free stuff works, but I also understand that there are way too many things which can go wrong if you are not extremely careful.
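
For illustration, the kind of limited use described above might look like this (a hedged sketch; the names are invented):

#include <atomic>

std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;  // the only type guaranteed to be lock-free
unsigned shared_value = 0;                      // plain data protected by the flag
std::atomic<unsigned> counter(0);               // a simple atomic counter

void update() {
  counter.fetch_add(1, std::memory_order_relaxed);          // counters: easy and safe
  while (lock_flag.test_and_set(std::memory_order_acquire))
    ;                                                       // spin until we own the flag
  ++shared_value;                                           // critical section
  lock_flag.clear(std::memory_order_release);
}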

Dietmar Kühl
  • I would say it is a problem coming from a real-world use case, not some academic homework. I would personally follow the standard as much as possible, but in real life sometimes I just can't. – Kan Li Jan 06 '12 at 08:56

Your reinterpret_cast<std::atomic<uint8_t>*>(...) is most definitely not the correct way to retrieve an atomic, and it is not even guaranteed to work. This is because std::atomic<T> is not guaranteed to have the same size as T.

To your second question about CAS being slower for bytes than for machine words: that is really machine dependent. It might be faster, it might be slower, or there might not even be a CAS for bytes on your target architecture. In the latter case the implementation will most likely either need to use a locking implementation for the atomic or use a different (bigger) type internally (which is one example of atomics not having the same size as the underlying type).

From what I see there is really no way to get a std::atomic onto an existing value, particularly since they aren't guaranteed to be the same size. Therefore you really should make buf a std::atomic<uint8_t>* directly. Furthermore, I'm relatively sure that even if such a cast did work, non-atomic access to the same address wouldn't be guaranteed to work as expected (since that access isn't guaranteed to be atomic, even for bytes). So having a non-atomic way to access a memory location you want to do atomic operations on doesn't really make sense.

Note that for common architectures stores and loads of bytes are atomic anyway, so you have little to no performance overhead from using atomics there, as long as you use relaxed memory order for those operations. So if you don't really care about ordering at some point (e.g. because the program isn't multithreaded yet), simply use a.store(0, std::memory_order_relaxed) instead of a.store(0).
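
Put together, a hedged sketch of that suggestion (the array size and function names are invented; f() stands for the questioner's per-byte update):

#include <atomic>
#include <cstdint>
#include <cstddef>

uint8_t f(uint8_t);                        // the questioner's update function

std::atomic<uint8_t> abuf[1024];           // the buffer itself is made of atomic bytes

uint8_t read_byte(std::size_t i) {
  // Cheap when no ordering is needed: on x86 this is just a byte load.
  return abuf[i].load(std::memory_order_relaxed);
}

void cas_update(std::size_t i) {
  uint8_t oldValue = abuf[i].load(std::memory_order_relaxed);
  uint8_t newValue;
  do {
    newValue = f(oldValue);
  } while (!abuf[i].compare_exchange_weak(oldValue, newValue)); // updates oldValue on failure
}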

Of course, if you are only talking about x86, your reinterpret_cast is likely to work in practice, but your performance question is probably still processor dependent (I think; I haven't looked up the actual instruction timings for cmpxchg).

Grizzly
  • I am 90% sure an atomic CAS on a byte will be slower than on a word, because it needs to do some bitwise operations. I want to know how much slower it would be. Another thing is, I don't agree with you that a read/write of a single byte is atomic, at least on x86. Thanks for your suggestion to use an atomic array instead of a byte array, which works, but it will make loading from the bytes slower as well, which is not what I want. Actually, 99% of the time I can tell no other thread is storing to the array, so the extra barrier is not needed; only for a short period of time do I need to do the above stuff. – Kan Li Jan 06 '12 at 08:51
  • @icando: As I said, it's platform dependent. But since you are talking about x86: why would an atomic operation be slower on a byte than on a word? What do you mean by it needing to do some bitwise operations? x86 can natively store bytes and has an 8-bit `cmpxchg`, so it shouldn't matter (well, that's not exactly true, but it shouldn't have more impact than using bytes instead of machine words has anyway). And about the extra barrier: that's why I suggested `memory_order_relaxed`, which should eliminate most of the extra cost, since loads/stores are atomic anyway (on x86 at least). – Grizzly Jan 06 '12 at 15:11