I'm trying to gain a deeper understanding of relaxed memory ordering. Per cppreference, there is no synchronization, but atomicity is still guaranteed. Doesn't atomicity in this case require some form of synchronization? For example, how does fetch_add() below guarantee that only one thread will update the value from y to y+1, particularly if writes can become visible to different threads out of order? Is there an implicit synchronization associated with fetch_add?
memory_order_relaxed Relaxed operation: there are no synchronization or ordering constraints imposed on other reads or writes, only this operation's atomicity is guaranteed (see Relaxed ordering below)
#include <thread>
#include <iostream>
#include <atomic>
#include <vector>
#include <cassert>
#include <cstdint>

using namespace std;

static const uint64_t incr = 100000000ULL;
atomic<uint64_t> x{0};  // initialize explicitly; a default-constructed atomic holds an indeterminate value before C++20

void g()
{
    for (uint64_t i = 0; i < incr; ++i)
    {
        // Relaxed atomic read-modify-write: no ordering guarantees,
        // but each increment is applied exactly once.
        x.fetch_add(1, std::memory_order_relaxed);
    }
}

int main()
{
    const int Nthreads = 4;
    vector<thread> vec;
    vec.reserve(Nthreads);
    for (int idx = 0; idx < Nthreads; ++idx)
        vec.emplace_back(g);
    for (auto &el : vec)
        el.join();
    // Does not trigger
    assert(x.load() == incr * Nthreads);
}
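
For contrast, here is a variant I would expect to lose increments (a sketch for illustration only; the name g_split is mine): the increment is split into a separate relaxed load and store, so each individual access is still atomic, but the read-modify-write as a whole no longer is.

#include <thread>
#include <iostream>
#include <atomic>
#include <vector>
#include <cstdint>

static const uint64_t incr = 100000000ULL;
std::atomic<uint64_t> x{0};

// Same loop, but the read and the write are two separate relaxed
// operations instead of a single atomic read-modify-write.
void g_split()
{
    for (uint64_t i = 0; i < incr; ++i)
    {
        uint64_t tmp = x.load(std::memory_order_relaxed);  // read current value
        x.store(tmp + 1, std::memory_order_relaxed);       // write back; may overwrite another thread's increment
    }
}

int main()
{
    const int Nthreads = 4;
    std::vector<std::thread> vec;
    vec.reserve(Nthreads);
    for (int idx = 0; idx < Nthreads; ++idx)
        vec.emplace_back(g_split);
    for (auto &el : vec)
        el.join();
    // Typically far less than incr * Nthreads, since increments get lost.
    std::cout << x.load() << " vs " << incr * Nthreads << '\n';
}

Running this version typically prints a total well below incr * Nthreads, which is what makes me think the atomicity of the fetch_add itself is the key property here, not any ordering guarantee.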