
I'm a bit confused: in production we have two processes communicating via shared memory; part of the data they exchange is a long and a bool. Access to this data is not synchronized. It's been working fine for a long time and still is. I know that modifying a value is not atomic, but considering that these values are modified/accessed millions of times, shouldn't this have failed by now?

Here is a sample piece of code that exchanges a number between two threads:

#include <pthread.h>
#include <xmmintrin.h>

typedef unsigned long long uint64;
const uint64 ITERATIONS = 500LL * 1000LL * 1000LL;

//volatile uint64 s1 = 0;
//volatile uint64 s2 = 0;
uint64 s1 = 0;
uint64 s2 = 0;

void* run(void*)
{
    register uint64 value = s2;
    while (true)
    {
        while (value == s1)
        {
            _mm_pause(); // busy spin
        }
        //value = __sync_add_and_fetch(&s2, 1);
        value = ++s2;
    }
}

int main(int argc, char* argv[])
{
    pthread_t threads[1];
    pthread_create(&threads[0], NULL, run, NULL);

    register uint64 value = s1;
    while (s1 < ITERATIONS)
    {
        while (s2 != value)
        {
            _mm_pause(); // busy spin
        }
        //value = __sync_add_and_fetch(&s1, 1);
        value = ++s1;
    }
}

As you can see, I have commented out a couple of things:

//volatile uint64 s1 = 0;

and

//value = __sync_add_and_fetch(&s1, 1);

__sync_add_and_fetch atomically increments a variable.
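
For example, just to illustrate the semantics (a standalone snippet, separate from the code above):

#include <stdio.h>

typedef unsigned long long uint64;

int main()
{
    uint64 x = 41;
    uint64 y = __sync_add_and_fetch(&x, 1); // atomically sets x to 42 and returns the new value
    printf("x=%llu y=%llu\n", x, y);        // prints x=42 y=42
    return 0;
}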

I know this is not very scientific, but when run a few times without the sync functions it works totally fine. Furthermore, if I time both versions, with and without sync, they run at the same speed. How come __sync_add_and_fetch isn't adding any overhead?

My guess is that the compiler is guaranteeing atomicity for these operations, and that's why I don't see a problem in production. But that still can't explain why __sync_add_and_fetch isn't adding any overhead (even when running in debug).

Some more details about my environment: Ubuntu 10.04, GCC 4.4.3, Intel i5 multi-core CPU.

The production environment is similar; it just runs on more powerful CPUs and on CentOS.

Thanks for your help.

Tadzys
  • What failure symptoms were you expecting from this code if the `++` wasn't atomic? – NPE Oct 19 '11 at 16:44
  • Well, if one thread is reading and another thread is writing the same thing at the same time without synchronisation, that is undefined behaviour, so I was expecting a crash or some random values; then I would know for sure that this is wrong and that I need synchronisation, and I would sleep better... – Tadzys Oct 20 '11 at 08:07
  • Explained here as well http://stackoverflow.com/questions/11608869/is-it-normal-that-the-gcc-atomic-builtins-are-so-slow – Nasir Jul 26 '16 at 12:24

3 Answers

Basically you're asking why you see no difference in behavior/performance between

s2++;

and

__sync_add_and_fetch(&s2, 1);

Well, if you go and look at the actual code generated by the compiler in these two cases, you will see that there IS a difference -- the s2++ version will have a simple INC instruction (or possibly an ADD), while the __sync version will have a LOCK prefix on that instruction.
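
You can check this yourself by compiling something like the following with g++ -O1 -S and reading the generated .s file (a sketch; the exact instructions depend on compiler version and flags):

typedef unsigned long long uint64;
uint64 s2 = 0;

uint64 plain()  { return ++s2; }                          // typically a plain add/inc, no LOCK prefix
uint64 synced() { return __sync_add_and_fetch(&s2, 1); }  // typically the same operation with a LOCK prefix (e.g. lock xadd)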

So why does it work without the LOCK prefix? Well, while in general the LOCK prefix is required for this to work on ANY x86-based system, it turns out it's not needed for yours. With Intel Core based chips, the LOCK is only needed to synchronize between different CPUs over the bus. When running on a single CPU (even with multiple cores), it does its internal synchronization without it.

So why do you see no slowdown in the __sync case? Well, a Core i7 is a 'limited' chip in that it only supports single socket systems, so you can't have multiple CPUs. Which means the LOCK is never needed and in fact the CPU just ignores it completely. Now the code is 1 byte larger, which means it could have an impact if you were ifetch or decode limited, but you're not, so you see no difference.

If you were to run on a multi-socket Xeon system, you would see a (small) slowdown for the LOCK prefix, and could also see (rare) failures in the non-LOCK version.
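
One way to look for such failures is to drop the ping-pong and have both threads increment the same counter concurrently; a minimal sketch of such a test (a hypothetical program, not the code from the question):

#include <pthread.h>
#include <stdio.h>

typedef unsigned long long uint64;
const uint64 N = 10 * 1000 * 1000;

volatile uint64 counter = 0; // volatile so the compiler doesn't collapse the loop into one add

void* hammer(void*)
{
    for (uint64 i = 0; i < N; ++i)
    {
        ++counter;                            // non-LOCKed read-modify-write
        //__sync_add_and_fetch(&counter, 1);  // LOCKed version, for comparison
    }
    return NULL;
}

int main()
{
    pthread_t t;
    pthread_create(&t, NULL, hammer, NULL);
    hammer(NULL);                             // the second "thread" is main itself
    pthread_join(t, NULL);
    // If any increments were lost, the printed value is below the expected total.
    printf("counter = %llu, expected %llu\n", counter, 2 * N);
    return 0;
}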

Chris Dodd

I think the compiler generates no atomicity unless you use some compiler-specific patterns, so that's a no-go.

If only two processes are using the shared memory, usually no problems will occur, especially if the code snippets are short enough. The operating system prefers to block one process and run another when it's best to do so (e.g. on I/O), so it will run one to a good point of isolation, then switch to the next.

Try running a few instances of the same application and see what happens.

AbiusX

I see you're using Martin Thompson's inter-thread-latency example.

My guess is that the compiler is guaranteeing atomicity for these operations, and that's why I don't see a problem in production. But that still can't explain why __sync_add_and_fetch isn't adding any overhead (even when running in debug).

The compiler doesn't guarantee anything here; the x86 platform you're running on does. This code will probably fail on funky hardware.

Not sure what you're doing, but C++11 does provide atomicity with std::atomic. You can also have a look at boost::atomic. I assume you're interested in the Disruptor pattern; I'll shamelessly plug my port to C++ called disruptor--.
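
For example, the ping-pong from the question could be rewritten with std::atomic roughly like this (a sketch; it needs a C++11 compiler, so something newer than the gcc 4.4.3 mentioned in the question):

// build with: g++ -std=c++11 -O2 -pthread
#include <atomic>
#include <thread>

typedef unsigned long long uint64;
const uint64 ITERATIONS = 500LL * 1000LL * 1000LL;

std::atomic<uint64> s1(0);
std::atomic<uint64> s2(0);

void run()
{
    uint64 value = s2;
    while (value < ITERATIONS)     // bounded so the thread can be joined
    {
        while (value == s1)
        {
            // busy spin (could _mm_pause() here as in the original)
        }
        value = ++s2;              // atomic increment, returns the new value
    }
}

int main()
{
    std::thread t(run);

    uint64 value = s1;
    while (s1 < ITERATIONS)
    {
        while (s2 != value)
        {
            // busy spin
        }
        value = ++s1;              // atomic increment, returns the new value
    }
    t.join();
}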

fsaintjacques