I'm writing for an embedded system running µC/OS-II. I need to atomically write (and read) two integers: a value and a timestamp, which must stay synchronized with one another. The easiest way is to wrap the two writes in a critical section, which disables interrupts and hence task switching. But I was told this is very aggressive, and that it's easy to break the other real-time behavior by disabling interrupts.
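For reference, the critical-section version I have in mind is roughly this (a sketch assuming OS_CRITICAL_METHOD == 3, where the enter/exit macros save and restore the status register through a local cpu_sr):

```c
#include "ucos_ii.h"

static volatile INT32U g_value;
static volatile INT32U g_timestamp;

void write_pair(INT32U value, INT32U timestamp)
{
#if OS_CRITICAL_METHOD == 3
    OS_CPU_SR cpu_sr = 0u;           /* the enter/exit macros need this local */
#endif

    OS_ENTER_CRITICAL();             /* interrupts off: no ISR or task switch can interleave */
    g_value     = value;
    g_timestamp = timestamp;
    OS_EXIT_CRITICAL();              /* restore the previous interrupt state */
}
```

The read side is symmetric: take the same critical section around reading both variables.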
But writing two integers is such a small operation that I wasn't sure the full bookkeeping of a mutex is worth it.
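The mutex variant I benchmarked looks roughly like this (a sketch; OSMutexCreate takes the priority-inheritance priority, the 2 from my measurements, plus an error pointer):

```c
#include "ucos_ii.h"

static OS_EVENT *g_pair_mutex;       /* created once, before any task uses it */
static volatile INT32U g_value;      /* same pair as in the critical-section sketch */
static volatile INT32U g_timestamp;

void pair_init(void)
{
    INT8U err;
    g_pair_mutex = OSMutexCreate(2u, &err);   /* 2 = priority-inheritance priority (PIP) */
}

void write_pair(INT32U value, INT32U timestamp)
{
    INT8U err;

    OSMutexPend(g_pair_mutex, 0u, &err);      /* timeout 0 = wait forever */
    g_value     = value;
    g_timestamp = timestamp;
    (void)OSMutexPost(g_pair_mutex);
}
```

(Note that the PIP must be an otherwise unused priority, higher than that of every task sharing the mutex.)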
So I made some measurements: how long it takes, in milliseconds, to write these two values a million times. Everything ran in a single task, just to gauge the overhead of the different synchronization mechanisms (the measurement loop is sketched after this list). Here are the results:
- No synchronization mechanism: ~65 ms
- Critical section: ~185 ms
- Mutex with priority 2: ~1890 ms
- Scheduler locked: ~1750 ms
- Semaphore initialized to 1: ~1165 ms
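The loop itself was essentially the following (a sketch: write_pair() stands for whichever variant is under test, and the tick-to-millisecond conversion assumes the configured OS_TICKS_PER_SEC):

```c
#include "ucos_ii.h"

#define N_ITERATIONS 1000000u

extern void write_pair(INT32U value, INT32U timestamp);  /* variant under test */

void benchmark_task(void *p_arg)
{
    INT32U i;
    INT32U start_ticks;
    INT32U elapsed_ms;

    (void)p_arg;
    start_ticks = OSTimeGet();                    /* tick counter at start */
    for (i = 0u; i < N_ITERATIONS; i++) {
        write_pair(i, i);
    }
    /* fine for intervals of a few seconds; watch for overflow on long runs */
    elapsed_ms = (OSTimeGet() - start_ticks) * 1000u / OS_TICKS_PER_SEC;

    /* ... report elapsed_ms, then park the task */
    for (;;) {
        OSTimeDly(OS_TICKS_PER_SEC);
    }
}
```

The scheduler-lock and semaphore variants just swap OSSchedLock()/OSSchedUnlock(), or OSSemPend()/OSSemPost() on a semaphore from OSSemCreate(1), around the two writes.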
I admit I measured this with the debugger attached (I'm new to this and I'm not sure we have a profiler), but the relative numbers make sense to me: the critical section is fastest, and a mutex is slower than a semaphore because of the extra priority-inversion handling.
So should I conclude that a critical section is the best choice here? Or is disabling interrupts really such a bad thing to do? More generally, are there guidelines on when to use each synchronization mechanism?
UPDATE: A colleague suggested using a spin lock. Obviously it would have less overhead than the more advanced synchronization mechanisms, but I wonder whether it is actually better than a critical section in this specific case.
UPDATE 2: Come to think of it, since we have a single CPU, a spin lock won't do any good. The spinning task just burns CPU until a context switch lets the lock holder run, and if the spinner has the higher priority, that switch never happens under a strict-priority scheduler.
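To spell it out with a hypothetical naive flag lock (sketch only; these names are not real APIs): on a single core, a higher-priority task that starts spinning prevents the lower-priority holder from ever clearing the flag.

```c
#include "ucos_ii.h"

static volatile INT8U  g_locked = 0u;   /* hypothetical naive spin lock */
static volatile INT32U g_value;
static volatile INT32U g_timestamp;

void spin_write_pair(INT32U value, INT32U timestamp)
{
    while (g_locked != 0u) {
        ;   /* busy-wait: on one CPU this only ends if the scheduler preempts us,
               and the kernel never preempts a high-priority task for a lower one */
    }
    g_locked = 1u;                      /* note: this test-and-set is not even atomic */
    g_value     = value;
    g_timestamp = timestamp;
    g_locked = 0u;
}
```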