
I'm writing for an embedded system running uCOS-ii. I need to atomically write (and read) two integers (value and timestamp which should be synchronized with one another). The easiest way is to wrap the writing of the two values with a critical section, thus disabling any interrupts or task switching. But I was told this is very aggressive and that it's very easy to mess up the other real-time stuff by disabling interrupts.
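For reference, this is roughly what the critical-section version looks like (just a sketch with made-up names; I'm assuming OS_CRITICAL_METHOD 3, which is why the local cpu_sr is there):

    #include "ucos_ii.h"

    /* The pair that must always stay consistent. */
    static volatile INT32U g_value;
    static volatile INT32U g_timestamp;

    void write_sample(INT32U value, INT32U timestamp)
    {
    #if OS_CRITICAL_METHOD == 3      /* method 3 keeps the status register in a local */
        OS_CPU_SR cpu_sr = 0;
    #endif

        OS_ENTER_CRITICAL();         /* interrupts off: nobody can see a half-updated pair */
        g_value     = value;
        g_timestamp = timestamp;
        OS_EXIT_CRITICAL();          /* interrupts (and scheduling) restored */
    }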

But writing two integers is such a small operation that I wasn't sure the entire bookkeeping of using a mutex is worth it.

So I made some measurements. I measured how long it takes to write these two values a million times and counted the number of milliseconds it took. All this was done in a single task, just to understand the overhead of the different synchronization mechanisms. Here are the results:

  • No synchronization mechanism: ~65 ms
  • Critical section: ~185 ms
  • Mutex with priority 2: ~1890 ms
  • Scheduler locked: ~1750 ms
  • Semaphore initialized with 1: ~1165 ms
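For completeness, the benchmark loop was shaped roughly like this (a sketch, not the exact code; the mutex variant is shown, the other variants just swap the pend/post calls, and the tick-to-millisecond conversion depends on OS_TICKS_PER_SEC):

    extern OS_EVENT *g_mutex;            /* created elsewhere with OSMutexCreate() */
    static volatile INT32U result_ms;    /* read in the debugger afterwards */

    /* g_value and g_timestamp are the globals from the first snippet above */
    void benchmark_task(void *p_arg)
    {
        INT32U start_ticks, elapsed_ticks;
        INT32U i;
        INT8U  err;

        (void)p_arg;

        start_ticks = OSTimeGet();               /* tick count at start */
        for (i = 0; i < 1000000UL; i++) {
            OSMutexPend(g_mutex, 0, &err);       /* swap in the mechanism under test */
            g_value     = i;
            g_timestamp = i;
            OSMutexPost(g_mutex);
        }
        elapsed_ticks = OSTimeGet() - start_ticks;

        /* ticks -> ms; with OS_TICKS_PER_SEC == 1000 one tick is one millisecond */
        result_ms = (elapsed_ticks * 1000UL) / OS_TICKS_PER_SEC;
    }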

I admit I measured this with the debugger attached, since I'm new to this and I'm not sure we have a profiler. Still, it makes sense to me that the critical section is fastest and that a mutex is slower than a semaphore (because of all the priority-inversion handling).

So should I conclude from this that using a critical section is best? Or is disabling interrupts really such a bad thing to do? In general, are there guidelines on when to use each synchronization mechanism?

UPDATE: A colleague suggested using a spin lock. Obviously this will have smaller overhead than the more advanced synchronization mechanisms. But I wonder if it's better than a critical section in this specific case.

UPDATE 2: Come to think of it, since we have a single CPU, a spin lock won't do any good. It will just spin until a context switch...

Dina

3 Answers


If the time for which interrupts are disabled is less than the overhead of the other mechanisms, and less than the maximum permissible delay on any interrupt handler or task, then the simpler brute-force approach is probably the most appropriate.

However, you need to be certain that the length of the critical section will not grow unacceptably under maintenance, and that its use will not be seen as a green light to disable interrupts everywhere without due consideration. Consequently I suggest that you document its use in clear comments with its justification and constraints, i.e. why you did it and under what circumstances it is guaranteed to be safe in terms of meeting real-time deadlines.
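For example, a comment of roughly this shape at the point of use (the figures are illustrative, taken from the measurements in the question; substitute your own numbers and deadline budgets):

    /* CRITICAL SECTION - justification:
     * - Protects only the paired update of g_value/g_timestamp (two 32-bit
     *   stores, roughly 200 ns per update according to our measurements).
     * - Interrupts are disabled for that duration only, which is far below the
     *   tightest interrupt-latency / deadline budget in this system (state it here).
     * - Do NOT add further work inside this section without re-measuring.
     */
    OS_ENTER_CRITICAL();
    g_value     = new_value;
    g_timestamp = new_timestamp;
    OS_EXIT_CRITICAL();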

Clifford

For small synchronized operations with uCOS-II, just disable the interrupts.

All the mechanisms provided by uCOS-II will disable interrupts for a period that is longer than the time it would take to read or write two integers. Using them in a situation like this will actually hurt interrupt latency.
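The read side should mirror the write side, so a task never sees a value paired with the wrong timestamp. A sketch, under the same OS_CRITICAL_METHOD assumption as the snippet in the question:

    void read_sample(INT32U *p_value, INT32U *p_timestamp)
    {
    #if OS_CRITICAL_METHOD == 3
        OS_CPU_SR cpu_sr = 0;
    #endif

        OS_ENTER_CRITICAL();            /* take a consistent snapshot of both words */
        *p_value     = g_value;
        *p_timestamp = g_timestamp;
        OS_EXIT_CRITICAL();
    }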

D Krueger

I suspect disabling interrupts while writing two values is going to be fine. But it really depends on the real-time requirements of your application and we don't know what those are.

Is that 185 milliseconds to do the operation 1 million times? And does that imply that you'd be disabling interrupts for 185 nanoseconds on average? Do you have any real-time requirements where an extra 185 nanoseconds would cause you to miss a deadline and fail?

Take a look at the uC/OS-ii source code for the mutex and other services you're considering. I suspect you will find that those services disable interrupts for short periods of time. It's possible that using those services will cause interrupts to be disabled for more time than it would take you to write the two values.

There are many guidelines in embedded software development such as minimizing critical sections. Don't take all these guidelines as hard-and-fast rules. Instead learn and understand why each guideline exists. Then you'll know better when to abide by them and when to make an exception.

You want to minimize critical sections so that you don't disable interrupts for so long that either an interrupt or a real-time deadline is missed. Disabling interrupts for seconds is almost certainly bad. Disabling interrupts for milliseconds could be bad in many applications. Disabling interrupts for nanoseconds may be OK for many applications.

kkrambo