
I am working on a project where I need to block all threads when a certain thread starts execution. I have considered using thread flags, but I believe this would involve adding checks to all the threads. I have also considered using a mutex to block all threads except the critical thread, which needs sole control of the processor. The reason I haven't yet used a mutex is that I have read it only relates to resources, and that threads not linked to the mutex would continue to execute; however, I may have misunderstood this.

Could you please tell me if my approach to the mutex idea is correct or if I should use another method?

Edit: I am using Keil RTX 5/CMSIS RTOS 2 on the STM32H753 chip

Thanks

wolly981
  • I would question your priority assignments and task decomposition if you feel a _critical section_ or scheduler lock is necessary - it is often indicative of a design flaw. Mutexes can be used for thread synchronisation, but you seem to want an asynchronous scheduler lock. – Clifford Jan 13 '20 at 19:14
  • First, let's clear up a misunderstanding: Mutexes are *usually* related to some concrete resource (e.g., the I²C peripheral driving a bus with several slave chips, or some global variable). In the case you mention, the resource to be protected is the CPU and the consistency of the execution context during the period you want to lock other tasks out. That is, you can in principle use a mutex to achieve what you describe. On the other hand, you are right that every task that must be blocked away from the critical section has to try to take that mutex, so all relevant functions would need to be patched. – HelpingHand May 01 '20 at 16:41

3 Answers


The CMSIS RTOS has a pair of functions osKernelLock() and osKernelUnlock() - dynamically modifying thread priorities or using mutexes is unnecessary and probably ill-advised.

Any other RTOS will have a similar critical-section API.

Note that this only prevents task context switching; it does not prevent interrupts from running. This is normally desirable, but if you need to hold off interrupts too, you can simply disable them with __disable_irq()/__enable_irq(). That will prevent both task switches and interrupts.

Disabling interrupts is brute-force and has a greater impact on the real-time behaviour of your system than even a scheduler lock. Generally it should be done only for very short periods (as should scheduler locking).
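As a sketch of the pattern (the two CMSIS-RTOS2 calls are stubbed here so the fragment compiles off-target; on the STM32H753 they come from cmsis_os2.h and RTX5, and `critical_work` is a hypothetical name for your exclusive section):

```c
#include <stdint.h>

/* Host-side stubs standing in for the real CMSIS-RTOS2 API so this
 * sketch compiles off-target; on the STM32H753 you would include
 * cmsis_os2.h and link against RTX5 instead. */
static int32_t kernel_locked = 0;
static int32_t osKernelLock(void)   { int32_t prev = kernel_locked; kernel_locked = 1; return prev; }
static int32_t osKernelUnlock(void) { int32_t prev = kernel_locked; kernel_locked = 0; return prev; }

/* Hypothetical exclusive section of the critical thread. */
void critical_work(void)
{
    osKernelLock();     /* scheduler locked: no other thread is switched in */

    /* ... the work that must not be preempted by another thread ...
     * Interrupts still fire here; add __disable_irq()/__enable_irq()
     * only if they must be held off too, and keep the section short. */

    osKernelUnlock();   /* scheduler unlocked: normal switching resumes */
}
```

Note the real osKernelLock() returns the previous lock state, so nested use can restore it with osKernelRestoreLock() rather than unconditionally unlocking.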

Clifford

What RTOS are you using? I'll assume it's a priority-based RTOS.

Don't layer another scheduling mechanism with thread flags or a mutex. Just use the scheduler that you already have.

If you want one thread to run exclusively, then make that thread the highest priority thread. The RTOS scheduler will run the highest priority thread that is ready to run. If your thread is highest priority and doesn't block itself, then the other threads will not run. In CMSIS-RTOS you can change a thread's priority with osThreadSetPriority().
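The boost-and-restore pattern might look like the following sketch. The CMSIS-RTOS2 calls are stubbed so it compiles on a host; `enter_exclusive`/`leave_exclusive` are hypothetical helper names, not part of any API:

```c
#include <stdint.h>

/* Host-side stand-ins for the CMSIS-RTOS2 thread API so the pattern
 * compiles off-target; on RTX5 these come from cmsis_os2.h. */
typedef void *osThreadId_t;
typedef int   osStatus_t;
typedef enum { osPriorityNormal = 24, osPriorityRealtime = 48 } osPriority_t;

static osPriority_t current_prio = osPriorityNormal;   /* simulated own priority */
static osThreadId_t osThreadGetId(void)                  { return (osThreadId_t)1; }
static osPriority_t osThreadGetPriority(osThreadId_t id) { (void)id; return current_prio; }
static osStatus_t   osThreadSetPriority(osThreadId_t id, osPriority_t p)
                                                         { (void)id; current_prio = p; return 0; }

static osPriority_t saved_prio;   /* priority to restore afterwards */

/* Boost the calling thread above everything else... */
void enter_exclusive(void)
{
    osThreadId_t self = osThreadGetId();
    saved_prio = osThreadGetPriority(self);
    osThreadSetPriority(self, osPriorityRealtime);  /* lower-prio threads cannot run now */
}

/* ...and drop back so normal scheduling resumes. */
void leave_exclusive(void)
{
    osThreadSetPriority(osThreadGetId(), saved_prio);
}
```

Keep in mind the caveat from the comments below: on a multi-core system this does not stop threads running on other cores, and the boosted thread must not block, or lower-priority threads will be scheduled anyway.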

kkrambo
  • Raising priority of one thread doesn't block all other lower priority threads, unless there is only one CPU core available. – Maxim Egorushkin Jan 06 '20 at 15:45
  • If CMSIS-RTOS is in use (which seems likely from the tagging), then `osKernelLock()`/`osKernelUnlock()` would be more appropriate than dynamically messing with thread priorities. Even if not, I doubt there is any RTOS worthy of the name that does not have similar critical-section support. – Clifford Jan 13 '20 at 18:56
  • Fixing critical-section problems by assigning different priorities often just leads to the next problem. Priorities should be assigned according to the response-time requirements of tasks, or other, more important criteria. Many critical sections occur in less typical moments (such as power-up), where a task that is better kept low-priority has to run uninterrupted for a short period. – HelpingHand May 01 '20 at 17:08

Don't change the priorities of your tasks directly from your user code. Most RTOSes provide APIs that let you do this, but it is bad style since it creates more problems than it solves. An exception is when certain RTOS functions do this internally (e.g., mutexes with priority inheritance, which avoid certain multi-task issues).

I guess you want a longer critical section only during the power-up phase of your system, or some other very special phase of its runtime. Otherwise, you should really listen to @Clifford's comment and question your priority assignments and task decomposition.

If you need that sequential period only during the init/power-up phase, this is a typical situation that can be handled with good task design, too. In that case, what you need is runlevel management.

Runlevel Management

The simplest way to implement this is to write a little library on top of your RTOS, using two counting semaphores: one for the present runlevel, the other for the next one. When a runlevel is entered, its semaphore is filled with as many tokens as there are tasks under the control of the runlevel management. Every task waiting to process a given runlevel tries to take its token from the current-runlevel semaphore. When the task has finished its part of the runlevel, it tries to acquire from the next-level semaphore, which will be unavailable at that time, so the task blocks until the next level opens.

Before populating the runlevel management configuration, you can draw yourself a sequence diagram to check at which points tasks must wait for others, and why. Usually only a few of the runlevels are relevant for each task - and per runlevel, the set of relevant tasks may be small, too. Therefore you may want to add a little helper function like WaitForRunlevelNumber(N) with a loop that automatically skips the runlevels that aren't relevant.

Runlevel management ends once all phases that require explicit synchronisation are finished. Then all tasks are released to run freely. Sometimes you may want to release low-priority tasks earlier, once they have finished all of their critical phases; in that case the number of tasks managed can decrease from one runlevel to the next.
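To make the token scheme concrete, here is a minimal host-side sketch using POSIX counting semaphores, with one semaphore per level plus a `done` semaphore by which the manager learns a level is complete - a small variation on the two-semaphore description above. On CMSIS-RTOS2 the equivalents would be osSemaphoreNew()/osSemaphoreAcquire()/osSemaphoreRelease(), and the manager loop would live in your runlevel library; all names and counts here are illustrative.

```c
#include <pthread.h>
#include <semaphore.h>

enum { NUM_TASKS = 3, NUM_LEVELS = 3 };    /* illustrative sizes */

/* level[k] holds the tokens for runlevel k; tasks block on it until the
 * manager fills it. 'done' is how tasks hand their tokens back. */
static sem_t level[NUM_LEVELS];
static sem_t done;
static int   completed[NUM_LEVELS];         /* tasks finished per level */
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

static void *task(void *arg)
{
    (void)arg;
    for (int k = 0; k < NUM_LEVELS; ++k) {
        sem_wait(&level[k]);                /* take our token for level k */
        pthread_mutex_lock(&mtx);           /* this task's work for level k */
        completed[k]++;
        pthread_mutex_unlock(&mtx);
        sem_post(&done);                    /* report the level as finished */
    }                                       /* loop blocks on the next level */
    return NULL;
}

/* Runlevel manager: open one level at a time, and move on only when
 * every participating task has handed its token back. */
void run_runlevels(void)
{
    pthread_t th[NUM_TASKS];
    for (int k = 0; k < NUM_LEVELS; ++k) sem_init(&level[k], 0, 0);
    sem_init(&done, 0, 0);

    for (int i = 0; i < NUM_TASKS; ++i)
        pthread_create(&th[i], NULL, task, NULL);

    for (int k = 0; k < NUM_LEVELS; ++k) {
        for (int i = 0; i < NUM_TASKS; ++i) sem_post(&level[k]);  /* fill level k */
        for (int i = 0; i < NUM_TASKS; ++i) sem_wait(&done);      /* drain it */
    }

    for (int i = 0; i < NUM_TASKS; ++i) pthread_join(th[i], NULL);
}
```

Because the manager waits for all tokens of level k to come back before filling level k+1, no task can start a level before every task has finished the previous one - exactly the sequential init behaviour asked about, without touching priorities.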

HelpingHand