I am working with a custom embedded real-time OS that has a home-brew threading and synchronization facility. Mutual exclusion is implemented in a way similar to the one outlined below:
#include <stdbool.h>

typedef int Mutex;

#define MUTEX_ACQUIRED 1
#define MUTEX_RELEASED 0

bool AquireMutex(Mutex* pMutex)
{
    bool Ret;
    // Assume atomic section here (implementation specific)
    if( *pMutex == MUTEX_RELEASED )
    {
        *pMutex = MUTEX_ACQUIRED;
        Ret = true;
    }
    else
    {
        Ret = false;
    }
    // Atomic section end
    return Ret;
}

void ReleaseMutex(Mutex* pMutex)
{
    // Assume atomic section here (implementation specific)
    *pMutex = MUTEX_RELEASED;
    // end atomic section
}
Let's assume the two functions above are atomic (they are in the actual implementation, but the details are irrelevant to the question).
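(Just for concreteness, and not part of the question: on a single-core target such an atomic section is often realized by briefly masking interrupts. The EnterCritical/ExitCritical names below are placeholders I made up, not the real OS API.)

// Hypothetical sketch of the atomic section -- EnterCritical()/ExitCritical()
// are placeholder names, not the actual implementation.
bool AquireMutex(Mutex* pMutex)
{
    bool Ret;
    unsigned int saved = EnterCritical();  // e.g. mask interrupts, remember previous state
    if( *pMutex == MUTEX_RELEASED )
    {
        *pMutex = MUTEX_ACQUIRED;
        Ret = true;
    }
    else
    {
        Ret = false;
    }
    ExitCritical(saved);                   // restore the previous interrupt state
    return Ret;
}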
Each of the threads shares some globally defined m and runs code similar to this:
extern Mutex m;
// .............
while (!AquireMutex(&m)) ;
// Do stuff
ReleaseMutex(&m);
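(For completeness, m itself is defined once in a shared source file, roughly like this; the initializer is an assumption on my part, it just has to start out in the released state.)

Mutex m = MUTEX_RELEASED;   // shared, globally defined mutex object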
The question is about the line:
while (!AquireMutex(&m)) ;
Will AquireMutex actually be evaluated on each iteration? Or will the optimizer just treat its result as a constant, since it cannot see how m is changed?
Should AquireMutex instead be declared with a volatile qualifier:
bool AquireMutex(volatile Mutex* pMutex);
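For clarity, the volatile-qualified alternative I have in mind would look roughly like this (presumably ReleaseMutex and the shared object would get the same treatment; whether any of this is actually needed is exactly what I am asking):

// Volatile-qualified sketch of the same interface -- is this necessary?
bool AquireMutex(volatile Mutex* pMutex);
void ReleaseMutex(volatile Mutex* pMutex);
extern volatile Mutex m;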