
I'm currently studying how semaphores and mutual exclusion actually work and encountered the following problem.

Let's assume we have a CPU with two cores, each running one process. Both processes now call wait() because they wish to enter a critical section:

wait(){
  while(s.value <= 0)
    ;  // busy waiting
  s.value--;
}

If both cores execute the code in parallel and the initial semaphore value is 1, both evaluate the while condition, which is false (since s = 1). That means both decrement the semaphore at nearly the same time, which results in s = -1. Now both processes enter their critical sections at the same time, and that shouldn't be possible under mutual exclusion.

What am I getting wrong?

Thanks for clarification.

Martin Bucher
    The wait for non-zero count and decrement are usually handled within the kernel of an operating system, in a manner that makes the check and decrement an atomic operation. – rcgldr Jun 12 '17 at 12:46

3 Answers


As you have already discovered, these are not simple user-space functions - it would be very tricky (impossible?) for you to implement a semaphore or mutex yourself without using the functions provided by the kernel.

For example, on Linux you have futex(2), on top of which user-space primitives such as pthread_mutex_lock() and sem_wait() are built.

You have the concept correct, but the two operations (the check and the inc/dec) need to be conducted in an "atomic" way - simplistically, this means they happen as one operation that cannot be split (read up on Linearizability).

Additionally, it's worth noting that you have implemented a 'busy loop'. When working with an operating system, that's a bad idea, as you deprive other tasks/processes of CPU time and raise power usage while doing no actual work - the functions mentioned above will "block" with 0% CPU usage, while yours will "block" with 100% CPU usage if given the chance.


You would have more luck trying to 'play' with such concepts when running on a single core (you can restrict your application's execution to a single core - look at sched_setaffinity()).

However, even if you get that going, you have very little control over whether your process is scheduled out at a bad time, causing your example application to break in exactly the same way. It might be possible to further improve your chances of correct operation by calling sched_setscheduler() with SCHED_FIFO, though I've not got first-hand experience with this (ref, ref).

Either way, this is not likely to be 100% reliable, while the kernel-supported functions should be.


If you're up for it, then the best way to play with the implementation details in your own functions would be to implement a very basic round-robin scheduler (that doesn't interrupt tasks) and run it on a micro or in a single thread.

Attie

In Java and other languages you can use synchronized to synchronize a block of code or a method, which avoids this kind of problem: when one thread is executing a synchronized method or block, all other threads that invoke synchronized methods or blocks on the same object suspend execution until the first thread is done.

poyo fever.

It's probably better to use the built-in functions for the semaphore. To wait using the pthreads library, you'd use the sem_wait() function, and to make the semaphore available, you'd use the sem_post() function.

sem_wait() will wait until the value of s is greater than 0, then atomically decrement the value of s and continue onward. How exactly this is implemented depends on the library, of course. It may look something like this:

sem_wait(){
    // Make a kernel function call to guarantee that the following code is atomic
    enter_critical_section();
    // Test the semaphore value. If it is positive, let the code continue.
    int sem_val = s.value;
    s.value--;
    if (sem_val > 0) {
        exit_critical_section();
        return;
    }
    else {
        // At this point we know the semaphore value is negative, so it's not
        // available. We'd want to block the caller and make a context switch
        // to a different thread or task.
        ... // Put the current thread on the list of blocked threads
        ... // Make a context switch to a ready thread
    }
    // When the semaphore is made available with a sem_post() function call
    // somewhere else, there will eventually be a context switch back to this
    // (blocked) thread. Simply exit the critical section, return to the
    // calling function, and let the program execute normally.
    exit_critical_section();
}

This code is actually based on an RTOS I implemented for a class. Each implementation will look very different, and there's a lot I haven't shown here, but it should give you a basic idea of how it could work.
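For symmetry, a sem_post() in the same pseudocode style might look like this (same caveats apply: enter_critical_section(), exit_critical_section(), and the scheduler steps are placeholders, not a real API):

```
sem_post(){
    enter_critical_section();
    s.value++;
    // If the value was <= 0 before the increment, at least one thread is
    // blocked on this semaphore: move one from the blocked list to the
    // ready list so the scheduler can eventually switch back to it.
    if (s.value <= 0) {
        ... // Take one thread off the list of blocked threads
        ... // Mark it ready to run
    }
    exit_critical_section();
}
```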

Finally, you mentioned in your hypothetical case that there were 2 separate processes sharing a single semaphore. That's possible - you just have to create the semaphore as process-shared, for example by calling sem_init() with a non-zero pshared argument on a semaphore that lives in shared memory.

mgarey