
Assume that the following code is being executed by 10 threads.

pthread_mutex_lock(&lock);
/* some trivial code */
pthread_mutex_unlock(&lock);

For the purpose of explanation, let's say the threads are T1, T2, T3, ..., T10. My requirement is that as long as T1, T2 or T3 (i.e. any of T1, T2 or T3) is waiting to acquire the lock, the other threads, i.e. T4, T5, T6, ..., T10, should not be able to acquire it. In other words, T1, T2 and T3 should have precedence over the other threads in acquiring the lock.

I guess it could be done by increasing the priority of threads T1, T2 and T3.

Here is the pseudocode:

if this thread is T1, T2 or T3
    increase its priority
pthread_mutex_lock(&lock);
/* some trivial code */
pthread_mutex_unlock(&lock);
if this thread is T1, T2 or T3
    decrease its priority back to normal

Please note that I want a solution that works on the Linux platform and uses pthreads. I don't really care about any other platform.

Also note that I don't really want to make these 3 threads real-time; I want them to exhibit their default behaviour (scheduling and priority), except that in the small piece of code above I want them to always have precedence in acquiring the lock.

I have read some man pages about scheduling policies and scheduling priorities in Linux but can't really make sense of them :(

Will this work? Can you help me with the exact pthread API required to accomplish the above task?

Regards lali

bstpierre
ghayalcoder

6 Answers


Here's my implementation. Low priority threads use prio_lock_low() and prio_unlock_low() to lock and unlock; high priority threads use prio_lock_high() and prio_unlock_high().

The design is quite simple. High priority threads are held at the critical section mutex `cs_mutex`; low priority threads are held at the condition variable. The condition variable mutex `cv_mutex` is only held around updates to the shared variable and signalling of the condition variable.

#include <pthread.h>

typedef struct prio_lock {
    pthread_cond_t cond;
    pthread_mutex_t cv_mutex; /* Condition variable mutex */
    pthread_mutex_t cs_mutex; /* Critical section mutex */
    unsigned long high_waiters;
} prio_lock_t;

#define PRIO_LOCK_INITIALIZER { PTHREAD_COND_INITIALIZER, PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER, 0 }

void prio_lock_low(prio_lock_t *prio_lock)
{
    pthread_mutex_lock(&prio_lock->cv_mutex);
    while (prio_lock->high_waiters || pthread_mutex_trylock(&prio_lock->cs_mutex))
    {
        pthread_cond_wait(&prio_lock->cond, &prio_lock->cv_mutex);
    }
    pthread_mutex_unlock(&prio_lock->cv_mutex);
}

void prio_unlock_low(prio_lock_t *prio_lock)
{
    pthread_mutex_unlock(&prio_lock->cs_mutex);

    pthread_mutex_lock(&prio_lock->cv_mutex);
    if (!prio_lock->high_waiters)
        pthread_cond_signal(&prio_lock->cond);
    pthread_mutex_unlock(&prio_lock->cv_mutex);
}

void prio_lock_high(prio_lock_t *prio_lock)
{
    pthread_mutex_lock(&prio_lock->cv_mutex);
    prio_lock->high_waiters++;
    pthread_mutex_unlock(&prio_lock->cv_mutex);

    pthread_mutex_lock(&prio_lock->cs_mutex);
}

void prio_unlock_high(prio_lock_t *prio_lock)
{
    pthread_mutex_unlock(&prio_lock->cs_mutex);

    pthread_mutex_lock(&prio_lock->cv_mutex);
    prio_lock->high_waiters--;
    if (!prio_lock->high_waiters)
        pthread_cond_signal(&prio_lock->cond);
    pthread_mutex_unlock(&prio_lock->cv_mutex);
}
caf
  • this should work, but is very verbose compared to the same logic achieved with a semaphore – ULysses Jul 19 '10 at 13:55
  • @ULysses: I suppose, but at least it just uses pthreads primitives and it'd be hidden in an implementation module - the code using it would simply do `prio_lock_*(&lock); /* ... */ prio_unlock_*(&lock);` around its critical section. – caf Jul 19 '10 at 21:20
  • You have a bug here if the path `while (prio_lock->high_waiters` is followed, you don't lock the cs_mutex, but you'll unlock it. I'm almost sure it's bad. – xryl669 Jul 12 '19 at 10:25
  • @xryl669: For the `while()` loop to finish, the `||` must evaluate to `0`, which means that `pthread_mutex_trylock()` must have been called and returned `0` - ie, the mutex has been locked. – caf Jul 12 '19 at 10:50
  • Or `high_waiters` is 0, in which case the function returns as if the lock was taken but it's not. Later, the user calls `unlock` and the code calls `pthread_mutex_unlock` on an unlocked mutex. – xryl669 Jul 12 '19 at 10:57
  • @xryl669: It's an `||`, so `high_waiters` must be `0` *and* `pthread_mutex_trylock()` must have returned `0`. It won't actually even call the `pthread_mutex_trylock()` while `high_waiters` is non-zero. – caf Jul 12 '19 at 11:02

As I understand it, the only way you can truly guarantee this would be to write a lock that works like that yourself. However @xryl669's answer that suggests using thread priority and priority inheritance is certainly worthy of consideration if it works for your use case.

To implement it yourself, you will need condition variables and counts of the number of waiting low / high priority threads.

In terms of the concepts and APIs you'll need, it is relatively similar to implementing a read/write lock (the semantics you need are completely different, obviously, but if you understand how the r/w lock works, you'll understand how to implement what you want).

You can see an implementation of a read write lock here:

http://ptgmedia.pearsoncmg.com/images/0201633922/sourcecode/rwlock.c

In the lower priority threads, you'd need to wait for high priority threads to finish, in the same way readers wait for writers to finish.

(The book the above code is taken from is also a great POSIX threads book, btw: http://www.informit.com/store/product.aspx?isbn=0201633922 )

JosephH

The native way is to enable priority inheritance for your mutex (via the pthread_mutexattr functions), and use pthread thread priorities to get what you need. It only requires a few lines of code, and you are not re-inventing the wheel. On the plus side, it'll also work with the RT or FIFO scheduler, while your homebrew version will not.

Then, whenever a thread with a high priority waits on a mutex that's held by a thread of lower priority, the kernel "boosts" the low priority thread so it can be scheduled in place of the high priority thread, giving it a timeslice to release the lock. As soon as the lock is released, the high priority thread is scheduled. That's the lowest delay you could get, since it's done in the kernel.

xryl669

Alternatively, you may just introduce another lock for higher priority threads. Consider the following pseudo-code (I am not familiar with the pthread semantics, but I believe it is not hard to map this code to the needed calls).

EDIT (thanx JosephH)

Introducing the `exec` semaphore initialised to 3 (the number of high-prio threads). Note that `pend(exec,3)` means that this pend will sleep until all 3 slots are available, and will consume them all.



//init
exec = semaphore(3,3);

//========================

if this is NOT thread (t1,t2,t3)
    lock(low_prio);
    sem_pend(exec,3);
else
    sem_pend(exec,1);
lock(high_prio);
//...
unlock(high_prio);
if this is NOT thread (t1,t2,t3)
    sem_release(exec,3);
    sleep(0); //yield();  //ensures that sem_pend(exec,1) is executed
    unlock(low_prio);
else
    sem_release(exec,1);
ULysses
  • Sorry, but I really don't see how that works. If both t1 and t4 try to take the lock whilst t2 has it, then t1 and t4 will both end up waiting at the lock(high_prio) and it will not be guaranteed which one will wake first. – JosephH Jul 19 '10 at 12:10
  • yepp, you are right, this i have missed. Still, i am sure there is a way, consider the edit.. – ULysses Jul 19 '10 at 12:44
  • I don't think this would work either; two locks approach could work but you would have to give the 'high priority' threads a chance to overtake the 'low priority' ones (so it should be implemented as a two step lock). – Unreason Jul 19 '10 at 12:47
  • I don't think using a sleep() is a good idea; it's probably not guaranteed to do what you want. Also being rather picky, semaphore() is not part of pthreads, the original poster explicitly said he wanted a pthread solution. (I've not had time to fully think through your solution yet.) – JosephH Jul 19 '10 at 14:54
  • @JosephH, yield or sleep is a precautionary mean. But i do agree that there is no `sem_wait()` implementation in `pthreads` that takes another parameter that tells how many slots to acquire. In this way my answer is not really useful. – ULysses Jul 19 '10 at 15:41

(The first two attempts had bugs, please jump to EDIT2)

Maybe this would work?

if NOT this thread is T1 or T2 or T3
    pthread_mutex_lock(&lock1) // see note below
    pthread_mutex_lock(&lock2)
    Some trivial code
    pthread_mutex_unlock(&lock2)
    pthread_mutex_unlock(&lock1)
else
    pthread_mutex_lock(&lock2)
    Some trivial code
    pthread_mutex_unlock(&lock2)        
end if

Reasoning: some threads will compete for two locks and therefore will have lower priority, while other threads will compete for only one lock and therefore will have higher priority. Still, the difference might be marginal; the fix would then be to introduce some lag between acquiring the first lock and attempting the second, during which the higher priority threads would be given a chance to get lock2.
(disclaimer: I am a newbie when it comes to this)

EDIT: Another attempt/approach

if NOT (this thread is T1 or T2 or T3)  
    pthread_mutex_lock(&lock1)
    if pthread_mutex_trylock(&lock2) == 0  // low priority threads will not get queued
        Some trivial code
        pthread_mutex_unlock(&lock2)
    end if
    pthread_mutex_unlock(&lock1)
else 
    if (this thread is T1 or T2 or T3)
        pthread_mutex_lock(&lock2)
        Some trivial code
        pthread_mutex_unlock(&lock2)        
    end if
end if

EDIT2: Another attempt (trying to learn something here)

if NOT (this thread is T1 or T2 or T3)  
    pthread_mutex_lock(&lock1)
    while !(pthread_mutex_trylock(&lock2) == 0)
        pthread_yield()
    Some trivial code
    pthread_mutex_unlock(&lock2)
    pthread_mutex_unlock(&lock1)
else 
    if (this thread is T1 or T2 or T3)
        pthread_mutex_lock(&lock2)
        Some trivial code
        pthread_mutex_unlock(&lock2)        
    end if
end if
Unreason
  • i suspect that you wanted to test if a thread `is NOT t1 or t2 or t3` i.e. low priority. this approach has a type of flaw that my first version had: if hp thread is executing, then there might be two other hp and lp threads waiting on the lock2, and you can not tell which will acquire it first – ULysses Jul 19 '10 at 13:08
  • @ULysses, yes NOT was intended (edited it). As for the several threads waiting for lock2, I think you are right. I mention introducing the lag, but seems that is not enough to guarantee the priority and if I understand correctly your semaphore approach would guarantee it. I'll put another edit... – Unreason Jul 19 '10 at 13:29
  • now what you have is an lp thread fall-though in case an hp thread is executing – ULysses Jul 19 '10 at 13:49
  • @Ulysses, what about `while !(pthread_mutex_lock(&lock1) AND pthread_mutex_try(&lock2))` ? Sort of get a lock on low priority and try a lock on high and if you fail do it again. – Unreason Jul 19 '10 at 14:00
  • nope, this way you have troubles with lock1 on the second iteration of the loop: you either deadlock on it, or always get error, or increment lock count, or even worse, have an undefined behavior. See http://opengroup.org/onlinepubs/007908775/xsh/pthread_mutex_lock.html – ULysses Jul 19 '10 at 14:14
  • @ULysses, does the same comment apply to the EDIT2? – Unreason Jul 19 '10 at 14:27
  • ok, it looks better now. I am not sure, but in such cases I believe there is a chance that the try_lock succeeds before an hp thread acquires lock2: consider hp thread executing, another hp thread waiting on lock2 and lp thread in a loop; now the first hp thread releases lock2 and at the same moment the lp thread executes try_lock and succeeds before the second hp thread. So I'd still refine the code, but i have no experience in pthread, maybe it handles such situations correctly. – ULysses Jul 19 '10 at 14:42
  • The way I understand it (limited!:)) lp in the loop would not get a lock. I assume the try_lock will only get a lock if there are no threads that are blocked on a mutex. From the opengroup.org: *'If there are threads blocked on the mutex object referenced by mutex when pthread_mutex_unlock() is called, resulting in the mutex becoming available, the scheduling policy is used to determine which thread shall acquire the mutex.'* - I assume this means that for the threads of the same priority the thread can not get to it out of order; so if there was hp blocked on a mutex it would run before lp. – Unreason Jul 19 '10 at 14:52
  • Even your last method still busy-waits when the low priority thread is locked, which is not a good design. – caf Jul 19 '10 at 21:22
  • @caf, I see, basically it is not necessary to loop mindlessly because you can use signals to avoid looping. – Unreason Jul 19 '10 at 22:29
  • @Unreason: any MT code using any form of yield IMO is broken. caf posted probably simplest implementation for the case of two priorities. it is correct and is pretty much how that supposed to be done. do not try to reinvent wheels, square at that. (I posted a bit overkill solution for N priorities.) – Dummy00001 Jul 19 '10 at 22:47

To implement that with pthreads you would need N lists, one per thread priority. The lists would contain pointers to the thread's pthread_cond_t variables.

Schematic untested meta-code:

/* the main lock */
pthread_mutex_t TheLock = PTHREAD_MUTEX_INITIALIZER;

/* service structures: prio lists and the lock for them */
pthread_mutex_t prio_list_guard = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t *prio_lists[MY_MAX_PRIO][MY_MAX_THREAD]; /* 0 == highest prio */

/* lock */
void
prio_lock(int myprio)
{
    pthread_cond_t x;

    pthread_mutex_lock( &prio_list_guard );

    if (0 == pthread_mutex_trylock( &TheLock )) {
        pthread_mutex_unlock( &prio_list_guard );
        return;
    }

    pthread_cond_init( &x, 0 );
    LIST_ADD( prio_lists[myprio], &x );

    while(1)    /* handle spurious wake-ups */
    {
        pthread_cond_wait( &x, &prio_list_guard );
        if (0 == pthread_mutex_trylock( &TheLock ))
        {
            LIST_REMOVE( prio_lists[myprio], &x );
            pthread_mutex_unlock( &prio_list_guard );
            pthread_cond_destroy( &x );
            return;
        }
    }
}

/* unlock */
void
prio_unlock()
{
    int i;
    pthread_cond_t *p;

    pthread_mutex_lock( &prio_list_guard );

    for (i=0; i<MY_MAX_PRIO; i++)
    {
        if ((p = LIST_GETFIRST( prio_lists[i] )))
        {
            pthread_cond_signal( p );
            break;
        }
    }

    pthread_mutex_unlock( &TheLock );

    pthread_mutex_unlock( &prio_list_guard );
}

The code also handles spurious wake-ups from pthread_cond_wait(), but frankly I have never seen that happening.

Edit1. Note that prio_lists above is a primitive form of a priority queue.

Dummy00001
  • meta code shouldn't mean dirty code. in `prio_lock()` you lock the `prio_list_guard` twice, in the `prio_unlock()` you should first unlock `TheLock`, and then signal the list, otherwise you will always have a 'spurious' wake up in a cycle – ULysses Jul 19 '10 at 13:42
  • @ULysses: fixed. To the "first unlock TheLock" - you can do it, but that way it is IMO cleaner. And it is a matter of taste. `man pthread_cond_wait` for the implied locking of the conditional mutex. The locking thread would wake up only after the unlocking thread drops the prio_list_guard lock. – Dummy00001 Jul 19 '10 at 13:59
  • @Dummy, OK, had no experience with the pthread lib. This code now looks fine to me. – ULysses Jul 19 '10 at 14:07
  • @ULysses: thanks for the review ;) This is very close to (and reuses ideas from) what I do in the production code. – Dummy00001 Jul 19 '10 at 14:12
  • There's no need to use a condition variable per process - one per priority level is sufficient. – caf Jul 19 '10 at 21:23
  • @caf: thanks, interesting idea. would try to test it. the code in my case also attempts (and in my case that is desired) to provide FIFO guarantees, that's why the queue with pthread_cond_t per thread. Also my idea is to wake up only the first thread to avoid the avalanche(?) effect (when many threads are made runnable, but only one can really proceed). will try to test to see how it would work with one pthread_cond_t per priority. – Dummy00001 Jul 19 '10 at 22:42
  • Take a look at my answer - I use `pthread_cond_signal()` to only wake one thread (though I don't provide FIFO guarantees, since the OP didn't ask for it). It's also possible to provide FIFO with only one condition variable, but you then have to use `pthread_cond_broadcast()`, and so it *can* be susceptible to the "thundering herd" problem (I've written an example here: http://stackoverflow.com/questions/3050083/linux-synchronization-with-fifo-waiting-queue/3050871#3050871 ) – caf Jul 20 '10 at 00:09