
Problem: I have a set of Threads, some of which must take priority over others when acquiring a ReentrantLock.

The solution: my idea is to have a fair ReentrantLock with 2 Condition queues: lowPriority and highPriority. The point is that highPriority is signalled before lowPriority. Taking the fairness of the ReentrantLock into account, Threads blocked on highPriority should always go ahead of Threads blocked on lowPriority.

Implementation:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Main {
    public static final Lock lock = new ReentrantLock(true);
    public static final Condition lowPriority = lock.newCondition();
    public static final Condition highPriority = lock.newCondition();
    public static boolean cond;

    public static void lowPriority() throws InterruptedException {
        try {
            lock.lock();
            while(!cond) {
                lowPriority.await();
            }
            cond = false;
            System.out.println("low");
        } finally {
            lock.unlock();
        }
    }

    public static void highPriority() throws InterruptedException {
        try {
            lock.lock();
            while(!cond) {
                highPriority.await();
            }
            cond = false;
            System.out.println("high");
        } finally {
            lock.unlock();
        }
    }

    public static void setCond() {
        try {
            lock.lock();
            cond = true;
            highPriority.signalAll();
            lowPriority.signalAll();
        } finally {
            lock.unlock();
        }
    }
}

QUESTION: What confused me about this solution is that I could not formally prove, from a JMM standpoint, that whenever there are Threads blocked on highPriority they always win over Threads blocked on lowPriority.

Sure enough, I ran a few experiments and the high-priority threads always won, but is it formally correct?
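
For reference, an experiment of that sort can be set up roughly like the sketch below (the Experiment class, thread counts and sleeps are just illustrative, not part of the original code):

public class Experiment {
    public static void main(String[] args) throws InterruptedException {
        // Start a few waiters of each kind so they park on their respective Conditions.
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                try { Main.lowPriority(); } catch (InterruptedException ignored) { }
            }).start();
            new Thread(() -> {
                try { Main.highPriority(); } catch (InterruptedException ignored) { }
            }).start();
        }
        Thread.sleep(100); // crude: give the waiters time to block on await()

        // Each setCond() lets one waiter through; the printed order shows who won.
        for (int i = 0; i < 6; i++) {
            Main.setCond();
            Thread.sleep(50);
        }
    }
}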

  • I think this is merely governed by the `ReentrantLock` implementation, not the JMM at all. But since you made attempts to reason from a JMM standpoint, it would be helpful to share the thoughts you had so far. – Holger Jun 11 '21 at 08:24
  • @Holger The thing that mostly confused me is: when `signalAll` returns, is it reliable to assume that all `Thread`s waiting on a given `Condition` are already enqueued in the `ReentrantLock`'s `WaitSet`? – Some Name Jun 11 '21 at 17:38
  • I verified that the current implementation does transfer all threads into the waiting queue, and the signaled threads check whether they're the first one in the queue when fairness has been enabled. However, I'm not sure whether we can assume that an implementation has to do it that way from the specification's wording. – Holger Jun 12 '21 at 16:49
  • Perhaps you can use a `PriorityQueue` to hold objects with `Condition`. You only add the item to the queue when the threads need to wait. After a task is completed, you poll the queue and signal the others. – Franz Wong Jun 13 '21 at 07:13
  • @Holger I re-thought the idea and found the following race condition: `Thread 1` signals all the high-priority threads blocked on the `Condition highPriority`. `Thread 2` arrives in the middle and calls `setCond()`, bringing important updates for the `highPriority` and `lowPriority` Threads. Due to fairness, `Thread 2` is queued on the `ReentrantLock` right after all the `highPriority` Threads. Then all the threads blocked on `lowPriority` are signalled by `Thread 1` and scheduled after `Thread 2`. `Thread 2` signals the high- and low-priority threads again, and the high-priority threads lose the update. – Some Name Jun 25 '21 at 03:54
  • @Holger The solution I can think of is to introduce one more `Lock` wrapping the existing `ReentrantLock`, so the signalling of `Condition highPriority` and `Condition lowPriority` becomes atomic in terms of `Thread`s enqueuing. – Some Name Jun 25 '21 at 03:58
  • @Holger So in essence the original ***fair*** `ReentrantLock` is used as a FIFO queue, so that we can avoid writing our own with `AbstractQueuedSynchronizer`. – Some Name Jun 25 '21 at 04:00

1 Answer


As I understand it, for the code

highPriority.signalAll();
lowPriority.signalAll();

there is no guarantee that a thread waiting on highPriority will wake up before any thread waiting on lowPriority.
Even when fairness==true, threads can wake up in any order; the ReentrantLock documentation notes:

Note however, that fairness of locks does not guarantee fairness of thread scheduling. Thus, one of many threads using a fair lock may obtain it multiple times in succession while other active threads are not progressing and not currently holding the lock.

Additionally, lowPriority.signalAll() will keep waking up all the low-priority threads even if a new high-priority thread appears in the middle of that process.

So I would insert additional logic at the beginning of lowPriority() that checks whether there are any waiting high-priority threads and, if so, lets them run first.
Something like this:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public final class Main {

  private final Lock lock = new ReentrantLock();
  private final Condition lowPriority = lock.newCondition();
  private final Condition highPriority = lock.newCondition();
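  // number of threads currently waiting inside highPriority()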
  private int numWaitingHighPriority = 0;
  private boolean cond;

  public void lowPriority() throws InterruptedException {
    lock.lock();
    try {
      while (!cond || (numWaitingHighPriority > 0)) {
        if (numWaitingHighPriority > 0) {
          highPriority.signal();
        }
        lowPriority.await();
      }
      cond = false;
      System.out.println("low");
    } finally {
      lock.unlock();
    }
  }

  public void highPriority() throws InterruptedException {
    lock.lock();
    try {
      numWaitingHighPriority++;
      try {
        while (!cond) {
          highPriority.await();
        }
      } finally {
        numWaitingHighPriority--;
      }
      cond = false;
      System.out.println("high");
    } finally {
      lock.unlock();
    }
  }

  public void setCond() {
    lock.lock();
    try {
      cond = true;
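      // prefer a waiting high-priority thread; otherwise wake a low-priority one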
      ((numWaitingHighPriority > 0) ? highPriority : lowPriority).signal();
    } finally {
      lock.unlock();
    }
  }
}
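
A quick way to exercise this class (the Demo class, thread counts and timings below are arbitrary, only meant to illustrate the expected ordering):

public class Demo {
  public static void main(String[] args) throws InterruptedException {
    Main main = new Main();
    // Start a few waiters of each kind; they all block because cond is false.
    for (int i = 0; i < 3; i++) {
      new Thread(() -> {
        try { main.lowPriority(); } catch (InterruptedException ignored) { }
      }).start();
      new Thread(() -> {
        try { main.highPriority(); } catch (InterruptedException ignored) { }
      }).start();
    }
    Thread.sleep(100); // give the waiters time to block on await()

    // Each setCond() releases exactly one waiter; while numWaitingHighPriority > 0
    // it is always a high-priority one, so "high" should be printed before any "low".
    for (int i = 0; i < 6; i++) {
      main.setCond();
      Thread.sleep(50);
    }
  }
}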