Say we have a shared resource that a bunch of different global queues have access to and, for the sake of this question, we use a Dispatch Semaphore to manage that access. When one of these global queues tells the semaphore to wait, the semaphore count is decremented and that thread has access to the shared resource.

Is it possible that, while the semaphore is waiting, another (different) global queue tries to access this shared resource, and the thread that GCD grabs from its pool is the same thread that was grabbed for the previous queue (the queue that is currently making the semaphore wait)? That would deadlock the thread and prevent the semaphore count from ever re-incrementing.

- So you are imagining that the same thread is doled out to two different queues simultaneously? – matt May 28 '20 at 01:06
- @matt in this hypothetical, correct. – lurning too koad May 28 '20 at 01:15
- https://stackoverflow.com/questions/33543263/clarifications-on-dispatch-queue-reentrancy-and-deadlocks – Lou Franco May 28 '20 at 01:16
1 Answer
Short answer:
Yes, using semaphores can result in deadlocks, but not for the reason you suggest.
Long answer:
If you have some dispatched task waiting for a semaphore, that worker thread is blocked until the signal is received and it resumes execution and subsequently returns. As such, you don’t have to worry about another dispatched task trying to use the same thread, because that thread is temporarily removed from the thread pool. You never have to worry about two dispatched tasks trying to use the same thread at the same time. That is not the deadlock risk.
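To make that concrete, here is a quick sketch (the `Thread.current` printing is purely illustrative) showing that the waiting task and the signaling task each get their own worker thread:

    import Foundation

    let semaphore = DispatchSemaphore(value: 0)

    DispatchQueue.global().async {
        print("waiting on \(Thread.current)")
        semaphore.wait()                   // blocks this worker thread only
        print("resumed after signal")
    }

    DispatchQueue.global().async {
        print("signaling from \(Thread.current)")
        semaphore.signal()                 // runs on a different worker thread
    }

    Thread.sleep(forTimeInterval: 1)       // keep a command-line demo alive long enough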
That having been said, we have to be sensitive to the fact that the number of worker threads in the thread pool is extremely limited (currently 64 per QoS). If you exhaust the available worker threads, then anything else dispatched to GCD (with the same QoS) cannot run until some of those previously blocked worker threads are made available again.
Consider:
print("start")
let semaphore = DispatchSemaphore(value: 0)
let queue = DispatchQueue.global()
let group = DispatchGroup()
let count = 10
for _ in 0 ..< count {
queue.async(group: group) {
semaphore.wait()
}
}
for _ in 0 ..< count {
queue.async(group: group) {
semaphore.signal()
}
}
group.notify(queue: .main) {
print("done")
}
That works fine. You have ten worker threads tied up with those `wait` calls, and then the additional ten dispatched blocks call `signal`, and you're fine.

But if you increase `count` to 100 (a condition referred to as "thread explosion"), the above code will never resolve itself, because the `signal` calls are waiting for worker threads that are tied up with all of those `wait` calls. None of those dispatched tasks with `signal` calls will ever get a chance to run. And when you exhaust the worker threads, that is generally a catastrophic problem, because anything trying to use GCD (for that same QoS) will not be able to run.
By the way, the use of semaphores in the thread explosion scenario is just one particular way to cause a deadlock. But for the sake of completeness, it’s worth noting that there are lots of ways to deadlock with semaphores. The most common example is where a semaphore (or dispatch group or whatever) is used to wait for some asynchronous process, e.g.
    let semaphore = DispatchSemaphore(value: 0)

    someAsynchronousMethod {
        // do something useful
        semaphore.signal()
    }

    semaphore.wait()
That can deadlock if (a) you run it from the main queue, and (b) the asynchronous method happens to call its completion handler on the main queue, too. This is the prototypical semaphore deadlock.
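For example, here is a minimal sketch of that pattern, using a hypothetical `fetchValue` helper (not a real API) that stands in for the asynchronous method; it hangs if run from the main queue:

    import Foundation

    // Hypothetical helper: like many asynchronous APIs, it delivers its
    // completion handler back on the main queue.
    func fetchValue(completion: @escaping (Int) -> Void) {
        DispatchQueue.global().async {
            let value = 42                 // simulate some work
            DispatchQueue.main.async {     // completion delivered on main
                completion(value)
            }
        }
    }

    // Running this on the main queue deadlocks: wait() blocks the main
    // thread, so the main-queue completion handler can never run, so
    // signal() is never sent.
    let semaphore = DispatchSemaphore(value: 0)
    var result: Int?

    fetchValue { value in
        result = value                     // capture the result
        semaphore.signal()
    }

    semaphore.wait()
    print("got \(result ?? -1)")           // never reached on the main queue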
I only used the thread-explosion example above because the deadlock is not entirely obvious. But clearly there are lots of ways to cause deadlocks with semaphores.

- This is an excellent answer. Just a quick aside, why did you set the value of the semaphore to `0`? Don't we set the value to the number of threads we want to be able to simultaneously access the shared resource, such as `1`? – lurning too koad May 28 '20 at 15:28
- You can use whatever value you need. Yes, when using semaphores for synchronization, 1 is a common value (then, again, there are other synchronization mechanisms that we'd generally reach for first, such as reader-writer or simple locks). Or when using them to constrain the degree of concurrency, then, again, a positive semaphore value is common. But when doing a simple "I want to wait until another thread sends me a signal," then 0 is used. It just depends upon your purpose. For my examples above, zero is correct. – Rob May 28 '20 at 15:40
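To illustrate the "constrain the degree of concurrency" case Rob mentions, here is a small sketch (the value of 4 is arbitrary, chosen only for illustration) in which at most four of the ten tasks run at once:

    import Foundation

    // A semaphore initialized with a positive value admits that many
    // tasks at a time; the rest wait their turn.
    let pool = DispatchSemaphore(value: 4)
    let group = DispatchGroup()

    for i in 0 ..< 10 {
        DispatchQueue.global().async(group: group) {
            pool.wait()                        // blocks once four tasks are inside
            defer { pool.signal() }            // admit the next waiting task
            print("task \(i) running")
            Thread.sleep(forTimeInterval: 0.5) // simulate work
        }
    }

    group.wait()                               // fine for a quick demo; avoid blocking the main queue in real code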
- Is it theoretically possible that if, for example, 10 different threads are constantly calling wait() and signal() on one specific semaphore (for example, in an endless while loop), one thread could be blocked for a long time because, for some reason, 9 of those threads get unlocked much more often than the other one? I mean, the order in which threads are unlocked is random, isn't it, or not? If yes, then theoretically one thread could hang for a much, much longer time than the other 9 (so not exactly a deadlock, but it might still be a problem sometimes)? – Leszek Szary Aug 12 '21 at 15:44
- My understanding is dispatch semaphores are “fair” and won’t starve any threads. (Semaphores’ FIFO behavior is no longer referenced explicitly in the documentation, but the [source](https://opensource.apple.com/source/libdispatch/libdispatch-187.9/src/semaphore.c.auto.html) clearly indicates it is FIFO.) This is in stark contrast to `os_unfair_lock`, which explicitly _can_ starve threads (but is admittedly far more performant). Obviously, you can use GCD queues to unambiguously guarantee FIFO behavior. – Rob Aug 12 '21 at 16:28