
Recently, I needed a function to guarantee synchronous execution of a given block on a particular serial dispatch queue. This shared function could be called from code already running on that queue, so I needed to detect that case in order to prevent the deadlock a synchronous dispatch to the same queue would cause.

I used code like the following to do this:

void runSynchronouslyOnVideoProcessingQueue(void (^block)(void))
{
    dispatch_queue_t videoProcessingQueue = [GPUImageOpenGLESContext sharedOpenGLESQueue];

    if (dispatch_get_current_queue() == videoProcessingQueue)
    {
        block();
    }
    else
    {
        dispatch_sync(videoProcessingQueue, block);
    }
}

This function relies on the use of dispatch_get_current_queue() to determine the identity of the queue this function is running on and compares that against the target queue. If there's a match, it knows to just run the block inline without the dispatch to that queue, because the function is already running on it.

I've heard conflicting things about whether it's proper to use dispatch_get_current_queue() for comparisons like this, and I see this wording in the headers:

Recommended for debugging and logging purposes only:

The code must not make any assumptions about the queue returned, unless it is one of the global queues or a queue the code has itself created. The code must not assume that synchronous execution onto a queue is safe from deadlock if that queue is not the one returned by dispatch_get_current_queue().

Additionally, as of iOS 6.0 (but not yet on Mountain Lion), the GCD headers mark this function as deprecated.

It sounds like I should not be using this function in this manner, but I'm not sure what I should use in its place. For a function like the above that targeted the main queue, I could use [NSThread isMainThread], but how can I check if I'm running on one of my custom serial queues so that I can prevent a deadlock?

Brad Larson

2 Answers


Assign whatever identifier you want using dispatch_queue_set_specific(). You can then check your identifier using dispatch_get_specific().

Remember that dispatch_get_specific() is nice because it'll start at the current queue, and then walk up the target queues if the key isn't set on the current one. This usually doesn't matter, but can be useful in some cases.

Rob Napier
    Yeah, this does seem like the way to go, and is backed up by this interesting discussion over at the Apple Developer Forums: https://devforums.apple.com/message/710745#714753 – Brad Larson Oct 09 '12 at 20:43
  • The "walk up the target queues" is actually pretty useful here if you use a unique key for the queue you're trying to protect (rather than a unique value). That way you can be more certain that you won't deadlock if you're current on a queue that targets your video queue. You can't be absolutely certain since queues can change their targets at any time, so you could enqueue a block, then have some other block re-target your queue onto the video queue before the block executes. So, um... don't do that. :D – Rob Napier Oct 09 '12 at 22:18
  • @BradLarson Just understand that by allowing re-entrancy, you are breaking the FIFO guarantee of the queue in that re-entrant synchronous calls will "jump" the queue, and will execute before any other blocks that may be sitting in the queue, waiting to execute. – Jody Hagins Oct 10 '12 at 11:34
  • @RobNapier `dispatch_set_target_queue` is an asynchronous barrier operation, so it will not take place until all blocks currently in the queue have executed. – Jody Hagins Oct 10 '12 at 11:42
  • @JodyHagins, where do you get that info? The docs just say "The new target queue setting will take effect between block executions on the object, but not in the middle of any existing block executions (non-preemptive)." But it doesn't suggest that the change to the target is itself scheduled as a block. You can even change targets while a queue is suspended (in fact this is common), so I'd want to see more details about the point you're making. – Rob Napier Oct 10 '12 at 13:26
  • @JodyHagins, regarding the reentrancy, the synchronous call won't jump the queue (that's impossible by design). The problem is that we are currently running on queue A. We then say "schedule this block to run on queue A and wait for it to finish." The new block can't run until the current block finishes, and the current block won't finish until the new block finishes. This is classic deadlock. – Rob Napier Oct 10 '12 at 13:29
  • @RobNapier Session 210 of WWDC 2011 specifically states that it is an asynchronous barrier operation. I've read it elsewhere as well, just can't remember off the top of my head where. A quick look at the source confirms this - though we both know that we need to be very careful about implementation details. – Jody Hagins Oct 10 '12 at 14:06
  • Noted. The fact that you can change these while suspended was actually irrelevant. That's useful (and a somewhat obvious implementation in retrospect). It means that if you suspend a queue, add a bunch of blocks, change the target, and resume the queue, then the blocks will go to the original target. That's possibly important to understand. – Rob Napier Oct 10 '12 at 14:10
  • @RobNapier Regarding jumping the queue - we may quibble over what jumping the queue means. However, I hold that if you provide any re-entrant synchronous mechanism, then all synchronous calls (except the first) must jump the queue. Brad's original does. Changing to use per-queue context will as well. You must, to avoid deadlock. You can easily see this with the following pattern: reentrant_sync, dispatch_async, reentrant_sync. The first reentrant_sync will wait until all enqueued blocks execute. Subsequent ones, though, will not. – Jody Hagins Oct 10 '12 at 14:15
  • @RobNapier (continuation) Any block submitted to the queue, whether sync or async, should preserve FIFO ordering. However, multiple reentrant-sync blocks will run **before** any sync blocks that are submitted after the initial reentrant-sync starts. To me, this clearly "jumps" the queue, because subsequent reentrant-sync operations are executed before previously submitted operations. – Jody Hagins Oct 10 '12 at 14:18
  • @RobNapier FWIW, there is a function, `dispatch_set_current_target_queue` that does **not** use a barrier block. In fact, the header file documentation for it says that is why it's there. However, the header file is `private/queue_private.h` so you can guess its intended use ;-) – Jody Hagins Oct 10 '12 at 14:25
  • @JodyHagins - Fair point on the queue jumping, but that's not my concern here. I'm just using this particular serial queue as a lockless means of preventing simultaneous access to a shared resource (an OpenGL ES context in this case). Overall execution order is not an issue, so I'm willing to jump the queue and inline operations if I'm already in the middle of executing a block on the desired queue. My primary concern here is preventing simultaneous access to a shared resource while avoiding deadlocks. – Brad Larson Oct 10 '12 at 16:27
  • @BradLarson FWIW, that's the same approach Core Data took. They know that `performBlockAndWait` has the exact same issue, and it seems good enough for them... – Jody Hagins Oct 10 '12 at 18:18

This is a very simple solution, though likely not as performant as setting up dispatch_queue_set_specific and dispatch_get_specific once manually (I don't have metrics on that).

#import <libkern/OSAtomic.h>

BOOL dispatch_is_on_queue(dispatch_queue_t queue)
{
    // The address of this stack variable serves as a key unique to this
    // invocation, so concurrent calls on different threads don't collide.
    int key;
    static int32_t incrementer;
    CFNumberRef value = CFBridgingRetain(@(OSAtomicIncrement32(&incrementer)));
    dispatch_queue_set_specific(queue, &key, (void *)value, NULL);
    // dispatch_get_specific() only returns the value when the current queue
    // (or one of its target queues) is `queue`.
    BOOL result = dispatch_get_specific(&key) == value;
    dispatch_queue_set_specific(queue, &key, NULL, NULL);
    CFRelease(value);
    return result;
}
hfossli