From what I've read about Grand Central Dispatch, GCD does not do preemptive multitasking; it is all a single event loop. I'm having trouble making sense of this output. I have two queues just doing some output (at first I was reading/writing some shared state, but I was able to simplify down to this and still get the same result).

dispatch_queue_t authQueue = dispatch_queue_create("authQueue", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t authQueue2 = dispatch_queue_create("authQueue", DISPATCH_QUEUE_SERIAL);

dispatch_async(authQueue, ^{ 
    NSLog(@"First Block");
    NSLog(@"First Block Incrementing"); 
    NSLog(@"First Block Incremented"); 
});

dispatch_async(authQueue, ^{ 
    NSLog(@"Second Block");
    NSLog(@"Second Block Incrementing");
    NSLog(@"Second Block Incremented"); 
});

dispatch_async(authQueue2,^{ 
    NSLog(@"Third Block"); 
    NSLog(@"Third Block Incrementing");
    NSLog(@"Third Block Incremented"); 
});

I get the following output:

2011-12-15 13:47:17.746 App[80376:5d03] Third Block
2011-12-15 13:47:17.746 App[80376:1503] First Block
2011-12-15 13:47:17.746 App[80376:5d03] Third Block Incrementing
2011-12-15 13:47:17.746 App[80376:1503] First Block Incrementing
2011-12-15 13:47:17.748 App[80376:1503] First Block Incremented
2011-12-15 13:47:17.748 App[80376:5d03] Third Block Incremented
2011-12-15 13:47:17.750 App[80376:1503] Second Block
2011-12-15 13:47:17.750 App[80376:1503] Second Block Incrementing
2011-12-15 13:47:17.751 App[80376:1503] Second Block Incremented

As is evident, the blocks do not execute atomically. My only theory is that GCD writing to stdio via NSLog makes the current execution wait. I can't find anything related to this in the Apple documentation. Can anyone explain this?

jgoldberg

4 Answers

GCD does not use any kind of "event loop". It is a relatively new technology, introduced in recent releases of Mac OS X and iOS with kernel-level support, and it doesn't really resemble any other technology I know of.

The goal is to finish executing all of the code you give it as quickly as the hardware will allow. Note that it's aiming for the quickest finish time, not the quickest start time. That's a subtle difference, but an important one, with real-world impact on how it works.

If you only have one idle CPU core, then in theory only one block will be executed at a time, because multitasking within a single core is slower than executing two tasks sequentially. In practice, though, this isn't quite what happens. If a CPU core becomes idle, or not very busy for a moment (for example, while reading from the hard drive, or waiting for some other program to respond, such as Xcode drawing the NSLog output), then GCD will quite likely move on to executing a second item, because the one it's currently doing is stuck.

And of course, most of the time you will have more than one idle CPU core.

It also will not necessarily execute things in the exact order you give it. GCD/the kernel have control over these details.

For your specific example, Xcode's debugger is probably only capable of processing a single NSLog() event at a time (at the very least, it has to do the screen drawing one item at a time). You've got two queues, and they might begin executing simultaneously. If two NSLog() statements are sent at once, one of them will wait for the other to finish first. Because you're not doing anything but printing to Xcode, those two GCD queues are in a race to be the first to send log data to Xcode. The first one has a head start, but it's an extremely slight one, and often not enough for it to open a connection with Xcode first.

It all depends on what hardware resources are actually available at that specific nanosecond. You can't predict it, and you need to structure your queues appropriately if you want to assume some control.
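
For example, one way to assume that control (not from the answer itself, just a sketch reusing the question's queue names) is a dispatch group, so the block on the second queue doesn't start until everything submitted to the first queue has finished:

dispatch_queue_t authQueue  = dispatch_queue_create("authQueue",  DISPATCH_QUEUE_SERIAL);
dispatch_queue_t authQueue2 = dispatch_queue_create("authQueue2", DISPATCH_QUEUE_SERIAL);
dispatch_group_t group = dispatch_group_create();

// These two still run serially with respect to each other (same serial queue).
dispatch_group_async(group, authQueue, ^{ NSLog(@"First Block");  });
dispatch_group_async(group, authQueue, ^{ NSLog(@"Second Block"); });

// This runs on authQueue2, but only after both blocks above have completed.
dispatch_group_notify(group, authQueue2, ^{ NSLog(@"Third Block"); });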

Abhi Beckert

Where did you read that GCD does not do preemptive multitasking? I think you are mistaken. It is built upon the thread support provided by the system, and so GCD blocks dispatched to queues may be preemptively interrupted.

The behaviour you are seeing is exactly what I would expect. The first and second blocks are dispatched to the same queue, so GCD will ensure that the first block completes before the second block starts. However, the third block is dispatched to a completely different queue (i.e. it will be running on a separate background thread), and so its output is interleaved with the other two blocks as the threads are scheduled by the system.
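
If strictly ordered output is what you're after, a minimal sketch along those lines (reusing the question's queue name) is to submit all three blocks to the same serial queue:

dispatch_queue_t authQueue = dispatch_queue_create("authQueue", DISPATCH_QUEUE_SERIAL);

// A serial queue runs one block at a time, in submission order,
// so these can never interleave with each other.
dispatch_async(authQueue, ^{ NSLog(@"First Block");  });
dispatch_async(authQueue, ^{ NSLog(@"Second Block"); });
dispatch_async(authQueue, ^{ NSLog(@"Third Block");  });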

Robin Summerhill
  • This may be a question of semantics. The kernel can certainly pre-empt a block as it is executing (due to the thread exhausting its CPU quantum or switching context) but GCD itself never interrupts a block in progress in order to go execute a different block. – jkh Dec 18 '11 at 07:55

Whatever you have read is wrong: unless you use a serial dispatch queue, all blocks will be executed concurrently.
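
As a rough illustration of that distinction (the block contents and queue label here are arbitrary): blocks submitted to a serial queue run one at a time in submission order, while blocks submitted to a global concurrent queue may overlap and finish in any order:

dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// Serial: "A" always logs before "B".
dispatch_async(serialQueue, ^{ NSLog(@"serial A"); });
dispatch_async(serialQueue, ^{ NSLog(@"serial B"); });

// Concurrent: no guarantee which of these logs first.
dispatch_async(globalQueue, ^{ NSLog(@"concurrent C"); });
dispatch_async(globalQueue, ^{ NSLog(@"concurrent D"); });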

Tony Million
  • He is talking about iOS, where everything but the most recent hardware only has a single CPU core. GCD will never execute two CPU intensive tasks concurrently on a single core. As I understand it they will only be concurrent if there aren't any CPU intensive tasks active anywhere on the system. I think he has slightly misunderstood something he read somewhere. It isn't completely wrong, as you claim. – Abhi Beckert Dec 15 '11 at 21:04
  • Your involvement of hardware is irrelevant, it is a fact that you cannot with any certainty say which block will be executed first or finish executing if you call dispatch_async(dispatch_get_global_queue(0,0), ^{ code here }); twice, even on a single processor. The only time you *CAN* be certain of serial execution is when you use a serial dispatch queue, where every block you submit *WILL* be executed in submission order. Bringing the specifics of hardware into the discussion is irrelevant from a code point of view as you have no idea where you will be executing. – Tony Million Dec 16 '11 at 09:26
  • And I see in this case @jgoldberg is in fact creating TWO serial dispatch queues, and submitting two blocks to one queue and one to the other. If you look at the debug output, the two blocks submitted to the same queue do in fact execute serially, while the one submitted to the other queue executes concurrently. It was a misunderstanding on the part of the poster: serial queues execute blocks serially, but each serial queue runs concurrently with the others. – Tony Million Dec 16 '11 at 09:30
  • I agree with everything you've said in your comments here. But I don't agree with your answer, where you state "all blocks will be executed concurrently". This is not true. They will only be executed concurrently if there are *hardware* resources available. If you schedule 20,000 CPU intensive blocks, they will not be executed concurrently, as multi-threading within a core has a huge performance hit and GCD is designed specifically to avoid that performance issue. On an A4 CPU they may very well execute one at a time. – Abhi Beckert Dec 18 '11 at 05:30
  • I'm afraid Abhi is wrong. This is not just a hardware-biased decision, and in fact there are opportunities for concurrency even on a single CPU core which GCD will happily avail itself of. Think of it more as a pool of threads, any of which may or may not be executing concurrently regardless of the hardware configuration, and for which a more key question is "which are blocked?" I/O is one such blocking operation, and GCD will create more threads as necessary (within reason) if a given one context-switches back into the kernel and needs to block until its request is processed. – jkh Dec 18 '11 at 07:52
  • @jkh I did specifically say CPU intensive blocks. My understanding is GCD will not execute two *CPU intensive* blocks at once on a single core device, especially an ARM device. Where have you seen otherwise? – Abhi Beckert Dec 18 '11 at 16:45
  • @AbhiBeckert While it is unlikely, it is simply not possible to categorically say that GCD will not execute two CPU-intensive tasks (submitted to concurrent queues) at once. If you still doubt this, take a look at the libdispatch source code and its contract with pthread_workqueue (the source for which is also available as part of the xnu sources). – jkh Dec 22 '11 at 18:20

Your queues run on two concurrent background threads. They produce their NSLog messages concurrently; while one thread is writing its NSLog output, the other waits.
What's wrong with that?
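
You can already see the two threads in the question's output (the log prefix shows two different identifiers, 1503 and 5d03). If you want to make it explicit, a quick sketch (reusing the question's authQueue2) is to log the current thread alongside the message:

dispatch_async(authQueue2, ^{
    // Logs a description of the thread this block happens to be running on.
    NSLog(@"Third Block on %@", [NSThread currentThread]);
});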

Roman Temchenko