As with most `NSOperationQueue` hackery, you can exploit its support for dependencies between operations:

- Create an `NSBlockOperation` subclass, `ReaderWriterBlockOperation`. Add to it a property `BOOL writer`.
- Create an operation queue per protected resource.
- Stop exposing your operation queue to clients. Instead, expose the API `-readWithBlock:` and `-writeWithBlock:`. Both enqueue a `ReaderWriterBlockOperation`, one with `writer == NO`, the other with `writer == YES`. They manage dependencies as follows:
- `-readWithBlock:` scans, inside an `@synchronized(self)` block, the queued operations from last to first looking for a writer. If none is found, it enqueues the new operation and returns. If one is found, it makes the new reader operation dependent on that writer, enqueues it, and returns.
- `-writeWithBlock:` does the same thing, except that if no writer is found among the queued operations, it makes the new operation dependent on all queued readers; if a writer is found, it makes the new operation dependent on that writer and all of the (reader) operations following it.
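A minimal sketch of the scheme above, as I'd expect it to look (class and property names match the description; the rest is an untested assumption, not production code):

```objc
// Hypothetical subclass described above: an NSBlockOperation tagged
// as either a reader or a writer.
@interface ReaderWriterBlockOperation : NSBlockOperation
@property (nonatomic, assign) BOOL writer;
@end

@implementation ReaderWriterBlockOperation
@end

// In the object that owns the (hidden) queue:
- (void)readWithBlock:(void (^)(void))block {
    @synchronized (self) {
        ReaderWriterBlockOperation *op =
            [ReaderWriterBlockOperation blockOperationWithBlock:block];
        op.writer = NO;
        // Scan from last to first; a reader only needs to wait for the
        // most recent writer (that writer already depends on everything
        // queued before it).
        for (ReaderWriterBlockOperation *queued in
                 [self.queue.operations reverseObjectEnumerator]) {
            if (queued.writer) {
                [op addDependency:queued];
                break;
            }
        }
        [self.queue addOperation:op];
    }
}

- (void)writeWithBlock:(void (^)(void))block {
    @synchronized (self) {
        ReaderWriterBlockOperation *op =
            [ReaderWriterBlockOperation blockOperationWithBlock:block];
        op.writer = YES;
        // Depend on every operation back to, and including, the previous
        // writer; if there is no writer queued, this degenerates to
        // depending on all queued readers.
        for (ReaderWriterBlockOperation *queued in
                 [self.queue.operations reverseObjectEnumerator]) {
            [op addDependency:queued];
            if (queued.writer) break;
        }
        [self.queue addOperation:op];
    }
}
```

Note that the `@synchronized` block is what keeps the scan-then-enqueue sequence atomic; without it, two concurrent calls could both scan before either enqueues.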
This should have the result of blocking all readers until the writer before them has completed, and blocking all writers until the readers before them have completed.
One possible issue: I'm unclear (because the docs are unclear, and I haven't implemented this as such yet) whether `NSBlockOperation` actually waits for its block to finish running before declaring itself complete. If it doesn't, you'll need to manage that yourself in your operation subclass.
All that said, if the system provides a working solution, such as barrier blocks, you should use that. This whole system is a hack to get an operation queue to do something that dispatch queues have been tuned to handle very well. If performance is not in fact a concern, then why not just use a serial queue (an `NSOperationQueue` with `maxConcurrentOperationCount` set to 1)?
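For comparison, the barrier-block version on a dispatch queue is a few lines (a sketch; the queue label and block bodies are placeholders):

```objc
// A concurrent queue dedicated to the protected resource.
dispatch_queue_t queue =
    dispatch_queue_create("com.example.protected-resource",
                          DISPATCH_QUEUE_CONCURRENT);

// Readers may run concurrently with each other.
dispatch_async(queue, ^{
    /* read the resource */
});

// A barrier block waits for all previously submitted blocks to finish,
// runs alone, and only then lets later blocks proceed.
dispatch_barrier_async(queue, ^{
    /* mutate the resource */
});
```

That is exactly the reader/writer ordering the operation-queue hack above tries to reproduce by hand.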