I'm not completely sure which locking mechanism NSCache uses specifically, but I can suggest five approaches common in modern development for Apple platforms, one of which is probably how NSCache is implemented. There are more than five possibilities, so this list is not exhaustive; I'm leaving the others out only because, in my opinion, Apple wouldn't use them here.
All of the following approaches work. I'm going to try hard not to sound like I have an opinion on which one is better (we'll see how that goes), because that's not what your question is really about, and there are plenty of strong opinions on this subject whose flames I don't want to fan. What I will do is give my opinion on how I think NSCache is implemented, based on these approaches and on my read of Apple.
1. The pthread library's RW lock (`pthread_rwlock_t`). This is designed specifically for synchronization, and gives about as much performance as a lock can without dropping to a plain mutex (which is what an RW lock is built on underneath). For reads you obtain the lock with `pthread_rwlock_rdlock`, and for writes with `pthread_rwlock_wrlock`. Any number of readers can hold the lock at once, so reads don't block each other until a write comes along. This optimizes the more common path of reads, and safely accounts for the less common path of writes. For the even less common read/write operation, you must essentially promote it to a write and take the write lock. When using pthread locks (or mutexes), you can go pretty far in optimizing your locking based on your own code's logic and well-defined internal conditions, though I won't go into that here.
2. The GCD library's serial queue. With a serial queue, you use `dispatch_sync` for reads and `dispatch_async` for writes. This differs from the pthread RW lock in how it optimizes the common path, because both reads and writes are serial; it depends on GCD's underlying queue optimizations to keep reads fast (e.g. GCD determines when a `dispatch_sync` call actually needs to block). For the even less common read/write operation, you just treat it as a read and use `dispatch_sync`. That works here because of one important difference from the previous approach: although reads should be fast and optimized through GCD, they are still serial, and technically still block each other.
3. The GCD library's concurrent queue. With a concurrent queue, you use `dispatch_sync` for reads and `dispatch_barrier_async` for writes. For the special circumstance of a read/write, there is `dispatch_barrier_sync`, made almost specifically for that purpose. In this world, reads never block each other, and will not block at all until they hit a barrier, which is reserved for writes and read/writes. This approach also depends on GCD for optimizing reads, but by using a concurrent queue you are opting out of serial reads. It is therefore up to you to identify the operations that must be serial, and to use a barrier when they need it.
4. The Foundation framework's `NSLock`. This is a classic Foundation construct. Although its original implementation was probably different, I have looked at its internals recently, and in every case I tested it is essentially an Objective-C wrapper around the corresponding pthread lock.
5. Objective-C's `@synchronized()`. This is a built-in Objective-C directive that provides locking plus exception handling; underneath, it is essentially a recursive mutex with an exception handler. It is used for both reads and writes, which are treated identically internally, so neither is optimized over the other. The lock is associated with the object you pass to `@synchronized(obj)`, so you are effectively locking all access to that object, regardless of how it is used or mutated within the corresponding `@synchronized() { }` block.
There are more details and caveats with each approach, but if I were a betting man (and I'm not), then based on a couple of things I'll go into next, I would say Apple is using approach #3 for NSCache.
Reason 1: Apple is concerned about performance, especially in Foundation. They've invested heavily in GCD since its inception, and are continuing to invest in it. Much of the reason GCD was created was to remove as much boilerplate as possible without sacrificing performance. I would say Apple is probably dog-fooding what they recommend in a few places internal to Foundation, and NSCache is probably one of them.
Reason 2: Kind of "Reason 1 continued" — there is more recent evidence of Apple's investment in GCD and lack of investment in `@synchronized()`, and it can be seen in Swift: GCD is the preferred approach to synchronization in Swift.