
I'm writing an NSObject subclass, similar to NSCache, that caches files to disk on an iOS device.

I am in the process of writing the queues for (i) reading and (ii) writing, but I want to make sure the type of queue I create is right and won't lead to corrupt files later on.

For the read queue, I was planning on creating a concurrent queue as many files can be read at the same time without any issue.

For the write queue, however, I was planning on creating a serial queue to prevent more than one file being written to at once.

Can you tell me if this is the correct approach?
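
Roughly what I have in mind (the class and queue names below are just placeholders, not real code yet):

#import <Foundation/Foundation.h>

// Placeholder sketch of the queue setup I'm considering.
@interface FileCache : NSObject
@end

@implementation FileCache {
    dispatch_queue_t _readQueue;   // concurrent: many reads can run at once
    dispatch_queue_t _writeQueue;  // serial: only one write at a time
}

- (instancetype)init
{
    if ((self = [super init])) {
        _readQueue  = dispatch_queue_create("com.example.filecache.read",
                                            DISPATCH_QUEUE_CONCURRENT);
        _writeQueue = dispatch_queue_create("com.example.filecache.write",
                                            DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

@end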

max_
  • There's some key information missing from this question, such as: Given that multiple files are being read/written, is each file guaranteed to be only read from or written to at any given time? Why multiple files? Do you see multiple threads keeping multiple caches in a single process, or multiple threads accessing a single cache, or what? – jkh Aug 29 '12 at 20:32
  • This is just a general question, I wondered which type of queue was best to use for each process, along with a reason why. – max_ Aug 29 '12 at 22:33

2 Answers


For better performance, I suggest a concurrent queue with dispatch_sync for reads and dispatch_barrier_async for writes. As Mike Ash puts it:

Because this uses the barrier function, it ensures exclusive access to the cache while the block runs. Not only does it exclude all other writes to the cache while it runs, but it also excludes all other reads, making the modification safe.
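
A rough sketch of that pattern, assuming the cache holds a single concurrent queue in a _queue ivar and a hypothetical -pathForKey: helper that maps a key to a file path (names are illustrative, not from your code):

// _queue is assumed to be created once, e.g. in -init:
// _queue = dispatch_queue_create("com.example.filecache",
//                                DISPATCH_QUEUE_CONCURRENT);

// Read: dispatch_sync lets many readers run concurrently on the
// concurrent queue and returns the value to the caller.
- (NSData *)dataForKey:(NSString *)key
{
    __block NSData *data = nil;
    dispatch_sync(_queue, ^{
        data = [NSData dataWithContentsOfFile:[self pathForKey:key]];
    });
    return data;
}

// Write: dispatch_barrier_async waits for all in-flight blocks to
// finish, runs alone on the queue, and returns to the caller right away.
- (void)setData:(NSData *)data forKey:(NSString *)key
{
    dispatch_barrier_async(_queue, ^{
        [data writeToFile:[self pathForKey:key] atomically:YES];
    });
}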

hpique

Based on your follow-up comment, I think the pattern / usage you're looking for is a serial queue (one per cache object / file) with dispatch_async()'d blocks for writing cache entries and dispatch_sync() (note the difference) for reading them. The writes can be asynchronous on the serial queue, which still keeps them ordered, and doing synchronous reads forces all pending writes to complete before a value is read back out.
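
Something along these lines, assuming a per-cache serial queue ivar _queue and a hypothetical -pathForKey: helper (again, names are illustrative):

// _queue is assumed to be a per-cache serial queue, e.g.:
// _queue = dispatch_queue_create("com.example.filecache",
//                                DISPATCH_QUEUE_SERIAL);

// Write: enqueue asynchronously; the serial queue keeps writes ordered
// and the caller doesn't wait for the disk.
- (void)setData:(NSData *)data forKey:(NSString *)key
{
    dispatch_async(_queue, ^{
        [data writeToFile:[self pathForKey:key] atomically:YES];
    });
}

// Read: synchronous, so every write enqueued before this call has
// finished by the time the file is read back.
- (NSData *)dataForKey:(NSString *)key
{
    __block NSData *data = nil;
    dispatch_sync(_queue, ^{
        data = [NSData dataWithContentsOfFile:[self pathForKey:key]];
    });
    return data;
}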

jkh
  • Wouldn't it be better if reads were concurrent? No need to wait for other reads as long as nothing is being written. – hpique Mar 26 '14 at 10:50