Two things:
1) On which queue is the code you provided scheduled?
It sounds like that queue might share its QoS with your download tasks.
Dispatching to another queue takes multiple steps, so even though "comment1" has been passed, one of the other queues might get thread time to do some work while you're dispatching. That would lead to some delay, and it seems to be the most likely cause.
2) The WWDC 2016 session Concurrent Programming with GCD in Swift 3 might help.
Dispatching something new onto a higher-priority queue does not automatically start the task right away if all CPU cores are already busy with other tasks. GCD will try its best, but perhaps your tasks on those other queues are very intensive, or are doing things where interruption would cause problems.
So if you have something that must start immediately, make sure one CPU core/thread is idle. I'd try using an OperationQueue and setting its maxConcurrentOperationCount, then placing all your download operations onto that operation queue, leaving a core/thread free to launch your audio.
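A minimal sketch of that idea; the queue name and the download body are placeholders, and the "cores minus one" cap is my assumption about how you'd reserve a core:

```swift
import Foundation

// Cap concurrent downloads at (active cores - 1), so one core/thread
// stays free to start the audio work immediately.
let downloadQueue = OperationQueue()
downloadQueue.name = "Download Queue" // placeholder label
downloadQueue.maxConcurrentOperationCount = max(1, ProcessInfo.processInfo.activeProcessorCount - 1)

// Route every download through this queue instead of a global DispatchQueue.
downloadQueue.addOperation {
    // download work goes here
}
```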
However, I don't think option 2 explains your issue, since you're saying it sometimes takes a few seconds.
Update:
As you've said, the code you provided runs on a queue that is also used for your download tasks. That is fundamentally a flawed architecture, and I'm sorry to say there's no solution anyone can come up with without seeing your entire project and helping you structure it better.
You'll have to figure out which tasks to dispatch at which priority, and when.
Here are my thoughts on the code you provided:
Why are you using the .barrier flag? This flag ensures that no other concurrent tasks run on the queue at the same time, which is good if you have data being manipulated by multiple tasks and want to avoid race conditions. However, it also means that after you dispatch the block, it waits for every task dispatched before it and only THEN blocks the queue to do your work.
As others have commented, profiling your code with Instruments will likely show this behaviour.
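Here's a small self-contained demo of that waiting behaviour (queue label and task names are made up for illustration):

```swift
import Foundation

var order: [String] = []
let queue = DispatchQueue(label: "demo.concurrent", attributes: .concurrent)
let done = DispatchSemaphore(value: 0)

// A slow task dispatched first.
queue.async {
    Thread.sleep(forTimeInterval: 0.2)
    order.append("earlier task")
}

// Dispatched "now", but the barrier still waits for the task above
// to finish before it gets exclusive access to the queue.
queue.async(flags: .barrier) {
    order.append("barrier block")
    done.signal()
}

done.wait()
print(order) // ["earlier task", "barrier block"]
```

So even a high-priority barrier block inherits the latency of everything queued before it.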
Here's how I would structure your audio requests:
//A serial (not concurrent) audio queue. It will download one piece of data at a time and play the audio.
let audioQueue = DispatchQueue(label: "Audio Queue My App", qos: .userInitiated)

//I'm on the main queue here!
audioQueue.async {
    //I'm on the audio queue here, so I'm not blocking the main thread.
    let result = downloadSomeData()
    //The download is done here, and I'm not being blocked by other dispatch queues if their priority is lower.
    switch result {
    case .success(let data):
        //Do something with the data and play the audio
        break
    case .failure(let error):
        //Handle the error
        break
    }
}
///Downloads some audio data
func downloadSomeData() -> Swift.Result<Data, Error> {
    //Create a dispatch group so we can wait for the download to finish.
    let dg = DispatchGroup()
    dg.enter()

    ///My audio data
    var data: Data?

    //Dispatch onto a background queue OR place a block operation onto your OperationQueue.
    DispatchQueue.global(qos: .background).async {
        //Download the audio data... Alamofire/URLSession
        data = Data()
        dg.leave()
    }
    dg.wait()

    //The data was downloaded! We're back on the queue that called us.
    if let nonOptionalData = data {
        return .success(nonOptionalData)
    } else {
        return .failure(NSError(domain: "MyDomain", code: 007, userInfo: [NSLocalizedDescriptionKey: "No data downloaded, failed?"]))
    }
}
Regarding option #2:
You can retrieve the number of active cores with ProcessInfo.processInfo.activeProcessorCount. Do note that this value can in theory change while your app is running, due to thermal throttling or other factors.
By using this information and setting the maxConcurrentOperationCount of an OperationQueue equal to the number of active processors, you could ensure that operations/tasks do not have to share CPU time (as long as no other DispatchQueues are running work).
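That setup can be sketched like this; the queue name is a placeholder:

```swift
import Foundation

// Read the currently active core count (can change at runtime).
let cores = ProcessInfo.processInfo.activeProcessorCount

let workQueue = OperationQueue()
workQueue.name = "Work Queue" // placeholder label
// One operation per active core, so operations don't have to share CPU time,
// as long as nothing else is scheduling work elsewhere.
workQueue.maxConcurrentOperationCount = cores
```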
This approach is not safe, however. All it takes is a team member dispatching a task somewhere else in the future.
What is useful about this information, though, is that you can limit the resources your download tasks use by giving them a dedicated OperationQueue with, say, a maximum of 2 concurrent operations, and putting all of your background work on that OperationQueue. Your CPU will most likely not be used to its full potential, leaving some headroom.