
I have several dispatch work items to execute on a queue. I don't want to redeclare the code each time; I want to put them into an array (or list) of `DispatchWorkItem`s and then submit that array to a dispatch queue. Is there any way to achieve this?

func executeDispatchWorkItem(url: String, completion: @escaping (Result<String, Error>) -> Void, beganHandler: @escaping (String) -> Void) {
    do {
        beganHandler("\(url) began to execute")
        let content = try String(contentsOf: URL(string: url)!)
        completion(.success(content))
    } catch {
        completion(.failure(error))
    }
    sleep(1)
}

let serialQueue = DispatchQueue(label: "A queue")

serialQueue.async {
    executeDispatchWorkItem(url: "https://www.google.com/", completion: { result in
        switch result {
        case .success(let content):
            print("Completed with \(content)")
        case .failure(let error):
            print("Failed with \(error.localizedDescription)")
        }
    }, beganHandler: { url in
        print("\(url) began to execute")
    })

    executeDispatchWorkItem(url: "https://www.facebook.com/", completion: { result in
        switch result {
        case .success(let content):
            print("Completed with \(content)")
        case .failure(let error):
            print("Failed with \(error.localizedDescription)")
        }
    }, beganHandler: { url in
        print("\(url) began to execute")
    })

    executeDispatchWorkItem(url: "https://www.youtube.com/", completion: { result in
        switch result {
        case .success(let content):
            print("Completed with \(content)")
        case .failure(let error):
            print("Failed with \(error.localizedDescription)")
        }
    }, beganHandler: { url in
        print("\(url) began to execute")
    })
}

// However, I want to achieve something like this:

let itemsToExecute: [DispatchWorkItem] = [dispatch1, dispatch2]

// Is this possible?

serialQueue.sync(execute: itemsToExecute)
BigFire

1 Answer


Yes, you can have an array of DispatchWorkItem objects, but to dispatch them all, you’d just have to iterate through them, e.g., with either for-in or forEach:

let queue = DispatchQueue(label: "com.domain.app.requests")
let group = DispatchGroup()

let itemsToExecute: [DispatchWorkItem] = [item1, item2]

itemsToExecute.forEach { queue.async(group: group, execute: $0) }

group.notify(queue: .main) {
    print("all done")         // this is called when the requests are done
}

Note, I used async vs sync, because the whole point of using GCD is to avoid blocking the main queue, and while sync blocks, async doesn’t.
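
(For illustration, `item1` and `item2` aren’t defined above; they could be built by wrapping the question’s `executeDispatchWorkItem` calls in `DispatchWorkItem` initializers. A minimal sketch, using URLs from the question:)

let item1 = DispatchWorkItem {
    executeDispatchWorkItem(url: "https://www.google.com/", completion: { result in
        print("google:", result)              // handle success/failure as needed
    }, beganHandler: { url in
        print("\(url) began to execute")
    })
}

let item2 = DispatchWorkItem {
    executeDispatchWorkItem(url: "https://www.facebook.com/", completion: { result in
        print("facebook:", result)
    }, beganHandler: { url in
        print("\(url) began to execute")
    })
}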

This begs the question of why you’d bother using an array of DispatchWorkItem at all, though. Just add the tasks to the queue directly, and the queue takes care of keeping track of all of them for you.
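
For example, the same requests dispatched directly, without any `DispatchWorkItem` array (a sketch reusing the `queue` and `group` from above and the question’s `executeDispatchWorkItem`):

for urlString in ["https://www.google.com/", "https://www.facebook.com/", "https://www.youtube.com/"] {
    queue.async(group: group) {
        executeDispatchWorkItem(url: urlString, completion: { result in
            print(urlString, result)          // success or failure for this URL
        }, beganHandler: { url in
            print("\(url) began to execute")
        })
    }
}

group.notify(queue: .main) {
    print("all done")
}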


Frankly, we’d probably just want to use URLSession. For example:

@discardableResult
func request(from urlString: String, completion: @escaping (Result<String, Error>) -> Void) -> URLSessionTask {
    let task = URLSession.shared.dataTask(with: URL(string: urlString)!) { data, response, error in
        guard let data = data, error == nil else {
            completion(.failure(error!))
            return
        }

        guard
            let httpResponse = response as? HTTPURLResponse,
            200..<300 ~= httpResponse.statusCode
        else {
            completion(.failure(NetworkError.invalidResponse(data, response)))
            return
        }

        guard let string = String(data: data, encoding: .utf8) else {
            completion(.failure(NetworkError.nonStringBody))
            return
        }

        completion(.success(string))
    }
    task.resume()
    return task
}

Where perhaps:

enum NetworkError: Error {
    case invalidResponse(Data, URLResponse?)
    case nonStringBody
}

Then, you can do something like:

for urlString in urlStrings {
    group.enter()
    request(from: urlString) { result in
        defer { group.leave() }

        switch result {
        case .failure(let error):
            print(urlString, error)

        case .success(let string):
            print(urlString, string.count)
        }
    }
}

group.notify(queue: .main) {
    print("all done")
}
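
(Here `urlStrings` is assumed to be an array of URL strings, e.g. the ones from the question, and `group` is a `DispatchGroup` created beforehand:)

let urlStrings = ["https://www.google.com/", "https://www.facebook.com/", "https://www.youtube.com/"]
let group = DispatchGroup()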
Rob
  • Is there a specific reason why `group.leave` is called right at the top, inside the completion closure, and not when the failure or success case is executed? – BigFire Jul 22 '19 at 07:06
  • The use of `defer { group.leave() }` is a defensive programming technique, that guarantees that there can’t be any path of execution where I neglect to call `leave()`. If you sprinkle `leave()` calls throughout that closure, it’s really easy to miss a path of execution. Or you might add a `guard` statement at some later date, and forget to call `leave()` before `return`. In this case, it’s simple enough that the risk is pretty modest, but it’s just too easy to make mistakes and `defer` w/ `leave` makes the intent clear, and is just safer. – Rob Jul 22 '19 at 07:23
  • In the first example, where you used `group.notify`, how come you did not use `group.enter`? Is there any significance? – BigFire Jul 22 '19 at 08:03
  • It’s because I used `async(group:)`, which does the `enter` and `leave` calls for us... – Rob Jul 22 '19 at 09:29
  • Thanks once again, I've gained understanding. – BigFire Jul 22 '19 at 09:43
  • I see that the queue executes all the requests and does not wait to get a response from a `DispatchWorkItem` before it moves on to the next one in the queue. Can the above approach be modified to achieve that? I have asked a question; I'm sure you can help: https://stackoverflow.com/questions/57145963/how-can-i-wait-to-receive-a-response-from-a-dispatchworkitem-before-moving-on-to – BigFire Jul 22 '19 at 12:23
  • You can, but we often work hard to avoid doing so because it’s so much slower. If at all possible, try to allow the requests to run concurrently and just manage your data model to accommodate data coming in quickly (and possibly out of sequence). If you absolutely have to have the requests run sequentially, the two common techniques are (1) have the completion handler of one initiate the next request (see the sketch below); or (2) employ some mechanism that prevents one asynchronous task from letting the next start until the prior one is done (e.g., wrapping them in an asynchronous `Operation` subclass). – Rob Jul 22 '19 at 16:19
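
A minimal sketch of technique (1) from the last comment above, where each completion handler kicks off the next request; it assumes the `request(from:completion:)` function shown in the answer:

func requestSequentially(_ urlStrings: [String], completion: @escaping () -> Void) {
    guard let urlString = urlStrings.first else {
        completion()                          // no URLs left; all requests finished
        return
    }

    request(from: urlString) { result in
        print(urlString, result)              // handle this response

        // only now start the next request
        requestSequentially(Array(urlStrings.dropFirst()), completion: completion)
    }
}

requestSequentially(["https://www.google.com/", "https://www.facebook.com/"]) {
    print("all done, in order")
}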