
I have a CoreML image classification task that takes the "live stream" from the iOS device's [video] camera and occurs in the background. Once objects have been identified, and other app logic has occurred, I would like to update the UI's label with some of the data.

Can someone explain how the callout to `DispatchQueue.main.async(execute: { })` is able to access the variable(s) I have been working with? I think this is essentially a scoping issue?
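For reference, the capture behaviour in question can be sketched in plain Swift, with no UIKit or Vision involved (the variable names below just mirror the ones in the code and are otherwise arbitrary):

```swift
// A minimal sketch of Swift closure capture: a closure captures the
// variables from its enclosing scope, so it can still read (and write)
// them when it is executed later.
var counter = 0
var otherVar = 0

let work: () -> Void = {
    // `counter` and `otherVar` are captured by reference here, so the
    // closure sees whatever values they hold at the moment it is called.
    print("\(counter): \(otherVar)")
}

counter = 9
otherVar = 42
work()   // prints "9: 42"
```

This is the same mechanism that lets the block passed to `DispatchQueue.main.async` see the surrounding variables.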

The code I am currently using:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    processCameraBuffer(sampleBuffer: sampleBuffer)

}

func processCameraBuffer(sampleBuffer: CMSampleBuffer) {

    let coreMLModel = Inceptionv3()

    if let model = try? VNCoreMLModel(for: coreMLModel.model) {
        let request = VNCoreMLRequest(model: model, completionHandler: { (request, error) in
            if let results = request.results as? [VNClassificationObservation] {

                var counter = 0
                var otherVar = 0

                for item in results[0...9] {

                    if item.identifier.contains("something") {
                        print("some app logic goes on here")
                        otherVar += 10 - counter
                    }
                    counter += 1

                }
                switch otherVar {
                case _ where otherVar >= 10:
                    DispatchQueue.main.async(execute: {
                        let displayVarFormatted = String(format: "%.2f", otherVar / 65 * 100)
                        self.labelPrediction.text = "\(counter): \(displayVarFormatted)%"
                    })
                default:
                    DispatchQueue.main.async(execute: {
                        self.labelPrediction.text = "No result!"
                    })
                }
            }
        })

        if let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            do {
                try handler.perform([request])
            } catch {
                print(error.localizedDescription)
            }
        }
    }
}

It's the `self.labelPrediction.text = ""` lines inside the switch statement that are causing the issue. This var is always 0 currently.
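One thing worth double-checking, separate from the threading question: `otherVar` is an `Int`, so `otherVar / 65` is integer division and truncates to 0 whenever `otherVar < 65`. A minimal sketch, using 40 as a stand-in value:

```swift
import Foundation  // for String(format:)

let otherVar = 40                           // Int, as in the loop above
let truncated = otherVar / 65 * 100         // integer division: 40 / 65 == 0
let fraction = Double(otherVar) / 65 * 100  // promote to Double first

print(truncated)                            // prints "0"
print(String(format: "%.2f", fraction))     // prints "61.54"
```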

jingo_man
  • Try putting a breakpoint on the line you mentioned and see what the variables contain. (Generally, blocks capture the values they need.) – Phillip Mills Dec 07 '17 at 13:25

1 Answer


It's not a matter of DispatchQueue. In processCameraBuffer(sampleBuffer:), your code updates the UI before it gets a result.

To solve this, you need to use an escaping closure. Your function should look like this:

func processCameraBuffer(sampleBuffer: CMSampleBuffer, completion: @escaping (Int, String) -> Void) {
    // 2.
    let request = VNCoreMLRequest(model: model, completionHandler: { (request, error) in
        // ... your existing results / counter / otherVar logic ...

        DispatchQueue.main.async(execute: {
            // 3.
            let displayVarFormatted = String(format: "%.2f", otherVar / 65 * 100)
            completion(counter, displayVarFormatted)
        })
    })
    // ... perform the request with VNImageRequestHandler as before ...
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) { 
    // 1.
    processCameraBuffer(sampleBuffer: sampleBuffer) { counter, displayVarFormatted in
        /*
         This Closure will be executed from 
         completion(counter, displayVarFormatted)
        */
        // 4.
        self.labelPrediction.text = "\(counter): \(displayVarFormatted)%"
    }
}

The scope of the variables is not the problem here. You need to handle the asynchronous task.

  1. Capture occurs.
  2. processCameraBuffer is called and VNCoreMLRequest executed.
  3. You get the data and execute processCameraBuffer's completion block via completion().
  4. Update the label.
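Assuming stand-in names of my own, the four steps above can be sketched without Vision or UIKit; a semaphore stands in for the app's run loop so a command-line sketch can wait for the background work:

```swift
import Foundation

// Stand-in for processCameraBuffer: the heavy work runs off the main
// queue, and the result comes back through an @escaping completion.
func process(completion: @escaping (Int, String) -> Void) {
    DispatchQueue.global().async {
        let counter = 9                             // pretend classification result
        let formatted = String(format: "%.2f", 61.54)
        completion(counter, formatted)              // step 3: hand the data back
    }
}

// Steps 1 and 4: the caller decides what to do with the result
// (in the app, this is where the label text would be set).
let done = DispatchSemaphore(value: 0)
process { counter, formatted in
    print("\(counter): \(formatted)%")              // prints "9: 61.54%"
    done.signal()
}
done.wait()
```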
Changnam Hong
  • Thanks @changnam-hong. This function is called from a "live stream" from the iOS device's camera: func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) { processCameraBuffer(sampleBuffer: sampleBuffer) } So this is executed on every frame capture. And it needs to continually rescan the video stream so image classification continually happens. Does this change the answer? – jingo_man Dec 07 '17 at 14:58
  • I have implemented my code to follow this format. However, at step 4 Xcode is still complaining that the `self.labelPrediction.text = ""` line needs to be run on the main thread. When I change this back to a `DispatchQueue.main.async{}`, once again this variable is never set (or is set to 0). – jingo_man Dec 08 '17 at 09:41