
I'm developing an ARKit app that uses the Vision framework (it runs a CoreML model).

The loopCoreMLUpdate() function creates a loop that leads to a Very High Energy Impact (CPU = 70%, GPU = 66%).

How can I handle this task and bring the Energy Impact down to a LOW level?

What is a workaround for this loop issue that will help me decrease the CPU/GPU workload?

Here's my code:

import UIKit
import SpriteKit
import ARKit
import Vision

class ViewController: UIViewController, ARSKViewDelegate {

    @IBOutlet weak var sceneView: ARSKView!
    let dispatchQueueML = DispatchQueue(label: "AI")
    var visionRequests = [VNRequest]()

    // .........................................
    // .........................................

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        let configuration = AROrientationTrackingConfiguration()
        sceneView.session.run(configuration)

        loopCoreMLUpdate()
    }

    func loopCoreMLUpdate() {
        dispatchQueueML.async {
            self.loopCoreMLUpdate()  // SELF-LOOP LEADS TO A VERY HIGH IMPACT
            self.updateCoreML()
        }
    }

    func updateCoreML() {
        let piBuffer: CVPixelBuffer? = (sceneView.session.currentFrame?.capturedImage)
        if piBuffer == nil { return }
        let ciImage = CIImage(cvPixelBuffer: piBuffer!)
        let imageRequestHandler = VNImageRequestHandler(ciImage: ciImage, options: [:])

        do {
            try imageRequestHandler.perform(self.visionRequests)
        } catch {
            print(error)
        }
    }
    // .........................................
    // .........................................
}
Andy Jazz
  • Basically you are asking 'How to reduce battery usage on this function which I haven't posted?' – J. Doe Dec 02 '18 at 13:33

1 Answer


Yes, the line you've marked would definitely be a huge problem. You're not looping here; you're spawning new async tasks as fast as you can, before the previous one even completes. In any case, you're trying to capture CVPixelBuffers faster than they're created, which is a huge waste.

If you want to capture frames, you don't create a tight loop to sample them. You set yourself as the ARSessionDelegate and implement session(_:didUpdate:). The system will tell you when there's a new frame available. (It is possible to create your own rendering loop, but you're not doing that here, and you shouldn't unless you really need your own rendering pipeline.)
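
Assuming the same sceneView, dispatchQueueML, and visionRequests properties from your class, a delegate-driven version might look roughly like this (a sketch, not a drop-in implementation):

class ViewController: UIViewController, ARSKViewDelegate, ARSessionDelegate {

    // sceneView, dispatchQueueML and visionRequests stay exactly as in your code

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        let configuration = AROrientationTrackingConfiguration()
        sceneView.session.delegate = self     // let ARKit push frames to us
        sceneView.session.run(configuration)
        // no loopCoreMLUpdate() call; the delegate method below replaces it
    }

    // ARKit calls this once for every captured frame.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pixelBuffer = frame.capturedImage
        dispatchQueueML.async {
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            do {
                try handler.perform(self.visionRequests)
            } catch {
                print(error)
            }
        }
    }
}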

Keep in mind that you will receive a lot of frames very quickly. 30fps or 60fps are very common, but it can be as high as 120fps. You cannot use all of that time slice (other things need processor time, too). The point is that you often will not be able to keep up with the frame rate and will either need to buffer for later processing, or drop frames, or both. This is a very normal part of real-time processing.

For this kind of classifying system, you probably want to choose your actual frame rate, maybe as low as 10-20fps, and skip frames in order to maintain that rate. Classifying dozens of nearly-identical frames is not likely helpful.
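
As a rough sketch of how you might do that: keep a couple of bookkeeping properties (the names here are made up, not from your code) and drop frames whenever a request is still in flight or you're ahead of your target rate:

var isProcessingFrame = false
var lastProcessedTime: TimeInterval = 0
let targetInterval: TimeInterval = 1.0 / 15.0   // aim for roughly 15 fps of classification

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Skip this frame if the previous request is still running
    // or if we're ahead of the target classification rate.
    guard !isProcessingFrame,
          frame.timestamp - lastProcessedTime >= targetInterval else { return }

    isProcessingFrame = true
    lastProcessedTime = frame.timestamp
    let pixelBuffer = frame.capturedImage

    dispatchQueueML.async {
        defer {
            // Delegate callbacks arrive on the main queue by default,
            // so reset the flag there to avoid racing with the guard above.
            DispatchQueue.main.async { self.isProcessingFrame = false }
        }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform(self.visionRequests)
    }
}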

That said, make sure you've read Recognizing Objects in Live Capture. It feels like that's what you're trying to do, and there's good sample code available for that.
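
For completeness, setting up the visionRequests array usually looks something like the sketch below; YourModel is a placeholder for whatever class Xcode generated from your .mlmodel file, so adjust the names to your project:

func setUpVision() {
    // YourModel is a placeholder for your generated Core ML model class.
    guard let visionModel = try? VNCoreMLModel(for: YourModel().model) else {
        print("Could not load the Core ML model")
        return
    }
    let classificationRequest = VNCoreMLRequest(model: visionModel) { request, error in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        print("\(best.identifier): \(best.confidence)")
    }
    classificationRequest.imageCropAndScaleOption = .centerCrop
    visionRequests = [classificationRequest]
}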

Rob Napier