
My app needs to show a camera preview in SwiftUI for QR code scanning. As SwiftUI doesn't have a camera preview of its own, most tutorials describe how to wrap a UIKit-based AVCaptureVideoPreviewLayer in a UIViewRepresentable. However, I found this alternative approach, which converts each frame to a CGImage so that it can be presented in a SwiftUI Image view. This seems quite a bit simpler, but I'm concerned that it might be significantly less efficient than a UIKit-based solution. The conversion function is:

    func createCGImage(from buffer: CVPixelBuffer?) -> CGImage? {
        guard let buffer = buffer else { return nil }
        // Core Image context used for rendering (created anew on each call here)
        let context = CIContext()
        // Wrap the pixel buffer in a CIImage, then render it out as a CGImage
        let ciImage = CIImage(cvImageBuffer: buffer)
        return context.createCGImage(ciImage, from: ciImage.extent)
    }

Does that two-stage conversion mean Core Image has to convert the format of every pixel (perhaps twice?), while AVCaptureVideoPreviewLayer can skip that and display the CVPixelBuffer without any pixel conversion? Or do one or both of those Core Image stages only change the container, so there's no significant overhead? Or do both involve pixel conversion, but it's done on the GPU in either case, so it doesn't make much difference?
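
For reference, the UIKit-based alternative the tutorials describe would look roughly like the sketch below. It assumes an AVCaptureSession already exists; CameraPreview and PreviewView are placeholder names of my own:

    import SwiftUI
    import AVFoundation

    // Minimal sketch of the AVCaptureVideoPreviewLayer approach.
    struct CameraPreview: UIViewRepresentable {
        let session: AVCaptureSession

        // A UIView backed by an AVCaptureVideoPreviewLayer, so the layer
        // resizes automatically with the view.
        final class PreviewView: UIView {
            override class var layerClass: AnyClass {
                AVCaptureVideoPreviewLayer.self
            }
            var previewLayer: AVCaptureVideoPreviewLayer {
                layer as! AVCaptureVideoPreviewLayer
            }
        }

        func makeUIView(context: Context) -> PreviewView {
            let view = PreviewView()
            view.previewLayer.session = session
            view.previewLayer.videoGravity = .resizeAspectFill
            return view
        }

        func updateUIView(_ uiView: PreviewView, context: Context) {}
    }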

realh
  • Your Core Image code might be as fast as `AVCaptureVideoPreviewLayer`. The only way to be sure is to measure it. But it's almost certainly not faster, and is probably slower, because you can be sure that Apple has optimized the heck out of `AVCaptureVideoPreviewLayer` for every device. – rob mayoff Sep 21 '22 at 15:37
  • The trouble is, I'd like to know in advance how much more efficient `AVCaptureVideoPreviewLayer` is so I can decide whether it's worth rewriting that part of my app. How would I measure it anyway? The function I would need to profile is buried somewhere only known to Apple, I think. I suppose I could just compare the CPU/GPU usage. – realh Sep 21 '22 at 15:50
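
Regarding the measurement question in the last comment, one way to put a number on the conversion cost might be to wrap the call in signposts and inspect it in Instruments. This is a minimal sketch; the subsystem string and the surrounding frame-handling function are assumptions:

    import os.signpost
    import CoreVideo

    // Assumed to live in the object that receives camera frames.
    let conversionLog = OSLog(subsystem: "com.example.qrscanner", category: .pointsOfInterest)

    func handle(_ buffer: CVPixelBuffer) {
        let id = OSSignpostID(log: conversionLog)
        // Mark the start and end of the conversion so Instruments can show its duration.
        os_signpost(.begin, log: conversionLog, name: "createCGImage", signpostID: id)
        let cgImage = createCGImage(from: buffer)
        os_signpost(.end, log: conversionLog, name: "createCGImage", signpostID: id)
        // ... hand cgImage off to the SwiftUI view ...
        _ = cgImage
    }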

0 Answers