My app needs to show a camera preview in SwiftUI for QR code scanning. Since SwiftUI doesn't have a built-in camera preview, most tutorials describe wrapping AVCaptureVideoPreviewLayer (UIKit-based) in a UIViewRepresentable. However, I found an alternative approach that converts each frame to a CGImage so it can be presented in a SwiftUI Image view. This seems quite a bit simpler, but I'm concerned it might be significantly less efficient than the UIKit-based solution. The conversion function is:
import CoreImage
import CoreVideo

func createCGImage(from buffer: CVPixelBuffer?) -> CGImage? {
    guard let buffer = buffer else { return nil }
    let context = CIContext()
    // Wrap the pixel buffer in a CIImage, then render it out as a CGImage
    let ciImage = CIImage(cvImageBuffer: buffer)
    return context.createCGImage(ciImage, from: ciImage.extent)
}
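For context, this is roughly how I'd be displaying the converted frames in SwiftUI (my own sketch, not from the tutorial; FrameView and the frame property are names I've made up, and the delegate plumbing that produces the CGImage is omitted):

import SwiftUI

// Sketch only: shows a CGImage produced per frame in a SwiftUI Image view.
struct FrameView: View {
    // Updated with the latest converted frame (e.g. from a video data output delegate)
    let frame: CGImage?

    var body: some View {
        if let frame = frame {
            Image(decorative: frame, scale: 1.0, orientation: .up)
                .resizable()
                .scaledToFit()
        } else {
            Color.black
        }
    }
}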
Does that two-stage conversion mean Core Image has to convert the format of every pixel (twice?), while AVCaptureVideoPreviewLayer can display the CVPixelBuffer directly and skip any pixel conversion? Or do one or both of those Core Image stages only change the container, so there's no significant overhead? Or do both paths involve pixel conversion, but it's done on the GPU in either case, so it doesn't make much difference?
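For reference, the UIKit-based version I'd be comparing against is roughly this kind of wrapper (a minimal sketch under my own naming; AVCaptureSession configuration is omitted):

import SwiftUI
import UIKit
import AVFoundation

// Sketch of the UIViewRepresentable wrapper the tutorials describe (names are mine).
struct CameraPreview: UIViewRepresentable {
    let session: AVCaptureSession

    // A UIView whose backing layer is an AVCaptureVideoPreviewLayer,
    // so the capture session feeds the layer directly.
    final class PreviewView: UIView {
        override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
        var previewLayer: AVCaptureVideoPreviewLayer { layer as! AVCaptureVideoPreviewLayer }
    }

    func makeUIView(context: Context) -> PreviewView {
        let view = PreviewView()
        view.previewLayer.session = session
        view.previewLayer.videoGravity = .resizeAspectFill
        return view
    }

    func updateUIView(_ uiView: PreviewView, context: Context) {}
}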