I’m currently working on a feature of my app that recognizes faces in a camera stream, reading landmark features such as the mouth. Everything works fine when the lighting is sufficient, but in the dark both ARKit and Vision have trouble. Is there a way to automatically adapt the stream’s brightness to the ambient illumination in order to maintain functionality?

Research shows that exposure time is central to image brightness. So I tried a function that adapts the exposure time: if no face is recognized and the image is too dark, the exposure time is increased by 0.01, similar to the functions from this article. But either it didn’t work, or the image became too bright and again no face could be recognized. Because of that I tried the automatic version, captureDevice.exposureMode = .continuousAutoExposure, but I didn’t notice any significant improvement.
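For context, the adaptation function was roughly like the following sketch (simplified; the increaseExposure name and the clamping against maxExposureDuration are illustrative, not my exact code):

    import AVFoundation

    // Illustrative helper: increase the exposure duration by a small step
    // when the frame is too dark and no face was detected.
    func increaseExposure(of captureDevice: AVCaptureDevice, by seconds: Double = 0.01) {
        do {
            try captureDevice.lockForConfiguration()
            defer { captureDevice.unlockForConfiguration() }

            // Clamp the new duration to what the active format supports.
            let current = CMTimeGetSeconds(captureDevice.exposureDuration)
            let maximum = CMTimeGetSeconds(captureDevice.activeFormat.maxExposureDuration)
            let target = min(current + seconds, maximum)

            // Switches the device to .custom exposure; ISO keeps its current value.
            captureDevice.setExposureModeCustom(
                duration: CMTime(seconds: target, preferredTimescale: 1_000_000),
                iso: AVCaptureDevice.currentISO,
                completionHandler: nil)
        } catch {
            print("lockForConfiguration failed: \(error)")
        }
    }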

Here is my code:

Vision API

    let devices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                   mediaType: AVMediaType.video,
                                                   position: .front).devices

    // 2: Select a capture device
    do {
        if let captureDevice = devices.first {
            let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)

            // Configure the device once, inside a single lock/unlock pair.
            try captureDevice.lockForConfiguration()

            if captureDevice.isLowLightBoostSupported {
                captureDevice.automaticallyEnablesLowLightBoostWhenAvailable = true
            }

            if captureDevice.isExposureModeSupported(.continuousAutoExposure) {
                captureDevice.exposureMode = .continuousAutoExposure
            }

            if captureDevice.isFocusModeSupported(.continuousAutoFocus) {
                captureDevice.focusMode = .continuousAutoFocus
            }

            captureDevice.automaticallyAdjustsVideoHDREnabled = true

            captureDevice.unlockForConfiguration()

            avSession.addInput(captureDeviceInput)
        }
    } catch {
        print(error.localizedDescription)
    }

    let captureOutput = AVCaptureVideoDataOutput()
    captureOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    avSession.addOutput(captureOutput)
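To drive the adaptation, the frames delivered to the delegate could be checked for brightness first. A sketch reading the EXIF attachment on the sample buffer (the −1.0 threshold is a guess that would need tuning):

    import AVFoundation
    import ImageIO

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // The EXIF attachment carries a logarithmic brightness value per frame.
        guard let exif = CMGetAttachment(sampleBuffer,
                                         key: kCGImagePropertyExifDictionary,
                                         attachmentModeOut: nil) as? [String: Any],
              let brightness = exif[kCGImagePropertyExifBrightnessValue as String] as? Double
        else { return }

        // Negative values roughly correspond to dark scenes.
        if brightness < -1.0 {
            // e.g. call the exposure-adaptation function from above
        }

        // ... hand the buffer to Vision as before ...
    }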

ARKit

    func renderer(_ renderer: SCNSceneRenderer,
                  didUpdate node: SCNNode,
                  for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else {
            return
        }

        // HDR / exposure-adaptation settings; these only take effect if this
        // node actually has an SCNCamera attached.
        node.camera?.bloomThreshold = 1
        node.camera?.wantsHDR = true
        node.camera?.wantsExposureAdaptation = true
        node.camera?.exposureAdaptationBrighteningSpeedFactor = 0.2

        node.focusBehavior = .focusable
        faceGeometry.update(from: faceAnchor.geometry)
        expression(anchor: faceAnchor)

    ...
    }
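For detecting the dark condition on the ARKit side, the per-frame light estimate could be read like this (a sketch assuming an ARSessionDelegate; Apple documents ~1000 lumens as a well-lit scene, and the 300 cutoff is my own guess):

    import ARKit

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // ARKit's per-frame ambient light estimate, in lumens.
        guard let estimate = frame.lightEstimate else { return }

        if estimate.ambientIntensity < 300 { // cutoff is a guess, needs tuning
            // Scene is dark: warn the user, or fall back to the AVCapture
            // path with boosted exposure.
        }
    }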