Hey, I am using the Affectiva Affdex iOS SDK. I have two views:
A UIView -> where I run the camera stream. The code for it is here:
    func allConfig(withCamView cams: UIView) {
        let captureDevice = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInDualCamera, .builtInWideAngleCamera],
                                                            mediaType: AVMediaTypeVideo,
                                                            position: .unspecified)
        for device in (captureDevice?.devices)! {
            if device.position == .front {
                do {
                    let input = try AVCaptureDeviceInput(device: device)
                    if session.canAddInput(input) {
                        session.addInput(input)
                    }
                    if session.canAddOutput(previewOutput) {
                        session.addOutput(previewOutput)
                    }
                    previewLayer = AVCaptureVideoPreviewLayer(session: session)
                    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                    previewLayer.connection.videoOrientation = .portrait
                    cams.layer.addSublayer(previewLayer)
                    previewLayer.position = CGPoint(x: cams.frame.width / 2, y: cams.frame.height / 2)
                    previewLayer.bounds = cams.frame
                    session.startRunning()
                } catch let avError {
                    print(avError)
                }
            }
        }
    }
A UICollectionViewCell -> where I start the detector. The code for that is here:
    func createDetector() {
        destroyDetector()
        let captureDevice = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInDualCamera, .builtInWideAngleCamera],
                                                            mediaType: AVMediaTypeVideo,
                                                            position: .unspecified)
        for device in (captureDevice?.devices)! {
            if device.position == .front {
                EMDetector = AFDXDetector(delegate: self, using: device, maximumFaces: 2, face: LARGE_FACES)
                EMDetector.maxProcessRate = 5

                // turn on all classifiers (emotions, expressions, and emojis)
                EMDetector.setDetectAllExpressions(true)
                EMDetector.setDetectAllEmotions(true)
                EMDetector.setDetectAllAppearances(true)
                EMDetector.setDetectEmojis(true)

                // turn on gender and glasses
                EMDetector.gender = true
                EMDetector.glasses = true

                // start the detector and check for failure
                let error: Error? = EMDetector.start()
                if nil != error {
                    print("Some failure in detector")
                    print("root cause of error ------------------------- > \(error.debugDescription)")
                }
            }
        }
    }
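(For context, both functions boil down to the same first step: each one independently discovers and claims the front camera before setting up its own pipeline. Something like this hypothetical helper is effectively what each of them does:)

    // Hypothetical helper, only to show the overlap: allConfig(withCamView:) and
    // createDetector() each discover and claim the same front camera device.
    func frontCamera() -> AVCaptureDevice? {
        let discovery = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInDualCamera, .builtInWideAngleCamera],
                                                        mediaType: AVMediaTypeVideo,
                                                        position: .front)
        return discovery?.devices.first
    }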
These two views split the screen 50-50.
Issue:
Whenever I run the app, the camera stream freezes after about one second, which is exactly when the detector starts. If you check their GitHub sample app (https://github.com/Affectiva/affdexme-ios/tree/master/apps/AffdexMe), which is also available on the App Store, the camera view stays live even while they are detecting emotions.
I even tried merging the two functions into one and calling that instead, but somehow one still cancels the other.
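From what I can tell from the sample, AffdexMe never runs its own AVCaptureSession at all: the detector owns the camera, and the "preview" is just an image view that gets refreshed with the frames the detector hands back through its delegate (unprocessed frames arrive with a nil results dictionary). A rough sketch of what I mean is below; the delegate signature is my Swift bridging of the Objective-C protocol, so it may differ slightly, and cameraImageView is a placeholder for my own view:

    // Sketch: let AFDXDetector own the front camera and drive the preview from
    // its delegate callback instead of running a second AVCaptureSession.
    func detector(_ detector: AFDXDetector, hasResults faces: NSMutableDictionary?, for image: UIImage, atTime time: TimeInterval) {
        if faces == nil {
            // Unprocessed camera frame: use it to keep the "preview" alive.
            DispatchQueue.main.async {
                self.cameraImageView.image = image // UIImageView standing in for the AVCaptureVideoPreviewLayer
            }
        } else {
            // Processed frame: faces maps face identifiers to AFDXFace objects
            // with the emotion/expression scores; update the emotion UI here.
        }
    }

If that is the intended pattern, I would drop allConfig(withCamView:) entirely and replace the preview layer with that image view.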
What is the way around this?
Thanks