Hey, I am using the Affectiva Affdex iOS SDK. I have two views:

  1. A UIView where I run a camera stream. The code for it is here:

    func allConfig(withCamView cams: UIView) {
        let captureDevice = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInDualCamera, .builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: .unspecified)

        for device in (captureDevice?.devices)! {
            if device.position == .front {
                do {
                    let input = try AVCaptureDeviceInput(device: device)

                    if session.canAddInput(input) {
                        session.addInput(input)
                    }

                    if session.canAddOutput(previewOutput) {
                        session.addOutput(previewOutput)
                    }

                    previewLayer = AVCaptureVideoPreviewLayer(session: session)
                    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                    previewLayer.connection.videoOrientation = .portrait

                    cams.layer.addSublayer(previewLayer)

                    previewLayer.position = CGPoint(x: cams.frame.width / 2, y: cams.frame.height / 2)
                    previewLayer.bounds = cams.frame

                    session.startRunning()
                } catch let avError {
                    print(avError)
                }
            }
        }
    }
    
  2. Another UICollectionView cell where I start a detector. The code for that is here:

    func createDetector() {
        destroyDetector()

        let captureDevice = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInDualCamera, .builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: .unspecified)

        for device in (captureDevice?.devices)! {
            if device.position == .front {
                EMDetector = AFDXDetector(delegate: self, using: device, maximumFaces: 2, face: LARGE_FACES)
                EMDetector.maxProcessRate = 5

                // turn on all classifiers (emotions, expressions, and emojis)
                EMDetector.setDetectAllExpressions(true)
                EMDetector.setDetectAllEmotions(true)
                EMDetector.setDetectAllAppearances(true)
                EMDetector.setDetectEmojis(true)

                // turn on gender and glasses
                EMDetector.gender = true
                EMDetector.glasses = true

                // start the detector and check for failure
                let error: Error? = EMDetector.start()
                if error != nil {
                    print("Some failure in detector")
                    print("root cause of error ------------------------- > \(error.debugDescription)")
                }
            }
        }
    }
    

These two views split the screen 50-50.

Issue:

Whenever I run the app, the camera stream freezes after about one second, which is exactly when the detector starts. If you check their GitHub sample app (https://github.com/Affectiva/affdexme-ios/tree/master/apps/AffdexMe), also available on the App Store, the camera view stays live even while they are detecting emotions.

I even tried merging the two functions into one and calling that instead, but somehow one still cancels the other.

What is the way around this?

Thanks

Aakash Dave

1 Answer


The problem is that you're creating a capture session for your first view and the SDK creates another session to process the camera input. You can't have multiple sessions running at the same time.

One way to fix this is to use the image returned from the delegate method func detector(_ detector: AFDXDetector!, hasResults faces: NSMutableDictionary!, for image: UIImage!, atTime time: TimeInterval) in both views.
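
As a rough illustration of that first approach (the delegate protocol name AFDXDetectorDelegate, the controller name MyViewController, and the two image-view outlets cameraImageView / detectionImageView are assumptions here; swap in whatever your project actually uses):

    import UIKit

    extension MyViewController: AFDXDetectorDelegate {

        // The detector calls this for every frame it handles; the same UIImage can
        // drive both halves of the screen, so no second capture session is needed.
        func detector(_ detector: AFDXDetector!,
                      hasResults faces: NSMutableDictionary!,
                      for image: UIImage!,
                      atTime time: TimeInterval) {
            DispatchQueue.main.async {
                self.cameraImageView.image = image       // the "camera stream" view
                self.detectionImageView.image = image    // the collection view cell's view

                // faces is nil for frames the detector skipped (see maxProcessRate);
                // when it is non-nil, read the per-face metrics from the dictionary here.
            }
        }
    }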

Another way is to create your own camera session and pass the images to the detector yourself (a rough sketch follows the two steps below):

  1. Initialize your detector like this:

    EMDetector = AFDXDetector(delegate: self, discreteImages: false, maximumFaces: 2, face: LARGE_FACES)

  2. Then pass images from your capture session to the detector using:

    EMDetector.processImage(UIImage!, atTime: TimeInterval)
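
A rough, untested sketch of how those two steps could fit around the session from your question (CameraViewController is a hypothetical name for whatever object owns session and EMDetector; only processImage(_:atTime:) is Affdex-specific, the rest is standard AVFoundation / Core Image):

    import AVFoundation
    import CoreImage
    import UIKit

    extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

        // Call this once, right after the inputs/outputs are added in allConfig(withCamView:).
        func addFrameOutput() {
            let videoOutput = AVCaptureVideoDataOutput()
            videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "affdex.frames"))
            if session.canAddOutput(videoOutput) {
                session.addOutput(videoOutput)
            }
        }

        // Swift 4 signature; if your project still uses the Swift 3 AVFoundation names from
        // the question, the callback is captureOutput(_:didOutputSampleBuffer:from:) instead.
        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

            // Convert the raw frame to a UIImage the detector can consume.
            let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
            guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else { return }
            let image = UIImage(cgImage: cgImage)

            // Feed the frame to the detector created with discreteImages: false.
            let timestamp = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            EMDetector.processImage(image, atTime: timestamp)
        }
    }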

Moraly
  • Hey, many thanks for the answer. I was really in need of the solution. I had a question: if I go with the first way, do I need to define the delegate method and other processed-image methods in both views? – Aakash Dave Nov 13 '17 at 18:17
  • What do you mean by defining the delegate method in both views? Your detector should have only one delegate; then pass the image to both of your views. – Moraly Nov 13 '17 at 18:33
  • Hey! I am using the other way, passing the images to the detector from my camera session. It is not able to find the faces in the images. What could be the issue here? – Aakash Dave Dec 06 '17 at 08:52