
I am trying to make a simple camera application where the front camera can detect faces. This should be simple enough:

  • Create a CameraView class that inherits from UIImageView and place it in the UI. Make sure it implements AVCaptureVideoDataOutputSampleBufferDelegate so it can process frames from the camera in real time.

    class CameraView: UIImageView, AVCaptureVideoDataOutputSampleBufferDelegate 
    
  • Within a function handleCamera, called when the CameraView is instantiated, set up an AVCaptureSession and add input from the camera.

    override init(frame: CGRect) {
        super.init(frame:frame)
    
        handleCamera()
    }
    
    func handleCamera () {
        camera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera,
                                               mediaType: AVMediaTypeVideo, position: .front)
        session = AVCaptureSession()
    
        // Set recovered camera as an input device for the capture session
        do {
            input = try AVCaptureDeviceInput(device: camera)
        } catch _ as NSError {
            print("ERROR: Front camera can't be used as input")
            input = nil
        }
    
        // Add the input from the camera to the capture session
        if (session?.canAddInput(input) == true) {
            session?.addInput(input)
        }
    
  • Create the output and a serial output queue to which the frame data is passed, where it will then be processed by the AVCaptureVideoDataOutputSampleBufferDelegate (the class itself in this case). Add the output to the session.

        output = AVCaptureVideoDataOutput()
    
        output?.alwaysDiscardsLateVideoFrames = true    
        outputQueue = DispatchQueue(label: "outputQueue")
        output?.setSampleBufferDelegate(self, queue: outputQueue)
    
        // add front camera output to the session for use and modification
        if (session?.canAddOutput(output) == true) {
            session?.addOutput(output)
        } else {
            // front camera can't be used as output, not working: handle error
            print("ERROR: Output not viable")
        }
    
  • Set up the camera preview layer and run the session

        // Setup camera preview with the session input
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
        previewLayer?.frame = self.bounds
        self.layer.addSublayer(previewLayer!)
    
        // Process the camera and run it onto the preview
        session?.startRunning()
    
  • In the captureOutput function called on the delegate, convert the received sample buffer to a CIImage in order to detect faces. Give feedback if a face is found.

    func captureOutput(_ captureOutput: AVCaptureOutput!, didDrop sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {

        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)

        let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
        let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
        let faces = faceDetector?.features(in: cameraImage)

        for face in faces as! [CIFaceFeature] {

            print("Found bounds are \(face.bounds)")

            let faceBox = UIView(frame: face.bounds)

            faceBox.layer.borderWidth = 3
            faceBox.layer.borderColor = UIColor.red.cgColor
            faceBox.backgroundColor = UIColor.clear
            self.addSubview(faceBox)

            if face.hasLeftEyePosition {
                print("Left eye bounds are \(face.leftEyePosition)")
            }

            if face.hasRightEyePosition {
                print("Right eye bounds are \(face.rightEyePosition)")
            }
        }
    }
    

My problem: I can get the camera running, but despite the multitude of different code samples I have tried from all over the internet, I have never been able to get captureOutput to detect a face. Either the application doesn't enter the function, or it crashes because of a variable that doesn't work, most often because the sampleBuffer variable is nil. What am I doing wrong?

KazToozs

1 Answer


You need to change your captureOutput function arguments to the following:

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)

Your captureOutput function is called when a buffer is dropped, not when a frame is received from the camera.
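A minimal sketch of the corrected delegate method, with the detection code from the question moved inside it. Since the delegate runs on the serial outputQueue, the UIView work is dispatched back to the main queue here:

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        // Wrap the frame's pixel buffer in a CIImage for Core Image
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)

        // Same CIDetector-based face detection as in the question
        let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
        let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
        let features = faceDetector?.features(in: cameraImage) ?? []

        for case let face as CIFaceFeature in features {
            print("Found bounds are \(face.bounds)")

            // UI updates must happen on the main thread
            DispatchQueue.main.async {
                let faceBox = UIView(frame: face.bounds)
                faceBox.layer.borderWidth = 3
                faceBox.layer.borderColor = UIColor.red.cgColor
                faceBox.backgroundColor = UIColor.clear
                self.addSubview(faceBox)
            }
        }
    }

On newer SDKs the implicitly unwrapped optionals are gone and the delegate method is named captureOutput(_:didOutput:from:), so check which signature Xcode autocompletes for your Swift version.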

    I actually found this with the help of an iOS dev on my internship and forgot to update the question. This was effectively all that was missing, thank you for looking through and hopefully this'll help someone else. – KazToozs May 22 '17 at 18:49
  • Were you able to run the detection smoothly? I've even tried with CIDetectorAccuracyLow still the view looks a bit sluggish when I turn on face detection in real time. – nr5 Mar 06 '19 at 06:21