
I'm trying to build a simple real-time facial recognition iOS app that streams the user's face and tells them whether their eyes are closed. I'm following Google's tutorial here - https://firebase.google.com/docs/ml-kit/ios/detect-faces. I'm on step 2 (Run the Face Detector), trying to create a `VisionImage` from a `CMSampleBufferRef`. When I copy the code from the tutorial, the compiler complains that there is no reference to "sampleBuffer" as shown in the tutorial. I don't really understand the CMSampleBuffer stuff, so I don't know what to do.
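For context, the tutorial's `sampleBuffer` is not a global - it is the parameter of the `AVCaptureVideoDataOutputSampleBufferDelegate` callback, so the snippet only compiles inside that method. A minimal sketch of what I think the setup should look like (class and method names other than the delegate callback are my own guesses, and `VisionImage(buffer:)` is the Firebase ML Kit initializer from the tutorial):

```swift
import AVFoundation
import FirebaseMLVision

// Hypothetical delegate class; an AVCaptureSession with an
// AVCaptureVideoDataOutput is assumed to be configured elsewhere.
class FrameHandler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // `sampleBuffer` only exists as this callback's parameter --
        // pasting the tutorial code anywhere else produces an
        // "unresolved identifier" error.
        let visionImage = VisionImage(buffer: sampleBuffer)
        // ... hand visionImage to the face detector here ...
        _ = visionImage
    }
}
```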

Austin R.

1 Answer


ML Kit has a Quickstart app showing how to do that. Here is the code:

https://github.com/firebase/quickstart-ios/tree/master/mlvision
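Since the goal is detecting closed eyes, note that the quickstart also shows enabling classification, which makes each detected face report eye-open probabilities. A hedged sketch using the Firebase ML Kit face detection API (the 0.4 threshold is my own illustrative choice, and `visionImage` is assumed to be a `VisionImage` built from the camera frame):

```swift
import FirebaseMLVision

let options = VisionFaceDetectorOptions()
options.performanceMode = .fast    // favor latency for a live video stream
options.classificationMode = .all  // required for eye-open probabilities

let faceDetector = Vision.vision().faceDetector(options: options)

faceDetector.process(visionImage) { faces, error in
    guard error == nil, let faces = faces, !faces.isEmpty else { return }
    for face in faces {
        // Probabilities are only valid when the "has" flags are set.
        if face.hasLeftEyeOpenProbability && face.hasRightEyeOpenProbability {
            let eyesClosed = face.leftEyeOpenProbability < 0.4
                && face.rightEyeOpenProbability < 0.4
            print(eyesClosed ? "Eyes closed" : "Eyes open")
        }
    }
}
```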

Dong Chen