
I am trying to detect faces in a local video and blur or pixelate them while the video plays. So far I am grabbing each video frame and, using Vision or ML Kit (I've tried both), detecting the faces and pixelating them. The problem is that this per-frame processing takes so long that the video never even plays. My next idea was to process all the frames up front, export the result as a new video, and play that instead, but the export takes around 3-4 minutes, which is too long to wait.
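For reference, the per-frame detection step with Vision typically looks something like the sketch below (an illustrative sketch using `VNDetectFaceRectanglesRequest`, not the question's exact code):

```swift
import Vision
import CoreVideo

// A minimal sketch: detect face rectangles in one video frame with Vision.
func detectFaces(in pixelBuffer: CVPixelBuffer) -> [CGRect] {
    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])

    guard let faces = request.results as? [VNFaceObservation] else { return [] }
    // boundingBox is normalized to [0, 1]; scale to pixels before drawing.
    return faces.map { $0.boundingBox }
}
```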

This is more or less how I am getting the frames: Link
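(The link isn't reproduced here; below is a minimal sketch of one common way to pull frames from a local video with `AVAssetReader`, with illustrative names that may differ from the linked code.)

```swift
import AVFoundation
import CoreImage

// A minimal sketch: pull every frame of a local video with AVAssetReader.
func readFrames(from url: URL, handler: (CIImage) -> Void) throws {
    let asset = AVAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                         kCVPixelFormatType_32BGRA])
    reader.add(output)
    reader.startReading()

    // copyNextSampleBuffer() returns nil once the track is exhausted.
    while let sample = output.copyNextSampleBuffer(),
          let buffer = CMSampleBufferGetImageBuffer(sample) {
        handler(CIImage(cvPixelBuffer: buffer))
    }
}
```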

And here is how I pixelate the image: Link
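(Again, the link isn't reproduced here; this is a minimal sketch of pixelating a face region with Core Image's `CIPixellate` filter, similar in spirit to what the question describes.)

```swift
import CoreImage

// A minimal sketch: pixellate a single face rectangle inside a frame.
// faceRect is assumed to be in Core Image coordinates (origin bottom-left).
func pixellate(_ frame: CIImage, in faceRect: CGRect) -> CIImage {
    let patch = frame
        .applyingFilter("CIPixellate", parameters: [
            kCIInputScaleKey: 30,
            kCIInputCenterKey: CIVector(x: faceRect.midX, y: faceRect.midY)])
        .cropped(to: faceRect)

    // Composite the pixellated patch back over the untouched frame.
    return patch.composited(over: frame)
}
```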

What can I do to detect and blur faces in a local video without it taking so much time?

  • What *exactly* is taking too long: detection, or pixellation? If you take a still image and run it through your code, is it any better? –  Nov 17 '20 at 23:00
  • Face detection is taking more time: for each frame I convert it to an image and send it to ML Kit or Vision to detect faces, and that is what takes long. After getting the image back with the face details, I apply the blur, which takes less time than the face detection. – Rafael Jimeno Nov 18 '20 at 03:24
  • Then sorry. I'm mostly a Core Image coder these days (retired systems analyst) and it doesn't sound like I can be of help. I dabbled in ML Kit/Vision back when it was introduced, but that's it. It sounds like you've narrowed it down to the bottleneck. Last thought? Maybe it's not so much the back-and-forth of processing each frame but the... processing of each frame? Either way, good luck. –  Nov 18 '20 at 06:00
  • Have you tried using `CIDetector` to detect faces, initializing your filter context with a Metal device for faster results, and lowering the `CIDetector` accuracy option? (See the sketch after these comments.) – Coder ACJHP Nov 15 '21 at 12:49
  • Look at [this example](https://github.com/dcordero/BlurFace/blob/master/BlurFace/BlurFace/BlurFace.swift) – Coder ACJHP Nov 15 '21 at 12:50
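A minimal sketch of the `CIDetector` route suggested in the two comments above (modeled loosely on the linked BlurFace example; note that `CIDetector` only exposes `CIDetectorAccuracyLow` and `CIDetectorAccuracyHigh`, so lower accuracy maps to the low setting here):

```swift
import CoreImage
import Metal

// A minimal sketch: a Metal-backed CIContext plus a CIDetector tuned for speed.
let metalDevice = MTLCreateSystemDefaultDevice()!
let ciContext = CIContext(mtlDevice: metalDevice)

let faceDetector = CIDetector(
    ofType: CIDetectorTypeFace,
    context: ciContext,
    options: [CIDetectorAccuracy: CIDetectorAccuracyLow,  // favor speed over precision
              CIDetectorTracking: true])!                 // track faces across frames

func faceBounds(in frame: CIImage) -> [CGRect] {
    faceDetector.features(in: frame).map { $0.bounds }
}
```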

0 Answers