I've been using Vision to identify facial landmarks via VNDetectFaceLandmarksRequest.
It seems that whenever a face is detected, the resulting VNFaceObservation
always contains every possible landmark, with a position for each. The positions returned for occluded landmarks appear to be estimated ('guessed') by the framework.
I have tested this with a photo where the subject's face is turned to the left, so the left eye isn't visible; Vision still returns a left-eye landmark, along with a position.
The same happens with the mouth and nose of a subject wearing an N95 mask, or the eyes of someone wearing opaque sunglasses.
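For reference, this is roughly how I'm exercising the API (a minimal sketch; `cgImage` is an assumed input, and error handling is elided). Even for a profile view or a masked face, every region below comes back populated:

```swift
import Vision

// Run a face-landmarks request on an assumed `cgImage` and print every
// landmark region Vision reports, whether or not it is actually visible.
func printLandmarks(for cgImage: CGImage) throws {
    let request = VNDetectFaceLandmarksRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    for case let face as VNFaceObservation in request.results ?? [] {
        guard let landmarks = face.landmarks else { continue }
        // These regions are all non-nil even when occluded; the points
        // for hidden features are estimated positions, not detections.
        let regions: [(String, VNFaceLandmarkRegion2D?)] = [
            ("leftEye", landmarks.leftEye),
            ("rightEye", landmarks.rightEye),
            ("nose", landmarks.nose),
            ("outerLips", landmarks.outerLips),
        ]
        for (name, region) in regions {
            if let region = region {
                print("\(name): \(region.pointCount) points")
            }
        }
    }
}
```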
While this can be a useful feature for other use cases, is there a way, using Vision or CIDetector, to determine which facial landmarks are actually visible in a photo?
I also tried CIDetector, but it appears to detect mouths and smiles through N95 masks, so it doesn't seem to be a reliable alternative either.