I'm trying to use ARKit to detect whether a real person is using the device. I started from this other question on StackOverflow.

I am able to detect faces both from a "real person" in front of the device and from a "fake person" in a photo or video. I would like to use the TrueDepth camera to differentiate between them and determine whether a detected face is "real" or comes from a photo/video.

I tried computing the normals of the face mesh's triangles using the logic Apple provides in Calculate the Normal of a Triangle, but I'm not sure I'm doing it correctly.
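
For reference, the formula that article describes is just the normalized cross product of two edge vectors of the triangle. A minimal sketch of that (assuming the corners `a`, `b`, `c` are given in counter-clockwise order) would be:

    import simd

    /// Normal of a triangle with corners a, b, c (counter-clockwise winding assumed).
    func triangleNormal(_ a: simd_float3, _ b: simd_float3, _ c: simd_float3) -> simd_float3 {
        let edge1 = b - a   // first edge vector
        let edge2 = c - a   // second edge vector
        return simd_normalize(simd_cross(edge1, edge2))
    }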

Here's the code I currently have, where I compute the normals from the vertices I'm detecting.

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceGeometry = node.geometry as? ARSCNFaceGeometry,
            let faceAnchor = anchor as? ARFaceAnchor
            else { return }

        let vertices = faceAnchor.geometry.vertices
        let numberOfVertices = vertices.count
        var index = 0
        var normals = [simd_float3]()

        // Slide a window of three consecutive vertices and treat them as a triangle.
        while index < numberOfVertices - 2 {
            let vertex1 = vertices[index]
            let vertex2 = vertices[index + 1]
            let vertex3 = vertices[index + 2]

            // Two edge vectors of the triangle.
            let vector1 = vertex2 - vertex1
            let vector2 = vertex2 - vertex3

            // Normal = normalized cross product of the two edges.
            let normal = simd_normalize(simd_cross(vector1, vector2))
            normals.append(normal)

            index += 1
        }

        faceGeometry.update(from: faceAnchor.geometry)
    }
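
I'm aware that `ARFaceGeometry` also exposes `triangleIndices` and `triangleCount`, so a variation of the loop above could look up each triangle's corners through that index buffer instead of using consecutive vertices. I'm not sure whether this is the better approach (hence the question), but as a sketch:

    // Alternative sketch: compute one normal per mesh triangle using the
    // geometry's own index buffer instead of consecutive vertices.
    let geometry = faceAnchor.geometry
    let indices = geometry.triangleIndices        // [Int16], 3 entries per triangle
    var triangleNormals = [simd_float3]()
    triangleNormals.reserveCapacity(geometry.triangleCount)

    for triangle in 0..<geometry.triangleCount {
        let i0 = Int(indices[triangle * 3 + 0])
        let i1 = Int(indices[triangle * 3 + 1])
        let i2 = Int(indices[triangle * 3 + 2])

        let a = geometry.vertices[i0]
        let b = geometry.vertices[i1]
        let c = geometry.vertices[i2]

        triangleNormals.append(simd_normalize(simd_cross(b - a, c - a)))
    }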

I would like to know whether I'm computing the normals correctly, and how I should analyze that data to tell a "real person" apart from a "fake person".
