I'm working with the Vision framework to detect faces in images, but I couldn't find anywhere in Apple's documentation what the input image requirements are. Usually when working with a machine learning model, and in particular with an .mlmodel in CoreML, the model describes its required input, for example Image (Color 112 x 112).
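For comparison, this is the kind of constraint I mean. With CoreML you can even read it off a compiled model at runtime (a minimal sketch; compiledModelURL is a stand-in for the URL of an actual compiled .mlmodelc in the bundle):

import CoreML

do {
    // Minimal sketch: compiledModelURL points at a compiled model (.mlmodelc).
    let model = try MLModel(contentsOf: compiledModelURL)

    // MLModelDescription exposes the input constraints the .mlmodel declares,
    // e.g. the "Image (Color 112 x 112)" that Xcode shows.
    for (name, feature) in model.modelDescription.inputDescriptionsByName {
        if let constraint = feature.imageConstraint {
            print("\(name): \(constraint.pixelsWide) x \(constraint.pixelsHigh)")
        }
    }
} catch {
    print("Failed to load model: \(error)")
}

Vision's request classes don't seem to expose anything equivalent, which is why I'm asking.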
Here's the code I'm using:
import UIKit
import Vision

let image: UIImage = someUIImage()

// image is non-optional here, so guard on cgImage rather than force-unwrapping.
guard let cgImage = image.cgImage else { return }

// Vision can take the CGImage directly; wrapping it in a CIImage isn't needed.
// An orientation can also be passed here if the image isn't upright.
let handler = VNImageRequestHandler(cgImage: cgImage)

let faceRequest = VNDetectFaceLandmarksRequest { (request: VNRequest, error: Error?) in
    guard let observations = request.results as? [VNFaceObservation] else {
        print("unexpected result type from VNDetectFaceLandmarksRequest")
        return
    }
    self.doSomething(with: observations)
}

do {
    try handler.perform([faceRequest])
} catch {
    print("Face detection failed: \(error)")
}
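In case it matters, doSomething just walks the observations; a sketch of what I do with them:

func doSomething(with observations: [VNFaceObservation]) {
    for face in observations {
        // boundingBox is normalized to 0...1 with a lower-left origin,
        // regardless of the input image's pixel dimensions.
        print("face at \(face.boundingBox)")

        // Landmark points are normalized as well, relative to the
        // face's bounding box.
        if let allPoints = face.landmarks?.allPoints {
            print("\(allPoints.pointCount) landmark points")
        }
    }
}

Since everything comes back in normalized coordinates, I can't tell from the output whether Vision is scaling the image internally or expects a particular minimum size or format.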