I've been playing with Apple's CoreML and Vision APIs.
My goal is to build a simple proof of concept that can recognize fingernails in a picture of a hand. I know this is very specific.
I have been trying to find documentation on how to create a custom VNRequest, but I have no idea how to do it.
As far as I can tell, the Vision API only offers built-in requests for rectangle, face, and text detection...
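For example, I can already run one of the built-in requests without trouble. Here is a rough sketch of how I'm using rectangle detection today (the image path is just a placeholder):

```swift
import Foundation
import Vision

// Handler that runs Vision requests against a single image.
// "hand.jpg" is a placeholder path for my test picture.
let handler = VNImageRequestHandler(url: URL(fileURLWithPath: "hand.jpg"), options: [:])

// One of the built-in request types: rectangle detection.
let request = VNDetectRectanglesRequest { request, error in
    guard let observations = request.results as? [VNRectangleObservation] else { return }
    for rect in observations {
        // boundingBox is in normalized image coordinates (0...1).
        print(rect.boundingBox)
    }
}

do {
    try handler.perform([request])
} catch {
    print("Vision request failed: \(error)")
}
```

But there is no equivalent request type for an arbitrary object like a nail.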
How can I create my own request to teach Vision to recognize what I want in a picture?