[I'm a noob in Machine Learning and OpenCV]
Below are the results, i.e. the 68 facial landmarks you get by applying dlib's facial landmark model, which can be found here.
This script mentions that the model was trained on the iBUG 300-W face landmark dataset.
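For reference, my understanding of how the 68 points are grouped by facial region, written out as 0-indexed ranges (my assumption, based on the common iBUG 68-point diagram and dlib's 0-indexed `shape.part(i)` API):

```python
# 0-indexed ranges of the 68 iBUG/dlib facial landmarks,
# grouped by region (assumed from the standard 68-point diagram).
FACIAL_LANDMARK_RANGES = {
    "jaw": range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow": range(22, 27),
    "nose": range(27, 36),
    "right_eye": range(36, 42),
    "left_eye": range(42, 48),
    "mouth": range(48, 68),
}

# Sanity check: the regions together cover all 68 points.
total = sum(len(r) for r in FACIAL_LANDMARK_RANGES.values())
print(total)  # 68
```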
Now, I wish to create a similar model that maps landmarks on the hand. I have the hand dataset here.
What I don't get is:
1. How am I supposed to train the model on those positions? Would I have to manually mark each joint in every single image, or is there a more efficient way to do this?
2. In dlib's model, each facial landmark position has a particular index; e.g., the left eyebrow (in dlib's 0-indexed scheme) corresponds to points 22, 23, 24, 25 and 26. At what point in the process were they given those values?
3. Would training on those images with dlib's shape predictor training script suffice, or would I also have to train the model with other frameworks (like TensorFlow + Keras)?
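For context on questions 1 and 2: as far as I can tell, dlib's training script consumes an XML file (the format produced by its `imglab` annotation tool) in which every image lists its parts in a fixed, named order, and that order is what gives each landmark its index. A sketch of generating such a file with only the standard library (the file name, box, and joint coordinates below are made-up placeholders):

```python
import xml.etree.ElementTree as ET

# Hypothetical annotations: for each image, a bounding box and an
# ordered list of (x, y) joint positions. The position of a joint in
# this list becomes its part index, which is presumably how the
# 68 facial landmarks got their fixed numbering.
annotations = {
    "hand_001.jpg": {
        "box": (30, 40, 200, 220),  # left, top, width, height
        "joints": [(50, 60), (70, 80), (90, 100)],
    },
}

dataset = ET.Element("dataset")
images = ET.SubElement(dataset, "images")
for filename, ann in annotations.items():
    image = ET.SubElement(images, "image", file=filename)
    left, top, width, height = ann["box"]
    box = ET.SubElement(image, "box", left=str(left), top=str(top),
                        width=str(width), height=str(height))
    for idx, (x, y) in enumerate(ann["joints"]):
        # dlib's examples use zero-padded part names: "00", "01", ...
        ET.SubElement(box, "part", name=f"{idx:02d}", x=str(x), y=str(y))

ET.ElementTree(dataset).write("hand_landmarks.xml")

# Training would then be roughly (untested sketch, requires dlib):
#   options = dlib.shape_predictor_training_options()
#   dlib.train_shape_predictor("hand_landmarks.xml",
#                              "hand_predictor.dat", options)
```

If that is right, then for question 3 no other framework would be needed for the landmark model itself, though a separate hand detector would still be required to supply the bounding boxes at inference time.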