I'm trying to make a sign language detection application. I'm using MediaPipe Holistic to extract key points and will use LSTM to train the model.
MediaPipe Holistic generates a total of 543 landmarks per frame (33 pose landmarks, 468 face landmarks, and 21 landmarks for each hand) for every sign language gesture.
Now, my question is: how can I connect the 543 landmarks to a gesture? Is there a way to tell the computer that the keypoints it is extracting belong to a certain gesture?
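To make the question concrete, here is roughly how I imagine structuring the data: record a fixed-length sequence of flattened keypoint vectors per gesture clip, and pair each sequence with a label. The gesture names, sequence length, and clip counts below are placeholders, and the zero-filled array stands in for real extracted keypoints:

```python
import numpy as np

# Per-frame feature sizes (my understanding of Holistic's output):
# 33 pose landmarks with (x, y, z, visibility); 468 face and 2 x 21
# hand landmarks with (x, y, z).
POSE, FACE, HAND = 33 * 4, 468 * 3, 21 * 3
FRAME_FEATURES = POSE + FACE + 2 * HAND  # 1662 values per frame

GESTURES = ["hello", "thanks", "iloveyou"]  # placeholder label set
label_map = {g: i for i, g in enumerate(GESTURES)}

SEQ_LEN = 30          # frames per recorded gesture clip
CLIPS_PER_GESTURE = 5

# In the real app each row of X would come from flattening the
# Holistic landmarks of SEQ_LEN consecutive webcam frames;
# dummy zeros here so the shapes are visible.
X = np.zeros((len(GESTURES) * CLIPS_PER_GESTURE, SEQ_LEN, FRAME_FEATURES))
y = np.repeat(np.arange(len(GESTURES)), CLIPS_PER_GESTURE)  # one label per clip

# One-hot targets for a softmax LSTM classifier:
y_onehot = np.eye(len(GESTURES))[y]
```

So the "connection" would just be the index pairing: `X[i]` is a keypoint sequence and `y[i]` is the gesture it was recorded as. Is that the right way to think about it, and is this the shape an LSTM would expect?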