I am working on hand gesture classification. I came across Google's Gesture Recognizer, which uses MediaPipe Model Maker (e.g. "from mediapipe_model_maker import gesture_recognizer") to train a custom model and export a .task file that I then use on Android for prediction. My question: the model only recognizes one of my hands, but some of my signs require both hands. How can I customize the model so that it both trains on and predicts two-handed gestures on Android?
Demo: https://mediapipe-studio.webapps.google.com/demo/gesture_recognizer
Docs: https://developers.google.com/mediapipe/solutions/vision/gesture_recognizer
In the Android sample code, I changed:

```kotlin
private var defaultNumResults = 2 // was 1 before
```

I expected this to make the recognizer detect both hands, but it still detects only one. Code link: https://developers.google.com/mediapipe/solutions/vision/gesture_recognizer/android
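For comparison, the Tasks runtime that consumes the .task file does document an explicit hand-count option: in the Python Tasks API it is `num_hands`, and the Android configuration options page lists the equivalent `numHands`. Below is a minimal Python sketch based on the docs (the model path and image file are placeholders):

```python
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# Load the exported .task bundle into the Tasks runtime.
base_options = python.BaseOptions(model_asset_path="gesture_recognizer.task")

# num_hands sets the maximum number of hands the recognizer will detect.
options = vision.GestureRecognizerOptions(base_options=base_options, num_hands=2)
recognizer = vision.GestureRecognizer.create_from_options(options)

image = mp.Image.create_from_file("image.jpg")
result = recognizer.recognize(image)

# result.gestures holds one list of Category objects per detected hand,
# i.e. each hand is classified independently.
for hand_gestures in result.gestures:
    print(hand_gestures[0].category_name, hand_gestures[0].score)
```

As far as I can tell, even with `num_hands=2` each hand is detected and classified separately, so I would get two independent single-hand predictions rather than one prediction for a two-handed sign.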
In the Python code used to train the model and produce the .task file, I was not able to find any way to customize it so that it recognizes both hands; my training code follows the official example, sketched below. Code link: https://developers.google.com/mediapipe/solutions/vision/gesture_recognizer/python
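This is roughly what my training script looks like, following the customization guide (a simplified sketch; the dataset path and export directory are placeholders). `Dataset.from_folder` expects one subfolder per gesture label, and I could not find a number-of-hands parameter in `HandDataPreprocessingParams`, `HParams`, or `GestureRecognizerOptions`:

```python
from mediapipe_model_maker import gesture_recognizer

# Each subfolder of "dataset_path" is one gesture label; the images show hands.
data = gesture_recognizer.Dataset.from_folder(
    dirname="dataset_path",
    hparams=gesture_recognizer.HandDataPreprocessingParams(),
)
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)

# Train the custom gesture classifier on top of the hand landmark embeddings.
hparams = gesture_recognizer.HParams(export_dir="exported_model")
options = gesture_recognizer.GestureRecognizerOptions(hparams=hparams)
model = gesture_recognizer.GestureRecognizer.create(
    train_data=train_data,
    validation_data=validation_data,
    options=options,
)

# Writes gesture_recognizer.task into export_dir for use on Android.
model.export_model()
```

Is there a supported way to make this pipeline treat a two-handed sign as a single gesture, or do I have to combine the per-hand predictions myself on the Android side?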