
I am working on hand gesture classification. I came across Google's Hand Gesture Recognizer, which uses MediaPipe Model Maker (e.g. `from mediapipe_model_maker import gesture_recognizer`) to train the model and generate a .task file, which I then use on Android for prediction. My question: the model is only able to recognize one of my hands, but I have signs that require both hands. How can I customize the model so that it trains on both hands and gives me predictions for both hands in Android?

https://mediapipe-studio.webapps.google.com/demo/gesture_recognizer
https://developers.google.com/mediapipe/solutions/vision/gesture_recognizer

In the Android code, I set `private var defaultNumResults = 2` (it was 1 before), expecting it to detect both hands, but it was still detecting only one hand. Code link: https://developers.google.com/mediapipe/solutions/vision/gesture_recognizer/android
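For what it's worth, in the official sample `defaultNumResults` only controls how many classification results are surfaced per hand, not how many hands are tracked. The number of hands is a recognizer option. Below is a minimal sketch, assuming the MediaPipe Tasks vision API (method names per the published API reference at the time of writing; verify against your dependency version, and note `"gesture_recognizer.task"` is a placeholder for your exported model):

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.vision.core.RunningMode
import com.google.mediapipe.tasks.vision.gesturerecognizer.GestureRecognizer

// Sketch: build a recognizer configured to track up to two hands.
fun createTwoHandRecognizer(context: Context): GestureRecognizer {
    val options = GestureRecognizer.GestureRecognizerOptions.builder()
        .setBaseOptions(
            BaseOptions.builder()
                .setModelAssetPath("gesture_recognizer.task") // placeholder
                .build()
        )
        .setNumHands(2) // track up to two hands instead of the default 1
        .setRunningMode(RunningMode.LIVE_STREAM)
        .setResultListener { result, _ ->
            // result.gestures() holds one category list per detected hand,
            // so with two hands you get two independent single-hand labels.
            result.gestures().forEachIndexed { handIndex, categories ->
                val top = categories.maxByOrNull { it.score() }
                // ... use handIndex and top?.categoryName() ...
            }
        }
        .build()
    return GestureRecognizer.createFromOptions(context, options)
}
```

Note that this only yields two independent single-hand predictions per frame; the .task model still classifies each hand on its own, which is why a genuinely two-handed sign needs the custom training route discussed in the answer below.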

In the Python code used to train and produce the .task file, I was not able to find any way to customize the model so that it recognizes both hands. Code link: https://developers.google.com/mediapipe/solutions/vision/gesture_recognizer/python
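For reference, the training flow from that guide boils down to the sketch below (`"gesture_dataset"` is a placeholder path with one sub-folder per gesture label). As far as I can tell, neither `HandDataPreprocessingParams` nor `GestureRecognizerOptions` exposes a number-of-hands setting:

```python
from mediapipe_model_maker import gesture_recognizer

# Load the labeled images; the loader runs hand detection internally
# and (per the answer below) keeps landmarks for a single hand only.
data = gesture_recognizer.Dataset.from_folder(
    dirname="gesture_dataset",  # placeholder: one sub-folder per gesture
    hparams=gesture_recognizer.HandDataPreprocessingParams(),
)
train_data, rest = data.split(0.8)
validation_data, test_data = rest.split(0.5)

options = gesture_recognizer.GestureRecognizerOptions(
    hparams=gesture_recognizer.HParams(export_dir="exported_model"),
)
model = gesture_recognizer.GestureRecognizer.create(
    train_data=train_data,
    validation_data=validation_data,
    options=options,
)
model.export_model()  # writes gesture_recognizer.task into export_dir
```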

1 Answer


> In the Python code used to train and produce the .task file

As you can see, you call the method `gesture_recognizer.Dataset.from_folder`, which in turn calls the internal method `_get_hand_data`. There, the number of hands is fixed to 1 for the dataset-loading process. You can try to override the Dataset loading logic or write your own.
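A minimal sketch of what "write your own" could look like, using the legacy `mediapipe.solutions.hands` API directly (still shipped with mediapipe at the time of writing) instead of the Model Maker loader. Everything here is an assumption or a design choice, not Model Maker's format: the folder layout (one sub-folder per label), the zero-padded two-hand feature vector, and the handedness ordering:

```python
import os
import cv2          # pip install opencv-python
import numpy as np
import mediapipe as mp

def load_two_hand_features(data_dir, min_detection_confidence=0.7):
    """Yield (label, feature) pairs where the feature holds BOTH hands'
    landmarks: 2 hands x 21 landmarks x (x, y, z), zero-padded when only
    one hand is visible so the shape stays fixed."""
    with mp.solutions.hands.Hands(
        static_image_mode=True,
        max_num_hands=2,  # the key difference from Model Maker's loader
        min_detection_confidence=min_detection_confidence,
    ) as hands:
        for label in sorted(os.listdir(data_dir)):
            label_dir = os.path.join(data_dir, label)
            if not os.path.isdir(label_dir):
                continue
            for name in os.listdir(label_dir):
                image = cv2.imread(os.path.join(label_dir, name))
                if image is None:
                    continue
                results = hands.process(
                    cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
                if not results.multi_hand_landmarks:
                    continue
                feature = np.zeros((2, 21, 3), dtype=np.float32)
                # Order slots by handedness so "Left" is always slot 0,
                # keeping the feature layout consistent across images.
                hands_sorted = sorted(
                    zip(results.multi_handedness,
                        results.multi_hand_landmarks),
                    key=lambda h: h[0].classification[0].label)
                for slot, (_, hand) in enumerate(hands_sorted[:2]):
                    for j, lm in enumerate(hand.landmark):
                        feature[slot, j] = (lm.x, lm.y, lm.z)
                yield label, feature.flatten()  # shape (126,)
```

From there you could train any classifier you like on the 126-dimensional vectors (e.g. a small Keras model) and deploy it yourself on Android alongside the hand landmarker, since Model Maker will not export a two-hand .task file.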

– Dakyz