I am testing several methods for finding the region of interest (ROI) in hand-gesture images. In OpenCV, for example, I found methods such as CamShift (for tracking an object of interest) and background-subtraction methods (MOG, MOG2, ...), which are mainly used in video to separate foreground from background and can also be applied when the hand is an object in a video with a complex background. There are also GrabCut and back-projection, which can be used for a hand posture in a static image. Contours, edge detection, and skin-colour methods are other approaches for detecting a hand in an image or video, and finally I found that Haar cascades can be used as well.

For this stage, which algorithm is the best choice, considering that I use images with complex backgrounds? Some algorithms such as GrabCut and back-projection worked well, but their biggest problem is that I have to manually mark some regions as foreground or background, and that is not how it should work.

After choosing a method for the ROI, what are generally the most important features in hand-gesture recognition? Which feature-extraction method would you suggest that works well with one of the general classifiers such as SVM or kNN to classify a given image?

Thank you all for taking the time.

Maryam

1 Answer


You can start with HSV-based skin-colour filtering to isolate the skin-coloured objects in the image; in most cases that will be your face and your palm. You can then use face detection to isolate, and then eliminate, the face blob. Once you've extracted the palm, you can simplify its contour (check approxPolyDP in OpenCV) and count the number of convexity defects in the hand contour. Since you haven't specified which programming language you're working in, here's some Python code to start you off with skin detection:

import cv2

def nothing(x):  # needed for createTrackbar to work in Python
    pass

cap = cv2.VideoCapture(0)
cv2.namedWindow('temp')
# Trackbars for the lower and upper H, S, V bounds of the skin filter
# (the image is thresholded in HSV space, not BGR).
cv2.createTrackbar('hl', 'temp', 0, 255, nothing)
cv2.createTrackbar('sl', 'temp', 0, 255, nothing)
cv2.createTrackbar('vl', 'temp', 0, 255, nothing)
cv2.createTrackbar('hh', 'temp', 255, 255, nothing)
cv2.createTrackbar('sh', 'temp', 255, 255, nothing)
cv2.createTrackbar('vh', 'temp', 255, 255, nothing)

while True:
    ret, img = cap.read()  # read a frame from the source
    if not ret:
        break
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hl = cv2.getTrackbarPos('hl', 'temp')
    sl = cv2.getTrackbarPos('sl', 'temp')
    vl = cv2.getTrackbarPos('vl', 'temp')
    hh = cv2.getTrackbarPos('hh', 'temp')
    sh = cv2.getTrackbarPos('sh', 'temp')
    vh = cv2.getTrackbarPos('vh', 'temp')
    # Keep only the pixels whose HSV values fall inside the chosen range.
    thresh = cv2.inRange(hsv, (hl, sl, vl), (hh, sh, vh))
    cv2.imshow('Video', img)
    cv2.imshow('thresh', thresh)
    if cv2.waitKey(10) & 0xFF == ord('b'):
        break  # quit when 'b' is pressed

cap.release()
cv2.destroyAllWindows()
Saransh Kejriwal
  • Thanks for your comments. If I want to use one of the Python/C++ OpenCV algorithms for obtaining the region of interest (the hand), which method do you recommend? – Maryam May 06 '16 at 05:46
  • I used this code, and it says invalid syntax on the line while True. My Python is 3.5 and OpenCV is 3.1. – Maryam May 06 '16 at 05:52
  • This code is built on OpenCV 2.4.9, hence the invalid syntax. You'll need to make modifications for OpenCV 3.x. – Saransh Kejriwal May 06 '16 at 06:09