I am trying to implement code that finds the centroid of the eye pupil and tracks the eye movement, in order to control a wheelchair. The code, applied to a single image, is:
import cv2
import numpy as np

image = cv2.imread('eye pupil.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Boost local contrast with CLAHE so the pupil stands out from the iris
cl1 = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
clahe = cl1.apply(gray)

# Adaptive threshold, then morphological opening to remove small specks
bw = cv2.adaptiveThreshold(clahe, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY, 21, 12)
kernel = np.ones((5,5), np.uint8)
opening = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel, iterations=3)

# OpenCV 3.x findContours returns (image, contours, hierarchy)
img, contours, hierarchy = cv2.findContours(opening, cv2.RETR_TREE,
                                            cv2.CHAIN_APPROX_SIMPLE)

# Contour index 5 happens to be the pupil in this particular image
draw = cv2.drawContours(image, contours, 5, (0,0,255), 2)
cv2.imshow('draw', draw)
cv2.waitKey(0)
cv2.destroyAllWindows()
The output on that image marks the pupil quite well, so it gives me a good centroid to start from. But when I try to apply the same pipeline to video, the output is very messy.
In the video version I use haarcascade_righteye_2splits.xml or haarcascade_lefteye_2splits.xml so detection is restricted to the eye region.
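Stripped down, my per-frame loop looks roughly like the sketch below. The camera index and cascade path are placeholders for my setup, and I invert the threshold here (unlike the image version) so the dark pupil becomes the white foreground blob, then take the largest contour's centroid via image moments:

import cv2
import numpy as np

# Placeholders: default webcam at index 0, cascade file next to the script
eye_cascade = cv2.CascadeClassifier('haarcascade_righteye_2splits.xml')
cap = cv2.VideoCapture(0)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in eyes:
        # Work only inside the detected eye ROI
        roi = clahe.apply(gray[y:y + h, x:x + w])
        # Inverted threshold: the dark pupil becomes the white foreground
        bw = cv2.adaptiveThreshold(roi, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 21, 12)
        bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, np.ones((5,5), np.uint8))
        # OpenCV 3.x returns (image, contours, hierarchy)
        _, contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                          cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            # Assume the largest blob is the pupil; centroid from moments
            pupil = max(contours, key=cv2.contourArea)
            m = cv2.moments(pupil)
            if m['m00'] != 0:
                cx = int(m['m10'] / m['m00'])
                cy = int(m['m01'] / m['m00'])
                cv2.circle(frame, (x + cx, y + cy), 3, (0,0,255), -1)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()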
Any help on how to do pupil tracking on video, and to detect looking left, looking right, and blinking, would be appreciated.
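To be concrete, the decision logic I am aiming for is something like the sketch below; the dead zone and the blink frame count are guesses I would still have to tune:

# Hypothetical gaze/blink helpers; all thresholds are untuned guesses
def classify_gaze(cx, roi_width, dead_zone=0.15):
    """Return 'left', 'right', or 'center' from the centroid's
    horizontal offset inside the eye ROI (offset spans -0.5..0.5)."""
    offset = (cx - roi_width / 2.0) / float(roi_width)
    if offset < -dead_zone:
        return 'left'
    if offset > dead_zone:
        return 'right'
    return 'center'

class BlinkDetector(object):
    """Treat a run of consecutive frames with no pupil contour as a
    blink, so a single-frame detection failure is not counted."""
    def __init__(self, min_frames=3):  # guess; depends on frame rate
        self.min_frames = min_frames
        self.missed = 0

    def update(self, pupil_found):
        self.missed = 0 if pupil_found else self.missed + 1
        return self.missed >= self.min_frames

The intent is that left/right come from where the centroid sits inside the eye ROI, and a blink shows up as several frames in a row with no pupil contour found.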
I am using OpenCV 3.1.0-dev and Python 2.7.x