
I am trying to implement code that gets the centroid of the eye and tracks the eye movement in order to control a wheelchair. The code, applied to a single image, is:

import cv2
import numpy as np

# load the eye image and convert it to grayscale
image = cv2.imread('eye pupil.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# boost local contrast with CLAHE so the pupil stands out
cl1 = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
clahe = cl1.apply(gray)

# adaptive mean threshold (block size 21, constant C = 12)
bw = cv2.adaptiveThreshold(clahe, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY, 21, 12)

# opening to remove small specks before contour extraction
kernel = np.ones((5,5), np.uint8)
opening = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel, iterations=3)

# OpenCV 3.x findContours returns (image, contours, hierarchy)
img, contours, hierarchy = cv2.findContours(opening, cv2.RETR_TREE,
                                            cv2.CHAIN_APPROX_SIMPLE)

# contour index 5 is hard-coded for this particular image
draw = cv2.drawContours(image, contours, 5, (0,0,255), 2)

cv2.imshow('draw', draw)

cv2.waitKey(0)
cv2.destroyAllWindows()

and the output is:

(image: eye pupil centroid)

The code gives me a fairly good centroid to start off with, but when I try to apply it to a video, the output is very messy.
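
For completeness, the centroid step itself is not in the snippet above; it comes from the contour moments. A minimal sketch continuing from that snippet, reusing the same hard-coded contour index (which only fits this particular image):

M = cv2.moments(contours[5])           # moments of the contour drawn above
if M['m00'] != 0:
    cx = int(M['m10'] / M['m00'])      # centroid x
    cy = int(M['m01'] / M['m00'])      # centroid y
    cv2.circle(draw, (cx, cy), 3, (0, 255, 0), -1)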

In the video version of the code, I use haarcascade_righteye_2splits.xml or haarcascade_lefteye_2splits.xml so the detector is more specific about what to look for in the camera frames.
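
Roughly, the per-frame structure I am aiming for looks like the sketch below (simplified, not my exact script; the cascade path and the detectMultiScale parameters are placeholders that need tuning):

import cv2

eye_cascade = cv2.CascadeClassifier('haarcascade_righteye_2splits.xml')
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in eyes:
        roi = gray[y:y+h, x:x+w]
        # same steps as the single-image version, applied to the eye ROI
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8)).apply(roi)
        bw = cv2.adaptiveThreshold(clahe, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 21, 12)
        cv2.imshow('eye', bw)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()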

Any help on how to do eye-pupil tracking on a video, detecting left, right, and blink, would be appreciated.

I am using OpenCV 3.1.0-dev and Python 2.7.x

Tes3awy
  • Take a look at the [Hough Circle Transform](http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html); it might help you! – francis Aug 23 '16 at 19:46
  • @francis Alright – Tes3awy Aug 23 '16 at 21:45
  • @francis not giving me convenient results. – Tes3awy Aug 23 '16 at 23:47
  • In the example you posted, the error seems to be due to the reflection of the light in the eyes. Indeed, there is a small white zone in the pupil. As openings are performed, this white zone grows and shifts the centroid in the opposite direction. – francis Aug 24 '16 at 07:27
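
Following up on francis's last comment, one common workaround (a sketch under assumptions, not something verified on the posted image) is to threshold for the dark pupil directly with THRESH_BINARY_INV and then use a closing instead of an opening, so the small hole left by the reflection is filled rather than enlarged:

# `gray` is the grayscale eye image from the question's snippet;
# the fixed threshold value 30 is a guess and needs tuning per camera
_, pupil_bw = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY_INV)

kernel = np.ones((5,5), np.uint8)
# closing (dilate then erode) fills the reflection hole inside the white pupil blob
pupil_filled = cv2.morphologyEx(pupil_bw, cv2.MORPH_CLOSE, kernel, iterations=3)

_, contours, _ = cv2.findContours(pupil_filled, cv2.RETR_EXTERNAL,
                                  cv2.CHAIN_APPROX_SIMPLE)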

0 Answers