
I need help with comparing irises.

I've already segmented and normalized my iris images. Now I want to extract features, add them to a database (or just keep them in a list of feature vectors), and then compare them with other feature vectors. I want my application to decide whether a given iris is already in the database or not. Of course the images are different; they were taken in different lighting, at different angles, etc.

I thought that a Gabor filter would be helpful, so I applied it with 12 different parameter values:

Mat kernel = Imgproc.getGaborKernel(new Size(25, 25), sigma, theta, lambda, gamma, psi, CvType.CV_64F);
Scalar sum = Core.sumElems(kernel); // kernel normalization
Core.divide(kernel, sum, kernel);
Imgproc.filter2D(floatSource, dest, CvType.CV_64F, kernel);
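One thing worth checking here: the filtered output is real-valued, so `NORM_HAMMING` on it compares raw bytes rather than iris features. A Daugman-style pipeline instead quantizes the *phase* of the Gabor response into a binary "iris code" before any Hamming comparison. A minimal, OpenCV-free sketch of that quantization step (the array names are illustrative, standing in for the real and imaginary filter outputs):

```java
/** Sketch: quantize Gabor responses (real and imaginary parts) into a
 *  binary iris code, Daugman-style. Each pixel contributes 2 bits:
 *  the sign of the real response and the sign of the imaginary one.
 *  Names are illustrative, not taken from the question's code. */
public class IrisCode {
    public static boolean[] encode(double[] realResp, double[] imagResp) {
        boolean[] code = new boolean[realResp.length * 2];
        for (int i = 0; i < realResp.length; i++) {
            code[2 * i]     = realResp[i] >= 0; // phase quadrant bit 1
            code[2 * i + 1] = imagResp[i] >= 0; // phase quadrant bit 2
        }
        return code;
    }
}
```

Two binary codes like this can then be compared bit-by-bit, which is what the Hamming distance is actually meant for.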

Then I compute 12 Hamming distances using this function:

dist_ham = Core.norm(it1.next(), it2.next(), Core.NORM_HAMMING);

And get the average.

And... it does not work. The Hamming distance is similar whether I compare two different images of the same iris or two different irises. How can I make my algorithm better? Maybe I should use one of the matchers implemented in OpenCV to obtain good results? It doesn't matter to me which algorithm I use; I just want good results. And I'm a bit of a beginner.

Some sample pictures:

Person one, img1: (image). Normalized iris for person one, img1: (image)

Person one, img2: (image). Normalized iris for person one, img2: (image)

The Hamming distance for this example is about 29000 (and this is the lowest distance I got; mostly I got about 30000-31000 for images of the same person's iris). The Hamming distance for different persons is about 31000 (depending on the tested image).
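As an aside on interpreting these numbers: absolute counts like 29000 vs. 31000 are hard to reason about. Daugman-style systems normalize the Hamming distance by the number of bits actually compared, after masking out lashes and eyelids, so genuine matches cluster well below impostors (a decision threshold around 0.32 is commonly cited). A hedged sketch of that normalization, operating on hypothetical boolean iris codes:

```java
/** Sketch: fractional Hamming distance between two binary iris codes,
 *  skipping positions masked out as eyelid/lash occlusion. On this
 *  normalized 0..1 scale, same-eye comparisons should score much
 *  lower than different-eye comparisons. */
public class HammingMatch {
    public static double fractionalDistance(boolean[] a, boolean[] b, boolean[] valid) {
        int diff = 0, counted = 0;
        for (int i = 0; i < a.length; i++) {
            if (!valid[i]) continue; // occluded bit: not compared
            counted++;
            if (a[i] != b[i]) diff++;
        }
        return counted == 0 ? 1.0 : (double) diff / counted;
    }
}
```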

berak
Araneo
  • I think this question is pretty vast, and more than one approach and solution exists! But before that we need more input. First, at what stage are you now? What pre-processing steps are you doing before comparison? What unwrapping technique are you using? – Balaji R Nov 21 '14 at 10:14
  • Did you normalize the pupil size? Would block-matching work if both irises have the same orientation? Do you have sample images? – Micka Nov 21 '14 at 11:55
  • @BalajiR As I said, I preprocess my eye images and obtain iris pictures in a Cartesian system. E.g. picture: [link](http://postimg.org/image/6z6ct341f/). This region I get by Hough circle transformation. – Araneo Nov 21 '14 at 12:18
  • @Micka I edited my post and gave an example – Araneo Nov 21 '14 at 12:30
  • In your normalized irises you can see that the brightness or contrast differs for the iris part (while the size/position looks good) and the size of the non-iris parts (lashes and above and the lower dark part) differs much. Try to detect+exclude the non-iris parts. Maybe you can try some kind of "descriptor" of your iris image: HOG/SIFT/BRISK etc – Micka Nov 22 '14 at 12:28
  • @Micka Which matcher from OpenCV do you recommend to solve my problem? Will any of these be good? – Araneo Dec 08 '14 at 13:32
  • why not just test each one and select the best one? I never matched any irises ;) if you post a lot of more normalized iris images, I (or someone else) "might" get an intuition about the properties the matching task must fulfil. Did you search for scientific papers about the problem? – Micka Dec 08 '14 at 15:13
  • Of course I did. The most common method is to use a Gabor filters and Hamming distance with lashes region cutting which I've tried to do. Unfortunately, I didn't find any solutions or at least suggestion with usage of openCV matchers in problems like mine (very similar, small pictures). I post images later, afterwork. Thanks for your reply. – Araneo Dec 09 '14 at 14:55
  • How did you do the normalization part? I'm struggling with it – Etun Apr 29 '15 at 13:54
  • It's similar to conversion between polar and rectangular coordinates. – Araneo May 06 '15 at 10:41
  • @Araneo I am struggling hard with iris detection (Hough); could you share your code? – dasAnderl ausMinga Feb 10 '16 at 15:07
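The polar-to-rectangular conversion mentioned in the comments is usually done with Daugman's "rubber sheet" model: the annulus between the pupil and iris circles is sampled along radial lines into a fixed-size rectangle. A rough sketch of just the coordinate mapping (the circle center and radii are assumed to come from the Hough step; pixel lookup and interpolation are left out):

```java
/** Sketch of rubber-sheet unwrapping coordinates: for an output image of
 *  cols x rows, column = angle around the eye, row = normalized radius
 *  between the pupil boundary (row 0) and the iris boundary (last row). */
public class RubberSheet {
    /** Returns the {x, y} source coordinate to sample for (row, col). */
    public static double[] sample(double cx, double cy, double rPupil, double rIris,
                                  int row, int col, int rows, int cols) {
        double theta = 2 * Math.PI * col / cols;                          // angle
        double r = rPupil + (rIris - rPupil) * row / (double) (rows - 1); // radius blend
        return new double[]{cx + r * Math.cos(theta), cy + r * Math.sin(theta)};
    }
}
```

Sampling every (row, col) this way, with bilinear interpolation at the returned coordinate, yields the rectangular normalized iris strip shown in the question.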

1 Answer


I was successful in doing this just by implementing the algorithm/math in Daugman's paper. My suggestion is to actually visualize the Gabor kernels to find a meaningful combination of parameters such as sigma and lambda. I didn't use OpenCV's getGaborKernel but a hand-crafted one.
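Since the answer suggests hand-crafting and visualizing kernels, here is a standalone sketch of the standard even (cosine-carrier) Gabor kernel formula. This is not the answerer's actual code, just a way to dump kernel values and eyeball how sigma, lambda, theta, and gamma change the filter's shape:

```java
/** Sketch: hand-rolled even (real-part) Gabor kernel. A Gaussian envelope
 *  modulated by a cosine along the rotated x-axis; standard textbook form.
 *  Print or plot the returned grid to visualize the parameter choice. */
public class Gabor {
    public static double[][] evenKernel(int size, double sigma, double theta,
                                        double lambda, double gamma) {
        double[][] k = new double[size][size];
        int half = size / 2; // assumes odd size, e.g. 25
        for (int y = -half; y <= half; y++) {
            for (int x = -half; x <= half; x++) {
                double xr =  x * Math.cos(theta) + y * Math.sin(theta); // rotated coords
                double yr = -x * Math.sin(theta) + y * Math.cos(theta);
                double env = Math.exp(-(xr * xr + gamma * gamma * yr * yr)
                                      / (2 * sigma * sigma));           // Gaussian envelope
                k[y + half][x + half] = env * Math.cos(2 * Math.PI * xr / lambda);
            }
        }
        return k;
    }
}
```

Pairing this with the corresponding odd (sine) kernel gives the real/imaginary response pair whose signs form the 2-bit phase code used in Daugman-style matching.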

hiroki