I'm trying to write an algorithm that finds the ROI of a latent fingerprint. There is no need for minutiae extraction or full image enhancement (though some enhancement may be necessary); it only needs to find the region of the image that contains the actual fingerprint. I have tried several approaches, applying local ridge orientation, binarization, and thinning to try to identify the ROI, but my results have been lackluster at every turn. I've read a lot of books and papers on the subject, but they are vague at best. Any thoughts on how to achieve this? I'm currently programming in Python.
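To give a concrete starting point, here is a stripped-down version of the block-wise idea I've been experimenting with, using local gray-level variance as a crude stand-in for the orientation/binarization pipeline. It assumes OpenCV and NumPy; the filename, block size, and variance threshold are placeholders, not tuned values:

```python
# Crude ROI baseline: keep blocks whose gray-level variance is high
# (ridge-like texture), drop smooth background blocks.
import cv2
import numpy as np

def segment_by_block_variance(gray, block=16, var_thresh=100.0):
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = gray[y:y + block, x:x + block].astype(np.float32)
            if patch.var() > var_thresh:          # ridge-like texture?
                mask[y:y + block, x:x + block] = 255
    return mask

img = cv2.imread("latent.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
roi_mask = segment_by_block_variance(img)
cv2.imwrite("roi_mask.png", roi_mask)
```

On latents this picks up a lot of background texture (smudges, paper grain), which is part of why the results have been lackluster.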
- You should search for ridge-like objects all over the picture. – Egor Skriptunoff Jul 19 '13 at 11:08
- How do you propose to do that? Search the neighbourhood of pixels to see if the orientation is roughly the same? Use block direction and take the same approach? Edge detection? Find objects with continuous lines? – larstoc Jul 19 '13 at 11:14
- Yes, the location of maximum concentration of continuous lines is your ROI. – Egor Skriptunoff Jul 19 '13 at 11:21
- It is a good approach, but I am a bit concerned whether it will hold, given the nature of latent fingerprints versus high-quality ones: latents are often full of discontinuities. Connecting the disconnected edges requires a lot of enhancement work and good AI to connect them correctly. Still, I think it might work in the majority of cases, and at least give a good portion of the ROI in the worst cases. – larstoc Jul 19 '13 at 11:30
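Following up on the "concentration of continuous lines" suggestion in the comments, a rough sketch of one way to implement it: a structure-tensor coherence map marks areas where gradients share a dominant orientation (ridge flow), morphological closing bridges small ridge discontinuities, and the largest connected component is kept as the ROI. OpenCV and NumPy are assumed; the smoothing sigma, coherence threshold, and kernel size are guesses, not tuned values:

```python
import cv2
import numpy as np

def coherence_roi(gray, sigma=7, coh_thresh=0.3):
    g = gray.astype(np.float32)
    gx = cv2.Sobel(g, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(g, cv2.CV_32F, 0, 1, ksize=3)
    # Smoothed structure-tensor components
    jxx = cv2.GaussianBlur(gx * gx, (0, 0), sigma)
    jyy = cv2.GaussianBlur(gy * gy, (0, 0), sigma)
    jxy = cv2.GaussianBlur(gx * gy, (0, 0), sigma)
    # Coherence ~ 1 where gradients share one orientation (ridge flow),
    # ~ 0 where they are isotropic (noise, smooth background).
    coherence = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2) / (jxx + jyy + 1e-6)
    mask = (coherence > coh_thresh).astype(np.uint8) * 255
    # Bridge small gaps between broken ridges
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep only the largest connected blob as the ROI
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n > 1:
        biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        mask = np.where(labels == biggest, 255, 0).astype(np.uint8)
    return mask
```

The closing step is what addresses the discontinuity concern above, at the cost of possibly merging the print with nearby background texture if the kernel is too large.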