
This question is for those who have tried feature detection/matching methods on brain images - it is a broad one, and perhaps a bad one:

How could you tell if the method you used was "good enough?"

What does a successful matching/detection test look like for your data?

EDIT: As of now, I am not trying to detect any distinct features in particular. I'm using OpenCV's ORB, SIFT, SURF, etc. detection methods and seeing what features they identify. Sometimes, however, the orientation of the brain changes entirely from one set of images to the next, so if I compare two images across these sets, the detection methods won't yield any effective results (i.e. the matching will be completely off). But if I compare images that look similar, though not identical, the detection seems to work all right. The point is, detection seems to work for frames taken around the same time, but not over a long interval. I wonder if others have come across this, and whether they have found detection methods still useful despite it.

haxtar
  • The question is definitely too broad. What kind of features are you trying to match? What is your ultimate goal? – Imanol Luengo Jun 20 '16 at 21:05
  • As of now, nothing distinct in particular. I'm using OpenCV's ORB, SIFT, SURF, etc. detection methods and seeing what features they identify. Sometimes, however, the orientation of the brain changes entirely from one set of images to the next, so if I compare two images across these sets, the detection (see above question) – haxtar Jun 20 '16 at 22:04
  • The 3 features you listed should be rotation invariant. If you `1. extract descriptors for an image 2. rotate image 3. extract descriptors again`, most of the descriptors should be the same. That doesn't mean feature1 of the original image will match feature1 of the rotated version, but each descriptor SHOULD have a pair. If you don't get good detection with this test, maybe your matching algorithm is off? Hard to tell without more detail. – andrew Jun 20 '16 at 22:15
  • Again, without an ultimate goal it is difficult to evaluate them. Maybe you are just using the wrong tool for the problem. A simple way to evaluate them is by matching descriptors from one frame to the next and seeing which descriptors offer better matching. However... I'm not sure what you are trying to accomplish with this in medical imaging... a SIFT keypoint will detect an "interesting" area for you... there is absolutely no guarantee that it will detect what you need/want. That is why I was asking for your goal... those descriptors are not very popular in medical imaging (unless densely computed). – Imanol Luengo Jun 21 '16 at 06:53

1 Answer


First of all, you should specify what kind of features, and for what purpose, the experiment is going to be performed. Feature extraction is highly subjective in nature; it all depends on what type of problem you are trying to handle. There is no generic feature extraction scheme that works in all cases. For example, if the features point to some tumor classification or lesion, then of course there are different software tools you can use to extract and define your features.

There are different methods to detect the relevant features depending on the application:

  • SURF (Speeded-Up Robust Features)
  • PLOFS: a fast wrapper approach with a subset evaluation
  • ICA or PCA
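As a concrete example of the last item, a minimal PCA feature-reduction sketch (scikit-learn assumed available; the random matrix stands in for flattened image patches or voxel intensities):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))      # 100 samples, 64 raw intensity features

# Project onto the 8 directions of highest variance.
pca = PCA(n_components=8)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)              # (100, 8): compact feature matrix
```

The reduced matrix can then be fed to a classifier, which is the usual role of PCA/ICA in tissue-classification pipelines.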

This paper is a thorough review of feature extraction from brain MRI data for tissue classification: https://pdfs.semanticscholar.org/fabf/a96897dcb59ad9f04b5ff92bd15e1bd159ef.pdf

I found this paper very good for understanding the differences between feature extraction techniques: https://www.sciencedirect.com/science/article/pii/S1877050918301297

fati