
I am working on edge detection in images and would like to evaluate the performance of my algorithm. If anyone could give me a reference or a method on how to proceed, it would be really helpful. :)

I do not have ground truth, and the data set includes color as well as grayscale images.

Thank you.

DBS
  • Wouldn't you need an edge detector that is as good as or better than what you're trying to write to validate your results? In which case, why write a new edge detector at all? – Bill Nov 20 '12 at 15:15
  • I was expecting a better comment, but thanks for the encouragement anyway :P. And to answer 'why a new edge detector?' - I don't think work is done in that manner... did people stop thinking after knowing how to create fire? – DBS Nov 21 '12 at 03:00
  • People may have kept thinking after creating fire, but one of the other things we created was the idea of not reinventing the wheel. – Bill Nov 21 '12 at 17:27
  • Dear Bill, why are you so concerned about someone working on an edge detection algorithm? 'Reinventing the wheel' does not fit here. Anyway, if you can help it will be appreciated; otherwise, please continue entertaining us. – DBS Nov 21 '12 at 18:03
  • If you really have a novel idea that you have valid reason to believe will increase the efficacy of edge detection, then you should pursue it. But I just want to caution you that it could be quite an undertaking, and may prove fruitless in the end. Again, though, from a theoretical standpoint you will need a better edge detector to automatically evaluate your own. Otherwise, it will need to be done by manual, visual inspection. You'll likely need some ground truth, and need to do this on hundreds of images to fully validate your algorithm. Use MTurk so you don't have to do it all on your own. – Bill Nov 21 '12 at 19:17

2 Answers

  1. Create a synthetic data set with known edges, for example by 3D rendering, by compositing 2D images with precise masks (as may be obtained in royalty free photosets), or by introducing edges directly (thin/faint lines). Remember to add some confounding non-edges that look like edges, of a type appropriate for what you're tuning for.

  2. Use your (non-synthetic) data set. Run the reference algorithms that you want to compare against. Also produce combinations of the reference algorithms, for example by voting (majority, at least K out of N, etc.). Calculate stats on your algo vs. reference algo performance, in terms of (a) the number of points your algo classifies as an edge that each reference algo, or the combination, does not (false positives), and (b) the number of points the reference algo classifies as an edge that your algo does not (false negatives). You can also compute a rank-correlation-type number across algos by looking at each point and checking which algos do (or don't) classify it as an edge. (A minimal sketch of points 1 and 2 follows after this list.)

  3. Create ground truth manually. Use reference edge-finding algos as a starting point, then fix up by hand. Probably valuable to do for a small number of images in any case.
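The following is a minimal sketch of points 1 and 2 in Python with NumPy/OpenCV. The synthetic square, the choice of Canny and Sobel as reference detectors, the thresholds, and the vote count are all illustrative assumptions rather than part of the answer above.

```python
import numpy as np
import cv2

# --- Point 1: a tiny synthetic image with a known edge map (illustrative) ---
img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (50, 50), (150, 150), color=200, thickness=-1)   # bright filled square
truth = np.zeros_like(img)
cv2.rectangle(truth, (50, 50), (150, 150), color=255, thickness=1)  # its outline is the ground truth
img = cv2.GaussianBlur(img, (5, 5), 0)                              # soften so edges are not trivial

# --- Point 2: reference detectors and a "vote" combination ---
canny = cv2.Canny(img, 50, 150) > 0
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1)
mag = np.hypot(gx, gy)
sobel = mag > 0.5 * mag.max()                              # crude threshold, purely illustrative
combined = (canny.astype(int) + sobel.astype(int)) >= 2    # "at least K out of N", K = N = 2 here

def fp_fn(candidate, reference):
    """False positives: candidate marks an edge where the reference does not.
    False negatives: reference marks an edge where the candidate does not."""
    fp = np.logical_and(candidate, ~reference).sum()
    fn = np.logical_and(~candidate, reference).sum()
    return fp, fn

# Compare "your" algorithm (Canny stands in for it here) against the
# combined reference, and against the synthetic ground truth from point 1.
print("vs combined reference:", fp_fn(canny, combined))
print("vs synthetic truth:   ", fp_fn(canny, truth > 0))
```

The same fp_fn comparison works against either the combined reference or the synthetic ground truth; the intended use is to swap your own detector's binary output in place of the Canny stand-in.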

Good luck!

Alex I
  • Thanks for the info. It is a nice idea to check the algo, but it is strange that there is no data set with ground truth available for edge detection analysis. There are many papers, but the data sets are not available; if you are working in the same field and have any links, please share. Thank you. – DBS Nov 21 '12 at 16:09
  • @DBS, I suggest you contact the authors of some of these papers and request their data. Many journals require that the authors make data used in a paper available to other researchers on request. It is hit or miss, but you will probably get some data sets. – Alex I Nov 21 '12 at 19:46

For comparisons, quantitative measures like those @Alex I explained are best. To use them, you need a ground truth set that defines what "correct" means, along with a consistent way to determine whether a given result is correct or, at a more granular level, how correct it is (some number like a percentage). @Alex I gave a way to do that.
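As an illustration of such a per-image "how correct" score (my own sketch, assuming both edge maps are binary NumPy arrays of the same shape), precision and recall against the ground-truth mask can be folded into a single F1 number:

```python
import numpy as np

def edge_f1(predicted, ground_truth):
    """F1 score between two binary edge maps of the same shape:
    1.0 is a perfect match, 0.0 means no overlapping edge pixels."""
    predicted = predicted.astype(bool)
    ground_truth = ground_truth.astype(bool)
    tp = np.logical_and(predicted, ground_truth).sum()
    fp = np.logical_and(predicted, ~ground_truth).sum()
    fn = np.logical_and(~predicted, ground_truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Note that strict per-pixel matching penalizes edges that are off by a pixel or two; published edge-detection benchmarks typically allow a small localization tolerance when matching predicted edge pixels to ground-truth ones.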

Another option, often used in graphics research where there is no ground truth, is a user study. User studies are usually less desirable because they are time consuming and often more costly. However, if it is a qualitative improvement you are after, or if a quantitative measurement is just too hard to do, a user study is an appropriate solution.

By a user study I mean polling people on how good a result is given the input image. You could give them a scale to rate things on, and randomly give them samples from both your results and the results of another algorithm.
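As a small, entirely hypothetical illustration of that setup, randomizing which algorithm's output appears in each position keeps raters blind to the source:

```python
import random

# Hypothetical trial generator: each rater sees the input image plus two
# edge maps (ours vs. a baseline) in a random left/right order, so they
# cannot tell which algorithm produced which result.
def make_trial(image_id, ours_path, baseline_path):
    results = [("ours", ours_path), ("baseline", baseline_path)]
    random.shuffle(results)
    return {
        "image": image_id,
        "left": results[0],
        "right": results[1],
        "question": "Rate each edge map from 1 (poor) to 5 (excellent).",
    }
```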

And of course, if you still want more ideas, be sure to check out edge detection papers to see how they measured their results (I'd actually look there first, as those authors have already gone through this same process and determined what was best for them: Google Scholar).

Noremac
  • Yup. Of course Google Scholar was the first option. :) Also it would be nice to test on a well-known data set (with ground truth) instead of a self-created data set. :) – DBS Nov 21 '12 at 16:11