
I am learning OpenCV applications by reading research papers and attempting to duplicate their tests and results. I may have jumped a bit too far off the beaten path, and am now curious about the proper way to go about this investigation.

Goal: 1) Register these two images. 2) Stack the exposures (there are actually 20+ in this series). 3) Learn.

Attached below is an example image, shot with a cell phone, in low light, in burst mode. If one were to level-stretch it, one would see there are very few hard edges (some sheets), but there are enough details to manually align portions of the images with each other. I ran this through the default OpenCV implementations of ORB and SIFT and, as expected, came back with poor matches.
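
For reference, the matching pipeline I'm running is essentially the stock tutorial code, roughly the sketch below (file names and the number of matches drawn are placeholders, not my exact script):

```python
import cv2

# Two frames from the burst (file names are placeholders)
img1 = cv2.imread("frame_01.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_02.png", cv2.IMREAD_GRAYSCALE)

# Default ORB detector/descriptor, as in the OpenCV feature-matching tutorial
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matcher with cross-check, sorted by descriptor distance
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

# Draw the best matches for inspection; on these low-light frames most are wrong
vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:30], None)
cv2.imwrite("matches.png", vis)
```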

I have not yet stumbled upon the right technique to improve edge detection. As mentioned, no hard edges are present. However, I thought I'd previously read that one could downsample the image using a max function and get better 'edge' detection. That edge should be able to provide a registration homography that can be carried back to the higher-resolution image. But I can neither find that resource nor any description of a similar approach. Help here would be appreciated.
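
In code, what I had in mind is roughly the sketch below: max-pool both frames, estimate the homography at the reduced resolution, then scale it back up. This is only my reconstruction of the half-remembered idea, not code from the resource I'm trying to find; the pooling factor and file names are arbitrary:

```python
import cv2
import numpy as np

def max_pool(img, k):
    """Downsample by taking the maximum over k x k blocks (trim so the shape divides evenly)."""
    h, w = img.shape[:2]
    h, w = h - h % k, w - w % k
    return img[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

# Same burst frames as above (file names are placeholders)
img1 = cv2.imread("frame_01.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_02.png", cv2.IMREAD_GRAYSCALE)

k = 8  # arbitrary pooling factor
small1, small2 = max_pool(img1, k), max_pool(img2, k)

# Feature matching on the pooled images (same default ORB pipeline as before)
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(small1, None)
kp2, des2 = orb.detectAndCompute(small2, None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H_small, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Lift the low-resolution homography to full resolution:
# full-res coordinates are k times the pooled ones, so H_full = S * H_small * S^-1
S = np.diag([k, k, 1.0])
H_full = S @ H_small @ np.linalg.inv(S)

aligned = cv2.warpPerspective(img1, H_full, (img2.shape[1], img2.shape[0]))
cv2.imwrite("aligned.png", aligned)
```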

In addition, if there are any published papers discussing this technique that I could be pointed to, I'd appreciate it. I'm quite familiar with astrophotography and star stacking, and am looking forward to trying drizzle on a different type of image set.

Techniques I've tried on the downsampled images to better bring out edges: difference of Gaussians, Laplacian, directional edge detection, and a few others.
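
For example, the difference-of-Gaussians pass looked roughly like this (the sigma values are just ones I experimented with, not tuned):

```python
import cv2

img = cv2.imread("frame_01.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Difference of Gaussians: subtract a heavily blurred copy from a lightly blurred one
fine = cv2.GaussianBlur(img, (0, 0), sigmaX=2)
coarse = cv2.GaussianBlur(img, (0, 0), sigmaX=8)
dog = cv2.subtract(fine, coarse)

# Stretch to the full 8-bit range so the faint structure is visible to the detector
dog = cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("dog.png", dog)
```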

I appreciate the time you've taken to help me learn how to expand my efforts for this.

Thank you.

Edit: Modifying the image's contrast, brightness, or tonal response has no effect on the correlation of the image content, at least in the limited set of tests I've been able to run. It makes them 'prettier', but honestly the algorithms don't care whether they're in 'human visual space' or in 'linear digital counts'. I can post it as a pretty image but, without those sharp edges, most of the filters fail and matches don't succeed, which is the crux of my issue here.
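
For what it's worth, the tonal test was just a LUT-based gamma adjustment before detection, roughly the sketch below (gamma value chosen by eye):

```python
import cv2
import numpy as np

img = cv2.imread("frame_01.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Gamma correction via lookup table (same idea as the OpenCV basic linear transform tutorial)
gamma = 0.4  # values < 1 brighten the shadows; chosen by eye
lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)], dtype=np.uint8)
bright = cv2.LUT(img, lut)

# Re-running the ORB/SIFT matching on 'bright' gave essentially the same poor matches
cv2.imwrite("gamma_adjusted.png", bright)
```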

[Example image with bad matches]

J.Hirsch
  • Show your code and the resulting edge images. Did you try Canny edge detection? Please read this forum's help section about how to ask good questions and provide a minimal, verifiable, and reproducible set of code. – fmw42 Jan 13 '20 at 17:59
  • I have tried Canny; there aren't edges to find (except for the top 200 rows). That is (I believe) the first fundamental problem. I had initially thought that, by downsampling via a Laplacian or Gaussian pyramid, I could create enough of an edge to be detectable. For code, I don't believe anything I could provide would add value; I'm more stuck on the theory right now. All code is 'default examples' from OpenCV documentation and other StackExchange posts. – J.Hirsch Jan 13 '20 at 18:08
  • 2
    You can enhance the two images using non-linear contrast adjustment such as gamma adjustment. See https://docs.opencv.org/3.4/d3/dc1/tutorial_basic_linear_transform.html I would then suggest you try phase correlation. See https://docs.opencv.org/4.1.1/d7/df3/group__imgproc__motion.html#ga552420a2ace9ef3fb053cd630fdb4952. – fmw42 Jan 13 '20 at 18:47
  • Should I work with the modified/adjusted images first? Since they're not in linear space (being photos), I didn't think adjusting the tonality was a good first step. However, I didn't test whether adjusting the tonality would affect the matching. I should have; you're the second person to suggest that. I know the image is dark; that's one of the good reasons to try using it. I'll look to make a new dataset with better exposures and artificially bump them down. – J.Hirsch Jan 14 '20 at 14:25
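
Following up on the phase-correlation suggestion in the comments, a minimal sketch of what I plan to try (translation-only alignment; file names are placeholders, and I may have the sign convention of the returned shift backwards):

```python
import cv2
import numpy as np

# Phase correlation wants single-channel float images of identical size
img1 = cv2.imread("frame_01.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
img2 = cv2.imread("frame_02.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Hanning window suppresses edge effects in the underlying FFT
win = cv2.createHanningWindow((img1.shape[1], img1.shape[0]), cv2.CV_32F)
(dx, dy), response = cv2.phaseCorrelate(img1, img2, win)
print("shift:", dx, dy, "peak response:", response)

# Apply the recovered translation (handles shift only, not rotation or scale)
M = np.float32([[1, 0, dx], [0, 1, dy]])
aligned = cv2.warpAffine(img1, M, (img1.shape[1], img1.shape[0]))
cv2.imwrite("phase_aligned.png", aligned)
```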

0 Answers