
I'm using drones to monitor tulip fields weekly. I capture images every week along a fixed flight path with my DJI Phantom 4, and the images are stitched with Agisoft Metashape into an orthomosaic (a large geo-referenced .tif file, ~2 GB).

I would like to compare orthomosaics from different weeks. Unfortunately, the orthomosaics are not exactly aligned: when I lay this week's orthomosaic over last week's, features are offset. Like so:

Two orthomosaics, same location, one week later. Detail shows orthos are not properly aligned.

As the detail shows, the images are not aligned, and I need them to be for proper inspection, growth tracking, etc. I would like to create an automatic alignment algorithm that fits week 2 onto week 1 (using some translation, rotation and possibly some stretching). The difficulty lies in the fact that the tulips change over time, so the alignment should rely on non-changing features like the sewer, paths and rows. I would also like to extend this method to work for other crops.

What would be a suitable method for aligning the orthomosaics?

UPDATE: I tested two methods:

  1. Searching for keypoints, matching them, and using RANSAC to select the best matches and determine a homography matrix. Warping the second image with the resulting matrix gives a better fit, but not a great one.
  2. Optimizing the homography matrix based on the MSE between the two images in greyscale. The results are similar: slightly better, but far from perfect.

I think the main culprit here is that the images are not a perfect match anyway, no matter the homography. Also, the keypoint method seems to pick up small details as features, whereas the larger 'details' (like the sewer) would qualify much better as matchable features. I think there may be some value in smart pre-processing.

So I'm still working on this and would be thankful for any advice some of you may have!

Kaz Vermeer
  • Could you use some form of colour translation to identify for instance the river, or grey roads. These would give constant positions by which you could localise the images. – Adam Apr 20 '19 at 08:43
  • I'm guessing this would work. However, I am planning to monitor multiple fields and would rather use a method which does not rely on manual selection of the 'fixed' elements. – Kaz Vermeer Apr 22 '19 at 07:12
  • @KazVermeer Were you able to solve the issue completely? If so then can you please update the solution here, it would be very helpful. – HARSH MITTAL May 23 '22 at 11:56

1 Answer


First, to restate your question: you are comparing orthomosaics of your field over time, and they do not line up (you have not mentioned which method you are currently using to match them).

My answer: if you want to see them well aligned, you can estimate a homography transform and use it to warp image 2; the two images should then line up. This should work because the fixed features of the scene do not change, and RANSAC makes the homography estimation robust against the features that do change.

Now these are the steps:

  1. First, detect keypoints and match them across both images. Here is a tutorial for step 1.

  2. Then find the homography transform between the images. Here is an OpenCV implementation of homography estimation with RANSAC and other robust estimation algorithms.

  3. Then, to see them aligned, warp the second image. Again, here are the warping algorithms implemented in OpenCV.

Summary: You can transform one image with respect to the other by a simple homography relation, excluding the case of multiple moving objects. Step 1 gives you the coordinates of the matched points in both images; identify the target and source image for the homography estimator, get the homography relation between the images, and transform one into the other's perspective.

This video will also help you.

ShivamPR21
  • Thanks ShivamPR21. Unfortunately, I have not been able to get a full match, although your proposed solution does provide some improvement over the original mis-alignment. See the update in the main question. – Kaz Vermeer Apr 28 '19 at 18:44
  • @KazVermeer can you please provide me the images after process that were mentioned and also an image with key point plotted on it scattered. – ShivamPR21 Apr 28 '19 at 18:54
  • @KazVermeer have you tried blurring the image before detecting keypoints? Your crops will produce most of the keypoints, but we need the major ones. I think you can use Harris corner detection and other methods and then compute the homography; all of these are implemented in OpenCV. – ShivamPR21 Apr 28 '19 at 18:59
  • I've experimented with using blurred images, but not quite enough yet. Will do some more this week, and also add some images to my question. Let's see if we can get this to work! – Kaz Vermeer Apr 29 '19 at 09:54
  • @KazVermeer I also think that the window you are comparing is slightly shifted; try with exactly the same window coordinates on the warped images, one warped with the identity matrix and the other with the homography matrix. – ShivamPR21 Apr 29 '19 at 13:21
  • @ShivamPR21: Is there any alternative for step's 1 link? because the link does not work... – just_learning Jun 05 '21 at 18:35
  • 1
    @just_learning https://www.docs.opencv.org/master/dc/dc3/tutorial_py_matcher.html – ShivamPR21 Jun 09 '21 at 09:55