
I am attempting to align timelapse images by using skimage.feature.orb to extract keypoints and then filtering the matches with skimage.measure.ransac. The transform modelled by RANSAC should then be able to align my images.

The process appears to work well: I get plenty of keypoint matches, which RANSAC then filters effectively. The modelled transformation corrects the rotation perfectly, but the translation is way off every time.

Am I simply misunderstanding how the transformation should be applied, or how it is modelled by RANSAC?

from skimage.feature import ORB, match_descriptors
from skimage.measure import ransac
from skimage.transform import EuclideanTransform, warp

# Extract features from both images
descriptor_extractor = ORB(n_keypoints=400, harris_k=0.0005)
descriptor_extractor.detect_and_extract(image_ref)
descriptors_ref, keypoints_ref = descriptor_extractor.descriptors, descriptor_extractor.keypoints
descriptor_extractor.detect_and_extract(image)
descriptors, keypoints = descriptor_extractor.descriptors, descriptor_extractor.keypoints

# Match features in both images
matches = match_descriptors(descriptors_ref, descriptors, cross_check=True)

# Keep only the coordinates of matched keypoints
matches_ref, matches = keypoints_ref[matches[:, 0]], keypoints[matches[:, 1]]

# Robustly estimate transform model with RANSAC
transform_robust, inliers = ransac((matches_ref, matches), EuclideanTransform,
                                   min_samples=5, residual_threshold=0.5, max_trials=1000)

# Apply transformation to image
image = warp(image, transform_robust.inverse, order=1, mode="constant",
             cval=0, clip=True, preserve_range=True)

[Image: keypoint matching between the two frames]

[Image: actual vs expected alignment results]

I get similar results with other images. I have also tried passing the inliers from RANSAC to skimage.transform.estimate_transform, but it produces results identical to using transform_robust directly.
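For context, here is a minimal, self-contained sketch (synthetic points, made-up numbers) of what refitting with estimate_transform looks like. Because ransac already refits the model on its inlier set, fitting estimate_transform on those same inliers is expected to give an identical result:

```python
import numpy as np
from skimage.transform import EuclideanTransform, estimate_transform

# Hypothetical matched keypoints (synthetic, noise-free), in (x, y) order
src = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 25.0], [70.0, 60.0]])
true_tform = EuclideanTransform(rotation=0.1, translation=(5, -3))
dst = true_tform(src)

# estimate_transform does a least-squares fit over all given
# correspondences; with clean data it recovers the true motion exactly
tform = estimate_transform("euclidean", src, dst)
```

This is also why restricting to RANSAC's inliers changes nothing: ransac's returned model is already the refit over those inliers.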

Erik White

1 Answer


It turns out that I needed to invert the translation before applying the transform:

# Robustly estimate transform model with RANSAC
transform_robust, inliers = ransac((matches_ref, matches), EuclideanTransform, min_samples = 5, residual_threshold = 0.5, max_trials = 1000)

# Invert the translation: keep the rotation, but negate and swap the
# translation components (flip is numpy.flip; the composed transform is
# built from skimage's EuclideanTransform)
transform_robust = (EuclideanTransform(rotation=transform_robust.rotation)
                    + EuclideanTransform(translation=-np.flip(transform_robust.translation)))

# Apply transformation to image
image = warp(image, transform_robust.inverse, order = 1, mode = "constant", cval = 0, clip = True, preserve_range = True)

The result is not perfect, but adjusting my keypoint selection should get it lined up.

  • Could you clarify where `transform()` and `flip()` come from? – Dan Sep 19 '20 at 20:39
  • `flip()` is from numpy. `transform` is a local variable representing a `GeometricTransform` from skimage. See https://scikit-image.org/docs/dev/api/skimage.transform.html – Erik White Sep 21 '20 at 09:12
  • See https://github.com/scikit-image/scikit-image/issues/1749 for discussion on what I believe is the relevant issue. The issue seems to be that keypoints are returned as (row, col) coordinates, but ransac and the estimators want cartesian coordinates which are (col, row) so you need to swap them like `ransac((np.flip(matches_ref, axis=-1), np.flip(matches, axis=-1)) ...` in your example. – aconz2 May 10 '22 at 19:05
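Following up on the last comment, here is a self-contained sketch (synthetic keypoints, made-up numbers) showing that flipping (row, col) keypoints to (x, y) before calling ransac lets it recover a known Euclidean motion directly, with no manual translation fix-up afterwards:

```python
import numpy as np
from skimage.measure import ransac
from skimage.transform import EuclideanTransform

# Hypothetical keypoints in (row, col) order, as skimage detectors return them
keypoints_ref = np.array([[10., 20.], [30., 40.], [50., 25.],
                          [70., 60.], [15., 80.], [90., 33.]])

# Ground-truth motion, expressed in cartesian (x, y) coordinates
true_tform = EuclideanTransform(rotation=0.05, translation=(4, -7))

# Build the "moved" keypoints: convert to (x, y), transform, convert back
moved_xy = true_tform(np.flip(keypoints_ref, axis=-1))
keypoints_moved = np.flip(moved_xy, axis=-1)

# Swap (row, col) -> (x, y) on both point sets before estimating;
# RANSAC then recovers the true rotation AND translation
model, inliers = ransac(
    (np.flip(keypoints_ref, axis=-1), np.flip(keypoints_moved, axis=-1)),
    EuclideanTransform, min_samples=3, residual_threshold=0.5, max_trials=100)
```

With the points in (x, y) order, the estimated model can be passed straight to warp(image, model.inverse, ...), since warp's inverse map also works in (x, y) coordinates.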