Given two images with similar blobs, is there a simple way to find the transformation between them? As an example, I have two images like the following:
The right image is the output of a neural network, while the left is an approximate ground truth (from a shape perspective only). I'm looking for the transformation that moves the left image to best match the position and orientation of the right; in this case, a rotation of roughly 150-160 degrees counterclockwise plus a translation up and to the right.
This seems to be a shape-matching problem with some added constraints, but I'm wondering if there is a way to do it without performing a large number of test transformations or a sliding-window search. Most of the examples I've found are for classification, and the ones that do recover position aren't rotation-tolerant.
Ideas I have had so far:

- Hu moments and OpenCV's `matchShapes`, which seem like they would give me the similarity (and detect mirroring, which is a possibility in the data and thus desirable), but I'm not sure how to use them without still resorting to some sort of window.
- SIFT or another feature-based approach, but I don't think it would do particularly well given the low information content of the data and the only loosely similar shapes (maybe a Hough transform as a base?).
- A more brute-force method: compute the difference in centroids, move the left image onto the right, and then rotate until I find the orientation with the maximum Jaccard index (or use the image moments to find the rotation? see the sketch below). But that's the same kind of exhaustive search I'm trying to avoid, and it would always be a bit off given the inaccuracy of the NN predictions.
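To make the moment-based variant concrete, here is a rough Python/OpenCV sketch of what I mean, assuming two binary masks loaded from hypothetical files `left.png` and `right.png` (the file names and threshold value are placeholders, not my actual data). It takes the translation from the centroid difference and the rotation from the second central moments, and only uses the Jaccard index to resolve the 180-degree ambiguity of the principal axis rather than sweeping over all angles:

```python
import cv2
import numpy as np

# Hypothetical inputs: single-channel masks of the two blobs.
left_mask = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right_mask = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
_, left_mask = cv2.threshold(left_mask, 127, 255, cv2.THRESH_BINARY)
_, right_mask = cv2.threshold(right_mask, 127, 255, cv2.THRESH_BINARY)

def centroid_and_angle(mask):
    """Centroid and principal-axis angle (radians) from image moments."""
    m = cv2.moments(mask, binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # Orientation of the principal axis; only defined modulo 180 degrees.
    theta = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    return (cx, cy), theta

(lcx, lcy), ltheta = centroid_and_angle(left_mask)
(rcx, rcy), rtheta = centroid_and_angle(right_mask)

# Hu-moment similarity (rotation/translation/scale invariant); a low value
# suggests the two shapes are comparable at all.
similarity = cv2.matchShapes(left_mask, right_mask, cv2.CONTOURS_MATCH_I1, 0)

h, w = right_mask.shape
best = None
for extra in (0.0, 180.0):  # resolve the 180-degree axis ambiguity
    rot_deg = np.degrees(rtheta - ltheta) + extra
    # getRotationMatrix2D treats positive angles as counterclockwise, while the
    # moment angle is measured in image (y-down) coordinates, hence the minus sign.
    M = cv2.getRotationMatrix2D((lcx, lcy), -rot_deg, 1.0)
    M[0, 2] += rcx - lcx  # add the centroid translation
    M[1, 2] += rcy - lcy
    warped = cv2.warpAffine(left_mask, M, (w, h))
    inter = np.logical_and(warped > 0, right_mask > 0).sum()
    union = np.logical_or(warped > 0, right_mask > 0).sum()
    iou = inter / union if union else 0.0
    if best is None or iou > best[0]:
        best = (iou, rot_deg, M)

print(f"matchShapes distance: {similarity:.4f}")
print(f"estimated rotation: {best[1]:.1f} deg, Jaccard after alignment: {best[0]:.3f}")
```

My worry is that the principal-axis angle can be unstable for nearly symmetric blobs (and mirroring would need a separate flipped test), so I suspect some local refinement would still be needed on top of this.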
My first instinct is just to make a neural network to do it, but I feel like there is a better answer that I'm just missing.