
I'd like to rectify a pair of images using the iOS Vision framework's VNHomographicImageRegistrationRequest. Is that possible?

So far I've obtained a 3x3 warp matrix, but applying it doesn't seem to rectify the images.

How is the warp matrix supposed to be used? (I couldn't find any examples online.)

Moreover, how does image alignment differ from image rectification? (I understand image rectification, but not image alignment.)

Xcode Playground:

import UIKit
import Vision

// Load the stereo pair from the playground's resources
let li = UIImage(named: "left.png")!
let ri = UIImage(named: "right.png")!

let handler = VNSequenceRequestHandler()

// Ask Vision for the homography relating the targeted image (left)
// to the image passed to perform(_:on:) (right)
let request = VNHomographicImageRegistrationRequest(targetedCGImage: li.cgImage!, options: [:]) { (req, err) in
    guard let observation = req.results?.first as? VNImageHomographicAlignmentObservation else { return }
    // warpTransform is a column-major matrix_float3x3
    print(observation.warpTransform)
}

try! handler.perform([request], on: ri.cgImage!)

OpenCV image warping:

import numpy as np
import cv2

# read the pair of images
li = cv2.imread('left.png', 0)
ri = cv2.imread('right.png', 0)

# 3x3 warp matrix reported by the Vision framework;
# warpTransform is column-major, so transpose it for NumPy/OpenCV
ios_vision_warp_mat = np.transpose(np.array([
    [0.746783, -0.0139349, -0.000149109],
    [-0.0426033, 0.861793, -2.39433e-05],
    [133.91, 22.0962, 0.999471]
]))

# warp the image
warped = cv2.warpPerspective(ri, ios_vision_warp_mat, (ri.shape[1], ri.shape[0]))
combined = cv2.addWeighted(warped, 0.5, li, 0.5, 0.0)

cv2.imshow('Combined pair', combined)
cv2.imshow('Unrectified pair', np.concatenate([li, warped], axis=1))

cv2.waitKey(0)
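One pitfall raised in the comments below: Core Image places the origin at the bottom-left corner of an image, while OpenCV uses the top-left, so the matrix may need a coordinate-system conversion before cv2.warpPerspective will behave. A minimal sketch of that conversion (the function name and the `h_src`/`h_dst` parameters are placeholders for the actual pixel heights of the two images):

```python
import numpy as np

def core_image_to_opencv_homography(H_ci, h_src, h_dst):
    """Convert a homography expressed in Core Image's bottom-left-origin
    coordinates into OpenCV's top-left-origin coordinates.

    H_ci  : 3x3 row-major homography (already transposed from Vision's
            column-major warpTransform)
    h_src : pixel height of the image being warped
    h_dst : pixel height of the image being matched against
    """
    # Each flip matrix maps (x, y) -> (x, h - y), converting between
    # top-left and bottom-left origins; each is its own inverse.
    F_src = np.array([[1.0,  0.0, 0.0],
                      [0.0, -1.0, h_src],
                      [0.0,  0.0, 1.0]])
    F_dst = np.array([[1.0,  0.0, 0.0],
                      [0.0, -1.0, h_dst],
                      [0.0,  0.0, 1.0]])
    # Flip into Core Image coordinates, apply H_ci, flip back.
    return F_dst @ H_ci @ F_src
```

With the matrix converted this way, cv2.warpPerspective operates in the convention it expects. If the overlay still looks wrong, the other suggestion from the comments is worth trying too: the matrix may map in the opposite direction, in which case np.linalg.inv of it is the one to warp with.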
  • Hi, did you manage to find how to use the warp matrix? I'm looking for the same thing. – user1995098 Sep 19 '17 at 19:17
  • Even though I'm joining the conversation quite late: I just posted [here](https://stackoverflow.com/questions/51527754/swift-merge-images-using-vnimagehomographicalignmentobservation#51613151) how you can use the warp matrix in Swift using _CoreImage_, but in general OpenCV should do pretty much the same thing. What was the problem with your Python/OpenCV example? If the result looked messed up: Did you try switching the matrix from column to row order, or using the inverse? – Carsten Haubold Jul 31 '18 at 12:22
  • OpenCV and Core Image use different origins. You can apply a transform to convert Core Image's homography matrix to OpenCV coordinates. See this answer for more detail: https://stackoverflow.com/a/52845002/6693924 – Jan-Michael Tressler Oct 16 '18 at 22:53
