
I have a problem with a single point (x, y) of the image: having already calculated the transformation matrix between the two images, how do I compute the corresponding point (x, y) in the second image? For example, given the pixel [510, 364] from my source image and the transformation matrix I already calculated:

Matrix Transform:  [[ 7.36664511e-01  3.38845039e+01  2.17700574e+03]
[-1.16261372e+00  6.30840432e+01  8.09587058e+03]
[ 4.28933532e-05  8.15551141e-03  1.00000000e+00]]

I want to get my new point: [3730, 7635]

How can I do this?

import cv2

h, status = cv2.findHomography(arraypoints_fire, arraypoints_vertical)

warped_image = cv2.warpPerspective(fire_image_open, h, (vertical_image_open.shape[1],vertical_image_open.shape[0]))
cv2.namedWindow('Warped Source Image', cv2.WINDOW_NORMAL)
cv2.imshow("Warped Source Image", warped_image)

cv2.namedWindow('Overlay', cv2.WINDOW_NORMAL)
overlay_image=cv2.addWeighted(vertical_image_open,0.3,warped_image,0.8,0)
cv2.imshow('Overlay',overlay_image)
    `cv2.perspectiveTransform()` will transform *points* --> https://docs.opencv.org/3.0-beta/modules/core/doc/operations_on_arrays.html#perspectivetransform – alkasm Apr 12 '19 at 18:00

1 Answer


I've met the same problem and found an answer here, but in C++. According to the docs, OpenCV's warpPerspective uses the formula below, where

src - input image.

dst - output image that has the size dsize and the same type as src.

M - 3×3 transformation matrix (inverted first unless the WARP_INVERSE_MAP flag is set).

dst(x, y) = src( (M11·x + M12·y + M13) / (M31·x + M32·y + M33),
                 (M21·x + M22·y + M23) / (M31·x + M32·y + M33) )

You can use it directly on the point:


# M - 3x3 transform matrix, e.g. from cv2.findHomography() or cv2.getPerspectiveTransform()

def warp_point(x: int, y: int) -> tuple[int, int]:
    d = M[2, 0] * x + M[2, 1] * y + M[2, 2]

    return (
        int((M[0, 0] * x + M[0, 1] * y + M[0, 2]) / d), # x
        int((M[1, 0] * x + M[1, 1] * y + M[1, 2]) / d), # y
    )
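For example, plugging the question's matrix and point into the function (repeated here with `M` defined so the snippet runs on its own):

```python
import numpy as np

# Homography values copied from the question
M = np.array([
    [7.36664511e-01, 3.38845039e+01, 2.17700574e+03],
    [-1.16261372e+00, 6.30840432e+01, 8.09587058e+03],
    [4.28933532e-05, 8.15551141e-03, 1.00000000e+00],
])

def warp_point(x: int, y: int) -> tuple[int, int]:
    # Divide by the homogeneous coordinate d to project back to 2D
    d = M[2, 0] * x + M[2, 1] * y + M[2, 2]
    return (
        int((M[0, 0] * x + M[0, 1] * y + M[0, 2]) / d),  # x
        int((M[1, 0] * x + M[1, 1] * y + M[1, 2]) / d),  # y
    )

print(warp_point(510, 364))  # close to the [3730, 7635] quoted in the question
```

The `int()` truncates toward zero, so the result can differ from a rounded value by one pixel.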

UPD: I've found one more answer here, this time in Python :D

It seems I forgot brackets in the first part; fixed.
