I understand that if I apply warpPerspective() again with the inverse of the transformation matrix, I get the original image back:

    import cv2

    M = cv2.getPerspectiveTransform(pts1, pts2)                # homography mapping pts1 -> pts2
    warped = cv2.warpPerspective(img, M, (cols, rows))         # forward warp
    # ... some operations on warped, which give me a rectangle
    ret, IM = cv2.invert(M)                                    # IM is the inverse homography
    restored = cv2.warpPerspective(warped, IM, (cols, rows))   # warp back to the original view
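
For completeness, here is a minimal self-contained version of the snippet above that reproduces my setup; the image size and the point coordinates are just placeholders, not my real data:

    import cv2
    import numpy as np

    # Placeholder source image (stand-in for my real input)
    rows, cols = 400, 600
    img = np.zeros((rows, cols, 3), dtype=np.uint8)

    # Placeholder correspondences: a quad in the source mapped to the full output frame
    pts1 = np.float32([[50, 60], [550, 80], [530, 350], [70, 380]])
    pts2 = np.float32([[0, 0], [cols, 0], [cols, rows], [0, rows]])

    M = cv2.getPerspectiveTransform(pts1, pts2)
    warped = cv2.warpPerspective(img, M, (cols, rows))
    ret, IM = cv2.invert(M)
    restored = cv2.warpPerspective(warped, IM, (cols, rows))
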
However, I apply some operations to warped and obtain some locations on that image (for example, the corners of a rectangle). How can I find the corresponding coordinates of that rectangle on the restored image?
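
What I have been considering is passing the rectangle corners through cv2.perspectiveTransform with the inverse matrix IM, roughly as sketched below, but I am not sure whether this is the right approach or whether the points need to be handled differently (the corner values here are placeholders):

    import numpy as np

    # Hypothetical rectangle corners found on the warped image (placeholder values)
    x1, y1, x2, y2 = 100, 120, 300, 260
    corners = np.float32([[x1, y1], [x2, y1], [x2, y2], [x1, y2]]).reshape(-1, 1, 2)

    # My guess: map the corners back with the inverse homography IM
    restored_corners = cv2.perspectiveTransform(corners, IM)   # shape (4, 1, 2)
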
Answers in both Python and C++ are appreciated.