
I have made a small program that reads an image, transforms the perspective, and then redraws the image. Currently I write each pixel to the output manually, but this way a lot of points are lost and the result is an image that is very faint (the larger the transformation, the fainter the image). This is my code:

import cv2
import numpy as np
from PIL import Image

# Build a grid of (u, v) source coordinates and warp them with the homography H.
U, V = np.meshgrid(range(img_array.shape[1]), range(img_array.shape[0]))
UV = np.vstack((U.flatten(), V.flatten())).T
UV_warped = cv2.perspectiveTransform(np.array([UV]).astype(np.float32), H)

UV_warped = UV_warped[0]
UV_warped = UV_warped.astype(int)

# Shift the warped coordinates so the top-left of the result is at (0, 0).
x_translation = min(UV_warped[:, 0])
y_translation = min(UV_warped[:, 1])

new_width = np.amax(UV_warped[:, 0]) - np.amin(UV_warped[:, 0])
new_height = np.amax(UV_warped[:, 1]) - np.amin(UV_warped[:, 1])

UV_warped[:, 0] = UV_warped[:, 0] - x_translation
UV_warped[:, 1] = UV_warped[:, 1] - y_translation

# Create the box for the output image.
new_img = np.ones((new_height + 1, new_width + 1)) * 255  # 0 = black, 255 = white background

# Copy each source pixel to its rounded warped position (this is the lossy step).
for uv_pix, UV_warped_pix in zip(UV, UV_warped):
    x_orig = uv_pix[0]  # x in original
    y_orig = uv_pix[1]  # y in original
    color = img_array[y_orig, x_orig]

    x_new = UV_warped_pix[0]  # new x
    y_new = UV_warped_pix[1]  # new y

    new_img[y_new, x_new] = np.array(color)

img = Image.fromarray(np.uint8(new_img))
img.save("test.jpg")

Is there a way to do this differently (with interpolation, maybe?) so I won't lose so many pixels and the image is not so faint?

Yorian

1 Answer


You are looking for the function warpPerspective (as already mentioned in the answer to your previous question, OpenCV perspective transform in python).

You can use this function like this (although I'm not familiar with python):

dst_img = cv2.warpPerspective(src_img, H_from_src_to_dst, dst_size)
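
Regarding the large-translation issue raised in the comments below, here is a minimal sketch (not part of the original answer; it assumes `img_array` and `H` from the question) of how you might compose a translation into the homography so the whole warped image stays inside the destination:

import cv2
import numpy as np

# Assumed from the question: img_array is the source image, H the 3x3 homography.
h, w = img_array.shape[:2]

# Warp the four image corners to find the bounding box of the output.
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
x_min, y_min = warped.min(axis=0)
x_max, y_max = warped.max(axis=0)

# Compose a translation into H so nothing falls outside the destination image.
T = np.array([[1, 0, -x_min],
              [0, 1, -y_min],
              [0, 0, 1]])
dst_size = (int(np.ceil(x_max - x_min)), int(np.ceil(y_max - y_min)))  # (width, height)

# warpPerspective maps every destination pixel back into the source and
# interpolates, so no pixels are lost and the result is not faint.
dst_img = cv2.warpPerspective(img_array, T.dot(H), dst_size)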

EDIT: You can refer to this OpenCV tutorial. It uses affine transformations, but there exist similar OpenCV functions for perspective transformations.
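
For concreteness, a minimal sketch of the two function pairs; the point values here are hypothetical placeholders, and `img_array` is assumed from the question:

import cv2
import numpy as np

height, width = img_array.shape[:2]

# Affine: three point correspondences -> 2x3 matrix (as in the linked tutorial).
src3 = np.float32([[0, 0], [100, 0], [0, 100]])    # hypothetical points
dst3 = np.float32([[10, 5], [110, 10], [5, 105]])
A = cv2.getAffineTransform(src3, dst3)
out_affine = cv2.warpAffine(img_array, A, (width, height))

# Perspective: four point correspondences -> 3x3 homography.
src4 = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]])
dst4 = np.float32([[10, 5], [110, 10], [115, 110], [5, 105]])
H = cv2.getPerspectiveTransform(src4, dst4)
out_persp = cv2.warpPerspective(img_array, H, (width, height))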

BConic
  • Thanks for your fast response. – Yorian Feb 21 '14 at 14:08
  • However, my output will be in coordinates. Someone else told me that warpPerspective apparently doesn't work very well with large translations. – Yorian Feb 21 '14 at 14:17
  • Yes, it does work; it's just that if the translation is too large, your source image will leave the field of view of the destination image, so you do not see anything. What do you mean by "my output will be in coordinates"? – BConic Feb 21 '14 at 14:21
  • The images are taken from the real world; they will therefore be real-world coordinates. – Yorian Feb 21 '14 at 14:27
  • So your image does not contain colors but coordinates in meters? – BConic Feb 21 '14 at 14:31
  • It does contain colors (or grayscale). Maybe this explains what I want: I have taken multiple images of a beach under an angle (the images need to be stitched in the future). I want to rectify each image by placing the points in the image at the correct X/Y-coordinates in the real world, so that in the final image I'm no longer "looking at an angle" but from above (as if it were taken from a helicopter). – Yorian Feb 21 '14 at 14:38
  • If the image you want to transform contains colors, then you may use `warpPerspective`, which will take care of the interpolation issues. Now, the actual transformation you want to apply to the image is described by matrix `H`, so you should estimate this matrix consistently with the rectification you want to achieve. And this is the role of the function `cv2.getPerspectiveTransform` (a short sketch follows these comments). – BConic Feb 21 '14 at 14:45
  • You seem to be confused about the image stitching processing chain... See the edit in my answer. – BConic Feb 21 '14 at 14:57
  • let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/48078/discussion-between-yorian-and-aldurdisciple) – Yorian Feb 21 '14 at 15:09
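
As a sketch of the rectification step BConic describes above (all point values and the pixels-per-metre scale are hypothetical placeholders; substitute your own measured correspondences, and `img_array` is assumed from the question):

import cv2
import numpy as np

# Four points in the source image (pixels) and their known real-world
# positions (metres). All values here are hypothetical.
img_pts = np.float32([[420, 610], [1310, 590], [1500, 980], [250, 1010]])
world_pts = np.float32([[0.0, 0.0], [20.0, 0.0], [20.0, 10.0], [0.0, 10.0]])

px_per_metre = 50.0  # hypothetical output resolution
H = cv2.getPerspectiveTransform(img_pts, world_pts * px_per_metre)

# Top-down ("helicopter") view of a 20 m x 10 m area.
rectified = cv2.warpPerspective(img_array, H,
                                (int(20 * px_per_metre), int(10 * px_per_metre)))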