
I'm changing an image from a front perspective to a bird's eye view by using findHomography and warpPerspective.

It works in that the image warps to the desired perspective, but the crop is off: the warp moves the image largely outside the image box. I assume the reason is that the operation results in negative coordinates.

I have calculated the point correspondences for the transformation matrix manually, not by using any of OpenCV's functions for that purpose, since e.g. the chessboard functions failed to detect the proper points.

I guess this can be fixed by making additional changes to the transformation matrix. But how is that done? Also, is there a way to make sure the transformed image is centered along the x-axis and then have the y-axis adjusted to a desired position?

Code snippet that does the job now:

#include <opencv2/opencv.hpp>

cv::Mat image;   // image is loaded with the original image

cv::Mat newPers; // the container for the resulting image
cv::Mat H;       // the homography

std::vector<cv::Point2f> src;
std::vector<cv::Point2f> dst;

// In reality several more points.
src.push_back(cv::Point2f(264,301));
src.push_back(cv::Point2f(434,301));
src.push_back(cv::Point2f(243,356));
src.push_back(cv::Point2f(476,356));

dst.push_back(cv::Point2f(243,123));
dst.push_back(cv::Point2f(476,123));
dst.push_back(cv::Point2f(243,356));
dst.push_back(cv::Point2f(476,356));

H = cv::findHomography(src, dst, cv::RANSAC);

cv::warpPerspective(image,
                    newPers,
                    H,
                    cv::Size(3000, 3000),
                    cv::INTER_NEAREST | cv::WARP_FILL_OUTLIERS);

cv::namedWindow("Warped persp", cv::WINDOW_AUTOSIZE );
cv::imshow( "Warped persp", newPers);
Einar Sundgren
  • you can transform the border points of your image `cv::Point2f(0,0)`, `cv::Point2f(image.cols, 0)`, `cv::Point2f(image.cols, image.rows)`, `cv::Point2f(0, image.rows)` manually (multiply with your homography) and check whether they fit in your 3000,3000 sized dst image. Compute their min/max locations and modify your homography's translation part and/or the scale (or the dst image size) accordingly. I didn't check whether the rest of your code is ok, though ;) – Micka Mar 06 '14 at 09:55
  • @Einar If you want to let the `findHomography` function do the work, you have to define your destination points carefully, so that the warped image corresponds to what you need. However, there is a priori nothing wrong in composing the homography with a custom translation (obtained as Micka said) before warping the image. – BConic Mar 06 '14 at 10:51
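For illustration, a minimal sketch of what these two comments suggest, using the image, newPers, and H from the question; tx and ty are hypothetical offsets that would come from the min/max of the warped border points:

// Compose the homography with a custom translation T before warping.
// tx and ty are placeholder values; in practice, compute them from the
// minimum x and y of the warped border points as described above.
double tx = 100.0, ty = 50.0;
cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, tx, 0, 1, ty, 0, 0, 1);
cv::warpPerspective(image, newPers, T * H, cv::Size(3000, 3000));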

3 Answers


OpenCV gives a very convenient way to do a perspective transform. The only thing you have to do is take care of the homography returned by findHomography: some points of the image you provide may end up in the negative part of the x or y axis, so you have to do some checks before warping the image.

Step 1: find the homography H with findHomography. You will get the classic structure for a homography:

H = [ h00, h01, h02;
      h10, h11, h12;
      h20, h21,   1];

Step 2: find the positions of the image's corners after warping.

So let me define the order of the corners, in (x, y) coordinates so that they match the matrix P below:

(0,0) ________ (w,0)
     |        |
     |________|
(0,h)          (w,h)

To do that, just create a matrix like this:

P = [0, w, w, 0;
     0, 0, h, h;
     1, 1, 1, 1]

Multiply it by H to get the warped coordinates:

P' = H * P

Step 3: check the minimum in x and y over these new 4 points and get the size of the warped image. After you have done the product, you will receive something like this:

P' = [s1*x1, s2*x2, s3*x3, s4*x4;
      s1*y1, s2*y2, s3*y3, s4*y4;
      s1   , s2   , s3   , s4]

So, to obtain the new valid coordinates, just divide rows 1 and 2 by row 3.

After that, check the minimum over the first row (the x coordinates) and over the second row (the y coordinates); cv::reduce can do this.

To find the bounding box that will contain the image (i.e. the dimensions of the dst matrix for the warpPerspective function), also find the maximum over each row with cv::reduce.

Let minx be the minimum over the first row (i.e. over x), maxx the maximum over the first row, and miny and maxy the corresponding values for the second row.

So the size of the warped image should be cv::Size(maxx - minx, maxy - miny).

Step 4: add a correction to the homography. Check whether minx and/or miny are negative: if minx < 0, then add -minx to h02, and if miny < 0, then add -miny to h12.

So H should be:

H = [ h00, h01, h02-minx; // if minx < 0
      h10, h11, h12-miny; // if miny < 0
      h20, h21,   1];

Step 5: warp the image.
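A minimal C++ sketch of steps 2 to 5, assuming the image and H from the question are in scope. cv::perspectiveTransform performs both the multiplication by H and the division by the third homogeneous coordinate in one call; note also that, as SSteve points out in the comments below, the step 4 offset is applied here by left-multiplying H with a translation matrix T rather than by editing h02 and h12 in place:

// Corners of the source image, in the order defined above.
std::vector<cv::Point2f> corners = {
    {0.0f, 0.0f},
    {(float)image.cols, 0.0f},
    {(float)image.cols, (float)image.rows},
    {0.0f, (float)image.rows}
};

// Steps 2 and 3: warp the corners (perspectiveTransform divides by the
// third homogeneous coordinate internally) and take their bounding box.
std::vector<cv::Point2f> warpedCorners;
cv::perspectiveTransform(corners, warpedCorners, H);
cv::Rect bbox = cv::boundingRect(warpedCorners);

// Step 4: correct the homography with a translation so that all warped
// coordinates become non-negative.
cv::Mat T = (cv::Mat_<double>(3, 3) <<
    1, 0, -bbox.x,
    0, 1, -bbox.y,
    0, 0, 1);

// Step 5: warp with the corrected homography into an output of exactly
// the bounding-box size.
cv::Mat warped;
cv::warpPerspective(image, warped, T * H, bbox.size());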

user3529407
  • @Einar: Have you tried this solution? Did it work for you? – navderm Dec 15 '14 at 20:43
  • Just a comment: if you're using H.inv() to do the transformation, you should be changing the translation in the inverted homography and not the main homography matrix. I made this error. I hope no one else spends more time in such stupid endeavors :) – navderm Dec 17 '14 at 21:18
  • This is correct up to step 4. Adding the offset to the homography transform matrix doesn't work. Instead, create a 3x3 identity matrix (I'll call it `T`), put the translation values into `t02` and `t12`, then create a new homography matrix `H' = T * H`. Use `H'` in the call to `warpPerspective`. – SSteve Feb 24 '16 at 17:14
  • Can you give more info after calculating P'? Put it into a code example please – Martin86 Nov 13 '20 at 13:50

I think the question "OpenCV warpperspective" is similar to the current question "cv::warpPerspective only shows part of warped image".

So I give you my answer https://stackoverflow.com/a/37275961/15485 also here:

Try the homography_warp below.

void homography_warp(const cv::Mat& src, const cv::Mat& H, cv::Mat& dst);

src is the source image.

H is your homography.

dst is the warped image.

homography_warp adjusts your homography as described by matt-freeman (https://stackoverflow.com/users/1060066/matt-freeman) in his answer https://stackoverflow.com/a/8229116/15485.

// Convert a vector of non-homogeneous 2D points to a vector of homogeneous 2D points.
void to_homogeneous(const std::vector< cv::Point2f >& non_homogeneous, std::vector< cv::Point3f >& homogeneous)
{
    homogeneous.resize(non_homogeneous.size());
    for (size_t i = 0; i < non_homogeneous.size(); i++) {
        homogeneous[i].x = non_homogeneous[i].x;
        homogeneous[i].y = non_homogeneous[i].y;
        homogeneous[i].z = 1.0;
    }
}

// Convert a vector of homogeneous 2D points to a vector of non-homogeneous 2D points.
void from_homogeneous(const std::vector< cv::Point3f >& homogeneous, std::vector< cv::Point2f >& non_homogeneous)
{
    non_homogeneous.resize(homogeneous.size());
    for (size_t i = 0; i < non_homogeneous.size(); i++) {
        non_homogeneous[i].x = homogeneous[i].x / homogeneous[i].z;
        non_homogeneous[i].y = homogeneous[i].y / homogeneous[i].z;
    }
}

// Transform a vector of 2D non-homogeneous points via a homography.
std::vector<cv::Point2f> transform_via_homography(const std::vector<cv::Point2f>& points, const cv::Matx33f& homography)
{
    std::vector<cv::Point3f> ph;
    to_homogeneous(points, ph);
    for (size_t i = 0; i < ph.size(); i++) {
        ph[i] = homography*ph[i];
    }
    std::vector<cv::Point2f> r;
    from_homogeneous(ph, r);
    return r;
}

// Find the bounding box of a vector of 2D non-homogeneous points.
cv::Rect_<float> bounding_box(const std::vector<cv::Point2f>& p)
{
    cv::Rect_<float> r;
    float x_min = std::min_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.x < rhs.x; })->x;
    float x_max = std::max_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.x < rhs.x; })->x;
    float y_min = std::min_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.y < rhs.y; })->y;
    float y_max = std::max_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.y < rhs.y; })->y;
    return cv::Rect_<float>(x_min, y_min, x_max - x_min, y_max - y_min);
}

// Warp the image src into the image dst through the homography H.
// The resulting dst image contains the entire warped image; this
// behaviour is the same as that of Octave's imperspectivewarp (in the
// 'image' package) when the argument bbox is equal to 'loose'.
// See http://octave.sourceforge.net/image/function/imperspectivewarp.html
void homography_warp(const cv::Mat& src, const cv::Mat& H, cv::Mat& dst)
{
    std::vector< cv::Point2f > corners;
    corners.push_back(cv::Point2f(0, 0));
    corners.push_back(cv::Point2f(src.cols, 0));
    corners.push_back(cv::Point2f(0, src.rows));
    corners.push_back(cv::Point2f(src.cols, src.rows));

    std::vector< cv::Point2f > projected = transform_via_homography(corners, H);
    cv::Rect_<float> bb = bounding_box(projected);

    cv::Mat_<double> translation = (cv::Mat_<double>(3, 3) << 1, 0, -bb.tl().x, 0, 1, -bb.tl().y, 0, 0, 1);

    cv::warpPerspective(src, dst, translation*H, bb.size());
}
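A possible usage sketch, assuming the image and H from the question:

// Hypothetical usage of homography_warp: the whole warped image ends up
// inside the destination, with no manual guess of the output size needed.
cv::Mat warped;
homography_warp(image, H, warped);
cv::imshow("Warped persp", warped);
cv::waitKey(0);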
Alessandro Jacopson

If I understood correctly, the question basically demands a method to calculate the correct offset for the translation of the warped image. I will explain how to get the right offset for the translation. The idea is that matching features in the two images should have the same coordinates in the final stitched image.

Let's say we refer images as follows:

  • 'source image' (si): the image which needs to be warped
  • 'destination image' (di): the image to whose perspective 'source image' will be warped
  • 'warped source image' (wsi): the source image after warping it to the destination image's perspective

Following is what you need to do in order to calculate offset for translation:

  1. After you have sampled the good matches and found the mask from the homography, store the best match's keypoint (the one with minimum distance that is also an inlier, i.e. gets the value 1 in the mask obtained from the homography calculation) in si and di. Let's say the best match's keypoints in si and di are `bm_si` and `bm_di` respectively:

    bm_si = [x1, y1, 1]

    bm_di = [x2, y2, 1]

  2. Find the position of bm_si in wsi by simply multiplying it with the homography matrix (H): bm_wsi = np.dot(H, bm_si)

    bm_wsi = [x/bm_wsi[2] for x in bm_wsi]

  3. Depending on where you will be placing di on the output of warping si (= wsi), adjust bm_di.

    Let's say you are warping from the left image to the right image (so that the left image is si and the right image is di): then you will be placing di on the right side of wsi, and hence bm_di[0] += si.shape[1] (the width of si).

  4. Now, after the above steps:

    x_offset = bm_di[0] - bm_si[0]

    y_offset = bm_di[1] - bm_si[1]

  5. Using the calculated offset, find the new homography matrix and warp si.

    T = np.array([[1, 0, x_offset], [0, 1, y_offset], [0, 0, 1]])

    translated_H = np.dot(T, H)

    wsi_frame_size = (2 * si.shape[1], 2 * si.shape[0])  # (width, height), as expected by cv2.warpPerspective

    stitched = cv2.warpPerspective(si, translated_H, wsi_frame_size)

    stitched[0:si.shape[0], si.shape[1]:] = di

Vijendra1125
  • Thank you for the answer. My question was asked quite a few years ago and I have no reasonable way of validating your suggestion today. My actual solution was to manually adjust it to a known size. It was only for a PoC after all. – Einar Sundgren Aug 27 '20 at 05:58