
I'm trying to achieve the following goal:

1) Detect feature points on the image and save them into an array

2) Copy and rotate the original image

3) Detect feature points on the rotated image

4) "Rotate" (transform) the original image's detected points by the same angle (matrix) that was used to rotate the image

5) Examine the methods' reliability under rotation (check how many features of the rotated image meet the transformed features of the original image); see the sketch right after this list
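For step 5, what I mean by "meet" is simply counting how many transformed original-image points land within a small pixel tolerance of some point detected on the rotated image. A rough sketch of that check (plain C++/OpenCV; the countMatches name and the tolerance value are only illustrative, not my actual code):

    #include <opencv2/core.hpp>
    #include <vector>

    // Count transformed points that have a detected point within `tolerance` pixels.
    static int countMatches(const std::vector<cv::Point2f>& transformedPts,
                            const std::vector<cv::Point2f>& detectedPts,
                            float tolerance)
    {
        int matches = 0;
        for (const cv::Point2f& p : transformedPts) {
            for (const cv::Point2f& q : detectedPts) {
                float dx = p.x - q.x, dy = p.y - q.y;
                if (dx * dx + dy * dy <= tolerance * tolerance) { ++matches; break; }
            }
        }
        return matches;
    }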

Actually, my problems start at step 2: when I try to rotate even a square image by a -90 angle (I need 45 for my task, by the way), I get black/faded borders and the produced image is 202x203 while the original is 201x201 (see the "-90 rotated image artefacts" screenshot).

The code I'm using to rotate Mat:

- (Mat)rotateImage:(Mat)imageMat angle:(double)angle {

    // get rotation matrix for rotating the image around its center
    cv::Point2f center(imageMat.cols/2.0, imageMat.rows/2.0);
    cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);

    // determine bounding rectangle
    cv::Rect bbox = cv::RotatedRect(center,imageMat.size(), angle).boundingRect();

    // adjust transformation matrix
    rot.at<double>(0,2) += bbox.width/2.0 - center.x;
    rot.at<double>(1,2) += bbox.height/2.0 - center.y;

    cv::Mat dst;
    cv::warpAffine(imageMat, dst, rot, bbox.size());
    return dst;
}

from https://stackoverflow.com/a/24352524/1286212

Also I tried this one with the same result: https://stackoverflow.com/a/29945433/1286212

And the next problem is with point rotation. I'm using this code to transform the original features by the same angle (-90):

- (std::vector<cv::Point>)transformPoints:(std::vector<cv::Point>)featurePoints fromMat:(Mat)imageMat angle:(double)angle {
    cv::Point2f center(imageMat.cols/2.0, imageMat.rows/2.0);
    cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
    std::vector<cv::Point> dst;
    cv::transform(featurePoints, dst, rot);
    return dst;
}
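One thing worth noting about this helper: it rebuilds the rotation matrix from scratch, without the bounding-box translation that rotateImage: adds to rot before warping, so for angles like 45 (where the canvas grows) the transformed points won't be in the rotated image's coordinate frame. A rough sketch of the alternative, reusing the same adjusted 2x3 matrix for the points (this assumes rotateImage: is refactored to expose its adjusted rot; the function name is only illustrative):

    #include <opencv2/core.hpp>
    #include <vector>

    // Apply the SAME adjusted 2x3 matrix that was passed to warpAffine, so the
    // points land in the rotated (and possibly padded) image's pixel coordinates.
    static std::vector<cv::Point2f> transformWithMatrix(const std::vector<cv::Point2f>& pts,
                                                        const cv::Mat& rot)
    {
        std::vector<cv::Point2f> dst;
        cv::transform(pts, dst, rot);   // x' = M00*x + M01*y + M02, y' = M10*x + M11*y + M12
        return dst;
    }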

And I can't tell whether it works as I expect because of the wrongly rotated image, so I've made an example to show what I'm talking about:

    cv::Mat testMat(3, 3, CV_8UC3, cv::Scalar(255,0,0));
    testMat.at<Vec3b>(cv::Point(0,1)) = Vec3b(0, 255, 0);

    for(int i = 0; i < testMat.rows; i++) {
        for(int j = 0; j < testMat.cols; j++) {
            Vec3b color = testMat.at<Vec3b>(cv::Point(i,j));
            NSLog(@"Pixel (%d, %d) color = (%d, %d, %d)", i, j, color[0], color[1], color[2]);
        }
    }

    std::vector<cv::Point> featurePoints1;
    std::vector<cv::Point> featureRot;
    cv::Point featurePoint = cv::Point( 0, 1 );
    featurePoints1.push_back(featurePoint);
    cv::Mat rotated = [self rotateImage:testMat angle:-90];
    featureRot = [self transformPoints:featurePoints1 fromMat:testMat angle:90];


    for(int i = 0; i < rotated.rows; i++) {
        for(int j = 0; j < rotated.cols; j++) {
            Vec3b color = rotated.at<Vec3b>(cv::Point(i,j));
            NSLog(@"Pixel (%d, %d) color = (%d, %d, %d)", i, j, color[0], color[1], color[2]);
        }
    }

Both mats (testMat and rotated) should be 3x3, but the second one is 4x5. And the green pixel should be translated from (0, 1) to (1, 2) when rotated by -90, but in fact the transformPoints:fromMat:angle: method gives (1, 3) (because of the wrong dimensions in the rotated image, I guess; see the check after the logs). Here are the logs for the original image:

Pixel (0, 0) color = (255, 0, 0)
Pixel (0, 1) color = (0, 255, 0)
Pixel (0, 2) color = (255, 0, 0)
Pixel (1, 0) color = (255, 0, 0)
Pixel (1, 1) color = (255, 0, 0)
Pixel (1, 2) color = (255, 0, 0)
Pixel (2, 0) color = (255, 0, 0)
Pixel (2, 1) color = (255, 0, 0)
Pixel (2, 2) color = (255, 0, 0)

And for the rotated one:

Pixel (0, 0) color = (0, 0, 0)
Pixel (0, 1) color = (0, 0, 0)
Pixel (0, 2) color = (0, 0, 0)
Pixel (0, 3) color = (0, 0, 0)
Pixel (0, 4) color = (255, 127, 0)
Pixel (1, 0) color = (0, 0, 0)
Pixel (1, 1) color = (0, 0, 0)
Pixel (1, 2) color = (0, 0, 0)
Pixel (1, 3) color = (0, 0, 0)
Pixel (1, 4) color = (0, 71, 16)
Pixel (2, 0) color = (128, 0, 0)
Pixel (2, 1) color = (255, 0, 0)
Pixel (2, 2) color = (255, 0, 0)
Pixel (2, 3) color = (128, 0, 0)
Pixel (2, 4) color = (91, 16, 0)
Pixel (3, 0) color = (0, 128, 0)
Pixel (3, 1) color = (128, 128, 0)
Pixel (3, 2) color = (255, 0, 0)
Pixel (3, 3) color = (128, 0, 0)
Pixel (3, 4) color = (0, 0, 176)

As you can see, the pixel colors are corrupted too. What am I doing wrong or misunderstanding?
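For what it's worth, the (1, 3) I get is exactly what a 90-degree rotation matrix (the angle the example passes to transformPoints:fromMat:angle:) built around the geometric center (1.5, 1.5) gives for that point, while the pixel-indexed center (1, 1) gives the expected (1, 2); see the UPD below. A quick stand-alone check, hard-coded for the 3x3 test image:

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <cstdio>
    #include <vector>

    int main()
    {
        std::vector<cv::Point2f> src = { cv::Point2f(0.f, 1.f) };
        std::vector<cv::Point2f> dst;

        // Geometric center of the 3x3 image: (3/2, 3/2) = (1.5, 1.5)
        cv::Mat rotGeometric = cv::getRotationMatrix2D(cv::Point2f(1.5f, 1.5f), 90, 1.0);
        cv::transform(src, dst, rotGeometric);
        std::printf("geometric center:     (%.0f, %.0f)\n", dst[0].x, dst[0].y); // (1, 3)

        // Pixel-indexed center: (3/2 - 0.5, 3/2 - 0.5) = (1, 1)
        cv::Mat rotPixelCenter = cv::getRotationMatrix2D(cv::Point2f(1.f, 1.f), 90, 1.0);
        cv::transform(src, dst, rotPixelCenter);
        std::printf("pixel-indexed center: (%.0f, %.0f)\n", dst[0].x, dst[0].y); // (1, 2)
        return 0;
    }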

UPD SOLVED:

1) You should use boundingRect2f() instead of boundingRect() so you don't lose floating-point precision and you get the right bounding box

2) You should compute the center as cv::Point2f center(imageMat.cols/2.0f - 0.5f, imageMat.rows/2.0f - 0.5f) to obtain the actual pixel-indexed center (I have no idea why every single answer on SO computes the center the wrong way)
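Putting both fixes together, the rotation helper ends up looking roughly like this. It is only a sketch: I've also applied the -0.5 pixel-indexed convention to the destination canvas center, which the UPD doesn't spell out, but which keeps a -90 rotation on exact pixel positions:

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    // Sketch of the corrected helper (requires an OpenCV version with boundingRect2f).
    cv::Mat rotateImage(const cv::Mat& imageMat, double angle)
    {
        // pixel-indexed center: (100, 100) for a 201x201 image, not (100.5, 100.5)
        cv::Point2f center(imageMat.cols / 2.0f - 0.5f, imageMat.rows / 2.0f - 0.5f);
        cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);

        // float bounding box; the integer boundingRect() rounds/pads it (202x203 above)
        cv::Rect2f bbox = cv::RotatedRect(center, imageMat.size(), angle).boundingRect2f();
        cv::Size dstSize(cvRound(bbox.width), cvRound(bbox.height));

        // shift so the source center lands on the pixel-indexed center of the new canvas
        // (assumption: the same -0.5 convention is used on the destination side)
        rot.at<double>(0, 2) += (dstSize.width  - 1) / 2.0 - center.x;
        rot.at<double>(1, 2) += (dstSize.height - 1) / 2.0 - center.y;

        cv::Mat dst;
        cv::warpAffine(imageMat, dst, rot, dstSize);
        return dst;
    }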

Aft3rmath

1 Answer


Use boundingRect2f instead of boundingRect. boundingRect uses integer values, so it loses precision.

Vyacheslav