I have a problem with OpenCV's warpPerspective function. I'm using OpenCV 2.3.1. My goal is simple: I wish to be able to rotate my image around each axis, but with the origin in the center of the image instead of the upper left corner. It is possible that my issue is more about math than programming :-)
My code looks like this:
Mat transformationMatrix;
Mat xRotation(3, 3, CV_32F, Scalar(0));
Mat yRotation(3, 3, CV_32F, Scalar(0));
Mat zRotation(3, 3, CV_32F, Scalar(0));
Mat offset(3, 3, CV_32F, Scalar(0));
Mat transformedImage(image.rows, image.cols, CV_8UC3);

// Rotation about the x axis (0 degrees for now)
float xRad = 0.0f * (float)M_PI / 180.0f;
xRotation.at<float>(0,0) = 1;
xRotation.at<float>(1,1) = cos(xRad);
xRotation.at<float>(1,2) = -sin(xRad);
xRotation.at<float>(2,1) = sin(xRad);
xRotation.at<float>(2,2) = cos(xRad);

// Rotation about the y axis (0 degrees for now)
float yRad = 0.0f * (float)M_PI / 180.0f;
yRotation.at<float>(0,0) = cos(yRad);
yRotation.at<float>(0,2) = sin(yRad);
yRotation.at<float>(1,1) = 1;
yRotation.at<float>(2,0) = -sin(yRad);
yRotation.at<float>(2,2) = cos(yRad);

// Rotation about the z axis (10 degrees)
float zRad = 10.0f * (float)M_PI / 180.0f;
zRotation.at<float>(0,0) = cos(zRad);
zRotation.at<float>(0,1) = -sin(zRad);
zRotation.at<float>(1,0) = sin(zRad);
zRotation.at<float>(1,1) = cos(zRad);
zRotation.at<float>(2,2) = 1;

// Identity plus a translation of half the image size
offset.at<float>(0,0) = 1;
offset.at<float>(1,1) = 1;
offset.at<float>(2,2) = 1;
offset.at<float>(0,2) = image.cols / 2;
offset.at<float>(1,2) = image.rows / 2;

// The rightmost factor is applied first, so the offset happens before the rotations
transformationMatrix = xRotation * yRotation * zRotation * offset;

// Size() takes (width, height), i.e. (cols, rows)
warpPerspective(image, transformedImage, transformationMatrix, Size(image.cols, image.rows));
imshow("Output", transformedImage);
Where image is a pre-populated cv::Mat. Here I'm simply trying to rotate the image by 10 degrees, but later on I'll need to do more complex transformations. The offset I have introduced moves the image so that its center is at (0,0), which means the rotation itself comes out correct. However, it has the unfortunate side effect of throwing away the three quarters of the image that end up outside the image bounds. Ideally, I should be able to translate the rotated image back into view, but I can't quite wrap my head around how. As far as I can see, that has to happen in the same transformation matrix somehow.
I should note that I have seen this question: Specify an origin to warpPerspective() function in OpenCV 2.x, but my hope is that this is possible to do without reimplementing warpPerspective. I have also seen OpenCV warpperspective, but none of the pointers there really helped me. This question: How to use cv::warpPerspective for the perspective transformation? is not really relevant for my problem.
Thanks in advance for any tips :-)