I'm trying to correct for fisheye in an image using OpenCV.
I can't get the calibration step to work. I've tried the code below, and even with identical x and y coordinates for the object points and image points, the returned RMS error is at least 100.
#include <opencv2/opencv.hpp> // core, calib3d (fisheye), imgcodecs

std::vector<std::vector<cv::Point3d>> objpts;
std::vector<std::vector<cv::Point2d>> imgpts;
cv::Size img_size(1292,964); //dimension of input image in pixels
cv::Mat K, D; //camera matrix and distortion coefficients, filled in by calibrate()
std::vector<cv::Mat> rvecs, tvecs; //per-view rotation/translation, also filled in by calibrate()
objpts.push_back(std::vector<cv::Point3d>());
imgpts.push_back(std::vector<cv::Point2d>());
//loading identical data into each
objpts[0].push_back(cv::Point3d(100, 100, 0));
objpts[0].push_back(cv::Point3d(0, 100, 0));
objpts[0].push_back(cv::Point3d(100, 0, 0));
objpts[0].push_back(cv::Point3d(0, 0, 0));
objpts[0].push_back(cv::Point3d(0, 50, 0));
imgpts[0].push_back(cv::Point2d(100, 100));
imgpts[0].push_back(cv::Point2d(0, 100));
imgpts[0].push_back(cv::Point2d(100, 0));
imgpts[0].push_back(cv::Point2d(0, 0));
imgpts[0].push_back(cv::Point2d(0, 50));
//ret changes depending on the points, but is always around 100-500
double ret = cv::fisheye::calibrate(objpts, imgpts, img_size, K, D, rvecs, tvecs);
//This produces the images seen below
cv::Mat im = cv::imread("env.jpg");
cv::Mat out;
cv::fisheye::undistortImage(im, out, K, D);
cv::imwrite("out_env.jpg", out);
When I try to use these calibrated K and D matrices to undistort an image, I get results that look like this:
For reference, here's the source image:
I got similar issues when I tried using actual pixel coordinates for imgpts and physical dimensions for objpts.
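In case it's relevant, that attempt looked roughly like the sketch below (simplified from memory; the 9x6 board size, 25 mm squares, and filenames are just placeholders, not my exact setup):

// Hypothetical chessboard-based variant (board size, square size, and files are placeholders)
cv::Size board(9, 6);                  // inner corners of the calibration pattern
double square = 25.0;                  // square size in mm
std::vector<cv::String> files = {"cal_0.jpg", "cal_1.jpg"}; // several views of the board
std::vector<std::vector<cv::Point3d>> objpts;
std::vector<std::vector<cv::Point2d>> imgpts;
for (const auto& f : files) {
    cv::Mat gray = cv::imread(f, cv::IMREAD_GRAYSCALE);
    std::vector<cv::Point2f> corners;
    if (!cv::findChessboardCorners(gray, board, corners))
        continue; // skip views where the board isn't found
    cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                     cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.1));
    std::vector<cv::Point3d> obj;      // physical board coordinates, Z = 0
    std::vector<cv::Point2d> img;      // detected corner positions in pixels
    for (int i = 0; i < board.height; ++i)
        for (int j = 0; j < board.width; ++j)
            obj.push_back(cv::Point3d(j * square, i * square, 0));
    for (const auto& c : corners)
        img.push_back(cv::Point2d(c.x, c.y));
    objpts.push_back(obj);
    imgpts.push_back(img);
}
cv::Mat K, D;
std::vector<cv::Mat> rvecs, tvecs;
double ret = cv::fisheye::calibrate(objpts, imgpts, cv::Size(1292, 964), K, D, rvecs, tvecs);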
I'm using OpenCV 3.4.5.
Any help is appreciated! Thanks!