
I want to calibrate a car video recorder and use it for 3D reconstruction with Structure from Motion (SfM). The original size of the pictures taken with this camera is 1920x1080. I have basically been using the source code from the OpenCV tutorial for the calibration.

But there are some problems and I would really appreciate any help.

So, as usual (at least in the above source code), here is the pipeline:

  1. Find the chessboard corners with findChessboardCorners
  2. Refine them to subpixel accuracy with cornerSubPix
  3. Draw them for visualisation with drawChessboardCorners
  4. Calibrate the camera with a call to calibrateCamera
  5. Call getOptimalNewCameraMatrix and the undistort function to undistort the image

In my case, since the image is too big (1920x1080), I have resized it to 640x360 (during SfM I will also use this image size, so I don't think that should be a problem). I have also used a 9x6 chessboard for the calibration.

Here the problem arises. After the call to getOptimalNewCameraMatrix, the undistortion comes out totally wrong. Even the returned ROI is [0,0,0,0]. Below are the original image and its undistorted version:

[Original image] [Undistorted image]

You can see that the content of the undistorted image ends up in the bottom left.

But if I don't call getOptimalNewCameraMatrix and just undistort the image directly, I get quite a good result. [Undistorted image]

So, I have three questions.

  1. Why is this? I have tried another dataset taken with the same camera, and also one taken with my iPhone 6 Plus, but the results are the same as above.

  2. Another question is: what does getOptimalNewCameraMatrix actually do? I have read the documentation several times but still cannot understand it. From what I have observed, if I don't call getOptimalNewCameraMatrix, my image retains its size but is zoomed and blurred. Can anybody explain this function in more detail?

  3. For SfM, I guess the call to getOptimalNewCameraMatrix is important? Because otherwise the undistorted image would be zoomed and blurred, making keypoint detection harder (in my case, I will be using optical flow)?

I have tested the code with the OpenCV sample pictures and the results are fine.

Below is my source code:

from sys import argv
import numpy as np
import imutils  # To use the imutils.resize function:
                # resizing while preserving the image's aspect ratio,
                # in this case from 1920x1080 to 640x360.
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((9*6,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.

images = glob.glob(argv[1] + '*.jpg')
width = 640

for fname in images:
    img = cv2.imread(fname)
    if width:
        img = imutils.resize(img, width=width)

    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (9,6),None)

    # If found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)

        corners2 = cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)
        imgpoints.append(corners2)

        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (9,6), corners2,ret)
        cv2.imshow('img',img)
        cv2.waitKey(500)

cv2.destroyAllWindows()
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None)

for fname in images:
    img = cv2.imread(fname)
    if width:
        img = imutils.resize(img, width=width)

    h,  w = img.shape[:2]
    newcameramtx, roi=cv2.getOptimalNewCameraMatrix(mtx,dist,(w,h),1,(w,h))

    # undistort
    dst = cv2.undistort(img, mtx, dist, None, newcameramtx)

    # crop the image
    x,y,w,h = roi
    dst = dst[y:y+h, x:x+w]
    cv2.imshow("undistorted", dst)
    cv2.waitKey(500)

mean_error = 0
for i in range(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv2.norm(imgpoints[i],imgpoints2, cv2.NORM_L2)/len(imgpoints2)
    mean_error += error

print "total error: ", mean_error/len(objpoints)

I have already asked someone on answers.opencv.org, and he tried my code and my dataset with success. I wonder what is actually wrong.

1 Answer


Question #2:

With cv::getOptimalNewCameraMatrix(...) you can compute a new camera matrix according to the free scaling parameter alpha.

If alpha is set to 1, all the source image pixels are retained in the undistorted image, that is, you'll see black, curved borders along the undistorted image (like a pincushion). This scenario is unfortunate for several computer vision algorithms, because, for example, new edges appear in the undistorted image.

By default, cv::undistort(...) regulates the subset of the source image that will be visible in the corrected image, which is why only the sensible pixels are shown there - no pincushion around the corrected image, but at the cost of some data loss.

Anyway, you are allowed to control the subset of the source image that will be visible in the corrected image:

cv::Mat image, cameraMatrix, distCoeffs;
// ...

cv::Mat newCameraMatrix = cv::getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, image.size(), 1.0);

cv::Mat correctedImage;
cv::undistort(image, correctedImage, cameraMatrix, distCoeffs, newCameraMatrix);
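
Since you are using the Python bindings, the same contrast can be seen side by side. This is a minimal sketch, assuming img, mtx and dist come from your calibration script above:

import cv2

h, w = img.shape[:2]

# alpha = 0: every pixel in the result is valid,
# but parts of the source image are cropped away
newmtx0, roi0 = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 0, (w, h))
dst0 = cv2.undistort(img, mtx, dist, None, newmtx0)

# alpha = 1: every source pixel is kept, so the result shows black curved
# borders; roi1 is the largest rectangle containing only valid pixels
newmtx1, roi1 = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
dst1 = cv2.undistort(img, mtx, dist, None, newmtx1)
x, y, rw, rh = roi1
dst1_cropped = dst1[y:y+rh, x:x+rw]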

Question #1:

It is just my feeling, but you should also take care: if you resize your image after the calibration, then the camera matrix must also be "scaled" accordingly, for example:

cv::Mat cameraMatrix;
cv::Size calibSize; // Image size during the calibration, e.g. 1920x1080
cv::Size imageSize; // Your current image size, e.g. 640x360
// ...

cv::Matx31d t(0.0, 0.0, 1.0);
t(0) = (double)imageSize.width / (double)calibSize.width;
t(1) = (double)imageSize.height / (double)calibSize.height;

cv::Mat cameraMatrixScaled = cv::Mat::diag(cv::Mat(t)) * cameraMatrix;

This must be done only for the camera matrix, because the distortion coefficients do not depend on the resolution.
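
In Python, the same scaling could look like this (a sketch along the same lines; mtx is the matrix returned by cv2.calibrateCamera, and the two sizes are placeholders for your calibration and working resolutions):

import numpy as np

calib_w, calib_h = 1920, 1080   # resolution used during calibration
image_w, image_h = 640, 360     # resolution you actually work at

# Scale fx and cx by the width ratio, fy and cy by the height ratio.
scale = np.diag([image_w / float(calib_w), image_h / float(calib_h), 1.0])
mtx_scaled = scale @ mtx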

Question #3:

In any case, I think cv::getOptimalNewCameraMatrix(...) is not important in your case; the undistorted image can be zoomed and blurred because you remove the effect of a non-linear transformation. If I were you, I would try the optical flow without calling cv::undistort(...). I think that even a distorted image can contain a lot of good features for tracking.
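
If you do need undistorted data for SfM, one possible middle ground (a sketch, not from the original answer; mtx and dist are assumed to come from the calibration above) is to undistort only the tracked point coordinates with cv2.undistortPoints instead of resampling the whole image, which avoids the zoom and blur entirely:

import cv2
import numpy as np

# pts: tracked pixel coordinates as an Nx1x2 float32 array
# (the layout returned by e.g. cv2.calcOpticalFlowPyrLK)
pts = np.array([[[320.0, 180.0]], [[100.5, 50.25]]], dtype=np.float32)

# With P=mtx the output stays in pixel coordinates of the original camera;
# without P, cv2.undistortPoints returns normalized image coordinates.
pts_undistorted = cv2.undistortPoints(pts, mtx, dist, P=mtx)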

Kornel
  • During calibration, I already resized my image to 640x360. So I don't need to rescale my intrinsic parameters, do I? And as for cv::undistort, I do need it, because for SfM I will need the undistorted version of the image. – Hafiz Hilman Mohammad Sofian Sep 18 '16 at 03:35
  • Then you don't have to rescale your intrinsics; just use the same image size in `cv::undistort(...)` as in `cv::calibrateCamera(...)`. It's also better to implement your approach without using `cv::getOptimalNewCameraMatrix(...)`. – Kornel Sep 19 '16 at 06:51
  • But this just gets me wondering: what is the problem? A bug? Since a direct call to `cv2.undistort` solves the case, can I just assume the calibration is successful? – Hafiz Hilman Mohammad Sofian Sep 19 '16 at 09:15
  • `cv::calibrateCamera(...)` returns the final re-projection error; if this is between 0.1 and 1.0 pixels, then your calibration can be considered good enough. An RMS error of less than 1.0 px means sub-pixel accuracy. Could you share the code showing how you call `getOptimalNewCameraMatrix`? – Kornel Sep 19 '16 at 09:21
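
For reference, the `ret` value the asker's script already receives from cv2.calibrateCamera is exactly this RMS re-projection error, so it can be checked directly. A minimal sketch reusing the variables from the question:

# The first value returned by cv2.calibrateCamera is the RMS re-projection error in pixels.
rms, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
print("RMS re-projection error: %.3f px" % rms)  # roughly 0.1-1.0 px suggests a good calibration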