INFO:
I've calibrated my camera and have found the camera's intrinsics matrix (K) and its distortion coefficients (d) to be the following:
import cv2
import numpy as np
K = np.asarray([[556.3834638575809, 0, 955.3259939726225],
                [0, 556.2366649196925, 547.3011305411478],
                [0, 0, 1]])
d = np.asarray([[-0.05165940570900624],
                [0.0031093602070252167],
                [-0.0034036648250202746],
                [0.0003390345044343793]])
From here, I can undistort my image using the following three lines:
final_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, d, (1920, 1080), np.eye(3), balance=1.0)
map_1, map_2 = cv2.fisheye.initUndistortRectifyMap(K, d, np.eye(3), final_K, (1920, 1080), cv2.CV_32FC1)
undistorted_image = cv2.remap(image, map_1, map_2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
The resulting undistorted image appears to be correct (left image is distorted, right is undistorted), but when I try to undistort image points by indexing into map_1 and map_2, the points aren't mapped to the same locations as their corresponding pixels in the image. I detected the calibration board points in the left image using
ret, corners = cv2.findChessboardCorners(
    gray, (6, 8),
    flags=cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_FAST_CHECK + cv2.CALIB_CB_NORMALIZE_IMAGE)
corners2 = cv2.cornerSubPix(
    gray, corners, (3, 3), (-1, -1),
    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))
then remapped those points in the following way:
remapped_points = []
for corner in corners2:
    x, y = int(corner[0][0]), int(corner[0][1])
    remapped_points.append((map_1[y, x], map_2[y, x]))
In these horizontally concatenated images, the left image shows the points detected in the distorted image, while the right image shows the remapped locations of those points.
I also haven't been able to get correct results using cv2.fisheye.undistortPoints(). I use the following function to undistort points:
def undistort_list_of_points(point_list, in_K, in_d):
    K = np.asarray(in_K)
    d = np.asarray(in_d)
    # Input can be a list of bbox coords, poly coords, etc.
    # TODO -- Check if point behind camera?
    points_2d = np.asarray(point_list)
    points_2d = points_2d[:, 0:2].astype('float32')
    points2d_undist = np.empty_like(points_2d)
    points_2d = np.expand_dims(points_2d, axis=1)

    result = np.squeeze(cv2.fisheye.undistortPoints(points_2d, K, d))

    fx = K[0, 0]
    fy = K[1, 1]
    cx = K[0, 2]
    cy = K[1, 2]

    # undistortPoints() returns normalized coordinates; project back to pixels
    for i, (px, py) in enumerate(result):
        points2d_undist[i, 0] = px * fx + cx
        points2d_undist[i, 1] = py * fy + cy

    return points2d_undist
This image shows the results when undistorting using the above function.
(this is all running in OpenCV 4.2.0 on Ubuntu 18.04 in Python 3.6.8)
QUESTIONS
1. Why isn't this remapping of image coordinates working properly? Am I using map_1 and map_2 incorrectly?
2. Why are the results from cv2.fisheye.undistortPoints() different from those obtained with map_1 and map_2?