
I know this question has been asked a few times, but the answers don't solve my problem.

I want to calibrate a pair of cameras to use as stereo input. But when I run the code I get the error message:

OpenCV(3.4.1) Error: Assertion failed (nimages > 0 && nimages == (int)imagePoints1.total() && (!imgPtMat2 || nimages == (int)imagePoints2.total())) in collectCalibrationData, file /tmp/opencv-20180529-49540-yj8rbk/opencv-3.4.1/modules/calib3d/src/calibration.cpp, line 3133

Traceback (most recent call last):
  File "/Users/MyName/Pycharm/Project/calibration.py", line 342, in <module>
    TERMINATION_CRITERIA )
cv2.error: OpenCV(3.4.1) /tmp/opencv-20180529-49540-yj8rbk/opencv-3.4.1/modules/calib3d/src/calibration.cpp:3133: error: (-215) nimages > 0 && nimages == (int)imagePoints1.total() && (!imgPtMat2 || nimages == (int)imagePoints2.total()) in function collectCalibrationData

My code is:

def distortion_matrix(path, objpoints, imgpoints):

    for item in os.listdir(path):
        if item.endswith(".jpg"):
            cap = cv2.VideoCapture(path + item, cv2.CAP_IMAGES)

            ret, img = cap.read()  # Capture frame-by-frame

            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

            keypoints = blobDetector.detect(gray)  # Detect blobs.

            im_with_keypoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0, 255, 0),
                                                  cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
            im_with_keypoints_gray = cv2.cvtColor(im_with_keypoints, cv2.COLOR_BGR2GRAY)
            ret, corners = cv2.findCirclesGrid(im_with_keypoints, (4, 11), None,
                                               flags=cv2.CALIB_CB_ASYMMETRIC_GRID)

            if ret == True:
                objpoints.append(objp)

                corners2 = cv2.cornerSubPix(im_with_keypoints_gray, corners, (11, 11), (-1, -1),
                                            criteria)
                imgpoints.append(corners2)


    cap.release()

_, leftCameraMatrix, leftDistortionCoefficients, _, _ , objpoints0, imgpoints0 = distortion_matrix("./calibration/left/", objpoints0, imgpoints0)
_, rightCameraMatrix, rightDistortionCoefficients, _, _, objpoints1, imgpoints1 = distortion_matrix("./calibration/right/", objpoints1, imgpoints1)



(_, _, _, _, _, rotationMatrix, translationVector, _, _) = cv2.stereoCalibrate( objp, imgpoints0, imgpoints1, 
                                                                            leftCameraMatrix, leftDistortionCoefficients, 
                                                                            rightCameraMatrix, rightDistortionCoefficients, 
                                                                            imageSize, None, None, None, None,
                                                                            cv2.CALIB_FIX_INTRINSIC, TERMINATION_CRITERIA )

Most of the time when this error gets thrown, it seems the message refers to arrays (imgpoints and objpoints) that are empty or not evenly filled. But in the end both have length 20 (I scan 20 images, so this seems right) and every cell of the array holds 44 arrays (the circle grid I use has 44 points, so this also seems right).
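
For reference, a minimal sanity check along these lines (a sketch, assuming the lists are named objpoints0, imgpoints0 and imgpoints1 as in the code above) is how I inspected those counts:

print(len(objpoints0), len(imgpoints0), len(imgpoints1))  # number of views, 20 each in my case
print(np.asarray(objpoints0[0]).shape)  # object points for one view, e.g. (44, 3)
print(np.asarray(imgpoints0[0]).shape)  # 44 detected circle centres for one view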

**Edit:** my objp, imgpoints and objpoints are defined like this:

objp = np.zeros((np.prod(pattern_size), 3), np.float32)
objp[0]  = (0, 0, 0)
objp[1]  = (0, 2, 0)
objp[2]  = (0, 4, 0)
objp[3]  = (0, 6, 0)
...


objpoints0 = []
objpoints1 = []

imgpoints0 = []
imgpoints1 = []
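
(The remaining objp entries continue the same scheme. As a sketch only, assuming pattern_size = (4, 11) and the common asymmetric-grid layout where every second column of circles is offset by one unit, they could also be generated programmatically; adapt if your layout differs:)

objp = np.zeros((np.prod(pattern_size), 3), np.float32)
for i in range(np.prod(pattern_size)):
    # x = column index, y = position in the column times 2, shifted by 1 on every other column, z = 0
    objp[i] = (i // 4, (i % 4) * 2 + (i // 4) % 2, 0)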

**Edit 2:**

If NUM_IMAGES stands for the number of images, I think I've got it now. But only when I add the new axis after I call distortion_matrix(). Then the code is able to complete. I still need to test the results, but at least this problem seems to be solved.
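
Roughly, the fix looks like this (a sketch of where the new axis is added, after both distortion_matrix() calls and before cv2.stereoCalibrate):

# Replicate the single-view pattern once per view, then pass objp_all
# (instead of objp) as the first argument to cv2.stereoCalibrate above.
objp_all = np.repeat(objp[np.newaxis, :, :], len(imgpoints0), axis=0)  # shape (20, 44, 3)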

Thank you very much

Chris

1 Answer


You said you are doing stereo calibration; is there any case where some of the points on your grid are not visible from the other camera? This error may appear when one of your views is unable to detect all points on the calibration pattern. Three points to consider (a quick consistency check is sketched below):
1- Make sure your object points are 3D.
2- Make sure your left points, right points and object points have the same size (number of views).
3- Make sure your left points, right points and object points have the same number of points at each index of the list.
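
A minimal sketch of such a check (the names here are placeholders for whatever lists you pass to cv2.stereoCalibrate):

import numpy as np

assert len(objectPoints) == len(leftPoints) == len(rightPoints) > 0  # same number of views
for obj, left, right in zip(objectPoints, leftPoints, rightPoints):
    assert len(obj) == len(left) == len(right)   # same number of points in each view
    assert np.asarray(obj).shape[-1] == 3        # object points are 3D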

Edit: Your object points objp must contain a list/vector of 3D points per view. Currently its shape is something like (44, 3), but it must be (NUM_IMAGES, 44, 3). You can achieve this with objp = np.repeat(objp[np.newaxis, :, :], NUM_IMAGES, axis=0).

unlut
  • Thank you for your fast response, I'll check everything in a second. The grid should be visible (and be recognised) in all frames, since I wrote the capture script in a way that it only saves when both grids are visible. – Chris Sep 09 '18 at 15:26
  • Now I can confirm: 1: all object points are 3D, but I sometimes find a reference to the stored datatype when I print them, so in front of the first element is 'array(' and after the last element 'dtype=float32)'. Since the array is created with a numpy command and I'm not that familiar with it, I don't know if that is normal. 2./3.: despite the exception mentioned, all arrays have the same length and there is the same amount of points at each index. – Chris Sep 09 '18 at 15:55
  • Can you show how your objPoints and objp variables are defined? – unlut Sep 09 '18 at 16:01
  • Your object points variable objp must be a multidimensional array also, look at my edit. – unlut Sep 09 '18 at 16:57
  • If NUM_IMAGES stands for number of images, I think you solved it for me. The code is able to run without any exceptions now. Thank you. – Chris Sep 09 '18 at 19:27
  • It is probably imgpoints0.shape[0] for you, which is number of views you have. – unlut Sep 09 '18 at 19:28