I want to calibrate a fisheye camera using https://docs.opencv.org/3.4/db/d58/group__calib3d__fisheye.html.

This is done by capturing a grid from different perspectives and passing the grid information to the calibration function.
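
With a chessboard target, the per-view capture step boils down to detecting and refining the inner corners. A minimal sketch of that step (the function name, the BGR input, and the window sizes are my choices, not from any of the sources below):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Detect the inner chessboard corners in one captured view.
    // boardSize counts inner corners, e.g. cv::Size(9, 6) for a board
    // with 10 x 7 squares.
    bool detectCorners(const cv::Mat& bgr, cv::Size boardSize,
                       std::vector<cv::Point2f>& corners)
    {
        if (!cv::findChessboardCorners(bgr, boardSize, corners))
            return false;
        cv::Mat gray;
        cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
        // Sub-pixel refinement; fisheye calibration is sensitive to
        // corner accuracy.
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                          30, 0.01));
        return true;
    }

Each successful view contributes one inner vector to imagePoints; the matching entry in objectPoints is what the rest of this question is about.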

Most resources I have found, such as

RaspiCam fisheye calibration with OpenCV

https://medium.com/@kennethjiang/calibrate-fisheye-lens-using-opencv-333b05afa0b0

use a grid whose points have integer coordinates: (0,0,0), (1,0,0), (2,0,0), ..., (6,5,0). However, the documentation states that objectPoints is a

vector of vectors of calibration pattern points in the calibration pattern coordinate space

The documentation for the usual calibration https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html contains a bit more information:

In the new interface it is a vector of vectors of calibration pattern points in the calibration pattern coordinate space (e.g. std::vector<std::vector<cv::Vec3f>>). The outer vector contains as many elements as the number of pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. Although, it is possible to use partially occluded patterns or even different patterns in different views. Then, the vectors will be different. Although the points are 3D, they all lie in the calibration pattern's XY coordinate plane (thus 0 in the Z-coordinate), if the used calibration pattern is a planar rig. In the old interface all the vectors of object points from different views are concatenated together.

If I understood correctly, the grid is supposed to look like this: (0,0,0), (1*gridSize,0,0), (2*gridSize,0,0), ..., (6*gridSize,5*gridSize,0), where gridSize is the real-world width (= height) of a single cell in the grid.

The official OpenCV sample https://github.com/opencv/opencv/blob/4.x/samples/cpp/stereo_calib.cpp does exactly that:


    for( i = 0; i < nimages; i++ )
    {
        for( j = 0; j < boardSize.height; j++ )
            for( k = 0; k < boardSize.width; k++ )
                objectPoints[i].push_back(Point3f(k*squareSize, j*squareSize, 0));
    }
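
For the fisheye module the call would then look roughly like this (a sketch, assuming objectPoints, imagePoints and imageSize were filled as above; the flag choice follows common usage, not any official requirement):

    #include <opencv2/calib3d.hpp>  // cv::fisheye
    #include <vector>

    cv::Matx33d K;                        // intrinsic matrix (output)
    cv::Vec4d D;                          // fisheye distortion coefficients (output)
    std::vector<cv::Vec3d> rvecs, tvecs;  // per-view extrinsics (output)

    double rms = cv::fisheye::calibrate(
        objectPoints, imagePoints, imageSize, K, D, rvecs, tvecs,
        cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC | cv::fisheye::CALIB_FIX_SKEW,
        cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                         100, 1e-6));

Note that squareSize never appears in the call itself; it only enters through the object point coordinates.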

My question: is this a mistake that has been copied again and again? Is my assumption correct that the real-world grid size has an impact on the calibration?

  • Grid spacing has *no impact* on **intrinsic** calibration, as long as it's only a scale factor. It's only relevant for extrinsic calibration. Your confusion stems from not understanding the difference between intrinsic calibration and extrinsic (stereo) calibration. Is that the amount of answer you're looking for? – Christoph Rackwitz Aug 12 '22 at 19:22
  • It is a useful comment. That means only the Raspberry tutorial is wrong, because the distance between the cameras cannot be calibrated without a valid grid size. If you could back up your explanation with a link, I will accept it. – Mehno Aug 13 '22 at 14:02
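
The first comment's claim is easy to test: calibrate the same detected corners twice with different square sizes and compare the outputs. A sketch, assuming imagePoints, imageSize and boardSize exist from the steps above:

    #include <opencv2/calib3d.hpp>
    #include <iostream>
    #include <vector>

    // If the comment is right, K and D come out identical for both runs,
    // while every tvec from the second run is 25x longer.
    for (double squareSize : {1.0, 25.0})
    {
        std::vector<std::vector<cv::Point3f>> objectPoints(imagePoints.size());
        for (size_t i = 0; i < imagePoints.size(); i++)
            for (int j = 0; j < boardSize.height; j++)
                for (int k = 0; k < boardSize.width; k++)
                    objectPoints[i].push_back(
                        cv::Point3f((float)(k * squareSize), (float)(j * squareSize), 0.f));

        cv::Matx33d K;
        cv::Vec4d D;
        std::vector<cv::Vec3d> rvecs, tvecs;
        cv::fisheye::calibrate(objectPoints, imagePoints, imageSize,
                               K, D, rvecs, tvecs);
        std::cout << "squareSize=" << squareSize << "\nK=\n" << K
                  << "\n|tvec[0]|=" << cv::norm(tvecs[0]) << "\n";
    }

K and D should match up to numerical noise, which is exactly the intrinsic/extrinsic distinction the comment describes.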
