
I'm currently using the OpenCV library with C++, and my goal is to cancel a fisheye effect on an image ("make it planar"). I'm using the function "undistortImage" to cancel the effect, but first I need to perform camera calibration in order to find the parameters K, Knew, and D. I didn't exactly understand the documentation (link: http://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gga37375a2741e88052ce346884dfc9c6a0a0899eaa2f96d6eed9927c4b4f4464e05). From my understanding, I should give two lists of points, and the function "calibrate" is supposed to return the arrays I need. So my question is the following: given a fisheye image, how am I supposed to pick the two lists of points to get the result? This is my code for the moment, very basic: it just takes the picture, displays it, performs the undistortion and displays the new image. The elements in the matrices are random, so currently the result is not as expected. Thanks for the answers.

#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include <stdio.h>
#include <iostream>


using namespace std;
using namespace cv;

int main(){

    cout << " Usage: display_image ImageToLoadAndDisplay" << endl;
    Mat image;
    image = imread("C:/Users/Administrator/Downloads/eiffel.jpg", IMREAD_COLOR);   // Read the file
    if (!image.data)                              // Check for invalid input
    {
        cout << "Could not open or find the image" << endl;
        return -1;
    }
    cout << "Input image depth: " << image.depth() << endl;

    namedWindow("Display window", WINDOW_AUTOSIZE);// Create a window for display.
    imshow("Display window", image);                   // Show our image inside it.

    Mat Ka = Mat::eye(3, 3, CV_64F); // Camera matrix K (placeholder values)
    Mat Da = Mat::ones(1, 4, CV_64F); // Fisheye distortion coefficients D (placeholder values)
    Mat dstImage(image.rows, image.cols, image.type());

    cout << "K matrix depth: " << Ka.depth() << endl;
    cout << "D matrix depth: " << Da.depth() << endl;

    Mat Knew = Mat::eye(3, 3, CV_64F);
    std::vector<cv::Vec3d> rvec;
    std::vector<cv::Vec3d> tvec;
    int flag = 0; 
    std::vector<Point3d> objectPoints1 = { Point3d(0,0,0),  Point3d(1,1,0),  Point3d(2,2,0), Point3d(3,3,0), Point3d(4,4,0), Point3d(5,5,0), 
        Point3d(6,6,0),  Point3d(7,7,0),  Point3d(3,0,0), Point3d(4,1,0), Point3d(5,2,0), Point3d(6,3,0), Point3d(7,4,0),  Point3d(8,5,0),  Point3d(5,4,0), Point3d(0,7,0), Point3d(9,7,0), Point3d(9,0,0), Point3d(4,3,0), Point3d(7,2,0)};
    std::vector<Point2d> imagePoints1 = { Point(107,84),  Point(110,90),  Point(116,96), Point(126,107), Point(142,123), Point(168,147),
        Point(202,173),  Point(232,192),  Point(135,69), Point(148,73), Point(165,81), Point(189,93), Point(219,112),  Point(248,133),  Point(166,119), Point(96,183), Point(270,174), Point(226,56), Point(144,102), Point(206,75) };

    std::vector<std::vector<cv::Point2d> > imagePoints(1);
    imagePoints[0] = imagePoints1;
    std::vector<std::vector<cv::Point3d> > objectPoints(1);
    objectPoints[0] = objectPoints1;
    fisheye::calibrate(objectPoints, imagePoints, image.size(), Ka, Da, rvec, tvec, flag); // Calibration
    cout << Ka<< endl;
    cout << Da << endl;
    fisheye::undistortImage(image, dstImage, Ka, Da, Knew); // Performing undistortion
    namedWindow("Display window 2", WINDOW_AUTOSIZE);// Create a window for display.
    imshow("Display window 2", dstImage);                   // Show our image inside it.

    waitKey(0);                                          // Wait for a keystroke in the window
    return 0;
}
Armand Chocron
  • probably you must know the real-world positions of the points. Typically you provide images of a test-pattern, where some features can be easily extracted (e.g. dot pattern or a chessboard pattern) and where you know the relative positions of the pattern elements. – Micka Feb 10 '16 at 14:54
  • Thanks for the answer, but don't you think there is a geometric way to get these points by observation of the image only ? Or given the parameters of the lens can't I obtain them ? – Armand Chocron Feb 10 '16 at 15:12
  • afaik there is no such way. There are however some non-standard approaches that use different kinds of known objects. For example in https://hal.inria.fr/inria-00267247/document you just have to know that lines are straight in reality (afair). But you would probably have to implement that correction process yourself; the most common approach is to create a test pattern (or known object) and use that for calibration. On SO there was a question about whether to use a coca cola can as a calibration pattern, but no idea whether this was solved then. – Micka Feb 10 '16 at 15:17
  • or do you mean to extract the testpattern (chessboard corner) points from the image? Yes, that's possible. – Micka Feb 10 '16 at 15:20
  • The answer below is very clear, thank you for all the help !! – Armand Chocron Feb 10 '16 at 15:29

1 Answer


For calibration with cv::fisheye::calibrate you must provide

objectPoints    vector of vectors of calibration pattern points in the calibration pattern coordinate space. 

This means you must provide the KNOWN real-world coordinates of the points (they must correspond to the points in imagePoints), but you can choose the position of the coordinate system arbitrarily (as long as it is Cartesian), so you must know your object - e.g. a planar test pattern.

imagePoints vector of vectors of the projections of calibration pattern points

These must be the same points as in objectPoints, but given in image coordinates, i.e. where the projections of the object points hit your image (read/extract the coordinates from your image).

For example, if your camera captured this image (taken from here):

image of a testpattern, captured by a fisheye camera

you must know the dimensions of your testpattern (up to a scale factor). For example, you could choose the top-left corner of the top-left square to be position (0,0,0), the top-right corner of the top-left square to be (1,0,0), and the bottom-left corner of the top-left square to be (0,1,0), so your whole testpattern would be placed on the xy-plane.

Then you could extract these correspondences:

pixel        real-world
(144,103)    (4,3,0)
(206,75)     (7,2,0)
(109,151)    (2,5,0)
(253,159)    (8,6,0)

for these points (marked red):

the testpattern image with the four points marked in red

The pixel positions could be your imagePoints list, while the real-world positions could be your objectPoints list.

Does this answer your question?

Micka
  • Wow, this is really clear! Thank you! So, if I understand it correctly, the actual length of the square side does not matter (cf. the length of the vector in the real world), everything is relative. And also, do you have an idea of the number of points I should take? – Armand Chocron Feb 10 '16 at 15:24
  • the more points the better, and the more distributed over the image the better. I don't know how many points are needed as a minimum, sorry. The actual size of the squares only matters if you need or provide the correct camera intrinsic parameters (incl. scale); afaik it is not needed for the distortion correction. However I must tell you that I haven't used the fisheye::calibrate function myself yet, so all this is just from my non-fisheye calibration experience and might be erroneous, so if it doesn't work in your case don't rely too much on my example :) – Micka Feb 10 '16 at 15:29
  • I used the function as you said, but I'm getting some weird values in the K matrix. I printed the values in the console and this is what I got: [-1.#IND, -1.#IND, -1.#IND; 0, -1.#IND, -1.#IND; 0, 0, 1]. Do you have an idea of the problem? – Armand Chocron Feb 14 '16 at 13:10
  • can you post the images and testpattern sizes you used for calibration? – Micka Feb 14 '16 at 13:12
  • I used the image posted above. Do I need several images to perform the calibration? – Armand Chocron Feb 14 '16 at 13:21
  • you took the image that I've posted? For your camera you need your own images (taken by that camera). But you could use my image for testing... but maybe you need more points. I'll try later – Micka Feb 14 '16 at 13:23
  • I updated the code above. The source image is the image of the chessboard that you uploaded here. – Armand Chocron Feb 14 '16 at 13:23
  • I just want to check for now whether the functions work correctly on a general image, so the image you uploaded is enough for me. I used 20 points; I can try with more points and see. – Armand Chocron Feb 14 '16 at 13:25
  • sorry, no idea... maybe http://stackoverflow.com/questions/26794937/fisheye-lens-calibration-with-opencv-3-0 helps – Micka Feb 14 '16 at 17:30