
In the picture below, I have the 2D locations of the green points, and I want to calculate the locations of the red points or, as an intermediate step, the locations of the blue points. All in 2D.

checkerboard with autodetected green points and hand-painted blue and red points

Of course, I do not want to find those locations only for the picture above. In the end, I want an automated algorithm that takes a set of checkerboard corner points and calculates the outer corners.

I need the resulting coordinates to be as accurate as possible, so I think I need a solution that does not only take the outer green points into account, but also uses all the other green points' locations to calculate a best fit for the outer corners (red or blue).

If OpenCV can do this, please point me in that direction.

Daniel S.
  • See [findChessboardCorners](http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findchessboardcorners) and this [tutorial](http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html). – beaker Oct 19 '14 at 17:47
  • @beaker, mh..I'm not sure how this is supposed to help me. findChessboardCorners searches for quads - that is, the chessboard is required to be completely empty - rather than searching corners. How to use that tutorial for my purpose? Any part of it being especially important for me? – Daniel S. Oct 19 '14 at 18:47
  • Sorry, I hadn't read that you *already* had the green points detected. Have you tried `solvePnP/solvePnPRansac` to find the transform between the points on an idealized chessboard and the found points in your image? Then you could apply the transform to idealized corner points to find out where they should be in the 2D image. – beaker Oct 19 '14 at 19:37

2 Answers


In general, if all you have is the detection of some, but not all, the inner corners, the problem cannot be solved. This is because the configuration is invariant to translation - shifting the physical checkerboard by a whole number of squares would produce the same detected corner positions in the image, but generated by different physical corners.

Further, the configuration is also invariant to rotations by 180 deg in the checkerboard plane and, unless you are careful to distinguish the colors of the squares adjacent to each corner, to rotations by 90 deg and reflections with respect to the center and the midlines.

This means that, in addition to detecting the corners, you need to extract from the image some features of the physical checkerboard that can be used to break the above invariance. The simplest break is to detect all 9 corners of one row and one column, or at least their end corners. These can be used directly to rectify the image by imposing the condition that their lines meet at a 90 deg angle. However, this may turn out to be impossible due to occlusions or detector failure, and more sophisticated methods may be necessary.

For example, you can try to directly detect the chessboard edges, i.e. the fat black lines at the boundary. One way to do that would be to detect the letters and numbers nearby, and use those locations to constrain a line detector to nearby areas.

By the way, if the photo you posted is just a red herring, and you are interested in detecting general checkerboard-like patterns, and can control the kind of pattern, there are way more robust methods of doing it. My personal favorite is the "known 2D crossratios" pattern of Matsunaga and Kanatani.

Francesco Callari
  • I don't understand the first paragraph: If you move the board, then the detected corners will also move with the board. And that's what I expect and what I want. – Daniel S. Oct 27 '14 at 15:55
  • You say "more sophisticated methods may be necessary." -- these sophisticated methods are what I'm asking for. You can see on the picture in my question that I'm aware of the problems like occlusion and that these are the problems I'm trying to solve. – Daniel S. Oct 27 '14 at 19:55
  • > If you move the board, then... I said "shifting...by whole squares". You want and expect to _identify_ the corners you detected, so you can tell where the board boundary is. And I said that this is impossible if all you have is a few inner corners. This is because in that case there may be _multiple_ poses of the physical board that may produce the _same_ detected corners. – Francesco Callari Oct 27 '14 at 22:19
  • Ah, now I understand. No it's not important for me if a detected corner is, for example, the junction of H3 and G4. Or put another way, I'm aware that from the set of green points in the picture, I can't tell if the board is rotated 90deg or not. My problem is, starting from the set of green points, to find the set of blue points or red points, no matter in what order. – Daniel S. Oct 28 '14 at 14:55
  • Yes, I understand your problem - and am telling you that what you think is not important is actually essential :-) You cannot solve it using only detected _inner_ corners (i.e. your _green_ ones), unless you break the translation invariance somehow, even if you are willing to ignore rotations and reflections. To convince yourself that this is the case, try marking a few "inside" green corners in the image, and then find the chessboard translations that are compatible with them. – Francesco Callari Oct 28 '14 at 16:51
  • Ok, understood and I agree that on every side of the chess board, at least one point needs to be detected so it's possible to tell the dimension of the chess board. But well, of course, in general, there is some minimum number of points which need to be detected. E.g. with 2 points, of course it's gonna fail. And it will in certain situations in the final application. – Daniel S. Oct 28 '14 at 18:43

I solved it robustly, but not accurately, with the following solution:

  • Find lines with at least 3 green points closely matching the line. (thin red lines in pic)
  • Keep bounding lines: From these lines, keep those with points only to one side of the line or very close to the line.
  • Filter bounding lines: From the bounding lines, take the 4 best ones/those with most points on them. (bold white lines in pic)
  • Calculate the intersections of the 4 remaining bounding lines (none of the lines are perfectly parallel, so this results in 6 intersections, of which we want only 4).
  • From the intersections, remove the one farthest from the average position of the intersections until only 4 of them are left.
  • That's the 4 blue points.

checkerboard with autodetected points, lines and outer corners
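The last three steps above (intersect the 4 bounding lines, then drop the intersections farthest from the centroid until 4 remain) can be sketched in plain C++ without OpenCV. This is only an illustrative sketch of those steps; the names `Line`, `Pt`, and `outerCorners` are made up here, and the line fitting itself is assumed to have already happened:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A line in implicit form a*x + b*y = c.
struct Line { double a, b, c; };
struct Pt   { double x, y; };

// Intersect two lines; returns false if they are (nearly) parallel.
static bool intersect(const Line& l1, const Line& l2, Pt& out) {
    double det = l1.a * l2.b - l2.a * l1.b;
    if (std::fabs(det) < 1e-12) return false;
    out.x = (l1.c * l2.b - l2.c * l1.b) / det;
    out.y = (l1.a * l2.c - l2.a * l1.c) / det;
    return true;
}

// Collect the pairwise intersections of the 4 bounding lines (up to 6),
// then repeatedly drop the point farthest from the current centroid
// until only the 4 corner candidates remain.
static std::vector<Pt> outerCorners(const std::vector<Line>& lines) {
    std::vector<Pt> pts;
    for (size_t i = 0; i < lines.size(); i++)
        for (size_t j = i + 1; j < lines.size(); j++) {
            Pt p;
            if (intersect(lines[i], lines[j], p)) pts.push_back(p);
        }
    while (pts.size() > 4) {
        double cx = 0, cy = 0;
        for (const Pt& p : pts) { cx += p.x; cy += p.y; }
        cx /= pts.size(); cy /= pts.size();
        size_t worst = 0; double worstD = -1;
        for (size_t i = 0; i < pts.size(); i++) {
            double d = std::hypot(pts[i].x - cx, pts[i].y - cy);
            if (d > worstD) { worstD = d; worst = i; }
        }
        pts.erase(pts.begin() + worst);
    }
    return pts;
}
```

Note that exactly parallel line pairs contribute no intersection at all, so the pruning loop only runs when the bounding lines are skewed, which is the usual case in a perspective image.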

You can then feed these 4 points into OpenCV's getPerspectiveTransform function to find a perspective transform (aka a homography):

std::vector<Point2f> detectedCorners = CheckDet::getOuterCheckerboardCorners(srcImg);
CV_Assert(detectedCorners.size() >= 4);
std::vector<Point2f> srcPoints(detectedCorners.begin(), detectedCorners.begin() + 4);

int dstImgSize = 400;
std::vector<Point2f> dstPoints = {
    Point2f(dstImgSize * 1/8, dstImgSize * 1/8),
    Point2f(dstImgSize * 7/8, dstImgSize * 1/8),
    Point2f(dstImgSize * 7/8, dstImgSize * 7/8),
    Point2f(dstImgSize * 1/8, dstImgSize * 7/8)
};

Mat m = getPerspectiveTransform(srcPoints, dstPoints);

For our example image, the input and output of getPerspectiveTransform look like this:

input
    (349.1, 383.9) -> ( 50.0,  50.0)
    (588.9, 243.3) -> (350.0,  50.0)
    (787.9, 404.4) -> (350.0, 350.0)
    (506.0, 593.1) -> ( 50.0, 350.0)
output
    (      1.6     -1.1    -43.8 )
    (      1.4      2.4  -1323.8 )
    (      0.0      0.0      1.0 )
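To see what that output matrix means: a homography maps each source point via the standard projective formula, dividing by the third homogeneous coordinate. (The matrix printed above is rounded, so plugging the source points into it will not exactly reproduce the destination points.) A minimal sketch in plain C++, where `applyHomography` is just a made-up helper name:

```cpp
#include <cassert>
#include <cmath>

// Apply a 3x3 homography H (row-major) to a 2D point:
//   w  = H20*x + H21*y + H22
//   x' = (H00*x + H01*y + H02) / w
//   y' = (H10*x + H11*y + H12) / w
static void applyHomography(const double H[9], double x, double y,
                            double& xOut, double& yOut) {
    double w = H[6] * x + H[7] * y + H[8];
    xOut = (H[0] * x + H[1] * y + H[2]) / w;
    yOut = (H[3] * x + H[4] * y + H[5]) / w;
}
```

For an affine special case (last row 0 0 1) the division by w is a no-op; for a real perspective transform it is what makes parallel board edges converge in the image.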

You can then transform the image's perspective to board coordinates:

Mat plainBoardImg;
warpPerspective(srcImg, plainBoardImg, m, Size(dstImgSize, dstImgSize));

Results in the following image:

plainBoardImg

For my project, the red points that you can see on the board in the question are not needed anymore, but I'm sure they can be calculated easily from the homography by inverting it and then using the inverse to back-transform the points (0, 0), (0, dstImgSize), (dstImgSize, dstImgSize), and (dstImgSize, 0).
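In OpenCV that back-transformation should be just `perspectiveTransform` on the corner points with `m.inv()`. Spelled out in plain C++ (with made-up helper names `invert3x3` and `backTransform`), the invert-and-apply step looks like this:

```cpp
#include <cassert>
#include <cmath>

// Invert a 3x3 matrix (row-major) via the adjugate; returns false if singular.
static bool invert3x3(const double H[9], double out[9]) {
    double det = H[0] * (H[4]*H[8] - H[5]*H[7])
               - H[1] * (H[3]*H[8] - H[5]*H[6])
               + H[2] * (H[3]*H[7] - H[4]*H[6]);
    if (std::fabs(det) < 1e-12) return false;
    out[0] =  (H[4]*H[8] - H[5]*H[7]) / det;
    out[1] = -(H[1]*H[8] - H[2]*H[7]) / det;
    out[2] =  (H[1]*H[5] - H[2]*H[4]) / det;
    out[3] = -(H[3]*H[8] - H[5]*H[6]) / det;
    out[4] =  (H[0]*H[8] - H[2]*H[6]) / det;
    out[5] = -(H[0]*H[5] - H[2]*H[3]) / det;
    out[6] =  (H[3]*H[7] - H[4]*H[6]) / det;
    out[7] = -(H[0]*H[7] - H[1]*H[6]) / det;
    out[8] =  (H[0]*H[4] - H[1]*H[3]) / det;
    return true;
}

// Map a board-coordinate point back into the image using the inverse homography.
static void backTransform(const double Hinv[9], double x, double y,
                          double& xOut, double& yOut) {
    double w = Hinv[6]*x + Hinv[7]*y + Hinv[8];
    xOut = (Hinv[0]*x + Hinv[1]*y + Hinv[2]) / w;
    yOut = (Hinv[3]*x + Hinv[4]*y + Hinv[5]) / w;
}
```

Feeding (0, 0), (0, dstImgSize), (dstImgSize, dstImgSize), and (dstImgSize, 0) through `backTransform` with the inverted matrix should give the red points in image coordinates.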

The algorithm works surprisingly reliably; however, it does not use all the available information, because it only uses the outer points (those connected by the white lines). It does not use any data from the inner points for additional accuracy. I would still like to find an even better solution that uses the data of the inner points.

Daniel S.