
I am trying to triangulate points from a projector and a camera using structured light in OpenCV Python. In this process I have a list of tuples that match one to one between the camera and the projector. I am passing this to cv2.undistortPoints() as below:

camera_normalizedPoints = cv2.undistortPoints(camera_points, camera_K, camera_d)

However, Python is throwing the following error, and I am unable to understand what it means.

camera_normalizedPoints = cv2.undistortPoints(camera_points, camera_K, camera_d)
cv2.error: /home/base/opencv_build/opencv/modules/imgproc/src/undistort.cpp:312: error: (-215) CV_IS_MAT(_src) && CV_IS_MAT(_dst) && (_src->rows == 1 || _src->cols == 1) && (_dst->rows == 1 || _dst->cols == 1) && _src->cols + _src->rows - 1 == _dst->rows + _dst->cols - 1 && (CV_MAT_TYPE(_src->type) == CV_32FC2 || CV_MAT_TYPE(_src->type) == CV_64FC2) && (CV_MAT_TYPE(_dst->type) == CV_32FC2 || CV_MAT_TYPE(_dst->type) == CV_64FC2) in function cvUndistortPoints

Any help is greatly appreciated.

Thanks.

Shubs
    You should have included exactly what the points look like that you're passing in. OpenCV in Python typically wants points in a *two-channel* array, and I believe you're passing them as a single-channel array. Instead of points as a list of lists like `[[x1, y1], [x2, y2], ...]`, they should be one level deeper, like `[[[x1, y1]], [[x2, y2]], ...]`. Also make sure the points are 32-bit or 64-bit floats, so in total the points arrays should look like `np.array([[[x1, y1]], [[x2, y2]], ...], dtype=np.float32)`. If that solves it I'll write it up as an answer. – alkasm Nov 20 '17 at 23:36
  • @AlexanderReynolds - Yes, I am passing the points as `[[x1, y1], [x2, y2], ...]`. As you suggested, I am now trying to add another dimension by the command `camera_points = np.array([camera_points], dtype=np.float32)`, but instead of `[[[x1, y1]], [[x2, y2]], ...]` I am getting `[[[x1, y1], [x2, y2], ...]]`. Can you please guide me how to do it correctly so I can check? – Shubs Nov 21 '17 at 00:00
  • You should just be able to transpose those points by flipping the axes. `points = points.transpose(1,0,2)` should do the trick (this flips the 0 and 1 axes). – alkasm Nov 21 '17 at 00:05
  • @AlexanderReynolds - Yes, that worked. Thank you! – Shubs Nov 21 '17 at 00:12
  • Great, I've added it as an answer. – alkasm Nov 21 '17 at 00:17

2 Answers


The documentation is not always explicit about the input shape in Python unfortunately, and undistortPoints() doesn't even have Python documentation yet.

The input points need to be an array with the shape (n_points, 1, n_dimensions). So 2D coordinates should have the shape (n_points, 1, 2), and 3D coordinates the shape (n_points, 1, 3). This is true for most OpenCV functions. AFAIK this format will work for all of them, whereas only a few also accept points in the shape (n_points, n_dimensions). I find it best to keep everything consistent and in the format (n_points, 1, n_dimensions).

To be clear this means an array of four 32-bit float 2D points would look like:

points = np.array([[[x1, y1]], [[x2, y2]], [[x3, y3]], [[x4, y4]]], dtype=np.float32)
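As a quick NumPy-only sanity check (the coordinate values here are arbitrary placeholders), an array built this way has exactly the shape and dtype the assertion in the error message demands:

```python
import numpy as np

# Four 2D points; each point is wrapped in its own inner list,
# so the array has shape (n_points, 1, 2) and two-channel layout.
points = np.array([[[0.0, 0.0]],
                   [[640.0, 0.0]],
                   [[0.0, 480.0]],
                   [[640.0, 480.0]]], dtype=np.float32)

print(points.shape)  # (4, 1, 2)
print(points.dtype)  # float32
```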

If you have an array that has the shape (n_points, n_dimensions) you can expand it with np.newaxis:

>>> points = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
>>> points.shape
(4, 2)
>>> points = points[:, np.newaxis, :]
>>> points.shape
(4, 1, 2)

or with np.expand_dims():

>>> points = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
>>> points.shape
(4, 2)
>>> points = np.expand_dims(points, 1)
>>> points.shape
(4, 1, 2)

or with np.transpose() (with the axis ordering depending on the order of your dimensions). For example, if your shape is (1, n_points, n_dimensions), you want to swap axis 0 with axis 1 to get (n_points, 1, n_dimensions). Here points = np.transpose(points, (1, 0, 2)) puts axis 1 first, then axis 0, then axis 2, giving the correct shape.
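To sketch that last case (assuming points that arrived with shape (1, n_points, 2), as happened in the comments above):

```python
import numpy as np

# Points that ended up with shape (1, n_points, 2),
# e.g. from wrapping a list of points in one extra list.
points = np.array([[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]], dtype=np.float32)
print(points.shape)  # (1, 3, 2)

# Swap axes 0 and 1 to get the (n_points, 1, 2) layout OpenCV wants.
points = points.transpose(1, 0, 2)
print(points.shape)  # (3, 1, 2)
```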


If this seems like an odd format for points, it is if you consider only a list of points, but it is reasonable if you think of points as coordinates in an image. In an image, the position of each point is defined by an (x, y) pair, like:

(0, 0)    (1, 0)    (2, 0)    ...
(0, 1)    (1, 1)    (2, 1)    ...
(0, 2)    (1, 2)    (2, 2)    ...
...

Here it makes sense to put each coordinate into a separate channel of a two-channel array, so that you get one 2D array of x-coordinates, and one 2D array of y-coordinates, like:

Channel 0 (x-coordinates):

0    1    2    ...
0    1    2    ...
0    1    2    ...
...

Channel 1 (y-coordinates):

0    0    0    ...
1    1    1    ...
2    2    2    ...
...

So that's the reason for having each coordinate on a separate channel.
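To illustrate (pure NumPy, with an arbitrary small image size), np.meshgrid builds exactly these two coordinate channels, and stacking them gives a two-channel coordinate array:

```python
import numpy as np

h, w = 3, 4  # arbitrary small image size
xs, ys = np.meshgrid(np.arange(w), np.arange(h))

print(xs)  # channel 0: every row is  0 1 2 3  (x-coordinates)
print(ys)  # channel 1: every column is 0 1 2  (y-coordinates)

# Stack into an (h, w, 2) array so that coords[y, x] == [x, y]
coords = np.dstack([xs, ys])
print(coords[1, 2])  # [2 1]
```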


Some other OpenCV functions which require this format include cv2.transform() and cv2.perspectiveTransform(), which I've answered identical questions about before, here and here respectively.

alkasm

I also ran into this problem, and it took me some time of research to finally understand it.

In the OpenCV pipeline, the distortion is applied before the camera matrix, so the processing order is: distorted image -> camera matrix -> undistort function -> camera matrix -> undistorted image.

So you need a small fix: pass camera_K again as the new camera matrix (with the identity as the rectification matrix R).

camera_normalizedPoints = cv2.undistortPoints(camera_points, camera_K, camera_d, np.eye(3), camera_K)

Formula: https://i.stack.imgur.com/nmR5P.jpg

B.Blue