I have been trying to calculate the distance between an object and one of my two cameras (both the same model). After calibration, I successfully got the following matrices. The resolution was 600 x 480.

Left Camera Matrix
[[624.65871068   0.         279.196958  ]
 [  0.         637.37379116 267.35689643]
 [  0.           0.           1.        ]]
Left Distortion Coefficients
[[ 0.07129149 -0.32551738  0.00165004  0.00582742  0.55830776]]
Right Camera Matrix
[[628.54997755   0.         278.88536288]
 [  0.         638.88299319 262.29519192]
 [  0.           0.           1.        ]]
Right Distortion Coefficients
[[ 0.05281363 -0.20836547  0.0015596   0.00694854 -0.18818856]]
Calibrating cameras together...
Rotation Matrix
[[ 9.78166692e-01 -2.92706245e-02 -2.05750220e-01]
 [ 2.87961989e-02  9.99571250e-01 -5.30056367e-03]
 [ 2.05817156e-01 -7.39989429e-04  9.78590185e-01]]
Translation Vector
[[6.87024126]
 [0.33698621]
 [0.82946341]]

Suppose I can detect an object in both cameras, say at pixel (a, b) in the left camera and (c, d) in the right camera. Is there any way I can get the distance from one camera to the object?

Plus, the script I have uses cv2.stereoRectify and cv2.initUndistortRectifyMap, which can be used to get rectified frames and then to compute a depth map with cv2.StereoBM_create(). Well, to be honest, I am not sure whether this can be used to calculate the distance.
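
For reference, here is a minimal sketch of that part of the script. The names K1, D1, K2, D2, R, T are placeholders for the matrices printed above, and left_gray / right_gray are the input grayscale frames:

```python
import cv2
import numpy as np

image_size = (600, 480)  # (width, height) used during calibration

# Rectification transforms computed from the stereo calibration results
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)

# Per-camera undistort + rectify lookup maps
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)

left_rect = cv2.remap(left_gray, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_gray, map2x, map2y, cv2.INTER_LINEAR)

# Block-matching disparity; numDisparities must be a multiple of 16, blockSize odd
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left_rect, right_rect).astype(np.float32) / 16.0
```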

Thanks!

SSS

1 Answer

To compute the distance from the camera to the object, you need to know two parameters: (1) the baseline (the distance between the two cameras, which comes from your translation vector); (2) the focal length (from your camera matrix, normally the "LEFT" one).

Usually we use the "LEFT" image as the main reference image, because most of the time the depth map / disparity image is computed based on the left image.

After you have the coordinates of the object (x, y) on the left image, you can invert the formula and compute the Z-distance as follows:

Z = B*f / (x - x')

where B is the baseline, f is the focal length, and x and x' are the object's x-coordinates in the left and right images (x - x' is the disparity).

Reference: OpenCV - Depth Map from Stereo Images
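
As a concrete illustration of the formula (not part of the original answer): assuming the images are rectified so the object sits on the same row in both views, f can be taken as fx from the left camera matrix and B as the length of the translation vector, in whatever units the calibration target used. The pixel columns a and c below are hypothetical:

```python
import numpy as np

f = 624.65871068  # fx from the left camera matrix, in pixels
B = np.linalg.norm([6.87024126, 0.33698621, 0.82946341])  # baseline from the translation vector

a, c = 350.0, 290.0   # hypothetical object columns in the left / right images
Z = f * B / (a - c)   # Z = B*f / (x - x'), in the calibration target's units
print(Z)
```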

Howard GENG
  • Thanks! What if the y values are slightly different (b != d)? How would that be calculated as well? – SSS Aug 22 '18 at 02:52
  • That could be because the stereo cameras are not perfectly aligned when installed. You should (1) do calibration and compute the camera matrices; (2) rectify the images; (3) generate a disparity map from the stereo image pair; (4) locate the coordinate (x, y) on the left image; (5) compute the distance (see the sketch after these comments for steps 3–5). Because the world and your calibration are not perfect, you will always have small errors in your matrices as well. So a small gap of 3~5 pixels is usually a reasonable tolerance, I guess. – Howard GENG Aug 22 '18 at 04:09
  • @Howard GENG Thanks!! If it is okay, could you share the detailed calculation for getting the depth map, assuming that I have the rectified images? I can't quite formulate the equation from Bf/Z = x - x' – SSS Aug 22 '18 at 15:14
  • You could look at the reference in my answer. There is an example of how to compute a disparity map from a pair of rectified images in Python. – Howard GENG Aug 22 '18 at 15:59
  • Another reference: [Depth from Stereo](https://github.com/IntelRealSense/librealsense/blob/v2.15.0/doc/depth-from-stereo.md). – Catree Aug 22 '18 at 16:10
  • Thank you, @Catree – SSS Aug 22 '18 at 21:19
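
Tying steps (3)–(5) from the comments together, one option (an assumption on my part, not something stated in the answer) is to use the Q matrix that cv2.stereoRectify already returns and let cv2.reprojectImageTo3D do the disparity-to-depth conversion:

```python
import cv2

# disparity: float32 disparity map in pixels from step (3), e.g. StereoBM output / 16.0
# Q: the 4x4 disparity-to-depth matrix returned by cv2.stereoRectify
points_3d = cv2.reprojectImageTo3D(disparity, Q)

x, y = 350, 240            # hypothetical object pixel in the left image, step (4)
X, Y, Z = points_3d[y, x]  # step (5): Z is the depth in calibration-target units
```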