I have a camera with known intrinsic parameters but unknown extrinsic parameters. To obtain an extrinsic calibration, I positioned a chessboard pattern with known world coordinates in the camera's field of view (the correspondence gathering is sketched below). I am comparing two OpenCV methods from the Calib3d module.
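For context, the 2D-3D correspondences come from chessboard corner detection. A minimal sketch of how they might be assembled, assuming the board's own frame acts as the world frame (boardSize, squareSize, and image are placeholders for my setup):

import org.opencv.calib3d.Calib3d
import org.opencv.core.*
import org.opencv.imgproc.Imgproc

val boardSize = Size(9.0, 6.0)  // inner corners per row and column (placeholder)
val squareSize = 25.0           // square edge length in world units (placeholder)

val gray = Mat()
Imgproc.cvtColor(image, gray, Imgproc.COLOR_BGR2GRAY)  // image: input frame (placeholder)

val imageCorners = MatOfPoint2f()
val found = Calib3d.findChessboardCorners(gray, boardSize, imageCorners)

// The matching 3D points: one per inner corner, on the board's Z = 0 plane.
val boardCorners = MatOfPoint3f()
val pts = ArrayList<Point3>()
for (row in 0 until boardSize.height.toInt())
    for (col in 0 until boardSize.width.toInt())
        pts.add(Point3(col * squareSize, row * squareSize, 0.0))
boardCorners.fromList(pts)

The first method is calibrateCamera with all intrinsic and distortion parameters fixed: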
val flags = CALIB_USE_INTRINSIC_GUESS or
CALIB_FIX_PRINCIPAL_POINT or
CALIB_FIX_FOCAL_LENGTH or
CALIB_FIX_K1 or
CALIB_FIX_K2 or
CALIB_FIX_K3 or
CALIB_FIX_K4 or
CALIB_FIX_K5 or
CALIB_FIX_K6 or
CALIB_FIX_S1_S2_S3_S4 or
CALIB_FIX_TAUX_TAUY
val overallRms = Calib3d.calibrateCamera(
objectPoints,
imagePoints,
imageSize,
cameraMatrix,
distortionCoefficients,
rotationVectors,
translationVectors,
flags
)
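With CALIB_USE_INTRINSIC_GUESS set and every intrinsic and distortion parameter fixed, calibrateCamera only has the per-view poses left to estimate. The outputs rotationVectors and translationVectors are lists with one Mat per input view; for my single chessboard view I read the pose like this (a sketch, names as in the snippet above):

// rotationVectors and translationVectors hold one entry per input view.
val rvec0: Mat = rotationVectors[0]    // 3x1 rotation vector (Rodrigues form), view 0
val tvec0: Mat = translationVectors[0] // 3x1 translation vector, view 0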
The second method is a plain solvePnP call on the same view:
val solveReturnValue = Calib3d.solvePnP(
objectPointsInFrame.second,
imagePointsInFrame.second,
cameraMatrix,
distortionCoefficients,
rotationVector,
translationVector,
false // useExtrinsicGuess
)
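To quantify how close the two poses are, I compare the angle of the relative rotation and the Euclidean distance between the translations (a sketch; variable names follow the snippets above):

import org.opencv.calib3d.Calib3d
import org.opencv.core.Core
import org.opencv.core.Mat

val rCalib = Mat()
Calib3d.Rodrigues(rotationVectors[0], rCalib)  // calibrateCamera pose, view 0
val rPnp = Mat()
Calib3d.Rodrigues(rotationVector, rPnp)        // solvePnP pose

// Relative rotation rCalib * rPnp^T; equals the identity when both poses agree.
val rRel = Mat()
Core.gemm(rCalib, rPnp.t(), 1.0, Mat(), 0.0, rRel)

// Angle of the relative rotation, from trace(R) = 1 + 2*cos(theta).
val cosTheta = ((Core.trace(rRel).`val`[0] - 1.0) / 2.0).coerceIn(-1.0, 1.0)
val angleDeg = Math.toDegrees(Math.acos(cosTheta))

// Translation difference, in the same units as the object points.
val tDiff = Core.norm(translationVectors[0], translationVector)
println("rotation diff: %.3f deg, translation diff: %.3f".format(angleDeg, tDiff))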
Both methods yield the same results for translation and rotation, to within about ±1.5 units. However, I have read on multiple occasions (e.g., here) that solvePnP yields the inverse of calibrateCamera, and that one needs to invert the transformation again by
- computing the rotation matrix R from the rotation vector using Calib3d.Rodrigues(),
- inverting the rotation by transposing R (for a rotation matrix, the inverse is the transpose), and
- inverting the translation as -R.t() * tvec,
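For reference, this is the inversion I implemented for testing (a minimal sketch; invertPose is my own helper name):

import org.opencv.calib3d.Calib3d
import org.opencv.core.Core
import org.opencv.core.Mat

// Invert a pose (rvec, tvec) following the steps above:
// R = Rodrigues(rvec), R_inv = R^T, t_inv = -R^T * tvec.
fun invertPose(rvec: Mat, tvec: Mat): Pair<Mat, Mat> {
    val r = Mat()
    Calib3d.Rodrigues(rvec, r)                     // rotation vector -> 3x3 matrix
    val rInv = r.t()                               // inverse of a rotation is its transpose
    val tInv = Mat()
    Core.gemm(rInv, tvec, -1.0, Mat(), 0.0, tInv)  // tInv = -R^T * tvec
    val rvecInv = Mat()
    Calib3d.Rodrigues(rInv, rvecInv)               // back to a 3x1 rotation vector
    return Pair(rvecInv, tInv)
}

val (rvecInverted, tvecInverted) = invertPose(rotationVector, translationVector)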
which, according to my testing, does not seem to be the case. Hence my questions: Is it safe to assume that rotationVectors and translationVectors are the extrinsic calibration of the camera? And when is the aforementioned inversion actually necessary?