I am performing camera calibration using calibrateCamera. Part of the output is a set of Rodrigues rotation vectors and corresponding 3-D translation vectors.
I am interested in the world positions of the cameras. If I plot the translation vectors directly as points, the results look incorrect. I suspect I am getting my coordinate spaces confused, but I am having trouble parsing the OpenCV documentation:
> rvecs – Output vector of rotation vectors (see Rodrigues()) estimated for each pattern view. That is, each k-th rotation vector together with the corresponding k-th translation vector (see the next output parameter description) brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, a real position of the calibration pattern in the k-th pattern view (k = 0..M-1).
My question is: how do I derive the camera's world position from a Rodrigues rotation vector and its corresponding translation vector obtained from OpenCV's calibrateCamera?