I filmed an object on an ArUco board from two positions with the same camera; the camera was calibrated beforehand and the images are undistorted. I pick one red point in the first shot, compute the 3D line (ray) in space that corresponds to that pixel, and then project this line onto the second image.
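The projection step looks roughly like this (a minimal sketch, not my exact code; the intrinsics `K` and the two board-to-camera poses `R1`/`t1`, `R2`/`t2` are placeholders that in practice come from the calibration and from the pose tracker shown below):

```cpp
// Sketch: back-project a pixel from view 1 to a 3D ray and project that
// ray into view 2. All matrices here are placeholder values.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Matx33d K(800,   0, 320,     // placeholder intrinsics: fx, 0, cx
                    0, 800, 240,     //                         0, fy, cy
                    0,   0,   1);
    cv::Matx33d R1 = cv::Matx33d::eye(), R2 = cv::Matx33d::eye();
    cv::Vec3d   t1(0, 0, 0), t2(-0.2, 0, 0);  // view 2 shifted 20 cm along x

    cv::Point2d p1(400, 300);                 // the red point in image 1

    // Ray direction in camera-1 coordinates: d = K^-1 * (u, v, 1)
    cv::Vec3d d = K.inv() * cv::Vec3d(p1.x, p1.y, 1.0);

    // Two points on the ray, transformed into board/world coordinates:
    // X_world = R1^T * (X_cam1 - t1)
    std::vector<cv::Point3d> ray;
    for (double s : {0.5, 2.0})               // two sample depths in meters
        ray.push_back(cv::Point3d(R1.t() * (s * d - t1)));

    // Project the ray segment into image 2; the red point should lie on it.
    cv::Mat rvec2;
    cv::Rodrigues(cv::Mat(R2), rvec2);
    std::vector<cv::Point2d> proj;
    cv::projectPoints(ray, rvec2, cv::Mat(t2), K, cv::noArray(), proj);

    std::cout << proj[0] << " " << proj[1] << std::endl;
    return 0;
}
```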
The problem is that there is a discrepancy of roughly 5-15 px between the projected line and the point in the second image. I observed the same offset with synthetic OpenGL-rendered images, so it does not seem to be a problem with my camera. I use this piece of code to detect the board pose:
```cpp
aruco::MarkerMapPoseTracker MSPoseTracker; // tracks the pose of the marker map
MSPoseTracker.setParams(camParam, theMarkerMapConfig);
MSPoseTracker.estimatePose(ret.markers);   // markers detected in the current frame
```
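The pose used for the projection can then be read back from the tracker, e.g. (a sketch using the standard ArUco accessors from posetracker.h):

```cpp
// Sketch: extracting the estimated board pose from the tracker.
if (MSPoseTracker.isValid()) {                  // estimatePose succeeded
    cv::Mat rvec = MSPoseTracker.getRvec();     // Rodrigues rotation vector
    cv::Mat tvec = MSPoseTracker.getTvec();     // translation vector
    cv::Mat RT   = MSPoseTracker.getRTMatrix(); // 4x4 board-to-camera transform
}
```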
Is it possible to increase the tolerance somehow? I've also found a function that has some sort of tolerance parameter:
```cpp
bool estimatePose(Marker& m, const CameraParameters& cam_params, float markerSize, float minErrorRatio = 4 /*tau_e in paper*/)
```
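If I'm reading the headers right, this signature belongs to the single-marker aruco::MarkerPoseTracker in posetracker.h, so it could be called directly per marker, roughly like this (the marker size and ratio values below are placeholders), though that yields per-marker poses rather than the marker-map pose:

```cpp
// Sketch, assuming the signature above is aruco::MarkerPoseTracker::estimatePose.
#include <aruco/aruco.h>
#include <aruco/posetracker.h>
#include <map>
#include <vector>

void trackSingleMarkers(std::vector<aruco::Marker>& markers,
                        const aruco::CameraParameters& camParam)
{
    static std::map<int, aruco::MarkerPoseTracker> trackers; // one per marker id
    for (auto& m : markers)
        trackers[m.id].estimatePose(m, camParam,
                                    0.04f,  // marker side length in meters
                                    10.f);  // minErrorRatio (tau_e)
}
```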
but I don't know how to pass this parameter through MSPoseTracker.estimatePose. How can I improve the precision, assuming it is at least theoretically possible?