
I am trying to find the real-world distance from the camera to ORB feature points using monocular ORB-SLAM2.

I calculated the Euclidean distance between the world coordinates of each ORB feature point and the world coordinates of the current key frame's camera location. This is repeated for every frame, so a distance is obtained for each ORB point in the current frame.

In Viewer.cc

// Pass the current frame's map points and camera centre to the frame drawer
std::vector<MapPoint*> MP = mpTracker->mCurrentFrame.mvpMapPoints;
cv::Mat CC = mpTracker->mCurrentFrame.GetCameraCenter();
mpFrameDrawer->DrawFrame(MP,CC);

In FrameDrawer.cc

cv::Mat FrameDrawer::DrawFrame(std::vector<MapPoint*> DistMapPt, cv::Mat CameraPt)
{
   .
   .
   .

    else if(state==Tracking::OK) //TRACKING
    {
        mnTracked=0;
        mnTrackedVO=0;
        const float r = 5; // half-size of the box drawn around each keypoint

        for(int i=0;i<n;i++)
        {
            if(vbVO[i] || vbMap[i])
            {
                cv::Point2f pt1,pt2;
                pt1.x=vCurrentKeys[i].pt.x-r;
                pt1.y=vCurrentKeys[i].pt.y-r;
                pt2.x=vCurrentKeys[i].pt.x+r;
                pt2.y=vCurrentKeys[i].pt.y+r;

                // Euclidean distance between the map point's world position
                // and the camera centre (both 3x1 cv::Mat in world coordinates)
                float MapPointDist = 0.f;
                if(DistMapPt[i]) // guard against null map points
                    MapPointDist = static_cast<float>(cv::norm(DistMapPt[i]->GetWorldPos() - CameraPt));

            }
   .
   .
   .

            }


        }
    }

However, this calculated distance is neither equal to the real distance nor can it be scaled to it by a constant factor. The same method gives relatively accurate distances in RGB-D ORB-SLAM2.

Is there any method to scale distances to real-world units in monocular ORB-SLAM2?

Swathy
  • I'm not familiar with ORB-SLAM2 and am currently not working with SLAM in general, but from past experience it should be possible. Are you sure your camera and IMU calibration is OK? That is usually the main issue that leads to wrong coordinates. – sklott Jul 26 '19 at 06:46
  • @sklott The camera and calibration parameters are correct. I also tried with the sequences and parameters provided by the KITTI and TUM data sets. – Swathy Jul 26 '19 at 10:36

1 Answer


Please look at this post: "ORB-SLAM2 arbitrarily define scale at initialization, as the median scene depth. In practice, scale is different every time you initialize orb-slam2." It is impossible to obtain the correct scale in monocular SLAM, because real-world depth cannot be estimated from a sequence of images alone. You need another source of data, such as a second camera, an IMU, a LiDAR, robot odometry, or a marker with known real-world dimensions. In the RGB-D case the depth is known from the depth sensor, so the coordinates are scaled correctly.
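If a reference of known size is visible in the scene, you can estimate a single scale factor and apply it to all distances measured in the SLAM frame. Below is a minimal sketch, assuming you can obtain the world positions of two map points whose true separation you have measured (e.g. two marker corners); EstimateScale and ToMetres are illustrative helpers, not part of ORB-SLAM2.

#include <opencv2/core.hpp>

// Hypothetical helpers, not part of ORB-SLAM2.
// refA, refB   : 3x1 CV_32F world positions of two map points whose
//                true separation is known (e.g. two marker corners).
// realDistance : their measured separation in metres.
float EstimateScale(const cv::Mat &refA, const cv::Mat &refB, float realDistance)
{
    // Separation of the two reference points in SLAM units
    const float slamDistance = static_cast<float>(cv::norm(refA - refB));
    return realDistance / slamDistance; // metres per SLAM unit
}

// Convert any distance measured in the SLAM frame (e.g. MapPointDist
// from FrameDrawer::DrawFrame above) to metres.
float ToMetres(float slamDist, float scale)
{
    return slamDist * scale;
}

Keep in mind that monocular scale also drifts over time, so a factor estimated once at initialization may have to be re-estimated periodically.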

Piotr Siekański
  • It says "In practice, scale is different every time you initialize orb-slam2". But when I checked each frame within a single execution, the scaling differed from frame to frame. So is the scale fixed at initialization, or does it change with each frame? – Swathy Jul 26 '19 at 11:29