I am currently working on a project that will allow a robot to find its location based on photos of the ceiling. The camera is mounted on the robot and faces the ceiling directly, so the center of each photo is always considered to be the position of the robot. The idea is to establish a (0,0) position and the orientation of the x,y axes from the first photo, then find the distance and rotation between that photo and the next one (taken from a slightly different position), establish a new (0,0) position and x,y axis orientation from it, and so on. So far I am finding the features in a single image using the following code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <iostream>
#include <vector>
using namespace cv;
using namespace std;
int main()
{
    // Load the ceiling photo taken by the upward-facing camera.
    Mat img = imread("ceiling.jpg");
    if (img.empty())
    {
        cout << "Cannot load an image!" << endl;
        getchar();
        return -1;
    }

    // SIFT detector limited to the 10 strongest keypoints.
    SIFT sift(10);
    vector<KeyPoint> key_points;
    Mat descriptors, mascara;   // mascara is an empty mask, so the whole image is used
    Mat output_img;

    // Detect keypoints and compute their descriptors in one call.
    sift(img, mascara, key_points, descriptors);

    // Visualize and save the detected keypoints.
    drawKeypoints(img, key_points, output_img);
    namedWindow("Image");
    imshow("Image", output_img);
    imwrite("image.jpg", output_img);
    waitKey(0);
    return 0;
}
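This gives me keypoints for a single photo. For the next step I imagine matching the SIFT descriptors of two consecutive photos and estimating the rotation and translation between the matched points, roughly like the sketch below. The file names are placeholders, and estimateRigidTransform from opencv2/video/tracking.hpp is only my guess at a suitable function; I have not verified it is the right choice:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/video/tracking.hpp>
#include <cmath>
#include <iostream>
#include <vector>
using namespace cv;
using namespace std;

int main()
{
    // Two consecutive ceiling photos (file names are placeholders).
    Mat prev = imread("ceiling_prev.jpg");
    Mat curr = imread("ceiling_curr.jpg");
    if (prev.empty() || curr.empty())
    {
        cout << "Cannot load the images!" << endl;
        return -1;
    }

    // Detect SIFT keypoints and compute descriptors in both photos.
    SIFT sift;
    vector<KeyPoint> kp_prev, kp_curr;
    Mat desc_prev, desc_curr;
    sift(prev, Mat(), kp_prev, desc_prev);
    sift(curr, Mat(), kp_curr, desc_curr);

    // Match descriptors between the two photos (brute force, L2 norm).
    // In practice the matches should probably be filtered first.
    BFMatcher matcher(NORM_L2);
    vector<DMatch> matches;
    matcher.match(desc_prev, desc_curr, matches);

    // Collect the matched point coordinates.
    vector<Point2f> pts_prev, pts_curr;
    for (size_t i = 0; i < matches.size(); i++)
    {
        pts_prev.push_back(kp_prev[matches[i].queryIdx].pt);
        pts_curr.push_back(kp_curr[matches[i].trainIdx].pt);
    }

    // Estimate a 2x3 transform between the two point sets. With
    // fullAffine = false this is limited to translation, rotation
    // and uniform scaling.
    Mat T = estimateRigidTransform(pts_prev, pts_curr, false);
    if (T.empty())
    {
        cout << "Transform estimation failed!" << endl;
        return -1;
    }

    // Recover the rotation angle and translation from the 2x3 matrix
    // [ cos(a) -sin(a) tx ]
    // [ sin(a)  cos(a) ty ]
    double angle = atan2(T.at<double>(1, 0), T.at<double>(0, 0));
    double tx = T.at<double>(0, 2);
    double ty = T.at<double>(1, 2);
    cout << "rotation (rad): " << angle
         << ", translation: (" << tx << ", " << ty << ")" << endl;
    return 0;
}

Since the camera-to-ceiling distance does not change, I would expect the scale component of the estimated transform to stay close to 1.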
Is there any OpenCV function that could help me find the translation and rotation between two such photos, or a better approach than the sketch above?
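For completeness, this is how I imagine chaining the per-photo transforms into a global robot pose. Again this is only a sketch, and I am not sure about the sign conventions, i.e. whether the image motion has to be inverted to get the robot motion:

#include <opencv2/core/core.hpp>
#include <cmath>
#include <iostream>
using namespace cv;

// Global robot pose in the coordinate frame of the first photo.
struct Pose
{
    double x, y, theta;
};

// Update the global pose with the 2x3 transform T estimated between
// the previous photo and the current one (layout as in the sketch above).
Pose updatePose(const Pose& p, const Mat& T)
{
    double dtheta = std::atan2(T.at<double>(1, 0), T.at<double>(0, 0));
    double dx = T.at<double>(0, 2);
    double dy = T.at<double>(1, 2);

    Pose q;
    // Rotate the local displacement into the global frame, then add it.
    q.x = p.x + dx * std::cos(p.theta) - dy * std::sin(p.theta);
    q.y = p.y + dx * std::sin(p.theta) + dy * std::cos(p.theta);
    q.theta = p.theta + dtheta;
    return q;
}

int main()
{
    Pose pose = {0.0, 0.0, 0.0};   // origin defined by the first photo

    // T would come from estimateRigidTransform for each new photo;
    // here it is just the identity as a stand-in.
    Mat T = (Mat_<double>(2, 3) << 1, 0, 0,
                                   0, 1, 0);
    pose = updatePose(pose, T);
    std::cout << pose.x << " " << pose.y << " " << pose.theta << std::endl;
    return 0;
}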