
I want to do 3D reconstruction using a structure-from-motion algorithm, implemented with OpenCV in Python. However, the point cloud I obtain somehow breaks into two halves. My input images are: Image 1, Image 2, Image 3. I match every two consecutive images, i.e. image 1 with image 2 and image 2 with image 3. I tried different feature detectors such as SIFT, KAZE and SURF. With the matched points I compute the essential matrix. I obtained the camera intrinsics from OpenCV's camera calibration; they are stored in the variables 'mtx' and 'dist' in the code below.

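For context, this is roughly how 'mtx' and 'dist' were obtained; a minimal sketch of OpenCV's standard chessboard calibration, where the 9x6 board size and the 'calib_images/*.jpg' path are assumptions for illustration, not values from my setup:

```
import glob
import cv2
import numpy as np

# Assumed 9x6 inner-corner chessboard; adjust to the actual calibration target.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for name in glob.glob('calib_images/*.jpg'):   # hypothetical path
    gray = cv2.imread(name, 0)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# mtx: 3x3 intrinsic matrix, dist: distortion coefficients
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```
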
```
import os
import cv2
import numpy as np

file = os.listdir('Path_to_images')
file.sort(key=lambda f: int(''.join(filter(str.isdigit, f))))
path = os.path.join(os.getcwd(), 'Path_to_images/')

for i in range(len(file) - 1):

    img1 = cv2.imread(path + file[i], 0)        # read as grayscale
    img1 = cv2.equalizeHist(img1)

    img2 = cv2.imread(path + file[i + 1], 0)
    img2 = cv2.equalizeHist(img2)

# Feature Detection #
    sift = cv2.xfeatures2d.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

# Feature Matching #
    FLANN_INDEX_KDTREE = 1              # KD-tree index is 1 in FLANN's enum (0 is linear search)
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1,des2,k=2)

    good = []
    pts1 = []
    pts2 = []

    # Lowe's ratio test
    for m, n in matches:
        if m.distance < 0.8 * n.distance:
            good.append(m)
            pts2.append(kp2[m.trainIdx].pt)
            pts1.append(kp1[m.queryIdx].pt)

    # Keep sub-pixel accuracy; casting through int32 would truncate coordinates
    pts1 = np.array([pts1], dtype=np.float32)
    pts2 = np.array([pts2], dtype=np.float32)

# UNDISTORTING POINTS #

    pts1_norm = cv2.undistortPoints(pts1, mtx, dist)
    pts2_norm = cv2.undistortPoints(pts2, mtx, dist)
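    # Note: without a P/newCameraMatrix argument, cv2.undistortPoints returns
    # *normalized* image coordinates, i.e. the intrinsics are already removed.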

# COMPUTE FUNDAMENTAL MATRIX #

    F, mask = cv2.findFundamentalMat(pts1_norm,pts2_norm,cv2.FM_LMEDS)

# COMPUTE ESSENTIAL MATRIX #

    E, mask = cv2.findEssentialMat(pts1_norm, pts2_norm, focal=55.474, pp=(33.516, 16.630), method=cv2.FM_LMEDS, prob=0.999, threshold=3.0)


# POSE RECOVERY #
    points, R, t, mask = cv2.recoverPose(E, pts1_norm, pts2_norm)
    anglesBetweenImages = rotationMatrixToEulerAngles(R)    # user-defined helper (not shown)

    with open('path_to_folder/angles.txt', 'a') as f:       # append angles for each pair
        print(anglesBetweenImages, file=f)

#  COMPOSE PROJECTION MATRIX OF R, t #
    matrix_1 = np.hstack((R, t))
    matrix_2 = np.hstack((np.eye(3, 3), np.zeros((3, 1))))

    projMat_1 = np.dot(mtx, matrix_1)
    projMat_2 = np.dot(mtx, matrix_2)
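    # Each projection matrix is P = K [R | t] (3x4); matrix_2 places the
    # reference camera at the origin with identity rotation.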

# TRIANGULATE POINTS #
    point_4d_hom = cv2.triangulatePoints(projMat_1[:3], projMat_2[:3], pts1[:2].T, pts2[:2].T)


# HOMOGENIZE THE 4D RESULT TO 3D #
    point_4d = point_4d_hom / point_4d_hom[3]   # divide by the homogeneous coordinate w

    point_3d = point_4d[:3, :].T                # obtains 3D points
    np.savetxt('/path_to_folder/' + file[i] + '.txt', point_3d)
```

After cv2.triangulatePoints, I expected to obtain one point cloud. But the result I got consists of two surfaces, as shown in the image below (Result 1).

I would really appreciate it if anyone could tell me what is going wrong in my algorithm. Thanks!


1 Answer


You need to do this iteratively, triangulating one matched point pair at a time, like this:

```
// 2x1 matrices holding one matched point from each image
cv::Mat pointsMat1(2, 1, CV_64F);
cv::Mat pointsMat2(2, 1, CV_64F);

int size0 = m_history.getHistorySize();

for (int i = 0; i < size0; i++) {
    cv::Point pt1 = m_history.getOriginalPoint(0, i);
    cv::Point pt2 = m_history.getOriginalPoint(1, i);

    pointsMat1.at<double>(0, 0) = pt1.x;
    pointsMat1.at<double>(1, 0) = pt1.y;
    pointsMat2.at<double>(0, 0) = pt2.x;
    pointsMat2.at<double>(1, 0) = pt2.y;

    // One 4x1 homogeneous point is produced per iteration
    cv::Mat pnts3D(4, 1, CV_64F);

    cv::triangulatePoints(m_projectionMat1, m_projectionMat2, pointsMat1, pointsMat2, pnts3D);
}
```
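Adapted to the Python pipeline from the question, the same per-point loop might look like the sketch below; projMat_1, projMat_2, pts1 and pts2 are the question's variables, and the reshape assumes they still have the (1, N, 2) shape built above:

```
import cv2
import numpy as np

# Flatten the (1, N, 2) arrays from the question into N rows of (x, y).
p1 = pts1.reshape(-1, 2)
p2 = pts2.reshape(-1, 2)

points_3d = []
for (x1, y1), (x2, y2) in zip(p1, p2):
    a = np.array([[x1], [y1]], dtype=np.float64)            # 2x1 point in image 1
    b = np.array([[x2], [y2]], dtype=np.float64)            # 2x1 point in image 2
    X = cv2.triangulatePoints(projMat_1, projMat_2, a, b)   # 4x1 homogeneous point
    points_3d.append((X[:3] / X[3]).ravel())                # divide by w

points_3d = np.array(points_3d)
```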