
To validate the results of a two-view SfM approach for estimating the camera pose [R|t], I used the chessboard patterns from my calibration, since OpenCV's "calibrateCamera" function returns a rotation and translation vector for each pattern. The relative pose between, say, the first two patterns can therefore be computed easily.
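For reference, the ground-truth relative pose I compare against can be computed from the per-pattern rvecs/tvecs like this (a minimal sketch; `rodrigues` reimplements the Rodrigues formula with the same convention as `cv2.Rodrigues` so the snippet is NumPy-only, and the function names are my own):

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> rotation matrix (same convention as cv2.Rodrigues)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float).reshape(3) / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def relative_pose(rvec1, tvec1, rvec2, tvec2):
    """Relative pose of view 2 w.r.t. view 1 from calibrateCamera outputs.
    Each view maps board points as x_i = R_i * X + t_i, so
    R_rel = R2 * R1^T and t_rel = t2 - R_rel * t1."""
    R1, R2 = rodrigues(rvec1), rodrigues(rvec2)
    t1 = np.asarray(tvec1, dtype=float).reshape(3)
    t2 = np.asarray(tvec2, dtype=float).reshape(3)
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    return R_rel, t_rel
```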

Yet I don't get the correct camera pose, and I have struggled hard to figure out the problem, but to no avail.

I would really appreciate any help in solving it.

Description of my code:

  • undistort the images
  • find the chessboard corners in both images
  • match the points (verified by plotting the two images side by side with lines between matches)
  • estimate the fundamental matrix F (verified: x'^T * F * x = 0)
  • essential matrix E = K^T * F * K (verified: x'^T * E * x = 0 for normalized points)
  • SVD of E = U * S * V^T
  • R = U * W * V^T or U * W^T * V^T, where W = [0,-1,0; 1,0,0; 0,0,1]

    FundMat, mask = cv2.findFundamentalMat(imgpoints1, imgpoints2, cv2.FM_LMEDS)
    
    # Verification of the fundamental matrix: x'^T * F * x should be ~0
    # for every inlier correspondence (imgpoints are (N, 2) arrays).
    for i in range(len(imgpoints1)):
        X = np.array([imgpoints1[i][0], imgpoints1[i][1], 1])
        X_prime = np.array([imgpoints2[i][0], imgpoints2[i][1], 1])
        err = np.dot(np.dot(X_prime.T, FundMat), X)
        if mask[i]:
            print(err)
    
    # E = [t]x * R = K'^T * F * K; both views share the intrinsics mtx here
    term1 = np.dot(np.transpose(mtx), FundMat)
    E = np.dot(term1, mtx)
    
    # Verification of the essential matrix: x'^T * E * x should be ~0
    # for normalized coordinates x = K^-1 * (u, v, 1)^T.
    K_inv = np.linalg.inv(mtx)
    for i in range(len(imgpoints1)):
        X_norm = np.dot(K_inv, np.array([imgpoints1[i][0], imgpoints1[i][1], 1]))
        X_prime_norm = np.dot(K_inv, np.array([imgpoints2[i][0], imgpoints2[i][1], 1]))
        err_Ess = np.dot(np.dot(X_prime_norm.T, E), X_norm)
        if mask[i]:
            print(err_Ess)
    
    # SVD of E (a valid E has singular values (s, s, 0); I do not enforce
    # S = diag(1, 1, 0) here)
    U, S, V_T = np.linalg.svd(E)
    
    # rotation candidates, without enforcement or a cheirality check;
    # the translation direction is U[:, 2] up to sign
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    
    Rot1 = np.dot(np.dot(U, W), V_T)
    Rot2 = np.dot(np.dot(U, W.T), V_T)
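For completeness, the full four-candidate decomposition (with the determinant fixes the snippet above skips) can be sketched in plain NumPy; disambiguating among the four candidates requires a cheirality check, which `cv2.recoverPose` performs:

```python
import numpy as np

def pose_candidates_from_E(E):
    """The four (R, t) candidates encoded by an essential matrix.
    Only one of them places triangulated points in front of both
    cameras; cv2.recoverPose does that cheirality test for you."""
    U, _, Vt = np.linalg.svd(E)
    # keep U and Vt proper (det = +1) so the R candidates are rotations
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]  # translation direction only -- the scale is unobservable
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```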
    
Anas El-wakil

1 Answer


Your problem is that you are using the points from the chessboard: you cannot estimate the Fundamental matrix from coplanar points. One way to fix this is to match scene points using a generic approach, like SIFT or SURF. The other way is to estimate the Essential matrix directly using the 5-point algorithm, because the Essential matrix can be calculated from coplanar points.

Also, keep in mind that you can only calculate the camera pose up to scale from the Essential matrix. In other words, your translation will end up being a unit vector. One way to calculate the scale factor to get the actual length of the translation is to use your chessboard.
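The scale recovery mentioned above can be as simple as comparing a triangulated distance to a known one. A hypothetical sketch, assuming `p0` and `p1` are two reconstructed chessboard corners known to be one square apart:

```python
import numpy as np

def metric_scale(p0, p1, square_size):
    """Scale factor mapping an up-to-scale reconstruction to metric units,
    given two triangulated chessboard corners one square apart."""
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    return square_size / np.linalg.norm(p1 - p0)

# the unit-length translation from the essential matrix then becomes metric:
# t_metric = metric_scale(p0, p1, square_size) * t_unit
```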

Dima
  • Thank you so much for your reply! I compared the rotations obtained from the essential matrix computed two ways: once with the 5-point algorithm ("cv2.findEssentialMat") and once from the fundamental matrix. Both give more or less the same rotation matrix, with only slight differences, but neither matched the relative rotation matrix. There, too, the difference was slight, except that all the signs of the matrix were inverted, namely Rot_from_E = -R_relative. – Anas El-wakil Jul 28 '16 at 10:16
  • I in fact had a problem with the configuration: this algorithm computes the camera pose assuming that the camera moves, but in my case the object was moving. So the relative pose should be computed as R_relative = T1 * inv(T2) instead of R_rel = inv(T2) * T1. I also wrote another script where I modeled the camera and assumed the points and poses, partly to avoid working with a coplanar object, and there I got the rotation matrix right. – Anas El-wakil Jul 28 '16 at 10:22
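The order-of-composition issue from this comment is easy to check numerically with 4x4 homogeneous transforms (a sketch; `to_T` and `inv_T` are my own helpers, with T_i taken as board-to-camera transforms as returned by calibration):

```python
import numpy as np

def to_T(R, t):
    """Pack a rotation R (3x3) and translation t (3,) into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t, dtype=float).reshape(3)
    return T

def inv_T(T):
    """Inverse of a rigid transform, without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    return to_T(R.T, -R.T @ t)

# T1 @ inv_T(T2) and inv_T(T2) @ T1 generally differ (rigid transforms do
# not commute), which is why picking the convention that matches the
# setup -- moving camera vs. moving object -- matters.
```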
  • Actually, I have now tried a different thing: I removed the image distortion, and the essential matrix from the 5-point algorithm gave me the exact relative pose. – Anas El-wakil Jul 28 '16 at 19:40