To validate the results of a two-view SfM approach for estimating the camera pose [R|t], I used the chessboard patterns from my calibration, since OpenCV's calibrateCamera returns a rotation and translation vector for each pattern. The relative pose between, say, the first two patterns can therefore be computed easily as ground truth.
Yet I do not get the correct camera pose, and I have been struggling hard to figure out the problem, but to no avail.
I would really appreciate any help in solving my problem.
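For reference, this is how I compute that ground-truth relative pose from the per-view outputs of calibrateCamera (a minimal sketch; `relative_pose` is my own name, and R_i here is the matrix form cv2.Rodrigues(rvec_i)[0]):

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Pose [R|t] taking camera-1 coordinates to camera-2 coordinates.

    R_i, t_i are the board-to-camera transforms returned per view by
    cv2.calibrateCamera (R_i = cv2.Rodrigues(rvec_i)[0]), i.e.
    X_ci = R_i @ X_w + t_i.
    """
    R = R2 @ R1.T                                # relative rotation
    t = t2.reshape(3, 1) - R @ t1.reshape(3, 1)  # relative translation
    return R, t
```

Since the translation recovered from the essential matrix is only defined up to scale, I compare it against this t after normalising both to unit length.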
Description of my code:
- undistort images
- find chessboard corners in two images
- match points (verified by plotting the two images side by side with lines between corresponding points)
- estimate the fundamental matrix (verified: x'^T * F * x = 0)
- essential matrix: E = K^T * F * K (verified: x'^T * E * x = 0 for normalized points x = K^-1 * [u, v, 1]^T)
- SVD of E = U * S * V^T
- R = U * W * V^T or U * W^T * V^T, where W = [0, -1, 0; 1, 0, 0; 0, 0, 1]
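One step I suspect is missing between the SVD and the rotation extraction: with a noisy F, the resulting E does not have singular values (s, s, 0), so it is usually projected back onto the essential manifold before decomposing it (a sketch; `enforce_essential` is my own name):

```python
import numpy as np

def enforce_essential(E):
    """Project an estimated essential matrix onto the essential manifold
    by forcing its singular values to (1, 1, 0); scale is irrelevant here."""
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```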
    FundMat, mask = cv2.findFundamentalMat(imgpoints1, imgpoints2, cv2.FM_LMEDS)

    # verification of the fundamental matrix: x'^T * F * x = 0 ??
    for i in range(len(imgpoints1)):
        X = np.array([imgpoints1[i][0], imgpoints1[i][1], 1])
        X_prime = np.array([imgpoints2[i][0], imgpoints2[i][1], 1])
        err = np.dot(np.dot(X_prime.T, FundMat), X)
        if mask[i] == True:
            print(err)

    # E = [t]x * R = K^T * F * K
    term1 = np.dot(np.transpose(mtx), FundMat)  # newcameramtx or mtx
    E = np.dot(term1, mtx)                      # newcameramtx or mtx

    # verification of the essential matrix in normalized coordinates
    for i in range(len(imgpoints1)):
        X_norm = np.dot(np.linalg.inv(mtx), np.array([imgpoints1[i][0], imgpoints1[i][1], 1]).T)
        X_prime_norm = np.dot(np.linalg.inv(mtx), np.array([imgpoints2[i][0], imgpoints2[i][1], 1]).T)
        err_Ess = np.dot(np.dot(X_prime_norm.T, E), X_norm)
        if mask[i] == True:
            print(err_Ess)

    # SVD of E
    U, S, V_T = np.linalg.svd(E)

    # computation of the rotation candidates, without enforcing the
    # singular values of E
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    Rot1 = np.dot(np.dot(U, W), V_T)
    Rot2 = np.dot(np.dot(U, W.T), V_T)