
Given a set of N points in 3D space, I am trying to find the best-fitting plane using SVD and Eigen.

My algorithm is:

  1. Center the data points around (0,0,0).
  2. Form the 3xN matrix of point coordinates.
  3. Compute the SVD of the matrix.
  4. Take the left singular vector corresponding to the smallest singular value as the normal of the plane.
  5. Set the distance from the origin to the plane to normal∙centroid.

I can't figure out how to use Eigen's SVD module to extract the left singular vector corresponding to the smallest singular value of the point-coordinate matrix.

So far I have this code (steps 1, 2, 3 and 5 of the algorithm; step 4 is the part I'm missing):

```cpp
// Steps 1 and 2: center the points; `points` is the 3xN coordinate matrix.
Eigen::Vector3f mean = points.rowwise().mean();
const Eigen::Matrix3Xf points_centered = points.colwise() - mean;

// Step 3: compute the SVD.
int setting = Eigen::ComputeThinU | Eigen::ComputeThinV;
Eigen::JacobiSVD<Eigen::Matrix3Xf> svd = points_centered.jacobiSvd(setting);

// Step 4: ???
Eigen::Vector3f normal = ???;

// Step 5: distance from the origin to the plane.
float d = normal.dot(mean);
```
dustymax

2 Answers


Denoting U = svd.matrixU(), the vectors U.col(0) and U.col(1) define a basis of your plane, and U.col(2) is the normal to your plane.

U.col(0) also gives the direction along which the points have the greatest standard deviation.
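In code, completing the snippet from the question might look like the following minimal sketch (the `fitPlane` wrapper is my own; it assumes `points` is an `Eigen::Matrix3Xf` with at least three non-collinear columns). Eigen sorts singular values in decreasing order, so the left singular vector for the smallest singular value is the last column of U:

```cpp
#include <Eigen/Dense>

// Total-least-squares plane fit: find n and d such that n.dot(x) = d.
// Sketch only; assumes at least 3 non-collinear points.
void fitPlane(const Eigen::Matrix3Xf& points, Eigen::Vector3f& normal, float& d) {
    // Steps 1-2: center the points around the origin.
    const Eigen::Vector3f mean = points.rowwise().mean();
    const Eigen::Matrix3Xf centered = points.colwise() - mean;

    // Step 3: SVD of the 3xN matrix; only U is needed here.
    Eigen::JacobiSVD<Eigen::Matrix3Xf> svd(centered, Eigen::ComputeThinU);

    // Step 4: singular values are sorted in decreasing order, so the
    // left singular vector for the smallest one is the last column of U.
    normal = svd.matrixU().col(2);

    // Step 5: distance from the origin along the normal.
    d = normal.dot(mean);
}
```

Since only U is used, requesting `Eigen::ComputeThinU` alone is enough; V is never needed.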

billx
  • Could you please explain a bit more about why the flag `ComputeFullU` is used instead of `ComputeThinU`? According to https://eigen.tuxfamily.org/dox/classEigen_1_1JacobiSVD.html, `ComputeThinU` will calculate the first three singular vectors since we have a 3xN matrix. And the third one is what we need. Thus I'm a little confused here. – vicky Lin Dec 26 '22 at 11:57
  • This is a rather old answer. Now that I re-read it, I indeed do not see why `ComputeFullU` is necessary. – billx Dec 29 '22 at 14:28

Your problem is basically how to do a least-squares fit using Eigen's JacobiSVD module. Here's a link with a more helpful example. The basic idea of least-squares fitting is that you first take the vector differences between each of the other N-1 points and one of the N points, and then approximate all such N-1 vectors as linear combinations of two basis vectors, which define the 2D plane. A sketch of this approach is given below.
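For illustration, an ordinary least-squares plane fit in Eigen might look like the sketch below; it parametrizes the plane as z = a\*x + b\*y + c and solves the over-determined system with `JacobiSVD::solve()` (the `fitPlaneOLS` name and this parametrization are my own choices, not taken from the linked example). As the comments below point out, this minimizes vertical distances rather than point-to-plane distances:

```cpp
#include <Eigen/Dense>

// Ordinary least squares: fit z = a*x + b*y + c to the columns of `points`.
// Hypothetical sketch; minimizes vertical (z) residuals, not orthogonal ones.
Eigen::Vector3f fitPlaneOLS(const Eigen::Matrix3Xf& points) {
    const Eigen::Index n = points.cols();

    // Build the over-determined system A * [a, b, c]^T = z.
    Eigen::MatrixXf A(n, 3);
    A.col(0) = points.row(0).transpose(); // x coordinates
    A.col(1) = points.row(1).transpose(); // y coordinates
    A.col(2).setOnes();
    Eigen::VectorXf z = points.row(2).transpose();

    // solve() applies x = V * S^(-1) * U' * z, avoiding the normal equations.
    return A.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(z);
}
```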

Zhuoran He
  • This link is about _normal_ least-square fitting, whereas the question is about **total least squares**. – ggael Sep 09 '16 at 12:48
  • I realize my algorithm is not optimal. It makes more sense to put the centroid on the best-fit plane rather than one of the N points. I just derived mathematically, using Lagrange multipliers, that the algorithm given in the question already minimizes the sum of squares of the N point-to-plane distances; this is a coordinate-independent conclusion (a sketch of the derivation is given after these comments). After the derivation, there is no least-squares solve left to do algorithmically. The correct code to implement the algorithm is given in billx's answer. – Zhuoran He Sep 09 '16 at 18:30
  • My example does ordinary least-squares fitting, in which an over-determined linear system `Ax=b` is solved using the least-squares formula `x=(A'*A)^(-1)*A'*b`. The implementation is cleverer than that: first an SVD is computed, `A=U*S*V'`, and then the solution is `x=V*S^(-1)*U'*b`. This avoids forming `A'*A`, which has a squared condition number compared with `A` itself. – Zhuoran He Sep 09 '16 at 18:45
  • This question is more interesting than I previously thought. A least-square-looking problem eventually doesn't need to do any least squares. Let me give it a one-up! – Zhuoran He Sep 09 '16 at 18:52
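For reference, here is a sketch of the Lagrange-multiplier derivation mentioned in the comments above (my own write-up of the standard argument; p̄ is the centroid and M the centered 3×N matrix):

```latex
% Minimize the sum of squared point-to-plane distances over unit normals n:
\min_{n,\,d} \sum_{i=1}^{N} (n^\top p_i - d)^2 \quad \text{s.t. } n^\top n = 1

% Stationarity in d shows the plane passes through the centroid:
\sum_{i=1}^{N} (n^\top p_i - d) = 0 \;\Rightarrow\; d = n^\top \bar{p}

% Substituting back, with M = [\,p_1-\bar{p},\,\dots,\,p_N-\bar{p}\,]:
\sum_{i=1}^{N} \big(n^\top (p_i - \bar{p})\big)^2 = n^\top M M^\top n

% The Lagrange condition for the constraint n^\top n = 1 gives
M M^\top n = \lambda n

% so the optimal n is the eigenvector of M M^\top with the smallest
% eigenvalue, i.e. the left singular vector of M with the smallest
% singular value -- exactly step 4 of the algorithm in the question.
```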