
I've seen several posts on this subject, but I need a pure Python (no Numpy or any other imports) solution that accepts a list of points (x,y,z coordinates) and calculates the normal of the plane that best fits those points.

I'm following one of the working Numpy examples from here: Fit points to a plane algorithms, how to iterpret results?

import numpy as np

def fitPLaneLTSQ(XYZ):
    # Fits a plane to a point cloud
    # where Z = aX + bY + c        ---- Eqn #1
    # Rearranging Eqn #1: aX + bY - Z + c = 0
    # gives normal = (a, b, -1)
    rows, cols = XYZ.shape
    G = np.ones((rows, 3))
    G[:, 0] = XYZ[:, 0]  # X
    G[:, 1] = XYZ[:, 1]  # Y
    Z = XYZ[:, 2]

    (a, b, c), resid, rank, s = np.linalg.lstsq(G, Z, rcond=None)
    normal = (a, b, -1)
    nn = np.linalg.norm(normal)
    normal = normal / nn
    return normal


XYZ = np.array([
        [0,0,1],
        [0,1,2],
        [0,2,3],
        [1,0,1],
        [1,1,2],
        [1,2,3],
        [2,0,1],
        [2,1,2],
        [2,2,3]
        ])

print(fitPLaneLTSQ(XYZ))

[ -8.10792259e-17 7.07106781e-01 -7.07106781e-01]

I'm trying to adapt this code: Basic ordinary least squares calculation to replace np.linalg.lstsq

Here is what I have so far without using Numpy using the same coords as above:

import math

xvals = [0, 0, 0, 1, 1, 1, 2, 2, 2]
yvals = [0, 1, 2, 0, 1, 2, 0, 1, 2]
zvals = [1, 2, 3, 1, 2, 3, 1, 2, 3]

# Basic ordinary least squares calculation for the line y = a*x + b.
sumx, sumy = map(sum, [xvals, yvals])
sumxy = sum(x * y for x, y in zip(xvals, yvals))
sumxsq = sum(x ** 2 for x in xvals)
Nsamp = len(xvals)
# a (slope)
slope = (Nsamp * sumxy - sumx * sumy) / (Nsamp * sumxsq - sumx ** 2)
# b (intercept)
intercept = (sumy - slope * sumx) / Nsamp
a = slope
b = intercept

normal = (a, b, -1)
mag = lambda v: math.sqrt(sum(i ** 2 for i in v))
nn = mag(normal)
normal = [i / nn for i in normal]
print(normal)

[0.0, 0.7071067811865475, -0.7071067811865475]

As you can see, the answers come out the same, but only because of this particular example. In other examples they don't match. If you look closely, you'll see that in the Numpy example the 'z' values are fed into 'np.linalg.lstsq', but in the non-Numpy version the 'z' values are ignored. How do I work the 'z' values into the least-squares code?
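For reference, the one-variable normal equations above generalize to the plane z = a*x + b*y + c by solving a 3×3 linear system. Here is one possible pure-Python sketch of that idea (the Gaussian elimination below has no pivoting or singular-matrix handling, so it is only illustrative):

```python
import math

def fit_plane(xs, ys, zs):
    # Build the 3x3 normal equations A * (a, b, c) = v for z = a*x + b*y + c.
    n = len(xs)
    sx, sy, sz = sum(xs), sum(ys), sum(zs)
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxz = sum(x * z for x, z in zip(xs, zs))
    syz = sum(y * z for y, z in zip(ys, zs))
    A = [[sxx, sxy, sx],
         [sxy, syy, sy],
         [sx,  sy,  n]]
    v = [sxz, syz, sz]
    # Forward elimination (no pivoting safeguards here).
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            v[j] -= f * v[i]
    # Back substitution.
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeffs[i] = (v[i] - sum(A[i][k] * coeffs[k] for k in range(i + 1, 3))) / A[i][i]
    a, b, c = coeffs
    normal = (a, b, -1.0)
    mag = math.sqrt(sum(t * t for t in normal))
    return tuple(t / mag for t in normal)

xvals = [0, 0, 0, 1, 1, 1, 2, 2, 2]
yvals = [0, 1, 2, 0, 1, 2, 0, 1, 2]
zvals = [1, 2, 3, 1, 2, 3, 1, 2, 3]
print(fit_plane(xvals, yvals, zvals))  # ~ (0.0, 0.707, -0.707)
```

This reproduces the Numpy result for the dataset above, but it is a sketch, not production code.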

Thanks

terrachild
  • Googling "least squares plane fitting" yields plenty of info, e.g. https://www.geometrictools.com/Documentation/LeastSquaresFitting.pdf and http://stackoverflow.com/a/1400338/2482744 – Alex Hall May 29 '16 at 10:25
  • 2
    this better be some kind of pro-forma exercise; or is there a pragmatic motivation I am missing? – Eelco Hoogendoorn May 29 '16 at 10:41
  • Alex, thanks I saw the stackoverflow example already, but couldn't quite translate it to python. Eelco, due to custom python builds in some applications, Numpy is not importable. That is the reason for no Numpy. – terrachild May 29 '16 at 10:48
  • @terrachild: Are you describing a python interpreter embedded inside another application? – Eric May 29 '16 at 11:44
  • [tinynumpy](https://github.com/wadetb/tinynumpy) might be of interest – Eric May 29 '16 at 17:20
  • Eric, yes. Many programs use custom builds of Python, and for some reason they don't compile them properly. They use non-standard compilers which causes module importation problems like 'Magic Number' errors, and other issues. I think they just don't know how important it is to follow convention. – terrachild May 29 '16 at 19:34

2 Answers


I do not think you can get away without implementing some basic matrix operations. As this is a multivariate linear regression problem, you will definitely need the dot product, transpose, and norm. These are easy. The difficult part is that you also need a matrix inverse, a QR decomposition, or something similar. People usually rely on BLAS for these for good reasons; implementing them is not easy, but not impossible either.

With QR decomposition

I would start by creating a Matrix class that has the following methods

  • dot(m1, m2) (or __matmul__(m1, m2) if you have python 3.5): it is just the sum of products, should be straightforward
  • transpose(self): swapping matrix elements, should be easy
  • norm(self): square root of sum of squares (should be only used on vectors)
  • qr_decomp(self): this one is tricky. For an almost pure python implementation see this rosetta code solution (disclaimer: I have not thoroughly checked this code). It uses some numpy functions, but these are basic functions you can implement for your matrix class (shape, eye, dot, copysign, norm).
  • leastsqr_ut(R, A): solve the equation Rx = A if R is an upper triangular matrix. Not trivial, but should be easy enough as you can solve it equation by equation from the bottom.
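The last bullet, `leastsqr_ut`, might be sketched like this in pure Python (assuming `R` is a square list-of-lists with a nonzero diagonal):

```python
def leastsqr_ut(R, y):
    # Solve R * x = y for an upper-triangular R by back substitution,
    # starting from the last equation and working upward.
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(R[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / R[i][i]
    return x

print(leastsqr_ut([[2.0, 1.0], [0.0, 3.0]], [5.0, 6.0]))  # [1.5, 2.0]
```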

With these, the solution is easy:

  • Generate the matrix G as detailed in your numpy example
  • Find the QR decomposition of G
  • Solve Rb = Q'z for b using that R is an upper triangular matrix

Then the normal vector you are looking for is (b[0], b[1], -1) (or that vector divided by its norm if you want a unit-length normal).

With matrix inverse

The inverse of a 3x3 matrix is relatively easy to calculate, but this method is much less numerically stable than doing QR decomposition. If it is not an important concern, then you can do the following: implement

  • dot(m1, m2) (or __matmul__(m1, m2) if you have python 3.5): it is just the sum of products, should be straightforward
  • transpose(self): swapping matrix elements, should be easy
  • norm(self): square root of sum of squares (should be only used on vectors)
  • det(self): determinant, but it is enough if it works on 2x2 and 3x3 matrices, and for those simple formulas are available
  • inv(self): matrix inverse. It is enough if it works on 3x3 matrices, there is a simple formula for example here

Then the formula for b is b = inv(G'G) * (G'z) and your normal vector is again (b[0], b[1], -1).
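A pure-Python sketch of that formula, with the 3x3 inverse written out via the adjugate (illustrative only; there is no handling of a singular G'G):

```python
def fit_plane_normal(points):
    # G has rows (x, y, 1); solve b = inv(G'G) * (G'z) for z = a*x + b*y + c.
    GtG = [[0.0] * 3 for _ in range(3)]
    Gtz = [0.0, 0.0, 0.0]
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            Gtz[i] += row[i] * z
            for j in range(3):
                GtG[i][j] += row[i] * row[j]
    # Signed cofactors of a 3x3 matrix via cyclic indices.
    m = GtG
    cof = [[(m[(i + 1) % 3][(j + 1) % 3] * m[(i + 2) % 3][(j + 2) % 3] -
             m[(i + 1) % 3][(j + 2) % 3] * m[(i + 2) % 3][(j + 1) % 3])
            for j in range(3)] for i in range(3)]
    det = sum(m[0][j] * cof[0][j] for j in range(3))
    # inv[i][j] = cof[j][i] / det (the adjugate is the transposed cofactor matrix),
    # so b = inv(G'G) * (G'z) expands to:
    b = [sum(cof[j][i] * Gtz[j] for j in range(3)) / det for i in range(3)]
    normal = (b[0], b[1], -1.0)
    mag = sum(t * t for t in normal) ** 0.5
    return tuple(t / mag for t in normal)

pts = [(0, 0, 1), (0, 1, 2), (0, 2, 3),
       (1, 0, 1), (1, 1, 2), (1, 2, 3),
       (2, 0, 1), (2, 1, 2), (2, 2, 3)]
print(fit_plane_normal(pts))  # ~ (0.0, 0.707, -0.707)
```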

As you can see, none of this is simple, and most of it replicates numpy functionality while being a lot slower. So make sure you have absolutely no other choice.

Martin Stancsics
  • Martin, thanks for the pointers. Because the host programs (C4D, Maya, 3DS Max, etc.) all have matrix, dot, and norm functions, maybe this won't be so hard. Given that I have those functions available, which of the 2 options that you recommended do you think I should try? – terrachild May 29 '16 at 21:41
  • qr decomposition is numerically stable, so prefer that if it is available; singular value decomposition is also stable and can be used to compute inverses, but chances are if you have SVD you also have QR available. – Neapolitan May 29 '16 at 22:18
  • I would also go with QR decomposition for any serious application. – Martin Stancsics May 30 '16 at 07:28

I wrote some code with a similar purpose (see the "tangentplane_3D" function in the linked code).

In my case I had a scatter cloud of points that define a 3D ellipsoid. For each point I wanted to determine the tangent plane to the ellipsoid containing such point --> Goal: Determination of a 3D plane.

The problem can be seen in the following way: a plane is defined by its normal, and that normal can be found as the eigenvector associated with the smallest eigenvalue of the covariance matrix of the set of points.

What I did, and you can check in the code I posted, is to select the k points closest to the point of interest at which I wanted to calculate the tangent plane. Then I performed a 3D Singular Value Decomposition of these k points. Finally, from this SVD I selected the smallest singular value and its associated right-singular vector, which is in fact the normal of the plane that best fits the set of points, and thus, in my case, tangent to the ellipsoid. With the normal vector and the point you can subsequently calculate the complete plane equation.
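A minimal sketch of that idea with NumPy (not the posted tangentplane_3D code itself; the last right-singular vector of the centered points serves as the plane normal, defined only up to sign):

```python
import numpy as np

points = np.array([[0, 0, 1], [0, 1, 2], [0, 2, 3],
                   [1, 0, 1], [1, 1, 2], [1, 2, 3],
                   [2, 0, 1], [2, 1, 2], [2, 2, 3]], dtype=float)
centered = points - points.mean(axis=0)  # shift the centroid to the origin
_, _, vh = np.linalg.svd(centered)       # rows of vh are right-singular vectors
normal = vh[-1]                          # smallest singular value -> plane normal
print(normal)
```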

I hope it helps!!

Best wishes.

pedro_galher
  • However, this solution requires numpy for the singular value decomposition computation. I don't know if it still helps you in that case. – pedro_galher May 13 '19 at 13:52