
My program tries to solve a system of linear equations. In order to do that, it assembles a matrix coeff_matrix and a vector value_vector, and uses Eigen to solve the system like this:

Eigen::VectorXd sol_vector = coeff_matrix
        .colPivHouseholderQr().solve(value_vector);

The problem is that the system can be both over- and under-determined. In the former case, Eigen gives either a correct or an incorrect solution, and I check the solution using coeff_matrix * sol_vector - value_vector.
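For reference, that check boils down to something like the following, continuing the snippet above (the tolerance is arbitrary, just a sketch):

// Accept the solution only if the residual A*x - b is small enough.
double residual = (coeff_matrix * sol_vector - value_vector).norm();
bool consistent = residual < 1e-10 * (1.0 + value_vector.norm());  // tolerance is a guess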

However, please consider the following system of equations:

a + b - c     =  0
        c - d =  0
        c     = 11
      - c + d =  0

In this particular case, Eigen solves the latter three equations correctly but also gives solutions for a and b.

What I would like to achieve is that only the equations which have a unique solution are solved, while the remaining ones (the first equation here) are retained in the system.

In other words, I'm looking for a method to find out which equations in a given system can be solved at a given time, and which cannot because they would have more than one solution.

Could you suggest any good way of achieving that?

Edit: please note that in most cases the matrix won't be square. I've added one more row here just to note that over-determination can happen too.

Michał Górny

3 Answers


I think what you want is the singular value decomposition (SVD), which will give you exactly what you want. After the SVD, "the equations which have only one solution will be solved", and the solution is the pseudoinverse applied to the right-hand side. It will also give you the null space (where infinite solutions come from) and the left null space (where inconsistency comes from, i.e. no solution).
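For illustration, here is a minimal sketch of how this could look with Eigen's JacobiSVD, using the example system from the question (the tolerance and the variable names are my own assumptions):

#include <Eigen/Dense>
#include <iostream>

int main() {
    // The example system from the question:
    // a + b - c = 0,  c - d = 0,  c = 11,  -c + d = 0
    Eigen::MatrixXd coeff_matrix(4, 4);
    coeff_matrix << 1,  1, -1,  0,
                    0,  0,  1, -1,
                    0,  0,  1,  0,
                    0,  0, -1,  1;
    Eigen::VectorXd value_vector(4);
    value_vector << 0, 0, 11, 0;

    // SVD: coeff_matrix = U * S * V^T
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(coeff_matrix,
            Eigen::ComputeThinU | Eigen::ComputeFullV);

    // Minimum-norm least-squares (pseudoinverse) solution.
    Eigen::VectorXd sol_vector = svd.solve(value_vector);

    // Columns of V belonging to (near-)zero singular values span the null space.
    // A variable is uniquely determined iff its row in that block is ~0.
    double tol = 1e-10 * svd.singularValues()(0);            // threshold is a guess
    Eigen::Index rank = (svd.singularValues().array() > tol).count();
    Eigen::MatrixXd null_space =
            svd.matrixV().rightCols(coeff_matrix.cols() - rank);

    std::cout << "solution:\n" << sol_vector << "\n\n"
              << "null space basis:\n" << null_space << "\n";
}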

chaohuang
  • Could you point me to some page or document which would help me understand how to use it? Wikipedia's talking a lot of maths... – Michał Górny Jul 10 '12 at 15:51
  • Actually it requires quite a bit of maths to learn SVD, which is the climax of linear algebra. I just googled some introductions, see [here](http://stat.asu.edu/~eric/msri/matlab2.pdf) and [here](http://www.math.pitt.edu/~sussmanm/2071Spring08/lab09/index.html). HTH. – chaohuang Jul 10 '12 at 16:01
  • Thanks. I moved around null spaces a bit and assembled something which I pasted as a solution (since comments aren't good for actual code). Not sure if I invented something real or just a random pattern though. – Michał Górny Jul 10 '12 at 17:50

Based on the SVD comment, I was able to do something like this:

Eigen::FullPivLU<Eigen::MatrixXd> lu = coeff_matrix.fullPivLu();

Eigen::VectorXd sol_vector = lu.solve(value_vector);
Eigen::VectorXd null_vector = lu.kernel().rowwise().sum();

AFAICS, the null_vector rows corresponding to variables with a single solution are 0s while the ones corresponding to indeterminate variables are 1s. I can reproduce this throughout all my examples with the default threshold Eigen has.

However, I'm not sure if I'm doing something correct or just noticed a random pattern.
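If that observation holds, a slightly more robust variant (just a sketch under the same assumptions) would check the row norms of the kernel basis rather than the row sums, since entries from different kernel columns could cancel each other out in a sum:

#include <Eigen/Dense>
#include <vector>

// For each variable, report whether the (consistent) system determines it
// uniquely. Sketch only; the tolerance is a guess.
std::vector<bool> determined_variables(const Eigen::MatrixXd &coeff_matrix,
                                       double tol = 1e-10) {
    Eigen::FullPivLU<Eigen::MatrixXd> lu(coeff_matrix);
    Eigen::MatrixXd kernel = lu.kernel();  // columns form a basis of the null space

    std::vector<bool> result(coeff_matrix.cols());
    for (Eigen::Index i = 0; i < kernel.rows(); ++i)
        // Variable i is fixed iff no null-space direction moves it,
        // i.e. the i-th row of the kernel basis is (numerically) zero.
        result[i] = kernel.row(i).norm() < tol;
    return result;
}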

Michał Górny
  • If I remember my linear algebra correctly, it seems right. I've used similar tricks on applied linear algebra exams where we were allowed to use matlab, but had to work out the solutions step-by-step. – Kuba hasn't forgotten Monica Jul 29 '12 at 03:40

What you need is to calculate the determinant of your system. If the determinant is 0, then you have an infinite number of solutions (or none at all). If the determinant is very small, a solution exists, but I wouldn't trust the solution found by a computer (it will lead to numerical instabilities).

Here is a link to what is the determinant and how to calculate it: http://en.wikipedia.org/wiki/Determinant
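A minimal sketch of that check with Eigen, for the square case (the example values and the tolerance are made up):

#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main() {
    // Hypothetical singular 2x2 example; the determinant test only applies to square matrices.
    Eigen::MatrixXd coeff_matrix(2, 2);
    coeff_matrix << 1, 2,
                    2, 4;   // second row is a multiple of the first

    double det = coeff_matrix.determinant();
    if (std::abs(det) < 1e-12)   // tolerance is a guess; tune it for your data
        std::cout << "singular or nearly singular: no unique solution\n";
    else
        std::cout << "determinant = " << det << ", unique solution exists\n";
}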

Note that Gaussian elimination should also work: http://en.wikipedia.org/wiki/Gaussian_elimination With this method, you end up with rows of 0s if there are an infinite number of solutions.

Edit

In case the matrix is not square, you first need to extract a square matrix. There are two cases:

  1. You have more variables than equations: then you have either no solution, or an infinite number of them.
  2. You have more equations than variables: in this case, find a square sub-matrix with non-null determinant. Solve for this sub-matrix and check the solution. If the solution doesn't fit the remaining equations, you have no solution. If it fits, it means the extra equations were linearly dependent on the extracted ones.

In both cases, before checking the dimension of the matrix, remove rows and columns with only 0s.

As for Gaussian elimination, it should work directly with non-square matrices. However, this time, you should check that the number of non-empty rows (i.e. rows with some non-0 values) is equal to the number of variables. If it's less, you have an infinite number of solutions, and if it's more, you don't have any solutions.
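Following this rank-based reasoning, here is a minimal sketch of such a check with Eigen (the names SystemKind and classify are made up for illustration, and it uses Eigen's rank-revealing LU rather than explicit elimination):

#include <Eigen/Dense>

// Rough classification of A*x = b via ranks (Rouché–Capelli); a sketch only.
enum class SystemKind { Unique, Infinite, None };

SystemKind classify(const Eigen::MatrixXd &A, const Eigen::VectorXd &b) {
    Eigen::MatrixXd augmented(A.rows(), A.cols() + 1);
    augmented << A, b;                        // [A | b]

    Eigen::Index rank_A  = A.fullPivLu().rank();
    Eigen::Index rank_Ab = augmented.fullPivLu().rank();

    if (rank_A < rank_Ab)   return SystemKind::None;      // inconsistent system
    if (rank_A < A.cols())  return SystemKind::Infinite;  // free variables remain
    return SystemKind::Unique;
}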

PierreBdR
  • The determinant is only defined for square matrices. – Michał Górny Jul 10 '12 at 14:35
  • @MichałGórny I edited my solution for the non-square case. But only a square matrix admits a unique solution. Also, your example was with a square matrix. – PierreBdR Jul 10 '12 at 14:48
  • My example is also an over-determined matrix where two of the variables have a unique solution. Nothing is that simple... By the way, I was thinking of something similar but I'm afraid that all those calculations would make the solving awfully inefficient. – Michał Górny Jul 10 '12 at 15:23