
I'm writing Python code using NumPy. In my code I use `linalg.solve` to solve a linear system of n equations in n variables. Of course the solutions can be either positive or negative. What I need is for the solutions to always be positive, or at least equal to 0. To do so, I first want to solve my linear system of equations in this form

x=np.linalg.solve(A,b)

in which x is an array of the n variables in a specific order (x1, x2, x3, ..., xn), A is an n x n square matrix and b is an n-dimensional array. Now I thought to do this:

-solve the system of equations

-check if every x is positive

-if not, set every negative x to 0 (for example x2 = -2 ----> x2 = 0)

-for a generic xn = 0, eliminate the n-th row and the n-th column of the square matrix A (obtaining another square matrix A1) and eliminate the n-th element of b, obtaining b1

-solve the system again with the matrix A1 and b1

-re-iterate until every x is positive or zero

-at last, build a final array of n elements containing the last iteration's solutions plus every variable that was set to zero (I NEED THEM IN THE SAME ORDER AS IF THERE HAD BEEN NO ITERATIONS, so if during the iterations it was x2 = 0 -----> xfinal = [x1, 0, x3, ..., xn])

I think it'll work but I don't know how to do it in Python.

Hope I was clear. Can't really figure it out!

nearchos

2 Answers


You have a minimization problem, i.e.

min ||Ax - b||
s.t. x_i >= 0 for all i in [0, n-1]

You can use the optimize module from SciPy:

import numpy as np
from scipy.optimize import minimize

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 10.]])
b = np.array([6., 12., 21.])
n = len(b)

# Unconstrained solution: Ax = b --> x = [1., -2., 3.]

fun = lambda x: np.linalg.norm(np.dot(A, x) - b)
# Alternative: start from the unconstrained solution and use SLSQP with an
# inequality constraint instead of bounds:
# xo = np.linalg.solve(A, b)
# sol = minimize(fun, xo, method='SLSQP', constraints={'type': 'ineq', 'fun': lambda x: x})
sol = minimize(fun, np.zeros(n), method='L-BFGS-B', bounds=[(0., None) for _ in range(n)])

x = sol['x'] # [2.79149722e-01, 1.02818379e-15, 1.88222298e+00]
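As a side note (my addition, not part of the original answer): SciPy also ships a dedicated solver for exactly this problem, `scipy.optimize.nnls`, which solves min ||Ax - b|| subject to x >= 0 directly, without having to set up a generic minimizer:

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 10.]])
b = np.array([6., 12., 21.])

# nnls returns the nonnegative solution vector and the residual norm ||Ax - b||
x, rnorm = nnls(A, b)
```

Since it is a specialized routine, it tends to be both faster and more robust than a general-purpose minimizer on this kind of problem.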

With your method I get x = [ 0.27272727, 0., 1.90909091].

In case you still want to use your own algorithm, here it is:

n = len(b)
pos = np.arange(n)          # indices of the variables still in the system
xp = np.linalg.solve(A, b)

while np.any(xp < 0.):
    pos = pos[xp >= 0.]     # permanently eliminate the variables that came out negative
    Ap = A[pos][:, pos]     # reduced matrix without those rows and columns
    bp = b[pos]
    xp = np.linalg.solve(Ap, bp)

x = np.zeros(n)             # final solution in the original order, zeros where eliminated
x[pos] = xp

But I don't recommend using it; you should use the minimize option.
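Run on the 3x3 example matrix from above, the elimination loop reproduces the values quoted earlier (a self-contained check, not part of the original answer):

```python
import numpy as np

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 10.]])
b = np.array([6., 12., 21.])

n = len(b)
pos = np.arange(n)                  # variables still in the system
xp = np.linalg.solve(A, b)          # unconstrained solution: [1., -2., 3.]

while np.any(xp < 0.):
    pos = pos[xp >= 0.]             # drop the negative variables for good
    xp = np.linalg.solve(A[pos][:, pos], b[pos])

x = np.zeros(n)
x[pos] = xp
# x is now [3/11, 0, 21/11], i.e. approximately [0.27272727, 0., 1.90909091]
```

Here x2 comes out negative in the first solve, gets eliminated, and the reduced 2x2 system yields positive values for x1 and x3.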

sebacastroh
  • Done with the minimize option as you suggested. Is it normal that my computational time has increased drastically? For a 100-element array (for the variables) it was 1 second before, now it needs 40 seconds. Can't imagine with a 1000-element array! – nearchos May 02 '16 at 16:00
  • What are you comparing? The minimization and your algorithm? Even though it depends on the numbers in your matrix and vector, 40 seconds seems like a lot for a 100 x 100 matrix. On my computer a random example didn't take that long, it was less than a second. Are you sure the minimization is taking all the time in your code? – sebacastroh May 02 '16 at 18:09
  • Right now it took just 1 second. I don't know why it took 40 seconds before. By the way, I think it's normal that a 1000x1000 needs a lot more! Is SLSQP the best method in my case? Can you explain to me (in an easy way if possible, I know nothing about this stuff) why? – nearchos May 02 '16 at 18:28
  • I'm not an expert on the methods used by SciPy's optimization module, really. However, I noticed that it is better to change the constraint to the boundary option. If you do that you can use a faster method: minimize(fun, np.zeros(n), method='L-BFGS-B', bounds=[(0.,None) for x in xrange(n)]). I'll edit my answer to add this. – sebacastroh May 02 '16 at 18:59
  • I tried this way too but it doesn't give me as good a result as the other method did. I don't really know why; the only thing I had to change was xrange to range (I use Python 3). By the way, can you explain the principles you used to set the boundary condition in the first case (and the second, even if it seems not to work)? I need to know what I'm doing (thesis work for graduation). – nearchos May 02 '16 at 19:18
  • it should be `n = A.shape[1]` for non-square matrices, because n is the dimension of the domain (i.e. the length of the input vector), which need not equal the length of b – pas-calc Dec 11 '21 at 18:40

Even faster and more reliable, also based on minimization, is scipy.optimize.lsq_linear, which is dedicated to linear least-squares problems with bounds.

Compared to minimize, the bounds are transposed (one array of lower bounds and one of upper bounds) and use np.inf instead of None for the upper bound.

Working example:

import numpy as np
from scipy.optimize import lsq_linear

n = A.shape[1]
# bounds is a (lower_bounds, upper_bounds) pair; transposing the (n, 2) array
# of per-variable bounds yields exactly that shape
res = lsq_linear(A, b, bounds=np.array([(0., np.inf) for i in range(n)]).T, lsmr_tol='auto', verbose=1)
y = res.x

You provide a matrix A and a vector b that have the same number of rows (= the dimension of the range of A).
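A quick check (my own example, reusing the 3x3 matrix from the first answer): bounds can also be given as a single scalar pair, which broadcasts over all variables, so the list comprehension above isn't strictly needed:

```python
import numpy as np
from scipy.optimize import lsq_linear

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 10.]])
b = np.array([6., 12., 21.])

# A scalar (lower, upper) pair applies the same bounds to every variable
res = lsq_linear(A, b, bounds=(0., np.inf))
x = res.x  # nonnegative least-squares solution
```

res is an OptimizeResult; besides res.x it also exposes the residual vector (res.fun) and the cost, which is handy for checking how well the constrained fit matches b.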

pas-calc