I wrote a Python script that imports some data, which I then manipulate to get a nonsquare matrix A. I then used the following code to solve the matrix equation:
import numpy as np
from scipy.optimize import lsq_linear

res = lsq_linear(A_normalized, Y, bounds=(0, np.inf), method='bvls')
X = res.x  # lsq_linear returns an OptimizeResult; the solution vector is in .x
As you can see, I used this particular method because I require all the coefficients of X to be nonnegative. However, I realized that lsq_linear from scipy.optimize minimizes the L2 norm of AX - Y. I was wondering whether anyone knows of an alternative to lsq_linear that minimizes the L1 norm instead. I have looked through the SciPy documentation, but so far I haven't had any luck finding one myself.
(Note that I actually know what Y is and what I am trying to solve for is X).
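For reference, minimizing the L1 norm of Ax - y subject to x >= 0 can be recast as a linear program by introducing one auxiliary variable t_i per row bounding |.(Ax - y)_i|, which keeps everything inside SciPy via scipy.optimize.linprog. A sketch with made-up random data (the shapes and names A, y here are stand-ins, not my real A_normalized and Y):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = rng.random((10, 4))   # stand-in for the real nonsquare matrix
y = rng.random(10)        # stand-in for the real right-hand side

m, n = A.shape
# Variables are [x (n entries), t (m entries)]; minimize sum(t) subject to
#   A x - y <= t   ->   A x - t <= y
#  -(A x - y) <= t ->  -A x - t <= -y
# Together these force t_i >= |(A x - y)_i|, so sum(t) is the L1 residual.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A,  -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
bounds = [(0, None)] * (n + m)   # x >= 0 (the nonnegativity I need), t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_l1 = res.x[:n]                 # the L1-optimal nonnegative solution
print(res.fun)                   # the minimized L1 residual sum
```

At the optimum, res.fun equals the sum of |(A @ x_l1 - y)_i|, i.e. exactly the L1 objective.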
Edit: After trying various suggestions from the comments and much frustration, I finally managed to more or less make it work using cvxpy. However, there is a problem with it. First of all, the elements of X are supposed to all be nonnegative, but that is not the case. Moreover, when I multiply the matrix A_normalized by X, the result is not equal to Y. My code is below; any suggestions on what I can do to fix it would be highly appreciated. (By the way, my original use of lsq_linear in the code above gave me an X that satisfied A_normalized @ X = Y.)
import cvxpy as cp
from cvxpy.atoms import norm
import numpy as np
Y = specificity  # my data vector (length matches the rows of A_normalized)
x = cp.Variable(22)
objective = cp.Minimize(norm((A_normalized @ x - Y), 1))
constraints = [0 <= x]
prob = cp.Problem(objective, constraints)
result = prob.solve()
print("Optimal value", result)
print("Optimal var")
X = x.value # A numpy ndarray.
print(X)
print(np.allclose(A_normalized @ X, Y))  # elementwise == is too strict for floats
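Two things worth knowing about this kind of check: numerical solvers only satisfy constraints and equalities to within a tolerance, so tiny negative entries in X and non-exact equality with Y are expected even when the solve succeeded. A sketch of a tolerance-aware check (the numbers below are made up, not my real data; the 1e-9 wobble imitates solver noise):

```python
import numpy as np

# Made-up stand-ins for the solver output; my real A_normalized has 22 columns.
A_normalized = np.array([[1.0, 0.0],
                         [0.0, 1.0],
                         [1.0, 1.0]])
Y = np.array([0.3, 0.7, 1.0])
X = np.array([0.3, 0.7 - 1e-9])   # a tiny deviation like a solver would return

X = np.clip(X, 0.0, None)          # force exact nonnegativity after solving
residual = A_normalized @ X - Y
print(np.allclose(A_normalized @ X, Y, atol=1e-6))  # equality within tolerance
print(np.abs(residual).sum())                       # the L1 objective value
```

If np.allclose still fails with a reasonable atol, the system is likely inconsistent (an overdetermined A generally admits no exact solution), and the L1 residual printed above tells you how far off the best fit is.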