I am using scipy.optimize.minimize to minimize a function subject to an l2-norm equality constraint and non-negativity constraints on the computed parameters (some related links 1, 2, 3). More specifically, I have tried
con = ({'type': 'ineq', 'fun': lambda x: x}, {'type': 'eq', 'fun': lambda x: np.dot(x.T, x) - 1})
or
con = {'type': 'eq', 'fun': lambda x: np.dot(x.T, x) - 1}
bounds = [(0., None)] * n_features
with method SLSQP. The accuracy of the algorithm is fine if the non-negativity constraint is not used. However, when the non-negativity constraint is added, most of the computed parameters x become zero and the algorithm returns low accuracy. To avoid this, I have tried different initializations, e.g., non-negative x0 with l2 norm equal to 1, x0 values close to zero, and x0 values close to 1, but none of them helped. Any suggestions to improve the functionality of the algorithm are highly appreciated.
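For reference, here is a minimal runnable sketch of the setup described above. Since the actual objective is not shown, a least-squares function with random A and b stands in for it; those, the seed, and n_features = 5 are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_features = 5

# Hypothetical stand-in objective: least squares ||Ax - b||^2.
A = rng.normal(size=(20, n_features))
b = rng.normal(size=20)

def objective(x):
    r = A @ x - b
    return r @ r

# Unit l2-norm as an equality constraint; non-negativity via bounds.
con = {'type': 'eq', 'fun': lambda x: np.dot(x, x) - 1.0}
bounds = [(0., None)] * n_features

# Feasible start: non-negative x0 with unit l2 norm.
x0 = np.full(n_features, 1.0 / np.sqrt(n_features))

res = minimize(objective, x0, method='SLSQP',
               bounds=bounds, constraints=con)
print(res.x, np.linalg.norm(res.x))
```

With bounds removed, the same call reproduces the unconstrained-sign variant that gives good accuracy.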
EDIT: A running example can be found here.