
I'm trying to solve the following optimization problem, for data x_1, ..., x_n (d-dimensional vectors):

[objective function — equation image not recovered]

where the variables are \lambda_{ij}, i = 1, ..., n, j = 1, ..., k (real numbers), and w_1, ..., w_k, vectors in R^d,

under the constraints

[constraints — equation image not recovered]

for h = 2, ..., d and all i.

So the objective function is convex, but the feasible region defined by the constraints is not.

I'm completely new to the optimization ecosystem in Python. Is there a de-facto standard for this kind of problem, or at least a suggestion on where to start (scipy? Pyomo?)?
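Since the equation images above are missing, here is a hedged, illustrative sketch of what a `scipy.optimize.minimize` formulation could look like. It assumes a least-squares objective over y_i = Σ_j λ_ij w_j (the substitution mentioned in the comments below); the actual objective and constraints from the question may differ, and the constraints are omitted entirely.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative stand-in only: the question's actual objective/constraint
# images did not survive, so we assume a least-squares fit of each x_i by
# y_i = sum_j lambda_{ij} w_j.  The joint problem in (lambda, W) is bilinear,
# hence nonconvex, even though it is convex in y_i alone.
rng = np.random.default_rng(0)
n, d, k = 20, 5, 2          # n points in R^d, k < d basis vectors
X = rng.standard_normal((n, d))

def unpack(z):
    lam = z[:n * k].reshape(n, k)   # lambda_{ij}, i = 1..n, j = 1..k
    W = z[n * k:].reshape(k, d)     # w_1, ..., w_k stacked as rows
    return lam, W

def objective(z):
    lam, W = unpack(z)
    return np.sum((X - lam @ W) ** 2)

z0 = rng.standard_normal(n * k + k * d)
res = minimize(objective, z0, method="L-BFGS-B")
print(res.fun)  # local minimum found from this particular start
```

Note that `minimize` is a local solver: for a nonconvex problem it only guarantees a stationary point, and the result depends on the initial guess `z0`.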

mariob6
  • I think we are missing something. From this, we could just replace `y(i)=sum(j,lambda(i,j)*w(j))` and optimize for `y(i)`. Also note that there are good non-convex solvers around. – Erwin Kalvelagen Jun 25 '20 at 08:22
  • I think the point here is that k < d, so we cannot optimize for y(i) without taking into account the fact that they live in a k-dimensional subspace of R^d. Can you name a couple of these non-convex solvers, so that I can start looking into them? – mariob6 Jun 25 '20 at 08:54
  • 1
    Baron, Couenne, Antigone, Gurobi to name a few. – Erwin Kalvelagen Jun 25 '20 at 09:05
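Short of the global solvers named above (BARON, Couenne, ANTIGONE, Gurobi — typically driven from Python via Pyomo or a vendor API), a common lightweight workaround with scipy's purely local solvers is a multistart: solve from several random initial points and keep the best local optimum. A minimal sketch on a toy nonconvex function (not the question's actual problem):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def f(z):
    # Toy nonconvex objective with several local minima in z[0].
    return (z[0] ** 2 - 4) ** 2 + np.sin(3 * z[0]) + z[1] ** 2

# Multistart: run a local solver from 10 random starts, keep the best result.
best = min(
    (minimize(f, rng.uniform(-3, 3, size=2)) for _ in range(10)),
    key=lambda r: r.fun,
)
print(best.x, best.fun)
```

This gives no global-optimality guarantee, but it is often a reasonable first step before setting up a full global solver through Pyomo.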

0 Answers