
I have a problem that is almost in quadratic form, but involves an extra set of vector variables.

That is, I want to find the vector x that minimizes

J = x' C x

where C is a known positive definite matrix. The minimization is subject to the usual constraints

A x >= b

x >= 0

for fixed constraint matrix A and vector b. So far, this is a standard problem that can easily be solved.

However, I also have the additional vector variable y and constraints as follows:

x = F y + g

y >= 0

The matrix F and the vector g are known, but F is not invertible (or even square). In my particular problem, y has a much smaller dimension than x, so F has many more rows than columns.

I want to solve for both x and y. My first attempt was to create an extended vector z = (x, y) and re-write the problem in terms of z. However, doing so results in a problem of the form:

Minimize z' Q z,

but now Q is singular. So solve.QP cannot be used.
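To make the issue concrete with a small numeric sketch (the matrices below are made up for illustration, using numpy just to show the algebra; the situation in R is the same): zero-padding a positive definite C up to the dimension of z leaves it positive semi-definite but singular.

```python
import numpy as np

# Illustrative matrices, not actual problem data.
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])                  # positive definite objective for x
assert np.all(np.linalg.eigvalsh(C) > 0)

# Extending to z = (x, y) zero-pads C; the padded Q is only
# positive SEMI-definite, hence singular, and solve.QP rejects it.
Q = np.zeros((4, 4))
Q[:2, :2] = C
eig = np.linalg.eigvalsh(Q)
assert np.all(eig >= -1e-12)                # PSD
assert np.isclose(min(eig), 0.0)            # ...but singular
```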

I can solve this problem easily in CPLEX, AMPL, etc., but I specifically want to solve it in R using quadprog.

Can anybody tell me how to rewrite the problem so that I can solve for both x and y, specifically using solve.QP?

Thanks!

QuantRabbi
  • Isn't that just a linear constraint? So besides Ax >= b you would have F y + g = x (where the dimensions must be compatible, of course: e.g. rows(F) = rows(x)). In quadprog's API this would probably just be a vertical stack of (F, A), where F gets an extra column to handle the offset, and `meq` marks the border between the F rows and the A rows. – sascha Oct 08 '18 at 18:43
  • Thanks sascha. No, it's not just a linear constraint because y is unknown. I need to solve for both x and y. – QuantRabbi Oct 08 '18 at 18:52
  • There is no problem if `Q` is smaller: `min x'Qx subject to Ax+By=b` (as long as Q is positive semi-definite). I.e. we can solve `min x1^2 subject to Ax=b`. – Erwin Kalvelagen Oct 09 '18 at 09:09

1 Answer


I don't see your non-linearity (see the edit remark!).

    x = F y + g
<-> 0 = F y + g - x

which for example looks like:

F = (1,2
     3,4
     5,6)
y = (y0
     y1)
x = (x0
     x1
     x2)

and then:

    0 = F y + g - x
<-> 0 = D z 

where

    D = (F, 1, -I)
<-> D = (1, 2, 1, -1,  0,  0
         3, 4, 1,  0, -1,  0
         5, 6, 1,  0,  0, -1)

and z = (y0
         y1
         g
         x0
         x1
         x2)

plus an equality constraint pinning g to its known value (or a bound, if the API has nothing better)

Edit: Well, I treated g as a scalar, but it should be clear what to do if it is a vector.
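A quick numeric check of this construction, with numpy standing in for the R code (the matrix D below keeps g as a scalar variable entry of z, as in the layout above):

```python
import numpy as np

F = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# z = (y0, y1, g, x0, x1, x2); D encodes 0 = F y + g - x row by row,
# i.e. D = (F, 1, -I).
D = np.hstack([F, np.ones((3, 1)), -np.eye(3)])

# For any y and scalar g, setting x = F y + g must give D z = 0.
y = np.array([0.5, -1.0])
g = 2.0
x = F @ y + g
z = np.concatenate([y, [g], x])
assert np.allclose(D @ z, 0.0)
```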

sascha
  • Thanks again. I appreciate your input. I'm not sure I follow your response. What happened to C? Remember, I am trying to minimize x' C x. Also note that g is a constant vector. – QuantRabbi Oct 08 '18 at 19:12
  • C is the objective matrix, not a constraint, so we don't touch it (though if you go the z route, you need to replace x with z and zero-pad C to the larger dimension). g being a constant vector does not hurt; the construction above changes in trivial ways. If this is unintuitive, you probably should use a less low-level wrapper. – sascha Oct 08 '18 at 19:12
  • The problem is that if I extend the unknown vector from x to z = (x, y), then I must replace the objective matrix (because the dimension must match) with something that always turns out to be singular. The changes are indeed trivial, but solve.QP cannot solve with a singular objective matrix. – QuantRabbi Oct 08 '18 at 19:18
  • C needs to be PSD, and that's not touched by increasing dimensionality with zeros. Singularity usually comes into play during solving (`... matrix is singular` errors; but that's not matrix C being reasoned about; usually those come from the KKT-based linear systems). – sascha Oct 08 '18 at 19:24
  • The original C is PD. Padding it with zeros causes it to become singular. – QuantRabbi Oct 08 '18 at 19:33
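One way around the singular padded matrix, for what it's worth: eliminate x via the substitution x = F y + g and solve a QP in y alone. The reduced objective matrix F' C F is positive definite whenever C is positive definite and F has full column rank, so solve.QP's Dmat requirement is met; the constraints translate as (A F) y >= b - A g, F y >= -g, and y >= 0, all linear in y. A numpy sketch of the algebra (illustrative data, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
F = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])                  # full column rank
g = np.array([1.0, 1.0, 1.0])
M = rng.normal(size=(3, 3))
C = M @ M.T + 3 * np.eye(3)                 # positive definite

# Substituting x = F y + g into J = x' C x gives a QP in y:
#   J(y) = y' (F' C F) y + 2 (F' C g)' y + g' C g
H = F.T @ C @ F                             # reduced quadratic term
d = F.T @ C @ g                             # half the reduced linear term

# H inherits positive definiteness from C because F has full column rank.
assert np.all(np.linalg.eigvalsh(H) > 0)

# Sanity check: the two objectives agree at x = F y + g.
y = rng.normal(size=2)
x = F @ y + g
J_x = x @ C @ x
J_y = y @ H @ y + 2 * d @ y + g @ C @ g
assert np.isclose(J_x, J_y)
```

After solving the reduced problem for y, recover x = F y + g.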