
I am working on a quadratic programming problem.

I have two matrices A and B (time series, actually), and I want to find the matrix X such that A*X is closest to B, subject to the condition that X contains only non-negative values (so X can be seen as a weight matrix).

Since it's a minimization problem with a constraint on X, I am considering quadratic programming. Specifically, my goal is to find X with:

 min sum((A*X - B).^2)

which, after expanding and dropping the constant term, becomes:

 min 1/2 * X^t * (A^t*A) * X - (B^t*A) * X
 s.t. X ≥ 0 (element-wise)

This form looks quite similar to the standard QP problem:

 min 1/2 * x^t*Q*x + c^t*x
 s.t. A*x ≤ b
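To check that the dimensions line up, here is the standard-form data for a single column of X (a small sketch in Python/NumPy with made-up random matrices; what I ultimately want is the R equivalent):

```python
import numpy as np

# Made-up sizes: A is t x m, B is t x n, so X must be m x n.
rng = np.random.default_rng(0)
t, m, n = 6, 3, 2
A = rng.random((t, m))
B = rng.random((t, n))

# ||A*X - B||^2 splits into one independent term per column of X,
# so each column x_k is its own standard QP:
#   min 1/2 x^t Q x + c^t x   s.t. x >= 0 (i.e. -I*x <= 0)
Q = A.T @ A                  # the same Q for every column
for k in range(n):
    c = -A.T @ B[:, k]       # linear term for column k
    assert Q.shape == (m, m) and c.shape == (m,)
```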

My questions are:

My X is a matrix instead of a vector as in standard QP. Is there a variant of QP for this case? Am I right to head toward QP?

How do I represent the constraint that X must be non-negative?

It would be great if you could point me to specific R functions.

Thanks a lot!

dirt
  • Which dimensions? Does the mat-mul work out algebraically? QP is quite general and mostly only positive-semidefinite QP is feasible (to solve to global-opt; convex). Creating the standardform is not that hard, but it's unclear if it's the right tool/approach yet. It sounds like a matrix-factorization, potentially [NMF](https://en.wikipedia.org/wiki/Non-negative_matrix_factorization) where special algorithms are available. But even NMF is non-convex in general. So: be much more precise and formal! – sascha Jan 11 '18 at 08:52
  • @sascha Thank you so much for your reply! Since X needs to consist of non-negative values, the mat-mul doesn't work out. Actually, I'm also thinking about NMF, but when the weights are manually set to be non-negative, the loss function always goes to infinity, and penalizing doesn't help. I guess it's because the negative ones are forced to zero. That's why I am considering QP. For the dimensions of the matrices, say A is t * m, B is t * n, and X is m * n. – dirt Jan 11 '18 at 10:01
  • I absolutely misinterpreted the task here. NMF is a very different task. Erwin's answer looks correct. – sascha Jan 11 '18 at 14:14
  • @sascha Yes LP works very well. Thank you anyway! – dirt Jan 12 '18 at 06:28

1 Answer


This should be convex and straightforward to solve with a QP algorithm. I often rewrite this as:

 min sum((i,k),d^2(i,k))
 d(i,k) = sum(j, a(i,j)*x(j,k)) - b(i,k)
 x(j,k) ≥ 0, d(i,k) free

This is now obviously convex (a diagonal Q matrix). In some cases this form may be easier to solve than putting everything in the objective. In a sense we made the problem less non-linear. You can also solve this as an LP by using a different norm:
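Before the LP variants: the convex QP above is exactly a non-negative least-squares problem (one per column of X), and dedicated solvers handle it directly. Here is a sketch in Python with SciPy's `nnls` for illustration; in R the analogues would be `nnls::nnls` or the matrix interface of `quadprog::solve.QP` (the sizes below are made up):

```python
import numpy as np
from scipy.optimize import nnls

# Made-up instance: A is t x m, B is t x n.
rng = np.random.default_rng(1)
t, m, n = 8, 4, 3
A = rng.random((t, m))
B = rng.random((t, n))

# Solve min ||A x_k - b_k||_2 subject to x_k >= 0, one column at a
# time, and reassemble the m x n matrix X.
X = np.column_stack([nnls(A, B[:, k])[0] for k in range(n)])
```

Since each column is an independent problem, this also keeps the individual solves small even when B has many columns.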

 min sum((i,k),abs(d(i,k)))
 d(i,k) = sum(j, a(i,j)*x(j,k)) - b(i,k)
 x(j,k) ≥ 0, d(i,k) free

or

 min sum((i,k),y(i,k))
 -y(i,k) ≤ d(i,k) ≤ y(i,k)
 d(i,k) = sum(j, a(i,j)*x(j,k)) - b(i,k)
 x(j,k) ≥ 0, y(i,k) ≥ 0, d(i,k) free
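The third (linearized) model can be handed to any LP solver. A sketch for a single column b of B, using SciPy's `linprog` for illustration (in R, `lpSolve` or an algebraic modeling package would play the same role; the data here is made up):

```python
import numpy as np
from scipy.optimize import linprog

# Made-up instance: A is t x m, b is one column of B.
rng = np.random.default_rng(2)
t, m = 8, 4
A = rng.random((t, m))
b = rng.random(t)

# Stacked variable z = [x (m entries), y (t entries)]; minimize sum(y).
cobj = np.concatenate([np.zeros(m), np.ones(t)])
# -y <= A x - b <= y  becomes the two row blocks
#   A x - y <= b    and    -A x - y <= -b
A_ub = np.vstack([np.hstack([A, -np.eye(t)]),
                  np.hstack([-A, -np.eye(t)])])
b_ub = np.concatenate([b, -b])
res = linprog(cobj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (m + t))  # x >= 0, y >= 0
x_col = res.x[:m]  # one column of X; repeat for each column of B
```

At the optimum each y(i) equals |d(i)|, so the LP objective value is the sum of absolute residuals for that column.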
Erwin Kalvelagen
  • Thank you so much! I also think this is what I hope to do. I'm trying LP with the third form, but got huge matrices. Still trying... – dirt Jan 11 '18 at 16:28
  • I am a bit confused about the first QP and the second LP with the absolute value. I am using R packages, but the QP solvers only provide a matrix interface, so element-wise squaring and summation aren't possible directly. Likewise, LP can't handle absolute values. Do you happen to know any packages or other tools that can handle these? Thanks a lot! – dirt Jan 11 '18 at 16:31
  • (1) Most or all QP algorithms under R use the matrix interface. This is not always easy to use but for this problem should not be too bad. (2) The abs() function is linearized in the third model (just keep on reading). (3) The OMPR package allows algebraic formulation of LP and MIP models. – Erwin Kalvelagen Jan 11 '18 at 16:46
  • Yeah, I kept on with LP, and I just got a very nice result. Thank you!! – dirt Jan 12 '18 at 06:26
  • I am a bit confused with the `sum((i,j), f(i,j))` notation here. Does the first tuple mean that we have a nested sum symbols over `i` and `j`? So, is it equivalent to `sum(i, sum(j, f(i,j)))`? – hansolo Nov 16 '20 at 11:08
  • @hansolo Same thing: sum over all combinations `i,j`. – Erwin Kalvelagen Nov 16 '20 at 13:20