
This is a newbie question. I am trying to minimize the following QP problem:

x'Qx + b'x + c, subject to A*x >= lb, where:

  1. x is a vector of coordinates,
  2. Q is a sparse, strongly diagonally dominant, symmetric matrix typically of size 500,000 x 500,000 to 1M x 1M
  3. b is a vector of constants
  4. c is a constant
  5. A is the identity matrix (so the constraint is just x >= lb),
  6. lb is a vector containing lower bounds on vector x
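
For concreteness, here is a scaled-down toy instance with the same structure (hypothetical sizes and random data, just to make the setup reproducible):

    using LinearAlgebra, SparseArrays

    n  = 1_000                           # real problems are 500k x 500k to 1M x 1M
    Q  = sprandn(n, n, 5 / n); Q = (Q + Q') / 2
    Q  = Q + Diagonal(vec(sum(abs, Q, dims=2)) .+ 1.0)  # force strong diagonal dominance
    b  = randn(n)
    c  = 0.0
    A  = sparse(I, n, n)                 # identity constraint matrix
    lb = fill(-1.0, n)                   # lower bounds on x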

Following are the packages I have tried:

  1. Optim.jl: It has a primal interior-point algorithm for simple "box" constraints. I have tried playing around with the inner_optimizer, setting it to GradientDescent() or ConjugateGradient(), but no matter what, it is very slow for my problem sizes.

  2. IterativeSolvers.jl: It has a conjugate gradient solver, but there is no way to impose constraints on the QP problem.

  3. MathProgBase.jl: It has a dedicated solver for Quadratic Programming, Ipopt(). It works wonderfully for small data sets, typically around 3K x 3K matrices, but it takes too long for the kind of data sets I am looking at. I am aware that changing the linear system solver from MUMPS to HSL or WSMP may produce a significant improvement, but is there a way to plug third-party linear system solvers into Ipopt() through Julia?

  4. OSQP.jl: This again takes too long to converge for the data sets that I am interested in.

Also, has anybody here worked with data sets of this size? If so, can you suggest a way to solve a problem of this scale quickly in Julia using the existing packages?

  • It's overkill for this problem, but some of the algorithms from `NLopt` will work, and should be fast. Also note that the unconstrained variant of this problem has a closed-form solution, so you should always check that solution first to see if it satisfies the constraint, before going on to try a numerical solver (a quick sketch of that check follows below). – Colin T Bowers Jul 12 '18 at 00:26
  • Just to be precise: Ipopt is not a solver dedicated to QP, but a generic NLP solver. On Linux, if you already have it installed on the system, Julia packages normally use your installation, so you can try the different linear solvers you cited and the few others that Ipopt supports. – Antonello Jul 12 '18 at 05:21
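
For reference, the closed-form check mentioned in the first comment could look like this (a sketch: with Q symmetric, the gradient of x'Qx + b'x + c is 2Qx + b, and strong diagonal dominance with a positive diagonal makes Q positive definite, so a sparse Cholesky applies):

    using LinearAlgebra, SparseArrays

    F = cholesky(Symmetric(Q))   # sparse Cholesky (CHOLMOD)
    xstar = F \ (-b ./ 2)        # unconstrained minimizer: solves 2Qx + b = 0
    if all(xstar .>= lb)
        # xstar already satisfies x >= lb, so it is the constrained minimizer too
    end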

1 Answer


You can try the OSQP solver with different parameters to speed up convergence for your specific problem. In particular:

  • If you have multiple cores, MKL Pardiso can significantly reduce the execution time. You can find details on how to install it here (it basically consists of running the default MKL installer). After that, you can use it in OSQP as follows:

    using OSQP
    # OSQP minimizes (1/2)x'Px + q'x, so P = 2Q here; with only lower bounds, u can be Inf
    model = OSQP.Model()
    OSQP.setup!(model; P=2 * Q, q=b, A=A, l=lb, u=fill(Inf, length(lb)), linsys_solver="mkl pardiso")
    results = OSQP.solve!(model)
    
  • The number of iterations depends on the step-size parameter rho. OSQP automatically updates it, trying to find the best one. If you have a specific problem, you can disable this automatic adaptation and play with rho yourself. Here is an example that fixes a particular rho:

    model = OSQP.Model()
    OSQP.setup!(model; P=2 * Q, q=b, A=A, l=lb, u=fill(Inf, length(lb)),
                linsys_solver="mkl pardiso", adaptive_rho=false, rho=1e-3)
    results = OSQP.solve!(model)
    

    I suggest trying different rho values, maybe log-spaced between 1e-06 and 1e+06, as in the sketch below.
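
    A minimal sweep could look like this (a sketch reusing Q, b, A, lb from the question; status and iter are the fields exposed by OSQP.jl's results):

    using OSQP

    function sweep_rho(Q, b, A, lb)
        ub = fill(Inf, length(lb))
        best = nothing
        for rho in exp10.(-6:2:6)          # 1e-6, 1e-4, ..., 1e6
            model = OSQP.Model()
            OSQP.setup!(model; P=2 * Q, q=b, A=A, l=lb, u=ub,
                        adaptive_rho=false, rho=rho)
            results = OSQP.solve!(model)
            if results.info.status == :Solved &&
               (best === nothing || results.info.iter < best.info.iter)
                best = results             # keep the fastest-converging run
            end
        end
        return best
    end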

  • You can also rescale the problem data so that the condition number of your matrices is not too high; this can significantly reduce the number of iterations. A simple diagonal scaling is sketched below.
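
    For example, a symmetric Jacobi scaling (a sketch; strong diagonal dominance with a positive diagonal means d below is well defined):

    using LinearAlgebra, SparseArrays

    d   = 1 ./ sqrt.(Vector(diag(Q)))  # scale so the scaled diagonal is all ones
    Ds  = Diagonal(d)
    Qs  = Ds * Q * Ds                  # scaled quadratic term, still sparse and symmetric
    bs  = d .* b                       # scaled linear term
    lbs = lb ./ d                      # x >= lb becomes y >= lbs under x = d .* y
    # solve the scaled QP in y, then recover x = d .* y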

I am pretty sure that if you follow these three steps you can make OSQP work pretty well. I am happy to try OSQP on your problem if you are willing to share your data (I am one of the developers).

Slightly unrelated: you can also call OSQP using MathProgBase.jl and JuMP.jl. It also supports the latest MathOptInterface.jl package, which will replace MathProgBase.jl in the newest version of JuMP; a minimal sketch of that route is below.
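
A minimal sketch of the JuMP route (assuming a JuMP version built on MathOptInterface; the exact Model constructor varies across JuMP versions):

    using JuMP, OSQP

    n = length(b)
    model = Model(OSQP.Optimizer)
    @variable(model, x[i=1:n] >= lb[i])              # A = I, so the constraint is a simple bound
    @objective(model, Min, x' * Q * x + b' * x + c)  # write the objective as-is; no manual 2Q needed
    optimize!(model)
    xsol = value.(x)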

bstellato