
I am wondering what the fastest convex optimizer in MATLAB is, or whether there is any way to speed up the solvers I'm currently using. I'm using CVX, but it's taking forever to solve the optimization problem I have. The problem is to solve

minimize norm(Ax-b, 2)
subject to
x >= 0
and x.d <= delta

where A and b are very large.

Is there any way that I can solve this with a least-squares solver and then transfer the solution to the constrained version to make it faster?

Erin

1 Answer


I'm not sure what x.d <= delta means, but I'll just assume it's supposed to be x <= delta.

You can solve this problem using the projected gradient method or an accelerated projected gradient method (which is just a slight modification of the projected gradient method that "magically" converges much faster). Here is some Python code that shows how to minimize .5*||Ax - b||^2 subject to the constraint 0 <= x <= delta using FISTA, which is an accelerated projected gradient method. More details about the projected gradient method and FISTA can be found, for example, in the Parikh and Boyd monograph on proximal algorithms.
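
Before the full FISTA code, here is a quick, untested sketch of the plain (unaccelerated) projected gradient iteration for the same box-constrained least-squares problem; the function name projected_gradient and the fixed step size 1/L are just choices made for the sketch, with L = ||A||_2^2 the Lipschitz constant of the gradient.

import numpy as np

def projected_gradient(A, b, delta, numIters=500):
    # Minimize .5*||Ax - b||^2 subject to 0 <= x <= delta with a fixed step size.
    x = np.zeros((A.shape[1], 1))
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient: ||A||_2^2
    t = 1.0 / L                                  # fixed step size
    for _ in range(numIters):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth term
        x = np.clip(x - t * grad, 0.0, delta)    # gradient step, then project onto the box
    return x

FISTA adds a momentum step (and, in the code below, a backtracking line search) on top of this basic iteration.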

import numpy as np
import matplotlib.pyplot as plt

def fista(gradf,proxg,evalf,evalg,x0,params):
    # This code does FISTA with line search

    maxIter = params['maxIter']
    t = params['stepSize'] # Initial step size
    showTrigger = params['showTrigger']

    increaseFactor = 1.25
    decreaseFactor = .5

    costs = np.zeros((maxIter,1))

    xkm1 = np.copy(x0)
    vkm1 = np.copy(x0)

    for k in range(1,maxIter+1):

        costs[k-1] = evalf(xkm1) + evalg(xkm1)
        if k % showTrigger == 0:
            print("Iteration: " + str(k) + "    cost: " + str(costs[k-1]))

        t = increaseFactor*t

        acceptFlag = False
        while acceptFlag == False:
            if k == 1:
                theta = 1.0
            else:
                # Update the momentum parameter: solve a*theta^2 + b*theta + c = 0,
                # where the coefficients depend on the previous step size and theta.
                a = tkm1
                b = t*(thetakm1**2)
                c = -t*(thetakm1**2)
                theta = (-b + np.sqrt(b**2 - 4*a*c))/(2*a)

            y = (1 - theta)*xkm1 + theta*vkm1
            (gradf_y,fy) = gradf(y)
            x = proxg(y - t*gradf_y,t)
            fx = evalf(x)
            # Backtracking line search: accept the step if the quadratic upper bound holds;
            # otherwise shrink the step size and try again.
            if fx <= fy + np.vdot(gradf_y,x - y) + (.5/t)*np.sum((x - y)**2):
                acceptFlag = True
            else:
                t = decreaseFactor*t

        tkm1 = t
        thetakm1 = theta
        vkm1 = xkm1 + (1/theta)*(x - xkm1)
        xkm1 = x

    return (xkm1,costs)


if __name__ == '__main__':

    delta = 5.0
    numRows = 300
    numCols = 50
    A = np.random.randn(numRows,numCols)
    ATrans = np.transpose(A)
    xTrue = delta*np.random.rand(numCols,1)
    b = np.dot(A,xTrue)
    noise = .1*np.random.randn(numRows,1)
    b = b + noise

    def evalf(x):
        AxMinusb = np.dot(A, x) - b
        val = .5 * np.sum(AxMinusb ** 2)
        return val

    def gradf(x):
        # Return both the gradient and the function value so the line search can reuse fy.
        AxMinusb = np.dot(A, x) - b
        grad = np.dot(ATrans, AxMinusb)
        val = .5 * np.sum(AxMinusb ** 2)
        return (grad, val)

    def evalg(x):
        # g is the indicator function of the box [0, delta]; its value is 0 at feasible points.
        return 0.0

    def proxg(x,t):
        # The prox of the indicator of the box is just projection onto the box;
        # it does not depend on the step size t.
        return np.maximum(np.minimum(x,delta),0.0)

    x0 = np.zeros((numCols,1))
    params = {'maxIter': 500, 'stepSize': 1.0, 'showTrigger': 5}
    (x,costs) = fista(gradf,proxg,evalf,evalg,x0,params)

    plt.figure()
    plt.plot(x)
    plt.plot(xTrue)

    plt.figure()
    plt.semilogy(costs)
    plt.show()
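
Regarding the idea of first using a least-squares solver: the unconstrained least-squares solution clipped into the box is generally not the constrained minimizer, but it can serve as a warm start for FISTA. Here is a rough, untested sketch of that idea, reusing A, b, delta, and the fista-related objects from the script above:

xLS = np.linalg.lstsq(A, b, rcond=None)[0]   # unconstrained least-squares solution
x0Warm = np.clip(xLS, 0.0, delta)            # clip into the box [0, delta]
# The clipped point is usually not the constrained minimizer, but it can be a
# good starting point that reduces the number of FISTA iterations needed.
(xWarm, costsWarm) = fista(gradf, proxg, evalf, evalg, x0Warm, params)

How much this helps depends on how many constraints are active at the solution; if few are active, the clipped least-squares point is already close to the answer.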
littleO