I am trying to solve a nonlinear convex optimization problem of the following form c'x + f(x)
subject to some affine constraints. Before that I wanted to make sure that I can solve a simple problem without f(x)
. Since I have the analytical first and second derivative of the problem I want to use them to get faster results. I am trying to use cvxopt to solve the problem. I wrote following code for the problem.
maximize Sum(l*r_i*x_i,i=1:n-1)
s.t. sum(x_i, i=1:n) =1
x_i -v_i*x_n<=0 i=1,...,n-1
0<=x_i<=1
But I cannot achieve optimal solution to this using cvxopt. If I use the modeling functionality of cvxopt, I can solve it easily. But unfortunately, I am not sure if I can provide analytical first and second derivative using that.
Here is the code for the problem
from cvxopt import matrix, log, div, spdiag, solvers, spmatrix
import numpy as np
def F(x=None, z = None):
m,n = A.size
if x is None: return 0,matrix(0.5,(n,1))
if (min(x) <0. or max(x)>1.): return None
f = matrix(-l*r.T*x[:-1])
Df = -l*matrix(np.append(r,0)).T
if z is None: return f,Df
H = matrix(0.0, (n,n))
return f,Df,H
dd = 6
l = 19
C = 1.2 * l
theta = 1.25
v = [4.99, 4.66, 3.84, 4.58, 2.54, 1.83]
r = matrix(np.array([max(0.8, 1 - 0.04 * i) for i in range(dd)]))
A = matrix(np.zeros([dd, dd + 1], dtype=float))
for i in range(dd):
A[i, i] = 1.0
A[i, -1] = -v[i]
b = matrix(np.zeros([dd, 1], dtype=float))
A_eq = matrix(np.ones([1, dd + 1], dtype=float))
b_eq = matrix([1.0])
solvers.cp(F,G=A,h=b, A= A_eq,b=b_eq).solve()
I am probably making some mistake in this code. Any help or guidance will be greatly appreciated.