
I have a convex non-linear integer program of the following form:

$$\min_{x_1,\dots,x_N \,\in\, \{1,\dots,K\}} \;\; \sum_{i=1}^{N} \frac{\beta_i}{\sqrt{x_i}}$$

$$\text{subject to} \quad \sum_{i=1}^{N} x_i \le K$$

K is a fixed integer greater than 0, and the β_i are real numbers greater than zero.

Note that each x_i is a positive integer, and that both the objective function and the constraint are convex.

I have found that this problem falls under the umbrella of Mixed-Integer Non-Linear Programming (MINLP), except that there is no continuous (real) part among the decision variables, so it is not really "mixed".

I have found a lot of references to Python solvers for MINLP, but it looks like they all work by alternating between solving for the integer part and the real part. Among them is Pyomo's MindtPy solver, which I managed to run on my problem:

from pyomo.environ import *
import numpy as np


N = 50000
K = N * 2
beta = np.absolute(np.random.randn(N) * 1000)

model = ConcreteModel()

# Index set and one indexed block of integer variables x[0..N-1] in {1, ..., K}
model.I = RangeSet(0, N - 1)
model.x = Var(model.I, within=RangeSet(K), bounds=(1, K), initialize=1)
model.c = Constraint(expr=sum(model.x[i] for i in model.I) <= K)
model.objective = Objective(expr=sum(beta[i] / sqrt(model.x[i]) for i in model.I), sense=minimize)

SolverFactory('mindtpy').solve(model, strategy="OA", mip_solver='glpk', nlp_solver='ipopt')
res = np.array([model.x[i].value for i in model.I])

However, this is extremely slow, and runtime seems to scale exponentially with N. It is tractable for small N but not for N as large as 50,000.

I wonder whether the solver loses time trying to optimize over real variables even though there are none. Isn't there an open-source Python solver specialized in convex integer non-linear programs? It is OK if the solver is not available in Python, but it must be open-source.

LucG
Just as an aside, agreeing with Reinderien: solving purely with integer values is very difficult, so virtually all solvers that solve integer problems work by solving the real relaxation of the problem, which is much faster, and then exploring integer solutions 'near' the relaxed solution by adding cuts, constraints or similar to enforce integrality. This works very well in most cases. You may be able to improve the convergence by analysis, as Reinderien has done, and maybe by reformulating your model. – TimChippingtonDerrick Jul 28 '23 at 10:40

1 Answer


I think the solvers being "mixed" is not the problem: all MIP solvers have a good understanding of problem classes that are purely integral. Integer programming is slower than continuous optimization, and non-linear optimization is slower than linear optimization; MINLP is the most difficult of all worlds.

You should start with an analytic assessment of your problem. The naive Jacobian is

import numpy as np
from numpy.random import default_rng
from scipy.optimize import Bounds, LinearConstraint, check_grad, minimize

n = 50  # 50_000
k = 2*n
rand = default_rng(seed=0)
beta = np.abs(rand.normal(scale=1_000, size=n))


def cost_unconstrained(x: np.ndarray) -> float:
    return beta.dot(1/np.sqrt(x))


def jacobian_unconstrained(x: np.ndarray) -> np.ndarray:
    return -0.5*beta * x**-1.5


# Verify the analytic gradient against finite differences
error = check_grad(
    func=cost_unconstrained,
    grad=jacobian_unconstrained,
    x0=rand.uniform(0.5, 1.5, n),
)
assert error < 0.01

x0 = np.full(n, k/n)  # feasible, interior starting point
continuous_soln = minimize(
    fun=cost_unconstrained,
    jac=jacobian_unconstrained,
    x0=x0,
    bounds=Bounds(lb=0, ub=k),
    constraints=LinearConstraint(A=np.ones(n), lb=k, ub=k),
)
assert continuous_soln.success, continuous_soln.message

But critically, that isn't the Jacobian that actually matters. Notice that it is strictly negative everywhere (every beta and every x component is positive), so setting it to zero has no finite solution: the unconstrained objective just keeps improving as x grows, which your k-hyperplane constraint does not allow. The constraint must therefore be active at the optimum.

Reframe the problem so that the objective and its Jacobian include the k-hyperplane constraint:

def cost_constrained(x: np.ndarray) -> float:
    # x holds the first n-1 variables; the last one is implied by the active constraint
    xn = k - x.sum()
    return beta[:-1].dot(1/np.sqrt(x)) + beta[-1]/np.sqrt(xn)


def jacobian_constrained(x: np.ndarray) -> np.ndarray:
    sum_term = 0.5*beta[-1] * (k - x.sum())**-1.5
    return -0.5*beta[:-1] * x**-1.5 + sum_term


error = check_grad(
    func=cost_constrained,
    grad=jacobian_constrained,
    x0=rand.uniform(0.5, 1.5, n-1),
)
assert error < 0.01

x0 = np.full(n-1, k/n)  # feasible starting point for the reduced problem
constrained_soln = minimize(
    fun=cost_constrained,
    jac=jacobian_constrained,
    x0=x0,
    bounds=Bounds(lb=0, ub=k),
    constraints=LinearConstraint(A=np.ones(n-1), lb=0, ub=k),
)
assert constrained_soln.success, constrained_soln.message
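
As a quick sanity check (a small sketch continuing from the two blocks above; the tolerance is arbitrary), you can re-attach the eliminated component and confirm that both formulations reach the same objective value:

# Rebuild the full n-vector from the reduced solution and compare objective values
x_full = np.append(constrained_soln.x, k - constrained_soln.x.sum())
assert np.isclose(cost_unconstrained(x_full), continuous_soln.fun, rtol=1e-3)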

This produces the same optimized result, but shows you that it's possible to find an analytic optimum in the continuous case:

# Set the constrained Jacobian to zero and solve analytically:
#   0 == -0.5*beta[:-1] * x**-1.5 + 0.5*beta[-1] * (k - x.sum())**-1.5
#   0.5*beta[:-1] * x**-1.5 == 0.5*beta[-1] * (k - x.sum())**-1.5
#   beta[:-1] * x**-1.5 == beta[-1] * (k - x.sum())**-1.5
#   beta[:-1]/beta[-1] == (x/(k - x.sum()))**1.5
#   (beta[:-1]/beta[-1])**(2/3) == x/(k - x.sum())
#   (beta[:-1]/beta[-1])**(2/3) * (k - x.sum()) == x
#   k*(beta[:-1]/beta[-1])**(2/3) == x + (beta[:-1]/beta[-1])**(2/3) * x.sum()
# With bb23 = (beta[:-1]/beta[-1])**(2/3), component i reads
#   k*bb23[i] == x[i] + bb23[i]*(x[0] + x[1] + x[2] + ...)
# which is a square linear system A @ x == b:
bb23 = (beta[:-1]/beta[-1])**(2/3)
A = bb23[:, np.newaxis] + np.eye(n-1)  # A[i, j] = bb23[i] + (1 if i == j else 0)
b = bb23 * k
x0, *_ = np.linalg.lstsq(a=A, b=b, rcond=None)

This analytic optimum comes from a plain linear system: it is deterministic and (aside from some potential memory problems with the dense A) possibly cheaper to obtain than a minimize call. Of course it is continuous rather than integral, so it is not your end solution, but it can be used as a reasonable starting point for an actual MINLP solver.
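
For example, here is a minimal sketch of that hand-off, assuming the x0, k and n from the blocks above (the rounding-and-repair rule is only a simple heuristic, not part of any solver API): round the continuous optimum to integers and greedily restore the budget so the result is feasible.

# Turn the continuous optimum into a feasible integer starting point
x_cont = np.append(x0, k - x0.sum())             # re-attach the eliminated last component
x_int = np.clip(np.rint(x_cont), 1, None).astype(int)

# Greedily repair the budget so that sum(x_int) == k, moving the entries whose
# rounding drifted furthest from the continuous value (heuristic, not provably optimal)
while x_int.sum() > k:
    drift = np.where(x_int > 1, x_int - x_cont, -np.inf)
    x_int[int(np.argmax(drift))] -= 1
while x_int.sum() < k:
    x_int[int(np.argmin(x_int - x_cont))] += 1

assert x_int.sum() == k and (x_int >= 1).all()

The resulting vector is feasible for the original integer program, so it can be used, for instance, as the initial values of the Pyomo variables before the MindtPy call.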

Returning to the linear system: to help with the memory occupied by the dense A, you may instead pose an equivalent augmented system (with the eliminated variable made explicit) and solve it in sparse format:

import scipy.sparse
import scipy.sparse.linalg

bb23 = (beta[:-1]/beta[-1])**(2/3)
# Augmented system in the unknowns [x[0], ..., x[n-2], t], where t = k - x.sum():
#   x[i] - bb23[i]*t == 0   (first n-1 rows)
#   sum(x) + t       == k   (last row)
A = scipy.sparse.bmat([
    [scipy.sparse.eye(n-1), -bb23[:, np.newaxis]],
    [np.ones(n-1), 1],
], format='csc')
b = scipy.sparse.csc_array(([k], [n-1], [0, 1]), shape=(n, 1))  # all zeros except b[-1] = k
x0 = scipy.sparse.linalg.spsolve(A=A, b=b)  # result comes back sparse because b is sparse
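
Because the augmented system carries the eliminated variable explicitly, the solution already has length n: its last entry is the implied value k minus the sum of the others. A quick sanity check, assuming the names from the block above (the densification step is only needed because b was sparse):

# Densify if needed, then check that the budget is tight and every entry is positive
x_dense = x0.toarray().ravel() if scipy.sparse.issparse(x0) else np.asarray(x0).ravel()
assert np.isclose(x_dense.sum(), k)
assert (x_dense > 0).all()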
Reinderien