
I am solving a minimization problem in Python that allocates packet capacities over the edges of a graph so that the loss of packets throughout the network/graph is minimal. Packets are generated at the nodes following a Poisson distribution. The problem is that scipy.optimize.minimize() cannot be restricted to integer inputs for the objective function loss_obj(x); it operates over all float values satisfying the constraint. The method find_loss() finds the loss of edge e assuming k as its capacity. I am pasting only the optimization part below because the original code is over 300 lines.

import math as m                      # m.floor is used in the objective below
from scipy.optimize import minimize   # imports needed by this snippet (from the full script)

# Here we find the loss of an edge e of the graph, assuming k as its capacity:
# sum the tail of the edge's packet pmf beyond k.
def find_loss(e, lmd, q, k):
    edge_pmf = find_final_dist(e, lmd, q)   # pmf of packets on edge e (defined in the full code)
    l_e = sum(edge_pmf[k+1:])               # probability mass exceeding the capacity k
    return l_e

net_cap = 12                # the net capacity to be allocated over the edges

# Set up the minimization: start from an even split of the capacity
x0 = [net_cap / a] * a      # a is the number of edges (defined in the full code)

#x=[c1,c2,c3,...]
def loss_obj(x):
    s = 0
    for i in range(len(x)):
        # capacities must be integers, so floor the float value scipy supplies
        l = find_loss(edge_enum[i], lamd, q, m.floor(x[i]))
        s += l
    return s
print('Initial guess ',x0)    
def constraint(x):
    # total allocated capacity must equal net_cap
    return sum(x) - net_cap

con = {'type': 'eq', 'fun': constraint}
bounds = [(0, net_cap)] * a     # each edge capacity lies between 0 and net_cap
cap_op = minimize(loss_obj, x0, method='SLSQP', constraints=con, bounds=bounds)
print('\n',cap_op.x)

This is the output:

Initial guess  [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]

 [0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5
 0.5 0.5 0.5 0.5 0.5 0.5]

Although I have shown a vector with only 24 elements here just to demonstrate the issue, my network has 104 edges, so it cannot be solved with scipy.optimize.brute() or itertools.combinations(): the system cannot handle such large arrays and raises a MemoryError. A Linear Programming Problem is not what I am aiming at, so PuLP won't be a good fit. Can someone please suggest a way to minimize a function that takes only integer inputs?

  • You're gonna have to give us some code examples so we can see what you're doing... – user32882 Mar 22 '20 at 08:48
  • scipy's optimizers are all about continuous and twice-differentiable optimization. Although your question is lacking many, many details (and the code is incomplete), it's relatively easy to claim that you won't be able to reformulate your task to be compatible with scipy (while keeping tractability). Whether pulp is an alternative or more general solvers from the MINLP domain are needed depends on your objective: linear or easily linearized? Yes: use a MIP solver (= pulp). – sascha Mar 22 '20 at 09:22
  • @sascha A Linear Programming Problem is not my aim; I just want to minimize my loss function, but only with integers as inputs. The loss output will be a float value, that's for sure. But taking only integers in my optimization vector is the real challenge. For LPP I would need to create lots of variables in this case (>100), which is quite tedious. – Divyayan Dey Mar 22 '20 at 11:20
  • My comment still applies. Whatever you do, enforcing integrality results (most of the time) in NP-hardness and some need for combinatorial approaches like bnb/tree-search: either with a custom implementation or just by using a black-box integer-programming solver. Look at the difference between Linear Programming (poly) and Integer Programming (NP-hard): a subset of the variables is to be enforced as integral. That is all! The only difference to yours is that you might keep that subset restricted to the objective. But this is not much of a difference (at least without analyzing the problem much deeper). – sascha Mar 22 '20 at 15:55
  • My minimization function is not a linear combination of the input variables; it's a computational evaluation done with the capacity assigned. So my objective needs to be written as a sum of functions of the inputs, not the sum of the inputs itself. So can we even call it an IPP? – Divyayan Dey Mar 22 '20 at 17:50
  • There are many meta-heuristics that can deal with integer inputs and black-box objective functions. – Erwin Kalvelagen Mar 22 '20 at 21:19
  • @ErwinKalvelagen Can you name some? I have been searching through some metaheuristic algorithms for optimization, but they just keep making my solution methodology more complex. – Divyayan Dey Mar 23 '20 at 11:26
  • Eg evolutionary meta-heuristics. Start with integer-valued random populations and take it from there. – Erwin Kalvelagen Mar 23 '20 at 13:47
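
As a minimal sketch of the evolutionary idea in the last comment: keep a population of integer-valued allocations that always sum to net_cap, and mutate by moving a single unit of capacity from one edge to another, so feasibility is preserved by construction. Here toy_loss is a hypothetical stand-in for the asker's loss_obj (which needs the full 300-line script), and the population sizes and iteration count are arbitrary choices, not tuned values.

import random

net_cap, n_edges = 12, 24

def toy_loss(caps):
    # Hypothetical stand-in for loss_obj: smaller capacities lose more packets.
    return sum(1.0 / (1 + c) for c in caps)

def random_allocation():
    # Drop net_cap units of capacity onto random edges -> integers summing to net_cap.
    caps = [0] * n_edges
    for _ in range(net_cap):
        caps[random.randrange(n_edges)] += 1
    return caps

def mutate(caps):
    # Move one unit from a non-empty edge to any edge; the total stays at net_cap.
    child = caps[:]
    donor = random.choice([i for i, c in enumerate(child) if c > 0])
    child[donor] -= 1
    child[random.randrange(n_edges)] += 1
    return child

population = [random_allocation() for _ in range(30)]
for _ in range(500):
    population.sort(key=toy_loss)          # lowest-loss allocations first
    parents = population[:10]              # keep the 10 best
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = min(population, key=toy_loss)
print(best, toy_loss(best))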

1 Answer


Since the loss function is clear, how about using Bayesian Optimization (BO)? It makes a "smart guess" based on the former guesses, as suggested by Bayes' rule. The following is a popular implementation of BO in Python, and its documentation is clear.

https://github.com/fmfn/BayesianOptimization

– Luke
  • Note that my main focus is to have only integers as inputs to my objective function. I don't see any such restriction in Bayesian Optimization. – Divyayan Dey Mar 22 '20 at 17:57
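
Regarding the integer restriction raised in the comment: the linked library samples continuous values from the boxes given in pbounds, so a common workaround is to round to integers inside the wrapped objective and to penalize violations of the capacity constraint. The sketch below (not part of the original answer) illustrates this with only three edges; toy_loss, wrapped, c1..c3 and the penalty weight are hypothetical stand-ins for the asker's actual loss_obj and 104 edges.

from bayes_opt import BayesianOptimization

net_cap = 12

def toy_loss(caps):
    # Hypothetical stand-in for the real per-edge loss.
    return sum(1.0 / (1 + c) for c in caps)

def wrapped(c1, c2, c3):
    caps = [round(c1), round(c2), round(c3)]     # evaluate only integer capacities
    penalty = 100 * abs(sum(caps) - net_cap)     # soft version of the equality constraint
    return -(toy_loss(caps) + penalty)           # bayes_opt maximizes, so negate

optimizer = BayesianOptimization(
    f=wrapped,
    pbounds={'c1': (0, net_cap), 'c2': (0, net_cap), 'c3': (0, net_cap)},
    random_state=1,
)
optimizer.maximize(init_points=5, n_iter=25)
print(optimizer.max)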