
I have an optimization problem that can hardly be solved by analytic or numeric solvers, as I'm not able to provide derivatives for it. Therefore I'm looking for a solution using heuristic or genetic algorithms.

My problem consists of the following:

  • single objective
  • large scale, but approximately fewer than 10,000 variables
  • mixed integer (MIP): the variables are mainly real-valued (decimals), a few are boolean/integer variables
  • constrained (variable-boundary constraints plus equality and inequality constraints, approximately as many as there are variables)

So my questions are:

  1. Is there a paper that takes all of the above points into account (especially the mixed-integer aspect) in a heuristic/genetic algorithm?

  2. Is there a good way to handle mixed-integer variables in a heuristic/genetic algorithm?

  3. What is the best way to handle equality constraints in a heuristic/genetic algorithm?

  4. Are there any (open source) libraries out there that could be promising?


My experience so far, implementing my problem in the MOEA Framework using NSGA-II (and some of its derivatives) or a plain random number generator as the algorithm, is that as soon as equality constraints or mixed-integer variables are involved, the GA does not find a solution, even when allowing a lot of generations and a large population size for a really small problem.
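
To illustrate, here is a stripped-down sketch of the kind of evaluation I mean (all names are made up for illustration, this is not the MOEA-Framework API): the candidate mixes real-valued and boolean genes, and the equality constraint is added to the fitness as a penalty term.

#Stripped-down illustration (made-up names, not the MOEA-Framework API):
#a candidate mixes real-valued and boolean genes, and each constraint
#violation is added to the fitness as a penalty term.
import random

N_REAL, N_BOOL = 8, 2          # tiny stand-in for the real problem size
TARGET_SUM = 10.0              # equality constraint: sum of reals == TARGET_SUM
PENALTY = 1e4

def random_candidate():
    reals = [random.uniform(0.0, 5.0) for _ in range(N_REAL)]  # boundary constraints
    bools = [random.random() < 0.5 for _ in range(N_BOOL)]
    return reals, bools

def fitness(candidate):
    reals, bools = candidate
    objective = sum(r * r for r in reals) + sum(bools)         # placeholder objective
    eq_violation = abs(sum(reals) - TARGET_SUM)                # equality as |h(x)|
    return objective + PENALTY * eq_violation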

  • AFAIK, a decimal variable is not an integer... What does your objective function look like? – JaBe Mar 10 '15 at 12:47
  • Yes, decimal (or floating-point) numbers != integers. I thought MIP would combine the optimization of decimal and integer variables; maybe I'm wrong here? The objective function is a sum of unpredictable, non-linear, client-defined subfunctions (so $x \in \mathbb{R}$) and some (boolean) decision variables (which can also be interpreted as integer variables with possible values 0/1). – Buni Mar 10 '15 at 13:25
  • Ugh, MathJax seems not to work here. The objective function consists of variables in the set of real numbers. – Buni Mar 10 '15 at 13:31
  • I've got a dataset of 50 000 variables (MachineReassignment dataset B10) in [OptaPlanner](http://www.optaplanner.org/) that solves quite well with Late Acceptance (a form of Local Search). You might want to try LA or another LS variant. – Geoffrey De Smet Mar 10 '15 at 14:40
  • @GeoffreyDeSmet Thanks for your suggestion. I implemented a tiny subset of my problem as a test scenario and (finally) got it running. However, it seems that the objective/planning variables are never changed (instead only 'move 0.0 <=> *default value*'). Each variable has lower and upper boundary constraints, and a sum over all variables must match a predefined sum. All constraints are implemented as hard constraints, and the score is calculated from how much each constraint is violated. How do the variables get mutated in OptaPlanner? – Buni Mar 18 '15 at 13:15
  • @GeoffreyDeSmet And congratulations on OptaPlanner, it looks really feature-rich and mature! – Buni Mar 18 '15 at 13:33
  • "the sum over all variables must match a predefined sum" -> a custom move could help a lot there :/ In future versions, I'll add more support for quantitative variables and their moves. – Geoffrey De Smet Mar 18 '15 at 16:30
  • @GeoffreyDeSmet Thanks for your replies :) I implemented a custom move (thanks to the docs it was quite easy) which *for now* simply assigns random numbers to the problem. It seems that OptaPlanner is good when there is a discrete set of possible values for the variables (*queens can only be in different rows* etc., hence it supports *moves* rather than the *mutations* of genetic algorithms), but when it comes to more mathematical problems OptaPlanner seems to be more of a wrapper framework without such mutation operators. Am I right about this, or am I missing something? – Buni Mar 19 '15 at 13:20
  • I guess I'll have a look at [this](http://stackoverflow.com/a/19387480/3410653) first – Buni Mar 19 '15 at 14:23
  • I think a custom move that decreases one entity's variable by x and increases another entity's variable by x could do a lot for the local search algorithms here (sketched below, after these comments). I seriously doubt that a GA will do any better here. MIP approaches might (but they can't scale to that variable count), yet I wouldn't discard local search yet. It's just a pain that custom moves are needed for them to work well on this use case :( – Geoffrey De Smet Mar 19 '15 at 16:32
  • @GeoffreyDeSmet Thank you again. I've implemented this kind of custom move from your suggestion. My knowledge is still limited on this kind of optimization. Do you can suggest any resource on how to implement and apply a proper heuristic that fits for a problem? How do I formulate the right formulation/decompose the problem etc.? I'd like to learn about that. I found [this PDF as a short overview](http://www.yuribykov.com/LAHC/LAHC_IOC.pdf) of adapting an algorithm to a specific problem. I guess this should be a seperate question, i might ask it here :) – Buni Mar 25 '15 at 12:29
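
A rough sketch of the sum-preserving custom move suggested in the comments above, written in plain Python rather than OptaPlanner's Java Move API (all names here are illustrative): it transfers an amount x from one variable to another, so the total sum stays unchanged by construction and only the boundary constraints need checking.

#Rough sketch of the sum-preserving move suggested above, in plain Python
#rather than OptaPlanner's Java Move API. It transfers an amount x from one
#variable to another, so the total sum stays unchanged by construction.
import random

def transfer_move(values, lower, upper):
    i, j = random.sample(range(len(values)), 2)
    # largest amount we may move from values[i] to values[j] without
    # violating either variable's boundary constraints
    max_x = min(values[i] - lower[i], upper[j] - values[j])
    if max_x <= 0:
        return values                  # move not doable, keep the solution as-is
    x = random.uniform(0.0, max_x)
    moved = list(values)
    moved[i] -= x
    moved[j] += x
    return moved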

1 Answer


Have you tried SciPy/NumPy? It has a function that performs simulated annealing. The only thing you must do is convert the constraints into penalty terms and add them to the objective function. Look at this:

Min Z = x1^2 + x2^2

subject to:

x1 + x2 >= 2

Here's my Python code:

#import modules
import numpy as np
from scipy import optimize

#define Constraint 1: x1 + x2 >= 2, penalized by its violation
#2 - x1 - x2 whenever that quantity is positive
def diffC1(x1, x2):
    return max(2 - x1 - x2, 0)

#penalization value
M = 10000.

#define your objective function (with Constraint 1 inserted as a penalty)
def f(X):
    x1, x2 = X
    print(x1, x2)
    return x1**2 + x2**2 + M * diffC1(x1, x2)

#initialize x1=2 and x2=2
X0 = np.array([2., 2.])

#try your function
f(X0)

#solve!
#(note: optimize.anneal was removed in SciPy 0.16; on current SciPy use
#optimize.dual_annealing or optimize.basinhopping instead)
np.random.seed(555)   # Seeded to allow replication.
res = optimize.anneal(f, X0, schedule="boltzmann",
                      full_output=True, maxiter=10000,
                      dwell=250, disp=True)

#evaluate your achieved objective function value
f(res[0])
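
If you need an equality constraint instead (question 3 above), the same penalty trick works with the absolute violation. A minimal sketch, changing the toy constraint to x1 + x2 = 2:

#Hypothetical variant of the example above: the toy constraint is changed
#to the equality x1 + x2 = 2, penalized by its absolute violation, which
#is zero only when the equality holds exactly.
def diffC1eq(x1, x2):
    return abs(x1 + x2 - 2)

def feq(X):
    x1, x2 = X
    return x1**2 + x2**2 + M * diffC1eq(x1, x2)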
  • Thanks! I'm currently looking for an algorithm that is capable of solving more mathematical kinds of problems (like yours) rather than real-world ones (like OptaPlanner targets). @GeoffreyDeSmet and his OptaPlanner gave me a pretty good start on this, and Late Acceptance (LAHC) seems to be a pretty good fit. However, I am currently struggling with the general representation of decision variables (so MIP comes into play) in such heuristic algorithms. – Buni Mar 25 '15 at 12:21
  • I understand. Well, as you can see in this simulated annealing, the objective function can take any form of restrictions. If some variables are integers, you could try to minimize the fractional part like this: suppose in some iteration x1 = 4.75, then DecimalPart = x1 - math.floor(x1), where floor keeps only the integer part of x1. Then minimize the decimal part as part of the objective function, as before (see the sketch after these comments). – Juan Sandoval Lavieri Mar 25 '15 at 18:44
  • On the other hand, I'd recommend using genetic algorithms to represent the search space of x1 as only integers (and of the other variables as whatever they are). I'm currently developing a GeneAlg package oriented to mathematical programming; tell me if you'd be interested :). – Juan Sandoval Lavieri Mar 25 '15 at 18:51
  • I'm interested in your methods; I'd really like to have a look at your lib :) – Buni Apr 13 '15 at 12:32
  • Awesome! As soon as I finish the "scratch" I'll give you the github link :) – Juan Sandoval Lavieri Jul 28 '15 at 23:35
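
To make the integrality idea from these comments concrete, here is a minimal sketch (my own illustration, not code from either commenter): each integer-constrained variable is penalized by its distance to the nearest integer, a symmetric variant of the fractional-part penalty described above, reusing the penalty weight M from the answer.

#Hypothetical illustration of the integrality penalty discussed above.
#Each integer-constrained variable is penalized by its distance to the
#nearest integer; the penalty vanishes exactly on integral values.
def integrality_penalty(x):
    return abs(x - round(x))

def f_mip(X):
    x1, x2 = X   # suppose x1 must be integer while x2 stays real-valued
    return (x1**2 + x2**2
            + M * max(2 - x1 - x2, 0)          # inequality penalty, as before
            + M * integrality_penalty(x1))     # drives x1 toward an integer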