
I'm using the following code to run experiments and start learning how to use DE to optimize more complex problems. I need an optimizer that can work with integer numbers.

from scipy.optimize import differential_evolution

def objfun(x):
    print('N')  # marker to see how many times the objective is evaluated
    return x[0] + 2 * x[1]**-4 * x[2]

solution = differential_evolution(objfun,
                                  bounds=((1, 10000), (1, 200000), (1, 50000)),
                                  popsize=0, maxiter=3, polish=False, disp=True)

The problem arises when setting popsize. I get a larger population than expected, and even if I set it to 0 I still get 10 objective evaluations for the first population and then 5 for each following generation until maxiter is reached.

This is an example of the output I get with the above code:

runfile('D:/PYTHON/untitled0.py', wdir='D:/PYTHON')
N
N
N
N
N
N
N
N
N
N
differential_evolution step 1: f(x)= 318.074
N
N
N
N
N
differential_evolution step 2: f(x)= 169.667
N
N
N
N
N
differential_evolution step 3: f(x)= 169.667

I really don't understand what I'm doing wrong; at the very least I expected popsize=0 to give an error. Moreover, are there any other hidden parameters controlling the initial population size that need to be edited?

I'm still a beginner, I started with Python a few weeks ago, so I'd be really thankful for a simple explanation.

Thanks a lot to everyone who takes the time to answer me.

Steve

  • Is your objective function `x[0] + 2*x[1]**(-4*x[2])` or `x[0] + (2*x[1]**(-4))*x[2]`? I think it's the objective function that changes it – gnahum Apr 14 '20 at 19:51
  • Yes, it's the second one. I have the same problem with other code too; in that case it's just finding the best number of neurons for a neural network. What I don't understand is how to "tell" Python to create N initial members and then make them produce Y generations. When I did this in MATLAB I had the same two parameters for initial population and number of generations; I thought it was the same here, but I don't understand why I get 10 evaluations for the first population and then 3 generations of 5 elements. – Steve Apr 15 '20 at 14:18
  • As for why running it several times gives you different numbers: this is because the seed (for the randomness) is set dynamically, i.e. at runtime. If you set seed=0, then the objfn won't move (since the popsize is 0). – gnahum Apr 15 '20 at 17:33
  • I'll add an answer to this question and you can let me know if you need more help. – gnahum Apr 15 '20 at 17:33

2 Answers


There are a couple of parts to the question, so there will be a couple of parts to this answer.

Why does popsize=0 not throw an error?

The reason it doesn't throw an error is in the SciPy implementation of differential_evolution. You can see that the first step results in 10 calls of the objective while the second and third result in only 5. This is because of the random seed.

When you call differential_evolution there is an argument seed which determines the "randomness" in the function. Since at the first step the values can be very far off from the true minimum, the objective is called 10 times, while after a single step the function can already have optimized towards the true value.

If you set the seed and popsize is 0:

If you do set a seed you can reproduce the run and check whether it is right.

Here is a run where the seed is 0 (it does not optimize any further):

>>> soln = differential_evolution(objfun, bounds=((1,10000),(1,200000),(1,50000)),popsize=0,maxiter=3,polish=False,disp=True, seed=0)
differential_evolution step 1: f(x)= 1098.52
differential_evolution step 2: f(x)= 1098.52
differential_evolution step 3: f(x)= 1098.52

Seed with popsize > 0:

It is possible that the first iteration calls the function more times than the last; this could be due to how the function is being optimized and to its stochastic nature.

differential_evolution step 1: f(x)= 183.92
differential_evolution step 2: f(x)= 183.92
differential_evolution step 3: f(x)= 5.81206

If we change the popsize to something larger than 10 we will get closer to the minimum.

>>> soln = differential_evolution(objfun, bounds=((1,10000),(1,200000),(1,50000)),popsize=100,maxiter=3,polish=False,disp=True, seed=0)
differential_evolution step 1: f(x)= 10.3284
differential_evolution step 2: f(x)= 8.35376
differential_evolution step 3: f(x)= 2.65333
gnahum
  • Thanks a lot, it helped me even if I don't have everything clear yet. I had problems finding good documentation. Is there a place where what you have written is explained, so I can learn to use these tools better? – Steve May 08 '20 at 06:23
  • I'm not sure where it is written down specifically. I mainly looked at the documentation for `differential_evolution` and the corresponding implementation. – gnahum May 08 '20 at 17:10
  • Thank you, I read it too but I wasn't able to get those details, I think because I'm not skilled enough. Thank you again. – Steve May 10 '20 at 16:22

In differential_evolution the total number of members in the population is set here. It is calculated as:

self.num_population_members = max(5, popsize * self.parameter_count)

This means the smallest population you can have is 5 members; otherwise the population has popsize * self.parameter_count members. The floor of 5 exists because the default best2bin strategy requires at least 5 population members.
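
As a quick illustration (my own sketch, not part of the original answer), the same calculation for the call in the question, where three bounds are supplied and popsize=0, gives that floor value of 5:

parameter_count = 3   # three bounds were passed in the question
popsize = 0
num_population_members = max(5, popsize * parameter_count)
print(num_population_members)            # 5, the floor required by best2bin
print(max(5, 15 * parameter_count))       # 45 with the default popsize=15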

The population size does not change during the minimisation. After the population is initialised each member of the initial population (total size np) has to be evaluated. During the first iteration of the minimisation loop there are a further np evaluations of the objective. At the end of the first iteration an update is printed. Thus, there will be 2 * np printings of N before the first update is printed. Each subsequent iteration will only have np printings of N. In your case np = max(5, 0 * 3) = 5, which is why you see 10 N lines before step 1 and 5 before each of the following steps.
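
Here is a minimal sketch (mine, not from the original answer) that counts the objective evaluations with a callback, assuming a SciPy version that, like the one in the question, accepts popsize=0; the counter and callback names are made up for the illustration:

from scipy.optimize import differential_evolution

evaluations = []

def objfun(x):
    evaluations.append(1)  # count every objective call
    return x[0] + 2 * x[1]**-4 * x[2]

def report(xk, convergence=None):
    # called once at the end of each iteration
    print('total evaluations so far:', len(evaluations))

differential_evolution(objfun,
                       bounds=((1, 10000), (1, 200000), (1, 50000)),
                       popsize=0, maxiter=3, polish=False, callback=report)
# With 3 parameters and popsize=0, np = max(5, 0) = 5, so the printed counts
# should be 10, 15 and 20: 2 * np after the first step, then np more per step.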

Andrew Nelson