
I have started using Python's SciPy for simulation-based optimization. Here is a simple example: in the real case, x is an input to the simulation model (a constant over time) with a lower bound of 40 and an upper bound of 80, and t_sim is a function of x and time. The model is simulated for 43200 seconds, and the goal is to minimize the integral of t_sim over the simulation time.

import numpy as np
from scipy import optimize

def cost(x):
    # in the real case, x is an input to the simulation and
    # t_sim is based on a simulation result (t_sim(x))
    TimeInput = np.array([0., 43200.])
    t_sim = np.array([x[0], x[0]])

    # integrate t_sim over the simulation time (trapezoidal rule)
    obj = np.trapz(t_sim, TimeInput)

    return obj

x0 = np.array([70])
bnds = ((40, 80),)
res = optimize.minimize(cost, x0, method='TNC', bounds=bnds)
print(res.fun)
print(res.x)

What order of magnitude should the value of the objective function have? In this simple example it is 1728000. Should the objective be scaled to be of order 1? This is related to my second question: are the default values for the solver settings (such as the termination tolerance, the step size used for the numerical Jacobian, the maximum step for the line search, etc.) based on a certain order of magnitude of the objective function?
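One way to check is to run the same problem with and without scaling and compare the results. A minimal sketch: the 1/43200 scaling (the simulation horizon, which turns the integral into a time average) and the option values below are illustrative assumptions, not recommended settings.

import numpy as np
from scipy import optimize

T_END = 43200.0  # simulation horizon in seconds

def cost(x):
    # toy stand-in for the simulation: t_sim is constant over time,
    # so the integral is simply x * T_END
    TimeInput = np.array([0., T_END])
    t_sim = np.array([x[0], x[0]])
    return np.trapz(t_sim, TimeInput)

def cost_scaled(x):
    # dividing by the horizon gives the time average of t_sim, so the
    # objective is on the same order as t_sim itself (between 40 and 80)
    return cost(x) / T_END

x0 = np.array([70.])
bnds = ((40, 80),)

# TNC's tolerances (ftol, gtol, xtol) and the finite-difference step
# (eps) are plain numbers, so their effect depends on the scale of the
# objective; these values are only for the experiment
opts = {'eps': 1e-3, 'ftol': 1e-6, 'gtol': 1e-6}

for f in (cost, cost_scaled):
    res = optimize.minimize(f, x0, method='TNC', bounds=bnds, options=opts)
    print(f.__name__, res.x, res.fun, res.nfev)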

Matias
  • I see this is your second attempt (after the first one got closed) and it is still very broad, especially when asking about all those available optimizers. (1) Why did you not just try it? Compare the behaviour of a scaled and a non-scaled version; easy to do, right? (2) Why are you using methods built for multivariate optimization when you only have one variable? Don't do that (see the scalar-solver sketch after these comments). (3) The objective's scale is usually far less problematic than the variables' scales! (4) Most stopping criteria are based on the gradient or on second-order information, though there might be an (obj_new - obj_old) criterion too. – sascha Aug 31 '17 at 17:00
  • (5) Tuning CG and co. within TNC (as in your example) is probably much more relevant. – sascha Aug 31 '17 at 17:00
  • (I just realized that the older question I thought of, which asks basically the same thing, is from another user. If that is not you, I apologize for my first sentence.) – sascha Aug 31 '17 at 17:13
  • A Newton method will use finite differences to get gradients. This may not be the best approach when function evaluations are expensive. Often it is better to use a method that does not require gradients, as sketched below. – Erwin Kalvelagen Sep 01 '17 at 19:04
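
A minimal sketch of the scalar, derivative-free alternative suggested in the comments, assuming the same toy objective as in the question; method='bounded' needs no gradient, so no extra simulation runs are spent on a finite-difference Jacobian.

import numpy as np
from scipy import optimize

def cost(x):
    # same toy objective as in the question, written for a scalar x
    TimeInput = np.array([0., 43200.])
    t_sim = np.array([x, x])
    return np.trapz(t_sim, TimeInput)

# Brent-style bounded minimization on the single variable
res = optimize.minimize_scalar(cost, bounds=(40, 80), method='bounded')
print(res.x, res.fun)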

0 Answers