I have started using SciPy for simulation-based optimization. Here is a simple example: in the real case, x is an input to the simulation model (a value held constant over time) with a lower bound of 40 and an upper bound of 80. t_sim is a function of x and time. The model is simulated for 43200 seconds, and the goal is to minimize the integral of t_sim over the simulation time.
import numpy as np
from scipy import optimize
def cost(x):
    # in the real case, x is an input to the simulation and
    # t_sim is based on a simulation result (t_sim(x))
    TimeInput = np.array([0., 43200.])
    t_sim = np.array([x[0], x[0]])
    # integrate t_sim over the simulation time
    obj = np.trapz(t_sim, TimeInput)
    return obj
x0 = np.array([70])
bnds = ((40, 80),)
res = optimize.minimize(cost, x0, method='TNC', bounds=bnds)
print(res.fun)
print(res.x)
What order of magnitude should the value of the objective function have? In this simple example it is 1728000. Should the objective be scaled to be of order 1? This relates to my second question: are the default values of the solver settings (such as the termination tolerance, the step size used for the numerical Jacobian, the maximum step for the line search, etc.) based on an assumed order of magnitude of the objective function?
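For illustration, here is a minimal sketch of how the toy objective above could be normalized to order 1 by dividing by a characteristic magnitude. The scale factor 1.728e6 is simply the known objective value at the lower bound and is my own choice, not a solver requirement; `scipy.integrate.trapezoid` is used in place of `np.trapz`, which it supersedes in recent versions.

```python
import numpy as np
from scipy import optimize
from scipy.integrate import trapezoid

def cost(x):
    # same toy objective as above: integrate a constant t_sim over 43200 s
    TimeInput = np.array([0., 43200.])
    t_sim = np.array([x[0], x[0]])
    return trapezoid(t_sim, TimeInput)

# Characteristic magnitude of the objective (here the value at x = 40),
# chosen so that the scaled objective is of order 1.
SCALE = 1.728e6

def cost_scaled(x):
    return cost(x) / SCALE

res = optimize.minimize(cost_scaled, np.array([70.]), method='TNC',
                        bounds=((40, 80),))
print(res.x)                # minimizer
print(res.fun * SCALE)      # rescale to report the original objective
```

After solving, multiplying `res.fun` by the same factor recovers the objective in its original units.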