You can define alpha using a sigmoid function:

alpha = 1/(1+exp(-x))

https://en.wikipedia.org/wiki/Sigmoid_function

With this definition, alpha always lies in the open interval (0, 1); the bounds 0 and 1 are only approached asymptotically. You can then change the unknown in scipy.optimize.fsolve to calibrate x instead of alpha directly. Since x is free of constraints, any optimization method will work.
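A minimal sketch of the reparameterization, assuming a hypothetical residual function f(alpha) whose root you want inside (0, 1) (the function below is made up for illustration):

```python
import numpy as np
from scipy.optimize import fsolve

def sigmoid(x):
    # Maps any real x into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical residual: suppose the original problem is to find
# alpha in (0, 1) such that f(alpha) = 0. Here the root is alpha = 0.5.
def f(alpha):
    return alpha**2 - 0.25

# Solve for the unconstrained x instead of the constrained alpha
def g(x):
    return f(sigmoid(x))

x_sol, = fsolve(g, x0=1.0)
alpha_sol = sigmoid(x_sol)
print(alpha_sol)  # close to 0.5, and guaranteed to lie in (0, 1)
```

Whatever value fsolve returns for x, applying the sigmoid afterwards recovers an alpha that satisfies the constraint by construction, so no constrained solver is needed.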
PS. This technique is very common in machine learning.
PS2. Adding constraints to an optimizer only matters when the unconstrained solution would violate them. So, for example, if your alpha solution already lands in [0, 1], you can keep the optimizer as it is.