I am trying to optimize a cost function (the mean squared setpoint error) for a pH process with the scipy.optimize Python library. Since my laptop's specs are low, it takes a long time to converge to the optimal point, and I think this is due to the precision of the arrays (dtype=np.float64) declared in the program.
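For concreteness, here is a minimal sketch of this kind of setup. The `predict_ph` model, the horizon lengths, and the setpoint are hypothetical placeholders, not the real process model:

```python
import numpy as np
from scipy.optimize import minimize

N_P = 10          # prediction horizon (steps) - placeholder value
N_C = 3           # control horizon (steps)   - placeholder value
PH_SETPOINT = 7.2 # placeholder setpoint

def predict_ph(u_sequence, n_steps):
    """Placeholder pH prediction model over n_steps (not the real dynamics)."""
    # Hold the last control move constant for the rest of the prediction horizon.
    u = np.concatenate([u_sequence,
                        np.repeat(u_sequence[-1], n_steps - len(u_sequence))])
    return 7.0 + 0.5 * np.tanh(0.1 * np.cumsum(u))

def cost(u_sequence):
    """Mean squared setpoint error over the prediction horizon."""
    ph_pred = predict_ph(np.asarray(u_sequence, dtype=np.float64), N_P)
    return float(np.mean((ph_pred - PH_SETPOINT) ** 2))

u0 = np.zeros(N_C, dtype=np.float64)
result = minimize(cost, u0, method="SLSQP")
print(result.x, result.fun)
```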
What I have tried so far:

- I tried decreasing the prediction horizon and the control horizon.
- I partially optimized the prediction model so that it executes faster.
- I tried changing the precision of the arrays in the Python program to np.float32. The computation was faster, but the optimizer no longer converged to the optimal point; the optimization was essentially static (see the illustration after this list).
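One plausible explanation for the float32 behaviour (an assumption about the internals, not something verified against the scipy source) is that, when no analytic gradient is supplied, scipy.optimize estimates gradients with finite differences using a step sized for float64 resolution; if the cost is evaluated in float32, that tiny perturbation can be lost to rounding, every gradient estimate comes out as zero, and the optimizer never moves. A small self-contained illustration, where the toy cost and the step size are assumptions rather than the real model:

```python
import numpy as np

def cost_f64(x):
    """Toy quadratic cost evaluated in float64."""
    return float(np.mean((np.asarray(x, dtype=np.float64) - 0.3) ** 2))

def cost_f32(x):
    """Same toy cost, but with all arithmetic done in float32."""
    x32 = np.asarray(x, dtype=np.float32)
    return float(np.mean((x32 - np.float32(0.3)) ** 2))

x = np.array([1.0])
h = 1.5e-8  # roughly sqrt(float64 machine epsilon), a typical finite-difference step

grad64 = (cost_f64(x + h) - cost_f64(x)) / h  # ~1.4, a usable gradient estimate
grad32 = (cost_f32(x + h) - cost_f32(x)) / h  # 0.0: the perturbation is lost to float32 rounding
print(grad64, grad32)
```

A zero gradient estimate at every iterate would look exactly like the "static" optimization described above, even though each individual cost evaluation is faster in float32.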