I use scipy.optimize.root with the hybr method (is it the best one?) to find the root of a numeric function. I print the residual at each iteration:
delta d 117.960112417
delta d 117.960112417
delta d 117.960112417
delta d 117.960048733
delta d 117.960112427
delta d 117.960112121
delta d 1.46141491664
delta d 0.0322651167588
delta d 0.000363688881595
delta d 4.05494689256e-08
How can I accelerate the root finding, e.g. by increasing the step size, especially during the first iterations? I don't know exactly how the algorithm works, but it looks strange that the first 3 results are identical, and the next 3 are nearly identical too.
Reading the doc, I've tried to modify the eps factor, without success.
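For reference, here is a minimal sketch of how I understand these options are passed for the hybr method (the values below are illustrative, not the ones from my real problem): eps is the finite-difference step used to estimate the Jacobian, and factor bounds the initial step.

```python
import numpy as np
import scipy.optimize

def f(x, target):
    # Toy residual: root is at x == target
    return target - x

x0 = np.array([1000.0, 1000.0, 1000.0, 1000.0])
target = np.array([2000.0, 2000.0, 2000.0, 2000.0])

# 'eps' is the step for the finite-difference Jacobian estimate;
# 'factor' (default 100) determines the bound on the initial step.
sol = scipy.optimize.root(f, x0, args=(target,), method='hybr',
                          options={'eps': 1e-8, 'factor': 100})
print(sol.x)
```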
EDIT: @sasha, here is a very basic function to illustrate the issue:
import numpy as np
import scipy.optimize

def f(X1, X2):
    print('X1 , diff , norm', X1, X2 - X1, np.linalg.norm(X2 - X1))
    return X2 - X1

Xa = np.array([1000, 1000, 1000, 1000])
Xb = np.array([2000, 2000, 2000, 2000])
SOL = scipy.optimize.root(f, Xa, (Xb,))
The result is the following; we get the 3 identical iterations at the beginning, whatever the length of X:
X1 , diff , norm [1000 1000 1000 1000] [1000 1000 1000 1000] 2000.0
X1 , diff , norm [ 1000. 1000. 1000. 1000.] [ 1000. 1000. 1000. 1000.] 2000.0
X1 , diff , norm [ 1000. 1000. 1000. 1000.] [ 1000. 1000. 1000. 1000.] 2000.0
X1 , diff , norm [ 1000.0000149 1000. 1000. 1000. ] [ 999.9999851 1000. 1000. 1000. ] 1999.99999255
X1 , diff , norm [ 1000. 1000.0000149 1000. 1000. ] [ 1000. 999.9999851 1000. 1000. ] 1999.99999255
X1 , diff , norm [ 1000. 1000. 1000.0000149 1000. ] [ 1000. 1000. 999.9999851 1000. ] 1999.99999255
X1 , diff , norm [ 1000. 1000. 1000. 1000.0000149] [ 1000. 1000. 1000. 999.9999851] 1999.99999255
X1 , diff , norm [ 2000. 2000. 2000. 2000.] [-0. -0. -0. -0.] 4.36239133705e-09
X1 , diff , norm [ 2000. 2000. 2000. 2000.] [ 0. 0. 0. 0.] 0.0
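Looking at the trace, the evaluations at 1000.0000149 perturb one component at a time, which I assume is the finite-difference Jacobian estimate. As a sketch (not my real problem), supplying an analytic Jacobian via the jac argument seems to remove those extra function evaluations:

```python
import numpy as np
import scipy.optimize

calls = {'n': 0}

def f(x, target):
    calls['n'] += 1  # count how often the residual is evaluated
    return target - x

def jac(x, target):
    # Analytic Jacobian of (target - x) with respect to x is -I,
    # so hybr no longer needs finite-difference evaluations of f.
    return -np.eye(len(x))

x0 = np.array([1000.0, 1000.0, 1000.0, 1000.0])
target = np.array([2000.0, 2000.0, 2000.0, 2000.0])
sol = scipy.optimize.root(f, x0, args=(target,), jac=jac, method='hybr')
print(sol.x, calls['n'])
```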