For odr.set_job(fit_type=2)
(ordinary least squares) and odr.set_job(fit_type=0)
(explicit ODR), the uncertainties sd_beta
and the covariance matrix cov_beta
differ significantly between the two fit types for the same data, which has errors in the y-values only.
Does anyone have an idea where this difference originates?
Below is a code example for reproducing the effect.
As an aside: for the hidden scaling factor between sd_beta
and cov_beta
I found some help here: How to compute standard error from ODR results? A short check of that relation follows the output below.
import numpy as np
import scipy.odr as spodr
def linear_func(pars, x):
    """Define a (linear) function to fit the data with."""
    a, b = pars
    return b * x + a
# Data with uncertainties in the y-values only
# (np.asfarray was removed in NumPy 2.0; use asarray with dtype=float)
x = np.asarray([1.01585124, 2.03965699, 2.89148684], dtype=float)
y = np.asarray([1.16201599, 2.15275179, 2.87248892], dtype=float)
dy = np.asarray([0.15520856, 0.33160654, 0.45461028], dtype=float)
real_data = spodr.RealData(x, y, sy=dy)
linear_model = spodr.Model(linear_func)
odr = spodr.ODR(real_data, linear_model, beta0=[0., 1.])
# fit_type=2 means ordinary least-squares fit
odr.set_job(fit_type=2)
out = odr.run()
print('fit_type=2 -------------------------------------------')
out.pprint()
print('\n')
# fit_type=0 means explicit ODR
odr.set_job(fit_type=0)
out = odr.run()
print('fit_type=0 -------------------------------------------')
out.pprint()
This gives the following output:
fit_type=2 -------------------------------------------
Beta: [0.22131257 0.92950773]
Beta Std Error: [0.0444006 0.03004 ]
Beta Covariance: [[ 0.10678885 -0.06586652]
[-0.06586652 0.04888189]]
Residual Variance: 0.01846085213372428
Inverse Condition #: 0.02685695006039764
Reason(s) for Halting:
Sum of squares convergence
fit_type=0 -------------------------------------------
Beta: [0.24510628 0.91688192]
Beta Std Error: [0.07210707 0.03482789]
Beta Covariance: [[ 2.29687687 -1.03022278]
[-1.03022278 0.53584146]]
Residual Variance: 0.0022636956774060857
Inverse Condition #: 0.017439127336256896
Reason(s) for Halting:
Sum of squares convergence
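For reference, the scaling mentioned above can be checked directly: in both runs, sd_beta equals sqrt(res_var * diag(cov_beta)), i.e. cov_beta is reported without the residual-variance scaling. A minimal sketch, reusing real_data, linear_model, and the imports from the example above:

# Check sd_beta == sqrt(res_var * diag(cov_beta)) for both fit types.
# Reuses real_data and linear_model from the example above.
for fit_type in (2, 0):
    odr = spodr.ODR(real_data, linear_model, beta0=[0., 1.])
    odr.set_job(fit_type=fit_type)
    out = odr.run()
    scaled_sd = np.sqrt(out.res_var * np.diag(out.cov_beta))
    print(f'fit_type={fit_type}: sd_beta={out.sd_beta}, '
          f'sqrt(res_var * diag(cov_beta))={scaled_sd}')

With the numbers above this checks out for both fit types (e.g. sqrt(0.01846085 * 0.10678885) ≈ 0.0444), so the discrepancy is not just the hidden scaling factor; the scaled uncertainties differ as well.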