I am trying to fit a model using LMFIT. The following works well:
def loss_function(params):
    residuals = []
    for x, measured in ...:
        y = predict(x, params)
        residuals.append(y - measured)
    return residuals

params = Parameters()
params.add(...)
model = Minimizer(loss_function, params)
result = model.minimize(method='leastsq')
and it gives very reasonable results.

Now I also have uncertainties associated with my measured variable (e.g. measurement errors), so I would like to weight each point in the residuals by the standard error associated with it (suppose it is a constant 20% of the measured value). The code now becomes something like this:
def loss_function(params):
    residuals = []
    for x, measured in ...:
        y = predict(x, params)
        residuals.append((y - measured) / (measured * 0.2))
    return residuals

params = Parameters()
params.add(...)
model = Minimizer(loss_function, params)
result = model.minimize(method='leastsq')
The problem is that I now get totally unreliable fitting results. Why does this happen, and how can I fix it?