I am trying to understand the errors returned by curve_fit and would appreciate it if somebody could look over my code and tell me whether I have understood the error processing correctly. My objective is to take some observations, fit a polynomial to them, and then use the fitted parameters to predict values. Crucially, I need not only a predicted value but also a measure of the error on that value (y). As I understand it, curve_fit returns the parameter covariance matrix, the square roots of its diagonal are the parameter standard errors, and I can use these to calculate a standard deviation for the predicted value (y). Again, as I understand it, the error I have calculated is the standard error, i.e. 1 sigma, and one sigma will contain 68% of the scattered readings assuming a normal distribution. Have I understood this correctly, and have I used the outputs of curve_fit correctly to achieve this? Below is some proof-of-concept code; after it I have also sketched the full covariance-matrix propagation, in case that is what I should be doing instead.
from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt
import random
# Model function: quadratic polynomial
def dfunc(x, a, b, c):
    return a*x*x + b*x + c
#Set up trial data
knt=20
ANAL = np.zeros((3,knt))
rand = 20 #% randomization in trial data
for i in range(0, knt):
    ANAL[0, i] = i
    # integer division so randint gets integer bounds
    ANAL[1, i] = dfunc(i, 1, 2, 3)*random.randint(100 - rand//2, 100 + rand//2)/100
# fit, then read off the parameter values and their standard errors
dprm, dpcv = curve_fit(dfunc, ANAL[0,],ANAL[1,])
dgcv = np.sqrt(np.diag(dpcv))
print ("Parameter values a,b,c: %5.2f %5.2f %5.2f"
% (dprm[0],dprm[1],dprm[2]))
print ("Parameter standard err: %5.2f %5.2f %5.2f"
% (dgcv[0],dgcv[1],dgcv[2]))
# Use the parameter standard errors to estimate a "y std err" for each x
# (this is the step I am unsure about)
for x in range(0, knt):
    ANAL[2, x] = dfunc(x, *dgcv)  # plug the parameter standard errors straight into the model
    print("For x=%5.2f observed y=%6.2f with y std err(%6.2f)"
          % (x, ANAL[1, x], ANAL[2, x]))
plt.plot(ANAL[0,], ANAL[1,], 'r+', label='trial data')
plt.plot(ANAL[0,], dfunc(ANAL[0,], *dprm), 'b-', label='best fit')
plt.errorbar(ANAL[0,], dfunc(ANAL[0,], *dprm), yerr=ANAL[2,], linestyle="none")
plt.xlabel('value x')
plt.ylabel('y observations')
plt.legend()
plt.show()
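
For reference, here is how I understand the full covariance matrix would be propagated to a prediction error, instead of using only the diagonal. This is a minimal sketch of linear error propagation (the delta method), var(y) ~= J * pcov * J^T, where J is the gradient of the model with respect to the parameters; it reuses dprm and dpcv from the code above. Is this the more correct way to get the 1-sigma error on the predicted y?

# Sketch: propagate the full parameter covariance matrix to the prediction.
# For y = a*x^2 + b*x + c the gradient w.r.t. (a, b, c) is (x^2, x, 1).
for x in range(0, knt):
    J = np.array([x*x, x, 1.0])     # gradient of dfunc w.r.t. a, b, c
    y_pred = dfunc(x, *dprm)        # predicted value from the best-fit parameters
    y_err = np.sqrt(J @ dpcv @ J)   # 1-sigma standard error of the prediction
    print("For x=%5.2f predicted y=%6.2f +/- %6.2f" % (x, y_pred, y_err))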