
I am trying to fit a curve to a step function. I have tried a number of approaches, such as using a sigmoid function, using a ratio of polynomials, and fitting a Gaussian function to the derivative of the step, but none of them look right. I then came up with the idea of creating a perfect step, convolving it with a Gaussian, and finding the best-fit parameters with non-linear regression. But this does not look good either. I am including the code for both the sigmoid and convolution approaches below. First, the sigmoid function fit:

Function:

function d=fit_sig(param,x,y)
a=param(1);
b=param(2);
d=(a./(1+exp(-b*x)))-y;
end

main code:

a=1; b=0.09;
p0=[a,b];
sig_coff=lsqnonlin(@(p) fit_sig(p,xavg_40s1,havg_40s1),p0);
% Plot the original and experimental data.
sig_new = sig_coff(1)./(1+exp(-sig_coff(2)*xavg_40s1));
d = havg_40s1 - sig_new;   % residuals
figure;
plot(xavg_40s1,havg_40s1,'.-r',xavg_40s1,sig_new,'.-b');
xlabel('x-pixel'); ylabel('dz/dx (mm/pixel)'); axis square; 

This is not working at all. I think my initial guesses are wrong; I tried multiple values but could not get it right. I tried the curve fitting tool too, but that is not working either.
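
In hindsight, part of the problem may be that the model has no center parameter: with x running from about 12.6 to 17.8, exp(-b*x) is nearly constant over the data, so no choice of a and b alone can place a step there. A minimal sketch of a shifted sigmoid with data-derived initial guesses (the four-parameter form and the guess heuristics are illustrative, not something I have tuned):

% shifted sigmoid: a./(1+exp(-b*(x-c)))+d, with c = step location, d = baseline
fit_sig2 = @(p,x) p(1)./(1+exp(-p(2)*(x-p(3)))) + p(4);
a0 = max(havg_40s1)-min(havg_40s1);        % step height
d0 = min(havg_40s1);                       % baseline level
[~,imid] = min(abs(havg_40s1-(d0+a0/2)));  % sample nearest half-height
c0 = xavg_40s1(imid);                      % step location guess
b0 = 10;                                   % order-of-magnitude slope guess
sig_coff2 = lsqnonlin(@(p) fit_sig2(p,xavg_40s1)-havg_40s1,[a0 b0 c0 d0]);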

Code for creating perfect step:

h=ones(1,numel(havg_40s1)); %height=1mm
h(1:81)=-0.038;
h(82:end)=1.002; %or 1.0143
figure; 
plot(xavg_40s1,havg_40s1,'k.-', 'linewidth',1.5, 'markersize',16);
hold on
plot(xavg_40s1,h,'.-r','linewidth',1.5,'markersize',12);

Code using convolution approach:

Function:

function d=fit_step(param,h,x,y)
A=param(1);
mu=param(2);
sigma=param(3);
d=conv(h,A*exp(-((x-mu)/sigma).^2),'same')-y;
end

main code:

param1=[0.2247    8.1884    0.0802];
step_coff=lsqnonlin(@(p) fit_step(p,h,dx_40s1,havg_40s1),param1);
% Plot the original and experimental data.
step_new = conv(h,step_coff(1)*exp(-((dx_40s1-step_coff(2))/step_coff(3)).^2),'same');

figure;
plot(xavg_40s1,havg_40s1,'.-r',xavg_40s1,step_new,'.-b');

This is close, but the edge of the step is shifted, and the corners look sharper than in the measured step.
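
One thing I suspect is that the Gaussian inside fit_step is built on the absolute x grid and is not normalized, so its area gets absorbed into A and any off-center mass shifts the edge. A sketch of what I mean by a unit-area kernel built symmetrically about its own center (the helper name and the scalar spacing dx are illustrative):

function d=fit_step2(param,h,dx,y)
% param(1) = blur sigma, in the same units as the (assumed scalar) spacing dx
sigma=param(1);
t=(-ceil(4*sigma/dx):ceil(4*sigma/dx))*dx;  % symmetric support about zero
g=exp(-0.5*(t/sigma).^2);
g=g/sum(g);                                 % unit area: the step height stays in h
d=conv(h,g,'same')-y;
end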

Could someone please suggest the best way to fit a step function, or any way to improve the code?

X data:

12.6400 12.6720 12.7040 12.7360 12.7680 12.8000 12.8320 12.8640 12.8960 12.9280 12.9600 12.9920 13.0240 13.0560 13.0880 13.1200 13.1520 13.1840 13.2160 13.2480 13.2800 13.3120 13.3440 13.3760 13.4080 13.4400 13.4720 13.5040 13.5360 13.5680 13.6000 13.6320 13.6640 13.6960 13.7280 13.7600 13.7920 13.8240 13.8560 13.8880 13.9200 13.9520 13.9840 14.0160 14.0480 14.0800 14.1120 14.1440 14.1760 14.2080 14.2400 14.272 14.3040 14.3360 14.3680 14.4000 14.4320 14.4640 14.4960 14.5280 14.5600 14.5920 14.6240 14.6560 14.6880 14.7200 14.7520 14.7840 14.8160 14.8480 14.8800 14.9120 14.9440 14.9760 15.0080 15.0400 15.0720 15.1040 15.1360 15.1680 15.2000 15.2320 15.2640 15.2960 15.3280 15.3600 15.3920 15.4240 15.4560 15.4880 15.5200 15.5520 15.5840 15.6160 15.6480 15.6800 15.7120 15.7440 15.7760 15.8080 15.8400 15.8720 15.9040 15.9360 15.9680 16.0000 16.0320 16.0640 16.0960 16.1280 16.1600 16.1920 16.2240 16.2560 16.2880 16.3200 16.3520 16.3840 16.4160 16.4480 16.4800 16.5120 16.5440 16.5760 16.6080 16.6400 16.6720 16.7040 16.7360 16.7680 16.8000 16.8320 16.8640 16.8960 16.9280 16.9600 16.9920 17.0240 17.0560 17.0880 17.1200 17.1520 17.1840 17.2160 17.2480 17.2800 17.3120 17.3440 17.3760 17.4080 17.4400 17.4720 17.5040 17.5360 17.5680 17.6000 17.6320 17.6640 17.6960 17.7280 17.7600

Y Data:

-0.0404 -0.0405 -0.0350 -0.0406 -0.0412 -0.0407 -0.0378 -0.0405 -0.0337 -0.0417 -0.0413 -0.0387 -0.0352 -0.0373 -0.0369 -0.0388 -0.0384 -0.0351 -0.0401 -0.0314 -0.0375 -0.0390 -0.0330 -0.0343 -0.0341 -0.0369 -0.0424 -0.0369 -0.0309 -0.0387 -0.0346 -0.0433 -0.0410 -0.0355 -0.0343 -0.0396 -0.0369 -0.0400 -0.0377 -0.0330 -0.0416 -0.0348 -0.0380 -0.0338 -0.0349 -0.0359 -0.0418 -0.0336 -0.0375 -0.0309 -0.0362 -0.0422 -0.0437 -0.0352 -0.0303 -0.0335 -0.0358 -0.0467 -0.0341 -0.0306 -0.0322 -0.0338 -0.0418 -0.0417 -0.0299 -0.0264 -0.0308 -0.0352 -0.0330 -0.0261 -0.0088 -0.0071 0.0013 0.0012 0.0151 0.0352 0.0475 0.0764 0.1423 0.2617 0.4057 0.6241 0.8076 0.8872 0.9248 0.9340 0.9395 0.9514 0.9650 0.9708 0.9875 0.9852 0.9955 0.9971 0.9966 0.9981 0.9983 0.9932 1.0013 1.0011 0.9961 1.0044 0.9994 1.0028 1.0028 0.9996 1.0009 1.0024 1.0027 1.0075 1.0017 1.0001 1.0033 1.0062 1.0071 1.0032 1.0026 1.0027 1.0062 1.0063 0.9981 1.0025 0.9994 1.0075 1.0026 1.0035 1.0018 0.9999 1.0045 1.0067 0.9980 1.0044 0.9976 0.9976 1.0087 1.0026 1.0010 0.9997 1.0025 0.9943 1.0098 0.9964 0.9994 0.9973 0.9997 1.0084 1.0035 0.9974 0.9967 0.9967 1.0013 1.0060 1.0026 0.9960 0.9970 0.9987 1.0054 1.0048 0.9952 0.9937 0.9972

Attached are images of the measured step and the fitted curves: [Image: fitted curve using the sigmoid function] [Image: fitted curve using the convolution approach]

Swati Jain
2 Answers


Why not take a simple approach? Smooth your data, compute its derivative, then find the max of that derivative. You can do the first two steps in one go by convolving with the derivative of a Gaussian, which is easy to generate.

The location of the max is the shift of the step function you're trying to fit. The average of the values to its left and the average of the values to its right (the code below uses the median, which is more robust to outliers) are the low and high values of the step function.


From first principles (toolboxes will make all of these steps simpler), the Gaussian gradient is computed like this:

x = [ ... ];  % the X data from the question, verbatim
y = [ ... ];  % the Y data from the question, verbatim

sigma = 3;               % smoothing sigma, in samples
cutoff = ceil(4*sigma);  % truncate the kernel at 4 sigma
kernel = -cutoff:cutoff;
kernel = -kernel .* exp(-0.5 * kernel.^2 / sigma.^2);  % derivative of a Gaussian
grad = conv(y,kernel,'same');  % smoothed derivative of y

We can find the maximum sample with max:

[~,ii] = max(grad);

This is the sample nearest the middle of the transition point. We can refine this location by fitting a parabola to the 3 samples around the peak:

px = x(ii-1:ii+1).';
py = grad(ii-1:ii+1).';
% solve the equation: py = [px.*px, px, ones(3,1)] * params;
params = [px.*px, px, ones(3,1)] \ py;
x_max = -params(2)/(2*params(1));
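
(For reference, the peak formula in the last line: a parabola y = p1*x^2 + p2*x + p3 has zero slope where 2*p1*x + p2 = 0, i.e. at x = -p2/(2*p1).)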

Finally, we might want to include the y values before and after the transition in the fit:

left = median(y(x<x_max));
right = median(y(x>x_max));

(Though we might want to assume that left=0 and right=1.)

Plotting:

plot(x,y)
hold on
plot([x(1),x_max,x_max,x(end)],[left,left,right,right])

[Image: data + fitted step function]


To fit a full error function (which is the integral of a Gaussian) we need just one more step: above we fitted a parabola to the three samples around the maximum; now we instead fit a parabola to the logarithm of the gradient values (see this other Q&A for an explanation), selecting all values above 0.2 times the peak for this fit to avoid fitting to noise. This cutoff sits at approximately 2 sigma, which should be enough to obtain an accurate estimate of the Gaussian peak. The parameters of the Gaussian peak are also the parameters of the smoothed error function; we can then correct the estimated sigma for the additional smoothing we applied:

% using grad from the code above (as well as x and y)
[m,ii] = max(grad);
w = floor(sum(grad > m * 0.2) / 2);  % half-width of the region above 0.2*peak
px = x(ii-w:ii+w).';
py = log(grad(ii-w:ii+w)).';
% solve the equation: py = [px.*px, px, ones(size(px))] * params;
params = [px.*px, px, ones(size(px))] \ py;
% obtain Gaussian parameters 
fitted_mu = -params(2)/(2*params(1));
fitted_sigma = sqrt(-0.5/params(1));
% correct for smoothing applied
fitted_sigma = sqrt(fitted_sigma^2 - (sigma*mean(diff(x)))^2);
% evaluate the fitted function
fitted_y = (erf((x-fitted_mu)/fitted_sigma) + 1) / 2 * (right-left) + left;
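
The logarithm trick works because the log of a Gaussian is a parabola:

log(A*exp(-(x-mu)^2/(2*sigma^2))) = -x^2/(2*sigma^2) + mu*x/sigma^2 + const

so matching py = p1*x^2 + p2*x + p3 gives p1 = -1/(2*sigma^2), and therefore mu = -p2/(2*p1) and sigma = sqrt(-0.5/p1), which are exactly the two recoveries in the code above.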

clf
plot(x,y)
hold on
plot(x,fitted_y)

[Image: data + fitted erf function]

Cris Luengo
  • Thanks for your response! This looks more like generating a perfect step, but what I am looking for is a curve fit to my data. Are you saying that if I convolve this perfect step with the grad, I will get the best fit to my step? I tried that; the corner roundness looks close, but again the edge is way different from where it should be. – Swati Jain Jun 16 '19 at 19:40
  • Hs=[left*(ones(1,65)) right right*(ones(1,64))]; xs1=min(x):0.16/4:x_max; xs2=x_max:0.16/4:max(x); xs=[xs1 x_max xs2]; figure; plot(x,y,'.-') hold on plot(xs,Hs,'.-') grad=grad(7:151); grad=grad/sum(grad); hfit=conv(Hs,grad,'same'); figure; plot(xs,hfit,'.-k'); hold on plot(x,y,'.-b'); – Swati Jain Jun 16 '19 at 19:41

Here is an example graphical fitter using your data and a sigmoid from my equation search. This example uses the standard scipy differential_evolution genetic algorithm module to determine initial parameter estimates for curve fitting; that module uses the Latin Hypercube algorithm to ensure a thorough search of parameter space, which requires bounds within which to search. In this example the search bounds are derived from the data. Note that it is much easier to estimate ranges for the initial parameter estimates than specific values.

[Image: data and fitted sigmoid]

import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings

xData = numpy.array([12.6400, 12.6720, 12.7040, 12.7360, 12.7680, 12.8000, 12.8320, 12.8640, 12.8960, 12.9280, 12.9600, 12.9920, 13.0240, 13.0560, 13.0880, 13.1200, 13.1520, 13.1840, 13.2160, 13.2480, 13.2800, 13.3120, 13.3440, 13.3760, 13.4080, 13.4400, 13.4720, 13.5040, 13.5360, 13.5680, 13.6000, 13.6320, 13.6640, 13.6960, 13.7280, 13.7600, 13.7920, 13.8240, 13.8560, 13.8880, 13.9200, 13.9520, 13.9840, 14.0160, 14.0480, 14.0800, 14.1120, 14.1440, 14.1760, 14.2080, 14.2400, 14.272, 14.3040, 14.3360, 14.3680, 14.4000, 14.4320, 14.4640, 14.4960, 14.5280, 14.5600, 14.5920, 14.6240, 14.6560, 14.6880, 14.7200, 14.7520, 14.7840, 14.8160, 14.8480, 14.8800, 14.9120, 14.9440, 14.9760, 15.0080, 15.0400, 15.0720, 15.1040, 15.1360, 15.1680, 15.2000, 15.2320, 15.2640, 15.2960, 15.3280, 15.3600, 15.3920, 15.4240, 15.4560, 15.4880, 15.5200, 15.5520, 15.5840, 15.6160, 15.6480, 15.6800, 15.7120, 15.7440, 15.7760, 15.8080, 15.8400, 15.8720, 15.9040, 15.9360, 15.9680, 16.0000, 16.0320, 16.0640, 16.0960, 16.1280, 16.1600, 16.1920, 16.2240, 16.2560, 16.2880, 16.3200, 16.3520, 16.3840, 16.4160, 16.4480, 16.4800, 16.5120, 16.5440, 16.5760, 16.6080, 16.6400, 16.6720, 16.7040, 16.7360, 16.7680, 16.8000, 16.8320, 16.8640, 16.8960, 16.9280, 16.9600, 16.9920, 17.0240, 17.0560, 17.0880, 17.1200, 17.1520, 17.1840, 17.2160, 17.2480, 17.2800, 17.3120, 17.3440, 17.3760, 17.4080, 17.4400, 17.4720, 17.5040, 17.5360, 17.5680, 17.6000, 17.6320, 17.6640, 17.6960, 17.7280, 17.7600])

yData = numpy.array([-0.0404, -0.0405, -0.0350, -0.0406, -0.0412, -0.0407, -0.0378, -0.0405, -0.0337, -0.0417, -0.0413, -0.0387, -0.0352, -0.0373, -0.0369, -0.0388, -0.0384, -0.0351, -0.0401, -0.0314, -0.0375, -0.0390, -0.0330, -0.0343, -0.0341, -0.0369, -0.0424, -0.0369, -0.0309, -0.0387, -0.0346, -0.0433, -0.0410, -0.0355, -0.0343, -0.0396, -0.0369, -0.0400, -0.0377, -0.0330, -0.0416, -0.0348, -0.0380, -0.0338, -0.0349, -0.0359, -0.0418, -0.0336, -0.0375, -0.0309, -0.0362, -0.0422, -0.0437, -0.0352, -0.0303, -0.0335, -0.0358, -0.0467, -0.0341, -0.0306, -0.0322, -0.0338, -0.0418, -0.0417, -0.0299, -0.0264, -0.0308, -0.0352, -0.0330, -0.0261, -0.0088, -0.0071, 0.0013, 0.0012, 0.0151, 0.0352, 0.0475, 0.0764, 0.1423, 0.2617, 0.4057, 0.6241, 0.8076, 0.8872, 0.9248, 0.9340, 0.9395, 0.9514, 0.9650, 0.9708, 0.9875, 0.9852, 0.9955, 0.9971, 0.9966, 0.9981, 0.9983, 0.9932, 1.0013, 1.0011, 0.9961, 1.0044, 0.9994, 1.0028, 1.0028, 0.9996, 1.0009, 1.0024, 1.0027, 1.0075, 1.0017, 1.0001, 1.0033, 1.0062, 1.0071, 1.0032, 1.0026, 1.0027, 1.0062, 1.0063, 0.9981, 1.0025, 0.9994, 1.0075, 1.0026, 1.0035, 1.0018, 0.9999, 1.0045, 1.0067, 0.9980, 1.0044, 0.9976, 0.9976, 1.0087, 1.0026, 1.0010, 0.9997, 1.0025, 0.9943, 1.0098, 0.9964, 0.9994, 0.9973, 0.9997, 1.0084, 1.0035, 0.9974, 0.9967, 0.9967, 1.0013, 1.0060, 1.0026, 0.9960, 0.9970, 0.9987, 1.0054, 1.0048, 0.9952, 0.9937, 0.9972])

def func(x, a, b, Offset): # Sigmoid A with Offset from zunzun.com
    return 1.0 / (1.0 + numpy.exp(-1.0 * a * (x - b))) + Offset


# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore") # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)


def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)
    maxY = max(yData)
    minY = min(yData)

    parameterBounds = []
    parameterBounds.append([minX, maxX]) # search bounds for a
    parameterBounds.append([minX, maxX]) # search bounds for b
    parameterBounds.append([minY, maxY]) # search bounds for Offset

    # "seed" the numpy random number generator for repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# generate initial parameter estimates with the genetic algorithm
geneticParameters = generate_Initial_Parameters()

# now call curve_fit without passing bounds from the genetic algorithm,
# just in case the best-fit parameters are outside those bounds
fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)
print('Fitted parameters:', fittedParameters)
print()

modelPredictions = func(xData, *fittedParameters) 

absError = modelPredictions - yData

SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))

print()
print('RMSE:', RMSE)
print('R-squared:', Rsquared)

print()


##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)

    # first the raw data as a scatter plot
    axes.plot(xData, yData,  'D')

    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)

    # now the model as a line plot
    axes.plot(xModel, yModel)

    axes.set_xlabel('X Data') # X axis data label
    axes.set_ylabel('Y Data') # Y axis data label

    plt.show()
    plt.close('all') # clean up after using pyplot

graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)
James Phillips
  • Phillips, thank you for your response! Could you please tell me which software you used for this? – Swati Jain Jun 16 '19 at 19:47
  • I could get a fit using this sigmoid equation with offset in MATLAB as well. Could you please explain how the bounds were chosen for a and b? I can see you are putting bounds for 'a' and 'b' within the min and max values of x, and for the offset within the min and max values of y. The y-values of my fitted curve are a little bit off compared to your image. – Swati Jain Jun 16 '19 at 20:31
  • @SwatiJain For the equation search, I used the "function finder" on my zunzun.com open source Python curve and surface fitting web site to find the Sigmoid With Offset. As for the genetic algorithm search bounds in my example code, the bounds can be generous so I always first try bounds derived from the data - and again it worked well. – James Phillips Jun 16 '19 at 21:33
  • I am not familiar with Python at all. If you could tell me the math behind the bounds, I can apply it in my MATLAB code. I mean, are these conditions working something like this: min(x) – Swati Jain Jun 17 '19 at 01:28
  • These are search bounds for a genetic algorithm. If you would like to try the direct link for fitting this equation it is http://zunzun.com/Equation/2/Sigmoidal/Sigmoid%20A/ – James Phillips Jun 17 '19 at 07:56
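
As the comments above ask how the bounds translate to MATLAB: the same data-derived box can be passed to lsqcurvefit. A minimal sketch (the model handle, starting point, and the variable names x and y are illustrative assumptions):

% sigmoid with offset, as in the Python example above
sig = @(p,x) 1./(1+exp(-p(1)*(x-p(2)))) + p(3);
lb = [min(x) min(x) min(y)];   % generous lower bounds derived from the data
ub = [max(x) max(x) max(y)];   % generous upper bounds derived from the data
p0 = (lb+ub)/2;                % start somewhere inside the search box
p  = lsqcurvefit(sig,p0,x,y,lb,ub);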