
I have data points (x, y) that I need to fit an exponential function to,

y = A + B * exp(C * x),

but I can neither use the Curve Fitting Toolbox nor the Optimization Toolbox.

User rayryeng was good enough to help me with working code:

x = [0    0.0036    0.0071    0.0107    0.0143    0.0178    0.0214    0.0250    0.0285    0.0321    0.0357    0.0392    0.0428    0.0464    0.0464];
y = [1.3985    1.3310    1.2741    1.2175    1.1694    1.1213    1.0804    1.0395    1.0043    0.9691    0.9385    0.9080    0.8809    0.7856    0.7856];

M = [ones(numel(x),1), x(:)]; %// Ensure x is a column vector
lny = log(y(:)); %// Ensure y is a column vector and take ln

X = M\lny; %// Solve for parameters
A = exp(X(1)); %// Solve for A
b = X(2); %// Get b

xval = linspace(min(x), max(x));
yval = A*exp(b*xval);
plot(x,y,'r.',xval,yval,'b');

However, this code only fits the equation without offset

y = A * exp(B * x).

How can I extend this code to fit the three-parameter equation?


In another attempt, I managed to fit the function using fminsearch:

function [xval, yval] = curve_fitting_exponential_1_optimized(x,y,xval)

start_point = rand(1, 3);             % random initial guess for [A, B, C]
model = @expfun;
est = fminsearch(model, start_point); % minimize the sum of squared errors

    function [sse, FittedCurve] = expfun(params)
        A = params(1);
        B = params(2);
        C = params(3);
        FittedCurve = A + B .* exp(-C * x); % note the sign convention exp(-C * x)
        ErrorVector = FittedCurve - y;
        sse = sum(ErrorVector .^ 2);
    end

yval = est(1)+est(2) * exp(-est(3) * xval);
end

The problem here is that the result depends on the starting point, which is chosen randomly, so I don't get a stable solution. But since I need the function for automation, I need something stable. How can I get a stable solution?

Trenera
  • You could also have a look at [ezyfit](http://www.fast.u-psud.fr/ezyfit/). – m.s. Apr 20 '15 at 13:41
  • As far as I understood from the links, there is no way to perform such curve fitting without optimization. Is that correct? – Trenera Apr 20 '15 at 13:52
  • @ViharChervenkov You can use [`fminsearch`](http://www.mathworks.com/help/matlab/ref/fminsearch.html) as I [described here](http://stackoverflow.com/a/28222744/3121310). – TroyHaskin Apr 20 '15 at 13:59
  • @TroyHaskin Thanks, but as I said I was hoping to find something that doesn't depend on an initial guess – Trenera Apr 20 '15 at 14:03
  • @ViharChervenkov My bad. I forgot to read the last half of the post before commenting. Apologies. However, unless you can linearize the fit, an initial guess/search region is required for nonlinear curve fitting. – TroyHaskin Apr 20 '15 at 14:07
  • In this case you can do a good initial guess (the minimum value) and the results of the direct fit for the rest of the parameters. – rubenvb Apr 20 '15 at 14:12
  • @TroyHaskin Thanks for the last comment! Now I know that what I look for simply does not exist. Have a great day! – Trenera Apr 20 '15 at 14:13
  • @rubenvb Do you mean setting the "a" constant (of yval = a + b* exp(c* xval)) equal to min(y) and then carrying on with the same code? – Trenera Apr 20 '15 at 14:15
  • @Vihar no, I mean using min(y) as the first guess of your fminsearch call for the parameter `a`. – rubenvb Apr 20 '15 at 14:18
  • @rubenvb That would mean: start_point = [min(y) rand(1, 1) rand(1,1)]; Right? – Trenera Apr 20 '15 at 14:24
  • @ViharChervenkov *there is no way to perform such curve fitting without optimization. Is that correct?* I'm not sure what you mean by that... Curve fitting always involves solving some optimization problem. – jub0bs Apr 20 '15 at 15:16
  • @Jubobs, the OP probably means "without using a numerical optimization algorithm". – A. Donda Apr 20 '15 at 15:22
  • @Vihar yes, and you could even subtract min(y) from all data, do the `M\(lny - log(min(y)))` and use those as starting values for fminsearch. That should give a good first estimate in any case, but you need to determine if this is actually faster than just starting from a sane (but not random!) initial guess. – rubenvb Apr 21 '15 at 07:07

2 Answers


How to adapt rayryeng's code for three parameters?

rayryeng used the strategy of linearizing the nonlinear equation so that standard linear regression can be applied. See also Jubobs' answer to a similar question.

This strategy no longer works if there is a non-zero offset A. We can fix the situation by getting a rough estimate of the offset. As rubenvb mentioned in the comments, we could estimate A by min(y), but then the logarithm would get applied to a zero. Instead, we leave a bit of space between our guess of A and the minimum of the data, say half the data's range. Subtracting the guess turns the model into y - Aguess ≈ B * exp(-C * x), so log(y - Aguess) ≈ log(B) - C * x is linear in the unknowns log(B) and C, and we can use rayryeng's method:

x = x(:);  % bring the data into the standard, more
y = y(:);  % convenient format of column vectors
Aguess = min(y) - (max(y) - min(y)) / 2;  % offset guess: half the range below min(y)
guess = [ones(size(x)), -x] \ log(y - Aguess);
Bguess = exp(guess(1));
Cguess = guess(2);

For the given data, this results in

Aguess =  0.4792
Bguess =  0.9440
Cguess = 21.7609

Unlike in the two-parameter situation, we cannot expect this to be a good fit, since Aguess is only a rough estimate. Its SSE is 0.007331.
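For reference, this SSE can be computed directly from the guess (a minimal sketch; sse_guess is a name introduced here for illustration, and x, y are the column vectors from above):

% Sum of squared errors of the initial guess (same criterion as the question's expfun)
residuals = Aguess + Bguess * exp(-Cguess * x) - y;
sse_guess = sum(residuals .^ 2)   % approximately 0.007331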

How to get a stable solution?

This guess is, however, useful as a starting point for the nonlinear optimization:

start_point = [Aguess, Bguess, Cguess];
est = fminsearch(@expfun, start_point);
Aest = est(1);
Best = est(2);
Cest = est(3);
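Here, @expfun is the objective function from the question. For completeness, an equivalent anonymous-function form would be (a sketch; with this form the call is fminsearch(expfun, start_point), without the @):

% Sum-of-squared-errors objective, equivalent to the question's nested expfun
expfun = @(params) sum((params(1) + params(2) .* exp(-params(3) * x) - y) .^ 2);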

Now the optimization arrives at a stable estimate, because the computation is deterministic:

Aest = -0.1266
Best =  1.5106
Cest = 10.2314

The SSE of this estimate is 0.004041.

This is what the data (blue dots) and fitted curves (green: guess, red: optimized) look like:
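(The original image is not reproduced here.) A sketch of plotting code that would produce such a figure, using the guess and estimate from above:

xval = linspace(min(x), max(x));
plot(x, y, 'b.', ...                                       % data
     xval, Aguess + Bguess * exp(-Cguess * xval), 'g', ... % linearized guess
     xval, Aest + Best * exp(-Cest * xval), 'r');          % optimized fit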

A. Donda
  • @BenVoigt, but his optimization code already accommodates a non-zero offset. – A. Donda Apr 20 '15 at 16:14
  • Ah ok, he wants a modification of the first code piece. Argh. Ok, thanks for pointing that out. – A. Donda Apr 20 '15 at 16:14
  • Thank you for the answer. I just want to point out that I don't have the Statistics Toolbox either, so I cannot use range. I use max(y)-min(y) instead. You might want to change that in your answer. – Trenera Apr 21 '15 at 07:28
  • @ViharChervenkov, I didn't realize that `range` belongs to a toolbox, thanks for pointing that out. Glad I could help. – A. Donda Apr 22 '15 at 06:51
  • @rayryeng, it appears I keep misspelling your name. Sorry about that! – A. Donda May 10 '15 at 03:32
  • @A.Donda - Haha no problem :) A lot of people misspell it, and I correct their posts as well. BTW, +1 to your answer. – rayryeng May 10 '15 at 05:20

Here is the whole function in all its glory - special thanks to A. Donda!

function [xval, yval] = curve_fitting_exponential_1_optimized(x,y,xval)

x = x(:);  % bring the data into the standard, more convenient format of column vectors
y = y(:);

Aguess = min(y) - (max(y)-min(y)) / 2;
guess = [ones(size(x)), -x] \ log(y - Aguess);
Bguess = exp(guess(1));
Cguess = guess(2);

start_point = [Aguess, Bguess, Cguess];
est = fminsearch(@expfun, start_point);

    function [sse, FittedCurve] = expfun(params)
        A = params(1);
        B = params(2);
        C = params(3);
        FittedCurve = A + B .* exp(-C * x);
        ErrorVector = FittedCurve - y;
        sse = sum(ErrorVector .^ 2);
    end

yval = est(1)+est(2) * exp(-est(3) * xval);
end
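For example, with the x and y data from the question, the function might be called like this (a usage sketch):

xval = linspace(min(x), max(x));
[xval, yval] = curve_fitting_exponential_1_optimized(x, y, xval);
plot(x, y, 'r.', xval, yval, 'b');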
Trenera