
I have the x and y vectors described below:

x = [0 5 8 15 18 25 30 38 42 45 50];
y = [81.94 75.94 70.06 60.94 57.00 50.83 46.83 42.83 40.94 39.00 38.06];

With these values, how can I find the coefficients of y = a*(b^x)?

I've tried the code below, but it fits y = a*e^(b*x) instead:

clear, clc, close all, format compact, format long 
% enter data
x = [0 5 8 15 18 25 30 38]; 
y = [81.94 75.94 70.06 60.94 57.00 50.83 46.83 42.83];   
n = length(x);
y2 = log(y);
j = sum(x);
k = sum(y2);
l = sum(x.^2);
m = sum(y2.^2);
r2 = sum(x .* y2);
b = (n * r2 - k * j) / (n * l - j^2) 
a = exp((k-b*j)/n) 

y = a * exp(b * 35) 
result_68 = log(68/a)/b 
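Note that the two models are related, since a*e^(b*x) = a*(e^b)^x, so in principle the base of the form I'm after could be recovered from this exponential fit (a sketch using the a and b computed above):

```matlab
% a*exp(b*x) == a*(exp(b))^x, so the base of y = a*(b^x) is exp(b)
b_base = exp(b);   % now y = a * b_base .^ x
```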

I know interpolation techniques, but I couldn't apply them to this problem...

Many thanks in advance!

Hayra

3 Answers


Since y = a * b ^ x is equivalent to log(y) = log(a) + x * log(b), you can do

>> y = y(:);
>> x = x(:);
>> logy = log(y);
>> beta = regress(logy, [ones(size(x)), x]);
>> loga = beta(1);
>> logb = beta(2);
>> a = exp(loga);
>> b = exp(logb);

so the values of a and b are

>> a, b
a =
      78.8627588780382
b =
     0.984328823937827

Plotting the fit

>> plot(x, y, '.', x, a * b .^ x, '-') 

gives you this:

(plot: the data points with the fitted curve a * b .^ x overlaid)

NB: the regress function is from the Statistics Toolbox, but you can define a very simple version which does what you need:

function beta = regress(y, x)
    beta = (x' * x) \ (x' * y);
end
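Alternatively, base MATLAB's polyfit performs the same degree-1 linear fit in the log domain with no toolbox required (a sketch using the same x and y as above; note that polyfit returns the slope first):

```matlab
p = polyfit(x(:), log(y(:)), 1);  % fits log(y) = p(1)*x + p(2)
a = exp(p(2))                     % intercept -> a
b = exp(p(1))                     % slope     -> b
```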
Chris Taylor
  • I like that while this is in fact a math question, more than a programming question, it was solved very quickly. – Henrik Apr 30 '15 at 07:46
  • I'd say that I am a mathematician, more than a programmer, so this question was clearly well suited to me! – Chris Taylor Apr 30 '15 at 07:48
  • Why can't I implement it as you described, i.e. log(y) = log(a) + x log(b), without regress or beta? I tried that with my code but didn't get the solution. Could you please explain it more? – Hayra Apr 30 '15 at 08:06
  • @Hayra I don't understand what your problem is - can you say your question again? – Chris Taylor Apr 30 '15 at 08:59

As an extension to the answer given by Chris Taylor, which provides the best linear fit in the log-transformed domain, you can find a better fit in the original domain by solving the non-linear problem directly, for example with the Gauss-Newton algorithm.

Using the solution given by Chris as a starting point:

x = [0 5 8 15 18 25 30 38 42 45 50];
y = [81.94 75.94 70.06 60.94 57.00 50.83 46.83 42.83 40.94 39.00 38.06];

regress = @(y, x) (x' * x) \ (x' * y);

y = y(:);
x = x(:);
logy = log(y);
beta = regress(logy, [ones(size(x)), x]);
loga = beta(1);
logb = beta(2);
a = exp(loga)
b = exp(logb)
error = sum((a*b.^x - y).^2)  % sum of squared residuals (shadows the built-in error, but works here)

Which gives:

>> a, b, error
a =
  78.862758878038164
b =
   0.984328823937827
error =
  42.275290442577422

You can iterate a bit further to find a better solution:

beta = [a; b];   % start from the log-domain estimates
iter = 20;
for k = 1:iter
    fi = beta(1)*beta(2).^x;                          % model prediction
    ri = y - fi;                                      % residuals
    J = [ beta(2).^x, beta(1)*beta(2).^(x-1).*x ]';   % transposed Jacobian
    JJ = J * J';
    Jr = J * ri;
    delta = JJ \ Jr;                                  % Gauss-Newton step
    beta = beta + delta;
end

a = beta(1)
b = beta(2)
error = sum((a*b.^x - y).^2)

Giving:

>> a, b, error

a =
  80.332725222265623
b =
   0.983480686478288
error =
   35.978195088265906
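If the Optimization Toolbox is available, lsqcurvefit solves the same non-linear least-squares problem without a hand-written loop (a sketch, assuming the same x and y and using the log-domain estimates as the starting point):

```matlab
model = @(beta, x) beta(1) * beta(2).^x;     % y = a * b^x
beta0 = [78.86; 0.9843];                     % start from the linear fit in the log domain
beta_hat = lsqcurvefit(model, beta0, x, y);  % requires Optimization Toolbox
a = beta_hat(1), b = beta_hat(2)
```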
Iban Cereijo
  • Thanks. It looks like your loop does 20 iterations of gradient descent - is that right? – Chris Taylor Apr 30 '15 at 08:58
  • It's Gauss-Newton in this case as an example. It has quicker convergence and does not require the hessian. We could implement gradient descent, or even pure Newton, as the hessian is really easy to compute for this function. – Iban Cereijo Apr 30 '15 at 09:08

One more option is to use MATLAB's Curve Fitting Toolbox which lets you generate the fit interactively, and can also emit the MATLAB code needed to run the fit.

Curve fitting tool screenshot
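For reference, the code cftool generates for a custom equation is roughly of this shape (a sketch; the exact generated output may differ):

```matlab
ft = fittype('a*b^x', 'independent', 'x');  % custom model y = a*b^x
opts = fitoptions(ft);
opts.StartPoint = [80, 0.98];               % a rough starting guess
f = fit(x(:), y(:), ft, opts)               % returns a cfit object
plot(f, x, y)
```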

Edric