
I am trying to understand how the following code optimizes the objective function.

I thought that if I use a vector as the input to fminsearch as below, the result Kp should give me the point where the objective function is minimized.

For example, Kp(1) should give me the point where the objective function is minimized for the given K(1) and Z(1). So it would just be solving a univariate problem.

But with the objective function below, Kp(4) should also affect Kp(1), because there is flipud(Vs) inside the function, and vice versa.

And fminsearch still gives me a result without showing any error message.

If this is the case, what exactly does fminsearch minimize? Does Kp(1) still minimize the objective function for the given K(1) and Z(1)?

Does fminsearch treat this as a multivariate optimization problem, instead of the univariate one it would be without flipud(Vs)?

If so, how does fminsearch minimize the objective function, given that there is then more than one equation to minimize?

clear
clc
R         = 1.008;
sig       = 0.75;
tempkgrid = linspace(-2,6,2)';           % two grid points: -2 and 6
K         = [tempkgrid ; tempkgrid];     % 4x1 capital grid
Z         = [2*ones(2,1) ; 4*ones(2,1)]; % 4x1 productivity grid
aconst1   = -2*ones(4,1);                % lower bound (also used as starting point)
aconst2   = 6*ones(4,1);                 % defined but not used below
const     = R * (K + Z);                 % upper bound
% obj returns a 4x1 vector; flipud(Vs) couples its components
obj       = @(Vs) -((1/(1-1/sig)) * ((Z + K - Vs./R) > 0) .* (Z + K - Vs./R).^(1-1/sig) + flipud(Vs));
% fminsearchbnd (File Exchange) receives the scalar norm(obj(c)) as its objective
Kp        = fminsearchbnd(@(c) norm(obj(c)), aconst1, aconst1, const);
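For reference, this is the kind of check that makes me think the components are coupled (a minimal sketch using the definitions above; Vs0 and Vs1 are just names I made up for two test points):

% Perturb only the 4th component of Vs and see whether the 1st
% component of obj changes (it does, because of flipud(Vs)).
Vs0        = zeros(4,1);
Vs1        = Vs0;
Vs1(4)     = Vs1(4) + 0.1;      % small perturbation of component 4 only
difference = obj(Vs1) - obj(Vs0);
disp(difference(1))             % nonzero => component 1 of obj depends on Vs(4)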

1 Answer


When the input is a vector, fminsearch solves a multivariate minimization problem. In other words, it minimizes one multivariate function, e.g.

f(x1, x2, x3, ...)

rather than minimizing multiple univariate functions at the same time,

f(x1, x2=x2_0, x3=x3_0, ...),

f(x1=x1_0, x2, x3=x3_0, ...),

f(x1=x1_0, x2=x2_0, x3, ...), ...
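In your call, the handle passed to fminsearchbnd already collapses the 4x1 vector obj(c) into a single scalar via norm, so there is only one number being minimized, and all four components of Kp are chosen jointly to minimize it. Here is a minimal sketch of the same idea with plain fminsearch (the quadratic f below is a made-up example, not your objective):

% One scalar function of a 3-element vector: f(x1, x2, x3).
f  = @(x) (x(1) - 1)^2 + (x(2) + 2)^2 + x(1)*x(3) + x(3)^2;

% fminsearch varies all components of x at once to minimize the
% single scalar f(x); it does not solve three separate 1-D problems.
x0   = zeros(3,1);          % starting point
xmin = fminsearch(f, x0);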

To see how it finds a local minimum in N-dimensional space, it helps to understand the idea of the gradient descent method. Although fminsearch uses a different algorithm (the Nelder-Mead method), the basic idea behind all minimization methods is similar.

The Wikipedia page on gradient descent provides a good analogy:

A person stuck in the mountains is trying to get down (i.e. trying to find the global minimum).

There is a heavy fog, so the path down the mountain is not visible; they must use local information to find the minimum.

They can use the gradient descent method:
Look at the steepness of the hill at their current position, and then proceed in the direction of steepest descent (i.e. downhill).

Using this method, they would eventually find their way down the mountain or possibly get stuck in some "hole" (i.e. local minimum or saddle point), like a mountain lake.
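To make the analogy concrete, here is a minimal gradient descent sketch on a toy function of my own choosing (this is not what fminsearch does internally; fminsearch is derivative-free):

% Toy gradient descent on f(x, y) = (x - 1)^2 + 2*(y + 3)^2.
grad = @(p) [2*(p(1) - 1); 4*(p(2) + 3)];   % analytic gradient
p    = [10; 10];                            % starting point (the "hiker")
step = 0.1;                                 % fixed step size
for k = 1:200
    p = p - step * grad(p);                 % move downhill
end
disp(p)   % should approach the minimizer [1; -3]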

fminsearch uses the Nelder-Mead simplex algorithm; you can read more about it on either MATLAB's reference page or its Wikipedia page.
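If you want to watch the simplex at work, you can turn on iteration display through optimset (the Rosenbrock function below is just a stock test problem, not your objective):

% Classic Rosenbrock test function (minimum at [1; 1]).
rosen = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;

opts  = optimset('Display', 'iter');        % print one line per simplex step
[xmin, fval] = fminsearch(rosen, [-1.2; 1], opts);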
