Approach 1: Numerical (but naive)
This approach uses an anonymous function with vectorization: it numerically computes v over the range of possible a with a stepsize (precision in a) of 0.01. Depending on the precision required, one can simply reduce the stepsize until the answer converges (stops changing) within tolerance.
% MATLAB R2017a
u = 2.75;
s = 3.194;
fh = @(a) sqrt(u.^2 + 2.*a.*s);  % v as a function of a (vectorized)
aLB = 0.1;                       % lower bound on a
aUB = 1.5;                       % upper bound on a
stepsize = 0.01;                 % reduce until your answer converges (stops changing)
a = aLB:stepsize:aUB;
v = fh(a);
[v_max, ind] = max(v)            % v_max = 4.1406
a(ind)                           % a(ind) = 1.5000
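A quick way to verify convergence is to halve the stepsize and confirm the located maximizer stops changing; a minimal sketch, reusing fh, aLB, and aUB from above, with a hypothetical tolerance tol:
tol = 1e-6;                      % hypothetical tolerance (an assumption)
stepsize = 0.01;
a_prev = Inf;
while true
    a = aLB:stepsize:aUB;
    [~, ind] = max(fh(a));
    if abs(a(ind) - a_prev) < tol
        break                    % answer has stopped changing
    end
    a_prev = a(ind);
    stepsize = stepsize/2;       % refine the grid and try again
end
a_best_grid = a(ind)             % converged maximizer on the grid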
Approach 2: Numerical
This approach uses a linear penalty to fold the constraint aLB <= a <= aUB into the objective function for numerical optimization with fminsearch. Notice that fminsearch requires an initial starting guess for a. (fminsearch only evaluates the objective at one point at a time, so vectorization is not strictly required here; the element-wise operators simply let fh be shared with Approach 1.)
This works well when the objective function is convex (over a). If the objective function is not convex, one approach is to run the solver many times from different start points and take the best answer as your "best answer found so far" (see the multistart sketch after the code below).
Since we are maximizing here and fminsearch only minimizes, we introduce the negative sign and minimize. As for the penalty function, we could have made it quadratic or increased the weight, but we know the feasible range of a, which makes such refinements unnecessary here.
f2h = @(a) -fh(a) + abs(a-aLB).*(a < aLB) + abs(a-aUB).*(a > aUB);  % negated objective + linear penalty
[a_best, v_max_neg] = fminsearch(f2h, 1)
v_max = -v_max_neg               % undo the sign flip: v_max = 4.1406, matching Approach 1
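If the objective were not convex, a simple multistart loop would look like the sketch below; the number and placement of start points are assumptions for illustration, and a quadratic penalty variant (with a hypothetical weight w) is shown in a comment for comparison.
% Multistart (sketch): run fminsearch from several start points and keep
% the best answer found so far. The start points below are an assumption.
% Quadratic penalty variant (w is a hypothetical weight):
%   f2q = @(a) -fh(a) + w.*((a-aLB).^2.*(a < aLB) + (a-aUB).^2.*(a > aUB));
startPoints = linspace(aLB, aUB, 10);
a_best = NaN;  v_best = -Inf;
for k = 1:numel(startPoints)
    [a_k, fval_k] = fminsearch(f2h, startPoints(k));
    if -fval_k > v_best          % fval_k is the penalized, negated objective
        v_best = -fval_k;
        a_best = a_k;
    end
end
[a_best, v_best]                 % best answer found so far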
You can see the objective function is concave by inspection (the second derivative would show this as well). Negating it therefore gives a convex function, which means the local solution (optimum) returned by fminsearch is also the global solution.
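As a quick numeric sanity check (a sketch reusing the grid from Approach 1, not a proof), the finite-difference second derivative of v should be negative everywhere on the grid for a concave function:
a = aLB:0.01:aUB;                % grid from Approach 1
d2 = diff(fh(a), 2) / 0.01^2;    % finite-difference second derivative
all(d2 < 0)                      % returns 1 (true): fh is concave on this grid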
