
I have an unconstrained quadratic optimization problem: I have to find the u that minimizes |u^H * A_k * u - d_k|, where A_k is a 4x4 matrix (k = 1, 2, ..., 180) and u is a 4x1 vector (H denotes the Hermitian transpose).

There are several functions in MATLAB for solving optimization problems, but I am not able to figure out which method I need for my problem. I would highly appreciate any hint or suggestion on how to solve this in MATLAB.

zsha
  • Does the index k play any role in your solution? Or is it just |u'Au-d| -> min for e.g. a given k? – Felix Darvas Nov 23 '15 at 18:05
  • No, k is just an index, i.e. I have to minimize |u'A_ku - d_k| for k = 1, 2, ..., 180. – zsha Nov 23 '15 at 19:31
  • Since you don't seem to have any constraints, you could just use 'fminunc'. – Felix Darvas Nov 23 '15 at 19:34
  • Your writeup is not clear. Is your A matrix symmetric? What are the dimensions on A? Does k denote the dimensions on A? Or do you have to solve this problem 180 times for k=1 to 180? In the comment, you wrote |u'A_ku - d_k| which denotes absolute value but in the question, there are parenthesis. Which is it? – Matthew Gunn Nov 23 '15 at 20:20
  • Yes, matrix A is symmetric 4x4 and positive definite. u'A_ku is a scalar and d_k is also a scalar for a given k, and eventually the problem is finding the vector u that minimizes the sum over k = 1, ..., 180 of |u'A_ku - d_k|. (In LaTeX the problem can be written as \sum_{k=1}^{180} |u^H \mathbf{A}_k u - d_k|, but I don't know why it doesn't render here!) – zsha Nov 23 '15 at 20:35
  • Thanks. I can read latex code even if it isn't rendered. :) Looks like a bit of a tricky problem to be honest. At first I thought it was a straight quadratic programming problem, but I think the norm or absolute value on the outside destroys the convexity of the problem? – Matthew Gunn Nov 23 '15 at 20:37

2 Answers


Math preliminaries:

If A is symmetric positive definite, then the matrix A defines a norm. This norm, called the A-norm, is written ||x||_A = sqrt(x'Ax). Your problem, minimize (over x) |x'*A*x - d|, is equivalent to searching for a vector x whose squared A-norm equals d. Such a vector is trivial to find by simply scaling any non-zero vector up or down until it has the appropriate magnitude.

  1. Pick an arbitrary y (not all zero), e.g. y = [1; zeros(n-1, 1)]; or y = rand(n, 1);
  2. Solve for a scalar c such that x = c*y satisfies x'Ax = d. Substituting gives c^2 * y'Ay = d, hence c = sqrt(d / (y'Ay)).

Trivial procedure to find an answer:

y = [1; zeros(n-1, 1)];   % any arbitrary y will do, as long as it's not all zero
c = sqrt(d / (y'*A*y));
x = c * y;

x now solves your problem. No toolbox required!
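A quick sanity check of the scaling construction, using a randomly generated symmetric positive definite A for illustration (the particular A and d here are made up for the test):

n = 4;
M = randn(n);
A = M*M' + n*eye(n);       % random SPD test matrix
d = 3;                     % arbitrary positive target value
y = rand(n, 1);            % any non-zero starting vector
c = sqrt(d / (y'*A*y));    % scale factor from c^2 * y'Ay = d
x = c * y;
abs(x'*A*x - d)            % should be zero up to rounding error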

Matthew Gunn
  • Thank you so much for your effort. Yes, I have solved it by your method as well as by using fminsearch, but I wish to do it in a "technical" way ;) So I am trying to solve it using the Newton-Raphson method. I hope it gives a good estimate! – zsha Nov 24 '15 at 13:06
options = optimoptions('fminunc','GradObj','on','TolFun',1e-9,'TolX',1e-9);
x = fminunc(@(x)qfun(x,A,d),zeros(4,1),options);

with

function [y,g] = qfun(x,A,d)
y = x'*A*x - d;       % residual
g = 2*y*(A+A')*x;     % gradient of the squared residual
y = y*y;              % objective: (x'Ax - d)^2

Seems to work.

Felix Darvas
  • I appreciate your suggestion, but I am looking for a method that doesn't require the MATLAB Optimization Toolbox. – zsha Nov 23 '15 at 20:25
  • Since you have the function and the gradient, you could just implement your own optimization. Newton's method (https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization) should work I think. – Felix Darvas Nov 23 '15 at 20:45
  • Yes, I am trying to implement it using the Newton-Raphson method. Is there any MATLAB built-in function to compute the first- and second-order derivatives? – zsha Nov 24 '15 at 13:04
  • You have the first-order derivative right there, it's the output g of qfun. The second derivative is also straightforward to compute from that. – Felix Darvas Nov 24 '15 at 17:19
  • I see. But shouldn't it be g = 2*y*A*x? (I didn't get the "+A'" part.) – zsha Nov 24 '15 at 19:38
  • The inner derivative is d/dx (x'*A*x), so the product rule applies, i.e. d/dx(x')*A*x + x'*d/dx(A*x) = (A+A')*x. – Felix Darvas Nov 24 '15 at 19:42
  • This is a more rigorous treatment of the derivatives involved: https://en.wikipedia.org/wiki/Matrix_calculus – Felix Darvas Nov 24 '15 at 19:53
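A minimal Newton-Raphson sketch along the lines discussed in these comments, requiring no toolbox. It assumes A, d, and qfun as defined in the answer above; the Hessian follows from the product rule applied to the gradient g = 2*r*(A+A')*x with r = x'Ax - d, giving H = 2*(Bx)(Bx)' + 2*r*B where B = A+A'. The damping term 1e-8*eye(4) is an added safeguard, since H can be indefinite away from a minimum:

x = rand(4,1);                       % arbitrary starting point
B = A + A';
for iter = 1:50
    r = x'*A*x - d;                  % residual, as in qfun
    g = 2*r*B*x;                     % gradient, identical to qfun's g
    H = 2*(B*x)*(B*x)' + 2*r*B;     % Hessian of (x'Ax - d)^2
    x = x - (H + 1e-8*eye(4)) \ g;  % damped Newton step
    if norm(g) < 1e-10, break; end   % stop at a stationary point
end

Note that because the objective is non-convex, the iteration converges to whichever stationary point is nearest the starting guess, so a few random restarts are a reasonable precaution.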