I'm trying to minimize a discretized function using the method of steepest descent. This should be fairly straightforward, but I'm having trouble with the search 'climbing' out of any local minimum. Here's my code in Mathematica, but its syntax is easy to follow.
x = {1., 1.}; (* some ordered pair as the starting search point; {1., 1.} is just an example *)
h = 0.0000001; (* finite-difference step; something rather small *)
lambda = 1;
While[True,
  x1 = x[[1]];
  x2 = x[[2]];
  (* central differences with the arguments reversed, so each is the
     NEGATIVE of the partial derivative; adding them to x steps downhill *)
  x1Gradient = (f[x1 - h, x2] - f[x1 + h, x2])/(2 h);
  x2Gradient = (f[x1, x2 - h] - f[x1, x2 + h])/(2 h);
  gradient = {x1Gradient, x2Gradient};
  (* stop once the norm of the gradient is small enough *)
  If[Sqrt[x1Gradient^2 + x2Gradient^2] > 0.000001,
    xNew = x + lambda*gradient,
    Break[]
  ];
  (* either accept xNew or halve lambda and retry *)
  If[f[xNew[[1]], xNew[[2]]] < f[x1, x2],
    x = xNew,
    lambda = lambda/2
  ];
];
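For reference, a minimal setup to run the loop against (this quadratic f and the start point are just stand-ins, not my actual function):

f[a_, b_] := (a - 3)^2 + (b + 1)^2; (* stand-in objective with its minimum at {3, -1} *)
x = {0., 0.}; (* test starting point *)

Both f and x need numeric values before entering the loop, so the finite differences evaluate to numbers instead of staying symbolic.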
Why would this ever climb a hill? I'm puzzled because I even test whether the new value is less than the old, and I only accept it when it is! Thoughts?