
I'm having trouble finding a clear answer to this, despite a number of questions here and on the Math Stack Exchange that ask about more specific aspects of optimizers and root finders, such as this one.

So far I know that any root-finding problem can be recast as a minimization problem (e.g. solving f(x) = 0 by minimizing f(x)^2), and in the past I've been faulted for using optimizer libraries for problems that can be solved with root-finding libraries. What, performance-wise, makes the difference between the two? In a situation where I'm only trying to find a single root (not all of them), why would root-finding algorithms outperform optimizers?

A concrete example of this in the Julia programming language would be a case where Roots.jl would be preferable (performance-wise) to Optim.jl.
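
For concreteness, here is a minimal sketch of the kind of side-by-side comparison I have in mind. The function x^2 - 2 and the bracket [1, 2] are just placeholders picked for illustration; `find_zero` and `optimize` are the usual entry points of the two packages:

```julia
using Roots, Optim

f(x) = x^2 - 2                     # simple root at sqrt(2) ≈ 1.4142

# Root finder: works on f directly and can stop as soon as f(x) ≈ 0.
root = find_zero(f, (1.0, 2.0))    # bracketing method on [1, 2]

# Optimizer: needs a minimization objective, e.g. the squared residual,
# whose minimum value happens to be 0 exactly at the root.
g(x) = f(x)^2
res  = Optim.optimize(g, 1.0, 2.0) # univariate Brent's method on [1, 2]
xmin = Optim.minimizer(res)

@show root xmin                    # both should be ≈ sqrt(2)
```

Both calls return essentially the same point, so the question is purely about why the first route tends to be faster or more robust.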

Will
  • I am not an expert in this domain, but I think that algorithms used to find roots tend to be a bit more specific than the ones used to find a minimum, and that specificity helps in designing faster algorithms. Moreover, a root-finding algorithm can stop when it finds a 0, while a minimization algorithm does not know whether the *local minimum* it found is the global one. – Jérôme Richard Jul 23 '21 at 19:33
  • It turns out that the last part you mentioned (stop at zero) is really what makes some algorithms perform better. – Will Jul 29 '21 at 17:25
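
To illustrate the stopping-criterion point made in the comments above, here is a hypothetical hand-rolled sketch (not code from either package): a bisection loop can terminate the moment the residual |f(x)| is small, whereas a generic interval-based minimizer such as golden-section search only knows it has narrowed down *a* minimum, not that the minimum value is 0, so it must keep shrinking the interval down to its tolerance.

```julia
f(x) = x^2 - 2
g(x) = f(x)^2

# Root-finder style: bisection that stops on the residual |f(m)|.
function bisect_root(f, a, b; tol = 1e-10)
    @assert sign(f(a)) != sign(f(b))
    iters = 0
    while true
        m = (a + b) / 2
        iters += 1
        abs(f(m)) < tol && return m, iters      # stop the moment a root is hit
        sign(f(m)) == sign(f(a)) ? (a = m) : (b = m)
    end
end

# Optimizer style: golden-section search that can only stop on interval width,
# because it has no way of knowing that the minimum value is exactly 0.
function golden_min(g, a, b; tol = 1e-10)
    φ = (sqrt(5) - 1) / 2
    iters = 0
    while b - a > tol
        c = b - φ * (b - a)
        d = a + φ * (b - a)
        g(c) < g(d) ? (b = d) : (a = c)
        iters += 1
    end
    return (a + b) / 2, iters
end

@show bisect_root(f, 1.0, 2.0)   # stops as soon as |f| is tiny
@show golden_min(g, 1.0, 2.0)    # must shrink the whole interval to tol
```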

0 Answers