
I have a hyperbolic-type function and I need to find its zero. I have tried various classical methods (bisection, Newton, and so on).

Second derivatives are continuous but not accessible analytically, so I have to exclude methods that use them.

For the purposes of my application, Newton's method is the only one providing sufficient speed, but it's relatively unstable if I'm not close enough to the actual zero. Here is a simple screenshot:

[screenshot: plot of the function near its zero]

The zero is somewhere around 0.05, and since the function diverges at 0, if I take an initial guess beyond the location of the minimum by a certain extent, then I obviously have problems with the asymptote.

Is there a more stable method for this case that could offer speed comparable to Newton's?

I also thought of transforming the function into an equivalent, better-behaved function with the same zero and only then applying Newton, but I don't really know which transformations I can use.

Any help would be appreciated.

marc_s
mbrivio
  • Care to show us the equation? – Yves Daoust Jan 17 '21 at 16:18
  • Hi @YvesDaoust, unfortunately the function is not really explicit and does not have a closed form that I can simply post; moreover, it is calculated and assembled in various modules. It is just similar to a hyperbolic function, which is why I used this example. Its skeleton, however, should be something like y = A/x^2 + B/x with A and B constants. The graph that I posted is simply y = (1/x) - 20 – mbrivio Jan 18 '21 at 10:34

3 Answers


Dekker's or Brent's method should be almost as fast as Newton. If you want something simple to implement yourself, the Illinois variant of the regula falsi method is also reasonably fast. These are all bracketing methods, so they cannot leave the domain as long as the initial interval lies inside it.

def illinois(f, a, b, tol=1e-8):
    '''regula falsi resp. false position method with
       the Illinois anti-stalling variation'''
    fa = f(a)
    fb = f(b)
    # keep the endpoint with the smaller residual in b (it is what's returned)
    if abs(fa) < abs(fb): a, fa, b, fb = b, fb, a, fa
    while abs(b - a) > tol:
        c = (a*fb - b*fa)/(fb - fa)   # secant (false-position) point
        fc = f(c)
        if fa*fc < 0:
            fa *= 0.5                 # a retained again: halve fa (Illinois)
        else:
            a, fa = b, fb             # sign flipped: old b becomes the new a
        b, fb = c, fc
    return b, fb
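As a quick sanity check, here is a self-contained run on the sample curve from the question, y = 1/x - 20, whose root is x = 0.05. The routine is repeated so the snippet runs on its own; the bracket [0.01, 1] is an arbitrary choice that stays strictly inside x > 0 and straddles the root.

```python
# Sanity check of the Illinois routine on the question's sample curve
# y = 1/x - 20, whose root is x = 0.05.  The function is repeated here
# so the snippet runs on its own; the bracket [0.01, 1] is an arbitrary
# choice that stays strictly inside x > 0 and straddles the root.
def illinois(f, a, b, tol=1e-8):
    fa, fb = f(a), f(b)
    if abs(fa) < abs(fb):
        a, fa, b, fb = b, fb, a, fa
    while abs(b - a) > tol:
        c = (a*fb - b*fa) / (fb - fa)
        fc = f(c)
        if fa*fc < 0:
            fa *= 0.5
        else:
            a, fa = b, fb
        b, fb = c, fc
    return b, fb

f = lambda x: 1.0/x - 20.0
root, residual = illinois(f, 0.01, 1.0)
print(root)   # ≈ 0.05
```

Because every iterate is taken inside the current bracket, the method can never jump across the pole at x = 0 the way an unlucky Newton step can.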
Lutz Lehmann
  • Thanks, regula falsi is actually performing well for my problem. I'll now test it on various runs – mbrivio Jan 14 '21 at 09:47

How about using log(x) instead of x?
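A sketch of what this substitution could look like (an illustration only: the actual f is not given, so the sample curve y = 1/x - 20 from the question and the finite-difference derivative are assumptions). Newton runs in z = log(x), so every trial point x = exp(z) automatically stays positive and can never cross the asymptote at x = 0.

```python
import math

# Hypothetical sketch: Newton's method on g(z) = f(exp(z)) with a
# finite-difference derivative; the root in x is recovered as exp(z).
def newton_in_log_x(f, x0, tol=1e-10, h=1e-7, max_iter=100):
    z = math.log(x0)
    for _ in range(max_iter):
        g = f(math.exp(z))
        dg = (f(math.exp(z + h)) - g) / h   # forward difference in z
        step = g / dg
        z -= step
        if abs(step) < tol:
            break
    return math.exp(z)                      # back to the original variable

f = lambda x: 1.0/x - 20.0                  # the question's sample curve
x_root = newton_in_log_x(f, 1.0)
print(x_root)   # ≈ 0.05
```

Note that plain Newton on this sample from x0 = 1 would jump straight to x = -18, across the pole; in log space every iterate stays in x > 0.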

sams-studio
  • It creates a discontinuity at x=1. Also, I don't think it preserves the position of the zero – mbrivio Jan 14 '21 at 08:57
  • the function is something of the form 1/x – mbrivio Jan 14 '21 at 08:58
  • It doesn't ... `log(1) = 0`, but it's continuous. Then when you've found `x`, of course you take `exp(x)` to obtain the original `x`. – Dr. V Jan 14 '21 at 08:59
  • Then I haven't understood: you suggest building the function 1/log(x), say, finding the zero with Newton or an equivalent, and then taking exp(x) to get the original result? – mbrivio Jan 14 '21 at 09:05
  • You leave the function values as they are (I assume in your actual problem it's not just `1/x - 20`, because then `x = 1/20` and the entire question is trivial), but introduce `z = log(x)` and consider the new function `y = f(z)`. If you plot that, it may be more linear and solvable by Newton without issues. – Dr. V Jan 14 '21 at 09:11
  • And then yes, `exp(z)` as described by Dr. V is the root that you're looking for (which you also stated correctly). – sams-studio Jan 14 '21 at 09:13
  • I just did the plot ... I'm afraid in this case you don't escape the problem. Now the curve flattens out on the left side. – Dr. V Jan 14 '21 at 09:14
  • That's probably because you don't have enough points and/or the smallest x value is too small. But it usually doesn't matter because the distance between the points along the x-direction tends to be very small and therefore the error won't become too large. – sams-studio Jan 14 '21 at 09:21

For your case, @sams-studio's answer might work, and I would try that first. In a similar situation - also in a multivariate context - I used Newton-homotopy methods.

Basically, you limit the Newton step so that the absolute value of y keeps decreasing. The cheapest implementation is to halve the Newton step whenever |y| increases relative to the last step. After a few steps, you're back to full Newton with its second-order convergence.
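A minimal sketch of this damping scheme (an illustration, not the poster's actual code; the analytic derivative passed in as `df` and the sample curve are assumptions -- a finite-difference derivative would work just as well):

```python
# Hypothetical damped-Newton sketch: halve the step until |f| decreases,
# then fall back to full Newton steps near the root.
def damped_newton(f, df, x0, tol=1e-10, max_iter=100):
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        step = fx / df(x)
        t = 1.0                                  # try the full Newton step
        while abs(f(x - t*step)) > abs(fx) and t > 1e-12:
            t *= 0.5                             # halve until |f| descends
        x -= t*step
        fx = f(x)
        if abs(fx) < tol:
            break
    return x

f  = lambda x: 1.0/x - 20.0        # the question's sample curve
df = lambda x: -1.0/x**2
print(damped_newton(f, df, 1.0))   # ≈ 0.05
```

On the sample curve, the undamped step from x0 = 1 would land at x = -18; the halving pulls it back to x ≈ 0.41, and after a few more damped steps the full Newton step is accepted and quadratic convergence takes over.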

Disclaimer: if you can bound your solution (you know a maximal x), the answer from @Lutz Lehmann would also be my first choice.

Dr. V
  • I don't really have an upper bound, but since my 'x' is a thickness it can't go to infinity; I just don't know the upper value a priori. I can always insert an artificial upper bound that is big enough. I'll have a look at Newton-homotopy methods, thanks for suggesting them – mbrivio Jan 14 '21 at 09:38