
To test whether a continuous function has a root in a given interval [x0, x1] is relatively easy: according to the Intermediate Value Theorem, if the sign of the function's value at x0 is opposite to its sign at x1, there is (at least) one root.

For example, given a quadratic function:

g(x): a*x**2 + b*x + c = 0

The test looks like:

if sign of g(x0) is opposite of sign of g(x1)
then return true
else return false
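In Python, this sign test might look like the following (a minimal sketch; the function is passed as a callable):

```python
def may_have_root(g, x0, x1):
    # Intermediate Value Theorem: opposite signs at the endpoints
    # guarantee at least one root of the continuous function g in
    # [x0, x1]. Equal signs, however, do not prove absence of a root.
    return g(x0) * g(x1) < 0
```

For example, `may_have_root(lambda x: x**2 - 2, 0.0, 2.0)` returns `True`, since the root sqrt(2) lies inside [0, 2].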

For the multivariate case there is the Poincaré–Miranda theorem, but I have some difficulty implementing the test correctly from reading the linked article.

Given two quadratic bivariate functions:

g1(x, y): a1*x**2 + b1*y**2 + c1*x*y + d1*x + e1*y + f1 = 0
g2(x, y): a2*x**2 + b2*y**2 + c2*x*y + d2*x + e2*y + f2 = 0

and a rectangular region [x0, x1] x [y0, y1], how to check if there is at least one root in the region?

I mean, I assume the test should look somewhat like this (which doesn't work):

if (sign of g1(x0, y0) is opposite of sign of g1(x1, y0) and
    sign of g1(x0, y1) is opposite of sign of g1(x1, y1) and
    sign of g2(x0, y0) is opposite of sign of g2(x1, y0) and
    sign of g2(x0, y1) is opposite of sign of g2(x1, y1))
then return true
else return false

Please, does anyone know which pairs of functions, which interval end-points, and which logical operators to check, and in what order?

Ecir Hana

2 Answers


You need to check whether your bivariate functions

g1(x, y): a1*x**2 + b1*y**2 + c1*x*y + d1*x + e1*y + f1 = 0
g2(x, y): a2*x**2 + b2*y**2 + c2*x*y + d2*x + e2*y + f2 = 0

satisfy

I).  g1(x0,y) < 0, for all y in [y0,y1]
II). g2(x,y0) < 0, for all x in [x0,x1]

and

III). g1(x1,y) > 0, for all y in [y0,y1]
IV).  g2(x,y1) > 0, for all x in [x0,x1]

Your functions are quadratic, so this can be done without sampling values along all 4 boundaries for the 4 cases. For example, for the first condition on g1(x0,y), simply plug in x0 for x, obtaining a quadratic equation in y:

G1(y) = b1*y**2 + c1*x0*y + e1*y + (f1 + d1*x0 + a1*x0**2)

We need to check whether G1 is ever positive for y in [y0,y1]. Since G1 is quadratic, its maximum either occurs where {G1' = 0, G1'' < 0} or at the endpoints. So:

a. express G1' analytically, use simple bisection to find a root in [y0,y1]
b. if there is one, say y*, express G1'' analytically and compute G1''(y*)
c. if G1''(y*) is also < 0 then you have your maximum y*
d. if G1(y*) > 0 then the condition is violated; you may break
e. if not, then test the endpoints G1(y0), G1(y1).
f. if any of those are > 0 then break.

If your function passes these tests without breaking, you have satisfied the first of the 4 conditions (I) above.
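As a sketch of steps a.–f. for the quadratic case (the function and parameter names here are my own; and since G1' is linear for a quadratic G1, the critical point can be written down directly instead of bisecting):

```python
def quadratic_max_on_interval(A, B, C, lo, hi):
    # Maximum of q(t) = A*t**2 + B*t + C on [lo, hi]: it occurs
    # either at an interior critical point (a maximum only when
    # A < 0, i.e. q'' < 0) or at one of the endpoints.
    q = lambda t: A * t * t + B * t + C
    candidates = [lo, hi]
    if A < 0:
        t_star = -B / (2 * A)        # q'(t*) = 0
        if lo <= t_star <= hi:
            candidates.append(t_star)
    return max(q(t) for t in candidates)

def condition_I_holds(a1, b1, c1, d1, e1, f1, x0, y0, y1):
    # Condition (I): g1(x0, y) < 0 for every y in [y0, y1].
    # Substituting x = x0 gives G1(y) = A*y**2 + B*y + C with:
    A = b1
    B = c1 * x0 + e1
    C = f1 + d1 * x0 + a1 * x0 ** 2
    return quadratic_max_on_interval(A, B, C, y0, y1) < 0
```

For instance, for g1(x, y) = x**2 + y**2 - 10 the edge x0 = 0, y in [0, 1] is entirely negative, so condition (I) holds there.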

The conditions (II-IV) can be tested in a similar manner. If all conditions hold, the Miranda test holds and you have a coincident root of the two functions. If not, then you are in the "maybe" case - the functions may still have a common root, but you'll have to use a different method to prove existence.

Charles Pehlivanian
  • Thanks a lot! Please, I have couple of more questions: 1. So all the tests are joined by the `AND`? I mean, `if g1(x0,y0) < 0 AND g1(x0,y1) < 0 AND g2(x0,y0) < 0 AND g2(x1,y0) < 0 AND g1(x1,y0) > 0 AND g1(x1,y1) > 0 AND g2(x0,y1) > 0 AND g2(x1,y1) > 0`? 2. When you say `<` (in I). and II).) and `>` (in III). and IV).), you really mean that there should be opposite signs, so even `>` and `<` would count, right? – Ecir Hana Mar 12 '16 at 09:20
  • 3. The quadratic functions were just an example, what should I do in the case of cubics or higher? I mean, isn't there a trick which would work in general case? Because `a. - f.` become increasingly harder to do with higher degrees. 4. What to do in "maybe" case? Split the region into for quadrants? Perhaps there is other test which could be used in conjunction with M.-P. to distinguish even the "maybe" case? 5. If I knew there is zero or one root in the region, does it change the situation or help at all? – Ecir Hana Mar 12 '16 at 09:20
  • Yes - AND - all tests must be satisfied. Yes, opposite signs would do as you describe. – Charles Pehlivanian Mar 12 '16 at 11:59
  • If the functions are not quadratics, but polynomials, the tests are more difficult. For a quartic, for example, you still need to verify that the maximum in test I is < 0. There are now 3 possible zeros of the derivative, which you must find by judiciously windowing and bisecting. Then go through the analogous process. With a general function there is no sure-fire way to tell if the function is < 0 on the entire boundary, which is @Chris Beck's point above. – Charles Pehlivanian Mar 12 '16 at 12:05

First of all, your original "intermediate value"-based code doesn't quite do what it is advertised to do:

To test whether a continuous function has a root in a given interval [x0, x1] is relatively easy: according to the Intermediate Value Theorem, if the sign of the function's value at x0 is opposite to its sign at x1, there is (at least) one root.

The test looks like:

if sign of g(x0) is opposite of sign of g(x1)
then return true
else return false

This "test" has one-sided error, as pointed out by David Eisenstat. If the signs are indeed opposite, then return true is okay, but if the signs are not opposite, then return false should perhaps be return maybe or something...


Second, regarding the Poincaré–Miranda theorem: in higher dimensions, comparing the signs at a few points doesn't give you enough information to apply the theorem.

Consider n continuous functions of n variables. Assume that for every variable x_i, the function f_i is constantly negative when x_i = 0 and constantly positive when x_i = 1. Then there is a point in the n-dimensional unit cube in which all functions are simultaneously equal to 0.

There is no black-box test if a continuous function is "constantly negative" on some region.

You would need to assume something more, like, you assume it is actually a low-degree polynomial and you sample it at enough points to discover its coefficients, etc.
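For example, if the black box is known to be a univariate quadratic, three evaluations suffice to recover its coefficients (a hypothetical helper, using finite differences at x = 0, 1, 2):

```python
def recover_quadratic(f):
    # Three samples pin down a quadratic f(x) = a*x**2 + b*x + c:
    # f(0) = c, f(1) = a + b + c, f(2) = 4a + 2b + c.
    f0, f1, f2 = f(0), f(1), f(2)
    a = (f2 - 2 * f1 + f0) / 2   # second finite difference / 2
    b = f1 - f0 - a
    c = f0
    return a, b, c
```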


If we assume like you stated that we have two bivariate quadratics, and we actually have (or deduce) the coefficients... it is possible.

What I would do is simply substitute the value for x_i into each function as required, so that it reduces to a univariate quadratic, then solve for its roots (if any) using the quadratic formula like we learned in grade school. Then check whether any of them occur in the region of interest, and test a point in between the roots to determine the sign. Then you'll know if the theorem can be applied.

It's possible that you can solve for a precise condition in closed-form but I'm not sure if this will actually help you write a better (simpler / more efficient) implementation.

Here is some pseudocode:

 def quadratic_positive_in_region(p, x_0, x_1)
   ASSERT(p is univariate)
   ASSERT(x_0 <= x_1)

   // If one of the roots lies in the region then
   // p is zero there, and thus not positive
   def roots = quadratic_formula(p)
   for r in roots:
     if x_0 <= r and r <= x_1 then return false

   // If there are no roots in the region then
   // p is either always positive or always negative,
   // so test a single point to determine which.
   if p((x_0 + x_1) / 2) > 0 then return true
   return false

 def poincare_miranda(g1, g2, x_0, x_1, y_0, y_1)
   return quadratic_positive_in_region(-g1 | y = y_0, x_0, x_1) and
          quadratic_positive_in_region( g1 | y = y_1, x_0, x_1) and
          quadratic_positive_in_region(-g2 | x = x_0, y_0, y_1) and
          quadratic_positive_in_region( g2 | x = x_1, y_0, y_1)

 def generalized_test(g1, g2, x_0, x_1, y_0, y_1)
   return poincare_miranda( g1,  g2, x_0, x_1, y_0, y_1) or
          poincare_miranda(-g1,  g2, x_0, x_1, y_0, y_1) or
          poincare_miranda(-g1, -g2, x_0, x_1, y_0, y_1) or
          poincare_miranda( g1, -g2, x_0, x_1, y_0, y_1)

I'm using some notation here: the - operator applied to a polynomial negates its coefficients, and the | notation represents substitution of a value for a variable in a polynomial.
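A runnable Python version of the pseudocode above might look like this (the coefficient-tuple representation and helper names are my own, not part of any library):

```python
import math

# A bivariate quadratic g(x, y) = a*x**2 + b*y**2 + c*x*y + d*x + e*y + f
# is represented as the coefficient tuple (a, b, c, d, e, f).

def negate(g):
    # The "-" operator on a polynomial: negate every coefficient.
    return tuple(-k for k in g)

def substitute_x(g, x):
    # g | x = const: a univariate quadratic (A, B, C) in y.
    a, b, c, d, e, f = g
    return (b, c * x + e, a * x * x + d * x + f)

def substitute_y(g, y):
    # g | y = const: a univariate quadratic (A, B, C) in x.
    a, b, c, d, e, f = g
    return (a, c * y + d, b * y * y + e * y + f)

def quadratic_positive_in_region(q, lo, hi):
    # Is A*t**2 + B*t + C > 0 for every t in [lo, hi]?
    A, B, C = q
    p = lambda t: A * t * t + B * t + C
    roots = []
    if A != 0:
        disc = B * B - 4 * A * C
        if disc >= 0:
            s = math.sqrt(disc)
            roots = [(-B - s) / (2 * A), (-B + s) / (2 * A)]
    elif B != 0:
        roots = [-C / B]
    # A root inside the region means p is zero there, not positive.
    if any(lo <= r <= hi for r in roots):
        return False
    # No roots in the region: the sign is constant, so one sample decides.
    return p((lo + hi) / 2) > 0

def poincare_miranda(g1, g2, x0, x1, y0, y1):
    return (quadratic_positive_in_region(substitute_y(negate(g1), y0), x0, x1) and
            quadratic_positive_in_region(substitute_y(g1, y1), x0, x1) and
            quadratic_positive_in_region(substitute_x(negate(g2), x0), y0, y1) and
            quadratic_positive_in_region(substitute_x(g2, x1), y0, y1))

def generalized_test(g1, g2, x0, x1, y0, y1):
    return any(poincare_miranda(s1, s2, x0, x1, y0, y1)
               for s1 in (g1, negate(g1))
               for s2 in (g2, negate(g2)))
```

For example, g1(x, y) = y and g2(x, y) = x, i.e. the tuples (0, 0, 0, 0, 1, 0) and (0, 0, 0, 1, 0, 0), satisfy the conditions on [-1, 1] x [-1, 1] and indeed share the root (0, 0).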

Chris Beck
  • If I would allow for the "maybe" answer, how would the 2D test look like? Similarly to 1D case, if there was just one root in the region, would the test work? And lastly, with that "substitution" and quadratic, wouldn't I get quartic equation, instead of "univariate quadratic"? – Ecir Hana Mar 09 '16 at 15:04
  • (1) The only possible answers of either test are, 'conditions of theorem met, there is definitely a root', and 'failed to apply theorem, no conclusion'. You could code it up as `true`, `false` as long as you keep this meaning in mind. (2) When you have a bivariate quadratic in `x` and `y`, and you substitute `x = constant` for some constant value into it, you are left with a quadratic in `y`. – Chris Beck Mar 09 '16 at 15:07
  • (1) sure, but my main problem is how would such 2D test look like? – Ecir Hana Mar 09 '16 at 15:09
  • I described the 2D test in my answer, but I'll make an edit to give more detail I guess – Chris Beck Mar 09 '16 at 15:12
  • Thanks a lot for the edit but this is not what I'm asking. I don't want to solve quadratic equations: "The main significance of the Poincaré-Miranda Theorem is its ability to guarantee the existence of a zero of a function without having to solve for one explicitly" (from the linked article references). And what do you mean by "and we actually have (or deduce) the coefficients"? The two functions have known coefficients. And again, similarly to 1D case (which has no "one-sided error"), if there was just one root, why would it not be possible to use the theorem? – Ecir Hana Mar 09 '16 at 16:19
  • (1) The 1D case does indeed have one-sided error, as shown by the example. To wit: If `f(x_0)` and `f(x_1)` have different signs and `f` is continuous, then indeed, `f` has a zero somewhere between `x_0` and `x_1`. But the converse is false -- a continuous `f` may have a zero between `x_0` and `x_1` without also having `x_0` and `x_1` having different signs. If you assume that globally, the function has at most one root, then it becomes if and only if, but this is no longer the intermediate value theorem, it's something else. – Chris Beck Mar 09 '16 at 17:21
  • (2) It just depends how you are representing a continuous function in your program. Normally a continuous function can't be "written down" in your program, it has to be represented as a blackbox. For instance, if you were writing C, you might pass your algorithm a pointer to a function of signature `double f(double)` or something like this. Then you could implement your test in terms of this function pointer. However, you can't test things like "is this function always positive", unless you test every possible value. Even if the function is known to be continuous, there is no finite test. – Chris Beck Mar 09 '16 at 17:27
  • If you know the function is a low degree polynomial , not merely continuous, then there are multiple ways you can write your program. You could expect the coefficients to be passed to you, or you could expect it to be a "blackbox" function pointer again. But it turns out that, whichever one you pick, you can convert freely to the other. If I give you the blackbox and not the coefficients, you can still recover the coefficients, using polynomial interpolation and a small number of test evaluations. That was what I meant by "if we actually have (or deduce) the coefficients". – Chris Beck Mar 09 '16 at 17:28
  • Edit: the comment "there is no finite test" makes more sense if instead of `double` we consider that the function takes and receives some `bigint` based, arbitrary-precision rational number instead. – Chris Beck Mar 09 '16 at 17:32
  • Also, what I wrote in comment (1) about it becoming if and only if provided you know that there is only one root, was wrong: Even if (a polynomial) is known to have only a single root globally, it could be like `f(x) = x^2` and have a double root. Then if `x_0 = -1`, `x_1 = 1`, the test reports false, even though there is a root, and it's the only root. – Chris Beck Mar 09 '16 at 17:38
  • Also: re: "The main significance of the Poincaré-Miranda Theorem is its ability to guarantee the existence of a zero of a function without having to solve for one explicitly" Indeed, and my pseudocode isn't solving for the zero either, it really is just testing the condition of the theorem in the simplest way possible. But keep in mind, this is a mathematical theorem, it's intended to be used in situations where you know by some natural means that certain functions are always positive or always negative. That condition just isn't easy to *test* for arbitrary black-box continuous functions. – Chris Beck Mar 09 '16 at 18:00
  • It will of course work fine if you implement `quadratic_positive_in_region` in whatever way you please, that might or might not involve "solving" for roots explicitly. – Chris Beck Mar 09 '16 at 18:01
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/105834/discussion-between-ecir-hana-and-chris-beck). – Ecir Hana Mar 09 '16 at 18:02