
I am looking for an algorithm that would efficiently calculate b^e, where b and e are rational numbers, ensuring that the approximation error won't exceed a given err (rational as well). Explicitly, I am looking for a function:

rational exp(rational base, rational exp, rational err)

that would satisfy the law |exp(b, e, err) - b^e| < err

Rational numbers are represented as pairs of big integers. Let's assume that all rationality-preserving operations such as addition, multiplication, etc. are already defined.
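For concreteness, here is roughly the interface I have in mind, with Python's fractions.Fraction standing in for my pair-of-big-integers type (the name rational_pow is just illustrative):

```python
from fractions import Fraction

def rational_pow(base: Fraction, exponent: Fraction, err: Fraction) -> Fraction:
    """Return a rational r such that |r - base**exponent| < err."""
    raise NotImplementedError  # this is the function I am trying to write
```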

I have found several approaches, but they did not allow me to control the error clearly enough. In this problem I don't care about integer overflow. What is the best approach to achieve this?

radrow
  • Well, one obvious answer is to use exact rational arithmetic, in which case the error is zero. I presume you are hoping to find the best performing algorithm that still obeys the error bound given. – President James K. Polk Apr 12 '19 at 20:03
  • @JamesKPolk Not so obvious given that `2^0.5` is not rational. The error is required. – btilly Apr 12 '19 at 20:14
  • @btilly: Oops, I forgot that exponent can also be rational. My bad. – President James K. Polk Apr 12 '19 at 20:16
  • Your problem would be much easier if the requirement were a relative precision (such as a given number of significant digits) rather than the absolute precision you now require. – Rory Daulton Apr 12 '19 at 23:14
  • Trying to exponentiate rational numbers pushes beyond the bounds where rational arithmetic is usually useful. Since you are not going to get an exact result, why not just use [GMP](https://gmplib.org) and [MPFR](https://www.mpfr.org) to get a result as accurate as you desire? Any finite floating-point number is a rational number, so that satisfies the problem you have stated. – Eric Postpischil Apr 13 '19 at 00:49
  • My language does not and will never support traditional floats. I may steal their algorithms, but I need error control – radrow Apr 13 '19 at 13:20

1 Answer


This one is complicated, so I'm going to outline the approach that I'd take. I do not promise no errors, and you'll have a lot of work left.

I will change variables from what you said: let exp(x, y, err) be x^y to within error err. If y is not in the range 0 <= y < 1, then we can easily multiply by an appropriate x^k with k an integer to make it so. So we only need to worry about fractional `y`.
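A rough sketch of that reduction, using Python's fractions.Fraction as a stand-in for the rational type (the helper name split_exponent is mine):

```python
from fractions import Fraction
from math import floor

def split_exponent(x: Fraction, y: Fraction):
    """Split x**y into an exact factor x**k (k = floor(y)) and a
    remaining x**f with 0 <= f < 1.  Assumes x != 0 when k < 0."""
    k = floor(y)              # integer part of the exponent
    f = y - k                 # fractional part, 0 <= f < 1
    return x ** k, f          # x ** k is exact rational arithmetic
```

One detail to keep in mind: the final answer is x**k times the estimate of x**f, so the absolute error of that estimate gets scaled by x**k; divide the error budget you pass down accordingly.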

If all numerators and denominators were small, it would be easy to tackle this by first taking an integer power, and then taking a root using Newton's method. But that naive idea will fall apart painfully when you try to estimate something like (1000001/1000000)^(2000001/1000000). So the challenge is to keep that from blowing up on you.

I would recommend looking at the problem of calculating x^y as x^y = (x0^y0) * (x0^(y-y0)) * (x/x0)^y = (x0^y0) * e^((y-y0) * log(x0)) * e^(y * log(x/x0)). And we will choose x0 and y0 such that the calculations are easier and the errors are bounded.
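The identity is easy to verify; a quick sanity check with plain floats (just to check the algebra, not part of the rational algorithm):

```python
from math import exp, log

x, y = 5.3, 0.73        # arbitrary test values
x0, y0 = 5.25, 0.75     # nearby "nice" values

lhs = x ** y
rhs = (x0 ** y0) * exp((y - y0) * log(x0)) * exp(y * log(x / x0))
assert abs(lhs - rhs) < 1e-12
```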

To bound the errors, we can first come up with a naive upper bound b on x0^y0 - something like the next integer above x raised to the power of the next integer above y. We will pick x0 and y0 close enough to x and y that the latter two terms are under 2. Then we just need to estimate the three terms to within err/12, err/(6*b) and err/(6*b) respectively. (You might want to make those errors tighter still, say half of that, and then round the final answer to a nearby rational.)
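Here is a sketch of that budgeting with the same constants; b is just one possible crude bound, and y is assumed to already be reduced to [0, 1):

```python
from fractions import Fraction
from math import floor

def error_budget(x: Fraction, y: Fraction, err: Fraction):
    """Split err across the three factors of the decomposition."""
    b = Fraction(floor(x) + 1) ** (floor(y) + 1)   # crude bound on x0**y0
    e1 = err / 12         # budget for x0**y0
    e2 = err / (6 * b)    # budget for e^((y - y0) * log(x0))
    e3 = err / (6 * b)    # budget for e^(y * log(x/x0))
    # First-order check, using the assumption that the last two factors
    # stay below 2: total error <= e1*2*2 + b*e2*2 + b*2*e3 = err
    assert 4 * e1 + 2 * b * (e2 + e3) <= err
    return e1, e2, e3
```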

Now when we pick x0 and y0 we will be aiming for a close rational with a smallish numerator and denominator. For that we compute the continued fraction expansion. Its convergents form a sequence of rational numbers that converges quickly to the target. If we cut that sequence off fairly early, we get a rational number within any desired distance of the target while keeping the numerators and denominators relatively small.
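A sketch of generating those convergents with exact rational arithmetic (the generator name convergents is mine; each convergent p/q lies within 1/q^2 of the target):

```python
from fractions import Fraction
from math import floor

def convergents(target: Fraction):
    """Yield the continued-fraction convergents of target in order,
    ending with target itself (exactly, since target is rational)."""
    p_prev, q_prev = 0, 1      # h_{-2}, k_{-2}
    p, q = 1, 0                # h_{-1}, k_{-1}
    rest = target
    while True:
        a = floor(rest)                    # next partial quotient
        p, p_prev = a * p + p_prev, p      # standard convergent recurrences
        q, q_prev = a * q + q_prev, q
        yield Fraction(p, q)
        tail = rest - a
        if tail == 0:                      # reached the target exactly
            return
        rest = 1 / tail
```

To get "close but small", iterate this and take the first convergent that is within the distance you care about.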

Let's work from the third term backwards.

We want y * log(x/x0) < log(2). But from the Taylor series, if x/2 < x0 < 2x then log(x/x0) <= x/x0 - 1. So we can search the continued fraction convergents of x for an appropriate x0.
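Roughly like this, reusing the convergents generator above; LOG2_LOWER is just some rational lower bound on log(2), and x is assumed positive:

```python
from fractions import Fraction

LOG2_LOWER = Fraction(69, 100)   # rational lower bound on log(2) ~ 0.6931

def choose_x0(x: Fraction, y: Fraction) -> Fraction:
    """First convergent of x within a factor of 2 of x whose rational
    bound on y * |log(x/x0)| is safely below log(2)."""
    for x0 in convergents(x):
        if not (x / 2 < x0 < 2 * x):
            continue                  # too far from x, skip it
        t = x / x0
        bound = max(t, 1 / t) - 1     # rational bound: |log(t)| <= max(t, 1/t) - 1
        if y * bound < LOG2_LOWER:
            return x0                 # the loop always ends here, since x0 == x works
```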

Once we have found it, we can use the Taylor series for log(1+z) to calculate log(x/x0) to within err/(12*y*b). And then the Taylor series for e^z to calculate the term to our desired error.
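Sketches of both series with exact rational arithmetic and explicit tail bounds (the names log1p_frac and exp_frac are mine; the tolerances are parameters rather than hard-coded err/(12*y*b)-style constants):

```python
from fractions import Fraction

def log1p_frac(z: Fraction, tol: Fraction) -> Fraction:
    """log(1 + z) as z - z^2/2 + z^3/3 - ..., to within tol.
    Requires |z| < 1; the tail after the z^n term is at most
    |z|^(n+1) / (1 - |z|)."""
    assert abs(z) < 1
    total, power, n = Fraction(0), Fraction(1), 0
    while True:
        n += 1
        power *= z                        # power == z**n
        sign = 1 if n % 2 == 1 else -1
        total += sign * power / n
        if abs(power) * abs(z) / (1 - abs(z)) < tol:
            return total

def exp_frac(z: Fraction, tol: Fraction) -> Fraction:
    """e^z as 1 + z + z^2/2! + ..., to within tol.  Assumes |z| < 1,
    which holds here because each exponent is kept below log 2 in
    magnitude; the tail after the z^n/n! term is at most
    |z|^(n+1) / ((n+1)! * (1 - |z|))."""
    assert abs(z) < 1
    total, term, n = Fraction(1), Fraction(1), 0
    while True:
        n += 1
        term = term * z / n               # term == z**n / n!
        total += term
        if abs(term) * abs(z) / ((n + 1) * (1 - abs(z))) < tol:
            return total
```

An error of delta in the log becomes roughly (the factor itself) * y * delta once you multiply by y and exponentiate; with the factor under 2, that is where the err/(12*y*b) above comes from, and you still need to reserve part of the term's budget for the truncation of the e^z series itself.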

The second term is more complicated. We need to estimate log(x0). What we do is find an appropriate integer k such that 1.1^k <= x0 < 1.1^(k+1). Then we can estimate both k * log(1.1) and log(x0 / 1.1^k) fairly precisely. Find a naive upper bound on that log and use it to pick a y0 close enough that the second term stays under 2. And then use the Taylor series to estimate e^((y-y0) * log(x0)) to our desired precision.
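The log(x0) part might look like this, reusing log1p_frac from above (log_frac is my name; note that log(1.1) is itself log1p of exactly 1/10, and x0/1.1^k lands in [1, 1.1), so both series converge quickly):

```python
from fractions import Fraction

RATIO = Fraction(11, 10)            # 1.1 as an exact rational

def log_frac(x0: Fraction, tol: Fraction) -> Fraction:
    """log(x0) for x0 > 0 as k*log(1.1) + log(x0 / 1.1^k)."""
    k, scaled = 0, x0
    while scaled >= RATIO:          # scale down into [1, 1.1)
        scaled /= RATIO
        k += 1
    while scaled < 1:               # or scale up, if x0 < 1
        scaled *= RATIO
        k -= 1
    log_ratio = log1p_frac(Fraction(1, 10), tol / (2 * max(1, abs(k))))
    return k * log_ratio + log1p_frac(scaled - 1, tol / 2)
```

The linear search for k is the simplest thing that works; repeatedly squaring RATIO would find k faster when x0 is very large or very small.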

For the first term we use the naive method of raising x0 to an integer and then Newton's method to take a root, to give x0^y0 to our desired precision.
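A sketch of that first term, again with Fraction (root_frac and pow_rational_exponent are my names); Newton's method started from above the root stays above it, and A / r^(q-1) stays below it, which gives a rigorous stopping test:

```python
from fractions import Fraction

def root_frac(A: Fraction, q: int, tol: Fraction) -> Fraction:
    """q-th root of A > 0 via Newton's method on r^q - A = 0, to within tol."""
    r = max(A, Fraction(1))            # starting point >= the true root
    while True:
        lo = A / r ** (q - 1)          # lower bound on the root
        if r - lo < tol:
            return r                   # root lies in [lo, r], so error < tol
        r = ((q - 1) * r + lo) / q     # Newton step

def pow_rational_exponent(x0: Fraction, y0: Fraction, tol: Fraction) -> Fraction:
    """x0**y0 for y0 = p/q in lowest terms: exact integer power, then q-th root."""
    p, q = y0.numerator, y0.denominator
    return root_frac(x0 ** p, q, tol)
```

In exact rational arithmetic the Newton iterates blow up in numerator/denominator size very quickly, so in practice you would round each iterate to a nearby, slightly larger rational with a small denominator; this is exactly where having chosen x0 and y0 with small numerators and denominators pays off.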

Then multiply them together, and we have an answer. (If you went with the "tighter errors, nicer answer" option, you would now run a continued fraction on that answer to pick a simpler rational to return.)
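That final cleanup step is just the convergents generator again, stopped at the first convergent within the slack you reserved:

```python
from fractions import Fraction

def simplify(value: Fraction, slack: Fraction) -> Fraction:
    """First continued-fraction convergent of value within slack of it;
    returns a rational with a much smaller numerator and denominator."""
    for c in convergents(value):
        if abs(c - value) < slack:
            return c
    return value                    # unreachable: the last convergent is exact
```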

btilly