Burnikel and Ziegler's "RecursiveDivision" algorithm for dividing big numbers has two preconditions, one of which is "the Quotient Q must fit into n digits." How do you know if the precondition holds without first doing the division?
-
See my answer. In short: if you divide `n` digits by `m` digits, the result has at most `n-m+1` digits. You can check this with a few manually calculated examples. – Rudy Velthuis Mar 31 '17 at 12:40
1 Answer
As far as I know, Burnikel-Ziegler has a different precondition. What counts is the number of limbs (in my case, a limb is a 32-bit unsigned integer). If you divide n limbs by m limbs, the result is at most n-m+1 limbs (I assume the same calculation holds for the number of digits). So that could give you a hint.
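For illustration, here is a small standalone sketch that checks the n-m+1 bound for ordinary base-10 numbers (the DigitCount helper and the sample values are made up for this example; they are not part of my BigInteger code):

program DigitBoundCheck;

{$APPTYPE CONSOLE}

// Count the base-10 digits of a positive value.
function DigitCount(Value: Int64): Integer;
begin
  Result := 1;
  while Value >= 10 do
  begin
    Value := Value div 10;
    Inc(Result);
  end;
end;

var
  N, M, Q: Int64;
begin
  N := 9999999;                  // 7 digits
  M := 100;                      // 3 digits
  Q := N div M;                  // 99999 -> exactly 5 = 7 - 3 + 1 digits
  Writeln(DigitCount(N), ' / ', DigitCount(M), ' digits -> ', DigitCount(Q),
    ' digits (bound: ', DigitCount(N) - DigitCount(M) + 1, ')');

  N := 1000000;                  // 7 digits
  M := 999;                      // 3 digits
  Q := N div M;                  // 1001 -> 4 digits, still within the bound of 5
  Writeln(DigitCount(N), ' / ', DigitCount(M), ' digits -> ', DigitCount(Q),
    ' digits (bound: ', DigitCount(N) - DigitCount(M) + 1, ')');
end.

The same counting argument carries over to limbs in any base, which is why the quotient always fits in n-m+1 limbs.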
But in my BigInteger code, the precondition is:
function ShouldUseBurnikelZiegler(LSize, RSize: Integer): Boolean;
begin
  // Use Burnikel-Ziegler only if the divisor is large enough and the dividend
  // exceeds it by enough limbs. See the discussion at:
  // http://mail.openjdk.java.net/pipermail/core-libs-dev/2013-November/023493.html
  Result := (RSize >= BigInteger.BurnikelZieglerThreshold) and
            ((LSize - RSize) >= BigInteger.BurnikelZieglerOffsetThreshold);
end;
LSize is the size of the left operand (dividend) and RSize the size of the right operand (divisor), in limbs. The thresholds for my code are:
const
  BurnikelZieglerThreshold = 91;
  BurnikelZieglerOffsetThreshold = 5;
You should (experimentally) find the thresholds for your own code.
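If it helps, here is a rough sketch of how such a calibration loop could look. The RandomBigIntegerWithLimbs, BaseCaseDivide and BurnikelZieglerDivide routines are hypothetical placeholders for whatever your own implementation provides; TStopwatch comes from Delphi's System.Diagnostics unit:

program FindBZThresholds;

{$APPTYPE CONSOLE}

uses
  System.Diagnostics,      // TStopwatch
  Velthuis.BigIntegers;    // or your own BigInteger unit

// Hypothetical helpers you would supply from your own code:
//   RandomBigIntegerWithLimbs(Limbs): a random value with the given number of limbs
//   BaseCaseDivide(A, B):             plain Knuth (schoolbook) division
//   BurnikelZieglerDivide(A, B):      Burnikel-Ziegler division

procedure FindCrossover;
var
  RSize, Offset, I: Integer;
  A, B, Q: BigInteger;
  SW: TStopwatch;
  BaseTicks, BZTicks: Int64;
begin
  for RSize := 10 to 200 do
    for Offset := 1 to 20 do
    begin
      B := RandomBigIntegerWithLimbs(RSize);           // divisor
      A := RandomBigIntegerWithLimbs(RSize + Offset);  // dividend

      SW := TStopwatch.StartNew;
      for I := 1 to 100 do                             // repeat to reduce timer noise
        Q := BaseCaseDivide(A, B);
      BaseTicks := SW.ElapsedTicks;

      SW := TStopwatch.StartNew;
      for I := 1 to 100 do
        Q := BurnikelZieglerDivide(A, B);
      BZTicks := SW.ElapsedTicks;

      if BZTicks < BaseTicks then
        Writeln('Burnikel-Ziegler wins at RSize=', RSize, ', Offset=', Offset);
    end;
end;

begin
  FindCrossover;
end.

The smallest RSize and Offset at which Burnikel-Ziegler consistently wins are then the candidates for the two thresholds.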
The comment in the code above already links to where I got that.
I am aware that not everyone is familiar with Pascal (or Object Pascal), but I think the code above is readable enough to get the idea.

-
Thanks. Your answer helped me look at the problem differently. B&Z used the term "digits" for what others seem to call "limbs" now. The "fit into n digits" precondition seems more like a post-condition to me; it will always be true for normal math, and so should hold for a BigInteger implementation, right? BTW, what is BurnikelZieglerOffsetThreshold conceptually? – jjj Apr 02 '17 at 02:07
-
The OffsetThreshold is the *difference* in sizes (in my code, `LSize - RSize`). The other threshold is the absolute size (in limbs). If the sizes (or differences in sizes) are above those thresholds, then Burnikel-Ziegler is faster than the normal (Knuth, a.k.a. base-case) division. Burnikel-Ziegler has quite some overhead, which is why it is not faster for small integers (or even for "small" BigIntegers). It only makes sense for "large" BigIntegers. You can also use the number of digits, if you like, but then the thresholds are a factor larger than the ones for limbs. Continued in next comment. – Rudy Velthuis Apr 02 '17 at 14:52
-
It is not true that this holds for normal integers. The exact thresholds depend on how fast your normal (base-case) division is. You can only determine the thresholds for your code experimentally, by simply comparing normal division with Burnikel-Ziegler and testing at which sizes (and size differences) Burnikel-Ziegler becomes faster. I did this for my code. Other BigInteger implementations did the same and arrived at other thresholds. Again, BZ is only faster for not-too-small BigIntegers, due to the overhead of the code. – Rudy Velthuis Apr 02 '17 at 14:57
-
Great. I understand there is an overhead/speed crossover based on the number of limbs. I was not aware of the size difference between dividend and divisor. Thanks. What do you mean by "It is not true that this holds for normal integers"? What is "it" and what is "normal integers"? (BTW, I should use the term "limbs"; I'm not working in base 10.) – jjj Apr 03 '17 at 02:31
-
Would you mind sharing your pascal code for your base-case division? I am having the worst time translating Knuth's Algorithm D into Eiffel. – jjj Apr 03 '17 at 02:32
-
Eiffel? Wow, that's a nice language, but I haven't seen anything of it for many years. Anyway, take a look at: https://github.com/rvelthuis/BigNumbers/blob/master/Source/Velthuis.BigIntegers.pas and just ignore the assembler parts. Each function has a pure Pascal version, delimited by `{$IFDEF PUREPASCAL}` and either `{$ELSE}` or `{$ENDIF}`. Note that I use 16 bit integers (Word or UInt16 in Delphi speak), because using 32 bit integers produces 64 bit intermediate results, which are slow in 32 bit Pascal (but not in assembler). The 16 bit way turned out to be a lot faster in my setup. – Rudy Velthuis Apr 03 '17 at 15:53
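To illustrate the point about intermediates (a minimal sketch, not taken from the library; the MulAdd names are made up): with 32-bit limbs a multiply-accumulate needs a 64-bit product, while 16-bit limbs keep everything within 32 bits:

// With 32-bit limbs the partial product needs 64 bits:
function MulAdd32(A, B, CarryIn: UInt32; out CarryOut: UInt32): UInt32;
var
  T: UInt64;                       // 64-bit intermediate, slow in 32-bit Pascal
begin
  T := UInt64(A) * B + CarryIn;
  Result := UInt32(T);             // low 32 bits
  CarryOut := UInt32(T shr 32);    // high 32 bits become the next carry
end;

// With 16-bit limbs the partial product fits in 32 bits:
function MulAdd16(A, B, CarryIn: UInt16; out CarryOut: UInt16): UInt16;
var
  T: UInt32;                       // stays within the native register width
begin
  T := UInt32(A) * B + CarryIn;
  Result := UInt16(T);             // low 16 bits
  CarryOut := UInt16(T shr 16);    // high 16 bits become the next carry
end;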
-
FWIW, there is a file called `divmnu.c` on the web that should be easy to translate to Eiffel or any other language. It also uses 16 bit integers (32 bit limbs split up into 16 bit slices), IIRC. It translates Knuth's algorithm very well. E.g. at Hacker's Delight: http://www.hackersdelight.org/hdcodetxt/divmnu.c.txt – Rudy Velthuis Apr 03 '17 at 15:59
-
@JimmyJohnson: FWIW, how about an upvote or acceptance? That is how this site works. – Rudy Velthuis Apr 04 '17 at 07:09
-
I upvoted your answer, but it seems I have less than some number of reputation points, so my votes are recorded but "don't change the publicly displayed post score." If I earn some points somehow, I'll come back and see if I can fix that. – jjj Apr 04 '17 at 23:49
-