
There are many such arbitrary-precision types across languages. As far as I know, here is how they work.

Rational just stores two separate integers for the numerator and denominator (like 3 and 10 for 0.3).

BigNum stores each digit of the number in some kind of "array" and does column arithmetic the way humans do. For example, 0.1 is stored like [0, '.', 1]. If we add 0.2 to it, the result looks something like this:

  [0, '.', 1]
+ [0, '.', 2]
= [0, '.', 3]
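The column arithmetic above could be sketched roughly like so (a toy model: real BigNum implementations store "limbs" in a large base such as 2**32 rather than single decimal digits, and track the decimal point as a separate exponent rather than a '.' entry):

```python
def add_digits(a, b):
    """Add two numbers stored as lists of decimal digits,
    least-significant digit first, carrying as humans do."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return result

# 123 + 989 = 1112, digits stored little-endian:
print(add_digits([3, 2, 1], [9, 8, 9]))  # [2, 1, 1, 1]
```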

Am I right? Are there any other popular approaches to arbitrary-precision arithmetic? If so, what are they called?

I'm not talking about any specific implementation, but rather the general idea of what it usually does.

FrozenHeart
  • I don't know the internals of `BigNum`, but I doubt it stores the decimals as text. Fact is that `BigNum` is useful for decimal values, while `Rational` is useful for fractions (rational numbers). `Rational` is good for values like `1/3`, because for rationals, `1/3 * 6` really returns `2`. – Rudy Velthuis Aug 13 '16 at 10:38
  • @Rudy Velthuis And when should I prefer `BigNum` instead of `Rational` then? – FrozenHeart Aug 13 '16 at 11:25
  • For values you get as decimal string, or as integer, etc. BigNum is very likely more performant. – Rudy Velthuis Aug 13 '16 at 11:52
  • @Rudy Velthuis "For values you get as decimal string" -- some `Rational` implementations accepts decimal strings too – FrozenHeart Aug 13 '16 at 12:10
  • sure, but as I said, BigNum is likely more performant. – Rudy Velthuis Aug 13 '16 at 12:27
  • @Rudy Velthuis Why do you think so? – FrozenHeart Aug 13 '16 at 12:30
  • I have implemented both in a different language. I suppose they work similarly. Especially for decimals and integers, BigNum doesn't have to do a lot of work Rational must do. Most multi-precision Rational implementations use a BigInteger or BigDecimal or BigNum internally. – Rudy Velthuis Aug 13 '16 at 12:38
  • FWIW, a BigNum (or BigDecimal) usually stores 0.3 as an integer (or BigInteger) 3 scaled by an exponent -1, i.e. as 3 * 10^-1. 17.345 is stored as 17345 * 10^-3 (integer 17345, exponent -3). The exponent can also be negated, or called scale. – Rudy Velthuis Aug 13 '16 at 12:48

1 Answer


There are a couple of different approaches in wide use:

  • Arbitrary-precision integers are typically stored as an array of integers. Examples are long integers in Python, BigInteger in Java, or mpz in the GMP library (and languages such as Julia and Mathematica which use GMP).

  • Arbitrary-precision floats are stored as an arbitrary-precision integer and an exponent. This is available as either:

    • base-2 (e.g. mpf in GMP, MPFR and languages which use these libraries): these tend to be favoured in technical and numerical areas, as they act exactly like normal floating-point numbers but with extra precision (so they can be used to verify methods or calculations).
    • base-10 (e.g. BigDecimal in Java, decimal in Python): these tend to be favoured for financial applications (as there are fewer worries about round-off for currencies), and by people who can't get their heads around the fact that 0.1 + 0.2 != 0.3 in binary floating point (judging by how often these types are unnecessarily advocated on StackOverflow).
  • Rationals (e.g. mpq in GMP, fractions in Python) store a number as a ratio of (usually) arbitrary-precision integers. These are nice because the results of elementary arithmetic operations (+, -, *, /) are always exact, even for things like x/3. The downside is that they don't work for non-rational functions (such as sqrt or sin), and can quickly blow up if not used carefully (e.g. in an iterative algorithm such as Newton's method).

  • Double-double arithmetic stores the number as a pair of floating-point numbers (typically C doubles, hence the name). The idea is that the second element of the pair is less than one unit in the last place (ulp) of the first, so you have effectively doubled the available precision. These ideas go back to a paper by Dekker (1971), and can be extended to triple-double and quad-double. The advantage is that they can exploit existing floating-point hardware, so they can be much faster than arbitrary-precision floats; the downside is that the exponent range is still that of the underlying floating-point format (and the precision is still fixed, so not really "arbitrary"). I'm not sure which libraries are in common use, but David Bailey has a good summary of his software.
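The first three approaches are all available in Python's standard library, which makes the trade-offs easy to see:

```python
from decimal import Decimal
from fractions import Fraction

# Arbitrary-precision integers: Python ints grow as needed.
print(2**100)  # 1267650600228229401496703205376

# Base-10 arbitrary-precision floats: exact for decimal inputs.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
print(0.1 + 0.2 == 0.3)                                   # False with binary doubles

# Rationals: elementary arithmetic is always exact, even for thirds...
print(Fraction(1, 3) * 6 == 2)  # True
# ...but non-rational functions (sqrt, sin) fall outside the representation.
```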
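The building block behind double-double arithmetic is an error-free transformation such as Knuth's two-sum, which captures the rounding error of an ordinary floating-point addition as a second double. A minimal sketch (using Fraction only to verify exactness):

```python
from fractions import Fraction

def two_sum(a, b):
    """Knuth's error-free two-sum: s is the ordinary rounded sum a + b,
    e is the rounding error, and s + e equals a + b exactly."""
    s = a + b
    bv = s - a
    e = (a - (s - bv)) + (b - bv)
    return s, e

s, e = two_sum(0.1, 0.2)
print(s)  # 0.30000000000000004 (the ordinary double result)
print(e)  # the tiny error term that plain addition rounded away

# The (s, e) pair represents the sum of the two doubles exactly:
assert Fraction(0.1) + Fraction(0.2) == Fraction(s) + Fraction(e)
```

A double-double library keeps such (high, low) pairs through every operation, which is why it runs close to hardware speed while effectively doubling the precision.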

Simon Byrne