There are many such arbitrary-precision types across languages. As far as I understand, here is how they work.
Rational
just stores two separate integers for the numerator and denominator (like 3 and 10 for 0.3).
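To illustrate the idea (not any particular library), Python's standard fractions.Fraction works exactly this way: it keeps a numerator/denominator pair of integers, so arithmetic stays exact instead of being rounded to binary floating point:

```python
from fractions import Fraction

# 0.3 represented exactly as the integer pair (numerator=3, denominator=10)
x = Fraction(3, 10)
y = Fraction(1, 10)

# Arithmetic operates on the integer pairs, so no rounding occurs;
# the result is automatically reduced to lowest terms
total = x + y
print(total)                      # 2/5
print(total == Fraction(2, 5))    # True
```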
BigNum
stores each digit of the number in some kind of "array" and does column arithmetic the way humans do it by hand. For example, 0.1 is stored like [0, '.', 1]. If we add 0.2 to it, the result looks something like this:
  [0, '.', 1]
+ [0, '.', 2]
= [0, '.', 3]
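A minimal sketch of that column arithmetic, assuming base-10 digit arrays (the function name and representation are made up for illustration; real BigNum libraries typically store word-sized "limbs" in a much larger base, but the carry logic is the same):

```python
def add_digit_arrays(a, b):
    """Add two equal-length base-10 digit arrays (most significant digit
    first), carrying between columns as in pencil-and-paper addition."""
    result = []
    carry = 0
    # Walk the columns from least significant to most significant
    for da, db in zip(reversed(a), reversed(b)):
        s = da + db + carry
        result.append(s % 10)   # digit that stays in this column
        carry = s // 10         # overflow carried to the next column
    if carry:
        result.append(carry)
    return list(reversed(result))

# The fractional digits of 0.1 + 0.2: [1] + [2] = [3], exactly
print(add_digit_arrays([1], [2]))        # [3]
# Carry propagation: 99 + 01 = 100
print(add_digit_arrays([9, 9], [0, 1]))  # [1, 0, 0]
```

The decimal point itself is usually tracked separately (as a scale/exponent) rather than stored in the array, which is how types like Java's BigDecimal or Python's decimal.Decimal handle fractions.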
Am I right? Are there any other popular approaches to arbitrary-precision arithmetic? If so, what are they called?
I'm not asking about any specific implementation, but rather about the general idea of how it usually works.