Our program manipulates real numbers extensively, and those numbers happen to be very small or very big, while we don't need very high precision. We are strongly concerned about performance (CPU usage).
Such numbers could be 2.5687e-45785, for instance.
Remark: as the program does a lot of additions, working with logarithms is not an efficient option.
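To make that concrete, here is a minimal illustration (not code from our program) of what a single addition costs when numbers are stored as their logarithms: every addition becomes a log-sum-exp, i.e. one exp and one log1p call.

```c
#include <math.h>

/* Returns log(exp(la) + exp(lb)) without leaving log space.
 * One exp() plus one log1p() per addition is what makes
 * log-space storage expensive for addition-heavy workloads. */
static double log_add(double la, double lb)
{
    if (la < lb) { double t = la; la = lb; lb = t; }  /* ensure la >= lb */
    return la + log1p(exp(lb - la));
}
```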
Fortunately, we don't need arbitrarily small or large numbers either (that would require encoding the numbers in a variable-size structure). Let's say 1.0e100000 is a reasonable limit for us.
I know that C offers several types for floating-point numbers: float, double and long double. I have taken a look at how these numbers are encoded and what that implies in terms of limits (IEEE 754). The bits used to represent a number are split into 3 groups: a sign bit, an exponent group (the power of 2 that conditions how big or small a value can get) and a fraction part that gives the precision. The more bits in the exponent, the smaller and bigger the representable numbers.
long doubles on my x86-64 machine have 15 bits of exponent and 63 bits of fraction (https://en.wikipedia.org/wiki/Extended_precision). They can represent numbers in the range 3.65×10^−4951 to 1.18×10^4932.
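For reference, these limits can be checked on any machine with the standard <float.h> macros; on x86-64 with the 80-bit extended type, the long double line should report a decimal exponent range around ±4931/4932.

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("double     : max 10-exp %d, min 10-exp %d, mantissa bits %d\n",
           DBL_MAX_10_EXP, DBL_MIN_10_EXP, DBL_MANT_DIG);
    printf("long double: max 10-exp %d, min 10-exp %d, mantissa bits %d\n",
           LDBL_MAX_10_EXP, LDBL_MIN_10_EXP, LDBL_MANT_DIG);
    return 0;
}
```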
But for our need, this type has too much precision and still cannot represent numbers as small or as big as I would like. Ideally, it would be fine if I could transfer some fraction bits to the exponent. Of course, I am aware that this type is not flexible, because it is the actual type the hardware manipulates (the Intel x87 math coprocessor for long double).
Before devising our own representation of numbers (probably a double plus an additional integer seen as an extra exponent), knowing that it won't be easy to make it efficient in terms of CPU usage because of the renormalisations that would occur, I would like to know whether libraries already exist that provide number representations matching our needs and that are efficient.
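To show what I mean, here is a rough sketch of the "double + extra integer exponent" idea (the names xdouble, xd_norm, etc. are just mine, not from any existing library), assuming renormalisation with frexp/ldexp on every operation; this per-operation frexp/ldexp cost is exactly what worries me.

```c
#include <stdio.h>
#include <math.h>

typedef struct {
    double m;   /* significand, kept normalised in [0.5, 1) */
    long   e;   /* extra power-of-two exponent */
} xdouble;

/* Renormalise so that m stays in [0.5, 1): one frexp per operation. */
static xdouble xd_norm(double m, long e)
{
    int k;
    xdouble r;
    r.m = frexp(m, &k);   /* m == r.m * 2^k, with r.m in [0.5, 1) or 0 */
    r.e = e + k;
    return r;
}

static xdouble xd_mul(xdouble a, xdouble b)
{
    return xd_norm(a.m * b.m, a.e + b.e);
}

static xdouble xd_add(xdouble a, xdouble b)
{
    long d;
    if (a.e < b.e) { xdouble t = a; a = b; b = t; }  /* a gets the larger exponent */
    d = b.e - a.e;                                   /* d <= 0 */
    if (d < -2000)                                   /* ldexp would underflow to 0 anyway */
        return a;
    return xd_norm(a.m + ldexp(b.m, (int)d), a.e);   /* align exponents, add, renormalise */
}

int main(void)
{
    /* e.g. square a value near the top of the double range without overflowing */
    xdouble x = xd_norm(1.0e300, 0);
    x = xd_mul(x, x);            /* about 1.0e600, far beyond what a double can hold */
    printf("%.6f * 2^%ld\n", x.m, x.e);
    return 0;
}
```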