How can I define my own floating-point format (type) with specific precision and a chosen bit width for the exponent and significand? For example, a 128-bit floating-point number with a 20-bit exponent and 107-bit significand (not the standard 15/112-bit split), or a 256-bit one with a 19/236-bit exponent/significand.
-
See the implementation of normal float/double algorithms and do the same for 128-bit, etc. I found that AIX supports 128bit floating point - if the source is available, you can see there: "The AIX® operating system supports a 128-bit long double data type...". – i486 Jan 10 '15 at 23:37
-
There's no direct support in the language itself. You have to (1) get the underlying algorithms right and (2) horse around with operator overloading to make your type "feel like" a native type. – tmyklebu Jan 11 '15 at 00:10
-
@i486: The implementation of single- and double-precision floating point is hardware circuitry in the CPU. "do the same for 128-bit" Are you honestly suggesting that people design and fabricate their own CPU designs? – Ben Voigt Jan 11 '15 at 01:19
-
And in fact the reason AIX has 128-bit FP is because they have 128-bit hardware. (Similarly, 80-bit `long double` on x87) – MSalters Jan 11 '15 at 07:11
-
@BenVoigt: `float` and `double` math can be implemented in the CPU, but this is not true for all CPUs, nor for all C compilers. There are many old compilers (including some for the i386) which implement floating-point arithmetic in software. In other words, there is a full implementation in C or ASM, not in CPU hardware. – i486 Jan 11 '15 at 21:30
-
@i486: This is true, but it is no longer by any means "normal". Besides, emulating hardware operations in software is horribly inefficient. If you are going to use software, use an encoding that is friendly to software. – Ben Voigt Jan 11 '15 at 21:31
-
Eugene asks how to implement his own FP format. Whether it will be fast enough is another question. For example, for the needs of RSA/DH cryptography it is necessary to work with big numbers, which are not (always) supported natively by the CPU. Yet encryption with RSA/DH works every day. – i486 Jan 11 '15 at 21:34
-
@i486: I'm not saying that doing math in software is impossible. I'm saying that you make different choices, starting with the format. – Ben Voigt Jan 11 '15 at 21:42
-
@BenVoigt: OK. But you wrote: "Are you honestly suggesting that people design and fabricate their own CPU designs?". That was the point of my later comment: that there is also a software way. – i486 Jan 11 '15 at 21:51
-
@i486: Yeah, just saying you missed a step. Step one should not be "Find the implementation your system uses for single and double-precision float arithmetic", since that is in hardware. Step one would have to be "find a system that uses software to implement". – Ben Voigt Jan 11 '15 at 21:53
1 Answer
There are two ways to do this. You can create your own class with one member for the exponent and one for the significand, write code for the operators you need, and then implement all of the functions that normally exist in the standard math library (things like `atan()`, `sin()`, `exp()` and `pow()`).
Or you can find an existing arbitrary-precision library and use it instead. While implementing one yourself would be interesting and fun, it is likely to contain many errors and to be an extremely large amount of work, unless your use case is tightly constrained.
Wikipedia has a list of arbitrary precision math libraries that you can look into for yourself.

user1118321
-
Indeed, trying to pack the components as bitfields makes absolutely no sense once you depart from the particular formats that match the hardware circuits. – Ben Voigt Jan 11 '15 at 01:22