If a language wished to offer consistent floating-point semantics both on x87 hardware and on hardware that supports the binary128 type, could existing binary128 implementations operate efficiently under rules requiring all intermediate results to be rounded as though computed in the 80-bit type found on the x87? The x87 cannot efficiently accommodate languages which require results to be evaluated at the equivalent of float or double precision, because those types have different exponent ranges and thus different behavior with denormalized values. By contrast, it would appear that binary128 and binary80 use the same size exponent field, so rounding the 113-bit significand down to the x87's 64 bits (discarding the bottom 49 bits) should yield consistent results throughout the type's computational range.
Would it be reasonable for a language design to assume that future PC-style hardware will support the 80-bit type either via x87 instructions or via an FPU that can emulate the behavior of the 80-bit type, even if values require 128 bits to store?
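As a point of reference, a language implementation can probe what the widest native format on a given target actually looks like through <float.h>; the figures in the comments below are typical values I would expect, not guarantees about any particular or future platform.

```c
/* Probe the target's long double format via <float.h>. */
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Typical values: 64 significand bits on x87-style extended (binary80),
     * 113 where long double is binary128, 53 where it is plain binary64,
     * and 106 on double-double targets such as older PowerPC ABIs. */
    printf("long double: %d significand bits, max exponent %d\n",
           (int)LDBL_MANT_DIG, (int)LDBL_MAX_EXP);
    return 0;
}
```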
For example, if a language defined types:
- ieee32 == Binary32 that is not implicitly convertible to/from any other type except real32 or realLiteral
- ieee64 == Binary64 that is not implicitly convertible to/from any other type except real64 or realLiteral
- real32 == Binary32 that eagerly converts to realComp for all calculations, and is implicitly convertible from all real types
- real64 == Binary64 that eagerly converts to realComp for all calculations, and is implicitly convertible from all real types
- realComp == Intermediate-result type that takes 128 bits to store regardless of the precision stored therein
- realLiteral == Type of non-suffixed floating-point literals and constant expressions; processed internally as a maximum-precision value, and usable only as the type of literals and constant expressions; stored at maximum precision except where it would be immediately coerced to a smaller type, in which case it is stored as the destination type.
would it be reasonable for the language to promise that realComp would always be processed at exactly 80-bit precision, or would such a promise be likely to impose an execution-time penalty on some platforms? Would it be better to specify it simply as 80 bits or better, with a promise that any platform which sometimes provides 128 bits of precision will do so consistently? And what should one try to promise on hardware whose FPU is exactly 64 bits? (On a typical 16- or 32-bit micro without a 64-bit FPU, computations on realComp would be faster than on double.)
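For illustration only (none of these type names exist in C), the real32/realComp rule described above is close to what C already gives on x87 targets where FLT_EVAL_METHOD == 2, assuming realComp maps onto the 80-bit long double; the ieee32 rule corresponds to forcing a rounding back to binary32 after every operation:

```c
/* Sketch of the two evaluation styles, assuming realComp ~ x87 long double. */
#include <stdio.h>
#include <float.h>

typedef long double realComp;   /* assumed 80-bit on x87 targets */

/* real32-style: operands eagerly promoted to realComp before arithmetic. */
static realComp sum_real32(float a, float b, float c)
{
    return (realComp)a + (realComp)b + (realComp)c;
}

/* ieee32-style: every intermediate result rounded back to binary32;
 * volatile forces the rounding even when the FPU carries excess precision. */
static float sum_ieee32(float a, float b, float c)
{
    volatile float t = a + b;
    return t + c;
}

int main(void)
{
    float a = 16777216.0f, b = 1.0f, c = 1.0f;   /* 2^24 + 1 + 1 */

    printf("FLT_EVAL_METHOD     = %d\n", (int)FLT_EVAL_METHOD);
    printf("real32-style result = %.1Lf\n", sum_real32(a, b, c));
    printf("ieee32-style result = %.1f\n",  sum_ieee32(a, b, c));
    return 0;
}
```

With the 2^24 + 1 + 1 operands, the eager-promotion version yields 16777218 while the strictly rounded version yields 16777216, which is exactly the kind of observable difference the promised semantics would have to pin down.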