
Is there a way/switch to restrict the size of long doubles to 64 bits when compiling using GCC?

Pascal Kesseli

2 Answers

5

Possibly via the -mlong-double-64 command-line switch, but the question is: why do you want to do that?

The x86 ABI and the x86-64 System V ABI mandate a long double of 96 and 80 bits respectively¹, which means you would need to recompile not only your application but also anything it uses that exposes a long double in its API.

¹ And according to the same document, GCC on x86-64 stores long doubles in 128 bits (the 80-bit format padded out to 16 bytes).
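
A minimal sketch (not part of the original answer), assuming an x86-64 Linux system with GCC, that shows how the switch changes the storage size of long double:

```c
/* sizeof_ld.c - print the storage sizes of double and long double */
#include <stdio.h>

int main(void) {
    printf("sizeof(double)      = %zu\n", sizeof(double));
    printf("sizeof(long double) = %zu\n", sizeof(long double));
    return 0;
}
```

Built with plain `gcc sizeof_ld.c`, the second line typically prints 16 on x86-64; built with `gcc -mlong-double-64 sizeof_ld.c`, it prints 8. Any library compiled without the switch will then disagree with your code about the layout and calling convention of anything involving long double, which is exactly the ABI breakage described above.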

peppe
    Page 11 of the [ABI](http://www.x86-64.org/documentation_folder/abi-0.99.pdf) mandates an 80 bit `long double`. There is a separate `__float128` type. – Brett Hale Feb 25 '15 at 18:53
  • @BrettHale: oh, interesting that x86 is 96 bits then. I stand corrected. Fixed my reply. – peppe Feb 25 '15 at 19:57
-2

Since typically (read: on all platforms that I know of) double is 64 bits, using long double explicitly demands a more precise floating-point number. Thus, there's no way to revert that.
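
For illustration (a sketch, not from the answer itself), a small check of how much precision double and long double actually carry on a given platform:

```c
/* precision.c - compare the mantissa widths of double and long double */
#include <stdio.h>
#include <float.h>

int main(void) {
    printf("DBL_MANT_DIG  = %d bits\n", DBL_MANT_DIG);   /* typically 53 */
    printf("LDBL_MANT_DIG = %d bits\n", LDBL_MANT_DIG);  /* 64 with x87 extended precision; 53 where long double equals double */
    return 0;
}
```

As the comments below point out, the standards only require that long double be at least as precise as double, so the two values can legitimately be equal.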

Marcus Müller
    Both C and C++ only mandate that a `long double` be *at least* `double` precision. – Brett Hale Feb 25 '15 at 18:37
  • Yes, but I only *know* of platforms where gcc makes long double at least 80 bits long, because that's what the user indicates he wants. – Marcus Müller Feb 25 '15 at 20:15
  • Your answer still implies that `long double` is wider than `double`. That's not always true. And since gcc has a `-mlong-double-64` option for some platforms, your answer is factually incorrect (though there are substantial drawbacks to using that option). – Keith Thompson Feb 25 '15 at 20:43
  • Keith, again I must stress the *typical* and *all platforms **I** know* from my answer. No one (aside from the cited Bionic C, which seems to be such a special case that the gcc manual cites it!) seems to do that, breaking the ABI. – Marcus Müller Feb 25 '15 at 20:48