2

gcc has the __int128 type natively.

However, it’s not defined in limits.h. I mean there are no such things as INT128_MAX or INT128_MIN.

And gcc interprets literal constants as 64-bit integers. This means that if I write #define INT128_MIN -170141183460469231731687303715884105728 it will complain that the value has been truncated.

This is especially annoying for shifting on arrays. How can I overcome this?

user2284570
  • [For how long](https://www.google.com/search?q=%22INT128_MAX%22+gcc) did [you search](https://github.com/arm-embedded/gcc-arm-none-eabi.debian/blob/master/src/gcc/testsuite/c-c%2B%2B-common/ubsan/float-cast.h#L14)? – KamilCuk Jun 11 '20 at 12:58
  • Does this answer your question? [gcc 7.3 128-bit unsigned integer operation](https://stackoverflow.com/questions/60860827/gcc-7-3-128-bit-unsigned-integer-operation) – KamilCuk Jun 11 '20 at 12:59
  • @KamilCuk of course not! – user2284570 Jun 11 '20 at 14:10
  • @KamilCuk I would also add that your first link leads to no response at all, and that the second link doesn't work because the preprocessor computes constants on a 64-bit basis, so it overflows. – user2284570 Jun 11 '20 at 14:12

5 Answers

4
static const __uint128_t UINT128_MAX = __uint128_t(__int128_t(-1L));
static const __int128_t INT128_MAX = UINT128_MAX >> 1;
static const __int128_t INT128_MIN = -INT128_MAX - 1;
user942598
4

Just don't be ridiculous and use this for the unsigned maximum:

static const __uint128_t UINT128_MAX = ~__uint128_t{};

EDIT: You might want to take a look at these:

template <typename U>
constexpr static auto bit_size_v(CHAR_BIT * sizeof(U)); // CHAR_BIT needs <climits>

template <typename U>
constexpr static U min_v(std::is_signed_v<U> ? U(1) << (bit_size_v<U> - 1) : U{}); // std::is_signed_v needs <type_traits> and C++17

template <typename U>
constexpr static U max_v(~min_v<U>);
user1095108
3

Since you have the tag [g++], I assume you are interested in a C++ solution: the usual `std::numeric_limits<__int128>::max()` just works...

Marc Glisse
0

As gcc currently has no support for `__int128` integer literals, the usual workaround is to compose a 128-bit value from two 64-bit halves, as in `((__int128)high << 64) | low`.

However, this source has a perfect answer:

#define INT128_MAX (__int128)(((unsigned __int128) 1 << ((sizeof(__int128) * __CHAR_BIT__) - 1)) - 1)
#define INT128_MIN (-INT128_MAX - 1)
#define UINT128_MAX ((2 * (unsigned __int128) INT128_MAX) + 1)

INT128_MAX is 2^127 - 1, i.e. `((unsigned __int128)1 << 127) - 1`. The remaining constants can then be derived from it.

KamilCuk
  • The macro above is computed on 64 bits and overflows at compile time. – user2284570 Jun 11 '20 at 14:42
  • `is computed on 64 bits` no it is not. That's what all the `__int128` casts are for. `overflows at compile time` How do you check that? What do you mean by "overflows at compile time"? – KamilCuk Jun 11 '20 at 21:43
  • That the bit shifts are computed on 64 bits before being cast to `__int128`. – user2284570 Jun 15 '20 at 23:49
  • Please post an [MCVE] of your problem. There is nothing "computed on 64 bits before being cast" in the code shown. How do you check that? What do you mean by "overflows at compile time"? Please show compilable source code that demonstrates your problem, including the behavior you expected from your code and the behavior you observed. How do those behaviors differ? Type casts have higher precedence than bit shifts. It could be that the preprocessor is requested to calculate values with `intmax_t` precision, or that the gcc preprocessor is using `int64_t` to compare values. – KamilCuk Jun 16 '20 at 07:11
  • Simple: just compile your example using `-Wall` and it will warn. – user2284570 Jun 16 '20 at 08:25
  • 1
    [No, it does not, godbolt link](https://godbolt.org/z/RvBLv8). Please post an [MCVE]. Please be specific – compiling just those 3 macro definitions as a single translation unit certainly doesn't generate a warning. – KamilCuk Jun 16 '20 at 13:43
  • [According to your link](https://godbolt.org/z/RvBLv8), it was silent. If you look at the assembly you’ll see the constants defined are `9223372036854775808` and `−9223372036854775808` which correspond to `2⁶³` and `−2⁶³` instead of `2¹²⁷` and `−2¹²⁷`. – user2284570 Jun 16 '20 at 16:05
  • 1
    ? Would you expect compiler to generate a 128bit `mov` on 64bit registers? How? If you expect to see 2^127 in the assembly, please generate compiler output on a 128-bit architecture. The values are stored on multiple registers... The value is `mov rax, -1 movabs rdx, 9223372036854775807`. – KamilCuk Jun 16 '20 at 21:35
  • `the bit shifts is computed on 64 bits before being casted to __int128` obviously not. A cast has [higher precedence](https://en.cppreference.com/w/c/language/operator_precedence) than a shift so in `(unsigned __int128) 1 << shift` the cast is done first then a shift on the 128-bit value is done – phuclv Nov 29 '20 at 04:36
  • @user2284570 in x86 double-register values are stored in the pair dx:ax/edx:eax/rdx:rax. If you look at the [godbolt link above](https://godbolt.org/z/RvBLv8) you can also easily see that: INT128_MAX HIGH = rdx = 0x7FFFFFFFFFFFFFFF, INT128_MAX LOW = rax = 0xFFFFFFFFFFFFFFFF. Same to INT128_MIN and UINT128_MAX. In fact [your method is far worse with and without optimization](https://godbolt.org/z/jj47cf) because a memory load is required instead of embedded the immediate directly in the instruction – phuclv Nov 29 '20 at 04:54
-2

Not ideal in terms of performance and memory usage, but this is the only thing I found. Of course, this won't work at all on architectures where unaligned memory access is not permitted.

const __int128 llong_min=*(__int128*)"\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xfd";
const __int128 llong_max=*(__int128*)"\x7f\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff";
user2284570
  • This is undefined behavior if the address of those string literals is not aligned to `_Alignof(__int128)` and also depends on endianess. – KamilCuk Jun 11 '20 at 14:30
  • @KamilCuk how to use `_Alignof(__int128)` ? An edit is welcome ! – user2284570 Jun 11 '20 at 14:43
  • your `llong_min` is **wrong**. It must be `"\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00";`. And you don't use `_Alignof(__int128)` to align a variable, it's to check the alignment. To align a variable use [`__attribute__ ((aligned (8)))`](https://gcc.gnu.org/onlinedocs/gcc/Common-Type-Attributes.html#Common-Type-Attributes) in gcc and [`__declspec(align(8))`](https://learn.microsoft.com/en-us/cpp/cpp/align-cpp?view=msvc-160) in MSVC – phuclv Nov 29 '20 at 04:57
  • 1
    But this method is worse in every way even after correcting the alignment. It needs many memory loads and prevents compiler optimization (as the compiler doesn't know the constant value), just see the [godbolt link above](https://godbolt.org/z/jj47cf). Besides it violates strict aliasing rule and is less readable. `((unsigned __int128)0x7FFFFFFFFFFFFFFF << 64) | 0xFFFFFFFFFFFFFFFF` is shorter and gives immediate sense about the low and high parts of the 128-bit value – phuclv Nov 29 '20 at 05:04