I have been pulling my hair out for the past couple of days over this "innocuous" piece of code (a minimal reproducible example, distilled from a larger modular multiplication routine):
    #include <iostream>
    #include <limits>

    using ubigint = unsigned long long int;
    using bigint = long long int;

    void modmul(bigint a, bigint b, ubigint p) {
        ubigint ua = a < 0 ? -a : a;
        ubigint ub = b < 0 ? -b : b;
        ua %= p;
        ub %= p;
        std::cout << "ua: " << ua << '\n';
    }

    int main() {
        bigint minbigint = std::numeric_limits<bigint>::min();
        bigint maxbigint = std::numeric_limits<bigint>::max();
        std::cout << "minbigint: " << minbigint << '\n';
        std::cout << "maxbigint: " << maxbigint << '\n';
        modmul(minbigint, maxbigint, 2314); // expect ua: 2036, got ua: 0
    }
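For reference, here is the full output at clang -O1 (minbigint and maxbigint are simply LLONG_MIN and LLONG_MAX):

    minbigint: -9223372036854775808
    maxbigint: 9223372036854775807
    ua: 2036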
I am compiling on macOS 11.4 with clang 12.0 installed from Homebrew:

    clang version 12.0.0
    Target: arm64-apple-darwin20.5.0
    Thread model: posix
    InstalledDir: /opt/homebrew/opt/llvm/bin
When compiling with clang -O1, the program spits out the expected result: 2036, which I've verified with Wolfram Mathematica (Mod[9223372036854775808, 2314]; 9223372036854775808 is 2^63, the magnitude of minbigint). However, when I compile with clang -O2 or clang -O3 (full optimization), the variable ua is somehow zeroed out (its value becomes 0). I am at a complete loss here and have no idea why this happens. IMO there's no UB, no overflow, nothing dubious in this piece of code. I'd greatly appreciate any advice, or confirmation that you can reproduce the issue on your side.
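For what it's worth, the expected value can also be checked in plain C++ without Mathematica (a minimal standalone sketch, separate from the program above):

    #include <iostream>

    int main() {
        // 2^63, i.e. the magnitude of minbigint; fits in unsigned long long
        unsigned long long x = 9223372036854775808ULL;
        std::cout << x % 2314ULL << '\n'; // prints 2036
    }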
PS: the code behaves as expected on every other platform I've tried (Windows/Linux/FreeBSD/Solaris), with every compiler I've thrown at it. I'm only seeing this on the Apple M1 with clang 12 (I haven't tested other compilers on the M1).