Is it possible to control which CPU instruction sets are used by the MS C Runtime Library (Visual Studio 2013, 2015)? If I step into the disassembly for, say, cos(), the code compares against a precalculated set of CPU capabilities and then executes the function using the 'best' capabilities available on the CPU. The problem is that different instruction sets yield slightly different results, so the output depends on which CPU the program happens to run on.
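For illustration, the kind of capability check involved can be reproduced with the __cpuid intrinsic (a sketch of the feature test only; the CRT's actual dispatch table is internal):

    #include <intrin.h>
    #include <iostream>

    int main()
    {
        int info[4];
        __cpuid(info, 1);                            // CPUID leaf 1: feature flags
        bool hasFMA3 = (info[2] & (1 << 12)) != 0;   // ECX bit 12 = FMA3
        std::cout << "FMA3 available: " << std::boolalpha << hasFMA3 << std::endl;
        return 0;
    }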
As an example, building a 64-bit executable of:
    std::cout << std::setprecision(20) << cos(-0.61385470201194381) << std::endl;
On Haswell/Broadwell and later this returns 0.81743370050726594 (the same value as a 32-bit x86 build). On older CPUs it returns 0.81743370050726583.
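For reference, a self-contained version of the repro (standard headers only):

    #include <cmath>
    #include <iomanip>
    #include <iostream>

    int main()
    {
        // Built as x64 with VS2013/2015, prints 0.81743370050726594 on
        // FMA-capable CPUs and 0.81743370050726583 on older ones.
        std::cout << std::setprecision(20) << cos(-0.61385470201194381) << std::endl;
        return 0;
    }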
The Runtime Library uses the FMA instruction set if it is available, executing a different implementation that yields the different results. Note that this is not affected by the compiler options selected in the application, because the Runtime Libraries are provided pre-compiled. Also note that the floating-point precision control function _controlfp() cannot control the precision of the 64-bit runtime.
Is it possible to control which instruction sets the Runtime Library uses, so that the results are consistent across CPU architectures?
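One candidate I have come across is _set_FMA3_enable() in the x64 CRT, which appears to toggle the FMA3 code paths, but I am not sure whether it is intended as a supported control. A minimal sketch, assuming a flag of 0 disables the FMA3 implementations:

    #include <cmath>
    #include <iomanip>
    #include <iostream>

    extern "C" int _set_FMA3_enable(int flag);  // x64 CRT; declared in math.h in newer CRTs

    int main()
    {
        _set_FMA3_enable(0);  // assumption: 0 forces the non-FMA3 implementations
        std::cout << std::setprecision(20) << cos(-0.61385470201194381) << std::endl;
        return 0;
    }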