After asking this SO question, I received a very interesting comment from @AndonM.Coleman that I wanted to verify:
Since your disassembled code is written for x86, it is worth pointing out that XOR will set/clear the Zero Flag whereas NOT will not (sometimes useful if you want to perform a bitwise operation without affecting jump conditions that rely on flags from previous operations). Now, considering you're not writing assembly directly, you really have no access to this flag in a meaningful way so I doubt this is the reason for favoring one over the other.
His comment made me curious whether the following code would produce the same assembly instructions for the two expressions:
#include <iostream>

int main()
{
    unsigned int val = 0;
    std::cout << "Enter a numeric value: ";
    std::cin >> val;

    if ((val ^ ~0U) == 0)
    {
        std::cout << "Value inverted is zero" << std::endl;
    }
    else
    {
        std::cout << "Value inverted is not zero" << std::endl;
    }

    if ((~val) == 0)
    {
        std::cout << "Value inverted is zero" << std::endl;
    }
    else
    {
        std::cout << "Value inverted is not zero" << std::endl;
    }

    return 0;
}
For the following two expressions
if ( (val ^ ~0U) == 0 )
and
if ( (~val) == 0 )
an unoptimized (debug) build in Visual Studio 2010 gives the following disassembly:
if ( (val ^ ~0U) == 0)
00AD1501 mov eax,dword ptr [val]
00AD1504 xor eax,0FFFFFFFFh
00AD1507 jne main+86h (0AD1536h)
if ( (~val) == 0)
00AD1561 mov eax,dword ptr [val]
00AD1564 not eax
00AD1566 test eax,eax
00AD1568 jne main+0E7h (0AD1597h)
My question concerns optimisation: is it better to write
if ( (val ^ ~0U) == 0)
or
if ( (~val) == 0)