-1

When writing a program as a high-level programmer, `n = 0;` looks more efficient and cleaner.

But is `n = 0;` really more efficient than `if (n != 0) n = 0;`?

  1. when n is more likely to be 0.

  2. when n is less likely to be 0.

  3. when n is absolutely uncertain.

Language: C (C90)

Compiler: Borland's Turbo C++

Minimal reproducible code

int scanf(const char *format, ...); /* normally declared in <stdio.h> */

int main(void)
{
    int n;              /* 2 bytes on 16-bit Turbo C++ */

    n = 0;              /* Expression 1 */

    scanf("%d", &n);    /* value now absolutely uncertain */

    if (n != 0) n = 0;  /* Expression 2 */

    return 0;
}

Note: I have included the above code only for your reference. Please don't get hung up on its details.

If you're not comfortable with the above language/standard/compiler, then please feel free to explain the above 3 cases in your preferred language/standard/compiler.

  • 1
    With what compilers? on which systems? for which type of `n`? This is too broad to be answerable, and almost certainly not worth worrying about in 99.999% of cases. Just type `n = 0;` and get on with worrying about actually useful things. Fwiw, personally, I doubt branching is cheaper than whatever tiny penalty comes from reassigning the same value. – underscore_d Nov 09 '19 at 20:57
  • 1
    Think of it this way: no matter which one you choose it will always require at least one instruction (compare or store), so... – smac89 Nov 09 '19 at 21:00
  • 1
    The question does not have a [mre]. – Antti Haapala -- Слава Україні Nov 09 '19 at 21:58
  • 1
    Either both programs have the same external behaviour or they don't, and we cannot tell. If they have the same behaviour they can produce the same machine code. If they're not the same, then one of them is likely faster than the other... – Antti Haapala -- Слава Україні Nov 09 '19 at 22:01
  • 1
    The answer to this is type dependent, and there is also the impact of changing functionality. Too broad – chux - Reinstate Monica Nov 09 '19 at 22:09
  • I have updated the answer. Please re-consider your actions. – Nephew of Stackoverflow Nov 10 '19 at 11:33
  • That's how it is declared in the header files. Header files just contain simple declarations and possibly some macros (e.g. `#define NULL 0`) that the programmer might need while writing his/her C code. As far as I know, the functions themselves are already compiled into a file which the linker links in after the main compilation completes successfully. The compiler can only check for errors in the file it's compiling, so you can have any number of undefined **extern** declarations as long as they aren't actually used anywhere. If you use an undefined function, the linker will throw an error (see the small sketch after this comment thread). – Nephew of Stackoverflow Nov 12 '19 at 20:38
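A minimal sketch of the point in that last comment (the function name is made up purely for illustration): a declaration with no definition compiles cleanly, and the linker only complains if the function is actually called.

void never_defined(void);   /* declaration only, no definition anywhere */

int main(void)
{
    /* never_defined(); */  /* uncommenting this call would produce a linker error */
    return 0;
}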

2 Answers

5

If n is a 2's complement integral type or an unsigned integral type, then writing n = 0 directly will certainly be no slower than the version with the condition check, and a good optimising compiler will generate the same code. Some compilers compile assignment to zero as XOR'ing a register value with itself, which is a single instruction.
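A quick way to see this (a minimal sketch; the function names direct/checked are just illustrative, and the exact output depends on compiler and flags) is to compile both forms with optimisation enabled and compare the generated code, e.g. on a compiler explorer:

/* Both forms written as small value-returning functions so the compiler
   can see the whole data flow. With optimisation (e.g. -O2), mainstream
   x86 compilers typically emit the same "xor eax, eax; ret" for both. */
int direct(int n)
{
    n = 0;          /* Expression 1 */
    return n;
}

int checked(int n)
{
    if (n != 0)     /* Expression 2 */
        n = 0;
    return n;
}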

If n is a floating point type, a ones' complement integral type, or a sign-magnitude integral type, then the two code snippets differ in behaviour. For example, if n holds a negative zero, if (n != 0) n = 0; leaves the negative zero alone (negative zero compares equal to 0), whereas n = 0; stores a positive zero. (Acknowledge @chqrlie.) Likewise, if n is a pointer on a system that has multiple null pointer representations, then if (n != 0) n = 0; will not assign n when it already holds one of the alternative null pointers. n = 0; imparts a different functionality.
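As a concrete illustration of the floating-point case (a minimal sketch; it assumes IEEE-754 doubles with signed zeros, and signbit is C99 rather than C90):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double n = -0.0;

    if (n != 0)        /* false: -0.0 compares equal to 0.0 */
        n = 0;

    /* n still holds the negative zero; an unconditional n = 0; would clear the sign */
    printf("n = %g, signbit = %d\n", n, signbit(n) != 0);
    return 0;
}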

"will always be more efficient" is not true. Should reading n have a low cost, writing n a high cost (Think of re-writing non-volatile memory that needs to re-write a page) and is likely n == 0, then n = 0; is slower, less efficient than if (n != 0) n = 0;.

Govind Parmar
Bathsheba
  • 1
    You raise a good point regarding floating point types, but your example is incorrect: if `n` is `NaN`, `n != 0` will be true (because `n == 0` will be false). The case where the two code snippets differ is if `n` is negative zero: `n = -0.0;` – chqrlie Nov 09 '19 at 21:38
  • 2
    @chqrlie: It's late! Good point, I've stolen it. – Bathsheba Nov 09 '19 at 21:40
  • 2
    For completeness, we could add that the behavior on negative zeros is not restricted to floating point types, on improbable architectures where integers are represented using ones' complement or sign/magnitude... Another special case to consider is if `n` is defined as `volatile`. Writing to it might have unwanted side effects. – chqrlie Nov 09 '19 at 21:46
  • 2
    @chux-ReinstateMonica: It's time to wiki this - please add contributions at your leisure. – Bathsheba Nov 09 '19 at 22:08
  • XOR-ing n with n... so is writing 0 to n more expensive than XOR-ing it with itself? – Nephew of Stackoverflow Nov 12 '19 at 12:28
  • 1
    @NephewofStackoverflow: I was always under that impression, yes, although apparently it's not the case on ARM64 – Bathsheba Nov 12 '19 at 12:30
  • So assuming that n is an integer, directly writing any number to it is better than checking whether it's not that number and then writing that number. – Nephew of Stackoverflow Nov 12 '19 at 16:26
3

n = 0;

will always be more efficient as there is no condition check.

https://godbolt.org/z/GEzfcD

0___________