50

I wonder if there is a difference in performance between

checking if a value is greater / smaller than another

for(int x = 0; x < y; x++); // for y > x

and

checking if a value is not equal to another

for(int x = 0; x != y; x++); // for y > x

and why?

In addition: What if I compare to zero, is there a further difference?

It would be nice if the answers also considered a view of the code at the assembly level.

EDIT: As most of you pointed out, the difference in performance is of course negligible, but I'm interested in the difference at the CPU level. Which operation is more complex?

To me it's more a question about learning / understanding the technique.

I removed the Java tag, which I had added accidentally; the question is meant generally, not just about Java. Sorry.

Peter Lawrey
das Keks

7 Answers

39

You should still do what is clearer, safer and easier to understand. These micro-tuning discussions are usually a waste of your time because

  • they rarely make a measurable difference
  • when they do make a difference, this can change if you use a different JVM or processor, i.e. without warning.

Note: the machine code generated can also change with the processor or JVM, so looking at it is not very helpful in most cases, even if you are very familiar with assembly code.

What is much, much more important is the maintainability of the software.
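
As the comments below suggest, a microbenchmark harness such as JMH is a far better way to measure this kind of thing than hand-rolled timing. A minimal sketch, not part of the original answer (class and method names are illustrative, and it assumes the org.openjdk.jmh dependency is on the classpath):

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class LoopCompareBenchmark {
    int y = 1_000_000;

    @Benchmark
    public int lessThan() {
        int z = 0;
        for (int x = 0; x < y; x++) { z = x; }
        return z; // returning the result keeps the JIT from eliminating the loop
    }

    @Benchmark
    public int notEqual() {
        int z = 0;
        for (int x = 0; x != y; x++) { z = x; }
        return z;
    }
}

The point of the harness is that it handles warm-up, forking and dead-code elimination for you, which ad-hoc timing loops do not.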

Peter Lawrey
  • +1 I've always loved your answers to questions regarding performance and efficiency. Especially because I'm too bad at it :) – Rohit Jain Sep 03 '13 at 15:50
  • @RohitJain Or you realise it's not as easy as just guessing what might be a good idea. It is very hard to find an example which is better for performance reasons even if it is more obscure. – Peter Lawrey Sep 03 '13 at 15:56
  • @PeterLawrey +1 Agreed. Obscurity for the sake of performance should not even be considered unless your application needs to run as optimally as possible (Such as running under environments with limited memory/computing abilities, or some ultra-complex simulation that needs as much juice as possible). Otherwise you're wasting all that free space, yo. – Super Cat Nov 17 '15 at 03:02
  • @SuperCat Agreed, and you can prove that the more obscure code is actually faster. ;) This is often missing. BTW The JVM optimises common/specific patterns of code. This means that if you use an uncommon pattern (as far as the JVM is concerned) your code is likely to be slower not faster even if in theory it could be faster if you used a different compiler. – Peter Lawrey Nov 17 '15 at 11:53
  • Honestly, I'm annoyed by answers like these. My application runs on a specific JVM with specific specs and I have a single loop with a few statements that should be executed as many times as possible in a short timespan. Even moving a method out of an if statement and saving its result in a variable to then put that variable in the if made a clearly measurable impact! – xeruf May 06 '18 at 19:18
  • @Xerus I suggest you try using JMH. Even if you save the value in a field it can be optimised away. – Peter Lawrey May 12 '18 at 13:11
  • putting it in a field produced a measurable performance boost, that's the thing, it's kinda unrelated to the original question. Something you wouldn't expect to speed it up at all actually made an impact, so I'm fairly sure every little thing can make an impact. – xeruf May 12 '18 at 22:38
  • 4
    I'm not sure why this is the highest voted answer, as the OP asked a question in relation to performance, not software engineering best practices. – ajq88 Jul 29 '18 at 10:53
17

Now, six years later and still receiving occasional notifications from this question, I'd like to add some insights that I've gained during my computer science studies.

Putting the above statements into a small program and compiling it...

public class Comp {
    public static void main(String[] args) {
        int y = 42;

        for(int x = 0; x < y; x++) {
            // stop if x >= y
        }

        for(int x = 0; x != y; x++) {
            // stop if x == y
        }
    }
}

... we get the following bytecode:

  public static void main(java.lang.String[]);
    Code:
       // y = 42
       0: bipush        42  
       2: istore_1

       // first for-loop
       3: iconst_0
       4: istore_2
       5: iload_2
       6: iload_1
       7: if_icmpge     16      // jump out of loop if x >= y
      10: iinc          2, 1
      13: goto          5

       // second for-loop
      16: iconst_0
      17: istore_2
      18: iload_2
      19: iload_1
      20: if_icmpeq     29      // jump out of loop if x == y
      23: iinc          2, 1
      26: goto          18

      29: return

As we can see, at the bytecode level both loops are handled in the same way and use a single bytecode instruction for the comparison.

As already stated, how the bytecode is translated into assembler/machine code depends on the JVM. But generally these conditional jumps can be translated into assembly code like this:

; condition of first loop
CMP eax, ebx
JGE label  ; jump if eax >= ebx

; condition of second loop
CMP eax, ebx
JE  label  ; jump if eax == ebx

At the hardware level, JGE and JE have the same complexity.

So all in all: regarding performance, x < y and x != y are theoretically the same at the hardware level, and neither is per se faster or slower than the other.
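
As for the follow-up question about comparing against zero: at the bytecode level the JVM has dedicated single-operand branch instructions (ifeq, ifne, iflt, ...) for tests against zero, so the constant never has to be loaded onto the operand stack. A hedged sketch, not part of the original answer (the count-down loop is just an illustration):

public class CompZero {
    public static void main(String[] args) {
        int y = 42;

        // The exit test of this loop compiles to the single-operand bytecode
        // ifeq (jump out when x == 0) rather than if_icmpeq, because the
        // constant 0 does not have to be pushed onto the operand stack.
        for (int x = y; x != 0; x--) {
            // stop if x == 0
        }
    }
}

Whether that saves anything after JIT compilation is another matter: on x86 a decrement already sets the zero flag, so the compare can in principle be folded away, but in practice any difference is far below the noise floor.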

das Keks
  • a pandemic later you're still getting notifications over this. I'm pretty sure there were faster ways to definitely settle this question, beside doing a full computer science study, but I get it, you do what you've got to do. Congratulations on your study, and thank you for this answer, that you should accept as the correct one. It's well deserved! – Andrei Feb 22 '22 at 17:43
16

The performance difference is absolutely negligible. Here's some code to demonstrate it:

import java.util.Date;

public class OpporatorPerformance {
    static long y = 300000000L;

    public static void main(String[] args) {
        System.out.println("Test One: " + testOne());
        System.out.println("Test Two: " + testTwo());
        System.out.println("Test One: " + testOne());
        System.out.println("Test Two: " + testTwo());
        System.out.println("Test One: " + testOne());
        System.out.println("Test Two: " + testTwo());
        System.out.println("Test One: " + testOne());
        System.out.println("Test Two: " + testTwo());

    }

    public static long testOne() {
        Date newDate = new Date();
        int z = 0;
        for(int x = 0; x < y; x++){ // for y > x
            z = x;
        }
        return new Date().getTime() - newDate.getTime();
    }

    public static long testTwo() {
        Date newDate = new Date();
        int z = 0;
        for(int x = 0; x != y; x++){ // for y > x
            z = x;
        }
        return new Date().getTime() - newDate.getTime();
    }

}

The results:

Test One: 342
Test Two: 332
Test One: 340
Test Two: 340
Test One: 415
Test Two: 325
Test One: 393
Test Two: 329
James Dunn
  • +1 On my last run I got `Test One: 113 Test Two: 113` on my laptop. – Peter Lawrey Sep 03 '13 at 15:57
  • 7
    You should use System.nanoTime() for benchmarking, that gives you more precision. – Rohit Jain Sep 03 '13 at 15:57
  • 1
    @RohitJain, thanks! Somehow I hadn't learned that yet. The thing I like about it is that now I don't need to import `java.util.Date`. Although, the javadoc does say that "This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes) - no guarantees are made except that the resolution is at least as good as that of currentTimeMillis()." So, I wouldn't necessarily trust the accuracy of the extra precision. – James Dunn Sep 03 '13 at 16:03
  • I wouldn't trust `Date().getTime()` - you're creating a new object just to measure the time, which may actually obscure the benchmark. Why not use `nanoTime()` or at least `currentTimeMillis()` (which is one of the fastest calls btw)? – xeruf May 06 '18 at 19:20
5

There is rarely a performance difference, but the first is much more reliable, as it will handle both of the extraordinary cases (illustrated in the sketch after this list) where

  1. y < 0 to start
  2. x or y is modified inside the block.
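
A hedged illustration of those two cases (the class name and concrete values are made up for this sketch):

public class SafetyDemo {
    public static void main(String[] args) {
        // Case 1: y < 0 to start.
        // With "x < y" the body simply never runs; with "x != y" the loop would
        // count up past Integer.MAX_VALUE, wrap around, and only stop once x
        // wrapped all the way back to -1.
        int y = -1;
        for (int x = 0; x < y; x++) {
            // never executed
        }

        // Case 2: the counter is modified inside the block.
        // With "x < 10" this still terminates (11 < 10 is false); with "x != 10"
        // it would keep running until the int overflowed and wrapped back to 10.
        for (int x = 0; x < 10; x++) {
            if (x == 9) {
                x += 2; // x jumps from 9 to 11, skipping the bound 10
            }
        }
    }
}
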
OldCurmudgeon
  • It is worth noting that more types support `!=` than `<` (all iterators, for instance, as opposed to just random access iterators), thus `!=` is usually used for consistency. – Aykhan Hagverdili Mar 19 '19 at 18:11
3

Other people seem to have answered from a measurement perspective, but at the machine level you'd be interested in the Arithmetic Logic Unit (ALU), which handles the mathy bits on a "normal" computer. There is a pretty good explanation in How does less than and greater than work on a logical level in binary? for the complete details.

From a purely logical level, the short answer is that it's easier to tell whether something is not equal to something than to tell how it compares relative to something. However, this has likely been optimized in your standard personal computer or server, so you'll only see actual gains in small personal builds such as on-board computers for drones or other micro-technologies.

aetherwalker
3

In theory the performance is the same. At the processor level, both a less-than and a not-equal comparison are performed as a subtraction, after which a flag in the result is checked: the negative flag for less-than, the zero flag for not-equal. Since the only difference is which flag is tested, the cost is the same.
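
A rough software model of that "subtract, then test a flag" idea (illustrative only; real hardware also consults the overflow flag for signed less-than, which this sketch sidesteps by subtracting in a wider type):

public final class FlagModel {
    // "a < b": the ALU subtracts and the branch tests the sign of the result.
    static boolean lessThan(int a, int b) {
        long diff = (long) a - (long) b; // widen so the true sign survives overflow
        return diff < 0;                 // sign flag of the subtraction
    }

    // "a != b": the same subtraction, but the branch tests the zero flag instead.
    static boolean notEqual(int a, int b) {
        long diff = (long) a - (long) b;
        return diff != 0;                // zero flag of the subtraction
    }

    public static void main(String[] args) {
        System.out.println(lessThan(3, 5)); // true
        System.out.println(notEqual(5, 5)); // false
    }
}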

Lohith S
-5

I wonder if nesting the loops in each test would give the same results?

for(int x = 0; x < y; x++)
{   
  for(int x2 = 0; x2 < y; x2++)  {}   
}

for(int x = 0; x != y; x++)
{
  for(int x2 = 0; x2 != y; x2++) {}    
}
Macrofeet
  • 5
    why did this get an upvote? Test it yourself and then provide us an answer, not another question. – xeruf May 06 '18 at 19:21