
I'm experimenting with a lexer, and I found that switching from a while-loop to an if-statement plus a do-while-loop in one part of the program led to ~20% faster code, which seemed crazy. I isolated the difference in the compiler-generated code to the assembly snippets below. Does anyone know why the fast code is faster?
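The change had roughly this shape (a simplified sketch, not my actual lexer code):

const char* scan_slow( const char* p, const char* limit, const char isAlpha[256] )
{
    // Plain while-loop: test at the top, one extra jump per iteration.
    while ( p != limit && isAlpha[(unsigned char)*p] )
        ++p;
    return p;
}

const char* scan_fast( const char* p, const char* limit, const char isAlpha[256] )
{
    // Hoist the first test into an if, then advance in a do-while:
    // the rotated loop needs one fewer jump per iteration.
    if ( p != limit && isAlpha[(unsigned char)*p] )
    {
        do
            ++p;
        while ( p != limit && isAlpha[(unsigned char)*p] );
    }
    return p;
}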

In the assembly, 'edi' is the current text position, 'ebx' is the end of the text, and 'isAlpha' is a lookup table that holds 1 if the character is alphabetic and 0 otherwise.

The slow code:

slow_loop:
00401897  cmp   edi,ebx 
00401899  je    slow_done (4018AAh) 
0040189B  movzx eax,byte ptr [edi] 
0040189E  cmp   byte ptr isAlpha (4533E0h)[eax],0 
004018A5  je    slow_done (4018AAh) 
004018A7  inc   edi  
004018A8  jmp   slow_loop (401897h) 
slow_done:

The fast code:

fast_loop:
0040193D  inc   edi  
0040193E  cmp   edi,ebx 
00401940  je    fast_done (40194Eh) 
00401942  movzx eax,byte ptr [edi] 
00401945  cmp   byte ptr isAlpha (4533E0h)[eax],0 
0040194C  jne   fast_loop (40193Dh) 
fast_done:

If I run just these assembly snippets against a megabyte of text consisting only of the letter 'a', the fast code is 30% faster. My guess is the slow code is slow because of branch misprediction, but I thought that in a loop that'd be a one-time cost.

Here's the program that I used to test both snippets:

#include <Windows.h>
#include <cctype>
#include <string>
#include <iostream>

int main( int argc, char* argv[] )
{
    // Build the lookup table: 1 for alphabetic characters, 0 otherwise.
    static char isAlpha[256];
    for ( int i = 0; i < sizeof( isAlpha ); ++i )
        isAlpha[i] = isalpha( i ) ? 1 : 0;

    std::string test( 1024*1024, 'a' );

    const char* start = test.c_str();
    const char* limit = test.c_str() + test.size();

    DWORD slowStart = GetTickCount();
    for ( int i = 0; i < 10000; ++i )
    {
        __asm
        {
            mov edi, start
            mov ebx, limit

            inc edi // skip the first byte so both loops scan the same range
                    // (the fast loop increments edi before its first test)

        slow_loop:
            cmp   edi,ebx
            je    slow_done
            movzx eax,byte ptr [edi]
            cmp   byte ptr isAlpha [eax],0
            je    slow_done
            inc   edi
            jmp   slow_loop

        slow_done:
        }
    }
    DWORD slowEnd = GetTickCount();
    std::cout << "slow in " << ( slowEnd - slowStart ) << " ticks" << std::endl;

    DWORD fastStart = GetTickCount();
    for ( int i = 0; i < 10000; ++i )
    {
        __asm
        {
            mov edi, start
            mov ebx, limit

        fast_loop:
            inc   edi
            cmp   edi,ebx
            je    fast_done
            movzx eax,byte ptr [edi]
            cmp   byte ptr isAlpha [eax],0
            jne   fast_loop

        fast_done:
        }
    }
    DWORD fastEnd = GetTickCount();
    std::cout << "fast in " << ( fastEnd - fastStart ) << " ticks" << std::endl;

    return 0;
}

The output of the test program is:

slow in 8455 ticks
fast in 5694 ticks
  • That *is* crazy - it's a very common optimization for compilers to do by themselves. As for why it's faster, there's one fewer jump per iteration in the fast code, and jumps only have a limited throughput. – harold Jun 28 '12 at 10:54
  • A performance-counter-based profiler would probably yield the best answer, but apart from the obvious jump, I'm guessing the second bit of code is faster because it fits better into the code cache (there are also fewer bytes to fetch and decode, but that overhead is meaningless here). Jump target alignment may also be another factor, but that's hard to tell here without addresses – Necrolis Jun 28 '12 at 11:06
  • Brian, what is your CPU? Take a look at http://www.agner.org/optimize/ And harold, static jmps are predicted as always taken on modern (non-Atom) x86 CPUs, so they should cost nothing. – osgx Jun 28 '12 at 22:56
  • My CPU is an Intel Core i7-2600K. I'll check out that link. – briangreenery Jun 28 '12 at 23:14
  • brian, thanks, your microarchitecture is SNB (Sandy Bridge). Can you post your binary file on the internet? – osgx Jun 29 '12 at 08:59
  • @osgx correctly predicted jumps have no latency, but they do have a limited throughput – harold Jun 29 '12 at 09:50
  • Did you account for the effects of the cache in your test? Slow code can run faster, and fast code can run slower, simply because of where it is located and/or where it sits relative to the test data, etc. – old_timer Jun 29 '12 at 13:45
  • dwelch, the test code is a short loop and the data is accessed linearly. The code will be cached in L1i (or even in the u-op cache on SNB), and data accesses will be hardware-prefetched after several iterations of the loop. – osgx Jun 29 '12 at 14:33
  • This compiler optimization is called "loop rotation", and it really helps performance. – Zinovy Nis Oct 12 '14 at 16:58

2 Answers


Sorry, I was not able to reproduce your code exactly with GCC on Linux, but I have some results, and I think the main idea is preserved in my code.

There is a tool from Intel for analysing the performance of code fragments: http://software.intel.com/en-us/articles/intel-architecture-code-analyzer/ (Intel IACA). It is free to download and try.
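To analyse a fragment, you wrap it in the marker macros from iacaMarks.h (a header that ships with the tool) and then run the iaca analyzer on the compiled object file. A minimal sketch of the idea (the scan function here is just an illustration, not the exact code I measured; see the IACA documentation for the command-line flags):

#include "iacaMarks.h" // ships with IACA; defines IACA_START / IACA_END

// The markers emit magic byte sequences that tell the analyzer which
// instructions to examine; for a throughput analysis it models the
// marked region as if it were executed over and over in a loop.
const char* scan( const char* p, const char* limit, const char isAlpha[256] )
{
    IACA_START
    while ( p != limit && isAlpha[(unsigned char)*p] )
        ++p;
    IACA_END
    return p;
}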

In my experiment, the report for the slow loop is:

Intel(R) Architecture Code Analyzer Version - 2.0.1
Analyzed File - ./l2_i
Binary Format - 32Bit
Architecture  - SNB
Analysis Type - Throughput

Throughput Analysis Report
--------------------------
Block Throughput: 3.05 Cycles       Throughput Bottleneck: Port5

Port Binding In Cycles Per Iteration:
-------------------------------------------------------------------------
|  Port  |  0   -  DV  |  1   |  2   -  D   |  3   -  D   |  4   |  5   |
-------------------------------------------------------------------------
| Cycles | 0.5    0.0  | 0.5  | 1.0    1.0  | 1.0    1.0  | 0.0  | 3.0  |
-------------------------------------------------------------------------

N - port number or number of cycles resource conflict caused delay, DV - Divide
D - Data fetch pipe (on ports 2 and 3), CP - on a critical path
F - Macro Fusion with the previous instruction occurred

| Num Of |              Ports pressure in cycles               |    |
|  Uops  |  0  - DV  |  1  |  2  -  D  |  3  -  D  |  4  |  5  |    |
---------------------------------------------------------------------
|   1    |           |     |           |           |     | 1.0 | CP | cmp edi,
|   0F   |           |     |           |           |     |     |    | jz 0xb
|   1    |           |     | 1.0   1.0 |           |     |     |    | movzx ebx
|   2    |           |     |           | 1.0   1.0 |     | 1.0 | CP | cmp cl, b
|   0F   |           |     |           |           |     |     |    | jz 0x3
|   1    | 0.5       | 0.5 |           |           |     |     |    | inc edi
|   1    |           |     |           |           |     | 1.0 | CP | jmp 0xfff

For the fast loop:

Throughput Analysis Report
--------------------------
Block Throughput: 2.00 Cycles       Throughput Bottleneck: Port5

Port Binding In Cycles Per Iteration:
-------------------------------------------------------------------------
|  Port  |  0   -  DV  |  1   |  2   -  D   |  3   -  D   |  4   |  5   |
-------------------------------------------------------------------------
| Cycles | 0.5    0.0  | 0.5  | 1.0    1.0  | 1.0    1.0  | 0.0  | 2.0  |
-------------------------------------------------------------------------

N - port number or number of cycles resource conflict caused delay, DV - Divide
D - Data fetch pipe (on ports 2 and 3), CP - on a critical path
F - Macro Fusion with the previous instruction occurred

| Num Of |              Ports pressure in cycles               |    |
|  Uops  |  0  - DV  |  1  |  2  -  D  |  3  -  D  |  4  |  5  |    |
---------------------------------------------------------------------
|   1    | 0.5       | 0.5 |           |           |     |     |    | inc edi
|   1    |           |     |           |           |     | 1.0 | CP | cmp edi,
|   0F   |           |     |           |           |     |     |    | jz 0x8
|   1    |           |     | 1.0   1.0 |           |     |     |    | movzx ebx
|   2    |           |     |           | 1.0   1.0 |     | 1.0 | CP | cmp cl, b
|   0F   |           |     |           |           |     |     |    | jnz 0xfff

So in the slow loop, the JMP is an extra instruction on the critical path. All cmp+jz/jnz pairs are merged (macro-fusion) into a single u-op each. And in my version of the code the critical resource is Port5, which can execute both ALU ops and jumps (and it is the only port capable of executing jumps).
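As a sanity check, this agrees with the measured numbers: 3 Port5 u-ops per iteration versus 2 predicts a 3:2 = 1.5x difference in the best case, and the measured ratio of 8455/5694 ≈ 1.48 is very close to that.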

PS: If you have no idea where these execution ports are located, take a look at a Sandy Bridge pipeline block diagram, and at the RealWorldTech article on the microarchitecture.

PPS: IACA has some limitations: it models only one part of the CPU (the execution units), and it doesn't account for cache misses, branch mispredictions, various penalties, frequency/power changes, OS interrupts, HyperThreading contention for the execution units, and many other effects. But it is a useful tool, because it can give you a quick look inside the innermost core of a modern Intel CPU. And it only works for inner loops (just like the loops in this question).

osgx
  • So due to how the instructions can be scheduled as micro-ops, the slow loop takes 3.05 cycles per iteration and the fast loop takes 2. That is why there's such a large difference between their execution times even though there's only one additional instruction in the slow loop. Is that right? – briangreenery Jun 29 '12 at 19:15
  • IACA is a simulator, so it can't be fully exact. The `Block Throughput:` is computed for an idealized case (e.g. an infinite loop with no cache misses) and for a model of the CPU (which is not fully exact). The tool estimates that in the best case the bottleneck will be in execution unit #5 (Port5), and that the minimal time for a single iteration is 3 or 2 clock cycles. It can be computed by anybody who knows how instructions translate to micro-ops and which hardware the JE/JNE/JMP instructions require. – osgx Jun 29 '12 at 19:25
  • This additional instruction makes the critical path longer, so it affects even the best case. Thank you for the interesting question! – osgx Oct 13 '14 at 19:28

Your test text causes the loop to run to completion for each and every iteration, and the fast loop has one fewer instruction.
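Counting the loop bodies above per character scanned: the slow loop executes cmp, je, movzx, cmp, je, inc, jmp (seven instructions), while the fast loop executes inc, cmp, je, movzx, cmp, jne (six).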

Phil