Questions tagged [branch-prediction]


Why is it faster to process a sorted array than an unsorted array? is Stack Overflow's highest-voted question, and its answers are a good introduction to the subject.


In computer architecture, a branch predictor is a digital circuit that tries to guess which way a branch (e.g. an if-then-else structure) will go before this is known for sure. The purpose of the branch predictor is to improve the flow in the instruction pipeline.

Branch predictors play a critical role in achieving high effective performance in many modern pipelined microprocessor architectures such as x86.

Two-way branching is usually implemented with a conditional jump instruction. A conditional jump can either be "not taken", continuing execution with the first branch of code that follows immediately after the conditional jump, or be "taken", jumping to a different place in program memory where the second branch of code is stored.

It is not known for certain whether a conditional jump will be taken or not taken until the condition has been calculated and the conditional jump has passed the execution stage in the instruction pipeline.

Without branch prediction, the processor would have to wait until the conditional jump instruction has passed the execute stage before the next instruction can enter the fetch stage in the pipeline. The branch predictor attempts to avoid this waste of time by trying to guess whether the conditional jump is most likely to be taken or not taken. The branch that is guessed to be the most likely is then fetched and speculatively executed. If it is later detected that the guess was wrong then the speculatively executed or partially executed instructions are discarded and the pipeline starts over with the correct branch, incurring a delay.

The time that is wasted in case of a branch misprediction is equal to the number of stages in the pipeline from the fetch stage to the execute stage. Modern microprocessors tend to have quite long pipelines, so the misprediction delay is typically between 10 and 20 clock cycles. The longer the pipeline, the greater the need for a good branch predictor.

Source: http://en.wikipedia.org/wiki/Branch_predictor


The Spectre security vulnerability revolves around branch prediction: an attacker mistrains the predictor so the CPU speculatively executes attacker-chosen code paths whose side effects (e.g. cache state) leak data.


Other resources

Special-purpose predictors: the Return Address Stack for call/ret. ret is effectively an indirect branch, setting the program counter to the return address. This would be hard to predict on its own, but calls are normally made with a dedicated call instruction, so modern CPUs can match call/ret pairs using an internal predictor stack.

Computer architecture details about branch prediction / speculative execution, and its effects on pipelined CPUs

  • Why is it faster to process a sorted array than an unsorted array?
  • Branch prediction - Dan Luu's article on branch prediction, adapted from a talk. With diagrams. Good introduction to why it's needed, and some basic implementations used in early CPUs, building up to more complicated predictors. And at the end, a link to TAGE branch predictors used on modern Intel CPUs. (Too complicated for that article to explain, though!)
  • Slow jmp-instruction - even unconditional direct jumps (like x86's jmp) need to be predicted, to avoid stalls in the very first stage of the pipeline: fetching blocks of machine code from I-cache. After fetching one block, you need to know which block to fetch next, before (or at best in parallel with) decoding the block you just fetched. A large sequence of jmp next_instruction will overwhelm branch prediction and expose the cost of misprediction in this part of the pipeline. (Many high-end modern CPUs have a queue after fetch before decode, to hide bubbles, so some blocks of non-branchy code can allow the queue to refill.)
  • Branch target prediction in conjunction with branch prediction?
  • What branch misprediction does the Branch Target Buffer detect?

Cost of a branch miss


Modern TAGE predictors (in Intel CPUs, for example) can "learn" amazingly long patterns, because they index based on past branch history. (So the same branch can get different predictions depending on the path leading up to it; a single branch can have its prediction data scattered over many bits in the branch-predictor table.) This goes a long way toward solving the problem of indirect branches in an interpreter almost always mispredicting (X86 prefetching optimizations: "computed goto" threaded code and Branch prediction and the performance of interpreters — Don't trust folklore). It also means that, for example, a binary search over the same data with the same input can be really efficient.

Static branch prediction on newer Intel processors - according to experimental evidence, it appears Nehalem and earlier do sometimes use static prediction at some point in the pipeline (backward branches default to predicted-taken, forward branches to not-taken). But Sandybridge and newer seem to always predict dynamically, based on some history, whether from this branch or from one that aliases it. Why did Intel change the static branch prediction mechanism over these years?

Cases where TAGE does "amazingly" well


Assembly code layout: not so much for branch prediction, but because not-taken branches are easier on the front-end than taken branches. Better I-cache code density if the fast-path is just a straight line, and taken branches mean the part of a fetch block after the branch isn't useful.

Superscalar CPUs fetch code in blocks, e.g. aligned 16 byte blocks, containing multiple instructions. In non-branching code, including not-taken conditional branches, all of those bytes are useful instruction bytes.


Branchless code: using cmov or other tricks to avoid branches

This is the asm equivalent of replacing if (c) a=b; with a = c ? b : a;. If b doesn't have side-effects, and a isn't a potentially-shared memory location, compilers can do "if-conversion" to do the conditional with a data dependency on c instead of a control dependency.

(C compilers can't introduce a non-atomic read/write that the source didn't perform: it could step on another thread's modification of the variable. Writing the code so it always stores a value tells the compiler the store is safe, which sometimes enables auto-vectorization: AVX-512 and Branching.)

Potential downside to cmov in scalar code: the data dependency can become part of a loop-carried dependency chain and become a bottleneck, while branch prediction + speculative execution hide the latency of control dependencies. The branchless data dependency isn't predicted or speculated, which makes it good for unpredictable cases, but potentially bad otherwise.

363 questions
3
votes
1 answer

Rust generic parameters and compile time if

Using C++ template and if constexpr I found a trick that I like a lot: suppose you have a function with some tunable option that are known compile-time, I can write something like template void my_func() { ... …
MaPo
  • 613
  • 4
  • 9
3
votes
2 answers

Is there automatic L1i cache prefetching on x86?

I looked at the wiki article on branch target predictor; it's somewhat confusing: I thought the branch target predictor comes into play when a CPU decides which instruction(s) to fetch next (into the CPU pipeline to execute). But the article…
deshalder
  • 507
  • 2
  • 13
3
votes
1 answer

In CUDA kernels, __assume() or __builtin_assume()?

CUDA offers the kernel author two functions, __builtin_assume() and __assume(). Their signatures are the same: void __builtin_assume(bool exp); void __assume(bool exp); and so is their one-line documentation. Are they the same? Is one of them…
einpoklum
  • 118,144
  • 57
  • 340
  • 684
3
votes
1 answer

Can I improve branch prediction with my code?

This is a naive general question open to any platform, language, or compiler. Though I am most curious about Aarch64, C++, GCC. When coding an unavoidable branch in program flow dependent on I/O state (compiler cannot predict), and I know that one…
3
votes
1 answer

BR/RET timing discrepancy when returning from contrived subroutine to a modified return address

In my adventures of experimenting around with the 64-bit ARM architecture, I noticed a peculiar speed difference depending on whether br or ret is used to return from a subroutine. ; Contrived for learning/experimenting purposes only, without any…
Mona the Monad
  • 2,265
  • 3
  • 19
  • 30
3
votes
0 answers

Why do unconditional jumps take up BTB space?

https://blog.cloudflare.com/branch-predictor/ contains an excellent analysis of the performance of branches on modern hardware. One thing that surprised me was the finding that unconditional jumps take up space in the branch target buffer.…
rwallace
  • 31,405
  • 40
  • 123
  • 242
3
votes
0 answers

Can branch prediction be optimised when branching on constant data?

I'm targeting Skylake hardware and compiling using Clang. Say we have some code structured like this: int always_runs(int i, int acc); // ~30 cycles int sometimes_runs(int i, int acc); // ~20 cycles int foo(const std::array
Sam
  • 410
  • 2
  • 10
3
votes
1 answer

How do you convert a boolean condition to an integer type in Java without a branching or jump in the compiled byte-code and JITed machine-code

As in the example given here for C/C++: ... This is due to a new technique described in "BlockQuicksort: How Branch Mispredictions don't affect Quicksort" by Stefan Edelkamp and Armin Weiss. In short, we bypass the branch predictor by using small…
3
votes
0 answers

c++20 Likely and Unlikely performance optimization

I read about the attributes likely and unlikely of c++20, and i want to ask if there are some reasonable and official data of performance advantages that this new attributes give to the execution. I mean there are examples execution test that give…
Zig Razor
  • 3,381
  • 2
  • 15
  • 35
3
votes
0 answers

Changing irrelevant part of the function changes papi measurement of branch prediction

I am playing with the codes that I found online and I want to try different branch prediction codes to have a better understanding of branch predictors. CPU is AMD Ryzen 3600. Basically, what I am doing is in the code below, I am trying to measure a…
user12527223
3
votes
3 answers

Branch on null vs null object performance

Which is most efficient: using a null object, or a branch on nullptr. Example in C++: void (*callback)() = [](){}; // Could be a class member void doDoStuff() { // Some code callback(); // Always OK. Defaults to nop // More code …
user877329
  • 6,717
  • 8
  • 46
  • 88
3
votes
1 answer

Understanding branch prediction efficiency

I tried to measure branch prediction cost, I created a little program. It creates a little buffer on stack, fills with random 0/1. I can set the size of the buffer with N. The code repeatedly causes branches for the same 1<
geza
  • 28,403
  • 6
  • 61
  • 135
3
votes
1 answer

When will dynamic branch prediction be useful?

For static branch prediction one always assume that the branch is not taken, while for dynamic branch prediction if the branch is taken before then it is more likely to be taken again. But I cannot come up with a situation that this is useful? What…
Kindred
  • 1,229
  • 14
  • 41
3
votes
1 answer

How does branch prediction interact with the instruction pointer

It's my understanding that at the beginning of a processor's pipeline, the instruction pointer (which points to the address of the next instruction to execute) is updated by the branch predictor after fetching, so that this new address can then be…
1110101001
  • 4,662
  • 7
  • 26
  • 48
3
votes
2 answers

What does it mean to "train" a branch predictor?

I was reading this article about a theoretical CPU vulnerability similar to Spectre, and it noted that: "The attacker needs to train the branch predictor such that it reliably mispredicts the branch." I roughly understand what branch prediction…