Questions tagged [branch-prediction]

In computer architecture, a branch predictor is a digital circuit that tries to guess which way a branch (e.g. an if-then-else structure) will go before this is known for sure. The purpose of the branch predictor is to improve the flow in the instruction pipeline. Branch predictors play a critical role in achieving high effective performance in many modern pipelined microprocessor architectures such as x86.

Why is it faster to process a sorted array than an unsorted array? Stack Overflow's highest-voted question and answer is a good introduction to the subject.


Two-way branching is usually implemented with a conditional jump instruction. A conditional jump can either be "not taken" and continue execution with the first branch of code which follows immediately after the conditional jump - or it can be "taken" and jump to a different place in program memory where the second branch of code is stored.

It is not known for certain whether a conditional jump will be taken or not taken until the condition has been calculated and the conditional jump has passed the execution stage in the instruction pipeline.

Without branch prediction, the processor would have to wait until the conditional jump instruction has passed the execute stage before the next instruction can enter the fetch stage in the pipeline. The branch predictor attempts to avoid this waste of time by trying to guess whether the conditional jump is most likely to be taken or not taken. The branch that is guessed to be the most likely is then fetched and speculatively executed. If it is later detected that the guess was wrong then the speculatively executed or partially executed instructions are discarded and the pipeline starts over with the correct branch, incurring a delay.

The time that is wasted in case of a branch misprediction is equal to the number of stages in the pipeline from the fetch stage to the execute stage. Modern microprocessors tend to have quite long pipelines so that the misprediction delay is between 10 and 20 clock cycles. The longer the pipeline the greater the need for a good branch predictor.

Source: http://en.wikipedia.org/wiki/Branch_predictor
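
To make this concrete, here is a minimal sketch in the spirit of the sorted-array question linked above (hypothetical threshold and sizes; note that at higher optimization levels a compiler may if-convert or auto-vectorize the inner branch away, which removes the effect):

```cpp
#include <algorithm>
#include <cstdint>
#include <random>
#include <vector>

// The `if` compiles (when not if-converted) to a compare plus a
// conditional jump: "not taken" falls through into the add, "taken"
// skips it. On random data the jump goes either way ~50% of the time,
// so the predictor misses often (roughly 10-20 cycles per miss); on
// sorted data the outcome flips only once, so it predicts almost
// perfectly.
std::uint64_t sum_over_threshold(const std::vector<int>& data) {
    std::uint64_t sum = 0;
    for (int x : data) {
        if (x >= 128)
            sum += x;
    }
    return sum;
}

int main() {
    std::mt19937 gen(42);
    std::uniform_int_distribution<int> dist(0, 255);
    std::vector<int> data(1 << 20);
    for (int& x : data) x = dist(gen);

    std::uint64_t unsorted_sum = sum_over_threshold(data);   // hard-to-predict branch
    std::sort(data.begin(), data.end());
    std::uint64_t sorted_sum = sum_over_threshold(data);     // near-perfectly predicted

    return unsorted_sum == sorted_sum ? 0 : 1;               // same result, different speed
}
```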


The Spectre security vulnerability revolves around branch prediction: attacker-controlled training of the predictor makes the CPU speculatively execute code past a mispredicted branch, and the side effects of that speculation can leak data through cache timing.


Other resources

Special-purpose predictors: the Return Address Stack for call/ret. ret is effectively an indirect branch, setting program-counter = return address. This would be hard to predict on its own, but calls are normally made with a dedicated call instruction, so modern CPUs can match call/ret pairs with an internal predictor stack of return addresses.
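
A rough illustration, assuming a typical return-stack capacity on the order of 16-32 entries (a ballpark, not from this text), and noting that a compiler may turn this particular recursion into a loop:

```cpp
#include <cstddef>
#include <cstdint>

// Each call pushes its return address onto the CPU's internal
// return-address predictor stack and the matching ret pops it, so the
// return target is predicted almost perfectly -- as long as the call
// depth stays within the predictor stack's capacity (often on the order
// of 16-32 entries; a ballpark, not a spec). Recursion much deeper than
// that makes the outermost returns mispredict.
std::uint64_t sum_recursive(const std::uint64_t* a, std::size_t n) {
    if (n == 0)
        return 0;
    return a[0] + sum_recursive(a + 1, n - 1);
}
```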

Computer architecture details about branch prediction / speculative execution, and its effects on pipelined CPUs

  • Why is it faster to process a sorted array than an unsorted array?
  • Branch prediction - Dan Luu's article on branch prediction, adapted from a talk. With diagrams. A good introduction to why it's needed and to some basic implementations used in early CPUs, building up to more complicated predictors. At the end, a link to the TAGE branch predictors used on modern Intel CPUs. (Too complicated for that article to explain, though!) A toy software model of one of the basic schemes, the two-bit saturating counter, is sketched just after this list.
  • Slow jmp-instruction - even unconditional direct jumps (like x86's jmp) need to be predicted, to avoid stalls in the very first stage of the pipeline: fetching blocks of machine code from I-cache. After fetching one block, you need to know which block to fetch next, before (or at best in parallel with) decoding the block you just fetched. A large sequence of jmp next_instruction will overwhelm branch prediction and expose the cost of misprediction in this part of the pipeline. (Many high-end modern CPUs have a queue after fetch before decode, to hide bubbles, so some blocks of non-branchy code can allow the queue to refill.)
  • Branch target prediction in conjunction with branch prediction?
  • What branch misprediction does the Branch Target Buffer detect?
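
As promised above, a toy software model of a two-bit saturating counter, the building block of many simple dynamic predictors (a sketch for intuition only; real hardware keeps a table of such counters indexed by branch address and/or recent history):

```cpp
#include <cstdio>

// Toy 2-bit saturating counter: states 0-1 predict "not taken",
// states 2-3 predict "taken". Each actual outcome nudges the counter
// one step, so a single anomaly in a long run doesn't flip the
// prediction.
struct TwoBitCounter {
    unsigned state = 2;                            // start "weakly taken"
    bool predict() const { return state >= 2; }
    void update(bool taken) {
        if (taken) { if (state < 3) ++state; }
        else       { if (state > 0) --state; }
    }
};

int main() {
    TwoBitCounter bp;
    // One loop branch: taken 7 times, then not-taken once at loop exit.
    const bool pattern[] = {1, 1, 1, 1, 1, 1, 1, 0};
    int mispredicts = 0, total = 0;
    for (int rep = 0; rep < 1000; ++rep) {
        for (bool taken : pattern) {
            if (bp.predict() != taken) ++mispredicts;
            bp.update(taken);
            ++total;
        }
    }
    // Expect ~1 miss per 8 branches (the loop exit), i.e. about 12.5%.
    std::printf("%d mispredicts out of %d branches\n", mispredicts, total);
}
```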

Cost of a branch miss


Modern TAGE predictors (in Intel CPUs, for example) can "learn" amazingly long patterns, because they index based on past branch history. (So the same branch can get different predictions depending on the path leading up to it, and a single branch can have its prediction data scattered over many bits in the branch-predictor table.) This goes a long way toward solving the problem of indirect branches in an interpreter almost always mispredicting (X86 prefetching optimizations: "computed goto" threaded code and Branch prediction and the performance of interpreters — Don't trust folklore), and it also means that, for example, repeating a binary search on the same data with the same input can be really efficient.
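
A minimal sketch of the interpreter case, using a hypothetical 4-opcode bytecode: the dispatch is one indirect jump through the switch's jump table, which simple per-address predictors handled poorly but which a history-based predictor can learn for a repeating instruction sequence.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical 4-opcode bytecode. The switch below compiles to a single
// indirect jump through a jump table; with a simple predictor that one
// jump mispredicts on almost every opcode change, while a history-based
// (TAGE-like) predictor can learn the target sequence of a repeating
// bytecode program.
enum Op : std::uint8_t { ADD, SUB, MUL, HALT };

std::int64_t run(const std::vector<Op>& code) {
    std::int64_t acc = 1;
    for (std::size_t pc = 0; pc < code.size(); ++pc) {
        switch (code[pc]) {
            case ADD:  acc += 2; break;
            case SUB:  acc -= 1; break;
            case MUL:  acc *= 3; break;
            case HALT: return acc;
        }
    }
    return acc;
}

int main() {
    std::vector<Op> program = {ADD, MUL, SUB, ADD, HALT};
    return static_cast<int>(run(program) & 0xff);
}
```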

Static branch prediction on newer Intel processors - according to experimental evidence, it appears that Nehalem and earlier do sometimes use static prediction at some point in the pipeline (backward branches default to predicted-taken, forward branches to not-taken). But Sandy Bridge and newer seem to predict dynamically in all cases, based on some history, whether it's the history of this branch or of one that aliases it. See also: Why did Intel change the static branch prediction mechanism over these years?

Cases where TAGE does "amazingly" well


Assembly code layout: this matters not so much for branch prediction as for the front end, because not-taken branches are easier on it than taken branches. I-cache code density is better if the fast path is just a straight line, and a taken branch means the part of a fetch block after the branch isn't useful.

Superscalar CPUs fetch code in blocks, e.g. aligned 16-byte blocks containing multiple instructions. In non-branching code, including not-taken conditional branches, all of those bytes are useful instruction bytes.
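
From C++, one way to nudge the layout is the standard C++20 [[likely]]/[[unlikely]] attributes (a sketch; these are only hints and the compiler is free to ignore them):

```cpp
#include <cstdio>
#include <cstdlib>

// Marking the error path [[unlikely]] (standard since C++20) encourages
// the compiler to lay the hot path out as straight-line, fall-through
// code and to move the cold block out of line, which keeps I-cache
// fetch blocks full of useful instructions.
int parse_digit(char c) {
    if (c < '0' || c > '9') [[unlikely]] {
        std::fprintf(stderr, "bad digit: '%c'\n", c);
        std::exit(1);
    }
    return c - '0';   // fast path: no taken branch on the way here
}

int main() {
    return parse_digit('7');   // returns 7
}
```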


Branchless code: using cmov or other tricks to avoid branches

This is the asm equivalent of replacing if (c) a=b; with a = c ? b : a;. If b doesn't have side effects, and a isn't a potentially shared memory location, compilers can do "if-conversion": implementing the conditional with a data dependency on c instead of a control dependency.

(C compilers can't introduce a non-atomic write that isn't in the source: it could step on another thread's modification of the variable. Writing your code so it always stores a value tells the compiler that the unconditional store is safe, which sometimes enables auto-vectorization: AVX-512 and Branching.)
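
A small sketch of the source-level pattern (whether the compiler actually emits cmov depends on the compiler, optimization level, and target):

```cpp
// Branchy form: whether 'a' is written depends on a control dependency.
int select_branchy(bool c, int a, int b) {
    if (c) a = b;
    return a;
}

// If-converted form: the result is written unconditionally, so the
// compiler is free to compute both inputs and pick one with cmov -- a
// data dependency on 'c' instead of a branch.
int select_branchless(bool c, int a, int b) {
    return c ? b : a;        // often compiles to: test/cmp + cmov
}
```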

Potential downside to cmov in scalar code: the data dependency can become part of a loop-carried dependency chain and turn into a bottleneck, whereas branch prediction + speculative execution hide the latency of control dependencies. The branchless data dependency isn't predicted or speculated, which makes it good for unpredictable cases but potentially bad otherwise.
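
A classic place where this trade-off shows up is binary search. Below is a sketch of a branchless lower_bound: each probe's address depends on the previous comparison, so the loads form a serial latency chain; a branchy version predicts each comparison and can issue the next load speculatively, which wins when the branches predict well and loses badly when they don't.

```cpp
#include <cstddef>

// Branchless lower_bound sketch: the next probe pointer is selected by a
// data dependency on the previous load's comparison, so the loads form a
// serial chain of cache/memory latencies. A branchy version predicts the
// comparison instead, letting the next load start speculatively -- faster
// when the predictor is right, much slower when it keeps missing.
std::size_t lower_bound_branchless(const int* a, std::size_t n, int key) {
    if (n == 0) return 0;
    const int* base = a;
    while (n > 1) {
        std::size_t half = n / 2;
        // compilers frequently (but not always) turn this select into cmov
        base = (base[half] < key) ? base + half : base;
        n -= half;
    }
    return static_cast<std::size_t>(base - a) + (*base < key);
}
```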

363 questions
2 votes, 4 answers

Intel: serializing instructions and branch prediction

The Intel Architecture's Developer's Manual (Vol3A, Section 8-26), says: The Pentium processor and more recent processor families use branch-prediction techniques to improve performance by prefetching the destination of a branch instruction…
Justicle
2 votes, 0 answers

Perf branch misses on non conditional instructions

I want to understand branch prediction behavior of a program I work on. For this, I use the perf tool. I recorded with: perf record -e branches,branch-misses and visualizing it with perf report --hierarchy -M intel I get results, but I don't…
2 votes, 1 answer

constexpr function params for not known at compile time booleans C++

I need to run a function with N boolean variables, I want to make them constexpr in order to exterminate comparisons and save the code from branch prediction failure. What I mean is: templateFunc(args...); as the b1..bn…
2 votes, 0 answers

The pipeline execution diagram with or without a delay slot, with predict-taken

I am working on a problem in the topic of The processors. This problem is in the book whose title is "Computer Organization and Design (6th Edition)". The problem is as follows: Clearly, this problem is about the branch-taken branch predictor, and…
2 votes, 0 answers

Why does a high percentage of identical input data reduce performance?

I have some code which I have been working on and in order to optimise it I have been trying to understand the compiler's optimisation process by testing how different types of input data affect its performance. A simplified version of my code is as…
Amelia
2 votes, 3 answers

What is faster in C++: mod (%) or another counter?

At the risk of this being a duplicate, maybe I just can't find a similar post right now: I am writing in C++ (C++20 to be specific). I have a loop with a counter that counts up every turn. Let's call it counter. And if this counter reaches a…
Jere
2 votes, 1 answer

Is it possible to "skip" the branch predictor when you know the path ahead of time?

Let's say my code is the following. It's a silly nonsense example but the point is there's at least 2 cycles of work before getting to the branch. Maybe more, since the multiply depends on previous values. Is there any chance of this taking the…
2 votes, 0 answers

How does trace-inputs provided to trace-driven simulators look like?

Simulators used to study computer architecture performance are broadly categorized as execution-driven and trace-driven. They work in the following fashion. Trace Driven Simulator: A real machine is used to execute a benchmark program/software in…
2 votes, 3 answers

What role do branch mispredictions play in hash table lookup performance?

A typical hash table lookup algorithm - including one of the ones claiming to be the fastest in the world - is structured something a little bit like this. while (true) { if (currentSlot.isEmpty) return null; if (currentSlot.key == key) return…
Sam
2 votes, 2 answers

Could branch prediction optimization be inherited?

Does it make sense to implement your own branch-prediction optimization in your own VM interpreter, or is it enough to run the VM on hardware that already supports branch prediction?
k06a
2 votes, 0 answers

Is it possible to mitigate branch misprediction penalty by giving earlier hint?

TLDR: I want to give runtime branch prediction hints for x86-64, ideally when compiled by MSVC without asm, for a branch that is based on random data, by peeking into that data. Is it possible? Assume sequentially interpreting a byte stream, where…
Alex Guteniev
2 votes, 1 answer

Pipeline Processor Design to handle both branch outcomes

So I have recently been studying about Pipeline processor architecture, mainly in the context of Y86-64. There, I have just read about Branch Prediction and how in case of a mispredicted branch, the Fetch, Decode and Execute Pipeline registers have…
2 votes, 1 answer

Performance penalty: denormalized numbers versus branch mis-predictions

For those that have already measured or have deep knowledge about this kind of considerations, assume that you have to do the following (just to pick any for the example) floating-point operator: float calc(float y, float z) { return sqrt(y * y + z…
ABu
2 votes, 0 answers

Branch Prediction for Try Catch

I recently read this very interesting and highly rated question about Branch prediction, and it got me thinking - how do try-catch clauses affect branch prediction (in java)? There's a lot of information out there regarding if/else, but none seem to…
Sam
2 votes, 4 answers

C# reinterpret bool as byte/int (branch-free)

Is it possible in C# to turn a bool into a byte or int (or any integral type, really) without branching? In other words, this is not good enough: var myInt = myBool ? 1 : 0; We might say we want to reinterpret a bool as the underlying byte,…
Timo