Questions tagged [branch-prediction]

In computer architecture, a branch predictor is a digital circuit that tries to guess which way a branch (e.g. an if-then-else structure) will go before this is known for sure. The purpose of the branch predictor is to improve the flow in the instruction pipeline. Branch predictors play a critical role in achieving high effective performance in many modern pipelined microprocessor architectures such as x86.

Why is it faster to process a sorted array than an unsorted array? Stack Overflow's highest-voted question and answer is a good introduction to the subject.



Two-way branching is usually implemented with a conditional jump instruction. A conditional jump can either be "not taken", continuing execution with the first branch of code that follows immediately after the conditional jump, or be "taken", jumping to a different place in program memory where the second branch of code is stored.

It is not known for certain whether a conditional jump will be taken or not taken until the condition has been calculated and the conditional jump has passed the execution stage in the instruction pipeline.

Without branch prediction, the processor would have to wait until the conditional jump instruction has passed the execute stage before the next instruction can enter the fetch stage in the pipeline. The branch predictor attempts to avoid this waste of time by trying to guess whether the conditional jump is most likely to be taken or not taken. The branch that is guessed to be the most likely is then fetched and speculatively executed. If it is later detected that the guess was wrong then the speculatively executed or partially executed instructions are discarded and the pipeline starts over with the correct branch, incurring a delay.

The time that is wasted in case of a branch misprediction is equal to the number of stages in the pipeline from the fetch stage to the execute stage. Modern microprocessors tend to have quite long pipelines so that the misprediction delay is between 10 and 20 clock cycles. The longer the pipeline the greater the need for a good branch predictor.

Source: http://en.wikipedia.org/wiki/Branch_predictor


The Spectre security vulnerability revolves around branch prediction.


Other resources

Special-purpose predictors: the Return Address Stack for call/ret. ret is effectively an indirect branch, setting program-counter = return address. This would be hard to predict on its own, but since calls are normally made with a dedicated call instruction, modern CPUs can match call/ret pairs with an internal predictor stack.

Computer architecture details about branch prediction / speculative execution, and its effects on pipelined CPUs

  • Why is it faster to process a sorted array than an unsorted array?
  • Branch prediction - Dan Luu's article on branch prediction, adapted from a talk. With diagrams. Good introduction to why it's needed, and some basic implementations used in early CPUs, building up to more complicated predictors. And at the end, a link to TAGE branch predictors used on modern Intel CPUs. (Too complicated for that article to explain, though!)
  • Slow jmp-instruction - even unconditional direct jumps (like x86's jmp) need to be predicted, to avoid stalls in the very first stage of the pipeline: fetching blocks of machine code from I-cache. After fetching one block, you need to know which block to fetch next, before (or at best in parallel with) decoding the block you just fetched. A large sequence of jmp next_instruction will overwhelm branch prediction and expose the cost of misprediction in this part of the pipeline. (Many high-end modern CPUs have a queue after fetch before decode, to hide bubbles, so some blocks of non-branchy code can allow the queue to refill.)
  • Branch target prediction in conjunction with branch prediction?
  • What branch misprediction does the Branch Target Buffer detect?

Cost of a branch miss


Modern TAGE predictors (in Intel CPUs, for example) can "learn" amazingly long patterns, because they index based on past branch history. (So the same branch can get different predictions depending on the path leading up to it, and a single branch can have its prediction data scattered over many bits of the branch-predictor tables.) This goes a long way toward solving the problem of indirect branches in an interpreter almost always mispredicting (X86 prefetching optimizations: "computed goto" threaded code and Branch prediction and the performance of interpreters — Don't trust folklore), and it explains why, for example, a binary search over the same data with the same input can run with very few mispredictions.

Static branch prediction on newer Intel processors - according to experimental evidence, Nehalem and earlier do sometimes use static prediction at some point in the pipeline (backward branches default to predicted-taken, forward branches to not-taken). But Sandy Bridge and newer appear to be always dynamic, based on some history, whether from this branch or one that aliases it. Why did Intel change the static branch prediction mechanism over these years?

Cases where TAGE does "amazingly" well


Assembly code layout: this matters not so much for branch prediction as because not-taken branches are easier on the front-end than taken branches. You get better I-cache code density if the fast path is a straight line, and with a taken branch, the part of the fetch block after the branch isn't useful.

Superscalar CPUs fetch code in blocks, e.g. aligned 16 byte blocks, containing multiple instructions. In non-branching code, including not-taken conditional branches, all of those bytes are useful instruction bytes.


Branchless code: using cmov or other tricks to avoid branches

This is the asm equivalent of replacing if (c) a = b; with a = c ? b : a;. If b doesn't have side effects, and a isn't a potentially shared memory location, compilers can do "if-conversion", implementing the conditional with a data dependency on c instead of a control dependency.

(C compilers can't invent a non-atomic read/write the abstract machine doesn't perform: that could step on another thread's modification of the variable. Writing your code so it always stores a value tells the compiler the write is safe, which sometimes enables auto-vectorization: AVX-512 and Branching)

Potential downside to cmov in scalar code: the data dependency can become part of a loop-carried dependency chain and become a bottleneck, while branch prediction + speculative execution hide the latency of control dependencies. The branchless data dependency isn't predicted or speculated, which makes it good for unpredictable cases, but potentially bad otherwise.

363 questions
0
votes
1 answer

Branch predictor based on probability

Given some assembly code, it is known that 90% of branches should be taken. I have no knowledge regarding branch conditions, and the decision to take or not take each branch should be made based only on probability. The branch's offset can be…
YAKOVM
  • 9,805
  • 31
  • 116
  • 217
0
votes
1 answer

Inline assembly with "jmp 0f" or "b 0f" at the beginning

updated Changed the 2nd line of assembly to the mnemonic actually being used (mflr) and added more info at the bottom. I ran across some code (using gcc) resembling the following (paraphrased): #define SOME_MACRO( someVar ) \ do { …
Brian Vandenberg
  • 4,011
  • 2
  • 37
  • 53
0
votes
1 answer

Reporting profile-guided compilation to the source code

In this question I will focus on Visual Studio 2012 and GCC 4.7 On the one hand, profile-guided compilation optimizes branch prediction by instrumenting the code at run-time, and then using this information during a second compilation. On the other…
qdii
  • 12,505
  • 10
  • 59
  • 116
0
votes
1 answer

Can branch predictors predict perfectly when the number of loop iterations is not constant?

Would the following code incur a branch misprediction penalty on let say an Intel Core i7? for(i = 0, count = *ptr; i < count; i++) { // do something } count can be 0, 1, or 2.
cleong
  • 7,242
  • 4
  • 31
  • 40
0
votes
2 answers

gcc branch prediction

Here's my demo program: #include #include #include int cmp(const void *d1, const void *d2) { int a, b; a = *(int const *) d1; b = *(int const *) d2; if (a > b) return 1; else if (a == b) …
user963720
0
votes
1 answer

Branch Prediction - Global Share Implementation Explanation

I'm working on an assignment in my Computer Architecture class where we have to implement a branch prediction algorithm in C++ (for the Alpha 21264 microprocessor architecture). There is a solution provided as an example. This solution is an…
errant
  • 11
  • 1
  • 3
-1
votes
2 answers

Is 2-bit prediction always better than 1-bit?

Is 2-bit prediction always better than 1-bit? And from Wikipedia, how is 'a loop-closing conditional jump mispredicted once rather than twice' with 2-bit prediction? According to this answer, 2-bit prediction will have 1 misprediction if with…
zg c
  • 113
  • 1
  • 1
  • 7
-1
votes
1 answer

High Performance Bit Removal (XOR vs. subtract)

It is my understanding that XOR messes with branch prediction. Is it preferable to remove bits via subtraction or via xor for operations that will run a great many times? // For an operation that will run several million times ... int encoding =…
B. Nadolson
  • 2,988
  • 2
  • 20
  • 27
-1
votes
1 answer

How to understand macro `likely` affecting branch prediction?

I noticed that if we know there is a good chance the control flow will be true or false, we can tell the compiler; for instance, in the Linux kernel there are lots of likely/unlikely, actually implemented via __builtin_expect provided by gcc, so I want to find out how…
http8086
  • 1,306
  • 16
  • 37
-1
votes
1 answer

Is branch prediction still significantly speeding up array processing?

I was reading an interesting post about why it is faster to process a sorted array than an unsorted array, and saw a comment made by @mp31415 that said: Just for the record. On Windows / VS2017 / i7-6700K 4GHz there is NO difference between two…
Guillaume D
  • 2,202
  • 2
  • 10
  • 37
-1
votes
2 answers

How do you iterate simultaneously over two arrays that are not equally spaced, in an optimized way?

Let's say I have to multiply two arrays such as A[MAX_BUFFER] and B[MAX_BUFFER] (with MAX_BUFFER = 256). For some reason, the B[MAX_BUFFER] values are calculated at a fixed control rate (8, for example), since each value would be heavily processed.…
markzzz
  • 47,390
  • 120
  • 299
  • 507
-1
votes
3 answers

how would you optimize this function?

#include #include #include int cp[1000000][3]; int p[1000000][3];//assume this array to be populated void main(){ srand(time(NULL)); for(n; n < 1000000; n++){ if (rand()%2) memcpy(cp[n], p[n], 12); …
Andreas
  • 177
  • 1
  • 8
-1
votes
2 answers

Efficient way of checking property from a large set of data inside a loop

Please, consider this generic piece of code: for j = 0 to (Array.length myArray) - 1 do if property.(id) then (* do a bunch of stuff*) done Here, property is a very large array of boolean. In this experiment, we have 2 cases: in the…
-1
votes
2 answers

Cache miss penalty on branching

I wonder whether it is faster to replace branching with 2 multiplications or not (due to the cache miss penalty)? Here is my case: float dot = rib1.x*-dir.y + rib1.y*dir.x; if(dot<0){ dir.x = -dir.x; dir.y = -dir.y; } And I'm trying to replace it…
tower120
  • 5,007
  • 6
  • 40
  • 88
-2
votes
1 answer

Is branch prediction purely cpu behavior, or will the compiler give some hints?

In go standard package src/sync/once.go, a recent revision change the snippets if atomic.LoadUint32(&o.done) == 1 { return } //otherwise ... to: //if atomic.LoadUint32(&o.done) == 1 { // return // } if atomic.LoadUint32(&o.done)…
agnes
  • 11
  • 1