Why do computer architects pay more attention to the accuracy of a branch predictor in a dynamic out-of-order processor than in a simple 5-stage in-order processor?
- The potential performance loss from a branch misprediction in a 5-stage scalar in-order design is only 1 or 2 instructions (assuming branch resolution in the 2nd or 3rd stage and no delay slot); a dual-issue design of the same pipeline depth would potentially lose 2 or 4 instructions, and an OoO design, which can continue processing past data dependencies, would lose even more potential instruction execution. OoO also implies a deeper pipeline, which further increases the execution potential lost to a misprediction (see the sketch after these comments for the arithmetic). Simplistically, improving performance shifts bottlenecks. – Jan 25 '15 at 21:22
- I really liked your answer; why not post it as an answer so that I can accept it? – Libathos Jan 26 '15 at 07:23
- The question appears to be off-topic (not about programming). Giving a quick "answer" (an actual answer would be substantially longer) in a comment seems less offensive than answering an off-topic question (though I have done a lot of that). I think this question would be on-topic at [SuperUser](http://superuser.com), [Electrical Engineering](http://electronics.stackexchange.com), or [Computer Science](http://cs.stackexchange.com), so you might consider flagging for moderator attention to have it migrated to one of those SE sites, perhaps asking on the relevant Meta sites first to find the most appropriate one. – Jan 26 '15 at 12:21
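
To make the arithmetic in the first comment concrete, here is a minimal back-of-the-envelope sketch. The issue widths, resolution depths, and the CPI parameters at the end are illustrative assumptions, not figures from the comments; real penalties depend on the microarchitecture.

```python
# Back-of-the-envelope sketch of how many issue slots one branch
# misprediction wastes in different pipeline designs. All parameters
# are illustrative assumptions.

def lost_issue_slots(issue_width, resolve_depth):
    """Wrong-path issue slots: the front end keeps feeding issue_width
    instructions per cycle until the branch resolves resolve_depth
    cycles after fetch."""
    return issue_width * resolve_depth

# (issue width, cycles from fetch to branch resolution)
designs = {
    "5-stage scalar in-order (resolve in stage 3)":     (1, 2),
    "5-stage dual-issue in-order":                      (2, 2),
    "deep 4-wide OoO (hypothetical 14-cycle resolve)":  (4, 14),
}

for name, (width, depth) in designs.items():
    print(f"{name}: up to {lost_issue_slots(width, depth)} issue slots lost")

# The same penalty feeds the classic effective-CPI formula:
#   CPI_eff = CPI_base + branches_per_instr * mispredict_rate * penalty
base_cpi, branch_freq, penalty = 0.25, 0.2, 14  # assumed values, OoO case
for rate in (0.10, 0.05, 0.01):
    cpi_eff = base_cpi + branch_freq * rate * penalty
    print(f"mispredict rate {rate:4.0%}: CPI_eff = {cpi_eff:.3f}")
```

Under these assumed numbers, dropping the misprediction rate from 10% to 1% cuts the OoO design's effective CPI from 0.53 to 0.278, while the scalar in-order design with its 2-slot penalty barely notices the same change: this asymmetry is why predictor accuracy matters so much more for wide, deep OoO machines.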