What is the disparity between bus throughput and CPU throughput? How does this adversely impact sequential computing? How does this adversely impact parallel computing?
If your CPU can access its cache in 1 ns but a random main-memory read takes 60 ns, then at some point your processor is going to read memory roughly 60× slower than the cache. If you are processing a lot of data, you may see a tremendous slowdown, even for sequential programs.
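The slowdown can be quantified with the standard average memory access time (AMAT) formula; here is a minimal sketch using the 1 ns and 60 ns figures above (the miss rates are illustrative, not from the answer):

```python
# Average memory access time (AMAT), using the latencies from the answer:
# 1 ns for a cache hit, 60 ns penalty for a main-memory access on a miss.
def amat(hit_time_ns, miss_penalty_ns, miss_rate):
    """Effective time per memory access given a cache miss rate."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Even a modest miss rate dominates the effective access time.
for miss_rate in (0.0, 0.05, 0.5, 1.0):
    print(f"miss rate {miss_rate:4.0%}: {amat(1, 60, miss_rate):5.1f} ns per access")
```

With a 5% miss rate the average access already costs 4 ns, four times the cache-hit time; at 100% misses it approaches the full 60 ns penalty.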
If you have multiple CPUs, they collectively place a higher bandwidth demand on the bus. Imagine a serial-access bus with 64 CPUs all trying to read from it: only one can succeed at any given moment. The consequence is that it is hard to get a speedup of 64 in such a system, unless each processor stays entirely within its own cache.
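The serialization effect can be sketched as a simple bound: if each CPU needs the shared bus for some fraction of its time, a serial bus can serve at most the reciprocal of that fraction, no matter how many CPUs are attached. A rough model (the bus fractions are assumed values for illustration):

```python
def max_speedup(n_cpus, bus_fraction):
    """Upper bound on parallel speedup when one shared bus serializes
    memory traffic. If each CPU needs the bus bus_fraction of the time,
    the bus saturates at 1/bus_fraction CPUs' worth of work."""
    if bus_fraction == 0:
        return n_cpus  # everything stays in cache: no bus contention
    return min(n_cpus, 1.0 / bus_fraction)

print(max_speedup(64, 0.0))   # all in cache: full speedup of 64
print(max_speedup(64, 0.5))   # bus needed half the time: capped at 2
```

Even a CPU that touches the bus only 10% of the time caps the system at a speedup of 10, far below the 64 processors available.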

Ira Baxter
- Is the "60x slow rate" occurring only when the specific "memory word" is not in the cache and it has to be delivered to the cache? – user2871354 Dec 02 '13 at 22:46
- Yes. The point is that if your processor is doing enough work, the cache cannot contain all the data, therefore it must eventually go outside the cache and you have to take the hit. – Ira Baxter Dec 02 '13 at 23:26
- ... there are some algorithms which read memory sequentially (or in known strides), and in that case the CPU/memory system can organize to deliver data sequentially or in strides before it is requested; this can help keep the bandwidth up. Most algorithms aren't so lucky, and you still need enough buses to feed 64 CPUs, if you have 64. – Ira Baxter Dec 02 '13 at 23:37
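The prefetching point in the last comment can also be put in numbers: with a predictable (sequential or strided) access pattern, the memory system streams data ahead of the CPU, so the per-word cost approaches the transfer time rather than the full round-trip latency. A rough sketch with assumed figures (60 ns latency as in the answer; the 8 bytes/ns streaming bandwidth is a hypothetical value):

```python
LATENCY_NS = 60         # random-access latency, from the answer
BANDWIDTH_B_PER_NS = 8  # assumed streaming bandwidth (hypothetical)
WORD_BYTES = 8

def ns_per_word(sequential):
    """Random access pays the full latency on every word; sequential
    access is pipelined by prefetching, so each word costs only its
    transfer time once the stream is started."""
    if sequential:
        return WORD_BYTES / BANDWIDTH_B_PER_NS
    return LATENCY_NS

print(ns_per_word(sequential=False))  # latency-bound: 60 ns per word
print(ns_per_word(sequential=True))   # bandwidth-bound: 1 ns per word
```

Under these assumptions a streaming traversal is 60× cheaper per word than a random one, which is why cache-oblivious random access hurts so much more.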