Several performance parameters are described in the book Advanced Computer Architecture by Hwang, e.g., Speedup, Efficiency, Redundancy, Utilization, and Quality, as shown in the picture below.

[Figure: definitions of Speedup, Efficiency, Redundancy, Utilization, and Quality]
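In text form (writing T(n) for the execution time and O(n) for the total number of unit operations when running on n processors), the definitions are:

$$S(n) = \frac{T(1)}{T(n)}, \qquad E(n) = \frac{S(n)}{n}, \qquad R(n) = \frac{O(n)}{O(1)},$$
$$U(n) = R(n)\,E(n), \qquad Q(n) = \frac{S(n)\,E(n)}{R(n)}.$$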

I understand all of them, but I only partially understand the last parameter, quality. The question is: why does quality have an inverse relationship with redundancy? As stated, redundancy shows the match between software parallelism and the hardware. For example, one processor runs one unit operation, therefore O(1) = 1.

With O(n) we are actually increasing the number of processors, so the number of unit operations increases, and the upper bound is n. So that is good, isn't it?

But according to the quality metric, if we increase the redundancy, the quality decreases. Maybe some of the definitions are ambiguous here.

Any thoughts?

mahmood

1 Answer


I think you misunderstood O(n), which is the total number of unit operations performed by all processors over the whole execution. It has no upper bound, and O(1) is not necessarily 1 (and realistically is not); it depends on the application. If we have to perform more operations to parallelize the application on n processors than to execute it on a uniprocessor (i.e., the redundancy is larger than 1), then that is a bad thing, and the larger the redundancy, the greater the mismatch between software and hardware parallelism. Ideally, the redundancy is 1. Therefore, if we want to combine speedup, efficiency, and redundancy into a single quality metric, speedup and efficiency should be in the numerator and redundancy should be in the denominator.
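As a rough illustration (a minimal sketch with made-up numbers; `hwang_metrics` and all values are hypothetical, not from the book), here is how the metrics combine. Holding the execution times fixed to isolate the redundancy term, a run that performs 50% more total unit operations has 1.5x the redundancy and a proportionally lower quality:

```python
def hwang_metrics(n, t1, tn, o1, on):
    """Compute Hwang's metrics from execution times T(1), T(n)
    and total unit-operation counts O(1), O(n)."""
    s = t1 / tn      # speedup     S(n) = T(1) / T(n)
    e = s / n        # efficiency  E(n) = S(n) / n
    r = on / o1      # redundancy  R(n) = O(n) / O(1)
    u = r * e        # utilization U(n) = R(n) * E(n)
    q = s * e / r    # quality     Q(n) = S(n) * E(n) / R(n)
    return s, e, r, u, q

# Two hypothetical 4-processor runs with identical times but different
# total operation counts (times pinned to isolate the redundancy term):
for on in (100, 150):
    s, e, r, u, q = hwang_metrics(n=4, t1=100, tn=40, o1=100, on=on)
    print(f"R(4) = {r:.2f}  ->  Q(4) = {q:.3f}")
# R(4) = 1.00  ->  Q(4) = 1.562
# R(4) = 1.50  ->  Q(4) = 1.042
```

In reality, extra operations would also lengthen T(n); the times are pinned here only to make the R(n)-to-Q(n) relationship visible.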

It's worth noting that most of the upper and lower bounds on these metrics given in the book are too simplistic for modern processors. They make the most sense on simple (scalar) processors; on a modern multicore system, they don't accurately capture what may happen in reality. For example, the speedup on a 2-processor system can be lower than 1 or higher than 2, and the redundancy depends not only on the ISA but also on the compiler, so it can be smaller than 1 or larger than 2. That said, the metrics themselves can still be very useful.

Hadi Brais
  • Do you mean that the goal is to increase the number of unit operations while lowering the redundancy? That is a paradox then, because the larger the number of unit operations, the larger the redundancy. – mahmood Oct 22 '18 at 12:03
  • @mahmood If I understand your comment correctly, you mean that we can reduce redundancy by increasing O(1), right? But that's not how it works. The programmer has no direct control over O(1) and O(n). These are supposed to be calculated using the same compiler and some standard compiler optimization options for both. Note also that both O(1) and O(n) count dynamic instructions over all processors, not static instructions. We would like to make O(n) as small as possible compared to O(1) because generally, when a program is parallelized, the number of dynamic instructions has to increase. – Hadi Brais Oct 22 '18 at 12:54
  • The increase is due to overheads such as thread creation and synchronization; it's a matter of by how much. – Hadi Brais Oct 22 '18 at 12:57