I have been reading *Elements of Programming Interviews* and am struggling to understand the passage below:
"The algorithm taught in grade-school for decimal multiplication does not use repeated addition; it uses shift-and-add to achieve a much better time complexity. We can do the same with binary numbers: to multiply x and y we initialize the result to 0 and iterate through the bits of x, adding (2^k)y to the result if the kth bit of x is 1.
The value (2^k)y can be computed by left-shifting y by k. Since we cannot use add directly, we must implement it. We can apply the grade-school algorithm for addition to the binary case, i.e., computing the sum bit-by-bit and "rippling" the carry along.
As an example, we show how to multiply 13 = (1101) and 9 = (1001) using the algorithm described above. In the first iteration, since the LSB of 13 is 1, we set the result to (1001). The second bit of (1101) is 0, so we move on to the third bit. The bit is 1, so we shift (1001) to the left by 2 to obtain (100100), which we add to (1001) to get (101101). The fourth and final bit of (1101) is 1, so we shift (1001) to the left by 3 to obtain (1001000), which we add to (101101) to get (1110101) = 117."
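For anyone else puzzling over this passage, here is a minimal Python sketch of the idea as I understand it (this is my own illustration, not the book's code; the function names `bitwise_add` and `multiply` are made up). The inner `bitwise_add` is the "ripple the carry" addition, and `multiply` is the shift-and-add loop over the bits of x:

```python
def bitwise_add(a, b):
    # Grade-school binary addition: XOR gives the sum bits without carry,
    # AND gives the carry bits, which are shifted left and "rippled"
    # back in until no carry remains.
    while b:
        carry = a & b
        a = a ^ b
        b = carry << 1
    return a

def multiply(x, y):
    # Shift-and-add: for each set bit k of x, add (2^k) * y,
    # i.e. y left-shifted by k, into the result.
    result = 0
    k = 0
    while x:
        if x & 1:                              # kth bit of x is 1
            result = bitwise_add(result, y << k)
        x >>= 1
        k = bitwise_add(k, 1)                  # k += 1 without using +
    return result
```

Tracing `multiply(13, 9)` reproduces the book's example: bits 0, 2, and 3 of 13 are set, so the result accumulates 9, then 36 (9 << 2), then 72 (9 << 3), giving 117.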
My Questions are:
- What is the overall idea behind this, and how is the addition "bit-by-bit"?
- Where does (2^k)y come from?
- What does "left-shifting y by k" mean?
- In the example, why do we set the result to (1001) just because the LSB of 13 is 1?