
Problem 31.1-12 of the CLRS algorithms book asks the following question:

Give an efficient algorithm to convert a given β-bit (binary) integer to a decimal representation. Argue that if multiplication or division of integers whose length is at most β takes time M(β), then binary-to-decimal conversion can be performed in time Θ(M(β) lg β). (Hint: Use a divide-and-conquer approach, obtaining the top and bottom halves of the result with separate recursions.)

It asks for time Θ(M(β) lg β). How is that even possible for a divide-and-conquer algorithm, given that lg β alone is already the height of the recursion tree? Does anyone know what the intended solution is?

user782220

1 Answer


For the hint to work, M must grow at least linearly; in particular, we need 2·M(β/2) ≤ M(β), which holds for the standard multiplication algorithms.

If that is given, there is an obvious solution: recursively split the value into two parts, convert the parts separately, and concatenate the results. At level k of the recursion there are 2ᵏ parts, each about β/2ᵏ bits long, so about β bits in total. The processing at level k therefore costs 2ᵏ·M(β/2ᵏ) ≤ M(β), and since there are about lg β levels, the total time is O(M(β)·lg β).
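
Spelled out as a recurrence (a sketch: the constant c absorbs the linear-time additions, shifts, and copying, and the roughly β constant-size base cases contribute only O(β) ⊆ O(M(β))):

$$T(\beta) \;\le\; 2\,T(\beta/2) + c\,M(\beta)
\;\Longrightarrow\;
T(\beta) \;\le\; \sum_{k=0}^{\lg\beta - 1} 2^{k}\,c\,M\!\left(\beta/2^{k}\right) + O(\beta)
\;\le\; c\,M(\beta)\,\lg\beta + O(\beta)
\;=\; O\!\big(M(\beta)\,\lg\beta\big),$$

where the second inequality repeatedly applies 2·M(β/2) ≤ M(β).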

To split a β-bit value u into its two parts (v, w), choose d to be about half the number of decimal digits of u, i.e. pick d so that 2·d or 2·d+1 equals ⌊β·ln 2/ln 10⌋; then v = ⌊u/10ᵈ⌋ gives the top half of the digits and w = u − v·10ᵈ the bottom half (written out with leading zeros up to d digits).
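
As a concrete illustration, here is a minimal Python sketch of that recursion (the function name `to_decimal` and the base-case cutoff are my own choices, not from CLRS or the answer above):

```python
from math import log10

def to_decimal(u):
    """Divide-and-conquer binary-to-decimal conversion (sketch).

    u is a non-negative int.  Small values use the built-in conversion;
    larger values are split around 10**d, where d is roughly half the
    number of decimal digits of u, and both halves are converted
    recursively.
    """
    if u < 10**9:
        return str(u)                    # base case: constant-size conversion
    beta = u.bit_length()
    # Roughly half the decimal digit count of a beta-bit number.
    d = max(1, int(beta * log10(2)) // 2)
    # One division of ~beta-bit operands: cost O(M(beta)).  A careful
    # implementation would precompute the needed powers of 10 by repeated
    # squaring instead of recomputing 10**d in every call.
    v, w = divmod(u, 10 ** d)
    if v == 0:                           # defensive; cannot happen, since d is
        return str(u)                    # smaller than the digit count of u
    # Bottom half must be zero-padded to exactly d digits.
    return to_decimal(v) + to_decimal(w).rjust(d, "0")

assert to_decimal(2**200 + 12345) == str(2**200 + 12345)
```

The single divmod per call is the M(β)-cost step; the string padding and concatenation are linear in the output length, so the total work matches the O(M(β)·lg β) bound argued above.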

James Waldby - jwpat7
  • I think the book was actually suggesting treating M as a constant, so that we need to make the conversion with lg β multiplications. This answer still makes 2ᵏ ~ O(β) multiplications, so it probably isn't what the book is looking for. Additions are assumed to be much cheaper, so that the cost of the multiplications dominates. – xdavidliu Dec 22 '17 at 22:51