Problem 31.1-12 of the CLRS algorithms book asks the following question:
Give an efficient algorithm to convert a given β-bit (binary) integer to a decimal representation. Argue that if multiplication or division of integers whose length is at most β takes time M(β), then binary-to-decimal conversion can be performed in time Θ(M(β) lg β). (Hint: Use a divide-and-conquer approach, obtaining the top and bottom halves of the result with separate recursions.)
It asks for time Θ(M(β) lg β). How is that even possible for a divide-and-conquer algorithm, given that lg β alone is already the height of the recursion tree? Does anyone know what the intended solution is?
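For concreteness, here is a minimal sketch (in Python) of what I understand the hint to be describing; it is my reading, not the book's reference solution. The function names (`to_decimal`, `binary_to_decimal`), the base-case cutoff of 9 digits, and the crude digit-count bound are my own choices, and I am glossing over how the powers of ten would be computed within the stated time bound.

    # Sketch of the divide-and-conquer conversion suggested by the hint
    # (my own reading; names and constants are illustrative choices).

    def to_decimal(n: int, digits: int) -> str:
        """Decimal string of n, zero-padded to `digits` digits (0 <= n < 10**digits)."""
        if digits <= 9:
            # Base case: small enough to format directly.
            return format(n, f"0{digits}d")
        half = digits // 2
        # One division of the big operand; a real implementation would obtain
        # 10**half by repeated squaring (and ideally cache the powers it needs).
        hi, lo = divmod(n, 10 ** half)
        # Top and bottom halves of the result come from separate recursions.
        return to_decimal(hi, digits - half) + to_decimal(lo, half)

    def binary_to_decimal(n: int) -> str:
        if n == 0:
            return "0"
        beta = n.bit_length()
        # Crude upper bound on the number of decimal digits of a beta-bit number.
        digits = beta // 3 + 1
        return to_decimal(n, digits).lstrip("0")

    print(binary_to_decimal(2 ** 100))  # 1267650600228229401496703205376

With this structure, each call appears to do one division of its β-bit operand, so the cost recurrence looks like T(β) = 2T(β/2) + Θ(M(β)), plus whatever it costs to produce the needed powers of ten. Is that the analysis the exercise has in mind, or is the intended solution different?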