What is the complexity of converting a very large n-bit number to a decimal representation?
My thought is that the elementary algorithm of repeated integer division, taking the remainder to get each digit, would have $O(M(n)\log n)$ complexity, where $M(n)$ is the complexity of the multiplication algorithm. However, each division is not between two $n$-bit numbers but between one $n$-bit number and a small constant (the base, 10), so it seems to me the complexity could be smaller.
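For concreteness, here is a minimal sketch of the repeated-division algorithm I mean, written in Python (using its built-in arbitrary-precision integers; the function name `to_decimal_digits` is just for illustration):

```python
def to_decimal_digits(x: int) -> str:
    """Convert a non-negative integer to its decimal string by repeated
    division by 10, taking the remainder as the next digit (least
    significant first). Each constant-divisor division of an n-bit
    number costs O(n) with schoolbook division, and there are Theta(n)
    decimal digits, so this naive loop is quadratic in the bit length."""
    if x == 0:
        return "0"
    digits = []
    while x > 0:
        x, r = divmod(x, 10)              # r is the next decimal digit
        digits.append(chr(ord('0') + r))
    return ''.join(reversed(digits))      # digits were produced in reverse order

# Sanity check against Python's built-in conversion on a ~3300-bit number
n = 1 << 3321
assert to_decimal_digits(n) == str(n)
```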