
This is the same problem as Check if one integer is an integer power of another, but I am wondering about the complexity of a method I came up with to solve it.

Given an integer n and another integer m, is n = m^p for some integer p? Note that ^ here denotes exponentiation, not xor.

There is a simple O(log_m n) solution based on dividing n repeatedly by m until it's 1 or until there's a non-zero remainder.
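For reference, a minimal Python sketch of that division approach (the function name is mine, and it assumes n >= 1 and m >= 2):

```python
def is_power_by_division(n, m):
    """Return True iff n == m**p for some integer p >= 0.

    Assumes n >= 1 and m >= 2; performs O(log_m(n)) divisions.
    """
    while n % m == 0:   # strip one factor of m per iteration
        n //= m
    return n == 1       # only a pure power of m reduces all the way to 1
```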

I'm thinking of a method inspired by binary search, and it's not clear to me how complexity should be calculated in this case.

Essentially you start with m, then you go to m^2, then m^4, m^8, m^16, ... When you find that m^{2^k} > n, you binary-search the range bounded by m^{2^{k-1}} and m^{2^k}. Is this solution O(log_2 (log_m(n)))?
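To make the idea concrete, here is a rough Python sketch (names and edge-case handling are placeholders, and each `m ** e` is counted as a single try, which is exactly the cost assumption I ask about below):

```python
def is_power_by_doubling(n, m):
    """Doubling + binary search sketch; assumes n >= 1 and m >= 2."""
    if n == 1:
        return True                  # n == m**0
    # Phase 1: double the exponent until m**hi overshoots n.
    lo, hi = 1, 2
    while m ** hi <= n:
        lo, hi = hi, 2 * hi          # m, m**2, m**4, m**8, ...
    if m ** lo == n:
        return True
    # Phase 2: binary-search the exponent in the open range (lo, hi).
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        v = m ** mid
        if v == n:
            return True
        lo, hi = (mid, hi) if v < n else (lo, mid)
    return False
```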

Somewhat related to this, if I do something like

m^2 * m^2

vs.

m * m * m * m

Do these two have the same complexity? If they do, then I think the algorithm I came up with is still O(log_m(n)).

roulette01
  • Yes, you are correct, the complexity of your algorithm is `log_2 (log_m (n))`. Also note that `log_m(n) = log_2(n) / log_2(m)`; thus `log_2 (log_m (n)) = log_2(log_2(n)) - log_2(log_2(m))`. – Stef Sep 30 '20 at 13:32
  • You can use the formula `p = log(n) / log(m)`; if `p` is close to a whole integer, you can verify the result by computing `m^p` with a fast power function, which runs in `O(log p)`. – olegarch Sep 30 '20 at 19:30

1 Answer


Not quite. First of all, let's assume that multiplication is O(1), and exponentiation a^b is O(log b) (using exponentiation by squaring).
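For reference, a short Python sketch of exponentiation by squaring under that unit-cost multiplication model (the helper name is mine):

```python
def fast_pow(m, p):
    """Compute m**p with O(log p) multiplications; assumes p >= 0."""
    result = 1
    while p > 0:
        if p & 1:        # low bit of the exponent is set
            result *= m
        m *= m           # square the base
        p >>= 1          # shift to the next bit of the exponent
    return result
```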

Now using your method of doubling the exponent p_candidate and then doing a binary search, you can find the real p in log(p) steps (or observe that p does not exist). But each try within the binary search requires you to compute m^p_candidate, and since p_candidate never exceeds roughly 2p, each such computation is O(log(p)) by assumption. So the overall time complexity is O(log^2(p)).

But we want to express the time complexity in terms of the inputs n and m. From the relationship n = m^p, we get p = log(n)/log(m), and hence log(p) = log(log(n)/log(m)). Hence the overall time complexity is

O(log^2(log(n)/log(m)))

If you want to get rid of the m, you can provide a looser upper bound by using

O(log^2(log(n)))

which is close to, but not quite O(log(log(n))). (Note that you can always omit the logarithmic bases in the O-notation since all logarithmic functions differ only by a constant factor.)

Now, the interesting question is: is this algorithm better than one that is O(log(n))? I haven't proved it, but I'm pretty certain that O(log^2(log(n))) is in O(log(n)) but not vice versa. Anyone care to prove it?
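One quick way to see why this should hold (a sketch, not a rigorous proof): substitute x = log n, so the claim reduces to log^2(x) ∈ o(x).

```latex
% With x = \log n, comparing \log^2(\log n) against \log n
% is the same as comparing \log^2 x against x:
\lim_{x\to\infty} \frac{\log^2 x}{x}
  = \lim_{x\to\infty} \left(\frac{\log x}{\sqrt{x}}\right)^{2}
  = 0,
% since \log x grows slower than any positive power of x,
% in particular x^{1/2}. Hence \log^2(\log n) \in o(\log n),
% and the containment is strict.
```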

Mo B.
  • Oops, yeah, I realized that after posting and forgot to edit the question. Do you agree that finding the bounds is O(log(log n))? Also, I put the bases on the logs here because they help visualize the cases. – roulette01 Sep 30 '20 at 19:40
  • You mean finding the upper bound for `p`? Yes, since each squaring (e.g. getting from `m^8` to `m^16`) is assumed to be `O(1)`, you only need `log(p)` tries, which is `log(log(n)/log(m))` which in turn is bounded by `log(log(n))`. – Mo B. Sep 30 '20 at 19:51
  • Yes. But I don't know if it's correct for us to make that assumption of `O(1)`. This is one of the things about complexity analysis that confuses me: when can we assume a standard mathematical operation can be performed in constant time? – roulette01 Sep 30 '20 at 19:53
  • These assumptions are always simplifications. It is a valid assumption if your integers fit into the word size of the CPU, so multiplication can be done in a bounded and small number of CPU cycles. If your integers are unbounded, the assumption is not valid anymore. – Mo B. Sep 30 '20 at 20:05