
I solved this LeetCode question: https://leetcode.com/problems/divide-two-integers/. The goal is to compute the quotient of dividend divided by divisor without using a multiplication or division operator. Here is my solution:

    def divide(dividend, divisor):
        """
        :type dividend: int
        :type divisor: int
        :rtype: int
        """
        # The result is negative iff exactly one operand is negative.
        sign = [1, -1][(dividend < 0) != (divisor < 0)]
        dividend, divisor = abs(dividend), abs(divisor)
        res = 0
        i = 0
        Q = divisor  # Q is always divisor << i
        while dividend >= divisor:
            dividend = dividend - Q
            Q <<= 1
            res += (1 << i)
            i += 1
            if dividend < Q:
                # The doubled step would overshoot: restart from divisor.
                Q = divisor
                i = 0

        if sign == -1:
            res = -res

        # Clamp to the 32-bit signed range required by the problem.
        if res < -2**31 or res > 2**31 - 1:
            return 2**31 - 1

        return res

So I am having trouble analyzing the time complexity of this solution. I know it should be O(log(something)). Usually we say an algorithm is O(log(n)) when the input gets divided by 2 at each iteration, but here I multiply the divisor by 2 at each iteration (`Q <<= 1`), so at each step I take a bigger step towards the solution. Obviously, for the same dividend, a bigger divisor makes my algorithm faster; similarly, for the same divisor, a bigger dividend gives a slower run time.

My guess is that the equation governing the runtime of this algorithm is basically of the form O(dividend/divisor) (which is, well, division), with some logs in there to account for the divisor doubling at each step (`Q <<= 1`), but I can't figure out what exactly.
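
To get a feel for how the iteration count depends on the inputs, here is a small instrumented copy of the solution above; the `steps` counter, the helper name `divide_counted`, and the sample inputs are mine, added only for measurement:

    def divide_counted(dividend, divisor):
        # Same loop as the solution above, plus an iteration counter.
        sign = [1, -1][(dividend < 0) != (divisor < 0)]
        dividend, divisor = abs(dividend), abs(divisor)
        res, i, Q = 0, 0, divisor
        steps = 0
        while dividend >= divisor:
            steps += 1
            dividend = dividend - Q
            Q <<= 1
            res += (1 << i)
            i += 1
            if dividend < Q:
                Q = divisor
                i = 0
        return (-res if sign == -1 else res), steps

    # Same dividend, bigger divisor -> fewer iterations;
    # bigger dividend, same divisor -> more iterations.
    for a, b in [(10**6, 3), (10**6, 3000), (10**9, 3)]:
        print(a, b, divide_counted(a, b))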

EDIT:

When I first posted the question, the algorithm I had posted is the one below; Alain Merigot's answer is based on that version. The difference between the version at the top and this one is that in the version at the top my dividend never goes below 0, which results in a faster run time.

    def divide(dividend, divisor):
        """
        :type dividend: int
        :type divisor: int
        :rtype: int
        """
        sign = [1, -1][(dividend < 0) != (divisor < 0)]
        dividend, divisor = abs(dividend), abs(divisor)
        res = 0
        i = 0
        tmp_divisor = divisor  # tmp_divisor is always divisor << i
        while dividend >= divisor:
            old_dividend, old_res = dividend, res
            dividend = dividend - tmp_divisor
            tmp_divisor <<= 1
            res += (1 << i)
            i += 1
            if dividend < 0:
                # Overshot below zero: undo this step and halve the step size.
                dividend = old_dividend
                res = old_res
                tmp_divisor >>= 2
                i -= 2

        if sign == -1:
            res = -res

        if res < -2**31 or res > 2**31 - 1:
            return 2**31 - 1

        return res
d_darric
  • We consider the complexity based on the number of bits of the operands, and we know that every iteration produces a bit of the result. The worst case is when divisor=1 and dividend is 2^31+k. In that case there are 32 (= n) iterations and the complexity is linear. – Alain Merigot May 22 '19 at 21:13
  • That actually makes a lot of sense; would you mind expanding on the idea so I can grasp it? Thanks – d_darric May 22 '19 at 21:17

2 Answers


Worst case complexity is easy to find.

Every iteration generates a bit of the result, and the number of iterations is equal to the number of bits in the quotient.

When divisor=1, quotient=dividend, and in that case the number of iterations is equal to the number of bits in dividend after the leading (most significant) 1. It is maximized when dividend=2^(n-1)+k, where n is the number of bits and k is any number such that 1≤k<2^(n-1). This is obviously the worst case.

After the first iteration, dividend=dividend-tmp_divisor (=dividend-1) and tmp_divisor=2^1.

After iteration m, tmp_divisor=2^m and dividend=dividend-(1+2^1+...+2^(m-1))=dividend-(2^m-1).

Iterations stop when dividend becomes negative. As dividend=2^(n-1)+k, with k>0, this happens for m=n.

Hence, the number of steps in the worst case is n, and the complexity is linear in the number of bits of the dividend.
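
For anyone who wants to check this numerically, here is a small sketch (the helper name, the iteration counter, and the choice k = 1 are mine) that runs the first-posted version with divisor = 1 and dividend = 2^(n-1) + k and reports how many loop iterations it takes:

    def count_iterations_v1(dividend, divisor):
        # First-posted version of the algorithm, counting loop iterations only.
        res, i, tmp_divisor, steps = 0, 0, divisor, 0
        while dividend >= divisor:
            steps += 1
            old_dividend, old_res = dividend, res
            dividend = dividend - tmp_divisor
            tmp_divisor <<= 1
            res += (1 << i)
            i += 1
            if dividend < 0:
                dividend, res = old_dividend, old_res
                tmp_divisor >>= 2
                i -= 2
        return steps

    # divisor = 1 and dividend = 2^(n-1) + k, with k = 1 chosen arbitrarily.
    for n in (8, 16, 31):
        print(n, count_iterations_v1((1 << (n - 1)) + 1, 1))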

Alain Merigot
  • "Every iteration generates a bit of the result," - This isn't true, because the value of `i` can get reset to 0 and it will go back to update the previous bits. – interjay May 22 '19 at 22:37
  • @interjay Right, but actually, the question has been edited and the algorithm modified. My answer was relative to the first version of the algorithm where `i` was never reset to 0. – Alain Merigot May 23 '19 at 00:03
  • The value of `i` was also sometimes reduced in the original version, so my point holds. I do think that the original version may have been linear, but it's more difficult to prove that than your answer indicates because it isn't true that "the number of iterations is equal to the number of bits in the quotient". – interjay May 23 '19 at 00:48
  • Sorry I should have mentioned it in the edit... I am gonna fix it now so both versions are there. I fixed my algorithm after realising going below 0 was unnecessary. – d_darric May 23 '19 at 07:53
  • I just read and understood your analysis @AlainMerigot, and I have to agree with @interjay. In your reasoning you assume one pass per bit when you say "iterations stop when dividend < 0". But in reality, at that point I decrement i by 2 and might end up going over the same bit of `res` many times; each time, though, the most significant bit is never touched again, which intuitively sounds like O(n^2) with n the number of bits in the dividend if divisor = 1 (just like your example), or O(n^2) with n the number of bits in the result more generally. – d_darric May 23 '19 at 22:35

Your algorithm is O(m^2) in the worst case, where m is the number of bits in the result. In terms of the inputs, that is O(log(dividend/divisor)^2).

To see why, consider what your loop does. Let a = dividend and b = divisor. The loop subtracts b, 2b, 4b, 8b, ... from a for as long as a is big enough, then repeats this sequence again and again until a < b.

It can be equivalently written as two nested loops:

    while dividend >= divisor:
        Q = divisor
        i = 0
        while Q <= dividend:
            dividend = dividend - Q
            Q <<= 1
            res += (1 << i)
            i += 1

For each iteration of the outer loop, the inner loop performs fewer iterations because dividend is smaller. In the worst case, the inner loop does only one iteration less for each iteration of the outer loop. This happens when the result is 1+3+7+15+...+(2^n-1) for some n. In this case, it can be shown that n = O(log(result)), but the total number of inner loop iterations is O(n^2), i.e. quadratic in the size of the result.
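
To see this numerically, one can build such a worst-case input and count how many times the inner loop body runs; this is only a rough sketch, and the helper name and the choice of divisor = 1 are mine:

    def total_inner_iterations(dividend, divisor):
        # Nested-loop form of the algorithm from above, counting only how
        # many times the inner loop body executes.
        res, total = 0, 0
        while dividend >= divisor:
            Q, i = divisor, 0
            while Q <= dividend:
                dividend = dividend - Q
                Q <<= 1
                res += (1 << i)
                i += 1
                total += 1
        return total

    # Worst case: divisor = 1 and quotient = 1 + 3 + 7 + ... + (2^n - 1).
    for n in (4, 8, 16):
        worst = sum((1 << k) - 1 for k in range(1, n + 1))
        print(n, total_inner_iterations(worst, 1))  # grows like n*(n+1)/2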

To improve this to be linear in the size of the result, first calculate the largest needed values of Q and i. Then work backwards from that, subtracting 1 from i and shifting Q right each iteration. This guarantees no more than 2n iterations total.
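
Here is one possible way to write that two-phase approach; this is my reading of the suggestion rather than a definitive implementation, and the name `divide_linear` is made up:

    def divide_linear(dividend, divisor):
        sign = [1, -1][(dividend < 0) != (divisor < 0)]
        dividend, divisor = abs(dividend), abs(divisor)

        # Phase 1: grow Q = divisor << i until it exceeds the dividend.
        Q, i = divisor, 0
        while Q <= dividend:
            Q <<= 1
            i += 1

        # Phase 2: shift Q back down; each multiple is subtracted at most once.
        res = 0
        while i > 0:
            Q >>= 1
            i -= 1
            if dividend >= Q:
                dividend -= Q
                res += (1 << i)

        if sign == -1:
            res = -res
        if res < -2**31 or res > 2**31 - 1:
            return 2**31 - 1
        return res

Phase 1 does at most one iteration per bit of the quotient (plus one), and phase 2 repeats that count on the way back down, so the total stays within the 2n bound mentioned above.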

interjay
  • Spot-on analysis, thanks... I like the sound of what you suggest; it sounds really smart: basically n iterations to find the max values of Q and i, and then another n on the way down... I'll try to implement it later, thanks again! – d_darric May 23 '19 at 22:43