
In class, we were presented with an algorithm for computing 2^n mod(m).

    to find 2^n mod(m) {
      if n = 0 { return 1; }

      r = 2^(n-1) mod(m);            // recursive call
      if 2r < m  { return 2r; }      // doubling stayed below m
      if 2r >= m { return 2r - m; }  // one subtraction reduces it below m
    }
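
For concreteness, here is a direct Python translation of that pseudocode (a minimal sketch of my own; the name `pow2_mod` is invented, and very large n would exceed Python's default recursion limit):

    def pow2_mod(n, m):
        """2**n mod m via the doubling recursion above (illustrative sketch)."""
        if n == 0:
            return 1 % m               # "% m" only so that m = 1 is handled too
        r = pow2_mod(n - 1, m)         # r = 2^(n-1) mod m
        d = 2 * r                      # doubling = a one-bit shift of r
        return d if d < m else d - m   # at most one subtraction reduces it

    # e.g. pow2_mod(10, 1000) == 24, since 2**10 = 1024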

We were told that the runtime is O(n*size(m)), where size(m) is the number of bits in m.

I understand the n part, but I cannot explain the size(m) unless it is because of the subtraction involved. Can anyone shed some light on that?

Thanks in advance.

Chet
  • *What* are you doing n times? You're doing an exponentiation, a modulus, a comparison and perhaps a subtraction. So ... – David Schwartz Oct 13 '11 at 00:05
  • I believe `r=2^(n-1)mod(m);` is a recursive invocation of the same function – Slartibartfast Oct 13 '11 at 00:14
  • It's O(n) all right, since O(n) == O(n*some_k). The `size(m)` term comes in when we allow an arbitrary-size `m` while only having hardware for fixed-size arithmetic. – ruslik Oct 13 '11 at 00:16
  • Yes, this is recursive. You are recursing n times maximum. I can't figure out which operations (or perhaps all of them) account for the size(m) in complexity. – Chet Oct 13 '11 at 00:18

2 Answers


I believe this is used in cryptography (as a so-called noninvertible function).

If we need to compute (2**n) mod m recursively, this would be the most obvious way to do it. Since the depth of recursion is n, the O(n) complexity is obvious.

However, if we want to support an arbitrary-size m (512-bit keys are common in cryptography and are much larger than any arithmetic register), we should also account for the cost of each arithmetic step on numbers of that size; that is where the size(m) factor comes from. (In most cases we don't have to use arbitrary-precision arithmetic, so this term is usually 1.)

EDIT @Mysticial: The function does not call the hardware mod operation explicitly; all it does is a shift and a subtraction. The shift is O(1), while addition/subtraction on an arbitrary-size operand is O(ceil(size(m)/ALU_word_size)).
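
To make that last point concrete, here is a sketch of my own (not from the answer) of word-by-word subtraction on numbers stored as lists of 30-bit limbs in Python; the base 2**30 and the name `sub_limbs` are assumptions chosen just for illustration. The loop runs once per limb, which is the O(ceil(size(m)/word_size)) cost:

    # Hypothetical limb-based subtraction: numbers stored little-endian as
    # lists of 30-bit "limbs" (base 2**30). The loop runs once per limb,
    # i.e. O(ceil(size(m) / word_size)) steps -- the size(m) factor.
    BASE = 1 << 30

    def sub_limbs(a, b):
        """Return a - b (assuming a >= b), both little-endian limb lists."""
        result, borrow = [], 0
        for i in range(len(a)):                    # one pass over the limbs of a
            d = a[i] - (b[i] if i < len(b) else 0) - borrow
            borrow = 1 if d < 0 else 0
            result.append(d + BASE if d < 0 else d)
        while len(result) > 1 and result[-1] == 0: # strip leading zero limbs
            result.pop()
        return result

    # e.g. sub_limbs([0, 1024], [1]) == [2**30 - 1, 1023], i.e. 2**40 - 1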

ruslik
  • The problem here is that the computation time for arithmetic of size `m` isn't linear. It's `O(m^2)` for these small sizes, and roughly `O(m * log(m))` for extremely large `m`. – Mysticial Oct 13 '11 at 00:44
  • Yes, we are discussing RSA. I did not make the connection that that is why we were considering m. I imagine the size(m) term has some coefficient because of the comparisons, subtractions, etc that isn't included when notating O(n*size(m)). Thank you! – Chet Oct 13 '11 at 00:45
  • I was talking about bignum arithmetic. Arithmetic on 512-bit integers is considered bignum. So in that sense addition/subtraction is `O(m)` instead of `O(1)`, where `m` is the length of your key. – Mysticial Oct 13 '11 at 00:53

The n part is clear, as you have already understood yourself. The size(m) part (the number of digits in m, which is basically log(m)) is because of the mod. Even though your CPU can do a mod in a single instruction for machine-sized numbers, internally it still takes on the order of log(m) steps (say, 32 for 32-bit operands). If m is very large, as is common with encryption keys, this can become considerable.

Why the number of digits in m? Recall long division:

abcdefghijk | xyz
            |-----
alm         | nrvd...
 opq
  stu
   wabc
    .......

The number of times you do the subtraction is at most the number of digits in the dividend.
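
To see the same thing in code, here is a hypothetical shift-and-subtract mod in Python (the name `mod_by_subtraction` is mine, not from the answer): it lines the divisor up under the dividend one bit position at a time, so the number of compare/subtract steps is bounded by the number of binary digits of the dividend.

    # Hypothetical binary "schoolbook" mod: line the divisor up under the
    # dividend's leading bits, subtract where it fits, shift right, repeat.
    # The loop runs at most bit_length(a) times -- one step per binary digit
    # of the dividend, matching the argument above.
    def mod_by_subtraction(a, m):
        if m <= 0:
            raise ValueError("modulus must be positive")
        shift = max(a.bit_length() - m.bit_length(), 0)
        for s in range(shift, -1, -1):   # highest alignment down to 0
            if a >= (m << s):            # does the shifted divisor fit here?
                a -= m << s              # at most one subtraction per digit
        return a

    # e.g. mod_by_subtraction(1024, 1000) == 24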

Shahbaz