
Here is the pseudocode:

pow2(a, b, k)
    d := a, e := b, s := 1
    until e = 0
        if e is odd, s := s·d mod k
        d := d² mod k
        e := ⌊e/2⌋
    return s
end
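
A minimal runnable version of this pseudocode in Python might look like the sketch below (the name pow2 and the variables d, e, s follow the pseudocode; Python's built-in pow(a, b, k) computes the same result and can be used to check it):

def pow2(a, b, k):
    # Modular exponentiation by repeated squaring (square-and-multiply).
    d, e, s = a % k, b, 1
    while e != 0:            # "until e = 0"
        if e % 2 == 1:       # if e is odd
            s = (s * d) % k  # s := s·d mod k
        d = (d * d) % k      # d := d² mod k
        e = e // 2           # e := ⌊e/2⌋
    return s

print(pow2(3, 13, 7), pow(3, 13, 7))  # both print 3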
  • The number of times the loop runs is log₂ b, since this is the number of times e = b can be halved before it reaches 0 (the counting sketch after the comments below makes this concrete).
  • The input size of b is log₂ b (the number of bits needed to write b).
  • With all of the above information, my calculation for the time complexity comes out as O(log(log₂ b)), which when simplified is like O(2^b). I know I am incorrect, as this algorithm is meant to be efficient, so the time complexity should be better.
  • The actual time complexity of this algorithm is O(n) with respect to the input size of b.
  • Please explain how the time complexity O(n) can be determined from all of the information above.
  • Where does O(log(log(b))) come from? Why does that simplify to O(2^b)? You also have some problems with your code -- where does the until block end and where is the "return s" statement? – Paul Hankin Dec 28 '17 at 14:02
  • Well, the time complexity would have been log b if it were calculated with respect to the actual input value, but because it is calculated with respect to the input size, which is log₂ b, I thought the b in O(log b) could be replaced with log b, and that would give the time complexity with respect to the input size? Thanks – Selly Noor Dec 28 '17 at 14:06
  • The loop executes lg(b) times. If n is the size of the input, that is, n=lg(b), then the loop executes n times. So the complexity is O(n). – Paul Hankin Dec 28 '17 at 14:09
  • Ah okay I guess I have been confusing myself all along. Thanks for the clarification. – Selly Noor Dec 28 '17 at 14:12
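
As a quick check of that reasoning, here is a small counting sketch in Python (loop_count is a hypothetical helper written only for this illustration): the loop halves e on every pass, so it runs once per bit of b, and that bit count is exactly the input size n.

def loop_count(b):
    # Count how many iterations pow2's loop performs for exponent b.
    count, e = 0, b
    while e != 0:
        e //= 2
        count += 1
    return count

for b in (1, 15, 16, 1023, 1024):
    print(b, loop_count(b), b.bit_length())  # loop count equals the bit length of b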

0 Answers