
In calculating the efficiency of algorithms, I have read that the exponentiation operation is not considered to be an atomic operation (like multiplication).

Is it because exponentiation is the same as the multiplication operation repeated several times over?

Ali Haider
  • Check [Wikipedia](https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Arithmetic_functions) on several algorithms for exponentiation and their complexity -- compared to other arithmetic operations. – trincot Sep 24 '22 at 20:53
  • Atomicity is a property of memory coherence, not arithmetic operations. – Raymond Chen Sep 24 '22 at 20:57
  • "atomic" is a pretty weird term to apply to an operation. It is not a *primitive* operation performed by hardware circuits built into the processor. It has to be done in software, which makes it comparatively slow. One major reason for that is its tricky domain problems: it only has a non-complex result for negative bases when the exponent is an exact integer. – Hans Passant Sep 24 '22 at 21:00

2 Answers


In principle, you can pick any set of "core" operations on numbers that you consider to take a single time unit to evaluate. However, there are a couple of reasons why we typically don't count exponentiation as one of them.

Perhaps the biggest has to do with how large an output you produce. Suppose you have two numbers x and y that are each d digits long. Then their sum x + y has (at most) d + 1 digits - barely bigger than what we started with. Their product x · y has at most 2d digits - larger than what we started with, but not by a huge amount. On the other hand, the value x^y has roughly y · d digits, which can be significantly bigger than what we started with. (A good example of this: think about computing 100^100, which has about 200 digits!) This means that simply writing down the result of the exponentiation would require a decent amount of time to complete.
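You can see this growth directly in Python, which has arbitrary-precision integers built in (a minimal sketch; the `digits` helper is just for illustration):

```python
# Compare how many digits each operation's result has for two d-digit inputs.
def digits(n):
    """Number of decimal digits of a positive integer."""
    return len(str(n))

x = 10**99  # a 100-digit number (d = 100)
y = 10**99  # another 100-digit number

print(digits(x + y))     # 100 digits: barely bigger than the inputs
print(digits(x * y))     # 199 digits: at most 2d
print(digits(100**100))  # 201 digits: roughly y * d for x**y
```

The sum's size barely moves, the product roughly doubles, but the power's size is proportional to the exponent itself.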

This isn't to say that you couldn't consider exponentiation to be a constant-time operation. Rather, I've just never seen it done.

(Fun fact: some theory papers don't consider multiplication to be a constant-time operation, since the complexity of a hardware circuit to multiply two b-bit numbers grows quadratically with the size of b. And some theory papers don't consider addition to be constant-time either, especially when working with variable-length numbers! It's all about context. If you're dealing with "smallish" numbers that fit into machine words, then we can easily count addition and multiplication as taking constant time. If you have huge numbers - say, large primes for RSA encryption - then the size of the numbers starts to impact the algorithm's runtime and implementation.)
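To make the quadratic-cost point concrete, here is a rough sketch (the name `schoolbook_multiply` is hypothetical) that multiplies two numbers the way the schoolbook method does and counts the single-digit multiplications it performs - two d-digit inputs cost d · d of them:

```python
def schoolbook_multiply(a, b):
    """Multiply two non-negative integers via their decimal digits,
    returning (product, number of single-digit multiplications used)."""
    da = [int(c) for c in str(a)]
    db = [int(c) for c in str(b)]
    result = 0
    ops = 0
    # Least-significant digit first, so position i carries weight 10**i.
    for i, x in enumerate(reversed(da)):
        for j, y in enumerate(reversed(db)):
            result += x * y * 10**(i + j)
            ops += 1
    return result, ops

r, ops = schoolbook_multiply(12345, 67890)
print(r, ops)  # the correct product, using 5 * 5 = 25 digit multiplications
```

This is exactly why, once numbers stop fitting in a machine word, treating multiplication as a single unit-cost step stops being an honest accounting.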

templatetypedef

This is a matter of definition. For example, in hardware design and big-integer processing, multiplication is not considered an atomic operation (see e.g. this analysis of the Karatsuba algorithm).

On the level that is relevant for general-purpose software design, on the other hand, multiplication can be considered a fairly fast operation on fixed-width numbers, implemented in hardware. Exponentiation, by contrast, is rarely implemented in hardware, and an upper bound on its complexity can only be given in terms of the exponent, rather than the number of digits.
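As an illustration of the "bound in terms of the exponent" point: software implementations typically use exponentiation by squaring, which needs O(log e) multiplications for exponent e, but each of those multiplications acts on ever-larger intermediate values (a minimal sketch; `pow_by_squaring` is an assumed name):

```python
def pow_by_squaring(base, exp):
    """Compute base**exp for a non-negative integer exponent using
    repeated squaring: O(log exp) multiplications instead of exp - 1."""
    result = 1
    while exp > 0:
        if exp & 1:          # current lowest bit of the exponent is set
            result *= base
        base *= base         # square for the next bit
        exp >>= 1
    return result

print(pow_by_squaring(3, 13))  # 1594323, i.e. 3**13
```

Even with only O(log e) multiplications, the numbers being multiplied keep doubling in length, so the total work still grows with the exponent, not just with the digit count of the inputs.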