Preliminaries
Time complexity isn't usually expressed as a specific integer; the statement "the time complexity of operation X is 18" isn't meaningful without a unit, e.g., 18 "doodads".
One usually expresses time complexity as a function of the size of the input to some function/operation.
You often want to ignore the specific amount of time a particular operation takes, due to differences in hardware or even differences in constant factors between languages. For example, summation is still O(n) (in general) in both C and Python (you still have to perform n additions), but differences in constant factors between the two languages mean the C version will be faster in absolute (wall-clock) time.
One also usually assumes that "Big-Oh" -- e.g., O(f(n)) -- describes the "worst-case" running time of an algorithm. There are other symbols (such as Θ and Ω) used to study tighter bounds and lower bounds.
Your question
Instead of summing from 1 to 5, let's look at summing from 1 to n.
The complexity of this is O(n), where n is the number of elements you're summing together. Each addition (with +) takes constant time, and you're performing it n times in this case.
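A minimal Python sketch of that linear-time summation (the function name is mine):

```python
def sum_linear(n):
    """Sum 1 + 2 + ... + n with a loop -> O(n) time."""
    total = 0
    for i in range(1, n + 1):  # the loop body executes n times
        total += i             # each addition takes constant time
    return total

print(sum_linear(5))  # -> 15
```

The running time grows linearly with n because the number of additions does.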
However, this particular operation can be accomplished in O(1) (constant time), because the sum of the integers from 1 to n can be expressed as a single arithmetic formula. I'll leave the details of that up to you to figure out.
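If you want to check your answer afterwards, a constant-time version using the closed form (Gauss's formula) looks like this:

```python
def sum_constant(n):
    """Sum 1 + 2 + ... + n via the closed form n*(n+1)/2 -> O(1) time."""
    return n * (n + 1) // 2  # one multiplication, one addition, one division

print(sum_constant(5))  # -> 15
```

The same three arithmetic operations run regardless of how large n is, which is what makes it constant time.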
As far as expressing this in terms of logarithms: I'm not exactly sure why you'd want to, but here goes:
Because exp(log(n)) is n, you could express it as O(exp(log(n))). Why would you want to, though? O(n) is perfectly understandable without needing to invoke log or exp.
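You can verify the identity underlying that rewrite directly (up to floating-point rounding):

```python
import math

n = 1000
# exp and log are inverse functions, so exp(log(n)) recovers n,
# modulo floating-point rounding error.
print(math.isclose(math.exp(math.log(n)), n))  # -> True
```

Since exp(log(n)) = n exactly as a mathematical identity, O(exp(log(n))) and O(n) denote the same class of functions.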