No, your code has a time complexity of O(2^|<DeltaTime>|), for a proper coding of the current time.
Please, let me first apologize for my English.
What Big O is and how it works in CS
Big O notation is not used to tie the input of a program to its running time.
Big O notation is, leaving rigor behind, a way to express the asymptotic ratio of two quantities.
In the case of algorithm analysis, these two quantities are not the input (for which one must first have a "measure" function) and the running time.
They are the length of the coding of an instance of the problem1 and a metric of interest.
The commonly used metrics are
- The number of steps required to complete the algorithm in a given model of computation.
- The space required by the model of computation, if any such concept exists.
A TM is implicitly assumed as the model, so that the first point translates to the number of applications of the transition2 function, i.e. "steps", and the second one translates to the number of different tape cells written at least once.
It is also often implicitly assumed that we can use a polynomially related encoding instead of the original one; for example, a function that searches an array from start to end has O(n) complexity despite the fact that a coding of an instance of such an array should have a length of n*b + (n-1), where b is the (constant) number of symbols of each element. This is because b is considered a constant of the computation model, and so the expression above and n are asymptotically the same.
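As a concrete illustration of this convention, here is a minimal C# sketch (my own code, with made-up names, not taken from any particular source): what we count is one comparison per element, i.e. at most n steps, no matter how many symbols b each element occupies on the tape.

```
// A sketch of linear search (my own example, illustrative names).
// We count one comparison per element: at most n steps, even though a
// tape coding of the array has a length of about n*b + (n-1) symbols.
using System;

static class LinearSearchSketch
{
    static int Find(int[] items, int target)
    {
        for (int i = 0; i < items.Length; i++)   // at most n iterations
            if (items[i] == target)
                return i;
        return -1;                               // not found
    }

    static void Main() =>
        Console.WriteLine(Find(new[] { 4, 8, 15, 16, 23, 42 }, 23)); // prints 4
}
```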
This convention also explains why an algorithm like Trial Division is an exponential algorithm despite essentially being a for(i=2; i<=sqrt(N); i++)-like algorithm3.
See this.
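For concreteness, here is a hedged sketch of Trial Division in C# (again my own code, not taken from the linked source): the loop body runs at most sqrt(N) times, but N is coded with roughly log2(N) symbols, so sqrt(N) = 2^(|<N>|/2), i.e. exponential in the length of the coding.

```
// A sketch of Trial Division (my own code). The loop runs at most
// sqrt(N) times, which is 2^(|<N>|/2) for a binary coding of N of
// length |<N>| ~ log2(N): exponential in the length of the coding.
using System;

static class TrialDivisionSketch
{
    static bool IsPrime(long n)
    {
        if (n < 2) return false;
        for (long i = 2; i * i <= n; i++)   // essentially for(i=2; i<=sqrt(N); i++)
            if (n % i == 0)
                return false;
        return true;
    }

    static void Main() =>
        Console.WriteLine(IsPrime(1_000_000_007)); // prints True
}
```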
This also means that big O notation may use as many parameters as one needs to describe the problem: it is not unusual to have a k parameter for some algorithms.
So this is not about the "input", or about there being "no input".
Now, the case study
Big O notation doesn't question your algorithm; it just assumes that you know what you are doing. It is essentially a tool applicable everywhere, even to algorithms which may be deliberately tricky (like yours).
To solve your problem, you used the current date and a future date, so they must be part of the problem somehow; simply put, they are part of the instance of the problem.
Specifically, the instance is:
<DeltaTime>
where the <> means any non-pathological coding of choice. See below for very important clarifications.
So your big O time complexity is just O(2^|<DeltaTime>|), because you do a number of iterations that depends on the value of the current time. There is no point in putting in other numeric constants, as the asymptotic notation is useful precisely because it eliminates constants (so, for example, the use of O(10^|<DeltaTime>|*any_time_unit) is pointless).
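For concreteness, here is a sketch of the kind of loop in question (I am not reproducing your exact code; the names and the one-second step are my own assumptions): it does one iteration per elapsed time unit, so the iteration count grows with the value of DeltaTime, not with the length of its coding.

```
// A sketch of a "wait for a future date" loop (illustrative names, not
// the exact code from the question). One iteration per second of the
// clock: the iteration count is proportional to the *value* of DeltaTime,
// which is exponential in the length |<DeltaTime>| of a binary coding.
using System;
using System.Threading;

static class CountdownSketch
{
    static void Main()
    {
        DateTime timeInFuture = DateTime.Now.AddSeconds(5); // assumed target
        while (DateTime.Now < timeInFuture)                 // ~DeltaTime iterations
        {
            Thread.Sleep(1000);                             // one "step" per second
        }
        Console.WriteLine("Target date reached.");
    }
}
```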
Where the tricky part is
We made one important assumption above: that the model of computation reifies5 time, and by time I mean the (real?) physical time.
There is no such concept in the standard computational model; a TM does not know time. We link time with the number of steps because this is how our reality works4.
In your model, however, time is part of the computation; you may use the terminology of functional people by saying that Main is not pure, but the concept is the same.
To understand this, one should note that nothing prevents the Framework from using a fake time that runs twice, five, ten times faster than physical time. This way your code will run in "half", "one fifth", "one tenth" of the "time".
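A small sketch of this idea (entirely my own construction, not part of any real Framework API): the loop below advances by one tick per step, and how long a tick lasts in physical time is a knob left to whoever drives the simulation. Turning that knob changes the wall-clock duration, but not the number of steps, which is the quantity the complexity talks about.

```
// A sketch separating "steps" from physical time (my own construction).
// The loop advances a simulated clock by one tick per step; how long a
// tick lasts in physical time is chosen from the outside. Different tick
// durations give different wall-clock times but the same step count.
using System;
using System.Threading;

static class TickSketch
{
    static long CountdownSteps(long deltaTicks, TimeSpan tickDuration)
    {
        long steps = 0;
        for (long t = 0; t < deltaTicks; t++)
        {
            Thread.Sleep(tickDuration); // the "speed" of time, decided elsewhere
            steps++;
        }
        return steps;
    }

    static void Main()
    {
        // Same DeltaTime value, two different "speeds" of time:
        Console.WriteLine(CountdownSteps(10, TimeSpan.FromSeconds(1)));        // ~10 s wall clock, 10 steps
        Console.WriteLine(CountdownSteps(10, TimeSpan.FromMilliseconds(100))); // ~1 s wall clock, 10 steps
    }
}
```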
This reflection is important for choosing the encoding of <DeltaTime>, which is essentially a condensed way of writing <(CurrentTime, TimeInFuture)>.
Since time does not exist a priori, the coding of CurrentTime could very well be the word Now (or any other choice), and the day before could be coded as Yesterday, thereby breaking the assumption that the length of the coding increases as the physical time goes forward (and that the length of the coding of DeltaTime decreases).
We have to properly model time in our computational model in order to do something useful.
The only safe choice we can make is to encode timestamps with increasing lengths (but still not in unary) as the physical time steps forward. This is the only true property of time we need, and the one the encoding needs to capture.
It is only with this type of encoding that your algorithm may be given a time complexity.
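As an illustration of such an encoding (again my own sketch, with an arbitrary choice of epoch and of base), a timestamp written in binary as seconds since an epoch gets slowly longer as physical time moves forward, which is exactly the property the argument needs:

```
// A sketch of a coding whose length grows as physical time goes forward
// (my own illustration; the epoch and the base are arbitrary choices).
using System;

static class EncodingSketch
{
    // Binary coding of a timestamp expressed as seconds since the Unix epoch.
    static string Encode(DateTimeOffset t) =>
        Convert.ToString(t.ToUnixTimeSeconds(), 2);

    static void Main()
    {
        var now   = DateTimeOffset.UtcNow;
        var later = now.AddYears(5000);          // an assumed far-future instant

        Console.WriteLine(Encode(now).Length);   // |<CurrentTime>| today (~31 bits)
        Console.WriteLine(Encode(later).Length); // longer as time goes forward (~38 bits)

        // With such a coding, |<DeltaTime>| ~ log2(DeltaTime in seconds),
        // so "DeltaTime iterations" is O(2^|<DeltaTime>|).
    }
}
```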
Your confusion, if any, arises from the fact that the word time in the phrases 'What is its time complexity?' and 'How much time will it take?' means two very different things.
Alas, the terminology uses the same words, but you can try using "steps complexity" in your head and re-asking yourself your question; I hope that will help you understand what the answer really is ^_^
1 This also explains the need for an asymptotic approach, as each instance has a different, yet not arbitrary, length.
2 I hope I'm using the correct English term here.
3 This is also why we often find log(log(n)) terms in the math.
4 Id est, a step must occupy some finite, non-null, connected interval of time.
5 This means that the computational model has a knowledge of physical time in it, that is, it can express it with its own terms. An analogy is how generics work in the .NET framework.