
If I understand Big-O notation correctly, k should be a constant time for the efficiency of an algorithm. Why would a constant time be considered O(1) rather than O(k), considering it takes a variable time? Linear growth ( O(n + k) ) uses this variable to shift the time right by a specific amount of time, so why not the same for constant complexity?

Adam Eberlin
SImon
    *If I understand Big-O notation correctly, k should be a constant time for the efficiency of an algorithm.* What is `k`? – japreiss Oct 23 '12 at 14:32
  • When representing linear growth, k is used, such that the complexity is O(n + k). Unless I am incorrect in saying that k is a constant time the algorithm takes? – SImon Oct 23 '12 at 14:34
  • @Simon When writing O(k) I see k as a variable, not a constant. And thus O(k) is linear time, since it's based on k. k operations take k time units. – Simon Forsberg Oct 23 '12 at 14:35
  • no, when representing asymptotic linear growth, it is of order O(n), where n is the variable size of input. If k is a constant, O(n + k) = O(n) asymptotically. – im so confused Oct 23 '12 at 14:35
  • Ah, my mistake then. But why would O(k) be linear? It is not dependent on n at all. – SImon Oct 23 '12 at 14:36
  • Because you can name your variable whatever you want, whether it be n or k is up to you. It's just customary to use n. – phant0m Oct 23 '12 at 14:53

1 Answer


There is no asymptotic linear growth O(n + k) where k is a constant. If k were a constant and you went back to the limit definition of asymptotic growth rates, you'd see that O(n + k) = O(n), because constants drop out in limits.
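To make that concrete (a standard limit argument, not spelled out in the original answer): for any constant k,

```latex
\lim_{n \to \infty} \frac{n + k}{n} = \lim_{n \to \infty} \left(1 + \frac{k}{n}\right) = 1
```

so n + k and n grow at the same asymptotic rate, and O(n + k) = O(n).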

An answer may legitimately be O(n + k) when k is a variable that is fundamentally independent of the other input measure n. You see this commonly in compares vs. moves in sorting algorithm analysis.
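A classic example of a genuine O(n + k) bound (my illustration, not from the thread) is counting sort, where k is the size of the key range, a variable that is independent of the input length n, so neither term can be dropped in general:

```python
def counting_sort(values, k):
    """Sort non-negative integers less than k in O(n + k) time.

    Here k is the size of the key range -- an input-dependent
    variable independent of n = len(values).
    """
    counts = [0] * k              # O(k) work
    for v in values:              # O(n) work
        counts[v] += 1
    result = []
    for key in range(k):          # O(n + k) work overall
        result.extend([key] * counts[key])
    return result
```

For example, `counting_sort([3, 1, 4, 1, 5], 6)` returns `[1, 1, 3, 4, 5]`; if the key range k is huge relative to n, the O(k) term dominates, which is exactly why it cannot be simplified away.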

To try to answer your question about why we drop k in Big-O notation (which I think is taught poorly, leading to all this confusion), one definition (as I recall) of O() is as follows:

f(n) ∈ O(g(n))  ⟺  ∃ d > 0, ∃ n_0 : ∀ n > n_0, f(n) ≤ d · g(n)

Read: f(n) is in O( g(n) ) iff there exist d and n_0 such that for all n > n_0,
                                         f(n) <= d * g(n)

Let's try to apply it to our problem here, where k is a constant and thus f(n) = k and g(n) = 1.

  • Are there a d and an n_0 that satisfy these requirements?

Trivially, the answer is of course yes. Choose d > k and n_0 = 0; then for all n > 0 the definition holds.
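A quick numerical sanity check of that choice (my sketch; the constant k = 7 and witness d = 8 are hypothetical values, not from the answer):

```python
k = 7                # the constant "running time": f(n) = k
d = 8                # any d > k is a valid witness
g = lambda n: 1      # g(n) = 1, i.e. the O(1) bound

# The definition requires f(n) <= d * g(n) for all n > n_0 = 0;
# check it over a large sample of n.
assert all(k <= d * g(n) for n in range(1, 10_000))
```

The check is independent of n, which is the whole point: f(n) = k never grows, so a single constant d dominates it forever, and O(k) collapses to O(1).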

im so confused
  • But if k is a constant relevant to the algorithm, why use O(1) instead of O(k)? – SImon Oct 23 '12 at 14:40
  • @SImon: If `k` were constant then one would not say `O(k)`, one would say `O(1)` since they are the same thing. If `k` is not constant, but depends in some way on the input to the program, then they are not the same thing. – Steve Jessop Oct 23 '12 at 14:42
  • @SImon well, I see where you are confused. k may be relevant, but O() only cares about *asymptotic* run times and loose equivalence. Your analysis of the algorithm may be stricter and more correct if it includes k as a measure of its operations – im so confused Oct 23 '12 at 14:43
  • Ah, so k is just the number of operations it takes? So in that case, two algorithms of complexity O(n^2) may have a different number of operations? In that case, why would k be added to a linear complexity, rather than multiplied? – SImon Oct 23 '12 at 14:48
  • @AK4749 You can use [this](http://i.stack.imgur.com/WVhRA.png) definition if you like, you just need to replace your `c`s with a `d` and `x` with `n`. – phant0m Oct 23 '12 at 14:55
  • @phant0m Man, I love me some imgur but it's blocked at work :( feel free to go ahead and edit my answer, I wont bite! – im so confused Oct 23 '12 at 14:57
  • That's a pity. All images uploaded via SO end up on imgur. That might prove a valid reason to have it unblocked if you work in a programming related field. – phant0m Oct 23 '12 at 14:59
  • @SImon Exactly, there may be and usually are many different physical runtimes experienced by algorithms that have the same O() representation. In your case, I have no choice but to disagree with you: if the final result is O(n + k), k must be a variable that shows a different input set's influence on the algorithm than n. If k is constant, the final result of O(n + k) can be further simplified to O(n) – im so confused Oct 23 '12 at 14:59
  • @SImon if k is a constant, you can indeed drop it in the situation you are describing. However, be wary: there are some situations (k^n) that require you to keep it! – im so confused Oct 23 '12 at 15:00
  • @phant0m quick question, was I wrong about the magnitude requirement? (|f| <= c|g|) ? the new definition doesn't seem to require that – im so confused Oct 23 '12 at 15:03
  • No, not at all. It's just that there are different definitions floating around. This is the one I was taught in my data structures and algorithms course, and since we usually use this to talk about runtimes, the values would be positive anyway. The image defines f to be a positive function, so I thought I'd omit the `| |` from the textual description as well. I simply recycled the image from an old answer of mine. – phant0m Oct 23 '12 at 15:07
  • @phant0m aha! damn the firewall, I hadn't even noticed there was an image now haha thanks – im so confused Oct 23 '12 at 15:09
  • Otherwise I would not have changed the variables ;) – phant0m Oct 23 '12 at 15:11