I am just trying to understand how Big O and Big Omega work. I know that Big O gives an upper bound on growth (grows no faster than), and Big Omega gives a lower bound (grows no slower than). So if I have a function g(n) such that g(n) = O(f(n)), can I then say that f(n) = Ω(g(n))?
1 Answer
Notation-wise, it is better to write g(n) ∈ O(f(n)), because "O(f(n))" can be seen as the set of all functions that grow no faster than a multiple of f(n).
Let us restate the two relevant formal definitions used in complexity theory:
- g(n) ∈ O(f(n)) ⇔ ∃k>0 ∃N≥0 ∀n≥N [|g(n)| ≤ k·|f(n)|]
- f(n) ∈ Ω(g(n)) ⇔ ∃k>0 ∃N≥0 ∀n≥N [|f(n)| ≥ k·|g(n)|]
If we can assume that f and g are non-negative functions (which is almost always the case for functions used in computer science), then we can drop the absolute value signs. Thus:
- g(n) ∈ O(f(n)) ⇔ ∃k>0 ∃N≥0 ∀n≥N [g(n) ≤ k·f(n)]
- f(n) ∈ Ω(g(n)) ⇔ ∃k>0 ∃N≥0 ∀n≥N [f(n) ≥ k·g(n)]
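These definitions can be probed with a finite numerical spot-check. A check like this can refute a claimed pair of witnesses but can never prove the ∀n≥N part; the functions g, f and the witnesses k = 4, N = 5 below are made-up examples for illustration, not anything from the question:

```python
def is_bounded_above(g, f, k, N, n_max=1000):
    """Spot-check g(n) <= k*f(n) for every n in [N, n_max].
    A finite check can refute, but never prove, g in O(f)."""
    return all(g(n) <= k * f(n) for n in range(N, n_max + 1))

# Example: g(n) = 3n + 5 is in O(n), with witnesses k = 4 and N = 5,
# because 3n + 5 <= 4n exactly when n >= 5.
g = lambda n: 3 * n + 5
f = lambda n: n
print(is_bounded_above(g, f, k=4, N=5))  # True
print(is_bounded_above(g, f, k=1, N=5))  # False: 3n + 5 > 1*n
```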
Next, flip the inequality on the second logical statement:
- g(n) ∈ O(f(n)) ⇔ ∃k>0 ∃N≥0 ∀n≥N [g(n) ≤ k·f(n)]
- f(n) ∈ Ω(g(n)) ⇔ ∃k>0 ∃N≥0 ∀n≥N [k·g(n) ≤ f(n)]
Now let's prove that the right-hand side of the first statement implies the right-hand side of the second statement:
- Assume that ∃k>0 ∃N≥0 ∀n≥N [g(n) ≤ k·f(n)] is true.
- Instantiate the k>0 that satisfies ∃N≥0 ∀n≥N [g(n) ≤ k·f(n)].
- Let kʹ = 1/k, which is legal because k > 0; note that kʹ > 0 as well.
- Instantiate the N≥0 that satisfies ∀n≥N [g(n) ≤ k·f(n)].
- Let n be an arbitrary number such that n≥N.
- Then we have g(n) ≤ k·f(n).
- Dividing both sides by k (valid because k > 0), we have g(n)/k ≤ f(n).
- By substitution, we have kʹ·g(n) ≤ f(n).
- Because n is arbitrary, we derive that ∀n≥N [kʹ·g(n) ≤ f(n)].
- We derive that ∃N≥0 ∀n≥N [kʹ·g(n) ≤ f(n)].
- We derive that ∃kʹ>0 ∃N≥0 ∀n≥N [kʹ·g(n) ≤ f(n)].
- We rename kʹ to k, so that ∃k>0 ∃N≥0 ∀n≥N [k·g(n) ≤ f(n)].
- Thus [∃k>0 ∃N≥0 ∀n≥N [g(n) ≤ k·f(n)]] implies [∃k>0 ∃N≥0 ∀n≥N [k·g(n) ≤ f(n)]].
- Therefore g(n) ∈ O(f(n)) implies f(n) ∈ Ω(g(n)), as desired.
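Note that the proof is constructive: from witnesses (k, N) for g(n) ∈ O(f(n)) it builds witnesses (1/k, N) for f(n) ∈ Ω(g(n)). A small Python sketch of that construction, again with a finite spot-check rather than a proof, and with illustrative functions and witnesses of my own choosing:

```python
def omega_witnesses(k, N):
    """Given witnesses (k, N) with g(n) <= k*f(n) for all n >= N,
    the argument above yields witnesses (1/k, N) with
    (1/k)*g(n) <= f(n) for all n >= N."""
    return (1.0 / k, N)

g = lambda n: 3 * n + 5   # g in O(f) with witnesses k = 4, N = 5
f = lambda n: n

k_prime, N = omega_witnesses(4, 5)   # (0.25, 5)
# Finite spot-check that f(n) >= k_prime * g(n) for all n >= N:
print(all(f(n) >= k_prime * g(n) for n in range(N, 1001)))  # True
```

Here n ≥ 0.25·(3n + 5) simplifies to n ≥ 5, matching the cutoff N carried over unchanged from the O(f) witnesses.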

Nayuki