I was just working on a LeetCode problem, Roman to Integer, the conversion of Roman numerals to integers, and after finishing up and comparing solutions, I noticed a rather interesting nuance in how the solutions listed describe their computational complexity.
I had described my solution as O(n), linear in the length of the input, since it iterates over the Roman numeral character by character. The official solutions, however, pointed out that with the numerals I, V, X, L, C, D, and M, only the numbers 1 through 3999 can be expressed. Their argument was that because Big O considers only the worst case, and the worst case is fixed at 3999, the time complexity is constant: O(1), regardless of the process.
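To make the O(n) framing concrete, here is a sketch of the character-by-character approach described above. The exact solution from the post isn't shown, so this is an assumed, typical implementation: scan left to right, subtracting a numeral's value when it precedes a larger one (as in IV), otherwise adding it.

```python
# Values of the seven Roman numeral characters.
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s: str) -> int:
    """Convert a Roman numeral string to an integer, one character at a time."""
    total = 0
    for i, ch in enumerate(s):
        value = VALUES[ch]
        # Subtractive notation: a smaller value before a larger one
        # (IV, IX, XL, XC, CD, CM) is subtracted instead of added.
        if i + 1 < len(s) and value < VALUES[s[i + 1]]:
            total -= value
        else:
            total += value
    return total

print(roman_to_int("MMXXIV"))  # 2024
```

The loop runs once per character, which is where the O(n) description comes from; the official solutions' O(1) claim rests instead on the input length being bounded (the largest expressible value is 3999).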
This raises a genuinely subtle question. When we say "worst-case performance," do we mean the worst case within any given input size n, or across all n? That is, do we consider, for a given n, the worst-case performance over inputs of that size, or do we consider the specific choice of n that gives us the global worst-case performance?