What is the Big-O efficiency of a stack, queue, set, and deque as it pertains to insertion, search, indexing, space, and deletion complexities?
- Thanks for the sarcasm @MikeDinescu, I've been searching Google for nearly an hour and haven't found much of an answer. My research hasn't been confined to one source; please don't think I haven't tried to find the answer on my own. – cereallarceny Aug 20 '14 at 19:43
- What specifically have you found, and what's your specific question? – Mike Dinescu Aug 20 '14 at 19:44
- Everything I've found on the issue has related to various implementations, but none of it has discussed Big-O. As a matter of fact, my most informative source has been Wikipedia, which gives a good primer on these data types but no explanation of their efficiency. – cereallarceny Aug 20 '14 at 19:48
- There we go. I've rephrased my question to just one "to-the-point" sentence. I've excluded what I've tried and searched for because apparently that's what gets you flamed by those more bent on arrogance than on helping someone understand a concept they find difficult. – cereallarceny Aug 20 '14 at 19:51
5 Answers
This really depends on the implementation of each data structure. I'll go over a few, so you can get the idea of how to determine this.
Let's assume the Stack class is implemented using a Node (it could also be implemented with a LinkedList, an ArrayList, an array, etc.).
Node top, bottom;       // references to the top and bottom of the stack

public Stack(Node n){
    top = bottom = n;   // a new stack starts with a single node
}
A Stack has three primary methods: peek, push, and pop.
public int peek(){
    return top.value; // only return the value at the top; no traversal
}
There isn't much processing involved: it just returns a primitive value. This is O(1) for both time and space.
public void push(Node n){
    n.next = top;   // link the new node in front of the old top
    top = n;        // the new node becomes the top
}
Still no real processing. This is O(1) for both time and space. Let's skip pop() and try something more interesting: a method called contains(int v). It searches the stack to see if it contains a Node whose value is equal to v.
public boolean contains(int v){
    Node current = top;
    while(current != null){
        if(current.value == v){
            return true;        // found a node holding the value
        }
        current = current.next; // move down to the next node
    }
    return false;               // reached the end without finding it
}
Basically, we move through the Node references until we find the value or reach the end. In some cases you'll find the value early, and in some cases much later. However, we care about the worst case! The worst possible case is having to check every single Node. If there are n Nodes, then we have O(n).
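For completeness, here is one way the pop() method skipped above could look (a minimal sketch, not necessarily how any particular library does it). Like push, it only rewires a single reference, so it is O(1) in time and space:

public int pop(){
    int value = top.value; // read the value at the top
    top = top.next;        // unlink the old top node; no traversal needed
    return value;          // a real implementation would also guard against an empty stack
}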
You can apply these analysis skills to the other data structures, so you can solve the rest yourself. It's not too bad. Good luck. :)

I've been using this: http://bigocheatsheet.com/ , but there is virtually no information on WHY those big O values are used. You have to dig in and research it for yourself.

I'm surprised you couldn't find this information online.
There's a distinction to be made between the data structures you listed in the question.
I'll start with the queue and stack data structures. Both the stack and the queue offer specialized access to the data: there is no random access, only sequential access, so you can't talk about random-access performance. In that case any decent implementation of a stack or a queue will offer O(1) insert and remove operations (push/pop for the stack, enqueue/dequeue for the queue).
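As an illustration (a minimal sketch only, not a production implementation), a queue backed by a singly linked list with head and tail references gives O(1) enqueue and dequeue:

class Node { int value; Node next; Node(int v){ value = v; } }

class LinkedQueue {
    Node head, tail;                 // front and back of the queue

    public void enqueue(int v){      // O(1): append at the tail
        Node n = new Node(v);
        if (tail == null) { head = tail = n; }
        else { tail.next = n; tail = n; }
    }

    public int dequeue(){            // O(1): remove from the head
        int v = head.value;
        head = head.next;
        if (head == null) tail = null;
        return v;                    // a real implementation would check for an empty queue
    }
}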
A set is a very different structure and its performance will depend heavily on the underlying implementation. For instance you can implement a set using an underlying hash table for near constant time insert, remove, and find operations, or you can implement it using a balanced search tree for O(log n).
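In Java, for example, java.util.HashSet (hash table, near constant time on average) and java.util.TreeSet (red-black tree, O(log n)) illustrate exactly this trade-off:

import java.util.HashSet;
import java.util.TreeSet;

public class SetDemo {
    public static void main(String[] args) {
        HashSet<Integer> hashSet = new HashSet<>(); // hash table: add/remove/contains in O(1) on average
        TreeSet<Integer> treeSet = new TreeSet<>(); // balanced tree: add/remove/contains in O(log n), keeps elements sorted

        hashSet.add(42);
        treeSet.add(42);
        System.out.println(hashSet.contains(42)); // true
        System.out.println(treeSet.contains(42)); // true
    }
}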

- I think in this context deque is 'double-ended queue', i.e. one that links in both directions – pm100 Aug 20 '14 at 19:57
The issue here is that these data structures are typically implemented in terms of other ones. For example, a set could be implemented as a hash table or using a red-black tree.
A stack doesn't provide random access, but it is often implemented as a block of memory (e.g. an array) with a single index pointing to the top of the stack, which is updated by push and pop operations.
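A minimal sketch of that idea (illustrative only: fixed capacity, no overflow or underflow checks):

class ArrayStack {
    int[] data = new int[16]; // backing block of memory
    int top = 0;              // index one past the current top element

    public void push(int v){ data[top++] = v; } // O(1): write and advance the index
    public int pop(){ return data[--top]; }     // O(1): step back and read
}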
A queue could be implemented as an array or a linked list, with very different insertion, removal, and indexing characteristics. A deque is more likely to be implemented as a linked list, but the Microsoft implementation in the C++ standard library uses a hybrid approach (see What the heque is going on with the memory overhead of std::deque?).
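In Java, for instance, java.util.ArrayDeque backs a deque with a resizable circular array instead, giving amortized O(1) insertion and removal at both ends (it exposes no random access by index):

import java.util.ArrayDeque;

public class DequeDemo {
    public static void main(String[] args) {
        ArrayDeque<Integer> deque = new ArrayDeque<>();
        deque.addFirst(1);  // amortized O(1) at the front
        deque.addLast(2);   // amortized O(1) at the back
        System.out.println(deque.pollFirst()); // 1
        System.out.println(deque.pollLast());  // 2
    }
}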
Big-O notation is usually reserved for algorithms and functions, not data types.
Furthermore, the time complexity very much depends on the implementation. Asking for the Big-O time complexity of a "stack" data type is like asking for the Big-O time complexity of "sorting". It all depends on the implementation. (More specifically, certain case-specific optimizations and requirements may modify the time complexity considerably.)
If you're looking to use the C++ STL as a reference implementation, you can find details on the complexity of each of your listed data types from here. Simply search for the data type and operation.
