I'm currently using a `List<T>` as a queue (read `lst[0]`, then `lst.RemoveAt(0)`) to hold objects. There are about 20 items max at a given time. I realized there is an actual `Queue<T>` class. I'm wondering if there's any benefit (performance, memory, etc.) to using a `Queue<T>` over a `List<T>` acting like a queue?
-
Probably not if you're not using more than 20 items. But you can measure that using the `Stopwatch` class. – alexn Apr 30 '12 at 08:37
-
It depends on your usage scenario whether it matters. `lst.RemoveAt(0)` will cause the list to relocate all elements, whereas a queue is smarter. In theory `Queue` is better, but to be sure you should measure your use case. – Alois Kraus Apr 30 '12 at 08:40
-
You can't access a queue by index. You have to use entries as you dequeue them, and you can't put them back. `Peek` is not a solution; however, `Count > 0` may be. – Jay Jan 05 '15 at 22:23
4 Answers
Performance can be profiled, though in this case of so few items you may need to run the code millions of times to actually get worthwhile differences.

I will say this: `Queue<T>` will expose your intent more explicitly; people know how a queue works. A list being used like a queue is not as clear, especially if you have a lot of needless indexing and `RemoveAt(magicNumber)` code. `Dequeue` is a lot more consumable from a code maintenance point of view.

If this then gives you measurable performance issues, you can address them. Don't address every potential performance issue upfront.
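To make the intent point concrete, here is a minimal sketch (variable names and sample values are illustrative only) showing the same "take the next item" operation in both styles; the `Queue<T>` version states what the code means directly:

```csharp
using System;
using System.Collections.Generic;

class IntentDemo
{
    static void Main()
    {
        // List<T> pressed into service as a queue: the intent is implicit.
        var list = new List<string> { "first", "second", "third" };
        string headFromList = list[0];
        list.RemoveAt(0); // shifts every remaining element left

        // Queue<T>: the same operation, but the intent is explicit.
        var queue = new Queue<string>(new[] { "first", "second", "third" });
        string headFromQueue = queue.Dequeue(); // advances the head, no shifting

        Console.WriteLine(headFromList == headFromQueue); // prints True
    }
}
```

Both snippets yield `"first"`; the difference at 20 items is readability, not speed.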

-
Why shouldn't we address every potential performance issue upfront? – John Isaiah Carmona Apr 30 '12 at 08:43
-
@JohnIsaiahCarmona: Because using an O(n^2) algorithm instead of an O(n) one on 10 elements is not a performance issue. – Jon Apr 30 '12 at 08:44
-
@JohnIsaiahCarmona Because you fall into the trap of micro optimisation when you don't need it. My opinion of all this is we should watch out for the obvious clangers, but the few sub-milliseconds between A and B are not worth worrying about until they become a problem. Maintainable, readable code is more important than performance in most cases. – Adam Houldsworth Apr 30 '12 at 08:45
-
@JohnIsaiahCarmona Also, most potential performance issues are never really realised, hence they just remain *potential* issues. – Adam Houldsworth Apr 30 '12 at 08:48
-
Let's not dismiss @JohnIsaiahCarmona outright. We should always encourage devs to understand how these things are implemented under the covers. More practically, we all know how these stories go in production -- someone throws something together that is "temporary", and fast forward 5 years it's still shipping and now some other component is thrashing it in ways where the performance penalties modulate each other. This may not strongly apply to this question, but the best devs I know strive to see how their code can be thrashed/stressed. – Drew O'Meara Feb 18 '20 at 23:11
-
@DrewO'Meara I don't feel the comment was dismissed in the manner in which it was expressed. I agree that people should seek to gain deeper and deeper knowledge of the language, framework, and runtime they work in. It doesn't always happen in the wild though, and there will always be a balance between knowing something and applying it in production. – Adam Houldsworth Feb 19 '20 at 11:20
-
@Jon using an O(n^2) algorithm instead of an O(n) one on only 10 elements is nonetheless an order of magnitude difference and by most standards a performance issue. Still I agree that some things do not necessarily need to be addressed upfront, but when the business decision balancing performance vs development AND testing time has been made and the theory says there is an order of magnitude difference then profiling would be a waste of resources. – Paul Childs Jan 07 '21 at 02:20
-
@PaulChilds most certainly not. In case you are not very familiar with O notation, it describes the _asymptotic_ behavior of algorithms; that is, what happens after the input size has grown large enough so that e.g. the higher-order polynomial term dominates. In fact, for example one of the most performant sorting algos today [deliberately switches](https://en.wikipedia.org/wiki/Timsort#Operation) to O(n^2) insertion sort for small sized blocks of data. – Jon Jan 07 '21 at 13:59
-
@PaulChilds you can also intuitively think about it like this: an algo with runtime r1(x) = 1000x is O(n); and an algo with runtime r2(x) = x^2 + x is O(n^2). Which one is actually faster if n is less than 50? – Jon Jan 07 '21 at 14:01
-
Yes but this is a very artificial edge case; far removed from the example of removeAt in Queue vs List. Sure one might come across large differences in the leading coefficients of the polynomials for sorting algorithms, but that isn't the case for simple insert/remove operations on unordered containers. – Paul Childs Jan 08 '21 at 04:40
-
@PaulChilds I'm not sure what your argument is here. If you're talking specifically, then "on only 10 elements is nonetheless an order of magnitude difference and by most standards a performance issue" is an assertion unsupported by data -- and in this question's particular case, where the queue is implemented on top of an auto-growing array the same as a list, there will be no perf difference at all. If you're talking generically, then we are in agreement on what is technically correct so what does that leave to discuss? – Jon Jan 08 '21 at 13:06
-
The only disagreement is on your original comment taken as it stands. – Paul Childs Jan 14 '21 at 22:23
-
A single order of magnitude of CPU performance on collections is almost never a bottleneck situation. Slow code has these problems: 1. Not using any parallelization or asynchronization; everything is sequential. 2. I/O bound: often very large amounts of data need to be read/written and this takes far longer than CPU manipulations. 3. Latency bound: getting data to/from the database, especially when this is done wirelessly. The JIT can often look at a List, see it is only used as a Queue, and optimize by replacing the queue under the hood; JITs are far more advanced now than most people realize. – jdmneon Oct 08 '22 at 18:59
-
More often than not, buying better hardware is the solution to a performance bottleneck. It takes around $600 to have a coder work a full day. $600 can buy you a quadruple RAID array of M.2's, $700 can now buy a 3090 graphics card, $600 can buy 32 gigabytes of incredibly fast DDR5 RAM, and $600 can buy you a top-of-the-line next-generation CPU. Many languages (JavaScript, Python, R, PL/SQL) do not even implement queues. If you're writing an operating system, code for a fighter jet, or code on a supercomputer, then you may need to use many complex data structures. In C#/Java/C++ they are largely obsolete. – jdmneon Oct 08 '22 at 19:11
-
@jdmneon Can you link to anything showing the JIT changing a List to a Queue? Never heard that before. I would also argue against the last statement being that "complex" data structures are obsolete. They simply aren't in my experience. I now work on code that operates on large sets in memory, and silly mistakes in data structure choice can have a lasting impact. I will agree, however, that sometimes a valid answer to performance is hardware, though I find this goes in cycles. Either way, best to start with intent and move onto performance when the data tells you to. – Adam Houldsworth Oct 10 '22 at 08:36
-
It's proprietary so I don't have the new JIT code or what C# uses for JIT, but something like that would certainly be possible using A.I. (What types of optimizations would the JIT be doing if not speeding up the code using a plethora of different techniques?) I didn't say all; I said a 10x performance gain on the CPU is rarely the bottleneck, and that coding hours are often expensive compared to hardware. There are always exceptions. If we were talking about a 1000x performance gain or something by using a hashset instead of a list I would be on your side, but that's not what was discussed. – jdmneon Oct 12 '22 at 03:38
Short answer:

`Queue<T>` is faster than `List<T>` when used like a queue. `List<T>` is faster than `Queue<T>` when used like a list.
Long answer:

A `Queue<T>` is faster for the dequeue operation, which is O(1): the entire block of subsequent items in the array is not moved up. This is possible because a `Queue<T>` need not support removal from random positions, only from the front, so it maintains a head position (from which an item is pulled on `Dequeue`) and a tail position (to which an item is added on `Enqueue`). Removing from the front of a `List<T>`, on the other hand, requires it to shift every subsequent item one position up. This is O(n) when removing from the front, which is exactly what a dequeue operation is. The speed advantage can be noticeable if you're dequeuing in a loop.
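A rough way to see this for yourself (a sketch only; the element count is chosen arbitrarily, and exact timings will vary by machine) is to time a full drain of each structure with `Stopwatch`, as one of the comments suggests:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class DequeueBenchmark
{
    const int N = 100_000;

    static void Main()
    {
        var list = new List<int>();
        var queue = new Queue<int>();
        for (int i = 0; i < N; i++) { list.Add(i); queue.Enqueue(i); }

        var sw = Stopwatch.StartNew();
        while (list.Count > 0)
        {
            int head = list[0];
            list.RemoveAt(0); // O(n): shifts all remaining elements left
        }
        Console.WriteLine($"List drain:  {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        while (queue.Count > 0)
        {
            int head = queue.Dequeue(); // O(1): just advances the head index
        }
        Console.WriteLine($"Queue drain: {sw.ElapsedMilliseconds} ms");
    }
}
```

With enough elements the list drain becomes quadratic overall while the queue drain stays linear, which is the "dequeuing in a loop" case described above.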
A `List<T>` is more performant if you need indexed access, random retrieval, etc. A `Queue<T>` would have to enumerate fully to find the appropriate index position (it doesn't expose `IList<T>`).

That said, `Stack<T>` vs. `List<T>` is much closer: there is no performance difference in push and pop operations. Both push to the end and remove from the end of an array structure, both of which are O(1).

Of course you should use the structure that reveals the intent. In most cases it will perform better as well, since these structures are tailor-made for the purpose. I believe that had there been no performance difference at all, Microsoft wouldn't have included `Queue<T>` and `Stack<T>` in the framework merely for different semantics; they would have been simple to implement as extensions if that were the case. Think about `SortedDictionary<K, V>` and `SortedList<K, V>`, both of which do exactly the same thing but are differentiated only by their performance characteristics; both found a place in the BCL.

-
2@AlexanderRyanBaggett It should make no difference (my best guess) but you should really use the structure that reveals intent better. They both tell different stories to the developer. – nawfal Apr 23 '17 at 09:23
Besides the fact that the `Queue<T>` class implements a queue and the `List<T>` class implements a list, there is a performance difference.

Every time you remove the first element from a `List<T>`, all remaining elements are copied. With only 20 elements in the queue it may not be noticeable. However, when you dequeue the next element from a `Queue<T>`, no such copying happens, and that will always be faster. If the queue is long, the difference can be significant.

I wanted to emphasize what HugoRune already pointed out: `Queue` is significantly faster than `List`, with memory accesses of 1 vs. n for `List` in this use case. I have a similar use case, but with hundreds of values, and I will use `Queue` because it is an order of magnitude faster.

A note about `Queue` being implemented on top of a `List`: the key word is "implemented". It doesn't copy every value to a new memory location upon dequeue; rather, it uses a circular buffer. This can be done "on top of `List`" without the penalty of the copies that direct usage of `List` implies.
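To illustrate the circular-buffer idea, here is a deliberately simplified sketch (not the actual BCL implementation: fixed capacity, no growth or wrap-aware enumeration) of a queue whose `Dequeue` advances a head index instead of shifting elements:

```csharp
using System;

// Simplified fixed-capacity circular-buffer queue. Dequeue moves the
// head index forward rather than shifting elements, so it stays O(1).
class RingQueue<T>
{
    private readonly T[] _items;
    private int _head, _tail, _count;

    public RingQueue(int capacity) => _items = new T[capacity];

    public int Count => _count;

    public void Enqueue(T item)
    {
        if (_count == _items.Length) throw new InvalidOperationException("full");
        _items[_tail] = item;
        _tail = (_tail + 1) % _items.Length; // wrap around past the array end
        _count++;
    }

    public T Dequeue()
    {
        if (_count == 0) throw new InvalidOperationException("empty");
        T item = _items[_head];
        _head = (_head + 1) % _items.Length; // no element copying
        _count--;
        return item;
    }
}
```

The real `Queue<T>` adds growth (allocating a larger array and copying when full), but the steady-state enqueue/dequeue path is the same index arithmetic shown here.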
-
@Didaxis By the above 'reason', List can't use an Array unless the capacity is fixed, and yet the implementation does, hence why List.Add is "amortized" O(1). tl;dr: chained-array and array-resizing (i.e. when backed by List) circular buffer implementations are possible. – user2864740 May 19 '19 at 20:59