I am currently studying algorithms at college and I am curious as to what a seasoned developer uses in their code when they need to sort something.
Different sorting algorithms have different applications. You choose the best algorithm for the problem you're facing. For example, if you have a list of items in memory then you can sort them in place with QuickSort, but if you want to sort items as they are streamed in (i.e. an online sort) then QuickSort wouldn't be appropriate.
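To make that concrete, here is a minimal C# sketch (the values are made up purely for illustration): the first list is sorted in one go because every item is already in memory, while the second is kept sorted as items arrive by inserting each new item at its binary-search position instead of re-sorting the whole list every time.

```csharp
using System;
using System.Collections.Generic;

class SortingModes
{
    static void Main()
    {
        // Batch case: all items are already in memory, so a single
        // in-place sort of the whole list is fine.
        var batch = new List<int> { 42, 7, 19, 3 };
        batch.Sort(); // the BCL's default comparison sort

        // Streaming ("online") case: items arrive one at a time and the
        // collection must stay sorted after every arrival, so re-running a
        // full sort per item is wasteful. One simple approach is
        // binary-search insertion into an already-sorted list.
        var stream = new List<int>();
        foreach (var item in new[] { 42, 7, 19, 3 })
        {
            int index = stream.BinarySearch(item);
            if (index < 0) index = ~index; // BinarySearch returns the bitwise
                                           // complement of the insertion point
            stream.Insert(index, item);    // keeps the list sorted as items arrive
        }

        Console.WriteLine(string.Join(", ", batch));  // 3, 7, 19, 42
        Console.WriteLine(string.Join(", ", stream)); // 3, 7, 19, 42
    }
}
```

In a real online scenario you would more likely reach for a purpose-built structure (a balanced tree, a heap, a database index), but the point stands: the shape of the problem, not personal preference, drives the choice of algorithm.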
C++ uses IntroSort which has an average of Θ(n log(n)) and worst of Θ(n^2).
I think you mean that C++'s STL sort defaults to using Introsort in most implementations (including the original SGI STL and GNU's), but I don't believe the C++ specification specifically requires sort to use Introsort - it only places complexity requirements on it (O(n log n) comparisons since C++11), and std::sort isn't even required to be stable; that's what std::stable_sort is for. C++ is just a language, and it does not have a sorting algorithm built into the language itself. It's a library feature, not a language feature.
C# uses QuickSort which has an average of Θ(n log(n)) and worst of Θ(n^2).
Again, C# (the language) does not have any built-in sorting functionality. It's a .NET BCL (Base Class Library) feature: the library exposes methods that perform the sorting (such as Array.Sort, List<T>.Sort, Enumerable.OrderBy<T>, and so on). Unlike the C++ specification, the official .NET documentation does state that the algorithm used by List<T>.Sort is Quicksort, but other methods like Enumerable.OrderBy<T> leave the actual sorting algorithm to the backend provider (e.g. in LINQ-to-SQL and LINQ-to-Entities the sorting is performed by the remote database engine).
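For reference, here is roughly what calling those BCL methods looks like (the Person type and the sample values are purely illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Person
{
    public string Name = "";
    public int Age;
}

class BclSorting
{
    static void Main()
    {
        int[] numbers = { 5, 1, 4 };
        Array.Sort(numbers);         // sorts the array in place

        var ages = new List<int> { 30, 18, 25 };
        ages.Sort();                 // List<T>.Sort, also in place

        var people = new List<Person>
        {
            new Person { Name = "Ada", Age = 36 },
            new Person { Name = "Alan", Age = 41 },
        };

        // Enumerable.OrderBy does not sort in place; it returns a new,
        // lazily-evaluated sorted sequence (and it is documented as stable).
        IEnumerable<Person> byAge = people.OrderBy(p => p.Age);

        Console.WriteLine(string.Join(", ", numbers));                    // 1, 4, 5
        Console.WriteLine(string.Join(", ", ages));                       // 18, 25, 30
        Console.WriteLine(string.Join(", ", byAge.Select(p => p.Name)));  // Ada, Alan
    }
}
```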
Do programmers use the default sorting methods or do they implement their own?
Generally speaking, we use the defaults because they're good enough for 95%+ of all workloads and scenarios - or because the specification allows the toolchain and library we're using to pick the best algorithm for the runtime platform (e.g. C++'s sort could hypothetically make use of hardware sorting, which allows sorting of constrained values of n in O(1) to O(n) worst-case time, instead of QuickSort's O(n^2) worst case - which is a problem when processing unsanitized user input, because adversarial input can trigger that worst case).
But also, generally speaking, programmers should never reimplement sorting algorithms themselves. Modern languages with support for templates and generics mean that an algorithm can be written once in a general form, so we just need to provide the data to be sorted and either a comparator function or a sorting key selector, which avoids the common human errors made when reimplementing a stock algorithm.
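As a sketch of what "provide a comparator or a key selector" means in practice - here in C#, with a hypothetical Order record invented purely for illustration - the stock library methods supply the algorithm and we only supply the ordering rule:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A hypothetical record type, purely for illustration.
record Order(string Customer, decimal Total);

class CustomOrdering
{
    static void Main()
    {
        var orders = new List<Order>
        {
            new Order("Acme", 250m),
            new Order("Globex", 99m),
            new Order("Initech", 175m),
        };

        // Option 1: pass a comparator (a Comparison<T> delegate) to the
        // stock List<T>.Sort - no hand-rolled sorting loop anywhere.
        orders.Sort((a, b) => a.Total.CompareTo(b.Total));

        // Option 2: pass a key selector to the stock OrderBy.
        var byCustomer = orders.OrderBy(o => o.Customer).ToList();

        Console.WriteLine(string.Join(", ", orders.Select(o => o.Customer)));     // Globex, Initech, Acme
        Console.WriteLine(string.Join(", ", byCustomer.Select(o => o.Customer))); // Acme, Globex, Initech
    }
}
```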
As for the possibility of programmers inventing their own novel sorting algorithms... with few exceptions, that really doesn't happen. As with cryptography, if you find yourself "inventing" a new sorting algorithm, I guarantee that not only are you not inventing a new algorithm, but that your algorithm will be flawed in some way or another. In short: don't - at least not until you've run your idea past your nearest computer science academic.
When do they use the default one and when do they implement their own?
See above. You're also not considering a third option: using a non-default algorithm without implementing it yourself. As the other answers have said, it's based on the application, i.e. the problem you're trying to solve.
Is Θ(n log(n)) the best time a sorting algorithm can get?
You need to understand the difference between best-case, average-case, and worst-case time complexities. Just read the Wikipedia article section with the big table that shows the different runtime complexities: https://en.wikipedia.org/wiki/Sorting_algorithm#Comparison_sorts - for example, insertion sort has a best-case time complexity of O(n), which is much better than O(n log n) and directly contradicts your supposition.
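To see where insertion sort's O(n) best case comes from, here is a minimal textbook-style sketch (not any particular library's implementation): on input that is already sorted, the inner loop body never executes, so the algorithm performs only about one comparison per element; on reverse-sorted input it degrades to O(n^2).

```csharp
using System;

class InsertionSortDemo
{
    // Textbook insertion sort.
    // Best case  (already sorted input): O(n)   - the while loop body never executes.
    // Worst case (reverse-sorted input): O(n^2) - every element is shifted all the way left.
    static void InsertionSort(int[] a)
    {
        for (int i = 1; i < a.Length; i++)
        {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key)
            {
                a[j + 1] = a[j]; // shift larger elements one slot to the right
                j--;
            }
            a[j + 1] = key;
        }
    }

    static void Main()
    {
        int[] best = { 1, 2, 3, 4, 5 };  // already sorted: ~n comparisons
        int[] worst = { 5, 4, 3, 2, 1 }; // reverse sorted: ~n^2/2 comparisons
        InsertionSort(best);
        InsertionSort(worst);
        Console.WriteLine(string.Join(", ", best));  // 1, 2, 3, 4, 5
        Console.WriteLine(string.Join(", ", worst)); // 1, 2, 3, 4, 5
    }
}
```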
There are a ton of sorting algorithms, as I am currently finding out in uni.
I think you would be better served by bringing your questions to your class TA/prof/reader as they know the course material you're using and know the context in which you're asking.