As in so many things, it depends.
What the ISO SQL standard defines is the observable behavior: when a sort order is requested, the results must come back in that order. The mechanics of meeting that requirement are left to the implementation. With that said, sorting has been a heavily studied branch of computing for over half a century, and there are a small number of algorithms that are known to work well, plus minor variations that amount to fine tuning.
LEX, YACC, and Bison don't do much besides extract the intent of the supplied code. They can identify the nouns, predicates, and verbs in the supplied text, but the output doesn't actually do anything until it is passed to an interpreter of some sort.
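To make that concrete, here is a toy LEX-style tokenizer sketch. The token names and the SQL-ish grammar are hypothetical, purely for illustration: the point is that the output is inert data (the "nouns, predicates and verbs"), with no behavior attached until something consumes it.

```python
import re

# Hypothetical token classes; a real lexer grammar would be far richer.
TOKEN_SPEC = [
    ("VERB",      r"SELECT|INSERT|DELETE"),    # actions
    ("PREDICATE", r"WHERE|FROM"),              # relations
    ("NOUN",      r"[A-Za-z_][A-Za-z_0-9]*"),  # identifiers
    ("NUMBER",    r"\d+"),
    ("OP",        r"[=<>*,]"),
    ("SKIP",      r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(text):
    """Return (kind, value) pairs; nothing is executed."""
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(text)
            if m.lastgroup != "SKIP"]

tokens = tokenize("SELECT name FROM users WHERE id = 42")
# tokens is just a list of labeled strings -- an interpreter
# downstream must decide what, if anything, to do with it.
```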
In the RDBMS, the interpreter hiding under the parser and lexer takes those nouns, predicates and verbs and computes an idealized access path to the data, taking into account the optimizations (proprietary or not) of the platform. The access path is executed as a list of verbs.
However, the interpreter does not have to be an RDBMS. It might be a tool for managing metadata, in which case the result might be a graphical image of entity relationships (as an example).
Most databases use several different sorting algorithms depending on what they are sorting, and in what phase of the information lifecycle they are applying the sort.
When creating an ordered index from bulk data, they may use a tree sort or a heap sort.
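A minimal tree sort sketch of that idea: insert the bulk keys into a binary search tree, and an in-order traversal then yields them in sorted order, which is the same principle behind building an ordered index. Real engines use balanced, disk-friendly structures such as B-trees; this unbalanced BST is for illustration only.

```python
# Tree sort: bulk-load keys into a BST, read them back in order.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def in_order(root):
    # Left subtree, node, right subtree: emits keys ascending.
    if root is not None:
        yield from in_order(root.left)
        yield root.key
        yield from in_order(root.right)

root = None
for key in [42, 7, 19, 3, 88, 19]:
    root = insert(root, key)
# list(in_order(root)) -> [3, 7, 19, 19, 42, 88]
```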
When selecting data, the first choice is to choose an access path that allows traversal of an index that naturally returns the data in the order you requested (i.e. avoid sorting).
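To sketch why that avoids sorting: if the index is already ordered on the requested key, the engine can simply range-scan it and the rows come back in order for free. The in-memory "index" below, a sorted list of hypothetical (key, row_id) pairs, is a stand-in for a real on-disk index.

```python
import bisect

# Hypothetical index: sorted (key, row_id) pairs.
index = sorted([(31, "r1"), (7, "r2"), (55, "r3"), (7, "r4"), (20, "r5")])

def range_scan(idx, low, high):
    """Return row ids for low <= key <= high, already in key order."""
    lo = bisect.bisect_left(idx, (low,))
    hi = bisect.bisect_right(idx, (high, chr(0x10FFFF)))
    return [row for _, row in idx[lo:hi]]

# range_scan(index, 7, 31) walks the ordered index
# and returns rows in key order -- no sort step at all.
```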
If the dataset must be sorted after retrieval, and it is sufficiently small to fit into memory, they will typically use some flavor of QuickSort.
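For the in-memory case, here is a bare-bones quicksort sketch. Production implementations add safeguards (median-of-three pivot selection, insertion sort for small runs, introsort fallbacks to avoid quadratic worst cases); this version shows only the core divide-and-conquer idea.

```python
def quicksort(rows):
    # Partition around a pivot, recurse on each side.
    if len(rows) <= 1:
        return rows
    pivot = rows[len(rows) // 2]
    less    = [r for r in rows if r < pivot]
    equal   = [r for r in rows if r == pivot]
    greater = [r for r in rows if r > pivot]
    return quicksort(less) + equal + quicksort(greater)

# quicksort([5, 1, 9, 1, 7]) -> [1, 1, 5, 7, 9]
```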
If the dataset must be sorted after retrieval, and it is too large to fit into memory, they may create a temporary table and use either heap sort or tree sort.
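The too-big-for-memory case can be sketched as a two-phase sort: sort memory-sized runs, write each run out to the temporary area, then do a heap-driven k-way merge over the runs. The run size and the in-memory lists standing in for the temporary table are assumptions for illustration; `heapq.merge` plays the role of the min-heap merge.

```python
import heapq

RUN_SIZE = 4  # hypothetical memory budget, in rows

def external_sort(rows, run_size=RUN_SIZE):
    # Phase 1: produce sorted runs no larger than the memory budget
    # (a real engine would spill each run to the temporary table).
    runs = [sorted(rows[i:i + run_size])
            for i in range(0, len(rows), run_size)]
    # Phase 2: k-way merge of the sorted runs via a min-heap.
    return list(heapq.merge(*runs))

# external_sort([9, 4, 7, 1, 8, 2, 6, 3, 5])
# -> [1, 2, 3, 4, 5, 6, 7, 8, 9]
```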
I hope this helps.