As @that other guy points out, you can use `iterate` to apply `sort'` to a list `n` times, which is a wonderful example of using higher-level combinators in Haskell (as opposed to primitive recursion). Another nice combinator that is useful for this sort of thing is `until`: `until` takes a predicate `:: a -> Bool`, a function `:: a -> a`, and a value `:: a`, and applies the function to the value repeatedly until the predicate holds. So when you have a similar situation but don't know exactly how many passes you want to make over the input, you can write something like `until isSorted sort' xs`, which can be quite nice.
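For concreteness, here is a minimal sketch of both approaches. The definitions of `sort'` and `isSorted` are my own assumptions: `sort'` stands in for a single bubbling pass like the one in the question, and `isSorted` is a hypothetical helper that checks whether the list is already in order.

```haskell
-- Assumed single "bubbling" pass: swaps adjacent out-of-order elements.
sort' :: Ord a => [a] -> [a]
sort' (x:y:rest)
  | x > y     = y : sort' (x : rest)
  | otherwise = x : sort' (y : rest)
sort' xs = xs

-- Hypothetical helper: True when the list is already sorted.
isSorted :: Ord a => [a] -> Bool
isSorted xs = and (zipWith (<=) xs (drop 1 xs))

-- n passes, in the style of the iterate-based answer:
sortN :: Ord a => [a] -> [a]
sortN xs = iterate sort' xs !! length xs

-- Keep making passes only until the list is actually sorted:
sortUntil :: Ord a => [a] -> [a]
sortUntil = until isSorted sort'
```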
A lot of other sorting algorithms are quite nice in Haskell as well, including top-down and (especially) bottom-up mergesorts.
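As an illustration, here is one way a bottom-up mergesort can look (a sketch with my own names, not necessarily the formulation intended above): wrap every element in a singleton run, then repeatedly merge adjacent runs until a single run remains.

```haskell
mergeSortBU :: Ord a => [a] -> [a]
mergeSortBU = go . map (:[])
  where
    -- Repeatedly halve the number of runs by merging neighbours.
    go []    = []
    go [run] = run
    go runs  = go (mergePairs runs)

    mergePairs (a:b:rest) = merge a b : mergePairs rest
    mergePairs runs       = runs

    -- Standard merge of two sorted lists.
    merge xs []         = xs
    merge [] ys         = ys
    merge (x:xs) (y:ys)
      | x <= y    = x : merge xs (y:ys)
      | otherwise = y : merge (x:xs) ys
```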
At the risk of throwing you into the theory deep end, I wanted to mention another quite nice, high-level, combinator-based way of thinking about iterating `n` times. This is based on the idea of "recursion schemes", where the basic idea is that the structure of a recursive datatype informs useful patterns for processing/transforming it. As it happens, Haskell is powerful enough to express a lot of these ideas. One of the simplest examples of a recursion scheme is based on the natural numbers:

```haskell
data Nat = Zero | Succ Nat
```
This is a recursive datatype because the second data constructor refers back to `Nat` itself. There's a really natural way of processing `Nat`s, which is to say: when I have `Zero`, I should return some value; otherwise I should recursively process the inner `Nat` and then use the result to come up with a new value. As it happens, we can write `Nat` in a different way, which makes this very easy to do generically:

```haskell
data NatF a = Zero | Succ a

type Nat = Mu NatF
```
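For comparison, the processing pattern described above, written directly against the first `Nat` definition, is just ordinary recursion (`natToInt` is a made-up example):

```haskell
-- At Zero return a base value; at Succ reuse the result of
-- processing the inner Nat.
natToInt :: Nat -> Int
natToInt Zero     = 0
natToInt (Succ n) = 1 + natToInt n
```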
Now, the `NatF` data type can hold a value of any type inside its "recursive" component, instead of only another `Nat`. The `Mu` type constructor in the second line is basically like the `fix` function but for types: it instantiates `NatF (NatF (NatF (NatF ...)))` so that `Nat = NatF Nat`, which is the same as our earlier definition.
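For concreteness, here is one common way such a `Mu` could be defined; this particular encoding is an assumption on my part (the recursion-schemes library spells an analogous type `Fix`):

```haskell
-- Type-level knot-tying: Mu f is isomorphic to f (Mu f).
newtype Mu f = In { out :: f (Mu f) }
```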
Because of the increased genericity, though, we can now write down a type that describes functions which process `Nat`s as described above: `NatF a -> a`. When given a `Zero`, a function of this type just returns some `a`; when given a `Succ`, it has access to the `a` produced by the recursive call on the "inside" `Nat`. It turns out that it's possible to generalize the infrastructure needed to do this so that it works for any type written in the style above (i.e. as a fixed point of a type generalized over its recursive component). This means that you can just write down the equations for your function in terms of the results of the recursive calls and then call `cata` on the data structure, without ever having to write down the repetitive, boring recursive calls yourself!
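A sketch of what that generic infrastructure could look like, assuming the `Mu` encoding given earlier; the `Functor` instance for `NatF` is what lets `cata` recurse through the structure:

```haskell
-- Unwrap one layer, fold the inside recursively, then apply the algebra.
cata :: Functor f => (f a -> a) -> Mu f -> a
cata phi = phi . fmap (cata phi) . out

-- NatF must be a Functor for cata to work on it.
instance Functor NatF where
  fmap _ Zero     = Zero
  fmap f (Succ a) = Succ (f a)
```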
It also means that another way to write your "apply `sort'` `(length xs)` times to `xs`" is:
```haskell
sort xs = cata phi (toNat $ length xs)
  where phi Zero      = xs
        phi (Succ xs) = sort' xs
```
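(`toNat` isn't defined above; with the encoding assumed earlier it would be something like the following.)

```haskell
-- Convert a non-negative Int into the Mu-encoded Nat.
toNat :: Int -> Nat
toNat 0 = In Zero
toNat n = In (Succ (toNat (n - 1)))
```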