You're right that it's a bit confusing and not very precise of the authors. It would be better to use recurrences like this to count discrete events such as comparisons, but then you have to lay down the details of a particular implementation, and that gets fiddly. This proof is a kind of estimate, which is good enough since we're only looking for big Omega behavior; constant factors don't make a difference.
To make sense of it, think of $cn$ (which is $c$ times $n$) as the amount of time it takes to do a merge step on lists with total length $n$. So $c$ is a rough expression for the constant time it takes to handle one element: the time it takes to execute one iteration of whatever loop is doing the merging.
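For concreteness, here's a minimal sketch of such a merge loop in Python (my own illustration, not code from the book; the name `merge` is just for this example). The body of the `while` loop is the per-element work that $c$ stands for:

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    # Each iteration of this loop handles one element at constant
    # cost -- this is the c in the cn term.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # Whichever list has elements left over gets copied in, still at
    # constant cost per element.
    result.extend(left[i:])
    result.extend(right[j:])
    return result
```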
Rather than merging, he calls this "combining". He's also proposing there might be a per-element cost to splitting the lists for recursive sorting. In a normal array implementation, there is no such per-element cost. A linked-list mergesort will have one, though: a loop that divides a big list into two halves. Then $c$ represents one iteration of both loops.
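Here's a sketch of what that split loop could look like (again my own illustration, assuming a bare-bones `Node` class; the slow/fast-pointer technique is one common way to find the midpoint):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def split(head):
    """Split a non-empty linked list into two halves and return the
    heads of both. Each iteration advances a constant number of nodes,
    so the split costs a constant amount per element -- the other loop
    that c accounts for."""
    slow, fast = head, head.next
    while fast is not None and fast.next is not None:
        slow = slow.next          # moves one node per iteration
        fast = fast.next.next     # moves two nodes per iteration
    second = slow.next            # slow now sits at the end of the first half
    slow.next = None              # cut the list in two
    return head, second
```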
The recursive term $2T(n/2)$ is an expression for the amount of time it takes to sort the two sub-lists.
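Putting the pieces together, the recurrence maps onto an array mergesort like this (just a sketch, reusing the hypothetical `merge` from above):

```python
def merge_sort(items):
    """Array mergesort. The structure mirrors the recurrence:
    two recursive calls on halves, plus a linear-time merge."""
    if len(items) <= 1:               # base case: constant time
        return items
    mid = len(items) // 2
    # Note: Python slices copy their elements, an incidental
    # per-element cost that only affects the constant factor.
    left = merge_sort(items[:mid])    # T(n/2)
    right = merge_sort(items[mid:])   # T(n/2)
    return merge(left, right)         # + cn
```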
You could make the recurrence a little more precise by saying

$$T(n) = 2T(n/2) + cn + k$$

where $k$ is the constant time of code that runs outside the merge (and split, if there is one) loop: function call overhead, sublist length math, etc. You might try solving the recurrence with this extra term as an exercise to prove to yourself that the big Omega result doesn't change.
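If you want to check your work afterward: unrolling the recurrence for $n$ a power of two (a standard recursion-tree argument) gives

$$T(n) = \sum_{i=0}^{\lg n - 1} 2^i \left( c\,\frac{n}{2^i} + k \right) + n\,T(1) = cn\lg n + k(n-1) + n\,T(1),$$

so the extra $k$ only contributes a linear term, and the total is still $\Omega(n \log n)$.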