I have a problem that states the following:
n players (where n is even) are to play games against each other. Not everyone has to play, but a player can play any other player at most once. If two players do play each other, there is exactly one winner and one loser. I then wish to partition the n players into two sets of size n/2, winners (W) and losers (L), such that no player in W has ever lost to a player in L.
This is not always possible: e.g., with 4 players where p1 won against p2, p2 won against p3, p3 won against p4, and p4 won against p1, there is no way to partition the players into W and L without error. So I settle for the next best thing: I wish to minimise my error, defined as the number of pairs of players where a player in W has lost to a player in L (not having played each other does not count as a loss).
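For concreteness, here is a short brute-force check of the 4-player cycle above (a sketch; the encoding of players as integers and games as (winner, loser) pairs is my own), confirming that every partition into two sets of size 2 has error at least 1:

```python
from itertools import combinations

players = {1, 2, 3, 4}
# (winner, loser) pairs: p1 beat p2, p2 beat p3, p3 beat p4, p4 beat p1
games = [(1, 2), (2, 3), (3, 4), (4, 1)]

def error(W, games):
    # count games where a player in W lost to a player outside W (i.e. in L)
    return sum(1 for winner, loser in games if loser in W and winner not in W)

# try every way to choose W as half the players
best = min(error(set(W), games) for W in combinations(players, 2))
print(best)  # prints 1: no error-free partition exists
```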
I think I have found a greedy solution to this problem: sort the players by their number of losses, place the n/2 players with the fewest losses in W, and fill in the rest to L. How do I go about proving that my greedy approach is in fact optimal? I have run several random tests and I can show that my approach gives a feasible solution, but I don't know how to show that it does in fact minimise my error.
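In case it helps, here is a minimal sketch of the greedy approach as I understand it (function names and the game encoding are illustrative, not part of the problem statement), run on the 4-player cycle example:

```python
from collections import Counter

def greedy_partition(players, games):
    """Greedy: the n/2 players with the fewest losses form W, the rest form L."""
    losses = Counter(loser for _winner, loser in games)
    ranked = sorted(players, key=lambda p: losses[p])  # fewest losses first
    half = len(players) // 2
    return set(ranked[:half]), set(ranked[half:])

def error(W, games):
    # number of pairs where a player in W lost to a player outside W
    return sum(1 for winner, loser in games if loser in W and winner not in W)

players = [1, 2, 3, 4]
games = [(1, 2), (2, 3), (3, 4), (4, 1)]  # the cycle from above
W, L = greedy_partition(players, games)
print(W, L, error(W, games))
```

On the cycle example every player has exactly one loss, so the greedy choice of W is a tie-break; here it achieves error 1, which matches the brute-force minimum for this instance.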