As the name suggests, a combiner should only be used when there is an opportunity to combine. In general, it should be applied to functions that are commutative (a.b = b.a) and associative (a.(b.c) = (a.b).c), because the framework gives no guarantees about when or how often it runs: a combiner may operate on only a subset of your keys and values, or may not execute at all, so the result must not depend on the order or grouping of the merges. This is a caution rather than a hard and fast rule. If there are very few duplicate keys in your mapper output, a combiner can backfire and become a useless burden. So use a combiner only when there is enough scope for combining.
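Summation satisfies both properties, which is why the classic word-count reducer is commonly reused as the combiner. Here is a minimal sketch of that pattern using Hadoop's mapreduce API; the class name SumReducer is illustrative:

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Summing is commutative and associative, so this reducer can safely
// double as a combiner. Wire it up with:
//   job.setCombinerClass(SumReducer.class);
// and Hadoop may (or may not) pre-aggregate map output before the shuffle.
public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get(); // order and grouping of the additions do not matter
        }
        context.write(key, new IntWritable(sum));
    }
}
```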
Quoting from Chuck Lam's 'Hadoop in Action':
"A combiner doesn't necessarily improve performance. You should
monitor the job's behavior to see if the number of records outputted
by the combiner is meaningfully less than the number of records going
in. The reduction must justify the extra execution time of running a
combiner. "
Hence, in your case it is likely that only a small proportion of your map output records share a key and can actually be combined, so the overhead of running the combiner outweighs the savings and ultimately increases your execution time.
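One way to act on Lam's advice is to compare Hadoop's built-in combine counters after the job finishes. The sketch below assumes a Job object that has already run; COMBINE_INPUT_RECORDS and COMBINE_OUTPUT_RECORDS are the standard TaskCounter names, while the helper method and its output format are illustrative:

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.TaskCounter;

public class CombinerCheck {
    // Assumes 'job' has already been configured, submitted, and completed.
    static void reportCombinerEffect(Job job) throws Exception {
        long in  = job.getCounters()
                      .findCounter(TaskCounter.COMBINE_INPUT_RECORDS).getValue();
        long out = job.getCounters()
                      .findCounter(TaskCounter.COMBINE_OUTPUT_RECORDS).getValue();
        // If 'out' is close to 'in', the combiner did little merging and
        // may not be worth its extra execution time.
        System.out.printf("combine input=%d, output=%d (%.1f%% of records kept)%n",
                in, out, in == 0 ? 0.0 : 100.0 * out / in);
    }
}
```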
Read more from my article here.