I'll often end up, by way of GNU parallel, with a large file containing counts of various objects:
1201 object1
804 object1
327 object2
3828 object1
29 object2
277 object3
...
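For reference, here's a simplified stand-in for how such a file can arise (the real parallel jobs vary; this just shows why the same object appears on several lines):

```shell
# Simplified stand-in for the parallel jobs: each chunk is counted
# separately with sort | uniq -c, so the same object shows up on
# several "COUNT ITEM" lines once the outputs are combined.
printf '%s\n' object1 object1 object2 > chunk1
printf '%s\n' object1 object2 object3 > chunk2
for chunk in chunk1 chunk2; do
  sort "$chunk" | uniq -c
done
# uniq -c emits a (right-padded) count followed by the item,
# which is exactly the two-column format shown above.
```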
This file will often have several thousand lines, with the objects in no particular order, and I want the total count for each object. My usual approach is a Perl one-liner like this:
perl -lane '$O{$F[1]} += $F[0]; END {foreach $k (keys %O) {print "$k: $O{$k}"}}' countsfile
I'll typically have a pipeline consisting of parallel, awk, grep, sort, uniq, cut, etc., each with fairly terse arguments. The Perl hack is the exception: it's long to type and much more complex than the rest of the pipeline, and I always feel like I'm specifying far more than I really need to.
So my question: is there a technique or utility that'll let me do this without having to compose a full script every time? I'd like to be able to do this without using Perl, awk, R, or other systems that implement general-purpose languages.