For those of you unfamiliar with Meka: it is an extension of Weka for multi-label classification. Meka and Weka are very similar, though, so Weka users may be able to answer this question too.
Basically, I want to collect the results from runs of various classifiers into a single table, so I can do model selection quickly (dynamically/automatically) across the various evaluation metrics, without having to hard-code the values for each classifier.
Is there a foolproof, effective way to run multiple classifier experiments (say, using cross-validation) and get a table like the one below?
Model                Hamming Loss  Exact Match  Jaccard  One Error  Rank Loss
Binary Relevance     0.94          0.95         0.03     0.04       0.002
Classifier Chains    0.91          0.94         0.06     0.04       0.03
Random k-Labelsets   0.95          0.97         0.01     0.01       0.005
...                  ...           ...          ...      ...        ...
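One way I can imagine approaching this (a sketch, not an official Meka feature) is to run each classifier through Meka's command line and scrape the metric lines from its evaluation output into one table. The class names below (`meka.classifiers.multilabel.BR`, `CC`, `RAkEL`) are real Meka classifiers, but the CLI flags (`-t`, `-x`), the classpath, and especially the exact metric labels Meka prints are assumptions that would need adjusting to your Meka version:

```python
import re
import subprocess

# Metric labels to extract; these strings are an assumption about what
# Meka prints after cross-validation and may differ by version.
METRICS = ["Hamming loss", "Exact match", "Jaccard index", "One error", "Rank loss"]

def parse_metrics(output, metrics=METRICS):
    """Pull 'Label   0.123'-style lines out of an evaluation dump."""
    row = {}
    for m in metrics:
        match = re.search(re.escape(m) + r"\s+([0-9.]+)", output)
        row[m] = float(match.group(1)) if match else None
    return row

def run_cv(classifier, arff, folds=10):
    """Run one Meka classifier with cross-validation via the CLI.
    The flags here (-t for the dataset, -x for folds) are assumed."""
    cmd = ["java", "-cp", "meka.jar:lib/*", classifier,
           "-t", arff, "-x", str(folds)]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    return parse_metrics(out)

if __name__ == "__main__":
    classifiers = {
        "Binary Relevance":   "meka.classifiers.multilabel.BR",
        "Classifier Chains":  "meka.classifiers.multilabel.CC",
        "Random k-Labelsets": "meka.classifiers.multilabel.RAkEL",
    }
    rows = {name: run_cv(cls, "train.arff") for name, cls in classifiers.items()}

    # Print the comparison table, one row per model.
    print("  ".join(["Model"] + METRICS))
    for name, row in rows.items():
        print("  ".join([name] + [str(row[m]) for m in METRICS]))
```

But scraping stdout feels fragile, which is why I'm asking whether there is a built-in, foolproof way to get this table directly.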