I have to train a classifier that can distinguish between 6 possible classes of input samples. I also have a cost matrix for estimating the classifier's performance, both with and without the reject option.
So far I have used leave-one-out cross-validation to split the dataset into training and validation sets and measure each classifier's performance (a sketch of the evaluation loop is at the bottom of the post). These are the results in terms of accuracy:
- Multilayer Perceptron (MLP): 57.69% without the reject option, 48.26% with the reject option
- Support Vector Machine (SVM): 61.99% without the reject option, 35.09% with the reject option
While in terms of cost (these are estimates obtained using the Minimum Risk Classification rule; see the formula after this list):
- MLP: 2.0028 without the reject option, 1.4965 with the reject option
- SVM: 1.6089 without the reject option, 0.8502 with the reject option
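
For reference, the rule I'm referring to is the standard minimum-risk rule with a reject option (in my notation, $C(j \mid i)$ is the cost of deciding class $j$ when the true class is $\omega_i$, $\hat{P}(\omega_i \mid x)$ are the posteriors estimated by the classifier, and $\lambda_r$ is the rejection cost):

$$
\hat{y}(x) = \arg\min_{j}\; R(j \mid x), \qquad
R(j \mid x) = \sum_{i=1}^{6} C(j \mid i)\,\hat{P}(\omega_i \mid x),
$$

and the sample is rejected whenever $\min_j R(j \mid x) > \lambda_r$.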
So I've reached a point where I don't know which classifier is better.
Of course the SVM has ridiculously low costs, but when you consider the reject option you suddenly notice that its accuracy is pretty bad (about 13 percentage points lower than the MLP's).
In terms of accuracy, I'd say the MLP is better than the SVM because of its average accuracy (over the cases with and without the reject option): 52.97% (MLP) vs. 48.54% (SVM).
But the SVM is better in terms of average cost: 1.74965 (MLP) vs. 1.22955 (SVM).
Are there any guidelines to facilitate this decision?
Edit (more information, as requested): the dataset has ~700 samples with ~1250 features; however, with feature selection I reduced the features to 81.
The test set (which I don't have) will contain ~700 samples.
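
For concreteness, here is a minimal sketch of the kind of evaluation loop I mean: leave-one-out CV, posterior estimates turned into a minimum-risk decision, and a reject option. The data, the cost matrix, the rejection cost and the SVC settings below are placeholders rather than my actual ones (scikit-learn is assumed).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

# Placeholder data standing in for my ~700 x 81 matrix (smaller here to keep LOO fast).
X, y = make_classification(n_samples=200, n_features=81, n_informative=20,
                           n_classes=6, random_state=0)

n_classes = 6
# Placeholder cost matrix: cost[i, j] = cost of deciding class j when the true class is i.
cost = np.ones((n_classes, n_classes)) - np.eye(n_classes)
reject_cost = 0.3  # placeholder rejection cost

total_cost, correct, rejected = 0.0, 0, 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = SVC(probability=True, random_state=0).fit(X[train_idx], y[train_idx])
    post = clf.predict_proba(X[test_idx])[0]   # estimated posteriors P(class | x)
    risks = post @ cost                        # R(j | x) = sum_i cost[i, j] * P(i | x)
    j = int(np.argmin(risks))
    if risks[j] > reject_cost:                 # reject when even the best decision is too risky
        rejected += 1
        total_cost += reject_cost
    else:
        correct += int(j == y[test_idx][0])
        total_cost += cost[y[test_idx][0], j]

n = len(y)
print("accuracy (rejections counted as errors):", correct / n)
print("reject rate:", rejected / n)
print("average cost:", total_cost / n)
```

With this kind of loop I can compute both the accuracy and the average cost for each classifier in the same pass.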