So I know that for a binary classifier, the farther its accuracy is from 0.5, the more informative it is. (I.e., a binary classifier that gets everything wrong can be converted into one that gets everything right by always inverting its decisions.)
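To make the inversion point concrete, here's a toy sketch (assuming scikit-learn and 0/1 labels; the arrays are made up for illustration):

```
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([0, 1, 1, 0, 1])
y_pred = 1 - y_true  # a classifier that gets everything wrong

print(accuracy_score(y_true, y_pred))      # 0.0
print(accuracy_score(y_true, 1 - y_pred))  # 1.0 after inverting its decisions
```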
However, I have an inner feature selection step that provides "good" features to use (I'm trying out recursive feature elimination, and another method based on Spearman's rank correlation coefficient). Given that a classifier trained on these "good" features gets a cross-validation accuracy of 0, can I still conclude that the selected features are useful and predictive of the class in this binary prediction problem? A simplified sketch of my setup follows below.
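Roughly what I'm doing, as a simplified sketch using scikit-learn's `RFE` inside a pipeline so that selection happens within each fold (my actual data, base estimator, and the Spearman-based selector differ; the synthetic dataset here is just a stand-in):

```
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Stand-in data: 50 features, of which only 5 are informative.
X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=5, random_state=0)

# The selector sits inside the pipeline, so within each CV fold the
# "good" features are chosen without ever seeing that fold's test data.
pipe = Pipeline([
    ("select", RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)),
    ("clf", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
print(scores.mean())
```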