I am using RandomizedSearchCV from sklearn with a RandomForestClassifier. To see several different metrics, I am using a custom scoring dictionary:
from sklearn.metrics import make_scorer, roc_auc_score, recall_score, matthews_corrcoef, balanced_accuracy_score, accuracy_score
acc = make_scorer(accuracy_score)
auc_score = make_scorer(roc_auc_score)
recall = make_scorer(recall_score)
mcc = make_scorer(matthews_corrcoef)
bal_acc = make_scorer(balanced_accuracy_score)
scoring = {"roc_auc_score": auc_score, "recall": recall, "MCC" : mcc, 'Bal_acc' : bal_acc, "Accuracy": acc }
These custom scorers are then used for the randomized search:
rf_random = RandomizedSearchCV(estimator=rf, param_distributions=random_grid, n_iter=100, cv=split, verbose=2,
                               random_state=42, n_jobs=-1, error_score=np.nan, scoring=scoring, iid=True, refit="roc_auc_score")
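For reference, a minimal sketch of the surrounding setup; the names rf, random_grid, and split are not shown in the question, so the values below are only placeholders I am assuming:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import PredefinedSplit, RandomizedSearchCV

rf = RandomForestClassifier(random_state=42)

# Example parameter distributions; the actual random_grid is not shown in the question
random_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 10, 20],
    "min_samples_leaf": [1, 2, 4],
}

# 'split' stands for the custom CV splits mentioned below, e.g. a PredefinedSplit
# or an explicit list of (train_idx, test_idx) tuples.
# split = PredefinedSplit(test_fold=...)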
Now the problem: because I am using custom splits, the AUC scorer throws an exception whenever a split contains only one class label.
I do not want to change the splits, so is there a way to catch these exceptions within RandomizedSearchCV or make_scorer? That is, if one of the metrics cannot be calculated (due to an exception), just record NaN and go on with the next model.
Edit: Apparently error_score only covers errors raised during model training, not during metric calculation. If I use e.g. accuracy, everything works and I just get warnings in the folds that contain only one class label. If I use e.g. AUC as the metric, the exceptions are still thrown.
Would be great to get some ideas here!
Solution: define a custom scorer that catches the exception:
import numpy as np

def custom_scorer(y_true, y_pred, actual_scorer):
    # Return NaN instead of raising if the wrapped metric cannot be computed
    score = np.nan
    try:
        score = actual_scorer(y_true, y_pred)
    except ValueError:
        pass
    return score
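A quick sanity check (hypothetical values, not from the original post): with a single-class y_true, roc_auc_score raises a ValueError, so the wrapper returns NaN instead of failing the whole search:

from sklearn.metrics import roc_auc_score

print(custom_scorer([0, 0, 0], [0.2, 0.7, 0.1], roc_auc_score))  # nan
print(custom_scorer([0, 1, 1], [0.2, 0.7, 0.9], roc_auc_score))  # 1.0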
Wrapping the original metrics with custom_scorer gives new scorers:
acc = make_scorer(accuracy_score)
recall = make_scorer(custom_scorer, actual_scorer=recall_score)
new_auc = make_scorer(custom_scorer, actual_scorer=roc_auc_score)
mcc = make_scorer(custom_scorer, actual_scorer=matthews_corrcoef)
bal_acc = make_scorer(custom_scorer, actual_scorer=balanced_accuracy_score)
scoring = {"roc_auc_score": new_auc, "recall": recall, "MCC": mcc, "Bal_acc": bal_acc, "Accuracy": acc}
These can in turn be passed to the scoring parameter of RandomizedSearchCV.
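For illustration, a sketch of how the wrapped scorers plug back into the search, reusing the assumed rf, random_grid, and split from above (X and y are the training data, not shown in the original post):

rf_random = RandomizedSearchCV(
    estimator=rf,
    param_distributions=random_grid,
    n_iter=100,
    cv=split,
    scoring=scoring,          # the dict of exception-safe scorers defined above
    refit="roc_auc_score",    # must match one of the keys in 'scoring'
    error_score=np.nan,       # still covers failures during model fitting
    random_state=42,
    n_jobs=-1,
    verbose=2,
)
# rf_random.fit(X, y)
# Splits with a single class now contribute NaN instead of raising, e.g. in
# rf_random.cv_results_["split0_test_roc_auc_score"]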
A second solution I found was:
def custom_auc(clf, X, y_true):
    # Score on predicted probabilities; return NaN if the AUC cannot be computed
    score = np.nan
    y_pred = clf.predict_proba(X)
    try:
        score = roc_auc_score(y_true, y_pred[:, 1])
    except Exception:
        pass
    return score
which can also be passed to the scoring argument:
scoring = {"roc_auc_score": custom_auc, "recall": recall, "MCC" : mcc, 'Bal_acc' : bal_acc, "Accuracy": acc }
(Adapted from this answer)
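Note that, unlike the make_scorer-based new_auc above, custom_auc scores on the probabilities from predict_proba rather than on hard class predictions; scikit-learn accepts such plain callables with the signature (estimator, X, y) directly as values in the scoring dictionary, so it can be mixed with the other scorers.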