I'm studying the effects of calibrating a classifier, and I read that the aim of calibration is to make a classifier's predictions more 'reliable'. With this in mind, I thought a calibrated classifier would achieve a higher score (roc_auc).
When I tested this hypothesis in Python with sklearn, I found the exact opposite.
Could you please explain:
Does calibration improve the ROC score (or any other metric)?
If not, what is/are the advantage(s) of performing calibration?
This is the code I used to test it:

from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

# Base SVM with probability estimates, plus a sigmoid-calibrated version
clf = SVC(probability=True).fit(X_train, y_train)
calibrated = CalibratedClassifierCV(clf, cv=5, method='sigmoid').fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]
cal_probs = calibrated.predict_proba(X_test)[:, 1]

# Plot the ROC curve of each model side by side, with its AUC in the title
plt.figure(figsize=(12, 7))
names = ['non-calibrated SVM', 'calibrated SVM']
for i, p in enumerate([probs, cal_probs]):
    plt.subplot(1, 2, i + 1)
    fpr, tpr, threshold = roc_curve(y_test, p)
    plt.plot(fpr, tpr, label=names[i], marker='o')
    plt.title(names[i] + '\n' + 'ROC: ' + str(round(roc_auc_score(y_test, p), 4)))
    plt.plot([0, 1], [0, 1], color='red', linestyle='--')
    plt.grid()
    plt.tight_layout()
    plt.xlim([0, 1])
    plt.ylim([0, 1])
plt.show()
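In case it's useful, I also tried to measure calibration quality itself rather than ranking. This is a minimal sketch reusing probs/cal_probs from above with sklearn's brier_score_loss and calibration_curve (assuming I'm using them correctly):

from sklearn.metrics import brier_score_loss
from sklearn.calibration import calibration_curve

# Brier score: mean squared error between predicted probabilities and
# actual outcomes -- lower should mean better-calibrated probabilities
print('Brier (non-calibrated):', brier_score_loss(y_test, probs))
print('Brier (calibrated):    ', brier_score_loss(y_test, cal_probs))

# Reliability diagram: fraction of positives per bin of predicted probability;
# a perfectly calibrated model should follow the diagonal
plt.figure(figsize=(7, 7))
for p, name in zip([probs, cal_probs], names):
    prob_true, prob_pred = calibration_curve(y_test, p, n_bins=10)
    plt.plot(prob_pred, prob_true, marker='o', label=name)
plt.plot([0, 1], [0, 1], color='red', linestyle='--', label='perfectly calibrated')
plt.xlabel('mean predicted probability')
plt.ylabel('fraction of positives')
plt.legend()
plt.show()

Here the calibrated model does look better (lower Brier score, curve closer to the diagonal), which is what confuses me, since the roc_auc comparison went the other way.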