
I noticed that the results of the following two snippets are different.

#1
metrics.plot_roc_curve(classifier, X_test, y_test, ax=plt.gca())


#2
metrics.plot_roc_curve(classifier, X_test, y_test, ax=plt.gca(),
                       label=clsname + ' (AUC = %.2f)' % roc_auc_score(y_test, y_predicted))

So, which method is correct?

I have added a simple reproducible example:

from sklearn.metrics import roc_auc_score
from sklearn import metrics
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
X = data.data
y = data.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=12)

svclassifier = SVC(kernel='rbf')
svclassifier.fit(X_train, y_train)
y_predicted = svclassifier.predict(X_test)

print('AUC = %.2f' % roc_auc_score(y_test, y_predicted))  #1

metrics.plot_roc_curve(svclassifier, X_test, y_test, ax=plt.gca())  #2
plt.show()

Output (#1):

AUC = 0.86

While (#2) produces:

[image: ROC curve produced by plot_roc_curve; the AUC in its legend differs from 0.86]

  • What is the difference between #1 and #2? You are just adding a label to #2; see [plot_roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_roc_curve.html?highlight=sklearn%20metrics%20plot_roc) and [matplotlib Artist.set_label](https://matplotlib.org/stable/api/_as_gen/matplotlib.artist.Artist.set_label.html#matplotlib.artist.Artist.set_label) – Shijith Feb 27 '21 at 10:35
  • @Shijith I manually add `roc_auc_score` as the label instead of relying on the automatic legend, to show the difference. Could you elaborate on that, please? – David Ws. Feb 27 '21 at 11:29

1 Answer


The difference here may be that sklearn internally uses `predict_proba()` to get the probability of each class and computes the AUC from those continuous scores, rather than from the hard class predictions returned by `predict()`.

For example, when you use `classifier.predict()`:

import matplotlib.pyplot as plt
from sklearn import datasets, metrics, model_selection, svm

X, y = datasets.make_classification(random_state=0)
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, random_state=0)
clf = svm.SVC(random_state=0, probability=False)
clf.fit(X_train, y_train)
clf.predict(X_test)

>>> array([1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0,
           1, 0, 0])

# calculate the AUC from the hard class predictions
metrics.roc_auc_score(y_test, clf.predict(X_test))

>>> 0.8782051282051283  # ~0.88

If you use `classifier.predict_proba()` instead:

X, y = datasets.make_classification(random_state=0)
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, random_state=0)
# set probability=True so that SVC exposes predict_proba()
clf = svm.SVC(random_state=0, probability=True)
clf.fit(X_train, y_train)
clf.predict_proba(X_test)

>>> array([[0.13625954, 0.86374046],
           [0.90517034, 0.09482966],
           [0.19754525, 0.80245475],
           [0.96741274, 0.03258726],
           [0.80850602, 0.19149398],
           ...,
           [0.31927198, 0.68072802],
           [0.8454472 , 0.1545528 ],
           [0.75919018, 0.24080982]])

# calculate the AUC from the probability of the positive class;
# when computing ROC AUC, estimator.classes_[1] is by default
# considered the positive class, hence clf.predict_proba(X_test)[:, 1]
metrics.roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

>>> 0.9102564102564102
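
Note that even without probability=True, an SVC still exposes decision_function(), whose continuous scores roc_auc_score also accepts; a minimal sketch, reusing the train/test split from the examples above:

# a sketch, assuming the same X_train/X_test split as above:
# decision_function() returns continuous scores even when the SVC
# was fitted with the default probability=False
clf = svm.SVC(random_state=0, probability=False)
clf.fit(X_train, y_train)
metrics.roc_auc_score(y_test, clf.decision_function(X_test))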

So, for your question: metrics.plot_roc_curve(classifier, X_test, y_test, ax=plt.gca()) computes the ROC curve and its AUC from the classifier's continuous scores (predict_proba() if available, otherwise decision_function()), while in metrics.plot_roc_curve(classifier, X_test, y_test, ax=plt.gca(), label=clsname + ' (AUC = %.2f)' % roc_auc_score(y_test, y_predicted)) you compute roc_auc_score from the hard predictions of predict() and merely pass that number in as the legend label, so the two AUC values differ.
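
If you want the manual label to agree with the plotted curve, compute the score from the same continuous outputs the plot uses. A minimal sketch for your example (assuming your svclassifier fitted with the default probability=False, so decision_function() supplies the scores, and with clsname replaced by a plain string):

# use decision_function() scores so the label matches the plotted curve
y_scores = svclassifier.decision_function(X_test)
metrics.plot_roc_curve(svclassifier, X_test, y_test, ax=plt.gca(),
                       label='SVC (AUC = %.2f)' % roc_auc_score(y_test, y_scores))
plt.show()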

– Shijith