
I'm having a little trouble conceptually understanding how the ROC curve function in scikit-learn generates the true positive and false positive rates. I used the scikit-learn breast cancer dataset and built a decision tree on two of the features.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn import tree
import numpy as np

data = load_breast_cancer()
X = data.data[:, [1,3]]
y = data.target

# Splitting data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33,random_state=0)

# Training tree
bc_tree = tree.DecisionTreeClassifier(criterion="entropy").fit(X_train, y_train)

# Predictions
bc_pred = bc_tree.predict(X_test)
# Score
bc_tree.score(X_test, y_test)

# Confusion matrix
from sklearn import metrics
metrics.confusion_matrix(y_test, bc_pred) # True positive rate = 0.83

# ROC curve
fpr_tree, tpr_tree, thresholds_tree = metrics.roc_curve(y_test, bc_pred)

# True positive rate ROC
tpr_tree # 0.91

The confusion matrix looks like this:

[[ 55,  12]
 [ 11, 110]]

According to my calculations, the true positive rate is:

55/(55+11) = .83

According to the ROC curve implemented by scikit-learn, the true positive rate is 0.91. How did it calculate this number, and why aren't my calculations matching up? What am I missing?


1 Answer


Because you are reading the output of confusion_matrix incorrectly.

The matrix returned from confusion_matrix will be of the form

             Predicted
              0     1
True    0    TN    FP
        1    FN    TP

So according to the formula for TPR, TP / (TP + FN), the value should be 110 / (110 + 11) ≈ 0.909, which matches the 0.91 reported by roc_curve.
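
For reference, here is a minimal sketch (reusing y_test and bc_pred from the code in the question) that unpacks the confusion matrix in this row/column order and checks the resulting TPR against what roc_curve reports:

from sklearn.metrics import confusion_matrix, roc_curve

# Rows are true labels, columns are predicted labels, so ravel() gives
# the counts in the order TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(y_test, bc_pred).ravel()
tpr_manual = tp / (tp + fn)   # 110 / (110 + 11) ≈ 0.909

# With hard 0/1 predictions, roc_curve returns only three points; the
# middle one is the classifier's actual operating point.
fpr, tpr, thresholds = roc_curve(y_test, bc_pred)
print(tpr_manual, tpr[1])     # both ≈ 0.909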
