
Let's consider the following data:

set.seed(20)
y <- sample(0:1, 100, replace = TRUE)
x <- data.frame(x1 = rnorm(100), x2 = rexp(100))

I want to perform cross-validation and output sensitivity and specificity. I found out that I can pass an additional argument, metric, to the train function to specify which metric I want. So:

# train the model on the training set

library(caret)
cross <- train(as.factor(y) ~ .,
  data = cbind(y, x),
  metric = "Sensitivity",
  trControl = trainControl(method = "cv", number = 5),
  method = "glm",
  family = binomial()
)

However, I get this warning:

The metric "Sensitivity" was not in the result set. Accuracy will be used instead.

Is there any way to use sensitivity and specificity as metrics in cross-validation?

  • Does this answer your question? [Optimising caret for sensitivity still seems to optimise for ROC](https://stackoverflow.com/questions/49265400/optimising-caret-for-sensitivity-still-seems-to-optimise-for-roc) – Merik Jan 12 '21 at 18:43

2 Answers


Since you are using caret, part of the answer is in the package documentation, which states that the metric parameter is ...

a string that specifies what summary metric will be used to select the optimal model. By default, possible values are "RMSE" and "Rsquared" for regression and "Accuracy" and "Kappa" for classification. If custom performance metrics are used (via the summaryFunction argument in trainControl), the value of metric should match one of the arguments. If it does not, a warning is issued and the first metric given by the summaryFunction is used. (NOTE: If given, this argument must be named.)

So by default, a 'Sensitivity' metric does not exist, but you can define such a metric yourself. One approach is to pass a custom function that calculates sensitivity via the summaryFunction argument of trainControl. See Optimising caret for sensitivity still seems to optimise for ROC for instance. A sketch of this approach follows.
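For instance, caret ships a ready-made summary function, twoClassSummary, which reports ROC, Sens, and Spec per resample. A minimal sketch, assuming the 0/1 outcome is relabelled to valid factor level names (here "neg" and "pos", which classProbs = TRUE requires) and the predictors are named x1 and x2:

library(caret)

set.seed(20)
# relabel 0/1 so the levels are valid R names (required when classProbs = TRUE)
y <- factor(sample(0:1, 100, replace = TRUE), labels = c("neg", "pos"))
x <- data.frame(x1 = rnorm(100), x2 = rexp(100))

ctrl <- trainControl(
  method = "cv",
  number = 5,
  classProbs = TRUE,                 # class probabilities are needed for the ROC column
  summaryFunction = twoClassSummary  # reports ROC, Sens and Spec per resample
)

cross <- train(
  y ~ .,
  data = data.frame(y, x),
  method = "glm",
  family = binomial(),
  metric = "Sens",  # must match a column produced by the summary function
  trControl = ctrl
)

cross$results  # cross-validated ROC, Sens and Spec

Note that twoClassSummary treats the first factor level as the positive class when computing Sens, and that glm has no tuning parameters, so here metric only affects what is reported; with a tuned method it would also drive model selection.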


You can subset the output of confusionMatrix() with $ or [] and this will probably give you what you need, as sketched below.
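A brief sketch, assuming cross is a fitted caret model and x, y are its training data as in the question (the byClass element holds Sensitivity and Specificity for two-class problems):

# class predictions from the fitted model
preds <- predict(cross, newdata = x)
cm <- confusionMatrix(preds, as.factor(y))  # reference must be a factor

cm$byClass["Sensitivity"]  # subset the byClass element with [ ]
cm$byClass["Specificity"]
cm$table                   # the raw confusion matrix via $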

You can also use functions like sensitivity(), specificity(), or negPredValue() to compute these metrics from predictions directly.
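These take the predicted and reference factors directly; reusing preds from the sketch above:

sensitivity(preds, as.factor(y))   # true positive rate
specificity(preds, as.factor(y))   # true negative rate
negPredValue(preds, as.factor(y))  # negative predictive value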

The 'Sensitivity' metric does not exist by default for train() in the caret package.

  • Yes, the $ solution would be correct if a confusionMatrix were the output of the cross-validation. – John Jan 12 '21 at 18:51
  • I don't quite understand the solution with negPredValue, because the only output we get from cross-validation is Accuracy and Kappa. – John Jan 12 '21 at 18:51