There seem to be differences between the ROC/Sens/Spec produced when tuning the model and the actual predictions the model makes on the same dataset. I'm using caret, which uses kernlab's ksvm under the hood. I'm not experiencing this problem with glm.
data(iris)
library(caret)
set.seed(1) # assumption: fix the RNG so the noise column and CV folds are reproducible
iris <- subset(iris, Species == "versicolor" | Species == "setosa") # we need only two output classes
iris$noise <- runif(nrow(iris)) # add noise - otherwise the model is too "perfect"
iris$Species <- factor(iris$Species) # drop the now-unused factor level
fitControl <- trainControl(method = "repeatedcv", number = 10, repeats = 5, savePredictions = TRUE, classProbs = TRUE, summaryFunction = twoClassSummary)
ir <- train(Species ~ Sepal.Length + noise, data = iris, method = "svmRadial", preProc = c("center", "scale"), trControl = fitControl, metric = "ROC")
confusionMatrix(predict(ir), iris$Species, positive = "setosa") # the final model's predictions on the full training set
getTrainPerf(ir) # same as in the model summary
What is the source of this discrepancy? Which ones are the "real", post-cross-validation predictions?
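Since savePredictions = TRUE, the held-out predictions from every resample should be stored in ir$pred. Here is a minimal sketch of how I would aggregate those for comparison, assuming caret's usual ir$pred layout (merging against ir$bestTune keeps only the rows for the finally selected sigma and C):

cv_preds <- merge(ir$pred, ir$bestTune) # keep only rows for the finally chosen tuning parameters
# every observation is held out exactly once per repeat (5 repeats here), so it appears 5 times
confusionMatrix(cv_preds$pred, cv_preds$obs, positive = "setosa")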