
I'm using the caret function train() in one of my projects and I'd like to add a custom metric, the F1-score. I looked at this url (caret package) but I cannot understand how I can build this score with the parameters available.

There is an example of a custom metric, which is the following:

## Example with a custom metric
library(caret)
library(earth)     # needed for method = "earth"
library(mlbench)   # provides the BostonHousing data
data(BostonHousing)

## Summary function: median absolute deviation of the residuals
madSummary <- function(data,
                       lev = NULL,
                       model = NULL) {
  out <- mad(data$obs - data$pred,
             na.rm = TRUE)
  names(out) <- "MAD"
  out
}

robustControl <- trainControl(summaryFunction = madSummary)
marsGrid <- expand.grid(degree = 1, nprune = (1:10) * 2)

earthFit <- train(medv ~ .,
                  data = BostonHousing,
                  method = "earth",
                  tuneGrid = marsGrid,
                  metric = "MAD",
                  maximize = FALSE,
                  trControl = robustControl)
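
To see what the summary function actually receives, a quick toy call helps (the data frame below is my own made-up input, not from the caret docs): caret hands the summary function a data frame with at least `obs` and `pred` columns.

## Toy input (made up) to illustrate the interface of a summaryFunction
toy <- data.frame(obs  = c(1.0, 2.0, 3.0),
                  pred = c(1.2, 1.9, 3.5))
madSummary(toy)   # returns a single named numeric value ("MAD")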

Update:

I tried your code, but the problem is that it doesn't work with multiple classes, as in the code below (the F1 score is displayed, but it looks wrong). I'm not sure, but I think the function F1_Score only works on binary classes.

library(caret)
library(MLmetrics)

set.seed(346)
dat <- iris

## See http://topepo.github.io/caret/training.html#metrics
f1 <- function(data, lev = NULL, model = NULL) {
  print(data)  # inspect what caret passes to the summary function
  f1_val <- F1_Score(y_pred = data$pred, y_true = data$obs)
  c(F1 = f1_val)
}

# Split the data: 70% for training (createDataPartition keeps class proportions)
in_train <- createDataPartition(dat$Species, p = .70, list = FALSE)

trainClass <- dat[in_train, ]
testClass  <- dat[-in_train, ]

set.seed(35)
mod <- train(Species ~ ., data = trainClass,
             method = "rpart",
             metric = "F1",
             trControl = trainControl(summaryFunction = f1,
                                      classProbs = TRUE))

print(mod)
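
One workaround I'm considering (an untested sketch on my part, not something taken from the caret docs): macro-average a one-vs-rest F1 over the class levels that caret passes in via `lev`:

## Untested sketch: one-vs-rest F1 per class via MLmetrics::F1_Score(),
## then averaged across classes (macro F1).
f1_macro <- function(data, lev = NULL, model = NULL) {
  per_class <- sapply(lev, function(cl) {
    F1_Score(y_pred = data$pred, y_true = data$obs, positive = cl)
  })
  c(F1 = mean(per_class, na.rm = TRUE))  # na.rm: a class may never be predicted
}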

I also coded a manual F1 score that takes a confusion matrix as input (I'm not sure whether a confusion matrix can be obtained inside "summaryFunction"):

F1_score <- function(mat, algoName){

  ##
  ## Compute the per-class F1-scores from a confusion matrix
  ## and return their (macro) average.
  ##

  # Remark: rows = predictions // columns = real values
  recall    <- matrix(0, nrow = 1, ncol = nrow(mat))
  precision <- matrix(0, nrow = 1, ncol = nrow(mat))
  F1_score  <- matrix(0, nrow = 1, ncol = nrow(mat))

  for(i in 1:nrow(mat)){
    precision[i] <- mat[i, i] / rowSums(mat)[i]  # TP / all predicted as class i
    recall[i]    <- mat[i, i] / colSums(mat)[i]  # TP / all truly class i
  }

  for(i in 1:ncol(F1_score)){
    F1_score[i] <- 2 * (precision[i] * recall[i]) / (precision[i] + recall[i])
  }

  # Label the per-class scores
  colnames(F1_score) <- colnames(mat)
  rownames(F1_score) <- algoName

  # Return the average (macro) F1 score; per-class scores are in F1_score[1, ]
  mean(F1_score[1, ])
}
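
That said, it looks like the confusion matrix could be built inside the summary function itself from the `pred` and `obs` columns that caret passes in. A rough, untested sketch reusing the F1_score() function above:

## Untested sketch: build the confusion matrix inside the summaryFunction
## and reuse F1_score() from above. Note: yields NaN if some class is
## never predicted (0/0 in the precision).
f1_from_confmat <- function(data, lev = NULL, model = NULL) {
  cm <- table(data$pred, data$obs)  # rows = predictions, columns = real values
  c(F1 = F1_score(cm, algoName = "model"))
}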
  • It is not clear which part of the process you have had a problem with. Is it about writing a custom summaryFunction or using its results in the `train` output? Can you elaborate a little bit? – pbahr Jun 06 '16 at 21:21
  • It is about using its results in the train output. After launching train(), I would like to have the F1 score displayed (only accuracy and Cohen's kappa are directly coded). – MarcelRitos Jun 06 '16 at 21:44
  • So, if you're done coding your custom function, it would help to post a reproducible example using your function by editing your question. – pbahr Jun 07 '16 at 04:20

2 Answers


You should look at The caret Package - Alternate Performance Metrics for details. A working example:

library(caret)
library(MLmetrics)

set.seed(346)
dat <- twoClassSim(200)

## See https://topepo.github.io/caret/model-training-and-tuning.html#metrics
f1 <- function(data, lev = NULL, model = NULL) {
  f1_val <- F1_Score(y_pred = data$pred, y_true = data$obs, positive = lev[1])
  c(F1 = f1_val)
}

set.seed(35)
mod <- train(Class ~ ., data = dat,
             method = "rpart",
             tuneLength = 5,
             metric = "F1",
             trControl = trainControl(summaryFunction = f1, 
                                      classProbs = TRUE))
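
`lev` is the vector of outcome factor levels that caret passes to the summary function, so `lev[1]` above is the first level of `Class`. A quick check on the simulated data:

levels(dat$Class)  # "Class1" "Class2" -> F1_Score() treats "Class1" as positive
mod$results        # resampled F1 for each candidate cp value tried by rpart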
– topepo
  • This is nice because I think it should be clearly stated that `prSummary` will only work for a two-class problem. – NelsonGon Apr 03 '19 at 16:25
  • Is positive class always `lev[1]`? Cant find it [here](https://topepo.github.io/caret/model-training-and-tuning.html#metrics) – user3226167 Jul 30 '19 at 07:55
  • Using a custom function might be preferable, as F1 = NA if Precision = NA... and Precision = NA if the model never predicts the 'positive' class for any samples (e.g., 0/0 = NA). In this case, you could special-case your F1 function to report 0 to avoid an error in `train()`. – Brian D Jun 09 '20 at 21:44

For the two-class case, you can try the following:

mod <- train(Class ~ ., 
             data = dat,
             method = "rpart",
             tuneLength = 5,
             metric = "F",
             trControl = trainControl(summaryFunction = prSummary, 
                                      classProbs = TRUE))
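
For reference, prSummary returns AUC (area under the precision-recall curve), Precision, Recall and F, which is why metric = "F" works here. A minimal check on made-up data (the obs/pred values and probability columns below are my own toy input):

## Toy input: prSummary needs obs, pred and one probability column per level.
ex <- data.frame(obs  = factor(c("Class1", "Class2", "Class1"),
                               levels = c("Class1", "Class2")),
                 pred = factor(c("Class1", "Class1", "Class2"),
                               levels = c("Class1", "Class2")),
                 Class1 = c(0.8, 0.6, 0.4),
                 Class2 = c(0.2, 0.4, 0.6))
prSummary(ex, lev = levels(ex$obs))  # named vector: AUC, Precision, Recall, F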

Or define a custom summary function that combines both twoClassSummary and prSummary (my current favorite), which provides the following possible evaluation metrics: AUROC, Spec, Sens, AUPRC, Precision, Recall, and F. Any of these can be used as the metric argument. This also handles the special case I mentioned in my comment on the accepted answer (F is NA).

comboSummary <- function(data, lev = NULL, model = NULL) {
  out <- c(twoClassSummary(data, lev, model), prSummary(data, lev, model))

  # special-case a missing value for F (out is a named numeric vector, so index by name)
  out["F"] <- ifelse(is.na(out["F"]), 0, out["F"])
  names(out) <- gsub("AUC", "AUPRC", names(out))
  names(out) <- gsub("ROC", "AUROC", names(out))
  return(out)
}

mod <- train(Class ~ ., 
             data = dat,
             method = "rpart",
             tuneLength = 5,
             metric = "F",
             trControl = trainControl(summaryFunction = comboSummary, 
                                      classProbs = TRUE))
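
All seven metrics (plus their resampling standard deviations) then show up in the resampling results, one row per candidate cp value; this assumes the same simulated dat as in the accepted answer:

## Columns AUROC, Sens, Spec, AUPRC, Precision, Recall and F come from
## comboSummary(); pick any of them as `metric` when calling train().
mod$results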


– Brian D