
I just started with GridSearchCV in Python, but I am confused about what `scoring` is here. Somewhere I have seen:

from sklearn.metrics import accuracy_score, make_scorer, precision_score, recall_score
from sklearn.model_selection import GridSearchCV

scorers = {
    'precision_score': make_scorer(precision_score),
    'recall_score': make_scorer(recall_score),
    'accuracy_score': make_scorer(accuracy_score)
}

grid_search = GridSearchCV(clf, param_grid, scoring=scorers, refit=refit_score,
                           cv=skf, return_train_score=True, n_jobs=-1)

What is the intent of using these values, i.e. precision, recall, and accuracy, in scoring?

Is this used by the grid search to give us the optimized parameters based on these scoring values, i.e. for the best precision score it finds the best parameters, or something like that?

It calculates precision, recall, and accuracy for each possible parameter combination and gives the results. Now, if this is true, does it then select the best parameters based on precision, recall, or accuracy? Is the above statement true?

– KMittal

2 Answers


You are basically correct in your assumptions. This dictionary of scorers lets the grid search evaluate every parameter combination against each scoring metric, so it can report the best parameters for each score.

However, you can't then have the grid search automatically fit and return the best_estimator_ without choosing which score to use for the refit; otherwise it will throw the following error:

ValueError: For multi-metric scoring, the parameter refit must be set to a scorer 
key to refit an estimator with the best parameter setting on the whole data and make
the best_* attributes available for that metric. If this is not needed, refit should 
be set to False explicitly. True was passed.
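For illustration, here's a minimal runnable sketch along those lines; the dataset, classifier, and parameter grid are made up for the example. Setting refit to one of the scorer keys tells the grid search which metric to use when picking best_params_ and refitting best_estimator_:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, make_scorer, precision_score, recall_score
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# toy data and grid, just for the example
X, y = make_classification(n_samples=500, random_state=0)
clf = RandomForestClassifier(random_state=0)
param_grid = {'n_estimators': [50, 100], 'max_depth': [3, None]}

scorers = {
    'precision_score': make_scorer(precision_score),
    'recall_score': make_scorer(recall_score),
    'accuracy_score': make_scorer(accuracy_score)
}

skf = StratifiedKFold(n_splits=5)

# refit must name one of the scorer keys when scoring is a dict;
# best_params_ and best_estimator_ are then chosen by that metric.
grid_search = GridSearchCV(clf, param_grid, scoring=scorers,
                           refit='precision_score', cv=skf,
                           return_train_score=True, n_jobs=-1)
grid_search.fit(X, y)

print(grid_search.best_params_)  # the combination with the best mean precision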
– G. Anderson
  • Okay, so what I get is: if I give refit='precision_score', then it will give the best parameters for the best precision score – KMittal Sep 27 '18 at 15:23
  • Absolutely correct! Just to add, you can access all of the fits and scores with `lr_grid.cv_results_` or, more readably, `pd.DataFrame(lr_grid.cv_results_)` after fitting the gridsearch (see the sketch after these comments) – G. Anderson Sep 27 '18 at 15:33
  • Thanks a lot :) It helped me a lot to confirm. – KMittal Sep 27 '18 at 15:43
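To make that comment concrete, here's a small sketch, assuming the fitted grid_search from the example above; with a dict of scorers, cv_results_ contains one set of columns per scorer key:

import pandas as pd

# each scorer key gets its own columns, e.g. 'mean_test_precision_score',
# 'rank_test_recall_score', and so on
results = pd.DataFrame(grid_search.cv_results_)
print(results[['params', 'mean_test_precision_score',
               'mean_test_recall_score', 'mean_test_accuracy_score']])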

What is the intent of using these values, i.e. precision, recall, and accuracy, in scoring?

Just in case your question also includes "What are precision, recall, and accuracy and why are they used?"...

  • Accuracy = (number of correct predictions)/(total predictions)
  • Precision = (true positives)/(true positives + false positives)
  • Recall = (true positives)/(true positives + false negatives)

Here a true positive is a prediction of true that is correct, a false positive is a prediction of true that is incorrect, and a false negative is a prediction of false that is incorrect.
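As a quick sanity check, here's a toy example (the labels are made up) that computes all three metrics by hand and with sklearn:

from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]

# TP = 2, FP = 1, FN = 1, TN = 4
print(accuracy_score(y_true, y_pred))   # (2 + 4) / 8 = 0.75
print(precision_score(y_true, y_pred))  # 2 / (2 + 1) ≈ 0.667
print(recall_score(y_true, y_pred))     # 2 / (2 + 1) ≈ 0.667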

Recall and precision are useful metrics when working with unbalanced datasets (i.e., there are a lot of samples with label '0' but far fewer samples with label '1').

Recall and Precision also lead into slightly more complicated scoring metrics like F1_score (and Fbeta_score), which are also very useful.
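Continuing the toy example above, both are available in sklearn as f1_score and fbeta_score, and either can be wrapped with make_scorer to use as a GridSearchCV scoring entry:

from sklearn.metrics import f1_score, fbeta_score, make_scorer

print(f1_score(y_true, y_pred))             # harmonic mean of precision and recall: ≈ 0.667
print(fbeta_score(y_true, y_pred, beta=2))  # beta > 1 weights recall more heavily

# usable in GridSearchCV the same way as the other scorers:
# scorers['f1_score'] = make_scorer(f1_score)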

Here's a great article explaining how recall and precision work.

– H Froedge