
I'm performing a classification task using XGBClassifier, and I want to reuse scikit-learn's functionality as much as possible. In particular, I'm interested in defining a custom scorer based on fbeta_score so I can use the F0.5 score.

When I run the following code:

import xgboost as xgb
from sklearn.metrics import f1_score, fbeta_score, make_scorer

clf = xgb.XGBClassifier(max_depth=5,
                        learning_rate=0.25,
                        objective='binary:logistic',
                        use_label_encoder=False,
                        eval_metric=make_scorer(fbeta_score(beta=0.5)),
                        )

I get the following error:

TypeError: fbeta_score() missing 2 required positional arguments: 'y_true' and 'y_pred'

Also, following this part of the XGBoost documentation, I simplified the case to use a predefined metric, f1_score (eval_metric=f1_score), but XGBClassifier falls back to log-loss anyway.

How can I implement my custom metric in the appropriate way?

Roberto

1 Answer


If you check the documentation, you cannot pass your own metric to eval_metric; you can only use one of the metrics listed there.
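For example, one of the built-in metric names from the documentation can be passed directly (a minimal sketch; 'aucpr' is just one of the listed options, not necessarily the one you want):

import xgboost as xgb

# pass a metric name that XGBoost lists as a built-in eval_metric
clf = xgb.XGBClassifier(max_depth=5,
                        learning_rate=0.25,
                        objective='binary:logistic',
                        use_label_encoder=False,
                        eval_metric='aucpr')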

But if you want to optimize for that metric, I think you can specify a custom scorer in GridSearchCV with the scoring parameter, as in the sketch below.
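Something along these lines could work (a minimal sketch; the parameter grid and the X, y data are placeholders you would replace with your own):

import xgboost as xgb
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import GridSearchCV

# build an F0.5 scorer: pass the metric function itself (not a call to it),
# and give its keyword arguments to make_scorer
f05_scorer = make_scorer(fbeta_score, beta=0.5)

clf = xgb.XGBClassifier(objective='binary:logistic', use_label_encoder=False)

# hypothetical parameter grid; replace with the values you want to search
param_grid = {'max_depth': [3, 5], 'learning_rate': [0.1, 0.25]}

search = GridSearchCV(clf, param_grid, scoring=f05_scorer, cv=5)
# search.fit(X, y)  # X, y are your training data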

DataSciRookie