
I have an imbalanced dataset with 53,987 rows, 32 columns, and 8 classes. I'm trying to perform multiclass classification. This is my code and the corresponding output:

from sklearn.metrics import classification_report, accuracy_score
import xgboost
xgb_model = xgboost.XGBClassifier(num_class=7, learning_rate=0.1, num_iterations=1000, max_depth=10, feature_fraction=0.7, 
                              scale_pos_weight=1.5, boosting='gbdt', metric='multiclass')
hr_pred = xgb_model.fit(x_train, y_train).predict(x_test)
print(classification_report(y_test, hr_pred))


[10:03:13] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.3.0/src/learner.cc:541: 
Parameters: { boosting, feature_fraction, metric, num_iterations, scale_pos_weight } might not be used.

This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core.  Or some parameters are not used but slip through this verification. Please open an issue if you find above cases.

[10:03:13] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.3.0/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
          precision    recall  f1-score   support

     1.0       0.84      0.92      0.88      8783
     2.0       0.78      0.80      0.79      4588
     3.0       0.73      0.59      0.65      2109
     4.0       1.00      0.33      0.50         3
     5.0       0.42      0.06      0.11       205
     6.0       0.60      0.12      0.20       197
     7.0       0.79      0.44      0.57       143
     8.0       0.74      0.30      0.42       169

    accuracy                           0.81     16197
   macro avg       0.74      0.45      0.52     16197
weighted avg       0.80      0.81      0.80     16197

and

from sklearn.metrics import f1_score
import pandas as pd

max_depth_list = [3, 5, 7, 9, 10, 15, 20, 25, 30]

xgb_f1_scores = []
for max_depth in max_depth_list:
    xgb_model = xgboost.XGBClassifier(max_depth=max_depth, seed=777)
    xgb_pred = xgb_model.fit(x_train, y_train).predict(x_test)
    # micro-averaged F1 equals accuracy for single-label multiclass problems
    xgb_f1_scores.append(f1_score(y_test, xgb_pred, average='micro'))

# build the comparison table once, after all depths have been evaluated
xgb_df = pd.DataFrame({'tree depth': max_depth_list,
                       'accuracy': xgb_f1_scores})
xgb_df

WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.3.0/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.

How can I fix these warnings?

  • Welcome to StackOverflow. Please create a MWE (https://stackoverflow.com/help/minimal-reproducible-example) first and don't post code as images (https://meta.stackoverflow.com/questions/285551/why-not-upload-images-of-code-errors-when-asking-a-question). – Hagbard Feb 08 '21 at 08:01
  • Welcome to Stack Overflow. Please make sure that 1) you include your code and error messages in your question as text; screenshots, and even worse, links to screenshots, are hard to read, especially on mobile devices. Also, 2) please indicate what your exact problem is; the warnings (there are two) give instructions on what to do, so it is unclear why following them was not possible for you. – FlyingTeller Feb 08 '21 at 08:03
  • Additionally, upgrading to the most recent XGBoost version might automatically remove some of those warnings. – Hagbard Feb 08 '21 at 09:11
  • @mirekphd This warning is not about OP's computer, but about the XGBoost library itself. I have literally the same warning and I do not even have such a folder as Administrator. – Roland Pihlakas Dec 03 '21 at 18:20

3 Answers


If you don't want to change any behavior, just set eval_metric='mlogloss', as follows.

xgb_model = xgboost.XGBClassifier(num_class=7,
                                  learning_rate=0.1,
                                  num_iterations=1000,
                                  max_depth=10,
                                  feature_fraction=0.7, 
                                  scale_pos_weight=1.5,
                                  boosting='gbdt',
                                  metric='multiclass',
                                  # set the metric explicitly to silence the default-metric warning
                                  eval_metric='mlogloss')

The warning log tells you which eval_metric to set to remove the warning: usually mlogloss for multiclass objectives or logloss for binary ones.
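For instance, a minimal sketch of the binary case (make_classification here is just a stand-in for your own training data):

from sklearn.datasets import make_classification
import xgboost

# stand-in binary dataset; substitute your own x_train / y_train
x_bin, y_bin = make_classification(n_samples=1000, n_classes=2, random_state=777)

# for binary objectives, the analogous warning is silenced with 'logloss'
xgb_bin = xgboost.XGBClassifier(eval_metric='logloss')
xgb_bin.fit(x_bin, y_bin)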

Wei Chen
  • 605
  • 6
  • 14
0

I ran into exactly the same problem; the reason is that I used incorrect hyperparameters with the XGBClassifier. In your situation, try removing the hyperparameters boosting, feature_fraction, metric, and num_iterations, because they are not valid XGBClassifier parameters (they are LightGBM names); scale_pos_weight is valid but only applies to binary objectives. You can check the documentation. A cleaned-up call is sketched after the quoted message.

This is your warning message:

This may not be accurate due to some parameters are only used in language bindings but passed down to XGBoost core. Or some parameters are not used but slip through this verification.
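A minimal sketch of an equivalent call using XGBoost-native parameter names (the mapping from the LightGBM-style names is my reading of the docs; double-check it against your XGBoost version):

import xgboost

# XGBoost-native equivalents of the LightGBM-style names:
#   num_iterations   -> n_estimators
#   feature_fraction -> colsample_bytree
#   boosting='gbdt'  -> booster='gbtree'
#   metric           -> eval_metric
# num_class is inferred automatically by the sklearn wrapper, and
# scale_pos_weight only applies to binary objectives, so both are dropped;
# pass sample_weight to fit() instead to handle the class imbalance.
xgb_model = xgboost.XGBClassifier(learning_rate=0.1,
                                  n_estimators=1000,
                                  max_depth=10,
                                  colsample_bytree=0.7,
                                  booster='gbtree',
                                  eval_metric='mlogloss')
hr_pred = xgb_model.fit(x_train, y_train).predict(x_test)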


You can try this:

import xgboost as xgb

xgb.set_config(verbosity=0)
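If you would rather silence the messages only around a specific block, xgboost.config_context offers a scoped alternative (a sketch; the global verbosity is restored once the block exits):

import xgboost as xgb

# verbosity=0 applies only inside the with-block
with xgb.config_context(verbosity=0):
    xgb_model = xgb.XGBClassifier(eval_metric='mlogloss')
    # fit/predict here as usual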
  • Welcome to Stack Overflow! Your answer may show how to *hide* the warning, but the question was how to *fix* it. – anestv Dec 16 '21 at 01:34
  • This is not correct. By setting this, you are suppressing the warning, which is not desired; focus on the solution rather than on suppressing the message. – Cesar Flores May 02 '23 at 16:36