
I fully realize that I will likely be embarrassed for missing something obvious, but this has me stumped. I am tuning an LGBM model using Optuna, and my notebook gets flooded with warning messages. How can I suppress them while leaving errors (and ideally trial results) on? Code below:

import optuna
import sklearn
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_squared_error

optuna.logging.set_verbosity(optuna.logging.ERROR)

import warnings
warnings.filterwarnings('ignore')

def objective(trial):
    list_bins = [25, 50, 75, 100, 125, 150, 175, 200, 225, 250, 500, 750, 1000]

    param = {
        'lambda_l1': trial.suggest_loguniform('lambda_l1', 1e-8, 10.0),
        'lambda_l2': trial.suggest_loguniform('lambda_l2', 1e-8, 10.0),
        'colsample_bytree': trial.suggest_categorical('colsample_bytree', [0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]),
        'subsample': trial.suggest_categorical('subsample', [0.4, 0.5, 0.6, 0.7, 0.8, 1.0]),
        'learning_rate': trial.suggest_categorical('learning_rate', [0.006, 0.008, 0.01, 0.014, 0.017, 0.02, 0.05]),
        'max_depth': trial.suggest_categorical('max_depth', [10, 20, 50, 100]),
        'num_leaves': trial.suggest_int('num_leaves', 2, 1000),
        'feature_fraction': trial.suggest_uniform('feature_fraction', 0.1, 1.0),
        'bagging_fraction': trial.suggest_uniform('bagging_fraction', 0.1, 1.0),
        'bagging_freq': trial.suggest_int('bagging_freq', 1, 15),
        'min_child_samples': trial.suggest_int('min_child_samples', 1, 300),
        'cat_smooth': trial.suggest_int('cat_smooth', 1, 256),
        'cat_l2': trial.suggest_int('cat_l2', 1, 256),
        'max_bin': trial.suggest_categorical('max_bin', list_bins)
    }
    

    model = LGBMRegressor(**param, objective='regression', metric='rmse', boosting_type='gbdt',
                          verbose=-1, random_state=42, n_estimators=20000,
                          cat_feature=[x for x in range(len(cat_features))])

    model.fit(X_train, y_train, eval_set=[(X_test, y_test)], early_stopping_rounds=200, verbose=False)

    preds = model.predict(X_test)

    rmse = mean_squared_error(y_test, preds, squared=False)
    
    return rmse


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=300)

print("Number of finished trials: {}".format(len(study.trials)))

print("Best trial:")
trial = study.best_trial

print("  Value: {}".format(trial.value))

print("  Params: ")
for key, value in trial.params.items():
    print("    {}: {}".format(key, value))
    

What I am trying to suppress is this:

[LightGBM] [Warning] feature_fraction is set=0.7134336417771784, colsample_bytree=0.4 will be ignored. Current value: feature_fraction=0.7134336417771784
[LightGBM] [Warning] lambda_l1 is set=0.0001621506831365743, reg_alpha=0.0 will be ignored. Current value: lambda_l1=0.0001621506831365743
[LightGBM] [Warning] bagging_fraction is set=0.8231149550002105, subsample=0.5 will be ignored. Current value: bagging_fraction=0.8231149550002105
[LightGBM] [Warning] bagging_freq is set=4, subsample_freq=0 will be ignored. Current value: bagging_freq=4
[LightGBM] [Warning] lambda_l2 is set=0.00010964883369301453, reg_lambda=0.0 will be ignored. Current value: lambda_l2=0.00010964883369301453
[LightGBM] [Warning] feature_fraction is set=0.3726043373358532, colsample_bytree=0.3 will be ignored. Current value: feature_fraction=0.3726043373358532
[LightGBM] [Warning] lambda_l1 is set=1.4643061619613147, reg_alpha=0.0 will be ignored. Current value: lambda_l1=1.4643061619613147

4 Answers


I know this is a late response, but I recently had a similar issue using Optuna with XGBoost, and I was able to turn off the warnings with simplefilter, like this:

from warnings import simplefilter
simplefilter("ignore", category=RuntimeWarning)

I see you are already using the warnings module with ignore; I haven't done it this way myself, but simplefilter worked for me.
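For completeness, here is a minimal sketch of how the filter could be scoped to just the objective with warnings.catch_warnings, so the rest of the notebook keeps its normal warning behavior. The objective body below is a made-up placeholder, not the asker's model:

import warnings

def objective(trial):
    # Scope the filter to this block only; warnings raised elsewhere still show.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", category=RuntimeWarning)
        x = trial.suggest_float("x", -10, 10)  # placeholder search space
        return (x - 2) ** 2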


You should pass 'verbosity': -1 inside the params dict that you later pass to lightgbm.train(). Besides, passing verbose_eval=False to lightgbm.train() is also necessary.

like:

params = {
    ...
    'verbosity': -1
}
gbm = lgbm.train(
    params,
    ...
    verbose_eval=False,
    ...)
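A runnable version of this suggestion on synthetic stand-in data (the dataset and round count below are invented for illustration). Note that verbose_eval was removed in LightGBM 4.0; the log_evaluation callback replaces it, so this sketch uses the callback:

import numpy as np
import lightgbm as lgb

# Synthetic stand-in data, just to make the sketch self-contained.
rng = np.random.default_rng(42)
X = rng.random((200, 5))
y = rng.random(200)
dtrain = lgb.Dataset(X, label=y)

params = {
    'objective': 'regression',
    'metric': 'rmse',
    'verbosity': -1,  # silences LightGBM's own [Warning] lines
}

gbm = lgb.train(
    params,
    dtrain,
    num_boost_round=50,
    valid_sets=[dtrain],
    callbacks=[lgb.log_evaluation(period=0)],  # period=0 disables per-round eval logging
)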
0

LightGBM is basically telling you that you have passed aliases for the parameters, so the default parameter names and their values are being ignored. You will not receive these warnings if you use the default parameter names instead of the aliases: for example, replace feature_fraction with colsample_bytree, replace lambda_l1 with reg_alpha, and so on.

Please note that you have passed duplicate parameters as well (a de-duplicated sketch follows the list):

  1. subsample and bagging_fraction
  2. colsample_bytree and feature_fraction etc.
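A minimal sketch of what the search space could look like with only the canonical sklearn-API names; the value ranges are copied from the question, while the function name suggest_params is made up here:

def suggest_params(trial):
    # Canonical sklearn-API names only: no aliases, no duplicates.
    return {
        'reg_alpha': trial.suggest_loguniform('reg_alpha', 1e-8, 10.0),           # instead of lambda_l1
        'reg_lambda': trial.suggest_loguniform('reg_lambda', 1e-8, 10.0),         # instead of lambda_l2
        'colsample_bytree': trial.suggest_uniform('colsample_bytree', 0.1, 1.0),  # instead of feature_fraction
        'subsample': trial.suggest_uniform('subsample', 0.1, 1.0),                # instead of bagging_fraction
        'subsample_freq': trial.suggest_int('subsample_freq', 1, 15),             # instead of bagging_freq
        'num_leaves': trial.suggest_int('num_leaves', 2, 1000),
        'min_child_samples': trial.suggest_int('min_child_samples', 1, 300),
    }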

If you read the documentation linked here: https://optuna.readthedocs.io/en/stable/faq.html#how-to-suppress-log-messages-of-optuna

you can see that adding this line of code before creating the study solves the issue:

optuna.logging.set_verbosity(optuna.logging.WARNING)
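In context, a minimal sketch, assuming the objective function from the question is already defined:

import optuna

# Silence Optuna's per-trial INFO messages; warnings and errors still show.
optuna.logging.set_verbosity(optuna.logging.WARNING)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=300)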