
I have trained a ranking model with LightGBM using the 'lambdarank' objective. I want to evaluate the model on my test dataset and get the nDCG score at the best iteration, but I have never been able to get the lightgbm.Booster.eval() or lightgbm.Booster.eval_train() functions to work.

First, I created three Dataset instances, namely the train set, valid set and test set:

lgb_train = lgb.Dataset(x_train, y_train, group=query_train, free_raw_data=False)

lgb_valid = lgb.Dataset(x_valid, y_valid, reference=lgb_train, group=query_valid, free_raw_data=False)

lgb_test = lgb.Dataset(x_test, y_test, group=query_test)

I then train my model using lgb_train and lgb_valid:

gbm = lgb.train(params,
                lgb_train,
                num_boost_round=1500,
                categorical_feature=chosen_cate_features,
                valid_sets=[lgb_train, lgb_valid],
                evals_result=evals_result,
                early_stopping_rounds=150)

When I call the eval() or eval_train() functions after training, they raise errors:

gbm.eval(data=lgb_test,name='test')
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-122-7ff5ef5136b8> in <module>()
----> 1 gbm.eval(data=lgb_test,name='test')

/usr/local/lib/python3.6/dist-packages/lightgbm/basic.py in eval(self, data, name, feval)
   1925             raise TypeError("Can only eval for Dataset instance")
   1926         data_idx = -1
-> 1927         if data is self.train_set:
   1928             data_idx = 0
   1929         else:

AttributeError: 'Booster' object has no attribute 'train_set'

gbm.eval_train()
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-123-0ce5fa3139f5> in <module>()
----> 1 gbm.eval_train()

/usr/local/lib/python3.6/dist-packages/lightgbm/basic.py in eval_train(self, feval)
   1956             List with evaluation results.
   1957         """
-> 1958         return self.__inner_eval(self.__train_data_name, 0, feval)
   1959 
   1960     def eval_valid(self, feval=None):

/usr/local/lib/python3.6/dist-packages/lightgbm/basic.py in __inner_eval(self, data_name, data_idx, feval)
   2352         """Evaluate training or validation data."""
   2353         if data_idx >= self.__num_dataset:
-> 2354             raise ValueError("Data_idx should be smaller than number of dataset")
   2355         self.__get_eval_info()
   2356         ret = []

ValueError: Data_idx should be smaller than number of dataset

And when I call the eval_valid() function, it returns an empty list.

Can anyone tell me how to properly evaluate a LightGBM model and get the nDCG score on the test set? Thanks.

John

1 Answer

If you add keep_training_booster=True as an argument to lgb.train, the returned Booster object will be able to execute eval() and eval_train() (though eval_valid() still seems to return an empty list even when valid_sets is provided to lgb.train); see the sketch below the documentation quote.

Documentation says:

keep_training_booster (bool, optional (default=False)) – Whether the returned Booster will be used to keep training. If False, the returned value will be converted into _InnerPredictor before returning.
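
For reference, here is a minimal sketch of how that looks, reusing the variable names from the question (params, lgb_train, lgb_valid, lgb_test, chosen_cate_features are assumed to be defined as shown there) and assuming a LightGBM version where evals_result and early_stopping_rounds are still accepted by lgb.train:

import lightgbm as lgb

evals_result = {}
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=1500,
                categorical_feature=chosen_cate_features,
                valid_sets=[lgb_train, lgb_valid],
                evals_result=evals_result,
                early_stopping_rounds=150,
                keep_training_booster=True)  # keep the training Booster so eval()/eval_train() work

# Each call returns a list of tuples: (dataset_name, metric_name, value, is_higher_better).
# The test Dataset may need to be created with reference=lgb_train so it shares
# the training data's bin mappers.
print(gbm.eval(data=lgb_test, name='test'))   # e.g. [('test', 'ndcg@1', ..., True), ...]
print(gbm.eval_train())                       # nDCG on the training set

Note that evals_result also records the per-iteration nDCG for lgb_train and lgb_valid during training, which is another way to look up the metric at the best iteration.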

Lei