
I am using LightGBM and would like to use average precision (the area under the precision-recall curve) as a metric. I tried defining feval:

cv_result = lgb.cv(params=params, train_set=lgb_train, feature_name=Rel_Feat_Names, feval=APS)

where APS defined as:

import numpy as np
from sklearn.metrics import average_precision_score

def APS(preds, train_data):
    # Keep only the rows whose label is not NaN.
    y_pred_val = []
    y_test_val = []
    labels = train_data.get_label()
    for i, is_nan in enumerate(np.isnan(labels)):
        if not is_nan:
            y_pred_val.append(preds[i])
            y_test_val.append(labels[i])
    aps = average_precision_score(np.array(y_test_val), np.array(y_pred_val))
    return aps

and I get an error:

TypeError: Unknown type of parameter:feval, got:function

I also tried to use "MAP" as the metric:

cv_result = lgb.cv(params=params, train_set=lgb_train, feature_name=Rel_Feat_Names, metrics="MAP")

but got the following error:

"lightgbm.basic.LightGBMError: For MAP metric, there should be query information"

I can't find what query information is required.

How can I use feval correctly and define the query information required for "MAP"?

Thanks

Yochai Edlitz
  • MAP is not the "Average Precision" (the area under the Precision-Recall curve). See https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision and https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Average_precision – snow_abstraction Aug 05 '19 at 14:10
  • I think that "map" with lowercase is the correct parameter. – Hernan C. Vazquez Jul 28 '20 at 16:41

1 Answer


Right now you can put map (alias mean_average_precision) as your metric, as described in the LightGBM parameter docs. Note that map is a ranking metric, so LightGBM needs query/group information telling it which consecutive rows belong to the same query; a sketch of that follows below.
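The query information is just a list of group sizes attached to the Dataset. Here is a minimal sketch; the data, group sizes, and the lambdarank objective are illustrative assumptions (a typical ranking setup), not taken from the question:

import numpy as np
import lightgbm as lgb

# Hypothetical data: 30 rows, 5 features, binary relevance labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
y = rng.integers(0, 2, size=30)

# Rows must be ordered so each query's rows are contiguous;
# group gives the number of rows per query (these sizes are made up).
lgb_train = lgb.Dataset(X, label=y, group=[10, 8, 12])

params = {'objective': 'lambdarank', 'metric': 'map'}
cv_result = lgb.cv(params=params, train_set=lgb_train, num_boost_round=10)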

But to answer the question of applying feval correctly: the customized metric should return a tuple of (eval_name, eval_result, is_higher_better), so in your case:

from sklearn.metrics import average_precision_score

def APS(preds, train_data):
    aps = average_precision_score(train_data.get_label(), preds)
    # Higher average precision is better, so the last element is True.
    return 'aps', aps, True

then also include the following in your params: 'objective': 'binary', 'metric': 'None' (the string 'None' disables the built-in metrics, so only your custom one is reported).
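Putting it together, a minimal sketch of the full cv call; the data and feature names are hypothetical stand-ins for the question's variables:

import numpy as np
import lightgbm as lgb
from sklearn.metrics import average_precision_score

# Hypothetical binary-classification data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.integers(0, 2, size=100)
Rel_Feat_Names = ['f0', 'f1', 'f2']

lgb_train = lgb.Dataset(X, label=y, feature_name=Rel_Feat_Names)

def APS(preds, train_data):
    aps = average_precision_score(train_data.get_label(), preds)
    return 'aps', aps, True

params = {'objective': 'binary', 'metric': 'None'}
cv_result = lgb.cv(params=params, train_set=lgb_train, feval=APS)
print(cv_result.keys())  # typically includes 'aps-mean' and 'aps-stdv'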

Rafa