
I am trying to use lightfm.evaluation.precision_at_k() to evaluate the performance of my model.

My questions are around the parameters that I need to pass to it:

  1. Does the test_interactions parameter need to be the exact same shape (with matching user indexes) as the interactions matrix the model was trained on? In the examples I have seen with LightFM's Movielens data, the test and train sets have the same number of rows, so row i refers to the same user in both. That would make sense, since the model itself does not store any user ID -> matrix index mappings. However, I wonder whether I can use precision_at_k() at all if I only want to run the evaluation on a subset of users. If not, I guess I would have to iterate over my test users by hand, call .predict() for each one, and compute precision-at-k in my own code (see the sketch at the end of this question).

  2. I trained the model with item features, but I'm confused about why I need to pass them again as item_features to precision_at_k(). If I'm just trying to predict recommendations for a user who was part of my training data (but now has some new interaction data), and the items' features haven't changed, is it safe to simply not pass item_features here? If I do have to pass them, I have to store them somewhere along with the model, which is painful, and I'm not sure why it is needed. What are item_features used for in the precision_at_k() case? (A sketch of what I think the call should look like follows below.)
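For context, here is a minimal sketch of what I believe the intended usage looks like, using fetch_movielens() as a stand-in for my own data; the pickling at the end is just my guess at how to keep item_features around, not something I found in the docs:

```python
import pickle

import numpy as np
from lightfm import LightFM
from lightfm.cross_validation import random_train_test_split
from lightfm.datasets import fetch_movielens
from lightfm.evaluation import precision_at_k

# Movielens is just a stand-in here for my own data.
data = fetch_movielens(genre_features=True)
item_features = data["item_features"]

# Split a single interaction matrix; both halves keep the full
# (n_users, n_items) shape, so row i is the same user in train and test.
train, test = random_train_test_split(
    data["train"],
    test_percentage=0.2,
    random_state=np.random.RandomState(42),
)
assert train.shape == test.shape

model = LightFM(loss="warp")
model.fit(train, item_features=item_features, epochs=10)

# Passing the same item_features again at evaluation time, since the model
# (as far as I can tell) stores embeddings per feature, not the
# item -> feature mapping itself.
precisions = precision_at_k(
    model, test,
    train_interactions=train,
    item_features=item_features,
    k=5,
)
print(precisions.mean())

# My workaround guess for the storage problem: pickle the feature matrix
# together with the model.
with open("model.pkl", "wb") as fh:
    pickle.dump({"model": model, "item_features": item_features}, fh)
```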

I might end up just manually evaluating predictions for each user and skipping precision_at_k() completely.
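If I go that route, it would be a hand-rolled loop roughly like this (assuming model, train, test, and item_features from the snippet above, and that the test rows share the training matrix's user indexing):

```python
import numpy as np

def manual_precision_at_k(model, train, test, user_ids, k=5,
                          item_features=None):
    """Precision@k computed by hand for an arbitrary subset of user rows."""
    train = train.tocsr()
    test = test.tocsr()
    n_items = train.shape[1]
    precisions = []
    for user_id in user_ids:
        held_out = test[user_id].indices  # this user's test-set items
        if len(held_out) == 0:
            continue  # nothing to evaluate for this user
        scores = model.predict(user_id, np.arange(n_items),
                               item_features=item_features)
        # Mask items already seen in training so they can't be "recommended".
        scores[train[user_id].indices] = -np.inf
        top_k = np.argsort(-scores)[:k]
        precisions.append(len(np.intersect1d(top_k, held_out)) / k)
    return np.array(precisions)

subset = [0, 1, 2]  # only evaluate these user rows
print(manual_precision_at_k(model, train, test, subset, k=5,
                            item_features=item_features).mean())
```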
