I am trying to use `lightfm.evaluation.precision_at_k()` to evaluate the performance of my model. My questions are about the parameters I need to pass to it:
Does the `test_interactions` parameter need to be the exact same shape (user indexes matching) as the interactions matrix the model was trained on? In the examples I have seen with LightFM's Movielens data, the test and train sets have the same number of rows, so they index the exact same users in the same order. That would make sense, since the model itself does not store any user ID -> matrix index mappings. However, I wonder whether I can use `precision_at_k()` at all if I just want to run the evaluation on a subset of users. If not, I guess I would have to iterate over my test users by hand, call `.predict()` on each one, and calculate precision-at-k for each user in my own code.
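In case it clarifies what I'm after, here is the workaround I've been sketching for the subset case. It keeps the full `(n_users, n_items)` shape but only populates rows for the users I care about, relying on the behaviour (documented, I believe) that `precision_at_k()` only returns scores for users with at least one test interaction. All names here (`model`, `test_interactions`, `item_features`, `subset_user_ids`) are my own placeholders:

```python
import numpy as np
import scipy.sparse as sp
from lightfm.evaluation import precision_at_k

# test_interactions has the same (n_users, n_items) shape and the same
# row/column index mapping as the matrix the model was trained on.
test = test_interactions.tocoo()

# subset_user_ids: placeholder for the internal user indices (matrix
# rows) I actually want to evaluate.
keep = np.isin(test.row, subset_user_ids)

# Same shape as before, but only the subset's rows contain interactions.
subset_test = sp.coo_matrix(
    (test.data[keep], (test.row[keep], test.col[keep])),
    shape=test.shape,
)

# Users whose rows are empty get no score, so the returned array should
# only cover the subset.
scores = precision_at_k(
    model,
    subset_test,
    k=10,
    item_features=item_features,  # same matrix used during training
)
print(scores.mean())
```

Is that a sane way to do it, or is there a supported option I'm missing?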
I trained the model with item features, but I'm confused about why I would need to pass them again as `item_features` to `precision_at_k()`. If I'm just trying to evaluate recommendations for a user who was part of my training set (but now has some new interaction data), and the item features haven't changed, is it safe to just not pass `item_features` again here? If I do have to pass them, I have to store them along with the model somewhere, which is painful, and I'm not sure why it's needed. What are `item_features` used for in the `precision_at_k()` case?
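My current guess is that the model only stores learned embeddings per feature, not the item -> feature mapping itself, so `precision_at_k()` (and `predict()`) need `item_features` to rebuild each item's representation, but I'd appreciate confirmation. If that's the case, the least painful thing I can come up with is serializing the feature matrix right next to the model; the file names below are arbitrary:

```python
import pickle
import scipy.sparse as sp

# At training time: persist the model together with the exact
# item_features matrix that was passed to fit().
with open("lightfm_model.pkl", "wb") as f:
    pickle.dump(model, f)
sp.save_npz("item_features.npz", sp.csr_matrix(item_features))

# At evaluation time: reload both and pass item_features through again.
with open("lightfm_model.pkl", "rb") as f:
    model = pickle.load(f)
item_features = sp.load_npz("item_features.npz")
```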
I might end up just manually evaluating the predictions for each user and skipping `precision_at_k()` completely.
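If I do go the manual route, this is roughly the helper I have in mind (a sketch only: the function name is mine, it skips users with no test interactions, and unlike `precision_at_k()` called with `train_interactions` it does not exclude already-seen training items from the ranking):

```python
import numpy as np

def manual_precision_at_k(model, test_csr, user_ids, k=10,
                          item_features=None):
    """Mean precision@k over user_ids, computed one user at a time.

    test_csr is the (n_users, n_items) CSR matrix of held-out
    interactions; user_ids are internal user indices (matrix rows).
    """
    n_items = test_csr.shape[1]
    precisions = []
    for uid in user_ids:
        positives = test_csr[uid].indices  # held-out items for this user
        if positives.size == 0:
            continue  # nothing to score against
        # int(uid) so a plain Python int reaches predict(), which then
        # repeats it to match the item_ids array.
        scores = model.predict(int(uid), np.arange(n_items),
                               item_features=item_features)
        top_k = np.argsort(-scores)[:k]  # indices of the k highest scores
        precisions.append(np.isin(top_k, positives).mean())
    return float(np.mean(precisions))
```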