
I am evaluating a recommender and I have ROC curves and Precision-Recall curves. When I change some parameters, the ROC and PR curves change slightly differently: sometimes the ROC curve looks better than the PR curve, and sometimes the other way around. Therefore I want both curves. I can boil the ROC curve down to its AUC, and since I have an 11-point PR curve I can take the mean over the 11 points to get a single number.
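For the two summary numbers described above (ROC AUC and the mean of an 11-point interpolated PR curve), here is a minimal sketch using scikit-learn; the relevance labels and scores are made-up toy data, and `eleven_point_ap` is a hypothetical helper implementing the standard 11-point interpolation (max precision at recall >= each level):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve

def eleven_point_ap(y_true, y_score):
    """Mean of interpolated precision at the 11 recall levels 0.0, 0.1, ..., 1.0."""
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    levels = np.linspace(0.0, 1.0, 11)
    # interpolated precision at level r = max precision among points with recall >= r
    interp = [precision[recall >= r].max() for r in levels]
    return float(np.mean(interp))

# toy binary relevance labels and recommender scores (hypothetical)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])

auc = roc_auc_score(y_true, y_score)
ap11 = eleven_point_ap(y_true, y_score)
print(f"ROC AUC = {auc:.4f}, 11-point AP = {ap11:.4f}")
```

Both numbers live on a 0-1 scale, so they can sit side by side in one table; whether to average them further is a judgment call rather than an established convention.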

Can I somehow combine these measures into one number? And is this something that people do, or is it unnecessary?

Is the fact that the ROC looks better than the PR just a subjective thing because I am not good at interpreting the curves, or is it valid that one can be better than the other? (They are not completely different, but it's still noticeable, I think.)

EDIT: Basically I don't want to show tons of plots; I want a table of numbers. Would you combine these numbers in one table, or make a table for each measure?

Puckl

1 Answer


What people most commonly do is use the AUC (area under the ROC curve) or the F-measure as a summary metric. But since you are dealing with recommender systems, as far as I know people like to see the precision and recall curves (like these), because the way precision decays and recall grows as the top-K grows is an important result for these systems.
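The shape described above (precision falling and recall rising as K grows) can be seen with a small sketch; `precision_recall_at_k` and the toy relevance list are made up for illustration:

```python
import numpy as np

def precision_recall_at_k(ranked_relevance, k):
    """Precision@k and Recall@k for one ranked list of 0/1 relevance labels."""
    hits = int(np.sum(ranked_relevance[:k]))
    total_relevant = int(np.sum(ranked_relevance))
    return hits / k, hits / total_relevant

# hypothetical ranked recommendation list: 1 = relevant item, 0 = not relevant
ranked = np.array([1, 0, 1, 1, 0, 0, 1, 0])

for k in (1, 3, 5, 8):
    p, r = precision_recall_at_k(ranked, k)
    print(f"k={k}: precision={p:.2f}, recall={r:.2f}")
```

A table with one row per K (columns precision@K and recall@K) is a common compact alternative to plotting the full curves.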

But if you still want a more thorough answer on precision-recall versus ROC curves, read this paper.

Augusto