Traditional recommender system evaluation focuses on raising the accuracy or lowering the rating prediction error of the recommendation algorithm. Recently, however, discrepancies between commonly used metrics (e.g. precision, recall, root-mean-square error) and the quality experienced by users have been brought to light. This project aims to address these discrepancies by developing novel means of recommender system evaluation that encompass both the qualities identified through traditional evaluation metrics and user-centric factors such as diversity, serendipity, and novelty, and by bringing further insight into the topic by analyzing and translating the problem of evaluation from an Information Retrieval perspective.
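To make the contrast between these metric families concrete, the sketch below (a minimal illustration, not code from the paper; all function names and toy data are assumptions introduced here) computes precision@k, recall@k, and RMSE alongside one simple user-centric measure, intra-list diversity over item feature sets.

import math

def precision_recall_at_k(recommended, relevant, k):
    # Precision@k and Recall@k for one user's ranked recommendation list.
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def rmse(predicted, actual):
    # Root-mean-square error over item -> rating predictions.
    errors = [(predicted[i] - actual[i]) ** 2 for i in actual if i in predicted]
    return math.sqrt(sum(errors) / len(errors))

def intra_list_diversity(recommended, item_features):
    # A simple user-centric measure: average pairwise Jaccard distance
    # between the feature sets (e.g. genres) of the recommended items.
    pairs, total = 0, 0.0
    for i, a in enumerate(recommended):
        for b in recommended[i + 1:]:
            fa, fb = item_features[a], item_features[b]
            jaccard = len(fa & fb) / len(fa | fb) if fa | fb else 0.0
            total += 1.0 - jaccard
            pairs += 1
    return total / pairs if pairs else 0.0

# Toy data: a list can score well on accuracy yet offer little diversity.
recommended = ["m1", "m2", "m3"]
relevant = {"m1", "m2"}
predicted = {"m1": 4.5, "m2": 3.8, "m3": 4.1}
actual = {"m1": 5.0, "m2": 4.0, "m3": 3.0}
features = {"m1": {"action"}, "m2": {"action"}, "m3": {"action", "sci-fi"}}

print(precision_recall_at_k(recommended, relevant, k=3))  # (0.666..., 1.0)
print(rmse(predicted, actual))                            # ~0.71
print(intra_list_diversity(recommended, features))        # ~0.33, low diversity

An evaluation in the spirit of the project described above would report such user-centric measures alongside the accuracy figures rather than relying on accuracy alone.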

Said, A., Bellogín Kouki, A., de Vries, A., & Kille, B. (2013). Information Retrieval and User-Centric Recommender System Evaluation. In Extended Proceedings of The 21st Conference on User Modeling, Adaptation and Personalization (UMAP'13). CEUR.