The evaluation of recommender systems is crucial to their development. Today's recommendation landscape offers many standardized algorithms and approaches; however, no standardized method exists for the experimental setup of evaluation, not even for widely used measures such as precision and root-mean-squared error. This makes it problematic to compare recommendation results even when the same datasets are used. In this paper, we propose an evaluation protocol developed specifically with the recommendation use case in mind, i.e., the recommendation of one or several items to an end user. The protocol attempts to closely mimic the scenario of a deployed (production) recommender system, taking specific user aspects into consideration and allowing a comparison of small- and large-scale recommender systems. The protocol is evaluated on common recommendation datasets and compared to traditional evaluation settings found in the research literature. Our results show that the proposed protocol captures the quality of a recommender system better than traditional evaluation does, and is not affected by characteristics of the data (e.g., size, sparsity).
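The incomparability the abstract points to can be made concrete: even a measure as common as precision@N depends on how the candidate set being ranked is constructed. The sketch below (not the protocol from the paper; all data is synthetic and the scoring function is a hypothetical stand-in) evaluates the very same model under two conventions found in the literature, ranking all unseen items versus ranking the held-out items against a sample of negatives, and the two generally report different precision values.

    import random

    rng = random.Random(42)
    N = 10

    all_items = list(range(1000))                    # synthetic item catalogue
    train_items = set(rng.sample(all_items, 50))     # user's training interactions
    unseen = [i for i in all_items if i not in train_items]
    test_items = set(rng.sample(unseen, 5))          # held-out relevant items

    # Stand-in recommender: relevant items get a modest score boost plus noise.
    scores = {i: rng.random() + (0.5 if i in test_items else 0.0) for i in all_items}

    def precision_at_n(candidates, relevant, n=N):
        """Fraction of the top-n ranked candidates that are relevant."""
        top = sorted(candidates, key=scores.get, reverse=True)[:n]
        return sum(i in relevant for i in top) / n

    # Convention A: rank every item the user has not seen during training.
    rank_all = unseen

    # Convention B: rank the relevant items against 100 sampled negatives.
    negatives = rng.sample([i for i in unseen if i not in test_items], 100)
    sampled = list(test_items) + negatives

    print(f"precision@{N}, rank-all candidates: {precision_at_n(rank_all, test_items):.2f}")
    print(f"precision@{N}, sampled candidates:  {precision_at_n(sampled, test_items):.2f}")

Because convention B ranks the relevant items against far fewer competitors, it typically reports a noticeably higher precision for the identical model, which illustrates why results computed under unspecified experimental setups are rarely comparable across papers.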
LSRS
ACM RecSys Workshop on Large-Scale Recommender Systems
Human-Centered Data Analytics

Said, A., Bellogín Kouki, A., & de Vries, A. (2013). A Top-N Recommender System Evaluation Protocol Inspired by Deployed Systems. In Proceedings of the 2013 ACM RecSys Workshop on Large-Scale Recommender Systems (LSRS).