In the evaluation of recommender systems, the quality of recommendations made by a newly proposed algorithm is compared to the state of the art, using a given quality measure and dataset. The validity of such an evaluation rests on the assumption that the results do not exhibit artefacts of the process used to collect the dataset. The main difference between online and offline evaluation is that in the online setting, a user's response to a recommendation is observed only once. We used the NewsREEL challenge to gain a deeper understanding of what this difference implies for comparisons between recommender systems. The experiments aim to quantify the expected degree of variation in performance that cannot be attributed to differences between systems. We classify and discuss the non-algorithmic causes of the observed performance differences.
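The abstract does not detail the experimental setup, but the core idea of quantifying performance variation that cannot be attributed to algorithmic differences can be illustrated with an A/A-style simulation. The sketch below is an assumption-laden illustration rather than the paper's method: it assumes two identical recommenders with the same true click-through rate and measures how far their observed CTRs drift apart purely by chance; all parameter values are hypothetical.

```python
import random

def simulate_aa_test(true_ctr=0.02, impressions=10_000, trials=1_000, seed=42):
    """Simulate repeated A/A comparisons: two identical recommenders with the
    same true click-through rate, each serving `impressions` requests.
    Returns the observed CTR differences across trials, i.e. the spread that
    arises with no algorithmic difference at all."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        # Each impression yields a click with probability true_ctr,
        # independently for the two (identical) systems.
        clicks_a = sum(rng.random() < true_ctr for _ in range(impressions))
        clicks_b = sum(rng.random() < true_ctr for _ in range(impressions))
        diffs.append(clicks_a / impressions - clicks_b / impressions)
    return diffs

if __name__ == "__main__":
    diffs = sorted(simulate_aa_test())
    lo = diffs[int(0.025 * len(diffs))]
    hi = diffs[int(0.975 * len(diffs))]
    # Any observed CTR gap inside this interval could plausibly be chance,
    # not evidence that one recommender outperforms the other.
    print(f"95% of A/A CTR differences fall within [{lo:+.4f}, {hi:+.4f}]")
```

With the assumed settings, the interval is non-trivial even at 10,000 impressions per arm, which is the kind of baseline variation an online comparison such as NewsREEL has to account for before attributing a performance gap to the algorithms themselves.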


Gebremeskel, G., & de Vries, A. (2016). Random performance differences between online recommender system algorithms. Presented at the International Conference of the Cross-Language Evaluation Forum for European Languages. doi:10.1007/978-3-319-44564-9_15