2018-06-15
Fair benchmarking considered difficult: Common pitfalls in database performance testing
Publication
Presented at the International Workshop on Testing Database Systems (June 2018), Houston, Texas, USA.
Performance benchmarking is one of the most commonly used methods for comparing different systems or algorithms, both in scientific literature and in industrial publications. While performance measurements might seem objective on the surface, there are many ways to influence benchmark results to favor one system over another, whether by accident or on purpose. In this paper, we perform a study of the common pitfalls in DBMS performance comparisons, and give advice on how they can be spotted and avoided so that a fair performance comparison between systems can be made. We illustrate the common pitfalls with a series of mock benchmarks, which show large differences in performance where none should be present.
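The mock benchmarks themselves are given in the paper. As an illustration of the kind of pitfall the abstract describes, the following Python sketch (a hypothetical example, not taken from the paper) times the same SQLite query two ways: once as a single cold run that also absorbs setup cost, and once as the median of repeated warm runs. The two numbers can differ substantially even though the work being measured is identical.

```python
# Illustrative sketch (not from the paper): how measurement choices alone
# can skew a benchmark result for the exact same query on the same system.
import sqlite3
import statistics
import time

def setup_db():
    """Create an in-memory SQLite database with a small test table."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (i INTEGER)")
    con.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(100_000)))
    return con

QUERY = "SELECT sum(i) FROM t"

# Pitfall: a single cold measurement that also includes database setup,
# so caching and initialization effects dominate the reported time.
start = time.perf_counter()
con = setup_db()
con.execute(QUERY).fetchone()
cold = time.perf_counter() - start

# Fairer: warm up first, then report the median of several repeated runs.
for _ in range(3):                      # warm-up runs, discarded
    con.execute(QUERY).fetchone()
times = []
for _ in range(10):                     # measured runs
    start = time.perf_counter()
    con.execute(QUERY).fetchone()
    times.append(time.perf_counter() - start)

print(f"cold run + setup: {cold * 1000:.2f} ms")
print(f"warm median:      {statistics.median(times) * 1000:.2f} ms")
```

Reporting which of these two protocols was used (and applying the same one to every system under test) is exactly the kind of methodological detail the paper argues a fair comparison must make explicit.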
| Additional Metadata | |
|---|---|
| DOI | doi.org/10.1145/3209950.3209955 |
| Conference | International Workshop on Testing Database Systems |
| Organisation | Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands |
| Citation | Raasveldt, M., Timbó Holanda, P., Gubner, T., & Mühleisen, H. (2018). Fair benchmarking considered difficult: Common pitfalls in database performance testing. In Workshop on Testing Database Systems (pp. 1–6). doi:10.1145/3209950.3209955 |