Classifiers are applied in many domains where classification errors have significant implications. However, end-users may not always understand these errors and their impact, as error visualizations are typically designed for experts and for improving classifiers. We discuss a visualization design that addresses the specific needs of classifiers’ end-users. We evaluate this design with users from three levels of expertise, and compare it with ROC curves and confusion matrices. We identify key difficulties in understanding classification errors, and how the visualization designs addressed or aggravated them. The main issues concerned confusions between the actual and predicted classes (e.g., confusing False Positives with False Negatives). Machine learning terminology, the complexity of ROC curves, and the symmetry of confusion matrices aggravated these confusions. The end-user-oriented visualization reduced the difficulties by using several visual features to clarify the actual and predicted classes, and by using more tangible metrics and representations. Our results contribute to supporting end-users’ understanding of classification errors, and to informed decisions when choosing or tuning classifiers.
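
To make the actual-vs-predicted confusion concrete, the minimal sketch below (not taken from the paper; it assumes scikit-learn and hypothetical toy data) shows how the orientation of a confusion matrix determines which cells count as False Positives and which as False Negatives, and how transposing the matrix silently swaps the two.

```python
# Minimal sketch (illustrative only, not from the paper): how the actual/predicted
# orientation of a confusion matrix determines False Positives vs. False Negatives.
from sklearn.metrics import confusion_matrix

# Toy ground truth and predictions (1 = positive class, 0 = negative class).
y_actual    = [1, 1, 1, 0, 0, 0, 0, 1]
y_predicted = [1, 0, 1, 0, 1, 0, 0, 0]

# scikit-learn's convention: rows are actual classes, columns are predicted classes.
# cm[i, j] counts items whose actual class is i and whose predicted class is j.
cm = confusion_matrix(y_actual, y_predicted, labels=[0, 1])
tn, fp, fn, tp = cm.ravel()

print(f"False Positives (actual 0, predicted 1): {fp}")  # 1
print(f"False Negatives (actual 1, predicted 0): {fn}")  # 2

# Swapping the arguments transposes the matrix, which silently exchanges FP and FN --
# the kind of symmetry that makes confusion matrices easy to misread.
cm_swapped = confusion_matrix(y_predicted, y_actual, labels=[0, 1])
print((cm_swapped == cm.T).all())  # True
```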

Beauxis-Aussalet, E., van Doorn, J., & Hardman, L. (2018). Supporting End-User Understanding of Classification Errors (extended version). Information Access [IA].