Supervised Machine Learning techniques can automatically extract information from a variety of multimedia sources, e.g., images, text, sound, and video. However, they produce imperfect results, since multimedia content can be misinterpreted. Errors are commonly measured using confusion matrices, which encode type I and type II errors for each class. Non-expert users have difficulty understanding and using confusion matrices: they must be read both column- and row-wise, which is tedious and error-prone, and their technical concepts require explanation. Further, visualizations commonly rely on complex metrics, e.g., Precision, Recall, and F1 scores. These can be overwhelming and misleading for non-experts, since they may be inappropriate for specific use cases. For instance, type II errors (False Negatives) are critical in medical diagnosis, while type I errors (False Positives) are better tolerated; in optical sorting of manufactured products (defect detection), the sensitivity to errors can be the opposite. We propose a novel visualization design that addresses the needs of non-expert users. Our visualization is intended to be easier to understand, to minimize the risk of misinterpretation, and to do so across all kinds of use cases. Future work will evaluate our design with both experts and non-experts, and compare its effectiveness with that of traditional ROC and Precision/Recall curves.
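To make the quantities above concrete, the following is a minimal Python sketch (not the paper's implementation; function names are illustrative) showing how a confusion matrix encodes both error types per class, and how Precision, Recall, and F1 derive from it:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, classes):
    """Build a confusion matrix: rows are actual classes, columns are predictions."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in classes] for t in classes]

def per_class_metrics(cm, classes):
    """Derive type I/II error counts and Precision/Recall/F1 for each class.

    For class c: FP (type I error) is the column sum minus the diagonal cell,
    FN (type II error) is the row sum minus the diagonal cell -- hence the
    need to read the matrix both column- and row-wise.
    """
    metrics = {}
    for i, c in enumerate(classes):
        tp = cm[i][i]
        fp = sum(cm[r][i] for r in range(len(classes))) - tp  # type I errors
        fn = sum(cm[i]) - tp                                  # type II errors
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        metrics[c] = {"FP": fp, "FN": fn,
                      "precision": precision, "recall": recall, "f1": f1}
    return metrics
```

Note how a single summary score can hide the asymmetry that matters to a given use case: two classifiers with equal F1 may trade FP for FN in opposite directions, which is exactly what a medical-diagnosis user versus a defect-detection user would weigh differently.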