Classifiers can provide counts of items per class, but systematic classification errors bias these counts (e.g., if a class is often misclassified as another, its size may be underestimated). To handle such classification biases, the statistics and epidemiology communities have devised methods for estimating unbiased class sizes (or class probabilities) without identifying which individual items are misclassified. These bias correction methods are applicable to machine learning classifiers, but in some cases they yield high result variance and increased biases. We present the applicability and drawbacks of existing methods and extend them with three novel methods. Our Sample-to-Sample method provides accurate confidence intervals for the bias correction results. Our Maximum Determinant method predicts which classifier yields the least result variance. Our Ratio-to-TP method details the error decomposition in classifier outputs (i.e., how many items classified as class Cy truly belong to class Cx, for all possible classes) and has properties of interest for applying the Maximum Determinant method. We demonstrate our methods empirically and discuss the need for theory and guidelines for choosing which method and classifier to apply.
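For readers unfamiliar with this family of corrections, the sketch below illustrates the classical confusion-matrix-based correction that such existing methods build on: misclassification rates estimated on a labeled validation set are used to re-estimate true class sizes from raw classifier counts. This is a minimal illustration of the general idea only, not an implementation of the paper's Sample-to-Sample, Maximum Determinant, or Ratio-to-TP methods, and all class counts below are invented for the example.

```python
import numpy as np

# Hypothetical validation-set confusion matrix: rows = true class, columns = predicted class.
# Entry [i, j] counts validation items of true class i that the classifier labelled as class j.
# (Illustrative numbers, not taken from the paper.)
conf = np.array([
    [90,  8,  2],
    [10, 80, 10],
    [ 5, 15, 80],
], dtype=float)

# Row-normalise to estimate the misclassification rates P(predicted = j | true = i).
rates = conf / conf.sum(axis=1, keepdims=True)

# Raw counts of items assigned to each class by the classifier on new, unlabeled data.
observed = np.array([400, 350, 250], dtype=float)

# Classical correction: expected observed counts are rates.T @ true_counts,
# so solving that linear system gives bias-corrected estimates of the true class sizes,
# without identifying which individual items were misclassified.
corrected = np.linalg.solve(rates.T, observed)

print("raw classifier counts:   ", observed)
print("bias-corrected estimates:", corrected.round(1))
```

As the abstract notes, this kind of correction can suffer from high variance and residual bias when the estimated rates are noisy or the rate matrix is nearly singular, which is the problem the paper's extensions address.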
https://doi.org/10.1109/DSAA.2017.52
Supporting humans in knowledge gathering and question answering with respect to marine and environmental monitoring through analysis of multiple video streams
International Conference on Data Science and Advanced Analytics
Human-Centered Data Analytics

Beauxis-Aussalet, E., & Hardman, L. (2017). Extended methods to handle classification biases. In 2017 International Conference on Data Science and Advanced Analytics, DSAA 2017 (pp. 765–774). https://doi.org/10.1109/DSAA.2017.52