The Support Vector Data Description (SVDD) was introduced to address the problem of anomaly (or outlier) detection. It essentially fits the smallest possible sphere around the given data points, allowing some points to be excluded as outliers. Whether or not a point is excluded is governed by a slack variable. Mathematically, the values of the slack variables are obtained by minimizing a cost function that balances the size of the sphere against the penalty associated with outliers. In this paper we argue that the SVDD slack variables lack a clear geometric meaning, and we therefore re-analyze the cost function to gain better insight into the characteristics of the solution. We also introduce and analyze two new definitions of slack variables and show that one of the proposed methods behaves more robustly with respect to outliers, thus providing tighter bounds than SVDD.
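For context, a minimal sketch of the standard SVDD primal problem (Tax and Duin), which the cost function described above corresponds to; the symbols R (sphere radius), a (sphere center), xi_i (slack variables) and C (trade-off parameter) follow the common convention and are not taken from the paper itself:

% Standard SVDD primal formulation (conventional notation, assumed rather than quoted from the paper)
\begin{align*}
\min_{R,\, a,\, \xi} \quad & R^2 + C \sum_{i=1}^{n} \xi_i \\
\text{s.t.} \quad & \lVert x_i - a \rVert^2 \le R^2 + \xi_i, \qquad \xi_i \ge 0, \qquad i = 1, \dots, n.
\end{align*}

In this formulation each slack xi_i equals max(0, ||x_i - a||^2 - R^2), a difference of squared distances rather than a distance, which is the kind of geometric ambiguity the abstract refers to.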
Publisher: Springer (Heidelberg)
Editor: P. Perner
DOI: doi.org/10.1007/978-3-642-23184-1_3
Conference: Industrial Conference on Data Mining
Subject: Intelligent and autonomous systems

Pauwels, E., & Ambekar, O. (2011). One Class Classification for Anomaly Detection: Support Vector Data Description Revisited. In P. Perner (Ed.), Proceedings of Industrial Conference on Data Mining 2011 (pp. 25–39). Heidelberg: Springer. doi:10.1007/978-3-642-23184-1_3