We present the first algorithm that combines privacy-preserving technologies and state-of-the-art explainable AI to enable privacy-friendly explanations of black-box AI models. We provide a secure algorithm for contrastive explanations of black-box machine learning models that securely trains and evaluates local foil trees. Our work shows that the quality of these explanations can be upheld while ensuring the privacy of both the training data and the model itself. An extended version of this paper is available at the Cryptology ePrint Archive [16].
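To illustrate the underlying (non-secure) idea of a local foil tree, the sketch below reduces it to its simplest form: sample points around the instance to explain, label them with the black-box model as fact versus foil, and find the single feature threshold that best separates the foil class. The black-box function, feature semantics, and function names here are hypothetical illustrations, not the paper's secure protocol, which performs such training under encryption.

```python
import numpy as np

def blackbox(X):
    # Hypothetical black-box model: class 1 iff feature 0 > 50 and feature 1 > 30.
    return ((X[:, 0] > 50) & (X[:, 1] > 30)).astype(int)

def contrastive_stump(x, foil_class, model, n=500, scale=10.0, rng=None):
    """Plaintext sketch of a local foil tree reduced to a single decision
    stump: sample around x, label fact-vs-foil with the black-box model,
    and pick the (feature, threshold) that best separates the foil class.
    Returns (feature index, threshold)."""
    rng = rng or np.random.default_rng(0)
    # Perturb the instance locally to obtain a neighbourhood sample.
    X = x + rng.normal(0.0, scale, size=(n, len(x)))
    # 1 = sample is classified as the foil class, 0 = otherwise (the fact).
    y = (model(X) == foil_class).astype(int)
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(np.round(X[:, j])):
            # Try the split in both directions and keep the better accuracy.
            acc = max(((X[:, j] > t) == y).mean(),
                      ((X[:, j] <= t) == y).mean())
            if best is None or acc > best[0]:
                best = (acc, j, t)
    return best[1], best[2]
```

For an instance such as `x = [60, 25]` (classified as the fact class 0 by the toy model above), the stump identifies feature 1 with a threshold near 30, yielding a contrastive statement of the form "the outcome would be the foil class if feature 1 exceeded 30". A full foil tree generalises this to a decision tree, comparing the fact leaf with the nearest foil leaf.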

Lecture Notes in Computer Science
International Symposium on Cyber Security, Cryptology and Machine Learning, CSCML 2022
Centrum Wiskunde & Informatica, Amsterdam (CWI), The Netherlands

Veugen, P.J.M., Kamphorst, B., & Marcus, M. (2022). Privacy-preserving contrastive explanations with local foil trees. In Proceedings of CSCML 2022 (pp. 88–89). doi:10.1007/978-3-031-07689-3_7