In this work, we investigate the privacy risks associated with model inversion attribute inference attacks. Specifically, we explore a case in which a governmental institute aims to release a trained machine learning model to the public (e.g., for collaboration or transparency reasons) without threatening privacy. The model predicts change of living place and is useful for studying individuals’ tendency to relocate; for this reason, it is called a propensity-to-move model. Our results first show that sensitive information can leak when a propensity-to-move model is trained on the original data, in the form collected from individuals. To address this privacy risk, we propose a data synthesis plus privacy preservation approach: we replace the original training data with synthetic data, on top of which we apply privacy-preserving techniques. Our approach aims to maintain the prediction performance of the model while controlling the privacy risk. Whereas related work has studied one-step synthesis of privacy-preserving data, here we first synthesize data and then apply privacy-preserving techniques. We carry out experiments involving attacks on individuals included in the training data (“inclusive individuals”) as well as attacks on individuals not included in the training data (“exclusive individuals”). In this regard, our work goes beyond conventional model inversion attribute inference attacks, which focus only on individuals contained in the training data. Our results show that a propensity-to-move model trained on synthetic training data protected with privacy-preserving techniques achieves performance comparable to a model trained on the original training data. At the same time, we observe a reduction in the efficacy of certain attacks.
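To make the setup concrete, the following is a minimal, self-contained Python sketch of the synthesize-then-protect pipeline and a confidence-based attribute inference attack in the spirit of model inversion attacks. The toy data, the class-conditional Gaussian synthesizer, the randomized-response step (randomize_sensitive), and the attack implementation are illustrative assumptions, not the paper’s actual methods.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in data: four non-sensitive features, one binary sensitive
# attribute (last column), and a binary "moved" label. Purely illustrative.
n = 2000
X_ns = rng.normal(size=(n, 4))
s = rng.integers(0, 2, size=n)
y = (X_ns[:, 0] + 1.5 * s - 0.5 + rng.normal(scale=0.5, size=n) > 0).astype(int)
X = np.column_stack([X_ns, s])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def synthesize(X, y, rng):
    """Step 1 (assumed): sample synthetic records from a class-conditional
    Gaussian fit, with the sensitive bit redrawn from its marginal."""
    X_syn, y_syn = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(Xc.shape[1])
        draw = rng.multivariate_normal(Xc.mean(axis=0), cov, size=len(Xc))
        draw[:, -1] = rng.binomial(1, Xc[:, -1].mean(), size=len(Xc))
        X_syn.append(draw)
        y_syn.append(np.full(len(Xc), c))
    return np.vstack(X_syn), np.concatenate(y_syn)

def randomize_sensitive(X, p_keep, rng):
    """Step 2 (assumed): randomized response on the sensitive column --
    keep the value with probability p_keep, otherwise redraw uniformly."""
    X = X.copy()
    flip = rng.random(len(X)) > p_keep
    X[flip, -1] = rng.integers(0, 2, size=flip.sum())
    return X

def attribute_inference(model, X_ns_rows, y_true):
    """Attack sketch: the adversary knows the non-sensitive features and the
    true label, tries every candidate sensitive value, and keeps the one
    giving the model's highest confidence in that label."""
    guesses = []
    for x, yt in zip(X_ns_rows, y_true):
        scores = [model.predict_proba(np.append(x, v).reshape(1, -1))[0, yt]
                  for v in (0, 1)]
        guesses.append(int(np.argmax(scores)))
    return np.array(guesses)

# Baseline: model trained directly on the original data.
model_orig = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Proposed pipeline: synthesize first, then apply the privacy technique.
X_syn, y_syn = synthesize(X_tr, y_tr, rng)
model_priv = RandomForestClassifier(random_state=0).fit(
    randomize_sensitive(X_syn, p_keep=0.7, rng=rng), y_syn)

for name, model in [("original", model_orig), ("synthetic+protected", model_priv)]:
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.2f}")
    # "Inclusive" = records in the original training data; "exclusive" = not.
    for group, Xg, yg in [("inclusive", X_tr, y_tr), ("exclusive", X_te, y_te)]:
        g = attribute_inference(model, Xg[:200, :-1], yg[:200])
        print(f"  attack accuracy ({group}): {(g == Xg[:200, -1]).mean():.2f}")

In this sketch, lowering p_keep trades attack efficacy against model fidelity, mirroring the privacy-utility trade-off the paper investigates; the exact numbers depend on the placeholder synthesizer and perturbation.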

Lecture Notes in Computer Science, 26th International Conference on Information Security (ISC 2023). https://doi.org/10.1007/978-3-031-49187-0_1
Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands

Slokom, M., de Wolf, P.-P., & Larson, M. (2023). Exploring privacy-preserving techniques on synthetic data as a defense against model inversion attacks. In Proceedings of the 26th International Conference on Information Security (ISC 2023), Lecture Notes in Computer Science (pp. 3–23). Springer. https://doi.org/10.1007/978-3-031-49187-0_1