2024
Considerations on the theory of training models with differential privacy
Publication
In federated learning, collaborative learning is carried out by a set of clients, each of whom wants to remain in control of how their local training data is used and, in particular, how that data can remain private. Differential privacy is one method to limit privacy leakage. We provide a general overview of its framework and provable properties, adopt the more recent hypothesis-testing-based definition called Gaussian DP or f-DP, and discuss Differentially Private Stochastic Gradient Descent (DP-SGD). We stay at a meta level and attempt intuitive explanations and insights.
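To make the abstract's reference to DP-SGD concrete, below is a minimal sketch of a single DP-SGD update step, assuming the standard recipe (per-sample gradient clipping followed by calibrated Gaussian noise). The function name, parameter names, and default values are illustrative, not taken from the chapter.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One (hypothetical) DP-SGD update.

    Each per-sample gradient is clipped to L2 norm `clip_norm`, the clipped
    gradients are summed, Gaussian noise with standard deviation
    `noise_multiplier * clip_norm` is added, and the result is averaged
    over the batch before the usual SGD step.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so the per-sample contribution is bounded.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    batch_size = len(clipped)
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=params.shape)
    return params - lr * noisy_sum / batch_size
```

With `noise_multiplier=0` the step reduces to ordinary clipped SGD, which is a convenient sanity check; the clipping bound is what makes the Gaussian noise scale meaningful for the privacy analysis.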
Additional Metadata | |
---|---|
DOI | doi.org/10.1016/B978-0-44-319037-7.00009-0 |
Organisation | Centrum Wiskunde & Informatica, Amsterdam (CWI), The Netherlands |
van Dijk, M., & Nguyen, P. H. (2024). Considerations on the theory of training models with differential privacy. In Federated Learning: Theory and Practice (pp. 29–55). doi:10.1016/B978-0-44-319037-7.00009-0