Multi-party Computation for Privacy and Security in Machine Learning: a practical review

Bellini, E;
2023-01-01

Abstract

Machine Learning, particularly Deep Learning, is transforming society across many of its fundamental domains - healthcare, culture, finance, transportation, and education, to mention just a few. However, Machine Learning suffers from serious weaknesses in privacy and security due to the large amount of data involved (datasets for training and parameters in trained models) and the probabilistic approximation inherent in any ML function. Multi-Party Computation (MPC) is a family of techniques and tactics with a sound scientific and operational basis that can be applied to mitigate some relevant weaknesses of ML. MPC allows multiple parties to evaluate a machine learning model on their private data without revealing it to each other. In particular, privacy in training may be assured by MPC through federated learning techniques (which may be considered particular interpretations and implementations of a general MPC method), and security in training and inference may be enforced by continuous model testing using MPC. This brief paper is a practical and essential review of how to use MPC to mitigate privacy and security issues in ML.
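
As an illustration of the idea summarized in the abstract (not taken from the paper itself), the following minimal Python sketch simulates secure aggregation based on additive secret sharing, a basic MPC primitive commonly used to combine federated-learning model updates without revealing any individual party's update. All names (share, secure_aggregate, PRIME, SCALE) are hypothetical, and the protocol is only simulated inside a single process.

# Illustrative sketch: additive secret sharing for secure aggregation.
# Each party splits its private model update into random shares that sum
# to the update; only the aggregate of all updates is ever reconstructed.
import random

PRIME = 2**61 - 1          # arithmetic is done modulo a large prime
SCALE = 10**6              # fixed-point scaling for float gradients


def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to it modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def encode(x: float) -> int:
    """Map a float update into the field using fixed-point encoding."""
    return round(x * SCALE) % PRIME


def decode(x: int) -> float:
    """Map a field element back to a (signed) float."""
    if x > PRIME // 2:     # interpret large residues as negatives
        x -= PRIME
    return x / SCALE


def secure_aggregate(private_updates: list[list[float]]) -> list[float]:
    """Sum the parties' update vectors without exposing any single update."""
    n = len(private_updates)
    dim = len(private_updates[0])
    aggregated = []
    for j in range(dim):
        # Each party shares its j-th coordinate with all other parties.
        all_shares = [share(encode(upd[j]), n) for upd in private_updates]
        # In the real protocol, party i would only ever see the i-th share
        # from each peer and would publish the partial sum of those shares.
        partial_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(n)]
        # Only the sum of the partial sums (the aggregate) is reconstructed.
        aggregated.append(decode(sum(partial_sums) % PRIME))
    return aggregated


if __name__ == "__main__":
    # Three parties with private gradient vectors; only the sum is revealed.
    updates = [[0.10, -0.20], [0.05, 0.40], [-0.15, 0.25]]
    print(secure_aggregate(updates))   # approximately [0.0, 0.45]
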
2023
ISBN: 979-8-3503-1170-9
Bellini, A., Bellini, E., Bertini, M., Almhaithawi, D., Cuomo, S. (2023). Multi-party Computation for Privacy and Security in Machine Learning: a practical review. In 2023 IEEE International Conference on Cyber Security and Resilience (CSR) (pp.174-179). 345 E 47TH ST, NEW YORK, NY 10017 USA : IEEE [10.1109/CSR57506.2023.10224826].

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11590/459748