Fully distributed privacy preserving mini-batch gradient descent learning

Gábor Danner, M. Jelasity

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

In fully distributed machine learning, privacy and security are important issues. These issues are often dealt with using secure multiparty computation (MPC). However, in our application domain, known MPC algorithms are not scalable or not robust enough. We propose a lightweight protocol to quickly and securely compute the sum of the inputs of a subset of participants, assuming a semi-honest adversary. During the computation the participants learn no individual values. We apply this protocol to efficiently calculate the sum of gradients as part of a fully distributed mini-batch stochastic gradient descent algorithm. The protocol achieves scalability and robustness by exploiting the fact that in this application domain a “quick and dirty” sum computation is acceptable. In other words, speed and robustness take precedence over precision. We analyze the protocol theoretically as well as experimentally, based on churn statistics from a real smartphone trace. We derive a sufficient condition for preventing the leakage of an individual value, and we demonstrate the feasibility of the overhead of the protocol.
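The abstract describes the secure-sum primitive only at a high level. As a rough illustration of the general idea, the Python sketch below computes a sum via classic pairwise additive masking: each participant splits its input into random-looking shares that cancel out in the total, so no individual value is revealed to a semi-honest observer. This is not the protocol proposed in this paper (which is additionally engineered to tolerate churn in P2P smartphone networks); the function names and the modulus are assumptions made for this example.

import random

MODULUS = 2**61 - 1  # all arithmetic is done modulo a large prime

def make_shares(value, n_participants):
    # Split `value` into n_participants shares that sum to it mod MODULUS;
    # any proper subset of the shares looks uniformly random.
    shares = [random.randrange(MODULUS) for _ in range(n_participants - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def secure_sum(inputs):
    # Simulate the protocol: participant i sends shares[i][j] to participant j;
    # each participant j then reveals only the sum of the shares it received.
    n = len(inputs)
    shares = [make_shares(x, n) for x in inputs]
    received = [sum(shares[i][j] for i in range(n)) % MODULUS for j in range(n)]
    return sum(received) % MODULUS

if __name__ == "__main__":
    gradient_coord = [3, 14, 15, 9, 2]  # one coordinate of each node's gradient
    assert secure_sum(gradient_coord) == sum(gradient_coord) % MODULUS
    print(secure_sum(gradient_coord))

In the paper's setting, a sum like this over the gradients of a mini-batch of nodes would feed a standard SGD update. Real-valued gradients would additionally need a fixed-point encoding to work over the integers modulo MODULUS, and unlike this toy version, the paper's protocol is designed to remain robust under node failures.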

Original language: English
Title of host publication: Distributed Applications and Interoperable Systems - 15th IFIP WG 6.1 International Conference, DAIS 2015 Held as Part of the 10th International Federated Conference on Distributed Computing Techniques, DisCoTec 2015, Proceedings
Publisher: Springer Verlag
Pages: 30-44
Number of pages: 15
Volume: 9038
ISBN (Print): 9783319191287
DOI: 10.1007/978-3-319-19129-4_3
Publication status: Published - 2015
Event: 15th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, DAIS 2015 Held as Part of the 10th International Federated Conference on Distributed Computing Techniques, DisCoTec 2015 - Grenoble, France
Duration: Jun 2, 2015 - Jun 4, 2015

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9038
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Keywords

  • Fully distributed learning
  • Mini-batch stochastic gradient descent
  • P2P smartphone networks
  • Secure sum

ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science

Cite this

Danner, G., & Jelasity, M. (2015). Fully distributed privacy preserving mini-batch gradient descent learning. In Distributed Applications and Interoperable Systems - 15th IFIP WG 6.1 International Conference, DAIS 2015 Held as Part of the 10th International Federated Conference on Distributed Computing Techniques, DisCoTec 2015, Proceedings (Vol. 9038, pp. 30-44). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 9038). Springer Verlag. https://doi.org/10.1007/978-3-319-19129-4_3

@inproceedings{1da4465eda9e44928a9955646c577e88,
title = "Fully distributed privacy preserving mini-batch gradient descent learning",
abstract = "In fully distributed machine learning, privacy and security are important issues. These issues are often dealt with using secure multiparty computation (MPC). However, in our application domain, known MPC algorithms are not scalable or not robust enough. We propose a lightweight protocol to quickly and securely compute the sum of the inputs of a subset of participants, assuming a semi-honest adversary. During the computation the participants learn no individual values. We apply this protocol to efficiently calculate the sum of gradients as part of a fully distributed mini-batch stochastic gradient descent algorithm. The protocol achieves scalability and robustness by exploiting the fact that in this application domain a “quick and dirty” sum computation is acceptable. In other words, speed and robustness take precedence over precision. We analyze the protocol theoretically as well as experimentally, based on churn statistics from a real smartphone trace. We derive a sufficient condition for preventing the leakage of an individual value, and we demonstrate the feasibility of the overhead of the protocol.",
keywords = "Fully distributed learning, Mini-batch stochastic gradient descent, P2P smartphone networks, Secure sum",
author = "G{\'a}bor Danner and M. Jelasity",
year = "2015",
doi = "10.1007/978-3-319-19129-4_3",
language = "English",
isbn = "9783319191287",
volume = "9038",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Verlag",
pages = "30--44",
booktitle = "Distributed Applications and Interoperable Systems - 15th IFIP WG 6.1 International Conference, DAIS 2015 Held as Part of the 10th International Federated Conference on Distributed Computing Techniques, DisCoTec 2015, Proceedings",

}
