Lightweight Privacy-preserving Training and Evaluation for Discretized Neural Networks
Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai, China.
Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai, China.
Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai, China.
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science; Department of Computer Science and Technology, Fuzhou University, China. ORCID iD: 0000-0003-1902-9877
2019 (English). In: IEEE Internet of Things Journal, ISSN 2327-4662. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Machine learning, and the neural network in particular, is exploited in a dizzying range of applications. To reduce the computational burden on resource-constrained clients, large volumes of historical private data must be outsourced to a semi-trusted or malicious cloud for model training and evaluation. To preserve privacy, most existing work relies either on public-key fully homomorphic encryption (FHE), which incurs considerable computational cost and ciphertext expansion, or on secure multiparty computation (SMC), which requires multiple rounds of interaction between the user and the cloud. To address these issues, this paper proposes LPTE, a lightweight privacy-preserving model training and evaluation scheme for discretized neural networks. First, we put forward an efficient single-key fully homomorphic data encapsulation mechanism (SFH-DEM) that does not rely on public-key FHE. On top of SFH-DEM, a series of atomic operations over the encrypted domain, including multivariate polynomial evaluation, nonlinear activation functions, gradient computation, and maximum operations, are devised as building blocks. LPTE is then constructed from these blocks and can also be extended to convolutional neural networks. Finally, we give formal security proofs for dataset privacy, model training privacy, and model evaluation privacy in the semi-honest setting, and we implement experiments on the real-world MNIST handwritten-digit recognition dataset to demonstrate the high efficiency and accuracy of LPTE.
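The abstract notes that the scheme targets *discretized* neural networks. A minimal sketch can illustrate why discretization helps encrypted evaluation: if weights are restricted to a small integer set such as {-1, 0, 1}, a layer's dot products reduce to integer additions and subtractions, which additively homomorphic ciphertexts can mirror directly. The code below is an illustrative toy, not the paper's SFH-DEM construction; the hard-sign activation and ternary weights are assumptions chosen for the example.

```python
def sign(x):
    """Hard-sign activation commonly used in discretized networks."""
    return 1 if x >= 0 else -1

def discretized_layer(inputs, weight_rows):
    """One fully connected layer with ternary weights in {-1, 0, 1}.

    Each pre-activation is sum(w * x): only integer additions and
    subtractions, the kind of operation an additively homomorphic
    scheme can evaluate on ciphertexts without decryption.
    """
    return [sign(sum(w * x for w, x in zip(row, inputs)))
            for row in weight_rows]

# Tiny 3-input, 2-output layer with hand-picked ternary weights.
x = [4, -2, 7]                  # integer-encoded features
W = [[1, -1, 0],                # output 0: 4 - (-2) = 6  -> sign +1
     [-1, 0, 1]]                # output 1: -4 + 7  = 3   -> sign +1
print(discretized_layer(x, W))  # [1, 1]
```

In a privacy-preserving setting, the plaintext integers above would be replaced by ciphertexts, and the additions by the scheme's homomorphic addition; the nonlinear `sign` step is exactly the kind of operation the paper's dedicated atomic building blocks would have to handle.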

Place, publisher, year, edition, pages
IEEE, 2019.
Keywords [en]
Discretized neural networks, privacy-preserving, secure outsourced computation, efficiency, Neural networks, Training, Computational modeling, Data privacy, Public key
National Category
Media and Communication Technology
Research subject
Pervasive Mobile Computing
Identifiers
URN: urn:nbn:se:ltu:diva-76111
DOI: 10.1109/JIOT.2019.2942165
OAI: oai:DiVA.org:ltu-76111
DiVA, id: diva2:1354389
Available from: 2019-09-25 Created: 2019-09-25 Last updated: 2019-09-25

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records BETA

Vasilakos, Athanasios
