Privacy and Security Issues in Deep Learning: A Survey
2021 (English) In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 4566-4593. Article in journal (Refereed), Published
Abstract [en]
Deep Learning (DL) algorithms based on artificial neural networks have achieved remarkable success and are being extensively applied in a variety of application domains, ranging from image classification, autonomous driving, and natural language processing to medical diagnosis, credit risk assessment, and intrusion detection. However, privacy and security issues of DL have been revealed: the DL model can be stolen or reverse engineered, sensitive training data can be inferred, and even a recognizable face image of the victim can be recovered. In addition, recent works have found that the DL model is vulnerable to adversarial examples perturbed by imperceptible noise, which can lead the DL model to predict wrongly with high confidence. In this paper, we first briefly introduce the four types of attacks and privacy-preserving techniques in DL. We then review and summarize the attack and defense methods associated with DL privacy and security in recent years. To demonstrate that security threats really exist in the real world, we also review adversarial attacks under physical conditions. Finally, we discuss current challenges and open problems regarding privacy and security issues in DL.
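As a concrete illustration of the adversarial-example threat mentioned in the abstract (not part of the original record), the following minimal Python sketch applies the widely used fast gradient sign method; the pretrained PyTorch classifier model, the input batch x, the labels y, and the perturbation budget epsilon are assumed for illustration only.

# Minimal FGSM sketch (illustrative only; assumes a pretrained PyTorch
# classifier `model`, an image batch `x` in [0, 1], and integer labels `y`).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples by adding an imperceptible,
    gradient-sign perturbation of magnitude epsilon to the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixel values valid.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()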
Place, publisher, year, edition, pages
IEEE, 2021. Vol. 9, p. 4566-4593
Keywords [en]
Deep learning, DL privacy, DL security, model extraction attack, model inversion attack, adversarial attack, poisoning attack, adversarial defense, privacy-preserving
National Category
Media and Communication Technology
Research subject
Pervasive Mobile Computing
Identifiers
URN: urn:nbn:se:ltu:diva-82336
DOI: 10.1109/ACCESS.2020.3045078
ISI: 000607679500001
Scopus ID: 2-s2.0-85098748130
OAI: oai:DiVA.org:ltu-82336
DiVA, id: diva2:1517023
Note
Validated; 2021; Level 2; 2021-01-13 (alebob);
Funder: National Natural Science Foundation of China (U1804263, 61702105)
Available from: 2021-01-13. Created: 2021-01-13. Last updated: 2023-10-28. Bibliographically approved