Bidirectional Learning for Robust Neural Networks
MindGarage, University of Kaiserslautern, Kaiserslautern, Germany; Department of Computer Science, Oslo Metropolitan University, Oslo, Norway; Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway.
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. MindGarage, University of Kaiserslautern, Kaiserslautern, Germany. ORCID iD: 0000-0003-4029-6574
2019 (English). In: 2019 International Joint Conference on Neural Networks (IJCNN), IEEE, 2019, article id N-19072. Conference paper, Published paper (Refereed).
Abstract [en]

A multilayer perceptron can behave as a generative classifier by applying bidirectional learning (BL). BL consists of training an undirected neural network to map input to output and vice versa; the same network can therefore act as a classifier in one direction and as a generator in the opposite direction for the same data. The learning process of BL tries to reproduce the neuroplasticity stated in Hebbian theory using only backward propagation of errors. In this paper, two learning techniques that use BL to improve robustness to white noise static and adversarial examples are introduced independently. The first method is bidirectional propagation of errors, in which error propagation occurs in both the backward and forward directions. Motivated by the fact that its generative model receives a constant vector per class as input, we introduce as a second method the novel hybrid adversarial networks (HAN), whose generative model receives a random vector as input and whose training is based on generative adversarial networks (GAN). To assess the performance of BL, we perform experiments using several architectures with fully connected and convolutional layers, with and without bias. Experimental results show that both methods improve robustness to white noise static and adversarial examples and can even increase accuracy, but they behave differently depending on the architecture and task, so one or the other may be more beneficial. Nevertheless, HAN with a convolutional architecture and batch normalization presents outstanding robustness, reaching state-of-the-art accuracy on adversarial examples of hand-written digits.
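Note: the abstract describes BL as one undirected network that classifies in the forward direction and generates in the backward direction. The following is a minimal illustrative sketch of that idea in PyTorch using a single shared weight matrix; the layer sizes, the one-hot class code standing in for the "constant vector per class", the equal loss weighting, and all names are assumptions made for illustration and are not taken from the paper.

    # Hedged sketch of bidirectional learning (BL) with one shared weight matrix.
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    in_dim, n_classes = 784, 10

    # One undirected layer: the same weights map input -> logits (classifier)
    # and class vector -> input (generator), mirroring the "vice versa" mapping.
    W = torch.nn.Parameter(torch.randn(n_classes, in_dim) * 0.01)
    opt = torch.optim.Adam([W], lr=1e-3)

    def bl_step(x, y):
        """One BL update: classification loss forward, reconstruction loss backward."""
        # Forward direction: classify the input.
        logits = F.linear(x, W)                      # (batch, n_classes)
        cls_loss = F.cross_entropy(logits, y)

        # Backward direction: generate an input from the one-hot class code
        # (assumed stand-in for the paper's constant vector per class).
        one_hot = F.one_hot(y, n_classes).float()    # (batch, n_classes)
        x_gen = F.linear(one_hot, W.t())             # (batch, in_dim)
        gen_loss = F.mse_loss(x_gen, x)

        loss = cls_loss + gen_loss                   # equal weighting is an assumption
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Usage on dummy data standing in for hand-written digits:
    x = torch.rand(32, in_dim)
    y = torch.randint(0, n_classes, (32,))
    print(bl_step(x, y))

HAN, the second method, would replace the constant class code above with a random vector and train the generative direction adversarially, as in a GAN; that variant is not sketched here.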

Place, publisher, year, edition, pages
IEEE, 2019, article id N-19072.
Series
International Joint Conference on Neural Networks (IJCNN), E-ISSN 2161-4407
Keywords [en]
adversarial example defense, noise defense, bidirectional learning, hybrid neural network, Hebbian theory
National Category
Computer Sciences
Research subject
Machine Learning
Identifiers
URN: urn:nbn:se:ltu:diva-85967
DOI: 10.1109/IJCNN.2019.8852120
ISI: 000530893803052
Scopus ID: 2-s2.0-85073199027
OAI: oai:DiVA.org:ltu-85967
DiVA, id: diva2:1572665
Conference
International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, July 14-19, 2019
Note

ISBN for host publication: 978-1-7281-1985-4

Available from: 2021-06-24. Created: 2021-06-24. Last updated: 2024-03-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA
