Are You Tampering with My Data?
Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
2019 (English). In: Computer Vision – ECCV 2018 Workshops: Proceedings, Part II / [ed] Laura Leal-Taixé & Stefan Roth, Springer, 2019, pp. 296-312. Conference paper, published paper (Refereed)
Abstract [en]

We propose a novel approach to adversarial attacks on neural networks (NN), focusing on tampering with the data used for training instead of generating attacks on trained models. Our network-agnostic method creates a backdoor during training which can be exploited at test time to force a neural network to exhibit abnormal behaviour. We demonstrate on two widely used datasets (CIFAR-10 and SVHN) that a universal modification of just one pixel per image, applied to all the images of a class in the training set, is enough to corrupt the training procedure of several state-of-the-art deep neural networks, causing the networks to misclassify any image to which the modification is applied. Our aim is to bring to the attention of the machine learning community the possibility that even learning-based methods trained on public datasets can be subject to attacks by a skillful adversary.
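The class-wide one-pixel modification described above can be sketched in a few lines of NumPy. This is only an illustration of the poisoning idea, not the paper's actual code: the function name, pixel location, and trigger value are assumptions chosen for the example.

```python
import numpy as np

def poison_class(images, labels, target_class, pixel=(0, 0), value=255):
    """Apply the same one-pixel modification to every training image of one
    class, creating a class-wide trigger. Pixel position and value are
    illustrative choices, not taken from the paper."""
    poisoned = images.copy()
    mask = labels == target_class          # select all images of the target class
    r, c = pixel
    poisoned[mask, r, c, :] = value        # same pixel, same value, whole class
    return poisoned

# Toy CIFAR-10-shaped data: eight 32x32 RGB images with labels 0 and 1.
rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(8, 32, 32, 3), dtype=np.uint8)
labs = np.array([0, 1, 0, 1, 0, 1, 0, 1])

tampered = poison_class(imgs, labs, target_class=0)
```

Training on `tampered` instead of `imgs` would, per the abstract, implant a backdoor: at test time, setting that one pixel on any image steers the poisoned network toward the attacker's chosen behaviour.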

Place, publisher, year, edition, pages
Springer, 2019. pp. 296-312
Series
Lecture Notes in Computer Science ; 11130
Keywords [en]
Adversarial attack, Machine learning, Deep neural networks, Data poisoning
HSV category
Research subject
Machine Learning
Identifiers
URN: urn:nbn:se:ltu:diva-73147
DOI: 10.1007/978-3-030-11012-3_25
Scopus ID: 2-s2.0-85061797135
ISBN: 978-3-030-11011-6 (print)
OAI: oai:DiVA.org:ltu-73147
DiVA, id: diva2:1295180
Conference
15th European Conference on Computer Vision (ECCV), September 8-14, 2018, Munich, Germany
Available from: 2019-03-11 Created: 2019-03-11 Last updated: 2019-03-11 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Liwicki, Marcus
