Adversarial deep learning against intrusion detection classifiers
Rigaki, Maria; Elragal, Ahmed
Luleå University of Technology.
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. ORCID iD: 0000-0003-4250-4752
2017 (English). In: CEUR Workshop Proceedings / [ed] Kott A., Pechoucek M., CEUR-WS, 2017, Vol. 2057, p. 35-48. Conference paper, Published paper (Refereed)
Abstract [en]

Traditional approaches in network intrusion detection are signature-based, but anomaly detection approaches and machine learning techniques have been studied heavily for the past twenty years. The continuous change in the way attacks appear, the volume of attacks, as well as the improvements in the big data analytics space, make machine learning approaches more alluring than ever. The intention of this paper is to show that using machine learning in the intrusion detection domain should be accompanied by an evaluation of its robustness against adversaries. Several adversarial techniques have emerged lately from deep learning research, largely in the area of image classification. These techniques are based on the idea of introducing small changes in the original input data in order to make a machine learning model misclassify it. This paper follows a big data analytics methodology and explores adversarial machine learning techniques that have emerged from the deep learning domain against machine learning classifiers used for network intrusion detection. We look at several well-known classifiers and study their performance under attack over several metrics, such as accuracy, F1-score and receiver operating characteristic. The approach used assumes no knowledge of the original classifier and examines both general and targeted misclassification. The results show that, using relatively simple methods for generating adversarial samples, it is possible to lower the detection accuracy of intrusion detection classifiers by as much as 27%. Performance degradation is achieved using a methodology that is simpler than previous approaches, and it requires only a 6.14% change between the original and the adversarial sample, making it a candidate for a practical adversarial approach.
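
The abstract describes generating adversarial samples by adding small perturbations to the input and then measuring the resulting drop in detection accuracy. The paper itself is not reproduced here, so the sketch below is only a minimal, self-contained illustration of that general idea: the synthetic "traffic" data, the logistic-regression detector, and the perturbation budget eps are all assumptions made for illustration, and the perturbation uses the fast gradient sign method (FGSM), one of the simple generation techniques from the deep learning literature the abstract alludes to. Unlike the paper's black-box setting, the sketch perturbs the target model directly (white-box) to keep it short.

# Minimal sketch, NOT the authors' code: data, model and eps are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class "traffic": benign features around 0, malicious around 1.
n, d = 2000, 20
X = np.vstack([rng.normal(0.0, 1.0, (n, d)), rng.normal(1.0, 1.0, (n, d))])
y = np.hstack([np.zeros(n), np.ones(n)])

# Train a logistic-regression detector with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(malicious)
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # cross-entropy gradient in w
    b -= 0.5 * np.mean(p - y)                # cross-entropy gradient in b

def accuracy(X_eval, y_eval):
    return np.mean(((X_eval @ w + b) > 0).astype(float) == y_eval)

# FGSM on the detector's logit: for a linear model the gradient of the
# logit with respect to the input is simply w, so each malicious sample is
# nudged by -eps * sign(w) to push its score toward the benign side.
eps = 0.5                                    # assumed perturbation budget
malicious = y == 1
X_adv = X.copy()
X_adv[malicious] -= eps * np.sign(w)

print(f"accuracy on clean samples:       {accuracy(X, y):.3f}")
print(f"accuracy on adversarial samples: {accuracy(X_adv, y):.3f}")

On this toy setup the detector loses a substantial fraction of its accuracy on the perturbed malicious traffic. The exact figure depends on eps and the data, so it should be read as an illustration of the mechanism behind the up-to-27% degradation reported in the abstract, not as a reproduction of it.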

Place, publisher, year, edition, pages
CEUR-WS, 2017. Vol. 2057, p. 35-48
Series
CEUR Workshop Proceedings, ISSN 1613-0073
National Category
Information Systems, Social aspects
Research subject
Information systems
Identifiers
URN: urn:nbn:se:ltu:diva-67832
Scopus ID: 2-s2.0-85042448945
OAI: oai:DiVA.org:ltu-67832
DiVA, id: diva2:1187471
Conference
2017 NATO IST-152 Workshop on Intelligent Autonomous Agents for Cyber Defence and Resilience, IST-152 2017; Czech Technical University, Prague, Czech Republic; 18-20 October 2017
Available from: 2018-03-05. Created: 2018-03-05. Last updated: 2018-03-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA
