Network intrusion detection has traditionally been signature-based; however, anomaly detection and machine learning techniques have been studied heavily for the past twenty years. The continuous evolution of attacks, their growing volume, and advances in big data analytics make machine learning approaches more alluring than ever. The intention of this paper is to show that using machine learning in the intrusion detection domain should be accompanied by an evaluation of its robustness against adversaries. Several adversarial techniques have emerged lately from deep learning research, largely in the area of image classification. These techniques are based on the idea of introducing small changes in the original input data in order to cause a machine learning model to misclassify it. This paper follows a big data analytics methodology and explores adversarial machine learning techniques that have emerged from the deep learning domain against machine learning classifiers used for network intrusion detection. We look at several well-known classifiers and study their performance under attack across several metrics, such as accuracy, F1 score, and the receiver operating characteristic. The approach used assumes no knowledge of the original classifier and examines both general and targeted misclassification. The results show that relatively simple methods for generating adversarial samples can lower the detection accuracy of intrusion detection classifiers by as much as 27%. This degradation is achieved with a methodology that is simpler than previous approaches and requires only a 6.14% change between the original and the adversarial sample, making it a candidate for a practical adversarial attack.
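
The core idea described above, perturbing an input slightly until a classifier misclassifies it while assuming only query access to the model, can be illustrated with a minimal sketch. The function below is a hypothetical random black-box perturbation search, not the paper's exact procedure; the perturbation budget, step size, and synthetic data are illustrative assumptions.

```python
# Hypothetical sketch: black-box adversarial perturbation search.
# Assumes only query access to the classifier's predict(); the budget
# and step values below are illustrative, not the paper's parameters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def black_box_perturb(clf, x, budget=0.07, step=0.05, max_iters=200, rng=None):
    """Nudge at most a `budget` fraction of x's features at random until
    clf flips its prediction; return the adversarial sample or None."""
    rng = rng or np.random.default_rng(0)
    original_label = clf.predict(x.reshape(1, -1))[0]
    n_features = x.shape[0]
    max_changed = max(1, int(budget * n_features))
    for _ in range(max_iters):
        # Pick a small random subset of features and perturb them slightly.
        idx = rng.choice(n_features, size=max_changed, replace=False)
        candidate = x.copy()
        candidate[idx] += rng.normal(0.0, step, size=max_changed)
        if clf.predict(candidate.reshape(1, -1))[0] != original_label:
            return candidate  # prediction flipped: adversarial sample found
    return None  # no adversarial sample found within the query budget

# Toy usage on synthetic "traffic features" (purely illustrative data).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
adv = black_box_perturb(clf, X[0])
if adv is not None:
    print("relative change:", np.abs(adv - X[0]).sum() / np.abs(X[0]).sum())
```

The key design point this sketch shares with the paper's setting is that the attacker never inspects the model's internals: misclassification is discovered purely through prediction queries, and the perturbation is kept deliberately small so the adversarial sample stays close to the original traffic record.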