A Stacking-Based Deep Neural Network Approach for Effective Network Anomaly Detection

An anomaly-based intrusion detection system (A-IDS) is a critical component of a modern computing infrastructure, since it can discover new types of attacks. It prevalently relies on machine learning (ML) algorithms for detecting and classifying network traffic. To date, many algorithms have been proposed to improve the detection performance of A-IDS, using either individual or ensemble learners. In particular, ensemble learners have shown remarkable performance over individual learners in many applications, including the cybersecurity domain. However, most existing works still suffer from unsatisfactory results due to improper ensemble design. The aim of this study is to demonstrate the effectiveness of a stacking ensemble-based model for A-IDS, where a deep neural network (DNN) is used as the base learner. The effectiveness of the proposed model and the base DNN models is benchmarked empirically in terms of several performance metrics, i.e., Matthews correlation coefficient, accuracy, and false alarm rate. The results indicate that the proposed model is superior to the base DNN models as well as other existing ML algorithms found in the literature.


Introduction
Intrusion detection systems (IDSs) have recently been an active research area in the cybersecurity domain. An IDS plays a critical role in a modern computing infrastructure by repelling malicious activities in the network. In addition, as a protection mechanism, an IDS is responsible for taking preventive action against malicious acts in the computer network. By examining network access logs, audit trails, and other security-relevant information within an organization, an IDS detects and blocks attacks without human intervention [1].
An IDS typically follows one of two main techniques, i.e., anomaly-based or misuse-based. The difference lies in the number of attack classes to be predicted. An anomaly-based IDS (A-IDS) solves a binary classification problem: the classifier is trained to distinguish anomalous traffic from normal traffic. Since the trained model only handles two classes, new types of attack can be discovered by an A-IDS. Despite this merit, the technique often suffers from a high false alarm rate (FAR), which can leave the network in a vulnerable state. In contrast, a misuse-based IDS (M-IDS) solves a multiclass classification problem, where a future attack is detected by comparing it with known attack signatures stored in a knowledge base. This results in a lower FAR; however, unknown attacks cannot be easily detected [2].
Because an A-IDS is able to find new types of attacks, it is more widely adopted in IDS research. Even a small improvement in performance makes an A-IDS a significant asset for an organization. For instance, it could help an organization avoid a successful attack, e.g., service inaccessibility or performance breakdown, that might result in huge financial loss. However, maintaining a low FAR while increasing the detection accuracy is a challenging task. This trade-off is prevalently addressed using a combination of feature selection and classification algorithms. Feature selection or feature importance methods are crucial, as irrelevant features can degrade a classifier's performance.
To develop an A-IDS that is able to learn anomalous and normal patterns within the network, a classification algorithm is trained using publicly available network traffic log datasets such as NSL-KDD [3], UNSW-NB15 [4], and, more recently, CICIDS 2017 [5]. These datasets are commonly used in the current literature for benchmarking proposed A-IDS models. To improve A-IDS performance, a considerable number of classification algorithms have been investigated, ranging from shallow machine learning models to deep neural network (DNN) models [6,7]. In addition, ensemble learners have also been considered due to their performance advantages over individual classification algorithms [8,9].
In an ensemble learner, multiple classification algorithms are trained on the same problem. Over the last few decades, ensemble learners have shown remarkable performance in various applications, including the cybersecurity field. However, several research challenges remain when utilizing ensemble learners, for instance, the selection of the combination technique for merging the base learners' predictions and the wide variety of available classifiers. This study therefore focuses on the development of an A-IDS technique using a stacking-based deep neural network (DNN). Stacking is chosen due to its flexibility in combining multiple classifiers in a heterogeneous way. The contributions of this paper are twofold: (i) an ensemble approach of DNNs is proposed, instead of using a DNN as an individual classifier; and (ii) a two-step significance test is employed to prove the effectiveness of the proposed model over the individual models.

Related Work
In this section, a brief review of existing A-IDS techniques is given. Since A-IDS is an active research field, we only cover techniques published in the last two years, i.e., 2018 and 2019, and studies that employed at least one classifier ensemble in their experiments. This also shows the position of this paper in comparison with other state-of-the-art techniques. We summarize and classify the trend of A-IDS research in Tab. 1. Interested readers might refer to recent survey papers [10-14].

Material and Methods
This section first describes the publicly available datasets used in the experiments and then details the proposed A-IDS model.

Intrusion Datasets
The following datasets are widely used in the IDS community. NSL-KDD and UNSW-NB15 are considered for network packet-based analysis, while CICIDS 2017 is used for Web traffic-based analysis. The datasets are described chronologically as follows.

NSL-KDD [3]:
It is an improved version of the long-standing KDD Cup 99 intrusion dataset. Unlike its predecessor, NSL-KDD contains no redundant samples, making it a more realistic and reliable basis for applying machine learning algorithms to develop an IDS model. A training set of 125,973 instances is used for creating the classification model, where the normal and anomaly classes comprise 67,343 and 58,630 samples, respectively. In addition, for the evaluation procedure, an independent testing set (KDDTest+) consisting of 22,544 instances is taken into consideration.

UNSW-NB15 [4]:
It was built by generating real-life normal network packets as well as synthetic attacks using the IXIA PerfectStorm tool. A training set consisting of 37,000 normal and 45,332 attack samples is used in our experiment. In addition, an independent test set, UNSW-NB15 test (175,341 samples), is used for evaluating the proposed classification model. The number of input features is 42, with one class label attribute.
CICIDS 2017 [5]:
A B-profile system was used to generate realistic benign background traffic. Moreover, several network protocols such as HTTP, HTTPS, FTP, and SSH were taken into consideration, providing a complete network traffic dataset with diverse attack profiles. There are 78 input features, and the numbers of benign and malicious samples are 168,186 and 2,180, respectively. Since an independent test set is not provided, we apply a train-test split with a ratio of 80% for training and 20% for testing.
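Because CICIDS 2017 is strongly imbalanced, the 80/20 split should preserve the benign/malicious class ratio in both partitions. The following is a minimal, dependency-light sketch of such a stratified split; the function name, default ratio, seed, and toy data are illustrative and not taken from the paper (library routines such as scikit-learn's `train_test_split(..., stratify=y)` do the same job):

```python
import numpy as np

def stratified_split(X, y, test_ratio=0.2, seed=42):
    """Split (X, y) into train/test parts, preserving the class ratio.

    Illustrative stand-in for a library stratified splitter: each class is
    shuffled and divided separately, so rare classes keep their proportion.
    """
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for cls in np.unique(y):                     # split each class separately
        idx = np.flatnonzero(y == cls)
        rng.shuffle(idx)
        n_test = int(round(len(idx) * test_ratio))
        test_idx.extend(idx[:n_test])
        train_idx.extend(idx[n_test:])
    train_idx, test_idx = np.array(train_idx), np.array(test_idx)
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]

# Toy data mimicking the benign/malicious imbalance of CICIDS 2017
X = np.arange(1000).reshape(-1, 1).astype(float)
y = np.array([0] * 950 + [1] * 50)               # 0 = benign, 1 = malicious
X_tr, X_te, y_tr, y_te = stratified_split(X, y, test_ratio=0.2)
print(len(y_tr), len(y_te), int(y_te.sum()))     # 800 200 10
```

Both partitions keep the original 95:5 class ratio, which a plain random split does not guarantee for rare attack classes.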

Proposed Method
The idea of our proposed model is briefly presented in the following subsections:

Deep Neural Network
Since the advent of artificial neural networks (ANNs), which mimic human thought, deep neural networks (DNNs), i.e., deep learning, have become among the most effective tools in comparison with other machine learning algorithms. A DNN is built on the initial ANN architecture, with a multilayer structure and activation and optimization functions. Its rise is largely attributable to advances in computing hardware. Fig. 1 depicts a base DNN model. The base DNN architecture consists of one input layer, three hidden layers, and one output layer. All features are fed into the input layer, after which nonlinear operations are performed to produce the final class prediction in the output layer.
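The forward pass of such a base DNN can be sketched in a few lines of NumPy. This is not the paper's H2O/R implementation; the layer widths, ReLU hidden activations, and sigmoid output are illustrative assumptions (the input width 42 mirrors the UNSW-NB15 feature count):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnn_forward(x, weights, biases):
    """Forward pass: input -> three hidden layers (ReLU) -> output (sigmoid)."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)          # nonlinear transform in each hidden layer
    # Output layer: sigmoid squashes the score to an anomaly probability
    return sigmoid(a @ weights[-1] + biases[-1])

# Illustrative layer sizes: 42 input features, three hidden layers, one output
rng = np.random.default_rng(0)
sizes = [42, 64, 32, 16, 1]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.normal(size=(5, 42))         # a batch of 5 traffic records
p = dnn_forward(x, weights, biases)
print(p.shape)                       # (5, 1): one anomaly score per record
```

Training (backpropagation, optimizer choice) is handled by the deep learning framework and is omitted here; the sketch only shows how features flow through the layered architecture of Fig. 1.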

Stacking Ensemble
Stacking was first introduced by Wolpert [28]. Although it was originally invented by Wolpert, the present-day stacking that uses internal k-fold cross-validation was Breiman's contribution. Our proposed stacking-based deep learning model is detailed in Algorithm 1. In this study, five different DNN base models are considered. The goal of using different models is to maximize the diversity of the ensemble; without diversity, an ensemble is unlikely to succeed [29]. Diversity can be achieved in several ways: by using different base learners to construct the ensemble (i.e., a heterogeneous ensemble) or by using different training sets. This paper emphasizes the first strategy; specifically, different learning parameters are used for each base DNN. Moreover, a gradient boosting machine (GBM) [30] is used as the meta-learning classifier.
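The stacking mechanics with internal k-fold cross-validation can be sketched as follows. To keep the sketch self-contained, tiny nearest-centroid classifiers on random feature subsets stand in for the five differently parameterized DNNs, and a simple average vote stands in for the GBM meta-learner; only the out-of-fold construction of the level-1 matrix W is the point here:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_centroid_learner(feat_idx):
    """Toy base learner: nearest class centroid on a feature subset.

    Stand-in for the paper's DNN base models; the feature subsets provide
    the diversity that the paper obtains via different hyperparameters.
    """
    def fit(X, y):
        c0 = X[y == 0][:, feat_idx].mean(axis=0)
        c1 = X[y == 1][:, feat_idx].mean(axis=0)
        def predict(Xn):
            d0 = np.linalg.norm(Xn[:, feat_idx] - c0, axis=1)
            d1 = np.linalg.norm(Xn[:, feat_idx] - c1, axis=1)
            return (d1 < d0).astype(float)      # 1 = anomaly
        return predict
    return fit

def oof_matrix(X, y, learners, k=5):
    """Build the M x L matrix W of out-of-fold base-model predictions."""
    M, L = len(y), len(learners)
    W = np.zeros((M, L))
    folds = np.arange(M) % k                    # simple k-fold assignment
    for j, fit in enumerate(learners):
        for f in range(k):
            tr, te = folds != f, folds == f
            predict = fit(X[tr], y[tr])         # refit on the other folds
            W[te, j] = predict(X[te])           # held-out predictions only
    return W

# Toy two-class data: 'normal' near 0, 'anomaly' near 2
X = np.vstack([rng.normal(0, 1, (200, 6)), rng.normal(2, 1, (200, 6))])
y = np.array([0] * 200 + [1] * 200)
perm = rng.permutation(400)
X, y = X[perm], y[perm]

# Five diverse base learners, each seeing a different random feature subset
learners = [make_centroid_learner(rng.choice(6, 3, replace=False))
            for _ in range(5)]
W = oof_matrix(X, y, learners)

# Level-1 step: the paper trains a GBM on (W, y); an average vote over W
# keeps this sketch dependency-free.
final = (W.mean(axis=1) > 0.5).astype(int)
acc = (final == y).mean()
print(W.shape)
```

Using only out-of-fold predictions to fill W is the essential detail: it prevents the level-1 classifier from being trained on predictions the base models made about their own training data.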

Results and Discussion
In this section, the experimental results of the stacking-based deep neural network for an A-IDS are described. First of all, the learning parameters of each base DNN model are specified in Tab. 2. As mentioned previously, by specifying different learning parameters, our objective is to maximize diversity, and we expect that an improved final ensemble prediction can be obtained. To evaluate the proposed model and the baseline models, the Matthews correlation coefficient (MCC) is considered. This metric is meaningful for measuring the performance of classifiers applied to imbalanced datasets. Furthermore, two other metrics commonly used in IDS research, i.e., accuracy and false alarm rate (FAR), are also taken into consideration. In terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), these three performance measures are obtained as follows:

MCC = (TP x TN - FP x FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))
Accuracy = (TP + TN) / (TP + TN + FP + FN)
FAR = FP / (FP + TN)

The procedure for constructing the proposed ensemble (Fig. 2) is as follows. First, determine the L DNN base models, along with their optimal hyperparameters, and determine the level-1 classifier, e.g., gradient boosting machine (GBM).

Train the ensemble:
1. Train each of the L base models on the training set, implementing stratified 5-fold cross-validation on each DNN base model.
2. Gather the prediction results S_1, S_2, ..., S_L.
3. Collect the M prediction values from the L base models into an M x L matrix, later called matrix W.
4. Along with the original response vector Y, train the level-1 classifier on W.

A deep learning framework, H2O, was utilized for running the classification tasks. All code was implemented in R on a machine with a Linux operating system, 32 GB of memory, and an Intel Xeon processor. The performance of all classifiers with respect to the MCC metric is presented in Fig. 3. It is clear that for all IDS datasets except UNSW-NB15, the proposed stacking-based DNN outperforms all baseline models. On NSL-KDD, the proposed model (MCC = 0.7994) achieved better results than the baselines. Similarly, the proposed model shows a significant improvement over the baseline models when applied to CICIDS 2017. Tab. 3 compares the relative performance of the proposed model and the baseline models.

For the sake of completeness, an empirical comparison using statistical significance tests is also provided in this section. For this purpose, a two-step procedure consisting of the Quade omnibus test and the Quade post hoc test [36] is employed. The Quade test is deemed more powerful than other tests when comparing five or fewer classifiers. Two or more classifiers are significantly different if the p-value is less than a threshold (0.5 in our case). First, an omnibus Quade test is conducted, yielding p-value = 0.067 with degrees of freedom df = 5. It can therefore be inferred that at least one classifier performed differently from the others. Since the omnibus test indicates a difference, the Quade post hoc test is carried out. Tab. 4 exhibits the p-values of all pairwise comparisons using the Quade post hoc test. It shows that the proposed model is statistically significantly better than DNN3, DNN4, and DNN5.
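The three evaluation metrics (MCC, accuracy, and false alarm rate) can be computed directly from confusion-matrix counts. The short sketch below illustrates this with hypothetical counts, not results from the paper:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def accuracy(tp, tn, fp, fn):
    """Fraction of all traffic records classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def far(tp, tn, fp, fn):
    """False alarm rate: fraction of normal traffic flagged as anomalous."""
    return fp / (fp + tn)

# Hypothetical confusion matrix for illustration only
tp, tn, fp, fn = 90, 85, 15, 10
print(round(mcc(tp, tn, fp, fn), 4),
      round(accuracy(tp, tn, fp, fn), 4),
      round(far(tp, tn, fp, fn), 4))   # 0.7509 0.875 0.15
```

Unlike accuracy, MCC stays near zero for a classifier that merely predicts the majority class, which is why it is the preferred headline metric on imbalanced intrusion datasets.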
Finally, to ensure the comprehensiveness of this study, the proposed model is benchmarked against other existing approaches. Tab. 5 presents this comparison with the state of the art in terms of accuracy and FAR. It shows that the proposed model is superior to the other approaches published in major outlets.

Conclusion
Anomaly detection in computer networks has long been an active research area in the cybersecurity domain. Many studies have addressed network traffic log analysis as a binary classification problem. In the current literature, no stacking-based deep neural network approach has been applied to anomaly-based IDS thus far. In this study, a stacking-based deep neural network is designed for anomaly detection, coping with a two-class detection problem, i.e., normal and malicious. To evaluate the effectiveness of the proposed model, experiments were performed on three different intrusion datasets: NSL-KDD, UNSW-NB15, and CICIDS 2017. Experimental results demonstrate that the proposed model is a first-rate method for anomaly detection, with detection accuracies of 89.97%, 92.83%, and 99.65% on the KDDTest+, UNSW-NB15 test, and CICIDS 2017 sets, respectively.

Conflict of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.