The Use of Ensemble Models for Multiple Class and Binary Class Classification for Improving Intrusion Detection Systems

The pursuit of spotting abnormal behavior inside and outside a network system led to the development of intrusion detection systems, and many researchers have applied soft computing and machine learning in this area. A single classifier alone, however, has proven insufficient to control network intruders. This limitation led us to perform dimensionality reduction by means of the correlation-based feature selection approach (CFS approach) in combination with a refined ensemble model. The paper aims to improve the Intrusion Detection System (IDS) by proposing CFS + Ensemble Classifiers (Bagging and Adaboost) with high accuracy, a high packet detection rate, and a low false alarm rate. Machine learning ensemble models with base classifiers (J48, Random Forest, and REPTree) were built. Binary as well as multiclass classification was performed for the KDD99 and NSLKDD datasets, with all attacks labeled as anomaly versus normal traffic. Class labels consisted of four major attack categories, namely Denial of Service (DoS), Probe, User-to-Root (U2R), and Root-to-Local (R2L), plus the Normal class. Results from the experiments showed that our proposed model produces a 0 false alarm rate (FAR) and a 99.90% detection rate (DR) for the KDD99 dataset, and 0.5% FAR and 98.60% DR for the NSLKDD dataset, when working with 6 and 13 selected features, respectively.


Introduction
The growth in how people access and utilize the Internet has become both a blessing and a liability to our everyday online activities. The demand for rapid data transmission on the Internet and the corresponding need for security, authentication, and confidentiality in web applications and cloud computing interfaces have given rise to all kinds of advanced security attacks. Day-to-day Internet usage is becoming complicated due to threats to data security, industrial attacks, and sponsored attacks on social and engineering facilities [1,2]. The complex nature of these attacks demands security systems that are efficient, automated, fast to respond, and accurate, with effective prevention mechanisms in place.
Network intrusion detection systems (NIDS) have been developed by researchers over time to detect any suspicious action or intention that could lead to data theft or identity cloning. The rapid response to security attacks on many web-based applications has not deterred intruders from discovering loopholes in networks and sending more sophisticated attacks.
An ExtraTrees classifier used to select applicable features for different types of intruders, combined with extreme learning machines (ELMs), was proposed by [1]. During attack classification, the multi-class problem was divided into multiple binary classifications, and the authors used subjective extreme learning machines to solve the issue of imbalance. Lastly, they implemented the ELM ensemble in parallel using GPUs in order to perform intrusion detection in real time. Their results outperformed all the other methods earlier in use, achieving 98.24% and 99.76% precision on their datasets for multi-class classification. However, their approach incurred a small overhead and was not trained to distinguish between normal traffic and potential attacks. Meanwhile, a multi-modal biometric recognition system based on pattern recognition methods was used for personal identification by [2]. The fingerprint representation was modified by applying a Delaunay triangulation network. Although their system achieved high precision with a low error rate of 0.9%, it is limited and cannot function as an IDS because it is based on eyelash detection rather than on an Internet or online system.
Another multiclass classification approach, using a heterogeneous ensemble model and outlier detection in a combination of numerous approaches and ensemble methods, was developed by [3]. Their study was based on pre-processing that filtered global outliers and used the synthetic minority oversampling technique (SMOTE) algorithm to resample the data. They binarized the dataset using the one-vs-one decomposition technique. In addition, Adaboost, random subspace algorithms, and random forest were used as the base classifiers of their model. Their proposed model performed better in terms of outlier detection and classification prediction for the multiclass problem, and also outperformed other classical algorithms commonly in use. The study, however, failed to combine filter and wrapper selection methods to investigate the effect of partially removing point outliers from datasets prior to building the classifiers. DoS, Probe, U2R, and R2L were the four types of attacks used by [4] with a random forest model. They implemented ten-fold cross-validation for classification, and feature selection was applied to the dataset to reduce dimensionality and remove redundant and unrelated features. Comparing their random forest model with a J48 classifier, their experiments showed better accuracy and DR for the four attack types, but they failed to use evolutionary computation as a feature selection measure, which could further improve classifier accuracy. The fact is that denial-of-service (DoS) attacks have created massive disruptions to private- and public-sector web applications, many of which are not in the news due to management fears of customer panic and loss of shares.
It is a challenge to create a multiple-class-based IDS that has the capacity to withstand multiple attacks while providing higher accuracy, a higher detection rate (DR), and a lower false alarm rate (FAR). This paper's intention is to develop an intelligent intrusion detection system that has high accuracy, a high packet detection rate, and a low false alarm rate. The objectives include: 1. Develop machine learning models for the intrusion detection system; 2. Implement and evaluate the proposed solution on network security datasets; 3. Propose a data-independent model; 4. Achieve high accuracy; 5. Achieve a high detection rate; and 6. Achieve a low false alarm rate.
Our motivation is to reduce the False Positive Rate (FPR) by applying dimensionality reduction through the Correlation-based Feature Selection (CFS) algorithm.
Our contributions include: 1. The research performs dimensionality reduction using the correlation-based feature selection (CFS) approach. 2. Machine learning ensemble models with base classifiers (J48, Random Forest, and REPTree) were used to perform simulations. 3. Optimal feature subsets are proposed automatically for a new dataset. 4. FAR and detection rate have a great impact on an IDS, so we propose a novel solution based on machine learning ensemble models with the effect of the CFS algorithm. 5. Our proposed CFS + Ensemble Classifiers achieve a 0 false alarm rate and a 99.90% detection rate for the KDD99 dataset, and 0.5% FAR and a 98.60% detection rate for the NSLKDD dataset. 6. Our proposed model was evaluated and compared on two different datasets, and the experimental results are also compared with other recent and important papers in this area.
The remainder of the paper is structured as follows: Section 2 describes the literature review. Section 3 presents the proposed methodology. Section 4 describes the experiments and results. Section 5 concludes the research and outlines future work.

Literature Review
A hybrid smart system with an enhancement of the decision tree was used by the authors in [5] to design a multiple-classifier system. This was done by applying Adaboost and naïve Bayes with decision trees (NBDT), non-nested generalized exemplar (NNge), and incremental pruning (JRip) rule-based classifiers (NNJR). The system was able to detect network intrusions efficiently. The only limitation of this research is that other data mining approaches were not explored in full. A hybrid IDS based on integrating the probability predictions of a tree into a diverse kind of classifier was proposed by [6]. Their results illustrate a model that gives a much lower false alarm rate and a peak detection rate. Moreover, their proposed model shows better precision than recent IDS models, with precision equal to 96.27% for KDD'99 and 89.75% for NSL-KDD, unlike the authors in [7], who use spectral clustering (SC) and a deep neural network (DNN) in their proposal for intrusion detection.
Their results indicate that their classifier delivers a real tool for the study and analysis of intrusion detection in a large network, and performs better than back-propagation neural network (BPNN), support vector machine (SVM), random forest (RF), and Bayes tree models in detection precision and in spotting the types of irregular attacks in the network.
The hybrid model of [8] is a proposed system designed on network transactions that estimates the intrusion scope threshold degree from the data's peak features, which are readily accessible from the physical activities. Their results show that the hybrid approach is necessary in order to achieve accuracies of 99.81% and 98.56% for the binary-class and multiclass NSL-KDD datasets, respectively. Nevertheless, further studies were suggested to apply optimization techniques to the intrusion detection model, as this is likely to yield a better accuracy rate.
A Gini-index-based feature selection can increase the detection accuracy of the ensemble technique by 10%, according to [9]. Other benefits include reducing the false positive rate to 0.05 and improving system performance in terms of execution time and true positive rate. Nevertheless, reduced feature sets that require less processing time in a distributed setting still need to be applied to improve the detection rate.
An improved conditional variational autoencoder (ICVAE) was combined with a deep neural network (DNN) to design an intrusion detection model known as ICVAE-DNN by [10]. It learns and explores potential sparse representations between network data features and classes, and shows better overall accuracy, detection rate, and false positive rate than nine state-of-the-art intrusion detection methods. Nonetheless, the detection performance for minority attacks and unknown attacks still needs improvement; the adversarial learning method can be used to explore the spatial distribution of the ICVAE latent variables to better reconstruct input samples. The machine-learning-based IDS developed by the authors in [11] is based on deep learning. According to the authors, the performance of an anomaly network-based IDS may degrade on large network datasets and unbalanced network traffic. A Deep Belief Networks (DBNs) approach, which projected deep learning as a swift upsurge of machine learning (ML), was proposed in [12,13]. Following this proposal, deep learning has greatly advanced the extraction of high-level latent features from dataset models. Notwithstanding these huge successes, several problems related to IDS still exist. The first is the high dimensionality of network data; in many IDS models, feature selection is therefore considered one of the preprocessing steps [14]. Others stem from the advancement of the Internet of Things (IoT) and prevalent cloud-based services, together with the emergence of several new attacks. Several unidentified attacks do not appear in the training dataset; for instance, in the NSL-KDD dataset considered in [15,16], about 16.6% of the attack samples in the test set did not appear in the training set. This implies that almost all conventional IDS typically achieve poor performance.
However, for an anomaly network-based IDS (A-NIDS), the authors in [17,18] proposed a primal dependable hybrid approach that incorporates the Adaboost meta-algorithm and the artificial bee colony (ABC) algorithm. This is intended to achieve an optimal detection rate (DR) at a minimized false positive rate (FPR) [19]. In the study by [20], the ABC algorithm is implemented for feature selection, while the Adaboost meta-algorithm is used for feature classification and evaluation. The Adaboost meta-algorithm was implemented to tackle the unbalanced data based on the actual plan, while the ABC was used for IDS problem optimization. Incorporating both the modified density peak clustering algorithm (MDPCA) and deep belief networks (DBNs) resulted in a novel fuzzy aggregation approach proposed in [21]. The MDPCA part of the algorithm splits the primal training dataset into numerous smaller subsets based on the similarity of the training samples' features, and the results of all the sub-DBN classifiers are combined according to the weights of the fuzzy membership. The objective of [22] was to design a system with the capacity to accurately classify traffic into normal and attack classes, scale up to huge datasets, and achieve a lower false alarm rate. To achieve this, the authors leveraged the Extreme Learning Machine (ELM) algorithm, an advanced ML algorithm. The ELM algorithm has proved more efficient in terms of performance than the Support Vector Machine (SVM) algorithm; it operates at high speed while sustaining adequate classification ability. The authors further attempted to enhance the performance of the ELM algorithm by including a redesigned kind of Huang's kernel-based ELM, combined with the Multiple Kernel Boost (MKBoost) framework introduced earlier by [3].
" A novel approach based on the combination of discretization, filtering, and classification methods using a KDD Cup 99 dataset is presented in [23]. The focus of the research was to drastically minimize the number of features while classifier performance is absolutely maintained, or even improved. The approach makes use of filters because of their high-speed characteristics and based on their high suitability for large datasets. eep learning models were applied as classifiers. Bearing in mind the importance of the temporary data classification of network attacks, the Long Short Term Memory (LSTM) network, a modification of frequent networks, was used in classifying the KDD's dataset attacks [24]. " Several works in the literature of [25,26] motivated the development of our proposed approach. A scheme of nested binary trees was used in [26]; the scheme realized a good performance when tested with minor UCI datasets, but the computational difficulty of this scheme amplified swiftly with the increase at the number of instances. The recent study of [25] integrated both the oversampling and binarization with boosting, and indicated that the proposed approach realized improved performance than the multiclass learners and one-versus-all (OVA) framework. Even though information about the runtime was voided in the study, the use of oversampling enhances substantial computational difficulty; hence, this method failed to scale proficiently for an application to IDS datasets, which encompasses a higher number of samples. On the other hand, the authors in [26] implemented random undersampling (RUS) in their method because it can realize similar performance when used for all the datasets while dealing with class imbalance mitigation.
Several studies on the use of binary classifier sets for intrusion detection have been established. A good number of these studies used classifiers based on SVM. The authors of [27] presented a simple decision-tree-based OVA model which populates a decision tree structure using a set of class probabilities. An OVA method in [28] was also incorporated into a least-squares SVM technique and analyzed on the KDD dataset. The output showed that, for each of the five classes of traffic, their attack detection rate was approximately 99%. Additionally, the authors observed that the best model in the method realized an average FPR of 0.28%. SVMs in a binary classification method were employed by [29]. The authors in [30] proposed a composite scheme architecture in which specific classifiers were allocated the task of detecting specific classes. For example, an SVM was allocated for the detection of DoS attacks, while an RBF-based neural network was allocated for the detection of U2R-based attacks. The results of the hybrid classifier were transferred to a different ensemble allocated for the detection of R2L and Probe attacks. For this scenario, a definite architecture was defined in advance. A weighting element was included in a scheme of binary SVMs in [31].
The binarization methods that were tested included one-versus-one (OVO), OVA, directed acyclic graphs, and ECOC. It was noticed that the OVA model delivered the best performance. The authors observed that the weight, which measures a prediction's level of certainty, was targeted at the unclassifiable areas in which the group of binary classifiers cannot agree on a single class prediction. The model was assessed using a specific subset of the KDDTest+ dataset, and the outputs proved that employing a weighting scheme with the model resulted in better general performance than the model without the weighting scheme. Individual class performance of binarization approaches has been analyzed in all the above-mentioned works; however, the lowest FPR was realized in the recent works [32][33][34][35][36], while many other algorithms and DoS were considered by [37][38][39][40][41][42][43][44].

Proposed Methodology
As shown in Figure 1, the proposed methodology has five phases. The first phase is data collection. The second phase is data pre-processing: duplicate and inconsistent values are removed from the dataset, missing values are checked for, data normalization brings the whole dataset onto one standard scale, and non-numeric values are converted to numeric by encoding. The third phase is dimensionality reduction, performed using the CFS method. In the fourth phase, the machine learning ensemble classifiers Bagging and Adaboost are applied. The fifth phase is evaluation, in which this research is compared with other state-of-the-art work that used the same approach.

Description
This research uses two datasets: the KDD99 dataset and the NSLKDD dataset.

KDD99 Dataset
KDD99 is one of the most famous and oldest datasets used in network security for intrusion detection systems. KDD99 is a derived version of the 1998 DARPA dataset. The KDD99 dataset was developed in an MIT research lab, and it is used by IDS designers as a benchmark to evaluate various methodologies and techniques [40]. KDD99 has 4,900,000 rows and 41 attributes plus one class label. Twenty-two network attacks are listed in the KDD99 dataset [41]. In this research, we performed binary as well as multiclass classification for the KDD99 and NSLKDD datasets. We labeled all the attacks as anomaly versus normal traffic and then performed experiments. Class labels consist of four major attack categories, DoS, Probe, U2R, and R2L, plus the Normal class. We performed further classification within DoS, Probe, U2R, and R2L in order to detect the sub-categories of these attacks. Table 1 presents the total number of normal and anomaly packets in the KDD99 dataset used in this research: 97,277 and 396,731 packets were used for the anomaly and normal classes to develop the ensemble machine learning classifiers on which training and testing were performed. In addition, 70% of the KDD99 dataset was used for training and validation, and the remaining 30% was used for testing. The samples for KDD99 training and testing are presented in Table 2. Table 3 presents the attacks used in this research for prediction and their number of packets (size). DoS has five sub-attacks; similarly, Probe and R2L each have four sub-attacks.
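The 70/30 split described above can be sketched as follows. This is a minimal illustration using toy row indices, not the authors' actual partitioning code; the seed value is an assumption made only so the example is reproducible.

```python
import random

def train_test_split_70_30(rows, seed=42):
    """Shuffle the rows and split them 70% training / 30% testing."""
    rng = random.Random(seed)      # fixed seed: hypothetical, for reproducibility
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * 0.7)
    return shuffled[:cut], shuffled[cut:]

# Toy stand-in for dataset rows:
train, test = train_test_split_70_30(list(range(100)))
print(len(train), len(test))  # 70 30
```

In practice the same split would be applied to the packet records of each dataset before the ensemble classifiers are trained.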

NSLKDD Dataset
NSLKDD is a derived version of the KDD99 dataset. NSLKDD does not contain the duplicate values that were in the KDD99 dataset, nor does it contain inconsistent values. NSL-KDD contains 148,517 instances overall for training and testing purposes. The NSLKDD set has 41 features in total; some features are binary, some are numeric, and nominal features are also listed in the dataset. The NSLKDD dataset also consists of four major attack categories, DoS, Probe, U2R, and R2L, plus the Normal class. Table 4 presents the total number of normal and anomaly packets in the NSLKDD dataset used in this research. The total numbers of anomaly and normal packets used to train and test the machine learning ensemble models are 71,215 and 77,054, respectively. In addition, 70% of the NSLKDD dataset was used for training and the remaining 30% was used for testing and validation. Table 5 presents the binary classification split: 103,789 packets for training and 44,481 for testing. The numbers of attacks for NSLKDD and the features of the KDD99 and NSLKDD datasets are presented in Tables 6 and 7, respectively. Table 5. NSLKDD dataset binary classifications total packets.

Training and Testing Packets | Count
Training Data Size | 103,789
Testing Data Size | 44,481

After selection of the dataset, data cleaning operations are performed on the datasets to remove noise and normalize the features. Different normalization techniques exist but, in this research, the min-max normalization approach is used, which is better than z-score normalization in terms of scaling and handling outliers' issues. Min-max scaling normalizes values into the range [0, 1]. The equation for min-max normalization is given below:

Z_i = (Y_i − min(Y)) / (max(Y) − min(Y))

where (Y_1, Y_2, . . . , Y_n) are the feature values, Y_i is the feature value that we want to normalize, and Z_i is the normalized value. By doing this, all features now have the same weight and lie in one scope.

Data Encoding
In the process of data encoding, duplicate and inconsistent values were removed earlier from the datasets before the commencement of this process. The next process was to convert the nominal attributes to numeric values. The reason for this is that machine learning algorithms' back-end calculations are done using numeric values and not nominal values. This data encoding step is vital before we proceed to passing data to the proposed model.
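The nominal-to-numeric conversion can be sketched as a simple integer encoding. This is an illustrative sketch, not the authors' preprocessing code; the 'protocol_type' column and its values are hypothetical examples of a nominal KDD99 attribute.

```python
def encode_nominal(column):
    """Replace each distinct nominal value with an integer code,
    assigned in first-seen order."""
    mapping = {}
    encoded = []
    for value in column:
        if value not in mapping:
            mapping[value] = len(mapping)
        encoded.append(mapping[value])
    return encoded, mapping

# A hypothetical 'protocol_type' column:
codes, mapping = encode_nominal(["tcp", "udp", "tcp", "icmp"])
print(codes)    # [0, 1, 0, 2]
print(mapping)  # {'tcp': 0, 'udp': 1, 'icmp': 2}
```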

Feature Selection
Optimal features not only improve accuracy, but also improve computational cost in terms of time. The main focus of feature optimization is not only to decrease the computational cost but also find such feature subsets that can work with different classifiers to produce better results. In this research, we used the correlation-based feature selection method (CFS) for feature selection.
Correlation-Based Feature Selection (CFS)
Figure 2 illustrates the workflow of the CFS model. Feature selection algorithms not only reduce dimensionality but also select optimal features that produce high results in terms of accuracy, precision, recall, and F1-score. Dimensionality reduction also decreases the computational cost of algorithms. The correlation-based feature selection (CFS) algorithm is a dimensionality reduction algorithm that uses a heuristic evaluation function [45][46][47]. CFS ranks features based on their correlation with the prediction class. CFS examines every feature subset; the preferred subsets are highly correlated with the prediction class but uncorrelated with each other. The CFS algorithm considers that features with a low correlation with the prediction class can be ignored, because they play no major role in prediction. On the other side, it is important to screen out redundant features, since they are generally strongly associated with each other or with other features. The following equation scores a subset of feature vectors:

M_s = (A × M_cf) / sqrt(A + A(A − 1) M_ff)

where S is a feature subset with A attributes, M_s is the evaluation (merit) of that subset, M_cf is the average correlation between the class label and the attributes, and M_ff is the average correlation between attributes, i.e., how strongly two features are associated with each other [37]. For a classification problem, CFS calculates the symmetric uncertainty shown in Equation (3):

SU(X, Y) = 2 [E(Y) − E(Y|X)] / (E(X) + E(Y))          (3)

In Equation (3), E represents the entropy function, calculated using Equation (4) below. Entropy is a measure of the uncertainty of a random variable:

E(Y) = − Σ_y P(y) log2 P(y),   E(Y|X) = − Σ_x P(x) Σ_y P(y|x) log2 P(y|x)          (4)

where P(y) represents the prior probability of each value of Y, and P(y|x) is the posterior probability of Y given X.
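The merit function above can be sketched directly. This is a minimal illustration of how CFS trades off class relevance against feature redundancy; the correlation averages below are hypothetical values, not measurements from the KDD99 or NSLKDD datasets.

```python
import math

def cfs_merit(k, avg_class_corr, avg_feature_corr):
    """Merit M_s of a k-feature subset: the numerator rewards correlation
    with the class, the denominator penalizes redundancy among features."""
    return (k * avg_class_corr) / math.sqrt(k + k * (k - 1) * avg_feature_corr)

# Two hypothetical candidate 6-feature subsets:
relevant_distinct  = cfs_merit(6, avg_class_corr=0.8, avg_feature_corr=0.2)
relevant_redundant = cfs_merit(6, avg_class_corr=0.8, avg_feature_corr=0.9)
print(round(relevant_distinct, 3), round(relevant_redundant, 3))  # 1.386 0.836
```

The subset whose features are relevant but mutually uncorrelated scores higher, which is exactly the behavior CFS exploits when searching for an optimal subset.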
Six features were selected from the KDD99 dataset for the binary class, and 11 features were selected for the 21 attacks in the KDD99 dataset. Similarly, for the NSLKDD dataset, 13 features were selected for both binary and multiple attacks, as shown in Table 8. The correlation-based feature selection working algorithm, which describes the modalities of the CFS model, is presented below as Algorithm 1.

Bagging Classifier
An ensemble method is a technique that combines the predictions from multiple machine learning algorithms to make predictions more reliable than any single model (see Algorithm 2). Bootstrap aggregating, or Bagging, is a very effective and powerful ensemble approach: multiple classifiers are combined to obtain more accurate results than their individual performance. The working of Bagging is given below in the form of pseudo code.

Algorithm 2: Bagging classifier algorithm.
Input: KDD99 and NSLKDD datasets
Training:
1. Select the number of bootstrap samples n and the base classifier C (J48, Random Forest, and REPTree in our case).
2. Divide the dataset into training and testing subsets. Produce further training datasets D_1, D_2, D_3, ..., D_n using sampling with replacement.
3. Train a base classifier on each dataset D_i to build n classifiers.
Testing:
1. Each data object X in the testing dataset is passed to the trained classifiers C_1, C_2, C_3, ..., C_n.
2. A label is assigned to every new data object by majority vote. For a classification problem, the majority vote assigns the new label to data point X; for a regression problem, the average value is assigned to the new data object X_i.
3. Repeat these steps until every object in the dataset is classified.
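The steps of Algorithm 2 can be sketched in plain Python. This is a toy illustration, not the Weka implementation used in the paper: a deliberately simple 1-nearest-neighbour learner stands in for J48/Random Forest/REPTree, and the data points are hypothetical.

```python
import random
from collections import Counter

def fit_1nn(X, y):
    """A deliberately simple base learner: 1-nearest-neighbour classifier."""
    def predict(point):
        dists = [(abs(a - point[0]) + abs(b - point[1]), label)
                 for (a, b), label in zip(X, y)]
        return min(dists)[1]          # label of the closest training point
    return predict

def bagging_predict(X, y, point, base_fit, n_models=5, seed=0):
    """Train n_models base learners on bootstrap samples; majority-vote."""
    rng = random.Random(seed)
    n = len(X)
    votes = []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]    # sample WITH replacement
        model = base_fit([X[i] for i in idx], [y[i] for i in idx])
        votes.append(model(point))
    return Counter(votes).most_common(1)[0][0]        # majority vote

# Hypothetical 2-D feature vectors labeled normal/anomaly:
X = [(0, 0), (0, 1), (5, 5), (6, 5)]
y = ["normal", "normal", "anomaly", "anomaly"]
print(bagging_predict(X, y, (5, 6), fit_1nn))
```

The same structure holds for the real ensemble: only the base learner and the feature vectors change.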

Adaboost Classifier
The goal of the Adaboost classifier is to convert weak classifiers into a strong classifier that produces better results:

H(x) = sign( Σ_n θ_n h_n(x) )

where h_n represents the nth weak classifier and θ_n is the corresponding weight for that classifier. The Adaboost classifier is given in Algorithm 3.

Algorithm 3: Adaboost classifier algorithm
Input: KDD99 and NSLKDD datasets
Training:
1. Select the base classifier C.
2. Initialize the weights W_i1 ∈ [0, 1] with Σ_{i=1}^{N} W_i1 = 1; commonly W_i1 = 1/N.
3. For n = 1 → k, produce a training sample D_n from D using the distribution W_n.
4. Train the base classifier C on the data subset D_n to develop the classifier C_n.
5. Calculate the ensemble error e_n = Σ_i w_ni over the data points i in D that classifier C_n misclassifies.
6. If e_n ∈ (0, 0.5), calculate β_n = e_n / (1 − e_n) and update the weights for the next round.

Testing:
1. Each data object X in the testing dataset is passed to the trained classifiers.
2. The class with the maximum weighted vote m_y(x) is decided as the class label of x.
3. Repeat step 2 for all testing data and return the output.
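The weight-update step of Algorithm 3 (β_n = e_n / (1 − e_n)) can be sketched as follows. This is an illustrative single round in the AdaBoost.M1 style, not the Weka implementation; the four-sample weight vector is hypothetical.

```python
def adaboost_round(weights, misclassified):
    """One AdaBoost.M1-style round: given per-sample weights and a boolean
    misclassification mask, return beta_n and the re-normalized weights."""
    error = sum(w for w, miss in zip(weights, misclassified) if miss)
    if not 0 < error < 0.5:
        raise ValueError("base learner must beat random guessing")
    beta = error / (1 - error)
    # down-weight correctly classified samples by beta, then renormalize,
    # so the next round focuses on the samples that were misclassified
    new_w = [w * (1.0 if miss else beta)
             for w, miss in zip(weights, misclassified)]
    total = sum(new_w)
    return beta, [w / total for w in new_w]

w0 = [0.25, 0.25, 0.25, 0.25]              # initial W_i1 = 1/N with N = 4
beta, w1 = adaboost_round(w0, [True, False, False, False])
print(round(beta, 3), [round(w, 3) for w in w1])  # 0.333 [0.5, 0.167, 0.167, 0.167]
```

The misclassified sample's relative weight doubles, which is what forces each subsequent base classifier to concentrate on the hard cases.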

Evaluation Metrics
Various performance metrics are used to evaluate the proposed solution, including precision, recall, F1-measure [48], False Alarm Rate (FAR), Detection Rate (DR), and Accuracy. The false positive (false alarm) rate is the fraction of instances that are normal but are classified as the attack class:

FAR = FP / (FP + TN)
Accuracy is used to measure how many instances are correctly classified as normal and attack classes. Accuracy is obtained by summing the correctly classified instances and dividing by the total number of instances, as shown in Equation (8):

Accuracy = (TP + TN) / (TP + TN + FP + FN)          (8)

Detection Rate (DR) is the number of attacks detected correctly divided by the total number of attacks in the dataset:

DR = TP / (TP + FN)

Precision evaluates the True Positive (TP) entities in relation to the False Positive (FP) entities:

Precision = TP / (TP + FP)

The purpose of recall is to evaluate the True Positive (TP) entities in relation to the False Negative (FN) entities that were not categorized at all. The mathematical form of recall is given in Equation (10):

Recall = TP / (TP + FN)          (10)

Sometimes, performance assessment with accuracy and recall alone may be insufficient. For instance, if one mining algorithm has low recall but high precision and another has the reverse, the question arises of which algorithm is better. This problem is solved by using the F1-score, which is the harmonic mean of recall and precision. The F1-score is calculated as shown in Equation (11):

F1 = 2 × (Precision × Recall) / (Precision + Recall)          (11)
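The metrics above can be computed directly from confusion-matrix counts. As a sketch (not the Weka evaluation code), the counts below are those reported in Tables 9 and 11 for the anomaly (attack) class, with the attack class taken as positive:

```python
def ids_metrics(tp, tn, fp, fn):
    """Evaluation metrics from confusion-matrix counts (attack = positive)."""
    return {
        "accuracy":       (tp + tn) / (tp + tn + fp + fn),
        "detection_rate": tp / (tp + fn),     # recall on the attack class
        "far":            fp / (fp + tn),     # false alarm rate
        "precision":      tp / (tp + fp),
    }

# Anomaly-class counts from Tables 9 and 11:
m = ids_metrics(tp=118238, tn=28934, fp=271, fn=759)
print(round(m["detection_rate"], 3),   # 0.994
      round(m["far"], 3),              # 0.009
      round(m["precision"], 3))        # 0.998
```

These values round to the ~99.4% detection rate, ~0.9% false alarm rate, and ~99.8% precision reported in the results tables.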

Experiments
The simulation was performed using Weka 3.7 [49] on an Intel® Core™ i3-4010 CPU @ 1.70 GHz (4 CPUs) with 8 GB RAM installed; a Haier laptop with a 64-bit operating system was used. Two datasets were used for the experiments in this research: KDD99 [50] and NSLKDD [51]. The KDD99 dataset is an advanced version of the DARPA 1998 dataset. The main feature that separates the KDD99 dataset from DARPA 1998 is that the test data are not drawn from the same probability distribution as the training data; the test set has attacks that the training data does not have. Similarly, NSLKDD is an advanced version of the KDD99 dataset; it solves the problem of duplicate and inconsistent values that the KDD99 dataset had. Table 9 shows that the false positive rate and true positive rate were 0.6% and 99.10%, respectively, for the normal class. Similarly, for the anomaly class, the false positive and true positive rates were 0.9% and 99.40%, respectively. For the normal class, the number of correctly detected packets was 28,934, and 271 packets were incorrectly detected as anomaly packets. In addition, for the anomaly class, 118,238 packets were correctly detected while 759 packets were incorrectly detected as normal packets. From Table 10, we can see that precision for the normal class was 97.40%, recall was 99.10%, and the F1-score was 98.30%. Likewise, for the anomaly class, precision and recall were 99.80% and 99.40%, respectively, and the F1-score was 99.60%. The ROC area for both the normal and anomaly classes was 99.90%. Table 11 shows that, for the normal class, the number of correctly detected packets was 28,934, with 271 packets incorrectly detected as anomaly packets; similarly, for the anomaly class, 118,238 packets were correctly detected, while 759 packets were incorrectly detected as normal packets.
From Table 12, we can see that precision for the normal class was 97.40%, recall was 99.10%, and the F1-score was 98.30%. Similarly, for the anomaly class, precision and recall were 99.80% and 99.40%, respectively, and the F1-score was 99.60%. The ROC area for both the normal and anomaly classes was 99.90%. In Table 14, precision for the normal class is 97.80%, recall is 99.20%, and the F1-score is 98.50%. In addition, for the anomaly class, precision and recall were 99.80% and 99.40%, respectively, and the F1-score was 99.60%. The ROC areas for the normal and anomaly classes were 99.80% and 100%, respectively. As shown in Table 15, the correctly detected normal and anomaly packets were 28,838 and 118,225, respectively. In addition, 367 packets were wrongly classified as anomaly when they were actually normal packets, and 772 anomaly packets were detected as normal packets. According to Table 16, using the Bagging J48 classifier, the false positive rate and true positive rate are 0.6% and 98.70%, respectively, for the normal class; similarly, for the anomaly class, the false positive and true positive rates were 1.30% and 99.40%, respectively. Table 17 shows that the Bagging random forest classifier detects 28,994 packets correctly as normal packets and 118,318 packets as anomaly packets. In addition, 211 packets were detected as anomaly when they were actually normal, and 679 packets were detected as normal when they were actually anomaly. As shown in Table 19, the correctly detected normal and anomaly packets are 29,010 and 118,299, respectively. In addition, 195 packets were wrongly classified as anomaly when they were actually normal, and 698 anomaly packets were detected as normal. The false positive and true positive rates for the normal class are 0.60% and 99.30%, respectively.
Similarly, for the anomaly class, the false positive and true positive rates were 0.70% and 99.40%, respectively, as shown in Table 20. For Bagging REPTree, the normal class achieved 97.70% precision, 99.30% recall, and a 98.50% F1-score; the anomaly class achieved 99.80% precision, 99.40% recall, and a 99.60% F1-score, also shown in Table 20. From Table 23 and Figure 5, we can conclude that the Normal class achieved 99.80% precision, 99.70% recall, and 99.80% F1-measure. Loadmodule, Ftp_write, Phf, and Multihop achieved very low results. Perl, Neptune, Smurf, Guess_passwd, Pod, Teardrop, and Back achieved 100% TP rate, precision, recall, and F1-measure. Buffer_overflow, Portsweep, Ipsweep, Land, Imap, Satan, Nmap, Warezmaster, and Warezclient also performed well, achieving on average 90% precision, recall, and F1-measure. From Table 24 and Figure 6, we can conclude that the Normal class achieved 99.80% precision, 99.70% recall, and 99.80% F1-measure. Loadmodule, Ftp_write, and Phf achieved very low results. Perl, Neptune, Smurf, Guess_passwd, Pod, Teardrop, and Land achieved 100% TP rate, precision, recall, and F1-measure. Buffer_overflow, Portsweep, Ipsweep, Back, Imap, Satan, Nmap, Warezmaster, and Warezclient also performed well, achieving on average 90% precision, recall, and F1-measure. From Table 25 and Figure 7, we can conclude that the Normal class achieved 99.80% precision, recall, and F1-measure. Buffer_overflow achieved a 61.50% TP rate and recall, 80% precision, and a 69.69% F1-measure. Loadmodule, Perl, Phf, and Multihop achieved very low results. Neptune, Smurf, Guess_passwd, Pod, Teardrop, and Imap achieved 100% TP rate, precision, recall, and F1-measure.
Buffer_overflow, Portsweep, Ipsweep, Land, Satan, Nmap, Warezmaster, and Warezclient also performed well, achieving on average 90% precision, recall, and F1-measure.

Figure 6. Classification report for the Bagging J48 KDD99 dataset.

In Table 28, the TP rates for the normal and anomaly classes were 99.10% and 98.90%, respectively, while the FP rates were 1.10% for normal packets and 0.90% for anomaly packets. Precision, recall, and F1-score for normal packets were 99.00%, 99.10%, and 99.00%, respectively. Similarly, for anomaly packets, precision was 99.00%, recall was 98.90%, and the F1-score was 98.90%. The ROC area was 99.90% for both normal and anomaly packets. Table 29 indicates that 44,481 packets were used for testing, of which 44,072 were detected correctly as normal and anomaly packets and 409 were detected incorrectly; the accuracy of AdaBoost random forest was 99.08%. The TP rates for normal and anomaly were 99.00% and 99.20%, respectively, while the FP rates were 0.80% for normal packets and 1.00% for anomaly packets. Precision, recall, and F1-score for normal packets were 99.30%, 99.00%, and 99.10%, respectively. Likewise, for anomaly packets, precision was 98.90%, recall was 99.20%, and the F1-score was 99.00%. The ROC area was 99.80% for both normal and anomaly packets, as shown in Table 30. Table 31 indicates that 44,481 packets were used for testing, of which 44,028 were detected correctly as normal and anomaly packets and 453 were detected incorrectly; the accuracy of AdaBoost REPTree was 98.98%. The TP rates for normal and anomaly were 98.70% and 99.30%, respectively, while the FP rates were 0.70% for normal packets and 1.30% for anomaly packets. Precision, recall, and F1-score for normal packets were 99.40%, 98.70%, and 99.00%, respectively.
On the other hand, for anomaly packets, precision was 99.30%, recall was 99.30%, and the F1-score was 98.90%. The ROC area was 99.90% for both normal and anomaly packets, as shown in Table 32. Table 33 indicates that 44,481 packets were used for testing, of which 44,039 were detected correctly as normal and anomaly packets and 442 were detected incorrectly; the accuracy of Bagging J48 was 99.00%. The TP rates for normal and anomaly were 99.10% and 98.90%, respectively, while the FP rates were 1.10% for normal packets and 0.90% for anomaly packets. Precision, recall, and F1-score for normal packets were 99.00%, 99.10%, and 99.00%, respectively. Similarly, for anomaly packets, precision was 99.00%, recall was 98.90%, and the F1-score was 99.00%. The ROC area was 99.90% for both normal and anomaly packets, as shown in Table 34. Table 35 indicates that 44,481 packets were used for testing, of which 44,072 were detected correctly as normal and anomaly packets and 409 were detected incorrectly; the accuracy of Bagging random forest was 99.08%. The TP rates for normal and anomaly were 99.20% and 99.10%, respectively, while the FP rates were 0.90% for normal packets and 0.80% for anomaly packets. Precision, recall, and F1-score for normal packets were 99.10%, 99.10%, and 99.10%, respectively. Similarly, for anomaly packets, precision was 99.10%, recall was 99.10%, and the F1-score was 99.10%. The ROC area was 99.90% for both normal and anomaly packets, as shown in Table 36. Table 37 indicates that 44,481 packets were used for testing, of which 44,072 were detected correctly as normal and anomaly packets and 409 were detected incorrectly; the accuracy of Bagging REPTree was 99.08%. The TP rates for normal and anomaly were 99.00% and 98.90%, respectively.
The FP rates were 1.10% for normal packets and 1.00% for anomaly packets. Precision, recall, and F1-score for normal packets were 99.00%, 99.00%, and 99.00%, respectively. Similarly, for anomaly packets, precision was 98.90%, recall was 98.90%, and the F1-score was 98.90%. The ROC area was 99.90% for both normal and anomaly packets, as shown in Table 38. From Table 39 and Figure 9, we can conclude that the Normal class achieved 99.80% precision, recall, and F1-measure. The Neptune class achieved 99.90% precision, 100% recall, and 99.90% F1-measure. Similarly, Warezclient achieved 95.60% precision, 90.20% recall, and 92.80% F1-measure, while Ipsweep achieved 99.50% precision, 90.50% recall, and 94.80% F1-measure. Portsweep achieved above 97% precision, recall, and F1-measure. Teardrop achieved 96.30% precision, 100% recall, and 98.10% F1-measure. For Nmap, precision, recall, and F1-measure were 78.20%, 96.20%, and 86.30%, respectively. Satan, Smurf, and Pod achieved on average 90% precision, recall, and F1-measure. The Back attack achieved 100% recall, with 99.80% precision and a 99.90% F1-measure. Guess_passwd achieved 96.50% precision, 96.80% recall, and 96.70% F1-measure. The Saint, Snmpgetattack, and Snmpguess attacks did not perform well. Warezmaster, Mscan, Apache2, Processtable, Httptunnel, and Mailbomb also achieved promising results for precision, recall, F1-measure, and TP rate.
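As a concrete check on how the per-class scores above follow from the raw confusion-matrix counts, the short sketch below recomputes the Bagging J48 rates from the Table 15 counts and the 99.08% accuracy from Table 29. The helper name `per_class_rates` is ours, not the paper's.

```python
def per_class_rates(tp, fn, fp, tn):
    """TP rate (recall) and FP rate for one class of a binary confusion
    matrix, returned as percentages rounded to one decimal place."""
    tpr = tp / (tp + fn)   # share of this class detected correctly
    fpr = fp / (fp + tn)   # share of the other class wrongly flagged as this one
    return round(100 * tpr, 1), round(100 * fpr, 1)

# Table 15 counts (Bagging J48, KDD99 binary task): 28,838 normal and 118,225
# anomaly packets correct; 367 normal flagged as anomaly; 772 anomalies missed.
normal = per_class_rates(tp=28838, fn=367, fp=772, tn=118225)
anomaly = per_class_rates(tp=118225, fn=772, fp=367, tn=28838)
# normal -> (98.7, 0.6) and anomaly -> (99.4, 1.3), matching Table 16.

# Table 29: 44,072 of 44,481 test packets correct -> 99.08% accuracy.
accuracy = round(100 * 44072 / 44481, 2)
```

The same arithmetic reproduces every TP/FP-rate pair quoted above from its corresponding confusion-matrix table.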

Discussion
In this section, we discuss our key outcomes and compare them with previous work. Tables 45-47 provide the detailed results of the whole study, and we discuss them one by one in detail.

Table 47. Comparison analysis of our proposed model with other ensemble models.
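The Bagging ensembles compared above all follow the same recipe: train each base classifier on a bootstrap resample of the training data, then combine predictions by majority vote. The sketch below illustrates that recipe in pure Python; the decision-stump base learner and the toy one-dimensional data are illustrative stand-ins for the J48/REPTree trees and normalized NSLKDD features used in the paper.

```python
import random
from collections import Counter

def train_stump(data):
    """Fit a 1-D decision stump: pick the threshold and orientation with
    the fewest errors on the (resampled) training data."""
    best = None  # (errors, threshold, label predicted when x >= threshold)
    for t in sorted({x for x, _ in data}):
        for above in (0, 1):
            errs = sum((above if x >= t else 1 - above) != y for x, y in data)
            if best is None or errs < best[0]:
                best = (errs, t, above)
    _, t, above = best
    return lambda x: above if x >= t else 1 - above

def bagging(data, n_models=25, seed=0):
    """Bagging: each stump is trained on a bootstrap resample of the data;
    the ensemble predicts by majority vote over all stumps."""
    rng = random.Random(seed)
    models = [train_stump([rng.choice(data) for _ in data])
              for _ in range(n_models)]
    return lambda x: Counter(m(x) for m in models).most_common(1)[0][0]

# Toy 1-D data: (normalized feature value, label); 1 = anomaly, 0 = normal.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
predict = bagging(data)
```

AdaBoost differs from this sketch only in how the resampling is steered: instead of uniform bootstrap draws, each round reweights the training points that the previous base learner misclassified.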

Conclusions and Future Work
In this paper, a machine learning based intrusion detection system has been proposed. During experimentation, various ensemble machine learning algorithms were implemented on the NSLKDD and KDD99 datasets. First, the NSLKDD and KDD99 datasets were collected. Then, the collected data were transformed into binary classes: an Attack class and a Normal class. In addition, we kept the data as multiple attack classes (21 attacks for both the KDD99 and NSLKDD datasets). At the initial stage of the experiment, the datasets were prepared through several steps, such as pre-processing, min-max normalization, feature optimization, and dimensionality reduction. After selecting the best features, we applied different machine learning algorithms to both datasets. Ensemble random forest outperformed all other methods in terms of accuracy, training time, and false positive rate. Experimental results show that our method performs better in terms of detection rate, false alarm rate, and accuracy for both the KDD99 and NSLKDD datasets: we achieved a 0.0% FPR on the KDD99 dataset and a 0.5% FPR on the NSLKDD dataset, along with an average testing accuracy of 99% on both datasets. A limitation of this work is that some attacks have 0% classification accuracy; the reason is that those attack classes contain fewer than 20 samples, while the other attack classes are much larger. In the future, we will address this problem using data balancing methods such as SMOTE, which balances the classes and improves the performance of the minority classes.
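The balancing step proposed above can be sketched as follows. This is a minimal, illustrative implementation of the core SMOTE idea (synthesizing new minority-class samples by interpolating between a real sample and one of its nearest neighbours); the four-point toy minority class is ours, and a real experiment would use a library implementation such as the one in imbalanced-learn.

```python
import random

def smote_oversample(minority, n_new, k=2, seed=0):
    """Naive SMOTE sketch: synthesize minority-class points by interpolating
    between a random minority sample and one of its k nearest neighbours."""
    rng = random.Random(seed)

    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: sq_dist(x, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # random position on the segment between x and nb
        synthetic.append(tuple(xi + gap * (ni - xi)
                               for xi, ni in zip(x, nb)))
    return synthetic

# Toy U2R-like minority class: only four samples in a 2-D feature space.
minority = [(0.10, 0.20), (0.15, 0.25), (0.20, 0.10), (0.12, 0.30)]
new_points = smote_oversample(minority, n_new=16)
# Every synthetic point lies between two real minority samples.
```

Because each synthetic point is a convex combination of two real minority samples, the oversampled class fills out its own region of feature space rather than merely duplicating the handful of rare-attack records, which is why SMOTE is expected to help the attack classes with fewer than 20 samples.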