Dynamic Multi-Swarm Differential Learning Quantum Bird Swarm Algorithm and Its Application in Random Forest Classification Model

The bird swarm algorithm is one of the swarm intelligence algorithms proposed in recent years. However, the original bird swarm algorithm has some drawbacks, such as a tendency to fall into local optima and slow convergence. To overcome these shortcomings, a dynamic multi-swarm differential learning quantum bird swarm algorithm combining three hybrid strategies is established. First, a dynamic multi-swarm bird swarm algorithm is established and the differential evolution strategy is adopted to enhance the randomness of the foraging behavior's movement, which gives the bird swarm algorithm a stronger global exploration capability. Next, quantum behavior is introduced into the bird swarm algorithm to search the solution space more efficiently. Then, the improved bird swarm algorithm is used to optimize the number of decision trees and the number of predictor variables of the random forest classification model. In the experiments, 18 benchmark functions, 30 CEC2014 functions, and 8 UCI datasets are used to show that the improved algorithm and model are very competitive and outperform the other algorithms and models. Finally, the effective random forest classification model is applied to actual oil logging prediction. As the experimental results show, the three strategies can significantly boost the performance of the bird swarm algorithm, and the proposed learning scheme yields a more stable random forest classification model with higher accuracy and efficiency than its counterparts.


Introduction
The concept of swarm intelligence was first proposed by Hackwood and Beni in 1992 [1]. Swarm intelligence algorithms have been proven able to solve nondifferentiable problems, NP-hard problems, and difficult nonlinear problems that traditional techniques cannot handle. For this reason, swarm intelligence algorithms are a hot research topic in computer science and have been updated from generation to generation. Among the classic swarm intelligence algorithms, particle swarm optimization (PSO) [2] defines the basic principles and equations of this family of methods. In recent years, many new swarm intelligence algorithms have been proposed: the artificial bee colony (ABC) algorithm [3] is inspired by the food-locating behavior of bees; the artificial fish school algorithm (AFSA) [4] and the firefly algorithm (FA) [5] are inspired by the foraging of fish and fireflies; and cat swarm optimization (CSO) [6] is developed from the vigilance and foraging behavior of cats in nature. Based on the foraging, vigilance, and flight behaviors of bird swarms in nature, Meng et al. proposed a novel swarm intelligence algorithm called the bird swarm algorithm (BSA) [7]. Owing to the advantages above, swarm intelligence algorithms have been applied to optimization in various fields, such as PSO for mutation testing problems [8], the genetic algorithm (GA) for convolutional neural network parameters [9], FA for convolutional neural network problems [10], and the whale optimization algorithm (WOA) for cloud computing environments [11]. BSA, which is used in this paper, has likewise been widely applied to engineering optimization problems.
However, the original swarm intelligence algorithms have limitations in solving some practical problems. The hybrid strategy, one of the main research directions for improving the performance of swarm intelligence algorithms, has become a research hotspot in machine learning. Tuba and Bacanin [12] modified the exploitation process of the original seeker optimization algorithm (SOA) by hybridizing it with FA, which overcame its shortcomings and outperformed other algorithms. Strumberger et al. [13] also proposed a dynamic search tree growth algorithm (TGA) and hybridized elephant herding optimization (EHO) with ABC, and the simulation results showed that the proposed approach was viable and effective. Yang [14] analyzed swarm intelligence algorithms using differential evolution, dynamic systems, self-organization, and a Markov chain framework. The discussions demonstrate that hybrid algorithms have some advantages over traditional algorithms. Bacanin and Tuba [15] proposed a modified ABC based on GA, and the obtained results show that the hybrid ABC provides competitive results and outperforms its counterparts. Liu et al. [16] presented a multistrategy brain storm optimization (BSO) with dynamic parameter adjustment that is more competitive than other related algorithms. Peng et al. [17] proposed an FA with a luciferase inhibition mechanism to improve the effectiveness of selection. The simulation results showed that the proposed approach has the best performance on some complex functions. Peng et al. [18] also developed a hybrid approach that uses a best-neighbor-guided solution search strategy in the ABC algorithm.
The experimental results indicate that the proposed ABC is very competitive and outperforms the other algorithms. It can be seen that hybridization is a successful strategy for improving swarm intelligence algorithms, so BSA will be improved by a hybrid strategy in this paper.
Similarly, BSA can also be applied to multiple fields, especially parameter estimation, and the hybrid strategy is also the main improvement method for BSA. In 2017, Xu et al. [19] proposed an improved boundary BSA (IBBSA) for chaotic system optimization of the Lorenz system and the coupling motor system. However, the improved boundary learning strategy is random, which limits the generalization performance of IBBSA. Yang and Liu [20] introduced a dynamic weight into the foraging formula of BSA (IBSA), which provides a solution to the anti-same-frequency interference problem of shipborne radar. However, the dynamic weight is only introduced into the foraging formula of BSA, and IBSA ignores the impact of population initialization. Wang et al. [21] designed a strategy named "disturbing the local optimum" to help the original BSA converge to the global optimal solution faster and more stably. However, "disturbing the local optimum" is also random, so the generalization performance of the improved BSA is not very good.
Like many swarm intelligence algorithms, BSA also faces the problems of being trapped in local optima and slow convergence. These disadvantages limit the wider application of BSA. In this paper, a dynamic multi-swarm differential learning quantum BSA called DMSDL-QBSA is proposed, which introduces three hybrid strategies into the original BSA to improve its effectiveness. Motivated by the insufficient generalization ability reported in the literature [19, 21], we first establish a dynamic multi-swarm bird swarm algorithm (DMS-BSA) and merge the differential evolution operator into each sub-swarm of DMS-BSA, which improves both the local and global search capability of the foraging behavior. Second, because the impact of population initialization is neglected in the literature [20], where quantum behavior was used to optimize particle swarm optimization and obtained a good ability to escape local optima, we use the quantum system to initialize the search space of the birds. Consequently, this improves the convergence rate of the whole population and prevents BSA from falling into a local optimum. To validate the effectiveness of the proposed method, we have evaluated the performance of DMSDL-QBSA on classical benchmark functions and CEC2014 functions, including unimodal and multimodal functions, in comparison with state-of-the-art methods and newly popular algorithms. The experimental results show that the three improvement strategies significantly boost the performance of BSA.
Based on the DMSDL-QBSA, an effective hybrid random forest (RF) model for actual oil logging prediction is established, called the DMSDL-QBSA-RF approach. RF has the characteristics of being nonlinear and anti-interference [22]. In addition, it can decrease the possibility of overfitting, which often occurs in actual logging. RF has been widely used in various classification problems, but it has not yet been applied to the field of actual logging. Parameter estimation is a prerequisite for building the RF classification model. The two key parameters of RF are the number of decision trees and the number of predictor variables; the former is denoted n_tree, and the latter m_try. Meanwhile, parameter estimation of the model is a complex optimization problem that traditional methods might fail to solve. Many works have proposed using swarm intelligence algorithms to find the best parameters of the RF model. Ma and Fan [23] adopted AFSA and PSO to optimize the parameters of RF. Hou et al. [24] used DE to obtain an optimal set of initial parameters for RF. Liu et al. [25] compared genetic algorithms, simulated annealing, and hill climbing algorithms for optimizing the parameters of RF. From these papers, we can see that metaheuristic algorithms are well suited to this problem. In this study, the DMSDL-QBSA is used to optimize the two key parameters, which can improve the accuracy of RF without overfitting. When investigating the performance of the DMSDL-QBSA-RF classification model compared with 3 swarm intelligence algorithm-based RF methods, 8 two-dimensional UCI datasets are applied. As the experimental results show, the proposed learning scheme can guarantee a more stable RF classification model with higher predictive accuracy than its counterparts.
The main contributions of this paper are as follows:
(i) In order to achieve a better balance between efficiency and speed for BSA, we have studied the effects of four different hybrid strategies of the dynamic multi-swarm method, differential evolution, and quantum behavior on the performance of BSA.
(ii) The proposed DMSDL-QBSA has successfully solved the n_tree and m_try setting problem of RF. The resulting hybrid classification model has been rigorously evaluated on oil logging prediction.
(iii) The proposed hybrid classification model delivers better classification performance and offers more accurate and faster results than other swarm intelligence algorithm-based RF models.

The original BSA idealizes the behavior of bird swarms with the following five rules:
Rule 1: Each bird can switch between vigilance behavior and foraging behavior, and whether a bird forages or keeps vigilance is modeled as a random decision.
Rule 2: When foraging, each bird records and updates its best previous experience and the swarm's best previous experience with food patches. This experience is also used to search for food, and social information is shared instantly across the group.
Rule 3: When keeping vigilance, each bird tries to move towards the center of the swarm. This behavior may be influenced by disturbances caused by swarm competition. Birds with higher food reserves are more likely to lie near the swarm's center than birds with lower reserves.
Rule 4: Birds fly to another site regularly. When flying to another site, birds often switch between producing and scrounging: the bird with the highest reserves is a producer, the bird with the lowest is a scrounger, and the other birds randomly choose to be producers or scroungers.
Rule 5: Producers actively seek food. Scroungers randomly follow a producer to search for food.

Bird Swarm Algorithm and Its Improvement
According to Rule 1, we define FQ as the interval between flight behaviors, P (P ∈ (0, 1)) as the probability of foraging behavior, and δ ∈ (0, 1) as a uniform random number.
(1) Foraging behavior. If the number of iterations is less than FQ and δ ≤ P, the bird forages. Rule 2 can be written mathematically as

x_{i,j}^{t+1} = x_{i,j}^t + (p_{i,j} − x_{i,j}^t) × C × rand(0, 1) + (g_j − x_{i,j}^t) × S × rand(0, 1),   (1)

where C and S are two positive numbers; the former is called the cognitive accelerated coefficient, and the latter the social accelerated coefficient. Here, p_{i,j} is the i-th bird's best previous position and g_j is the swarm's best previous position.

(2) Vigilance behavior. If the number of iterations is less than FQ and δ > P, the bird keeps vigilance. Rule 3 can be written mathematically as

x_{i,j}^{t+1} = x_{i,j}^t + A_1 × (mean_j − x_{i,j}^t) × rand(0, 1) + A_2 × (p_{k,j} − x_{i,j}^t) × rand(−1, 1),   (2)
A_1 = a_1 × exp(−pFit_i × N / (sumFit + ε)),   (3)
A_2 = a_2 × exp(((pFit_i − pFit_k) / (|pFit_k − pFit_i| + ε)) × (N × pFit_k / (sumFit + ε))),   (4)

where a_1 and a_2 are two positive constants in [0, 2], pFit_i is the best fitness value of the i-th bird, and sumFit is the sum of the swarm's best fitness values. Here, k (k ≠ i) is a randomly chosen bird, N is the population size, and ε, which is used to avoid zero-division error, is the smallest constant in the computer. mean_j denotes the j-th element of the whole swarm's average position.

(3) Flight behavior. If the number of iterations equals FQ, the bird performs flight behavior, which is divided into the behaviors of producers and scroungers according to fitness. Rules 4 and 5 can be written mathematically as

x_{i,j}^{t+1} = x_{i,j}^t + randn(0, 1) × x_{i,j}^t,   (5)
x_{i,j}^{t+1} = x_{i,j}^t + (x_{k,j}^t − x_{i,j}^t) × FL × rand(0, 1),   (6)

where equation (5) describes producers, equation (6) describes a scrounger following a producer k, and FL (FL ∈ [0, 2]) means that the scrounger follows the producer to search for food.

The dynamic multi-swarm method has been widely used in real-world applications because it is efficient and easy to implement. In addition, it is very common in the improvement of swarm intelligence optimizers, such as the coevolutionary algorithm [26], the framework of evolutionary algorithms [27], multiobjective particle swarm optimization [28], hybrid dynamic robust optimization [29], and the PSO algorithm [30, 31]. However, the PSO algorithm easily falls into local optima and its generalization performance is not high. Consequently, motivated by these studies, we establish a dynamic multi-swarm bird swarm algorithm (DMS-BSA), which improves the local search capability of the foraging behavior.
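The three behavior updates above can be sketched in a few lines of Python. This is an illustrative reading of equations (1)-(6), not the authors' implementation; the function names and the list-based swarm layout are assumptions made for the sketch.

```python
import random

def forage(x, p, g, C=1.5, S=1.5):
    """Rule 2 (eq. (1)): move towards own best p and the swarm best g."""
    return [xi + (pi - xi) * C * random.random() + (gi - xi) * S * random.random()
            for xi, pi, gi in zip(x, p, g)]

def vigilance(x, mean, pk, A1, A2):
    """Rule 3 (eq. (2)): move towards the swarm centre, perturbed by bird k."""
    return [xi + A1 * (mj - xi) * random.random() + A2 * (pkj - xi) * random.uniform(-1, 1)
            for xi, mj, pkj in zip(x, mean, pk)]

def flight(x, x_other=None, FL=1.0):
    """Rules 4/5 (eqs. (5)-(6)): producers explore, scroungers follow one."""
    if x_other is None:  # producer: random walk scaled by the position itself
        return [xi + random.gauss(0, 1) * xi for xi in x]
    return [xi + (xo - xi) * FL * random.random() for xi, xo in zip(x, x_other)]
```

Note that when a bird's position already coincides with both its personal and the global best, the foraging step leaves it unchanged, matching equation (1).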

The Dynamic Multi-Swarm Bird Swarm Algorithm
In DMS-PSO, the whole population is divided into many small swarms, which are frequently regrouped using various reorganization schedules to exchange information. The velocity update strategy is

v_{i,j} = w × v_{i,j} + c_1 × rand(0, 1) × (p_{i,j} − x_{i,j}) + c_2 × rand(0, 1) × (l_{i,j} − x_{i,j}),   (7)

where w is the inertia weight, c_1 and c_2 are acceleration coefficients, and l_{i,j} is the best historical position achieved within the local community (sub-swarm) of the i-th particle.
According to the characteristics of equation (1), the foraging behavior formula of BSA is similar to the particle velocity update formula of PSO. So, following equation (7), we can obtain the improved foraging behavior formula (equation (8)), where GV is called the guiding vector. The dynamic multi-swarm method is used to improve the local search capability, while the guiding vector GV enhances the global search capability of the foraging behavior. Obviously, we need to build a good guiding vector.
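The dynamic multi-swarm bookkeeping described above can be sketched as follows. The group layout, the regrouping helper, and its name are illustrative assumptions; the essential idea is that sub-swarm membership is reshuffled periodically so information migrates between groups, and each sub-swarm maintains its own local best l_i.

```python
import random

def regroup(indices, n_groups, rng=random):
    """Randomly repartition bird indices into n_groups sub-swarms."""
    shuffled = indices[:]
    rng.shuffle(shuffled)
    # deal the shuffled indices round-robin into the groups
    return [shuffled[i::n_groups] for i in range(n_groups)]

def local_best(group, fitness):
    """l_best of one sub-swarm: the member with the smallest fitness."""
    return min(group, key=lambda i: fitness[i])
```

In a full optimizer, `regroup` would typically be called every fixed number of generations, with `local_best` feeding the l_{i,j} term of the update formula.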

The Guiding Vector Based on Differential Evolution.
Differential evolution (DE) is a powerful evolutionary algorithm with three evolutionary operators for solving tough global optimization problems [32]. Owing to its excellent global search capability, DE has attracted increasing attention in evolutionary computation, with variants such as hybrid multiple crossover operations [33] and DE/neighbor/1 [34]. Since DE has a good global search capability, we establish the guiding vector GV based on the differential evolution operators to improve the global search capability of the foraging behavior. The detailed implementation of GV is as follows:
(1) Differential mutation. According to the characteristics of equation (8), the "DE/best/1," "DE/best/2," and "DE/current-to-best/1" mutation strategies are suitable. The experiments in the literature [31] showed that the "DE/best/1" mutation strategy is the most suitable in DMS-PSO, so we choose this strategy in BSA; the resulting "DE/lbest/1" mutation strategy is given in equation (9). Note that some components of the mutant vector v_{i,j} may violate predefined boundary constraints. In this case, boundary handling is applied.
(2) Crossover. After differential mutation, a binomial crossover operation exchanges some components of the mutant vector v_{i,j} with the best previous position p_{i,j} to generate the target vector u_{i,j}, as given in equation (11).
(3) Selection. Because the purpose of BSA is to find the best fitness, a selection operation chooses the vector with the better fitness to enter the next generation, yielding the selection operator, namely, the guiding vector GV (equation (12)).
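The three DE operators used to build the guiding vector can be sketched as below. The scale factor F, crossover rate CR, and clipping bounds are illustrative values, and "lbest" here stands for the sub-swarm's best position; this is a generic DE sketch, not the paper's exact equations (9)-(12).

```python
import random

def mutate_lbest1(lbest, xr1, xr2, F=0.5, lo=-100.0, hi=100.0):
    """'DE/lbest/1'-style mutation with simple boundary clipping."""
    v = [lb + F * (a - b) for lb, a, b in zip(lbest, xr1, xr2)]
    return [min(max(vj, lo), hi) for vj in v]

def crossover(v, p, CR=0.9, rng=random):
    """Binomial crossover of the mutant v against the best position p."""
    jrand = rng.randrange(len(v))  # guarantees at least one mutant gene
    return [vj if (rng.random() <= CR or j == jrand) else pj
            for j, (vj, pj) in enumerate(zip(v, p))]

def select(u, p, fit):
    """Greedy selection: keep whichever vector has the smaller fitness."""
    return u if fit(u) <= fit(p) else p
```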

The Initialization of the Search Space Based on Quantum Behavior.
Quantum behavior is a nonlinear superposition system. With its simple and effective characteristics and good performance in global optimization, it has been applied to optimize many algorithms, such as particle swarm optimization [35] and the pigeon-inspired optimization algorithm [36]. Consequently, in view of these studies and its excellent global optimization performance, we use the quantum system to initialize the search space of the birds. The quantum-behaved particle position is given by equations (13)-(15). According to the characteristics of equations (13)-(15), we obtain the improved search space initialization formulas (equations (16) and (17)), where β is a positive number called the contraction-expansion factor. Here, x_i^t is the position of the particle at the previous moment and mbest is the average of the best previous positions of all the birds (Algorithm 1).
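The canonical QPSO-style position update underlying this initialization can be sketched as follows: each bird is sampled around an attractor lying between its personal best and the global best, at a distance scaled by β|mbest − x|. The function names and the 50/50 sign choice follow the standard QPSO formulation and are assumptions, not the paper's exact equations (16)-(17).

```python
import math
import random

def mbest(pbests):
    """Mean of the personal best positions of all birds (component-wise)."""
    d = len(pbests[0])
    return [sum(p[j] for p in pbests) / len(pbests) for j in range(d)]

def quantum_update(x, pbest, gbest, mb, beta=0.75, rng=random):
    """Sample a new position around an attractor between pbest and gbest."""
    new = []
    for j in range(len(x)):
        phi = rng.random()
        attractor = phi * pbest[j] + (1 - phi) * gbest[j]
        u = rng.random() or 1e-12          # guard against log(0)
        step = beta * abs(mb[j] - x[j]) * math.log(1.0 / u)
        new.append(attractor + step if rng.random() < 0.5 else attractor - step)
    return new
```

When mbest coincides with the current position, the step vanishes and the bird lands exactly on the attractor, which illustrates how the contraction-expansion factor β only scales the spread around it.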

Procedures of the DMSDL-QBSA.
In summary, in order to improve the local and global search capabilities of BSA, this paper improves BSA in three parts: (1) to improve the local search capability of the foraging behavior, we put forward equation (8) based on the dynamic multi-swarm method; (2) to obtain the guiding vector that improves the global search capability of the foraging behavior, we put forward equations (9), (11), and (12) based on differential evolution; (3) to expand the initialization search space of the birds and improve the global search capability, we put forward equations (16) and (17) based on quantum behavior.
Finally, the steps of DMSDL-QBSA are summarized in Algorithm 1.

Simulation Experiment and Analysis.
This section presents the evaluation of DMSDL-QBSA using a series of experiments on benchmark functions and CEC2014 test functions. All experiments in this paper are implemented with MATLAB R2014b on Windows 7 (64-bit), an Intel(R) Core(TM) i5-2450M CPU @ 2.50 GHz, and 4.00 GB RAM. To obtain fair results, all the experiments were conducted under the same conditions. The population size is set to 30 in these algorithms, and each algorithm runs 30 times independently for each function.

Benchmark Functions and CEC 2014 Test Functions.
When investigating the effectiveness and generality of DMSDL-QBSA compared with several hybrid algorithms and popular algorithms, 18 benchmark functions and the CEC2014 test functions are applied. To test the effectiveness of the proposed DMSDL-QBSA, 18 benchmark functions [37] are adopted, all of which have an optimal value of 0. The benchmark functions and their search ranges are shown in Table 1. In this test suite, f_1 − f_9 are unimodal functions, usually used to investigate whether the proposed algorithm has good convergence performance. Then, f_10 − f_18 are multimodal functions, used to test the global search capability of the proposed algorithm. The smaller the fitness value of a function, the better the algorithm performs. Furthermore, to verify the comprehensive performance of DMSDL-QBSA more thoroughly, another 30 complex CEC2014 benchmarks are used. The CEC2014 benchmark functions are briefly described in Table 2.

Parameter Settings.
In order to verify the effectiveness and generalization of the proposed DMSDL-QBSA, the improved DMSDL-QBSA is compared with several hybrid algorithms: BSA [7], DE [32], DMSDL-PSO [31], and DMSDL-BSA. Another five popular intelligence algorithms, namely, the grey wolf optimizer (GWO) [38], the whale optimization algorithm (WOA) [39], the sine cosine algorithm (SCA) [40], the grasshopper optimization algorithm (GOA) [41], and the sparrow search algorithm (SSA) [42], are also compared with DMSDL-QBSA. These algorithms, representing the state of the art, can verify the performance of DMSDL-QBSA more comprehensively. For a fair comparison, the population size of all algorithms is set to 30, and the other parameters of all algorithms are set according to their original papers. The parameter settings of the algorithms involved are shown in detail in Table 3.

Comparison on Benchmark Functions with Hybrid Algorithms.
According to Section 2.2, three hybrid strategies (the dynamic multi-swarm method, DE, and quantum behavior) have been combined with the basic BSA method. When investigating the effectiveness of DMSDL-QBSA compared with several hybrid algorithms, namely, BSA, DE, DMSDL-PSO, and DMSDL-BSA, 18 benchmark functions are applied. Compared with DMSDL-QBSA, quantum behavior is not used in the dynamic multi-swarm differential learning bird swarm algorithm (DMSDL-BSA). The number of function evaluations (FEs) is 10000. We selected two different dimension sizes (Dim): Dim = 10 is a typical dimension for the benchmark functions, and Dim = 2 is chosen because RF has two parameters to be optimized, which means the optimization function is 2-dimensional. The fitness value curves of one run of the algorithms on eight different functions are shown in Figures 1 and 2, where the horizontal axis represents the number of iterations and the vertical axis represents the fitness value. We can clearly see the convergence speeds of the different algorithms. The maximum value (Max), minimum value (Min), mean value (Mean), and variance (Var) obtained by the benchmark algorithms are shown in Tables 4-7, where the best results are marked in bold. Tables 4 and 5 show the performance of the algorithms on unimodal functions when Dim = 10 and 2, and Tables 6 and 7 show their performance on multimodal functions when Dim = 10 and 2.
(1) Unimodal functions. From the numerical testing results on 8 unimodal functions in Table 4, we can see that DMSDL-QBSA can find the optimal solution for all unimodal functions and reaches the minimum value of 0 on f_1, f_2, f_3, f_7, and f_8. Both DMSDL-QBSA and DMSDL-BSA can find the minimum value. However, DMSDL-QBSA has the best mean value and variance on each function. The main reason is that DMSDL-QBSA has better population diversity during the initialization period. In summary, DMSDL-QBSA has the best performance on unimodal functions compared to the other algorithms when Dim = 10.
Obviously, DMSDL-QBSA has a relatively good convergence speed. The evolution curves of these algorithms on four unimodal functions f_1, f_5, f_6, and f_9 are drawn in Figure 1. It can be seen from the figure that the curve of DMSDL-QBSA descends fastest, within far fewer than 10000 iterations. For f_1, f_5, and f_9, DMSDL-QBSA has the fastest convergence speed among the compared algorithms, whereas the original BSA obtains the worst solution because it is trapped in a local optimum prematurely. For function f_6, none of these algorithms finds the value 0; however, the convergence speed of DMSDL-QBSA is significantly faster than the other algorithms in the early stage, and the solution eventually found is the best. Overall, owing to the enhanced population diversity, DMSDL-QBSA has a relatively excellent convergence speed when Dim = 2.

(2) Multimodal functions
From the numerical testing results on 8 multimodal functions in Table 6, we can see that DMSDL-QBSA can find the optimal solution for all multimodal functions and reaches the minimum value of 0 on f_11, f_13, and f_16. DMSDL-QBSA has the best performance on f_11, f_12, f_13, f_14, f_15, and f_16; BSA works best on f_10; DMSDL-PSO does not perform very well. Moreover, DMSDL-QBSA has the best mean value and variance on most functions. The main reason is that DMSDL-QBSA has a stronger global exploration capability based on the dynamic multi-swarm method and differential evolution. In summary, DMSDL-QBSA performs relatively well on multimodal functions compared to the other algorithms at the typical Dim = 10. Obviously, DMSDL-QBSA has a relatively good global search capability.
The evolution curves of these algorithms on four multimodal functions f_12, f_13, f_17, and f_18 when Dim = 2 are depicted in Figure 2. We can see that DMSDL-QBSA can find the optimal solution within the same number of iterations. For f_13 and f_17, DMSDL-QBSA continues to decline, whereas the original BSA and DE produce flat lines because of their poor global convergence ability. For functions f_12 and f_18, although DMSDL-QBSA is also trapped in a local optimum, it finds the minimum value compared to the other algorithms. Obviously, the convergence speed of DMSDL-QBSA is significantly faster than the other algorithms in the early stage, and the solution eventually found is the best. In general, owing to the enhanced population diversity, DMSDL-QBSA has a relatively balanced global search capability when Dim = 2.
Furthermore, from the numerical testing results on nine multimodal functions in Table 7, we can see that DMSDL-QBSA has the best performance on f_11, and DMSDL-BSA also reaches the minimum value of 0 on f_11 and f_13. In summary, DMSDL-QBSA has a superior global search capability on most multimodal functions when Dim = 2. Obviously, DMSDL-QBSA can find the best two parameters for RF because of its excellent global search capability. Overall, it can be seen from Figures 1 and 2 and Tables 4-7 that DMSDL-QBSA obtains the best function values in most cases. This indicates that the hybrid strategies of BSA, namely, the dynamic multi-swarm method, DE, and quantum behavior operators, guide the birds towards the best solutions, and that DMSDL-QBSA is well able to search for the best two RF parameters with high accuracy and efficiency.

Comparison on Benchmark Functions with Popular Algorithms.
When comparing the timeliness and applicability of DMSDL-QBSA with several popular algorithms, namely, GWO, WOA, SCA, GOA, and SSA, 18 benchmark functions are applied. GWO, WOA, GOA, and SSA are swarm intelligence algorithms. In this experiment, the dimension size of these functions is 10, and the number of function evaluations (FEs) is 100000.
The maximum value (Max), minimum value (Min), mean value (Mean), and variance (Var) obtained by the different algorithms are shown in Tables 8 and 9, where the best results are marked in bold.

From the test results in Table 8, we can see that DMSDL-QBSA has the best performance on each unimodal function. GWO finds the value 0 on f_1, f_2, f_3, f_7, and f_8; WOA obtains 0 on f_1, f_2, and f_7; SSA works best on f_1 and f_7. For the multimodal function evaluations, Table 9 shows that DMSDL-QBSA has the best performance on f_11, f_12, f_13, f_14, f_15, f_16, and f_18; SSA has the best performance on f_10; GWO attains the minimum on f_11; WOA and SCA obtain the optimal value on f_11 and f_13. Obviously, compared with these popular algorithms, DMSDL-QBSA is a competitive algorithm for these functions, and the swarm intelligence algorithms perform better than the other algorithms. The results in Tables 8 and 9 show that DMSDL-QBSA has the best performance on most of the test benchmark functions.

Comparison on CEC2014 Test Functions with Hybrid Algorithms.
When comparing the comprehensive performance of the proposed DMSDL-QBSA with several hybrid algorithms, namely, BSA, DE, DMSDL-PSO, and DMSDL-BSA, 30 CEC2014 test functions are applied. In this experiment, the dimension size (Dim) is set to 10, and the number of function evaluations (FEs) is 100000. Experimental comparisons including the maximum value (Max), minimum value (Min), mean value (Mean), and variance (Var) are given in Tables 10 and 11, where the best results are marked in bold.
Based on the mean value (Mean) on the CEC2014 test functions, DMSDL-QBSA has the best performance on F_2, F_3, F_4, F_6, F_7, F_8, F_9, F_10, and F_11.

Random Forest Classification Model. RF consists of n_tree decision trees, each of which consists of nonleaf nodes and leaf nodes. A leaf node is a child node of a node branch. Suppose the dataset has M attributes. When each leaf node of a decision tree needs to be split, m_try attributes are randomly selected from the M attributes as the candidate splitting variables of this node. This process can be defined as follows, where S_j is the splitting variable of the j-th leaf node of the decision tree and P_i is the probability that each of the m_try reselected attributes is selected as the splitting attribute of the node. A nonleaf node is a parent node that routes training data to its left or right child node. The function of the k-th decision tree is as follows, where c ∈ {0, 1}: the symbol 0 indicates that the k-th row of data is classified with a negative label, and the symbol 1 indicates that it is classified with a positive label. Here, f_l is the training function of the l-th decision tree based on the splitting variable S, X_k is the k-th row of data in the dataset obtained by random sampling with replacement, and τ is a positive constant used as the threshold of the training decision.
When the decision trees have been trained, each row of data is input into a leaf node of each decision tree, and the average of the n_tree decision tree classification results is used as the final classification result.
This process can be written mathematically as follows, where l is the number of decision trees that judged the k-th row of data as class c.
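The final voting step described above can be sketched in one line: each of the n_tree trees casts a 0/1 vote for a data row, and the average vote is compared with a threshold. The tree outputs are stubbed as plain integers here; the function name and the 0.5 threshold are illustrative assumptions.

```python
def rf_classify(tree_votes, threshold=0.5):
    """Return 1 if the fraction of trees voting 1 exceeds the threshold."""
    return 1 if sum(tree_votes) / len(tree_votes) > threshold else 0
```

With this convention an exact tie does not exceed the threshold and is resolved as the negative class; a real implementation would choose a tie-breaking rule explicitly.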
From the above principle, we can see that two parameters, n_tree and m_try, mainly need to be determined in the RF modeling process. In order to verify the influence of these two parameters on the classification accuracy of the RF classification model, the Ionosphere dataset is used to test their influence on the performance of the RF model, as shown in Figure 3, where the horizontal axes represent n_tree and m_try, respectively, and the vertical axis represents the accuracy of the RF classification model.
(1) Parameter analysis of n_tree. When the number of predictor variables m_try is set to 6, the number of decision trees n_tree is varied from 0 to 1000 at intervals of 20. The evolution of the RF classification model's accuracy with n_tree is shown in Figure 3(a). From the curve in Figure 3(a), we can see that the accuracy of RF gradually improves as the number of decision trees n_tree increases. However, once n_tree exceeds a certain value, the RF performance plateaus without obvious improvement, while the running time keeps growing.
(2) Parameter analysis of m_try. When the number of decision trees n_tree is set to 500, the number of predictor variables m_try is varied from 1 to 32. The limit of m_try is set to 32 because the number of attributes of the Ionosphere dataset is 32. The resulting curve of RF classification accuracy against m_try is shown in Figure 3(b). We can see that as the number of candidate splitting attributes increases, the classification performance of RF gradually improves; but when m_try is greater than 9, RF overfits and its accuracy begins to decrease. The main reason is that too many split attributes are selected, so a large number of decision trees share the same splitting attributes, which reduces the diversity of the decision trees. In summary, for the RF classification model to attain the ideal optimal solution, the choice of the number of decision trees n_tree and the number of predictor variables m_try is very important, and the classification accuracy of the RF model can only be optimized by tuning these two parameters jointly. It is therefore necessary to use the proposed algorithm to find a suitable set of RF parameters. Next, we optimize the RF classification model with the improved BSA proposed in Section 2.
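The two one-dimensional sweeps above can be expressed as a small grid search. The `accuracy` callable is a stand-in for training an RF on the Ionosphere data and measuring its accuracy; the helper name and grid handling are illustrative assumptions.

```python
def sweep(param_values, accuracy):
    """Evaluate accuracy over a 1-D parameter grid and return the best setting."""
    scores = {v: accuracy(v) for v in param_values}
    best = max(scores, key=scores.get)
    return best, scores[best]
```

For example, sweeping m_try over 1..32 with a stub accuracy function peaking at 9 (mimicking the shape of Figure 3(b)) recovers 9 as the best value; the real experiment would replace the stub with RF training and validation.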

RF Model Based on an Improved Bird Swarm Algorithm.
The improved bird swarm algorithm optimized RF classification model (DMSDL-QBSA-RF) uses the improved bird swarm algorithm to optimize the RF classification model: the training dataset is introduced into the training process of the RF classification model, finally yielding the DMSDL-QBSA-RF classification model. The main idea is to construct a two-dimensional fitness function containing RF's two parameters n tree and m try as the optimization target of DMSDL-QBSA, so as to obtain a parameter pair that gives the RF classification model the best classification accuracy. The specific algorithm steps are shown in Algorithm 2.
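The two-dimensional fitness function can be sketched as a wrapper that maps a bird's continuous position to integer RF parameters and returns the classification error to be minimized. The accuracy surface below is a hypothetical stand-in for actually training an RF; its peak at (500, 6) is an illustrative assumption, not a paper result.

```python
def rf_accuracy(n_tree, m_try):
    # Stand-in for training an RF of n_tree bootstrap trees with m_try
    # candidate split attributes and returning validation accuracy.
    # The peak at (500, 6) is a hypothetical choice for illustration.
    return 0.90 - 0.0001 * abs(n_tree - 500) - 0.01 * abs(m_try - 6)

def fitness(position):
    """Map a 2-D DMSDL-QBSA bird position to RF parameters and score it.
    The swarm minimises the classification error 1 - accuracy."""
    n_tree = max(1, int(round(position[0])))
    m_try = max(1, int(round(position[1])))
    return 1.0 - rf_accuracy(n_tree, m_try)
```

With this scalar objective in place, the swarm's position updates (foraging, vigilance, and flight, plus the quantum and differential moves) need no knowledge of the RF internals.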

Simulation Experiment and Analysis.
In order to test the performance of the improved DMSDL-QBSA-RF classification model, we compare it with the standard RF model, the BSA-RF model, and the DMSDL-BSA-RF model on 8 two-dimensional UCI datasets. The DMSDL-BSA-RF classification model is an RF classification model optimized by the improved BSA without quantum behavior. In our experiment, each dataset is divided into two parts: 70% serves as the training set and the remaining 30% as the test set. The average classification accuracies over 10 independent runs of each model are recorded in Table 12, where the best results are marked in bold.
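The 70%/30% partition described above can be sketched as a seeded shuffle-and-split; varying the seed gives the 10 independent runs whose accuracies are averaged (the seeding scheme is an illustrative assumption).

```python
import random

def split_dataset(samples, train_ratio=0.7, seed=0):
    """Shuffle and split one dataset into 70% training / 30% test;
    different seeds give the independent runs to be averaged."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train, test = split_dataset(range(100))
```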
From the accuracy results in Table 12, we can see that the DMSDL-QBSA-RF classification model achieves the best accuracy on every UCI dataset except the magic dataset, on which the DMSDL-BSA-RF classification model performs best. Compared with the standard RF model, the accuracy of the DMSDL-QBSA-RF classification model is higher by about 10%.

Design of Oil Layer Classification System.
The block diagram of the oil layer classification system based on the improved DMSDL-QBSA-RF is shown in Figure 4. The oil layer classification can be simplified into the following five steps:
Step 1. The selected actual logging datasets are intact and full-scale. At the same time, the datasets should be closely related to rock sample analysis, and each dataset should be relatively independent. The dataset is randomly divided into training and testing samples.
Step 2. In order to better understand the relationship between independent variables and dependent variables and to reduce the sample attribute information, the continuous attributes of the dataset are discretized using a greedy algorithm.
Step 3. In order to improve the calculation speed and classification accuracy, we use the covering rough set method [43] to perform attribute reduction. After attribute reduction, the actual logging datasets are normalized to avoid computational saturation.
Step 4. The actual logging dataset after attribute reduction is fed into the DMSDL-QBSA-RF layer classification algorithm for training, which yields the trained DMSDL-QBSA-RF layer classification model.
Step 5. The whole oil section is identified by the trained DMSDL-QBSA-RF layer classification model, and the classification results are output.
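The preprocessing in Steps 2 and 3 can be sketched as follows. Equal-width binning is used here as a simple stand-in for the paper's greedy discretization (whose exact criterion is not given in this section), and min-max scaling is one common way to realize the normalization that avoids computational saturation.

```python
def discretize(values, n_bins=4):
    """Equal-width binning: an illustrative stand-in for Step 2's
    greedy discretization of a continuous logging attribute."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

def min_max_normalize(values):
    """Step 3 normalization: scale an attribute to [0, 1] so that no
    single attribute saturates the model's computations."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in values]
```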
In order to verify the application effect of the DMSDL-QBSA-RF layer classification model, we select three actual logging datasets of oil and gas wells for training and testing.

Practical Application.
In Section 2.3, the performance of the proposed DMSDL-QBSA was simulated and analyzed on benchmark functions, and in Section 3.3, the effectiveness of the RF classification model optimized by the proposed DMSDL-QBSA was verified on two-dimensional UCI datasets. In order to test the application effect of the improved DMSDL-QBSA-RF layer classification model, three actual logging datasets are adopted, recorded as W1, W2, and W3. W1 is a gas well in Xi'an (China), W2 is a gas well in Shanxi (China), and W3 is an oil well in Xinjiang (China). The depths and the corresponding rock sample analysis data of the three wells selected in the experiment are shown in Table 13.
Attribute reduction is performed on the actual logging datasets before the DMSDL-QBSA-RF classification model is trained on the training dataset, as shown in Table 14. Then, these attributes are normalized as shown in Figure 5, where the horizontal axis represents depth and the vertical axis represents the normalized value. The logging dataset after attribute reduction and normalization is used to train the oil and gas layer classification model. In order to measure the performance of the DMSDL-QBSA-RF classification model, we compare it with several popular oil and gas layer classification models: the standard RF model, the SVM model, the BSA-RF model, and the DMSDL-BSA-RF model. To our knowledge, this is the first application of the RF classification model to the logging field. In order to evaluate the performance of the recognition model, we select the following performance indicators:

$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - f_i\right)^2}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - f_i\right|,$

where $y_i$ and $f_i$ are the classification output value and the expected output value, respectively. RMSE is used to evaluate the accuracy of each classification model, and MAE reflects the actual forecasting error. Table 15 records the performance indicator data of each classification model, with the best results marked in bold. The smaller the RMSE and MAE, the better the classification model performs.
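The two performance indicators, RMSE and MAE, can be computed directly from the classification outputs and the expected outputs:

```python
import math

def rmse(y, f):
    """Root-mean-square error between classification outputs y_i
    and expected outputs f_i."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, f)) / len(y))

def mae(y, f):
    """Mean absolute error: the average actual forecasting error."""
    return sum(abs(a - b) for a, b in zip(y, f)) / len(y)
```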
From the performance indicator data of each classification model in Table 15, we can see that the DMSDL-QBSA-RF classification model achieves the best recognition accuracy, and all its accuracies reach at least 90%. The recognition accuracy of the proposed classification model for W3 is up to 99.73%, and it also shows superior performance for oil and gas layer classification on the other performance indicators and wells. Second, DMSDL-QBSA improves the performance of RF: the parameters found by DMSDL-QBSA improve the classification accuracy of the RF model while keeping the running speed relatively fast. For example, the running times of the DMSDL-QBSA-RF classification model for W1 and W2 are, respectively, 0.0504 seconds and 1.9292 seconds faster than the original RF classification model. Based on the above results, the proposed classification model outperforms the traditional RF and SVM models in oil layer classification. The comparison of oil layer classification results is shown in Figure 6, where (a), (c), and (e) represent the actual oil layer distribution and (b), (d), and (f) represent the DMSDL-QBSA-RF oil layer distribution. In addition, 0 means the depth has no oil or gas and 1 means the depth has oil or gas.
From Figure 6, we can see that the oil layer distribution identified by the DMSDL-QBSA-RF classification model differs little from the oil test results: the model can accurately identify the distribution of oil and gas in a well. The DMSDL-QBSA-RF model is therefore suitable for petroleum logging applications, which greatly reduces the difficulty of oil exploration and gives it good application prospects.

(1) Begin
(2) /* Build classification model based on DMSDL-QBSA-RF */
(3) Initialize the positions of N birds using equations (16) and (17): X_i (i = 1, 2, ..., N);
(4) Calculate fitness f(X_i) (i = 1, 2, ..., N); set X_i to be P_i and find P_gbest;
(5) While iter < iter_max + 1 do
(6)   For i = 1 : n_tree
(7)     Give each tree a training set of size N by random sampling with replacement based on Bootstrap;
(8)     Select m_try attributes randomly at each leaf node, compare the attributes, and select the best one;
(9)     Recursively generate each decision tree without pruning operations;
(10)  End For
(11)  Update classification accuracy of RF: evaluate f(X_i);
(12)  Update gbest and P_gbest;
(13)  [n_best, m_best] = gbest;
(14)  iter = iter + 1;
(15) End While
(16) /* Classify using RF model */
(17) For i = 1 : n_best
(18)   Give each tree a training set of size N by random sampling with replacement based on Bootstrap;
(19)   Select m_best attributes randomly at each leaf node, compare the attributes, and select the best one;
(20)   Recursively generate each decision tree without pruning operations;
(21) End
(22) Return DMSDL-QBSA-RF classification model
(23) Classify the test dataset using equation (20);
(24) Calculate OOB error;
(25) End

ALGORITHM 2: DMSDL-QBSA-RF classification model.
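The tree-construction core of Algorithm 2, lines (7) and (8), bootstrap sampling and random attribute selection, can be sketched as follows (an illustrative sketch using Python's random module, not the authors' implementation):

```python
import random

def bootstrap_sample(dataset, rng):
    """Lines (7)/(18): draw a training set of size N by random
    sampling with replacement (Bootstrap)."""
    n = len(dataset)
    return [dataset[rng.randrange(n)] for _ in range(n)]

def candidate_attributes(n_attributes, m_try, rng):
    """Lines (8)/(19): pick m_try distinct attributes at random
    to compare at a leaf node."""
    return rng.sample(range(n_attributes), m_try)

rng = random.Random(42)
sample = bootstrap_sample(list(range(10)), rng)
attrs = candidate_attributes(32, 6, rng)
```

Sampling with replacement leaves roughly a third of the records out of each tree's training set; those out-of-bag records are what line (24) uses to estimate the OOB error.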

Conclusion
This paper presents an improved BSA called DMSDL-QBSA, which employs the dynamic multi-swarm method, differential evolution, and quantum behavior to enhance the global and local exploration capabilities of the original BSA. First, 18 classical benchmark functions are used to verify the effectiveness of the improved method. The experimental study of the effects of these three strategies on the performance of DMSDL-QBSA reveals that the hybrid method markedly improves the original BSA. Second, compared with popular intelligence algorithms such as GWO, WOA, SCA, GOA, and SSA, DMSDL-QBSA provides more competitive results on the 18 classical benchmark functions. Additionally, 30 complex CEC2014 test functions are used to verify the performance of DMSDL-QBSA more comprehensively, and DMSDL-QBSA also shows excellent performance on these complex test functions. Finally, the improved DMSDL-QBSA is used to optimize the parameters of RF. Experimental results on the actual oil logging prediction problem show that the classification accuracy of the established DMSDL-QBSA-RF classification model reaches 94.00%, 94.24%, and 99.73% on the three wells, much higher than the original RF model, while running faster than the other four advanced classification models on most wells.
Although the proposed DMSDL-QBSA has been proven effective in solving general optimization problems, it has some shortcomings that warrant further investigation. Owing to the hybrid of three strategies, DMSDL-QBSA needs more time than the classical BSA; therefore, improving its computational efficiency is a worthwhile direction. In future research, the method presented in this paper can also be extended to discrete optimization problems and multiobjective optimization problems. Furthermore, applying the proposed DMSDL-QBSA-RF model to other fields such as financial prediction and biomedical diagnosis is also interesting future work.

Data Availability
All data included in this study are available upon request by contacting the corresponding author.

Computational Intelligence and Neuroscience