An Enhanced Lightning Attachment Procedure Optimization with Quasi-Opposition-Based Learning and Dimensional Search Strategies

Lightning attachment procedure optimization (LAPO) is a new global optimization algorithm inspired by the attachment procedure of lightning in nature. However, like other metaheuristic algorithms, LAPO has its own disadvantages. To obtain better global searching ability, an enhanced version of LAPO, called ELAPO, is proposed in this paper. A quasi-opposition-based learning strategy is incorporated to improve both exploration and exploitation abilities by considering an estimate and its opposite simultaneously. Moreover, a dimensional search enhancement strategy is proposed to intensify the exploitation ability of the algorithm. Thirty-two benchmark functions, including unimodal, multimodal, and CEC 2014 functions, are used to test the effectiveness of the proposed algorithm. Numerical results indicate that ELAPO provides better or competitive performance compared with the basic LAPO and five other state-of-the-art optimization algorithms.


Introduction
Optimization problems with a complex and nonlinear nature arise in many engineering application domains and scientific fields. They are often difficult to solve with classical mathematical methods, since such methods can be inefficient and demand strong mathematical assumptions. Owing to these limitations, many nature-inspired stochastic optimization algorithms have been proposed for global optimization over the last two decades. Such algorithms are typically simple and easy to implement, which makes it possible to tackle highly complex optimization problems. These metaheuristics can be roughly classified into three categories: evolutionary algorithms, swarm intelligence, and physics-based algorithms.
Evolutionary algorithms are generic population-based metaheuristics that imitate the evolutionary behavior of biology in nature, such as reproduction, mutation, recombination, and selection. The first generation starts from randomly initialized solutions and evolves over successive generations. The best individual of the whole population in the final generation is taken as the optimization solution. Popular evolutionary algorithms include the genetic algorithm (GA) [1], genetic programming (GP) [2], evolution strategy (ES) [3], the differential evolution (DE) algorithm [4], and the biogeography-based optimizer (BBO) [5].
Regardless of the differences among the three categories of algorithms, they share a common point: besides the tuning of common control parameters such as population size and number of generations, metaheuristic algorithms necessitate the tuning of algorithm-specific parameters during the course of optimization. For instance, GA requires tuning of the crossover probability, mutation probability, and selection operator [43]; SA requires tuning of the initial temperature and cooling rate [31]; PSO requires tuning of the inertia weight and learning factors [6]. Improper tuning of these parameters either increases the computational cost or leads to a local optimal solution.
Recently, a new physics-based metaheuristic algorithm named lightning attachment procedure optimization (LAPO) [44] was proposed, which does not require tuning of any algorithm-specific parameters. Instead, the average of all solutions is used to adjust the lightning jump behavior of moving towards or away from a jumping point (or position) in a self-adaptive manner. This is an important reason why LAPO is not easily stuck in local optimal solutions and has good exploration and exploitation abilities. LAPO has already proved its superiority in solving a number of constrained numerical optimization problems [44].
In this paper, an enhanced lightning attachment procedure optimization, namely ELAPO, is developed to increase the convergence speed of the LAPO search process while preserving the key feature of LAPO of being free from algorithm-specific parameter tuning. In ELAPO, the concept of opposition-based learning (OBL) is incorporated to enhance the searching ability of the algorithm. The motivation is that the current estimates and their corresponding opposites are considered simultaneously to find better solutions, thereby enabling the algorithm to explore a large region of the search space in every generation.
This concept has been found effective in improving the performance of well-known optimization algorithms such as genetic algorithms (GA) [45], differential evolution (DE) [46,47], particle swarm optimization (PSO) [48,49], biogeography-based optimization (BBO) [50,51], the harmony search (HS) algorithm [52,53], gravitational search optimization (GSO) [54,55], the group search algorithm (GSA) [56,57], and the artificial bee colony (ABC) [58]. Meanwhile, a dimensional search strategy is proposed to intensively exploit a local search on each variable of the best solution in each iteration, resulting in a higher-quality solution at the end of each iteration and strengthening the exploitation of the algorithm. To evaluate the effectiveness of the proposed algorithm, ELAPO is applied to 32 benchmark functions and compared with the basic LAPO and five representative state-of-the-art algorithms (SSA [28], Jaya [59], IBB-BC [60], ODE1 [61], and ALO [20]). The effectiveness of the two strategies is also discussed. The rest of this paper is organized as follows: Section 2 briefly recapitulates the basic LAPO. Next, the proposed ELAPO is presented in detail in Section 3. Numerical comparisons are illustrated in Section 4. Finally, Section 5 gives the concluding remarks.

Basic Algorithm
LAPO is a new nature-inspired global optimization algorithm that mimics the lightning attachment procedure, including the downward leader movement and the upward leader propagation. Lightning is a sudden electrostatic discharge occurring between electrically charged regions of a cloud, which moves towards or away from the ground in a stepwise manner. After each step, the downward leader stops and then moves to a randomly selected potential point that may have a higher value of electric field. The upward leader starts from sharp points and moves towards the downward leader. The branch fading feature of lightning takes effect when the charge of a branch falls below a critical value. When the two leaders join, a final strike occurs and the charge of the cloud is neutralized.

Parameters and Initialization of Test Points.
The main parameters of LAPO are the maximum number of iterations Iter_max, the number of test points Npop, the number of decision variables n, and the lower and upper bounds of the decision variables, X_min and X_max. These parameters are given at the beginning of the algorithm. As in other nature-inspired optimization algorithms, an initial population is required. Each member of the population is regarded as a test point in the feasible search space, which could be an emitting point of a downward or upward leader. The test points are randomly initialized as

X_i = X_min + rand × (X_max − X_min), i = 1, 2, ..., Npop.

Downward Leader Movement.
Given that lightning behaves randomly, for each test point i a random point k is selected from the population (i ≠ k), and a new test point is generated according to the following rules: (i) if the electric field (fitness) of point k is higher than the average electric field of the population, the test point jumps towards it,

X_i_new = X_i + rand × (X_ave + rand × X_k);

(ii) if the electric field of point k is lower than the average, the test point jumps away from it,

X_i_new = X_i − rand × (X_ave + rand × X_k).

If the electric field of the new test point is better than that of the old one, the branch sustains; otherwise, it fades. This branch fading feature is mathematically formulated as

X_i = X_i_new if F(X_i_new) < F(X_i), and X_i otherwise.

Upward Leader Movement.
In the upward movement phase, all test points are considered as upward leaders moving towards the cloud. The new test points are generated as

X_i_new = X_i + rand × S × (X_best − X_worst),

where X_best and X_worst are the best and worst solutions of the population and S is an exponent factor depending on the iteration number Iter and the maximum number of iterations Iter_max:

S = 1 − (Iter/Iter_max) × exp(−Iter/Iter_max).

From a computational point of view, this iteration-dependent exponent factor is important for balancing the exploration and exploitation capabilities of the algorithm. As in the downward movement, the branch fading feature also applies in this phase.
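The shape of the exponent factor can be checked numerically. The closed form used below, S = 1 − (Iter/Iter_max) × exp(−Iter/Iter_max), is the form commonly reported for the basic LAPO; since the equation was garbled in this text, it should be checked against [44].

```python
import math

def exponent_factor(it, iter_max):
    # Exponent factor S of the upward-leader step. The closed form
    # S = 1 - (Iter/Iter_max) * exp(-Iter/Iter_max) is assumed here;
    # see [44] for the reference definition.
    t = it / iter_max
    return 1.0 - t * math.exp(-t)

early = exponent_factor(1, 100)    # near 1: large steps, exploration
late = exponent_factor(100, 100)   # about 0.632: small steps, exploitation
```

As the iteration counter grows, S shrinks, scaling down the upward-leader step and shifting the search from exploration towards exploitation.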

Enhancement of the Performance.
In order to enhance the performance of LAPO, in each iteration the worst test point is replaced by the average test point if the fitness of the former is worse than that of the latter:

X_worst = X_ave if F(X_ave) < F(X_worst).

Stopping Criterion.
The algorithm terminates when the maximum number of iterations is reached. Otherwise, the downward and upward leader movements and the performance enhancement step are repeated.

Procedure of the Basic LAPO.
The complete computational procedure of the basic LAPO is provided in Algorithm 1.
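For concreteness, the procedure of Algorithm 1 can be sketched in Python as below. The update formulas follow one plausible reading of the equations described above; the reference implementation in [44] may differ in details such as bound handling or the ordering of the phases, so treat this as an illustrative sketch rather than a definitive implementation.

```python
import math
import random

def lapo(f, x_min, x_max, n, npop=20, iter_max=100, seed=1):
    # Minimal sketch of the basic LAPO loop (Algorithm 1); minimization.
    rng = random.Random(seed)
    rand = rng.random
    clip = lambda v: min(max(v, x_min), x_max)

    # random initialization of the test points in [x_min, x_max]^n
    pop = [[x_min + (x_max - x_min) * rand() for _ in range(n)]
           for _ in range(npop)]
    fit = [f(x) for x in pop]

    for it in range(1, iter_max + 1):
        avg = [sum(x[j] for x in pop) / npop for j in range(n)]
        f_avg = f(avg)

        # downward leader movement: jump towards a random point k when its
        # "electric field" (fitness) beats the population average, away
        # from it otherwise; branch fading keeps only improving moves
        for i in range(npop):
            k = rng.choice([j for j in range(npop) if j != i])
            sign = 1.0 if fit[k] < f_avg else -1.0
            new = [clip(pop[i][j] + sign * rand() * (avg[j] + rand() * pop[k][j]))
                   for j in range(n)]
            f_new = f(new)
            if f_new < fit[i]:
                pop[i], fit[i] = new, f_new

        # upward leader movement, scaled by the shrinking exponent factor S
        s = 1.0 - (it / iter_max) * math.exp(-it / iter_max)
        best = pop[min(range(npop), key=fit.__getitem__)]
        worst = pop[max(range(npop), key=fit.__getitem__)]
        for i in range(npop):
            new = [clip(pop[i][j] + rand() * s * (best[j] - worst[j]))
                   for j in range(n)]
            f_new = f(new)
            if f_new < fit[i]:
                pop[i], fit[i] = new, f_new

        # performance enhancement: the worst point is replaced by the
        # average point whenever the average is fitter
        avg = [sum(x[j] for x in pop) / npop for j in range(n)]
        f_avg = f(avg)
        w = max(range(npop), key=fit.__getitem__)
        if f_avg < fit[w]:
            pop[w], fit[w] = avg, f_avg

    b = min(range(npop), key=fit.__getitem__)
    return pop[b], fit[b]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = lapo(sphere, -100.0, 100.0, n=5)
```

Note that both movement phases use greedy (branch fading) acceptance, so the fitness of every test point is non-increasing over the iterations.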

The Enhanced Lightning Attachment Procedure Optimization
The enhanced lightning attachment procedure optimization (ELAPO) is presented in this section. Two main strategies are embedded in ELAPO. First, a quasi-opposition-based learning strategy is developed and applied randomly to diversify the population. Second, a dimensional search strategy is proposed to improve the quality of the best solution in each iteration.
The key ideas behind ELAPO are described as follows.

Quasi-Opposition-Based Learning.
To prevent the proposed algorithm from being trapped in local optimal solutions, a monitoring condition is introduced and checked in each iteration. The following steps are involved. First, a distance constant D_c between the average test point and the best test point is calculated. Second, the minimum admissible value of the distance constant, D_cmin, is computed, and the monitoring condition is checked: if D_c < D_cmin, the concept of opposition-based learning is employed to further diversify the population and improve the convergence rate of the algorithm. In this strategy, a portion of the test points is randomly selected, the corresponding quasi-opposite test points are generated, and both sets are considered simultaneously. The fitness values of the original test points and the quasi-opposite test points are then calculated and ranked, and the best Npop solutions are selected to proceed with the downward and upward leader movements. To maintain the stochastic nature of ELAPO, each quasi-opposite solution is generated uniformly at random between the center of the search space, CS = (X_min + X_max)/2, and the mirror point of the corresponding test point, MP = X_min + X_max − X:

X_qo = CS + rand × (MP − CS),

where Nq denotes the number of randomly chosen test points used to generate the quasi-opposite points; it is set to 5 in this paper.
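The quasi-opposite point generation described above can be sketched as follows. The per-coordinate uniform sampling between the interval center and the mirror point is the standard quasi-opposition construction and is assumed here to match the paper's intent.

```python
import random

rng = random.Random(7)

def quasi_opposite(x, x_min, x_max):
    # Quasi-opposite test point: each variable is sampled uniformly
    # between the center of the search interval, CS = (x_min + x_max)/2,
    # and the mirror point MP = x_min + x_max - x_j of that variable.
    cs = (x_min + x_max) / 2.0
    qo = []
    for xj in x:
        mp = x_min + x_max - xj
        lo, hi = (cs, mp) if cs <= mp else (mp, cs)
        qo.append(rng.uniform(lo, hi))
    return qo

x = [80.0, -30.0, 5.0]
q = quasi_opposite(x, -100.0, 100.0)  # q[0] lies between -80 and 0, etc.
```

Sampling between CS and MP (rather than taking the mirror point exactly, as in plain opposition-based learning) preserves the stochastic nature of the search while still pulling new points towards the unexplored half of the space.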

Enhancing the Best Solution by Dimensional Search.
In each iteration, a dimensional search is performed on the best solution obtained so far: each variable of the best solution is updated in turn according to

X_best,j_new = X_best,j + rand × S × (X_best,j − X_worst,j), j = 1, 2, ..., n,

while the remaining variables are kept fixed, and a change is retained only if it improves the fitness. This intensive per-variable local search yields a higher-quality solution at the end of each iteration and thereby strengthens the exploitation ability of the algorithm.

Procedure of ELAPO.
The complete computational procedure of ELAPO is provided in Algorithm 2.
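The dimensional search strategy can be sketched as a greedy per-variable pass over the best solution. The greedy acceptance rule and the reuse of the exponent factor S are illustrative assumptions; the paper's exact acceptance criterion may differ.

```python
import random

rng = random.Random(3)

def dimensional_search(f, best, worst, s, x_min, x_max):
    # One pass of the dimensional search: each variable of the best
    # solution is perturbed in turn while the remaining variables stay
    # fixed, and a change is kept only when it improves the fitness.
    x = list(best)
    fx = f(x)
    for j in range(len(x)):
        trial = list(x)
        trial[j] = min(max(x[j] + rng.random() * s * (best[j] - worst[j]),
                           x_min), x_max)
        ft = f(trial)
        if ft < fx:
            x, fx = trial, ft
    return x, fx

sphere = lambda v: sum(t * t for t in v)
xb, fb = dimensional_search(sphere, [1.0, -2.0, 3.0], [9.0, 9.0, 9.0], 0.8,
                            -10.0, 10.0)
# fb can never exceed the fitness of the starting point (here 14.0)
```

Because rejected per-dimension moves are discarded, the pass can only improve the best solution, which is what lets the strategy sharpen exploitation without risking the loss of the incumbent.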
Numerical Experiments.
The error value, defined as f(x) − F_min, is recorded for each solution x, where f(x) is the optimal fitness value found by an algorithm and F_min is the true global optimum of the function. The widely used parameter settings of all algorithms are listed in Table 4. Each algorithm is applied to the test functions in 10 independent runs, and the average and standard deviation of the error values over all runs are calculated. All algorithms are also compared in terms of convergence behavior (Figures 1–6). In addition, the effectiveness of each strategy is tested.

Experimental Test 1: Unimodal Functions.
Unimodal benchmark functions have a single global optimum and are commonly used to evaluate the exploitation ability of optimization algorithms. Tables 5 and 6 list the statistical results (mean error and standard deviation) of the different algorithms over 10 independent runs at n = 30 and 100, respectively. In these tables, the best values are highlighted in bold. The results make it clear that ELAPO attains an extremely high level of accuracy and convergence precision on most unimodal functions in comparison with the six counterpart algorithms. Taking F10 as an example, ELAPO reaches a mean error level of 10E − 195 with zero standard deviation at n = 30, whereas the remaining algorithms are many orders of magnitude less accurate, with Jaya at the 10E − 2 level. ELAPO is also able to attain the true minimum of F3 and F11, while the other algorithms fail to reach the same level of accuracy, except for LAPO on F3. Increasing the number of dimensions does not appear to affect the outstanding accuracy of ELAPO relative to the other algorithms, although the accuracy of all algorithms tends to decrease. Figures 1 and 2 show the convergence behavior on selected test functions for ELAPO and its competitors at n = 30 and 100, respectively. As can be seen from these figures, ELAPO dramatically outperforms its competitors in both convergence rate and precision on most test functions. For F5 and F6, the convergence performance of ELAPO is still the best, although LAPO behaves similarly and the difference between ELAPO and the remaining five algorithms is not very significant. This excellent performance of ELAPO can be attributed to the introduction of the quasi-opposition-based learning strategy together with the dimensional search strategy.

Experimental Test 2: Multimodal Functions.
Unlike unimodal functions, multimodal test functions have multiple local optima and are therefore commonly used to test the exploration ability of an algorithm. Tables 7 and 8 provide the statistical results over 10 independent runs for n = 30 and 100, respectively. These tables show that ELAPO achieves a better level of accuracy on most test functions than the other six algorithms. In particular, ELAPO obtains the exact true optima of F17, F18, F21, F22, and F24. It is also interesting that ELAPO remains better than LAPO on F20, although neither can match SSA at the dimensions considered. As with the unimodal functions, ELAPO appears insensitive to the increase in dimensionality. The convergence behavior of all algorithms on several multimodal benchmark functions at n = 30 and 100 is presented in Figures 3 and 4, respectively. As these figures show, ELAPO always has the fastest convergence rate and reaches the best (or at least comparable) convergence precision compared with the other six algorithms. For some multimodal functions such as F13, F14, and F16, the convergence performance of LAPO is unsatisfactory, whereas the global convergence ability of ELAPO is greatly improved. This is mainly attributable to the quasi-opposition-based strategy, in which new opposite test points are generated from a portion of randomly selected test points and both sets are employed simultaneously for the global search.

Experimental Test 3: CEC 2014 Benchmark Functions.
In this experimental study, some of the most intensively investigated benchmark functions from IEEE CEC 2014 are considered for evaluating both the exploration and exploitation capabilities of ELAPO. Seven CEC 2014 functions are used, comprising several novel basic problems (e.g., with shifting and rotation) as well as hybrid and composite test problems. These modern benchmark functions are deliberately constructed with complex features; consequently, hardly any algorithm can reach the global optimum. Nevertheless, according to the statistical results of the different algorithms over 10 independent runs in Tables 9 and 10, ELAPO yields highly competitive results on all CEC 2014 functions under consideration compared with the other six algorithms. For example, the mean error of F27 is as low as the 10E − 1 level. This serves as further confirmation that ELAPO strikes an excellent balance between exploration and exploitation.

Effectiveness of the Two Strategies.
To verify the effectiveness of the two strategies in the proposed algorithm, this subsection repeats the previous three experiments for two additional variants of ELAPO, each equipped with only one of the two strategies (ELAPO1 with only the quasi-opposition-based learning strategy and ELAPO2 with only the dimensional search strategy). For most of the unimodal functions, as shown in Tables 11 and 12, ELAPO outperforms the two variants in terms of the minimum, mean, and maximum fitness values and the standard deviation. This confirms that, for most functions, both strategies contribute to the global search ability, with the quasi-opposition-based learning strategy making the larger contribution. For F6, the dimensional search strategy appears to contribute more to the exploitation ability of ELAPO. It is also noted that on some functions ELAPO and its two variants have almost the same statistical results because, as per Table 5, the basic LAPO has already converged to the desired accuracy, so the two strategies have little effect. For several multimodal functions, the two strategies likewise have no effect; this is because, as per Table 8, the basic LAPO has already obtained the exact optimum. Interestingly, a strategy may even exert a negative effect on the global optimization capability. Taking F25 as an example, the best minimum, mean, and maximum fitness values are all obtained by ELAPO2; in other words, the dimensional search strategy and the quasi-opposition-based learning strategy have positive and negative effects on F25, respectively, and their combination (ELAPO) only achieves statistical results between those of the two variants. The results for the CEC 2014 functions are given in Tables 15 and 16. It can be seen from these tables that both strategies have positive influences on F26, F27, F28, and F29, with the dimensional search strategy tending to have the greater effect, especially at higher dimensions. For the composite functions, as the complexity of the function increases, the two strategies still make slight contributions, and thus, as per Tables 9 and 10, ELAPO obtains competitive results against all other algorithms.
In summary, equipping the algorithm with either strategy alone is insufficient to achieve the desired results, but integrating the two yields excellent performance on most benchmark functions. This superior performance verifies that ELAPO properly handles the exploration-exploitation trade-off through the introduction of the two proposed strategies.

Statistical Analysis.
To analyze the performance of the algorithms pairwise, the Wilcoxon signed-rank test and the Friedman test [63] are employed in the present work. The results of Wilcoxon's test for ELAPO against the other six algorithms are summarized in Tables 17 and 18 for n = 30 and 100, respectively. The test is carried out on the best solution of each algorithm on each benchmark function over 10 independent runs, with a significance level of α = 0.05.
In Tables 17 and 18, a "+" sign indicates that the reference algorithm outperforms the compared one, a "−" sign indicates that the reference algorithm is inferior to the compared one, and a "=" sign indicates that both algorithms perform comparably. The last row of the tables shows that the proposed ELAPO has a larger number of "+" counts than the other algorithms, confirming that ELAPO is better than the six compared algorithms at the 95% significance level. The results of the Friedman test are presented in Tables 19 and 20 for n = 30 and 100, respectively; the last row of these tables gives the ranks computed by the test. As can be seen, ELAPO is the best performing of the seven optimization algorithms. A quantitative analysis of the seven algorithms is also carried out using the mean absolute error (MAE), an effective performance index for ranking optimization algorithms, defined by [64]

MAE = (1/N) Σ_{j=1}^{N} |m_j − k_j|,

where m_j is the mean of the optimal values obtained for function j, k_j is the actual global optimum, and N is the number of samples; in the present work, N is the number of benchmark functions. The MAE of all algorithms and their rankings over all functions are given in Table 19. ELAPO ranks first and provides the minimum MAE in all cases. Moreover, ELAPO reaches the optimum solution 436 times out of 640 runs (10 runs per test function at n = 30 and 100) and comes first in this respect as well, as shown in Figure 7. It is concluded that ELAPO provides the best performance in comparison with the other six optimization algorithms.
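The MAE index defined above can be computed directly, as the short sketch below shows. The numbers are purely hypothetical, chosen only to illustrate the ranking procedure, and are not the paper's reported values.

```python
def mean_absolute_error(means, optima):
    # MAE index described above: the average absolute gap between the
    # mean optimized value m_j and the true optimum k_j over N functions.
    assert len(means) == len(optima)
    return sum(abs(m - k) for m, k in zip(means, optima)) / len(means)

# hypothetical mean results of two algorithms on three benchmarks whose
# true optima are all 0 (illustrative values only)
mae_a = mean_absolute_error([1e-3, 2e-2, 0.0], [0.0, 0.0, 0.0])
mae_b = mean_absolute_error([1e-1, 3e-1, 5e-2], [0.0, 0.0, 0.0])
ranking = sorted([("A", mae_a), ("B", mae_b)], key=lambda t: t[1])  # best first
```

A lower MAE means the algorithm's mean results sit closer to the true optima on average, so sorting by MAE in ascending order yields the final ranking.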

Conclusions
In this paper, an enhanced lightning attachment procedure optimization called ELAPO is proposed for global optimization problems. The exploration and exploitation abilities of the basic LAPO are appropriately balanced during the search process. The quasi-opposition-based learning strategy is applied to control the convergence speed and to improve both the exploration and exploitation abilities of the algorithm. To further enhance the exploitation capability, a dimensional search strategy is employed, which inherits good information from the best solution in each iteration and thus increases the convergence precision of the proposed algorithm. The efficiency of ELAPO is examined on unimodal, multimodal, and CEC 2014 benchmark functions.
The statistical results show that the proposed algorithm has superior performance in terms of accuracy and convergence rate compared with six other state-of-the-art algorithms, namely, LAPO, SSA, Jaya, IBB-BC, ODE1, and ALO.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.