Evolutionary Algorithms Enhanced with Quadratic Coding and Sensing Search for Global Optimization

Enhancing Evolutionary Algorithms (EAs) with mathematical elements contributes significantly to their development and helps control the randomness they experience. Moreover, automating the primary steps of the EA process remains one of the hardest problems; in particular, EAs still lack robust automatic termination criteria. In addition, the highly random behavior of some evolutionary operations should be controlled, and the methods should invoke advanced learning processes and elements. Accordingly, this research focuses on automating and controlling the search process of EAs using sensing and mathematical mechanisms. These mechanisms can provide the search process with the memories and conditions needed to adapt to diversification and intensification opportunities. Moreover, a new quadratic coding and a quadratic search operator are introduced to increase the possibilities of local search improvement. The suggested quadratic search operator uses both regression and Radial Basis Function (RBF) neural network models. Two evolutionary-based methods, built on genetic algorithms and evolution strategies, are proposed to evaluate the performance of the suggested enhancing elements. Results show that both the regression and RBF quadratic techniques can help approximate high-dimensional functions using a few adjustable parameters for each type of function. Moreover, the automatic termination criteria allow the search process to stop appropriately.


Introduction
Evolutionary Algorithms (EAs) are considered one of the core methods in the area of computational intelligence [1]. Generally, EAs constitute a class of the main global search tools that can be adapted to deal with many forms of nonlinear and hard optimization problems. Developing practical versions of EAs is strongly required to meet the fast growth of optimization applications in all areas of science [2,3]. Recently, there has been great interest in combining EAs with other computational intelligence techniques in different application areas. Although there are a vast number of attempts to modify EAs to solve global optimization problems, most of them invoke only heuristic design elements and do not use mathematical mechanisms [4,5].
On the other hand, EAs have been applied to various optimization problems, such as continuous, discrete, and combinatorial problems with and without constraints, as well as mixed optimization problems [6,7]. Moreover, EAs are considered a milestone in the computational intelligence field [8]. Thus, developing practical schemes of EAs is remarkably required to meet the fast growth of applications in many areas of real-life science [2,9]. However, EAs still have no automatic termination criteria. Thus, EAs cannot determine when or where to terminate, and a user must pre-specify a criterion for that purpose. Typical termination criteria include stopping when there has been no improvement over a pre-specified number of generations, when a pre-specified number of generations is reached, or when the objective function value has reached a pre-specified value. Another important design issue is that controlling the randomness of some evolutionary operations should be considered when designing effective evolutionary-based search methods. This has motivated many researchers and was one of the main reasons for inventing "memetic algorithms" [10,11]. Moreover, more deterministic operations have been highly recommended in developing next-generation genetic algorithms [12].
The main goal of this research is to construct an effective and intelligent method that looks for optimal or near-optimal solutions to non-convex optimization problems. A global search method is sought to solve the general unconstrained global optimization problem:

min_{x ∈ X} f(x),    (1)

where f is a real-valued function defined on the search space X ⊆ R^n with variables x ∈ X. Many EA-based methods have been suggested to deal with such problems [13,14]. This optimization problem has also been addressed by different heuristic methods such as tabu search [15][16][17], simulated annealing [18][19][20], memetic algorithms [11], differential evolution [21,22], particle swarm optimization [23,24], ant colony optimization [25], variable neighborhood search [26], scatter search [27,28], and hybrid approaches [29][30][31]. Many applications in areas such as computer engineering, computer science, economics, engineering, computational science, and medicine can be expressed or reformulated as the problem in Equation (1); see [2,32] and references therein. The considered problem is unconstrained; however, several constraint-handling techniques have been proposed to extend some of the previously mentioned research to the constrained version of Problem (1) [33][34][35]. Common techniques include penalty functions, filter methods, and dominance-based techniques [36,37]. In this research, the proposed methods are expressed using quadratic models that partially approximate the objective function. Two design models have been invoked to construct these methods. In the first model, called the Quadratic Coding Genetic Algorithm (QCGA), trial solutions are encoded as coefficients of quadratic functions, and their fitness is evaluated by the objective values at the quadratic optimizers of these models. Through generations, these quadratic forms are adapted by the genetic operators: selection, crossover, and mutation.
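As a concrete instance of Problem (1), the following minimal Python sketch states the minimization of the classical Rastrigin function over a box-shaped search space X. The choice of test function and bounds here is illustrative, not one prescribed by this paper:

```python
import numpy as np

def rastrigin(x):
    # Classical multimodal test function; global minimum is f(0) = 0.
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

# Problem (1): minimize f over the box X = [-5.12, 5.12]^n.
n = 5
lower, upper = -5.12 * np.ones(n), 5.12 * np.ones(n)
print(rastrigin(np.zeros(n)))  # prints 0.0 (the global minimum value)
```

Any global search method discussed below can, in principle, be applied to such a pair of objective function and box bounds.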
In the second model, Evolution Strategies (ESs) are modified by exploiting search memory elements, automatic sensing conditions, and a quadratic search operator for global optimization. The second proposed method is called Sensing Evolution Strategies (SES). Specifically, the regression method and Artificial Neural Networks (ANN) are used as an approximation process. Explicitly, Radial Basis Functions (RBFs) are invoked as an efficient local ANN technique [38].
The obtained results show that the quadratic approximation and coding models can improve the performance of EAs and reach faster convergence. Moreover, the mutagenesis operator is much more effective and cheaper than some local search techniques. Furthermore, a final intensification process can be started to refine the elite solutions obtained so far. In general, the numerical results indicate that the proposed methods are competitive with some other versions of EAs.
The rest of this paper is structured as follows. Related work is discussed in Section 2. In Section 3, the components of the proposed quadratic models are illustrated. The details of the proposed methods QCGA and SES are explained in Sections 4 and 5, respectively. In Section 6, numerical experiments aiming at analyzing and discussing the performance of the proposed methods and their novel operators are presented. Finally, the conclusion makes up Section 7.

Related Work
The main contributions of this research are to use the wise guidance of mathematical techniques through quadratic models and to design smarter evolutionary-based methods that can sense the progress of the search to achieve finer intensification, wider exploration, and automatic termination.
In designing the proposed methods, EAs are customized with different guiding strategies. First, an exploration and exploitation strategy is proposed to provide the search process with accelerated automatic termination criteria. Specifically, a matrix called the Gene Matrix (GM) is constructed to sample the search space [1,39,40]. The role of the GM is to aid the exploration process. Typically, the GM can equip the search with novel diverse solutions by applying a unique type of mutation called "mutagenesis" [39,40]. Principally, mutagenesis may be defined as a natural mechanism by which the genes of an individual are changed, leading to the mutation process; this mechanism is a driving force of evolution [40]. The mutagenesis operator alters some surviving individuals to accelerate the exploration and exploitation processes [40]. On the other hand, the GM is used to let the search process know how far the exploration has gone in order to judge a termination point. Moreover, the mutagenesis operation lets the proposed method, the GA with Gene Matrix, perform like a so-called "memetic algorithm" [41,42] to accomplish faster convergence [40].
Practical termination criteria are one of the main design issues in EAs. The literature on termination criteria for EAs is relatively scant. However, in [43], an empirical method is used to determine the maximum number of generations needed to converge to a solution by using the problem characteristics. Moreover, eight termination criteria are discussed in [44], which proposes using clustering techniques to test the distribution of the individuals of a specific generation in the search space. Different search memory mechanisms have been adapted as termination criteria to reflect the wideness of the exploration process [39,40,45]. There are four classes of termination criteria for EAs [39]. The first class, called the T_Fit criterion, is based on tracking the best fitness function values over generations; it terminates the algorithm if these values do not change significantly [46,47]. The T_Pop criterion is the second class, measuring the population over generations. In each generation, the distances between chromosomes are measured and used to terminate the algorithm if these distances become very small. The work in [48] uses the T_Pop criterion for termination by summing the distances among individuals and checking that the sum is smaller than a predefined threshold. The other two classes are T_Bud and T_SFB. The T_Bud criterion applies a computational budget, such as a maximum number of generations or function evaluations, for termination [49][50][51]. In the T_SFB criterion, a search feedback measure evaluates the exploration and exploitation processes. Measures such as the distribution of the generated solutions or the frequencies of visiting regions inside the search space are invoked to test the search process.
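As an illustration of the first class, a T_Fit-style check can be sketched in a few lines of Python. The window size and tolerance below are hypothetical parameters of this sketch, not values taken from the cited works:

```python
def tfit_terminate(best_history, window=20, tol=1e-8):
    """T_Fit-style criterion: stop when the best fitness has not
    improved by more than tol over the last `window` generations."""
    if len(best_history) <= window:
        return False
    # best_history is assumed non-increasing (best-so-far values).
    return best_history[-window - 1] - best_history[-1] <= tol
```

Calling this once per generation with the best-so-far fitness values gives a cheap stagnation test, at the cost of the local-minima entrapment risk discussed below.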
In [45], the exploration process is examined by a diverse index set of points created at the beginning of the search process, which tries to guarantee a well-distribution of the generated points over the search space. The algorithm is terminated when most of the regions get visited.
These termination criteria may suffer from some difficulties [45,52]. Applying T_Fit alone in EAs can easily lead to entrapment in local minima. Also, it is costly to require the entire population, or a part of it, to converge when using T_Pop, while it is sufficient to have one chromosome closely reach the global minimum. Gaining prior information about the search problem is a big concern when invoking the T_Bud criterion. Lastly, although using T_SFB looks effective, it may suffer from the dimensionality problem, and it is complex to save and test huge amounts of historical search information. Therefore, implementing mixed termination criteria in EAs to overcome these problems is advisable [53,54].
Evolutionary computing and artificial neural networks are main classes of computational intelligence [55,56]. Most research focuses on how to use EAs to obtain better-trained ANNs [57,58].
There is relatively little research studying the opposite direction, i.e., using ANNs to design better EA-based methods or even to construct global optimization methods, as in [59,60]. In this research, Radial Basis Functions (RBFs) are invoked as an efficient local ANN technique to help EAs reach promising regions and directions. Generally, other types of ANNs could be used for the same task, such as the well-known multi-layer perceptron (MLP) with a training algorithm such as back-propagation. However, RBF networks are local-learning ANNs that can overcome the long iterative learning process of other types of ANNs. In RBF networks, the number of iterations is bounded above by the number of input samples. In other words, the middle layer adds one neuron per new input sample, as shown in Figure 1.

Quadratic Approximation Methods
One of the main steps in the proposed algorithms is to find the best and fastest approximation of a high-dimensional function. Generally, two methods have been applied for this purpose: quadratic regression and radial basis functions as a local supervised ANN. In both methods, the following quadratic model in Equation (2) and its derivative are used for fitting the parameters:

q(x) = x^T A x + b^T x + c,    (2)

where A is an n × n matrix, b is an n-dimensional vector, and c is a scalar. These quadratic forms are adapted through generations to fit the given objective function locally. The least-squares quadratic regression (also known as nonlinear or second-order regression) is used because it is a fast and simple fitting method. Moreover, it produces a good approximation and has very encouraging properties for high-dimensional functions when finding A, b, and c in Equation (2). Thus, the idea is to choose a quadratic function that minimizes the sum of the squares of the vertical distances between the original fitness function values at the generated solutions and the approximate quadratic function, by finding the coefficients A, b, and c, as shown in Figure 2. Then, approximate optimizers of the quadratic models can be given by Equation (3), which is obtained by setting the derivative of Equation (2) to zero:

x* = -(1/2) A^{-1} b.    (3)
It is computationally expensive to compute the inverse of a full matrix A in Equation (3), especially for high-dimensional problems. Hence, it is recommended to apply an efficient relaxation that approximates A by a diagonal matrix. Therefore, an approximate quadratic solution can be given as:

x*_i = -b_i / (2 a_{i,i}), i = 1, . . . , n,    (4)

where a_{i,j} and b_i are the entries of A and b, respectively, with a_{i,i} ≠ 0, i = 1, . . . , n.
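The relaxed optimizer in Equation (4) is straightforward to compute. The following Python sketch implements it; the small guard eps against near-zero diagonal entries is an assumption of this sketch:

```python
import numpy as np

def diag_quad_optimizer(A, b, eps=1e-8):
    """Approximate minimizer of q(x) = x^T A x + b^T x + c using only
    the diagonal of A, as in Equation (4): x_i = -b_i / (2 a_ii)."""
    d = np.diag(np.asarray(A, dtype=float)).copy()
    d[np.abs(d) < eps] = eps  # guard against (near-)zero diagonal entries
    return -np.asarray(b, dtype=float) / (2.0 * d)
```

For a diagonal A this coincides with the exact optimizer of Equation (3), while avoiding the O(n^3) matrix inversion.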

Regression
Quadratic regression models are usually generated using least-squares methods [61]. Procedure 1 explains the steps of obtaining an approximate solution using the least-squares regression models. Starting with a set S of previously generated solutions, the least-squares regression method is called to find an approximate quadratic function, as in Equation (2), that fits those solutions. Then, an approximate optimizer can be calculated using Equation (3).

Procedure 1. Quad-LSE(S)
1. Using least-squares regression, generate a quadratic model as in Equation (2) that fits the solutions in the sample set S.
2. Get the coefficients A, b, and c of the generated approximate quadratic function.
3. Return the minimum point of the approximate quadratic function using Equations (3) and (4).
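A minimal Python sketch of Procedure 1, assuming the diagonal relaxation of Equation (4) so that the model reduces to q(x) = Σ_i a_i x_i^2 + Σ_i b_i x_i + c, could look as follows (the sample handling and guard value are illustrative):

```python
import numpy as np

def quad_lse(S, f_vals):
    """Sketch of Procedure 1 (Quad-LSE) with a diagonal quadratic model
    q(x) = sum_i a_i x_i^2 + sum_i b_i x_i + c, fitted by least squares."""
    S = np.asarray(S, dtype=float)           # sample solutions, shape (m, n)
    m, n = S.shape
    # Design matrix columns: [x_1^2 .. x_n^2, x_1 .. x_n, 1].
    design = np.hstack([S**2, S, np.ones((m, 1))])
    coef, *_ = np.linalg.lstsq(design, np.asarray(f_vals, float), rcond=None)
    a, b = coef[:n], coef[n:2 * n]
    a = np.where(np.abs(a) < 1e-8, 1e-8, a)  # keep the model non-degenerate
    return -b / (2.0 * a)                    # approximate minimizer, Eq. (4)
```

When the sampled function is itself a separable quadratic, the fit is exact and the returned point is its true minimizer.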

Artificial Neural Networks
Artificial Neural Networks (ANNs) are used in various optimization applications, such as complex nonlinear system identification and global approximation. Many researchers have proved that multi-layer feed-forward ANNs, using different types of activation functions, work as universal approximators [38]. Moreover, continuous nonlinear functions can be approximated and/or interpolated with feed-forward ANNs, which can also be used to approximate the outputs of dynamical systems directly. Generally, ANNs are employed with EAs in two main ways. Most research focuses on optimizing ANN parameters, especially weights, using evolutionary techniques, especially GAs [62]. This results in better ANN classification by exploiting the power of both methods: the learning ability of ANNs and the parameter-search ability of EAs. On the other hand, little research, such as this work, does the opposite by optimizing evolutionary algorithms using ANNs for various purposes [63][64][65].
In this research, the optimization process is sped up using an RBF neural network. The RBF network is used to search for the approximate quadratic function of a continuous function of several variables, such as those shown in Appendices A and B. The RBF network builds local representations of the functions and their parameters. RBF networks are simpler to initialize and train than feed-forward multi-layer perceptron (MLP) neural networks. Thus, RBF networks overcome the problem of a long iterative training process, as they are a local-learning type of neural network; the number of iterations is bounded above by the number of input samples. Moreover, RBF approximation can be very attractive for approximating complex functions in numerical simulations. RBFs also allow randomly scattered data to generalize easily to several space dimensions, and they can be accurate in interpolation. With all these motivations, this study uses the properties of RBF approximations to develop a new global optimization tool.
As shown in Figure 1, the RBF neural network is a two-layer ANN in which each hidden unit exploits a radial (Gaussian) activation function for the hidden-layer processing elements. The output unit applies a weighted sum of all hidden-unit outputs. Although the inputs to the RBF network are processed nonlinearly, the output is linear due to the structure of the RBF neural network. By adjusting the parameters of the Gaussian functions in the hidden-layer neurons, each neuron reacts only to a small region of the input space. The key to the successful performance and implementation of RBF networks is finding reasonable centers and widths for the Gaussian functions. In this research, during simulation, a MATLAB function is used to determine the Gaussian centers and widths using the input data of the desired function from Appendices A and B. Once the hidden layer has completed its local training and all parameter adjustments, the output layer adds the outputs of the hidden layer based on its weights. The steps of obtaining an approximate solution using the ANN models are shown in Procedure 2.

Procedure 2. Quad-ANN(S)
1. A set S of generated sample solutions is used to train the RBF network to obtain an approximate quadratic function as in Equation (2).
2. Get the coefficients A, b, and c of the approximate quadratic function obtained by the RBF network.
3. Return the minimum point of the approximate quadratic function using Equations (3) and (4).
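A simplified sketch of the RBF machinery behind Procedure 2 is given below. It builds an exact-interpolation Gaussian RBF with one center per sample, as described for the hidden layer in Figure 1. The fixed width and the plain linear solve are simplifying assumptions of this sketch; the paper instead uses a MATLAB routine to select centers and widths:

```python
import numpy as np

def rbf_fit(S, f_vals, width=1.0):
    """Exact-interpolation Gaussian RBF with one center per sample
    (one hidden neuron per input sample, as in Figure 1)."""
    S = np.asarray(S, dtype=float)
    d2 = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2.0 * width**2))          # hidden-layer activations
    weights = np.linalg.solve(Phi, np.asarray(f_vals, float))
    return S, weights                             # centers and output weights

def rbf_predict(x, centers, weights, width=1.0):
    # Linear output layer: weighted sum of Gaussian hidden-unit outputs.
    d2 = ((np.asarray(x, float) - centers) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width**2)) @ weights
```

Since the Gaussian kernel matrix is positive definite for distinct centers, the solve succeeds and the network reproduces the training values exactly; the quadratic coefficients of Procedure 2 would then be extracted from such a surrogate.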

Quadratic Coding Genetic Algorithm
One of the proposed methods in this research is called the Quadratic Coding Genetic Algorithm (QCGA). In the following, the main structure and design of the proposed QCGA method are described. Generally, the QCGA method uses a population of µ individuals or solutions. Each individual represents a quadratic form model as in Equation (2). These quadratic form individuals are adapted by the genetic operators through generations to fit the given objective function locally. Therefore, optimizing these quadratic models can provide approximated solutions for the local minima of a given objective function. The optimizers of the quadratic models can be given by Equation (3), as explained before.
Practically, it is very costly to compute the inverse of a full matrix A in Equation (3), especially for high-dimensional problems. Hence, using a reasonable relaxation of this coefficient matrix is crucial. Thus, as a main step in the proposed method, A is approximated by a diagonal matrix. Therefore, the QCGA individuals are coded in the quadratic models as:

w = (w_1, . . . , w_n, w_{n+1}, . . . , w_{2n}, w_{2n+1}),    (5)

where w_i, i = 1, . . . , n, equal the diagonal entries of A; w_i, i = n + 1, . . . , 2n, equal the entries of b; and w_{2n+1} = c. To avoid singularity of A, the entries w_i, i = 1, . . . , n, are initialized and updated to satisfy |w_i| ≥ ε, i = 1, . . . , n, for some predefined small real number ε. The fitness of an individual is computed as f(x*), where f(·) is the given objective function and x* is calculated as:

x*_i = -w_{n+i} / (2 w_i), i = 1, . . . , n.    (6)

The algorithmic scenario of the QCGA is shown in Figure 3. In each iteration of the QCGA algorithm, an intermediate parent population is generated using the linear ranking selection mechanism [66]. Then, the arithmetical crossover and uniform mutation operators [67] are applied to reproduce the offspring. After obtaining the children, survival selection is applied to select the surviving members based on elitism. Finally, the QCGA uses the local search methods of Nelder-Mead [68] and a heuristic descent method [18] as a final intensification step to refine the best solutions found so far.
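The coding and fitness evaluation above can be sketched as follows. The chromosome stores the diagonal of A, the entries of b, and c, and the decoded point is x_i = -w_{n+i} / (2 w_i); the helper names decode and fitness are ours, not from the paper:

```python
import numpy as np

def decode(w, eps=1e-6):
    """Map a QCGA chromosome w = (w_1..w_n, w_{n+1}..w_{2n}, w_{2n+1})
    to the optimizer of its diagonal quadratic model."""
    w = np.asarray(w, dtype=float)
    n = (w.size - 1) // 2
    a = w[:n].copy()
    a[np.abs(a) < eps] = eps       # enforce |w_i| >= eps (non-singular A)
    return -w[n:2 * n] / (2.0 * a)

def fitness(w, f):
    # Fitness is the objective value at the model's optimizer, f(x*).
    return f(decode(w))
```

Note that the constant term w_{2n+1} = c does not influence the optimizer, so it plays no role in the decoded point, only in the model itself.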


Sensing Evolution Strategies
The difficulty in solving global optimization problems arises from the challenge of searching a vast variable space to locate an optimal point, or at least a set of points of appropriate solution quality. It becomes even more challenging when this promising region is tiny compared to the complex search space. In addition, the quality of the initial solutions may impact the performance of the algorithm. Thus, various global optimization techniques exploit the memory concept in different ways to help find the optimal solution in minimum time and reduce the problem complexity. Generally, the memory concept is used in various research works to assist the exploration and exploitation processes. In [40], a directing strategy based on new search memories, the Gene Matrix (GM) and the Landscape Matrix (LM), is proposed. That strategy can provide the search with new diverse solutions by applying a new type of mutation called "mutagenesis" [39]. The mutagenesis operator is used to change selected surviving individuals to accelerate the exploration and exploitation processes. The GM and LM are also used to help the search process remember how far the exploration has gone in order to judge a termination point. In [15,16], tabu search algorithms use the memory concept in high-dimensional problems to explore the region around an iterate more precisely.
The main structure of the proposed Sensing Evolution Strategies (SES) is shown in Figure 4. In the following, full details of the SES method and its algorithm steps are described. The SES method starts with a population of µ real-coded chromosomes. The mutated offspring are generated as in typical ESs [69,70]. Therefore, a mutated offspring (x', σ') is obtained from a parent (x, σ), where the i-th component of the mutated offspring is given as:

σ'_i = σ_i exp(τ' N(0, 1) + τ N_i(0, 1)),    (7)
x'_i = x_i + σ'_i N_i(0, 1),    (8)

where τ' ∝ 1/√(2n) and τ ∝ 1/√(2√n), and the proportionality coefficients are usually set to one. The mutated offspring can also be computed from a recombined parent, as given in Procedure 3. Adding sensing features to the proposed SES method makes it behave more intelligently; this is achieved by generating and analyzing search data and memory elements. As the main search memory for the SES, the GM is invoked, as it has shown promising performance in global optimization [1,39,40]. Each x in the search space consists of n genes, or variables, giving a real-coded representation of each individual. First, the range of each gene is divided into m sub-ranges to monitor the diversity of the gene values. The GM is initialized as an n × m zero matrix, in which each entry of the i-th row refers to a sub-range of the i-th gene. During the search, entries are changed from zeros to ones whenever new genes are generated within the corresponding sub-ranges, as shown in Figure 5. This process continues until all entries become ones, i.e., there is no zero entry in the GM. At that point, when the GM is full of ones, the search process has achieved an advanced exploration and can be stopped. Therefore, the GM provides the search process with efficient termination criteria, and it can also help by supplying various diverse solutions.
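The self-adaptive ES mutation of Equations (7) and (8) can be sketched in Python as follows, with the proportionality coefficients set to one, as the text suggests is usual:

```python
import numpy as np

def mutate(x, sigma, rng):
    """Self-adaptive ES mutation:
    sigma'_i = sigma_i * exp(tau' * N(0,1) + tau * N_i(0,1)),
    x'_i     = x_i + sigma'_i * N_i(0,1)."""
    n = x.size
    tau_p = 1.0 / np.sqrt(2.0 * n)         # global learning rate tau'
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))  # coordinate-wise learning rate tau
    sigma_new = sigma * np.exp(tau_p * rng.standard_normal()
                               + tau * rng.standard_normal(n))
    x_new = x + sigma_new * rng.standard_normal(n)
    return x_new, sigma_new
```

The single global normal draw couples all step sizes, while the per-coordinate draws let individual step sizes adapt independently.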

Update Search Parameters and Sensing Memories and Conditions

Three sensing features have been invoked in the SES method, as explained in the following:
• Intensification Sensing. This feature seeks to detect promising search regions that have recently been visited. Then, a fast local search can be applied to explore such promising regions deeply. The SES method uses the quadratic search models stated in Procedures 1 and 2 to obtain approximate promising solutions using Equation (4). Moreover, a promising region can be detected when the generated mutated children happen to lie close together. Then, a quadratic model can be generated to approximate the fitness function in the close region surrounding those mutated children.
• Diversification Sensing. This feature aims to avoid entrapment in local solutions and to recognize the need for generating new diverse solutions. The SES method checks the improvement rates while the search is going on. Whenever significant non-improvement is encountered, the mutagenesis operation (Procedure 4) is called to generate new diverse solutions. In that procedure, new solutions are created by generating their gene values within sub-ranges whose corresponding indicators are zeros in the GM, as shown in Figure 6. Those diverse solutions replace some of the worst solutions in the current population.
• Exploration Sensing. This feature tries to recognize the progress of the search exploration process and to detect an appropriate time for termination whenever a wide exploration has been achieved. When a full GM is reached, the SES method has learned that the exploration process is almost over.
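The GM bookkeeping behind the exploration-sensing feature can be sketched as follows, assuming box bounds lower and upper on the genes (the function names are illustrative, not from the paper):

```python
import numpy as np

def update_gm(GM, x, lower, upper):
    """Mark in the n x m Gene Matrix the sub-range visited by each gene
    of solution x; a full GM (all ones) signals wide exploration."""
    n, m = GM.shape
    idx = np.floor((x - lower) / (upper - lower) * m).astype(int)
    idx = np.clip(idx, 0, m - 1)       # genes on the upper bound map to the last cell
    GM[np.arange(n), idx] = 1
    return GM

def gm_full(GM):
    # Termination signal: every sub-range of every gene has been visited.
    return bool(np.all(GM == 1))
```

Calling update_gm for each generated individual keeps the memory current at O(n) cost per solution, and gm_full supplies the automatic termination test.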
The mutagenesis operation starts by selecting a zero position in the GM, which corresponds to an unvisited partition. This GM entry is selected randomly. Then, a new value of the gene related to the selected GM entry is generated within the corresponding partition, as illustrated in Figure 6. The formal description of this mechanism is stated in Procedure 4. The formal description of the proposed SES method is given in Algorithm 1.
2.1 Recombination. Generate a random number χ ∈ [0, 1]. If χ > p_r, choose a parent (x_j, σ_j) from P_g and go to Step 2.2. Otherwise, choose ρ parents p_1, . . . , p_ρ from P_g, and calculate the recombined child (x_j, σ_j) using Procedure 3.
2.2 Mutation. Use the individual (x_j, σ_j) to calculate the mutated children (x_{j,k}, σ_{j,k}), k = 1, . . . , ν, as in Equations (7) and (8).
2.3 Fitness. Evaluate the fitness function F_{j,k} = F(x_{j,k}), k = 1, . . . , ν.
2.4 Intensification Sensing. If the individual (x_j, σ_j) and its mutated children (x_{j,k}, σ_{j,k}), k = 1, . . . , ν, are close enough to be approximated by a quadratic model, then apply Procedure 1 or 2 to obtain an improved point. Replace the worst mutated child with the point generated by the quadratic model if the latter is better.

Update Search Parameters. Update the GM.

Exploration Sensing. If the GM is full, then go to Step 8.
6. Selection. Choose the best µ individuals from P_g ∪ C_g to compose the next generation P_{g+1}. Update the gene matrix GM.
7. Diversification Sensing. If the new population needs new diverse solutions, then apply Procedure 4 to alter the N_w worst individuals in P_{g+1}. Set g := g + 1, and go to Step 2.
8. Intensification. Apply a local search method starting from each of the N_elite best solutions obtained in the previous search stage.
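The mutagenesis mechanism of Procedure 4 can be sketched as follows; this is a simplified version that alters one gene per call, and the bounds and names are illustrative:

```python
import numpy as np

def mutagenesis(x, GM, lower, upper, rng):
    """Sketch of Procedure 4: choose a random zero entry (i, j) of the GM
    and regenerate gene i of x inside the j-th (unvisited) sub-range."""
    zeros = np.argwhere(GM == 0)
    if zeros.size == 0:
        return x, GM                  # GM is full: nothing left to explore
    i, j = zeros[rng.integers(len(zeros))]
    width = (upper[i] - lower[i]) / GM.shape[1]
    x = x.copy()
    x[i] = lower[i] + (j + rng.random()) * width  # uniform in partition j
    GM[i, j] = 1                       # the partition is now visited
    return x, GM
```

Applying this operator to the N_w worst survivors injects solutions into previously unvisited sub-ranges, which is exactly what the diversification-sensing step requires.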
The non-convex global optimization problem defined in Equation (1) is NP-complete [71]. Therefore, there is no known efficient algorithm to solve such a problem in its general form. Metaheuristics, which are generally polynomial-time algorithms, are practical solvers for this problem [72,73]. The proposed methods, QCGA and SES, follow the main structures of standard EAs with additional procedures that are at most of order O(n^3).

Experimental Results
In the following, the experimental results are discussed. All the proposed methods are programmed in MATLAB. The parameter values used in the QCGA and SES algorithms are set based on typical settings in the literature or determined through our preliminary numerical experiments, as shown in Tables 1 and 2. Two classes of benchmark test functions are invoked in the experiments to assess the efficiency of the proposed methods. The first class contains 13 classical test functions f_1-f_13 [16], whose definitions are given in Appendix A. The other class contains 25 hard test functions h_1-h_25 [74,75], which are described in Appendix B.

Table 2. SES parameters.

Parameter | Definition | Value
µ | The population size | 30
λ | The offspring size | 10µ
ρ | No. of mated parents | min(n, 5)
m | No. of partitions in the GM | 50
p_r | The recombination probability | 0.25
N_elite | No. of best solutions used in the intensification step | 1
N_w | No. of worst solutions updated by the mutagenesis process | n

In the following, the performance of the quadratic coding and quadratic search operators is analyzed before discussing the overall performance of the proposed methods.

Quadratic Coding
The main numerical results are shown in Figure 7 and Table 3. Table 3 shows the average values of the best solutions (Avg. f(x_best)), the average numbers of function evaluations (Cost), and the success rates of reaching close to the global optima. We compare the results of the QCGA method with those of the standard genetic algorithm (GA) and a recent genetic algorithm version called the Adaptive Genetic Algorithm (AGA) [9]. The QCGA and GA results are obtained over 100 independent runs. The AGA results are taken from the original reference, where they are averaged over only 30 independent runs. The results in Table 3 and Figure 7 show that the proposed method can improve the performance of the genetic algorithm and reach faster convergence. Moreover, the proposed method is competitive with recent advanced GA versions.

Quadratic Models
The performance of the quadratic regression model is tested to show how it can help EAs find new, improved solutions. Figure 8 shows the success rates of obtaining improved solutions using the proposed quadratic models. Two test functions, f_5 and f_11 (shown in Appendix A), are invoked in this experiment with dimensions 2, 5, 10, 20, 30, and 50. The solutions used in this experiment are generated both close to local or global minima and far from them. The results show that the success rates of obtaining improved solutions are promising, especially in high dimensions. However, Figure 8 illustrates that the performance of this quadratic operation is better near minima than far from them. Compared to the regression models, the RBF works in almost the same manner. However, the RBF success rate of obtaining improved solutions depends on the type of function, since the performance of the quadratic operation on f_5 is better than that on f_11. Finally, we may conclude that the proposed quadratic models are promising intensification procedures, especially in the vicinity of local and global minima.

Termination Criteria and Processing Time
The automatic termination using the GM is illustrated in Figure 9 using the test functions f_1, f_3, f_5, and f_7. The QCGA and SES_R codes were kept running after reaching a full GM to check for any further improvement. It is worth mentioning that there is almost no improvement after reaching a full GM, as shown in Figure 9. Therefore, automatic termination using the GM can help EAs avoid unnecessary computations. The runtimes of the proposed methods, reported over 25 independent runs, are presented in Figures 10 and 11. Figure 10 shows the processing times of the QCGA and the standard GA. The difference between the processing times of the two methods is reasonable, even though the QCGA uses additional search procedures beyond those of the standard GA. The processing times of all the proposed methods are presented in Figure 11 and are close to each other. However, one can note that the processing times when using the ANN models are greater than those when using the regression models. Table 4 shows a comparison between the regression-based SES_R, the RBF-based SES_N, and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [76]. The comparison is not entirely fair, as the termination policies of the methods differ; partly for this reason, the results show that the regression method is more promising than the RBFs. The results also reveal that the QCGA algorithm has steadily better performance than ESLAT and CMA-ES in terms of success rate. The solution costs in Table 5 show the number of iterations consumed to obtain each result. The RBFs consume more iterations than the regression method on all functions, while the QCGA consumes less than the ESLAT algorithm. The Wilcoxon rank-sum test [77][78][79] is applied to assess the performance of all the compared methods.
This test is a non-parametric test, which suits our experiments [80,81]. A comparison between the Wilcoxon test's results and the results obtained by our proposed methods is given in Table 6, which shows the sum of ranks obtained in each comparison and the associated p-value. First, the difference d_i between the performance values of the two methods on the i-th of the N results is calculated. The differences are then ranked according to their absolute values, and the rank sums R+ and R− are computed as follows:
$$R^{+} = \sum_{d_i > 0} \mathrm{rank}(d_i) + \frac{1}{2} \sum_{d_i = 0} \mathrm{rank}(d_i), \qquad R^{-} = \sum_{d_i < 0} \mathrm{rank}(d_i) + \frac{1}{2} \sum_{d_i = 0} \mathrm{rank}(d_i).$$
Global Search Results
The null hypothesis of equality is rejected when min(R+, R−) is less than or equal to the critical value of the Wilcoxon distribution for N degrees of freedom (Table B.12 in [79]). According to the best-solution criterion, SES_R is the best method, as shown in Table 6, while the comparison based on success rate shows that SES_R is better than the SES_N and ESLAT algorithms. QCGA also performs better than SES_N and ESLAT. Under the third criterion, solution cost, SES_R outperforms all the compared methods except CMA-ES.
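The rank computation above can be sketched in a few lines, assuming the common conventions of average ranks for tied absolute differences and an even split of zero-difference ranks between R+ and R−; the paper's exact tie handling may differ.

```python
def wilcoxon_ranks(perf_a, perf_b):
    """Rank sums R+ and R- for paired performance values of two methods.

    Average ranks are assigned to tied absolute differences, and ranks of
    zero differences are split evenly between R+ and R- (a common
    convention; not necessarily the paper's exact tie handling).
    """
    d = [a - b for a, b in zip(perf_a, perf_b)]
    # Order indices by absolute difference and assign average ranks to ties.
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of the 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    r_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    r_minus = sum(r for r, di in zip(ranks, d) if di < 0)
    half_zero = 0.5 * sum(r for r, di in zip(ranks, d) if di == 0)
    return r_plus + half_zero, r_minus + half_zero
```

The equality hypothesis is then rejected when min(R+, R−) does not exceed the tabulated critical value for N results.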

Results for Hard Test Functions
There are many suites of test functions; one of the most common is the CEC2005 suite, which contains 25 functions [74,75] (see Appendix B). This benchmark suite includes unimodal functions (h1-h5), basic multimodal functions (h6-h12), expanded multimodal functions (h13-h14), and hybrid composition functions (h15-h25). These functions are composed of classical functions with added difficulties such as shifting, rotation, and composition.
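The shift-and-rotate construction behind these harder functions can be illustrated with a small sketch; `base_fn`, `shift`, `rotation` and `f_bias` are illustrative placeholders here, since the official suite fixes this data per function.

```python
import numpy as np

def shifted_rotated(base_fn, shift, rotation, f_bias=0.0):
    """Build h(x) = base_fn(R (x - o)) + bias, a CEC2005-style transform.

    The shift vector moves the optimum away from the origin, and the
    rotation matrix removes variable separability.
    """
    def h(x):
        z = rotation @ (np.asarray(x, dtype=float) - shift)
        return float(base_fn(z)) + f_bias
    return h

# Example with hypothetical data: a shifted, trivially rotated sphere.
sphere = lambda z: np.sum(z ** 2)
shift = np.array([1.0, -2.0, 0.5])
rotation = np.eye(3)  # identity for illustration; the suite uses orthogonal matrices
h = shifted_rotated(sphere, shift, rotation)
```

Evaluating `h` at the shift vector returns the bias value, confirming that the global minimizer has been relocated away from the origin.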
The proposed QCGA method is applied to these hard test functions with dimensions 10, 30 and 50, as reported in Tables 7-9. These tables report the global minima; the mean and standard deviation of the best objective values obtained by the algorithm; the best and worst values; the mean and standard deviation of the errors between the obtained objective values and the optimal ones; the mean number of function evaluations; and the rate of success in reaching the global minima within a gap of 10^-3. All these results are obtained over 25 independent runs of the QCGA code. They show that the method reaches the global minima for some functions and comes close for others. The best objective values obtained by the method are relatively close to the worst ones, except for functions h4 and h5.  To check the quality of the hard-function results displayed in Tables 8 and 9, we compare them with the following benchmark methods: • The self-adaptive artificial bee colony based on the global best candidate (SABC-GB) method [82]. • The ABC/best/1 method [83].
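The per-function statistics reported in Tables 7-9 can be computed as follows, where `best_values` holds the best objective value from each independent run and `f_star` is the known global minimum (illustrative inputs, not the paper's data).

```python
import statistics

def run_statistics(best_values, f_star, gap=1e-3):
    """Summarize independent runs: best/worst values, error statistics,
    and the rate of success in reaching f_star within the given gap."""
    errors = [v - f_star for v in best_values]
    return {
        "best": min(best_values),
        "worst": max(best_values),
        "err_mean": statistics.fmean(errors),
        "err_std": statistics.stdev(errors) if len(errors) > 1 else 0.0,
        "success_rate": sum(1 for e in errors if abs(e) <= gap) / len(errors),
    }
```

For example, with 25 runs the success rate is simply the fraction of runs whose final error falls within the 10^-3 gap.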
The compared results are reported in Tables 10 and 11, and the statistical tests for them in Table 12. Tables 10 and 11 show the mean and standard deviation of the errors between the obtained objective values and the optimal ones for the hard test functions with dimensions 30 and 50, respectively. The results of the compared methods SABC-GB and ABC/best/1 are taken from their original references [82,83], respectively. Both the SABC-GB and ABC/best/1 methods are terminated after 300,000 function evaluations, as mentioned in [82]. The best error values among the compared methods in Tables 10 and 11 are marked in bold, and the number of times each method obtained the best result is given at the bottom of each table. For the 30-dimension results in Table 10, SABC-GB and QCGA are close in terms of obtaining the best errors; however, the proposed QCGA method is slightly better than SABC-GB for the 50-dimension results in Table 11. The Wilcoxon rank-sum test is applied to estimate the difference between the compared methods in terms of the obtained errors and the numbers of function evaluations; its outputs are reported in Table 12. These tests clarify that there is no significant difference between the compared methods in terms of solution errors at the significance level of 0.05. However, the QCGA clearly outperforms the others in reducing the computational cost of function evaluations.

Conclusions
In this work, we propose a modified GA version called QCGA, which applies a quadratic search operator to solve the global optimization problem. The quadratic coding of solutions efficiently assists the algorithm in achieving promising performance. Furthermore, other quadratic-based techniques are presented as search operators to help the ESs find a global solution quickly. Moreover, the gene matrix memory element is invoked, which guides the search process toward new diverse solutions and helps terminate the algorithms automatically. The efficiency of these methods is tested on several sets of benchmark test functions. Comparisons with standard techniques from the literature indicate that the proposed methods are promising and efficient in shortening computational costs. We conclude that applying quadratic search operators can yield better performance and quicker solutions. Therefore, we suggest extending the proposed ideas to other metaheuristics: the quadratic search models can be invoked as local search procedures in methods such as particle swarm optimization, ant colony optimization, and artificial immune systems. Moreover, the quadratic models and coding can also be used in response surface and surrogate optimization techniques. Finally, the proposed methods can be extended to constrained global optimization problems using suitable constraint-handling techniques.