A Generalized Evolutionary Metaheuristic (GEM) Algorithm for Engineering Optimization

Many optimization problems in engineering and industrial design applications can be formulated as optimization problems with highly nonlinear objectives, subject to multiple complex constraints. Solving such optimization problems requires sophisticated algorithms and optimization techniques. A major trend in recent years is the use of nature-inspired metaheuristic algorithms (NIMA). Despite the popularity of nature-inspired metaheuristic algorithms, there are still some challenging issues and open problems to be resolved. Two main issues related to current NIMAs are: there are over 540 algorithms in the literature, and there is no unified framework for understanding the search mechanisms of different algorithms. Therefore, this paper analyses some similarities and differences among different algorithms and then presents a generalized evolutionary metaheuristic (GEM) in an attempt to unify some of the existing algorithms. After a brief discussion of some insights into nature-inspired algorithms and some open problems, we propose a generalized evolutionary metaheuristic algorithm that unifies more than 20 different algorithms so as to understand their main steps and search mechanisms. We then test the unified GEM using 15 test benchmarks to validate its performance. Finally, further research topics are briefly discussed.


Introduction
Many design problems in engineering and industry can be formulated as optimization problems subject to multiple nonlinear constraints. To solve such optimization problems, sophisticated optimization algorithms and techniques are often used. Traditional algorithms such as the Newton-Raphson method are efficient, but they use derivatives, and the calculation of these derivatives, especially the second derivatives in a high-dimensional space, can be costly. In addition, such derivative-based algorithms usually perform local search, and the final solutions may depend on the starting point if the optimization problems are highly nonlinear and multimodal (Boyd and Vandenberghe, 2004; Yang, 2020a). An alternative approach is to use derivative-free algorithms, and many evolutionary algorithms, especially recent nature-inspired algorithms, do not use derivatives (Kennedy and Eberhart, 1995; Storn and Price, 1997; Pham et al., 2005; Yang, 2020b). These nature-inspired metaheuristic algorithms are flexible and easy to implement, and yet they are usually very effective in solving various optimization problems in practice.
Algorithms have been important throughout history (Beer, 2016; Chabert, 1999; Schrijver, 2005). There is a vast spectrum of algorithms in the literature, ranging from fundamental algorithms to combinatorial optimization techniques (Chabert, 1999; Schrijver, 2005). For some special classes of optimization problems, effective algorithms exist, such as for linear programming (Karmarkar, 1984) and quadratic programming (Zdenek, 2009) as well as convex optimization (Bertsekas et al., 2003; Boyd and Vandenberghe, 2004). However, for nonlinear optimization problems, techniques vary, and approximations, heuristic algorithms and metaheuristic algorithms are often needed. Even so, optimal solutions cannot always be obtained for nonlinear optimization.
Metaheuristic algorithms are approximation optimization techniques, and they use some form of heuristics with trial and error and some form of memory and solution selection (Glover, 1986; Glover and Laguna, 1997). Most metaheuristic algorithms are evolution-based and/or nature-inspired. Evolution-based algorithms such as the genetic algorithm (Holland, 1975; Goldberg, 1989) are often called evolutionary algorithms. Algorithms such as particle swarm optimization (PSO) (Kennedy and Eberhart, 1995), the bees algorithm (Pham et al., 2005; Pham and Castellani, 2009) and the firefly algorithm (Yang, 2009) are often called swarm intelligence based algorithms (Kennedy et al., 2001).
However, terminologies in this area are not well defined, and different researchers may use different terminologies to refer to the same things. In this paper, we use nature-inspired algorithms to mean all the metaheuristic algorithms that are inspired by some form of evolutionary characteristics in nature, be they biological, behavioural, social, physical or chemical characteristics (Yang, 2020a; Yang and He, 2019). In this broad sense, almost all such algorithms can be called nature-inspired algorithms, including the bees algorithms (Pham and Castellani, 2014, 2015), PSO (Kennedy et al., 2001), ant colony optimization, the bat algorithm, the flower pollination algorithm, the cuckoo search algorithm, the genetic algorithm, and many others.
Nature-inspired algorithms have become popular in recent years; it is estimated that there are several hundred algorithms and variants in the current literature (Yang, 2020a), and the relevant literature is expanding with more algorithms emerging every year. An exhaustive review of metaheuristic algorithms by Rajwar et al. (Rajwar et al., 2023) indicated that there are over 540 metaheuristic algorithms, with over 350 of them developed in the last 10 years. Many such new variants have been developed based on different characteristics/species from nature, social interactions and/or artificial systems, based on the hybridization of different algorithms or algorithmic components, or based on different strategies of selecting candidate solutions and sharing information (Mohamed et al., 2020; Rajwar et al., 2023; Zelinka, 2015).
Despite the wide applications of nature-inspired algorithms, theoretical analysis lags behind in contrast. Though there are some rigorous analyses concerning the genetic algorithm (Greenhalgh and Marshal, 2000), PSO (Clerc and Kennedy, 2002) and the bat algorithm (Chen et al., 2018), many new algorithms have not been analyzed in detail. Ideally, a systematic analysis and review should be carried out in a similar way to convex analysis (Bertsekas et al., 2003) and convex optimization (Boyd and Vandenberghe, 2004). In addition, since there are so many different algorithms, it is difficult to figure out which search mechanisms are effective in determining the performance of a specific algorithm. Furthermore, some of these 540 algorithms can be very similar in terms of their search mechanisms or updating equations, even though they may look very different on the surface. This can cause confusion and frustration among readers and researchers trying to follow what happens in this research community. In fact, there are many open problems and unresolved issues concerning nature-inspired metaheuristic algorithms (Yang et al., 2018; Yang and He, 2019; Yang, 2020b; Rajwar et al., 2023).
Therefore, the purpose of this paper is two-fold: outlining some of the challenging issues and open problems, and then developing a generalized evolutionary metaheuristic (GEM) to unify many existing algorithms. The rest of the paper is organized as follows. Section 2 first provides some insights into nature-inspired computing and then outlines some of the open problems concerning nature-inspired algorithms. Section 3 presents a unified framework of more than 20 different algorithms so as to view all the relevant algorithms through the same set of mathematical equations. Section 4 discusses 15 benchmark functions and case studies, whereas Section 5 carries out some numerical experiments to test and validate the generalized algorithm. Finally, Section 6 concludes with a brief discussion of future research topics.

Insights and Open Problems
Though not much systematic analysis of nature-inspired algorithms exists, various studies from different perspectives have been carried out and different degrees of insight have been gained (Eiben and Smit, 2011; Yang, 2020b; Ser et al., 2019). For any given nature-inspired algorithm, we can analyze its algorithmic components and their role in exploration and exploitation, and we can also study its search mechanism so as to understand how local and global search moves are carried out. Many studies in the literature have provided numerical convergence curves over the iterations when solving different function optimization problems and sometimes real-world case studies, and such convergence curves are often presented with various statistical quantities according to a specific set of performance metrics, such as the accuracy of the solution and the success rate as well as the number of iterations. In addition, stability and robustness have also been studied for some algorithms (Clerc and Kennedy, 2002). Such analyses, though very important, are largely qualitative studies of algorithmic features, as summarized in Fig. 1.
The analysis of algorithms can be carried out more rigorously from a quantitative perspective, as shown in Fig. 2. For a given algorithm, it is possible to analyze the iterative behaviour of the algorithm using fixed-point theory. However, the assumptions required for such theories may not be realistic or relevant to the actual algorithm under consideration. Thus, it is not always possible to carry out such analysis. One good alternative is to use complexity theory to analyze an algorithm's time complexity. Interestingly, most nature-inspired algorithms have the complexity of O(nt), where n is typically the population size and t is the number of iterations. It remains a mystery how such low-complexity algorithms can solve highly complex nonlinear optimization problems, as has been demonstrated in various applications.
From the dynamical system point of view, an algorithm is a system of updating equations, which can be formulated as a discrete dynamical system. The eigenvalues of the main matrix of such a system determine the main characteristics of the algorithm. These eigenvalues can be expected to depend on the parameter values of the algorithm, and thus parameter settings can be important. In fact, the analyses of PSO (Clerc and Kennedy, 2002) and the bat algorithm (Chen et al., 2018) show that parameter values are important: if the parameter values are in the wrong ranges, the algorithm may become unstable and thus less effective. This also indicates the importance of parameter tuning in nature-inspired algorithms (Eiben and Smit, 2011; Joy et al., 2023). However, parameter tuning itself is a challenging task, because its aim is to find the optimal parameter setting of an optimization algorithm for a given set of optimization problems.
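This eigenvalue view can be made concrete with the simplified one-dimensional, deterministic PSO model often used in such analyses (in the spirit of Clerc and Kennedy, 2002). Writing the dynamics around a fixed attractor p as a 2x2 linear system in (x_t - p, v_t), the spectral radius of the update matrix decides stability. The sketch below is illustrative only; the inertia weight w and attraction coefficient phi are generic symbols, and the particular numerical values are our own choices, not taken from any cited study:

```python
import numpy as np

def pso_spectral_radius(w, phi):
    """Spectral radius of the simplified deterministic 1-D PSO dynamics
    v_{t+1} = w*v_t + phi*(p - x_t), x_{t+1} = x_t + v_{t+1},
    written as a linear system in the state (x_t - p, v_t)."""
    A = np.array([[1.0 - phi, w],
                  [-phi,      w]])
    return max(abs(np.linalg.eigvals(A)))

# A damped setting keeps the spectral radius below 1 (stable oscillation)...
print(pso_spectral_radius(0.7, 1.5))
# ...while an inertia weight above 1 pushes it beyond 1 (divergence).
print(pso_spectral_radius(1.2, 1.5))
```

For complex eigenvalues the modulus equals sqrt(det A) = sqrt(w), which is exactly why inertia weights below one are commonly recommended: the parameter range directly controls whether the iteration contracts or diverges.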
From the probability point of view, an algorithm can be considered as a set of interacting Markov chains, and thus it is possible to carry out some approximate convergence analysis using Markov chain Monte Carlo (MCMC) theory (Chen et al., 2018). However, the conditions required for MCMC can be stringent, and thus not all algorithms can be analyzed in this way. From the perspective of the analysis of variance, it is possible to see how the variances may change over the iterations so as to gain some useful understanding (Zaharie, 2009).
An alternative approach is to use a Bayesian statistical framework to gain some insights into the algorithm under analysis. Loosely speaking, the initialization of the population in an algorithm with a given probability distribution forms the prior of the algorithm. As the algorithm runs, the solutions evolve, leading to a posterior distribution of solutions and parameters. Thus, the evolution of the algorithm can be understood from this perspective. However, since a Bayesian framework often requires extensive integral evaluations, it is not straightforward to obtain rigorous results in general.
A more ambitious approach is to build a mathematical framework so as to analyze algorithms in a unified way, though such a framework does not exist in the current literature. Ideally, a theoretical framework should provide enough insight into the rise of swarm intelligence, which is still an open problem (Yang, 2020b; Yang and He, 2019).
As we have seen, algorithms can potentially be analyzed from different perspectives, and there are many issues that need further research in the general area of swarm intelligence and nature-inspired computation. We can highlight a few important open problems.
1. Theoretical framework. Though there are some good theoretical analyses of a few algorithms, such as the genetic algorithm (Greenhalgh and Marshal, 2000), PSO (Clerc and Kennedy, 2002) and the bat algorithm (Chen et al., 2018), there is no unified theoretical framework that can be used to analyze all algorithms, or at least a major subset of nature-inspired algorithms. There is a strong need to build a mathematical framework so that the convergence and the rate of convergence of any algorithm can be analyzed with rigorous, quantitative results.
In addition, the stability of an algorithm and its robustness should also be analyzed within the same mathematical framework, based on theories such as dynamical systems, perturbation theory and probability. The insights gained from such a theoretical framework should provide enough guidance for tuning and setting parameter values for a given algorithm. However, how to construct this theoretical framework is still an open problem. It may be that a multidisciplinary approach is needed to ensure that algorithms are examined from different perspectives.
2. Parameter tuning. The setting of parameters in an algorithm can influence the performance of the algorithm, though the extent of such influence may largely depend on the algorithm itself and potentially on the problem to be solved (Joy et al., 2023). There are different methods for parameter tuning, but it is not clear which method(s) should be used for a given algorithm. In addition, different tuning methods may produce different parameter settings for different problems, which leads to the question of whether a truly optimal parameter setting exists. It seems that there are different optimality conditions concerning parameter setting (Yang et al., 2013; Joy et al., 2023), and parameter settings may be both algorithm-dependent and problem-dependent, depending on the performance metric used for tuning. Many of these questions remain unresolved.

3. Benchmarking. All new algorithms should be tested and validated using a diverse set of benchmarks and case studies. In the current literature, one of the main issues is that most tests use smooth functions as benchmarks, and it seems that these functions have little to do with real-world applications. Thus, it is not clear how such tests can actually validate the algorithm or provide any insight into the potential performance of the algorithm on much more complicated real-world applications. There is a strong need to systematically investigate the role of benchmarking and what types of benchmarks and case studies should be used.
4. Performance metrics. It can be expected that the performance of an algorithm depends on the performance metrics used to measure it. In the current literature, performance measures are mostly the accuracy compared to the true objective values, the success rate on multiple functions, the number of iterations as a measure of computational effort, the computational time, and combinations of these measures. Whether these measures are fair or sufficient is still an open question. In addition, these performance metrics tend to lead to rankings of the algorithms and benchmark functions used. Again, this may not be consistent with the no-free-lunch theorems (Wolpert and Macready, 1997). With newly gained insights, we may be able to design better and more effective algorithms.
In addition, from both the theoretical perspective and the practical point of view, the no-free-lunch theorems (Wolpert and Macready, 1997) have had a significant impact on the understanding of algorithm behaviour. Studies also indicate that free lunches may exist for co-evolution (Wolpert and Macready, 2005), continuous problems (Auger and Teytaud, 2010) and multi-objective optimization (Corne and Knowles, 2003; Zitzler et al., 2003) under certain conditions. The main question is how to use such possibilities to build more effective algorithms.

A Generalized Evolutionary Metaheuristic (GEM) for Optimization
From the numerical algorithm analysis point of view (Boyd and Vandenberghe, 2004; Yang, 2020a), an algorithm is in essence a procedure to modify the current solution x^t so as to produce a potentially better solution x^{t+1}. The well-known Newton's method at iteration t can be written as

x^{t+1} = x^t - [∇²f(x^t)]^{-1} ∇f(x^t),

where ∇f is the gradient vector of the objective function at x^t and ∇²f is the Hessian matrix. Loosely speaking, all iterative algorithms can schematically be written as

x^{t+1} = x^t + S(x^t, x*, p1, ..., pk),

where S is a step size vector, which can in general depend on the current solution x^t, the current best solution x*, and a set of parameters (p1, ..., pk). Different algorithms may differ only in the ways of generating such steps.
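The Newton iteration described above can be sketched in a few lines; the quadratic test function here is purely illustrative and chosen so that a single step reaches the minimizer exactly:

```python
import numpy as np

def newton_step(grad, hess, x):
    """One Newton iteration: solve ∇²f(x) * d = ∇f(x), then return x - d."""
    return x - np.linalg.solve(hess(x), grad(x))

# Quadratic bowl f(x) = x1^2 + 4*x2^2: the Hessian is constant,
# so Newton's method converges in exactly one step from any start.
grad = lambda x: np.array([2.0 * x[0], 8.0 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 8.0]])
x_next = newton_step(grad, hess, np.array([3.0, -2.0]))
print(x_next)  # lands exactly at the minimizer [0, 0]
```

For non-quadratic objectives, forming and factorizing the Hessian at every step is exactly the cost that motivates the derivative-free alternatives discussed in this paper.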
In addition to the open problems highlighted above, the other purpose of this paper is to provide a unified algorithmic framework, based on more than 20 existing nature-inspired algorithms. This unification may enable us to understand the links and differences among different algorithms. It also allows us to build a single generic algorithm that can potentially combine the advantages of all its component algorithms, leading to an effective algorithm.
As we have mentioned earlier, there are approximately 540 metaheuristic algorithms and variants in the literature (Rajwar et al., 2023), so it is impossible to test all algorithms in an attempt to provide a unified approach. This paper is the first attempt to unify multiple algorithms within the same generalized evolutionary perspective. Thus, we have to select some twenty algorithms to see if we can achieve our aim. Obviously, there are two challenging issues: how many algorithms we should use and which algorithms we should choose. For the former question, we think it makes sense to choose at least ten different algorithms so that we can get a reasonable picture of the unification capabilities; in fact, we have chosen 22 algorithms for this framework. As for the latter question, it is almost impossible to decide which algorithms to use from among 540 algorithms. In the end, the algorithms we have chosen have different search features or characteristics in terms of their exploration and exploitation capabilities. In addition, in the case of similar exploration and exploitation capabilities, we tend to select the more well-established algorithms that appeared earlier in the literature, because later algorithms may have been built on the same mechanisms.
This unified algorithm, with multiple parameters and components, is called the Generalized Evolutionary Metaheuristic (GEM), and it unifies more than 20 algorithms.
A solution vector x in the D-dimensional space is denoted by

x = (x1, x2, ..., xD).

For a set of n solution vectors x_i where i = 1, 2, ..., n, these vectors evolve with the iteration t (the pseudo-time) and they are denoted by x_i^t. Among these n solutions, there is one solution g* that gives the best objective value (i.e., highest for maximization or lowest for minimization). For minimization,

g* = argmin_{1 ≤ i ≤ n} f(x_i^t),

where the argmin is to find the corresponding solution vector with the best objective value. Here, x̄g is the average of the top m best solutions among the n solutions (m ≤ n). That is,

x̄g = (1/m) Σ_{i=1}^{m} x_i,

where the solutions are ranked so that the first m are the best. In the case of m = n, this becomes the centre of gravity of the whole population. Thus, we can refer to this step as the centrality move.
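The centrality move is simply the mean of the m best solutions in the current population. A minimal sketch (array names are illustrative):

```python
import numpy as np

def centrality(pop, fvals, m):
    """Mean of the m best (lowest-objective) solutions: the x̄g of the
    centrality move. With m == len(pop) this is the population centroid."""
    best = np.argsort(fvals)[:m]      # indices of the m lowest objective values
    return pop[best].mean(axis=0)

pop = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]])
fvals = np.array([1.0, 5.0, 3.0])
print(centrality(pop, fvals, 2))      # mean of rows 0 and 2
```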
The main updating equations for our proposed unified algorithm are the guided randomization and the position update. The guided randomization search step consists of

v_i^{t+1} = p v_i^t + q ε1 (g* - x_i^t) + r ε2 (x*_i - x_i^t). (6)

The position update step is

x_i^{t+1} = a x_i^t + (1 - a) x̄g + b (x_j^t - x_i^t) + c v_i^{t+1} + Θ h(x_i^t) ζ_i, (7)

where a, b, c, Θ, p, q, and r are parameters, ε1 and ε2 are uniformly distributed random numbers, and x*_i is the individual best solution for agent or particle i. Here, h(x_i^t) is a function of the current solution, and in most cases we can set h(x_i^t) = 1, a constant. In addition, ζ_i is a vector of random numbers, typically drawn from a standard normal distribution. That is,

ζ_i ∼ N(0, 1). (8)

It is worth pointing out that the effect of each term can be different, and thus we can name each term based on its effect and meaning. The inertia term is p v_i^t, whereas q ε1 (g* - x_i^t) simulates the motion or move towards the current best solution. The term r ε2 (x*_i - x_i^t) means a move towards the individual best solution. The centrality term has a weighting coefficient (1 - a), which tries to balance the importance of the current solution x_i^t against the importance of the centre of gravity of the swarm x̄g. The main term b(x_j^t - x_i^t) implies that similar solutions should converge with subtle or minor changes, and c v_i^{t+1} is the kinetic motion of each solution vector. The perturbation term Θ h(x_i^t) ζ_i is controlled by the strength parameter Θ, where h(x_i) can be used to simulate certain specialized moves for each solution agent. If no specialization is needed, h(x_i) = 1 can be used.
The selection and acceptance criterion for minimization is

x_i^{t+1} = { x_i^{t+1}, if f(x_i^{t+1}) ≤ f(x_i^t); x_i^t, otherwise. } (9)

These four main steps/equations are summarized in the pseudocode shown in Algorithm 1, where the initialization of the population follows the standard Monte Carlo randomization

x_i = Lb + (Ub - Lb) rand(1, D), (10)

where Lb and Ub are, respectively, the lower bound and upper bound of the feasible decision variables.
In addition, rand(1, D) is a vector of D random numbers drawn from a uniform distribution. Special cases of this framework correspond to more than 20 different algorithms. It is worth pointing out that in many cases there is more than one way to represent the same algorithm by setting different parameter values. The following representations are one possible way of representing these algorithms:

1. Differential evolution (DE) (Storn and Price, 1997; Price et al., 2005).

2. Particle swarm optimization (PSO) (Kennedy and Eberhart, 1995; Kennedy et al., 2001): a = 1, b = 0, p = 1, c = 1, and Θ = 0. In addition, q = α and r = β.
14. Ant lion optimizer (ALO) (Mirjalili, 2015): a = 1, b = 0, c = 1, Θ = d_i - c_i, where d_i is the scaled random walk length and c_i is its corresponding minimum. In addition, p = 0, q = 0, r = 1, with some variation: g* is simply the average of two selected elite ant-lion solutions.
Algorithm 1: GEM: a generalized evolutionary metaheuristic for optimization.

Data: Define the optimization objective f(x) with proper constraint handling.
Result: Best or optimal solution (fmin) with archived solution history.
1 Initialize parameter values (n, a, b, c, Θ, p, q, r);
2 Generate an initial population of n solutions using Eq. (10);
3 Find the initial best solution g* and fmin among the initial population;
4 while (t < MaxGeneration) do
5   Carry out the guided randomization step for all n solutions;
6   Update the positions of all n solutions by the position update step;
7   Evaluate the new solutions and accept or reject them by Eq. (9);
8   Update the individual bests x*_i, the centrality x̄g, the best solution g* and fmin;
9 end

As pointed out earlier, there is more than one way of representing an algorithm under consideration within the unified framework, and the minor details and unimportant components of some algorithms may be ignored. In essence, the unified framework intends to extract the key components of multiple algorithms so that we can figure out which main search mechanisms or moves are important in metaheuristic algorithms. Therefore, it can be expected that many other algorithms may be considered as special cases of the GEM framework if the right combinations of parameter values are used.
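To make the steps of Algorithm 1 concrete, the following Python sketch implements one possible reading of the GEM loop: Monte Carlo initialization (Eq. (10)), the guided randomization and position update steps, and greedy acceptance (Eq. (9)). The implementation details are our own assumptions rather than the paper's exact Matlab code: bounds are handled by clipping, the random partner j is drawn uniformly, h(x) = 1, and Θ decays as θ^t as in the paper's parameter settings:

```python
import numpy as np

rng = np.random.default_rng(42)

def gem_minimize(f, Lb, Ub, n=10, a=1.0, b=0.7, c=1.0, p=0.7, q=1.0, r=1.0,
                 theta=0.97, m=5, t_max=500):
    """A compact, illustrative sketch of the GEM loop (one possible reading)."""
    D = len(Lb)
    X = Lb + (Ub - Lb) * rng.random((n, D))      # Eq. (10): Monte Carlo init
    V = np.zeros((n, D))
    F = np.array([f(x) for x in X])
    Xbest, Fbest = X.copy(), F.copy()            # individual bests x*_i
    g = X[F.argmin()].copy()                     # current best g*
    for t in range(t_max):
        Theta = theta ** t                       # decaying perturbation strength
        xg = X[np.argsort(F)[:m]].mean(axis=0)   # centrality move: mean of m best
        for i in range(n):
            j = rng.integers(n)                  # random partner for b*(x_j - x_i)
            e1, e2 = rng.random(D), rng.random(D)
            # guided randomization (velocity-style) step
            V[i] = p * V[i] + q * e1 * (g - X[i]) + r * e2 * (Xbest[i] - X[i])
            # position update with centrality, partner and perturbation terms
            xnew = (a * X[i] + (1 - a) * xg + b * (X[j] - X[i]) + c * V[i]
                    + Theta * rng.standard_normal(D))
            xnew = np.clip(xnew, Lb, Ub)         # keep within the simple bounds
            fnew = f(xnew)
            if fnew <= F[i]:                     # Eq. (9): greedy acceptance
                X[i], F[i] = xnew, fnew
                if fnew <= Fbest[i]:
                    Xbest[i], Fbest[i] = xnew, fnew
        g = Xbest[Fbest.argmin()].copy()
    return g, float(Fbest.min())

sphere = lambda x: float(np.sum(x ** 2))         # illustrative test function
g, fmin = gem_minimize(sphere, np.full(5, -10.0), np.full(5, 10.0))
print(fmin)
```

The default parameter values mirror those used in the numerical experiments of this paper (n = 10, a = 1, b = 0.7, c = 1, p = 0.7, q = r = 1, θ = 0.97); with a = 1 the centrality term drops out, illustrating how parameter choices switch individual mechanisms on and off.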

Benchmarks and Parameter Settings
To test the unified GEM algorithm, we have selected 15 benchmark functions and case studies. There are many different test function benchmarks, including multivariate functions (Jamil and Yang, 2013), the CEC2005 test suite (Suganthan et al., 2005), an unconstrained benchmark function repository (Al-Roomi, 2015) and engineering optimization problems (Cagnina et al., 2008; Coello, 2000). The intention is to select a subset of optimization problems with a diverse range of properties such as modality, convexity, nonlinear constraints, separability, and landscape variations. The case studies also include a mixed-integer programming pressure vessel design problem, and a parameter estimation problem based on data for a vibration system governed by an ordinary differential equation (ODE).

Test Functions
The ten test functions are outlined as follows.

Case Studies
The five design case studies or benchmarks are described as follows.
11. Spring design. The three design variables are: the wire diameter w (or x1), the mean coil diameter D (or x2), and the number N (or x3) of active coils.
The objective is to minimize

f(x) = (x3 + 2) x2 x1^2,

subject to

g1(x) = 1 - x2^3 x3 / (71785 x1^4) ≤ 0,
g2(x) = (4 x2^2 - x1 x2) / (12566 (x2 x1^3 - x1^4)) + 1/(5108 x1^2) - 1 ≤ 0,
g3(x) = 1 - 140.45 x1 / (x2^2 x3) ≤ 0,
g4(x) = (x1 + x2)/1.5 - 1 ≤ 0.

The simple lower and upper bounds are 0.05 ≤ x1 ≤ 2.0, 0.25 ≤ x2 ≤ 1.3 and 2.0 ≤ x3 ≤ 15.0. The best solution found so far is (Cagnina et al., 2008; Yang, 2013)

fmin = 0.01266522, x* = [0.05169, 0.35673, 11.2885]. (29)

12. Three-bar truss design. For the three-bar truss system design with two cross-section areas x1 = A1 and x2 = A2, the objective is to minimize

f(x) = 100 (2√2 x1 + x2),

subject to

g1(x) = ((√2 x1 + x2) / (√2 x1^2 + 2 x1 x2)) P - σ ≤ 0,
g2(x) = (x2 / (√2 x1^2 + 2 x1 x2)) P - σ ≤ 0,
g3(x) = (1 / (x1 + √2 x2)) P - σ ≤ 0,

where σ = 2 kN/cm^2 is the stress limit and P = 2 kN is the load. In addition, x1, x2 ∈ [0, 1]. The best solution so far in the literature is (Bekdaş et al., 2018)

fmin = 263.8958, x* = (0.78853, 0.40866). (34)

13. Beam design. For the cantilever beam design to support a vertical load at the free end of the beam, the objective is to minimize

f(x) = 0.0624 (x1 + x2 + x3 + x4 + x5),

subject to

g(x) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 - 1 ≤ 0,

with the simple bounds 0.01 ≤ xi ≤ 100. The best solution found so far in the literature is (Bekdaş et al., 2018)

fmin = 1.33997, x* = (6.0202, 5.3082, 4.5042, 3.4856, 2.1557). (37)

14. Pressure vessel design. The main objective is to minimize

f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3,

subject to

g1(x) = -x1 + 0.0193 x3 ≤ 0,
g2(x) = -x2 + 0.00954 x3 ≤ 0,
g3(x) = -π x3^2 x4 - (4π/3) x3^3 + 1296000 ≤ 0,
g4(x) = x4 - 240 ≤ 0.

The first two design variables must be integer multiples of the basic thickness h = 0.0625 inches (Cagnina et al., 2008). Therefore, the lower and upper bounds are

0.0625 ≤ x1, x2 ≤ 99 × 0.0625, 10 ≤ x3, x4 ≤ 200. (45)

15. Parameter estimation of an ODE. For a vibration problem with a unit step input, we have its mathematical equation as an ordinary differential equation (Yang, 2023)

ÿ + 2ζω ẏ + ω^2 y = u(t), (46)

where ω and ζ are the two parameters to be estimated. Here, the unit step function is given by

u(t) = 1 for t ≥ 0, and u(t) = 0 for t < 0.

The initial conditions are y(0) = y′(0) = 0. For a given system, we have observed its actual response. The relevant measurements are given in Table 1.
In order to estimate the two unknown parameter values ω and ζ, we can define the objective function as the sum of squared residuals

f(x) = Σ_{i=0}^{10} [y(t_i) - y_s(t_i)]^2,

where y(t_i) for i = 0, 1, ..., 10 are the observed values and y_s(t_i) are the values obtained by solving the differential equation (46), given a guessed set of values ζ and ω. Here, we have used x = (ζ, ω).
The true values are ζ = 1/4 and ω = 2.The aim of this benchmark is to solve the differential equation iteratively so as to find the best parameter values that minimize the objective or best-fit errors.
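A self-contained sketch of this benchmark can be written by integrating the ODE with a hand-rolled Runge-Kutta (RK4) scheme and comparing against observations. Since Table 1 is not reproduced here, the "observations" below are generated synthetically at the true values ζ = 1/4 and ω = 2; that substitution is an assumption for illustration only, and the time grid is likewise our own choice:

```python
import numpy as np

def simulate(zeta, omega, t_obs, dt=0.01):
    """RK4 integration of y'' + 2*zeta*omega*y' + omega^2*y = u(t),
    with a unit step input u(t) = 1 for t >= 0 and y(0) = y'(0) = 0.
    Returns y interpolated at the observation times t_obs."""
    def deriv(s):
        y, v = s
        return np.array([v, 1.0 - 2.0 * zeta * omega * v - omega ** 2 * y])
    n = int(round(max(t_obs) / dt))
    ts = np.linspace(0.0, max(t_obs), n + 1)
    ys = np.zeros(n + 1)
    s = np.array([0.0, 0.0])                      # initial conditions
    for k in range(n):
        k1 = deriv(s)
        k2 = deriv(s + 0.5 * dt * k1)
        k3 = deriv(s + 0.5 * dt * k2)
        k4 = deriv(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        ys[k + 1] = s[0]
    return np.interp(t_obs, ts, ys)

t_obs = np.linspace(0.0, 10.0, 11)                # illustrative sampling times
y_obs = simulate(0.25, 2.0, t_obs)                # synthetic stand-in for Table 1

def objective(x):
    """Best-fit error: sum of squared residuals between data and model."""
    zeta, omega = x
    return float(np.sum((y_obs - simulate(zeta, omega, t_obs)) ** 2))

print(objective([0.25, 2.0]))   # zero residual at the true parameters
print(objective([0.50, 1.0]))   # a wrong guess is clearly penalized
```

Each evaluation of this objective requires a full ODE solve, which is what makes this benchmark qualitatively different from the closed-form test functions.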

Parameter Settings
In our simulations, the parameter settings are: population size n = 10 with parameter values a = 1, b = 0.7, c = 1, p = 0.7, q = r = 1 and Θ = θ^t with θ = 0.97. The maximum number of iterations is set to tmax = 1000. For the benchmark functions, D = 5 is used for f1, f2, f3, f4, and f10. For f9, D = 4 is used so as to give fmin as a nice integer. For all other problems, the dimensionality has been given earlier in this section.

Numerical Experiments and Results
After implementing the algorithm in Matlab, we have carried out various numerical experiments. This section summarizes the results.

Results for Function Benchmarks
For the first ten functions, the algorithm has been run 20 times so that the best fmin, mean values and other statistics can be calculated. The results are summarized in Table 2. As we can see, the algorithm can find all the true optimal solutions even with a small population size n = 10. This shows that the unified GEM algorithm is very efficient for solving function optimization problems.

Five Case Studies
To test the proposed algorithm further, we have used it to solve five different case studies, subject to various constraints. The constraints are handled by the standard penalty method with a penalty coefficient λ = 1000 to 10^5. For the pressure vessel problem, λ = 10^5 is used, whereas all other problems use λ = 1000. For example, in the pressure vessel design problem, the four constraints are incorporated as a penalty term P(x) so that the new objective becomes

f(x) + λ P(x),

where P(x) measures the violation of the four constraints. The pressure vessel design problem is a mixed-integer programming problem because the first two variables x1 and x2 can take only integer multiples of the basic thickness h = 0.0625 due to manufacturing constraints. The other two variables x3 and x4 can take continuous real values. For each case study, the algorithm has been run 10 times. For example, the 10 runs of the pressure vessel design are summarized in Table 3. As we can see, Run 2 finds a solution fmin = 5850.3851, much better than the best solution known so far in the literature, 6059.7143. All the constraints are satisfied, which means that this is a valid new solution. Following the exact same procedure, each case study has been simulated 20 times. The results for the five design case studies are summarized in Table 4. As we can see, all the best known optimal solutions have been found by the algorithm with a population size n = 10. The above simulations and results have shown that the unified GEM algorithm performs well for all 15 test benchmarks, and in some cases it can achieve even better results. This indicates that this unified algorithm is effective and can potentially be applied to solve a wide range of optimization problems. This will form part of our further research.
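As an illustration of the penalty handling described above, the sketch below builds a penalized pressure vessel objective using a squared-violation penalty, one common choice for P(x); the exact form of P(x) used in the paper's runs is not specified, so this is an assumption. The near-feasible design vector is a widely cited literature value (with x4 rounded up slightly to keep it strictly feasible), not a result of our own runs:

```python
import math

LAMBDA = 1e5  # penalty coefficient used for the pressure vessel problem

def vessel_cost(x):
    """Pressure vessel manufacturing cost (standard formulation)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def penalty(x):
    """One common penalty choice: P(x) = sum of squared violations max(0, g_i)^2."""
    x1, x2, x3, x4 = x
    g = [-x1 + 0.0193 * x3,
         -x2 + 0.00954 * x3,
         -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0,
         x4 - 240.0]
    return sum(max(0.0, gi) ** 2 for gi in g)

def penalized(x):
    """Unconstrained surrogate objective: f(x) + lambda * P(x)."""
    return vessel_cost(x) + LAMBDA * penalty(x)

x_good = [0.8125, 0.4375, 42.0984, 176.7]   # feasible near-optimal design
x_bad = [0.5000, 0.4375, 42.0984, 176.7]    # shell too thin: violates g1
print(penalty(x_good) == 0.0)               # no violation, so no penalty added
print(penalized(x_bad) > vessel_cost(x_bad))
```

With λ as large as 10^5, even a small violation dominates the cost, which is what steers the population back into the feasible region during the search.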

Conclusion and Discussion
In this paper, we have proposed a unified algorithm, called the generalized evolutionary metaheuristic (GEM), which represents more than 20 different algorithms in the current literature. We have validated the proposed GEM with 15 different test benchmarks, and optimal solutions have been obtained with a small population size and fixed parameters.
From the parameter tuning perspective, it can be expected that, if the parameters are tuned systematically, it may be possible to enhance the algorithm's performance further. In fact, a systematic parameter study and parameter tuning are needed for this new algorithm, which will be carried out in our future work.
In addition, apart from the more than 20 algorithms that are special cases of this unified algorithm under different parameter values, it can be expected that the vast majority of the 540 algorithms can also be represented within this unified algorithm, if certain parameters are allowed to vary in certain ways and some minor differences in some algorithms are ignored. Obviously, some algorithms, such as the genetic algorithm, are mainly described as an algorithmic procedure without explicit mathematical formulas. Such algorithms may not be easily categorized into the generalized framework. However, their procedure and algorithmic flow may in essence be quite similar to parts of the main framework. In addition, a systematic comparison of this general framework with its various component algorithms can be carried out. This can be a useful topic for further research.

Figure 1 :
Figure 1: Analysis of algorithmic features such as components, mechanisms and stability.

Figure 2 :
Figure 2: Different perspectives for quantitative analysis of algorithms.
It is not clear whether other performance measures should be designed and used, nor what theory the design of performance metrics should be based on. All these are still open questions.

5. Search mechanism. In many nature-inspired metaheuristic algorithms, certain forms of randomization and probability distributions are used to generate solutions with exploration abilities. One of the main tasks is to balance exploration and exploitation (or diversification and intensification) using different search moves or search mechanisms. However, how to balance these two components is still an open problem. In addition, exploration is usually achieved by randomness, random walks and perturbations, whereas exploitation usually uses derivative information and memory. It is not clear what search moves can be used to achieve both exploration and exploitation effectively.

6. Scalability. Most studies of metaheuristic algorithms in the literature are concerned with problems with a few parameters or a few dozen parameters. These problems, though very complicated and useful, are small-scale problems. It is not clear whether the algorithms used and the implementations realized can be applied directly to large-scale problems in practice. Simply scaling up by using high-performance computing or cloud computing facilities may not be enough. How to scale up to solve real-world, large-scale problems is still a challenging issue. In fact, more efficient algorithms are always desirable in this context.

7. Rise of swarm intelligence. Various discussions about swarm intelligence have attracted attention in the current literature. It is not clear what swarm intelligence exactly means and what conditions are necessary to achieve such collective intelligence. There is a strong need to understand swarm intelligence theoretically and practically so as to gain insights into the rise of swarm intelligence.

Table 1 :
Measured data for a vibration problem.

Table 2 :
Numerical experiments for ten test functions.

Table 3 :
Pressure vessel design benchmarks.