A new optimization algorithm based on mimicking the voting process for leader selection

Stochastic optimization algorithms are effective approaches to addressing optimization challenges. In this article, a new optimization algorithm called the Election-Based Optimization Algorithm (EBOA) is developed that mimics the voting process used to select a leader. The fundamental inspiration of EBOA is the voting process, the selection of the leader, and the impact of the public awareness level on that selection. The EBOA population moves through the search space under the guidance of the elected leader. EBOA's process is mathematically modeled in two phases: exploration and exploitation. The efficiency of EBOA has been investigated in solving thirty-three objective functions of unimodal, high-dimensional multimodal, fixed-dimensional multimodal, and CEC 2019 types. The implementation results of the EBOA on these objective functions show its high exploration ability in global search, its exploitation ability in local search, and its ability to strike a proper balance between the two, which underlies the effectiveness of the proposed approach in optimization and in providing appropriate solutions. Our analysis shows that EBOA provides an appropriate balance between exploration and exploitation and, therefore, has better and more competitive performance than the ten other algorithms to which it was compared.


INTRODUCTION
Optimization is an integral part of engineering, industry, technology, mathematics, and many other applications in science. Decision variables, constraints, and objective function are the three main parts of any optimization issue, where determining the values of decision variables while respecting the constraints to optimize the objective function is the main challenge of optimization (Ray & Liew, 2003).
Optimization problem-solving approaches fall into two groups: deterministic and stochastic (Kozlov, Samsonov & Samsonova, 2016). Deterministic approaches, which include gradient-based and non-gradient-based techniques, perform successfully on convex and linear optimization problems. However, these approaches fail to meet real-world challenges with features such as non-convex behavior, a nonlinear search space, a high number of variables, a complex objective function, and a high number of constraints, as well as NP-hard problems. Following the inability of deterministic approaches to address these types of issues, researchers have developed a new class of approaches called stochastic optimization techniques. Metaheuristic algorithms are among the most widely used stochastic techniques; they are effective in optimization applications through the use of random operators, random scanning of the search space, and trial and error (Curtis & Robinson, 2019).
Simplicity of concept, ease of implementation, independence from the specific problem, no need for derivative information, and efficiency on complex problems are some of the advantages that have led to the popularity and applicability of metaheuristic algorithms (Gonzalez et al., 2022).
Metaheuristic algorithms have a nearly identical problem-solving process that begins with the random production of a certain number of candidate solutions. Then, in a repetition-based process, the algorithm steps act on these candidate solutions and improve them. At the end of the implementation, the best candidate solution found is introduced as the solution to the problem (Toloueiashtian, Golsorkhtabaramiri & Rad, 2022).
It is important to note here that there is no guarantee that metaheuristic algorithms will be able to provide the best optimal solution known as the global optimal. This is due to the random nature of the search process of these algorithms. For this reason, the solution obtained from metaheuristic algorithms is called quasi-optimal (Yu, Semeraro & Matta, 2018).
The two indicators of exploration with the concept of global search and exploitation with the concept of local search are effective in the performance of metaheuristic algorithms in handling optimization problems and providing better quasi-optimal solutions (Mejahed & Elshrkawey, 2022). What has led researchers to develop numerous optimization methods is to achieve better solutions closer to the global optimal.
The main research question is whether there is a need to develop new metaheuristic algorithms given that countless algorithms have been developed so far. This question is answered with the concept of the No Free Lunch (NFL) theorem (Wolpert & Macready, 1997). The NFL theorem explains that the effective performance of an algorithm in solving a set of optimization problems does not create any presuppositions on the ability of that algorithm to provide similar performance in other optimization applications. In other words, it cannot be claimed that a particular metaheuristic algorithm performs best in the face of all optimization problems compared to all other optimization methods. The NFL theorem is the main incentive for the authors to design new optimization approaches that perform more effectively in solving optimization problems in a variety of applications. The NFL theorem has motivated the authors of this article to develop a new metaheuristic algorithm applicable in optimization challenges that is effective in providing solutions closer to the global optimization.
The novelty of this article lies in introducing and designing a new metaheuristic algorithm named the Election-Based Optimization Algorithm (EBOA), whose fundamental inspiration is the simulation of the voting process and the popular movement. The main contributions of this study are as follows:
• A novel human-based Election-Based Optimization Algorithm (EBOA) is proposed.
• The process of public movement and the electoral voting process are examined and then mathematically modeled in the EBOA design.
• The efficiency of EBOA in optimizing thirty-three objective functions (i.e., unimodal, high-dimensional multimodal, fixed-dimensional multimodal, and CEC 2019) is tested.
• The quality of EBOA results is compared with ten state-of-the-art metaheuristic algorithms.
The rest of the article is structured in such a way that the literature review is presented in 'Literature Review'. Then in 'Election-Based Optimization Algorithm' the proposed EBOA is introduced and modeled. Simulation studies are presented in 'Results'. The discussion is provided in 'Discussion'. Conclusions and several research directions for future studies are presented in 'Conclusions'.

LITERATURE REVIEW
Natural phenomena, the behavior of living things in nature, the biological sciences, genetic sciences, the laws of physics, the rules of the game, human behavior, and any evolutionary process that has an optimization process have been the source of inspiration in the design and development of metaheuristic algorithms. Accordingly, metaheuristic algorithms fall into nine groups: swarm-based, biology-based, physics-based, human-based, sport-based, math-based, chemistry-based, music-based, and the other hybrid approaches (Akyol & Alatas, 2017).
Behaviors of living organisms such as animals, birds, and insects have been the main source of ideas in the development of numerous swarm-based algorithms. The feature most commonly used in swarm-based methods is the ability of living organisms to search for food sources. The most popular methods developed based on modeling the food search process are Particle Swarm Optimization (PSO), based on the search behavior of birds and fish (Kennedy & Eberhart, 1995); Ant Colony Optimization (ACO), based on ants' search for the shortest path to food (Dorigo, Maniezzo & Colorni, 1996); the Artificial Bee Colony (ABC), based on bee colony search behavior (Karaboga & Basturk, 2007); the Butterfly Optimization Algorithm (BOA), based on the search and mating behavior of butterflies (Arora & Singh, 2019); and the Tunicate Search Algorithm (TSA), based on the search behavior of tunicates (Kaur et al., 2020). The process of reproduction among bees and the scout bees' search mechanism for finding suitable new places for hives have been employed in designing the Fitness Dependent Optimizer (FDO) (Abdullah & Ahmed, 2019). The chimpanzee's hunting strategy, using operators such as emotional intelligence and sexual motivation, has been the main source of inspiration in designing the Chimp Optimization Algorithm (ChOA) (Khishe & Mosavi, 2020).
Modeling of hunting strategies of living organisms in the wild has been a source of inspiration in designing various optimization approaches, including Grey Wolf Optimizer (GWO) based on gray wolf strategy (Mirjalili, Mirjalili & Lewis, 2014), Whale Optimization Algorithm (WOA) (Mirjalili & Lewis, 2016) based on humpback whales strategy, and Pelican Optimization Algorithm (POA) based on pelican behavior (Trojovský & Dehghani, 2022).
Applying the concepts of biology, genetics, and natural selection alongside random operators such as selection, crossover, and mutation has led to the development of biology-based algorithms. The process of reproduction, Darwin's evolutionary theory, and natural selection are key concepts in the development of two widely used methods, the Genetic Algorithm (GA) (Goldberg & Holland, 1988) and the Differential Evolution (DE) algorithm (Storn & Price, 1997). The mechanism of the immune system in the face of diseases, viruses, and microbes has been the major inspiration in the development of the Artificial Immune System (AIS) method (Hofmeyr & Forrest, 2000).
Many phenomena, laws, and forces in physics science have been employed as inspiration sources for the development of physics-based metaheuristic algorithms. The phenomenon of melting and cooling of metals, which is known in physics as the refrigeration process, has been the main inspiration in the development of the Simulated Annealing (SA) approach (Van Laarhoven & Aarts, 1987). The phenomenon of the water cycle based on its physical changes in nature has inspired the Water Cycle Algorithm (WCA) (Eskandar et al., 2012). Gravitational force and Newton's laws of motion have been the main concepts employed to introduce the method of Gravitational Search Algorithm (GSA) (Rashedi, Nezamabadi-Pour & Saryazdi, 2009). The application of Hook's law and spring tensile force has been the main inspiration in Spring Search Algorithm (SSA) design (Dehghani et al., 2020a). Various physical theories and concepts have been the source of inspiration in the development of physics-based methods such as Multiverse Optimizer (MVO), inspired from cosmology concepts (Mirjalili, Mirjalili & Hatamlou, 2016), Big Bang-Big Crunch (BB-BC) inspired from Big Bang and Big Crunch theories (Erol & Eksin, 2006), Big Crunch Algorithm (BCA) inspired from Closed Universe theory (Kaveh & Talatahari, 2009), Integrated Radiation Algorithm (IRA) inspired from gravitational radiation concept in Einstein's theory of general relativity (Chuang & Jiang, 2007), and Momentum Search Algorithm (MSA) inspired from momentum concept (Dehghani & Samet, 2020).
Human behavior, thought, interactions, and collaborations have inspired the design of human-based approaches. The most widely used human-based method is the Teaching-Learning-Based Optimization (TLBO) algorithm, which mimics the classroom learning environment and the interactions between students and teachers (Rao, Savsani & Vakharia, 2011). The competition between political parties and the efforts of parties to seize control of parliament are the source of inspiration in designing the Parliamentary Optimization Algorithm (POA) (Borji & Hamidi, 2009). The economic activities of the rich and the poor to gain wealth in society have been the main inspiration for the Poor and Rich Optimization (PRO) approach (Moosavi & Bardsiri, 2019). The influence of the most successful people in a community on its other members has been the main idea of the Following Optimization Algorithm (FOA) (Dehghani, Mardaneh & Malik, 2020). The mechanism of admitting high school graduates to university and the process of improving the educational level of students have been the main ideas in designing the Learner Performance-based Behavior (LPB) algorithm (Rahman & Rashid, 2021). The cooperation of the members of a team to improve the team's performance in carrying out its tasks and achieving its goal has been the main inspiration of the Teamwork Optimization Algorithm (TOA) (Dehghani & Trojovský, 2021). The efforts of human society to achieve felicity by changing and improving the thinking of individuals have been employed in the design of the Human Felicity Algorithm (HFA) (Veysari, 2022). The strategic movement of army troops during war, using attack, defense, and troop relocation operations, has been a central idea in the design of War Strategy Optimization (WSO) (Ayyarao et al., 2022).
The rules governing various games, both individual and group, along with the activities of players, referees, coaches, and influential individuals, have been the main source of inspiration in the development of sport-based methods. The efforts of the players in the tug-of-war competition have been the main idea in designing the Tug of War Optimization (TWO) technique (Kaveh & Zolghadr, 2016). The use of volleyball club interactions and the coaching process has been instrumental in designing the Volleyball Premier League (VPL) approach (Kaveh & Zolghadr, 2016). The players' effort to find a hidden object was the main idea used in Hide Object Game Optimization (HOGO) (Dehghani et al., 2020b). The strategy that players use to arrange the puzzle pieces and complete the puzzle has been the source of inspiration in designing the Puzzle Optimization Algorithm (POA) (Zeidabadi & Dehghani, 2022).
The literature review shows that numerous metaheuristic algorithms have been developed so far. However, to the best of the authors' knowledge, the voting process used to determine the leader of a community has not yet been used in the design of any algorithm. This research gap motivated the authors of this article to develop a new human-based metaheuristic algorithm based on mathematical modeling of the electoral process and public movement.

ELECTION-BASED OPTIMIZATION ALGORITHM
This section introduces the proposed Election-Based Optimization Algorithm (EBOA) and then presents its mathematical model.

Inspiration
An election is a process by which individuals in a community select a person from among the candidates. The person elected as the leader influences the situation of all members of that society, even those who did not vote for him. The more aware the community members are, the better they will be able to choose and vote for the better candidate. These expressed concepts of the election and voting process are employed in the design of the EBOA.

Algorithm initialization
EBOA is a population-based metaheuristic algorithm whose members are community individuals. In the EBOA, each member of the population represents a proposed solution to the problem. From a mathematical point of view, the EBOA population is represented by a matrix called the population matrix using Eq. (1).
where X refers to the EBOA population matrix, X i refers to the ith EBOA member (i.e., the proposed solution), x i,j refers to the value of the jth problem variable specified by the ith EBOA member, N refers to the EBOA population size, and m refers to the number of decision variables.
The initial position of individuals in the search space is determined randomly according to Eq. (2).
x i,j = lb j + r · (ub j − lb j ), i = 1,2,...,N , j = 1,2,...,m, where lb j and ub j refer to the lower bound and upper bound of the jth variable, respectively, and r is a random number in the interval [0,1]. Based on the values proposed by each EBOA member for the problem variables, a value can be evaluated for the objective function. These evaluated values of the objective function of the problem are specified using a vector according to Eq. (3).
where OF refers to the vector of obtained objective function values of EBOA population and OF i refers to the obtained objective function value for the ith EBOA member. The values of the objective function are the criterion for measuring the quality of the proposed solutions in such a way that the best value of the objective function specifies the best member while the worst value of the objective function specifies the worst member.
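As an illustrative sketch, the initialization of Eq. (2) and the objective vector of Eq. (3) can be written as follows in Python; the function names and the sphere objective are stand-ins for demonstration, not part of the original EBOA specification.

```python
import numpy as np

def initialize_population(N, m, lb, ub, rng):
    """Random initial positions per Eq. (2): x_ij = lb_j + r * (ub_j - lb_j), r ~ U(0, 1)."""
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    r = rng.random((N, m))
    return lb + r * (ub - lb)

def sphere(x):
    """Illustrative unimodal objective (F1-style): sum of squares."""
    return float(np.sum(x ** 2))

rng = np.random.default_rng(42)
X = initialize_population(N=5, m=3, lb=[-10] * 3, ub=[10] * 3, rng=rng)
OF = np.array([sphere(x) for x in X])  # objective function vector, Eq. (3)
```

Any objective function with the same signature can be substituted for `sphere`; the population matrix X has one row per member and one column per decision variable.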

Mathematical model of EBOA
The main difference between metaheuristic algorithms is how members of the population are updated and the process that improves the proposed solutions in each iteration. The process of updating the algorithm population in EBOA has two phases, exploration and exploitation, which are discussed below.

Phase 1: Voting process and holding elections (exploration). EBOA members, based on their awareness, participate in the election and vote for one of the candidates. People's awareness can be considered dependent on the quality of their objective function value. Accordingly, the awareness of individuals in the community is simulated using Eq. (4). In this awareness simulation process, individuals with better values of the objective function are more aware.
where A i is the awareness of the ith EBOA member, and OF best and OF worst are the best and worst values of the objective function, respectively. It should be noted that in minimization problems, OF best is the minimum value of the objective function and OF worst is the maximum, while in maximization problems, OF best is the maximum value of the objective function and OF worst is the minimum. Among the members of the society, the 10% most aware individuals are considered as election candidates. In the EBOA, it is assumed that the number of candidates (N C ) is at least 2 (i.e., N C ≥ 2), meaning that at least two candidates will register for the election.
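A minimal sketch of the awareness computation and candidate selection might look as follows; since the exact form of Eq. (4) is not reproduced in the text above, a min-max normalization consistent with the description (a better objective value implies higher awareness) is assumed, and the function names are illustrative.

```python
import numpy as np

def awareness(OF):
    """Assumed form of Eq. (4): min-max normalization so that, for
    minimization, the member with the lowest objective value has awareness 1
    and the worst has awareness 0. The exact published formula may differ."""
    OF = np.asarray(OF, dtype=float)
    best, worst = OF.min(), OF.max()
    if best == worst:
        return np.ones_like(OF)
    return (worst - OF) / (worst - best)

def select_candidates(OF, fraction=0.1):
    """Indices of the most aware ~10% of members, with at least 2 candidates."""
    OF = np.asarray(OF, dtype=float)
    n_c = max(2, int(round(fraction * len(OF))))  # enforce N_C >= 2
    return np.argsort(OF)[:n_c]  # minimization: lowest OF = most aware

OF = [4.0, 1.0, 9.0, 2.5, 7.0]
A = awareness(OF)
cands = select_candidates(OF)
```

With these example values, member 1 (objective 1.0) gets awareness 1 and member 2 (objective 9.0) gets awareness 0, and members 1 and 3 become the candidates.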
The implementation of the voting process in EBOA is such that the level of awareness of each person is compared to a random number, if the level of awareness of a person is higher than that random number, the person is able to vote for the best candidate (known as C 1 ). Otherwise, that person randomly votes for one of the other candidates. This voting process is mathematically modeled in Eq. (5).
where V i refers to the vote of the ith person in the community, C 1 refers to the best candidate, and C k refers to the kth candidate, where k is a randomly selected number from the set {2,3,...,N C }.
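The voting rule of Eq. (5) could be sketched as below, under the assumption that the candidates are stored as population indices sorted so that the first entry is the best candidate C 1; this is a hedged reading of the description, not the exact published formulation.

```python
import numpy as np

def vote(A, candidates, rng):
    """Assumed sketch of Eq. (5): member i votes for the best candidate C1 if
    its awareness exceeds a random number in [0, 1); otherwise it votes for a
    uniformly random other candidate C_k, k in {2, ..., N_C}."""
    votes = np.empty(len(A), dtype=int)
    for i, a in enumerate(A):
        if a > rng.random():
            votes[i] = candidates[0]               # C1: the best candidate
        else:
            votes[i] = rng.choice(candidates[1:])  # a random other candidate
    return votes

def elect_leader(votes):
    """Leader = candidate index receiving the most votes."""
    vals, counts = np.unique(votes, return_counts=True)
    return int(vals[np.argmax(counts)])

rng = np.random.default_rng(0)
votes = vote(A=np.ones(6), candidates=[3, 8, 5], rng=rng)  # fully aware voters
leader = elect_leader(votes)  # all vote for the best candidate, index 3
```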
At the end of the voting process, based on the counting of votes, the candidate who has received the highest number of votes is selected as elected (leader). This elected leader affects the situation of all members of the society and even those who did not vote for him. The position of individuals in the EBOA is updated under the influence and guidance of the elected leader. This leader directs the algorithm population to different areas in the search space and increases the EBOA's exploration ability in the global search. The process of updating the EBOA population is led by the leader in such a way that firstly a new position is generated for each member. The newly generated position is acceptable for updating if it improves the value of the objective function. Otherwise, the corresponding member remains in the previous position. This update process in the EBOA is modeled using Eqs. (6) and (7).
where X new.P1 i refers to the newly generated position for the ith EBOA member, OF new.P1 i is its objective function value, I is an integer selected randomly from the values 1 or 2, L refers to the elected leader, L j is its jth dimension, and OF L is its objective function value.
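Since Eqs. (6) and (7) are not reproduced in the text above, the following sketch assumes a common leader-guided update of the form x_new = x + r · (L − I · x) with greedy acceptance, matching the description (members move toward the leader and keep only improving moves); the exact published formula may differ.

```python
import numpy as np

def phase1_update(X, OF, leader, objective, rng):
    """Exploration phase: move each member toward the elected leader and keep
    the move only if it improves the objective (greedy acceptance, Eq. (7))."""
    X, OF = X.copy(), np.asarray(OF, dtype=float).copy()
    for i in range(len(X)):
        r = rng.random(X.shape[1])
        I = rng.integers(1, 3)                   # randomly 1 or 2
        x_new = X[i] + r * (leader - I * X[i])   # assumed form of Eq. (6)
        of_new = objective(x_new)
        if of_new < OF[i]:                       # accept only improvements
            X[i], OF[i] = x_new, of_new
    return X, OF
```

Because of the greedy acceptance rule, the objective value of every member is non-increasing across this phase.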
Phase 2: Public movement to raise awareness (exploitation). The awareness of the people of the society has a great impact on their correct decisions in the election and voting process. In addition to the leader's influence on people's awareness, every person's thoughts and activities can increase that person's awareness. From a mathematical point of view, a better solution may be identified based on a local search adjacent to any proposed solution. Thus, the activities of community members to increase their awareness, lead to an increase in the EBOA's exploitation ability in the local search and find better solutions to the problem. To simulate this local search process, a random position is considered in the neighborhood of each member in the search space. The objective function of the problem is then evaluated based on this new situation to determine if this new situation is better than the existing situation of that member. If the new position has a better value for the objective function, the local search is successful and the position of the corresponding member is updated. Improving the value of the objective function will increase that person's awareness for better decision-making in the next election (in the next iteration). This update process to increase people's awareness in the EBOA is modeled using Eqs. (8) and (9).
where X new.P2 i refers to the newly generated position for the ith EBOA member, OF new.P2 i is its objective function value, R is a constant equal to 0.02, t refers to the iteration counter, and T refers to the maximum number of iterations.
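Eqs. (8) and (9) are likewise not reproduced in the text above; the sketch below assumes a neighborhood whose radius shrinks with R(1 − t/T) around each member, combined with the same greedy acceptance. Treat the exact neighborhood formula as an assumption.

```python
import numpy as np

def phase2_update(X, OF, t, T, objective, rng, R=0.02):
    """Exploitation phase: sample a random neighbor of each member inside a
    radius that shrinks over iterations, and keep it only if it improves the
    objective (local search / awareness raising)."""
    X, OF = X.copy(), np.asarray(OF, dtype=float).copy()
    radius = R * (1 - t / T)                     # shrinks as iterations advance
    for i in range(len(X)):
        step = (2 * rng.random(X.shape[1]) - 1) * radius * X[i]
        x_new = X[i] + step                      # assumed form of Eq. (8)
        of_new = objective(x_new)
        if of_new < OF[i]:                       # Eq. (9): greedy acceptance
            X[i], OF[i] = x_new, of_new
    return X, OF
```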

Repetition process, pseudocode, and flowchart of EBOA
An EBOA iteration is completed after updating the status of all members of the population. The EBOA enters the next iteration with the newly updated values, and the population update process is repeated based on the first and second phases according to Eqs. (4) to (9) until the last iteration. Upon completion of the full implementation of the algorithm, EBOA introduces the best proposed solution found during the algorithm iterations as the solution to the problem. The EBOA steps are summarized as follows: Start.
Step 1: Specify the given optimization problem information: objective function, constraints, and a number of decision variables.
Step 2: Adjust the number of iterations of the algorithm (T ) and the population size (N ).
Step 3: Initialize the EBOA population at random and evaluate the objective function.
Step 4: Update the best and worst members of the EBOA population.
Step 5: Calculate the awareness vector of the community.
Step 6: Determine the candidates from the EBOA population.
Step 7: Hold the voting process.
Step 8: Determine the elected leader based on the vote count.
Step 9: Update the position of EBOA members based on elected leader guidance in the search space.
Step 10: Update the position of EBOA members based on the concept of local search and public movement to raise awareness.
Step 11: Save the best EBOA member as the best candidate solution so far.
Step 12: If the iterations of the algorithm are over, go to the next step, otherwise go back to Step 4.
Step 13: Print the best-obtained candidate solution in the output. End.
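Putting the steps together, a hedged end-to-end sketch of EBOA for minimization could look as follows. The awareness, phase-1, and phase-2 formulas are assumed forms reconstructed from the descriptions above (the exact published Eqs. (4)-(9) may differ), bounds clipping is omitted for brevity, and the sphere objective is only a demonstration.

```python
import numpy as np

def eboa(objective, m, lb, ub, N=20, T=50, seed=0):
    """Hedged end-to-end sketch of the EBOA steps listed above (minimization)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.full(m, lb, float), np.full(m, ub, float)
    X = lb + rng.random((N, m)) * (ub - lb)             # Step 3: initialize
    OF = np.array([objective(x) for x in X])
    for t in range(1, T + 1):
        # Steps 4-6: awareness and candidates (assumed min-max form)
        best, worst = OF.min(), OF.max()
        A = np.ones(N) if best == worst else (worst - OF) / (worst - best)
        n_c = max(2, round(0.1 * N))
        cands = np.argsort(OF)[:n_c]
        # Steps 7-8: voting and leader election
        votes = [cands[0] if A[i] > rng.random() else rng.choice(cands[1:])
                 for i in range(N)]
        leader = max(set(votes), key=votes.count)
        L = X[leader]
        # Step 9: leader-guided update (phase 1, greedy acceptance)
        for i in range(N):
            x_new = X[i] + rng.random(m) * (L - rng.integers(1, 3) * X[i])
            of_new = objective(x_new)
            if of_new < OF[i]:
                X[i], OF[i] = x_new, of_new
        # Step 10: local search with shrinking radius (phase 2, assumed form)
        radius = 0.02 * (1 - t / T)
        for i in range(N):
            x_new = X[i] + (2 * rng.random(m) - 1) * radius * X[i]
            of_new = objective(x_new)
            if of_new < OF[i]:
                X[i], OF[i] = x_new, of_new
    best_i = int(np.argmin(OF))            # Steps 11-13: report best solution
    return X[best_i], float(OF[best_i])

x_best, f_best = eboa(lambda x: float(np.sum(x ** 2)), m=5, lb=-10, ub=10)
```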
The flowchart of all steps of implementation of the EBOA is specified in Fig. 1 and its pseudocode is presented in Algorithm 1.

Computational complexity of EBOA
This subsection is devoted to examining the computational complexity of the EBOA. The computational complexity of EBOA initialization, including random population generation and the initial evaluation of the objective function, is equal to O(Nm), where N is the size of the EBOA population and m is the number of problem variables. Holding the election and updating the EBOA population in the first phase has a computational complexity of O(NmT), where T is the number of iterations. The population update in the second phase of EBOA, which increases people's awareness, is likewise O(NmT). Accordingly, the total computational complexity of EBOA is equal to O(Nm(1 + 2T)).

RESULTS
This section is dedicated to analyzing EBOA performance in optimization and its ability to provide solutions to problems. Thirty-three objective functions of different types have been selected to evaluate different aspects of the proposed approach. Information and details of these benchmark functions are specified in Tables 1, 2, 3 and 4. The reasons for selecting these objective functions are as follows: functions F1 to F7 are of the unimodal type. These functions have only one extremum in their search space and are therefore suitable for evaluating the EBOA's exploitation ability in local search and its convergence to this optimal position. Thus, the reason for choosing unimodal functions is to evaluate the exploitation potential of EBOA. The high-dimensional multimodal functions F8 to F13 include numerous local extrema in their search space in addition to the main extremum. These local optima may cause the algorithm to fail. This feature makes the functions F8 to F13 suitable for analyzing the EBOA's exploration ability in global search and for determining whether the proposed approach is able to bypass local optima and identify the true optimal location. Thus, the reason for choosing high-dimensional multimodal functions is to evaluate the EBOA's exploration capability. Fixed-dimensional multimodal functions F14 to F23 have fewer local optima in their search space. These functions are good criteria for simultaneously measuring exploration and exploitation in optimization methods. Thus, the reason for choosing fixed-dimensional multimodal functions is to evaluate both capabilities simultaneously.
The quality of the EBOA optimization results is compared with ten state-of-the-art metaheuristic algorithms, including (i) the most widely used and oldest methods: GA and PSO; (ii) the most cited methods from 2009 to 2014: GSA, TLBO, and GWO; and (iii) recently published and widely used methods from 2016 to 2021: WOA, MPA, LPB, FDO, and TSA. As noted in the literature, numerous optimization methods have been developed to date. Comparing the proposed EBOA approach with all of these methods, while possible, would generate a huge amount of data. Among the metaheuristic algorithms developed, some methods have attracted more attention due to their high efficiency. For this reason, in this study, the ten metaheuristic algorithms mentioned above, which have been most considered and used, have been selected for comparison with EBOA. The values set for the control parameters of these metaheuristics are listed in Table 5. The EBOA and the ten competitor metaheuristics are each employed in twenty independent runs to solve the objective functions F1 to F23, where each run contains 1,000 iterations. The termination condition can be based on various criteria such as the number of iterations, the number of function evaluations, the error between several consecutive iterations, and other cases. In this study, the termination condition is based on the number of iterations. The experiments are performed in Matlab R2020a on Microsoft Windows 10 (64-bit) with a Core i7 processor at 2.40 GHz and 6 GB of memory. Simulation results and the performance of the metaheuristic algorithms are reported using five indicators: mean, best proposed solution, standard deviation (std), median, and rank.

Evaluation of unimodal objective functions
The results of applying EBOA and the ten competitor metaheuristic algorithms to optimize the unimodal functions F1 to F7 are reported in Table 6. The optimization outputs show that EBOA has reached the global optimum in solving the F1, F3, and F6 functions. EBOA is also the best optimizer in solving F2, F4, F5, and F7. The simulation results show that in handling the F1 to F7 functions, EBOA performed better than the ten competitor metaheuristic algorithms and ranked first.

Evaluation of high-dimensional multimodal objective functions
The optimization results of the F8 to F13 functions obtained from the implementation of EBOA and the ten competing metaheuristic algorithms are presented in Table 7. EBOA is able to converge to the global optimum in handling the F9 and F11 functions. The simulation results also show that EBOA is the best optimizer for the F10, F12, and F13 functions. In optimizing the F8 function, after GA and TLBO, the proposed EBOA is the third best optimizer. What can be deduced from the results of Table 7 is that EBOA has a higher capability in optimizing high-dimensional multimodal functions compared to the ten competitor algorithms and is ranked first as the best optimizer on the functions F8 to F13.

Evaluation of fixed-dimensional multimodal objective functions
The results obtained from the implementation of EBOA and the ten competitor metaheuristic algorithms on the F14 to F23 functions are presented in Table 8. What emerges from the simulation output is that EBOA is the best optimizer in handling the F14 to F23 functions. Analysis and comparison of the obtained results indicate that the proposed EBOA approach has superior performance over the ten metaheuristic algorithms, ranking first among them. The performance of the EBOA and the ten competitor metaheuristic algorithms on the F1 to F23 objective functions is shown as boxplots in Fig. 2. For visual analysis of the ability to reach the searched solution, Figs. 3 to 11 show the convergence curves of the EBOA and the ten competing algorithms in optimizing a number of objective functions.

Statistical analysis
Capability analysis of metaheuristic algorithms in terms of the mean, best, std, median, and rank indices provides valuable information for comparing their performance. However, a small probability remains that the superiority of one method over another is due to chance. In this study, the Wilcoxon rank sum test (Wilcoxon, 1992) and the t-test (Kim, 2015) are used to determine whether the superiority of the EBOA over each of the competing metaheuristic algorithms is statistically significant. The results of applying the Wilcoxon rank sum test and the t-test to the performance of EBOA and the competitor metaheuristic algorithms are presented in Tables 9 and 10, respectively. In cases where the p-value is less than 0.05, it can be concluded that there is a significant difference between the two compared groups. What is clear from the results of the Wilcoxon rank sum test and the t-test is that EBOA has a statistically significant superiority over all ten competing algorithms in all objective function groups.
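As an illustration of how such a significance check can be run, the snippet below applies SciPy's Wilcoxon rank sum test to two made-up samples of best objective values from twenty independent runs; the numbers are invented for demonstration and are not the paper's data.

```python
from scipy.stats import ranksums

# Illustrative best-objective-value samples from 20 independent runs of two
# algorithms on the same function (made-up numbers, not the paper's data).
eboa_runs = [0.0012, 0.0009, 0.0011, 0.0010, 0.0013, 0.0008, 0.0012, 0.0010,
             0.0011, 0.0009, 0.0012, 0.0010, 0.0011, 0.0013, 0.0009, 0.0010,
             0.0012, 0.0011, 0.0010, 0.0009]
rival_runs = [0.0210, 0.0180, 0.0250, 0.0190, 0.0230, 0.0205, 0.0220, 0.0195,
              0.0240, 0.0215, 0.0185, 0.0225, 0.0200, 0.0235, 0.0190, 0.0210,
              0.0245, 0.0205, 0.0220, 0.0198]
stat, p_value = ranksums(eboa_runs, rival_runs)
significant = p_value < 0.05  # reject the null hypothesis of equal medians
```

Because the two samples here do not overlap at all, the p-value is far below 0.05 and the difference is declared significant.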

Sensitivity analysis
The proposed EBOA approach is a population-based metaheuristic algorithm that addresses optimization problems in an iteration-based process. Thus, the two parameters of EBOA, the population size (N) and the maximum number of iterations of the algorithm (T), affect EBOA performance. This subsection is dedicated to the sensitivity analysis of EBOA to changes in the N and T parameters. EBOA sensitivity to the parameter N has been studied by applying the algorithm to the functions F1 to F23 for values of N equal to 20, 30, 50, and 80. The results of this sensitivity analysis are presented in Table 11. The effect of changes in N on the EBOA convergence curves is shown in Fig. 12. The simulation results reveal that increasing the EBOA population size increases the search power of the algorithm: as the value of N increases, the proposed EBOA achieves better solutions and, as a result, the values of all objective functions decrease.
EBOA sensitivity analysis to the parameter T has been tested by implementing it on the handling of F1 to F23 functions for the parameter T equal to 100, 500, 800, and 1,000. The outputs of the EBOA sensitivity analysis for the T parameter value are shown in Table 12. In addition, the EBOA convergence curves, which show how to achieve the optimal solution under changes in the T parameter, are shown in Fig. 13. What can be understood from the simulation results is that increasing the number of iterations gives the EBOA more opportunity to be able to identify the main optimal area more accurately and to converge more towards the global optimal, which this reduced the values of the objective functions.

Evaluation of CEC 2019 suite objective functions
The implementation of EBOA on the functions F1 to F23 indicated the high ability of EBOA in optimization applications. In this subsection, the performance of the EBOA is evaluated in addressing the CEC 2019 objective functions, which consist of ten functions of CEC01 to CEC10. The optimization results of CEC 2019 functions using EBOA and competitor algorithms are presented in Table 13. EBOA is the first best optimizer in addressing the functions cec02, cec03, cec07, cec08, cec09, and cec10. The results of the Wilcoxon rank-sum test and t -test are reported in Table 14. In cases where the p-value in this table is less than 0.05, the proposed EBOA approach has a statistically significant superiority over the corresponding algorithm. Analysis of the simulation results shows that the proposed EBOA approach has a superior performance over competitor algorithms in handling most cases of CEC 2019 test functions.

DISCUSSION
Exploitation and exploration are very influential on the performance of metaheuristic algorithms in finding optimal solutions to problems. Exploitation is the notion of local search capability around existing solutions that enables the algorithm to converge to better solutions that may be located in situations close to existing solutions. The impact of exploitation on the ability of metaheuristic algorithms is especially evident in dealing with problems that have only one main peak. The results of optimizing the functions F1 to F7 (with only the main peak) show that the EBOA has the high exploitation ability in local search and convergence to the global optimal solution. The high exploitation of the EBOA is especially evident in the handling of the functions F1, F3, and F6, which has converged to the global optimal. Exploration is the concept of global search capability in all areas of the problem-solving space that enables the algorithm to identify the main optimal area containing the global optimal in the presence of local optimal areas. The effect of exploration on the ability of metaheuristic algorithms is especially evident in handling problems that have several non-optimal peaks in addition to the main peak. The results of optimizing the F8 to F13 functions (with several non-optimal peaks) show that the EBOA has acceptable exploration power in the global search and identification of the main optimal area. The high exploration capability of EBOA, especially in handling F9 and F11, has led to the accurate identification of the main optimal area and the success of the algorithm in achieving the global optimum.
In addition to having high capabilities in exploration and exploitation, the conditions that predispose metaheuristic algorithms to success in achieving solutions are the proper balance between these two indicators. Objective functions F14 to F23 have fewer nonoptimal peaks than functions F8 to F13, and are good criteria for analyzing the ability of optimization algorithms to have the proper balance between exploration and exploitation. The results of optimization of F14 to F23 functions indicate that EBOA has a high potential for balancing exploration and exploitation to identify the main optimal region and converge towards the global optimal. An overall analysis of the results of optimizing      the F1 to F23 objective functions frees the inference that the proposed EBOA approach has a high potential for exploration and exploitation as well as a balance between the two capabilities.

CONCLUSIONS
Metaheuristic algorithms are one of the most widely used and effective stochastic methods for solving optimization problems. In this study, a new human-based algorithm called the Election Based Optimization Algorithm (EBOA) was proposed. The fundamental inspiration of the EBOA is the voting and election process in which people vote for their preferred candidate to elect the leader of the population. The EBOA steps in two phases of (i) exploration, including election holding and (ii) exploitation, including raising public awareness for better decision-making are mathematically modeled. The efficiency of EBOA in providing solutions to optimization problems was tested on thirty-three standard benchmark functions of a variety of unimodal, high-dimensional multimodal, fixed-dimensional multimodal, CEC 2019 types. The optimization results of unimodal functions indicated the high exploitation ability of EBOA in local search. The optimization results of high-dimensional multimodal functions showed the EBOA exploration capability in the global search of problem-solving space. In addition, the results obtained from the optimization of fixed-dimensional multimodal functions concluded that EBOA, by creating the proper balance between exploration and exploitation, has an effective efficiency in providing solutions to this type of problems. The implementation of EBOA on the complex CEC2019 suite test functions indicated the effectiveness of the proposed approach in dealing with complex optimization problems. The quality of the results delivered by the EBOA is compared against the performance of ten state-of-the-art metaheuristic algorithms. Comparing the simulation results, it can be found that EBOA has provided better optimization results and is much more competitive than the ten metaheuristic algorithms. The findings of simulation, statistical analysis, and sensitivity analysis indicate the high capability and efficiency of the EBOA in dealing with optimization issues. 
The proposed EBOA approach enables several future directions, the most specific of which are the development of the EBOA binary version for discrete space applications, and the design of the EBOA multi-objective version to handle multi-objective optimization problems. The EBOA is applied to solve optimization problems in various sciences as well as real-world applications are other suggestions for future directions.
The proposed EBOA approach is a stochastic-based solving method. So, the main limitation of EBOA, similar to all stochastic-based approaches, is there is no guarantee that EBOA will achieve the optimal global solution. In addition, EBOA may fail to address some optimization applications because, according to the NFL theorem, there is no presumption that a metaheuristic algorithm is successful or not. Another limitation of EBOA is that it is always possible to develop newer algorithms that perform better than existing algorithms and EBOA. However, the optimization results show that the EBOA has provided solutions that are very close to the global optimal and, in some cases, precisely the global optimal. This EBOA capability is particularly evident in optimizing the F1, F3, F6, F9, F11, and F18 because it has made available the optimal global solution.