A new metaphor-less simple algorithm based on Rao algorithms: a Fully Informed Search Algorithm (FISA)

Many important engineering optimization problems require a strong and simple optimization algorithm to achieve the best solutions. In 2020, Rao introduced three non-parametric algorithms, known as Rao algorithms, which have garnered significant attention from researchers worldwide due to their simplicity and effectiveness in solving optimization problems. In our simulation studies, we have developed a new version of the Rao algorithms called the Fully Informed Search Algorithm (FISA), which demonstrates acceptable performance in optimizing real-world problems while maintaining the simplicity and non-parametric nature of the original algorithms. We evaluate the effectiveness of the suggested FISA approach by applying it to optimize shifted benchmark functions, such as those provided in CEC 2005 and CEC 2014, and by using it to design mechanical system components. We compare the results of FISA to those obtained using the original Rao algorithms. The outcomes indicate the efficacy of the proposed new algorithm, FISA, in achieving optimized solutions for the aforementioned problems. The MATLAB code of FISA is publicly available at https://github.com/ebrahimakbary/FISA.


INTRODUCTION
The objective of maximizing profits or minimizing losses is a crucial concern in several fields, including engineering. In brief, an optimization problem refers to a situation where the aim is to maximize or minimize a function. With the development of technology, optimization problems have become increasingly complex and abundant across a wide range of scientific fields. The complexity and interdependence of modern engineering systems and problems necessitate the selection of the best optimization method to solve them. Metaheuristic algorithms are among the strongest, simplest, and most commonly used optimization methods of recent years (Gogna & Tayal, 2013; Zervoudakis & Tsafarakis, 2020).
In general, a mathematical model in the optimization process has three main parts: the objective function, the design variables, and the problem and system constraints. Design or decision variables are the independent variables that must be optimally determined and are denoted by the D-dimensional vector X. Depending on the problem's nature, they can be a combination of several types of discrete and continuous decision variables. The objective function, also called the cost function, is a function of the decision variables that should be minimized or maximized. The goal of solving an optimization problem is to obtain an acceptable solution that minimizes/maximizes the objective function while satisfying the problem constraints. The constraints are the physical and design restrictions of the problem that must be satisfied during the optimization process so that a practical optimal solution can be obtained (Gogna & Tayal, 2013).
Optimization problems can be broadly categorized into two types: unconstrained and constrained optimization problems. In the latter case, the design space is limited by one or more constraints, which can take the form of equality or inequality equations. These constraints determine the acceptable region of the design space in which the optimal solution must be found.
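As a concrete illustration of the three parts described above, the following hypothetical two-variable problem defines an objective function, design variables, and inequality constraints, together with a simple static-penalty evaluation. The functions and the penalty weight `rho` are our illustrative choices, not taken from the article:

```python
import numpy as np

def objective(x):
    # Objective (cost) function of the design variables.
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

def constraints(x):
    # Inequality constraints in the standard form g(x) <= 0.
    return np.array([
        x[0] + x[1] - 4.0,   # a linear constraint
        x[0] ** 2 - x[1],    # a nonlinear constraint
    ])

def penalized_fitness(x, rho=1e6):
    # Static penalty: infeasible points are penalized in proportion to
    # their violation, so any feasible point beats any infeasible one.
    violation = np.maximum(constraints(x), 0.0)
    return objective(x) + rho * np.sum(violation ** 2)
```

With this penalty scheme, an unconstrained optimizer (such as the algorithms discussed below) can be applied directly to `penalized_fitness`.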

Optimization algorithms
Optimization algorithms can be broadly classified into two types: exact methods and approximate methods. Exact methods are capable of guaranteeing an optimal solution, but they may require significant computational resources and time. In contrast, approximate methods focus on finding good solutions in a reasonable amount of time. Heuristic algorithms are a popular type of approximate method designed to quickly generate high-quality solutions to a wide range of problems. The effectiveness of heuristic algorithms depends on the nature of the problem being solved (Gogna & Tayal, 2013).

Metaheuristic methods
Metaheuristics refer to methods that guide the search process and are often inspired by nature. Unlike heuristic algorithms, these methods are problem-independent and can therefore be applied to a diverse range of problems. They represent one of the most important and promising research directions in the optimization domain.
The general principles of metaheuristic methods are as follows:
• Employing a given number of repetitive efforts
• Employing one or more agents (particles, neurons, ants, chromosomes, etc.)
• Operating, in the multi-agent case, with a cooperation-competition mechanism
• Creating methods of self-change and self-transformation

Nature has two great tactics:
1. Rewarding strong personal characteristics and punishing weaker ones.
2. Introducing random mutations, which can lead to the birth of new individuals.

Recently, many optimization techniques have been proposed that operate on the basis of natural and social behaviors. In the initialization stage, these algorithms randomly generate solutions, and in the later stages, they rely on natural processes to produce better answers. Some of the popular and widely used metaheuristics are the particle swarm optimization algorithm (PSO) (Kennedy & Eberhart, 1995), differential evolution (DE) (Storn & Price, 1995), the genetic algorithm (GA) (Whitley, 1994), the firefly algorithm (Yang, 2009), ant colony optimization (Dorigo & Di Caro), the bat algorithm (Yang, 2010), teaching-learning-based optimization (TLBO) (Rao, Savsani & Vakharia, 2011), the grey wolf optimizer (GWO) (Mirjalili, Mirjalili & Lewis, 2014), the artificial bee colony algorithm (ABC) (Karaboga & Basturk, 2007), the imperialist competitive algorithm (ICA) (Atashpaz-Gargari & Lucas, 2007), the moth-flame optimization algorithm (MFO) (Mirjalili, 2015), the gravitational search algorithm (GSA) (Rashedi, Nezamabadi-pour & Saryazdi, 2009), the shuffled frog-leaping algorithm (SFLA) (Eusuff, Lansey & Pasha, 2006), the whale optimization algorithm (WOA) (Mirjalili & Lewis, 2016), etc. The question, then, is why all these new optimization algorithms, whether modified or combined, are needed. The main reason is that it is impossible to determine with certainty which optimization or metaheuristic algorithm is appropriate for a given problem; only by comparing the outcomes can it be asserted which algorithm provides a superior approach. In addition, based on Mirjalili, Mirjalili & Lewis (2014), an algorithm may perform well for some groups of functions but not for others. Therefore, the motivation to modify existing algorithms or to introduce new ones has been very high in recent years (Mirjalili & Lewis, 2016).
In 2020, Rao (2020) suggested three effective, powerful, and straightforward metaphor-free algorithms for optimization problems. These methods use the best and worst solutions in each iteration and the random interactions between candidate solutions. Additionally, they need no control parameters other than the population size and the number of iterations.
A significant proportion of optimization problems encountered in practical applications contain shifted functions, for which the performance of Rao algorithms may be suboptimal, as demonstrated in the simulation section. This article introduces a new algorithm, called the Fully Informed Search Algorithm (FISA), which is based on Rao algorithms and addresses this drawback. FISA not only outperforms the original Rao algorithms in optimizing shifted functions but also preserves their simplicity and requires no control parameters. The suggested algorithm's performance is evaluated by optimizing benchmark problems with shifted functions as well as real-world problems. The results demonstrate that FISA outperforms not only the original Rao algorithms but also other state-of-the-art methods.
The remainder of the article is organized as follows: the formulation of the Rao algorithms and the suggested algorithm is presented in the following section. The simulation outcomes are then reported and discussed. Lastly, the conclusions are presented.

Rao algorithms
The basic formulations of Rao algorithms, which are very simple algorithms without control parameters, rely on the difference vector obtained by subtracting the position (location) of the worst individual from the position of the best individual in the present iteration. This ensures that the population always moves towards a better solution. These algorithms use three distinct position-update rules, which are defined as follows (Rao, 2020):

Rao-1 algorithm:
X_{i,j}^{new} = X_{i,j}^{Iter} + r_{1,j} (X_{best,j}^{Iter} − X_{worst,j}^{Iter}) (1)

Rao-2 algorithm:
X_{i,j}^{new} = X_{i,j}^{Iter} + r_{1,j} (X_{best,j}^{Iter} − X_{worst,j}^{Iter}) + r_{2,j} (|X_{i,j}^{Iter}| − |X_{k,j}^{Iter}|), if f(X_i^{Iter}) is better than f(X_k^{Iter}); else
X_{i,j}^{new} = X_{i,j}^{Iter} + r_{1,j} (X_{best,j}^{Iter} − X_{worst,j}^{Iter}) + r_{2,j} (|X_{k,j}^{Iter}| − |X_{i,j}^{Iter}|). (2)

Rao-3 algorithm:
X_{i,j}^{new} = X_{i,j}^{Iter} + r_{1,j} (X_{best,j}^{Iter} − |X_{worst,j}^{Iter}|) + r_{2,j} (|X_{i,j}^{Iter}| − X_{k,j}^{Iter}), if f(X_i^{Iter}) is better than f(X_k^{Iter}); else
X_{i,j}^{new} = X_{i,j}^{Iter} + r_{1,j} (X_{best,j}^{Iter} − |X_{worst,j}^{Iter}|) + r_{2,j} (|X_{k,j}^{Iter}| − X_{i,j}^{Iter}). (3)

In the above equations, X_i^{Iter} represents the ith solution's location in the present iteration Iter; j (ranging from 1 to D) indexes the jth dimension of each solution; X_{best}^{Iter} and X_{worst}^{Iter} represent the positions of the best- and worst-performing members of the population in the present iteration, respectively; r_1 and r_2 are two random vectors of dimension D whose components are drawn uniformly from [0, 1]; X_k^{Iter} represents the position of the kth solution, which is chosen at random; and f(·) represents the objective value of the corresponding solution in the present iteration. The location of the ith solution in the next iteration is obtained using Eq. (4):

X_i^{Iter+1} = X_i^{new}, if f(X_i^{new}) is better than f(X_i^{Iter}); otherwise X_i^{Iter+1} = X_i^{Iter}. (4)

The proposed Fully Informed Search Algorithm (FISA)
The performance of Rao algorithms in optimizing shifted functions, which arise in many real-world problems, may be suboptimal; this will be demonstrated in the simulation section. Therefore, we propose a new algorithm, the Fully Informed Search Algorithm (FISA), which is based on Rao algorithms and is designed to address this issue. FISA improves the optimization of shifted functions while retaining the simplicity and absence of control parameters of the original algorithms. Like the Rao algorithms, FISA moves the population towards better solutions. In summary, the FISA position update can be written as:

X_{i,j}^{new} = X_{i,j}^{Iter} + r_{1,j} (MX_{best,j}^{Iter} − X_{i,j}^{Iter}) + r_{2,j} (X_{i,j}^{Iter} − MX_{worst,j}^{Iter}) (5)

In fact, in FISA, each member approaches the mean position of the individuals in the population that have better fitness values than it and moves away from the mean position of the individuals that have worse fitness values. Then, the position of each member is updated using Eq. (4). In Eq. (5), the values of MX_{best,j}^{Iter} and MX_{worst,j}^{Iter} in each iteration are calculated using Eqs. (6) and (7), respectively:

MX_{best,j}^{Iter} = (1 / length(B_i)) Σ_{m ∈ B_i} X_{m,j}^{Iter} (6)

MX_{worst,j}^{Iter} = (1 / length(W_i)) Σ_{m ∈ W_i} X_{m,j}^{Iter} (7)

where B_i and W_i are the sets of population members that have a better and a worse fitness value than the ith member in iteration Iter, respectively, and length(·) represents the number of individuals in the set.
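The update rules above can be sketched in code. The following is a minimal sketch, assuming minimization; the function names (`rao1_step`, `fisa_step`, `greedy_select`) are ours, and for simplicity the random values are drawn per member rather than once per iteration:

```python
import numpy as np

def rao1_step(pop, fit, rng):
    # Rao-1, Eq. (1): move along the best-minus-worst difference vector.
    best = pop[np.argmin(fit)]
    worst = pop[np.argmax(fit)]
    r1 = rng.random(pop.shape)  # one random value per dimension
    return pop + r1 * (best - worst)

def fisa_step(pop, fit, rng):
    # FISA, Eqs. (5)-(7): approach the mean of better members (B_i) and
    # move away from the mean of worse members (W_i).
    new = np.empty_like(pop)
    for i in range(len(pop)):
        better = pop[fit < fit[i]]  # set B_i
        worse = pop[fit > fit[i]]   # set W_i
        mx_best = better.mean(axis=0) if len(better) else pop[i]
        mx_worst = worse.mean(axis=0) if len(worse) else pop[i]
        r1 = rng.random(pop.shape[1])
        r2 = rng.random(pop.shape[1])
        new[i] = pop[i] + r1 * (mx_best - pop[i]) + r2 * (pop[i] - mx_worst)
    return new

def greedy_select(pop, fit, cand, f):
    # Eq. (4): keep a new position only if it improves the objective.
    cand_fit = np.apply_along_axis(f, 1, cand)
    improved = cand_fit < fit
    pop[improved] = cand[improved]
    fit[improved] = cand_fit[improved]
    return pop, fit
```

Because of the greedy selection of Eq. (4), the best fitness in the population can never get worse from one iteration to the next.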
The flowchart of FISA is shown in Fig. 1.

NUMERICAL RESULTS OF FISA FOR SOLVING BENCHMARK TEST FUNCTIONS FISA for solving CEC2005 problems
To assess the effectiveness of the suggested algorithm in comparison to the original Rao algorithms, we have chosen 14 real-world shifted functions with 30 dimensions (including unimodal, multimodal, and expanded multimodal functions), numbered in the order introduced in CEC 2005 (Liu et al., 2013), whose data were extracted from Suganthan et al. (2005). These functions have been successfully utilized in many articles (Ghasemi, Aghaei & Hadipour, 2017; Birogul, 2019; Ghasemi et al., 2019; Ghasemi et al., 2022a; Ghasemi et al., 2022b; Ghasemi et al., 2023; Akbari, Rahimnejad & Gadsden, 2021; Premkumar et al., 2021; Zou et al., 2022). The total count of function evaluations (NFE) during the execution of each algorithm is set to 300,000 based on Liu et al. (2013); accordingly, the population size of each algorithm during the optimization is 30, and the convergence curves in this study have therefore been extracted over 10,000 iterations. Furthermore, to acquire the optimal solution for each function, each algorithm has been executed independently for 25 runs.

Figure 1 Flowchart of FISA. Full-size DOI: 10.7717/peerjcs.1431/fig-1

A summary of the results comprising the average value, standard deviation, and rank of each algorithm amongst the investigated algorithms is given in Table 1. In this table, N_b and N_w represent the total number of instances where the corresponding algorithm has the best or the worst result among the studied algorithms, respectively. M_R represents the mean ranking of each algorithm over the 14 test functions.
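The summary statistics reported in Table 1 can be computed as in the following hypothetical sketch, where `results[a][f]` is assumed to hold the 25 run values of algorithm `a` on function `f`; ties in mean values are ignored for brevity, and the names are ours:

```python
import numpy as np

def summarize(results, algorithms, functions):
    # Mean result per (algorithm, function), shape (A, F).
    means = np.array([[np.mean(results[a][f]) for f in functions]
                      for a in algorithms])
    # Per-function ranks: rank 1 = best (smallest) mean.
    ranks = means.argsort(axis=0).argsort(axis=0) + 1
    return {
        a: {
            "N_b": int(np.sum(ranks[i] == 1)),               # best finishes
            "N_w": int(np.sum(ranks[i] == len(algorithms))),  # worst finishes
            "M_R": float(ranks[i].mean()),                    # mean rank
        }
        for i, a in enumerate(algorithms)
    }
```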
After a concise investigation of Table 1, it can be observed that the FISA algorithm noticeably outperformed the original Rao algorithms, particularly for functions F2, F4, and F6. The proposed algorithm surpassed the Rao-2 and Rao-3 algorithms for all investigated functions, except for the F8 test function, where it achieved the same performance as the Rao algorithms. Although the FISA algorithm showed slightly lower performance than the Rao-1 algorithm for test functions F1, F7, and F9, these results were close, and FISA outperformed the Rao-1 algorithm for the remaining 10 test functions. These findings indicate that the proposed algorithm has a strong capability to achieve optimal solutions for practical problems. Additionally, the convergence behaviors of the different algorithms on the selected functions are presented in Fig. 2, which provides evidence of the superior convergence behavior of the suggested algorithm.

FISA for solving CEC2014 problems
In the second part of demonstrating the efficacy of the suggested method, FISA, in comparison with the Rao algorithms, 30 test functions from the CEC 2014 test suite, with dimension 30, are selected (Askari & Younas, 2021; Suganthan et al., 2005; Liu & Nishi, 2022; Meng et al., 2022; Band et al., 2022), whose data were extracted from Suganthan et al. (2005). The optimal value for all functions is 0. The population size was set to 30 and the stopping criterion to 10,000 iterations for all algorithms, so that the NFE for each algorithm equals 300,000. The experiment is conducted by running each algorithm independently 25 times for every function. A summary of the results comprising the mean value, standard deviation, and best value for each of the investigated algorithms is given in Table 2.
After reviewing the results presented in Table 2, taking into account the average value, standard deviation, and best optimal value over 25 runs, we can see that the suggested algorithm has a significant advantage over the Rao algorithms. The last row of the table shows the number of test functions in which each algorithm achieved its best solution, denoted as N_b. It is evident that the proposed method outperformed the other algorithms by obtaining the best solution in 23 out of 30 test functions. This indicates the strength and efficiency of the suggested technique as a new optimization algorithm. The convergence characteristics of the algorithms for some selected test functions are displayed in Fig. 3.

NUMERICAL RESULTS OF FISA IN SOLVING ENGINEERING PROBLEMS
To demonstrate the effectiveness and optimization efficiency of FISA, three widely recognized engineering problems, namely the optimal design of a pressure vessel, a tension/compression spring, and a welded beam, have been selected. The optimization is then performed under the same conditions for all algorithms: the population size of each algorithm is set to 60, and the number of iterations per run to 2,000. In addition, each optimization is repeated in 30 independent runs for each problem, using all parameters as suggested by the respective algorithm designers in their original publications.

Pressure vessel optimal design
The goal of the problem is to minimize the overall cost of a pressure vessel, comprising material, forming, and welding expenses. As depicted in Fig. 4, this problem involves four design variables: shell thickness (denoted as x_1 or T_s), head thickness (denoted as x_2 or T_h), inner radius (denoted as x_3 or R), and cylindrical section length (denoted as x_4 or L). While x_3 and x_4 are continuous variables, x_1 and x_2 are discrete variables represented as integer multiples of 0.0625 in. The problem's objective function is nonlinear, and it is subject to both linear and nonlinear inequality constraints, as given below (Askarzadeh, 2016):

Minimize:
f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3

subject to:
g_1(x) = −x_1 + 0.0193 x_3 ≤ 0
g_2(x) = −x_2 + 0.00954 x_3 ≤ 0
g_3(x) = −π x_3^2 x_4 − (4/3) π x_3^3 + 1,296,000 ≤ 0
g_4(x) = x_4 − 240 ≤ 0

Table 3 compares the outcomes achieved by the proposed algorithm with those of other widely used standard algorithms, including quantum-inspired PSO (QPSO) and Gaussian QPSO (G-QPSO) (Coelho, Dos Santos Coelho & Coelho, 2010), ABC (Akay & Karaboga, 2012), a GA equipped with constraint handling via dominance-based tournament selection (GA4) (Coello Coello et al., 2002), co-evolutionary PSO (CPSO) (He & Wang, 2007), co-evolutionary DE (CDE) (Huang, Wang & He, 2007), unified PSO (UPSO) (Parsopoulos & Vrahatis, 2005), the crow search algorithm (CSA) (Askarzadeh, 2016), a genetic algorithm hybridized with an artificial immune system (HAIS-GA) (Bernardino et al., 2008), the bacterial foraging optimization algorithm (BFOA) (Mezura-Montes & Hernández-Ocana, 2008), evolution strategies (ES) (Mezura-Montes & Coello, 2008), a modification of the T-Cell algorithm (Aragón, Esquivel & Coello, 2010), a GA enhanced with a self-adaptive penalty method (GA3) (Coello Coello, 2000), the queuing search (QS) algorithm (Zhang et al., 2018), and a GA equipped with constraint handling via the automatic dynamic penalization (ADP) method (BIANCA) (Montemurro, Vincenti & Vannucci, 2013). Furthermore, the best solution found for the problem using the suggested technique is shown in Table 4.
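The pressure vessel problem above can be sketched in code as follows. This is a sketch of the standard benchmark formulation from the literature, with constraints written as g(x) ≤ 0; the function names are ours:

```python
import numpy as np

def vessel_cost(x):
    # Material, forming, and welding cost; x = (Ts, Th, R, L).
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def vessel_constraints(x):
    x1, x2, x3, x4 = x
    return np.array([
        -x1 + 0.0193 * x3,                       # shell thickness vs radius
        -x2 + 0.00954 * x3,                      # head thickness vs radius
        (-np.pi * x3 ** 2 * x4
         - (4.0 / 3.0) * np.pi * x3 ** 3
         + 1_296_000.0),                         # minimum enclosed volume
        x4 - 240.0,                              # maximum length
    ])

def vessel_feasible(x, tol=1e-9):
    return bool(np.all(vessel_constraints(x) <= tol))
```

Since x_1 and x_2 must be integer multiples of 0.0625 in, a continuous optimizer would additionally snap them, e.g. `x1 = 0.0625 * round(x1 / 0.0625)`, before evaluation.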
Tables 3 and 4 demonstrate that FISA outperforms the other algorithms, achieving the smallest value for the best solution and ranking first for the worst solution and the average value. This confirms that FISA performs well and reliably in tackling the pressure vessel optimal design problem.

Tension/compression spring optimal design
According to Fig. 5, the goal of the problem is to minimize the tension/compression spring weight while satisfying four inequality constraints (one linear and three nonlinear).
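Since the constraint set is not reproduced here, the following sketch uses the standard benchmark formulation of this problem from the literature, with x_1 the wire diameter d, x_2 the mean coil diameter D, and x_3 the number of active coils N; constraints are written as g(x) ≤ 0 and the names are ours:

```python
import numpy as np

def spring_weight(x):
    # Spring weight is proportional to (N + 2) * D * d^2.
    d, D, N = x
    return (N + 2.0) * D * d ** 2

def spring_constraints(x):
    d, D, N = x
    return np.array([
        1.0 - (D ** 3 * N) / (71785.0 * d ** 4),           # minimum deflection
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
        + 1.0 / (5108.0 * d ** 2) - 1.0,                   # shear stress
        1.0 - 140.45 * d / (D ** 2 * N),                   # surge frequency
        (d + D) / 1.5 - 1.0,                               # outer diameter (linear)
    ])
```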

The tables demonstrate that FISA outperforms all other algorithms in terms of the best value, and it also attains the smallest worst solution and average value. This suggests that FISA is more effective than the other competitive optimizers in solving this problem.

MANAGERIAL IMPLICATIONS
Metaheuristic methods provide managers and decision makers with reliable tools for finding appropriate solutions to real-world problems within a limited computational budget and a limited time. Since there are major difficulties in finding exact solutions for a wide variety of real-world problems, metaheuristic methods remain the focus of many studies tackling these issues. This article proposed a simple non-parametric algorithm, named the Fully Informed Search Algorithm. The suggested algorithm's effectiveness was verified by testing it on both shifted benchmark functions and mechanical design problems.
The non-parametric nature of the proposed method, along with its good performance in finding high-quality solutions to complicated real-world optimization problems, makes it a good choice for supporting managers in decision making without the need for sophisticated parameter tuning.

CONCLUSIONS
In this article, a new and powerful variant of the Rao algorithms, entitled the Fully Informed Search Algorithm (FISA), is suggested to enhance their performance in optimizing real-parameter shifted functions. The efficacy of the suggested algorithm was assessed against the three original Rao algorithms on the test functions presented in CEC 2005 and CEC 2014 and on engineering design optimization problems. The obtained results demonstrate that the proposed algorithm performs much better than the original Rao algorithms.
On the other hand, every algorithm has its own limitations; for this reason, after an algorithm is presented, its improved and modified versions are published one after another in different formats and forms. Like any other algorithm, the proposed algorithm may have limitations, such as low convergence speed or getting stuck in local optima. Below, we suggest several ways to improve and extend this algorithm; however, the most effective way to evaluate the performance of an algorithm on any given problem remains experimental testing.
In future studies, the suggested FISA can be utilized for solving various complex optimization problems that occur in the real world. Additionally, our future plans involve creating binary and multiobjective variants of FISA. The optimization of support vector machines or kernel extreme learning machines is also possible using FISA. By merging FISA with other algorithms, we can establish new hybrid algorithms that take advantage of the strengths and abilities of both. Recently, many studies in high-dimensional optimization have been conducted, the majority focusing on the cooperative co-evolution technique. In upcoming studies, FISA could be integrated into various cooperative co-evolution frameworks with different classes to enhance its effectiveness, and it can also tackle other practical large-scale optimization problems. Moreover, initializing FISA with opposition-based learning would be an appropriate direction to explore in the future.

Figure 6 Schematic of welded beam optimal design problem.
Tables 7 and 8 demonstrate that FISA is capable of discovering the most effective optimization solution. The statistical findings also indicate that FISA surpasses the other methods and can more effectively handle constrained engineering problems.