A symbiotic organisms search algorithm with memory guidance for global optimization problems

Symbiotic organisms search (SOS) is a nature-inspired metaheuristic algorithm that has been successfully applied to a large number of problems in different areas. In this work, a novel modified variant of SOS with a memory strategy and a good-point set (GMSOS) is proposed to improve its exploration and exploitation properties. To improve the population diversity and the search capability of the SOS algorithm, good-point-set theory rather than random selection is used to generate the initial population, and a memory strategy is adopted in the three phases of the SOS algorithm, which aims to maintain an effective tradeoff between exploration and exploitation and to prevent the current best solution from getting trapped in local optima. The performance of the proposed SOS variant is tested on standard benchmark functions with different characteristics and on real-world problems. The numerical and statistical results on these applications demonstrate the competitive ability of the proposed algorithm compared to other popular algorithms available in the literature.

A global optimization problem can be formulated as follows:

min f(x), x ∈ S, (1)

where x is the n-dimensional decision variable in the search region S defined by the lower boundary lb = [lb_1, lb_2, ..., lb_n]^T and the upper boundary ub = [ub_1, ub_2, ..., ub_n]^T, and f(x) is the objective function being optimized over x.
The symbiotic organisms search (SOS) algorithm (Cheng and Prayogo 2014), which simulates the interactive behavior of organisms in an ecosystem, is a recently proposed meta-heuristic algorithm. It has the advantages of a simple principle and powerful search ability, which make it a good method for solving real-world optimization problems. Since SOS was introduced in 2014, several versions of the algorithm have been proposed and successfully used on real-life optimization problems. A brief overview of some recently proposed variants and applications of the SOS algorithm is given below.
In order to make the SOS algorithm suitable for problems with different characteristics, several variants have been developed. Panda and Pani (2016) combined SOS with an adaptive penalty function to solve multi-objective constrained optimization problems. Yu et al. (2017) developed six discrete SOS algorithms and applied them to the capacitated vehicle routing problem. Panda and Pani (2018) introduced the augmented Lagrange multiplier method into SOS to solve constrained optimization problems. Truong et al. (2019) presented an improved version of the SOS algorithm that combines quasi-oppositional-based learning with a chaotic local search strategy. Hybridization with other evolutionary algorithms is another avenue for improving the performance of SOS. A hybrid SOS (Nama et al. 2016) was proposed by combining the SOS algorithm with quadratic interpolation to deal with complex optimization problems; the proposed algorithm behaved more efficiently on real-life and large-scale problems. Guha et al. (2017) integrated quasi-oppositional-based learning with SOS to solve the load frequency control problem of power systems. Ezugwu et al. (2017) combined SOS with simulated annealing to solve the traveling salesman problem. Ezugwu and Adewumi (2017a) proposed a soft-set symbiotic organisms search algorithm for optimizing virtual machine resource selection in cloud computing environments. Miao et al. (2018) introduced modified versions of SOS by incorporating the simplex method into the original algorithm to solve the unmanned combat aerial vehicle path planning problem. Saha and Mukherjee (2018) presented a reduced SOS integrated with a chaotic local search to improve the solution accuracy and convergence ability of the basic SOS. A collective and comprehensive description of modifications and hybridizations of the SOS algorithm can be found in (Ezugwu and Prayogo 2019; Abdullahi et al. 2020).
SOS and its improved variants have been successfully applied to a variety of practical problems, such as power systems optimization, construction project scheduling, design of engineering structures, and other fields. Verma et al. (2017) applied the SOS algorithm to modified IEEE 30- and 57-bus test power systems for the solution of the congestion management problem. Tran et al. (2016) presented a multi-objective SOS for optimizing the multiple work shifts problem. Das et al. (2016) applied the SOS algorithm to optimize distributed generation allocation. Ezugwu and Adewumi (2017b) proposed a discrete SOS algorithm for finding a near-optimal solution to the travelling salesman problem. Duman (2016) applied the SOS algorithm to solve the optimal power flow problem with valve-point effects and prohibited zones. Sadek et al. (2017) proposed an improved adaptive fuzzy backstepping control of a magnetic levitation system based on SOS. Prayogo et al. (2018) presented a modified SOS for coping with the resource leveling problem. Do et al. (2019) employed a modified SOS to solve two optimization problems, buckling and free vibration, with various volume constraints. Küçükuǧurlu and Gedikli (2020) applied the SOS algorithm to the multilevel thresholding problem. A comprehensive and collective description of applications of SOS algorithms can be found in (Ezugwu and Prayogo 2019; Abdullahi et al. 2020).
SOS is currently an active research direction; it has good exploitation capability in the mutualism and commensalism phases and is good at exploration in the parasitism phase. The basic SOS and the previously developed variants have obtained satisfactory results in solving practical problems. However, several shortcomings can affect the performance of the SOS algorithm, such as the quality of solutions and becoming stuck in local optima when solving complex problems. In the first two phases of the basic SOS algorithm, the population decides its next move based on the best organism, which often causes the algorithm to suffer from premature convergence. In the parasitism phase, only part of the information of organism X_i, or the information of organism X_j, is passed to the next generation; as a result, the imbalance between exploration and exploitation may easily cause over-exploration, and the best information is not systematically exploited in this phase. On the other hand, the no free lunch (NFL) theorem (Wolpert and Macready 1997) logically proves that no single optimization algorithm can solve all problems; that is, the average performance of any optimization algorithm is the same when taken over all optimization problems. Hence, it is always necessary to incorporate modifications to make an existing optimization algorithm fit a particular class of problems. Motivated by these considerations, further improvements based on a memory mechanism and the good-point set are proposed for solving optimization problems. Tests were carried out on well-known optimization problems comprising unimodal and multimodal functions as well as real-world optimization problems.
The remainder of this paper is organized as follows. Section 2 describes briefly an overview of SOS algorithm. In Section 3, a novel improved SOS (GMSOS) is introduced in detail. The experiment results and comparisons are given in Section 4.
Finally, conclusion and future scope are described in Section 5.

The SOS algorithm
The SOS algorithm is a population-based algorithm proposed by Cheng and Prayogo (2014), inspired by the cooperative behaviour among species in nature. In this optimization algorithm, a group of organisms (individuals) in an ecosystem is considered a population, and each organism is considered a candidate solution to the given optimization problem. Each organism in the ecosystem is associated with the objective function value of the optimization problem. The SOS process is divided into three phases: the mutualism phase, the commensalism phase and the parasitism phase. A detailed description of SOS can be found in (Cheng and Prayogo 2014). These three phases are briefly explained in the subsequent subsections.

Mutualism phase
In the mutualism phase, two different organisms benefit from each other mutually. An organism X_j is randomly selected from the ecosystem to interact mutually with X_i, where i ≠ j. The new organisms X*_i and X*_j for each of X_i and X_j are generated according to Eqs. (2) and (3):

X*_i = X_i + rand(0, 1) × (X_best − MV × BF_1), (2)
X*_j = X_j + rand(0, 1) × (X_best − MV × BF_2), (3)
where X_best is the best organism discovered so far in the ecosystem, MV = (X_i + X_j)/2 is a mutual vector that represents the relationship characteristic between X_i and X_j, and rand(0, 1) is a vector of uniformly random numbers between 0 and 1, used to guide the direction of exploration. The benefit factors (BF_1 and BF_2) are determined randomly as either 1 or 2 (Cheng and Prayogo 2014). X*_i and X*_j are accepted only if they provide better objective function values than X_i and X_j, respectively.
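As a sketch, the mutualism update can be expressed in Python as below. It follows the standard SOS formulation of Cheng and Prayogo (2014) for a minimization problem; the function and variable names are illustrative only.

```python
import numpy as np

def mutualism(X_i, X_j, X_best, f, rng):
    """One mutualism interaction between organisms X_i and X_j.

    Both candidates are generated from the mutual vector MV and the
    benefit factors BF_1, BF_2, then kept only if fitter (minimization).
    """
    BF1 = rng.integers(1, 3)           # benefit factor: 1 or 2
    BF2 = rng.integers(1, 3)
    MV = (X_i + X_j) / 2.0             # mutual vector
    # Candidate organisms, Eqs. (2) and (3)
    X_i_new = X_i + rng.random(X_i.size) * (X_best - MV * BF1)
    X_j_new = X_j + rng.random(X_j.size) * (X_best - MV * BF2)
    # Greedy selection: accept each candidate only if it is fitter
    if f(X_i_new) < f(X_i):
        X_i = X_i_new
    if f(X_j_new) < f(X_j):
        X_j = X_j_new
    return X_i, X_j
```

The greedy selection guarantees that neither organism ever gets worse, which is what makes the phase purely exploitative with respect to the current pair.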

Commensalism phase
In the commensalism phase, an organism X_j is randomly chosen to interact with X_i. In this case, however, X_i tries to benefit from the interaction while X_j neither gains nor loses from the relationship. The new organism X*_i is generated according to Eq. (4):

X*_i = X_i + rand(−1, 1) × (X_best − X_j), (4)
where rand(−1, 1) is a vector of uniformly random numbers between −1 and 1, used to intensify the exploration. After that, the new organism X*_i replaces X_i if it is fitter.
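A corresponding Python sketch of the commensalism step, again following the standard SOS formulation for minimization (names illustrative only):

```python
import numpy as np

def commensalism(X_i, X_j, X_best, f, rng):
    """One commensalism interaction: X_i tries to benefit from X_j,
    while X_j is left unchanged (Eq. (4))."""
    r = rng.uniform(-1.0, 1.0, X_i.size)   # rand(-1, 1) vector
    X_i_new = X_i + r * (X_best - X_j)
    # Keep the candidate only if it is fitter than the current X_i
    return X_i_new if f(X_i_new) < f(X_i) else X_i
```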

Parasitism phase
In this phase, an artificial parasite vector, X pv , is created in the search space by duplicating organism X i and then modifying randomly selected dimensions using a random number. Another organism, X j , is randomly chosen from the ecosystem to serve as a host to the X pv . The X pv replaces X j in the ecosystem if it is fitter. The algorithm steps of SOS are listed in Algorithm 1.
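The parasitism step described above can be sketched as follows. The exact scheme for choosing which dimensions of X_i to randomize is not specified here, so a random mask is assumed for illustration.

```python
import numpy as np

def parasitism(X_i, X_j, lb, ub, f, rng):
    """Create a parasite vector X_pv by duplicating X_i and randomizing
    a subset of its dimensions, then let it compete against host X_j."""
    X_pv = X_i.copy()
    n = X_i.size
    # Assumed scheme: randomly mask dimensions, forcing at least one
    mask = rng.random(n) < rng.random()
    if not mask.any():
        mask[rng.integers(n)] = True
    # Masked dimensions are redrawn uniformly inside the search region
    X_pv[mask] = lb[mask] + rng.random(int(mask.sum())) * (ub[mask] - lb[mask])
    # Greedy selection: the parasite replaces the host only if fitter
    return X_pv if f(X_pv) < f(X_j) else X_j
```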
Algorithm 1 SOS Algorithm (Cheng and Prayogo 2014)
Input: population size N; population of organisms X_i, i = 1, 2, ..., N (initialized); stopping criteria
Output: optimal solution
Identify the best organism X_best
while the stopping criteria are not met do
    for i = 1 to N do
        Mutualism phase: select X_j randomly (j ≠ i); BF_1 = round(1 + rand(0, 1)); BF_2 = round(1 + rand(0, 1)); MV = (X_i + X_j)/2; generate X*_i and X*_j by Eqs. (2) and (3) and accept each if it is fitter
        Commensalism phase: select X_j randomly (j ≠ i); generate X*_i by Eq. (4) and accept it if it is fitter
        Parasitism phase: create X_pv; calculate f(X_pv); if f(X_pv) < f(X_j) then X_j = X_pv
        Identify the best organism X_best
    end for
end while

3 Proposed algorithm

To actively guide the search behavior, an improved version of SOS, called the memory-guided SOS algorithm with good-point set (GMSOS), is introduced. GMSOS brings two improvements into the original SOS: a memory strategy and good-point-set theory. By introducing these two mechanisms into SOS, not only can a better balance between exploration and exploitation be attained, but the optimization ability of SOS can also be improved.

Population initialization based on good point set
Population initialization plays an important role in improving the optimization efficiency of meta-heuristic algorithms. Wang et al. (2020) pointed out that the idea of the good point set is to distribute points more evenly than uniformly random points; the definition and characteristics of the good point set can be found in (Wang et al. 2020). In this study, the initial population is produced in the initial search space based on the good-point set from number theory to improve the optimization efficiency of SOS. Fig. 1 illustrates the comparison between a two-dimensional point set generated using good-point-set theory and one generated using the uniform random method; it can be seen that the good point set is distributed noticeably more uniformly than the random points. For SOS, this method can avoid the generation of invalid organisms and accelerate the convergence speed.
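A minimal sketch of good-point-set initialization is given below. It assumes one common construction of the good point, gamma_k = 2 cos(2πk/p) with p the smallest prime no less than 2n + 3; the construction actually used in (Wang et al. 2020) may differ in detail.

```python
import numpy as np

def _smallest_prime_at_least(m):
    """Smallest prime >= m (trial division; fine for small m)."""
    def is_prime(q):
        if q < 2:
            return False
        return all(q % d for d in range(2, int(q ** 0.5) + 1))
    while not is_prime(m):
        m += 1
    return m

def good_point_set(N, n, lb, ub):
    """Generate N points in n dimensions via a good-point set.

    The k-th coordinate of the i-th point is the fractional part of
    i * 2cos(2*pi*k/p), scaled into [lb, ub]."""
    p = _smallest_prime_at_least(2 * n + 3)
    k = np.arange(1, n + 1)
    gamma = 2.0 * np.cos(2.0 * np.pi * k / p)      # the "good point"
    i = np.arange(1, N + 1).reshape(-1, 1)
    pts = np.mod(i * gamma, 1.0)                   # fractional parts in [0, 1)
    return lb + pts * (ub - lb)                    # map into the search region
```

Compared with uniform random sampling, successive rows of `pts` fill the unit cube with low discrepancy, which is the property Fig. 1 illustrates.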

The modified mutualism phase and commensalism phase
A fixed-size memory with an updating mechanism is employed to store the previous best organisms found so far, similar to an external archive in multi-objective optimization. The number of history best organisms stored is denoted by M, which is a user-defined parameter. At each iteration, the oldest best organism is removed from the memory to accommodate the latest best organism found. The selection probability for the ith organism in the memory is then calculated as in Eq. (5),
where f_i (f_j) denotes the objective function value of the ith (jth) organism in the memory. In the modified mutualism and commensalism phases, the update equations of GMSOS are the same as those of SOS, except that the current best organism X_best is replaced by a history best organism X^k_best, which is chosen from the memory by roulette wheel selection.
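The memory and its roulette wheel selection can be sketched as follows. Since Eq. (5) is not reproduced in the text, an inverse-fitness weighting (for minimization) is assumed here as a placeholder for the actual selection probability; the class and method names are illustrative only.

```python
import collections
import numpy as np

class BestMemory:
    """Fixed-size FIFO memory of the M most recent best organisms."""

    def __init__(self, M):
        # deque with maxlen drops the oldest entry automatically
        self.buf = collections.deque(maxlen=M)

    def push(self, x, fx):
        self.buf.append((np.asarray(x, float), float(fx)))

    def roulette_select(self, rng):
        """Pick one stored organism; smaller objective value -> larger
        probability (assumed stand-in for Eq. (5))."""
        fits = np.array([fx for _, fx in self.buf])
        w = 1.0 / (1.0 + fits - fits.min())
        probs = w / w.sum()
        idx = rng.choice(len(self.buf), p=probs)
        return self.buf[idx][0]
```

The `maxlen` behavior of the deque implements exactly the "oldest best organism is removed" rule described above.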

The modified parasitism phase
The history best organism X^k_best is chosen because it is likely to carry good evolutionary information about the global optimum, which should therefore be inherited. The inheritance is performed through a Gaussian distribution in order to explore the search space more efficiently; it serves as guidance to search the region around the evolutionary direction and thus improves the exploitation capability. This operation can balance the exploration and exploitation abilities effectively and avoid getting stuck in local optima. The parasite vector X_pv is generated by Eq. (6),
where G(0, 1) is a Gaussian random number with a mean of 0 and a standard deviation of 1. Once a new parasite vector X_pv is formed, a greedy selection is applied between X_pv and X_j according to their objective function values. Empirical study shows that this strategy results in good performance on most of the test functions. The pseudo-code for the implementation of the modified parasitism phase is presented in Algorithm 2. It can be seen that the main difference lies in the generation of the artificial parasite vector X_pv; the operator can achieve a better trade-off between exploration and exploitation, effectively reduces the impact of over-exploration, and does not introduce any additional algorithm-specific parameters.
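A sketch of the modified parasite-vector generation is shown below. The exact form of Eq. (6) is not reproduced in the text, so an elementwise Gaussian perturbation of the history best organism, X_pv = X^k_best × (1 + G(0, 1)), is assumed purely for illustration.

```python
import numpy as np

def modified_parasitism(X_k_best, X_j, f, rng):
    """Modified parasitism sketch: the parasite vector is a Gaussian
    perturbation of the history best organism (assumed form of Eq. (6)),
    followed by greedy selection against the host X_j."""
    g = rng.standard_normal(X_k_best.size)   # G(0, 1): mean 0, std 1
    X_pv = X_k_best * (1.0 + g)              # assumed perturbation
    # Greedy selection between the parasite and the host
    return X_pv if f(X_pv) < f(X_j) else X_j
```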

Algorithm 2 The Modified Parasitism Phase
Input: history best organism X^k_best
Output: organism X_j
Generate the parasite vector X_pv by Eq. (6)
Calculate f(X_pv)
if f(X_pv) < f(X_j) then X_j = X_pv end if

It is worth noting that GMSOS brings a simple yet effective modification into the original SOS, which may help produce efficient results. The algorithm steps of GMSOS are listed in Algorithm 3.

Algorithm 3 GMSOS Algorithm
Input: population size N; control parameter M; population of organisms X_i, i = 1, 2, ..., N, initialized by the good point set; stopping criteria
Output: optimal solution
Identify the best organism X_best and store it in the memory
while the stopping criteria are not met do
    for i = 1 to N do
        Modified mutualism phase (with X^k_best selected from the memory)
        Modified commensalism phase (with X^k_best selected from the memory)
        Modified parasitism phase (Algorithm 2)
        Identify the best organism X_best and update the memory
    end for
end while

4 Experimental results and discussion

In the following sections, various benchmark functions are employed to probe the effectiveness of the proposed method in practice. To demonstrate the efficiency and robustness of the proposed algorithm, various simulations were carried out. The first is a comparison on 35 classic benchmark functions (Yao et al. 1999; Digalakis and Margaritis 2001), and the second is a comparison on two real-world problems. These problems have been widely used in the literature.
To test the performance of the proposed algorithm, all algorithms are implemented in Matlab (version R2015b) and executed on an HP machine (Xeon(R) core, 2.53 GHz, 8 GB RAM).

Comparison on classic benchmark functions
In this subsection, the proposed algorithm is applied to 35 well-known benchmark functions with different characteristics, which have been extensively solved with different algorithms in the literature, in order to test its performance. These functions are given in Tables 1–5. For the sake of fairness, the most common parameters for all the algorithms are taken as suggested by the respective authors in the original articles, except for the maximum number of iterations, the population size and the memory size. A total of 30 individuals are allowed to determine the best solution over 1000 iterations in each run for each algorithm, and a preliminary parametric study shows that M = 3 works better for most applications. In order to eliminate stochastic discrepancy, each algorithm is run 30 times on each objective function to analyze the robustness and convergence of the optimization algorithms. For unimodal and multimodal functions (except for the fixed-dimension multimodal benchmark functions), n = 30 is used. Tables 6–15 include the comparison results on the test functions. According to the statistical results (average and standard deviation) in Tables 6–15, GMSOS provides more promising results than the other algorithms with regard to the quality of solutions on most of the test problems.
From Table 6 and Table 11, it can first be perceived that the proposed GMSOS technique provides better results for f1–f5, not only when compared to the basic SOS but also when compared to the other techniques; second, the QOCSOS algorithm outperforms the other opponents on function f6, and the variants of SOS provide nearly similar results on function f7. According to Table 7 and Table 12, on these multimodal problems the proposed GMSOS is able to locate the optima of f9 and f11. The GMSOS algorithm outperforms the other opponents on functions f8 and f13, and GMSOS obtains the second-best mean result on function f10 after ALSOS and ISOS. The results obtained on problems f14–f23 by the proposed GMSOS are presented in Table 8 and Table 13, which clearly demonstrate the better search ability and superior solution accuracy of GMSOS compared to the other algorithms. It is clear from Table 9 that the proposed GMSOS is superior to the others on the shifted and biased unimodal functions except for functions f24, f25 and f29, on which GMSOS has better or similar performance to the other variants. It can be seen directly from Table 14 that the proposed GMSOS finds the solutions of the problems with better accuracy than the other algorithms. For the shifted and biased multimodal functions (Table 10 and Table 15), it can be observed that GMSOS shows more prominent results than the compared methods on most of the benchmark functions in terms of solution quality and robustness; on function f30, ISOS and GWO present relatively small (better) results, and GMSOS obtains the second-best mean result on function f34 after HHO. Tables 6–15 show that GMSOS exhibits good convergence performance on most of the test cases. It can be concluded that GMSOS provides an efficient strategy for finding the optimal solution of an optimization problem with a fast convergence rate.
Therefore, it is evident that GMSOS provides comparatively better exploration and exploitation, with a proper balance between them, during the search process.
However, precision is not the sole feature that should be achieved; improving the convergence speed is also essential. Therefore, the average convergence curves of the optimization processes on some typical functions for the considered algorithms are illustrated in Figs. 2, 3 and 4, respectively. As shown in Fig. 2, on most functions GMSOS not only has higher convergence accuracy but also converges faster. As can be seen in Fig. 3, GMSOS has higher convergence accuracy and faster convergence speed on most functions except for function f20. The convergence curves of GMSOS on the shifted and biased unimodal and multimodal functions are shown in Fig. 4. Consequently, it can be concluded that the proposed GMSOS apparently improves the performance (efficiency) of SOS and indeed performs well in terms of convergence speed with accurate solutions. After combining the memory strategy and the good-point set, SOS becomes more efficient, as it is able to explore the search space globally and then refine the obtained global solutions faster than the basic SOS. Through the comparison of these experimental results, it can be seen that GMSOS has the characteristics of fast convergence and strong optimization ability on most function optimization problems. GMSOS is generally effective for both unimodal and multimodal functions with regard to accuracy and convergence speed.
Although the average of the best solutions over 30 runs gives a reliable comparison, a nonparametric statistical test was also carried out to see how significant the results are. The Friedman test (Derrac et al. 2011) is applied to the mean values found by the algorithms in Tables 6–15, and the ranking values and p-value are presented in Table 16. The lower the ranking value, the better the performance. GMSOS obtains the lowest average ranking value of 3.10; the Friedman test ranks QOCSOS as the second best after GMSOS, as can be seen from Table 16, and the other algorithms are ranked from third best to worst as ISOS, SOS, ESOS, HHO, ALCSOS, WOA, SSA, MVO and GWO. The p-value computed through the Friedman test statistic strongly indicates the existence of significant differences among the eleven algorithms.
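The average Friedman ranking used above can be sketched as follows. This computes only the average ranks over all functions (lower is better), not the test statistic or p-value, and the function name is illustrative.

```python
import numpy as np

def friedman_ranks(results):
    """Average Friedman ranks.

    `results` is a (functions x algorithms) array of mean objective
    values, smaller is better. For each function the algorithms are
    ranked 1..k (ties get the average of their ranks), and the ranks
    are averaged over all functions."""
    results = np.asarray(results, float)
    ranks = np.empty_like(results)
    for r, row in enumerate(results):
        order = row.argsort()
        rank = np.empty(len(row))
        rank[order] = np.arange(1, len(row) + 1)
        # tied values share the average of their ranks
        for v in np.unique(row):
            tie = row == v
            rank[tie] = rank[tie].mean()
        ranks[r] = rank
    return ranks.mean(axis=0)
```

For example, two functions on which algorithm A is always best and B and C swap second place would give A an average rank of 1.0 and B and C 2.5 each.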
Here it is observed that the GMSOS algorithm is the best one among these algorithms.

Comparison on real-world problems
In order to further verify the effectiveness of GMSOS on real-world applications, two problems (Li et al. 2012) from real life were chosen: the gear train design problem and parameter estimation for frequency-modulated (FM) sound waves. The first problem is to minimize the gear ratio error of a compound gear train defined by the tooth numbers x1, x2, x3, x4. Mathematically, the gear train design problem can be stated as follows:

min f(x) = (1/6.931 − x1 x2 / (x3 x4))^2,

where x_i ∈ [12, 60], i = 1, 2, 3, 4. The second problem is to estimate the parameters of an FM synthesizer. The parameter vector has six components, x = (a1, ω1, a2, ω2, a3, ω3), and the formula of the estimated sound wave is given as

y(t) = a1 · sin(ω1 · t · θ + a2 · sin(ω2 · t · θ + a3 · sin(ω3 · t · θ))), (8)

and the equation of the target sound wave is given by

y0(t) = sin(5 · t · θ − 1.5 · sin(4.8 · t · θ + 2 · sin(4.9 · t · θ))),

where θ = 2π/100 and the parameters are defined in the range [−6.4, 6.35]. The goal function is defined as the sum of squared errors between the estimated wave and the target wave:

f(x) = Σ_{t=0}^{100} (y(t) − y0(t))^2.

The comparison on the two real-world problems over 30 runs is shown in Table 17. The results of the compared algorithms are taken directly from Li et al. (2012). As seen from Table 17, GMSOS solved the first problem easily and obtained the best values in terms of min, max, mean and standard deviation; the proposed algorithm is very competitive with the other algorithms. For the second problem, GMSOS obtained the best standard deviation, which means GMSOS has obvious advantages in finding the best solution and maintaining stability when facing real-world problems. These results demonstrate the efficiency of the GMSOS method in solving real-world problems. From these observations, all simulation results assert that the proposed improved variant is very helpful in improving the efficiency of SOS in terms of result quality.
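Assuming the standard definitions of these two benchmark problems from Li et al. (2012), their objective functions can be sketched as below (function names are illustrative).

```python
import numpy as np

def gear_train(x):
    """Gear train design: squared error between the gear ratio
    x1*x2/(x3*x4) and the target ratio 1/6.931; the x_i are tooth
    counts in [12, 60]."""
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

def fm_sound_wave(x):
    """FM parameter estimation: sum of squared errors between the
    estimated wave y(t) and the target wave y0(t) over t = 0..100,
    with theta = 2*pi/100."""
    a1, w1, a2, w2, a3, w3 = x
    theta = 2.0 * np.pi / 100.0
    t = np.arange(101)
    y = a1 * np.sin(w1 * t * theta + a2 * np.sin(w2 * t * theta
                                                 + a3 * np.sin(w3 * t * theta)))
    y0 = np.sin(5.0 * t * theta - 1.5 * np.sin(4.8 * t * theta
                                               + 2.0 * np.sin(4.9 * t * theta)))
    return float(np.sum((y - y0) ** 2))
```

By construction, the FM objective vanishes at the target parameter vector (1, 5, −1.5, 4.8, 2, 4.9), which is a convenient sanity check for any optimizer applied to it.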
It is clear that GMSOS is superior to the other compared methods on most of the test problems, and the search capability of GMSOS is better than that of the other algorithms on most of the functions and real-life problems. Overall, GMSOS exhibits highly competitive search performance among all compared algorithms. It can be concluded that these findings strengthen the effectiveness of the GMSOS algorithm in solving unconstrained global optimization problems, both empirically and statistically.

Conclusion
In the present work, SOS has been extended to GMSOS. GMSOS employs the good-point set for population initialization, and the three phases of the SOS algorithm are modified to strengthen the diversity of solutions in the search process and to balance the exploration and exploitation abilities effectively. In the first two phases, the current best organism is replaced by a history best organism. In the parasitism phase, a new artificial parasite vector is generated based on the history best organism. In the experiments, 35 classical test functions and two classical real-life problems are used to evaluate the performance of GMSOS. Moreover, GMSOS is compared with several recently developed algorithms; the experimental results indicate that GMSOS obtains more competitive results than the other comparative algorithms on the majority of the test functions and on the two classical real-life problems. Given the promising results of the proposed algorithm on the test problems, the algorithm will be hybridized with other classical algorithms in the future, and the application of GMSOS to real-world engineering problems is another possible direction of future work.