A Modified Slime Mould Algorithm for Global Optimization

Slime mould algorithm (SMA) is a population-based metaheuristic inspired by the oscillation phenomenon of slime mould. SMA is competitive with other algorithms but still suffers from an imbalance between exploitation and exploration and easily falls into local optima. To address these shortcomings, an improved variant of SMA named MSMA is proposed in this paper. Firstly, a chaotic opposition-based learning strategy is used to enhance population diversity. Secondly, two adaptive parameter control strategies are proposed to balance exploitation and exploration. Finally, a spiral search strategy is used to help SMA escape local optima. The superiority of MSMA is verified on 13 multidimensional test functions and 10 fixed-dimensional test functions. In addition, two engineering optimization problems are used to verify the potential of MSMA for solving real-world optimization problems. The simulation results show that the proposed MSMA outperforms the comparative algorithms in terms of convergence accuracy, convergence speed, and stability.


Introduction
Solving optimization problems means finding the values of the given variables that yield the maximum or minimum objective value without violating the constraints. With the continuous development of artificial intelligence technology, real-world optimization problems are becoming more and more complex. Traditional mathematical methods struggle to solve nonconvex and noncontinuous problems effectively and are easily trapped in local optima [1,2]. Metaheuristic optimization algorithms are able to obtain optimal or near-optimal solutions within a reasonable amount of time [3].
Thus they are widely used for solving optimization problems, such as mission planning [4][5][6][7], image segmentation [8][9][10], feature selection [11][12][13], and parameter optimization [14][15][16][17][18]. Metaheuristic algorithms find optimal solutions by modeling physical phenomena or biological activities in nature. These algorithms can be divided into three categories: evolutionary algorithms, physics-based algorithms, and swarm-based algorithms. Evolutionary algorithms, as the name implies, are a class of algorithms that simulate the laws of evolution in nature. Genetic algorithms [19], based on Darwin's theory of survival of the fittest, are one representative.
There are other algorithms as well, such as differential evolution, which mimics the crossover and mutation mechanisms of genetics [20]; biogeography-based optimization, inspired by natural biogeography [21]; evolutionary programming [22]; and evolution strategies [23]. Physics-based algorithms search for the optimum by simulating physical laws or phenomena in the universe. Simulated annealing (SA), inspired by the phenomenon of metallurgical annealing, is the best-known physics-based algorithm. Apart from SA, other physics-based algorithms have been proposed, such as the gravitational search algorithm (GSA) [24], sine cosine algorithm [25], black hole algorithm [26], nuclear reaction optimization [27], and Henry gas solubility optimization [28]. Swarm-based algorithms are inspired by the social group behavior of animals or humans. Particle swarm optimization [29] and ant colony optimization [30], which simulate the foraging behavior of birds and ants, are two of the most common swarm-based algorithms. In addition to those, researchers have continued to propose new swarm-based algorithms.
The grey wolf optimizer (GWO) [31] simulates the collaborative foraging of grey wolves. The salp swarm algorithm [32] is inspired by the foraging and chaining behavior of salps. Monarch butterfly optimization [33] is inspired by the migratory activities of monarch butterfly populations. The naked mole-rat algorithm [34] mimics the mating patterns of naked mole-rats. However, the no free lunch (NFL) theorem points out that no single algorithm can solve all optimization problems well [35]. This motivates us to continuously propose new algorithms and improve existing ones. Recently, inspired by the phenomenon of slime mould oscillation, Li et al. proposed a new population-based algorithm called the slime mould algorithm (SMA) [36]. Although SMA is competitive compared to other algorithms, it has some shortcomings. Owing to diminishing population diversity, SMA easily falls into local optima [37]. Its selection of update strategies weakens the exploration ability [38,39]. As problems grow more complex, SMA converges more slowly in late iterations and has difficulty maintaining a balance between exploitation and exploration [40,41]. To further enhance the performance of SMA, and considering that the NFL theorem encourages continuous improvement of existing algorithms, a modified variant of SMA called MSMA is proposed in this paper. A chaotic opposition-based learning strategy is first used to improve population diversity; the search scope is expanded by the reverse solutions generated with a chaotic operator. Second, two adaptive parameter control strategies are proposed to better balance exploitation and exploration. Finally, a spiral search strategy is introduced to enhance the global exploration ability of the algorithm and to avoid falling into local optima. To verify the superiority of MSMA, 13 functions with variable dimensions and 10 functions with fixed dimensions were used for testing.
The differences between the algorithms were also analyzed using the Wilcoxon test and the Friedman test. Moreover, two engineering optimization problems were used to further verify the performance of MSMA. The remainder of this paper is organized as follows. A review of the basic SMA is provided in Section 2. Section 3 provides a detailed description of the proposed MSMA. In Section 4, the effectiveness of the proposed improvement strategies and the superiority of the improved algorithm are verified using classical test functions; on this basis, MSMA is applied to solve two engineering design problems. The main reasons for the success of MSMA are discussed in Section 5. Finally, conclusions and future work are given in Section 6.

Slime Mould Algorithm
In this section, the basic procedure of SMA is described. SMA works by simulating the behavioral and morphological changes of slime mould during the foraging process. The mathematical model of the slime mould's approaching behavior is as follows:

$$X(t+1) = \begin{cases} X_{best}(t) + vb \cdot \left( W \cdot X_A(t) - X_B(t) \right), & rand < p \\ vc \cdot X(t), & rand \ge p \end{cases}$$

where $t$ denotes the current iteration, $X_{best}(t)$ denotes the optimal individual, and $X_A(t)$ and $X_B(t)$ are two individuals randomly selected from the population at iteration $t$. $vb$ is a parameter in the range $[-a, a]$, $vc$ is a variable decreasing from 1 to 0, and $W$ denotes the weight of the slime mould. The variable $p$ is calculated by the following formula:

$$p = \tanh \left| S(i) - DF \right|, \quad i = 1, 2, \ldots, NP,$$

where $S(i)$ is the fitness of $X_i$, $NP$ is the population size, and $DF$ is the best fitness obtained so far. The formula for $a$ is as follows:

$$a = \operatorname{arctanh}\left( -\frac{t}{t_{max}} + 1 \right),$$

where $t_{max}$ is the maximum number of iterations. The weight $W$ is calculated as follows:

$$W(SmellIndex(i)) = \begin{cases} 1 + r \cdot \log\left( \dfrac{bF - S(i)}{bF - wF} + 1 \right), & \text{condition} \\ 1 - r \cdot \log\left( \dfrac{bF - S(i)}{bF - wF} + 1 \right), & \text{others} \end{cases}$$

where condition denotes the individuals ranking in the top half of fitness, $r$ is a random number in $[0, 1]$, and $bF$ and $wF$ denote the best and worst fitness in the current population, respectively. The full formula for updating the position of the slime mould is as follows:

$$X(t+1) = \begin{cases} rand \cdot (ub - lb) + lb, & rand < z \\ X_{best}(t) + vb \cdot \left( W \cdot X_A(t) - X_B(t) \right), & r < p \\ vc \cdot X(t), & r \ge p \end{cases}$$

where $ub$ and $lb$ are the upper and lower bounds of the search space, respectively, and $z$ is a small switching probability (0.03 in the original SMA).
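As a concrete illustration, one iteration of the update rules above can be sketched in a few lines of code. This is an illustrative sketch, not the authors' implementation: the search bounds, the random generator seeding, and the way the better half of the population is identified are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def sma_update(X, S, X_best, DF, a, vc, z=0.03, lb=-100.0, ub=100.0):
    """One iteration of the basic SMA position update (illustrative sketch).
    X: (NP, D) population; S: (NP,) fitness values (minimization);
    X_best: (D,) best-so-far position; DF: best fitness so far;
    a: current vibration bound for vb; vc: factor decreasing from 1 to 0;
    z: random-restart probability; lb, ub: assumed search bounds."""
    NP, D = X.shape
    bF, wF = S.min(), S.max()
    # Weight W: log-scaled closeness to the best fitness; the better half of
    # the population is amplified (1 + ...), the rest damped (1 - ...).
    ratio = np.log10((bF - S) / (bF - wF - 1e-300) + 1)
    better = S <= np.median(S)
    W = np.where(better[:, None],
                 1 + rng.random((NP, D)) * ratio[:, None],
                 1 - rng.random((NP, D)) * ratio[:, None])
    p = np.tanh(np.abs(S - DF))          # per-individual switch probability
    new_X = np.empty_like(X)
    for i in range(NP):
        if rng.random() < z:             # random-restart branch
            new_X[i] = rng.uniform(lb, ub, D)
        elif rng.random() < p[i]:        # approach-food branch
            A, B = rng.integers(0, NP, size=2)
            vb = rng.uniform(-a, a, D)
            new_X[i] = X_best + vb * (W[i] * X[A] - X[B])
        else:                            # contraction branch
            new_X[i] = vc * X[i]
    return new_X
```

In a full run this step would be repeated for `t_max` iterations while `a` and `vc` shrink toward zero.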

Proposed MSMA
To overcome the shortcomings of the basic SMA, this paper proposes three improvement strategies to enhance its performance. A chaotic opposition-based learning strategy is used to enhance population diversity, and two self-adaptive strategies are used to balance the exploitation and exploration abilities of the algorithm. A spiral search strategy is used to prevent the algorithm from falling into local optima. The three improvement strategies are described in detail below.

Chaotic Opposition-Based Learning Strategy.
Opposition-based learning (OBL) is a technique proposed by Tizhoosh [42] that has gained attention in computing in recent years. It has been shown that the probability that a reverse solution is closer to the global optimum is nearly 50% higher than that of the current original solution. OBL enhances population diversity mainly by generating the reverse position of each individual and evaluating the original and reverse individuals to retain the better individuals in the next generation. The OBL formula is as follows:

$$X_i^{o} = ub + lb - X_i^{t},$$

where $X_i^{o}$ is the reverse solution corresponding to $X_i^{t}$. The reverse solution generated by basic OBL is not necessarily better than the current solution. Chaotic mapping, with its randomness and ergodicity, can help generate new solutions and further enhance population diversity. Therefore, this paper combines chaotic mapping with OBL and proposes a chaotic opposition-based learning strategy. The specific mathematical model is described as follows:

$$X_i^{To} = \lambda_i \cdot (ub + lb) - X_i^{t},$$

where $X_i^{To}$ denotes the reverse solution corresponding to the $i$th individual in the population and $\lambda_i$ is the corresponding chaotic mapping value.
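The strategy can be sketched as follows. The combination formula `X_to = lam * (lb + ub) - X` follows the generalized-OBL reconstruction above and is an assumption; the clipping to bounds, the Sine-map seed `x0`, and the function names are illustrative choices, not the authors' code.

```python
import numpy as np

def sine_sequence(n, x0=0.7):
    """Generate n chaotic values with the Sine map x_{k+1} = sin(pi * x_k)
    (the mapping ultimately selected in Section 4.2)."""
    vals, x = [], x0
    for _ in range(n):
        x = np.sin(np.pi * x)
        vals.append(x)
    return np.array(vals)

def chaotic_obl(X, fitness_fn, lb, ub, lam):
    """Chaotic opposition-based learning (sketch). Builds the chaotic reverse
    population, evaluates both populations, and keeps the better individual
    of each (original, reverse) pair for a minimization problem."""
    X_to = np.clip(lam[:, None] * (lb + ub) - X, lb, ub)
    S, S_to = fitness_fn(X), fitness_fn(X_to)
    keep = S <= S_to                     # smaller fitness wins
    return np.where(keep[:, None], X, X_to), np.minimum(S, S_to)
```

For example, on a sphere function with bounds [0, 10], a population stuck near 9 is replaced by reverse solutions much closer to the optimum at 0.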

New Nonlinear Decreasing Strategy.
During the iterative optimization of SMA, the change of the parameter a has an important impact on the balance of exploitation and exploration. In SMA, a decreases rapidly in the early iterations and slows down in the later iterations. A smaller a in the early stage is not conducive to global exploration. Therefore, in order to further balance exploitation and exploration and to enhance both the global exploration capability and the convergence of local exploitation, a new nonlinear decreasing strategy for a is proposed in this paper. To visually illustrate the effect of the new strategy, Figure 1 compares it with the parameter change strategy of the original SMA. The new strategy decreases slowly in the early stages, which increases the time available for global exploration; in late iterations it decreases faster than the original strategy, which helps SMA accelerate exploitation.

Linear Decreasing Selection Range.
In the position update of SMA, the two individuals X_A and X_B are randomly selected from the whole population. This is not conducive to the later convergence of the algorithm. In order to enhance the convergence of SMA, the selection range is reduced as the number of iterations increases, so that X_A and X_B are drawn only from the top SR fraction of the fitness-sorted population. The selection range parameter SR is described as follows:

$$SR = SR_{max} - \left( SR_{max} - SR_{min} \right) \cdot \frac{t}{t_{max}},$$

where $SR_{max}$ and $SR_{min}$ are the maximum and minimum selection ranges, respectively.
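The two adaptive strategies can be sketched together. The linear SR formula follows the description above; the concave quadratic decay used for the new a schedule is a hypothetical stand-in (the paper's exact nonlinear formula is not reproduced here), chosen only to show the intended slower-early / faster-late behavior, and the SR_max/SR_min defaults are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
t_max = 500
t = np.arange(1, t_max + 1)

# Original SMA schedule: a = arctanh(1 - t/t_max), which drops steeply early.
a_orig = np.arctanh(1 - t / t_max)

# Hypothetical slower-early / faster-late alternative (illustrative only):
# a concave quadratic decay from the same starting value to zero.
a_new = a_orig[0] * (1 - (t / t_max) ** 2)

def select_pair(sorted_idx, t, t_max, SR_max=1.0, SR_min=0.1):
    """Draw X_A, X_B from a linearly shrinking top fraction of the
    fitness-sorted population (SR_max/SR_min values are assumptions)."""
    SR = SR_max - (SR_max - SR_min) * t / t_max
    pool = sorted_idx[: max(2, int(np.ceil(SR * len(sorted_idx))))]
    return rng.choice(pool, size=2)
```

Early on, `select_pair` behaves like the original SMA (whole population); by the final iteration it samples only from the best-ranked individuals, which concentrates the search.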

Spiral Search Strategy.
In order to better balance the exploitation and exploration of SMA, this paper introduces a spiral search strategy. The spiral search diagram is shown in Figure 2.
As can be seen from Figure 2, the spiral search strategy can expand the search scope and improve the global exploration performance. The mathematical formula of the spiral search strategy is as follows:

$$X(t+1) = \left| X_{best}(t) - X(t) \right| \cdot e^{bl} \cdot \cos(2 \pi l) + X_{best}(t),$$

where $b$ is a constant defining the shape of the logarithmic spiral and $l$ is a random number in $[-1, 1]$. The spiral search strategy and the original update strategy are chosen randomly according to a probability to update the population positions, yielding the modified position-updating formula of MSMA. The pseudocode and flowchart of MSMA are shown in Algorithm 1 and Figure 3.
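The spiral move can be sketched as follows. The logarithmic-spiral form and the shape constant `b` are assumptions modeled on the common whale-style spiral, consistent with the `l` parameter updated in Algorithm 1; they are not guaranteed to match the authors' exact constants.

```python
import numpy as np

rng = np.random.default_rng(2)

def spiral_move(X_i, X_best, b=1.0):
    """Logarithmic spiral move of individual X_i around the current best
    position X_best (sketch; b is an assumed shape constant)."""
    l = rng.uniform(-1, 1)               # spiral position parameter in [-1, 1]
    dist = np.abs(X_best - X_i)          # elementwise distance to the best
    return dist * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
```

Note that when an individual coincides with the best position, the distance term vanishes and the move leaves it at the best position; otherwise the individual lands somewhere on a spiral around the best, which widens the explored region.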

Computational Complexity Analysis.
MSMA is mainly composed of the following components: initialization, fitness evaluation, reverse-population fitness evaluation, ranking, weight update, and position update, where Np denotes the number of slime moulds, D denotes the dimension of the function, and T denotes the maximum number of iterations. The computational complexity of initialization is O(D); fitness evaluation plus reverse-population fitness evaluation is O(2 · Np); ranking is O(Np · log Np); the weight update is O(Np · D); and the position update is O(Np · D). Therefore, the total complexity of MSMA is O(D + T · Np · (2 + log Np + D)).

Numerical Experiment and Analysis
In this section, various experiments are performed to verify the performance of MSMA. The experiments mainly cover twenty-three classical test functions and two engineering design optimization problems.

Benchmark Test Functions and Parameter Settings.
The 23 benchmark test functions include 7 unimodal, 6 multimodal, and 10 fixed-dimension multimodal functions. The unimodal functions have only one global optimal solution and are usually used to verify the exploitation capability of an algorithm. The multimodal functions have multiple local optima and are therefore often used to examine an algorithm's exploration ability and its ability to escape from local optima. These benchmark functions are listed in Table 1. The experimental results of MSMA are compared with those of eight other algorithms: MPA [43], MFO [44], SSA [32], EO [45], MRFO [46], HHO [47], GSA, and GWO. To ensure fairness, all algorithms were run on a Windows 10 platform with an AMD R7 4800U CPU and 16 GB of RAM, and the code was programmed in MATLAB R2016b. In the experiments, the population size NP is 30 and the maximum number of iterations is 500. The results of 50 independent runs are recorded. The parameters of the comparison algorithms were set according to the original literature, as shown in Table 2.

Chaotic Mapping Selection Test.
The chaotic opposition-based learning strategy proposed in this paper combines a chaotic mapping with the opposition-based learning mechanism. To determine which chaotic mapping to use, 10 chaotic mappings are each combined with the opposition-based learning mechanism. The SMA using the chaotic mapping with ID 1 is named SMA-C1, and the remaining SMA variants using chaotic mappings are named similarly. The details of the chaotic mappings are shown in Table 3. Table 4 lists the results of each algorithm on the benchmark test functions.
As shown in Tables 4 and 5, all 10 chaotic opposition-based learning strategies can improve SMA performance. SMA-C6 achieved the best results in solving the unimodal functions F1-F7, which indicates that the Piecewise map can better enhance the exploitation ability of SMA. When solving the multimodal functions F8-F13, SMA-C5 achieved satisfactory answers, which shows that the Logistic map can enhance the exploration ability of SMA. SMA-C4 achieves satisfactory solutions on the fixed-dimensional functions F14-F23, which indicates that the Iterative map can enhance the local-optimum avoidance ability of SMA. While SMA-C7 with the Sine map is not the best performer in any of the three categories, it is ranked first in the overall ranking. This indicates that the Sine map has the best effect on the comprehensive performance of SMA. In summary, the Sine map, ranked first overall, is chosen in this paper to generate the chaotic mapping values for the chaotic opposition-based learning strategy.

Improvement Strategy Effectiveness Test.
As seen in Section 3, three strategies are used in this paper to improve SMA performance. To evaluate the impact of each strategy on SMA, three SMA-derived algorithms (MSMA-1, MSMA-2, and MSMA-3) are constructed according to Table 6, where COBL represents the chaotic opposition-based learning strategy, SA the adaptive strategies, and SS the spiral search strategy. Table 7 lists the results of each algorithm on the benchmark test functions.
As shown in Tables 7 and 8, MSMA with the complete set of improvement strategies performs best overall. The three SMA-derived algorithms also rank higher than SMA, in the following order from highest to lowest: MSMA-1, MSMA-2, MSMA-3. This shows that the three strategies affect MSMA performance, from largest to smallest, in the order COBL > SA > SS. Further analysis shows that MSMA-1 performs best in solving the unimodal functions F1-F7, which indicates that COBL can significantly improve the local search ability of SMA. MSMA-3 achieves satisfactory results on the multimodal functions F8-F13 and the fixed-dimensional functions F14-F23, which shows that SS can improve the global exploration capability of SMA, allowing the algorithm to escape local optima. MSMA-2 performs well on all three types of functions, which indicates that the adaptive strategies balance the exploitation and exploration capabilities of SMA. It is worth noting that MSMA-3 performs less well than SMA on the unimodal functions. This is due to the fact that the spiral search strategy expands the search of the space around each individual, which weakens the exploitation capability. However, the combination of the three strategies significantly improves the comprehensive performance of MSMA, further illustrating the importance of balancing exploitation and exploration to enhance the performance of an algorithm.

Algorithm 1: Pseudocode of MSMA.

    Initialize z, NP, t_max, SR_max, SR_min
    Initialize the positions of the slime mould X
    While (t < t_max)
        Calculate the fitness of X
        Generate the reverse population X^o by the chaotic opposition-based learning formula
        Calculate the fitness of X^o
        Select the better of (X, X^o)
        Calculate the weights W
        Update SR by the linear decreasing strategy
        Update a by the new nonlinear decreasing strategy
        For each search agent
            Update p, vb, vc, l
            Update the position by the modified position-updating formula
        End For
        t = t + 1
    End While
    Return the best fitness and X_best

Finally, to show the performance of each algorithm more visually, a radar plot is drawn based on the ranking of each algorithm. As shown in Figure 4, the smaller the area enclosed by a curve, the better the performance. MSMA has the smallest enclosed area, which means that MSMA has the best performance; on the contrary, SMA has the largest area. Tables 9 to 12 list the optimization results of each algorithm on F1-F13 for Dim = 30, 100, 500, and 1000. Table 13 then shows the results of the ten algorithms on the fixed-dimensional functions F14-F23. From the optimization results, MSMA achieves better results on most of the test functions.

Comparison and Analysis of Optimization Results.
Specifically, for the unimodal functions F1-F7, MSMA achieved satisfactory results in both low and high dimensions. MSMA stably obtains the theoretical optimal solutions of F1 and F3 in different dimensions. In comparison, SMA failed to reach the theoretical optimum on any of the test functions and performed weaker than MSMA. Comparing the test results across dimensions, we found that MSMA's performance does not drop much as the dimension increases. This indicates that MSMA has excellent local exploitation capability. For the multimodal functions F8-F13, MSMA stably achieves the theoretical optimal values on F9-F11 for Dim = 30, 100, 500, and 1000. In low dimensions (Dim = 30, 100), MSMA performs best in solving F8; as the dimension increases, MSMA ranks second, behind only SSA. MSMA has the best comprehensive performance on the multimodal functions, indicating that the improved strategies greatly enhance the global exploration capability of SMA.
Fixed-dimension functions are often used to test the ability of an algorithm to keep a balance between exploitation and exploration. MSMA performs best in six of the ten functions (F14, F16, F17, and F21-F23) when analyzing the mean and standard deviation. In addition, MSMA provides a better solution than SMA on all fixed-dimensional functions. Therefore, we can conclude that the MSMA proposed in this paper can well balance the exploitation and exploration capabilities with strong local-optimum avoidance.

Convergence and Stability Analysis.
In order to analyze the convergence performance of MSMA, convergence curves are plotted for the results in different dimensions, as shown in Figure 5. The convergence speed and convergence accuracy of MSMA are better than those of SMA in all dimensional cases. In addition, the convergence speed and accuracy of MSMA do not decrease much as the dimensionality increases. Therefore, the improvement strategies proposed in this paper can effectively improve the convergence speed of SMA and achieve better optimization results.
To analyze the distribution properties of MSMA on the fixed-dimensional functions, box plots were drawn. From Figure 6, it can be seen that the maximum, minimum, and median values of MSMA are almost the same on most of the test functions. In particular, for F14 and F17, MSMA produces no outliers. This shows that MSMA is superior to the comparison algorithms in terms of stability.

Statistical Test.
To statistically validate the differences between MSMA and the comparison algorithms, Wilcoxon's rank-sum test [48] and the Friedman test [49] were used. Table 14 presents the statistical results at a significance level of 0.05. The symbols "+/=/-" indicate that MSMA is better than, similar to, or worse than the comparison algorithm, respectively. As shown in Table 14, MSMA outperforms the comparative algorithms in the different cases, achieving results of 91/23/3, 96/18/3, 94/18/5, 93/19/5, and 66/15/9, confirming the significant superiority of MSMA over the other algorithms in most cases. Table 15 shows the Friedman statistics for F1-F13 in different dimensions and for the fixed-dimensional functions F14-F23. The statistics show that MSMA ranks first in all cases. Therefore, it can be considered that MSMA has the best performance among the compared algorithms.
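The two tests above can be reproduced with standard statistical routines. The sketch below uses synthetic per-run fitness values (the real data would be the 50 recorded runs per algorithm per function); the algorithm names and distribution parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ranksums, friedmanchisquare

rng = np.random.default_rng(3)

# Hypothetical best-fitness values over 50 independent runs of one function
msma  = rng.normal(0.0, 0.1, 50)
sma   = rng.normal(1.0, 0.1, 50)
other = rng.normal(1.0, 0.1, 50)

# Wilcoxon rank-sum: pairwise comparison at the 0.05 significance level;
# a significant difference with a lower mean corresponds to "+" in Table 14.
stat, p = ranksums(msma, sma)
msma_better = bool(p < 0.05 and msma.mean() < sma.mean())

# Friedman test: joint ranking of all algorithms over the runs.
fstat, fp = friedmanchisquare(msma, sma, other)
```

A small `p` (or `fp`) rejects the hypothesis that the algorithms perform equivalently, which is the basis for the "+/=/-" counts and the rankings reported in Tables 14 and 15.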

Engineering Design Problems.
Engineering design optimization problems are often solved using metaheuristic algorithms. In this section, MSMA is used to solve two engineering design problems: the welded beam design problem and the tension/compression spring design problem. The results obtained by MSMA are compared with those of other algorithms.

Welded Beam Design Problem.
The welded beam design problem is a classical structural optimization problem proposed by Coello [50]. As shown in Figure 7, the objective of this design problem is to minimize the manufacturing cost of the welded beam. The optimization variables include the weld thickness h (x1), joint beam length l (x2), beam height t (x3), and beam thickness b (x4). The mathematical model of the welded beam design problem is as follows:

$$\min f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14.0 + x_2),$$

subject to

$$g_1(x) = \tau(x) - \tau_{max} \le 0, \quad g_2(x) = \sigma(x) - \sigma_{max} \le 0, \quad g_3(x) = x_1 - x_4 \le 0,$$
$$g_4(x) = 0.10471 x_1^2 + 0.04811 x_3 x_4 (14.0 + x_2) - 5.0 \le 0, \quad g_5(x) = 0.125 - x_1 \le 0,$$
$$g_6(x) = \delta(x) - \delta_{max} \le 0, \quad g_7(x) = P - P_c(x) \le 0,$$

where $\tau(x)$ is the shear stress in the weld, $\sigma(x)$ the bending stress in the beam, $\delta(x)$ the end deflection, and $P_c(x)$ the buckling load, with load $P = 6000$ lb, length $L = 14$ in, $\tau_{max} = 13600$ psi, $\sigma_{max} = 30000$ psi, and $\delta_{max} = 0.25$ in. The results of MSMA on this problem are compared with those of other algorithms in Table 16. The results show that MSMA is the best algorithm for solving this problem; the optimal solution for the parameters is [0.205729, 3.470488, 9.036623, 0.205729], with a corresponding minimum cost of 1.724852.
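The objective and constraints can be evaluated directly; the sketch below follows the standard formulation of this benchmark (the constants and stress expressions are the textbook ones, an assumption since the source equations were not reproduced), and checks the solution reported in Table 16.

```python
import numpy as np

def welded_beam_cost(x):
    """Fabrication cost f(x) of the welded beam, x = [h, l, t, b]."""
    x1, x2, x3, x4 = x
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def welded_beam_constraints(x):
    """Constraint values g_i(x) <= 0 of the standard formulation (sketch)."""
    x1, x2, x3, x4 = x
    P, L, E, G = 6000.0, 14.0, 30e6, 12e6
    tau_max, sigma_max, delta_max = 13600.0, 30000.0, 0.25
    tau1 = P / (np.sqrt(2) * x1 * x2)                     # primary shear stress
    M = P * (L + x2 / 2)                                  # bending moment at the weld
    R = np.sqrt(x2**2 / 4 + ((x1 + x3) / 2) ** 2)
    J = 2 * np.sqrt(2) * x1 * x2 * (x2**2 / 12 + ((x1 + x3) / 2) ** 2)
    tau2 = M * R / J                                      # torsional shear stress
    tau = np.sqrt(tau1**2 + tau1 * tau2 * x2 / R + tau2**2)
    sigma = 6 * P * L / (x4 * x3**2)                      # bending stress
    delta = 4 * P * L**3 / (E * x3**3 * x4)               # end deflection
    Pc = (4.013 * E * np.sqrt(x3**2 * x4**6 / 36) / L**2
          * (1 - x3 / (2 * L) * np.sqrt(E / (4 * G))))    # buckling load
    return [tau - tau_max, sigma - sigma_max, x1 - x4,
            0.10471 * x1**2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
            0.125 - x1, delta - delta_max, P - Pc]
```

At the solution from Table 16, the shear-stress, bending-stress, and buckling constraints are essentially active (g near 0), which is characteristic of the known optimum of this problem.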

Tension/Compression Spring Design Problem.
The tension/compression spring design problem is a mechanical engineering design optimization problem. As shown in Figure 8, the objective of this problem is to minimize the weight of the spring. It involves three optimization variables: the wire diameter w (x1), mean coil diameter d (x2), and number of active coils L (x3). The mathematical model of this problem is as follows:

$$\min f(x) = (x_3 + 2) x_2 x_1^2,$$

subject to

$$g_1(x) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0, \quad g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0,$$
$$g_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0, \quad g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0.$$

The comparison results are shown in Table 17.
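As with the welded beam, the model is straightforward to evaluate. The sketch follows the standard formulation above; the check point is a near-optimal solution commonly reported in the literature for this benchmark, used here as an assumed reference rather than a value taken from Table 17.

```python
def spring_weight(x):
    """Spring weight f(x) = (x3 + 2) * x2 * x1^2, with x = [w, d, L]."""
    x1, x2, x3 = x
    return (x3 + 2) * x2 * x1**2

def spring_constraints(x):
    """Constraint values g_i(x) <= 0 of the standard formulation (sketch)."""
    x1, x2, x3 = x
    g1 = 1 - x2**3 * x3 / (71785 * x1**4)                  # deflection
    g2 = ((4 * x2**2 - x1 * x2) / (12566 * (x2 * x1**3 - x1**4))
          + 1 / (5108 * x1**2) - 1)                        # shear stress
    g3 = 1 - 140.45 * x1 / (x2**2 * x3)                    # surge frequency
    g4 = (x1 + x2) / 1.5 - 1                               # outer diameter
    return [g1, g2, g3, g4]
```

At such near-optimal points the deflection and shear-stress constraints are active (g1 and g2 near 0), which is why small perturbations of the wire diameter quickly become infeasible.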

Discussion
In this section, the reasons for the superior performance of MSMA are discussed. The results in Table 5 demonstrate that the chaotic opposition-based learning strategy can enhance the performance of SMA; the differing results of the chaotic mappings are caused by the different sequences each mapping generates. The results reported in Table 8 demonstrate that all three improvement strategies proposed in this paper can improve the performance of the algorithm. MSMA-1 is competitive on the unimodal functions.
This is mainly due to MSMA-1's use of chaotic mapping to enhance exploitation. MSMA-3 uses the spiral search strategy to improve performance on the multimodal functions; the strategy expands each individual's search of the space around itself, so population diversity is better maintained. MSMA-2 maintains a balance of exploitation and exploration through the adaptive strategies and thus ranks in the middle on both the multimodal and unimodal functions. The best overall performance of MSMA indicates that the three strategies complement each other and maintain a good balance between exploitation and exploration. This is also evidenced by the results of the Friedman test in Table 15.

Conclusions
In this paper, three improvement strategies are proposed to improve the performance of SMA. Firstly, a chaotic opposition-based learning strategy is used to enhance population diversity. Secondly, two adaptive parameter control strategies are proposed to effectively balance the exploitation and exploration of SMA. Finally, a spiral search strategy is used to expand the search around individuals and avoid falling into local optima. To evaluate the performance of the proposed MSMA, 23 classical test functions are used, including 13 multidimensional functions (Dim = 30, 100, 500, 1000) and 10 fixed-dimensional functions.
From the experimental results and the discussion above, the following conclusions can be drawn. The Sine map works best in combination with the opposition-based learning mechanism, and the chaotic opposition-based learning strategy enhances the exploitation capability of MSMA. The spiral search strategy significantly enhances MSMA's exploration capability and helps it avoid getting trapped in local optima. The two self-adaptive strategies maintain a good balance between exploitation and exploration.
Compared with the eight advanced algorithms, MSMA has better convergence accuracy, faster convergence speed, and more stable performance. MSMA thus has the potential to solve real-world optimization problems.
In future work, we will use MSMA to solve the multi-UAV path planning problem and the task assignment problem. Moreover, MSMA can be extended as a multiobjective optimization algorithm.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare no conflicts of interest.