A Hybrid SSA and SMA with Mutation Opposition-Based Learning for Constrained Engineering Problems

Based on the Salp Swarm Algorithm (SSA) and the Slime Mould Algorithm (SMA), a novel hybrid optimization algorithm, named the Hybrid Slime Mould Salp Swarm Algorithm (HSMSSA), is proposed to solve constrained engineering problems. SSA can obtain good results on some optimization problems; however, it easily suffers from local minima and low population diversity. SMA offers strong global exploration and good robustness, but its convergence rate is too slow to find satisfactory solutions efficiently. Thus, in this paper, considering the characteristics and advantages of both of the above optimization algorithms, SMA is integrated into the leader position updating equations of SSA, which allows helpful information to be shared so that the proposed algorithm can exploit both algorithms' advantages to enhance global optimization performance. Furthermore, Levy flight is utilized to enhance the exploration ability. It is worth noting that a novel strategy called mutation opposition-based learning is proposed to enhance the performance of the hybrid optimization algorithm in avoiding premature convergence, balancing the exploration and exploitation phases, and finding a satisfactory global optimum. To evaluate the efficiency of the proposed algorithm, HSMSSA is applied to 23 different benchmark functions of the unimodal and multimodal types. Additionally, five classical constrained engineering problems are utilized to evaluate the proposed technique's practical abilities. The simulation results show that the HSMSSA method is more competitive and more effective for real-world constrained problems than SMA, SSA, and the other comparative algorithms. In the end, we also suggest some potential areas for future studies, such as feature selection and multilevel threshold image segmentation.


Introduction
In recent years, metaheuristic algorithms have attracted wide attention from a large number of scholars. Compared with traditional optimization algorithms, the concept of metaheuristic algorithms is simple. Besides, they are flexible and can bypass local optima. Thus, metaheuristics have been successfully applied in different fields to solve various complex real-world optimization problems [1][2][3].
It is worth noting that the most widely used swarm-based optimization algorithm is PSO [25]. PSO simulates the behavior of birds flying together in flocks. During the search, all particles follow the best solutions in their paths. Cacciola et al. [26] discussed the problem of corrosion profile reconstruction starting from electrical data, in which PSO was utilized to obtain the image of the reconstructed corrosion profile. The result shows that PSO obtains the optimal solution compared with LSM and takes the least time. This allows us to recognize the huge potential of the optimization algorithm.
Salp Swarm Algorithm (SSA) [27] is a swarm-based algorithm proposed in 2017. SSA is inspired by the swarm behavior, navigation, and foraging of salps in the ocean. Since SSA has fewer parameters and is easier to implement than other algorithms, SSA has been applied to many optimization problems, such as feature selection, image segmentation, and constrained engineering problems. However, like other metaheuristic algorithms, SSA may easily become trapped in local minima and suffer from low population diversity. Therefore, many improved variants have been proposed to enhance the performance of SSA in many fields. Tubishat et al. [28] presented a Dynamic SSA (DSSA), which shows better accuracy than SSA in feature selection. Salgotra et al. [29] proposed a self-adaptive SSA to enhance exploitation ability and convergence speed. Neggaz et al. [30] proposed an improved leader in SSA using the Sine Cosine Algorithm and a disrupt operator for feature selection. Jia and Lang [31] presented an enhanced SSA with a crossover scheme and Levy flight to improve the movement patterns of salp leaders and followers. There have also been other attempts at hybridizing SSA. Saafan and El-gendy [32] proposed a hybrid Improved Whale Optimization Salp Swarm Algorithm (IWOSSA). The IWOSSA achieves a better balance between the exploration and exploitation phases and effectively avoids premature convergence. Singh et al. [33] developed a hybrid of SSA with PSO, which integrated the advantages of SSA and PSO to eliminate trapping in local optima and unbalanced exploitation. Abadi et al. [34] proposed a hybrid approach combining SSA with GA, which could obtain good results on some optimization problems.
Slime Mould Algorithm (SMA) [35] is a recent swarm intelligence algorithm proposed in 2020. This algorithm simulates the oscillation mode and the foraging of Slime Mould in nature. SMA has a unique search mode, which keeps the algorithm from falling into local optima, and has superior global exploration capability. The approach has been applied to real-world optimization problems such as feature selection [36], parameter optimization of fuzzy systems [37], multilevel threshold image segmentation [38], control schemes [39], and parallel connected multistack fuel cells [40]. Therefore, based on the capabilities of both of the above algorithms, we perform a hybrid operation to improve the performance of SMA and SSA and propose a new hybrid optimization algorithm (HSMSSA) to speed up the convergence rate and enhance the overall optimization performance. The specific method is to integrate SMA in the leader role of SSA while retaining the exploitation phase of SSA. At the same time, inspired by the significant performance of opposition-based learning and quasi-opposition-based learning, we propose a new strategy named mutation opposition-based learning (MOBL), which switches the algorithm between opposition-based learning and quasi-opposition-based learning through a mutation rate to increase the diversity of the population and speed up the convergence rate. In addition, Levy flight is utilized to improve SMA's exploration capability and balance the exploration and exploitation phases of the algorithm. The proposed HSMSSA algorithm can thus improve both the exploration and exploitation abilities. The proposed HSMSSA is tested on 23 different benchmark functions and compared with other optimization algorithms. Furthermore, five constrained engineering problems are also utilized to evaluate HSMSSA's capability on real-world optimization problems. The experimental results illustrate that HSMSSA possesses a superior capability to search for the global minimum and achieves lower-cost engineering design results than other state-of-the-art metaheuristic algorithms.
The remainder of this paper is organized as follows. Section 2 provides a brief overview of SSA, SMA, Levy flight, and the mutation opposition-based learning strategy. Section 3 describes the proposed hybrid algorithm in detail. In Section 4, the details of the benchmark functions, the parameter settings of the selected algorithms, the simulation experiments, and the results analysis are introduced. Conclusions and prospects are given in Section 5.

Salp Swarm Algorithm.
In the deep sea, salps live in groups and form a salp chain to move and forage. In the salp chain, there are leaders and followers. The leader moves towards the food and guides the followers. During movement, leaders explore globally, while followers search thoroughly locally [27]. The shapes of a salp and a salp chain are shown in Figure 1.

Leader Salps.
The front salp of the chain is called the leader, and the following equation is used to update the position of the salp leader:

X^1_j = F_j + c1((UB_j − LB_j) r1 + LB_j),  if r2 ≥ 0.5,
X^1_j = F_j − c1((UB_j − LB_j) r1 + LB_j),  if r2 < 0.5,    (1)

where X^1_j and F_j represent the new position of the leader and the food source in the jth dimension, UB_j and LB_j are the upper and lower bounds of the jth dimension, and r1 and r2 are randomly generated numbers in the interval [0, 1]. It is worth noting that c1 is essential for SSA because it balances exploration and exploitation during the search process:

c1 = 2 e^(−(4t/T)^2),    (2)

where t is the current iteration and T is the max iteration.
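As a concrete illustration, the leader update can be sketched in a few lines of NumPy. The function name and the roles assigned to r1 (step scaling) and r2 (sign choice) are assumptions of this sketch:

```python
import numpy as np

def leader_update(F, lb, ub, t, T, rng):
    """One SSA leader-salp update (a sketch of equations (1)-(2)).

    F      : food-source (best) position so far, shape (dim,)
    lb, ub : per-dimension lower/upper bounds, shape (dim,)
    t, T   : current and maximum iteration
    """
    # c1 balances exploration and exploitation; it decays with iterations.
    c1 = 2.0 * np.exp(-(4.0 * t / T) ** 2)
    r1 = rng.random(F.shape)   # random step scaling in [0, 1]
    r2 = rng.random(F.shape)   # random sign switch in [0, 1]
    step = c1 * ((ub - lb) * r1 + lb)
    # Move toward or away from the food source with equal probability.
    return np.where(r2 >= 0.5, F + step, F - step)
```

Early in the run c1 is close to 2, producing large exploratory steps; near the final iteration it approaches 0 and the leader stays close to the food source.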

Follower Salps.
To update the position of the followers, a new concept based on Newton's law of motion is introduced:

X^i_j = (1/2) g t'^2 + ω0 t',    (3)

where X^i_j represents the position of the ith follower salp in the jth dimension and g and ω0 indicate the acceleration and the initial velocity, respectively. Because time in optimization is measured in iterations, with the discrepancy between iterations equal to 1 and ω0 = 0, the updating process of the followers can be expressed as follows:

X^i_j = (1/2)(X^i_j + X^{i−1}_j).    (4)

The pseudocode of SSA is presented in Algorithm 1.
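The chained follower update above can be sketched as a minimal NumPy routine; the in-place loop means each follower uses its predecessor's already-updated position, which reproduces the chain effect:

```python
import numpy as np

def follower_update(X):
    """SSA follower update (equation (4)): each follower moves to the
    midpoint between itself and the salp in front of it.

    X : population, shape (N, dim); row 0 is the leader.
    """
    X = X.copy()  # leave the caller's population untouched
    for i in range(1, len(X)):
        # In-place: X[i-1] has already been updated, so the chain propagates.
        X[i] = 0.5 * (X[i] + X[i - 1])
    return X
```

For a chain at positions 0, 2, 4 (one dimension), the first follower moves to 1 and the second to 2.5, since it averages with the already-moved follower ahead of it.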

Slime Mould Algorithm.
The main idea of SMA is inspired by the behavior and morphological changes of Slime Mould during foraging. Slime Mould can dynamically change its search mode based on the quality of food sources. If the food source has high quality, the Slime Mould uses a region-limited search method. If the food concentration is low, the Slime Mould explores other food sources in the search space. Furthermore, even if the Slime Mould has found a high-quality food source, it still separates some individuals to explore other areas of the region [35]. The behavior of Slime Mould can be mathematically described as follows:

X(t+1) = r4 · (UB − LB) + LB,                     r3 < z,
X(t+1) = X_b(t) + vb · (W · X_A(t) − X_B(t)),     r5 < p,
X(t+1) = vc · X(t),                               r5 ≥ p,    (5)

where parameters r3, r4, and r5 are random values in the range of 0 to 1, UB and LB indicate the upper and lower bounds of the search space, and z is a constant. X_b(t) represents the best position obtained over all iterations, X_A(t) and X_B(t) represent two individuals selected randomly from the population, and X(t) represents the location of the Slime Mould. vc decreases linearly from one to zero, and vb is an oscillation parameter in the range [−a, a], in which a is calculated as follows:

a = arctanh(−(t/T) + 1).    (6)

The coefficient W is a very important parameter, which simulates the oscillation frequency at different food concentrations so that the Slime Mould can approach food more quickly when it finds high-quality food. The formula of W is as follows:

W(SmellIndex(i)) = 1 + r6 · log((bF − S(i))/(bF − wF) + 1),  condition,
W(SmellIndex(i)) = 1 − r6 · log((bF − S(i))/(bF − wF) + 1),  others,    (7)

where i ∈ 1, 2, ..., N and S(i) represents the fitness of X, condition indicates that S(i) ranks in the first half of the Slime Mould population, and r6 is a random number uniformly generated in the interval [0, 1]. bF represents the optimal fitness obtained in the current iterative process, wF represents the worst fitness value obtained in the current iterative process, and SmellIndex denotes the sequence of sorted fitness values (ascending for a minimization problem).
The p parameter can be described as follows:

p = tanh |S(i) − DF|,    (8)

where DF represents the best fitness over all iterations. Figure 2 visualizes the general logic of SMA. The pseudocode of SMA is presented in Algorithm 2.
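Putting the pieces above together, one population-wide SMA update might look like the following NumPy sketch. The finite handling of a at the first iteration, the use of the current best fitness in place of DF, and the default z = 0.03 are assumptions of this sketch rather than the paper's exact choices:

```python
import numpy as np

def sma_step(X, fitness, X_best, t, T, lb, ub, z=0.03, rng=None):
    """One population-wide SMA update: a sketch of the piecewise rule (5)
    with the weight W (7) and probability p (8) described above."""
    rng = np.random.default_rng() if rng is None else rng
    N, dim = X.shape
    bF, wF = fitness.min(), fitness.max()
    denom = (bF - wF) if bF != wF else -1e-12
    order = np.argsort(fitness)                   # SmellIndex (ascending)
    # Oscillation weight W: amplified for the better half, damped otherwise.
    frac = np.log10((bF - fitness) / denom + 1.0)  # in [0, log10(2)]
    r6 = rng.random((N, dim))
    W = np.empty((N, dim))
    half, rest = order[: N // 2], order[N // 2:]
    W[half] = 1.0 + r6[half] * frac[half, None]
    W[rest] = 1.0 - r6[rest] * frac[rest, None]
    # vb oscillates in [-a, a]; vc shrinks linearly toward zero.
    a = np.arctanh(1.0 - (t + 1) / (T + 1))       # shifted to stay finite
    vb = rng.uniform(-a, a, (N, dim))
    vc = rng.uniform(-(1 - t / T), 1 - t / T, (N, dim))
    p = np.tanh(np.abs(fitness - bF))             # bF used as a stand-in for DF
    X_new = np.empty_like(X)
    for i in range(N):
        if rng.random() < z:                      # random restart branch
            X_new[i] = lb + rng.random(dim) * (ub - lb)
        elif rng.random() < p[i]:                 # approach the best agent
            A, B = rng.integers(0, N, 2)
            X_new[i] = X_best + vb[i] * (W[i] * X[A] - X[B])
        else:                                     # contract in place
            X_new[i] = vc[i] * X[i]
    return np.clip(X_new, lb, ub)
```

The clip at the end is one common way to enforce the search-space bounds after the update.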

Levy Flight.
Levy flight is an effective strategy for metaheuristic algorithms and has been successfully incorporated into many of them [41][42][43][44]. Levy flight is a class of non-Gaussian random processes following the Levy distribution. It alternates between short-distance steps and occasional long-distance jumps, as can be inferred from Figure 3. The formula of Levy flight is as follows:

Levy = 0.01 × (r7 × σ) / |r8|^(1/β),    (9)

σ = [Γ(1 + β) sin(πβ/2) / (Γ((1 + β)/2) · β · 2^((β−1)/2))]^(1/β),    (10)

where r7 and r8 are random values and β is a constant equal to 1.5.
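A common way to sample such steps is Mantegna's algorithm, sketched below; note that in this standard formulation r7 and r8 are drawn from normal distributions rather than from [0, 1]:

```python
import numpy as np
from math import gamma, sin, pi

def levy(dim, beta=1.5, rng=None):
    """Draw one Levy-flight step per dimension (Mantegna's algorithm):
    mostly short steps with occasional very long jumps."""
    rng = np.random.default_rng() if rng is None else rng
    # Scale sigma from equation (10).
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    r7 = rng.normal(0.0, sigma, dim)   # numerator sample
    r8 = rng.normal(0.0, 1.0, dim)     # denominator sample
    return 0.01 * r7 / np.abs(r8) ** (1 / beta)
```

Because r8 can come arbitrarily close to zero, the ratio has a heavy tail, which is exactly what produces the occasional long-distance jumps.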

Mutation Opposition-Based Learning.
Opposition-based learning (OBL) was proposed by Tizhoosh in 2005 [45]. The essence of OBL is selecting the better of the current solution and its opposite solution for the next iteration. The OBL strategy has been successfully used in a variety of metaheuristic algorithms [46][47][48][49][50][51] to improve the ability to avoid local optima stagnation, and its mathematical expression is as follows:

X' = LB + UB − X.    (11)

Quasi-opposition-based learning (QOBL) [52] is an improved version of OBL, which applies quasi-opposite points instead of opposite points. The points produced through QOBL are more likely to be close to unknown solutions than the points created by OBL. The mathematical formula of QOBL is as follows:

X^q = rand((LB + UB)/2, LB + UB − X),    (12)

where rand(a, b) denotes a random number uniformly distributed between a and b.

(1) Initialize the population size N and max iteration T;
(2) Initialize the positions of salps X_i (i = 1, 2, ..., N)
(3) While (t ≤ T)
(4) Calculate fitness of each salp;
(5) Denote the best solution as F
(6) Update c1 by equation (2);
(7) For i = 1 to N do
(8) If (i == 1) then
(9) Update position of leader salp by equation (1)
(10) Else
(11) Update position of follower salp by equation (4)
(12) End if
(13) End for
(14) t = t + 1;
(15) End While
(16) Return the best solution F;
ALGORITHM 1: Pseudocode of Salp Swarm Algorithm.

Considering the superior performance of these two kinds of opposition-based learning, we propose mutation opposition-based learning (MOBL) by combining a mutation rate with both of them. Through the mutation rate, we can fully exploit the characteristics of OBL and QOBL and effectively enhance the ability of the algorithm to jump out of local optima. Figure 4 is an MOBL example, in which Figure 4(a) shows an objective function and Figure 4(b) displays three candidate solutions and their OBL or QOBL solutions. The mathematical formula is as follows:

X_new = X' (OBL),    r10 < rate,
X_new = X^q (QOBL),  r10 ≥ rate,    (13)

where rate is the mutation rate, which we set to 0.1, and r10 is a random number in [0, 1].
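The MOBL switch can be sketched as follows; making the mutation decision per individual and sampling the quasi-opposite point uniformly between the search-space centre and the opposite point are this sketch's reading of the strategy:

```python
import numpy as np

def mobl(X, lb, ub, rate=0.1, rng=None):
    """Mutation opposition-based learning (a sketch of equation (13)):
    with probability `rate` take the plain opposite point (OBL),
    otherwise a quasi-opposite point (QOBL) sampled between the
    search-space centre and the opposite point."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.atleast_2d(X)
    opposite = lb + ub - X                        # OBL point, equation (11)
    centre = (lb + ub) / 2.0                      # search-space midpoint
    lo = np.minimum(centre, opposite)
    hi = np.maximum(centre, opposite)
    quasi = rng.uniform(lo, hi)                   # QOBL point, equation (12)
    mask = (rng.random(len(X)) < rate)[:, None]   # mutate with prob `rate`
    return np.where(mask, opposite, quasi)
```

Both candidate points stay inside [LB, UB] whenever the input does, so no extra boundary repair is needed after this step.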

Details of HSMSSA.
In SSA, the population is divided into leader salps and follower salps: leader salps are the first half of the salp chain, and follower salps follow the leaders. However, the leader salp has poor randomness and easily falls into local optima. In the SMA algorithm, the Slime Mould selects different search modes according to the positive and negative feedback of the current food concentration and has a certain probability of isolating some individuals to explore other regions of the search space. These mechanisms increase the randomness of the Slime Mould and enhance its exploration ability. The vb parameter is utilized to realize the oscillation mode of the Slime Mould and lies in the range [−a, a]. However, vb has the drawback of low randomness, which cannot effectively simulate the process of the Slime Mould looking for food sources. Therefore, we introduce Levy flight into the exploration phase to further enhance the exploration ability. Next, we integrate SMA into SSA, change the position update method of the leader salps, and further improve the randomness of the algorithm through Levy flight. For the followers, we propose mutation opposition-based learning to enhance population diversity and increase the ability of the algorithm to jump out of local optima. The mathematical formula of the leader salps is as follows:

X_leader(t+1) = r4 · (UB − LB) + LB,                         r3 < z,
X_leader(t+1) = X_b(t) + vb · Levy · (W · X_A(t) − X_B(t)),  r5 < p,
X_leader(t+1) = vc · X(t),                                   r5 ≥ p.    (14)

The pseudocode of HSMSSA is given in Algorithm 3, and the summarized flowchart is displayed in Figure 5. As shown in Algorithm 3, the position of the population is initially generated randomly. Then, each individual's fitness is calculated. For the entire population in each iteration, the parameter W is calculated using equation (7). The search agents of population size N are assigned to the two algorithms, which utilizes the advantages of SSA and SMA and realizes the sharing of helpful information to achieve global optimization.
If the search agent belongs to the first half of the population, the position will be updated using equation (14) in SMA with Levy flight. Otherwise, the position is determined using equation (4) and MOBL. Finally, if the termination criteria are satisfied, the algorithm returns the best solution found so far; else the previous steps are repeated.
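For orientation, the overall loop can be condensed into a small self-contained sketch. It uses a crude heavy-tailed (Cauchy) step in place of the exact Levy formula, a simplified linear contraction in place of vb/vc, and a greedy MOBL acceptance, so it illustrates the control flow rather than reproducing HSMSSA exactly:

```python
import numpy as np

def hsmssa(obj, lb, ub, N=30, T=200, z=0.03, rate=0.1, seed=0):
    """Minimal sketch of the HSMSSA loop: the first half of the population
    follows an SMA-style leader update, the second half follows the SSA
    follower rule plus mutation opposition-based learning."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, (N, dim))
    fit = np.apply_along_axis(obj, 1, X)
    best = X[fit.argmin()].copy()
    for t in range(T):
        p = np.tanh(np.abs(fit - fit.min()))
        for i in range(N):
            if i < N // 2:                           # SMA-style leader salps
                if rng.random() < z:                 # random restart
                    X[i] = rng.uniform(lb, ub)
                elif rng.random() < p[i]:            # move around the best
                    A, B = rng.integers(0, N, 2)
                    step = 0.01 * rng.standard_cauchy(dim)  # Levy-like jump
                    X[i] = best + step * (X[A] - X[B])
                else:                                # contract in place
                    X[i] = (1 - t / T) * X[i]
            else:                                    # SSA followers + MOBL
                X[i] = 0.5 * (X[i] + X[i - 1])
                cand = lb + ub - X[i]                # OBL point
                if rng.random() >= rate:             # QOBL branch
                    c = (lb + ub) / 2
                    cand = rng.uniform(np.minimum(c, cand), np.maximum(c, cand))
                if obj(cand) < obj(X[i]):            # greedy selection
                    X[i] = cand
            X[i] = np.clip(X[i], lb, ub)
        fit = np.apply_along_axis(obj, 1, X)
        if fit.min() < obj(best):
            best = X[fit.argmin()].copy()
    return best, obj(best)
```

On a simple sphere function this skeleton converges quickly, since the contraction branch pulls leaders toward the origin while the MOBL step keeps the followers diverse.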

Experimental Results and Discussion
This section compares HSMSSA with some state-of-the-art metaheuristic algorithms on 23 benchmark functions to validate its performance. Moreover, five engineering design problems are employed as examples of real-world applications. The experiments ran on Windows 10 with 24 GB RAM and an Intel(R) i5-9500. All simulations were carried out using MATLAB R2020b.

Definition of 23 Benchmark Functions.
To assess HSMSSA's ability of exploration, exploitation, and escaping from local optima, 23 benchmark functions, including unimodal and multimodal functions, are tested [27]. The unimodal benchmark functions (F1-F7) are utilized to examine the exploitation ability of HSMSSA; their description is shown in Table 1. The multimodal and fixed-dimension multimodal benchmark functions (F8-F23), shown in Tables 2 and 3, are used to test the exploration ability of HSMSSA.
To make the experimental results more representative, HSMSSA is compared with the basic SMA [35] and SSA [27], as well as AO [24], AOA [15], WOA [22], SCA [14], and MVO [10]. For all tests, we set the population size N = 30, the dimension size D = 30, and the maximum iteration T = 500 for all algorithms, with 30 independent runs. The parameter settings of each algorithm are shown in Table 4. Average results and standard deviations are employed to evaluate the results. Note that the best results are bolded.

Evaluation of Exploitation Capability (F1-F7).
As we can see, unimodal benchmark functions have only one global optimum. These functions allow us to evaluate the exploitation ability of metaheuristic algorithms. It can be seen from Table 5 that HSMSSA is very competitive with SMA, SSA, and the other metaheuristic algorithms. In particular, HSMSSA achieves much better results than the other metaheuristic algorithms except on F6. For F1-F4, HSMSSA finds the theoretical optimum. For all unimodal functions except F5, HSMSSA obtains the smallest average values and standard deviations compared to the other algorithms, which indicates the best accuracy and stability. Hence, the exploitation capability of the proposed HSMSSA algorithm is excellent.

Evaluation of Exploration Capability (F8-F23).
Unlike unimodal functions, multimodal functions have many local optima. Thus, this kind of test problem is very useful for evaluating the exploration capability of an optimization algorithm. The results shown in Table 5 for functions F8-F23 indicate that HSMSSA also has an excellent exploration capability. In fact, we can see that HSMSSA finds the theoretical optimum on F9, F11, F16-F17, and F19-F23. These results reveal that HSMSSA can also provide superior exploration capability.

Analysis of Convergence Behavior.
The convergence curves of selected functions are shown in Figure 6, which illustrates the convergence rate of the algorithms. It can be seen that HSMSSA shows competitive performance compared to the other state-of-the-art algorithms. HSMSSA presents a faster convergence speed than all other algorithms on F7-F13, F15, and F19-F23. For the other benchmark functions, HSMSSA shows a better capability of local optima avoidance than the comparison algorithms on F5 and F6.
(1) Initialize the population size N and max iteration T;
(2) Initialize the positions of salps X_i (i = 1, 2, ..., N)
(3) While (t ≤ T)
(4) Calculate fitness of each salp;
(5) Denote the best solution as X_best
(6) Update W by equation (7);
(7) For i = 1 to N do
(8) If (i ≤ N/2) then
(9) If r3 < z
(10) Update position by equation (14)
(11) Else
(12) If r5 < p
(13) Update position by equation (14)
(14) Else
(15) Update position by equation (14)
(16) End if
(17) End if
(18) Else
(19) Update position by equation (4)
(20) If r10 < 0.1
(21) Update the position of MOBL using equation (13)
(22) Else
(23) Update the position of MOBL using equation (13)
(24) End if
(25) End if
(26) Check if the position goes out of the search space boundary and bring it back.
(27) Select the best position into the next iteration.
(28) End for
(29) t = t + 1
(30) End while
(31) Return X_best
ALGORITHM 3: Pseudocode of HSMSSA.

As shown in Figure 5, the first search agent constantly oscillates in the first dimension of the search space, which suggests that the search agent widely investigates the most promising areas and better solutions. This powerful search capability likely comes from the Levy flight and MOBL strategies. The average fitness shows whether exploration and exploitation improve upon the first random population and whether an accurate approximation of the global optimum can be found in the end. It can be noticed that the average fitness oscillates in the early iterations, then decreases abruptly and begins to level off. The average fitness maps thus show the significant improvement over the first random population and the final acquisition of an accurate approximation of the global optimum. Finally, the convergence curves reveal the best fitness value found by the search agents after each iteration. By observing these, HSMSSA shows a very fast convergence speed.

Wilcoxon Signed-Rank Test.
Because the algorithm results are random, we need to carry out statistical tests to prove that the results are statistically significant. We use the Wilcoxon signed-rank (WSR) test to evaluate the statistical significance of two algorithms at the 5% significance level [53]. The WSR test is a statistical test applied to two different sets of results to search for significant differences. As is well known, a p-value less than 0.05 indicates that one algorithm is significantly superior to the other; otherwise, the obtained results are not statistically significant. The calculated results of the Wilcoxon signed-rank test between HSMSSA and the other algorithms for each benchmark function are listed in Table 6. HSMSSA outperforms all other algorithms to varying degrees. This superiority is statistically significant on unimodal functions F2 and F4-F7, which indicates that HSMSSA possesses high exploitation. HSMSSA also shows better results on multimodal functions F8-F23, suggesting that HSMSSA has a high exploration capability. To sum up, HSMSSA provides better results than the other comparative algorithms for almost all benchmark functions.
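In practice the test can be run directly on the paired per-run results, for example with SciPy; the run values below are synthetic, for illustration only:

```python
import numpy as np
from scipy.stats import wilcoxon

# Best-fitness values from 30 paired runs of two algorithms on one
# benchmark function (synthetic numbers, for illustration only).
rng = np.random.default_rng(42)
runs_hsmssa = rng.normal(1e-3, 1e-4, 30)   # consistently lower (better)
runs_other = rng.normal(5e-2, 1e-2, 30)

# Two-sided Wilcoxon signed-rank test on the paired differences.
stat, p_value = wilcoxon(runs_hsmssa, runs_other)
print(f"p = {p_value:.2e}, significant at 5%: {p_value < 0.05}")
```

Because the test uses the ranks of the paired differences rather than their raw values, it makes no normality assumption, which suits the heavy-tailed fitness distributions metaheuristics typically produce.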

Experiments on Engineering Design Problems.
In this section, HSMSSA is evaluated on five classical engineering design problems: the pressure vessel design problem, the tension spring design problem, the three-bar truss design problem, the speed reducer problem, and the cantilever beam design. To address these problems, we set the population size N = 30 and the maximum iteration T = 500. The results of HSMSSA are compared to various state-of-the-art algorithms in the literature. The parameter settings are the same as in the previous numerical experiments.

Pressure Vessel Design Problem.
The pressure vessel design problem [53] is to minimize the total cost of a cylindrical pressure vessel that matches the pressure requirements and forms the pressure vessel shown in Figure 8. Four parameters in this problem need to be optimized: the thickness of the shell (Ts), the thickness of the head (Th), the inner radius (R), and the length of the cylindrical section without the head (L), as shown in Figure 8.

Tension Spring Design Problem.
This problem [27] tries to minimize the weight of the tension spring, and there are three parameters that need to be optimized: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). Figure 9 shows the structure of the tension spring.
The mathematical formulation of this problem can be written as follows. Consider x = [x1, x2, x3] = [d, D, N].

Minimize
f(x) = (x3 + 2) x2 x1^2,

subject to
g1(x) = 1 − x2^3 x3 / (71785 x1^4) ≤ 0,
g2(x) = (4 x2^2 − x1 x2) / (12566 (x2^3 x1 − x1^4)) + 1/(5108 x1^2) − 1 ≤ 0,
g3(x) = 1 − 140.45 x1 / (x2^2 x3) ≤ 0,
g4(x) = (x1 + x2)/1.5 − 1 ≤ 0,

with variable ranges 0.05 ≤ x1 ≤ 2, 0.25 ≤ x2 ≤ 1.3, and 2 ≤ x3 ≤ 15.

Results of HSMSSA for solving the tension spring design problem are listed in Table 8, where they are compared with SMA, SSA, AO, AOA, WOA, SCA, and MVO. It is evident that HSMSSA obtained the best results compared to all other algorithms.
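For optimizers like HSMSSA that work on unconstrained objectives, constrained problems of this kind are typically handled with a penalty function. The sketch below encodes the standard tension spring formulation with a static penalty; the weight value is an arbitrary choice of this sketch:

```python
import numpy as np

def spring_cost(x):
    """Tension spring weight: f = (N + 2) * D * d^2, x = [d, D, N]."""
    d, D, N = x
    return (N + 2) * D * d ** 2

def spring_constraints(x):
    """Standard constraint set g_i(x) <= 0 for this benchmark."""
    d, D, N = x
    return np.array([
        1 - (D ** 3 * N) / (71785 * d ** 4),
        (4 * D ** 2 - d * D) / (12566 * (D ** 3 * d - d ** 4))
        + 1 / (5108 * d ** 2) - 1,
        1 - (140.45 * d) / (D ** 2 * N),
        (D + d) / 1.5 - 1,
    ])

def penalized(x, w=1e6):
    """Static-penalty objective: infeasible designs pay w * squared
    violation, so an unconstrained optimizer can handle the problem."""
    g = spring_constraints(x)
    return spring_cost(x) + w * np.sum(np.maximum(g, 0.0) ** 2)
```

For a feasible design the penalty term vanishes and the penalized objective equals the raw weight; any violation grows quadratically, steering the search back toward the feasible region.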

4.2.3. Three-Bar Truss Design Problem.
Three-bar truss design is a complex problem in the field of civil engineering [49]. The goal of this problem is to achieve the minimum weight in the truss design. Figure 10 shows the design of this problem. The formulation of this problem can be described as follows. Consider x = [x1, x2].

Minimize
f(x) = (2√2 x1 + x2) · l,

subject to
g1(x) = ((√2 x1 + x2) / (√2 x1^2 + 2 x1 x2)) P − σ ≤ 0,
g2(x) = (x2 / (√2 x1^2 + 2 x1 x2)) P − σ ≤ 0,
g3(x) = (1 / (x1 + √2 x2)) P − σ ≤ 0.

The variable range is 0 ≤ x1, x2 ≤ 1, where l = 100 cm, P = 2 kN/cm^2, and σ = 2 kN/cm^2.

Results of HSMSSA for solving the three-bar truss design problem are listed in Table 9, where they are compared with SMA, SSA, AO, AOA, WOA, SCA, and MVO. It can be observed that HSMSSA has an excellent ability to solve problems in a confined space.

Speed Reducer Problem.
In this problem [15], the total weight of the reducer is minimized by optimizing seven variables. Figure 11 shows the design of this problem; its mathematical formulation, with the objective, constraints, and variable ranges, can be found in [15].

Figure 9: Tension spring design problem.
Figure 10: Three-bar truss design problem.

The comparison results are listed in Table 10, which shows the advantage of HSMSSA in realizing the minimum total weight for this problem.

Cantilever Beam Design.
Cantilever beam design is a type of concrete engineering problem. This problem aims to determine the minimal total weight of the cantilever beam by optimizing the hollow square cross-section parameters [24]. Figure 12 illustrates the design of this problem, and the mathematical formulation is as follows. Consider x = [x1, x2, x3, x4, x5].

Minimize
f(x) = 0.0624 (x1 + x2 + x3 + x4 + x5),

subject to
g(x) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 − 1 ≤ 0.

The variable range is 0.01 ≤ x1, x2, x3, x4, x5 ≤ 100. The results are shown in Table 11. From this table, we can see that the performance of HSMSSA is better than that of all other algorithms and the obtained total weight is minimal.
In summary, this section demonstrates the superiority of the proposed HSMSSA algorithm across different characteristics and real case studies. HSMSSA is able to outperform the basic SMA and SSA and other well-known algorithms with very competitive results, which derive from the robust exploration and exploitation capabilities of HSMSSA. The excellent performance in solving industrial engineering design problems indicates that HSMSSA can be widely used in real-world optimization problems.

Conclusion
In this paper, a Hybrid Slime Mould Salp Swarm Algorithm (HSMSSA) is proposed by combining the whole of SMA as the leaders and the exploitation phase of SSA as the followers. At the same time, two strategies, Levy flight and mutation opposition-based learning, are incorporated to enhance the exploration and exploitation capabilities of HSMSSA. The 23 standard benchmark functions are utilized to evaluate this algorithm, analyzing its exploration, exploitation, and local optima avoidance capabilities. The experimental results show competitive advantages compared to other state-of-the-art metaheuristic algorithms, demonstrating that HSMSSA performs better than the others. Five engineering design problems are solved as well to further verify the superiority of the algorithm, and the results are also very competitive with other metaheuristic algorithms. The proposed HSMSSA can produce very effective results for complex benchmark functions and constrained engineering problems. In the future, HSMSSA can be applied to real-world optimization problems such as multiobjective problems, feature selection, multilevel threshold image segmentation, convolutional neural networks, or any problem that belongs to the NP-complete or NP-hard class.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
All authors declare that there are no conflicts of interest.