Abstract

Based on the Salp Swarm Algorithm (SSA) and the Slime Mould Algorithm (SMA), a novel hybrid optimization algorithm, named the Hybrid Slime Mould Salp Swarm Algorithm (HSMSSA), is proposed to solve constrained engineering problems. SSA can obtain good results on some optimization problems, but it easily falls into local minima and suffers from low population diversity. SMA excels at global exploration and offers good robustness, but its convergence rate is too slow to find satisfactory solutions efficiently. Thus, in this paper, considering the characteristics and advantages of both optimization algorithms, SMA is integrated into the leader position updating equations of SSA so that helpful information can be shared and the proposed algorithm can exploit the advantages of both to enhance global optimization performance. Furthermore, Levy flight is utilized to enhance the exploration ability. It is worth noting that a novel strategy called mutation opposition-based learning is proposed to help the hybrid algorithm avoid premature convergence, balance the exploration and exploitation phases, and find a satisfactory global optimum. To evaluate the efficiency of the proposed algorithm, HSMSSA is applied to 23 benchmark functions of the unimodal and multimodal types. Additionally, five classical constrained engineering problems are utilized to evaluate the practical abilities of the proposed technique. The simulation results show that HSMSSA is more competitive and more effective on real-world constrained problems than SMA, SSA, and the other comparative algorithms. In the end, we also point out some potential areas for future study, such as feature selection and multilevel threshold image segmentation.

1. Introduction

In recent years, metaheuristic algorithms have attracted wide attention from scholars. Compared with traditional optimization algorithms, metaheuristic algorithms are conceptually simple. Besides, they are flexible and can bypass local optima. Thus, metaheuristics have been successfully applied in different fields to solve various complex real-world optimization problems [1–3].

Metaheuristic algorithms include three main categories: evolution-based, physics-based, and swarm-based techniques. Evolution-based methods are inspired by the laws of evolution in nature. The most popular evolution-based algorithms include the Genetic Algorithm (GA) [4], Differential Evolution Algorithm (DE) [5], and Biogeography-Based Optimizer (BBO) [6]. Physics-based algorithms mimic the physical rules of the universe. Representative algorithms include Simulated Annealing (SA) [7], the Gravitational Search Algorithm (GSA) [8], the Black Hole (BH) algorithm [9], the Multiverse Optimizer (MVO) [10], the Artificial Chemical Reaction Optimization Algorithm (ACROA) [11], Ray Optimization (RO) [12], Curved Space Optimization (CSO) [13], the Sine Cosine Algorithm (SCA) [14], the Arithmetic Optimization Algorithm (AOA) [15], and the Heat Transfer Relation-based Optimization Algorithm (HTOA) [16]. The third category comprises swarm-based techniques, which simulate the social behavior of creatures in nature. Optimization techniques of this class include Particle Swarm Optimization (PSO) [17], the Ant Colony Optimization Algorithm (ACO) [18], the Firefly Algorithm (FA) [19], the Grey Wolf Optimizer (GWO) [20], the Cuckoo Search (CS) algorithm [21], the Whale Optimization Algorithm (WOA) [22], the Bald Eagle Search (BES) algorithm [23], and the Aquila Optimizer (AO) [24].

It is worth noting that the most widely used swarm-based optimization algorithm is PSO [25]. PSO simulates the behavior of birds flying together in flocks: during the search, all particles follow the best solutions along their paths. Cacciola et al. [26] discussed the reconstruction of a corrosion profile from electrical data, in which PSO was utilized to obtain the image of the reconstructed corrosion profile. The results show that PSO obtains the optimal solution compared with LSM while taking the least time, which illustrates the huge potential of optimization algorithms.

The Salp Swarm Algorithm (SSA) [27] is a swarm-based algorithm proposed in 2017. SSA is inspired by the swarming behavior, navigation, and foraging of salps in the ocean. Since SSA has fewer parameters and is easier to implement than other algorithms, it has been applied to many optimization problems, such as feature selection, image segmentation, and constrained engineering problems. However, like other metaheuristic algorithms, SSA may easily become trapped in local minima and suffer from low population diversity. Therefore, many improved variants have been proposed to enhance the performance of SSA in many fields. Tubishat et al. [28] presented a Dynamic SSA (DSSA), which shows better accuracy than SSA in feature selection. Salgotra et al. [29] proposed a self-adaptive SSA to enhance exploitation ability and convergence speed. Neggaz et al. [30] proposed an improved leader in SSA using the Sine Cosine Algorithm and a disrupt operator for feature selection. Jia and Lang [31] presented an enhanced SSA with a crossover scheme and Levy flight to improve the movement patterns of salp leaders and followers. There have also been other attempts at hybrid SSA algorithms. Saafan and El-gendy [32] proposed a hybrid improved Whale Optimization Salp Swarm Algorithm (IWOSSA), which achieves a better balance between the exploration and exploitation phases and effectively avoids premature convergence. Singh et al. [33] developed a hybrid of SSA with PSO, which integrates the advantages of SSA and PSO to eliminate trapping in local optima and unbalanced exploitation. Abadi et al. [34] proposed a hybrid approach combining SSA with GA, which obtains good results on some optimization problems.

The Slime Mould Algorithm (SMA) [35] is a recent swarm intelligence algorithm proposed in 2020. This algorithm simulates the oscillation mode and foraging behavior of Slime Mould in nature. SMA has a unique search mode that keeps the algorithm from falling into local optima and gives it superior global exploration capability. The approach has been applied to real-world optimization problems such as feature selection [36], parameter optimization of fuzzy systems [37], multilevel threshold image segmentation [38], control schemes [39], and parallel connected multistack fuel cells [40].

Therefore, based on the capabilities of both algorithms above, we hybridize them to improve on the performance of SMA and SSA, proposing a new hybrid optimization algorithm (HSMSSA) that speeds up the convergence rate and enhances the overall optimization performance. Specifically, we integrate SMA into the leader role of SSA and retain the exploitation phase of SSA. At the same time, inspired by the significant performance of opposition-based learning and quasiopposition-based learning, we propose a new strategy named mutation opposition-based learning (MOBL), which switches the algorithm between opposition-based learning and quasiopposition-based learning through a mutation rate to increase population diversity and speed up the convergence rate. In addition, Levy flight is utilized to improve SMA's exploration capability and balance the exploration and exploitation phases of the algorithm. The proposed HSMSSA algorithm thus improves both the exploration and exploitation abilities. HSMSSA is tested on 23 benchmark functions and compared with other optimization algorithms. Furthermore, five constrained engineering problems are utilized to evaluate HSMSSA's capability on real-world optimization problems. The experimental results illustrate that HSMSSA possesses a superior capability to find the global minimum and achieves lower-cost engineering designs than other state-of-the-art metaheuristic algorithms.

The remainder of this paper is organized as follows. Section 2 provides a brief overview of SSA, SMA, Levy flight, and mutation opposition-based learning strategy. Section 3 describes the proposed hybrid algorithm in detail. In Section 4, the details of benchmark functions, parameter settings of the selected algorithms, simulation experiments, and results analysis are introduced. Conclusions and prospects are given in Section 5.

2. Preliminaries

2.1. Salp Swarm Algorithm

In the deep sea, salps live in groups and form a salp chain to move and forage. In the salp chain, there are leaders and followers. The leader moves towards the food and guides the followers. In the process of moving, leaders explore globally, while followers thoroughly search locally [27]. The shapes of a salp and salp chain are shown in Figure 1.

2.1.1. Leader Salps

The front salp of the chain is called the leader, and the following equation is used to update the position of the salp leader:

x_j^1 = F_j + c1((ub_j − lb_j)r1 + lb_j),  r2 ≥ 0.5
x_j^1 = F_j − c1((ub_j − lb_j)r1 + lb_j),  r2 < 0.5    (1)

where x_j^1 and F_j represent the new position of the leader and the food source in the jth dimension, ub_j and lb_j are the upper and lower bounds of the jth dimension, and r1 and r2 are randomly generated numbers in the interval [0, 1]. It is worth noting that c1 is essential for SSA because it balances exploration and exploitation during the search process:

c1 = 2e^(−(4t/T)²)    (2)

where t is the current iteration and T is the max iteration.

2.1.2. Follower Salps

To update the position of the followers, a new concept based on Newton's law of motion is introduced, as in the following equation:

x_j^i = (1/2)at² + v₀t    (3)

where x_j^i represents the position of the ith follower salp in the jth dimension and a and v₀ indicate the acceleration and the initial velocity, respectively. Since time in optimization corresponds to the iteration, the discrepancy between iterations equals 1, and taking v₀ = 0, the updating process of the followers can be expressed as in the following equation:

x_j^i = (x_j^i + x_j^(i−1))/2    (4)

The pseudocode of SSA is presented in Algorithm 1.

(1)Initialize the population size N and max iteration T;
(2)Initialize the positions of salp Xi (i = 1, 2, …, N)
(3)While (t ≤ T)
(4) Calculate fitness of each salp;
(5) Denote the best solution as F
(6) update c1 by equation (2);
(7)For i = 1 to N do
(8)  if (i = = 1) then
(9)   update position of leader salp by equation (1)
(10)  Else
(11)   update position of follower salp by equation (4)
(12)  End if
(13)End for
(14)t = t + 1;
(15)End While
(16)Return the best solution F;
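To make the procedure concrete, Algorithm 1 can be sketched in NumPy as follows. This is a minimal, illustrative implementation written for this exposition (the function name, defaults, and bound handling are our assumptions, not the authors' code), using the first half of the chain as leaders, as in this paper:

```python
import numpy as np

def ssa_minimize(f, lb, ub, n=30, t_max=200, seed=0):
    """Minimal SSA sketch: leaders follow equation (1), followers equation (4)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = rng.uniform(lb, ub, (n, dim))            # salp chain positions
    fit = np.apply_along_axis(f, 1, X)
    best = X[fit.argmin()].copy()                # food source F
    best_fit = float(fit.min())
    for t in range(1, t_max + 1):
        c1 = 2 * np.exp(-(4 * t / t_max) ** 2)   # equation (2)
        for i in range(n):
            if i < n // 2:                       # leader salps, equation (1)
                r1 = rng.random(dim)
                r2 = rng.random(dim)
                step = c1 * ((ub - lb) * r1 + lb)
                X[i] = np.where(r2 >= 0.5, best + step, best - step)
            else:                                # follower salps, equation (4)
                X[i] = (X[i] + X[i - 1]) / 2
            X[i] = np.clip(X[i], lb, ub)         # keep salps inside the bounds
        fit = np.apply_along_axis(f, 1, X)
        if float(fit.min()) < best_fit:          # greedy update of the food source
            best_fit = float(fit.min())
            best = X[fit.argmin()].copy()
    return best, best_fit
```

On a simple convex problem such as a low-dimensional sphere function, this sketch steadily shrinks the leaders' sampling radius around the food source and converges to a near-zero objective value.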
2.2. Slime Mould Algorithm

The main idea of SMA is inspired by the behavior and morphological changes of Slime Mould in foraging. Slime Mould can dynamically change its search mode based on the quality of food sources. If the food source has a high quality, the Slime Mould will use a region-limited search method. If the food concentration is low, the Slime Mould will explore other food sources in the search space. Furthermore, even if the Slime Mould has found a high-quality food source, it still divides off some individuals to explore other areas of the region [35]. The behavior of Slime Mould can be mathematically described as follows:

X(t+1) = r4·(UB − LB) + LB,                 r3 < z
X(t+1) = Xb(t) + vb·(W·XA(t) − XB(t)),      r5 < p
X(t+1) = vc·X(t),                           r5 ≥ p    (5)

where parameters r3, r4, and r5 are random values in the range of 0 to 1, UB and LB indicate the upper and lower bounds of the search space, and z is a constant. Xb represents the best position obtained over all iterations, XA and XB represent two individuals selected randomly from the population, and X represents the location of the Slime Mould. vc decreases linearly from one to zero, and vb is an oscillation parameter in the range [−a, a], in which a is calculated as follows:

a = arctanh(1 − t/T)    (6)

The coefficient W is a very important parameter, which simulates the oscillation frequency at different food concentrations so that the Slime Mould can approach food more quickly when it finds high-quality food. The formula of W is listed as follows:

W(SmellIndex(i)) = 1 + r6·log((bF − S(i))/(bF − wF) + 1),  condition
W(SmellIndex(i)) = 1 − r6·log((bF − S(i))/(bF − wF) + 1),  others
SmellIndex = sort(S)    (7)

where i ∈ {1, 2, …, N} and S(i) represents the fitness of X(i). condition indicates that S(i) ranks in the first half of the Slime Mould population, and r6 is a random number uniformly generated in the interval [0, 1]. bF represents the best fitness obtained in the current iterative process, wF represents the worst fitness value obtained in the current iterative process, and SmellIndex denotes the sequence of sorted fitness values (ascending for a minimization problem).
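The weight computation of equation (7) can be sketched as follows. This is our own illustrative NumPy helper, not the authors' code; the logarithm base and the exact tie handling are implementation details of the original SMA code that may differ:

```python
import numpy as np

def smell_weights(fitness, rng):
    """Weight W of equation (7): the better-ranked half of the population is
    weighted up (1 + r6*log(...)), the worse half down (1 - r6*log(...))."""
    fitness = np.asarray(fitness, dtype=float)
    n = fitness.size
    order = np.argsort(fitness)                  # SmellIndex, ascending fitness
    bF, wF = fitness[order[0]], fitness[order[-1]]
    # log argument (bF - S(i))/(bF - wF) + 1 lies in [1, 2] for minimization
    frac = (bF - fitness) / (bF - wF + np.finfo(float).eps) + 1
    term = rng.random(n) * np.log10(frac)
    W = np.empty(n)
    best_half = order[: n // 2]
    worst_half = order[n // 2:]
    W[best_half] = 1 + term[best_half]           # "condition" branch
    W[worst_half] = 1 - term[worst_half]         # "others" branch
    return W
```

Because the log argument equals 1 for the current best individual, its weight is exactly 1, while the rest of the better half receives weights above 1 and the worse half weights below 1.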

The p parameter can be described as follows:

p = tanh|S(i) − DF|    (8)

where DF represents the best fitness over all iterations. Figure 2 visualizes the general logic of SMA.

The pseudocode of SMA is presented in Algorithm 2.

(1)Initialize the population size N and max iteration T;
(2)Initialize the positions of Slime Mould Xi (i = 1, 2, …, N)
(3)While (t ≤ T)
(4) Calculate fitness of each Slime Mould;
(5) update bF, wF, and Xb;
(6) Calculate W by equation (7);
(7)For i = 1 to N do
(8)  update p, vb, and vc;
(9)  update positions by equation (5)
(10)End For
(11) t = t + 1;
(12)End While
(13)Return Xb;
2.3. Levy Flight

Levy flight is an effective strategy for metaheuristic algorithms and has been successfully incorporated into many algorithms [41–44]. Levy flight is a class of non-Gaussian random processes that follows the Levy distribution. It alternates between short-distance steps and occasional long-distance jumps, which can be inferred from Figure 3. The formula of Levy flight is as follows:

Levy(x) = 0.01 × (r7 × σ)/|r8|^(1/β)    (9)

σ = (Γ(1 + β)·sin(πβ/2)/(Γ((1 + β)/2)·β·2^((β−1)/2)))^(1/β)    (10)

where r7 and r8 are random values in the range of [0, 1] and β is a constant equal to 1.5.
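In practice, equations (9) and (10) are commonly realized with Mantegna's algorithm, in which the two random quantities are drawn from normal distributions scaled by σ. A small illustrative sketch (our own code; the Gaussian draws are the standard Mantegna convention rather than a statement from this paper):

```python
import math
import numpy as np

def levy_step(dim, rng, beta=1.5):
    """Levy flight step of equations (9)-(10), Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.standard_normal(dim) * sigma   # plays the role of r7 * sigma
    v = rng.standard_normal(dim)           # plays the role of r8
    return 0.01 * u / np.abs(v) ** (1 / beta)   # equation (9)
```

Most generated steps are small, but the heavy-tailed division by |v|^(1/β) occasionally produces a long jump, which is exactly the exploration behavior Figure 3 illustrates.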

2.4. Mutation Opposition-Based Learning

Opposition-based learning (OBL) was proposed by Tizhoosh in 2005 [45]. The essence of OBL is selecting the better of the current solution and its opposite solution for the next iteration. The OBL strategy has been successfully used in a variety of metaheuristic algorithms [46–51] to improve the ability to avoid stagnation in local optima, and the mathematical expression is as follows:

x̃_j = lb_j + ub_j − x_j    (11)

where x_j is the current solution in the jth dimension, x̃_j is its opposite solution, and lb_j and ub_j are the lower and upper bounds of the jth dimension.

Quasiopposition-based learning (QOBL) [52] is an improved version of OBL, which applies quasiopposite points instead of opposite points. The points produced through QOBL are more likely to be close to unknown solutions than the points created by OBL. The mathematical formula of QOBL is as follows:

x̃q_j = rand(c_j, x̃_j)    (12)

where c_j = (lb_j + ub_j)/2 is the center of the jth dimension and rand(c_j, x̃_j) is a uniformly distributed random number between c_j and the opposite point x̃_j.

Considering the superior performance of the two kinds of opposition-based learning, we propose mutation opposition-based learning (MOBL) by combining a mutation rate with these two opposition-based learning schemes. Through the mutation rate, we can give full play to the characteristics of OBL and QOBL and effectively enhance the ability of the algorithm to jump out of local optima. Figure 4 is an MOBL example, in which Figure 4(a) shows an objective function and Figure 4(b) displays three candidate solutions and their OBL or QOBL solutions. The mathematical formula is as follows:

x̃m_j = lb_j + ub_j − x_j,   r10 < rate
x̃m_j = rand(c_j, x̃_j),     otherwise    (13)

where rate is the mutation rate, which we set to 0.1, and r10 is a random number in [0, 1].
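A compact sketch of MOBL follows. This is our own illustrative code; in particular, which side of the mutation rate triggers OBL versus QOBL is an assumption, since the paper only states that the rate switches between the two:

```python
import numpy as np

def mobl(x, lb, ub, rng, rate=0.1):
    """Mutation opposition-based learning sketch: with probability `rate`
    return the plain OBL point, otherwise a quasiopposite (QOBL) point."""
    x, lb, ub = map(np.asarray, (x, lb, ub))
    x_opp = lb + ub - x                 # OBL point, equation (11)
    if rng.random() < rate:
        return x_opp                    # plain opposition
    center = (lb + ub) / 2
    lo = np.minimum(center, x_opp)      # QOBL, equation (12): uniform point
    hi = np.maximum(center, x_opp)      # between the center and the OBL point
    return rng.uniform(lo, hi)
```

The candidate returned here would then be compared with the original follower position, keeping whichever has the better fitness, which is the greedy selection step of Algorithm 3.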

3. The Proposed Algorithm

3.1. Details of HSMSSA

In SSA, the population is divided into leader salps and follower salps: leader salps are the first half of the salps in the chain, and follower salps follow the leaders. However, the leader salp has poor randomness and easily falls into local optima. In SMA, the Slime Mould selects different search modes according to the positive and negative feedback of the current food concentration and has a certain probability of isolating some individuals to explore other regions of the search space. These mechanisms increase the randomness of Slime Mould and enhance its exploration ability. The vb parameter, in the range [−a, a], is utilized to realize the oscillation mode of Slime Mould. However, vb has the drawback of low randomness, which cannot effectively simulate the process of Slime Mould looking for food sources. Therefore, we introduce Levy flight into the exploration phase to further enhance the exploration ability. Next, we integrate SMA into SSA, change the position updating method of the leader salps, and further improve the randomness of the algorithm through Levy flight. For the followers, we apply mutation opposition-based learning to enhance population diversity and increase the ability of the algorithm to jump out of local optima. The mathematical formula of the leader salps is as follows:

x_j^1(t+1) = r4·(ub_j − lb_j) + lb_j,              r3 < z
x_j^1(t+1) = F_j(t) + vb·Levy·(W·XA(t) − XB(t)),   r5 < p
x_j^1(t+1) = vc·Levy·x_j^1(t),                     r5 ≥ p    (14)

where F_j is the position of the food source (the best solution found so far) in the jth dimension and Levy is the step generated by Levy flight.

The pseudocode of HSMSSA is given in Algorithm 3, and the summarized flowchart is displayed in Figure 5. As shown in Algorithm 3, the position of the population is initially generated randomly. Then, each individual’s fitness will be calculated. For the entire population in each iteration, parameter W is calculated using equation (7). The search agents of population size N are assigned to the two algorithms, which can utilize the advantages of SSA and SMA, and realize the sharing of helpful information to achieve global optimization. If the search agent belongs to the first half of the population, the position will be updated using equation (14) in SMA with Levy flight. Otherwise, the position is determined using equation (4) and MOBL. Finally, if the termination criteria are satisfied, the algorithm returns the best solution found so far; else the previous steps are repeated.

(1)Set the initial values of the population size N and the maximum number of iterations T
(2)Initialize positions of the population X
(3)While t ≤ T
(4) Check if the position goes out of the search space boundary and bring it back.
(5) Calculate the fitness of each search agent
(6) Update W, bF, and wF
(7)For i = 1 to N
(8)  If i = = 1
(9)   If r3 < z
(10)    Update position by equation (14)
(11)   Else
(12)    IF r5 < p
(13)     Update position by equation (14)
(14)    Else
(15)     Update position by equation (14)
(16)    End if
(17)   End if
(18)  Else
(19)   Update position by equation (4)
(20)   If r10 < 0.1
(21)    Update the position with the first (OBL) case of equation (13)
(22)   Else
(23)    Update the position with the second (QOBL) case of equation (13)
(24)   End if
(25)  End if
(26)  Check if the position goes out of the search space boundary and bring it back.
(27)  select the best position into the next iteration.
(28)End for
(29)t = t + 1
(30)End while
(31)Return Xbest
3.2. Computational Complexity Analysis

HSMSSA mainly consists of three components: initialization, fitness evaluation, and position updating. In the initialization phase, the computational complexity of generating the positions is O(N × D), where D is the dimension of the problem. Then, the computational complexity of evaluating the fitness of the solutions is O(N) in each iteration. Finally, since mutation opposition-based learning evaluates an additional candidate for each follower, the computational complexity of position updating is O(2 × N × D) per iteration. Therefore, the total computational complexity of the proposed HSMSSA algorithm is O(N × D + T × (N + 2 × N × D)), which simplifies to O(T × N × D).

4. Experimental Results and Discussion

This section compares HSMSSA with some state-of-the-art metaheuristic algorithms on 23 benchmark functions to validate its performance. Moreover, five engineering design problems are employed as examples of real-world applications. The experiments were run on Windows 10 with 24 GB RAM and an Intel(R) i5-9500 CPU. All simulations were carried out using MATLAB R2020b.

4.1. Definition of 23 Benchmark Functions

To assess HSMSSA’s ability of exploration, exploitation, and escaping from local optima, 23 benchmark functions, including unimodal and multimodal functions, are tested [27]. The unimodal benchmark functions (F1–F7) are utilized to examine the exploitation ability of HSMSSA. The description of the unimodal benchmark function is shown in Table 1. The multimodal and fixed-dimension multimodal benchmark functions (F8–F23) shown in Tables 2 and 3 are used to test the exploration ability of HSMSSA.

To make the experimental results more representative, HSMSSA is compared with the basic SMA [35] and SSA [27], AO [24], AOA [15], WOA [22], SCA [14], and MVO [10]. For all tests, we set the population size N = 30, dimension size D = 30, and maximum iteration T = 500 for all algorithms, with 30 independent runs. The parameter settings of each algorithm are shown in Table 4. Finally, average results and standard deviations are employed to evaluate the results. Note that the best results are bolded.

4.1.1. Evaluation of Exploitation Capability (F1–F7)

As we can see, unimodal benchmark functions have only one global optimum, which makes them suitable for evaluating the exploitation ability of metaheuristic algorithms. It can be seen from Table 5 that HSMSSA is very competitive with SMA, SSA, and the other metaheuristic algorithms. In particular, HSMSSA achieves much better results than the other algorithms on all unimodal functions except F6. For F1–F4, HSMSSA can find the theoretical optimum. For all unimodal functions except F5, HSMSSA attains the smallest average values and standard deviations among the compared algorithms, which indicates the best accuracy and stability. Hence, the exploitation capability of the proposed HSMSSA algorithm is excellent.

4.1.2. Evaluation of Exploration Capability (F8–F23)

Unlike unimodal functions, multimodal functions have many local optima. Thus, this kind of test problem is very useful for evaluating the exploration capability of an optimization algorithm. The results in Table 5 for functions F8–F23 indicate that HSMSSA also has an excellent exploration capability. In fact, HSMSSA can find the theoretical optimum on F9, F11, F16–F17, and F19–F23. These results reveal that HSMSSA also provides superior exploration capability.

4.1.3. Analysis of Convergence Behavior

The convergence curves of selected functions are shown in Figure 6, which illustrates the convergence rates of the algorithms. It can be seen that HSMSSA shows competitive performance compared to the other state-of-the-art algorithms. HSMSSA presents a faster convergence speed than all other algorithms on F7–F13, F15, and F19–F23. For the other benchmark functions, HSMSSA shows a better capability of local optima avoidance than the comparison algorithms on F5 and F6.

4.1.4. Qualitative Results and Analysis

Furthermore, Figure 7 shows the results of several representative test functions on search history, trajectory, average fitness, and convergence curve. From the search history maps, we can see the distribution of the HSMSSA search agents while exploring and exploiting the search space. Because of the fast convergence, the vast majority of search agents are concentrated near the global optimum. Inspecting the trajectory plots in Figure 7, the first search agent constantly oscillates in the first dimension of the search space, which suggests that it widely investigates the most promising areas for better solutions. This powerful search capability likely comes from the Levy flight and MOBL strategies. The average fitness shows whether exploration and exploitation succeed in improving the first random population so that an accurate approximation of the global optimum can be found in the end.

Similarly, it can be noticed that the average fitness oscillates in the early iterations, then decreases abruptly and begins to level off. The average fitness maps also show the significant improvement over the first random population and the accurate approximation of the global optimum obtained in the end. Finally, the convergence curves reveal the best fitness value found by the search agents after each iteration, which shows that HSMSSA converges very quickly.

4.1.5. Wilcoxon Signed-Rank Test

Because the algorithm results are stochastic, we need to carry out statistical tests to show that the results are statistically significant. We use the Wilcoxon signed-rank (WSR) test to evaluate the statistical significance between two algorithms at the 5% significance level [53]. The WSR test is a statistical test applied to two sets of results to search for significant differences. As is well known, a p-value less than 0.05 indicates that one algorithm is significantly superior to the other; otherwise, the obtained difference is not statistically significant. The calculated results of the Wilcoxon signed-rank test between HSMSSA and the other algorithms for each benchmark function are listed in Table 6. HSMSSA outperforms all the other algorithms to varying degrees. This superiority is statistically significant on unimodal functions F2 and F4–F7, which indicates that HSMSSA possesses high exploitation. HSMSSA also shows better results on multimodal functions F8–F23, suggesting that HSMSSA has a high exploration capability. To sum up, HSMSSA provides better results than the other comparative algorithms on almost all benchmark functions.
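For reference, the two-sided test can be sketched without a statistics library as follows. This is a simplified normal-approximation version of our own (no continuity or tie-variance correction); in practice, `scipy.stats.wilcoxon` implements the exact variants and corrections:

```python
import math

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test (normal approximation) on the
    paired results of two algorithms; returns an approximate p-value."""
    diffs = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    n = len(diffs)
    if n == 0:
        return 1.0
    # rank |d| ascending, averaging ranks over ties
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)  # positive-rank sum
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    return 1 - math.erf(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))
```

With 30 paired runs per function, a p-value below 0.05 from this test is the criterion used in Table 6.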

4.2. Experiments on Engineering Design Problems

In this section, HSMSSA is evaluated to solve five classical engineering design problems: pressure vessel design problem, tension spring design problem, three-bar truss design problem, speed reducer problem, and cantilever beam design. To address these problems, we set the population size N = 30 and maximum iteration T = 500. The results of HSMSSA are compared to various state-of-the-art algorithms in the literature. The parameter settings are the same as previous numerical experiments.

4.2.1. Pressure Vessel Design Problem

The pressure vessel design problem [53] is to minimize the total cost of a cylindrical pressure vessel that meets the pressure requirements, as shown in Figure 8. Four parameters need to be optimized: the thickness of the shell (Ts), the thickness of the head (Th), the inner radius (R), and the length of the cylindrical section without the head (L), as shown in Figure 8. The constraints and equations are as follows.

Consider

x = [x1 x2 x3 x4] = [Ts Th R L].

Minimize

f(x) = 0.6224x1x3x4 + 1.7781x2x3² + 3.1661x1²x4 + 19.84x1²x3,

subject to

g1(x) = −x1 + 0.0193x3 ≤ 0,
g2(x) = −x2 + 0.00954x3 ≤ 0,
g3(x) = −πx3²x4 − (4/3)πx3³ + 1296000 ≤ 0,
g4(x) = x4 − 240 ≤ 0.

Variable range is

0 ≤ x1 ≤ 99, 0 ≤ x2 ≤ 99, 10 ≤ x3 ≤ 200, 10 ≤ x4 ≤ 200.

From the results in Table 7, we can see that HSMSSA can obtain superior optimal values compared with SMA, SSA, AO, AOA, WOA, SCA, and MVO.
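Using the standard formulation of this problem from the literature, a penalty-based objective that an unconstrained optimizer such as HSMSSA can minimize directly may be sketched as follows (our own illustrative code; the static penalty coefficient is an arbitrary choice, not a value from the paper):

```python
import math

def pressure_vessel_cost(x):
    """Standard pressure vessel objective; x = [Ts, Th, R, L]."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pressure_vessel_penalized(x, penalty=1e6):
    """Cost plus a static quadratic penalty for violated constraints g_i(x) <= 0."""
    x1, x2, x3, x4 = x
    g = [
        -x1 + 0.0193 * x3,                                            # shell thickness
        -x2 + 0.00954 * x3,                                           # head thickness
        -math.pi * x3 ** 2 * x4 - (4 / 3) * math.pi * x3 ** 3 + 1296000,  # volume
        x4 - 240,                                                     # length limit
    ]
    return pressure_vessel_cost(x) + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```

At a feasible point the penalized objective reduces to the plain cost, so comparisons between algorithms in Table 7 remain meaningful under this handling.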

4.2.2. Tension Spring Design Problem

This problem [27] tries to minimize the weight of a tension spring. Three parameters need to be optimized: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). Figure 9 shows the structure of the tension spring. The mathematical formulation of this problem can be written as follows.

Consider

x = [x1 x2 x3] = [d D N].

Minimize

f(x) = (x3 + 2)x2x1²,

subject to

g1(x) = 1 − x2³x3/(71785x1⁴) ≤ 0,
g2(x) = (4x2² − x1x2)/(12566(x2x1³ − x1⁴)) + 1/(5108x1²) − 1 ≤ 0,
g3(x) = 1 − 140.45x1/(x2²x3) ≤ 0,
g4(x) = (x1 + x2)/1.5 − 1 ≤ 0.

Variable range is

0.05 ≤ x1 ≤ 2.00, 0.25 ≤ x2 ≤ 1.30, 2.00 ≤ x3 ≤ 15.00.

Results of HSMSSA for solving the tension spring design problem are listed in Table 8 and compared with SMA, SSA, AO, AOA, WOA, SCA, and MVO. It is evident that HSMSSA obtained the best results among all algorithms.
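Using the standard formulation of the tension spring problem from the literature, the same penalty handling can be sketched as follows (our own illustrative code, with an arbitrary penalty coefficient):

```python
def spring_weight(x):
    """Tension spring weight; x = [d, D, N]."""
    d, D, N = x
    return (N + 2) * D * d ** 2

def spring_penalized(x, penalty=1e6):
    """Weight plus a static quadratic penalty for violated constraints g_i(x) <= 0."""
    d, D, N = x
    g = [
        1 - (D ** 3 * N) / (71785 * d ** 4),                               # deflection
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                                         # shear stress
        1 - 140.45 * d / (D ** 2 * N),                                     # surge frequency
        (d + D) / 1.5 - 1,                                                 # outer diameter
    ]
    return spring_weight(x) + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```

The shear-stress constraint is typically the active one near the optimum, so a candidate that looks light but violates it is pushed away by the penalty term.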

4.2.3. Three-Bar Truss Design Problem

Three-bar truss design is a complex problem in the field of civil engineering [49]. The goal of this problem is to achieve the minimum weight in truss design. Figure 10 shows the design of this problem. The formula of this problem can be described as follows.

Consider

x = [x1 x2] = [A1 A2].

Minimize

f(x) = (2√2x1 + x2) × l,

subject to

g1(x) = ((√2x1 + x2)/(√2x1² + 2x1x2))P − σ ≤ 0,
g2(x) = (x2/(√2x1² + 2x1x2))P − σ ≤ 0,
g3(x) = (1/(√2x2 + x1))P − σ ≤ 0.

Variable range is

0 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 1, where l = 100 cm, P = 2 kN/cm², and σ = 2 kN/cm².

Results of HSMSSA for solving the three-bar truss design problem are listed in Table 9 and compared with SMA, SSA, AO, AOA, WOA, SCA, and MVO. It can be observed that HSMSSA has an excellent ability to solve problems in a confined space.

4.2.4. Speed Reducer Problem

In this problem [15], the total weight of the reducer is minimized by optimizing seven variables. Figure 11 shows the design of this problem, and the mathematical formula is as follows.

Minimize

f(x) = 0.7854x1x2²(3.3333x3² + 14.9334x3 − 43.0934) − 1.508x1(x6² + x7²) + 7.4777(x6³ + x7³) + 0.7854(x4x6² + x5x7²),

subject to

g1(x) = 27/(x1x2²x3) − 1 ≤ 0,
g2(x) = 397.5/(x1x2²x3²) − 1 ≤ 0,
g3(x) = 1.93x4³/(x2x3x6⁴) − 1 ≤ 0,
g4(x) = 1.93x5³/(x2x3x7⁴) − 1 ≤ 0,
g5(x) = √((745x4/(x2x3))² + 16.9 × 10⁶)/(110x6³) − 1 ≤ 0,
g6(x) = √((745x5/(x2x3))² + 157.5 × 10⁶)/(85x7³) − 1 ≤ 0,
g7(x) = x2x3/40 − 1 ≤ 0,
g8(x) = 5x2/x1 − 1 ≤ 0,
g9(x) = x1/(12x2) − 1 ≤ 0,
g10(x) = (1.5x6 + 1.9)/x4 − 1 ≤ 0,
g11(x) = (1.1x7 + 1.9)/x5 − 1 ≤ 0.

Variable range is

2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.3 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, 5.0 ≤ x7 ≤ 5.5.

The comparison results are listed in Table 10, which shows the advantage of HSMSSA in realizing the minimum total weight of the problem.

4.2.5. Cantilever Beam Design

Cantilever beam design is a type of concrete engineering problem. This problem aims to determine the minimal total weight of the cantilever beam by optimizing the hollow square cross-section parameters [24]. Figure 12 illustrates the design of this problem, and the mathematical description is as follows.

Consider

x = [x1 x2 x3 x4 x5].

Minimize

f(x) = 0.0624(x1 + x2 + x3 + x4 + x5),

subject to

g(x) = 61/x1³ + 37/x2³ + 19/x3³ + 7/x4³ + 1/x5³ − 1 ≤ 0.

Variable range is as follows: 0.01 ≤ xi ≤ 100, i = 1, …, 5.

The results are shown in Table 11. From this table, we can see that the performance of HSMSSA is better than all other algorithms and the obtained total weight is minimized.
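Using the standard single-constraint formulation of the cantilever beam problem from the literature, the penalized objective is particularly compact (our own illustrative sketch, with an arbitrary penalty coefficient):

```python
def cantilever_weight(x):
    """Cantilever beam weight; x holds the five hollow-square section heights."""
    return 0.0624 * sum(x)

def cantilever_penalized(x, penalty=1e6):
    """Weight plus a static quadratic penalty for the deflection constraint g(x) <= 0."""
    x1, x2, x3, x4, x5 = x
    g = (61 / x1 ** 3 + 37 / x2 ** 3 + 19 / x3 ** 3
         + 7 / x4 ** 3 + 1 / x5 ** 3 - 1)
    return cantilever_weight(x) + penalty * max(0.0, g) ** 2
```

Because the weight is linear while the constraint stiffens sharply as any section shrinks, the optimum sits where the deflection constraint is active, which is why the reported optimal designs taper from the fixed end to the tip.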

In summary, this section demonstrates the superiority of the proposed HSMSSA algorithm on problems with different characteristics and on real case studies. HSMSSA outperforms the basic SMA and SSA and other well-known algorithms with very competitive results, which derive from the robust exploration and exploitation capabilities of HSMSSA. The excellent performance in solving industrial engineering design problems indicates that HSMSSA can be widely used in real-world optimization problems.

5. Conclusion

In this paper, a Hybrid Slime Mould Salp Swarm Algorithm (HSMSSA) is proposed by combining the whole SMA as the leaders with the exploitation phase of SSA as the followers. At the same time, two strategies, Levy flight and mutation opposition-based learning, are incorporated to enhance the exploration and exploitation capabilities of HSMSSA. Twenty-three standard benchmark functions are utilized to evaluate the algorithm's exploration, exploitation, and local optima avoidance capabilities. The experimental results show competitive advantages over other state-of-the-art metaheuristic algorithms, indicating that HSMSSA performs better than the others. Five engineering design problems are solved as well to further verify the superiority of the algorithm, and the results are also very competitive with the other metaheuristic algorithms.

The proposed HSMSSA can produce very effective results for complex benchmark functions and constrained engineering problems. In the future, HSMSSA can be applied to real-world optimization problems such as multiobjective problems, feature selection, multilevel threshold image segmentation, convolutional neural networks, or any problem that belongs to the NP-complete or NP-hard classes.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

All authors declare that there are no conflicts of interest.

Acknowledgments

This research was funded by the Sanming University Introduces High-Level Talents to Start Scientific Research Funding Support Project (20YG01 and 20YG14), the Guiding Science and Technology Projects in Sanming City (2020-S-39, 2020-G-61, and 2021-S-8), the Educational Research Projects of Young and Middle-Aged Teachers in Fujian Province (JAT200638 and JAT200618), the Scientific Research and Development Fund of Sanming University (B202029 and B202009), the Open Research Fund of Key Laboratory of Agricultural Internet of Things in Fujian Province (ZD2101), the Ministry of Education Cooperative Education Project (202002064014), the School Level Education and Teaching Reform Project of Sanming University (J2010306 and J2010305), and the Higher Education Research Project of Sanming University (SHE2102 and SHE2013).