An improved Coati Optimization Algorithm with multiple strategies for engineering design optimization problems

The artificial Coati Optimization Algorithm (COA) suffers from insufficient search ability in the late optimization stage, loss of population diversity, and a tendency to fall into local extrema, which leads to slow convergence and weak exploration. This paper proposes an improved COA (abbreviated TNTWCOA) based on chaotic sequences, a nonlinear inertia weight, an adaptive T-distribution mutation strategy, and an alert updating strategy. The algorithm introduces a chaotic sequence mechanism to initialize positions, which makes the distribution of initial solutions more uniform, generates high-quality initial solutions, and increases population diversity, addressing the poor quality and uneven distribution of COA's initial solutions. In the exploration phase, a nonlinear inertia weight factor is introduced to balance the algorithm's local exploitation and global search abilities. In the exploitation phase, adaptive T-distribution mutation is introduced to increase population diversity at low fitness values and improve the algorithm's ability to escape local optima. In addition, an alert update mechanism is proposed to improve the vigilance of COA so that it searches within the admissible range: when danger is perceived, coatis on the edge of the population quickly move to a safe area to obtain a better position, while coatis in the middle of the population move randomly to get closer to the others. The 29 benchmark functions of IEEE CEC2017 were used to evaluate the convergence speed, convergence accuracy, and other indicators of TNTWCOA. TNTWCOA was also verified on four engineering design optimization problems, such as the pressure vessel design and welded beam design problems.
The results on IEEE CEC2017 and the engineering design optimization problems are compared with the Improved Coati Optimization Algorithm (ICOA), Coati Optimization Algorithm (COA), Golden Jackal Optimization (GJO), Osprey Optimization Algorithm (OOA), Sand Cat Swarm Optimization (SCSO), and Subtraction-Average-Based Optimizer (SABO). The experimental results show that TNTWCOA significantly improves convergence speed and optimization accuracy and has good robustness. It shows a strong advantage on the three-bar truss, gear train, and speed reducer design problems, verifying the superior optimization ability and engineering practicability of TNTWCOA.


Algorithm initialization process
In the initialization stage of COA, the positions of the coatis in the search space are generated randomly using Eq. (1):

x_{i,j} = b_j^L + r · (b_j^U − b_j^L),  i = 1, …, N,  j = 1, …, m.  (1)

In Eq. (1), X_i is the position of the i-th coati in the search space, x_{i,j} is the value of its j-th decision variable, b_j^L and b_j^U are the lower and upper bounds of the j-th decision variable, r is a random number in [0, 1], N is the number of coatis, and m is the number of decision variables.
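The random initialization of Eq. (1) can be sketched as follows (a minimal sketch; the function name and rng seeding are our choices, not from the paper):

```python
import numpy as np

# Sketch of COA population initialization (Eq. (1)): each coordinate is
# drawn uniformly between its lower and upper bound.
def init_population(N, m, lb, ub, rng=None):
    rng = np.random.default_rng(rng)
    lb = np.broadcast_to(np.asarray(lb, float), (m,))
    ub = np.broadcast_to(np.asarray(ub, float), (m,))
    r = rng.random((N, m))          # r ~ U(0, 1)
    return lb + r * (ub - lb)       # x_{i,j} = b_j^L + r * (b_j^U - b_j^L)

X = init_population(N=30, m=5, lb=-10.0, ub=10.0, rng=0)
```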

Mathematical model of COA Phase 1: Hunting and attacking strategy on iguana (exploration phase)
The first phase of updating the coati population in the search space is modeled on the coatis' strategy for attacking iguanas. In this strategy, a group of coatis climbs a tree to reach an iguana and scare it, while several other coatis wait under the tree until the iguana falls to the ground; after it lands, they attack and hunt it. This strategy moves the coatis to different positions in the search space and provides COA's global exploration ability. In the exploration phase, the position update therefore simulates two steps: (1) fright, in which a group of coatis climbs a tree to approach the iguana and scare it; and (2) attack, in which the other coatis wait under the tree for the frightened iguana to fall and then hunt it, as shown in Fig. 1.
This strategy causes the coatis to move to different locations in the search space, which demonstrates the exploration capability of COA in the global search of the problem space. In COA, the position of the best member of the population is taken as the iguana's position, and the coatis completing steps (1) and (2) are each assumed to be half of the population. The position update for the climbing coatis is

X_{i,j}^{P1} = x_{i,j} + r · (G_j − I · x_{i,j}),  i = 1, …, ⌊N/2⌋,

where X_{i,j}^{P1} is the new position of the i-th coati in the j-th dimension, r is a random number in [0, 1], G_j is the iguana's position (the best member's position) in the j-th dimension, I is an integer selected randomly from the set {1, 2}, N is the number of coatis, ⌊N/2⌋ is the largest integer not exceeding N/2, and m is the number of decision variables.
After the iguana falls, it lands at a random position in the search space, and the coatis on the ground move through the search space based on that location. This step is simulated by two formulas. The first places the fallen iguana randomly:

G_j^g = b_j^L + r · (b_j^U − b_j^L),

where G_j^g is the position of the iguana on the ground in the j-th dimension. The second moves each ground coati either toward or away from it:

X_{i,j}^{P1} = x_{i,j} + r · (G_j^g − I · x_{i,j})  if F_G^g < F_i,
X_{i,j}^{P1} = x_{i,j} + r · (x_{i,j} − G_j^g)    otherwise,
where F_G^g is the objective function value of the iguana after it falls to the ground and F_i is the objective function value of the i-th coati. If the updated individual is better, the current individual is replaced; otherwise it is left as it is:
X_i = X_i^{P1} if F_i^{P1} < F_i, otherwise X_i is kept, where F_i^{P1} is the objective function value of the i-th coati at the new position and F_i is its objective function value at the previous position.
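The full Phase 1 update described above can be sketched as follows (a minimal sketch following the standard COA formulation; the function and variable names are ours):

```python
import numpy as np

# Sketch of COA Phase 1 (exploration): the first half of the population
# climbs toward the iguana (best member G), the second half reacts to an
# iguana dropped at a random ground position; greedy selection keeps only
# the improved individuals.
def phase1(X, fitness, f, lb, ub, rng):
    N, m = X.shape
    G = X[np.argmin(fitness)]                  # best member plays the iguana
    Xn = X.copy()
    for i in range(N // 2):                    # climbers scare the iguana
        r, I = rng.random(m), rng.integers(1, 3)
        Xn[i] = X[i] + r * (G - I * X[i])
    iguana_g = lb + rng.random(m) * (ub - lb)  # iguana falls to the ground
    Fg = f(iguana_g)
    for i in range(N // 2, N):                 # ground coatis react
        r, I = rng.random(m), rng.integers(1, 3)
        if Fg < fitness[i]:
            Xn[i] = X[i] + r * (iguana_g - I * X[i])
        else:
            Xn[i] = X[i] + r * (X[i] - iguana_g)
    Xn = np.clip(Xn, lb, ub)
    Fn = np.array([f(x) for x in Xn])
    keep = Fn < fitness                        # greedy selection
    X[keep], fitness[keep] = Xn[keep], Fn[keep]
    return X, fitness

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x * x))
X = rng.uniform(-5, 5, (10, 4))
fit = np.array([sphere(x) for x in X])
best_before = fit.min()
X, fit = phase1(X, fit, sphere, -5.0, 5.0, rng)   # one exploration step
```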

Phase 2: The process of escaping from predators (exploitation phase)
In the exploitation phase, the coati's location-updating strategy mimics its natural behavior when encountering and fleeing from predators, as shown in Fig. 2.
When a predator attacks a coati, the coati flees its position. This move places it in a safe position close to its current one, which demonstrates the local exploitation ability of COA. To simulate this behavior, a random location is generated near each coati's position based on the following equations:
b_{j,L}^{loc} = b_j^L / t,  b_{j,U}^{loc} = b_j^U / t,

where b_{j,L}^{loc} and b_{j,U}^{loc} are the local lower and upper bounds of the j-th decision variable, t is the current iteration number, and T is the maximum number of iterations.
X_{i,j}^{P2} = x_{i,j} + (1 − 2r) · (b_{j,L}^{loc} + r · (b_{j,U}^{loc} − b_{j,L}^{loc})),

where X_{i,j}^{P2} is the new position of the i-th coati in the j-th dimension. If the updated individual is better, it replaces the current individual; otherwise the current individual is kept.
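The exploitation step above can be sketched as follows (a minimal sketch under the standard COA formulation; names are ours). The local bounds shrink with the iteration counter t, so the random step around each coati gets smaller as the search proceeds:

```python
import numpy as np

# Sketch of COA Phase 2 (exploitation): a shrinking local neighborhood
# (b^loc = b / t) is sampled around each coati, with greedy replacement.
def phase2(X, fitness, f, lb, ub, t, rng):
    N, m = X.shape
    lb_loc, ub_loc = lb / t, ub / t            # local bounds shrink with t
    r = rng.random((N, m))
    Xn = X + (1 - 2 * r) * (lb_loc + r * (ub_loc - lb_loc))
    Xn = np.clip(Xn, lb, ub)
    Fn = np.array([f(x) for x in Xn])
    keep = Fn < fitness                        # greedy selection
    X[keep], fitness[keep] = Xn[keep], Fn[keep]
    return X, fitness

rng = np.random.default_rng(1)
sphere = lambda x: float(np.sum(x * x))
X = rng.uniform(-5, 5, (10, 4))
fit = np.array([sphere(x) for x in X])
best_before = fit.min()
X, fit = phase2(X, fit, sphere, -5.0, 5.0, t=5, rng=rng)
```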

Improved Coati Optimization Algorithm Chaotic mapping strategy for algorithm initialization process
Because the individual positions of the original COA are generated randomly, population diversity is easily lost and a uniform distribution in the solution space cannot be guaranteed, so the algorithm easily falls into local optima. A uniformly distributed population can speed up convergence 20,21. It is therefore necessary to improve the population initialization. Chaotic maps are ergodic and stochastic: using a chaotic sequence as the initial positions of the individuals makes the population distribution more uniform and avoids clustering, thereby improving search efficiency. Commonly used chaotic maps include the Chebyshev, Circle, Gauss, Iterative, Logistic, Sine, Singer, Tent, and Cubic maps. The population distributions generated by these maps are shown in Fig. 3a, and the corresponding histograms in Fig. 3b.
As shown in Fig. 3, the population distribution generated by the tent map has the best uniformity among these chaotic maps. This paper therefore uses tent chaos to improve the distribution quality of the initial population in the search space and strengthen the global search ability, thereby improving solution efficiency; Eq. (1) is rewritten with the tent sequence in place of the uniform random number. The tent map, with α = 0.5, is given in Eq. (9):

z_{k+1} = z_k / α            if z_k < α,
z_{k+1} = (1 − z_k) / (1 − α)  if z_k ≥ α.  (9)
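The tent-chaos initialization can be sketched as follows. This is a minimal sketch: the starting seed z0 is our choice, and the reseeding guard is a standard practical fix (not from the paper) because, with α exactly 0.5, binary floating point collapses the tent sequence to 0 after roughly 50 steps:

```python
import numpy as np

# Tent-chaos sequence (alpha = 0.5) used in place of uniform random
# numbers when initializing the population.
def tent_sequence(n, z0=0.37, alpha=0.5):
    z, out = z0, np.empty(n)
    for k in range(n):
        z = z / alpha if z < alpha else (1 - z) / (1 - alpha)
        if z <= 0.0 or z >= 1.0:                     # numerical degeneracy
            z = (z0 + 0.1 * (k + 1)) % 0.9 + 0.05    # deterministic reseed
        out[k] = z
    return out

def tent_init(N, m, lb, ub):
    z = tent_sequence(N * m).reshape(N, m)
    return lb + z * (ub - lb)    # Eq. (1) with chaotic z instead of r

X0 = tent_init(N=50, m=10, lb=-100.0, ub=100.0)
```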

Nonlinear inertia weight factor for hunting and attacking strategy on iguana
Coordinating local exploitation ability and global search ability is a key factor affecting the optimization accuracy and speed of a metaheuristic algorithm. Since the update of a coati's position is closely related to its current position, a nonlinear inertia weight factor is used to adjust how strongly the position update depends on the current position information. The nonlinear inertia weight factor is calculated as follows.
where t is the current iteration number and T is the maximum iteration number. The maximum inertia weight is ω_max = 1; as the iterations progress, the inertia weight factor increases nonlinearly and eventually approaches this maximum value.
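The paper's exact inertia-weight expression appears in an equation not reproduced here, so the quadratic form below is purely illustrative: a common nonlinearly increasing schedule with ω_max = 1 (our assumption), small early for exploration and approaching 1 late for exploitation:

```python
import numpy as np

# Illustrative nonlinear inertia weight (assumed quadratic form, not the
# paper's exact equation): omega grows from ~0 to omega_max = 1 over the run.
def inertia_weight(t, T, omega_max=1.0):
    return omega_max * (t / T) ** 2   # small early (exploration), -> 1 late

w = np.array([inertia_weight(t, 100) for t in (1, 50, 100)])
```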

Adaptive T-distribution variation strategy for process of escaping from predators
In the exploitation phase of the coati algorithm, an adaptive T-distribution mutation strategy is introduced. In each iteration, the current coati's fitness value is compared with the average fitness value of the population. When the coati's fitness is higher than the average, the current coati is in an aggregated state, and the adaptive T-distribution mutation is applied to increase diversity; when it is lower than the average, the original position-updating method is used. The T-distribution has a degree-of-freedom parameter n, and its probability density is 23

p(x) = Γ((n + 1)/2) / (√(nπ) Γ(n/2)) · (1 + x²/n)^{−(n+1)/2},

where Γ(·) is the gamma function. As n → 1 the T-distribution tends to the Cauchy distribution C(0, 1), and as n → ∞ it tends to the standard normal distribution N(0, 1).
In TNTWCOA, the position of each coati is perturbed using T-distribution mutation with an adaptive degree-of-freedom parameter: the mutation operator takes the current iteration number as the degree of freedom. At the beginning of the iterations, the T-distribution mutation is similar to Cauchy mutation, so the algorithm has good global exploration ability, population diversity increases, and the ability to jump out of local optima is enhanced. As the iteration count grows, the mutation becomes similar to Gaussian mutation, which improves the local exploitation ability, and the perturbation strength on the population changes from strong to weak. Introducing adaptive T-distribution mutation as an improved search strategy thus effectively enhances the optimization performance of the algorithm and its ability to escape local optima.
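A minimal sketch of this operator follows. The common form x' = x + x · t(df) with df set to the iteration counter is assumed here (the paper's exact scaling is not reproduced above); with df = 1 the perturbation is Cauchy-like and heavy-tailed, and with large df it is near-Gaussian:

```python
import numpy as np

# Adaptive t-distribution mutation sketch: degree of freedom = current
# iteration t, so early mutations explore globally (Cauchy-like) and late
# mutations refine locally (Gaussian-like).
def t_mutation(x, t, rng):
    return x + x * rng.standard_t(df=t, size=x.shape)

rng = np.random.default_rng(0)
x = np.ones(5)
early = t_mutation(x, t=1, rng=rng)       # heavy-tailed perturbation
late = t_mutation(x, t=10_000, rng=rng)   # near-Gaussian perturbation
```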

Alert mechanism for process of escaping from predators
The first half of the population is updated using the formula of improvement point 3, and the second half is updated by introducing a sparrow-style alert mechanism. Introducing this alert update mechanism in the second stage improves the vigilance of the coati algorithm and enables it to search within the admissible range. When danger is perceived, coatis on the edge of the group move quickly to a safe area to obtain a better position, while coatis in the middle of the group move randomly to get closer to the others. The mathematical expression is as follows, where G is the current global optimal position; β, a step-control parameter, is a random number drawn from a normal distribution with mean 0 and variance 1; K ∈ [−1, 1] is a random number indicating the direction and step size of the coati's movement; F_i is the fitness value of the current coati; F_g and F_w are the global best and worst fitness values, respectively; and ε is a small constant that avoids a zero denominator.
In short, F_i > F_g indicates that the coati is at the edge of the group and vulnerable to predators, while F_i = F_g indicates that a coati in the middle of the group is aware of the danger and needs to stay close to the others to avoid predation.
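The two cases above can be sketched as follows, assuming the standard sparrow-search danger-awareness equations (edge individuals jump toward the global best G; middle individuals step relative to the worst member); the function name and signature are ours:

```python
import numpy as np

# Sketch of the sparrow-style alert update: F_i > F_g means the coati is
# on the edge of the group, F_i == F_g means it is in the middle.
def alert_update(x, F_i, G, F_g, x_worst, F_w, rng, eps=1e-12):
    if F_i > F_g:                      # edge: jump toward the safe area
        beta = rng.normal(0.0, 1.0)    # N(0, 1) step-control parameter
        return G + beta * np.abs(x - G)
    K = rng.uniform(-1.0, 1.0)         # direction and step size
    return x + K * (np.abs(x - x_worst) / ((F_i - F_w) + eps))

rng = np.random.default_rng(1)
x, G, x_worst = np.array([1.0, 2.0]), np.zeros(2), np.array([3.0, 3.0])
edge = alert_update(x, 5.0, G, 1.0, x_worst, 10.0, rng)    # F_i > F_g
middle = alert_update(x, 1.0, G, 1.0, x_worst, 10.0, rng)  # F_i == F_g
```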

Pseudocode and flowchart
The flowchart of the proposed TNTWCOA technique is shown in Fig. 5. Different improvement strategies are applied in the initialization process, the exploration phase, and the exploitation phase. In the initialization process, the chaotic sequence mechanism makes the distribution of initial solutions more uniform, generates high-quality initial solutions, and increases population diversity. In the exploration phase, the nonlinear inertia weight factor balances the local exploitation and global search abilities of the algorithm. In the exploitation phase, adaptive T-distribution mutation increases population diversity at low fitness values and improves the ability to escape local optima, while the alert update mechanism improves the vigilance of the algorithm: coatis on the edge of the population move quickly to a safe area when danger is perceived, and coatis in the middle move randomly to get closer to the others. Algorithm 1 gives the pseudocode of the TNTWCOA technique.
Pseudocode of the TNTWCOA algorithm.

Experimental studies and results
To verify the effectiveness of the proposed multi-strategy improved Coati Optimization Algorithm, this section uses the well-known IEEE CEC2017 benchmark functions in 30, 50, and 100 dimensions and compares TNTWCOA with the Improved Coati Optimization Algorithm (ICOA), Coati Optimization Algorithm (COA), Golden Jackal Optimization (GJO), Osprey Optimization Algorithm (OOA), Sand Cat Swarm Optimization (SCSO), and Subtraction-Average-Based Optimizer (SABO). The evaluation uses statistical measures such as the best, mean, worst, and standard deviation (STD) values, obtained from 30 independent runs of each algorithm with 10,000 iterations and a population size of 50. The optimization results of the proposed algorithm and the comparison algorithms on the CEC2017 benchmark functions in 30, 50, and 100 dimensions are analyzed and discussed below. The details of CEC2017 are shown in Table 1.

Statistics analysis
Table 2 shows the statistical results of the TNTWCOA, ICOA, COA, GJO, OOA, SCSO, and SABO algorithms on the IEEE CEC2017 benchmark functions in 30 dimensions, together with the Friedman values computed from the average values. As can be seen from Table 2, for most of the 29 test functions in the 30-dimensional case, at least some evaluation indexes of TNTWCOA are superior to those of ICOA, GJO, COA, SCSO, OOA, and SABO. For F1, F3, F4, F11, F12, F13, F14, F15, F18, F19, F22, F25, F28, and F30, all evaluation indexes of TNTWCOA are optimal, showing excellent performance. The exceptions are as follows: the std value is inferior to SABO on F5, to GJO on F6, and to ICOA on F9 and F20; the std and worst values are inferior to SABO on F7, to ICOA on F16 and F17, and to COA on F21; on F8 the min value is inferior to SCSO and the std value to COA; on F10 the min value is inferior to SCSO and the avg, median, and worst values to COA; on F23 all values are inferior to ICOA; on F24 the std, avg, median, and worst values are inferior to COA; on F26 the std and worst values are inferior to GJO; and on F27 and F29 the std and worst values are inferior to ICOA.
In general, the improved algorithm shows stronger optimization ability in 30 dimensions, especially on F3, F4, F6, F11, and F17, whose optimized values are close to the theoretical values of the functions. The overall Friedman order is TNTWCOA > ICOA > GJO > SABO > OOA > SCSO > COA. Therefore, from the statistical results, when Dim = 30 the proposed algorithm shows excellent performance compared with the other six algorithms, and its statistical indexes are significantly improved compared with the original COA.
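The Friedman ordering reported above can be reproduced from per-function mean values as sketched below; the numbers here are made up purely for illustration (four hypothetical algorithms on four functions), not results from the paper:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Each row = one benchmark function, each column = one algorithm's mean
# value (illustrative data only). Lower is better, so rank 1 is best.
means = np.array([
    [1.2, 3.4, 2.2, 5.0],   # F1 (hypothetical)
    [0.8, 2.9, 1.9, 4.1],   # F3 (hypothetical)
    [1.1, 3.1, 2.5, 3.9],   # F4 (hypothetical)
    [0.9, 2.8, 2.0, 4.4],   # F11 (hypothetical)
])
ranks = np.apply_along_axis(rankdata, 1, means)  # rank within each function
avg_rank = ranks.mean(axis=0)                    # Friedman average ranks
stat, p = friedmanchisquare(*means.T)            # overall significance test
```

Sorting the algorithms by `avg_rank` gives the "A > B > C" ordering used in the text.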

Convergence analysis
Figure 6 illustrates the convergence curves of GJO, SCSO, OOA, SABO, the original COA, ICOA, and TNTWCOA on the 29 IEEE CEC2017 benchmark functions over 10,000 iterations.
As can be seen from Fig. 6, except for F10 and F29, the convergence speed of the improved TNTWCOA algorithm is the best. On F1, F3, F4, F6, F11, F12, F13, F14, F15, F16, F17, F18, F19, F20, F22, F25, F26, F27, F28, and F30 it quickly converges to the best value and remains stable. On F5, F7, F8, F9, and F21 it also converges rapidly to the best value and remains stable, but that value is surpassed by other algorithms in the later period: on F5, F7, F8, and F9 it is surpassed by GJO, on F21 by ICOA and GJO, and on F23 and F24 by ICOA.

Analysis of box plot results
A box plot is a statistical chart consisting of the minimum value, the first quartile (25th percentile), the median, the third quartile (75th percentile), and the maximum value. Figure 7 shows the box plots obtained after 30 runs of each algorithm; the maximum and minimum values in Fig. 7 delimit the variation range of the optimal values over those runs.

Analysis of Wilcoxon rank sum test results
The Wilcoxon rank-sum test is a non-parametric statistical test used to find differences between algorithms. Table 3 shows the statistical results of TNTWCOA and the six other algorithms over 30 runs. When the Wilcoxon p-value is less than 0.05, the function optimization results of the two compared algorithms deviate significantly; when it is greater than 0.05, there is no significant deviation. From the comparison of TNTWCOA with ICOA, GJO, COA, SCSO, OOA, and SABO, TNTWCOA deviates significantly from most algorithms in the function optimization results. However, on F6, F8, F9, F19, F16, F17, F23, and F26, the p-value between TNTWCOA and GJO is greater than 0.05, indicating no significant difference in the obtained optimization values. Likewise, on F10, F16, F17, F21, and F26 the p-value between TNTWCOA and ICOA is greater than 0.05, and on F10 and F16 the p-value between TNTWCOA and SCSO is greater than 0.05, indicating no significant difference in those cases.
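The pairwise comparison described above can be sketched with SciPy's rank-sum test; the two sets of 30 run results below are synthetic, purely for illustration:

```python
import numpy as np
from scipy.stats import ranksums

# Two hypothetical sets of 30 final objective values (one per run).
rng = np.random.default_rng(0)
runs_a = rng.normal(100.0, 1.0, 30)   # e.g. one algorithm's final values
runs_b = rng.normal(105.0, 1.0, 30)   # e.g. a competitor's final values

stat, p = ranksums(runs_a, runs_b)    # Wilcoxon rank-sum test
significant = p < 0.05                # the paper's significance threshold
```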

Statistics analysis
Table 4 shows the statistical results of the IEEE CEC2017 benchmark functions in 50 dimensions optimized by the TNTWCOA, ICOA, GJO, COA, SCSO, OOA, and SABO algorithms, together with the Friedman values computed from the average values. As can be seen from Table 4, at least some evaluation indexes of TNTWCOA are superior to those of ICOA, GJO, COA, SCSO, OOA, and SABO. For F1, F4, F11, F12, F13, F14, F15, F18, F19, F20, F25, F27, F28, and F30, all evaluation indexes of TNTWCOA are optimal, showing excellent performance. The exceptions are as follows: on F3 and F9 the std value is inferior to ICOA, on F5 to COA, and on F8, F22, and F26 to OOA; on F10 and F27 only the worst value is inferior to ICOA; on F6 the std value is inferior to COA and the avg, median, and worst values to GJO; on F7 the std value is inferior to OOA and the worst value to GJO; on F16 the std, median, and worst values are inferior to GJO; on F17 the std value is inferior to SCSO and the avg value to GJO; on F21 the std and worst values are inferior to ICOA; on F23 the std value is inferior to GJO and the avg, median, and worst values to ICOA; on F24 the std, avg, median, and worst values are inferior to ICOA; and on F29 the std, avg, and median values are inferior to ICOA. In general, the improved algorithm shows stronger optimization ability in 50 dimensions. The overall Friedman order is TNTWCOA > SCSO > ICOA > GJO > SABO > COA > OOA.

Convergence analysis
As can be seen from Fig. 8, the convergence speed of the improved TNTWCOA algorithm is the best. On F1, F3, F4, F8, F10, F11, F13, F15, F18, F19, F20, F21, F22, F25, F26, F27, F28, F29, and F30 it quickly converges to the best value and remains stable. On F12 it converges quickly and then continues to improve the best value. On F6, F7, F16, F23, and F24 it converges quickly and remains stable, but its best value is surpassed in the later period: on F6, F7, and F16 by GJO, and on F23 and F24 by ICOA.

Analysis of Wilcoxon rank sum test results
Table 5 shows the Wilcoxon rank-sum test results for TNTWCOA and the six other algorithms in the 50-dimensional case. From the comparison of TNTWCOA with ICOA, GJO, COA, SCSO, OOA, and SABO, TNTWCOA deviates significantly from most algorithms. However, on F7, F16, F17, F20, and F23, the p-value between TNTWCOA and COA is greater than 0.05, indicating no significant difference in the obtained function values.

Statistics analysis
Table 6 shows the statistical results of the IEEE CEC2017 benchmark functions in 100 dimensions optimized by the TNTWCOA, ICOA, GJO, COA, SCSO, OOA, and SABO algorithms, together with the Friedman values computed from the average values. As can be seen from Table 6, at least some evaluation indexes of TNTWCOA are superior to those of ICOA, GJO, COA, SCSO, OOA, and SABO. For F1, F4, F11, F12, F13, F14, F15, F16, F18, F19, F25, F28, F29, and F30, all evaluation indexes of TNTWCOA are optimal, showing excellent performance. The exceptions are as follows: on F3, F7, F8, and F27 only the std value is inferior to ICOA; on F5 only the min and std values are inferior to ICOA; on F21 only the std and worst values are inferior to ICOA; on F24 the avg, median, and worst values are inferior to ICOA; on F6 and F22 only the std value is inferior to COA; on F20 the min value is inferior to GJO and the std and worst values to ICOA; on F9 the std value is inferior to COA and the worst value to SCSO; on F10 the std value is inferior to COA and the worst value to ICOA; and on F26 the min value is inferior to ICOA and the std value to GJO. In general, the improved algorithm shows stronger optimization ability in 100 dimensions. The overall Friedman order is TNTWCOA > SCSO > ICOA > GJO > SABO > OOA > COA. Therefore, from the statistical results, when Dim = 100 the proposed algorithm shows excellent performance compared with the other six algorithms, and its statistical indexes are significantly improved compared with the original COA and ICOA.

Convergence analysis
Figure 10 illustrates the convergence curves of GJO, SCSO, OOA, SABO, the original COA, ICOA, and TNTWCOA on the 29 IEEE CEC2017 benchmark functions over 10,000 iterations.
As can be seen from Fig. 10, except for F3, F17, and F24, the convergence speed of the improved TNTWCOA algorithm is the best. On F4, F5, F6, F7, F8, F9, F10, F11, F14, F15, F16, F18, F21, F22, F25, F26, F28, F29, and F30 it quickly converges to the best value and remains stable. On F1, F12, F19, and F20 it converges quickly and then continues to improve the best value. On F23 it converges quickly and remains stable, but its best value is later surpassed by ICOA and GJO, and on F27 it is surpassed by ICOA. On F13 it does not converge quickly, but it gradually converges to a better value.

Analysis of box plot results
The maximum and minimum values in Fig. 11 delimit the variation range of the optimal values of the 29 CEC2017 functions optimized by TNTWCOA, GJO, SCSO, OOA, SABO, COA, and ICOA over 30 runs. The narrower the box, the smaller the fluctuation range of the optimal value over the 30 runs and the more stable the optimization; the lower the position of the box, the smaller the function optimization value and the closer it is to the theoretical value. The symbol "o" in the diagram indicates an outlier. As can be seen from Fig. 11, except for F23 and F24, TNTWCOA has the lowest box positions; its boxes are widest on F9 and F20, and narrowest on F1, F3, F11, F12, F13, F14, F15, F16, F17, F19, F25, F27, F28, F29, and F30.

Analysis of Wilcoxon rank sum test results
Table 7 shows the Wilcoxon rank-sum test results (Dim = 100). As can be seen from Table 7, TNTWCOA deviates significantly from most algorithms in the function optimization results. However, on F20 the p-values between TNTWCOA and both GJO and ICOA are greater than 0.05, indicating no significant difference in the obtained optimization values. Likewise, on F21 the p-value between TNTWCOA and ICOA, and on F24 the p-value between TNTWCOA and SCSO, are greater than 0.05, indicating no significant difference in those cases.

TNTWCOA for engineering optimization problems
To verify the practical optimization effect of TNTWCOA in solving engineering problems, its performance is tested on four selected classic engineering problems; the specific performance on each problem is as follows:

Three-bar truss design problem6
The main purpose of the three-bar truss design problem is to reduce the structure's weight under the total supporting load P. The geometry of this problem is given in Fig. 12. In the benchmark suite, the total number of decision variables is D = 2, the number of inequality constraints is g = 3, the number of equality constraints is h = 0, and the best known feasible objective function value is f(x*) = 2.6389584338E+02 24,25.
The mathematical formulation of the three-bar truss problem, with its objective and constraints, is as follows. The iterative process of the six algorithms finding the optimal solution is shown in Fig. 13a, and the corresponding box plots in Fig. 13b. The simulation results make clear that TNTWCOA quickly provides the optimal solution to the three-bar truss problem, with an objective function value equal to 2.63896E+02. The statistical results obtained by TNTWCOA and the competitor algorithms are reported in Table 8; they show that TNTWCOA outperforms the competitor algorithms on the statistical indicators.
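The objective and constraints can be sketched as follows, assuming the common statement of the problem (L = 100 cm, P = 2 kN/cm², σ = 2 kN/cm², cross-sections x1, x2 in [0, 1]); the constants and the quoted near-optimal design are from the standard benchmark literature, not computed here:

```python
import numpy as np

# Common three-bar truss formulation: minimize the weight f subject to
# three stress constraints g_i(x) <= 0.
L, P, SIGMA = 100.0, 2.0, 2.0

def truss_objective(x):
    x1, x2 = x
    return (2 * np.sqrt(2) * x1 + x2) * L

def truss_constraints(x):          # feasible iff every entry is <= 0
    x1, x2 = x
    d = np.sqrt(2) * x1**2 + 2 * x1 * x2
    g1 = (np.sqrt(2) * x1 + x2) / d * P - SIGMA
    g2 = x2 / d * P - SIGMA
    g3 = 1.0 / (np.sqrt(2) * x2 + x1) * P - SIGMA
    return np.array([g1, g2, g3])

x_best = np.array([0.78868, 0.40825])   # widely reported near-optimal design
f_best = truss_objective(x_best)        # close to the best known 263.8958
```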

Welded beam design5
Welded beam design is a common and challenging problem in structural engineering. The goal is to achieve the best structural performance and minimize the weight of the beam by optimizing parameters such as the shape, size, and layout of the weld under the given constraints. In the benchmark suite, the total number of decision variables is D = 4, the number of inequality constraints is g = 5, the number of equality constraints is h = 0, and the best known feasible objective function value is f(x*) = 1.6702177263 25,26. The specific structure of the welded beam design is shown in Fig. 14.

The gear train design problem
The gear train design problem aims to minimize the deviation of the achieved gear ratio from the required target ratio. In the benchmark suite, the total number of decision variables is D = 4, the number of inequality constraints is g = 1, the number of equality constraints is h = 1, and the best known feasible objective function value is f(x*) = 0 25,27. The structural diagram of the gear train design problem is shown in Fig. 16.
The iterative process of the six algorithms finding the optimal solution is shown in Fig. 17a, and the corresponding box plots in Fig. 17b. The simulation results make clear that TNTWCOA quickly provides the optimal solution to the gear train design problem, with an objective function value equal to 0. The statistical results obtained by TNTWCOA and the competitor algorithms are reported in Table 10; they show that TNTWCOA outperforms the ICOA, GJO, OOA, SCSO, and SABO algorithms on the statistical indicators.
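The objective can be sketched as follows, assuming the common statement of the problem: minimize the squared deviation of the gear ratio from 1/6.931, with the four tooth counts restricted to integers in [12, 60]; the quoted design is a widely reported near-optimal solution from the benchmark literature:

```python
# Common gear train formulation: x = (x1, x2, x3, x4) are integer tooth
# counts; the objective is the squared gear-ratio error.
def gear_objective(x):
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

x_best = (19, 16, 43, 49)        # widely reported near-optimal design
f_best = gear_objective(x_best)  # on the order of 1e-12, i.e. ratio ~ 1/6.931
```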

Fig. 3 .
Fig. 3. Results of population samples under different chaotic maps.

Fig. 6 .
Fig. 6. The convergence curves and search history of the proposed technique and the other six algorithms for the IEEE CEC2017 benchmark functions (Dim = 30).

Table 1 .
Test functions details of CEC2017.

Table 2 .
The statistical results of the benchmark functions using the proposed technique and the other six algorithms (Dim = 30). Significant values are in bold.

Table 3 .
Wilcoxon rank-sum test results. Significant values are in bold.

Table 4 .
The statistical results of the benchmark functions using the proposed technique and the other six algorithms (Dim = 50). Significant values are in bold.

Table 6 .
The statistical results of the benchmark functions using the proposed technique and the other six algorithms (Dim = 100). Significant values are in bold.

Table 11 .
Statistical results for the speed reducer design problem. Bold values indicate the best performance.

Table 12 .
Statistical results of the Wilcoxon rank-sum test. Significant values are in bold.