Article

Self-Adaptive Differential Evolution with Gauss Distribution for Optimal Mechanism Design

1 School of Mechanical Engineering, Hanoi University of Science and Technology, Hanoi 10000, Vietnam
2 Department of Machinery and Control Systems, Shibaura Institute of Technology, Tokyo 135-8548, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(10), 6284; https://doi.org/10.3390/app13106284
Submission received: 26 March 2023 / Revised: 14 May 2023 / Accepted: 16 May 2023 / Published: 21 May 2023
(This article belongs to the Section Mechanical Engineering)

Abstract

Differential evolution (DE) is one of the best-performing evolutionary algorithms (EAs). In recent decades, many techniques have been developed to enhance this algorithm, such as the Improved Self-Adaptive Differential Evolution (ISADE) algorithm. Based on an analysis of the aspects that may improve the performance of ISADE, we propose a modified ISADE version that applies a Gaussian distribution in the mutation procedure. In ISADE, the scaling factor (F) is determined by ranking the population and then evaluating a Sigmoid function of the rank number, the population size, and the current generation. In the proposed algorithm, F is amplified by a factor generated from a Gaussian distribution, which has the potential to enhance population diversity. In comparison with several reference algorithms regarding convergence speed and the consistency of the optimal solutions, the simulation results show that the performance of the suggested algorithm is exceptional.

1. Introduction

Recent decades have witnessed a class of powerful evolutionary algorithms and their applications. Like many other evolutionary algorithms, differential evolution is a population-based technique. The algorithm was developed by Rainer Storn and Kenneth V. Price [1]. It is inspired by the phenomenon of evolution in nature and is a strong tool for solving engineering problems. Three mechanisms (mutation, crossover, and selection) operate in DE, in which three control parameters (population size, scaling factor, and crossover rate) have a significant effect on the exploration and exploitation abilities of the algorithm.
In parallel with the development of computing and Industry 4.0, evolutionary algorithms have gradually demonstrated their advantages over traditional mathematical methods, such as the gradient descent method [2] and the Newton–Raphson method [3]. As a result, more and more researchers have focused on improving evolutionary algorithms, and differential evolution has not been left out of this trend.
In a literature review on the advances of the differential evolution algorithm, we cannot ignore some state-of-the-art developments, such as the self-adaptive differential evolution algorithm (SaDE) [4], the self-adaptive jDE algorithm (JDE) [5], and adaptive differential evolution with an optional external archive (JADE) [6]. Recently, Yong Wang et al. incorporated covariance matrix learning and adjusted a bimodal distribution parameter to generate the control parameters of the mutation and crossover operators; the resulting DE variant is more effective in solving problems with a high correlation among variables [7]. Ali Wagdy Mohamed and Ali Khater Mohamed randomly selected two vectors from the best and worst individuals of the current generation, with the third vector chosen from the middle individuals; this mechanism effectively keeps the balance between the global exploration and local exploitation abilities of the search process [8]. Tam Bui et al. described an improved differential evolution algorithm with an opposition-based learning mechanism [9]. Each of the above-mentioned algorithms may work best for one or some specific problems. Thus, research in this field is still ongoing, and this paper introduces an improved version of DE which solves all four groups of benchmark functions with outstanding convergence speed.
The Gaussian distribution, also called the normal distribution, is a continuous distribution that is widely used in optimization. It provides an excellent means to produce a new candidate around a current position. Simultaneously, an adaptive strategy for the standard deviation parameter keeps a good balance between the exploitation and exploration processes. This paper adopts a scaling factor generated by a Gaussian distribution to improve the mutation operation of the self-adaptive differential evolution algorithm (ISADE) proposed by Tam Bui et al. [10]. This factor is applied to three types of mutation operation (DE/best/1, DE/best/2, and DE/rand-to-best/1) with equal random selection probability. In addition, the standard deviation parameter of the Gaussian distribution is updated by comparing the global best fitness value between the current and previous generations. The proposed algorithm is tested on 19 benchmark functions and 4 conventional engineering problems. Numerical simulation results confirm that the proposed algorithm expresses significant improvement in comparison with the original DE and some recent related works.
The remainder of this paper is composed of five sections. Section 2 describes the method of the original DE. Section 3 reviews some recent related works on the application of the Gaussian distribution to DE. The modified differential evolution is introduced in Section 4. Section 5 compares the proposed DE algorithm with the others and describes its application to real problems. Finally, Section 6 comprises some brief conclusions.

2. Review of Differential Evolution

The DE algorithm was first introduced by Rainer Storn and Kenneth V. Price [1]. It mainly consists of four operations: initialization, mutation, crossover, and selection, as described in Algorithm 1.
Algorithm 1: DE algorithm
Initialization: Generate initial population within boundary.
While termination condition is not reached.
   Mutation: Calculate the mutation vector $V_i$.
   Crossover: Perform crossover to create the trial vector $U_i$.
   Selection: Select the candidates for the next generation based on fitness value.
End.
Return the best solution.
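The loop of Algorithm 1 can be sketched in a few lines of Python. This is a minimal, illustrative implementation (DE/rand/1 mutation with binomial crossover); the sphere objective, seed, and parameter values are assumptions for the example, not the paper's settings:

```python
import numpy as np

def de(fobj, lb, ub, NP=20, F=0.5, cr=0.9, iters=200, seed=0):
    """Canonical DE (Algorithm 1): initialize, then mutate/crossover/select each generation."""
    rng = np.random.default_rng(seed)
    D = len(lb)
    X = lb + rng.random((NP, D)) * (ub - lb)           # initialization, Eq. (1)
    fit = np.array([fobj(x) for x in X])
    for _ in range(iters):
        for i in range(NP):
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            V = X[r1] + F * (X[r2] - X[r3])            # mutation: DE/rand/1, Eq. (2)
            jrand = rng.integers(D)
            mask = rng.random(D) <= cr
            mask[jrand] = True                          # binomial crossover, Eq. (11)
            U = np.clip(np.where(mask, V, X[i]), lb, ub)
            fU = fobj(U)
            if fU <= fit[i]:                            # greedy selection, Eq. (12)
                X[i], fit[i] = U, fU
    best = np.argmin(fit)
    return X[best], fit[best]

# usage: minimize the sphere function in 5 dimensions
sphere = lambda x: float(np.sum(x * x))
xbest, fbest = de(sphere, lb=np.full(5, -5.0), ub=np.full(5, 5.0))
```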

2.1. Initialization Operation

DE is a population-based algorithm; thus, an initial population needs to be generated randomly with uniform probability. This procedure increases the diversity of the population and scatters individuals over the entire domain. Assume that $D$ and $NP$ are the dimension of the problem and the population size, respectively. The search space is constrained within the lower boundary $LB = [l_1, l_2, \dots, l_D]$ and the upper boundary $UB = [u_1, u_2, \dots, u_D]$. $NP$ individuals are randomized by Equation (1).
$x_{i,j} = LB_j + rand[0,1] \cdot (UB_j - LB_j)$    (1)

where $x_{i,j}$ is the j-th component of the i-th individual.

2.2. Mutation Operation

In this stage, DE deploys mutation schemes to produce a new vector $v_i$ for each vector $x_i$. Equations (2)–(6) are the five most commonly applied mutation schemes in different kinds of DE algorithms.
DE/rand/1: $v_i^{iter} = x_{r1}^{iter} + F \cdot (x_{r2}^{iter} - x_{r3}^{iter})$    (2)

DE/best/1: $v_i^{iter} = x_{best}^{iter} + F \cdot (x_{r1}^{iter} - x_{r2}^{iter})$    (3)

DE/best/2: $v_i^{iter} = x_{best}^{iter} + F \cdot (x_{r1}^{iter} - x_{r2}^{iter}) + F \cdot (x_{r3}^{iter} - x_{r4}^{iter})$    (4)

DE/current-to-best/1: $v_i^{iter} = x_i^{iter} + F \cdot (x_{best}^{iter} - x_{r1}^{iter}) + F \cdot (x_{r2}^{iter} - x_{r3}^{iter})$    (5)

DE/rand-to-best/1: $v_i^{iter} = x_{r1}^{iter} + F \cdot (x_{best}^{iter} - x_{r1}^{iter}) + F \cdot (x_{r2}^{iter} - x_{r3}^{iter})$    (6)
where $x_{best}$ is the best vector in the current population at generation $iter$, and $r_1$, $r_2$, $r_3$, and $r_4$ are mutually exclusive indices randomized from $[1; NP]$. As can be seen, the effectiveness of the mutation operation depends greatly on the scaling factor F. Thus, many previous studies concentrated on this point to improve the performance of the DE algorithm [1]. Likewise, the authors of this paper introduced an adaptive scaling factor [10], which is formulated as follows:
Adaptive scaling factor $F_i^{iter}$: in the beginning iterations, the initial scaling factor $F_i$ of the i-th individual is set high, and it gradually decreases after a certain number of iterations for proper exploitation. The value $F_i$ is produced by a Sigmoid function as in Equation (7):

$F_i = \dfrac{1}{1 + \exp\left(\alpha \cdot \dfrac{i - NP/2}{NP}\right)}$    (7)

in which $\alpha$ is the control parameter that decides the trend of the scaling factor $F_i$, as illustrated in Figure 1.
ISADE introduces an additional scaling factor $F_{mean}^{iter}$ as in Equation (8):

$F_{mean}^{iter} = F_{min} + (F_{max} - F_{min}) \left(\dfrac{iter_{max} - iter}{iter_{max}}\right)^{n_{iter}}$    (8)
where $F_{max}$ and $F_{min}$ are the maximum and minimum values of F, of which the recommended values are 0.8 and 0.15, respectively; $iter_{max}$ is the maximum iteration; and $n_{iter}$ is the non-linear modulation index, determined as in Equation (9):

$n_{iter} = n_{min} + (n_{max} - n_{min}) \dfrac{iter}{iter_{max}}$    (9)
where $n_{max}$ and $n_{min}$ are selected in the [0, 15] range; their recommended values are 6.0 and 0.2, respectively. Lastly, the scaling factor $F_i^{iter}$ is determined for each individual in each iteration, as in Equation (10). In the mutation operation, each individual within the population is thus assigned a unique scaling factor. This approach is designed to ensure a balance between exploration and exploitation by the algorithm.

$F_i^{iter} = \dfrac{F_i + F_{mean}^{iter}}{2}$    (10)
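The schedule of Equations (7)–(10) can be sketched as follows, assuming the recommended constants quoted above ($F_{min} = 0.15$, $F_{max} = 0.8$, $n_{min} = 0.2$, $n_{max} = 6.0$); the value of alpha and the example ranks are illustrative assumptions:

```python
import numpy as np

def isade_scale_factors(ranks, NP, it, iter_max, alpha=2.0,
                        Fmin=0.15, Fmax=0.8, nmin=0.2, nmax=6.0):
    """Per-individual scaling factor F_i^iter from Eqs. (7)-(10).

    ranks : 1-based rank of each individual after sorting by fitness.
    """
    Fi = 1.0 / (1.0 + np.exp(alpha * (ranks - NP / 2.0) / NP))           # Eq. (7)
    n_it = nmin + (nmax - nmin) * it / iter_max                          # Eq. (9)
    Fmean = Fmin + (Fmax - Fmin) * ((iter_max - it) / iter_max) ** n_it  # Eq. (8)
    return (Fi + Fmean) / 2.0                                            # Eq. (10)

ranks = np.arange(1, 101)          # NP = 100 individuals
F_early = isade_scale_factors(ranks, NP=100, it=1, iter_max=1000)
F_late = isade_scale_factors(ranks, NP=100, it=999, iter_max=1000)
# the schedule shrinks F as the search proceeds, shifting toward exploitation
```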

2.3. Crossover Operation

Once the mutation has been carried out on the selected individuals, DE then utilizes a crossover operation as in Equation (11).
$u_{i,j}^{iter} = \begin{cases} v_{i,j}^{iter} & \text{if } rand[0,1] \le cr \text{ or } j = j_{rand} \\ x_{i,j}^{iter} & \text{otherwise} \end{cases}$    (11)

in which $u$, $v$, and $x$ are, respectively, the trial vector, the mutation vector, and the current vector; $cr$ is the crossover rate within the range (0, 1); and $j_{rand}$ is a randomly chosen number from the set {1, 2, …, D}.
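A minimal sketch of the binomial crossover of Equation (11); the toy vectors and seed are illustrative:

```python
import numpy as np

def binomial_crossover(x, v, cr, rng):
    """Trial vector u from Eq. (11): take v_j when rand <= cr or j == j_rand."""
    D = len(x)
    jrand = rng.integers(D)        # guarantees at least one gene from the mutant
    mask = rng.random(D) <= cr
    mask[jrand] = True
    return np.where(mask, v, x)

rng = np.random.default_rng(0)
u = binomial_crossover(np.zeros(6), np.ones(6), cr=0.5, rng=rng)
# u mixes genes of x (zeros) and v (ones); at least one gene always comes from v
```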

2.4. Selection Operation

In this stage, the trial vector $u$ and the target vector $x$ are compared with each other in order to pick the one with the better fitness for the next generation. This operation is carried out as in Equation (12).
$x_i^{iter+1} = \begin{cases} u_i^{iter} & \text{if } f(u_i^{iter}) \le f(x_i^{iter}) \\ x_i^{iter} & \text{otherwise} \end{cases}$    (12)
To improve the exploration ability of ISADE, Tam Bui et al. applied an opposition-based learning scheme [11], naming the result EOBL-DE [9]. In the first opposition-based learning scheme, EOBL-DE generates a 2NP population consisting of the NP initial candidates and their corresponding oppositions. Equation (13) determines the opposition of a particle $p(x_1, x_2, \dots, x_D)$. After that, all 2NP candidates are evaluated and ranked by their fitness value, and the NP best candidates are selected for the next generation.

$ox_i = LB_i + UB_i - x_i$    (13)
in which $LB_i$ and $UB_i$ designate the lowest and highest possible values of $x_i$. Figure 2 illustrates the position of a candidate and its corresponding opposite in both one- and two-dimensional graphs, where the blue point is the current candidate and the orange points are its corresponding opposition particles.
The second scheme applies the opposition-based learning strategy to the elite candidates in the selection operation. After ranking all candidates by their fitness value, only a certain number of the top best particles are selected to calculate their oppositions. Each candidate and its corresponding opposition are then assessed and compared based on their fitness values in order to find the most suitable ones for the next generation. Figure 3 describes the elite-based opposition strategy of EOBL-DE. The elite particle is marked in green, while its corresponding opposition is marked in orange. It was confirmed that applying OBL to the elite particles enhanced the local search ability of ISADE.
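The opposition mapping of Equation (13) and the elite scheme can be sketched as follows; the elite count, toy problem, and seed are illustrative assumptions, not the settings used in [9]:

```python
import numpy as np

def opposition(X, lb, ub):
    """Opposite candidate from Eq. (13): ox = LB + UB - x, per dimension."""
    return lb + ub - X

def elite_obl_step(X, fit, fobj, lb, ub, n_elite):
    """Elite opposition scheme: oppose only the n_elite best candidates and
    keep whichever of each (candidate, opposite) pair has the better fitness."""
    order = np.argsort(fit)
    for i in order[:n_elite]:
        ox = opposition(X[i], lb, ub)
        fox = fobj(ox)
        if fox < fit[i]:
            X[i], fit[i] = ox, fox
    return X, fit

# usage on a toy sphere problem (illustrative settings)
rng = np.random.default_rng(1)
lb, ub = np.full(3, -4.0), np.full(3, 4.0)
X = lb + rng.random((10, 3)) * (ub - lb)
sphere = lambda x: float(np.sum(x * x))
fit = np.array([sphere(x) for x in X])
fit0 = fit.copy()
X, fit = elite_obl_step(X, fit, sphere, lb, ub, n_elite=3)
# fitness values can only improve or stay the same after the elite step
```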

3. Related Works

In the EA literature, the Gaussian distribution has been applied in several different ways, as follows.
H. Zheng et al. [12] proposed a modification of the Cuckoo search algorithm (GCS), which applies a Gaussian random step instead of the Lévy flight when producing a new solution. GCS was validated to be better than the standard Cuckoo search algorithm when comparing the best solution and the convergence rate.
X.-S. He et al. [13] used a Gaussian random vector to generate a new offspring vector based on the current position. This procedure is considered an additional mutation operation after the updating process of the original bat algorithm.
Z. Lin and Q. Zhang [14] integrated a Gaussian mutation into particle swarm optimization. A new candidate is generated by adding a Gaussian random number to the current solution. Adopting the Gaussian distribution gives a high probability of producing a number around the mean and a low probability of producing a large number; thus, the new population supports the two inevitably important abilities of the evolutionary process, exploration and exploitation.
Focusing on DE, Jena et al. [15] added a Gaussian random variable $N(0, \sigma_i^2)$ to the parent vector to produce a new solution, in which the standard deviation $\sigma_i$ is formulated as in Equation (14):

$\sigma_i = f(x_i) - f_{min}$    (14)

where $f(x_i)$ is the fitness value of the evaluated candidate $x_i$ and $f_{min}$ is the global best fitness value. This definition controls the amount of perturbation added and guarantees the distribution of a suitable random value to each parent vector in the population.
Sun et al. [16] combined a Gaussian distribution and a crossover operator to deploy a mutation operation whose mean and standard deviation are dynamically adjusted at each iteration. This mechanism was expected to improve the diversity of the population, and it performed effectively on benchmark functions and a real-world engineering problem.
Li and Jin [17] randomized the scale factor through two Gaussian distributions to increase the chance of escaping from local optima. However, all the above-mentioned studies show that there is no free lunch in search and optimization: each optimization algorithm performs well only on specific benchmark functions. This paper proposes a novel algorithm using a mutation mechanism with a Gaussian distribution which solves representatives of all four groups of benchmark functions. In addition, the proposed algorithm has outstanding convergence speed in comparison with the original DE and its recent improvements.

4. Modified Differential Evolution

The Gaussian distribution is widely used in probability and statistics and is often applied to create real-valued random variables. It gives a good chance of exploiting a new potential candidate in the optimization of many complex functions. In this paper, we improved the exploration ability of EOBL-DE by using the Gaussian distribution. Our strategy generates new trial vectors in the mutation operation by a Gaussian perturbation mechanism that multiplies a Gaussian random variable by the vector difference $(X_{r1,j}^{iter} - X_{r2,j}^{iter})$ in Equation (15), $(X_{r1,j}^{iter} - X_{r2,j}^{iter})$ and $(X_{r3,j}^{iter} - X_{r4,j}^{iter})$ in Equation (16), and $(X_{r2,j}^{iter} - X_{r3,j}^{iter})$ in Equation (17).
The mutation operation schemes are thus modified as in the following Equations (15)–(17):
DE/best/1: $V_{i,j}^{iter} = X_{best,j}^{iter} + N(\mu, \sigma^2)_i \cdot F \cdot (X_{r1,j}^{iter} - X_{r2,j}^{iter})$    (15)

DE/best/2: $V_{i,j}^{iter} = X_{best,j}^{iter} + N(\mu, \sigma^2)_i \cdot [F \cdot (X_{r1,j}^{iter} - X_{r2,j}^{iter}) + F \cdot (X_{r3,j}^{iter} - X_{r4,j}^{iter})]$    (16)

DE/rand-to-best/1: $V_{i,j}^{iter} = X_{r1,j}^{iter} + F \cdot (X_{best,j}^{iter} - X_{r1,j}^{iter}) + N(\mu, \sigma^2)_i \cdot F \cdot (X_{r2,j}^{iter} - X_{r3,j}^{iter})$    (17)
where $r_1$, $r_2$, and $r_3$ are randomized from $[1; NP]$, and $N(\mu, \sigma^2)$ is a Gaussian random variable with mean $\mu$ and standard deviation $\sigma$. The recommended value for $\mu$ is 1, whereas $\sigma$ is defined in Equation (18); $N(\mu, \sigma^2)_i$ is calculated for each i-th individual based on its fitness value.
$\sigma_i = 1 - 0.5 \cdot \exp\left(-\dfrac{fit_i^{iter} - fit_{min}^{iter}}{T}\right)$    (18)

where $T = T_o \cdot iter$, the recommended value for $T_o$ is 1.5, and $iter$ is the current iteration number. The G-ISADE algorithm is described in Algorithm 2.
Algorithm 2: G-ISADE algorithm.
Initialization: Generate initial population of NP candidates and their corresponding opposition within boundary.
Generate initial Gauss variables.
Evaluation and rank: Rank all 2NP candidates based on fitness value. After that, NP particles will be selected for the next generation.
While termination condition is not reached.
  Adaptive scaling factor: Calculate the adaptive scaling factor F by Equation (10).
  Mutation: Calculate mutation vector $V_i$ by Equations (15)–(17).
  Crossover: Perform crossover to create the $U_i$ vector.
  Selection: Calculate the opposition of a certain number of the top best particles. Select the NP candidates for the next generation based on fitness value.
  Update Gauss variables: Recalculate the standard deviation of the Gaussian distribution and generate new Gaussian variables. The details of this step can be seen in the procedure of G-ISADE in Figure 4.
End.
Return the best solution.
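The Gaussian-amplified mutation of Equations (15)–(18) can be sketched as follows. Only the DE/best/1 branch of Equation (15) is shown (G-ISADE selects randomly among Equations (15)–(17)), and the sign convention inside the exponential of Equation (18) follows the reconstruction given above; the toy data and seed are illustrative:

```python
import numpy as np

def gauss_sigma(fit, T0, it):
    """Per-individual sigma from Eq. (18): sigma_i = 1 - 0.5*exp(-(fit_i - fit_min)/T),
    with T = T0 * iter (recommended T0 = 1.5)."""
    T = T0 * it
    return 1.0 - 0.5 * np.exp(-(fit - fit.min()) / T)

def gisade_mutation(X, fit, F, it, rng, T0=1.5, mu=1.0):
    """One mutant per individual using the Gaussian-amplified DE/best/1 scheme, Eq. (15)."""
    NP, D = X.shape
    best = X[np.argmin(fit)]
    sigma = gauss_sigma(fit, T0, it)
    V = np.empty_like(X)
    for i in range(NP):
        r1, r2 = rng.choice([j for j in range(NP) if j != i], 2, replace=False)
        g = rng.normal(mu, sigma[i])           # N(mu, sigma_i^2) amplifier
        V[i] = best + g * F * (X[r1] - X[r2])
    return V

rng = np.random.default_rng(2)
X = rng.random((8, 4))
fit = np.array([float(np.sum(x * x)) for x in X])
V = gisade_mutation(X, fit, F=0.5, it=1, rng=rng)
```

Note that Equation (18) confines each $\sigma_i$ to the interval [0.5, 1): the current best individual receives the smallest perturbation spread, while poor individuals are perturbed more strongly.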
Recently, L. Tang et al. [18] presented an individual-dependent mechanism for DE, namely IDE. This approach deploys a similar scale factor $F_{new}$ in the mutation operation; this parameter is randomized from a normal distribution $N(F_i^{base}, 0.1)$, where the base scale factor $F_i^{base}$ associated with individual $x_i$ is determined by Equation (19):

$F_i^{base} = \dfrac{b_i}{NP}$    (19)

where $b_i$ is the rank index of the fitness value of individual $x_i$ after ranking, and $NP$ is the population size. The standard deviation is set to 0.1. This differs from G-ISADE, which uses a strategy that recalculates the standard deviation at each iteration. The comparison of these two algorithms is implemented in Section 5.
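For contrast, IDE's rank-based factor of Equation (19) can be sketched as follows (the toy fitness values and seed are illustrative):

```python
import numpy as np

def ide_scale_factor(fit, rng):
    """IDE's individual-dependent factor: F_i^base = b_i / NP (Eq. (19)),
    then F_new ~ N(F_i^base, 0.1^2) with a fixed standard deviation."""
    NP = len(fit)
    ranks = np.empty(NP, dtype=int)
    ranks[np.argsort(fit)] = np.arange(1, NP + 1)   # b_i: 1-based rank by fitness
    Fbase = ranks / NP
    return rng.normal(Fbase, 0.1)

rng = np.random.default_rng(3)
fit = np.array([3.0, 1.0, 2.0, 0.5])
F = ide_scale_factor(fit, rng)
# the best individual (fit 0.5) gets the smallest base factor, 1/4
```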

5. Numerical Simulation

This section compares the performance of the presented method with previous methods, consisting of the original DE and its developed versions, on the following issues: convergence speed, stability, and accuracy.

5.1. Benchmark Function

The novel algorithm is tested on 19 multidimensional benchmark-test functions [19], which include unimodal and multimodal functions.
All tested functions are continuous and are divided into the following four groups:
  • Function group 1 is separable and unimodal: Sphere (f1), weighted sphere (f12), Sum of Different Power (f13), Bent Cigar (f17).
  • Function group 2 is non-separable and unimodal: Schwefel's Problem 1.2 (f6), Schwefel function 2.22 (f8), Rosenbrock (f2).
  • Function group 3 is separable and multimodal: Rastrigin (f4), Levy (f7), Alpine (f10), Schwefel’s Problem 2.26 (f19).
  • Function group 4 is non-separable and multimodal: Ackley (f5), Griewank (f3), Zakharov (f14), Exponential Problem (f15), Salomon Problem (f16), Schaffer function (f9), Pathological (f11), Expanded Schaffer's F6 (f18).
The formulae of all tested benchmarks are given in Appendix A. All benchmark functions are rendered hereafter as minimization problems; this does not lose generality, however, as they can also be used for maximization problems by simply flipping the sign of the objective function.

5.2. Performance Comparison of Proposed Algorithm with IDE

This section compares the search ability of G-ISADE with that of IDE. To have a fair comparison, we set up the same search conditions as in the original paper: the dimensionality of the test functions is 30, the population size is $NP = 100$, and the search process stops at $10^4 \cdot D$ iterations. The benchmark functions $f_1$, $f_2$, $f_3$, $f_4$, $f_5$, $f_{17}$, $f_{18}$, and $f_{19}$ were tested with 51 independent runs; these test functions are representatives of the four considered groups and were also considered in the original paper on IDE. We define an error value ($\varepsilon$) as the difference between the obtained optimal fitness value and the known one; this value is taken to be zero if it is less than $10^{-8}$. The mean value of the 51 runs with the standard deviation is shown in Table 1. The comparison results show that the proposed approach is better than IDE in solving the considered functions. For the test functions $f_1$, $f_4$, $f_5$, $f_{17}$, $f_{18}$, and $f_{19}$, the error value of the fitness is less than $10^{-8}$ and is taken to be zero, so the new approach expresses significant robustness. These results prove the more powerful search ability of the proposed method in comparison with IDE.

5.3. Performance Comparison of G-ISADE with Its Previous Improvements

In this part, we assess and compare the effectiveness of the suggested algorithm (G-ISADE) with the original DE [1] and some previous improvements of DE, such as ISADE [10], JOBLDE, and EOBLDE [9]. The comparison criteria are the success rate (SR) and the total count of function evaluations (FE). SR is calculated as the ratio of the number of successful runs, i.e., runs that reach the ending condition (error value $\varepsilon \le 10^{-8}$), to the total number of samples, NS = 30. FE is the most popular parameter for measuring the convergence speed of an optimization algorithm; it counts the function calls needed to compute the fitness value. In addition, an improvement rate (IR) is defined as the percentage improvement in FE of the proposed algorithm in comparison with the original DE; this factor confirms the effectiveness of the proposed algorithm regarding convergence speed when solving the benchmark functions.
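These criteria can be written down directly; the run records below are illustrative numbers, not values from Table 2:

```python
def success_rate(errors, tol=1e-8):
    """SR: fraction of runs whose final error value reaches the tolerance."""
    return sum(e <= tol for e in errors) / len(errors)

def improvement_rate(fe_proposed, fe_de):
    """IR: percentage reduction in function evaluations versus the original DE."""
    return 100.0 * (fe_de - fe_proposed) / fe_de

# illustrative values only
errors = [0.0, 0.0, 5e-9, 2e-3, 0.0]
sr = success_rate(errors)              # 4 of 5 runs reached eps <= 1e-8
ir = improvement_rate(60_000, 100_000) # 40% fewer function evaluations
```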
In the experiment, the parameters were set as follows: the number of dimensions D = 30 and the population size $NP = 100$. Following the original DE algorithm proposed by Storn and Price [1], we chose F = 0.5 and $C_r = 0.9$. All of the benchmarks were independently tested with 30 random initial samples, generated using the random number generator of MATLAB R2015a with a fixed seed. The simulation results are described in Table 2 and Figure 5. As can be seen, G-ISADE and EOBLDE obtained the optimal solutions for all considered benchmark functions. ISADE was not able to reach the optimal solution for the test functions $f_9$, $f_{11}$, $f_{16}$, and $f_{18}$, while DE demonstrated instability in reaching the termination condition ($\varepsilon \le 10^{-8}$); it was unable to solve the test functions $f_2$, $f_4$, $f_9$, $f_{11}$, $f_{16}$, $f_{18}$, and $f_{19}$. In terms of the number of function evaluation calls, G-ISADE achieved the optimal solution using fewer function evaluation calls, except for the test functions $f_{11}$ and $f_{14}$. For the 11 test functions $f_1$, $f_3$, $f_5$, $f_6$, $f_7$, $f_8$, $f_{10}$, $f_{12}$, $f_{13}$, $f_{15}$, and $f_{17}$, the proposed method demonstrated a significant improvement in terms of convergence speed.
Figures 6–22 express the convergence behavior of all algorithms for 17 of the benchmark-test problems; $f_{11}$ and $f_{14}$ are excluded because the performance of the proposed algorithm was not as good as that of EOBLDE on these test functions. In each graph, the number of iterations is shown on the horizontal axis, while the function value is shown on the vertical axis. These graphs show that the proposed algorithm converged to the global minimum with $\varepsilon \le 10^{-8}$ in fewer iterations than the remaining reference algorithms.

5.4. Modified Differential Evolution for Real-World Problem

To prove the ability of the introduced method, four popular real-world problems are considered and solved in this section. The parameters of the experiments are set as follows: the population size is $NP = 8 \cdot D$, and the terminal condition is that the iteration count or the number of function evaluations reaches its maximum value. All numerical simulations are independently run 30 times for each test problem.

5.4.1. Welded-Beam Design Problem

The objective of this problem is to minimize the manufacturing cost of a welded beam under some constraint conditions [20]. Figure 23 shows the structure of the welded beam. The target is to determine the four optimal design variables $x_1 (h)$, $x_2 (l)$, $x_3 (t)$, and $x_4 (b)$ subject to the shear stress $\tau$, the bending stress $\sigma$, and the end deflection of the beam $\delta$ under a bending force F at the end of the bar. The optimization problem is modeled by Equations (20)–(27).
Minimize:
$f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14.0 + x_2)$    (20)
Subject to
$g_1 = \tau(x) - 13{,}600 \le 0$    (21)

with $\tau(x) = \sqrt{\tau'^2 + 2\tau'\tau''\dfrac{x_2}{2R} + \tau''^2}$; $\tau' = \dfrac{6000}{\sqrt{2}\, x_1 x_2}$; $\tau'' = \dfrac{MR}{J}$;

$M = 6000\left(14.0 + \dfrac{x_2}{2}\right)$; $R = \sqrt{\dfrac{x_2^2}{4} + \left(\dfrac{x_1 + x_3}{2}\right)^2}$; $J = 2\left\{\sqrt{2}\, x_1 x_2 \left[\dfrac{x_2^2}{4.0} + \left(\dfrac{x_1 + x_3}{2}\right)^2\right]\right\}$
$g_2 = \sigma(x) - 30{,}000 \le 0$    (22)

with $\sigma(x) = \dfrac{504{,}000}{x_4 x_3^2}$
$g_3 = x_1 - x_4 \le 0$    (23)

$g_4 = 0.10471 x_1^2 + 0.04811 x_3 x_4 (14.0 + x_2) - 5.0 \le 0$    (24)

$g_5 = 0.125 - x_1 \le 0$    (25)

$g_6 = \delta(x) - 0.25 \le 0$    (26)

with $\delta(x) = \dfrac{65{,}856{,}000}{(30 \times 10^6)\, x_4 x_3^3}$

$g_7 = 6000 - P_c(x) \le 0$    (27)

with $P_c(x) = \dfrac{4.013 (30 \times 10^6) \sqrt{x_3^2 x_4^6 / 36.0}}{196.0} \left(1 - \dfrac{x_3}{28.0} \sqrt{\dfrac{30 \times 10^6}{4 (12 \times 10^6)}}\right)$, where $0.1 \le x_1, x_4 \le 2.0$ and $0.1 \le x_2, x_3 \le 10.0$.
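Equations (20)–(27), as reconstructed above, translate directly into an evaluation routine; the test point below is an arbitrary feasible design chosen for illustration, not a reported optimum:

```python
import math

E, G = 30e6, 12e6   # psi, from the problem statement

def welded_beam(x):
    """Objective and constraints of the welded-beam problem, Eqs. (20)-(27).
    Returns (cost, [g1..g7]); the design is feasible when every g <= 0."""
    x1, x2, x3, x4 = x
    cost = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
    tau_p = 6000.0 / (math.sqrt(2.0) * x1 * x2)
    M = 6000.0 * (14.0 + x2 / 2.0)
    R = math.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0) ** 2)
    J = 2.0 * (math.sqrt(2.0) * x1 * x2 * (x2**2 / 4.0 + ((x1 + x3) / 2.0) ** 2))
    tau_pp = M * R / J
    tau = math.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * x2 / (2.0 * R) + tau_pp**2)
    sigma = 504000.0 / (x4 * x3**2)
    delta = 65856000.0 / (E * x4 * x3**3)
    Pc = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36.0) / 196.0
          * (1.0 - x3 / 28.0 * math.sqrt(E / (4.0 * G))))
    g = [tau - 13600.0,
         sigma - 30000.0,
         x1 - x4,
         0.10471 * x1**2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
         0.125 - x1,
         delta - 0.25,
         6000.0 - Pc]
    return cost, g

# check an illustrative feasible design point
cost, g = welded_beam([0.25, 4.0, 9.5, 0.25])
feasible = all(gi <= 1e-6 for gi in g)
```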
Our optimal solution is compared with several approaches, including the coevolutionary differential evolution method (CDE) [21], coevolutionary particle-swarm optimization (CPSO) [22], the GA-based coevolution model (CGA) [23], and elite opposition-based learning differential evolution (EOBLDE) [9], as expressed in Table 3 and Table 4. As can be seen, our optimal solution is better than those of the remaining methods; in particular, the standard deviation value is 0, which proves the exceptional convergence ability of our method in solving the welded-beam design problem.

5.4.2. Design Problem of Air-Storage Tank

In this section, the dimensions of an air-storage tank are optimized; a high compression pressure of 3000 psi is applied to the structure [24]. The design domain is a cylindrical structure with a hemisphere at each end, as in Figure 24. The target is to minimize the manufacturing cost by evaluating the design variables, including the shell thickness $x_1 (T_s)$, the head thickness $x_2 (T_h)$, the inner radius $x_3 (R)$, and the cylindrical length $x_4 (L)$. This optimization problem is described by Equations (28)–(32) below.
Minimize:
$f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$    (28)
Subject to
$g_1 = -x_1 + 0.0193 x_3 \le 0$    (29)

$g_2 = -x_2 + 0.00954 x_3 \le 0$    (30)

$g_3 = -\pi x_3^2 x_4 - \dfrac{4}{3}\pi x_3^3 + 1{,}296{,}000.0 \le 0$    (31)

$g_4 = x_4 - 240.0 \le 0$    (32)

where $1 \times 0.0625 \le x_1, x_2 \le 99 \times 0.0625$ and $10.0 \le x_3, x_4 \le 200.0$.
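A direct transcription of Equations (28)–(32); the test point is an illustrative feasible design (the thicknesses are multiples of 0.0625, as the bounds require):

```python
import math

def air_tank(x):
    """Cost and constraints of the air-storage tank problem, Eqs. (28)-(32).
    Returns (cost, [g1..g4]); feasible when every g <= 0."""
    x1, x2, x3, x4 = x   # Ts, Th, R, L
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [-x1 + 0.0193 * x3,
         -x2 + 0.00954 * x3,
         -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 1296000.0,
         x4 - 240.0]
    return cost, g

cost, g = air_tank([1.125, 0.625, 55.0, 180.0])
feasible = all(gi <= 0 for gi in g)
```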
The four methods CDE, CPSO, CGA, and EOBLDE were again compared with our solution for this problem. The comparison results are given in Table 5 and Table 6. The best solution of G-ISADE is better than those of the other algorithms. As can be seen in Table 6, the search ability of G-ISADE is also better than that of the other methods, since the worst solution found by G-ISADE is better than those of the others.

5.4.3. Design Problem of Compression Spring

A compression spring is considered in this part [25]. The target is to minimize the weight of the spring, while the constraint functions are limitations on the deflection, shear stress, surge frequency, and outside diameter; moreover, the design variables are limited to certain ranges. The optimized parameters are the wire diameter $x_1 (d)$, the outer diameter $x_2 (D)$, and the number of turns $x_3 (N)$, as seen in Figure 25. This optimization problem is described by Equations (33)–(37).
Minimize:
$f(x) = (x_3 + 2) x_2 x_1^2$    (33)
Subject to
$g_1 = 1.0 - \dfrac{x_2^3 x_3}{71{,}785\, x_1^4} \le 0$    (34)

$g_2 = \dfrac{4.0 x_2^2 - x_1 x_2}{12{,}566 (x_2 x_1^3 - x_1^4)} + \dfrac{1.0}{5108\, x_1^2} - 1.0 \le 0$    (35)

$g_3 = 1.0 - \dfrac{140.45\, x_1}{x_2^2 x_3} \le 0$    (36)

$g_4 = \dfrac{x_1 + x_2}{1.5} - 1.0 \le 0$    (37)

where $0.05 \le x_1 \le 2.0$, $0.25 \le x_2 \le 1.3$, and $2.0 \le x_3 \le 15.0$.
Our optimal solution for this problem was evaluated against the four methods CDE, CPSO, CGA, and EOBLDE. The best solution of each approach is shown in Table 7, while the statistics are expressed in Table 8. The optimal solution obtained by G-ISADE is better than those of the other approaches.

5.4.4. Optimization Problem of Gear Reducer

Gear reducers are widely applied in machinery; they are used to reduce the speed and increase the torque delivered by a motor. This research applied the modified differential evolution to two optimization problems: a single-stage spur gear reducer and a two-stage planetary gear reducer.
Problem 1: Single stage spur gear reducer
This problem is described in [26] and shown in Figure 26. It contains seven design variables: the face width, $x_1$; the gear module, $x_2$; the number of teeth on the pinion gear, $x_3$; the lengths of shafts I and II between bearings, $x_4$ and $x_5$, respectively; and the diameters of shafts I and II, $x_6$ and $x_7$, respectively. The target is the minimal weight of the speed reducer box, expressed in Equation (38). This problem is bounded by the 11 constraint functions of Equations (39)–(49).
Design variables: $x_1 \in [2.6, 3.6]$; $x_2 \in [0.7, 0.8]$; $x_3 \in [17, 28]$ ($x_3$ is an integer); $x_4 \in [7.3, 8.3]$; $x_5 \in [7.3, 8.3]$; $x_6 \in [2.9, 3.9]$; $x_7 \in [5, 5.5]$.
Minimize:
$f(x) = 0.7854 x_1 x_2^2 (3.3333 x_3^2 + 14.9334 x_3 - 43.0934) - 1.508 x_1 (x_6^2 + x_7^2) + 7.4777 (x_6^3 + x_7^3) + 0.7854 (x_4 x_6^2 + x_5 x_7^2)$    (38)
Subject to
$g_1 = 27 x_1^{-1} x_2^{-2} x_3^{-1} - 1 \le 0$    (39)

$g_2 = 397.5 x_1^{-1} x_2^{-2} x_3^{-2} - 1 \le 0$    (40)

$g_3 = 1.93 x_4^3 x_2^{-1} x_3^{-1} x_6^{-4} - 1 \le 0$    (41)

$g_4 = 1.93 x_5^3 x_2^{-1} x_3^{-1} x_7^{-4} - 1 \le 0$    (42)

$g_5 = \dfrac{1}{110 x_6^3} \sqrt{\left(\dfrac{745 x_4}{x_2 x_3}\right)^2 + 16.9 \times 10^6} - 1 \le 0$    (43)

$g_6 = \dfrac{1}{85 x_7^3} \sqrt{\left(\dfrac{745 x_5}{x_2 x_3}\right)^2 + 157.5 \times 10^6} - 1 \le 0$    (44)

$g_7 = \dfrac{x_2 x_3}{40} - 1 \le 0$    (45)

$g_8 = \dfrac{5 x_2}{x_1} - 1 \le 0$    (46)

$g_9 = \dfrac{x_1}{12 x_2} - 1 \le 0$    (47)

$g_{10} = \dfrac{1.5 x_6 + 1.9}{x_4} - 1 \le 0$    (48)

$g_{11} = \dfrac{1.1 x_7 + 1.9}{x_5} - 1 \le 0$    (49)
Table 9 and Table 10 show the comparison of our optimal solution with previous research, which includes the artificial immune system genetic algorithm (AIS-GA) [27], the real-coded steady-state genetic algorithm (APM$_{rc}$) [28], and EOBLDE. From Table 9, the best solution of G-ISADE is better than those of the remaining algorithms. In addition, the standard deviation in the statistical results of G-ISADE is 0, which verifies the convergence ability of the proposed method in solving the minimal-weight problem of the single-stage spur gear reducer box.
Problem 2: Two-stage planetary gear reducer
Planetary gear reducers play an important role in transmission systems and are commonly applied in industry. They have many advantages in comparison with normal transmission systems, such as weight reduction, compact dimensions, a wide range of speed ratios, and high efficiency.
This section considers a two-stage planetary gear reducer, as described in Figure 27. This problem involves nine design variables, including the numbers of teeth on the primary sun gear (1) and the secondary sun gear (1′), which are $x_1$ and $x_6$, respectively; the gear modules, $x_2$ and $x_7$; the face widths, $x_3$ and $x_8$; the sun gears' modification coefficients, $x_4$ and $x_9$; and the first-stage transmission ratio, $x_5$. To minimize the weight of the planetary gear reducer, the total volume of all sun gears and planetary gears is set as the objective function, displayed in Equation (50), with 10 constraint functions, which include stress limitations and the adjacency condition, expressed in Equations (51)–(60). The details of the problem can be found in [29].
Design variables: $x_1 \in [17, 30]$; $x_2 \in [2, 5]$; $x_3 \in [55, 61]$; $x_4 \in [0.2, 0.5]$; $x_5 \in [5, 30]$; $x_6 \in [17, 30]$; $x_7 \in [2, 5]$; $x_8 \in [140, 200]$; $x_9 \in [0.2, 0.5]$.
Minimize:

$f(x) = \dfrac{\pi}{4} \Big( x_3 \big[ ((x_1 + 2(1 + x_4)) x_2)^2 + 3 ((0.5(x_5 - 2) x_1 + 2(1 + x_4)) x_2)^2 \big] + x_8 \big[ ((x_6 + 2(1 + x_9)) x_7)^2 + ((0.5(u/x_5 - 2) x_6 + 2(1 + x_9)) x_7)^2 \big] \Big)$    (50)

where $u$ is the total transmission ratio of the planetary gear system, $u = 30$.
Subject to
g1 = (2 × 2.89 × 5.69 × 10^5 × (x5 + 1)) / (x3 (x1 x2)² x5) − (1033.41 / (2.22 × 189.98 × 0.95))² ≤ 0;
g2 = (2 × 2.95 × 1.78 × 10^6 × (u − x5 + 1)) / (x8 (x6 x7)² (u − x5)) − (1104.47 / (2.25 × 189.98 × 0.94))² ≤ 0;
g3 = (2 × 2.89 × 5.69 × 10^5) / (x1 x2² x3) − 499.39 / (2.29 × 1.73 × 1.12) ≤ 0;
g4 = (2 × 2.89 × 1.78 × 10^6) / (x6 x7² x8) − 521.53 / (2.32 × 1.73 × 1.08) ≤ 0;
g5 = −x1 (x5 − 3) / (2(x5 + 1)) + 2 x4 + 4 ≤ 0;
g6 = −x6 (u − x5 − 3) / (2(30 − x5 + 1)) + 2 x9 + 4 ≤ 0;
g7 = 0.6 x1 x2 − x3 ≤ 0;
g8 = x3 − 1.3 x1 x2 ≤ 0;
g9 = 0.6 x6 x7 − x8 ≤ 0;
g10 = x8 − 1.3 x6 x7 ≤ 0.
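As a quick sanity check, the objective of Equation (50) and the face-width constraints g7–g10 can be sketched in Python. The grouping of terms in the volume formula follows our reading of the typeset equation (with u − x5 taken as the second-stage ratio term), so this is an illustrative sketch rather than the authors' exact model; the function and variable names are ours.

```python
import math

U = 30.0  # total transmission ratio of the planetary gear system

def volume(x):
    """Objective of Equation (50): total volume of the sun and planet
    gears of both stages. The bracketing follows our reading of the
    typeset formula and is illustrative only."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9 = x
    stage1 = x3 * (((x1 + 2 * (1 + x4)) * x2) ** 2
                   + 3 * ((0.5 * (x5 - 2) * x1 + 2 * (1 + x4)) * x2) ** 2)
    stage2 = x8 * (((x6 + 2 * (1 + x9)) * x7) ** 2
                   + ((0.5 * (U - x5 - 2) * x6 + 2 * (1 + x9)) * x7) ** 2)
    return math.pi / 4.0 * (stage1 + stage2)

def face_width_constraints(x):
    """g7-g10: each face width must lie between 0.6 and 1.3 times the
    corresponding sun gear reference diameter (g <= 0 means feasible)."""
    x1, x2, x3, _, _, x6, x7, x8, _ = x
    return [0.6 * x1 * x2 - x3,
            x3 - 1.3 * x1 * x2,
            0.6 * x6 * x7 - x8,
            x8 - 1.3 * x6 * x7]

# The G-ISADE solution from Table 11 satisfies the face-width constraints:
x_best = [20, 4.957651, 59.557864, 0.5, 15.423887,
          24, 4.907367, 147.951034, 0.5]
print(all(g <= 0 for g in face_width_constraints(x_best)))  # True
```

Evaluating a candidate this way (objective plus a list of signed constraint values) is the usual interface for penalty-based constraint handling in DE-type solvers.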
The obtained solution is compared with that of the original DE in Table 11. Our optimal solution is better than the candidate generated by the original DE.

6. Conclusions

This paper addressed the application of a new approach to solving optimization problems in mechanical engineering. The proposed algorithm is an improved version of ISADE that adopts a Gaussian random variable in the mutation operation.
First, the proposed method was verified through numerical simulation on 19 benchmark functions. The new approach proved to be highly effective across all the testing criteria: the updated strategies significantly improved convergence speed and accuracy in comparison with the previous approaches.
In addition, the new method is also a powerful tool for solving real-world engineering problems. By introducing this method, our research offers engineers one more option for solving the practical problems encountered in their work.
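The core idea, an ISADE-style rank-based scaling factor amplified by a Gaussian-distributed factor, can be sketched as follows. The Sigmoid shape gain, the F bounds, and the Gaussian sigma below are assumed illustrative values, not the constants used in the paper, and the function names are ours.

```python
import math
import random

def rank_based_F(rank, pop_size, alpha=6.0, f_min=0.1, f_max=0.9):
    """ISADE-style scaling factor: a Sigmoid of the individual's rank in
    the sorted population maps each individual to an F in [f_min, f_max].
    alpha, f_min, and f_max are illustrative constants (assumed)."""
    s = 1.0 / (1.0 + math.exp(alpha * (rank / pop_size - 0.5)))
    return f_min + (f_max - f_min) * s

def gaussian_F(rank, pop_size, sigma=0.3):
    """G-ISADE idea: amplify the ranked F by a Gaussian-distributed
    factor (mean 1, assumed sigma) to diversify mutation step sizes."""
    return rank_based_F(rank, pop_size) * abs(random.gauss(1.0, sigma))

def mutate(pop, i, rank):
    """DE/rand/1 mutant vector built with the Gaussian-amplified F."""
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    F = gaussian_F(rank, len(pop))
    return [a + F * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]
```

Because the amplification factor varies from call to call, two individuals with the same rank can take differently sized mutation steps, which is the population-diversity effect the paper attributes to the Gaussian distribution.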

Author Contributions

Conceptualization, V.-T.N.; methodology, V.-T.N.; numerical simulation and validation, V.-T.N.; formal analysis, writing—original draft preparation, V.-T.N.; writing—review and editing, V.-M.T.; supervision, N.-T.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Hanoi University of Science and Technology (HUST) under project number T2022-PC-036.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

This work was also supported by the Centennial SIT Action for the 100th anniversary of Shibaura Institute of Technology to enter the top ten Asian Institute of Technology.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DE — Differential evolution
EAs — Evolutionary algorithms
ISADE — Improve Self-Adaptive Differential Evolution
SaDE — Self-adaptive differential evolution algorithm
SR — Success rate
FE — Function evaluations

Appendix A

Sphere function: f1(x) = Σ_{i=1}^{n} x_i²
Rosenbrock function: f2(x) = Σ_{i=1}^{n−1} [ b(x_{i+1} − x_i²)² + (a − x_i)² ]
Griewank function: f3(x) = 1 + Σ_{i=1}^{n} x_i²/4000 − Π_{i=1}^{n} cos(x_i/√i)
Rastrigin function: f4(x) = 10n + Σ_{i=1}^{n} ( x_i² − 10cos(2πx_i) )
Ackley function: f5(x) = −a·exp( −b·√( (1/n) Σ_{i=1}^{n} x_i² ) ) − exp( (1/n) Σ_{i=1}^{n} cos(c·x_i) ) + a + exp(1)
Ridge function: f6(x) = Σ_{i=1}^{n} ( Σ_{j=1}^{i} x_j )²
Levy function: f7(x) = sin²(4πx_1) + Σ_{i=1}^{n−1} (x_i − 1)² ( 1 + sin²(3πx_{i+1}) ) + (x_n − 1)² ( 1 + sin²(4πx_n) )
Schwefel 2.22 function: f8(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|
Schaffer function: f9(x, y) = 0.5 + ( sin²( √(x² + y²) ) − 0.5 ) / [ 1 + 0.001(x² + y²) ]²
Alpine function: f10(x) = Σ_{i=1}^{n} | x_i·sin(x_i) + 0.1·x_i |
Pathological function: f11(x) = Σ_{i=1}^{n−1} [ 0.5 + ( sin²( √(100x_i² + x_{i+1}²) ) − 0.5 ) / ( 1 + 0.001( x_i² − 2x_i x_{i+1} + x_{i+1}² )² ) ]
Weighted sphere function: f12(x) = Σ_{i=1}^{n} i·x_i²
Sum of Different Power function: f13(x) = Σ_{i=1}^{n} |x_i|^{i+1}
Zakharov function: f14(x) = Σ_{i=1}^{n} x_i² + ( Σ_{i=1}^{n} 0.5·i·x_i )² + ( Σ_{i=1}^{n} 0.5·i·x_i )⁴
Exponential Problem: f15(x) = −exp( −0.5 Σ_{i=1}^{n} x_i² )
Salomon Problem: f16(x) = 1 − cos( 2π √( Σ_{i=1}^{D} x_i² ) ) + 0.1·√( Σ_{i=1}^{D} x_i² )
Bent Cigar Function: f17(x) = x_1² + 10^6 Σ_{i=2}^{n} x_i²
Expanded Schaffer’s F6 Function: f18(x_1, …, x_n) = Σ_{i=1}^{n−1} f9(x_i, x_{i+1}) + f9(x_n, x_1)
Schwefel Function: f19(x) = 418.9829·n − Σ_{i=1}^{n} x_i·sin( √|x_i| )
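A few of the benchmark functions above, written out as runnable Python. The Ackley constants a = 20, b = 0.2, c = 2π are the conventional choices and are assumed here, since the text leaves them unspecified:

```python
import math

def sphere(x):
    """f1: unimodal; global minimum 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """f4: highly multimodal; global minimum 0 at the origin."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    """f5 with the conventional constants (assumed; the text omits them)."""
    n = len(x)
    term1 = -a * math.exp(-b * math.sqrt(sum(v * v for v in x) / n))
    term2 = -math.exp(sum(math.cos(c * v) for v in x) / n)
    return term1 + term2 + a + math.e
```

All three return 0 (to floating-point accuracy) at x = 0, a convenient sanity check before plugging them into an optimizer.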

References

  1. Storn, R.; Price, K. Differential Evolution-A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  2. Lemarechal, C. Cauchy and the Gradient Method. Doc. Math. Extra 2012, ISMP, 251–254. [Google Scholar]
  3. Wallis, J. A Treatise of Algebra, Both Historical and Practical; Richard Davis: Oxford, UK, 1685. [Google Scholar]
  4. Qin, A.K.; Suganthan, P.N. Self-Adaptive Differential Evolution Algorithm for Numerical Optimization. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; pp. 1785–1791. [Google Scholar]
  5. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-Adapting Control Parameters in Differential Evolution: A Comparative Study on Numerical Benchmark Problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657. [Google Scholar] [CrossRef]
  6. Zhang, J.; Sanderson, A.C. JADE: Adaptive Differential Evolution with Optional External Archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  7. Wang, Y.; Li, H.-X.; Huang, T.; Li, L. Differential evolution based on covariance matrix learning and bimodal distribution parameter setting. Appl. Soft Comput. 2014, 18, 232–247. [Google Scholar] [CrossRef]
  8. Mohamed, A.W.; Mohamed, A.K. Adaptive guided differential evolution algorithm with novel mutation for numerical optimization. Int. J. Mach. Learn. Cybern. 2019, 10, 253–277. [Google Scholar]
  9. Tam, B.; Trung, N.; Hiroshi, H. Opposition-based learning for self-adaptive control parameters in differential evolution for optimal mechanism design. J. Adv. Mech. Des. Syst. Manuf. 2019, 13, 4. [Google Scholar]
  10. Tam, B.; Pham, H.; Hiroshi, H. Improve Self-Adaptive Control Parameters in Differential Evolution for Solving Constrained Engineering Optimization Problems. J. Comput. Sci. Technol. 2013, 7, 59–74. [Google Scholar]
  11. Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation, Vienna, Austria, 28–30 November 2005; pp. 695–701. [Google Scholar]
  12. Zheng, H.; Zhou, Y. A Novel Cuckoo Search Optimization Algorithm Base on Gauss Distribution. J. Comput. Inf. Syst. 2012, 8, 4193–4200. [Google Scholar]
  13. He, X.-S.; Ding, W.J.; Yang, X.S. Bat algorithm based on simulated annealing and Gaussian perturbations. Neural Comput. Appl. 2014, 25, 459–468. [Google Scholar] [CrossRef]
  14. Lin, Z.; Zhang, Q. An effective hybrid particle swarm optimization with Gaussian mutation. J. Algorithms Comput. Technol. 2017, 11, 271–280. [Google Scholar] [CrossRef]
  15. Jena, C.; Basu, M.; Panigrahi, C.K. Differential evolution with Gaussian mutation for combined heat and power economic dispatch. Soft Comput. 2016, 20, 681–688. [Google Scholar] [CrossRef]
  16. Sun, G.; Lan, Y.; Zhao, R. Differential evolution with Gaussian mutation and dynamic parameter adjustment. Soft Comput. 2019, 23, 1615–1642. [Google Scholar] [CrossRef]
  17. Li, X.; Yin, M. Modified differential evolution with self-adaptive parameters method. J. Comb. Optim. 2016, 31, 546–576. [Google Scholar] [CrossRef]
  18. Tang, L.; Dong, Y.; Liu, J. Differential Evolution with an Individual-Dependent Mechanism. IEEE Trans. Evol. Comput. 2015, 19, 560–574. [Google Scholar] [CrossRef]
  19. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.-P.; Auger, A.; Tiwari, S. Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization; Technical Report Number 2005005; Nanyang Technological University: Singapore; KanGAL: Kanpur, India, 2005. [Google Scholar]
  20. Ragsdell, K.; Phillips, D. Optimal Design of a Class of Welded Structures using Geometric Programming. ASME J. Manuf. Sci. Eng. 1976, 98, 1021–1025. [Google Scholar] [CrossRef]
  21. Huang, F.; Wang, L.; He, Q. An effective co-evolutionary differential evolution for constrained optimization. Appl. Math. Comput. 2007, 186, 340–356. [Google Scholar] [CrossRef]
  22. He, Q.; Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 2007, 20, 89–99. [Google Scholar] [CrossRef]
  23. Coello, C.A.C. Use of a self-adaptive penalty approach for engineering optimization problems. Comput. Ind. 2000, 41, 113–127. [Google Scholar] [CrossRef]
  24. Sandgren, E. Nonlinear Integer and Discrete Programming in Mechanical Design Optimization. ASME J. Mech. Des. 1990, 112, 223–229. [Google Scholar] [CrossRef]
  25. Belegundu, A. A Study of Mathematical Programming Methods for Structural Optimization. Ph.D. Thesis, University of Iowa, Iowa City, IA, USA, 1982. [Google Scholar]
  26. Golinski, J. Optimal synthesis problems solved by means of nonlinear programming and random methods. J. Mech. 1970, 3, 287–309. [Google Scholar] [CrossRef]
  27. Bernardino, H.; Barbosa, H.; Lemonge, A.; Fonseca, L. A new hybrid AIS-GA for constrained optimization problems in mechanical engineering. In Proceedings of the IEEE Congress on Evolutionary Computation, Hong Kong, China, 1–6 June 2008; pp. 1455–1462. [Google Scholar]
  28. Lemonge, A.C.C.; Barbosa, H.J.C.; Borgesc, C.C.H.; Silvad, F.B.S. Constrained optimization problems in mechanical engineering design using a real-coded steady-state genetic algorithm. Mecánica Comput. 2010, 29, 9287–9303. [Google Scholar]
  29. Chen, T.; Zhang, Z.; Chen, D.; Li, Y. The Optimization of Two-Stage Planetary Gear Train Based on Mathematica. In Pervasive Computing and the Networked World: Joint International Conference, ICPCA/SWS 2012, Istanbul, Turkey, 28–30 November 2012; Springer: Berlin/Heidelberg, Germany, 2013; Volume 7719, pp. 122–136. [Google Scholar]
Figure 1. Value of initial scaling factor Fi depends on rank number i, population size NP, and α.
Figure 2. Definition of opposition particle.
Figure 3. Searching strategy of elite-based opposition particle.
Figure 4. Procedure of G-ISADE.
Figure 5. Comparison of function evaluation calls.
Figure 6. Convergence comparison for f1.
Figure 7. Convergence comparison for f2.
Figure 8. Convergence comparison for f3.
Figure 9. Convergence comparison for f4.
Figure 10. Convergence comparison for f5.
Figure 11. Convergence comparison for f6.
Figure 12. Convergence comparison for f7.
Figure 13. Convergence comparison for f8.
Figure 14. Convergence comparison for f9.
Figure 15. Convergence comparison for f10.
Figure 16. Convergence comparison for f12.
Figure 17. Convergence comparison for f13.
Figure 18. Convergence comparison for f15.
Figure 19. Convergence comparison for f16.
Figure 20. Convergence comparison for f17.
Figure 21. Convergence comparison for f18.
Figure 22. Convergence comparison for f19.
Figure 23. Welded-beam design problem.
Figure 24. Design problem of air-storage tank.
Figure 25. Tension/compression-spring design problem.
Figure 26. Single stage spur gear reducer.
Figure 27. Two-stage planetary gear reducer.
Table 1. Average of fitness value (Mean) and the standard deviation (Std) of 51 runs.

| Function (30-D) | IDE Mean | IDE Std | G-ISADE Mean | G-ISADE Std |
|---|---|---|---|---|
| F1 | 0.00 | 0.00 | 0.00 | 0.00 |
| F2 | 5.00 | 2.82 | 7.28 × 10^−5 | 1.11 × 10^−4 |
| F3 | 3.42 × 10^−2 | 1.47 × 10^−2 | 7.73 × 10^−6 | 1.96 × 10^−5 |
| F4 | 0.00 | 0.00 | 0.00 | 0.00 |
| F5 | 20.9 | 4.78 × 10^−2 | 0.00 | 0.00 |
| F17 | 2.08 × 10^4 | 1.46 × 10^5 | 0.00 | 0.00 |
| F18 | 9.94 | 0.49 | 0.00 | 0.00 |
| F19 | 23.4 | 31.8 | 0.00 | 0.00 |
Table 2. Performance comparison of proposed algorithm with the others.

| Function | DE SR | DE FE | ISADE SR | ISADE FE | EOBLDE SR | EOBLDE FE | G-ISADE SR | G-ISADE FE | IR |
|---|---|---|---|---|---|---|---|---|---|
| F1 | 1.00 | 35,688 | 1.00 | 34,163 | 1.00 | 16,222 | 1.00 | 8132 | 77.21% |
| F2 | 0.00 | 300,000 | 0.84 | 179,307 | 1.00 | 176,237 | 1.00 | 144,568 | 51.81% |
| F3 | 0.34 | 38,641 | 0.50 | 49,687 | 1.00 | 22,064 | 1.00 | 12,470 | 67.73% |
| F4 | 0.00 | 300,000 | 1.00 | 144,068 | 1.00 | 85,710 | 1.00 | 81,184 | 72.94% |
| F5 | 0.82 | 54,385 | 1.00 | 58,290 | 1.00 | 22,047 | 1.00 | 12,035 | 77.87% |
| F6 | 1.00 | 126,046 | 0.96 | 138,338 | 1.00 | 14,995 | 1.00 | 13,762 | 89.08% |
| F7 | 0.76 | 33,618 | 0.94 | 46,556 | 1.00 | 17,972 | 1.00 | 13,582 | 59.60% |
| F8 | 1.00 | 57,898 | 1.00 | 95,701 | 1.00 | 25,377 | 1.00 | 14,916 | 74.24% |
| F9 | 0.00 | 300,000 | 0.00 | 300,000 | 1.00 | 85,059 | 1.00 | 84,626 | 71.79% |
| F10 | 1.00 | 58,682 | 0.98 | 132,938 | 0.62 | 135,331 | 1.00 | 14,828 | 74.73% |
| F11 | 0.00 | 300,000 | 0.00 | 300,000 | 1.00 | 90,963 | 0.77 | 107,125 | 64.29% |
| F12 | 1.00 | 33,614 | 1.00 | 46,457 | 1.00 | 15,102 | 1.00 | 7968 | 76.30% |
| F13 | 1.00 | 8322 | 1.00 | 9644 | 1.00 | 4610 | 1.00 | 2469 | 70.33% |
| F14 | 1.00 | 182,620 | 1.00 | 172,252 | 1.00 | 13,932 | 1.00 | 21,566 | 88.19% |
| F15 | 1.00 | 24,942 | 1.00 | 42,009 | 1.00 | 12,198 | 1.00 | 6092 | 75.58% |
| F16 | 0.00 | 300,000 | 0.00 | 300,000 | 1.00 | 85,172 | 0.93 | 77,163 | 74.28% |
| F17 | 1.00 | 61,828 | 1.00 | 113,429 | 1.00 | 25,981 | 1.00 | 12,675 | 79.50% |
| F18 | 0.00 | 300,000 | 0.00 | 300,000 | 1.00 | 88,738 | 1.00 | 52,430 | 82.52% |
| F19 | 0.00 | 300,000 | 1.00 | 94,849 | 1.00 | 84,745 | 1.00 | 72,761 | 75.75% |
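The IR column of Table 2 appears to be the relative reduction in function evaluations of G-ISADE with respect to the original DE (e.g., row F1: (35,688 − 8132)/35,688 ≈ 77.21%). A small helper, named here for illustration, reproduces the column:

```python
def improvement_rate(fe_de, fe_gisade):
    """Relative FE reduction of G-ISADE versus DE, as the IR column of
    Table 2 appears to be computed."""
    return (fe_de - fe_gisade) / fe_de

# Rows F1 and F2 of Table 2:
print(f"{improvement_rate(35_688, 8_132):.2%}")     # 77.21%
print(f"{improvement_rate(300_000, 144_568):.2%}")  # 51.81%
```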
Table 3. Best solution for welded beam problem.

| Design Variables | CDE | CPSO | CGA | EOBLDE | G-ISADE |
|---|---|---|---|---|---|
| x1 | 0.203137 | 0.202369 | 0.208800 | 0.205729 | 0.207407 |
| x2 | 3.542998 | 3.544214 | 3.420500 | 3.470488 | 3.448728 |
| x3 | 9.033498 | 9.048210 | 8.99750 | 9.036623 | 9.000000 |
| x4 | 0.206179 | 0.205723 | 0.210000 | 0.205729 | 0.207407 |
| g1 | −44.57856 | −12.83979 | −0.337812 | −1.4790 × 10^−7 | −1.8 × 10^−12 |
| g2 | −44.66353 | −1.247467 | −353.9026 | −1.8863 × 10^−7 | −7.3 × 10^−12 |
| g3 | −0.003042 | −0.001498 | −0.001200 | −9.7584 × 10^−11 | 0.000000 |
| g4 | −3.423726 | −3.429347 | −3.411865 | −3.432983 | −3.428506 |
| g5 | −0.078137 | −0.079381 | −0.083800 | −0.080729 | −0.082407 |
| g6 | −0.235557 | −0.235536 | −0.235649 | −0.235540 | −0.235481 |
| g7 | −38.02826 | −11.68135 | −363.2323 | −4.4691 × 10^−8 | −131.577828 |
| f(x) | 1.733462 | 1.728024 | 1.748309 | 1.724852 | 1.713571 |
Table 4. Statistical results for welded beam problem.

| Methods | Best | Mean | Worst | Standard Deviation |
|---|---|---|---|---|
| CDE | 1.733461 | 1.768158 | 1.824105 | 0.022 |
| CPSO | 1.728024 | 1.748831 | 1.782143 | 0.013 |
| CGA | 1.748309 | 1.771973 | 1.785835 | 0.011 |
| EOBLDE | 1.724852 | 1.724856 | 1.725001 | 2.131 × 10^−5 |
| G-ISADE | 1.713571 | 1.713571 | 1.713571 | 0.00 |
Table 5. Best solution for air-storage tank problem.

| Design Variables | CDE | CPSO | CGA | EOBLDE | G-ISADE |
|---|---|---|---|---|---|
| x1 | 0.8125 | 0.8125 | 0.8125 | 0.8125 | 0.78603 |
| x2 | 0.4375 | 0.4375 | 0.4375 | 0.4375 | 0.46406 |
| x3 | 42.0984 | 42.0912 | 40.3239 | 42.0984 | 42.0000 |
| x4 | 176.6376 | 176.7465 | 200.0000 | 176.6365 | 177.8603 |
| g1 | −6.677 × 10^−7 | −0.0001 | −0.0343 | −1.1102 × 10^−16 | −0.0019 |
| g2 | −0.0358 | −0.0359 | −0.0528 | −0.0358 | −0.0368 |
| g3 | −3.6830 | −116.3827 | −27.1058 | −2.3283 × 10^−10 | −2.3283 × 10^−10 |
| g4 | −63.3623 | −63.2535 | −40.0000 | −63.3634 | −62.1396 |
| f(x) | 6059.7340 | 6061.0777 | 6288.7445 | 6059.7143 | 6059.60 |
Table 6. Statistical results for air-storage tank problem.

| Methods | Best | Mean | Worst | Standard Deviation |
|---|---|---|---|---|
| CDE | 6059.7340 | 6085.2303 | 6371.0455 | 43.013 |
| CPSO | 6061.0777 | 6147.1332 | 6363.8041 | 86.454 |
| CGA | 6288.7445 | 6293.8432 | 6308.1497 | 7.413 |
| EOBLDE | 6059.7143 | 6093.8431 | 6370.7797 | 83.671 |
| G-ISADE | 6059.60 | 6067.26 | 6117.06 | 19.8682 |
Table 7. Best solution for the compression spring problem.

| Design Variables | CDE | CPSO | CGA | EOBLDE | G-ISADE |
|---|---|---|---|---|---|
| x1 | 0.051609 | 0.051728 | 0.051480 | 0.051689 | 0.051897 |
| x2 | 0.354714 | 0.357644 | 0.351661 | 0.356718 | 0.361748 |
| x3 | 11.410831 | 11.244543 | 11.632201 | 11.288947 | 11.000000 |
| g1 | −0.000039 | −0.000845 | −0.002080 | −2.220 × 10^−16 | −10^−15 |
| g2 | −0.000183 | −1.2600 × 10^−5 | −0.000110 | −1.1102 × 10^−16 | 0.000000 |
| g3 | −4.048627 | −4.051300 | −4.026318 | −4.053785 | −4.063608 |
| g4 | −0.729118 | −0.727090 | −4.026318 | −0.727728 | −0.724236 |
| f(x) | 0.0126702 | 0.0126747 | 0.0127048 | 0.012665 | 0.012662 |
Table 8. Statistical results for compression spring problem.

| Methods | Best | Mean | Worst | Standard Deviation |
|---|---|---|---|---|
| CDE | 0.012670 | 0.012703 | 0.012790 | 2.70 × 10^−5 |
| CPSO | 0.012674 | 0.012730 | 0.012924 | 5.19 × 10^−5 |
| CGA | 0.012704 | 0.012769 | 0.012822 | 3.93 × 10^−5 |
| EOBLDE | 0.012665 | 0.012669 | 0.012713 | 9.01 × 10^−6 |
| G-ISADE | 0.012662 | 0.0126845 | 0.0128108 | 3.23 × 10^−5 |
Table 9. Best solution for single stage spur gear reducer problem.

| Design Variables | AIS-GA | APM rc | EOBLDE | G-ISADE |
|---|---|---|---|---|
| x1 | 3.500001 | 3.500000 | 3.500000 | 3.500000 |
| x2 | 0.700000 | 0.700000 | 0.700000 | 0.700000 |
| x3 | 17 | 17 | 17 | 17 |
| x4 | 7.300008 | 7.300000 | 7.300000 | 7.300000 |
| x5 | 7.800001 | 7.800000 | 7.713888 | 7.713888 |
| x6 | 3.350215 | 3.350215 | 3.350215 | 3.350214 |
| x7 | 5.286684 | 5.286683 | 5.285353 | 5.285353 |
| f(x) | 2996.3483 | 2996.3482 | 2993.6132 | 2993.61 |
Table 10. Statistical results for single stage spur gear reducer problem.

| Methods | Best | Mean | Worst | Standard Deviation |
|---|---|---|---|---|
| AIS-GA | 2996.3483 | 2996.3501 | 2996.3599 | 7.45 × 10^−3 |
| APM rc | 2996.3482 | 2997.4728 | 3051.4556 | 7.87 |
| EOBLDE | 2993.6132 | 2993.6165 | 2993.6296 | 0.33 × 10^−2 |
| G-ISADE | 2993.61 | 2993.61 | 2993.61 | 0.00 |
Table 11. Best solution for two-stage planetary gear reducer problem.

| Design Variables | DE | G-ISADE |
|---|---|---|
| x1 | 13 | 20 |
| x2 | 2.47044 | 4.957651 |
| x3 | 60.7104 | 59.557864 |
| x4 | 0.238434 | 0.5 |
| x5 | 12.989 | 15.423887 |
| x6 | 13 | 24 |
| x7 | 6.9805 | 4.907367 |
| x8 | 110.067 | 147.951034 |
| x9 | 0.2408021 | 0.5 |
| f(x) | 14,555,831.29 | 14,204,000 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Nguyen, V.-T.; Tran, V.-M.; Bui, N.-T. Self-Adaptive Differential Evolution with Gauss Distribution for Optimal Mechanism Design. Appl. Sci. 2023, 13, 6284. https://doi.org/10.3390/app13106284
