Abstract

To address the deficiencies of the basic sine-cosine algorithm in global optimization, namely its low solution precision and slow convergence speed, an improved sine-cosine algorithm is proposed in this paper. The improvement combines three optimization strategies. First, an exponentially decreasing conversion parameter and a linearly decreasing inertia weight are adopted to balance the global exploration and local exploitation abilities of the algorithm. Second, random individuals near the optimal individual replace the optimal individual of the basic algorithm in guiding the search, which helps the algorithm escape from local optima and effectively enlarges the search range. Finally, a greedy Levy mutation strategy is applied to the optimal individual to enhance the local exploitation ability of the algorithm. The experimental results show that the proposed algorithm effectively avoids falling into local optima and achieves faster convergence and higher optimization accuracy.

1. Introduction

Many problems in engineering practice and scientific research can be reduced to global optimization problems. Traditional methods that rely purely on exact mathematical models perform poorly on such problems: they require the objective functions to be continuous and differentiable, and they lack global optimization ability for multimodal, strongly nonlinear, and dynamically changing problems [1]. Accordingly, many scholars have begun to explore new solution methods. Swarm intelligence optimization algorithms are global optimization algorithms designed by simulating the cooperative behavior of gregarious organisms in nature. Compared with traditional optimization methods, swarm intelligence algorithms are characterized by simple principles and few adjustable parameters, require no gradient information about the problem, and possess strong global optimization ability. They are therefore widely used in engineering fields such as function optimization [2-4], combinatorial optimization [5], neural network training [6, 7], and image processing. At present, many swarm intelligence optimization algorithms have been proposed [2, 8-15], such as particle swarm optimization (PSO) [8], differential evolution (DE) [9, 10], the artificial bee colony algorithm (ABC) [2, 11], cuckoo search (CS) [12, 13], and the flower pollination algorithm (FPA) [14, 15].

The sine-cosine algorithm (SCA) is a swarm intelligence optimization algorithm proposed by Mirjalili in 2016 [16]. Because of its simple implementation and few parameters, and because its search is realized through simple variations of sine and cosine function values, the algorithm has attracted considerable attention. It has been successfully applied to parameter optimization of support vector regression [17], short-term hydrothermal scheduling [18], and other engineering problems. However, as with other swarm intelligence algorithms, SCA suffers from low optimization precision and slow convergence. In recent years, many scholars have proposed improved sine-cosine algorithms from different perspectives to overcome these drawbacks. Elaziz et al. [19] proposed a sine-cosine algorithm based on the opposition method and obtained more accurate solutions. Nenavath et al. [20] hybridized differential evolution with the sine-cosine algorithm to solve global optimization and target tracking problems; the hybrid converges faster and finds better solutions than either the basic sine-cosine algorithm or differential evolution alone. Reddy et al. [21] applied a new binary variant of the sine-cosine algorithm to the profit-based unit commitment (PBUC) problem. Sindhu et al. [22] improved the sine-cosine algorithm with an elitism strategy and a new updating mechanism, which improved classification accuracy in the selection of features or attributes. Kumar et al. [23] proposed a sine-cosine optimization algorithm with hybrid Cauchy and Gaussian mutations to track the maximum power point (MPP) quickly and efficiently. Mahdad et al. [24] presented a sine-cosine algorithm coordinated with an interactive process to improve power system security with respect to loading margin stability and faults at specified important branches. Bureerat et al. [25] adopted an adaptive differential sine-cosine algorithm for structural damage detection. Turgut et al. [26] combined the backtracking search algorithm (BSA) and SCA to obtain the optimal design of a shell-and-tube evaporator. Attia et al. [27] embedded Levy flight into the original sine-cosine algorithm to strengthen its local search ability and to keep it from being trapped in local optima. Tawhid et al. [28] used elite nondominated sorting to obtain different nondominated ranks and a crowding distance method to maintain the diversity of the optimal solution set, thereby proposing a multiobjective SCA. Issa et al. [29] presented an enhanced version of SCA (ASCA-PSO) that embeds particle swarm optimization within SCA; ASCA-PSO exploits the stronger exploitation ability of particle swarm optimization in the search space, and tests on several functions show that its search performance is clearly superior to that of SCA and other recently proposed metaheuristics. Rizk-Allah et al. [30] proposed a multiorthogonal sine-cosine algorithm (MOSCA) based on a multiorthogonal search strategy (MOSS) for engineering design problems; MOSCA alleviates the basic SCA's weak exploitation ability and its tendency to fall into local optima.

In this paper, a modified sine-cosine algorithm (MSCA) based on neighborhood search and greedy Levy mutation is proposed to better balance global exploration and local exploitation. The improved algorithm makes improvements in three aspects. First, a linearly decreasing inertia weight and an exponentially decreasing conversion parameter are used together to balance global exploration and local exploitation, achieving a smooth transition of the algorithm from exploration to exploitation. Second, the guidance of random individuals near the optimal solution is exploited so that the algorithm can escape from local optima more easily, which prevents premature convergence and increases population diversity. Third, a greedy Levy mutation strategy is applied to the optimal individual to enhance the local exploitation ability of the algorithm. Compared with other swarm intelligence algorithms, the improved sine-cosine algorithm achieves better searching precision, convergence speed, and stability.

2. Basic Sine-Cosine Algorithm

In the basic sine-cosine algorithm, the optimization search is achieved through simple variations of sine and cosine function values. In this paper, the population size is n, the dimension of the search space is d, and the ith individual in the population is X_i = (X_i^1, X_i^2, ..., X_i^d). In each iteration, the jth dimension X_i^j of individual i is updated by the following equation:

$$X_i^j(t+1)=\begin{cases}X_i^j(t)+r_1\sin(r_2)\left|r_3P^j(t)-X_i^j(t)\right|, & r_4<0.5\\ X_i^j(t)+r_1\cos(r_2)\left|r_3P^j(t)-X_i^j(t)\right|, & r_4\ge 0.5\end{cases}\tag{1}$$

where t is the current iteration, P^j(t) is the jth dimension value of the optimal individual at iteration t, and X_i^j(t) is the jth dimension value of individual i at iteration t. r_1, r_2, r_3, and r_4 are the parameters of the search: r_2 obeys a uniform distribution on [0, 2π], r_3 obeys a uniform distribution on [0, 2], r_4 obeys a uniform distribution on [0, 1], and r_1 is the conversion parameter defined in (2).

In (1), r_1 sin(r_2) and r_1 cos(r_2) jointly determine the global exploration and local exploitation abilities of the algorithm. When the value of r_1 sin(r_2) or r_1 cos(r_2) is greater than 1 or less than -1, the algorithm conducts a global exploration search; when it lies within [-1, 1], the algorithm conducts a local exploitation search. Since the value of sin(r_2) or cos(r_2) always lies within [-1, 1], the control parameter r_1 plays the crucial role of controlling the transition of the algorithm from global exploration to local exploitation. In the basic algorithm, r_1 is decreased linearly according to (2):

$$r_1=a-a\frac{t}{T}\tag{2}$$

where a is a constant, t is the current iteration, and T is the maximum number of iterations.
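To make the update rule concrete, the following Python sketch implements one iteration of the basic SCA update of (1) and (2) for a population stored as an n×d array. The function name and the default value a = 2 (the usual choice in the original SCA) are illustrative, not prescribed by this paper.

```python
import numpy as np

def sca_update(X, best, t, T, a=2.0, rng=np.random.default_rng()):
    """One iteration of the basic SCA position update, Eqs. (1)-(2).

    X    : (n, d) array of current positions
    best : (d,) array, the best solution P(t) found so far
    t, T : current iteration and maximum number of iterations
    a    : constant of Eq. (2)
    """
    n, d = X.shape
    r1 = a - a * t / T                        # Eq. (2): linearly decreasing r1
    r2 = rng.uniform(0, 2 * np.pi, (n, d))    # r2 ~ U(0, 2*pi)
    r3 = rng.uniform(0, 2, (n, d))            # r3 ~ U(0, 2)
    r4 = rng.uniform(0, 1, (n, d))            # r4 ~ U(0, 1) selects sine or cosine
    trig = np.where(r4 < 0.5, np.sin(r2), np.cos(r2))
    return X + r1 * trig * np.abs(r3 * best - X)
```

Each row of the returned array is then clipped to the search range (cross-border processing) and evaluated, and the best solution is updated greedily, as in Algorithm 1.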

3. Modified Sine-Cosine Algorithm

3.1. Exponential Decreasing Conversion Parameter

Parameter settings are crucial to the search performance of the basic sine-cosine algorithm, in which the control parameter r_1 governs the transition from global exploration to local exploitation. A larger value of r_1 improves the global searching ability of the algorithm, while a smaller value enhances its local exploitation ability. Therefore, r_1 decreases linearly as in (2) to balance the global exploration and local exploitation abilities of the basic algorithm. In the literature [31], the linearly decreasing, parabolically decreasing, and exponentially decreasing strategies were compared experimentally in the basic algorithm, and the exponentially decreasing strategy was found to give the best search performance. At the same time, the inertia weight remains constant during the iterations of the basic algorithm, which can easily cause the population to oscillate in the later stage of the search. In this paper, both a linearly decreasing inertia weight and an exponentially decreasing conversion parameter are introduced on the basis of (1), which better balances the global exploration and local exploitation abilities of the algorithm. The resulting individual update mode is given in (3), where t is the current iteration, T is the maximum number of iterations, P^j(t) is the jth dimension value of the optimal individual at iteration t, X_i^j(t) is the jth dimension value of individual i at the current iteration, and w_max and w_min are the maximum and minimum inertia weights, respectively.

It can be seen from (3) that the individuals of the population are driven jointly by the inertia weight w and the conversion parameter r_1. Both w and r_1 are large in the early iterations, which favors global exploration; they become small in later iterations, which favors local exploitation and thus improves the searching precision and convergence speed of the algorithm.
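As a sketch, the two parameter schedules described above can be written as follows. The linear decrease between w_max and w_min is the standard form implied by the description, while the paper's exact exponential expression for r_1 is given by its own equations, so the form used below is only an illustrative assumption.

```python
import numpy as np

def inertia_weight(t, T, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight; 0.9 and 0.4 are the values used in Section 4.2."""
    return w_max - (w_max - w_min) * t / T

def conversion_parameter(t, T, a=2.0):
    """Exponentially decreasing conversion parameter r1.

    The paper adopts an exponential schedule following [31]; the specific
    form a * exp(-t / T) is an illustrative assumption, not the paper's equation.
    """
    return a * np.exp(-t / T)
```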

3.2. The Neighborhood Search of the Optimal Individual

In the basic sine-cosine algorithm, the search directions of new individuals are determined solely by the optimal individual in the population. Once the global optimal individual falls into a local optimum, the whole algorithm easily suffers premature convergence. Therefore, to reduce the possibility of the algorithm becoming trapped in a local optimum, the guiding role of better individuals that may exist near the optimal solution should be exploited. In this paper, a random individual near the optimal solution is used in place of the current optimal individual to guide the search, so as to increase the probability of the algorithm escaping from a local optimum. In the corresponding update equation of the neighborhood search of the optimal individual, a uniformly distributed random number within (-1, 1) and a disturbance coefficient are introduced; the other parameters are the same as in (3).

In the neighborhood search of the optimal individual, the current optimal individual is taken as the center and the disturbance term as the step size, so the guiding point is sampled from an interval around the current optimal individual. This effectively expands the search directions and increases the probability of the algorithm jumping out of a local optimum.
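A minimal sketch of the neighborhood guidance follows, assuming the disturbed guiding point is formed by adding a uniform random step scaled by the disturbance coefficient and the magnitude of the best position; the paper's exact step definition may differ.

```python
import numpy as np

def neighborhood_guide(best, coeff=0.01, rng=np.random.default_rng()):
    """Replace the best individual with a random point in its neighborhood.

    best  : 1-D array, current best position P(t)
    coeff : disturbance coefficient (0.01 is the value recommended in Section 4.4.1)

    Assumption: step size = coeff * |best| * U(-1, 1); the paper defines the
    exact step in its neighborhood-search equation.
    """
    r = rng.uniform(-1.0, 1.0, size=best.shape)   # uniform random number in (-1, 1)
    return best + coeff * r * np.abs(best)        # disturbed guiding point
```

In the update of Section 3.1, this disturbed point is used in place of the optimal individual P^j(t).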

3.3. Greedy Levy Mutation

In the basic sine-cosine algorithm, the optimal individual leads the search direction of the whole population, but it lacks experiential knowledge and self-learning ability, so it can hardly be improved effectively and may drag the whole population into the region of a local optimum. In order to further prevent the basic sine-cosine algorithm from getting trapped in local optima and to overcome its low efficiency in the later stage, a greedy Levy mutation strategy is proposed for the optimal individual. Through the mutation operation, the population can jump away from the previously found optimal position, which preserves the diversity of the population. The mutation applies a random number obeying the Levy distribution and a self-adapting variation coefficient to the jth dimension value P^j(t) of the optimal individual at iteration t. The pseudocode of the basic sine-cosine algorithm is given in Algorithm 1.

Set the initial parameters, including the total population size n, the maximum number of
  generations N_iter, the control parameter a, etc.
Generate a population X = {X_1, X_2, ..., X_n}.
Calculate the fitness of each individual and find the best solution P of the population.
for t = 1 : N_iter
  for i = 1 : n
    for j = 1 : d
      Generate a random number r_4 uniformly distributed in [0, 1].
      if r_4 < 0.5
        Update X_i^j by the sine rule in (1).
      else
        Update X_i^j by the cosine rule in (1).
      end if
    end for
    Cross-border processing for X_i.
    Calculate the fitness f(X_i).
    if f(X_i) is better than f(P)
      Replace the best solution P with X_i.
    end if
  end for
end for
Output the best solution P.
3.3.1. Random Number Generated According to the Levy Distribution

Levy flight is characterized by many short-distance moves interspersed with occasional long-distance jumps, which describes well the activity pattern of many colonial organisms. In this paper, this characteristic of Levy flight is used to form a mutation mechanism. The mechanism ensures that the proposed algorithm searches sufficiently near the area of the optimal individual while still allowing a certain amount of mutation, which improves the global searching ability of the algorithm. Since the probability density function of the Levy distribution is difficult to integrate, it has been proved that the Mantegna algorithm can be used to achieve an equivalent calculation [32]. That is,

$$s=\frac{u}{|v|^{1/\beta}},\quad u\sim N(0,\sigma_u^2),\quad v\sim N(0,1),$$

where the scale parameter σ_u can be calculated based on

$$\sigma_u=\left[\frac{\Gamma(1+\beta)\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\beta\,2^{(\beta-1)/2}}\right]^{1/\beta},$$

where Γ(·) is the standard Gamma function.
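The Mantegna sampler above can be implemented directly; the only quantity not fixed by the text is the Levy exponent β, so the common default of 1.5 is used here as an assumption.

```python
import numpy as np
from scipy.special import gamma

def levy_step(beta=1.5, size=1, rng=np.random.default_rng()):
    """Draw Levy-distributed step(s) with the Mantegna algorithm [32].

    beta : Levy exponent (1.5 is a common default; the paper's value is assumed)
    """
    sigma_u = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)   # u ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0, size)       # v ~ N(0, 1)
    return u / np.abs(v) ** (1 / beta)   # s = u / |v|^(1/beta)
```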

3.3.2. Coefficient of Self-Adapting Variation

The iterative process of a swarm intelligence optimization algorithm is generally divided into two stages, namely, global exploration in the earlier stage and local exploitation in the later stage. Therefore, in order to produce large variations that disturb the population globally in the earlier stage and to shrink the variation range so as to accelerate the local search in the later stage, the proposed algorithm uses a self-adapting mutation strategy. The self-adapting variation control coefficient is defined in (10)-(12); there, t is the current iteration, T is the maximum number of iterations, the scaling coefficient is the parameter analyzed in Section 4.4.2, one quantity is the difference between the jth dimension value of the current optimal individual and the jth dimension average value of the population, and the other is the maximum distance of the jth dimension in the population.

From (10)-(12), it can be seen that the coefficient accounts for both the iterative process and the population diversity: one part of the expression is controlled by the iteration count, and the other part is adjusted by the diversity of the population. In the early iterations, the individuals perform poorly and the diversity is large, so a large coefficient produces sufficient disturbance of the population and enhances the global searching ability. As the iterations proceed, the individuals perform better and the coefficient gradually decreases, which ensures that the algorithm converges smoothly to the optimal value and reduces search oscillation around it. The solution procedure is shown in Algorithm 2.

Set the parameters of the greedy Levy mutation.
Obtain the best individual P and its fitness value f(P).
for j = 1 : d
  Calculate the diversity terms of the jth dimension according to Eq. (11) and Eq. (12).
  Calculate the self-adapting variation coefficient according to Eq. (10).
  Calculate the mutated value of the jth dimension of P according to Eq. (8).
  Calculate the fitness of the mutated individual.
  if the mutated individual is better than P
    Accept the mutation and update P and f(P).
  end if
end for
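A minimal Python sketch of the greedy Levy mutation of Algorithm 2 follows, reusing the levy_step sampler sketched in Section 3.3.1. The decaying factor below stands in for the self-adapting coefficient of Eqs. (10)-(12), which additionally depends on population diversity and on the coefficient studied in Section 4.4.2, and the mutation form itself is an assumption rather than the paper's exact Eq. (8); the greedy acceptance step, however, is as described above.

```python
import numpy as np

def greedy_levy_mutation(best, f, t, T, rng=np.random.default_rng()):
    """Greedy Levy mutation of the best individual (sketch of Algorithm 2).

    best : 1-D array, current best position P(t)
    f    : objective function (minimization assumed)
    t, T : current iteration and maximum number of iterations
    """
    p = best.copy()
    fp = f(p)
    sigma = 1.0 - t / T                      # assumed decreasing variation scale
    for j in range(p.size):                  # mutate dimension by dimension
        trial = p.copy()
        trial[j] = p[j] + sigma * levy_step(rng=rng)[0] * p[j]
        ft = f(trial)
        if ft < fp:                          # greedy acceptance: keep only improvements
            p, fp = trial, ft
    return p, fp
```

Because the mutation is applied only to the best individual (see the conclusion), its cost per iteration grows with the dimension d rather than with n·d.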
3.4. The Modified Sine-Cosine Algorithm Based on Greedy Levy Mutation

The procedure of the improved sine-cosine algorithm based on neighborhood search and greedy Levy mutation is shown in Algorithm 3.

Set the initial parameters, including the total population size n, the maximum number of
  generations N_iter, the control parameter a, the maximum and minimum inertia weights,
  the disturbance coefficient, the greedy-mutation coefficient, etc.
Generate a population X = {X_1, X_2, ..., X_n}.
Calculate the fitness of each individual and find the best solution P of the population.
for t = 1 : N_iter
  Calculate the inertia weight and the conversion parameter according to Eq. (4) and Eq. (5).
  for i = 1 : n
    for j = 1 : d
      Generate a random number r_4 uniformly distributed in [0, 1].
      if r_4 < 0.5
        Update X_i^j by the sine rule of the neighborhood-search update equation.
      else
        Update X_i^j by the cosine rule of the neighborhood-search update equation.
      end if
    end for
    Cross-border processing for X_i.
    Calculate the fitness f(X_i).
    if f(X_i) is better than f(P)
      Replace the best solution P with X_i.
    end if
  end for
  Perform the greedy Levy mutation on the best solution P (described in Algorithm 2).
end for
Output the best solution P.

For the basic SCA algorithm, the time complexity of creating the initial population is O(n·d), the time complexity of performing the sine and cosine operations over all iterations is O(N_iter·n·d), and that of the cross-border processing is O(N_iter·n·d). So the time complexity of the basic SCA algorithm is O(n·d) + O(N_iter·n·d) = O(N_iter·n·d). In the MSCA algorithm, the time complexity of creating the initial population is O(n·d), and the time complexity of calculating the inertia weight and the conversion parameter is O(N_iter). The time complexity of performing the sine and cosine operations based on the neighborhood search is O(N_iter·n·d). The time complexity of the cross-border processing is O(N_iter·n·d), and the time complexity of the greedy Levy mutation, which is applied only to the best individual, is O(N_iter·d). Therefore, the time complexity of the MSCA algorithm is O(N_iter·n·d). Obviously, the time complexity of the MSCA algorithm is somewhat higher than that of the standard SCA algorithm, but both are of the same order of magnitude.
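Putting the pieces together, a skeleton of Algorithm 3 can be sketched as follows, reusing the helper functions sketched in Section 3 (inertia_weight, conversion_parameter, neighborhood_guide, greedy_levy_mutation). The combined update formula below follows the structure described for Eq. (3), with the inertia weight applied to the current position and the neighborhood-disturbed best used as the guide; it is an assumption, not the paper's exact equation.

```python
import numpy as np

def msca(f, dim, lb, ub, n=100, n_iter=5000, a=2.0, coeff=0.01,
         rng=np.random.default_rng()):
    """Skeleton of the MSCA main loop (Algorithm 3); f is minimized over [lb, ub]^dim."""
    X = rng.uniform(lb, ub, (n, dim))                  # initial population
    fitness = np.apply_along_axis(f, 1, X)
    best = X[np.argmin(fitness)].copy()
    f_best = fitness.min()

    for t in range(1, n_iter + 1):
        w = inertia_weight(t, n_iter)                  # schedules of Section 3.1
        r1 = conversion_parameter(t, n_iter, a)
        for i in range(n):
            guide = neighborhood_guide(best, coeff, rng)   # Section 3.2
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.uniform(0, 1, dim)
            trig = np.where(r4 < 0.5, np.sin(r2), np.cos(r2))
            X[i] = w * X[i] + r1 * trig * np.abs(r3 * guide - X[i])
            X[i] = np.clip(X[i], lb, ub)               # cross-border processing
            fi = f(X[i])
            if fi < f_best:                            # greedy update of the best solution
                best, f_best = X[i].copy(), fi
        best, f_best = greedy_levy_mutation(best, f, t, n_iter, rng=rng)  # Algorithm 2
    return best, f_best
```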

4. Experimental Simulation

In order to verify the performance of MSCA, experiments are conducted from the following three aspects. First, MSCA is compared with particle swarm optimization (PSO) [8], differential evolution (DE) [9], the bat algorithm (BA) [33, 34], teaching-learning-based optimization (TLBO) [35, 36], the grey wolf optimizer (GWO) [37], and the basic SCA algorithm. Second, the effectiveness of the three improvement strategies is analyzed. Third, the disturbance coefficient of the optimal individual neighborhood search strategy and the coefficient of the greedy mutation strategy are analyzed separately, and their effect on the algorithm is discussed so that reference values for these parameters can be determined.

4.1. Test Function and Experimental Platform
4.1.1. Experimental Platform

In order to provide a comprehensive test environment, the simulation experiments are conducted on Windows 10 with an Intel(R) Core(TM) i5-4210U CPU (quad core) at 2.4 GHz and 4 GB of RAM, using Matlab 2016b as the programming tool.

4.1.2. Benchmark Functions

In order to validate the performance of the presented algorithm, 20 benchmark test functions from the literature [38, 39], which have been widely used in such tests, are selected as experimental subjects. The selected benchmark functions fall into three categories: unimodal high-dimensional functions, multimodal high-dimensional functions, and multimodal low-dimensional functions. The unimodal high-dimensional functions are used to investigate the optimization precision of the algorithm, since algorithms can hardly converge exactly to their global optimal points. The multimodal high-dimensional functions have several local extreme points and are used to test the global searching performance and the ability to avoid premature convergence. The remaining functions are multimodal low-dimensional functions. As the optimal value of most test functions is zero, some test functions with nonzero optimal values are also selected. The function names, expressions, dimensions, search ranges, and theoretical optimal values are shown in Table 1.
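For reference, two widely used benchmarks of the kinds described above are the sphere function (unimodal) and the Rastrigin function (multimodal); whether these particular functions appear in Table 1 is not confirmed here, so they serve only as illustrations of the two categories.

```python
import numpy as np

def sphere(x):
    """Unimodal high-dimensional benchmark; global minimum 0 at x = 0."""
    return np.sum(x ** 2)

def rastrigin(x):
    """Multimodal high-dimensional benchmark with many local minima; global minimum 0 at x = 0."""
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)
```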

4.2. Comparative Analysis of the Sine-Cosine Algorithm Based on Greedy Levy Mutation

In order to evaluate the performance of the algorithm proposed in this paper, six algorithms are selected for comparison, namely, PSO, DE, BA, TLBO, GWO, and SCA. The comparison algorithms use the same parameters as in the original literature; the parameter settings are shown in Table 2. The parameters of the MSCA algorithm are set as follows: the population size is 100, the maximum inertia weight is 0.9, the minimum inertia weight is 0.4, the greedy-mutation coefficient is 30, and the disturbance coefficient is 0.01; the other parameters are consistent with the basic SCA. For each test function, the number of iterations is 5000, and each algorithm runs independently 20 times. The performance of each algorithm is measured by four indexes: the optimal value, average value, worst value, and variance. The statistical results are shown in Tables 3-5.

It can be seen from Table 3 that, for the 7 unimodal high-dimensional functions, the MSCA algorithm finds the theoretical optimal values of 5 functions, and its searching precision on the other two functions is also close to the theoretical optimal values. The MSCA algorithm performs better than the PSO, DE, BA, and SCA algorithms in terms of optimal value, average value, worst value, and variance. On three of the functions, both the TLBO algorithm and the MSCA algorithm find the global theoretical optimum, while on the remaining four the MSCA algorithm obtains better results than the TLBO algorithm. The MSCA algorithm also obtains better results than the GWO algorithm except on one function, for which both algorithms find the global optimum. This shows that the MSCA algorithm has a great advantage in searching precision on unimodal high-dimensional problems.

From the search results of the multimodal high-dimensional functions in Table 4, it can be seen that the MSCA algorithm obtains the global optimal solution of 3 functions, and its search results on the other functions are also better than those of the other 6 algorithms. The search results of the PSO algorithm are poor, while the DE algorithm performs better than the BA, TLBO, GWO, and SCA algorithms. Compared with TLBO, MSCA performs better in terms of optimal value, average value, worst value, and variance on all but one function, which indicates the superiority of the MSCA optimization results on the multimodal high-dimensional functions.

It can be seen from Tables 3 and 4 that the search ability of MSCA is better than that of TLBO on most high-dimensional functions, and both MSCA and TLBO find the global optimum on the remaining four functions.

Most of the multimodal low-dimensional functions are strongly oscillatory, and such low-dimensional functions are usually used to test the ability of an algorithm to break away from local optima. From the search results of the low-dimensional multimodal functions in Table 5, it can be seen that the MSCA algorithm obtains the global optimal solution of all of these functions, while the basic SCA algorithm has poor stability in solving such problems. MSCA, DE, TLBO, and GWO can all obtain the theoretical optimal values, illustrating that these four algorithms are able to jump out of local optima on multimodal low-dimensional functions.

Figures 1-7 show the convergence curves of the optimal results obtained by the 7 algorithms on some high-dimensional functions. The data in the figures are the optimal results of the 7 algorithms over 20 independent experiments. For convenience of plotting, the abscissa is the number of iterations; the ordinate is the logarithm of the fitness value for four of the functions and the fitness value itself for the other three. It can be seen from Figures 1-7 that the MSCA algorithm has faster convergence speed and higher optimization precision than the other 6 intelligence algorithms.

In order to verify that the performance of the proposed algorithm has significant advantages over the other intelligence algorithms, the statistics (optimal value, average value, worst value, and variance) of the 7 algorithms are collected over 20 independent experiments, and a t-test is used for significance analysis of the optimization results. The test ttest(x, y, 0.05, "left") is applied, where x is the experimental result of the MSCA algorithm, y is the experimental result of a comparison algorithm, the significance level is 0.05, and "left" denotes a left-tailed test. The test results are shown in Table 6, where "+" indicates that the MSCA algorithm has a significant advantage over the comparison algorithm, "≈" indicates no significant difference between the MSCA algorithm and the comparison algorithm, and "-" indicates that the MSCA algorithm is inferior to the comparison algorithm. According to Table 6, MSCA shows significant advantages over the PSO, DE, BA, TLBO, GWO, and SCA algorithms on 20, 13, 19, 12, 15, and 17 test functions, respectively. On one test function, the search results of the MSCA algorithm are inferior to those of the DE, BA, and TLBO algorithms. For the remaining test functions, there is no significant difference between the MSCA algorithm and the corresponding comparison algorithms, mainly because both MSCA and those comparison algorithms obtain the global theoretical solution on those functions.
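The paper carries out the left-tailed test in MATLAB; as a sketch, the same comparison can be reproduced with SciPy's ttest_ind using alternative='less', assuming the 20 best values per algorithm are stored in arrays (the arrays below are placeholders, not experimental data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder arrays standing in for the best values of 20 independent runs;
# in practice they come from the experiments described above.
msca_results = rng.random(20) * 1e-8      # x: results of the MSCA algorithm
other_results = rng.random(20) * 1e-2     # y: results of a comparison algorithm

# Left-tailed two-sample t-test at the 0.05 level
# (H1: MSCA results are smaller, i.e. better for minimization).
t_stat, p_value = stats.ttest_ind(msca_results, other_results, alternative='less')
print('+' if p_value < 0.05 else '≈')     # marker convention of Table 6
```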

4.3. Efficiency Analysis of the Improvement Strategy

In order to analyze the influence of the three improvement strategies on the performance of the SCA algorithm, the odd-numbered standard test functions in Table 1 are used in the experiments. In the C-SCA algorithm, the linearly decreasing inertia weight and the exponentially decreasing conversion parameter strategy are combined with the basic SCA algorithm. In the N-SCA algorithm, the optimal individual neighborhood search strategy is combined with the basic SCA algorithm. In the G-SCA algorithm, the greedy Levy mutation strategy is combined with the basic SCA algorithm. C-SCA, N-SCA, G-SCA, and the basic SCA are compared with the proposed algorithm. The experimental parameters are consistent with those in Section 4.2. Table 7 summarizes the experimental results of the three strategies and the proposed algorithm. It can be seen from the experimental results that C-SCA, which uses a single strategy, makes only a limited improvement to the search performance except on two of the functions. The N-SCA algorithm performs essentially the same as the basic SCA algorithm. The G-SCA strategy has a clearly better improvement effect on three of the test functions, while its search results on the other test functions are basically the same as those of the basic SCA algorithm. However, when the three improvement strategies work together within the SCA algorithm, the search performance of the proposed algorithm is greatly improved. The main reasons are as follows. First, the optimal individual neighborhood search allows random individuals near the current optimal individual to play the role of the leader, which increases the probability of the proposed algorithm jumping out of a local optimal solution. Second, the greedy Levy mutation strategy increases the diversity of the population and the thoroughness of the local search. Third, because the linearly decreasing inertia weight and the exponentially decreasing conversion parameter are used, the algorithm takes larger inertia weight and conversion parameter values in the early iterations, which favors global search, and smaller values in the later iterations, which favors local search. Thus, the presented algorithm avoids falling into local optima, and its solution precision and convergence speed are significantly improved by the collaboration of the three improvement strategies.

From the results of the Wilcoxon rank sum test in Table 8, it can be seen that the C-SCA algorithm has significant advantages over the basic SCA algorithm only on three of the test functions, that there is no significant difference between the N-SCA algorithm and the basic SCA algorithm, and that the G-SCA algorithm has significant advantages over the basic SCA algorithm on all but two of the test functions.
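The Wilcoxon rank sum test of Table 8 can likewise be sketched with SciPy; the data below are placeholders, and the two-sided alternative is an assumption since the tail used by the paper is not stated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gsca_results = rng.random(20) * 1e-6      # placeholder: results of an improved variant
sca_results = rng.random(20) * 1e-1       # placeholder: results of the basic SCA

stat, p_value = stats.ranksums(gsca_results, sca_results)   # Wilcoxon rank sum test
print('significant' if p_value < 0.05 else 'not significant')
```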

4.4. Parameter Sensitivity Analysis in the Algorithm
4.4.1. Analysis of the Disturbance Coefficient in the Optimal Individual Neighborhood Search Strategy

In order to explore the influence of the disturbance coefficient in the optimal individual neighborhood search strategy, the even-numbered standard test functions in Table 1 are selected. The disturbance coefficient takes the values 0.005, 0.01, 0.02, 0.03, and 0.05 in independent experiments, with the other parameters unchanged, and the neighborhood search strategy acts on the SCA algorithm alone (N-SCA). Table 9 summarizes the results of the N-SCA algorithm for the different values of the disturbance coefficient; the winners of the comparison are marked with "+" and shown in boldface. It can be seen from the last row of Table 9 that the number of winners is 3 when the disturbance coefficient is 0.01, which is better than in the other cases. Therefore, 0.01 is selected as the disturbance coefficient.

4.4.2. Analysis of the Coefficient in the Greedy Mutation Strategy

The value of the coefficient in the self-adapting mutation mode of (10) has a great effect on the algorithm performance. In order to explore its influence on the searching performance, the even-numbered standard test functions in Table 1 are selected. The coefficient takes the values 10, 30, 60, and 90 in independent experiments, with the other parameters unchanged, and the greedy mutation strategy acts on the SCA algorithm alone (G-SCA). Table 10 summarizes the results of the G-SCA algorithm for the different values of the coefficient; the optimal results are marked with "+" and shown in boldface. It can be seen from Table 10 that when the coefficient takes 10, 30, 60, and 90, the number of optimal search results obtained by G-SCA is 1, 5, 0, and 1, respectively. The results with a value of 30 are much better than those with the other values, so 30 is chosen as a reasonable value of the coefficient.

5. Conclusion

An improved sine-cosine algorithm based on greedy Levy mutation is proposed in this paper. The proposed algorithm adopts both an exponentially decreasing conversion parameter and a linearly decreasing inertia weight to better balance its global searching and local exploitation abilities. An update mode guided by a random individual near the optimal individual is introduced, which increases the probability of the algorithm jumping out of local extrema. Inspired by the Levy flight pattern of many short-distance moves and occasional long-distance jumps, a self-adapting greedy mutation strategy is designed to mutate the optimal individual; this strategy increases population diversity and reduces search oscillation, making the algorithm converge to the global optimum smoothly. Twenty typical benchmark test functions are used to verify the performance of the proposed algorithm. The results show that the searching precision and convergence speed of the proposed algorithm are greatly improved through the collaboration of the three improvement strategies. The contribution of each of the three improvement strategies is analyzed in detail, the influence of parameter selection on the algorithm performance is discussed, and suggestions on parameter selection are given. However, the proposed algorithm is still theoretically and practically in its infancy, and its parameter settings are determined by empirical tests. Moreover, the greedy Levy mutation strategy considerably increases the time complexity of the algorithm; therefore, the proposed algorithm applies the greedy Levy mutation only to the best individual at each iteration.

Data Availability

All data are included within the tables and figures of this article.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is financially supported by the Natural Science Foundation of Guangxi Province (Grant no. 2014GXNSFBA118283), the Ability Enhancement Project of Young Teachers in Guangxi Universities (Grant no. 2018KY0579), and the Philosophy and Social Science Planning Project of Guangxi Province (Grant no. 17FJY008).