Constrained optimization of line-start PM motor based on the gray wolf optimizer

Keywords: heuristic algorithms, gray wolf algorithm, constrained optimization, external penalty function, line-start PM synchronous motor

Abstract
This paper presents an algorithm and computer software for constrained optimization based on the gray wolf algorithm. The gray wolf algorithm was combined with the external penalty function approach. The optimization procedure was developed using Borland Delphi 7.0 and then applied to the design of a line-start PM synchronous motor. The motor was described by three design variables which determine the rotor structure. A multiplicative compromise function composed of three maintenance parameters of the designed motor, together with one non-linear constraint function, was proposed. Next, the result obtained with the developed procedure (based on the gray wolf algorithm) was compared with the results obtained using: (a) the particle swarm optimization algorithm, (b) the bat algorithm and (c) the genetic algorithm. The developed optimization algorithm is characterized by good convergence, robustness and reliability. Selected results of the computer simulations are presented and discussed.

This is an open access article under the CC BY license (https://creativecommons.org/licenses/by/4.0/).

Highlights

• The mathematical model of the designed motor was developed on the basis of the Finite Element Method (FEM).
• The objective function contains the product of three maintenance parameters of the motor.
• The performance reliability of three heuristic algorithms was compared.

Nomenclature:
A^k, C^k - factors of the gray wolf algorithm,
X_p^{k-1} - vector of the location of the prey,
r, r_1, r_2, κ - random numbers from the range (0, 1),
b^k - coefficient describing the migration ability of the wolves,
X_α^{k-1}, X_β^{k-1}, X_δ^{k-1} - position vectors of the α, β and δ wolves, respectively,
k - iteration number of the optimization algorithm,
x_i, v_i - position and velocity vectors of the i-th particle in the PSO algorithm,
x_B - vector of the leader position,
x_L - vector of the best own position in previous iterations,
w_1 - weight of inertia,
c_1, c_2 - learning factors,
F_i, A_i, r_i - frequency, loudness and pulse emission rate of the i-th bat,
ζ, γ - bat algorithm constants,
P_N, V_N, I_N, n_N, T_N - rated power, voltage, current, speed and torque,
cosφ, η - power factor and efficiency,
T_max - peak torque,
l_m, g_m - length and thickness of the permanent magnet,
r_m - distance between poles.

Introduction
At present, complex models of the various phenomena in the designed device are used in the design process. These models consist of: (a) equations describing the electromagnetic field, (b) supply circuit equations, (c) the equation of rotational equilibrium and (d) equations describing the thermal phenomena [2,3,9,11,33]. All of these phenomena are usually taken into account when the finite element method (FEM) is used. FEM models are very complex; therefore, the optimization processes which incorporate them are very time-consuming.
Constrained optimization (CO) is the most important tool in the modern design of electromagnetic devices such as electric motors, transformers and electromagnetic actuators. The design problem often necessarily takes into account constraints related to the dimensions of the devices and the imposed functional parameters. Solving these constrained problems requires new, ever more effective methods of optimization.
In the optimization process the objective functions have usually economic features and are closely connected with the minimization of the production costs and power losses [19,31]. They may also include components related to the protection of natural environment.
In the international literature, intensive development of new optimization algorithms has been observed. Currently, heuristic (non-deterministic) algorithms [1,6] are being developed most dynamically. These types of optimization algorithms are especially effective in solving design challenges connected with electromagnetic converters [8,10]. A classification of the optimization algorithms proposed by the author is presented in Figure 1.
In the last two decades, an increasing number of papers devoted to the application of different probabilistic, nature-inspired algorithms [36] to the design of PM motors has been observed. Among PM motors, brushless direct-current motors (BLDC) and permanent magnet synchronous motors (PMSM) are currently developing most dynamically. The Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) algorithms are often used to optimize these two types of motors [7,11,14] as well as other technical problems [32]. In order to achieve better convergence of the optimization processes, scientists are continuously applying new optimization algorithms. This group of algorithms includes: the Ant Colony Optimization (ACO) algorithm [26], the Cuckoo Search (CS) algorithm [5,37], the Bat Algorithm (BA) [4] and the Gray Wolf Optimizer (GWO) algorithm [14]. Optimization calculations are often performed on simplified (analytical or lumped-parameter) models of the PM motors [14,26,27]. There are few articles on the subject of optimization using 2D FEA models together with the gray wolf optimizer. The aim of this paper is to develop a constrained optimization procedure based on the GWO algorithm. The developed algorithm was employed for the constrained optimization of a line-start permanent magnet synchronous motor (LSPMSM). Additionally, the convergence of the developed procedure was investigated in comparison to other nature-inspired algorithms.
The organization and social rules in a wolf pack and the mathematical rules of the gray wolf method are presented in detail in section 2, together with the main equations of the PSO and the BA. Next, the algorithm and computer software for the constrained optimization of the LSPMSM are presented in section 3. In section 4, a comparison of the performance of the GWO, the PSO, the BA and the GA is performed. The conclusions are discussed in section 5.

Nature inspired optimization algorithms
In general, the PSO and GA algorithms are most often used for problems related to electromagnetic design optimization. Recently, a rapid development of metaheuristic algorithms has been observed, including the CS, BA, GWO and others. These algorithms are increasingly used to optimize and design technical devices. However, in the international literature there are no papers which conclusively confirm the advantages of these algorithms over the PSO and GA.

Mathematical description of the organization and social ranks in a wolf pack
Wolves belong to the order Carnivora and are members of the family Canidae. They are social animals organized in groups called packs. The wolf demonstrates an elaborate system of social ranks (a hierarchy) which decides the position of each member within the pack.
Each pack occupies a specific area where it lives and hunts and which it protects from other wolves. Just like any other social group, the pack needs a leader, an individual who keeps order in the group. The leader of the pack is the alpha individual (α). The alpha (the best-adapted individual) always leads the wandering group, initiates attacks on other wolves infringing on the pack's territory, and initiates hunting as well as all other activities of the wolf pack [20]. The wolf pack hierarchy is linear; the group leader is the wolf who has won direct confrontations with the other pack members. Family relations can also determine the pack's hierarchy [24]. A very important role in the pack is played by the beta wolf (β). This individual submits only to the alpha, while being stronger than the other members of the group. In groups living in the wild, the beta is usually the strongest individual, but less ingenious and less intelligent than the alpha. The alpha and beta individuals form a complementary pair. The beta male is strong, brave and confident but submits to the alpha male. The beta assumes the pack leadership when the alpha leaves the group, grows old or dies.
The third level of the pack hierarchy is formed by the δ individuals, which are weaker than the α and β individuals but stronger than the omega individuals (ω). The omega individuals form the lowest rung of the pack hierarchy [22]. They submit to all other members of the pack; they are the last to be allowed to feed and are often used as "scapegoats". Most often, the omega individuals are the oldest or frailest.
The mathematical model of the GWO algorithm is based on the wolves' hunting behavior. Depending on the size of their prey, wolves can use different hunting tactics.
The wolves locate herds of potential victims before starting the hunt; this is the searching-for-prey stage [25]. At the beginning, the wolves choose their victim. Then the predator gets very close to it. If the prey does not get scared by the predator, or even starts to approach it, the wolf retreats. If the animal starts to run away, the wolves immediately start the chase; this is the chase stage. During the chase, the wolves often change which of them pursues the prey. The main purpose of this tactic is to disorientate the victim. The wolves may also force the animal to change its direction of escape. After stopping the prey, the wolves immediately attack it.
In the numerical implementation of the GWO algorithm it was assumed that the global extreme point is situated between the three leaders (α, β and δ). Thus, the position vector of each i-th individual in the k-th iteration (k-th time step) is determined as [17]:

X_i^k = X_p^{k-1} - A^k · D_i^k,    D_i^k = |C^k · X_p^{k-1} - X_i^{k-1}|

where A^k and C^k are the factors of the gray wolf algorithm, X_p^{k-1} is the location vector of the prey (the global extreme), and the products are taken component-wise.
The factors A^k and C^k are different for the α, β and δ individuals in the developed algorithm and are calculated as follows:

A_α^k = 2 b^k r_1 - b^k,    C_α^k = 2 r_2
A_β^k = 2 b^k r_3 - b^k,    C_β^k = 2 r_4
A_δ^k = 2 b^k r_5 - b^k,    C_δ^k = 2 r_6

in which r_1 to r_6 are numbers randomly selected from the range (0, 1) and b^k is the factor which describes the ability of the wolves to migrate within the permissible area of the solved task. For a large value of this parameter, the individuals can move freely in the area of the considered task and the algorithm has the properties of a global search method. A low value of this parameter gives the algorithm the properties of a local search method. The value of the b parameter is usually decreased from 2 to 0 [17,25].
In the developed algorithm the author proposed a decrease in the value of this coefficient during the optimization process. The value of b in subsequent k-th iterations is calculated according to the following formula [15]:

b^k = 2 (1 - k/k_max)

where k_max is the maximum number of iterations. According to the developed procedure, the three best-adapted wolves are represented as the individuals α, β and δ; thus, there is one α, one β and one δ in each iteration. In order to determine the new position of the i-th ω individual, it is necessary to calculate the distance of this individual from the best wolves in the pack [38]:

D_α = |C_α^k · X_α^{k-1} - X_i^{k-1}|,    D_β = |C_β^k · X_β^{k-1} - X_i^{k-1}|,    D_δ = |C_δ^k · X_δ^{k-1} - X_i^{k-1}|

where X_α^{k-1}, X_β^{k-1} and X_δ^{k-1} denote the positions of the three best individuals (α, β and δ) in the previous iteration. Finally, the new position of each i-th ω individual in the k-th iteration (k-th time step) is determined using the following equation:

X_i^k = (X_1 + X_2 + X_3)/3

where X_1 = X_α^{k-1} - A_α^k · D_α, X_2 = X_β^{k-1} - A_β^k · D_β and X_3 = X_δ^{k-1} - A_δ^k · D_δ. The flowchart of the GWO method is illustrated in Figure 2.
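Assuming a maximization problem with box constraints handled by clipping, one GWO iteration can be sketched in Python as below. The original software was written in Delphi, so all names here are illustrative, not taken from the author's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def gwo_step(wolves, scores, k, k_max, bounds):
    """One gray wolf iteration: every individual moves to the mean of
    three leader-guided positions (alpha, beta, delta).

    wolves : (N, d) positions, scores : (N,) objective values (maximized),
    bounds : (d, 2) lower/upper limits of the design variables.
    """
    b = 2.0 * (1.0 - k / k_max)          # migration factor, decreases 2 -> 0
    order = np.argsort(scores)[::-1]     # best individuals first
    leaders = wolves[order[:3]]          # alpha, beta, delta
    new_wolves = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        moves = []
        for leader in leaders:
            r1 = rng.random(x.size)
            r2 = rng.random(x.size)
            A = 2.0 * b * r1 - b         # A^k factor for this leader
            C = 2.0 * r2                 # C^k factor for this leader
            D = np.abs(C * leader - x)   # component-wise distance
            moves.append(leader - A * D)
        new_wolves[i] = np.mean(moves, axis=0)
    # keep the pack inside the permissible area
    return np.clip(new_wolves, bounds[:, 0], bounds[:, 1])
```

With a small b (late iterations), the candidate moves stay close to the three leaders, which gives the local search behavior described above.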

The classical Particle Swarm Optimization
The classical PSO algorithm was developed in 1995 [30]. This algorithm mimics the foraging behavior of flocks of birds and fish shoals. The optimization process uses the interaction between the leader (the best particle in the group) and the rest of the particles. All of the particles form the swarm, and the best-adapted particle is the leader of the swarm. Each particle is described by two vectors: (a) the position vector (x_i) and (b) the velocity vector (v_i). In order to determine the new position of the i-th particle, the position vector of the leader (x_B), the best known own position of the i-th particle and the velocity vector from the previous iteration are taken into account. The velocity vector of the i-th particle in the k-th iteration is calculated according to the following formula:

v_i^k = w_1 v_i^{k-1} + c_1 r_1 (x_L - x_i^{k-1}) + c_2 r_2 (x_B - x_i^{k-1})

where w_1 is the weight of inertia, c_1 and c_2 are the learning factors, r_1 and r_2 are random numbers from the range (0, 1), x_B is the position vector of the swarm leader and x_L is the vector of the best location of the i-th particle in the previous iterations. Finally, the position of the i-th particle is determined as follows:

x_i^k = x_i^{k-1} + v_i^k
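A minimal Python sketch of this update rule, using the coefficient values adopted later in the paper for the PSO runs; the function and argument names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_step(x, v, x_local, x_leader, w1=0.2, c1=0.35, c2=0.45):
    """One classical PSO update for a whole swarm.

    x, v, x_local : (N, d) positions, velocities and best own positions;
    x_leader : (d,) position of the swarm leader x_B.
    """
    r1 = rng.random(x.shape)             # fresh random factors each step
    r2 = rng.random(x.shape)
    v_new = (w1 * v                      # inertia term
             + c1 * r1 * (x_local - x)   # pull toward own best (x_L)
             + c2 * r2 * (x_leader - x)) # pull toward the leader (x_B)
    return x + v_new, v_new
```

A converged swarm (zero velocity, all particles at the leader and at their own best positions) stays put, since both attraction terms vanish.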

The Bat Algorithm
The Bat Algorithm (BA) was developed in 2010 [34]. The mathematical model of the BA is based on the echolocation navigation of small species of bats. Each bat is characterized by a velocity vector (v_i), a position vector (x_i), a varying frequency (F_i), a loudness (A_i) and a pulse emission rate (r_i). A group of bats constitutes a bat colony. The search for the global extreme is carried out by randomly searching the permissible area. The individual with the best objective function value is the colony leader, and the position of the leader is updated in every k-th iteration of the algorithm. The position vector of the i-th bat is calculated using: the position of the best bat in the colony, the velocity vector of the i-th bat from the previous iteration and a random value of frequency. The position of each bat is determined as follows:

F_i = F_min + (F_max - F_min) r
v_i^k = v_i^{k-1} + (x_i^{k-1} - x_B) F_i
x_i^k = x_i^{k-1} + v_i^k

where r is a randomly selected number from the range (0, 1), and F_max and F_min are the maximum and minimum frequency values.
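The frequency-tuned flight of a single bat relative to the colony leader can be sketched as follows (a minimal illustration of the update rule above; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def bat_move(x, v, x_leader, f_min=0.0, f_max=1.0):
    """Global flight of one bat: a random frequency F_i scales the step
    taken relative to the position of the colony leader x_B."""
    F = f_min + (f_max - f_min) * rng.random()   # random frequency F_i
    v_new = v + (x - x_leader) * F               # velocity update
    return x + v_new, v_new
```

A bat already sitting on the leader with zero velocity does not move, whatever frequency is drawn.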
In each step of the algorithm, a test modification of the leader/a random individual is performed; this represents the local search capability. The test position x*_i near the leader/random bat is determined as follows:

x*_i = x_B + κ A_av^k

where κ is a randomly selected number from the range (0, 1) and A_av^k is the average loudness of the bat colony in the k-th time step.
If the test position x*_i has a better objective function value than x_i^k, then x_i^k is replaced by x*_i. The loudness A_i and the pulse emission rate r_i of a bat in the next iteration are calculated as follows:

A_i^{k+1} = ζ A_i^k
r_i^{k+1} = r_0 [1 - exp(-γ k)]

in which ζ and γ are the bat algorithm constants and r_0 is the initial value of the emission rate.
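The local search step and the loudness/pulse-rate updates can be sketched as below; the default value of r0 here is a placeholder chosen for illustration (the BA run reported later uses r_0 = 0):

```python
import math

def local_search(x_leader, a_avg, kappa):
    """Test position x* near the leader, scaled by the colony's average
    loudness A_av; kappa is a random number from (0, 1)."""
    return x_leader + kappa * a_avg

def update_bat(a_i, k, zeta=0.7, gamma=0.6, r0=0.5):
    """Loudness decays geometrically; the pulse-emission rate grows toward
    its ceiling r0 as the iterations proceed."""
    a_next = zeta * a_i                          # A_i^{k+1} = zeta * A_i^k
    r_next = r0 * (1.0 - math.exp(-gamma * k))   # r_i^{k+1}
    return a_next, r_next
```

The decaying loudness shrinks the local-search neighborhood around the leader, so the colony gradually shifts from exploration to exploitation.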

Constrained optimization of PM synchronous motor
The GWO algorithm is very often applied to solving global optimization problems. However, there are few articles in the international literature regarding the application of this method to constrained optimization tasks [13,17]. Moreover, in the case of the optimization of PM motors, a simplified model of the phenomena is usually applied.
In order to validate the effectiveness and performance of the GWO algorithm in combination with the external penalty function, a constrained optimization of the LSPMSM was performed. The in-house optimization software consists of two independent modules: (a) the optimization module and (b) the numerical model of the PM motor. The optimization module comprises the gray wolf algorithm and was developed using Delphi 7.0. The non-linear constraint was taken into account through the external penalty function [35]. The numerical model of the LSPMSM was developed in the ANSYS Maxwell environment using the 2D finite element method (FEM). This model consists of two independent 2D FEM transient models: (a) a steady-state operation model at synchronous speed and (b) a model describing the start-up of the motor. The efficiency and power factor under the rated load condition were calculated in the steady-state operation model. The start-up model allows the calculation of the electromagnetic torque during the start-up of the motor. A stator from a serially manufactured six-pole induction motor, type MS2 132M-6, was applied. The rated parameters of the MS2 132M-6 motor are listed in Table 1.
The stator has a three-phase, double-layer, whole-coiled overlapping winding, connected in wye. The main dimensions of the stator and the rotor of the induction motor are listed in Table 2.
The optimization task consists in the selection of the rotor structure, which includes a cage winding made of aluminium and permanent magnets. Only the dimensions of the permanent magnet and its location in the rotor structure were taken into account. Three design parameters were adopted: s_1 = l_m - magnet length, s_2 = g_m - magnet thickness and s_3 = r_m - distance between poles (see Figure 3). The ranges of the design variables are presented in Table 3. During the study, all stator dimensions and winding parameters were kept constant.
An optimal motor should be characterized by high values of three maintenance (functional) parameters: (a) efficiency [18], (b) power factor and (c) synchronization capability. The synchronization capability is determined by the value of the electromagnetic torque at a speed equal to 0,8 of the synchronous speed; in the case of induction motors, the electromagnetic torque reaches its highest value, i.e. the peak torque, at about 0,8 of the synchronous speed. During the LSPMSM start-up process, two types of torque can be observed [39]. The first one is the asynchronous torque generated by the squirrel cage winding for slip values different from 0. The second one is the synchronous torque, which includes (a) the opposing braking torque generated by the permanent magnets and (b) the reluctance torque [23]. The braking torque depends on the dimensions of the permanent magnets. The reluctance torque is generated by the non-uniformity of the rotor structure, in which gaps for the permanent magnets are present. The braking torque degrades the line-start performance. The total electromagnetic torque generated by the machine during the synchronization period is the sum of the synchronous and asynchronous torques. Figure 4 illustrates the waveforms of the total torque, the sum of the asynchronous and reluctance torques, and the opposing torque generated by the PM during the start-up of the LSPMSM.
The presented torque-slip curves were calculated for a machine from the start population. It can be observed that magnets with excessively large dimensions were used in the studied LSPMSM. The motor was characterized by a high value of the opposing torque generated by the PM, which significantly weakens the asynchronous torque. As a result, the motor can have problems entering the synchronous state. The authors of papers concerning the optimization of the LSPMSM very frequently use the T_80 value to guarantee a proper synchronization process [11,12].
In the case of multi-criteria optimization problems, two types of compromise objective functions can be applied: (a) multiplicative and (b) additive [28]. After performing many computational simulations for the first three iterations of the GWO algorithm, it was observed that the multiplicative function is more sensitive to changes in the functional parameters than the additive function. During the optimization of the LSPMSM, the most important parameter is the synchronization capability, and in the case of the multiplicative function the obtained values of this parameter were better. Thus, in the developed algorithm, the objective function for the j-th wolf has been defined as follows:

f_j(s) = [cosφ_j(s)/cosφ_0]^{q_1} · [η_j(s)/η_0]^{q_2} · [T_80,j(s)/T_0]^{q_3}

where s = [s_1, s_2, s_3]^T is the vector of design parameters; cosφ_j(s), η_j(s) and T_80,j(s) represent the power factor, the efficiency and the output torque at a speed equal to 0,8 of the synchronous speed; cosφ_0, η_0 and T_0 are the reference values of these three parameters, calculated before the start of the optimization process as mean values over 15 runs of the optimization procedure with random distributions of the starting wolf pack; q_1, q_2 and q_3 are the weighting coefficients.
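A minimal sketch of this multiplicative objective, using the reference values reported later in the paper; the unit weighting coefficients are an assumption for illustration:

```python
def objective(cos_phi, eta, t80,
              cos_phi0=0.753, eta0=81.325, t0=50.124,
              q1=1.0, q2=1.0, q3=1.0):
    """Multiplicative compromise function: product of the normalized power
    factor, efficiency and T_80 torque, each raised to its weight."""
    return ((cos_phi / cos_phi0) ** q1
            * (eta / eta0) ** q2
            * (t80 / t0) ** q3)
```

A motor that exactly matches the reference values scores 1, and improving any maintenance parameter raises the product, so the three criteria trade off multiplicatively.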
Moreover, an economic factor has been taken into account during the optimization process: the consumption of permanent magnet material was included in the design process. The non-linear constraint concerning the total mass of permanent magnet material was defined as

m_PM(s) ≤ m_0

where m_PM(s) is the total mass of permanent magnet material in the designed motor and m_0 is the imposed total mass. The non-linear constraint function has been normalized and calculated as:

g(s) = m_PM(s)/m_0 - 1 ≤ 0

The imposed constraint was included in the optimization process by using the external penalty function [35]. In the penalty function approach, the modified objective function h(s) is created. The h(s) function is composed of: (a) the objective function f(s) and (b) the penalty term p(s). The penalty term p(s) for the j-th wolf is calculated as follows:

p_j(s) = c_n [max(0, g(s_j))]^2

where c_n is the penalty coefficient in the n-th penalty iteration. In the developed algorithm, the iterations related to a change in the external penalty (n) are interleaved with the iterations of the GWO algorithm (k). The value of the penalty coefficient is modified after every three iterations of the GWO algorithm, and in each subsequent penalty iteration (n) the penalty coefficient value is increased. After performing the maximum number of penalty iterations (n_max), the optimization process is finished.
Since in the developed algorithm the objective function f(s) is maximized, the modified objective function h(s) is defined as follows:

h(s) = f(s) - p(s)
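Combining the normalized mass constraint, the penalty term and the maximized objective, the modified objective can be sketched as below; the quadratic penalty form is one common choice for the external penalty approach and is assumed here:

```python
def modified_objective(f_value, m_pm, m0, c_pen):
    """h(s) = f(s) - p(s) with a quadratic external penalty on the
    normalized magnet-mass constraint g(s) = m_pm/m0 - 1 <= 0."""
    g = m_pm / m0 - 1.0              # normalized constraint violation
    p = c_pen * max(0.0, g) ** 2     # penalty is zero while feasible
    return f_value - p
```

Increasing c_pen in every penalty iteration (here, after each block of three GWO iterations) makes even small mass violations progressively more costly, driving the pack toward the feasible region.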

Simulation results for different optimization algorithms
In order to evaluate the convergence and reliability of the developed optimization procedure (containing the gray wolf algorithm), test calculations which consisted of the constrained optimization of a LSPMSM motor were performed. All the studied optimization procedures were developed by the author in the Borland Delphi 7.0 environment. The optimization procedures containing each method (GWO, PSO, BA and GA) were repeated 12 times. The best results obtained from all runs of the optimization software for the GWO algorithm were compared with those of the PSO, the BA and the GA in Tables 4, 5, 6, 7 and 8. Next, statistical analysis of the results was performed for all of the optimization algorithms used.

Calculations for the GWO method
The optimization calculation was performed for a wolf pack of 32 individuals. Due to the application of the FEM model of the LSPMSM, this number of individuals provided a compromise between good convergence of the optimization procedure and the calculation time. The penalty coefficient was increased in every third internal iteration of the GWO method. As a stop criterion, a maximum number of external iterations n_max = 10 was assumed, i.e. 30 iterations of the GWO method. The following values of the reference parameters were adopted: η_0 = 81,325 %, cosφ_0 = 0,753 and T_0 = 50,124 Nm. These values were the same for all investigated optimization procedures.
The results of the optimization calculation for selected GWO iterations are presented in Tables 4 and 5. The results for the α individual are reported in Table 4, while the results for the β individual are listed in Table 5. In the successive columns, the values of the design parameters (r_m, g_m, l_m), the functional parameters (η, cosφ and T_80) of the designed motor, the value of the modified objective function (h), and the number of function calls (N_of), i.e. the number of objective function evaluations, are listed.
The calculations were made on a computer with the following parameters: a six-core Ryzen 5 processor, 3,40 GHz, and 16,0 GB RAM. The calculation time for one individual is equal to 8 minutes and 6 seconds. The calculation time depends on the saturation, especially at the initialization of the wolf pack. The approximate calculation time for the optimization process was about 116 hours on a single computer. The developed optimization procedure was tested and good convergence was achieved. It should be noted that the developed algorithm determined the optimal solution after about 12 iterations, i.e. after four iterations related to the increasing penalty, and the imposed total mass of permanent magnet material was attained. In the successive iterations of the optimization process, the maximized functional parameters were improved.

Calculations for the PSO algorithm
Next, the calculations for the classical PSO algorithm were performed. The number of particles was N=32. The following values of the PSO coefficients were assumed: w_1=0,2, c_1=0,35 and c_2=0,45. These values were chosen on the basis of many computer simulations to provide good convergence of the optimization procedure over the first three iterations. The PSO algorithm was combined with the external penalty; the remaining parameters of the optimization procedure are the same as for the GWO algorithm. The course of the optimization process for selected iterations is presented in Table 6.
The total approximate calculation time in the case of the PSO optimization procedure is 132 hours. The computation time for the PSO method is slightly longer than for the GWO method. Moreover, the obtained result is of lower quality in comparison to the GWO algorithm.

Calculations for the BA algorithm
Next, the constrained optimization process was executed for the BA. The number of bats in the colony was N=32 and the maximum number of external penalty iterations was n_max=10. The values of the BA parameters were adopted on the basis of the author's previous experience: frequency range F_min=0 and F_max=1,0, initial pulse emission value r_0=0, initial loudness value A_0=1, ζ=0,7 and γ=0,6. The results of the optimization process for selected time steps are presented in Table 7. In the successive columns, the number of time steps, the coordinates of the best bat (individual), the modified objective function for the leader of the bat colony and the number of function calls are listed.
In the case of the BA, the obtained approximate running time of the optimization procedure is similar to that of the PSO and equals 132 hours.

Calculations for the GA algorithm
Finally, the calculations for the GA procedure were executed with the following parameters: population size N=32 and probability of mutation p_m=0,007. The optimization procedure consists of three genetic operators: reproduction, crossover and mutation [27]. Additionally, a simple elitism procedure was applied to preserve the best individual during the genetic operations, especially mutation. The roulette-wheel reproduction and one-cut-point chromosome crossover methods were applied. An improved, two-phase crossover procedure was also used: first, all individuals are crossed over; then the new generation is created from the best half of the parents and the best half of the children [16]. Such a crossover operation significantly improves the convergence of the elaborated genetic optimization procedure. The GA procedure is adapted to optimization calculations using FEM models: in each genetic generation, the values of the objective function are calculated only for the new individuals created in the crossover and mutation procedures. The course of the optimization process for selected genetic iterations is presented in Table 8. The total calculation time for the GA was the longest of all the tested algorithms and equaled 195 h. However, the obtained result was the best.

Statistical analysis of all the tested methods
On the basis of the obtained results it can be noted that the investigated optimization algorithms (GWO, PSO, BA and GA) correctly found the global maximum. Similar values of the design variables were determined by the GWO and PSO algorithms. After about 12 iterations, i.e. 380 calls of the objective function, the gray wolf algorithm determined a solution close to the optimum. The value of the modified objective function for the α individual improved in subsequent iterations (see Table 4). Notably, the β individual of the GWO achieved a much more accurate result than the PSO leader. The PSO method provided the least accurate solution of all the tested methods; for the PSO algorithm, the modified objective function value kept improving until the last iteration. The highest value of the objective function was obtained for the GA.
The convergence process for all the studied algorithms is presented in Figure 5. The fastest convergence at the beginning of the optimization process was provided by the GWO algorithm. The BA was characterized by the worst convergence. The search process of the BA is nevertheless interesting: on the basis of the data presented in Figure 5, it can be observed that every few iterations the position of the best-adapted bat changes, because in the BA the individuals move randomly around the best bat. A result close to the optimal one is obtained after 24 time steps of the BA, i.e. after 800 calls of the objective function, which is more than double the number of function calls needed by the GWO.
Subsequently, the changes of the design variables during the optimization process were analyzed. Figure 6 shows the variation of all design variables in successive iterations of the optimization process.
It can be observed that, in the case of the distance between the poles (r_m), this design variable changed significantly during the optimization process. As a result of the optimization, similar "optimal" values were obtained for the PSO and the GWO algorithms (see Figure 6a). However, for the BA and the GA, a different "optimal" value of r_m was obtained than for the other tested algorithms. In the case of the magnet length (l_m), a similar value was obtained for all the examined optimization algorithms (see Figure 6). It was also observed that the BA convergence was the worst compared with the other algorithms. For the magnet thickness (g_m), similar optimal values of this design parameter were obtained for all the optimization algorithms.
The results were obtained during 12 runs of the optimization software for the different optimization procedures. For each optimization algorithm, the best and the worst solutions, as well as the mean and the standard deviation, were determined. The obtained results are shown in Table 9.
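The statistics of Table 9 can be reproduced for any set of runs with a few lines of Python; the values below are illustrative, not the paper's results:

```python
import statistics

def run_stats(h_values):
    """Best, worst, mean and sample standard deviation of the modified
    objective h over repeated runs of the (maximizing) procedure."""
    return {
        "best": max(h_values),
        "worst": min(h_values),
        "mean": statistics.mean(h_values),
        "stdev": statistics.stdev(h_values),
    }
```

A small standard deviation across the 12 runs is what the paper uses as the indicator of a reliable, repeatable optimization procedure.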
The best value of the objective function was obtained for the BA. However, the remaining best results (mean value, worst objective function value and standard deviation) were attained by the GWO algorithm. The PSO algorithm achieved the worst optimal value of the modified objective function and the worst mean value among the investigated algorithms, while the worst standard deviation over the 12 runs of the optimization procedure was obtained for the BA.

Conclusions
In this article, an optimization procedure for the constrained optimization problem was developed. The procedure is a combination of the gray wolf algorithm and the external penalty function, and it was applied to the optimization of the LSPMSM. The designed motor was described by an FEM model. A multiplicative compromise objective function composed of three functional parameters of the motor was used. Additionally, a non-linear constraint function concerning the mass of the permanent magnet material was taken into consideration.
The performance and reliability of the developed optimization procedure containing the GWO algorithm were compared with other nature-inspired optimization algorithms (the PSO, the BA and the GA). All the investigated optimization procedures were developed in the Delphi 7.0 environment. With the GWO, results close to the optimal one can be reached almost two times faster than with the BA, although the BA makes it possible to obtain a more accurate solution. In the computational experiment, the worst value of the objective function was obtained for the PSO method. Furthermore, the GWO algorithm achieved the best standard deviation. A comparison of the quality of results between the alpha individual and the alternative leader (the β individual) was also presented: a much more precise optimal solution was obtained for the α wolf than for the β wolf. Moreover, the GWO β individual attained a higher value of the objective function than the leader of the PSO algorithm, which underlines the advantage of the GWO over the PSO.
In the GWO algorithm, the new positions of the omega individuals in subsequent iterations are determined on the basis of the position of the leader and of the two alternative leaders (the β and δ individuals). This feature results in high efficiency in comparison to the other investigated methods. In the majority of nature-inspired methods, only the position of the leader is taken into account when determining the new positions of individuals, which may lead the swarm/colony toward a local extreme and disturb the optimization process. Therefore, taking several leaders of different ranks into account can improve the performance of the optimization algorithm. The presented results are very encouraging and show clearly that the GWO method is a very interesting optimization tool.
The developed optimization procedure can be successfully applied to the optimization of various electromagnetic devices described by models of varying complexity.
In future research, the author will build a prototype of the PM motor and perform an experimental verification.