Currents Analysis of a Brushless Motor with Inverter Faults—Part I: Parameters of Entropy Functions and Open-Circuit Faults Detection

In the field of signal processing, it is interesting to explore signal irregularities. Indeed, entropy approaches are efficient at quantifying the complexity of a time series; their ability to analyze and provide information related to signal complexity justifies their growing interest. Unfortunately, many entropies exist, each requiring the setting of parameter values, such as the data length N, the embedding dimension m, the time lag τ, the tolerance r and the scale s for the entropy calculation. Our aim is to determine a methodology to choose the suitable entropy and the suitable parameter values. Therefore, this paper focuses on the effects of their variation. For illustration purposes, a brushless motor with a three-phase inverter is investigated to detect single faults, and then multiple permanent open-circuit faults. Starting from the brushless inverter under healthy and faulty conditions, the various possible switching faults are discussed. The occurrence of faults in an inverter leads to atypical characteristics of the phase currents, which can increase the complexity of the brushless motor response. Thus, the performance of many entropies and multiscale entropies is discussed to evaluate the complexity of the phase currents. Herein, we introduce a mathematical model to help select the appropriate entropy functions, with proper parameter values, for detecting open-circuit faults. Moreover, this mathematical model enables us to identify the usual entropies and multiscale entropies (bubble, phase, slope and conditional entropy) that can best detect faults, for up to four switches. Simulations are then carried out to select the best entropy functions able to differentiate healthy from open-circuit faulty conditions of the inverter.


Introduction
One of the most powerful tools to assess the dynamical characteristics of time series is entropy. Entropy, used in several kinds of applications, is able to account for vibrations of rotary machines [1] (electric machines), to detect battery faults [2] (short-circuit and open-circuit faults), to reveal important information about seismically active zones [3] (electroseismic time series), to measure financial risks [4] (economic sciences), to categorize softwood species under uniform and gradual cross-sectional structures [5] (biology) and to categorize benign and malignant tissues of different subjects [5] (biomedical).
Various entropy measures have been established over the past two decades. Pincus [6] proposed the approximate entropy ApEn, which calculates the complexity of data and measures the frequency of similar patterns of data in a time series. However, ApEn also has some disadvantages: due to self-matching, the bias of ApEn is significant for short time series and depends on the entropy parameters. To avoid self-matching, Richman [6] defined the sample entropy SampEn. Since the introduction of ApEn [6], other entropies have been proposed, such as the Kolmogorov entropy K2En, the conditional entropy CondEn and the dispersion entropy DispEn. Each of them requires setting parameter values, such as the data length N, the embedding dimension m, the time lag τ, the tolerance r and the scale s. However, the dependence of the entropy effectiveness on the choice of these parameters has not yet been investigated for the phase currents of a brushless motor. Moreover, using a mathematical model, we are able to identify the usual entropies and multiscale entropies (bubble, phase, slope and conditional entropy) that can better detect faults for up to four switches. Our goal herein is to be able to select the appropriate entropy.
The paper is organized as follows. The usual entropies and multiscale entropies are introduced in Section 2. Sections 3 and 4 present the brushless motor and the dataset of output currents we used, under the healthy state and with one, two, three and four open-circuit faults. Then, Section 5 illustrates the evaluation of the different entropies under variation of the data length, embedding dimension, time lag, tolerance and scale. We end with a Conclusion in Section 7.

• Sample entropy SampEn and approximate entropy ApEn are the most commonly used measures for analyzing time series. For a time series $\{x_i\}_{i=1}^{N}$ with a given embedding dimension $m$, tolerance $r$ and time lag $\tau$, the embedding vector $x_i^m = [x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau}]$ is constructed. The number of vectors $x_j^m$ lying within a Chebyshev distance $r$ of $x_i^m$ is denoted $P_i^m(r)$; this number is used to calculate the local probability of occurrence of similar patterns, and averaging the local probabilities gives the global probability of occurrence of similar patterns at tolerance $r$. Repeating the construction for $m+1$ and comparing the two global probabilities yields SampEn; the approximate entropy ApEn is obtained in a similar way, with self-matches included. (A short Python sketch of this construction is given after this list.)

• Kolmogorov entropy [33]-K2En is defined from the probability of a trajectory crossing a region of the phase space: suppose that there is an attractor in phase space and that the trajectory $\{x_i\}_{i=1}^{N}$ is in the basin of attraction. K2En defines the probability distribution of each trajectory, calculated from the state space, and computes the limit of the Shannon entropy. The state of the system is measured at regular intervals of time, and the time series $\{x_i\}_{i=1}^{N}$ is divided into a finite partition $\alpha = \{C_1, C_2, \ldots, C_k\}$, with $C_k = [x(i\tau), x((i+1)\tau), \ldots, x((i+k-1)\tau)]$. K2En is then obtained from the Shannon entropy of this partition.

• Conditional entropy [34]-CondEn quantifies the variation of information necessary to specify a new state in a one-dimensional incremented phase space. Small Shannon entropy values are obtained when a pattern appears several times. CondEn uses the normalization $(x_i - \mathrm{av}[X]) / \mathrm{std}[X]$, where $\mathrm{av}[X]$ is the series' mean and $\mathrm{std}[X]$ is its standard deviation. From the normalized series, the vector $x_L(i) = [x(i), x(i-1), \ldots, x(i-L+1)]$ of $L$ consecutive samples is constructed in an $L$-dimensional phase space, and CondEn is obtained from the variation of the Shannon entropy of $x_L(i)$.

• Dispersion entropy [35,36]-DispEn focuses on the class sequence that maps the elements of the time series into positive integers. According to the mapping rule of dispersion entropy, the same dispersion pattern can result from multiple sample vectors. The time series $\{x_i\}_{i=1}^{N}$ is first mapped, through the standard distribution function, to a normalized series $y_i \in (0,1)$, from which the vectors $y_j^m = [y_j, y_{j+\tau}, \ldots, y_{j+(m-1)\tau}]$ are built. The phase space is then restructured into $c$ classes as $z_i^c = \mathrm{round}(c \cdot y_i + 0.5)$ and $z_j^{m,c} = [z_j^c, z_{j+\tau}^c, \ldots, z_{j+(m-1)\tau}^c]$. Each $z_j^{m,c}$ corresponds to a dispersion pattern $\upsilon$, whose relative frequency is $\mathrm{Number}\{j \mid j \le N-(m-1)\tau,\ \upsilon\}$, the number of vectors $z_j^{m,c}$ matching $\upsilon$, divided by $N-(m-1)\tau$. DispEn is then defined, according to information entropy theory, as the Shannon entropy of these frequencies.

• Cosine similarity entropy [37]-CoSiEn evaluates the angle between two embedding vectors instead of the Chebyshev distance. The angular distance $\mathrm{AngDist}_{i,j}^m$ is computed for all pairwise embedding vectors; when $\mathrm{AngDist}_{i,j}^m \le r$, the pair is counted in the number of similar patterns $P_i^m(r)$. The local and global probabilities of occurrence are then computed as for SampEn, and CoSiEn is estimated from the global probability of occurrence of similar patterns.

• Bubble entropy [38,39]-BubbEn reduces the significance of the parameters employed to obtain an estimated entropy. Based on permutation entropy, the vectors are ranked in the embedding space; the bubble sort algorithm is used for the ordering procedure and the number of swaps performed for each vector is counted. Coarser-grained distributions are created, and the entropy of these distributions is computed. By counting the number of swaps necessary to reach the ordered subsequences, instead of counting order patterns, BubbEn reduces the dependence on input parameters (such as $N$ and $m$). BubbEn embeds a given time series $\{x_i\}_{i=1}^{N}$ into an $m$-dimensional space, producing a series of $N-m+1$ vectors $X_1, X_2, \ldots$ The number of swaps required for sorting is counted for each vector $X_i$, and the probability $p_i$ of having $i$ swaps is used to evaluate a Rényi entropy $B_2^m$. Increasing the embedding dimension $m$ by one, the procedure is repeated to obtain a new entropy value $B_2^{m+1}$. Finally, BubbEn is obtained from $B_2^m$ and $B_2^{m+1}$, as for ApEn.

• Fuzzy entropy [40,41]-FuzzEn employs fuzzy membership functions such as triangular, trapezoidal, bell-shaped, Z-shaped, Gaussian, constant-Gaussian and exponential functions. FuzzEn has less dependence on $N$ and uses the same steps as the SampEn approach. First, the zero-mean embedding vectors $q_i^m = x_i^m - \mu_i^m$ are constructed, each vector being centered on its own mean $\mu_i^m$. FuzzEn calculates the fuzzy similarity $S_i^m(r, \eta)$, obtained from a fuzzy membership function of the Chebyshev distance, where $\eta$ is the order of the Gaussian function. As in the SampEn approach, the local and global probabilities of occurrence are computed, yielding the fuzzy entropy.

• Increment entropy [42]-IncrEn (similar to the permutation entropy) encodes the time series in the form of symbol sequences. Each element of an embedding vector is mapped to a word consisting of the sign $s_k = \mathrm{sgn}(v(j))$ and the size $q_k$. The sign indicates the direction of the volatility between the corresponding neighboring elements in the original time series. The pattern vector $w$ is a combination of all corresponding $(s_k, q_k)$ pairs. The relative frequency of each word $w_n$ is defined as $P(w_n) = Q(w_n)/(N-m)$, where $Q(w_n)$ is the total number of instances of the $n$th word. Finally, IncrEn is the Shannon entropy of these relative frequencies.

• Phase entropy [43]-PhasEn quantifies the distribution of the time series $\{x_i\}$ in a two-dimensional phase space. First, the time-delayed time series $Y[n]$ and $X[n]$ are calculated, and the second-order difference plot of $x$ is constructed as a scatter plot of $Y[n]$ against $X[n]$. The slope angle $\theta[n]$ of each point $(X[n], Y[n])$ is measured from the origin $(0,0)$. The plot is split into $k$ sectors serving as a coarse-graining parameter. For each sector $i = 1, 2, \ldots, k$, the sector slope angle $S_\theta[i]$ is the sum of the slope angles of the $N_i$ points of the $i$th sector. The probability distribution $p(i)$ of the sectors is formed, and PhasEn is computed as the Shannon entropy of $p(i)$.

• Slope entropy [44]-SlopEn includes amplitude information in a symbolic representation of the input time series $\{x_i\}_{i=1}^{N}$. Each subsequence of length $m$ drawn from $\{x_i\}_{i=1}^{N}$ is transformed into another subsequence of length $m-1$ made of the differences $x_i - x_{i-1}$. In order to find the corresponding symbols, a threshold is applied to these differences. SlopEn then uses the symbols 0, 1 and 2, with positive and negative versions of the last two; each symbol covers a range of slopes for the segment joining two consecutive samples of the input data. The frequency of each pattern found is mapped into a value using a Shannon entropy approach, with the factor corresponding to the number of slope patterns found.

• Entropy of entropy [45]-EnofEn: the time series $\{x_i\}_{i=1}^{N}$ is divided into consecutive non-overlapping windows $w_j^\tau$ of length $\tau$: $w_j^\tau = \{x_{(j-1)\tau+1}, \ldots, x_{(j-1)\tau+\tau}\}$. The probability $p_{jk}$ for a sample $x_i$ of the window $w_j^\tau$ to occur in state $k$ is computed, and the Shannon entropy is used to characterize the system state inside each window, producing a derived series $y_j$. In a second step, the probability $p_l$ for $y_j$ to occur in state $l$ is computed, and the Shannon entropy is used a second time, instead of the sample entropy, to characterize the degree of the state change.
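To make the SampEn construction above concrete, the following minimal Python sketch (not the authors' implementation) computes SampEn for a series, using the notation N, m, τ and r of the paper; taking the tolerance r as a fraction of the standard deviation is a common convention and an assumption here.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2, tau=1):
    """Sample entropy of a 1-D series x; r is a fraction of the standard deviation."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)

    def match_count(dim):
        # Embedding vectors x_i^dim = [x_i, x_{i+tau}, ..., x_{i+(dim-1)tau}]
        n_vec = n - (dim - 1) * tau
        emb = np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(n_vec)])
        count = 0
        for i in range(n_vec - 1):
            # Chebyshev distance to all later vectors (self-matches excluded)
            d = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            count += np.sum(d <= tol)
        return count

    b = match_count(m)        # number of m-point matches
    a = match_count(m + 1)    # number of (m+1)-point matches
    return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")
```

ApEn can be sketched similarly by including self-matches and averaging the logarithms of the local probabilities.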
The multiscale entropy MSEn is obtained from $A_s^m(r)$ and $B_s^m(r)$, the probabilities that two sequences match for $m+1$ points and $m$ points, respectively, calculated from the coarse-grained time series at the scale factor $s$. The coarse-graining reduces the accuracy of the entropy estimation, and the entropy is often undefined, as the data length becomes shorter when the scale $s$ increases. This is true in the case of SampEn, which is sensitive to the parameters (data length $N$, embedding dimension $m$, time lag $\tau$, tolerance $r$) for short signals. To avoid this, many variants of the traditional multiscale entropy method, such as the composite multiscale entropy [49,50] and the refined multiscale entropy [51,52], have been proposed. In the classical multiscale entropy method, there is only one coarse-grained time series, derived from a non-overlapping coarse-graining procedure at scale $s$. In the composite multiscale entropy method, however, there are $s$ coarse-grained time series, and the sliding windows of the coarse-graining procedures overlap. The mean of the entropy values of all coarse-grained time series is defined as the composite multiscale entropy value at scale $s$, which improves the multiscale entropy accuracy. At a scale factor $s$, cMSEn is defined from $n_{k,s}^m$, the total number of $m$-dimensional matched vector pairs calculated from the $k$th coarse-grained time series at scale factor $s$. The refined multiscale entropy [51,52], based on the multiscale entropy approach, applies different entropies as a function of the time scale in order to perform a multiscale irregularity assessment. rMSEn prevents the influence of the reduced variance on the complexity evaluation and removes the fast temporal scales; thus, the rMSEn method improves the coarse-graining process.
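As an illustration of the coarse-graining step and of the difference between the classical and composite multiscale procedures described above, here is a short Python sketch (an illustrative reading, not the paper's code); entropy_fn stands for any single-scale entropy function, e.g., the sample_entropy sketch given earlier.

```python
import numpy as np

def coarse_grain(x, s, offset=0):
    """Non-overlapping coarse-graining at scale s, starting at sample 'offset'."""
    x = np.asarray(x, dtype=float)[offset:]
    n = len(x) // s
    return x[:n * s].reshape(n, s).mean(axis=1)

def multiscale_entropy(x, entropy_fn, s_max=10):
    """Classical MSEn: one coarse-grained series per scale factor s."""
    return [entropy_fn(coarse_grain(x, s)) for s in range(1, s_max + 1)]

def composite_multiscale_entropy(x, entropy_fn, s_max=10):
    """cMSEn: average the entropies of the s shifted coarse-grained series."""
    return [float(np.mean([entropy_fn(coarse_grain(x, s, k)) for k in range(s)]))
            for s in range(1, s_max + 1)]
```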

System Description
Many industrial applications require precise regulation of the speed of the drive motors. A brushless motor operates under various speed and load conditions, and the knowledge of some physical parameters (speed, torque, current) is essential for proper speed regulation. Figure 1 shows a system implementation for brushless motor control as a Permanent Magnet Synchronous Machine in Matlab/Simulink. A three-phase inverter is used to feed the motor phases, thereby injecting currents in the coils to create the necessary magnetic fields for the three phases. The three-phase inverter is modeled as a universal bridge in Matlab, with three arms and MOSFET/diode power electronic devices (T i and B i , i = a, b, c), controlled by pulse width modulation. A simplified model of the stator consists of three coils arranged in the a, b and c directions. To ensure the brushless motor movement, the a, b and c stator windings are powered according to the rotor's position. The rotor magnetic field position is detected by three Hall sensors (placed every 120°), which provide the corresponding winding excitation through the commutation logic circuit. Table 1 summarizes the main specifications of this brushless machine.
In permanent magnet synchronous motors, a physical phenomenon can appear: electromagnetic torque oscillations. These oscillations are named the Cogging effect and are taken into consideration in [53,54]. The Cogging phenomenon is the interaction of the magnetic field produced by the permanent magnet rotor with the stator teeth. This interaction can be reduced by physically modifying the internal structure of the rotor and stator. Another way to reduce the phenomenon is a control technique that introduces this knowledge directly into the controller design, as in [53,54]. Cogging torque is a significant problem for high-precision applications where position control is required. In order to simplify the analysis, the Cogging phenomenon is neglected here.
The brushless motor design and the analysis of various control techniques are discussed in [55], where double closed loops (speed and current) are considered. The current loop is used to improve the dynamic performance of the controlled system. Wu [56] used only one speed control loop, as in [57]. Nga [58] assumed that the current control loop is ideal, meaning that the transfer function of the closed current control loop is equal to 1. In the cascaded control structure, the inner loops are designed to achieve a fast response and the outer loop is designed to achieve optimum regulation and stability.
The inner loop also keeps the torque output below a safe limit. Moreover, the controller should be developed in such a manner that it produces less torque ripple. Torque ripple arises in motor control from inefficient commutation strategies and internal gate control schemes. Ideally, the torque is constant, due to the in-phase back electromotive force and the quasi-square-wave stator current. In this paper, we consider the dynamics of the current control loop to be much faster than those of the speed control loop, in order to decouple both dynamics. Under this condition, the reference value of the inner loop, which is the output of the outer controller, can be considered nearly constant (a simple current-limit closed control loop). To achieve the regulation objective, we are interested in the steady-state phase currents and not in their dynamic performance. This paper proposes a simple control structure using only the speed control loop.
The outer loop controls the speed of the motor. The speed feedback comes from the Hall sensor positions. The three-phase control technique for the brushless motor uses a proportional integral (PI) controller. The controller receives the error signal and generates the control signal to regulate the speed response with respect to the target speed. The PI controller adjusts the duty cycle of the PWM pulses to maintain the desired speed; its proportional and integral gains are given in Table 1. The inner loop synchronizes the inverter gate states of the brushless motor with the stator winding excitation, as in Table 2.
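As an illustration of this outer speed loop, the sketch below shows a discrete PI regulator converting the speed error into a PWM duty cycle. It is only a sketch under assumptions: the gains would come from Table 1, and the anti-windup clamp on the duty cycle is an implementation detail not taken from the paper.

```python
class SpeedPI:
    """Discrete PI speed regulator producing a PWM duty cycle in [0, 1]."""

    def __init__(self, kp, ki, dt, duty_max=1.0):
        self.kp, self.ki, self.dt, self.duty_max = kp, ki, dt, duty_max
        self.integral = 0.0

    def update(self, speed_ref, speed_meas):
        error = speed_ref - speed_meas
        self.integral += error * self.dt
        duty = self.kp * error + self.ki * self.integral
        # Clamp the duty cycle and undo the integration step when saturated
        # (simple anti-windup), so the integrator does not keep growing.
        if not 0.0 <= duty <= self.duty_max:
            self.integral -= error * self.dt
            duty = min(max(duty, 0.0), self.duty_max)
        return duty
```

At each sampling step, the returned duty cycle would drive the PWM generator, while the commutation logic selects which switches T i and B i receive the pulses according to the Hall sensor states.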

Datasets
Each inverter phase has two switches, i.e., an upper one and a lower one; the corresponding phase currents are denoted i a , i b and i c . The three-phase currents of the brushless motor are recorded as one-dimensional time series.
Let us observe the current of phase a under normal operating conditions (i.e., without any fault on the switches T a , B a , T b , B b , T c or B c of the inverter), shown in Figure 2a. When an open-circuit fault occurs in phase a on switch T a , the positive phase current gets distorted and the average current of that phase becomes negative, while it is positive for the two others. The mean (DC component) of the phase a output current can be observed in Table 3 (line 2). The current amplitudes of phases b and c change; their means change too. Similarly, when an open-circuit fault occurs in phase a on switch B a , the negative phase current gets distorted and the average current of that phase becomes positive, while it is negative for the two others.
Considering a phase fault in switches T a and B a , the phase current waveforms are illustrated in Figures 4a,b and 5a. As a consequence of these faults, the mean of the phase current i a has a very low amplitude. The currents of the other phases recover their alternating waveforms. Line 8 of Table 3 presents the current means.
For instance, when two upper open-circuit faults simultaneously occur in T a and T b , the currents in the upper half-bridges are only able to flow through T c . Figures 5b and 6a,b show the abnormal distortions of the currents of phases a, b and c, which differ from normal operating conditions. During this process, the open-circuit faults degrade the system's performance, but do not cause a shutdown. If two open-circuit faults occur in T a and B b , the phase a current remains positive and the phase b current remains negative, as shown in Figure 7a,b. The other phase current (i c ) is also affected and becomes unbalanced during these fault conditions, as shown in Figure 8a.
Considering three faults, in T a , B a and T b , the phase a current is near zero and the positive cycle of the phase b current is eliminated, as shown in Figures 8b and 9a. Consequently, the phase c current only remains during the positive cycles, as shown in Figure 9b.
Similarly, the effects of B a , T b and B c faults on the phase currents are easy to identify. When the lower switch B a is faulty, the current i a flows only through T a and has a positive mean (Figure 10a). With the T b fault, the positive cycle of the phase current i b vanishes, as shown in Figure 10b. For these cases, the means of the phase currents are calculated and shown in Table 3.
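The mean values reported in Table 3 can be reproduced directly from the recorded currents. The short sketch below also illustrates the sign-based reading suggested by the cases above (a clearly negative mean on a phase hints at an open upper switch, a clearly positive mean at an open lower switch); the threshold value is an assumption used for illustration only.

```python
import numpy as np

def phase_current_means(currents):
    """Mean (DC component) of each recorded phase current, as in Table 3.
    currents: dict such as {'a': ia, 'b': ib, 'c': ic} of 1-D arrays."""
    return {phase: float(np.mean(i)) for phase, i in currents.items()}

def suspect_switches(means, threshold=0.1):
    """Coarse screening of the means: a negative mean suggests an open upper
    switch (T), a positive mean an open lower switch (B) on that phase."""
    hints = {}
    for phase, mean in means.items():
        if mean < -threshold:
            hints[phase] = "upper switch T suspected open"
        elif mean > threshold:
            hints[phase] = "lower switch B suspected open"
    return hints
```

This screening alone does not distinguish every fault combination (for instance, the T a and B a double fault leaves the phase a mean near zero, as in the healthy case), which motivates the entropy-based analysis of the next section.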

Selection of Entropy Functions
In this part, entropy is employed to characterize the complexity of the phase currents under healthy and open-circuit faulty conditions, whose waveforms are shown in Figures 2a-12b.

One Open-Circuit Fault on T a -Phase a
This study investigates the efficiency of different entropies with the parameters: data length N = 6000 samples, embedding dimension m = 2, time delay τ = 1, tolerance r = 0.2 and scale s = 3. The entropies of the 6000 samples are shown in Figure 13. The BubbEn entropy of the phase a samples (represented in red), where the open-circuit fault occurs, has a larger value than the entropy of phases b and c (represented in black); they are clearly separated. Conversely, the entropy of phase a is lower than the entropy of phases b and c for SampEn, K2En, DispEn, ApEn, SlopEn and AttEn. The separation of the three phases a, b and c is shown in Figure 13: phases b and c have entropies very close to each other, and different from that of phase a. Each of these entropies is able to detect the faulty phase. The largest difference between the entropy of phase a and the entropies of phases b and c is given by BubbEn. Many values represented in Figure 13 are an average of two, three or four entropies. Relevant values of SampEn, MSSampEn, cMSSampEn and rMSSampEn are averaged to give a mean entropy of phases a, b and c. In the same way, for slope entropy, the same entropy value is obtained with the SlopEn, MSSlopEn, cMSSlopEn and rMSSlopEn functions. With dispersion entropy also, for DispEn, MSDispEn, cMSDispEn and rMSDispEn, the same value is obtained. K2En, MSK2En, cMSK2En and rMSK2En give similar entropy values. For bubble entropy, a pertinent value is obtained only with rMSBubbEn: unfortunately, BubbEn, MSBubbEn and cMSBubbEn do not distinguish the open-circuit fault on phase a from phases b and c. The optimal entropy should be searched among all possible combinations, according to the following rules:

$$\max_{j=1,\dots,52} \left| \mathrm{entropy}_{\text{phase-}a} - \mathrm{entropy}_{\text{phase-}b} \right| \quad (36)$$

$$\max_{j=1,\dots,52} \left| \mathrm{entropy}_{\text{phase-}a} - \mathrm{entropy}_{\text{phase-}c} \right| \quad (37)$$

The objective is to maximise the distance between the entropy of phase a, where the open-circuit fault occurs, and the entropies of phases b and c. For a typical brushless motor, 52 possible open-circuit faults can be diagnosed using Equations (36) and (37), according to the principle shown in Table 4. When the entropy is able to detect the faulty phase, it is marked with '✓'. Otherwise, if the open-circuit fault is not detected (the distance between the faulty phase a and phases b or c is nearly zero), it is marked with '✗'. The neutral mark '-' is employed if the distance between phase a and phases b or c is not zero, but not large enough to detect the open-circuit fault. However, the distance is only an approximate measure on the characteristic plot.
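A compact way to read the selection rule of Equations (36) and (37) (and its variants in the following subsections) is sketched below: for one fault case, entropies maps each phase to its entropy value, and the score is the smallest gap between a faulty-phase entropy and a healthy-phase entropy, which the selected entropy function should maximise. This is an illustrative interpretation, not the authors' code.

```python
def fault_gap(entropies, faulty, healthy):
    """Smallest gap between any faulty-phase and any healthy-phase entropy."""
    return min(abs(entropies[f] - entropies[h]) for f in faulty for h in healthy)

def best_entropy(entropy_values, faulty, healthy):
    """entropy_values: {'SampEn': {'a': ..., 'b': ..., 'c': ...}, 'BubbEn': ...}
    Returns the entropy function name with the largest faulty/healthy gap."""
    return max(entropy_values,
               key=lambda name: fault_gap(entropy_values[name], faulty, healthy))

# Example: one open-circuit fault on T_a (phase a faulty, phases b and c healthy)
# best = best_entropy(values, faulty=['a'], healthy=['b', 'c'])
```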


Two Open-Circuit Faults on B a -Phase a and on T b -Phase b
The embedding dimension m, data length N, time delay τ and the choice of tolerance r remain unchanged. Figure 14 shows the performance of several entropies with two open-circuit faults: on B a -phase a and on T b -phase b. The entropies of the faulty phases a and b are in red, and the entropy of phase c is in black. The optimal entropy should be searched among all possible combinations, according to the following rules:

$$\max_{j=1,\dots,52} \left| \mathrm{entropy}_{\text{phase-}a} - \mathrm{entropy}_{\text{phase-}c} \right| \quad (38)$$

and

$$\max_{j=1,\dots,52} \left| \mathrm{entropy}_{\text{phase-}b} - \mathrm{entropy}_{\text{phase-}c} \right| \quad (39)$$

The objective is to maximise the distance between the entropy of phase a and that of phase c, as well as the distance between the entropy of phase b and that of phase c. Relevant results are obtained with rMSSampEn, CoSiEn, FuzzEn, EnofEn and rMSAttEn. For the following explanations, SlopEn and rMSBubbEn are also represented, even if they do not distinguish the open-circuit faults on phases a and b as well.

Two Open-Circuit Faults on T a and on B a -Phase a
The entropy parameters are unchanged. We investigate a phase fault: on T a and on B a -phase a. Figure 15 shows the investigation of the different entropies: the phase a entropy, with the open-circuit faults, is in red, and the entropies of phases b and c are in black. The largest distance between the phase a entropy and those of phases b and c is obtained with SampEn, ApEn and rMSBubbEn. The entropies are selected using Equations (36) and (37). The values of SampEn and ApEn reflect the particular form of the phase current i a : as shown in Figure 4a, this current has a regular shape with a very small amplitude.

Two Open-Circuit Faults on T b -Phase b and on T c -Phase c
The two open-circuit faults considered in this subsection are on T b -phase b and on T c -phase c. Figure 16 shows the different entropies: the entropy of phase a is in black, and the entropies of phases b and c are in red. The largest distance between the phase a entropy and those of phases b and c is obtained with rMSBubbEn. The entropies are selected using Equations (36) and (37). For the following explanations, SlopEn is also represented, even if it does not distinguish the open-circuit faults on phases b and c from phase a as well.

Three Open-Circuit Faults on B a -Phase a, T b -Phase b and on T c -Phase c
Three open-circuit faults occur on B a -phase a, on T b -phase b and on T c -phase c. Figure 17 shows the entropies of phases a, b and c with open-circuit faults, in red. The optimal entropy should be searched among all possible combinations, according to the rules of Equations (40) and (41), which compare the entropies of the three faulty phases. According to these rules, Figure 17 presents BubbEn and IncrEn. The entropies of phases a, b and c are very close.

Three Open-Circuit Faults on B a -Phase a, T b and B b -Phase b
Three open-circuit faults occur on B a -phase a and on T b and B b -phase b. As can be seen in Figure 18, the entropies of phases a and b, with open-circuit faults, are represented in red, and the entropy of phase c is in black. The optimal entropy should be searched among all possible combinations, according to Equations (38) and (39). This time, only PhasEn is able to detect the phases where the open-circuit faults occur. For example, this is not the case with SlopEn, for which the phase a entropy is too close to the phase c entropy.

Four Open-Circuit Faults on B a -Phase a, T b and B b -Phase b and T c -Phase c
Four open-circuit faults occur on B a -phase a, on T b and B b -phase b and on T c -phase c. Figure 19 shows the entropies of phases a, b and c with open-circuit faults, in red, selected according to Equations (40) and (41). Once again, IncrEn presents very good results.

Optimization of Parameters L, m, r, τ and s
The parameters (data length N, embedding dimension m, time lag τ and tolerance r) are discussed in the next subsections. The calculated entropy values depend on parameters such as the embedding dimension m and the tolerance r. The scale s may also affect the performance of our fault detection method. Finding an optimum set is a major challenge. The parameter optimization is carried out by maximizing the distance between the entropies of the healthy and faulty phases; all x, y, z and w cases are presented in Table 3. There are five main parameters for the entropy methods: the length L, the embedding dimension m, the threshold r, the time delay τ and the scale s. The optimal combination of L, m, r, τ and s should be searched for. In order to check the influence of the parameter variations on the entropy, we first vary the data length from 1000 to 6000 samples: L 1 = 1000 points; then the length increases to L 2 = 2000 samples, approximately two periods of the signal, followed by L 3 = 3000 points; L 4 = 4000 points represent four signal periods; then L 5 = 5000 points; and, finally, L 6 = 6000 samples, i.e., six signal periods, as in Figures 2b-12b. rMSSampEn, K2En, CondEn, DispEn, CoSiEn, rMSBubbEn, MSApEn, FuzzEn, PhasEn, SlopEn, EnofEn and rMSAttEn are each evaluated with a specific open-circuit fault. Then, the distance between the healthy phase (represented by a red curve) and the open-circuit phases (represented by a black curve) is maximal, except for one case: the distance between the three red curves is minimal for IncrEn, evaluated with the three open-circuit faults B a , T b and T c . Figure 20a shows rMSSampEn as a function of the data length: rMSSampEn increases for [L 1 , L 2 ] in the sample range (1000, 2000), decreases for [L 2 , L 3 ] in the sample range (2000, 3000), and increases again in the range [L 3 , L 4 ] (3000 to 4000 samples). Then, it slowly decreases to a constant value at L 6 . K2En and rMSBubbEn are unchanged as L increases, keeping a constant entropy value, as in Figure 20b,f. In Figure 20c, CondEn increases, decreases and increases slowly, keeping a constant distance between the entropy curves. DispEn of the healthy phase gradually decreases when the data length increases, as shown in Figure 20d. For DispEn, it is appropriate to choose L 1 , because the entropy values are length independent. To ensure a large difference between the entropies of phases a and b and the entropy of phase c (Figure 20e), it is appropriate to choose L 6 for CoSiEn. In Figure 20g, MSApEn increases slowly for [L 1 , L 6 ] in the sample range (1000, 6000). With regard to the entropy shape, a large data length ensures a maximum distance between the healthy and faulty phases. Figure 20h,k show FuzzEn and SlopEn: after an insignificant variation, these entropies are nearly constant when L is in the range [L 5 , L 6 ]. A suitable value of the data length is L 3 for IncrEn: the distance between the three entropies is then minimal. A maximal distance between the healthy phase (red curve) and the open-circuit phases (black curve) of PhasEn is obtained for L 6 , as in Figure 20j. The results of EnofEn and rMSAttEn are shown in Figure 20l,m. Even if these entropies vary (increase or decrease), the distance between the healthy and faulty phases is constant; it seems better to choose L 5 as the data length. We now turn to the embedding dimension m, varied from 2 to 8. Results of SampEn, rMSBubbEn and ApEn are shown in Figure 21a,f,g. These entropies decrease quickly as m increases, ensuring a large difference between the phase a entropy and the phases b and c entropies for m = 2.
K2En, CondEn, FuzzEn and CoSiEn are constant when m is in the range [2, 8], as in Figure 21b,c,e,h. In Figure 21d, DispEn gradually decreases when the embedding dimension m increases, keeping a constant difference between the entropy of phase a and the entropies of phases b and c. Figure 21i shows the IncrEn entropies of phases a, b and c (in red), which are very close to each other; m = 2 is chosen in order to minimise the distance between these entropies. For the last entropy, SlopEn (Figure 21j), the entropies of the healthy phases b and c decrease when m increases, while the entropy of the open-circuit phase a increases. It is appropriate to choose m = 2 for SlopEn.

Varied Time Lag (τ)
Here, we examine the influence of another parameter, the time lag τ, on some entropies. The time lag τ varies from 1 to 7. We have already illustrated the influence of the data length L and the embedding dimension m on the entropy; let us now examine the performance of rMSSampEn, K2En, CondEn, DispEn, CoSiEn, rMSBubbEn, MSApEn, FuzzEn, IncrEn, PhasEn, SlopEn and EnofEn under variation of the time lag τ. The function AttEn does not require a time lag τ. The data length L, embedding dimension m, scale s and tolerance r were fixed at N = 6000, m = 2, s = 2 and r = 0.2 in the following analysis. Figure 22a,f,g show the impact of different values of τ on rMSSampEn, rMSBubbEn and rMSApEn: a steep decrease of these entropies for phase a and a nearly constant value for phases b and c can be observed when τ increases. The largest difference between the curves is obtained for the smallest value, τ = 1. We find that the difference between the CondEn, DispEn, CoSiEn, PhasEn and EnofEn of the healthy phase and of the open-circuit phase is nearly constant, suggesting a correlation, as plotted in Figure 22c-e,j,l.
In Figure 22b, rMSK2En, as a function of τ, decreases at the beginning of the interval τ = [1, 2], then increases slowly for τ = [2, 4], and ends with an abrupt increase of the open-circuit phase entropy. Figure 22i shows the IncrEn entropies of phases a, b and c, which are very close to each other; τ = 1 is chosen in order to minimize the distance between these entropies. SlopEn presents a peak for τ = 3: the difference between the SlopEn of phase a and that of phase b is then maximal. However, as the time lag increases, the difference between the black and red curves becomes smaller. Only a low time lag (τ = 3) gives a relevant entropy, as in Figure 22k.

Varied Tolerance (r)
The analysis of the tolerance r, varying from 0.2 to 0.7, was only performed on rMSSampEn, rMSK2En, CoSiEn, rMSApEn and FuzzEn. The data length, time lag, scale and embedding dimension are N = 6000, τ = 2, s = 2 and m = 2. It is appropriate to choose r = 0.2 for rMSK2En. For the last entropy, FuzzEn, the difference between phases a and b is nearly constant, as plotted in Figure 23d; FuzzEn is valid for any value of r. We now consider the scale s. Figure 24a shows rMSSampEn as a function of the scale: the entropy of the open-circuit phase a decreases for s = [1, 2], is nearly constant in the range s = [3, 7], increases in the range s = [7, 9], and ends with a decrease for s = [9, 10]. In the meantime, the entropy of the healthy phase is nearly constant over the whole range of s. The scale s = 2 is appropriate. As for rMSSampEn, rMSK2En is nearly constant in the healthy case. After a very slow variation, rMSK2En of the open-circuit phase a increases in the range s = [5, 9], decreasing at the end for the last scale. To ensure a large difference between the phase a entropy and the phases b and c entropies, it is appropriate to choose s = 9 for rMSK2En, as in Figure 24b.
In Figure 24c-e,h,j, CondEn, DispEn, CoSiEn, FuzzEn and PhasEn are represented. The differences between the healthy phase entropy and the open-circuit phase entropy are nearly constant over the range s = [2, 10].
Results of rMSBubbEn are shown in Figure 24f. The entropy of phase a decreases gradually with an increase of s, while the entropy of phase b undergoes slight variations. The first scale, s = 2, gives the largest distance between the entropies of phases a and b. The same result is obtained for rMSApEn, as in Figure 24g: at the end of the s interval, the two curves merge and the open-circuit fault on phase a can no longer be detected; only a lower scale (s = 2) entropy has a relevant significance. Scale s = 4 or 5 gives the smallest distance between the faulty phases a, b and c for IncrEn, as shown in Figure 24i. The cMSSlopEn entropies of phases a and b are shown in Figure 24k: only lower scale entropies show a relevant significance. For s = 2, cMSSlopEn is 1, exceeding 2.4 for s = 4. Furthermore, the scale analysis reveals additional entropy information not previously observed at scale s = 2. cMSSlopEn clearly presents two peaks, for s = 4 and 6: the difference between the cMSSlopEn of phase a and that of phase b is maximal for s = 4. However, the difference between the black and red curves becomes smaller as the scale increases.
The results of MSEnofEn are shown in Figure 24l. Even if these entropies vary (increase or decrease), the distance between the healthy and faulty phases is constant; it seems better to choose the scale s = 2. Figure 24m shows that rMSAttEn repeatedly goes up and down, with the phase a entropy and the phases b and c entropies interlaced. When the two curves (the phase a entropy and the other phases' entropy) merge, the phase a open-circuit fault cannot be detected. Only lower scale (s = 2) and middle scale (s = 5, 6 or 7) entropies have a relevant significance.
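In summary, the parameter study of this section amounts to a grid search over (L, m, τ, r, s), retaining the combination that maximises the distance between the healthy-phase and faulty-phase entropies. A minimal Python sketch of such a search is given below; entropy_fn(x, m, tau, r, s) is a placeholder for any of the entropy functions considered, and the candidate values are illustrative choices taken from the ranges explored above, not prescriptions from the paper.

```python
import itertools
import numpy as np

def grid_search(currents, faulty, healthy, entropy_fn,
                lengths=(1000, 2000, 3000, 4000, 5000, 6000),
                dims=(2, 3, 4), taus=(1, 2, 3),
                tols=(0.2, 0.3, 0.4), scales=(1, 2, 3)):
    """Return the (L, m, tau, r, s) combination maximising the smallest gap
    between faulty-phase and healthy-phase entropy values."""
    best, best_gap = None, -np.inf
    for L, m, tau, r, s in itertools.product(lengths, dims, taus, tols, scales):
        values = {p: entropy_fn(np.asarray(currents[p][:L]), m, tau, r, s)
                  for p in currents}
        gap = min(abs(values[f] - values[h]) for f in faulty for h in healthy)
        if gap > best_gap:
            best, best_gap = (L, m, tau, r, s), gap
    return best, best_gap
```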

Conclusions
In this paper, we provide a systematic overview of many known entropy measures, highlighting their applicability to inverter fault detection. Several usual entropies (sample entropy, Kolmogorov entropy, dispersion entropy, cosine similarity entropy, bubble entropy, approximate entropy, fuzzy entropy, increment entropy, phase entropy, slope entropy, entropy of entropy, attention entropy) and multiscale entropies (as well as the refined and composite multiscale entropies) are proposed to quantify the complexity of the brushless motor currents. Their roles in fault detection are summarized through the entropy distance between a healthy phase and an open-circuit faulty phase. Moreover, this paper reveals the great ability of some entropies to distinguish between a healthy and an open-circuit faulty phase. Finally, the simulation results show that these entropies are able to detect and locate the arms of the bridge with one, two, three or even four open-circuit faults.