Step Size Bound of the Sequential Partial Update LMS Algorithm with Periodic Input Signals

This paper derives an upper bound for the step size of the sequential partial update (PU) LMS adaptive algorithm when the input signal is a periodic reference consisting of several harmonics. The maximum step size is expressed in terms of the gain in step size of the PU algorithm, defined as the ratio between the upper bounds that ensure convergence in the following two cases: firstly, when only a subset of the weights of the filter is updated during every iteration; and secondly, when the whole filter is updated at every cycle. Thus, this gain in step size determines the factor by which the step size parameter can be increased in order to compensate the inherently slower convergence rate of the sequential PU adaptive algorithm. The theoretical analysis of the strategy developed in this paper excludes the use of certain frequencies corresponding to notches that appear in the gain in step size. This strategy has been successfully applied in the active control of periodic disturbances consisting of several harmonics, so as to reduce the computational complexity of the control system without either slowing down the convergence rate or increasing the residual error. Simulated and experimental results confirm the expected behavior.


Context of application: active noise control systems
Acoustic noise reduction can be achieved by two different methods. Passive techniques are based on the absorption and reflection properties of materials, showing excellent noise attenuation for frequencies above 1 kHz. Nevertheless, passive sound absorbers do not work well at low frequencies because the acoustic wavelength becomes large compared to the thickness of a typical noise barrier. On the other hand, active noise control (ANC) techniques are based on the principle of destructive wave interference, whereby an antinoise is generated with the same amplitude as the undesired disturbance but with an appropriate phase shift in order to cancel the primary noise at a given location, generating a zone of silence around an acoustical sensor. The basic idea behind active control was patented by Lueg [1]. However, it was with the relatively recent advent of powerful and inexpensive digital signal processors (DSPs) that ANC techniques became practical because of their capacity to perform the computational tasks involved in real time.
The most popular adaptive algorithm used in DSP-based implementations of ANC systems is the filtered-x least mean-square (FxLMS) algorithm, originally proposed by Morgan [2] and independently derived by Widrow et al. [3] in the context of adaptive feedforward control and by Burgess [4] for the active control of sound in ducts. Figure 1 shows the arrangement of electroacoustic elements and the block diagram of this well-known solution, aimed at attenuating acoustic noise by means of secondary sources. Due to the presence of a secondary path transfer function following the adaptive filter, the conventional LMS algorithm must be modified to ensure convergence. The mentioned secondary path includes the D/A converter, power amplifier, loudspeaker, acoustic path, error microphone, and A/D converter. The solution proposed by the FxLMS is based on the placement of an accurate estimate of the secondary path transfer function in the weight update path, as originally suggested in [2]. Thus, the regressor signal of the adaptive filter is obtained by filtering the reference signal through the estimate of the secondary path.

Partial update LMS algorithm
The LMS algorithm and its filtered-x version have been widely used in control applications because of their simple implementation and good performance. However, the adaptive FIR filter may eventually require a large number of coefficients to meet the requirements imposed by the addressed problem. For instance, in the ANC system described in Figure 1(b), the task associated with the adaptive filter, in order to minimize the error signal, is to accurately model the primary path and inversely model the secondary path. Previous research in the field has shown that if the active canceller has to deal with an acoustic disturbance consisting of closely spaced frequency harmonics, a long adaptive filter is necessary [5]. Thus, an improvement in performance is achieved at the expense of increasing the computational load of the control strategy. Because of limitations in the computational efficiency and memory capacity of low-cost DSP boards, a large number of coefficients may even impair the practical implementation of the LMS or more complex adaptive algorithms.
As an alternative to the reduction of the number of coefficients, one may choose to update only a portion of the filter coefficient vector at each sample time. Partial update (PU) adaptive algorithms have been proposed to reduce the large computational complexity associated with long adaptive filters. As far as the drawbacks of PU algorithms are concerned, it should be noted that their convergence speed is reduced approximately in proportion to the filter length divided by the number of coefficients updated per iteration, that is, the decimation factor N. Therefore, the tradeoff between convergence performance and complexity is clearly established: the larger the saving in computational costs, the slower the convergence rate. Two well-known adaptive algorithms carry out the partial updating process of the filter vector employing decimated versions of the error or the regressor signals [6]. These algorithms are, respectively, the periodic LMS and the sequential LMS. This work focuses on the latter.
The sequential LMS algorithm with decimation factor N updates a subset of size L/N, out of a total of L coefficients, per iteration according to (1), for 1 ≤ l ≤ L, where w_l(n) represents the lth weight of the filter, μ is the step size of the adaptive algorithm, x(n) is the regressor signal, and e(n) is the error signal. The reduction in computational costs of the sequential PU strategy depends directly on the decimation factor N. Tables 1 and 2 show, respectively, the computational complexity of the LMS and the sequential LMS algorithms in terms of the average number of operations required per cycle, when used in the context of a filtered-x implementation of a single-channel ANC system. The length of the adaptive filter is L, the length of the offline estimate of the secondary path is L_s, and the decimation factor is N.
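The sequential update rule referred to in (1) can be sketched in code. The following is a minimal illustration, not the paper's implementation; it assumes the common convention that iteration n updates the coefficients whose index satisfies l mod N = n mod N, and the function name and buffer layout are ours:

```python
import numpy as np

def sequential_lms_step(w, x_buf, d, mu, n, N):
    """One iteration of the sequential partial update LMS (illustrative sketch).

    w     : (L,) adaptive filter weights, updated in place
    x_buf : (L,) most recent regressor samples, x_buf[0] = x(n)
    d     : desired sample d(n)
    mu    : step size
    n     : iteration index
    N     : decimation factor (only L/N weights are updated per iteration)
    """
    y = w @ x_buf                                  # filter output y(n)
    e = d - y                                      # error signal e(n)
    mask = (np.arange(len(w)) % N) == (n % N)      # subset updated this cycle
    w[mask] += mu * e * x_buf[mask]                # partial update of L/N weights
    return w, e
```

With L = 8 and N = 4, each call touches only L/N = 2 coefficients, cycling through the four subsets as n advances.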
The criterion for the selection of the coefficients to be updated can be modified and, as a result, different PU adaptive algorithms have been proposed [7][8][9][10]. These variations of the cited PU LMS algorithms speed up the convergence rate at the expense of increasing the number of operations per cycle. The extra operations include the "intelligence" required to optimize the selection of the coefficients to be updated at every instant.
In this paper, we try to go a step further, showing that in applications based on the sequential LMS algorithm, where the regressor signal is periodic, the inclusion of a new parameter-called gain in step size-in the traditional tradeoff proves that one can achieve a significant reduction in the computational costs without degrading the performance of the algorithm. The proposed strategy-filtered-x sequential least mean-square algorithm with gain in step size (G μ -FxSLMS)-has been successfully applied in our laboratory in the context of active control of periodic noise [5].

Assumptions in the convergence analysis
Before focusing on the sequential PU LMS strategy and the derivation of the gain in step size, it is necessary to remark on two assumptions about the upcoming analysis: the independence theory and the slow convergence condition.
The traditional approach to convergence analyses of LMS and FxLMS algorithms is based on stochastic inputs instead of deterministic signals such as a combination of multiple sinusoids. Those stochastic analyses assume independence between the reference (or regressor) signal and the coefficients of the filter vector. In spite of the fact that this independence assumption is not satisfied, or is at least questionable, when the reference signal is deterministic, some researchers have previously used the independence assumption with a deterministic reference. For instance, Kuo et al. [11] assumed the independence theory, the slow convergence condition, and an exact offline estimate of the secondary path to state that the maximum step size of the FxLMS algorithm is bounded by the inverse of the maximum eigenvalue of the autocorrelation matrix of the filtered reference, when the reference is considered to be the sum of multiple sinusoids. Bjarnason [12] likewise used the independence theory to carry out an FxLMS analysis extended to a sinusoidal input. According to Bjarnason, this approach is justified by the fact that experience with the LMS algorithm shows that results obtained by applying the independence theory retain sufficient information about the structure of the adaptive process to serve as reliable design guidelines, even for highly dependent data samples.
As far as the second assumption is concerned, in the context of the traditional convergence analysis of the FxLMS adaptive algorithm [13, Chapter 3], it is necessary to assume slow convergence (i.e., that the control filter changes slowly) and to count on an exact estimate of the secondary path in order to commute the order of the adaptive filter and the secondary path [2]. In so doing, the output of the adaptive filter carries through directly to the error signal, and the traditional LMS algorithm analysis can be applied by using as regressor signal the result of filtering the reference signal through the secondary path transfer function. It could be argued that this condition compromises the determination of an upper bound on the step size of the adaptive algorithm; in practice, however, slow convergence is guaranteed because the convergence factor is subject to a much more restrictive condition with a periodic reference than with a white noise reference. It has been proved that with a sinusoidal reference, the upper bound of the step size is inversely proportional to the product of the length of the filter and the delay in the secondary path, whereas with a white reference signal, the bound depends inversely on the sum of these parameters instead of their product [12, 14]. Simulations with a white noise reference signal suggest that a realistic upper bound on the step size is given by [15, Chapter 3]

μ_max = 1 / (P_x′ (L + Δ)),  (2)

where P_x′ is the power of the filtered reference, L is the length of the adaptive filter, and Δ is the delay introduced by the secondary path. Bjarnason [12] analyzed FxLMS convergence with a sinusoidal reference, but employed the habitual assumptions made with stochastic signals, that is, the independence theory.
The stability condition derived by Bjarnason is given in (3); in the case of large delay Δ, (3) simplifies to (4). Vicente and Masgrau [14] obtained an upper bound for the FxLMS step size that ensures convergence when the reference signal is deterministic (extended to any combination of multiple sinusoids). The derivation of that result requires none of the usual approximations, such as independence between reference and weights or slow convergence; the resulting maximum step size for a sinusoidal reference is given in (5). The similarity between the two convergence conditions, (4) and (5), is evident in spite of the fact that the former analysis is based on the independence assumption, whereas the latter analysis is exact. This similarity justifies the use of the independence theory when dealing with sinusoidal references, just to obtain a first-approach limit. In other words, we look for a useful guide for determining the maximum step size; moreover, as we will see in this paper, the derived bounds and theoretically predicted behavior are found to correspond not only to simulation but also to experimental results obtained in the laboratory with practical implementations of ANC systems based on DSP boards.

[Figure 2: Summary of the sequential PU algorithm, showing the coefficients to be updated at each iteration and the related samples of the regressor signal used in each update, x′(n) being the value of the regressor signal at the current instant.]

To sum up, independence theory and slow convergence are assumed in order to derive a bound for a filtered-x sequential PU LMS algorithm with deterministic periodic inputs. Despite the fact that such assumptions might initially be questionable, previous research and the achieved results confirm the applicability of these strategies to the attenuation of periodic disturbances in the context of ANC, achieving the same performance as that of the full update FxLMS in terms of convergence rate and misadjustment, but with lower computational complexity.
As far as the applicability of the proposed idea is concerned, the contribution of this paper to the design of the step size parameter is applicable not only to the filtered-x sequential LMS algorithm but also to basic sequential LMS strategies. In other words, the derivation and analysis of the gain in step size could have been carried out without consideration of a secondary path. The reason for studying the specific case that includes the filtered-x stage is the unquestionable existence of a widespread problem: the need to attenuate periodic disturbances by means of ANC systems implementing filtered-x algorithms on low-cost DSP-based boards, where the reduction of the number of operations required per cycle is of great importance.

Overview
Many convergence analyses of the LMS algorithm try to derive exact bounds on the step size that guarantee mean and mean-square convergence based on the independence assumption [16, Chapter 6]. Analyses based on this assumption have been extended to sequential PU algorithms [6] to yield the following result: the bounds on the step size for the sequential LMS algorithm are the same as those for the LMS algorithm and, consequently, a larger step size cannot be used to compensate its inherently slower convergence rate. However, this result is only valid for independent identically distributed (i.i.d.) zero-mean Gaussian input signals.
To obtain a valid analysis in the case of periodic signals as input of the adaptive filter, we will focus on the updating process of the coefficients when the L-length filter is adapted by the sequential LMS algorithm with decimation factor N. This algorithm updates just L/N coefficients per iteration according to (1). For ease of analysis of the PU strategy, it is assumed throughout the paper that L/N is an integer. Figure 1(b) shows the block diagram of a filtered-x ANC system, where the secondary path S(z) is placed after the digital filter W(z) controlled by an adaptive algorithm. As has been previously stated, under the assumption of slow convergence and considering an accurate offline estimate of the secondary path, the order of W(z) and S(z) can be commuted and the resulting equivalent diagram simplified. Thus, standard LMS algorithm techniques can be applied to the filtered-x version of the sequential LMS algorithm in order to determine the convergence of the mean weights and the maximum value of the step size [13, Chapter 3]. The simplified analysis is based on the consideration of the filtered reference as the regressor signal of the adaptive filter. This signal is denoted as x′(n) in Figure 1(b). Figure 2 summarizes the sequential PU algorithm given by (1), indicating the coefficients to be updated at each iteration and the related samples of the regressor signal. In the scheme of Figure 2, the following update is considered to be carried out during the first iteration. The current value of the regressor signal is x′(n). According to (1) and Figure 2, this value is used to update the first N coefficients of the filter during the following N iterations. Generally, at each iteration of a full update adaptive algorithm, a new sample of the regressor signal has to be taken as the most recent value of the filtered reference signal.
However, according to Figure 2, the sequential LMS algorithm uses only every Nth element of the regressor signal. Thus, it is not worth computing a new sample of the filtered reference at every algorithm iteration. It is enough to obtain the value of a new sample at just one out of N iterations.
The L-length filter can be considered as formed by N subfilters of L/N coefficients each. These subfilters are obtained by uniformly sampling the weights of the original vector by a factor of N. The coefficients of the first subfilter are encircled in Figure 2. Hence, the whole updating process can be understood as the N-cyclical updating schedule of N subfilters of length L/N. Coefficients occupying the same relative position in every subfilter are updated with the same sample of the regressor signal. This regressor signal is renewed only once every N iterations. That is, after N iterations, the oldest value is shifted out of the valid range and a new value is acquired and subsequently used to update the first coefficient of each subfilter.
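The decomposition into subfilters can be illustrated with array slicing; a small sketch (indices and values are illustrative):

```python
import numpy as np

L, N = 12, 4
w = np.arange(L)                      # weights w_1 ... w_L (illustrative values)

# N subfilters of length L/N, obtained by uniformly sampling the weights by N
subfilters = [w[j::N] for j in range(N)]

# subfilter 0 holds w_1, w_{1+N}, w_{1+2N}, ... (the encircled coefficients)
assert [len(s) for s in subfilters] == [L // N] * N
assert subfilters[0].tolist() == [0, 4, 8]
```

Coefficients at the same position within each subfilter (e.g., the first entry of every slice) are the ones that share a common sample of the regressor signal.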
To sum up, during N consecutive instants, N subfilters of length L/N are updated with the same regressor signal. This regressor signal is an N-decimated version of the filtered reference signal. Therefore, the overall convergence can be analyzed on the basis of the joint convergence of N subfilters: (i) each of length L/N, (ii) updated by an N-decimated regressor signal.

Spectral norm of autocorrelation matrices: the triangle inequality
The autocorrelation matrix R of a periodic signal consisting of several harmonics is Hermitian and Toeplitz. The spectral norm of a matrix A is defined as the square root of the largest eigenvalue of the matrix product A^H A, where A^H is the Hermitian transpose of A [17], that is,

‖A‖₂ = √(λ_max(A^H A)).  (6)

The spectral norm of a matrix satisfies, among other norm conditions, the triangle inequality given by

‖A + B‖₂ ≤ ‖A‖₂ + ‖B‖₂.  (7)

The application of the definition of the spectral norm to the Hermitian correlation matrix R leads us to conclude that

‖R‖₂ = λ_max(R).  (8)

Therefore, since A and B are correlation matrices (and hence Hermitian and positive semidefinite), we have the following result:

λ_max(A + B) ≤ λ_max(A) + λ_max(B).  (9)
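The resulting eigenvalue bound λ_max(A + B) ≤ λ_max(A) + λ_max(B) is easy to check numerically; a sketch with sample correlation matrices of two tones (the frequencies, matrix size, and record length are arbitrary choices):

```python
import numpy as np

L = 16
n = np.arange(4000)

def corr_matrix(x, L):
    # sample autocorrelation matrix estimated from sliding length-L snapshots
    X = np.lib.stride_tricks.sliding_window_view(x, L)
    return (X.T @ X) / X.shape[0]

A = corr_matrix(np.cos(2 * np.pi * 0.05 * n), L)
B = corr_matrix(np.cos(2 * np.pi * 0.12 * n), L)

lam_max = lambda M: np.linalg.eigvalsh(M)[-1]   # largest eigenvalue (Hermitian)
bound_holds = lam_max(A + B) <= lam_max(A) + lam_max(B) + 1e-9
```

The small slack term only absorbs floating-point round-off; the inequality itself is guaranteed by the norm properties above.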

Gain in step size for periodic input signals
At this point, a convergence analysis is carried out in order to derive a bound on the step size of the filtered-x sequential PU LMS algorithm when the regressor vector is a periodic signal consisting of multiple sinusoids. It is known that the LMS adaptive algorithm converges in mean to the optimal solution if the step size satisfies [16, Chapter 6]

0 < μ < 2/λ_max,  (10)

where λ_max is the largest eigenvalue of the input autocorrelation matrix

R = E[x′(n) x′^T(n)],  (11)

x′(n) being the regressor vector of the adaptive algorithm.
As has been previously stated, under the assumptions considered in Section 1.3, in the case of an ANC system based on the FxLMS, the traditional LMS algorithm analysis can be used, considering that the regressor vector corresponds to the reference signal filtered by an estimate of the secondary path. The proposed analysis is based on the ratio between the largest eigenvalues of the autocorrelation matrix of the regressor signal in two different situations: firstly, when the adaptive algorithm is the full update LMS; and secondly, when the updating strategy is based on the sequential LMS algorithm with a decimation factor N > 1. The sequential LMS with N = 1 corresponds to the LMS algorithm.
Let the regressor vector x′(n) be formed by a periodic signal consisting of K harmonics of the fundamental frequency f₀,

x′(n) = Σ_{k=1..K} A_k cos(2π k f₀ n + φ_k).  (12)

The autocorrelation matrix of the whole signal can be expressed as the sum of K simpler matrices, with each being the autocorrelation matrix of a single tone [11],

R = Σ_{k=1..K} R_k,  (13)

where

[R_k]_{i,j} = (A_k²/2) cos(2π k f₀ (i − j)).  (14)

If the simple LMS algorithm is employed, the largest eigenvalue of each simple matrix R_k is given by [11]

λ_{k,max}^{N=1} = (A_k²/4) (L + |sin(2π k f₀ L) / sin(2π k f₀)|).  (15)

According to (9), the largest eigenvalue of a sum of matrices is bounded by the sum of the largest eigenvalues of its components. Therefore, the largest eigenvalue of R can be bounded as

λ_max ≤ Σ_{k=1..K} λ_{k,max}^{N=1}.  (16)

At the end of Section 2.1, two key differences were derived in the case of the sequential LMS algorithm: the convergence condition of the whole filter can be translated to the parallel convergence of N subfilters of length L/N adapted by an N-decimated regressor signal. Considering both changes, the largest eigenvalue of each simple matrix R_k can be expressed as

λ_{k,max}^{N} = (A_k²/4) (L/N + |sin(2π k f₀ L) / sin(2π k f₀ N)|),  (17)

and, considering the triangle inequality (9), we have

λ_max ≤ Σ_{k=1..K} λ_{k,max}^{N}.  (18)

Defining the gain in step size G_μ as the ratio between the bounds on the step sizes in both cases, we obtain the factor by which the step size parameter can be multiplied when the adaptive algorithm uses PU,

G_μ = ( Σ_{k=1..K} λ_{k,max}^{N=1} ) / ( Σ_{k=1..K} λ_{k,max}^{N} ).  (19)

In order to more easily visualize the dependence of the gain in step size on the length of the filter L and on the decimation factor N, let a single tone of normalized frequency f₀ be the regressor signal,

x′(n) = A cos(2π f₀ n + φ).  (20)

Now, the gain in step size, that is, the ratio between the bounds on the step size when N > 1 and N = 1, is given by

G_μ = (L + |sin(2π f₀ L)/sin(2π f₀)|) / (L/N + |sin(2π f₀ L)/sin(2π f₀ N)|).  (21)

Basically, the analytical expressions and figures show that the step size can be multiplied by N as long as certain frequencies, at which a notch in the gain in step size appears, are avoided.
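The single-tone gain in step size can be evaluated numerically. The sketch below assumes the largest eigenvalue takes the form (A²/4)(L + |sin(2πf₀L)/sin(2πf₀)|) for the full update, with the length-L/N, N-decimated analogue for the sequential case, and forms the ratio of the two bounds; away from the notches the value approaches N:

```python
import numpy as np

def gain_in_step_size(f0, L, N):
    """Gain in step size G_mu for a single tone of normalized frequency f0.

    Sketch of the ratio between the step-size bounds for the sequential
    (N > 1) and full update (N = 1) algorithms, under the assumed eigenvalue
    expressions for a sinusoidal regressor. f0 must avoid the exact notch
    frequencies f0 = k/(2N), where the expression is an indeterminate 0/0.
    """
    num = L + abs(np.sin(2 * np.pi * f0 * L) / np.sin(2 * np.pi * f0))
    den = L / N + abs(np.sin(2 * np.pi * f0 * L) / np.sin(2 * np.pi * f0 * N))
    return num / den

L, N = 256, 4
g = gain_in_step_size(0.07, L, N)   # away from the notches, close to N = 4
```

Evaluating the ratio near f₀ = k/(2N) shows the gain dipping toward N/2, which is consistent with the reduction from 64 to 32 observed in the simulations reported later.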
The location of these critical frequencies, as well as the number and width of the notches, will be analyzed as a function of the sampling frequency F_s, the length of the adaptive filter L, and the decimation factor N. According to (19) and (21), with a decimation factor N the step size can be multiplied by N and, as a result of that affordable compensation, the PU sequential algorithm converges as fast as the full update FxLMS algorithm as long as the undesired disturbance is free of components located at the notches of the gain in step size. Figure 3 shows that the total number of equidistant notches appearing in the gain in step size is (N − 1). In fact, the notches appear at the frequencies given by

f_k = k F_s / (2N),  k = 1, 2, …, N − 1.  (22)

It is important to prevent the undesired sinusoidal noise from falling at the mentioned notches because the gain in step size is smaller there, with the subsequent reduction in convergence rate. As far as the width of the notches is concerned, Figure 4 (where the decimation factor is N = 2) shows that the smaller the length of the filter, the wider the main notch of the gain in step size. In fact, if L/N is an integer, the width between the first zeros of the main notch can be expressed as

Δf = F_s / L.  (23)

Simulations and practical experiments confirm that at these problematic frequencies, the gain in step size cannot be applied at its maximum value N.
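The notch locations can be tabulated directly from the sampling frequency and the decimation factor; a sketch assuming the notches fall at f_k = k·F_s/(2N), which matches both experiments reported in this paper (62.5 Hz spacing for F_s = 8000 samples/s and N = 64; 400 Hz spacing for F_s = 3200 samples/s and N = 4):

```python
def notch_frequencies(fs, N):
    """Frequencies (Hz) of the N - 1 equidistant notches in the gain in step size."""
    return [k * fs / (2 * N) for k in range(1, N)]

print(notch_frequencies(3200, 4))   # → [400.0, 800.0, 1200.0]
```

A disturbance harmonic coinciding with any of these frequencies forces a reduced gain at that component, so in practice the list can be checked against the spectrum of the noise before choosing N.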
If it were not possible to avoid the presence of some harmonic at a frequency where there is a notch in the gain, the proposed strategy could be combined with the filtered-error least mean-square (FeLMS) algorithm [13, Chapter 3]. The FeLMS algorithm is based on a shaping filter C(z) placed in the error path and in the filtered reference path. The transfer function C(z) is the inverse of the desired shape of the residual noise. Therefore, C(z) must be designed as a comb filter with notches at the problematic frequencies. As a result, the harmonics at those frequencies would not be canceled. Nevertheless, if a noise component were to fall in a notch, using a smaller step size could be preferable to using the FeLMS, considering that it is typically more important to cancel all noise disturbance frequencies than to obtain the fastest possible convergence rate.

NOISE ON THE WEIGHT VECTOR SOLUTION AND EXCESS MEAN-SQUARE ERROR
The aim of this section is to prove that the full-strength gain in step size G_μ = N can be applied in the context of ANC systems controlled by the filtered-x sequential LMS algorithm without an additional increase in mean-square error caused by the noise on the weight vector solution. We begin with an analysis of the trace of the autocorrelation matrix of an N-decimated signal x_N(n), which is included to provide mathematical support for subsequent parts. The second part of the section revisits the analysis performed by Widrow and Stearns of the effect of gradient noise on the LMS algorithm [16, Chapter 6]. The section ends with the extension of that analysis to the G_μ-FxSLMS algorithm.

Properties of the trace of an N-decimated autocorrelation matrix
Let the L × 1 vector x(n) represent the elements of a signal. The expectation of the outer product of the vector x(n) with itself determines the L × L autocorrelation matrix R of the signal,

R = E[x(n) x^T(n)].  (25)

The N-decimated signal x_N(n) is obtained from the vector x(n) by multiplying x(n) by the auxiliary matrix I_k^(N),

x_N(n) = I_k^(N) x(n),  (26)

where I_k^(N) is obtained from the identity matrix I of dimension L × L by zeroing out some of its elements: the first nonnull element on its main diagonal appears at the kth position, and the superscript (N) denotes the fact that two consecutive nonzero elements on the main diagonal are separated by N positions, that is,

I_k^(N) = diag(0, …, 0, 1, 0, …, 0, 1, 0, …),  (27)

with ones at positions k, k + N, k + 2N, …. As a result of (26), the autocorrelation matrix R_N of the new signal x_N(n) only presents nonnull elements on its main diagonal and on diagonals separated from it by multiples of N positions. Thus,

[R_N]_{ij} = [R]_{ij} if i ≡ k (mod N) and j ≡ k (mod N), and 0 otherwise.  (28)

The matrix R_N can be expressed in terms of R as

R_N = I_k^(N) R I_k^(N).  (29)

We define the diagonal matrix Λ whose main diagonal comprises the L eigenvalues of R. If Q is a matrix whose columns are the eigenvectors of R, we have

R = Q Λ Q^{-1}.  (30)

The trace of R is defined as the sum of its diagonal elements; it can also be obtained as the sum of its eigenvalues, that is,

tr(R) = Σ_{i=1..L} r_{ii} = Σ_{i=1..L} λ_i.  (31)

Since R is Toeplitz, all its diagonal elements are equal, and only L/N of them survive in R_N, so the relation between the traces of R and R_N is given by

tr(R_N) = tr(R)/N.  (32)
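The trace relation tr(R_N) = tr(R)/N, which holds because the Toeplitz matrix R has a constant main diagonal of which only L/N entries survive the selection, can be verified numerically; a sketch with an estimated autocorrelation matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, k = 12, 4, 1

# Toeplitz autocorrelation matrix R of a stationary signal (estimated lags)
x = rng.standard_normal(5000)
r = np.array([x[: len(x) - m] @ x[m:] / (len(x) - m) for m in range(L)])
R = r[np.abs(np.arange(L)[:, None] - np.arange(L)[None, :])]

# selection matrix I_k^(N): ones on the main diagonal every N positions
Ik = np.diag([1.0 if (i - (k - 1)) % N == 0 else 0.0 for i in range(L)])

R_N = Ik @ R @ Ik          # autocorrelation matrix of the decimated signal
```

Here the equality is exact for any Toeplitz R, since tr(R) = L·r(0) while tr(R_N) = (L/N)·r(0).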

Effects of the gradient noise on the LMS algorithm
Let the vector w(n) represent the weights of the adaptive filter, which are updated according to the LMS algorithm as follows:

w(n + 1) = w(n) − (μ/2) ∇̂(n) = w(n) + μ e(n) x(n),  (33)

where μ is the step size, ∇̂(n) = −2 e(n) x(n) is the gradient estimate at the nth iteration, e(n) is the error at the nth iteration, and x(n) is the vector of input samples, also called the regressor signal.
We define v(n) as the deviation of the weight vector from its optimum value,

v(n) = w(n) − w_opt,  (34)

and v′(n) as the rotation of v(n) by means of the eigenvector matrix Q,

v′(n) = Q^{-1} v(n).  (35)

In order to measure the difference between the actual and optimal performance of an adaptive algorithm, two parameters can be taken into account: the excess mean-square error and the misadjustment. The excess mean-square error ξ_excess is the average mean-square error less the minimum mean-square error, that is,

ξ_excess = E[ξ(n)] − ξ_min.  (36)
The misadjustment M is defined as the excess mean-square error divided by the minimum mean-square error,

M = ξ_excess / ξ_min.  (37)

Random weight variations around the optimum value of the filter cause an increase in mean-square error. The average of these increases is the excess mean-square error. Widrow and Stearns [16, Chapters 5 and 6] analyzed the steady-state effects of gradient noise on the weight vector solution of the LMS algorithm by means of the definition of a vector of noise n(n) in the gradient estimate at the nth iteration. It is assumed that the LMS process has converged to a steady-state weight vector solution near its optimum and that the true gradient ∇(n) is close to zero. Thus, we write

∇̂(n) = ∇(n) + n(n) ≈ n(n).  (38)

The weight vector covariance in the principal axis coordinate system, that is, in primed coordinates, is related to the covariance of the noise as follows [16, Chapter 6]:

cov[v′(n)] = (μ/4) Λ^{-1} (I − (μ/2)Λ)^{-1} cov[n′(n)].  (39)

In practical situations, (μ/2)Λ tends to be negligible with respect to I, so that (39) simplifies to

cov[v′(n)] = (μ/4) Λ^{-1} cov[n′(n)].  (40)

From (38), it can be shown that the covariance of the gradient estimation noise of the LMS algorithm at the minimum point is related to the input autocorrelation matrix according to

cov[n(n)] = E[n(n) n^T(n)] = 4 E[e²(n)] R.  (41)
In (41), the error and the input vector are considered statistically independent because at the minimum point of the error surface both signals are orthogonal.
To sum up, (40) and (41) indicate that how close the LMS algorithm comes to optimality in the mean-square error sense depends on the product of the step size and the autocorrelation matrix of the regressor signal x(n).

Effects of gradient noise on the filtered-x sequential LMS algorithm
At this point, the goal is to carry out an analysis of the effect of gradient noise on the weight vector solution for the case of the G μ -FxSLMS algorithm in a similar manner as in the previous section.
The weights of the adaptive filter when the G_μ-FxSLMS algorithm is used are updated according to the recursion

w(n + 1) = w(n) + G_μ μ e(n) I_{1 + n mod N}^(N) x′(n),  (42)

where I_{1 + n mod N}^(N) is obtained from the identity matrix as expressed in (27). The gradient estimation noise of the filtered-x sequential LMS algorithm at the minimum point, where the true gradient is zero, is given by

n(n) = −2 e(n) I_{1 + n mod N}^(N) x′(n).  (43)

Considering PU, only L/N terms out of the L-length noise vector are nonzero at each iteration, giving a smaller noise contribution in comparison with the LMS algorithm, which updates the whole filter. The weight vector covariance in the principal axis coordinate system, that is, in primed coordinates, is related to the covariance of the noise as follows:

cov[v′(n)] = (G_μ μ/4) Λ^{-1} (I − (G_μ μ/2)Λ)^{-1} cov[n′(n)].  (44)

Assuming that (G_μ μ/2)Λ is considerably smaller than I, (44) simplifies to

cov[v′(n)] = (G_μ μ/4) Λ^{-1} cov[n′(n)].  (45)

The covariance of the gradient estimation noise when the sequential PU is used can be expressed as

cov[n(n)] = E[n(n) n^T(n)] = 4 E[e²(n)] R_N.  (46)

In (46), statistical independence of the error and the input vector has been assumed at the minimum point of the error surface, where both signals are orthogonal. According to (32), the comparison of (40) and (45), carried out in terms of the trace of the autocorrelation matrices, confirms that the contribution of the gradient estimation noise is N times weaker for the sequential LMS algorithm than for the LMS. This reduction compensates the eventual increase, expressed in (45), in the covariance of the weight vector in the principal axis coordinate system when the maximum gain in step size G_μ = N is applied in the context of the G_μ-FxSLMS algorithm.

EXPERIMENTAL RESULTS
In order to assess the effectiveness of the G μ -FxSLMS algorithm, the proposed strategy was not only tested by simulation but was also evaluated in a practical DSP-based implementation. In both cases, the results confirmed the expected behavior: the performance of the system in terms of convergence rate and residual error is as good as the performance achieved by the FxLMS algorithm, even while the number of operations per iteration is significantly reduced due to PU.

Computer simulations
This section describes the results achieved by the G_μ-FxSLMS algorithm by means of a computer model developed in MATLAB on the theoretical basis of the previous sections. The model chosen for the computer simulation of the first example corresponds to the 1 × 1 × 1 (1 reference microphone, 1 secondary source, and 1 error microphone) arrangement described in Figure 1(a). The transfer functions of the primary path P(z) and secondary path S(z) are shown in Figures 5(a) and 5(b), respectively. The filter modeling the primary path is a 64th-order FIR filter. The secondary path is modeled by a 4th-order elliptic IIR filter, a high pass filter whose cut-off frequency is imposed by the poor response of the loudspeakers at low frequencies. The offline estimate of the secondary path was carried out by an adaptive FIR filter of 200 coefficients updated by the LMS algorithm, as a classical problem of system identification. Figure 5(c) shows the transfer function of the estimated secondary path. The sampling frequency (8000 samples/s) as well as other parameters were chosen in order to obtain an approximate model of the real implementation. Finally, Figure 5(d) shows the power spectral density of x(n), the reference signal of the undesired disturbance to be canceled, which consists of periodic components plus an additive white Gaussian noise η(n) of zero mean. After convergence has been achieved, the power of the residual error corresponds to the power of the random component of the undesired disturbance. The length of the adaptive filter is 256 coefficients. The simulation was carried out as follows: the step size was set to zero during the first 0.25 seconds; after that, it was set to 0.0001 and the adaptive process started. The value μ = 0.0001 is near the maximum stable step size when a decimation factor N = 1 is chosen.
The performance of the G μ -FxSLMS algorithm was tested for different values of the decimation factor N. Figure 6 shows the gain in step size over the frequency band of interest for different values of the parameter N. The gain in step size at the frequencies 62.5 Hz and 187.5 Hz is marked with two circles on the curves. The exact location of the notches is given by (22). On the basis of the position of the notches in the gain in step size and the spectral distribution of the undesired noise, the decimation factor N = 64 is expected to be critical because, according to Figure 6, the full-strength gain G μ = N = 64 cannot be applied at the frequencies 62.5 Hz and 187.5 Hz; both frequencies correspond exactly to the sinusoidal components of the periodic disturbance. Apart from the case N = 64, the gain in step size is free of notches at both of these frequencies.
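Equation (22) is not reproduced in this excerpt, but every notch location quoted in the text (62.5 Hz and 187.5 Hz for F s = 8000 samples/s and N = 64; 400, 800, and 1200 Hz for F s = 3200 samples/s and N = 4; 500 Hz for F s = 8000 samples/s and N = 8) is consistent with notches at integer multiples of F s /(2N). The helper below is a hypothetical reconstruction under that assumption and should be checked against (22) in the full paper.

```python
def notch_frequencies(fs, N, f_max=None):
    """Hypothetical reconstruction of Eq. (22): candidate notch locations
    in the gain in step size at integer multiples of fs/(2N), below f_max
    (the Nyquist frequency by default)."""
    if f_max is None:
        f_max = fs / 2
    step = fs / (2 * N)
    return [k * step for k in range(1, int(f_max // step) + 1) if k * step < f_max]

print(notch_frequencies(3200, 4))       # the 400/800/1200 Hz notches of example 2
print(notch_frequencies(8000, 64)[:3])  # includes 62.5 Hz and 187.5 Hz
```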
Convergence curves for different values of the decimation factor N are shown in Figure 7. The numbers that appear over the figures correspond to the mean-square error computed over the last 5000 iterations. The residual error is expressed in logarithmic scale as the ratio of the mean-square error and a signal of unitary power. As expected, the convergence rate and residual error are the same in all cases except when N = 64. For this value, the active noise control system diverges. In order to make the system converge when N = 64, it is necessary to decrease the gain in step size to a maximum value of 32 with a subsequent reduction in convergence rate.
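The mechanism being exercised in these convergence tests can be sketched compactly: at iteration n, the sequential PU algorithm updates only the coefficients whose index is congruent to n mod N, with the step size scaled by the gain G μ (equal to N at full strength). This is a minimal LMS-only sketch; the actual system is the filtered-x variant, and the toy plant, signal lengths, and step size below are illustrative assumptions.

```python
import numpy as np

def sequential_pu_lms(x, d, L=8, N=4, mu=0.01, gain=None):
    """Sequential partial-update LMS with gain in step size: each iteration
    updates only the 1/N-th of the weights selected by n mod N, compensating
    the slower update rate with a step size scaled by G_mu (= N here)."""
    if gain is None:
        gain = N                              # full-strength gain G_mu = N
    w = np.zeros(L)
    xbuf = np.zeros(L)
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[n]
        e[n] = d[n] - w @ xbuf                # a-priori error
        idx = np.arange(n % N, L, N)          # coefficient subset for this cycle
        w[idx] += gain * mu * e[n] * xbuf[idx]
    return w, e
```

With white-noise excitation and an 8-tap toy plant, this sketch identifies the plant at essentially the full-update convergence rate, which is the behavior Figure 7 reports for the stable choices of N.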
The second example compares the theoretical gain in step size with the increase obtained by MATLAB simulation. The model of this example corresponds, as in the previous example, to the 1 × 1 × 1 arrangement described in Figure 1. In this example, the reference is a single sinusoidal signal whose frequency was varied in 20 Hz steps from 40 to 1560 Hz. The sampling frequency of the model is 3200 samples/s. The primary and secondary paths, P(z) and S(z), are pure delays of 300 and 40 samples, respectively. The output of the primary path is mixed with additive white Gaussian noise, providing a signal-to-noise ratio of 27 dB. It is assumed that the secondary path has been exactly estimated. In order to provide very accurate results, the increase in step size between every two consecutive simulations searching for the bound is less than 1/5000 of the final value of the step size that ensures convergence. The decimation factor N of this example was set to 4. Figure 8 compares the predicted gain in step size with the achieved results. As expected, the experimental gain in step size is 4, apart from the notches that appear at 400, 800, and 1200 Hz.
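The experimental bound in this example is found by repeated trial runs with slowly increasing step sizes. The sketch below shows the same idea on a toy one-step sinusoidal prediction problem, using bisection instead of the paper's fine linear sweep; the divergence test, filter length, and frequency are illustrative assumptions.

```python
import numpy as np

def diverges(mu, n_iter=4000, L=16, freq=0.1):
    """Run plain LMS on a single sinusoid (one-step prediction as a toy
    target) and report whether the error blows up."""
    n = np.arange(n_iter + L)
    x = np.sin(2 * np.pi * freq * n)
    w = np.zeros(L)
    for k in range(L, n_iter + L):
        xbuf = x[k - L:k][::-1]               # most recent sample first
        e = x[k] - w @ xbuf
        w += mu * e * xbuf
        if not np.isfinite(e) or abs(e) > 1e6:
            return True
    return False

def max_stable_mu(lo=0.0, hi=1.0, tol=1e-4):
    """Bisection for the largest step size that still converges; the paper
    instead sweeps mu in increments below 1/5000 of the final bound."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if diverges(mid):
            hi = mid
        else:
            lo = mid
    return lo
```

Repeating such a search at each reference frequency, with and without partial updates, yields the measured gain-in-step-size curve that Figure 8 compares against the theory.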

Practical implementation
The G μ -FxSLMS algorithm was implemented in a 1 × 2 × 2 active noise control system aimed at attenuating engine noise at the front seats of a Nissan Vanette. Figure 9 shows the physical arrangement of the electroacoustic elements. The adaptive algorithm was developed on a hardware platform based on the DSP TMS320C6701 from Texas Instruments [18]. The length of the adaptive filter (L) for the G μ -FxSLMS algorithm was set to 256 or 512 coefficients (depending on the spectral characteristics of the undesired noise and the degree of attenuation desired), the length of the estimate of the secondary path (L s ) was set to 200 coefficients, and the decimation factor and the gain in step size were N = G μ = 8. The sampling frequency was F s = 8000 samples/s. From the parameters selected, one can derive, according to (22), that the first notch in the gain in step size is located at 500 Hz.
The system effectively cancels the main harmonics of the engine noise. Since the loudspeakers have a low cutoff frequency of 60 Hz, the controller cannot attenuate the components below this frequency. Besides, the ANC system has more difficulty attenuating closely spaced frequency harmonics (see Figure 10(a)). This problem can be avoided by increasing the number of coefficients of the adaptive filter, for instance, from L = 256 to L = 512 (see Figure 10(b)).
In order to carry out a performance comparison of the G μ -FxSLMS algorithm with increasing values of the decimation term N, and consequently of the gain in step size G μ, it is essential to repeat the experiment with the same undesired disturbance. Thus, to avoid inconsistencies in level and frequency, instead of starting the engine, we previously recorded a signal consisting of several harmonics (100, 150, 200, and 250 Hz). An omnidirectional source (Brüel & Kjaer Omnipower 4296) placed inside the van is fed with this signal, so the comparison could be made under the same conditions. The ratio, in logarithmic scale, of the mean-square error to a signal of unitary power that appears over the graphics was calculated by averaging over the last iterations shown. In this case, the length of the adaptive filter was set to 256 coefficients, the length of the estimate of the secondary path (L s ) was set to 200 coefficients, and the decimation factor and the gain in step size were set to N = G μ = 1, 2, 4, and 8. The sampling frequency was F s = 8000 samples/s, and the first notch in the gain in step size appeared at 500 Hz, well above the spectral location of the undesired disturbance. The experimental results shown in Figure 11 indicate that applying the full-strength gain in step size when the decimation factor is 2, 4, or 8 reduces the computational cost without degrading the performance of the system in any sense with respect to the full-update algorithm. Taking into account that the 2-channel ANC system implementing the G μ -FxSLMS algorithm inside the van ignored cross terms, the expressions given in Tables 1 and 2 show that approximately 32%, 48%, and 56% of the high-level multiplications can be saved when the decimation factor N is set to 2, 4, and 8, respectively.
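Tables 1 and 2 are not reproduced in this excerpt, but the quoted savings follow a simple pattern: if a fraction ρ of the per-iteration multiplications belongs to the coefficient-update step, sequential PU (updating 1/N of the weights per iteration) saves ρ(1 − 1/N) of the total. The value ρ = 0.64 is a back-of-envelope fit that reproduces all three quoted figures; it is an inference, not a number taken from the tables.

```python
def pu_savings_percent(N, rho=0.64):
    """Hypothetical check of the quoted savings: with a fraction rho of the
    multiplications in the weight-update step, sequential PU saves
    rho * (1 - 1/N) of the total. rho = 0.64 (assumed, fitted) reproduces
    the 32%, 48%, and 56% figures for N = 2, 4, and 8."""
    return 100.0 * rho * (1.0 - 1.0 / N)

for N in (2, 4, 8):
    print(N, round(pu_savings_percent(N)))   # 2 32, 4 48, 8 56
```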
Although reductions in the number of operations are an indication of the computational efficiency of an algorithm, such reductions may not translate directly into a more efficient real-time DSP-based implementation on a hardware platform. To gauge such issues accurately, one must consider the freedoms and constraints that the platform imposes on the real implementation, such as parallel operations, addressing modes, available registers, or the number of arithmetic units. In our case, the control strategy and the assembler code were developed to take full advantage of these aspects [5].

CONCLUSIONS
This work presents a contribution to the selection of the step size used in the sequential partial update LMS and FxLMS adaptive algorithms. The deterministic periodic input signal case is studied, and it is verified that under certain conditions the stability range of the step size is increased compared to the full update LMS and FxLMS. The algorithm proposed here, the filtered-x sequential LMS with gain in step size (G μ -FxSLMS), is based on sequential PU of the coefficients of a filter and on a controlled increase in the step size of the adaptive algorithm. It can be used in active noise control systems focused on the attenuation of periodic disturbances to reduce the computational costs of the control system. It is theoretically and experimentally proved that the reduction of the computational complexity is not achieved at the expense of slowing down the convergence rate or of increasing the residual error.
The only condition that must be satisfied to take full advantage of the algorithm is that certain frequencies must be avoided. These problematic frequencies correspond to notches that appear in the gain in step size; their width and exact location depend on the system parameters.
Simulations and experimental results confirm the benefits of this strategy when it is applied in an active noise control system to attenuate periodic noise.

ACKNOWLEDGMENT
This work was partially supported by the CICYT of the Spanish Government under Grant TIN2005-08660-C04-01.