
An efficient normalized LMS algorithm

  • Original Paper
  • Published in: Nonlinear Dynamics

Abstract

The task of adaptive estimation in random and highly nonlinear environments, such as wireless channel estimation and identification of non-stationary systems, has always been challenging. The least mean square (LMS) algorithm is the most popular algorithm for adaptive estimation; it belongs to the gradient family and thus inherits both its low computational complexity and its slow convergence. To deal with this issue, an efficient normalization of the LMS algorithm is proposed in this work, achieved by normalizing the input signal with an intelligent mixture of weighted signal and error powers, which results in a variable step-size type algorithm. The proposed normalization scheme provides significantly faster convergence in the initial adaptation phase while maintaining a lower steady-state mean-square error than the conventional normalized LMS (NLMS) algorithm. The proposed algorithm is tested on adaptive denoising of signals, estimation of an unknown channel, and tracking of a random-walk channel, and its performance is compared with that of the standard LMS and NLMS algorithms. The mean and mean-square performance of the proposed algorithm is investigated in both stationary and non-stationary environments. We derive closed-form expressions for various performance measures by evaluating multi-dimensional moments; this is done by statistically characterizing the required random variables using the approach of indefinite quadratic forms. Simulation and experimental results are presented to corroborate our theoretical claims.
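As a concrete illustration of the normalization idea, the sketch below contrasts the conventional \(\epsilon\)-NLMS update with a variable-normalization variant whose denominator mixes input power with a smoothed error power. The mixing rule, the FIR channel, and all parameter values are illustrative assumptions for a system-identification toy problem, not the paper's exact update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative system-identification setup: the unknown FIR channel,
# input statistics, and all parameter values below are assumptions.
w_true = np.array([0.8, -0.4, 0.2, 0.1])
M = len(w_true)
N = 5000
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

def nlms(x, d, M, mu=0.5, eps=1e-6):
    """Conventional epsilon-NLMS: step size normalized by input power."""
    w = np.zeros(M)
    for i in range(M - 1, len(x)):
        u = x[i - M + 1:i + 1][::-1]   # regressor, most recent sample first
        e = d[i] - w @ u               # a priori estimation error
        w += mu * e * u / (eps + u @ u)
    return w

def vn_lms(x, d, M, mu=0.5, alpha=0.9, eps=1e-6):
    """Variable-normalization LMS (illustrative): the denominator mixes
    input power with a smoothed error power.  This convex mixture is a
    sketch of the idea, not the paper's exact update rule."""
    w = np.zeros(M)
    pe = 0.0                           # smoothed error power
    for i in range(M - 1, len(x)):
        u = x[i - M + 1:i + 1][::-1]
        e = d[i] - w @ u
        pe = alpha * pe + (1 - alpha) * e * e
        w += mu * e * u / (eps + alpha * (u @ u) + (1 - alpha) * pe)
    return w

w_nlms = nlms(x, d, M)
w_vn = vn_lms(x, d, M)
print(np.max(np.abs(w_nlms - w_true)), np.max(np.abs(w_vn - w_true)))
```

Both variants recover the channel taps here; the error-power term in the denominator is what makes the effective step size vary over the adaptation.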


Data availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Notes

  1. \(\varvec{\lambda }=\text{ diag }(\varvec{\varLambda }_x)\) is a vector containing diagonal entries of \(\varvec{\varLambda }_x\)

  2. \(\mathbf{1}\) is an M-dimensional vector with all entries equal to 1

  3. This is because all the entries of \(\mathbf{e}\) are functions of past input vectors (i.e., \(\{\mathbf{x}_k\}\), \(k<i\)) and all \(\{\mathbf{x}_i\}\) are independent

  4. It can be seen from (37) that \(d_m=(1-\alpha )\) for \(1\le m \le M\) and \(d_m=\alpha \) for \(M+1\le m \le 2M\).

  5. The eigenvalue spread of a matrix is defined as the ratio of maximum to minimum of its eigenvalues, \(\eta =\frac{\lambda _{\text{ max }}}{\lambda _{\text{ min }}}\)

References

  1. Haykin, S.: Adaptive Filter Theory, 3rd edn. Prentice-Hall, Upper Saddle River (1996)

  2. Sayed, A.H.: Adaptive Filters. John Wiley & Sons, New York (2008)

  3. Widrow, B., McCool, J.M., Larimore, M.G., Johnson, C.R.: Stationary and nonstationary learning characteristics of the LMS adaptive filter. Proc. IEEE 64, 1151–1162 (1976)

  4. Kwong, R.H., Johnston, E.W.: A variable step size LMS algorithm. IEEE Trans. Signal Process. 40, 1633–1642 (1992)

  5. Harris, R.W., Chabries, D.M., Bishop, F.A.: A variable step size (VS) algorithm. IEEE Trans. Acoust. Speech Signal Process. 34, 499–510 (1986)

  6. Shan, T.J., Kailath, T.: Adaptive algorithms with an automatic gain control feature. IEEE Trans. Acoust. Speech Signal Process. 35, 122–127 (1988)

  7. Mathews, V.J., Xie, Z.: A stochastic gradient adaptive filter with gradient adaptive step size. IEEE Trans. Signal Process. 41, 2075–2087 (1993)

  8. Aboulnasr, T., Mayyas, K.: A robust variable step-size LMS type algorithm: analysis and simulation. IEEE Trans. Signal Process. 45, 631–639 (1997)

  9. Sulyman, A.I., Zerguine, A.: Convergence and steady-state analysis of a variable step-size NLMS algorithm. Signal Process. 83, 1255–1273 (2003)

  10. Costa, M., Bermudez, J.: A robust variable step size algorithm for LMS adaptive filters. In: IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP, vol. 3 (2006)

  11. Costa, M., Bermudez, J.: A noise resilient variable step-size LMS algorithm. Signal Process. 88(3), 733–748 (2008)

  12. Zhao, S., Man, Z., Khoo, S., Wu, H.: Variable step-size LMS algorithm with a quotient form. Signal Process. 89, 67–76 (2008)

  13. Asad, S.M., Moinuddin, M., Zerguine, A.: On the convergence of a variable step-size LMF algorithm for quotient form. In: IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2010, Dallas, Texas, USA, March 14–19 (2010)

  14. Nagumo, J.I., Noda, A.: A learning method for system identification. IEEE Trans. Autom. Control 12, 282–287 (1967)

  15. Zerguine, A., Chan, M.K., Al-Naffouri, T.Y., Moinuddin, M., Cowan, C.F.N.: Convergence and tracking analysis of a variable normalized LMF (XE-NLMF) algorithm. Signal Process. 89(5), 778–790 (2008)

  16. Walach, E., Widrow, B.: The least mean fourth (LMF) adaptive algorithm and its family. IEEE Trans. Inf. Theory IT-30, 275–283 (1984)

  17. Chan, M.K., Cowan, C.F.N.: Using a normalised LMF algorithm for channel equalisation with co-channel interference. In: EUSIPCO 2002, pp. 48–51 (2002)

  18. Bershad, N.J.: Behavior of the \(\epsilon\)-normalized LMS algorithm with Gaussian inputs. IEEE Trans. Acoust. Speech Signal Process. ASSP-35(5), 636–644 (1987)

  19. Huang, H.-C., Lee, J.: A new variable step-size NLMS algorithm and its performance analysis. IEEE Trans. Signal Process. 60(4), 2055–2060 (2012)

  20. Al-Naffouri, T.Y., Moinuddin, M.: Exact performance analysis of the \(\epsilon\)-NLMS algorithm for colored circular Gaussian inputs. IEEE Trans. Signal Process. 58(10), 5080–5090 (2010)

  21. Al-Naffouri, T.Y., Sayed, A.H.: Transient analysis of adaptive filters with error nonlinearities. IEEE Trans. Signal Process. 51(3), 653–663 (2003)

  22. Gradshteyn, I.S., Ryzhik, I.M.: Table of Integrals, Series, and Products, corrected and enlarged edn. Academic Press, New York (1980)

  23. Hosseini, K., Montazeri, A., Alikhanian, H., Kahaei, M.H.: New classes of LMS and LMF adaptive algorithms. In: 2008 3rd International Conference on Information and Communication Technologies: From Theory to Applications, pp. 1–5 (2008)

  24. Belazi, A., Abd El-Latif, A.A.: A simple yet efficient S-box method based on chaotic sine map. OPTIK 130, 1438–1444 (2017)

  25. Rogers, T.D., Whitley, D.C.: Chaos in the cubic mapping. Math. Model. 4, 9–25 (1983)

  26. Ricker, W.E.: Stock and recruitment. J. Fish. Res. Board Can. 11, 559–623 (1954)

  27. Al-Naffouri, T.Y., Moinuddin, M., Ajeeb, N., Hassibi, B., Moustakas, A.L.: On the distribution of indefinite quadratic forms in Gaussian random variables. IEEE Trans. Commun. 64(1), 153–165 (2016)

Acknowledgements

The authors would like to acknowledge the support provided by the Deanship of Research Oversight and Coordination (DROC) at King Fahd University of Petroleum & Minerals (KFUPM) for funding this work through Project No. SB191044.

Funding

The authors have not disclosed any funding.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Azzedine Zerguine.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Evaluation of integral (40)

In this appendix, we evaluate the integral in (40). Since \(\mathbf{u} \sim {\mathcal {C}}{\mathcal {N}}~(\mathbf{0},\varvec{\varLambda })\), the pdf \(p(\mathbf{u})\) is given by

$$\begin{aligned} p(\mathbf{u})=\frac{1}{\pi ^{2M} |\varvec{\varLambda }|}e^{-\mathbf{u}\varvec{\varLambda }^{-1}{} \mathbf{u}^H}, \end{aligned}$$
(73)

We substitute the expression for \(p(\mathbf{u})\) in (40). In addition, we employ the following integral representation of the unit step function [20]

$$\begin{aligned} {\tilde{u}}(x)=\frac{1}{2\pi }\int _{-\infty }^{\infty } \frac{e^{x(j\omega +\beta )}}{(j\omega +\beta )} d\omega . \end{aligned}$$
(74)

As a result, the CDF of \(Y_{k{\bar{k}}}\) will take the form

$$\begin{aligned}&F_{Y_{k{\bar{k}}}}(y)=\frac{1}{2\pi ^{2M+1}|\varvec{\varLambda }|}\int _{-\infty }^{\infty } e^{-\mathbf{u}\varvec{\varLambda }^{-1}{} \mathbf{u}^H} \int _{-\infty }^{\infty }\nonumber \\&\quad \frac{e^{(\epsilon y+ y ||\mathbf{u}||_{\mathbf{D}}^2-||\mathbf{u}||_{\mathbf{D}_{k{\bar{k}}}}^2)(j\omega +\beta )}}{(j\omega +\beta )}~d\omega ~d\mathbf{u}. \end{aligned}$$
(75)

By collecting similar terms, the above CDF can be rearranged as

$$\begin{aligned} F_{Y_{k{\bar{k}}}}(y)= & {} \frac{1}{2\pi ^{2M+1}|\varvec{\varLambda }|}\nonumber \\&\int _{-\infty }^{\infty }\int _{-\infty }^{\infty } e^{-\mathbf{u}\big (\varvec{\varLambda }^{-1}-(y\mathbf{D}-\mathbf{D}_{k{\bar{k}}})(j\omega +\beta )\big )\mathbf{u}^H}\nonumber \\&\quad \times d\mathbf{u} ~\frac{e^{\epsilon y(j\omega +\beta )}}{(j\omega +\beta )}~d\omega ,\nonumber \\ \end{aligned}$$
(76)

In the above equation, the inner integral is a complex Gaussian integral (see [20, 27] for a formal proof), whose solution is

$$\begin{aligned}&\frac{1}{\pi ^{2M}}\int e^{-\mathbf{u}\big (\varvec{\varLambda }^{-1}-(y\mathbf{D}-\mathbf{D}_{k{\bar{k}}})(j\omega +\beta )\big )\mathbf{u}^H} d\mathbf{u}\nonumber \\&\quad =\frac{1}{\Big |\varvec{\varLambda }^{-1}-(y\mathbf{D}-\mathbf{D}_{k{\bar{k}}})(j\omega +\beta )\Big |}, \end{aligned}$$
(77)

As a result, the CDF of \(Y_{k{\bar{k}}}\) is reduced to the following one-dimensional complex integral

$$\begin{aligned}&F_{Y_{k{\bar{k}}}}(y)=\frac{1}{2\pi |\varvec{\varLambda }|}\int _{-\infty }^{\infty }\nonumber \\&\quad \frac{e^{\epsilon y(j\omega +\beta )}}{\Big |\varvec{\varLambda }^{-1}-(y\mathbf{D}-\mathbf{D}_{k{\bar{k}}})(j\omega +\beta )\Big |(j\omega +\beta )}~d\omega . \end{aligned}$$
(78)

The above integral can be solved using the approach of [20] which will finally result in (60).
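The Gaussian-integral identity in (77) can be checked numerically. The sketch below verifies \(\frac{1}{\pi ^{n}}\int e^{-\mathbf{u}\mathbf{A}\mathbf{u}^H}d\mathbf{u}=1/|\mathbf{A}|\) for an illustrative small positive-definite matrix, by importance sampling against a standard complex Gaussian density; the dimension, matrix, and sample size are arbitrary choices for the check.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 2                                  # integral is over C^n
A = np.diag([1.5, 2.5])                # illustrative Hermitian positive-definite matrix

# Draw u ~ CN(0, I): each complex entry has unit variance (1/2 per real part).
N = 200_000
u = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)

# Importance sampling against the CN(0, I) density (1/pi^n) exp(-||u||^2):
#   E[ exp(-u (A - I) u^H) ] = (1/pi^n) \int exp(-u A u^H) du = 1 / det(A).
quad = np.einsum('ij,jk,ik->i', u.conj(), A - np.eye(n), u).real
estimate = np.mean(np.exp(-quad))

print(estimate, 1.0 / np.linalg.det(A))
```

The two printed values agree to within Monte Carlo error, which is the content of the determinant formula used to collapse the inner integral in (76).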

Appendix B: Evaluation of integral (50)

In this appendix, we provide the evaluation of integral (50). First, we employ the variable transformation \(t=\frac{y}{(1-y)}\) in (50) which results in

$$\begin{aligned} E[Y_k]=\int _0^{\infty }\frac{\lambda _k^{2M}e^{\frac{-\epsilon t}{\lambda _k(1-\alpha )}}}{|\varvec{\varLambda }|(1+t)^2 \Pi _{m=1,m\ne k}^{2M}\left( \rho _{km}+\frac{td_m}{(1-\alpha )}\right) }dt \end{aligned}$$
(79)

Now, expanding the above fraction using partial fractions, we get

$$\begin{aligned}&\frac{\lambda _k^{2M}}{|\varvec{\varLambda }|(1+t)^2 \Pi _{m=1,m\ne k}^{2M}\left( \rho _{km}+\frac{td_m}{(1-\alpha )}\right) }= \end{aligned}$$
(80)
$$\begin{aligned}&\frac{a_1}{(1+t)}+\frac{a_2}{(1+t)^2}+\sum _{m=1,m\ne k}^{2M}\frac{{\tilde{a}}_m}{\left( \rho _{km}+\frac{td_m}{(1-\alpha )}\right) }. \end{aligned}$$
(81)

where the constants \(a_1\), \(a_2\), and \({\tilde{a}}_m\) are found to be

$$\begin{aligned} a_1= & {} -\frac{a_2\left[ \frac{d}{dv}\prod _{m=1,m \ne k}^{2M}\left( \frac{\rho _{km}(1-\alpha )}{d_m}+v\right) \right] _{v=-1}\lambda _k^{2M}(1-\alpha )^{2M-1}}{|\varvec{\varLambda }| \prod _{l=1, l \ne k}^{2M}(d_l)\prod _{m=1,m \ne k}^{2M}\left( \frac{\rho _{km}(1-\alpha )}{d_m}-1\right) }\end{aligned}$$
(82)
$$\begin{aligned} a_2= & {} \frac{\lambda _k^{2M}(1-\alpha )^{2M-1}}{|\varvec{\varLambda }| \prod _{l=1, l \ne k}^{2M}(d_l)\prod _{m=1,m \ne k}^{2M}\left( \frac{\rho _{km}(1-\alpha )}{d_m}-1\right) }\end{aligned}$$
(83)
$$\begin{aligned} {\tilde{a}}_m= & {} \frac{\lambda _k^{2M}(1-\alpha )^{2M-1}}{|\varvec{\varLambda }| \prod _{l=1, l \ne k}^{2M}(d_l)(1-\frac{\rho _{km}(1-\alpha )}{d_m})^2\prod _{l=1,l \ne k, m}^{2M}\left( \frac{\rho _{kl}(1-\alpha )}{d_l}-\frac{\rho _{km}(1-\alpha )}{d_m}\right) } \end{aligned}$$
(84)
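The decomposition (80)–(81) can be sanity-checked symbolically on a small numeric instance; the factor values below are arbitrary stand-ins for \(\rho _{km}\) and \(d_m/(1-\alpha )\), not values from the analysis.

```python
import sympy as sp

t = sp.symbols('t')

# Illustrative instance of the rational function in (80): a double pole at
# t = -1 times two simple linear factors (rho_km + t d_m/(1-alpha)).
expr = 1 / ((1 + t)**2 * (2 + 3*t) * (5 + 7*t))

decomp = sp.apart(expr, t)
print(decomp)

# The partial-fraction form must recombine to the original function,
# mirroring how (81) reassembles into (80).
assert sp.simplify(decomp - expr) == 0
```

Each resulting term integrates in closed form against the exponential in (79), which is why the coefficients \(a_1\), \(a_2\), \({\tilde{a}}_m\) suffice to evaluate \(E[Y_k]\).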

Appendix C: Evaluation of \(E[\alpha _{\infty }]\)

The aim of this appendix is to evaluate the moment \(E[\alpha _i]\) at steady state (i.e., as \(i \rightarrow \infty \)). From relation (69), it can easily be deduced that the conditional moment is

$$\begin{aligned}&E[\alpha _{\infty }|\psi _{\infty }]\nonumber \\&\quad =\text{ erf }(\psi _{\infty })\text{ Pr }(\psi _{\infty }>1)+\psi _{\infty }(1-\text{ Pr }(\psi _{\infty }>1)) \nonumber \\ \end{aligned}$$
(85)

where \(\text{ Pr }(\psi _{\infty }>1)\) is the probability that the random variable \(\psi _{\infty }\) exceeds unity. To evaluate this probability, consider the recursion (68) at \(i \rightarrow \infty \) which gives

$$\begin{aligned} \psi _{\infty }= \frac{\gamma |e_{\infty }|^2}{(1-\nu )}=\frac{\gamma |e_a(\infty )+v_{\infty }|^2}{(1-\nu )} \end{aligned}$$
(86)

Assumption A4 allows us to model the error quantity \(e_a(\infty )\) as complex zero-mean Gaussian with variance \(\zeta \), so that \(|e_{\infty }|^2\) is a central chi-square random variable with 2 degrees of freedom, i.e., exponentially distributed. Thus, we can evaluate the required probability \(\text{ Pr }(\psi _{\infty }>1)\) as follows

$$\begin{aligned} \text{ Pr }(\psi _{\infty }>1)= & {} 1-\text{ Pr }\left( \frac{\gamma |e_{\infty }|^2}{(1-\nu )}<1\right) \nonumber \\= & {} 1-\text{ Pr }\left( |e_{\infty }|^2<\frac{(1-\nu )}{\gamma }\right) \nonumber \\= & {} e^{-\frac{(1-\nu )}{\gamma (\zeta +\sigma _v^2)}} \end{aligned}$$
(87)

Next, to remove the condition on \(\psi _{\infty }\) in (85), we evaluate the moment \(E[\psi _{\infty }]\) as follows

$$\begin{aligned} E[\psi _{\infty }]= & {} \frac{\gamma }{(1-\nu )}E[|e_{\infty }|^2]=\frac{\gamma }{(1-\nu )}(\zeta +\sigma _v^2) \end{aligned}$$
(88)

Now, the moment \(E[\text{ erf }(\psi _{\infty })]\) can be evaluated as

$$\begin{aligned} E[\text{ erf }(\psi _{\infty })]= & {} \int _0^{\infty }\text{ erf }(\psi _{\infty }) f_{\psi }(\psi _{\infty })d\psi \end{aligned}$$
(89)

where \(f_{\psi }(\psi _{\infty })\) is the pdf of \(\psi _{\infty }\), which can be obtained via a variable transformation using relation (86) and is given by

$$\begin{aligned} f_{\psi }(\psi _{\infty })= \frac{(1-\nu )}{\gamma (\zeta +\sigma _v^2)}e^{-\frac{\psi _{\infty }(1-\nu )}{\gamma (\zeta +\sigma _v^2)}} \end{aligned}$$
(90)

Hence, the moment \(E[\text{ erf }(\psi _{\infty })]\) results in

$$\begin{aligned} E[\text{ erf }(\psi _{\infty })]= & {} \frac{(1-\nu )}{\gamma (\zeta +\sigma _v^2)} \int _0^{\infty }\text{ erf }(\psi _{\infty }) e^{-\frac{\psi _{\infty }(1-\nu )}{\gamma (\zeta +\sigma _v^2)}}d\psi \nonumber \\= & {} \left[ 1-\text{ erf }\left( \frac{(1-\nu )}{2\gamma (\zeta +\sigma _v^2)}\right) \right] e^{\frac{(1-\nu )^2}{4\gamma ^2(\zeta +\sigma _v^2)^2}} \nonumber \\ \end{aligned}$$
(91)

where we have used the integral solution 6.282-1 in [22]. Finally, by using \(E[\psi _{\infty }]\) and \(E[\text{ erf }(\psi _{\infty })]\), the unconditional moment \(E[\alpha _{\infty }]\) is given by

$$\begin{aligned} E[\alpha _{\infty }]= & {} \left[ 1-\text{ erf }\left( \frac{(1-\nu )}{2\gamma (\zeta +\sigma _v^2)}\right) \right] \nonumber \\&e^{-\frac{(1-\nu )}{\gamma (\zeta +\sigma _v^2)}\left[ 1-\frac{(1-\nu )}{4\gamma (\zeta +\sigma _v^2)}\right] }\nonumber \\&+\frac{\gamma }{(1-\nu )}(\zeta +\sigma _v^2)\left[ 1-e^{-\frac{(1-\nu )}{\gamma (\zeta +\sigma _v^2)}}\right] \end{aligned}$$
(92)
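The closed form (91) hinges on the identity \(\int _0^{\infty }\text{erf}(x)\,\lambda e^{-\lambda x}dx = e^{\lambda ^2/4}\,\text{erfc}(\lambda /2)\) (entry 6.282-1 of [22]). A quick numerical check, with an arbitrary rate \(\lambda \) standing in for \((1-\nu )/(\gamma (\zeta +\sigma _v^2))\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf, erfc

# Rate of the exponential density of psi_infty; 1.7 is an arbitrary
# positive value chosen only to exercise the identity.
lam = 1.7

numeric, _ = quad(lambda p: erf(p) * lam * np.exp(-lam * p), 0, np.inf)
closed_form = np.exp(lam**2 / 4) * erfc(lam / 2)   # right-hand side of (91)

print(numeric, closed_form)
```

The quadrature and the closed form agree to machine precision, confirming the expression for \(E[\text{erf}(\psi _{\infty })]\) used in (92).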

Appendix D: Evaluation of \(E[\alpha ^2_{\infty }]\)

In this appendix, we derive the moment \(E[\alpha ^2_{\infty }]\). To do so, we start with the following conditional moment

$$\begin{aligned}&E[\alpha _{\infty }^2|\psi _{\infty }, \psi ^2_{\infty }]\nonumber \\&\quad =\text{ erf}^2(\psi _{\infty })\text{ Pr }(\psi _{\infty }>1)+\psi _{\infty }^2(1-\text{ Pr }(\psi _{\infty }>1)) \nonumber \\ \end{aligned}$$
(93)

The probability \(\text{ Pr }(\psi _{\infty }>1)\) is derived in (87). Next, to remove the conditions on \(\psi _{\infty }\) and \(\psi ^2_{\infty }\), we evaluate the moments \(E[\text{ erf}^2(\psi _{\infty })]\) and \(E[\psi ^2_{\infty }]\). To do so, we use relations (68) and (86) to obtain

$$\begin{aligned} \psi ^2_{\infty }=\frac{\gamma ^2(1+\nu )}{(1-\nu )(1-\gamma ^2)}|e_{\infty }|^4 \end{aligned}$$
(94)

Thus, it can be shown that

$$\begin{aligned} E[\psi ^2_{\infty }]=\frac{\gamma ^2(1+\nu )}{(1-\nu )(1-\gamma ^2)}(\zeta +\sigma _v^2)^2\Gamma (3) \end{aligned}$$
(95)

where \(\Gamma (\cdot )\) denotes the Gamma function. Next, to evaluate the moment \(E[\text{ erf}^2(\psi _{\infty })]\), we use the approximation \(E[\text{ erf}^2(\psi _{\infty })]\approx (E[\text{ erf }(\psi _{\infty })])^2\). Hence, the unconditional moment is finally

$$\begin{aligned} E[\alpha _{\infty }^2]= & {} \left[ 1-\text{ erf }\left( \frac{(1-\nu )}{2\gamma (\zeta +\sigma _v^2)}\right) \right] ^2e^{-\frac{(1-\nu )}{\gamma (\zeta +\sigma _v^2)}\left[ 1-\frac{(1-\nu )}{2\gamma (\zeta +\sigma _v^2)}\right] }\nonumber \\&+\frac{\gamma ^2(1+\nu )}{(1-\nu )(1-\gamma ^2)}(\zeta +\sigma _v^2)^2\Gamma (3)\left[ 1-e^{-\frac{(1-\nu )}{\gamma (\zeta +\sigma _v^2)}}\right] \nonumber \\ \end{aligned}$$
(96)
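The moment \(E[\psi ^2_{\infty }]\) in (95) relies on \(|e_{\infty }|^2\) being exponentially distributed, so that \(E[|e_{\infty }|^4]=\Gamma (3)(\zeta +\sigma _v^2)^2=2(\zeta +\sigma _v^2)^2\). A Monte Carlo check, with an arbitrary total variance standing in for \(\zeta +\sigma _v^2\):

```python
import numpy as np

rng = np.random.default_rng(2)

# e_infty modeled as complex zero-mean Gaussian; |e|^2 is then exponential,
# so its second moment is Gamma(3) * mean^2 = 2 * mean^2.
sigma2 = 1.3                           # arbitrary stand-in for zeta + sigma_v^2
N = 1_000_000
e = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

m2 = np.mean(np.abs(e)**2)             # ~ sigma2
m4 = np.mean(np.abs(e)**4)             # ~ 2 * sigma2**2

print(m2, m4)
```

The sampled fourth moment matches \(2(\zeta +\sigma _v^2)^2\), the \(\Gamma (3)\) factor appearing in (95) and carried into (96).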

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zerguine, A., Ahmad, J., Moinuddin, M. et al. An efficient normalized LMS algorithm. Nonlinear Dyn 110, 3561–3579 (2022). https://doi.org/10.1007/s11071-022-07773-0
