Abstract
Adaptive estimation in random and highly nonlinear environments, such as wireless channel estimation and identification of non-stationary systems, has always been challenging. The least mean square (LMS) algorithm is the most popular algorithm for adaptive estimation; it belongs to the gradient family, thus inheriting both the low computational complexity and the slow convergence of that family. To deal with this issue, an efficient normalization of the LMS algorithm is proposed in this work. The normalization is achieved by scaling the input signal with an intelligent mixture of weighted signal and error powers, which results in a variable step-size type algorithm. The proposed scheme provides significantly faster convergence in the initial adaptation phase while maintaining a lower steady-state mean-square error compared to the conventional normalized LMS (NLMS) algorithm. The proposed algorithm is tested on adaptive denoising of signals, estimation of an unknown channel, and tracking of a random-walk channel, and its performance is compared with that of the standard LMS and NLMS algorithms. The mean and mean-square performance of the proposed algorithm is investigated in both stationary and non-stationary environments. We derive closed-form expressions for various performance measures by evaluating multi-dimensional moments; this requires the statistical characterization of the underlying random variables, which we carry out using the approach of indefinite quadratic forms. Simulation and experimental results are presented to corroborate our theoretical claims.
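The exact recursion of the proposed algorithm is not reproduced in this excerpt; as a rough illustration of the idea described above, the following sketch shows an NLMS-style update whose normalization mixes weighted input and error powers (the mixing weight `alpha`, step size `mu`, and regularizer `eps` are assumed names, not the paper's notation):

```python
import numpy as np

def mixed_norm_lms(x, d, M, mu=0.5, alpha=0.9, eps=1e-6):
    """NLMS-style update normalized by a weighted mixture of input and
    error powers (illustrative sketch, not the paper's exact recursion)."""
    N = len(x)
    w = np.zeros(M)
    e = np.zeros(N)
    for i in range(M - 1, N):
        u = x[i - M + 1 : i + 1][::-1]   # regressor [x_i, x_{i-1}, ..., x_{i-M+1}]
        e[i] = d[i] - w @ u              # a priori estimation error
        # mixture of weighted signal and error powers in the normalization
        norm = alpha * (u @ u) + (1 - alpha) * e[i] ** 2 + eps
        w = w + (mu / norm) * e[i] * u   # variable step-size update
    return w, e

# Toy system identification: recover a known 4-tap channel.
rng = np.random.default_rng(0)
h = np.array([0.8, -0.4, 0.2, 0.1])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
w, _ = mixed_norm_lms(x, d, M=4)
print(np.round(w, 2))
```

Early in adaptation the error power dominates the denominator, yielding a cautious step; near convergence the input power dominates and the update behaves like conventional NLMS.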
Data availability
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Notes
\(\varvec{\lambda }=\text{ diag }(\varvec{\varLambda }_x)\) is a vector containing diagonal entries of \(\varvec{\varLambda }_x\)
\(\mathbf{1}\) is an M-dimensional vector with all entries equal to 1
This is because all the entries of \(\mathbf{e}\) are functions of past input vectors (i.e., \(\{\mathbf{x}_k\}\), \(k<i\)) and all \(\{\mathbf{x}_i\}\) are independent
It can be seen from (37) that \(d_m=(1-\alpha )\) for \(1\le m \le M\) and \(d_m=\alpha \) for \(M+1\le m \le 2M\).
The eigenvalue spread of a matrix is defined as the ratio of maximum to minimum of its eigenvalues, \(\eta =\frac{\lambda _{\text{ max }}}{\lambda _{\text{ min }}}\)
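The eigenvalue spread of the input autocorrelation matrix is what governs the convergence speed of gradient-family algorithms; a quick numerical illustration with a hypothetical first-order autoregressive input (correlation \(\rho \), so \(R_{mn}=\rho ^{|m-n|}\)) shows how a correlated input produces a large spread:

```python
import numpy as np

# Autocorrelation matrix of a hypothetical AR(1) input with correlation rho:
# R[m, n] = rho**|m - n| (illustrative example, not taken from the paper).
M, rho = 8, 0.9
idx = np.arange(M)
R = rho ** np.abs(np.subtract.outer(idx, idx))
eigs = np.linalg.eigvalsh(R)
spread = eigs.max() / eigs.min()   # eta = lambda_max / lambda_min
print(f"eigenvalue spread eta = {spread:.1f}")
```

For white input (\(\rho =0\)) the spread is 1 and LMS converges fastest; as \(\rho \rightarrow 1\) the spread grows and convergence slows.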
References
Haykin, S.: Adaptive Filter Theory, 3rd edn. Prentice-Hall, Upper-Saddle River (1996)
Sayed, A.H.: Adaptive Filters. John Wiley & Sons, New York (2008)
Widrow, B., McCool, J.M., Larimore, M.G., Johnson, C.R.: Stationary and nonstationary learning characteristics of the LMS adaptive filter. Proceedings of the IEEE 64, 1151–1162 (1976)
Kwong, R.H., Johnston, E.W.: A variable step size LMS algorithm. IEEE Trans. Signal Process. 40, 1633–1642 (1992)
Harris, R.W., Chabries, D.M., Bishop, F.A.: A variable step size (VS) algorithm. IEEE Trans. Acoust. Speech Signal Process. 34, 499–510 (1986)
Shan, T.J., Kailath, T.: Adaptive algorithms with an automatic gain control feature. IEEE Trans. Acoust. Speech Signal Process. 35, 122–127 (1988)
Mathews, V.J., Xie, Z.: A stochastic gradient adaptive filter with gradient adaptive step size. IEEE Trans. Signal Process. 41, 2075–2087 (1993)
Aboulnasr, T., Mayyas, K.: A robust variable step-size LMS type algorithm: analysis and simulation. IEEE Trans. Signal Process. 45, 631–639 (1997)
Sulyman, A.I., Zerguine, A.: Convergence and steady-state analysis of a variable step-size NLMS algorithm. Signal Process. 83, 1255–1273 (2003)
Costa, M., Bermudez, J.: A robust variable step size algorithm for LMS adaptive filters. In: IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP, vol. 3 (2006)
Costa, M., Bermudez, J.: A noise resilient variable step-size LMS algorithm. Signal Process. 88(3), 733–748 (2008)
Zhao, S., Man, Z., Khoo, S., Wu, H.: Variable step-size LMS algorithm with a quotient form. Signal Process. 89, 67–76 (2008)
Asad, S.M., Moinuddin, M., Zerguine, A.: On the convergence of a variable step-size LMF algorithm for quotient form. In: IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2010, Dallas, Texas, USA, March 14–19 (2010)
Nagumo, J.I., Noda, A.: A learning method for system identification. IEEE Trans. Autom. Control 12, 282–287 (1967)
Zerguine, A., Chan, M.K., Al-Naffouri, T.Y., Moinuddin, M., Cowan, C.F.N.: Convergence and tracking analysis of a variable normalized LMF (XE-NLMF) algorithm. Signal Process. 89(5), 778–790 (2008)
Walach, E., Widrow, B.: The least mean fourth (LMF) adaptive algorithm and its family. IEEE Trans. Inf. Theory IT–30, 275–283 (1984)
Chan, M.K., Cowan, C.F.N.: Using a normalised LMF algorithm for channel equalisation with co-channel interference. In: EUSIPCO 2002, pp. 48–51 (2002)
Bershad, N.J.: Behavior of the \(\epsilon \)-normalized LMS algorithm with Gaussian inputs. IEEE Trans. Acoust. Speech Signal Process. ASSP–35(5), 636–644 (1987)
Huang, H.-C., Lee, J.: A new variable step-size NLMS algorithm and its performance analysis. IEEE Trans. Signal Process. 60(4), 2055–2060 (2012)
Al-Naffouri, T.Y., Moinuddin, M.: Exact performance analysis of the \(\epsilon \)-NLMS algorithm for colored circular Gaussian inputs. IEEE Trans. Signal Process. 58(10), 5080–5090 (2010)
Al-Naffouri, T.Y., Sayed, A.H.: Transient analysis of adaptive filters with error nonlinearities. IEEE Trans. Signal Process. 51(3), 653–663 (2003)
Gradshteyn, I.S., Ryzhik, I.M.: Table of Integrals, Series, and Products, Corrected and Enlarged Edition. Academic Press, New York (1980)
Hosseini, K., Montazeri, A., Alikhanian, H., Kahaei, M.H.: New classes of LMS and LMF adaptive algorithms. In: 2008 3rd International Conference on Information and Communication Technologies: From Theory to Applications, pp. 1–5 (2008)
Belazi, A., Abd El-Latif, A.A.: A simple yet efficient S-box method based on chaotic sine map. OPTIK 130, 1438–1444 (2017)
Rogers, T.D., Whitley, D.C.: Chaos in the cubic mapping. Math. Model. 4, 9–25 (1983)
Ricker, W.E.: Stock and recruitment. J. Fish. Res. Board Can. 11, 559–623 (1954)
Al-Naffouri, T.Y., Moinuddin, M., Ajeeb, N., Hassibi, B., Moustakas, A.L.: On the distribution of indefinite quadratic forms in Gaussian random variables. IEEE Trans. Commun. 64(1), 153–165 (2016)
Acknowledgements
The authors would like to acknowledge the support provided by the Deanship of Research Oversight and Coordination (DROC) at King Fahd University of Petroleum & Minerals (KFUPM) for funding this work through Project No. SB191044.
Funding
The authors have not disclosed any funding.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A: Evaluation of integral (40)
In this appendix, we evaluate the integral in (40). Since \(\mathbf{u} \sim {\mathcal {C}}{\mathcal {N}}~(\mathbf{0},\varvec{\varLambda })\), the pdf \(p(\mathbf{u})\) is given by
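For completeness, the density referenced here is the standard circular complex Gaussian; with \(M\) denoting the dimension of \(\mathbf{u}\) (assumed to match that of \(\varvec{\varLambda }\)), it reads

\[ p(\mathbf{u})=\frac{1}{\pi ^{M}\det (\varvec{\varLambda })}\, e^{-\mathbf{u}^{H}\varvec{\varLambda }^{-1}\mathbf{u}}. \]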
We substitute the expression for \(p(\mathbf{u})\) in (40). In addition, we employ the following integral representation of the unit step function [20]
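The integral representation in question, standard in the indefinite-quadratic-forms literature, is, for any \(\beta >0\),

\[ u(x)=\frac{1}{2\pi }\int _{-\infty }^{\infty }\frac{e^{x(j\omega +\beta )}}{j\omega +\beta }\,d\omega , \]

where the free parameter \(\beta \) guarantees convergence of the subsequent Gaussian integration.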
As a result, the CDF of \(Y_{k{\bar{k}}}\) will take the form
By collecting similar terms, the above CDF can be set up as
In the above equation, the inner integral is a standard Gaussian integral (see [20, 27] for a formal derivation), whose solution can be shown to be
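For reference, the identity underlying this step is the standard multivariate complex Gaussian integral: for any positive definite \(\mathbf{A}\),

\[ \int _{\mathbb {C}^{M}} e^{-\mathbf{u}^{H}\mathbf{A}\mathbf{u}}\,d\mathbf{u}=\frac{\pi ^{M}}{\det (\mathbf{A})}. \]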
As a result, the CDF of \(Y_{k{\bar{k}}}\) is reduced to the following one-dimensional complex integral
The above integral can be solved using the approach of [20] which will finally result in (60).
Appendix B: Evaluation of integral (50)
In this appendix, we provide the evaluation of integral (50). First, we employ the variable transformation \(t=\frac{y}{(1-y)}\) in (50) which results in
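Explicitly, \(t=\frac{y}{1-y}\) inverts to \(y=\frac{t}{1+t}\), with

\[ dy=\frac{dt}{(1+t)^{2}},\qquad y\in [0,1)\;\longmapsto \;t\in [0,\infty ), \]

so the finite integration range is mapped onto the positive half-line.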
Now, expanding the above fraction using partial fractions, we get
where the constants \(a_1\), \(a_2\), and \({\tilde{a}}_m\) are found to be
Appendix C: Evaluation of \(E[\alpha _{\infty }]\)
The aim of this appendix is to evaluate the moment \(E[\alpha _i]\) at steady state (i.e., as \(i \rightarrow \infty \)). By inspecting relation (69), it can be easily deduced that the conditional moment
where \(\text{ Pr }(\psi _{\infty }>1)\) is the probability that the random variable \(\psi _{\infty }\) exceeds unity. To evaluate this probability, consider the recursion (68) at \(i \rightarrow \infty \) which gives
Assumption A4 allows us to model the error quantity \(e_a(\infty )\) as zero-mean complex Gaussian with variance \(\zeta \), so that \(|e_{\infty }|^2\) is a central chi-square random variable with two degrees of freedom. Thus, we can evaluate the required probability \(\text{ Pr }(\psi _{\infty }>1)\) as follows
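For reference, a central chi-square variable with two degrees of freedom is exponentially distributed, so with \(E[|e_{\infty }|^{2}]=\zeta \) its tail takes the simple form

\[ \text{ Pr }\left( |e_{\infty }|^{2}>x\right) =e^{-x/\zeta },\qquad x\ge 0, \]

which is what makes the required exceedance probability available in closed form.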
Next, to remove the condition on \(\psi _{\infty }\) in (85), we evaluate the moment \(E[\psi _{\infty }]\) as follows
Now, the moment \(E[\text{ erf }(\psi _{\infty })]\) can be evaluated as
where \(f_{\psi }(\psi _{\infty })\) is the pdf of \(\psi \), obtained from the variable transformation in relation (86); it is given by
Hence, the moment \(E[\text{ erf }(\psi _{\infty })]\) results in
where we have used the integral solution 6.282-1 in [22]. Finally, by using \(E[\psi _{\infty }]\) and \(E[\text{ erf }(\psi _{\infty })]\), the unconditional moment \(E[\alpha _{\infty }]\) is given by
Appendix D: Evaluation of \(E[\alpha ^2_{\infty }]\)
In this appendix, we derive the moment \(E[\alpha ^2_{\infty }]\). To do so, we start with the following conditional moment
The probability \(\text{ Pr }(\psi _{\infty }>1)\) is derived in (87). Next, to remove the conditions on \(\psi _{\infty }\) and \(\psi ^2_{\infty }\), we evaluate the moments \(E[\text{ erf}^2(\psi _{\infty })]\) and \(E[\psi ^2_{\infty }]\). To do so, we use relations (68) and (86) to obtain
Thus, it can be shown that
where \(\Gamma (\cdot )\) denotes the Gamma function. Next, to evaluate the moment \(E[\text{ erf}^2(\psi _{\infty })]\), we use the approximation \(E[\text{ erf}^2(\psi _{\infty })]\approx (E[\text{ erf }(\psi _{\infty })])^2\). Hence, finally, the unconditional moment
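The gap in this moment approximation equals the variance of \(\text{ erf }(\psi _{\infty })\), so it is small whenever \(\psi _{\infty }\) is concentrated. A quick Monte Carlo check with a hypothetical stand-in distribution for \(\psi _{\infty }\) (Rayleigh here, purely for illustration; the paper's \(\psi _{\infty }\) has its own pdf) confirms this:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in distribution for psi_infinity (illustration only).
psi = rng.rayleigh(scale=3.0, size=200_000)
erf_psi = np.array([math.erf(p) for p in psi])
exact = float(np.mean(erf_psi ** 2))    # E[erf^2(psi)]
approx = float(np.mean(erf_psi)) ** 2   # (E[erf(psi)])^2
print(f"E[erf^2] = {exact:.3f}, (E[erf])^2 = {approx:.3f}")
```

Since \(\text{ erf }\) saturates near 1 for arguments above roughly 2, the two moments nearly coincide once most of the mass of \(\psi _{\infty }\) sits in the saturated region.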
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Zerguine, A., Ahmad, J., Moinuddin, M. et al. An efficient normalized LMS algorithm. Nonlinear Dyn 110, 3561–3579 (2022). https://doi.org/10.1007/s11071-022-07773-0