Small Target Detection Using Two-Dimensional Least Mean Square (TDLMS) Filter Based on Neighborhood Analysis

Published in: International Journal of Infrared and Millimeter Waves

Abstract

The two-dimensional least mean square (TDLMS) filter is a general-purpose adaptive filter, but its traditional structure and implementation can cause problems when applied to infrared small target detection. This paper presents a new TDLMS filter structure and implementation that incorporates neighborhood analysis and data fusion. By acquiring and analyzing more information from the vicinity of the target, the proposed filter produces a more prominent detection result, making TDLMS better suited to small target detection. Experiments demonstrate the effectiveness of the proposed algorithm.
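As context for readers unfamiliar with the base algorithm, a minimal sketch of a standard two-dimensional LMS adaptive prediction filter is given below. The window size, step size, and causal-support layout are illustrative assumptions, not the configuration used in the paper; the paper's contribution (neighborhood analysis and data fusion on top of this filter) is not reproduced here.

```python
import numpy as np

def tdlms_predict(image, win=5, mu=1e-7):
    """Sketch of a basic 2-D LMS (TDLMS) adaptive prediction filter.

    Each pixel is predicted from a causal window of already-seen
    neighbors; the prediction error suppresses slowly varying
    background clutter so that small targets stand out.
    `win` and `mu` are illustrative values, not the paper's settings.
    """
    H, W = image.shape
    weights = np.zeros((win, win))
    error = np.zeros(image.shape, dtype=float)
    for r in range(win, H):
        for c in range(win, W - win):
            # Causal support: the win x win block directly above the pixel
            X = image[r - win:r, c - win // 2:c - win // 2 + win].astype(float)
            d = float(image[r, c])            # desired output (current pixel)
            y = float(np.sum(weights * X))    # filter prediction
            e = d - y                         # prediction error
            weights += 2.0 * mu * e * X       # LMS weight update
            error[r, c] = e
    return error
```

The error image, rather than the filter output, is what a detector would threshold: well-predicted background yields small errors while an unpredicted point target yields a large one.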



Acknowledgment

This work was partially supported by the Support Technology Fund of China Aerospace ’05 and the Innovation Fund of Science and Technology of China Aerospace ’06.

Author information

Corresponding author

Correspondence to Yuan Cao.

Appendix

For the four-point situation described in Section 3.1, the criterion function used in the proposed algorithm has the form

$$ J = S_W + S_B = S_T = \frac{1}{N}\sum_{l=1}^{N}\left(x_l - m\right)^2 = \frac{1}{4}\left[\left(\overline{Y}_1 - m\right)^2 + \left(\overline{Y}_2 - m\right)^2 + \left(\overline{Y}_3 - m\right)^2 + \left(\overline{Y}_4 - m\right)^2\right] $$

where \( m = \frac{\overline{Y}_1 + \overline{Y}_2 + \overline{Y}_3 + \overline{Y}_4}{4} \) and \( \overline{Y}_i \in \left[\overline{Y}_{\min}, \overline{Y}_{\max}\right] \).

Applying the Lagrange multiplier method, the constraint function is

$$ \varphi\left(m, \overline{Y}_1, \overline{Y}_2, \overline{Y}_3, \overline{Y}_4\right) = 4m - \overline{Y}_1 - \overline{Y}_2 - \overline{Y}_3 - \overline{Y}_4 = 0. $$

Let \( f\left(\overline{Y}_1, \overline{Y}_2, \overline{Y}_3, \overline{Y}_4, m, \lambda\right) = \left(\overline{Y}_1 - m\right)^2 + \left(\overline{Y}_2 - m\right)^2 + \left(\overline{Y}_3 - m\right)^2 + \left(\overline{Y}_4 - m\right)^2 + \lambda\left(4m - \overline{Y}_1 - \overline{Y}_2 - \overline{Y}_3 - \overline{Y}_4\right) \). Setting the partial derivatives to zero gives the following equations:

$$ \left\{ \begin{aligned} f_{x_1} &= 2\left(\overline{Y}_1 - m\right) - \lambda = 0 \\ f_{x_2} &= 2\left(\overline{Y}_2 - m\right) - \lambda = 0 \\ f_{x_3} &= 2\left(\overline{Y}_3 - m\right) - \lambda = 0 \\ f_{x_4} &= 2\left(\overline{Y}_4 - m\right) - \lambda = 0 \\ f_{m} &= 4m - \overline{Y}_1 - \overline{Y}_2 - \overline{Y}_3 - \overline{Y}_4 + 2\lambda = 0 \\ f_{\lambda} &= 4m - \overline{Y}_1 - \overline{Y}_2 - \overline{Y}_3 - \overline{Y}_4 = 0 \end{aligned} \right. $$

Solving these equations, we obtain

$$ \overline{Y}_1 = \overline{Y}_2 = \overline{Y}_3 = \overline{Y}_4 $$

which means that \( J \) reaches its minimum value of 0 whenever all \( \overline{Y}_i \) are equal. Since \( J \) has no other extremum points, it increases monotonically as the \( \overline{Y}_i \) move apart from this point, so the maximum is reached when each \( \overline{Y}_i \) lies at one of the boundaries \( \overline{Y}_{\min} \) or \( \overline{Y}_{\max} \). Letting \( \overline{Y}_1 = \overline{Y}_{\min} \) and \( \overline{Y}_4 = \overline{Y}_{\max} \), there are three possible combinations of \( \overline{Y}_i \), namely

$$ \begin{aligned} &\overline{Y}_1 = \overline{Y}_2 = \overline{Y}_3 = \overline{Y}_{\min},\ \overline{Y}_4 = \overline{Y}_{\max} \\ \text{or}\quad &\overline{Y}_1 = \overline{Y}_2 = \overline{Y}_{\min},\ \overline{Y}_3 = \overline{Y}_4 = \overline{Y}_{\max} \\ \text{or}\quad &\overline{Y}_1 = \overline{Y}_{\min},\ \overline{Y}_2 = \overline{Y}_3 = \overline{Y}_4 = \overline{Y}_{\max} \end{aligned} $$

A simple calculation then shows that \( J \) reaches its maximum value \( \frac{\left(\overline{Y}_{\max} - \overline{Y}_{\min}\right)^2}{4} \) when \( \overline{Y}_1 = \overline{Y}_2 = \overline{Y}_{\min} \) and \( \overline{Y}_3 = \overline{Y}_4 = \overline{Y}_{\max} \).
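The boundary comparison above is easy to verify numerically. The sketch below evaluates the criterion for the three boundary combinations; the endpoint values are arbitrary illustrative numbers, not data from the paper.

```python
import numpy as np

def criterion_J(Ys):
    """Total scatter S_T of the four neighborhood means (appendix criterion)."""
    Ys = np.asarray(Ys, dtype=float)
    m = Ys.mean()                    # m = (Y1 + Y2 + Y3 + Y4) / 4
    return np.mean((Ys - m) ** 2)    # (1/4) * sum of squared deviations

Ymin, Ymax = 10.0, 50.0  # arbitrary endpoints for illustration
cases = {
    "3-1 split": [Ymin, Ymin, Ymin, Ymax],
    "2-2 split": [Ymin, Ymin, Ymax, Ymax],
    "1-3 split": [Ymin, Ymax, Ymax, Ymax],
}
values = {name: criterion_J(c) for name, c in cases.items()}
best = max(values, key=values.get)
# The 2-2 split attains the maximum (Ymax - Ymin)^2 / 4
```

With these endpoints the 2-2 split gives J = 400 = (50 − 10)²/4, while both asymmetric splits give the smaller value 300, matching the appendix's conclusion.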

About this article

Cite this article

Cao, Y., Liu, R. & Yang, J. Small Target Detection Using Two-Dimensional Least Mean Square (TDLMS) Filter Based on Neighborhood Analysis. Int J Infrared Milli Waves 29, 188–200 (2008). https://doi.org/10.1007/s10762-007-9313-x
