
Spatiotemporal optical blob reconstruction for object detection in grayscale videos

Published in Multimedia Tools and Applications

Abstract

Significant research has been devoted to the detection of moving objects in image sequences. Detected moving objects usually contain errors: some pixels belonging to an object are marked as non-object, and vice versa. To obtain a refined detection of moving objects in a video, the binary blobs detected as objects in every frame must be post-processed. This article introduces a novel blob reconstruction method that overcomes this limitation through optical-flow-based nullification, bifurcation, and unification of the detected blobs. To assess the performance of the proposed method, it is compared with ten widely used object detection methods on twenty-four standard moving-object scene videos, using standard measures such as accuracy, precision, recall, and F-measure. The results clearly indicate the efficacy of the proposed method. Finally, a preliminary case study on placodal cell migration during the early development of ectodermal organs in humans and mice is presented, in which the proposed model promisingly tracks the cell migration.
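
The full paper details the nullification, bifurcation, and unification rules. As a rough illustration only, the sketch below shows how an optical-flow-driven per-blob decision and the reported pixel-level evaluation measures (accuracy, precision, recall, F-measure) might be computed. The Farneback flow estimator, the connected-component labelling, and the magnitude threshold are assumptions made for this sketch, not the authors' exact procedure.

    import cv2
    import numpy as np

    def nullify_low_motion_blobs(prev_gray, curr_gray, mask, mag_thresh=0.5):
        """Illustrative 'nullification': drop detected blobs whose mean optical-flow
        magnitude is negligible (likely false detections). Not the authors' exact rule."""
        # Dense optical flow between consecutive grayscale frames (Farneback).
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)              # per-pixel flow magnitude
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        refined = np.zeros_like(mask)
        for i in range(1, n):                           # label 0 is the background
            blob = labels == i
            if mag[blob].mean() >= mag_thresh:          # keep blobs that actually move
                refined[blob] = 255
            # Bifurcation/unification would further split or merge blobs by examining
            # the flow directions inside each blob; omitted in this sketch.
        return refined

    def segmentation_scores(pred, gt):
        """Pixel-level accuracy, precision, recall and F-measure for binary masks."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        tp = np.logical_and(pred, gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        tn = np.logical_and(~pred, ~gt).sum()
        precision = tp / (tp + fp + 1e-9)
        recall = tp / (tp + fn + 1e-9)
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        f_measure = 2 * precision * recall / (precision + recall + 1e-9)
        return accuracy, precision, recall, f_measure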

Acknowledgments

The work presented in this article is supported by Grant No. ETI/359/2014 under the Fund for Improvement of S&T Infrastructure in Universities and Higher Educational Institutions (FIST) Program 2016, Department of Science and Technology, Government of India.

Author information

Corresponding author

Correspondence to Sambit Bakshi.

About this article

Cite this article

Raman, R., Choudhury, S.K. & Bakshi, S. Spatiotemporal optical blob reconstruction for object detection in grayscale videos. Multimed Tools Appl 77, 741–762 (2018). https://doi.org/10.1007/s11042-016-4234-0
