
A novel object tracking method based on a mixture model

  • Regular Paper
  • Published in: International Journal of Intelligent Robotics and Applications

Abstract

Object tracking has been applied in many fields, such as intelligent surveillance and computer vision. Although much progress has been made, many open problems still pose a considerable challenge to object tracking, chiefly occlusion, similar object appearance and background clutter. A novel method based on a mixture model is proposed to address these issues. The mixture model is integrated into a Bayesian framework, combining a locally dense context feature with fundamental image information (i.e. the relationship between the object and its surrounding regions); this is motivated by the view of tracking as a prediction problem, which the Bayesian method is well suited to solve. In addition, both scale variation and template updating are handled to ensure the effectiveness of the proposed algorithm. Furthermore, the Fourier Transform (FT) is used when solving the Bayesian equation so that the algorithm can run in real time. As a result, the proposed MMOT (mixture model for object tracking) runs faster and performs better than existing algorithms on several challenging image sequences in terms of accuracy, speed and robustness.
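The FT step described above follows the general recipe of Fourier-domain trackers (e.g. correlation-filter and spatio-temporal context methods): the confidence map is a convolution of a learned filter with a context prior, which the convolution theorem makes cheap in the frequency domain. The following NumPy sketch is illustrative only and is not the authors' exact MMOT formulation; the delta prior and Gaussian filter are toy assumptions.

```python
import numpy as np

def fft_confidence_map(context_prior, spatial_filter):
    """Confidence map as a circular convolution computed in the Fourier domain.

    By the convolution theorem, conv(h, p) = IFFT(FFT(h) * FFT(p)),
    which reduces an O(n^2) spatial convolution to O(n log n) per frame
    and is what makes such trackers real-time capable.
    """
    return np.real(np.fft.ifft2(np.fft.fft2(spatial_filter) *
                                np.fft.fft2(context_prior)))

# Toy usage: a delta prior at the object's position and a Gaussian
# filter centred at the origin (with wrap-around, as is standard for
# correlation filters). The peak of the map recovers the position.
n = 64
prior = np.zeros((n, n))
prior[30, 40] = 1.0
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
d2 = np.minimum(yy, n - yy) ** 2 + np.minimum(xx, n - xx) ** 2
h = np.exp(-d2 / (2.0 * 3.0 ** 2))

conf = fft_confidence_map(prior, h)
y, x = np.unravel_index(np.argmax(conf), conf.shape)  # predicted position
```

Because the filter peaks at the origin, the circular convolution shifts that peak to the prior's location, so the argmax of the confidence map lands at (30, 40).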



Acknowledgements

The authors would like to acknowledge the support from the EU Seventh Framework Programme (FP7)-ICT under Grant no. 611391, the Natural Science Foundation of China under Grant nos. 51575412, 51575338 and 51575407, the China Scholarship Council (Grant no. 201508060340) and the Research Project of the State Key Lab of Digital Manufacturing Equipment & Technology of China under Grant no. DMETKF2017003.

Author information

Corresponding author

Correspondence to Zhaojie Ju.


Cite this article

Gao, D., Ju, Z., Cao, J. et al. A novel object tracking method based on a mixture model. Int J Intell Robot Appl 2, 361–371 (2018). https://doi.org/10.1007/s41315-018-0062-x
