Improving FREAK Descriptor for Image Classification

  • Conference paper
Computer Vision Systems (ICVS 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9163)

Abstract

In this paper we propose a new set of bio-inspired descriptors for image classification, based on the low-level processing performed by the retina. Taking the FREAK (Fast Retina Keypoint) descriptor as a starting point, we extend it by mimicking the center-surround organization of ganglion cell receptive fields. To test our approach, we compared the performance of the original FREAK and of our proposal on the 15 scene categories database. The results show that our approach outperforms the original FREAK on the scene classification task.
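The excerpt does not include the authors' implementation, but the general pipeline can be illustrated with a short, hedged sketch: dense FREAK descriptors are computed with OpenCV's contrib module (cv2.xfeatures2d), optionally after a difference-of-Gaussians preprocessing step that stands in for the center-surround organization of retinal ganglion cell receptive fields. The grid spacing, keypoint size, Gaussian sigmas, function names, and input file name below are illustrative assumptions, not values taken from the paper.

import cv2
import numpy as np

def center_surround(gray, sigma_center=1.0, sigma_surround=3.0):
    # Approximate a ganglion-cell-like center-surround response as a
    # difference of Gaussians (an assumed stand-in for the retinal model).
    img = gray.astype(np.float32)
    center = cv2.GaussianBlur(img, (0, 0), sigma_center)
    surround = cv2.GaussianBlur(img, (0, 0), sigma_surround)
    response = center - surround
    return cv2.normalize(response, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def dense_freak(gray, step=8, size=16):
    # Compute FREAK descriptors on a dense grid of keypoints
    # (requires the opencv-contrib-python package).
    h, w = gray.shape
    keypoints = [cv2.KeyPoint(float(x), float(y), size)
                 for y in range(step, h - step, step)
                 for x in range(step, w - step, step)]
    freak = cv2.xfeatures2d.FREAK_create()
    keypoints, descriptors = freak.compute(gray, keypoints)
    return descriptors  # one binary descriptor per surviving keypoint

if __name__ == "__main__":
    img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    desc_plain = dense_freak(img)
    desc_retina = dense_freak(center_surround(img))
    print(desc_plain.shape, desc_retina.shape)

In a full scene classification pipeline, such binary descriptors would typically be quantized into a visual vocabulary (for example, bag-of-words with spatial pyramids as in Lazebnik et al.) before training a classifier on the 15 scene categories database; the sketch above covers only the descriptor extraction step.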

References

  1. Alahi, A., Ortiz, R., Vandergheynst, P.: FREAK: fast retina keypoint. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 510–517 (2012)

  2. Bradski, G.: The OpenCV library. Dr. Dobb’s J. Softw. Tools 25, 120–126 (2000)

  3. Chichilnisky, E.J.: A simple white noise analysis of neuronal light responses. Netw.: Comput. Neural Syst. 12(2), 199–213 (2001)

  4. Lazebnik, S., Schmid, C., Ponce, J.: Beyond bags of features: spatial pyramid matching for recognizing natural scene categories. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 2169–2178. IEEE Computer Society (2006)

  5. Leutenegger, S., Chli, M., Siegwart, R.: BRISK: binary robust invariant scalable keypoints. In: IEEE International Conference on Computer Vision (ICCV 2011), pp. 2548–2555 (2011)

  6. Meng, X., Wang, Z., Wu, L.: Building global image features for scene recognition. Pattern Recogn. 45(1), 373–380 (2012)

  7. Oliva, A., Torralba, A.: Modeling the shape of the scene: a holistic representation of the spatial envelope. Int. J. Comput. Vision 42(3), 145–175 (2001)

  8. Oliva, A., Torralba, A.: Building the gist of a scene: the role of global image features in recognition. Prog. Brain Res. 155, 23–36 (2006)

  9. Quattoni, A., Torralba, A.: Recognizing indoor scenes. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 413–420 (2009)

  10. Tola, E., Lepetit, V., Fua, P.: DAISY: an efficient dense descriptor applied to wide baseline stereo. IEEE Trans. Pattern Anal. Mach. Intell. 32(5), 815–830 (2010)

  11. Tuytelaars, T.: Dense interest points. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), San Francisco, CA, USA, pp. 2281–2288 (2010)

  12. Vedaldi, A., Fulkerson, B.: VLFeat: an open and portable library of computer vision algorithms (2008). http://www.vlfeat.org/

  13. Vu, N.-S., Nguyen, T.P., Garcia, C.: Improving texture categorization with biologically inspired filtering. Image Vis. Comput. 32, 424–436 (2013)

  14. Wang, J., Wang, X., Yang, X., Zhao, A.: CS-FREAK: an improved binary descriptor. In: Tan, T., Ruan, Q., Wang, S., Ma, H., Huang, K. (eds.) IGTA 2014. CCIS, vol. 437, pp. 129–136. Springer, Heidelberg (2014)

  15. Whiten, C., Laganiere, R., Bilodeau, G.A.: Efficient action recognition with MoFREAK. In: Proceedings of the 2013 International Conference on Computer and Robot Vision, pp. 319–325. IEEE Computer Society (2013)

  16. Wohrer, A.: Model and large-scale simulator of a biological retina with contrast gain control. Ph.D. thesis, University of Nice Sophia-Antipolis (2008)

  17. Wu, J., Rehg, J.M.: CENTRIST: a visual descriptor for scene categorization. IEEE Trans. Pattern Anal. Mach. Intell. 33(8), 1489–1501 (2011)

Acknowledgement

We thank M. San Biagio for his support with the image classification algorithm. This research received financial support from the 7th Framework Programme for Research of the European Commission, under Grant Agreement No. 600847 (RENVISION), a project of the Future and Emerging Technologies (FET) programme, Neuro-bio-inspired Systems (NBIS) FET-Proactive Initiative.

Author information

Corresponding author

Correspondence to Cristina Hilario Gomez.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Hilario Gomez, C., Medathati, K., Kornprobst, P., Murino, V., Sona, D. (2015). Improving FREAK Descriptor for Image Classification. In: Nalpantidis, L., Krüger, V., Eklundh, J.-O., Gasteratos, A. (eds) Computer Vision Systems. ICVS 2015. Lecture Notes in Computer Science, vol 9163. Springer, Cham. https://doi.org/10.1007/978-3-319-20904-3_2

  • DOI: https://doi.org/10.1007/978-3-319-20904-3_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-20903-6

  • Online ISBN: 978-3-319-20904-3

  • eBook Packages: Computer Science, Computer Science (R0)
