
Application of Dynamic Features of the Pupil for Iris Presentation Attack Detection

Chapter in: Handbook of Biometric Anti-Spoofing

Part of the book series: Advances in Computer Vision and Pattern Recognition (ACVPR)

Abstract

This chapter presents a comprehensive study on the application of the stimulated pupillary light reflex to presentation attack detection (PAD) in iris recognition systems. A pupil, when stimulated by visible light in a predefined manner, offers dynamic liveness features that cannot be acquired from dead eyes or from static objects such as printed contact lenses, paper printouts, or prosthetic eyes. Modeling pupil dynamics requires a few seconds of observation under varying light conditions, which can be supplied by a visible light source added to the near-infrared illuminants already used in iris image acquisition. The central element of the presented approach is accurate modeling and classification of pupil dynamics, which makes mimicking an actual eye reaction difficult. This chapter discusses new data-driven models of pupil dynamics based on recurrent neural networks and compares their PAD performance to solutions based on the parametric Clynes–Kohn model and various classification techniques. Experiments with 166 distinct eyes of 84 subjects show that the best data-driven solution, based on long short-term memory, correctly recognized 99.97% of attack presentations and 98.62% of normal pupil reactions. The approach using the Clynes–Kohn parametric model of pupil dynamics perfectly recognized abnormalities and correctly recognized 99.97% of normal pupil reactions on the same dataset with the same evaluation protocol. This means that the data-driven solutions compare favorably to the parametric approaches, which require model identification in exchange for slightly better performance. We also show that observation times may be as short as 3 s when using the parametric model, and as short as 2 s when applying the recurrent neural network, without substantial loss in accuracy.
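The parametric branch of the approach fits a physiological model of the pupil reaction to the measured time series and flags reactions the model cannot explain. The Clynes–Kohn equations themselves are not reproduced in this abstract, so the sketch below substitutes a much simpler first-order constriction model as a stand-in; the function names, the model form, and the grid-search fit are illustrative assumptions, not the chapter's reference implementation:

```python
import numpy as np

# Stand-in for a parametric pupil-reaction model (NOT the Clynes-Kohn model):
# after a light stimulus at t = 0, the radius decays exponentially,
#   r(t) = r_inf + (r0 - r_inf) * exp(-t / tau),
# with free parameters r0 (initial radius), r_inf (final radius), tau (lag).

def simulate_reaction(t, r0, r_inf, tau):
    """Pupil radius over time under a first-order constriction model."""
    return r_inf + (r0 - r_inf) * np.exp(-t / tau)

def fit_reaction(t, r, taus=np.linspace(0.1, 3.0, 300)):
    """Grid-search tau; for each tau the remaining fit is linear least squares."""
    best = None
    for tau in taus:
        e = np.exp(-t / tau)
        # r(t) = r_inf * (1 - e) + r0 * e  ->  linear in (r_inf, r0)
        A = np.stack([1.0 - e, e], axis=1)
        coef, *_ = np.linalg.lstsq(A, r, rcond=None)
        sse = np.sum((A @ coef - r) ** 2)
        if best is None or sse < best[0]:
            best = (sse, coef[1], coef[0], tau)  # (error, r0, r_inf, tau)
    return best[1], best[2], best[3]

if __name__ == "__main__":
    t = np.linspace(0.0, 3.0, 90)  # ~3 s observation at ~30 Hz
    r = simulate_reaction(t, r0=4.0, r_inf=2.5, tau=0.6)
    r += np.random.default_rng(0).normal(0.0, 0.02, t.shape)
    print(fit_reaction(t, r))
```

Fixing tau on a grid keeps each fit a linear problem, so no iterative optimizer is needed; a PAD decision would then threshold the residual error or classify the recovered parameters, as the chapter does with the actual Clynes–Kohn parameters.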
Along with this chapter we also offer: (a) all time series representing pupil dynamics for the 166 distinct eyes used in this study, (b) weights of the trained recurrent neural network offering the best performance, (c) source code of the reference PAD implementation based on the Clynes–Kohn parametric model, and (d) all PAD scores, which allow reproduction of the plots presented in this chapter. To the best of our knowledge, this chapter proposes the first database of pupil measurements dedicated to presentation attack detection and the first evaluation of recurrent neural network-based modeling of pupil dynamics for PAD.
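The data-driven branch replaces explicit model identification with a recurrent network that reads the pupil-radius sequence directly and emits a PAD score. The minimal sketch below shows the shape of such an LSTM-based scorer; the cell size, the random placeholder weights, and the `TinyLSTM` name are assumptions for illustration, not the trained network released with the chapter:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Single LSTM cell over a 1-D pupil-radius series, logistic read-out."""

    def __init__(self, hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Input size is 1 (pupil radius); gates stacked as [i, f, g, o].
        self.W = rng.normal(0.0, 0.1, (4 * hidden, 1 + hidden))
        self.b = np.zeros(4 * hidden)
        self.w_out = rng.normal(0.0, 0.1, hidden)
        self.hidden = hidden

    def score(self, series):
        h = np.zeros(self.hidden)
        c = np.zeros(self.hidden)
        for x in series:
            z = self.W @ np.concatenate(([x], h)) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # cell update
            h = sigmoid(o) * np.tanh(c)                   # hidden state
        return sigmoid(self.w_out @ h)  # PAD score in (0, 1)

if __name__ == "__main__":
    # A constricting pupil: radius decays from 3.0 toward 2.2 over ~2 s.
    radii = 3.0 - 0.8 * (1.0 - np.exp(-np.linspace(0.0, 2.0, 60)))
    print(0.0 < TinyLSTM(hidden=8).score(radii) < 1.0)  # True
```

Because the network consumes one sample per time step, it can be truncated to shorter observation windows (the chapter reports usable scores from as little as 2 s of data) without changing the architecture.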

Acknowledgements

The authors would like to thank Mr. Rafal Brize and Mr. Mateusz Trokielewicz, who collected the iris images in varying light conditions under the supervision of the first author. The application of the Kohn and Clynes model was inspired by the research of Dr. Marcin Chochowski, who used the parameters of this model as individual features in biometric recognition. Dr. Chochowski, together with Prof. Pacut and the first author, was granted US Patent No. 8,061,842, which partially covers the ideas related to the parametric model-based PAD presented in this work.

Author information

Correspondence to Adam Czajka.

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Czajka, A., Becker, B. (2019). Application of Dynamic Features of the Pupil for Iris Presentation Attack Detection. In: Marcel, S., Nixon, M., Fierrez, J., Evans, N. (eds) Handbook of Biometric Anti-Spoofing. Advances in Computer Vision and Pattern Recognition. Springer, Cham. https://doi.org/10.1007/978-3-319-92627-8_7

  • DOI: https://doi.org/10.1007/978-3-319-92627-8_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-92626-1

  • Online ISBN: 978-3-319-92627-8

  • eBook Packages: Computer Science, Computer Science (R0)
