Research Article


A Benchmark of Facial Recognition Pipelines and Co-Usability Performances of Modules

Year 2024, Volume: 17 Issue: 2, 95 - 107, 30.04.2024
https://doi.org/10.17671/gazibtd.1399077

Abstract

In recent years, researchers from leading technology companies, top universities worldwide, and the open-source community have made substantial strides in facial recognition. Experiments indicate that facial recognition approaches have not only reached but surpassed human-level accuracy. A contemporary facial recognition pipeline comprises four key stages: detection, alignment, representation, and verification. Existing facial recognition research predominantly centers on the representation stage of the pipeline. This study conducted experiments on different combinations of nine state-of-the-art facial recognition models, six cutting-edge face detectors, three distance metrics, and two alignment modes. The co-usability performances of implementing and adapting these modules were assessed to precisely gauge the impact of each module on the pipeline. The theoretical and practical findings of the study aim to provide optimal configuration sets for facial recognition pipelines.
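As an illustration of the verification stage, the sketch below implements the three distance metrics the study compares (cosine, Euclidean, and L2-normalized Euclidean) over precomputed face embeddings, in plain Python. The function names and the example threshold of 0.40 are illustrative assumptions for this sketch, not values taken from the paper; in practice each recognition model has its own tuned threshold per metric.

```python
import math

def cosine_distance(a, b):
    # 1 minus the cosine similarity of two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def euclidean_distance(a, b):
    # Straight-line distance between two embedding vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def l2_normalize(v):
    # Scale a vector to unit length before comparing
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def verify(emb1, emb2, metric="cosine", threshold=0.40):
    # Verification: declare "same identity" iff the distance between the
    # two representations is below a model- and metric-specific threshold.
    if metric == "cosine":
        d = cosine_distance(emb1, emb2)
    elif metric == "euclidean":
        d = euclidean_distance(emb1, emb2)
    elif metric == "euclidean_l2":
        d = euclidean_distance(l2_normalize(emb1), l2_normalize(emb2))
    else:
        raise ValueError(f"unknown metric: {metric}")
    return d <= threshold, d
```

In a full pipeline, `emb1` and `emb2` would be the representation-stage outputs of a model such as FaceNet or ArcFace applied to two detected and aligned face crops; identical embeddings yield a distance of zero and therefore a positive verification.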

References

  • Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, “DeepFace: Closing the gap to human-level performance in face verification”, In Proceedings of the IEEE conference on computer vision and pattern recognition, 1701–1708, 2014.
  • F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recognition and clustering”, In Proceedings of the IEEE conference on computer vision and pattern recognition, 815–823, 2015.
  • O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep face recognition”, In British Machine Vision Conference, 2015.
  • J. Deng, J. Guo, N. Xue, and S. Zafeiriou, “Arcface: Additive angular margin loss for deep face recognition”, In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4690–4699, 2019.
  • D. E. King, “Dlib-ml: A machine learning toolkit”, The Journal of Machine Learning Research, 10, 1755–1758, 2009.
  • Y. Zhong, W. Deng, J. Hu, D. Zhao, X. Li, and D. Wen, “Sface: Sigmoid-constrained hypersphere loss for robust face recognition”, IEEE Transactions on Image Processing, 30:2587–2598, 2021.
  • B. Amos, B. Ludwiczuk, M. Satyanarayanan, et al. “Openface: A general-purpose face recognition library with mobile applications”, CMU School of Computer Science, 6(2):20, 2016.
  • Y. Sun, X. Wang, and X. Tang, “Deep learning face representation from predicting 10,000 classes”, In Proceedings of the IEEE conference on computer vision and pattern recognition, 1891–1898, 2014.
  • S. I. Serengil and A. Ozpinar, “Lightface: A hybrid deep face recognition framework”, In 2020 Innovations in Intelligent Systems and Applications Conference (ASYU), 23–27. IEEE, 2020.
  • G. Bradski, “The opencv library”, Dr. Dobb’s Journal: Software Tools for the Professional Programmer, 25(11):120–123, 2000.
  • W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg, “SSD: Single shot multibox detector”, In European conference on computer vision, 21–37. Springer, 2016.
  • C. Lugaresi, J. Tang, H. Nash, C. McClanahan, E. Uboweja, M. Hays, F. Zhang, C. Chang, M. G. Yong, J. Lee, et al. “Mediapipe: A framework for building perception pipelines”, arXiv preprint arXiv:1906.08172, 2019.
  • V. Bazarevsky, Y. Kartynnik, A. Vakunov, K. Raveendran, and M. Grundmann. “Blazeface: Sub-millisecond neural face detection on mobile gpus”, arXiv preprint arXiv:1907.05047, 2019.
  • K. Zhang, Z. Zhang, Z. Li, and Y. Qiao. “Joint face detection and alignment using multitask cascaded convolutional networks”, IEEE signal processing letters, 23(10):1499–1503, 2016.
  • J. Deng, J. Guo, E. Ververas, I. Kotsia, and S. Zafeiriou, “Retinaface: Single-shot multi-level face localisation in the wild”, In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5203–5212, 2020.
  • S. I. Serengil and A. Ozpinar. “Hyperextended lightface: A facial attribute analysis framework”, In 2021 International Conference on Engineering and Emerging Technologies (ICEET), 1–4. IEEE, 2021.
  • G. B. Huang, M. Mattar, T. Berg, and E. L. Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments”, In Workshop on Faces in Real-Life Images: Detection, Alignment, and Recognition, 2008.
  • O. Kramer, “Scikit-learn”, In Machine Learning for Evolution Strategies, 45–53. Springer, 2016.
  • N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar. “Attribute and simile classifiers for face verification”, In 2009 IEEE 12th international conference on computer vision, 365–372. IEEE, 2009.
  • J. R. Quinlan, C4.5: Programs for Machine Learning. Elsevier, 2014.
  • S. I. Serengil, Deepface: A lightweight face recognition and facial attribute analysis (age, gender, emotion and race) library for python, https://github.com/serengil/deepface, 15.04.2024.
  • O. M. Parkhi, A. Vedaldi, A. Zisserman, VGG Face descriptor, https://www.robots.ox.ac.uk/~vgg/software/vgg_face/, 15.04.2024.
  • D. Sandberg, Facenet: Face recognition using tensorflow, https://github.com/davidsandberg/facenet, 15.04.2024.
  • L. D Garse, Keras insightface, https://github.com/leondgarse/Keras_insightface, 15.04.2024.
  • Y. Feng, SFace, https://github.com/opencv/opencv_zoo/tree/main/models/face_recognition_sface, 15.04.2024.
  • V. S. Wang, Keras-openface2, https://github.com/iwantooxxoox/Keras-OpenFace, 15.04.2024.
  • S. Ghosh, Deepface, https://github.com/swghosh/DeepFace, 15.04.2024.
  • Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman, “Vggface2: A dataset for recognising faces across pose and age”, In 2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018), 67–74. IEEE, 2018.
  • R. Ran, Deepid implementation, https://github.com/Ruoyiran/DeepID, 15.04.2024.
  • I. P. Centeno, Mtcnn, https://github.com/ipazc/mtcnn, 15.04.2024.
  • S. Bertrand, Retinaface-tf2, https://github.com/StanislasBertrand/RetinaFace-tf2, 15.04.2024.
  • S. I. Serengil, Retinaface: Deep face detection library for python, https://github.com/serengil/retinaface, 15.04.2024.
  • K. Yildiz, E. Gunes, A. Bas, “CNN-based Gender Prediction in Uncontrolled Environments”, Duzce University Journal of Science & Technology, 890-898. 2021.
  • H. Goze, O. Yildiz, “A New Deep Learning Model for Real-Time Face Recognition and Time Marking in Video Footage”, Journal of Information Technologies, 167-175. 2022.
  • G. Guo, N. Zhang, “A survey on deep learning based face recognition”, Computer Vision and Image Understanding, 189, 102805, 2019.
  • M. Hassaballah, S. Aly, “Face recognition: challenges, achievements and future directions”, IET Computer Vision, 9(4), 614-626, 2015.
  • D. Heinsohn, E. Villalobos, L. Prieto, D. Mery, “Face recognition in low-quality images using adaptive sparse representations”, Image and Vision Computing, 85, 46-58, 2019.
  • P. J. Phillips, A. J. O'Toole, “Comparison of human and computer performance across face recognition experiments”, Image and Vision Computing, 32(1), 74-85, 2014.
  • E. G. Ortiz, B. C. Becker, “Face recognition for web-scale datasets”, Computer Vision and Image Understanding, 118, 153-170, 2014.

Details

Primary Language English
Subjects Deep Learning
Journal Section Articles
Authors

Sefik Serengil 0000-0002-0345-0088

Alper Özpınar 0000-0003-1250-5949

Publication Date April 30, 2024
Submission Date December 1, 2023
Acceptance Date March 29, 2024
Published in Issue Year 2024 Volume: 17 Issue: 2

Cite

APA Serengil, S., & Özpınar, A. (2024). A Benchmark of Facial Recognition Pipelines and Co-Usability Performances of Modules. Bilişim Teknolojileri Dergisi, 17(2), 95-107. https://doi.org/10.17671/gazibtd.1399077