
EDITORIAL article

Front. Neurosci., 05 September 2023
Sec. Visual Neuroscience
This article is part of the Research Topic Deep Facial Attribute Analysis

Editorial: Deep facial attribute analysis

  • 1Department of Electronics and Communication Engineering, North China Electric Power University, Beijing, China
  • 2Hebei Key Laboratory of Power Internet of Things Technology, North China Electric Power University, Beijing, China
  • 3Department of Computer Science, Royal Holloway, University of London, Egham, United Kingdom
  • 4Faculty of Data Science, City University of Macau, Macao, Macao SAR, China

Editorial on the Research Topic
Deep facial attribute analysis

Face images carry rich social cues, including age, gender, identity, and emotional state. A facial attribute is a specific description of a facial feature, and facial attribute analysis aims to infer such attributes and characteristics from face images. It has important applications in many fields, and deep learning techniques have enabled outstanding breakthroughs in this area.

Age is an important physical and social characteristic of human beings, and age estimation from facial images has attracted significant attention in applications such as cross-age face recognition, harmonious human-computer interaction, image and video retrieval, and marketing analysis. Shi et al. summarize modern face-based age estimation methods and propose an attention-based convolution (ABC) age estimation framework, namely an improved Swin Transformer with ABC. Their framework improves the accuracy of facial age estimation and provides a novel approach to inspire future investigation.
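The details of the ABC framework are given in Shi et al.'s paper; purely as a generic illustration of the idea of attention-weighted convolutional features, a minimal squeeze-and-excitation style channel attention (with hypothetical, randomly initialized bottleneck weights in place of trained ones) can be sketched as:

```python
import numpy as np

def channel_attention(features, reduction=2):
    """Squeeze-and-excitation style channel attention over a (C, H, W) feature map.

    Global-average-pool each channel, pass the result through a small
    bottleneck, and rescale the channels by the resulting sigmoid weights.
    """
    c, h, w = features.shape
    rng = np.random.default_rng(0)
    # Hypothetical bottleneck weights; a trained model would learn these.
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1

    squeeze = features.mean(axis=(1, 2))              # (C,) per-channel statistics
    hidden = np.maximum(w1 @ squeeze, 0.0)            # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid attention weights
    return features * weights[:, None, None]          # reweighted feature map

feats = np.ones((8, 4, 4))
out = channel_attention(feats)
print(out.shape)  # (8, 4, 4)
```

In an age estimation backbone, such attention weights let the network emphasize channels that respond to age-informative regions (e.g., skin texture) over irrelevant ones.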

Face verification and face recognition aim to retrieve and identify faces from large-scale face databases or live surveillance videos based on given samples, providing instant authentication (Hu et al., 2014; Ge et al., 2016). These techniques are widely used because of their effectiveness and efficiency, but the security of face recognition cannot be ignored (Yang et al., 2023).

Facial expression recognition (FER) forms the basis for machines to understand human emotions and detect abnormalities. With the transition of FER from laboratory-controlled to challenging in-the-wild conditions and the recent success of deep learning in various fields, deep FER systems generally focus on two important issues: overfitting caused by a lack of sufficient training data, and expression-unrelated variations such as illumination, head pose, and identity bias. Li and Deng (2022) provide a comprehensive survey of deep FER, including datasets and algorithms that offer insights into these intrinsic problems. They review novel deep neural networks and related training strategies designed for FER on both static images and dynamic image sequences, revealing their advantages and limitations. They then extend the survey to additional related issues and application scenarios, and review the challenges, corresponding opportunities, and future directions for the design of robust deep FER systems. To meet the requirements of high-precision, real-time expression recognition on edge devices, Kong et al. (2022) propose a real-time FER method based on iterative transfer learning and an efficient attention network (EAN) for edge resource-constrained scenes. This method significantly reduces model complexity while achieving high recognition accuracy.
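Kong et al.'s EAN is not reproduced here; as a hedged illustration of why efficient attention suits edge devices, one common formulation replaces the quadratic softmax(QK^T)V with separately normalized Q and K, so the global context K^T V is computed first and the cost drops from O(n^2 d) to O(n d^2):

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def efficient_attention(q, k, v):
    """Linear-complexity attention: normalize Q over features and K over
    positions, then multiply K^T V first so no n-by-n matrix is ever formed."""
    q = softmax(q, axis=1)          # each query normalized over the feature dim
    k = softmax(k, axis=0)          # keys normalized over the sequence dim
    context = k.T @ v               # (d, d_v) global context summary
    return q @ context              # (n, d_v) attended features

n, d = 64, 8
rng = np.random.default_rng(1)
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = efficient_attention(q, k, v)
print(out.shape)  # (64, 8)
```

For a fixed feature dimension d, memory and compute grow only linearly in the number of positions n, which is what makes such modules attractive on resource-constrained hardware.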

Facial expressions are also used in the examination and diagnosis of medical conditions. Individuals with Autism Spectrum Disorder (ASD) demonstrate a vigilance-avoidance attention pattern toward emotional faces. Meng et al. employed eye-tracking technology to examine the characteristics and temporal course of attention bias toward emotional faces in individuals with autistic traits (ATs).

Facial attribute editing based on deep learning models is a hot topic in face algorithms and applications, with many recent uses in Internet products such as face make-up, face age modification, face cartoon avatar generation, and face modification. Zhao et al. propose a style-attention-based global-local aware GAN for personalized facial caricature generation. To integrate the facial characteristics of a subject, they introduce a landmark-based warp controller for personalized shape exaggeration. To fuse facial features with caricature style appropriately, they introduce a style-attention module, which adopts an attention mechanism to reduce collapsed cases and increase the quality of the generated caricatures.
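The actual style-attention module is defined in Zhao et al.'s paper; as an illustrative sketch only, the general pattern of attention-based style fusion lets each content position attend over style positions and pull in a weighted blend of style features:

```python
import numpy as np

def style_attention(content, style):
    """Cross-attention fusion: content features act as queries and style
    features as keys/values, yielding a style-infused content representation."""
    d = content.shape[1]
    scores = content @ style.T / np.sqrt(d)          # (n_c, n_s) affinities
    scores -= scores.max(axis=1, keepdims=True)      # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over style positions
    return weights @ style                           # (n_c, d) fused features

rng = np.random.default_rng(2)
content = rng.standard_normal((16, 8))   # e.g. photo feature vectors (hypothetical)
style = rng.standard_normal((10, 8))     # e.g. caricature style features (hypothetical)
fused = style_attention(content, style)
print(fused.shape)  # (16, 8)
```

Because the attention weights are content-dependent, each region of the face can draw on the most relevant style exemplars rather than a single global style code, which is one intuition for why such fusion reduces collapsed outputs.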

To verify the effectiveness of deep learning based facial attribute analysis, various facial databases have been collected and published for algorithm training and testing. However, Monteiro et al. provide evidence that facial asymmetry measurements may not be fully comparable across different populations, which could affect the reliability of research conducted with different databases. They examined differences between two face databases, the Chicago Face Database (composed of Asian, Black, Latin, White, and Multiracial subjects) and the LACOP Database (composed of Brazilian subjects), and found consistent differences between the two when the ethnicity of the faces is taken into account. These results reinforce the need to create and expand multi-ethnic face image databases, enabling face research in specific populations. Their findings are important not only for face perception research but also for computer vision.

The collection of studies presented in this Research Topic of Frontiers in Neuroscience will encourage integration and cross-fertilization across disciplines and promote further development in this field.

Author contributions

YK: Funding acquisition, Investigation, Writing—original draft. KZ: Funding acquisition, Writing—review and editing. LZ: Writing—review and editing. GW: Writing—review and editing.

Funding

YK was supported by the Fundamental Research Funds for the Central Universities, Grant/Award Number: 2020YJ006. KZ was supported by the National Natural Science Foundation of China, Grant/Award Numbers: 62076093 and 61871182.

Acknowledgments

We thank KZ, LZ, and GW for critically reviewing and editing the manuscript. We also thank Jing Yuan, Ph.D., Submissions Manager, Neurosciences, and the Frontiers Editorial Office.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Ge, S., Zhao, S., and Li, C. (2016). Low-resolution face recognition in the wild via selective knowledge distillation. IEEE Trans. Image Process. 28, 2051–2062. doi: 10.1109/TIP.2018.2883743

Hu, J., Lu, J., and Tan, Y.-P. (2014). “Discriminative deep metric learning for face verification in the wild,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Columbus, OH: IEEE). doi: 10.1109/CVPR.2014.242

Kong, Y., Zhang, S., Zhang, K., Ni, Q., and Han, J. (2022). Real-time facial expression recognition based on iterative transfer learning and efficient attention network. IET Image Process. 16, 1694–1708. doi: 10.1049/ipr2.12441

Li, S., and Deng, W. (2022). “Deep facial expression recognition: a survey,” in IEEE Transactions on Affective Computing, Vol. 13 (IEEE), 1195–1215. doi: 10.1109/TAFFC.2020.2981446

Yang, X., Liu, C., Xu, L., Wang, Y., Dong, Y., Chen, N., et al. (2023). “Towards effective adversarial textured 3d meshes on physical face recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Vancouver, BC).

Keywords: facial attribute, age estimation, face recognition, facial expression recognition, facial caricature generation, autistic traits

Citation: Kong Y, Zhang K, Zhang L and Wu G (2023) Editorial: Deep facial attribute analysis. Front. Neurosci. 17:1280831. doi: 10.3389/fnins.2023.1280831

Received: 21 August 2023; Accepted: 28 August 2023;
Published: 05 September 2023.

Edited and reviewed by: Benjamin Thompson, University of Waterloo, Canada

Copyright © 2023 Kong, Zhang, Zhang and Wu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Yinghui Kong, kongyhbd2015@ncepu.edu.cn