
FDEA: Face Dataset with Ethnicity Attribute

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 13022)

Abstract

Face attributes play an important role in face-related applications. However, most existing face attributes (such as expression, age, and skin color) are subject to change over time. The ethnicity attribute is valuable precisely because it is time-invariant, yet it has not been well developed, partly because no sufficiently large dataset with accurate ethnicity labels exists. This paper proposes a new Face Dataset with Ethnicity Attribute (FDEA), intended as a benchmark for ethnicity recognition. For this purpose, we first collect an initial face dataset from CelebA and LFWA [10], MORPH [13], UTKFace [20], FairFace [8], and the web. Since the samples extracted from CelebA carry no ethnicity labels, we employ nine annotators to label them, and we manually clean the remaining samples. The resulting FDEA contains 157,801 samples divided into three classes: Caucasian (54,438), Asian (61,522), and African (41,841). Moreover, we carry out a benchmark experiment by testing eight mainstream backbones on FDEA; the baseline three-class classification accuracies all exceed 0.92. FDEA is publicly available at https://github.com/GZHU-DVL/FDEA.
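For readers who want a concrete starting point, the sketch below shows how one of the benchmarked backbones could be fine-tuned for FDEA's three-class task. It is a minimal illustration, not the authors' code: the torchvision ResNet-18 backbone, the FDEA/train/{Caucasian,Asian,African} directory layout, and all hyperparameters are assumptions, and the released dataset may be organized differently.

```python
# Minimal sketch of a three-class ethnicity baseline (assumed setup, not the paper's code).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing; the paper's exact pipeline is not specified here.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes one sub-directory per class: FDEA/train/{Caucasian,Asian,African}.
train_set = datasets.ImageFolder("FDEA/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

# ImageNet-pretrained ResNet-18 with its classifier replaced by a 3-way head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One pass over the training set; a real baseline would run multiple epochs
# and report accuracy on a held-out split.
model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```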


Notes

  1. https://www.google.com/imghp.
  2. https://www.wikipedia.org.
  3. https://www.google.com.
  4. https://www.bing.com.
  5. dlib.net.

References

  1. Afifi, M., Abdelhamed, A.: AFIF4: deep gender classification based on AdaBoost-based fusion of isolated facial features and foggy faces. J. Vis. Commun. Image Represent. 62, 77–86 (2019)
  2. Banks, M.: Ethnicity: Anthropological Constructions. Routledge, New York (1996)
  3. Boutellaa, E., Hadid, A., Bengherabi, M., Ait-Aoudia, S.: On the use of Kinect depth data for identity, gender and ethnicity classification from facial images. Pattern Recogn. Lett. 68, 270–277 (2015)
  4. Chaudhuri, B., Vesdapunt, N., Wang, B.: Joint face detection and facial motion retargeting for multiple faces. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9719–9728 (2019)
  5. Das, A., Dantcheva, A., Bremond, F.: Mitigating bias in gender, age and ethnicity classification: a multi-task convolution neural network approach. In: Proceedings of the European Conference on Computer Vision Workshops (ECCVW), pp. 1–13 (2018)
  6. Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., Darrell, T.: DeCAF: a deep convolutional activation feature for generic visual recognition. In: Proceedings of the 31st International Conference on Machine Learning (ICML), pp. 647–655 (2014)
  7. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016)
  8. Karkkainen, K., Joo, J.: FairFace: face attribute dataset for balanced race, gender, and age. arXiv:1908.04913 (2019)
  9. Li, S., Deng, W., Du, J.: Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2852–2861 (2017)
  10. Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 3730–3738 (2015)
  11. Miller, D., Brossard, E., Seitz, S., Kemelmacher-Shlizerman, I.: MegaFace: a million faces for recognition at scale. arXiv:1505.02108 (2015)
  12. Narang, N., Bourlai, T.: Gender and ethnicity classification using deep learning in heterogeneous face recognition. In: Proceedings of the IEEE International Conference on Biometrics (ICB), pp. 1–8 (2016)
  13. Ricanek, K., Tesafaye, T.: MORPH: a longitudinal image database of normal adult age-progression. In: Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FG), pp. 341–345 (2006)
  14. Rothe, R., Timofte, R., Gool, L.V.: DEX: deep expectation of apparent age from a single image. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 10–15 (2015)
  15. Rothe, R., Timofte, R., Gool, L.V.: Deep expectation of real and apparent age from a single image without facial landmarks. Int. J. Comput. Vis. 126, 144–157 (2018)
  16. Sun, Y., Yu, J.: General-to-specific learning for facial attribute classification in the wild. J. Vis. Commun. Image Represent. 56, 83–91 (2018)
  17. Wang, M., Deng, W., Hu, J., Tao, X., Huang, Y.: Racial faces in the wild: reducing racial bias by information maximization adaptation network. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 692–702 (2019)
  18. Wu, W., Qian, C., Yang, S., Wang, Q., Cai, Y., Zhou, Q.: Look at boundary: a boundary-aware face alignment algorithm. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2129–2138 (2018)
  19. Zhang, S., Chi, C., Lei, Z., Li, S.Z.: RefineFace: refinement neural network for high performance face detection. arXiv:1909.04376 (2019)
  20. Zhang, Z., Song, Y., Qi, H.: Age progression/regression by conditional adversarial autoencoder. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5810–5818 (2017)


Acknowledgement

The authors would like to thank Peixin Tian for his help in using an online mapping engine for class judgment on confusing images. This work is supported in part by the National Natural Science Foundation of China under Grant 61872099, in part by the Science and Technology Program of Guangzhou under Grant 201904010478, and in part by the Scientific Research Project of Guangzhou University under Grant YJ2021004.

Author information


Corresponding author

Correspondence to Yuan-Gen Wang.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, J., Liu, T., Ou, F.Z., Wang, Y.G. (2021). FDEA: Face Dataset with Ethnicity Attribute. In: Ma, H., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2021. Lecture Notes in Computer Science, vol. 13022. Springer, Cham. https://doi.org/10.1007/978-3-030-88013-2_36

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-88013-2_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88012-5

  • Online ISBN: 978-3-030-88013-2

  • eBook Packages: Computer Science, Computer Science (R0)
