Suppressing Uncertainty in Gaze Estimation

Authors

  • Shijing Wang Beijing Key Laboratory of Traffic Data Analysis and Mining, Beijing Jiaotong University, China
  • Yaping Huang Beijing Key Laboratory of Traffic Data Analysis and Mining, Beijing Jiaotong University, China

DOI:

https://doi.org/10.1609/aaai.v38i6.28368

Keywords:

CV: Biometrics, Face, Gesture & Pose, ML: Calibration & Uncertainty Quantification

Abstract

Uncertainty in gaze estimation manifests in two aspects: 1) low-quality images caused by occlusion, blurriness, inconsistent eye movements, or even non-face images; 2) incorrect labels resulting from misalignment between the labeled and actual gaze points during annotation. Allowing these uncertainties to participate in training hinders the improvement of gaze estimation. To tackle these challenges, in this paper we propose an effective solution, named Suppressing Uncertainty in Gaze Estimation (SUGE), which introduces a novel triplet-label consistency measurement to estimate and reduce the uncertainties. Specifically, for each training sample we estimate a novel "neighboring label," computed as a linearly weighted projection from the labels of its neighbors, to capture the similarity relationship between image features and their corresponding labels; this neighboring label is then combined with the predicted pseudo label and the ground-truth label for uncertainty estimation. By modeling such triplet-label consistency, we can largely reduce the negative effects of unqualified images and wrong labels through our designed sample weighting and label correction strategies. Experimental results on gaze estimation benchmarks indicate that our proposed SUGE achieves state-of-the-art performance.
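The two core ideas in the abstract, a "neighboring label" formed by a linearly weighted projection of neighbors' labels in feature space, and a sample weight derived from the agreement among the neighboring, pseudo, and ground-truth labels, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: the similarity measure, weighting rule, and the `tau` temperature are assumptions.

```python
import numpy as np

def neighboring_labels(features, labels, k=5):
    """Estimate each sample's 'neighboring label' as a similarity-weighted
    average of the gaze labels of its k nearest neighbors in feature space
    (sketch; the paper's exact projection/weighting may differ)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                      # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)     # exclude the sample itself
    out = np.zeros_like(labels)
    for i in range(features.shape[0]):
        idx = np.argsort(sim[i])[-k:]  # k most similar neighbors
        w = np.exp(sim[i, idx])
        w /= w.sum()                   # convex (linearly weighted) combination
        out[i] = w @ labels[idx]       # weighted projection of neighbor labels
    return out

def triplet_weights(neigh, pseudo, gt, tau=5.0):
    """Down-weight samples whose triplet of labels (neighboring, pseudo,
    ground truth) disagrees; hypothetical rule mapping total pairwise
    disagreement to a weight in (0, 1]."""
    d = (np.linalg.norm(neigh - gt, axis=1)
         + np.linalg.norm(pseudo - gt, axis=1)
         + np.linalg.norm(neigh - pseudo, axis=1))
    return np.exp(-d / tau)
```

Samples with low weights (large triplet disagreement) would then contribute less to the training loss, or have their labels corrected toward the consensus of the triplet.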

Published

2024-03-24

How to Cite

Wang, S., & Huang, Y. (2024). Suppressing Uncertainty in Gaze Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5581-5589. https://doi.org/10.1609/aaai.v38i6.28368

Section

AAAI Technical Track on Computer Vision V