TrEP: Transformer-Based Evidential Prediction for Pedestrian Intention with Uncertainty

Authors

  • Zhengming Zhang, Purdue University
  • Renran Tian, Indiana University-Purdue University Indianapolis
  • Zhengming Ding, Tulane University

DOI:

https://doi.org/10.1609/aaai.v37i3.25463

Keywords:

CV: Vision for Robotics & Autonomous Driving, HAI: Human Computation, ML: Deep Neural Network Algorithms, APP: Mobility, Driving & Flight

Abstract

With the rapid development of hardware (sensors and processors) and AI algorithms, automated driving technology has entered the public's daily life and achieved great success in supporting human driving performance. However, due to the high contextual variation and temporal dynamics of pedestrian behaviors, the interaction between autonomous vehicles and pedestrians remains challenging, impeding the development of fully autonomous driving systems. This paper focuses on predicting pedestrian intention with a novel transformer-based evidential prediction (TrEP) algorithm. We develop a transformer module to capture the temporal correlations among input features within pedestrian video sequences, and a deep evidential learning model to capture the AI uncertainty under scene complexity. Experimental results on three popular pedestrian intent benchmarks verify the effectiveness of our proposed model over the state-of-the-art. The algorithm's performance can be further boosted by controlling the uncertainty level. We systematically compare human disagreements with AI uncertainty to further evaluate AI performance in confusing scenes. The code is released at https://github.com/zzmonlyyou/TrEP.git.
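To make the abstract's two components concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' released implementation; see the GitHub link above for that) of the general idea: a transformer encoder over per-frame pedestrian features followed by an evidential (Dirichlet) head that outputs non-negative evidence for crossing vs. not-crossing, yielding both a class probability and an uncertainty estimate. All names and dimensions here are illustrative assumptions.

```python
# Hypothetical sketch of a transformer + evidential head for pedestrian intent.
# Not the authors' code; names, dimensions, and pooling choices are assumptions.
import torch
import torch.nn as nn


class EvidentialIntentModel(nn.Module):
    def __init__(self, feat_dim=512, n_heads=8, n_layers=2, n_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, n_classes)  # produces evidence logits

    def forward(self, x):
        # x: (batch, seq_len, feat_dim) per-frame features from a video clip
        h = self.encoder(x).mean(dim=1)          # temporal pooling over frames
        evidence = torch.relu(self.head(h))      # non-negative evidence
        alpha = evidence + 1.0                   # Dirichlet concentration parameters
        strength = alpha.sum(dim=-1, keepdim=True)
        prob = alpha / strength                  # expected class probabilities
        uncertainty = alpha.size(-1) / strength  # vacuity-style uncertainty in (0, 1]
        return prob, uncertainty


# Example: a batch of 4 clips, 16 frames each, with 512-d per-frame features.
model = EvidentialIntentModel()
prob, unc = model(torch.randn(4, 16, 512))
print(prob.shape, unc.shape)  # torch.Size([4, 2]) torch.Size([4, 1])
```

Such a head lets predictions be filtered or reweighted by the uncertainty value, which is the kind of "controlling the uncertainty level" the abstract refers to when reporting boosted performance.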

Published

2023-06-26

How to Cite

Zhang, Z., Tian, R., & Ding, Z. (2023). TrEP: Transformer-Based Evidential Prediction for Pedestrian Intention with Uncertainty. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3534-3542. https://doi.org/10.1609/aaai.v37i3.25463

Section

AAAI Technical Track on Computer Vision III