READ: Large-Scale Neural Scene Rendering for Autonomous Driving

Authors

  • Zhuopeng Li, Zhejiang University
  • Lu Li, Zhejiang University
  • Jianke Zhu, Zhejiang University; Alibaba-Zhejiang University Joint Institute of Frontier Technologies

DOI:

https://doi.org/10.1609/aaai.v37i2.25238

Keywords:

CV: Vision for Robotics & Autonomous Driving, ROB: Applications

Abstract

With the development of advanced driver assistance systems (ADAS) and autonomous vehicles, the need to conduct experiments across a wide variety of scenarios has become urgent. Although conventional image-to-image translation methods can synthesize photo-realistic street scenes, they cannot produce coherent scenes because they lack 3D information. In this paper, we propose READ, a large-scale neural rendering method for synthesizing autonomous driving scenes, which makes it possible to generate large-scale driving scenes in real time on a PC through a variety of sampling schemes. To represent driving scenarios effectively, we propose an ω-net rendering network that learns neural descriptors from sparse point clouds. Our model can not only synthesize photo-realistic driving scenes but also stitch and edit them. Experimental results show that our model performs well in large-scale driving scenarios.
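
The pipeline the abstract describes (learnable neural descriptors attached to a sparse point cloud, rasterized into a feature image and translated to RGB by a rendering network) can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical illustration, not the authors' implementation: `PointDescriptors`, `rasterize`, and `RenderNet` are stand-in names, the rasterizer is a naive single-scale z-buffer splat, and the tiny convolutional net is only a placeholder for the paper's ω-net.

```python
# Hypothetical sketch of point-based neural rendering in the spirit of READ.
# Per-point "neural descriptors" are learned jointly with a rendering network.
import torch
import torch.nn as nn

class PointDescriptors(nn.Module):
    """Learnable D-dimensional descriptor for each point of a sparse cloud."""
    def __init__(self, num_points: int, dim: int = 8):
        super().__init__()
        self.desc = nn.Parameter(torch.randn(num_points, dim) * 0.01)

def rasterize(points_xyz, desc, K, H, W):
    """Naive z-buffer splat of point descriptors into an H x W feature image.

    points_xyz: (N, 3) points in camera coordinates (z > 0 is in front).
    K: (3, 3) pinhole intrinsics. Returns a (D, H, W) feature map.
    """
    D = desc.shape[1]
    feat = torch.zeros(D, H, W)
    depth = torch.full((H, W), float("inf"))
    uvw = (K @ points_xyz.T).T                   # project points to pixels
    u = (uvw[:, 0] / uvw[:, 2]).long()
    v = (uvw[:, 1] / uvw[:, 2]).long()
    z = points_xyz[:, 2]
    keep = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (z > 0)
    for i in keep.nonzero(as_tuple=True)[0]:     # loop for clarity, not speed
        if z[i] < depth[v[i], u[i]]:             # keep only the closest point
            depth[v[i], u[i]] = z[i]
            feat[:, v[i], u[i]] = desc[i]
    return feat

class RenderNet(nn.Module):
    """Tiny convolutional stand-in for the paper's ω-net rendering network."""
    def __init__(self, dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )
    def forward(self, x):
        return self.net(x)

# Usage: render one frame from a random cloud, take an L1 photometric loss.
N, H, W = 2000, 120, 160
pts = torch.rand(N, 3) * torch.tensor([4.0, 3.0, 10.0]) \
      - torch.tensor([2.0, 1.5, -1.0])           # x, y spread; z in [1, 11]
K = torch.tensor([[100.0, 0.0, W / 2],
                  [0.0, 100.0, H / 2],
                  [0.0, 0.0, 1.0]])
descs = PointDescriptors(N)
net = RenderNet()
feat = rasterize(pts, descs.desc, K, H, W)
rgb = net(feat.unsqueeze(0))                     # (1, 3, H, W) rendered image
loss = (rgb - torch.rand(1, 3, H, W)).abs().mean()
loss.backward()                                  # gradients reach the descriptors
```

Training then amounts to jointly optimizing the per-point descriptors and the network weights against captured camera images; the multi-scale sampling schemes and the full ω-net architecture described in the paper are omitted here.
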

Published

2023-06-26

How to Cite

Li, Z., Li, L., & Zhu, J. (2023). READ: Large-Scale Neural Scene Rendering for Autonomous Driving. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1522-1529. https://doi.org/10.1609/aaai.v37i2.25238

Section

AAAI Technical Track on Computer Vision II