Decoupling Degradations with Recurrent Network for Video Restoration in Under-Display Camera

Authors

  • Chengxu Liu (Xi'an Jiaotong University; Shaanxi Yulan Jiuzhou Intelligent Optoelectronic Technology Co., Ltd)
  • Xuan Wang (MEGVII Technology)
  • Yuanting Fan (Xi'an Jiaotong University)
  • Shuai Li (MEGVII Technology)
  • Xueming Qian (Xi'an Jiaotong University; Shaanxi Yulan Jiuzhou Intelligent Optoelectronic Technology Co., Ltd)

DOI:

https://doi.org/10.1609/aaai.v38i4.28144

Keywords:

CV: Low Level & Physics-based Vision

Abstract

Under-display camera (UDC) systems are the foundation of full-screen display devices, in which the lens is mounted under the display. The pixel array of light-emitting diodes used for display diffracts and attenuates incident light, causing various degradations as the light intensity changes. Unlike general video restoration, which recovers video by treating different degradation factors equally, video restoration for UDC systems is more challenging: it must remove diverse degradations over time while preserving temporal consistency. In this paper, we introduce a novel video restoration network, called D2RNet, specifically designed for UDC systems. It employs a set of Decoupling Attention Modules (DAM) that effectively separate the various video degradation factors. More specifically, a soft mask generation function is proposed to decompose each frame into flare and haze components based on the diffraction arising from incident light of different intensities, followed by the proposed flare and haze removal components that leverage long- and short-term feature learning to handle the respective degradations. Such a design offers a targeted and effective solution for eliminating the various types of degradation in UDC systems. We further extend our design to a multi-scale architecture to handle the scale changes of degradation that often occur in long-range videos. To demonstrate the superiority of D2RNet, we propose a large-scale UDC video benchmark by gathering HDR videos and generating realistically degraded videos using the point spread function measured by a commercial UDC system. Extensive quantitative and qualitative evaluations demonstrate the superiority of D2RNet compared to other state-of-the-art video restoration and UDC image restoration methods.
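The abstract describes decoupling each frame into flare and haze components via a soft mask driven by incident light intensity. The sketch below is only an illustration of that idea, not the authors' actual formulation: the class name SoftMaskSplit, the luminance-based mask, and the threshold/sharpness parameters are assumptions made for exposition; the paper's DAM and removal branches are not reproduced here.

```python
# Illustrative sketch of an intensity-based soft decoupling step (assumed form,
# not D2RNet's actual soft mask generation function).
import torch
import torch.nn as nn


class SoftMaskSplit(nn.Module):
    """Split a frame into flare- and haze-dominated parts with a soft mask."""

    def __init__(self, threshold: float = 0.8, sharpness: float = 20.0):
        super().__init__()
        self.threshold = threshold   # assumed luminance level above which flare dominates
        self.sharpness = sharpness   # assumed steepness of the soft transition

    def forward(self, frame: torch.Tensor):
        # frame: (B, 3, H, W) in [0, 1]; luminance approximated by the RGB mean
        luminance = frame.mean(dim=1, keepdim=True)
        # Soft mask: close to 1 in bright (flare-prone) regions, close to 0 in dim (haze-prone) regions
        mask = torch.sigmoid(self.sharpness * (luminance - self.threshold))
        flare_part = frame * mask          # would feed a flare-removal branch
        haze_part = frame * (1.0 - mask)   # would feed a haze-removal branch
        return flare_part, haze_part, mask


if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64)
    flare, haze, m = SoftMaskSplit()(x)
    print(flare.shape, haze.shape, m.shape)
```

In such a scheme, the soft (sigmoid) transition lets the two branches share responsibility near the flare/haze boundary instead of making a hard per-pixel assignment.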

Published

2024-03-24

How to Cite

Liu, C., Wang, X., Fan, Y., Li, S., & Qian, X. (2024). Decoupling Degradations with Recurrent Network for Video Restoration in Under-Display Camera. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3558-3566. https://doi.org/10.1609/aaai.v38i4.28144

Issue

Vol. 38 No. 4 (2024)

Section

AAAI Technical Track on Computer Vision III