BARET: Balanced Attention Based Real Image Editing Driven by Target-Text Inversion

Authors

  • Yuming Qiao (Tsinghua University; OPPO Research Institute)
  • Fanyi Wang (OPPO Research Institute)
  • Jingwen Su (OPPO Research Institute)
  • Yanhao Zhang (OPPO Research Institute)
  • Yunjie Yu (OPPO Research Institute)
  • Siyu Wu (Zhejiang University)
  • Guo-Jun Qi (OPPO Research Institute; Westlake University)

DOI:

https://doi.org/10.1609/aaai.v38i5.28255

Keywords:

CV: Computational Photography, Image & Video Synthesis, CV: Applications, CV: Language and Vision, CV: Multi-modal Vision

Abstract

Image editing approaches based on diffusion models have developed rapidly, yet their applicability is limited by requirements such as specific editing types (e.g., foreground or background object editing, style transfer), multiple conditions (e.g., mask, sketch, caption), and time-consuming fine-tuning of the diffusion model. To alleviate these limitations and realize efficient real-image editing, we propose a novel editing technique that requires only an input image and a target text to support various editing types, including non-rigid edits, without fine-tuning the diffusion model. Our method contains three novelties: (I) the Target-text Inversion Schedule (TTIS) fine-tunes the input target-text embedding to achieve fast image reconstruction without an image caption and to accelerate convergence; (II) the Progressive Transition Scheme applies progressive linear interpolation between the target text embedding and its fine-tuned version to generate a transition embedding that maintains non-rigid editing capability; (III) the Balanced Attention Module (BAM) balances the trade-off between textual description and image semantics: by combining the self-attention map from the reconstruction process with the cross-attention map from the transition process, it optimizes the guidance of the target text embedding in the diffusion process. To demonstrate the editing capability, effectiveness, and efficiency of the proposed BARET, we conducted extensive qualitative and quantitative experiments. Moreover, results from a user study and an ablation study further prove its superiority over other methods.
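The Progressive Transition Scheme described above can be sketched as a simple linear interpolation between the two embeddings. The sketch below is illustrative only: the function name, the interpolation direction, and the uniform step schedule are assumptions, as the abstract does not specify the exact schedule used in BARET.

```python
import numpy as np

def transition_embeddings(target_emb: np.ndarray,
                          tuned_emb: np.ndarray,
                          num_steps: int) -> list[np.ndarray]:
    """Progressively interpolate between the target text embedding and its
    fine-tuned version (a hypothetical sketch of the Progressive Transition
    Scheme; the paper's actual step schedule may differ).

    At interpolation weight t = 0 the result is the original target embedding;
    at t = 1 it is the fine-tuned embedding used for reconstruction.
    """
    weights = np.linspace(0.0, 1.0, num_steps)  # uniform schedule (assumed)
    return [(1.0 - t) * target_emb + t * tuned_emb for t in weights]
```

Intermediate embeddings from such a scheme would then condition the diffusion steps, trading off fidelity to the reconstruction against the semantics of the target text.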

Published

2024-03-24

How to Cite

Qiao, Y., Wang, F., Su, J., Zhang, Y., Yu, Y., Wu, S., & Qi, G.-J. (2024). BARET: Balanced Attention Based Real Image Editing Driven by Target-Text Inversion. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4560-4568. https://doi.org/10.1609/aaai.v38i5.28255

Section

AAAI Technical Track on Computer Vision IV