MA-GAN: the style transfer model based on multi-adaptive generative adversarial networks
Min Zhao, XueZhong Qian, Wei Chen
Abstract

Existing style transfer methods, when guided by aesthetic cues, fail to preserve the texture structure of style images, losing a large amount of texture detail and degrading the visual result. To address this, we propose a style transfer model based on multi-adaptive generative adversarial networks (MA-GAN). Specifically, the aesthetic ability learned by the discriminator is used for feature extraction, and the resulting, more generalized features are passed to a multi-attention aesthetic module comprising a collaborative self-attention (CSA) module and a self-attention normalization (SAN) module. The CSA module computes correlations between the entangled style features and the aesthetic features, capturing texture details and the geometric structure of the style. The SAN module balances the content's semantic structure against the style's geometric structure, integrating the style pattern harmoniously into the content image and thus achieving a more effective style transfer. Extensive qualitative and quantitative experiments demonstrate the superiority of MA-GAN in visual quality, enabling the synthesis of art images with smooth brushstrokes and rich colors.
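The abstract does not give implementation details, so the following PyTorch sketch is only one plausible reading of the two modules, not the authors' code. The class names CollaborativeSelfAttention and SelfAttentionNorm, the 1x1 projection layers, and the residual connection are illustrative assumptions; SAN is written in the spirit of attention-weighted normalization methods such as AdaAttN.

    # Hedged sketch of the CSA and SAN modules described in the abstract.
    # Shapes, projections, and the residual connection are assumptions for
    # illustration; the paper's actual architecture may differ.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CollaborativeSelfAttention(nn.Module):
        """CSA (assumed form): cross-attention correlating style features
        with the aesthetic features extracted by the discriminator."""

        def __init__(self, channels: int):
            super().__init__()
            self.q = nn.Conv2d(channels, channels, 1)  # queries from style features
            self.k = nn.Conv2d(channels, channels, 1)  # keys from aesthetic features
            self.v = nn.Conv2d(channels, channels, 1)  # values from aesthetic features

        def forward(self, style: torch.Tensor, aesthetic: torch.Tensor) -> torch.Tensor:
            b, c, h, w = style.shape
            q = self.q(style).flatten(2).transpose(1, 2)      # (B, HW, C)
            k = self.k(aesthetic).flatten(2)                  # (B, C, HW)
            v = self.v(aesthetic).flatten(2).transpose(1, 2)  # (B, HW, C)
            attn = F.softmax(q @ k / c ** 0.5, dim=-1)        # (B, HW, HW)
            out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
            return style + out  # residual connection (assumption)

    class SelfAttentionNorm(nn.Module):
        """SAN (assumed form): re-normalizes content features with
        attention-weighted per-position style statistics, so the style
        pattern follows the content's semantic layout."""

        def __init__(self, channels: int):
            super().__init__()
            self.q = nn.Conv2d(channels, channels, 1)
            self.k = nn.Conv2d(channels, channels, 1)

        def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
            b, c, h, w = content.shape
            q = self.q(content).flatten(2).transpose(1, 2)   # (B, HW_c, C)
            k = self.k(style).flatten(2)                     # (B, C, HW_s)
            attn = F.softmax(q @ k / c ** 0.5, dim=-1)       # (B, HW_c, HW_s)
            s = style.flatten(2).transpose(1, 2)             # (B, HW_s, C)
            mean = attn @ s                                  # per-position style mean
            var = attn @ (s * s) - mean * mean               # per-position style variance
            std = var.clamp(min=1e-6).sqrt()
            c_flat = content.flatten(2).transpose(1, 2)      # (B, HW_c, C)
            c_norm = (c_flat - c_flat.mean(1, keepdim=True)) / (c_flat.std(1, keepdim=True) + 1e-6)
            out = c_norm * std + mean                        # attention-weighted re-normalization
            return out.transpose(1, 2).reshape(b, c, h, w)

    # Quick shape check with random tensors
    csa = CollaborativeSelfAttention(64)
    san = SelfAttentionNorm(64)
    style_feat, aes_feat, content_feat = (torch.randn(1, 64, 32, 32) for _ in range(3))
    out = san(content_feat, csa(style_feat, aes_feat))
    print(out.shape)  # torch.Size([1, 64, 32, 32])

In this reading, CSA injects aesthetic cues into the style features via cross-attention, while SAN replaces the content features' own statistics with attention-weighted style statistics, which matches the abstract's description of balancing the content's semantic structure against the style's geometric structure.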

© 2024 SPIE and IS&T
Min Zhao, XueZhong Qian, and Wei Chen "MA-GAN: the style transfer model based on multi-adaptive generative adversarial networks," Journal of Electronic Imaging 33(3), 033017 (16 May 2024). https://doi.org/10.1117/1.JEI.33.3.033017
Received: 6 January 2024; Accepted: 7 May 2024; Published: 16 May 2024
KEYWORDS: Color, Semantics, Feature extraction, Image quality, Education and training, Visualization, Data modeling
