Abstract
Deep image denoising networks have achieved impressive success with the help of large amounts of synthetic training data. However, real-world denoising remains a challenging problem due to the dissimilarity between the distributions of real and synthetic noisy images. Although several real-world noisy datasets have been presented, the number of training pairs (i.e., clean and real noisy images) is limited, and acquiring more real noisy data is laborious and expensive. To mitigate this problem, numerous attempts to simulate real noise with generative models have been studied. Nevertheless, previous works had to train multiple networks to handle multiple different noise distributions. By contrast, we propose a new generative model that can synthesize noisy images with multiple different noise distributions. Specifically, we adopt recent contrastive learning to learn distinguishable latent features of the noise. Moreover, our model can generate new noisy images by transferring the noise characteristics from only a single reference noisy image. We demonstrate the accuracy and effectiveness of our noise model for both known and unknown noise removal.
Code is available at https://github.com/shlee0/NoiseTransfer.
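The contrastive objective mentioned in the abstract can be illustrated with a minimal InfoNCE-style loss that pulls an embedding toward a sample from the same noise distribution and pushes it away from samples of other distributions. This is a generic sketch under our own assumptions (embedding shapes, a temperature of 0.1), not the authors' implementation:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss.

    anchor, positive: (B, D) embeddings; each positive shares the anchor's
    noise distribution. negatives: (B, N, D) embeddings from other
    distributions. Returns the mean cross-entropy of classifying the
    positive among the negatives.
    """
    def normalize(x):
        # L2-normalize so dot products become cosine similarities.
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    a, p, n = normalize(anchor), normalize(positive), normalize(negatives)
    pos = np.sum(a * p, axis=-1, keepdims=True)   # (B, 1) positive logits
    neg = np.einsum("bd,bnd->bn", a, n)           # (B, N) negative logits
    logits = np.concatenate([pos, neg], axis=1) / temperature

    # Numerically stable log-softmax; the positive sits at index 0.
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[:, 0].mean()
```

Minimizing such a loss encourages embeddings of patches corrupted by the same noise model to cluster, which is what makes the latent noise features "distinguishable" across distributions.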
Notes
1. Histograms are computed with 256 bins evenly distributed in [−256, 256].
2. For visualization, the MATLAB camera pipeline code (https://github.com/AbdoKamel/simple-camera-pipeline) is used.
3. The comparison covers only the first 60 training epochs, which is sufficient to confirm the effect of the proposed losses while allowing fair and efficient ablations.
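The histogram computation described in note 1 can be sketched as follows; `noise_histogram` is a hypothetical helper written for illustration, not code from the paper:

```python
import numpy as np

def noise_histogram(noisy, clean, bins=256, value_range=(-256, 256)):
    """Normalized histogram of the residual noise (noisy - clean),
    following note 1: 256 bins evenly spaced over [-256, 256]."""
    noise = noisy.astype(np.float64) - clean.astype(np.float64)
    hist, edges = np.histogram(noise, bins=bins, range=value_range)
    return hist / hist.sum(), edges

# Example: zero-mean Gaussian read noise on an 8-bit image.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = clean + rng.normal(0, 10, size=clean.shape)
hist, edges = noise_histogram(noisy, clean)
```

With 256 bins over [−256, 256], each bin is 2 intensity levels wide, so zero-mean noise peaks near the center bins of the histogram.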
Acknowledgements
This work was supported by Samsung Electronics Co., Ltd, and Samsung Research Funding Center of Samsung Electronics under Project Number SRFCIT1901-06.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Lee, S., Kim, T.H. (2023). NoiseTransfer: Image Noise Generation with Contrastive Embeddings. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13843. Springer, Cham. https://doi.org/10.1007/978-3-031-26313-2_20
DOI: https://doi.org/10.1007/978-3-031-26313-2_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-26312-5
Online ISBN: 978-3-031-26313-2
eBook Packages: Computer Science (R0)