Abstract
Neural networks have become a prominent approach to solving inverse problems in recent years. Among the existing methods, the Deep Image/Inverse Prior (DIP) technique is an unsupervised approach that optimizes a highly overparametrized neural network to transform a random input into an object whose image under the forward model matches the observation. However, the level of overparametrization necessary for such methods remains an open problem. In this work, we investigate this question for a two-layer neural network with a smooth activation function. We provide overparametrization bounds under which such a network, trained via continuous-time gradient descent, converges exponentially fast with high probability, which allows us to derive recovery prediction bounds. This work is thus a first step towards a theoretical understanding of overparametrized DIP networks, and more broadly it contributes to the theoretical understanding of neural networks in inverse problem settings.
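To make the setting concrete, the following is a minimal sketch (not the authors' code) of the Deep Inverse Prior optimization described above: a two-layer network with a smooth activation maps a fixed random input z to a signal, and its weights are trained by gradient descent so that the forward model A applied to the network output matches the observation y. All dimensions, the operator A, the learning rate, and the iteration count are illustrative assumptions; the gradient-descent loop is a discrete surrogate of the continuous-time gradient flow analyzed in the paper.

```python
import torch

torch.manual_seed(0)

# Illustrative problem sizes: signal dimension n, number of measurements m,
# hidden width k taken large to mimic the overparametrized regime.
n, m, k = 64, 32, 4096
A = torch.randn(m, n) / m**0.5      # assumed linear forward operator
x_true = torch.randn(n)
y = A @ x_true                       # noiseless observation, for illustration

z = torch.randn(n)                   # fixed random input of the DIP network
W = torch.randn(k, n, requires_grad=True)   # first-layer weights (trained)
v = torch.randn(n, k) / k**0.5              # second-layer weights (kept fixed here)

def g(W):
    # Two-layer network with a smooth (softplus) activation.
    return v @ torch.nn.functional.softplus(W @ z)

opt = torch.optim.SGD([W], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    # Data-fidelity loss in observation space: match A g_theta(z) to y.
    loss = 0.5 * torch.sum((A @ g(W) - y) ** 2)
    loss.backward()
    opt.step()
```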
Acknowledgements
The authors thank the French National Research Agency (ANR) for funding the project ANR-19-CHIA-0017-01-DEEP-VISION.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Buskulic, N., Quéau, Y., Fadili, J. (2023). Convergence Guarantees of Overparametrized Wide Deep Inverse Prior. In: Calatroni, L., Donatelli, M., Morigi, S., Prato, M., Santacesaria, M. (eds) Scale Space and Variational Methods in Computer Vision. SSVM 2023. Lecture Notes in Computer Science, vol 14009. Springer, Cham. https://doi.org/10.1007/978-3-031-31975-4_31
DOI: https://doi.org/10.1007/978-3-031-31975-4_31
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-31974-7
Online ISBN: 978-3-031-31975-4
eBook Packages: Computer Science, Computer Science (R0)