Accelerated Variational PDEs for Efficient Solution of Regularized Inversion Problems

  • Published in: Journal of Mathematical Imaging and Vision

Abstract

We further develop a new framework, called PDE acceleration, by applying it to calculus of variations problems defined for general functions on \(\mathbb {R}^n\), obtaining efficient numerical algorithms to solve the resulting class of optimization problems based on simple discretizations of their corresponding accelerated PDEs. While the resulting family of PDEs and numerical schemes is quite general, we give special attention to their application to regularized inversion problems, with illustrative examples from some popular image processing applications. The method is a generalization of momentum, or accelerated, gradient descent to the PDE setting. For elliptic problems, the descent equations are a nonlinear damped wave equation, instead of a diffusion equation, and the acceleration is realized as an improvement in the CFL condition from \(\varDelta t\sim \varDelta x^{2}\) (for diffusion) to \(\varDelta t\sim \varDelta x\) (for wave equations). We work out several explicit as well as semi-implicit numerical schemes, together with their necessary stability constraints, and include recursive update formulations which allow minimal-effort adaptation of existing gradient descent PDE codes into the accelerated PDE framework. We explore these schemes more carefully for a broad class of regularized inversion applications, with special attention to quadratic, Beltrami, and total variation regularization, where the accelerated PDE takes the form of a nonlinear wave equation. Experimental examples demonstrate the application of these schemes to image denoising, deblurring, and inpainting, including comparisons against primal–dual, split Bregman, and ADMM algorithms.
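The wave-versus-diffusion distinction in the abstract can be made concrete in a few lines. The following toy 1D denoiser (our own illustration, not the authors' code; function and parameter names are ours) evolves the damped wave equation \(u_{tt} + a\,u_t = \lambda u_{xx} - (u - f)\), the accelerated flow for the quadratic energy \(E(u) = \tfrac{1}{2}\int (u-f)^2 + \lambda |u_x|^2\,dx\), with an explicit scheme whose time step scales like \(\varDelta t\sim \varDelta x\):

```python
import numpy as np

def accelerated_denoise(f, lam=1.0, a=1.0, dt=None, iters=500):
    """Explicit scheme for u_tt + a*u_t = lam*u_xx - (u - f) on a periodic grid.

    Illustrative sketch only: names and parameter choices are ours, not the
    paper's. Note the wave-type step dt ~ dx rather than the diffusion-type
    dt ~ dx^2 that plain gradient descent would require.
    """
    dx = 1.0
    if dt is None:
        dt = 0.5 * dx / np.sqrt(lam)          # safely inside the CFL limit
    u, u_prev = f.astype(float).copy(), f.astype(float).copy()
    for _ in range(iters):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        grad_E = (u - f) - lam * lap          # gradient of the energy E(u)
        # central difference in time for u_tt, midpoint rule for a*u_t
        u_next = (2 * u - (1 - a * dt / 2) * u_prev - dt**2 * grad_E) \
                 / (1 + a * dt / 2)
        u_prev, u = u, u_next
    return u
```

Swapping `grad_E` for the gradient of a Beltrami or total variation energy would give a nonlinear wave equation of the kind studied in the paper; the stability constraint then depends on the regularizer.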



Notes

  1. Nonconvex problems are also widely used; see, e.g., [17].

  2. Primal–dual and split Bregman also avoid the non-smoothness of the \(L^{1}\) norm, which is an issue in descent-based approaches and often requires some form of regularization.

  3. The term acceleration is also used in the sense that, in Nesterov acceleration, the gradient is forward-looking, i.e., computed ahead of the current state [16]. The semi-implicit case, which extends Nesterov's ODE framework, uses a similar look-ahead in its update scheme.

  4. A discrete version of what is often called the symbol of the underlying linear differential operator that is being approximated.

  5. For completeness, the first-order backward difference scheme can also be written recursively in the form \(\varDelta u^{n}=\left( 1-a\varDelta t\right) \varDelta u^{n-1}-\varDelta t^{2}\nabla E^{n}\).
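The "discrete symbol" mentioned in Note 4 is easy to verify numerically. The snippet below (our own check, not from the paper) applies the standard periodic second-difference stencil to a Fourier mode and confirms that the mode is simply scaled by the symbol \(-\tfrac{4}{\varDelta x^{2}}\sin ^{2}(k\varDelta x/2)\):

```python
import numpy as np

# Apply the 1D periodic Laplacian stencil to a Fourier mode e^{i k x}.
N, dx = 64, 1.0
x = dx * np.arange(N)
k = 2 * np.pi * 3 / (N * dx)              # wavenumber compatible with the grid
mode = np.exp(1j * k * x)
L_mode = (np.roll(mode, -1) - 2 * mode + np.roll(mode, 1)) / dx**2
symbol = -(4 / dx**2) * np.sin(k * dx / 2) ** 2   # the discrete symbol
# L_mode equals symbol * mode: the stencil acts diagonally on Fourier modes,
# which is what makes von Neumann stability analysis of such schemes tractable.
```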
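The recursive formulation in Note 5 is what makes retrofitting an existing gradient descent code cheap: one stores the previous increment and combines two terms. A minimal sketch (our own; `recursive_descent` and its arguments are illustrative names):

```python
import numpy as np

def recursive_descent(grad_E, u0, a=1.0, dt=0.1, iters=300):
    """Accelerated descent via the recursive update of Note 5:

        du^n = (1 - a*dt) * du^{n-1} - dt^2 * grad E(u^n),
        u^{n+1} = u^n + du^n.

    Replacing the (1 - a*dt) factor by 0 recovers plain gradient descent
    with step dt^2, so the modification to existing code is two lines.
    """
    u = np.asarray(u0, dtype=float).copy()
    du = np.zeros_like(u)
    for _ in range(iters):
        du = (1 - a * dt) * du - dt**2 * grad_E(u)
        u = u + du
    return u
```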

References

  1. Attouch, H., Goudou, X., Redont, P.: The heavy ball with friction method, I. The continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system. Commun. Contemp. Math. 2(01), 1–34 (2000)

  2. Aubert, G., Kornprobst, P.: Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations. Springer, New York (2006)

  3. Bähr, M., Breuß, M., Wunderlich, R.: Fast explicit diffusion for long-time integration of parabolic problems. In: AIP Conference Proceedings, vol. 1863, p. 410002. AIP Publishing (2017)

  4. Bottou, L.: Large-scale machine learning with stochastic gradient descent. In: Proceedings of COMPSTAT’2010, pp. 177–186. Springer (2010)

  5. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)

  6. Baravdish, G., Svensson, O., Gulliksson, M., Zhang, Y.: A damped flow for image denoising. arXiv preprint arXiv:1806.06732 (2018)

  7. Calatroni, L., Chambolle, A.: Backtracking strategies for accelerated descent methods with smooth composite objectives. arXiv preprint arXiv:1709.09004 (2017)

  8. Calder, J., Yezzi, A.: An accelerated PDE framework for efficient solutions of obstacle problems. Preprint (2018)

  9. Chambolle, A., Pock, T.: A first-order primal–dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)

  10. Chambolle, A., Pock, T.: An introduction to continuous optimization for imaging. Acta Numer. 25, 161–319 (2016)

  11. Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Trans. Image Process. 10(2), 266–277 (2001)

  12. Goldstein, T., Osher, S.: The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2(2), 323–343 (2009)

  13. Goudou, X., Munier, J.: The gradient and heavy ball with friction dynamical systems: the quasiconvex case. Math. Program. 116(1–2), 173–191 (2009)

  14. Hafner, D., Ochs, P., Weickert, J., Reißel, M., Grewenig, S.: FSI schemes: fast semi-iterative solvers for PDEs and optimisation methods. In: German Conference on Pattern Recognition, pp. 91–102. Springer (2016)

  15. Kimmel, R., Malladi, R., Sochen, N.: Image processing via the Beltrami operator. In: Asian Conference on Computer Vision, pp. 574–581. Springer (1998)

  16. Nesterov, Y.: A method of solving a convex programming problem with convergence rate \(O(1/k^{2})\). Sov. Math. Dokl. 27, 372–376 (1983)

  17. Perona, P., Malik, J.: Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629–639 (1990)

  18. Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)

  19. Ratner, V., Zeevi, Y.Y.: Image enhancement using elastic manifolds. In: 14th International Conference on Image Analysis and Processing (ICIAP 2007), pp. 769–774 (2007). https://doi.org/10.1109/ICIAP.2007.4362869

  20. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 60(1–4), 259–268 (1992)

  21. Sochen, N., Kimmel, R., Malladi, R.: A general framework for low level vision. IEEE Trans. Image Process. 7(3), 310–318 (1998)

  22. Su, W., Boyd, S., Candes, E.: A differential equation for modeling Nesterov’s accelerated gradient method: theory and insights. In: Advances in Neural Information Processing Systems, pp. 2510–2518 (2014)

  23. Sundaramoorthi, G., Yezzi, A.: Accelerated optimization in the PDE framework: formulations for the manifold of diffeomorphisms. arXiv:1804.02307 (2018)

  24. Sundaramoorthi, G., Yezzi, A.: Variational PDEs for acceleration on manifolds and applications to diffeomorphisms. In: Neural Information Processing Systems (2018)

  25. Sutskever, I., Martens, J., Dahl, G., Hinton, G.: On the importance of initialization and momentum in deep learning. In: International Conference on Machine Learning, pp. 1139–1147 (2013)

  26. Trefethen, L.N.: Finite Difference and Spectral Methods for Ordinary and Partial Differential Equations. Unpublished text, available at http://web.comlab.ox.ac.uk/oucl/work/nick.trefethen/pdetext.html (1996)

  27. Ward, C., Whitaker, N., Kevrekidis, I., Kevrekidis, P.: A toolkit for steady states of nonlinear wave equations: continuous time Nesterov and exponential time differencing schemes. arXiv:1710.05047 (2017)

  28. Weickert, J., Grewenig, S., Schroers, C., Bruhn, A.: Cyclic schemes for PDE-based image analysis. Int. J. Comput. Vis. 118(3), 275–299 (2016)

  29. Wibisono, A., Wilson, A.C., Jordan, M.I.: A variational perspective on accelerated methods in optimization. Proc. Natl. Acad. Sci. 113(47), E7351–E7358 (2016)

  30. Yezzi, A., Sundaramoorthi, G.: Accelerated optimization in the PDE framework: formulations for the active contour case. arXiv:1711.09867 (2017)

  31. Yezzi, A., Sundaramoorthi, G., Benyamin, M.: PDE acceleration for active contours. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019)

  32. Zosso, D., Bustin, A.: A primal–dual projected gradient algorithm for efficient Beltrami regularization. UCLA CAM Report 14–52 (2014). ftp://ftp.math.ucla.edu/pub/camreport/cam14-52.pdf

Author information

Corresponding author

Correspondence to Jeff Calder.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

J. Calder was supported by NSF–DMS Grant 1713691, and A. Yezzi was supported by NSF–CCF Grant 1526848 and ARO W911NF–18–1–0281, and NIH R01 HL143350.

About this article

Cite this article

Benyamin, M., Calder, J., Sundaramoorthi, G. et al. Accelerated Variational PDEs for Efficient Solution of Regularized Inversion Problems. J Math Imaging Vis 62, 10–36 (2020). https://doi.org/10.1007/s10851-019-00910-2
