Nesterov's Accelerated Gradient Method for Nonlinear Ill-Posed Problems with a Locally Convex Residual Functional

In this paper, we consider Nesterov's accelerated gradient method for solving nonlinear inverse and ill-posed problems. Known to be a fast gradient-based iterative method for well-posed convex optimization problems, this method also leads to promising results for ill-posed problems. Here, we provide a convergence analysis of this method for ill-posed problems based on the assumption of a locally convex residual functional. Furthermore, we demonstrate the usefulness of the method on a number of numerical examples based on a nonlinear diagonal operator and on an inverse problem in auto-convolution.


Introduction
In this paper, we consider nonlinear inverse problems of the form

F(x) = y ,   (1.1)

where F : D(F) ⊂ X → Y is a continuously Fréchet-differentiable, nonlinear operator between real Hilbert spaces X and Y. Throughout this paper we assume that (1.1) has a solution x_*, which need not be unique. Furthermore, we assume that instead of y, we are only given noisy data y^δ satisfying

‖y − y^δ‖ ≤ δ .   (1.2)

Since we are interested in ill-posed problems, we need to use regularization methods in order to obtain stable approximations of solutions of (1.1). The two most prominent examples of such methods are Tikhonov regularization and Landweber iteration.
In Tikhonov regularization, one attempts to approximate an x_0-minimum-norm solution x† of (1.1), i.e., a solution of F(x) = y with minimal distance to a given initial guess x_0, by minimizing the functional

T_α^δ(x) := ‖F(x) − y^δ‖² + α ‖x − x_0‖² ,   (1.3)

where α is a suitably chosen regularization parameter. Under very mild assumptions on F, it can be shown that the minimizers of T_α^δ, usually denoted by x_α^δ, converge subsequentially to a minimum-norm solution x† as δ → 0, given that α and the noise level δ are coupled in an appropriate way [9]. While for linear operators F the minimization of T_α^δ is straightforward, for nonlinear operators F the computation of x_α^δ requires the global minimization of the then also nonlinear functional T_α^δ, which is rather difficult and usually done using various iterative optimization algorithms.
This motivates the direct application of iterative algorithms for solving (1.1), the most popular of which is Landweber iteration, given by

x_{k+1}^δ = x_k^δ + ω F'(x_k^δ)* (y^δ − F(x_k^δ)) ,   (1.4)

where ω is a scaling parameter and x_0 is again a given initial guess. Seen in the context of classical optimization algorithms, Landweber iteration is nothing else than the gradient descent method applied to the functional

Φ^δ(x) := ½ ‖F(x) − y^δ‖² ,   (1.5)

and therefore, in order to arrive at a convergent regularization method, one has to use a suitable stopping rule. In [9] it was shown that if one uses the discrepancy principle, i.e., stops the iteration after k_* steps, where k_* is the smallest integer such that

‖y^δ − F(x_{k_*}^δ)‖ ≤ τδ < ‖y^δ − F(x_k^δ)‖ ,  0 ≤ k < k_* ,   (1.6)

with a suitable constant τ > 1, then Landweber iteration gives rise to a convergent regularization method, as long as some additional assumptions are satisfied, most notably the (strong) tangential cone condition

‖F(x̃) − F(x) − F'(x)(x̃ − x)‖ ≤ η ‖F(x̃) − F(x)‖ ,  η < 1/2 ,  x, x̃ ∈ B_2ρ(x_0) ,   (1.7)

where B_2ρ(x_0) denotes the closed ball of radius 2ρ around x_0. Since condition (1.7) poses strong restrictions on the nonlinearity of F which are not always satisfied, attempts have been made to use weaker conditions instead [32]. For example, assuming only the weak tangential cone condition (1.8) to hold, one can show weak convergence of Landweber iteration [32]. Similarly, if the residual functional Φ_0(x) defined by (1.5) is (locally) convex, weak subsequential convergence of the iterates of Landweber iteration to a stationary point of Φ_0(x) can be proven. Even though they both lead to convergence in the weak topology, besides some results presented in [32], the connections between the local convexity of the residual functional and the (weak) tangential cone condition remain largely unexplored. In his recent paper [24], Kindermann showed that both the local convexity of the residual functional and the weak tangential cone condition imply another condition, which he termed NC(0, β > 0), and which is sufficient to guarantee weak subsequential convergence of the iterates. As is well known, Landweber iteration is quite slow [23].
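For illustration, the Landweber loop (1.4) with the discrepancy-principle stopping rule (1.6) can be sketched in a few lines. The diagonal operator F_toy below is a hypothetical toy example introduced only for this sketch (it is not one of the operators considered later), and all parameter values are illustrative.

```python
import numpy as np

def landweber(F, F_prime, y_delta, x0, omega, tau, delta, max_iter=100000):
    """Landweber iteration (1.4): x_{k+1} = x_k + omega * F'(x_k)^T (y_delta - F(x_k)),
    stopped by the discrepancy principle (1.6): ||y_delta - F(x_k)|| <= tau * delta."""
    x = x0.copy()
    for k in range(max_iter):
        residual = y_delta - F(x)
        if np.linalg.norm(residual) <= tau * delta:
            return x, k                              # discrepancy principle satisfied
        x = x + omega * F_prime(x).T @ residual      # gradient step on Phi^delta
    return x, max_iter

# Hypothetical toy operator for illustration only: F(x)_n = x_n^2 / n
def F_toy(x):
    return x**2 / np.arange(1, x.size + 1)

def F_toy_prime(x):
    # the Jacobian of F_toy is diagonal with entries 2 x_n / n
    return np.diag(2 * x / np.arange(1, x.size + 1))
```

A typical call would be `x, k = landweber(F_toy, F_toy_prime, y_delta, x0, omega=0.1, tau=2.0, delta=1e-4)`; note that the iteration is stopped early by the discrepancy principle rather than run to stationarity, which is exactly what makes it a regularization method.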
Hence, acceleration strategies have to be used in order to speed it up and make it applicable in practice. Acceleration methods and their analysis for linear problems can be found for example in [9] and [13]. Unfortunately, since their convergence proofs are mainly based on spectral theory, their analysis cannot be generalized to nonlinear problems immediately. However, there are some acceleration strategies for Landweber iteration for nonlinear ill-posed problems, see for example [26,30].
As an alternative to (accelerated) Landweber-type methods, one could think of using second-order iterative methods for solving (1.1), such as the Levenberg-Marquardt method [14,20]

x_{k+1}^δ = x_k^δ + (F'(x_k^δ)* F'(x_k^δ) + α_k I)^{−1} F'(x_k^δ)* (y^δ − F(x_k^δ)) ,   (1.9)

or the iteratively regularized Gauss-Newton method [6,22]

x_{k+1}^δ = x_k^δ + (F'(x_k^δ)* F'(x_k^δ) + α_k I)^{−1} (F'(x_k^δ)* (y^δ − F(x_k^δ)) + α_k (x_0 − x_k^δ)) .   (1.10)

The advantage of those methods [23] is that they require far fewer iterations to meet their respective stopping criteria than Landweber iteration or the steepest descent method. However, each update step of those iterations might take considerably longer than one step of Landweber iteration, due to the fact that in both cases a linear system involving the operator F'(x_k^δ) has to be solved. In practical applications, this usually means that a huge linear system of equations has to be solved, which often proves to be costly, if not infeasible. Hence, accelerated Landweber-type methods avoiding this drawback are desirable in practice.
In case that the residual functional Φ^δ(x) is locally convex, one could think of using methods from convex optimization to minimize Φ^δ(x), instead of using the gradient method as in Landweber iteration. One of those methods, which works remarkably well for nonlinear, convex and well-posed optimization problems of the form

min_{x ∈ X} Φ(x) ,   (1.11)

was first introduced by Nesterov in [25] and is given by

z_k = x_k + (k − 1)/(k + α − 1) (x_k − x_{k−1}) ,
x_{k+1} = z_k − ω ∇Φ(z_k) ,   (1.12)

where again ω is a given scaling parameter and α ≥ 3 (with α = 3 being common practice). This so-called Nesterov acceleration scheme is of particular interest, since not only is it extremely easy to implement, but Nesterov himself was also able to prove that it generates a sequence of iterates x_k for which there holds

Φ(x_k) − Φ(x_*) = O(k^{−2}) ,   (1.13)

where x_* is any solution of (1.11). This is a big improvement over the classical rate O(k^{−1}). The even further improved rate o(k^{−2}) for α > 3 was recently proven in [2]. Furthermore, Nesterov's acceleration scheme can also be used to solve compound optimization problems of the form

min_{x ∈ X} Φ(x) + Ψ(x) ,   (1.14)

where both Φ(x) and Ψ(x) are convex functionals, and is in this case given by

z_k = x_k + (k − 1)/(k + α − 1) (x_k − x_{k−1}) ,
x_{k+1} = prox_{ωΨ}(z_k − ω ∇Φ(z_k)) ,   (1.15)

where the proximal operator prox_{ωΨ}(·) is defined by

prox_{ωΨ}(x) := argmin_{u ∈ X} { ωΨ(u) + ½ ‖u − x‖² } .   (1.16)

If, in addition to being convex, Ψ is proper and lower semicontinuous and Φ is continuously Fréchet differentiable with a Lipschitz continuous gradient, then it was again shown in [2] that the sequence defined by (1.15) satisfies

(Φ + Ψ)(x_k) − (Φ + Ψ)(x_*) = O(k^{−2}) ,   (1.17)

or even o(k^{−2}) if α > 3, which is again much faster than ordinary first-order methods for minimizing (1.14). This accelerating property was exploited in the highly successful FISTA algorithm [4], designed for the fast solution of linear ill-posed problems with sparsity constraints. Since for linear operators the residual functional Φ^δ is globally convex, minimizing the resulting Tikhonov functional (1.3) exactly fits into the category of minimization problems considered in (1.14).
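The scheme (1.15) can be sketched as follows; with prox taken as the identity it reduces to the plain scheme (1.12). The quadratic test functional in the accompanying test is a hypothetical stand-in for Φ, chosen only so that the O(k⁻²) behaviour can be checked.

```python
import numpy as np

def nesterov(grad, prox, x0, omega, alpha=3.0, n_iter=1000):
    """Nesterov's accelerated proximal-gradient scheme (1.15):
      z_k     = x_k + (k-1)/(k+alpha-1) * (x_k - x_{k-1})   (momentum step)
      x_{k+1} = prox(z_k - omega * grad(z_k))               (forward-backward step)
    With prox = identity this is the plain scheme (1.12)."""
    x_prev, x = x0.copy(), x0.copy()
    for k in range(1, n_iter + 1):
        z = x + (k - 1.0) / (k + alpha - 1.0) * (x - x_prev)
        x_prev = x
        x = prox(z - omega * grad(z))
    return x
```

Note that the momentum weight (k − 1)/(k + α − 1) tends to 1, which is what produces the nonmonotone, accelerated behaviour; a fixed weight strictly below 1 would only give a constant-factor speedup.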
Motivated by the above considerations, one could think of applying Nesterov's acceleration scheme (1.12) to the residual functional Φ^δ, which leads to the algorithm

z_k^δ = x_k^δ + (k − 1)/(k + α − 1) (x_k^δ − x_{k−1}^δ) ,
x_{k+1}^δ = z_k^δ + ω F'(z_k^δ)* (y^δ − F(z_k^δ)) .   (1.18)

In case that the operator F is linear, Neubauer showed in [28] that, combined with a suitable stopping rule and under a source condition, (1.18) gives rise to a convergent regularization method and that convergence rates can be obtained. Furthermore, the authors of [18] showed that certain generalizations of Nesterov's acceleration scheme, termed Two-Point Gradient (TPG) methods and given by

z_k^δ = x_k^δ + λ_k^δ (x_k^δ − x_{k−1}^δ) ,
x_{k+1}^δ = z_k^δ + α_k^δ F'(z_k^δ)* (y^δ − F(z_k^δ)) ,   (1.19)

give rise to convergent regularization methods, as long as the tangential cone condition (1.7) is satisfied and the stepsizes α_k^δ and the combination parameters λ_k^δ are coupled in a suitable way. However, the convergence analysis of the methods (1.19) does not cover the choice

λ_k^δ = (k − 1)/(k + α − 1) ,   (1.20)

i.e., the choice originally proposed by Nesterov and the one which shows by far the best results numerically [17,18,21]. The main reason for this is that the technique employed there relies on the monotonicity of the iteration, i.e., the iterate x_{k+1}^δ always has to be a better approximation of the solution x_* than x_k^δ, which is not necessarily satisfied for the choice (1.20).
The key ingredient for proving the fast rates (1.13) and (1.17) is the convexity of the residual functional Φ(x). Since, except for linear operators, we cannot hope that this holds globally, we assume that Φ_0(x), i.e., the functional Φ^δ(x) defined by (1.5) with exact data y = y^δ, corresponding to δ = 0, is convex in a neighbourhood of the initial guess. This neighbourhood has to be sufficiently large to encompass the sought solution x_*, or equivalently, the initial guess x_0 has to be sufficiently close to the solution x_*. Assuming that F(x) = y has a solution x_* in B_ρ(x_0), where now and in the following, B_ρ(x_0) denotes the closed ball with radius ρ around x_0, the key assumption is that Φ_0 is convex in B_6ρ(x_0). As mentioned before, Nesterov's acceleration scheme yields a nonmonotonous sequence of iterates, which might possibly leave the ball B_6ρ(x_0). However, by assumption the sought solution x_* lies in the ball B_ρ(x_0). Hence, defining the functional

Ψ(x) := 0 for x ∈ B_2ρ(x_0) and Ψ(x) := ∞ otherwise ,   (1.21)

i.e., the indicator function of B_2ρ(x_0), and applying the proximal version (1.15) of Nesterov's scheme to Φ^δ + Ψ, we arrive at the method

z_k^δ = x_k^δ + (k − 1)/(k + α − 1) (x_k^δ − x_{k−1}^δ) ,
x_{k+1}^δ = prox_{ωΨ}(z_k^δ + ω F'(z_k^δ)* (y^δ − F(z_k^δ))) ,   (1.22)

which we consider throughout this paper.

Convergence Analysis I
In this section we provide a convergence analysis of Nesterov's accelerated gradient method (1.22). Concerning notation, whenever we consider the noise-free case y = y^δ corresponding to δ = 0, we replace δ by 0 in all variables depending on δ, e.g., we write Φ_0 instead of Φ^δ. For carrying out the analysis, we have to make a set of assumptions, already indicated in the introduction.

Assumption 2.1. Let ρ be a positive number such that B_6ρ(x_0) ⊂ D(F).

1. The operator F : D(F) ⊂ X → Y is continuously Fréchet differentiable between the real Hilbert spaces X and Y with inner products ⟨·,·⟩ and norms ‖·‖. Furthermore, let F be weakly sequentially closed on B_2ρ(x_0).
4. The functional Φ_0 defined by (1.5) with δ = 0 is convex on B_6ρ(x_0)   (2.1)
and has a Lipschitz continuous gradient ∇Φ_0 with Lipschitz constant L on B_6ρ(x_0), i.e.,

‖∇Φ_0(x) − ∇Φ_0(x̃)‖ ≤ L ‖x − x̃‖ ,  x, x̃ ∈ B_6ρ(x_0) .

5. For α in (1.22) there holds α > 3 and the scaling parameter ω satisfies 0 < ω < 1/L.

Note that since B_2ρ(x_0) is weakly closed and given the continuity of F, a sufficient condition for the weak sequential closedness assumption to hold is that F is compact.
We now turn to the convergence analysis of Nesterov's accelerated gradient method (1.22). Throughout this analysis, if not explicitly stated otherwise, Assumption 2.1 is in force. Note first that, since F is continuously Fréchet differentiable, there exists a constant ω̂ such that ‖F'(x)‖ ≤ ω̂ for all x ∈ B_6ρ(x_0). Next, note that since B_2ρ(x_0) denotes a closed ball around x_0, the functional Ψ, in addition to being proper and convex, is also lower semicontinuous, an assumption required in the proofs in [2], which we need in various places of this paper. Furthermore, since B_2ρ(x_0) is a convex set, it immediately follows from the definition (1.16) of the proximal operator that prox_{ωΨ}(·) is nothing else than the metric projection onto B_2ρ(x_0), and is therefore Lipschitz continuous with Lipschitz constant less than or equal to 1. Consequently, given an estimate of ρ, the implementation of prox_{ωΨ}(·) is exceedingly simple in this setting, and therefore, one iteration step of (1.22) and one step of (1.4) require roughly the same amount of computational effort. Finally, note that due to the convexity of Φ_0, the set S defined by

S := { x ∈ B_2ρ(x_0) : F(x) = y }   (2.5)

is a convex subset of B_2ρ(x_0) and hence, there exists a unique x_0-minimum-norm solution x†, defined by

x† := argmin { ‖x − x_0‖ : x ∈ S } ,   (2.6)

which is nothing else than the orthogonal projection of x_0 onto the set S.
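Since prox_{ωΨ}(·) is simply the metric projection onto the ball B_2ρ(x_0), its implementation is essentially a one-liner; a minimal sketch (the function name is of course ours):

```python
import numpy as np

def prox_ball(v, x0, radius):
    """prox_{omega*Psi}(v) for Psi the indicator function of the closed ball
    B_radius(x0): the metric projection onto the ball. Note that the result is
    independent of the step size omega."""
    d = v - x0
    dist = np.linalg.norm(d)
    if dist <= radius:
        return v                       # points inside the ball are fixed
    return x0 + (radius / dist) * d    # otherwise project onto the boundary
```

The projection is nonexpansive (Lipschitz constant ≤ 1), which is the property used above, and its cost is one norm evaluation per iteration, negligible compared to evaluating F and F'.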
The following convergence analysis is largely based on the ideas of the paper [2] of Attouch and Peypouquet, which we reference frequently throughout this analysis. Following their arguments, we start by making the following

Definition 2.1. For Φ^δ and Ψ defined by (1.5) and (1.21), we define

Θ^δ(x) := Φ^δ(x) + Ψ(x) .   (2.7)

The energy functional E^δ is defined by (2.8), where the sequence w_k^δ is defined by (2.10).

Using Definition 2.1, we can now write the update step for x_{k+1}^δ in the form (2.11), and furthermore, it is possible to write (2.12). As a first result, we show that both z_k^δ and x_k^δ stay within B_6ρ(x_0) during the iteration.
Lemma 2.1. Under Assumption 2.1, the sequences of iterates x_k^δ and z_k^δ defined by (1.22) stay within B_6ρ(x_0). This follows from the form (2.11) of the update step and the fact that, by the definition of prox_{ωΨ}(·), x_k^δ is always an element of B_2ρ(x_0).

Since the functional Θ_0 is assumed to be convex in B_6ρ(x_0), we can deduce the following descent lemma (Lemma 2.2):

Proof. This lemma is also used in [2]. However, the sources for it cited there do not exactly cover our setting, with Φ^δ being defined on D(F) ⊂ X only. Hence, we here give an elementary proof of the assertion. Note first that, due to the Lipschitz continuity of ∇Φ_0 in B_6ρ(x_0) and the fact that ω < 1/L, we have the corresponding descent estimate, and therefore, combining the above two inequalities, we get the assertion.
We want to derive a similar inequality also for the functionals Θ^δ. The following lemma is of vital importance for doing that: with the error terms R_1, …, R_5 defined by (2.14), it provides the estimate from which the statement of the theorem immediately follows.
Next, we show that the R_i, and hence also R, can be bounded in terms of δ + δ².
Proposition 2.4. Let Assumption 2.1 hold, let x ∈ B_2ρ(x_0) and z ∈ B_6ρ(x_0), and let R_1, …, R_5 be defined by (2.14). Then there holds R_i = O(δ + δ²) for i = 1, …, 5.

Proof. The following somewhat long but elementary proof mainly uses the boundedness and Lipschitz continuity assumptions made above. In the following, let x ∈ B_2ρ(x_0) and z ∈ B_6ρ(x_0). We treat each of the R_i terms separately, starting with R_1, followed by R_2 and, similarly to above, R_3. Furthermore, together with the Lipschitz continuity of prox_{ωΨ}(·), we obtain the bound for R_4. Finally, estimating the last term R_5 concludes the proof.
As an immediate consequence, we get the following

Corollary 2.5. Let Assumption 2.1 hold and let x, z ∈ B_6ρ(x_0). If we define c_1 and c_2 by (2.15), then the corresponding estimate holds.

Proof. This immediately follows from Lemma 2.2 and Proposition 2.4.
Combining the above, we are now able to arrive at the following important result:

Proposition 2.6. Let Assumption 2.1 hold, let the sequences of iterates x_k^δ and z_k^δ be given by (1.22), and let c_1 and c_2 be defined by (2.15). If we define ∆(δ) by (2.16), then the corresponding estimate holds.

Proof. This immediately follows from Lemma 2.1 and Corollary 2.5.
Using the above proposition, we are now able to derive the important

Theorem 2.7. Let Assumption 2.1 hold, let the sequences of iterates x_k^δ and z_k^δ be given by (1.22), and let ∆(δ) be defined by (2.16). Then the energy estimate (2.19) holds.

Proof. This proof is adapted from the corresponding result in [2], the difference being the term ∆(δ). We start by multiplying inequality (2.17) by k/(k + α − 1) and inequality (2.18) by (α − 1)/(k + α − 1). Adding the results, we obtain (2.20). Next, observe that it follows from (2.11) that z_k^δ can be expressed in terms of x_k^δ and x_{k+1}^δ. After expanding, multiplying the resulting expression accordingly, and replacing it in inequality (2.20), we can rewrite the result equivalently. Multiplying by 2ω and using the definition (2.10) of the sequence w_k^δ, together with the definition (2.8) of E^δ, this implies a bound on E^δ(k+1) − E^δ(k), or equivalently, after rearranging, the estimate (2.19), which concludes the proof.
Inequality (2.19) is the key ingredient for showing that (1.22), combined with a suitable stopping rule, gives rise to a convergent regularization method. In order to derive a suitable stopping rule, note first that in the case of exact data, i.e., δ = 0, inequality (2.19) reduces to the corresponding estimate with ∆(0) = 0. Since by Assumption 2.1 the functional Φ_0 is convex, the arguments used in [2] are applicable, and we can deduce the following:

Theorem 2.8. Let Assumption 2.1 hold, let the sequences of iterates x_k^0 and z_k^0 be given by (1.22) with exact data y = y^δ, i.e., δ = 0, and let S be defined by (2.5). Then the following statements hold:

• The sequence (E_0(k)) is non-increasing and lim_{k→∞} E_0(k) exists.
• For each k ≥ 0, there holds the O(k^{−2}) rate estimate corresponding to (1.17).

• There exists an x̄ ∈ S such that the sequence (x_k^0) converges weakly to x̄ as k → ∞.

Proof. The statements follow from Facts 1-4, Remark 2 and Theorem 3 in [2].
Thanks to Theorem 2.8, we now know that Nesterov's accelerated gradient method (1.22) converges weakly to a solution x̄ in the solution set S in the case of exact data y = y^δ, i.e., δ = 0.
Hence, it remains to consider the behaviour of (1.22) in the case of inexact data y^δ. As mentioned above, the key for doing so is inequality (2.19). We want to use it to show that, similarly to the exact data case, the sequence (E^δ(k)) is non-increasing up to some k ∈ ℕ. To do this, note first that E^δ(k) is positive as long as a certain condition on k and δ is satisfied. On the other hand, the critical term in (2.19) is positive as long as a second, stronger condition holds, which obviously implies the first one. These considerations suggest, given a small τ > 1, to choose the stopping index k_* = k_*(δ, y^δ) as the smallest integer satisfying (2.26). Concerning the well-definedness of k_*, we are able to prove the following

Lemma 2.9. Let Assumption 2.1 hold, let the sequences of iterates x_k^δ and z_k^δ be given by (1.22), and let c_1 and c_2 be defined by (2.15). Then the stopping index k_* defined by (2.26) with τ > 1 is well-defined and there holds

k_* = O(δ^{−1}) .   (2.27)
Proof. By the definition (2.16) of ∆(δ), it follows from (2.26) that for all k < k_* a certain inequality holds, which can be rewritten using τ > 1. Since the left-hand side of this inequality goes to ∞ for k → ∞, while the right-hand side stays bounded, it follows that k_* is finite and hence well-defined for δ ≠ 0. Furthermore, since (2.28) also holds for k = k_* − 1, which can be seen by multiplying the above inequality by k(α − 3), we get, after reordering the terms, the inequality from which the assertion now immediately follows.

Remark. The rate (2.27) should be compared with the corresponding result for Landweber iteration (1.4), where one only obtains k_* = O(δ^{−2}). In order to obtain the rate k_* = O(δ^{−1}) for Landweber iteration, apart from other requirements, a source condition has to hold, which is not required for Nesterov's accelerated gradient method (1.22).

Before we turn to our main result, we first prove a couple of important consequences of (2.19) and the stopping rule (2.26).

Proposition 2.10. Let Assumption 2.1 be satisfied, let x_k^δ and z_k^δ be defined by (1.22) and let E^δ be defined by (2.8). Assume that the stopping index k_* is determined by (2.26) with some τ > 1. Then, for all 0 ≤ k ≤ k_*, the sequence (E^δ(k)) is non-increasing and, in particular, E^δ(k) ≤ E^δ(0). Furthermore, for all 0 ≤ k ≤ k_* there holds

(2.30)
as well as (2.31). Now, summing this inequality over k, using telescoping and the fact that E^δ(k_*) ≥ 0, we immediately arrive at (2.32), which concludes the proof.
From the above proposition, we are able to deduce two interesting corollaries.
Corollary 2.12. Under the assumptions of Proposition 2.10, there holds k_* = O(δ^{−1}).

Proof. Using the fact that both x_k^δ, x_* ∈ B_2ρ(x_0), it follows from the definition of Θ^δ that Θ^δ(x_k^δ) = Φ^δ(x_k^δ) and Θ^δ(x_*) = Φ^δ(x_*). Hence, it follows with ‖y − y^δ‖ ≤ δ that Φ^δ(x_k^δ) can be bounded from below in terms of the residual. Together with the definition of the stopping rule (2.26), this implies a lower bound for all k ≤ k_* − 1. Using this in (2.32) yields the estimate from which the statement now immediately follows.
Again, this shows that k_* = O(δ^{−1}), i.e., k_* ≤ c δ^{−1}; however, this time the constant c does not depend on c_1 and c_2, an observation which we use when analysing (1.22) under slightly different assumptions than Assumption 2.1 below.
We are now able to prove one of our main results:

Theorem 2.13. Let Assumption 2.1 hold and let the iterates x_k^δ and z_k^δ be defined by (1.22). Furthermore, let k_* = k_*(δ, y^δ) be determined by (2.26) with some τ > 1 and let the solution set S be given by (2.5). Then there exist an x̄ ∈ S and a subsequence of (x_{k_*}^δ) which converges weakly to x̄ as δ → 0. If S is a singleton, then x_{k_*}^δ converges weakly to the then unique solution x̄ ∈ S.

Proof. This proof follows some ideas of [15]. Let y_n := y^{δ_n} be a sequence of noisy data satisfying ‖y − y_n‖ ≤ δ_n. Furthermore, let k_n := k_*(δ_n, y_n) be the stopping index determined by (2.26) applied to the pair (δ_n, y_n). There are two cases. First, assume that k is a finite accumulation point of (k_n). Without loss of generality, we can assume that k_n = k for all n ∈ ℕ. Thus, from (2.26), it follows that the residual ‖y_n − F(x_k^{δ_n})‖ tends to 0 as n → ∞, which, together with the triangle inequality, implies that the same holds for ‖y − F(x_k^{δ_n})‖. Since for fixed k the iterates x_k^δ depend continuously on the data y^δ, by taking the limit n → ∞ in the above inequality we can derive that the exact-data iterate x_k^0 satisfies F(x_k^0) = y, i.e., x_k^0 ∈ S. For the second case, assume that k_n → ∞ as n → ∞. Since x_{k_n}^{δ_n} ∈ B_2ρ(x_0), it is bounded and hence has a weakly convergent subsequence x_{k̃_n}^{δ̃_n}, corresponding to a subsequence δ̃_n of δ_n and k̃_n := k_*(δ̃_n, y^{δ̃_n}). Denoting the weak limit of x_{k̃_n}^{δ̃_n} by x̄, it remains to show that x̄ ∈ S. For this, observe that it follows from (2.33) that ‖F(x_{k̃_n}^{δ̃_n}) − y^{δ̃_n}‖ → 0, where we have used that k̃_n → ∞ and δ̃_n → 0 as n → ∞, which follows from the assumption that so do the sequences k_n and δ_n, and the fact that E^δ(0) stays bounded for δ → 0. Hence, since we know that y^δ → y as δ → 0, we can deduce that

F(x_{k̃_n}^{δ̃_n}) → y ,  as n → ∞ ,

and therefore, using the weak sequential closedness of F on B_2ρ(x_0), we deduce that F(x̄) = y, i.e., x̄ ∈ S, which was what we wanted to show. It remains to show that if S is a singleton then x_{k_*}^δ converges weakly to x̄.
Since this was already proven above in the case that (k_n) has a finite accumulation point, it remains to consider the second case, i.e., k_n → ∞. For this, consider an arbitrary subsequence of (x_{k_*}^δ). Since this sequence is bounded, it has a weakly convergent subsequence which, by the same arguments as above, converges weakly to a solution x̄ ∈ S. However, since we have assumed that S is a singleton, it follows that the whole sequence x_{k_*}^δ converges weakly to x̄, which concludes the proof.
Remark. In Theorem 2.13, we have shown weak subsequential convergence to an element x̄ in the solution set S. However, this element might be different from the x_0-minimum-norm solution x† defined by (2.6), unless, of course, S is a singleton.

Convergence Analysis II
Some simplifications of the convergence analysis presented above are possible if we assume that, instead of only Φ_0, all the functionals Φ^δ are convex. Hence, for the remainder of this section, we work with the following

Assumption 3.1. Let ρ be a positive number such that B_6ρ(x_0) ⊂ D(F).
1. The operator F : D(F) ⊂ X → Y is continuously Fréchet differentiable between the real Hilbert spaces X and Y with inner products ⟨·,·⟩ and norms ‖·‖. Furthermore, let F be weakly sequentially closed on B_2ρ(x_0).

2. The equation F(x) = y has a solution x_* ∈ B_ρ(x_0).

4. For each δ ≥ 0, the functional Φ^δ defined by (1.5) is convex and has a Lipschitz continuous gradient ∇Φ^δ with Lipschitz constant L on B_6ρ(x_0).

5. For α in (1.22) there holds α > 3 and the scaling parameter ω satisfies 0 < ω < 1/L.

Note that Assumption 3.1 is only a special case of Assumption 2.1. Hence, the convergence analysis presented above is applicable and we get weak convergence of the iterates of (1.22). However, the stopping rule (2.26) depends on the constants c_1 and c_2 defined by (2.15), which are not always available in practice. Fortunately, using Assumption 3.1, we can get rid of c_1 and c_2. The key idea is to observe that the following analogue of Lemma 2.2 holds:

Proof. This follows from the convexity of Θ^δ in the same way as in Lemma 2.2.
From the above lemma, it follows that the results of Corollary 2.5 and Proposition 2.6 hold with ∆(δ) = 0. Therefore, the stopping rule (2.26) simplifies to

‖y^δ − F(x_{k_*}^δ)‖ ≤ τδ < ‖y^δ − F(x_k^δ)‖ ,  0 ≤ k < k_* ,   (3.2)

for some τ > 1, which is nothing else than the discrepancy principle (1.6). Note that, in contrast to (2.26), only the noise level δ needs to be known in order to determine the stopping index k_*. With the same arguments as above, we are now able to prove our second main result:

Theorem 3.2. Let Assumption 3.1 hold and let the iterates x_k^δ and z_k^δ be defined by (1.22). Furthermore, let k_* = k_*(δ, y^δ) be determined by (3.2) with some τ > 1 and let the solution set S be given by (2.5). Then for the stopping index there holds k_* = O(δ^{−1}). Furthermore, there exist an x̄ ∈ S and a subsequence of (x_{k_*}^δ) which converges weakly to x̄ as δ → 0. If S is a singleton, then x_{k_*}^δ converges weakly to the then unique solution x̄ ∈ S.

Proof. The proof of this theorem is analogous to the proof of Theorem 2.13. The main difference is the well-definedness of k_*, which now cannot be derived from Lemma 2.9 but follows from (2.32) via Corollary 2.12, which also yields k_* = O(δ^{−1}).
Remark. Note that since Theorem 3.2 only gives an asymptotic result, i.e., for δ → 0, the requirement in Assumption 3.1 that the functionals Φ^δ be convex for all δ > 0 can be relaxed to 0 ≤ δ ≤ δ̄, as long as we only consider data y^δ satisfying the noise constraint ‖y − y^δ‖ ≤ δ ≤ δ̄.
Remark. Note that if the functionals Φ^δ are globally convex and their gradients are uniformly Lipschitz continuous, which is for example the case if F is a bounded linear operator, then one can choose ρ arbitrarily large in the definition of Ψ. As we have seen above, the proximal mapping prox_{ωΨ}(·) is nothing else than the projection onto B_2ρ(x_0). This implies that for practical purposes, prox_{ωΨ}(·) may be dropped in (1.22), which means that one effectively uses (1.18) instead of (1.22).
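In this situation, the method (1.18), i.e., the accelerated iteration without the proximal step, combined with the discrepancy principle (3.2), can be sketched as follows; the toy operator used in the accompanying test is the same hypothetical diagonal example as before, not one of the examples of the paper.

```python
import numpy as np

def nesterov_landweber(F, F_prime, y_delta, x0, omega, tau, delta,
                       alpha=3.0, max_iter=100000):
    """Method (1.18): Nesterov's acceleration applied to Landweber iteration,
    without a proximal step, stopped by the discrepancy principle (3.2)."""
    x_prev, x = x0.copy(), x0.copy()
    for k in range(1, max_iter + 1):
        if np.linalg.norm(y_delta - F(x)) <= tau * delta:
            return x, k - 1                                    # stopping index k_*
        z = x + (k - 1.0) / (k + alpha - 1.0) * (x - x_prev)   # momentum step
        x_prev = x
        x = z + omega * F_prime(z).T @ (y_delta - F(z))        # gradient step at z
    return x, max_iter
```

Note that the residual is monitored at x_k^δ, not at the extrapolated point z_k^δ; the iteration itself is nonmonotone, which is precisely why the analysis above works with the energy E^δ rather than with the error ‖x_k^δ − x_*‖.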

Strong Convexity and Nonlinearity Conditions
In this section, we consider the question of strong convergence of the iterates of (1.22) and comment on the connection between the assumption of local convexity and the (weak) tangential cone condition.
Concerning the strong convergence of the iterates of (1.22) and (1.18), note that it could be achieved if the functional Φ_0 were locally strongly convex, i.e., if

⟨∇Φ_0(x_1) − ∇Φ_0(x_2), x_1 − x_2⟩ ≥ c ‖x_1 − x_2‖² ,  x_1, x_2 ∈ B_2ρ(x_0) ,   (4.1)

since then, for the choice x_1 = x_k^0 and x_2 = x_*, one gets a bound on ‖x_k^0 − x_*‖ in terms of the residual, from which, since F(x_k^0) − y → 0 as k → ∞, it follows that x_k^0 converges strongly to x_*. Hence, retracing the proof of Theorem 2.13, one would get strong convergence of x_{k_*}^δ as δ → 0. Unfortunately, already for linear ill-posed operators F = A, strong convexity of the form (4.1) cannot be satisfied, since then one would get

‖A(x_1 − x_2)‖² ≥ c ‖x_1 − x_2‖² ,  x_1, x_2 ∈ B_2ρ(x_0) ,

which already implies the well-posedness of Ax = y in B_2ρ(x_0). However, defining the sets M_τ as in [16], it was shown in [16, Lemma 3.3] that a restricted strong-convexity-type estimate holds on M_τ. Hence, if one could show that x_k^0 ∈ M_τ for some τ > 0 and all k ∈ ℕ, then strong convergence of x_k^0, and consequently also of x_{k_*}^δ, to x† would follow. In essence, this was done in [28] with tools from spectral theory in the classical framework for analysing linear ill-posed problems [9] under the source condition x† ∈ R(A*).
Remark. Note that it is sometimes possible, given weak convergence of a sequence x_k ∈ X to some element x̄ ∈ X, to infer strong convergence of x_k to x̄ in a weaker topology. For example, if x_k ∈ H¹(0,1) converges weakly to x̄ in H¹(0,1), then it follows by compact embedding that x_k converges strongly to x̄ with respect to the L²(0,1) norm. Many generalizations of this example are possible. Note further that in finite dimensions, weak and strong convergence coincide.
In the remaining part of this section, we want to comment on the connection of the local convexity assumption (2.1) to other nonlinearity conditions like (1.7) and (1.8) commonly used in the analysis of nonlinear inverse problems.
First of all, note that due to the results of Kindermann [24], we know that both convexity and the (weak) tangential cone condition imply weak convergence of Landweber iteration (1.4). However, it is not entirely clear in which way those conditions are connected.
One connection between the two conditions was given in [32], where it was shown that a certain nonlinearity condition implies a certain directional convexity condition. Another connection was provided in [24], where it was shown that the tangential cone condition implies a quasi-convexity condition. However, it is not clear whether or not the tangential cone condition implies convexity. What we can say is that convexity does not imply the (weak) tangential cone condition, which is shown in the following example, based on a nonlinear Hammerstein operator. This operator was extensively treated as an example problem for nonlinear inverse problems (see for example [15,27]). It is well known that for this operator the tangential cone condition is satisfied around x† as long as x† ≥ c > 0. However, the (weak) tangential cone condition is not satisfied in case that x† ≡ 0. Moreover, it can easily be seen (for example from (5.1)) that Φ_0(x) is globally convex, which shows that convexity does not imply the tangential cone condition.

Example Problems
In this section, we consider two examples to which we apply the theory developed above. Most importantly, we verify the local convexity assumption for both Φ_0 and Φ^δ, with δ small enough. Furthermore, based on these example problems, we present some numerical results demonstrating the usefulness of method (1.22) and supporting the findings of [17-19, 21, 28], which are also briefly discussed.
For this, note that if F is twice continuously Fréchet differentiable, then convexity of Φ δ is equivalent to positive semi-definiteness of its second Fréchet derivative [31].
More precisely, we have that (3.1) is equivalent to

‖F'(x)h‖² + ⟨F(x) − y^δ, F''(x)(h, h)⟩ ≥ 0 ,  for all h ∈ X ,   (5.1)

which is our main tool for the upcoming analysis.
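This criterion can also be checked numerically, by inspecting the finite-difference Hessian of Φ^δ on a discretized problem; the sketch below does this for the same hypothetical diagonal toy operator used earlier (not one of the examples that follow).

```python
import numpy as np

def num_hessian(phi, x, eps=1e-4):
    """Finite-difference Hessian of a scalar functional phi at x. Local convexity
    of phi corresponds to this matrix being positive semi-definite, mirroring
    the second-derivative criterion (5.1)."""
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (phi(x + ei + ej) - phi(x + ei)
                       - phi(x + ej) + phi(x)) / eps**2
    return 0.5 * (H + H.T)   # symmetrize against round-off
```

Near an exact solution the residual term in (5.1) vanishes, so the Hessian of Φ_0 reduces to the Gauss-Newton part F'(x)*F'(x), which is always positive semi-definite; the numerical check is most informative away from the solution, where the second term can destroy convexity.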

Example 1 -Nonlinear Diagonal Operator
For our first (academic) example, we look at the class of nonlinear diagonal operators

F : ℓ² → ℓ² ,  F(x) := Σ_{n∈ℕ} f_n(x_n) e_n ,

where (e_n)_{n∈ℕ} denotes the canonical orthonormal basis of ℓ². Furthermore, note that solving F(x) = y is equivalent to solving

f_n(x_n) = y_n ,  for all n ∈ ℕ ,

from which it is easy to see that we are dealing with an ill-posed problem.
We now turn to the convexity of Φ δ (x) around a solution x † .
Proof. Due to (5.1) it is sufficient to show that the second-derivative condition holds for all h. Using the definition of F, the fact that (e_n) is an orthonormal basis of ℓ², and that F(x†) = y, this inequality can be rewritten and, after simplification, splits into two sums. Since the right-hand one of these two sums is always nonnegative, in order for the inequality to be satisfied it suffices to show (5.4). Now, since by the triangle inequality we have

|y_n² − (y_n^δ)²| = |y_n − y_n^δ| |y_n + y_n^δ| ≤ ‖y − y^δ‖_2 ‖y + y^δ‖_2 ≤ δ (2‖y‖_2 + ‖y − y^δ‖_2) ≤ δ (2‖y‖_2 + δ) ,   (5.5)

it follows that in order to prove (5.4) it suffices to show a δ-independent inequality. Now, writing x = x† + ε, this can be rewritten componentwise. Since ε_n² ≥ 0, the resulting inequality is satisfied given a smallness condition on ρ and δ, which can always be met given that x†_n > 0 for all n ∈ {1, …, M}.
After proving local convexity of the residual functional around the solution, we now proceed to demonstrate the usefulness of (1.22) based on the following numerical

Example 5.1. For this example we choose f_n as in (5.2) with M = 100. For the exact solution x† we take the sequence x†_n = 100/n, which leads to the exact data

y_n = F(x†)_n = 10⁴/n³ for n ≤ 100 ,  and  y_n = 10²/n² for n > 100 .
Therefore, the functional Φ_0 is convex in B_6ρ(x_0) given that ρ ≤ 1/28 ≈ 0.036, which is for example the case for the choice (5.6). Furthermore, for any noise level δ̄ small enough, one has that for all δ ≤ δ̄ the functional Φ^δ is convex in B_6ρ(x_0) as long as the corresponding smallness condition holds, which is for example satisfied if

ρ ≤ (1 − δ̄ (2‖y‖_2 + δ̄)) / 28 .
For numerically treating the problem, instead of considering full sequences x = (x_n)_{n∈ℕ}, we only consider x = (x_n)_{n=1,…,N}, where we choose N = 200 in this example; i.e., we consider the correspondingly discretized version of F. We now compare the behaviour of method (1.22) with its non-accelerated Landweber counterpart (1.4) when applied to the problem with x† and x_0 as defined above. For both methods, we choose the same scaling parameter ω = 3.2682 × 10⁻⁵, estimated from the norm of F'(x†), and we stop the iteration with the discrepancy principle (1.6) with τ = 1. Furthermore, random noise with a relative noise level of 0.001% was added to the data to arrive at the noisy data y^δ and, following the argument presented after (3.2), and since the iterates x_k^δ remain bounded even without it, we drop the proximal operator prox_{ωΨ}(·) in (1.22). The results of the experiments were computed in MATLAB; they clearly illustrate the advantage of Nesterov's acceleration in terms of the required number of iterations.
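The comparison can be reproduced in a few lines. Since (5.2) is not reproduced above, the sketch below uses the hypothetical choice f_n(t) = t²/n for n ≤ M and f_n(t) = t/n for n > M, which is consistent with the exact data reported in Example 5.1 but may differ from the actual (5.2); the problem size and all parameters are likewise scaled down for illustration.

```python
import numpy as np

M, N = 5, 10                      # small hypothetical instance (the paper uses M = 100, N = 200)
IDX = np.arange(1, N + 1)

def F(x):
    # hypothetical stand-in for (5.2): f_n(t) = t^2/n for n <= M, t/n for n > M
    return np.where(IDX <= M, x**2 / IDX, x / IDX)

def gradient_step(x, residual, omega):
    # x + omega * F'(x)^T residual; the Jacobian of this F is diagonal
    jac = np.where(IDX <= M, 2 * x / IDX, 1.0 / IDX)
    return x + omega * jac * residual

def run(y_delta, x0, omega, tau, delta, accelerate, alpha=3.0, max_iter=50000):
    """Landweber (1.4) or its Nesterov-accelerated variant (1.18),
    both stopped by the discrepancy principle (1.6)."""
    x_prev, x = x0.copy(), x0.copy()
    for k in range(1, max_iter + 1):
        if np.linalg.norm(y_delta - F(x)) <= tau * delta:
            return x, k - 1
        z = x + (k - 1.0) / (k + alpha - 1.0) * (x - x_prev) if accelerate else x
        x_prev = x
        x = gradient_step(z, y_delta - F(z), omega)
    return x, max_iter
```

Running both variants from the same initial guess and comparing the returned stopping indices reproduces the qualitative finding of the paper: the accelerated variant reaches the discrepancy level in far fewer iterations at essentially the same per-iteration cost.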

Example 2 - Auto-Convolution Operator
Next we look at an example involving an auto-convolution operator. Due to its importance in laser optics, the auto-convolution problem has been extensively studied in the literature [1,5,11]; its ill-posedness has been shown in [8,10,12] and its special structure was successfully exploited in [29]. For our purposes, we consider the periodic version of the auto-convolution operator,
$$
F(x)(s) := \int_0^1 x(s-t)\, x(t)\, dt ,
$$
where we interpret functions in $L^2(0,1)$ as 1-periodic functions on $\mathbb{R}$. For the following, denote by $(e^{(k)})_{k\in\mathbb{Z}}$ the canonical real Fourier basis of $L^2(0,1)$ and by $x_k := \langle x, e^{(k)} \rangle$ the Fourier coefficients of $x$, in terms of which $F(x)$ can be written explicitly. It was shown in [7] that if only finitely many Fourier coefficients $x_k$ are non-zero, then a variational source condition is satisfied, leading to convergence rates for Tikhonov regularization. We now use this assumption of a sparse Fourier representation to prove convexity of $\Phi_\delta$ for the auto-convolution operator in the following

Proposition 5.2. Let $x^\dagger$ be a solution of $F(x) = y$ such that there exists an index set $\Lambda_N \subset \mathbb{Z}$ with $|\Lambda_N| = N$ such that the Fourier coefficients $x_k^\dagger$ of $x^\dagger$ vanish for $k \notin \Lambda_N$. Furthermore, let $\rho > 0$ and $\bar\delta \ge 0$ be small enough and let $x_0 \in B_\rho(x^\dagger)$. Then for all $0 \le \delta \le \bar\delta$, the functional $\Phi_\delta(x)$ is convex in $B_{6\rho}(x_0)$.

For the choice $x^\dagger(s) = 10 + \sqrt{2}\sin(2\pi s)$ considered below, the convexity condition (5.9) simplifies to the following two inequalities:
$$
100 \ge 280\rho + \bar\delta \left( 2\|y\|_{L^2} + \bar\delta \right) , \qquad 1 \ge 28\rho + \bar\delta \left( 2\|y\|_{L^2} + \bar\delta \right) .
$$
Hence, for the noise-free case (i.e., $\bar\delta = 0$) the functional $\Phi_0$ is convex in $B_{6\rho}(x_0)$ given that $\rho \le 1/28 \approx 0.036$ and that $x_0 \in B_\rho(x^\dagger)$, which is for example the case for the choice $x_0 = 10 + \tfrac{27}{28}\sqrt{2}\sin(2\pi s)$. For discretizing the problem, we choose a uniform discretization of the interval $[0,1]$ into $N = 32$ equally spaced subintervals and introduce the standard finite element hat functions $\{\psi_i\}_{i=0}^{N}$ on this subdivision, which we use to discretize both $X$ and $Y$. Following the idea used in [26], we discretize $F$ by a finite dimensional operator built from coefficients $f_i(x)$. For computing the coefficients $f_i(x)$, we employ a 4-point Gaussian quadrature rule on each of the subintervals to approximate the integral in (5.11). Now we again compare method (1.22) with (1.4). This time, the estimated scaling parameter has the value $\omega = 0.005$, and random noise with a relative noise level of 0.01% was added to the data. Again the discrepancy principle (1.6) with $\tau = 1$ was used and the proximal operator $\mathrm{prox}_{\omega\Psi}(\cdot)$ in (1.22) was dropped. The results of the experiments, computed in MATLAB, are displayed in the left part of Table 5.2. Again the results clearly illustrate the advantages of Nesterov's acceleration strategy, which substantially decreases the required number of iterations and computational time, while leading to a relative error of essentially the same size as for Landweber iteration.
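The composite quadrature used for the coefficients $f_i(x)$ can be sketched as follows; this is a generic 4-point Gauss-Legendre rule applied per subinterval (the precise assembly of the discretized operator in (5.11) is not reproduced here):

```python
import numpy as np

def composite_gauss(f, N=32, npts=4):
    """Composite Gauss-Legendre rule: an npts-point Gauss quadrature on each
    of N equal subintervals of [0, 1], as used to approximate the integrals
    arising when assembling the discretized operator."""
    nodes, weights = np.polynomial.legendre.leggauss(npts)  # nodes on [-1, 1]
    h = 1.0 / N
    total = 0.0
    for i in range(N):
        a = i * h
        t = a + h * (nodes + 1) / 2      # map reference nodes to [a, a + h]
        total += (h / 2) * np.dot(weights, f(t))
    return total

# 4-point Gauss quadrature is exact for polynomials up to degree 7 on each
# subinterval, so int_0^1 t^7 dt = 1/8 is reproduced to machine precision
val = composite_gauss(lambda t: t**7)
assert abs(val - 1/8) < 1e-13
```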
The initial guess $x_0$ used for the experiment above is quite close to the exact solution $x^\dagger$. Although this closeness is necessary for guaranteeing convergence by the theory developed above, it is not very practical. Hence, we want to see what happens if the solution and the initial guess are so far apart that they are no longer within the guaranteed area of convexity. For this, we consider the choice $x^\dagger(s) = 10 + \sqrt{2}\sin(8\pi s)$ and $x_0(s) = 10 + \sqrt{2}\sin(2\pi s)$. The results can be seen in the right part of Table 5.2. Landweber iteration was stopped after 10000 iterations without having reached the discrepancy principle, since no further numerical progress was visible. Consequently, it is clearly outperformed by (1.22), which converges after only 797 iterations, and with a much better relative error. The resulting reconstructions, depicted in Figure 5.1, once again underline the usefulness of (1.22).
As an interesting remark, note that for the second example Landweber iteration seems to get stuck in a local minimum, while (1.22), after staying at this minimum for a while, manages to escape it, which is likely due to the combination step in (1.22).

Table 5.2: Results for $x^\dagger(s) = 10 + \sqrt{2}\sin(2\pi s)$ (left table) and for $x^\dagger(s) = 10 + \sqrt{2}\sin(8\pi s)$ and $x_0(s) = 10 + \sqrt{2}\sin(2\pi s)$ (right table).

Method (1.22) and its variants have already been applied successfully to a number of practical problems, even though the key assumption of local convexity is not always known to hold for them. First of all, in [17] the parameter estimation problem of Magnetic Resonance Advection Imaging (MRAI) was solved using a method very similar to (1.22). In MRAI, one aims at estimating the spatially varying pulse wave velocity (PWV) in blood vessels of the brain from Magnetic Resonance Imaging (MRI) data. The PWV is directly connected to the health of the blood vessels and is hence used as a prognostic marker for various diseases in medical examinations. The data sets in MRAI are very large, making the direct application of second order methods like (1.9) or (1.10) difficult. However, since methods like (1.22) can deal with those large datasets, they were used in [17] for reconstructions of the PWV.

Further Examples
Secondly, in [18], numerical examples for various TPG methods (1.19), including the iteration (1.18), were presented. Among those is an example based on the imaging technique of Single Photon Emission Computed Tomography (SPECT). Various numerical tests show that among all tested TPG methods, the method (1.18) clearly outperforms the rest, even though the local convexity assumption is not known to hold in this case. This is also demonstrated on an example based on a nonlinear Hammerstein operator.
Thirdly, method (1.18) was used in [19] to solve a problem in Quantitative Elastography, namely the reconstruction of the spatially varying Lamé parameters from full internal static displacement field measurements. Method (1.18) was used to obtain all reconstruction results presented in that paper, since ordinary first-order methods like Landweber iteration (1.4) were too slow to satisfy the demands required in practice.
Finally, in the numerical examples presented in [21], method (1.18) was used to accelerate the employed gradient/Kaczmarz methods. Furthermore, a convergence analysis of (1.18) for linear ill-posed problems including numerical examples is given in [28].