The structure of optimal parameters for image restoration problems

We study the qualitative properties of optimal regularisation parameters in variational models for image restoration. The parameters are solutions of bilevel optimisation problems with the image restoration problem as constraint. A general type of regulariser is considered, encompassing total variation (TV), total generalised variation (TGV) and infimal-convolution total variation (ICTV). We prove that, under certain conditions on the given data, optimal parameters derived by the bilevel optimisation problems exist. A crucial point in the existence proof turns out to be the boundedness of the optimal parameters away from $0$, which we prove in this paper. The analysis is carried out both on the original variational problem, which in image restoration is typically non-smooth, and on a smoothed approximation set in Hilbert space, which is the one considered in numerical computations. For the smoothed bilevel problem we also prove that it $\Gamma$-converges to the original problem as the smoothing vanishes. All analysis is done in function spaces rather than on the discretised learning problem.

The image $u_\alpha$ depends on $\alpha$ and belongs in our setting to a generic function space $X$. Here $J$ is a generic energy modelling our prior knowledge on the image $u_\alpha$. The quality of the solution $u_\alpha$ of variational imaging approaches like this one crucially relies on a good choice of the parameters $\alpha$. We are particularly interested in the case with $K$ a generic bounded forward operator, $\Phi$ a fidelity function, and $A_j$ linear operators acting on $u$. The values $A_j u$ are penalised in the total variation or Radon norm $\|A_j u\|_{\mathcal{M}(\Omega;\mathbb{R}^{m_j})}$, and combined constitute the image regulariser. In this context, $\alpha$ represents the regularisation parameter that balances the strength of regularisation against the fitness $\Phi$ of the solution to the idealised forward model $K$. The size of this parameter depends on the level of random noise and the properties of the forward operator. Choosing it too large results in over-regularisation of the solution, which may cause the loss of potentially important details in the image; choosing it too small under-regularises the solution and may result in a noisy and unstable output. In this work we discuss and thoroughly analyse a bilevel optimisation approach that is able to determine the optimal choice of $\alpha$ in $J(\,\cdot\,;\alpha)$.
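Schematically, the setup just described can be summarised as the following bilevel problem; the display is our reconstruction from the surrounding prose rather than a quotation of the original formulas:

```latex
\min_{\alpha}\; F(u_\alpha)
\quad\text{s.t.}\quad
u_\alpha \in \operatorname*{arg\,min}_{u \in X}\;
  J(u;\alpha) := \Phi(K u)
  + \sum_{j=1}^{N} \alpha_j\, \|A_j u\|_{\mathcal{M}(\Omega;\mathbb{R}^{m_j})} .
```

Here total variation denoising corresponds to $N = 1$ and $A_1 u = Du$, with $\Phi$ an $L^2$-squared fidelity.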
Recently, bilevel approaches for variational models have gained increasing attention in image processing and inverse problems in general. Based on prior knowledge of the problem, in terms of a training set of image data and corresponding model solutions, or knowledge of other model determinants such as the noise level, optimal reconstruction models are conceived by minimising a cost functional (called $F$ in the sequel) constrained to the variational model in question. We will explain this approach in more detail in the next section. Before that, let us give an account of the state of the art of bilevel optimisation for model learning. In machine learning, bilevel optimisation is well established. It is a semi-supervised learning method that optimally adapts itself to a given dataset of measurements and desirable solutions. In [41,42,27,28,18,19], for instance, the authors consider bilevel optimisation for finite-dimensional Markov random field (MRF) models. In inverse problems, the optimal inversion and experimental acquisition setup is discussed in the context of optimal model design in works by Haber, Horesh and Tenorio [33,31,32], as well as Ghattas et al. [13,8]. Recently, parameter learning in the context of functional variational regularisation models also entered the image processing community with works by the authors [23,14], Kunisch, Pock and co-workers [36,17] and Chung et al. [20]. A very interesting contribution can be found in a preprint by Fehrenbach et al. [30], where the authors determine an optimal regularisation procedure by introducing particular knowledge of the noise distribution into the learning approach.
Apart from the work of the authors [23,14], all approaches for bilevel learning in image processing so far are formulated and optimised in the discrete setting. Our subsequent modelling, analysis and optimisation will be carried out in function space rather than on a discretisation of the variational model. In this context, a careful analysis of the bilevel problem is of great relevance for its application in image processing. In particular, the structure of optimal regularisers is important, among other things, for the development of solution algorithms. Specifically, if the parameters are bounded and lie in the interior of a closed connected set, then efficient optimisation methods can be used for solving the problem. Previous results on optimal parameters for inverse problems with partial differential equations have been obtained in, e.g., [16].
In this paper we study the qualitative structure of regularisation parameters arising as solutions of bilevel optimisation problems for variational models. In our framework the variational models are typically convex but non-smooth, and posed in Banach spaces; the total variation and total generalised variation regularisation models are particular instances. Alongside the optimisation of the non-smooth variational model, we also consider a smoothed approximation in Hilbert space, which is typically the one considered in numerical computation. Under suitable conditions, we prove that, for both the original non-smooth optimisation problem and the regularised Hilbert space problem, the optimal regularisers are bounded and lie in the interior of the positive orthant. The conditions necessary to prove this turn out to be very natural conditions on the given data in the case of an $L^2$ cost functional $F$. Indeed, for the total variation regularisers with $L^2$-squared cost and fidelity, we will merely require $\mathrm{TV}(f) > \mathrm{TV}(f_0)$, with $f_0$ the ground-truth and $f$ the noisy image. That is, the noisy image should oscillate more, in terms of the total variation functional, than the ground-truth. For second-order total generalised variation [11], we obtain an analogous condition. Apart from the standard $L^2$ costs, we also discuss costs that constitute a smoothed $L^1$ norm of the gradient of the original data (we will call this the Huberised total variation cost in the sequel), typically resulting in optimal solutions superior to those minimising an $L^2$ cost. For this case, however, the interior property of optimal parameters could be verified only for a finite-dimensional version of the cost. Eventually, we also show that, as the numerical smoothing vanishes, the optimal parameters for the smoothed models tend to optimal parameters of the original model.
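As a hypothetical finite-difference illustration (the function name `discrete_tv`, the image size and the noise level are our own choices, not the paper's), the condition $\mathrm{TV}(f) > \mathrm{TV}(f_0)$ is easy to check numerically for a piecewise-constant ground truth corrupted by Gaussian noise:

```python
import numpy as np

def discrete_tv(u):
    """Anisotropic discrete total variation: l1 norm of forward differences."""
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

rng = np.random.default_rng(0)
f0 = np.zeros((32, 32))
f0[8:24, 8:24] = 1.0                          # piecewise-constant ground truth
f = f0 + 0.1 * rng.standard_normal(f0.shape)  # noisy observation

# The natural condition for a strictly positive optimal parameter:
# the noisy image oscillates more than the ground truth.
assert discrete_tv(f) > discrete_tv(f0)
```

Additive noise increases the total variation of essentially any image, so the condition is mild in practice.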
The results derived in this paper are motivated by problems in image processing. However, their applicability goes well beyond that: they apply generally to parameter estimation problems for variational inequalities of the second kind, for instance the parameter estimation problem in Bingham flow [22]. Previous analysis in this context either required box constraints on the parameters in order to prove existence of solutions, or the addition of a parameter penalty to the cost functional [3,6,7,37]. In this paper, we require neither, but rather prove that under certain conditions on the variational model, and for reasonable data and cost functional, optimal parameters are indeed positive and guaranteed to be bounded away from $0$ and $\infty$. As we will see later, this is enough for proving existence of solutions and continuity of the solution map. The next step from the work in this paper is deriving numerically useful characterisations of solutions to the ensuing bilevel programs. For the most basic problems considered herein, this has been done in [37] under numerical $H^1$ regularisation. We will consider in the follow-up work [24] the optimality conditions for higher-order regularisers and the new cost functionals introduced in this work. For an extensive characterisation of optimality systems for bilevel optimisation in finite dimensions, we point the reader to [26] as a starting point.
Outline of the paper. In Section 2 we introduce the general bilevel learning problem, stating assumptions on the setup of the cost functional $F$ and the lower-level problem given by a variational regularisation approach. The bilevel problem is discussed in its original non-smooth form (P), as well as in a smoothed form $(P_{\gamma,\varepsilon})$ in a Hilbert space setting in Section 2.2; the latter will be the one used in the numerical computations. The bilevel problem is put in context with parameter learning for non-smooth variational regularisation models, typical in image processing, by proving the validity of the assumptions for examples such as TV, TGV and ICTV regularisation. The main results of the paper (existence of positive optimal parameters for $L^2$, Huberised TV and $L^1$-type costs, and the convergence of the smoothed numerical problem to the original non-smooth problem) are stated in Section 3. Auxiliary results, such as coercivity, lower semicontinuity and compactness results for the involved functionals, are the topic of Section 4. Proofs of existence and convergence of optimal parameters are contained in Section 5. The paper finishes with a brief numerical discussion in Section 6.

The general problem setup
Let Ω ⊂ R m be an open bounded domain with Lipschitz boundary. This will be our image domain.
Usually $\Omega = (0, w) \times (0, h)$ for $w$ and $h$ the width and height of a two-dimensional image, although no such assumptions are made in this work. Our noisy or corrupted data $f$ is assumed to lie in a Banach space $Y$, itself the dual of a space $Y_*$, while our ground-truth $f_0$ lies in a general Banach space $Z$. Usually, in our model, we choose $Z = Y = L^2(\Omega;\mathbb{R}^d)$. This holds for denoising or deblurring, for example, where the data is just a corrupted version of the original image. Further, $d = 1$ for grayscale images, and $d = 3$ for colour images in typical colour spaces. For sub-sampled reconstruction from Fourier samples, we might use a finite-dimensional space $Y = \mathbb{C}^n$; there are however some subtleties with that, and we refer the interested reader to [1].
As our parameter space for regularisation functional weights we take $P^\infty_\alpha := [0,\infty]^N$, where $N$ is the dimension of the parameter space, that is $\alpha = (\alpha_1,\dots,\alpha_N)$, $N \ge 1$. Observe that we allow infinite and zero values for $\alpha$. The reason for the former is that in the case of $\mathrm{TGV}^2$, it is not reasonable to expect that both $\alpha_1$ and $\alpha_2$ are bounded; such conditions would imply that $\mathrm{TGV}^2$ performs better than both $\mathrm{TV}^2$ and $\mathrm{TV}$. But we want our learning algorithm to find out whether that is the case! Regarding zero values, one of our main tasks is proving that for reasonable data, optimal parameters in fact lie in the interior $\operatorname{int} P^\infty_\alpha = (0,\infty]^N$. This is required for the existence of solutions and the continuity of the solution map parametrised by additional regularisation parameters needed for the numerical realisation of the model. For occasions when we need a bounded parameter, we also work with a bounded subset $P_\alpha \subset P^\infty_\alpha$. Remark 2.1. In much of our treatment, we could allow for spatially dependent parameters $\alpha$. However, in our theory the parameters would need to lie in a finite-dimensional subspace of $C_0(\Omega;\mathbb{R}^N)$. Minding our general definition of the functional $J_{\gamma,0}(\,\cdot\,;\alpha)$ below, no generality is lost by taking $\alpha$ to be vectors in $\mathbb{R}^N$: we could simply replace the sum in the functional by a larger sum modelling integration against parameters with values in a finite-dimensional subspace of $C_0(\Omega;\mathbb{R}^N)$.
The following covers our assumptions with regard to A, K, and Φ. We discuss various specific examples in Section 2.1 immediately after stating the assumptions.
Assumption A-KA (Operators A and K). We assume that $Y$ is a Banach space and $X$ a normed linear space, themselves duals of $Y_*$ and $X_*$, respectively. We then assume that the linear operators $K : X \to Y$ and $A_j : X \to \mathcal{M}(\Omega;\mathbb{R}^{m_j})$, $j = 1,\dots,N$, are bounded. Regarding $K$, we also assume the existence of a bounded right-inverse $K^\dagger : R(K) \to X$, where $R(K)$ denotes the range of the operator $K$; that is, $KK^\dagger = I$ on $R(K)$. We further assume that $\|u\| := \|Ku\|_Y + \sum_{j=1}^N \|A_j u\|_{\mathcal{M}(\Omega;\mathbb{R}^{m_j})}$ defines a norm on $X$, equivalent to the standard norm. In particular, by the Banach-Alaoglu theorem, any bounded sequence in $X$ then admits a weakly* convergent subsequence.

Assumption A-Φ (The fidelity Φ). We suppose $\Phi : Y \to (-\infty,\infty]$ is convex, proper, weakly* lower semicontinuous, and coercive in the sense that $\Phi(v) \to \infty$ as $\|v\|_Y \to \infty$. We assume that $0 \in \operatorname{dom}\Phi$, and the existence of $f \in \arg\min_{v \in Y} \Phi(v)$ such that $f = K\bar f$ for some $\bar f \in X$. Finally, we require that either $K$ is compact, or $\Phi$ is continuous and strongly convex.

We also require a technical assumption (Assumption A-δ) on the relationship of the regularisation terms and the fidelity $\Phi$. Roughly speaking, in most interesting cases it says that, for each index $\ell$, we can closely approximate the noisy data $f$ by functions on which the $\ell$-th regularisation term vanishes. But this is in a lifted sense, not directly in terms of derivatives.

Specific examples
We now discuss a few examples to motivate the abstract framework above.
Example 2.1 ($L^2$-squared fidelity). Taking $\Phi(v) := \tfrac12\|f - v\|_{L^2(\Omega;\mathbb{R}^d)}^2$, we recover the standard $L^2$-squared fidelity, modelling Gaussian noise. On a bounded domain $\Omega$, Assumption A-Φ follows immediately.
Example 2.2 (Total variation denoising). Let us take $K_0$ as the embedding of $X = \mathrm{BV}(\Omega) \cap L^2(\Omega)$ into $Z = L^2(\Omega)$, and $A_1 = D$. We equip $X$ with the norm $\|u\|_X := \|u\|_{L^2(\Omega)} + \|Du\|_{\mathcal{M}(\Omega;\mathbb{R}^n)}$. This makes $K_0$ a bounded linear operator. If the domain $\Omega$ has Lipschitz boundary, and the dimension satisfies $n \in \{1,2\}$, the space $\mathrm{BV}(\Omega)$ continuously embeds into $L^2(\Omega)$ [2, Corollary 3.49]. Therefore, we may identify $X$ with $\mathrm{BV}(\Omega)$ as a normed space. Otherwise, if $n \ge 3$, without going into the details of constructing $X$ as a dual space, we define weak* convergence in $X$ as combined weak* convergence in $\mathrm{BV}(\Omega)$ and $L^2(\Omega)$. Any bounded sequence in $X$ will then have a weak* convergent subsequence. This is the only property we would use from $X$ being a dual space. Now, combined with Example 2.1 and the choice $K = K_0$, $Y = Z$, we get total variation denoising for the sub-problem $(D_\alpha)$. That Assumption A-KA holds is immediate from the previous discussion. Assumption A-δ is also easily satisfied, as with $f \in \mathrm{BV}(\Omega) \cap L^2(\Omega)$ we may simply pick $\bar f_{\delta,1} = f$ for the only possible case $\ell = 1$. Observe however that $K$ is not compact unless $n = 1$, see [2, Corollary 3.49], so the strong convexity of $\Phi$ is crucial here. If the data $f \in L^\infty(\Omega)$, then it is well-known that solutions $\hat u$ to $(D_\alpha)$ satisfy $\|\hat u\|_{L^\infty(\Omega)} \le \|f\|_{L^\infty(\Omega)}$. We could therefore construct a compact embedding by adding some artificial constraints to the data $f$. This changes in the next two examples, as boundedness of solutions for higher-order regularisers is unknown; see also [43].

Example 2.3 (Second-order total generalised variation denoising). We take $X = (\mathrm{BV}(\Omega) \cap L^2(\Omega)) \times \mathrm{BD}(\Omega)$, the first factor with the same topology as in Example 2.2. We also take $Z = L^2(\Omega)$, denote $u = (v, w)$, and set $A_1 u := Dv - w$ and $A_2 u := Ew$, for $E$ the symmetrised differential. With $K = K_0$ and $Y = Z$, this yields second-order total generalised variation ($\mathrm{TGV}^2$) denoising [11] for the sub-problem $(D_\alpha)$.
Assuming for simplicity that $\alpha_1, \alpha_2 > 0$ are constants, to show Assumption A-KA we recall from [12] the existence of a constant $c = c(\Omega)$ for which the norm $\|v\|_{\mathrm{BV}(\Omega)}$ is comparable to $\mathrm{TGV}^2_{(1,1)}(v) + \|v\|_{L^1(\Omega)}$. For some $C > 0$, it then follows that the norm of Assumption A-KA is bounded by $C$ times the standard norm $\|u\|_X$; the converse bound follows easily from the triangle inequality. Thus the two norms on $X$ are equivalent.
Next we observe that clearly $Dv_k - w_k \rightharpoonup^* Dv - w$ in $\mathcal{M}(\Omega;\mathbb{R}^n)$ and $Ew_k \rightharpoonup^* Ew$ in $\mathcal{M}(\Omega;\mathbb{R}^{n\times n})$ whenever $v_k \rightharpoonup^* v$ in $\mathrm{BV}(\Omega)$ and $w_k \rightharpoonup^* w$ weakly* in $\mathrm{BD}(\Omega)$. Thus $A_1$ and $A_2$ are weak* continuous, and Assumption A-KA follows.
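For illustration, the following is a minimal numerical sketch of the denoising sub-problem in its smoothed form (cf. Section 2.2), minimising a Huberised TV energy by gradient descent on a discrete image. All parameter values, the discretisation, and the function names are our own choices rather than the paper's:

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary (last row/column zero)."""
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Negative adjoint of grad (valid since px[-1, :] = py[:, -1] = 0)."""
    dx = px.copy(); dx[1:, :] -= px[:-1, :]
    dy = py.copy(); dy[:, 1:] -= py[:, :-1]
    return dx + dy

def denoise(f, alpha=0.1, gamma=10.0, eps=1e-3, tau=0.1, iters=300):
    """Gradient descent on the smoothed energy
       (1/2)||u - f||^2 + alpha * sum h_gamma(|grad u|) + (eps/2)||grad u||^2,
    with h_gamma the Huber function (large gamma = little smoothing)."""
    u = f.copy()
    for _ in range(iters):
        gx, gy = grad(u)
        m = np.sqrt(gx**2 + gy**2)
        # derivative weight of the Huber function: gamma below the kink, 1/|g| above
        w = np.where(m < 1.0 / gamma, gamma, 1.0 / np.maximum(m, 1e-15))
        u -= tau * (u - f - alpha * div(w * gx, w * gy) - eps * div(gx, gy))
    return u

rng = np.random.default_rng(0)
f0 = np.zeros((32, 32)); f0[8:24, 8:24] = 1.0
f = f0 + 0.1 * rng.standard_normal(f0.shape)
u = denoise(f)
# the reconstruction should be closer to the ground truth than the data
assert np.linalg.norm(u - f0) < np.linalg.norm(f - f0)
```

The step size is chosen conservatively so that plain gradient descent is stable; the paper's own numerics instead rely on semi-smooth quasi-Newton methods, as noted in Section 2.2.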

Example 2.5 (Cost functionals). For the cost functional $F$, given noise-free data $f_0 \in Z = L^2(\Omega)$, we consider in particular the $L^2$ cost $F_{L^2_2}$, as well as the Huberised total variation cost $F_{L^1_\eta\nabla}$ with noise-free data $f_0 \in Y := \mathrm{BV}(\Omega)$. For the definition of the Huberised total variation, we refer to Section 2.2 on the numerics of the bilevel framework (P).
Example 2.6 (Sub-sampled Fourier transforms for MRI). Let $K_0$ and the $A_j$'s be given by one of the regularisers of Examples 2.2 to 2.4. Also take the cost $F = F_{L^2_2}$ or $F_{L^1_\eta\nabla}$ as in Example 2.5, and $\Phi$ as the squared $L^2$ fidelity of Example 2.1. However, let us now take $K = TK_0$ for some bounded linear operator $T : Z \to Y$. The operator $T$ could be, for example, a blurring kernel or a (sub-sampled) Fourier transform, in which case we obtain a model for learning the parameters for deblurring or reconstruction from Fourier samples. The latter would be important, for example, for magnetic resonance imaging (MRI) [5,44,45]. Unfortunately, our theory does not extend to many of these cases, because we will require, roughly, that $\Phi \circ K$ measures all the data that $F$ measures; see (3.1) below.

Example 2.7 (Parameter estimation in Bingham flow). Bingham fluids are materials that behave as solids if the magnitude of the stress tensor stays below a plasticity threshold, and as liquids if that quantity surpasses the threshold. In a pipe with cross-section $\Omega$, the behaviour is modelled by minimisation of the energy functional $u \mapsto \frac{\mu}{2}\int_\Omega |\nabla u|^2\,dx + \alpha\int_\Omega |\nabla u|\,dx - \langle f, u\rangle$, where $\mu > 0$ stands for the viscosity coefficient, $\alpha > 0$ for the plasticity threshold, and $f \in H^{-1}(\Omega)$. In many practical situations, the plasticity threshold is not known in advance and has to be estimated from experimental measurements. One then aims at minimising a least-squares term subject to this energy minimisation problem; the bilevel optimisation problem can then be formulated as problem (P)-$(D_\alpha)$ with corresponding choices of the spaces and operators. Concentrating in the rest of this paper primarily on image processing applications, we will however briefly return to Bingham flow in Example 5.1.

Considerations for numerical implementation
For the numerical solution of the denoising sub-problem, we will in follow-up work [24] expand upon the infeasible semi-smooth quasi-Newton approach taken in [34] for $L^2$-TV image restoration problems. This depends on Huber-regularisation of the total variation measures, as well as on enforcing smoothness through Hilbert spaces. The latter is usually done by a squared penalty on the gradient, i.e., $H^1$ regularisation, but we formalise this more abstractly in order to simplify our notation and arguments later on. Therefore, we take a convex, proper, and weak* lower-semicontinuous smoothing functional $H : X \to [0,\infty]$, and generally expect it to satisfy the following.
This is in particular the case with the choices of Example 2.2 and Example 2.3 on a bounded domain $\Omega$. In both of these cases, weak* lower semicontinuity is apparent; for completeness we record this in Lemma 4.1 in Section 4.2. In the case of Example 2.2, (2.5) is immediate from approximating $u$ strictly by functions in $C^\infty(\Omega)$ using standard strict approximation results in $\mathrm{BV}(\Omega)$ [2]. In the case of Example 2.3, this also follows by a simple generalisation of the same argument to TGV-strict approximation, as presented in [43,10].
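Concretely, with the typical choice of $H$ as a squared gradient penalty, the smoothed lower-level energy described above takes the following form (our reconstruction of the scheme from the surrounding text; $\varepsilon > 0$ is the smoothing weight and $\|\cdot\|_{\gamma,j}$ denotes the Huberised Radon norms of Definition 2.1):

```latex
J_{\gamma,\varepsilon}(u;\alpha)
  = \Phi(Ku)
  + \sum_{j=1}^{N} \alpha_j\, \|A_j u\|_{\gamma,j}
  + \varepsilon\, H(u),
\qquad
H(u) = \tfrac{1}{2}\,\|\nabla u\|_{L^2(\Omega)}^{2}.
```

Setting $(\gamma,\varepsilon) = (\infty, 0)$ recovers the original non-smooth energy.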

Definition 2.1. Given $\gamma \in (0,\infty]$, we define for the norm $\|\cdot\|_2$ on $\mathbb{R}^m$ the Huber regularisation
$$|g|_\gamma := \begin{cases} \|g\|_2 - \tfrac{1}{2\gamma}, & \|g\|_2 \ge 1/\gamma, \\[1mm] \tfrac{\gamma}{2}\|g\|_2^2, & \|g\|_2 < 1/\gamma. \end{cases}$$
We observe that this can equivalently be written using convex conjugates as
$$|g|_\gamma = \sup\Bigl\{\langle g, z\rangle - \tfrac{1}{2\gamma}\|z\|_2^2 \,\Bigm|\, \|z\|_2 \le 1\Bigr\}. \tag{2.6}$$
Then, if $\mu = f\mathcal{L}^n + \mu^s$ is the Lebesgue decomposition of $\mu \in \mathcal{M}(\Omega;\mathbb{R}^m)$ into the absolutely continuous part $f\mathcal{L}^n$ and the singular part $\mu^s$, we set $|\mu|_\gamma := |f|_\gamma\,\mathcal{L}^n + |\mu^s|$. The measure $|\mu|_\gamma$ is the Huber-regularisation of the total variation measure $|\mu|$, and we define its Radon norm as the Huber regularisation of the Radon norm of $\mu$, that is $\|\mu\|_{\gamma,\mathcal{M}(\Omega;\mathbb{R}^m)} := |\mu|_\gamma(\Omega)$.

Remark 2.4. The parameter $\gamma$ varies in the literature. In this paper, we use the convention of [37], where large $\gamma$ means small Huber regularisation. In [34], small $\gamma$ means small Huber regularisation; that is, their regularisation parameter is $1/\gamma$ in our notation.
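A direct transcription of this convention (large $\gamma$ means small regularisation, $\gamma = \infty$ recovering the plain norm) might look as follows; the function name is our own:

```python
import numpy as np

def huber_norm(g, gamma):
    """Huber regularisation |g|_gamma of the Euclidean norm:
       ||g||_2 - 1/(2*gamma)   if ||g||_2 >= 1/gamma,
       (gamma/2) * ||g||_2^2   if ||g||_2 <  1/gamma.
    gamma = inf gives back the unregularised norm."""
    n = float(np.linalg.norm(g))
    if np.isinf(gamma):
        return n
    if n >= 1.0 / gamma:
        return n - 1.0 / (2.0 * gamma)
    return 0.5 * gamma * n**2

# gamma = inf: plain norm; finite gamma: quadratic near zero, shifted norm elsewhere
assert huber_norm([3.0, 4.0], np.inf) == 5.0
assert huber_norm([3.0, 4.0], 1.0) == 4.5
assert abs(huber_norm([0.1, 0.0], 1.0) - 0.005) < 1e-12
```

Note that the two branches match in value and derivative at $\|g\|_2 = 1/\gamma$, which is what makes the regularised norm continuously differentiable.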

Shorthand notation
Writing for notational lightness $\|\mu\|_{\gamma,j} := \|\mu\|_{\gamma,\mathcal{M}(\Omega;\mathbb{R}^{m_j})}$, and given $\alpha \in P_\alpha$, we define the "marginalised" regularisation functional $T_\gamma(v;\alpha) := \inf\bigl\{\sum_{j=1}^N \alpha_j\|A_j\bar v\|_{\gamma,j} \bigm| \bar v \in K^{-1}v\bigr\}$. Here $K^{-1}$ stands for the preimage, so the constraint is $\bar v \in X$ with $v = K\bar v$. Then, in the case $(\gamma,\varepsilon) = (\infty, 0)$ and $F = F_0 \circ K$, our problem may also be written in terms of the marginal $T_\gamma$ alone. This gives the problem a much more conventional flavour, as the following examples demonstrate.

Example 2.9 (TV as a marginal). Consider the total variation regularisation of Example 2.2. Then $T_\gamma(v;\alpha) = \alpha\,\mathrm{TV}_\gamma(v)$, the Huberised total variation weighted by $\alpha$.
Example 2.10 ($\mathrm{TGV}^2$ as a marginal). In the case of the $\mathrm{TGV}^2$ regularisation of Example 2.3, the marginal $T_\gamma(\,\cdot\,;\alpha)$ is correspondingly the Huberised $\mathrm{TGV}^2$ functional with weights $\alpha = (\alpha_1,\alpha_2)$.

Main results
Our task now is to study the characteristics of optimal solutions, and their existence. Our results, based on natural assumptions on the data and the original problem (P), derive properties of the solutions to this problem and to all numerically regularised problems $(P_{\gamma,\varepsilon})$ sufficiently close to the original problem: large $\gamma > 0$ and small $\varepsilon > 0$. We denote by $u_{\alpha,\gamma,\varepsilon}$ any solution to $(D_{\gamma,\varepsilon})$ for given $\alpha \in P_\alpha$, and by $\alpha_{\gamma,\varepsilon}$ any solution to $(P_{\gamma,\varepsilon})$. Solutions to $(D_\alpha)$ and (P) we denote, respectively, by $u_\alpha = u_{\alpha,\infty,0}$ and $\alpha = \alpha_{\infty,0}$.
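Although the paper's analysis is in function space, the interplay between the solution map $\alpha \mapsto u_\alpha$ and the upper-level cost can be illustrated by a naive discrete grid search. The sketch below deliberately uses a quadratic ($H^1$-type) regulariser instead of TV, purely so that the lower-level problem has a closed-form solution; all sizes, names, and parameter ranges are our own choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
x = np.linspace(0.0, 1.0, n)
f0 = np.sin(2.0 * np.pi * x)            # ground truth
f = f0 + 0.3 * rng.standard_normal(n)   # noisy data

# 1D Neumann Laplacian; the lower-level problem
#   min_u (1/2)||u - f||^2 + (alpha/2)||Du||^2
# has the closed-form solution u_alpha = (I + alpha L)^{-1} f.
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0

def u_alpha(alpha):
    return np.linalg.solve(np.eye(n) + alpha * L, f)

# upper-level cost F(u_alpha) = (1/2)||u_alpha - f0||^2 over a log grid
alphas = np.logspace(-4, 3, 60)
costs = np.array([0.5 * np.sum((u_alpha(a) - f0) ** 2) for a in alphas])
best = alphas[np.argmin(costs)]

# the optimum is interior: better than both near-degenerate ends of the grid
assert costs.min() < costs[0] and costs.min() < costs[-1]
```

The interior minimum mirrors the main theme of this section: under natural conditions on the data, the optimal parameter is bounded away from both $0$ and $\infty$.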

L 2 -squared cost and L 2 -squared fidelity
Our main existence result regarding L 2 -squared costs and L 2 -squared fidelities is the following.
If, moreover, A-H holds, then there exist $\bar\gamma \in (0,\infty)$ and $\bar\varepsilon \in (0,\infty)$ such that the problem $(P_{\gamma,\varepsilon})$ with $(\gamma,\varepsilon) \in [\bar\gamma,\infty] \times [0,\bar\varepsilon]$ admits a solution $\alpha_{\gamma,\varepsilon} \in \operatorname{int} P^\infty_\alpha$, and the solution map $(\gamma,\varepsilon) \mapsto \alpha_{\gamma,\varepsilon}$ is outer semicontinuous. We prove this result in Section 5. Outer semicontinuity of a set-valued map $S : \mathbb{R}^k \rightrightarrows \mathbb{R}^m$ means [39] that for any convergent sequence $x_k \to x$ and $S(x_k) \ni y_k \to y$, we have $y \in S(x)$. In particular, the outer semicontinuity of $S$ means that, as the numerical regularisation vanishes, the optimal parameters for the regularised models $(P_{\gamma,\varepsilon})$ tend to optimal parameters of the original model (P).
Also observe that our result requires $\Phi \circ K$ to measure all the data that $F$ measures, in the more precise sense given by (3.1). If (3.1) did not hold, an oscillating solution $u_\alpha$ for $\alpha \in \partial P^\infty_\alpha$ could largely pass through the nullspace of $K$, hence have a low value for the objective $J$ of the inner problem, yet have a large cost given by $F$.

Corollary 3.1 (Total variation Gaussian denoising with $L^2$ cost). Suppose that the data satisfies $f, f_0 \in \mathrm{BV}(\Omega) \cap L^2(\Omega)$ and $\mathrm{TV}(f) > \mathrm{TV}(f_0)$. Then there exist $\bar\varepsilon, \bar\gamma > 0$ such that any optimal solution $\alpha_{\gamma,\varepsilon}$ to the problem $(P_{\gamma,\varepsilon})$ with $(\gamma,\varepsilon) \in [\bar\gamma,\infty] \times [0,\bar\varepsilon]$ satisfies $\alpha_{\gamma,\varepsilon} \in \operatorname{int} P^\infty_\alpha$.

That is, for the optimal parameter to be strictly positive, the noisy image $f$ should, in terms of the total variation, oscillate more than the noise-free image $f_0$. This is a very natural condition: if the noise somehow had smoothed out features from $f_0$, then we should not smooth it any further by TV regularisation!

Proof. Assumptions A-KA, A-δ, and A-H we have already verified in Example 2.2. We then observe that $K_0 = K$, so we are in the setting of Remark 3.1. Following the mapping of the TV problem to the general framework using the construction in Example 2.2, we have $K = I$ and $K^\dagger = I$ embeddings with $Y = L^2(\Omega)$, and $K^\dagger$ is bounded on $R(K) = L^2(\Omega) \cap \mathrm{BV}(\Omega)$. Moreover, by Example 2.9, $T(v;\bar\alpha) = \bar\alpha\,\mathrm{TV}(v)$. Thus (3.3) with the choice $t = 1$ reduces to (3.4).
Example 3.1 (Fourier reconstructions). Let $K_0$ be given, for example, as constructed in Example 2.2 or Example 2.3. If we take $K = \mathcal{F}K_0$ for $\mathcal{F}$ the Fourier transform, or any other unitary transform, then (3.1) is satisfied. With $\mathcal{F}^* f, f_0 \in R(K_0)$ and $t = 1$, the condition (3.3) then reduces to the analogous condition of Corollary 3.1 with $\mathcal{F}^* f$ in place of $f$. Unfortunately, our results do not cover parameter learning for reconstruction from partial Fourier samples, exactly because of (3.1). What we can do is find the optimal parameters if we only know a part of the ground-truth, but have full noisy data.

Huberised total variation and other L 1 -type costs with L 2 -squared fidelity
We now consider the alternative "Huberised total variation" cost functional of Example 2.5. Unfortunately, we are unable to derive for $F_{L^1_\eta\nabla}$ conditions as easily interpretable as those for $F_{L^2_2}$. If we discretise the definition in the following sense, however, we do obtain natural conditions. So, we let $f_0 \in Z$, assuming $Z$ to be a reflexive Banach space, and pick $\eta \in (0,\infty]$, defining the cost $F_{L^1_\eta}$. We may then approximate $F_{L^1_\eta\nabla}$ by a discretised version, to which we return after the following general result on $F_{L^1_\eta}$. Suppose (3.1) holds and $f \in R(K)$, and that Assumptions A-KA and A-δ hold. If for some $\bar\alpha \in \operatorname{int} P_\alpha$ and $t > 0$ the condition (3.6) holds, then the problem (P) admits a solution $\alpha \in \operatorname{int} P^\infty_\alpha$.
We prove this result in Section 5.

Remark 3.2. If $K_0 = K$, the condition (3.6) has a much more legible form. Also, if $K$ is compact, then the compactness of $K_0$ follows from (3.1). In the following applications, $K$ is however not compact for typical domains $\Omega \subset \mathbb{R}^2$ or $\mathbb{R}^3$, so we have to make $K_0$ compact by making its range finite-dimensional.

Corollary 3.3 (Total variation Gaussian denoising with discretised Huber-TV cost). Suppose that the data satisfies $f, f_0 \in \mathrm{BV}(\Omega) \cap L^2(\Omega)$, and that for some $t > 0$ and $\xi \in V$ the condition (3.8) holds. Then there exist $\bar\varepsilon, \bar\gamma > 0$ such that any optimal solution $\alpha_{\gamma,\varepsilon}$ to the problem $(P_{\gamma,\varepsilon})$ satisfies $\alpha_{\gamma,\varepsilon} \in \operatorname{int} P^\infty_\alpha$.

This says that for the optimal parameter to be strictly positive, the noisy image $f$ should oscillate more than the image $f + t\operatorname{div}\xi$ in the direction of the (discrete) total variation flow. This is a very natural condition, and we observe that the non-discretised counterpart of (3.8) for $\gamma = \infty$ would be stated in terms of the sign of a measure, where we define for $\mu \in \mathcal{M}(\Omega;\mathbb{R}^m)$ the sign $\operatorname{Sgn}(\mu) := \{\xi \in L^1(\Omega;\mu) \mid \mu = \xi|\mu|\}$.
That is, $-\operatorname{div}\xi$ is the total variation flow.
Proof. Analogous to Corollary 3.1 regarding the L 2 cost.

A few auxiliary results
We record in this section some general results that will be useful in the proofs of the main results. These include the coercivity of the functional $J_{\gamma,0}(\,\cdot\,;\alpha)$, recorded in Section 4.1. We then discuss some elementary lower semicontinuity facts in Section 4.2. Finally, we provide in Section 4.3 some new results for passing from strict convergence to strong convergence.

Coercivity
Observe that $|g|_\gamma \ge \|g\|_2 - \tfrac{1}{2\gamma}$, and hence $\|\mu\|_{\gamma,\mathcal{M}(\Omega;\mathbb{R}^m)} \ge \|\mu\|_{\mathcal{M}(\Omega;\mathbb{R}^m)} - \tfrac{\mathcal{L}^n(\Omega)}{2\gamma}$. Since $\Omega$ is bounded, it follows that given $\delta > 0$, for large enough $\gamma > 0$ and every $\mu$ we have $\|\mu\|_{\gamma,\mathcal{M}} \ge \|\mu\|_{\mathcal{M}} - \delta$. We will use these properties frequently. Based on the coercivity and norm equivalence properties in Assumption A-KA and Assumption A-Φ, the following proposition states the important fact that $J_{\gamma,0}$ is coercive with respect to the norm of Assumption A-KA, and thus also the standard norm of $X$.

Proposition 4.1. Suppose Assumption A-KA and Assumption A-Φ hold, and that $\alpha \in \operatorname{int} P^\infty_\alpha$. Then $J_{\gamma,0}(\,\cdot\,;\alpha)$ is coercive: any sequence $\{u^i\}_{i=1}^\infty$ with $\sup_i J_{\gamma,0}(u^i;\alpha) < \infty$ is bounded in $X$.

This implies $\sup_i \|u^i\|_X < \infty$. By the equivalence of norms in Assumption A-KA, we immediately obtain (4.2).

Lower semicontinuity
We record the following elementary lower semicontinuity facts that we have already used to justify our examples.
Proof. In each case, let $\{v^i\}_{i=1}^\infty$ converge to $v$. Denoting by $G$ the involved functional, we write it as a convex conjugate, $G(v) = \sup_\varphi\{\langle v,\varphi\rangle - G^*(\varphi)\}$. Taking a supremising sequence $\{\varphi^j\}_{j=1}^\infty$ for this functional at the point $v$, we easily see lower semicontinuity by considering the corresponding pairings $\langle v^i, \varphi^j\rangle - G^*(\varphi^j)$. In case (ii) we use the fact that $\varphi^j \in L^2(\Omega;\mathbb{R}^d) \subset [L^p(\Omega;\mathbb{R}^d)]^*$ when $\Omega$ is bounded.

In case (i), how exactly we write $G(\mu) = \|\nu - \mu\|_{\gamma,\mathcal{M}}$ as a convex conjugate demands explanation. We first of all recall that for $g \in \mathbb{R}^n$, the Huber-regularised norm may be written in the dual form $|g|_\gamma = \sup\{\langle g, z\rangle - \tfrac{1}{2\gamma}\|z\|_2^2 \mid \|z\|_2 \le 1\}$. From this expression we obtain the required conjugate form for $G$.
We also show here that the marginal regularisation functional $T_\gamma(\,\cdot\,;\bar\alpha)$ is weakly* lower semicontinuous on $Y$. Choosing $K$ as in Example 2.2 and Example 2.3, this provides in particular a proof that TV and $\mathrm{TGV}^2$ are lower semicontinuous with respect to weak convergence in $L^2(\Omega)$ when $n = 1, 2$.

Lemma 4.2.
Suppose $\bar\alpha \in \operatorname{int} P^\infty_\alpha$, and Assumption A-KA holds. Then $T_\gamma(\,\cdot\,;\bar\alpha)$ is lower semicontinuous with respect to weak* convergence in $Y$, and continuous with respect to strong convergence in $R(K)$.

Proof. Let $v_k \rightharpoonup^* v$ weakly* in $Y$. By the Banach-Steinhaus theorem, the sequence is bounded in $Y$. Therefore, if we pick $\delta > 0$ and $\bar v_k \in K^{-1}v_k$ achieving the infimum defining $T_\gamma(v_k;\bar\alpha)$ to within $\delta$, then referral to Assumption A-KA yields, for some constant $c > 0$, a corresponding bound on $\{\bar v_k\}_{k=1}^\infty$. Without loss of generality, we may assume that $\liminf_k T_\gamma(v_k;\bar\alpha) < \infty$, because otherwise there is nothing to prove. Then $\{\bar v_k\}_{k=1}^\infty$ is bounded in $X$, and therefore admits a weakly* convergent subsequence. Let $\bar v$ be the limit of this, unrelabelled, sequence. Since $K$ is continuous, we find that $K\bar v = v$. But $R_\gamma(\,\cdot\,;\bar\alpha)$ is clearly weak* lower semicontinuous in $X$; see Lemma 4.1. Thus $T_\gamma(v;\bar\alpha) \le R_\gamma(\bar v;\bar\alpha) \le \liminf_k T_\gamma(v_k;\bar\alpha) + \delta$. Since $\delta > 0$ was arbitrary, this proves weak* lower semicontinuity.
To see continuity with respect to strong convergence in $R(K)$, we observe that if $v = Ku \in R(K)$, then by the boundedness of the operators $\{A_j\}_{j=1}^N$ we get $T_\gamma(v;\bar\alpha) \le C\|u\|_X$ for some constant $C > 0$. So $T_\gamma(\,\cdot\,;\bar\alpha)|_{R(K)}$ is finite-valued and convex, and therefore continuous [29, Lemma I.2.1].

From Φ-strict to strong convergence
In Proposition 5.2, forming part of the proof of our main theorems, we will need to pass from "Φ-strict convergence" of $Ku_k$ to $v$ to strong convergence, using the following lemmas. The former means that $\Phi(Ku_k) \to \Phi(v)$ and $Ku_k \rightharpoonup^* v$ weakly* in $Y$. By strong convexity of $\Phi$ on a Banach space $Y$, we mean the existence of $\gamma > 0$ such that for every $y, y' \in Y$ and $z \in \partial\Phi(y) \subset Y^*$ there holds
$$\Phi(y') \ge \Phi(y) + (z \mid y' - y) + \frac{\gamma}{2}\|y' - y\|_Y^2,$$
where $(z \mid y)$ denotes the dual product, and the subdifferential $\partial\Phi(y)$ is defined by the $z$ satisfying the same expression with $\gamma = 0$. With regard to more advanced strict convergence results, we point the reader to [25,38,35].
Remark 4.1. By standard convex analysis [29], $v \in \operatorname{dom}\partial\Phi$ if $\Phi$ has a finite-valued point of continuity and $v \in \operatorname{int}\operatorname{dom}\Phi$.

Proof. We first of all note that $v \in \operatorname{dom}\partial\Phi$. From the strong convexity of $\Phi$, for some $\gamma > 0$ and $z \in \partial\Phi(v)$, we then have
$$\frac{\gamma}{2}\|Ku_k - v\|_Y^2 \le \Phi(Ku_k) - \Phi(v) - (z \mid Ku_k - v).$$
Taking the limit infimum, and using $\Phi(Ku_k) \to \Phi(v)$ together with the weak* convergence $Ku_k \rightharpoonup^* v$, we observe $\|Ku_k - v\|_Y \to 0$. This proves strong convergence.
We now use the lemma to show strong convergence of minimising sequences.

Lemma 4.4. Suppose $\Phi$ is strongly convex, satisfies Assumption A-Φ, and that $C \subset Y$ is non-empty, closed, and convex with $\operatorname{int} C \cap \operatorname{dom}\Phi \ne \emptyset$. If $\{v_k\}_{k=1}^\infty \subset C$ satisfies $\Phi(v_k) \to \inf_{v \in C}\Phi(v)$, then $v_k \to \bar v$ strongly in $Y$, where $\bar v$ is the unique minimiser of $\Phi$ over $C$.
Proof. By the strict convexity of $\Phi$, implied by strong convexity, and the assumptions on $C$, $\bar v$ is unique and well-defined. Moreover, $\bar v \in \operatorname{dom}\partial\Phi$. Indeed, our assumptions show the existence of a point $v' \in \operatorname{int} C \cap \operatorname{dom}\Phi$. The indicator function $\delta_C$ is then continuous at $v'$, and so standard subdifferential calculus (see, e.g., [29, Proposition I.5.6]) implies that $\partial(\Phi + \delta_C)(\bar v) = \partial\Phi(\bar v) + \partial\delta_C(\bar v)$. Using the coercivity of $\Phi$ in Assumption A-Φ, we then find that $\{v_k\}_{k=1}^\infty$ is bounded in $Y$, at least after moving to an unrelabelled tail of the sequence with $\Phi(v_k) \le \Phi(\bar v) + 1$. Since $Y$ is a dual space, the unit ball is weak* compact, and we deduce the existence of an unrelabelled subsequence and $v \in Y$ such that $v_k \rightharpoonup^* v$ weakly* in $Y$. By the weak* lower semicontinuity (Assumption A-Φ), we deduce $\Phi(v) \le \liminf_k \Phi(v_k)$. Since each $v_k \in C$, and $C$ is closed, also $v \in C$. Therefore, by the strict convexity and the definition of $\bar v$, necessarily $v = \bar v$. Therefore $v_k \rightharpoonup^* \bar v$ weakly* in $Y$, and $\Phi(v_k) \to \Phi(\bar v)$. Lemma 4.3 now shows that $v_k \to \bar v$ strongly in $Y$.

Proofs of the main results
We now prove the existence, continuity, and non-degeneracy (interior solution) results of Section 3 through a series of lemmas and propositions, starting from general ones that are then specialised to provide the natural conditions presented in Section 3.

Existence and lower semicontinuity under lower bounds
Our principal tool for proving existence is the following proposition. In the rest of this section we will concentrate on proving the existence of the set $K$ in its statement, based on the natural conditions of Section 3.

Proposition 5.1 (Existence on a compact parameter domain). Suppose Assumption A-KA and A-Φ hold, and that there exists a compact set $K \subset \operatorname{int} P^\infty_\alpha$ to which the minimisation may be restricted. Then there exists a solution $\alpha_{\gamma,\varepsilon} \in \operatorname{int} P^\infty_\alpha$ to $(P_{\gamma,\varepsilon})$. Moreover, the associated value mapping is lower semicontinuous within $\operatorname{int} P^\infty_\alpha$.
The proof depends on the following two lemmas that will be useful later on as well.

Lemma 5.1 (Lower semicontinuity of the fidelity with varying parameters). Suppose Assumption A-KA, A-Φ, and A-H hold. Let $\alpha^k \to \alpha \in \operatorname{int} P^\infty_\alpha$, $\gamma_k \to \gamma \in [\bar\gamma,\infty]$, and $\varepsilon_k \to \varepsilon \in [0,\bar\varepsilon]$.
Proof. Let $\hat t > 0$ be such that $\alpha_j \ge \hat t$ $(j = 1,\dots,N)$. We then deduce for large $k$ and some $C = C(\Omega,\gamma)$ a uniform bound on the minimal energies. Here we have assumed the final inequality to hold; this comes without loss of generality, because otherwise there is nothing to prove. Observe that this holds even if $\{\alpha^k\}_{k=1}^\infty$ is not bounded. In particular, if $\varepsilon > 0$, restricting $k$ to be large, we may assume that $\varepsilon_k \ge \varepsilon/2$. We recall the splitting of the energy into its smoothing, fidelity, and Huberised regularisation terms, and want to show lower semicontinuity of each of the terms in turn. We start with the smoothing term. If $\varepsilon > 0$, using (5.3), we write the term through $\varepsilon_k H$. By the convergence $\varepsilon_k \to \varepsilon$, and the weak* lower semicontinuity of $H$, we find that it is lower semicontinuous, and thus (5.5) follows.

The fidelity term $\Phi \circ K$ is weak* lower semicontinuous by the continuity of $K$ and the weak* lower semicontinuity of $\Phi$. It therefore remains to consider the terms in (5.4) involving both the regularisation parameters $\alpha$ and the Huberisation parameter $\gamma$. Indeed, using the dual formulation (2.6) of the Huberised norm, we have a corresponding bound for some constant $C' = C'(\hat t, \Omega)$. Thus, if $\alpha \in P_\alpha$, we obtain the required estimate. It follows from (5.2) that the sequence is suitably bounded, where the final step follows from Lemma 4.1.

Towards Γ-convergence and continuity of the solution map
The next lemma, immediate from the previous one, will form the first part of the proof of continuity of the solution map. As its condition, we introduce a stronger form of (5.1) that is uniform over a range of ε and γ.

Lemma 5.3 (Γ-lower limit of the cost map in terms of regularisation). Suppose Assumptions A-KA, A-Φ, and A-H hold. Let
Proof. Immediate consequence of (5.6b) of Lemma 5.2 and the weak* lower semicontinuity of F.
The next lemma will be used to obtain partial strong convergence of minimisers as we approach ∂P_α. This will then be used to derive simplified conditions for this not happening. The result is the counterpart of Lemma 5.2, which studied convergence of reconstructions away from the boundary, and depends on the additional Assumption A-δ. This is the only place where we use this assumption; replacing this lemma by one with different assumptions would allow us to remove Assumption A-δ.

Lemma 5.4 (Convergence of reconstructions at the boundary). Suppose Assumptions A-KA, A-Φ, and A-δ hold, and that
Proof. We denote for short u_k := u_{α_k,γ_k,ε_k}, and note that f is unique by the strong convexity of Φ. Since ᾱ ∈ ∂P^∞_α, there exists an index ℓ ∈ {1, . . . , N} such that α_{k,ℓ} → 0. We let ℓ be the first such index, and pick arbitrary δ > 0. We take f̃_{δ,ℓ} as given by Assumption A-δ, observing that the construction still holds with Huberisation, that is, for any γ ∈ (0, ∞] and in particular any γ = γ_k, we have
If we are aiming for ε̄ > 0, let us also pick f_{δ,ℓ} by application of Assumption A-H to u = f̃_{δ,ℓ}. Otherwise, with ε̄ = 0, let us just set f_{δ,ℓ} = f̃_{δ,ℓ}. Since
Observe that it is no problem if some index α_{k,j} = ∞, because by definition, as a minimiser, u_k achieves a smaller value than f_{δ,ℓ} above, and for the latter ‖A_j f_{δ,ℓ}‖_{γ_k,j} = 0. Choosing ε̄ > 0 small enough, it follows for ε ∈ [0, ε̄] that
Choosing k large enough, we thus see that
Letting δ ↘ 0, we see that Φ(Ku_k) → Φ(f). Lemma 4.4 with C = Y therefore shows that Ku_k → f strongly in Y.

Minimality and co-coercivity
Our remaining task is to show the existence of K for (5.1), and of a uniform K (see (5.15) below) for the application of Lemma 5.3. When the fidelity and cost functionals satisfy some additional conditions, we will now reduce this to the existence of α ∈ int P_α satisfying F(u_α) < F(f̃) for a specific f̃ ∈ K^{−1} f. So far, we have made no reference to the data: the ground truth f_0 or the corrupted measurement data f. We now do so in an abstract way, and need a type of source condition, called minimality, relating the ground truth f_0 to the noisy data f. We will return to how this is obtained later.
Remark 5.1. If we can take C = 0, then the final condition just says that K* ϕ_v ∈ ∂F(v). This is a rather strong property. Also, instead of t ↦ t^p, we could in the following proofs use any strictly increasing energy ψ :
To deal with the smoothing term H with ε > 0, we also need co-coercivity; for the justification of the term for the condition in (5.11) below, more often seen in the context of monotone operators, we refer to the equivalences in [4, Theorem 18.15].
If F is (K, p)-co-coercive at (u, λ) for every u ∈ X and λ ∈ ∂F (u), we say that F is (K, p)-co-coercive.
If p = 2, we say that F is simply K-co-coercive.
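The precise inequality (5.11) is not restated here; for orientation, in the classical monotone-operator setting of [4, Theorem 18.15] (the Baillon–Haddad theorem), co-coercivity of a gradient with Lipschitz constant L reads:

```latex
% Classical co-coercivity (Baillon--Haddad): if F is convex with
% L-Lipschitz gradient, then for all u, v,
\langle \nabla F(u) - \nabla F(v),\, u - v \rangle
  \;\ge\; \frac{1}{L}\, \bigl\| \nabla F(u) - \nabla F(v) \bigr\|^2 .
```

The (K, p) variant used above measures the difference of subgradients through the operator K and a p-th power rather than a square, but the structure of the inequality is analogous.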
From this we immediately get lim sup_{k→∞} F(u_k) ≤ F(v).
(i) For each ε ∈ [0, ε̂] and γ ∈ [γ, ∞] there exists a compact set K ⊂ int P^∞_α such that (5.1) holds. (ii) If, moreover, Assumption A-H holds and F is (K, p)-co-coercive for any p > 0, then there exists a compact set K ⊂ int P^∞_α such that
In both cases, the existence of K says that every solution ᾱ to (P_{γ,ε}) satisfies ᾱ ∈ K.
Proof. We note that f is unique by the strong convexity of Φ. Let us first prove (i). In fact, let us pick γ, ε̄ > 0 and assume with α fixed that
We want to show the existence of a compact set K ⊂ P^∞_α such that solutions ᾱ to (P_{γ,ε}) satisfy ᾱ ∈ K whenever (γ, ε) ∈ [γ̄, ∞] × [0, ε̄], for γ̄ ∈ [γ, ∞) and ε̄ ∈ (0, ε̂] to be determined during the course of the proof. We thus let (α_k,
Since this set is compact, we may assume that α_k → ᾱ ∈ P^∞_α, ε_k → ε̂, and γ_k → γ̂. Suppose ᾱ ∈ ∂P^∞_α. By Lemma 5.4 then Ku_k → f strongly in Y for small enough ε̄, with no conditions on γ̄. Further, by the (K, q)-minimality of f̃ and Lemma 5.5,
If we fix γ_k := γ and ε_k := ε, and pick {α_k}_{k=1}^∞ a minimising sequence for (P_{γ,ε}), we find that (5.17) is in contradiction to (5.14). Necessarily then ᾱ ∈ int P^∞_α. By the lower semicontinuity result of Proposition 5.1, ᾱ therefore has to solve (P_{γ,ε}). We have proved (i), because, if K did not exist, we could choose α_k → ᾱ ∈ ∂P_α.
We now show the Γ-convergence of the cost map, and as a consequence the outer semicontinuity of the solution map. For an introduction to Γ-convergence, we refer to [9, 21]. Proof. Lemma 5.3 shows the Γ-lower limit (5.10). We still have to show the Γ-upper limit. This means that given ᾱ ∈ K and (γ_k, ε_k) → (γ, ε) within [0, ε̄] × [γ, ∞], we have to show the existence of a sequence
We claim that we can take α_k = ᾱ. With u_k := u_{α_k,γ_k,ε_k}, Lemma 5.2 gives a subsequence satisfying Ku_k → Kû strongly, with û a minimiser of J_{γ,ε}( · ; ᾱ). We just have to show that
Since F is (K, p)-co-coercive, and û is by assumption (K, p)-minimal, this follows from Lemma 5.5.
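For the reader's convenience, the two defining inequalities of Γ-convergence being verified in this proof are the standard ones: writing I_k for the cost maps,

```latex
% Gamma-convergence of I_k to I:
% (i) lower-limit inequality: for every sequence \alpha_k \to \bar\alpha,
\liminf_{k \to \infty} I_k(\alpha_k) \;\ge\; I(\bar\alpha);
% (ii) upper limit (recovery sequence): there exist \alpha_k \to \bar\alpha with
\limsup_{k \to \infty} I_k(\alpha_k) \;\le\; I(\bar\alpha).
```

Lemma 5.3 provides (i), while the constant choice α_k = ᾱ above supplies the recovery sequence for (ii).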

The L 2 -squared fidelity with (K, 2)-co-coercive cost
In what follows, we seek to prove (5.14) for the L^2-squared fidelity with (K, 2)-co-coercive cost functionals by imposing more natural conditions derived from (5.20) in the next lemma. Lemma 5.6 (Natural conditions for the L^2-squared 2-co-coercive case). Suppose Assumptions A-KA and A-δ hold. Let Y be a Hilbert space, f ∈ R(K), and
Then (5.14) holds if f̃ satisfies
for some ᾱ ∈ int P_α and t ∈ (0, 1/C], where C is the co-coercivity constant. Here we recall the definition of T( · ; ᾱ) from (2.7).
Summarising the developments so far, we may state: If f̃ is (K, 2)-minimal, F is (K, 2)-co-coercive at (f̃, K* ϕ_f̃), and
for some ᾱ ∈ int P_α and t ∈ (0, 1/C], then the claims of Proposition 5.2 hold. Proof. It is easily checked that Assumption A-Φ holds. Lemma 5.6 then verifies the remaining conditions of Proposition 5.2, which shows the existence of K in both cases. Finally, Proposition 5.1 shows the existence of ᾱ ∈ int P^∞_α solving (P_{γ,ε}). For the continuity of the solution map, we refer to Proposition 5.3.

L 2 -squared fidelity with L 2 -squared cost
We may finally finish the proof of our main result on the L^2-squared fidelity Φ(v) := ½‖f − v‖²_Y. Proof of Theorem 3.1. We have to verify the conditions of Proposition 5.4, primarily the (K, 2)-minimality of f̃, the K-co-coercivity of F, and (5.26). Regarding minimality and co-coercivity, we have
From this the (I, 2)-co-coercivity of F_0 with C = 1 is clear, as is the (I, 2)-minimality with regard to F_0 of every v ∈ Y. By extension, F is easily seen to be (K_0, 2)-co-coercive, and every u ∈ X is (K_0, 2)-minimal. Using (3.1), the (K, 2)-co-coercivity of F with C = C_0 is immediate, as is the (K, 2)-minimality of every u ∈ X.
Regarding (5.26), we need to find ϕ_f̃ such that K* ϕ_f̃ = ∇F(f̃). We have
From this we observe that ϕ_f̃ exists, because (3.1) implies N(K) ⊂ N(K_0), and hence R(K*_0) ⊂ R(K*). Here N and R stand for the nullspace and range, respectively. Setting K* ϕ_f̃ = K*_0(K_0 f̃ − f_0) and using K K† = I on R(K), we thus find that
Observe that, since N(K) ⊂ N(K_0), this expression does not depend on the choice of f̃ ∈ K^{−1} f. Following Remark 2.2, we can take f̃ = K† f. It follows that (3.3) implies (5.26).
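The independence of the choice of f̃ can be spelled out. Reading the L^2-squared cost as F(u) = ½‖K_0 u − f_0‖² (so that ∇F(f̃) = K*_0(K_0 f̃ − f_0), matching the display above), a candidate multiplier is:

```latex
% Candidate multiplier and verification. We use
% K^\dagger K = P_{N(K)^\perp} and K_0 P_{N(K)^\perp} = K_0,
% the latter because N(K) \subset N(K_0):
\varphi_{\tilde f} := (K_0 K^\dagger)^* (K_0 \tilde f - f_0),
\qquad
K^* \varphi_{\tilde f}
  = (K_0 K^\dagger K)^* (K_0 \tilde f - f_0)
  = K_0^* (K_0 \tilde f - f_0)
  = \nabla F(\tilde f).
```

The specific form of F here is our reading of the L^2-squared cost; the computation only uses that ∇F(f̃) lies in R(K*_0).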

Remark 5.3.
Provided that Φ satisfies Assumptions A-Φ, A-δ, and A-H, it is easy to extend Lemma 5.6, and consequently Theorem 3.1, to the case where Ŷ ⊃ Y is a Hilbert space, f ∈ Y*, and Y is still a reflexive Banach space. As Y ⊂ Ŷ = Ŷ* ⊂ Y* in this case, we still have
In particular
Therefore the expression (5.23) still holds, which is the only place where we needed the specific form of Φ.
Example 5.1 (Bingham flow). As a particular case of this remark, we take Ŷ = Y = H^1_0(Ω). Then Y* = H^{−1}(Ω). With f ∈ L^2(Ω), the Riesz representation theorem allows us to write
for some f′ ∈ H^{−1}(Ω), which we may identify with f. Therefore, Theorem 3.1 can be extended to cover the Bingham flow of Example 2.7. In particular, we get the same condition for interior solutions as in Corollary 3.1, namely TV(f) > TV(f_0).
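The interior-solution condition TV(f) > TV(f_0) is easy to check numerically for synthetic data. A minimal sketch with an anisotropic discrete TV (the discretisation and all names are our illustrative choices, not taken from the analysis above):

```python
import numpy as np

def discrete_tv(u):
    """Anisotropic discrete total variation: sum of absolute
    forward differences in both coordinate directions."""
    dx = np.abs(np.diff(u, axis=0)).sum()
    dy = np.abs(np.diff(u, axis=1)).sum()
    return dx + dy

rng = np.random.default_rng(0)
n = 64
f0 = np.zeros((n, n))
f0[:, n // 2:] = 1.0                           # ground truth: step image
f = f0 + 0.1 * rng.standard_normal((n, n))     # noisy measurement

# Noise increases the discrete TV of a piecewise constant image,
# so the interior-solution condition TV(f) > TV(f0) holds here.
print(discrete_tv(f) > discrete_tv(f0))
```

Additive noise increases the discrete total variation of a piecewise constant image with overwhelming probability, which is exactly the situation in which the condition for interior solutions is satisfied.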

A more general technique for the L 2 -squared fidelity
We now study another technique that does not require (K, 2)-minimality and (K, 2)-co-coercivity at f̃. We still, however, require Φ to be the L^2-squared fidelity, and u_ᾱ to be (K, p)-minimal. Here we use the shorthand ϕ_{sᾱ} := ϕ_{u_{sᾱ}}. The difficulty is going to the limit, because we do not in general have any reasonable form of convergence of {ϕ_{sᾱ}}_{s>0}. If we did indeed have ϕ_{sᾱ} → ϕ_f̃, then (5.28), and consequently (5.32), would be implied by the condition (5.26) that we derived using (K, 2)-co-coercivity. In the next subsection we will go to the limit with finite-dimensional functionals that are not (K, 2)-co-coercive, and hence the earlier theory does not apply.
Summing up the developments so far, we may, in contrast to Proposition 5.4, which depended on f̃ and co-coercivity, state: If for some ᾱ ∈ int P_α and t > 0 the solution u_ᾱ is (K, p)-minimal with
Proof. It is easily checked that Assumption A-Φ holds. Lemma 5.8 then verifies the remaining conditions of Proposition 5.2, which shows the existence of K in both cases. Finally, Proposition 5.1 shows the existence of ᾱ ∈ int P^∞_α solving (P_{γ,ε}). For the continuity of the solution map, we refer to Proposition 5.3.

L 2 -squared fidelity with Huberised L 1 -type cost
We now study the Huberised total variation cost functional. We cannot in general prove that solutions u_α for small α are better than f. Consider, for example, f_0 a step function, and f a noisy version with the edge not destroyed. The solution u_α might smooth out the edge, and then we might have
This destroys all hope of verifying the conditions of Lemma 5.8 in the general case. If we, however, modify the set of test functions in the definition of L^1_η∇ to be discrete, we can prove this bound. Alternatively, we could assume uniformly bounded divergence from the family of test functions. We have left this case out for simplicity, and prove our results for general L^1 costs with finite-dimensional Z. Lemma 5.9 (Conditions for L^1_η cost). Suppose Z is a reflexive Banach space, and K_0 : X → Z is linear, bounded, and satisfies (3.1). Then, whenever (5.26) holds, F(u) := F_{L^1_η}(K_0 u) is (K, 1)-co-coercive and (5.32) holds for some ᾱ ∈ int P_α, with both u_ᾱ and f̃ being (K, 1)-minimal.
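To fix ideas about the Huberised L^1-type cost, here is a sketch of a discrete Huberised TV functional: the absolute value of the gradient magnitude is replaced by a function that is quadratic below the threshold 1/γ and affine above it. This particular Huberisation convention and the discretisation are our illustrative choices:

```python
import numpy as np

def huber(t, gamma):
    """Huberised absolute value with parameter gamma: quadratic
    for |t| <= 1/gamma, affine beyond (one common convention;
    the exact form used in the text may differ in constants)."""
    return np.where(np.abs(t) <= 1.0 / gamma,
                    0.5 * gamma * t**2,
                    np.abs(t) - 0.5 / gamma)

def huberised_tv_cost(u, gamma):
    """Discrete Huberised L1-type cost: Huber penalty applied to
    the pointwise gradient magnitude of u."""
    gx = np.diff(u, axis=0, append=u[-1:, :])   # forward differences,
    gy = np.diff(u, axis=1, append=u[:, -1:])   # Neumann-type boundary
    return huber(np.hypot(gx, gy), gamma).sum()

# For a linear ramp, the Huberised cost approaches the discrete TV
# (here 32.0) as gamma grows.
u = np.outer(np.linspace(0.0, 1.0, 32), np.ones(32))
val = huberised_tv_cost(u, 1e4)
```

As γ → ∞ the Huberised cost approaches the discrete TV, which is the regime (γ = 100) used in the numerical experiments below.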
As is easily verified, ∂F_{L^1_η} is outer semicontinuous with respect to strong convergence in the domain Z and weak* convergence in the codomain Z*. That is, given v_k → v and z_k ⇀* z with z_k ∈ ∂F_{L^1_η}(v_k), we have z ∈ ∂F_{L^1_η}(v). By Lemma 5.4 and (3.1), we have K_0 u_k → K_0 f̃ strongly in Z. Since B is bounded, we may therefore find a sequence s_k ↘ 0 with λ_{s_k ᾱ} ⇀* λ_0 ∈ ∂F(f̃) weakly* in Z*. Since by assumption K_0 is compact, so is K*_0 [40, Theorem 4.19]. Consequently ϕ_{s_k ᾱ} → ϕ_0 := (K_0 K†)* λ_0 strongly in Y, after possibly moving to an unrelabelled subsequence. Let us now consider the right-hand side of (5.32) for ᾱ replaced by s_k ᾱ. Since f = K f̃, and we have proved that ϕ_{u_{s_k ᾱ}} ∈ R(K), Lemma 4.2 shows that lim_{k→∞} T(f − t ϕ_{u_{s_k ᾱ}}; ᾱ) = T(f − t ϕ_0; ᾱ).

Proof of Theorem 3.2.
From the proof of Lemma 5.9, we observe that (5.26) can be expanded as
where λ_0 ∈ V with λ_0 ∈ ∂F_{L^1_η}(K_0 f̃). As in the proof of Theorem 3.1, this is in fact independent of the choice of f̃, so we may take f̃ = K† f. Thus λ_0 ∈ ∂F_{L^1_η}(K_0 K† f). By Lemma 5.9, the conditions of Proposition 5.5 are satisfied, so we may apply it together with Proposition 5.1 to conclude the proof.
Remark 5.5. The considerations of Remark 5.3 also apply to Lemma 5.8 and consequently Theorem 3.2. That is, the results hold for the cost
where Ŷ ⊃ Y is a Hilbert space, f ∈ Y*, and Y a reflexive Banach space. Indeed, again the specific form of Φ was only used for the optimality condition (5.31), which is also satisfied by the form (5.34).

Numerical verification and insight
In order to verify the above theoretical results, and to gain further insight into the cost map I_{γ,ε}, we computed its values for a grid of values of α, for both TV and TGV^2 denoising, and for the L^2_2 and L^1_η∇ cost functionals. This we did for two different images, the parrot image depicted in Figure 1 and the Scottish southern uplands image depicted in Figure 2. The results are visualised in Figure 3 and Figure 4, respectively. For TV, the parameter range was α ∈ U := {0.001, 0.01, 0.02, . . . , 0.5}/n (altogether 51 values), where n = 256 is the edge length of the rectangular test image. For TGV^2 the parameter range was α ∈ U × (U/n). We set γ = 100 and ε = 10^{−10}, and computed the denoised image u_{α,γ,ε} by the SSN denoising algorithm that we report separately in [24], with more extensive numerical comparisons and further applications.
As we can see, the optimal α clearly seems to lie away from the boundary of the parameter domain P_α, confirming the theoretical studies for the squared L^2 cost L^2_2 and the discrete version of the Huberised TV cost L^1_η∇. The question remains: do these results hold for the full Huberised TV?
We further observe from the numerical landscapes that the cost map I_{γ,ε} is roughly quasiconvex in the variable α for both TV and TGV^2. In the β variable of TGV^2 the same does not seem to hold: around the optimal solution, the level sets tend to expand along α as β increases, until they start to reach their limit along β. However, the level sets around the optimal solution also tend to be very elongated along the β axis. This suggests that TGV^2 is reasonably robust with respect to the choice of β, as long as it is in the right range.
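The grid experiment can be reproduced in miniature. The sketch below replaces the SSN solver of [24] by a crude gradient descent on a smoothed ROF/TV energy (the solver, step-size rule, and all parameter values are our illustrative choices); the point is only that the L^2_2 cost landscape over the α grid attains its minimum in the interior:

```python
import numpy as np

def denoise_tv_smoothed(f, alpha, eps=0.1, steps=800):
    """Crude gradient descent for the smoothed ROF energy
    0.5*||u - f||^2 + alpha * sum sqrt(|grad u|^2 + eps^2);
    a simplistic stand-in for the SSN solver of [24]."""
    tau = 0.2 / (1.0 + 8.0 * alpha / eps)   # rough stability bound
    u = f.copy()
    for _ in range(steps):
        gx = np.diff(u, axis=0, append=u[-1:, :])   # forward differences
        gy = np.diff(u, axis=1, append=u[:, -1:])
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        px, py = gx / mag, gy / mag
        # backward-difference divergence of the smoothed unit field
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        u = u - tau * ((u - f) - alpha * div)
    return u

rng = np.random.default_rng(1)
f0 = np.zeros((32, 32)); f0[8:24, 8:24] = 1.0        # ground truth
f = f0 + 0.1 * rng.standard_normal(f0.shape)         # noisy data

# L2-squared cost over a coarse alpha grid, endpoints included:
# too little and too much regularisation should both cost more.
grid = [0.0, 0.05, 0.1, 0.2, 0.4, 0.8]
costs = [0.5 * np.sum((denoise_tv_smoothed(f, a) - f0) ** 2) for a in grid]
```

In this miniature the minimal cost is attained strictly inside the grid, mirroring the interior optima visible in the landscapes of Figures 3 and 4.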

A data statement for the EPSRC
This is a theoretical mathematics paper, and any data used merely serves as a demonstration of mathematically proven results. Moreover, photographs that are for all intents and purposes statistically comparable to the ones used for the final experiments can easily be produced with a digital camera, or downloaded from the internet. This will provide a better evaluation of the results than the use of exactly the same data as we used.

Figure 3: Cost functional value versus α for TV denoising, for both the parrot and uplands test images, and for both the L^2_2 and L^1_η∇ cost functionals. For the parrot image, the optimal α from [24] for the initialisation 0.1/n, resp. (1/n^2, 0.1/n), is indicated by an asterisk. For the landscape image, the optimal α from [24] for the initialisation 0.01/n, resp. (0.1/n^2, 0.01/n), is indicated by an asterisk.

Figure 4: Cost functional value versus (α, β) for TGV^2 denoising, for the same test images and cost functionals. The optimal parameters from [24] are indicated by asterisks as in Figure 3, for the same initialisations.