Convergence analysis of Tikhonov regularization for non-linear statistical inverse learning problems

We study a non-linear statistical inverse learning problem, where we observe the noisy image of a quantity through a non-linear operator at some random design points. We consider the widely used Tikhonov regularization (or method of regularization, MOR) approach to construct an estimator of the quantity for this non-linear ill-posed inverse problem. The estimator is defined as the minimizer of a Tikhonov functional, the sum of a data misfit term and a quadratic penalty term. We develop a theoretical analysis of the minimizer of the Tikhonov regularization scheme using the ansatz of reproducing kernel Hilbert spaces. We discuss optimal rates of convergence for the proposed scheme, uniformly over classes of admissible solutions defined through appropriate source conditions.


Introduction
In this study, we shall consider non-linear operator equations of the form (1), where the non-linear mapping A : D(A) ⊆ H_1 → H_2 acts between real separable Hilbert spaces H_1 and H_2. Such non-linear inverse problems occur in many situations, and examples are given in the seminal monograph [9]. Of special importance are problems of parameter identification in partial differential equations, for which we mention the monograph [12, Chapt. 1] and the more recent [25].
Within the classical setup one observes noisy data g^δ ∈ H_2 with ‖g^δ − A(f)‖_{H_2} ≤ δ, where the number δ > 0 denotes the noise level. In supervised learning, it is assumed that the image space H_2 consists of functions defined on some domain X and taking values in another Hilbert space Y. Moreover, function evaluation is continuous, so that for x ∈ X the values g(x) = A(f)(x) are well-defined elements of Y. The goal is to learn the unknown and indirectly observed quantity f ∈ H_1 from examples, given in the form of i.i.d. samples z = {(x_i, y_i)}_{i=1}^m ∈ (X × Y)^m, where the elements y_i are noisy observations of g(x_i) at random points x_i of the form

(2) y_i := g(x_i) + ε_i for i = 1, . . ., m, where g = A(f).
We assume that the observations z are drawn independently and identically according to some unknown joint probability distribution ρ on the sample space Z = X × Y. The noise terms (ε_i)_{i=1}^m are independent centered random variables satisfying E[ε_i | x_i] = 0. The cardinality m of the sample z is called the sample size.
In the case of random observations, the literature is much scarcer than for the classical setup. Milestone work includes [19], which gives an asymptotic analysis of generalized Tikhonov regularization for (3) using a linearization technique. The reference [3] considers a two-step approach; however, it is assumed there that the norm in L²(X, ν; Y) (the space of square-integrable functions with respect to the probability measure ν on X) is observable, an unrealistic assumption if the only information on ν is available through the points (x_1, . . ., x_m). The references [1] and [11] consider a Gauss–Newton algorithm and the MOR method, respectively, for certain non-linear inverse problems, but again in the idealized setting of Hilbertian white or colored noise, which can only cover sampling effects when L²(X, ν; Y) is known. Loubes et al. [14] consider (3) under a fixed design and concentrate on the problem of model selection. Finally, the recent work [23] analyzes rates of convergence in a model where observations are of the form h(Kf)(x) perturbed by noise, but only in a white noise model and for specific univariate non-linear link functions h and a linear operator K.
A widely used approach to stabilizing the estimation problem (2) is Tikhonov regularization, also called the regularized least-squares algorithm or the method of regularization (MOR). The estimate of the true solution of (2) is obtained by minimizing an objective function consisting of an error term measuring the fit to the data plus a smoothness term penalizing the complexity of the quantity f. For the non-linear statistical inverse learning problem (2), the regularization scheme over the hypothesis space H_1 can be described as

(3) f_{z,λ} = argmin_{f ∈ D(A)} { (1/m) Σ_{i=1}^m ‖A(f)(x_i) − y_i‖_Y² + λ ‖f − f̄‖_{H_1}² }.
Here f̄ ∈ H_1 denotes some initial guess of the true solution, which offers the possibility to incorporate a priori information. The regularization parameter λ > 0 controls the trade-off between the error term, measuring the fit to the data, and the complexity of the solution, measured in the norm of H_1. The objective of this paper is to analyze the theoretical properties of the regularized least-squares estimator f_{z,λ}; in particular, the asymptotic performance of the algorithm is evaluated through bounds and rates of convergence of f_{z,λ} in the reproducing kernel ansatz. Precisely, we develop a non-asymptotic analysis of Tikhonov regularization (3) for the non-linear statistical inverse learning problem, based on the tools that have been developed for the modern mathematical study of reproducing kernel methods. The challenges specific to the studied problem are that the considered model is an inverse problem (rather than a pure prediction problem) and non-linear. The upper rate of convergence of the regularized least-squares estimator f_{z,λ} to the true solution is described in a probabilistic sense by exponential tail inequalities. For sample size m, a positive decreasing function m → ε(m), and confidence level 0 < η < 1, we establish bounds of the form

P_z ( ‖f_{z,λ(m)} − f_ρ‖_{H_1} ≤ C ε(m) log(1/η) ) ≥ 1 − η.

The function m → ε(m) describes the rate of convergence as m → ∞. The upper rate of convergence is complemented by a minimax lower bound valid for any learning algorithm for the considered non-linear statistical inverse problem. The lower rate result shows that the error rate attained by the Tikhonov regularization scheme, for a suitable choice of the regularization parameter, is optimal over a suitable class of probability measures. We now review previous results concerning regularization algorithms in different learning schemes which are directly comparable to ours: Rastogi et al. [22] and Blanchard et al. [4]. For convenience, we present the most essential points in a unified way in Table 1.
In this table, the parameter r corresponds to a (Hölder-type) smoothness assumption on the unknown true solution, and the parameter b > 1 corresponds to the decay rate of the eigenvalues of the covariance operator; both are introduced below in Assumption 6 and Assumption 7, respectively.
The model (2) covers non-parametric regression under random design (which we also call the direct problem, i.e., A = I) and the linear statistical inverse learning problem. Thus, introducing a general non-linear operator A gives a unified approach to the different learning problems.

Table 1. Convergence rates of the regularized least-squares algorithms on different learning schemes.

In the direct learning setting, Rastogi et al. [22] obtained minimax optimal rates of convergence for general regularization under a general source condition. Blanchard et al. [4] considered general regularization for the linear statistical inverse learning problem. They generalized the convergence analysis of the direct learning scheme to the inverse learning setting and achieved minimax optimal rates of convergence for general regularization under a Hölder source condition. They assumed that the image of the operator A is a reproducing kernel Hilbert space, which is a special case of our more general assumption that Im(A) is contained in a reproducing kernel Hilbert space. Here, we consider Tikhonov regularization for the non-linear statistical inverse learning problem. We obtain minimax optimal rates of convergence under a general source condition.
The assumptions on the non-linear operator A (see Assumption 5 and condition (11) below) allow us to estimate the error bounds for the source condition under some additional constraint, which for the Hölder source condition (φ(t) = t^r) corresponds to the range 1/2 ≤ r ≤ 1. The structure of the paper is as follows. In Section 2, we introduce the basic setup and notation for supervised learning problems in a reproducing kernel Hilbert space framework. In Sections 3 and 4, we discuss the main results of this paper on consistency and error bounds of the regularized least-squares solution f_{z,λ} under certain assumptions on the (unknown) joint probability measure ρ and on the (non-linear) mapping A. We establish minimax rates of convergence over the regularity classes defined through appropriate source conditions by using the concept of effective dimension. In Section 5, we present a concluding discussion on some further aspects of the results. In the appendix, we establish the concentration inequalities, perturbation results and the proofs of the consistency results, the upper error bounds and the lower error bounds.
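To fix ideas, the following toy sketch minimizes a discretized Tikhonov functional of the form (3) by gradient descent. Everything here is invented for illustration: the componentwise forward map A(f) = f + 0.2 f³, the grid dimension standing in for H_1, and all constants; it is not the estimator analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretized setting: f is a vector in R^n (standing in for H_1),
# and the non-linear forward operator acts componentwise (illustrative choice).
n, m, sigma = 20, 200, 0.1
f_true = np.sin(np.linspace(0, np.pi, n))

def A(f):                       # toy non-linear forward map
    return f + 0.2 * f**3

def A_prime(f):                 # its Fréchet derivative: diagonal in this model
    return 1.0 + 0.6 * f**2

# Random design: each sample observes A(f_true) at a random coordinate, plus noise.
idx = rng.integers(0, n, size=m)
y = A(f_true)[idx] + sigma * rng.standard_normal(m)

def tikhonov(lam, steps=2000, lr=0.5, f_bar=None):
    """Minimize (1/m) * sum_i (A(f)(x_i) - y_i)^2 + lam * ||f - f_bar||^2."""
    f_bar = np.zeros(n) if f_bar is None else f_bar
    f = f_bar.copy()
    for _ in range(steps):
        r = A(f)[idx] - y                           # residuals at design points
        grad = np.zeros(n)
        np.add.at(grad, idx, 2.0 * r * A_prime(f)[idx])
        grad = grad / m + 2.0 * lam * (f - f_bar)   # data term + penalty term
        f -= lr * grad
    return f

f_hat = tikhonov(lam=1e-3)
err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
print(f"relative error: {err:.3f}")
```

The minimizer exists here because the toy map is smooth and coercive with the penalty; in the general non-linear setting of the paper, existence holds but uniqueness may fail.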

Setup and basic definitions
In this section, we discuss the mathematical concepts and definitions used in our analysis. We start with a brief description of reproducing kernel Hilbert spaces, since our approximation schemes will be built in such spaces. Vector-valued reproducing kernel Hilbert spaces are an extension of real-valued reproducing kernel Hilbert spaces, see e.g. [18].

Definition 2.1. Let X be a non-empty set, (Y, ⟨·,·⟩_Y) a real separable Hilbert space and H a Hilbert space of functions from X to Y. If the linear functional F_{x,y} : H → R, defined by F_{x,y}(f) = ⟨y, f(x)⟩_Y, is continuous for every x ∈ X and y ∈ Y, then H is called a vector-valued reproducing kernel Hilbert space.
For the Banach space L(Y) of bounded linear operators from Y to Y, a function K : X × X → L(Y) is said to be an operator-valued positive semi-definite kernel if for each pair (x, z) ∈ X × X, K(x, z)* = K(z, x), and for every finite set of points {x_i}_{i=1}^N ⊂ X and {y_i}_{i=1}^N ⊂ Y, Σ_{i,j=1}^N ⟨y_i, K(x_i, x_j) y_j⟩_Y ≥ 0. For every operator-valued positive semi-definite kernel K : X × X → L(Y), there exists a unique vector-valued reproducing kernel Hilbert space (H, ⟨·,·⟩_H) of functions from X to Y satisfying the following conditions: (i) for all x ∈ X and y ∈ Y, the function K_x y := K(·, x)y belongs to H; this allows us to define the linear mapping K_x : Y → H; (ii) for every f ∈ H, x ∈ X and y ∈ Y, ⟨f(x), y⟩_Y = ⟨f, K_x y⟩_H. Moreover, there is a one-to-one correspondence between operator-valued positive semi-definite kernels and vector-valued reproducing kernel Hilbert spaces [18]. In the special case when Y is a bounded subset of R, the reproducing kernel Hilbert space is called a real-valued reproducing kernel Hilbert space. In this case, the operator-valued positive semi-definite kernel becomes a symmetric, positive semi-definite kernel K : X × X → R, and each reproducing kernel Hilbert space H can be described as the completion of the span of the set {K_x ∈ H : x ∈ X} for K_x : X → R, t → K_x(t) = K(x, t). Moreover, for every function f in the reproducing kernel Hilbert space H, the reproducing property reads f(x) = ⟨f, K_x⟩_H. We assume that the input space X is a Polish space and the output space (Y, ⟨·,·⟩_Y) is a real separable Hilbert space. Hence, the joint probability measure ρ on the sample space Z = X × Y can be decomposed as ρ(x, y) = ρ(y|x) ρ_X(x), where ρ(y|x) is the conditional distribution of y given x and ρ_X is the marginal distribution on X.
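The scalar-valued case (Y ⊂ R) can be illustrated numerically. The Gaussian kernel below is a standard example of a symmetric positive semi-definite kernel; the sketch checks positive semi-definiteness of a Gram matrix together with the Cauchy–Schwarz consequence |f(t)| ≤ ‖f‖_H √K(t,t) of the reproducing property. All specific numbers are arbitrary.

```python
import numpy as np

# Scalar-valued illustration: the Gaussian kernel on X = R.
def K(x, t, s=1.0):
    return np.exp(-(x - t) ** 2 / (2 * s ** 2))

x = np.linspace(-1, 1, 8)                  # points x_1, ..., x_N
G = K(x[:, None], x[None, :])              # Gram matrix (K(x_i, x_j))_{ij}

# Positive semi-definiteness of the Gram matrix (up to round-off):
assert np.all(np.linalg.eigvalsh(G) > -1e-12)

# For f = sum_i c_i K_{x_i}, the reproducing property f(t) = <f, K_t>_H
# combined with Cauchy-Schwarz gives |f(t)| <= ||f||_H * sqrt(K(t, t)).
c = np.array([0.5, -1.0, 2.0, 0.0, 1.0, -0.5, 0.3, 0.7])
t = 0.37
f_t = float(np.sum(c * K(x, t)))           # f(t) via the kernel expansion
norm_f = float(np.sqrt(c @ G @ c))         # ||f||_H^2 = c^T G c
assert abs(f_t) <= norm_f * np.sqrt(K(t, t)) + 1e-12
print(abs(f_t), norm_f * np.sqrt(K(t, t)))
```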
We now specify the abstract framework for the present study. We consider random observations {(x_i, y_i)}_{i=1}^m following the model y = A(f)(x) + ε with centered noise ε.
Assumption 1 (True solution f_ρ). The conditional expectation w.r.t. ρ of y given x exists (a.s.), and there exists f_ρ ∈ D(A) such that E[y | x] = A(f_ρ)(x) for almost all x ∈ X. The element f_ρ is the true solution which we aim at estimating.

Assumption 2 (Noise condition). There exist constants M, Σ > 0 such that for almost all x ∈ X,

E( ‖ε‖_Y^p | x ) ≤ (1/2) p! M^{p−2} Σ² for all p ≥ 2.

This assumption is usually referred to as a Bernstein-type assumption.
Concerning the Hilbert space H 2 , we assume the following throughout the paper.
Assumption 3 (Vector-valued reproducing kernel Hilbert space H_2). We assume H_2 to be a vector-valued reproducing kernel Hilbert space of functions f : X → Y corresponding to the kernel K : X × X → L(Y) such that (i) for all x ∈ X, K_x : Y → H_2 is a Hilbert–Schmidt operator, and (ii) κ² := sup_{x∈X} ‖K_x‖²_{HS} < ∞. Note that in the case of real-valued functions (Y ⊂ R), Assumption 3 simplifies to the condition that the kernel is measurable and κ² := sup_{x∈X} K(x, x) < ∞. We denote by L_K := I_K* I_K : H_2 → H_2 the corresponding covariance operator, where I_K is the canonical injection H_2 → L²(X, ρ_X; Y).

Consistency
We establish consistency, in the RMS sense and almost surely, of Tikhonov regularization, in the sense that ‖f_{z,λ} − f_ρ‖_{H_1} → 0 as |z| = m → ∞. For this we need weak assumptions on the operator.

Assumption 4 (Lipschitz continuity). We suppose that D(A) is weakly closed with nonempty interior and that A : D(A) ⊂ H_1 → H_2 is Lipschitz continuous and one-to-one.
The inequality ‖I_K g‖_{L²(X,ρ_X;Y)} ≤ κ ‖g‖_{H_2} for g ∈ H_2 and the continuity of the operator A : D(A) ⊂ H_1 → H_2 imply that A, viewed as a map into L²(X, ρ_X; Y), is continuous and weakly sequentially closed¹. For the continuous and weakly sequentially closed operator A, there exists a global minimizer of the functional in (3). It is, however, not necessarily unique, since A is non-linear (see [25, Section 4.1.1]).
The proofs of Theorems 3.1, 3.3 will be given in Appendix B.
Theorem 3.1. Suppose that Assumptions 1, 3 and 4 hold true and that ∫_Z ‖y‖²_Y dρ(x, y) < ∞. Let f_{z,λ} denote a (not necessarily unique) solution to the minimization problem (3) and assume that the regularization parameter λ(m) > 0 is chosen such that

Then we have that

As can be seen from the proof, the existence of arbitrary moments, as required in Assumption 2, is not needed. Instead, only the existence of second moments is used, as seen from the introduction of σ_ρ.
The previous result can be strengthened as follows.
Theorem 3.3. Suppose that Assumptions 1–4 hold true. Let f_{z,λ} denote a (not necessarily unique) solution to the minimization problem (3) and assume that the regularization parameter λ(m) > 0 is chosen such that

¹ I.e., if a sequence (f_m)_{m∈N} ⊂ D(A) converges weakly to some f ∈ H_1 and the sequence (A(f_m))_{m∈N} ⊂ L²(X, ρ_X; Y) converges weakly to some g ∈ L²(X, ρ_X; Y), then f ∈ D(A) and A(f) = g.
Then we have that

Convergence rates
In order to derive rates of convergence, additional assumptions are made on the operator A. We need to introduce the corresponding notion of smoothness of the true solution f_ρ from Assumption 1, and we discuss the class of probability measures, defined through an appropriate source condition, which describes the smoothness of the true solution.
Following the work of Engl et al. [9, Chapt. 10] on 'classical' non-linear inverse problems, we consider the following assumption.

Assumption 5 (Non-linearity of the operator). We assume that D(A) is convex with nonempty interior and that A : D(A) ⊂ H_1 → H_2 is (i) weakly sequentially closed, (ii) Fréchet differentiable with ‖A′(f)‖_{H_1→H_2} ≤ L for all f ∈ D(A), and (iii) there exists γ ≥ 0 such that for all f ∈ B_d(f_ρ) ∩ D(A) ⊂ H_1 we have,

Remark 4.1. Condition (iii) also holds true under the stronger assumption that A′ is Lipschitz for the operator norm (see [9, Chapt. 10]). A sufficient condition for weak sequential closedness is that D(A) is weakly closed (e.g. closed and convex) and A is weakly continuous. Note that under the Fréchet differentiability of A : D(A) ⊂ H_1 → H_2 (Assumption 5 (ii)), the operator A is Lipschitz continuous with Lipschitz constant L.
To illustrate the general setting, we consider a family of integral operators on the Sobolev space satisfying the above assumptions, where the kernel K is completely explicit.
We consider the Sobolev space H = W^{k,2}(R^d), which is defined as the completion of C_c^∞(R^d) with respect to the norm given by:

The Sobolev space W^{k,2}(R^d) is a reproducing kernel Hilbert space with reproducing kernel K, given by (see [24, Sec. 1.3.5])

where ‖·‖ is the Euclidean norm in R^d.
It satisfies Assumption 3 with

We consider the non-linear operator A : H → H given by:

where (assumed to be finite). The Fréchet derivative of A at f is given by

Then we have

and

so that Assumption 5 is satisfied.
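For the one-dimensional special case k = 1, d = 1, the Sobolev kernel is classical: W^{1,2}(R) with the norm ‖f‖² = ∫(f² + (f′)²) has reproducing kernel K(x,t) = e^{−|x−t|}/2, the Green's function of −u″ + u. A small numerical check of the positivity of this kernel and of the kernel bound in Assumption 3 (this special case is our own illustration, not the general kernel of [24]):

```python
import numpy as np

# Reproducing kernel of W^{1,2}(R) with norm ||f||^2 = ∫ (f^2 + (f')^2):
# K(x, t) = exp(-|x - t|) / 2, the Green's function of -u'' + u = delta_t.
def K(x, t):
    return 0.5 * np.exp(-np.abs(x - t))

x = np.linspace(-2, 2, 40)
G = K(x[:, None], x[None, :])              # Gram matrix on a grid

# Mercer positivity (up to round-off) and the kernel bound of Assumption 3:
assert np.all(np.linalg.eigvalsh(G) > -1e-10)
kappa_sq = K(0.0, 0.0)                     # sup_x K(x, x) = 1/2
print(kappa_sq)
```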
Under the above non-linearity assumption on the operator A we now introduce the corresponding operators which will turn out to be useful in the analysis of regularization schemes.
We recall that I_K denotes the canonical injection map H_2 → L²(X, ρ_X; Y). We define the operator B = B_ρ := I_K ∘ A′(f_ρ) : H_1 → L²(X, ρ_X; Y), and we denote by T = T_ρ := B_ρ* B_ρ : H_1 → H_1 the corresponding covariance operator. The operators L_K from Section 2 and T are positive, self-adjoint and compact, even trace-class operators.
Observe that the operator B depends on I_K and f_ρ, and thus on the joint probability measure ρ itself. It is bounded and satisfies ‖B‖_{H_1→L²(X,ρ_X;Y)} ≤ κL.
The consistency results established in Section 3 yield convergence of the minimizers f_{z,λ} as |z| = m tends to infinity, provided the parameter λ is chosen appropriately. However, the rates of convergence may be arbitrarily slow; this phenomenon is known as the no free lunch theorem [8]. Therefore, we need some prior assumptions on the probability measure ρ in order to achieve uniform rates of convergence for learning algorithms.
The general source condition f_ρ ∈ Ω(ρ, φ, R), by allowing for general index functions φ, covers a wide range of source conditions, such as the Hölder source condition φ(t) = t^r with r ≥ 0 and the logarithmic-type source condition φ(t) = t^p log^{−ν}(1/t) with p ∈ N, ν ∈ [0, 1]. The source sets Ω(ρ, φ, R) are precompact sets in H_1, since the operator T is compact. Observe that, in contrast with the linear case, in the equation f_ρ − f̄ = φ(T)g from Assumption 6 the true solution f_ρ appears on both sides, since the operator T itself depends on it (through A′(f_ρ)). This condition is more easily interpreted as a condition on the "initial guess" f̄: the initial error f̄ − f_ρ should satisfy a source condition with respect to the operator linearized at the true solution. Assumption 6 is usually referred to as a general source condition, see e.g. [17], and is a measure of regularity of the true solution f_ρ. It is inspired, on the one hand, by the approach considered in previous works on statistical learning using kernels, and, on the other hand, by the "classical" literature on non-linear inverse problems. The true solution f_ρ is represented in terms of the marginal probability distribution ρ_X over the input space X and of the operator linearized at the true solution, respectively; both aspects enter into Assumption 6.
Following the concept of Bauer et al. [2] and Blanchard et al. [4], we consider the class of probability measures P_φ which satisfy both the noise assumption (Assumption 2) and the smoothness assumption (Assumption 6). This class depends on the observation noise distribution (reflected in the parameters M > 0, Σ > 0) and on the smoothness properties of the true solution f_ρ (reflected in the parameter R > 0 and the index function φ). For the convergence analysis, the output space need not be bounded as long as the noise condition for the output variable is fulfilled.
The class P_φ may further be constrained by imposing properties of the covariance operator L_K from above. Thus we consider the set of probability measures P_{φ,b} ⊂ P_φ which also satisfy the following condition.

Assumption 7 (Eigenvalue decay condition). The eigenvalues (t_n)_{n∈N} of the covariance operator L_K follow a polynomial decay, i.e., for fixed positive constants β and b > 1,

t_n ≤ β n^{−b} for all n ∈ N.

Now, under Assumption 5 (ii), using the relation s_j(UV) ≤ ‖U‖ s_j(V) for singular values, j ∈ N (see [20, Chapter 11]), we obtain that the polynomial decay condition on the eigenvalues of the operator L_K implies that the eigenvalues of T also follow a polynomial decay.
We achieve minimax optimal rates of convergence using the concept of the effective dimension of the operator L_K. For the trace-class operator L_K, the effective dimension is defined as

N(λ) := Tr( (L_K + λI)^{−1} L_K ), λ > 0.

For the infinite-dimensional operator L_K, the effective dimension is a continuous, decreasing function of λ from ∞ to 0. For further discussion of the effective dimension, we refer to the literature [13, 15].
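A quick numerical illustration of the effective dimension, with a hypothetical spectrum t_n = n^{−b} (Assumption 7 with β = 1); the bound constant 2 in the check is an ad-hoc illustration, not the constant of estimate (9):

```python
import numpy as np

# Effective dimension N(lam) = Tr(L_K (L_K + lam I)^{-1}) = sum_n t_n / (t_n + lam),
# sketched for a hypothetical polynomially decaying spectrum t_n = n^{-b}.
b = 2.0
t = np.arange(1, 200_001, dtype=float) ** (-b)   # truncated spectrum

def N(lam):
    return float(np.sum(t / (t + lam)))

lams = np.logspace(-6, -1, 6)
vals = np.array([N(lam) for lam in lams])

assert np.all(np.diff(vals) < 0)     # N(lam) decreases as lam grows
ratio = vals * lams ** (1.0 / b)     # stays bounded: N(lam) = O(lam^{-1/b})
assert np.all(ratio < 2.0)
print(ratio)
```

This is exactly the improvement over the trivial O(1/λ) bound that drives the faster rates for the class P_{φ,b}.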
Under Assumptions 3 and 5 (ii), the effective dimension N(λ) can trivially be estimated as (8)

However, we know from [5, Prop. 3] that, under Assumption 7, we have the improved bound (9)

4.1. Upper rates of convergence. In Theorems 4.3 and 4.4, we present the upper error bounds for the regularized least-squares solution f_{z,λ} over the class of probability measures P_φ. We establish error bounds both for the direct learning setting, in the sense of the L²(X, ρ_X; Y)-norm prediction error, and for the inverse problem setting, in the sense of the H_1-norm reconstruction error ‖f_{z,λ} − f_ρ‖_{H_1}. Since no explicit expression for f_{z,λ} is known, we use the definition (3) of the regularized least-squares solution to derive the error bounds. We use linearization techniques for the operator A in a neighborhood of the true solution f_ρ, under the (Fréchet) differentiability of A.
We estimate the error bounds for the regularized least-squares estimator by measuring the complexity of the true solution f_ρ and the effect of random sampling. The rates of convergence are governed by the noise condition (Assumption 2), the general source condition (Assumption 6), and the ill-posedness of the problem, as measured by the assumed power decay (Assumption 7) of the eigenvalues of T with exponent b > 1. The effect of random sampling and the complexity of f_ρ are measured through Assumption 2 and Assumption 6 in Proposition A.3 and Proposition C.1, respectively. We briefly discuss two additional assumptions of the theorem. Condition (10) below says that, as the regularization parameter λ decreases, the sample size must increase. This condition is automatically satisfied under the parameter choice considered later in Theorem 4.5. The additional assumption (11) is a "smallness" condition which imposes a constraint between ‖w‖_{H_1} and the non-linearity as measured by the parameter γ in Assumption 5 (iii).
In order for the latter norm to be finite for any function satisfying the source condition f_ρ ∈ Ω(ρ, φ, R), the function φ(t)/√t must remain bounded near 0; in particular, if φ(t) = t^r, this requires r ≥ 1/2. The error bound discussed in the following theorem holds non-asymptotically, provided the regularization parameter λ is sufficiently small and the sample size m is sufficiently large. For fixed η and λ, we can choose a sample size m large enough that condition (10) below holds. Under the source condition f_ρ − f̄ = φ(T)g with φ(t) = √t ψ(t), we have f_ρ − f̄ = T^{1/2} ψ(T)g = T^{1/2} w with w = ψ(T)g. We assume that

(11) 2γ ‖w‖_{H_1} < 1.
The proofs of Theorems 4.3-4.5 will be given in Appendix C.
In the above theorem we discussed the error bounds for the Hölder source condition (Assumption 6) with φ(t) = √t. In the following theorem, we discuss the error bound for the general source condition under suitable assumptions on the function φ.
Note that the error bounds for ‖f_{z,λ} − f_ρ‖_{H_1} in Theorem 4.3 and Theorem 4.4 coincide up to a constant factor depending on the parameters γ, L, ‖w‖_{H_1}.
In Theorems 4.3–4.4, the error estimates reveal the interesting fact that the error terms consist of increasing and decreasing functions of λ, which leads to a choice of the regularization parameter that balances the error terms. We derive the rates of convergence for the regularized least-squares estimator based on a data-independent (a priori) parameter choice of λ for the classes of probability measures P_φ and P_{φ,b}. The effective dimension plays a crucial role in the error analysis of the regularized least-squares learning algorithm. In Theorem 4.5, we derive the rate of convergence for the regularized least-squares solution f_{z,λ} under the general source condition f_ρ ∈ Ω(ρ, φ, R) for a parameter choice rule for λ based on the index function φ and the sample size m. For the class of probability measures P_{φ,b}, the polynomial decay condition (Assumption 7) on the spectrum of the operator T also enters the picture, and the parameter b enters the parameter choice through the estimate (9) of the effective dimension. For this class, we derive the minimax optimal rate of convergence in terms of the index function φ, the sample size m and the parameter b.

Theorem 4.5. Under the same assumptions as in Theorem 4.4, the convergence of the regularized least-squares estimator f_{z,λ} in (3) to the true solution f_ρ can be described as follows: (i) For the class of probability measures P_φ with the parameter choice λ = Θ^{−1}(m^{−1/2}), where Θ(t) = tφ(t), we have

where C depends on the parameters γ, L, ‖w‖_{H_1}, R, κ, M, Σ, and (ii) for the class of probability measures P_{φ,b} under Assumption 7 and the parameter choice λ = Ψ^{−1}(m^{−1/2}), where Ψ(t) = t^{1/2+1/(2b)} φ(t), we have

where C depends on the parameters γ, L, ‖w‖_{H_1}, R, κ, M, Σ, b, β. Notice that the rate given for the class P_φ is worse than the one for the (smaller) class P_{φ,b}, which is easily seen from the fact that t^{1/2+1/(2b)} ≥ t for b > 1, and hence Ψ(t) ≥ Θ(t) for t ∈ [0, 1].
We obtain the following corollary as a consequence of Theorem 4.5.
Corollary 4.6. Under the same assumptions as in Theorem 4.4, with the Hölder source condition f_ρ ∈ Ω(ρ, φ, R), φ(t) = t^r, the convergence of the regularized least-squares estimator f_{z,λ} in (3) to the true solution f_ρ can be described as follows: (i) For the class of probability measures P_φ with the parameter choice λ = m^{−1/(2r+2)}, for all 0 < η < 1 we have, with confidence 1 − η,

(ii) For the class of probability measures P_{φ,b} with the parameter choice λ = m^{−b/(2br+b+1)}, for all 0 < η < 1 we have, with confidence 1 − η,

We obtain the following corollary as a consequence of Theorem 4.3.
Corollary 4.7. Under the same assumptions as in Theorem 4.3, with the Hölder source condition f_ρ ∈ Ω(ρ, φ, R), φ(t) = t^{1/2}, the convergence of the regularized least-squares estimator f_{z,λ} in (3) to the true solution f_ρ can be described as follows: (i) For the class of probability measures P_φ with the parameter choice λ = m^{−1/3}, for all 0 < η < 1 we have, with confidence 1 − η,

and

where C_1 and C_2 depend on the parameters γ, L, ‖w‖_{H_1}, κ, M, Σ.
(ii) For the class of probability measures P_{φ,b} with the parameter choice λ = m^{−b/(2b+1)}, for all 0 < η < 1 we have, with confidence 1 − η,

and

where C_1 and C_2 depend on the parameters γ, L, ‖w‖_{H_1}, κ, M, Σ, b, β.
Now we compare the error bounds established for the direct learning setting, in the sense of the L²(X, ρ_X; Y)-norm prediction error, and for the inverse problem setting, in the sense of the H_1-norm reconstruction error ‖f_{z,λ} − f_ρ‖_{H_1}. Under condition (10) we have

Thus bounding the prediction norm corresponds to a learning bound in which the first term involves a target function T^{1/2} f_ρ carrying additional smoothness 1/2, while the second term is the square of the reconstruction error in H_1-norm; this may therefore result in a faster rate. Indeed, this heuristic is validated by Theorem 4.3 and Corollary 4.7, where we observe that the prediction norm enjoys a faster convergence rate than the reconstruction error in H_1-norm.
The assumptions on the non-linear operator A (see Assumption 5 and condition (11)) allow us to estimate the reconstruction error bounds in H_1-norm for the Hölder source condition (φ(t) = t^r) in the range 1/2 ≤ r. It is well known that Tikhonov regularization exhibits a saturation effect at r = 1 (since it has qualification 1); therefore we cannot improve the rates of convergence beyond r = 1. From (12) we observe that for the prediction error we have additional smoothness 1/2 in the bound on the right-hand side; therefore we only estimate the prediction error for r = 1/2.
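The closed-form parameter choices of Corollary 4.6 can be cross-checked against the implicit rule λ = Ψ^{−1}(m^{−1/2}) of Theorem 4.5. The following sketch inverts Ψ(t) = t^{1/2+1/(2b)} φ(t) numerically by bisection for arbitrary example values of m, r and b (the specific numbers are illustrative only):

```python
import numpy as np

# For the Hölder case φ(t) = t^r, the implicit choice λ = Ψ^{-1}(m^{-1/2}) with
# Ψ(t) = t^{1/2 + 1/(2b)} * φ(t) has the closed form λ = m^{-b/(2br + b + 1)}.
def psi(t, r, b):
    return t ** (0.5 + 0.5 / b) * t ** r

def psi_inv(y, r, b, lo=1e-16, hi=1.0):
    # Ψ is increasing on (0, 1], so bisect (on a log scale) for Ψ(t) = y.
    for _ in range(200):
        mid = np.sqrt(lo * hi)          # geometric midpoint
        if psi(mid, r, b) < y:
            lo = mid
        else:
            hi = mid
    return mid

m, r, b = 10_000, 0.75, 2.0
lam_numeric = psi_inv(m ** -0.5, r, b)
lam_closed = m ** (-b / (2 * b * r + b + 1))
print(lam_numeric, lam_closed)          # the two choices agree
```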
4.2. Lower rates of convergence. In this section, we discuss the lower rates of convergence for the non-linear statistical inverse learning problem over a subclass of the probability measures P_{φ,b}. The Kullback–Leibler information and Fano inequalities are the main ingredients in the analysis of the estimates for the minimum possible error. The Kullback–Leibler divergence between two probability measures ρ_1 and ρ_2 is defined as

K(ρ_1, ρ_2) := ∫_Z log(g(z)) dρ_1(z),

where g is the density of ρ_1 with respect to ρ_2, that is, ρ_1(E) = ∫_E g(z) dρ_2(z) for all measurable sets E.
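For discrete measures the Kullback–Leibler divergence reduces to a finite sum; a minimal sketch with arbitrary example weights:

```python
import numpy as np

# KL divergence K(p, q) = sum_i p_i * log(p_i / q_i) for discrete measures
# on a finite sample space (with the convention 0 * log 0 = 0).
def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(kl(p, q), kl(q, p))   # non-negative and, in general, asymmetric
```

The lower-bound construction exploits precisely this quantity: measures ρ_i that are far apart in H_1 but have small pairwise Kullback–Leibler divergence are statistically hard to distinguish.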
To obtain the lower bound, we define a family of probability measures ρ_f parameterized by suitable vectors f ∈ D(A) ⊂ H_1. We assume that Y is a finite-dimensional space with a basis {v_j}_{j=1}^d. Then to each f ∈ D(A) ⊂ H_1 we associate the probability measure on the sample space Z:

where a_j(x) = J − ⟨A(f), K_x v_j⟩_{H_2}, b_j(x) = J + ⟨A(f), K_x v_j⟩_{H_2}, J = 4κ ‖A(f)‖_{H_2}, and δ_{y−ξ} denotes the Dirac measure on Y with unit mass at y = ξ.
Following the analysis of Caponnetto et al. [5] and DeVore et al. [7], we establish the lower rates of convergence for the non-linear statistical inverse problem that can be attained by any learning algorithm. The main steps are the following. In order to obtain the lower rates of convergence for learning algorithms, we generate N_ε vectors (f_1, . . ., f_{N_ε}), depending on ε < ε_0 for some ε_0 > 0, with N_ε → ∞ as ε → 0, such that any two of these vectors are separated by a constant times ε with respect to the norm in the Hilbert space H_1 (Proposition D.2 (i)). Then we construct the probability measures ρ_i = ρ_{f_i} from (13), parameterized by the f_i (1 ≤ i ≤ N_ε), with small Kullback–Leibler divergence from each other (Proposition D.2 (ii)); these measures are therefore statistically close. Finally, we obtain the lower rates of convergence by applying [7, Lemma 3.3] using the Kullback–Leibler information.
Assumption 8. For the lower rates of convergence, we assume the following conditions on the non-linear operator A: (i) A is Fréchet differentiable.
(ii) The Fréchet derivative of A at the initial guess f̄ (of the solution of the functional (3)) is bounded, i.e., there exists L < ∞ such that:

(iii) There exists γ ≥ 0 such that for all f, f̃ ∈ D(A) ⊂ H_1 in a sufficiently large ball around f̄ we have,

(iv) The function φ is a continuous increasing function with φ(0) = 0, and θ(t) = φ(t²) is Lipschitz continuous with constant L_θ. For the operators

In contrast to the upper rates of convergence for Tikhonov regularization, we require the additional assumption (iv) on A for the lower rates. This condition generalizes the following condition used in [10] for the Landweber iteration:

Note that in the linear case R_f ≡ I; therefore, Assumption 8 (iv) may be interpreted as a further restriction on the "non-linearity" of A.
The proof of the following theorem will be given in Appendix D.
where A denotes the set of all learning algorithms l : z → f_z^l.
We obtain the following corollary as a consequence of Theorem 4.8.
Corollary 4.9. Under the same assumptions as in Theorem 4.8, for any learning algorithm, with the Hölder source condition f_ρ ∈ Ω(ρ, φ, R), φ(t) = t^r, the lower rates of convergence can be described as

The choice of the parameter λ(m) is said to be optimal if, for this choice, the upper rate of convergence coincides with the minimax lower rate. For the class of probability measures P_{φ,b} with the parameter choice λ = Ψ^{−1}(m^{−1/2}), Theorem 4.5 shares the upper rate of convergence with the lower rate of convergence in Theorem 4.8. Therefore this choice of the parameter is optimal.

Discussion
Our analysis guarantees the consistency of the Tikhonov regularization algorithm and provides a finite sample bound for the non-linear statistical inverse learning problem in the vector-valued reproducing kernel ansatz; the results can therefore also be applied to multi-task learning problems. We also discussed the asymptotic worst-case analysis for any learning algorithm in this setting, showing optimality in the minimax sense on a suitable class of priors. The rates of convergence presented in Section 4 are asymptotic in nature, i.e., all parameters are fixed as m → ∞. This provides a mathematical foundation for non-linear inverse problems in the statistical learning framework. The considered framework generalizes previously proposed settings for different learning schemes: the direct and the linear inverse learning problem.
Impact of the effective dimension. The upper rates were expressed in terms of the index function φ from Assumption 6 and the effective dimension N(λ) of the governing operator L_K. This is seen from the basic probabilistic bound given in Proposition A.3, and it holds regardless of whether λ → N(λ) decays at a polynomial rate. The construction for the lower bounds, however, makes use of this constraint. Also, Corollaries 4.6 and 4.7 give a handy representation of the upper bounds under power-type decay.
Saturation effect. In Theorem 4.3 we highlighted the upper rates, both for the error f z,λ − f ρ H1 and for I K (A(f z,λ ) − A(f ρ )) L 2 (X,ρ X ;Y ) , in the limiting case when smoothness is given through the index function φ(t) = √ t; these differ by a factor √ λ. We emphasize that for higher smoothness φ(t) = √ t ψ(t), with an additional index function ψ, this cannot be expected to remain valid. This is known from the linear case and is due to the saturation effect of Tikhonov regularization.
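The saturation phenomenon is already visible at the level of the spectral quantity sup 0<t≤1 λ φ(t)/(t + λ) that bounds the Tikhonov approximation error: for φ(t) = t r it decays like λ min(r,1) , so smoothness beyond r = 1 is not rewarded. A minimal numerical sketch (illustrative only, grid and names hypothetical):

```python
import math

def tikhonov_approx_error(lam, r, num=20_000):
    """sup over a log grid of t in (0, 1] of lam * t**r / (t + lam),
    the spectral bound on the Tikhonov approximation error for phi(t) = t**r."""
    best = 0.0
    for k in range(num + 1):
        t = 10.0 ** (-12.0 + 12.0 * k / num)
        best = max(best, lam * t ** r / (t + lam))
    return best

def decay_exponent(r, lam1=1e-4, lam2=1e-6):
    """Empirical exponent s in error ~ lam**s, estimated from two lambdas."""
    e1, e2 = tikhonov_approx_error(lam1, r), tikhonov_approx_error(lam2, r)
    return math.log(e1 / e2) / math.log(lam1 / lam2)

for r in (0.5, 1.0, 2.0):
    print(r, round(decay_exponent(r), 2))  # exponents ~0.5, ~1.0, ~1.0
```

The exponent saturates at 1 for r ≥ 1, which is the qualification of Tikhonov regularization.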
Relation to classical regularization theory. Within the present study, the smoothness Assumption 6 is based on the composed operator

This is in contrast to classical regularization theory, where the corresponding operator is [A (f ρ )] * A (f ρ ). By assuming an appropriate link condition between the operators [A (f ρ )] * A (f ρ ) and T , one can transfer the rate results obtained in the present context to the standard ones; we refer to the corresponding calculus established in [16].
Parameter choice. The a-priori parameter choice considered in our analysis depends on the smoothness parameters b and φ. In practice, an a-posteriori (data-dependent) choice rule for the regularization parameter λ, such as the discrepancy principle, the balancing principle, or the quasi-optimality principle, together with its theoretical justification, is required, so that our results can be turned into data-dependent minimax adaptivity even in the absence of a-priori knowledge of the regularity parameters.
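As an illustration of such a data-dependent rule, here is a minimal sketch of the discrepancy principle on a hypothetical toy problem: a diagonal (sequence-space) linear Tikhonov regularization, not the non-linear scheme analysed in this paper. All constants (spectrum, noise level, grid, τ) are illustrative assumptions; the rule picks the largest λ on a geometric grid whose residual falls below τ times the (assumed known) noise level.

```python
import math
import random

random.seed(0)
n = 200
t = [k ** -2.0 for k in range(1, n + 1)]            # spectrum of a toy operator T
f_true = [k ** -1.5 for k in range(1, n + 1)]       # true coefficients
eps = [1e-3 * random.gauss(0.0, 1.0) for _ in range(n)]
y = [t[k] * f_true[k] + eps[k] for k in range(n)]   # data y = T f + noise
delta = math.sqrt(sum(e * e for e in eps))          # noise level, assumed known
tau = 1.5                                           # discrepancy parameter > 1

lam = 1.0
while lam > 1e-14:
    # Tikhonov estimator in the eigenbasis: f_lam = (T^2 + lam)^(-1) T y
    f_lam = [t[k] * y[k] / (t[k] ** 2 + lam) for k in range(n)]
    residual = math.sqrt(sum((t[k] * f_lam[k] - y[k]) ** 2 for k in range(n)))
    if residual <= tau * delta:                     # discrepancy satisfied: stop
        break
    lam /= 2.0

print(lam, residual, tau * delta)
```

Since the residual decreases monotonically as λ decreases, the loop stops at the largest grid value satisfying the discrepancy condition.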

Appendix A. Notation and probabilistic estimates
Here we introduce some relevant operators.
Definition A.1 (Sampling operator). For a discrete ordered set x = (x i ) m i=1 , the sampling operator S x : H 2 → Y m is defined as

We equip the product Hilbert space Y m with the scalar product (y i ) m i=1 , (y ′ i ) m i=1 = (1/m) m i=1 y i , y ′ i , and denote the associated Hilbert norm by · . The adjoint S * x : Y m → H 2 is given by

Under Assumption 3, the sampling operator is bounded by κ, since

The sample versions are the operators B x := S x • (A (f ρ )) and T x := B * x B x . The operator T x is positive and self-adjoint. Under Assumptions 3 and 5 (ii), the operator B x is bounded and satisfies B x H1→Y m ≤ κL. We also recall that L K = I * K I K for the canonical injection map I K . These operators will be used in our analysis.
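With the 1/m-weighted inner product on Y m , the adjoint of the sampling operator is S * x y = (1/m) Σ i K xi y i , and the defining relation S x g, y Y m = g, S * x y H2 can be checked numerically. The following toy sketch (hypothetical finite-dimensional feature-map kernel, Y = R; all names illustrative) verifies the adjoint identity:

```python
import random

random.seed(1)
D, m = 5, 50

def feat(x):
    """Explicit feature map for the kernel K(x, x') = <feat(x), feat(x')>,
    so the RKHS is identified with R^D."""
    return [x ** d for d in range(D)]

xs = [random.uniform(-1.0, 1.0) for _ in range(m)]
w = [random.gauss(0.0, 1.0) for _ in range(D)]   # coordinates of g in H_2
y = [random.gauss(0.0, 1.0) for _ in range(m)]   # an element of Y^m (Y = R)

def g(x):                                        # g = <w, feat(.)>
    return sum(wd * fd for wd, fd in zip(w, feat(x)))

Sx_g = [g(x) for x in xs]                        # (S_x g)_i = g(x_i)
# adjoint: S_x^* y = (1/m) sum_i y_i K_{x_i}, written in feature coordinates
Sx_star_y = [sum(y[i] * feat(xs[i])[d] for i in range(m)) / m for d in range(D)]

lhs = sum(a * b for a, b in zip(Sx_g, y)) / m    # <S_x g, y> in Y^m (1/m-weighted)
rhs = sum(a * b for a, b in zip(w, Sx_star_y))   # <g, S_x^* y> in H_2
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)
```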
The following inequality is based on the results of Pinelis and Sakhanenko [21].
Proposition A.2. Let H be a real separable Hilbert space and ξ a random variable on (Ω, ρ) with values in H. If there exist two constants Q and S satisfying

then for any 0 < η < 1 and for all m ∈ N,

In the following proposition, we measure the effect of random sampling using Assumption 2. The quantities describe the probabilistic estimates of the perturbation due to random sampling. These bounds are standard in learning theory.
Proposition A.3. Let z be i.i.d. random samples under Assumptions 1-3. Then, for m ∈ N and 0 < η < 1, each of the following estimates holds with confidence 1 − η:

Proof. To estimate the first expression, we consider the random variable ξ 1 (z) = (L K + λI) −1/2 K x (y − A(f ρ )(x)) from (Z, ρ) to the reproducing kernel Hilbert space H 2 . Under Assumption 1 we obtain

and, for all n ≥ 2,

On applying Proposition A.2 with Q = κM and S = Σ N (λ), it follows that

The second expression can be estimated similarly by considering the random variable ξ 2 (z) = K x (y − A(f ρ )(x)) from (Z, ρ) to the reproducing kernel Hilbert space H 2 , while the proof of the third expression can be obtained from Theorem 2 in De Vito et al. [6].
Proposition A.4. For m ∈ N and 0 < η < 1, under Assumption 3 the following estimates hold with confidence 1 − η:

Proof. From Proposition A.3, under Assumption 3, the following inequality holds with confidence 1 − η/2:

Then, under the condition (14), we get with confidence 1 − η/2:

For the second inequality, we consider

Consequently, using (15) in the inequality (16), we obtain with probability 1 − η/2:

Applying [4, Prop. 5.7], we get with probability 1 − η/2:

Appendix B. Proof of the consistency results
Throughout the analysis we use the following identity in the real Hilbert space H:

Proof of Theorem 3.1. By the definition of f z,λ as a solution to the minimization problem (3), we get the inequality

Consequently, using the Lipschitz continuity of the operator A (Assumption 4), we get

Now, squaring both sides and taking expectation with respect to z, we obtain (20). Under Assumptions 1 and 3 we have (21), and from [26, Lemma 1] we have (22). Using (21) in (20) we get (23).

from which, with the parameter choice rule (4), we deduce that

Hence, we observe that a 2 := sup < ∞. Now, we show that there exists a subsequence of (f z,λ ) m∈N , denoted by (f z(k),λ ) k∈N , such that, for some f ∈ H 1 and for all f ∈ H 1 ,
Let {e i : i ∈ N} be a complete orthonormal basis of the separable Hilbert space H 1 . By the Cauchy-Schwarz inequality, we have

Repeating the same arguments, we again get a subsequence (f z(k),λ ) k∈N such that E z(k) f z(k),λ − f ρ , e 2 H1 → ξ 2 as k → ∞ for some ξ 2 ∈ R, and so on. Therefore, we obtain a diagonal sequence (f z(k),λ ) k∈N with the property:

Hence, f := for k ≥ K. This proves the claim (25).
From inequality (18) we get:

Under the Lipschitz continuity of A, from (21), (22), (24), with the parameter choice rule (4) we obtain (26). Since D(A) is weakly closed and A : H 1 → H 2 is Lipschitz continuous, this implies that

Now, from (25), (26) we obtain a subsequence, again denoted by (f z(k),λ ) k∈N , converging almost surely; hence the weak closedness and the one-to-one assumption on I K A imply that f = 0. Our next aim is to prove the convergence (5). By contradiction, assume that there exist an ε > 0 and a subsequence (f z(k),λ ) k∈N such that

We have the identity

Using the same arguments as above, we can again find a further subsequence (f z(k),λ ) k∈N such that

Hence, from the inequalities (24) and (28) we obtain the lim sup bound

which contradicts (27). This completes the proof of the desired result (5).
Proof of Theorem 3.3. From the inequality (19) and Proposition A.3, under Assumptions 1-4 the following inequality holds with confidence 1 − η/2:

Choosing the parameter η(m) = 4/m 2 , we obtain

Therefore, the sum of the probabilities of the events E m is finite:

Hence, applying the Borel-Cantelli lemma, we get

from which, with the parameter choice rule (6), we deduce that

almost surely. Note that f z,λ is finite almost surely due to (30). Hence, there exists a subsequence of (f z,λ ) m∈N which converges weakly to some f . We denote the subsequence by (f z(k),λ ) k∈N , i.e., f z(k),λ ⇀ f .
The next step of the proof is to show that f = f ρ .
From inequality (18) and Proposition A.3, under Assumptions 1-3 the following inequality holds with confidence 1 − η:

Using arguments similar to the above, with the parameter choice rule (6) we obtain, as k → ∞ almost surely,

hence the weak closedness and the one-to-one assumption on I K A imply that f = f ρ . Our next aim is to prove the convergence (7). By contradiction, assume that there exist an ε > 0 and a subsequence (f z(k),λ ) k∈N such that

We have the identity

Using the same arguments as above, we can again find a further subsequence (f z(k),λ ) k∈N which converges weakly to f ρ . Hence, from the inequalities (30) and (32) we obtain, almost surely, the lim sup bound

which contradicts (31). This completes the proof of the desired result (7).

Appendix C. Proof of upper rates
Here we introduce the operators ∆ := S x A(f ρ ) − y and Ξ := S x (S * x S x + λI) −1 S * x , used in the analysis of the upper rates.
Proof of Theorem 4.3. By the definition of f z,λ as a solution to the minimization problem (3), the following inequality holds:
For the analysis of Tikhonov regularization under a general source condition, we consider the linearized population version (i.e., using the theoretical expectation under ρ) of the regularization scheme (3):

Under Assumption 1, we use the fact that

In the following proposition, we estimate the approximation error f l λ − f ρ , which describes the complexity of the true solution f ρ . The approximation error is independent of the samples z.

Proposition C.1. Suppose Assumptions 1 and 6 hold true. Then, under the assumption that φ(t) and t/φ(t) are non-decreasing functions, we have

Proof. From the definition of f l λ in (36) and Assumption 6 we get

Under the assumption that φ(t) and t/φ(t) are non-decreasing functions, we obtain

Under Assumption 6, from Proposition C.1 we observe that f l λ ∈ D(A) ∩ B d (f ρ ), provided λ is sufficiently small.
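For orientation, the spectral estimate behind Proposition C.1 can be sketched as follows, assuming (as is typical for such source conditions) that the source condition takes the form f ρ = φ(T )w with w H1 ≤ R, and that f l λ is the linearized Tikhonov minimizer, so that f l λ − f ρ = −λ(T + λI) −1 φ(T )w:

```latex
\[
  \|f^{l}_{\lambda}-f_{\rho}\|_{H_1}
   \;=\; \bigl\|\lambda\,(T+\lambda I)^{-1}\varphi(T)\,w\bigr\|_{H_1}
   \;\le\; R\,\sup_{t>0}\frac{\lambda\,\varphi(t)}{t+\lambda}
   \;\le\; R\,\varphi(\lambda).
\]
```

Indeed, for t ≤ λ one has λφ(t)/(t + λ) ≤ φ(t) ≤ φ(λ) since φ is non-decreasing, while for t > λ the monotonicity of t/φ(t) gives λφ(t)/(t + λ) ≤ λφ(t)/t ≤ φ(λ).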
In the following theorem, we estimate the quantity f l λ − f z,λ and use the bound on the approximation error from the above proposition to obtain the error bound for f ρ − f z,λ .
Proof of Theorem 4.4. The main idea of the proof is to compare f z,λ with f l λ . From the definition of f z,λ in (3), we have (37). Using the linearization of the operator A in (33), we re-express the inequality (37) as follows:

Now we decompose the second and third terms on the right-hand side as follows:

The fourth term on the right-hand side is negative, therefore it can be ignored, leading to:

The definition of f l λ in (36) implies that

Therefore, from inequality (38), using Assumption 5 (ii) and (39), we get:

where

Using the inequalities (34), (35), (41) in (40) we obtain

where δ 1 = 3L

and where γ 2 = 1 − 2γ w H1 .
We have, which implies Using the triangle inequality where which implies the desired result.

Appendix D. Proof of lower rates
The following proposition is a variant of Proposition 4 in [5] for the non-linear statistical inverse problem.
Proof. The first point is easily observed. Now we check the condition on the probability measure ρ f for the second point. Under the condition (46) on the conditional probability measure ρ f (y|x) we have

which implies that for the solution f ρ = f the probability measure ρ f satisfies Assumption 2.
(ii) Let ρ i := ρ fi , ρ j := ρ fj be given by (13) for f i ∈ Ω(ρ i , φ, R) and f j ∈ Ω(ρ j , φ, R), i, j = 1, . . ., N ε . Then the Kullback-Leibler divergence K(ρ fi , ρ fj ) fulfills the inequality:

Further, it holds

where

Proof. For the initial guess f of the solution of the functional (3), let (e n ) n∈N be an orthonormal basis of the Hilbert space H 1 consisting of eigenvectors of the operator T = A ( f ) * I * K I K A ( f ), corresponding to the eigenvalues (t n ) n∈N . For given ε > 0, we define

Under the polynomial decay condition α ≤ n b t n on the eigenvalues of the operator T , we get

where ⌊x⌋ denotes the greatest integer less than or equal to x.
Suppose F (f ) = φ(T )g + f for B = I K A (f ), T = B * B and some g ∈ H 1 . Then, from Assumptions 3, 8 (iv), for the Lipschitz continuous function θ(t) = φ(t 2 ), from Propositions D.4 and D.5, under the Lipschitz continuity of the Fréchet derivative of the operator A, we obtain

where

Hence, for each g i defined in (52), from (53) there exist

where

which can be satisfied by making the quantity g i H1 arbitrarily small as ε → 0.
Under Assumption 8 (iv), from eqn. (52), for all 1 ≤ i, j ≤ N ε we get

Then, under Assumption 8 (iv) and (55), we have

which implies that

From Assumptions 3, 8 (ii) and (54) we get

where

Now, from Assumptions 8 (iv), (v) and (55) we get

Note that the Lipschitz continuity of the Fréchet derivative of the operator A (Assumption 8 (iii)) implies that

Hence, for 1 ≤ i, j ≤ N ε , from the inequalities (55), (57) we have

Therefore, φ(T ) = R f φ(T * ) and φ(T * ) = R −1 f φ(T ). Now, using the relation s j (AB) ≤ A s j (B) for the singular values, j ∈ N (see Chapter 11 of [20]), we obtain

Consequently, for f i − f H1 small enough, corresponding to small ε, the eigenvalues of T i and T * decay in the same order, hence in the polynomial order.
The following theorem is a restatement of Theorem 3.1 of [7] in the non-linear statistical inverse problem setting.

Proof. Let ε ≤ ε 0 and f 1 , . . ., f Nε be as in Proposition D.2. Then we define the sets

It is clear from (56) that A i ∩ A j = ∅ if i ≠ j. On applying Lemma 3.3 of [7] with the probability measures ρ m fi , 1 ≤ i ≤ N ε , we obtain that either

is a family of bounded linear operators, and ζ is a positive constant.

(v) The eigenvalues (t n ) n∈N of the operator T = A ( f ) * I * K I K A ( f ) follow the polynomial decay: for fixed positive constants α, β and b > 1, αn −b ≤ t n ≤ βn −b for all n ∈ N.

Theorem 4.8. Let z be i.i.d. samples drawn according to the probability measure ρ ∈ P φ,b , under the hypothesis dim(Y ) = d < ∞. Then, under Assumptions 3, 8, for Ψ(t) = t 1/2+1/(2b) φ(t), the estimator f l z corresponding to any learning algorithm l (z → f l z ∈ H 1 ) converges with the following lower rate: