Sharp template estimation in a shifted curves model

This paper considers adaptive estimation of a template in a randomly shifted curve model. Using the Fourier transform of the data, we show that this problem can be transformed into a stochastic linear inverse problem. Our aim is to approach the estimator that has the smallest risk on the true template over a finite set of linear estimators defined in the Fourier domain. Based on the principle of unbiased empirical risk minimization, we derive a nonasymptotic oracle inequality in the case where the law of the random shifts is known. This inequality can then be used to obtain adaptive results on Sobolev spaces as the number of observed curves tends to infinity. Some numerical experiments are given to illustrate the performance of our approach.


Model and objectives
The goal of this paper is to study a special class of stochastic inverse problems. We consider the problem of estimating a curve f, called template or shape function, from observations of n noisy and randomly shifted curves Y_1, ..., Y_n coming from the following Gaussian white noise model: dY_j(x) = f(x − τ_j)dx + ǫ dW_j(x), x ∈ [0, 1], j = 1, ..., n, (1.1) where the W_j are independent standard Brownian motions on [0, 1], ǫ represents a level of noise common to all curves, the τ_j's are unknown random shifts, f is the unknown template to recover, and n is the number of observed curves, which may tend to infinity when studying asymptotic properties. This model is realistic in many situations where it is reasonable to assume that the observed curves are replications of almost the same process and where a large source of variation in the experiments is due to transformations of the time axis. Such a model is commonly used in many applied areas dealing with functional data, such as neuroscience (see e.g. [IRT08]) or biology (see e.g. [Ron98]). A well known problem in functional data analysis is the alignment of similar curves that differ by a time transformation in order to extract their common features, and (1.1) is a simple model where f represents such common features (see [RS02], [RS05] for a detailed introduction to curve alignment problems in statistics). The function f : R → R is assumed to be of period 1 so that the model (1.1) is well defined, and the shifts τ_j are supposed to be independent and identically distributed (i.i.d.) random variables with density g : R → R with respect to the Lebesgue measure dx on R.
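As a rough numerical illustration, model (1.1) can be discretized on an equispaced grid; the sketch below (all function and variable names are ours, purely illustrative) simulates n shifted noisy curves, using the Laplace shift density of the numerical section as an example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_shifted_curves(f, n, eps, shift_sampler, m=256):
    """Discrete analogue of dY_j(x) = f(x - tau_j) dx + eps dW_j(x).

    On an equispaced grid of m points in [0, 1), the white-noise
    increments eps dW_j contribute i.i.d. N(0, eps^2 * m) errors
    to the sampled curve values (after dividing by dx = 1/m).
    """
    x = np.arange(m) / m
    taus = shift_sampler(n)
    noise = eps * np.sqrt(m) * rng.standard_normal((n, m))
    Y = np.stack([f(x - tau) for tau in taus]) + noise
    return x, taus, Y

# 1-periodic template and Laplace(0, sigma/sqrt(2)) shifts, sigma = 0.1
template = lambda x: np.cos(2 * np.pi * x) + 0.5 * np.sin(4 * np.pi * x)
laplace_shifts = lambda n: rng.laplace(0.0, 0.1 / np.sqrt(2), size=n)
x, taus, Y = simulate_shifted_curves(template, n=100, eps=0.05,
                                     shift_sampler=laplace_shifts)
```

The periodicity of f makes the shifted evaluations f(x − τ_j) well defined on the grid.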
Estimating f can be seen as a stochastic inverse problem, since the template is not observed directly but through n independent realizations of the stochastic operator A_τ : L²_p([0, 1]) → L²_p([0, 1]) defined by A_τ f(x) = f(x − τ), where L²_p([0, 1]) denotes the space of square integrable functions on [0, 1] with period 1, and τ is a random variable with density g. The additive Gaussian noise makes this problem ill-posed, and [BG09] have shown that estimating f in such models is in fact a deconvolution problem where the density g of the random shifts plays the role of the convolution operator. For the L² risk on [0, 1], [BG09] have derived the minimax rate of convergence for the estimation of f over Besov balls as n tends to infinity. This minimax rate depends both on the smoothness of the template and on the decay of the Fourier coefficients of the density g. This is a well known fact for standard deterministic deconvolution problems in statistics, see e.g. [Fan91], [Don95], but the results in [BG09] represent a novel contribution and a new point of view on template estimation in stochastic inverse problems such as (1.1).
However, the approach followed in [BG09] is only asymptotic, and the main goal of this paper is to derive non-asymptotic results to study the estimation of f by keeping fixed the number n of observed curves.

Deconvolution formulation
Let us first explain how the model (1.1) can be transformed into a deconvolution problem of the type studied in [DJKP95]. Denote by G the density function defined on [0, 1] as the periodization of g, namely G(x) = Σ_{l∈Z} g(x + l). The density G exists as soon as g satisfies the weak condition g(x) ≤ C/(1 + |x|^ν) for some ν > 1 and a suitable constant C. Note that the Fourier coefficients of G are given by γ_k = ∫_0^1 G(x) e^{−2iπkx} dx = E[e^{−2iπkτ}]. The observations Y_j can be written as dY_j(x) = f ⋆ G(x)dx + ξ_j(x)dx + ǫ dW_j(x), where ξ_j is a second, centered noise term defined as ξ_j(x) = f(x − τ_j) − f ⋆ G(x). Hence, our model can be seen as a deconvolution problem with a noisy operator H : f → f ⋆ G + ξ and a more classical independent additive noise W. Note also that the realizations H_j : f → f ⋆ G + ξ_j are unbiased realizations of the operator H, but present a variance term which depends on the function f we want to estimate. This appears to be a new setting in the field of inverse problems with unknown operators as considered in [CH05], [EK01], [HR05], [Mar06] and [CR07].
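Since the Fourier coefficients of the folded density satisfy γ_k = E[e^{−2iπkτ}] (the characteristic function of τ at frequency −2πk), they can be checked by Monte Carlo. The sketch below (names and the sampler are ours) compares the empirical average against the closed form for Laplace shifts, for which γ_k = 1/(1 + 2σ²π²k²).

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_mc(shift_sampler, ks, n_mc=200_000):
    """Monte Carlo approximation of gamma_k = E[exp(-2 i pi k tau)]."""
    tau = shift_sampler(n_mc)
    return np.exp(-2j * np.pi * np.outer(ks, tau)).mean(axis=1)

# Laplace(0, sigma/sqrt(2)) shifts: gamma_k = 1 / (1 + 2 sigma^2 pi^2 k^2)
sigma = 0.1
ks = np.arange(6)
approx = gamma_mc(lambda n: rng.laplace(0.0, sigma / np.sqrt(2), size=n), ks)
exact = 1.0 / (1.0 + 2 * sigma**2 * np.pi**2 * ks**2)
```

The imaginary parts vanish up to Monte Carlo error because the shift density is symmetric.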
We will see in the sequel that the additive noise ξ, which depends on f, slightly modifies the quadratic risk and the way to estimate f when compared to the classical procedures used in standard inverse problems with a deterministic operator.

Fourier Analysis and an inverse problem formulation
Supposing that f ∈ L²_p([0, 1]), we denote by θ_k its k-th Fourier coefficient, namely θ_k = ∫_0^1 f(x) e^{−2iπkx} dx. In the Fourier domain, the model (1.1) can be rewritten as c_{k,j} = θ_k e^{−2iπkτ_j} + ǫ z_{k,j}, where the z_{k,j} are i.i.d. N_C(0, 1) variables, i.e. complex Gaussian variables with zero mean and such that E|z_{k,j}|² = 1. This means that the real and imaginary parts of the z_{k,j}'s are Gaussian variables with zero mean and variance 1/2. Thus, we can compute the sample mean of the k-th Fourier coefficient over the n curves as c̄_k = (1/n) Σ_{j=1}^n c_{k,j} = γ̂_k θ_k + (ǫ/√n) ξ_k, with γ̂_k = (1/n) Σ_{j=1}^n e^{−2iπkτ_j}, (1.4) and the ξ_k's are i.i.d. complex Gaussian variables with zero mean and variance 1. The Fourier coefficients c̄_k in equation (1.4) can be viewed as observations coming from a statistical inverse problem. Indeed, the standard sequence space model of an ill-posed statistical inverse problem is (see [CGPT02] and the references therein) c_k = γ_k θ_k + σ z_k, (1.6) where the γ_k's are eigenvalues of a known linear operator, the z_k are random noise variables and σ is a level of noise which goes to zero for studying asymptotic properties. The issue in such models is to recover the coefficients θ_k from the observations c_k under various conditions on the decay to zero of the γ_k's as |k| → +∞. A large class of estimators for the problem (1.6) can be written as θ̂_k = λ_k γ_k^{−1} c_k, where λ = (λ_k)_{k∈Z} is a sequence of reals called a filter. Various estimators of this form have been studied in a number of papers, and we refer to [CGPT02] for more details.
In a sense, we can view equation (1.4) as an inverse problem (with σ = ǫ/√n) where the eigenvalues of the linear operator are the Fourier coefficients of the density g of the shifts, i.e. γ_k = ∫_R g(x) e^{−2iπkx} dx = E[e^{−2iπkτ_j}].
Indeed, let us assume that the density g of the random shifts is known. In this case, to estimate the Fourier coefficients of f, one can perform a deconvolution step of the form θ̂_k = λ_k γ_k^{−1} c̄_k, where c̄_k is defined in (1.4) and λ = (λ_k)_{k∈Z} is a filter whose choice will be discussed later on. Theoretical properties and optimal choices for the filter λ will be presented in the case where the coefficients γ_k are known. Such a framework is commonly used in inverse problems such as (1.6) to obtain consistency results and to study asymptotic rates of convergence, where it is generally supposed that the law of the additive error is Gaussian with zero mean and known variance σ², see e.g. [CGPT02]. In model (1.1), the random shifts may be viewed as a second source of noise, and for the theoretical analysis of this problem the law of this other random noise is also supposed to be known. Recently, some papers have addressed the problem of regularization with a partially known operator. For instance, [CH05] consider the case where the eigenvalues are unknown but independently observed. They deal with the model c_k = γ_k θ_k + ε ξ_k, γ̂_k = γ_k + σ η_k, (1.8) where (ξ_k)_{k∈N} and (η_k)_{k∈N} denote i.i.d. standard Gaussian variables. In this case, each coefficient θ_k can be estimated by γ̂_k^{−1} c_k. Similar models have been considered in [CR07], [Mar06] and [Mar09]. In a more general setting, we may refer to [EK01] and [HR05].
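The deconvolution step θ̂_k = λ_k γ_k^{−1} c̄_k can be sketched with the FFT; the following is an illustrative implementation under our own discretization conventions (equispaced grid, np.fft frequency ordering), not a prescribed procedure.

```python
import numpy as np

def deconvolve(Y, gamma, lam):
    """Spectral deconvolution sketch: theta_hat_k = lam_k * c_bar_k / gamma_k.

    Y     : (n, m) array of curves sampled on an equispaced grid of [0, 1)
    gamma : Fourier coefficients of the shift density, in np.fft ordering
    lam   : filter weights lambda_k, same ordering
    Returns the estimated template on the grid.
    """
    n, m = Y.shape
    c_bar = np.fft.fft(Y, axis=1).mean(axis=0) / m   # averaged coefficients
    theta_hat = lam * c_bar / gamma                   # deconvolution step
    return np.real(np.fft.ifft(theta_hat) * m)        # Fourier reconstruction
```

With no shifts (γ_k ≡ 1) and no noise, the reconstruction returns the sampled template exactly.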
In this paper, our framework is slightly different in the sense that the operator is stochastic, but the regularization is operated using deterministic eigenvalues. Hence the approach followed in the previous papers is not directly applicable to model (1.1). We believe that estimating f in model (1.1) without the knowledge of g remains a difficult task, and this paper is a first step towards addressing this issue.

Previous work in template estimation and shift recovery
The problem of estimating the common shape of a set of curves that differ by a time transformation is usually referred to as the curve registration problem, and it has received a lot of attention in the literature over the last two decades. Among the various methods that have been proposed, one can distinguish between landmark-based approaches, which aim at aligning common structural points of the curves (typically locations of extrema), see e.g. [GK95], [GK92], [Big06], and nonparametric modeling of the warping functions used to align a set of curves, see e.g. [RL01], [WG97], [LM04]. However, in these papers, consistent estimation of the common shape f as the number of curves n tends to infinity is generally not considered.
In the simplest case of shifted curves, various approaches have been developed. The self-modelling regression methods proposed by [KG88] are semiparametric models where each observed curve is a parametric transformation of a common regression function. Such models are usually referred to as shape invariant models, and estimation in this setting is usually done by iterating the following two steps: estimation of the parameters of the transformations (here the shifts) given a reference curve, and nonparametric estimation of a template by aligning the observed curves given a set of known transformation parameters. [KG88] studied the consistency of such a two-step procedure in an asymptotic framework where both the number of curves n and the number of observed points per curve grow to infinity. Due to the asymptotic equivalence between the white noise model and nonparametric regression with an equispaced design (see [BL96]), such an asymptotic framework in our setting would correspond to the case where n tends to infinity and ǫ tends to zero. In this paper, we prefer to focus only on the case where n may tend to infinity, keeping the level of additive noise in each observed curve fixed.
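For contrast with the one-step Fourier procedure of this paper, the classical alternating two-step scheme described above can be sketched as follows, restricted to grid-valued shifts and cross-correlation alignment (all names are ours, purely illustrative).

```python
import numpy as np

def align_and_average(Y, ref, n_iter=3):
    """Sketch of the classical two-step alignment loop (shifts only).

    Alternates (i) estimating each shift as the lag maximizing the
    circular cross-correlation with the current reference and
    (ii) averaging the back-shifted curves.
    """
    n, m = Y.shape
    template = ref.astype(float).copy()
    for _ in range(n_iter):
        F_Y = np.fft.fft(Y, axis=1)
        F_t = np.fft.fft(template)
        # circular cross-correlation of each curve with the template
        xcorr = np.real(np.fft.ifft(F_Y * np.conj(F_t), axis=1))
        shifts = np.argmax(xcorr, axis=1)          # estimated grid shifts
        aligned = np.stack([np.roll(Y[j], -shifts[j]) for j in range(n)])
        template = aligned.mean(axis=0)            # back-shifted average
    return template, shifts
```

With noiseless, integer-shifted copies of a curve and the first curve as reference, the loop recovers the relative shifts exactly.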
Based on models with curves observed at discrete time points, semiparametric estimation of the shifts and the shape function is proposed in [LMG07] and [Vim08] as the number of observations per curve grows, but with a fixed number n of curves. A generalisation of this approach to the estimation of scaling, rotation and translation parameters for two-dimensional images is proposed in [BGV08], again with a fixed number of observed images. Semiparametric and adaptive estimation of a shift parameter in the case of a single observed curve in a white noise model is considered by [DGT06] and [Dal07]. Estimation of a common shape for randomly shifted curves with asymptotics in n is considered in [Ron98], from the point of view of semiparametric estimation when the parameter of interest is infinite dimensional.
However, in all the above-cited papers, rates of convergence and oracle inequalities for the estimation of the template are generally not studied. Moreover, our procedure differs from the approaches classically used in curve registration, as our estimator is obtained in a single, very simple step: it is not based on an alternating scheme between estimation of the shifts and averaging of back-transformed curves given the estimated shift parameters.
Finally, note that [CL08] and [IRT08] consider a model similar to (1.1), but they rather focus on the estimation of the density g of the shifts as n tends to infinity. Using such an approach could be a good starting point for studying the estimation of the template f without the knowledge of g. However, we believe that this is far beyond the scope of this paper, and we prefer to leave this problem open for future work.

Organization of the paper
In Section 2, we consider an estimator of the shape function f based on spectral cut-off when the eigenvalues γ_k are known. Based on the principle of unbiased risk minimization developed in [CGPT02], we derive an oracle inequality that is then used to construct an adaptive estimator of f on Sobolev spaces. This estimator is based on the Fourier transform of the curves with a data-based choice of the frequency cut-off. In Section 3, we study the asymptotic properties of this estimator in terms of minimax rates of convergence over Sobolev balls. Finally, in Section 4, a short simulation study illustrates the numerical properties of the estimator. All proofs are deferred to a technical section at the end of the paper.

Estimation of the common shape
In the following, we assume that the Fourier coefficients γ_k are known. In this situation it is possible to choose a data-dependent filter λ⋆ that mimics the performance of an optimal filter λ_0, called oracle, which would be obtained if we knew the true template f. The performance of this filter is related to the performance of the filter λ_0 via an oracle inequality. In this section, most of our results are non-asymptotic and are thus related to the approach proposed in [CGPT02] to study standard statistical inverse problems via oracle inequalities.

Smoothness assumptions for the density g
In a deconvolution problem, it is well known that the difficulty of estimating f is quantified by the decay to zero of the γ_k's as |k| → +∞. Depending on how fast these Fourier coefficients tend to zero as |k| → +∞, the reconstruction of f will be more or less accurate. This phenomenon was systematically studied by [Fan91] in the context of density deconvolution. In this paper, the following type of assumption on g is considered: Assumption 2.1 The Fourier coefficients of g have a polynomial decay, i.e. for some real β ≥ 0, there exist two constants C_max ≥ C_min > 0 such that C_min |k|^{−β} ≤ |γ_k| ≤ C_max |k|^{−β} for all k ≠ 0. (2.1) Remark that the knowledge of the constants C_max, C_min and β will not be necessary for the construction of our estimator.

Risk decomposition
Assuming that γ_k ≠ 0 for all k ∈ Z, we recall that an estimator of the θ_k's is given by θ̂_k = λ_k γ_k^{−1} c̄_k, where λ = (λ_k)_{k∈Z} is a real sequence. Examples of commonly used filters include the projection weights λ_k = 1_{|k|≤N} for some integer N, and the Tikhonov weights λ_k = 1/(1 + (|k|/ν_2)^{ν_1}) for some parameters ν_1 > 0 and ν_2 > 0. Based on the θ̂_k's, one can estimate the signal f using the Fourier reconstruction formula. The problem is then to choose the sequence (λ_k)_{k∈Z} in an optimal way with respect to an appropriate risk. For a given filter λ, we use the classical ℓ²-norm to define the risk of the estimator θ̂(λ) = (θ̂_k)_{k∈Z} as R(λ, θ) = E‖θ̂(λ) − θ‖²_{ℓ²}. (2.2) Note that analyzing the risk (2.2) is equivalent to analyzing the mean integrated squared error of the corresponding estimator of f. The following lemma gives the bias-variance decomposition of R(λ, θ).
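The two filter families just mentioned are straightforward to implement; a minimal sketch (names ours), with integer frequencies in np.fft ordering:

```python
import numpy as np

# Integer frequencies k in np.fft ordering for a grid of m points
m = 64
k = np.rint(np.fft.fftfreq(m) * m).astype(int)   # 0, 1, ..., 31, -32, ..., -1

def projection_filter(k, N):
    """Projection weights lambda_k = 1_{|k| <= N}."""
    return (np.abs(k) <= N).astype(float)

def tikhonov_filter(k, nu1, nu2):
    """Tikhonov weights lambda_k = 1 / (1 + (|k|/nu2)^nu1)."""
    return 1.0 / (1.0 + (np.abs(k) / nu2) ** nu1)

lam_proj = projection_filter(k, N=5)
lam_tik = tikhonov_filter(k, nu1=2.0, nu2=5.0)
```

Either weight vector can be passed directly as the `lam` argument of a spectral reconstruction in the same frequency ordering.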
Lemma 2.1 For any given nonrandom filter λ, the risk of the estimator θ̂(λ) can be decomposed as R(λ, θ) = Σ_{k∈Z} (1 − λ_k)² |θ_k|² + V_1(λ) + V_2(λ), with V_1(λ) = (ǫ²/n) Σ_{k∈Z} λ_k² |γ_k|^{−2} and V_2(λ) = (1/n) Σ_{k∈Z} λ_k² |θ_k|² (1 − |γ_k|²) |γ_k|^{−2}. (2.3) For a fixed number of curves n and a given shape function f, choosing an optimal filter in a set of possible candidates amounts to finding the best tradeoff between low bias and low variance in the above expression. However, this decomposition does not correspond exactly to the classical bias-variance decomposition for linear inverse problems. Indeed, the variance term in (2.3) is the sum of the two terms V_1 and V_2, and thus differs from the classical expression of the variance for a linear estimator in statistical inverse problems: using our notations, the classical variance term is V_1(λ) = (ǫ²/n) Σ_{k∈Z} λ_k² |γ_k|^{−2}, which appears in most linear inverse problems.
However, contrary to standard inverse problems, the variance term of the risk also depends on the Fourier coefficients θ_k of the unknown function f to recover. Indeed, our data γ_k^{−1} c̄_k are noisy observations of θ_k, and we invert the problem using the sequence (γ_k)_{k∈Z} instead of the sequence (γ̂_k)_{k∈Z}, which is the one involved in the construction of the coefficients c̄_k. This explains the presence of the second term V_2. In particular, the quadratic risk takes its usual form in the case where γ̂_k = γ_k.
A similar phenomenon occurs with the model (1.8), although it is more difficult to quantify. Indeed, in this setting the estimator γ̂_k^{−1} c_k involves a random perturbation of the eigenvalues. Hence, we also observe an additional term depending on θ. This term is controlled using a Taylor expansion, but the quadratic risk cannot be expressed in a simple form. We refer to [Mar09] for a discussion with some numerical simulations, and to [CH05], [EK01], [HR05], [Mar06] and [CR07].

An oracle estimator and unbiased estimation of the risk
Suppose that one is given a finite set of possible candidate filters Λ = (λ^N)_{N∈I}, with λ^N = (λ^N_k)_{k∈Z}, N ∈ I ⊂ N, which satisfy some general conditions to be discussed later on. In the case of projection filters, Λ can be for example the set of filters λ^N_k = 1_{|k|≤N}, k ∈ Z, for N = 1, ..., m_0.
Given a set of filters Λ, the best estimator corresponds to the filter λ_0, called the oracle, which minimizes the risk R(λ, θ) over Λ, i.e. λ_0 = arg min_{λ∈Λ} R(λ, θ).
This filter is called an oracle because it cannot be computed in practice, as the sequence of coefficients θ is unknown. However, the oracle λ_0 can be used as a benchmark to evaluate the quality of a data-dependent filter λ⋆ chosen in the set Λ. This is the main interpretation of the oracle inequality that we will develop in the next section. Now, suppose that it is possible to construct an unbiased estimator Θ̂²_k of |θ_k|². For any nonrandom filter λ, using Θ̂²_k, one can compute an estimator U(λ, X) of the risk R(λ, θ). Then, for choosing a data-dependent filter, the principle of unbiased risk estimation (see [CGPT02] for further details) simply suggests minimizing the criterion U(λ, X) over λ ∈ Λ instead of the criterion R(λ, θ). Our data-dependent choice of λ is thus λ⋆ = arg min_{λ∈Λ} U(λ, X). (2.5) Typically, in practice, all the filters λ ∈ Λ are such that λ_k = 0 (or vanishingly small) for all k large enough. Hence, for such choices of filters, numerical computation of the above expression is feasible since it only involves finite sums.

Unbiased Risk Estimation (URE)
For the sake of simplicity, we only consider spectral cut-off schemes in the following. In this case, Λ corresponds to the set of filters (1_{|k|≤N})_{k∈Z} for N ∈ N. All the results presented in this paper could be generalized to wider families of estimators (Tikhonov, Landweber, Pinsker, ...); the price to pay would be longer and more technical proofs. From Lemma 2.1, the quadratic risk R(θ, λ) := R(θ, N) of a projection filter can be written as R(θ, N) = Σ_{|k|>N} |θ_k|² + (ǫ²/n) Σ_{|k|≤N} |γ_k|^{−2} + (1/n) Σ_{|k|≤N} |θ_k|² (1 − |γ_k|²) |γ_k|^{−2}. We aim to minimize R with respect to N while θ is unknown.
Using Θ̂²_k, an unbiased estimator of |θ_k|² built from |c̄_k|² − ǫ²/n, we minimize the criterion U(Y, N) defined in (2.6), which is an unbiased risk estimator of R(θ, N) − ‖θ‖²₂. Unfortunately, such a criterion does not lead to satisfying results. Contrary to the approach developed in [CH05], we take into account the error generated by the use of an approximation of the eigenvalues. The estimator related to the criterion (2.6) involves stochastic processes that require a specific treatment. In order to control these processes, we will consider in the following the criterion Ū(Y, N). Remark that Ū(Y, N) can be written as U(Y, N) + pen(N), where (pen(N))_{N∈N} denotes a penalty term. It appears from the proofs that this penalty is a natural candidate for the control of the processes involved in the behavior of the estimator constructed below. The associated data-based filter is defined via the bandwidth N⋆ = arg min_{N≤m_0} Ū(Y, N). Remark that we do not minimize our criterion Ū(Y, N) over all N ∈ N, but rather over N ≤ m_0. Indeed, each coefficient θ_k is estimated by γ_k^{−1} c̄_k, where γ_k = E[γ̂_k]. Hence, the ratio γ_k^{−1} γ̂_k should be as close as possible to 1. Since γ_k → 0 as |k| → +∞ and the variance of γ̂_k is of constant order in k, large values of k should be avoided. Similar bounds on the resolution level are used in papers related to partially known operators: see for instance [CH05] or [EK01]. These bounds have to be carefully chosen, but are not of first importance. In general, estimating the operator is easier than estimating the function f.
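As an illustration only, the selection of the cut-off can be sketched as follows. The exact criteria (2.6)-(2.7) are not reproduced in this sketch: the code uses the proxy |γ_k|^{−2}(|c̄_k|² − ǫ²/n) for |θ_k|² and a generic penalty of the stated log²(n)/n type, so it is a stand-in for the precise procedure, with all names ours.

```python
import numpy as np

def select_cutoff(c_bar, gamma, eps, n, m0, pen_const=1.0):
    """Sketch of penalized unbiased-risk selection of the cut-off.

    c_bar, gamma are indexed by k = 0, 1, ..., m0 (positive frequencies
    only, for simplicity).  theta2 is the unbiased-type proxy for
    |theta_k|^2; the criterion accumulates, per frequency, the
    stochastic error, minus the proxy (bias reduction), plus a
    log^2(n)/n-type penalty standing in for pen(N).
    """
    g2 = np.abs(gamma[: m0 + 1]) ** 2
    theta2 = (np.abs(c_bar[: m0 + 1]) ** 2 - eps**2 / n) / g2
    var = np.cumsum(eps**2 / (n * g2))            # stochastic error term
    bias = -np.cumsum(theta2)                     # decreasing bias proxy
    pen = pen_const * np.log(n) ** 2 / n * np.cumsum(1.0 / g2)
    crit = var + bias + pen
    return int(np.argmin(crit))                   # selected cut-off
```

On noiseless synthetic coefficients with rapidly decaying θ_k, the rule picks a small cut-off, as expected from the penalty.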

Sharp estimator of the risk
We are now able to propose a first adaptive estimator. In the following, we denote by θ⋆ the estimator related to the bandwidth N⋆, namely θ⋆_k = 1_{|k|≤N⋆} γ_k^{−1} c̄_k. (2.10) The next theorem summarizes the performance of θ⋆ through a simple oracle inequality. The proof is postponed to Section 5.
From Theorem 2.1, our estimator θ⋆ behaves similarly to the minimizer of R̄(θ, N). This quantity only differs from the quadratic risk by a logarithmic term, which can be explained by the choice of the criterion (2.7). The last two terms on the right hand side of (2.11) are of order 1/n and may thus be considered negligible in most cases.
In the next section, we prove that our estimator attains the minimax rate of convergence on many functional spaces. In particular, the logarithmic term and the bandwidth m_0 have no influence on the performance of our estimator from a minimax point of view.

Rough estimator
In the procedure described above, we have decided to take into account the error generated by the use of the sequence (γ_k)_{k∈Z} instead of (γ̂_k)_{k∈Z}. Although their setting is slightly different from ours, papers dealing with regularization with an unknown operator implicitly consider this error as negligible for the regularization. The goal is then to prove that the related estimators are not affected by the noise in the operator, i.e. that this error does not appear in the oracle.
It is thus also possible to apply a similar scheme in our setting and to consider the bias highlighted in Lemma 2.1 as negligible. We introduce R̃(θ, N) = Σ_{|k|>N} |θ_k|² + (ǫ²/n) Σ_{|k|≤N} |γ_k|^{−2}, (2.13) which corresponds to the usual quadratic risk in an inverse problem setting. From now on, our aim is to mimic the oracle for R̃(θ, N), i.e. Ñ_0 = arg min_N R̃(θ, N). To this end, we use exactly the same scheme as for the construction of θ⋆, starting from R̃(θ, N) instead of R(θ, N). Define the corresponding criterion Ũ(Y, N) as in (2.14). Then, we introduce Ñ = arg min_{N≤m_0} Ũ(Y, N), where m_0 has been introduced in (2.9). Hence, this estimator only differs from the previous one by the choice of the regularization parameter Ñ. The performance of θ̃ is detailed below.
We will see in Section 3 that the performances of θ⋆ and θ̃ are essentially the same from a minimax point of view. The existing differences may be revealed by comparing the oracle inequalities obtained in Theorems 2.1 and 2.2, although this is always a difficult task. Since R̄(θ, N) only differs from R(θ, N) by a logarithmic term, we may be concerned by the residual term of order ‖θ‖². For fixed ǫ and n, this term may matter compared to R(θ, N), in particular for large ‖θ‖². Hence, the second estimator may be inappropriate when estimating functions with large norms.
More precisely, θ̃ is a pertinent choice as soon as R̃(θ, N) is close to R(θ, N). This can be checked through the quadratic risk given in Lemma 2.1. For instance, with a fixed ǫ, this will be the case for functions with 'small' Fourier coefficients (in particular, small norms). On the other hand, as soon as ǫ becomes 'small', the behaviours of R̃(θ, N) and R(θ, N) may differ strongly. This may produce significant differences in the performances of θ⋆ and θ̃.

Minimax rates of convergence for Sobolev balls
We provide in this section a short discussion of the performance of our estimators from the asymptotic minimax point of view. For this, let 1 ≤ p, q ≤ ∞ and A > 0, and suppose that f belongs to a Besov ball B^s_{p,q}(A) of radius A (see e.g. [DJKP95] for a precise definition of Besov spaces). [BG09] have derived the following asymptotic minimax lower bound for the quadratic risk over a large class of Besov balls.
Then, there exists a universal constant M_1 depending on A, s, p, q such that lim inf_{n→+∞} inf_{f̂_n} sup_{f∈B^s_{p,q}(A)} n^{2s/(2s+2β+1)} E‖f̂_n − f‖² ≥ M_1, where f̂_n ∈ L²_p([0, 1]) denotes any estimator of the common shape f, i.e. a measurable function of the random processes Y_j, j = 1, ..., n. Our upper bounds match this lower bound provided N⋆ ≤ m_0. It can be checked that the choice (2.9) implies that m_0 ∼ n^{1/(2β)}, and thus for sufficiently large n we have N⋆ < m_0. Similarly, the choice N ∼ n^{1/(2s+2β+1)} balances the bias and variance terms of the risk. Now, remark that for the two estimators θ⋆ and θ̃, Theorems 2.1 and 2.2 yield that E_θ‖θ⋆ − θ‖² = O(inf_{N≤m_0} R̄(θ, N)) and E_θ‖θ̃ − θ‖² = O(inf_{N≤m_0} R(θ, N)) as n → +∞, since the additional terms in the bounds (2.11) and (2.16) are of order O(n^{−(1−ζ)}) for a sufficiently small positive ζ. Hence, combining the above arguments, one finally obtains the following result: Corollary 1 Suppose that the density g satisfies the polynomial decay condition (2.1) at rate β for its Fourier coefficients. Then, as n → +∞, E_θ‖θ̃ − θ‖² = O(n^{−2s/(2s+2β+1)}), while E_θ‖θ⋆ − θ‖² = O((n/log²(n))^{−2s/(2s+2β+1)}). From the lower bound obtained in Theorem 3.1 we conclude that, for s ≥ 2β + 1, the performance of the estimator θ̃ is asymptotically optimal from the minimax point of view, while the estimator θ⋆ is near-optimal up to a log²(n) factor. This near-optimal rate of convergence of θ⋆ is due to the use of the penalized criterion Ū(Y, N), see (2.7), with a penalty term involving a log²(n)/n factor used to eliminate the term (1/n) Σ_{|k|≤N} |γ_k|^{−4} (|c̄_k|² − ǫ²/n) in the unbiased risk U(Y, N), see (2.6). This shows that the performances of θ⋆ and θ̃ are essentially the same from a minimax point of view.
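Under polynomial decay of degree β, the benchmark rate appearing in Theorem 3.1 and Corollary 1 is n^{−2s/(2s+2β+1)} for the quadratic risk; a small helper to evaluate it numerically (illustrative only, the function name is ours):

```python
def minimax_rate(n, s, beta):
    """Minimax quadratic-risk rate n^{-2s/(2s + 2*beta + 1)} for an
    s-smooth template under polynomial eigenvalue decay of degree beta."""
    return n ** (-2 * s / (2 * s + 2 * beta + 1))

# e.g. s = 3, beta = 2 (degree of ill-posedness of the Laplace shifts)
rates = [minimax_rate(n, s=3.0, beta=2.0) for n in (100, 1000, 10000)]
```

The rate is monotone in n, and the exponent reduces to the classical −2s/(2s+1) in the direct case β = 0.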

Numerical experiments
For the mean pattern f to recover, we consider the smooth function shown in Figure 1(a). Then, we simulate n = 100 randomly shifted curves with shifts following a Laplace distribution g(x) = (1/(√2 σ)) exp(−√2 |x|/σ) with σ = 0.1. Gaussian noise with a moderate variance (different from that used in the Laplace distribution) is then added to each curve. A subsample of 10 curves is shown in Figure 1(b). The Fourier coefficients of the density g are given by γ_k = 1/(1 + 2σ²π²k²), which corresponds to a degree of ill-posedness β = 2.
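The closed-form coefficients γ_k = 1/(1 + 2σ²π²k²) and their quadratic decay (β = 2) can be verified numerically; in the sketch below (variable names ours), k²γ_k approaches the constant 1/(2σ²π²).

```python
import numpy as np

sigma = 0.1
k = np.arange(1, 33)                                  # k = 1, ..., m0 = 32
gamma = 1.0 / (1.0 + 2 * sigma**2 * np.pi**2 * k**2)  # Laplace-shift coefficients

# polynomial decay of degree beta = 2: k^2 * gamma_k -> 1/(2 sigma^2 pi^2)
limit = 1.0 / (2 * sigma**2 * np.pi**2)
```

The coefficients are strictly decreasing in k, consistent with Assumption 2.1 with β = 2.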
The condition (2.9) thus leads to the choice m_0 = 32. Minimization of the criteria (2.8) and (2.15) leads respectively to the choices N⋆ = 13 and Ñ = 30. Examples of estimation by spectral cut-off using either N⋆ or Ñ are displayed in Figure 1(c) and Figure 1(d). The estimator obtained with the frequency cut-off N⋆ = 13 is very satisfactory, while the choice Ñ = 30 seems too large, as the resulting estimator in Figure 1(d) is not as smooth as the one with N⋆ = 13. This suggests that minimizing Ū(Y, N) leads to a smaller frequency cut-off than minimizing the criterion Ũ(Y, N). This is confirmed by the results displayed in Figure 2, which gives the histograms of the selected values of N⋆ and Ñ over M = 100 independent replications of the simulations described above. Clearly, the value of N⋆ is generally much smaller than Ñ, and thus minimizing (2.15) may lead to undersmoothing, which illustrates numerically our discussion in Section 2 on the differences between θ⋆ and θ̃.

Proofs
Proof of Theorem 2.1. The proof follows three steps. First, we compute the quadratic risk of θ⋆ and prove that it is close to R̄(θ, N⋆). The aim of the second step is to prove that Ū(Y, N⋆) is close to R̄(θ, N⋆), even for the random bandwidth N⋆. Then, we use the fact that N⋆ minimizes the criterion Ū(Y, N) over the integers smaller than m_0, and we compute the expectation of Ū(Y, N) for all deterministic N in order to obtain an oracle inequality.
First, we decompose the risk, where for a given z ∈ C, Re(z) denotes the real part of z and z̄ its conjugate. The last equality can be rewritten in terms of R̃(θ, N), defined in (2.13). Thanks to Lemma 5.1, setting K = 1, we obtain a first bound. Now, consider a bound for A_2. For all N ∈ N, set Σ_N = Σ_{|k|≤N} |γ_k|^{−4}. Then, for all p ∈ ]1, 2[ and 1 > γ > 0, a bound follows; the last step can be derived from a Doob inequality, see for instance [CG06]. Thanks to the polynomial decay Assumption 2.1 on the sequence (γ_k)_k and setting p = 2(2β + 1)/(4β + 1), we obtain the desired control. Then, for all 1 > B > 0, using the Cauchy-Schwarz and Young inequalities with the bounds (5.2) and (5.3), we deduce that, for any K > 0, the risk is controlled in terms of R̄(θ, N), which is defined in (2.12). This concludes the first step of our proof. Now, we write Ū(Y, N⋆) in terms of R̄(θ, N⋆). In the following, we define x_n = (1 − n^{−1}). The resulting equality can be decomposed into the terms E_1, E_2 and E_3. First consider the bound of E_1. Thanks to Lemma 5.2 and some simple algebra, E_1 is controlled. The terms E_2 and E_3 are bounded using respectively (5.3) and Lemma 5.3. We get (5.8). We are now interested in the second residual term of (5.6). Thanks to the definition of c̄_k, it is bounded by some D > 0 independent of ǫ and n. Indeed, we can use essentially the same algebra as for the bounds of the terms E_1, E_2 and E_3, together with the inequality |γ_k|^{−2} ≤ n/log²(n) for all |k| ≤ m_0.
Proof of Theorem 2.2. The proof follows the same main lines as for Theorem 2.1. Inequality (5.1) provides a first decomposition. Thanks to Lemma 5.1 and an inequality of [CGPT02], we obtain, for all 0 < γ < 1, a bound on the leading term. Then, for all B > 0, using the Cauchy-Schwarz and Young inequalities with the bounds (5.2) and (5.3), and choosing B = √γ, we obtain from (5.1)-(5.4) the inequality (5.14). The resulting equality can then be rewritten and bounded using the previous results, which gives (5.16). The terms E_2 and E_3 are bounded using respectively (5.3) and Lemma 5.3. From the definition of Ñ, we immediately get a comparison with any deterministic bandwidth N_0. In order to conclude the proof, we show that E_θ Ũ(Y, N_0) is close to R̃(θ, N_0). Using (5.5) and (5.19), and since R̃(θ, N) ≤ R(θ, N), we eventually obtain the desired bound. This concludes the proof of Theorem 2.2.
PROOF. Let Q > 0 be a deterministic term which will be chosen later.
Thanks to (2.8) and (2.9), for all |k| ≤ m_0, an integration by parts applies. For x ≥ Q, a Bernstein type inequality then provides, for all |k| ≤ m_0, a bound in which C denotes a positive constant independent of Q. Let K > 0. Choosing for instance Q = n^{−1} K log²(n), we obtain a bound involving Σ |γ_k|^{−2} |θ_k|² + C n m_0 log²(n) e^{−K log²(n)/4}, where C denotes a positive constant independent of ǫ and n. This concludes the proof of Lemma 5.1.
Lemma 5.2 Let N⋆ be defined in (2.8). For all deterministic bandwidths N and all 0 < γ < 1, we have the stated bound, where C denotes a positive constant independent of ǫ and n.
Just set K = γ 2 in order to conclude the proof of Lemma 5.2.
Lemma 5.3 Let N⋆ be the bandwidth defined in (2.8). For all deterministic bandwidths N and all 0 < γ < 1, we have