Nonparametric estimation of covariance functions by model selection

We propose a model selection approach for covariance estimation of a multi-dimensional stochastic process. Under very general assumptions, observing i.i.d. replications of the process at fixed observation points, we construct an estimator of the covariance function by expanding the process onto a collection of basis functions. We study the non-asymptotic properties of this estimator and give a tractable way of selecting the best estimator among a set of candidates. The optimality of the procedure is proved via an oracle inequality which guarantees that the best model is selected.


Introduction
Estimating the covariance function of a stochastic process is a fundamental issue with many applications, ranging from geostatistics and financial series to epidemiology (we refer to [23], [13] or [8] for general references). While parametric methods have been extensively studied in the statistical literature (see [8] for a review), nonparametric procedures have only recently received growing attention. One of the main difficulties in this framework is to impose that the estimator is itself a covariance function, which prevents the direct use of standard nonparametric methods. In this paper, we propose a model selection procedure to construct a nonparametric estimator of the covariance function of a stochastic process under general assumptions on the process. In particular we assume neither Gaussianity nor stationarity. Consider a stochastic process X(t) with values in R, indexed by t ∈ T, a subset of R^d, d ∈ N. Throughout the paper, we assume that its covariance function is finite, i.e. σ(s, t) = cov(X(s), X(t)) < +∞ for all s, t ∈ T, and, for the sake of simplicity, that the process has zero mean, E(X(t)) = 0 for all t ∈ T. The observations are X_i(t_j) for i = 1, . . . , N, j = 1, . . . , n, where the observation points t_1, . . . , t_n ∈ T are fixed and X_1, . . . , X_N are independent copies of the process X.
Functional approximations of the processes X_1, . . . , X_N from the data (X_i(t_j)) are involved in covariance function estimation. When dealing with functional data analysis (see, e.g., [20]), smoothing the processes X_1, . . . , X_N is sometimes carried out as a first step before computing the empirical covariance, for example by spline interpolation (see for instance [9]) or by projection onto a general finite basis. Let x_i = (X_i(t_1), . . . , X_i(t_n))^⊤ be the vector of observations at the points t_1, . . . , t_n, with i ∈ {1, . . . , N}. Let {g_λ}_{λ∈M} be a collection of (possibly linearly independent) functions g_λ : T → R, where M denotes a generic countable set of indices. Then let m ⊂ M be a subset of indices of size |m| ∈ N, and define the n × |m| matrix G with entries g_jλ = g_λ(t_j), j = 1, . . . , n, λ ∈ m. G will be called the design matrix corresponding to the set of basis functions indexed by m.
In this setting, usual covariance estimation is a two-step procedure: first, for each i = 1, . . . , N, fit the regression model

x_i = G a_i + ε_i,  i = 1, . . . , N,    (1.1)

(by least squares or regularized least squares), where the ε_i are random vectors in R^n, to obtain estimates â_i = (â_{i,λ})_{λ∈m} ∈ R^{|m|} of a_i, where in the case of standard least squares estimation (assuming for simplicity that G^⊤G is invertible)

â_i = (G^⊤G)^{-1} G^⊤ x_i.

Then, estimation of the covariance is obtained by computing the estimate

Σ̂ = (1/N) ∑_{i=1}^N (G â_i)(G â_i)^⊤ = G Ψ̂ G^⊤.    (1.2)

This corresponds to approximating the process X by a truncated process X̃_i defined as

X̃_i(t) = ∑_{λ∈m} â_{i,λ} g_λ(t),  i = 1, . . . , N,

and to choosing the empirical covariance of X̃ as an estimator of the covariance of X, defined by

σ̂(s, t) = (1/N) ∑_{i=1}^N X̃_i(s) X̃_i(t) = G_s^⊤ Ψ̂ G_t,  with  Ψ̂ = (1/N) ∑_{i=1}^N â_i â_i^⊤.    (1.3)

In this paper, we consider the estimator (1.2) as the least squares estimator of the following matrix regression model

x_i x_i^⊤ = G Ψ G^⊤ + U_i,  i = 1, . . . , N,    (1.4)

where Ψ is a symmetric matrix and the U_i are i.i.d. matrix errors. Fitting the models (1.1) and (1.4) by least squares naturally leads to different contrast and risk functions, as the estimation is not performed in the same space (R^{|m|} for model (1.1) and R^{|m|×|m|} for model (1.4)). By choosing an appropriate loss function, least squares estimation in model (1.4) also leads to the natural estimate (1.2) derived from least squares estimation in model (1.1). A similar estimate can be found in [11]. However, in this paper we tackle the problem of model selection, i.e. choosing an appropriate data-based subset of indices m ∈ M, which is very different for model (1.1) and model (1.4). Indeed, model selection for (1.1) depends on the variability of the vectors x_i, while for (1.4) it depends on the variability of the matrices x_i x_i^⊤. One of the main contributions of this paper is to show that considering model (1.4) makes it possible to handle a large variety of cases and to build an optimal model selection estimator of the covariance without overly strong assumptions on the model.
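The two-step procedure above can be sketched numerically as follows (Python with NumPy; the cosine design, sample sizes and noise level are hypothetical choices for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, m = 20, 200, 5                       # design points, replications, basis size
t = np.linspace(0.0, 1.0, n)

# Hypothetical cosine basis; the paper allows any collection {g_lambda}.
G = np.column_stack([np.cos(2 * np.pi * k * t) for k in range(m)])  # n x m design matrix

# Simulate N i.i.d. curves x_i = G a_i + eps_i  (model (1.1)).
A = rng.normal(size=(N, m))                # random coefficients a_i
X = A @ G.T + 0.1 * rng.normal(size=(N, n))

# Step 1: least squares coefficients a_hat_i = (G^T G)^{-1} G^T x_i (rows of A_hat).
A_hat = X @ G @ np.linalg.inv(G.T @ G)

# Step 2: Psi_hat = (1/N) sum a_hat_i a_hat_i^T, then Sigma_hat = G Psi_hat G^T.
Psi_hat = A_hat.T @ A_hat / N
Sigma_hat = G @ Psi_hat @ G.T              # estimated covariance at the design points
```

By construction Ψ̂ is an average of outer products, so Σ̂ is automatically non-negative definite.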
Moreover, it will be shown that considering model (1.4) leads to the estimator Ψ̂ defined in (1.3), which lies in the class of non-negative definite matrices and thus provides a proper covariance matrix Σ̂ = G Ψ̂ G^⊤.
A similar method has been developed for smooth interpolation of covariance functions in [6], but restricted to basis functions determined by reproducing kernels in suitable Hilbert spaces and to a different fitting criterion. Similar ideas are also tackled in [19]. These authors deal with the estimation of Σ within the covariance class Γ = GΨG^⊤ induced by an orthogonal wavelet expansion. However, their fitting criterion is not general, since they choose the Gaussian likelihood as a contrast function, and thus their method requires specific distributional assumptions. We also point out that computation of the Gaussian likelihood requires inversion of GΨ̂G^⊤, which is not directly feasible if rank(G) < n or if some diagonal entries of the non-negative definite (n.n.d.) matrix Ψ̂ are zero.
Hence, to our knowledge, no previous work has proposed to use the matrix regression model (1.4) under general moment assumptions on the process X, with a general basis expansion, for nonparametric covariance function estimation. We point out that the asymptotics will be taken with respect to the number of replications N, while the observation points t_i, i = 1, . . . , n, remain fixed.
The paper falls into the following parts. The statistical framework of the matrix regression model is described in Section 2. Section 3 is devoted to the main statistical results: we study the behavior of the estimator for a fixed model in Section 3.1, while Section 3.2 deals with the model selection procedure and provides the oracle inequality. Section 4 states a concentration inequality that is used throughout the paper, while some numerical experiments are described in Section 5. The proofs are postponed to a technical Appendix.

Nonparametric model selection for covariance estimation
Recall that X = (X(t))_{t∈T} is an R-valued stochastic process, where T denotes some subset of R^d, d ∈ N. Assume that X has finite moments up to order 4 and zero mean, i.e. E(X(t)) = 0 for all t ∈ T. The covariance function of X is denoted by σ(s, t) = cov(X(s), X(t)) for s, t ∈ T, and recall that X_1, . . . , X_N are independent copies of the process X.
In this work, we observe independent copies of the process at different points t_1, . . . , t_n ∈ T, denoted by X_i(t_j), i = 1, . . . , N, j = 1, . . . , n. Set x_i = (X_i(t_1), . . . , X_i(t_n))^⊤, the vector of observations at the points t_1, . . . , t_n, for each i = 1, . . . , N. The matrix Σ = E(x_i x_i^⊤) = (σ(t_j, t_k))_{1≤j,k≤n} is the covariance matrix of X at the observation points. Let x̄ and S denote the sample mean and the sample covariance (not corrected by the mean) of the data x_1, . . . , x_N, i.e.

x̄ = (1/N) ∑_{i=1}^N x_i,   S = (1/N) ∑_{i=1}^N x_i x_i^⊤.
Our aim is to build a model selection estimator of the covariance of the process observed with N replications, without additional assumptions such as stationarity or Gaussianity. The asymptotics will be taken with respect to N, the number of copies of the process.

Notations and preliminary definitions
First, we define specific matrix notations. We refer to [18] or [14] for definitions and properties of matrix operations and special matrices. As usual, vectors in R^k are regarded as column vectors for all k ∈ N. For any matrix A, A^⊤ is the transpose of A, tr(A) is the trace of A, ‖A‖ is the Frobenius matrix norm defined by ‖A‖² = tr(AA^⊤), λ_max(A) is the maximum eigenvalue of A, and ρ(A) is the spectral norm of A; in particular ρ(A) = λ_max(A) for A an n.n.d. matrix.
In the following, we will consider matrix data as a natural extension of vectorial data, with a different correlation structure. For this, we introduce a natural linear transformation which converts any matrix into a column vector. The vectorization of a k × n matrix A = (a_ij)_{1≤i≤k,1≤j≤n} is the kn × 1 column vector, denoted by vec(A), obtained by stacking the columns of A on top of one another; that is, vec(A) = [a_11, . . . , a_k1, a_12, . . . , a_k2, . . . , a_1n, . . . , a_kn]^⊤.
For a symmetric k × k matrix A, the vector vec(A) contains more information than necessary, since the matrix is completely determined by its lower triangular portion, that is, the k(k+1)/2 entries on and below the main diagonal. Hence, we introduce the symmetrized vectorization, which corresponds to a half-vectorization, denoted by vech(A). More precisely, for any matrix A = (a_ij)_{1≤i,j≤k}, define vech(A) as the k(k+1)/2 × 1 column vector obtained by vectorizing only the lower triangular part of A; that is, vech(A) = [a_11, . . . , a_k1, a_22, . . . , a_k2, . . . , a_(k−1)(k−1), a_(k−1)k, a_kk]^⊤. There exist unique linear transformations which transform the half-vectorization of a symmetric matrix into its vectorization and vice versa, called, respectively, the duplication matrix and the elimination matrix. For any k ∈ N, the k² × k(k+1)/2 duplication matrix is denoted by D_k, 1_k = (1, . . . , 1)^⊤ ∈ R^k and I_k is the identity matrix in R^{k×k}.
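These operators can be sketched as follows (a hypothetical helper implementation; `vec` is column-major stacking and `D_k` satisfies D_k vech(A) = vec(A) for symmetric A):

```python
import numpy as np

def vec(A):
    # Stack the columns of A on top of one another (column-major order).
    return A.reshape(-1, order="F")

def vech(A):
    # Lower triangular part of a k x k matrix, stacked column by column.
    k = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(k)])

def duplication_matrix(k):
    # D_k maps vech(A) to vec(A) when A is symmetric.
    D = np.zeros((k * k, k * (k + 1) // 2))
    col = 0
    for j in range(k):
        for i in range(j, k):
            D[j * k + i, col] = 1.0        # position of entry (i, j) in vec(A)
            if i != j:
                D[i * k + j, col] = 1.0    # mirror entry (j, i)
            col += 1
    return D
```

For a symmetric 4 × 4 matrix, `duplication_matrix(4)` is 16 × 10 and reproduces vec(A) from vech(A).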
If A = (a_ij)_{1≤i≤k,1≤j≤n} is a k × n matrix and B = (b_ij)_{1≤i≤p,1≤j≤q} is a p × q matrix, then the Kronecker product of the two matrices, denoted by A ⊗ B, is the kp × nq block matrix whose (i, j)-th block is a_ij B. For any random matrix Z = (Z_ij)_{1≤i≤k,1≤j≤n}, its expectation is denoted by E(Z) = (E(Z_ij))_{1≤i≤k,1≤j≤n}. For any random vector z = (Z_i)_{1≤i≤k}, let V(z) = (cov(Z_i, Z_j))_{1≤i,j≤k} be its covariance matrix. With this notation, V(x_1) = V(x_i) = (σ(t_j, t_k))_{1≤j,k≤n} is the covariance matrix of X.
Let m ∈ M, and recall that to the finite set G_m = {g_λ}_{λ∈m} of functions g_λ : T → R we associate the n × |m| matrix G with entries g_jλ = g_λ(t_j), j = 1, . . . , n, λ ∈ m. Furthermore, for each t ∈ T, we write G_t = (g_λ(t), λ ∈ m)^⊤. For k ∈ N, S_k denotes the linear subspace of R^{k×k} composed of symmetric matrices. For G ∈ R^{n×|m|}, S(G) is the linear subspace of R^{n×n} defined by

S(G) = {GΨG^⊤ : Ψ ∈ S_{|m|}}.

Let S_N(G) be the corresponding linear subspace of R^{nN×n} (N stacked copies of an element of S(G)), and let V_N(G) be the linear subspace of R^{n²N} of their vectorizations. All these spaces are regarded as Euclidean spaces with the scalar product associated to the Frobenius matrix norm.

Model selection approach for covariance estimation
The approach that we develop to estimate the covariance function σ is based on two main ingredients: first, we consider a functional expansion X̃ to approximate the underlying process X and take the covariance of X̃ as an approximation of the true covariance σ.
For this, let m ∈ M and consider an approximation of the process X of the following form:

X̃(t) = ∑_{λ∈m} a_λ g_λ(t),    (2.1)

where the a_λ are suitable random coefficients. For instance, if X takes its values in L²(T) (the space of square integrable real-valued functions on T) and if (g_λ)_{λ∈M} are orthonormal functions in L²(T), then one can take

a_λ = ∫_T X(t) g_λ(t) dt.

Several bases can thus be considered, such as a polynomial basis on R^d, or a Fourier expansion on a rectangle T ⊂ R^d (i.e. g_λ(t) = e^{i2π⟨ω_λ,t⟩}, using a regular grid of discrete frequencies ω_λ ∈ R^d, λ ∈ m, that do not depend on t_1, . . . , t_n). One can also use, as in [9], tensor products of B-splines on a rectangle T ⊂ R^d, with a regular grid of nodes in R^d not depending on t_1, . . . , t_n, or a standard wavelet basis on R^d, depending on a regular grid of locations in R^d and discrete scales in R^+. Another class of natural expansions is provided by the Karhunen-Loève expansion of the process X (see [1] for more references).
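As an illustration of (2.1), the sketch below (all choices hypothetical: a cosine basis on [0, 1], a synthetic sample path, and a trapezoid rule for the integrals) computes the coefficients a_λ = ∫_T X(t) g_λ(t) dt and the truncated approximation X̃:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)  # fine grid on T = [0, 1] to approximate integrals

def g(lam, t):
    # Orthonormal cosine basis on [0, 1]: g_0 = 1, g_lam = sqrt(2) cos(pi lam t).
    return np.ones_like(t) if lam == 0 else np.sqrt(2) * np.cos(np.pi * lam * t)

def integrate(f, t):
    # Trapezoid rule for the integral of f over the grid t.
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

# A synthetic sample path standing in for X: a random combination of the basis.
X = sum(rng.normal() / (k + 1) * g(k, t) for k in range(6))

# Coefficients a_lam = int_T X(t) g_lam(t) dt and the truncation (2.1).
m = 6
a = np.array([integrate(X * g(lam, t), t) for lam in range(m)])
X_tilde = sum(a[lam] * g(lam, t) for lam in range(m))
```

Since this sample path lies in the span of the first six basis functions, X̃ reproduces X up to quadrature error.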
Therefore, it is natural to consider the covariance function ρ of X̃ as an approximation of σ. The covariance ρ can be written as

ρ(s, t) = G_s^⊤ Ψ G_t,    (2.2)

where, after reindexing the functions if necessary, G_t = (g_λ(t), λ ∈ m)^⊤ and Ψ = E(aa^⊤), with a = (a_λ, λ ∈ m)^⊤. Hence we are led to look for an estimate σ̂ of σ in the class of functions of the form (2.2), with Ψ ∈ R^{|m|×|m|} some symmetric matrix. Note that the choice of the function expansion in (2.1), and in particular the choice of the subset of indices m, will be crucial for the approximation properties of the covariance function ρ. This estimation procedure has several advantages: it will be shown that an appropriate choice of loss function leads to the construction of a symmetric n.n.d. matrix Ψ̂ (see Proposition 3.1); the resulting estimate is thus a covariance function and can be plugged into other procedures which require working with a covariance function. We also point out that the large number of existing approaches for function approximation of the type (2.1) (such as those based on Fourier, wavelet, kernel, spline or radial basis functions) provides great flexibility to the model (2.2). Secondly, we use the Frobenius matrix norm to quantify the risk of the covariance matrix estimators. Recall that Σ = (σ(t_j, t_k))_{1≤j,k≤n} is the true covariance matrix, while Γ = (ρ(t_j, t_k))_{1≤j,k≤n} denotes the covariance matrix of the approximating process X̃ at the observation points; hence Γ = GΨG^⊤. Comparing the covariance function ρ with the true one σ over the design points t_j amounts to quantifying the deviation of Γ from Σ. For this, consider the following loss function

L(Ψ) = E ‖xx^⊤ − GΨG^⊤‖²,

where x = (X(t_1), . . . , X(t_n))^⊤ and ‖·‖ is the Frobenius matrix norm. Note that

L(Ψ) = ‖Σ − GΨG^⊤‖² + C,

where the constant C does not depend on Ψ. The Frobenius matrix norm provides a meaningful metric for comparing covariance matrices, widely used in multivariate analysis, in particular in the theory of principal components analysis.
See also [5], [22] and references therein for other applications of this loss function.
To the loss L corresponds the following empirical contrast function L_N, which will be the fitting criterion we try to minimize:

L_N(Ψ) = (1/N) ∑_{i=1}^N ‖x_i x_i^⊤ − GΨG^⊤‖².

We point out that this contrast is exactly the sum of squared residuals corresponding to the matrix linear regression model

x_i x_i^⊤ = GΨG^⊤ + U_i,  i = 1, . . . , N.    (2.4)

This remark provides a natural framework for studying the covariance estimation problem as a matrix regression model. Note also that the set of matrices GΨG^⊤ is a linear subspace of R^{n×n} when Ψ ranges over the space of symmetric matrices S_{|m|}. To summarize our approach, we propose the following two-step estimation procedure: in a first step, for a given design matrix G, define

Ψ̂ ∈ argmin_{Ψ∈S_{|m|}} L_N(Ψ)

and take Σ̂ = G Ψ̂ G^⊤ as an estimator of Σ. Note that Ψ̂ will be shown to be an n.n.d. matrix (see Proposition 3.1), and thus Σ̂ is also an n.n.d. matrix. Since the minimization of L_N(Ψ) with respect to Ψ is done over the linear space of symmetric matrices S_{|m|}, it can be transformed into a classical least squares linear problem, and the computation of Ψ̂ is therefore quite simple. For a given design matrix G, we construct an estimator of Γ = GΨG^⊤ which will be close to Σ = V(x_1) as soon as X̃ is a sharp approximation of X. Hence the role of G, and thus the choice of the subset of indices m, is crucial, since it determines the behavior of the estimator. In a second step, we therefore aim at selecting the best design matrix G = G_m among a collection of candidates {G_m, m ∈ M}. For this, methods and results from the theory of model selection in linear regression can be applied to the present context. In particular, the results in [2], [7] or [16, 17] will be useful in dealing with model selection for the framework (2.4). Note that only assumptions on moments, not on specific distributions of the data, are involved in the estimation procedure.
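To illustrate that the minimization of L_N over symmetric Ψ has a closed form, the sketch below (hypothetical polynomial design and simulated Gaussian data) checks that Ψ̂ = (G^⊤G)^{-1} G^⊤ S G (G^⊤G)^{-1}, with S the uncentered sample covariance, cannot be improved by a symmetric perturbation:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, m = 10, 100, 4
t = np.linspace(0.0, 1.0, n)
G = np.column_stack([t ** k for k in range(m)])      # a polynomial design matrix
X = rng.multivariate_normal(np.zeros(n), np.eye(n), size=N)

S = X.T @ X / N                                      # sample covariance (mean not subtracted)

def L_N(Psi):
    # Empirical contrast: (1/N) sum_i || x_i x_i^T - G Psi G^T ||_F^2
    return sum(np.linalg.norm(np.outer(x, x) - G @ Psi @ G.T, "fro") ** 2 for x in X) / N

# Closed-form minimizer over symmetric Psi (assuming G^T G invertible):
H = np.linalg.inv(G.T @ G) @ G.T
Psi_hat = H @ S @ H.T

# Any symmetric perturbation increases the contrast.
E = rng.normal(size=(m, m)); E = E + E.T
assert L_N(Psi_hat + 0.1 * E) >= L_N(Psi_hat)
```

The cross term vanishes because G^⊤(S − ΠSΠ)G = 0, with Π the projector onto the range of G, so the perturbed contrast exceeds L_N(Ψ̂) by exactly ‖G(0.1E)G^⊤‖².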
Remark 2.1. We consider here a least squares estimate of the covariance. Note that suitable regularization terms or constraints could also be incorporated into the minimization of L_N(Ψ) in order to impose desired properties on the resulting estimator, such as smoothness or sparsity, as in [15].

Oracle inequality for covariance estimation
The first part of this section describes the properties of the least squares estimator Σ̂ = G Ψ̂ G^⊤, while the second part builds a selection procedure to automatically pick the best estimate among a collection of candidates.

Least squares covariance estimation
Given some fixed n × |m| design matrix G associated to a finite family of |m| basis functions, the least squares covariance estimator of Σ is defined by

Σ̂ = G Ψ̂ G^⊤,  with  Ψ̂ ∈ argmin_{Ψ∈S_{|m|}} L_N(Ψ).    (3.1)

The corresponding estimator of the covariance function σ is

σ̂(s, t) = G_s^⊤ Ψ̂ G_t.

Proposition 3.1 below shows that the minimization over Ψ ∈ S_{|m|} of ∑_{i=1}^N ‖Y_i − GΨG^⊤‖², for matrices Y_1, . . . , Y_N ∈ R^{n×n}, admits a minimizer of the form

Ψ̂ = (G^⊤G)^− G^⊤ Ȳ G (G^⊤G)^−,

where (G^⊤G)^− is any generalized inverse of G^⊤G (see [10] for a general definition) and Ȳ = (1/N) ∑_{i=1}^N Y_i. In particular, if Y_1, . . . , Y_N ∈ S_n (i.e., if they are symmetric matrices), then any minimizer has this form. If Y_1, . . . , Y_N are n.n.d., then these matrices Ψ̂ are n.n.d.
If we assume that (G^⊤G)^{-1} exists, then Proposition 3.1 shows that we retrieve the expression (1.3) for Ψ̂ that was derived from least squares estimation in model (1.1).
Then the least squares covariance estimate defined by (3.1) is given by the n.n.d. matrix

Σ̂ = Π S Π,  where  Π = G (G^⊤G)^− G^⊤

is the orthogonal projector onto the range of G. In other words, Σ̂ is the orthogonal projection of the sample covariance S onto the linear space S(G). The proof of this theorem is a direct application of Proposition 3.1. Hence, for a given design matrix G, the least squares estimator Σ̂ = Σ̂(G) is well defined and has the structure of a covariance matrix. It remains to study how to automatically pick the estimate when dealing with a collection of design matrices coming from several approximation choices for the random process X.
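A small numerical check of these projection properties (simulated data and a hypothetical cosine design): Σ̂ = Π S Π with Π idempotent, and Σ̂ is n.n.d.:

```python
import numpy as np

rng = np.random.default_rng(3)
n, N, m = 12, 80, 5
t = np.linspace(0.0, 1.0, n)
G = np.column_stack([np.cos(np.pi * k * t) for k in range(m)])
X = rng.normal(size=(N, n))
S = X.T @ X / N                               # uncentered sample covariance

Pi = G @ np.linalg.pinv(G.T @ G) @ G.T        # orthogonal projector onto the range of G
Sigma_hat = Pi @ S @ Pi                       # least squares covariance estimate

assert np.allclose(Pi @ Pi, Pi)               # Pi is idempotent
assert np.all(np.linalg.eigvalsh(Sigma_hat) >= -1e-10)   # n.n.d., hence a covariance matrix
```

Using the pseudo-inverse makes the sketch valid even when G^⊤G is singular, matching the generalized-inverse formulation above.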

Main result
Consider a collection of indices m ∈ M with sizes |m|. Let {G_m : m ∈ M} be a finite family of design matrices G_m ∈ R^{n×|m|}, and let Σ̂_m = Σ̂(G_m), m ∈ M, be the corresponding least squares covariance estimators. The problem of interest is to select the best of these estimators in the sense of minimal quadratic risk. The main theorem of this section provides a non-asymptotic bound for the risk of a penalized strategy for this problem. For all m ∈ M, let D_m denote the dimension of the model m; we assume that D_m ≥ 1 for all m ∈ M. The estimation error for a given model m ∈ M is measured by the quadratic risk E‖Σ − Σ̂_m‖². Given θ > 0, define the penalized covariance estimator Σ̃ = Σ̂_m̂, where m̂ minimizes the penalized criterion L_N(Ψ̂_m) + pen(m) over m ∈ M. For the proof of this result, we first restate the theorem in a vectorized form, which turns out to be a k-variate extension of results in [2] (which are covered when k = 1), stated in Section 4.1. Its proof relies on model selection techniques and a concentration tool stated in Section 4.2.
Remark 3.4. Note that the penalty depends on the quantity δ_m², which is unknown in practice. Indeed, the penalty relies on Φ = V(vec(x_1 x_1^⊤)), which reflects the correlation structure of the data. In the original paper by Baraud [3], an estimator of the variance is proposed to overcome this issue. However, the consistency proof relies on a concentration inequality which turns out to be a χ²-type inequality. Extending this inequality to our case would require constructing concentration bounds for the matrices xx^⊤, involving Wishart distributions. Some results exist in this framework [21], but adapting this kind of construction to our case is a hard task which falls beyond the scope of this paper.
However, we point out that for practical purposes, when N is large enough, this quantity can be consistently estimated using the empirical version of Φ, since the x_i, i = 1, . . . , N, are i.i.d. observed random variables; it is given by

Φ̂ = (1/N) ∑_{i=1}^N vec(x_i x_i^⊤) vec(x_i x_i^⊤)^⊤ − vec(S) vec(S)^⊤.    (3.9)

Hence, there is a practical way of computing the penalty. The influence of the use of such an estimated penalty is studied in Section 5. Note also that if τ > 0 denotes any bound on δ_m² such that δ_m² ≤ τ for all m, then Theorem 3.3 remains true with δ_m² replaced by τ in all the statements.
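A sketch of the plug-in estimation of Φ (dimensions and the Gaussian data are hypothetical; only the empirical fourth-moment matrix is computed, not the full penalty of Theorem 3.3):

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 6, 500
X = rng.multivariate_normal(np.zeros(n), np.eye(n), size=N)

# Empirical version of Phi = V(vec(x_1 x_1^T)): the sample covariance of the
# vectorized rank-one matrices x_i x_i^T, computable because the x_i are i.i.d.
V = np.array([np.outer(x, x).reshape(-1, order="F") for x in X])   # N x n^2
Phi_hat = np.cov(V, rowvar=False, bias=True)
```

Being a sample covariance matrix, Φ̂ is automatically symmetric and non-negative definite.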
We have obtained in Theorem 3.3 an oracle inequality, since, using (3.6) and (3.8), one immediately sees that Σ̃ has the same quadratic risk as the "oracle" estimator except for an additive term of order O(1/N) and a constant factor. Hence the selection procedure is optimal in the sense that it behaves as if the true model were at hand. To describe the result in terms of rates of convergence, we have to pay special attention to the bias terms ‖Σ − Π_m Σ Π_m‖². In a very general framework, it is difficult to evaluate such approximation terms. If the process has bounded second moments, i.e. if for all j = 1, . . . , n we have E(X²(t_j)) ≤ C, then the bias can be bounded by the quadratic approximation error of the expansion of the process. Since n is fixed and the asymptotics are taken with respect to N, the number of replications of the process, the rate of convergence relies on this quadratic error.
To compute the rate of convergence, this approximation error must be controlled. Consider, for instance, the Karhunen-Loève expansion of the process,

X(t) = ∑_λ γ_λ Z_λ g_λ(t),

where γ_λ² is the eigenvalue corresponding to the eigenfunction g_λ of the operator (Kf)(t) = ∫_a^b σ(s, t) f(s) ds. If X(t) is a Gaussian process, then the random variables Z_λ are Gaussian and stochastically independent. Hence a natural approximation of X(t) is given by the truncated expansion X̃(t) = ∑_{λ∈m} γ_λ Z_λ g_λ(t). Assume that the γ_λ's have a polynomial decay of rate α > 0, namely γ_λ ∼ λ^{−α}; then we get an approximation error of order O((|m| + 1)^{−2α}). Hence (under appropriate conditions on the design points t_1, . . . , t_n) the quadratic risk is of order N^{−2α/(2α+1)} as soon as |m| ∼ N^{1/(2α+1)} belongs to the collection of models M_N. In another framework, if we consider a spline expansion, the rates of convergence for the approximation given in [9] are of the same order.
Hence we have obtained a model selection procedure which enables us to recover the best covariance model among a given collection. This method works without strong assumptions on the process (in particular, stationarity is not assumed) but at the expense of requiring i.i.d. observations of the process at the same points. We point out that this study requires a large number of replications N with respect to the number of observation points n. Moreover, since a practical use of this methodology requires computing an estimator of the penalty, which relies on the estimation of a 4th-order moment, the need for a large amount of data is crucial, even if the simulations remain quite satisfactory for moderate sample sizes. This setting is quite common in epidemiology, where a phenomenon is studied at a large number of locations but only during a short time. Hence our method is not designed to tackle the problem of covariance estimation in the high dimensional case n ≫ N. This topic has received growing attention over the past years, and we refer to [4] and references therein for a survey.

Oracle inequality for multidimensional regression model
Recall that we consider the following model:

x_i x_i^⊤ = GΨG^⊤ + U_i,  i = 1, . . . , N.    (4.1)

The key point is that this model can be rewritten in vectorized form as

y_i = Aβ + u_i,  i = 1, . . . , N,

where y_i = vec(x_i x_i^⊤), A = (G ⊗ G) D_{|m|} and β = vech(Ψ). Note that this model is equivalent to the regression model

y = (1_N ⊗ A) β + u,

where y = ((y_1)^⊤, . . . , (y_N)^⊤)^⊤ ∈ R^{Nn²} is the data vector, 1_N ⊗ A is a known fixed matrix, β = vech(Ψ) is an unknown vector parameter as before, and u = ((u_1)^⊤, . . . , (u_N)^⊤)^⊤ ∈ R^{Nn²} is such that E(u) = 0. It is worth noting that this regression model has several peculiarities in comparison with standard ones.
i) The error u has a specific correlation structure, namely V(u) = I_N ⊗ Φ with Φ = V(vec(x_1 x_1^⊤)). ii) In contrast with standard multivariate models, each coordinate of y depends on all the coordinates of β.
iii) For any estimator Σ̂ = G Ψ̂ G^⊤ that is a linear function of the sample covariance S of the data x_1, . . . , x_N (and so, in particular, for the estimator minimizing L_N), it is possible to construct an unbiased estimator of its quadratic risk E‖Σ̂ − Σ‖².
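The vectorization step rests on the identity vec(GΨG^⊤) = (G ⊗ G) vec(Ψ), a special case of vec(AXB) = (B^⊤ ⊗ A) vec(X), which the following sketch verifies numerically for random G and symmetric Ψ:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 5, 3
G = rng.normal(size=(n, m))
Psi = rng.normal(size=(m, m)); Psi = Psi + Psi.T     # a symmetric m x m matrix

# vec(G Psi G^T) = (G kron G) vec(Psi): the vectorized model is linear in vec(Psi).
lhs = (G @ Psi @ G.T).reshape(-1, order="F")         # column-major vec
rhs = np.kron(G, G) @ Psi.reshape(-1, order="F")
assert np.allclose(lhs, rhs)
```

Composing with the duplication matrix D_{|m|} then turns vec(Ψ) into the shorter parameter vech(Ψ), giving the design matrix A = (G ⊗ G) D_{|m|}.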
More generally, assume we observe random vectors y_i ∈ R^k, i = 1, . . . , N, with k ≥ 1 (k = n² in the particular case of model (4.1)), such that

y_i = f_i + ε_i,  i = 1, . . . , N,

where the f_i ∈ R^k are nonrandom and ε_1, . . . , ε_N are i.i.d. random vectors in R^k with E(ε_1) = 0 and V(ε_1) = Φ. For the sake of simplicity, we identify a function g : X → R^k with the vector (g(x_1), . . . , g(x_N))^⊤ ∈ R^{Nk}, and we denote by ‖·‖_N the associated empirical norm. Proposition 4.1. Let q > 0 be given such that there exists p > 2(1 + q) satisfying E‖ε_1‖^p < ∞. Then, for some constants K(θ) > 1 and c(θ, p, q) > 0, a non-asymptotic risk bound holds, involving a quantity ∆_p. This proposition is equivalent to Theorem 3.3 using the vectorized version of the model (4.3), and turns out to be an extension of Theorem 3.1 in [2] to the multivariate case. In a similar way, the following result constitutes a natural extension of Corollary 3.1 in [2]. It is also closely related to the recent work in [12].
where ∆ p was defined in Proposition 4.1.
Under regularity assumptions on the function f, depending on a smoothness parameter s, the bias term is of order D_m^{−2s}. Hence, for q = 1 we obtain the usual rate of convergence N^{−2s/(2s+1)} for the quadratic risk as soon as the optimal choice D_m = N^{1/(2s+1)} belongs to the collection of models, yielding the optimal rate of convergence for the penalized estimator.

Concentration bound for random processes
Recall that k ≥ 1. The following result is a k-variate extension of results in [2] (which are covered when k = 1). Its proof is deferred to the Appendix. For all p ≥ 2 such that E‖ε_1‖^p < ∞, it states a deviation inequality valid for all x > 0, in which the constant C(p) depends only on p.

Numerical examples
In this section we illustrate the practical behaviour of the covariance estimator by model selection proposed in this paper. In particular, we study its performance when computing the criterion using the estimated penalty described in Section 3.2. The programs for our simulations were implemented using MATLAB, and the code is available on request.
We consider i.i.d. copies X_1, . . . , X_N of different Gaussian processes X on T = [0, 1] with values in R, observed at fixed equispaced points t_1, . . . , t_n in [0, 1] for fixed n, generated according to

X(t) = ∑_{λ=1}^{m*} a_λ g*_λ(t),    (5.1)

where m* denotes the true model dimension, g*_λ (λ = 1, . . . , m*) are orthonormal functions on [0, 1], and the coefficients a_1, . . . , a_{m*} are independent Gaussian variables with zero mean. Note that E(X(t_j)) = 0 for all j = 1, . . . , n, and that the covariance function of the process X at the points t_1, . . . , t_n is given by

σ(t_j, t_k) = ∑_{λ=1}^{m*} V(a_λ) g*_λ(t_j) g*_λ(t_k)

for all 1 ≤ j, k ≤ n. The corresponding covariance matrix is Σ = (σ(t_j, t_k))_{1≤j,k≤n}. We write X = (X_i(t_j)) for the n × N data matrix; the columns of X are the vectors x_1, . . . , x_N. The covariance estimation by model selection is computed as follows. Let (g_λ)_λ be an orthonormal basis on [0, 1] (which may differ from the original basis functions g*_λ). For a given M > 0, candidate models are chosen among the collection M = {{1, . . . , m} : m = 1, . . . , M}. To each set indexed by m we associate the matrix (model) G_m ∈ R^{n×m}, with entries (g_λ(t_j))_{1≤j≤n,1≤λ≤m}, which corresponds to a number m of basis functions g_1, . . . , g_m in the expansion approximating the process X. We aim at choosing a good model among the family {G_m : m ∈ M} in the sense of achieving the minimum of the quadratic risk

R(m) = E‖Σ − Σ̂_m‖².

The ideal model m_0 is the minimizer of the risk function m → R(m).
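The simulation design (5.1) can be sketched as follows (Python rather than the authors' MATLAB; a cosine basis stands in for the Fourier functions, and all parameter values mirror Example 1's choices n = m* = 35, N = 50, V(a_λ) = 1):

```python
import numpy as np

rng = np.random.default_rng(6)
n, N, m_star = 35, 50, 35
t = np.linspace(0.0, 1.0, n)

def g(lam, t):
    # Orthonormal cosine functions on [0, 1] (one possible choice of g*_lambda).
    return np.ones_like(t) if lam == 0 else np.sqrt(2) * np.cos(np.pi * lam * t)

Gstar = np.column_stack([g(lam, t) for lam in range(m_star)])  # n x m* basis matrix
var = np.ones(m_star)                                          # V(a_lambda) = 1

# X_i(t_j) = sum_lambda a_{i,lambda} g*_lambda(t_j), a_{i,lambda} ~ N(0, V(a_lambda)).
A = rng.normal(size=(N, m_star)) * np.sqrt(var)
X = A @ Gstar.T                                                # N x n data matrix

Sigma = Gstar @ np.diag(var) @ Gstar.T                         # true covariance at the t_j
```

A geometric decay of `var`, as in Example 2, is obtained by replacing the line defining `var` with `var = 0.95 ** np.arange(1, m_star + 1)`.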
Note that for all m = 1, . . . , M, Σ̂_m is the least squares covariance estimator of Σ corresponding to the model m as in Theorem 3.2, and the constant C does not depend on m. Thus L_N and PC can be regarded as the empirical contrast function and the penalized criterion, respectively, that will be used for visual presentation of the results. For each model m = 1, . . . , M we evaluate the penalized criterion (5.3) with θ = 1 and expect that the minimum of PC is attained at a value m_{δ²} close to m_0. The quantity δ², as pointed out in Section 3.2, is unknown in practice but can be consistently estimated by (3.9), yielding the plug-in estimate δ̂². We study the influence of using δ̂² = (δ̂_m²)_{1≤m≤M} rather than δ² = (δ_m²)_{1≤m≤M} on the model selection procedure. For this, we first compute an approximation R̂ of the risk R, and then the estimate P̂C of the penalized criterion PC. We denote by m_{δ̂²} the point at which the penalized criterion estimate P̂C attains its minimum value, i.e., the model selected by minimizing P̂C.
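The selection loop can be sketched as follows (the penalty below is a schematic stand-in proportional to the model dimension, not the exact pen(m) of Theorem 3.3, which involves the estimated δ_m² and the tuning parameter θ; the data and basis are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
n, N, M = 20, 60, 12
t = np.linspace(0.0, 1.0, n)

def basis(m):
    # Nested cosine design matrices G_1 subset ... subset G_M.
    return np.column_stack([np.cos(np.pi * k * t) for k in range(m)])

# Data generated from a small true model (3 basis functions) plus noise.
A = rng.normal(size=(N, 3))
X = A @ basis(3).T + 0.1 * rng.normal(size=(N, n))

def contrast(m):
    # L_N evaluated at the least squares estimator for model m.
    G = basis(m)
    Pi = G @ np.linalg.pinv(G.T @ G) @ G.T
    Sigma_m = Pi @ (X.T @ X / N) @ Pi
    return sum(np.linalg.norm(np.outer(x, x) - Sigma_m, "fro") ** 2 for x in X) / N

pen = lambda m: 5.0 * m / N            # schematic dimension-based penalty
PC = [contrast(m) + pen(m) for m in range(1, M + 1)]
m_hat = 1 + int(np.argmin(PC))         # selected model
```

Because the models are nested, the empirical contrast alone is non-increasing in m; only the penalty makes the criterion selective.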
In the following examples we plot the empirical contrast function L_N (m = 1, . . . , M), the risk function R, the approximate risk function R̂, the penalized criterion PC and the penalized criterion estimate P̂C. We also show figures of the true covariance function σ(t, s) for s, t ∈ [0, 1] and the penalized covariance estimate based on P̂C, i.e., σ̂(t, s) = G_{m̂,t}^⊤ Ψ̂_m̂ G_{m̂,s}, where m̂ = m_{δ̂²}, Ψ̂_m̂ is obtained as in Theorem 3.2 and G_{m,t} = (g_1(t), . . . , g_m(t))^⊤ ∈ R^m for all t ∈ [0, 1]. Furthermore, we focus attention on finite sample settings, i.e., those in which the number of repetitions N is not notably large (in comparison with the number n of design points t_j).

Example 1. Let g*_1, . . . , g*_{m*} be the Fourier basis functions (5.4). We simulate a sample of size N = 50 according to (5.1) with n = m* = 35 and V(a_λ) = 1 for all λ = 1, . . . , m*. Set M = 31 and consider the models obtained by choosing m Fourier basis functions. In this setting, it can be shown that the minimum of the quadratic risk R is attained at m_0 = N/2 − 1, which for N = 50 gives m_0 = 24. Figures 1a, 1b, 1c and 1d present the results obtained for a simulated sample. Figure 1a shows that the approximate risk function R̂ reproduces the shape of the risk function R.

[Figure 1c: penalized criterion PC and its estimate P̂C. Figure 1d: true covariance function σ and estimated covariance function σ̂.]

Both minimization of the penalized criterion PC and of its estimate P̂C lead to selecting the best model, i.e., m_{δ²} = 24 and m_{δ̂²} = 24 (see Figure 1c). This also demonstrates that replacing δ² by δ̂² in the penalized criterion does not notably deteriorate the performance of the model selection procedure in this example. Figure 1d shows that, in spite of the small sample size N = 50, a quite nice approximation to the true covariance function σ is achieved by its penalized covariance estimate σ̂ based on P̂C. The selected model m_{δ̂²} is a random variable that depends on the observed sample X through the penalized criterion estimate P̂C. Figure 1e illustrates this variability by plotting the curves P̂C corresponding to several simulated samples. It can be observed that the selected model m_{δ̂²} is close to the ideal model m_0, and that the risk R evaluated at the selected model is much smaller than that of the largest model M = 31, which would be chosen by using the empirical contrast function.
Example 2. Using the Fourier basis (5.4), we simulate a sample of size N = 50 according to (5.1) with n = m* = 35 as in the previous example, but now we set a geometric decay of the variances (or eigenvalues of the covariance operator of the process X) V(a_λ), λ = 1, . . . , m*; namely, V(a_1) = r and V(a_{λ+1}) = V(a_λ) r for λ = 1, 2, . . ., where r = 0.95. We consider a collection of models up to M = 34, with m Fourier basis functions. In this setting it can be proved that the minimum of the risk R is attained at m_0 = log(2/[(1 − r)(N − 2) + 2]) / log(r), which yields m_0 = 16 for the actual values N = 50 and r = 0.95. The results obtained from a simulated sample are shown in Figures 2a, 2b, 2c and 2d. It can be noted that the empirical contrast function is strictly decreasing without any "elbow" effect, while the model selected by both the penalized criterion and the penalized criterion estimate is m_{δ²} = m_{δ̂²} = 16, which is the best model m_0 according to the risk R.

[Figures 2c-2d and 3c-3d: penalized criterion PC and its estimate P̂C; true covariance function σ and estimated covariance function σ̂.]

Example 4. In this example we use different basis functions for generating the data and for estimating the covariance. Specifically, the process is generated using a wavelet basis, and the collection of models considered in the model selection procedure corresponds to different numbers of Fourier basis functions, up to M = 31. We simulate a sample of size N = 50 according to (5.1) using the Symmlet 8 wavelet basis, with n = m* = 32. We set the variances of the random coefficients a_λ with a geometric decay as in Example 2, i.e., V(a_1) = r and V(a_{λ+1}) = V(a_λ) r, where r = 0.95. The results of one simulation are displayed in Figures 4a, 4b, 4c and 4d. Here it can also be observed that the penalized estimation procedure performs well even when using an estimate of the penalized criterion, leading to the choice m_{δ²} = m_{δ̂²} = 16 = m_0.
Summarizing the results of these simulated examples, we may conclude that for moderate sample sizes N: a) The empirical contrast function L_N is of no use for selecting a model that attains a low risk. It is a strictly decreasing function whose minimization simply leads to choosing the largest model M within the set of candidate models m = 1, . . . , M. Moreover, the curve L_N frequently lacks an "elbow" that could guide researchers toward a suitable model by exploratory analysis. b) The covariance function estimator by model selection introduced in this paper performs well in a variety of examples, both when based on the penalized criterion PC and when using the estimated penalty P̂C.

Proofs of preliminary results
Proof of Proposition 3.1 Proof. a) The minimization problem is equivalent to minimizing a quadratic function of β, where y = vec Y. Minimizing this quadratic function with respect to β ∈ R^{|m|(|m|+1)/2} is equivalent to solving the normal equation. Finally, it can be verified that Ψ given by (3.3) satisfies this equation, as a consequence of the identity that holds for such Ψ. b) This follows straightforwardly from part a).
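For intuition, part a) can be illustrated numerically. The sketch below is an assumption-laden reading of the result, not the paper's formula (3.3) verbatim: it supposes the least-squares fit of the empirical covariance S by matrices of the form G Ψ G^⊤ (G the n × |m| design matrix of basis evaluations, full column rank), whose normal equations admit the sandwich solution Ψ = (G^⊤G)^{-1} G^⊤ S G (G^⊤G)^{-1}. The names G, S, Psi are ours.

```python
import numpy as np

# Illustrative sketch: minimize ||S - G Psi G^T||_F over symmetric Psi.
# When G has full column rank, the minimizer is the "sandwich" expression
# (G^T G)^{-1} G^T S G (G^T G)^{-1}, and G Psi G^T = P S P with P the
# orthogonal projector onto the column span of G.
rng = np.random.default_rng(1)
n, m = 20, 5
G = rng.normal(size=(n, m))                  # basis functions at t_1, ..., t_n
Y = rng.normal(size=(40, n))                 # 40 centered observed curves (toy data)
S = Y.T @ Y / Y.shape[0]                     # empirical covariance matrix

A = np.linalg.solve(G.T @ G, G.T)            # A = (G^T G)^{-1} G^T
Psi = A @ S @ A.T                            # sandwich least-squares solution

P = G @ A                                    # orthogonal projector onto span(G)
# Sanity checks: fitted covariance is the double projection of S, and the
# solution inherits the symmetry of S.
assert np.allclose(G @ Psi @ G.T, P @ S @ P)
assert np.allclose(Psi, Psi.T)
```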

Proofs of main results
Proof of Proposition 4.1 Proof. The proof follows the guidelines of the proof in [2]. More generally, we will prove that for any η > 0 and any sequence of positive numbers L_m, if the penalty function pen : M → R_+ is chosen to satisfy: then for each x > 0 and p ≥ 2 where we have set To obtain (4.4), take η = θ/2 = L_m. Since for each m ∈ M and all q > 0, we derive from (6.4) and (6.3) that, for all p > 2(1 + q), using that P(H(f) > u) ≤ 1. Indeed, for m ∈ M such that D_m ≥ 1, Inequality (6.5) enables us to conclude that (4.4) holds, assuming (6.3).
We now turn to the proof of (6.3). Recall that we identify a function g : X → R^k with the vector (g(x_1), ..., g(x_N))^⊤ ∈ R^{Nk}, and we denote by ⟨a, b⟩_N the associated inner product. For each m ∈ M we denote by P_m the orthogonal projector onto the linear space {(g(x_1), ..., g(x_N))^⊤ : g ∈ L_m} ⊂ R^{Nk}. This linear space is also denoted by L_m. From now on, the subscript m denotes any minimizer of the function. For any g ∈ R^{Nk} we define the least-squares loss function. Using the definition of γ_N we have that for all g ∈ R^{Nk}, Then we derive that and therefore By the definition of f, we know that for all m ∈ M and for all g ∈ L_m. Then So we get from (6.6) and (6.7) that In the following we set, for each m′ ∈ M, and 0 otherwise.
For any x > 0, We first bound P_{2,m′}(x). Let t be some positive number and consider the terms ⟨u_{im′}, ε_i⟩, with the ε_i i.i.d. and with zero mean; then by Rosenthal's inequality we know that, for some constant c(p) that depends on p only, Using also that, by definition, ‖u_{m′}‖² (6.20) We deduce from (6.18), (6.19) and (6.20) that Then, for some constant c′(p) that depends only on p, By this last inequality and (6.17) we get that Let υ be some positive number, depending on η only, to be chosen later. We take t such that Nt² = min(υ, α/3)(L_{m′}D_{m′} + x)δ²_{m′} and set Np²(m′) = υL_{m′}D_{m′}δ²_{m′}. We get The last inequality holds by (6.21). We now bound P_{1,m′}(x) for those m′ ∈ M such that D_{m′} ≥ 1. Using our version of Corollary 5.1 in Baraud with A = P_{m′}, Tr(A) = D_{m′} and ρ(A) = 1, we obtain from (4.5) that, for any positive x_{m′}, a tail bound on N‖P_{m′}ε‖ holds, where A satisfies A = A^⊤A. Then Now take U_i = M_i ε, with ε ∈ R^{Nk}. Then, for each positive number t and p > 0, P(ζ(ε) ≥ E(ζ(ε)) + t) ≤ P(|ζ(ε) − E(ζ(ε))| > t) ≤ t^{−p} E(|ζ(ε) − E(ζ(ε))|^p) by Markov's inequality.
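The final step is a plain application of Markov's inequality to the p-th centered moment. As a toy illustration (entirely ours, with arbitrary data), the bound P(|ζ − Eζ| > t) ≤ t^{−p} E|ζ − Eζ|^p holds exactly for the empirical distribution of any sample, since 1{|z| > t} ≤ |z|^p / t^p pointwise:

```python
import numpy as np

# Toy check of the Markov step: for the empirical distribution of a sample z,
# the fraction of points with |z - mean| > t is bounded by
# t^{-p} * mean(|z - mean|^p), for every t > 0 and p > 0.
rng = np.random.default_rng(2)
z = rng.standard_normal(10_000)
mu = z.mean()

for p in (2, 4, 8):
    for t in (0.5, 1.0, 2.0, 3.0):
        tail = np.mean(np.abs(z - mu) > t)             # empirical P(|zeta - E zeta| > t)
        bound = np.mean(np.abs(z - mu) ** p) / t ** p  # t^{-p} E|zeta - E zeta|^p
        assert tail <= bound                           # Markov: holds exactly
```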