Uniform convergence and asymptotic confidence bands for model-assisted estimators of the mean of sampled functional data

When the study variable is functional and storage capacities are limited or transmission costs are high, selecting a small fraction of the observations with survey sampling techniques is an interesting alternative to signal compression techniques, particularly when the goal is the estimation of simple quantities such as means or totals. We extend, in this functional framework, model-assisted estimators based on linear regression models that can take into account auxiliary variables whose totals over the population are known. We first show, under weak hypotheses on the sampling design and the regularity of the trajectories, that the estimator of the mean function and its variance estimator are uniformly consistent. Then, under additional assumptions, we prove a functional central limit theorem and rigorously justify a fast technique, based on simulations of Gaussian processes, which is employed to build asymptotic confidence bands. The accuracy of the variance function estimator is evaluated on a real dataset of sampled electricity consumption curves measured every half hour over a period of one week.


Introduction
Survey sampling techniques, which consist in randomly selecting only a part of the elements of a population, are interesting alternatives to signal compression when one has to deal with very large populations of quantities that evolve along time. With the development of automatic sensors, such very large datasets of temporal data are no longer unusual, and survey sampling techniques offer a good trade-off between the accuracy of the estimators and the size of the analyzed data. Examples can be found in different domains such as internet traffic monitoring (see Callado et al. (2009)) or the estimation of energy consumption measured by individual smart meters. Motivated by the estimation of mean electricity consumption profiles measured every half hour over one week, Cardot and Josserand (2011) introduced Horvitz-Thompson estimators of the mean function and showed, under weak hypotheses on the regularity of the functional trajectories and the sampling design, that one gets uniformly convergent estimators. They also proved a functional central limit theorem, in the space of continuous functions, that can, in part, justify the construction of asymptotic confidence bands. More recently, Cardot et al. (2012b) compared, in terms of precision of the mean estimators of electricity load curves and width of the confidence bands, different sampling approaches that can take auxiliary information into account. One of the conclusions of this empirical study was that very simple strategies based on simple sampling designs (such as simple random sampling without replacement) could be substantially improved if some well-chosen auxiliary information, whose total is known for the whole population, is also taken into account at the estimation stage, with model-assisted estimators.
Important variables for the electricity consumption such as temperature or geographical location were not available for these datasets, so that only one auxiliary variable, the mean past consumption over the previous period, was taken into account. Its correlation with the current consumption is always very high (see Figure 1), so that linear regression models are natural candidates for assisting the Horvitz-Thompson estimator. More generally, one advantage of linear approaches is that they only require knowledge of the totals of the auxiliary variables over the population. More sophisticated nonlinear or nonparametric approaches would require knowing the values of the auxiliary variables for all the elements of the population.
Thus, we focus in this paper on linear relationships between the set of auxiliary variables and the response at each instant t of the current period. The regression coefficients vary in time (see Faraway (1997) or Ramsay and Silverman (2005)), so that the model-assisted estimator can be seen as a direct extension, to a functional or time-varying context, of the generalized regression (GREG) estimators studied in Robinson and Särndal (1983) and Särndal et al. (1992). Note also that, from another point of view, the model-assisted estimator can be obtained using a calibration technique (Deville and Särndal (1992)).
Confidence bands are then built using a simulation technique developed in Faraway (1997), Cuevas et al. (2006) and Degras (2011). We first estimate the covariance function of the mean estimator and then, assuming asymptotic normality, simulate a centered Gaussian process whose covariance function is the one estimated at the previous step. We can, this way, obtain an approximation to the law of the "sup" and deduce confidence bands for the mean trajectory. In a recent work, Cardot et al. (2012a) gave a rigorous mathematical justification of this technique for sampled functional data and Horvitz-Thompson estimators of the mean. The theoretical ingredients required to justify such a procedure are a functional central limit theorem for the mean estimator, in the space of continuous functions equipped with the sup-norm, as well as a uniformly consistent estimator of the variance function. The aim of this paper is to study the asymptotic properties of model-assisted estimators and to show that we obtain, under classical assumptions, a uniformly consistent estimator of the mean as well as of its variance function. One additional difficulty is that, for model-assisted estimators, the variance function cannot be derived exactly and we only have asymptotic approximations. We then deduce that the confidence bands built via simulations asymptotically attain the desired coverage. In Section 2, we introduce notations and suggest a slight modification of the model-assisted estimators which permits control of the variance of the regression coefficient estimator. Under classical assumptions on the sampling design and on the regularity of the trajectories, we state, in Section 3, the uniform convergence of the model-assisted estimators to the mean function.
Under additional assumptions on the design we also prove that we can get a consistent estimator of the covariance function and a functional central limit theorem that can justify rigorously that the confidence bands built with the procedure based on Gaussian process simulations attain asymptotically the desired level of confidence. In Section 4, we assess the precision of the variance estimator on the real dataset consisting of electricity consumption curves studied in Cardot et al. (2012b) and observe that, in our context, the approximation error is negligible compared to the sampling error. A brief discussion about possible extensions and future investigation is proposed in Section 5. All the proofs are gathered in an Appendix.

Notations and estimators
2.1. The Horvitz-Thompson estimator for functional data

Let us consider a finite population U_N = {1, ..., N} of size N, supposed to be known, and suppose that, for each unit k of the population U_N, we can observe a deterministic curve Y_k = (Y_k(t))_{t ∈ [0,T]}. The target is the mean trajectory μ_N(t), t ∈ [0,T], defined as follows:

\mu_N(t) = \frac{1}{N} \sum_{k \in U_N} Y_k(t).

We consider a sample s, with size n, drawn from U_N according to a fixed-size sampling design p_N(s), where p_N(s) is the probability of drawing the sample s. For simplicity of notations, the subscript N is omitted when there is no ambiguity. We suppose that the first and second order inclusion probabilities satisfy π_k = P(k ∈ s) > 0 for all k ∈ U, and π_{kl} = P(k ∈ s, l ∈ s) > 0 for all k, l ∈ U_N, k ≠ l. Without auxiliary information, the population mean curve μ(t) is often estimated by the Horvitz-Thompson estimator, defined as follows:

\hat{\mu}(t) = \frac{1}{N} \sum_{k \in s} \frac{Y_k(t)}{\pi_k} = \frac{1}{N} \sum_{k \in U} \frac{Y_k(t)}{\pi_k} 1_k,

where 1_k is the sample membership indicator, with 1_k = 1 if k ∈ s and 1_k = 0 otherwise. In the sequel, E_p denotes expectation with respect to the sampling design.
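As a concrete illustration, the Horvitz-Thompson estimator above can be computed directly on discretized curves. The following sketch is not from the paper: the toy population of curves, the grid and the SRSWOR inclusion probabilities are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population of N curves observed on a grid of D instants
N, D, n = 1000, 48, 100
t = np.linspace(0.0, 1.0, D)
Y = np.sin(2 * np.pi * t)[None, :] * rng.uniform(0.5, 1.5, size=(N, 1))

# SRSWOR: every unit has inclusion probability pi_k = n / N
pi = np.full(N, n / N)
s = rng.choice(N, size=n, replace=False)

# Horvitz-Thompson estimator of the mean curve:
# mu_hat(t) = (1/N) * sum_{k in s} Y_k(t) / pi_k
mu_hat = (Y[s] / pi[s, None]).sum(axis=0) / N
mu_true = Y.mean(axis=0)

# the sup-norm sampling error is small for this simple design
print(float(np.max(np.abs(mu_hat - mu_true))))
```

With equal inclusion probabilities the estimator reduces to the mean of the sampled curves; the matrix formulation above, however, carries over unchanged to unequal-probability designs.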
The Horvitz-Thompson covariance function of \hat{\mu} between two instants r and t, computed with respect to the sampling design, is defined as follows:

\mathrm{Cov}_p(\hat{\mu}(r), \hat{\mu}(t)) = \frac{1}{N^2} \sum_{k \in U} \sum_{l \in U} \left( \pi_{kl} - \pi_k \pi_l \right) \frac{Y_k(r)}{\pi_k} \frac{Y_l(t)}{\pi_l},

with the convention π_{kk} = π_k. Note that for r = t, we obtain the Horvitz-Thompson variance function.

The mean curve estimator assisted by a functional linear model
Let us suppose now that for each unit k ∈ U we can also observe p real variables, X_1, ..., X_p, and let us denote by x_k = (x_{k1}, ..., x_{kp})' the value of the auxiliary variable vector for each unit k in the population. We introduce an estimator based on a linear regression model that can use these variables in order to improve the accuracy of \hat{\mu}. By analogy with the real case (see e.g. Särndal et al. (1992)), we suppose that the relationship between the functional variable of interest and the auxiliary variables is modeled by the superpopulation model ξ defined as follows:

Y_k(t) = x_k' \beta(t) + \epsilon_{kt},   (4)

where β(t) = (β_1(t), ..., β_p(t))' is the vector of functional regression coefficients and the ε_{kt} are independent (across units) and centered continuous time processes, E_ξ(ε_{kt}) = 0, with covariance function Cov_ξ(ε_{kt}, ε_{kr}) = Γ(t, r), for (t, r) ∈ [0,T] × [0,T]. This model is a direct extension to several variables of the functional linear model proposed by Faraway (1997). If x_k and Y_k were known for all units k ∈ U and if the matrix G = \frac{1}{N} \sum_{k \in U} x_k x_k' were invertible, the vector β(t) could be estimated by the ordinary least squares estimator

\tilde{\beta}(t) = G^{-1} \frac{1}{N} \sum_{k \in U} x_k Y_k(t).

Then, the mean curve μ(t) could be estimated by the generalized difference estimator (see Särndal et al. (1992), Chapter 6) defined as follows for all t ∈ [0,T]:

\hat{\mu}_{\mathrm{diff}}(t) = \frac{1}{N} \sum_{k \in s} \frac{Y_k(t) - \tilde{Y}_k(t)}{\pi_k} + \frac{1}{N} \sum_{k \in U} \tilde{Y}_k(t),   (5)

with \tilde{Y}_k(t) = x_k' \tilde{\beta}(t). In practice, we do not know Y_k except for k ∈ s, and it is not possible to compute \tilde{\beta}(t). An estimator of β(t) is obtained by substituting each total in \tilde{\beta}(t) by its Horvitz-Thompson estimator. Thus, if the matrix \hat{G} = \frac{1}{N} \sum_{k \in s} \frac{x_k x_k'}{\pi_k} is invertible, we define

\hat{\beta}(t) = \hat{G}^{-1} \frac{1}{N} \sum_{k \in s} \frac{x_k Y_k(t)}{\pi_k}.

Remark that the denominator N is used in the expression of \hat{\beta}(t) for asymptotic purposes and need not be estimated. The model-assisted estimator \hat{\mu}_{MA}(t) is then defined by replacing \tilde{\beta}(t) by \hat{\beta}(t) in (5),

\hat{\mu}_{MA}(t) = \frac{1}{N} \sum_{k \in s} \frac{Y_k(t) - \hat{Y}_k(t)}{\pi_k} + \frac{1}{N} \sum_{k \in U} \hat{Y}_k(t),   (6)

where \hat{Y}_k(t) = x_k' \hat{\beta}(t). Since \sum_{k \in U} \hat{Y}_k(t) = \left( \sum_{k \in U} x_k \right)' \hat{\beta}(t), the only information required to build \hat{\mu}_{MA}(t) is x_k and Y_k(t) for all the units k ∈ s, as well as the population totals of the auxiliary variables, \sum_{k \in U} x_k.
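The chain of definitions above (the matrix estimator, the regression coefficient estimator, then the model-assisted estimator of the mean curve) can be sketched numerically. This is a minimal illustration on an invented toy population with equal inclusion probabilities; all array names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population: curves linearly related to a scalar auxiliary variable
N, D, n = 1000, 48, 100
t = np.linspace(0.0, 1.0, D)
x = np.column_stack([np.ones(N), rng.uniform(1.0, 2.0, N)])  # intercept + X
beta = np.vstack([0.2 * np.ones(D), 1.0 + 0.5 * np.sin(2 * np.pi * t)])
Y = x @ beta + 0.05 * rng.standard_normal((N, D))

pi = np.full(N, n / N)                       # SRSWOR inclusion probabilities
s = rng.choice(N, size=n, replace=False)

# G_hat = (1/N) sum_{k in s} x_k x_k' / pi_k
G_hat = (x[s].T * (1.0 / pi[s])) @ x[s] / N
# beta_hat(t) = G_hat^{-1} (1/N) sum_{k in s} x_k Y_k(t) / pi_k
beta_hat = np.linalg.solve(G_hat, (x[s].T * (1.0 / pi[s])) @ Y[s] / N)

# Model-assisted (GREG) estimator:
# mu_MA(t) = (1/N)[sum_{k in s} (Y_k - x_k'beta_hat)/pi_k + sum_{k in U} x_k'beta_hat]
resid = Y[s] - x[s] @ beta_hat
mu_MA = (resid / pi[s, None]).sum(axis=0) / N + (x @ beta_hat).mean(axis=0)
mu_true = Y.mean(axis=0)
```

Because the residuals are small compared to the curves themselves, the sampling error of mu_MA is driven by the residual variability only, which is the source of the variance reduction discussed later.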
Remark 1. If the vector of auxiliary information contains the intercept (constant term), then it can be shown (see Särndal (1980)) that the Horvitz-Thompson estimator of the estimated residuals Y_k(t) − \hat{Y}_k(t) is equal to zero for each t ∈ [0,T]. This means that the model-assisted estimator \hat{\mu}_{MA} reduces in this case to the population mean of the predicted values \hat{Y}_k,

\hat{\mu}_{MA}(t) = \frac{1}{N} \sum_{k \in U} \hat{Y}_k(t).

Moreover, if only the intercept term is used, namely Y_k(t) = β(t) + ε_{kt} for all k ∈ U, then the estimator \hat{\mu}_{MA} is simply the well-known Hájek estimator, which is sometimes preferred to the Horvitz-Thompson estimator (see e.g. Särndal et al. (1992), Chapter 5.7).
Remark 2. Estimator \hat{\mu}_{MA}(t) may also be obtained by using a calibration approach (Deville and Särndal (1992)), which consists in looking for weights w_{ks}, k ∈ s, that are as close as possible, according to some distance, to the sampling weights 1/π_k while estimating exactly the population totals of the auxiliary information, \sum_{k \in s} w_{ks} x_k = \sum_{k \in U} x_k. Considering the chi-square distance leads to the following choice of weights:

w_{ks} = \frac{1}{\pi_k} \left[ 1 + \left( \sum_{k \in U} x_k - \sum_{k \in s} \frac{x_k}{\pi_k} \right)' \left( \sum_{k \in s} \frac{x_k x_k'}{\pi_k} \right)^{-1} x_k \right],

and the calibration estimator \sum_{k \in s} w_{ks} Y_k(t)/N for the mean μ(t) is equal to \hat{\mu}_{MA}(t) defined in (6).
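The equivalence stated in Remark 2 can be checked numerically: with the chi-square distance, the calibration weights reproduce the auxiliary totals exactly and the calibration estimator coincides with the GREG estimator. A small sketch with a toy scalar study variable (all names and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 500, 60
x = np.column_stack([np.ones(N), rng.uniform(1.0, 2.0, N)])  # intercept + covariate
y = x @ np.array([0.3, 1.2]) + 0.1 * rng.standard_normal(N)

pi = np.full(N, n / N)                     # SRSWOR inclusion probabilities
s = rng.choice(N, size=n, replace=False)

t_x = x.sum(axis=0)                        # known population totals of x
t_x_ht = (x[s] / pi[s, None]).sum(axis=0)  # their Horvitz-Thompson estimates
M = (x[s].T * (1.0 / pi[s])) @ x[s]

# Chi-square-distance calibration weights:
# w_k = (1/pi_k) [1 + (t_x - t_x_ht)' M^{-1} x_k]
w = (1.0 / pi[s]) * (1.0 + x[s] @ np.linalg.solve(M, t_x - t_x_ht))

# The weights reproduce the auxiliary totals exactly, and the calibration
# estimator coincides with the model-assisted (GREG) estimator
mu_cal = (w @ y[s]) / N
beta_hat = np.linalg.solve(M, (x[s].T * (1.0 / pi[s])) @ y[s])
mu_MA = ((y[s] - x[s] @ beta_hat) / pi[s]).sum() / N + (x @ beta_hat).mean()
```

The identity sum_s w_k x_k = t_x holds by construction of the weights, and mu_cal equals mu_MA in exact arithmetic.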

A regularized estimator for asymptotics
The construction of the estimator \hat{\mu}_{MA}(t) is based on the assumption that the matrix \hat{G} is invertible. To show the uniform convergence, we consider a modification of \hat{G} which will permit control of the expected norm of its inverse. Such a trick has already been used, for example, in Bosq (2000) and Guillas (2001). Since \hat{G} is a p × p symmetric and nonnegative matrix, it is possible to write it as follows:

\hat{G} = \sum_{j=1}^{p} \eta_{j,n} v_{jn} v_{jn}',

where η_{j,n} is the jth eigenvalue, η_{1,n} ≥ · · · ≥ η_{p,n} ≥ 0, and v_{jn} is the corresponding orthonormal eigenvector. Let us consider a real number a > 0 and define the following regularized estimator of \hat{G}:

\hat{G}_a = \sum_{j=1}^{p} \max(\eta_{j,n}, a) \, v_{jn} v_{jn}'.

It is clear that \hat{G}_a is always invertible and \| \hat{G}_a^{-1} \| \leq a^{-1}, where \| \cdot \| is the spectral norm for matrices. Furthermore, if η_{p,n} ≥ a then \hat{G} = \hat{G}_a. If a > 0 is small enough, we show under standard conditions on the moments of the variables X_1, ..., X_p and on the first and second order inclusion probabilities that P(\hat{G} \neq \hat{G}_a) = P(η_{p,n} < a) = O(n^{-1}) (see Lemma A.1 in the Appendix). Consequently, it is possible to estimate the mean function μ_N(t) by the following estimator:

\hat{\mu}_{MA,a}(t) = \frac{1}{N} \sum_{k \in s} \frac{Y_k(t) - \hat{Y}_{k,a}(t)}{\pi_k} + \frac{1}{N} \sum_{k \in U} \hat{Y}_{k,a}(t),

where \hat{Y}_{k,a}(t) = x_k' \hat{\beta}_a(t) and \hat{\beta}_a(t) = \hat{G}_a^{-1} \frac{1}{N} \sum_{k \in s} \frac{x_k Y_k(t)}{\pi_k}.
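The eigenvalue thresholding that defines the regularized matrix can be written in a few lines; the function below is an illustrative implementation of the idea, not code from the paper.

```python
import numpy as np

def regularize(G, a):
    """Replace every eigenvalue of the symmetric nonnegative matrix G
    that falls below the threshold a by a, so that the inverse of the
    returned matrix always has spectral norm at most 1/a."""
    eigval, eigvec = np.linalg.eigh(G)
    return (eigvec * np.maximum(eigval, a)) @ eigvec.T

# A nearly singular 2 x 2 example: the tiny eigenvalue is lifted to a
G = np.array([[2.0, 0.0], [0.0, 1e-9]])
G_a = regularize(G, a=0.01)
inv_norm = np.linalg.norm(np.linalg.inv(G_a), 2)   # at most 1/a = 100
```

When all eigenvalues already exceed a, the function returns G itself (up to floating-point error), matching the statement that \hat{G} = \hat{G}_a whenever η_{p,n} ≥ a.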

Discretized observations
Note finally that with real data, we do not observe Y k (t) at all instants t in [0, T ] but only for a finite set of D measurement times, 0 = t 1 < ... < t D = T . In functional data analysis, when the noise level is low and the grid of discretization points is fine, it is usual to perform a linear interpolation or to smooth the discretized trajectories in order to obtain approximations of the trajectories at every instant t ∈ [0, T ] (see Ramsay and Silverman (2005)).
If there are no measurement errors, if the trajectories are regular enough (but not necessarily differentiable) and if the grid of discretization points is dense enough, Cardot and Josserand (2011) showed that linear interpolation can provide sufficiently accurate approximations to the trajectories, so that the approximation error can be neglected compared to the sampling error for the Horvitz-Thompson estimator. Note also that, even if the observations are corrupted by noise, it has been shown by simulations in Cardot et al. (2012a) that smoothing really improves the accuracy of the Horvitz-Thompson estimator only when the noise level is high.
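The effect of the grid density on the linear interpolation error can be illustrated on a toy smooth trajectory; the curve and the grid sizes below are arbitrary choices, not the paper's data.

```python
import numpy as np

def trajectory(t):
    # smooth toy curve standing in for a regular load trajectory
    return np.sin(2 * np.pi * t) + 0.3 * np.cos(6 * np.pi * t)

fine = np.linspace(0.0, 1.0, 10_000)       # instants where the curve is wanted
errs = []
for D in (10, 100, 1000):                  # increasingly dense observation grids
    grid = np.linspace(0.0, 1.0, D)
    approx = np.interp(fine, grid, trajectory(grid))
    errs.append(float(np.max(np.abs(approx - trajectory(fine)))))
# the sup-norm interpolation error shrinks quickly as the grid gets denser
print(errs)
```

For Hölder-continuous trajectories the sup-norm error is driven by the largest gap between consecutive discretization points, which is exactly the quantity controlled in the assumptions of the next section.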
Thus, for each unit k in the sample s, we build the interpolated trajectory

Y_{k,d}(t) = Y_k(t_i) + \frac{Y_k(t_{i+1}) - Y_k(t_i)}{t_{i+1} - t_i} (t - t_i), \quad t \in [t_i, t_{i+1}],

and we define \hat{\beta}_{a,d}(t) as the estimator of β(t) based on the discretized observations,

\hat{\beta}_{a,d}(t) = \hat{G}_a^{-1} \frac{1}{N} \sum_{k \in s} \frac{x_k Y_{k,d}(t)}{\pi_k}.

Therefore, the estimator \hat{\mu}_{MA,d} of the mean population curve μ(t) based on the discretized observations is obtained by linear interpolation between the values \hat{\mu}_{MA,a}(t_i) and \hat{\mu}_{MA,a}(t_{i+1}) at consecutive discretization points.

Asymptotic properties under the sampling design
All the proofs are postponed to the Appendix.

Assumptions
To derive the asymptotic properties under the sampling design p(·) of \hat{\mu}_{MA,d}, we must suppose that both the sample size and the population size become large. More precisely, we consider the superpopulation framework introduced by Isaki and Fuller (1982), with a sequence of growing and nested populations U_N with size N tending to infinity and a sequence of samples s_N of size n_N drawn from U_N according to the sampling design p_N(s_N). The first and second order inclusion probabilities are respectively denoted by π_kN and π_klN. For simplicity of notations and when there is no ambiguity, we drop the subscript N. To prove our asymptotic results we need the following assumptions.
A1. We assume that lim_{N→∞} n/N = π, with 0 < π < 1.
A2. We assume that min_{k∈U} π_k ≥ λ > 0, min_{k≠l} π_{kl} ≥ λ* > 0 and lim sup_{N→∞} n max_{k≠l} |π_{kl} − π_k π_l| < ∞.
A3. There are two positive constants C_2 and C_3 and 1 ≥ β > 1/2 such that, for all N and for all (r, t) ∈ [0,T] × [0,T], sup_{k∈U_N} |Y_k(t)| ≤ C_2 and sup_{k∈U_N} |Y_k(t) − Y_k(r)| ≤ C_3 |t − r|^β.
A4. We assume that there is a positive constant C_4 such that, for all k ∈ U, \|x_k\|^2 < C_4.
A5. We assume that, for N > N_0, the matrix G is invertible and that the number a chosen before satisfies \|G^{-1}\| < a^{-1}.
Assumptions A1 and A2 are classical hypotheses in survey sampling and deal with the first and second order inclusion probabilities. They are satisfied for many usual sampling designs with fixed size (see for example Hájek (1981), Robinson and Särndal (1983) and Breidt and Opsomer (2000)).
Assumption A3 is a minimal regularity condition already required in Cardot and Josserand (2011). Even if pointwise consistency, for each fixed value of t, can be proved without any condition on the Hölder coefficient β, this regularity condition is necessary to get a uniform convergence result. A counterexample is given in Hahn (1977) for β ≤ 1/2: it is shown there that the sample mean of i.i.d. copies of a uniformly bounded continuous random function defined on a compact interval may fail to satisfy the Central Limit Theorem in the space of continuous functions. The hypothesis β > 1/2 also implies that the trajectories of the residual processes ε_{kt}, see (4), are regular enough (but not necessarily differentiable). Assumption A4 could certainly be weakened at the expense of longer proofs. Assumption A5 means that for all u ∈ R^p, with u ≠ 0, we have u'Gu ≥ a u'u. The same kind of assumption is required in Isaki and Fuller (1982) to get the pointwise convergence in probability, whereas Robinson and Särndal (1983) introduce a much stronger condition (condition A7 in their article) which directly deals with the mean square convergence of the estimator of the vector β of regression coefficients.

Uniform consistency of \hat{\mu}_{MA,d}
We aim at showing that \hat{\mu}_{MA,d} is uniformly consistent for μ, namely that, for all ε > 0,

P\left( \sup_{t \in [0,T]} \left| \hat{\mu}_{MA,d}(t) - \mu(t) \right| > \varepsilon \right) \to 0

when N tends to infinity. The suitable space for proving the uniform convergence is the space of continuous functions on [0,T], denoted by C[0,T], equipped with its natural distance ρ: for two elements f, g ∈ C[0,T], the distance between f and g is

\rho(f, g) = \sup_{t \in [0,T]} | f(t) - g(t) |.

Remark that, under assumption A3, the trajectories Y_k are continuous for all k ∈ U, and thus the mean curve μ belongs to C[0,T], as does its estimator \hat{\mu}_{MA,d}, by construction.
We first state the uniform consistency of the estimator \hat{\beta}_{a,d}(t) towards its population counterpart \tilde{\beta}(t), under conditions on the number and the location of the discretization points.
We can now state a similar type of result for the estimator of the mean function.
Proposition 3.2. Let assumptions (A1)-(A5) hold. If the discretization scheme satisfies max i∈{1,.., We deduce from Proposition 3.2 that the estimator \hat{\mu}_{MA,d}(t) is asymptotically design unbiased as well as design consistent. Note that the approximation error (due to linear interpolation) is negligible compared to the sampling variability under the additional assumption on the location of the discretization points. This assumption also tells us that fewer discretization points are required for smoother trajectories.
Let us also remark that, for each t, where 1_k is the sample membership indicator, it is not difficult to prove, under the previous assumptions and by using Lemma A.4 in the Appendix, that for all

Covariance function estimation under the sampling design
We undertake in this section a detailed study of the covariance function of the estimator \hat{\mu}_{MA,d}. The covariance function is computed with respect to the sampling design p(·). From relation (9), \hat{\mu}_{MA,d} is a nonlinear function of Horvitz-Thompson estimators, so the usual Horvitz-Thompson covariance formula given by (3) cannot be used anymore. Nevertheless, in light of relation (11), the covariance function of \hat{\mu}_{MA,d} between two instants r and t may be approximated by the covariance Cov_p(\hat{\mu}_{\mathrm{diff}}(r), \hat{\mu}_{\mathrm{diff}}(t)), which in turn is equal to the Horvitz-Thompson covariance applied to the residuals Y_k − \tilde{Y}_k. Let us denote by γ_{MA} the approximate covariance function of \hat{\mu}_{MA,d} defined as follows:

\gamma_{MA}(r, t) = \frac{1}{N^2} \sum_{k \in U} \sum_{l \in U} (\pi_{kl} - \pi_k \pi_l) \frac{Y_k(r) - \tilde{Y}_k(r)}{\pi_k} \frac{Y_l(t) - \tilde{Y}_l(t)}{\pi_l}.

This approximation explains why model-assisted estimators will perform much better than Horvitz-Thompson estimators if the residuals Y_k(t) − \tilde{Y}_k(t) are small compared to Y_k(t). The covariance function γ_{MA}(r, t) can be estimated by the Horvitz-Thompson covariance estimator applied to the estimated residuals,

\hat{\gamma}_{MA,d}(r, t) = \frac{1}{N^2} \sum_{k \in s} \sum_{l \in s} \frac{\pi_{kl} - \pi_k \pi_l}{\pi_{kl}} \frac{Y_{k,d}(r) - \hat{Y}_{k,d}(r)}{\pi_k} \frac{Y_{l,d}(t) - \hat{Y}_{l,d}(t)}{\pi_l},   (13)

with the convention π_{kk} = π_k and where \hat{Y}_{k,d}(t) = x_k' \hat{\beta}_{a,d}(t). To prove the consistency of the covariance estimator \hat{\gamma}_{MA,d}(r, t), let us introduce additional assumptions that involve higher-order inclusion probabilities as well as conditions on the fourth order moments of the trajectories.
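The Horvitz-Thompson covariance estimator applied to residual curves can be sketched as follows. The residual curves here are synthetic and the design is SRSWOR, for which the diagonal of the estimator reduces to the classical (1 − f)s²/n formula; this classical identity provides a convenient numerical check. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n, D = 500, 50, 24
E = rng.standard_normal((n, D))            # sampled residual curves e_k(t)

# SRSWOR first- and second-order inclusion probabilities
f = n / N
pi = np.full(n, f)
pi2 = np.full((n, n), n * (n - 1) / (N * (N - 1)))
np.fill_diagonal(pi2, f)

# gamma_hat(r, t) = (1/N^2) sum_{k,l in s}
#   (pi_kl - pi_k pi_l) / pi_kl * e_k(r) e_l(t) / (pi_k pi_l)
W = (pi2 - np.outer(pi, pi)) / (pi2 * np.outer(pi, pi))
gamma_hat = (E.T @ W @ E) / N**2
```

For general unequal-probability designs only pi and pi2 change; the quadratic form E.T @ W @ E stays the same.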
A6. We assume that lim_{N→∞} n max_{(k,l,k',l') ∈ D_{4,N}} |E_p[(1_k 1_l − π_{kl})(1_{k'} 1_{l'} − π_{k'l'})]| = 0, where D_{4,N} denotes the set of all quadruples (k, l, k', l') of distinct units of U_N.
A7. There are two positive constants C_5 and C_6 such that
Condition A6 has already been assumed by Breidt and Opsomer (2000) in a nonparametric model-assisted context and in Cardot and Josserand (2011) for Horvitz-Thompson estimators. It can be checked that it is fulfilled for simple random sampling without replacement (SRSWOR) or stratified sampling with SRSWOR within each stratum. More generally, it is fulfilled for high entropy sampling designs. Boistard et al. (2012) prove that it is fulfilled for rejective sampling, whereas Cardot et al. (2012c) check that it is true for sampling designs, such as Sampford sampling, whose Kullback-Leibler divergence with respect to rejective sampling tends to zero when the population size increases.
Proposition 3.3. Assume (A1)-(A7) hold and that the sequence of discretization schemes satisfies lim N →∞ max i∈{1,.., Since nγ_{MA}(r, t) remains bounded, the previous proposition tells us that \hat{\gamma}_{MA,d} is pointwise consistent and that the variance function estimator is uniformly convergent. Note also that the condition on the number of discretization points is much weaker than in Proposition 3.2, because we do not give rates of convergence here. To obtain such rates, we would need to impose additional assumptions on the sampling design.

Asymptotic normality and confidence bands
We introduce a supplementary assumption in order to get the asymptotic normality of the functional estimator \hat{\mu}_{MA,d} in the space of continuous functions.
A8. We assume that for each fixed value of t ∈ [0,T], This assumption is satisfied for usual sampling designs (see e.g. Fuller (2009), Chapter 2.2). Note that, using relation (11), we can write for any fixed value t ∈ [0,T] that \hat{\mu}_{MA,d}(t) is also pointwise asymptotically Gaussian when the conditions of Proposition 3.1 hold. Let us now state a much stronger result, which indicates that the convergence to a Gaussian distribution also occurs for the trajectories, in the space of continuous functions (see Billingsley (1968), Chapter 2).
Proposition 3.4. Assume (A1)-(A5) and (A8) hold. If the discretization scheme satisfies max i={1,.., where indicates the convergence in distribution in C[0, T ] with the uniform topology and Z is a Gaussian process taking values in C[0, T ] with mean 0 and covariance function γ Z (r, t) = lim n→+∞ nγ MA (r, t).
Since the "sup" functional defined on the space of continuous functions is continuous, Proposition 3.4 implies that the real random variable sup_t |√n(\hat{\mu}_{MA,d}(t) − μ(t))| converges in distribution to sup_t |Z(t)|. We thus consider confidence bands for μ of the form

\left\{ \left[ \hat{\mu}_{MA,d}(t) - c \, \frac{\hat{\sigma}(t)}{\sqrt{n}}, \; \hat{\mu}_{MA,d}(t) + c \, \frac{\hat{\sigma}(t)}{\sqrt{n}} \right], \; t \in [0,T] \right\},   (14)

where c is a suitable number and \hat{\sigma}(t) = \sqrt{n \hat{\gamma}_{MA,d}(t, t)}. Note that the fact that μ belongs to the confidence band defined in (14) is equivalent to sup_t √n |\hat{\mu}_{MA,d}(t) − μ(t)| / \hat{\sigma}(t) ≤ c. Given a confidence level 1 − α ∈ (0,1), one way to build such a confidence band, that is to say one way to find an adequate value for c_α, is to simulate centered Gaussian processes \hat{Z} defined on [0,T], with mean 0 and covariance function n\hat{\gamma}_{MA,d}(r, t), and then to compute the quantile of order 1 − α of sup_{t∈[0,T]} |\hat{Z}(t)/\hat{\sigma}(t)|. In other words, we look for a constant c_α, which is in fact a random variable since it depends on the estimated covariance function \hat{\gamma}_{MA,d}, such that the conditional probability, given \hat{\gamma}_{MA,d}, that sup_{t∈[0,T]} |\hat{Z}(t)/\hat{\sigma}(t)| ≤ c_α is equal to 1 − α. The asymptotic coverage of this simulation-based procedure has been rigorously studied for the Horvitz-Thompson estimators of the mean of sampled and noisy trajectories in Cardot et al. (2012a), whereas Cardot et al. (2012b) have successfully employed this approach on real load curves with model-assisted estimators. The next proposition, which can be seen as a functional version of Slutsky's Lemma, provides a rigorous justification of this latter technique.
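The simulation-based construction of c_α described above can be sketched as follows, with an invented smooth covariance function standing in for n\hat{\gamma}_{MA,d}; everything below is an illustrative assumption, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 48
t = np.linspace(0.0, 1.0, D)

# Toy stand-in for the estimated n * gamma_hat on the grid: a smooth,
# positive-definite (Ornstein-Uhlenbeck type) covariance function
n_gamma = 0.5 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)
sigma = np.sqrt(np.diag(n_gamma))

# Simulate centered Gaussian paths with this covariance and take the
# (1 - alpha) quantile of sup_t |Z(t)| / sigma(t)
n_sim, alpha = 10_000, 0.05
Z = rng.multivariate_normal(np.zeros(D), n_gamma, size=n_sim, method="cholesky")
sup_stat = np.max(np.abs(Z) / sigma, axis=1)
c_alpha = float(np.quantile(sup_stat, 1 - alpha))

# The band is then mu_hat(t) +/- c_alpha * sigma(t) / sqrt(n)
```

Because the supremum is taken over correlated instants, c_alpha exceeds the pointwise Gaussian quantile 1.96 but remains well below the Bonferroni-type bound obtained by treating the D instants as independent.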
Proposition 3.5. Assume (A1)-(A8) hold and the discretization scheme satisfies max i∈{1,.., Let Z be a Gaussian process with mean zero and covariance function γ Z (as in Proposition 3.4). Let ( Z N ) be a sequence of processes such that for each N , conditionally on the estimator γ MA,d defined in (13), Z N is Gaussian with mean zero and covariance n γ MA,d . Suppose that γ Z (t, t) is a continuous function and inf t γ Z (t, t) > 0. Then, as N → ∞, the following convergence holds uniformly in c, where σ(t) = n γ MA,d (t, t) and σ(t) = γ Z (t, t).
As in Cardot et al. (2012a), it is possible to deduce from the previous proposition that the chosen value c α = c α ( γ MA,d ) provides asymptotically the desired coverage since it satisfies

An illustration on electricity consumption curves
We consider, as in Cardot et al. (2012b), a population of N = 15069 electricity consumption curves, measured every 30 minutes over a period of one week. Each element k of the population is thus a vector with size 336, denoted by (Y_k(t), t ∈ {1, . . . , 336}). The auxiliary information X, with values x_k, k ∈ U, is simply the mean consumption of each meter k ∈ U recorded during the week before the sample is drawn. As shown in Figure 1, the real variable X is strongly correlated with the consumption at each instant t of the current estimation period, so that a linear model with functional response is well adapted for model-assisted estimation.
We draw samples s_i of size n, for i = 1, . . . , I = 10000, with simple random sampling without replacement (SRSWOR), so that π_k = n/N for all k ∈ {1, . . . , N}. For each sample s_i, we compute the model-assisted estimator of the mean curve defined in (15). Cardot et al. (2012b) noted that, for the same sample size, the mean square error of estimation of the mean curve with the model-assisted estimator defined in (15) is divided by four compared to the Horvitz-Thompson estimator with SRSWOR. There is only one covariate in this study and we did not encounter any problem with the invertibility of the matrix \hat{G}; the value of the parameter a is thus a = 0.
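The variance reduction reported by Cardot et al. (2012b) can be reproduced qualitatively on synthetic data: over repeated SRSWOR samples, the model-assisted estimator has a much smaller Monte Carlo mean square error than the Horvitz-Thompson estimator when the auxiliary variable is strongly correlated with the study variable. A toy scalar sketch (all quantities invented):

```python
import numpy as np

rng = np.random.default_rng(5)
N, n, I = 2000, 100, 500
x = rng.uniform(1.0, 2.0, N)                 # toy stand-in for past mean consumption
y = 2.0 * x + 0.1 * rng.standard_normal(N)   # strongly correlated current value
X = np.column_stack([np.ones(N), x])
mu, f = y.mean(), n / N

err_ht, err_ma = [], []
for _ in range(I):
    s = rng.choice(N, size=n, replace=False)
    err_ht.append(y[s].mean() - mu)          # HT reduces to the sample mean here
    # with equal inclusion probabilities the GREG fit is plain OLS on the sample
    beta, *_ = np.linalg.lstsq(X[s], y[s], rcond=None)
    err_ma.append((y[s] - X[s] @ beta).mean() + X.mean(axis=0) @ beta - mu)

mse_ht = float(np.mean(np.square(err_ht)))
mse_ma = float(np.mean(np.square(err_ma)))
```

The gain grows with the strength of the linear relationship, consistent with the variance approximation of Section 3, which depends only on the regression residuals.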
We also denote by \bar{\mu}(t) = \frac{1}{I} \sum_{i=1}^{I} \hat{\mu}_i(t) the Monte Carlo mean of the I = 10000 estimations obtained with simple random sampling without replacement. The approximation to the true covariance function is thus given by

\gamma_{emp}(r, t) = \frac{1}{I} \sum_{i=1}^{I} \left( \hat{\mu}_i(r) - \bar{\mu}(r) \right) \left( \hat{\mu}_i(t) - \bar{\mu}(t) \right),

for (r, t) ∈ {1, . . . , 336}².
The following quadratic loss criterion, which measures a relative error, is used to evaluate, for each sample, the accuracy of the variance estimator defined in (13). We also decompose, over the I = 10000 estimations, the relative mean square error (RMSE) of the estimator into an approximation error term (RB(γ_{MA,d})²) and a variance term (VR(γ_{MA,d})) that can be related to the sampling error for the ith sample. The relative bias of the estimator \hat{\gamma}_{MA,d} may be written as The RMSE as well as the approximation error and statistics (quantiles) for E_r are given in Table 1. We can note that, logically, the RMSE decreases as the sample size increases and that, even for moderate sample sizes, the estimations are rather precise. A closer look at how the RMSE is decomposed reveals that the estimation error is mainly due to the sampling error, via the variance term, whereas the approximation error term RB(γ_{MA,d})² is negligible. This fact can be observed in Figure 2, where we plot the true variance function γ_emp over the considered period, its approximation γ_{MA}, as well as an estimation γ_{MA,d} obtained with a sample of size n = 1500, whose error according to criterion (17) is close to the mean error (E_r ≈ 0.02).
We have also plotted in Figure 3 the difference between the empirical covariance function γ_emp and its approximation γ_{MA}, and in Figure 4 the difference between γ_{MA} and its estimation γ_{MA,d} for a sample with size n = 1500 whose error, E_r ≈ 0.02, is close to the mean value. Once again, it is clearly seen that the approximation error to the true covariance function (see Figure 3) is much smaller than the sampling error (see Figure 4). We can also note a strong periodic pattern, which reflects the natural daily periodicity of electricity consumption behavior and is related to the temporal correlation of the unknown residual process ε_{kt} defined in (4).

Concluding remarks
We have presented in this paper an asymptotic study of model-assisted estimators, based on linear regression models with functional response, when the target is the mean of functional data observed at discrete time points. This work can be extended in many directions. For example, one could consider more sophisticated regression models than model (4), such as nonlinear or nonparametric models with functional response, by adapting, in a survey sampling context, models studied in the functional data analysis literature (see Chiou et al. (2004), Cardot (2007), or Ferraty et al. (2011)). However, one important drawback of such more sophisticated approaches is that they would require knowing x_k for all the units k in the population, as opposed to only the population totals.
An interesting direction for future investigation would be to consider noisy and possibly sparse measurements in time. For the Horvitz-Thompson estimator, local polynomials are employed in Cardot et al. (2012a) in order to first smooth the trajectories and it would certainly be possible to adapt the techniques developed in this work to the model-assisted estimation procedure.
Another promising direction for future research would be to adapt model-assisted estimators to time-varying samples. When one works with large networks of sensors, it can be possible to consider a sequence of samples s(t) that evolve along time. A preliminary work (see Degras (2012)), which focuses on Horvitz-Thompson estimators and stratified sampling, clearly shows that such time-varying samples can outperform sampling designs that are fixed in time.

Appendix

Throughout the proofs we use the letter C to denote a generic constant whose value may vary from place to place. We also denote by α_k = \frac{1_k}{\pi_k} − 1, k ∈ U, and by Δ_{kl} = π_{kl} − π_k π_l, k, l ∈ U.

A.1. Some useful Lemmas
Note that the result shown in the first Lemma is sometimes stated as an assumption (see e.g. Robinson and Särndal (1983)). It is used to prove the convergence of the estimator of the mean in terms of mean square error.
Proof For i), we just need to remark that, under hypotheses (A3), (A4) and (A5), The proof of point ii) is similar, but also requires the use of lower bounds on the first order inclusion probabilities (assumption (A2)), The following Lemma states the pointwise mean square convergence for any fixed value of t ∈ [0, T ].
Lemma A.4. Suppose that assumptions (A1)-(A5) hold. Then, there is a positive constant ζ 1 such that, for all t ∈ [0, T ], Proof . The demonstration is similar to the proof of Lemma A.5 and is thus omitted.

Proof . A direct decomposition leads to
where . Using now assumptions (A2)-(A4) and the Cauchy-Schwarz inequality, we get Using now Lemma A.1, we can bound for some constant C. Now, with assumptions (A1)-(A5) and following the same arguments as in the proof of Lemma A.2, we also have for some positive constant C. Combining (22), (23) and (24), the result is proved.

A.2. Proof of Proposition 3.1 and Proposition 3.2
The proof of Proposition 3.1 is omitted. It is analogous to the proof of Proposition 3.2, which is given below. The different steps are similar to the proof of Proposition 1 in Cardot and Josserand (2011).
(25) and study each term at the right-hand side of the inequality separately.
We use the following decomposition: We directly get, with hypotheses (A1)-(A3) and with similar arguments as in the proof of Lemma A.2, that for some constant C, The second term at the right-hand side in (27) is dealt with using maximal inequalities. More precisely, we use Corollary 2.2.5 in van der Vaart and Wellner (2000). Consider, for this, the Orlicz norm of a random variable X, defined as \|X\|_\psi = \inf\{c > 0 : E_p \, \psi(|X|/c) \leq 1\}. For the particular case ψ(u) = u², the Orlicz norm is simply the well-known L² norm, \|X\|_\psi = (E_p(X^2))^{1/2}. Let us introduce, for (r, t) ∈ [0,T]², the semimetric d(r, t) defined by and consider D(ε, d), the packing number, which is defined as the maximum number of points in [0,T] whose pairwise distances are strictly larger than ε. Then, Corollary 2.2.5 in van der Vaart and Wellner (2000) states that there is a constant K > 0 such that We show below that there is a constant C such that d²(r, t) ≤ C|t − r|^{2β}, and thus, since β > 1/2, the integral at the right-hand side of (29) is finite. Let us first decompose where By assumptions (A2)-(A4) and Lemma A.5, we can bound, for some constant C, Considering now d²(r, t), we have where With hypotheses (A1)-(A3), one can easily obtain that there is a positive constant C such that and thanks to Lemma A.2 and Lemma A.3, we can bound Combining now (33) and (34) with (30) and (31), we get that for some constant C.

A.3. Proof of the consistency of the covariance function
We first prove that for any (r, t) ∈ [0, T ] 2 , the estimator γ MA,d (r, t) of the covariance function converges to γ MA (r, t).
Then we prove the uniform convergence of the variance estimator γ MA,d (t, t) by showing its convergence in distribution to zero in the space of continuous functions. The proof is decomposed into two classical steps (see for example Theorem 8.1 in Billingsley (1968)). We first show the pointwise convergence, by considering the convergence of all finite linear combinations, and then we check that the sequence is tight by bounding the increments.
Step 1. Pointwise convergence. We want to show that, for each (r, t) ∈ [0, T]², γ MA,d (r, t) − γ MA (r, t) tends to zero in probability when N tends to infinity, where γ MA,a (r, t) denotes the interpolated counterpart of the estimator. We study the interpolation error and the estimation error separately.

Interpolation error
Under the assumption on the grid of discretization points, one can get after some algebra that n| γ MA,d (r, t) − γ MA,a (r, t)| = o(1).
The hypotheses on the moments of the inclusion probabilities and Lemma A.6 give us that a₁ → 0 and a₃ → 0 when N → ∞. The Cauchy-Schwarz inequality then allows us to get that a₂ → 0 and E_p(A₁(r, t)²) → 0 when N → ∞. Let us show now that E_p(|A₄(r, t)|) → 0 when N → ∞; this follows from Lemma A.4 and assumptions (A1)-(A5). In a similar way, we can bound E_p(|A₂(r, t)|).

Step 2. Uniform convergence of the variance estimator. The pointwise convergence of the variance function proved in the previous step clearly implies the convergence of all finite linear combinations: for all p ∈ {1, 2, ...}, all (c₁, ..., c_p) ∈ R^p and all (t₁, ..., t_p) ∈ [0, T]^p, the combination ∑_{j=1}^{p} c_j n (γ MA,a (t_j, t_j) − γ MA (t_j, t_j)) tends to zero in probability as N tends to infinity. Thus, we deduce with the Cramér-Wold device that the vector n (γ MA,a (t₁, t₁) − γ MA (t₁, t₁), ..., γ MA,a (t_p, t_p) − γ MA (t_p, t_p)) converges in distribution to 0 (in R^p).
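For completeness, the Cramér-Wold device invoked here is the classical equivalence, for random vectors in R^p,

```latex
X_N \xrightarrow{\;d\;} X
\quad\Longleftrightarrow\quad
\sum_{j=1}^{p} c_j X_{N,j} \;\xrightarrow{\;d\;}\; \sum_{j=1}^{p} c_j X_j
\quad \text{for every } (c_1, \dots, c_p) \in \mathbb{R}^p ,
```

applied here with the limit X = 0, since convergence in probability to a constant implies convergence in distribution to that constant.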
We now need to prove that the sequence of random functions γ MA,a (t, t) is tight in C[0, T] by using a bound on its increments. Let us introduce the corresponding criterion d²_γ(r, t). To conclude, we show in the following that d²_γ(r, t) ≤ C|t − r|^{2β} for some constant C and all (r, t) ∈ [0, T]². Using (36), this distance is decomposed into four parts.

A.4. Proofs related to the asymptotic normality and the confidence bands
The steps of the proof of Proposition 3.4 are similar to those of the proof of Proposition 3.3. We first examine the finite linear combinations and invoke the Cramér-Wold device; then we prove tightness thanks to inequalities on the increments. Let us first deal with the interpolation error, which is negligible under the assumption on the grid of discretization points, as shown in (26).
If we now consider p distinct discretization instants 0 ≤ t₁ < t₂ < ... < t_p ≤ T, it is immediate to check that, for any vector c ∈ R^p, the combination √n ∑_{j=1}^{p} c_j (µ̂(t_j) − µ(t_j)) is asymptotically Gaussian. Indeed, by linearity, there exists a vector of random weights (w₁, ..., w_N), which does not depend on time t, such that ∑_{j=1}^{p} c_j µ̂(t_j) = ∑_{k∈U} w_k ∑_{j=1}^{p} c_j Y_k(t_j), and this quantity satisfies a CLT, with asymptotic variance σ²_c, under the moment conditions (A7). Thus, any finite linear combination is asymptotically Gaussian and we can conclude, with the Cramér-Wold device, that the vector √n (µ̂(t₁) − µ(t₁), ..., µ̂(t_p) − µ(t_p)) is asymptotically Gaussian.
It remains to check the tightness of the functional process, and this is a direct consequence of (30) and (35). Indeed, denoting Z_n(t) = √n (µ_{MA,a}(t) − µ(t)), there is a constant C such that, for all (r, t) ∈ [0, T]², E_p(Z_n(t) − Z_n(r))² ≤ C|t − r|^{2β}, and, since β > 1/2, the sequence Z_n is tight in C[0, T], in view of Theorem 12.3 of Billingsley (1968).
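The moment criterion of Theorem 12.3 in Billingsley (1968) used at this step can be recalled as follows: if the sequence (Z_n(0)) is tight and there are constants C > 0, m ≥ 0 and a > 1 such that

```latex
\mathbb{E}_p \big| Z_n(t) - Z_n(r) \big|^{m} \;\le\; C \, |t - r|^{a}
\qquad \text{for all } (r, t) \in [0, T]^2 \text{ and all } n ,
```

then (Z_n) is tight in C[0, T]. Here the criterion applies with m = 2 and a = 2β > 1.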
We now prove Proposition 3.5, the last result of the paper. The proof consists in showing the weak convergence of the sequence of distributions (Z_N) to the law of Z in C([0, T]).
For any vector of p points 0 ≤ t₁ < ... < t_p ≤ T, the finite-dimensional convergence of the distribution of the Gaussian vector (Z_N(t₁), ..., Z_N(t_p)) to the distribution of (Z(t₁), ..., Z(t_p)) is an immediate consequence of the uniform convergence of the covariance function stated in Proposition 3.3. We can conclude with Slutsky's lemma, noting that it applies to any linear combination with coefficients (c₁, ..., c_p) ∈ R^p.

We now need to check the tightness of (Z_N) in C([0, T]). Given γ_{MA,d}, we have, for (r, t) ∈ [0, T]², E_p[(Z_N(t) − Z_N(r))² | γ_{MA,d}] = n (γ_{MA,d}(t, t) − 2 γ_{MA,d}(r, t) + γ_{MA,d}(r, r)), and after some algebra we obtain, thanks to Assumption (A2), a bound involving the increments of the discretized trajectories. Let us first study the term ∑_{k∈U} (Y_{k,d}(t) − Y_{k,d}(r))² in the previous inequality and suppose, without loss of generality, that t > r. To check the continuity of the trajectories, we only need to consider points r and t that are close to each other. If t and r belong to the same interval, say [t_i, t_{i+1}], then it is easy to check, with Assumption (A4), that the desired bound holds. If we suppose now that r ∈ [t_{i−1}, t_i] and t ∈ [t_i, t_{i+1}], then, using the same decomposition as in (49), we directly get the same bound. The second term on the right-hand side of inequality (48) is dealt with by similar arguments and the decomposition used in the proof of Lemma A.3. Thus, the trajectories of the Gaussian process are continuous on [0, T] whenever β > 0 (see e.g. Theorem 1.4.1 in Adler and Taylor (2007)), and the sequence (Z_N) converges weakly to Z in C([0, T]) equipped with the supremum norm.
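The continuity argument rests on a classical sufficient condition for centered Gaussian processes (a consequence of the entropy condition in Theorem 1.4.1 of Adler and Taylor (2007)): if

```latex
\mathbb{E}\big( Z(t) - Z(r) \big)^{2} \;\le\; C \, |t - r|^{2\beta}
\qquad \text{for some } \beta > 0 \text{ and all } (r, t) \in [0, T]^2 ,
```

then Z admits a version with almost surely continuous sample paths on [0, T].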
Using again Proposition 3.3, we have, uniformly in t, σ̂_Z(t) = σ_Z(t) + o_p(1), where the estimator is defined by σ̂²_Z(t) = n γ_{MA,d}(t, t). Since, by hypothesis, σ²_Z(t) = γ_Z(t, t) is a continuous function and inf_t γ_Z(t, t) > 0, we get with Slutsky's lemma that (Z_N/σ̂_Z) converges weakly to Z/σ_Z in C([0, T]). By definition of weak convergence in C([0, T]) and the continuous mapping theorem, we also deduce that the real random variable M_N = sup_{t∈[0,T]} |Z_N(t)|/σ̂_Z(t) converges in distribution to M = sup_{t∈[0,T]} |Z(t)|/σ_Z(t), so that P(M_N ≤ c) converges to P(M ≤ c) for each c ≥ 0. Note finally that, under the previous hypotheses on γ_Z (see e.g. Pitt and Tran (1979)), the real random variable M = sup_{t∈[0,T]} |Z(t)|/σ_Z(t) has an absolutely continuous and bounded density function, so that the convergence holds uniformly in c (see e.g. Lemma 2.11 in van der Vaart (1998)).
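In practice, the distribution of M = sup_t |Z(t)|/σ_Z(t) has no closed form, and its quantiles are approximated by simulating centered Gaussian processes with the estimated covariance on the grid of discretization points, as described in the paper. A minimal numerical sketch of this construction follows (the function name `sup_gaussian_quantile` and the toy Brownian covariance are illustrative choices, not taken from the paper):

```python
import numpy as np

def sup_gaussian_quantile(gamma, alpha=0.05, n_sims=5000, seed=0):
    """Approximate the (1 - alpha) quantile of M = sup_t |Z(t)| / sigma_Z(t),
    where Z is a centered Gaussian process whose covariance on the grid of
    discretization points is given by the matrix `gamma`."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.diag(gamma))          # pointwise standard deviations
    # Simulate n_sims centered Gaussian vectors with covariance gamma.
    z = rng.multivariate_normal(np.zeros(gamma.shape[0]), gamma,
                                size=n_sims, method="eigh")
    sup_stats = np.max(np.abs(z) / sigma, axis=1)   # realizations of M
    return np.quantile(sup_stats, 1 - alpha)

# Toy illustration with the Brownian covariance gamma(r, t) = min(r, t).
grid = np.linspace(0.02, 1.0, 50)
gamma = np.minimum.outer(grid, grid)
c = sup_gaussian_quantile(gamma, alpha=0.05)
```

The asymptotic confidence band at level 1 − α is then obtained as the estimated mean curve plus or minus c times the estimated pointwise standard deviation divided by √n, with c the simulated quantile returned above.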

A.5. Some useful lemmas
We state here without any proof some results that are needed for the study of the convergence of the covariance function. They rely on applications of the Cauchy-Schwarz inequality and on the assumptions on the moments of the trajectories and the inclusion probabilities.