Nonparametric Laguerre estimation in the multiplicative censoring model

Abstract. We study the model $Y_i = X_i U_i$, $i = 1,\dots,n$, where the $U_i$'s are i.i.d. with $\beta(1,k)$ density, $k \ge 1$, and the $X_i$'s are i.i.d., nonnegative, with unknown density $f$. The sequences $(X_i)$ and $(U_i)$ are independent. We aim at estimating $f$ on $\mathbb{R}^+$ from the observations $(Y_1,\dots,Y_n)$. We propose projection estimators using a Laguerre basis. A data-driven procedure is described to select the dimension of the projection space; it automatically performs the bias-variance compromise. Then, we give upper bounds on the $L^2$-risk on specific Sobolev-Laguerre spaces. Lower bounds matching the upper bounds within a logarithmic factor are proved. The method is illustrated on simulated data.

1. Introduction. Consider the model
(1) $Y_i = X_i U_i, \quad i = 1,\dots,n,$
where the $(X_i)$ are i.i.d. nonnegative random variables with unknown density $f$, the $(U_i)$ are i.i.d. with $\beta(1,k)$ density given by $f_U(u) := \rho_k(u) = k(1-u)^{k-1}\mathbf{1}_{[0,1]}(u)$, $k \ge 1$, and the sequences $(X_i)$, $(U_i)$ are independent. For $k = 1$, i.e. if $U_i$ has uniform density on $[0,1]$, model (1) is referred to as the multiplicative censoring model and has been widely investigated in the past decades. It was first introduced in Vardi (1989) and covers several important statistical problems, such as estimation under monotonicity constraints or deconvolution of an exponential variable. Such a model is usually applied in survival analysis (see e.g. van Es et al. (2000)). Numerous papers deal with the estimation of $f$ by various nonparametric methods. A nonparametric maximum likelihood approach is investigated in Vardi (1989), Vardi and Zhang (1992), Asgharian et al. (2012). However, in the latter papers, the authors assume that an $m$-sample of direct observations $X_1,\dots,X_m$ is available in addition to the $Y$-sample, and the method does not apply to the case $m = 0$. Using only the $Y$-sample, projection methods have been proposed. In Andersen and Hansen (2001), considering the estimation of $f$ as an inverse problem, the authors apply singular value decomposition in different bases; their procedure is not adaptive. Abbaszadeh et al. (2012, 2013) use projection estimators on wavelet bases to estimate the density $f$ and its derivatives. They provide adaptive estimators and upper bounds on the $L^p$-risks, but no lower bounds. Kernel estimators of $f$ and of the survival function $\bar F(x) = 1 - F(x)$, where $F$ is the cumulative distribution function, are studied in Brunel et al. (2015). Extensions of model (1) are considered in Chesneau (2013), who assumes that the $U_i$'s are a product of independent uniform variables and that the sequence $(X_i)$ is $\alpha$-mixing.
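To fix ideas, the following minimal Python sketch simulates model (1); the Gamma choice for $f$ and all names are ours for illustration (the experiments reported in Section 4 were run in Matlab).

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_multiplicative_censoring(n, k, draw_x):
        # Draw (Y_1, ..., Y_n) with Y_i = X_i * U_i and U_i ~ beta(1, k)
        x = draw_x(n)                  # hidden i.i.d. sample with density f
        u = rng.beta(1.0, k, size=n)   # f_U(u) = k (1 - u)^{k-1} on [0, 1]
        return x * u

    # Example: X ~ Gamma(4, 1/2); the statistician only observes y.
    y = simulate_multiplicative_censoring(
        2000, k=1, draw_x=lambda n: rng.gamma(4.0, 0.5, n))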
In this paper, we consider the extension of the multiplicative censoring model to the case where $U_i$ has $\beta(1,k)$ distribution and propose nonparametric estimators of $f$ built as projection estimators on a Laguerre basis, under the assumption that $f \in L^2(\mathbb{R}^+)$. Laguerre bases, which are orthonormal bases of $L^2(\mathbb{R}^+)$, are well fitted for nonparametric estimation of $\mathbb{R}^+$-supported functions. Moreover, as the support of the density under estimation is hidden by the noise, it is an advantage to have basis functions with non-compact support. These bases have been recently used by several authors, for instance in Comte et al. (2015) for Laplace deconvolution of a signal observed with noise, in Comte and Genon-Catalot (2015) for estimation of the mixing distribution of a Poisson mixture model, and in Mabon (2015) for deconvolution of densities on $\mathbb{R}^+$. Laguerre bases are related to Sobolev-Laguerre spaces, which were introduced in Shen (2000) and studied in more detail in Bongioanni and Torrea (2007). The regularity properties of a function $f$ belonging to a Sobolev-Laguerre space are characterized by the rate of decay of the coefficients of the expansion of $f$ in the Laguerre basis. The link between projection coefficients and regularity conditions in these spaces has been described in Comte and Genon-Catalot (2015). In the present paper, we choose a Laguerre basis and first establish explicit formulae linking the projection coefficients of $f_{k,Y}$, the density of $Y$ in model (1), in the Laguerre basis to those of $f$. This allows us to define a collection $(\hat f_m)$ of estimators of $f$. We obtain an $L^2$-risk bound for $\hat f_m$. Then, we propose a data-driven choice $\hat m_k$ of the dimension $m$, leading to an adaptive estimator $\hat f_{\hat m_k}$. Using Sobolev-Laguerre regularity spaces, we determine upper bounds for the rate of convergence of the $L^2$-risk. Then, we study lower bounds and prove that upper and lower bounds match up to a logarithmic term. The lower bound on Sobolev-Laguerre balls is difficult to obtain and requires several technical steps. We start by proving it in the case of direct observations of the $X_i$'s, that is, in the simple density model, and then we obtain it for model (1) when $k = 1$. To avoid more technical developments, we only indicate how to extend it to all integers $k$.
In Section 2, we describe the Laguerre basis, build the projection estimators of $f$ and provide the upper bound on their $L^2$-risk. This leads to the adaptive procedure. In Section 3, we introduce the Sobolev-Laguerre regularity spaces and obtain upper bounds on the rate of convergence of the projection estimators on Sobolev-Laguerre balls. To prove lower bounds, one can follow the general scheme described, e.g., in Tsybakov (2009). However, in the situation considered here, it is more natural to construct alternatives as finite combinations of Laguerre functions with coefficients taking values in $\{0,1\}$. Such a construction makes the problem of attributing a hypothesis to a Sobolev-Laguerre ball straightforward. Then the lower bounds are obtained via a modification of the Hamming distance and a corresponding extension of the Varshamov-Gilbert bound. In Section 4, we implement the adaptive estimators of $f$, based on direct observations $X_1,\dots,X_n$ and on multiplicatively censored observations $Y_1,\dots,Y_n$, for $k = 1, 2$ and for various densities $f$. The method provides very good results for direct observations, which remain convincing for censored data. Extensions and concluding remarks are given in Section 5.

2. Projection estimators in the Laguerre basis
2.1. Laguerre basis. Below we denote the scalar product and the $L^2$-norm on $L^2(\mathbb{R}^+)$ by
$\langle s, t\rangle = \int_0^{+\infty} s(u)t(u)\,du, \qquad \|t\|^2 = \int_0^{+\infty} t^2(u)\,du.$
Consider the Laguerre polynomials $(L_j)$ and the Laguerre functions $(\varphi_j)$ given by
$L_j(x) = \sum_{\ell=0}^{j}\binom{j}{\ell}\frac{(-x)^\ell}{\ell!}, \qquad \varphi_j(x) = \sqrt{2}\,L_j(2x)\,e^{-x}\,\mathbf{1}_{x\ge 0}, \quad j \ge 0.$
The collection $(\varphi_j)_{j\ge0}$ constitutes a complete orthonormal system on $L^2(\mathbb{R}^+)$ and is such that (see Abramowitz and Stegun (1964)):
(2) $\forall j \ge 0,\ \forall x \in \mathbb{R}^+, \quad |\varphi_j(x)| \le \sqrt{2}.$
We assume that $f \in L^2(\mathbb{R}^+)$, so that we can develop $f$ on the Laguerre basis: $f = \sum_{j\ge0} a_j(f)\,\varphi_j$ with $a_j(f) = \langle f, \varphi_j\rangle$. Let $S_m$ be the $m$-dimensional subspace of $L^2(\mathbb{R}^+)$ spanned by $(\varphi_0, \varphi_1,\dots,\varphi_{m-1})$. The function
(3) $f_m = \sum_{j=0}^{m-1} a_j(f)\,\varphi_j$
is the orthogonal projection of $f$ on $S_m$. Below, we define estimators $\hat a_j$ of $a_j(f)$ from the observations $Y_1,\dots,Y_n$. This leads to a collection of projection estimators $(\hat f_m = \sum_{j=0}^{m-1}\hat a_j\varphi_j,\ m \ge 1)$.
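Numerically, the basis is easy to evaluate. The sketch below (assuming SciPy's eval_laguerre and quad) computes $\varphi_j$ and the coefficients $a_j(f)$ by quadrature and checks orthonormality; it is an illustration, not part of the paper's code.

    import numpy as np
    from scipy.special import eval_laguerre
    from scipy.integrate import quad

    def phi(j, x):
        # phi_j(x) = sqrt(2) L_j(2x) exp(-x) on R+
        return np.sqrt(2.0) * eval_laguerre(j, 2.0 * x) * np.exp(-x)

    def a_coef(f, j):
        # a_j(f) = <f, phi_j>; quad handles the integrable tail on [0, inf)
        return quad(lambda x: f(x) * phi(j, x), 0.0, np.inf)[0]

    # Orthonormality check: <phi_i, phi_j> ~ delta_{ij}
    print(quad(lambda x: phi(2, x) * phi(2, x), 0.0, np.inf)[0])  # ~ 1.0
    print(quad(lambda x: phi(2, x) * phi(5, x), 0.0, np.inf)[0])  # ~ 0.0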

2.2. Preliminary properties and formulas. Let $f_{k,Y}$ denote the density of the $Y_i$'s given by (1).
A straightforward computation leads to
(4) $f_{k,Y}(y) = k\int_y^{+\infty}\frac{(x-y)^{k-1}}{x^k}\,f(x)\,dx, \quad y \ge 0.$
Moreover, since $0 \le (1 - y/x)^{k-1} \le 1$ for $0 < y \le x$, another simple computation, based on Hardy's inequality, yields:
(5) $\|f_{k,Y}\| \le 2k\,\|f\|.$
Thus, $f_{k,Y}$ belongs to $L^2(\mathbb{R}^+)$. In this paragraph, we prove that the coefficients $(a_j(f),\ j \ge 0)$ are linked with the coefficients $(a_j(f_{k,Y}) = \langle f_{k,Y}, \varphi_j\rangle,\ j \ge 0)$ of the density $f_{k,Y}$ on the Laguerre basis by a linear relation. This requires preliminary steps.
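Formula (4) can be checked by simulation. In the sketch below (our Gamma example again, with k = 2), the quadrature evaluation of (4) is compared with a histogram of simulated Y's; the discrepancy is of the order of the Monte Carlo error.

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import gamma

    k, n = 2, 200_000
    f = gamma(a=4.0, scale=0.5).pdf

    def f_kY(y):
        # f_{k,Y}(y) = k * int_y^inf (x - y)^{k-1} x^{-k} f(x) dx, formula (4)
        return k * quad(lambda x: (x - y) ** (k - 1) * x ** (-k) * f(x), y, np.inf)[0]

    rng = np.random.default_rng(1)
    y = rng.gamma(4.0, 0.5, n) * rng.beta(1.0, k, n)
    hist, edges = np.histogram(y, bins=50, range=(0.0, 6.0), density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    print(max(abs(hist[i] - f_kY(m)) for i, m in enumerate(mids)))  # small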
Let us remark that a density satisfying (4) is $k$-monotone, i.e. $(-1)^\ell f_{k,Y}^{(\ell)}$ is nonincreasing and convex for $\ell = 0,\dots,k-2$ if $k \ge 2$, and simply nonincreasing if $k = 1$. This property is proved in Williamson (1956). Therefore, model (1) covers the case of observations with $k$-monotone densities. Note that $k$-monotone densities are considered in Balabdaoui and Wellner (2007, 2010) or Chee and Wang (2014), from the point of view of estimating $f_{k,Y}$ (not $f$) under the $k$-monotonicity constraint.
In Proposition 2.1 below, we state an inversion formula giving $f$ from $f_{k,Y}$ defined by (4), proved in Williamson (1956). For the convenience of the reader, we give a proof in the appendix.
Proposition 2.1. Let $f_{k,Y}$ and $f$ be linked by formula (4) and set $F_{k,Y}(y) = \int_0^y f_{k,Y}(u)\,du$. Then, for $x > 0$,
(6) $f(x) = \frac{(-1)^k}{k!}\,x^k\,f_{k,Y}^{(k)}(x),$
and (7) gives the corresponding relation between the cumulative distribution functions $F$ and $F_{k,Y}$.
Note that, setting $f_{0,Y} = f$, $F_{0,Y} = F$, these formulae contain the case where $Y_i = X_i$ ($U_i \equiv 1$). So, below, we consider the case $k = 0$ in our results as the case of direct observations of the $X_i$'s. With the two following propositions, we give the links between the coefficients of $f$ and $f_{k,Y}$ on the Laguerre basis.
Proposition 2.2. Assume that $E[X^{k-1}] < +\infty$. Then, for all $j \ge 0$ and $k \ge 1$,
(8) $a_j(f) = \frac{1}{k!}\,E\left[\left.\frac{d^k}{dy^k}\big(y^k\varphi_j(y)\big)\right|_{y=Y_1}\right].$
Proposition 2.2 provides a simple way of defining estimators of $a_j(f)$, by replacing the right-hand side of (8) by its empirical counterpart based on the observed $Y$-sample. Moreover, the proof relies explicitly on the fact that the $\varphi_j$'s are not compactly supported; this is due to the integrations by parts used to obtain the result. Proposition 2.3 hereafter gives another way of expressing the coefficients and is helpful for studying the rates of the estimators. Define the matrices $H_m^{(k)} = (h^{j,k}_\ell)_{0\le j\le m-1,\ 0\le\ell\le m+k-1}$ with size $m \times (m+k)$ by means of the expansions
(9) $y^p\,\varphi_j^{(p)}(y) = \sum_{\ell=0\vee(j-p)}^{j+p} b^{j,p}_\ell\,\varphi_\ell(y), \quad p \ge 0,$
and the $(b^{j,p}_\ell)$'s can be recursively computed by $b^{j,0}_\ell = \delta_{j,\ell}$ and
$b^{j,p+1}_\ell = \frac{\ell}{2}\,b^{j,p}_{\ell-1} - \frac{2p+1}{2}\,b^{j,p}_\ell - \frac{\ell+1}{2}\,b^{j,p}_{\ell+1}.$
Proposition 2.3. By convention, we set $\varphi_j = 0$ if $j \le -1$ and define the column vectors of coefficients of $f$ on $(\varphi_0,\dots,\varphi_{m-1})$ and of $f_{k,Y}$ on $(\varphi_0,\dots,\varphi_{m+k-1})$:
$\vec a_m(f) = (a_0(f),\dots,a_{m-1}(f))^t, \qquad \vec a_{m+k}(f_{k,Y}) = (a_0(f_{k,Y}),\dots,a_{m+k-1}(f_{k,Y}))^t.$
Then $\vec a_m(f) = H_m^{(k)}\,\vec a_{m+k}(f_{k,Y})$. Moreover, the coefficients $h^{j,k}_\ell$ satisfy
(10) $h^{j,k}_\ell = \sum_{p=0}^{k}\binom{k}{p}\frac{1}{p!}\,b^{j,p}_\ell.$
For each $k$, the coefficients have to be computed. In our simulations (Section 4), we use the two values $k = 1, 2$ and the coefficients are the following. For $k = 1$, $[H_m^{(1)}]_{j,\ell} = 0$ if $\ell \ne j-1, j, j+1$, and
(11) $[H_m^{(1)}]_{j,j-1} = -\frac{j}{2}, \quad [H_m^{(1)}]_{j,j} = \frac12, \quad [H_m^{(1)}]_{j,j+1} = \frac{j+1}{2}.$
For $k = 2$, $[H_m^{(2)}]_{j,\ell} = 0$ if $\ell \notin \{j-2,\dots,j+2\}$, and
$[H_m^{(2)}]_{j,j-2} = \frac{j(j-1)}{8}, \quad [H_m^{(2)}]_{j,j-1} = -\frac{j}{2}, \quad [H_m^{(2)}]_{j,j} = \frac{1-j-j^2}{4}, \quad [H_m^{(2)}]_{j,j+1} = \frac{j+1}{2}, \quad [H_m^{(2)}]_{j,j+2} = \frac{(j+1)(j+2)}{8}.$
For the study of the risk bounds, we need to evaluate two norms of the matrix $H_m^{(k)}$. We recall their definitions. The squared spectral radius of a matrix $A$, $\rho^2(A) = \lambda_{\max}(A^tA)$, is equal to the largest eigenvalue of $A^tA$, where $A^t$ denotes the transpose of $A$. The squared Frobenius norm of $A$ is given by $|A|_F^2 = \mathrm{Tr}(A^tA)$, where $\mathrm{Tr}(M)$ is the trace of the matrix $M$. The following result is deduced from Proposition 2.3.
Corollary 2.1. For $m \ge 1$ and $k \ge 0$, there exist constants $c(k)$, $C(k)$ depending on $k$ only, such that
$c(k)\,m^{2k} \le \rho^2(H_m^{(k)}) \le C(k)\,m^{2k} \quad\text{and}\quad c(k)\,m^{2k+1} \le |H_m^{(k)}|_F^2 \le C(k)\,m^{2k+1}.$
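For k = 1, the matrix in (11) is banded and both norms are easy to evaluate; the sketch below illustrates the orders stated in Corollary 2.1 (an illustration with our own naming, not the paper's code).

    import numpy as np

    def H1(m):
        # m x (m+1) matrix of (11): [H]_{j,j-1} = -j/2, [H]_{j,j} = 1/2,
        # [H]_{j,j+1} = (j+1)/2, all other entries zero (case k = 1)
        H = np.zeros((m, m + 1))
        for j in range(m):
            if j >= 1:
                H[j, j - 1] = -j / 2.0
            H[j, j] = 0.5
            H[j, j + 1] = (j + 1) / 2.0
        return H

    H = H1(20)
    rho2 = np.linalg.eigvalsh(H @ H.T).max()  # squared spectral radius rho^2(H)
    frob2 = np.sum(H ** 2)                    # |H|_F^2 = Tr(H H^t)
    print(rho2, frob2)  # of orders m^2 and m^3 respectively when k = 1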
2.3. Projection estimator and upper risk bound. Proposition 2.3 leads us to define a collection of projection estimators of $f$ by:
(12) $\hat f_m = \sum_{j=0}^{m-1}\hat a_j\,\varphi_j, \quad (\hat a_0,\dots,\hat a_{m-1})^t = H_m^{(k)}\,\big(\hat a_0(f_{k,Y}),\dots,\hat a_{m+k-1}(f_{k,Y})\big)^t, \quad \hat a_\ell(f_{k,Y}) = \frac1n\sum_{i=1}^n \varphi_\ell(Y_i).$
Note that, by Proposition 2.2, we have the other formula
(13) $\hat a_j = \frac{1}{n\,k!}\sum_{i=1}^n \left.\frac{d^k}{dy^k}\big(y^k\varphi_j(y)\big)\right|_{y=Y_i}.$
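A minimal sketch of (12) for k = 1, reusing phi and H1 from the sketches above: the Laguerre coefficients of $f_{1,Y}$ are estimated empirically and mapped through $H_m^{(1)}$.

    import numpy as np

    def fit_laguerre(y, m):
        # empirical coefficients of f_{1,Y} on (phi_0, ..., phi_m), then (12)
        a_hat_Y = np.array([phi(j, y).mean() for j in range(m + 1)])
        return H1(m) @ a_hat_Y          # estimated (a_0(f), ..., a_{m-1}(f))

    def f_hat(x, a_hat):
        return sum(a * phi(j, x) for j, a in enumerate(a_hat))

    a_hat = fit_laguerre(y, m=8)        # y simulated with k = 1 as above
    grid = np.linspace(0.01, 6.0, 200)
    estimate = f_hat(grid, a_hat)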
The following proposition gives the risk bound for the estimator $\hat f_m$.
Proposition 2.4. Let $\hat f_m$, $f_m$ be given by (12) and (3). Then we have, for all $k, m \ge 0$,
$E\|\hat f_m - f\|^2 \le \|f - f_m\|^2 + \frac{1}{n}\left(2(m+k)\,\rho^2(H_m^{(k)}) \wedge \|f_{k,Y}\|_\infty\,|H_m^{(k)}|_F^2\right),$
where $x \wedge y = \inf(x, y)$. Moreover, it holds
$E\|\hat f_m - f\|^2 \le \|f - f_m\|^2 + C(k)\,\frac{m^{2k+1}}{n}.$
Let us discuss the two terms in the infimum appearing in the first bound of Proposition 2.4. In light of Corollary 2.1, the first one is of order $m^{2k+1}/n$ without any condition on $f_{k,Y}$; the second one may be smaller, but $\|f_{k,Y}\|_\infty$ may be infinite. On the other hand, the two terms have the same order in $m$, given in the second inequality of Proposition 2.4.

2.4. Adaptive estimation. The risk bound decomposition of Proposition 2.4 classically involves a squared bias term $\|f - f_m\|^2 = \sum_{j\ge m} a_j^2(f)$, which is decreasing with $m$, and a variance term of order $m^{2k+1}/n$, which is increasing with $m$. Therefore, to select $m$ relevantly, we have to perform a compromise. This can be done asymptotically by evaluating rates of convergence (see below) or, as we do now, on a finite sample by a model selection strategy. In view of the discussion on the risk bound, we define, for $k \ge 0$,
(14) $\hat m_k = \arg\min_{m\in\mathcal{M}_n^{(k)}}\left(-\|\hat f_m\|^2 + \mathrm{pen}_k(m)\right), \qquad \mathrm{pen}_k(m) = \kappa\,\frac{(m+k)\,\rho^2(H_m^{(k)})}{n},$
where $\mathcal{M}_n^{(k)}$ is a finite collection of admissible dimensions. Note that the definition of $\hat m_k$ mimics the squared-bias/variance compromise, as $-\|\hat f_m\|^2 = -\sum_{j=0}^{m-1}\hat a_j^2$ estimates $-\|f_m\|^2 = \|f - f_m\|^2 - \|f\|^2$; the quantity $\rho^2(H_m^{(k)})$ is obtained by a numerical algorithm (function eigs applied to $(H_m^{(k)})^t H_m^{(k)}$).
Theorem 2.1. Assume that $E(1/X) < +\infty$. Let $\hat f_m$ be given by (12) and $\hat m_k$ by (14). There exists a constant $\kappa_0$ such that, for any $\kappa \ge \kappa_0$, we have
$E\|\hat f_{\hat m_k} - f\|^2 \le C_1\inf_{m\in\mathcal{M}_n^{(k)}}\left(\|f - f_m\|^2 + \mathrm{pen}_k(m)\right) + \frac{C_2}{n},$
where $C_1$ is a numerical constant ($C_1 = 4$ suits) and $C_2$ depends on $k$ and $E(1/X)$.
It follows from Theorem 2.1 that the estimator $\hat f_{\hat m_k}$ is adaptive, in the sense that its risk automatically realizes the squared-bias/variance compromise.
The constant $\kappa_0$ provided by the proof is generally not optimal, and finding the optimal theoretical value of $\kappa$ in the penalty is far from easy (see for instance Birgé and Massart (2007) in a Gaussian regression model). This is why it is standard to calibrate the value of $\kappa$ in the penalty by preliminary simulations.
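A sketch of the selection rule (14) for k = 1, reusing fit_laguerre and H1 from above; the penalty shape is our reading of (14), and the value of kappa is treated as calibrated beforehand, in the spirit of the preceding remark.

    import numpy as np

    def select_m(y, m_max=30, kappa=0.25):
        # minimize -||f_hat_m||^2 + pen(m) over m = 1, ..., m_max,
        # with pen(m) = kappa * (m + 1) * rho^2(H_m^(1)) / n  (assumed form)
        n = len(y)
        crit = []
        for m in range(1, m_max + 1):
            a_hat = fit_laguerre(y, m)
            H = H1(m)
            rho2 = np.linalg.eigvalsh(H @ H.T).max()
            pen = kappa * (m + 1) * rho2 / n
            crit.append(-np.sum(a_hat ** 2) + pen)  # ||f_hat_m||^2 = sum a_hat_j^2
        return 1 + int(np.argmin(crit))

    m_sel = select_m(y)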

3. Rates of convergence on Sobolev-Laguerre balls
We now study the asymptotic point of view, to find the dimension $m_{\rm opt}$ which realizes the bias-variance compromise in the risk bound given in Proposition 2.4. We have already identified the rate of the variance term as $m^{2k+1}/n$; we now look at the bias term $\|f - f_m\|^2$. Classically, the bias rate is evaluated by choosing a regularity space for the function $f$. Sobolev-Laguerre spaces are well fitted to our framework.
3.1. Sobolev-Laguerre spaces. For $s \ge 0$, the Sobolev-Laguerre space with index $s$ (see Bongioanni and Torrea (2007)) is defined by:
(15) $W^s = \left\{h \in L^2(\mathbb{R}^+),\ |h|_s^2 < +\infty\right\}, \quad\text{where}\quad |h|_s^2 = \sum_{j\ge0} j^s\,a_j^2(h), \quad a_j(h) = \langle h, \varphi_j\rangle.$
For $s$ integer, the property $|h|_s^2 < +\infty$ can be linked with regularity properties of the function $h$; we give details in the Appendix. We define the ball
$W^s(D) = \{h \in W^s,\ |h|_s^2 \le D\}.$
3.2. Upper rates. We can deduce from Proposition 2.4 the rates of convergence of the estimator on Sobolev-Laguerre balls. For $f \in W^s(D)$, the bias satisfies $\|f - f_m\|^2 = \sum_{j\ge m} a_j^2(f) \le D\,m^{-s}$.
Proposition 3.1. Let $\hat f_m$ be given by (12). Then, choosing $m_{\rm opt} = [n^{1/(s+2k+1)}]$, we have
$\sup_{f\in W^s(D)} E\|\hat f_{m_{\rm opt}} - f\|^2 \le C(D,s,k)\,n^{-s/(s+2k+1)},$
where $C(D,s,k)$ is a constant depending on $D$, $s$ and $k$.
The rate may be interpreted as follows: we have an inverse problem, where $s$ measures the smoothness of $f$ and $k$ the degree of ill-posedness.
For direct observations of $X_1,\dots,X_n$ ($k = 0$), this rate is the same as the one obtained by Juditsky and Lambert-Lacroix (2004) for the estimation of a density on $\mathbb{R}$ over Hölder classes of densities.
Faster rates of convergence may be obtained if the bias is smaller. Exponential distributions provide examples of such a case. If $X$ has exponential distribution $\mathcal{E}(\theta)$, $\theta > 0$, then the coefficients are given by $a_j(f) = \sqrt{2}\,\frac{\theta}{\theta+1}\left(\frac{\theta-1}{\theta+1}\right)^j$ and the bias can be explicitly computed:
$\|f - f_m\|^2 = \sum_{j\ge m} a_j^2(f) = \frac{2\theta^2}{(\theta+1)^2}\,\frac{\left(\frac{\theta-1}{\theta+1}\right)^{2m}}{1-\left(\frac{\theta-1}{\theta+1}\right)^2}.$
Then the bias is exponentially decreasing and, taking $m$ of order $\log n$, the rate of convergence is of order $(\log n)^{2k+1}/n$. The result can be extended to Gamma and mixed Gamma densities, see Comte and Genon-Catalot (2015), Mabon (2015). Thus, the Laguerre basis method provides excellent rates for the class of mixed Gamma densities. Nevertheless, we stress that the adaptive procedure does not require any knowledge of the rate of the bias: it automatically realizes the finite-sample bias-variance compromise and also automatically reaches the best possible asymptotic rate. So far, we have only used that the $\varphi_j$'s are bounded. However, Szegő (1975, pp. 198 and 241) gives the following asymptotic bound: for all $a > 0$, $\sup_{x>a}|\varphi_j(x)| \le C\,j^{-1/4}$. Therefore, for densities with support $[a, +\infty[$, $a > 0$, we have $\sum_{0\le j\le m} E(\varphi_j^2(Y_1)) \le C'\,m^{1/2}$ and the variance term of $\hat f_m$ has order $O(m^{2k+1/2}/n)$ instead of $O(m^{2k+1}/n)$. By choosing $m_{\rm opt} = [n^{1/(s+2k+1/2)}]$, the upper rate becomes, on this restricted class of densities, of order $O(n^{-s/(s+2k+1/2)})$. Lower bounds for this class would require a completely different proof.
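The closed form of the $a_j(f)$ in the exponential case is easy to verify numerically (phi as in the sketch of Section 2.1; the closed form above is our computation, so the check is worth running).

    import numpy as np
    from scipy.integrate import quad

    theta = 3.0
    f_exp = lambda x: theta * np.exp(-theta * x)
    for j in range(5):
        closed = np.sqrt(2.0) * theta / (theta + 1) * ((theta - 1) / (theta + 1)) ** j
        numeric = quad(lambda x: f_exp(x) * phi(j, x), 0.0, np.inf)[0]
        print(j, closed, numeric)  # agree to quadrature accuracy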

3.3. Lower bounds. We prove that the upper rate obtained in Proposition 3.1 is optimal on Sobolev-Laguerre balls. This turns out to be unexpectedly difficult. We first treat the case $k = 0$ ($U_i \equiv 1$, direct observations of the $X_i$'s). Then, we deal with $k = 1$ and give indications on how to extend the result to all $k > 1$. The upper bound matches the lower bound up to a logarithmic term.
Theorem 3.1. Assume that $s$ is an integer, $s > 1$, and that $X_1,\dots,X_n$ are observed. Then, for any estimator $\hat f_n$, for any $\epsilon > 0$ and for $n$ large enough,
$\sup_{f\in W^s(D)} E\|\hat f_n - f\|^2 \ge c\,\psi_n, \qquad \psi_n = n^{-s/(s+1)}\,(\log n)^{-(1+\epsilon)/(1+s)}.$
The proof is based on Theorem 2.7 in Tsybakov (2009) and involves several steps. The main difficulty of the construction is to ensure that the proposed density alternatives are really densities on $\mathbb{R}^+$, and in particular that they are nonnegative.
Next we consider the case $k = 1$; the step from $k = 0$ (direct observation of $X$) to $k = 1$ suggests how to get a general result, see Remark 6.1 in the proof. However, given the technicality of the proof, we detail only the case $k = 1$.
Theorem 3.2. Assume that $s$ is an integer, $s > 1$, and consider the model $Y = XU$, with $X$ and $U$ independent, $U \sim \mathcal{U}([0,1])$ and only $Y$ observed. Then, for any estimator $\hat f_n$ of the density $f$ of $X$, for any $\epsilon > 0$ and for $n$ large enough,
$\sup_{f\in W^s(D)} E\|\hat f_n - f\|^2 \ge c\,\psi_n, \qquad \psi_n = n^{-s/(s+3)}\,(\log n)^{-(1+\epsilon)/(1+s/3)}.$
4. Simulations. All factors and parameters are chosen so that the true densities have the same scale. After preliminary simulation experiments, direct estimation is penalized with $\kappa_1 = 0.75$; for $U$ following a uniform distribution on $[0,1]$, we use $\kappa_2 = 0.25$, and $\kappa_3 = 0.025$ for $U$ following a $\beta(1,2)$ distribution. Beams of estimators are given in Figures 1-2 and clearly show the performance of the method via variability bands. The Laguerre basis provides excellent estimation when using direct data, and the problem gets more difficult in the presence of censoring. Increasing $k$ (we have $k = 1$ when $U \sim \mathcal{U}([0,1])$ and $k = 2$ when $U \sim \beta(1,2)$) makes the problem more difficult; this is why Gamma mixtures are hard to reconstruct in the presence of multiplicative censoring (see Figure 2). Selected dimensions can be of various orders (between 3 and 12 in our examples) and can vary or be very stable (see the standard deviations).
Table 1 gives the Mean Integrated Squared Error (MISE) for two sample sizes ($n = 400$ and $n = 2000$) and the three cases for the same $X$-sample; the ISE is computed on the interval of observation. The kernel estimator implemented for comparison is obtained via the function ksdensity of Matlab. The projection method is in general better than the kernel estimator, with a slight improvement for models (iii) and (iv), and a much more important one in the Gamma and mixed Gamma cases of models (i) and (ii). This was expected, as theoretical rates are better for Gamma or mixed Gamma densities when using the Laguerre projection method. Clearly, the inverse problem faced in the multiplicative censoring case makes estimation more difficult and the MISEs higher.
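For completeness, a sketch of the Monte Carlo MISE computation underlying Table 1, built from the earlier sketches (Gamma example; the grid, number of replications and constants are our choices, not the paper's).

    import numpy as np
    from scipy.stats import gamma

    def mise(n_rep=100, n=400, k=1):
        # average ISE over n_rep replications; ISE computed on the
        # observation range by a simple Riemann sum on a uniform grid
        true_pdf = gamma(a=4.0, scale=0.5).pdf
        ises = []
        for _ in range(n_rep):
            x = rng.gamma(4.0, 0.5, n)
            y = x * rng.beta(1.0, k, n)
            a_hat = fit_laguerre(y, select_m(y))
            grid = np.linspace(1e-3, y.max(), 400)
            err = (f_hat(grid, a_hat) - true_pdf(grid)) ** 2
            ises.append(np.sum(err) * (grid[1] - grid[0]))
        return np.mean(ises), np.std(ises)

    print(mise())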

5. Extensions and concluding remarks
In this paper, we propose a nonparametric adaptive estimator of the density $f$ of $X_i$ in the model $Y_i = X_i U_i$, where the $X_i$'s are i.i.d. nonnegative random variables and the sequences $(U_i)_i$ and $(X_i)_i$ are independent. We develop the case $U_i \sim \beta(1,k)$ for $k \in \mathbb{N}$, where $k = 0$ corresponds, by convention, to the direct observation of the $X_i$'s (i.e. $U_i \equiv 1$). Using a Laguerre basis, a collection of projection estimators is built and a data-driven procedure is proposed to select the dimension of the projection space. The risk bound on the adaptive estimator provides an automatic bias-variance compromise which is nonasymptotic. From the asymptotic point of view, we prove upper rates over Sobolev-Laguerre balls. We obtain lower bounds matching these rates up to a logarithmic factor, the proof of which requires specific extensions of the classical scheme.
The method can be extended to other noise distributions. As in Chesneau (2013), we can consider that $U_i = \prod_{\ell=1}^{L} V_i^{(\ell)}$ with the $V_i^{(\ell)}$'s i.i.d. and uniform on $[0,1]$. Then, denoting by $f_Y^{(L)}$ the density of $Y_i$, Proposition 2.3 applies iteratively and links the coefficients of $f$ to those of $f_Y^{(L)}$ through products of the matrices $H^{(1)}$. Propositions 2.1 and 2.4 can be generalized without difficulty.
Another possible extension of the noise distribution is to consider that $U_i$ follows a $\beta(r,k)$ distribution, $r \ge 1$. Indeed, an inversion formula extending Proposition 2.1 holds. Denoting by $f_{r,k,Y}$ the density of $Y$, we get, analogously to (4),
$f_{r,k,Y}(y) = \frac{y^{r-1}}{B(r,k)}\int_y^{+\infty}\frac{(x-y)^{k-1}}{x^{r+k-1}}\,f(x)\,dx,$
and the inversion formula
$f(x) = \frac{(-1)^k\,B(r,k)}{(k-1)!}\,x^{r+k-1}\,\frac{d^k}{dx^k}\left(\frac{f_{r,k,Y}(x)}{x^{r-1}}\right).$
Therefore, we can obtain an analogue of Proposition 2.3 and develop a complete study.
It is worth stressing that the model $Z_i = X_i U_i + V_i$ can be treated by our approach: Laguerre functions are a convenient tool for deconvolution on $\mathbb{R}^+$, as shown in Mabon (2015), and a precise description of the strategy in this model can be provided. Another way of treating the subject could be to take the logarithm in (1) and estimate the density of $\log(X)$ by deconvolution (mainly Fourier methods). This method can work for a large class of noise distributions. On the other hand, the function which is then estimated is $f_{\log(X)}$, the density of $\log(X)$. The relation $f_X(x) = f_{\log(X)}(\log(x))/x$ implies that the resulting estimator is not defined at $0$, and the integrated risk has to be computed on $[a, +\infty[$, with $a > 0$. This is a significant drawback and justifies the use of the Laguerre strategy.
6. Proofs
6.1. Proof of Propositions 2.2 and 2.3 for k = 1. We first look at the case $k = 1$ before the general $k$-monotone case. By Proposition 2.1, $f(y) = -y\,f_{1,Y}'(y)$, so that, by integration by parts (the boundary term $y\,\varphi_j(y)\,f_{1,Y}(y)$ being null at $0$ and at $+\infty$),
$a_j(f) = -\int_0^{+\infty} y\,f_{1,Y}'(y)\,\varphi_j(y)\,dy = \int_0^{+\infty}\left(\varphi_j(y) + y\,\varphi_j'(y)\right)f_{1,Y}(y)\,dy = E\left[\varphi_j(Y_1) + Y_1\,\varphi_j'(Y_1)\right].$
This yields (8) for $k = 1$.
Next, we have to compute $y\varphi_j'(y)$, i.e. $\sqrt{2}\,e^{-y}\big(2y\,L_j'(2y) - y\,L_j(2y)\big)$ or, setting $t = 2y$, $\sqrt{2}\,e^{-t/2}\big(t\,L_j'(t) - \tfrac{t}{2}\,L_j(t)\big)$. Combining relations (17)-(19), we get
(20) $y\,\varphi_j'(y) = \frac{j+1}{2}\,\varphi_{j+1}(y) - \frac12\,\varphi_j(y) - \frac{j}{2}\,\varphi_{j-1}(y).$
Thus, $b^{j,1}_\ell = 0$ for $\ell \ne j-1, j, j+1$ and
$b^{j,1}_{j-1} = -\frac{j}{2}, \quad b^{j,1}_j = -\frac12, \quad b^{j,1}_{j+1} = \frac{j+1}{2}.$
This gives the result for $k = 1$. ✷
For general $k$, we start from $a_j(f) = \frac{(-1)^k}{k!}\int_0^{+\infty} y^k\,f_{k,Y}^{(k)}(y)\,\varphi_j(y)\,dy$ and, by integration by parts, we have
$a_j(f) = \frac{1}{k!}\int_0^{+\infty}\big(y^k\varphi_j(y)\big)^{(k)}\,f_{k,Y}(y)\,dy,$
provided that all terms appearing in the integrations by parts are null, i.e. that condition (22) holds. Therefore, we obtain Formula (8) after proving that (22) holds.
Proof of (22): Let $\Sigma_t(y)$ be defined as the sum appearing in (22). Using the Leibniz formula and interchanging sums yields an expression of $\Sigma_t(y)$ on the basis $(\varphi_\ell)$. As $\varphi_j^{(t)}(y)$ is continuous at $0$ and tends to $0$ at $+\infty$, we only need to prove that $\Sigma_t(y)$ tends to $0$ at $0$ and at $+\infty$. We look at the coefficient of each $\varphi_\ell$ in turn. By (7), $\Sigma_0(y) = (-1)^k\,k!\,\big(F(y) - F_{k,Y}(y)\big)$. As $F$ and $F_{k,Y}$ are continuous c.d.f.'s on $\mathbb{R}^+$, they are null at $0$ and both tend to $1$ at $+\infty$. Therefore, as $y$ tends to $0$ and to $+\infty$, $\Sigma_0(y) \to 0$.
Proof of Lemma 6.1. The initialization of (27) for $p = 0$ is obvious. Formula (20) shows that the induction formula (28) holds from $p = 0$ to $p = 1$. Next, we differentiate (27) and multiply by $y$, to get
$y^{p+1}\,\varphi_j^{(p+1)}(y) + p\,y^p\,\varphi_j^{(p)}(y) = \sum_{\ell=0\vee(j-p)}^{j+p} b^{j,p}_\ell\,y\,\varphi_\ell'(y).$
Now using (20) and taking into account that $-p\,y^p\varphi_j^{(p)}(y) = -p\sum_{\ell=0\vee(j-p)}^{j+p} b^{j,p}_\ell\,\varphi_\ell(y)$ gives formula (28). Inequality (29) is obtained by straightforward induction. The proof of Proposition 2.2 is now complete. ✷
6.4. Proof of Corollary 2.1. We consider first $m \ge k$ and use (10) to bound the entries of $H_m^{(k)}$. Next, write that $\big(\sum_{\ell=1}^{m+k} x_\ell\big)^2 \le (2k+1)\sum_{\ell=1}^{m+k} x_\ell^2$, since each row of $H_m^{(k)}$ has at most $2k+1$ nonzero entries. Therefore, we get the upper bounds. If $m < k$, the bound obviously holds. Now we prove the lower bound on $\rho^2(H_m^{(k)})$. The coefficients $b^{j,p}_\ell$ are zero if $\ell > j+p$ (see formula (27)). Therefore, as $h^{j,1}_{j+1} = (j+1)/2$, we get, by elementary induction, that $h^{j,k}_{j+k} = 2^{-k}\binom{j+k}{k}$. We obtain $\rho^2(H_m^{(k)}) \ge \big(h^{m-1,k}_{m+k-1}\big)^2 \ge c(k)\,m^{2k}$, which ends the proof. ✷
6.5. Proof of Proposition 2.4. The risk bound of the estimator can be written as follows:
$E\|\hat f_m - f\|^2 = \|f - f_m\|^2 + E\|\hat f_m - f_m\|^2,$
where $f_m = \sum_{j=0}^{m-1} a_j(f)\varphi_j$ is the projection of $f$ on $S_m = \mathrm{span}(\varphi_0,\dots,\varphi_{m-1})$ and $\|f - f_m\|^2$ is the usual bias of a projection estimator. Next we have, see (12),
$E\|\hat f_m - f_m\|^2 = \sum_{j=0}^{m-1}\mathrm{Var}(\hat a_j) \le \frac{\rho^2(H_m^{(k)})}{n}\sum_{\ell=0}^{m+k-1}\mathrm{Var}(\varphi_\ell(Y_1)).$
So $\mathrm{Var}(\varphi_\ell(Y_1)) \le E[\varphi_\ell^2(Y_1)] \le 2$ gives the first bound. For the second one, we can write, if $\|f_{k,Y}\|_\infty < +\infty$,
$E[\varphi_\ell^2(Y_1)] = \int_0^{+\infty}\varphi_\ell^2(y)\,f_{k,Y}(y)\,dy \le \|f_{k,Y}\|_\infty\,\|\varphi_\ell\|^2 = \|f_{k,Y}\|_\infty;$
as the largest eigenvalue of the covariance matrix of $(\varphi_0(Y_1),\dots,\varphi_{m+k-1}(Y_1))$ is bounded by $\|f_{k,Y}\|_\infty$, this gives the second part of the bound. It follows from Corollary 2.1 that $m\,\rho^2(H_m^{(k)})$ and $|H_m^{(k)}|_F^2$ are both of order $m^{2k+1}$, but the second bound involves $\|f_{k,Y}\|_\infty$. This term is unknown, difficult to estimate, and an additional assumption is required to ensure its finiteness, for instance $E(1/X) < +\infty$ for $k = 1$. ✷
6.6. Proof of Theorem 2.1. In the proof, we omit the index $k$ in $\mathcal{M}_n^{(k)}$, $\mathrm{pen}_k(m)$ and $\hat m_k$. Let $M = \max\mathcal{M}_n$ be the maximal element of the collection. For $m \le M$, let $S_m = \{t \in \mathbb{R}^M,\ t = (t_1,\dots,t_m,0,\dots,0)\}$, and denote by $\|x\|_M$ the Euclidean norm in $\mathbb{R}^M$ and by $\langle\cdot,\cdot\rangle_M$ the associated scalar product. For $t \in S_m$, we denote by $t_m$ the vector of $\mathbb{R}^m$ built from the $m$ first coordinates of $t$ (those which are not necessarily zero). Thanks to the particular form of the matrices $H_m^{(k)}$, the vector $\hat f(m)$ in $\mathbb{R}^M$, with $m$ first coordinates $\hat a_0,\dots,\hat a_{m-1}$ and the following coordinates null, minimizes the empirical contrast over $S_m$. By definition of $\hat m$, we have inequality (31). Plugging the resulting bound into (31), choosing $p(m, m')$ such that $4p(m, m') \le \mathrm{pen}(m) + \mathrm{pen}(m')$ and using (30), we get a bound in which the constant $c$ depends on $k$ and on $\|f_Y\|_\infty = E(1/X)$. The proof of (32) follows the lines of the proof of Proposition 7.1 in Mabon (2015) and delivers the value of $\kappa_0$. Thus, we obtain the result announced in Theorem 2.1. ✷

Next we have, for any pair $\theta, \theta'$ of hypotheses with values in $\{0,1\}^K$, a control of the distance between $f_\theta$ and $f_{\theta'}$ in terms of $\delta(\theta, \theta') = \sum_{k=1}^{K}\mathbf{1}_{\theta_k \ne \theta'_k}$, which is the so-called Hamming distance. Now, to apply Theorem 2.7 p. 101 in Tsybakov (2009), we need to extend the Varshamov-Gilbert bound (see Lemma 2.9 p. 104 in Tsybakov (2009)) as follows.
Thus, in order to prove that $f_\theta$ is a nonnegative function, it is enough to show that (36) holds for any fixed $\lambda > 0$, $\mu > 0$ and for sufficiently large $a > 0$. Then, by taking $\lambda = \alpha$, $\mu = \beta$ and $\delta = \delta' K^{-\alpha}\log^{-\beta}(K)$ for a small enough constant $\delta' > 0$ not depending on $K$, we get the result. We proceed in two steps for the proof of (36). First, we study the supremum for large values of $x$, $2x \ge c\nu$, with $\nu = 4K + 2$ and $c > 0$, and then for intermediate values of $x$ ($2a < 2x \le b\nu$ with $b < 1$).
To prove (37), we first study the case µ = 0 and λ integer.
Corollary 6.2. Under the conditions of the previous lemma, we have a representation where, for any $x > c\nu$ with $c > 1$, the sequence $(a_n)$ is bounded (uniformly in $x$), positive and increasing in $n$ for $\nu = 4n + 2 \le x$.
Proof of Step 1. First, we prove (37) for $\mu = 0$ and $\lambda$ an integer. From (38), (39) and (40), we have a representation in which the sequence $(\rho_k(r)\,a_{\ell+k}(x))_k$ is nonnegative and nondecreasing and $a_{\ell+k}(x)$ is bounded. Inequality (37) for $\mu = 0$ and $\lambda$ an integer then follows from the next lemma.
Lemma 6.8. Let $K_1 < K_2$ be two integers and let $(\rho_n)$ be an increasing sequence of nonnegative numbers. Then, for any $x > 4K_2 + 2$, the announced comparison holds.
Proof. Due to the Abel summation theorem, we get the result, with $S_n$ denoting the corresponding partial sums.


Figure 1. True density $f$ of Model (i) (Gamma distribution) in bold (blue). 50 estimators of $f$; left: from direct observation of $X$, in dotted (red); middle: from observation of $Y = XU$ with $U \sim \mathcal{U}([0,1])$, in dotted (green); right: from observation of $Y = XU$ with $U \sim \beta(1,2)$, in dotted (green). First line: $n = 400$; second line: $n = 2000$. Above each plot, $\bar m_X$ (resp. $\bar m_{Y,1}$, resp. $\bar m_{Y,2}$) is the mean of the selected dimensions from $X$ (resp. from $Y$), with standard deviation in parentheses.

Figure 2. True density $f$ of Model (ii) (Mixed Gamma) on the first line, Model (iii) (Lognormal) on the second line, and Model (iv) (Beta distribution) on the third line, in bold (blue). Left: 50 estimators of $f$ from direct observation of $X$, in dotted (red). Middle: 50 estimators of $f$ from observation of $Y$ with $U \sim \mathcal{U}([0,1])$, in dotted (red). Right: 50 estimators of $f$ from observation of $Y$ with $U \sim \beta(1,2)$, in dotted (red); $n = 2000$ in all cases. Above each plot, $\bar m_X$ (resp. $\bar m_{Y,1}$, resp. $\bar m_{Y,2}$) is the mean of the selected dimensions for $X$ (resp. for $Y$), with standard deviation in parentheses.

Table 1. MISE × 1000 (with std × 1000 in parentheses) for 100 estimations of $f$ with kernel or Laguerre projection estimators in the case of direct observation of $X$, and with Laguerre projection in the cases where $Y = XU$ is observed with $U \sim \mathcal{U}([0,1])$ or $U \sim \beta(1,2)$.