Strong Uniform Convergence Rates of Wavelet Density Estimators with Size-Biased Data

This paper considers the strong uniform convergence of multivariate density estimators in the Besov space B^s_{p,q}(R^d) based on size-biased data. We provide convergence rates of wavelet estimators when the parameter μ is known or unknown, respectively. It turns out that the convergence rates coincide with those of Giné and Nickl (Uniform Limit Theorems for Wavelet Density Estimators, Ann. Probab., 37(4), 1605–1646, 2009) when the dimension d = 1, p = q = ∞, and ω(y) ≡ 1.


Introduction
Let Y_1, Y_2, ..., Y_n be independent and identically distributed (i.i.d.) continuous random variables defined on a probability space (Ω, F, P) with the common density function

g(y) = ω(y) f(y) / μ,  y ∈ R^d,  (1)

where ω denotes a known positive function, f stands for the unknown density function of the unobserved continuous random variable X, and μ = E[ω(X)] = ∫_{R^d} ω(y) f(y) dy < +∞. In this setup f and g are the target density and the weighted density function, respectively, and the resulting data are called size-biased data. We want to estimate the unknown density function f from the sequence of biased data Y_1, Y_2, ..., Y_n.

Wavelet methods are of interest in nonparametric statistics thanks to their ability to estimate a wide variety of unknown functions efficiently, especially those with discontinuities or sharp spikes. Hence, wavelet methods have been widely used for the density estimation model (1). Ramírez and Vidakovic [1] propose a linear wavelet estimator and show it to be L^2-consistent. Shirazi and Doosti [2] extend their work to the multivariate case. Chesneau, Dewan, and Doosti [3] relax the independence assumption to both positively and negatively associated cases and establish a convergence rate for the mean integrated squared error (MISE). An upper bound on the L^p (1 ≤ p < +∞) risk of wavelet estimation in the negatively associated case is given by Liu and Xu [4]. Kou and Guo [5] discuss the MISE of wavelet estimators in the strong mixing case. For the strong convergence of density estimation, Masry [6] studies strong convergence rates over a compact subset in the Besov space B^s_{p,q}(R^d) when ω(y) ≡ 1 (the model (1) then reduces to classical density estimation) and the sample is strong mixing. Recently, Giné and Nickl [7] investigated the same problem by wavelet methods and obtained optimal strong convergence rates in the Besov space B^s_{∞,∞}(R) for i.i.d. data. To our knowledge, there is no existing research on the strong uniform convergence for the model (1).
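As a minimal numerical sketch of the size-biased model (1) (not taken from the paper): the observed Y has density g(y) = ω(y)f(y)/μ with μ = E[ω(X)]. The target density f (a Beta(2,2) density on [0,1]) and the weight ω(y) = y + 0.5 below are illustrative assumptions chosen so that μ works out to 1 exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(y):      # target density of the unobserved X: Beta(2, 2) on [0, 1]
    return 6.0 * y * (1.0 - y)

def omega(y):  # known positive weight function (illustrative choice)
    return y + 0.5

# mu = integral of omega(y) f(y) dy over [0, 1] (midpoint rule);
# for these choices the integral equals 1.0 exactly
mid = (np.arange(100_000) + 0.5) / 100_000
mu = float(np.mean(omega(mid) * f(mid)))

# draw size-biased data Y ~ g(y) = omega(y) f(y) / mu by rejection sampling
def sample_g(n):
    c = (omega(mid) * f(mid)).max() / mu   # envelope over the uniform proposal
    out = np.empty(0)
    while out.size < n:
        y = rng.uniform(size=n)
        u = rng.uniform(size=n)
        out = np.concatenate([out, y[u * c < omega(y) * f(y) / mu]])
    return out[:n]

ys = sample_g(50_000)
# model check: E[mu / omega(Y)] = integral of f = 1 under model (1)
check = float(np.mean(mu / omega(ys)))
print(round(check, 2))
```

The final check uses the identity E[μ/ω(Y)] = ∫ (μ/ω(y)) · (ω(y)f(y)/μ) dy = ∫ f(y) dy = 1, which is the same cancellation that makes the wavelet coefficient estimators below unbiased.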
The aim of this paper is to discuss the strong uniform convergence rates of wavelet estimators in the Besov space B^s_{p,q}(R^d) based on size-biased data. First, we construct a linear wavelet estimator f̂_n when the parameter μ is known and give its convergence rate. However, μ is unknown in many practical applications. For this reason, an estimator μ̂_n of μ is given. We then develop a new linear wavelet estimator f̂*_n in which the parameter μ is replaced by μ̂_n. Finally, we establish the convergence rate of the estimator f̂*_n.
Throughout, the scaling function φ is assumed to satisfy Condition (S). Condition (S) is not very restrictive; examples include bounded and compactly supported measurable functions.

Daubechies scaling functions satisfy Condition (𝑆).
A wavelet basis can be used to characterize Besov spaces. The next lemma provides equivalent definitions of those spaces in terms of wavelet coefficients.

Lemma 1 ([8]). Let φ be m-regular, let ψ_ℓ (ℓ = 1, 2, ..., M, with M = 2^d − 1) be the corresponding wavelets, and let 0 < s < m. Then membership f ∈ B^s_{p,q}(R^d) is equivalent to the corresponding decay condition on the wavelet coefficients of f, and the Besov norm of f can be defined through those coefficients. We also need a classical inequality in the proofs of our theorems.

Estimation with Known μ
In this paper, we require supp Y_i ⊆ [0, 1]^d in the model (1). This is similar to Chesneau, Dewan, and Doosti [3], Liu and Xu [4], and Kou and Guo [5]. We choose the d-dimensional scaling function

φ(x) = Π_{i=1}^d D2N(x_i),

with D2N(·) being the one-dimensional Daubechies scaling function. Then φ is m-regular (m > 0) when N gets large enough. Note that D2N has compact support [0, 2N − 1], and the corresponding wavelet is compactly supported as well. A linear wavelet estimator is defined by

f̂_n(x) = Σ_k α̂_{j₀,k} φ_{j₀,k}(x),

where

α̂_{j₀,k} = (μ/n) Σ_{i=1}^n φ_{j₀,k}(Y_i)/ω(Y_i).

It follows from (1) that

E[α̂_{j₀,k}] = μ ∫_{[0,1]^d} (φ_{j₀,k}(y)/ω(y)) (ω(y)f(y)/μ) dy = ∫_{[0,1]^d} φ_{j₀,k}(y) f(y) dy = α_{j₀,k}.

This means α̂_{j₀,k} is an unbiased estimate of α_{j₀,k}. The following notations are needed to state our theorems: A ≲ B denotes A ≤ CB for some constant C > 0; A ≳ B means B ≲ A; A ∼ B stands for both A ≲ B and B ≲ A.

Remark 3. When ω(y) ≡ 1, our model reduces to the classical nonparametric density estimation, and our result is the same as the convergence rate in Masry [6]. On the other hand, our rate with d = 1 and p = q = ∞ coincides with the convergence rate in Theorem 3 of Giné and Nickl [7].
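A concrete sketch of the known-μ linear estimator (not from the paper): for simplicity it uses the Haar scaling function 1_[0,1) in dimension d = 1 rather than the Daubechies D2N family, and the toy setup ω(y) = y, f ≡ 1 on [0,1] (hence μ = 1/2 and observed density g(y) = 2y) is a hypothetical choice for checking that the μ/ω correction recovers f.

```python
import numpy as np

rng = np.random.default_rng(1)

def phi_jk(x, j, k):
    # dilated/translated Haar scaling function 2^{j/2} phi(2^j x - k),
    # with phi = 1_[0,1); Haar merely keeps the sketch self-contained
    t = 2.0 ** j * np.asarray(x, dtype=float) - k
    return 2.0 ** (j / 2.0) * ((t >= 0.0) & (t < 1.0))

def linear_estimator(ys, omega, mu, j0):
    # alpha_hat_{j0,k} = (mu/n) * sum_i phi_{j0,k}(Y_i) / omega(Y_i),
    # an unbiased estimate of alpha_{j0,k} = <f, phi_{j0,k}> under model (1)
    n = ys.size
    ks = np.arange(2 ** j0)          # supp f in [0, 1] => finitely many k
    alpha = np.array([mu / n * np.sum(phi_jk(ys, j0, k) / omega(ys))
                      for k in ks])
    return lambda x: sum(a * phi_jk(x, j0, k) for a, k in zip(alpha, ks))

# toy setup: f = 1 on [0,1] and omega(y) = y give mu = 1/2 and
# observed density g(y) = 2y, sampled exactly by Y = sqrt(U)
ys = np.sqrt(rng.uniform(size=20_000))
fhat = linear_estimator(ys, lambda y: y, mu=0.5, j0=3)
print(round(float(fhat(0.4)), 1))
```

Even though the raw sample concentrates near 1 (density 2y), the weights μ/ω(Y_i) downweight large observations so that f̂_n(x) stays close to the flat target density.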
A careful observation of (12) shows that the construction of f̂_n(x) depends strictly on μ, which must be known. However, μ is unknown in many practical applications, so we deal with the unknown case in the following section.

Estimation with Unknown μ
In this section, we provide a strong convergence rate of a wavelet estimator for the model (1) with unknown parameter μ. The first step is to give an estimator of μ from the given data Y_1, Y_2, ..., Y_n. Similar to Chesneau, Dewan, and Doosti [3] and Liu and Xu [4], we introduce

μ̂_n = [ (1/n) Σ_{i=1}^n 1/ω(Y_i) ]^{-1}.

By (1), E[1/ω(Y_1)] = ∫ (1/ω(y)) (ω(y)f(y)/μ) dy = 1/μ. Now, we define a practical linear wavelet estimator f̂*_n by replacing μ with μ̂_n in f̂_n.
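The harmonic-type mean used here can be sketched numerically. The setup below (ω(y) = y and f ≡ 1 on [0,1], so μ = 1/2 and observed density g(y) = 2y) is a hypothetical choice, not from the paper; it checks that μ̂_n recovers the true μ from biased data alone.

```python
import numpy as np

rng = np.random.default_rng(2)

def mu_hat(ys, omega):
    # mu_hat_n = [ (1/n) * sum_i 1/omega(Y_i) ]^{-1}; consistent because
    # E[1/omega(Y)] = integral (1/omega(y)) * omega(y) f(y) / mu dy = 1/mu
    return 1.0 / float(np.mean(1.0 / omega(ys)))

# toy setup: f = 1 on [0,1], omega(y) = y => mu = 1/2, g(y) = 2y;
# Y = sqrt(U) samples g exactly
ys = np.sqrt(rng.uniform(size=50_000))
est = mu_hat(ys, lambda y: y)
print(round(est, 2))
```

Plugging μ̂_n in place of μ in the coefficients α̂_{j₀,k} then yields the practical estimator f̂*_n, which needs no knowledge of μ.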