Article

Rare Event Analysis for Minimum Hellinger Distance Estimators via Large Deviation Theory

Anand N. Vidyashankar 1,* and Jeffrey F. Collamore 2
1 Department of Statistics, George Mason University, Fairfax, VA 22030, USA
2 Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen Ø, Denmark
* Author to whom correspondence should be addressed.
Entropy 2021, 23(4), 386; https://doi.org/10.3390/e23040386
Submission received: 22 February 2021 / Revised: 14 March 2021 / Accepted: 15 March 2021 / Published: 24 March 2021

Abstract: Hellinger distance has been widely used to derive objective functions that are alternatives to maximum likelihood methods. While the asymptotic distributions of these estimators have been well investigated, the probabilities of rare events induced by them are largely unknown. In this article, we analyze these rare event probabilities using large deviation theory under a potential model misspecification, in both one and higher dimensions. We show that these probabilities decay exponentially, characterizing their decay via a “rate function” which is expressed as a convex conjugate of a limiting cumulant generating function. In the analysis of the lower bound, in particular, certain geometric considerations arise that facilitate an explicit representation, also in the case when the limiting generating function is nondifferentiable. Our analysis involves the modulus of continuity properties of the affinity, which may be of independent interest.

1. Introduction

In a variety of applications, the use of divergence-based inferential methods is gaining momentum, as these methods provide robust alternatives to traditional maximum likelihood-based procedures. Since the work of [1,2], divergence-based methods have been developed for various classes of statistical models. A comprehensive treatment of these ideas is available, for instance, in [3,4]. The objective of this paper is to study the large deviation tail behavior of the minimum divergence estimators and, more specifically, the minimum Hellinger distance estimators (MHDE).
To describe the general problem, suppose $\Theta \subseteq \mathbb{R}^d$, and let $\mathcal{F} = \{ f_\theta(\cdot) : \theta \in \Theta \}$ denote a family of densities indexed by $\theta$. Let $\{ X_n : n \ge 1 \}$ denote a class of i.i.d. random variables, postulated to have a continuous density with respect to Lebesgue measure belonging to the family $\mathcal{F}$, and let $X$ be a generic element of this class. We denote by $g(\cdot)$ the true density of $X$.
Before providing an informal description of our results, we begin by recalling that the square of the Hellinger distance (SHD) between two densities $h_1(\cdot)$ and $h_2(\cdot)$ on $\mathbb{R}$ is given by

$$\mathrm{HD}^2(h_1, h_2) = \big\| h_1^{1/2} - h_2^{1/2} \big\|_2^2 = 2 - 2 \int_{\mathbb{R}} \big( h_1(x)\, h_2(x) \big)^{1/2} \, dx.$$

The quantity $\int_{\mathbb{R}} ( h_1(x)\, h_2(x) )^{1/2} \, dx$ is referred to as the affinity between $h_1(\cdot)$ and $h_2(\cdot)$ and is denoted by $A(h_1, h_2)$. Hence, the SHD between the postulated density and the true density is given by $\mathrm{SHD}(\theta) = \mathrm{HD}^2(f_\theta, g)$. When $\Theta$ is compact, it is known that there exists a unique $\theta_g \in \Theta$ minimizing $\mathrm{SHD}(\theta)$. Furthermore, when $g(\cdot) = f_{\theta_0}(\cdot)$ and $\mathcal{F}$ satisfies an identifiability condition, it is well known that $\theta_g$ coincides with $\theta_0$; cf. [1]. Turning to the sample version, we replace $g(\cdot)$ by $g_n(\cdot)$ in the definition of the SHD, obtaining the objective function $\mathrm{SHD}_n(\theta) = \mathrm{HD}^2(f_\theta, g_n)$ with

$$g_n(x) = \frac{1}{n b_n} \sum_{i=1}^n K\!\left( \frac{x - X_i}{b_n} \right), \qquad (1)$$

where the kernel $K(\cdot)$ is a probability density function, $b_n \to 0$, and $n b_n \to \infty$ as $n \to \infty$.
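As a concrete illustration of these two ingredients, the following minimal Python sketch builds the kernel density estimate $g_n$ and evaluates $\mathrm{SHD}_n(\theta)$ for a postulated $N(\theta, 1)$ family; the Gaussian kernel, the bandwidth rule, the simulated sample, and all function names are our own illustrative choices, not constructions from the paper.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

# Simulated i.i.d. sample from a "true" density g (here N(1, 1); illustrative).
rng = np.random.default_rng(0)
x_sample = rng.normal(loc=1.0, scale=1.0, size=500)
b_n = len(x_sample) ** (-1 / 5)  # one choice satisfying b_n -> 0, n*b_n -> infinity

def g_n(x, sample, b):
    """Kernel density estimate with a Gaussian kernel K and bandwidth b."""
    u = (x[:, None] - sample[None, :]) / b
    return stats.norm.pdf(u).mean(axis=1) / b

def shd_n(theta, sample, b, grid):
    """Empirical squared Hellinger distance SHD_n(theta) = 2 - 2 * A_n(theta)."""
    f_theta = stats.norm.pdf(grid, loc=theta, scale=1.0)  # postulated N(theta, 1)
    affinity = trapezoid(np.sqrt(f_theta * g_n(grid, sample, b)), grid)
    return 2.0 - 2.0 * affinity

grid = np.linspace(-5.0, 7.0, 2001)
print(shd_n(1.0, x_sample, b_n, grid))  # small near the true value
print(shd_n(3.0, x_sample, b_n, grid))  # larger away from it
```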
It is known that when the parameter space $\Theta$ is compact, there exists a unique $\hat{\theta}_n \in \Theta$ minimizing $\mathrm{SHD}_n(\theta)$, and that $\hat{\theta}_n$ converges almost surely to $\theta_g$ as $n \to \infty$; cf. [1]. Furthermore, under some natural assumptions,

$$n^{1/2} \big( \hat{\theta}_n - \theta_g \big) \xrightarrow{d} G, \qquad (2)$$

where, under the probability measure associated with $g(\cdot)$, $G$ is a Gaussian random vector with mean vector $\mathbf{0}$ and covariance matrix $\Sigma_g$. If $g(\cdot) = f_{\theta_0}(\cdot)$, then the covariance of $G$ coincides with the inverse of the Fisher information matrix $I(\theta_0)$, yielding statistical efficiency. When the true distribution $g(\cdot)$ does not belong to $\mathcal{F}$, we will call this the "model misspecified case," while when $g \in \mathcal{F}$, we will say that the "postulated model" holds.
In this paper, we focus on the large deviation behavior of $\{ \hat{\theta}_n : n \ge 1 \}$; namely, the asymptotic probability that the estimate $\hat{\theta}_n$ will achieve values within a set away from the central tendency described in (2). We establish results of the form

$$\log P_g\big( \hat{\theta}_n \in B \big) \approx -n \inf_{\theta \in B} I(\theta), \qquad (3)$$

for some "rate function" $I$ and a given Borel subset $B \subseteq \Theta$. Similar large deviation estimates for maximum likelihood estimators (MLE) have been investigated in [5,6,7], and for general M-estimators in [8,9]. These results allow for a precise description of the probabilities of Type I and Type II error in both the Neyman–Pearson and likelihood ratio test frameworks. Furthermore, large deviation bounds allow one to identify the best exponential rate of decrease of the Type II error amongst all tests that satisfy a bound on the Type I error, as in Stein's lemma (cf. [10]). Additional evidence of the importance of large deviation results for statistical inference has been described in [11] and in the book [12].
One of our initial goals was to derive sharp probability bounds for Type I and Type II error in the context of robust hypothesis testing using Hellinger deviance tests. This article is a first step towards this endeavor. A key issue that distinguishes our work from earlier works is that, in our case, the objective function is a nonlinear function of the smoothed empirical measure, and the analysis of this case requires more involved methods compared with those currently existing in the statistical literature on large deviations. Consistent with large deviation analysis more generally, we identify the rate function $I$ as the convex conjugate of a certain limiting cumulant generating function, although in our problem, we uncover a subtle asymmetry between the upper and lower bounds when our limiting generating function is nondifferentiable. In the classical large deviation literature, similar asymmetries have been studied in other one-dimensional contexts (e.g., [13]), although the statistical problem is still quite different, as the dependence on the parameter $\theta$ arises explicitly—inhibiting the use of convexity methods typically exploited in the large deviation literature—and hence requiring novel techniques.

1.1. Large Deviations

In this subsection we provide relevant definitions and properties from large deviation theory required in the sequel. In the following, $\mathbb{R}_+$ will denote the set of non-negative real numbers.
Definition 1.
A collection of probability distributions $\{ P_n : n \ge 1 \}$ on a topological space $(X, \mathcal{B})$ is said to satisfy the weak large deviation principle if

$$\limsup_{n \to \infty} \frac{1}{n} \log P_n(F) \le -\inf_{x \in F} I(x) \quad \text{for all closed sets } F \in \mathcal{B},$$

and

$$\liminf_{n \to \infty} \frac{1}{n} \log P_n(G) \ge -\inf_{x \in G} I(x) \quad \text{for all open sets } G \in \mathcal{B},$$

for some lower semicontinuous function $I : X \to [0, \infty]$. The function $I$ is called the rate function. If the level sets of $I$ are compact, we call $I$ a good rate function, and we say that $\{ P_n \}$ satisfies the large deviation principle (LDP).
We begin with a brief review of large deviation results for i.i.d. random variables and empirical measures. Let $\{ X_n \} \subseteq \mathbb{R}$ be an i.i.d. sequence of real-valued random variables, and let $P_n$ denote the distribution of the sample mean $\bar{X}_n$. If the moment generating function of $X_1$ is finite in a neighborhood of the origin, then Cramér's theorem states that $\{ P_n \}$ satisfies the LDP with good rate function $\Lambda^*$, where $\Lambda^*$ is the convex conjugate (or Legendre–Fenchel transform) of $\Lambda$, and where $\Lambda(\alpha) = \log E[ e^{\alpha X_1} ]$ is the cumulant generating function of $X_1$ (cf. [10], Section 2.2).
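To make the Legendre–Fenchel transform concrete, here is a small numerical sketch (our own illustrative example, not part of the paper) computing $\Lambda^*$ for a Bernoulli random variable, where the conjugate recovers the familiar relative-entropy rate for sample means.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cgf(a, p=0.5):
    """Cumulant generating function Lambda(a) = log E[exp(a X)] for X ~ Bernoulli(p)."""
    return np.log(1.0 - p + p * np.exp(a))

def rate(x, p=0.5):
    """Convex conjugate Lambda*(x) = sup_a {a*x - Lambda(a)}, computed numerically."""
    res = minimize_scalar(lambda a: -(a * x - cgf(a, p)),
                          bounds=(-50.0, 50.0), method="bounded")
    return -res.fun

# P(sample mean >= 0.8) decays like exp(-n * rate(0.8)) by Cramer's theorem.
print(rate(0.5))  # 0 at the mean
print(rate(0.8))  # approx 0.193, the binary relative entropy KL(0.8 || 0.5)
```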
Next, consider the empirical measures $\{ \mu_n \}$, defined by

$$\mu_n(B) = \frac{1}{n} \sum_{i=1}^n I\{ X_i \in B \}, \quad B \in \mathcal{B},$$

where $\mathcal{B}$ denotes the collection of Borel subsets of $\mathbb{R}$. It is well known (cf. [14]) that $\{ \mu_n \}$ converges weakly to $P$, namely to the distribution of $X_1$. Then Sanov's theorem asserts that $\{ \mu_n \}$ satisfies a large deviation principle with rate function $I_P$ given by

$$I_P(\nu) = \begin{cases} \mathrm{KL}(\nu, P) & \text{if } \nu \ll P, \\ \infty & \text{otherwise}, \end{cases}$$

where $\mathrm{KL}(\nu, P)$ is the Kullback–Leibler information between the probability measures $\nu$ and $P$. When $\nu$ and $P$ each possesses a density with respect to Lebesgue measure (say $p$ and $g$, respectively), the above expression becomes

$$\mathrm{KL}(p, g) := \begin{cases} \displaystyle\int_S p(x) \log \frac{p(x)}{g(x)} \, d\mu(x) & \text{if } p \ll g, \\ \infty & \text{otherwise}. \end{cases}$$
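For a quick numerical check of this density form of the Kullback–Leibler information, the following sketch (an illustration of ours, with hypothetical choices throughout) integrates $p \log(p/g)$ for two Gaussian densities, where the closed-form answer is $(\mu_p - \mu_g)^2 / 2$ for unit variances.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# KL(p, g) for p = N(1, 1) and g = N(0, 1); the closed form is (1 - 0)^2 / 2 = 0.5.
p = stats.norm(loc=1.0, scale=1.0).pdf
g = stats.norm(loc=0.0, scale=1.0).pdf

kl, _ = quad(lambda x: p(x) * np.log(p(x) / g(x)), -np.inf, np.inf)
print(kl)  # approx 0.5
```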
In Sanov's theorem, the rate function $I_P$ is defined on the space of probability measures, which is a metric space with the open sets induced by weak convergence. Extensions of Sanov's theorem to stronger topologies have been investigated in the literature; cf., e.g., [15].
We now turn to a general result, which will play a central role in this paper, namely Varadhan's integral lemma (cf. [10], Theorem 4.3.1). This result will allow us to infer the scaled limit of a sequence of generating functions from the existence of the large deviation principle.
Lemma 1
(Varadhan). Let $\{ Y_n \}$ be a sequence of random variables taking values in a regular topological space $(X, \mathcal{B})$, and assume that the probability law of $\{ Y_n \}$ satisfies the LDP with good rate function $I$. Then for any bounded continuous function $F : X \to \mathbb{R}$,

$$\lim_{n \to \infty} \frac{1}{n} \log E\big[ \exp\big( n F(Y_n) \big) \big] = \sup_{x \in X} \big\{ F(x) - I(x) \big\}. \qquad (7)$$

1.2. Minimum Hellinger Distance Estimator and Large Deviations

We first observe that the MHDE is obtained by maximizing

$$A_n(\theta) \equiv A_n(\theta, g_n) := \int_{\mathbb{R}} f_\theta^{1/2}(x) \, g_n^{1/2}(x) \, d\mu(x),$$

which involves solving the equation $\nabla A_n(\theta) = \mathbf{0}$. The idea behind the large deviation analysis is to observe that the large deviation behavior of the maximizer can be extracted from that of the objective function $\nabla A_n(\theta)$ near $\mathbf{0}$. By the Gärtner–Ellis theorem (cf. [10], Section 2.3), this amounts to investigating the asymptotic behavior, as $n \to \infty$, of

$$\frac{1}{a_n} \log E_g\big[ \exp\big( a_n \langle \alpha, \nabla A_n(\theta) \rangle \big) \big], \quad \alpha \in \mathbb{R}^d, \qquad (9)$$

where $a_n \to \infty$ as $n \to \infty$. In the case of maximum likelihood estimation (MLE) or minimum contrast estimation (MCE), the objective function can be expressed as

$$\sum_{i=1}^n h_\theta(X_i) = n \int_{\mathbb{R}} h_\theta(x) \, d\mu_n(x), \qquad (10)$$

where $\{ \mu_n : n \ge 1 \}$ is the empirical measure associated with $\{ X_k : 1 \le k \le n \}$. Thus, while the objective functions associated with the MLE and MCE are linear functions of the empirical measure, the affinity is a nonlinear function of the empirical measure. This creates certain complications in identifying the rate function $I(\cdot)$ alluded to in (3). Of course, in the case of likelihood and minimum contrast estimator analysis, an explicit formula for $I(\cdot)$ ensues as the Legendre–Fenchel transform of the cumulant generating function of $h_\theta(X_1)$, viz. $\log E_{\theta_0} \exp( \alpha h_\theta(X_1) )$. One approach to evaluating the limiting generating function is to apply Varadhan's lemma as given above in (7). In the context of our problem, that requires an investigation into the large deviation principle for the density estimators $g_n(\cdot)$ viewed as elements of $L^1(S)$, viz. the space of integrable functions on $S$. Equivalently, we require a version of Sanov's theorem in $L^1$-space, which leads to certain topological considerations. The main issue here is that, when $L^1$ is equipped with the norm topology, the sequence of kernel density estimates $\{ g_n(\cdot) \}$ possesses large deviation bounds, but the associated rate function may not have compact level sets, as is required for a typical application of Varadhan's lemma. Nonetheless, one obtains a full LDP when $L^1(S)$ is equipped with the weak topology.
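Although the focus of the paper is the large deviation analysis, the finite-sample MHDE itself is straightforward to compute numerically. The sketch below (our own, with illustrative tuning choices and hypothetical names) maximizes the empirical affinity $A_n(\theta)$ over $\theta$ for a $N(\theta, 1)$ family.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid
from scipy.optimize import minimize_scalar

# Hypothetical setup: true g = N(1, 1); postulated family N(theta, 1).
rng = np.random.default_rng(1)
sample = rng.normal(loc=1.0, scale=1.0, size=500)
b_n = len(sample) ** (-1 / 5)
grid = np.linspace(-5.0, 7.0, 2001)

# Gaussian-kernel density estimate g_n evaluated on the grid.
gn = stats.norm.pdf((grid[:, None] - sample[None, :]) / b_n).mean(axis=1) / b_n

def affinity_n(theta):
    """A_n(theta) = integral of f_theta^{1/2} g_n^{1/2}, approximated on the grid."""
    f_theta = stats.norm.pdf(grid, loc=theta, scale=1.0)
    return trapezoid(np.sqrt(f_theta * gn), grid)

# The MHDE maximizes the affinity (equivalently, minimizes SHD_n).
res = minimize_scalar(lambda t: -affinity_n(t), bounds=(-4.0, 6.0), method="bounded")
print(res.x)  # close to the true value 1.0
```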
The asymptotic properties of the MHDE, such as consistency and asymptotic normality, are established using the norm convergence of $g_n(\cdot)$ to $g(\cdot)$. For this reason, we focus on a subclass of densities $\mathcal{G}$ (see Proposition 1 below) possessing certain equicontinuity properties under which norm convergence prevails. These issues are handled in Section 2, where the precise statements of our main results can also be found. Section 3 is devoted to the proofs of the main results. Section 4 contains some concluding remarks.

2. Notation, Assumptions, and Main Results

Let $f_\theta(\cdot)$ denote the postulated density of $\{ X_n \}$, defined on a measure space $(\Omega, \mathcal{F})$. Let $S \subseteq \mathbb{R}$ denote the support of $X$, and set $s_\theta(\cdot) = f_\theta^{1/2}(\cdot)$. Let the true density of $\{ X_n \}$ be given by $g(\cdot)$. Throughout the paper, we assume that the following regularity conditions hold.
Hypothesis 1.
$\Theta$ is a compact and convex subset of $\mathbb{R}^d$.
Hypothesis 2.
The family $\mathcal{F}$ is identifiable; namely, if $\theta_1 \ne \theta_2$, then $f_{\theta_1}(\cdot) \ne f_{\theta_2}(\cdot)$ on a set of positive Lebesgue measure.
Hypothesis 3.
For every $\theta \in \Theta$, $s_\theta$ is three times continuously differentiable with respect to all components of $\theta$. Denote by $\nabla s_\theta$ the gradient of $s_\theta$ and its components by $\dot{s}_\theta^i(\cdot)$. Let $H_\theta$ denote the matrix of second partial derivatives of $s_\theta(\cdot)$ with respect to $\theta$, and $\ddot{s}_\theta^{ij}$ the $(i,j)$th element of $H_\theta$.
Hypothesis 4.
Let the matrices of second partial derivatives of $A_n(\theta)$ and $A(\theta)$ be denoted by $H_{A_n}(\theta)$ and $H_A(\theta)$, respectively. Assume that $H_{A_n}(\theta)$ and $H_A(\theta)$ are continuous in $\theta$ and that $H_A(\theta)$ is positive definite for every $\theta \in \Theta$. For $p \in \mathcal{G}$ and $\theta \in \Theta$, let $\lambda_\theta(p)$ denote the smallest eigenvalue of the matrix $\int_S H_\theta(x) \, p^{1/2}(x) \, dx$. Assume that $\inf\{ \lambda_\theta(p) : p \in \mathcal{G} \} \ge c > 0$, where $c$ is independent of $\theta$.
These hypotheses on the family $\mathcal{F}$ are generally standard and are used to establish the asymptotic properties of the MHDE. Sufficient conditions on $\mathcal{F}$ for the validity of these hypotheses are described in [3,16], and [17]. A remark on Hypothesis 4 is warranted here. When $p = g$, this assumption is related to the positive definiteness of the Fisher information matrix. If one assumes $\mathcal{G} = \mathcal{F}$, then this hypothesis reduces to the condition that $\inf\{ \lambda_\theta : \theta \in \Theta \} \ge c > 0$, which is standard. Finally, we remark that we have not attempted to provide the weakest regularity conditions, and we believe some of these conditions can possibly be relaxed.
Recall that the MHDE of $\theta$ can be obtained by solving the equation

$$\nabla A_n(\theta) := \nabla_\theta A(f_\theta, g_n) = \frac{1}{2} \int_{\mathbb{R}} u_\theta(x) \, s_\theta(x) \, g_n^{1/2}(x) \, dx = \mathbf{0}, \qquad (11)$$

where $u_\theta(x) = \nabla_\theta f_\theta(x) \, ( f_\theta(x) )^{-1}$ is the score function, and where we have used the identity $\nabla_\theta s(x; \theta) = \frac{1}{2} u(x; \theta) \, s(x; \theta)$.
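For instance, for the $N(\theta, 1)$ family, the score function is $u_\theta(x) = x - \theta$, and the identity $\nabla_\theta s(x; \theta) = \frac{1}{2} u(x; \theta) s(x; \theta)$ can be verified numerically, as in the following sketch (an illustration of ours; all numerical choices are arbitrary).

```python
import numpy as np
from scipy import stats

# For the N(theta, 1) family: u_theta(x) = x - theta (illustrative values below).
theta, x, h = 0.7, 1.3, 1e-6
s = lambda y, t: np.sqrt(stats.norm.pdf(y, loc=t, scale=1.0))

lhs = (s(x, theta + h) - s(x, theta - h)) / (2.0 * h)  # d/dtheta of s(x; theta)
rhs = 0.5 * (x - theta) * s(x, theta)                  # (1/2) * u * s
print(lhs, rhs)  # agree up to finite-difference error
```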
We begin by providing some heuristics for the case $d = 1$. Let $\dot{A}_n(\theta)$ denote the derivative of $A_n(\theta)$ when $d = 1$, and let $\hat{\theta}_n$ denote the argzero of the function $\dot{A}_n(\theta)$ obtained from (11) above. Let $\hat{\theta}_{n,l} = \inf\{ \theta \in \Theta : \dot{A}_n(\theta) \le 0 \}$ and $\hat{\theta}_{n,u} = \sup\{ \theta \in \Theta : \dot{A}_n(\theta) \ge 0 \}$. Since $\hat{\theta}_{n,l} \le \hat{\theta}_n \le \hat{\theta}_{n,u}$, we obtain using Markov's inequality that for any $\epsilon > 0$,

$$P_g\big( \hat{\theta}_{n,l} \ge \theta_g + \epsilon \big) \le P_g\big( \dot{A}_n(\theta_g + \epsilon) \ge 0 \big) \le E_g\big[ \exp\big( n \alpha \dot{A}_n(\theta_g + \epsilon) \big) \big], \qquad (12)$$

where $\alpha > 0$. Similarly, for $\alpha < 0$, it can be seen that

$$P_g\big( \hat{\theta}_{n,u} \le \theta_g - \epsilon \big) \le P_g\big( \dot{A}_n(\theta_g - \epsilon) \le 0 \big) \le E_g\big[ \exp\big( n \alpha \dot{A}_n(\theta_g - \epsilon) \big) \big]. \qquad (13)$$

Thus, an evaluation of (9) will allow us to obtain the logarithmic upper bounds for $\hat{\theta}_{n,l}$ and $\hat{\theta}_{n,u}$. Next, using the inequalities

$$P_g\big( \hat{\theta}_{n,l} \ge \theta_g + \epsilon \big) \le P_g\big( \dot{A}_n(\theta_g + \epsilon) \ge 0 \big) \le P_g\big( \hat{\theta}_{n,u} \ge \theta_g + \epsilon \big), \qquad (14)$$

$$P_g\big( \hat{\theta}_{n,u} \le \theta_g - \epsilon \big) \le P_g\big( \dot{A}_n(\theta_g - \epsilon) \le 0 \big) \le P_g\big( \hat{\theta}_{n,l} \le \theta_g - \epsilon \big), \qquad (15)$$

under additional hypotheses, one can derive large deviation lower bounds for $\hat{\theta}_n$. Deriving these bounds for the MLE and MCE is rather standard, since the objective functions and their derivatives are linear functionals of the empirical distribution, as stated in (10), but this is not the case for the Hellinger distance.
Observe that the probabilities in (12) and (13) represent rare-event probabilities since, under the hypotheses described previously, $\hat{\theta}_n$ converges to $\theta_g$ almost surely as $n \to \infty$. The distributional results concerning $\hat{\theta}_n$ rely on the continuity and differentiability properties of $A_n(\theta)$, which depend nonlinearly on $g_n$, and on the norm convergence of $g_n$ to $g$.
Let $\mathcal{G}$ denote the collection of all probability densities with support $S$. By Scheffé's theorem, the pointwise convergence of $g_n$ to $g$ implies $g_n \to g$ in $L^1$ as $n \to \infty$. Additionally, when $g_n(\cdot)$ is the kernel density estimator, Glick's theorem guarantees that $g_n \to g$ in $L^1$ almost surely as $n \to \infty$ whenever $b_n \to 0$ and $n b_n \to \infty$; cf. [18]. Since the MHDE are functionals of density estimators, it is natural to expect that the large deviations of density estimators will play a significant role in our analysis. For this reason, one is forced to consider the topological issues that arise in the large deviation analysis of density estimators. Interestingly, it turns out that the weak topology on $L^1(S)$ plays a prominent role. This, in turn, leads to the question of whether certain continuity properties, which were part of the traditional theory of MHD analysis, continue to hold if $\mathcal{G}$ is viewed as a subset of $L^1(S)$ equipped with the weak topology. Expectedly, while the answer in general is no (cf. [19]), Proposition 1 provides sufficient conditions on the family $\mathcal{G}$ under which one additionally obtains norm convergence.
Before proceeding, we now introduce some further regularity conditions, as follows.
Hypothesis 5.
$u_\theta s_\theta \in L^2(S)$ and is an $L^2(S)$-continuous function of $\theta$.
Hypothesis 6.
The family $\mathcal{F}$ consists of bounded, equicontinuous densities.
Hypothesis 7.
The family $\mathcal{G}$ consists of bounded, equicontinuous densities.
Hypothesis 8.
$u_\theta g \in L^2(S)$ and is an $L^2(S)$-continuous function of $\theta$.
Here, we note that Hypotheses 6 and 7 are related. Furthermore, if one is willing to assume that $\mathcal{G} = \mathcal{F}$, then one does not need Hypothesis 7. On the other hand, if one believes that parametric distributions are only approximations to $\mathcal{G}$, then one needs to work with Hypothesis 7. For this reason, we have maintained both of these hypotheses in our main results. Hypotheses 5 and 8 are related to the finiteness of the Fisher information and are standard in the statistical literature.
Before we state the first proposition, we recall the definition of the weak topology on $L^1$ (cf. [19]). A sequence $\{ h_n : n \ge 1 \}$ is said to converge weakly in $L^1$ if $\int_S h_n(x) \, b(x) \, dx \to \int_S h(x) \, b(x) \, dx$ as $n \to \infty$ for every $b \in L^\infty(S)$, where $L^\infty(S)$ is the class of essentially bounded functions. We assume throughout the paper that the topology on $\Theta$ is the standard topology generated by the Euclidean metric.
Proposition 1.
Let $\mathcal{G}$ denote the class of densities, equipped with the weak topology. Further assume that Hypotheses 1–7 hold. Let $\Theta \times \mathcal{G}$ be equipped with the product topology. Then the mapping $\nabla A : \Theta \times \mathcal{G} \to \mathbb{R}^d$ defined by

$$\nabla A(\theta, g) \equiv \int_{\mathbb{R}} u_\theta(x) \, s_\theta(x) \, g^{1/2}(x) \, dx \qquad (16)$$

is jointly continuous in $(\theta, g)$. Furthermore, if $g_n \xrightarrow{w} g$, then

$$\lim_{n \to \infty} \sup_{\theta \in \Theta} \big\| \nabla A(\theta, g_n) - \nabla A(\theta, g) \big\| = 0. \qquad (17)$$

Finally, under Hypothesis 7, the family $\mathcal{G}$ is a weakly sequentially closed subset of $L^1(S)$.
Our next result is concerned with the limit behavior of the generating function of $\nabla A_n(\theta)$. In the following, we write $p \ll g$ to mean that the probability measure associated with $p(\cdot)$ is absolutely continuous with respect to that of $g(\cdot)$.
Theorem 1.
Assume that Hypotheses 1–7 hold, and set
$$\Lambda_{n,\theta}(\alpha) := \frac{1}{n} \log E_g\big[ \exp\big( n \langle \alpha, \nabla A_n(\theta) \rangle \big) \big], \quad \alpha \in \mathbb{R}^d.$$

Then $\Lambda_\theta(\alpha) := \lim_{n \to \infty} \Lambda_{n,\theta}(\alpha)$ exists and is a convex function given by

$$\Lambda_\theta(\alpha) = \sup_{p \in \mathcal{G}} \left\{ \int_S \langle \alpha, u_\theta(x) \, s_\theta(x) \rangle \, p^{1/2}(x) \, dx - \mathrm{KL}(p, g) \right\},$$

where

$$\mathrm{KL}(p, g) = \begin{cases} \displaystyle\int_S p(x) \log \frac{p(x)}{g(x)} \, dx & \text{if } p \ll g, \\ \infty & \text{otherwise}. \end{cases}$$
Remark 1.
Since $\Lambda_\theta$ is defined via a limiting operation, it is hard to extract its qualitative properties. However, we can obtain a simple lower bound by observing that $\mathrm{KL}(p, g) = 0$ if and only if $p = g$, and an upper bound using that the Kullback–Leibler information is nonnegative. This results in the following bounds:

$$\int_S \langle \alpha, u_\theta(x) \, s_\theta(x) \rangle \, g^{1/2}(x) \, dx \;\le\; \Lambda_\theta(\alpha) \;\le\; \sup_{p \in \mathcal{G}} \int_S \langle \alpha, u_\theta(x) \, s_\theta(x) \rangle \, p^{1/2}(x) \, dx.$$

Furthermore, if all densities in $\mathcal{G}$ are bounded by one, then $p^{1/2}(\cdot) \ge p(\cdot)$ implies

$$\Lambda_\theta(\alpha) \ge \sup_{p \in \mathcal{G}} \left\{ \int_S \langle \alpha, u_\theta(x) \, s_\theta(x) \rangle \, p(x) \, dx - \mathrm{KL}(p, g) \right\}.$$

Using a variational argument, it can be shown that the supremum on the right-hand side is attained at $p^*$ given by

$$p^*(x) := \frac{ \exp\big( \langle \alpha, u_\theta(x) \, s_\theta(x) \rangle \big) \, g(x) }{ \int_S \exp\big( \langle \alpha, u_\theta(x) \, s_\theta(x) \rangle \big) \, g(x) \, dx };$$

cf. [20]. Furthermore, the maximum that results from this choice of $p^*(\cdot)$ is

$$\log \int_S \exp\big( \langle \alpha, u_\theta(x) \, s_\theta(x) \rangle \big) \, g(x) \, dx,$$

yielding yet another lower bound for $\Lambda_\theta(\alpha)$, although the comparison of these two lower bounds is not immediate.
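The last lower bound is explicitly computable. For instance, the following sketch (our illustration, for the $N(\theta, 1)$ family with $g = N(0, 1)$ and $d = 1$) evaluates $\log \int_S \exp( \alpha \, u_\theta(x) s_\theta(x) ) \, g(x) \, dx$ numerically.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Evaluate the explicit lower bound log \int exp(alpha * u_theta * s_theta) g dx
# for the N(theta, 1) family with true density g = N(0, 1); values are arbitrary.
theta, alpha = 0.5, 0.3
g = stats.norm(loc=0.0, scale=1.0).pdf

def integrand(x):
    f = stats.norm.pdf(x, loc=theta, scale=1.0)
    u = x - theta              # score of N(theta, 1)
    s = np.sqrt(f)             # s_theta = f_theta^{1/2}
    return np.exp(alpha * u * s) * g(x)

val, _ = quad(integrand, -10.0, 10.0)
print(np.log(val))  # a computable lower bound for Lambda_theta(alpha)
```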
Returning to our main discussion, recall from [21] that the convex conjugate of the function $\Lambda_\theta$ is defined by

$$\Lambda_\theta^*(x) = \sup_{\alpha \in \mathbb{R}^d} \big\{ \langle \alpha, x \rangle - \Lambda_\theta(\alpha) \big\}, \quad x \in \mathbb{R}^d.$$

Let $D_\theta$ denote the domain of $\Lambda_\theta$; namely,

$$D_\theta = \big\{ \alpha \in \mathbb{R}^d : \Lambda_\theta(\alpha) < \infty \big\};$$

and let $R_\theta$ denote the range of the gradient map $\nabla \Lambda_\theta$; that is,

$$R_\theta = \big\{ x \in \mathbb{R}^d : \nabla \Lambda_\theta(\alpha) = x \ \text{for some} \ \alpha \in \mathbb{R}^d \big\}.$$
We begin with the discussion of the case $d = 1$. In this case, the generating function $\Lambda_\theta$ reduces to

$$\Lambda_\theta(\alpha) = \sup_{p \in \mathcal{G}} \left\{ \alpha \int_S u_\theta(x) \, s_\theta(x) \, p^{1/2}(x) \, dx - \mathrm{KL}(p, g) \right\}.$$

By the convexity of $\Lambda_\theta(\cdot)$, this function is differentiable almost everywhere (cf. [21]), and in the proof we would like to exploit the differentiability of this function at the point $\alpha_\theta^*$ where it attains its minimum value. If $\Lambda_\theta$ is not differentiable at this point, it is helpful to consider the directional derivatives of $\Lambda_\theta$. Specifically, let $\Lambda_{\theta,+}'(\cdot)$ and $\Lambda_{\theta,-}'(\cdot)$ denote the right and left derivatives of $\Lambda_\theta(\cdot)$, respectively. When $x \in \big[ \Lambda_{\theta,-}'(\alpha), \Lambda_{\theta,+}'(\alpha) \big]$, it is well known that $\Lambda_\theta^*(x) = \alpha x - \Lambda_\theta(\alpha)$, but this observation will not be sufficient to obtain a proper lower bound. For that to hold, we need a stronger condition, namely that $0 \in R_\theta$, which will only be true if $\Lambda_\theta$ is differentiable at its point of minimum, $\alpha_\theta^*$. Otherwise, the expected lower bound turns out to be $\Lambda_\theta^*(x)$, where $x = \Lambda_{\theta,+}'(\alpha_\theta^*)$; cf. [13].
We now turn to our large deviation theorem in $\mathbb{R}^1$, where we study the rare-event probabilities $P_g( \hat{\theta}_n \in C )$ for sets $C$ that are away from the true value $\theta_g$. Specifically, we establish an analogue of the LDP, but where a subtle difference arises in the lower bound in the absence of differentiability of $\Lambda_\theta$.
We recall that $\hat{\theta}_n$ is defined using the kernel density estimator $g_n(\cdot)$ defined in (1), whose behavior is dictated by the bandwidth sequence $\{ b_n \}$.
Theorem 2.
Assume that $d = 1$, that Hypotheses 1–8 are satisfied, and that $\hat{\theta}_n$ is the unique solution of $\dot{A}_n(\theta) = 0$. Further assume that $b_n \to 0$ and $n b_n \to \infty$ as $n \to \infty$. Then for any closed set $F$ not containing $\theta_g$,

$$\limsup_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \in F \big) \le -\inf_{\theta \in F} \Lambda_\theta^*(0).$$

Moreover, for any open set $G$ not containing $\theta_g$,

$$\liminf_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \in G \big) \ge -\inf_{\theta \in G} I(\theta),$$

where

$$I(\theta) = \inf\big\{ \Lambda_\theta^*(x) : x \in R_\theta \cap [0, \infty) \big\},$$

and the infimum is taken to be infinity if the set $R_\theta \cap [0, \infty)$ is empty.
Remark 2.
If $F = [\theta, \infty)$ where $\theta > \theta_g$, then in both the upper and lower bounds, it is sufficient to evaluate the infimum at the boundary point $\theta$. That is,

$$\limsup_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \in [\theta, \infty) \big) \le -\Lambda_\theta^*(0).$$

Similarly, if $G = (\theta, \infty)$ where $\theta > \theta_g$, then

$$\liminf_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \in (\theta, \infty) \big) \ge -I(\theta).$$

Furthermore, if $\inf_\alpha \Lambda_\theta(\alpha)$ is achieved at a unique point $\alpha_\theta^*$ and $\Lambda_\theta$ is differentiable at $\alpha_\theta^*$, then the right-hand side of the lower bound reduces to $-\Lambda_\theta^*(0)$; i.e., the upper and lower bounds coincide and the limits exist. Since the rate functions appearing in the upper and lower bounds coincide in this case, we obtain a proper LDP provided the resulting rate function has the required regularity properties; in particular, that $I(\theta) = \Lambda_\theta^*(0)$ is lower semicontinuous and has compact level sets.
The proof of the above theorem relies on (14) and (15) combined with Theorem 1, together with a change of measure argument characteristic of large deviation analysis. The comparison inequalities in (14) and (15) are critical to obtaining the characterizations in the above theorem, but these are essentially one-dimensional results, and their analogues in higher dimensions ($d \ge 2$) are not immediate. Consequently, when $\Lambda_\theta$ is not differentiable, new complications arise, which lead to a slightly different, and less explicit, representation of the lower bound.
Next we establish a large deviation theorem for $\mathbb{R}^d$, generalizing the previous theorem to higher dimensions. In the following, let $\mathrm{dist}(x, G) = \inf_{y \in G} \| x - y \|$ denote the distance between a point $x \in \mathbb{R}^d$ and a set $G \subseteq \mathbb{R}^d$.
Theorem 3.
Assume that Hypotheses 1–8 are satisfied and that $b_n \to 0$ and $n b_n \to \infty$ as $n \to \infty$. Then for any closed set $F$ not containing $\theta_g$,

$$\limsup_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \in F \big) \le -\inf_{\theta \in F} \Lambda_\theta^*(0).$$

Moreover, for any open set $G$ not containing $\theta_g$,

$$\liminf_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \in G \big) \ge -\inf_{\theta \in G} I(\theta),$$

where $I(\theta) = \inf\big\{ \Lambda_\theta^*(x) : x \in R_\theta \cap B(0; c_\theta) \big\}$ and $c_\theta = b \, \mathrm{dist}(\theta, \Theta \setminus G)$ for some universal constant $b \in (0, \infty)$, and the infimum is taken to be infinity if the set $R_\theta \cap B(0; c_\theta)$ is empty.
Remark 3.
As we noted for the one-dimensional case in Remark 2, under a differentiability assumption on $\Lambda_\theta$, the function $I(\theta)$ can be identified as $\Lambda_\theta^*(0)$, but in full generality, it is not immediately known that $I(\theta)$ is even nontrivial. Moreover, without differentiability, the infimum in the definition of $I(\theta)$ is more restrictive than what we encountered in the one-dimensional problem. However, if one assumes additional geometric structure on $G$, such as a translated cone structure, then one obtains improved estimates, in the sense that one can take unbounded regions in the definition of $I(\theta)$, just as we saw in Theorem 2. For further remarks in this direction, see the discussion given after the proof of the theorem.

3. Proofs

We turn first to Proposition 1.
Proof of Proposition 1.
Since $\Theta \times \mathcal{G}$ is equipped with the product topology, it is sufficient to show that if $\theta_n \to \theta$ and $g_n \xrightarrow{w} g$, then $\nabla A(\theta_n, g_n)$ converges to $\nabla A(\theta, g)$, where

$$\nabla A(\theta, g) = \int_S u_\theta(x) \, s_\theta(x) \, g^{1/2}(x) \, dx.$$

Let $r_\theta(x) = u_\theta(x) \, s_\theta(x)$, and observe that

$$\big\| \nabla A(\theta_n, g_n) - \nabla A(\theta, g) \big\| \le \int_S \| r_{\theta_n}(x) \| \, \big| g_n^{1/2}(x) - g^{1/2}(x) \big| \, dx + \int_S \| r_{\theta_n}(x) - r_\theta(x) \| \, g^{1/2}(x) \, dx \le \| r_{\theta_n} \|_2 \, \mathrm{HD}(g_n, g) + \int_S \| r_{\theta_n}(x) - r_\theta(x) \| \, g^{1/2}(x) \, dx =: T_{n,1} + T_{n,2},$$

where the penultimate inequality follows by applying the Cauchy–Schwarz inequality. By the Cauchy–Schwarz inequality and Hypothesis 5, $T_{n,2} \to 0$. Since the squared Hellinger distance is dominated by the $L^1$-distance, in order to complete the proof it is sufficient to show that $\| g_n - g \|_1 \to 0$. Now since $g_n \xrightarrow{w} g$, it follows that as $n \to \infty$,

$$G_n(x) := \int_S g_n(y) \, I\{ y \le x \} \, dy \to \int_S g(y) \, I\{ y \le x \} \, dy =: G(x).$$

Evidently, $G_n(\cdot)$ and $G(\cdot)$ are nondecreasing and right continuous. Furthermore, if $x_* = \inf\{ x : x \in S \}$ and $x^* = \sup\{ x : x \in S \}$, then $G_n(x_*{+}) \to G(x_*{+})$ and $G_n(x^*{-}) \to G(x^*{-})$, where $G_n(x_*{+}) = \lim_{x \downarrow x_*} G_n(x)$ and $G_n(x^*{-}) = \lim_{x \uparrow x^*} G_n(x)$, and similarly for $G(\cdot)$. Thus $G_n$ converges to $G$, which is a proper distribution function. Then by Lemma 1 of Boos [22], $g_n(\cdot)$ converges to $g(\cdot)$ uniformly on compact sets. This, in turn, implies the $L^1$ convergence of $g_n(\cdot)$ to $g(\cdot)$ (by Scheffé's lemma), which establishes the convergence of $T_{n,1}$ to 0, thus completing the proof of the joint continuity of $\nabla A(\theta, g)$.
Next, the uniform convergence (17) follows from Hypothesis 5, since

$$\sup_{\theta \in \Theta} \big\| \nabla A(\theta, g_n) - \nabla A(\theta, g) \big\| \le \sup_{\theta \in \Theta} \int_S \| r_\theta(x) \| \, \big| g_n^{1/2}(x) - g^{1/2}(x) \big| \, dx \le \sup_{\theta \in \Theta} \| r_\theta \|_2 \, \mathrm{HD}(g_n, g) \to 0.$$

Finally, to prove that $\mathcal{G}$ is weakly sequentially closed, note that convergence in the weak topology implies pointwise convergence, yielding $g(\cdot) \ge 0$. Noting that

$$\int_S g(x) \, d\mu(x) = 1 + \int_S \big( g(x) - g_n(x) \big) \, dx,$$

it follows, using the $L^1$ convergence, that $g(\cdot)$ integrates to one, thus completing the proof of the proposition. □
We now turn to the proof of Theorem 1. The proof relies on a large deviation theorem for the kernel density estimator $g_n(\cdot)$ in the weak topology on $\mathcal{G}$. The next proposition is concerned with the LDP for $\{ g_n \}$ in $\mathcal{G}$, equipped with the weak topology inherited from $L^1(S)$. This issue has received considerable attention recently (cf. [23,24]), where it is established that the full LDP may not hold for $\{ g_n \}$ in the norm topology, but does hold under the weak topology.
Proposition 2.
Assume Hypotheses 1–8 hold and that $b_n \to 0$ and $n b_n \to \infty$ as $n \to \infty$. Then $\{ g_n \}$ satisfies the LDP in the weak topology of $L^1(S)$ with good rate function $I$ given by

$$I(p) = \begin{cases} \displaystyle\int_S p(x) \log \frac{p(x)}{g(x)} \, dx & \text{if } p \ll g, \\ \infty & \text{otherwise}. \end{cases}$$
Proof of Theorem 1.
As before, let $\mathcal{G}$ be equipped with the weak topology. Set $r_\theta(x) = u_\theta(x) \, s_\theta(x)$, and define $F : \mathcal{G} \to \mathbb{R}$ as follows:

$$F(h) = \int_S \langle \alpha, r_\theta(x) \rangle \, h^{1/2}(x) \, dx.$$

By Hypothesis 5, $r_\theta \in L^2(S)$. To show that $F(\cdot)$ is continuous, let $h_n \xrightarrow{w} h$ as $n \to \infty$. Then

$$| F(h_n) - F(h) | \le \int_S | \langle \alpha, r_\theta(x) \rangle | \, \big| h_n^{1/2}(x) - h^{1/2}(x) \big| \, d\mu(x) \le \| \alpha \| \, \| r_\theta \|_2 \, \mathrm{HD}(h_n, h) \le \| \alpha \| \, \| r_\theta \|_2 \, \| h_n - h \|_1^{1/2} \to 0 \quad \text{as } n \to \infty,$$

where we have used the Cauchy–Schwarz inequality and the fact that the squared Hellinger distance is dominated by the $L^1$ distance. Now by Hypothesis 7, as in the proof of Proposition 1, we have that $\| h_n - h \|_1 \to 0$ as $n \to \infty$, establishing the continuity of $F(\cdot)$. Next, to show that $F(\cdot)$ is bounded, note that $\sup\{ | F(p) | : p \in \mathcal{G} \} \le \| \alpha \| \, \| r_\theta \|_2$ by the Cauchy–Schwarz inequality. Then by Proposition 2, it follows by Varadhan's integral lemma (see [10], Theorem 4.3.1) that

$$\lim_{n \to \infty} \frac{1}{n} \log E\big[ \exp\big( n F(g_n) \big) \big] = \lim_{n \to \infty} \frac{1}{n} \log E\Big[ \exp\Big( n \int_S \langle \alpha, u_\theta(x) \, s_\theta(x) \rangle \, g_n^{1/2}(x) \, dx \Big) \Big] = \sup_{p \in \mathcal{G}} \left\{ \int_S \langle \alpha, u_\theta(x) \, s_\theta(x) \rangle \, p^{1/2}(x) \, dx - \mathrm{KL}(p, g) \right\} =: \Lambda_\theta(\alpha).$$

This completes the proof of the theorem. □
The proofs of our main results will involve probability bounds on the moduli of continuity of $A_n(\theta)$ and $\nabla A_n(\theta)$, respectively. Recall that the modulus of continuity $\omega(h; r)$ of a function $h : \mathbb{R}^d \to \mathbb{R}$ is given by

$$\omega(h; r) := \sup_{\| x_1 - x_2 \| \le r} | h(x_1) - h(x_2) |, \quad r > 0.$$
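As a plain numerical stand-in for this definition (our own illustration, computed on a finite grid rather than over the continuum), one can approximate $\omega(h; r)$ as follows.

```python
import numpy as np

def modulus_of_continuity(h, grid, r):
    """Approximate omega(h; r) = sup_{|x1 - x2| <= r} |h(x1) - h(x2)| on a grid."""
    vals = h(grid)
    close = np.abs(grid[:, None] - grid[None, :]) <= r
    return float(np.max(np.abs(vals[:, None] - vals[None, :])[close]))

grid = np.linspace(0.0, 1.0, 501)
print(modulus_of_continuity(np.sin, grid, 0.1))   # about 0.0998 (slope <= 1)
print(modulus_of_continuity(np.sqrt, grid, 0.1))  # about 0.316 = sqrt(0.1)
```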
Observe that when $h(\cdot)$ is replaced by $A_n(\theta)$ or $\nabla A_n(\theta)$, the modulus of continuity becomes a random quantity. Our next proposition summarizes the continuity properties of $A_n(\theta)$ and $\nabla A_n(\theta)$ via their moduli of continuity, viewed as real-valued functionals on $\mathcal{G}$ equipped with the weak topology.
Proposition 3.
Assume that Hypotheses 1–8 hold and that $b_n \to 0$ and $n b_n \to \infty$ as $n \to \infty$. Then, with respect to $\{ A_n \}$ and $A$, the modulus of continuity satisfies the following relations, each with probability one:

$$\text{(i)} \ \lim_{n \to \infty} \omega(A_n; r) = \omega(A; r); \qquad \text{(ii)} \ \lim_{r \to 0} \omega(A_n; r) = 0; \qquad \text{(iii)} \ \lim_{r \to 0} \omega(A; r) = 0.$$

Similarly, $\{ \nabla A_n \}$ and $\nabla A$ satisfy the analogous relations with probability one; namely,

$$\text{(iv)} \ \lim_{n \to \infty} \omega(\nabla A_n; r) = \omega(\nabla A; r); \qquad \text{(v)} \ \lim_{r \to 0} \omega(\nabla A_n; r) = 0; \qquad \text{(vi)} \ \lim_{r \to 0} \omega(\nabla A; r) = 0.$$
Proof. 
First observe that $A_n(\theta)$ converges uniformly to $A(\theta)$. To see this, note that if $g_n \xrightarrow{w} g$, then by Proposition 1 it converges in $L^1$. Hence

$$\sup_{\theta \in \Theta} | A_n(\theta) - A(\theta) | \le \sup_{\theta \in \Theta} \int_{\mathbb{R}} s_\theta(x) \, \big| g_n^{1/2}(x) - g^{1/2}(x) \big| \, dx \le \big\| g_n^{1/2} - g^{1/2} \big\|_2 \le \| g_n - g \|_1^{1/2} \to 0, \qquad (42)$$

where the last inequality uses that the squared Hellinger distance is dominated by the $L^1$-distance. We now prove (i). For this we invoke the properties of the modulus of continuity. Observe that

$$\omega(A_n; r) = \omega(A_n - A + A; r) \le \omega(A_n - A; r) + \omega(A; r),$$

which yields

$$| \omega(A_n; r) - \omega(A; r) | \le \omega(A_n - A; r).$$

Next observe that

$$\omega(A_n - A; r) = \sup_{\| \theta_1 - \theta_2 \| \le r} \big| (A_n - A)(\theta_1) - (A_n - A)(\theta_2) \big| \le 2 \sup_{\theta \in \Theta} | A_n(\theta) - A(\theta) | \to 0,$$

where the last convergence follows from the uniform convergence of $(A_n - A)(\theta)$ to 0, as shown in (42). The proof of (iv) is similar, and specifically is obtained by using that

$$\omega(\nabla A_n - \nabla A; r) \le 2 \sup_{\theta \in \Theta} \big\| \nabla A_n(\theta) - \nabla A(\theta) \big\| \to 0,$$

where the above convergence follows from (17).
We now turn to the proof of (ii). Using the Cauchy–Schwarz inequality and the definition of the Hellinger distance,

$$\omega(A_n; r) = \sup_{\| \theta_1 - \theta_2 \| \le r} | A_n(\theta_1) - A_n(\theta_2) | = \sup_{\| \theta_1 - \theta_2 \| \le r} \left| \int_{\mathbb{R}} \big( s_{\theta_1}(x) - s_{\theta_2}(x) \big) \, g_n^{1/2}(x) \, dx \right| \le \sup_{\| \theta_1 - \theta_2 \| \le r} \mathrm{HD}(f_{\theta_1}, f_{\theta_2}) \le \sup_{\| \theta_1 - \theta_2 \| \le r} \| f_{\theta_1} - f_{\theta_2} \|_1^{1/2} =: \omega(H; r), \qquad (46)$$

where $H : (\theta_1, \theta_2) \mapsto \| f_{\theta_1} - f_{\theta_2} \|_1^{1/2}$ is continuous, since $f_\theta$ is continuous in $\theta$. Also, since $\Theta \times \Theta$ is compact, $H(\cdot, \cdot)$ is uniformly continuous, and hence $\omega(H; r) \to 0$ as $r \to 0$, from which (ii) follows. Turning to (v), notice that, as before,

$$\omega(\nabla A_n; r) \le \sup_{\| \theta_1 - \theta_2 \| \le r} \| u_{\theta_1} s_{\theta_1} - u_{\theta_2} s_{\theta_2} \|_2. \qquad (47)$$

Now, since $u_\theta s_\theta$ is $L^2$-continuous by Hypothesis 5, the proof follows as in (ii), due to the compactness of $\Theta$. The proofs of (iii) and (vi) are similar to those of (ii) and (v), respectively, and are therefore omitted. □
Proposition 4.
For any $0 < M < \infty$ and $\delta > 0$, there exists a positive number $r(M, \delta)$ such that

$$P_g\big( \omega(A_n; r) \ge \delta \big) \le e^{-Mn} \quad \text{and} \quad P_g\big( \omega(\nabla A_n; r) \ge \delta \big) \le e^{-Mn}.$$

Proof. 
By Markov's inequality and (46), it follows that for any $\beta > 0$,

$$P_g\big( \omega(A_n; r) \ge \delta \big) \le E_g\big[ e^{n \beta \, \omega(A_n; r)} \big] \, e^{-n \beta \delta} \le e^{-n \beta ( \delta - \omega(H; r) )}.$$

Since $\omega(H; r) \to 0$ as $r \to 0$, there exists an $r_0$ such that for all $r \le r_0$, $\delta - \omega(H; r) > 0$. Since $\beta > 0$ is arbitrary, the proposition follows by taking $\beta = M ( \delta - \omega(H; r) )^{-1}$ for some $r \le r_0$. The proof of the second inequality is similar, using (47). □
Proof of Theorem 2.
We begin with the proof of the upper bound. Since we assume that the equation $\dot{A}_n(\theta) = 0$ has a unique solution, it follows from the inequality in (12) that for any $\alpha > 0$ and $\theta > \theta_g$,

$$\limsup_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \ge \theta \big) \le \limsup_{n \to \infty} \frac{1}{n} \log E_g\big[ \exp\big( n \alpha \dot{A}_n(\theta) \big) \big] = \Lambda_\theta(\alpha),$$

where the last equality follows by applying Theorem 1 with $d = 1$. Since the inequality holds for every $\alpha > 0$,

$$\limsup_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \ge \theta \big) \le \inf_{\alpha > 0} \Lambda_\theta(\alpha) = \inf_{\alpha \in \mathbb{R}} \Lambda_\theta(\alpha),$$

where the last equality holds since, for $\theta > \theta_g$, the infimum over $\alpha \in \mathbb{R}$ is attained at a positive $\alpha$. Now, noticing that $\inf_{\alpha \in \mathbb{R}} \Lambda_\theta(\alpha) = -\sup_{\alpha \in \mathbb{R}} \{ -\Lambda_\theta(\alpha) \} = -\Lambda_\theta^*(0)$, we then obtain

$$\limsup_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \ge \theta \big) \le -\Lambda_\theta^*(0). \qquad (52)$$

Similarly, for $\theta < \theta_g$, using (13), one can show by an analogous calculation that

$$\limsup_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \le \theta \big) \le -\Lambda_\theta^*(0). \qquad (53)$$

Now let $\theta_1 = \inf\{ \theta > \theta_g : \theta \in F \}$ and $\theta_2 = \sup\{ \theta < \theta_g : \theta \in F \}$. Then

$$P_g\big( \hat{\theta}_n \in F \big) \le P_g\big( \hat{\theta}_n \ge \theta_1 \big) + P_g\big( \hat{\theta}_n \le \theta_2 \big),$$

and so by (52) and (53), it follows that

$$\limsup_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \in F \big) \le -\min_{\theta \in \{ \theta_1, \theta_2 \}} \Lambda_\theta^*(0) \le -\inf_{\theta \in F} \Lambda_\theta^*(0),$$

where the last step follows since $F$ is closed, which implies $\{ \theta_1, \theta_2 \} \subseteq F$.
Next we turn to the proof of the lower bound. Let $G$ be an open set, and let $\theta \in G$. Then there exists an $\epsilon > 0$ (to be chosen) such that $I_\epsilon := (\theta - \epsilon, \theta + \epsilon) \subseteq G$. Note that

$$\{ \hat{\theta}_n \in I_\epsilon \} = \big\{ \dot{A}_n(\hat{\theta}_n) = 0, \ \hat{\theta}_n \in I_\epsilon \big\} \supseteq \big\{ \dot{A}_n(\theta) - \dot{A}_n(\hat{\theta}_n) \ge \delta \big\} \setminus \Big\{ \hat{\theta}_n \notin I_\epsilon, \ \sup_{\theta_1, \theta_2 \in I_\epsilon} | \dot{A}_n(\theta_1) - \dot{A}_n(\theta_2) | > \delta \Big\}.$$

Thus,

$$P_g\big( \hat{\theta}_n \in I_\epsilon \big) \ge P_g\big( \dot{A}_n(\theta) - \dot{A}_n(\hat{\theta}_n) \ge \delta \big) - P_g\Big( \hat{\theta}_n \notin I_\epsilon, \ \sup_{\theta_1, \theta_2 \in I_\epsilon} | \dot{A}_n(\theta_1) - \dot{A}_n(\theta_2) | > \delta \Big) \ge P_g\big( \dot{A}_n(\theta) - \dot{A}_n(\hat{\theta}_n) \ge \delta \big) - P_g\Big( \sup_{\theta_1, \theta_2 \in I_\epsilon} | \dot{A}_n(\theta_1) - \dot{A}_n(\theta_2) | > \delta \Big) = P_g\big( \dot{A}_n(\theta) \ge \delta \big) - P_g\big( \omega(\dot{A}_n; \epsilon) > \delta \big).$$
We now investigate $P_g\big( \dot{A}_n(\theta) \ge \delta \big)$. Let $Q_n$ denote the distribution of $\dot{A}_n(\theta)$, and define the tilted distribution $Q_{n,\alpha}$ as follows:

$$Q_{n,\alpha}(B) = \frac{1}{\exp\big( n \Lambda_{n,\theta}(\alpha) \big)} \int_B e^{n \alpha y} \, dQ_n(y), \quad B \in \mathcal{B}.$$

Let $B = (x - \eta, x + \eta)$ for some $\eta > 0$, where $B \subseteq (\delta, \infty)$ and $x \in R_\theta$. Then

$$Q_n(B) \ge \exp\big( -n \alpha x - n \eta | \alpha | + n \Lambda_{n,\theta}(\alpha) \big) \, Q_{n,\alpha}(B).$$

Taking the logarithm, dividing by $n$, and then taking the limit as $n \to \infty$, we obtain

$$\liminf_{n \to \infty} \frac{1}{n} \log Q_n(B) \ge -\alpha x - \eta | \alpha | + \Lambda_\theta(\alpha) + \liminf_{n \to \infty} \frac{1}{n} \log Q_{n,\alpha}(B).$$

Now since $x \in R_\theta$, we can apply Theorem IV.1 of [25] to obtain that the last term on the right-hand side of the previous equation converges to zero. Upon letting $\eta \to 0$, it follows that

$$\liminf_{n \to \infty} \frac{1}{n} \log Q_n(B) \ge -\Lambda_\theta^*(x).$$

Since the above inequality holds for all $x \in R_\theta \cap (\delta, \infty)$, we conclude that

$$\liminf_{n \to \infty} \frac{1}{n} \log P_g\big( \dot{A}_n(\theta) \ge \delta \big) \ge -I_\delta(\theta), \qquad (61)$$

where $I_\delta(\theta) = \inf_{x \in R_\theta \cap (\delta, \infty)} \Lambda_\theta^*(x)$.
By Proposition 4, choosing $M > I_\delta(\theta)$, one can find $\epsilon > 0$ such that

$$P_g\big( \omega(\dot{A}_n; \epsilon) > \delta \big) \le e^{-Mn}.$$

Since

$$P_g\big( \hat{\theta}_n \in G \big) \ge P_g\big( \dot{A}_n(\theta) \ge \delta \big) \left[ 1 - \frac{ P_g\big( \omega(\dot{A}_n; \epsilon) > \delta \big) }{ P_g\big( \dot{A}_n(\theta) \ge \delta \big) } \right],$$

it follows from (61), by the choice of $M$, that

$$\liminf_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \in G \big) \ge -I_\delta(\theta).$$

Taking the supremum over all $\delta > 0$ on the left- and right-hand sides yields the required lower bound. □
Turning to the higher-dimensional case, we first need the following result, which provides a uniform bound on the Hessian of the objective function $A_n(\theta)$.
Lemma 2.
Under Hypotheses 1–8, there exists a finite constant $0 < C < \infty$ such that, with probability one,

$$\sup_{n \ge 1} \sup_{\theta \in \Theta} \big\| H_{A_n}(\theta) \big\|_2 \le C.$$
Proof. 
This is standard. Specifically, note that the $(i,j)$th element of the matrix $H_{A_n}(\theta)$ is given by

$$h_{n,ij} = \int_S \ddot{s}_\theta^{ij}(x) \, g_n^{1/2}(x) \, dx.$$

Writing the expression for $\ddot{s}_\theta^{ij}$ in terms of the derivatives of the score function $u_\theta$, and using the Cauchy–Schwarz inequality along with Hypotheses 3, 4, 6, and 8 and the definition of the matrix norm, the lemma follows. □
In the proof of the lower bound, we will take a somewhat different approach, involving the analysis of $k$ constraints, and our strategy will be to reduce this to a problem involving a single constraint. Specifically, in (67) below, we establish that, instead of studying $k$ constraints on a quantity $D_n$ (which we are about to define), we can cast the problem in terms of a $d$-dimensional vector $Y_n$ (defined in (71) below) belonging to a ball centered at $\mathbf{0}$ and of appropriate radius.
To be more precise, let $G \subseteq \mathbb{R}^d$ be open, and consider the probability that we obtain an estimated value $\theta \in G$. Let $\{ \theta_1, \ldots, \theta_k \} \subseteq \Theta \setminus G$, and for any $\delta > 0$, set

$$d_n(j) = A_n(\theta) - A_n(\theta_j) - \delta, \quad j = 1, \ldots, k,$$

and $D_n(\theta) = ( d_n(1), \ldots, d_n(k) )$. If $\theta$ is chosen as the estimate, then we must have $A_n(\theta) - A_n(\theta_j) \ge 0$ for all $j$; so, in particular,

$$P_g\big( \hat{\theta}_n \in G \big) \ge P_g\big( D_n(\theta) \ge 0 \big) \qquad (67)$$

(by which we mean that $d_n(j) \ge 0$ for all $j$ in this last probability).
To evaluate the latter probability, observe that by a second-order Taylor expansion,

$$s_\theta(x) - s_{\theta_j}(x) = \langle \theta - \theta_j, \nabla s_\theta(x) \rangle + \frac{1}{2} ( \theta - \theta_j )' H(x; \theta_j^*) ( \theta - \theta_j ),$$

where $\theta_j^*$ lies between $\theta$ and $\theta_j$. Using the positive definiteness and uniform boundedness of the matrix $\int_{\mathbb{R}} H(x; \theta) \, p^{1/2}(x) \, dx$, by Hypothesis 4, we have that for any unit vector $v \in \mathbb{R}^d$,

$$\inf_{p \in \mathcal{G}} \inf_{\eta \in \Theta} v' \left( \int_{\mathbb{R}} H(x; \eta) \, p^{1/2}(x) \, dx \right) v \ge c,$$

where $c$ is a positive constant independent of $v$. Thus, for each $j$,

$$\inf_{p \in \mathcal{G}} \inf_{\eta \in \Theta} ( \theta - \theta_j )' \left( \int_{\mathbb{R}} H(x; \eta) \, p^{1/2}(x) \, dx \right) ( \theta - \theta_j ) \ge c \, \| \theta - \theta_j \|^2.$$

Integrating with respect to $g_n^{1/2}(\cdot)$ and using the definition of $A_n(\cdot)$, we then obtain that

$$d_n(j) = \int_{\mathbb{R}} \langle \theta - \theta_j, \nabla s(x; \theta) \rangle \, g_n^{1/2}(x) \, dx + R(\theta, \theta_j),$$

where

$$R(\theta, \theta_j) \ge c \, \| \theta - \theta_j \|^2 - \delta.$$
Let $Y_n(\theta) = ( Y_{n,1}, \ldots, Y_{n,d} )$, where, writing $s(x; \theta) := s_\theta(x)$,

$$Y_{n,j} = \int_S \frac{\partial}{\partial \theta_j} s(x; \theta) \, g_n^{1/2}(x) \, dx, \quad 1 \le j \le d. \qquad (71)$$

(We have suppressed $\theta$ in the notation for $Y_{n,j}$.) Then the inequality $d_n(j) \ge 0$ corresponds to an event $E_{n,j}$ described by the occurrence of the inequality

$$\left\langle \frac{\theta - \theta_j}{\| \theta - \theta_j \|}, \, Y_n \right\rangle \ge -c \, \| \theta - \theta_j \| + \delta \, \big( \| \theta - \theta_j \| \big)^{-1}, \qquad (72)$$

where the right-hand side is always negative for small $\delta$ (since $\mathrm{dist}(\theta, \Theta \setminus G) > 0$) and behaves like a constant multiple of $\mathrm{dist}(\theta, \Theta \setminus G)$ as this distance tends to infinity. Thus, we can choose a positive constant $a_\delta$ such that

$$-a_\delta \, \mathrm{dist}(\theta, \Theta \setminus G) \ge -c \, \| \theta - \theta_j \| + \delta \, \big( \| \theta - \theta_j \| \big)^{-1}, \quad j = 1, \ldots, k, \qquad (73)$$

and set $c_\theta(\delta) := a_\delta \, \mathrm{dist}(\theta, \Theta \setminus G)$. Finally, let $\tilde{E}_n$ denote the event that

$$\left\langle \frac{\theta - \theta_j}{\| \theta - \theta_j \|}, \, Y_n \right\rangle \ge -c_\theta(\delta).$$

Then for all $j$, $\tilde{E}_n \subseteq E_{n,j}$, where we recall that $E_{n,j}$ was defined via (72). Now, since the definition of the event $\tilde{E}_n$ does not depend on any specific vector $\theta_j$, one can replace the vector $( \theta - \theta_j ) ( \| \theta - \theta_j \| )^{-1}$ by any unit vector $v$ in $\mathbb{R}^d$. Hence

$$P_g\big( D_n \ge 0 \big) \ge P_g\big( \langle v, Y_n \rangle \ge -c_\theta(\delta) \ \text{for all unit vectors} \ v \big) = P_g\big( Y_n \in \bar{B}(0; c_\theta(\delta)) \big), \qquad (74)$$

and we now derive a large deviation lower bound for the probability on the right-hand side.
Proposition 5.
Assume that Hypotheses 1–8 hold, and suppose that $G$ is an open subset of $\mathbb{R}^d$. Assume that $b_n \to 0$ and $n b_n \to \infty$ as $n \to \infty$. Then for any $\theta \in G$ and $r > 0$,

$$\liminf_{n \to \infty} \frac{1}{n} \log P_g\big( Y_n \in B(0; r) \big) \ge -I_r(\theta),$$

where $I_r(\theta) = \inf\big\{ \Lambda_\theta^*(x) : x \in R_\theta \cap B(0; r) \big\}$, and the infimum is taken to be infinity if the set $R_\theta \cap B(0; r)$ is empty.
Proof. 
We begin by studying the limiting generating function of $Y_n$. By Varadhan's integral lemma, it follows that

$$\lim_{n \to \infty} \Lambda_{n,\theta}(\alpha) := \lim_{n \to \infty} \frac{1}{n} \log E_g\big[ \exp\big( n \langle \alpha, Y_n \rangle \big) \big] = \Lambda_\theta(\alpha),$$

where

$$\Lambda_\theta(\alpha) = \sup_{p \in \mathcal{G}} \left\{ \int_S \langle \alpha, \nabla s_\theta(x) \rangle \, p^{1/2}(x) \, dx - \mathrm{KL}(p, g) \right\}.$$

Define the $\alpha$-shifted distribution by

$$Q_{n,\alpha}(B) = \frac{1}{\exp\big( n \Lambda_{n,\theta}(\alpha) \big)} \int_B e^{n \langle \alpha, y \rangle} \, dQ_n(y),$$

where $Q_n$ denotes the distribution of $Y_n$. Note by the convexity of $\Lambda_\theta(\alpha)$ that it is almost everywhere differentiable. Fix $x \in R_\theta \cap B(0; r)$ and choose $\alpha$ such that $\nabla \Lambda_\theta(\alpha) = x$. Let $\delta > 0$ be such that $B(x; \delta) \subseteq B(0; r)$. Then

$$Q_n\big( B(x; \delta) \big) = \exp\big( n \Lambda_{n,\theta}(\alpha) \big) \int_{B(x; \delta)} \exp\big( -n \langle \alpha, y \rangle \big) \, dQ_{n,\alpha}(y) \ge \exp\Big( n \big( -\langle \alpha, x \rangle + \Lambda_{n,\theta}(\alpha) - \| \alpha \| \delta \big) \Big) \, Q_{n,\alpha}\big( B(x; \delta) \big),$$

implying

$$\liminf_{n \to \infty} \frac{1}{n} \log Q_n\big( B(x; \delta) \big) \ge -\langle \alpha, x \rangle + \Lambda_\theta(\alpha) - \| \alpha \| \delta + \liminf_{n \to \infty} \frac{1}{n} \log Q_{n,\alpha}\big( B(x; \delta) \big). \qquad (80)$$

Now, notice that the limiting cumulant generating function of $Y_n$ under the measure $Q_{n,\alpha}$ is given by

$$\tilde{\Lambda}_\theta(\beta) = \Lambda_\theta(\alpha + \beta) - \Lambda_\theta(\alpha).$$

Since $\tilde{\Lambda}_\theta$ is a proper convex function which is finite on $\mathbb{R}^d$, it is continuous; moreover, by the choice of $x$, it is differentiable at $\mathbf{0}$. Hence Condition II.1 of [25] is satisfied. Now, using Theorem IV.1 of [25], it follows that

$$\liminf_{n \to \infty} \frac{1}{n} \log Q_{n,\alpha}\big( B(x; \delta) \big) = 0.$$

Substituting the above into (80), letting $\delta \to 0$, and noting that $\langle \alpha, x \rangle - \Lambda_\theta(\alpha) = \Lambda_\theta^*(x)$ and that $Q_n( B(x; \delta) ) \le P_g( Y_n \in B(0; r) )$, we obtain

$$\liminf_{n \to \infty} \frac{1}{n} \log P_g\big( Y_n \in B(0; r) \big) \ge -\Lambda_\theta^*(x).$$

Taking the supremum over $x \in R_\theta \cap B(0; r)$, the proposition follows. □
Proof of Theorem 3: Upper Bound.
Let $F$ be a closed subset of $\Theta$. Note that $\Theta$ compact implies that $F$ is compact. Let $\{ B(\theta; r) : \theta \in F \}$ denote an open cover of $F$, and let $\{ B(\theta_1; r), \ldots, B(\theta_k; r) \}$ denote a finite subcover. Using that $\nabla A_n(\hat{\theta}_n) = \mathbf{0}$, we then obtain that for any $\alpha \in \mathbb{R}^d$,

$$P_g\big( \hat{\theta}_n \in F \big) \le \sum_{j=1}^k P_g\big( \hat{\theta}_n \in B(\theta_j; r) \big) = \sum_{j=1}^k E_g\Big[ \exp\big( n \langle \alpha, \nabla A_n(\hat{\theta}_n) \rangle \big) \, I\big\{ \hat{\theta}_n \in B(\theta_j; r) \big\} \Big] =: \sum_{j=1}^k T_n(j).$$

Adding and subtracting $\nabla A_n(\theta_j)$ to $\nabla A_n(\hat{\theta}_n)$ and then applying Hölder's inequality (with $p^{-1} + q^{-1} = 1$) yields $T_n(j) \le T_n(1, j, p) \, T_n(2, j, q)$, where

$$\log T_n(1, j, p) = \frac{1}{p} \log E_g\Big[ \exp\big( n p \langle \alpha, \nabla A_n(\theta_j) \rangle \big) \, I\big\{ \hat{\theta}_n \in B(\theta_j; r) \big\} \Big], \qquad \log T_n(2, j, q) = \frac{1}{q} \log E_g\Big[ \exp\big( n q \langle \alpha, \nabla A_n(\hat{\theta}_n) - \nabla A_n(\theta_j) \rangle \big) \, I\big\{ \hat{\theta}_n \in B(\theta_j; r) \big\} \Big].$$

First we study $T_n(2, j, q)$. For $\hat{\theta}_n \in B(\theta_j; r)$, the Cauchy–Schwarz inequality gives

$$\big| \langle \alpha, \nabla A_n(\hat{\theta}_n) - \nabla A_n(\theta_j) \rangle \big| \le \| \alpha \|_2 \sup_{\theta_1, \theta_2 \in B(\theta_j; r)} \big\| \nabla A_n(\theta_1) - \nabla A_n(\theta_2) \big\|_2 \le \| \alpha \|_2 \, r \sup_{\theta \in B(\theta_j; r)} \big\| H_{A_n}(\theta) \big\|_2 \le \| \alpha \|_2 \, r \max_{1 \le j \le k} \sup_{\theta \in B(\theta_j; r)} \big\| H_{A_n}(\theta) \big\|_2,$$

where $H_{A_n}(\theta)$ is the Hessian matrix consisting of the second partial derivatives of $A_n(\theta)$. Hence we obtain for any $1 \le j \le k$ that

$$\frac{1}{n} \log T_n(2, j, q) \le \frac{1}{nq} \big( n q \, r \, \| \alpha \|_2 \big) \max_{1 \le j \le k} \sup_{\theta \in B(\theta_j; r)} \big\| H_{A_n}(\theta) \big\|_2 = r \, \| \alpha \|_2 \max_{1 \le j \le k} \sup_{\theta \in B(\theta_j; r)} \big\| H_{A_n}(\theta) \big\|_2.$$

Now by Lemma 2 (with $C$ absorbing the factor $\| \alpha \|_2$ for fixed $\alpha$),

$$\limsup_{n \to \infty} \frac{1}{n} \log T_n(2, j, q) \le C r.$$

Also, for each $1 \le j \le k$, Theorem 1 provides that

$$\limsup_{n \to \infty} \frac{1}{n} \log T_n(1, j, p) \le \frac{1}{p} \Lambda_{\theta_j}(p \alpha).$$

Thus

$$\limsup_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \in F \big) \le \max_{1 \le j \le k} \limsup_{n \to \infty} \frac{1}{n} \log T_n(1, j, p) + \max_{1 \le j \le k} \limsup_{n \to \infty} \frac{1}{n} \log T_n(2, j, q) \le \max_{1 \le j \le k} \frac{1}{p} \Lambda_{\theta_j}(p \alpha) + C r.$$

Since the last inequality holds for all $p > 1$,

$$\limsup_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \in F \big) \le \max_{1 \le j \le k} \frac{1}{p} \Lambda_{\theta_j}(p \alpha) + C r \to \max_{1 \le j \le k} \Lambda_{\theta_j}(\alpha) + C r \quad \text{as } p \downarrow 1.$$

Moreover, minimizing over $\alpha$, we have for each $j$ that

$$\inf_{\alpha \in \mathbb{R}^d} \Lambda_{\theta_j}(\alpha) = -\sup_{\alpha \in \mathbb{R}^d} \big\{ -\Lambda_{\theta_j}(\alpha) \big\} =: -\Lambda_{\theta_j}^*(0).$$

Hence

$$\limsup_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \in F \big) \le -\min_{1 \le j \le k} \Lambda_{\theta_j}^*(0) + C r \le -\inf_{\theta \in F} \Lambda_\theta^*(0) + C r.$$

The upper bound follows by letting $r \to 0$. □
Proof of Theorem 3: Lower Bound.
Let $G$ be an open subset of $\Theta$, and let $\theta \in G$. Then $G^c = \Theta \setminus G$ is compact, and there exists a collection $T = \{ \theta_1, \ldots, \theta_k \} \subseteq G^c$ such that $B(\theta_1; \epsilon), \ldots, B(\theta_k; \epsilon)$ forms a finite cover of $\Theta \setminus G$, where $\epsilon > 0$. Since

$$\sup_{t \in \Theta \setminus G} A_n(t) \le \max_{1 \le j \le k} A_n(\theta_j) + \max_{1 \le j \le k} \sup_{t \in B(\theta_j; \epsilon)} \big[ A_n(t) - A_n(\theta_j) \big] \le \max_{1 \le j \le k} A_n(\theta_j) + \sup_{\| \theta_1 - \theta_2 \| < \epsilon} \big[ A_n(\theta_1) - A_n(\theta_2) \big],$$

it follows that

$$P_g\big( \hat{\theta}_n \in G \big) \ge P_g\Big( A_n(\theta) > \max_{1 \le j \le k} A_n(\theta_j) + \delta, \ \sup_{\| \theta_1 - \theta_2 \| < \epsilon} \big[ A_n(\theta_1) - A_n(\theta_2) \big] \le \delta \Big) \ge J_{n,1} - J_{n,2},$$

where

$$J_{n,1} := P_g\Big( A_n(\theta) > \max_{1 \le j \le k} A_n(\theta_j) + \delta \Big), \qquad J_{n,2} := P_g\Big( \sup_{\| \theta_1 - \theta_2 \| < \epsilon} \big[ A_n(\theta_1) - A_n(\theta_2) \big] > \delta \Big) = P_g\big( \omega(A_n; \epsilon) > \delta \big).$$

We now investigate the behavior of $J_{n,1}$ and $J_{n,2}$. Starting with $J_{n,1}$, note that

$$J_{n,1} \ge P_g\Big( \min_{1 \le j \le k} \big( A_n(\theta) - A_n(\theta_j) - \delta \big) > 0 \Big) = P_g\big( D_n > 0 \big).$$

Now by (74), it follows that

$$J_{n,1} \ge P_g\big( Y_n \in B(0; r) \big),$$

where $Y_n$ is as in (71) and $r = c_\theta(\delta)$. Applying Proposition 5, we obtain

$$\liminf_{n \to \infty} \frac{1}{n} \log J_{n,1} \ge -I_r(\theta),$$

where $I_r(\theta) = \inf\big\{ \Lambda_\theta^*(x) : x \in R_\theta \cap B(0; r) \big\}$, and we now observe that $r$ may be chosen to be $c_\theta := \lim_{\delta \to 0} c_\theta(\delta) > 0$, where $c_\theta(\delta)$ is given as in (73). Hence we may replace $I_r(\cdot)$ with $I(\cdot)$ on the right-hand side of the previous equation. Next, using Proposition 4 yields that

$$\liminf_{n \to \infty} \frac{1}{n} \log P_g\big( \hat{\theta}_n \in G \big) \ge \liminf_{n \to \infty} \frac{1}{n} \log J_{n,1} + \liminf_{n \to \infty} \frac{1}{n} \log \left( 1 - \frac{J_{n,2}}{J_{n,1}} \right) \ge -I(\theta).$$

Finally, the required lower bound is obtained by maximizing the right-hand side over all $\theta \in G$. □
In the proof of the lower bound, it is clear that the choice of $\{ \theta_1, \ldots, \theta_k \}$ plays a central role, and the rate function $I(\theta)$ will be minimized when $k$ is small. As a simple example, suppose that our goal is to obtain a lower bound for $P_g( \hat{\theta}_n \in G )$, where

$$G = \big\{ (\theta_1, \theta_2) : \theta_1 > a_1 \ \text{or} \ \theta_2 > a_2 \big\} \subseteq \mathbb{R}^2, \quad \theta_g \notin G,$$

which is a union of two halfspaces. This can be expressed as $a + C$, where $a = (a_1, a_2)$ and $C = \{ (\theta_1, \theta_2) : \theta_1 > 0 \ \text{or} \ \theta_2 > 0 \}$, which is an example of a translated cone. Now if $\theta \in G$, then we can find two elements which generate the entire set $\Theta \setminus G$, in the sense that all other normalized differences lie between these two unit vectors. These two representative points are the unit vectors $e_1 = (1, 0)$ and $e_2 = (0, 1)$, and all other normalized differences $( \theta - \tilde{\theta} ) / \| \theta - \tilde{\theta} \|$ lie between these vectors for all $\tilde{\theta} \in \Theta \setminus G$. Going back to (73), we see that this equation again holds. Furthermore, (74) holds with $B(0; c_\theta(\delta))$ now replaced by an intersection of two halfspaces rather than of all halfspaces, yielding an unbounded region in the definition of $I(\theta)$. This potentially improves the quality of the lower bound compared with what is presented in the statement of Theorem 3. This idea can potentially be generalized to other sets, such as other unions of halfspaces, and so, from a practical perspective, could apply somewhat generally.

4. Concluding Remarks

In this article, we have derived large deviation results for the minimum Hellinger distance estimators of a family of continuous distributions satisfying an equicontinuity condition. These results extend large deviation asymptotics for M-estimators given, e.g., in [6,9]. In contrast to the case for M-estimators, our setting is complicated due to its inherent nonlinearity, leading to complications in the proofs of both the upper and lower bounds, and an unexpected subtlety in the form of the rate function for the lower bound. Our results suggest that one can, under additional hypotheses, establish saddlepoint approximations to the density of MHDE, which would enable one to sharpen inference for small samples.
Similar results are expected to hold for discrete distributions. However, the equicontinuity condition is not required in that case, since $\ell^1$, unlike $L^1(S)$, possesses the Schur property. Hence the LDP in the weak topology of $\ell^1$ can be derived (more easily) using a standard Gärtner–Ellis argument, and utilizing this, one can, in principle, repeat all of the arguments above to derive results analogous to Theorems 2 and 3. Large deviations for other divergences under weaker regularity conditions on the family (such as noncompactness of the parameter space $\Theta$)—and their connections to estimation and test efficiency—are interesting open problems requiring new techniques beyond those described in this article.

Author Contributions

Conceptualization, A.N.V. and J.F.C.; Methodology, A.N.V. and J.F.C.; Validation, A.N.V. and J.F.C.; Writing—original draft, A.N.V. and J.F.C.; Writing—review & editing, A.N.V. and J.F.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Beran, R. Minimum Hellinger distance estimates for parametric models. Ann. Stat. 1977, 5, 445–463. [Google Scholar] [CrossRef]
  2. Lindsay, B.G. Efficiency versus robustness: The case for minimum Hellinger distance and related methods. Ann. Stat. 1994, 22, 1081–1114. [Google Scholar] [CrossRef]
  3. Basu, A.; Shioya, H.; Park, C. Statistical Inference: The Minimum Distance Approach; CRC Press: Boca Raton, FL, USA, 2011. [Google Scholar]
  4. Pardo, L. Statistical Inference Based on Divergence Measures; CRC Press: Boca Raton, FL, USA, 2006. [Google Scholar]
  5. Bahadur, R.R. Rates of convergence of estimates and test statistics. Ann. Math. Stat. 1967, 38, 303–324. [Google Scholar] [CrossRef]
  6. Borovkov, A.A.; Mogulskii, A.A. Large Deviations and Testing Statistical Hypotheses. Sib. Adv. Math. 1992, 2, 43–72. [Google Scholar]
  7. Fu, J.C. On a theorem of Bahadur on the rate of convergence of point estimators. Ann. Stat. 1973, 1, 745–749. [Google Scholar] [CrossRef]
  8. Arcones, M.A. Large deviations for M-estimators. Ann. Inst. Stat. Math. 2006, 58, 21–52. [Google Scholar] [CrossRef]
  9. Joutard, C. Large deviations for M-estimators. Math. Methods Stat. 2004, 13, 179–200. [Google Scholar]
  10. Dembo, A.; Zeitouni, O. Large Deviations Techniques and Applications; Springer: Berlin, Germany, 1998. [Google Scholar]
  11. Puhalskii, A.; Spokoiny, V. On large-deviation efficiency in statistical inference. Bernoulli 1998, 4, 203–272. [Google Scholar] [CrossRef]
  12. Nikitin, Y. Asymptotic Efficiency of Nonparametric Tests; Cambridge University Press: Cambridge, UK, 1995. [Google Scholar]
  13. Biggins, J.; Bingham, N. Large deviations in the supercritical branching process. Adv. Appl. Probab. 1993, 25, 757–772. [Google Scholar] [CrossRef]
  14. Billingsley, P. Convergence of Probability Measures, 2nd ed.; John Wiley & Sons, Inc.: New York, NY, USA, 1999. [Google Scholar]
  15. de Acosta, A. On large deviations of empirical measures in the τ-topology. J. Appl. Probab. 1993, 31, 41–47. [Google Scholar] [CrossRef]
  16. Basu, A.; Sarkar, S.; Vidyashankar, A.N. Minimum negative exponential disparity estimation in parametric models. J. Statist. Plann. Inference 1997, 58, 349–370. [Google Scholar] [CrossRef]
  17. Cheng, A.-L.; Vidyashankar, A.N. Minimum Hellinger distance estimation for randomized play the winner design. J. Statist. Plann. Inference 2006, 136, 1875–1910. [Google Scholar] [CrossRef]
  18. Devroye, L.; Györfi, L. Nonparametric Density Estimation: The L1 View; Wiley Series in Probability and Mathematical Statistics: Tracts on Probability and Statistics; John Wiley & Sons, Inc.: New York, NY, USA, 1985. [Google Scholar]
  19. Conway, J.B. A Course in Functional Analysis; Springer: New York, NY, USA, 1990. [Google Scholar]
  20. Dupuis, P.; Ellis, R.S. A Weak Convergence Approach to the Theory of Large Deviations; John Wiley & Sons: New York, NY, USA, 1997. [Google Scholar]
  21. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970. [Google Scholar]
  22. Boos, D.D. A converse to Scheffé’s theorem. Ann. Stat. 1985, 13, 423–427. [Google Scholar] [CrossRef]
  23. Lei, L. Large Deviations for Kernel Density Estimators and Study for Random Decrement Estimator. Ph. D. Thesis, Université Blaise Pascal-Clermont-Ferrand II, Clermont-Ferrand, France, 2005. [Google Scholar]
  24. Louani, D.; Maouloud, S.M.O. Some functional large deviations principles in nonparametric function estimation. J. Theor. Probab. 2012, 25, 280–309. [Google Scholar] [CrossRef]
  25. Ellis, R.S. Large deviations for a general class of random vectors. Ann. Probab. 1984, 12, 1–12. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
