Improvement of Bobrovsky–Mayor–Wolf–Zakai Bound

This paper presents a difference-type lower bound for the Bayes risk as a difference-type extension of the Borovkov–Sakhanenko bound. The resulting bound asymptotically improves the Bobrovsky–Mayor–Wolf–Zakai bound, which is a difference-type extension of the Van Trees bound. Some examples are also given.


Introduction
The Bayesian Cramér-Rao bound, or Van Trees bound [1], has been extended in a number of directions (e.g., [1][2][3]). For example, multivariate versions of such bounds are discussed in [4]. These bounds are used in many practical fields, such as signal processing and nonlinear filtering. However, they are not always sharp. To improve them, Bhattacharyya-type extensions were provided in [5,6]. Bayesian bounds are commonly split into two categories, the Weiss-Weinstein family [7][8][9] and the Ziv-Zakai family [10][11][12]. The work in [13] serves as an excellent reference on this topic.
Recently, the authors in [14] showed that the Borovkov-Sakhanenko bound is asymptotically better than the Van Trees bound, and asymptotically optimal in a certain class of bounds. The authors in [15] compared some Bayesian bounds from the point of view of asymptotic efficiency. Furthermore, necessary and sufficient conditions for the attainment of the Borovkov-Sakhanenko and Van Trees bounds were given in [16] for an exponential family with conjugate and Jeffreys priors.
On the other hand, the Bobrovsky-Mayor-Wolf-Zakai bound [17] is known as a difference-type (Chapman-Robbins type) variation of the Van Trees bound. In this paper, we improve the Bobrovsky-Mayor-Wolf-Zakai bound by constructing a Chapman-Robbins type extension of the Borovkov-Sakhanenko bound. The resulting bound belongs to the Weiss-Weinstein family.
As discussed later, the obtained bound is asymptotically superior to the Bobrovsky-Mayor-Wolf-Zakai bound for a sufficiently small perturbation and a large sample size. We also provide several examples in finite and large sample size settings, including conjugate normal and Bernoulli logit models.
Considering the variance-covariance inequality for G_h, we have the following theorem for the Bayes risk.
Bound (1) is directly derived as a special case of the Weiss-Weinstein class [7]. However, we prove it in Appendix B for the sake of clarity.
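For orientation, a generic form of the variance-covariance (Cauchy–Schwarz) inequality underlying such bounds is sketched below; here ψ(X, θ) is only a placeholder for the function G_h defined above, and this sketch is not a restatement of Bound (1) itself.
\[
E\bigl[\{\hat{\varphi}(X)-\varphi(\theta)\}^{2}\bigr]
\;\ge\;
\frac{\bigl(E\bigl[\{\hat{\varphi}(X)-\varphi(\theta)\}\,\psi(X,\theta)\bigr]\bigr)^{2}}
     {E\bigl[\psi(X,\theta)^{2}\bigr]},
\]
which is simply the Cauchy–Schwarz inequality applied to \hat{\varphi}(X)-\varphi(\theta) and ψ(X, θ) under the joint distribution of (X, θ). If, in addition, E[ψ(X, θ) | X] = 0, then the numerator reduces to (E[ϕ(θ)ψ(X, θ)])², which no longer depends on the estimator, and the right-hand side becomes a genuine lower bound for the Bayes risk.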
Note that the Borovkov-Sakhanenko bound is obtained from the variance-covariance inequality for G_0. Applying the joint score ∂ log f(x, θ)/∂θ to the variance-covariance inequality instead, we obtain the Van Trees bound (5). Since lim_{h→0} B_h = B_0, the value of the Bobrovsky-Mayor-Wolf-Zakai bound (4) converges to that of the Van Trees bound (5) as h → 0 under (B2) in Appendix A. Hence, for a sufficiently small h, the value of Bound (4) is very close to that of Bound (5) in this case.
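For reference, standard scalar forms of the two bounds and the Cauchy–Schwarz step behind their comparison are sketched below; the notation (n iid observations with Fisher information I(θ) per observation, prior π, expectation E under the joint distribution) is ours and may differ in detail from Equations (3) and (5).
\[
E\bigl[\{\hat{\varphi}(X)-\varphi(\theta)\}^{2}\bigr]
\;\ge\;
\frac{\bigl(E[\varphi'(\theta)]\bigr)^{2}}{\,n\,E[I(\theta)]+\mathcal{I}(\pi)\,}
\qquad\text{(Van Trees type)},
\]
where \mathcal{I}(\pi)=E[\{\partial\log\pi(\theta)/\partial\theta\}^{2}] is the prior Fisher information, whereas the Borovkov-Sakhanenko type bound behaves to leading order like E[\varphi'(\theta)^{2}/\{nI(\theta)\}]. By the Cauchy–Schwarz inequality,
\[
\bigl(E[\varphi'(\theta)]\bigr)^{2}
=\Bigl(E\Bigl[\tfrac{\varphi'(\theta)}{\sqrt{I(\theta)}}\,\sqrt{I(\theta)}\Bigr]\Bigr)^{2}
\;\le\;
E\Bigl[\tfrac{\varphi'(\theta)^{2}}{I(\theta)}\Bigr]\,E[I(\theta)],
\]
with equality if and only if \varphi'(\theta) is proportional to I(\theta); this is the equality condition referred to for Inequality (8) below.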
On the other hand, we often consider the normalized risk (see [3,14]). For the evaluation of the normalized risk (6), Bayesian Cramér-Rao bounds can be used; for example, a bound on the normalized risk follows directly from Bound (3). Moreover, the authors in [14,15] showed that the Borovkov-Sakhanenko bound is asymptotically optimal in a certain class and asymptotically superior to the Van Trees bound, that is, Inequality (8) holds. Denote the Borovkov-Sakhanenko bound (3), the Van Trees bound (5), the Bobrovsky-Mayor-Wolf-Zakai bound (4), and Bound (1) by BS_n, VT_n, BMZ_{n,h}, and N_{n,h}, respectively, where n is the sample size and h is the perturbation. Then, (8) implies (9). Hence, from (9), Inequality (10) holds for a sufficiently large n. Moreover, for this large n ∈ N, (11) holds under (B1) and (B2). Hence, if Inequality (8) is strict, then N_{n,h} > BMZ_{n,h} for this large n ∈ N and a sufficiently small h by (10) and (11). The equality in (8) holds if and only if ϕ'(θ) is proportional to I(θ). Therefore, Bound (1) is asymptotically superior to the Bobrovsky-Mayor-Wolf-Zakai bound (4) for a sufficiently small h. However, the comparison between Bounds (1) and (4) is not easy for a finite n. Hence, we now compare various existing bounds in two simple examples for fixed n ∈ N and h ∈ R^1.

Example 1. Let X_1, . . . , X_n be a sequence of iid random variables distributed as N(θ, 1) (θ ∈ Θ = R^1). We show that Bound (1) is asymptotically tighter than the Bobrovsky-Mayor-Wolf-Zakai bound (4) for a sufficiently large n. Suppose that the prior of θ is N(m, τ^2), where m and τ > 0 are known constants. Denote X = (X_1, . . . , X_n) and x = (x_1, . . . , x_n). In this model, the Fisher information I(θ) per observation equals 1. We consider the estimation problem for ϕ(θ) = θ^2, since Bound (1) coincides with Bound (4) for ϕ(θ) = θ (see also [5,6]).
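Before the detailed calculation, it may help to record the ratio of joint densities in this model explicitly; the following is a direct computation under the stated normal model and prior, writing f(x, θ) for the joint density of (x, θ) (i.e., the N(θ, 1) likelihood times the N(m, τ^2) prior), and should agree with Equation (12) up to notation.
\[
\frac{f(x,\theta+h)}{f(x,\theta)}
=\prod_{i=1}^{n}\frac{\exp\{-(x_i-\theta-h)^{2}/2\}}{\exp\{-(x_i-\theta)^{2}/2\}}
\cdot\frac{\exp\{-(\theta+h-m)^{2}/(2\tau^{2})\}}{\exp\{-(\theta-m)^{2}/(2\tau^{2})\}}
=\exp\Bigl(hT-nh\theta-\tfrac{nh^{2}}{2}\Bigr)
 \exp\Bigl(-\tfrac{h(\theta-m)}{\tau^{2}}-\tfrac{h^{2}}{2\tau^{2}}\Bigr),
\]
with T=\sum_{i=1}^{n}x_i, where the first factor comes from the likelihood and the second from the prior.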
First, we calculate the Bobrovsky-Mayor-Wolf-Zakai bound (4). The ratio of f(x, θ + h) to f(x, θ) is given in (12), where T = ∑_{i=1}^n X_i. Since the conditional distribution of T given θ is N(nθ, n), its moment generating function is g_T(s) = E_{T|θ}{exp(sT)} = exp(nθs + ns^2/2), where E_{T|θ}(·) denotes the conditional expectation with respect to the conditional distribution of T given θ; this yields (14). Then, from (12) and (14), we obtain (15), and (16) follows easily from (15).

Next, we calculate Bound (1). Since I(θ) = 1, ϕ(θ) = θ^2, and ϕ'(θ) = 2θ, we obtain (17). Using (18) together with (14), we then have (19). Here, the moment-generating function g_θ(s) of θ and its derivative are

g_θ(s) = E{exp(sθ)} = exp(sm + s^2 τ^2 / 2),
g'_θ(s) = E{θ exp(sθ)} = (m + s τ^2) exp(sm + s^2 τ^2 / 2),

which give (20). So, setting s = −2h in (20), we obtain E{exp(−2hθ)} and hence (21). Therefore, Bound (1) in this model follows from (19) and (21).

Lastly, we compare (1) and (4). From Bounds (1) and (4), we obtain the explicit expressions BMZ_h and N_h for arbitrary h ∈ R^1. In general, while the Bayes risk is O(n^{-1}), the bounds BMZ_h and N_h are O(exp(−nh^2)); that is, they decrease exponentially for h ≠ 0 as n → ∞. Thus, we take the limit as h → 0 in order to obtain an asymptotically tighter bound. Define lim_{h→0} BMZ_h = BMZ_0 and lim_{h→0} N_h = N_0. By (16) and (26), we may compare the reciprocals 4/BMZ_0 and 4/N_0 in order to compare BMZ_0 and N_0. Here, BMZ_0 and N_0 are the Van Trees and Borovkov-Sakhanenko bounds, respectively. The Borovkov-Sakhanenko bound is asymptotically tighter than the Van Trees bound; in this case, it is also tighter than the Van Trees bound for every fixed n. In fact, the difference is positive by (28), so 4/BMZ_0 > 4/N_0 and hence BMZ_0 < N_0 for all n ∈ N.

Next, we compare these bounds with the Bayes risk of the Bayes estimator φ̂_B(X) of ϕ(θ) = θ^2. The Bayes estimator φ̂_B(X) is given in (30). The Bayes risk of (30) and the corresponding normalized risk then show that the Van Trees bound is not asymptotically tight, while the Borovkov-Sakhanenko bound is asymptotically tight.
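As a numerical illustration of this last claim (not part of the original derivation), the following minimal Python sketch uses the closed-form normal posterior of θ to estimate the Bayes risk of φ̂_B(X) = E[θ^2 | X] by Monte Carlo and compares n times that risk with E{ϕ'(θ)^2/I(θ)} = 4(m^2 + τ^2), the level at which the Borovkov-Sakhanenko bound is asymptotically tight under our reading of the normalized-risk claim above; the hyperparameter values, sample sizes, and function names are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
m, tau = 1.0, 0.5          # prior N(m, tau^2); illustrative values only

def bayes_risk_theta2(n, n_rep=200_000):
    """Monte Carlo estimate of the Bayes risk of the Bayes estimator of theta^2
    under X_i | theta ~ N(theta, 1) iid and theta ~ N(m, tau^2)."""
    theta = rng.normal(m, tau, size=n_rep)          # draw theta from the prior
    xbar = rng.normal(theta, 1.0 / np.sqrt(n))      # sufficient statistic: sample mean
    # Posterior of theta given the data is N(mu_n, s2_n) in this conjugate model.
    s2_n = 1.0 / (n + 1.0 / tau**2)
    mu_n = s2_n * (n * xbar + m / tau**2)
    phi_hat = mu_n**2 + s2_n                        # E[theta^2 | X]
    return np.mean((phi_hat - theta**2) ** 2)       # Bayes risk E[(phi_hat - theta^2)^2]

for n in (10, 100, 1000):
    risk = bayes_risk_theta2(n)
    print(f"n={n:5d}  n*risk={n * risk:7.4f}  "
          f"BS limit 4*(m^2+tau^2)={4 * (m**2 + tau**2):7.4f}")
```

As n grows, n times the estimated Bayes risk approaches 4(m^2 + τ^2), consistent with the asymptotic tightness of the Borovkov-Sakhanenko bound in this example.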
We set the hyperparameters to these values so that certain moment conditions hold. In this case, the Fisher information for Model (33) is I(θ) = e^θ/(1 + e^θ)^2, and we consider the estimation problem of ϕ(θ) = θ.
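For completeness, the Fisher information above follows from a direct calculation, under the assumption that Model (33) is the Bernoulli logit density p(x | θ) = e^{θx}/(1 + e^θ), x ∈ {0, 1}, consistent with the likelihood ratio used in the next section:
\[
\log p(x\mid\theta)=\theta x-\log(1+e^{\theta}),\qquad
\frac{\partial}{\partial\theta}\log p(x\mid\theta)=x-\frac{e^{\theta}}{1+e^{\theta}},
\]
\[
I(\theta)=\operatorname{Var}(X\mid\theta)
=\frac{e^{\theta}}{1+e^{\theta}}\Bigl(1-\frac{e^{\theta}}{1+e^{\theta}}\Bigr)
=\frac{e^{\theta}}{(1+e^{\theta})^{2}}.
\]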

Asymptotic Comparison by Laplace Approximation
In this section, we consider Example 2 from the previous section again, in the case where the sample size is n. We show that Bound (1) is asymptotically better than Bound (4) for a sufficiently large n by using the Laplace method; note that the bounds are evaluated only approximately as n → ∞. The probability density function of X_i given θ is given in (49), and the likelihood ratio of (49) is (50). Assume that the prior density of θ is (51). Then, the ratio of (51) is equal to (52). Denoting X = (X_1, . . . , X_n) and x = (x_1, . . . , x_n), the ratio of the joint probability density functions of (X, θ) is (53), by the iid assumption on X_i | θ, (50), and (52). From (53), we obtain an expression containing the factor
\[
(1+e^{\theta})^{2n+2c_{2}}\,(1+e^{\theta+h})^{-2n-2c_{2}}\,e^{2c_{1}h}.
\]
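Since the asymptotic comparison relies on the Laplace method, we recall its standard univariate form for the reader's convenience; this is a textbook statement rather than a result specific to the present model.
\[
\int_{-\infty}^{\infty} q(\theta)\,e^{-n\,r(\theta)}\,d\theta
= q(\theta^{*})\,e^{-n\,r(\theta^{*})}
\sqrt{\frac{2\pi}{n\,r''(\theta^{*})}}\,\bigl(1+O(n^{-1})\bigr),
\]
where q and r are sufficiently smooth, θ^{*} is the unique interior minimizer of r, r''(θ^{*}) > 0, and q(θ^{*}) ≠ 0. In the present example, the expectations defining Bounds (1) and (4) involve integrals of this type, with the exponent proportional to n (as in the factor displayed above), which is what permits their comparison for large n.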