Stein's method of exchangeable pairs for the Beta distribution and generalizations

We propose a new version of Stein's method of exchangeable pairs, which, given a suitable exchangeable pair $(W,W')$ of real-valued random variables, suggests the approximation of the law of $W$ by a suitable absolutely continuous distribution. This distribution is characterized by a first order linear differential Stein operator, whose coefficients $\gamma$ and $\eta$ are motivated by two regression properties satisfied by the pair $(W,W')$. Furthermore, the general theory of Stein's method for such an absolutely continuous distribution is developed and a general characterization result as well as general bounds on the solution to the Stein equation are given. This abstract approach is a certain extension of the theory developed in the papers \cite{ChSh} and \cite{EiLo10}, which only consider the framework of the density approach, i.e. $\eta\equiv1$. As an illustration of our technique we prove a general plug-in result, which bounds a certain distance of the distribution of a given random variable $W$ to a Beta distribution in terms of a given exchangeable pair $(W,W')$ and provide new bounds on the solution to the Stein equation for the Beta distribution, which complement the existing bounds from \cite{GolRei13}. The abstract plug-in result is then applied to derive bounds of order $n^{-1}$ for the distance between the distribution of the relative number of drawn red balls after $n$ drawings in a P\'olya urn model and the limiting Beta distribution measured by a certain class of smooth test functions.


Introduction
Since its introduction in [Ste72] in 1972, Stein's method has become a famous and useful tool for proving distributional convergence. One of its main advantages over other techniques is that it automatically yields concrete error bounds on various distributional distances. Though first developed for normal approximation only, it was observed by several authors that Stein's idea of linking a characterizing operator for the target distribution to a differential equation, the Stein equation, carries over to many other absolutely continuous and discrete distributions, where, in the discrete case, the differential equation has to be replaced by a suitable difference equation. Among the other distributions to which Stein's method has been successfully extended are the Poisson distribution (see e.g. [Che75], [AGG89] or [BHJ92]), the Gamma distribution (see [Luk94] or [Rei05]), the exponential distribution (see e.g. [CFR11], [PR11] and [FR13]), the Laplace distribution [PR12] and, more generally, the class of Variance-Gamma distributions [Gau14]. Stein's method for the Beta distribution has been developed independently in the paper [GR13] as well as in the preprint [Döb12b]. Although in both works [GR13] and [Döb12b] a rate of convergence for the relative number of drawn red balls in a Pólya urn model was derived using Stein's method for Beta distributions, the actual approaches were quite different.
In [GR13] the authors developed a useful and widely applicable technique for finding a whole class of characterizing operators for a discrete distribution whose probability mass function is known, and compared one of these operators to the Stein operator for the limiting Beta distribution. The preprint [Döb12b], in contrast, built on a coupling approach: it developed a new version of the exchangeable pairs approach of Stein's method for a rather large class of absolutely continuous distributions on the real line, which differs from the versions in the framework of the density method, as developed in [EL10] and [CS11], by a modification of the Stein equation adapted to a given exchangeable pair. Recently, in [LRS14] a nice generalization of the density method was given which does not necessarily assume absolute continuity of the given distribution and, as an application, the calculation from [GR13] was reworked in this framework. The main purpose of the present paper is to give a more easily readable account of the method and ideas from [Döb12b] by keeping the class of Beta distributions on [0, 1] and the Pólya urn model as a running example and by omitting parts of the theory contained in [Döb12b]. Also, we derive new bounds on the solution to the Stein equation for the Beta distribution, which complement the bounds from [GR13] in the sense that they are neither uniformly worse nor uniformly better in the parameters of the Beta distribution. In any case, this seems to be the first paper which also gives bounds on the second and higher order derivatives of the solution in the case of the Beta distribution.
The remainder of this paper is structured as follows: In Section 2 the general approach is motivated by means of a natural exchangeable pair in the context of the Pólya urn model, and it is stressed by means of this example that the framework of exchangeable pairs within the density approach as developed in [EL10] and [CS11] is not always suitable and why one might want to use a different Stein characterization. Furthermore, our main application, Theorem 2.1, a quantitative distributional limit theorem for the relative number of drawn red balls, is stated. Then, motivated by this example, in Section 3 a general version of Stein's method for a large class of absolutely continuous distributions adapted to a given exchangeable pair is developed. In Section 4 the theory from Section 3 is specialized to the class of Beta distributions and Theorem 2.1 is proved. Finally, in Section 5 several proofs for statements from Sections 3 and 4 are given.


The Pólya urn model
Consider an urn which initially contains $r$ red and $w$ white balls, and fix an integer $c \ge 1$. At each discrete time point $n \in \mathbb{N}$ a ball is drawn from the urn uniformly at random and this ball, together with $c$ other balls of the same colour, is returned to the urn. If we denote by $S_n$ the number of drawn red balls after the first $n$ drawings, $n \in \mathbb{N}$, then we can write

(1) $S_n = \sum_{j=1}^n X_j$,

where $X_j$ denotes the indicator of the event that the $j$-th drawn ball is red, $j \in \mathbb{N}$. It is known from elementary probability theory that for each $n \in \mathbb{N}$ and all $x_1, \dots, x_n \in \{0, 1\}$ we have

(2) $P(X_1 = x_1, \dots, X_n = x_n) = \frac{\prod_{j=0}^{k-1}(r + jc)\,\prod_{j=0}^{n-k-1}(w + jc)}{\prod_{j=0}^{n-1}(r + w + jc)}$,

where $k := \sum_{j=1}^n x_j$. In particular, this shows that the sequence $(X_j)_{j \in \mathbb{N}}$ is exchangeable. It now follows from (2) that for each $k = 0, \dots, n$ we have

$P(S_n = k) = \binom{n}{k}\,\frac{\prod_{j=0}^{k-1}(r + jc)\,\prod_{j=0}^{n-k-1}(w + jc)}{\prod_{j=0}^{n-1}(r + w + jc)}$,

or, with $a := \frac{r}{c}$ and $b := \frac{w}{c}$,

(3) $P(S_n = k) = \binom{n}{k}\,\frac{(a)_k\,(b)_{n-k}}{(a+b)_n}$,

where $(x)_m := x(x+1)\cdots(x+m-1)$ denotes the rising factorial. The distribution of $S_n$ given by (3) is usually referred to as the Pólya distribution with parameters $n \in \mathbb{N}$ and $a, b > 0$.
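For small $n$, the identity (3) can be checked directly against the path probabilities (2) by enumeration. The following Python sketch is an editorial addition (all function names are our own); it uses the fact, which is exact, that the products in (2) equal $(a)_k(b)_{n-k}/(a+b)_n$ after dividing numerator and denominator by $c^n$:

```python
from itertools import product
from math import comb, isclose

def rising(x, m):
    """Rising factorial (x)_m = x(x+1)...(x+m-1)."""
    out = 1.0
    for j in range(m):
        out *= x + j
    return out

def seq_prob(xs, a, b):
    """P(X_1 = x_1, ..., X_n = x_n) from (2), written in terms of a = r/c, b = w/c."""
    n, k = len(xs), sum(xs)
    return rising(a, k) * rising(b, n - k) / rising(a + b, n)

def polya_pmf(n, a, b):
    """P(S_n = k) from (3)."""
    return [comb(n, k) * rising(a, k) * rising(b, n - k) / rising(a + b, n)
            for k in range(n + 1)]

n, a, b = 6, 2.0, 3.0
pmf = polya_pmf(n, a, b)
assert isclose(sum(pmf), 1.0)
# (2) depends on the path only through k, so summing over the binom(n, k)
# paths with exactly k ones recovers (3) -- the exchangeability of (X_j)
for k in range(n + 1):
    direct = sum(seq_prob(xs, a, b) for xs in product([0, 1], repeat=n)
                 if sum(xs) == k)
    assert isclose(direct, pmf[k])
```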
It is a well-known fact that the distribution of $\frac{1}{n}S_n$ converges weakly, as $n \to \infty$, to the distribution $\mathrm{Beta}(a, b)$ with parameters $a$ and $b$, where for general $a, b > 0$ the Beta distribution $\mathrm{Beta}(a, b)$ is defined by the density function $p := p_{a,b}$ with

$p_{a,b}(x) = \frac{x^{a-1}(1-x)^{b-1}}{B(a,b)}, \quad 0 < x < 1.$

Here, $B(a, b)$ denotes the Euler Beta function

$B(a, b) = \int_0^1 t^{a-1}(1-t)^{b-1}\,dt = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}.$

Theorem 2.1 below bounds the distance, measured by a certain class of smooth test functions, between the distribution of the relative number of drawn red balls and the limiting Beta distribution by a quantity of order $n^{-1}$; the constants $C(\cdot, \cdot)$ appearing in it are defined in (49) and (50) below.
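The weak convergence of $\frac{1}{n}S_n$ to $\mathrm{Beta}(a,b)$ can be illustrated numerically through moments: $E[Z^m] = (a)_m/(a+b)_m$ for $Z \sim \mathrm{Beta}(a,b)$, and the corresponding Pólya moments approach these values at rate $n^{-1}$. The following sketch is ours; the pmf is evaluated in log-space via `lgamma` because the rising factorials in (3) overflow for large $n$:

```python
from math import exp, lgamma

def rising(x, m):
    out = 1.0
    for j in range(m):
        out *= x + j
    return out

def polya_pmf(n, a, b):
    """P(S_n = k) for k = 0..n, computed in log-space to avoid overflow."""
    def logp(k):
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + lgamma(a + k) - lgamma(a)
                + lgamma(b + n - k) - lgamma(b)
                - lgamma(a + b + n) + lgamma(a + b))
    return [exp(logp(k)) for k in range(n + 1)]

def polya_moment(n, a, b, m):
    pmf = polya_pmf(n, a, b)
    return sum(pmf[k] * (k / n) ** m for k in range(n + 1))

def beta_moment(a, b, m):
    # E[Z^m] = (a)_m / (a+b)_m for Z ~ Beta(a, b)
    return rising(a, m) / rising(a + b, m)

a, b = 2.0, 3.0
for m in (2, 3, 4):
    gap_small = abs(polya_moment(50, a, b, m) - beta_moment(a, b, m))
    gap_large = abs(polya_moment(500, a, b, m) - beta_moment(a, b, m))
    assert gap_large < gap_small     # the moments converge as n grows
    assert gap_large < 1e-2
```

(The first moment is excluded from the loop since $E[S_n/n] = a/(a+b)$ holds exactly for every $n$.)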
The proof will be given in Section 4. In the paper [GR13] the authors even proved a concrete upper bound of order $n^{-1}$ for the Wasserstein distance between the distributions of $Z$ and $W$ from Theorem 2.1 and also showed that the rate $n^{-1}$ is optimal. Since the Wasserstein distance is induced by 1-Lipschitz test functions, this implies that their result is stronger than Theorem 2.1 as far as the class of test functions is concerned. However, it should be mentioned that their method of comparing Stein operators can only be applied in situations where the distribution of $W$ is explicitly known. In contrast, the exchangeable pairs technique used here seems, in general, to be more flexible in this respect. For instance, our plug-in result, Theorem 4.4 below, might be beneficial for other applications where the exact distribution of $W$ is not at hand. Recall that a pair $(X, X')$ of random elements on a common probability space is called exchangeable if

$(X, X') \stackrel{\mathcal{D}}{=} (X', X).$

The representation (6) for $W$ suggests constructing another random variable $W'$ such that $W$ and $W'$ make up an exchangeable pair using a Gibbs sampling procedure. Noticing that also the random variables $X_1, \dots, X_n$ are exchangeable, the construction of $W'$ can be simplified to the following: Observe $X_1 = x_1, \dots, X_n = x_n$ and construct $X_n'$ according to the distribution $\mathcal{L}(X_n \mid X_1 = x_1, \dots, X_{n-1} = x_{n-1})$. Then, letting

$W' := W + \frac{X_n' - X_n}{n},$

the pair $(W, W')$ is exchangeable and $|W' - W| \le \frac{1}{n}$ is small, which suggests that the exchangeable pair $(W, W')$ should be beneficial for a Stein's method approach to the proof of weak convergence of $\mathcal{L}(W)$ to $\mathrm{Beta}(a, b)$. From the exchangeable pairs approach within normal approximation (see e.g. [Ste86], [CS05] or [CGS11]) and for non-normal approximation (see [EL10] and [CS11]) we know that exchangeability of $(W, W')$ is not enough to guarantee distributional closeness of $W$ and of $Z \sim \mathrm{Beta}(a, b)$, but that a further regression property has to be satisfied.

Proposition 2.2. For the exchangeable pair $(W, W')$ constructed above we have

$E[W' - W \mid W] = -\lambda\bigl((a+b)W - a\bigr), \quad \text{where} \quad \lambda := \frac{1}{n(a+b+n-1)}.$
Proof. We have $W' - W = \frac{X_n'}{n} - \frac{X_n}{n}$ and by exchangeability of $X_1, \dots, X_n$ it clearly holds that $E[X_n \mid W] = E[X_n \mid S_n] = \frac{1}{n}S_n = W$. Also, by the definition of $X_n'$ and since $X_n'$ only assumes the values 0 and 1, we have for any $x_1, \dots, x_n \in \{0,1\}$

$P(X_n' = 1 \mid X_1 = x_1, \dots, X_n = x_n) = P(X_n = 1 \mid X_1 = x_1, \dots, X_{n-1} = x_{n-1}) = \frac{a + \sum_{j=1}^{n-1} x_j}{a+b+n-1},$

and hence

$E[X_n' \mid X_1, \dots, X_n] = \frac{a + S_n - X_n}{a+b+n-1}.$

Thus, since $\sigma(W) \subseteq \sigma(X_1, \dots, X_n)$, we obtain

$E[X_n' \mid W] = \frac{a + nW - W}{a+b+n-1} = \frac{a + (n-1)W}{a+b+n-1}.$

Finally, we have

$E[W' - W \mid W] = \frac{1}{n}\bigl(E[X_n' \mid W] - E[X_n \mid W]\bigr) = \frac{a - (a+b)W}{n(a+b+n-1)} = -\lambda\bigl((a+b)W - a\bigr),$

as was to be shown.
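The linear regression property of Proposition 2.2 can be verified by exact enumeration of all $2^n$ paths for small $n$; the following sketch is an editorial addition (variable names ours), with $\lambda = 1/(n(a+b+n-1))$ taken from the statement above:

```python
from itertools import product
from math import isclose

def rising(x, m):
    out = 1.0
    for j in range(m):
        out *= x + j
    return out

n, a, b = 5, 2.0, 3.0
lam = 1.0 / (n * (a + b + n - 1))

num = [0.0] * (n + 1)    # accumulates E[(W' - W) 1{S_n = k}]
prob = [0.0] * (n + 1)   # accumulates P(S_n = k)
for xs in product([0, 1], repeat=n):
    k = sum(xs)
    q = rising(a, k) * rising(b, n - k) / rising(a + b, n)   # path probability, cf. (2)
    ex_new = (a + sum(xs[:-1])) / (a + b + n - 1)            # E[X_n' | X_1, ..., X_n]
    num[k] += q * (ex_new - xs[-1]) / n
    prob[k] += q

for k in range(n + 1):
    w = k / n
    # regression property: E[W' - W | W] = -lambda ((a+b) W - a)
    assert isclose(num[k] / prob[k], -lam * ((a + b) * w - a), abs_tol=1e-12)
```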
From the theory developed in [EL10] and in [CS11] we know that if a given exchangeable pair $(W, W')$ satisfies a regression property of the form

(8) $E[W' - W \mid W] = \lambda\psi(W) + R,$

where $\lambda > 0$ is a typically small constant and $R$ is negligible in size, then $\mathcal{L}(W)$ can be approximated by the absolutely continuous distribution whose density has logarithmic derivative $\psi$, if and only if the following additional condition is satisfied: it must be the case that

(9) $\frac{1}{2\lambda}E\bigl[(W' - W)^2 \mid W\bigr] \approx 1,$

which is often paraphrased by saying that the term on the left hand side of (9) must satisfy a law of large numbers in order for the approximation to be accurate. Comparing (8) to the statement of Proposition 2.2, we see that, according to the theory from [EL10] or [CS11], the only possibility would be to approximate the distribution of $W$ by a distribution whose density has logarithmic derivative equal to (a constant multiple of)

$a - (a+b)x$

for $x$ in the support of this density, which should be equal to $[0, 1]$ in this case. Since the logarithmic derivative $\psi_{a,b}$ of the density $p_{a,b}$ of $\mathrm{Beta}(a, b)$ is given by

$\psi_{a,b}(x) = \frac{a-1}{x} - \frac{b-1}{1-x}$

and we already know that $\frac{1}{n}S_n$ converges in distribution to $\mathrm{Beta}(a, b)$, we conclude by way of contradiction that the law of large numbers (9) cannot hold. Indeed, we will see in Proposition 2.3 below that the term on the left hand side of (9) is close to the non-constant random quantity $W(1-W)$ rather than to the constant 1. From Proposition 2.2 and some experience with the exchangeable pairs approach within Stein's method we conclude that it would be desirable to have a Stein operator $L$ of the form

$(Lg)(x) = x(1-x)g'(x) + \bigl(a - (a+b)x\bigr)g(x)$

for the Beta distribution $\mathrm{Beta}(a, b)$. Indeed, in Section 4 we will see that a random variable $Z \sim \mathrm{Beta}(a, b)$ satisfies the Stein identity

(12) $E\bigl[Z(1-Z)g'(Z) + (a - (a+b)Z)g(Z)\bigr] = 0$

for all $g$ in a suitable class of functions, i.e. we can indeed choose $L$ as above. The statement of the following proposition will make it possible to exploit the above constructed exchangeable pair $(W, W')$ in connection with the Stein identity (12) in Section 4.
Proposition 2.3. For the above constructed exchangeable pair $(W, W')$ we have

(13) $E\bigl[(W' - W)^2 \mid W\bigr] = 2\lambda\Bigl(W(1-W) + \frac{a + (b-a)W}{2n}\Bigr)$

and hence

$\frac{1}{2\lambda}E\bigl[(W' - W)^2 \mid W\bigr] = W(1-W) + O(n^{-1}).$

Proof. From general facts about Gibbs sampling (see e.g. Appendix B in [Döb12a]) it is known that $X_n'$ is conditionally independent of $X_n$ given $X_1, \dots, X_{n-1}$, so that

$E[X_n X_n' \mid X_1, \dots, X_n] = X_n\,E[X_n' \mid X_1, \dots, X_n].$

Since $X_n^2 = X_n$ and $(X_n')^2 = X_n'$, we have

$(W' - W)^2 = \frac{1}{n^2}\bigl(X_n' + X_n - 2X_nX_n'\bigr),$

and from the proof of Proposition 2.2 we know that $E[X_n' \mid X_1, \dots, X_n] = \frac{a + S_n - X_n}{a+b+n-1}$ and hence

$E[X_n X_n' \mid W] = \frac{(a-1)E[X_n \mid W] + E[X_n S_n \mid W]}{a+b+n-1} = \frac{(a-1)W + nW^2}{a+b+n-1},$

where we have used $E[X_n \mid W] = W$ again, as well as $E[X_n S_n \mid W] = \frac{1}{n}E[S_n^2 \mid S_n] = nW^2$, which follows from exchangeability. Finally, we compute

$E\bigl[(W' - W)^2 \mid W\bigr] = \frac{1}{n^2}\Bigl(W + \frac{a + (n-1)W}{a+b+n-1} - 2\,\frac{(a-1)W + nW^2}{a+b+n-1}\Bigr).$

Putting pieces together, we eventually obtain

$E\bigl[(W' - W)^2 \mid W\bigr] = \frac{a + (2n + b - a)W - 2nW^2}{n^2(a+b+n-1)} = 2\lambda\Bigl(W(1-W) + \frac{a + (b-a)W}{2n}\Bigr).$

The last assertion easily follows from (13) and from $\lambda = \frac{1}{n(a+b+n-1)}$.
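That $\frac{1}{2\lambda}E[(W'-W)^2 \mid W]$ is within $O(n^{-1})$ of $W(1-W)$ can again be checked by exact enumeration for small $n$. The sketch below is ours; the crude envelope $\max(a,b)/(2n)$ used in the assertion is our own estimate of the remainder:

```python
from itertools import product

def rising(x, m):
    out = 1.0
    for j in range(m):
        out *= x + j
    return out

n, a, b = 5, 2.0, 3.0
lam = 1.0 / (n * (a + b + n - 1))

num2 = [0.0] * (n + 1)   # accumulates E[(W' - W)^2 1{S_n = k}]
prob = [0.0] * (n + 1)   # accumulates P(S_n = k)
for xs in product([0, 1], repeat=n):
    k = sum(xs)
    q = rising(a, k) * rising(b, n - k) / rising(a + b, n)   # path probability, cf. (2)
    pt = (a + sum(xs[:-1])) / (a + b + n - 1)                # P(X_n' = 1 | X_1..X_{n-1})
    # E[(X_n' - X_n)^2 | path] for indicator variables
    num2[k] += q * (pt + xs[-1] - 2 * pt * xs[-1]) / n ** 2
    prob[k] += q

for k in range(n + 1):
    w = k / n
    val = num2[k] / prob[k] / (2 * lam)
    # (1/(2 lambda)) E[(W'-W)^2 | W] deviates from W(1-W) by at most O(1/n)
    assert abs(val - w * (1 - w)) <= max(a, b) / (2 * n) + 1e-12
```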
One main aspect of the theoretical contribution of this article is to emphasize that it is no coincidence that

$\frac{1}{2\lambda}E\bigl[(W' - W)^2 \mid W\bigr] \approx W(1-W) = \eta(W),$

but that this is a natural replacement of condition (9) from the density approach for our class of Stein operators of the form (15) below. We end this motivational section with an abstraction of the ideas developed in the context of the Pólya urn model and the limiting Beta distribution above. Suppose we are given a sequence of random variables $W = W_n$ of which we know that, as $n \to \infty$, it converges in distribution to a random variable $Z$ with an absolutely continuous distribution and density $p$ with respect to the Lebesgue measure. We will also assume that $p$ itself is absolutely continuous (on each compact subinterval of its support $(a, b)$, where $-\infty \le a < b \le \infty$ are extended real numbers). Suppose also that we can naturally construct a random variable $W'$, a small random perturbation of $W$, such that $(W, W')$ is an exchangeable pair, $|W - W'|$ is small in a certain sense and a regression property of the form

(14) $E[W' - W \mid W] = \lambda\gamma(W) + R$

holds, where $\gamma$ is a certain function on the support of $\mathcal{L}(Z)$, $\lambda > 0$ is a constant and $R$ is a negligible remainder term. The goal is to compute a rate of convergence for the distributional convergence $W \to Z$ by Stein's method of exchangeable pairs for $\mathcal{L}(Z)$. By the above reasoning it would be beneficial to have a characterizing Stein operator $L$ for $Z$ of the form

(15) $(Lg)(x) = \eta(x)g'(x) + \gamma(x)g(x),$

where $\eta$ is a function that still has to be found. One might suppose that, in order that $L$ characterizes $\mathcal{L}(Z)$, given the density $p$ of $Z$ and the function $\gamma$, the function $\eta$ is unique, but we will see that this is only true up to a constant multiple of $p^{-1}$. Note that by exchangeability

$0 = E[W' - W] = \lambda E[\gamma(W)] + E[R] \approx \lambda E[\gamma(W)] \approx \lambda E[\gamma(Z)],$

where the first approximation is by the assumption that $R$ is of negligible order and the second is by the fact that $W$ converges to $Z$ in distribution. Hence, it is natural to assume from the outset that $E[\gamma(Z)] = 0$. In particular, we should assume that $E|\gamma(Z)| < \infty$.
A natural question is, given $p$ and $\gamma$, whether there is a general formula for the function $\eta$. In the preprint [Döb12b] the first order linear differential equation

(17) $\eta'(x) + \psi(x)\eta(x) = \gamma(x)$

was found by making, for a given test function $h$, the ansatz $g_h(x) = \alpha(x)f_h(x)$, where $g_h$ solves the Stein equation belonging to the operator (15) and $f_h$ the Stein equation corresponding to the density approach. Here, again, $\psi$ denotes the logarithmic derivative of $p$. In this paper we follow a different, more direct reasoning. If $\eta$ is such that (15) characterizes $\mathcal{L}(Z)$, then, for suitable functions $g$, partial integration yields

(20) $0 = E\bigl[\eta(Z)g'(Z) + \gamma(Z)g(Z)\bigr] = \int_a^b \Bigl(\gamma(x)p(x) - (\eta p)'(x)\Bigr)g(x)\,dx,$

and this must hold for each regular enough, say e.g. bounded, function $g$. Thus, from (20) we conclude that $\eta$ must satisfy (17), which may equivalently be written as $(\eta p)' = \gamma p$. Of course, (17) can be solved by the method of variation of the constant and it turns out that

(21) $\eta(x) = \frac{1}{p(x)}\int_a^x \gamma(t)p(t)\,dt.$

Also note that every other solution to (17) has the form $\eta_\kappa(x) = \eta(x) + \frac{\kappa}{p(x)}$ for some constant $\kappa$. In principle, the particular choice of $\kappa$ is arbitrary, and a choice $\kappa \ne 0$ sometimes even yields better behaved solutions $g_h$ to the Stein equation (18). In fact, it is easy to see from (32) below that a choice $\kappa \ne 0$ automatically implies $g_h(a+) = g_h(b-) = 0$. Also, sometimes a choice $\kappa \ne 0$ is implicit in the density approach. For instance, if $a > -\infty$, $b < \infty$ and the density $p$ is such that $p(a+)$ exists in $(0, \infty)$, then the choice $\gamma = \psi$ in (21) gives

$\eta(x) = \frac{1}{p(x)}\int_a^x p'(t)\,dt = 1 - \frac{p(a+)}{p(x)}.$

Hence, in all these cases, using the density approach (for which $\eta \equiv 1$) implicitly entails choosing $\eta_\kappa$ with $\kappa = p(a+)$. When developing the general theory in Section 3 we restrict ourselves to the solution $\eta$ given by (21), i.e. to $\kappa = 0$. We thus already mention at this point that the density approach for $p$ is included in the theory presented in Section 3 if and only if $p(a+) = 0$ (and, analogously, $p(b-) = 0$ if $b < \infty$). Apart from this, it turns out that in many cases $\eta$ given by (21) has a neat analytical representation, e.g. it is given by a polynomial of degree at most 2, whereas a choice $\kappa \ne 0$ would introduce a complicated coefficient into (18) originating from the term $p(x)^{-1}$.
For instance, if $Z \sim N(0, 1)$ is standard normally distributed and $\gamma(x) = -x$, then (21) yields $\eta \equiv 1$, whereas the general expression is $\eta_\kappa(x) = 1 + \kappa e^{x^2/2}$, which is difficult to handle in practice. Furthermore, if $p$ is not bounded away from zero, then every $\kappa \ne 0$ gives an unbounded function $\eta_\kappa$, whereas $\eta$ given by (21) usually is bounded, at least if $a > -\infty$ and $b < \infty$ (see, e.g., Proposition 3.5 below). In the next section we will see that, under certain mild conditions on the density $p$ of $Z$ and on the coefficient $\gamma$, which, of course, need not originate from an exchangeable pair, the operator $L$ given by (15) is indeed characterizing for $\mathcal{L}(Z)$, and we prove bounds on the solution to the corresponding Stein equation (18) for suitable test functions $h$. Finally, we want to propose a strategy of how to proceed if, contrarily to the above reasoning, we do not know the limiting density $p$ from the outset but are only given an exchangeable pair $(W, W')$ such that (14) holds and also

(23) $\frac{1}{2\lambda}E\bigl[(W' - W)^2 \mid W\bigr] = \eta(W) + S$

is satisfied with the same constant $\lambda > 0$ and a small remainder term $S$, where $\eta$ now is a certain given function which is positive on a certain open interval $J = (a, b) \subseteq \mathbb{R}$ on which $\gamma$ is also defined. Note that from (17) we have for the logarithmic derivative $\psi$ of the sought density $p$ that $\psi = \frac{\gamma - \eta'}{\eta}$ and, hence, for $x \in J$ and an arbitrary point $x_0 \in J$:

(24) $p(x) = \frac{K}{\eta(x)}\exp\Bigl(\int_{x_0}^x \frac{\gamma(t)}{\eta(t)}\,dt\Bigr).$

Here, of course, $K = p(x_0)\eta(x_0)$ is the normalization constant. Formula (24) shows that $p$ is uniquely determined by $\gamma$ and $\eta$. Furthermore, in Theorem 3.22 we will give precise criteria for $\gamma$ and $p$ defined by (24) to satisfy $\int_a^b \gamma(t)p(t)\,dt = 0$ and for $\eta$ to satisfy (21), so that the results of the theory developed in Section 3 can in fact be applied. This, together with Proposition 3.19 and Remark 3.20 (iii), suggests the approximation of $\mathcal{L}(W)$ by the distribution with density $p$ if the exchangeable pair $(W, W')$ satisfies (14) and (23).
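Formula (21) can be evaluated numerically for concrete choices of $p$ and $\gamma$. For $Z \sim \mathrm{Beta}(a,b)$ and $\gamma(x) = a - (a+b)x$ one can check by direct differentiation that the integral has the closed form $\eta(x) = x(1-x)$; the following sketch (step counts and names are our own choices) verifies this with a midpoint-rule quadrature:

```python
from math import gamma

a, b = 2.0, 3.0
B = gamma(a) * gamma(b) / gamma(a + b)
p = lambda x: x ** (a - 1) * (1 - x) ** (b - 1) / B      # Beta(a, b) density
gam = lambda x: a - (a + b) * x                          # gamma(x), E[gamma(Z)] = 0

def eta(x, steps=20000):
    """(21): eta(x) = (1/p(x)) * int_0^x gamma(t) p(t) dt, via the midpoint rule."""
    w = x / steps
    s = sum(gam((j + 0.5) * w) * p((j + 0.5) * w) for j in range(steps)) * w
    return s / p(x)

# for Beta(a, b) the integral can be done in closed form: eta(x) = x(1-x)
for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    assert abs(eta(x) - x * (1 - x)) < 1e-6
```

Note that $\eta(0+) = \eta(1-) = 0$ here, in accordance with Proposition 3.5 below.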
Note that this idea yields a certain extension of the methodology proposed in [CS11], where only Stein characterizations from the density approach are put to use.

The general approach
Motivated by Section 2, in this section we develop a general version of Stein's method for a random variable $Z$ with an absolutely continuous distribution with respect to the Lebesgue measure. For an interval $J \subseteq \mathbb{R}$, we will call a function defined on $\mathbb{R}$ locally absolutely continuous on $J$ if its restriction to each compact sub-interval of $J$ is absolutely continuous. Also, we will use the words increasing, decreasing and so on in the weak sense, unless explicitly stated otherwise. Throughout we suppose that $Z$ has a Lebesgue density $p$ on $\mathbb{R}$ satisfying the following condition:

Condition 3.1. For some extended real numbers $-\infty \le a < b \le \infty$ the density $p$ is positive and locally absolutely continuous on the interval $(a, b)$.
By $[a, b]$ we will henceforth denote the closure of the real interval $(a, b)$ with respect to the usual topology on $\overline{\mathbb{R}}$. Furthermore, we assume that we are given a function $\gamma$ on $(a, b)$, which might be motivated by a given exchangeable pair, with the following properties:

Condition 3.2. The measurable function $\gamma : (a, b) \to \mathbb{R}$ satisfies
(i) $\gamma$ is decreasing on $(a, b)$;
(ii) if $a > -\infty$ then $\gamma(a+) < \infty$, and if $b < \infty$ then $\gamma(b-) > -\infty$;
(iii) $E|\gamma(Z)| < \infty$ and $E[\gamma(Z)] = 0$, but $\gamma$ is not $P^Z$-a.s. equal to zero.

Henceforth, we will always assume that Conditions 3.1 and 3.2 are satisfied. Note that by Condition 3.2 there exists a point $x_0 \in (a, b)$ such that

(25) $\gamma(x) \ge 0$ for $x \in (a, x_0)$ and $\gamma(x) \le 0$ for $x \in (x_0, b)$,

though it might not be unique. For definiteness, we choose $x_0 := \sup\{x \in (a, b) : \gamma(x) > 0\}$. By item (iii) of Condition 3.2 we can define the function $I : (a, b) \to \mathbb{R}$ by

$I(x) := \int_a^x \gamma(t)p(t)\,dt,$

and by the positivity of $p$ on $(a, b)$ we can define the function $\eta$ on $(a, b)$ by

$\eta(x) := \frac{I(x)}{p(x)}.$

The following proposition lists some properties of the function $I$.

Proposition 3.3. (a) $I$ is locally absolutely continuous on $(a, b)$ with $I' = \gamma p$ almost everywhere; in particular, $I$ is increasing on $(a, x_0]$ and decreasing on $[x_0, b)$. (b) $I(a+) = I(b-) = 0$. (c) $I(x) > 0$ for all $x \in (a, b)$, so that $\eta$ is positive on $(a, b)$.

Proof. Of course, (a) follows from the fundamental theorem of calculus for Lebesgue integration and the second part of (b) is immediate from item (iii) of Condition 3.2. Finally, (c) and the first part of (b) follow from the second part of (b) and (25).
If $a > -\infty$ and/or $b < \infty$, then it is of interest to know under what circumstances it is possible to extend $\eta$ to a continuous function on $[a, b]$, because we would like $\eta(W)$ to make sense even if $W$ assumes one of the boundary values $a$ and $b$ with positive probability. We will see that in most cases we indeed have $\eta(a+) = 0$ if $a > -\infty$ and $\eta(b-) = 0$ if $b < \infty$. The following Mills ratio condition is satisfied by most absolutely continuous distributions and will in fact turn out to be equivalent to the asserted boundary behaviour of $\eta$. From now on, we will denote by $F$ the distribution function corresponding to the density $p$.
Condition 3.4. The density $p$ of $Z$ satisfies all the properties from Condition 3.1 and also the following:

$\lim_{x \downarrow a} \frac{F(x)}{p(x)} = 0$ whenever $a > -\infty$, and $\lim_{x \uparrow b} \frac{1 - F(x)}{p(x)} = 0$ whenever $b < \infty$.

Proposition 3.5. Assume that Conditions 3.1 and 3.2 hold for $p$ and $\gamma$, respectively. Then, the function $\eta$ vanishes at the finite end points of the support $(a, b)$ of $\mathcal{L}(Z)$, i.e. $\eta(a+) = 0$ whenever $a > -\infty$ and $\eta(b-) = 0$ whenever $b < \infty$, if and only if Condition 3.4 is satisfied. Thus, in this case we can extend $\eta$ to a continuous function on $[a, b]$ vanishing at the finite end points of this interval.
Proof. Suppose that $a > -\infty$ and choose $y \in (a, x_0)$. Then $\gamma(y) > 0$ and, by the nonnegativity of $I$ and the monotonicity of $\gamma$, for $a < x < y$:

(29) $\gamma(y)\,\frac{F(x)}{p(x)} \le \frac{1}{p(x)}\int_a^x \gamma(t)p(t)\,dt = \eta(x) \le \gamma(a+)\,\frac{F(x)}{p(x)},$

so that $\lim_{x \downarrow a} \eta(x) = 0$ under Condition 3.4, since $\gamma(a+) < \infty$ by Condition 3.2 (ii). Conversely, if $\eta(a+) = 0$, then, again by (29),

$\lim_{x \downarrow a} \frac{F(x)}{p(x)} = 0.$

The calculation for finite $b$ is similar, using the representation $\eta(x) = -\frac{1}{p(x)}\int_x^b \gamma(t)p(t)\,dt$, which holds since $E[\gamma(Z)] = 0$, and is therefore omitted.
Not every density $p$ satisfies Condition 3.4, as the following example shows.
Example 3.6. Let $\delta_n \in (0, 1)$, $n \ge 1$, be such that $\sum_{n \ge 1} \delta_n = 1$ and define

$x_n := 1 - \sum_{j=1}^{n-1} \delta_j = \sum_{j=n}^\infty \delta_j$ and $I_n := [x_{n+1}, x_n]$, $n \ge 1$.

Furthermore, let $q$ be the unique continuous function which is linear on each interval $I_n$ and such that $q(x_{2n}) = \delta_{2n}^2$ and $q(x_{2n+1}) = \delta_{2n}$ for $n \ge 1$, and $q(1) := \delta_1$. Define $p$ to be the probability density which is a constant multiple of $q$. Then, $p$ satisfies Condition 3.1 with $a = 0$ and $b = 1$, but Condition 3.4 does not hold: We have $\lim_{n \to \infty} x_{2n} = 0$ but

$\frac{F(x_{2n})}{p(x_{2n})} \ge \frac{\int_{x_{2n+1}}^{x_{2n}} q(t)\,dt}{q(x_{2n})} = \frac{\delta_{2n}\bigl(\delta_{2n} + \delta_{2n}^2\bigr)/2}{\delta_{2n}^2} \ge \frac{1}{2}.$

Note that $p$ satisfies $\lim_{x \to 0} p(x) = 0$, so that this does not only happen because $p(0+)$ might not exist.
The counterexample given in Example 3.6 is quite artificial. Indeed, the following proposition lists mild assumptions on the density p which guarantee that Condition 3.4 is satisfied. In practice, at least one of these assumptions is usually met. In particular, note that by part (f) of Proposition 3.7 the Mills ratio limits from Condition 3.4 at finite boundary points a or b are always zero, whenever they exist.
Proposition 3.7. Assume $a > -\infty$. In each of the following cases we have $\lim_{x \downarrow a} \frac{F(x)}{p(x)} = 0$:
(a) The density $p$ is bounded away from zero in a suitable neighbourhood of $a$.
(b) We have $p(a+) = 0$ and there is a $\delta > 0$ such that $p$ is increasing on $(a, a + \delta)$.
(c) We have p(a+) = 0 and there is a δ > 0 such that p is convex on (a, a + δ).
(d) We have p(a+) = 0 and there is a δ > 0 such that p is concave on (a, a + δ).
(e) The density $p$ is analytic at $a$.
(f) The limit $\lim_{x \downarrow a} \frac{F(x)}{p(x)}$ exists in $[0, \infty]$.
Proof. That item (a) is sufficient is clear. If (b) holds, then the claim follows from the inequality

$F(x) = \int_a^x p(t)\,dt \le (x - a)\,p(x),$

valid for $x \in (a, a + \delta)$. Under condition (c) we obtain a continuous and convex function on $[a, a + \delta)$ by letting $p(a) := 0$. Now, let $a < x < y < a + \delta$. Then, there exists a $\lambda \in (0, 1)$ with $x = \lambda a + (1 - \lambda)y$ and by convexity we have

$p(x) \le \lambda p(a) + (1 - \lambda)p(y) = (1 - \lambda)p(y) \le p(y).$

Thus, $p$ is increasing on $(a, a + \delta)$ and the assumptions of (b) are satisfied. If (d) holds, then again letting $p(a) := 0$ we obtain a continuous and concave function on $[a, a + \delta)$. Thus, there exists a decreasing function $f$ on $[a, a + \delta)$ such that

$p(x) = \int_a^x f(t)\,dt, \quad x \in [a, a + \delta).$

If there was a sequence $(x_n)_{n \ge 1}$ in $[a, a + \delta)$ such that $x_n \downarrow a$ and $f(x_n) \le 0$ for each $n \ge 1$, then for each $x \in (a, a + \delta)$ and large enough $n$ we would have

$p(x) = p(x_n) + \int_{x_n}^x f(t)\,dt \le p(x_n) \to 0 \quad (n \to \infty),$

which would contradict Condition 3.1. Thus, there is an $\varepsilon < \delta$ such that $f(x) \ge 0$ for all $x \in (a, a + \varepsilon)$. Hence, $p$ is increasing on $(a, a + \varepsilon)$ and (b) is satisfied. If (e) holds, then there is an $r > 0$ and a real sequence $(c_k)_{k \ge 0}$ such that $p(x) = \sum_{k=0}^\infty c_k(x - a)^k$ for all $x \in (a, a + r)$ and the function $f : (a - r, a + r) \to \mathbb{R}$ with $f(x) := \sum_{k=0}^\infty c_k(x - a)^k$ is well-defined. Let $n_0 := \min\{k \ge 0 : c_k \ne 0\}$. Then $n_0 < \infty$ since $p$ is positive on $(a, b)$. If $n_0 = 0$ and hence $f(a) = c_0 = \lim_{x \downarrow a} p(x) \ne 0$, then there is nothing to show. Otherwise, we have

$p(x) = (x - a)^{n_0}\bigl(c_{n_0} + o(1)\bigr), \quad x \downarrow a,$

for all $x \in (a, a + r)$ and hence, by de l'Hôpital's rule,

$\lim_{x \downarrow a} \frac{F(x)}{p(x)} = \lim_{x \downarrow a} \frac{p(x)}{p'(x)} = \lim_{x \downarrow a} \frac{x - a}{n_0} = 0.$

In order to prove (f) we show that always

(30) $\liminf_{x \downarrow a} \frac{F(x)}{p(x)} = 0$

if $p$ satisfies Condition 3.1. To show this, define the function $G(x) := \log F(x)$ for $x \in (a, b)$. Then, $G$ is increasing and continuously differentiable on $(a, b)$ and satisfies $G(a+) = -\infty$ and $G(b-) = 0$. If (30) did not hold, then

$\limsup_{x \downarrow a} G'(x) = \limsup_{x \downarrow a} \frac{p(x)}{F(x)} =: c < \infty.$

Hence, choosing $\delta > 0$ such that $G'(x) \le c + 1$ for all $x \in (a, a + \delta]$ we would obtain

$G(a + \delta) - G(x) = \int_x^{a + \delta} G'(t)\,dt \le (c + 1)\delta$ for all $x \in (a, a + \delta)$,

contradicting $G(a+) = -\infty$. Now, for a given Borel-measurable function $h$ on $(a, b)$ such that $E|h(Z)| < \infty$, consider the Stein equation

(18) $\eta(x)g'(x) + \gamma(x)g(x) = h(x) - E[h(Z)].$
It is easy to check that the function $g_h : (a, b) \to \mathbb{R}$ given by

(32) $g_h(x) = \frac{1}{\eta(x)p(x)}\int_a^x \bigl(h(t) - E[h(Z)]\bigr)p(t)\,dt$

is a solution to (18) in the sense that $g_h$ is locally absolutely continuous on $(a, b)$ and (18) holds for each point $x \in (a, b)$ at which $g_h$ is in fact differentiable. At all other points $x \in (a, b)$, in contrast to the usual convention, we define $g_h'(x)$ by (18), such that this identity holds true on all of $(a, b)$. The formula for $g_h$ might as well be found by the method of variation of the constant, using the fact that $\log(p\eta)$ is a primitive function of $\gamma/\eta$, which in turn follows from (17). In what follows we will always call the solution $g_h$ given by (32) the standard solution to equation (18). It turns out that this particular solution has the best properties. For instance, if $g_h$ is bounded, then it is the only bounded solution, since the solutions of the corresponding homogeneous equation are given by constant multiples of $I^{-1} = \eta^{-1}p^{-1}$, which is unbounded by Proposition 3.3 (b). Since we do not exclude cases where the given random variable $W$ assumes the finite value $a$ and/or $b$ with positive probability, we have to make sure that $g_h$ can be extended to a continuous function on $[a, b]$. Here, and in what follows, we will often write $\bar{h}$ for $h - E[h(Z)]$.

Proposition 3.8. (a) If $a > -\infty$ and $h$ has a right limit at $a$, then $g_h$ can be extended continuously to $a$ via

(33) $g_h(a) := \lim_{x \downarrow a} g_h(x) = \frac{\bar{h}(a+)}{\gamma(a+)}.$

Indeed, by de l'Hôpital's rule we have (33) if $h$ has a right limit at $a$; note that $\gamma$ has a right limit at $a$ since it is decreasing. (b) Similarly, if $b < \infty$ and $h$ has a left limit at $b$, then $g_h$ can be extended continuously to $b$ via $g_h(b) := \bar{h}(b-)/\gamma(b-)$.

The success of Stein's method within applications considerably depends on good bounds on the solutions $g_h$ and their lower order derivatives, generally uniformly over some given class of test functions $h$. The next step will be to prove such bounds. It has to be mentioned that we cannot expect to derive concrete good bounds in full generality, but that sometimes further conditions have to be imposed either on the density $p$ or on the coefficient $\gamma$. Nevertheless, we will derive bounds involving functional expressions which can be simplified, computed or further bounded a posteriori for concrete distributions. So our abstract viewpoint will pay off. Moreover, some of our general bounds will already be explicit.
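For the Beta case one can verify numerically that the standard solution $g_h$ from (32), with $\eta(x) = x(1-x)$ and $\gamma(x) = a - (a+b)x$, indeed solves the Stein equation (18). The following sketch is an editorial addition (quadrature step counts and tolerances are our own, heuristic choices); the derivative is approximated by a central difference:

```python
from math import gamma

a, b = 2.0, 3.0
B = gamma(a) * gamma(b) / gamma(a + b)
p = lambda x: x ** (a - 1) * (1 - x) ** (b - 1) / B
eta = lambda x: x * (1 - x)
gam = lambda x: a - (a + b) * x
h = lambda x: x * x
Eh = a * (a + 1) / ((a + b) * (a + b + 1))   # E[Z^2] for Z ~ Beta(a, b)

def integral(f, lo, hi, steps=40000):
    w = (hi - lo) / steps
    return sum(f(lo + (j + 0.5) * w) for j in range(steps)) * w

def g(x):
    """Standard solution (32): g_h(x) = (1/(eta(x)p(x))) int_0^x (h(t) - Eh) p(t) dt."""
    return integral(lambda t: (h(t) - Eh) * p(t), 0.0, x) / (eta(x) * p(x))

# check the Stein equation eta g' + gamma g = h - E h(Z) via a central difference
for x in (0.2, 0.5, 0.8):
    d = 1e-5
    gp = (g(x + d) - g(x - d)) / (2 * d)
    assert abs(eta(x) * gp + gam(x) * g(x) - (h(x) - Eh)) < 1e-4
```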
In what follows, we denote by $g_h$ the standard solution to the Stein equation (18) on $(a, b)$, implicitly assuming that $h$ satisfies the assumptions of Proposition 3.8. Furthermore, for a function $f$ we denote by $\|f\|_\infty$ its essential supremum norm on $(a, b)$. Note that for a Lipschitz-continuous function $f$ on $(a, b)$ this implies that $\|f'\|_\infty$ is just its minimum Lipschitz constant. First we give bounds for bounded and measurable test functions $h$.
Proposition 3.9. Assume Conditions 3.1 and 3.2 and let $m$ be the median of $\mathcal{L}(Z)$. Then, if $h : (a, b) \to \mathbb{R}$ is Borel-measurable and bounded, we have

(34) $\|g_h\|_\infty \le \frac{\|h - E[h(Z)]\|_\infty}{2I(m)}.$

The proof is deferred to Section 5. The following corollary specializes this result to the case that $\gamma(x) = c(E[Z] - x)$ for some constant $c > 0$.

Corollary 3.10. If $\gamma(x) = c(E[Z] - x)$ for some $c > 0$, then for each Borel-measurable and bounded $h$,

$\|g_h\|_\infty \le \frac{\|h - E[h(Z)]\|_\infty}{c\,E|Z - m|}.$

Proof. In this case we clearly have $I(m) = \frac{c}{2}E|Z - m|$, which implies the result by Proposition 3.9.
In the case that $\mu = N(0, 1)$ and $c = 1$ this result specializes to the well-known bound $\sqrt{\pi/2}\,\|h - \mu(h)\|_\infty$ (see [CGS11] or [CS05], e.g.).

Remark 3.11. In the formulation of Proposition 3.9 it might surprise that no bound on $\|g_h'\|_\infty$ is mentioned. This is because, in general, a bound of the form $\|g_h'\|_\infty \le C\|h - E[h(Z)]\|_\infty$ with a finite constant $C$ does not exist in this setup. Note that this is contrary to the density approach, where one usually has such a bound (see [CS11] or [CGS11]). Proposition 3.9 is already sufficient to prove that the operator $L$ given by (15) characterizes the distribution of $Z$. The proof is given in Section 5.

Proposition 3.12. A random variable $X$ with values in $[a, b]$ has the same distribution as $Z$ if and only if for each continuous function $f$ on $[a, b]$, which is locally absolutely continuous on $(a, b)$ and which satisfies $E|\eta(X)f'(X)| < \infty$, it holds that

$E[\eta(X)f'(X)] = -E[\gamma(X)f(X)].$

In particular, in this case both expected values exist.
Next, we will turn to Lipschitz continuous test functions $h$. In contrast to the case of bounded measurable test functions, here we will also be able to prove useful bounds for $g_h'$. In order that $E[h(Z)]$ exists for Lipschitz continuous test functions $h$ we need to assume that $E|Z| < \infty$. The following two results, which are also proved in Section 5, include optimal bounds for both $g_h$ and $g_h'$ when $h$ is Lipschitz. Proposition 3.13. Assume that $E|Z| < \infty$ and Conditions 3.1 and 3.2 hold. Then, we have for any Lipschitz continuous test function $h : (a, b) \to \mathbb{R}$ and any $x \in (a, b)$: Here, for $x \in (a, b)$, the positive functions $H(x)$ and $G(x)$ are defined by Moreover, these bounds are optimal among all bounds involving the factor $\|h'\|_\infty$. Remark 3.14. (a) In many cases the function $S(x)$ appearing in these bounds is bounded on $(a, b)$; this is the case, for instance, if $a > -\infty$. However, in general $S(x)$ is unbounded if $|\gamma(x)|$ does not grow at least linearly with $x$. For instance, if $Z \sim N(0, 1)$ and $\gamma(t) = -\operatorname{sign}(t)$, then the unboundedness of $S(x)$ for positive $x$ follows from the Gaussian Mills ratio inequality.
(b) The bound for $|g_h(x)|$ in part (a) of Proposition 3.13 can be written in terms of $\tau$, where $\tau$ is the so-called Stein factor or Stein kernel of $Z$, given by

$\tau(x) = \frac{1}{p(x)}\int_a^x \bigl(E[Z] - t\bigr)p(t)\,dt,$

i.e. $\tau$ is the function $\eta$ which belongs to the choice $\gamma(x) = E[Z] - x$. The Stein kernel $\tau$ first appeared in Lecture 6 of [Ste86] and it has turned out to be a fundamental object in Stein's method for one-dimensional absolutely continuous distributions (see, e.g., [NP09], [NV09] and [LRS14]).
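For $Z \sim \mathrm{Beta}(a, b)$ the Stein kernel can be computed in closed form: since $E[Z] - x = \frac{1}{a+b}(a - (a+b)x)$, one gets $\tau(x) = x(1-x)/(a+b)$. A small numerical check (an editorial sketch; the quadrature step count is arbitrary):

```python
from math import gamma

a, b = 2.0, 3.0
B = gamma(a) * gamma(b) / gamma(a + b)
p = lambda x: x ** (a - 1) * (1 - x) ** (b - 1) / B
mean = a / (a + b)                       # E[Z]

def tau(x, steps=20000):
    """Stein kernel: tau(x) = (1/p(x)) int_0^x (E[Z] - t) p(t) dt (midpoint rule)."""
    w = x / steps
    s = sum((mean - (j + 0.5) * w) * p((j + 0.5) * w) for j in range(steps)) * w
    return s / p(x)

# for Beta(a, b): tau(x) = x(1-x)/(a+b), i.e. eta for the choice gamma(x) = E[Z] - x
for x in (0.1, 0.4, 0.6, 0.9):
    assert abs(tau(x) - x * (1 - x) / (a + b)) < 1e-6
```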
Remark 3.16. (i) It is quite remarkable that in the case of normal approximation (via its classical Stein equation) the bound given in Corollary 3.15 (a) even improves on the best bound $2\|h'\|_\infty$ currently mentioned in the literature (see, e.g., [CGS11] or [CS05]). In fact, in this case $c = 1$ and thus our bound reduces to $\|h'\|_\infty$. (ii) For concrete distributions, the ratio appearing in the bound for $g_h'(x)$ may be bounded uniformly in $x$ by some constant, which can sometimes also be computed explicitly. Furthermore, in [EV12] the authors give mild conditions for the existence of a finite constant $k$ such that $\|g_h'\|_\infty \le k\|h'\|_\infty$ for any Lipschitz-continuous $h$. In practice, these conditions are usually met. However, there is no hope of estimating the constant $k$ by their method of proof. For concrete distributions and explicit constants it might therefore be useful to work with our bound from Corollary 3.15 (b) (see e.g. Section 4 for the Beta distribution). (iii) For the normal distribution, and also for the larger class of distributions discussed in [EL10], one also has a bound of the form $\|g_h''\|_\infty \le C\|h'\|_\infty$ for some finite constant $C$, holding for each Lipschitz function $h$. As was shown by a universal counterexample in [EV12], if $\gamma(x) = c(E[Z] - x)$, such a bound cannot be expected unless $a = -\infty$ and $b = \infty$. In such a case one has to assume that $h'$ is also Lipschitz, for example by demanding that $h$ has two bounded derivatives, in order to obtain a finite bound on $\|g_h''\|_\infty$. For the Stein solutions within the density approach, however, there are many examples of distributions whose support is strictly included in $\mathbb{R}$ but for which such bounds are available (see, e.g., Chapter 13 of [CGS11]). Now, we show how we can use the above results and the density formula (24) to give bounds on higher order derivatives of $g_h$, if $h$ itself is smooth enough.
First note that the constant $K$ from (24) is given by

(36) $K = \eta(x_0)p(x_0) = \int_a^{x_0} \gamma(t)p(t)\,dt = \frac{1}{2}E|\gamma(Z)|$

and, hence, we have the explicit formula

(37) $p(x) = \frac{E|\gamma(Z)|}{2\eta(x)}\exp\Bigl(\int_{x_0}^x \frac{\gamma(t)}{\eta(t)}\,dt\Bigr).$

Formula (37) is a more general version of formula (3.14) in [NV09] and is also derived in [KT12]. Now, if the coefficient $\gamma$ is also absolutely continuous, then, by differentiating Stein's equation (18), we obtain for Lipschitz $h$

$\eta(x)g_h''(x) + \bigl(\gamma(x) + \eta'(x)\bigr)g_h'(x) = h'(x) - \gamma'(x)g_h(x) =: h_2(x).$

This means that the function $\tilde{g} := g_h'$ is a solution of the Stein equation corresponding to the test function $h_2$ for the distribution of a random variable $\tilde{Z}$ which satisfies the Stein identity

$E\bigl[\eta(\tilde{Z})f'(\tilde{Z}) + \bigl(\gamma(\tilde{Z}) + \eta'(\tilde{Z})\bigr)f(\tilde{Z})\bigr] = 0.$

From (37) we know that a density $\tilde{p}$ of $\tilde{Z}$ is given by

(39) $\tilde{p}(x) = \frac{\tilde{K}}{\eta(x)}\exp\Bigl(\int_{x_0}^x \frac{\gamma(t) + \eta'(t)}{\eta(t)}\,dt\Bigr) = C\,\eta(x)p(x),$

where $\tilde{K}, K, C > 0$ are suitable normalizing constants. Thus, if we have bounds on the first derivative of the Stein solutions for the distribution of $\tilde{Z}$ and for Lipschitz test functions, then from this observation we obtain bounds on $g_h''$ for $h$ such that $h_2$ is Lipschitz. Note that if $\gamma(x) = c(E[Z] - x)$, this essentially means that $h'$ must be Lipschitz as well. Of course, in order to apply this procedure, one has to make sure that $E[h_2(\tilde{Z})] = 0$ and that $g_h'$ is the standard solution to the Stein equation for $\mathcal{L}(\tilde{Z})$ and the test function $h_2$. Remarkably, under mild conditions this turns out to always be the case.
Proposition 3.17. Assume that Conditions 3.1 and 3.2 hold, E|Z| < ∞, E|Zγ(Z)| < ∞ and that γ is locally absolutely continuous on (a, b). Furthermore, let h be a Lipschitz-continuous function. Then, E[h_2(Z̃)] exists and equals 0. Furthermore, if either the derivative g′_h of g_h is bounded, a > −∞ or b < ∞, then g′_h is the standard solution to the Stein equation corresponding to the distribution of Z̃ and the test function h_2. The proof is given in Section 5.
Remark 3.18. If Z ∼ Beta(a, b), then (39) implies that Z̃ ∼ Beta(a + 1, b + 1). This will be used in Section 4 to provide bounds on higher order derivatives of g_h in the case of the Beta distribution. If, on the other hand, Z ∼ Exp(α) has an exponential distribution with mean α^{−1}, then Z̃ ∼ Gamma(2, α) has an Erlang distribution. Using this fact, Proposition 3.17 and the general bounds from Corollary 3.15, applied to both the exponential and the Gamma(2, α) distribution, one can, with some work, derive the following bounds on the standard solution g_h to the Stein equation for the distribution Exp(α), if h is continuously differentiable on [0, ∞) and both h and h′ are Lipschitz: These bounds are better than those derived in [FR13] and, additionally, since we do not have to assume that h′(0) = 0 for the bound on ‖g″_h‖∞ to be valid, one term in the bounds of Theorems 1.1 and 1.2 from [FR13] would drop out if our bounds were used instead.
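The exponential example above can be checked numerically. The following is a minimal sketch, assuming the coefficients γ(x) = 1/α − x and the resulting η(x) = x/α for Exp(α) (these explicit choices are our illustration and are not spelled out above); the density of Z̃, proportional to η·p, should then coincide with the Gamma(2, α) density:

```python
import math

alpha = 2.0                                      # rate of the exponential distribution
p = lambda x: alpha * math.exp(-alpha * x)       # density of Exp(alpha)
eta = lambda x: x / alpha                        # eta = I/p for gamma(x) = 1/alpha - x
gamma2 = lambda x: alpha**2 * x * math.exp(-alpha * x)  # density of Gamma(2, alpha)

# The density of Z-tilde is proportional to eta * p; the normalizing constant is
# int_0^inf x * exp(-alpha * x) dx = 1 / alpha**2.
norm = 1.0 / alpha**2
for x in [0.1, 0.5, 1.0, 2.5]:
    p_tilde = eta(x) * p(x) / norm
    assert abs(p_tilde - gamma2(x)) < 1e-12
print("eta * p matches the Gamma(2, alpha) density")
```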
Next, we introduce the approach of exchangeable pairs satisfying the regression properties (14) and (23) in our general framework. As was observed in [Röl08] for the normal distribution, in the case of univariate distributional approximation one does not need the full strength of exchangeability; equality in distribution of the random variables W and W′ is sufficient. This may allow for a greater choice of admissible couplings in several situations or, at least, relax the verification of the asserted properties. Thus, let W, W′ be real-valued random variables defined on the same probability space such that W =_D W′. We will assume that the random variables W and W′ only take values in an interval J with (a, b) ⊆ J ⊆ [a, b] on which both functions η and γ are defined (recall that it might be the case that η can only be defined on (a, b)). However, from Proposition 3.5 we know that we can let J = (a, b) if Condition 3.4 holds.
Proposition 3.19. Assume that Conditions 3.1 and 3.2 hold and that W is square integrable with E|γ(W)| < ∞. Furthermore, for some constant λ > 0, assume that the general regression property (14) and also the second moment condition (23) are satisfied by the pair (W, W′). Let f : J → R be a bounded, continuously differentiable function which is Lipschitz-continuous and has a Lipschitz-continuous derivative f′. Then, where ‖f″‖∞ denotes the minimum Lipschitz constant of f′. The proof is given in Section 5.
Finally, in our general framework, we readdress the last issue discussed in Section 2. Namely, we suppose that we are given two functions γ and η such that, for some −∞ ≤ a < b ≤ ∞, the function γ is defined on (a, b), η is defined at least on (a, b) and the following properties hold on (a, b); we then define Q(x) := ∫_{x_0}^x γ(t)/η(t) dt, x ∈ (a, b). Note that by definition we have Q(x) ≤ 0 for all x ∈ (a, b) if Condition 3.21 is satisfied. Now, we define the density p by relation (24), with K being a suitable normalizing constant. The existence of K follows from the fact that, by Condition 3.21, for each c ∈ (a, x_0) there is a finite constant L > 0 such that Lγ(x) ≥ 1 for each x ∈ (a, c). Thus, the relevant integral over (a, x_0) is finite, and a similar calculation shows that the integral over (x_0, b) is finite as well. Hence, p can be suitably normalized. Now, let Z be a random variable with probability density function p. The next result is a generalization of Lemma 3, Lecture 6 in [Ste86].
Theorem 3.22. If Condition 3.21 is satisfied, then the density p defined by (24) is such that In particular, the theory developed in this section can be applied in this framework.

Stein's method for the Beta distribution
In this section we specialize the theory from Section 3 to the family Beta(a, b), a, b > 0, of Beta distributions as defined in Section 2. Let us fix a, b > 0 and from now on assume that Z ∼ Beta(a, b). Motivated by the Pólya urn example, the above constructed exchangeable pair (W, W′) and by Proposition 2.2, we define the function γ := γ_{a,b} as in Proposition 2.2 and observe that γ_{a,b}(x) = a − (a + b)x, x ∈ (0, 1). It is thus easy to see that γ satisfies all assumptions of Condition 3.2 and also that the Beta density p := p_{a,b} given by (4) satisfies Conditions 3.1 and 3.4, the latter either directly or by Proposition 3.7. We claim that the function η defined by (21) is given by η(x) = x(1 − x). This is equivalent to proving that which easily follows from differentiating both sides of (45) and using (10). Thus, from Proposition 3.12 we immediately obtain the following Stein characterization for the Beta distribution.
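The resulting Stein characterization can be tested numerically: for Z ∼ Beta(a, b) and smooth f one should have E[η(Z)f′(Z) + γ(Z)f(Z)] = 0. The sketch below assumes the normalization γ(x) = a − (a + b)x and η(x) = x(1 − x), which is consistent with the constants appearing in Proposition 4.2 but is an assumption on our part:

```python
import math

a, b = 2.0, 3.0
B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)    # Beta function B(a, b)

p = lambda x: x**(a - 1) * (1 - x)**(b - 1) / B          # Beta(a, b) density
gam = lambda x: a - (a + b) * x                          # gamma(x), assumed normalization
eta = lambda x: x * (1 - x)                              # eta(x), assumed normalization

def stein_expectation(f, fprime, n=20000):
    """Riemann-sum approximation of E[eta(Z)f'(Z) + gamma(Z)f(Z)] for Z ~ Beta(a, b)."""
    h = 1.0 / n
    # The integrand vanishes at 0 and 1 (both eta and p do), so interior points suffice.
    return sum((eta(i * h) * fprime(i * h) + gam(i * h) * f(i * h)) * p(i * h)
               for i in range(1, n)) * h

# The characterizing identity should hold, up to discretization error, for smooth f.
for f, fp in [(lambda x: x**2, lambda x: 2 * x), (math.sin, math.cos)]:
    assert abs(stein_expectation(f, fp)) < 1e-4
print("Stein identity holds numerically for Beta(2, 3)")
```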
For the Beta distribution and a measurable function h with E|h(Z)| < ∞, the Stein equation (18) is given by (46) and, by Proposition 3.8, the standard solution (32) has the form (47), provided h has a right limit at 0 and a left limit at 1. For a, b > 0 define the constant C(a, b). From Proposition 3.9 and Corollary 3.15 we can derive the following bounds for the solution (47) to (46). The proof is given in Section 5.
Proposition 4.2. (a) , where m is the median of Beta(a, b).
(b) where C(a, b) is given by (49) and (50). (c) If h is continuously differentiable with Lipschitz derivative h′, then g′_h is Lipschitz and ‖g″_h‖∞ ≤ C(a + 1, b + 1)‖h″‖∞ + (a + b)C(a + 1, b + 1)C(a, b)‖h′‖∞. (d) More generally, if m ≥ 1 is an integer and h is at least (m − 1)-times differentiable such that h^{(j)} is Lipschitz-continuous for j = 0, . . . , m − 1, then the analogous bound on ‖g_h^{(m)}‖∞ holds, where we define an empty product to be equal to 1.
Remark 4.3. (a) It is worthwhile to compare our bound for ‖g′_h‖∞ from Proposition 4.2 (b) to the bound ‖g′_h‖∞ ≤ (b_0 + b_1)‖h′‖∞ given in [GR13]. One can show that if a = b, then our bound is uniformly better than theirs. However, if a ≠ b, then there are regions for (a, b) where our constant C(a, b) is smaller and other ones where their b_0 + b_1 is smaller. For instance, if 0 < a, b ≤ 1, then, again, C(a, b) ≤ b_0 + b_1. But if 1 < b < 2 is fixed and a tends to zero, then C(a, b) goes to infinity while their b_0 + b_1 tends to 12. In any case, neither our bound nor the bound from [GR13] seems to be optimal for ‖g′_h‖∞. (b) From Corollary 3.15 (b) we know that for Lipschitz h and x ∈ (0, 1) By an application of l'Hôpital's rule, one can show that B(0+) = 2/(a + 1) and B(1−) = 2/(b + 1).
We conjecture that if min(a, b) < 1, then i.e. that B assumes its maximum value at the boundary of (0, 1). However, if min(a, b) > 1, then we believe that there is always an x_1 ∈ (0, 1) such that
From Proposition 3.19, Remark 3.20 (ii) and the bounds from Proposition 4.2 we obtain the following plug-in result, which bounds a certain distance to the Beta distribution by terms related to a given exchangeable pair.
Theorem 4.4. Let W and W′ be identically distributed random variables on a common probability space (Ω, A, P) and let F ⊆ A be a sub-σ-algebra of A such that hold for a constant λ > 0 and for F-measurable remainder terms R and S. Then, for each continuously differentiable function h : [0, 1] → R with a Lipschitz derivative h′ it holds that where the constants C(·, ·) are defined by (49).
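The Pólya urn application behind Theorem 4.4 can be illustrated by simulation. The following sketch is our own illustration (not the coupling used in the proofs): starting from a red and b black balls and returning the drawn ball together with one ball of the same colour, the relative number of drawn red balls after n draws is approximately Beta(a, b) distributed, so its empirical mean should be close to a/(a + b):

```python
import random

def polya_red_fraction(a, b, n, rng):
    """One Pólya urn run: fraction of red balls among n draws (unit additions)."""
    red, black, drawn_red = a, b, 0
    for _ in range(n):
        if rng.random() < red / (red + black):
            red += 1
            drawn_red += 1
        else:
            black += 1
    return drawn_red / n

rng = random.Random(0)                      # fixed seed for reproducibility
a, b, n, reps = 2, 3, 500, 4000
samples = [polya_red_fraction(a, b, n, rng) for _ in range(reps)]

# By exchangeability, every draw is red with probability a/(a+b), so the
# empirical mean should be close to the Beta(a, b) mean a/(a+b) = 0.4.
mean = sum(samples) / reps
assert abs(mean - a / (a + b)) < 0.02
print("empirical mean of the red fraction:", round(mean, 3))
```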

Proofs
Proof of Proposition 3.9.
for each x ∈ (a, b), since p is positive and γ is decreasing. Hence, M is increasing and, thus, for each x ∈ (a, m] we have The same bound can be proved for x ∈ (m, b) by using the representation and the fact that also 1 − F(m) = 1/2.
The following two lemmas, which are quite standard in Stein's method, will be needed for the proof of Proposition 3.13. For proofs we refer to [Döb12b], for instance.
Lemma 5.1. Suppose that p satisfies Condition 3.1 and that ∫_a^b |x|p(x)dx < ∞. Then, for each x ∈ (a, b) we have: Lemma 5.2. Suppose that p satisfies Condition 3.1 and that E|Z| = ∫_a^b |x|p(x)dx < ∞. Then, for each Lipschitz function h, the following assertions hold true: Proof of Proposition 3.13. By Lemmas 5.2 and 5.1 we thus obtain that implying (a). Now, we turn to the proof of (b). By Stein's equation (18) we obtain for x ∈ (a, b) where we have again written h̃ = h − µ(h). Using Lemma 5.2 again, we obtain

By (53) we can thus bound
which reduces to the bound asserted in (b). Optimality of the bound in (a) follows from choosing h(x) = x and observing that the above inequalities are in fact equalities in this case. To see that also the bound in (b) is optimal, for given x ∈ (a, b) choose a 1-Lipschitz function h such that h′(s) = 1 for all s ∈ (a, x) and h′(s) = −1 for all s ∈ (x, b). Then, from (53) and the nonnegativity of H and G, we see that equality holds in (54).
Proof of Proposition 3.12. We first prove necessity. Let f be given as in the proposition. First we show that E|γ(Z)f(Z)| < ∞. We have Repeating essentially the same calculation without absolute value signs and using these facts yields the claimed identity. To prove sufficiency it is clearly enough to show that holds for each bounded and continuous function h. Let g_h be the standard solution of the Stein equation (18) corresponding to h. Then, from Proposition 3.9 we know that ‖g_h‖∞ < ∞. Also, g_h is continuous on (a, b) and continuously differentiable on each compact subinterval of (a, b). Furthermore, since I(x) = η(x)p(x) and g_h solves (18), we have By the hypothesis of Proposition 3.12 we can thus conclude that as desired.
Proof of Proposition 3.17. We first show that E[h_2(Z̃)] exists. Since Z̃ has density proportional to ηp by (39) and by (38), existence follows if we can show that To show finiteness of the first integral in (55) note that, since γ is decreasing, by Fubini's theorem Since h′ is bounded, to show that the second integral in (55) is finite, it suffices to prove that We have since E|γ(Z)| < ∞ and E|Zγ(Z)| < ∞, and similarly one shows that Hence, (57) holds and E[h_2(Z̃)] exists. Now, we prove that E[h_2(Z̃)] = 0. From (38) and (39) we see that this amounts to proving Using ηp = I, I′ = γp and I(a+) = I(b−) = 0, from Fubini's theorem we obtain that the left hand side of (58) equals Similarly, using γ(x_0) = 0, the definition of g_h in (32) and Fubini's theorem again, we have that the right hand side of (58) equals where we have used E[γ(Z)] = 0 for the last equality. Thus, from (59) and (60) we conclude that (58) holds. Thus, the standard solution f = f_{h_2} to (40) is well-defined and given by (61). Hence, by dominated convergence. Furthermore, since g′_h is also a solution to (40) and the solutions to the corresponding homogeneous equation are exactly the constant multiples of η^{−2}p^{−1}, there is a constant c ∈ R such that Now, first suppose that a > −∞. Since g_h solves the Stein equation (18) we know that .
As x ↓ a > −∞, by (33), the term in brackets converges to h̃(a) − γ(a+) · h̃(a)/γ(a+) = 0, and since lim_{x→a} I(x) = 0 we conclude from (64) that also Hence, from (62), (65) and (63) we conclude that g′_h = f is the standard solution to (40). Similarly, one obtains this result if b < ∞. Finally, assume that g′_h is bounded. Since lim_{x↓a} η(x)²p(x) = 0, we conclude from (63) and (62) that
Proof of Proposition 3.19. Let x_0 be defined as above and define the function G : Then, by Taylor's formula, for each x, x′ ∈ I we have Hence, by distributional equality, we obtain This immediately implies the identity Observing that
Proof of Proposition 4.2. Claim (a) immediately follows from Proposition 3.9. Similarly, the first part of claim (b) immediately follows from Corollary 3.15 (a). For the second part of (b) we note that by Corollary 3.15 (b) we have, for x ∈ (0, 1):
Now, we turn to the proof of (c). From Proposition 3.17 we know that g′_h is the standard solution to the Stein equation corresponding to the distribution Beta(a + 1, b + 1). Thus, since h_2 is Lipschitz by part (b), applying (b) for Beta(a + 1, b + 1) and for Beta(a, b) yields ‖g″_h‖∞ ≤ C(a + 1, b + 1)‖h′_2‖∞ ≤ C(a + 1, b + 1)(‖h″‖∞ + (a + b)‖g′_h‖∞) ≤ C(a + 1, b + 1)‖h″‖∞ + (a + b)C(a + 1, b + 1)C(a, b)‖h′‖∞, as claimed. The proof of (d) is very similar to the proof of (c), which is why we only give a sketch. Defining h_1 := h̃ and, for k = 2, . . . , m, one can see by induction that for all k = 2, . . . , m Hence, by (b) and from Proposition 3.17, similarly to (79) we can prove that The bound now follows from an easy induction on m.
Note that for the function S from the above proof we have S(z) = ‖g_z‖∞, where, for z ∈ (0, 1), g_z is the standard solution to the Stein equation (46) for h_z = 1_{(−∞,z]}. Thus, computing ‖S‖∞ is also of some importance if one is interested in the Kolmogorov distance to a Beta distribution.
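For illustration, ‖g_z‖∞ can be evaluated numerically. The sketch below assumes the representation g_z(x) = (F(x ∧ z) − F(x)F(z))/(η(x)p(x)) for h_z = 1_{(−∞,z]}, which follows from the standard-solution formula (32) under the normalization η(x) = x(1 − x) (again our assumption), here for Beta(2, 3), whose CDF is available in closed form:

```python
# Beta(2, 3): density, CDF and eta in closed form (assumed normalization eta(x) = x(1-x)).
p = lambda x: 12 * x * (1 - x)**2
F = lambda x: 6 * x**2 - 8 * x**3 + 3 * x**4
eta = lambda x: x * (1 - x)

def g_z(x, z):
    """Assumed standard solution of the Beta Stein equation for h_z = 1_{(-inf, z]}."""
    return (F(min(x, z)) - F(x) * F(z)) / (eta(x) * p(x))

def sup_norm(z, n=4000):
    """Approximate S(z) = sup_x |g_z(x)| on a grid of interior points."""
    return max(abs(g_z(i / n, z)) for i in range(1, n))

# S(z) stays bounded in z, as required for Kolmogorov-distance bounds.
sup_vals = [sup_norm(z) for z in (0.2, 0.5, 0.8)]
assert all(0 < v < 2 for v in sup_vals)
print("approximate sup norms:", [round(v, 3) for v in sup_vals])
```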