THE CENTER OF MASS OF THE ISE AND THE WIENER INDEX OF TREES

We derive the distribution of the center of mass S of the integrated superBrownian excursion (ISE) from the asymptotic distribution of the Wiener index for simple trees. Equivalently, this is the distribution of the integral of a Brownian snake. A recursion formula for the moments and asymptotics for moments and tail probabilities are derived.


1. Introduction
The Wiener index w(G) of a connected graph G with vertex set V(G) is defined as w(G) = Σ_{{u,v}⊆V(G)} d(u,v), in which d(u, v) denotes the distance between u and v in the graph. The ISE (integrated superBrownian excursion) is a random variable, which we shall denote by J, taking its values in the set of probability measures on R^d. The ISE was introduced by David Aldous [1] as a universal limit object for random distributions of mass in R^d: for instance, Derbez & Slade [11] proved that the ISE is the limit of lattice trees for d > 8. A motivation for this paper comes from a close relation, established in [6], between a model of random geometry called fluid lattices in quantum geometry [3] (random quadrangulations, in combinatorial terms) and the one-dimensional ISE: let (Q_n, (b_n, e_n)) denote the random uniform choice of a quadrangulation Q_n with n faces, together with a marked oriented edge e_n with root vertex b_n in Q_n, and set r_n = max_{x∈V(Q_n)} d(x, b_n) and W_n = Σ_{x∈V(Q_n)} d(x, b_n); r_n and W_n are the radius and the total path length of Q_n, respectively. As a consequence of [6], (n^{-1/4} r_n, n^{-5/4} W_n) converges weakly to (8/9)^{1/4} · (R − L, S − L), in which S = ∫ x J(dx) stands for the center of mass of the ISE, while, in the case d = 1, [L, R] denotes the support of J. In [8], a modified Laplace transform is given that determines the joint law of (R, L). As a consequence, the first two moments of R − L are derived, and this is essentially all the information known about the joint law of (R, L, S), to our knowledge. In this paper, we give a closed formula for the even moments E S^{2k} (Theorem 1.1), expressed in terms of the sequence (a_k) defined by a_1 = 1 and a recursion for k ≥ 2. Since S has a symmetric distribution, the odd moments vanish. We also obtain the asymptotics of the moments (Theorem 1.2).
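The definition of the Wiener index can be checked on small examples. The sketch below (function names are illustrative, not from the paper) computes w(G) by breadth-first search from each vertex; for the path on four vertices it returns 1+2+3+1+2+1 = 10.

```python
from collections import deque

def wiener_index(adj):
    """Wiener index of a connected graph, given as an adjacency dict:
    the sum of d(u, v) over unordered pairs {u, v} of vertices."""
    total = 0
    for source in adj:
        # BFS distances from `source`
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2  # each unordered pair was counted twice

# Path 0-1-2-3: pairwise distances 1+2+3+1+2+1 = 10.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wiener_index(path4))  # 10
```

For trees this quantity grows like n^{5/2} in the random models considered below, which is what makes the normalization n^{-5/4} of W_n (after taking square-root scaling of distances) natural.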
As a consequence, Carleman's condition holds, and the distribution of S is uniquely determined by its moments. In Section 2, we recall the description of J in terms of the Brownian snake, following [14, Ch. IV.6], and we derive a distributional identity for S in terms of a statistic of the normalized Brownian excursion (e(t))_{0≤t≤1}. In [12], the joint moments of a pair (η, ξ) of functionals of the normalized Brownian excursion are computed with the help of explicit formulas for the joint moments of the Wiener index and the total path length of random binary trees. In Section 3, we use the results of [12] to derive Theorems 1.1 and 1.2. As a byproduct we also obtain the asymptotics of a_k as k → ∞ (Theorem 1.3). Finally, Section 4 deals with tail estimates for S.

2. The ISE and the Brownian snake
For the convenience of non-specialists, we briefly recall the description of J in terms of the Brownian snake, from [14, Ch. IV.6] (see also [5, 10, 16], and the survey [17]). The ISE can be seen as the limit of a suitably renormalized spatial branching process (cf. [6, 15]) or, equivalently, as an embedding of the continuum random tree (CRT) in R^d.
As for the Brownian snake, it can be seen as a description of a "continuous" population T, through its genealogical tree and the positions of its members. Given the lifetime process ζ = (ζ(s))_{s∈T} of the Brownian snake, a stochastic process with values in [0, +∞), the Brownian snake with lifetime ζ is a family of stochastic processes W_s(·) with respective lifetimes ζ(s). The lifetime ζ describes the genealogical tree of the population T, and W describes the spatial motions of the members of T. A member of the population is encoded by the time s at which it is visited by the contour traversal of the genealogical tree, ζ(s) being the height of member s ∈ T in the genealogical tree (ζ(t) can be seen as the "generation" t belongs to, or the time at which t is living). Let C(s, t) = min { ζ(u) : min(s, t) ≤ u ≤ max(s, t) }, and let s ∧ t = { u : min(s, t) ≤ u ≤ max(s, t), ζ(u) = C(s, t) }. Due to the properties of the contour traversal of a tree, any element of s ∧ t is a label of the most recent common ancestor of s and t, and the distance between s and t in the genealogical tree is d(s, t) = ζ(s) + ζ(t) − 2 C(s, t). If it is not a leaf of the tree, a member of the population is visited several times (k + 1 times if it has k sons), so it has several labels: s and t are two labels of the same member of the population if d(s, t) = 0, or equivalently if s ∧ t ⊃ {s, t}. Finally, s is an ancestor of t iff s ∈ s ∧ t. In this interpretation, W_s(u) is the position of the ancestor of s living at time u, and Ŵ_s = W_s(ζ(s)) is the position of s. Before time m = C(s_1, s_2), s_1 and s_2 share the same ancestors, entailing that W_{s_1}(u) = W_{s_2}(u) for 0 ≤ u ≤ m. Obviously there is some redundancy in this description: it turns out that the full Brownian snake can be recovered from the pair (Ŵ_s, ζ(s))_{0≤s≤1} (see [15] for a complete discussion of this point).
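In a discrete setting, the tree-distance formula d(s, t) = ζ(s) + ζ(t) − 2 min_{u∈[s,t]} ζ(u) can be illustrated on the height sequence recorded by the contour traversal of a plane tree. The sketch below (an illustration, not part of the paper's construction) uses the contour of a root with two leaf children.

```python
def tree_distance(height, s, t):
    """Distance in the genealogical tree between the members visited at
    contour times s and t, given the contour height sequence `height`:
    d(s, t) = h(s) + h(t) - 2 * min of h over [s, t]."""
    s, t = min(s, t), max(s, t)
    m = min(height[s:t + 1])  # height of the most recent common ancestor
    return height[s] + height[t] - 2 * m

# Contour of a root with two leaf children visits: root, leaf, root, leaf, root.
h = [0, 1, 0, 1, 0]
print(tree_distance(h, 1, 3))  # the two leaves are at distance 2, via the root
```

Note that the minimum over [s, t] plays exactly the role of C(s, t): it is the height of the most recent common ancestor, and times achieving it are its labels.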
In the general setting [14, Ch. IV], the spatial motion of a member of the population is any Markov process with càdlàg paths. In the special case of the ISE, this spatial motion is a d-dimensional Brownian motion: a) for fixed s, t → W_s(t) is a standard linear Brownian motion started at 0, defined for 0 ≤ t ≤ ζ(s); b) conditionally, given ζ, the application s → W_s(·) is a path-valued Markov process with transition function defined as follows: for s_1 < s_2, conditionally given W_{s_1}(·), the paths W_{s_1}(·) and W_{s_2}(·) coincide up to time m = C(s_1, s_2), and (W_{s_2}(m + u) − W_{s_2}(m))_{0≤u≤ζ(s_2)−m} is a standard Brownian motion independent of W_{s_1}(·). The lifetime ζ is usually a reflected linear Brownian motion [14], defined on T = [0, +∞). However, in the case of the ISE, ζ = 2e, in which e denotes the normalized Brownian excursion, or 3-dimensional Bessel bridge, defined on T = [0, 1]. With this choice of ζ, the genealogical tree is the CRT (see [1]), and the Brownian snake can be seen as an embedding of the CRT in R^d. We can now give the definition of the ISE in terms of the Brownian snake with lifetime 2e [14, Ch. IV.6]: the ISE is the occupation measure J of the head process (Ŵ_s)_{0≤s≤1}. Recall that the occupation measure J of a process (Ŵ_s)_{0≤s≤1} is defined by the relation ∫ f(x) J(dx) = ∫_0^1 f(Ŵ_s) ds, holding for any measurable test function f.

Remark 2.2.
It might seem more natural to consider the Brownian snake with lifetime e instead of 2e, but we follow the normalization in Aldous [1]. If we used the Brownian snake with lifetime e instead, W, J and S would be scaled by 1/√2.
Based on the short account of the Brownian snake given in this section, we obtain:

Proposition 2.4. Conditionally, given e, (Ŵ_s)_{0≤s≤1} is a centered Gaussian process whose covariance is C(s, t) = 2 min_{s≤u≤t} e(u), s ≤ t.
Proof. With the notation ζ = 2e and m = C(s_1, s_2) = 2 min_{s_1≤u≤s_2} e(u), we have, conditionally given e, for s_1 ≤ s_2,

Cov(Ŵ_{s_1}, Ŵ_{s_2}) = E[Ŵ_{s_1} Ŵ_{s_2}] = E[Ŵ_{s_1} W_{s_2}(m)] = E[Ŵ_{s_1} W_{s_1}(m)] = m,

in which b) yields the second equality, the coincidence of W_{s_1} and W_{s_2} up to time m yields the third one, and a) yields the fourth equality.
As a consequence of Proposition 2.4, conditionally given e, S = ∫_0^1 Ŵ_s ds (with Ŵ_s = W_s(ζ(s))) is centered Gaussian with variance σ² = ∫_0^1 ∫_0^1 C(s, t) ds dt = 4 ∫∫_{0≤s≤t≤1} min_{s≤u≤t} e(u) ds dt. This last statement is equivalent to Theorem 2.3.
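Since Proposition 2.4 gives the conditional covariance C(s, t) = 2 min_{s≤u≤t} e(u), the conditional variance of the integral of the head process is the double integral of C over the unit square. This can be checked numerically; the sketch below (an illustration only) replaces the random excursion by the deterministic tent function e(t) = min(t, 1−t), for which the double integral works out to exactly 1/3.

```python
def tent(t):
    # deterministic stand-in for an excursion path (NOT a Brownian excursion)
    return min(t, 1.0 - t)

def conditional_variance(e, n=400):
    """Midpoint-rule approximation of int int C(s,t) ds dt over [0,1]^2,
    with C(s,t) = 2 * min_{s<=u<=t} e(u).  For a unimodal path such as the
    tent function, the inner minimum is simply min(e(s), e(t))."""
    grid = [(i + 0.5) / n for i in range(n)]
    total = 0.0
    for s in grid:
        for t in grid:
            total += 2.0 * min(e(s), e(t))
    return total / (n * n)

print(conditional_variance(tent))  # close to the exact value 1/3
```

For the tent function, a short computation gives 4 ∫∫_{0<s<t<1} min(s, 1−t) ds dt = 1/3, which the discretization reproduces.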
Remark 2.5. The d-dimensional analog of Theorem 2.3 is also true, by the same proof.

3. The moments
Proof of Theorem 1.1. From Theorem 2.3, we derive at once that E S^{2k} = E N^{2k} · E σ^{2k}, in which N denotes a standard Gaussian variable independent of e and σ² is the conditional variance of S given e. The moments E σ^{2k} are a special case of [12, Theorem 3.3] (where a_k is denoted ω*_{0k}), and the result follows since E N^{2k} = (2k)!/(2^k k!).
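The Gaussian moment formula used here, E N^{2k} = (2k)!/(2^k k!), coincides with the double factorial (2k − 1)!! = 1 · 3 · 5 ⋯ (2k − 1); a quick arithmetic check:

```python
from math import factorial

def gaussian_even_moment(k):
    # E N^{2k} for a standard Gaussian N, via (2k)! / (2^k k!)
    return factorial(2 * k) // (2 ** k * factorial(k))

def double_factorial_odd(k):
    # (2k - 1)!! = 1 * 3 * 5 * ... * (2k - 1)
    result = 1
    for j in range(1, 2 * k, 2):
        result *= j
    return result

for k in range(1, 11):
    assert gaussian_even_moment(k) == double_factorial_odd(k)
print(gaussian_even_moment(2))  # E N^4 = 3
```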
More precisely, we show by induction that b_k ≤ 1 for k ≥ 3. This holds for k = 3 and k = 4. For k ≥ 5 we have, by the induction assumption, b_j ≤ 1 for 1 ≤ j < k, and thus b_k ≤ 1; the induction follows. Moreover, it follows easily from (11) that b_k converges to a limit β. To obtain the numerical value of β, we write (11) somewhat more sharply, with an error factor θ, where 0 ≤ θ ≤ 1, and sum over k > n for n = 10, say, using b_{n−1} < b_{k−2} < β and b_{n−2} < b_{k−3} < β for k > n. It follows (with Maple), by this and exact computation of b_1, …, b_{10}, that 0.981038 < β < 0.9810385; we omit the details.
Remark 3.1. For comparison, we give the corresponding result for ξ defined in (5). There is a simple relation, discovered by Spencer [18] and Aldous [2], between its moments and Wright's constants in the enumeration of connected graphs with n vertices and n + k edges [19], and the well-known asymptotics of the latter lead (see [12, Theorem 3.3 and (3.8)]) to the corresponding moment asymptotics for ξ.

4. Moment generating functions and tail estimates
The moment asymptotics yield asymptotics for the moment generating function E e^{tS} and the tail probabilities P(S > t) as t → ∞. For completeness and comparison, we also include corresponding results for η. We begin with a standard estimate.
The sums over even k only are asymptotic to half the full sums.
Sketch of proof. (i) This is standard, but since we have not found a precise reference, we sketch the argument. Write the k-th term of the sum as exp(f(k)), where g(y) = −γ y ln y + y ln x and f(y) = g(y) + b ln y. The function g is concave, with a maximum at y_0 = y_0(x) = e^{−1} x^{1/γ}. A Taylor expansion of f around ⌈y_0⌉, where ⌈y⌉ denotes the smallest integer larger than or equal to y, then yields the result.
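The Laplace-type argument sketched above (locate the maximal term, then sum the Gaussian approximation around it) can be sanity-checked on the elementary sum Σ_{k≥0} x^k/k! = e^x, where the maximal term sits near k = x and the curvature of the log-term is about −1/k. This sketch is only an illustration of the method, not of the lemma's exact statement.

```python
from math import lgamma, log, exp, sqrt, pi

def laplace_sum_estimate(x, kmax=500):
    """Approximate sum_{k>=0} x^k / k! by the maximal term times the
    Gaussian width sqrt(2*pi/|f''|), where f(k) = k ln x - ln k! and
    f''(k) ~ -1/k near the maximum (located near k = x)."""
    logterms = [k * log(x) - lgamma(k + 1) for k in range(kmax)]
    k0 = max(range(kmax), key=logterms.__getitem__)
    return exp(logterms[k0]) * sqrt(2 * pi * k0)

x = 30.0
print(laplace_sum_estimate(x) / exp(x))  # ratio close to 1
```

The ratio to the exact value e^x tends to 1 as x grows, exactly as in the lemma's asymptotics.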
The standard argument with Markov's inequality yields upper bounds for the tail probabilities from Theorem 1.2 or 4.2.
Theorem 4.3. For some constants K_1 and K_2 and all x ≥ 1, say, the tail bounds (15) and (16) hold. Proof. For any even k and x > 0, P(|S| > x) ≤ x^{−k} E S^k. We use (3) and optimize the resulting exponent by choosing k = 10^{1/3} x^{4/3}, rounded to an even integer. This yields (16); we omit the details. (15) is obtained similarly from (6), using k = 5x^2.
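The optimization over k in this proof can be illustrated on a standard Gaussian N, whose even moments (2k − 1)!! = (2k)!/(2^k k!) are explicit: minimizing x^{−2k} E N^{2k} over k picks k of order x²/2 and gives a bound of order e^{−x²/2}. This is a sketch of the method only; the constants of Theorem 4.3 are not reproduced here.

```python
from math import exp, factorial

def moment_tail_bound(x, kmax=60):
    """Best Markov-type bound P(|N| > x) <= E N^{2k} / x^{2k} over k,
    for a standard Gaussian N with E N^{2k} = (2k)!/(2^k k!)."""
    best_k, best = None, float("inf")
    for k in range(1, kmax + 1):
        bound = factorial(2 * k) / (2 ** k * factorial(k) * x ** (2 * k))
        if bound < best:
            best_k, best = k, bound
    return best_k, best

x = 3.0
k_star, bound = moment_tail_bound(x)
print(k_star, bound)            # optimum near x^2/2; bound of order exp(-x^2/2)
print(bound / exp(-x * x / 2))  # within a small constant factor
```

As in the proof above, the bound is sharp in the exponent but not in the constant, which is the point of the remark that follows.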
Remark 4.4. The proof of Theorem 4.3 shows that any K_1 > 2π^{3/2} β/5^{1/2} ≈ 4.9 and K_2 > 10^{1/6} β π^{3/2}/5 ≈ 1.6 will do for large x. Alternatively, we could use Theorem 4.2 and P(S > x) < e^{−tx} E e^{tS} for t > 0, and so on; this would yield another proof of Theorem 4.3, with somewhat inferior values of K_1 and K_2.
The bounds obtained in Theorem 4.3 are sharp up to factors 1 + o(1) in the exponent, as is usually the case for estimates derived by this method. For convenience we state a general theorem, reformulating results by Davies [7] and Kasahara [13].
Theorem 4.5. Let X be a random variable, let p > 0, and let a and b be positive real numbers related by a = 1/(p e b^p) or, equivalently, b = (p e a)^{−1/p}.
(i) If X ≥ 0, then −ln P(X > x) ∼ a x^p as x → ∞ (17) is equivalent to (E X^r)^{1/r} ∼ b r^{1/p} as r → ∞ (18).
Here r runs through all positive reals; equivalently, we can restrict r in (18) to integers or even integers.
(ii) If X is a symmetric random variable, then (17) and (18) are equivalent, where r in (18) runs through even integers.
Here, (ii) follows from (i) applied to |X|, and part (iii), for general X, follows by considering max(X, 0).
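The relation between a, b and p in Theorem 4.5 can be sanity-checked on a standard Gaussian X (our choice of example, not one from the text): there p = 2, −ln P(X > x) ∼ x²/2 gives a = 1/2, and the absolute moments E|X|^r = 2^{r/2} Γ((r+1)/2)/√π give (E|X|^r)^{1/r} ∼ √(r/e), i.e. b = e^{−1/2}. These values satisfy a = 1/(p e b^p) exactly.

```python
from math import lgamma, log, exp, sqrt, pi

def abs_moment_root(r):
    """(E |X|^r)^(1/r) for X standard Gaussian, using
    E |X|^r = 2^(r/2) * Gamma((r+1)/2) / sqrt(pi)."""
    log_moment = (r / 2) * log(2) + lgamma((r + 1) / 2) - 0.5 * log(pi)
    return exp(log_moment / r)

p = 2
b = exp(-0.5)                   # predicted: (E|X|^r)^(1/r) ~ b * r^(1/p)
a = 1 / (p * exp(1) * b ** p)   # the relation of Theorem 4.5
print(a)                        # 0.5, matching -ln P(X > x) ~ x^2 / 2
r = 2000.0
print(abs_moment_root(r) / (b * sqrt(r)))  # ratio close to 1
```

So the tail exponent recovered from the moment growth via b = (p e a)^{−1/p} agrees with the classical Gaussian tail, as the theorem predicts.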