Moderate deviations and laws of the iterated logarithm for the volume of the intersections of Wiener sausages

Using the high moment method and the Feynman-Kac semigroup technique, we obtain moderate deviations and laws of the iterated logarithm for the volume of the intersections of two- and three-dimensional Wiener sausages.

Let p ≥ 2 be an integer and let {β j (s), 1 ≤ j ≤ p} be p independent standard Brownian motions.
Write W_r^j(t) = ⋃_{0≤s≤t} B_r(β_j(s)) and define the volume of the intersections of the p Wiener sausages as follows:

$$V_r(t)=\Big|\bigcap_{j=1}^{p}W_r^{j}(t)\Big|.$$

In the classical paper [17], Donsker and Varadhan studied the asymptotic behavior of the Laplace transform E(exp{−λ|W_r(t)|}) of the Wiener sausage |W_r(t)| and solved a conjecture of Mark Kac concerning the Wiener sausage. The fluctuation theorem for the Wiener sausage was obtained by Le Gall ([22]). The large deviations below the scale of the mean in the downward direction for the volume of the Wiener sausage, P(|W_r(t)| ≤ f(t)) with f(t) = o(E|W_r(t)|), were studied in [10], [17] and [28]. The large deviations on the scale of the mean in the downward direction, P(|W_r(t)| ≤ cE|W_r(t)|), were studied in [7]. The large deviations on the scale of the mean in the upward direction, P(|W_r(t)| ≥ cE|W_r(t)|), were considered in [6], [9] and [25]. Strong approximations of three-dimensional Wiener sausages were studied in [15].
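As a purely illustrative check of these definitions (not part of the paper), the following sketch estimates |W_r(t)| and V_r(t) for p = 2 in d = 2 by discretising the plane into small cells and counting the cells covered by each sampled Brownian path; all function names and parameter values here are our own choices.

```python
import numpy as np

def covered_cells(path, r, grid):
    """Cells of side `grid` whose centres lie within distance r of the
    sampled path: a discrete stand-in for the Wiener sausage W_r(t)."""
    covered = set()
    k = int(np.ceil(r / grid)) + 1  # cell offsets that could be within r
    for point in path:
        base = np.floor(point / grid).astype(int)
        for i in range(base[0] - k, base[0] + k + 1):
            for j in range(base[1] - k, base[1] + k + 1):
                centre = (np.array([i, j]) + 0.5) * grid
                if np.linalg.norm(centre - point) <= r:
                    covered.add((i, j))
    return covered

def brownian_path(steps, t, rng):
    """Discrete sample of a planar Brownian motion on [0, t], started at 0."""
    dt = t / steps
    increments = rng.normal(0.0, np.sqrt(dt), size=(steps, 2))
    return np.vstack([np.zeros((1, 2)), np.cumsum(increments, axis=0)])

rng = np.random.default_rng(0)
r, grid = 0.1, 0.05
cells1 = covered_cells(brownian_path(1000, 1.0, rng), r, grid)
cells2 = covered_cells(brownian_path(1000, 1.0, rng), r, grid)
vol1 = len(cells1) * grid ** 2        # estimate of |W_r^1(1)|
vol2 = len(cells2) * grid ** 2        # estimate of |W_r^2(1)|
V = len(cells1 & cells2) * grid ** 2  # estimate of V_r(1) for p = 2
```

Since both paths start at the origin, the two discrete sausages always overlap near 0, so the estimated V_r(1) is strictly positive and never exceeds either individual sausage volume.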
The asymptotic behavior of the volume of the intersections of Wiener sausages was obtained by Le Gall ([23], d = 2) and van den Berg ([5], d ≥ 3). Van den Berg, Bolthausen and den Hollander [8] first considered the large deviations for the volume of the intersections of Wiener sausages, and they obtained deep results for any c > 0. Noting that t log t ∼ E(|W_r(t)|)/(2π) in d = 2 and t ∼ E(|W_r(t)|)/κ_r in d = 3, a natural problem is to consider the corresponding deviations at scales f(t) with f(t) = o(E(|W_r(t)|)). This is the motivation of our paper. In fact, this problem for the intersection of the ranges of independent random walks was studied by Chen ([12], [13]). In this paper, we prove that the volume of the intersection of Wiener sausages has the same large deviations as the intersection of the ranges of independent random walks. As an application, the corresponding laws of the iterated logarithm are obtained. The main results are as follows: (1.5), where κ(d, p) is the best constant in the Gagliardo-Nirenberg inequality

$$\|f\|_{2p/(p-1)}\le C\,\|\nabla f\|_{2}^{d(p-1)/(2p)}\,\|f\|_{2}^{1-d(p-1)/(2p)},$$

and, in the case d = 3 and p = 2,

$$\limsup_{t\to\infty}\frac{V_r(t)}{\sqrt{t(\log\log t)^{3}}}=(2\pi r)^{2}\,\kappa(3,2)^{4}\qquad\text{a.s.}$$
(1.10)

Remark 1.1. (1). Let us compare Theorem 1.1 with the results of van den Berg et al. in [8]. Theorem 4 in [8] gives us the following estimates when d = 3 and p = 2. The large deviation results in [8] give some hints for the moderate deviation results, and they also suggest that the optimal conditions on b(t) are b(t) = o(log t) for d = 2 and b(t) = o(t^{1/3}) for d = 3. Similarly, we can also guess the moderate deviations for the intersection of Wiener sausages in the d = 4 case from the results in [8] (see also [13]).
(2). Theorem 1.1 complements the results of van den Berg, Bolthausen and den Hollander ([8]). In the case d = 2, part (1) of Theorem 1.1 answers problem (1.3). In the case d = 3, there remains a gap in the answer to problem (1.3), since we require a slightly stronger condition on b(t). In [13], the author obtained a complete result for the intersection of the ranges of random walks, in which the fact that the range satisfies R_n ≤ n is used. In the case of the Wiener sausage, we cannot find a proper upper bound for |W_r(t)|.
(3). The proof of Theorem 1.1 is based on the weak and L^p-convergence results for the Wiener sausage in [20] and the high moment method developed in [2], [12], [14] and [26]. The key components are the moment estimates and the Feynman-Kac semigroup approach. The proofs of the main results are given in Sections 2 and 3, respectively. The proofs of some technical lemmas are deferred to Sections 4, 5 and 6. Our proofs draw on some ideas and techniques from [12].

Moderate deviations
In this section, we give the proof of Theorem 1.1. Since V_r(t) is a non-negative random variable, by a version of the Gärtner-Ellis theorem devised by Chen ([12]), in order to prove Theorem 1.1 we need only prove the following result.
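Schematically, the reduction to exponential moments is the standard Gärtner-Ellis mechanism (a sketch in generic notation; A(t) and Λ are placeholders for the paper's normalisation and limit function, which are not reproduced in this extract):

```latex
% If, for every \theta>0,
\lim_{t\to\infty}\frac{1}{b(t)}\log
   \mathbb{E}\exp\Big\{\theta\,b(t)\,\frac{V_r(t)}{A(t)}\Big\}=\Lambda(\theta)
\quad\Longrightarrow\quad
\lim_{t\to\infty}\frac{1}{b(t)}\log
   \mathbb{P}\big\{V_r(t)\ge\lambda A(t)\big\}
   =-\sup_{\theta>0}\{\lambda\theta-\Lambda(\theta)\}.
```

The non-negativity of V_r(t) is what allows the one-sided Legendre transform over θ > 0 in this version of the theorem.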
Theorem 2.1. (1). Let d = 2 and p ≥ 2, and let b(t) be a positive function satisfying (1.4). Then for any θ > 0,

(2.1)

(2). Let d = 3 and p = 2, and let b(t) be a positive function satisfying (1.7). Then for any θ > 0,

We only prove the case d = 2; the proof for d = 3 is analogous. For simplicity, we write l_t = t/b(t).

Upper bound
We apply the high moment method to prove the upper bound (cf. [12]). The proof is based on the L^p-convergence results for the Wiener sausage in [20] and on high moment estimates of the volume of the intersections of Wiener sausages.
By the scaling property of Brownian motion, we can easily obtain the following L^p-convergence results from Corollary 3.2 in [20], where α([0,1]^p) is the p-fold mutual intersection local time of the Brownian motions, a quantity formally written as

$$\alpha([0,1]^p)=\int_{\mathbb{R}^d}\prod_{j=1}^{p}\Big(\int_0^1\delta_x(\beta_j(s))\,ds\Big)\,dx.$$

The key estimates in the upper bound are the following moment estimates. Their proofs will be given in Section 4.
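The scaling step referred to here can be sketched as follows (a standard consequence of Brownian scaling, in our notation rather than a display from [20]): for fixed t > 0,

```latex
\{\beta_j(s):0\le s\le t\}\ \stackrel{d}{=}\ \{\sqrt{t}\,\beta_j(s/t):0\le s\le t\}
\quad\Longrightarrow\quad
W_r^{j}(t)\ \stackrel{d}{=}\ \sqrt{t}\,W_{r/\sqrt{t}}^{j}(1),
\qquad
V_r(t)\ \stackrel{d}{=}\ t^{d/2}\,V_{r/\sqrt{t}}(1),
```

so the small-radius, unit-time results of [20] translate into large-time statements about V_r(t).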

The proof of the upper bound
Let s > 0 be fixed. By Lemma 2.2, we get

By (2.3), Lemma 2.3 and the dominated convergence theorem, we have

(2.9)

Now applying the large deviation result for Brownian intersection local times given by Chen (Theorem 2.1, [11]), we obtain the upper bound:
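The role of the moment estimates in this step can be sketched generically (our notation; A(t) stands for the normalisation in Theorem 2.1, which is not reproduced in this extract):

```latex
\mathbb{P}\big\{V_r(t)\ge\lambda A(t)\big\}
\le\inf_{m\ge 1}\frac{\mathbb{E}\,V_r(t)^{m}}{(\lambda A(t))^{m}},
\qquad m\asymp\theta\,b(t);
```

optimising over moments of order b(t) is what converts the moment asymptotics of Lemma 2.3 into the exponential decay rate appearing in (2.1).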

Lower bound
We will use the Feynman-Kac semigroup method (cf. [12]) to prove the lower bound. The key is to find a proper Feynman-Kac type operator and to obtain a good lower bound for it.
Notice that for any non-negative, bounded and uniformly continuous function f on ℝ² with ||f||_q = 1 and any integer m ≥ 1,

(2.11)

Therefore, the lower bound reduces to estimating the following linear operator T on L²(ℝ^d), defined by

Lemma 2.4. T is a self-adjoint operator on L²(ℝ^d).
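For orientation, Feynman-Kac type operators of the kind used below typically take the standard form (a sketch in our notation; the operator T itself is defined by the paper's display):

```latex
T_t^{f}\,\xi(x)=\mathbb{E}_x\Big[\exp\Big\{\int_0^t f(\beta(s))\,ds\Big\}\,\xi(\beta(t))\Big],
\qquad \xi\in L^{2}(\mathbb{R}^{2}),
```

whose generator is the self-adjoint Schrödinger operator ½Δ + f; self-adjointness is what allows spectral lower bounds of the kind exploited through Lemma 2.4.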
The following lemma plays an important role in the lower bound. Its proof is given in Section 6.

Lemma 2.5. Let f be bounded and continuous on ℝ^d. (1). If d = 2 and b(t) satisfies (1.4), then (2.12) (2.13)

The proof of the lower bound
We now complete the proof of the lower bound. By (2.11) and Lemma 2.5, we have

Taking the supremum over all non-negative, bounded and uniformly continuous functions f on ℝ² with ||f||_q = 1, the right-hand side becomes the corresponding supremum, where the last step follows from Lemma A.2 in [11].

Laws of the iterated logarithm
We prove Theorem 1.2 in this section. The upper bound of the law of the iterated logarithm is a direct application of Theorem 1.1 and the Borel-Cantelli lemma, so we only give the proof of the lower bound. Because the proof for d = 3 is analogous, we only prove the case d = 2. That is,

(3.1)

We first give some notation and a basic lemma, which is the main tool in proving (3.1). For each x̄, let P_x̄ denote the probability induced by the p independent Brownian motions, and let E_x̄ denote the expectation corresponding to P_x̄. To be consistent with the notation used before, we have

(2). Let d = 3 and p = 2 and let (1.7) hold. Then

Proof. The proof is analogous to Lemma 7 in [12]. We only prove (3.2). For given ȳ = (y_1, . . . , y_p),

Therefore, by (2.1), we have the lim sup estimate (3.4). It is easy to see from Theorem 4 in [12] that (3.2) is a consequence of (3.4) and the following estimate, where |B_t| denotes the Lebesgue measure of the set B_t.

For any function f on ℝ², define

Let f be a non-negative, bounded and uniformly continuous function on ℝ² with ||f||_q = 1. Similarly to (2.10), for any integer m ≥ 1 we have

Hence, we have

By Lemma 2.5, we have

Taking the supremum over all non-negative, bounded and uniformly continuous functions f on ℝ² with ||f||_q = 1 in the last inequality, we get

where β̃_j(t) = β_j(t + l_t) − β_j(l_t). Notice that

where γ(t) = min_{1≤j≤p} inf_{z∈B_t(y_j)} p_{l_t}(z) = min_{1≤j≤p} inf_{z∈B_t(y_j)} (2πl_t)^{-1} e^{-|z|²/(2l_t)} and p_{l_t}(z) is the probability density of β(l_t). We have

where the fifth step follows from Jensen's inequality. It follows from inf_{t>0} γ(t)|B_t| > 0 that for some constant δ > 0 and for any t > 0,

By (3.7) with t replaced by t − l_t, we obtain

Finally, we let ε → 0+ on the right-hand side. Then (3.5) follows from (2.14).
We now complete the proof of (3.1).
Then, by the law of the iterated logarithm for Brownian motion and the fact that n_k log log n_k = o(n_{k+1}/log log n_{k+1}) as k → ∞, we have that, with probability 1, the corresponding events occur infinitely often, and so, by the Markov property and the conditional Borel-Cantelli lemma, (3.8) holds.
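For comparison, the upper-bound half mentioned at the start of this section runs along standard lines (a generic sketch; A(t) and λ* stand for the normalisation and constant of Theorem 1.2, which are not reproduced in this extract). Choosing b(t) proportional to log log t in Theorem 1.1 gives, for any ε > 0 and some ε′ > 0,

```latex
\mathbb{P}\big\{V_r(t_k)\ge(1+\varepsilon)\,\lambda^{*}A(t_k)\big\}
\le(\log t_k)^{-(1+\varepsilon')}\qquad\text{for large }k,
```

which is summable along t_k = θ^k with θ > 1; the Borel-Cantelli lemma then controls the limsup along the subsequence, and the monotonicity of t ↦ V_r(t) fills in between the t_k.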

High Moment estimates
In this section we prove Lemmas 2.2 and 2.3. Our proofs are analogous to the case of the range of a random walk ([24], [12]), but there are some technical difficulties. For example, the range R_n of a random walk has the natural upper bound n, whereas |W_r(t)| ≤ t is not true. In the case of the Wiener sausage, the behavior of the sausage within a short time must therefore be controlled.
We first prove Lemma 2.2, which is a consequence of the following lemma.
For △ ⊂ ℝ, denote by

Let us make a decomposition of the domain [0, t], which comes from Chen ([12]). Let a ≥ 2 be an integer and let t_0 = 0 and t_1, t_2, . . . , t_a be positive real numbers satisfying

Then it is obvious that

Consequently, for any θ > 0,

Proof.
We now prove Lemma 2.3.We first give the following useful lemma.

Lemma 4.2. For any integer m ≥ 1, consider the set of all permutations of {1, 2, . . . , m}. Then

It is easy to see that P{y ∈ W_r(t)} = P{T(y) ≤ t} and P{y ∈ W_r^j(t)} = P{T_j(y) ≤ t}. Hence, by the strong Markov property, we have

and, by Hölder's inequality,

Repeating this procedure yields the conclusion of Lemma 4.2.
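The starting point of such moment computations can be sketched as follows (standard, using the independence of the p sausages; this is our sketch, not one of the paper's numbered displays):

```latex
\mathbb{E}\,V_r(t)^{m}
=\int_{(\mathbb{R}^d)^m}\mathbb{P}\Big\{y_1,\dots,y_m\in\bigcap_{j=1}^{p}W_r^{j}(t)\Big\}\,dy_1\cdots dy_m
=\int_{(\mathbb{R}^d)^m}\Big(\mathbb{P}\{y_1,\dots,y_m\in W_r(t)\}\Big)^{p}\,dy_1\cdots dy_m,
```

after which the probability inside is decomposed according to the order of the hitting times T(y_k) and bounded via the strong Markov property and Hölder's inequality, as above.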
Remark 4.1. We borrow the method from [24], in which the analogous result for intersections of the ranges of random walks is given by Le Gall and Rosen. However, we need a different analysis when applying the Markov property at time T(y_k), because V_r(t) is a continuous-space analogue of the intersections of the ranges of random walks.
Using Lemma 4.2, we get the following high moment estimates.

Lemma 4.3. There is a constant C depending only on d and p such that:
(1). When d = 2 and p ≥ 2,

With the help of Lemma 4.2 with p = 1, we have

Therefore, (4.6) is valid. Unlike in [12], when m is large enough the behavior of the Wiener sausage within a short time must be controlled.

The proof of Lemma 2.3. (1). For given

Therefore, by Hölder's inequality, we can bound sup sup E V_r^m(t), where the last step follows from (4.6) and C is a positive constant. Lemma 2.3 then follows from a Taylor expansion.

Exponential moment estimates of the Wiener sausage
In this section, we give some exponential moment estimates for a single Wiener sausage, which will be used in the proof of Lemma 2.5.
From [27], we have the following asymptotic behavior of the expectation of the Wiener sausage:

Proof. We only prove (5.2). By Lemma 4.2 with p = 1, we have

By (5.1) and a Taylor expansion, we easily see that (5.2) holds for some θ_0 > 0. For any θ > 0, we choose δ > 0 such that θ < θ_0[δ^{-1}] and denote

where the first inequality is due to the subadditivity of the Wiener sausage and the Markov property of Brownian motion.
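The subadditivity just invoked can be sketched as follows (standard; the restarted sausage W̃_r is our own notation):

```latex
W_r(s+t)=W_r(s)\cup\widetilde{W}_r(t),\qquad
\widetilde{W}_r(t):=\bigcup_{s\le u\le s+t}B_r(\beta(u))
\quad\Longrightarrow\quad
|W_r(s+t)|\le|W_r(s)|+|\widetilde{W}_r(t)|,
```

where, by the Markov property, |W̃_r(t)| is independent of the path up to time s and has the same law as |W_r(t)|; iterating over blocks of length δ gives E exp{θ|W_r(nδ)|} ≤ (E exp{θ|W_r(δ)|})^n.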
Next, we give an exponential estimate for |W_r(t)| − E|W_r(t)|. The proof is analogous to that of Theorem 5.4 in [2]. The following lemma, given by Bass, Chen and Rosen ([1]), plays an important role in the proof of Lemma 5.3. Then for some λ > 0,

Then there is a constant C such that:

and set

Then

where β̄_k = β_k − Eβ_k and ᾱ_{j,k} = α_{j,k} − Eα_{j,k}. By Lemma 5.1, we have that

Therefore, by Lemma 5.2, there is a θ > 0 such that

By the choice of N it is easy to see that there is a C > 0 independent of t such that

So there is some

Next, we need only show that for some C > 0,

where W′_r(t) is an independent copy of W_r(t). Then, for each 1 ≤ j ≤ N, {ᾱ_{j,1}, ᾱ_{j,2}, . . . , ᾱ_{j,2^{j−1}}} is a sequence of i.i.d. random variables with the same distribution as V_r(t2^{−j}). By Lemma 2.3 (with p = 2), there is a δ > 0 such that

By Lemma 5.2 again, there is a θ > 0 such that

Hence for some c > 0,

Using Hölder's inequality with 1/p = 1 − 2^{−N}/2 and 1/q = 2^{−N}/2, we have

where the second inequality follows from the fact that λ_N < 1. Repeating this procedure,

So we have
So (5.6) holds.
Remark 5.1. The analogous results for the range R_n of the random walk are proved in [2] and [13]. When d = 3, the author of [13] shows that there is a θ > 0 such that

where the fact that R_n ≤ n is used. In the case of the Wiener sausage, we cannot find a proper upper bound for |W_r(t)|. That is why we cannot improve the restriction on b(t) for d = 3.

Some estimates of the Feynman-Kac semigroup
In this section we prove Lemma 2.5. The proof is again based on a decomposition of the domain [0, t] and on the exponential moment estimates of the Wiener sausage, similarly to [12].
We first give a weak convergence theorem for the Wiener sausage due to Le Gall ([20]).
where the last step is a direct consequence of Theorem 3.1 in [20].

Denote by
We now use the exponential moment estimates of the Wiener sausage and the Feynman-Kac semigroup approach to prove the following lemma.

Lemma 6.2. Let f be bounded and continuous on ℝ^d.
(1). When d = 2,

Proof. The proof is analogous to Lemma 5 in [12]. We only prove (6.4). Define the linear operator T_t on L²(ℝ²) as follows: for any ξ ∈ L²(ℝ²),

By Lemma 2.4, T_t is self-adjoint. Let g be an infinitely differentiable function on ℝ² satisfying g(x) = 0 for all x ∉ [−M, M]² and ∫_{ℝ²} |g(x)|² dx = 1. Set

Let p_t(x) be the probability density of β(t), and let {F_s, s ≥ 0} denote the filtration generated by {β(s), s ≥ 0}. Then, by the Markov property, we have

Next we calculate 〈ξ_t, T_t ξ_t〉. Since

it follows from (6.1), Lemma 6.1 and the dominated convergence theorem that lim_{t→∞} log〈ξ_t, T_t ξ_t〉 = log

Let us consider the following semigroup of linear operators on L²(ℝ²), defined by

Then, by the Feynman-Kac formula, we see that the generator of T_t^f is self-adjoint and satisfies

Next, we will use Lemma 5.3 to prove that |W_r(t)| is exponentially equivalent to

Proof. (1). By Chebyshev's inequality, we need only prove that there exists a constant θ > 0 such that

By the triangle inequality, we need only prove that there exists a constant θ > 0 such that

By (5.1), one can easily get

and so (6.9) holds.
It remains to show (6.11). In fact, by the Markov property, we have

which implies (6.11). By the triangle inequality, in order to get (6.7) we need only prove that there exists a constant α > 0 such that

and so (6.12) holds.
The proof of Lemma 2.5. We only prove (2.12). Notice that

For any ε > 0, set, as in [12],

The method has been used in this paper (see the proof of Lemma 6.3).

The authors are grateful to the referees for their comments, which helped improve the manuscript.

Let {P_λ^t, λ ≥ 0} be the spectral measure of T_t. Then, since ∫|g|² dx = 1, we see that 〈P_λ^t ξ_t, ξ_t〉 defines a probability measure, and by Jensen's inequality we have that

Similarly, let {P_λ, λ ≥ 0} be the spectral measure of the generator of T_t^f. Then, since ∫|g|² dx = 1, we see that 〈P_λ ξ_t, ξ_t〉 defines a probability measure, and

(6.16)

By (6.15) and (6.16), we need only prove