STRICT CONCAVITY OF THE HALF PLANE INTERSECTION EXPONENT FOR PLANAR BROWNIAN MOTION

Abstract. The intersection exponents for planar Brownian motion measure the exponential decay of probabilities of nonintersection of paths. We study the intersection exponent ξ(λ 1 , λ 2 ) for Brownian motion restricted to a half plane, which by conformal invariance is the same as Brownian motion restricted to an infinite strip. We show that ξ is a strictly concave function. This result is used in [11] to establish a universality result for conformally invariant intersection exponents.


Introduction
The intersection exponents for planar Brownian motion give the exponential rate of decay of probabilities of certain nonintersection events. The importance of the exponents can be seen both in their relevance to fine properties of Brownian paths [6,7,8] and in their apparent relation to critical exponents for self-avoiding walks and percolation [5,11]. Most of the work (see [9] and references therein) has focused on the exponents for Brownian motion in the whole plane. However, recent work using conformal invariance [10] shows that the exponents for Brownian motions restricted to a half plane are fundamental in understanding all exponents.
The purpose of this paper is to study the half plane exponents ξ = ξ(λ 1 , λ 2 ). These exponents, which we define below, are denoted by ξ(λ 1 , 1, λ 2 ) in [10]; however, since we will only be considering these exponents in this paper, we choose the simpler notation. The main result of this paper is that ξ(λ 1 , λ 2 ) is a strictly concave function. The corresponding result for the whole space exponent was proved in [9]. While the basic framework of the argument in this paper is similar to that in [9], there are two differences which make the arguments in this paper somewhat nicer. First, while [9] discussed both two and three dimensional Brownian motions, this paper only considers planar Brownian motions. Hence, conformal invariance can be exploited extensively. Second, a coupling argument is used which improves the rate of convergence to an invariant measure; in particular, the stretched exponential rate, O(e −β√n ), in [9] is improved to an exponential rate, O(e −βn ), here. The coupling argument is similar to the argument used in [4] (see [4] for other references for coupling arguments). The main theorem in this paper is used in [11] to show universality among conformally invariant exponents. This latter paper gives the first rigorous result indicating why self-avoiding walk and percolation exponents in two dimensions should be related to Brownian exponents.
We will now give the definition of ξ(λ 1 , λ 2 ). Rather than considering Brownian motions restricted to the upper half plane H = {x + iy : y > 0}, we will study Brownian motions restricted to the infinite strip

J = {x + iy : −π/2 < y < π/2}.

Let B t be a complex valued Brownian motion defined on a probability space (Ω, P) with stopping times T n = inf{t : ℜ(B t ) = n}, and assume for now that B 0 has a uniform distribution on [−iπ/2, iπ/2]. Let J n = {B[0, T n ] ⊂ J }. Then

P(J n ) ≍ e −n ;   (1)

this estimate can be deduced from the "gambler's ruin" estimate for one dimensional Brownian motion. Let B 1 t be another complex valued Brownian motion defined on a different probability space (Ω 1 , P 1 ) with stopping times T 1 n = inf{t : ℜ(B 1 t ) = n}. Assume for now that B 1 0 has a uniform distribution on [−iπ/2, iπ/2]. If w, z ∈ C , we write w ≻ z if ℑ(w) > ℑ(z). Define the (Ω, P) random variables Z + n , Z − n on the event J n to be the P 1 -probabilities that B 1 [0, T 1 n ] stays in J and passes above (respectively, below) the path B[0, T n ] without intersecting it. If λ 1 , λ 2 ≥ 0, the half plane exponent ξ = ξ(λ 1 , λ 2 ) is defined by

E[(Z + n ) λ 1 (Z − n ) λ 2 ; J n ] ≈ e −ξn , n → ∞.

Here we write ≈ for logarithmic asymptotics, i.e., the logarithms of both sides are asymptotic. If λ 1 = 0 or λ 2 = 0 we use the convention 0 0 = 0, i.e., (Z + n ) 0 is the indicator function of the event {Z + n > 0}. The existence of such a ξ was established in [10]; we will reprove this in this paper and show, in fact, that

E[(Z + n ) λ 1 (Z − n ) λ 2 ; J n ] ≍ e −ξn .   (2)

Moreover, for each M < ∞, the implicit multiplicative constants in (2) can be chosen uniformly for 0 ≤ λ 1 , λ 2 ≤ M . The estimate (1) shows that ξ(0, 0) = 1.
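As an aside, estimate (1) is easy to check numerically. Solving the Dirichlet problem in the half strip {ℜ(z) < n} ∩ J by separation of variables expresses P iy {B[0, T n ] ⊂ J } as a Fourier series over odd modes whose leading term is (4/π)e −n cos y. The following sketch (our illustration, not part of the paper) evaluates this series and recovers the decay rate 1, i.e., ξ(0, 0) = 1.

```python
import math

def hit_prob(n, y, terms=60):
    # Harmonic measure of the edge {Re(z) = n} in the half strip
    # {Re(z) < n, |Im(z)| < pi/2}, seen from the point iy; this is
    # P^{iy}{ B[0, T_n] is contained in J }.
    return sum((4 / (k * math.pi)) * (-1) ** ((k - 1) // 2)
               * math.exp(-k * n) * math.cos(k * y)
               for k in range(1, 2 * terms, 2))

def p_strip(n, grid=2001):
    # Average over the uniform starting distribution on [-i*pi/2, i*pi/2].
    ys = [-math.pi / 2 + math.pi * i / (grid - 1) for i in range(grid)]
    return sum(hit_prob(n, y) for y in ys) / grid

# The exponential rate of decay of P(J_n) is 1:
rate = -math.log(p_strip(6) / p_strip(5))
print(rate)  # ≈ 1.0
```

Only the leading Fourier mode survives as n grows, which is the analytic source of the rate e −n in (1).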
The main result of this paper that is used in [11] is the following.
Theorem 1 Let λ 2 ≥ 0 and let ξ(λ) = ξ(λ, λ 2 ). Then ξ is C 2 for λ > 0 with ξ ′′ (λ) < 0. It is conjectured [10], in fact, that

ξ(λ 1 , λ 2 ) = [(√(24λ 1 + 1) + √(24λ 2 + 1) + 3) 2 − 1]/24.

Of course, if this conjecture is true, Theorem 1 would follow immediately. However, this conjecture is still open, and it is possible that Theorem 1 will be useful in proving the conjecture. In [10] a whole family of half plane intersection exponents ξ(a 1 , . . . , a p ) was defined for any nonnegative a 1 , . . . , a p ; however, it was shown that all of these values can be determined from the values of ξ(λ 1 , λ 2 ). This is why we restrict our attention to ξ(λ 1 , λ 2 ) in this paper.
Studying exponential decay rates (i.e., large deviation rates) generally leads to studying the behavior of processes conditioned on this exceptional behavior, and this is the case here. Fix λ 2 and let

Ψ n = − log Z + n , q n (λ) = E[(Z + n ) λ (Z − n ) λ 2 ; J n ].

Note that (2) implies as n → ∞,

ξ(λ) = − lim n→∞ n −1 log q n (λ).

Direct differentiation gives

(d/dλ) log q n (λ) = −Ẽ n [Ψ n ], (d 2 /dλ 2 ) log q n (λ) = var n (Ψ n ),

where Ẽ n and var n denote expectation and variance with respect to the measure

dP̃ n = q n (λ) −1 (Z + n ) λ (Z − n ) λ 2 dP.   (3)

What we prove is that there is an a = a(λ, λ 2 ) and a v = v(λ, λ 2 ) such that

Ẽ n [Ψ n ] = an + O(1), var n (Ψ n ) = vn + O(1).

Moreover, we show that |ξ ′′ (λ)| is bounded so that ξ is C 2 with

ξ ′ (λ) = a(λ, λ 2 ), ξ ′′ (λ) = −v(λ, λ 2 ).

Part of the work is showing that the measures on paths (3) approach an invariant measure and that the random variables Ψ n − Ψ n−1 are approximately a stationary sequence with exponentially small correlations. A separate argument is given to show that v > 0 which then gives Theorem 1.
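The differentiation identities here are the standard cumulant computations for a tilted (weighted) measure, and they can be sanity-checked on any finite toy model. In the sketch below (our illustration; the values of Z ± are arbitrary made-up numbers, not output of the actual path model), numerical derivatives of log q n (λ) are compared with the mean and variance of Ψ = − log Z + under the tilted measure (3).

```python
import math

# Toy sample space: four equally likely "paths" with made-up values
# of Z+ and Z- in (0, 1].
zp = [0.31, 0.07, 0.55, 0.12]
zm = [0.22, 0.48, 0.09, 0.35]
lam2 = 0.7

def log_q(lam):
    # log E[(Z+)^lam (Z-)^lam2] under the uniform measure P
    return math.log(sum(a ** lam * b ** lam2 for a, b in zip(zp, zm)) / len(zp))

def tilted_mean_var(lam):
    # mean and variance of Psi = -log Z+ under the tilted measure
    w = [a ** lam * b ** lam2 for a, b in zip(zp, zm)]
    tot = sum(w)
    psi = [-math.log(a) for a in zp]
    m = sum(wi * p for wi, p in zip(w, psi)) / tot
    v = sum(wi * (p - m) ** 2 for wi, p in zip(w, psi)) / tot
    return m, v

lam, h = 0.9, 1e-4
d1 = (log_q(lam + h) - log_q(lam - h)) / (2 * h)
d2 = (log_q(lam + h) - 2 * log_q(lam) + log_q(lam - h)) / h ** 2
m, v = tilted_mean_var(lam)
# (log q)'(lam) = -E~[Psi] and (log q)''(lam) = var~(Psi):
print(d1 + m, d2 - v)  # both ≈ 0
```

The second identity is why strict concavity reduces to a positive variance rate v for Ψ n under the weighted path measure.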
We now outline the paper. Section 2 derives a number of results using conformal invariance. We start by reviewing facts about Brownian motion in a rectangle with the intent of using conformal invariance to relate these results to regions that are conformally equivalent to a rectangle. We assume that the reader is familiar with the conformal invariance of Brownian motion (see, e.g., [2, V]). An important conformal invariant is extremal distance. This quantity was first studied in complex variables (see, e.g., [1]), but we give a self-contained treatment of the facts that we need. Section 3 discusses the intersection exponent. The main goals of this section are to derive the separation lemma, Lemma 9, and to use this to derive (2). Since the results in this section are very similar to results in corresponding sections of [6,7,8], we are somewhat brief. The last section constructs the invariant measure on paths and uses this to justify the differentiation of ξ(λ). The proof here is easier than that of the corresponding parts of [6,7,8]; we use a coupling argument derived from that in [4] to show the exponential decay of correlations.
We make some assumptions about constants in this paper. We fix M < ∞ and we consider only 0 ≤ λ 1 , λ 2 ≤ M . Constants c, c 1 , c 2 , . . . and β, β 1 , β 2 , . . . are positive constants that may depend on M but do not depend on anything else. (In fact, the constants in Section 2 do not depend on M .) In particular, constants do not depend on the particular λ 1 , λ 2 . Constants c, c 1 , c 2 and β may change from line to line, but c 3 , c 4 , . . . and β 1 , β 2 , . . . will not vary. If δ n ↓ 0, we write a n = O(δ n ) if |a n | ≤ cδ n for all n. Similarly, if f, g are positive, we write f ≍ g if there is a c such that c −1 g ≤ f ≤ cg. All implicit constants in the notations O(·) and ≍ will depend only on M and not on λ 1 , λ 2 . We will write P z , E z to denote probabilities and expectations assuming B 0 = z. If the z is omitted the assumption is that B 0 has a uniform distribution on [−iπ/2, iπ/2]. The same assumptions will be made about P z 1 , P 1 . We will consider two norms on probability measures. The first is the standard variation measure,

‖P 1 − P 2 ‖ = sup E |P 1 (E) − P 2 (E)|.

The second norm will be

‖P 1 − P 2 ‖ 1 = sup E |P 1 (E) − P 2 (E)|/(P 1 (E) + P 2 (E)).   (4)

Here the supremum is over all E for which P 1 (E) + P 2 (E) > 0. Equivalently, we could define

‖P 1 − P 2 ‖ 1 = ‖(dP 1 − dP 2 )/(dP 1 + dP 2 )‖ ∞ ,

where the norm on the right hand side is the standard L ∞ norm. This norm could be infinite. Note that ‖P 1 − P 2 ‖ ≤ 2‖P 1 − P 2 ‖ 1 . I would like to thank Wendelin Werner for many useful conversations on intersection exponents.
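For measures on a finite set both norms are elementary to compute, which makes the two definitions easy to compare. The sketch below (our illustration, assuming the two norms as described above) computes the variation norm by brute force over events, and ‖ · ‖ 1 via the pointwise ratio |dP 1 − dP 2 |/(dP 1 + dP 2 ).

```python
def variation_norm(p1, p2):
    # sup over events E of |P1(E) - P2(E)|, by enumerating all subsets
    n = len(p1)
    best = 0.0
    for mask in range(1 << n):
        e1 = sum(p1[i] for i in range(n) if mask >> i & 1)
        e2 = sum(p2[i] for i in range(n) if mask >> i & 1)
        best = max(best, abs(e1 - e2))
    return best

def norm_1(p1, p2):
    # sup over E of |P1(E) - P2(E)| / (P1(E) + P2(E)); by the mediant
    # inequality the supremum is attained on single points, so this is
    # the L-infinity norm of (dP1 - dP2)/(dP1 + dP2)
    return max(abs(a - b) / (a + b) for a, b in zip(p1, p2) if a + b > 0)

p1 = [0.5, 0.3, 0.2]
p2 = [0.4, 0.4, 0.2]
print(variation_norm(p1, p2))  # ≈ 0.1
print(norm_1(p1, p2))          # ≈ 1/7
```

The second norm dominates the first (up to a factor 2) but is much stronger: it controls ratios of probabilities, which is what the coupling estimates later in the paper need.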

Rectangle estimates
Let J be the infinite strip as in the introduction, and let J + = {z ∈ J : ℜ(z) > 0} and J L = {z ∈ J : 0 < ℜ(z) < L}. Let ∂ 1 , ∂ 2 be the vertical boundaries of J L , We will need some standard facts about Brownian motion in rectangles and half infinite strips. What we need can be derived from the exact form of the solution of the Dirichlet problem in a rectangle (see, e.g., [3, Section 11.3]); we will just state the results that we will need. If U is any open region, let τ U = inf{t : B t ∉ U }. For this subsection, let τ = τ J + , τ L = τ J L . First, for real n > 0 Also, Consider an "excursion" W t from {ℜ(z) = L} to {ℜ(z) = 0}. There are a number of ways to get such an excursion. One way is to let z ∈ C with ℜ(z) ≥ L, setting By vertical translation, we can allow the excursion to start at L + iy for any y ∈ R. An excursion starts on {ℜ(z) = L}, immediately enters {ℜ(z) < L}, and then has the distribution of a Brownian motion conditioned to leave {0 < ℜ(z) < L} at {ℜ(z) = 0}. Such a process can also be given by (L − X t ) + iY t where X t is a Bessel-3 process stopped when X t = L, and Y t is an independent Brownian motion. Suppose W t is such an excursion with W 0 ∈ J + and let Consider the event For each z ∈ J + with ℜ(z) = L, there is a probability measure Q z,L on ∂ 1 given by The following lemma, whose proof we omit, can be proved either by direct examination of the solution of the Dirichlet problem or by a coupling of h-processes.
where the supremum is over all −π/2 < y 1 , y 2 < π/2, and ‖ · ‖ 1 is the norm as defined in (4). Then g(L) < ∞ for every L > 0. Moreover, there exist c 3 , β 1 such that for all L ≥ 1, Suppose U ⊂ J + is open and connected with By splitting the path B[0, T 0 ] as in (6), we see that If we let L → ∞, we see that there is a density such that if Q denotes the measure H(y) dy, then Similarly, (8) holds with Q replacing Q L,L ,

Extremal distance
Let U be a bounded simply connected domain in C whose boundary is a Jordan curve. Let A 1 , A 2 be disjoint closed connected subsets of ∂U , each larger than a single point. We denote the other arcs by A 3 , A 4 so that ∂U = A 1 ∪ A 2 ∪ A 3 ∪ A 4 ; the arcs A 1 , A 2 , A 3 , A 4 are closed and the intersection of two arcs is empty or a single point; and going counterclockwise the order of the arcs is A 1 , A 3 , A 2 , A 4 . There is a unique L = L(A 1 , A 2 ; U ) such that there is a conformal transformation F : U → J L which can be extended continuously to the boundary so that We call L the extremal distance (this is actually π −1 times the standard extremal length or extremal distance as in [1], but this definition will be more convenient for us). As defined, extremal distance is clearly a conformal invariant.
Let us define a similar quantity which is sometimes easier to estimate. Let B t be a complex valued Brownian motion and for j = 1, 2, 3, 4, let In other words, f j is the solution of the Dirichlet problem with boundary value 1 on A j and 0 on the rest of ∂U . It is easy to check that L → φ L is a continuous, strictly decreasing function with Estimates for rectangles tell us that φ L ≍ e −L/2 as L → ∞, i.e., there is a c 4 such that for all L ≥ 1, Using conformal equivalence of rectangles and symmetry, and the supremum is attained at the center. By conformal invariance, we then get for all such domains, Proof. We only need to prove the first inequality. Let z = (L/2) + iy with |y| < π/2. Let Note that S and S are independent, the distribution of S does not depend on y, and Hence it suffices to show for all |y| < π/2 and all t > 0,

Lemma 3
Suppose U is a domain as above such that for some n > 0, Then By continuity, we can find a z 0 with ℜ(z 0 ) = n/2 and f 3 (z 0 ) = f 4 (z 0 ). Hence, Suppose η : [0, 1] → C is a simple, continuous path with We will call such a path a crossing path. If η 1 , η 2 are two crossing paths we write We call U (more precisely, U , A 1 , A 2 ) the generalized rectangle generated by η 1 , η 2 . Note that the rectangles J L are generalized rectangles.

Lemma 4
There exists a c 5 such that the following holds. Suppose Proof. The first inequality follows immediately from Lemma 3. For the other direction let z = n/2. Then by (5),

But geometric considerations give
A similar argument holds for A 2 giving The lemma then follows from (11).

Lemma 5 Let ψ(d) be the maximum of
Then lim d→0+ ψ(d) = 0. Proof. Choose s, t with Without loss of generality assume d < 1/10 and ℜ[η 1 (s)] ≤ 0. Let and consider the curve consisting of the straight line from η 1 (s) to w followed by the line from w to η 2 (t). It is easy to see that there is a c such that for all z in this line By continuity we can find a z 0 on this line so that But the Beurling projection theorem (see, e.g., [2, V.4]) gives Consideration of (12) and (13) on the rectangle J L shows that L → 0 as d → 0+.

Path domains
Let X denote the set of continuous functions We will call such domains path domains; in particular, D(γ) is the path domain associated to γ. We also consider J + as the path domain associated to the function even though this function is not actually in X .
There is a well-defined probability measure on Brownian excursions in the disk ∆ starting at 1 and conditioned to leave ∆ at A. (One way to obtain the measure is to define the measure for Brownian paths starting at z ∈ ∆ conditioned to leave the disk at A and then taking a limit as the initial point z approaches 1.) By reversing time, we can consider these paths as starting at A and leaving the disk at 1. The lifetimes of these paths are random and finite. However, if these paths are conformally mapped to D(γ) by f −1 γ we get paths with infinite lifetime. This measure could also be defined by taking appropriate limits of paths in J + , and clearly the limit is conformally invariant. We denote this measure by ν(γ).
Let γ ∈ X , and let D = D(γ). If n > 0, let y n = y n (γ) be the largest y such that Note that D \ V o n consists of two connected components, a bounded component that we denote by V − n and an unbounded component that we denote by V + n . While we think of V − n as being to the left of {ℜ(z) = n} and V + n as being to the right, note that both Note that if 0 < m < n, and σ m < κ n , then

Truncated paths
Fix γ ∈ X and let F = F γ be the conformal transformation taking D = D(γ) to J + as above.
and for n > 0, let We let ν = ν(γ) be the measure on paths as in Section 2.3, and let ν * be the corresponding measure on paths in J + . Note that ν and ν * are related by the conformal map F . We write informally ν * = F (ν) although this notation ignores the time change involved in the conformal transformation.
If η ∈ X , we define σ n (η), κ n (η) as in Section 2.3. If n > 0, we use Φ n η to represent the bounded path obtained from η by truncating the domain at σ n (η), Let ν n , ν * n denote the measures on truncated paths obtained from ν, ν * by performing this truncation.
There is another way to get ν n (or similarly ν * n ) that we will describe now. For any z ∈ D with ℜ(z) ≥ n, start a Brownian motion B t at z. As before, let consider the time reversal of the path B[T n , τ D ]. More precisely, we let The conditional measure of these paths given B(0) = z and B(τ D ) ∈ ∂ γ gives a measure that we denote ν n,z . Let Q n,z be the measure on {w : ℜ(w) = n} obtained from the distribution of B(T n ) given B(0) = z and T n ≤ τ D . Then we can also describe ν n,z by starting a Brownian motion on {w : ℜ(w) = n} using the initial distribution Q n,z ; conditioning on the event and considering the paths (Note that we do not fix a w and then do the conditioning, but rather we condition just once. In particular, the measure on {ℜ(w) = n} given by terminal points under ν n,z is not the same as Q n,z .) We can do the same construction on paths in J + giving the measures ν * n,z , Q * n,z . By (7), if s ≥ 1, ℜ(z), ℜ(w) ≥ n + s, then By conformal transformation, we see that a similar result holds for Q n,z . More precisely, if s ≥ 1, Letting z tend to infinity, we therefore get measures Q n , Q * n such that if s ≥ 1 and ℜ(z) ≥ n + s, The measure ν n as above can be obtained in the same way as the ν n,z , using Q n as the initial measure. Note that Q 0 is the same as the Q of Section 2.1, and Q n is just a translation of this measure.
Estimates for the rectangle tell us that if n, s > 0, By conformal invariance, we get a similar result for ν, Let If ν n (m) denotes the conditional measure derived from ν n by conditioning on the event where ‖ · ‖ denotes variation measure as in the introduction. Similarly, if Q n,z (m) denotes the measure on {ℜ(w) = n}, Let ν n,z (m) be the measure defined similarly to ν n,z except that the conditioning is on the event We have derived the following lemma.

Lemma 6
There exists a c such that the following holds. Suppose 0 < n < m < ∞ and z ∈ D with Then ‖ν n,z (m) − ν n ‖ ≤ ce −2s .
Let n ≥ 1 and (1/4) . Then by (14) and comparison to the rectangle, we can see Let X̂ n be the set of γ ∈ X such that Note that if γ ∈ X̂ n , then It follows from Lemmas 3 and 4 that there is a c such that if j = 1, 2 and z, w ∈ D(γ) with In particular, if 2n ≤ ℜ(z) ≤ 3n, Note that the measure ν n,z (5n) depends only on and not on all of D.

An important lemma
In this subsection we will consider two paths γ 1 , γ 2 ∈ X . We use the notation of the previous subsection except that we use superscripts to indicate which path we are considering. For example, the measure ν in the previous section corresponding to γ 1 and γ 2 will be denoted ν 1 and ν 2 , respectively. We will write Note that if then γ 1 ∈ X̂ n if and only if γ 2 ∈ X̂ n . The following lemma is then a corollary of (17).

Lemma 7
There is a constant c 8 such that the following holds. Suppose γ 1 , γ 2 ∈ X̂ n and Then

Intersection exponent
In this section we define the intersection exponent and derive some important properties. We use the notation in the introduction. On the event J n , the random variables Z + n , Z − n can be defined by Recall that we use the convention that 0 0 = 0, i.e., (Z + n ) 0 is the indicator function of the event {Z + n > 0}. Let q n = q n (λ 1 , λ 2 ) = E[Θ n ].
The Harnack inequality applied separately to B and B 1 can be used to show that there is c 7 such that q n+m ≤ c 7 q n q m−1 .
In particular, log(c 7 q n−1 ) is a subadditive function, and hence by standard arguments there is a ξ = ξ(λ 1 , λ 2 ), which we call the intersection exponent, such that q n ≈ e −ξn , n → ∞.
Moreover, there is a c 8 such that q n ≥ c 8 e −ξn .
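The "standard arguments" referred to here are Fekete's subadditivity lemma: if a n+m ≤ a n + a m for all n, m, then a n /n converges to inf k a k /k. A small sketch (our illustration, with a toy subadditive sequence standing in for a n = log(c 7 q n−1 )):

```python
import math

xi = 1.3  # toy exponent

def a(n):
    # toy subadditive sequence: a_n = xi*n + log(1 + 1/n); subadditivity
    # holds since log(1 + 1/(n+m)) <= log(1 + 1/n) + log(1 + 1/m)
    return xi * n + math.log(1 + 1 / n)

# check subadditivity on a range of pairs
for n in range(1, 40):
    for m in range(1, 40):
        assert a(n + m) <= a(n) + a(m) + 1e-12

# Fekete: a_n / n converges to inf_k a_k / k, here equal to xi
print(a(10_000) / 10_000)  # ≈ 1.3
```

In the paper, a n = log(c 7 q n−1 ) is subadditive, so a n /n → inf k a k /k = −ξ; the inf characterization gives a n ≥ −ξn, which is exactly the one-sided bound q n ≥ c 8 e −ξn .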

Separation lemma
Let F n denote the σ-algebra generated by Note that the random variables Z + n , Z − n , Θ n are functions of B[0, T n ] and hence are F n -measurable. Let δ n be the F n -measurable random variable and let U n be the F n -measurable event The following lemma can be proved easily using conformal invariance and estimates for Brownian motion in a wedge. We omit the proof.

Lemma 8
There exist c, β such that the following is true. Suppose n ≥ 0, 3/2 ≤ r ≤ 3, and > 0. Then, The next lemma is a key lemma to show quick convergence to equilibrium. We call it the separation lemma because it states roughly that paths conditioned not to intersect actually get a reasonable distance apart with positive probability.

Lemma 9
There exists a c such that the following is true. If n ≥ 0, Proof. Let N be the smallest positive integer so that Let m k = 3/2 for k ≤ N , and for k > N, where the infimum is over all n ≥ 0, m k ≤ r ≤ 2, and on the event Assume that k ≥ N , n ≥ 0, m k+1 ≤ r ≤ 2, and that Let ρ = ρ(n, k) be the smallest positive integer l such that If B(n + (l − 1)2 −k+1 ) ∈ J then there is a positive probability (independent of the exact position of If j ≤ k 2 /4, the strong Markov property and the definition of h k imply that This implies Summing over j, this gives Combining this with (18) we get which finishes the lemma.
It follows from the lemma that for every n ≥ 2, It is easy to check that From this we conclude that there is a c 9 > 0 such that for all n ≥ 0, where the supremum is over all z with ℜ(z) = 0, and define Z̄ − n similarly. Let Define Z̃ + n to be the same as Z + n−1 for the path Let Z̃ − n be defined similarly and Θ̃ n = (Z̃ + n ) λ 1 (Z̃ − n ) λ 2 . By the Harnack inequality applied to B 1 , there is a constant c such that Θ n ≤ c Θ̃ n .
But the Harnack inequality applied to B shows that for any z, The following then follows from (19).

Lemma 10
There exists a c 10 such that for all y ∈ R and all n ≥ 0, E iy [Θ n ; J n ] ≤ c 10 q n .

Other lemmas
In this section we derive a number of lemmas. The main goal is to get the estimate q n ≤ ce −ξn which can be derived from the estimate q n+m ≥ cq n q m . The separation lemma tells us that a good proportion of the paths under the measure given by Θ n are separated. To such separated paths we can attach other Brownian paths.

Lemma 11
There exists a c 11 such that the following is true. Let Λ n = Λ n (c 11 ) be the event Then Proof. For any ε > 0, let κ = κ(ε) be the first time t such that By the standard "gambler's ruin" estimate, Hence by the strong Markov property, the Harnack inequality, and (19), In particular, by taking ε sufficiently small we can make the right hand side smaller than q n /2.
We let B(z, ε) denote the closed disk of radius ε about z. We also fix a c 11 so that the previous lemma holds and let Λ n = Λ n (c 11 ).
Lemma 12 There is a c 12 such that the following is true. Let Γ n be the event Then, sup y E iy [Θ n ; Λ n ∩ Γ n ] ≥ c 12 q n .
Proof. Let r n be the supremum over y of E iy [Θ n ; Λ n ], and let y = y n be a number that obtains the supremum. By the previous lemma, r n ≥ q n /2. Let for some δ > 0, independent of n. Hence by the strong Markov property, and hence E iy [Θ n ; Λ n ∩ Γ n ] ≥ δr n .
We now let and letZ Using (16), we can see that there is a c 13 such that on the event Λ n ∩ Γ n , Θ n ≥ c 13 Θ n , and hence sup E iy [Θ n ; Λ n ∩ Γ n ] ≥ c 13 c 12 q n .
Another simple use of the Harnack inequality shows that there is a c such that for all y ∈ [−π/4, π/4], E iy [Θ n ; Λ n ∩ Γ n ] ≥ cq n .
From this and the work of the previous section, we can conclude q n+m ≥ cq n q m .
Hence from standard subadditivity arguments we get the following.

Lemma 13
There exist c 8 , c 14 such that for all n,

Coupling
Let Y be the set of continuous γ : In other words γ ∈ Y if and only if Gγ ∈ X where We will apply results about X from Section 2 to Y using the natural identification γ ↔ Gγ.
Let B t be a complex valued Brownian motion as before with We will start the Brownian motion with "initial condition" γ ∈ Y. More precisely, set B 0 = γ(0) and if n ≥ 0 let Note that γ n ∈ Y if and only if the event J n = {B[0, T n ] ⊂ J } holds.
For any γ ∈ Y, n ≥ 0, let The conditioning is with respect to the measure discussed in Section 2.3 (where it is done for X ), and we recall that w ≻ z means ℑ(w) > ℑ(z). Note that if −∞ < r < s < u ≤ 0, then . In other words, − log Y + r,s and − log Y − r,s are positive additive functionals. For 0 ≤ m < n < ∞, we define the random variables are positive additive functionals (that can take on the value +∞). We let where we again use the convention 0 0 = 0 if either λ 1 or λ 2 is 0. We also write Θ n = Θ n (λ 1 , λ 2 ) = Θ 0,n .
We write P γ , E γ to denote probabilities and expectations with initial condition γ. For n ≥ 0, we let F n be the σ-algebra generated by γ (in case γ is chosen randomly) and In other words, F n is the σ-algebra generated by the random function γ n . If n ≥ 0, γ ∈ Y, let We collect some of the results of the previous sections using the notation of this section.

Lemma 14
There exists a c such that for all γ ∈ Y, n ≥ 0, Proof. The first inequality is an immediate corollary of Lemma 13. The second follows from Lemma 9 and Lemma 12.
If n ≥ 0, let Ŷ n be the collection of γ ∈ Y such that and if −n ≤ s ≤ r − (n/5) ≤ −n/5, The c 16 , β 2 are the constants from Lemma 15 (which we have now fixed) and 1/100 is an arbitrarily chosen small number. The condition (21) can also be written
Lemma 17 There exist c 18 , β 4 such that the following is true. Suppose n ≥ 1 and γ 1 , γ 2 ∈Ŷ n with Then for all m ≥ 0, Proof. As in the proof of Lemma 16 we have for any γ ∈ Y, Hence, for j = 1, 2, But on the event it follows from Lemma 6 that We now fix N ≥ 3 and consider the following discrete time, time inhomogeneous, Markov chain, X = γ. Start a Brownian motion with initial condition γ, and let it run until T 3 giving γ 3 as described above. The weighting on γ 3 is given by the following density with respect to the Wiener measure of the Brownian motion: For fixed γ this gives a probability density since The distribution of X = γ is the same as the distribution of X Keeping N fixed and assuming 3k ≤ N , let us write just γ 0 , γ 3 , . . . , γ 3k for X 3k . We will call (γ 1 t , γ 2 t ), t = 0, 3, . . . , 3k, a (k, N )-coupling if for j = 1, 2, γ j 0 , . . . , γ j 3k , has the distribution of this Markov chain with initial condition γ j 0 . Let us describe one coupling. Suppose γ 1 , γ 2 ∈ X . Let B 3 , B 4 be independent Brownian motions (with stopping times T 3 r , T 4 r ) started with initial conditions γ 1 , γ 2 , respectively. Use these Brownian motions to produce (γ 1 3/2 , γ 2 3/2 ). Let U j 3/2 , j = 1, 2, be the event By the separation lemma, Lemma 9, we can see that Consider the event Then, the measure on given by can be seen to be greater than a constant times Lebesgue measure. Hence we can couple the paths B 3 , B 4 to produce a coupling (γ 1 2 , γ 2 2 ) with We now use the same Brownian motion, say B 3 , to extend the paths beyond γ j 2 . It is not difficult to see we can extend these rather arbitrarily and still get sets of positive probability. From this we get the following. The function δ m in the lemma might go to zero very quickly, but this will not be a problem.
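The effect of such a coupling can be isolated in a toy setting: if at each step two copies of a chain can be forced to agree with probability at least δ (a Doeblin minorization), after which they move together, then the variation distance between their laws decays geometrically. This is the mechanism behind the exponential rates O(e −βn ) above. A self-contained sketch (our illustration, with an arbitrary 3-state chain):

```python
# An arbitrary 3-state transition matrix. Each column has all entries
# >= 0.2, so P = delta*m + (1 - delta)*Q with delta = 0.6 and m a fixed
# probability measure: two copies of the chain can be coupled so that
# they coalesce with probability delta at every step.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]

delta = sum(min(P[i][j] for i in range(3)) for j in range(3))

def step(dist):
    return [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

def tv(p, q):
    # variation norm sup_E |P1(E) - P2(E)| = half the l1 distance
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

mu, nu = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]  # two initial conditions
for n in range(1, 10):
    mu, nu = step(mu), step(nu)
    assert tv(mu, nu) <= (1 - delta) ** n  # coupling bound (1 - delta)^n
print(tv(mu, nu))  # exponentially small
```

In the paper the state space is the path space Y rather than a finite set, and the separation lemma plays the role of the minorization: separated configurations can be recoupled with a probability bounded below.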
In particular, we get that there exist c, β such that for all n ≥ 3 and all γ 1 , γ 2 ∈ Y, From this we can easily deduce the following.

Lemma 21
For every γ ∈ Y, the limit exists. Moreover, there exist c 23 , β 7 such that for all n ≥ 2, In other words,

Invariant measure on Y
If n > 0, we let Y n be the set of continuous Here, b = b γ can be any positive real number. We let ρ be the Skorohod metric on Y n such that ρ(γ 1 , γ 2 ) < ε if there exists an increasing homeomorphism φ : Measures on Y n will be with respect to the corresponding Borel σ-algebra. Let (It might be more precise to write Φ m,n rather than Φ m , but there should be no confusion.) If m < n and ν n is a measure on Y n , then we write Φ m ν n for the measure on Y m induced by Φ m . A collection of measures {ν n : n > 0} is called consistent if each ν n is a measure on Y n and Φ m ν n = ν m , 0 < m < n < ∞.
A measure ν on Y is a consistent collection of measures {ν n } (where ν n = Φ n ν).
If ν is any probability measure on Y and n ≥ 3, let G n ν be the probability measure on Y obtained by the distribution of the Markov chain X By letting s → ∞, we see that there is a limiting measure µ n such that if ν is any probability measure on Y and s ≥ 3n, ‖Φ n (G s ν) − µ n ‖ ≤ ce −βs .
It is easy to check that the {µ n } form a consistent collection of measures and hence give a measure µ on Y. Also, if n ≥ 0, E µ [Θ n ] = e −ξn . We summarize this in a proposition.

Proposition 22
There exists a probability measure µ = µ(λ 1 , λ 2 ) on Y and c, β such that if ν is any probability measure on Y and 0 ≤ n ≤ s/3 < ∞, A particular application of the proposition is the following. Suppose n ≥ 1, k ∈ {1, 2, 3}, and Φ + n−1,n is as defined in Section 4.1. Then (This is actually true for all k = 1, 2, 3, . . ., if we allow the implicit constant to depend on k. Since we will only need the result for k = 1, 2, 3, it is easier just to state it for these k and have no k dependence in the constant.) Let μ̃ be the measure on Y with dμ̃ = R dµ.
Then μ̃ is the invariant measure for the time homogeneous Markov chain X n = X ∞ n with state space Y defined for n > 0 by saying that the density of X n with respect to Wiener measure is For integer k > 0, we define a = a(λ 1 , λ 2 ) and b k = b k (λ 1 , λ 2 ) by
The formula for v is not very useful for determining that v > 0. We establish this in the remaining subsection.

Strict concavity
Let Y ∞ be the set of continuous γ : (−∞, ∞) → J , As before, if r ∈ R, we set σ r = σ r (γ) = inf{t : ℜ[γ(t)] = r}, and we let γ r be the element of Y, The measure μ̃ gives a probability measure P on Y ∞ in a natural way. To be more precise, let E denote expectations with respect to P. Then if f is a function on Y, n ≥ 0, We write V for variance with respect to this measure. Let Y + r,s , Y − r,s be the functionals as in Section 4.1, which are now defined for −∞ < r < s < ∞. For integer n, let ψ + n = − log Y + n−1,n . Note that · · · , ψ + −1 , ψ + 0 , ψ + 1 , · · · , is a stationary sequence of random variables in (Y ∞ , P). If n is a positive integer, let Ψ + n = − log Y + 0,n , so that Ψ + n = ψ + 1 + · · · + ψ + n .
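For a stationary sequence with exponentially small correlations, the variance rate v = lim V(Ψ + n )/n exists because the series v = V(ψ + 0 ) + 2 Σ k≥1 Cov(ψ + 0 , ψ + k ) converges absolutely. This can be checked deterministically on a surrogate with exactly geometric correlations (our illustration; the covariance ρ k stands in for the exponentially small correlations of the actual sequence ψ + n ):

```python
rho = 0.6     # geometric correlation decay
sigma2 = 1.0  # variance of each psi_k (normalized)
# Cov(psi_j, psi_k) = sigma2 * rho^{|j-k|}

def var_psum(n):
    # V(psi_1 + ... + psi_n) computed from the covariance function
    return sigma2 * (n + 2 * sum((n - k) * rho ** k for k in range(1, n)))

# limiting variance rate: v = V(psi_0) + 2 * sum_{k>=1} Cov(psi_0, psi_k)
v = sigma2 * (1 + rho) / (1 - rho)
print(var_psum(5000) / 5000, v)  # nearly equal
```

The remaining and separate issue, addressed next, is that this limit v could in principle vanish; the strict concavity statement is exactly the claim that it does not.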
If n ≥ 0, let G n be the σ-algebra of Y ∞ generated by γ n . (Note that G n is really the same as the F n of the previous section, but we use the new notation to emphasize that we are considering the measure P on Y ∞ .) Also, it is straightforward to show that there is a c such that E[ψ + 2 + · · · + ψ + n | G 1 ] ≥ an − c. In particular, there is a c 24 such that if m < n are positive integers, We end by proving the strict concavity result, i.e., that v > 0. Assume on the contrary that v = 0, so that V[Ψ + n ] ≤ C for some C < ∞ and all n ≥ 0. But we know that the left hand side is bounded by C. By choosing M sufficiently large we get a contradiction. This completes the proof.