Interacting partially directed self avoiding walk: scaling limits

This paper is dedicated to the investigation of a $1+1$ dimensional self-interacting and partially directed self-avoiding walk, usually referred to by the acronym IPDSAW and introduced in \cite{ZL68} by Zwanzig and Lauritzen to study the collapse transition of a homopolymer dipped in a poor solvent. In \cite{POBG93}, physicists displayed numerical results concerning the typical growth rate of some geometric features of the path as its length $L$ diverges. From this perspective, the quantities of interest are the projections of the path onto the horizontal axis (also called the horizontal extension) and onto the vertical axis, for which it is useful to define the lower and the upper envelopes of the path. With the help of a new random walk representation, we proved in \cite{CNGP13} that the path grows horizontally like $\sqrt{L}$ in its collapsed regime and that, once rescaled by $\sqrt{L}$ vertically and horizontally, its upper and lower envelopes converge to some deterministic Wulff shapes. In the present paper, we bring the geometric investigation of the path several steps further. In the extended regime, we prove a law of large numbers for the horizontal extension of the polymer rescaled by its total length $L$, we provide precise asymptotics of the partition function and we show that its lower and upper envelopes, once rescaled in time by $L$ and in space by $\sqrt{L}$, converge to the same Brownian motion. At criticality, we identify the limiting distribution of the horizontal extension rescaled by $L^{2/3}$ and we show that the excess partition function decays as $L^{2/3}$ with an explicit prefactor. In the collapsed regime, we identify the joint limiting distribution of the fluctuations of the upper and lower envelopes around their associated limiting Wulff shapes, rescaled in time by $\sqrt{L}$ and in space by $L^{1/4}$.


Introduction
In this paper we consider a model of statistical mechanics introduced in [32] by Zwanzig and Lauritzen and referred to as the interacting partially directed self-avoiding walk (IPDSAW). The model is a $(1+1)$-dimensional partially directed version of the interacting self-avoiding walk (ISAW) introduced by Flory in [19] as a model for a homopolymer in a poor solvent. The aim of our paper is to pursue the investigation of the IPDSAW initiated in [28] and [10], and in particular to display the infinite volume limit of some features of the model when the size of the system diverges, for each of the three regimes: collapsed, critical and extended. The first object to be considered is the horizontal extension of the path. Then, we will consider the whole path, properly rescaled, and look at its infinite volume limit in the extended phase and in the collapsed phase. Let us point out that numerical simulations are difficult [4] and have not led to theoretical results about the path properties of the polymer in the three regimes that we establish in this paper. Note that with such configurations, the modulus of a given stretch corresponds to the number of monomers constituting this stretch (and the sign gives the direction, upwards or downwards). Moreover, any two consecutive vertical stretches are separated by a monomer placed horizontally, and this explains why $\sum_{n=1}^{N} |l_n|$ must equal $L-N$ in order for $l = (l_i)_{i=1}^{N}$ to be associated with a polymer made of $L$ monomers (see Fig. 2). The repulsion between the monomers constituting the polymer and the solvent around them is taken into account in the Hamiltonian associated with each path $l \in \Omega_L$ by rewarding energetically those pairs of consecutive stretches with opposite directions, i.e., One can already note that large Hamiltonians will be assigned to trajectories made of few but long vertical stretches with alternating signs. Such paths will be referred to as collapsed configurations.
With the Hamiltonian in hand we can define the polymer measure as Both equalities in (1.6) are straightforward consequences of the fact that $(\log Z_{L,\beta})_{L=1}^{\infty}$ is a super-additive sequence. A phase transition of such a system is associated with a loss of analyticity of $\beta \mapsto f(\beta)$ at some critical point. In [28], we displayed an alternative way of computing the partition function that turns out to simplify the investigation of the phase diagram. It is indeed possible to exhibit an auxiliary random walk $V = (V_i)_{i=0}^{\infty}$ with geometric increments and with law $P_\beta$ such that with $c_\beta := \frac{1+e^{-\beta/2}}{1-e^{-\beta/2}}$, which is simply the normalizing constant of $P_\beta$. In Section 3.1, we will recall how to exhibit such a random walk representation of $Z_{L,\beta}$, but let us mention already that, for $N \in \{1, \ldots, L\}$, the contribution to the partition function of those trajectories in $\mathcal{L}_{N,L}$ is given by the term indexed by $N$ in the sum of (1.7).
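To make the definitions above concrete, here is a brute-force sketch of the partition function for very small $L$: it sums $e^{\beta H(l)}$ over all configurations of $N$ vertical stretches with $\sum_n |l_n| = L - N$, using the rewriting $x \wedge y = (|x|+|y|-|x+y|)/2$ of the interaction recalled in Section 3.1. The boundary conventions (in particular the treatment of zero-length stretches) are our guesses and may differ in detail from the paper's $\Omega_L$, so this is illustrative only.

```python
import math

def wedge(x, y):
    # the paper's rewriting of the interaction: x ∧ y = (|x| + |y| - |x + y|) / 2
    return (abs(x) + abs(y) - abs(x + y)) / 2

def stretch_configs(N, total):
    """Yield all (l_1, ..., l_N) with integer entries and sum of |l_i| = total."""
    if N == 1:
        yield from ((v,) for v in ({0} if total == 0 else {total, -total}))
        return
    for a in range(total + 1):
        for h in ({0} if a == 0 else {a, -a}):
            for rest in stretch_configs(N - 1, total - a):
                yield (h,) + rest

def partition_function(L, beta):
    """Brute-force Z_{L,beta}: sum over N horizontal steps and stretch configurations."""
    Z = 0.0
    for N in range(1, L + 1):
        for l in stretch_configs(N, L - N):
            H = sum(wedge(l[n], l[n + 1]) for n in range(N - 1))
            Z += math.exp(beta * H)
    return Z
```

Since the Hamiltonian rewards opposite-direction stretches, $Z_{L,\beta}$ is increasing in $\beta$, which the enumeration reproduces.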
A useful feature of the random walk representation is that it allows us to read the phase diagram off (1.7) directly. To that purpose, we note that (1.7) makes it natural to define the excess free energy as $\tilde f(\beta) := f(\beta) - \beta$, so that the exponential growth rate of the sum in the r.h.s. of (1.7) equals $\tilde f(\beta)$. Then, $\beta \mapsto \Gamma_\beta$ being a decreasing bijection from $(0,\infty)$ to $(0,\infty)$, we denote by $\beta_c$ the unique solution of $\Gamma_\beta = 1$. For $\beta \geq \beta_c$ the inequality $\Gamma_\beta \leq 1$ immediately yields that $\tilde f(\beta) = 0$, since those terms indexed by $N \sim \sqrt{L}$ in (1.7) decay subexponentially. As a consequence, the trajectories dominating $Z_{L,\beta}$ have a small horizontal extension, i.e., $N = o(L)$. When $\beta < \beta_c$, in turn, $\Gamma_\beta > 1$, and since for $c \in (0,1]$ the quantity $P_\beta(\mathcal{V}_{cL,(1-c)L})$ decays exponentially fast with a rate that vanishes as $c \to 0$, we can claim that the dominating trajectories in $Z_{L,\beta}$ have a horizontal extension of order $L$, and moreover that $\tilde f(\beta) > 0$. Thus, the free energy is non-analytic at $\beta_c$ and we can partition $[0,\infty)$ into a collapsed phase denoted by $\mathcal{C}$ and an extended phase denoted by $\mathcal{E}$, i.e.,
$\mathcal{C} := \{\beta : \tilde f(\beta) = 0\} = \{\beta : \beta \geq \beta_c\}$ (1.9) and $\mathcal{E} := \{\beta : \tilde f(\beta) > 0\} = \{\beta : \beta < \beta_c\}$. (1.10)

We shall see that, in fact, there are three regimes: collapsed ($\beta > \beta_c$), critical ($\beta = \beta_c$) and extended ($\beta < \beta_c$), in which the asymptotics of the partition function and the path properties are radically different.

Remark 1.1. Observe that the main difference between the IPDSAW and wetting/copolymer models comes from the fact that the saturated phase (where the free energy is trivial) corresponds to a maximization of energy for the IPDSAW and, conversely, to a maximization of entropy for wetting/copolymer models. We refer to Giacomin [20] or den Hollander [12] for a review of wetting/copolymer models.
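The dichotomy just described can be summarized in one line. This is only a heuristic sketch: following the statements above, we write the excess free energy as the exponential growth rate of the sum in (1.7), and we denote its $N$-th term schematically by $\Gamma_\beta^N\, P_\beta(\mathcal V_{N,L-N})$ (an assumption on the exact form of (1.7), which is not reproduced in this text).

```latex
% Heuristic: \tilde f(\beta) as the growth rate of the sum in (1.7).
\tilde f(\beta)
  \;=\; \lim_{L\to\infty} \frac{1}{L}\,
        \log \sum_{N=1}^{L} \Gamma_\beta^{\,N}\, P_\beta\bigl(\mathcal V_{N,L-N}\bigr)
  \quad\begin{cases}
     = 0 & \text{if } \Gamma_\beta \le 1 \quad (\beta \ge \beta_c),\\[2pt]
     > 0 & \text{if } \Gamma_\beta > 1 \quad (\beta < \beta_c).
  \end{cases}
```

Indeed, in the first case every summand is at most a probability, so the sum grows subexponentially; in the second case, taking $N = cL$ with $c$ small makes $\Gamma_\beta^{cL}$ beat the exponentially small probability, whose rate vanishes as $c \to 0$.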

Main results
We mentioned in the preceding section that the excess free energy $\tilde f(\beta)$ is the exponential growth rate of the sum in the r.h.s. of (1.7). For this reason we set and we recall that the definition of the polymer measure in (1.4) is left unchanged if we replace the denominator by $\tilde Z_{L,\beta}$ and subtract $L\beta$ from the Hamiltonian.
2.1. Scaling limit of the horizontal extension. Displaying sharp asymptotic estimates of the partition function as the system size diverges is a major issue in statistical mechanics.
Computing the probability mass of a certain subset of trajectories under the polymer measure indeed requires a good control on the denominator in (1.4). For the extended and the critical regimes, we display in Theorem 2.1 below an asymptotic equivalent of the partition function, allowing us, e.g., to exhibit the polynomial decay rate of the partition function at the critical point. For the collapsed regime, in turn, we recall the bounds on $Z_{L,\beta}$ that had been obtained in [10], allowing us to identify its sub-exponential decay rate. Note that in Remark 2.3 below, we provide some complements concerning Theorems 2.1 and 2.2, among which the exact value of some prefactors when an expression is available.

For each $l \in \Omega_L$, the variable $N_l$ denotes the horizontal extension of $l$, i.e., the integer $N \in \{1, \ldots, L\}$ such that $l \in \mathcal{L}_{N,L}$. With Theorem 2.2 below, we provide the scaling limit of the horizontal extension of a typical path $l$ sampled from $P_{L,\beta}$ as $L \to \infty$. As for Theorem 2.1, and for the sake of completeness, we integrate the collapsed regime into the theorem although this regime was dealt with in [10, Theorem D]. Here $g_a = \inf\{t > 0 : \int_0^t |B_s|\, ds = a\}$ is the continuous inverse of the geometric Brownian area, and we consider $g_1$ under the law of the Brownian motion conditioned on $B_{g_1} = 0$.
(1) For the extended regime, in Section 4, we will decompose each path into a succession of patterns (sub-pieces) and we will associate with our model an underlying regenerative process $(\sigma_i, \nu_i, y_i)_{i \in \mathbb{N}}$ of law $\mathbb{P}_\beta$ in such a way that $\sigma_i$ (resp. $\nu_i$, resp. $y_i$) plays the role of the number of monomers constituting the $i$-th pattern (resp. the horizontal extension of the $i$-th pattern, resp. the vertical displacement of the $i$-th pattern). Then, the constant $c$ in Theorem 2.1 (1) and the limiting rescaled horizontal extension in Theorem 2.2 (1) satisfy and $e(\beta) = \frac{E_\beta(\nu_1)}{E_\beta(\sigma_1)}$.
(2) For the critical regime $\beta = \beta_c$, the appearance of the distribution of $g_1$ is explained at the end of Section 6.
(3) For the collapsed regime, by inspecting closely (4.29)-(4.35) of [10], we see that this result can easily be generalized to a large deviation principle of speed $\sqrt{L}$ for the sequence of random variables $(N_l/\sqrt{L})_{L \in \mathbb{N}}$ with the good rate function $a \in (0,\infty) \mapsto G(a(\beta)) - G(a)$, which admits a unique minimizer $a(\beta)$. This large deviation result holds under $P^o_{L,\beta}$, the polymer measure restricted to have only one bead. A rigorous definition of a bead is recalled in the paragraph of Section 2.2 that is dedicated to the collapsed phase. Note, however, that we are not able at this stage to prove the same LDP under $P_{L,\beta}$.
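The random time $g_a$, i.e., the first time the area swept under $|B|$ reaches level $a$, is easy to approximate on a grid. The following Monte Carlo sketch is our own illustration (step size, horizon and the Euler discretization are arbitrary choices, not taken from the paper):

```python
import math
import random

def sample_g(a=1.0, dt=1e-3, seed=0, t_max=50.0):
    """Grid approximation of g_a = inf{t > 0 : \\int_0^t |B_s| ds = a},
    the continuous inverse of the Brownian area, for one Brownian path."""
    rng = random.Random(seed)
    b, area, t = 0.0, 0.0, 0.0
    sd = math.sqrt(dt)  # standard deviation of a Brownian increment over dt
    while area < a and t < t_max:
        b += rng.gauss(0.0, sd)   # Euler step of the Brownian motion
        area += abs(b) * dt       # left-point rule for the area under |B|
        t += dt
    return t
```

On a fixed path (fixed seed), $a \mapsto g_a$ is non-decreasing, since the area process is non-decreasing in $t$.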
The good rate function $G(a(\beta)) - G(a)$ was carefully investigated in [10, Remark 5] and an exact expression was provided, i.e., where $\mathcal{L}_\Lambda$ and $\tilde H$ are defined in (3.11) and (3.16). Note that, for $\beta > \beta_c$, the function $a \mapsto G(a)$ is $C^\infty$, strictly concave, strictly negative, and $a(\beta)$ is the unique zero of its derivative on $(0,\infty)$.

2.2. Scaling limit of the vertical extension. The horizontal extension of $l \in \Omega_L$ can be viewed as the projection of $l$ onto the horizontal axis. Thus, after providing the scaling limit of $N_l$ in each of the three phases, a natural issue consists in displaying the scaling limit of the projection of the polymer onto the vertical axis. To be more specific, we will try to exhibit the scaling limit of the whole path rescaled horizontally by its horizontal extension $N_l$ and vertically by some ad hoc power of $N_l$.
To that aim, the fact that each trajectory $l \in \Omega_L$ is made of a succession of vertical stretches makes it convenient to give a representation of the trajectory in terms of its upper and lower envelopes. Thus, we pick $l \in \mathcal{L}_{N,L}$ and we let $\mathcal{E}^+_l = (\mathcal{E}^+_{l,i})_{i=0}^{N+1}$ and $\mathcal{E}^-_l = (\mathcal{E}^-_{l,i})_{i=0}^{N+1}$ be the upper and the lower envelopes of $l$, i.e., the $(1+N)$-step paths that link the top and the bottom of each stretch consecutively. Thus, and $\mathcal{E}^+_{l,N+1} = \mathcal{E}^-_{l,N+1} = l_1 + \cdots + l_N$ (see Fig. 3). Note that the area in between these two envelopes is completely filled by the path and therefore, we will focus on the scaling limits of $\mathcal{E}^+_l$ and $\mathcal{E}^-_l$. At this stage, we define $\tilde Y : [0,1] \to \mathbb{R}$ to be the time-space rescaled càdlàg process of a given $(Y_i)_{i=0}^{N+1}$, and for each $l \in \mathcal{L}_{N,L}$ we let $\tilde{\mathcal{E}}^+_l, \tilde{\mathcal{E}}^-_l$ be the time-space rescaled processes associated with the upper envelope $\mathcal{E}^+_l$ and with the lower envelope $\mathcal{E}^-_l$, respectively. In this paper we will focus on the infinite volume limit of the whole path in the extended phase ($\beta < \beta_c$) and inside the collapsed phase ($\beta > \beta_c$). Concerning the critical regime ($\beta = \beta_c$), this limit will be discussed as an open problem in Section 2.3 below.
The extended phase ($\beta < \beta_c$). When $\beta < \beta_c$ and under $P_{L,\beta}$, we have seen that a typical path $l$ adopts an extended configuration, characterized by a number of horizontal steps of order $L$. We let $Q_{L,\beta}$ be the law of $(\tilde{\mathcal{E}}^+_l(s), \tilde{\mathcal{E}}^-_l(s))_{s \in [0,1]}$ under $P_{L,\beta}$. We let also $(B_s)_{s \in [0,1]}$ be a standard Brownian motion. (2.8)
Remark 2.5. The constant $\sigma_\beta$ takes value $E_\beta(y_1^2)/E_\beta(\nu_1)$, where $y_1$ (resp. $\nu_1$) corresponds to the vertical (resp. horizontal) displacement of the path on one of the patterns mentioned in Remark 2.3 (1). These objects are defined rigorously in Section 4 below.
Although the upper and the lower envelopes of each trajectory $l \in \Omega_L$ seem to be the appropriate objects to consider when it comes to describing the geometry of the whole path, it turns out that it is simpler to prove Theorem 2.4 by recovering the envelopes from two auxiliary processes, i.e., the middle line $M_l$ and the profile $|l|$. Thus, we associate with each $l \in \mathcal{L}_{N,L}$ the path $|l| = (|l_i|)_{i=0}^{N+1}$ (with $l_{N+1} = 0$ by convention) and the path $M_l = (M_{l,i})_{i=0}^{N+1}$ that links the middles of each stretch consecutively, i.e., $M_{l,0} = 0$ and $M_{l,N+1} = l_1 + \cdots + l_N$ (see Fig. 3). With the help of (2.7), we let $\tilde M_l$ and $\tilde l$ be the time-space rescaled processes associated with $M_l$ and $l$, and one can easily check that As a consequence, proving Theorem 2.4 is equivalent to proving that where $\tilde Q_{L,\beta}$ is the law of $(\sqrt{N_l}\, \tilde M_l(s), |\tilde l(s)|)_{s \in [0,1]}$ with $l$ sampled from $P_{L,\beta}$.
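The elided identity relating the envelopes to the middle line and the profile is presumably the pointwise relation $\mathcal{E}^{\pm} = M \pm |l|/2$ (the top of a stretch is its midpoint plus half its height, the bottom is the midpoint minus half its height). The following sketch checks this on an arbitrary stretch sequence; the indexing and boundary conventions are simplified with respect to the paper's $(1+N)$-step definition:

```python
def envelopes(l):
    """Per-stretch tops (upper envelope), bottoms (lower envelope),
    midpoints (middle line) and heights (profile) of a stretch sequence l."""
    h = 0  # running endpoint of the path before the current stretch
    E_plus, E_minus, M, prof = [], [], [], []
    for li in l:
        E_plus.append(max(h, h + li))   # top of the stretch
        E_minus.append(min(h, h + li))  # bottom of the stretch
        M.append(h + li / 2)            # middle of the stretch
        prof.append(abs(li))            # height of the stretch
        h += li
    return E_plus, E_minus, M, prof
```

With this convention the two envelopes are recovered as $M + |l|/2$ and $M - |l|/2$, and their difference is exactly the profile, which is why it suffices to control $(M_l, |l|)$.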
The collapsed phase ($\beta > \beta_c$). The collapsed regime was studied in [10], where a particular decomposition of the path into beads was introduced. A bead is a succession of non-zero vertical stretches with alternating signs, which ends when two consecutive stretches have the same sign (or when a stretch is null). Such a decomposition is meaningful geometrically and we proved in [10, Theorem C] that there is a unique macroscopic bead in the collapsed regime and that the number of monomers outside this bead is at most of order $(\log L)^4$. The next step in the geometric description of the path consisted in determining the limiting shapes of the envelopes of this unique bead. This has been achieved in [10], where we proved that the rescaled upper envelope (respectively lower envelope) converges in probability towards a deterministic Wulff shape $\gamma^*_\beta$ (resp. $-\gamma^*_\beta$) defined as follows, where $\mathcal{L}$ is defined in (3.8) and $\tilde h_0$ in (3.16). Thus, we obtained
Theorem 2.6. ([10], Theorem E) For $\beta > \beta_c$ and $\varepsilon > 0$,
This theorem has also been stated as a shape theorem in [10]. The natural question that comes to mind is: are we able to identify the fluctuations around this shape? For technical reasons that will be discussed in Remark 2.8 below, we are not able to identify such a limiting distribution. However, we can prove a closely related convergence result by working with a particular mixture of the measures $P_{L',\beta}$ for $L' \in K_L := (L + [-\varepsilon(L), \varepsilon(L)]) \cap \mathbb{N}$ with $\varepsilon(L) := (\log L)^6$. Thus, we define the extended set of trajectories $\tilde\Omega_L = \cup_{L' \in K_L} \Omega_{L'}$, and we let $\tilde P_{L,\beta}$ be a mixture of those $P_{L',\beta}$, $L' \in K_L$, defined by where we recall (2.1). In other words, $\tilde P_{L,\beta}$ can be defined as We denote by $\tilde Q_{L,\beta}$ the law of the fluctuations of the envelopes around their limiting shapes, that is, the law of the random processes where $l$ is sampled from $\tilde P_{L,\beta}$, as $L \to \infty$. We obtain the following limit.
Theorem 2.7 (Fluctuations of the convex envelopes around the Wulff shape). For $\beta > \beta_c$, and $\tilde H = \tilde H(q_\beta, 0)$ with $q_\beta = \frac{1}{a(\beta)^2}$, we have the convergence in distribution where the second process is independent of $\xi_{\tilde H}$ and has the law of $\xi_{\tilde H}$ conditionally on $\xi_{\tilde H}(1) = \int_0^1 \xi_{\tilde H}(s)\, ds = 0$. From Theorem 2.7 we deduce that the fluctuations of both envelopes around their limiting shapes are of order $L^{1/4}$.
Remark 2.8. The reason why we prove Theorem 2.7 under the mixture $\tilde P_{L,\beta}$ rather than $P_{L,\beta}$ is the following. We would need to establish a local limit theorem for the associated random walk of law $P_\beta$ conditioned on having a large geometric area $G_N(V)$, and we are unable to do so. Fortunately, we know how to condition the random walk on having a large algebraic area $A_N(V)$, and under the mixture $\tilde P_{L,\beta}$ we are able to compare quantitatively these two conditionings (see Step 2 of the proof of Proposition 5.1 in Section 5.4). In the construction of the mixture law $\tilde P_{L,\beta}$ (cf. (2.15)), the choice of the prefactors of those $P_{L',\beta}$ with $L' \in K_L$ may appear artificial. However, it is conjectured (see e.g. [21, Section 8]) that our inequalities in Theorem 2.1 (3) can be improved into so that the ratio of any two prefactors would converge to 1 as $L \to \infty$, uniformly in the choice of the two indices in $K_L$. In other words, $\tilde P_{L,\beta}$ should, to first approximation, be the uniform mixture of those $\{P_{L',\beta},\; L' \in K_L\}$.
Remark 2.9. As for the extended phase with Theorem 2.4, it will be easier to work with the middle line $\tilde M_l$ and with the profile $|\tilde l|$ defined in (2.9)-(2.10). As a consequence, proving Theorem 2.7 is equivalent to proving that with $l$ sampled from $\tilde P_{L,\beta}$. The convergence of $\tilde M_l$ in (2.18) answers an open question raised in [4, Fig. 14 and Table II], where the process is referred to as the center-of-mass walk.

2.3. Discussion and open problems. Giving a path characterization of the phase transition is an important issue for polymer models in statistical mechanics. From that point of view, identifying in each regime the limiting distribution of the whole path, rescaled in time by its total length $N$ and in space by $\sqrt{N}$, is challenging and meaningful. This was studied in [14] and [9] for $(1+1)$-dimensional wetting models, which deal with an $N$-step random walk (with continuous or discrete increments) conditioned to remain non-negative and receiving an energetic reward $\varepsilon$ every time it touches the $x$-axis (which plays the role of a hard wall). Such models exhibit a pinning transition at some $\varepsilon_c$ such that when $\varepsilon > \varepsilon_c$ the polymer is localized, meaning that the path typically remains at distance $O(1)$ from the wall. Thus, the rescaled path converges to the null function. When $\varepsilon < \varepsilon_c$, in turn, the polymer is delocalized and visits the origin only $O(1)$ times. Then, the rescaled path converges towards a normalized Brownian excursion if it is constrained to come back to the origin at its right extremity and towards a Brownian meander otherwise. Finally, the critical regime $\varepsilon = \varepsilon_c$ is characterized by a number of contacts between the polymer and the $x$-axis that grows as $\sqrt{N}$. The rescaled path converges to a reflected Brownian motion when there is no constraint on its right extremity and towards a reflected Brownian bridge otherwise. We note finally that similar results have been obtained in [31] when the pinning of the path occurs at a layer of finite width on top of the hard wall. Before comparing the infinite volume limit description of the wetting transition with that of the collapse transition, let us insist on the fact that the nature of these two phase transitions is fundamentally different, and this can be explained in a few words.
For the wetting model, the saturated phase, for which the free energy is trivial (equal to 0), corresponds to the polymer being fully delocalized off the interface, which means that entropy completely takes over in the energy-entropy competition that rules such systems. For the IPDSAW, in turn, the saturated phase is characterized by a domination of trajectories that maximize the energy. In other words, we could say that both models display a saturated phase, which in the pinning case is associated with a maximization of the entropy, whereas it is associated with a maximization of the energy for the polymer collapse. As a consequence, only the extended regime of the IPDSAW and the localized regime of the wetting model may be compared. In both cases, one can indeed decompose the trajectory into simple patterns that do not interact with each other and are typically of finite length, i.e., the excursions off the $x$-axis for the wetting model and the pieces of path in between two consecutive vertical stretches of length 0 for the IPDSAW: these patterns can be seen as independent building blocks of the path and can be associated with a positive recurrent renewal. However, the comparison cannot be brought any further, since even in this regime the envelopes of the IPDSAW display a Brownian limit, whereas the limiting object is the null function for the wetting model. Due to the convergence of both envelopes towards deterministic Wulff shapes, the collapsed IPDSAW may be related to other models in statistical mechanics that are known to undergo convergence of interfaces towards deterministic Wulff shapes. This is the case, for instance, when considering a $2$-dimensional bond percolation model in its percolation regime, conditioned on the existence of an open curve of the dual graph around the origin with a prescribed and large area enclosed inside the curve (see [1]).
A similar interface appears when considering the $2$-dimensional Ising model in a big square box of size $N$ at low temperature, with no external field and $-$ boundary conditions, when conditioning the total magnetization to deviate from its average (i.e., $-m^* N^2$ with $m^* > 0$) by a factor $a_N \sim N^{4/3+\delta}$ ($\delta > 0$). It has been proven in [17], [22], [23] and [24] that such a deviation is typically due to a unique large droplet of $+$, whose boundary converges to a deterministic Wulff shape once rescaled by $\sqrt{a_N}$.
However, the closest relatives to the collapsed IPDSAW are probably the $1$-dimensional SOS model with a prescribed large area below the interface (see [15]) and the $2$-dimensional Ising interface separating the $+$ and $-$ phases in a vertically infinite layer of finite width (again with a large area underneath the interface, see [16]). For both models, in size $N \in \mathbb{N}$, the law of the interface can be related to the law of an underlying random walk $V$ conditioned on describing an abnormally large algebraic area ($qN^2$ with $q > 0$). As a consequence, once rescaled in time and space by $N$, the interface converges in probability towards a Wulff shape, whose formula depends on $q$ and on the random walk law. The fluctuations of the interface around this deterministic shape are of order $\sqrt{N}$ and their limiting distribution is identified in [15, Theorem 2.1] for the SOS model and in [16, Theorem 3.2] for the Ising interface at sufficiently low temperature. The proofs in [15] and [16] use an ad hoc tilting of the random walk law (described in Section 3.2), so that the large area becomes typical under the tilted law. In this framework, a local limit theorem can be derived for any finite dimensional distribution of $V$ under the tilted law. In the present paper, our system also enjoys a random walk representation (see Section 3.1) and we will use the "large area" tilting of the random walk law as well, to prove Theorem 3.1. However, our model displays three particular features that prevent us from applying the results of [15] straightforwardly. First, the conditioning on the auxiliary random walk $V$ is, in our case, related to the geometric area below the path rather than to the algebraic area (see Remark 2.8). Second, the horizontal extension of an IPDSAW path fluctuates, which is not the case for the SOS model.
Thus, the ratio $q$ of the area below the path divided by the square of its horizontal extension fluctuates as well, which forces us to display some uniformity in $q$ for every local limit theorem we state in Section 5. Third and last, the fact that an IPDSAW path is characterized by two envelopes makes it compulsory to study simultaneously the fluctuations of $V$ around the Wulff shape and the fluctuations of $M$ around the $x$-axis (recall (2.18)). We recall that the increments of $M$ are obtained by switching the sign of every second increment of $V$. As a consequence, we need to adapt, in Section 3.2, the proofs of the finite dimensional convergence and of the tightness displayed in [15]. Let us conclude with the critical regime of the IPDSAW, which is studied in Section 6. The random walk representation described in Section 3.1 and the fact that $\Gamma_\beta = 1$ when $\beta = \beta_c$ tell us that the horizontal extension of the path has the law of the stopping time $\tau_L := \min\{N \geq 1 : \ldots\}$. Studying the scaling of $\tau_L$ requires building a renewal process based on the successive excursions made by $V$ inside the lower half-plane or inside the upper half-plane. A sharp local limit theorem is therefore required for the area enclosed in between such an excursion and the $x$-axis, and this is precisely the object of a recent paper by Denisov, Kolb and Wachtel [13]. Note that the fact that the horizontal extension fluctuates makes the scaling limits of the upper and lower envelopes of the path much harder to investigate at $\beta = \beta_c$. As soon as the limiting distribution of the rescaled horizontal extension is not constant, which is the case for the critical IPDSAW, one should indeed consider the limiting distribution of the rescaled horizontal extension and of $V$ and $M$ simultaneously. For instance, the asymptotic decorrelation of $M$ and $V$ may well not be true anymore.
For this reason, we will state the investigation of the limiting distribution of the upper and lower envelopes of the critical IPDSAW as an open problem. Let us conclude by pointing out that the critical regime of a Laplacian $(1+1)$-dimensional pinning model, investigated by Caravenna and Deuschel in [8], has a somewhat similar flavor. More precisely, when the pinning term $\varepsilon$ is switched off, the path can be viewed as the bridge of an integrated random walk and therefore scales like $N^{3/2}$. This scaling persists until $\varepsilon$ reaches a critical value $\varepsilon_c$. At criticality, and once rescaled in time by $N$ and in space by $N^{3/2}/\log^{5/2}(N)$, the path is seen as the density of a signed measure $\mu_N$ on $[0,1]$. Then, $\mu_N$, which is built from a path sampled from the polymer measure, converges in distribution towards a random atomic measure on $[0,1]$. The atoms of the limiting distribution are generated by the longest excursions of the integrated walk, very much in the spirit of the limiting distribution in Theorem 2.2 (2), where each contribution to the sum constituting the limiting distribution is associated with a long excursion of the auxiliary random walk.
Computer simulations. As explained in Appendix 7, the representation formula (1.7) provides an exact simulation algorithm for the law of a path under the polymer measure $P_{L,\beta}$. However, this algorithm is very efficient only for $\beta = \beta_c$, and loses all efficiency when $\beta$ is not close to $\beta_c$.
Open problems.
• Find the scaling limit of the envelopes of the path in the critical regime.
• Establish the fluctuations of the envelopes around the Wulff shapes (Theorem 2.7) for the true polymer measure $P_{L,\beta}$ rather than for the mixture $\tilde P_{L,\beta}$.
• Establish a central limit theorem and a large deviation principle for the horizontal extension in the collapsed and extended regimes.
• Devise a dynamic scheme of convergence of measures on paths such that the equilibrium measure is the polymer measure, with a sharp control on the mixing time to equilibrium similar to the one devised for SOS by Caputo, Martinelli and Toninelli in [5].

Preparation
We begin this section by recalling the proof of the probabilistic representation of the partition function (recall (1.7)). This proof was already displayed in [28], but since it constitutes the starting point of our analysis it is worth reproducing it briefly here. Moreover, we obtain as a by-product the auxiliary random walk $V$ of law $P_\beta$ which, under an appropriate conditioning, can be used to derive some path properties under the polymer measure. In Section 3.2, we recall how to work with the random walk $V$ (of law $P_\beta$) conditioned on describing an abnormally large algebraic area. To that aim, we introduce a strategy initially displayed in [15] and subsequently used in [10], which consists in tilting $P_\beta$ appropriately so that the path typically describes a large area. This framework will be of crucial importance to study the collapsed regime.
3.1. Probabilistic representation of the partition function. Let $P_\beta$ be the law of the random walk $V$. Coming back to the proof of (1.7), we recall (1.1)-(1.5) and we note that the $\wedge$ operator can be written as $x \wedge y = (|x| + |y| - |x+y|)/2$ for all $x, y \in \mathbb{Z}$. (3.3) Hence, for $\beta > 0$ and $L \in \mathbb{N}$, the partition function in (1.5) becomes which immediately implies (1.7). A useful consequence of formula (3.5) is that, once conditioned on taking a given number of horizontal steps $N$, the polymer measure is exactly the image measure under the $T_N$-transformation of the geometric random walk $V$ conditioned to return to the origin after $N+1$ steps and to make a geometric area $L-N$, i.e.,
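The identity (3.3) is easy to verify exhaustively on a small range: $(|x|+|y|-|x+y|)/2$ equals $\min(|x|,|y|)$ when $x$ and $y$ have opposite strict signs, and $0$ otherwise, which is exactly the reward for pairs of consecutive stretches with opposite directions. A quick check:

```python
def wedge(x, y):
    # identity (3.3): x ∧ y = (|x| + |y| - |x + y|) / 2
    # the numerator is always even for integers, so integer division is exact
    return (abs(x) + abs(y) - abs(x + y)) // 2

# wedge(x, y) == min(|x|, |y|) when x*y < 0, and 0 when the signs agree
# or one of the stretches is zero.
```

This explains why the Hamiltonian is large precisely for long vertical stretches with alternating signs.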

3.2. Large deviation estimates.
In this section, we introduce an exponential tilting of the probability measure $P_\beta$ (through the Cramér transform), in order to study $P_\beta$ conditioned on the large deviation event $\{A_n(V) = qn^2, V_n = 0\}$. Following Dobrushin and Hryniv [15], for $n \in \mathbb{N}$, we use Under the tilted probability measure, the large deviation event $\{A_n = qn^2, V_n = 0\}$ becomes typical. First, we denote by $\mathcal{L}(h)$, $h \in \mathbb{R}$, the logarithmic moment generating function of the random walk $V$, i.e., From the definition of the law $P_\beta$ in (3.1), we obviously have $\mathcal{L}(h) < \infty$ for all $h \in (-\beta/2, \beta/2)$. For ease of notation, we set $\Lambda_n := (\frac{A_n}{n}, V_n)$ and we denote its logarithmic moment generating function by $\mathcal{L}_{\Lambda_n}(H)$ for $H := (h_0, h_1) \in \mathbb{R}^2$, i.e., (3.10) We also introduce $\mathcal{L}_\Lambda$, the continuous counterpart of $\mathcal{L}_{\Lambda_n}$, which is defined on With the help of (3.9) and for $H = (h_0, h_1) \in \mathcal{D}_n$, we define the $H$-tilted distribution by For a given $n \in \mathbb{N}$ and $q \in \frac{\mathbb{N}}{n}$, the exponential tilt is given by $H^q_n := (h^q_{n,0}, h^q_{n,1})$ which, by Lemma 5.5 in Section 5.1 of [10], is the unique solution of An important feature of this exponential tilting is that, under $P_{n, H^q_n}$, the large deviation event becomes typical. Then, we define the continuous counterpart of $H^q_n$ by $\tilde H(q,0) := (\tilde h_0(q,0), \tilde h_1(q,0))$, which is the unique solution of the equation and we state a Proposition that allows us to remove the $n$ dependence of the exponential decay rate.
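The finiteness claim $\mathcal{L}(h) < \infty$ for $h \in (-\beta/2, \beta/2)$ can be checked numerically. The sketch below assumes the increment law $P_\beta(V_1 = k) \propto e^{-\beta|k|/2}$, $k \in \mathbb{Z}$, which is consistent with the normalizing constant $c_\beta = \frac{1+e^{-\beta/2}}{1-e^{-\beta/2}}$ given in the introduction (the two geometric tails sum to exactly $c_\beta$), and sums the moment generating function in closed form:

```python
import math

def c_beta(beta):
    # normalizing constant from the introduction: (1 + e^{-beta/2}) / (1 - e^{-beta/2})
    return (1 + math.exp(-beta / 2)) / (1 - math.exp(-beta / 2))

def log_mgf(h, beta):
    """L(h) = log E_beta[e^{h V_1}] for P_beta(V_1 = k) = e^{-beta|k|/2} / c_beta.
    Summing the two geometric series gives a closed form, finite iff |h| < beta/2."""
    if abs(h) >= beta / 2:
        return math.inf
    ep = math.exp(h - beta / 2)    # ratio of the series over k >= 1
    em = math.exp(-h - beta / 2)   # ratio of the series over k <= -1
    return math.log((1 + ep / (1 - ep) + em / (1 - em)) / c_beta(beta))
```

One recovers the expected properties: $\mathcal{L}(0) = 0$, $\mathcal{L}$ is symmetric and positive away from $0$, and it blows up at the boundary $|h| = \beta/2$, which is the source of the geometric tails exploited by the tilting.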

Scaling Limits in the extended phase
In this section, we will first display an alternative representation of the model in terms of an auxiliary regenerative process. We shall restrict ourselves to paths of length $L$ whose last stretch has zero vertical length, i.e., $\Omega^c_L = \{l \in \Omega_L : l_{N_l} = 0\}$. Note that the natural one-to-one correspondence between $\Omega_L$ and $\Omega^c_{L+1}$ conserves the Hamiltonian and therefore, proving Theorem 2.1 (1), Theorem 2.2 (1) or Theorem 2.4 with the constraint immediately entails the same results without the constraint. Let us define a pattern as a path whose first zero-length vertical stretch occurs only at the end of the path. We shall decompose a path into a finite number of patterns. That is, for $l \in \Omega^c_L$ we consider the successive indices corresponding to vertical stretches of zero length, i.e., Then $N_k = T_k - T_{k-1}$ is the horizontal extension of the $k$-th pattern, $S_k = N_k + |l_{T_{k-1}+1}| + \cdots + |l_{T_k}|$ is the length of the $k$-th pattern and $J_k = l_{T_{k-1}+1} + \cdots + l_{T_k}$ is the vertical displacement on the $k$-th pattern. If $\pi_L(l) = r$ is the number of patterns, then the horizontal extension is $N_l = N_1 + \cdots + N_r$, the total length is of course $L = S_1 + \cdots + S_r$ and the total vertical displacement is $J_1 + \cdots + J_r$.
The key observation that leads to the construction of the renewal structure is that the Hamiltonian of the path is the sum of the Hamiltonians of the patterns, since the two separating horizontal steps prevent any interaction between the patterns. Let us define the constrained excess partition function as We shall apply to $Z^c_{L,\beta}$ the probabilistic representation displayed in (3.4)-(3.5). The only additional constraint is that $l_{N_l} = 0$, which we translate immediately as the added constraint $V_N = 0$ on the associated random walk $V$, i.e., Accordingly, the pattern excess partition function is defined as We will use the decomposition into patterns to generate an auxiliary renewal process, whose inter-arrivals are associated with the successive lengths of the patterns. Thus, it is natural to consider the series and the convergence abscissa $\bar f(\beta) := \inf\{\alpha : \varphi(\alpha) < +\infty\}$. An important observation at this stage is the link between $\varphi$ and $\tilde f(\beta)$, stated in the following lemma.
Lemma 4.1 allows us to define the renewal process rigorously. We even enlarge the probability space on which this renewal process is defined so as to take into account the horizontal extension and the vertical displacement on each pattern. We finally obtain an auxiliary regenerative process that will be the cornerstone of our study of the extended phase. To that aim, we let $(\sigma_i, \nu_i, y_i)_{i \ge 1}$ be an IID sequence of random variables of law $\mathbf P_\beta$. The law of $(\sigma_1, \nu_1, y_1)$ is given by • The conditional distribution of $y_1$ given $\sigma_1 = s$, $\nu_1 = n$ is The link between the latter regenerative process and the polymer law is stated in Lemma 4.2 below. We let $\mathcal T$ be the set of renewal times associated with $\sigma$, i.e., $\mathcal T = \{\sigma_1 + \dots + \sigma_r,\ r \in \mathbb N\}$.
Lemma 4.2. Given integers $r, s_1, \dots, s_r, n_1, \dots, n_r, t_1, \dots, t_r$ such that Proof. We disintegrate $Z^c_{L,\beta}$ with respect to the number of patterns $r$ and to $s_1, \dots, s_r$ the respective lengths of these patterns: It is now folklore in probability theory (see [20, Chapter 1] for an application of this technique to the linear pinning model) to multiply and divide the r.h.s. by $e^{L f(\beta)}$ and obtain We use the probabilistic representation of the partition function, and we let $\mathcal L_{n.,s.,t.}$ be the subset of $\Omega^c_L$ containing those configurations forming $r$ patterns of respective horizontal extensions, lengths and vertical displacements $n_1, \dots, n_r$, $s_1, \dots, s_r$, $t_1, \dots, t_r$. We obtain and we recall that for $l \in \mathcal L_{n.,s.,t.}$ and $i \in \{1, \dots, r\}$, the vertical displacement on the $i$-th pattern is $l_{n_1 + \dots + n_{i-1}+1} + \dots + l_{n_1 + \dots + n_i}$. For the associated random walk trajectory $V$, the vertical displacement is given by $(-1)^{n_1 + \dots + n_{i-1}} Y_{n_i}(V)$. It remains to use the symmetry of the $V$-random walk to get We multiply the numerator and the denominator by $e^{-f(\beta)L}$ and we use (4.3), (4.6) and the definition of $\mathbf P_\beta$ to get 4.1. Proof of Theorems 2.1 (1), 2.2 (1) and of Theorem 2.4. The proof of Theorem 2.1 (1) is a straightforward application of formula (4.6) and of the renewal theorem, which ensures that $\lim_{L\to\infty} \mathbf P_\beta(L \in \mathcal T) = 1/\mu_\beta > 0$ with $\mu_\beta := \mathbf E_\beta(\sigma_1)$. The finiteness of $\mu_\beta$ is an easy consequence of the definition of $\mathbf P_\beta$ and of the fact that $\tilde f(\beta) < f(\beta)$. The proof of (2.2) (i.e., Theorem 2.2 (1)) is performed as follows. We let $\pi_L(l)$ be the number of patterns in a configuration $l \in \Omega^c_L$ (and thus $T_{\pi_L(l)} = N_l$). We also set $\pi_L(\sigma) := \max\{i \ge 1 \colon \sigma_1 + \dots + \sigma_i \le L\}$, so that $\pi_L(\sigma)$ is the counterpart of $\pi_L(l)$ for the renewal process associated with the inter-arrivals $(\sigma_i)_{i \in \mathbb N}$.
By Lemma 4.2, (2.2) will be proven once we show that, under $\mathbf P_\beta(\cdot \mid L \in \mathcal T)$, we have and therefore the quantity $e(\beta)$ in (2.2) is given by To prove (4.7) we note that, under $\mathbf P_\beta$, a straightforward application of the law of large numbers gives the almost sure convergence of $(\nu_1 + \dots + \nu_n)/n$ to $\mathbf E_\beta(\nu_1)$ and of $\pi_L(\sigma)/L$ to $1/\mu_\beta$. Thus $(\nu_1 + \dots + \nu_{\pi_L(\sigma)})/L$ tends almost surely to $\mathbf E_\beta(\nu_1)/\mu_\beta$ and this convergence also holds in probability. Moreover, we have just seen that $\lim_{L\to\infty} \mathbf P_\beta(L \in \mathcal T) = 1/\mu_\beta > 0$, so that the latter convergence in probability also holds under $\mathbf P_\beta(\cdot \mid L \in \mathcal T)$ and this completes the proof of (2.2).
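The renewal law of large numbers used in this argument can be illustrated numerically. The following sketch is ours (a toy uniform inter-arrival law stands in for $\mathbf P_\beta$): it checks that $\pi_L(\sigma)/L \approx 1/\mu_\beta$ and $(\nu_1 + \dots + \nu_{\pi_L(\sigma)})/L \approx \mathbf E_\beta(\nu_1)/\mu_\beta$.

```python
# Monte Carlo sanity check (ours, not from the paper) of the renewal LLN:
# pi_L(sigma)/L -> 1/mu  and  (nu_1 + ... + nu_{pi_L})/L -> E[nu_1]/mu.
import random

random.seed(0)
L = 200_000
total_sigma, total_nu, count = 0, 0, 0
# toy inter-arrival law: sigma uniform on {1,...,5} (mean mu = 3), nu = sigma - 1
while True:
    sigma = random.randint(1, 5)
    if total_sigma + sigma > L:
        break
    total_sigma += sigma
    total_nu += sigma - 1
    count += 1
assert abs(count / L - 1 / 3) < 0.01      # pi_L(sigma)/L  ~  1/mu = 1/3
assert abs(total_nu / L - 2 / 3) < 0.01   # E[nu_1]/mu = 2/3
```

The same computation with the true pattern law in place of the toy law would produce the constant $e(\beta)$ of the extended regime.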
It remains to prove (2.11). To begin with, we show that, under $P^c_{L,\beta}$, the largest stretch of a configuration $l \in \Omega^c_L$ is typically not larger than $c \log L$. This will imply the convergence in probability of the rescaled stretches $(l_s/\sqrt{N_l})_{s\in[0,1]}$ to 0, which is the convergence of the second coordinate in (2.11).
Proof. We will prove a slightly stronger property, that is, there exists $c > 0$ such that The fact that each stretch of a configuration $l \in \Omega^c_L$ belongs to one of the $\pi_L(l)$ patterns of $l$ will then be sufficient to obtain (4.8). With the help of Lemma 4.2, we can state that By the renewal theorem, we know that $\lim_{L\to\infty} \mathbf P_\beta(L \in \mathcal T) = 1/\mu_\beta > 0$. Moreover, since $\sigma_1$ has finite small exponential moments, we can choose $c > 0$ large enough so that $\lim_{L\to\infty} \mathbf P_\beta(\max\{|\sigma_i|,\, i = 1, \dots, L\} \ge c \log L) = 0$. Thus, by choosing $c > 0$ large enough, the r.h.s. in (4.10) vanishes as $L \to \infty$ and the proof of Lemma 4.3 is complete.
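The mechanism behind the $c \log L$ bound, namely that the maximum of $L$ variables with exponential tails grows logarithmically, can be illustrated as follows (our own sketch; a standard exponential law stands in for the law of $\sigma_1$).

```python
# Illustration (ours, not from the paper): the maximum of L iid variables
# with exponential tails is of order log L, which is the mechanism behind
# the c*log(L) bound on the largest pattern length.
import math
import random

random.seed(1)
for L in (1_000, 10_000, 100_000):
    sample = [random.expovariate(1.0) for _ in range(L)]   # P(X > x) = e^{-x}
    m = max(sample)
    # the maximum concentrates around log L; c = 3 is a comfortable margin
    assert m <= 3 * math.log(L)
```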
At this stage, the proof of (2.11) will be complete once we prove that, with l sampled from P c L,β , we have with v β = E β (y 2 1 ). For simplicity we will rather deal with the process M l = N l √ L M l and since, by Theorem 2.2 (1) we know that N l /L converges in probability to µ β , we can claim that (2.11) will be proven once we show that, with l sampled from P c L,β , the process M l converges in law towards v β /µ β B.
Because of Lemma 4.3, we do not change the limit of $M_l$ under $P^c_{L,\beta}$ if, for $l \in \Omega^c_L$ and $i \in \{1, \dots, \pi_L(l)\}$, we redefine $M_{l,i}$ in (2.9) as $M_{l,i} = l_1 + \dots + l_i$. We will use this latter definition until the end of this proof only. Then, we let $\bar M_l$ be the càdlàg process defined as where $C_{1,l}(t)$ simply counts how many patterns have been completed by the trajectory $l$ before its $\lfloor t N_l \rfloor$-th horizontal step, i.e., In this section, all random processes are viewed as elements of $D_{[0,1]}$, the set of càdlàg processes on $[0,1]$ endowed with the Skorohod topology, and we refer to [3] for an overview of the subject. At this stage we note that, for $t \in [0,1]$, we have and therefore (4.9) ensures that $M_l$ and $\bar M_l$ have the same limit in law under $P^c_{L,\beta}$. Let $(\sigma_i, \nu_i, y_i)_{i\ge1}$ be an IID sequence of random variables under $\mathbf P_\beta$ and let $B$ : We note that, because of Lemma 4.2, the càdlàg process $\bar M_l$ under $P^c_{L,\beta}$ has the same law as the process $\bar W_L$ under $\mathbf P_\beta(\cdot \mid L \in \mathcal T)$, which is defined by where $C_{2,L}(t)$ is the counterpart of $C_{1,l}(t)$ in the framework of the associated regenerative process, i.e., where we recall $\pi_L(\sigma) = \max\{i \ge 1 \colon \sigma_1 + \dots + \sigma_i \le L\}$ and $V_L = \nu_1 + \dots + \nu_{\pi_L(\sigma)}$. Thus, the proof of (2.11) will be complete once we show that, under The proof of (4.17) is standard in regenerative process theory, see e.g. Section 5.10 of Serfozo [30]. We just need to be careful when transporting results from $\mathbf P_\beta$ to $\mathbf P_\beta(\cdot \mid L \in \mathcal T)$.

5.
Fluctuations of the convex envelopes around the Wulff shape: proof of Theorem 2.7. Let us first recall some notation. For each $l \in \mathcal L_{N,L}$ we defined in (2.9) the middle line $M_l$.
We also defined in Section 3.1, the T N transformation that associates with each l ∈ L N,L the path V l = (T N ) −1 (l) such that V l,0 = 0, V l,i = (−1) i−1 l i for all i ∈ {1, . . . , N } and V l,N +1 = 0. Finally, note that the path M l can be rewritten with the increments of the V l random walk as In the same spirit, we will need to work with the V random walk sampled from P β directly. We associate with each V trajectory the process M that is obtained exactly as M l is obtained from V l in (5.1), i.e., We recall finally that, for any trajectory

5.1.
Outline of the proof. We recall the definition of P L,β in (2.15). As stated in Remark 2.9, the proof of Theorem 2.7 will be complete once we show the convergence in law (2.18).
To that aim we will prove Propositions 5.1 and 5.2 below, which are a finite-dimensional convergence and a tightness argument, and which together are sufficient to prove (2.18). In the statements below, $f^c_{H,t}$ denotes the density of the law of $\xi_H(t)$ under the appropriate conditioning. Let us give here the key idea behind the proofs of Propositions 5.1 and 5.2. We will first prove the counterpart of Propositions 5.1 and 5.2 with the processes $M$ and $V$ sampled from $\mathbf P_\beta$ (cf. 5.2) conditional on $V_N = 0$, $A_N(V) = qN^2$ ($q > 0$). The reason for these two intermediate results is that they can be obtained with the tilting of $\mathbf P_\beta$ exposed in Section 3.2 and first introduced by Dobrushin and Hryniv in [15]. Then, we will translate these two results in terms of the two processes $M_l$ and $V_l$ (see (5.1)) obtained with $l$ sampled from $P_{L,\beta}$. However, this last step is difficult because the conditioning that emerges from the $T_N$ transformation (see Section 3.1) involves the geometric area below the $V_l$ random walk rather than its algebraic counterpart. This is the reason why we state in Section 5.2 a handful of preparatory lemmas indicating that, when the algebraic area below $V$ is abnormally large ($qN^2$ instead of the typical $N^{3/2}$), the geometric area below $V$ is not only also abnormally large but is fairly close to the algebraic area. These lemmas will be proven in Section 5.6, except for Lemma 5.5 which was already proven in [10]. In Section 5.3 we state and prove a local limit theorem for any finite-dimensional joint distribution of the middle line $M$ and the profile $|V|$ with $V$ sampled from $\mathbf P_\beta(\cdot \mid V_N = 0, |A_N(V)| = qN^2)$ as $N \to \infty$. As a by-product of this local limit theorem we will observe that asymptotically the rescaled profile and the rescaled middle line decorrelate. Section 5.3 can be seen as the first part of the proof of Proposition 5.1. We will indeed prove in Section 5.4 that the latter asymptotic decorrelation still holds true with the rescaled versions of $|V_l|$ and $M_l$ when $l$ is sampled from $P_{L,\beta}$ and $\beta > \beta_c$.
This will complete the proof of Proposition 5.1. Similarly, Section 5.5 can be seen as the first part of the proof of Proposition 5.2, since there we prove the tightness of $M$ and $V$ under $\mathbf P_\beta(\cdot \mid V_N = 0, A_N(V) = qN^2)$ as $N \to \infty$. However, we will not display the part of the proof showing that this tightness still holds under $P_{L,\beta}$, since this can be done by mimicking the proof in Section 5.4.
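The asymptotic decorrelation of the middle line and the profile announced above has an elementary analogue for an unconditioned walk: the sum and the alternating-sign sum of IID centered increments become asymptotically independent Gaussians. The following simulation is ours (not from the paper) and uses Gaussian increments for illustration.

```python
# Simulation (ours): for iid centered increments, (S_n/sqrt(n), S~_n/sqrt(n)),
# with S~_n the alternating-sign sum, decorrelates as n grows; for even n the
# two sums are exactly uncorrelated.
import random

random.seed(2)
n, trials = 400, 5_000
pairs = []
for _ in range(trials):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    s = sum(x)                                             # S_n
    s_alt = sum((-1) ** i * xi for i, xi in enumerate(x))  # +x_1 - x_2 + x_3 ...
    pairs.append((s / n ** 0.5, s_alt / n ** 0.5))
# the empirical mean of a*b estimates the covariance, which vanishes
mean_prod = sum(a * b for a, b in pairs) / trials
mean_sq = sum(a * a for a, b in pairs) / trials
assert abs(mean_prod) < 0.06      # asymptotic decorrelation
assert abs(mean_sq - 1.0) < 0.1   # each coordinate is asymptotically N(0,1)
```

The hard content of Propositions 5.1 and 5.2 is precisely that this decorrelation survives the conditioning on the area.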

5.2.
Preparations. Lemma 5.3 shows that the probability, under the polymer measure, that the rescaled horizontal extension deviates from $a(\beta)$ by more than a given vanishing quantity decays faster than any given polynomial, provided the vanishing quantity decreases slowly enough.
where $\rho_\beta(q) := L_\Lambda(H(q,0)) - q\, h_0(q,0)$, so that $\rho_\beta$ is $C^\infty$ and we have $G(a) = a\big(\log \Gamma(\beta) - \rho_\beta(\tfrac{1}{a^2})\big)$ (recall (2.4)). Lemma 5.6 ensures that, when $V$ is sampled from $\mathbf P_\beta$ conditional on $V_N = 0$, $|A_N(V)| = qN^2$, the probability that the geometric area described by $V$ differs from the algebraic area by more than $(\log N)^4$ tends to 0 faster than any polynomial. Lemma 5.6. For any $[q_1, q_2] \subset (0, \infty)$ and $\alpha, c > 0$ we have We recall that all lemmas of this section are proven in Section 5.6. 5.3. Asymptotic decorrelation of the middle line and of the profile. In this section, we prove the following lemma, which gives us a local limit theorem for the paths. This local limit theorem is reinforced by the fact that it is uniform in $q$ belonging to any compact subset of $(0, \infty)$. Proof of Lemma 5.7. First of all, we note that, thanks to Lemma 5.6, it is sufficient to prove Lemma 5.7 with the conditioning $\{V_N = 0, |A_N(V)| = qN^2\}$ instead of $W_{N,L,qN^2}$. We will first prove Lemma 5.7 subject to Theorem 5.9, which is stated below, and then we will prove Theorem 5.9.
Let $k_H(z_0, z_1)$ be the density of the law of We can now use a symmetry argument (the symmetry of the distribution of the increments of the geometric random walk) to say that Therefore we obtain the desired result by applying Theorem 5.9 and combining it with the definition of $f^c_{H,t}$.
Proof of Theorem 5.9. Before starting the proof of Theorem 5.9, we shall give a flavour of its nature by looking at a toy model. Let us consider a random walk $S_0 = 0$, $S_n = X_1 + \dots + X_n$ with IID increments, and let us build the alternating-sign random walk $\tilde S_n := \sum_{i=1}^n (-1)^{i+1} X_i$. If $X_1$ is square integrable and, say, $E[X_1] = 0$, $E[X_1^2] = 1$, then by the central limit theorem $S_n/\sqrt{n}$ converges in distribution to $Z \sim \mathcal N(0,1)$. Furthermore, we have asymptotic decorrelation: $(S_n/\sqrt n, \tilde S_n/\sqrt n)$ converges in distribution to a pair of independent $\mathcal N(0,1)$ random variables, and this convergence can be lifted to the level of processes: the pair of rescaled walks converges to a pair of independent standard Brownian motions. Theorem 5.9 shows that we can extend this decorrelation result to properly conditioned processes, in the sense of finite-dimensional distributions. Recall that $V_{N\bar t} = (V_{Nt_1}, \dots, V_{Nt_{r_1}})$ and, for $\bar s \in (0,1)^{r_2}$, $M_{N\bar s} = (M_{Ns_1}, \dots, M_{Ns_{r_2}})$. Let us now start the proof of Theorem 5.9. The proof is a copy of the classic proof of the local central limit theorem, and very similar to the proof given in [15]. We set Recall that by construction of $H^q_N$, we have $E_{N,H^q_N}(A_N) = N^2 q$. Hence, by the Fourier inversion formula where $A, \Delta > 0$ are positive constants. We can bound $J^{(q)}_i$ for $i = 2, 3, 4$ with exactly the same procedure as used in [10, Proposition 6.1]. We shall focus on proving that It is enough to prove that To this end we shall write $\bar\Lambda_N = (\frac{A_N}{N}, V_N, V_{N\bar t}, 2M_{N\bar s})$ and consider the moment generating function Therefore, a second-order Taylor expansion gives We can write the moment generating function $L_{\bar\Lambda_N}(H^q_N, 0, 0)$ explicitly, and it converges to the preceding limit. We have similarly, There remains to understand the cross term: by regrouping the alternating signs two by two, we have (thanks to the boundedness of the third derivative of $L$ on compact subsets of $(-\beta/2, \beta/2)$).
Therefore, as a whole, the Hessian matrix $\frac{1}{N}\,\mathrm{Hess}\, L_{\bar\Lambda_N}(H^q_N, 0, 0)$ converges uniformly on $[q_1, q_2]$ to the covariance matrix of the vector ( To conclude, we now need to prove that in the expression of $R_N$ we can replace the quantities $E_{N,H^q_N}(V_{N\bar t})$ and $E_{N,H^q_N}(M_{N\bar s})$ by $N \gamma^*_q(\bar t)$ and $0$ respectively. We first observe that when $H = (h_0, h_1)$ lives in a compact subset of $(-\beta/2, \beta/2)$, which is the case if $q \in [q_1, q_2]$ and $H = H^q_N$ or $H = H(q, 0)$, then the variances of $\xi_H(t)$ have a positive lower bound, and therefore there exists a uniform Lipschitz constant $C$ such that for any $\bar x, \bar x' \in \mathbb R^{r_1}$, $\bar y, \bar y' \in \mathbb R^{r_2}$, $z_0, z_1, z_0', z_1'$ and any $q \in [q_1, q_2]$,

The second observation is that with
and that, thanks to the boundedness on compact sets of the derivatives of $L$, this convergence holds uniformly. Similarly and this convergence holds uniformly, since by grouping the alternating signs two by two, we have We set $\psi_{1,L}(\bar x, \bar y) = P_{L,\beta}\big(H_{\bar s,\bar t}(\bar x, \bar y)\big)$, and the proof of Proposition 5.1 will be complete once we show that $\psi_1 \sim \psi_5$. To achieve this equivalence we introduce three intermediate functions $\psi_2$, $\psi_3$ and $\psi_4$ and we divide the proof into four steps. For $i \in \{1, 2, 3, 4\}$, the $i$-th step consists in proving that $\psi_i \sim \psi_{i+1}$, so that at the end of the fourth step we can state that $\psi_1 \sim \psi_5$. In steps 1 and 2, we will use the fact that for $i \in \{1, 2, 3\}$ the function $\psi_i$ is of the form $\psi_i = A_i B_i$, so that an equivalence of type (5.13) between $\psi_j$ and $\psi_k$ will be proven once we show that For ease of notation, we will write $H$ instead of $H_{\bar s,\bar t}(\bar x, \bar y)$ until the end of the proof.
In the first step, we work under the polymer measure $P_{L,\beta}$ and we restrict the trajectories of $\Omega_L$ to those having a horizontal extension $\approx a(\beta)\sqrt L$ and an algebraic area With the second step, we use the random walk representation of Section 3.1 to switch from $P_{L,\beta}$ to $\mathbf P_\beta$. Finally, in steps 3 and 4, we apply the local limit theorem stated in Lemma 5.7 to complete the proof.
Step 1. We rewrite ψ 1 under the form where, for B ⊂ Ω L , the quantity Z L ,β (B) is the restriction of the partition function Z L ,β to those trajectories l ∈ B ∩ Ω L , i.e., For simplicity, we will omit the L, η dependency of A L,L ,η in what follows. The equivalence ψ 1 ∼ ψ 2 will be proven once we show that (i) and (ii) in (5.15) are satisfied with j = 1, k = 2. We note that Thus, (5.18) and (5.21) are sufficient to prove (i). It remains to show that (ii) is satisfied. We note that and then, we can use directly (5.21) to obtain (ii) and this completes the proof of step 1.
Step 2. To begin with, we set J N,L ,L = {L − N − c(log L) 4 , . . . , L − N } and we note that, with the help of the random walk representation, we can rewrite We switch the order of summation in (5.24) and we obtain We define the third intermediate function The equivalence ψ 2 ∼ ψ 3 will be proven once we show that (i) and (ii) in (5.15) are satisfied with j = 3, k = 2.
For (i), we note that for all $b \in D_{N,L}$ satisfying $b \ge L - \varepsilon(L) - N$ and $b \le L + \varepsilon(L) - N - c(\log L)^4$ we have $U_{b,L} = \widetilde U_{b,L}$ and therefore For (ii), in turn, we note that, and where we have used again that $U_{b,L} = \widetilde U_{b,L}$ for $b \in D_{N,L} \setminus \widetilde D_{N,L}$. At this stage, we state two claims that will be sufficient to complete this step. Proof of Claim 5.10. Since $\eta_L \to 0$, we easily infer that for $L$ large enough, for $N \in I_{\eta,L}$ and $b \in D_{N,L}$, we have $b/N^2 \in [R_1, R_2]$. Thus, we can use Lemma 5.7 and (5.12) to assert that, for $L$ large enough, $L^{\frac{r_1+r_2}{2}}\, \mathbf P_\beta(H \mid W_{N,L,b})$ is bounded above uniformly in $N \in I_{\eta,L}$ and $b \in D_{N,L}$. The $\beta$ dependency of $R_1$ and $R_2$ is omitted for simplicity.
Proof of Claim 5.11. By using again the fact that for $N \in I_{\eta,L}$ and $b \in D_{N,L}$ we have, for $L$ large enough, that $b/N^2 \in [R_1, R_2]$, we can apply Lemma 5.6 to assert that for $L$ large enough We recall the definition of $W_{N,L,b}$ in (5.31) and we set We can bound from above the ratio in the l.h.s. of (5.34) as where we have used (5.36) to obtain that for $L$ large enough, $N \in I_{\eta,L}$ and $b \in D_{N,L}$ we have At this stage, we need to use the fact that $\varepsilon(L) = (\log L)^6$ and we note that $b/N^2$ belongs to the compact set $[R_1, R_2]$ on which the function $\rho_\beta$ is differentiable. Therefore, there exists $C > 0$ such that Thus, we can apply Lemma 5.5 to assert that there exist $M_1 > M_2 > 0$ such that for $L$ large enough and for $N \in I_{\eta,L}$ we have It suffices to combine (5.37) and (5.42) to complete the proof of Claim 5.11.
Step 3. In this step, we first note that $\psi_3$ can be written as and we set
Step 4. Obviously we have, Therefore, By the implicit function theorem applied to the definition (3.16) of $H$, the function $q \mapsto H(q, 0)$ is Lipschitz on compact sets, and thus there exists a constant $C > 0$ such that sup By the Lipschitz properties in $(H, \bar x, \bar y)$ of the Gaussian densities $f^c_{H,t}(\bar y)$ and $g_{H,s}(\bar x)$, we conclude that $\psi_{4,L} \sim \psi_{5,L}$. 5.5. Proof of Proposition 5.2. The proof of the tightness of the sequence of distributions $(Q_{L,\beta})_{L\ge1}$ is obtained by combining arguments of [15, Section 6] with steps 1, 2 and 3 of the proof of Proposition 5.1. We focus on proving the tightness of the first coordinate of the process, i.e., $(\sqrt{N_l}\, M_l(s))_{s\in[0,1]}$ (the proof for the second coordinate is completely similar and even closer to the proof displayed in [15, Section 6]). Let us denote by $(\widehat M_l(s))_{s\in[0,1]}$ the polygonal interpolation of the middle line. Then, see for example the proof of Lemma 5.1.4 in [11], the distributions under $P_{L,\beta}$ of $(M_l(s))_{s\in[0,1]}$ and $(\widehat M_l(s))_{s\in[0,1]}$ are exponentially close, and so we shall restrict ourselves to proving the tightness of the sequence of continuous processes $(\sqrt{N_l}\, \widehat M_l(s))_{s\in[0,1]}$ using the criterion of Theorem 7.3 (ii) of [3]: where $w(f, \delta) = \sup_{|x-y|\le\delta} |f(x) - f(y)|$ denotes the modulus of continuity.
We then inspect closely the steps 1, 2 and 3 taken in the proof of Proposition 5.1. It is tedious, but straightforward, to see that we only need to prove that To this end, we shall use Kolmogorov's tightness criterion (see Theorem 1.8, Chap. XIII of [29]) and show that for all $[q_1, q_2]$ there exist $\alpha, \gamma, C, N_0 > 0$ such that Finally, we shall prove that Proof of Lemma 5.3. We just need to recall from [10] the formula (4.43) and the inequality (4.50) with $\varepsilon = \eta_L$ to see that since $G(a)$ reaches its maximum at $a(\beta)$ and since the second derivative of $G$ at $a(\beta)$ is strictly negative, $\eta_L = L^{-1/8}$ is suitable.
Proof of Lemma 5.4. Let us recall the notation $I_{j_{\max}}$ of [10, Section 1], which is the set of indices of the stretches that occur in the largest bead. In other words, it is the largest set of consecutive indices for which $V_{l,i}$ keeps the same sign. To begin with, we can show, following mutatis mutandis the proof of Theorem C, that for $\alpha > 0$ there exists a $c > 0$ such that Note that for $l \in \Omega_L$ we have $\sum_{i=1}^{N_l} |V_{l,i}| = \sum_{i=1}^{N_l} |l_i| = L - N_l$ and thus, by the definition of $I_{j_{\max}}$, we also have At this stage, we recall that $A_N(V) = \sum_{i=1}^N V_i$ and we use (5.52) and (5.53) to assert that. It remains to use (5.51) to complete the proof of Lemma 5.4.
Proof of Lemma 5.6. For the sake of conciseness, we will use, in this proof only, the notations Thus, the proof of Lemma 5.6 will be complete once we show that for $\alpha, c > 0$ we have where the intersection of $[q_1, q_2]$ with $\frac{\mathbb N}{N^2}$ is omitted for simplicity. Since the equality $P_{N,H^q_N}(W_{c,N,q} \mid T_{N,q}) = \mathbf P_\beta(W_{c,N,q} \mid T_{N,q})$ holds for all $N \in \mathbb N$ and $q \in [q_1, q_2]$, we can use [10, Proposition 2.2] to ensure that (5.55) will be proven once we show that for $c, \alpha > 0$ we have lim so that we can write the upper bound At this stage, we need to distinguish between the positive part $A^+_N(V)$ and the negative part $A^-_N(V)$ of the algebraic area below the $V$ trajectory, i.e., have the same law (by symmetry) and therefore the equality holds true for all $k \in \{1, \dots, N-1\}$ and $A \in \mathcal B(\mathbb R^k)$. A straightforward application of (5.60) tells us that under $P_{N,H^q_N}$, both sets in the r.h.s. of (5.57) have the same probability and similarly both sets in the r.h.s. of (5.59) have the same probability. Therefore we can combine (5.57), (5.58) and (5.59) to obtain As a consequence, the proof of Lemma 5.6 will be complete once we show, on the one hand, that for all $\alpha > 0$ there exists a $c > 0$ such that and, on the other hand, that for all $\alpha, x, y > 0$ we have In order to prove (5.62) and (5.63) we recall Lemma 6.2 in [10]. Lemma 5.12. For $[q_1, q_2] \subset (0, \infty)$ there exists $N_0 \in \mathbb N$ and there exist three positive constants $C', C_1, \lambda$ such that for $N \ge N_0$ and for every integer $j \le N/2$ the following bound holds To prove (5.62), we apply Lemma 5.12 directly and we obtain which suffices to complete the proof of (5.62). For the proof of (5.63), we note that and we apply Lemma 5.12 again to obtain and this completes the proof of (5.63).

Scaling Limits in the critical phase
In this section we will prove items (2) of Theorems 2.1 and 2.2, which correspond to the critical case ($\beta = \beta_c$). To lighten notation, we will write $\beta$ instead of $\beta_c$ until the end of this section. In Section 6.1 below, we first exhibit a renewal structure for the underlying geometric random walk, based on "excursions". Then we state a local limit theorem for the area of such an excursion. This theorem has been proven recently in [13]. With these tools in hand we will be able to prove Theorem 2.1 (2) in Section 6.2 and Theorem 2.2 (2) in Section 6.3. Finally, in Section 6.4, we identify the limiting law of the rescaled horizontal extension obtained in Section 6.3 with that of the Brownian stopping time $g_1$ under a proper conditioning.

Preparations.
The renewal structure. We introduce a sequence of stopping times $(\tau_k)_{k\in\mathbb N}$, which look a lot like ladder times, by the prescription $\tau_0 = 0$ and To these we associate (6.2) and the sequence of areas swept We let $\tau = \tau_1 = N_1$. Let us observe that $A_1 = G_{\tau-1}(V)$. For simplicity, we will drop the $V$ dependency of $G$ in what follows.
Proposition 6.1. The random variables $(A_i, N_i)_{i\ge1}$ are independent and the sequence We shall first need to study the distribution of $V_\tau$. Let $T$ be a random variable with geometric distribution of parameter $p_\beta = 1 - e^{-\beta/2}$, that is and let $\mu_\beta$ be the law of the associated symmetric random variable, that is, $\mu_\beta$ is the distribution of $\varepsilon T$ with $\varepsilon$ independent of $T$ and $P(\varepsilon = \pm 1) = \frac12$: Finally, we let $\mathbf P_{\beta,x}$ be the law of the random walk starting from $V_0 = x \in \mathbb Z$ and $\mathbf P_{\beta,\mu_\beta}$ be the law of the random walk when $V_0$ has distribution $\mu_\beta$. Lemma 6.2. Under $\mathbf P_{\beta,x}$ with $x \in \mathbb Z$ or under $\mathbf P_{\beta,\mu_\beta}$, the random variable $V_\tau$ is independent of the pair $(G_{\tau-1}, \tau)$. Moreover Proof. Let $x > 0$, $y \ge 0$ and let $a, n$ be integers. Under $\mathbf P_{\beta,x}$ we compute the probability of $T_{n,a,y} := \{G_{\tau-1} = a, \tau = n, V_\tau = -y\}$ by disintegrating it with respect to the value $z > 0$ taken by $V_{n-1}$, i.e., where $\gamma_\beta$ can be seen as a normalizing constant for a distribution on the non-negative integers, and we obtain that $\gamma_\beta = p_\beta$ and that $V_\tau$ is independent of $(G_{\tau-1}, \tau)$ and distributed as $-T$. For $x < 0$ the proof is exactly the same. For $x = 0$, we take into account the possibility that the walk sticks to zero for a while. Thus, for $y > 0$, we partition the event $\{G_{\tau-1} = a, \tau = n, V_\tau = -y\}$ depending on the value $z > 0$ taken by $V_{n-1}$ and on the number of steps $k$ during which the random walk sticks to 0, i.e., and we obtain by symmetry $\mathbf P_\beta(G_{\tau-1} = a, \tau = n, V_\tau = y) = \kappa e^{-\frac\beta2 y}$. It is only for $x = y = 0$ that we need to take into account positive and negative excursions, and we obtain $\mathbf P_\beta(G_{\tau-1} = a, \tau = n, V_\tau = 0) = 2\kappa$.
Summing all these probabilities yields and we can conclude that the random variable $V_\tau$ is independent of the pair $(G_{\tau-1}, \tau)$ with distribution $\mu_\beta$. The final computation, showing that $\mu_\beta$ is an "invariant measure", is a straightforward chain of equalities ending with $\mu_\beta(y)$ (6.9). Proof of Proposition 6.1. The proof is based on the preceding lemma, and uses induction in conjunction with the Markov property. Our induction hypothesis is that the sequence $(A_i, N_i)_{1\le i\le k}$ is independent, that the subsequence $(A_i, N_i)_{2\le i\le k}$ is IID, and that the random variable $V_{\tau_k}$ is independent of $(A_i, N_i)_{1\le i\le k}$, with distribution $\mu_\beta$. Then, and this concludes the induction step.
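As a quick sanity check of the computations above, the law $\mu_\beta$ of $\varepsilon T$, with $T$ geometric of parameter $p_\beta = 1 - e^{-\beta/2}$, can be verified numerically to be a symmetric probability distribution (our own sketch, for an arbitrary value of $\beta$).

```python
# Numerical check (ours, not from the paper) that mu_beta, the law of eps*T
# with T geometric of parameter p_beta = 1 - e^{-beta/2} and eps a fair sign,
# is a symmetric probability distribution.
import math

beta = 1.3                       # arbitrary illustrative value of beta
p = 1 - math.exp(-beta / 2)      # p_beta

def mu_beta(y):
    # P(T = k) = p * e^{-beta*k/2} for k >= 0; both signs of eps land on y = 0
    return p * math.exp(-beta * abs(y) / 2) * (1.0 if y == 0 else 0.5)

total = mu_beta(0) + 2 * sum(mu_beta(y) for y in range(1, 2000))
assert abs(total - 1.0) < 1e-12   # mu_beta sums to 1
assert mu_beta(3) == mu_beta(-3)  # and is symmetric
```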
Local limit theorem for the "excursion" area. Let us first state the theorem. Here $f_{\mathrm{ex}}$ stands for the density of the standard Brownian excursion (see e.g. Janson [25]). (1)) ($x \to +\infty$). (6.11) A straightforward application of the dominated convergence theorem entails that if $b_n \to \infty$ then for all $\eta > 0$, 1 n 2/3 It is easy to check from [13] that this theorem still holds when started from the "invariant measure" $\mu_\beta$. More precisely, $R_n \to 0$ with $R_n := \sup_{a\in\mathbb Z} \big| n^{3/2}\, \mathbf P_{\beta,\mu_\beta}(G_n = a \mid \tau = n) - w(a/n^{3/2}) \big|$ (6.13). Without loss of generality we shall assume that $R_n$ is non-increasing.
6.2. Proof of Theorem 2.1 (2). Using the random walk representation (3.5), we obtain, since $\Gamma_\beta = 1$, that the excess partition function is Then, we partition the event $\{G_N = L - N, V_{N+1} = 0\}$ depending on the length $r$ during which the random walk sticks to the origin before its right extremity, that is Then we use the fact that, for all $x \in \mathbb N$, we have $\mathbf P_\beta(U_1 = x)/\mathbf P_\beta(U_1 \ge x) = 1 - e^{-\beta/2}$, and therefore is the renewal set associated with the sequence of random variables $X_k := A_k + N_k$ (recall (6.1-6.3)).
It is clear that we are going to obtain the same asymptotics for $Z_{L,\beta}$ if we substitute $\mathbf P_{\beta,\mu_\beta}$ for $\mathbf P_\beta$ in the r.h.s. of (6.16), that is, if we consider a true renewal process with the random variable $X_1$ having the same distribution as the $X_i$ for $i \ge 2$. Thus, the proof of Theorem 2.1 (2) will be a consequence of the tail estimate of $X$ under $\mathbf P_{\beta,\mu_\beta}$ in the next lemma.
Proof of (6.18). We recall that $\tau_1 = N_1$ and we drop the index 1 for simplicity. First, we use that for all $j, k \in \mathbb N$ (6.20) and then split the probability of the r.h.s. in (6.20) into two terms: From this equality, the proof is divided into two steps. The first step consists in controlling $u_n$ and the second in controlling $v_n$.
Step 1. Our aim is to show that for all $\varepsilon > 0$ there exists an $\eta > 0$ such that Proof. In this step, we will need an improved version of the local limit theorem established in [7, Proposition 2.3] for $V_n$ and $A_n$ simultaneously.
$\sup_{n\in\mathbb N} \sup_{k,a\in\mathbb Z}$ with $g(y, z) = \frac{6}{\pi} e^{-2y^2 - 6z^2 + 6yz}$ for $(y, z) \in \mathbb R^2$. Proof of Proposition 6.6. Compared to what is done in [7], the improvement in (6.23) comes from the fact that the increments of the random walk under $\mathbf P_\beta$ have a finite fourth moment and a vanishing third moment. The proof is performed by using the Fourier inversion formula, for instance by mimicking the proof of [27, Theorem 2.3.10]. For this reason we will not repeat it here.
We resume the proof of (6.22) by bounding from above the probability that there exists a piece of the $V = (V_i)_{i=0}^n$ trajectory of length smaller than $n^{2/3} \log n$ with an algebraic area (seen from its starting point) larger than $n^2$, and/or that one of the increments of $V$ is larger than $(\log n)^2$. Thus, we set $B_n := C_n \cup D_n$ with C n := i∈{0,...,n−1} where $J_n = \{(j_1, j_2) \in \{0, \dots, n\}^2 \colon 0 \le j_2 - j_1 \le n^{2/3} \log n\}$ and $A_{s,t} = \sum_{i=s}^{t-1} V_i$. Then, for each $(j_1, j_2) \in J_n$ we apply the Markov property at $j_1$ and we get Since under $\mathbf P_{\beta,\mu_\beta}$ the random variable $V_1$ has small exponential moments, there exists a constant $C > 0$ such that We note that $A_{j_2 - j_1} \ge n^2$ implies $\max\{|V_i|, i = 1, \dots, j_2 - j_1\} \ge \frac{n^2}{2(j_2 - j_1)}$, so that finally we can use (6.25) to prove that there exists $C > 0$ such that which (recall (6.24)) suffices to claim that $\mathbf P_{\beta,\mu_\beta}(D_n) = o(1/n^{4/3})$. Moreover, $\mathbf P_{\beta,\mu_\beta}(V_1 \ge (\log n)^2) \le c e^{-\frac\beta2 (\log n)^2}$ suffices to conclude that $\mathbf P_{\beta,\mu_\beta}(C_n) = o(1/n^{4/3})$ which, in turn, implies that $\mathbf P_{\beta,\mu_\beta}(B_n) = o(1/n^{4/3})$. At this stage, for $k \le \eta n^{2/3}$, we can partition the set $\{A_k = n-k, \tau = k, V_k = 0\}$ depending on the indices at which a trajectory passes above $\sqrt k$ for the first and the last time. Thus, We also consider the positions of $V$ at $\xi_{\sqrt k}$ and $\xi'_{\sqrt k}$ and the algebraic areas below $V$ between 0 and $\xi_{\sqrt k}$, between $\xi_{\sqrt k}$ and $\xi'_{\sqrt k}$, and between $\xi'_{\sqrt k}$ and $k$. Thus, we set $\bar t = (t_1, t_2)$, $\bar x = (x_1, x_2)$, $\bar a = (a_1, a_2)$ and we write and with Then we set and we note that if $(\bar t, \bar x, \bar a) \in \cup_{k=1}^{\eta n^{2/3}} G_{k,n}$, then either $x_1 - \sqrt k \ge (\log n)^2$ and $Y_{n,k}(\bar t, \bar x, \bar a) \subset C_n$, or $x_1 \le \sqrt k + (\log n)^2$ and $k - t_1 - t_2 \le n^{2/3}/\log n$, and then, provided $\eta$ is chosen small enough, $Y_{n,k}(\bar t, \bar x, \bar a) \subset D_n$, so that Clearly, for $k \le n^{2/3}/\log n$ we have $G_{k,n} = \widetilde G_{k,n}$, so that we should simply focus on bounding from above $\mathbf P_{\beta,\mu_\beta}\big( \cup_{k= n^{2/3}/\log n}^{\eta n^{2/3}} \cup_{(\bar t,\bar x,\bar a)\in G_{k,n} \setminus \widetilde G_{k,n}} Y_{n,k}(\bar t, \bar x, \bar a) \big)$.
Pick $n^{2/3}/\log n \le k \le \eta n^{2/3}$ and $(\bar t, \bar x, \bar a) \in G_{k,n} \setminus \widetilde G_{k,n}$. By applying the Markov property at $t_1$ and $k - t_2$ we can write $\mathbf P_{\beta,\mu_\beta}(Y_{n,k}(\bar t, \bar x, \bar a)) = S_1 S_2 S_3$ (6.32) with Since we are looking for an upper bound of the r.h.s. in (6.32), we can easily remove the restriction $\{\tau > k - t_1 - t_2\}$ in $S_2$ and write Therefore, it remains to bound and we recall (6.34) and Proposition 6.6, which yield with $g(y, z) = \frac6\pi e^{-2y^2 - 6z^2 + 6yz} \le \frac6\pi e^{-\frac32 z^2}$. We recall (6.31) and we write and then for all $(\bar t, \bar x, \bar a) \in G_{k,n} \setminus \widetilde G_{k,n}$ we have $S_2 \le C_1 (\log n)^3/n^2$ and at the same time Thus, It remains to bound from above $\sum_{k=n^{2/3}/\log n}^{\eta n^{2/3}} \sum_{(\bar t,\bar x,\bar a) \in G_{k,n}\setminus \widetilde G_{k,n}} S_1 S_2 S_3$. We rewrite and we note that $x^2 e^{-3x^3/8} \le e^{-x^3/4}$ for $x$ large enough. Since $k \le \eta n^{2/3}$ it follows that $n^{2/3}/(k - t_1 - t_2) \ge n^{2/3}/k \ge 1/\eta$, so that by choosing $\eta$ small enough we get and the Riemann sum between brackets above converges to $\int_0^\eta \frac1x e^{-1/(4x^3)}\, dx$, so that the r.h.s. in (6.39) is smaller than $\varepsilon/n^{4/3}$ as soon as $\eta$ is chosen small enough, and this completes the proof.
Step 2. Our aim is to show that for all $\eta > 0$, Proof. By Theorem 8 of Kesten [26] (see also Theorem A.11 of [20]) and since $E_\beta[V_1^2] < +\infty$, we can state that $\mathbf P_\beta(\tilde\tau = n) \sim C n^{-3/2}$ with $C = (E_\beta[V_1^2]/2\pi)^{1/2}$ and with $\tilde\tau = \inf\{i \ge 1 \colon V_i \le 0\}$, which may differ from $\tau$ (recall (6.1)) only when $V_0 = 0$. In Appendix 7.2, we extend this local limit theorem to the random walk with initial distribution $\mu_\beta$ and we obtain We recall the definition of $R_n$ in (6.13) and we write We can establish by dominated convergence (see Remark 6.4) that By putting together (6.42) and (6.43) we obtain (6.40) and this completes the proof.
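Kesten-type tail estimates of this kind can be checked exactly in the simplest case of the simple random walk, where by a classical identity $P(S_1 > 0, \dots, S_{2n} > 0) = \frac12 \binom{2n}{n} 4^{-n} \sim 1/(2\sqrt{\pi n})$. The sketch below is ours (not from the paper) and verifies numerically the $n^{-1/2}$ survival decay that underlies the $n^{-3/2}$ local estimate.

```python
# Exact check (ours) of the n^{-1/2} survival tail for the simple random walk:
# P(S_1 > 0, ..., S_{2n} > 0) = (1/2) * C(2n, n) / 4^n ~ 1/(2*sqrt(pi*n)).
import math

def survival(n):
    # big-int division first, then halve, to avoid float overflow for large n
    return math.comb(2 * n, n) / 4 ** n / 2

for n in (100, 400, 1600):
    ratio = survival(n) * 2 * math.sqrt(math.pi * n)
    assert abs(ratio - 1.0) < 0.01   # matches the predicted constant
```

Differencing a survival probability of order $n^{-1/2}$ then produces a local tail of order $n^{-3/2}$, which is the shape of Kesten's estimate used above.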
6.3. Proof of Theorem 2.2 (2). Let $(U_i)_{i=1}^\infty$ be the sequence of inter-arrivals of a 1/3-stable regenerative set $\mathcal T$ on $[0, 1]$, conditioned on $1 \in \mathcal T$, and denote by $(\widehat U_i)_{i=0}^\infty$ its order statistics. Let $(Y_i)_{i=1}^\infty$ be an IID sequence of continuous random variables, independent of the former (6.44). We are first going to prove that where the second identity in law in (6.45) is straightforward, and then we shall identify the limit with the distribution of $g_1$ conditionally on $B_{g_1} = 0$. We recall (6.1-6.3) and we consider the IID sequence of random vectors $(N_i, A_i)_{i=1}^\infty$ and we recall that $X_i = N_i + A_i$ for $i \in \mathbb N$. We recall that, under $\mathbf P_\beta$, the first excursion has law $\mathbf P_{\beta,0}$ and the next excursions have law $\mathbf P_{\beta,\mu_\beta}$. Let us set $S_n = X_1 + \dots + X_n$ and $v_L := \max\{i \ge 0 \colon S_i \le L\}$. We recall (6.17) and we consider the sequence under the law $\mathbf P_\beta(\cdot \mid L \in X)$. We denote by $X_{r_1} \ge \dots \ge X_{r_{v_L}}$ the order statistics of $(X_i)_{i \le v_L}$, such that if $X_{r_i} = X_{r_j}$ and $i < j$ then $r_i < r_j$. To simplify notation we set To begin with, we will prove (6.45) subject to Proposition 6.7 and Claim 6.8 below. Then, the remainder of this section will be dedicated to the proof of Proposition 6.7 and Claim 6.8.
(6.46)

Claim 6.8. For $\beta = \beta_c$, $t \in [0,\infty)$ and $L$ large enough, (6.47) holds.

Pick $t \in [0,\infty)$ and $\varepsilon > 0$. With Claim 6.8 and with (6.16) and (6.19) we obtain that there exists an $r_\varepsilon \in \mathbb N$ such that, provided $L$ is chosen large enough, we have $\sum_{r=0}^{r_\varepsilon} \xi_{r,L} \in [1-\varepsilon, 1]$ and $\sum_{r \ge r_\varepsilon} \xi_{r,L} \le \varepsilon$. Then, it suffices to apply Proposition 6.7 to each probability indexed by $r \in \{1, \ldots, r_\varepsilon\}$ in the r.h.s. of (6.47) to conclude that (6.45) holds for $L$ large enough, which completes the proof of (6.45).

Footnote 1: We refer to [6, Appendix A] for a self-contained introduction to the $\alpha$-stable regenerative sets on $[0,1]$ (see also [2]). In fact, it is useful to keep in mind that such a set is the limit in distribution of the set $\frac{\tau}{N} \cap [0,1]$ when $\tau$ is a regenerative process on $\mathbb N$ with an inter-arrival law $K$ that satisfies $K(n) \sim L(n)/n^{1+\alpha}$, with $L$ a slowly varying function. The $\alpha$-stable regenerative set can also be viewed as the zero set of a Bessel bridge on $[0,1]$ of dimension $d = 2(1-\alpha)$.
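The approximation described in the footnote can be illustrated with a short simulation. This is our own sketch (the function `renewal_set` and its parameters are not from the paper): it samples a renewal process with inter-arrival law $K(n) \propto n^{-(1+\alpha)}$ on $\{1,\ldots,N\}$ and returns the rescaled set $\frac{\tau}{N} \cap [0,1]$, which for $\alpha = 1/3$ approximates the $1/3$-stable regenerative set as $N$ grows.

```python
import bisect
import random

def renewal_set(N, alpha, rng):
    """Sample the rescaled renewal set (tau / N) ∩ [0, 1] for a renewal
    process with inter-arrival law K(n) ∝ n^(-(1+alpha)) on {1, ..., N}.
    As N grows this approximates an alpha-stable regenerative set on [0, 1]."""
    weights = [n ** (-(1.0 + alpha)) for n in range(1, N + 1)]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    points, s = [0.0], 0
    while s < N:
        # inverse-transform sampling of one inter-arrival
        s += bisect.bisect_left(cdf, rng.random()) + 1
        if s <= N:
            points.append(s / N)
    return points
```

The heavy tail of $K$ makes a few macroscopic inter-arrivals dominate the picture, which is the phenomenon behind the order statistics $(\widehat U_i)$ used in this section.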
Proof of Proposition 6.7. To begin with, let us distinguish between the $k$ excursions associated with the first $k$ variables of the order statistics $(X_{r_i})_{i=1}^{v_L}$ and the others. Then, the proof of Proposition 6.7 will be deduced from the following two steps.
Step 1. Show that, for all $k \in \mathbb N$ and under $P_\beta(\cdot \mid L \in X)$, the convergence (6.52) holds.

Step 2. Show that, for all $\varepsilon > 0$, the bound (6.53) holds.

Before proving (6.52) and (6.53), we need to settle some preparatory lemmas. To begin with, we let $F$ be the distribution function of $X$ under $P_{\beta,\mu_\beta}$, that is $F(t) = P_{\beta,\mu_\beta}(X \le t)$ for $t \in \mathbb R$, and $F^{-1}$ its pseudo-inverse, that is $F^{-1}(u) = \inf\{t \in \mathbb R : F(t) \ge u\}$ for $u \in (0,1)$.

Lemma 6.9. There exists $C > 0$ such that $F^{-1}(u) \le C/(1-u)^3$ for all $u \in (0,1)$.

Proof. The proof is a straightforward consequence of (6.18).
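For a law with finite support, the pseudo-inverse $F^{-1}(u) = \inf\{t : F(t) \ge u\}$ reduces to a binary search over the cumulative probabilities. A minimal sketch (the function names are ours):

```python
import bisect

def pseudo_inverse(values, cum_probs):
    """Return F^{-1} for a law supported on the sorted list `values`,
    where cum_probs[i] = F(values[i]).
    F^{-1}(u) = inf{t : F(t) >= u} = first value whose CDF reaches u."""
    def Finv(u):
        return values[bisect.bisect_left(cum_probs, u)]
    return Finv
```

For instance, with `values = [1, 2, 3]` and `cum_probs = [0.5, 0.8, 1.0]`, `Finv(0.8)` returns `2` while `Finv(0.81)` returns `3`, matching the infimum definition.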
Recall (6.44). The next lemma deals with the convergence in law, as $m \to \infty$, of the horizontal extension of an excursion renormalized by $m^{2/3}$ and conditioned on the area of the excursion being equal to $m$.

Lemma 6.10. For all $\beta > 0$ and all $m \in \mathbb N$, consider the random variable $N/m^{2/3}$ under the laws $P_{\beta,a}(\cdot \mid X = m)$ with $a \in \{0, \mu_\beta\}$. Then (6.55) holds.
To display an upper bound for the sequence $(E_{\beta,\mu_\beta}[N/m^{2/3} \mid X = m])_{m \in \mathbb N}$ it suffices of course to consider the bound obtained from (6.57). Since the second term in the r.h.s. in (6.58) clearly vanishes as $m \to \infty$, we focus on the first term; since $s \mapsto w(s)$ is continuous on $[0,\infty)$ and vanishes as $s \to \infty$, the function $w$ is uniformly continuous, and we can write the first term as a Riemann sum that converges to the corresponding integral, plus a remainder that vanishes as $m \to \infty$. This gives us the expected boundedness. Similarly, the convergence in law is obtained by picking $t \in [0,\infty)$ and $\eta \in (0,t)$ and decomposing $P_{\beta,\mu_\beta}(N/m^{2/3} \le t \mid X = m)$ accordingly. We note easily that $\widehat u_m = [(1-e^{-\beta/2})\, P_{\beta,\mu_\beta}(X = m)]^{-1} u_m$ with $u_m$ defined in (6.21). Therefore, (6.18) and (6.22) tell us that $\widehat u_m$ can be made arbitrarily small provided $\eta$ is small enough and $m$ large enough. Thus, it remains to deal with $\widehat v_m$, which, with the help of (6.18), is treated as the second term in the r.h.s. in (6.21). Thus, (6.40) tells us that there exists a $D > 0$ such that $\lim_{m\to\infty} \widehat v_m = \int_\eta^t D s^{-3} w(s^{-3/2})\, ds$, and this suffices to complete the proof of (6.55).

Lemma 6.11. For $\beta > 0$ and $\varepsilon > 0$, there exists a $c_\varepsilon > 0$ such that (6.61) holds for $L \in \mathbb N$.

Proof. Since, under $P_\beta$, only the first excursion has law $P_{\beta,0}$ and the others have law $P_{\beta,\mu_\beta}$, the proof of (6.61) will be complete once we show, for instance, (6.62) for $c$ large enough and $L \in \mathbb N$. We recall that, if $(\Gamma_i)_{i=1}^{cL^{1/3}+1}$ are the partial sums of $(\gamma_i)_{i=1}^{cL^{1/3}+1}$, a sequence of IID exponential random variables with parameter 1, then (6.63) holds with $\widehat\Gamma_{cL^{1/3}} = \gamma_2 + \cdots + \gamma_{cL^{1/3}+1}$. Thus we can rewrite the l.h.s. in (6.62) as (6.64) with $D_L = \widehat\Gamma_{cL^{1/3}}/\Gamma_{cL^{1/3}+1}$. After some easy simplifications we rewrite (6.64) as (6.65), and then we use Lemma 6.9 to claim that, as $L \to \infty$, the r.h.s. in (6.65) converges to $P(\gamma_1 \ge c\, C^{1/3})$. This completes the proof of the lemma.

(6.67)
Note that, for $y = 0$, the terms $P_{\beta,\mu_\beta}(X = L-n-y)$ and $P_{\beta,\mu_\beta}(X \ge \tfrac L2 - y)$ in the expressions of $G_L(y)$ and $K_L(y)$ should be replaced by $P_{\beta,0}(X = L-n-y)$ and $P_{\beta,0}(X \ge \tfrac L2 - y)$, respectively. However, this does not change anything in the sequel, and this is why we will focus on $y > 0$. It remains to prove that $G_L(y)$ and $K_L(y)$ are bounded above uniformly in $L \in \mathbb N$ and $y \in \{0, \ldots, L/2\}$. We will focus on the function $G_L$, since $K_L$ can be treated similarly. The constants $c_1, \ldots, c_4$ below are strictly positive and independent of $L, n, y$. By recalling (6.18), and since $L-n-y \ge L/4$ when $n \in \{0, \ldots, L/4\}$, we can claim that in the numerator of $G_L(y)$ the term $P_{\beta,\mu_\beta}(X = L-n-y)$ is bounded above by $c_1/L^{4/3}$ independently of $n$, while (6.19) tells us that $\sum_{n=0}^{L/4} P_{\beta,\mu_\beta}(n \in X) \le c_2 L^{1/3}$. Let us now deal with the denominator: (6.19) tells us that $P_{\beta,\mu_\beta}(L \in X) \ge c_3/L^{2/3}$, while (6.18) provides the remaining bound. As a consequence, $G_L(y)$ is bounded above uniformly in $L \in \mathbb N$ and $y \in \{0, \ldots, L/2\}$.
We resume the proof of Proposition 6.7.

Proof of Step 1 (6.52). The proof of Step 1 will be complete once we show the convergence in law of $\big(\tfrac{X_{r_1}}{L}, \ldots, \tfrac{X_{r_k}}{L}\big)$ as $L \to \infty$. To that aim, we consider $t \in \mathbb N$ and a sequence $(x_i)_{i=1}^{t}$ of strictly positive integers satisfying $x_1 + \cdots + x_t = L$, with order statistics $x_{r_1} \ge \cdots \ge x_{r_t}$. The key observation is that, by independence of the $(N_i, X_i)_{i \in \mathbb N}$, the conditional factorization (6.69) holds. We then consider $f_1, \ldots, f_k, g_1, \ldots, g_k$ real Borel and bounded functions and use (6.69) to obtain (6.70). Because of Lemma 6.5, we can assert that, under $P_\beta(\cdot \mid L \in X)$, the random set $(X/L) \cap [0,1]$ converges in law towards $\mathcal U$, the $1/3$-stable regenerative set on $[0,1]$ conditioned on $1 \in \mathcal U$ (the latter convergence is proven, e.g., in [6], Proposition A.8). As a consequence, $\big(\tfrac{X_{r_1}}{L}, \ldots, \tfrac{X_{r_k}}{L}\big)$ converges in law towards $(U_1, \ldots, U_k)$, which implies that $X_{r_1}, \ldots, X_{r_k}$ tend to $\infty$ in probability, and therefore we can use Lemma 6.10 to show that the l.h.s. in (6.70) converges as $L \to \infty$ to the announced limit. Thus, the proof of Step 1 is complete.

Proof of Step 2 (6.53). Under $P_\beta(\cdot \mid L \in X)$, reversibility yields (6.72). We set $v_{L/2} := \max\{i \ge 1 : S_i \le L/2\}$ and $\bar v_{L/2} := v_L - \min\{i \ge 1 : S_i \ge L/2\}$, so that $v_{L/2}$ and $\bar v_{L/2}$ have the same law. We also denote by $(N_{\mathrm{mid}}, A_{\mathrm{mid}}, X_{\mathrm{mid}})$ the features of the excursion containing $L/2$ in case $\tau_{v_{L/2}} < L/2$; in case $\tau_{v_{L/2}} = L/2$, we set $(N_{\mathrm{mid}}, A_{\mathrm{mid}}, X_{\mathrm{mid}}) = (0,0,0)$. By applying Lemma 6.11, we can state that the quantity $P_\beta(v_{L/2}, \bar v_{L/2} \le cL^{1/3} \mid L \in X)$ is arbitrarily close to $1$ uniformly in $L$, provided $c$ is chosen large enough. Thus, we set $H_{c,L} := \{v_{L/2}, \bar v_{L/2} \le cL^{1/3}\}$, and Step 2 will be complete once we show the required bound for each $c > 0$ and $\varepsilon > 0$. We recall (6.51) and we compute $E_\beta(B_{k,L} \mid H_{c,L})$ by conditioning on $\sigma(X_i, i \in \mathbb N)$ as we did in (6.69). We recall that $H_{c,L}$ is $\sigma(X_i, i \in \mathbb N)$-measurable and that, under $P_\beta$, the first excursion has law $P_{\beta,0}$ and the others $P_{\beta,\mu_\beta}$; this yields (6.75). But then, we can use Lemma 6.10, which allows us to bound by $M > 0$ each term $E_{\beta,a}[N/X^{2/3} \mid X = X_i]$ with $a \in \{0, \mu_\beta\}$. By using again the fact that there exists an $\eta > 0$ such that $P_\beta(H_{c,L} \mid L \in X) \ge \eta$ uniformly in $L$, we can claim that the proof will be complete once we show (6.76). We note that, under $P_\beta(\cdot \mid L \in X)$, we necessarily have $X_{r_i} \le L/k$ for $i \ge k+1$. For simplicity we assume that $k \in 2\mathbb N$, and we denote by $(\widetilde X_1, \ldots, \widetilde X_{v_{L/2}})$ the order statistics of the variables $(X_1, \ldots, X_{v_{L/2}})$ and by $(\overline X_1, \ldots, \overline X_{\bar v_{L/2}})$ the order statistics of the variables $(X_{v_L}, X_{v_L - 1}, \ldots, X_{1 + v_L - \bar v_{L/2}})$. Then, we can easily note that $\sum_{i=1}^{k} X_{r_i} \ge \sum_{i=1}^{k/2} \widetilde X_i + \overline X_i$, so that the expectation in (6.76) is bounded above by (6.77), where the factor $2$ in front of the first term is a direct consequence of (6.72). The second term in (6.77) is clearly bounded by $(1/k)^{2/3}$ and can therefore be omitted.
As a consequence, it suffices to show (6.78). The quantity $\sum_{i \ge k} (\widetilde X_i/L)^{2/3}\, \mathbf 1_{\{v_{L/2} \le cL^{1/3}\}}$ only depends on the random set of points $X \cap [0, L/2]$, and this allows us to use Lemma 6.12 to claim that proving (6.78) without the conditioning by $\{L \in X\}$ is sufficient. Therefore, we only need to estimate the quantity in (6.79), where $(\widetilde X_1, \ldots, \widetilde X_{cL^{1/3}})$ is the order statistics of $(X_1, \ldots, X_{cL^{1/3}})$ under $P_\beta$ without any conditioning. We recall that, if $(\Gamma_i)_{i=1}^{cL^{1/3}+1}$ are the partial sums of $(\gamma_i)_{i=1}^{cL^{1/3}+1}$, a sequence of IID exponential random variables with parameter 1, then (6.80) holds. Moreover, by Lemma 6.9, we can claim that there exists an $M > 0$ such that $F^{-1}(u) \le M/(1-u)^3$ for all $u \in (0,1)$. On the one hand, the corresponding ratio converges to $1$ as $L \to \infty$; on the other hand, for all $i \in \mathbb N \setminus \{0\}$, $\Gamma_i$ follows a Gamma law of parameter $(i,1)$, which entails that $E[1/\Gamma_i^4] = \frac{(i-5)!}{(i-1)!}$ for $i \ge 5$. Consequently, we can use (6.80) and (6.81) to complete the proof of the step.
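The representation of order statistics through partial sums of exponentials used in this step is the classical Rényi representation. A minimal numerical sketch (our own, not from the paper):

```python
import random

def uniform_order_statistics(n, rng):
    """Renyi representation: if Gamma_1 < ... < Gamma_{n+1} are the partial
    sums of n+1 IID Exp(1) variables, then (Gamma_i / Gamma_{n+1})_{i <= n}
    is distributed as the order statistics of n IID Uniform(0,1) variables.
    Composing with F^{-1} then yields order statistics for the law F."""
    partial_sums, s = [], 0.0
    for _ in range(n + 1):
        s += rng.expovariate(1.0)
        partial_sums.append(s)
    total = partial_sums[-1]
    return [g / total for g in partial_sums[:-1]]
```

The output is sorted by construction, which is exactly what makes this representation convenient for controlling the largest terms $\widetilde X_i = F^{-1}(\Gamma_i/\Gamma_{cL^{1/3}+1})$-type quantities above.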
Proof of Claim 6.8. We use again the random walk representation (3.5), and since $\Gamma_\beta = 1$, we obtain (6.82). Similarly to what we did in (6.14-6.16), we partition the event $\{G_N = L - N,\, V_{N+1} = 0\}$ according to the length $r$ over which the random walk sticks at the origin before its right extremity, that is
$$\sum_{r} \xi_{r,L}\; P_\beta\Big( \tfrac{r + N_1 + \cdots + N_{v_{L-r+1}}}{L^{2/3}} \le t \;\Big|\; L - r + 1 \in X \Big), \eqno(6.84)$$
where we recall the definition of $\xi_{r,L}$ in (6.48). This ends the proof of Claim 6.8.

We aim to identify formally the limiting distribution obtained above with the distribution of $g_1$ conditionally on $B_{g_1} = 0$.
Step 1: identifying the distribution of $Y_1$. We shall show that $Y_1$ is distributed as the extension of a Brownian excursion normalized by its area. More precisely, on the space of excursions $(U_\delta, \mathcal U_\delta)$ (see Revuz and Yor [29]), the operator that normalizes the area is $\eta(w) := s_{A(w)^{2/3}}(w)$. It satisfies $A(\eta(w)) = 1$, and there exists a probability $\gamma_A$ defined on $\{w \in U_\delta : A(w) = 1\}$, called the law of the Brownian excursion normalized by its area, that satisfies (with $n_+$ denoting the Itô measure of positive excursions), for every positive measurable $F, \psi$:
$$n_+\big[F(\eta(w))\,\psi(A(w))\big] = \gamma_A\big(F(w)\big)\; n_+\big(\psi(A(w))\big).$$
It is now just a matter of playing with Brownian scaling and with the characterization of the normalized Brownian excursion (see, e.g., [29, Chapter XII, Exercise 2.13]) to show that $Y_1$ has the distribution of $\zeta(w)$ with $w$ sampled from $\gamma_A$.
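The "playing with Brownian scaling" can be made explicit. Under the convention (ours, chosen to match the normalization $\eta(w) = s_{A(w)^{2/3}}(w)$ above) that the scaling operator acts as $s_c(w)(t) = w(ct)/\sqrt{c}$, a one-line computation shows where the exponent $2/3$ comes from:

```latex
% With \zeta(w) the length of the excursion w and A(w) = \int_0^{\zeta(w)} w(s)\,ds,
% the operator s_c(w)(t) := w(ct)/\sqrt{c} satisfies
A\bigl(s_c(w)\bigr) \;=\; \int_0^{\zeta(w)/c} \frac{w(ct)}{\sqrt{c}}\,dt
  \;=\; c^{-3/2}\,A(w),
\qquad
\zeta\bigl(s_c(w)\bigr) \;=\; \frac{\zeta(w)}{c}.
% Hence, taking c = A(w)^{2/3}, i.e. \eta(w) = s_{A(w)^{2/3}}(w),
A\bigl(\eta(w)\bigr) \;=\; 1,
\qquad
\zeta\bigl(\eta(w)\bigr) \;=\; \frac{\zeta(w)}{A(w)^{2/3}}.
```

In words, normalizing the area to $1$ divides the extension by the area to the power $2/3$, which is the continuum counterpart of considering $N/m^{2/3}$ under the conditioning $\{X = m\}$.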
Step 2: a Brownian construction of a $\frac13$-stable regenerative set. Observe that if $(\tau_t, t \ge 0)$ is the inverse local time at level $0$ of a Brownian motion $B$, then by the strong Markov property $(G_{\tau_t}, t \ge 0)$ is a subordinator. Since it has the scaling property $(G_{\tau_{ct}}, t \ge 0) \overset{law}{=} (c^3\, G_{\tau_t}, t \ge 0)$, its range $\mathcal R$ is a $\frac13$-stable regenerative set.

Step 3: a formal conditioning. We now have to take into account the conditioning. The $(U_i)_{i=1}^{+\infty}$ are the inter-arrivals of $T = \mathcal R \cap [0,1]$, conditionally on $1 \in T$. Let us explain why this conditioning is only formal: the sets on which we condition have zero probability, and thus the law of $T = \mathcal R \cap [0,1]$ conditioned by $\{1 \in T\}$ is defined in [6] through regular conditional distributions (formulas (1.19) and (1.20)).

7. Appendix

7.1. Perfect simulation procedure. We shall use the acceptance-reject algorithm. Let $X, X_1, X_2, \ldots$ be IID with values in $E$, let $A$ be a measurable subset of $E$ such that $P(X \in A) > 0$, and set $T := \inf\{n \ge 1 : X_n \in A\}$.
Then $T$ has a geometric distribution with parameter $P(X \in A)$, and $X_T$ is independent of $T$, with distribution the conditional law $P(X_T \in B) = P(X \in B \mid X \in A)$. Therefore we can use an acceptance-reject scheme to simulate a trajectory of the IPDSAW under $P_{L,\beta}$ (for $\beta = \beta_c$). The mean number of rejections per acceptance is the mean of this geometric random variable, which, thanks to Theorem 2.1, is of order $L^{2/3}$. In a nutshell, we have a perfect simulation algorithm with complexity $L^{2/3}$.
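The rejection scheme described above can be sketched in a few lines. This is our own illustration: the IPDSAW-specific proposal law is not reproduced, so a die roll conditioned to be even stands in for the conditional law $P(X \in \cdot \mid X \in A)$.

```python
import random

def acceptance_reject(sample_x, in_A, rng):
    """Draw IID copies X_1, X_2, ... via sample_x until the first index T
    with X_T in A; return (T, X_T).  T is geometric with parameter
    P(X in A), and X_T follows the conditional law of X given {X in A}."""
    t = 0
    while True:
        t += 1
        x = sample_x(rng)
        if in_A(x):
            return t, x
```

The expected number of iterations is $1/P(X \in A)$; with the acceptance probability of order $L^{-2/3}$ stated above, this gives the announced $L^{2/3}$ complexity.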