Martingales associated to peacocks using the curtain coupling

We consider right-continuous peacocks, that is, families of real probability measures (µ_t)_{t∈[0,1]} that are increasing in convex order. Given a sequence of time partitions we associate the sequence of martingales characterised by the fact that they are Markovian, constant on the partition intervals [t_k, t_{k+1}[, and such that the transition kernels at times t_{k+1} are the curtain couplings of the marginals µ_{t_k} and µ_{t_{k+1}}. We study the limit curtain processes obtained when the mesh of the partition tends to zero, and investigate existence, uniqueness and relevance with respect to the original data. For any right-continuous peacock we show that there exist sequences of partitions such that a limit process exists (for finite-dimensional convergence). Under certain additional regularity assumptions, we prove that there is a unique limit curtain process and that it is a Markovian martingale. We first study, by elementary methods, peacocks whose marginals are uniform distributions in convex order. In this case, the results and techniques complement those used in a parallel work by Henry-Labordère, Tan and Touzi [9]. We obtain the same type of results for all limit curtain processes associated to a class of analytic discrete peacocks, i.e., peacocks whose measures µ_t are finitely supported and vary analytically in t. Finally, we give examples of peacocks and sequences of partitions such that the limit curtain process is a non-Markovian martingale.

Since the seminal article [16] by Hans G. Kellerer it has been known exactly which families (µ_t)_{t∈R} of real probability measures are of the form (Law(X_t))_{t∈R} where (X_t)_{t∈R} is a martingale. A simple application of the conditional Jensen inequality gives a necessary condition: the function t ↦ µ_t has to be increasing in convex order. Indeed, if φ is a convex function and s ≤ t,
∫ φ dµ_s = E(φ(X_s)) ≤ E(φ(X_t)) = ∫ φ dµ_t.
Kellerer proved that this necessary condition is in fact sufficient, and he emphasised the role of the Markov property,

as attested by the title of his paper: Markov-Komposition und eine Anwendung auf Martingale. In fact the martingale (X t ) t∈R associated to (µ t ) t∈R can be chosen to be Markovian and this statement completes Kellerer's Theorem.
Theorem (Kellerer, 1972). Let (µ_t)_{t∈R} be a family of integrable probability measures on R. The following conditions are equivalent:
• the family of laws (µ_t)_{t∈R} is increasing in convex order;
• there exists a Markovian martingale (X_t)_{t∈R} with Law(X_t) = µ_t for every t ∈ R.
Kellerer's proof is of a topological and functional-analytic nature and does not involve many purely probabilistic arguments. It relies on the compactness of certain sets of measures on finitely many copies of R. In particular, Kellerer does not explicitly construct the martingale (X_t)_{t∈R} in his theorem. Quite recently, from a more stochastic point of view, motivated by questions in mathematical finance, Hirsch, Profeta, Roynette and Yor devoted a book [11] to defining explicit martingales associated to (µ_t)_t, the existence of which is ensured by Kellerer's Theorem. They studied processes having marginal distributions increasing in convex order, in particular classes of processes that generalise their "guiding example" X_t = t^{-1} ∫_0^t exp(B_s − s/2) ds. Such processes were called "Processus Croissants pour l'Ordre Convexe", abbreviated to PCOC and pronounced "peacock". In the present paper we investigate a particular martingale construction based on optimal transport. In keeping with the terminology of later papers [12, 13, 9, 4], where the initial data are families of measures (µ_t)_{t∈R} instead of processes, any family of measures increasing in convex order will also be called a peacock. In this paper we restrict to peacocks indexed by a compact interval, which most of the time will be [0, 1]. We furthermore restrict to right-continuous peacocks (for the weak topology), making it possible to consider càdlàg martingales in this context. The goal of the present paper is to focus on one particular martingale construction based on (martingale) optimal transport theory. For every peacock (µ_t)_{t∈[0,1]} we construct a Markovian martingale (X_t)_{t∈[0,1]} attached to a discrete grid 0 ≤ t_0 < t_1 < · · · < t_N = 1 such that
• Law(X_t) = µ_{t_k} for every t ∈ [t_k, t_{k+1}[, where k ∈ {0, . . . , N − 1},
• Law(X_{t_k}, X_{t_{k+1}}) is the (left-)curtain coupling of µ_{t_k} and µ_{t_{k+1}} (this coupling comes from martingale transport theory and will be introduced and commented on in Subsection 1.1).
Using sequences of these interpolating martingales we investigate the convergence of these processes when the mesh of the grid tends to zero. If a limit process exists we call it a limit curtain process. Such a limit process may fail to exist, may fail to be unique when it does exist, and depends on the topology of convergence. We say that a limit curtain process (X_t)_{t∈[0,1]} of (µ_t)_{t∈[0,1]} is relevant if it is a martingale for its own filtration, has the proper marginals, i.e., Law(X_t) = µ_t at every time t ∈ [0, 1], and is concentrated on càdlàg paths.
Theorem A (Propositions 2.4 and 2.6). Given any right-continuous peacock (µ_t)_{t∈[0,1]}, for the finite dimensional topology (see Subsection 2.4) the set of limit curtain processes is not empty and contains at least one relevant process (X_t)_{t∈[0,1]}, that is, a càdlàg martingale with Law(X_t) = µ_t at every time.
For the Skorokhod topology all limit curtain processes are relevant (but it is not known whether or not this set is empty).
Remark 0.1. Our proof of Theorem A is new but the techniques, in particular the technique of considering a martingale on a countable subset of [0, 1], have been used several times in the peacock literature [3,8,12] or on related topics since at least 1978 [15].
We study some representative examples of peacocks, both continuous and discrete, where continuous or discrete refer to the distributions µ t involved in the peacocks. For some of them, we prove the existence and uniqueness of the limit curtain process in the Skorokhod topology, for arbitrary sequences of partitions.
The following result on peacocks for uniform measures is obtained by using direct methods.
Theorem B ( Proposition 3.1, Subsection 2.3 and end of Section 3). Let µ t be the uniform measure on [−f (t), f (t)], where f is a positive, increasing, and continuous function on [0, 1]. There exists a unique limit curtain process of (µ t ) t∈[0,1] and it is Markovian.
The process consists of piecewise increasing trajectories with downward jumps, at random times t, to the bottom −f(t) of the support interval. The jump intensity is independent of the position and equals 2^{−1} d log(f(t))/dt. For f(t) = exp(2t)/2, the set of jump times is a standard Poisson point process.
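For f(t) = exp(2t)/2 this description can be turned into a short simulation. The sketch below is our own illustration, not taken from the paper: it anticipates the explicit flow x(t) = exp(2t)/2 − C exp(t) computed in Section 3, draws the jump times from a standard Poisson point process, and jumps down to the bottom −exp(2t)/2 at each arrival.

```python
import math
import random

def sample_X1(rng):
    """One draw of X_1 for the limit curtain process of the peacock
    mu_t = Uniform[-exp(2t)/2, exp(2t)/2] (sketch following the
    description of the process in Section 3)."""
    x0 = rng.uniform(-0.5, 0.5)    # X_0 ~ mu_0 = Uniform[-1/2, 1/2]
    C = 0.5 - x0                   # flow x(t) = exp(2t)/2 - C*exp(t), x(0) = x0
    t = rng.expovariate(1.0)       # jump times form a standard Poisson process
    while t <= 1.0:
        C = math.exp(t)            # jump down to the bottom -exp(2t)/2
        t += rng.expovariate(1.0)
    return math.exp(2.0) / 2 - C * math.exp(1.0)
```

A Monte Carlo check of the marginal at time 1 (X_1 should be uniform on [−e²/2, e²/2], hence centred with variance e⁴/12) is a quick sanity test of the construction.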
We now state our results on the existence and the uniqueness of limit curtain processes for discrete peacocks. This type of result is not typical in the peacock literature because the measures µ t are often absolutely continuous or smooth, for instance the result of a diffusion, and discrete measures are usually seen as general measures that may be obtained as limits.
Under suitable assumptions on the discrete peacock (see Theorem C), there is a unique limit curtain process and this process is Markovian.
Finally, we give examples of peacocks with non-Markovian limit curtain processes. This may occur both in the discrete and continuous situation.
This limit is a local Lévy process. Our closest result is Theorem B. The peacocks that we consider there do not satisfy Assumptions 3.1 and 3.4. Neither does our main peacock [µ], with µ_t uniform on [−1/2 exp(2t), 1/2 exp(2t)], satisfy Assumption 3.4 (iii) (but it satisfies Assumption 3.1). Nevertheless, this peacock is not far from entering the setting of [9] and its limit curtain process appears prototypical for the local Lévy processes of Henry-Labordère, Tan and Touzi. From this perspective the class of peacocks considered in [9] may appear more general than our uniform peacocks (peacocks with uniform laws). Note finally that the methods used in the proofs are different.
Finally the third main result of [9] is in Section 3.3. It is related to a characterisation of the limit curtain process as the optimiser of a continuous-time martingale transport problem and of its dual problem. It completely departs from our Theorems C and D. In particular, discrete peacocks are not addressed in [9].
Our paper is organised as follows. In Section 1 we introduce the necessary definitions related to martingale transport and convex order, and make explicit the relation between the peacock problem and optimal transport theory. The next sections are devoted to proving the theorems of the introduction: Theorem A in Section 2, Theorem B in Section 3, Theorem C in Section 4 and Theorem D in Sections 5 and 6. We conclude the paper with two remarks and by suggesting some open questions (Section 7).

Reminders about the convex order and the curtain coupling
On R, let P be the space of probability measures and M the space of positive measures with finite first moments. We denote by P 1 the space P ∩ M of probability measures with finite first moments. For every metric space (S, ρ), we denote by P(S), M(S) and P 1 (S) the corresponding spaces. On P 1 (S) we denote by T cb (S) the usual weak topology of P(S) induced by the continuous bounded functions.
We will use the Prokhorov distance for two types of spaces. First, the space (R^j, ‖·‖_∞).

Definitions of the shadows and the curtain coupling
The aim of this subsection is to introduce the (left-)curtain coupling, which is the essential feature of our approach to the peacock problem. Its definition is based on the shadow projection that is defined in Definition 1.3 and for which we recall important properties and examples.

Definition 1.1 (Usual, convex and extended convex order on M). Let µ and ν be measures in M. The measure ν is greater than µ in convex order if for any convex function ϕ : R → R we have ∫ ϕ dµ ≤ ∫ ϕ dν.
We write µ ≤_C ν.

Proof. The first assertion is a famous theorem of Strassen [26, Theorem 8] applied to dimension one. The second is Proposition 4.4 in [5].
In [5,Lemma 4.6] the following important theorem-definition is proven.

Definition 1.3 (Definition of the shadow).
If µ ≤_E ν, there exists a unique measure η such that
• µ ≤_C η,
• η ≤ ν,
• if η′ satisfies the two first conditions, i.e., µ ≤_C η′ ≤ ν, one has η ≤_C η′.
This measure η is called the shadow of µ in ν and we denote it by S ν (µ).
The shadows are sometimes difficult to determine. An important fact is that S^ν(µ) has the smallest variance among the measures η′ satisfying µ ≤_C η′ ≤ ν. Indeed, η ≤_C η′ implies ∫ x dη = ∫ x dη′ and ∫ x² dη ≤ ∫ x² dη′, with equality if and only if η = η′ or ∫ x² dη′ = +∞. For every µ ∈ P we denote by G_µ the quantile function of µ. Recall that it is the unique left-continuous increasing function from ]0, 1[ to R satisfying (G_µ)_# λ_{]0,1[} = µ.

Example 1.4 (Shadow of an atom, Example 4.7 in [5]). Let δ be an atom of mass α at a point x. Assume that δ ≤_E ν. Then S^ν(δ) is the restriction of ν between two quantiles; more precisely it is ν′ = (G_ν)_# λ_{]s,s′[} where s′ − s = α and the barycentre of ν′ is x.
The next lemma describes the tail of the shadows. We denote by spt the support of a positive measure, i.e., the smallest closed set of full measure.
The following result is one of the most important on the structure of shadows. It is Theorem 4.8 of [5].

Proposition 1.6 (Structure of shadows). Let γ_1, γ_2 and ν be elements of M and assume that µ = γ_1 + γ_2 ≤_E ν. Then we have γ_2 ≤_E ν − S^ν(γ_1) and
S^ν(γ_1 + γ_2) = S^ν(γ_1) + S^{ν − S^ν(γ_1)}(γ_2).

Example 1.7 (Shadow of a finite sum of atoms). Let µ be the measure Σ_{i=1}^n α_i δ_{x_i} and ν = G_# λ_{]0,m]} such that µ ≤_E ν. We can apply Proposition 1.6 to this sum as well as Example 1.4 on the shadow of one atom. We obtain recursively the following description.
There exists an increasing sequence of sets J_1 ⊆ · · · ⊆ J_n ⊆ ]0, m] such that J_k has Lebesgue measure α_1 + · · · + α_k and the shadow of the k first atoms is G_# λ_{J_k}. With the shadow projections, we can introduce the left-curtain coupling. Notice that for an atomic measure µ it is directly related to Example 1.7, where (x_i)_i has to be an increasing sequence.

Definition 1.8 (Left-curtain coupling, Theorem 4.18 in [5]). Let µ, ν ∈ M satisfy µ ≤_C ν. There exists a unique measure π ∈ Π_M(µ, ν) such that for any x ∈ R the measure π restricted to ]−∞, x] × R has first marginal µ_{]−∞,x]} and second marginal S^ν(µ_{]−∞,x]}). We denote it by π_lc and call it the left-curtain coupling, or simply the curtain coupling.
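Example 1.4, Proposition 1.6 and Definition 1.8 combine into an effective recursion for finitely supported measures: process the atoms of µ from left to right, and shadow each one into what remains of ν. The sketch below is our own illustration (all function names are ours); it locates the quantile window ]s, s + α[ of Example 1.4 by bisection, since the barycentre of the window is nondecreasing in s.

```python
def _window_integral(nu, s, e):
    """Integral of the quantile function G_nu over ]s, e] (nu: sorted (x, mass) atoms)."""
    cum, total = 0.0, 0.0
    for x, w in nu:
        lo, hi = cum, cum + w
        total += max(0.0, min(hi, e) - max(lo, s)) * x
        cum = hi
    return total

def _restrict(nu, s, e):
    """The measure (G_nu)_# lambda_]s,e] as a list of atoms."""
    cum, out = 0.0, []
    for x, w in nu:
        lo, hi = cum, cum + w
        m = max(0.0, min(hi, e) - max(lo, s))
        if m > 1e-15:
            out.append((x, m))
        cum = hi
    return out

def shadow_of_atom(nu, x, alpha):
    """Shadow of alpha*delta_x in nu (Example 1.4): the quantile window of
    width alpha whose barycentre is x, found by bisection (a solution
    exists as soon as alpha*delta_x <=_E nu)."""
    total = sum(w for _, w in nu)
    lo, hi = 0.0, total - alpha
    for _ in range(80):
        s = (lo + hi) / 2
        if _window_integral(nu, s, s + alpha) / alpha < x:
            lo = s
        else:
            hi = s
    s = (lo + hi) / 2
    return s, _restrict(nu, s, s + alpha)

def left_curtain(mu, nu):
    """Left-curtain coupling of atomic mu <=_C nu, as a dict (x, y) -> mass."""
    pi = {}
    for x, alpha in sorted(mu):            # atoms of mu from left to right
        s, shadow = shadow_of_atom(nu, x, alpha)
        for y, w in shadow:
            pi[(x, y)] = pi.get((x, y), 0.0) + w
        # remove the shadow: keep the quantile mass outside ]s, s + alpha]
        total = sum(w for _, w in nu)
        nu = _restrict(nu, 0.0, s) + _restrict(nu, s + alpha, total)
    return pi
```

For µ = (1/2)δ_0 + (1/2)δ_1 and ν uniform on {−1, 0, 1, 2}, the atom at 0 is shadowed by (1/8)δ_{−1} + (1/4)δ_0 + (1/8)δ_1 and the atom at 1 takes the remainder; the resulting π is a martingale coupling with second marginal ν.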
One of the main theorems of [5] is the equivalence of three properties of couplings: left-curtain, left-monotone and optimal. Let us define left-monotone couplings.

Definition 1.9 (Left-monotone coupling). Let π be an element of Π_M(µ, ν). The coupling π is left-monotone if there exists a Borel set Γ with
• π(Γ) = 1,
• for all elements (x, y⁻), (x, y⁺) and (x′, y′) of Γ satisfying x < x′ and y⁻ < y⁺, the real y′ is not an element of ]y⁻, y⁺[.

Pertinence of the curtain coupling and a possible martingale analogue of the constrained Benamou-Brenier transport problem
The next paragraphs aim to introduce more background on both martingale and non-martingale (classical) optimal transport theory. We motivate our intention to develop the theory of the curtain coupling, notably in relation with peacocks, by the fact that it is the natural counterpart of this rich and well-developed classical theory. We believe that the curtain coupling could play the same central role in the martingale setting as the quantile coupling does in the classical setting. We stress that the curtain coupling is defined for all pairs µ ≤_C ν and can be conveniently worked with, which is rare for martingale couplings.
The classical optimal transport problem is the minimisation of π ↦ ∫ c dπ where π varies in Π(µ, ν). For specific cost functions c the minimiser is uniquely determined. One of the most studied situations is that of a smooth c : R² → R with ∂_x∂_y c < 0, e.g. c(x, y) = (y − x)² or −xy. The two latter examples amount to the maximisation of the covariance over joint laws of µ and ν. In general, if ∂_x∂_y c < 0 the minimiser is the celebrated quantile coupling, also known as the covariant coupling (see, e.g., [25]). Still in the case ∂_x∂_y c < 0, if µ is continuous (its atomic part is zero), the optimal transport plan can be defined by the increasing map T = G_ν ∘ F_µ, where we recall that G_ν is the quantile function of ν and F_µ is the cumulative distribution function of µ. Denoting by f ⊗ g the function x ↦ (f(x), g(x)), we have π = (Id ⊗ T)_# µ. In particular π is concentrated on the graph of T. In probabilistic terms, if X is a random variable of law µ, the optimal transport plan is the deterministic coupling Law(X, T(X)). In the martingale transport problem, in the case ∂_x∂_y∂_y c < 0, when µ is continuous, the shape of the optimal coupling is very similar. In fact the curtain coupling is concentrated on the graphs of two maps T_1 and T_2. Moreover, as a consequence of the left-monotonicity (recall Definition 1.9), T_2 is increasing and for x′ > x, T_1(x′) must not be in the interval ]T_1(x), T_2(x)[; see [10, 5] for more explicit details.
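For empirical measures with n equally weighted atoms, the quantile coupling π = (Id ⊗ T)_# µ with T = G_ν ∘ F_µ reduces to pairing the sorted samples. The following minimal sketch is ours, not from the paper:

```python
def quantile_coupling(xs, ys):
    """Quantile (monotone) coupling of two empirical measures with the same
    number of equally weighted atoms: the i-th smallest x is paired with
    the i-th smallest y, realising T = G_nu o F_mu."""
    assert len(xs) == len(ys)
    return list(zip(sorted(xs), sorted(ys)))

def covariance(pairs):
    """Empirical covariance of a coupling given as a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    return sum((x - mx) * (y - my) for x, y in pairs) / n
```

By the rearrangement inequality this pairing maximises the covariance among couplings of the two empirical measures, matching the costs c(x, y) = −xy or (y − x)² discussed above.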
Compared with Proposition 1.10 and Remark 1.11, it appears that the curtain coupling is the natural martingale transport counterpart to the quantile coupling. A first observation is that these are optimal transport plans for cost functions satisfying the same condition up to one derivative, or having the same expression up to one power degree; a second observation is that these couplings are characterised by combinatorial monotonicity conditions on the maps T_1, T_2 or T; and a third observation is that they can be defined using shadows for the first-order or second-order stochastic dominance (≤_sto or ≤_C, respectively). By the last observation we mean that the quantile coupling can be defined by specifying that the measure µ_{]−∞,x]} is transported onto the smallest measure for ≤_sto among the measures η that have mass µ(]−∞, x]) and satisfy η ≤ ν.
A very important tool in optimal transport theory is the displacement interpolation between two probability measures µ = µ_0 and ν = µ_1, introduced by McCann in [23]. In the case of the real line, or of a Euclidean space, it is simply the interpolation by the curve (µ_t)_{t∈[0,1]} defined by µ_t = Law((1−t)X + tY), where π = Law(X, Y) is an optimal transport plan from µ to ν. In Euclidean spaces the process t ↦ (1−t)X + tY is a minimiser of a famous dynamical transport problem introduced by Benamou and Brenier [6]. This dynamical problem makes sense on a wide class of geodesic metric spaces. In probabilistic terms, the problem is to find a process (X_t)_{t∈[0,1]} with Law(X_0) = µ and Law(X_1) = ν minimising a certain action A. For a minimiser (X_t)_{t∈[0,1]}, the curve (µ_t)_{t∈[0,1]} defined by µ_t = Law(X_t) is a displacement interpolation between µ and ν. In view of the success of the displacement interpolation, it is natural to search for an appropriate martingale displacement interpolation. In [3], the Skorokhod embedding problem has been approached via optimal transport methods, and the martingales obtained in that paper can be regarded as martingale displacement interpolations.
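On the real line the displacement interpolation µ_t = Law((1−t)X + tY) can be computed explicitly for empirical measures, using the quantile coupling as the optimal plan. A small sketch of ours:

```python
def displacement_interpolation(xs, ys, t):
    """Atoms of mu_t = Law((1-t)X + tY), where (X, Y) is the quantile
    coupling of the two empirical measures (equal weights 1/n): pair
    sorted samples and move each atom linearly."""
    assert len(xs) == len(ys) and 0.0 <= t <= 1.0
    return [(1 - t) * x + t * y for x, y in zip(sorted(xs), sorted(ys))]
```

The barycentre of µ_t moves linearly from that of µ to that of ν, and t = 0, 1 recover the two marginals.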
From the perspective of optimal transport, the interpretation of the peacock problem is slightly different. In our problem not only µ 0 and µ 1 are fixed as in the Skorokhod problem or the Benamou-Brenier dynamical problem but also all the intermediate marginals. However, for a given curve (ν t ) t∈[0,1] , the Benamou-Brenier minimisation problem still makes sense under the additional constraint Law(X t ) = ν t for every t.
This has been studied by Ambrosio, Gigli, Savaré and Lisini [1, 19] in the setting of metric spaces. These authors proved, among other results [19, proof of Theorem 5], that the minimising process can be approximated using a discretisation of time and the optimal transport plans between ν_{t_k} and ν_{t_{k+1}} for t_k = k/2^n and k = 0, . . . , 2^n − 1. Therefore, both our goal to define a process with prescribed marginals, and our method of considering optimal transport plans for the measures indexed on a partition, are similar to those appearing in optimal transport theory. From this perspective our construction of a martingale associated to a peacock can be regarded as the martingale analogue of the classical optimal transport theory developed in [1, 19]. Note finally that in [9] the authors are able to define a relevant martingale counterpart (where it is presented as a maximisation problem); see also Remark 0.2 of the present paper. Nevertheless, it is neither known which peacocks and processes minimise A when only µ_0 and µ_1 are fixed, nor what the martingale transport plans in Π_M(µ_0, µ_1) derived from it are.
For the sake of completeness, we finally mention that a different optimal transport problem with infinitely many marginals on R has been studied by Pass [24]. Its minimiser, the quantile process, alias the Kamae-Krengel process [15], is also a minimiser of the Benamou-Brenier problem.

From curtain couplings to limit curtain processes

Setting
Recall that we only consider right-continuous peacocks indexed on a compact interval, which most of the time will be [0, 1]. We remind the reader that (µ_t)_{t∈[0,1]} is a peacock if for every s ≤ t in [0, 1], µ_t is in P_1 and µ_s ≤_C µ_t. We shall sometimes abbreviate (µ_t)_{t∈[0,1]} to [µ]. At every time where the peacock strictly increases in convex order we are not in the equality case of the conditional version of Jensen's inequality, and thus the set of such times is countable.

Limit curtain processes
Given any peacock [µ], we describe a procedure to construct a process using a discretisation. More precisely, for any interval partition σ = {t_0, . . . , t_N} of [0, 1] with 0 = t_0 < · · · < t_N = 1, we denote by P^{µ,σ} the law, on the Skorokhod space D[0, 1], of a process (X^{µ,σ}_t)_{t∈[0,1]} that we describe now: a random trajectory is (almost surely) constant on every interval [t_k, t_{k+1}[, and for every k < N the joint law of (X^{µ,σ}_{t_k}, X^{µ,σ}_{t_{k+1}}) is the left-curtain coupling between µ_{t_k} and µ_{t_{k+1}}. In order for P^{µ,σ} to be uniquely determined we moreover need to specify that X^{µ,σ} is a Markov process.
Notice that, as the process is constant most of the time, it can be seen as an inhomogeneous Markov chain with transitions at the deterministic times (t_k)_{k=0,...,N}. Note moreover that the transitions are martingale transport plans and E(X^{µ,σ}_t | F_s) = E(X^{µ,σ}_t | X^{µ,σ}_s) because the process is Markovian.
We are ready for the definition of limit curtain processes. Actually, any element P ∈ LimCurt([µ]) is relevant for the peacock problem because any process (X_t)_{t∈[0,1]} of law P (i) satisfies Law(X_t) = µ_t for every t, and (ii) is a martingale (see Proposition 2.6 for the proof of these two points). One of our goals is to provide sufficient conditions on [µ] ensuring that LimCurt([µ]) is not empty, or, even better, reduced to a single element. The other goal is to examine whether there are Markovian or non-Markovian processes among the limit processes.

Finite dimensional topology
In a similar way as we did for LimCurt([µ]) we introduce LimCurtFD([µ]). A law P on R^{[0,1]}, with its cylindrical σ-field, is called a limit curtain process for the finite dimensional topology of [µ] if there exists a sequence of partitions (σ^{(p)})_{p∈N}, with mesh |σ^{(p)}| going to 0 as p tends to +∞, such that P^{µ,σ^{(p)}} tends to P in the sense of weak finite-dimensional convergence.
The following example shows that a limit curtain process for the finite dimensional topology may not satisfy the condition Law(X_t) = µ_t. It may also be a measure on R^{[0,1]} that is not obtained from a measure on D[0, 1].

Proposition 2.4. For any right-continuous peacock [µ], the set LimCurtFD([µ]) is not empty and contains at least one relevant process.
More precisely, for any nested sequence of partitions (σ (p) ) p∈N such that ∪ p σ (p) is dense and contains the points of discontinuity of t → µ t , there exists a subsequence of (P µ,σ (p) ) p∈N converging to a limit P ∈ LimCurtFD([µ]). Moreover, there exists a process (X t ) t∈[0,1] associated with P that is a (maybe non-Markovian) càdlàg martingale with 1-dimensional marginals µ t at any time t ∈ [0, 1].
Sketch of proof of Proposition 2.4. We mostly follow the proof of [12, Theorem 3.2] and will cite the numbering of this article. The only important difference concerns the partition sequence (σ^{(p)})_p, which may not be dyadic and must include the times of discontinuity. This is an essential feature for the finite-dimensional convergence proved after Lemma 2.5, but not for Theorem 3.2 in [12], which is a pure existence theorem. At step (2) we adapt the proof as follows: we take X^{µ,p}_t = X^{µ,p}_{t_p} instead of 0, where t_p is the greatest element of σ^{(p)} that is smaller than t. Moreover, we choose left-curtain couplings as Markov transitions instead of an arbitrary martingale provided by Strassen's theorem. Observe that after step (2) we are really considering P^{µ,p} := P^{µ,σ^{(p)}} and an associated process X^{µ,p}, as described in Subsection 2.2. We can follow the steps of Hirsch and Roynette until the end. We obtain a measure P on D[0, 1] as the limit of a subsequence (P^{µ,ϕ(p)})_p of (P^{µ,p})_p in the following sense: when finitely many times of ∪_p σ^{(p)} are selected, the joint projections of P^{µ,ϕ(p)} on these marginals converge to the corresponding projection of P. Hence there exists a càdlàg martingale (X_t)_{t∈[0,1]} that has the marginal laws prescribed by P when projected on countably many copies of R with indices in ∪_p σ^{(p)}. The fact that such a process exists is granted by the classical theory of continuous martingales (see step (5)). Moreover, using the continuity at times t ∉ ∪_p σ^{(p)}, the 1-marginals are µ_t on this set also.
The only element that must be added to Hirsch and Roynette's proof is the finite-dimensional convergence for finitely many times not all in ∪_p σ^{(p)}. For this purpose we need a lemma.

Lemma 2.5. Let ν_r be an element of P_1 and ε > 0. There exists α > 0 such that the set D of martingale transport plans whose two marginals are at distance at most α from ν_r is contained in the ball of centre (Id ⊗ Id)_# ν_r and radius ε (for any fixed distance metrising T_cb).

Proof. The result is due to the fact that Π_M(ν_r, ν_r) = {(Id ⊗ Id)_# ν_r}. Let (π_n)_n be a sequence of martingale transport plans with both marginals converging to ν_r. The set D is tight because the marginals of every π ∈ D are uniformly close to ν_r. Thus Prokhorov's theorem shows that every subsequence of (π_n)_n has a further subsequence converging in the closed set Π_M. The limits must be in Π_M(ν_r, ν_r), so that (π_n)_n converges to (Id ⊗ Id)_# ν_r. The lemma follows from this remark.
Take real times r_1, . . . , r_j and, for every i ∈ N, elements q^i_1, . . . , q^i_j of ∪_p σ^{(p)} such that for every k ≤ j, (q^i_k)_i is a decreasing sequence converging to r_k. We specify moreover that q^i_k has to be r_k if r_k ∈ ∪_p σ^{(p)}. Moreover, times r_k ∉ ∪_p σ^{(p)} are continuity points of the peacock, so that in both cases Law(X_{q^i_1}, . . . , X_{q^i_j}) converges to Law(X_{r_1}, . . . , X_{r_j}) when i goes to infinity. We need to establish the commutation of limits lim_p(lim_i u_{i,p}) = lim_i(lim_p u_{i,p}); Lemma 2.5 will provide the required uniformity. For this lemma we choose the Prokhorov distance on the space of probability measures on (R^j, ‖·‖_∞) and denote it by Prok. Take ε > 0. We claim that for p sufficiently large (greater than some p_ε) the desired inequality holds as soon as i is large enough (greater than some i_ε that does not depend on p). Indeed, for every k = 1, . . . , j, Law(X^{µ,ϕ(p)}_{r_k}, X^{µ,ϕ(p)}_{q^i_k}) is an element of Π_M(µ_s, µ_t) where s and t have distance to r_k smaller than max(|σ^{ϕ(p)}|, |q^i_k − r_k|). With the same notation α as in Lemma 2.5 we choose p_{ε,k} sufficiently large so that |σ^{ϕ(p)}| ≤ α if p > p_{ε,k}. If now we also have |q^i_k − r_k| ≤ α, the Prokhorov distance, associated with the norm ‖·‖_∞ of R², between Law(X^{µ,ϕ(p)}_{r_k}, X^{µ,ϕ(p)}_{q^i_k}) and (Id ⊗ Id)_# µ_{r_k} is smaller than ε. Hence, considering a coupling as provided by the Strassen-Dudley theorem, one can couple these measures in a close way. Therefore, if (Ω, P) is the probability space on which X^{µ,ϕ(p)} is defined, one can bound Prok(Law(X^{µ,ϕ(p)}_{r_1}, . . . , X^{µ,ϕ(p)}_{r_j}), Law(X_{r_1}, . . . , X_{r_j})) as required.

Proposition 2.6. Let [µ] be a right-continuous peacock and (σ^{(p)})_{p∈N} a sequence of partitions such that (P^{µ,σ^{(p)}})_{p∈N} converges to some P in the Skorokhod topology. Then any process (X_t)_{t∈[0,1]} of law P satisfies Law(X_t) = µ_t for every t ∈ [0, 1] and is a martingale.
Proof. With the convergence in the Skorokhod space, the finite-dimensional convergence also holds for finitely many times selected in a set E = [0, 1] \ D, where D is countable (and 1 ∈ E). Without loss of generality, we can assume that the paths of (X_t)_{t∈[0,1]} are right-continuous and that (µ_t)_{t∈[0,1]} is right-continuous. It follows that Law(X_t) = µ_t is also satisfied for t ∈ D. Moreover, (X_t)_{t∈E} is a martingale on E (see for instance [12, Lemma 3.4]). Finally, (X_t)_{t∈[0,1]} is the regularised martingale obtained at any point as the limit from the right, in the sense of the classical theory of continuous martingales (same argument as step (5)).

Uniform measures on intervals
In this section we consider a prototypical peacock [µ]: for every t ∈ [0, 1], µ_t is a simple continuous measure, namely the uniform measure on an interval. The left-curtain coupling π_lc between two uniform measures, µ and ν, can easily be deduced from Definition 1.8. As explained in Subsection 2.3, the resulting coupling is invariant under translation and scaling, so that it is enough to look at µ = λ_{[0,1]} and ν = (1+2a)^{-1} λ_{[−a,1+a]} for some a > 0. Note now that, for every x ∈ [0, 1], the shadow of µ|_{]−∞,x]} in ν has the same mass and barycentre, respectively x and x/2, as µ|_{]−∞,x]}. It is readily proved that this shadow is the uniform measure (1+2a)^{-1} λ_{[−ax,(1+a)x]}. Therefore, the left-curtain coupling from µ to ν can be described with two linear maps. The submeasure (1+a)(1+2a)^{-1} λ_{[0,1]} ≤ µ is mapped linearly onto [0, 1 + a] and the remainder a(1+2a)^{-1} λ_{[0,1]} linearly onto [−a, 0]. The coupling can also be described with random variables. Let X be uniform on [0, 1] and Z be an independent Bernoulli variable Z ∼ B(a/(1 + 2a)). Then define Y = (1 + a)X if Z = 0 and Y = −aX if Z = 1. This gives π_lc = Law(X, Y). In Figure 1, on the left we illustrate the curtain coupling π_lc between two uniform measures. On the picture, an arrow from x to y corresponds to some ordered pair (x, y) in the support of π. Roughly speaking, the transport route is open for transporting mass from x to y. On the right of the figure we consider five uniform measures in convex order and draw the same arrows representing the four curtain couplings between them.
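The random-variable description above can be checked with exact arithmetic. The snippet below (our own illustration) verifies, for µ = λ_{[0,1]} and ν = (1+2a)^{-1} λ_{[−a,1+a]}, that Y = (1+a)X or Y = −aX gives the martingale property E(Y | X = x) = x and the correct image density 1/(1+2a) on both pieces of the support of ν.

```python
from fractions import Fraction

def curtain_two_uniforms_checks(a):
    """Exact checks for the curtain coupling of U[0,1] and U[-a, 1+a]:
    Y = (1+a)X with prob. (1+a)/(1+2a), Y = -aX with prob. a/(1+2a)."""
    p_down = a / (1 + 2 * a)          # P(Z = 1): jump onto the negative part
    p_up = 1 - p_down                 # P(Z = 0) = (1+a)/(1+2a)
    # E(Y | X = x) = [p_up*(1+a) + p_down*(-a)] * x, so the bracket must be 1
    martingale_coef = p_up * (1 + a) + p_down * (-a)
    # pushing U[0,1] through x -> (1+a)x (resp. x -> -ax) divides the density
    # by (1+a) (resp. a); both pieces must carry density 1/(1+2a)
    dens_up = p_up / (1 + a)
    dens_down = p_down / a
    return martingale_coef, dens_up, dens_down
```

With `Fraction` inputs every identity holds exactly, for any a > 0.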
The next result is the main part of Theorem B. The remaining part is based on Subsection 2.3 and proved at the end of the present section.
In simple words, at successive random times T, the process jumps down and starts a new piece of increasing curve from the position −exp(2T)/2.
Let us briefly explain how the Poisson point process appears. The curtain coupling of µ_t and µ_{t+h} is governed by the ratio of their support lengths. This ratio is exp(2h), which is independent of t. Therefore, the probability to jump down is 2^{−1}(1 − exp(−2h)), which is equivalent to h as h tends to 0. Recall that it is also independent of the position in the support of µ_t. In the limit, the probability to jump down during the period [t, t + h] is independent of both the position and t. The transition kernel between µ_t and µ_{t+h} is, up to a scaling factor, the same for fixed h > 0 and every t ∈ [0, 1 − h].
Our proof of Proposition 3.1 relies on the Euler approximation method and the approximation of the classical Poisson point process by Bernoulli processes.

Euler approximation method
We consider a continuous transition function T : (s, t, x) ∈ [0, 1] × [0, 1] × R ↦ T(s, t, x) ∈ R, defined for s ≤ t, such that the limit
V(t, x) = lim_{h→0⁺} (T(t, t + h, x) − x)/h
exists. We denote by R_T the remainder of the Taylor expansion T(t, t + h, x) = x + hV(t, x) + R_T(t, h, x). In the next proposition we compare, given any partition σ : 0 = t_0 < t_1 < · · · < t_N < 1 = t_{N+1} and two initial points x_0 and x̄_0, the solution x(t) of the ODE
x′(t) = V(t, x(t)), x(0) = x_0, (3.1)
with the Euler-type scheme starting at x̄_0, defined by x̄_{k+1} = T(t_k, t_{k+1}, x̄_k). The comparison between x̄_k and x(t_k) can be done at the discrete times t_k, but also at continuous times by associating with (x̄_k)_{k=0,...,N+1} the càdlàg function x̄ defined by x̄(t) = x̄_k on [t_k, t_{k+1}[. The proof follows the classical lines of the convergence of the Euler scheme in numerical analysis.

Proposition 3.2. Let T, V and R_T be the functions introduced above and assume that V is continuous, bounded, and L-Lipschitz in x for some L > 0. Let R_V be the local truncation error in the approximation of the flow at first order, and assume uniform estimates on |R_T| and |R_V|. Then there is a function F : (R⁺)² → R⁺, increasing in both arguments and with F(0, 0) = 0, such that ‖x − x̄‖_∞ ≤ F(|x_0 − x̄_0|, |σ|).
Note that the hypotheses on V ensure that (3.1) falls within the scope of the Picard-Lindelöf theorem.
Proof. We consider the one-step operation starting from x̄_k and x(t_k) on the interval [t_k, t_{k+1}]. Taking the difference of the two updates, we obtain a recursive estimate for |x(t_k) − x̄_k| which we unfold using the fact that ∏_{k=1}^n (1 + h_k) ≤ exp(h_1 + · · · + h_n) for positive real numbers h_k. It follows for n ≤ N that the error at time t_n is controlled by the initial error and the accumulated truncation errors.

Poisson point process
We state the following result without proof; it asserts that a Bernoulli process can be coupled with a Poisson process. We invite the reader to consult [2] on the Poisson approximation.

Lemma 3.3. Let (σ^{(p)})_{p∈N} be a sequence of interval partitions with mesh |σ^{(p)}| going to 0 as p tends to +∞. Then there exists a probability space on which one can define an increasing sequence of random variables (T_i)_{i∈N*} and, for every p ∈ N, an increasing sequence (T^{(p)}_i)_{i∈N*}, such that (T_i)_i is a standard Poisson point process, the (T^{(p)}_i)_i are the success times of Bernoulli processes along σ^{(p)}, and T^{(p)}_i converges to T_i as p tends to +∞.

The next lemma concerns the trajectories of the limit process suggested in Proposition 3.1. These satisfy, for s, s′, t, t′ ∈ [0, 1] and u ∈ [0, 1],
|g_s(s + u(t − s)) − g_{s′}(s′ + u(t′ − s′))| ≤ 10(|s′ − s| + |t′ − t|).
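The Bernoulli-to-Poisson approximation invoked in Lemma 3.3 can be quantified by Le Cam's inequality: the total variation distance between Binomial(n, λ/n), the number of successes of a Bernoulli process on a partition of mesh 1/n, and Poisson(λ) is at most λ²/n. The numerical check below is ours; the bound itself is classical (see [2]).

```python
import math

def tv_binomial_poisson(n, lam, kmax=60):
    """Total variation distance between Binomial(n, lam/n) and Poisson(lam),
    computed from the first kmax+1 probabilities (tails are negligible here)."""
    p = lam / n
    tv = 0.0
    for k in range(kmax + 1):
        b = math.comb(n, k) * p**k * (1 - p)**(n - k) if k <= n else 0.0
        q = math.exp(-lam) * lam**k / math.factorial(k)
        tv += abs(b - q)
    return tv / 2
```

Le Cam's bound gives tv ≤ λ²/n, so refining the partition makes the Bernoulli jump count converge to the Poisson count in total variation.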
The norm of these derivatives is bounded by e², so that the mean value inequality permits us to conclude.
Proof of Proposition 3.1. We prove that for a sequence (σ^{(p)})_p, the sequence P^{µ,p} := P^{µ,σ^{(p)}} converges in the Skorokhod topology to the law P of the process described in the statement of the proposition. Our strategy is to use the Prokhorov distance associated with the Skorokhod distance. In other words, for every ε > 0, we want to couple P and P^{µ,p} in D[0, 1] × D[0, 1] using a coupling Θ such that, with probability greater than 1 − ε, the Skorokhod distance between the two associated marginal càdlàg processes (X_t)_{t∈[0,1]} and (X^{µ,p}_t)_{t∈[0,1]} is smaller than ε.
It is also correct to perform the coupling in another probability space and this is what we will do with the probability space of Lemma 3.3 together with a uniform random value X 0 ∼ U([−1/2, 1/2]) independent of this space. We construct the process X as explained in the statement of the proposition. A random path starts from X 0 at time 0 and jumps down at times T 1 , . . . , T N . The piece of trajectory after the i-th jump is defined by g Ti .
Before we describe the piecewise constant process X^{µ,p}, let us introduce an intermediate process Y^p. A random path of Y^p starts at point X_0 and jumps at the discretised times T_i^{(p)} provided by Lemma 3.3. The process X^{µ,p} does not directly follow the trajectories g_s but is a discretisation of those trajectories in the sense of Proposition 3.2. A random path starts from X_0 and is constant on each interval [t_k^{(p)}, t_{k+1}^{(p)}[ of the partition. At each partition time, either the path jumps down, with a probability which is small, or it does a small jump upwards from x = X^{µ,p}_{k−1}. The vector field V corresponding to this transition T with respect to the definitions of Subsection 3.1 is V(t, x) = x + 2^{−1} exp(2t). Note that V is 1-Lipschitz in x and continuous in t. The solutions of the ODE (3.1) are of the form exp(2t)/2 − C exp(t), where C is a constant. For C = exp(s) we recover g_s. The trajectories of the flow starting from [−1/2, 1/2] at time 0, or from −2^{−1} exp(2s) for some s ∈ [0, 1] at time s, are bounded, and V is also bounded if (t, x) is in a bounded set.
EJP 23 (2018), paper 8.
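The explicit solutions mentioned above can be checked directly. The sketch below (ours) verifies that g_s(t) = exp(2t)/2 − exp(s + t), the solution with C = exp(s), starts at −exp(2s)/2 and satisfies the ODE (3.1) for this vector field V.

```python
import math

def g(s, t):
    # Trajectory followed after a jump at time s: the solution of (3.1)
    # with constant C = exp(s), i.e. g_s(t) = exp(2t)/2 - exp(s + t).
    return math.exp(2 * t) / 2 - math.exp(s + t)

def V(t, x):
    return x + math.exp(2 * t) / 2

s = 0.3
# g_s starts at the bottom -exp(2s)/2 of the support at time s ...
assert abs(g(s, s) - (-math.exp(2 * s) / 2)) < 1e-12
# ... and a centred difference quotient of t -> g_s(t) matches V(t, g_s(t)),
# so g_s solves the ODE.
h = 1e-6
for t in (0.4, 0.7, 1.0):
    deriv = (g(s, t + h) - g(s, t - h)) / (2 * h)
    assert abs(deriv - V(t, g(s, t))) < 1e-5
```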
We can now conclude, explaining that with high probability the trajectories of X are close to those of Y^p and that the trajectories of Y^p are close to those of X^{µ,p}. Of course, this holds if p is large enough. For the first estimate we consider the event on which the jump times T_i and T_i^{(p)} are uniformly close, together with an associated time change λ. With this λ used in the definition of the Skorokhod distance and Lemma 3.4, we see that the Prokhorov distance between Law(X) and Law(Y^p) is smaller than 10ε. As ε can be chosen arbitrarily small, this first distance tends to 0 as p tends to infinity.
The distance between Law(Y^p) and Law(X^{µ,p}) also tends to zero: Proposition 3.2 allows us to compare, on each interval between consecutive jumps, the piece of trajectory of one process with the corresponding piece of the other one, the latter beginning at g_{t_k^{(p)}}(t_k^{(p)}), so that the distance between the starting points tends to zero together with |σ^{(p)}|. Moreover, the precise expression of F given in the proof of Proposition 3.2 allows us to bound uniformly from above the supremum norms over all pieces of Y^p.
We conclude the section with the proof of Theorem B.
Proof of Theorem B. In Proposition 3.1 we have proved the result in the case µ_t = ν_t where ν_t is the uniform measure on [−g(t), g(t)] with g : t ∈ [0, 1] → (1/2) exp(2t). Of course, the result also holds if we extend (or restrict) the same peacock to [0, T] for some T > 0. We explain now how Subsection 2.3 can be used to deduce the general case. Let f be some positive increasing and continuous function and for every t ∈ [0, 1] let µ_t be the uniform measure on [−f(t), f(t)]. If f is constant the result is trivial; therefore we assume f(1) > f(0). Let φ : x → ax be the linear function with ag(0) = f(0) and T > 0 such that ag(T) = f(1). Therefore φ_# ν_{tT} = µ_t for t ∈ {0, 1}. Let τ : [0, 1] → [0, T] be the function such that φ_# ν_{τ(t)} = µ_t. In fact τ(t) = g^{−1}(f(t)/a) = (1/2) log(2a^{−1}f(t)). The function τ is increasing and continuous. Hence, according to Subsection 2.3, the process (X_t)_{t∈[0,1]} is a limit curtain process whose law is in LimCurt([µ_t]) if and only if (aX_{τ(t)})_{t∈[0,1]} has law P for some P ∈ LimCurt([ν_t]). Hence the law of the limit curtain process is uniquely determined. The trajectories of the process are piecewise deterministic and obtained by transforming the trajectories in Proposition 3.1. The expectation of the number of jumps is finite, as it is obtained from the corresponding quantity in Proposition 3.1 through the time change τ.
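The time change can be checked numerically. In the sketch below, f is a sample profile of our own choosing (any positive increasing continuous function would do); the identity a·g(τ(t)) = f(t) and the monotonicity of τ are the points used in the proof.

```python
import math

def g(t):
    return math.exp(2 * t) / 2

def f(t):
    # Illustrative profile (ours): positive, increasing, continuous.
    return 1.0 + t ** 2

a = 2 * f(0)                      # so that a * g(0) == f(0)

def tau(t):
    # Time change tau(t) = g^{-1}(f(t)/a) = (1/2) log(2 f(t) / a).
    return 0.5 * math.log(2 * f(t) / a)

# phi_# nu_{tau(t)} = mu_t: phi(x) = a*x maps [-g(tau(t)), g(tau(t))]
# onto [-f(t), f(t)], i.e. a * g(tau(t)) == f(t).
for t in (0.0, 0.25, 0.5, 1.0):
    assert abs(a * g(tau(t)) - f(t)) < 1e-12
# tau is increasing, as claimed.
ts = [k / 10 for k in range(11)]
assert all(tau(u) < tau(v) for u, v in zip(ts, ts[1:]))
```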

Finitely supported measures
Let V be the set of vectors (X; A) = (x_1, . . . , x_n, a_1, . . . , a_n) ∈ R^{2n} encoding the finitely supported measures Σ_i a_i δ_{x_i} of total mass 1 and with prescribed barycentre. For ((X; A), (Y; B)) ∈ V², let Γ = Γ((X; A), (Y; B)) ⊆ M_{n×n}(R) be the affine space of matrices M satisfying (4.1): M1 = A, M^T 1 = B and MY = Diag(A)X, where 1 = (1, . . . , 1)^T, Diag(a) is the diagonal matrix with entries a_1, . . . , a_n, and A, B, Y and X are columns. Lemma 4.1. With the notation above, assume that the entries of X are all different and that the same holds for Y. Then the affine space Γ ⊆ M_{n×n}(R) has dimension (n − 1)(n − 2), and, in a neighborhood of ((X; A), (Y; B)), the map ((X; A), (Y; B)) → Γ((X; A), (Y; B)) is analytic, as a map to the affine Grassmannian of affine spaces of dimension (n − 1)(n − 2) included in R^{n×n} ≡ M_{n×n}(R).
Proof. We prove that the application that maps M ∈ Γ to the submatrix consisting of the n − 1 upper rows and the n − 2 left-most columns is an affine bijection with M_{(n−1)×(n−2)}. Indeed, there always exists a way to complete such a submatrix to an element of Γ and this way is unique. We first consider the n − 1 upper rows together with the first and third constraints of (4.1). On each of these rows we obtain a 2 × 2 linear system in the unknowns m_{i,n−1} and m_{i,n}, and the solution is unique because its determinant y_n − y_{n−1} does not vanish. We complete the n-th row in the unique possible way according to the second constraint, and there remain two relations on the last row that need to be checked. These relations rely on the definition of V. First, we already have Σ_i m_{ij} = b_j for every j, Σ_j b_j = 1 = Σ_i a_i, and Σ_j m_{ij} = a_i for every i ≤ n − 1. It follows that Σ_j m_{nj} = a_n. Second, we have Σ_j m_{ij} y_j = a_i x_i for every i ≤ n − 1 and we want to prove it for i = n. This follows by subtracting these n − 1 relations from Σ_j b_j y_j = Σ_i a_i x_i.
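The affine bijection in the proof can be made concrete. The sketch below uses toy data of our own, and `complete_to_gamma` is a hypothetical helper, not from the paper: it completes an (n−1)×(n−2) upper-left block to the unique element of Γ, solving on each upper row the 2×2 system and forcing the last row by the column-sum constraint. Note that Γ does not require nonnegative entries.

```python
import numpy as np

def complete_to_gamma(sub, A, B, X, Y):
    # Complete an (n-1) x (n-2) upper-left block 'sub' to the unique
    # matrix M of the affine space Gamma: row sums A, column sums B,
    # and barycentre relations sum_j m_ij y_j = a_i x_i.
    n = len(A)
    M = np.zeros((n, n))
    M[: n - 1, : n - 2] = sub
    for i in range(n - 1):
        # 2x2 system for (m_{i,n-1}, m_{i,n}); its determinant is
        # y_n - y_{n-1} != 0 since the entries of Y are all different.
        r = A[i] - sub[i].sum()                 # remaining row mass
        b = A[i] * X[i] - sub[i] @ Y[: n - 2]   # remaining barycentre
        lhs = np.array([[1.0, 1.0], [Y[n - 2], Y[n - 1]]])
        M[i, n - 2:] = np.linalg.solve(lhs, np.array([r, b]))
    # The last row is forced by the column-sum constraint.
    M[n - 1] = B - M[: n - 1].sum(axis=0)
    return M

# Toy data (ours): same total mass and same mean on both sides.
A = np.array([0.3, 0.3, 0.4]); X = np.array([-1.0, 0.0, 1.0])
B = np.array([0.25, 0.45, 0.30]); Y = np.array([-2.0, 0.0, 2.0])
M = complete_to_gamma(np.array([[0.1], [0.05]]), A, B, X, Y)
# All constraints of (4.1) hold, including the two residual relations
# on the last row, exactly as in the proof.
assert np.allclose(M.sum(axis=1), A)
assert np.allclose(M.sum(axis=0), B)
assert np.allclose(M @ Y, A * X)
```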
In this section we are interested in defining a limit curtain coupling for peacocks where (x_1, . . . , x_n, a_1, . . . , a_n)(t) ∈ V are real analytic functions of time and furthermore satisfy a_1(t), . . . , a_n(t) > 0 and x_1(t) < · · · < x_n(t) for every t ∈ [0, 1]. We will denote (x_1, . . . , x_n, a_1, . . . , a_n)(t) by (X_t; A_t) and Γ(X_s, A_s, X_t, A_t) by Γ_{st}. The fact that the measures of [µ] are in the convex order implies that for s ≤ t the subspace Γ_{st} associated with (X_s; A_s) and (X_t; A_t) contains at least one matrix with nonnegative entries. The linear equalities defining Γ_{st} are indeed equivalent to those of Π_M(µ_s, µ_t). More precisely, the affine map M = (m_{ij})_{1≤i,j≤n} → M̄ = (m_{ij}/a_i(s))_{1≤i,j≤n} turns such a matrix into a stochastic matrix: the sum of the entries on the i-th row is no longer a_i(s) but 1. Moreover, A_t^T = A_s^T M̄(s, t). Each row index i = 1, . . . , n is a state of a Markov chain and at every time the vector A_t is a probability measure on these states. However, the family (M̄(s, t))_{s,t} is not compatible.
According to Proposition 1.10, left-curtain couplings are exactly left-monotone couplings, which means that M(s, t) ∈ Γ_{st} is the unique matrix in (R_+)^{n×n} that satisfies the left-monotonicity property. This is a closed condition, and the stochastic matrices form a bounded set. From this, we recover the main result of [14] in the specific case of finitely supported measures: the left-curtain coupling depends continuously on its marginals. According again to Proposition 1.10, left-curtain couplings are the unique optimal solutions of a linear minimisation problem. Therefore, M(s, t) is an extreme point of Π_M(µ_s, µ_t) ≡ Γ_{st} ∩ (R_+)^{n×n}. Hence M(s, t) satisfies at least (n − 1)(n − 2) relations of type m_{i,j} = 0 that are independent of the ones defining Γ_{st}.
Given an interval partition σ^{(p)} = {t_0, . . . , t_{Q_p}} of [0, 1], with 0 = t_0 < · · · < t_{Q_p} = 1, we introduce the coherent family (R̄^{(p)}(s, t))_{0≤s≤t≤1} in the following way. If s ∈ [t_i, t_{i+1}[ and t ∈ [t_j, t_{j+1}[, the transition matrix between those times is R̄^{(p)}(s, t) = M̄(t_i, t_{i+1}) · · · M̄(t_{j−1}, t_j). It sends the distribution of mass A_{t_i} to A_{t_j}. We will prove in Theorem 4.4 that R̄^{(p)}(s, t) converges to a certain R_{st} when |σ^{(p)}| goes to zero. This will in particular prove that R_{st} is a stochastic matrix that sends the distribution A_s to A_t.
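A minimal sketch of this construction, with matrices of our own choosing: renormalise left-curtain-type matrices into stochastic matrices M̄ and compose them along the partition.

```python
import numpy as np

def renormalise(M):
    # Mbar(s,t) = Diag(A_s)^{-1} M(s,t): divide each row by its sum,
    # which is the mass a_i(s) of the i-th atom.
    return M / M.sum(axis=1)[:, None]

def coherent_product(one_step):
    # Rbar^(p)(t_0, t_Q) = Mbar(t_0,t_1) ... Mbar(t_{Q-1},t_Q).
    R = np.eye(one_step[0].shape[0])
    for Mbar in one_step:
        R = R @ Mbar
    return R

# Toy one-step matrices (ours): M1 couples A=(0.3,0.7) with B=(0.4,0.6),
# M2 couples B=(0.4,0.6) with C=(0.4,0.6).
M1 = np.array([[0.3, 0.0], [0.1, 0.6]])
M2 = np.array([[0.35, 0.05], [0.05, 0.55]])
R = coherent_product([renormalise(M1), renormalise(M2)])
A = np.array([0.3, 0.7])
assert np.allclose(R.sum(axis=1), 1.0)          # R is stochastic
assert np.allclose(A @ R, np.array([0.4, 0.6])) # it sends A to C
```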
Proof. Let us first prove the lemma in the case where the finite sequence (θ_k) is just θ_0 = ξ− and θ_1 = ξ+. The product of transition matrices is then M̄(ξ−, ξ+), which we simply denote by M̄. Due to the shape of the left-curtain couplings we can claim that M̄_{ij} = 0 for j > i + 1 if h := ξ+ − ξ− is sufficiently small. Indeed, the shadow of Σ_{l=1}^{i} a_l(ξ−)δ_{x_l(ξ−)} must be close to Σ_{l=1}^{i} a_l(ξ+)δ_{x_l(ξ+)} when h is small. More precisely, considering the centres of mass of these measures, only a mass of O(h) is sent to the atoms a_{i+1}(ξ+)δ_{x_{i+1}(ξ+)}, . . . , a_n(ξ+)δ_{x_n(ξ+)}, and this bound O(h) can be chosen uniformly in ξ−. Because of Lemma 1.5, if h is sufficiently small this part of the shadow can only be in x_{i+1}. Hence the claim on M̄_{ij} holds. With similar arguments, and using what has already been proved, it follows that the entry M̄_{ij} is also O(h), uniformly in ξ−, for every j < i. Indeed, the measure a_i δ_{x_i} is transported to a measure of barycentre x_i(ξ−), only O(h) is transported to x_{i+1}, and no mass goes to upper atoms. Therefore we have proved that for any given peacock, there exists some constant c > 0 such that ‖M̄(ξ−, ξ+) − Id_n‖ ≤ c(ξ+ − ξ−). In the general case where (θ_k)_k is not reduced to two times, using the submultiplicativity of the operator norm, the estimate 1 + x ≤ exp(x), and a telescopic sum, we obtain ‖M̄(θ_0, θ_1) · · · M̄(θ_{K−1}, θ_K) − Id_n‖ ≤ Σ_{k=0}^{K−1} c(θ_{k+1} − θ_k) exp(c(θ_K − θ_0)) ≤ c e^c (ξ+ − ξ−). In Theorem 4.4 the limit curtain process is described by a family of transition matrices (R_{st})_{s≤t} defined by ordinary differential equations in the space of stochastic matrices. However, in Lemma 4.3 we first study the left-curtain matrices M(s, t) in place of the transition matrices M̄(s, t) directly.
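The telescoping estimate at the end of the proof is generic: for any matrices Id + E_k one has ‖∏(Id + E_k) − Id‖ ≤ Σ‖E_k‖ · exp(Σ‖E_k‖). Below is a numerical sanity check of this inequality (ours), with random perturbations playing the role of the one-step matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def op_norm(M):
    # Operator (spectral) norm.
    return np.linalg.norm(M, 2)

# One-step matrices Id + E_k with ||E_k|| <= c * (theta_{k+1} - theta_k);
# the telescoping sum P_K - Id = sum_k P_{k-1} E_k (prod of later factors)
# gives ||prod(Id + E_k) - Id|| <= sum ||E_k|| * exp(sum ||E_k||).
n, K, c, h = 3, 50, 2.0, 1.0 / 50
Es = [c * h * rng.uniform(-1, 1, (n, n)) / n for _ in range(K)]
P = np.eye(n)
for E in Es:
    P = P @ (np.eye(n) + E)
total = sum(op_norm(E) for E in Es)
assert op_norm(P - np.eye(n)) <= total * np.exp(total)
```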

Lemma 4.3.
There exists a countable and closed set E ⊆ [0, 1] with finitely many accumulation points, and a map N : t ∈ [0, 1] \ E → N(t) = (n_{ij}(t))_{ij} to the square matrices of order n, such that the following statements are satisfied:
• for every adjacent isolated points θ, θ' ∈ E and every segment S ⊆ ]θ, θ'[ there exists C > 0 with the uniform estimate (4.2): ‖M(t, t + h) − M(t, t) − hN(t)‖ ≤ Ch², for t ∈ S and h > 0 small enough,
• the map t → N(t) is bounded,
• on [0, 1] \ E, the map N is analytic, that is, it is analytic on each connected component,
• the sum of the entries of each row of N is identically zero,
• for each j ≤ n, the sum of the entries of the j-th column is da_j/dt,
• at every time t ∉ E, at least (n − 1)(n − 2) entries of N(t) are zero,
• the entries on the diagonal are nonpositive and the other ones are nonnegative.
Proof. We introduce an index k ≥ 1 such that every k ≤ (n² choose (n − 1)(n − 2)) is associated with a subset I_k of (n − 1)(n − 2) entries of the matrices of M_{n×n}(R). Moreover, for all (s, t) ∈ [0, 1]² we only consider the subsets I_k such that the vector space ∆_k of matrices with zero entries on I_k is in direct sum with the vectorial part of Γ_{st}. The spaces Γ_{st} are parallel for different values of s, so that we can denote the set of these indices by I(t). Whether Γ_{st} is in direct sum with ∆_k at time t depends only on the vector X_t, as can be seen in (4.1). The index k is an element of I(t) if and only if a certain determinant does not vanish at time t. But this determinant is an analytic function of t. Hence either k is not an element of I(t) for any t, or it is for all t except finitely many times in [0, 1]. For k ∈ I(t) we can now introduce the analytic map (s, t) → M_k(s, t), where {M_k(s, t)} = Γ_{st} ∩ ∆_k is the single point at the intersection.
Let ]ξ−, ξ+[ be an interval on which I(t) is the same for every t. The set E of the statement is composed of the endpoints of these intervals, generically called ξ henceforth, and of points θ defined below, of which there are finitely many in each compact subinterval S ⊆ ]ξ−, ξ+[. Due to analyticity there are finitely many points ξ in [0, 1]. On ]ξ−, ξ+[ we simply denote I(t) by I. Take t_0 ∈ ]ξ−, ξ+[. For every (s, t) in a neighbourhood of (t_0, t_0), the matrix M(s, t) is an extreme point of Γ_{st} ∩ (R_+)^{n×n}. It equals at least one M_k(s, t) for k ∈ I. Recall that M is defined for s ≤ t whereas the matrices M_k are defined on ]ξ−, ξ+[². Observe also that M(t, t) = Diag(a)(t). The maps (s, t) → M_k(s, t) − M_l(s, t) are analytic and the structure of their zero locus in R² is well understood (see for instance [18, Chapter 6]): in the neighbourhood of a zero there are finitely many analytic curves going out, and each of them either is a half-line or has finitely many intersections with half-lines.
Hence we deduce that there exists a neighbourhood of t_0 such that for every s in this neighbourhood, there exist k(s) and ε(s) > 0 with M(s, s + h) = M_{k(s)}(s, s + h) for every h ∈ [0, ε(s)]. Moreover, the neighbourhood can be restricted so that k is constant both for s < t_0 and for s > t_0. Finally, the function ε can be chosen to be continuous. Using the compactness of the segments S ⊆ ]ξ−, ξ+[ we see that there exist at most finitely many accident times θ on S. Together with the points ξ, these points θ are the remaining elements of E. Between two consecutive points θ_k, θ_{k+1} there exists k ∈ I with M(t, t + h) = M_k(t, t + h) if h is small enough, and ε can be chosen uniformly on every segment included in ]θ_k, θ_{k+1}[. Hence we obtain (4.2) for N(t) = dM_k(t, t + h)/dh|_{h=0+}.
The statements on N now follow from the system (4.1), equation (4.2), the definition of M_k, Lemma 4.2 and the structure of the zeros of M̄(t, t + h) for small h stated in the proof of this lemma.

Theorem 4.4. Set Ñ_t = Diag(a_1(t), . . . , a_n(t))^{−1} N_t. The family (R_{st})_{s≤t} associated with the differential equations d/dt R_{st} = R_{st} Ñ_t, R_{ss} = Id_n, defines a set of coherent transition matrices on a space of n states. Together with the initial measure (a_1, . . . , a_n)(0) on this space, it defines a Markov process with càdlàg trajectories. This process is the limit, in both the finite-dimensional and the Skorokhod topology, of any sequence (P^{µ,σ^{(p)}})_{p∈N} associated with a sequence (σ^{(p)})_p of partitions with mesh going to zero.
Proof. When written for the stochastic matrices, equation (4.2) becomes R_{t,t+h} = Id_n + hÑ_t + O(h²), with Ñ = (ñ_{i,j})_{i,j=1,...,n}. At time t, the rate for jumping from curve x_i to curve x_j is ñ_{i,j}(t).
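The forward equation d/dt R_{st} = R_{st} Ñ_t can be solved numerically. The sketch below uses an illustrative time-dependent generator of our own (rows summing to zero, nonnegative off-diagonal rates, as for Ñ) and checks that the resulting transition matrices are stochastic.

```python
import numpy as np

def Ntilde(t):
    # Illustrative generator (ours, not derived from a peacock):
    # nonnegative off-diagonal rates, rows summing to zero.
    r1, r2 = 1.0 + t, 0.5
    return np.array([[-r1, r1, 0.0],
                     [0.0, -r2, r2],
                     [0.2, 0.0, -0.2]])

def transition(s, t, steps=2000):
    # Euler scheme for dR/du = R * Ntilde(u) on [s, t], R(s) = Id.
    R = np.eye(3)
    h = (t - s) / steps
    for k in range(steps):
        R = R @ (np.eye(3) + h * Ntilde(s + k * h))
    return R

R = transition(0.0, 1.0)
assert np.allclose(R.sum(axis=1), 1.0)   # stochastic: rows sum to 1
assert (R >= -1e-9).all()                # entries are nonnegative
```

Each Euler factor Id + hÑ(u) is itself (sub)stochastic for small h, so the coherent products stay in the space of stochastic matrices, mirroring the role of the matrices R̄^{(p)} below.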
Recall that, given any interval partition σ^{(p)} = {t_0, . . . , t_{Q_p}} of [0, 1] with 0 = t_0 < · · · < t_{Q_p} = 1, we have introduced the coherent family (R̄^{(p)}(s, t))_{0≤s≤t≤1} before Lemma 4.2. Our first task is to prove that R̄^{(p)}(s, t) converges to R_{st} when |σ^{(p)}| tends to zero. This proves in particular that R_{st} is a stochastic matrix that sends the row mass A_s to A_t. Note that, due to Lemma 4.2, we also have ‖R̄^{(p)}(ξ, ξ + h) − Id‖ ≤ C(h + |σ^{(p)}|) for some constant C depending only on the peacock. In view of the notation introduced in Subsection 3.1, we fix s ∈ [0, 1] and denote R̄^{(p)}(s, u) by x̄(u) for a few paragraphs. We obtain T(u, u + h, x) = xM̄(u, u + h) and V(x, u) = xÑ(u). Proposition 3.2 requires that V is continuous in u and Lipschitz continuous in x. The second condition is satisfied but the first one may fail on some intervals [s, t]. Let E ⊆ [0, 1] be as in Lemma 4.3.
We first consider the case [s, t] ⊆ ]θ_k, θ_{k+1}[ with (θ_k)_k as in this lemma. Hence, up to a time rescaling, we can apply Proposition 3.2. Thus x̄(u) = R̄^{(p)}(s, u) converges uniformly to R_{su} for u ∈ [s, t] as p goes to infinity. One difficulty to overcome is that σ^{(p)} may avoid the starting and end times s, t. This problem is fixed by the above estimate on ‖R_{ξ,ξ+h} − Id‖ and by the estimate on ‖R̄^{(p)}(ξ, ξ + h) − Id‖ (see Lemma 4.2) when h is small. More precisely, if s' and t' are respectively the smallest and the greatest times in the partition σ^{(p)} that satisfy s ≤ s' and t' ≤ t, the matrices R_{s't'} and R̄^{(p)}(s', t') are respectively close to R_{st} and R̄^{(p)}(s, t) when |σ^{(p)}| is small.
If now E ∩ [s, t] is not empty, due to the structure of E it is possible to find finitely many intervals [s_k, t_k] that do not intersect E and such that the cumulated length Σ_k (s_{k+1} − t_k) of the complementary intervals is arbitrarily small. Writing now R_{st} = R_{s,s_1} R_{s_1 t_1} R_{t_1 s_2} · · · R_{s_K t_K} R_{t_K t}, and R̄^{(p)}(s, t) in a similar manner, we obtain an estimate whose first term tends to zero and whose second term can be chosen arbitrarily small. It follows that R̄^{(p)}(s, t) tends to R_{st} as p goes to infinity.
For proving the convergence of P^{µ,p} in the finite-dimensional topology it is enough to prove the convergence of the two-time marginals, which is what we have already done: the general case is a simple consequence of the continuity of the product of finitely many real numbers.
In the last part of this proof we explain how to prove the convergence in the Skorokhod topology. We prove below that for every ε > 0, if p is sufficiently large, there exists a measure Θ on D[0, 1] × D[0, 1] with marginals P^{µ,p} and P (the Poisson-like process generated by Ñ) such that, with probability greater than 1 − ε under Θ, the Skorokhod distance between the first and the second coordinate of (D[0, 1])² is smaller than ε. In other words, we prove that the Prokhorov distance between P^{µ,p} and P tends to zero. We will abusively call constant on [s, t] any P^{µ,p}-random trajectory x that starts close to x_k at time s and whose transitions are all done from state k to itself. In the case of a P-random trajectory, x is constant on [s, t] if it is continuous; in this case x = x_k for some k. A more concrete way to justify this abuse is to introduce α > 0 and consider only partitions with a sufficiently small mesh so that |x_k(t) − x(t)| < ε holds at any time for some unique k. In fact, due to the uniform continuity of the trajectories (x_k)_{k=1,...,n}, the real ε may be chosen as small as we want. Finally, it is equivalent to prove that the Prokhorov distance tends to zero with R or with {1, . . . , n} as state space. Consider a finite set S ⊆ [0, 1] and, using the convergence in the finite-dimensional topology, a sufficiently large p and a coupling Θ such that, with probability greater than 1 − ε/10, we have |X_t − X^{µ,p}_t| ≤ ε for every time t ∈ S. Here X and X^{µ,p} are the first and second coordinates of Θ and ε is sufficiently small to characterise the state in {1, . . . , n}. Let us call jumps the discontinuities of X and the discontinuities in state of X^{µ,p}. Lemma 4.2 and the fact that Ñ_t is bounded allow us to claim that, if the mesh of S ∪ {0, 1} is sufficiently small, the probability that X or X^{µ,p} has two or more jumps on some interval of the partition is smaller than ε/10.
We can also assume that this mesh is smaller than ε, which is important for the horizontal distortion in the definition of the Skorokhod distance. Under these conditions, we can easily prove that Θ is a convenient coupling, showing that the Prokhorov distance associated with the Skorokhod distance is smaller than ε.

A discrete counterexample
We show that not every element of LimCurt([µ]) is Markovian and that this set may have cardinality at least 2. Here we take the setting of the last paragraph with n = 3 and x_3(t) = 10, so that x_1 < x_2 < x_3 on [1/2, 1[ and x_1 < x_3 < x_2 on ]1, 3/2]. We parametrise the peacock [µ] on [1/2, 3/2]. Note that x_2(1) = x_3(1) = 10. We will see that sequences of partitions not including time 1 all generate the same process, independently of the sequence, and that this is not a Markov process. On the contrary, if the sequence includes time 1 (at least asymptotically), there exists a unique limit curtain process, independent of the sequence, and this process is Markovian.
A computation allows us to state the transition matrices explicitly. Let us sum up. On [1/2, 1[ and ]1, 3/2] the law of the limit curtain process is exactly the same whether time 1 is included or not; namely, we observe the same behaviour as in Theorem 4.4. For describing the limit process on [1/2, 3/2] when 1 is included in the partitions, we need one additional Bernoulli trial: with probability 1/2 a locally continuous trajectory arriving at time 1 will follow x_2, and with probability 1/2 it will follow x_3. In the other case, where the partitions avoid time 1, a random trajectory has probability zero to coincide with one of the functions x_2 or x_3 in a neighbourhood of 1.

A continuous counterexample.
We complete the proof of Theorem D started in Section 5. Compared to Section 5, the novelty is that all measures µ t are absolutely continuous, which means that the non-Markovian behaviour of the limit process is not due to atoms.
Our example [µ] is illustrated in Figure 2. On the figure the grey area corresponds to the points (t, x) such that x is in the support of µ_t. Let us explain the process in simple words. Several trajectories are depicted, in particular x_1 and x_2. They start at different points and first have an increasing piece of trajectory similar to the ones in Section 3. After a jump downwards at time t_0 they start a common constant trajectory from the smallest value of the support of µ_{t_0}, at the point −e^{2t_0}/2. The next jump happens at the same time t_1 = t_0 + 1 and the two trajectories split again. This behaviour shows that the Markov property is not satisfied, because between the two jumps it is possible to distinguish x_1 from x_2 by having a look at the past.
Our construction is divided into two time intervals, the stocking period [0, 1] and the destocking period [1, 2], whose particularities, explained in the next two subsections, are related to the curtain coupling. On [0, 1] we wait for a jump downwards, which happens with positive probability and amounts to stocking the trajectories on [−e²/2, −1/2]. For the martingales of the approximating sequence, at each time t_0 of the partition a continuum of trajectories jumps downwards into a small interval and starts constant, distinct pieces of trajectories. For the limit process, all the jumps at time t_0 abut on the common value −e^{2t_0}/2, the minimum of the support of µ_{t_0}. On [1, 2] the mass stocked on [−e²/2, −1/2] is destocked. For the approximating martingales, the trajectories that have jumped at time t_0 jump again at time t_1 = t_0 + 1, with positive probability, to distinct positions that depend on their respective values at the beginning of the process; in fact, for the limit process, x(t_1) = x(0) − 11/2 in that case. All in all, in the limit we see distinct trajectories that gather at time t_0, follow a common path, and split again at time t_1. This phenomenon is responsible for the non-Markovian behaviour of the limit process.
Figure 2: A non-Markovian limit curtain process associated to a peacock with absolutely continuous 1-marginals.

See the right part of Figure 2 for an illustration of a locally destocking peacock. The support of µ³_t is the union of two intervals. The peacock is not globally destocking on the maximal interval [1, 2] because there is no possible value of b that satisfies b < c(t) for every t. Nevertheless, the behaviour is the same.
Lemma 6.2. Let [µ] be a destocking peacock. With the same notation as above for a, b, c and (µ^i_t)_{i∈{1,2,3}}, we assume moreover that
• the functions t → b(t) and t → c(t) are smooth,
• for every i = 1, 2, 3, t → µ^i_t(R) is smooth with nonzero derivative,
• t → µ²_t(R) decreases from 1 to 0.
Then there is a unique limit curtain process in LimCurt([µ]). It is a locally constant process with exactly one jump.
Proof. For proving the convergence in the Skorokhod topology, we apply Theorem 12.6 of [7]. First we notice that the processes are concentrated on a family of càdlàg paths x_{j,k,l} that are piecewise constant with at most one jump. On the subspace consisting of such càdlàg paths, pointwise convergence on a countable dense subset of [0, 1] implies convergence in the Skorokhod topology.
Therefore, according to [7, Theorem 12.6], it is enough to check that the sequence of processes has a limit for the finite-dimensional convergence. The description of the left-curtain coupling between µ_s and µ_t, together with the assumptions of the lemma, provides a candidate limit process that we describe now: start from a point k ∈ [a, b] according to µ²_0 = µ², and stay constant until time b^{−1}(k). At this time start a second constant trajectory, either at the point c ∘ b^{−1}(k) or at a point chosen according to dµ¹_t/dt|_{t=b^{−1}(k)}, with the proper probabilities making this transition a martingale kernel. Given finitely many times t_1, . . . , t_j and a partition σ, it is enough to consider the trajectories that jump outside the intervals containing the times t_i. These trajectories can easily be coupled with the trajectories of the candidate limit process. As in the proof of Theorem 4.4, this proves that the Prokhorov distance associated with the Skorokhod distance on D[0, 1] tends to zero when the mesh |σ| tends to zero.
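The jump in the candidate process is governed by a two-point martingale kernel whose weights are forced by the mean. A minimal sketch (the names u, d, p are ours, for the up point, down point and up probability):

```python
def martingale_weights(k, u, d):
    # Two-point martingale kernel from the current value k: jump up to u
    # with probability p and down to d with probability 1 - p, where p
    # is forced by the martingale property p*u + (1-p)*d = k.
    assert d < k < u
    p = (k - d) / (u - d)
    return p, 1.0 - p

k, u, d = 0.5, 2.0, -1.0
p, q = martingale_weights(k, u, d)
assert abs(p * u + q * d - k) < 1e-12   # mean preserved
assert 0.0 < p < 1.0 and abs(p + q - 1.0) < 1e-12
```

In the lemma, u plays the role of c ∘ b^{−1}(k) and the down mass is spread according to dµ¹_t/dt, but the barycentre constraint fixing the weights is the same.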

Putting the two steps together
We consider a peacock parametrised on [0, 2] that we illustrate in Figure 2. When restricted to [0, 1], it is simply the peacock of Example 6.1. On [1, 2] it is a (locally) destocking peacock. It consists of four terms, µ_t = µ¹_t + µ²_t + µ³_t + µ⁴_t. Let (σ^{(p)})_p be a sequence of partitions of [0, 1] with mesh |σ^{(p)}| going to 0. We associate with any σ^{(p)} the partition of [0, 2] consisting of the times t of σ^{(p)} together with the times t + 1 ∈ [1, 2]. We denote the latter by σ̄^{(p)}. Note that 1 is a time of this partition. We have seen in the two previous paragraphs that, when restricted to [0, 1] or [1, 2], the peacock [µ] has a unique limit curtain process, and one can check that it is Markovian in both cases. For the peacock on [0, 2] and the sequence (σ̄^{(p)})_p one also obtains a limit curtain process, but rather surprisingly it is not a Markovian process.
The proof is technical but essentially runs as the one of Proposition 3.1, hence our focus will be on the main ideas. If t_{k−1} and t_k are elements of σ^{(p)}, different trajectories of X^{µ,p} jump down at time t_k. They are mapped linearly from [e^{2t_{k−1}}/2 − e^{t_{k−1}}, e^{2t_{k−1}}/2] to the small interval [−e^{2t_k}/2, −e^{2t_{k−1}}/2]. Between t_k and t_{k−1} + 1 nothing can happen to these trajectories because the left-curtain coupling is the identity in their region. At time t_k + 1 all the mass contained in this small interval must jump again, either to the neighbourhood of 1 − e^{2t_k}/2 or somewhere down into the interval [−6, −5]. We have parametrised the masses µ¹_t and µ³_t in such a way that the jump down becomes linear when |t_k − t_{k−1}| ≤ |σ^{(p)}| is small. Therefore the mapping of the positions between time t_{k−1} and 1 + t_k is almost linear. More precisely, the left-curtain transitions reverse the orientation at the first jump and make it right again at the second. In the limit curtain process there are two types of trajectories. The first type consists of the continuous trajectories t → min(exp(2t)/2 − (1/2 − X_0) exp(t), exp(2)/2 − (1/2 − X_0) exp(1)).
We are interested in the second type of trajectories, which start in the same way but jump after a duration T < 1 (an exponential random time). After the jump the trajectory has value −exp(2T)/2 on [T, T + 1[. There is a second jump at time T + 1, either to −exp(2T)/2 + 1 or to a point of [−6, −5] that depends on the past in the simplest manner: this point is X_0 − (6 − 1/2) = X_0 − 11/2. Hence the limit curtain process is not Markovian.
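The non-Markovian behaviour can be made concrete in a few lines. The sketch below (ours) encodes a down-jumping trajectory of the limit process and checks that two paths agreeing on [T, T + 1[ may disagree after T + 1, so the state at a single time does not determine the future.

```python
import math

def path_down(x0, T, t):
    # Trajectory of the limit process starting at x0, jumping down at
    # time T and again (down case) at time T + 1.
    if t < T:
        # Continuous piece: the flow started at x0, capped at time 1.
        return min(math.exp(2 * t) / 2 - (0.5 - x0) * math.exp(t),
                   math.exp(2) / 2 - (0.5 - x0) * math.exp(1))
    if t < T + 1:
        return -math.exp(2 * T) / 2   # common stocked value
    return x0 - 11 / 2                # destocking jump remembers x0

T, t_mid, t_end = 0.4, 0.9, 1.5
a = [path_down(0.1, T, t) for t in (t_mid, t_end)]
b = [path_down(-0.3, T, t) for t in (t_mid, t_end)]
# Equal on [T, T+1[, different after T+1: not Markovian.
assert a[0] == b[0] and a[1] != b[1]
```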

An equivalent topology
We open the section with a remark on the topology used to define LimCurt and LimCurtFD, the sets of limit curtain couplings. As in [17, 5, 14], an alternative topology with first moment may appear more pertinent with respect to the peacock and martingale transport literature. We explain that our theorems are the same with this topology in place of T_cb.
Let (S, ρ) be a Polish metric space, e.g. (R^j, ‖·‖) or D[0, 1] with the Skorokhod distance. Recall that T_cb(S) is the topology on P(S) with µ_n → µ if and only if ∫ f dµ_n → ∫ f dµ for all continuous bounded functions f : S → R. Let T_1(S) be the topology on P_1(S) defined by the continuous functions growing at most linearly, meaning that f/(1 + ρ(x_0, ·)) is bounded for some x_0 ∈ S.
According to [27, Theorem 7.12 and Remark 7.13], the two topologies coincide when restricted to a bounded set or, more generally, to a set C ⊆ P_1(S) satisfying a tightness condition; this holds in particular for sets of uniformly integrable measures.
For instance, for S = R and µ_1 ∈ P_1(R), the set C = {ν ∈ P_1(R) : ν ≼_C µ_1} consists of uniformly integrable measures. The proof is similar to the proof of uniform integrability in the theory of L¹ martingales (see Exercise 1.1 of [11]).
Similarly, for S = R^j and µ_1 ∈ P_1(R), the set D = {π ∈ P(R^j) : ∀i ≤ j, (proj_i)_# π ≼_C µ_1} consists of uniformly integrable measures, as a consequence of the previous example.

Markov-Lipschitz property
To the best of our knowledge, the proofs of Kellerer's Theorem in the literature all rely on the so-called Lipschitz-(Markov) property, which already appeared in Kellerer's paper [16, Definitions 2 and 3]. We mean in particular the very elegant theory of Lowther [22, 21, 20] on almost continuous diffusions (abbreviated ACD) and the proof by Hirsch, Roynette and Yor presented in [13]. The latter is based on Pierre's uniqueness theorem for the Fokker-Planck equation [11, Chapter 6], a convolution technique and the important contributions by Lowther. As examples of the strength of Lowther's results, we mention the uniqueness of the ACD martingale Φ(µ) for every continuous peacock (µ_t)_t, the continuity of the map Φ, and the fact that any other continuous map Ψ, associating with continuous peacocks (µ_t)_t some, not a priori ACD, martingale Ψ(µ), satisfies Ψ = Φ [20, Theorems 1.3, 1.4 and 1.5].

Open problems
We propose the following open problems: • Can the set LimCurt([µ]) be empty for some right-continuous peacock? We conjecture that the answer is no.
• How many Markov processes may LimCurt([µ]) contain? We conjecture that there is exactly one.
One may ask the same questions for peacocks without continuity assumptions, or the second question for LimCurtFD([µ]). The same questions also make sense for couplings other than the left-curtain coupling, still using the Markov composition. In view of the Kamae-Krengel Theorem [15] and Pass' transport problem [24], the case of the quantile coupling seems to be of particular interest.