Fragmentation of ordered partitions and intervals

Fragmentation processes of exchangeable partitions have already been studied by several authors. This paper deals with fragmentations of exchangeable compositions, i.e. partitions of N in which the order of the blocks matters. We will prove that such a fragmentation is bijectively associated with an interval fragmentation. Using this correspondence, we then study two examples: Ruelle's interval fragmentation and the interval fragmentation derived from the standard additive coalescent.


Introduction
Random fragmentations describe objects that split as time goes on. Two types of fragmentation have received special attention: fragmentations of partitions of N and mass-fragmentations, i.e. fragmentations on the space $S = \{s = (s_1, s_2, \dots),\ s_1 \ge s_2 \ge \dots \ge 0,\ \sum_i s_i \le 1\}$. Berestycki [3] has proved that to each homogeneous fragmentation process of exchangeable partitions, we can canonically associate a mass fragmentation. More precisely, let $\pi = (\pi_1, \pi_2, \dots)$ be an exchangeable random partition of N (i.e. the distribution of π is invariant under finite permutations of N) whose blocks $(\pi_i)_{i\ge1}$ are listed in increasing order of their least elements. According to the work of Kingman and Pitman [15,16], the asymptotic frequency of the i-th block $\pi_i$, $f_i = \lim_{n\to\infty} \frac{\mathrm{Card}(\pi_i \cap \{1,\dots,n\})}{n}$, exists for every i a.s. We denote by $(|\pi_i|^\downarrow)_{i\in\mathbb{N}}$ the decreasing rearrangement of the sequence $(f_i)_{i\in\mathbb{N}}$. If $(\Pi(t), t \ge 0)$ is a fragmentation of exchangeable partitions, then $((|\Pi_i(t)|^\downarrow)_{i\in\mathbb{N}}, t \ge 0)$ is a mass fragmentation. Conversely, a fragmentation of exchangeable partitions can be built from a mass fragmentation via a "paintbox process".
One goal of this paper is to develop a similar theory for fragmentations of exchangeable compositions and interval fragmentations. Let us recall that a composition of a natural number n is an ordered collection of natural numbers $(n_1, \dots, n_k)$ with sum n. Here we will also use the definition of Gnedin [11]: a composition of the set {1, …, n} is an ordered collection of disjoint nonempty subsets $\gamma = (A_1, \dots, A_k)$ with $\cup A_i = \{1, \dots, n\}$. The vector of block sizes of γ, $(|A_1|, \dots, |A_k|)$, is a composition of n and is called the shape of γ. Hence, there is a one-to-one correspondence between measures on compositions of n and exchangeable measures on compositions of the set {1, …, n}. Gnedin proved a theorem analogous to Kingman's Theorem in the case of exchangeable compositions: for each probability measure P that describes the law of a random exchangeable composition, we can find a probability measure on the space of open subsets of ]0,1[ such that P can be recovered via a "paintbox process". This is why it seems very natural to look for a correspondence between fragmentations of compositions and interval fragmentations.
The first part of this paper develops the relation between probability laws of exchangeable compositions and laws of random open subsets, and its extension to infinite measures. We then prove that there is indeed a one-to-one correspondence between fragmentations of compositions and interval fragmentations. The next part gives some properties and characteristics of these processes and briefly presents how this theory can be extended to time-inhomogeneous fragmentations and self-similar fragmentations. Finally, as an application of this theory, the last section describes two well-known interval fragmentations: first, the interval fragmentation introduced by Ruelle [2,7,9,19] and second, the fragmentation derived from the standard additive coalescent [1,4].

Exchangeable compositions and open subsets of ]0, 1[

Probability measures
In this section, we define exchangeable compositions following Gnedin [11], and recall some useful properties. For n ∈ N, let [n] be the set of integers {1, …, n}. Let $k_n : C_n \to C_{n-1}$ be the restriction mapping from compositions of the set [n] to compositions of the set [n − 1] and let C be the projective limit of $(C_n, k_n)$. We endow C with the product topology; it is then a compact set. The composition of [n] (resp. N) with a single nonempty block will be denoted by $1_n$ (resp. $1_{\mathbb{N}}$) and we will write $C_n^*$ for $C_n \setminus \{1_n\}$. In the sequel, for n ∈ N ∪ {∞}, $\gamma \in C_n$ and A ⊂ [n], $\gamma_A$ will denote the restriction of γ to A. Hence, for m ≤ n, $\gamma_{[m]}$ will denote the restriction of γ to [m]. We say that a sequence $(P_n)_{n\in\mathbb{N}}$ of measures on $(C_n)_{n\in\mathbb{N}}$ is consistent if, for all n ≥ 2, $P_{n-1}$ is the image of $P_n$ by the projection $k_n$, i.e., for all $\gamma \in C_{n-1}$, we have
$$P_{n-1}(\gamma) = \sum_{\gamma' \in C_n,\ k_n(\gamma') = \gamma} P_n(\gamma').$$
By Kolmogorov's Theorem, such a consistent sequence $(P_n)_{n\in\mathbb{N}}$ determines the law of a random composition of N.
A random composition Γ of N is called exchangeable if for all n ∈ N, for every permutation σ of [n] and for all $\gamma \in C_n$, we have
$$P(\sigma(\Gamma_{[n]}) = \gamma) = P(\Gamma_{[n]} = \gamma),$$
where $\sigma(\Gamma_{[n]})$ is the image of the composition $\Gamma_{[n]}$ by σ. Hence, with an exchangeable random composition Γ, we can associate a function defined on finite sequences of integers by
$$\forall k \in \mathbb{N},\ \forall (n_1, \dots, n_k) \in \mathbb{N}^k,\quad p(n_1, \dots, n_k) = P(\Gamma_{[n]} = (B_1, \dots, B_k)),$$
where $(B_1, \dots, B_k)$ is any composition of the set [n] with shape $(n_1, \dots, n_k)$ and $n = n_1 + \dots + n_k$. This function determines the law of Γ and is called the exchangeable composition probability function (ECPF) of Γ.
Notation 2.2. Let γ be a composition of N. For (i, j) ∈ N 2 , we will use the following notation: • i ∼ j, if i and j are in the same block.
• i ≺ j, if the block containing i precedes the block containing j.
• i ≻ j, if the block containing i follows the block containing j.
Let U be the set of open subsets of ]0, 1[. For u ∈ U, let
$$\chi_u(x) = \min\{|x - y|,\ y \in u^c\},\qquad x \in [0,1],$$
where $u^c = [0, 1] \setminus u$. We also define a distance on U by
$$d(u, v) = \sup_{x \in [0,1]} |\chi_u(x) - \chi_v(x)|.$$
This makes U a compact metric space. Definition 2.3. Let u ∈ U. We construct a random composition of N in the following way: we draw $(X_i)_{i\in\mathbb{N}}$ iid random variables with uniform law on [0, 1] and we use the following rules: • i ∼ j, if i = j or if $X_i$ and $X_j$ belong to the same component interval of u.
• i ≺ j, if X i and X j do not belong to the same component interval of u and X i < X j .
• i ≻ j, if $X_i$ and $X_j$ do not belong to the same component interval of u and $X_i > X_j$.
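The paintbox rules above are straightforward to implement. The following sketch (the encoding of u as a sorted list of disjoint intervals and all function names are ours) builds the induced composition of {1, …, n} from given sample points; in the actual construction the points would be iid uniform on [0, 1].

```python
def find_interval(u, x):
    """Return the index of the interval component of u containing x, or None."""
    for k, (a, b) in enumerate(u):
        if a < x < b:
            return k
    return None

def paintbox_composition(u, xs):
    """Composition of {1,...,n} induced by the open set u and points xs,
    following the rules above: points in the same interval component share a
    block, blocks are ordered from left to right, and points falling in the
    complement of u form singleton blocks."""
    keyed = []
    for i, x in enumerate(xs, start=1):
        k = find_interval(u, x)
        # points of interval k are keyed by its left endpoint, so they group;
        # a point of the complement is keyed by its own position
        key = (u[k][0], k) if k is not None else (x, None)
        keyed.append((key, i))
    blocks = {}
    for key, i in keyed:
        blocks.setdefault(key, []).append(i)
    # order the blocks from left to right
    return [tuple(sorted(b)) for key, b in sorted(blocks.items())]
```

For instance, with u = ]0, 0.3[ ∪ ]0.6, 1[ and points 0.1, 0.7, 0.2, 0.45, 0.8 for the integers 1, …, 5, the integers 1 and 3 share the first interval, 2 and 5 the second, and 4 is a singleton in between.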
This defines an exchangeable probability measure on C that we shall denote $P^u$; the projection of $P^u$ on $C_n$ will be denoted by $P^u_n$. If ν is a probability measure on U, we denote by $P^\nu$ the law on C whose projections on $C_n$ are
$$P^\nu_n = \int_U P^u_n\, \nu(du).$$
Let us recall here a useful theorem from Gnedin [11]: let Γ be an exchangeable random composition of N and, for n ∈ N, let $U_n$ be the open set obtained by splitting ]0,1[ into consecutive intervals whose lengths are proportional to the block sizes of $\Gamma_{[n]}$, in the same order. Then $U_n$ converges almost surely to a random element U ∈ U. The conditional law of Γ given U is $P^U$. As a consequence, if P is an exchangeable probability measure on C, then there exists a unique probability measure ν on U such that $P = P^\nu$.
Hence, with each exchangeable composition Γ, we can associate a random open set that we will call asymptotic open set of Γ and denote U Γ . We shall also write |Γ| ↓ for the decreasing sequence of the lengths of the interval components of U Γ . More generally, for u ∈ U, u ↓ will be the decreasing sequence of the interval component lengths of u.
Let us notice that this theorem is the analogue of Kingman's Theorem for the representation of exchangeable partitions. Indeed, let $\Pi = (\Pi_1, \Pi_2, \dots)$ be an exchangeable random partition of N (the blocks of Π are listed in increasing order of their least elements). Pitman [16] has proved that each block of Π almost surely has an asymptotic frequency, i.e. the limit
$$f_i = \lim_{n\to\infty} \frac{\mathrm{Card}(\Pi_i \cap \{1,\dots,n\})}{n}$$
exists a.s. for every i.
One calls $f_i$ the frequency of the block $\Pi_i$. Therefore, with every exchangeable random partition, we can associate a probability measure on $S = \{s = (s_1, s_2, \dots),\ s_1 \ge s_2 \ge \dots \ge 0,\ \sum_i s_i \le 1\}$, namely the law of the decreasing rearrangement of the sequence of the partition frequencies.
Conversely, given a law ν̃ on S, we can construct an exchangeable random partition whose frequency sequence has law ν̃ (cf. [15]): we pick S ∈ S with law ν̃ and we draw a sequence of independent random variables $V_i$ with uniform law on [0, 1]. Conditionally on S, two integers i and j are in the same block of Π iff there exists an integer k such that $\sum_{l=1}^{k} S_l \le V_i < \sum_{l=1}^{k+1} S_l$ and $\sum_{l=1}^{k} S_l \le V_j < \sum_{l=1}^{k+1} S_l$. We denote by $\rho_{\tilde\nu}$ the law of this partition (and by a slight abuse of notation, $\rho_s$ denotes the law of the partition obtained with ν̃ = $\delta_s$). Kingman's representation Theorem states that any exchangeable random partition can be constructed in this way.
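The paintbox $\rho_s$ just described can be sketched in the same way; here s and the points $V_i$ are supplied explicitly (in the construction the $V_i$ are iid uniform), and the encoding and function names are ours.

```python
import itertools

def paintbox_partition(s, vs):
    """Partition of {1,...,n} from the paintbox rho_s: V_i and V_j fall in the
    same sub-interval [S_1+...+S_k, S_1+...+S_{k+1}) iff i ~ j; a V_i beyond
    the total mass sum(s) yields a singleton block ('dust')."""
    cum = [0.0] + list(itertools.accumulate(s))

    def box(v):
        for k in range(len(s)):
            if cum[k] <= v < cum[k + 1]:
                return k
        return None  # dust

    blocks = {}
    for i, v in enumerate(vs, start=1):
        k = box(v)
        # dust points get a unique key, hence a singleton block
        blocks.setdefault(k if k is not None else ('dust', i), []).append(i)
    # list the blocks by their least elements, the usual convention
    return sorted((sorted(b) for b in blocks.values()), key=lambda b: b[0])
```

With s = (0.5, 0.3) and points 0.1, 0.6, 0.4, 0.9, 0.55 for the integers 1, …, 5, the integers 1 and 3 fall in the first box, 2 and 5 in the second, and 4 falls in the dust.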
Let ℘ 1 be the canonical projection from the set of compositions C to the set of partitions P and ℘ 2 the canonical projection from the set U to the set S that associates to an open set u the decreasing sequence u ↓ of the lengths of its interval components. To sum up, we have the following commutative diagram between probability measures on P, C, S, U:

Representation of infinite measures on C
In this section, we show how Theorem 2.4 can be extended to a class of infinite measures on C.
Definition 2.5. Let µ be a measure on C. We call µ a fragmentation measure if the following conditions hold: • µ is exchangeable. • $\mu(\{1_{\mathbb{N}}\}) = 0$. • $\mu(\{\gamma \in C,\ \gamma_{[2]} \ne 1_2\}) < \infty$.
Notice that by exchangeability, the last condition implies that, for all n ≥ 2, we have $\mu(\{\gamma \in C,\ \gamma_{[n]} \ne 1_n\}) < \infty$. We will see in the sequel that such a measure can always be associated with a fragmentation process and conversely. Definition 2.6. A measure ν on U is called a dislocation measure if
$$\nu(\{]0,1[\}) = 0 \quad\text{and}\quad \int_U (1 - s_1)\, \nu(du) < \infty,$$
where $s_1$ is the length of the largest interval component of u.
In the sequel, for any measure ν on U, we define the measure $P^\nu$ on C by
$$P^\nu = \int_U P^u\, \nu(du).$$
Notice that if ν is a dislocation measure, then $P^\nu$ is a fragmentation measure. In fact, the measure $P^\nu$ is exchangeable since each $P^u$ is an exchangeable measure. For u ≠ ]0, 1[, we have $P^u(\{1_{\mathbb{N}}\}) = 0$, and as ν({]0, 1[}) = 0, we also have $P^\nu(\{1_{\mathbb{N}}\}) = 0$. We now have to check that $P^\nu(\{\gamma \in C,\ \gamma_{[n]} \ne 1_n\}) < \infty$ for all n ∈ N. Let us fix u ∈ U and set $u^\downarrow = s = (s_1, s_2, \dots)$.
For i ∈ N, let $\epsilon^l_i$ be the composition of N given by $(\{i\}, \mathbb{N} \setminus \{i\})$ and set $l = \sum_i \delta_{\epsilon^l_i}$. Similarly, let $\epsilon^r_i$ be the composition of N given by $(\mathbb{N} \setminus \{i\}, \{i\})$ and set $r = \sum_i \delta_{\epsilon^r_i}$. It is easy to check that l and r are two fragmentation measures.
Theorem 2.7. If µ is a fragmentation measure, there exist two unique nonnegative numbers $c_l$ and $c_r$, called coefficients of erosion, and a unique dislocation measure ν on U such that
$$\mu = P^\nu + c_l\, l + c_r\, r.$$
Besides, we have $1_{\{\gamma \in C,\ U_\gamma = ]0,1[\}}\, \mu = c_l\, l + c_r\, r$ and $1_{\{\gamma \in C,\ U_\gamma \ne ]0,1[\}}\, \mu = P^\nu$. Recall that in the case of fragmentation measures on partitions, Bertoin [6] proved the following result: let $\tilde\epsilon_i$ be the partition of N into {i} and N \ {i}, and define the measure $\tilde\epsilon = \sum_i \delta_{\tilde\epsilon_i}$. Let μ̃ be an exchangeable measure on P such that $\tilde\mu(\{1_{\mathbb{N}}\}) = 0$ and $\tilde\mu(\{\pi \in P,\ \pi_{[n]} \ne 1_n\})$ is finite for all n ∈ N. Then there exist a measure ν̃ on S such that $\tilde\nu((1, 0, 0, \dots)) = 0$ and $\int_S (1 - s_1)\, \tilde\nu(ds) < \infty$, and a nonnegative number c such that $\tilde\mu = \rho_{\tilde\nu} + c\, \tilde\epsilon$.
Fragmentation measures on partitions fit into a more general framework of exchangeable semifinite measures on partitions developed by Kerov (see [14], Chapter 1, Section 3).
Hence, Theorem 2.7 gives an analogous decomposition in the case of fragmentation measures on compositions, except that here there are two coefficients of erosion, one characterizing the left-side erosion and the other the right-side erosion.
Proof. We adapt a proof due to Bertoin [6] for exchangeable partitions to our case. Fix n ∈ N and set $\mu_n = 1_{\{\gamma_{[n]} \ne 1_n\}}\, \mu$; then $\mu_n$ is a finite measure. Let $\overrightarrow{\mu}_n$ be the image of $\mu_n$ by the n-shift, i.e.
Then $\overrightarrow{\mu}_n$ is exchangeable since µ is, and furthermore it is a finite measure. So we can apply Theorem 2.4: since $\overrightarrow{\mu}_n$ is an exchangeable finite measure, $\overrightarrow{\mu}_n$-almost every composition has an asymptotic open set, hence $\mu_n$-almost every composition has an asymptotic open set, and as $\mu_n \uparrow \mu$, µ-almost every composition has an asymptotic open set. Besides, since $\mu_n \le \mu_{n+1}$, we deduce that $\nu_n \le \nu_{n+1}$. Set $\nu = \lim_{n\to\infty} \uparrow \nu_n$. We now have to study µ on the event $\{\gamma \in C,\ U_\gamma = ]0,1[\}$. As in Section 2.1, we can now establish connections between fragmentation measures on C and P and dislocation measures on U and S. Let us recall that $\wp_1$ is the canonical projection from C to P and $\wp_2$ the canonical projection from U to S. Then we have the following commutative diagram: Proof. It remains to prove that $\tilde\mu = \rho_{\tilde\nu} + (c_l + c_r)\, \tilde\epsilon$. Write $\tilde\mu = \rho_{\tilde\nu'} + c\, \tilde\epsilon$ for the decomposition given by Bertoin's result recalled above. Since μ̃ is the image of µ by $\wp_1$, we have $\tilde\mu(\{\tilde\epsilon_1\}) = \mu(\{\epsilon^l_1\}) + \mu(\{\epsilon^r_1\})$ and then $c = c_l + c_r$.
Let us fix n ∈ N and π ∈ P n \{1 n }.
We then deduce that $\tilde\nu' = \tilde\nu$.

Fragmentation of compositions
We remark that the operator FRAG has some useful properties. First, if $1^{(\cdot)}$ denotes the constant sequence equal to $1_n$, we have $\mathrm{FRAG}(\gamma, 1^{(\cdot)}) = \gamma$. Furthermore, the fragmentation operator is compatible with the restriction, i.e., for every m ≤ n, $\mathrm{FRAG}(\gamma, \gamma^{(\cdot)})_{[m]} = \mathrm{FRAG}(\gamma_{[m]}, \gamma^{(\cdot)})$. This implies that FRAG is a consistent operator and we can extend this definition to the compositions of N. Notice that we have this equality because we take care to fragment the block $\gamma_i$ by $\gamma^{(m_i)}$, where $m_i$ is the least element of $\gamma_i$. Indeed, if we had fragmented the block $\gamma_i$ by $\gamma^{(i)}$, the operator FRAG would no longer be compatible with the restriction (a counterexample can already be found for n = 3). Besides, the operator FRAG preserves exchangeability. More precisely, let $(\Gamma^{(i)}, i \in \{1, \dots, n\})$ be a sequence of random compositions which is doubly exchangeable, i.e. for each i, $\Gamma^{(i)}$ is an exchangeable composition, and moreover, the sequence $(\Gamma^{(i)}, i \in \{1, \dots, n\})$ is also exchangeable. Let Γ be an exchangeable composition of $C_n$ independent of $\Gamma^{(\cdot)}$. Then $\mathrm{FRAG}(\Gamma, \Gamma^{(\cdot)})$ is an exchangeable composition. Let us prove this property. We fix a permutation σ of [n] and we shall prove that $\mathrm{FRAG}(\sigma(\Gamma), \Gamma^{(\cdot)})$ has the same law as $\mathrm{FRAG}(\Gamma, \Gamma^{(\cdot)})$. Let k be the number of blocks of Γ, denote by $m_1, \dots, m_k$ the minima of $\Gamma_1, \dots, \Gamma_k$, and by $m'_1, \dots, m'_k$ the minima of $\sigma(\Gamma_1), \dots, \sigma(\Gamma_k)$. We define a new sequence $\Gamma'^{(\cdot)} = (\Gamma'^{(i)}, i \in \{1, \dots, n\})$ by permuting $\Gamma^{(\cdot)}$ accordingly. Since $\sigma(\Gamma) \stackrel{law}{=} \Gamma$, $\Gamma'^{(\cdot)} \stackrel{law}{=} \Gamma^{(\cdot)}$ and $\Gamma'^{(\cdot)}$ remains independent of Γ, we get the desired identity in law. We can now define the notion of exchangeable fragmentation process of compositions.
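As a concrete illustration of the convention just discussed (fragmenting the block $\gamma_i$ by $\gamma^{(m_i)}$, with $m_i$ its least element), here is a small sketch; the encoding of compositions as lists of tuples and the function name are ours.

```python
def frag(gamma, frags):
    """FRAG operator: gamma is a composition of {1,...,n}, given as a list of
    disjoint blocks (sorted tuples); frags maps each j in {1,...,n} to a
    composition of {1,2,...}.  A block B of gamma, with least element m, is
    replaced by the composition obtained by applying frags[m] to the elements
    of B ranked in increasing order."""
    result = []
    for block in gamma:
        m = min(block)
        ranked = sorted(block)           # ranked[r-1] is the element of rank r
        for sub in frags[m]:
            # restriction of the fragmenting composition to {1,...,|B|}:
            # keep only the ranks that exist in this block
            piece = tuple(ranked[r - 1] for r in sorted(sub) if r <= len(block))
            if piece:
                result.append(piece)
    return result
```

One can check on small examples that fragmenting by the trivial compositions gives back γ, in agreement with $\mathrm{FRAG}(\gamma, 1^{(\cdot)}) = \gamma$.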
Definition 3.2. Let us fix n ∈ N and let (Γ n (t), t ≥ 0) be a (possibly time-inhomogeneous) Markov process on C n which is continuous in probability. We call Γ n an exchangeable fragmentation process of compositions if: • Γ n (0) = 1 n a.s.
• Its semi-group is described in the following way: there exists a family $(P_{t,s}, 0 \le t < s)$ of probability measures on exchangeable compositions such that the conditional law of $\Gamma_n(s)$ given $\Gamma_n(t) = \gamma$ is the law of $\mathrm{FRAG}(\gamma, \Gamma^{(\cdot)})$, where $\Gamma^{(\cdot)}$ is a sequence of iid compositions with law $P_{t,s}$. The fragmentation is homogeneous in time if $P_{t,s}$ depends only on s − t. A Markov process $(\Gamma(t), t \ge 0)$ on C is called an exchangeable fragmentation process of compositions if, for all n ∈ N, the process $(\Gamma_{[n]}(t), t \ge 0)$ is an exchangeable fragmentation process of compositions on $C_n$.
Hence, in our definition we impose that the blocks split independently according to the same rule (the "branching property"). This hypothesis is crucial for most of the following results (see however Section 4.5, where more general processes are considered).
In the sequel, a c-fragmentation will denote an exchangeable fragmentation process on compositions.
Proposition 3.3. The transition semi-group of a time-homogeneous c-fragmentation has the Feller property.

Interval fragmentation
In this section we recall the definition of a homogeneous interval fragmentation [5]. We consider a family of probability measures $(q_{t,s}, t \ge 0, s > t)$ on U. For every interval I = ]a, b[ ⊂ ]0, 1[, we define the affine transformation $g_I : ]0,1[ \to I$ given by $g_I(x) = a + x(b - a)$. We still denote by $g_I$ the induced map on U; so, for V ∈ U, $g_I(V)$ is an open subset of I. We then define $q^I_{t,s}$ as the image of $q_{t,s}$ by $g_I$. Hence $q^I_{t,s}$ is a probability measure on the space of open subsets of I. Finally, for W ∈ U with interval decomposition $(I_i, i \in \mathbb{N})$, $q^W_{t,s}$ is the distribution of $\cup X_i$, where the $X_i$ are independent random variables with respective laws $q^{I_i}_{t,s}$.
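The construction of $q^W_{t,s}$ from $q_{t,s}$ via the maps $g_I$ can be sketched as follows (open sets are encoded as lists of disjoint intervals; the names are ours, and choose_V stands in for drawing an independent sample from $q_{t,s}$ for each component):

```python
def g_I(interval, V):
    """Affine map g_I(x) = a + x(b - a) applied to an open set V (a list of
    disjoint subintervals of ]0,1[), producing an open subset of I = ]a,b[."""
    a, b = interval
    return [(a + x * (b - a), a + y * (b - a)) for (x, y) in V]

def refine(W, choose_V):
    """One fragmentation step: each interval component I of W is replaced by
    g_I(V), where V is the open set drawn for that component."""
    out = []
    for I in W:
        out.extend(g_I(I, choose_V(I)))
    return out
```

For instance, refining both halves of ]0, 1[ by the open set ]0, 1/2[ ∪ ]1/2, 1[ produces the four quarters, each rescaled copy sitting inside its parent interval.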
An interval fragmentation (U(t), t ≥ 0) is a Markov process on U that fulfills the following properties: • U is continuous in probability and U(0) = ]0, 1[ a.s.
• U is nested i.e. for all s > t we have U (s) ⊂ U (t).
• There exists a family $(q_{t,s}, t \ge 0, s > t)$ of probability measures on U such that the conditional law of U(s) given U(t) = W is $q^W_{t,s}$. In the sequel, we abbreviate an interval fragmentation process as an i-fragmentation.
We remark that if we take the decreasing sequence of the sizes of the interval components of an i-fragmentation, we obtain a mass-fragmentation, denoted here an m-fragmentation (see [6] for a definition of m-fragmentations).

Link between i-fragmentation and c-fragmentation
From this point of the paper and until Section 4.4, the fragmentation processes we consider will always be homogeneous in time, i.e. q t,s depends only on s − t, hence we will just write q s−t to denote q t,s .
Let (U(t), t ≥ 0) be a process on U. Let $(V_i)_{i\ge1}$ be a sequence of independent random variables uniformly distributed on ]0,1[. Using the same procedure as in Definition 2.3 with U(t) and $(V_i)_{i\ge1}$, we define a process $(\Gamma^U(t), t \ge 0)$ on C.
Theorem 3.5. There is a one-to-one correspondence between laws of i-fragmentations and laws of c-fragmentations. More precisely: • If a process (U(t), t ≥ 0) is an i-fragmentation, then $(\Gamma^U(t), t \ge 0)$ defined as above is a c-fragmentation and we have $U_{\Gamma^U(t)} = U(t)$ a.s. for each t ≥ 0.
Proof. We start by proving the first point. For the sake of clarity, we will write in the sequel Γ(t) instead of $\Gamma^U(t)$. By Theorem 2.4, we have $U_{\Gamma(t)} = U(t)$ a.s. for each t ≥ 0. Let us fix n ∈ N and t ≥ 0. We are going to prove that, for s > t, the conditional law of $\Gamma_{[n]}(s)$ given $\Gamma_{[n]}(t)$ is the one prescribed in Definition 3.2. Notice that $a_i$ and $b_i$ do not depend on the choice of $l \in \Gamma_i$. Furthermore, we have $a_i < b_i$ if $\Gamma_i$ is not a singleton. We also define, for i ∈ J and $j \in \Gamma_i$, variables $Y^i_j$. Conditionally on $\Gamma_{[n]}(t)$, the random variables $(Y^i_j)_{j\in\Gamma_i, i\in J}$ are independent and uniformly distributed. Since (U(t), t ≥ 0) is a fragmentation process, the corresponding rescaled processes are independent, and are also independent of the singletons of Γ(t). For i ∈ J, let $\Gamma^{(i)}(s)$ be the composition of $\Gamma_i$ obtained from $U^i(s)$ and $(Y^i_j)_{j\in\Gamma_i}$ using Definition 2.3; for i ∉ J, we set $\Gamma^{(i)} = 1_{\Gamma_i}$. Hence, $\Gamma^{(i)}(s)$ has the law of $\Gamma^{\Gamma_i}(s - t)$ and the processes $(\Gamma^{(i)}(s), s \ge t)_{1\le i\le k}$ are independent. Furthermore, by construction we have $\Gamma_{[n]}(t + s) = \mathrm{FRAG}(\Gamma_{[n]}(t), \Gamma^{(\cdot)}(s))$. Hence, $(\Gamma_{[n]}(t), t \ge 0)$ has the expected transition probabilities.
Let us now prove the second point. In the sequel, we will write $U_t$ to denote $U_{\Gamma(t)}$. First, we prove that for all s > t, $U_s \subset U_t$. Fix $x \notin U_t$; we shall prove that $x \notin U_s$. We have $\chi_{U_t}(x) = \min\{|x - y|,\ y \in U_t^c\} = 0$. Let $U^n_t$ be the open subset of ]0, 1[ corresponding to $\Gamma_{[n]}(t)$ as in Theorem 2.4, so that $\lim_{n\to\infty} d(U^n_t, U_t) = 0$. Fix ε > 0. There exists N ∈ N such that, for all n ≥ N, $\chi_{U^n_t}(x) \le \varepsilon$. This implies that for all n ≥ N there exists $y_n \notin U^n_t$ such that $|y_n - x| \le \varepsilon$. Besides, as (Γ(t), t ≥ 0) is a fragmentation, we have for all n ∈ N, $U^n_s \subset U^n_t$. Hence, we also have $y_n \notin U^n_s$ for all n ≥ N, and so $\chi_{U^n_s}(x) \le \varepsilon$ for all n ≥ N. We deduce that $\chi_{U_s}(x) = 0$, i.e. $x \notin U_s$.
We now have to prove the branching property. Fix t > 0. We consider the decomposition of $U_t$ into disjoint intervals: $U_t = \bigcup_{k\in\mathbb{N}} I_k(t)$. Set $F_k(s) = U_{t+s} \cap I_k(t)$. We want to prove that, given $U_t$: • ∀l ∈ N, $F_1, \dots, F_l$ are independent processes.
• $F_k$ has the law of an i-fragmentation rescaled to $I_k(t)$. For all k ∈ N, there exists $i_k \in \mathbb{N}$ such that, if $J^n_{i_k}(t)$ denotes the interval component of $U^n_t$ associated with the integer $i_k$, then $J^n_{i_k}(t) \to I_k(t)$ as n → ∞. Let $B_k$ be the block of Γ(t) containing $i_k$. As $B_k$ has a positive asymptotic frequency, it is infinite, hence isomorphic to N. Let f be the increasing bijection from the set of elements of $B_k$ to N and let us re-label the elements of $B_k$ by their images under f. The process $(U_{\Gamma^{B_k}(t+s)}, s \ge 0)$ then has the same law as $(U_s, s \ge 0)$ and is independent of the rest of the fragmentation. Besides, given $I_k(t) = ]a, b[$, we have $F_k(s) = a + (b - a)\, U_{\Gamma^{B_k}(t+s)}$, so the two points above are proved.
Hence, this result complements an analogous result due to Berestycki [3] in the case of m-fragmentations and p-fragmentations (i.e. fragmentations of exchangeable partitions). We can again draw a commutative diagram to represent the link between the four kinds of fragmentation:

Some general properties of fragmentations
In this section, we gather general properties of i- and c-fragmentations. Since the proofs of these results are simple variations of those in the case of m- and p-fragmentations [6], we will be a bit sketchy.

Rate of a fragmentation process
Let (Γ(t), t ≥ 0) be a c-fragmentation. As in the case of p-fragmentations [6], for n ∈ N and $\gamma \in C^*_n$, we define a jump rate from $1_n$ to γ:
$$q_\gamma = \lim_{t\to 0^+} \frac{1}{t}\, P(\Gamma_{[n]}(t) = \gamma).$$
With the same arguments as in the case of p-fragmentations, we can also prove that the family $(q_\gamma, \gamma \in C^*_n, n \in \mathbb{N})$ characterizes the law of the fragmentation (one just has to use that distinct blocks evolve independently and with the same law). Furthermore, observing that for all n < m and $\gamma \in C^*_n$ we have $q_\gamma = \sum_{\gamma' \in C_m,\ \gamma'_{[n]} = \gamma} q_{\gamma'}$, and that for all n ∈ N, every permutation σ of [n] and all $\gamma \in C^*_n$ we have $q_\gamma = q_{\sigma(\gamma)}$, we deduce that there exists a unique exchangeable measure µ on C such that $\mu(\{1_{\mathbb{N}}\}) = 0$ and $\mu(Q_{\infty,\gamma}) = q_\gamma$ for all $\gamma \in C^*_n$ and n ∈ N, where $Q_{\infty,\gamma} = \{\gamma' \in C,\ \gamma'_{[n]} = \gamma\}$. Furthermore, the measure µ characterizes the law of the fragmentation. We call µ the rate of the fragmentation.
We remark also that if a measure µ is the rate of a fragmentation process, then for all n ≥ 2, $\mu(\{\gamma \in C,\ \gamma_{[n]} \ne 1_n\}) < \infty$. So we can apply Theorem 2.7 to µ and we deduce the following result: if µ is the rate of a c-fragmentation, then there exist a dislocation measure ν and two nonnegative numbers $c_l$ and $c_r$ such that
$$\mu = P^\nu + c_l\, l + c_r\, r.$$
With a slight abuse of notation, we will sometimes write in the sequel that µ = (ν, $c_l$, $c_r$) when $\mu = P^\nu + c_l\, l + c_r\, r$.

The Poissonian construction
We notice that if µ is the rate of a c-fragmentation, then µ is a fragmentation measure in the sense of Definition 2.5. Conversely, we now prove that, if we consider a fragmentation measure µ, we can construct a c-fragmentation with rate µ.
We consider a Poisson measure M on $\mathbb{R}_+ \times C \times \mathbb{N}$ with intensity $dt \otimes \mu \otimes \sharp$, where $\sharp$ is the counting measure on N. Let $M_n$ be the restriction of M to $\mathbb{R}_+ \times \{\gamma \in C,\ \gamma_{[n]} \ne 1_n\} \times \{1, \dots, n\}$. Its intensity measure is then finite on every time interval [0, t], so we can rank the atoms of $M_n$ according to their first coordinate. For n ∈ N and (γ, k) ∈ C × N, let $\Delta^{(\cdot)}_n(\gamma, k)$ be the composition sequence of $C_n$ whose k-th term is $\gamma_{[n]}$ and whose other terms are trivial. We then construct a process $(\Gamma_{[n]}(t), t \ge 0)$ on $C_n$ by applying, at each atom (t, γ, k) of $M_n$, the operator $\mathrm{FRAG}(\cdot, \Delta^{(\cdot)}_n(\gamma, k))$ to the current state. One can check that this construction is compatible with the restriction; hence, this defines a process (Γ(t), t ≥ 0) on C.
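A discrete sketch of this construction: given a finite list of atoms (t, δ, k) sorted by time, we fragment at each atom the k-th block of the current composition by δ. Here we index blocks by increasing least element, one plausible reading of the convention, and all names are ours.

```python
def apply_atom(gamma, delta, k):
    """One Poissonian jump: fragment the k-th block of the composition gamma
    (blocks indexed by increasing least element) by the composition delta of
    {1,2,...}; the other blocks are left unchanged."""
    if k > len(gamma):
        return gamma
    order = sorted(range(len(gamma)), key=lambda i: min(gamma[i]))
    target = order[k - 1]
    ranked = sorted(gamma[target])
    pieces = [tuple(ranked[r - 1] for r in sorted(sub) if r <= len(ranked))
              for sub in delta]
    pieces = [p for p in pieces if p]
    return gamma[:target] + pieces + gamma[target + 1:]

def poissonian_path(n, atoms):
    """Run the construction of Gamma_[n] from a finite list of atoms
    (t, delta, k), already sorted by time t, starting from 1_n."""
    gamma = [tuple(range(1, n + 1))]
    path = [(0.0, list(gamma))]
    for t, delta, k in atoms:
        new = apply_atom(gamma, delta, k)
        if new != gamma:                  # only genuine jumps are recorded
            gamma = new
            path.append((t, list(gamma)))
    return path
```

In the actual construction the atoms come from the Poisson measure $M_n$; here they are supplied explicitly to keep the sketch deterministic.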
Proposition 4.1. Let µ be a fragmentation measure. The construction above of a process on compositions from a Poisson point process on $\mathbb{R}_+ \times C \times \mathbb{N}$ with intensity $dt \otimes \mu \otimes \sharp$, where $\sharp$ is the counting measure on N, yields a c-fragmentation with rate µ.
Proof. The proof is an easy adaptation of the Poissonian construction of p-fragmentations (cf. [6]). As the sequence $\Delta^{(\cdot)}_n(\gamma, k)$ is doubly exchangeable, $\Gamma_{[n]}(t)$ is an exchangeable composition for each t ≥ 0. Looking at the jump rates of the process $\Gamma_{[n]}(t)$, it is then easy to check that the constructed process is a c-fragmentation with rate µ.
A Poissonian construction of an i-fragmentation with no erosion is also possible with a Poisson measure on $\mathbb{R}_+ \times U \times \mathbb{N}$ with intensity $dt \otimes \nu \otimes \sharp$. The proof of this result is not as simple as for compositions because it cannot be reduced to a discrete case as above. In fact, to prove this proposition, we must take the image of the Poisson measure M above by an appropriate map. For more details, we refer to Berestycki [3], who has already proved this result for m-fragmentations; the same approach works in our case.
To conclude this section, we turn our attention to how the two erosion coefficients affect the fragmentation. Let (U(t), t ≥ 0) be an i-fragmentation with parameters (0, $c_l$, $c_r$) and set $c = c_l + c_r$. Consider a c-fragmentation (Γ(t), t ≥ 0) such that $U_{\Gamma(t)} = U(t)$ a.s., and define $\mu_{c_l, c_r} = c_l\, l + c_r\, r$; then (Γ(t), t ≥ 0) is a fragmentation with rate $\mu_{c_l, c_r}$. Recall that the process (Γ(t), t ≥ 0) can be constructed from a Poisson measure on $\mathbb{R}_+ \times C \times \mathbb{N}$ with intensity $dt \otimes \mu_{c_l,c_r} \otimes \sharp$. By the form of $\mu_{c_l,c_r}$, we remark that, for all t ≥ 0, Γ(t) has only one non-singleton block. Furthermore, for all n ∈ N, the integer n is a singleton at time t with probability $1 - e^{-tc}$, and, given that n is a singleton of Γ(t), {n} is before the infinite block of Γ(t) with probability $c_l/c$ and after it with probability $c_r/c$. By the law of large numbers, we deduce that the proportion of singletons before the infinite block of Γ(t) is almost surely $\frac{c_l}{c}(1 - e^{-tc})$ and the proportion of singletons after the infinite block of Γ(t) is almost surely $\frac{c_r}{c}(1 - e^{-tc})$. Remark 4.2. Berestycki [3] has proved a similar result for m-fragmentations. He also proved that if (F(t), t ≥ 0) is an m-fragmentation with parameters (ν, 0), then $\tilde F(t) = e^{-ct} F(t)$ is an m-fragmentation with parameters (ν, c). There is no simple way to extend Berestycki's result to the case of an i-fragmentation, since the Lebesgue measure of $U(t)^c$ squeezed between two successive interval components of U(t) depends on the time at which the two component intervals split.
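The law-of-large-numbers claim above can be checked numerically. The simulation below (names ours) detaches each integer at an exponential time of rate $c = c_l + c_r$ and places it on the left with probability $c_l/c$, exactly as in the description of the pure-erosion dynamics.

```python
import random

def erosion_proportions(c_l, c_r, t, n=100000, seed=1):
    """Monte Carlo estimate of the proportions of singletons to the left and
    to the right of the infinite block at time t, for the pure-erosion
    fragmentation with rate c_l * l + c_r * r."""
    rng = random.Random(seed)
    c = c_l + c_r
    left = right = 0
    for _ in range(n):
        if rng.expovariate(c) <= t:          # this integer detached by time t
            if rng.random() < c_l / c:
                left += 1
            else:
                right += 1
    return left / n, right / n

# the law of large numbers gives (c_l/c)(1 - e^{-ct}) and (c_r/c)(1 - e^{-ct})
```

For $c_l = 2$, $c_r = 1$, $t = 0.5$ one should find proportions close to $\frac{2}{3}(1 - e^{-1.5}) \approx 0.518$ and $\frac{1}{3}(1 - e^{-1.5}) \approx 0.259$.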

Projection from U to S
We know that if (U(t), t ≥ 0) is an i-fragmentation, then its projection on S, $(U^\downarrow(t), t \ge 0)$, is an m-fragmentation. More precisely, we can express the characteristics of the m-fragmentation in terms of those of the i-fragmentation: if (U(t), t ≥ 0) has characteristics $(\nu, c_l, c_r)$, then $(U^\downarrow(t), t \ge 0)$ has characteristics $(\tilde\nu, c_l + c_r)$, where ν̃ is the image of ν by $\wp_2$. Proof. Let (Γ(t), t ≥ 0) be a c-fragmentation with rate µ = (ν, $c_l$, $c_r$) and let (Π(t), t ≥ 0) be its image by $\wp_1$. The process (Π(t), t ≥ 0) is then a p-fragmentation. Fix n ∈ N and $\pi \in P^*_n$. The jump rates of Π are given by μ̃, where μ̃ is the image of µ by $\wp_1$. Besides, we have already proved that μ̃ = (ν̃, $c_l + c_r$). We now consider the i-fragmentation $(U_{\Gamma(t)}, t \ge 0)$ with rate $(\nu, c_l, c_r)$. We get that the process $(U^\downarrow_{\Gamma(t)}, t \ge 0)$ is a.s. equal to the m-fragmentation $(|\Pi(t)|^\downarrow, t \ge 0)$ with rate $(\tilde\nu, c_l + c_r)$.
According to Proposition 4.3 and using the theory of m-fragmentations (see [6]), we then deduce the following results: • Let (Γ(t), t ≥ 0) be a c-fragmentation with parameters $(\nu, c_l, c_r)$. We denote by $B_1(t)$ the block of Γ(t) containing the integer 1 and set $\sigma(t) = -\ln |B_1(t)|$. Then (σ(t), t ≥ 0) is a subordinator. If we denote $\zeta = \sup\{t > 0,\ \sigma_t < \infty\}$, then there exists a non-negative function φ such that
$$\mathbb{E}\big(e^{-q\sigma(t)}\, 1_{\{t < \zeta\}}\big) = e^{-t\varphi(q)}.$$
We call φ the Laplace exponent of σ and we have
$$\varphi(q) = (c_l + c_r)(q + 1) + \int_U \Big(1 - \sum_{i\ge1} |U_i|^{q+1}\Big)\, \nu(dU),$$
where $(|U_i|)_{i\ge1}$ is the sequence of the lengths of the component intervals of U.
• A $(\nu, c_l, c_r)$ i-fragmentation (U(t), t ≥ 0) is proper (i.e. for each t, U(t) almost surely has Lebesgue measure equal to 1) iff $c_l = c_r = 0$ and $\nu(\sum_i s_i < 1) = 0$.

Extension to the time-inhomogeneous case
We now briefly explain how the results of the preceding sections can be transposed to the case of time-inhomogeneous fragmentations. We will not always provide the details of the proofs since they are very similar to the homogeneous case. In the sequel, we shall focus on c-fragmentations (Γ(t), t ≥ 0) fulfilling the following properties: • for all n ∈ N, let $\tau_n$ be the time of the first jump of $\Gamma_{[n]}$ and $\lambda_n$ be its law. Then $\lambda_n$ is absolutely continuous with respect to Lebesgue measure with continuous and strictly positive density.
Remark that a time-homogeneous fragmentation always fulfills these two conditions: indeed, in that case, $\lambda_n$ is an exponential law and the function $h^n_\gamma(t)$ does not depend on t. As in the case of fragmentations of exchangeable partitions [2], for n ∈ N and $\gamma \in C^*_n$, we can define an instantaneous rate of jump from $1_n$ to γ at time t, denoted $q_{\gamma,t}$. With the same arguments as in the case of fragmentations of exchangeable partitions [2], we can prove that, for each t > 0, there exists a unique exchangeable measure $\mu_t$ on C such that $\mu_t(\{1_{\mathbb{N}}\}) = 0$ and $\mu_t(Q_{\infty,\gamma}) = q_{\gamma,t}$ for all $\gamma \in C^*_n$ and n ∈ N, where $Q_{\infty,\gamma} = \{\gamma' \in C,\ \gamma'_{[n]} = \gamma\}$. Furthermore, the family of measures $(\mu_t, t \ge 0)$ characterizes the law of the fragmentation. We call $\mu_t$ the instantaneous rate at time t of the fragmentation. We remark also that if $(\mu_t, t \ge 0)$ is the family of rates of a fragmentation process, then for all n ≥ 2 and t ≥ 0, $\mu_t(\{\gamma \in C,\ \gamma_{[n]} \ne 1_n\}) < \infty$. So we can apply Theorem 2.7 to $\mu_t$ and we deduce the following proposition: Corollary 4.4. Let $(\mu_t, t \ge 0)$ be the family of rates of a c-fragmentation. Then there exist a family of dislocation measures $(\nu_t, t \ge 0)$ and two families of nonnegative numbers $(c_{l,t}, t \ge 0)$, $(c_{r,t}, t \ge 0)$ such that
$$\mu_t = P^{\nu_t} + c_{l,t}\, l + c_{r,t}\, r.$$
Proof. The first part of the proposition comes from Theorem 2.7; the bounds on the dislocation measures and on the erosion coefficients follow from the finiteness of the jump rates on every compact time interval. In the same way as for homogeneous fragmentations, we define a family of fragmentation measures as a family $(\mu_t, t \ge 0)$ of exchangeable measures on C such that each $\mu_t$ is a fragmentation measure and: • ∀n ∈ N, ∀A ⊂ $C^*_n$, $\mu_t(A)$ is a continuous function of t.
Proposition 4.5. Let $(\mu_t, t \ge 0)$ be a family of fragmentation measures. A c-fragmentation with rates $(\mu_t, t \ge 0)$ can be constructed from a Poisson point process on $\mathbb{R}_+ \times C \times \mathbb{N}$ with intensity $dt \otimes \mu_t \otimes \sharp$, where $\sharp$ is the counting measure on N, in the same way as for time-homogeneous fragmentations.
It is very easy to check that the proof of the homogeneous case applies here too. Of course, a Poissonian construction of a time-inhomogeneous i-fragmentation with no erosion is also possible with a Poisson measure on $\mathbb{R}_+ \times U \times \mathbb{N}$ with intensity $dt \otimes \nu_t \otimes \sharp$. Concerning the law of the tagged fragment, if one defines $\sigma(t) = -\ln |B_1(t)|$, with $B_1(t)$ the block containing the integer 1, we now have that σ(t) is a process with independent increments. And so, if we denote $\zeta = \sup\{t > 0,\ \sigma_t < \infty\}$, then there exists a family of non-negative functions $(\varphi_t, t \ge 0)$ such that
$$\mathbb{E}\big(e^{-q\sigma(t)}\, 1_{\{t < \zeta\}}\big) = e^{-\int_0^t \varphi_u(q)\, du}.$$
We call $\varphi_t$ the instantaneous Laplace exponent of σ at time t and we have
$$\varphi_t(q) = (c_{l,t} + c_{r,t})(q + 1) + \int_U \Big(1 - \sum_{i\ge1} |U_i|^{q+1}\Big)\, \nu_t(dU),$$
where $(|U_i|)_{i\ge1}$ is the sequence of the lengths of the component intervals of U.

Extension to the self-similar case
A notion of self-similar fragmentations has also been introduced [5]. We recall here the definition of a self-similar p-fragmentation; the reader can easily adapt this definition to the three other instances of fragmentations.
Definition 4.6. Let Π = (Π(t), t ≥ 0) be an exchangeable process on P. We order the blocks of Π by their least elements. We call Π a self-similar p-fragmentation with index α ∈ R if • Π(0) = 1 N a.s.
• Π is continuous in probability • For every t ≥ 0, let Π(t) = (Π 1 , Π 2 , . . .) and denote by |Π i | the asymptotic frequency of the block Π i . Then for every s > 0, the conditional distribution of Π(t + s) given Π(t) is the law of the random partition whose blocks are those of the partitions where Π (1) , . . . is a sequence of independent copies of Π and s i = s|Π i | α .
Notice that a homogeneous p-fragmentation corresponds to the case α = 0.
We still have the same correspondence between the four types of fragmentation. In fact, a self-similar fragmentation can be constructed from a homogeneous fragmentation by a time change: Proposition 4.7.
[5] Let (U(t), t ≥ 0) be a homogeneous interval fragmentation with dislocation measure ν. For x ∈ ]0, 1[, we denote by $I_x(t)$ the interval component of U(t) containing x. We define $U_\alpha(t)$ as the open set whose interval component containing x is $I_x(\theta_x(t))$, where $\theta_x(t) = \inf\{u \ge 0 : \int_0^u |I_x(r)|^{-\alpha}\, dr > t\}$. Then $(U_\alpha(t), t \ge 0)$ is a self-similar interval fragmentation with index α.
A self-similar i-fragmentation (or c-fragmentation) is then characterized by a quadruple (ν, c l , c r , α) where ν is a dislocation measure on U, c l and c r are two nonnegative numbers and α ∈ R is the index of self-similarity.

Interval components in exchangeable random order
In this section we introduce the notion of random open set with interval components in exchangeable random order. In the next section, we will give an example of an i-fragmentation whose dislocation measure has its interval components in exchangeable random order.
Since we have $\sum_i s_i = 1$, there exists almost surely a unique open subset of ]0, 1[ fulfilling these two conditions. We denote by $Q_s$ the distribution of U.
Letν be a measure on S such thatν( i s i < 1) = 0. We denote by ν the measure on U defined by: A measure on U which can be written in that form is said to have interval components in exchangeable random order.
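The paintbox behind this construction is easy to simulate when s has finitely many nonzero terms: draw the uniform marks ξ_i and lay the intervals end to end in the induced order. The sketch below (hypothetical helper, not from the paper) does exactly this:

```python
import random

def exchangeable_order_intervals(s, rng=random):
    """Given lengths s = (s_1, ..., s_k) summing to 1, build the open set
    whose i-th component has length s_i and sits to the left of the j-th
    one exactly when xi_i < xi_j, with (xi_i) i.i.d. uniform on [0, 1]."""
    marks = [rng.random() for _ in s]                  # the marks xi_i
    order = sorted(range(len(s)), key=marks.__getitem__)
    intervals, left = {}, 0.0
    for i in order:                                    # lay components left to right
        intervals[i] = (left, left + s[i])
        left += s[i]
    return intervals  # index i -> interval component of length s_i
```

Sampling repeatedly, each of the k! orders of the components appears with equal frequency; this is the exchangeability of the order.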
Proposition 5.2. Let (U(t), t ≥ 0) be an i-fragmentation with rate (ν, 0, 0) such that, for all t ≥ 0, U(t) has interval components in exchangeable random order. Then ν also has interval components in exchangeable random order.
Proof. Let (F(t), t ≥ 0) be the projection of (U(t), t ≥ 0) on S. We know that F is then an m-fragmentation with rate (ν̃, 0), where ν̃ is the image of ν by the canonical projection U → S.
Let γ ∈ C_n and let π ∈ P_n be the image of γ by the canonical projection ℘_1 from C to P. Since U(t) has interval components in exchangeable random order for every t, let us remark that we have
q_γ = (1/k!) q_π,
where k is the number of blocks of γ and q_π is the jump rate of the p-fragmentation. Let ν′ be the measure on U obtained in Definition 5.1 from ν̃. Let us recall that Q_{∞,γ} = {γ′ ∈ C, γ′_[n] = γ} and define also P_{∞,π} = {π′ ∈ P, π′_[n] = π}. We then have
ν(Q_{∞,γ}) = q_γ = (1/k!) q_π = ν′(Q_{∞,γ}),
the last equality holding because, under ν′, the k! possible orders of the blocks of π are equally likely. So we get that ν = ν′ and hence ν has interval components in exchangeable random order.
Let us notice that the proof uses the identity q_γ = (1/k!) q_π, so if we want to extend this proposition to the time-inhomogeneous case, then we must suppose not only that U(t) has interval components in exchangeable random order, but more generally that, for all s > t ≥ 0, the transition probability measure q_{t,s} itself has interval components in exchangeable random order (see [16,17]); since the blocks are then in exchangeable random order, the semi-group from time t to time s of the c-fragmentation is q_{t,s}-FRAG(Γ(t), ·).

Dislocation measure of the Brownian fragmentation
We consider the m-fragmentation introduced by Aldous and Pitman [1] to study the standard additive coalescent. Bertoin [4] gave a construction of an i-fragmentation (U(t), t ≥ 0) whose projection on S is this fragmentation. More precisely, let ε = (ε_s, s ∈ [0, 1]) be a standard positive Brownian excursion. For every t ≥ 0, we consider
S_s^(t) = sup_{0≤u≤s} (tu − ε_u), 0 ≤ s ≤ 1.
We define U(t) as the open set of constancy intervals of (S_s^(t), 0 ≤ s ≤ 1). Bertoin [5] also proved that (U^↓(t), t ≥ 0) is an m-fragmentation with index of self-similarity 1/2 and no erosion, and that its dislocation measure ν̃_AP is carried by the subset of sequences {s = (s_1, s_2, . . .) ∈ S, s_1 = 1 − s_2 and s_i = 0 for i ≥ 3} and is given by
ν̃_AP(s_1 ∈ dx) = (2π x^3 (1 − x)^3)^(−1/2) dx, x ∈ [1/2, 1[.
Proposition 5.5. The i-fragmentation derived from a Brownian motion [4] has dislocation measure ν_AP such that:
• ν_AP is supported by the sets of the form ]0, X[ ∪ ]X, 1[, so we shall identify each such set with X and write ν_AP(dx) for its distribution;
• ν_AP(dx) = (2π x (1 − x)^3)^(−1/2) dx for x ∈ ]0, 1[.
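Bertoin's construction is easy to visualise numerically. The sketch below is an illustration under stated assumptions, not code from the paper: it replaces the Brownian excursion by a rescaled simple-random-walk excursion obtained through the Vervaat transform of a ±1 bridge, then extracts the constancy intervals of s ↦ sup_{u≤s}(tu − ε_u):

```python
import random

def walk_excursion(n, rng=random):
    """Discrete approximation of the normalized Brownian excursion:
    Vervaat transform (rotation at the first minimum) of a +-1 bridge."""
    steps = [1] * n + [-1] * n
    rng.shuffle(steps)
    bridge = [0]
    for x in steps:
        bridge.append(bridge[-1] + x)
    m = min(range(2 * n + 1), key=lambda i: (bridge[i], i))  # first minimum
    exc = [bridge[(m + i) % (2 * n)] - bridge[m] for i in range(2 * n + 1)]
    return [e / (2 * n) ** 0.5 for e in exc]  # nonnegative, starts/ends at 0

def fragment(eps, t):
    """Open set U(t): complement of the increase points of
    s -> sup_{u <= s} (t*u - eps_u), returned as intervals of [0, 1]."""
    n = len(eps) - 1
    sup, start, intervals = float("-inf"), 0.0, []
    for i, e in enumerate(eps):
        v = t * i / n - e
        if v > sup:                       # new record: an increase point
            if i / n > start:
                intervals.append((start, i / n))
            sup, start = v, i / n
    if start < 1.0:
        intervals.append((start, 1.0))
    return intervals
```

As t grows, the record points of u ↦ tu − ε_u become denser, so the intervals refine; at t = 0 the whole of ]0, 1[ is a single block.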
Notice that we have ν_AP(dx) = x ν̃_AP(s_1 ∈ dx or s_2 ∈ dx) for all x ∈ ]0, 1[. Hence, given that the m-fragmentation splits into two blocks of sizes x and 1 − x, the left block of the i-fragmentation is a size-biased pick from {x, 1 − x}.
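With ν̃_AP(s_1 ∈ dx) = (2πx³(1−x)³)^(−1/2) dx on [1/2, 1[ and ν_AP(dx) = (2πx(1−x)³)^(−1/2) dx as in Proposition 5.5, this identity is a one-line check: for each x ∈ ]0, 1[ only one of the events s_1 ∈ dx (when x ≥ 1/2) and s_2 ∈ dx (when x ≤ 1/2) can occur, and the symmetry x ↔ 1 − x of the density of ν̃_AP gives the same factor in both cases:

```latex
x\,\tilde\nu_{AP}(s_1 \in dx \text{ or } s_2 \in dx)
  = x\,\bigl(2\pi x^{3}(1-x)^{3}\bigr)^{-1/2}\,dx
  = \bigl(2\pi x(1-x)^{3}\bigr)^{-1/2}\,dx
  = \nu_{AP}(dx).
```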
Proof. The first part of the proposition is straightforward since we have ν̃_AP(s_1 = 1 − s_2) = 1. For the second part, let us use Theorem 9 in [4], which gives the distribution ρ_t of the leftmost fragment of U(t):
ρ_t(dx) = t (2π x (1 − x)^3)^(−1/2) exp(− t^2 x / (2(1 − x))) dx for all x ∈ ]0, 1[.
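With ρ_t(dx) = t(2πx(1−x)³)^(−1/2) exp(−t²x/(2(1−x))) dx (a reconstruction of the leftmost-fragment law, consistent with the limit ν_AP(dx) = lim_{t→0} t^(−1) ρ_t(dx)), one can check that ρ_t is indeed a probability measure on ]0, 1[: the substitution y = x/(1−x) reduces its mass to ∫_0^∞ t(2πy)^(−1/2) e^(−t²y/2) dy = 1. A numerical sketch of this check:

```python
import math

def rho_density(x, t):
    """Reconstructed density of the leftmost fragment of U(t);
    the exponential factor is an assumption, not quoted from [4]."""
    return (t * (2 * math.pi * x * (1 - x) ** 3) ** -0.5
            * math.exp(-t * t * x / (2 * (1 - x))))

def midpoint_integral(f, a, b, n=200_000):
    """Plain midpoint rule; crude but sufficient for this integrable
    x^(-1/2)-type singularity at 0 (the density vanishes fast near 1)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h
```

For instance, midpoint_integral(lambda x: rho_density(x, 1.0), 0.0, 1.0) returns a value close to 1.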

We get
ν_AP(dx) = lim_{t→0} (1/t) ρ_t(dx) = (2π x (1 − x)^3)^(−1/2) dx.
We can also give a description of the distribution at time t > 0 of U(t). Recall the result obtained by Chassaing and Janson [10]. For a random process X on R and t ≥ 0, we define ℓ_t(X) as the local time of X at level 0 on the interval [0, t], i.e.
ℓ_t(X) = lim_{ε→0} (1/(2ε)) ∫_0^t 1_{|X_s| ≤ ε} ds.
Let X^t be a reflected Brownian bridge conditioned on ℓ_1(X^t) = t. We define θ ∈ ]0, 1[ such that
ℓ_θ(X^t) − tθ = sup_{0≤s≤1} (ℓ_s(X^t) − ts).
It is well known that this equation has almost surely a unique solution. Let us define the process (Z^t(s), 0 ≤ s ≤ 1) by Z^t(s) = X^t(s + θ [mod 1]).
Chassaing and Janson [10] have proved that, for each t ≥ 0, U(t) has the law of {s ∈ ]0, 1[, Z^t(s) > 0}. Besides, as the inverse of the local time of X^t, defined by
T_u = inf{s ≥ 0, ℓ_s(X^t) > u},
is a stable subordinator with Lévy measure (2πx^3)^(−1/2) dx conditioned on T_t = 1, we deduce the following description of the distribution of U(t):
Corollary 5.6. Let t > 0. Let T be a stable subordinator with Lévy measure (2πx^3)^(−1/2) dx conditioned on T_t = 1. Let us define m as the unique real number in [0, t] such that
tT_{m−} − m ≤ tT_u − u for all u ∈ [0, t],
where T_{m−} = lim_{x→m−} T_x. We set
V = {T_u − T_{m−} [mod 1], u ∈ [0, t]}^cl.
Then [0, 1] \ U(t) has the law of V.
Proof. It is clear that {u, X^t(u) = 0} coincides with {T_x, x ∈ [0, t]}^cl when T is the inverse of the local time of X^t. Hence, we just have to check that if we set m = ℓ_θ(X^t), then m verifies the equation tT_{m−} − m ≤ tT_u − u for all u ∈ [0, t]. Since X^t(θ) = 0, we have T_{m−} = θ, thus we get
tT_{m−} − m = tθ − ℓ_θ(X^t).
Let us fix u ∈ [0, t]. Since v ↦ ℓ_v(X^t) is a continuous function, there exists v ∈ [0, 1] such that ℓ_v(X^t) = u. Besides, we have T_{u−} ≤ v ≤ T_u, and the definition of θ gives tθ − ℓ_θ(X^t) ≤ tv − ℓ_v(X^t) = tv − u, so we get tT_{m−} − m ≤ tT_u − u.
Hence, the distribution of [0, 1] \ U(t) can be obtained as the closure of the shifted range of a stable subordinator (T_s, 0 ≤ s ≤ t) with index 1/2 conditioned on T_t = 1 (recall also that Chassaing and Janson [10] have proved that the leftmost fragment of U(t) is a size-biased pick).