Regeneration in Random Combinatorial Structures

The theory of Kingman's partition structures has two culminating points: the general paintbox representation, which relates finite partitions to hypothetical infinite populations via a natural sampling procedure known as Kingman's paintbox, and the central example of the theory, the Ewens-Pitman two-parameter family of partitions. In these notes we develop the theory further by passing to structures enriched by an order on the collection of categories; extending the class of tractable models by exploring the idea of regeneration; analysing regenerative properties of the Ewens-Pitman partitions; and studying asymptotic features of regenerative compositions.


Preface
The kind of discrete regenerative phenomenon discussed here is present in the cycle patterns of random permutations. To describe this instance, first recall that every permutation of [n] := {1, . . . , n} is decomposable into a product of disjoint cycles. The cycle sizes make up a partition of n into some number of positive integer parts. For instance, the permutation (1 3)(2) of the set [3] corresponds to the partition of the integer 3 with parts 2 and 1. Permutations of different degrees n are connected in a natural way. Starting with a permutation of [n], a permutation of the smaller set [n − 1] is created by removing the element n from its cycle. This reduction is a surjective n-to-1 mapping. For instance, the three permutations (1 3)(2), (1)(2 3), (1)(2)(3) are all mapped to (1)(2). Now suppose the permutation is chosen uniformly at random from the set of all n! permutations of [n]. The collection of cycle sizes is then a certain random partition πn of the integer n. By the n-to-1 property of the projection, the permutation reduced by the element n is a uniformly distributed permutation of [n − 1], with cycle partition πn−1. The transition from πn to πn−1 is easy to describe directly, without reference to the underlying permutations: choose a random part of πn by a size-biased pick, i.e. with probability proportional to the size of the part, and then reduce the chosen part by 1. This transition rule suggests viewing the random partitions with varying n altogether as components of an infinite partition structure (πn, n = 1, 2, . . .).
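The cycle reduction and the size-biased transition can be checked side by side in a short simulation (the function names and the use of Python's `random` module are our own illustration, not part of the text):

```python
import random

def cycle_partition(perm):
    """Ranked cycle sizes of a permutation of {0, ..., n-1} given as a list."""
    seen, parts = [False] * len(perm), []
    for i in range(len(perm)):
        if not seen[i]:
            size, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                size += 1
            parts.append(size)
    return sorted(parts, reverse=True)

def size_biased_reduction(partition, rng):
    """Pick a part with probability proportional to its size, reduce it by 1."""
    r = rng.randrange(sum(partition))       # a uniform 'ball' lands in a part
    acc = 0
    for idx, part in enumerate(partition):
        acc += part
        if r < acc:
            rest = partition[:idx] + partition[idx + 1:]
            return sorted(rest + ([part - 1] if part > 1 else []), reverse=True)

rng = random.Random(1)
perm = list(range(6))
rng.shuffle(perm)                           # uniform random permutation of [6]
pi6 = cycle_partition(perm)                 # the random partition pi_6
pi5 = size_biased_reduction(pi6, rng)       # one step of the transition rule
assert sum(pi6) == 6 and sum(pi5) == 5
```

Removing the element n from the permutation's cycle and reducing a size-biased part of the partition give the same transition in distribution, as described above.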
Apart from the consistency property inherent to any partition structure, there is another recursive self-reproduction property of the partitions derived from the cycle patterns of uniform permutations. Fix n and suppose a part is chosen by a size-biased pick from π n and completely deleted. Given the part was m, the partition reduced by this part will be a distributional copy of π n−m . In this sense the partition structure (π n , n = 1, 2, . . .) regenerates.
For large n, the size-biased pick will choose a part with about nU elements, where U is a random variable with uniform distribution on the unit interval. In the same way, the iterated deletion of parts by size-biased picking becomes similar to the splitting of [0, 1] at points representable via products of independent uniform variables. The latter is a special case of the multiplicative renewal process often called stick-breaking.
In these notes we consider sequences of partitions and ordered partitions which are consistent in the same sense as the cycle patterns of permutations for various n. In contrast to that, the assumption about the regeneration property of such structures will be fairly general. The connection between combinatorial partitions and splittings of the unit interval is central in the theory and will be analysed in detail in the general context of regenerative structures.

The paintbox and the two-parameter family
A composition of integer n is an ordered sequence λ• = (λ1, . . . , λk) of positive integer parts with sum |λ•| := Σj λj = n. We shall think of a composition as a model of occupancy, meaning n 'balls' separated by 'walls' into some number of nonempty 'boxes', as in the diagram | • • • | • | • • | representing the composition (3,1,2). A wall | is either placed between two consecutive •'s or not, hence there are 2^{n−1} compositions of n. Sometimes we shall also encode compositions as binary sequences, in which a 1 followed by m − 1 zeroes corresponds to a part m, like the code 100110 for the composition (3,1,2). A related labeled object is an ordered partition of the set [n] := {1, . . . , n}, which may be obtained by some enumeration of the balls by the integers 1, . . . , n (the ordering of balls within a box is not important). The number of such labelings, that is the number of ordered set partitions with shape (λ1, . . . , λk), is equal to the multinomial coefficient
f•(λ1, . . . , λk) := n!/(λ1! · · · λk!).
Throughout, the symbol • will mark a function of compositions, also when the function is not sensitive to permutations of the parts.
Discarding the order of parts in a composition (λ1, . . . , λk) yields a partition of the integer |λ•|, usually written as a ranked sequence of nonincreasing parts. For instance, ranking maps the compositions (3,1,2) and (1,3,2) to the same partition (3, 2, 1)↓, where ↓ will be used both to denote the operation of ranking and to indicate that the arrangement of parts in the sequence is immaterial. Sometimes we use notation like 2 ∈ (4, 2, 2, 1)↓ to say that 2 is a part of the partition. The number of partitions of the set [n] with the same shape λ↓ = (λ1, . . . , λk)↓ is equal to
f(λ↓) := n! ∏_{r=1}^{n} 1/((r!)^{k_r} k_r!),
where k_r = #{j : λj = r} is the number of parts of λ↓ of size r.
A random composition/partition of n is simply a random variable with values in the finite set of compositions/partitions of n. One statistical context where these combinatorial objects appear is the species sampling problem. Imagine an alien who has no idea of the mammals. Suppose the first six mammals she observes are tiger, giraffe, elephant, elephant, elephant and giraffe, appearing in this sequence. The most frequent, three of these, have long trunks, two are distinctively taller than the others, and one is striped. She records this as the partition (3, 2, 1)↓ into three distinct species. The composition (1, 3, 2) could appear as the record of species abundance under a more delicate classification according to typical height, from the lowest to the tallest. Enumerating the animals in the order of observation gives a labeled object, a partition/ordered-partition of the set [6] = {1, . . . , 6}.
There are many ways to introduce random partitions or compositions. The method adopted here is intrinsically related to the species sampling problem. This is the following ordered version of Kingman's paintbox (see [7], [14], [41]).
Ordered paintbox Let R be a random closed subset of [0,1]. The complement open set R c := (0, 1) \ R has a canonical representation as a disjoint union of countably many open interval components, which we shall call the gaps of R. Independently of R, sample points U 1 , U 2 , . . . from the uniform distribution on [0, 1] and group the points in clusters by the rule: U i , U j belong to the same cluster if they hit the same gap of R. If U i falls in R let U i be a singleton. For each n, count the representatives of clusters among U 1 , . . . , U n and define κ n , a random composition of integer n, to be the record of positive counts in the left-to-right order of the gaps.
For instance, κ n assumes the value (3,1,2) if, in the left-to-right order, there is a gap hit by three points out of U 1 , . . . , U 6 , a singleton cluster resulting from either some gap or from some U j ∈ R, and a gap hit by two of U 1 , . . . , U 6 .
In the proper case R has Lebesgue measure zero almost surely, hence U j ∈ R occurs only with probability zero. We may think then of points of R as possible locations of walls | and of the points of [0, 1] as possible locations of balls •. In a particular realisation, the balls appear at locations U j , and the walls bound the gaps hit by at least one ball. In the improper case, R may have positive measure with nonzero probability. If U j ∈ R we can imagine a box with walls coming so close together that no further ball will fit in this box, so U j will forever remain a singleton, no matter how many balls are added.
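For a paintbox with finitely many break points the construction is easy to simulate; the following sketch (with our own helper names) samples κn from the gaps of a proper R:

```python
import random

def ordered_paintbox(break_points, n, rng):
    """Sample the composition kappa_n from a finite paintbox.

    break_points: points of R in (0, 1); the gaps are the intervals
    between consecutive points of {0} + break_points + {1}.
    """
    bounds = [0.0] + sorted(break_points) + [1.0]
    counts = [0] * (len(bounds) - 1)
    for _ in range(n):
        u = rng.random()                    # a uniform 'ball'
        for g in range(len(counts)):
            if bounds[g] <= u < bounds[g + 1]:
                counts[g] += 1
                break
    return [c for c in counts if c > 0]     # positive counts, left to right

rng = random.Random(7)
kappa = ordered_paintbox([0.3, 0.5], 10, rng)
assert sum(kappa) == 10 and all(c > 0 for c in kappa)
```

With R = {0.3, 0.5} the three gaps play the role of boxes, and empty boxes leave no trace in κn.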
Sometimes we shall identify R with the splitting of [0, 1] it induces, and just call R itself the paintbox. Molchanov [38] gives an extensive exposition of the theory of random sets, although an intuitive idea will suffice for most of our purposes. R can be a set of some fixed cardinality, e.g. a splitting of [0, 1] following a Dirichlet distribution (see [15], [32]), or a complicated random Cantor-type set like the set of zeroes of Brownian motion. It will also be convenient to make no difference between two closed subsets of [0, 1] if they only differ by the endpoints 0 or 1. If 1 or 0 is not an accumulation point of R, the gap adjacent to that boundary will be called the right or left meander.
The paintbox with random R is a kind of canonical representation of 'nonparametric priors' in the species sampling problem. View R as an ordered space of distinct types. Originally, by Kingman [36], the types were colours making up a paintbox. Consider a random probability measure F on the reals as a model of an infinite ordered population. Let ξ1, ξ2, . . . be a sample from F , which means that conditionally given F the ξj's are i.i.d. with distribution F . An ordered partition of the sample is defined by grouping the j's with the same value of ξj, with the order on the groups maintained by increase of the values. The case of diffuse (nonatomic) F is trivial: then ties among the ξj's have probability zero and the partition has only singletons, so the substantial case is F with atoms, when the partition will have nontrivial blocks. The same ordered partition is induced by any other distribution obtained from F by a suitable monotonic transformation, which may be random. To achieve uniqueness, view F as a random distribution function and observe that ξi ≤ ξj iff F(ξi) ≤ F(ξj). Conditioning on F and applying the quantile transform y → F(y) to the sample produces another sample ξ̃1, ξ̃2, . . . from the transformed distribution F̃ supported by [0, 1]. In the diffuse case, F̃ is well known to be the uniform distribution, and in general the distribution function F̃ is of a special kind: it satisfies F̃(x) ≤ x for x ∈ [0, 1] and F̃(x) = x F̃-a.s. Moreover, each jump location of F̃ is preceded by a flat (where F̃ is constant) whose length is equal to the size of the jump. The latter implies that the composition derived from F̃ by grouping equal ξ̃j's in clusters is the same as the composition obtained via the paintbox construction from R = support(F̃). The identification with the paintbox construction can be shown more directly, i.e. without appealing to F̃, by taking for R the range of the random function F (note that support(F̃) with 0 attached to it coincides with the range of F ).
Note further important features inherent to the paintbox construction:
• The unlabeled object, κn, is determined by R and the uniform order statistics Un:1 < . . . < Un:n; the ranks of U1, . . . , Un appear as random labels and do not matter.
• Attaching label j to the ball corresponding to U j , we obtain, for each n, an ordered partition K n of the set [n], with shape κ n . This ordered partition is exchangeable, meaning that a permutation of the labels does not change the distribution of K n , thus all ordered partitions of [n] with the same shape have the same probability.
• The ordered partitions Kn are consistent as n varies: removing ball n (and deleting an empty box in case one is created) reduces Kn to Kn−1. The infinite sequence K = (Kn) of consistent ordered partitions of the sets [1], [2], . . . therefore defines an exchangeable ordered partition of the infinite set N into some collection of nonempty blocks.
Translating the consistency in terms of compositions κn we arrive at
Definition 2.1. A sequence κ = (κn) of random compositions of n = 1, 2, . . . is called a composition structure if it is sampling consistent: for each n > 1, conditionally given κn = (λ1, . . . , λk), the composition κn−1 has the same distribution as the composition obtained from κn by reducing by 1 a single part, the part λj being chosen with probability λj/n.
A size-biased part of composition λ • is a random part which coincides with every part λ j with probability λ j /|λ • |. A size-biased part of a random composition κ n is defined conditionally on the value κ n = λ • . The sampling consistency condition amounts to the transition from κ n to κ n−1 by reducing a size-biased part. This special reduction rule in Definition 2.1 is a trace of the exchangeability in K n that remains when the labels are erased: indeed, given the sizes of the blocks, the ball with label n belongs to a particular block of size λ j with probability λ j /n.
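The size-biased reduction of Definition 2.1 is straightforward to implement; in this sketch (names are ours) we check the transition probabilities from the composition (2, 1) by Monte Carlo:

```python
import random
from collections import Counter

def size_biased_reduce(comp, rng):
    """Transition kappa_n -> kappa_{n-1}: part j is reduced w.p. lambda_j / n."""
    r = rng.randrange(sum(comp))
    out, acc = [], 0
    for part in comp:
        chosen = acc <= r < acc + part
        acc += part
        if chosen:
            if part > 1:
                out.append(part - 1)        # the chosen part loses one ball
        else:
            out.append(part)
    return tuple(out)

rng = random.Random(0)
trials = 30000
freq = Counter(size_biased_reduce((2, 1), rng) for _ in range(trials))
# from (2,1): part 2 is chosen w.p. 2/3 giving (1,1); part 1 w.p. 1/3 giving (2,)
assert abs(freq[(1, 1)] / trials - 2 / 3) < 0.02
assert abs(freq[(2,)] / trials - 1 / 3) < 0.02
```

Note that the order of the surviving parts is preserved, in contrast with the unordered reduction of partition structures.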
Keep in mind that the consistency of ordered set partitions K n is understood in the strong sense, as a property of random objects defined on the same probability space, while Definition 2.1 only requires weak consistency in terms of the distributions of κ n 's. By the measure extension theorem, however, the correspondence between (the laws of) exchangeable ordered partitions of N and composition structures is one-to-one, and any composition structure can be realised through an exchangeable ordered partition of N. In view of this correspondence, dealing with labeled or unlabeled objects is just the matter of convenience, and we shall freely switch from one model to another.
A central result about the general composition structures says that these can be uniquely represented by a paintbox [14]. This extends Kingman's [36] representation of partition structures. Theorem 2.2. For every composition structure κ = (κ n ) there exists a unique distribution for a random closed set R which by means of the paintbox construction yields, for each n, a distributional copy of κ n .
Sketch of proof The line of the proof is analogous to modern proofs of de Finetti's theorem, which asserts that a sequence of exchangeable random variables is conditionally i.i.d. given the limiting empirical distribution of the sequence (see Aldous [1]). To this end, we need to make the concept of a random closed set precise. One way to do this is to topologise the space of closed subsets of [0, 1] by means of the Hausdorff distance. Recall that for R1, R2 ⊂ [0, 1] (with the boundary points 0, 1 adjoined to the sets) the distance is equal to the smallest ε such that the ε-inflation of R1 covers R2 and the same holds with the roles swapped; thus the distance is small when the sizes and positions of a few biggest gaps are approximately the same for both sets. Realise all κn's on the same probability space through some exchangeable K. Encode each composition (λ1, . . . , λk) of n into the finite set {0, Λ1/n, . . . , Λk−1/n, 1}, where Λj = λ1 + . . . + λj. This maps κn to a finite random set Rn ⊂ [0, 1]. By a martingale argument it is shown that the law of large numbers holds: as n → ∞ the sets Rn converge almost surely to a random closed set R. The limit R is shown to direct the paintbox representation of κ.
There are various equivalent formulations of the result in terms of (i) exchangeable quasi-orders on N (in the spirit of [33]), (ii) the entrance Martin boundary for the time-reversed Markov chain (κn, n = . . . , 2, 1), (iii) certain functionals on the infinite-dimensional algebra of quasisymmetric functions [23].
The distribution of a composition structure is conveniently recorded by the composition probability function (CPF)
p•(λ•) := P(κn = λ•), |λ•| = n, n = 1, 2, . . . ;
for fixed |λ•| = n this is the distribution of κn. To avoid confusion with the distribution of Kn we stress that, by exchangeability, the probability of any particular value of the set partition Kn with shape λ• equals p•(λ•)/f•(λ•). Sampling consistency translates as the backward recursion
p•(λ•) = Σμ• c(λ•, μ•) p•(μ•),    (1)
where μ• runs over all shapes of extensions of any fixed ordered partition of [n] with shape λ• to some ordered partition of [n + 1]. For instance, taking λ• = (2, 3), μ• assumes the values (1, 2, 3), (2, 1, 3), (2, 3, 1), (3, 3), (2, 4). The coefficient c(λ•, μ•) is the probability of obtaining λ• from μ• by reducing a size-biased part of μ•. For fixed n, if p•(λ•) is known for compositions λ• with |λ•| = n, then solving (1) backwards gives the values of the CPF for all compositions with |λ•| ≤ n. By linearity of the recursion, every such partial solution, on levels n′ ≤ n, is a convex combination of the 2^{n−1} solutions obtained by taking delta measures on the level n. Similarly, without restricting n, the set of CPF's is convex and compact in the weak topology of functions on a countable set; this convex set has the property of uniqueness of the barycentric decomposition in terms of extreme elements (Choquet simplex). The extreme CPF's are precisely those derived from nonrandom paintboxes. The correspondence between extreme solutions and closed subsets of [0, 1] is a homeomorphism, which extends to a homeomorphism between all CPF's and distributions for a random closed set R.
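The backward recursion can be run mechanically: the level-n probabilities are pushed down one level by a size-biased reduction. A minimal exact-arithmetic sketch (function names are ours):

```python
from collections import defaultdict
from fractions import Fraction

def reduce_cpf(cpf_n):
    """One step of the backward recursion: from the CPF at level n to the
    CPF at level n-1, by reducing a size-biased part of each composition."""
    out = defaultdict(Fraction)
    for comp, prob in cpf_n.items():
        n = sum(comp)
        for j, part in enumerate(comp):
            reduced = comp[:j] + ((part - 1,) if part > 1 else ()) + comp[j + 1:]
            out[reduced] += prob * Fraction(part, n)
    return dict(out)

# start from the delta measure on the single composition (2, 1) of n = 3
cpf2 = reduce_cpf({(2, 1): Fraction(1)})
assert cpf2 == {(1, 1): Fraction(2, 3), (2,): Fraction(1, 3)}
assert reduce_cpf(cpf2) == {(1,): Fraction(1)}
```

Starting from a delta measure on level n and iterating produces one of the extreme partial solutions mentioned above.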
Discarding the order of parts in each κn we obtain Kingman's partition structure π = (πn) with πn = κ↓n. Partition structures satisfy the same sampling consistency condition as in Definition 2.1. The corresponding labeled object is an exchangeable partition Π = (Πn) of the infinite set N. The law of large numbers for partition structures says that, as n → ∞, the vector n^{−1}πn padded by infinitely many zeroes converges (weakly for πn, strongly for Πn) to a random element S of the infinite-dimensional simplex
∇ := {(s1, s2, . . .) : s1 ≥ s2 ≥ . . . ≥ 0, Σj sj ≤ 1},
so the components of S are the asymptotic frequencies of the ranked parts of πn. The partition probability function (PPF)
p(λ↓) := P(πn = λ↓), |λ↓| = n, n = 1, 2, . . . ,
specifies the distributions of the πn's and satisfies a recurrence analogous to (1). The correspondence between PPF's and distributions for the unordered paintbox S is bijective. Note that the possibility of strict inequality Σj sj < 1 occurs in the improper case, where the diffuse mass 1 − Σj sj, sometimes also called dust [7], is equal to the cumulative frequency of singleton blocks of Π given S = (sj).
Discarding order is a relatively easy operation. In terms of the ordered and unordered paintboxes R and S the connection is expressed by the formula
S = R↓,    (2)
where the ranking ↓ means that the gap-sizes of R are recorded in nonincreasing order. The operation ↓ is a continuous mapping from the space of closed subsets of [0, 1] to ∇.
In the other direction, there is one universal way to introduce the order. With every partition structure one can associate a unique symmetric composition structure, for which any of the following three equivalent conditions holds: (i) all terms in the RHS of the symmetrisation formula
p(λ↓) = Σ p•(λ1, . . . , λk),    (3)
with the sum over all distinct arrangements (λ1, . . . , λk) of the parts of λ↓, are equal; (ii) conditionally given Πn with k blocks, any arrangement of the blocks in Kn has the same probability; (iii) the gaps of R appear in exchangeable random order.
The last property (iii) means that, conditionally given S = (sj) with sk > 0, every relative order of the k largest gaps (labeled by [k]) of sizes s1, . . . , sk has probability 1/k!. This rule defines R unambiguously in the proper case, and the extension to the improper case follows by continuity. A simple example of a symmetric R is associated with splitting [0, 1] according to a symmetric Dirichlet distribution on a finite-dimensional simplex. Besides the symmetric composition structure, there are many other composition structures associated with a given partition structure. Understanding the connection in the direction from unordered to ordered structures is a difficult problem of arrangement. To outline some facets of the problem: suppose we have a rule to compute p• from p; how can we then pass from S to R? Specifically, given S = (sj), in which order should the intervals of sizes s1, s2, . . . be arranged in an open set? The other way round, suppose we have a formula for p and know that (2) is true; how can we then compute the probability that, given π5 = (3, 2)↓, the parts appear in the composition as (2, 3)? Most questions like these cannot have universal answers, because random sets and random series are objects of high complexity, and the paintbox correspondence cannot be expressed by simple formulas.

Ewens-Pitman partition structures
In the theory of partition structures and partition-valued processes of fragmentation and coagulation [7] a major role is played by the Ewens-Pitman two-parameter family of partitions, with PPF
p(λ↓) = f(λ↓) · [(θ + α)(θ + 2α) · · · (θ + (k − 1)α) / (θ + 1)_{n−1}] · ∏_{j=1}^{k} (1 − α)_{λj−1},    (4)
for |λ↓| = n with k parts, where henceforth (z)_n := z(z + 1) · · · (z + n − 1) is a rising factorial. The principal range of the parameters is 0 ≤ α < 1, θ > −α, and there are also a few degenerate boundary cases defined by continuity. One of many remarkable features of these partitions is the sequential device for generating the corresponding exchangeable partition Π = (Πn). Start with the one-element partition Π1. Inductively, suppose Πn has been constructed; then, given that the shape of Πn is (λ1, . . . , λk)↓, the ball n + 1 is placed in the existing box i with probability (λi − α)/(n + θ) for i = 1, . . . , k, and starts a new box with probability (θ + kα)/(n + θ). In the Dubins-Pitman interpretation as a 'Chinese restaurant process', the balls correspond to customers arriving in the restaurant, and the boxes are circular tables. Taking account of the circular ordering of customers at each occupied table, and subject to uniform random placement at each particular table, the process also defines a consistent sequence of random permutations for n = 1, 2, . . ., with uniform distributions in the case (α, θ) = (0, 1).
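The sequential construction is easy to code; the sketch below (our own function, assuming the principal parameter range) generates the shape of Πn:

```python
import random

def crp_shape(n, alpha, theta, rng):
    """Shape of Pi_n under the (alpha, theta) Chinese restaurant process."""
    boxes = []
    for m in range(n):                        # ball m+1 arrives, m balls placed
        r = rng.random() * (m + theta)
        acc = theta + len(boxes) * alpha      # weight of opening a new box
        if not boxes or r < acc:
            boxes.append(1)
            continue
        placed = False
        for i in range(len(boxes)):
            acc += boxes[i] - alpha           # weight of an existing box
            if r < acc:
                boxes[i] += 1
                placed = True
                break
        if not placed:                        # guard against rounding at the top
            boxes[-1] += 1
    return sorted(boxes, reverse=True)

rng = random.Random(5)
shape = crp_shape(20, 0.5, 1.0, rng)
assert sum(shape) == 20
```

With (α, θ) = (0, 1) the output is distributed as the cycle partition of a uniform random permutation, as discussed in the Preface.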
The two-parameter family has numerous connections to basic types of random processes like the Poisson process and the Brownian motion, see Pitman's lecture notes [43] for a summary. It also provides an exciting framework for the problem of arrangement.

Regenerative composition structures
Every κn in a composition structure may be regarded as a reduced copy of κn+1. We now complement this by another type of self-reproduction property, related to the reduction by a whole box. Definition 3.1. A composition structure κ = (κn) is called regenerative if for all n > m ≥ 1 the following deletion property holds: if the first part of κn is deleted then, conditionally given this part is m, the remaining composition of n − m is distributed like κn−m.
Denote by Fn the first part of κn and consider its distribution
q(n : m) := P(Fn = m), 1 ≤ m ≤ n.    (5)
It follows immediately from the definition that κ is regenerative iff the CPF has the product form
p•(λ1, . . . , λk) = q(Λ1 : λ1) q(Λ2 : λ2) · · · q(Λk : λk),    (6)
where Λj = λj + . . . + λk for 1 ≤ j ≤ k. For each n, the formula identifies κn with the sequence of decrements of a decreasing Markov chain Q↓n = (Q↓n(t), t = 0, 1, . . .) on {0, . . . , n}. The chain starts at n, terminates at 0, and jumps from n′ ≤ n to n′ − m with probability q(n′ : m). The binary code of κn is obtained by writing 1's in the positions n − Q↓n(t) + 1, t = 0, 1, . . ., and 0's in all other positions, with the convention that the last 1, in position n + 1, is not included in the code. In view of this interpretation we call q = (q(n : m), 1 ≤ m ≤ n, n ∈ N) the decrement matrix of κ. Since p• is computable from q, the decrement matrix completely determines the distributions of the κn's and the distribution of the associated exchangeable ordered partition K.
For a given regenerative κ let π = (πn), with πn = κ↓n, be the related partition structure. Think of κn as an arrangement of the parts of πn in some order. For a partition λ↓ of n and each m ∈ λ↓ define the deletion kernel d(λ↓, m), which specifies the conditional probability, given the unordered multiset of parts, of placing a part of size m in the first position of κn (so d(λ↓, m) = 0 if m ∉ λ↓). The deletion property of κ implies that the PPF of π satisfies the identity
p(λ↓) d(λ↓, m) = q(n : m) p(λ↓ \ m),    (7)
where λ↓ \ m is λ↓ with one part of size m removed, and q(n : ·), the distribution of Fn, may be written in terms of the deletion kernel as
q(n : m) = Σ d(λ↓, m) p(λ↓),    (8)
with the sum over partitions λ↓ of n having m as a part. Intuitively, the deletion kernel is a stochastic algorithm for choosing a part of the partition πn to be placed in the first position of the composition κn. Iterated choices arrange all parts of each πn in κn, hence the deletion kernel may be used to describe the arrangement on the level of finite partitions. The partition structure π inherits from κ the property of invariance under deletion of a part chosen by some random rule, expressed formally as (7) and (8).
This is, of course, a subtle property when compared with more obvious invariance of κ under the first-part deletion, as specified in Definition 3.1.
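The product formula (6) can be verified mechanically for a toy decrement matrix; here we take q(n : ·) uniform on {1, . . . , n} (our own choice for illustration) and check that the resulting CPF is a probability distribution:

```python
from fractions import Fraction

def cpf_product(comp, q):
    """CPF of a regenerative composition: prod_j q(Lambda_j : lambda_j)."""
    p, rem = Fraction(1), sum(comp)
    for part in comp:
        p *= q(rem, part)                 # rem is the tail sum Lambda_j
        rem -= part
    return p

def q_uniform(n, m):
    return Fraction(1, n)                 # toy decrement matrix: uniform rows

def all_comps(n):
    """Enumerate all 2^(n-1) compositions of n."""
    if n == 0:
        yield ()
    else:
        for first in range(1, n + 1):
            for rest in all_comps(n - first):
                yield (first,) + rest

# a valid decrement matrix yields a probability distribution on compositions
total = sum(cpf_product(c, q_uniform) for c in all_comps(5))
assert total == 1
```

The normalisation holds for any decrement matrix with probability rows, by the chain decomposition of the Markov chain Q↓n.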

Compositions derived from stick-breaking
Exploiting the paintbox construction, we shall give a large family of examples of regenerative composition structures. The method is called stick-breaking; it is also known under many other names, e.g. the residual allocation model or, deeper in history, random alms [31].
Let (Wi) be independent copies of some random variable W with range 0 < W ≤ 1. A value of W1 is chosen and the unit stick [0, 1] is broken at location W1 in two pieces; the left piece of size W1 is frozen, the right piece of size 1 − W1 is broken again in proportions determined by another copy of W , and so on ad infinitum. The locations of breaks make up a random set R with points
Y0 = 0, Yk = 1 − (1 − W1) · · · (1 − Wk), k = 1, 2, . . . ,
so the gaps are Rc = ∪_{k=0}^{∞} (Yk, Yk+1). The cardinality of R is finite if P(W = 1) > 0, but otherwise infinite, with points accumulating only at the right endpoint of the unit interval. By the i.i.d. property of the proportions, the part of R to the right of Y1 is a scaled copy of the whole set, and this re-scaled part of R is independent of Y1:
(R ∩ [Y1, 1] − Y1)/(1 − Y1) is distributed like R and is independent of Y1.    (10)
Suppose a composition structure κ is derived from the paintbox R = {Yj , j = 0, 1, . . .}. If (0, Y1) contains at least one of the first n uniform points Uj, then the first part of the composition κn is equal to the number of uniforms hitting this interval. Otherwise, conditionally given Y1, the sample comes from the uniform distribution on [Y1, 1]. Together with the property (10) of R this implies
q(n : m) = C(n, m) E[W^m (1 − W)^{n−m}] + E[(1 − W)^n] q(n : m),
where C(n, m) := n!/(m!(n − m)!), whence the law of the first part of κn is
q(n : m) = C(n, m) E[W^m (1 − W)^{n−m}] / E[1 − (1 − W)^n], 1 ≤ m ≤ n,    (11)
which is a mixture of binomial distributions conditioned on a positive value. The key property (10) we exploited generalises to every Yk ∈ R, and iterating the argument we obtain the product formula (6). Concrete examples are obtained by choosing a distribution for W . For instance, taking the delta measure δx with some x ∈ (0, 1) yields R = {1 − (1 − x)^k , k = 0, 1, . . .}, which induces the same composition structure as the one associated with sampling from the geometric distribution on the set of integers. This composition structure has been studied in many contexts, including the theory of records and random search algorithms.
Expectations involved in (11) may be computed explicitly only in some cases, e.g. for W with polynomial density, but even then the product formula (6) rarely simplifies.
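The stick-breaking paintbox and formula (11) can be checked against each other by simulation; below W is the point mass δx, for which the expectations in (11) are explicit (helper names are ours):

```python
import random

def stick_breaking_composition(n, sample_w, rng):
    """kappa_n from the paintbox R = {Y_k}: counts of n uniforms in the gaps."""
    us = sorted(rng.random() for _ in range(n))
    comp, left, i = [], 0.0, 0
    while i < n:
        left += (1.0 - left) * sample_w(rng)   # next break point Y_k
        c = 0
        while i < n and us[i] < left:          # uniforms in the current gap
            i += 1
            c += 1
        if c:
            comp.append(c)
    return comp

x, n, trials = 0.4, 6, 40000
rng = random.Random(2)
hits = sum(stick_breaking_composition(n, lambda r: x, rng)[0] == 1
           for _ in range(trials))
# q(6 : 1) from (11) with W = delta_x: binomial conditioned on a positive value
exact = 6 * x * (1 - x) ** 5 / (1 - (1 - x) ** 6)
assert abs(hits / trials - exact) < 0.02
```

The empirical frequency of F6 = 1 agrees with the conditioned-binomial formula up to Monte Carlo error.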
Example Here is an example of a relatively simple decrement matrix. Taking W with the general two-parameter beta(γ, θ) density
x^{γ−1}(1 − x)^{θ−1}/B(γ, θ), 0 < x < 1,
we arrive at
q(n : m) = C(n, m) (γ)_m (θ)_{n−m} / ((γ + θ)_n − (θ)_n).
The product formula (6) simplifies moderately for general integer γ [16], and massively in the following case γ = 1.
Regenerative composition structures associated with Ewens' partitions Now suppose W has the beta(1, θ) density
θ(1 − x)^{θ−1}, 0 < x < 1.
Evaluating the beta integrals in (11) we find the decrement matrix
q(n : m) = C(n, m) m! (θ)_{n−m} / (n (θ + 1)_{n−1}), 1 ≤ m ≤ n,
and massive cancellation in (6) gives the CPF
p•(λ1, . . . , λk) = n! θ^k / ((θ)_n Λ1 Λ2 · · · Λk),    (15)
with Λj = λj + . . . + λk. Symmetrisation (3) gives the PPF known as the Ewens sampling formula (ESF)
p(λ↓) = f(λ↓) (θ^k/(θ)_n) (λ1 − 1)! · · · (λk − 1)! ,
which is a special case of (4). Recall that the combinatorial factor f(λ↓) is the number of set partitions of [n] with given shape. The range of the parameter is θ ∈ [0, ∞], with the boundary cases defined by continuity. For θ = 1 the distribution of W is uniform[0, 1] and q(n : m) = n^{−1} is the discrete uniform distribution for each n; the associated partition πn is the same as the cycle partition of a uniform random permutation of [n]. For general θ, the ESF corresponds to a biased permutation, which for each n takes any particular value with probability θ^{#cycles}/(θ)_n.
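The cancellation leading to (15) can be confirmed in exact arithmetic: multiplying the entries of the decrement matrix along a composition reproduces the closed form (the helper names below are ours):

```python
from fractions import Fraction
from math import comb, factorial

def rising(z, n):
    """Rising factorial (z)_n = z (z+1) ... (z+n-1)."""
    p = Fraction(1)
    for i in range(n):
        p *= z + i
    return p

def q_ewens(n, m, theta):
    """Decrement matrix of Ewens' regenerative composition structure."""
    return Fraction(comb(n, m) * factorial(m)) * rising(theta, n - m) \
        / (n * rising(theta + 1, n - 1))

def cpf_ewens(comp, theta):
    """Closed form (15): n! theta^k / ((theta)_n * Lambda_1 ... Lambda_k)."""
    n, k = sum(comp), len(comp)
    denom, rem = rising(theta, n), n
    for part in comp:
        denom *= rem                       # rem is the tail sum Lambda_j
        rem -= part
    return Fraction(factorial(n)) * theta ** k / denom

theta = Fraction(3)
comp, prod, rem = (3, 1, 2), Fraction(1), 6
for part in comp:
    prod *= q_ewens(rem, part, theta)      # product formula (6)
    rem -= part
assert prod == cpf_ewens(comp, theta)
assert q_ewens(5, 2, Fraction(1)) == Fraction(1, 5)   # theta = 1: uniform rows
```

Using `Fraction` makes the telescoping cancellation an exact identity rather than a floating-point coincidence.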
We shall call κ with CPF (15) Ewens' regenerative composition structure. The problem of arrangement has in this case a simple explicit solution. For a partition (λ1, . . . , λk)↓ the size-biased permutation is the random arrangement of parts obtained by iterated size-biased picking without replacement. For nonnegative (sj) ∈ ∇ with Σj sj = 1 a size-biased permutation is defined in a similar way: a generic term sj is placed in position 1 with probability proportional to sj, then another term is chosen by a size-biased pick from the remaining terms and placed in position 2, etc. The resulting random sequence is then in the size-biased order, hence the distribution of the sequence is invariant under the size-biased permutation.
Theorem 3.2. Ewens' composition structure (15) has parts in the size-biased order, for every n. Conversely, if a regenerative composition structure has parts in the size-biased order, then its CPF is (15).
The paintbox also has a similar property: the intervals (Yj , Yj+1) are in the size-biased order. The law of the ranked frequencies S is known as the Poisson-Dirichlet distribution; the law of the gap-sizes in their stick-breaking order is known as the GEM distribution.
Remark For set partitions, size-biased ordering is sometimes understood as the arrangement of the blocks of Πn by increase of their minimal elements (other often used names: age ordering, sampling ordering). This creates an ordered partition for each n, but this ordered partition is not exchangeable, since e.g. the element 1 is always in the first block. In Ewens' case, but not in general, the unlabeled compositions associated with the arrangement by increase of the minimal elements of blocks are sampling consistent as in Definition 2.1 (this observation is due to Donnelly and Joyce [13]). The last assertion is just another formulation of Theorem 3.2.
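Iterated size-biased picking without replacement, as used in Theorem 3.2, takes only a few lines (a sketch with our own function names):

```python
import random

def size_biased_permutation(parts, rng):
    """Arrange parts by iterated size-biased picking without replacement."""
    pool, out = list(parts), []
    while pool:
        r = rng.uniform(0.0, sum(pool))
        acc = 0.0
        for i, s in enumerate(pool):
            acc += s
            if r < acc:
                out.append(pool.pop(i))    # chosen w.p. proportional to size
                break
        else:
            out.append(pool.pop())         # floating-point guard
    return out

rng = random.Random(3)
arranged = size_biased_permutation([3, 2, 1], rng)
assert sorted(arranged, reverse=True) == [3, 2, 1]
```

The same routine applies verbatim to the parts of a partition of n and to frequencies (sj) summing to one.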

Composition structure derived from the zero set of BM
Consider the process (Bt, t ≥ 0) of Brownian motion (BM), and let Z = {t : Bt = 0} be the zero set of the BM. The complement [0, ∞) \ Z is the union of the excursion intervals, on which the BM is away from zero. Define R as Z restricted to [0, 1]. There is a meander gap between the last zero of the BM before 1 and the point 1, caused by an incomplete excursion cut off at t = 1, but to the left of the meander the set R is of Cantor type, without isolated points. Thus the gaps cannot simply be enumerated from left to right, as in the stick-breaking case. Since the BM is a recurrent process with the strong Markov property, the set of zeroes to the right of a generic excursion interval is a shifted distributional copy of the whole Z, independent of the part of Z to the left of (and including) the excursion interval. This means that Z is a regenerative set, a property familiar from elementary renewal theory. The scaling property of the BM implies the invariance of Z under homotheties Z ↦ cZ, c > 0. Following Pitman [41], consider the composition structure κ derived from R = Z ∩ [0, 1]. To check the deletion property in Definition 3.1 it is convenient to modify the paintbox model in a way accounting for the self-similarity.
Modified sampling scheme Let Z be a random self-similar subset of [0, ∞). Fix n and let X1 < X2 < . . . be the points of a unit Poisson process, independent of Z. The interval [0, Xn+1] is split in components at the points of Z, so we can define a composition κn of n by grouping X1, . . . , Xn in clusters within [0, Xn+1]. As n varies, these compositions comprise the same composition structure as the one induced by the standard paintbox construction with Z ∩ [0, 1], because (i) the vector (X1/Xn+1, . . . , Xn/Xn+1) is distributed like the vector of n uniform order statistics (Un:1, . . . , Un:n), and (ii) by self-similarity, Z/Xn+1 is distributed like Z.
Note that, because the locations of the 'balls' vary with n, the model secures a weak consistency of the κn's, but does not produce strongly consistent ordered set partitions Kn. Applying the modified scheme in the BM case, the deletion property is obvious from the regeneration of Z and of the homogeneous Poisson process, combined with the self-similarity of Z.
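A discrete approximation illustrates the construction: the zero set of a long simple random walk, rescaled to [0, 1], plays the role of Z ∩ [0, 1] in the standard paintbox. This walk approximation is our own illustration, not part of the text:

```python
import random

def walk_zero_composition(n, steps, rng):
    """Composition of n from the zeros of a simple random walk on [0, steps],
    rescaled to [0, 1] -- a discrete stand-in for the BM zero-set paintbox."""
    s, zeros = 0, [0.0]
    for k in range(1, steps + 1):
        s += rng.choice((-1, 1))
        if s == 0:
            zeros.append(k / steps)         # a rescaled zero of the walk
    us = sorted(rng.random() for _ in range(n))
    bounds = zeros + [1.0]                  # the last gap is the meander
    comp, i = [], 0
    for g in range(len(bounds) - 1):
        c = 0
        while i < n and bounds[g] <= us[i] < bounds[g + 1]:
            i += 1
            c += 1
        if c:
            comp.append(c)
    return comp

rng = random.Random(11)
kappa = walk_zero_composition(8, 10000, rng)
assert sum(kappa) == 8
```

As the number of walk steps grows, the rescaled zero set approximates the Cantor-type set R of the BM, meander included.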

Regenerative sets and subordinators
In the stick-breaking case the regeneration property of the induced composition structure κ followed from the observation that R remains, in a sense, the same when its left meander is truncated. This cannot be applied in the BM case, since the leftmost gap does not exist. A closer look reveals that a weaker property of R suffices. For a given closed R ⊂ [0, 1] define the 'droite' point Zx := min{R ∩ [x, 1]}, x ∈ [0, 1], which is the right endpoint of the gap covering x (or x itself in the event x ∈ R).
Definition 3.3. A random closed set R ⊂ [0, 1] is called multiplicatively regenerative (m-regenerative) if for every x ∈ [0, 1), conditionally given Zx < 1, the scaled remainder (R ∩ [Zx, 1] − Zx)/(1 − Zx) is distributed like R and is independent of R ∩ [0, Zx].
One could require in addition the independence from the pair (R ∩ [0, Zx], Zx), which would correspond to the conventional regeneration property in the additive theory.
In fact, this apparently stronger property follows from the weaker independence property due to connection to composition structures. See [24] for details and connection to the bulk-deletion properties of composition structures.
For an m-regenerative paintbox R the deletion property of κ follows by considering the gap that covers U_{n:1} = min(U_1, . . . , U_n). Then q(n : ·) is the distribution of the rank of the largest order statistic falling in this gap, that is, of the number of uniform order statistics covered by the gap.
To relate Definition 3.3 to the familiar (additive) concept of a regenerative set, recall that a subordinator (S_t, t ≥ 0) is an increasing right-continuous process with S_0 = 0 and stationary independent increments (an increasing Lévy process). The fundamental characteristics of a subordinator are the Lévy measure ν̃ on (0, ∞], which controls the intensity and sizes of jumps, and the drift coefficient d ≥ 0 responsible for a linear drift component. The distribution is determined by means of the Laplace transform

E[exp(−ρ S_t)] = exp(−t Φ(ρ)),

where the Laplace exponent Φ is given by the Lévy-Khintchine formula

Φ(ρ) = ρ d + ∫_(0,∞] (1 − e^(−ρy)) ν̃(dy).

The Lévy measure must satisfy the condition Φ(1) < ∞, which implies ν̃[y, ∞] < ∞ for y > 0 and also restricts the mass near 0, to avoid immediate passage of the subordinator to ∞. A positive mass at ∞ is allowed, in which case (S_t) (sometimes called a killed subordinator) jumps to ∞ at an exponential time with rate ν̃{∞}. Two standard examples of subordinators are

1. Stable subordinators with parameter 0 < α < 1, characterised by ν̃[y, ∞] = c y^(−α), y > 0.

2. Gamma subordinators with parameter θ > 0, characterised by ν̃(dy) = c e^(−θy) y^(−1) dy, y > 0.

The constant c > 0 can always be eliminated by a linear time-change. Let R̃ = {S_t, t ≥ 0}^cl be the closed range of a subordinator. By the properties of the increments, R̃ is regenerative: for Z_y the 'droite' point at y > 0, conditionally given Z_y < ∞, the random set (R̃ − Z_y) ∩ [0, ∞] is distributed like R̃ and is independent of [0, Z_y] ∩ R̃ and of Z_y. The converse is also true: by a result of Maisonneuve [38] every regenerative set is the closed range of some subordinator, with (ν̃, d) determined uniquely up to a positive multiple.
Call the increasing process (1 − exp(−S_t), t ≥ 0) a multiplicative subordinator, and let R = 1 − exp(−R̃) be its range. The regeneration property of R̃ readily implies that R is m-regenerative. As time passes, the multiplicative subordinator proceeds from 0 to 1, thus it is natural to adjust the Lévy measure to the multiplicative framework by transforming ν̃, by virtue of y ↦ 1 − e^(−y), into a measure ν on (0, 1], which now accounts for a kind of continuous-time stick-breaking. We shall still call ν the Lévy measure where there is no ambiguity. In these terms the Lévy-Khintchine formula becomes

Φ(ρ) = ρ d + ∫_(0,1] (1 − (1 − x)^ρ) ν(dx).

For integers 1 ≤ m ≤ n introduce also the binomial moments of ν,

Φ(n : m) = C(n, m) ∫_(0,1] x^m (1 − x)^(n−m) ν(dx) + n d 1(m = 1)

(where 1(· · · ) stands for the indicator), so that Φ(n) = Σ_{m=1}^n Φ(n : m). According to one interpretation of (17), Φ(ρ) is the probability rate at which the subordinator passes through an independent exponential level with mean 1/ρ. Similarly, Φ(n) is the rate at which the multiplicative subordinator passes through U_{n:1}, and Φ(n : m) is the rate to jump from below U_{n:1} to a value between U_{n:m} and U_{n:m+1}. From this, the probability that the first passage through U_{n:1} covers m out of n uniform points is equal to

q(n : m) = Φ(n : m)/Φ(n),     (19)

which is the general representation of the decrement matrix of a regenerative composition structure associated with an m-regenerative set. The proper case corresponds to zero drift, d = 0, when passage through a level can only occur by a jump.
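The binomial moments are easy to evaluate numerically. As a minimal sketch (helper names are my own), take the Ewens case ν(dx) = θ(1 − x)^(θ−1) dx with zero drift, for which the beta integrals are explicit and Φ(n) = n/(n + θ):

```python
from math import comb, gamma

def beta_fn(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

def phi_nm(n, m, theta):
    # Binomial moment of nu(dx) = theta (1-x)^(theta-1) dx, zero drift:
    # Phi(n:m) = C(n,m) * Integral_0^1 x^m (1-x)^(n-m) nu(dx)
    #          = C(n,m) * theta * B(m+1, n-m+theta)
    return comb(n, m) * theta * beta_fn(m + 1, n - m + theta)

theta, n = 1.5, 7
phi_n = sum(phi_nm(n, m, theta) for m in range(1, n + 1))
q_row = [phi_nm(n, m, theta) / phi_n for m in range(1, n + 1)]
print(phi_n, n / (n + theta))   # Phi(n) should equal n/(n+theta) here
print(sum(q_row))               # q(n:.) is a probability distribution on {1,...,n}
```

The agreement of the two printed values of Φ(n) confirms the binomial-moment decomposition Φ(n) = Σ_m Φ(n : m) in this case.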
In the case of finite ν̃ and d = 0 the subordinator is a compound Poisson process with no drift. Scaling ν̃ to a probability measure, the range of (1 − exp(−S_t), t ≥ 0) is a stick-breaking set with the generic factor W distributed according to ν; then (19) becomes (11).
The connection between regenerative composition structures and regenerative sets also goes in the opposite direction. Sketch of proof Sampling consistency together with the regeneration imply that the first n rows of the minor (q(n′ : ·), n′ ≤ n) are uniquely determined by the last row q(n : ·) via formulas (20) and (21). Think of κ_n as an allocation of F_n balls in a box labeled B, and n − F_n balls in other boxes. Formula (21) gives the distribution of the number of balls remaining in B after n − n′ balls have been removed at random without replacement, with account of the possibility m′ = 0 that B becomes empty. Formula (20) says that the distribution of F_{n′} is the same as that of the number of balls which remain in B, conditionally given that at least one ball remains. This relation between F_n and F_{n′} is counter-intuitive, because sampling may eliminate the first block of κ_n completely (equivalently, the first block of K_n may have no representatives in [n′]).
Interestingly, the argument only exploits a recursion on q, hence avoids an explicit limit transition from κ to R, as one could expect by analogy with Theorem 2.2. See [24,12,27] for variations.
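The ball-removal description of the recursion can be checked numerically. The sketch below (with hypothetical helper names) takes the Ewens decrement matrix q(n : m) = Φ(n : m)/Φ(n), removes n − n′ balls by the hypergeometric rule, conditions on the first box staying non-empty, and compares the result with the row q(n′ : ·) computed directly:

```python
from math import comb, gamma

def ewens_q(n, theta):
    # Decrement matrix row q(n:.) of the Ewens composition structure:
    # q(n:m) = Phi(n:m)/Phi(n), Phi(n:m) = C(n,m) theta B(m+1, n-m+theta)
    B = lambda a, b: gamma(a) * gamma(b) / gamma(a + b)
    phi = [comb(n, m) * theta * B(m + 1, n - m + theta) for m in range(1, n + 1)]
    s = sum(phi)
    return [p / s for p in phi]

def reduce_row(q_row, n, n2):
    # Distribution of the first part after keeping n2 of the n balls:
    # given the first box holds m balls, the number kept there is
    # hypergeometric; finally condition on the box staying non-empty.
    out = [0.0] * (n2 + 1)          # out[m2] = P(m2 balls remain in first box)
    for m, qm in enumerate(q_row, start=1):
        for m2 in range(0, min(m, n2) + 1):
            if n2 - m2 <= n - m:
                out[m2] += qm * comb(m, m2) * comb(n - m, n2 - m2) / comb(n, n2)
    alive = 1.0 - out[0]
    return [p / alive for p in out[1:]]

theta, n = 2.0, 6
derived = reduce_row(ewens_q(n, theta), n, n - 1)
direct = ewens_q(n - 1, theta)
print(max(abs(a - b) for a, b in zip(derived, direct)))  # should be ~0
```

The agreement illustrates, in the Ewens case, that the minor (q(n′ : ·), n′ ≤ n) is recovered from the last row alone.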
We can also view F(t) = 1 − exp(−S_t) as a random distribution function on R_+ and construct a composition by sampling from F, as in the species sampling problem. Such 'neutral to the right' priors have found applications in Bayesian statistics [34].
Additive paintbox It is sometimes convenient to induce regenerative composition structures using a subordinator (S t ) to create the gaps. Then, independent unit-rate exponential variables E 1 , E 2 , . . . should be used in the role of balls, instead of uniform U j 's in the multiplicative framework.
Formula (19) can be re-derived by appealing to the potential measure Π of the subordinator. Heuristically, think of Π(dy) as the probability to visit location y at some time, and of ν̃ as the distribution of the size of a generic jump of the subordinator. The probability that the first part m of the composition is created by visiting y and jumping z is then the product of e^(−ny) Π(dy) (all n exponential balls lie to the right of y) and C(n, m)(1 − e^(−z))^m e^(−(n−m)z) ν̃(dz). Taking into account the formula for the Laplace transform of the potential measure,

∫_0^∞ e^(−ρy) Π(dy) = 1/Φ(ρ),

we arrive at (19) by integration. The compensation formula for Poisson processes [6, p. 76] is needed to make this argument rigorous. The advantage of working with R_+ is that the regeneration property involves no scaling. A disadvantage is that the asymptotic frequency of balls within the interval (a, b) is the exponential probability e^(−a) − e^(−b), as compared to the length of the gap in the multiplicative representation on [0, 1].
In particular, for Ewens' composition structures the subordinator (S_t) is a compound Poisson process with exponential(θ) jump distribution, so the range R̃ of (S_t) is a homogeneous Poisson point process with density θ, and R is an inhomogeneous Poisson point process with density θ/(1 − x) on [0, 1].
This composition structure appears as the limit of the stick-breaking composition structures (13) as γ → 0. Although the CPF looks very similar to Ewens' (15), there is no simple product formula for the associated partition structure, even in the case θ = 1.
The one-block composition appears in the limit β → ∞.
Sliced splitting We introduce another kind of parametric deformation of a subordinator. Let S = (S_t) be a subordinator with range R̃, and let X_1 < X_2 < . . . be the points of a homogeneous Poisson process with density θ. Take S(j), j ≥ 0, to be independent copies of S, also independent of the Poisson process. We construct the path of the interrupted subordinator S^(θ) by shifting and gluing pieces of the S(j)'s into one path. Run S(0) until the passage through level X_1 at some time T_1, so S_{T_1−}(0) < X_1 ≤ S_{T_1}(0). Leave the path of the process south-west of the point (T_1, X_1) as it is, and cut off the north-east part of the path. At time T_1 start the process (S_t(1) + X_1, t ≥ T_1) and let it run until the passage through X_2. Iterate, creating partial paths running from (T_j, X_j) to (T_{j+1}, X_{j+1}). From the properties of subordinators and Poisson processes, one sees that S^(θ) is indeed a subordinator.
The passage to the range of S^(θ) can be called sliced splitting: first R_+ is split at the locations X_j, then each gap (X_j, X_{j+1}) is further split at the points of an independent shifted copy of R̃. Denote, as usual, by Φ, ν̃ the characteristics of S, and by Φ_θ, ν̃_θ the characteristics of S^(θ).
To see this, a heuristic computation of the rate of passage through an exponential level is helpful. Denote by E_ρ, E_θ independent exponential variables with parameters ρ, θ. The process S^(θ) passes the level E_ρ within an infinitesimal time interval (0, t) when the underlying copy of S passes the level min(E_ρ, E_θ) =_d E_{ρ+θ}, which occurs at rate Φ(ρ + θ); the level passed is E_ρ on the event E_ρ < E_θ, which has probability ρ/(ρ + θ). This suggests Φ_θ(ρ) = ρ Φ(ρ + θ)/(ρ + θ).
The Green matrix For a sequence of compositions κ = (κ_n) which, in principle, need not be consistent in any sense, we can define g(n, j) as the probability that a '1' occupies position j of the binary code of κ_n. That is to say, g(n, j) is the probability that the parts of κ_n satisfy λ_1 + . . . + λ_{i−1} = j − 1 for some i ≥ 1. Call (g(n, j), 1 ≤ j ≤ n, n ∈ N) the Green matrix of κ. For κ a regenerative composition structure, g(n, j) is the probability that the Markov chain Q↓_n ever visits state n + 1 − j, and there is an explicit formula in terms of the Laplace exponent (see [24]).
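The visit-probability description can be made concrete. The sketch below (hypothetical helper names; Ewens decrement matrix as the example) computes the Green row once by a one-pass recursion over the states of the decreasing chain, and once by brute-force enumeration of all compositions via the product formula (6):

```python
from math import comb, gamma

def ewens_q(n, theta):
    # q(n:m) = Phi(n:m)/Phi(n) for the Ewens composition structure
    B = lambda a, b: gamma(a) * gamma(b) / gamma(a + b)
    phi = [comb(n, m) * theta * B(m + 1, n - m + theta) for m in range(1, n + 1)]
    s = sum(phi)
    return [p / s for p in phi]

def green_row(q_rows, n):
    # g(n, j) = P(the decreasing chain started at n ever visits n+1-j);
    # since the chain is strictly decreasing, visit probabilities satisfy
    # P(visit s') = sum_{s > s'} P(visit s) q(s : s - s').
    visit = [0.0] * (n + 1)
    visit[n] = 1.0
    for s in range(n, 0, -1):
        for m in range(1, s + 1):
            visit[s - m] += visit[s] * q_rows[s][m - 1]
    return [visit[n + 1 - j] for j in range(1, n + 1)]

def green_row_brute(q_rows, n):
    # Same quantity from the definition: enumerate all compositions with
    # the product formula and record where parts begin ('1' digits).
    g = [0.0] * n
    def rec(rest, pos, prob):
        g[pos] += prob      # a part starts at this position
        for m in range(1, rest):
            rec(rest - m, pos + m, prob * q_rows[rest][m - 1])
    rec(n, 0, 1.0)
    return g

theta, n = 1.5, 6
q_rows = {s: ewens_q(s, theta) for s in range(1, n + 1)}
print(green_row(q_rows, n))
print(green_row_brute(q_rows, n))   # the two rows should agree
```

Note g(n, 1) = 1, as the chain starts at state n.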

Regenerative compositions from the two-parameter family
Let π be the two-parameter partition structure with PPF (4). Sometimes the notation PD(α, θ) is used for the law of the frequencies S, where PD stands for Poisson-Dirichlet, and sometimes this law is called the Pitman-Yor prior after [46]. Explicit formulas for PD(α, θ) are involved, but the sequence of frequencies in size-biased order can be obtained by the inhomogeneous stick-breaking scheme (9) with W_j =_d beta(1 − α, θ + jα). We will see that for 0 ≤ α < 1 and θ ≥ 0, and only for these values of the parameters, the parts of π can be arranged in a regenerative composition structure.
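The inhomogeneous stick-breaking scheme is straightforward to simulate; a minimal sketch (function name is my own) with a Monte Carlo sanity check of the first moment E W_1 = (1 − α)/(1 + θ):

```python
import random

def pd_size_biased(alpha, theta, k, rng):
    # First k frequencies of PD(alpha, theta) in size-biased order,
    # via stick-breaking with W_j ~ beta(1 - alpha, theta + j*alpha).
    freqs, stick = [], 1.0
    for j in range(1, k + 1):
        w = rng.betavariate(1 - alpha, theta + j * alpha)
        freqs.append(stick * w)
        stick *= 1 - w
    return freqs

rng = random.Random(7)
samples = [pd_size_biased(0.5, 1.0, 1, rng)[0] for _ in range(20000)]
mean = sum(samples) / len(samples)
print(mean, (1 - 0.5) / (1 + 1.0))  # empirical vs. exact E W_1
```

The first frequency is W_1 itself, so its mean should be close to (1 − α)/(1 + θ) = 0.25 here.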
Define a (multiplicative) Lévy measure ν on [0, 1] by the formula for its right tail

ν[x, 1] = x^(−α) (1 − x)^θ , 0 < x ≤ 1.

The density of this measure is a mixture of two beta-type densities, and in the case θ = 0 there is a unit atom at 1. The associated Laplace exponent is

Φ(n) = n Γ(1 − α) Γ(n + θ) / Γ(n + 1 − α + θ).

It is a good exercise in algebra to show that the symmetrisation (3) of the product-form CPF with decrement matrix (27) is indeed the two-parameter PPF (4). Like their unordered counterparts, the two-parameter regenerative compositions have many interesting features. Three subfamilies are of special interest and, as experience shows, should always be analysed first.
Case (0, θ) for θ ≥ 0. This is the ESF case (15), with ν being the beta(1, θ) distribution. The blocks of the composition appear in the size-biased order, and so do the interval components of R^c.
Case (α, 0) for 0 < α < 1. The product formula (6) specialises accordingly. This composition structure was introduced in [41], where Z was realised as the zero set of a Bessel process of dimension 2 − 2α. For α = 1/2 this is the zero set of BM.
The decrement matrix q in this case has the special property that there is a probability distribution h on the positive integers such that

q(n : m) = h(m) for m < n, and q(n : n) = Σ_{m≥n} h(m).     (29)

This means that the 1's in the binary code of κ_n can be identified with the set of sites within 1, . . . , n visited by a positive random walk on the integers (discrete renewal process) with initial state 1. Specifically,

h(m) = α (1 − α)_{m−1} / m! , m = 1, 2, . . . ,

where (x)_m denotes the rising factorial. The arrangement of the parts of π_n in a composition is obtained by placing a size-biased part of π_n in the last position of κ_n, then shuffling the remaining parts uniformly at random to occupy all other positions. Exactly the same rule applies on the paintbox level: for S following PD(α, 0), a term is chosen by the size-biased pick and attached to 1 as the meander, then the remaining gaps are arranged in the exchangeable order.
Case (α, α) for 0 < α < 1. The associated regenerative set has zero drift and the Lévy measure

ν̃(dy) = α (1 − e^(−y))^(−α−1) e^(−αy) dy , y ≥ 0;

this is the zero set of an Ornstein-Uhlenbeck process. The corresponding range of the multiplicative subordinator can be realised as the zero set of a Bessel bridge of dimension 2 − 2α; in the case α = 1/2 this is the Brownian bridge. The parts of κ_n are identifiable with the increments of a random walk with the same step distribution h as in (29) for the (α, 0) case, but now conditioned on visiting the state n + 1. The CPF is a function symmetric in (λ_1, . . . , λ_k) for each k, which implies that the parts of each κ_n are in the exchangeable random order. This confirms the known fact that the excursion intervals of a Bessel bridge appear in exchangeable order.
Due to symmetry, the transition rule from κ_n to κ_{n+1} is a simple variation of the Chinese restaurant scheme, with the tables ordered in a row. Given κ_n = (λ_1, . . . , λ_k), customer n + 1 is placed at one of the existing tables with chance (λ_j − α)/(n + α) as usual, and when a new table is to be occupied, this table is placed with equal probability to the right of, to the left of, or in between any two of the k tables occupied so far.
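The ordered scheme is easy to simulate. A minimal sketch (function name is my own): the total new-table probability is (k + 1)α/(n + α), split uniformly over the k + 1 possible positions; exchangeability of the order predicts, for instance, that the compositions (2, 1) and (1, 2) of 3 occur about equally often:

```python
import random

def ordered_crp(n, alpha, rng):
    # Ordered Chinese restaurant for the (alpha, alpha) composition:
    # after i customers, join table j with chance (size_j - alpha)/(i + alpha);
    # otherwise insert a new table at one of k+1 positions, each with
    # chance alpha/(i + alpha).
    tables = [1]
    for i in range(1, n):
        u = rng.random() * (i + alpha)
        acc = 0.0
        for j, sz in enumerate(tables):
            acc += sz - alpha
            if u < acc:
                tables[j] += 1
                break
        else:
            tables.insert(rng.randrange(len(tables) + 1), 1)
    return tables

rng = random.Random(3)
counts = {}
for _ in range(20000):
    c = tuple(ordered_crp(3, 0.5, rng))
    counts[c] = counts.get(c, 0) + 1
print(counts)  # (2, 1) and (1, 2) should be about equally frequent
```

The simulation also illustrates that the scheme produces a genuine composition of n at every step.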
In the case (α, 0), there is a right meander appearing in consequence of killing at rate ν̃{∞} = 1. Removing the atom at ∞ yields another m-regenerative set (not in the two-parameter family) obtained by (i) splitting [0, 1] using beta(1, θ) stick-breaking, (ii) fitting in each gap (Y_{j−1}, Y_j) a scaled copy of the (α, 0) m-regenerative set. The decrement matrix is (22), with Φ as for the (α, 0) m-regenerative set and β = −1. A discrete counterpart of κ_n is a path of a random walk with reflection, but the CPF has no simple formula.
The m-regenerative set with parameters 0 < α < 1, θ > 0 is constructible from the sets for (0, θ) and (α, 0) by sliced splitting. To define a multiplicative version of the two-level paintbox, with a relation like (23), first split [0, 1] at the points Y_j of the Poisson process with density θ/(1 − x), as in the Ewens case (recall that this is the same as stick-breaking with beta(1, θ) factor W). Then for each j choose an independent copy of the α-stable regenerative set starting at Y_{j−1} and cut off at Y_j, and use this copy to split (Y_{j−1}, Y_j).
The resulting m-regenerative set corresponds to the (α, θ) composition structure, so that

Φ_{α,θ}(n) = n Φ_{α,0}(n + θ)/(n + θ),

which is trivial to check. As another check, observe that the structural distribution beta(1 − α, α + θ) is the Mellin convolution of beta(1, θ) and beta(1 − α, α), as it must be for the two-level splitting scheme. The construction is literally the same on the level of finite compositions: first a regenerative Ewens (0, θ) composition of n is constructed, then each part is independently split in a sequence of parts according to the rules of the regenerative (α, 0)-composition.
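With the normalisation Φ_{α,θ}(ρ) = ρ Γ(1 − α) Γ(ρ + θ)/Γ(ρ + 1 − α + θ) (obtained by integrating the tail x^(−α)(1 − x)^θ), the sliced-splitting identity can be confirmed numerically; a minimal sketch:

```python
from math import gamma

def phi(alpha, theta, r):
    # Laplace exponent of the (alpha, theta) regenerative composition,
    # with the normalisation fixed above (defined up to a positive multiple).
    return r * gamma(1 - alpha) * gamma(r + theta) / gamma(r + 1 - alpha + theta)

alpha, theta = 0.4, 2.5
for n in range(1, 8):
    lhs = phi(alpha, theta, n)                           # (alpha, theta) exponent
    rhs = n * phi(alpha, 0.0, n + theta) / (n + theta)   # sliced splitting of (alpha, 0)
    print(n, lhs, rhs)   # the two columns should agree
```

The identity holds exactly for this normalisation, since both sides reduce to n Γ(1 − α) Γ(n + θ)/Γ(n + 1 − α + θ).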
The arrangement problem for general (α, θ) was settled recently in [44]. Note that every sequence r_1, r_2, . . . of initial ranks r_j ∈ [j] uniquely defines a total order on N, by placing j in position r_j relative to 1, . . . , j − 1. For instance, the initial ranks 1, 2, 1, 3, . . . encode a total order in which the arrangement of the set [4] is 3 1 4 2 (1 is ranked 1 within [1], then 2 is ranked 2 within [2], then 3 is ranked 1 within [3], then 4 is ranked 3 within [4], . . .). For η ∈ [0, ∞], consider a probability distribution for (r_1, r_2, . . .) under which the r_j's are independent, the probability of r_j = j is η/(η + j − 1) and the probability of r_j = i is 1/(η + j − 1) for every i < j. Pitman and Winkel [44] show that to arrange S =_d PD(α, θ) in a regenerative paintbox one should (i) first label the frequencies in the size-biased order, (ii) then, independently, arrange the collection of frequencies by applying this arrangement to the labels, with parameter η = θ/α. For α = 0, the frequencies are arranged in the size-biased order (because for η = ∞ the initial ranks are r_j = j a.s.); for α = θ this is an exchangeable arrangement of S; and for θ = 0 the arrangement is as for the (α, 0) partition described above.
The arrangement of blocks of π n in regenerative composition κ n is analogous, for each n. See [18] for this and larger classes of distributions on permutations, their sufficiency properties and connections to the generalised ESF.

Regenerative partition structures and the problem of arrangement
We discuss next connections between regenerative composition structures and their associated partition structures. One important issue is the uniqueness of the correspondence.

Structural distributions
For R^c related to S via (2), let P̃ be the size of the gap covering the uniform point U_1, with the convention that P̃ = 0 in the event U_1 ∈ R. We shall understand P̃ as a size-biased pick from S; this agrees with the (unambiguous) definition in the proper case and extends it when the sum of positive frequencies may be less than 1. Obviously, the particular choice of R with gap-sizes S is not important. The law of P̃ is known as the structural distribution of S. Most properties of this distribution readily follow from the fact that it is a mixture of discrete measures Σ_j s_j δ_{s_j}(dx) + (1 − Σ_j s_j) δ_0(dx). In particular, the (n − 1)st moment of P̃ is the probability that κ_n is the trivial one-block composition (n) or, what is the same, that π_n = (n)↓:

E[P̃^(n−1)] = p(n).

In general, there can be many partition structures which share the same structural distribution, but for the regenerative composition structures the correspondence is one-to-one. Indeed, we have p(n) = q(n : n) = Φ(n : n)/Φ(n).
With some algebra a recursion for the Laplace exponent follows, which shows that the moment sequence (p(n), n ∈ N) determines (Φ(n), n ∈ N) uniquely up to a positive multiple, hence determines the decrement matrix q. Explicit expressions for the entries of q through the p(n)'s are complicated rational functions; for instance, q(3 : 2) = (2p(2) − 3p(3) + p(2)p(3))/(1 − p(2)). Because the moments p(n) are determined by the sizes of the gaps and not by their arrangement, we conclude:

Theorem 4.1. Each partition structure corresponds to at most one regenerative composition structure. Equivalently, for random frequencies S there exists at most one distribution for an m-regenerative set R with (R^c)↓ =_d S.
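The rational expression for q(3 : 2) can be verified in the Ewens case, where all quantities are explicit (helper names are my own):

```python
from math import comb, gamma

def B(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

def ewens_q(n, theta):
    # q(n:m) = Phi(n:m)/Phi(n) with Phi(n:m) = C(n,m) theta B(m+1, n-m+theta)
    phi = [comb(n, m) * theta * B(m + 1, n - m + theta) for m in range(1, n + 1)]
    s = sum(phi)
    return [p / s for p in phi]

theta = 1.7
p2 = ewens_q(2, theta)[1]     # p(2) = q(2:2), the one-block probability
p3 = ewens_q(3, theta)[2]     # p(3) = q(3:3)
q32 = ewens_q(3, theta)[1]    # q(3:2) computed directly
print(q32, (2 * p2 - 3 * p3 + p2 * p3) / (1 - p2))  # should agree
```

Symbolically both sides equal 2θ/((θ + 1)(θ + 2)) for the Ewens structure, illustrating how the one-block probabilities determine q.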
In principle, one can determine whether some PPF p corresponds to a regenerative CPF by computing q formally from the one-block probabilities p(n), then checking positivity of q, and if it is positive, comparing the symmetrised PPF (6) corresponding to q with p. This method works smoothly in the two-parameter case. For the (α, θ) partition structures the structural distribution is beta(1 − α, θ + α), see Pitman [43]. Computing formally q from the p(n)'s we arrive at q coinciding with (27). However, a decrement matrix must be nonnegative, which fails for some values of the parameters outside the range 0 ≤ α < 1, θ ≥ 0. Actually, it is evident that a partition structure of the 'discrete series' with α < 0 in (5) cannot be regenerative, simply because the number of parts in each π_n is bounded by −θ/α.

Partition structures invariant under deletion of a part
Recalling (7) and (8), partition structures inherit a deletion property from the parent regenerative compositions. In this section we discuss the reverse side of this connection, which puts the regeneration property in a new light. The main idea is that if a partition structure π has a part-deletion property, then the iterated deletion creates order in a way consistent for all n, thus canonically associating with π a regenerative composition structure. Let π be a partition structure. A random part of π_n is an integer random variable P_n which satisfies P_n ∈ π_n. The joint distribution of π_n and P_n is determined by the PPF and some deletion kernel d(λ↓, m), which specifies the conditional distribution of P_n given the partition π_n:

p(λ↓) d(λ↓, m) = P(π_n = λ↓, P_n = m), |λ↓| = n.
For each n = 1, 2, . . . the distribution of P_n is then

q(n : m) = P(P_n = m) = Σ_{|λ↓|=n} p(λ↓) d(λ↓, m).

The formulas differ from (7) and (8) in that now they refer to some abstract 'random part' P_n of the unordered structure. The requirement that P_n be a part of π_n means that

Σ_{distinct m ∈ λ↓} d(λ↓, m) = 1.
Definition 4.3. Call a partition structure π = (π_n) regenerative if, for each n, there exists a joint distribution for π_n and its random part P_n such that for each 1 ≤ m < n, conditionally given P_n = m, the remaining partition π_n \ {m} of n − m has the same distribution as π_{n−m}. Call π regenerative w.r.t. d if the conditional distribution of P_n is specified by d as in (31), for each n. Call π regenerative w.r.t. q if q(n : ·) is the law of P_n, which means that

p(λ↓) d(λ↓, m) = q(n : m) p(λ↓ \ {m}), |λ↓| = n, m ∈ λ↓, n = 1, 2, . . . .
Example (Hook partition structures) This is a continuation of Example 3.6. Call λ↓ a hook partition if only λ_1 may be larger than 1, for instance (4, 1, 1, 1)↓. For every deletion kernel with the property stated there, it can be shown that the only partition structures regenerative w.r.t. such d are those supported by hook partitions, and they have q(n : n) = 1/(1 + nd), q(n : 1) = nd/(1 + nd) for some d ∈ [0, ∞].

Deletion kernels of the two-parameter family
For Ewens' composition structures (15) the deletion kernel is the size-biased pick

d_0(λ↓, m) = k_m m / n , |λ↓| = n,

where k_m is the number of parts of λ↓ equal to m. The factor k_m appears since the kernel specifies the chance to choose one of the parts of given size m, rather than a particular part of size m. The regeneration of Ewens' partition structures under this deletion operation was observed by Kingman [37], and called non-interference in a species sampling context. Kingman also showed that this deletion property is characteristic: if a partition structure is regenerative w.r.t. d_0, then the PPF is the ESF (16) with some θ ∈ [0, ∞].
For the regenerative composition structures of the two-parameter family (with nonnegative α, θ) the deletion kernel is one of the family

d_τ(λ↓, m) = k_m (τ(n − m) + (1 − τ)m) / (n(1 + (k − 2)τ)) , τ ∈ [0, 1],

where k = Σ_m k_m is the number of parts and n = |λ↓|. Kingman's characterisation of the ESF is a special case of a more general result (see Gnedin and Pitman [25]). The kernel d_1 can be called cosize-biased deletion, as each (particular) part m ∈ λ↓ is selected with probability proportional to |λ↓| − m; only (α, 0) partitions are regenerative w.r.t. d_1.
For general τ, the kernel d_τ is intrinsically related to the Pitman-Winkel arrangement of blocks with η = τ^(−1) − 1, see Section 3.4.

Back to regenerative compositions
The framework of regenerative partitions suggests studying three objects: the PPF p, the deletion kernel d and the distribution q of the deleted part. Naively, it might seem that d, which tells us how a part is deleted, is the right object to start with, as in Kingman's characterisation of the ESF via size-biased deletion. However, apart from the deletion kernels d_τ for the two-parameter family and the kernels related to hook partitions, we do not know examples where the approach based on kernels can be made explicit. Strangely enough, to understand the regeneration mechanism for partitions, one should ignore for a while the question of how a part is deleted, and focus only on q, which tells us what is deleted. Fix n and let q(n : ·) be an arbitrary distribution on [n]. Consider a Markov q(n : ·)-chain on the set of partitions of n by which a partition λ↓ (thought of as an allocation of balls in boxes) is transformed by the rules:
• choose a value of P_n from the distribution q(n : ·),
• given P_n = m, sample m balls without replacement and discard the boxes becoming empty,
• put these m balls in a newly created box.
Similarly, define a Markov q(n : ·)-chain on compositions λ • of n with the only difference that the newly created box is placed in the first position. Obviously, the q(n : ·)-chain on compositions projects to the q(n : ·)-chain on partitions when the order of boxes is discarded.
Lemma 4.6. If (33) holds for some fixed n and distribution q(n : ·) then the law of π n is a stationary distribution for the q(n : ·)-chain on partitions.
Sketch of proof The condition (33) may be written as a stochastic fixed-point equation

π_n =_d π̃_{n−P_n} ∪ {P_n},

where (π̃_{n′}, 1 ≤ n′ ≤ n) is a sequence of random partitions, independent of P_n, with π̃_{n′} =_d π_{n′}. The lemma follows since then the result of one step of the chain is distributed like π̃_{n−P_n} ∪ {P_n} =_d π_n.
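Lemma 4.6 can be verified exactly for small n. The sketch below (helper names are my own) takes n = 4 with the Ewens decrement matrix, applies the exact transition operator of the q(n : ·)-chain to the Ewens sampling formula, and confirms invariance:

```python
from itertools import product
from math import comb, factorial, gamma, prod

def partitions(n, maxp=None):
    if n == 0:
        yield ()
        return
    for p in range(min(n, maxp or n), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def esf(lam, theta):
    # Ewens sampling formula: P(pi_n = lam) = n!/(theta)_n * theta^k * prod 1/(j^m_j m_j!)
    n, k = sum(lam), len(lam)
    mult = {}
    for p in lam:
        mult[p] = mult.get(p, 0) + 1
    rising = prod(theta + i for i in range(n))
    w = prod(1 / (j ** m * factorial(m)) for j, m in mult.items())
    return factorial(n) * theta ** k * w / rising

def ewens_q(n, theta):
    B = lambda a, b: gamma(a) * gamma(b) / gamma(a + b)
    phi = [comb(n, m) * theta * B(m + 1, n - m + theta) for m in range(1, n + 1)]
    s = sum(phi)
    return [p / s for p in phi]

def step(lam, q, n):
    # One transition of the q(n:.)-chain: pick P_n = m, remove m balls at
    # random without replacement, drop empty boxes, add a new box of size m.
    out = {}
    for m in range(1, n + 1):
        for removal in product(*(range(x + 1) for x in lam)):
            if sum(removal) != m:
                continue
            w = q[m - 1] * prod(comb(x, r) for x, r in zip(lam, removal)) / comb(n, m)
            new = tuple(sorted([x - r for x, r in zip(lam, removal) if x > r] + [m],
                               reverse=True))
            out[new] = out.get(new, 0.0) + w
    return out

theta, n = 1.3, 4
q = ewens_q(n, theta)
parts_list = list(partitions(n))
pi = {lam: esf(lam, theta) for lam in parts_list}
pushed = {lam: 0.0 for lam in parts_list}
for lam, plam in pi.items():
    for new, w in step(lam, q, n).items():
        pushed[new] += plam * w
print(max(abs(pushed[lam] - pi[lam]) for lam in parts_list))  # ~0: ESF is stationary
```

The maximal discrepancy is of floating-point order, as Lemma 4.6 predicts for the Ewens pair (p, q).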
There is an obvious parallel assertion about a random composition κ_n, which satisfies

κ_n \ F_n =_d κ̃_{n−F_n},

where \ stands for the deletion of the first part F_n with distribution q(n : ·), and κ̃_{n′} =_d κ_{n′} independently of F_n.
Lemma 4.7. The unique stationary distribution of the q(n : ·)-chain on compositions is the one by which κ n follows the product formula for 1 ≤ n ′ ≤ n with q(n ′ : ·) given by (20). Symmetrisation of the law of κ n by (3) gives the unique stationary distribution of the q(n : ·)-chain on partitions.
It follows that if (33) holds for some n, then it holds for all n′ ≤ n, with all p(λ↓), d(λ↓, ·) for |λ↓| = n′ uniquely determined by q(n : ·) via sampling consistency. Thus, in principle, for partitions of n′ ≤ n the regeneration property is uniquely determined by an arbitrary discrete distribution q(n : ·) through the following steps: first find (q(n′ : ·), n′ ≤ n) from the sampling consistency (20), then use the product formula for compositions (6), then the symmetrisation (3). With all this at hand, the deletion kernel can be determined from (31). Letting n vary, the sampling consistency of all the q(n : ·)'s implies that q is the decrement matrix of a regenerative composition structure.
Starting with π n , the deletion kernel determines a Markov chain on subpartitions of π n . A part P n is chosen according to the kernel d and deleted, from the remaining partition π n \ {P n } another part is chosen according to d etc. This brings the parts of π n in the deletion order.
Theorem 4.8. Suppose a partition structure π = (π n ) is regenerative w.r.t. q, then (i) q is a decrement matrix of some regenerative composition structure κ, (ii) π is the symmetrisation of κ, (iii) κ is obtained from π by arranging, for each n, the parts of π n in the deletion order.
Thus the regeneration concepts for partition and composition structures coincide. It is not clear, however, how to formulate the regeneration property in terms of the unordered frequencies S. The only obvious way is to compute the PPF and then check whether the PPF corresponds to a regenerative CPF. Moreover, the deletion kernel may have no well-defined continuous analogue. For instance, in the (α, α) case d_{1/2} is a uniform random choice of a part from π_n, but what is a 'uniform random choice of a term' from the infinite random series S under PD(α, α)?

More on (α, α) compositions: reversibility
We have seen that the (α, α) composition structures are the only regenerative compositions which have parts in the exchangeable order. We show now that these structures can be characterised by some weaker properties of reversibility.
Every composition structure κ has a dual κ̂, where each κ̂_n is the sequence of parts of κ_n read in the right-to-left order. For example, the value (3, 2) of κ_5 corresponds to the value (2, 3) of κ̂_5. If κ is derived from R, then κ̂ is derived from the reflected paintbox 1 − R. If both κ and κ̂ are regenerative, then by the uniqueness (Theorem 4.1) they must have the same distribution. If κ is reversible, i.e. κ =_d κ̂, then the first part of κ_n must have the same distribution as its last part.

Theorem 4.9. Let κ be a regenerative composition structure. Let F_n denote the first and L_n the last part of κ_n. The following conditions are equivalent: (i) P(F_n = 1) = P(L_n = 1) for all n; (ii) F_n =_d L_n for all n; (iii) κ_n =_d κ̂_n for all n (reversibility); (iv) κ is an (α, α)-composition structure with some 0 ≤ α ≤ 1.
Invoking the paintbox correspondence, the result implies that these conditions hold iff R is distributed like the zero set of a Bessel bridge of dimension 2 − 2α, for some 0 ≤ α ≤ 1.
The degenerate boundary cases with α = 0 or 1 are defined by continuity.

Self-similarity and stationarity
Self-similarity of a random closed set Z ⊂ R_+ is the condition cZ =_d Z for all c > 0. The property is a multiplicative analogue of the stationarity (translation invariance) of a random subset of R, familiar from elementary renewal theory (see [38] for a general account). We encountered self-similarity in connection with the paintboxes for (α, 0) compositions.
Regenerative (0, θ) compositions can also be embedded in the self-similar framework by passing to duals. The mirrored paintbox for the dual Ewens' composition structure is the stick-breaking set R = {V_1 · · · V_i, i = 0, 1, . . .} with i.i.d. V_i =_d beta(θ, 1). This set is the restriction to [0, 1] of a self-similar Poisson point process with density θ/y, y > 0.
Introduce the operation of right reduction as cutting the last symbol of the binary code of a composition. For instance, the right reduction maps 100110 to 10011.
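The coding and the right reduction can be made explicit in a few lines (function names are my own): a '1' opens each part, so the composition (3, 2, 1) has code 100110, and dropping the last digit yields the code of (3, 2).

```python
def to_binary_code(comp):
    # composition (l1, ..., lk) of n -> n binary digits: a '1' opens each part
    code = []
    for part in comp:
        code.append(1)
        code.extend([0] * (part - 1))
    return code

def from_binary_code(code):
    # inverse map: split the digit string at the '1's
    comp, size = [], 0
    for digit in code:
        if digit == 1:
            if size:
                comp.append(size)
            size = 1
        else:
            size += 1
    comp.append(size)
    return comp

code = to_binary_code([3, 2, 1])
print(code)                             # [1, 0, 0, 1, 0, 1]
reduced = from_binary_code(code[:-1])   # right reduction: drop the last digit
print(reduced)                          # [3, 2]
```

This mirrors the example in the text: 100110 reduces to 10011, i.e. (3, 2, 1) to (3, 2).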
Definition 5.1. A sequence of random compositions κ = (κ_n) is called right-consistent if the right reduction maps κ_{n+1} to a stochastic copy of κ_n. If κ is a composition structure, we call it self-similar if it is right-consistent.
If a sequence of compositions κ = (κ n ) is right-consistent, it can be realised on the same probability space as a single infinite random binary string η 1 , η 2 , . . ., with κ n being the composition encoded in the first n digits η 1 , . . . , η n . For right-consistent κ the Green matrix is of the form g(n, j) = P(η j = 1), 1 ≤ j ≤ n, n = 1, 2, . . . and we shall simply write g(j).
Theorem 5.2. A composition structure κ is self-similar iff the paintbox R is the restriction to [0, 1] of a self-similar set Z. In this case κ can be encoded in an infinite binary string.
Sketch of proof The 'if' part is easily shown using the modified sampling scheme, as in the BM example. The 'only if' part exploits convergence of random sets as in Theorem 2.2.
An arbitrary infinite binary string η_1, η_2, . . . (starting with η_1 = 1) need not correspond to a composition structure, because care must be taken of the sampling consistency. Let us review the (0, θ) and (α, 0) compositions from this standpoint.
Example. For θ > 0 let η_1, η_2, . . . be a Bernoulli string with independent digits and P(η_j = 1) = θ/(θ + j − 1). This encodes the dual Ewens' composition structure, with the last-part deletion property.
In the modified sampling scheme, the role of balls is taken by a homogeneous Poisson point process, and the boxes are created by points of an independent self-similar Poisson process.
The family of composition structures can be included in a Markov process with θ ≥ 0 considered as a continuous time parameter [28]. On the level of paintboxes the dynamics amounts to intensifying Poisson processes: within time dθ the Poisson process Z = Z_θ is superimposed with another independent Poisson process with density dθ/x. This is an instance of sliced splitting, so (23) is in force. From this viewpoint a special feature is that the θ-splittings are consistently defined, also in terms of interrupted subordinators, which here are compound Poisson processes with exponential jumps.
Remarkably, the splitting process remains Markovian in terms of the binary codes, and has the dynamics in which every '0' eventually turns into a '1' by the rule: at time θ, a '0' in generic position j of the code switches to a '1' at rate 1/(θ + j − 1), independently of the digits in all other positions.
It is known [47] that no other Bernoulli or renewal strings are sampling consistent, i.e. produce composition structures. We shall turn to a larger class of strings with a Markov property, but first review a few general features of the self-similar compositions.
Let P̃_n be the size-biased pick from κ_n, and L_n the last part of the composition. Similarly, let P̃ be the size-biased gap-length of R, and L the size of the meander gap adjacent to 1. For a self-similar composition structure, (i) P̃ =_d L and (ii) P̃_n =_d L_n. Sketch of proof Since reducing the last box by one ball has the same effect as reducing the box chosen by the size-biased pick, the sizes of these boxes must have the same distribution. This yields (ii), and (i) follows as n → ∞. Alternatively, inspecting the gap covering U_{n:n} it is seen that E[L^(n−1)] = p°(n), the probability of the one-block composition, so the moments of P̃ and L coincide. Similarly, η_j = 1 in the event U_{n:1} > max(Z ∩ [0, U_{n:j}]).
The identity (ii), together with a generalisation of a result by Pitman and Yor [45], yields a characterisation of structural distributions and shows that P̃ has a decreasing density on (0, 1].
Theorem 5.4. [26] The structural distribution of a self-similar composition structure is of the form (35), where d ≥ 0 and ν is a measure on (0, 1] satisfying a suitable integrability condition. There is no atom at 0 iff d = 0 iff Z has Lebesgue measure zero.

Markovian composition structures
For the time being we switch to regeneration in the right-to-left order of parts, starting from the last part, as for the dual Ewens' composition. This is more convenient in the self-similar context since 0 is the center of homothety. We first modify the deletion property of compositions by allowing a special distribution for the first deleted part (which is now the last part of the composition).
Definition 5.5. A composition structure is called Markovian if the CPF is of the product form (36), where q^(0) and q are two decrement matrices.
Similarly to (6), formula (36) says that 1's in the binary code of κ appear at sites Q_n^↓(t) + 1 visited by a decreasing Markov chain, with the only new feature that the distribution of the first decrement is determined by q^(0), and not by q.
The counterpart of Theorem 3.4 for (36) is straightforward. For (S_t) a subordinator, consider the process (V · exp(−S_t), t ≥ 0), where V takes values in (0, 1) and is independent of (S_t). The range of this process is an m-regenerative set (now with right-to-left regeneration) scaled by the random factor V. Taking this set for the paintbox R, thus with the meander gap [V, 1], a Markovian composition structure is induced with q(n : m) = Φ(n : m)/Φ(n) as in (19), and with q^(0) determined by the distribution of V. Every Markovian composition structure is of this form.
If the string determines some composition structure κ, then κ is self-similar. A composition structure is called self-similar Markov if it has such a binary representation generated by an increasing Markov chain. A stationary regenerative set (or stationary Markov [38]) is the range of a process (X + S_t, t ≥ 0), where (S_t) is a subordinator with drift d ≥ 0 and finite-mean Lévy measure ν̃ satisfying m = ∫_0^∞ y ν̃(dy) < ∞, and where the initial value X is independent with a distribution determined by ν̃ and d (unlike ν in (35), the measure ν̃ lives on (0, ∞)).

Theorem 5.6. [26] A composition structure κ is self-similar Markov if and only if
The distribution of the size-biased pick is then (35), with ν the image of ν̃ under y → 1 − e^{−y}. The Green matrix can be written in terms of the Laplace exponent. The relation between this and (24) is that the RHS of (24) converges to g(j) as n → ∞; this fact is analogous to the elementary renewal theorem. As in the regenerative case, the decrement matrices are determined, in principle, by the probabilities (p(n), n ≥ 0), which are moments of the structural distribution, whence the analogue of Theorem 4.1: Theorem 5.7. If a partition structure admits an arrangement as a self-similar Markov composition structure, then such an arrangement is unique in distribution.
Application to the two-parameter family For 0 ≤ α < 1 and θ > 0 let R_{α,θ} be the m-regenerative set associated with the (α, θ) regenerative composition structure, and let V be an independent variable whose distribution is beta(θ + α, 1 − α). Then the scaled reflected set V · (1 − R_{α,θ}) is associated with a self-similar Markov composition structure corresponding to the (α, θ − α) partition structure. This follows from the stick-breaking representation of the frequencies in size-biased order, with independent factors beta(θ + jα, 1 − α), j = 1, 2, . . .. The Green function g and the transition probabilities for Q^↑ can be readily computed.
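The stick-breaking representation is easy to simulate. The sketch below (function names are ours) generates size-biased frequencies with independent residual factors B_j distributed beta(θ + jα, 1 − α), so the j-th frequency is (1 − B_j)·B_1···B_{j−1}:

```python
import random

def size_biased_frequencies(alpha, theta, k=60, rng=random):
    """First k frequencies in size-biased order: stick-breaking with
    independent residual factors B_j ~ beta(theta + j*alpha, 1 - alpha)."""
    freqs, stick = [], 1.0
    for j in range(1, k + 1):
        b = rng.betavariate(theta + j * alpha, 1.0 - alpha)
        freqs.append(stick * (1.0 - b))  # broken-off piece
        stick *= b                       # remaining stick
    return freqs

random.seed(1)
f = size_biased_frequencies(0.5, 0.5)
assert len(f) == 60 and all(x > 0 for x in f) and sum(f) < 1.0
```

In the Ewens case α = 0, θ = 1 the first factor is uniform, so the first frequency has mean 1/2, which can serve as a quick sanity check of the simulation.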
A 'stationary' version of the regenerative (α, θ) composition is the self-similar Markov arrangement of the (α, θ − α) partition. The structural distribution is beta(1 − α, θ + α), which is also the law of the meander size 1 − V. Note that θ − α may assume negative values, hence every partition with θ > −α has a self-similar Markov arrangement. This 'rehabilitates' the (α, θ) partitions with −α < θ < 0 that lack regeneration in the literal sense: the property appears in a modified form, as stationary regeneration. If θ ≥ 0 then both types of regeneration are valid.
The (α, 0) composition with left-to-right regeneration is also self-similar Markov, i.e. has the 'stationary' right-to-left regeneration property. This combination of regeneration properties is characteristic of this class.
For the (α, α) partition structure there exists a regenerative arrangement associated with the Bessel bridge, and there is another, self-similar Markov arrangement. The latter is the self-similar version of the regenerative (α, 2α) composition.
The arrangement of the (α, θ) partition in a self-similar Markov composition structure is the same on both the paintbox and the finite-n level. The size-biased pick is placed at the end, and the remaining parts are arranged to the left of it as in the dual (α, θ + α) regenerative structure, see Section 3.4. Property (i) in Theorem 5.3 holds in the strong sense: conditionally given the unordered frequencies S, the length of the meander is a size-biased pick (see [45]).

Asymptotics of the block counts
For κ = (κ_n) a regenerative composition structure, let K_n be the number of parts in κ_n and let K_{n,r} be the number of parts equal to r, so that ∑_r r K_{n,r} = n and ∑_r K_{n,r} = K_n. For instance, in the event κ_{10} = (2, 4, 2, 1, 1) we have K_{10} = 5, K_{10,1} = 2, K_{10,2} = 2, K_{10,3} = 0 etc. The full vector (K_{n,1}, . . . , K_{n,n}) is one of the ways to record the partition associated with κ_n. In the species sampling context, K_n is the number of distinct species represented in a sample, hence it is often considered as a measure of diversity.
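In code these counts are immediate; a minimal sketch (our own helper, not from the source) verifying the κ_10 example:

```python
from collections import Counter

def block_counts(parts):
    """Return K (number of parts) and the counts K_r = #{parts equal to r},
    so that sum_r r*K_r = n and sum_r K_r = K."""
    return len(parts), Counter(parts)

K, Kr = block_counts((2, 4, 2, 1, 1))
assert K == 5 and Kr[1] == 2 and Kr[2] == 2 and Kr[3] == 0 and Kr[4] == 1
assert sum(r * c for r, c in Kr.items()) == 10
```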
We are interested in the large-n asymptotics of K_n and K_{n,r} for r = 1, 2, . . .. This can be called the small-blocks problem. Typically the composition will have relatively few large parts of size of order n and many parts of size r ≪ n, the latter making the principal contribution to K_n.
Unless indicated otherwise, we assume that d = 0 (proper case, no drift) and that ν̃{∞} = 0 (no killing, no right meander). Then the order of growth of K_n is sublinear, K_n ≪ n, and K_n ↑ ∞ almost surely.
One general tool is the structural distribution σ of the size-biased pick P̂, which can be used to compute the expectations of K_n and K_{n,r}. It is clear from these formulas that the asymptotics of the moments are determined by the behaviour of σ near 0, because (1 − x)^n decays exponentially fast on any interval [ε, 1]. The block counts K_n, K_{n,r} depend only on the partition, and not on the order of the parts. Nevertheless, the Markovian character of regenerative compositions and the connection with subordinators can be efficiently exploited to study these functionals by methods of renewal theory. This may be compared with other classes of partitions studied with the help of local limit theorems: partitions obtained by conditioning random sums of independent integer variables [2], and partitions derived from conditioned subordinators [42].
For Ewens' partitions it is well known that K n is asymptotically normal, with both mean and variance of the order of log n (see [2,43]). In contrast to that, for (α, θ) partitions with α > 0 the right scale for K n is n α (α-diversity [43]). These known facts will be embedded in a much more general consideration.
The number of parts satisfies the distributional fixed-point equation K_n =_d 1 + K′_{n−F_n}, where the K′_m, m ≤ n − 1, are independent of the first part F_n with distribution q(n : ·), and satisfy K′_m =_d K_m. Known asymptotics (e.g. [39], [11]) derived from such identities do not cover the full range of possibilities and require very restrictive moment conditions which are not easy to verify (see however [30] for one application of this approach). In what follows we report on the asymptotics obtained by different methods, based on the connection with subordinators, poissonisation, methods of renewal theory, and the Mellin transform [29,30,3,20].
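In the stick-breaking case with W distributed beta(1, θ) (the Ewens composition), the first part F_n can be sampled directly — it is a Binomial(n, W) count redrawn until positive, since empty boxes consume no balls — and iterating the decomposition into a first part plus the rest samples K_n. For θ = 1 the mean should match E[K_n] = ∑_{k=1}^n 1/k of the Ewens(1) partition. A Monte Carlo sketch under these assumptions (helper names are ours):

```python
import random

def first_part(n, theta, rng):
    """F_n for the Ewens(theta) composition: occupancy of the first occupied
    stick-breaking box, W ~ beta(1, theta); empty boxes are simply skipped."""
    while True:
        w = rng.betavariate(1.0, theta)
        f = sum(rng.random() < w for _ in range(n))  # Binomial(n, W)
        if f > 0:
            return f

def sample_K(n, theta, rng):
    """Sample K_n by iterating the decomposition K_n = 1 + K'_{n - F_n}."""
    k = 0
    while n > 0:
        n -= first_part(n, theta, rng)
        k += 1
    return k

rng = random.Random(42)
mean = sum(sample_K(100, 1.0, rng) for _ in range(3000)) / 3000
assert abs(mean - sum(1.0 / k for k in range(1, 101))) < 0.3
```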
We assume as before the paintbox construction with balls U 1 , . . . , U n and R the closed range of a multiplicative subordinator (1 − exp(−S t ), t ≥ 0). In these terms, K n,r is the number of gaps in the range hit by exactly r out of n uniform points, and K n is the total number of nonempty gaps.
Remark If the subordinator has positive drift d > 0, then K_n ∼ K_{n,1} ∼ n·meas(R) a.s., so singletons make the leading contribution to K_n; the Lebesgue measure of R is a random variable proportional to the exponential functional of the subordinator. It is informative to consider the number of parts K_n as the terminal value of the increasing process K_n := (K_n(t), t ≥ 0), where K_n(t) is the number of parts of the subcomposition derived from the configuration of uniform points not exceeding 1 − exp(−S_t), i.e. produced by the subordinator within the time interval [0, t]. The number of r-parts K_{n,r} is the terminal value of another process K_{n,r} := (K_{n,r}(t), t ≥ 0) which counts r-parts, but this process is not monotone.
We can think of the subordinator representation of a regenerative composition structure as a coagulation process in which, if at time t there are n′ particles, every m-tuple of them merges into a single particle at rate Φ(n′ : m). The particle emerging from the coalescence is immediately frozen. Starting with n particles, K_n(t) counts the number of frozen particles at time t.
The asymptotics in the small-block problem largely depend on the behaviour of the right tail of the Lévy measure near 0. If ν̃ is finite, then simply ν̃[y, ∞] → ν̃[0, ∞] as y → 0; but if ν̃ is infinite it seems difficult, if at all possible, to draw any conclusions without the following assumption.
Note that the assumption is satisfied in the case of finite ν̃. By the monotone density version of Karamata's Tauberian theorem [9], for 0 ≤ α < 1 the condition (37) is equivalent to a regular variation asymptotics of the Laplace exponent. Qualitatively different asymptotics are possible: very roughly, the whole spectrum can be divided into the following cases, each requiring separate analysis.
• The finite Lévy measure case. This is the case of stick-breaking compositions, with (S t ) a compound Poisson process.
One principal difference between the cases of (proper) regular and slow variation is in the time scales at which major growth and variability of K n occur. In the case α > 0 all K n (t), K n,r (t) are of the same order as K n , whereas in the case α = 0 we have K n (t) ≪ K n .

Stick-breaking compositions
In the case of finite Lévy measure we scale ν̃ to a probability measure. Then ν̃ is the distribution of − log(1 − W), where W is the generic stick-breaking factor. Introduce the moments m := E[− log(1 − W)], σ² := Var[log(1 − W)] and m_1 := E[− log W], which may be finite or infinite. Let M_n be the index of the rightmost occupied gap, which contains the maximum order statistic U_{n:n}. Roughly speaking, stick-breaking implies a fast exponential decay of the sizes of the gaps, hence one can anticipate a cutoff phenomenon: empty gaps can occur only in a range close to U_{n:n}. From extreme-value theory we know that − log(1 − U_{n:n}) − log n has a limit distribution of the Gumbel type, thus M_n can be approximated by the number of jumps of (S_t) before crossing level log n.
It should be noted that exponential decay of nonrandom frequencies, as for the geometric distribution, implies oscillatory asymptotics in the occupancy problem [10], [4]. With stick-breaking the oscillations do not appear, since the main variability comes from the randomness in the frequencies themselves, which dominates the variability coming from sampling.
Consider a renewal process with spacings distributed like − log(1 − W). If the moments are finite, m < ∞, σ² < ∞, then a standard result of renewal theory implies that the number of renewals on [0, log n] is approximately normal for large n, with expected value asymptotic to (log n)/m. The same is valid for M_n, and under the additional assumption m_1 < ∞ also for K_n (see [17]). Under weaker assumptions on the moments, the possible asymptotics correspond to other limit theorems of renewal theory, as shown in [20]: The following assertions are equivalent.
(i) There exist constants a_n > 0 and b_n ∈ R such that, as n → ∞, the variable (K_n − b_n)/a_n converges weakly to some non-degenerate and proper distribution. Furthermore, the limiting distribution of (K_n − b_n)/a_n is as follows.
(a) If σ² < ∞, then for b_n = m^{−1} log n and a_n = (m^{−3} σ² log n)^{1/2} the limiting distribution is standard normal.
for some ℓ slowly varying at ∞, then for b_n = m^{−1} log n, a_n = m^{−3/2} c_{⌊log n⌋}, with c_n any sequence satisfying lim_{n→∞} n ℓ(c_n)/c_n² = 1, the limiting distribution is standard normal.
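Case (a) is easy to probe numerically. For W distributed beta(1, θ), the spacings − log(1 − W) are exponential with rate θ, so m = 1/θ and the number of renewals on [0, log n] has mean θ log n. A simulation sketch (parameter choices and names are ours):

```python
import math
import random

def renewal_count(t, theta, rng):
    """Number of renewals on [0, t] with spacings -log(1 - W),
    W ~ beta(1, theta); these spacings are Exponential(theta)."""
    s, k = 0.0, 0
    while True:
        s += -math.log(1.0 - rng.betavariate(1.0, theta))
        if s > t:
            return k
        k += 1

rng = random.Random(3)
t, theta = math.log(1_000_000), 2.0  # level log n for n = 10**6
mean = sum(renewal_count(t, theta, rng) for _ in range(3000)) / 3000
# expected number of renewals is (log n)/m = theta * log n, about 27.6 here
assert abs(mean - theta * t) < 0.6
```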

Sketch of proof
The results are first derived for M_n by adapting asymptotics from renewal theory. To pass to K_n it is shown, under the condition m_1 < ∞, that the variable M_n − K_n (the number of empty boxes to the left of U_{n:n}) converges in distribution and in the mean to a random variable with expected value m_1/m.
Example Suppose the law of W is given by P(1 − W ≤ x) = (1 − log x)^{−1}, x ∈ (0, 1). It can be checked that m_1 < ∞, hence case (c) applies and ((log log n)²/log n) K_n − log log n − log log log n converges to a 1-stable law with characteristic function (40). The number of empty boxes M_n − K_n converges in probability to 0.
Under the assumptions m < ∞, m_1 < ∞ the limit behaviour of the K_{n,r}'s is read from a limiting occupancy model [22]. To describe the limit we pass to the dual composition, generated by right-to-left stick-breaking. Let (X_{n,1}, X_{n,2}, . . .) be the occupancy numbers of the gaps in the left-to-right order; this is a random weak composition (0's allowed) of n with X_{n,1} > 0 and X_{n,j} ≥ 0 for j > 1. Inflating [0, 1] by the factor n, the uniform sample converges as a point process to a unit Poisson process (balls). On the other hand, nR converges to a self-similar point process Z, whose gaps play the role of boxes. From this, the occupancy vector (X_{n,1}, X_{n,2}, . . .) acquires a limit, which is an occupancy vector (X_1, X_2, . . .) derived from the limiting point processes. Correspondingly, the K_{n,r}'s jointly converge in distribution to #{i : X_i = r}, r = 1, 2, . . .; the convergence also holds for K_{n,0}, defined as the number of empty gaps. If W =_d beta(1, θ) then Z is a Poisson process with density θ/x, and the K_{n,r}'s converge in distribution to independent Poisson variables with mean θ/r, which is a well-known property of Ewens' partitions [2]. It is a challenging open problem to identify the limit laws of the K_{n,r}'s for a general distribution of W.
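The Ewens special case can be checked by simulation: Ewens(θ) partitions are standard to sample by the Chinese restaurant process, and the Poisson(θ/r) limit above predicts E[K_{n,1}] ≈ θ for large n. A sketch (our own sampler; the choices θ = 1, n = 500 are arbitrary):

```python
import random

def crp_table_sizes(n, theta, rng):
    """Sample an Ewens(theta) partition of [n] by the Chinese restaurant
    process: customer i+1 starts a new table w.p. theta/(i + theta), else
    joins an existing table by a size-biased pick."""
    tables = []
    for i in range(n):
        if rng.random() * (i + theta) < theta:
            tables.append(1)
        else:
            u = rng.random() * i  # size-biased: proportional to table size
            acc = 0
            for j, s in enumerate(tables):
                acc += s
                if u < acc:
                    tables[j] += 1
                    break
    return tables

rng = random.Random(0)
singles = [sum(1 for s in crp_table_sizes(500, 1.0, rng) if s == 1)
           for _ in range(2000)]
mean = sum(singles) / len(singles)
# K_{n,1} is approximately Poisson(theta); for theta = 1 the mean is near 1
assert abs(mean - 1.0) < 0.15
```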
Suppose (37) holds with 0 < α ≤ 1. This case is treated by reducing the occupancy problem to counting the gaps of given sizes. For x > 0 let N_x(t) be the number of gaps of size at least x in the partial range of the multiplicative subordinator 1 − exp(−S_u), 0 ≤ u < t. Introduce the exponential functionals I_α(t). The distribution of I_α(∞) is determined by the formula for its moments [8], in which Φ is the Laplace exponent of the subordinator (S_t).

Sketch of proof
Let Ñ_y(t) be the number of gaps of size at least y in the range of the (additive) subordinator restricted to [0, t]. By the Lévy–Itô construction of (S_t) from a Poisson process, we have the strong law Ñ_y(t) ∼ ν̃[y, ∞] t a.s. as y ↓ 0. A small gap (s, s + y) is mapped by the function s → 1 − e^{−s} to a gap of size approximately e^{−s} y, from which the result for finite t follows by integration. Special tail estimates are required to conclude that similar asymptotics hold with the integration extended to [0, ∞).
The instance α = 1 may be called in this context the case of rapid variation. In this case ℓ in (37) must decay at ∞ sufficiently fast, in order to satisfy Φ(1) < ∞.
Conditioning on the frequencies S = (s_j) embeds the small-block problem in the framework of the classical occupancy problem: n balls are thrown into an infinite series of boxes, with positive probability s_j of hitting box j. By a result of Karlin [35], the number of occupied boxes is asymptotic to its expected value, whence K_n ∼ E[K_n | R] a.s., and a similar result holds for K_{n,r} under regular variation with index α > 0. Combining this with Theorem 6.2, we have (see [30]) Theorem 6.3. Suppose the Lévy measure fulfills (37). Then, uniformly in 0 < t ≤ ∞, as n → ∞, the convergence K_n(t)/(Γ(1 − α) n^α ℓ(n)) → I_α(t) and K_{n,r}(t)/(Γ(1 − α) n^α ℓ(n)) → (−1)^{r−1} (α choose r) I_α(t) holds almost surely and in the mean, for 0 < α < 1 and r ≥ 1, or α = 1 and r > 1. Similarly, K_n(t)/(n ℓ_1(n)) → I_1(t) for α = 1.
Thus K n , K n,r have the same order of growth if 0 < α < 1. In the case α = 1 of rapid variation, singletons dominate, K n,1 ∼ K n , while all other K n,r 's with r > 1 are of the same order of growth which is smaller than that of K n .
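The n^α growth of K_n can also be observed numerically. The (α, θ) partition is standard to sample by the two-parameter Chinese restaurant process; for θ = 0 one has E[K_n] = ∏_{i=1}^{n−1}(1 + α/i), which is roughly n^α/Γ(1 + α). A Monte Carlo sketch (our own sampler; the parameters are arbitrary):

```python
import random

def crp2_num_blocks(n, alpha, theta, rng):
    """Number of blocks K_n of an (alpha, theta) partition, sampled by the
    two-parameter Chinese restaurant process: with i customers at k tables,
    a new table opens w.p. (theta + alpha*k)/(i + theta), else table j is
    chosen w.p. (s_j - alpha)/(i + theta)."""
    tables = []
    for i in range(n):
        k = len(tables)
        if i == 0 or rng.random() * (i + theta) < theta + alpha * k:
            tables.append(1)
        else:
            u = rng.random() * (i - alpha * k)
            acc = 0.0
            for j in range(k):
                acc += tables[j] - alpha
                if u < acc:
                    tables[j] += 1
                    break
            else:
                tables[-1] += 1  # guard against float round-off
    return len(tables)

rng = random.Random(5)
alpha, n, trials = 0.5, 500, 1000
mean = sum(crp2_num_blocks(n, alpha, 0.0, rng) for _ in range(trials)) / trials
expected = 1.0
for i in range(1, n):
    expected *= 1 + alpha / i  # E[K_n], roughly n**alpha / Gamma(1 + alpha)
assert abs(mean - expected) < 4.0
```

Note that K_n/n^α converges to a random limit (the α-diversity), so individual runs fluctuate; only the average over many runs stabilises near E[K_n].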

Slow variation: α = 0
The case of infinite Lévy measure with slowly varying tail ν̃[y, ∞] ∼ ℓ(1/y) (y ↓ 0) is intermediate between finite ν̃ and proper regular variation. In this case K_{n,r} → ∞ (as in the case α > 0) but K_{n,r} ≪ K_n (as in the case of finite ν̃). Following Barbour and Gnedin [3] we will exhibit a further wealth of possible modes of asymptotic behaviour appearing in this transitional regime.
We assume that the first two moments of the subordinator are finite. The assumption about the moments is analogous to instance (a) of Theorem 6.1 in the case of finite ν̃. The results will be formulated for the case of unit mean; in the general case (S_t) should be replaced by S_{t/m}, and then s² = v²/m. Because the linear time change does not affect the range of the process, it does not change the distribution of K_n, K_{n,r}. For the sample (balls) we take a Poisson point process on [0, 1] with intensity n > 0, which is the same as throwing a Poisson(n) number of uniform points on [0, 1]. To avoid new notation, we further understand n as the intensity parameter, and use the old notation K_n(t) for the number of blocks of the (poissonised) subcomposition on the interval [0, 1 − exp(−S_t)]. The convention for K_n is the same. For large samples the poissonised quantities are very close to their fixed-n counterparts, but the Poisson framework is easier to work with.
The total number of blocks is the terminal value K_n = K_n(∞) of the increasing process K_n(t). Poissonisation makes the subcompositions within [0, 1 − exp(−S_t)] and [1 − exp(−S_t), 1] conditionally independent given S_t, hence K_n(t) and K_n(∞) − K_n(t) are also conditionally independent. The consideration can be restricted to the time range t < τ_n, where τ_n := inf{t : S_t > log n} is the passage time through log n, since after this time the number of blocks produced is bounded by a Poisson(1) variable. For large n we have Φ_0(n) ∼ Φ(n), but the former is more convenient to deal with, since it enters naturally the compensator of (K_n(t), t ≥ 0). By the assumption of slow variation and from Φ_0(n) → ∞ it follows that the Φ_k's are also slowly varying, and satisfy Φ_2(n) ≫ Φ_1(n) ≫ Φ_0(n), n → ∞.
These functions give, asymptotically, the moments of K_n and of the terminal value of the compensator. The following approximation lemma reduces the study of K_n(t) to the asymptotics of the compensator. Lemma 6.4. We have E[K_n − A_n(∞)]² = O(Φ_1(n)) as n → ∞, and for any b_n such that Φ_1(n)/b_n² → 0, lim_{n→∞} P( sup_{0≤t≤∞} |K_n(t) − A_n(t)| > b_n ) = 0.
Sketch of proof Noting that K_n(t) − A_n(t) is a square-integrable martingale with unit jumps, we derive E[K_n − A_n(∞)]² = E[A_n(∞)], from which the first claim follows. The second follows by an application of Kolmogorov's inequality.
From this the law of large numbers is derived: Theorem 6.5. As n → ∞, we have K_n ∼ A_n(∞) ∼ Φ_1(n) almost surely and in the mean.
For more delicate results we need to keep fluctuations of the compensator under control. For this purpose we adopt one further assumption, akin to de Haan's second-order regular variation [9]. As in Karamata's representation of slowly varying functions [9], write Φ_0 as Φ_0(s) = Φ_0(1) exp( ∫_1^s dy/(y L(y)) ), where L(n) := Φ_0(n)/(n Φ_0′(n)).
The key assumption. There exist constants c_0, n_0 > 0 such that n L′(n)/L(n) < c_0/log n for n > n_0.
In particular, L is itself slowly varying, which is equivalent to the slow variation of n Φ_0′(n) as n → ∞. Note that the faster L grows, the slower Φ_0 does. The assumption limits the local variations of Φ_0, which makes it possible to approximate the compensator by a simpler process in which the subordinator enters linearly. This in turn allows one to derive the limit behaviour of the compensator from the functional CLT for (S_t) itself.
Some generalisations are considered in [29].

Slow growth case
Suppose that L(n) = c(n) log n, where c(n) → ∞ but slowly enough that ∫_2^∞ dn/(c(n) n log n) = ∞ (otherwise ν̃ is a finite measure). For instance we can have c(n) ≍ log log n (in which case Φ(n) ≍ log log n), but the growth c(n) ≍ log^γ n with γ > 0 is excluded. As in the case of finite ν̃, almost all variability of K_n comes from the range of times very close to the passage time τ_n.
The key quantity describing the process K_n(t) in this case is a family of integrals involving a standard normal random variable η.