Effect of microscopic pausing time distributions on the dynamical limit shapes for random Young diagrams

The irreducible decomposition of successive restriction and induction of irreducible representations of a symmetric group gives rise to a Markov chain on Young diagrams which keeps the Plancherel measure invariant. Starting from this Res-Ind chain, we introduce a not necessarily Markovian continuous time random walk on Young diagrams by allowing a general pausing time distribution between jumps governed by the transition probability of the Res-Ind chain. We show that, under appropriate assumptions on the pausing time distribution, a diffusive scaling limit yields concentration at a certain limit shape depending on macroscopic time, a consequence similar to the exponentially distributed case studied in our earlier work. The time evolution of the limit shape is well described by free probability theory. On the other hand, we illustrate an anomalous phenomenon observed for a pausing time obeying a one-sided stable distribution, which is heavy-tailed and has no mean: a nontrivial behavior appears under a non-diffusive regime of the scaling limit.


Introduction
As a remarkable classical result in the field of asymptotic representation theory, the limit shape of random Young diagrams originated with Vershik-Kerov [17] and Logan-Shepp [13]. Let Y denote the set of Young diagrams. Set Y_n = {λ ∈ Y : |λ| = n}, where |λ| denotes the size of λ ∈ Y. For λ = (λ_1 ≧ λ_2 ≧ ···) ∈ Y, set m_j(λ) = ♯{i : λ_i = j}, namely the number of rows of length j in λ. The number of rows is l(λ) = Σ_{j=1}^∞ m_j(λ). The Plancherel measure on Y_n is defined by M^{(n)}_Pl(λ) = (dim λ)²/n!, where dim λ denotes the dimension of the irreducible representation of S_n corresponding to λ. A Young diagram λ is identified with its profile y = λ(x) depicted in the xy coordinate plane, satisfying ∫_R (λ(x) − |x|) dx = 2|λ| (see Appendix, Figure 2). For λ ∈ Y_n we set the profile rescaled by 1/√n as

(1.1) [λ]_{√n}(x) = λ(√n x)/√n, x ∈ R.

The limit shape of Young diagrams with respect to the Plancherel measure is described in the form of a weak law of large numbers as follows. An element of D = {ω : R → R | |ω(x) − ω(y)| ≦ |x − y|, ω(x) = |x| for |x| large enough} is called a (centered) continuous diagram. Let Ω denote the continuous diagram, indeed a C¹ curve,

(1.2) Ω(x) = (2/π)(x arcsin(x/2) + √(4 − x²)) for |x| ≦ 2, Ω(x) = |x| for |x| > 2.

Then, for every ε > 0,

(1.3) lim_{n→∞} M^{(n)}_Pl({λ ∈ Y_n : sup_{x∈R} |[λ]_{√n}(x) − Ω(x)| ≧ ε}) = 0.

This result of the limit shape is a static property of the Plancherel ensemble. In [9] we treated a dynamical limit shape, in other words, the evolution of limit shapes along macroscopic time. We considered a continuous time Markov chain keeping the Plancherel measure invariant, took a diffusive scaling limit in time and space, and found a limit shape (or macroscopic profile) ω_t depending on macroscopic time t. A pioneering work on the time evolution of profiles of Young diagrams is [8].
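The curve Ω of (1.2) is elementary to evaluate numerically. The following Python sketch (our own illustration, with names chosen here) implements it and checks the normalization ∫_R (Ω(x) − |x|) dx = 2 satisfied by a rescaled profile.

```python
import math

def omega(x: float) -> float:
    """Vershik-Kerov / Logan-Shepp limit shape, equation (1.2):
    (2/pi)(x arcsin(x/2) + sqrt(4 - x^2)) on [-2, 2], and |x| outside."""
    if abs(x) >= 2.0:
        return abs(x)
    return (2.0 / math.pi) * (x * math.asin(x / 2.0) + math.sqrt(4.0 - x * x))

def area_above_absolute_value(n: int = 20000) -> float:
    """Trapezoidal approximation of the area between Omega and |x|,
    which should equal 2 (the normalization of a rescaled diagram)."""
    h = 4.0 / n
    vals = [omega(-2.0 + i * h) - abs(-2.0 + i * h) for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

For instance, `omega(0.0)` returns 4/π, the height of the limit curve at the origin.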
The purpose of the present paper is to introduce a pausing time not necessarily obeying an exponential distribution instead of sticking to Markovian property for the microscopic dynamics of continuous time and to observe how it produces an effect on the scale of micro-macro correspondence and macroscopic evolution of the limit shape. Actually, we will see the effect given by a pausing time distribution with a heavy tail.
Let us begin by recalling the restriction-induction (Res-Ind) chain on Young diagrams. For a finite group G and its subgroup H, composing restriction of an irreducible representation of G to H (Res^G_H) and induction from H to G (Ind^G_H), and considering the dimensions of irreducible decompositions, we get a transition probability on Ĝ, the set of equivalence classes of irreducible representations of G. Namely, for λ, µ ∈ Ĝ, we have

(1.4) P_{λµ} = (dim µ / ([G : H] dim λ)) Σ_{ν∈Ĥ} [Res^G_H λ : ν][Res^G_H µ : ν],

where [Res^G_H λ : ν] denotes the multiplicity of ν in Res^G_H λ. Specializing to G = S_n (the symmetric group of degree n) and H = S_{n−1}, and identifying Ŝ_n with Y_n, we get from (1.4) the transition matrix P^{(n)} = (P_{λµ}) of degree |Y_n| which keeps the Plancherel measure on Y_n invariant. Note that in this case

P_{λµ} = (dim µ / (n dim λ)) ♯{ν ∈ Y_{n−1} : ν ր λ, ν ր µ},

where ν ր λ indicates that ν is formed by removing a box of λ. The Markov chain determined by P^{(n)} is the Res-Ind chain on Y_n. In this chain, a one-step transition admits non-local movement of a corner box in a Young diagram. The Res-Ind chain was treated in [5], [6], and [3]. Let us construct a continuous time random walk on Y_n, not necessarily Markovian, from the transition matrix P^{(n)}. We mention [18] as a nice reference on such non-Markovian continuous time random walks. Consider the Markov chain (Z^{(n)}_k)_{k∈{0,1,2,···}} on Y_n having transition matrix P^{(n)} and initial distribution M^{(n)}_0. Independently of this chain, take i.i.d. pausing intervals τ_1, τ_2, ··· with common distribution ψ on [0, ∞); they yield the counting process (N_s)_{s≧0} given by

N_s = sup{j ∈ N : τ_1 + ··· + τ_j ≦ s}    (sup ∅ = 0 conventionally).

We assume nontriviality of ψ, ψ((0, ∞)) > 0, which implies that τ_1 + ··· + τ_j diverges to ∞ a.s. as j → ∞. Set

X^{(n)}_s = Z^{(n)}_{N_s},    s ≧ 0.

The process (X^{(n)}_s)_{s≧0} is the desired continuous time random walk on Y_n. We have

P(N_s = j) = ψ^{∗j}([0, s]) − ψ^{∗(j+1)}([0, s]),

where ψ^{∗j} means the ordinary j-fold convolution power of ψ. Regarding the initial distribution M^{(n)}_0 = (M^{(n)}_0(λ)) as a row vector of degree |Y_n|, we have the distribution at time s as

M^{(n)}_s = Σ_{j=0}^∞ P(N_s = j) M^{(n)}_0 (P^{(n)})^j.

At the beginning we stated the result of the limit shape for the sequence of Plancherel measures {(Y_n, M^{(n)}_Pl)}_{n∈N}.
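The construction above — run the jump chain, hold each state for an i.i.d. pausing interval, and read off the state at time s — can be sketched generically. The following Python code is our own illustration (identifiers are not from the paper): it builds the counting process N_s from the pausing intervals and evaluates the walk X_s = Z_{N_s}.

```python
from typing import Sequence, TypeVar

T = TypeVar("T")

def counting_process(taus: Sequence[float], s: float) -> int:
    """N_s = sup{ j : tau_1 + ... + tau_j <= s }, with sup of the empty set = 0."""
    total, j = 0.0, 0
    for tau in taus:
        total += tau
        if total > s:
            break
        j += 1
    return j

def ctrw_position(chain: Sequence[T], taus: Sequence[float], s: float) -> T:
    """X_s = Z_{N_s}: the continuous time walk sits at the N_s-th state
    of the embedded jump chain (Z_k)."""
    return chain[counting_process(taus, s)]
```

Replacing the pausing intervals by exponential samples recovers the Markovian case; any other positive i.i.d. sequence gives a non-Markovian walk with the same embedded chain.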
A mechanism causing such a concentration phenomenon was pointed out by Biane ([2]) as the approximate factorization property for a sequence of probability spaces {(Y_n, M^{(n)})}_{n∈N}. Approximate factorization property can be described in several equivalent ways. Here we define it in terms of irreducible characters of the symmetric groups as follows. The irreducible character of S_n corresponding to λ ∈ Y_n is denoted by χ^λ. The value it takes at an element of the conjugacy class of S_n corresponding to ρ ∈ Y_n is χ^λ_ρ. Normalization of χ^λ yields χ̃^λ_ρ = χ^λ_ρ / dim λ. When we fix the type of a conjugacy class and let the size n tend to infinity, we use the convenient notation (ρ, 1^{n−|ρ|}) for the Young diagram of size n indicating the type of a conjugacy class. In general, the expectation with respect to probability M is denoted by E_M.
Concerning the decay order in the right hand side of (1.7), see also (2.1) in Section 2. The expectations of irreducible characters appearing in (1.7) are objects analogous to characteristic functions of probability distributions. Since (1.7) says that these characteristic functions nearly factorize along cycle decomposition with small error terms in some sense, approximate factorization property is regarded as an analogue of, but a much weaker notion than, independence. Applying approximate factorization property, Biane extended the concentration phenomenon (1.3) for the Plancherel measure to a wide variety of interesting models ([2]). For convenience of later reference, we give a statement in the following form. See also Section 4.4 of [10] for a detailed proof.
Assume that, for each j, the limit in (1.8) exists and has order at most the jth power as in (1.9). Then we have concentration at a continuous diagram ω ∈ D; namely, [λ]_{√n} converges to ω in D in probability M^{(n)} as n → ∞. The limit shape ω is characterized through the free cumulants of its transition measure m_ω by the limits appearing in (1.8). A procedure for computing ω from a sequence of free cumulants {R_j(m_ω)}_{j∈N} is given by the Markov transform; see Appendix.

Let ϕ be the characteristic function (Fourier transform) of ψ:

ϕ(ξ) = ∫_R e^{iξx} ψ(dx),    ξ ∈ R.

Differentiability of ϕ at ξ = 0 follows if ψ has a mean. The first result of the present paper is the following scaling limit of the continuous time random walk (X^{(n)}_s)_{s≧0}.

Theorem 1.3. Assume that ψ has mean m ∈ (0, ∞), and that the sequence of initial distributions {(Y_n, M^{(n)}_0)}_{n∈N} satisfies approximate factorization property together with (1.8) and (1.9), and hence has concentration at ω_0 ∈ D. Then, for macroscopic time t ≧ 0 and s = tn, the sequence {(Y_n, M^{(n)}_{tn})}_{n∈N} has concentration at a limit shape ω_t ∈ D as n → ∞. The transition measure of the limit shape ω_t is given by

(1.11) m_{ω_t} = (m_{ω_0})_{e^{−t/m}} ⊞ m_{Ω, 1−e^{−t/m}},

where m_{Ω,v} denotes the semicircle distribution of mean 0 and variance v. In (1.11), (·)_c denotes free compression of rank c, ⊞ denotes free convolution, and Ω is the limit shape (1.2) of Vershik-Kerov and Logan-Shepp, whose transition measure m_Ω is the standard semicircle distribution. Equivalently to (1.11), in terms of the free cumulants we have

(1.12) R_2(m_{ω_t}) = 1,    R_j(m_{ω_t}) = e^{−(j−1)t/m} R_j(m_{ω_0})    (j ≧ 3).

We note that it is possible to choose a desired sequence of initial distributions for an arbitrarily prescribed ω_0 ∈ D such that ∫_R (ω_0(x) − |x|) dx = 2. We see from (1.11) that ω_t converges to Ω in D as t → ∞. A main result in [9] is the special case of Theorem 1.3 in which (X^{(n)}_s) is a continuous time Markov chain, that is, the pausing time obeys an exponential distribution (with mean 1). Properties of free convolution with semicircular distributions as in (1.11) were treated in detail in [1]. See [15] and the Appendix for the necessary notions of free probability theory. The proof of Theorem 1.3 is presented in Section 2. In the situation of Theorem 1.3, microscopic time s = tn is of order n while space is rescaled by √n as in (1.1); we thus took a diffusive scaling limit. The Stieltjes transform of m_{ω_t} satisfies the partial differential equation recorded as (1.13). On the other hand, when we consider the case where a microscopic pausing time distribution is heavy-tailed so as not to have a mean any more, it is naturally expected that the limiting behavior will differ from the one in Theorem 1.3.
The second result of the present paper illustrates such an observation. Let us take a pausing time obeying the one-sided stable distribution ψ of exponent α ∈ (0, 1) whose characteristic function is given by

(1.14) ϕ(ξ) = exp(−|ξ|^α e^{−iπα sgn(ξ)/2}),    ξ ∈ R.

The distribution ψ is absolutely continuous. Especially in the simplest case of exponent 1/2, its density is expressed as

(2√π)^{−1} x^{−3/2} e^{−1/(4x)} 1_{(0,∞)}(x).

See e.g. [14] for one-sided stable distributions and their characteristic functions. As for the scaling limit of the continuous time random walk (X^{(n)}_s)_{s≧0} with such a ψ as its pausing time distribution, it turns out that approximate factorization property of an initial ensemble is not propagated along positive macroscopic time.
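For exponent 1/2 the density admits a quick numerical sanity check. The sketch below is our own code; it assumes the standard normalization whose Laplace transform is e^{−√u}, under which the density is (2√π)^{−1} x^{−3/2} e^{−1/(4x)} on (0, ∞). It verifies that the density integrates to 1 while its truncated mean keeps growing without bound, reflecting the absence of a mean.

```python
import math

def levy_half_density(x: float) -> float:
    """Density (2 sqrt(pi))^{-1} x^{-3/2} exp(-1/(4x)) on (0, infinity)."""
    if x <= 0.0:
        return 0.0
    return x ** (-1.5) * math.exp(-1.0 / (4.0 * x)) / (2.0 * math.sqrt(math.pi))

def integrate(f, a: float, b: float, n: int = 20000) -> float:
    """Plain trapezoidal rule on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def truncated_moment(power: float, log_upper: float) -> float:
    """Integral of x^power * density over (0, e^{log_upper}], via x = e^u
    (a log grid suits a density concentrated near 0 but heavy-tailed)."""
    g = lambda u: math.exp((power + 1.0) * u) * levy_half_density(math.exp(u))
    return integrate(g, -20.0, log_upper)
```

The truncated first moment grows like √T/√π as the cutoff T grows, which is the quantitative form of the divergence of the mean.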
Theorem 1.4. Assume that the sequence of initial distributions {(Y_n, M^{(n)}_0)}_{n∈N} satisfies approximate factorization property together with (1.8) and (1.9), and hence has concentration at ω_0 ∈ D. Assume also (1.14) for the pausing time distribution. For macroscopic time t > 0, let s = tθ_n, where the scaling factor θ : N → R_+ is taken to satisfy one of

(i) θ_n/n^{1/α} → 0,  (ii) θ_n/n^{1/α} → ∞,  (iii) θ_n = n^{1/α}.

Then, in either case of (i) or (ii), {(Y_n, M^{(n)}_{tθ_n})}_{n∈N} inherits approximate factorization property together with (1.8) and (1.9), and hence has concentration at ω_t ∈ D. The limit shape ω_t is, however, rather trivial:

(i) ω_t = ω_0, that is, no macroscopic evolution is observed;
(ii) ω_t = Ω for any t > 0, that is, macroscopic evolution is completed at once.

In the case of (iii), we have the convergence (1.15) of the averaged quantities, while {(Y_n, M^{(n)}_{tθ_n})}_{n∈N} inherits approximate factorization property if and only if ω_0 = Ω (hence ω_t = Ω for any t also).
Proof of Theorem 1.4 is given in Section 2.
The subsequent sections are organized as follows. In Section 2, we give proofs of the theorems. However, proofs of the essential propositions involving computational details are postponed until Section 3. Our method relies on Fourier analysis (both classical and more group-theoretical). Usefulness of Fourier analysis is already suggested in [18] in treating continuous time random walks under general pausing time.

Proofs of Theorems
The mechanism of propagating approximate factorization property along macroscopic time is exactly the same as treated in [9]. See also Section 5.2 in [10] for more information.
Normalizing an irreducible character of the symmetric group, let us consider the function Σ_ρ on Y for each ρ ∈ Y:

Σ_ρ(λ) = n^{↓|ρ|} χ̃^λ_{(ρ,1^{n−|ρ|})} for λ ∈ Y_n with n ≧ |ρ|, and Σ_ρ(λ) = 0 otherwise,

where n^{↓k} = n(n − 1) ··· (n − k + 1). The algebra A consisting of all linear hulls of the Σ_ρ's plays a fundamental role in the dual approach due to Kerov and Olshanski. Basically, our harmonic analysis is developed in this algebra; see [11] for its structure. For a sequence of probability spaces {(Y_n, M^{(n)})}_{n∈N}, approximate factorization property (1.7) and (1.8) are rephrased in terms of the Σ_ρ's as (2.1) and (2.2) for j ∈ {2, 3, ···}. We note also that (2.1) and (2.2) yield (2.3). Let wt denote the weight degree in A, so that wt(Σ_j) = j + 1. The formula

(2.4) Σ_k(λ) = R_{k+1}(m_λ) + (terms of lower weight degree in the R_j(m_λ)'s)

is a decisive formula connecting irreducible characters of symmetric groups with transition measures of Young diagrams. Actually, (2.4) makes our scaling arguments transparent. The right hand side of (2.4) is a polynomial, known as a Kerov polynomial, in the R_j(m_λ)'s.
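For orientation, the first few Kerov polynomials are standard in the literature; with R_j = R_j(m_λ) they read:

```latex
\Sigma_1 = R_2, \qquad
\Sigma_2 = R_3, \qquad
\Sigma_3 = R_4 + R_2, \qquad
\Sigma_4 = R_5 + 5R_3, \qquad
\Sigma_5 = R_6 + 15R_4 + 5R_2^2 + 8R_2 .
```

Only the top term R_{k+1} survives the rescaling used below; the lower-weight terms are negligible in the limit.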
The following formula for the transition matrix P^{(n)} of the Res-Ind chain is a key observation about propagation of approximate factorization property. Regarding Σ_ρ|_{Y_n} as the column vector consisting of the values of Σ_ρ on Y_n, we have (2.5) and (2.6). Especially, applying (2.6) to a k-cycle, we have

P^{(n)} (Σ_k|_{Y_n}) = (1 − k/n) Σ_k|_{Y_n},

and we set

(2.7) f(k, n, s) = Σ_{j=0}^∞ (1 − k/n)^j P(N_s = j) = E[(1 − k/n)^{N_s}]

(cf. (1.7)). For ρ, σ ∈ Y^×, (2.6) and (2.7) yield the limits (2.13) for j ∈ {2, 3, ···} by (1.8) for M^{(n)}_0; the first and second relations hold before taking the limit. We then see from (2.4) and (2.13) that (2.14) holds for k ≧ 2, and hence (1.12) follows. This completes the proof of Theorem 1.3.

Proof of Theorem 1.4
The verification in cases (i) and (ii) proceeds similarly to the preceding subsection, the proof of Theorem 1.3, using (2.9) and (2.10) in Proposition 2.2 instead of Proposition 2.1.
Let us consider case (iii). Similarly to (2.13) and (2.14) in the proof of Theorem 1.3, (1.15) is derived from (2.11) in Proposition 2.2. For simplicity, set r_j = R_j(m_{ω_0}). If ω_0 = Ω is the initial profile, (2.15) contains only the error term, since the jth free cumulant of m_Ω vanishes for j ≧ 3. If ω_0 ≠ Ω, there exists k ≧ 2 such that r_{k+1} ≠ 0. In the case of ρ = σ = (k^m) (m ∈ N) in (2.15), the main term involves the factor

g_α(t(2km)^{1/α}) − g_α(t(km)^{1/α})².

As verified below, for any t > 0, an appropriately chosen m yields g_α(t(2km)^{1/α}) ≠ g_α(t(km)^{1/α})². Then the main term does not vanish in (2.15), which implies that {(Y_n, M^{(n)}_{tθ_n})}_{n∈N} does not satisfy approximate factorization property. Using the properties satisfied by g_α(u), one checks that this quantity is larger than 1 if p is large enough. This completes the proof of Theorem 1.4.

Technical details
First we show an inversion formula expressing f(k, n, s) in (2.7) in terms of the characteristic function of ψ.
Putting in (2.7) and then interchanging the integral and the infinite sum, we obtain the above expression. Let ⋆ be the above three-fold integral. Then, since α < 0 < β holds, the convergence theorem for integrals applies. The third term is rewritten accordingly. After replacing r by ǫ, the part with a ↑ ∞ is the same. Then, letting ǫ ↓ 0, we obtain an expression which agrees with (3.3).
On the other hand, the convergence theorem yields (3.5). Let ⋆′ be the two-fold integral in (3.5). For ξ ≠ 0 we have the expression (3.6). Hence the convergence theorem applies again.
Since we see from the expression (3.6) that the left hand side of (3.5) is bounded jointly with respect to {ǫ < |ξ| < r} and a > 0, the convergence theorem for integrals yields the desired limit. Combined with the former half, this completes the proof of (3.1).
Before entering into the proof of Proposition 2.1, let us note the case where the pausing time obeys the exponential distribution with mean 1: ψ(dx) = e^{−x} 1_{(0,∞)}(x) dx, so that ϕ(ξ) = (1 − iξ)^{−1}. Then Lemma 3.1 gives (with residue calculus)

f(k, n, s) = e^{−ks/n}.
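In this exponential case N_s is a Poisson process of rate 1, and reading f(k, n, s) as the pausing-time average E[(1 − k/n)^{N_s}] (an interpretation suggested by the construction of the walk, not a formula quoted verbatim from the paper), the Poisson generating function E[z^{N_s}] = e^{s(z−1)} gives e^{−ks/n}. A quick numerical cross-check in Python (our own sketch):

```python
import math

def poisson_pgf(z: float, s: float, nterms: int = 400) -> float:
    """E[z^{N_s}] for N_s ~ Poisson(s), summed term by term."""
    total, term = 0.0, math.exp(-s)   # term = P(N_s = 0)
    for j in range(nterms):
        total += term * z ** j
        term *= s / (j + 1)           # P(N_s = j+1) from P(N_s = j)
    return total

# With z = 1 - k/n this equals exp(s(z - 1)) = exp(-k s / n),
# matching the residue-calculus evaluation of f(k, n, s).
```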

Proof of Proposition 2.2
Putting (1.14), the expression of the characteristic function, into (3.1) of Lemma 3.1 and noting the (absolute) continuity of ψ, we have (3.14). Let us refer to the two integrals in (3.14) as ⋆− and ⋆+, respectively. Take line segments L∓, L′∓ and arcs C∓, C′∓ as in Figure 1:
We compute the discriminant of the right hand side and begin with ǫ > 0 which makes this discriminant < 0. This gives a bound which is integrable in y and independent of n. The pointwise limit of the integrand is seen from (3.20), yielding

∫ e^{−sy²/(2n)} · (2 sin(y/n))/(y/n) · [2n(n − k)(1 − cos(y/n)) + k²]^{−1} dy.
This completes the proof of Proposition 2.2.

Remark
In order to describe the time evolution of the limit shape (= macroscopic profile) ω_t, we presented that of its transition measure m_{ω_t} in this paper. This expression enables us to read off the t-dependence of ω_t by way of the Markov transform (3.24). Although it is often difficult to write down a concrete formula for ω_t, we can, for example, appeal to numerical computation to follow the evolution of the shape. It is surely important to seek a partial differential equation for ω_t itself in addition to (1.13) for the Stieltjes transform of m_{ω_t}. Another promising approach is given by the logarithmic energy. It is known that the limit shape Ω of Vershik-Kerov and Logan-Shepp is the unique minimizer of the logarithmic energy functional Θ on the continuous diagrams D, with Θ(Ω) = 0 (see [12] and also [10]). Since our limit shape ω_t converges to Ω in D as t → ∞, it is interesting to ask whether Θ(ω_t) decreases as t increases (maybe for sufficiently large t).
In this paper we focus on the limit shape evolution (law of large numbers) without mentioning the fluctuation (central limit theorem) of the macroscopic profile. For a dynamical aspect of such fluctuations for Young diagrams, see [7]. As an algebraic and systematic approach to static concentration and fluctuation problems for Young diagrams, we refer to [11], [16] and [4].
Free convolution and free compression

Let (A, φ) be a pair of a unital *-algebra A over C and a state φ of A. For self-adjoint a ∈ A and probability µ on R, we write a ∼ µ if φ(a^n) = ∫_R x^n µ(dx) for any n. If a, b ∈ A are free and a ∼ µ, b ∼ ν, then a + b ∼ µ ⊞ ν. The free convolution µ ⊞ ν is uniquely determined for arbitrary compactly supported probabilities µ and ν on R. If q ∈ A is a self-adjoint projection such that c = φ(q) ≠ 0, a ∈ A with a ∼ µ (compactly supported), and a, q are free, then a probability ν on R is determined in such a way that qaq ∼ ν in (qAq, c^{−1}φ|_{qAq}); it is called the free compression of µ and denoted by µ_c. For any compactly supported µ and 0 < c ≦ 1, µ_c is uniquely determined. The free convolution and free compression for compactly supported probabilities on R are characterized in terms of their free cumulants by

R_j(µ ⊞ ν) = R_j(µ) + R_j(ν),    R_j(µ_c) = c^{j−1} R_j(µ)    (j ∈ N).
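The free cumulant rules just mentioned (R_j(µ ⊞ ν) = R_j(µ) + R_j(ν) and R_j(µ_c) = c^{j−1} R_j(µ), standard facts of free probability) can be exercised numerically through the free moment-cumulant recursion m_n = Σ_{k=1}^n R_k Σ_{i_1+···+i_k=n−k} m_{i_1}···m_{i_k} with m_0 = 1, summing over compositions. The sketch below is our own illustration: it recovers the Catalan moments of the standard semicircle law (R_2 = 1, all other R_j = 0), and the moments of the variance-2 semicircle obtained by adding cumulants, as under ⊞.

```python
from itertools import product
from typing import List

def moments_from_free_cumulants(R: List[float], N: int) -> List[float]:
    """Moments m_0..m_N from free cumulants, R[j] = R_j (R[0] unused), via
    m_n = sum_{k=1}^n R_k * sum_{i_1+...+i_k = n-k} m_{i_1} ... m_{i_k}."""
    m = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        total = 0.0
        for k in range(1, n + 1):
            rk = R[k] if k < len(R) else 0.0
            if rk == 0.0:
                continue
            # sum over compositions i_1 + ... + i_k = n - k of moment products
            inner = 0.0
            for comp in product(range(n - k + 1), repeat=k):
                if sum(comp) == n - k:
                    prod_m = 1.0
                    for i in comp:
                        prod_m *= m[i]
                    inner += prod_m
            total += rk * inner
        m[n] = total
    return m
```

Free compression acts on the input simply as R[j] → c^{j−1} R[j], so both characterizations above can be tested with the same routine.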