Limit theorems for free Lévy processes

We consider different limit theorems for additive and multiplicative free Lévy processes. The main results concern positive and unitary multiplicative free Lévy processes at small time, showing convergence to log free stable laws for many examples. The additive case is much easier, and we establish the convergence at small or large time to free stable laws. During the investigation we found that a log free stable law with index $1$ coincides with the Dykema-Haagerup distribution. We also consider limit theorems for positive multiplicative Boolean Lévy processes at small time, obtaining log Boolean stable laws in the limit.


Background
This article investigates the asymptotic behavior of additive and multiplicative free Lévy processes (AFLP and MFLP, resp.) at small and large time. These are the free analogs of Lévy processes and were introduced by Biane [Bia98] as particular cases of processes with free increments. There are two possible definitions, depending on whether one considers stationary free increments or stationary Markov transition functions. We consider only the former, since it corresponds directly to convolution semigroups for the free convolutions.
In this setting, various interesting questions naturally appear as analogs of classical results. However, in the free world the answers to these questions are sometimes similar to, and sometimes quite different from, the classical ones.
The first question that we investigate is the following. Given an AFLP $\{X_t\}_{t\geq 0}$ such that $X_0 = 0$, when does the process
$$a(t)X_t + b(t), \quad \text{as } t \downarrow 0 \text{ or } t \to \infty, \quad (1.1)$$
converge in law for some functions $a : (0,\infty) \to (0,\infty)$ and $b : (0,\infty) \to \mathbb{R}$? This problem can be settled by the Bercovici-Pata bijection, and the result is in one-to-one correspondence with the classical case (see Section 3.1). In both the small-time and large-time cases, the set of limiting distributions is exactly the set of free stable distributions (Proposition 3.2). It is notable that, in classical probability, while limit theorems for sums of independent random variables (the discrete-time case) were thoroughly studied in the 1930s and 1940s [GK54], limit theorems for Lévy processes (the continuous-time case) were settled only rather recently, by Maller and Mason [MM08, MM09].

The second question concerns, for a positive MFLP $\{X_t\}_{t\geq 0}$ such that $X_0 = 1$, the convergence in law of
$$b(t)(X_t)^{a(t)}, \quad \text{as } t \to \infty, \quad (1.2)$$
where $a, b : (0,\infty) \to (0,\infty)$ are some functions. This problem was solved by Haagerup-Möller [HM13], following earlier results of Tucci [Tuc10]. The set of possible limit distributions is completely known (see Section 3.2). In fact, for every positive MFLP $\{X_t\}_{t\geq 0}$, the law of $(X_t)^{1/t}$ converges weakly to a probability measure $\nu$, and the map $\{X_t\}_{t\geq 0} \mapsto \nu$ (more precisely, the map $\mathcal{L}(X_1) \mapsto \nu$, where $\mathcal{L}(X_1)$ is the law of $X_1$) is injective. This is quite different from classical probability (see Proposition 3.7), where the limit distributions must be log stable distributions, i.e. push-forwards of stable distributions by the map $x \mapsto e^x$. This terminology is adopted for other distributions as well, e.g. log Cauchy distributions. Note that in classical probability, additive and multiplicative classical Lévy processes (ACLP and MCLP, resp.) can be identified via the exponential map, so one need not study MCLPs separately.
However, due to the non-commutativity of processes, MFLPs cannot be identified with AFLPs via the exponential map. The third question to be considered is the limit in law of (1.2) at small time, namely
$$b(t)(X_t)^{a(t)}, \quad \text{as } t \downarrow 0, \quad (1.3)$$
where $\{X_t\}_{t\geq 0}$ is a positive MFLP starting at $1$, and $a, b : (0,\infty) \to (0,\infty)$ are functions as before. As we will see, the situation is very different from the large-time limit. The main results in this direction are summarized in Section 1.2. A similar question is the limit distribution of
$$b(t)(U_t)^{a(t)}, \quad \text{as } t \downarrow 0, \quad (1.4)$$
where $\{U_t\}_{t\geq 0}$ is a unitary MFLP such that $U_0 = 1$, and $a : (0,\infty) \to \mathbb{Z}$ and $b : (0,\infty) \to \mathbb{T}$ are some functions. The function $a$ should take only integer values, since the power function $z^p$ is continuously defined on the unit circle only when $p$ is an integer. Notice that in this case we should only consider small-time limits: at large time the distribution of $U_t$ spreads out, so we would have to require $a(t) \to 0$ to obtain a non-Haar limit, but then $a(t) \equiv 0$ eventually. In other directions, we also consider the Boolean analogues of the processes (1.1) and (1.3). In the Boolean case we cannot consider large-time limits (1.2), since in generic cases positive multiplicative Boolean Lévy processes (MBLPs) do not exist at large time [Ber06]. We do not analyze the unitary case in this paper.
We should mention that another direction of study, not discussed in this paper, is limit theorems for positive multiplicative monotone Lévy processes, both as $t \to \infty$ and as $t \downarrow 0$, as well as for unitary ones as $t \downarrow 0$. Even additive monotone Lévy processes have some open problems (see Remark 3.4). These problems are left to future research.

Main results
Our main results are summarized in the following list.
(1) The set of possible limit distributions of processes of the form (1.3) contains the following distributions:
• the log free stable distributions with index $1$, which include log Cauchy distributions and the Dykema-Haagerup distribution (Theorems 4.9 and 4.17 and Corollary 4.16);
• some log free $\alpha$-stable distributions with $1 < \alpha \leq 2$ (Theorem 4.12 and Corollary 4.15).
Moreover, we provide a general sufficient condition on $\{X_t\}_{t\geq 0}$ and on the functions $a$ and $b$ under which the law of (1.3) converges to the log Cauchy distribution (Theorem 4.1).
(2) The set of possible limit distributions of (1.3), now with $\{X_t\}_{0\leq t\leq 1}$ a positive MBLP, contains the log Boolean stable distributions with index $\leq 1$. We provide a general condition on $\{X_t\}_{t\geq 0}$ and the functions $a$ and $b$ under which the process converges in law (Theorems 5.1 and 5.5).
(3) The set of possible limit distributions of processes of the form (1.4) contains all "wrapped free stable distributions", i.e. the distributions of random variables $e^{iX}$ where $X$ follows a free stable law (Corollary 7.5). We also exhibit a fairly large domain of attraction of a wrapped free stable distribution (Theorem 7.3). A similar result is obtained for unitary MCLPs, which seems to be unknown in the literature.
Before going into the proofs, we would like to make some comments regarding the above results.
The Dykema-Haagerup distribution mentioned in (1) was introduced in [DH04a]; it appeared as the limiting eigenvalue distribution of $T_N^* T_N$, where $T_N$ is an $N \times N$ upper-triangular random matrix with independent complex Gaussian entries. During our investigation of (1), we discovered the somewhat mysterious fact that the Dykema-Haagerup distribution coincides with a log free $1$-stable law (Proposition 4.4).
One observation on the result (1) is that the limit distributions for positive MFLPs at small time seem to be universal, in contrast to the non-universal limit distributions of MFLPs at large time.
The proof of (1) is mostly based on the moment method. We find explicit MFLPs $\{X_t\}_{t\geq 0}$ and explicit functions $a$ and $b$ such that the moments of (1.3) converge. A particularly strong result can be obtained for convergence to log Cauchy distributions (Theorem 4.1). In this case we can reduce the problem to the Boolean case (2), which is rather easy to analyze. This reduction procedure, however, requires a considerable generalization of free and Boolean convolutions beyond probability measures, which we prepare in Section 6. Despite our investigation, it remains open whether the set of possible limit distributions of (1.3) is exactly the set of all log free stable distributions.
The term MBLP in (2) is not completely rigorous, since such a process is only defined in the sense of a convolution semigroup of distributions and no operator model is known. Also, the convolution semigroup is in general only defined for time $t \in [0,1]$. The proof of (2) is easier than in the free case (1), and a more robust result can be proved. Thanks to a simple formula for multiplicative Boolean convolution, we can directly compute the density of the process and show that it converges to the density of a log Boolean stable distribution.
For the unitary case (3), it again remains open whether the set of limit distributions of (1.4) is exactly the set of push-forwards of free stable distributions by the exponential map $x \mapsto e^{ix}$. The proof of (3) uses the (clockwise) exponential map $x \mapsto e^{-ix}$ to reduce unitary MFLPs to AFLPs. In spite of the non-commutativity of the process, such a reduction is possible thanks to the work of Anshelevich and Arizmendi [AA17]. This exponential-map method has so far been limited to multiplicative convolutions on $\mathbb{T}$; it is not available for positive multiplicative convolutions, and hence not for (1).

Organization of the paper
Apart from this introduction, there are six sections. Section 2 introduces the notation and preliminaries needed for the subsequent sections; this includes standard background in free probability as well as some useful lemmas on convergence of measures and the exponential map. In Section 3 we present, for completeness, results which are known or which follow directly from known results, including limit theorems for additive free Lévy processes. The main results occupy the remaining sections. More specifically, in Section 4 we consider positive MFLPs at small time; this section is mostly devoted to many examples of families for which we can prove convergence to log free stable distributions. The general result for the log Cauchy distribution is separated into Section 6 since, on one hand, the proof is rather technical and, on the other hand, we introduce there a class of generalized $\eta$-transforms which may be helpful for other problems in the future. Section 5 is devoted to positive MBLPs. Finally, in Section 7 we use the exponential map to study unitary MCLPs and MFLPs.

Notation

6. $\mu^p$, $p \in \mathbb{R}$: the push-forward of a probability measure $\mu$ on $(0,\infty)$ by the map $x \mapsto x^p$. If $\mu$ is a probability measure on $[0,\infty)$ then $\mu^p$ is defined for $p \geq 0$, and if $\mu$ is a probability measure on $\mathbb{T}$ then $\mu^n$ is defined for $n \in \mathbb{Z}$.
7. $D_s(\mu)$, $s \in \mathbb{R}$: the dilation of a probability measure $\mu$, that is, the push-forward of $\mu$ by the map $x \mapsto sx$.
8. $R_w(\mu)$, $w \in \mathbb{T}$: the rotation of a probability measure $\mu$ on $\mathbb{T}$, i.e. its push-forward by the map $z \mapsto wz$.
9. $z^\alpha$, $\log z$: the principal values, unless specified otherwise.

Classical convolution
Recall that the classical convolution $\mu_1 * \mu_2$ of Borel probability measures $\mu_1$ and $\mu_2$ on $\mathbb{R}$ is the law of $X_1 + X_2$, where $X_1$ and $X_2$ are independent, $\mathbb{R}$-valued random variables such that $\mathcal{L}(X_i) = \mu_i$, $i = 1, 2$. Equivalently, it is characterized by
$$\int_{\mathbb{R}} f \, d(\mu_1 * \mu_2) = \int_{\mathbb{R}} \int_{\mathbb{R}} f(x+y)\, d\mu_1(x)\, d\mu_2(y)$$
for bounded continuous functions $f$ on $\mathbb{R}$. A central concept of this paper is infinite divisibility, which we define in a general framework for later use.
Definition 2.1. Suppose that $\star$ is an associative binary operation on Borel probability measures on a topological space $T$. A Borel probability measure $\mu$ on $T$ is said to be $\star$-infinitely divisible (or $\star$-ID for short) if, for any $n \in \mathbb{N}$, there exists a probability measure $\mu_n$ on $T$ such that $\mu = \mu_n^{\star n} := \mu_n \star \cdots \star \mu_n$ ($n$-fold). The class of such probability measures is denoted by $\mathrm{ID}(\star)$ or $\mathrm{ID}(\star, T)$.
Recall also that if a probability measure $\mu$ on $\mathbb{R}$ is $*$-ID then its characteristic function has the Lévy-Khintchine representation (see e.g. [GK54, Sat99])
$$\hat{\mu}(u) = \exp\left( i\xi u + \int_{\mathbb{R}} \left( e^{iux} - 1 - \frac{iux}{1+x^2} \right) \frac{1+x^2}{x^2}\, d\tau(x) \right), \qquad u \in \mathbb{R}, \quad (2.2)$$
where $\xi \in \mathbb{R}$ and $\tau$ is a nonnegative finite Borel measure on $\mathbb{R}$ (the integrand is defined at $x = 0$ by its limit $-u^2/2$). Conversely, given such a pair $(\xi, \tau)$, the RHS of (2.2) is the characteristic function of a $*$-ID distribution. The pair $(\xi, \tau)$ is unique and is called the (additive) classical generating pair of $\mu$. We denote by $\mu^{\xi,\tau}_{*}$ the $*$-ID distribution which has the classical generating pair $(\xi, \tau)$. For each $*$-ID distribution $\mu$, there exists an ACLP $\{X_t\}_{t\geq 0}$ such that $X_0 = 0$ and $\mathcal{L}(X_1) = \mu$ (see [Sat99]). The law of $X_t$ is denoted by $\mu^{*t}$; it is characterized by the generating pair $(t\xi, t\tau)$, and these laws form a convolution semigroup, $\mu^{*s} * \mu^{*t} = \mu^{*(s+t)}$ for $s, t \geq 0$.
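As a concrete illustration (ours, not from the text), the generating pair $(\xi, \tau) = (0, \delta_0)$ recovers the Gaussian semigroup: the Lévy-Khintchine integrand extends continuously to $x = 0$,

```latex
\lim_{x\to 0}\Bigl(e^{iux}-1-\frac{iux}{1+x^2}\Bigr)\frac{1+x^2}{x^2} = -\frac{u^2}{2},
\qquad\text{hence}\qquad
\widehat{\mu^{0,\delta_0}_{*}}(u) = e^{-u^2/2},
```

so $\mu^{0,\delta_0}_{*}$ is the standard Gaussian, and $\mu^{*t} = N(0,t)$, with generating pair $(0, t\delta_0)$, is the classical heat (Brownian) semigroup.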

Free convolution
The (additive) free convolution $\mu_1 \boxplus \mu_2$ of (Borel) probability measures $\mu_1$ and $\mu_2$ on $\mathbb{R}$ is the distribution of the noncommutative random variable $X_1 + X_2$, where $X_1, X_2$ are free selfadjoint random variables in some noncommutative probability space such that $\mathcal{L}(X_1) = \mu_1$ and $\mathcal{L}(X_2) = \mu_2$. Free convolution was first defined by Voiculescu [Voi85, Voi86] for compactly supported distributions, then generalized by Maassen [Maa92] to probability measures with finite variance, and finally by Bercovici-Voiculescu [BV93] to arbitrary ones.
(3) There exist $\xi \in \mathbb{R}$ and a nonnegative finite Borel measure $\tau$ on $\mathbb{R}$ such that
$$\varphi_\mu(z) = \xi + \int_{\mathbb{R}} \frac{1+xz}{z-x}\, d\tau(x), \qquad z \in \mathbb{C}^+. \quad (2.7)$$
Conversely, given a pair $(\xi, \tau)$ of a real number and a nonnegative finite Borel measure, there exists a $\boxplus$-ID distribution $\mu$ such that (2.7) holds. The pair $(\xi, \tau)$ is unique and is called the (additive) free generating pair of $\mu$.
The bijection $\Lambda : \mathrm{ID}(*) \to \mathrm{ID}(\boxplus)$, $\mu^{\xi,\tau}_{*} \mapsto \mu^{\xi,\tau}_{\boxplus}$, is called the Bercovici-Pata bijection. Moreover, this map is a homeomorphism with respect to weak convergence [B-NT02, Corollary 3.9].
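Since $\Lambda$ preserves generating pairs, simple pairs produce classical-free pairs of distributions. A standard example (ours), using the generating-pair form $\varphi_\mu(z) = \xi + \int (1+xz)/(z-x)\, d\tau(x)$:

```latex
% (\xi,\tau) = (0,\delta_0): classically the standard Gaussian; freely,
\varphi_{\mu^{0,\delta_0}_{\boxplus}}(z)
  = \int_{\mathbb{R}} \frac{1+xz}{z-x}\, d\delta_0(x)
  = \frac{1}{z},
```

which is the Voiculescu transform of the standard semicircle law. Thus $\Lambda$ maps the Gaussian to the semicircle law; it is also well known that $\Lambda$ fixes the Cauchy distributions.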

Boolean convolution
The Boolean convolution $\mu_1 \uplus \mu_2$ of probability measures $\mu_1$ and $\mu_2$ on $\mathbb{R}$ is the law of $X_1 + X_2$, where $X_1$ and $X_2$ are Boolean independent selfadjoint random variables such that $\mathcal{L}(X_1) = \mu_1$ and $\mathcal{L}(X_2) = \mu_2$. Boolean convolution was introduced by Speicher and Woroudi [SW97] for compactly supported probability measures and then by Franz [Fra09a] for arbitrary ones. Boolean convolution is characterized by
$$\eta_{\mu_1 \uplus \mu_2}(z) = \eta_{\mu_1}(z) + \eta_{\mu_2}(z), \qquad \eta_\mu(z) := z - F_\mu(z),$$
where $\eta_\mu$ is called the $\eta$-transform. It can be proved that for any $t \geq 0$ and any probability measure $\mu$ on $\mathbb{R}$, there exists a probability measure $\mu^{\uplus t}$ which satisfies $\eta_{\mu^{\uplus t}}(z) = t\eta_\mu(z)$ (note that $\eta_\mu$ takes values in $\mathbb{C}^- \cup \mathbb{R}$) [SW97]. This implies that every probability measure $\mu$ on $\mathbb{R}$ is $\uplus$-ID. Since $F_\mu$ is an analytic map from $\mathbb{C}^+$ into itself such that $F_\mu(z) = z(1 + o(1))$ as $z \to \infty$ non-tangentially, it has the Pick-Nevanlinna representation
$$F_\mu(z) = z - \xi + \int_{\mathbb{R}} \frac{1+xz}{x-z}\, d\tau(x), \qquad z \in \mathbb{C}^+, \quad (2.11)$$
where $\xi \in \mathbb{R}$ and $\tau$ is a nonnegative finite measure on $\mathbb{R}$. Conversely, if a map $F$ has the representation on the RHS of (2.11), it can be written as $F = F_\mu$ for some probability measure $\mu$. Thus we may denote by $\mu^{\xi,\tau}_{\uplus}$ the probability measure having the representation (2.11), and then define the bijection $\mu^{\xi,\tau}_{*} \mapsto \mu^{\xi,\tau}_{\uplus}$ from $\mathrm{ID}(*)$ onto $\mathrm{ID}(\uplus) = \mathcal{P}(\mathbb{R})$.
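As a worked example (ours; we write $\eta_\mu(z) = z - F_\mu(z)$, the self-energy, which is additive under $\uplus$), take the symmetric Bernoulli law:

```latex
\mu = \tfrac12(\delta_{-1}+\delta_{1}):\qquad
G_\mu(z) = \frac{z}{z^2-1},\quad F_\mu(z) = z - \frac1z,\quad \eta_\mu(z) = \frac1z.
% Boolean powers: \eta_{\mu^{\uplus t}}(z) = t/z gives F_{\mu^{\uplus t}}(z) = \frac{z^2-t}{z}, so
G_{\mu^{\uplus t}}(z) = \frac{z}{z^2-t}
\quad\Longrightarrow\quad
\mu^{\uplus t} = \tfrac12\bigl(\delta_{-\sqrt t}+\delta_{\sqrt t}\bigr) = D_{\sqrt t}(\mu),
```

so the Boolean power $\mu^{\uplus t}$ exists for every $t \geq 0$, consistent with the fact quoted above that every probability measure is $\uplus$-ID; the symmetric Bernoulli law is even strictly $\uplus$-stable of index $2$.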

Stable distributions
Let $\star$ denote one of $*$, $\boxplus$ and $\uplus$. A non-degenerate probability measure $\mu$ on $\mathbb{R}$ is stable (or free stable or Boolean stable, according to the choice of $\star$) if for every $a, b > 0$ there exist $c > 0$ and $d \in \mathbb{R}$ such that
$$D_a(\mu) \star D_b(\mu) = D_c(\mu) \star \delta_d. \quad (2.13)$$
If we can always take $d = 0$ then $\mu$ is said to be strictly stable. There are several equivalent definitions of stable distributions (see [Zol86]).
Let $\mathcal{A}$ be the set of admissible parameters
$$\mathcal{A} = \{(\alpha,\rho) : \alpha \in (0,1],\ \rho \in [0,1]\} \cup \{(\alpha,\rho) : \alpha \in (1,2],\ \rho \in [1 - \tfrac{1}{\alpha}, \tfrac{1}{\alpha}]\}. \quad (2.14)$$
Up to scaling and shifts, stable distributions are classified by the admissible parameters. For $(\alpha,\rho) \in \mathcal{A}$, let $s_{\alpha,\rho}$ be a classical stable distribution characterized by
$$\int_{\mathbb{R}} e^{xz}\, ds_{\alpha,\rho}(x) = \exp\Bigl( -\tfrac{1}{\Gamma(1+\alpha)}\, e^{i\alpha\rho\pi} z^{\alpha} \Bigr), \qquad \alpha \neq 1,$$
for $z \in i(-\infty, 0)$, and let $\mathrm{f}_{\alpha,\rho}$ be a free stable distribution characterized by
$$\varphi_{\mathrm{f}_{\alpha,\rho}}(z) = -e^{i\alpha\rho\pi} z^{1-\alpha}, \qquad \alpha \neq 1, \quad (2.16)$$
for $z \in \mathbb{C}^+$. Note that the parametrization differs from that of [BP99]. The parameter $\rho$ expresses the mass on the positive half-line: $\rho = \mathrm{f}_{\alpha,\rho}([0,\infty))$ if $\alpha \neq 1$; see [HK14]. The above free stable distributions $\mathrm{f}_{\alpha,\rho}$ cover all free stable distributions up to affine transformations; namely, the set $\{D_a(\mathrm{f}_{\alpha,\rho}) \boxplus \delta_b : (\alpha,\rho) \in \mathcal{A},\ a > 0,\ b \in \mathbb{R}\}$ is equal to the set of free stable distributions. Notice that the free shift $\boxplus\, \delta_b$ is equal to the usual shift $*\, \delta_b$. For notational simplicity, we denote by $\mathrm{f}_\alpha$ the free stable distribution with $\alpha \geq 1$ and $\rho = 1 - 1/\alpha$; namely, $\varphi_{\mathrm{f}_\alpha}(z) = -(-z)^{1-\alpha}$ for $\alpha \in (1,2]$ and $\varphi_{\mathrm{f}_1}(z) = -\log z$. The support of $\mathrm{f}_\alpha$ is known explicitly; in particular, $\mathrm{f}_2$ is the standard semicircle law. Further information on free stable laws can be found in [BP99, Dem11, HK14].
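A quick consistency check (ours) at the endpoint $\alpha = 2$, where $\rho = 1 - 1/\alpha = 1/2$:

```latex
\varphi_{\mathrm{f}_2}(z) = -(-z)^{1-2} = \frac{1}{z}.
% The standard semicircle law w has R-transform R_w(z) = z, hence
\varphi_{w}(z) = R_w(1/z) = \frac{1}{z},
```

so $\mathrm{f}_2 = w$, in agreement with the remark that $\mathrm{f}_2$ is the standard semicircle law.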
Finally, we mention that the Cauchy distribution plays a special role: it is a strictly $1$-stable distribution in the classical, free and Boolean senses, and it satisfies
$$c_{\beta,\gamma} * \mu = c_{\beta,\gamma} \boxplus \mu \quad (2.21)$$
for all probability measures $\mu$. The Voiculescu transform of the Cauchy distribution is given by
$$\varphi_{c_{\beta,\gamma}}(z) = \beta - i\gamma.$$
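The classical-free coincidence for the Cauchy distribution follows from a short computation (ours) with reciprocal Cauchy transforms. Since $\varphi_{c_{\beta,\gamma}}(z) = \beta - i\gamma$ is constant,

```latex
F_{c_{\beta,\gamma}\boxplus\mu}^{-1}(z) = F_\mu^{-1}(z) + \beta - i\gamma
\;\Longrightarrow\;
F_{c_{\beta,\gamma}\boxplus\mu}(z) = F_\mu(z-\beta+i\gamma),
\qquad
G_{c_{\beta,\gamma}\boxplus\mu}(z) = G_\mu(z-\beta+i\gamma);
% classically, integrating the Cauchy kernel against c_{\beta,\gamma} gives the same:
G_{c_{\beta,\gamma}*\mu}(z)
 = \int_{\mathbb{R}} \frac{d\mu(x)}{(z-\beta+i\gamma)-x}
 = G_\mu(z-\beta+i\gamma),
```

so $c_{\beta,\gamma} \boxplus \mu$ and $c_{\beta,\gamma} * \mu$ have the same Cauchy transform, hence coincide.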

Multiplicative classical convolutions
Let $G$ be either $(0,\infty)$ or $\mathbb{T}$. The multiplicative classical convolution $\mu_1 \circledast \mu_2$ of Borel probability measures $\mu_1$ and $\mu_2$ on $G$ is the law of $X_1 X_2$, where $X_1$ and $X_2$ are independent, $G$-valued random variables such that $\mathcal{L}(X_i) = \mu_i$, $i = 1, 2$. For $G = (0,\infty)$, the multiplicative group $((0,\infty), \cdot)$ is isomorphic to the additive group $(\mathbb{R}, +)$ via the exponential map, and so Lévy processes and probability measures on $(0,\infty)$ can be identified with those on $\mathbb{R}$. For the unit circle $G = \mathbb{T}$, such an identification is not possible since the map $x \mapsto e^{ix}$ from $\mathbb{R}$ to $\mathbb{T}$ is not injective. However, this map is still useful for proving limit theorems for Lévy processes (see Sections 2.11 and 7). The structure of $\circledast$-ID distributions on $\mathbb{T}$ is well known. For simplicity we avoid the case of vanishing mean; namely, let $\mathrm{ID}^*(\circledast, \mathbb{T})$ be the set of $\circledast$-ID distributions $\mu$ on $\mathbb{T}$ such that $\int_{\mathbb{T}} \zeta\, d\mu(\zeta) \neq 0$. Any such measure has a Lévy-Khintchine representation (2.23) in terms of a pair $(\gamma, \sigma)$, where $\gamma \in \mathbb{T}$ and $\sigma$ is a finite Borel measure on $\mathbb{T}$. Conversely, for any such pair $(\gamma, \sigma)$ there exists $\mu \in \mathrm{ID}^*(\circledast, \mathbb{T})$ such that (2.23) holds. Note that, given $\mu$, the pair $(\gamma, \sigma)$ is not unique. We call $(\gamma, \sigma)$ a (multiplicative) classical generating pair of $\mu$ and write $\mu = \mu^{\gamma,\sigma}_{\circledast}$. To each generating pair $(\gamma, \sigma)$ and $t \geq 0$ we can associate a probability measure $\mu^{\gamma^t, t\sigma}_{\circledast}$, denoted by $\mu^{\circledast t}$ if we write $\mu = \mu^{\gamma,\sigma}_{\circledast}$. Notice that a continuous function $t \mapsto \gamma^t$ is not uniquely defined, so we need to specify its branch. Once a branch is chosen, we can associate a Lévy process on $\mathbb{T}$ which has the distribution $\mu^{\circledast t}$ at time $t \geq 0$. For further details see [Céb16, CG08, Par67].
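The identification of $((0,\infty), \cdot)$ with $(\mathbb{R}, +)$ can be sketched numerically (illustrative code, ours; the lognormal laws are chosen arbitrarily):

```python
import numpy as np

# The exponential map identifies multiplicative convolution on (0, inf) with
# additive convolution on R: log(X1 * X2) = log X1 + log X2 for independent Xi.
rng = np.random.default_rng(1)
X1 = rng.lognormal(mean=0.0, sigma=1.0, size=10**5)   # log X1 ~ N(0, 1)
X2 = rng.lognormal(mean=0.5, sigma=2.0, size=10**5)   # log X2 ~ N(0.5, 4)
logs = np.log(X1 * X2)                                # ~ N(0.5, 5)
print(logs.mean(), logs.std())  # close to 0.5 and sqrt(5) = 2.236...
```

So the multiplicative classical convolution of two lognormal laws is again lognormal, mirroring the additive statement for Gaussians.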

Multiplicative free convolution on the positive real line
For (Borel) probability measures $\mu$ and $\nu$ on $[0,\infty)$, the multiplicative free convolution $\mu \boxtimes \nu$ is the distribution of $X^{1/2} Y X^{1/2}$, where $X$ and $Y$ are nonnegative free random variables with distributions $\mu$ and $\nu$, respectively. This binary operation was first introduced by Voiculescu for compactly supported probability measures [Voi87], and then extended by Bercovici and Voiculescu [BV93] to general probability measures on $[0,\infty)$. The following presentation is based on [BV93, BB05].
where a ≥ 0, b ∈ R and τ is a non-negative finite measure on [0, ∞). The triplet (a, b, τ ) is unique.

Multiplicative free convolution on the unit circle
The multiplicative free convolution $\mu \boxtimes \nu$ of probability measures in $\mathcal{P}(\mathbb{T})$ is the distribution of $UV$ when $U$ and $V$ are free unitary elements such that the laws of $U$ and $V$ are $\mu$ and $\nu$, respectively [Voi87]. Let $\mu \in \mathcal{P}(\mathbb{T})$. We now consider $G_\mu(z)$ and $F_\mu(z)$ for $z$ outside the unit disc $\mathbb{D}$, and $\eta_\mu(z) = 1 - zF_\mu(1/z)$ in the unit disc $\mathbb{D}$. Suppose that the first moment $m_1(\mu) = \int_{\mathbb{T}} w\, d\mu(w)$ of $\mu$ is not zero. Then the function $\eta_\mu$ has a convergent series expansion $\eta_\mu(z) = m_1(\mu)z + o(z)$, and so one can define the compositional inverse $\eta_\mu^{-1}(z)$ in a neighborhood of $0$ as a convergent series, and define $\Sigma_\mu(z) = \eta_\mu^{-1}(z)/z$ in a neighborhood of $0$. Suppose that $m_1(\mu) \neq 0 \neq m_1(\nu)$. Then the multiplicative free convolution is characterized by $\Sigma_{\mu \boxtimes \nu}(z) = \Sigma_\mu(z)\Sigma_\nu(z)$ [Voi87] in a neighborhood of $0$. It is known that the normalized Haar measure $h$ is the only $\boxtimes$-ID distribution with mean $0$. Thus we introduce the class $\mathrm{ID}^*(\boxtimes, \mathbb{T}) := \mathrm{ID}(\boxtimes, \mathbb{T}) \setminus \{h\}$. A probability distribution $\mu$ is a member of $\mathrm{ID}^*(\boxtimes, \mathbb{T})$ if and only if $\Sigma_\mu$ can be written in the form (2.31) determined by a pair $(\gamma, \sigma)$, where $\gamma \in \mathbb{T}$ and $\sigma$ is a non-negative finite measure on $\mathbb{T}$. The pair $(\gamma, \sigma)$ is unique and is called the (multiplicative) free generating pair of $\mu$. We denote by $\mu^{\gamma,\sigma}_{\boxtimes}$ the $\boxtimes$-ID distribution characterized by (2.31). The $\boxtimes$-infinite divisibility of $\mu$ is equivalent to the existence of a weakly continuous $\boxtimes$-convolution semigroup $\{\mu^{\boxtimes t}\}_{t\geq 0}$ with $\mu^{\boxtimes 0} = \delta_1$ and $\mu^{\boxtimes 1} = \mu$. This convolution semigroup can be realized as the law of a unitary MFLP, whose asymptotic behaviour at time $0$ is studied in Section 7.
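A minimal worked example (ours), using the common normalization $\Sigma_\mu(z) = \eta_\mu^{-1}(z)/z$, which we assume agrees with the definition above: for a point mass $\delta_w$, $w \in \mathbb{T}$,

```latex
F_{\delta_w}(z) = z - w,\qquad
\eta_{\delta_w}(z) = 1 - zF_{\delta_w}\!\Bigl(\tfrac1z\Bigr) = wz,\qquad
\eta_{\delta_w}^{-1}(z) = \frac{z}{w},\qquad
\Sigma_{\delta_w}(z) = \frac{1}{w}.
% Multiplicativity \Sigma_{\mu\boxtimes\nu} = \Sigma_\mu\Sigma_\nu then yields
\Sigma_{\delta_w\boxtimes\delta_{w'}}(z) = \frac{1}{ww'}
\;\Longrightarrow\;
\delta_w\boxtimes\delta_{w'} = \delta_{ww'},
```

as expected, since commuting unitaries simply multiply.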

Multiplicative Boolean convolution on the positive real line
There is no satisfactory definition of a "multiplicative Boolean convolution on $[0,\infty)$". Bercovici considered a possible operation $\cup\!\!\times$ defined by
$$\frac{\eta_{\mu_1 \cup\!\times \mu_2}(z)}{z} = \frac{\eta_{\mu_1}(z)}{z} \cdot \frac{\eta_{\mu_2}(z)}{z}, \quad (2.34)$$
but the formula (2.34) does not always define a probability measure on $[0,\infty)$. In fact, Bercovici showed that the power $\mu^{\cup\!\times n}$ does not exist for sufficiently large $n$ if $\mu \in \mathcal{P}([0,\infty))$ is compactly supported and non-degenerate [Ber06]. Franz also tried another definition of multiplicative Boolean convolution, which turned out to be non-associative [Fra09a]. On the other hand, Bercovici proved that the formula
$$\frac{\eta_{\mu^{\cup\!\times t}}(z)}{z} = \left( \frac{\eta_\mu(z)}{z} \right)^{t} \quad (2.35)$$
defines a probability measure $\mu^{\cup\!\times t}$ on $[0,\infty)$ for any $0 \leq t \leq 1$ and any probability measure $\mu$ on $[0,\infty)$; this definition works well, e.g., in [AH13]. We will investigate limit theorems for this Boolean convolution power in Section 5.

Multiplicative Boolean convolution on the unit circle
The multiplicative Boolean convolution $\mu \cup\!\!\times\, \nu$ of probability measures $\mu, \nu$ on $\mathbb{T}$ was defined by Franz [Fra09b] as the distribution of $UV$ when $U$ and $V$ are unitary elements such that $U - 1$ and $V - 1$ are Boolean independent and such that $\mathcal{L}(U) = \mu$ and $\mathcal{L}(V) = \nu$. It is characterized in terms of the $\eta$-transforms by the formula
$$\frac{\eta_{\mu \cup\!\times \nu}(z)}{z} = \frac{\eta_\mu(z)}{z} \cdot \frac{\eta_\nu(z)}{z}.$$
Similarly to the free case, the normalized Haar measure is the only $\cup\!\!\times$-ID distribution with mean $0$, and so we set $\mathrm{ID}^*(\cup\!\!\times, \mathbb{T}) := \mathrm{ID}(\cup\!\!\times, \mathbb{T}) \setminus \{h\}$. This is also equivalent to a Lévy-Khintchine representation in terms of a pair $(\gamma, \sigma)$, where $\gamma \in \mathbb{T}$ and $\sigma$ is a non-negative finite measure on $\mathbb{T}$. Similarly to the free case, the $\cup\!\!\times$-infinite divisibility of $\mu$ is equivalent to the existence of a weakly continuous $\cup\!\!\times$-convolution semigroup $\{\mu^{\cup\!\times t}\}_{t\geq 0}$ with $\mu^{\cup\!\times 0} = \delta_1$ and $\mu^{\cup\!\times 1} = \mu$, which can be realized as the law of a unitary MBLP. We do not analyze unitary MBLPs in this paper, but the class $\mathrm{ID}^*(\cup\!\!\times, \mathbb{T})$ is important for multiplicative free convolution; see Section 2.11.

2.11 The wrapping map

The classical case
In the last section of this paper we will study unitary MFLPs. For this we use the wrapping (or exponential) map $W : \mathcal{P}(\mathbb{R}) \to \mathcal{P}(\mathbb{T})$, defined as the push-forward along $x \mapsto e^{ix}$. It satisfies
$$W(\mu * \nu) = W(\mu) \circledast W(\nu) \quad (2.36)$$
for all probability measures $\mu$ and $\nu$ on $\mathbb{R}$, and hence $W$ maps $\mathrm{ID}(*, \mathbb{R})$ into $\mathrm{ID}(\circledast, \mathbb{T})$. From the computation [Céb16, Proposition 3.1] we deduce the following formula for Lévy-Khintchine representations.
Proposition 2.6. Given a ⊛-ID law µ γ,σ ⊛ on T, we define for Borel subsets A, and arg γ is an arbitrary argument.
Proof. It suffices to check the three relations (2.38)-(2.40). The first and the third are easy to check. For the second relation, using the $2\pi$-periodicity of the measure $(1+x^2)\, d\tau(x)$, we obtain the desired identity, where we naturally identify the measure $[(1+x^2)\tau(dx)]|_{(0,2\pi)}$ with a measure on $\mathbb{T} \setminus \{1\}$ and use the identity (2.44). Thus the second relation holds.

The free case
Furthermore, according to [AA17], the map $W$ restricted to a subclass of probability measures provides a homomorphism from additive free/Boolean convolutions to multiplicative ones on the unit circle. Define the class $\mathcal{L}$ of probability measures, and the associated class $\mathcal{F}_{\mathcal{L}}$ of reciprocal Cauchy transforms, as in [AA17]. An analytic function $F : \mathbb{C}^+ \to \mathbb{C}^+$ in $\mathcal{F}_{\mathcal{L}}$ is the reciprocal Cauchy transform of some probability measure if and only if it admits the representation of [AA17] in terms of some analytic transformation $f : \mathbb{D} \to \mathbb{C}^+$. Moreover, $\mathcal{L}$ is closed under the additive convolutions $\uplus$ and $\boxplus$, and under Boolean additive convolution powers and free additive convolution powers whenever defined. On the other hand, for $\mu \in \mathcal{L}$ and $n \in \mathbb{Z}$, we have $W(\mu \star \delta_{2\pi n}) = W(\mu)$. Hence for $\mu, \nu \in \mathcal{L}$ and a convolution $\star \in \{*, \boxplus, \uplus\}$, we may and do write "$\mu = \nu \mod \delta_{2\pi}$" if $\mu = \nu \star \delta_{2\pi n}$ for some $n \in \mathbb{Z}$. This defines an equivalence relation on $\mathcal{L}$, independent of the choice of the convolution $\star$.
It was proven in [AA17] that, restricted to the class $\mathcal{L}$, the map $W$ satisfies the identities above. Moreover, $W$ is weakly continuous and maps $\mathcal{L}$ onto its image, which can be described in terms of any probability measure $\mu \in (W|_{\mathcal{L}})^{-1}(\nu)$. The most important property is that $W|_{\mathcal{L}}$ is a homomorphism between additive free and multiplicative free convolutions (this is also true for the Boolean and monotone convolutions).
Proposition 2.8. Let $\mu \in \mathcal{L}$. Then the family of distributions $\{W(\mu^{\uplus t})\}_{t\geq 0}$ defines a weakly continuous $\cup\!\!\times$-convolution semigroup, which we express in the form $W(\mu^{\uplus t}) = W(\mu)^{\cup\!\times t}$. Similarly, whenever $\mu^{\boxplus t}$ is defined, the family of distributions $\{W(\mu^{\boxplus t})\}_t$ defines a weakly continuous $\boxtimes$-convolution semigroup, which we denote by $W(\mu^{\boxplus t}) = W(\mu)^{\boxtimes t}$. The following two results are not stated in [AA17], so we provide proofs.
Proposition 2.9. Let µ ∈ ID(⊞) and let τ be the finite measure in (2.2). The following conditions are equivalent.

Convergence of probability measures
This section gives several facts on convergence in law of random variables. Most results are elementary.
Proof. Define $\mu_t := \mathcal{L}(X_t)$ and $\mu := \mathcal{L}(X)$. Take any $\varepsilon > 0$, $f \in C_b(\mathbb{R})$, and any decreasing sequence $\{t_n\}_{n\geq 1}$ such that $\lim_{n\to\infty} t_n = 0$. Splitting the difference of integrals as in (2.51), the first term is bounded by $2\|f\|_\infty \varepsilon$, the second integral is bounded by $\varepsilon$ for large $n \in \mathbb{N}$ by the uniform continuity of $f$ on finite intervals, and the third integral tends to $0$ as $n \to \infty$ by the weak convergence of $\mu_{t_n}$.
This lemma can be expressed in a multiplicative form.
Proof. The group isomorphism $\exp : \mathbb{R} \to (0,\infty)$ turns dilation into power and shift into dilation, and hence Lemma 2.11 applies.
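The change of variables behind this proof can be spelled out in one line (a computation of ours): if $Y_t = e^{X_t}$, then

```latex
e^{\,a(t)X_t + b(t)} \;=\; e^{b(t)}\bigl(e^{X_t}\bigr)^{a(t)} \;=\; D_{e^{b(t)}}\!\bigl(Y_t^{\,a(t)}\bigr),
```

so affine normalizations $a(t)X_t + b(t)$ correspond exactly to power-dilation normalizations $D_{b'(t)}(Y_t^{a(t)})$ with $b'(t) = e^{b(t)}$.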
If we assume that the limit distribution is non-degenerate (i.e. not a point mass) and $a(t) > 0$, then we can prove the converse of Lemma 2.11.
Remark 2.14. By the transform t → 1/t, the same statement holds for the limit t → ∞.
Conversely, suppose that $a(t)X_t + b(t)$ converges in law to some non-degenerate random variable $Y$. Take a sequence $t_n \downarrow 0$ and consider the discretized random variables $a(t_n)X_{t_n} + b(t_n)$. Then [GK54, p. 40, Theorem 1] implies that there exist $\alpha > 0$ and $\beta \in \mathbb{R}$ such that $Y \overset{\mathrm{law}}{=} \alpha X + \beta$. Then, taking $\beta_n = 1$, $\alpha_n = 0$ and replacing $b_n$ and $a_n$ respectively with $\alpha/a(t_n)$ and $(\beta - b(t_n))/a(t_n)$ in [GK54, p. 42, Theorem 2], we obtain the convergence $a(t_n) \to \alpha$ and $b(t_n) \to \beta$. Since the sequence $\{t_n\}_{n\in\mathbb{N}}$ was arbitrary, the conclusion follows.
We give a sufficient condition for weak convergence in terms of local uniform convergence of the absolutely continuous part. Suppose that for any compact subset $K \subset B$ there exists $\delta > 0$ such that $\mu_t$ is Lebesgue absolutely continuous on $K$ for any $0 < t < \delta$, and that
$$\frac{d\mu_t}{dx} \to p \quad \text{uniformly on } K \text{ as } t \downarrow 0.$$
Then $\mu_t$ converges weakly to the probability measure $p(x)\mathbf{1}_B(x)\, dx$.
Proof. Denote by Leb the Lebesgue measure on R. Take f ∈ C b (R) and ε > 0. Since Take δ > 0 such that µ t is absolutely continuous w.r.t. Leb| K for all t ∈ (0, δ), and also satisfying that

Organizing easy or known limit theorems
This section summarizes known results and results that follow readily from known results.

Additive Lévy processes at large and small time
Let $\{X_t\}_{t\geq 0}$ be an AFLP such that $X_0 = 0$. We discuss the convergence of the process
$$a(t)X_t + b(t), \quad (3.1)$$
where $a : (0,\infty) \to (0,\infty)$ and $b : (0,\infty) \to \mathbb{R}$ are some functions. Alternatively, in terms of the marginal law $\mu := \mathcal{L}(X_1)$, the above problem reads as the weak convergence of
$$D_{a(t)}(\mu^{\boxplus t}) \boxplus \delta_{b(t)}.$$
This problem can be solved by the Bercovici-Pata bijection, and the result has a complete correspondence with the classical one.
Suppose that $a(t)X_t + b(t)$ converges in law to a non-degenerate random variable $Y$. Then $\tilde{a}(t)X_t + \tilde{b}(t)$ converges in law to a non-degenerate $\mathbb{R}$-valued random variable $\tilde{Y}$ if and only if there exist $\alpha > 0$, $\beta \in \mathbb{R}$ such that $\tilde{a}(t)/a(t) \to \alpha$ and $\tilde{b}(t) - (\tilde{a}(t)/a(t))\, b(t) \to \beta$, and in this case $\tilde{Y} \overset{\mathrm{law}}{=} \alpha Y + \beta$. Thus it suffices to find one specific pair of functions $(a(t), b(t))$ for which the distribution of (3.1) converges.
First we establish that the limit distribution of (3.1), if it exists, must be free stable. This fact follows from [MM08, Theorem 2.3] and the Bercovici-Pata bijection, but we give a direct, simple proof which is valid for the classical and Boolean cases as well.
Proof. We focus only on the limit $t \downarrow 0$, since the other case is proved in the same way. Instead of distributions we use stochastic processes. Let $\{X_t\}_{t\geq 0}$ be an ACLP that has the distribution $\mu^{*t}$ at time $t \geq 0$, and let $Y$ be a non-constant random variable such that $\mathcal{L}(Y) = \nu$. Take i.i.d. copies $Y_1, Y_2, \ldots$ of $Y$.
The following identity holds for each $n \in \mathbb{N}$:
$$\frac{a(t)}{a(nt)}\bigl(a(nt)X_{nt} + b(nt)\bigr) - \frac{a(t)}{a(nt)}\, b(nt) + nb(t) = \sum_{j=1}^{n} \bigl( a(t)(X_{jt} - X_{(j-1)t}) + b(t) \bigr). \quad (3.4)$$
Since $a(nt)X_{nt} + b(nt)$ converges in law to $Y$ as $t \downarrow 0$, and since the right-hand side of (3.4) converges in law to $Y_1 + \cdots + Y_n$, Lemma 2.13 implies that $a(t)/a(nt)$ converges to some $\alpha_n \in [0,\infty)$ and $-b(nt)\,a(t)/a(nt) + nb(t)$ converges to some $\beta_n \in \mathbb{R}$ as $t \downarrow 0$, and also $Y_1 + \cdots + Y_n \overset{\mathrm{law}}{=} \alpha_n Y + \beta_n$. Since $Y$ is not constant, we must have $\alpha_n \in (0,\infty)$. This implies that $Y$ is stable; see [Zol86, p. 14, Equation I.24]. If $b(t) \equiv 0$ then $\beta_n = 0$, and so $Y$ is strictly stable. The proof for the free case is similar.

Remark 3.4. The Bercovici-Pata-type bijection $\Lambda_M$ from $*$-ID distributions onto monotone ID distributions was defined in [Has10], but only $\Lambda_M$ (and not its inverse) is known to be continuous. Therefore we cannot prove a monotone analogue of Theorem 3.3. Establishing the continuity of $\Lambda_M^{-1}$ is an open problem.

Proof. For the equivalence between (1) and (2), we only have to use the distributional identity $\Lambda(D_{a(t)}(\mu^{*t}) * \delta_{b(t)}) = D_{a(t)}(\Lambda(\mu)^{\boxplus t}) \boxplus \delta_{b(t)}$ and the fact that the Bercovici-Pata bijection $\Lambda$ is a homeomorphism. The equivalence between (1) and (3) is proved similarly.
Let $\star$ denote any one of $*$ and $\boxplus$. In the present context, the $\star$-domain of attraction of a probability measure $\nu$ on $\mathbb{R}$ at large time (resp. small time) is the set of all $\star$-ID distributions $\mu$ for which (3.1) converges in law to $\nu$ as $t \to \infty$ (resp. $t \downarrow 0$) for some functions $a, b$.

Theorem 3.5. Let $\mu$ be a $*$-ID distribution with classical generating pair $(\xi, \tau)$. Define the truncated variance function $V$ associated with $(\xi, \tau)$. (1) $\mu \in \mathcal{D}^0_*(s_{2,1/2})$ if and only if the function $V$ is slowly varying as $x \downarrow 0$.

Proof. Most of the statements can be inferred from [MM08, Theorem 2.3] and its proof. We only mention that the finite measure $\tau_{\alpha,\rho}$ appearing in the Lévy-Khintchine representation of the stable distribution $s_{\alpha,\rho}$ can be written explicitly; the resulting expression is an increasing function of $\rho$ when $0 < \alpha < 1$ and a decreasing function of $\rho$ when $1 < \alpha < 2$.

Positive multiplicative free Lévy processes at large time
We consider convergence in law of the process
$$b(t)(X_t)^{a(t)}, \quad (3.8)$$
where $a, b : (0,\infty) \to (0,\infty)$ are functions and $\{X_t\}_{t\geq 0}$ is a positive MFLP such that $X_0$ is the identity operator. In terms of the marginal law $\mu := \mathcal{L}(X_1)$, the above problem takes the form
$$D_{b(t)}\bigl((\mu^{\boxtimes t})^{a(t)}\bigr). \quad (3.9)$$
In fact, the convergence of (3.8) follows from the work of Tucci [Tuc10] and Haagerup-Möller [HM13], and we can take the functions $a(t) = 1/t$ and $b(t) \equiv 1$. Tucci [Tuc10] initiated the study of the law of large numbers for multiplicative free convolution of compactly supported probability measures, and Haagerup and Möller [HM13] then proved the general case. The following statement is formulated in the setting of continuous time; the original proof applies without change.

Moreover, Φ(µ) is non-degenerate if and only if µ is non-degenerate.
Thus the problem of finding limit distributions of MFLPs at large time has been settled by Theorem 3.6; we only need to restrict the initial measure µ to ⊠-ID laws on (0, ∞).
The map $\mu \mapsto \Phi(\mu)$ is injective since the map $\mu \mapsto S_\mu$ is injective. This fact, together with arguments similar to the paragraph around (3.3) and Lemma 2.15, completely determines the $\boxtimes$-domain of attraction of $\Phi(\mu)$ for a non-degenerate $\boxtimes$-ID law $\mu$ on $(0,\infty)$. Thus the limit distributions are not universal, since each domain of attraction is small. By contrast, in classical probability, the limit distributions of positive MCLPs at large (and small) time are universal.
Proposition 3.7. Let $\{X_t\}_{t\geq 0}$ be an MCLP on $(0,\infty)$ such that $X_0 = 1$. If there exist functions $a, b : (0,\infty) \to (0,\infty)$ such that $b(t)X_t^{a(t)}$ converges in law to a non-constant positive random variable $Y$ as $t \to \infty$ or $t \downarrow 0$, then $Y$ is log stable, i.e. $\log Y$ is stable.
Proof. This follows from the additive case (Proposition 3.2) applied to the ACLP Z t = log(X t ).
Note that the Boolean analogue of (3.8) cannot be formulated, as Bercovici showed that a reasonable distribution $\mu^{\cup\!\times n}$ does not exist for sufficiently large $n$ if $\mu$ is compactly supported and non-degenerate [Ber06]. The monotone version of (3.8) can be formulated, but is not discussed in this paper.

Positive multiplicative free Lévy processes at small time
We consider the limit theorem of the type (3.8), but at small time. In terms of probability measures, the problem is the convergence of
$$D_{b(t)}\bigl((\mu^{\boxtimes t})^{a(t)}\bigr), \quad \text{as } t \downarrow 0, \quad (4.1)$$
where $a, b : (0,\infty) \to (0,\infty)$ are functions and $\mu$ is a $\boxtimes$-ID distribution on $(0,\infty)$. Recall that at large time we can always take $a(t) = 1/t$ and $b(t) \equiv 1$; this is no longer true at small time. In classical probability, the possible limit distributions are only log stable distributions and degenerate distributions; see Proposition 3.7. Our results in the free case are similar: we find log free stable distributions as the limit distributions of (4.1).

Log Cauchy distribution
In this section we present a limit theorem of type (4.1) in which the functions a(t) = 1/t and b(t) ≡ 1 can be taken. Let C_{β,γ} be a random variable following the Cauchy distribution c_{β,γ}. The law L(e^{C_{β,γ}}) is called the log Cauchy distribution; its probability density is the push-forward of the Cauchy density by the map x → e^x. The main theorem here is the convergence to the log Cauchy distribution.
Theorem 4.1. Let µ be a ⊠-ID probability measure on (0, ∞). Assume that the analytic function v_µ in (2.27) extends to a continuous function on (iC^+) ∪ C^− ∪ I, where I is an open interval containing 1, and assume that −β + iγ := v_µ(1) ∈ C^+. Then for any compact set K ⊂ (0, ∞), the measure (µ^{⊠t})^{1/t} is Lebesgue absolutely continuous on K for small t > 0, and the convergence as t ↓ 0 holds uniformly on K. In particular, (µ^{⊠t})^{1/t} converges weakly to L(e^{C_{β,γ}}).

We reduce the problem to the Boolean case; the proof is postponed to Section 6. The idea is the following. Suppose that we find a probability measure ν such that µ = (ν^{⊠2})^{∪×1/2}. Then, using a commutation relation in [AH13], we obtain (4.2). The measure (ν^{⊠(1+t)})^{∪×1/(1+t)} is approximately ν as t ↓ 0, and so the study of D_{b(t)}(µ^{⊠t})^{a(t)} reduces to the study of D_{b(t)}(ν^{∪×t})^{a(t)}, which is easier. The relation between µ and ν is that µ is the image of ν under the multiplicative Bercovici-Pata map, which is not a bijection. Therefore, for some µ ∈ ID(⊠) we cannot find such a pre-image ν. However, we do not need a "probability measure" ν; we only need its η-transform. This idea will be made precise in Section 6. The equation (4.2) will then be generalized to (6.13).

Dykema-Haagerup distribution
In this section we find a limit distribution of (4.1) which is not a log Cauchy distribution but is still a log free stable distribution with index 1. Dykema and Haagerup [DH04a] investigated the N × N strictly upper triangular random matrix T_N whose entries {t_{ij}}_{1≤i<j≤N} are independent complex Gaussians with mean 0 and variance 1/N. They showed that (T_N, E ⊗ (1/N)Tr_N) converges in *-moments to some (T, τ), where τ is a trace. The operator T is called the DT-operator. They conjectured that
τ(((T^*)^k T^k)^n) = n^{kn} / (1 + kn)!,  k, n ∈ N,
which was proved by Dykema and Haagerup for k = 1 and then proved by Śniady [Śni03] in full generality. Cheliotis showed that the empirical eigenvalue distribution of T_N^* T_N converges weakly almost surely. Generalizing the natural number k to positive real numbers, we introduce a probability measure DH_r (r ≥ 0) whose moments are given by
n^{rn} / Γ(2 + rn),  n = 0, 1, 2, 3, . . . ,  r ≥ 0,  (4.3)
with the convention 0^0 = 1. More generally, the Mellin transform is given by
∫ x^γ DH_r(dx) = γ^{rγ} / Γ(2 + rγ),  r, γ > 0.  (4.4)
The existence of such a probability measure is guaranteed by Theorem 4.9 in this section, because its proof implies the positive definiteness of the sequence (4.3). We call DH_r the Dykema-Haagerup distribution. It can be easily shown that
(DH_r)^a = D_{a^{ar}}(DH_{ar}),  a, r ≥ 0.  (4.5)
The probability distribution DH_1 is the spectral distribution of the DT-operator T^*T; it is Lebesgue absolutely continuous and is supported on [0, e] [DH04a, Theorem 8.9]. Hence DH_r = D_{r^{−r}}((DH_1)^r) is supported on [0, r^{−r}e^r] for r > 0. The R-transform of DH_1 is explicitly computed in [DH04a, Theorem 8.7], but is not used in this paper.
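The scaling identity (4.5) can be checked directly on Mellin transforms via (4.4): the power (DH_r)^a has Mellin transform (aγ)^{raγ}/Γ(2 + raγ), while D_{a^{ar}}(DH_{ar}) has Mellin transform (a^{ar})^γ γ^{arγ}/Γ(2 + arγ), and the two agree because (aγ)^{arγ} = a^{arγ} γ^{arγ}. A short numerical sanity check of this computation (our own check, not part of any proof):

```python
from math import gamma

def mellin_DH(r, g):
    # Mellin transform of DH_r at gamma = g, from (4.4): g^(r*g) / Gamma(2 + r*g)
    return g ** (r * g) / gamma(2 + r * g)

# Check (DH_r)^a = D_{a^{ar}}(DH_{ar}) at the level of Mellin transforms:
# LHS at gamma: Mellin of DH_r at a*gamma; RHS: (a^{ar})^gamma * Mellin of DH_{ar} at gamma.
for r in (0.5, 1.0, 2.0):
    for a in (0.5, 2.0, 3.0):
        for g in (0.5, 1.0, 2.5):
            lhs = mellin_DH(r, a * g)
            rhs = (a ** (a * r)) ** g * mellin_DH(a * r, g)
            assert abs(lhs - rhs) < 1e-9 * max(abs(lhs), abs(rhs))
```

Since a measure on [0, ∞) is determined by its Mellin transform on a strip, this equality of transforms is exactly the content of (4.5).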
The Dykema-Haagerup distribution is in fact a log free stable distribution, a fact which seems to be unknown in the literature.
An interesting observation here is that the free 1-stable law has a random matrix model. We know that the semicircle law f_{2,1/2} has the random matrix model T_N + T_N^*. Considering these facts, the following question comes up.
Problem 4.6. Find a random matrix model whose eigenvalue distribution converges to another free stable distribution.
We use the moment method to prove the weak convergence. Haagerup and Möller [HM13, Lemma 10] found a connection between the Mellin transform and the S-transform, which is useful for our problem.
This formula is valid for all ξ ∈ (0, ∞) by analytic continuation. One may then put ξ = 1/t and use the asymptotic form of the gamma function [AS70, 6.1.47] to obtain the convergence of moments (when γ ∈ N).
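The gamma asymptotics used here is the classical ratio formula Γ(z + a)/Γ(z) ∼ z^a as z → ∞ [AS70, 6.1.47]. A quick numerical check (our illustration; computed in log space to avoid overflow of Γ for large arguments):

```python
from math import lgamma, log, exp

# [AS70, 6.1.47]: Gamma(z + a) / Gamma(z) ~ z^a as z -> infinity.
def ratio(z, a):
    # Gamma(z + a) / (Gamma(z) * z^a), via log-gamma to avoid overflow
    return exp(lgamma(z + a) - lgamma(z) - a * log(z))

for a in (0.5, 2.5, 4.0):
    errs = [abs(ratio(z, a) - 1.0) for z in (10.0, 100.0, 1000.0)]
    assert errs[0] > errs[1] > errs[2]  # the error shrinks as z grows
    assert errs[2] < 1e-2               # already small at z = 1000
```

The error decays like a(a − 1)/(2z), consistent with the first correction term of the expansion.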

Log free stable distributions with index > 1
We find more log free stable distributions in the limit theorem. Suppose that α ∈ (1, 2]. As an initial probability measure we take ν_α, which is ⊠-ID on (0, ∞) by Theorem 2.3. The measure ν_2 is compactly supported and has the moment sequence (n^n/n!)_{n=0}^∞ (with the convention 0^0 = 1 as before). This measure already appeared in Młotkowski [Mło10] and in a certain limit theorem proved by Sakuma and Yoshida [SY13].
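Since (n^n/n!) is claimed to be a moment sequence, its Hankel matrices must be positive definite. Here is a quick exact-arithmetic check of this necessary condition (our own sanity check, not from the paper):

```python
from fractions import Fraction
from math import factorial

# moments m_n = n^n / n! of nu_2, with the convention 0^0 = 1
# (Python evaluates 0 ** 0 as 1, matching the convention)
m = [Fraction(n ** n, factorial(n)) for n in range(8)]

def det(A):
    # exact determinant by Gaussian elimination over Fractions;
    # no pivoting, which is adequate for positive definite matrices
    A = [row[:] for row in A]
    n = len(A)
    d = Fraction(1)
    for i in range(n):
        if A[i][i] == 0:
            return Fraction(0)
        d *= A[i][i]
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= f * A[i][k]
    return d

# leading principal minors of the Hankel matrix (m_{i+j}) must be positive
for size in range(1, 5):
    H = [[m[i + j] for j in range(size)] for i in range(size)]
    assert det(H) > 0
```

For instance the 3 × 3 minor equals 5/12 > 0; positivity of all Hankel determinants is consistent with ν_2 being a (non-atomic) compactly supported probability measure.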
We need the Laplace transform of the free stable distribution, which was computed in [HK14, Theorem 3].
Lemma 4.10 records this Laplace transform explicitly for α ∈ (1, 2]. We are now ready to prove the following result.
The above method can be generalized to a larger class of initial distributions µ.

Further examples
We find more examples of convergence to log free stable distributions by taking the multiplicative free convolution with Boolean stable distributions. We exploit several identities obtained in [AH16]. We start from an obvious property which shows that the dilation and power of limit distributions are also limit distributions.
We can then find more examples of probability measures which yield log free stable distributions.
Corollary 4.16. For 0 ≤ α ≤ β, we have the corresponding convergence. Applying Proposition 4.4, the limit can be expressed in terms of C_{β,γ} and F_1, where the random variables C_{β,γ} and F_1 are assumed to be independent. Moreover, assuming free independence of C_{β,γ} and F_1 gives the same distribution, thanks to the speciality of the Cauchy law (2.21). Since L(C_{β,γ} + aF_1) covers all free 1-stable laws, combining Theorem 4.11, Corollary 4.16 and Proposition 4.13 we have obtained a certain class of possible limit distributions.
Note that the above probability measures are all log free stable with index ≥ 1.
Problem 4.18. Determine all the possible limit distributions of (4.1). In particular, determine whether the following distributions can appear in the limit theorem:
• log free stable laws with index > 1 and with an arbitrary asymmetry parameter ρ;
• log free stable laws with index < 1;
• probability measures which are not log free stable laws.
Problem 4.19 (Domain of attraction). Characterize the initial probability measures µ such that (4.1) converges to a given non-degenerate distribution (e.g. a probability measure in Theorem 4.17) for some functions a, b : (0, ∞) → (0, ∞). Does a transfer principle (as in Theorem 3.3) hold between the free and classical limit theorems?
5 Positive multiplicative Boolean Lévy processes at small time

As mentioned in Section 2.9, the Boolean power µ^{∪×t} is well defined for 0 ≤ t ≤ 1 and for any probability measure µ on [0, ∞). Therefore, one may discuss the convergence of D_{b(t)}(µ^{∪×t})^{a(t)}, as t ↓ 0, (5.1) where a, b : (0, 1] → (0, ∞) are functions. We can give a more complete solution to this problem than in the free case, since the analysis is easier, starting from the defining relation (2.33) for the Boolean convolution power. We consider the following assumption on µ:
(AS) There exists an open interval I ⊂ (0, ∞) such that 1 ∈ I, the boundary limit F_µ(x + i0) exists for each x ∈ I, and the map F_µ : I → C^+ ∪ R is continuous at 1.
A sufficient condition for (AS) is the existence of a Hölder continuous density around x = 1; see Example 5.3. The assumption (AS), equation (5.3) and the Stieltjes inversion imply that µ^{∪×t} is Lebesgue absolutely continuous on I, with an explicit density, unless the denominator is zero. Moreover, for 0 < t < 1 and s > 0, the probability measure (µ^{∪×t})^{1/s} is Lebesgue absolutely continuous on I^{1/s} := {x ∈ (0, ∞) : x^s ∈ I} with the density (5.5), unless the denominator is zero.

Log Cauchy distribution
We first consider the log Cauchy distributions.
Remark 5.2. It is notable that the parameter γ is less than or equal to π, while it was an arbitrary positive number in the free case in Theorem 4.1.
Proof. Take any compact set K ⊂ (0, ∞). Then x^t ∈ I for sufficiently small t ∈ (0, 1) and any x ∈ K, and hence the density formula (5.5) is valid on K unless the denominator is zero. Note that x^t = 1 + t log x + o(t) as t ↓ 0 by calculus, and F_µ(x^t + i0) = w + o(1) as t ↓ 0 uniformly in x ∈ K by (AS). The resulting convergence of densities is uniform on K. The weak convergence then follows from Lemma 2.16 with B = (0, ∞).
Example 5.3. Suppose that µ is Lebesgue absolutely continuous on a finite open interval I containing the point 1, and that dµ/dx is strictly positive and locally ρ-Hölder continuous on I for some 0 < ρ < 1. Then assumption (AS) is satisfied and F_µ(1 + i0) ∈ C^+. Therefore γ ∈ (0, π) and the convergence of Theorem 5.1 holds. The proof is as follows. In the decomposition of the Cauchy transform G_µ = G_1 + G_2, the second part G_2 extends continuously to C^+ ∪ I, taking real values on I. By [Tit26, Lemmas α, β, δ] (with some modification of the proofs, because we only assume local, not global, Hölder continuity), the Cauchy transform G_1 extends to a continuous function on C^+ ∪ I, and G_1(x) is given by a principal value integral for x ∈ I.

Log Boolean stable distributions with index < 1
The distribution L(e^{B_{α,ρ,r}}) is called the log Boolean stable law, where B_{α,ρ,r} is a random variable following the law b_{α,ρ,r}. The convergence to log Boolean stable distributions is shown below.
(2) While the Cauchy distribution is a Boolean 1-stable law, we cannot unify Theorems 5.1 and 5.5, because the estimate (5.10) below fails to hold for α = 1.
Proof. Define θ := αρπ. Take any compact set K_1 ⊂ (1, ∞). Then x^t ∈ I for sufficiently small t < 1 and any x ∈ K_1, and hence the density formula (5.5) is valid on K_1 unless the denominator is zero. Note that the relevant asymptotics hold as t ↓ 0 uniformly in x ∈ K_1, and the expansion follows by calculus. The convergence is uniform on K_1. Next take a compact set K_2 ⊂ (0, 1). Note that for x < 1, the estimate (5.10) holds true if we replace e^{iθ} by e^{i(θ+π(1−α))} and log x by − log x, uniformly on K_2. The limiting function is the probability density function of L(e^{B_{α,ρ}}). The weak convergence follows from Lemma 2.16 with B = (0, ∞) \ {1}.
Example 5.7. Suppose that α ∈ (0, 1), c_1, c_2 ≥ 0, c_1 + c_2 > 0, δ > 0 and µ is a Borel probability measure such that µ|_{(1−δ,1+δ)} has a local density function p(x) of the given form, where f_k is analytic in a neighborhood of 1 and f_k(0) = 0 for k = 1, 2 (the assumption of analyticity of f_k can be weakened slightly). From the proof of [Has14, Theorem 5.1, (5.6)], an asymptotic estimate holds for some β_1 ≥ 0, uniformly in z ∈ C^+. By symmetry, we obtain the corresponding estimate for some β_2 ≥ 0. Combining these two asymptotic behaviors shows that the assumption (5.7) of Theorem 5.5 is satisfied. Since c_1 + c_2 > 0, the Stieltjes inversion implies that β_1 + β_2 > 0 as well.

So far we have obtained limit theorems converging to log Boolean stable laws (including the log Cauchy as index 1), and described their domains of attraction. An unsolved problem is:

Problem 5.8. Are there non-degenerate limit distributions of (5.1) other than log Boolean stable laws with index ≤ 1?

6 Proof of Theorem 4.1

The convergence in distribution of positive MFLPs to the log Cauchy distribution can be reduced to the easier problem for MBLPs, which was discussed in Section 5.1. However, we need a framework of free and Boolean convolutions that goes beyond convolutions of probability measures. This framework is developed below; in particular, we generalize concepts and results introduced in [AH13, BB05].
6.1 Convolutions of maps on the negative half-line

Proof. The Pick-Nevanlinna representation (2.11) of F_µ shows that the map in question is analytic from C \ [0, ∞) into C, and maps C^− into C^+ ∪ (0, ∞). Its principal logarithm can therefore be defined as an analytic map from C^− to C^+ ∪ R, and hence has the Pick-Nevanlinna representation for some a ≥ 0, b ∈ R and a non-negative finite measure σ on [0, ∞). By calculus we see that u′(x) ≤ 0 for x < 0.
Definition 6.3. Given η ∈ E and s ≥ 0, we define the multiplicative Boolean convolution power η^{∪×s} ∈ E by η^{∪×s}(x) := x (η(x)/x)^s. We then generalize the multiplicative free convolution to the class E. For t ≥ 1, define a map Φ_t : (−∞, 0) → (−∞, 0) by Φ_t(x) := x (x/η(x))^{t−1}. Then Φ_t is a homeomorphism of (−∞, 0). Denote by ω_t its inverse map. We define a map η^{⊠t} ∈ E by η^{⊠t}(x) := η(ω_t(x)). (6.3) It is not obvious that η^{⊠t} belongs to E, but it does, since the associated function is continuous and non-increasing.
(1) The map η ⊠t ∈ E defined by (6.3) is called the multiplicative free convolution power of η.
(2) The map ω t := Φ −1 t is called the subordination function of η ⊠t with respect to η.
A formula for ω_t is given next; the proof of [BB05, Theorem 2.6(3)] applies without change.
From the above expression, the subordination function ω_t, t ≥ 1, belongs to E. Note that Φ_t, t > 1, does not belong to E by definition unless u is constant.
(2) Let Φ_{s,t}(x) := x (x/η^{⊠s}(x))^{t−1} (6.4) and let ω_{s,t} be its inverse. Then (η^{⊠s})^{⊠t} = η^{⊠s} ∘ ω_{s,t} = η ∘ ω_s ∘ ω_{s,t}. Thus the claim is equivalent to ω_{st} = ω_s ∘ ω_{s,t}, which is in turn equivalent to Φ_{s,t} = Φ_{st} ∘ ω_s. This identity follows from a direct calculation in which Proposition 6.5 is used in the third equality.
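A numerical sketch of this semigroup property (η^{⊠s})^{⊠t} = η^{⊠st}, for the concrete choice η(x) = x/(2 − x), the η-transform of (δ_0 + δ_1)/2 restricted to the negative half-line; we invert Φ_t by bisection. This is our own illustration under that choice of η, not part of the proof:

```python
def eta(x):
    # eta-transform of (delta_0 + delta_1)/2 on (-inf, 0): eta(x) = x / (2 - x)
    return x / (2.0 - x)

def invert(phi, y, lo=-1e12, hi=0.0, iters=200):
    # phi is increasing on (-inf, 0) with phi(x) -> 0 as x -> 0-;
    # bisection finds x < 0 with phi(x) = y < 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if phi(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def boxtimes(e, t):
    # free convolution power of an eta-like map e, via Phi_t(x) = x * (x / e(x))**(t - 1)
    # and eta^{boxtimes t}(x) = e(omega_t(x)) with omega_t = Phi_t^{-1}
    phi = lambda x: x * (x / e(x)) ** (t - 1.0)
    return lambda x: e(invert(phi, x))

s, t = 1.5, 2.0
for x in (-0.5, -1.0, -3.0):
    lhs = boxtimes(boxtimes(eta, s), t)(x)  # (eta^{boxtimes s})^{boxtimes t}
    rhs = boxtimes(eta, s * t)(x)           # eta^{boxtimes st}
    assert abs(lhs - rhs) < 1e-7
```

Note that for this η, Φ_t(x) = x(2 − x)^{t−1} is strictly increasing on (−∞, 0), so the bisection bracket (−10^{12}, 0) is valid for the sample points used.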
The following result extends [AH13, Proposition 4.13] in a slightly different formulation. The same proof works, but we give a simpler one.
Proposition 6.9. The embedding of a given η ∈ E into a ⊠-convolution semigroup is unique, if it exists.
Thanks to the uniqueness, we may write η_t = η^{⊠t} for t ≥ 0 without ambiguity whenever η embeds into a ⊠-convolution semigroup {η_t}_{t≥0} ⊂ E. This map is called the multiplicative Boolean-to-free Bercovici-Pata map; it generalizes the injective (but not surjective) map defined in [AH13] from the class of probability measures on [0, ∞) to the class of ⊠-ID measures.
From now on we assume the analyticity of η ∈ E for a later application to the limit theorem. Let A(S) be the set of analytic functions in an open set S ⊂ C.

Proof of Theorem 4.1
In this section we apply the general framework of convolutions to the limit theorem. Suppose that µ is ⊠-ID on (0, ∞) and that its Σ-transform has the expression (2.27); define η accordingly. The correspondence µ → η is a generalization of the multiplicative Bercovici-Pata map from Boolean to free [AH13], which is not surjective. We can show that η ∈ E, but η may not be the η-transform of a probability measure. Moreover, the map η is in general not even injective on (−∞, 0), so it does not seem easy to define a Σ- or S-transform of η. This is why the previous section investigated convolution operations for maps on (−∞, 0) without using Σ-transforms.
The following result shows that the map M is a generalization of the multiplicative Bercovici-Pata map from Boolean to free (denoted by M 1 in [AH13]).
This convolution semigroup may be written as (η_µ)^{⊠t}, and it coincides with (η^{⊠(1+t)})^{∪× t/(1+t)} by Proposition 6.11. Now we have the identity (6.13). We have shown that the function (η^{⊠(1+t)})^{∪× 1/(1+t)} is close to η up to O(t) when t is small. This estimate and (6.13) enable us to reduce the convergence problem for a MFLP to the Boolean case.
We do not know if this method somehow extends to other log free stable distributions. The main difficulty is that most of the log free stable distributions do not have explicit densities; they only have densities described by some implicit functions. In the above analysis, it is not obvious how these implicit densities appear in the limit.

Unitary multiplicative Lévy processes at small time
In this section we will find limit distributions for unitary MFLPs and MCLPs at small time. We do not discuss unitary MBLPs because of a technical difficulty. We want to study the convergence in law of the unitary process b(t)(U_t)^{a(t)}, as t ↓ 0, (7.1) where {U_t}_{t≥0} is a unitary MFLP and a : (0, ∞) → N and b : (0, ∞) → T are functions. Note that the function a is assumed to be N-valued, or at least Z-valued, because non-integral powers z^p cannot be continuously defined on T. In terms of probability measures, our aim is to obtain weak limits of R_{b(t)}(µ^{⊠t})^{a(t)}, as t ↓ 0, (7.2) where {µ^{⊠t}}_{t≥0} is a weakly continuous ⊠-convolution semigroup on T such that µ_0 = δ_1.
Remark 7.1. We cannot formulate a limit theorem for unitary MFLPs at large time: to do so we would need a(t) → 0 as t → ∞, which is impossible for N-valued functions.
Similar statements hold for the classical case.
Proof. Let (ξ, τ) be the pair defined by (2.41) and (2.42), and let µ̂ := µ^{⊞}_{ξ,τ}. Then µ̂ is a preimage of µ under the map W|_L from Proposition 2.10. The measure τ satisfies the assumption of Theorem 3.5, which implies that there exist functions A(t), B(t) > 0 such that A(t) → ∞ and D_{A(t)}(µ̂^{⊞t}) ⊞ δ_{B(t)} converges weakly to f_{α,ρ}; we may a priori assume that A(t) is N-valued. Then, by Proposition 2.7, we have W(D_{A(t)}(µ̂^{⊞t}) ⊞ δ_{B(t)}) = R_{e^{−iB(t)}}(W(µ̂)^{⊠t})^{A(t)} = R_{e^{−iB(t)}}(µ^{⊠t})^{A(t)}, (7.5) which converges weakly to W(f_{α,ρ}). This shows that we can take a(t) = A(t) and b(t) = e^{−iB(t)} so that (7.2) converges to W(f_{α,ρ}). The proof in the classical case is similar; one only needs to use Proposition 2.6 instead of Proposition 2.10 and replace free objects by the corresponding classical ones.
Remark 7.4. Note that the measure D_{A(t)}(µ̂^{⊞t}) above may not belong to L, since the class L is not closed under dilation. For this reason, the converse of Theorem 7.3 cannot be proved. Also, we cannot prove a similar statement in the Boolean case: since D_{A(t)}((µ^{⊎}_{ξ,τ})^{⊎t}) may not belong to L, and ⊎ δ_{B(t)} may not be the usual shift (cf. (2.45)), we do not know how to compute W(D_{A(t)}((µ^{⊎}_{ξ,τ})^{⊎t}) ⊎ δ_{B(t)}).

Corollary 7.5. The set of possible limit distributions of (7.2) contains the set {W(µ) : µ is free stable}. A similar statement holds in the classical case.
Therefore, taking ã(t) = [1/t] and b̃(t) = t^{1−ρ} yields the convergence φ_{µ̃_t}(z) → φ_{f_{1,ρ}}(z) in (7.10). This shows the desired weak convergence. In the unitary case, the Haar measure can appear as a limit distribution: for example, if the measure µ itself is the Haar measure, then the measure (7.2) is the Haar measure at any time.