A diffusive matrix model for invariant $\beta$-ensembles

We define a new diffusive matrix model converging towards the $\beta$-Dyson Brownian motion for all $\beta\in [0,2]$, thereby providing an explicit construction of $\beta$-ensembles of random matrices that is invariant under the orthogonal/unitary group. We also describe the eigenvector dynamics of the limiting matrix process; we show that when $\beta<1$ and two eigenvalues collide, the eigenvectors of the two colliding eigenvalues fluctuate very fast and become uniformly distributed on the orthocomplement of the eigenvectors of the remaining eigenvalues.


Introduction
It is well known that the law of the eigenvalues of the classical Gaussian matrix ensembles is given by the Gibbs measure of a Coulomb gas with inverse temperature β = 1 (resp. 2, resp. 4) in the symmetric (resp. Hermitian, resp. symplectic) case. Such measures are associated with symmetric Langevin dynamics, the so-called Dyson Brownian motion, which describes the random motion of the eigenvalues of a symmetric (resp. Hermitian, resp. symplectic) Brownian motion. These eigenvalues follow the stochastic differential system

dλ_i(t) = db^i_t + (β/2) Σ_{j≠i} dt/(λ_i(t) − λ_j(t)), 1 ≤ i ≤ d,

with i.i.d. Brownian motions (b^i). These laws and dynamics have been intensively studied, and both the local and global behaviour of the eigenvalues has been analyzed precisely, starting from the reference book of Mehta [9]. More recently, the generalization of these distributions and dynamics to all β ≥ 0, the so-called β-ensembles, was considered. As for β = 1, 2, 4, the Langevin dynamics converge to their unique invariant Gibbs measure P_β as time goes to infinity. Indeed, the stochastic differential system under study is a set of Brownian motions interacting through a strictly convex potential; one can then show by a standard coupling argument that two solutions driven by the same Brownian motion but with different initial data quickly become very close to each other. This entails the uniqueness of the invariant measure as well as the convergence to this Gibbs measure. It turns out that the cases β ∈ [0, 1) and β ∈ [1, ∞) are quite different: in the first case the eigenvalue paths can cross, whereas in the second the repulsion is strong enough that, with probability one, the eigenvalues do not collide in finite time. However, the diffusion was shown to be well defined, even for β < 1, by Cépa and Lépingle [4], at least once reordered.
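As an illustration, the interacting system above can be simulated with a naive Euler scheme. This is only a minimal sketch, assuming the normalization dλ_i = db^i_t + (β/2) Σ_{j≠i} dt/(λ_i − λ_j); for β < 1 such a scheme is delicate near collisions, so the particles are simply reordered after each step:

```python
import numpy as np

def dyson_step(lam, beta, dt, rng):
    """One explicit Euler step of the beta-Dyson Brownian motion
    d lam_i = db_i + (beta/2) sum_{j != i} dt / (lam_i - lam_j)."""
    d = len(lam)
    diff = lam[:, None] - lam[None, :]           # matrix of lam_i - lam_j
    np.fill_diagonal(diff, np.inf)               # drop the j = i term
    drift = (beta / 2.0) * (1.0 / diff).sum(axis=1)
    # Euler-Maruyama step, then reorder (as in Cepa-Lepingle)
    return np.sort(lam + drift * dt + np.sqrt(dt) * rng.standard_normal(d))

rng = np.random.default_rng(0)
lam = np.linspace(-1.0, 1.0, 5)                  # distinct starting points
for _ in range(2000):                            # horizon T = 0.2
    lam = dyson_step(lam, beta=2.0, dt=1e-4, rng=rng)
print(lam)                                       # ordered, mutually repelling particles
```

For β ≥ 1 the drift keeps the particles apart; this is an illustration of the limiting dynamics only, not the matrix construction studied in this paper.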
The goal of this article is to provide a natural interpretation of β-ensembles in terms of random matrices for β ∈ [0, 2]. Dumitriu and Edelman [6] already proposed a tridiagonal matrix model whose eigenvalues are distributed according to the β-ensembles. However, this tridiagonal model lacks the invariance property of the classical ensembles. Our construction has this property and is moreover constructive, as it is based on a dynamical scheme. It was proposed by J.-P. Bouchaud, and this article provides rigorous proofs of the results stated in [1]. The idea is to interpolate between the Dyson Brownian motion and the standard Brownian motion by tossing a coin at every infinitesimal time step to decide whether our matrix will evolve according to a Hermitian Brownian motion (with probability p) or will keep the same eigenvectors while its eigenvalues diffuse according to independent Brownian motions. When the size of the infinitesimal time steps goes to zero, we prove that the dynamics of the eigenvalues of this matrix-valued process converges towards the β-Dyson Brownian motion with β = 2p. The same construction with a symmetric Brownian motion leads to the same limit with β = p. This result is stated precisely in Theorem 2.2. We shall not consider the extension to the symplectic Brownian motion in this paper, but it is clear that the same result holds with β = 4p. Our construction can be extended to other matrix models, such as Wishart matrices and the Circular and Ginibre Gaussian ensembles, and leads to similar results.
We thus deduce from our construction that β-ensembles can be interpreted as an interpolation between free convolution (obtained by adding a Hermitian Brownian motion) and standard convolution (arising when the eigenvalues evolve following standard Brownian motions). It is natural to wonder whether a notion of β-convolution could be more generally defined.
Moreover, we shall study the eigenvectors of our matrix-valued process. In the case where β ≥ 1, their dynamics is well known and is similar to the dynamics of the eigenvectors of the Hermitian or symplectic Brownian motions, see e.g. [2]. When β < 1, the question is to determine what happens at a collision. It turns out that as we approach a collision, the eigenvectors of the non-colliding eigenvalues converge to some orthogonal family B of d − 2 vectors, whereas the eigenvectors of the colliding eigenvalues oscillate very fast and take the uniform distribution on the ortho-complement of B, see Proposition 2.6.

Statement of the results
Let H^β_d be the space of d × d symmetric (resp. Hermitian) matrices if β = 1 (resp. β = 2), and let O^β_d be the space of d × d orthogonal (resp. unitary) matrices if β = 1 (resp. β = 2).
We consider the matrix-valued process defined as follows. Let γ be a positive real number and M^β_0 ∈ H^β_d with distinct eigenvalues λ_1 < λ_2 < · · · < λ_d. For each n ∈ N, we let (ε^n_k)_{k∈N} be a sequence of i.i.d. {0, 1}-valued Bernoulli variables with mean p, in the sense that P[ε^n_k = 1] = p = 1 − P[ε^n_k = 0]. Furthermore, for t ≥ 0, we set ε^n_t := ε^n_{[nt]}. In the following, the process (H^β(t))_{t≥0} will denote a symmetric Brownian motion, i.e. a process with values in the set of d × d symmetric matrices (resp. Hermitian if β = 2) whose entries H^β_{ij}(t), t ≥ 0, i ≤ j, are constructed via independent real-valued Brownian motions as in (2.1).

Definition 2.1. For each n ∈ N, we define a diffusive matrix process (M^β_n(t))_{t≥0} such that M^β_n(0) := M^β_0 and, for t ≥ 0, M^β_n is driven by the d × d symmetric (resp. Hermitian) Brownian motion (H^β_t)_{t≥0} defined in (2.1), where χ^n_i([nt]/n) denotes the spectral projector associated to the i-th eigenvalue λ_i([nt]/n) of the matrix M^β_n([nt]/n), the eigenvalues being numbered as λ_1([nt]/n) < λ_2([nt]/n) < · · · < λ_d([nt]/n) (we shall see that this numbering is possible, as the eigenvalues are almost surely distinct at the times {k/n, k ∈ N}).
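As an illustration, here is a minimal discrete-time sketch of this definition in the symmetric case β = 1. Two simplifying assumptions are made for readability: the Ornstein–Uhlenbeck confinement γ is dropped, and the symmetric Brownian increment is normalized GOE-style; neither affects the coin-flip mechanism itself:

```python
import numpy as np

def scheme_step(M, p, dt, rng):
    """One step of the coin-flip scheme (symmetric case beta = 1; the
    Ornstein-Uhlenbeck factor gamma of the paper is dropped here).
    With probability p the matrix receives a symmetric Brownian
    increment; with probability 1 - p the spectral projectors are
    frozen and each eigenvalue gets its own Brownian increment."""
    d = M.shape[0]
    if rng.random() < p:                         # eps = 1: Dyson move
        G = rng.standard_normal((d, d))
        return M + np.sqrt(dt) * (G + G.T) / np.sqrt(2.0)
    lam, O = np.linalg.eigh(M)                   # eps = 0: eigenvalues diffuse,
    lam = lam + np.sqrt(dt) * rng.standard_normal(d)   # eigenvectors stay fixed
    return (O * lam) @ O.T                       # rebuild O diag(lam) O^T

rng = np.random.default_rng(1)
n, T, p = 200, 1.0, 0.5                          # target: beta = p = 1/2
M = np.diag(np.linspace(-1.0, 1.0, 4))
for _ in range(int(n * T)):
    M = scheme_step(M, p, 1.0 / n, rng)
print(np.linalg.eigvalsh(M))                     # eigenvalues after time T
```

With mesh 1/n and coin parameter p, Theorem 2.2 states that the eigenvalues of this process converge, as n → ∞, to the β-Dyson Brownian motion with β = p in this symmetric case.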
For all t, the matrix M^β_n(t) lies in the space H^β_d, so it can be decomposed as M^β_n(t) = O^β_n(t) ∆^β_n(t) O^β_n(t)^*, where ∆^β_n(t) is the diagonal matrix whose diagonal is the vector of the ordered eigenvalues of M^β_n(t) and where O^β_n(t) is in the space O^β_d for all t ∈ R_+. We also introduce a matrix O^β(0), the initial orthogonal (resp. unitary if β = 2) matrix such that M^β_n(0) = O^β(0) ∆^β_n(0) O^β(0)^*. The evolution of the eigenvalues of M^β_n(t) during the time interval [k/n; (k + 1)/n] is given by independent Brownian motions if ε^n_k = 0 and by Dyson Brownian motions if ε^n_k = 1. The eigenvectors of M^β_n(t) do not evolve on intervals [k/n; (k + 1)/n] such that ε^n_k = 0, and evolve with the classical diffusion of the eigenvectors of Dyson Brownian motions if ε^n_k = 1 (see [2] for a review of Dyson Brownian motion). Our main theorems describe the asymptotic properties, as n goes to infinity, of the ordered eigenvalues of the matrix M^β_n(t), denoted in the following λ^n_1(t) ≤ · · · ≤ λ^n_d(t), and also those of the matrix O^β_n(t) defined above. Let (b^i_t)_{t≥0}, i ∈ {1, . . . , d}, be a family of independent Brownian motions on R. Recall that Cépa and Lépingle showed in [4] the existence and uniqueness of the strong solution to the stochastic differential system

dλ_i(t) = db^i_t − γ λ_i(t) dt + (pβ/2) Σ_{j≠i} dt/(λ_i(t) − λ_j(t)), 1 ≤ i ≤ d, (2.4)

starting from λ(0) = (λ_1 ≤ λ_2 ≤ · · · ≤ λ_d) and such that λ_1(t) ≤ · · · ≤ λ_d(t) for all t ≥ 0. For the scaling limit of the ordered eigenvalues, we shall prove the following.

Theorem 2.2. Let M^β_0 be a symmetric (resp. Hermitian) matrix if β = 1 (resp. β = 2) with distinct eigenvalues λ_1 < λ_2 < · · · < λ_d, and let (M^β_n(t))_{t≥0} be the matrix process defined in Definition 2.1. Let λ^n_1(t) ≤ . . . ≤ λ^n_d(t) be the ordered eigenvalues of the matrix M^β_n(t). Let also (λ_1(t), . . . , λ_d(t))_{t≥0} be the unique strong solution of (2.4) with initial conditions at t = 0 given by (λ_1, λ_2, . . . , λ_d). Then, for all T > 0, the process (λ^n_1(t), . . . , λ^n_d(t))_{0≤t≤T} converges in law towards (λ_1(t), . . . , λ_d(t))_{0≤t≤T} as n → ∞.
In the case where βp ≥ 1, the eigenvalues almost surely never collide, and we will see (Section 6.1) that in this case it is easy to construct a coupling of λ and λ^n so that λ^n almost surely converges towards λ.
We shall also describe the scaling limit of the matrix O^β_n(t) (the columns of O^β_n(t) are the eigenvectors of M^β_n(t)) when n tends to infinity, at least until the first collision time of the eigenvalues, i.e. until the time T_1 defined as T_1 := inf{t ≥ 0 : ∃ i ∈ {2, . . . , d}, λ_i(t) = λ_{i−1}(t)}.
Let w^β_{ij}(t), 1 ≤ i < j ≤ d, be a family of real (if β = 1) or complex (if β = 2) standard Brownian motions, independent of the family of Brownian motions (b^i_t)_{t≥0}, i ∈ {1, . . . , d}. For i < j, set in addition w^β_{ji}(t) := the conjugate of w^β_{ij}(t), and define the skew-Hermitian matrix R^β (i.e. such that R^β = −(R^β)^*) whose entries, for i ≠ j, are stochastic integrals of dw^β_{ij} weighted by the inverse gaps 1/(λ_i − λ_j). With (λ_1(t), . . . , λ_d(t)) being the solution of (2.4) up to its first collision time, there exists a unique strong solution (O^β(t))_{0≤t≤T_1} to the stochastic differential equation (2.6). This solution exists and is unique since the equation is linear in O^β and R^β is a well-defined martingale at least until time T_1. It can be shown as in [2, Lemma 4.3.4] that O^β(t) is indeed an orthogonal (resp. unitary if β = 2) matrix for all t ∈ [0; T_1]. We mention at this point that the matrix O^β_n(t) is not uniquely defined, even when we impose that the diagonal matrix has a non-decreasing diagonal λ^n_1(t) ≤ . . . ≤ λ^n_d(t). Indeed, the matrix O^β_n(t) can be replaced, for example, by −O^β_n(t) (other choices are possible). The following proposition overcomes this difficulty.

Proposition 2.3.
There exists a continuous process (O^β_n(t))_{0≤t≤T_n(1)} in O^β_d with a uniquely defined law and such that for each t ∈ [0; T_n(1)] we have M^β_n(t) = O^β_n(t) ∆^β_n(t) O^β_n(t)^*, where ∆^β_n(t) is the diagonal matrix of the ordered (as in (2.3)) eigenvalues of M^β_n(t).
Proposition 2.3 is proved in Section 7. We are now ready to state our main result for the convergence in law of the matrix O^β_n(t). Theorem 2.4 gives a convergence result, as n goes to infinity, for the eigenvectors of the matrix process (M^β_n(t)), but only until the first collision time T_1. If pβ ≥ 1, the result is complete, as one can show (see [2] and Section 6.1) that the process (λ_1(t), . . . , λ_d(t)) is non-colliding (i.e. almost surely T_1 = ∞). However, if pβ < 1, it would be interesting to have convergence on all compact sets [0; T ], even after collisions have occurred. Our next results describe the behavior of the columns of the matrix O^β(t), denoted (φ_1(t), . . . , φ_d(t)), as t → T_1 with t < T_1.
We first need to describe the behavior of the eigenvalues (λ_1(t), . . . , λ_d(t)) in the left vicinity of T_1.

Proposition 2.5. If pβ < 1, then almost surely T_1 < ∞ and there exists a unique index i^* ∈ {2, . . . , d} such that λ_{i^*}(T_1) = λ_{i^*−1}(T_1). While, for all t < T_1, we almost surely have ∫_0^t ds/(λ_{i^*}(s) − λ_{i^*−1}(s))² < ∞, the following divergence occurs almost surely:

∫_0^{T_1} ds/(λ_{i^*}(s) − λ_{i^*−1}(s))² = ∞. (2.7)

The first part of Proposition 2.5 is proved in Subsections 3.1 and 3.2; the last statement is proved in Section 7. The equality (2.7) implies the presence of diverging integrals in the SDE (2.6). Because of this singularity, we will show the following.

Proposition 2.6. Conditionally on (λ_1(t), . . . , λ_d(t)), 0 ≤ t ≤ T_1, we have:
1. For all j ≠ i^*, i^* − 1, the eigenvector φ_j(t) for the eigenvalue λ_j(t) converges almost surely to a vector denoted φ̂_j as t grows to T_1. The family {φ̂_j, j ≠ i^*, i^* − 1} is an orthonormal family of R^d (resp. C^d) if β = 1 (resp. β = 2). We denote by V the subspace it spans and by W its two-dimensional orthogonal complement in R^d (resp. C^d).
2. The family {φ_{i^*}(t), φ_{i^*−1}(t)} converges weakly, as t grows to T_1, to the uniform law on the orthonormal bases of W.
The paper is organized as follows. In Section 3, we review and establish some new properties of the limiting eigenvalue process (λ_1(t), . . . , λ_d(t)) defined by (2.4) that will be useful later in the proofs of Theorems 2.2 and 2.4. We also introduce, in Subsection 3.4, a process with fewer collisions that approximates the limiting eigenvalue process. In fact this gives a new construction of the limiting eigenvalue process already constructed in [4], perhaps simpler and more intuitive, using only standard Itô calculus. We give some useful estimates on the eigenvalue processes and matrix entries of M^β_n in Section 4. In Section 5, we prove the almost sure convergence of the process (λ^n_1, . . . , λ^n_d) to the limiting eigenvalue process (λ_1, . . . , λ_d) until the first collision time of two particles, with a coupling argument. In Section 6, we finish the proof of Theorem 2.2 by approximating the process (λ^n_1, . . . , λ^n_d) in the same way, with the same idea of separating the particles which collide by a distance δ > 0. At this point, it suffices to apply the result of Section 5 to show that the two approximating processes are close in the large n limit. In Section 7, we prove Theorem 2.4, the last statement of Proposition 2.5, and Propositions 2.3 and 2.6.

Properties of the limiting eigenvalues process
In this section we study the unique strong solution of (2.4) introduced by Cépa and Lépingle in [4]. We first derive some boundedness and smoothness properties. In view of proving the convergence of λ^n towards this process, and in particular to deal with possible collisions, we construct it for pβ < 1 as the limit of a process which is defined similarly, except that when two particles hit, we separate them by a (small) positive distance, see Definition 3.6.

Regularity properties of the limiting process
Lemma 3.1. There exists a unique strong solution of (2.4). Moreover, it satisfies the boundedness estimate (3.1). Furthermore, there exist finite α, M_0 > 0 so that for M ≥ M_0 and i ≠ j, the corresponding exponential tail estimate holds.

Proof. The existence and uniqueness of the strong solution is [4, Proposition 3.2].
For the first point, we choose a twice continuously differentiable symmetric function φ, increasing on R_+, which smoothly approximates |x| in a neighborhood of the origin. We deduce from the above arguments that there exists C > 0 bounding the corresponding drift terms. By the usual martingale inequality, as φ′ is uniformly bounded, we know (see e.g. [2, Corollary H.13]) that the martingale part is exponentially concentrated; therefore, using the fact that |φ(x)| ≥ |x| · (|x| ∧ 1), we deduce the first point. The first point gives the claim for j = d; we then continue recursively.

Estimates on collisions
To obtain regularity estimates on the process λ, we need to control the probability that more than two particles are close together. We shall prove, building on an idea from Cépa and Lépingle [5], the following estimate. For ε > 0 and r ≥ 3, let τ^r_ε := inf{t ≥ 0 : min_{|I|=r} S^I_t ≤ ε}.

Lemma 3.2. For any T > 0 and η > 0, and for any r ≥ 3, there exists ε_r > 0, which only depends on T, η and r, such that P(τ^r_{ε_r} ≤ T) ≤ η.

Proof. The proof is done by induction over r, and we start with the case r = d, I = {1, . . . , d}. Then S verifies an SDE of the form given in [5, Theorem 1], where β_t is a standard Brownian motion and a = 2d(d − 1)(2 + pβd). Thus, as α < 0 for d ≥ 3, the process started from any ε > 0 stays away from zero with large probability, and, again since α < 0, we can take ε small enough to obtain the claim for r = d. We next assume that we have proved the claim for r + 1 and choose ε_{r+1} so that the probability that the corresponding hitting time is smaller than T is at most η/2. We can choose I to be connected without loss of generality, as the λ_i are ordered. We let R = min{τ^I_ε, τ^{r+1}_{ε_{r+1}}}, where τ^I_ε is the first time at which S^I reaches ε. Again following [5], for j, k ∈ I we cut the last integral over times according to whether the corresponding particles are close or not; the singular part is compensated by the third term in (3.2). For the remaining part, we have a uniform bound for all j, k ∈ I and all t ∈ Ω^c_{j,k}, t ≤ R, which entails the existence of a finite constant c controlling the drift. Using Lemma 3.1 we hence conclude that there exists a universal finite constant c′, depending only on T, controlling the corresponding expectation. On the other hand, the remaining term is bounded above by (3.1). We finally choose ε small enough so that the right-hand side is smaller than η/2, which concludes the proof. We next show that not only are collisions of three particles rare, but also that two collisions of different pairs of particles rarely happen around the same time.
Lemma 3.3. For any T > 0 and η > 0, there exists ε′ > 0 such that two distinct pairs of particles collide within distance ε′ of each other before time T with probability at most η.

Proof. Using Itô's formula, it is easy to see that X satisfies an autonomous equation; thus there exists a standard Brownian motion B driving it. Note that, by the previous Lemma 3.2, we can choose ε such that three-particle collisions occur before time T with probability at most η/2. Moreover, for all t ≤ τ^3_ε such that X_t ≤ ε/4, we have a uniform bound on the interaction terms for all k ≠ i − 1, i; the same property holds for j. To finish the proof, we use the fact that the sum in the last term is bounded for all t ≤ τ^3_ε such that X_t ≤ ε/4. We thus introduce the process Y_t defined by Y_t := min(X_t, ε/4). Let us set f(x) := min(x, ε/4)^{−pβ}. Note that f is a convex function R_+ → R_+ with an explicit left derivative, and that its second derivative in the sense of distributions is a positive measure. Thus, by the Itô–Tanaka formula, see e.g. [8, Theorem 6.22], we can expand f(Y_t). The definition of local time implies that, almost surely, L^x_t(X) ≤ t. Taking ε′ small enough gives the result with (3.3). As a direct consequence, we deduce the uniqueness of the i^* of Proposition 2.5.
In particular, this gives the uniqueness of the index i^* in Proposition 2.5.
Proof. It is enough to write the event of interest, for all ε > 0, in terms of the stopping times of Lemmas 3.3 and 3.2, and to deduce from these lemmas that the right-hand side is as small as desired as ε goes to zero.

Smoothness properties of the limiting process
Lemma 3.5. We have the following smoothness properties:
• For all T < ∞ and ε > 0, there exist finite positive constants C, c′, c so that for all positive real numbers δ, η with η ≤ c′(ε² ∧ δε), the first modulus-of-continuity estimate holds.
• For all T < ∞ and ε > 0, there exist finite positive constants C, c′, c so that for all positive real numbers δ, η with η ≤ c′(ε² ∧ δε), the second modulus-of-continuity estimate holds.

Proof. Let us first fix s ∈ [0, T ] and set I = {i ∈ {2, . . . , d} : |λ_i(s) − λ_{i−1}(s)| ≤ ε/3}, and note that on the event {s ≤ τ^3_ε}, the connected subsets of I contain at most one element.
The continuity of the λ_i implies that T_ε is almost surely strictly positive. Using (3.1) and [2, Corollary H.13], it is easy to deduce that there exists a constant c > 0 such that the desired bound holds for η < εδ/(8pβ(d − 1)). Now, if i ∈ I, by the same argument as for (3.7) (the drift term in the SDE satisfied by λ_i + λ_{i−1} is also bounded), we can show that there exists a constant c > 0 such that the analogous bound holds. On the other hand, the process x_i(t) := (λ_i − λ_{i−1})(t) verifies a one-dimensional SDE.
This yields inequality (3.9), where the first inequality is due to the fact that x_i is non-negative. Using (3.8) and (3.9) gives, for η < δε/c, the corresponding estimate; thus, with (3.7), we control the probability in question for η < δε/c. In particular, there exists c′ > 0 so that if ε² > cη, this probability is as small as desired provided η is chosen small enough. This allows us to remove the stopping time and to obtain the bound for any fixed s < T and δ > cη/ε. The uniform estimate in s is obtained as usual by taking s in a grid of mesh η/2, up to dividing δ by two and multiplying the probability by 2T/η. Thus we find constants c, c′ and C so that the first estimate holds whenever η ≤ c(ε² ∧ δε). The second control is a direct consequence of the first: we first consider the case j = d to deduce the claim for i < d, where the right-hand side is continuous, and we then treat the other indices recursively.

Approximation by less colliding processes
When pβ ≥ 1, it is well known [2, Lemma 4.3.3] that the process λ almost surely has no collision. In this case, the singularity of the drift defining the SDE is not really important, as it is almost always avoided. In the case pβ < 1, we know that collisions occur, and in fact they can occur as often as for a Bessel process with small parameter. The singularity of the drift then becomes important, in particular when we show the convergence in law of the eigenvalue process λ^n towards λ. To this end, we show that λ can be approximated by a process which does not spend too much time in collisions.
Lemma 3.8. Let δ > 0. Construct the process λ with the same Brownian motion b as λ^δ. There exists a constant c > 0 such that, almost surely, for all ℓ ∈ N, the distance between λ and λ^δ up to time T^δ_ℓ is at most of order ℓδ.

To finish the approximation it is then enough to show that T^δ_ℓ goes to infinity for ℓ ≪ 1/δ; this is the content of the next proposition.

Proof of Lemma 3.8. We proceed by induction over ℓ, showing the claimed bound for each ℓ.
• We treat the case ℓ = 1. By definition of the processes, λ^δ = λ on [0, T^δ_1). At time t = T^δ_1, the separation procedure moves the colliding particles by at most δ. The property is thus true for ℓ = 1.
• Suppose the property true for ℓ. For t ∈ [T^δ_ℓ, T^δ_{ℓ+1}), since λ^δ and λ are driven by the same Brownian motion, the distance between them does not increase, as the (λ_i)_{1≤i≤d} and the (λ^δ_i)_{1≤i≤d} are ordered. In addition, the separation procedure at time T^δ_{ℓ+1} moves the particles by at most δ, so that, using the induction hypothesis, the bound holds at rank ℓ + 1. The proof is thus complete.
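The δ-separated dynamics can be sketched numerically. This is only a toy discrete-time stand-in: it assumes (2.4) has the drift −γλ_i dt + (pβ/2) Σ_{j≠i} dt/(λ_i − λ_j), and it replaces the exact separation rule applied at collision times in Definition 3.6 by pushing any gap smaller than δ back to δ after each Euler step:

```python
import numpy as np

def separated_step(lam, pbeta, gamma, delta, dt, rng):
    """Euler step for the interacting system followed by a delta-separation:
    after the step, any gap below delta is pushed back to delta.  A discrete
    stand-in for the process lambda^delta (the rule applied at collision
    times in Definition 3.6 is more careful than this)."""
    d = len(lam)
    diff = lam[:, None] - lam[None, :]
    np.fill_diagonal(diff, np.inf)               # drop the j = i term
    drift = (pbeta / 2.0) * (1.0 / diff).sum(axis=1) - gamma * lam
    lam = np.sort(lam + drift * dt + np.sqrt(dt) * rng.standard_normal(d))
    for i in range(1, d):                        # enforce gaps >= delta
        lam[i] = max(lam[i], lam[i - 1] + delta)
    return lam

rng = np.random.default_rng(2)
lam = np.array([0.0, 0.02, 1.0])                 # pbeta < 1: collisions do occur
for _ in range(2000):
    lam = separated_step(lam, pbeta=0.5, gamma=1.0, delta=0.01, dt=1e-4, rng=rng)
print(lam, np.diff(lam).min())
```

By construction all gaps stay at least δ, which is exactly what removes the singularity of the drift along the approximating process.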
Proof of Proposition 3.9. In the case pβ ≥ 1, it is well known [2, p. 252] that T_1 is almost surely infinite, and the proposition is then trivial. We hence restrict ourselves to pβ < 1. Let η > 0. Let us define the stopping times τ^{3,δ}_ε := inf{t ≥ 0 : min_{|I|=3} S^{I,δ}_t ≤ ε} and τ^{2,δ}_ε, defined analogously for pairs. Set also τ^δ_ε := τ^{2,δ}_ε ∧ τ^{3,δ}_ε. We know from Lemmas 3.2 and 3.3 that we can choose ε small enough so that these stopping times exceed T with high probability. The number ε being fixed, it is then straightforward to see from Lemma 3.8 that there exists δ_0 small enough so that the analogous statement holds for all δ ≤ δ_0. We need to show that the second term goes to 0 as δ → 0. Let {F_t}_{t≥0} be the filtration of the driving Brownian motion. We will prove in Lemma 3.12 that there exists a constant c > 0 such that, on the event {τ^δ_ε ≥ T^δ_L}, the relevant conditional expectations are almost surely bounded below. In the following, we suppose that δ is small enough so that c δ^{−pβ+ξ} ≤ δ^{−pβ+2ξ} and δ^{−ξ} ≤ T − δ^{−pβ+ξ} − δ^{−pβ+2ξ}. For such δ, we can conclude using the Chebyshev inequality. Using Lemma 3.10, we get that there exists a constant C > 0 such that the relevant probability is bounded by a quantity which goes to 0 as δ goes to 0. The proposition is proved.
Lemma 3.10. Let ξ ∈ (0; 2). Then there exists a constant C > 0 such that, almost surely, the conditional moments of order ξ/2 of the inter-collision times are bounded below accordingly.

Proof. We know that there are no multiple collisions and no simultaneous collisions (by Lemmas 3.2 and 3.3), and therefore we can denote by i the unique index involved in the collision; the corresponding gap process satisfies a one-dimensional SDE.

Let us define the Bessel-like process (X_t)_{t≥0} by X_0 = δ and by the corresponding SDE for t ≥ 0. Using the comparison theorem for SDEs [8, Proposition 2.18] (note that the drifts are smooth before T^δ_{ℓ+1} − T^δ_ℓ), we know that for all t ∈ [0, T^δ_{ℓ+1} − T^δ_ℓ), almost surely, the gap process dominates X. We finally conclude using a classical result for Bessel processes, see e.g. [?, (13)]: the density with respect to the Lebesgue measure on R_+ of the law of the hitting time T^X_δ is explicit. Hence we deduce that for ξ ≤ 2 there exists a constant c > 0 such that the claimed moment bound holds.

For t ∈ [0; T ], we define the random set I_t by (3.17). Note that, on the event Ω := {τ^δ_ε ≥ T }, for each t ≤ T, the set I_t contains at most one element. For each ℓ ∈ {1, . . . , L} and i ∈ {1, . . . , d}, we define the corresponding stopping times.

Lemma 3.11. If T^δ_ℓ ≤ τ^δ_ε and if i denotes the (unique) index such that λ^δ_i(T^δ_ℓ−) = λ^δ_{i−1}(T^δ_ℓ−), then there exist a constant c > 0 and δ_0 > 0 such that the stated estimate holds for all δ ≤ δ_0.

Proof. Note that i is the unique element of the set I_{T^δ_ℓ} defined by (3.17), so that the gap process satisfies the SDE used above.
Proof. We assume in the sequel that δ ≤ 1. The proof is based on Lemma 3.11, from which we deduce a bound on the conditional expectation. Let us handle the first term of the resulting right-hand side, using Lemma 3.5 for the last line (or rather its proof, since we use the estimate for a fixed s). For the second term, the idea is similar, again by Lemma 3.5. As exp(−c δ^{−ξ/4}) ≪ δ^{1−pβ} for small enough δ and all ξ > 0, the proof is complete.

Properties of the eigenvalues of M β n
In this section, we study the regularity and boundedness properties of the eigenvalues of M^β_n.
Remark here that we use the property that ε^n_t = (ε^n_t)². Proof. Let us first show that for each k ∈ N such that k/n < T_n(1), the strict inequality (4.1) holds almost surely. We proceed by induction over k. Note that under our assumptions, it is true for k = 0. Suppose it is true at rank k; let us show it then holds at rank k + 1. From Definition 2.1, if the eigenvalues of M^β_n(k/n) are denoted λ^n_1(k/n) < · · · < λ^n_d(k/n), then, depending on the value of the Bernoulli random variable ε^n_k, the dynamics for t ∈ [k/n; (k + 1)/n] is as follows.
• If ε^n_k = 1, the process (λ^n_1(t), . . . , λ^n_d(t)) follows, for t ∈ [k/n; (k + 1)/n), the Dyson Brownian motion with initial conditions (λ^n_1(k/n), . . . , λ^n_d(k/n)) (see [2, Theorem 4.3.2]).
• If, on the other hand, ε^n_k = 0, we define a new process (µ^n_1(t), . . . , µ^n_d(t)) of independent Ornstein–Uhlenbeck processes with initial conditions (λ^n_1(k/n), . . . , λ^n_d(k/n)); more precisely, the evolution for t ∈ [k/n; (k + 1)/n] is given by dµ^n_i(t) = dB^i_t − γ µ^n_i(t) dt, where the Brownian motions B^i are the ones of Definition 2.1. Note that, before time T_n(1), the two processes λ^n and µ^n coincide. In this case, the µ^n_i(t) can cross, and the ordering can be broken within the interval [k/n; (k + 1)/n]. However, even if crossings for the process µ^n happen before time t = (k+1)/n, we still know that the e^{γ(k+1)/n} µ^n_i((k+1)/n) are almost surely distinct. The reordering of the µ^n_i thus always gives λ^n_1((k+1)/n) < · · · < λ^n_d((k + 1)/n) a.s.
The induction is complete and proves (4.1) for all k ∈ N. We deduce from the above arguments that for k such that k/n < T_n(1), the evolution of λ^n(t) for t ∈ [k/n; (k + 1)/n ∧ T_n(1)) is given by the stochastic differential system (4.2), with initial conditions at t = k/n given by (λ^n_1(k/n), . . . , λ^n_d(k/n)). Let us define the process b^i for t ≥ 0 by b^i_t := ∫_0^t (ε^n_s dW^i_s + (1 − ε^n_s) dB^i_s). Using the fact that the Brownian motions (W^i_t)_{t≥0}, i ∈ {1, . . . , d}, are mutually independent and independent of the (also mutually independent) Brownian motions (B^i_t)_{t≥0}, i ∈ {1, . . . , d}, it is straightforward to check that the processes (b^i_t)_{t≥0}, i ∈ {1, . . . , d}, are mutually independent Brownian motions. It is also easy to see that, for all s, t ∈ [k/n; (k + 1)/n], the increments b^i_t − b^i_s and ε^n_k are independent. Therefore, we deduce that the Brownian motions (b^i_t)_{t≥0}, i ∈ {1, . . . , d}, are independent of the sequence (ε^n_k)_{k∈N}. The following regularity properties will be useful later on.
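The identity ε = ε² is what makes b a standard Brownian motion: since ε takes values in {0, 1}, each mesh increment is exactly Gaussian with variance dt, whichever coin shows. A quick numerical sanity check of this (a sketch, not part of the proof):

```python
import numpy as np

# b_t = int_0^t (eps_s dW_s + (1 - eps_s) dB_s) with eps in {0,1}:
# each mesh increment is N(0, dt), so the quadratic variation over
# [0, T] concentrates around T, as for a standard Brownian motion.
rng = np.random.default_rng(3)
n, T, p = 1000, 1.0, 0.3
k = int(n * T)
dt = 1.0 / n
eps = rng.random(k) < p                          # one Bernoulli coin per mesh interval
dW = np.sqrt(dt) * rng.standard_normal(k)
dB = np.sqrt(dt) * rng.standard_normal(k)
db = np.where(eps, dW, dB)                       # increments of b
qv = np.sum(db ** 2)                             # empirical quadratic variation
print(qv)                                        # close to T = 1
```

Had ε taken values other than {0, 1}, the increment variance would have been (ε² + (1 − ε)²) dt ≠ dt, and b would not be a standard Brownian motion.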
Proof. Using Itô's formula, one can derive the semimartingale decomposition of e^{γt} M^β_n(t). Let us set ∆_n(s, t) := e^{γt} M^β_n(t) − e^{γs} M^β_n(s). The entries of ∆_n(s, ·) are martingales with respect to the filtration of the Brownian motions, conditionally on the Bernoulli random variables (ε^n_k)_{k∈N} (this is due to the independence between the Brownian motions (B^i_t)_{t≥0}, (H^β_t(ij))_{t≥0}, 1 ≤ i, j ≤ d, and the sequence of Bernoulli random variables (ε^n_k)_{k∈N}). Using the fact that |χ^n_i([ns]/n)_{ij}| ≤ 1 for all i, j, we can check that there exists a constant C(T, d), which does not depend on n, such that for all n ∈ N, |⟨∆_n(s, ·)_{ij}, ∆_n(s, ·)_{kl}⟩_t| ≤ C(T, d)|t − s|.
Let A > 0. Using [2, Corollary H.13], we obtain an exponential bound on the maximal deviation of the entries. Similarly, for any given s ∈ [0, T ] and ε > 0, using [2, Corollary H.13] again, we have, for each entry ij and every δ > 0, an analogous exponential bound, and therefore there exists a positive constant c′ controlling the deviation probability.

Proof. This lemma is a consequence of Lemma 4.3 and of the inequalities comparing eigenvalue increments with matrix-entry increments; for the second inequality, we used [2, Lemma 2.1.19] and the fact that the λ^n_i are ordered.
Convergence of the law of the eigenvalues till the first hitting time

Proposition 5.1. Take λ(0) = (λ_1 < λ_2 < · · · < λ_d). Construct µ^n, the strong solution of (4.2), with the same Brownian motion as λ, the strong solution of (2.4), both starting from λ(0); λ^n equals µ^n until T_n(1). For all T > 0, we have almost sure convergence of µ^n towards λ, uniformly up to time T. As a consequence, if we let T_1 = inf{t > 0 : ∃ i ≠ j, λ_i(t) = λ_j(t)}, we have the almost sure convergence of λ^n towards λ on [0, T ∧ T_1]. We point out that this convergence does not happen on a trivial interval, because of the following remark.

Remark 5.2. For any η > 0, there exists τ(η) > 0 so that T_n(1) ≥ τ(η) with probability at least 1 − η.

Proof of Remark 5.2. By the same arguments as developed in (4.9), and since the λ^n_i are uniformly bounded with high probability, we can choose, for any η > 0, the parameter T small enough so that the probability that two eigenvalues meet before time T is at most η.

Proof of Proposition 5.1. Using Itô's formula, we can compute the evolution of the distance between µ^n and λ. By the same argument as in (3.11), the second term on the right-hand side is non-positive. Thus, using equation (5.1), we obtain a bound valid for t ≤ T_n(1). We next prove that the error terms P_n and Q_n tend to zero. We first handle the convergence of Q_n(t). Set Ω_1 := {sup_{|s−t|≤1/n, t≤T} max_{1≤i≤d} |λ^n_i(t) − λ^n_i(s)| ≤ n^{−1/2+ε}}. Following (4.9), we know that P(Ω^c_1) ≤ c e^{−c n^{2ε}}. We thus deduce from Lemma 3.1 that, on the event Ω_1, the probability of a deviation of size δ is at most c e^{−c δ² n^{1−2ε}} + c e^{−c n^{2ε}}.
Hence, the Borel–Cantelli lemma ensures the almost sure convergence of Q_n to zero. We now turn to the convergence of P_n(t). Let η > 0 be small. The process ∫_0^t (ε^n_s − p) ds is a martingale, and by the Azuma–Hoeffding inequality, for any δ > 0, its running maximum up to time T exceeds δ with exponentially small probability.
We now use the independence between the Brownian motions (b^i_t)_{0≤t≤T}, i = 1, . . . , d, and the Bernoulli random variables ε^n_k, k = 1, . . . , [nT]. Conditionally on the (b^i_t)_{0≤t≤T}, i = 1, . . . , d, the processes λ_i(t), i = 1, . . . , d, are deterministic, and the process P_n is a martingale with respect to the filtration of the ε^n_k. By Lemma 3.5 and Lemma 4.4, the relevant event has probability larger than 1 − e^{−cn^{1/16}}. Moreover, by the martingale property it is easy to see that the associated exponential moments are bounded by 1 for all λ ≥ 0.
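The Azuma–Hoeffding step above can be illustrated numerically: the martingale ∫_0^t (ε^n_s − p) ds has increments bounded by 1/n, so its running maximum over [0, T] is of order √(T/n). A small Monte Carlo sketch (not part of the proof):

```python
import numpy as np

# Running maximum of the martingale M_t = int_0^t (eps_s - p) ds,
# discretized on the mesh 1/n: each increment is (eps_k - p)/n,
# bounded by 1/n, so the maximum is of order sqrt(T/n).
rng = np.random.default_rng(4)
p, T = 0.5, 1.0

def max_dev(n):
    incr = ((rng.random(int(n * T)) < p).astype(float) - p) / n
    return np.max(np.abs(np.cumsum(incr)))

d1 = max_dev(100)
d2 = max_dev(10000)
print(d1, d2)                                    # deviation shrinks as n grows
```

This is exactly the mechanism that kills the time-averaged coin fluctuations in the large-n limit.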
Lemma 6.2. Let T < ∞ and δ > 0. We have the stated convergence in probability for all ℓ ∈ N. In particular, for every ℓ, if T^δ_n denotes the first collision time of λ^{n,δ} after T^δ_{ℓ−1}, the corresponding convergence holds.

Proof. Again, we prove this lemma by induction over ℓ.
We now turn to the analysis of the behavior of the columns φ_i(t) of the matrix O^β(t) as t → T_1 with t < T_1. These vectors φ_i(t) form an orthonormal basis of R^d (resp. C^d) if β = 1 (resp. β = 2), and it is easy to check that they verify the stochastic differential system (7.5) below. In the remainder of this section, we denote by i^* the unique (by Lemma 3.4) index such that λ_{i^*}(T_1) = λ_{i^*−1}(T_1).
The main issue we meet at this point in the presence of collisions (which occur when pβ < 1; see [4]) lies in the divergence of the integral (2.7), which we now prove.
We now describe the behavior of the d − 2 vectors φ j (t), j = i * , i * − 1 just before the first collision time T 1 .
Proof of the first statement of Proposition 2.6. We denote by φ_{jℓ}(t) the ℓ-th entry of the d-dimensional vector φ_j(t). We recall from Section 3.2 that there are no multiple collisions and no two collisions at the same time for the system (λ_1(t), λ_2(t), . . . , λ_d(t))_{0≤t≤T_1} verifying (2.4); therefore we may assume, without loss of generality, that for j ≠ i^*, i^* − 1, all diffusion and drift terms of (7.5) remain almost surely bounded for t ∈ [0; T_1]. To prove the statement, we just need to show that the entries φ_{jℓ}(t) converge almost surely as t → T_1. The drift terms appearing in (7.5) are easy to deal with, since 1/(λ_j − λ_k)(t) is bounded in the vicinity of T_1 and |φ_{jℓ}(t)| ≤ 1 for all t < T_1. For the diffusion terms, we have, for every ℓ ∈ {1, . . . , d} and every s ∈ [0; T_1], an estimate in terms of M := sup_{t∈[0;T_1]} max_{k≠j} 1/(λ_j − λ_k)²(t). Using the Borel–Cantelli lemma, we deduce the result.
Let V be the (d − 2)-dimensional subspace spanned by the orthonormal family {φ̂_j ; j ≠ i^*, i^* − 1} and W its orthogonal complement in R^d. Let us define the "diffusive orthonormal basis" in the space W that will describe the evolution of the two vectors (φ̂_{i^*−1}(t), φ̂_{i^*}(t)) on the interval [T_1 − δ; T_1] (up to the initial conditions at time t = T_1 − δ that we will make explicit later).

Lemma 7.1. Let δ > 0 and let (u, v) be an orthonormal basis of the two-dimensional subspace W. We consider the stochastic differential system (7.6) with initial conditions (φ̂_{i^*−1}(T_1 − δ), φ̂_{i^*}(T_1 − δ)) = (u, v). This stochastic differential system has a unique strong solution defined on the interval [T_1 − δ; T_1) such that for each t ∈ [T_1 − δ; T_1), {φ̂_{i^*−1}(t), φ̂_{i^*}(t)} is an orthonormal basis of W.
Proof. For all ε > 0, the function t → 1/(λ_{i^*} − λ_{i^*−1})(t) is bounded on the interval [T_1 − δ; T^ε_1], where T^ε_1 is the first time at which |λ_{i^*} − λ_{i^*−1}| < ε; therefore there is a unique strong solution to the stochastic differential system (7.6) up to time T^ε_1, as the system is driven by bounded linear drifts. Since T^ε_1 grows to T_1 as ε → 0, the proof is complete.
To show that for all t ∈ [T 1 − δ; T 1 ) the family { φ i * −1 (t), φ i * (t)} is an orthonormal basis of W , we proceed along the same line as in the proof of [2,Lemma 4.3.4].
Lemma 7.2. Let η > 0 and κ > 0. Then there exist an orthonormal basis (u, v) of W and δ > 0 small enough such that, denoting by (φ̂_{i^*−1}(t), φ̂_{i^*}(t))_{t∈[T_1−δ;T_1)} the unique strong solution of the stochastic differential system (7.6) with initial conditions at t_0 = T_1 − δ given by (φ̂_{i^*−1}(t_0), φ̂_{i^*}(t_0)) = (u, v), the supremum over [t_0; T_1) of the distance to the true eigenvectors is smaller than η with probability at least 1 − κ.

Proof. Using Itô's formula, we find, for all t ∈ [t_0; T_1), the expansion (7.7), whose martingale part involves integrals of dw^β_{ij}(s) ⟨φ_i(s), φ_j(s)⟩. As, for i ∈ {i^*, i^* − 1} and j ∉ {i^*, i^* − 1}, the terms 1/(λ_i − λ_j)²(t) have almost surely a finite integral with respect to Lebesgue measure on the interval [t_0; T_1) (in fact those terms are almost surely bounded, as the corresponding particles remain at finite distance), the quadratic variation of the last term is of order δ and therefore is smaller than η/2 with probability greater than 1 − κ for δ small enough.
The process ψ is now defined for all t ∈ R_+ and verifies the stochastic differential equation (7.9), where B is a standard Brownian motion on R and where A is a two-by-two matrix satisfying A² = −I. It is clear that there is pathwise uniqueness for the stochastic differential equation (7.9) (it is linear in ψ). Therefore, to solve the equation entirely, we just need to exhibit one solution. Using Itô's formula, one can check that the solution is the rotation matrix of angle √p B_t, with entries cos(√p B_t) and ±sin(√p B_t). Note that for all t ∈ R_+, the matrix ψ(t) is indeed in the space of orthogonal matrices. But (cos(√p B_t), sin(√p B_t)) converges in law, as time goes to infinity, towards the law of (θ, ε√(1 − θ²)) with θ uniformly distributed on [−1, 1] and ε = ±1 with probability 1/2, from which the result follows.
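The closed-form solution can be checked numerically against an Euler–Maruyama discretization driven by the same Brownian path. Here we assume the SDE (7.9) reads dψ = √p A ψ dB_t − (p/2) ψ dt with A = [[0, −1], [1, 0]]; these choices are assumptions consistent with A² = −I (which supplies the Itô correction) and with the cos/sin solution quoted above:

```python
import numpy as np

# Check that the rotation by angle sqrt(p) B_t solves
# d psi = sqrt(p) A psi dB_t - (p/2) psi dt
# by comparing with an Euler-Maruyama scheme on the same path.
rng = np.random.default_rng(5)
A = np.array([[0.0, -1.0], [1.0, 0.0]])          # A^2 = -I
p, steps, dt = 0.7, 20000, 1e-4                  # horizon T = 2
dB = np.sqrt(dt) * rng.standard_normal(steps)
psi = np.eye(2)
for db in dB:                                    # Euler-Maruyama step
    psi = psi + np.sqrt(p) * (A @ psi) * db - 0.5 * p * psi * dt
theta = np.sqrt(p) * dB.sum()                    # angle sqrt(p) * B_T
exact = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
err = np.max(np.abs(psi - exact))
print(err)                                       # small discretization error
```

The −(p/2)ψ dt drift is exactly the Itô correction (p/2)A²ψ dt coming from A² = −I; without it, the Euler iterates would drift away from the orthogonal group.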