Poisson statistics for 1d Schr\"odinger operators with random decaying potentials

We consider 1d Schr\"odinger operators with random decaying potentials in the regime where the spectrum is pure point (the sub-critical case). We show that the point process composed of the rescaled eigenvalues, together with the zero points of the corresponding eigenfunctions, converges to a Poisson process.

The purpose of this paper is to study the local fluctuation of the eigenvalues on the positive energy axis. To that end, let H_L := H|_[0,L] be the restriction of H to the interval [0, L] with Dirichlet boundary condition, and let {E_j(L)}_{j ≥ j_0} (0 < E_{j_0}(L) < E_{j_0+1}(L) < · · ·) be the set of positive eigenvalues of H_L. Take a reference energy E_0 > 0 arbitrarily, and consider the point process ξ_L in which we take the square root of each eigenvalue; this corresponds to the unfolding with respect to the integrated density of states N(E) = π^{-1}√E. For a Borel measure µ on R^d, we denote by Poisson(µ) the Poisson process on R^d with intensity measure µ. Similarly, for a constant c > 0, we denote by Poisson(c) the Poisson distribution with parameter c. The first theorem of this paper is

Theorem 1.1. Let α < 1/2. Then ξ_L converges in distribution to the Poisson process of intensity dλ/π:

ξ_L → Poisson(dλ/π) in distribution, as L → ∞.
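Since the display defining ξ_L is elided in this extraction, a minimal numerical sketch may help fix ideas. It assumes the natural rescaling ξ_L = Σ_j δ_{L(√E_j(L) − √E_0)} and uses the free case V ≡ 0, where the Dirichlet eigenvalues on [0, L] are exactly E_j = (jπ/L)²: the unfolded points L√E_j are then equally spaced with gap π, so their mean density matches the intensity dλ/π (though the free case itself is "clock", not Poisson).

```python
import math

# Free case V = 0 on [0, L] with Dirichlet boundary conditions:
# eigenvalues are E_j = (j*pi/L)^2, j = 1, 2, ...
L = 50.0
E = [(j * math.pi / L) ** 2 for j in range(1, 200)]

# Unfold with the integrated density of states N(E) = sqrt(E)/pi:
# the rescaled points L*sqrt(E_j) have constant spacing pi ("clock" rigidity).
unfolded = [L * math.sqrt(e) for e in E]
gaps = [b - a for a, b in zip(unfolded, unfolded[1:])]
assert all(abs(g - math.pi) < 1e-9 for g in gaps)

# Mean number of unfolded points per unit length is 1/pi,
# matching the intensity d(lambda)/pi of the limiting Poisson process.
density = len(unfolded) / (unfolded[-1] - unfolded[0])
print(density, 1 / math.pi)
```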

Remark 1.2. When we consider two reference energies E_1, E_2 with E_1 ≠ E_2, the corresponding point processes ξ_1, ξ_2 jointly converge to independent Poisson processes of intensity dλ/π. (Here and in what follows we consider the vague topology on the space of point measures on R.)

Remark 1.3. Together with the results in [7, 11], we have
(1) α > 1/2 =⇒ ξ_L → clock process,
(2) α = 1/2 =⇒ ξ_L → Sch_τ process,
(3) α < 1/2 =⇒ ξ_L → Poisson(dλ/π).
Such results have been known for discrete models: [5] proved (1)-(3) above for CMV matrices, [3] proved "clock behavior" (similar to (1)) for Jacobi matrices, and [9] proved (2) for 1d discrete Schrödinger operators. Hence our result is a continuum analogue of theirs. The model-independent nature of these results is due to the fact that the Prüfer phases of these models obey similar equations and thus have similar behavior. The global fluctuation of the eigenvalues is studied in [13], which also exhibits different behavior in the above three cases.

Remark 1.4. Theorem 1.1 also works for H_L, so that together with the results in [11] we have the trichotomy (1)-(3) above in that setting as well.
Remark 1.5. It would be interesting to study the behavior of the eigenvalues near the bottom edge of the essential spectrum (i.e., to study ξ_L for E_0 = 0), to which the technique in this paper does not apply. For recent developments in this direction, we refer to [2].
To see the outline of the proof, we introduce the Prüfer variable as follows. Let x_t be the solution to the Schrödinger equation H x_t = κ² x_t, x_0 = 0, which is represented in the following form.
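To make the Prüfer representation concrete (a standard parametrization, assumed here since the display is elided): writing x_t = r_t sin θ_t(κ) and x′_t = κ r_t cos θ_t(κ), the phase satisfies θ′_t = κ − (V(t)/κ) sin² θ_t with θ_0 = 0. A small sketch integrating this ODE; in the free case V ≡ 0 one has θ_L(κ) = κL exactly.

```python
import math

def prufer_phase(V, kappa, L, n_steps=20000):
    """Integrate theta' = kappa - (V(t)/kappa) * sin(theta)^2 by RK4,
    with theta(0) = 0 (Dirichlet condition x_0 = 0)."""
    h = L / n_steps
    f = lambda t, th: kappa - (V(t) / kappa) * math.sin(th) ** 2
    theta, t = 0.0, 0.0
    for _ in range(n_steps):
        k1 = f(t, theta)
        k2 = f(t + h / 2, theta + h * k1 / 2)
        k3 = f(t + h / 2, theta + h * k2 / 2)
        k4 = f(t + h, theta + h * k3)
        theta += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return theta

# Free case: theta_L(kappa) = kappa * L exactly.
kappa, L = 2.0, 30.0
theta_free = prufer_phase(lambda t: 0.0, kappa, L)
assert abs(theta_free - kappa * L) < 1e-6

# Sturm oscillation: floor(theta_L / pi) counts the Dirichlet
# eigenvalues of H_L below kappa^2, i.e. #{j : (j*pi/L)^2 <= kappa^2}.
assert int(theta_free // math.pi) == int(kappa * L / math.pi)
```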
Since, by Sturm's oscillation theorem, E = E_j(L) if and only if θ_L(√E) = jπ, the Laplace transform of ξ_L has the following representation.
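For orientation, the target of such a Laplace-transform computation is the Laplace functional of Poisson(dλ/π), a standard fact: for continuous, compactly supported f ≥ 0,

```latex
\mathbb{E}\left[e^{-\xi(f)}\right]
  = \exp\!\left(-\frac{1}{\pi}\int_{\mathbb{R}} \bigl(1-e^{-f(\lambda)}\bigr)\,d\lambda\right),
\qquad \xi = \mathrm{Poisson}(d\lambda/\pi),
```

so Theorem 1.1 amounts to showing that the Laplace transform of ξ_L converges to this expression.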
Thus our aim is to study the joint limit of (Θ_L(λ), φ_L(E_0)). Here we replace L by n and consider the family H_{nt} (t ∈ [0, 1]) of Hamiltonians. We will show that the following limits exist.
In the first equation, both sides are regarded as processes in t taking values in non-decreasing functions (equipped with the weak topology as measures). Then we have the following theorem.

Theorem 1.6.
On the other hand, for the Anderson model H = −∆ + V_ω(x) on ℓ²(Z^d), the following facts are known [4, 10]. Let H_L := H|_{{1,··· ,L}^d} be the restriction of H to the box of size L, with {E_j(L)}_{j≥1} being its eigenvalues, and let x_j(L) ∈ R^d be the localization center corresponding to E_j(L). If E_0 lies in the localized region, we have the convergence in eq.(1.2), where n(E_0) := (d/dE) N(E)|_{E=E_0} is the density of states at E = E_0. The jump points of the function t → ⌊Θ_{nt}(λ)/π⌋ are (modulo some errors) related to the zero points of the eigenfunctions whose corresponding eigenvalues are less than λ. Since the eigenfunctions decay sub-exponentially, and since the set of jump points of the function t → ⌊Θ̃_t(λ)/π⌋ has the monotonicity in λ described in eq.(1.5), those jump points are close to the localization centers of the respective eigenfunctions. Hence we believe that a statement like eq.(1.2) also holds in our case, and that Theorem 1.6 (2) is related to this speculation.
We shall explain the idea of the proof. The Prüfer phase satisfies the integral equation (2.1), from which we compute the equation satisfied by Θ_{nt}(λ). By using "Ito's formula" (2.3) we can show that, up to error terms, Θ_{nt}(λ) is driven by a complex Brownian motion Z_t = X_t + iY_t. At this point we have a general picture: (1) α > 1/2: the second term vanishes, which implies convergence to the clock process; (2) α = 1/2: Θ_{nt}(λ) converges to the solution of an SDE; and (3) α < 1/2: the diffusion term is dominant, so that Θ_{nt}(λ) should stay in a vicinity of πZ in order to keep (e^{2iΘ_{nt}(λ)} − 1) small. Here we note that Θ_{nt}(λ) > 0 for λ > 0 and E[Θ_{nt}(λ)] = λt + o(1) (Proposition 2.4). A change of variables then leads us to the Sine_β process, whose definition we now recall [14]. Let α_t(λ) be the solution to the following SDE.
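The heuristic above can be illustrated by a toy Euler–Maruyama simulation of an SDE of the schematic form dΘ_t = λ dt + σ Re[(e^{2iΘ_t} − 1) dZ_t], where σ is an illustrative stand-in for the (n- and α-dependent) diffusion coefficient elided here. Since the noise increment has mean zero, E[Θ_t] = λt, in line with Proposition 2.4.

```python
import cmath
import math
import random

random.seed(1)

def simulate_theta(lam, sigma, T=1.0, n_steps=200):
    """Euler-Maruyama for d(Theta) = lam dt + sigma * Re[(e^{2i Theta} - 1) dZ],
    with Z = X + iY a complex Brownian motion (X, Y independent)."""
    dt = T / n_steps
    sd = math.sqrt(dt)
    theta = 0.0
    for _ in range(n_steps):
        dZ = complex(random.gauss(0.0, sd), random.gauss(0.0, sd))
        theta += lam * dt + sigma * ((cmath.exp(2j * theta) - 1) * dZ).real
    return theta

lam = 3.0
samples = [simulate_theta(lam, sigma=1.0) for _ in range(2000)]
mean = sum(samples) / len(samples)

# The martingale term has mean zero, so E[Theta_1] = lam * 1.
assert abs(mean - lam) < 0.3
```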
Then the function t → α_t(λ)/2π is non-decreasing and the limit α_∞(λ) := lim_{t→∞} α_t(λ) satisfies α_∞(λ) ∈ 2πZ, a.s. The Sine_β process on the interval [λ_1, λ_2] is then defined in terms of these limits. Allez-Dumaz [1] showed that Sine_β → Poisson(dλ/2π) in distribution as β → 0. This fact can easily be generalized to processes in which the drift term of the corresponding SDE (1.3) is replaced by a function f satisfying mild conditions [12]; moreover, eq.(1.3) may be rescaled by t → (β/4)t. Here C(E_0, F) is a positive constant depending on E_0 and F, and we use the assumptions on a, a′ to estimate the error terms. By a time change, we may assume that M_t is a Brownian motion. We divide the interval [0, 1] into small random subintervals I_k = [τ_k/N, τ_{k+1}/N] and consider the stationary processes S_±, which solve the following SDEs on each I_k.
The rest of this paper is organized as follows. In Section 2, we study the behavior of Θ_{nt}(λ) and derive some properties of the expectation of Θ_{nt}(λ) and the monotonicity of the function t → ⌊Θ_{nt}(λ)/π⌋. In Section 3, we derive the Riccati equation (1.4) satisfied by R(nt). In Section 4, we estimate R(nt^γ) from above and below by solutions R_± of simple SDEs. In Section 5, following the argument in [1], we consider the stationary approximations S_± of R_± and compute their explosion times; we then show that the jump points of the function t → ⌊Θ_{nt}/π⌋ converge to a Poisson process and that the processes P_{λ_1} and P_{λ_1,λ_2} mentioned above are independent. In Section 6, we prove the main theorems.

Behavior of Θ_{nt}(λ)
In this section we introduce notation and derive some basic properties of the relative Prüfer phase Θ_{nt}(λ). Let θ̃_t(κ) be defined as below; it satisfies the following integral equation.
Here we make use of the following identity, which is a consequence of Ito's formula [8]: for f ∈ C^∞(M) and κ ≠ 0, where L is the generator of X_t. Eq.(2.3) and integration by parts yield the following equation. Putting m = 1, ϕ = F, and b(t) = a(t) in Lemma 2.2, we have the bound below.

Proof. Note that r_m^{(n)} is uniformly bounded in n. Iterating this process until we obtain a(s)^{j_0} yields the claim.
EJP 22 (2017), paper 69.

Riccati equation
This definition is different from that in Section 2. To study the hitting time of Θ_{nt}(λ) to the set πZ, or that of (Θ_{nt}(λ′) − Θ_{nt}(λ)) in general, we consider the quantity below. Here we recall that, for the Sine_β process, the corresponding process R̃(t) := log tan(α_t(λ)/4), with α_t(λ) the solution to eq.(1.3), satisfies eq.(3.2). The following proposition implies that R(nt) is close to the solution of an SDE similar to eq.(3.2).
where M is a martingale with quadratic variation (3.4). The last term E(nt) in eq.(3.3) is a negligible error compared to the 1st and 2nd terms of the RHS of eq.(3.3), and has the following form.
Proof. First of all, we introduce the notation A ≈ B, meaning that A − B is the sum of a negligible error E(nt) and a martingale N whose quadratic variation is negligible compared to that of M in eq.(3.4).
Here the integrands of III, IV are equal to cosh(R(s)) · a(s) n^{-1} multiplied by bounded functions, so that III, IV ≈ 0. Hence it suffices to compute the 2nd term II, which has the following form. In order to compute J(κ), we introduce the quantities below and let n → ∞. Set M to be the sum of (2κ_0)^{-1} Re N and all the other martingales that appeared in the above argument, after taking real parts and multiplying by (2κ_0)^{-1}. Then M satisfies eq.(3.4).

A comparison argument
In this section we consider R := R − e^{(n)}, carry out a scaling and time change, and bound it from above and below by diffusions R_± obeying the simple SDEs (4.1), (4.2).
We first prepare some notation; let the quantities below be given. We consider the diffusions R_± which are the solutions to the SDEs (4.1), (4.2), where W_t is a standard Brownian motion starting at 0. Then we have the following bound on R.
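The comparison step can be sketched with generic drifts (tanh-shaped placeholders, not the actual coefficients of (4.1), (4.2)): driving two Euler schemes with the same Brownian increments and pointwise-ordered drifts preserves the pathwise order R_− ≤ R_+.

```python
import math
import random

random.seed(7)

def coupled_paths(b_minus, b_plus, r0=0.0, T=1.0, n_steps=10000):
    """Euler scheme for dR = b(R) dt + dW with a shared Brownian increment,
    so the ordering of the drifts is the only difference between the paths."""
    dt = T / n_steps
    sd = math.sqrt(dt)
    r_minus = r_plus = r0
    minus_path, plus_path = [r0], [r0]
    for _ in range(n_steps):
        dW = random.gauss(0.0, sd)  # same noise for both diffusions
        r_minus += b_minus(r_minus) * dt + dW
        r_plus += b_plus(r_plus) * dt + dW
        minus_path.append(r_minus)
        plus_path.append(r_plus)
    return minus_path, plus_path

# Ordered drifts: b_minus <= b_plus pointwise (both 1-Lipschitz).
b_minus = lambda r: math.tanh(r) - 1.0
b_plus = lambda r: math.tanh(r) + 1.0
minus_path, plus_path = coupled_paths(b_minus, b_plus)

# Pathwise comparison: R_-(t) <= R_+(t) for all t.
assert all(a <= b for a, b in zip(minus_path, plus_path))
```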
provided the initial values coincide.
We note that ψ_{κ_0} = −2 Re F g_{κ_0} and apply the bounds above to the 1st and 2nd terms; then, by the comparison theorem, we have the claim,
The propositions and lemmas in this section can be proved in the same manner as in [1] by putting β = n^{−1/γ}, but we give their proofs in Appendix II for the sake of completeness.

Preliminary: explosion time of stationary approximation
In this subsection we study the explosion times of the stationary approximations S_± of R_±, which solve the SDEs (5.1) obtained by replacing the coefficient γ t^{γ−1} in the drift terms of eqs.(4.1), (4.2) by 1. If |S_±| > δ, the drift terms of these SDEs are just constant multiples of shifts of cosh and tanh, so that the analysis in [1] also works. Because the potential corresponding to the drift term of SDE (5.1) has a barrier between the local minimum in the well and the local maximum, we have a "memory-loss effect", so that the explosion time converges to an exponential distribution. More precisely, let ζ_± be the explosion times of S_±, and let the quantities below be the expectation and the Laplace transform of ζ_± conditioned on S_±(0) = r, respectively. We then have the following.
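The role of the "memory-loss effect" is that the exponential law is the unique continuous memoryless distribution, so an approximate loss of memory at the barrier forces the explosion time towards an exponential limit. A quick numerical reminder of memorylessness (pure illustration; Exp(1) samples stand in for ζ_±):

```python
import math
import random

random.seed(0)

# Memorylessness: P(T > s + t | T > s) = P(T > t) for T ~ Exp(c).
n = 200_000
samples = [random.expovariate(1.0) for _ in range(n)]

s, t = 0.5, 1.0
survived_s = [x for x in samples if x > s]
cond = sum(1 for x in survived_s if x > s + t) / len(survived_s)
uncond = sum(1 for x in samples if x > t) / n

# Both conditional and unconditional tails are close to exp(-t) = exp(-1).
assert abs(cond - math.exp(-1)) < 0.01
assert abs(uncond - math.exp(-1)) < 0.01
```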

Poisson convergence for marginals
In this subsection, we prove that the marginal ξ_L(I) of ξ_L on an interval I converges to a Poisson distribution, by showing that the jump points of the function t → ⌊Θ_{nτ(t)^γ}(λ)/π⌋ converge to a Poisson process. This will be done by dividing the time interval [0, 1] into small random subintervals I_k and approximating R_± by S_± on each I_k. In order for such an approximation to work, we need to show that {Θ_{nτ(t)^γ}(λ)}_π is sufficiently small on a sufficiently large portion of the time interval, which is guaranteed by Lemma 5.4. In order to prove Lemma 5.4, we need some estimates on the explosion time of R^{(n)}. Since τ(t) = 1 + o(1) uniformly in ω ∈ Ω, all statements in this subsection are also valid for R(nt^γ). Let T_r := inf{s : R^{(n)}(s) = r} be the hitting time of R^{(n)} to r ∈ R ∪ {+∞}. We denote by P_{r_0, t_0} the law of R^{(n)} conditioned on R^{(n)}(t_0) = r_0; if t_0 = 0, we simply write P_{r_0, t_0} = P_{r_0}.

Lemma 5.2. Let 0 < ε < 1 and c > γ + 1/2. Then we can find a constant c > 0 such that the following bound holds. Idea of proof: (i) we derive the probability of the event that R^{(n)} reaches c log n.

Then we can find a constant C such that Ξ_n(t) ≤ C n^{−1/(2γ)} log n. We shall study the distribution of the jump points of the function t → ⌊Θ_{nτ(t)^γ}(λ)/π⌋; the corresponding point process is defined below. Then the statements of Lemmas 5.2 and 5.4 take the following form.
Then we can find a constant C such that the bound below holds. We can now prove that the jump points of the function t → ⌊Θ_{nτ(t)^γ}(λ)/π⌋ converge to a Poisson process.
and the same statement also holds for the point process µ^{(n)}_λ whose atoms are given below. Idea of proof: let Θ^{(n)}_± be the solutions to the following SDEs, in which the constant λ in SDE (5.1) is modified accordingly. We can then estimate the number of jump points of ⌊Θ_{nτ(t)^γ}(λ)/π⌋ from above and below by those of ⌊Θ^{(n)}_{±,t}(λ)/π⌋. By Lemma 5.6 and by the definition of T_k, at the starting point of each interval I_k we may suppose Θ_{nτ(t)^γ}(λ) ≤ 2 arctan n^{−1/(4γ)} with good probability, so that by Proposition 5.1 the explosion time of Θ^{(n)}_± converges to the exponential distribution on each interval, which proves the statement of Proposition 5.7 for Θ_{nτ(t)^γ}(λ). Since τ(t) = 1 + o(1) uniformly in ω ∈ Ω, the same statement also holds for µ^{(n)}_λ. We can apply all the arguments in the previous sections to Θ_{nt^γ}(λ′) − Θ_{nt^γ}(λ), yielding the following.
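The counting device used here can be made concrete with a small helper (an illustrative utility, not the paper's exact construction): given a sampled path t ↦ θ(t), record the times at which ⌊θ(t)/π⌋ increases; these times are the atoms of the associated point process.

```python
import math

def jump_times(ts, thetas):
    """Times at which floor(theta/pi) increases along a sampled path."""
    jumps = []
    prev = math.floor(thetas[0] / math.pi)
    for t, th in zip(ts[1:], thetas[1:]):
        cur = math.floor(th / math.pi)
        if cur > prev:
            jumps.extend([t] * (cur - prev))
        prev = cur
    return jumps

# Deterministic check on theta(t) = t^2 on [0, 3]:
# floor(t^2/pi) jumps at t = sqrt(k*pi) for k = 1, 2 (sqrt(3*pi) > 3).
ts = [3 * i / 10000 for i in range(10001)]
thetas = [t * t for t in ts]
jumps = jump_times(ts, thetas)
assert len(jumps) == 2
assert abs(jumps[0] - math.sqrt(math.pi)) < 1e-3
assert abs(jumps[1] - math.sqrt(2 * math.pi)) < 1e-3
```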
By using these lemmas, we can show the following.

Proposition 5.13. Let ν^{(n)} be the point process on R defined below.

Proof of Theorems

Proof of Theorem 1.6
The first statement (1) of Theorem 1.6 can be proved in the same manner as [7, Proposition 7.1]: the only major difference is to show that the function t → ⌊Θ_{nt}(λ)/π⌋ converges to a Poisson jump process.

Proof of Theorem 1.1
By Proposition 5.13, for any k ∈ N and c_i, d_i ∈ R we have the convergence below, where Θ_1(·) is a Poisson jump process; we then conclude by [7, Lemma 9.1]. (In [14] it is shown that, for β ≤ 2, α_t(λ) converges to α_∞(λ) from above, which is consistent with this argument.)

Appendix I
In this section we prepare some estimates necessary to prove Proposition 3.1. The basic strategy of our computation is as follows: for the terms whose integrand contains a factor of the form e^{iκs} H(X_s) ds (κ ≠ 0), we use eq.(2.3) and perform integration by parts to obtain terms whose integrands are multiplied by a(s) or a′(s), so that they have better decay. We may continue this process as many times as we need, finally obtaining negligible terms. On the other hand, for the terms with H(X_s) ds (that is, κ = 0), we use eq.(2.4) instead, obtaining the 2nd term of the RHS of eq.(3.3).
We first consider the following quantity, which often appears in the computation of J(κ; j; H).
We estimate ∆J_1, · · ·, ∆J_5 separately. It will turn out that ∆J_1, ∆J_4 are negligible, ∆J_2 is equal to the 1st term of the RHS of (7.1) modulo error, and ∆J_3 is equal to the 2nd term of the RHS of (7.1) or to the RHS of (7.2).
(2) ∆J_2: we separate the discussion into the following two cases. (i) j ≥ 2: as in the proof of Lemma 7.1, we may ignore the term with the (c − d)/n factor and replace 1/2κ_c, 1/2κ_d by 1/2κ_0, and we compute ∆J_2 using (7.3). ∆J_{2−1−1} already has the desired form. For ∆J_{2−1−2}, integration by parts yields the expression below; as in the proof of Lemma 7.1, the 1st and 3rd terms are negligible, and in the 2nd term the part with the (c − d)/n factor is also negligible, while 1/2κ_c, 1/2κ_d may be replaced by 1/2κ_0 up to a negligible error. In the last step, we used Lemma 7.2. For ∆J_{2−1−3}, we have the bound below, so that ∆J_{2−1−3} ≈ 0. Therefore we obtain the stated form. For ∆J_{2−2}, we use (2.3) with κ = 4κ_0, perform integration by parts, estimate as before, and use Lemma 7.2. To summarize, the terms combine into an integral of cos(∆θ_s) a(s)^{j+1} ds.

Appendix II
In Appendix II, we provide the proofs of Proposition 5.1 and of the statements in Section 5 for the sake of completeness, all of which follow the arguments in [1].
Proof of Proposition 5.1. We discuss the computation of the expectation; the potential satisfies V_+(r) = W_+(r) for r = 0, −δ. We first derive the critical points r = a_n, b_n at which W′_+(r) = 0. Substituting the above equations, and noting that the error terms vanish and λ̃ → λ, a_n → −∞, b_n → 0 as n → ∞, we obtain the claim. The statement for the Laplace transform is derived in the same way as in the proof of [1, Proposition 2.2].

Proof of Lemma 5.2. The LHS of the inequality in question is bounded from below by the product of the following two factors,
which we estimate separately.
(1) If 2 log n^{1/γ} < r < c log n^{1/γ}, the drift term of the SDE for R_− satisfies (drift term) ≥ (1/2) C_n^2 tanh r ≥ (1/4) C_n^2, so that the first factor (1) is bounded from below by the probability of the following event,
where B_t is a Brownian motion with B_0 = 0. By the reflection principle, we obtain the desired bound. (ii) We then bound the probability that the interval in question contains at least one explosion.
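The reflection principle invoked here states that, for a standard Brownian motion B with B_0 = 0, P(max_{s ≤ t} B_s ≥ a) = 2 P(B_t ≥ a). A Monte Carlo sanity check (illustration only; the discrete-time maximum slightly underestimates the continuous one):

```python
import math
import random

random.seed(3)

def running_max_exceeds(a, T=1.0, n_steps=400):
    """Simulate one Brownian path on [0, T]; does its running maximum reach a?"""
    dt = T / n_steps
    sd = math.sqrt(dt)
    b = 0.0
    for _ in range(n_steps):
        b += random.gauss(0.0, sd)
        if b >= a:
            return True
    return False

a, n_paths = 1.0, 10_000
est = sum(running_max_exceeds(a) for _ in range(n_paths)) / n_paths

# Reflection principle: P(max B >= 1) = 2 * P(B_1 >= 1) = 2 * (1 - Phi(1)).
exact = 2 * (1 - 0.5 * (1 + math.erf(1 / math.sqrt(2))))
assert abs(est - exact) < 0.06
```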
With the explosion points as below, we have the following; it thus suffices to take expectations of both sides and use the inequality below. From now on, for the sake of simplicity, we use the following notation. (1) We may take I = [0, t]. The upper bound simply follows from the estimate below. For the lower bound, we consider the 2nd term, which vanishes as n → ∞. For the 1st term, we note that E[N^−_k | C_k] = π^{−1} λ γ (T_k/N)^{γ−1} · (τ_{k+1}/N) by Proposition 5.1. Hence, by the convergence of the Riemann sum to the integral ∫ γ s^{γ−1} ds as N → ∞, we obtain the claim.
(2) We first suppose I = [t_1, t_2]. Using the two estimates below and taking N → ∞ proves (2), establishing the last inequality in (8.1). Now we integrate both sides of (8.1) and use Lemma 5.6 for the 1st and 3rd terms of the RHS.
It is then sufficient to show lim sup_{n→∞} p^n_N = 0. Let (T_k)_k be the random division of intervals used in the proof of Proposition 5.7. Then we have p^n_N ≤ P(∃ k ≤ [2Nt] + 1 : · · ·), where we used the monotonicity of ⌊Θ^{(n)}_{λ,λ′}/π⌋. It thus suffices to use Lemma 5.9.
⌊·/π⌋ jumps more than twice on the interval in question, where we used the monotonicity of ⌊Θ^{(n)}_{λ,λ′}/π⌋ in the last inequality. The 1st term of the RHS of (8.2) was estimated in the proof of Lemma 5.11; for the 2nd term, we use Proposition 5.7.
lim sup