Spectral analysis of 1D nearest-neighbor random walks and applications to subdiffusive trap and barrier models

We consider a family X^{(n)}, n ∈ N_+, of continuous-time nearest-neighbor random walks on the one-dimensional lattice Z. We reduce the spectral analysis of the Markov generator of X^{(n)} with Dirichlet conditions outside (0, n) to the analogous problem for a suitable generalized second order differential operator −D_{m_n}D_x, with Dirichlet conditions outside a given interval. If the measures dm_n weakly converge to some measure dm_*, we prove a limit theorem for the eigenvalues and eigenfunctions of −D_{m_n}D_x towards the corresponding spectral quantities of −D_{m_*}D_x. As a second result, we prove the Dirichlet-Neumann bracketing for the operators −D_m D_x and, as a consequence, we establish lower and upper bounds for the asymptotic annealed eigenvalue counting functions in the case that m is a self-similar stochastic process. Finally, we apply the above results to investigate the spectral structure of some classes of subdiffusive random trap and barrier models coming from one-dimensional physics.


Introduction
Continuous-time nearest-neighbor random walks on Z are a basic object in probability theory with numerous applications, including the modeling of one-dimensional physical systems. A fundamental example is given by the simple symmetric random walk (SSRW) on Z, of which we recall some standard results. It is well known that the SSRW converges to the standard Brownian motion under diffusive space-time rescaling. Moreover, the sign-inverted Markov generator with Dirichlet conditions outside (0, n) has exactly n − 1 eigenvalues, which are all positive and simple. Labeling the eigenvalues in increasing order as {λ_k^{(n)} : 1 ≤ k < n}, the k-th one is given by λ_k^{(n)} = 1 − cos(πk/n), with associated eigenfunction f_k^{(n)}(x) = sin(kπx/n). In particular, n² λ_k^{(n)} → (kπ)²/2 =: λ_k and f_k^{(n)}(nx) → sin(kπx) =: f_k(x), where the last limit is in the space C([0, 1]) endowed with the uniform norm. On the other hand, the standard Laplacian −(1/2)∆ on [0, 1] with Dirichlet boundary conditions has {λ_k : k ≥ 1} as family of eigenvalues and f_k as eigenfunction associated to the simple eigenvalue λ_k.
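These closed-form formulas are easy to check numerically; the following short script (an illustration of ours, not part of the paper) diagonalizes the sign-inverted Dirichlet generator of the SSRW and compares it with the expressions above.

```python
import numpy as np

# Sign-inverted Markov generator of the SSRW with Dirichlet conditions
# outside (0, n), acting on the interior sites 1, ..., n-1:
#   (-L f)(x) = f(x) - (f(x-1) + f(x+1))/2.
n = 20
A = (np.diag(np.ones(n - 1))
     - 0.5 * np.diag(np.ones(n - 2), 1)
     - 0.5 * np.diag(np.ones(n - 2), -1))

# Eigenvalues match lambda_k^(n) = 1 - cos(pi k / n), k = 1, ..., n-1.
evals = np.sort(np.linalg.eigvalsh(A))
closed = 1.0 - np.cos(np.pi * np.arange(1, n) / n)
assert np.allclose(evals, closed)

# The k-th eigenfunction is x -> sin(k pi x / n) on the lattice.
k = 3
f = np.sin(k * np.pi * np.arange(1, n) / n)
assert np.allclose(A @ f, closed[k - 1] * f)

# Diffusive limit: n^2 * lambda_k^(n) is close to (k pi)^2 / 2.
assert abs(n**2 * closed[k - 1] - (k * np.pi) ** 2 / 2) < 1.0
```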
Considering this simple example it is natural to ask how general the above considerations can be. In particular, given a family of continuous-time nearest-neighbor random walks X^{(n)} defined on the rescaled interval [0, 1] ∩ Z_n, Z_n := {k/n : k ∈ Z}, killed when reaching the boundary, one would like very general criteria to establish (i) the convergence of X^{(n)} to some stochastic process X^{(∞)}, (ii) the convergence of the eigenvalues and eigenfunctions of the Dirichlet Markov generator of X^{(n)} to the corresponding spectral quantities of the Dirichlet Markov generator of some stochastic process Y^{(∞)}. Note that we have not imposed X^{(∞)} = Y^{(∞)}; the reason will be clarified soon.
Criteria to establish (i), also in a more general context, have been developed by C. Stone in [S], while in the first part of this paper we develop a general criterion to establish (ii). In order to allow a better understanding of the connection between the two solutions of (i) and (ii), we briefly recall the approach of [S]. The starting observation is that X^{(n)} can be expressed as the (S_n, dM_n)-space-time change of the (suitably killed) standard Brownian motion B, for some scale function S_n and some speed measure dM_n (cf. [IM], [D], [L2]). If S_n is the identity function I and dM_n converges to some measure dM (as for the SSRW), one can apply Stone's result and conclude, under suitable weak technical assumptions, that X^{(n)} converges to the process X^{(∞)} obtained as the (I, dM)-space-time change of B, suitably killed. If S_n is not the identity function, one first introduces a new random walk Y^{(n)} as follows. Observing that dM_n must be of the form dM_n = Σ_i w_i δ_{y_i} for a countable set {y_i}, while S_n is an increasing function on {y_i}, one sets dm_n = Σ_i w_i δ_{S_n(y_i)}. Then Y^{(n)} is defined as the nearest-neighbor random walk on {S_n(y_i)} obtained as the (I, dm_n)-space-time change of B, suitably killed. If dm_n converges to some measure dm, then one can try to apply Stone's result to get the convergence Y^{(n)} → Y^{(∞)}, Y^{(∞)} being the (I, dm)-space-time change of B, suitably killed. Afterwards, one can try to derive from this limit the convergence of X^{(n)} to some process X^{(∞)} using the fact that X^{(n)} = S_n^{−1}(Y^{(n)}). These methods have been successfully applied in order to study rigorously the asymptotic behavior of nearest-neighbor random walks on Z with random environment, such as the random barrier model [KK], [FJL] and the random trap model [FIN], [BC1], [BC2] (see below).
We briefly describe our spectral continuity theorem concerning problem (ii). As remarked above, one can always transform X^{(n)} into the random walk Y^{(n)} having identity scale function. This transformation proves crucial, since the Markov generator of Y^{(n)} can be defined on continuous and piecewise-linear functions and the convergence of eigenfunctions takes place simply in the uniform topology (otherwise one is forced to deal with rather complex function spaces as in [FJL]). We show that the sign-inverted Markov generator of Y^{(n)} can be written as a generalized differential operator −D_{m_n}D_x on (0, S_n(1)) with Dirichlet b.c. (boundary conditions), having n − 1 eigenvalues {λ_k^{(n)} : 1 ≤ k < n} which are all positive and simple. Suppose now that S_n(1) → ℓ and that dm_n vaguely converges to some measure dm which is not given by a finite set of atoms and whose support has 0, ℓ as extremes. Then the eigenvalues and the associated eigenfunctions of −D_{m_n}D_x converge to the corresponding quantities of the generalized differential operator −D_m D_x on (0, ℓ) with Dirichlet b.c. It is well known (cf. [L1], [L2]) that this operator is the Markov generator of the above limit process Y^{(∞)}, and we show that it has only positive and simple eigenvalues. We point out that a similar convergence result is proven by T. Uno and I. Hong in [UH] for a family of differential operators on Γ_n, where Γ_n is a suitable sequence of subsets of R converging to the Cantor set. Some ideas in their proof have been applied to our context, while others are very model-dependent. The route followed here is more inspired by modern Sturm-Liouville theory [KZ], [Ze], where the continuity of the spectral structure is related to the continuity properties of a suitable family of entire functions. Our continuity result is also close to Theorem 1 in [K]. There, the author considers generalized second order differential operators without boundary conditions.
As a second step in our investigation we have proved the Dirichlet-Neumann bracketing for the generalized operator −D_m D_x (Theorem 8.8). This is a key result in order to get estimates on the asymptotics of eigenvalues. We recall that the limit distribution of the eigenvalues has been studied for several operators; we mention Weyl's classical theorem for the Laplacian on bounded Euclidean domains (see [W1], [W2], [CH1], [RS4, Chapter XIII.15]). A key ingredient in this analysis is given by the Dirichlet-Neumann bracketing. The form of the bracketing used in our investigation goes back to G. Métivier and M.L. Lapidus (cf. [Me], [L]) and has been successfully applied in [KL] to establish an analogue of Weyl's classical theorem for the Laplacian on finitely ramified self-similar fractals. In order to apply the Dirichlet-Neumann bracketing to our context we have first analyzed the generalized differential operators −D_m D_x with Dirichlet and Neumann b.c. as self-adjoint operators on suitable Hilbert spaces and we have studied the associated quadratic forms. Finally, from the Dirichlet-Neumann bracketing we have derived the behavior at ∞ of the averaged eigenvalue counting function of the operator −D_m D_x on a finite interval with Dirichlet b.c. under the assumption that m is a self-similar stochastic process (see Proposition 2.2). We point out that in [Fr], [H], [KL], [UH] the authors study the asymptotics of the eigenvalues for the Laplacian defined on self-similar geometric objects. In our case, the self-similarity structure enters into the problem through the self-similarity of m.
As an application of the above analysis (Theorem 2.1, Theorem 8.8 and Proposition 2.2) we have investigated the small eigenvalues of some classes of subdiffusive random trap and barrier models (Theorems 2.3 and 2.5). Let T = {τ(x) : x ∈ Z} be a family of positive i.i.d. random variables belonging to the domain of attraction of an α-stable law, 0 < α < 1. Given T, in the random trap model the particle waits at site x an exponential time with mean τ(x) and after that it jumps to x − 1, x + 1 with equal probability. In the random barrier model, the probability rate for a jump from x − 1 to x equals the probability rate for a jump from x to x − 1 and is given by 1/τ(x). We consider also generalized random trap models, called asymmetric random trap models in [BC1]. Let us call X^{(n)} the rescaled random walk on Z_n obtained by accelerating the dynamics by a factor of order n^{1+1/α} (apart from a slowly varying function) and rescaling the lattice by a factor 1/n. As investigated in [KK], [FIN] and [BC1], the law of X^{(n)} averaged over the environment T equals the law of a suitable V-dependent random walk X̂^{(n)} averaged over V, V being an α-stable subordinator. To this last random walk X̂^{(n)} one can apply our general results, obtaining in the end some annealed spectral information about X^{(n)}.
Random trap and random barrier walks on Z have been introduced in physics in order to model 1d particle or excitation dynamics, random 1d Heisenberg ferromagnets, 1d tight-binding fermion systems, and electrical lines of conductances or capacitances [ABSO]. More recently (cf. [BCKM], [BDe] and references therein) subdiffusive random walks on Z have been used as toy models for slowly relaxing systems such as glasses and spin glasses exhibiting aging, i.e. such that the time-time correlation functions keep memory of the preparation time of the system even asymptotically. Our results contribute to the investigation of the spectral properties of aging stochastic models. This analysis and the study of the relation between aging and the spectral structure of the Markov generator has been carried out in [BF1] for the REM-like trap model on the complete graph. Estimates on the first Dirichlet eigenvalue of X^{(n)} in the case of subdiffusive (also asymmetric and in Z^d, d ≥ 1) trap models have been derived in [Mo], while the spectral structure of the 1d Sinai random walk for small eigenvalues has been investigated in [BF1]. The method developed in [BF1] is based on perturbation and capacity theory together with the property that the random environment can be approximated by a multiple-well potential. This method cannot be applied here and we have followed a different route.
Finally, we mention that we have also applied our spectral continuity theorem to diffusive random walks, improving some previous results (cf. [BD]) as described in Propositions 2.4 and 2.6.

Model and results
We consider a generic continuous-time nearest-neighbor random walk (X_t : t ≥ 0) on Z. We denote by c(x, y) the probability rate for a jump from x to y: c(x, y) > 0 if and only if |x − y| = 1, and the Markov generator L of X_t can be written as

Lf(x) = c(x, x + 1)[f(x + 1) − f(x)] + c(x, x − 1)[f(x − 1) − f(x)]  (2.1)

for any bounded function f : Z → R. The random walk X_t can be described as follows: arrived at site x ∈ Z, the particle waits an exponential time of mean 1/[c(x, x − 1) + c(x, x + 1)], after which it jumps to x − 1 and x + 1 with probability c(x, x − 1)/[c(x, x − 1) + c(x, x + 1)] and c(x, x + 1)/[c(x, x − 1) + c(x, x + 1)], respectively.
By a recursive procedure, one can always determine two positive functions U and H on Z such that

c(x, y) = 1/[H(x)U(x ∨ y)] , ∀x, y ∈ Z : |x − y| = 1 .  (2.2)

Moreover, the above functions U and H are uniquely determined apart from a positive factor c multiplying U and dividing H. Indeed, the system of equations (2.2) is equivalent to the system

U(x + 1) = 1/[H(x) c(x, x + 1)] ,  H(x + 1) = 1/[U(x + 1) c(x + 1, x)] ,  x ∈ Z ,  (2.3)

which determines U and H recursively once the value of U at a single site is fixed. We observe that U is a constant function if and only if the jump rates c(x, y) depend only on the starting point x. Taking without loss of generality U ≡ 2, we get that after arriving at site x the random walk X_t waits an exponential time of mean H(x) and then jumps with equal probability to x − 1 and to x + 1. This special case is known in the physics literature as trap model [ABSO]. Similarly, we observe that H is a constant function if and only if the jump rates c(x, y) are symmetric, that is c(x, y) = c(y, x) for all x, y ∈ Z.
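The recursive determination of U and H is elementary to implement on a finite window; the helper below is a hypothetical illustration of ours (the function name and the seed normalization U(1) = 1 are not from the paper, which only asserts existence and uniqueness up to one multiplicative constant).

```python
def scale_speed_factors(c, N, U1=1.0):
    """Recover positive functions U, H on the window {0, ..., N} from
    nearest-neighbour jump rates via c(x, y) = 1/(H(x) U(max(x, y))).
    `c` maps pairs (x, y) with |x - y| = 1 to positive rates; the seed
    U1 = U(1) fixes the one free multiplicative constant.
    (Hypothetical helper illustrating the recursion; not from the paper.)"""
    U = {1: U1}
    H = {0: 1.0 / (c[(0, 1)] * U[1])}
    for x in range(1, N):
        H[x] = 1.0 / (c[(x, x - 1)] * U[x])      # from c(x, x-1) = 1/(H(x) U(x))
        U[x + 1] = 1.0 / (H[x] * c[(x, x + 1)])  # from c(x, x+1) = 1/(H(x) U(x+1))
    H[N] = 1.0 / (c[(N, N - 1)] * U[N])
    return U, H


# Check the factorization on arbitrary positive rates.
import random

random.seed(0)
N = 6
c = {}
for x in range(N):
    c[(x, x + 1)] = random.uniform(0.5, 2.0)
    c[(x + 1, x)] = random.uniform(0.5, 2.0)
U, H = scale_speed_factors(c, N)
assert all(abs(r - 1.0 / (H[x] * U[max(x, y)])) < 1e-12 for (x, y), r in c.items())
```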
Taking without loss of generality H ≡ 1, we get that c(x − 1, x) = c(x, x − 1) = 1/U(x). This special case is known in the physics literature both as barrier model [ABSO] and as random walk among conductances, since X_t corresponds to the random walk associated in a natural way to the linear resistor network with nodes given by the sites of Z and electrical filaments between nearest-neighbor nodes x − 1, x having conductance c(x − 1, x) = 1/U(x) [DS]. If the rates {c(x, x ± 1)}_{x∈Z} are random one speaks of random trap model, random barrier model and random walk among random conductances.
In order to describe some asymptotic spectral behavior as n ↑ ∞, we consider a family X^{(n)}(t) of continuous-time nearest-neighbor random walks on Z_n := {k/n : k ∈ Z} parameterized by n ∈ N_+ = {1, 2, . . . }. We call c_n(x, y) the corresponding jump rates and we fix positive functions U_n, H_n satisfying the analogue of equation (2.3) (all is referred to Z_n instead of Z). Below we denote by L_n the pointwise operator defined at x ∈ Z_n as

L_n f(x) = c_n(x, x + 1/n)[f(x + 1/n) − f(x)] + c_n(x, x − 1/n)[f(x − 1/n) − f(x)]  (2.4)

for all functions f whose domain contains x − 1/n, x, x + 1/n. The Markov generator of X^{(n)}_t with Dirichlet conditions outside (0, 1) will be denoted by L_n. We recall that it is defined as the operator L_n : V_n → V_n, where V_n is the space of functions f : [0, 1] ∩ Z_n → C vanishing at 0 and 1, the action of L_n at the interior points being given by (2.4). As discussed in Section 4, the operator −L_n has n − 1 eigenvalues which are all simple and positive, while the related eigenvectors can be taken as real vectors. Below we write the eigenvalues in increasing order as λ_1^{(n)} < · · · < λ_{n−1}^{(n)}. In order to determine the suitable frame for the analysis of the eigenvalues and eigenvectors of −L_n, we recall some definitions from the theory of generalized second order differential operators −D_m D_x (cf. [KK0], [DM], [K1, Appendix]), initially developed to analyze the behavior of a vibrating string. Let m : R → [0, ∞) be a nondecreasing function with m(x) = 0 for all x < 0. Without loss of generality we can suppose that m is càdlàg. We denote by dm the Lebesgue-Stieltjes measure associated to m, i.e. the Radon measure such that dm((a, b]) = m(b) − m(a) for all a < b (2.5). We define E_m as the support of dm, i.e. the set of points where m increases:

E_m := supp(dm) = {x ∈ R : m(x − ε) < m(x + ε) ∀ε > 0} .  (2.6)

We suppose that E_m ≠ ∅, 0 = inf E_m and ℓ_m := sup E_m < ∞. Then, F ∈ C([0, ℓ_m], C) is an eigenfunction with eigenvalue λ of the generalized differential operator −D_m D_x with Dirichlet boundary conditions if F(0) = F(ℓ_m) = 0 and if it holds

F(x) = bx − λ ∫_0^x dy ∫_{[0,y]} F(s) dm(s) ,  x ∈ [0, ℓ_m] ,  (2.7)

for some constant b. We point out that (2.7) together with the boundary condition F(0) = 0 implies that b = lim_{ε↓0} [F(ε) − F(0)]/ε and that F must be linear on the intervals of R ∖ E_m.
The number b is called the derivative number and is denoted by F′_−(0) (see Section 4 for further details).
As discussed in [L1], [L2], the operator −D_m D_x with Dirichlet conditions outside (0, ℓ_m) is the generator of the quasidiffusion on (0, ℓ_m) with scale function s(x) = x and speed measure dm, killed when reaching the boundary points 0, ℓ_m. This quasidiffusion can be suitably defined as a time change of the standard one-dimensional Brownian motion [L2], [S].
The spectral analysis of −L_n can be reduced to the spectral analysis of a suitable generalized differential operator −D_{m_n}D_x as follows. We define the function S_n : [0, 1] ∩ Z_n → R as

S_n(k/n) := Σ_{j=1}^{k} U_n(j/n) ,  S_n(0) := 0 .  (2.8)

To simplify the notation, we set

x_k^{(n)} := S_n(k/n) , for k : 0 ≤ k ≤ n .  (2.9)

Finally, we define the nondecreasing càdlàg function m_n as

m_n(x) := Σ_{k : 1 ≤ k ≤ n−1, x_k^{(n)} ≤ x} H_n(k/n) ,  x ∈ R .  (2.10)

Then we set ℓ_n := S_n(1) = x_n^{(n)} and E_n := E_{m_n} = {x_k^{(n)} : 1 ≤ k ≤ n − 1}. We denote by C_n[0, ℓ_n] the set of complex continuous functions on [0, ℓ_n] that are linear on [0, ℓ_n] ∖ E_n. Then, the map

T_n : {f : [0, 1] ∩ Z_n → C} ∋ f → T_n f ∈ C_n[0, ℓ_n] ,  (T_n f)(x_k^{(n)}) = f(k/n) for 0 ≤ k ≤ n ,  (2.11)

is trivially bijective. As discussed in Section 4, the map T_n defines also a bijection between the eigenvectors of −L_n with eigenvalue λ and the eigenfunctions of the differential operator −D_{m_n}D_x with Dirichlet conditions outside (0, ℓ_n) associated to the eigenvalue λ.
We can finally state the asymptotic behavior of the small eigenvalues:

Theorem 2.1. Suppose that ℓ_n converges to some ℓ ∈ (0, ∞) and that dm_n weakly converges to a measure dm, where m : R → [0, ∞) is a càdlàg function such that m(x) = 0 for all x ∈ (−∞, 0). Assume that 0 = inf E_m, ℓ = sup E_m and that dm is not a linear combination of a finite family of delta measures. Then the generalized differential operator −D_m D_x with Dirichlet conditions outside (0, ℓ) has an infinite number of eigenvalues, which are all positive and simple. List these eigenvalues in increasing order as {λ_k : k ≥ 1}, and list the n − 1 eigenvalues of the operator −L_n, which are all positive and simple, as λ_1^{(n)} < · · · < λ_{n−1}^{(n)}. Then for each k ≥ 1 it holds

lim_{n↑∞} λ_k^{(n)} = λ_k .  (2.12)

For each k ≥ 1, fix an eigenfunction F_k with eigenvalue λ_k for the operator −D_m D_x with Dirichlet conditions. Then, by suitably choosing the eigenfunction F_k^{(n)} of −D_{m_n}D_x with eigenvalue λ_k^{(n)}, it holds

lim_{n↑∞} sup_{x ∈ [0, ℓ+1]} |F_k^{(n)}(x) − F_k(x)| = 0 ,  (2.13)

where F_k and F_k^{(n)} are set equal to zero on (ℓ, ℓ + 1] and (ℓ_n, ℓ + 1], respectively. Since by hypothesis the supports of dm_n and dm are all included in a common compact subset, the above weak convergence of dm_n towards dm is equivalent to the vague convergence: ∫_R f(s) dm_n(s) → ∫_R f(s) dm(s) for any function f ∈ C_c(R) (i.e. continuous with compact support).
The proof of the above theorem is given in Section 7.
We now describe another general result, relating self-similarity to the spectrum edge, whose application will be relevant below when studying subdiffusive random walks. Recall the definition (2.6) of E_m.
Proposition 2.2. Let m = (m(x) : x ≥ 0) be a stochastic process such that: (i) m is càdlàg and increasing a.s., (ii) m has stationary and independent increments, (iii) m is self-similar, namely there exists α > 0 such that for all γ > 0 the processes (m(x) : x ≥ 0) and (γ^{1/α} m(x/γ) : x ≥ 0) have the same law, (iv) extending m to all R by setting m ≡ 0 on (−∞, 0), for any x ∈ R with probability one x is not a jump point of m. Then, a.s. all eigenvalues of the operator −D_m D_x with Dirichlet conditions outside (0, 1) are simple and positive, and form a diverging sequence {λ_k(m) : k ≥ 1} if labeled in increasing order. The same holds for the eigenvalues {λ_k(m^{−1}) : k ≥ 1} of the operator −D_{m^{−1}}D_x with Dirichlet conditions outside (0, m(1)), where m^{−1} denotes the càdlàg generalized inverse of m, i.e.

m^{−1}(t) = inf{s ≥ 0 : m(s) > t} ,  t ≥ 0 .  (2.14)

Moreover, if there exists x_0 > 0 such that (2.15) holds, then there exist positive constants c_1, c_2 such that (2.16) holds. Similarly, if there exists x_0 > 0 such that (2.17) holds, then (2.18) and (2.19) hold. This will be understood also below, in Theorems 2.3 and 2.5. Since m is càdlàg, it has a countable (finite or infinite) number of jumps {z_i}, and for x ≥ 0 the identity (2.20) holds. Since we have assumed E_m = [0, ∞) a.s., m^{−1} must be continuous a.s. (observe that the jumps of m^{−1} correspond to the flat regions of m).
The proof of the above Proposition is given in Section 9 and is based on the Dirichlet-Neumann bracketing developed in Section 8 (cf. Theorem 8.8). When applying Proposition 2.2 we will present a simple argument to check (2.15) and (2.17).
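For a concrete feeling of the generalized inverse (2.14), here is a small illustrative routine of ours (not from the paper) computing m^{−1} for a pure-jump nondecreasing m given by finitely many atoms; it makes visible that flat regions of m become jumps of m^{−1}.

```python
def generalized_inverse(jumps, t):
    """m^{-1}(t) = inf{s >= 0 : m(s) > t} for the pure-jump nondecreasing
    function m(s) = sum of w over the jump points z <= s.
    `jumps` is a list of (z, w) pairs with z increasing and w > 0.
    (Illustrative helper; flat regions of m become jumps of m^{-1}.)"""
    total = 0.0
    for z, w in jumps:
        total += w
        if total > t:
            return z
    return float("inf")

# m has a jump of size 2 at s = 1 and of size 1 at s = 3, so m = 0 on
# [0, 1), m = 2 on [1, 3) and m = 3 on [3, oo).
jumps = [(1.0, 2.0), (3.0, 1.0)]
assert generalized_inverse(jumps, 0.0) == 1.0   # m(1) = 2 > 0
assert generalized_inverse(jumps, 1.9) == 1.0
assert generalized_inverse(jumps, 2.0) == 3.0   # the flat region (1, 3) is skipped
assert generalized_inverse(jumps, 3.0) == float("inf")
```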
As an application of Theorem 2.1 and Proposition 2.2, we consider special families of subdiffusive random trap and barrier models (cf. [ABSO], [KK], [FIN], [BC1], [BC2], [FJL] and references therein). To this aim we fix a family T := {τ(x) : x ∈ Z} of positive i.i.d. random variables in the domain of attraction of a one-sided α-stable law, 0 < α < 1. This is equivalent to the fact that there exists some function L_1(t) slowly varying as t → ∞ such that F(t) := P(τ(0) > t) = L_1(t) t^{−α}. Let us define the function h as

h(t) = inf{s ≥ 0 : 1/F(s) ≥ t} .  (2.21)

Then, by Proposition 0.8 (v) in [R] we know that

h(t) = t^{1/α} L_2(t)  (2.22)

for some function L_2 slowly varying as t → ∞.
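A standard concrete example of such a family (our illustration; the paper does not fix a distribution) is the Pareto law with exponent α, for which L_1 and L_2 can be taken constant. The sketch below samples it and checks the tail empirically.

```python
import random

ALPHA = 0.5

def sample_tau(rng):
    """Pareto(ALPHA) sample: F(t) = P(tau > t) = t**(-ALPHA) for t >= 1.
    This is a textbook example in the domain of attraction of a one-sided
    ALPHA-stable law: L_1 is constant, and by (2.21) one gets
    h(t) = t**(1/ALPHA), so L_2 is constant as well."""
    return rng.random() ** (-1.0 / ALPHA)

rng = random.Random(42)
samples = [sample_tau(rng) for _ in range(200_000)]

# Empirical tail versus the exact tail t**(-ALPHA) at a few points.
for t in (2.0, 5.0, 10.0):
    emp = sum(s > t for s in samples) / len(samples)
    assert abs(emp - t ** (-ALPHA)) < 0.01
```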
Finally, we denote by V the double-sided α-stable subordinator defined on some probability space (Ξ, F, P) (cf. [B], Section III.2). Namely, V has a.s. càdlàg paths, V(0) = 0 and V has non-negative independent increments such that for all s < t

E[exp{−λ(V(t) − V(s))}] = exp{−(t − s)λ^α} ,  λ ≥ 0 .  (2.23)

(Strictly speaking, inside the exponential in the r.h.s. there should be an extra positive factor c_0 that we have fixed equal to 1.) The sample paths of V are strictly increasing and of pure jump type, in the sense that V(t) − V(s) = Σ_{u ∈ (s,t]} [V(u) − V(u−)] for all s < t. Since V is strictly increasing P-a.s., V^{−1} has continuous paths P-a.s.

For random trap models we obtain:

Theorem 2.3. Fix a ≥ 0 and let T = {τ(x)}_{x∈Z} be a family of positive i.i.d. random variables in the domain of attraction of an α-stable law, 0 < α < 1. If a > 0, assume also that τ(x) is bounded from below by a non-random positive constant a.s.
Given a realization of T, consider the T-dependent trap model {X(t)}_{t≥0} on Z with the transition rates defined in (2.24), and call λ_1^{(n)}(T) < · · · < λ_{n−1}^{(n)}(T) the (simple and positive) eigenvalues of the Markov generator of X(t) with Dirichlet conditions outside (0, n). Then:

i) For each k ≥ 1, the T-dependent random vector on the l.h.s. of (2.25) weakly converges to the V-dependent random vector on the r.h.s. of (2.25), where γ = E(τ(0)^{−a}), the slowly varying function L_2 has been defined in (2.22) and {λ_k(V) : k ≥ 1} denotes the family of the (simple and positive) eigenvalues of the generalized differential operator −D_V D_x with Dirichlet conditions outside (0, 1).
ii) If a = 0 and E(exp{−λτ(x)}) = exp{−λ^α}, then in (2.25) the quantity L_2(n) can be replaced by the constant 1.

iii) There exist positive constants c_1, c_2 such that (2.26) holds.

The above random walk X(t) can be described as follows: after arriving at site x ∈ Z the particle waits an exponential time whose mean is determined by the rates (2.24), and then it jumps to x − 1 and x + 1 with the corresponding probabilities. The random walk X(t) is called random trap model following [BC1], although according to our initial terminology the name would be correct only when a = 0. Sometimes we will also refer to the case a ∈ (0, 1] as generalized random trap model. The additional assumption concerning the bound from below of τ(x) when a > 0 can be weakened. Indeed, as pointed out in the proof, we only need the validity of the strong LLN for a suitable triangular array of random variables.
Of course, one can consider also the diffusive case. Extending the results of [BD] we get:

Proposition 2.4. Fix a ≥ 0 and let T = {τ(x)}_{x∈Z} be a family of positive random variables, ergodic w.r.t. spatial translations and such that E(τ(x)) < ∞, E(τ(x)^{−a}) < ∞. Given a realization of T, consider the T-dependent trap model {X(t)}_{t≥0} on Z with transition rates (2.24) and call λ_1^{(n)}(T) < · · · < λ_{n−1}^{(n)}(T) the (simple and positive) eigenvalues of the Markov generator of X(t) with Dirichlet conditions outside (0, n). Then for each k ≥ 1 and for a.a. T, (2.27) holds.

Let us state our results concerning random barrier models:

Theorem 2.5. Let T = {τ(x)}_{x∈Z} be a family of positive i.i.d. random variables in the domain of attraction of an α-stable law, 0 < α < 1. Given a realization of T, consider the T-dependent barrier model {X(t)}_{t≥0} on Z with the jump rates defined in (2.28), and call λ_1^{(n)}(T) < · · · < λ_{n−1}^{(n)}(T) the eigenvalues of the Markov generator of X(t) with Dirichlet conditions outside (0, n). Recall the definition (2.22) of the positive slowly varying function L_2. Then:

i) For each k ≥ 1, the T-dependent random vector on the l.h.s. of (2.29) weakly converges to the V-dependent random vector on the r.h.s. of (2.29), where {λ_k(V^{−1}) : k ≥ 1} denotes the family of the (simple and positive) eigenvalues of the generalized differential operator −D_{V^{−1}}D_x with Dirichlet conditions outside (0, V(1)).

ii) If E(e^{−λτ(x)}) = e^{−λ^α} then in (2.29) the quantity L_2(n) can be replaced by the constant 1.

iii) There exist positive constants c_1, c_2 such that (2.30) holds.

Again, one can consider also the diffusive case. Extending the results of [BD] we get:

Proposition 2.6. Let T = {τ(x)}_{x∈Z} be a family of positive random variables, ergodic w.r.t. spatial translations and such that E(τ(x)) < ∞. Given a realization of T, consider the T-dependent barrier model {X(t)}_{t≥0} on Z with transition rates (2.28) and call λ_1^{(n)}(T) < · · · < λ_{n−1}^{(n)}(T) the (simple and positive) eigenvalues of the Markov generator of X(t) with Dirichlet conditions outside (0, n).
Then for each k ≥ 1 and for a.a. T, (2.31) holds.

Theorems 2.3 and 2.5 cannot be derived by a direct application of Theorem 2.1. Indeed, for any choice of the sequence c(n) > 0, for a fixed realization of T the measures dm_n associated to the space-time rescaled random walks X^{(n)}(t) = n^{−1}X(c(n)t) do not converge to dV or dV^{−1} restricted to (0, 1), (0, V(1)) respectively. On the other hand, for each n one can define a random field T_n in terms of the α-stable process V, i.e. T_n = F_n(V), having the same law as T. Calling X̂^{(n)} the analogue of X^{(n)} with jump rates defined in terms of T_n, one has that the associated measures dm_n satisfy the hypotheses of Theorem 2.1. This explains why Theorems 2.3 and 2.5 give an annealed and not a quenched result. On the other hand, for the random walks X̂^{(n)} the result is quenched, i.e. the convergence of the eigenvalues holds for almost all realizations of the subordinator V. We refer to Sections 10 and 11 for a more detailed discussion of the above coupling and for the proofs of Theorems 2.3 and 2.5.
2.1. Outline of the paper. The paper is structured as follows. In Section 3 we explain how the spectral analysis of −L_n reduces to the spectral analysis of the operator −D_{m_n}D_x. In Section 4 we recall some basic facts about generalized second order operators. In particular, we characterize the eigenvalues of −L_n as zeros of a suitable entire function. In Section 5 we apply a general theorem about the dependence on a parameter of the zeros of a continuously parameterized family of entire functions. In Section 6 we investigate the eigenvalues of −D_{m_n}D_x using the minimum-maximum characterization. This completes the preparation for the proof of Theorem 2.1, which is given in Section 7.
In Section 8 we prove the Dirichlet-Neumann bracketing. This result, interesting in itself, allows us to prove Proposition 2.2 in Section 9. Finally, we move to applications: in Section 10 we prove Theorem 2.3, in Section 11 we prove Theorem 2.5, while in Section 12 we prove Propositions 2.4 and 2.6.
Reduction to the generalized differential operator −D_{m_n}D_x

Recall the definition of the local operator L_n given in (2.4) and of the bijection T_n given in (2.11).
Lemma 3.1. Given functions f, g : [0, 1] ∩ Z_n → R, the system of identities (3.1) is equivalent to the system (3.2). In particular, f : [0, 1] ∩ Z_n → R is an eigenvector with eigenvalue λ of the operator −L_n if and only if T_n f is an eigenfunction with eigenvalue λ of the generalized differential operator −D_{m_n}D_x with Dirichlet conditions outside (0, ℓ_n).
Proof. For simplicity of notation we write U, H instead of U_n, H_n. Moreover, we use the natural bijection Z ∋ k → k/n ∈ Z_n, denoting the point k/n of Z_n simply as k. Setting ∆f(j) = f(j) − f(j − 1), we can rewrite (3.1) by means of recursive identities for ∆f(j + 1); this proves that (3.1) is equivalent to (3.2). Using T_n, F, G, m_n we can rewrite (3.2) as (3.5), where in the last identity we have used the convention that an identity valid at the points of E_n automatically extends to all x ∈ (0, ℓ_n]. This concludes the proof of the equivalence between (3.2) and (3.5). Trivially, equation (3.5) is equivalent to (3.3). Finally, the conclusion of the lemma follows from the previous observations and the discussion about the generalized differential operator −D_m D_x given in the Introduction.
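The correspondence of Lemma 3.1 can be checked numerically in the simplest trap case. The sketch below is ours and rests on the following assumptions: U ≡ 2, unit lattice spacing, so that the matching speed measure has an atom of mass 2τ(x) at each interior site x; the helper name is hypothetical. It verifies that every Dirichlet eigenvalue of −L_n annihilates the shooting solution of −D_m D_x F = λF.

```python
import numpy as np

def psi_end(lam, weights, ell):
    """Shoot the solution F of -D_m D_x F = lam * F with F(0) = 0,
    F'(0) = 1, for the purely atomic measure with atom weights[x-1] at
    x = 1, ..., len(weights); return F(ell).  F is piecewise linear and
    its slope jumps by -lam * w * F(x) at an atom of mass w."""
    F, slope = 0.0, 1.0
    for w in weights:
        F += slope                 # cross one unit interval
        slope -= lam * w * F       # derivative jump at the atom
    return F + slope * (ell - len(weights))

# Trap model on {0,...,n} with mean waiting times tau(x) and Dirichlet
# conditions: the sign-inverted generator on the interior sites is the
# tridiagonal matrix A, and the matching speed measure has atom
# 2*tau(x) at site x.
tau = np.array([1.0, 3.0, 0.5, 2.0, 1.5])
n = len(tau) + 1
A = (np.diag(1.0 / tau)
     - np.diag(0.5 / tau[:-1], 1)
     - np.diag(0.5 / tau[1:], -1))
eigs = np.sort(np.linalg.eigvals(A).real)

# Every eigenvalue of -L_n is a zero of lam -> psi_end(lam, 2*tau, n).
for lam in eigs:
    assert abs(psi_end(lam, 2.0 * tau, n)) < 1e-8
```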

Generalized second order differential operators
For the reader's convenience and for later applications, we recall the definition of generalized differential operator. We mainly follow [KK0], with some slight modifications that we will point out. We refer to [KK0], [DM] and [Ma] for a detailed discussion.
Let m : R → [0, ∞) be a càdlàg nondecreasing function with m(x) = 0 for all x < 0. We define m_x as the magnitude of the jump of the function m at the point x: m_x := m(x) − m(x−). We define E_m as the support of dm, i.e. the set of points where m increases (see (2.6)).
We suppose that E_m ≠ ∅, 0 = inf E_m and ℓ_m := sup E_m < ∞. We remark that the integral term in equation (4.2) can also be written as in (4.3). As discussed in [KK0], the function G is not univocally determined from F. To get uniqueness, one can for example fix the values of b and b − ∫_{[0,ℓ_m]} G(s) dm(s). These values are called derivative numbers and are denoted by F′_−(0) and F′_+(ℓ_m), respectively. Indeed, in [KK0] the domain D_m of the differential operator −D_m D_x is defined as a family of complex-valued extended functions F[x], each given by a triple. We prefer to avoid the notion of extended functions here, since the triple is not necessarily determined by the function F alone.
It is simple to check that a function F satisfying (4.2) fulfills the identities (4.4); in view of the definition of F′_−(0) and F′_+(ℓ_m), these identities extend to any x ∈ [0, ℓ_m]. As discussed in [KK0], for fixed λ ∈ C there exists a unique function F ∈ C([0, ℓ_m]) solving equation (4.2) with G = λF for fixed a, b. In other words, once F(0) and F′_−(0) are fixed there exists a unique solution of the homogeneous differential equation (4.5). Given λ ∈ C, we define ϕ(x, λ) and ψ(x, λ) as the solutions of (4.5) satisfying respectively the initial conditions

ϕ(0, λ) = 1 ,  ϕ′_−(0, λ) = 0 ,  (4.6)
ψ(0, λ) = 0 ,  ψ′_−(0, λ) = 1 .  (4.7)

It is known that each function F ∈ C([0, ℓ_m]) satisfying (4.5) is a linear combination of the independent solutions ϕ(·, λ) and ψ(·, λ).
By the above observations, we get that F is a Dirichlet eigenfunction if and only if F is a nonzero multiple of ψ(·, λ) for λ ∈ C satisfying ψ(ℓ_m, λ) = 0, while F is a Neumann eigenfunction if and only if F is a nonzero multiple of ϕ(·, λ) with λ ∈ C satisfying

∫_0^{ℓ_m} ϕ(s, λ) dm(s) = 0 .  (4.8)

In particular, the Dirichlet and the Neumann eigenvalues are all simple.
The following fact should be more or less standard. Since we were unable to find a self-contained reference, for the reader's convenience we sketch its (very short) proof in Appendix A.
Lemma 4.1. The generalized differential operator −D_m D_x with Dirichlet conditions outside (0, ℓ_m) has a countable (finite or infinite) family of eigenvalues, which are all positive and simple. The set of eigenvalues has no accumulation points. In particular, if there is an infinite number of eigenvalues {λ_n}_{n≥1}, listed in increasing order, it must be lim_{n↑∞} λ_n = ∞.
The above eigenvalues coincide with the zeros of the entire function C ∋ λ → ψ(ℓ_m, λ) ∈ C. The eigenspace associated to the eigenvalue λ is spanned by the real function ψ(·, λ). Moreover, F is an eigenfunction of −D_m D_x with Dirichlet conditions outside (0, ℓ_m) and associated eigenvalue λ if and only if

F(x) = λ ∫_{[0,ℓ_m]} G_{0,ℓ_m}(x, s) F(s) dm(s) ,  x ∈ [0, ℓ_m] ,  (4.9)

where, given an interval [a, b], the Dirichlet Green function G_{a,b} : [a, b]² → R is defined as

G_{a,b}(x, y) = (x ∧ y − a)(b − x ∨ y)/(b − a) .  (4.10)

In particular, for any Dirichlet eigenvalue λ the bound (4.11) holds. As discussed in [KK0], page 29, the function ϕ can be written as the λ-power series ϕ(s, λ) = Σ_{j=0}^∞ (−λ)^j ϕ_j(s) for suitable functions ϕ_j. Therefore the l.h.s. of (4.8) equals Σ_{j=0}^∞ (−λ)^j ∫_0^{ℓ_m} ϕ_j(s) dm(s). From the bounds on the ϕ_j one derives that the l.h.s. of (4.8) is an entire function in λ, thus implying that its zeros (or equivalently the eigenvalues of the operator −D_m D_x with Neumann b.c.) form a discrete subset of [0, ∞). Moreover (cf. [KK0]) the eigenvalues are nonnegative and 0 itself is an eigenvalue.

Characterization of the eigenvalues as zeros of entire functions
At this point, we have reduced the analysis of the spectrum of the differential operator −D_m D_x with Dirichlet conditions outside (0, ℓ_m) to the analysis of the zeros of the entire function ψ(ℓ_m, ·). As in [KZ] and [Ze], a key tool is the following result:

Lemma 5.1. Let Ξ be a metric space and let f : Ξ × C → C be a continuous function such that for each α ∈ Ξ the map f(α, ·) is an entire function. Let V ⊂ C be an open subset whose closure V̄ is compact, and let α_0 ∈ Ξ be such that no zero of the function f(α_0, ·) lies on the boundary of V. Then there exists a neighborhood W of α_0 in Ξ such that: (1) for any α ∈ W, f(α, ·) has no zero on the boundary of V; (2) the sum of the orders of the zeros of f(α, ·) contained in V is independent of α as α varies in W.
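Lemma 5.1 is, in essence, stability of the zero count provided by the argument principle. The following self-contained sketch of ours (illustrative only) counts the zeros of an entire function inside a circle as a winding number, in the special case dm = Lebesgue measure on [0, 1], where ψ(1, λ) = sin(√λ)/√λ and the Dirichlet eigenvalues are (kπ)².

```python
import cmath
import math

def zero_count(f, center, radius, steps=4096):
    """Number of zeros (with multiplicity) of an entire function f inside
    the circle |z - center| = radius, computed as the winding number of
    f along the contour (argument principle); assumes no zero lies on
    the contour itself."""
    prev = f(center + radius)
    total = 0.0
    for k in range(1, steps + 1):
        z = center + radius * cmath.exp(2j * math.pi * k / steps)
        cur = f(z)
        total += cmath.phase(cur / prev)
        prev = cur
    return round(total / (2 * math.pi))

def psi(lam):
    """psi(1, lam) for dm = Lebesgue measure on [0, 1]: the entire
    function sin(sqrt(lam))/sqrt(lam), whose zeros are (k*pi)**2.
    The value does not depend on the branch of the square root."""
    s = cmath.sqrt(lam)
    return cmath.sin(s) / s if s != 0 else complex(1.0)

assert zero_count(psi, 0.0, 50.0) == 2    # pi**2 and (2*pi)**2 lie inside
assert zero_count(psi, 0.0, 100.0) == 3   # (3*pi)**2 ~ 88.8 enters as well
```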
From now on, let m_n and m be as in Theorem 2.1. Given λ ∈ C, define ϕ(x, λ) and ψ(x, λ) as the solutions of the homogeneous differential equation (4.5) satisfying the initial conditions (4.6) and (4.7), respectively. Define ϕ^{(n)}(x, λ) and ψ^{(n)}(x, λ) similarly, replacing m with m_n.
By applying Lemma 4.1 and Lemma 5.1 we obtain: Then there exists an integer n_0 such that: i) for all n ≥ n_0, the spectrum of −L_n has only one eigenvalue in J_i; ii) for all n ≥ n_0, −L_n has no eigenvalue inside (0, Proof. As discussed in [KK0], page 30, one can write explicitly the power-series expansion of the entire functions C ∋ λ ↦ ψ^{(n)}(x, λ), ψ(x, λ) ∈ C. In particular, it holds In the above integrals we do not need to specify the boundary of the integration domain, since the integrand functions vanish both at 0 and at x.
We already know that the Dirichlet eigenvalues of the operator −D_{m_n} D_x [resp. −D_m D_x] are given by the zeros of the entire function ψ^{(n)}(ℓ_n, ·) [resp. ψ(ℓ, ·)]. Hence, it is natural to derive the thesis by applying Lemma 5.1 with different choices of V. More precisely, we take α_0 = ∞ and Ξ = N_+ ∪ {∞}, endowed with any metric d such that all points n ∈ N_+ are isolated w.r.t. d and lim_{n↑∞} d(n, ∞) = 0. We define f : is an entire function for any α ∈ Ξ. Moreover, f(α_0, ·) has no zero on the boundary of V for any of the above choices of V, f(α_0, ·) has only one zero (which for any sequence of complex numbers {λ_n}_{n≥1} converging to some λ ∈ C. In order to prove the above statement, we observe that ψ_j(x) ≥ 0, ψ_0(ℓ) = ℓ and that for j ≥ 1 it holds Above, I(·) denotes the characteristic function. By symmetry we can remove the characteristic function and gain a factor 1/j!. Therefore we get (5.5). Since ℓ_n → ℓ and sup_n m_n(ℓ_n) < ∞, we can find positive constants c and A such that the r.h.s. of (5.4) and the r.h.s. of (5.5) are bounded by A c^j / j!.
Let us now consider the case λ ≠ 0. Since λ_n → λ, we restrict to n large enough that |λ_n/λ| ≤ 2. We introduce a complex-valued measure ν on N by setting ν(j) = (−λ)^j / j!. Moreover, we write |ν| for the positive measure on N such that |ν|(j) = |ν(j)|. Finally, we set Then we can write Since |a^{(n)}(j)|, |a(j)| ≤ c(j) and c(·) ∈ L¹(N, |ν|), by the dominated convergence theorem, in order to conclude we only need to show that lim_{n↑∞} a^{(n)} The case j = 0 follows from our assumption ℓ_n → ℓ. In order to avoid heavy notation, we discuss only the case j = 2 (the general case is completely similar). Let us set Let us fix γ > ℓ, so that γ > ℓ_n for n large enough. Moreover, we fix a continuous function ρ : [0, ∞) → [0, 1] such that ρ ≡ 1 on [0, γ] and ρ ≡ 0 on [γ + 1, ∞). Then, the function Writing dm⊗dm(F) for the integral of the function F w.r.t. the product measure dm⊗dm, and similarly for dm_n⊗dm_n(F), we get ψ_2(ℓ) = dm⊗dm(F) and C_n = dm_n⊗dm_n(F). Since dm_n weakly converges to dm, the same property holds for dm_n⊗dm_n and dm⊗dm. Using that F ∈ C_c([0, ∞)²), we conclude that C_n = ψ_2(ℓ) + o(1). Together with the previous identity C_n = ψ^{(n)}_2(ℓ_n) + o(1) (see (5.6)), this gives the thesis.
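The power-series expansion used above can be made concrete in the simplest case, dm = Lebesgue measure on [0, ℓ], where the iterated integrals give ψ_j(x) = x^{2j+1}/(2j+1)! and the series sums to sin(√λ x)/√λ, whose zeros in λ are exactly the Dirichlet eigenvalues (kπ/ℓ)². A sketch, assuming this classical special case rather than the general m of the text:

```python
import math

def psi_series(x, lam, terms=40):
    # psi(x, lam) = sum_j (-lam)^j psi_j(x) with psi_j(x) = x^(2j+1)/(2j+1)!
    # (the case dm = dx); the j-th term is bounded by A*c^j/j!, so the sum is entire.
    return sum((-lam)**j * x**(2*j + 1) / math.factorial(2*j + 1) for j in range(terms))

# The series sums to sin(sqrt(lam)*x)/sqrt(lam) ...
x, lam = 1.0, 7.3
assert abs(psi_series(x, lam) - math.sin(math.sqrt(lam)*x) / math.sqrt(lam)) < 1e-12
# ... and lam = (k*pi)^2 are zeros of psi(1, .): the Dirichlet eigenvalues on (0, 1).
assert abs(psi_series(1.0, math.pi**2)) < 1e-9
```

The factorial decay of the coefficients is what makes λ ↦ ψ(ℓ, λ) entire, exactly as in the bound A c^j / j! above.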
The above lemma is still not enough to prove that −D_m D_x has infinitely many eigenvalues λ_k and that λ^{(n)}_k → λ_k. As explained in Section 7, we only need to prove that the sequence {λ^{(n)}_k}_{n>k} is bounded. This will be done in the next section, using a different characterization of the eigenvalues λ^{(n)}_k.

Minimum-maximum characterization of the eigenvalues
For the reader's convenience, we list some vector spaces that will be repeatedly used in what follows. We introduce the vector spaces A(n) and B(n) as where the map T n has been defined in (2.11).
j], 1 ≤ j ≤ n. Since we already know that the eigenvalues and suitable associated eigenfunctions of −L_n are real, we can think of −L_n as an operator defined on A(n). Finally, given a < b we write Let us recall the min-max formula characterizing the k-th eigenvalue λ^{(n)}_k of −L_n, or equivalently of the differential operator −D_{m_n} D_x with Dirichlet conditions outside (0, ℓ_n). We refer to [CH1], [RS4] for more details. First we observe the validity of the detailed balance equation for the generator of the random walk on Z_n with jump rates c_n(x, y), and defining f : Note that the second identity follows from (6.2). Given f ∈ A(n), we write D_n(f) for the Dirichlet form D_n(f) := µ_n(f, −L_n f). By simple computations, we obtain where V_k varies among the k-dimensional subspaces of A(n). Moreover, the minimum is attained at V_k, defined as the subspace spanned by the first k eigenvectors. We can rewrite the above min-max principle in terms of F = T_n f and dm_n. Indeed, Hence, whenever the denominator is nonzero.
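For the SSRW recalled in the Introduction, the min-max principle can be verified directly: the sign-inverted generator with Dirichlet conditions outside (0, n) is a tridiagonal matrix with eigenvalues 1 − cos(πk/n), the minimum is attained at the span of the first k eigenvectors, and any other k-dimensional subspace gives a larger maximum of the Rayleigh quotient. A numerical sketch; the matrix size and the random test subspaces are illustrative:

```python
import numpy as np

n = 50
# Sign-inverted SSRW generator with Dirichlet conditions outside (0, n):
# (-L_n f)(x) = f(x) - (f(x-1) + f(x+1))/2 on {1, ..., n-1}.
A = np.eye(n - 1) - 0.5*np.eye(n - 1, k=1) - 0.5*np.eye(n - 1, k=-1)
lam, U = np.linalg.eigh(A)
assert np.allclose(lam, 1.0 - np.cos(np.pi*np.arange(1, n)/n))

def max_rayleigh(V):
    # max of the Rayleigh quotient over span(columns of V) = top eigenvalue
    # of the operator compressed to that subspace.
    Q, _ = np.linalg.qr(V)
    return np.linalg.eigvalsh(Q.T @ A @ Q).max()

k = 5
assert np.isclose(max_rayleigh(U[:, :k]), lam[k-1])   # minimum attained at eigenspace
rng = np.random.default_rng(0)
for _ in range(20):                                   # any other subspace does worse
    assert max_rayleigh(rng.standard_normal((n - 1, k))) >= lam[k-1] - 1e-12
```

This is the finite-dimensional Courant–Fischer statement that the text transports to the operator −D_{m_n} D_x via F = T_n f.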
Here and in what follows, we write The following observation will prove very useful: In particular, if F ≢ 0 then Φ_n(F) and Φ_n(G) are both well defined and Φ_n(F) ≥ Φ_n(G).
Proof. To get (6.7) it is enough to observe that, by the Cauchy–Schwarz inequality, it holds From (6.7) one derives the last claim by observing that dm_n(F²) = dm_n(G²) (dm_n(·) denoting the average w.r.t. dm_n).
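The content of Lemma 6.1 is that replacing a function by its piecewise-linear interpolation can only decrease the Dirichlet energy ∫|F′|² dx, by Cauchy–Schwarz on each subinterval. A quick numerical check; the test function and the two grids are illustrative choices:

```python
import numpy as np

def dirichlet_energy(xs, fs):
    # \int |F'|^2 dx for the piecewise-linear function through the points (xs, fs).
    return np.sum(np.diff(fs)**2 / np.diff(xs))

# The fine grid plays the role of F, the coarse grid of its interpolation K_n F.
fine = np.linspace(0.0, 1.0, 2001)
f = np.sin(3*np.pi*fine) + 0.3*np.sin(11*np.pi*fine)
coarse = np.linspace(0.0, 1.0, 21)
Kf = np.interp(coarse, fine, f)
# Cauchy-Schwarz on each coarse subinterval: (F(b)-F(a))^2 <= (b-a) * int_a^b |F'|^2,
# so linear interpolation can only decrease the Dirichlet energy.
assert dirichlet_energy(coarse, Kf) <= dirichlet_energy(fine, f)
```

In the text the same inequality, written as Φ_n(F) ≥ Φ_n(G), is what lets the min-max principle be tested on interpolated trial functions.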
We now have all the tools to prove that the eigenvalues λ Proof. Given a function f ∈ C_0[0, ℓ_n] and n ≥ 1, we define K_n f as the unique function in j) for all 0 ≤ j ≤ n. Note that K_n commutes with linear combinations: K_n(a_1 f_1 + · · · + a_k f_k) = a_1 K_n f_1 + · · · + a_k K_n f_k.
Since dm is not a linear combination of finitely many delta measures, for some ε > 0 we can divide the interval [0, ℓ − ε) into k subintervals I_j = [a_j, b_j) such that dm(int(I_j)) > 0, where int(I_j) = (a_j, b_j).
Since dm_n converges weakly to dm, it must be that dm_n(int(I_j)) > 0 for all j : 1 ≤ j ≤ k, for n large enough. For each j we fix a piecewise-linear function f_j : R → R with support in I_j and strictly positive on int(I_j). Since ℓ_n → ℓ > ℓ − ε, taking n large enough, all functions f_j vanish outside (0, ℓ_n); hence we can think of f_j as a function in C_0[0, ℓ_n]. Having disjoint supports, the functions f_1, f_2, ..., f_k are linearly independent in C_0[0, ℓ_n].
We claim that K_n f_1, K_n f_2, ..., K_n f_k are linearly independent functions in B(n) for n large enough. Indeed, we know that dm_n(int(I_j)) > 0 for all j : 1 ≤ j ≤ k, if n is large enough. Hence, for n large, each set int(I_j) contains at least one point x r) = 0 for all u ≠ j with 1 ≤ u ≤ k, K_n f_j cannot be written as a linear combination of the functions K_n f_u, u ≠ j, 1 ≤ u ≤ k.
Due to the above independence, we can apply the min-max principle (6.5). Let us write S_k for the real vector space spanned by K_n f_1, K_n f_2, ..., K_n f_k and S̄_k for the real vector space spanned by f_1, f_2, ..., f_k. As already observed, S_k = K_n(S̄_k). Using also Lemma 6.1, we conclude that for n large enough , without loss of generality we can assume that Σ_{i=1}^k a_i² = 1. Since the functions f_j have disjoint supports, it holds (6.10). Taking n large enough that ℓ − ε ≤ ℓ_n, equations (6.9) and (6.10) together imply (6.11). Since dm_n weakly converges to dm, the k terms appearing in the denominator converge to positive numbers as n ↑ ∞. Hence, the r.h.s. converges to a positive number, thus implying (6.8).

Proof of Theorem 2.1
Most of the work necessary for the convergence of the eigenvalues has been done in proving Lemma 5.2 and Lemma 6.2. Due to Lemma 4.1, we know that the eigenvalues of −L_n and the eigenvalues of the differential operator −D_m D_x with Dirichlet conditions outside (0, ℓ) are simple and positive, and form a set without accumulation points. Since −L_n is a symmetric operator on the (n − 1)-dimensional space L²((0, 1) ∩ Z_n, µ_n), where µ_n has been introduced in Section 6, we conclude that −L_n has n − 1 eigenvalues.
Given k ≥ 1, we take a(k) as in Lemma 6.2 and fix L ≥ a(k) such that L is not an eigenvalue of −D_m D_x with Dirichlet conditions. Let k_0, ε and n_0 be as in Lemma 5.2. Then for n ≥ n_0 the following holds: in each interval J_i = [λ_i − ε, λ_i + ε] there is exactly one eigenvalue of −L_n, and in [0, L) \ ∪_{i=1}^{k_0} J_i there is no eigenvalue of −L_n. Since we know by Lemma 6.2 that −L_n has at least k eigenvalues in [0, L], it must be that k ≤ k_0 and λ^{(n)}_i ∈ J_i for all i : 1 ≤ i ≤ k. In particular, it holds lim sup ∀i : 1 ≤ i ≤ k. (7.1) Using the arbitrariness of ε and k, we conclude that the operator −D_m D_x with Dirichlet conditions outside (0, ℓ) has infinitely many eigenvalues satisfying (2.12).
7.1. Convergence of the eigenfunctions. Having proved (2.12), the convergence of the eigenfunctions can be derived by arguments close to those of [UH]. Alternatively, one could try to estimate ψ^{(n)}(x, λ^{(n)}_k) − ψ(x, λ_k), with ψ^{(n)} and ψ defined as before Lemma 5.2. Below, we follow the first route.
Let us define L = ℓ + 1. By restricting to n large enough, we can assume that ℓ_n ≤ L. Using (4.10), we define the function G_n on [0, L] × [0, L] as Proof. Since we know that lim_{n↑∞} λ L]). To this aim, we only need to apply the Ascoli–Arzelà theorem, showing that the sequence is uniformly bounded and equicontinuous. Indeed, from (7.2), (7.4) and (7.5), we get ≤ L m_n(L)^{1/2}. (7.6) Moreover, from (7.4) and (7.5), we get which by (7.3) is bounded by |x − x′| m_n(L)^{1/2} if x, x′ ≤ ℓ_n, by 0 if x, x′ > ℓ_n, and by The thesis now follows from the above bounds and from the limit lim_{n↑∞} m_n(L) = m(L), a consequence of the weak convergence of dm_n to dm.
It remains now to characterize the limit points of {F^{(n)}_k}_{n>k}. We fix a point s_0 ∈ (0, ℓ) such that ψ(s_0, λ_k) ≠ 0. Then for n large, possibly multiplying F^{(n)}_k by ±1, we can assume that F^{(n)}_k(s_0) is nonzero and has the same sign as ψ(s_0, λ_k). We come back to (7.5). Since ℓ_n → ℓ we know that Hence, from (7.5) and from the convergence λ^{(n)}_k → λ_k, we derive that any limit point On the other hand, (L). The above bound, together with (7.9) and (7.10), implies the normalization ∫_{[0,L]} F_k(s)² dm(s) = 1. Finally, we observe that F_k is a real function and F_k(s) = 0 if s ∈ [ℓ, L]. Lemma 4.1, together with (7.8) and the normalization of F_k, implies that F_k(s) = ±C ψ(s, λ_k) for all s ∈ [0, ℓ], where 1/C² = ∫_{[0,ℓ]} ψ(s, λ_k)² dm(s). Since by construction F^{(n)}_k(s_0) is nonzero and has the same sign as ψ(s_0, λ_k), we conclude that F_k = C ψ(·, λ_k). In particular, there exists a unique limit point of the sequence {F^{(n)}_k}_{n>k}. This concludes the proof of Theorem 2.1.

Dirichlet-Neumann bracketing
Let m : R → [0, ∞) be a càdlàg nondecreasing function with m(x) = 0 for all x < 0. We recall that E_m denotes the support of dm, i.e. the set of points where m increases (see (2.6)), and that m_x denotes the size of the jump of m at the point x. We suppose that E_m ≠ ∅, 0 = inf E_m and ℓ_m := sup E_m < ∞. We want to compare the eigenvalue counting function of the generalized operator −D_m D_x with Dirichlet boundary conditions to the same function with Neumann boundary conditions. In order to apply the Dirichlet–Neumann bracketing as stated in Section XIII.15 of [RS4] and as developed by Métivier and Lapidus (cf. [Me] and [L]), we need to study generalized differential operators as self-adjoint operators on suitable Hilbert spaces.
Proof. It is trivial to check that (8.2) can be rewritten as Hence, by definition D(−L_D) = Ran(K) and −L_D(K(g)) = g for all g ∈ H, and K is injective (see the discussion on the well-definedness of −L_D). Since K(x, y) = K(y, x), the operator K is symmetric. Since K ∈ L²(dm ⊗ dm) (K is bounded and dm has finite mass), by [RS1][Theorem VI.23] K is a Hilbert–Schmidt operator and therefore compact (cf. [RS1][Theorem VI.22]). In particular, H has an orthonormal basis {ψ_n} such that Kψ_n = γ_n ψ_n for suitable eigenvalues γ_n (cf. Theorem VI.16 in [RS1]). Since K is injective, we conclude that γ_n ≠ 0, ψ_n = K((1/γ_n)ψ_n) ∈ Ran(K) = D(−L_D) and It follows that {ψ_n} is an orthonormal basis of eigenvectors of −L_D. By (8.2), the function ψ_n ∈ L²(dm) must have a representative in C[0, ℓ_m]. Taking this representative, the identity −L_D ψ_n = (1/γ_n) ψ_n simply means that ψ_n is an eigenfunction with eigenvalue 1/γ_n of the generalized differential operator −D_m D_x with Dirichlet boundary conditions, as defined in Section 4. Finally, since −L_D admits an orthonormal basis of eigenvectors, its spectrum is pure point and is given by the family of eigenvalues. This concludes the proof of point (ii). In order to prove (i), we observe that D(−L_D) contains the finite linear combinations of the orthonormal basis {ψ_n}, and therefore it is a dense subspace of H. Given f, f̂ ∈ D(−L_D), let g, ĝ ∈ H be such that f = Kg, f̂ = Kĝ. Then, using the symmetry of K and point (ii), we obtain This proves that −L_D is symmetric. In order to prove that it is self-adjoint we need to show that, given v, w ∈ H such that ( . Since this holds for any f ∈ D(−L_D) and therefore for any g ∈ H, it must be that v = Kw. By point (ii), this is equivalent to the fact that w ∈ D(−L_D) and w = −L_D v. This concludes the proof of (i).
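The proposition identifies the Dirichlet eigenvalues with the reciprocals of the eigenvalues of the compact integral operator K. In the special case dm = dx on [0, 1], the kernel is the classical Dirichlet Green function K(x, y) = min(x, y)(1 − max(x, y)) and 1/γ_k = (kπ)². A numerical sketch; the midpoint-rule discretization is an illustrative choice:

```python
import numpy as np

# Green kernel of -d^2/dx^2 on [0, 1] with Dirichlet b.c. (the case dm = dx):
# K(x, y) = min(x, y)*(1 - max(x, y)).
n = 800
x = (np.arange(n) + 0.5) / n                          # midpoint quadrature nodes
X, Y = np.meshgrid(x, x, indexing='ij')
K = np.minimum(X, Y) * (1.0 - np.maximum(X, Y)) / n   # kernel times quadrature weight
gamma = np.sort(np.linalg.eigvalsh(K))[::-1]          # eigenvalues of the compact operator

# Eigenvalues of the differential operator are the reciprocals: 1/gamma_k = (k*pi)^2.
for k in range(1, 6):
    assert abs(1.0/gamma[k-1] - (k*np.pi)**2) < 1.0
```

Being Hilbert–Schmidt, K has square-summable eigenvalues γ_k ~ 1/(kπ)², which is the quantitative form of the compactness used in the proof.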

By (4.3), d(D_x^+ f)(x) = −dm(x) g(x) as Stieltjes measures (similarly for f̂ and ĝ). Therefore, the above integral can be rewritten as We observe that The above remark allows us to rewrite (8.7) as Substituting this expression in (8.6) and integrating by parts once more, we get that The thesis then follows by recalling that D_x f and D_x f̂ are well defined Lebesgue-a.e. and that, at the points of definition, (8.10) holds. Proof. We start with point (i). First we prove that −L_N is symmetric. Take f, g, a as in (8.8) and (8.9), and take f̂, ĝ, â similarly. Then, Using that ∫ dm(x) ĝ(x) = 0 by (8.9), we conclude that Since, by (8.9) and its analogue for ĝ, it holds ∫∫ dm(x) dm(z) g(x) ĝ(z)(z − x) = 0, we can rewrite the above expression in the symmetric form which immediately implies that −L_N is symmetric.
Let us consider the Hilbert subspace W = {f ∈ H : (1, f) = 0}, namely W is the family of functions in H having zero mean w.r.t. dm. Then we define the operator T : H → H as (8.12) Finally, we write P : H → W for the orthogonal projection of H onto W: Since H ∈ L²(dm⊗dm), due to [RS1][Theorem VI.23] P∘T is a Hilbert–Schmidt operator on H, and therefore a compact operator. In particular, the operator W : W → W defined as the restriction of P∘T to W is again a compact operator. We claim that W is symmetric. Indeed, setting f = Wg and f′ = Wg′, due to the first identity in (8.12) we get that f, f′ ∈ D(−L_N) and −L_N f = g, −L_N f′ = g′. Then, using that −L_N is symmetric, as proven above, we conclude Having proved that W is a symmetric compact operator, from [RS1][Theorem VI.16] we derive that W has an orthonormal basis {ψ_n}_n of eigenvectors of W, i.e. Wψ_n = γ_n ψ_n for suitable numbers γ_n. Since W is injective (recall the discussion on the well-definedness of −L_N), it must be that γ_n ≠ 0. From the identity Wψ_n = γ_n ψ_n we conclude that for some constant a_n ∈ R. The above identity implies that Let us now prove (iii). From (8.8) we derive that for Lebesgue-a.e. x it holds D We now observe that in the last term ℓ_m can be dropped due to (8.9). Using (8.9) again, we can write D = ∫ dm(z) g(z) ∫ dm(u) ĝ(u) min(z, u). (8.14) Taking the symmetric average of (8.13) (after removing ℓ_m) and (8.14), and observing that min(z, u) − max(z, u) = −|z − u|, we conclude that Comparing the above identity with (8.11), we get point (iii).
8.3. The quadratic forms q D and q N . We call q D , q N the quadratic forms associated to −L D , −L N , respectively, and write Q(q D ), Q(q N ) for the associated form domains (see [RS1][Section VIII.6] for their definitions). Due to Exercises 15(b) and 16(b) in [RS1][Chapter VIII], q D , q N can be defined also as follows: the domain Q(q D ) of q D is given by the elements f ∈ H such that there exists a sequence For the reader's convenience and for later use, we recall the definition given in [RS4][page 269]: given nonnegative self-adjoint operators A, B, where A is defined on a dense subset of a Hilbert space H ′ and B is defined on a dense subset of a Hilbert subspace H where Q(q A ) and Q(q B ) denote the domains of the quadratic forms q A and q B associated to the operators A and B, respectively.
This implies that f ∈ Q(q N ). By (8.4) and (8.10), we also deduce that q N (f ) = q D (f ).
Given now a generic f ∈ Q(q_D), we fix a sequence f_n ∈ D(−L_D) such that On the other hand, by what was proven at the beginning, we know that f_n ∈ Q(q_N), while (ii) and (iii) remain valid with q_D replaced by q_N. Since Q(q_N) is a Hilbert space with respect to the scalar product (·, ·)_1 := (·, ·) + q_N(·, ·) (cf. Exercise 16 in [RS1][Chapter VIII]), we conclude that f ∈ Q(q_N) and q_N(f) = q_D(f).
Lemma 8.4. Let f ∈ H be a function such that for some function g ∈ H and some constants a, b ∈ R. Then there exists a family of We point out that if f is of the form (8.17), then D x f is well defined for (Lebesgue) almost every x ∈ (0, ℓ m ).
Due to Lemma 8.3 and the Lemma preceding Proposition 4 in [RS4][Section XIII.15], taking into account that all eigenvalues are simple (cf. Section 4), we conclude that, given x ≥ 0, (8.20) We will recover the above result in Subsection 8.4, following the approach of [Me].
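The Dirichlet–Neumann comparison of counting functions can be observed already at the discrete level, where the two boundary conditions give explicit matrices. In the sketch below, the Neumann counting function dominates the Dirichlet one and exceeds it by at most 2; the +2 bound is stated here only for this discrete example (it matches the codimension-2 relation between Q(q_D) and Q(q_N) proven later), and the matrix sizes are illustrative:

```python
import numpy as np

n = 200
# Dirichlet discrete Laplacian on the n-1 interior nodes of {0, ..., n}.
A_D = 2.0*np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)
# Neumann (reflecting) discrete Laplacian on all n+1 nodes.
A_N = 2.0*np.eye(n + 1) - np.eye(n + 1, k=1) - np.eye(n + 1, k=-1)
A_N[0, 0] = A_N[-1, -1] = 1.0

lam_D = np.linalg.eigvalsh(A_D)
lam_N = np.linalg.eigvalsh(A_N)

def N(lam, x):
    # Eigenvalue counting function: number of eigenvalues <= x.
    return int(np.sum(lam <= x))

for x in [0.0, 0.5, 1.0, 2.0, 3.9]:
    assert N(lam_D, x) <= N(lam_N, x) <= N(lam_D, x) + 2
```

Here the eigenvalues are 2(1 − cos(kπ/n)) (Dirichlet, k = 1, ..., n − 1) and 2(1 − cos(kπ/(n+1))) (Neumann, k = 0, ..., n), so the interlacing can also be checked by hand.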
Up to now we have defined −L_D and −L_N with reference to the interval (0, ℓ_m), where 0 = inf E_m, ℓ_m = sup E_m, m_0 = 0 and m_{ℓ_m} = 0. In general, given an open interval I we define −L^I_D, −L^I_N as the operators −L_D and −L_N but with the measure dm replaced by its restriction to I. For simplicity, we write L²(I, dm) for the space L²(I, dm) where dm denotes the restriction of dm to the interval I. Then f ∈ D(−L^I_D) ⊂ L²(I, dm) if and only if there exists g ∈ L²(I, dm) such that, writing I = (u, v), . Then the above g ∈ L²(I, dm) is unique and one sets −L^I Let I_1 = (a_1, b_1), ..., I_k = (a_k, b_k) be a finite family of disjoint open intervals, where a_1 < b_1 ≤ a_2 < b_2 ≤ a_3 < · · · ≤ a_k < b_k and m_{a_r} = 0, m_{b_r} = 0 for all r = 1, ..., k. Then for any x ≥ 0 it holds (8.24). If in addition the intervals I_r are neighboring, i.e. b_r = a_{r+1} for all r = 1, ..., k − 1, then for any x ≥ 0 it holds (8.25). The above result is the analogue of Point c) of Proposition 4 in [RS4][Section XIII.15].
Proof. We begin with (8.24). We consider the direct sum ⊕ k r=1 L 2 (I r , dm). We define A = ⊕ k r=1 (−L Ir D ) as the operator with domain . Due to the properties listed in [RS4][page 268] and due to Proposition 8.1, the operator A is a nonnegative self-adjoint operator.
Trivially, the map is injective and preserves the norm. In particular, the image of ψ is a closed (and therefore Hilbert) subspace of L²([a_1, b_k], dm). Consider the operator defined as A′(ψ(f)) = ψ(Af) for all f ∈ D(A). Then A′ is a nonnegative self-adjoint operator.
Claim: It holds L where the inequality is understood in the sense specified after Lemma 8.3.
Assuming the above claim, the conclusion (8.24) then follows from the Lemma stated in [RS4][page 270] and property (5) on page 268 of [RS4]. It remains then to prove our claim.
Proof of the claim. For simplicity of notation we restrict to the case k = 2 (the arguments are completely general). We take (f_1, f_2) ∈ D(A). Then there exist constants κ_1, κ_2 and functions g_1 ∈ L²(I_1, dm), g_2 ∈ L²(I_2, dm) such that . We need to exhibit a family of functions f_ε in L²((a_1, b_2), dx) as ε → 0. This would ensure that f belongs to the form domain associated to L^{(a_1,b_2)}_D. Note that, due to (8.4), at this point the conclusion of the claim becomes trivial.
Due to the integral representation (8.28), and since f_ε(a_1) = f_ε(b_2) = 0, we conclude that f_ε ∈ D(L^{(a_1,b_2)}_D) (property (i) above). Moreover, we point out that f_ε(x) = 0 for all x ∈ [b_1, a_2] and that for all x ∈ [a_2, b_2]. From the above observations one can easily check that the functions f_ε also satisfy properties (ii) and (iii).
In the general case, i.e. k ≥ 2, the idea is the following: by a small perturbation k(a_k) = 0. Then the good approximating function is f_ε = ψ((f_r)_{r=1}^k). In order to prove (8.25) under the hypothesis b_r = a_{r+1} for all r = 1, ..., k − 1, we first observe that the map (8.26) is indeed an isomorphism of Hilbert spaces (recall that m_{a_r} = 0 and m_{b_r} = 0). Given f ∈ D(−L^{(a_1,b_k)}_N), let (f_r)_{r=1}^k = ψ^{-1}(f). Then we denote by a and g the unique constant a ∈ R and the unique function g ∈ L²([a_1, b_k], dm) satisfying under the constraint ∫_{[a_1,b_k]} dm(z) g(z) = 0. From the above identity (8.29) one easily derives that, given r = 1, ..., k, there exist suitable constants A_r, B_r ∈ R such that Applying Lemma 8.4 we get that f_r ∈ Q(q^{I_r}_N), i.e. f_r belongs to the domain of the quadratic form q^{I_r}_N associated to the operator −L^{I_r}_N, and moreover q^{I_r}_N(f_r) = ∫_{I_r} |D_x f_r(x)|² dx. Since f_r is simply the restriction of f to the interval I_r, we get that D_x f_r(x) exists and equals D_x f(x) for almost all x ∈ I_r. In particular, since dm gives zero mass to the complement of ∪_{r=1}^k I_r, invoking (8.10) we get where the operator on the right is simply the self-adjoint operator on ⊕_{r=1}^k L²(I_r, dm) f. At this point, (8.25) follows from the Lemma on page 270 of [RS4] and property (5) on page 268 of [RS4].
8.4. Variational triple. In order to go beyond the estimates (8.24) and (8.25) (obtained mainly by adapting the arguments presented in [RS4][Chapter XIII]), we need the abstract approach to eigenvalue counting functions developed in [Me]. To this aim, we consider the space Q(q_N) endowed with the scalar product where (·, ·) denotes the scalar product in H. We write ‖·‖_1 for the associated norm. Due to Lemma 8.3, we know that Q(q_D) ⊂ Q(q_N) and that on Q(q_D) the scalar product (·, ·)_1 coincides with q_D(·, ·) + (·, ·).
In order to better investigate the spaces Q(q_N) and Q(q_D) endowed with the scalar product (·, ·)_1, we need the following technical fact: Lemma 8.6. Given f ∈ Q(q_N), there exists a function F ∈ C([0, ℓ_m]) such that (i) f = F dm-almost everywhere, and (ii) Moreover, lim_{x↓0} F(x) and lim_{x↑ℓ_m} F(x) are the same for all functions F ∈ C[0, ℓ_m] satisfying the above properties (i) and (ii).
Proof. Since f ∈ Q(q_N), there exists a sequence of functions f_n ∈ D(−L_N) such that f_n → f in H and (f_n − f_m, −L_N(f_n − f_m)) → 0 as n, m → ∞. Possibly passing to a subsequence, we can assume that f_n converges to f dm-almost everywhere, namely there exists a Borel subset A ⊂ [0, ℓ_m] such that dm(A^c) = 0 and f_n(x) → f(x) for all x ∈ A. Due to (8.8), it holds We point out that the limit lim_{n,m→∞} (f_n − f_m, −L_N(f_n − f_m)) = 0 is equivalent to the fact that (D_x f_n)_{n≥0} is a Cauchy sequence in L²([0, ℓ_m], dx), hence converging to some function g ∈ L²([0, ℓ_m], dx).
In particular, passing to the limit in (8.31) for x < y in A, we get At this point, we fix x_0 ∈ A and set F( This identity, (8.32) and the Cauchy–Schwarz inequality trivially imply (8.30). Moreover, by (8.33) we conclude that f(y) = F(y) for all y ∈ A, and therefore f = F dm-almost everywhere.
Let us now take generic functions F, F′ ∈ C([0, ℓ_m]) satisfying (i) and (ii). We know that F = F′ dm-almost everywhere. Since 0 = inf E_m and m_0 = 0, it must be that dm((0, ε)) > 0 for all ε > 0. In particular, F = F′ on a set having 0 as an accumulation point, thus implying that lim_{x↓0} F(x) = lim_{x↓0} F′(x). A similar argument holds for ℓ_m instead of 0.
Motivated by the above result, given f ∈ Q(q N ) we write f (0) and f (ℓ m ) for the limits lim x↓0 F (x) and lim x↑ℓm F (x), respectively, where F is any continuous function satisfying properties (i) and (ii) of Lemma 8.6.
We can now prove the following fact: Lemma 8.7. The following holds: (i) The subset Q(q N ) is dense in H.
(ii) The space Q(q_N) endowed with the scalar product (·, ·)_1 is a Hilbert space.
(iii) The inclusion map is a continuous compact operator. (iv) Q(q D ) is a closed subspace of the Hilbert space Q(q N ), (·, ·) 1 . Moreover, and Q(q D ) has codimension 2 in Q(q N ).
Proof. (i) The set Q(q N ) includes the domain D(−L N ), which we know to be dense in H.
(ii) This is a general fact, stated in Exercise 16 of [RS1][Chapter VIII].
(iii) Since ‖f‖ ≤ ‖f‖_1 for each f ∈ Q(q_N), the inclusion map ι is trivially continuous. In order to prove compactness, we need to show that each sequence f_n ∈ Q(q_N) with ‖f_n‖_1 ≤ 1 admits a subsequence (f_{n_k}) which converges in H. Using Lemma 8.6 we can assume that f_n ∈ C([0, ℓ_m]) and that |f_n(x) − f_n(y)| ≤ √|y − x| for all x, y ∈ [0, ℓ_m].
Applying the Ascoli–Arzelà theorem, we then conclude that f_n admits a subsequence (f_{n_k}) which converges in the space C([0, ℓ_m]) endowed with the uniform norm. Trivially, this implies convergence in H.
(iv) We first prove the following: Proof of the claim. To simplify the notation, we identify h with the continuous representative described in Lemma 8.6. We take h_n ∈ D(−L_N) such that h_n → h in H and (h_n − h_m, −L_N(h_n − h_m)) → 0 as n, m → ∞. By definition of D(−L_N), we can write where g_n ∈ H satisfies ∫_{[0,ℓ_m)} dm(z) g_n(z) = 0. Due to (8.35), h_n can be thought of as a continuous function on [0, ℓ_m]. We claim that lim_{n→∞} h_n(0) = lim_{n→∞} h_n(ℓ_m) = 0, possibly along a subsequence. Indeed, the convergence in H implies that, possibly passing to a subsequence, there exists a subset A ⊂ [0, ℓ_m] with dm(A^c) = 0 and h_n(x) → h(x) for all x ∈ A. Since by assumption dm((0, ε)), dm((ℓ_m − ε, ℓ_m)) > 0 for all ε > 0, both 0 and ℓ_m are accumulation points of A. Using that h(0) = 0 and applying Lemma 8.6, we can write for x ∈ [0, ℓ_m] Taking x ∈ A, the middle term in the r.h.s. disappears as n → ∞. Using now that q_N(h_n) → q_N(h) < ∞ and that 0 is an accumulation point of A, we conclude that h_n(0) → 0. Similarly, we can prove that h_n(ℓ_m) → 0. Now we define h̄_n(x) = h_n(x) − h_n(0) + c_n x, where c_n is defined by the identity h̄_n(ℓ_m) = h_n(ℓ_m) − h_n(0) + c_n ℓ_m = 0. Comparing with (8.35) and the definition of D(−L_D), we get that (1) h̄_n ∈ D(−L_D). Since h_n(0) → 0 and h_n(ℓ_m) → 0, we get that ‖h̄_n − h_n‖_∞ → 0 and therefore h̄_n − h_n → 0. It follows that (2) Due to Exercise 16 in [RS1][Chapter VIII], the space Q(q_D) endowed with the scalar product (·, ·) + q_D(·, ·) is a Hilbert space, hence complete. Since, as already observed, the above scalar product coincides with (·, ·)_1, we conclude that Q(q_D) is a complete, and therefore closed, subspace of (Q(q_N), (·, ·)_1).
Let us now prove (8.34). To this aim, we call W the set appearing in the r.h.s. of (8.34). Due to the above claim, we know that W ⊂ Q(q_D). By definition, the domain D(−L_D) is included in W. Since, by Exercise 16 in [RS1][Chapter VIII], D(−L_D) is a dense subset of (Q(q_D), (·, ·)_1), in order to prove (8.34) we only need to show that W is closed. To this aim, take f_n ∈ W with f_n → f ∈ Q(q_N) w.r.t. ‖·‖_1. Again, we suppose f_n and f to be continuous functions on [0, ℓ_m] as in Lemma 8.6. Possibly passing to a subsequence, we can assume |f_n(x)| ≤ c√x with a positive constant c independent of n and x. Taking the limit, we get |f(x)| ≤ c√x for all x ∈ A, thus implying that f(0) = 0. Similarly, one gets that f(ℓ_m) = 0. This concludes the proof of (8.34). The fact that Q(q_D) has codimension 2 in Q(q_N) follows immediately from Lemma 8.4 and the characterization (8.34).
Considering the space Q(q_N) endowed with the scalar product (·, ·)_1, the above Lemma 8.7 implies that (Q(q_N), H, q_N(·, ·)) is a variational triple (cf. [Me][Section II-2]). Indeed, the following holds: (i) Q(q_N) and H are Hilbert spaces; (ii) the inclusion map gives a continuous injection of Q(q_N) into H; (iii) q_N(·, ·) is a continuous scalar product on Q(q_N), since |q_N(f, g)| ≤ ‖f‖_1 ‖g‖_1 for all f, g ∈ Q(q_N); (iv) the scalar product q_N(·, ·) is coercive with respect to H: ‖f‖²_1 − ‖f‖² ≤ q_N(f, f) for all f ∈ Q(q_N) (the inequality is in fact an identity).
Finally, by Lemma 8.7 the inclusion map ι : Q(q_N) ↪ H is compact and Q(q_D) is a closed subspace of Q(q_N). Applying Proposition 2.9 in [Me] we get the equality N , let a = a_0 < a_1 < · · · < a_{n−1} < a_n = b be a partition of the interval I and set I_r := [a_r, a_{r+1}] for r = 0, ..., n − 1. Suppose that m : I → R is a nondecreasing function such that Proof. The bounds in (8.38) have been obtained in (8.37) (note that the first bound also follows from (8.20)). The inequalities (8.39) and (8.40) follow from Lemma 8.5.
As an immediate consequence of (8.38) and (8.40), we get a bound which will prove very useful for deriving (2.15) and (2.17): Corollary 8.9. In the same setting of Theorem 8.8, it holds N^I_{m,D}(x) ≤ 2n + Σ_{i=0}^{n−1} N^{I_i}_{m,D}(x).
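Corollary 8.9 can be sanity-checked in the classical case dm = dx, where the Dirichlet counting function on an interval of length L is ⌊√x L/π⌋ (eigenvalues (kπ/L)²). A sketch; the partition into n equal pieces is an illustrative choice:

```python
import math

def N_D(x, a, b):
    # Counting function for -d^2/dx^2 on (a, b) with Dirichlet b.c.:
    # eigenvalues (k*pi/(b-a))**2, k >= 1, so N(x) = floor(sqrt(x)*(b-a)/pi).
    if x <= 0:
        return 0
    return int(math.floor(math.sqrt(x)*(b - a)/math.pi))

n = 4
cuts = [i/n for i in range(n + 1)]
for x in [0.5, 10.0, 123.4, 5000.0]:
    lhs = N_D(x, 0.0, 1.0)
    rhs = 2*n + sum(N_D(x, cuts[i], cuts[i+1]) for i in range(n))
    assert lhs <= rhs     # N^I_{m,D}(x) <= 2n + sum_i N^{I_i}_{m,D}(x)
```

The additive 2n accounts for the boundary conditions imposed at the n − 1 interior cut points.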

Proof of Proposition 2.2
We first consider how the eigenvalue counting functions change under affine transformations (recall the notation introduced after (8.20)): Proof. For simplicity of notation, we take a = 0. Suppose that λ is an eigenvalue of the operator −D_m D_x on [0, b] with Dirichlet b.c. at 0 and b. This means that for a nonzero function F ∈ C(I) with F(b) = 0 and a constant c it holds Taking X ∈ J, the above identity implies that Since trivially F(X/γ) = 0 for X = bγ, the above identity implies that λ/γ^{1+1/β} is an eigenvalue of the operator −D_M D_x on J with Dirichlet b.c., with eigenfunction F(·/γ). This proves (9.1) in the case of Dirichlet b.c. The Neumann case is similar.
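The scaling λ ↦ λ/γ^{1+1/β} in (9.1) can be checked in the simplest self-similar case m(x) = x (so β = 1), where the Dirichlet eigenvalues on [0, b] are (kπ/b)². A sketch:

```python
import math

# For dm = Lebesgue measure, m(x) = x is self-similar with index beta = 1
# and the Dirichlet eigenvalues of -D_m D_x on [0, b] are (k*pi/b)**2.
def eig(k, b):
    return (k*math.pi/b)**2

b, gamma, beta = 2.0, 3.0, 1.0
for k in range(1, 6):
    # Under x -> gamma*x each eigenvalue is divided by gamma**(1 + 1/beta).
    assert math.isclose(eig(k, gamma*b), eig(k, b) / gamma**(1 + 1/beta))
```

For β = 1 the exponent 1 + 1/β = 2 reproduces the familiar diffusive scaling of Laplacian eigenvalues under dilation of the interval.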
We now have all the tools to prove Proposition 2.2: Proof of Proposition 2.2. Take m as in Proposition 2.2 and recall the notational convention stated after the proposition. We first prove (2.16), assuming without loss of generality that (2.15) holds with x_0 = 1. By assumption, with probability one, for any n ∈ N_+ and any k ∈ N with 0 ≤ k ≤ n it holds: (i) dm({k/n}) = 0; (ii) dm((k/n, k/n + ε)) > 0 for all ε > 0 if k < n; (iii) dm((k/n − ε, k/n)) > 0 for all ε > 0 if k > 0. Below, we assume that the realization of m satisfies (i), (ii) and (iii). This allows us to apply the Dirichlet–Neumann bracketing stated in Theorem 8.8 to the non-overlapping subintervals I_k := [k/n, (k+1)/n], k ∈ {0, 1, ..., n − 1}. Due to the superadditivity (resp. subadditivity) of the Dirichlet (resp. Neumann) eigenvalue counting functions (cf. (8.39) and (8.40) in Theorem 8.8), we get for any x ≥ 0 that N We remark that (2.15) with x_0 = 1 simply reads E N^{[0,1]}_{m,D}(1) < ∞. Since the eigenvalue counting functions are monotone, in the above estimate (9.7) we can take n to be any positive number larger than 1. Then, substituting n^{1+1/α} with x, we get (2.16).

Proof of Theorem 2.3
As already mentioned in the Introduction, the proof of Theorem 2.3 is based on a special coupling introduced in [FIN] (and very similar to the coupling of [KK] for the random barrier model). If τ(x) has itself an α-stable law with Laplace transform E[e^{−λτ(x)}] = e^{−λ^α}, this coupling is very simple, since it is enough to define, for each realization of V and for all n ≥ 1, the random variables τ_n(x) as Due to (2.23) and the fact that V has independent increments, one easily derives that the V-dependent random field {τ_n(x) : x ∈ Z_n} has the same law as {τ(nx) : x ∈ Z_n}. In the general case one proceeds as follows. Define a function G : (Recall that V is defined on the probability space (Ξ, F, P).) The above function G is well defined since V(1) has a continuous distribution, and G is right continuous and nondecreasing.
Then the generalized inverse function is nondecreasing and right continuous. Finally, set It is trivial to check that the V-dependent random field {τ_n(x) : x ∈ Z_n} has the same law as {τ(nx) : x ∈ Z_n}. Indeed, since V has independent and stationary increments, one obtains that the τ_n(x) are i.i.d., while since n Proof. Due to our definition (2.8), we have with the convention that the sum in the r.h.s. is zero if k = 0. If a = 0, trivially γ = 1 and S(k/n) = k/n. If a > 0, we can apply the strong law of large numbers for triangular arrays. Indeed, all summands have the same law and they are independent if they are not consecutive; moreover, they have bounded moments of all orders, since τ(x) is bounded from below by a positive constant a.s. (this assumption is used only here and could be weakened so as to still ensure the validity of the strong LLN). Due to the choice of γ, γ^{-2}(τ_n((j−1)/n) − a)(τ_n(j/n) − a) has mean 1. By the strong law of large numbers we conclude that for a.a. V it holds lim_{n↑∞} S(⌊xn⌋/n) = x for all x ≥ 0. This proves in particular that ℓ_n := S_n(1) → 1. It remains to prove that for all f ∈ C_c(R) it holds lim_{n↑∞} Σ_{k=0}^n f(S_n(k/n)) H_n(k/n) = ∫_0^1 f(s) dV_*(s). (10.3) This limit can be obtained by reasoning as in the proof of Proposition 5.1 in [BC1], or can be derived from Proposition 5.1 in [BC1] itself, together with the fact that P-a.s. V has no jump at 0, 1. To this aim, one has to observe that the constant c_ε (where ε = 1/n) in [FIN] and [BC1][eq. (49)] equals our quantity 1/h(n) = 1/(n^{1/α} L_2(n)) (recall the definitions preceding Theorem 2.3). In particular, H_n(k/n) = c_{1/n} τ_n(k/n).
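The generalized inverse of a nondecreasing càdlàg function can be computed by a binary search over its jump structure. A minimal sketch for a step function, using the convention g^{-1}(u) = inf{x : g(x) ≥ u} (the text's variant may instead use strict inequality, which affects right versus left continuity in u); the particular jumps below are illustrative, not the ones coming from V:

```python
import bisect

def gen_inverse(xs, gs, u):
    # Generalized inverse of the nondecreasing step function g with
    # g(x) = gs[i] on [xs[i], xs[i+1]); returns inf{x : g(x) >= u}.
    i = bisect.bisect_left(gs, u)   # first index with gs[i] >= u
    return xs[i] if i < len(xs) else float('inf')

xs = [0.0, 1.0, 2.5, 4.0]
gs = [0.2, 0.5, 0.5, 1.3]           # cadlag, nondecreasing, flat on [1.0, 4.0)
assert gen_inverse(xs, gs, 0.5) == 1.0          # flat piece: inverse picks the left edge
assert gen_inverse(xs, gs, 0.51) == 4.0         # level first exceeded at the jump at 4.0
assert gen_inverse(xs, gs, 1.5) == float('inf') # level never reached
```

In the coupling, this inverse is applied to the increments of V, so that the resulting τ_n(x) inherit the prescribed marginal law.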
Due to the above result, Point (i) in Theorem 2.3 follows easily from Theorem 2.1 and the fact that the random fields {τ_n(x) : x ∈ Z_n} and {τ(nx) : x ∈ Z_n} have the same law for all n ≥ 1.
10.2. Proof of Point (ii). Point (ii) can be proved in a similar and simpler way. In this case, we define τ_n(x) as in (10.1) and we consider the generalized trap model {X^{(n)}(t)}_{t ≥ 0} on Z_n with the jump rates displayed above. By this choice, dm_n = Σ_{k=0}^{n} ∆_n V(k/n) δ_{k/n}. Trivially, ℓ_n = 1 and dm_n weakly converges to dV_* for all realizations of V giving zero mass to the extreme points 0 and 1. Since this event takes place P-almost surely, the proof of Point (ii) is concluded.
10.3. Proof of Point (iii). Part (iii) of Theorem 2.3 (i.e. (2.26)) follows from Proposition 2.2 and Lemma 10.2 below. The self-similarity of V is the following: for each γ > 0, the process {V(γt)}_{t ≥ 0} has the same law as {γ^{1/α} V(t)}_{t ≥ 0}. Indeed, both processes are càdlàg, take value 0 at the origin and have independent increments with the same law due to (2.23).
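For the reader's convenience, the increment laws can be matched directly at the level of Laplace transforms. Assuming (as suggested by the stable hypothesis, this reading of (2.23) is our own) that E[e^{−λ(V(t)−V(s))}] = e^{−(t−s)λ^α}, one has for 0 ≤ s ≤ t and λ ≥ 0:

```latex
\mathbb{E}\Bigl[e^{-\lambda\,\gamma^{1/\alpha}\bigl(V(t)-V(s)\bigr)}\Bigr]
 = e^{-(t-s)\,\bigl(\gamma^{1/\alpha}\lambda\bigr)^{\alpha}}
 = e^{-\gamma (t-s)\,\lambda^{\alpha}}
 = \mathbb{E}\Bigl[e^{-\lambda\,\bigl(V(\gamma t)-V(\gamma s)\bigr)}\Bigr].
```

Since both processes have independent increments and vanish at the origin, this matching of the increment laws identifies the laws of the two processes.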
Lemma 10.2. Taking m = V , the bound (2.15) is satisfied. Proof. Using the notation of Section 9, we denote by N [0,1] V,D (1) the number of eigenvalues not larger than 1 of the operator −D V D x on [0, 1] with Dirichlet boundary conditions. We assume that V has no jump at 0, 1 (this happens P-a.s.). We recall that V can be obtained by means of the identity dV = j∈J x j δ v j , where the random set ξ = {(x j , v j ) : j ∈ J} is the realization of a inhomogeneous Poisson point process on R×R + with intensity cv −1−α dxdv, for a suitable positive constant c. In order to distinguish between the contribution of big jumps and not big jumps it is convenient to work with two independent inhomogeneous Poisson point processes ξ (1) and ξ (2) on R × R + with intensity cv −1−α I(v 1/2)dxdv and cv −1−α I(v > 1/2)dxdv. We write ξ (1) = {(x j , v j ) : j ∈ J 1 } and ξ (2) = {(x j , v j ) : j ∈ J 2 }. The above point process ξ can be defined as ξ = ξ (1) ∪ ξ (2) . Moreover, a.s. it holds ξ (1) ∩ ξ (2) = ∅ (this fact will be understood in what follows). By the Master Formula (cf. Proposition (1.10) in [RY]), it holds We label in increasing order the points in {x j : j ∈ J 2 , x j ∈ [0, 1]} as y 1 < y 2 < · · · < y N (note that the set is finite due to (10.6)). Given δ ∈ (0, 1/8), we take ε ∈ (0, 1) small enough that (i) the intervals (y i − ε, y i + ε) are included in (0, 1) and do not intersect as i varies from 1 to N , (ii) for all i : 1 i N , it holds j∈J 1 :x j ∈(y i −ε,y i +ε) v j < δ, (iii) for all i : 1 i N , the points y i − ε and y i + ε do not belong to {x j : j ∈ J 1 }.
Defining V^{(1)}(t) = Σ_{j∈J_1 : x_j ≤ t} v_j, the last condition (iii) can be restated as follows: for all i with 1 ≤ i ≤ N, the points y_i − ε and y_i + ε are not jump points of V^{(1)}.
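The decomposition into ξ^{(1)} and ξ^{(2)} can be simulated directly: since the intensity c v^{−1−α} I(v > 1/2) dx dv has finite total mass c (1/2)^{−α}/α on [0,1] × (1/2, ∞), the big-jump process ξ^{(2)} restricted to x ∈ [0,1] has an a.s. finite, Poisson-distributed number of points, with Pareto-type heights. A rough sketch (c, α and the cut 1/2 are as in the text; the sampler itself is our own toy):

```python
import math
import random

def poisson(lam, rng):
    # Poisson sampler by Knuth's product-of-uniforms inversion
    # (adequate for the small means used here).
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def sample_big_jumps(c, alpha, cut=0.5, rng=random):
    # Points of xi^(2) with x in [0, 1]: a Poisson point process on
    # [0, 1] x (cut, infty) with intensity c * v**(-1 - alpha) dx dv.
    # Its total mass c * cut**(-alpha) / alpha is finite, so the number
    # of big jumps is Poisson and a.s. finite (cf. (10.6)).
    mass = c * cut ** (-alpha) / alpha
    n = poisson(mass, rng)
    # x uniform on [0, 1]; v by inverting the Pareto-type tail
    # P(v > u) = (u / cut)**(-alpha) for u >= cut.
    return [(rng.random(), cut * (1.0 - rng.random()) ** (-1.0 / alpha))
            for _ in range(n)]

random.seed(1)
pts = sample_big_jumps(c=1.0, alpha=0.5)
```

The small-jump part ξ^{(1)} has infinite intensity near v = 0 and would be simulated by truncating at some v_min; only the total height Σ v_j of its points matters in condition (ii) above.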

Proof of Theorem 2.5
Recall the definition of T_n given in the previous section. Given a realization of V, for each n ≥ 1 we consider the continuous-time nearest-neighbor random walk X^{(n)} on Z_n with jump rates

c_n(x, y) = L_2(n) n^{1+1/α} τ_n(x ∨ y)^{−1} if |x − y| = 1/n , 0 otherwise . (11.1)

The rates c_n(x, y) for |x − y| = 1/n can be written as c_n(x, y) = 1/(H_n(x ∨ y) U_n(x ∨ y)), where H_n(x) = 1/n and U_n(x) = L_2(n)^{−1} n^{−1/α} τ_n(x). To the above random walk we associate the measure dm_n defined in (2.10).
11.1. Proof of Point (i). Let us show that dm_n weakly converges to d(V^{−1})_* (recall (2.19)). We point out that in [KK] a similar result is proved, but the definition given in [KK] of the analogue of dm_n is different, hence that proof cannot be adapted to our case. In order to prove the weak convergence of dm_n to d(V^{−1})_*, we use some results and ideas developed in Section 3 of [FIN]. Recall that the constant c_ε of [FIN] equals our quantity 1/h(n) = 1/(n^{1/α} L_2(n)) if ε = 1/n. Given n ≥ 1 and x > 0 we define the function g_n; we point out that g_n coincides with the function g_ε defined in [FIN][(3.12)] if ε = 1/n. As stated in Lemma 3.1 of [FIN], it holds g_n(x) → x as n → ∞ for all x > 0. Since g_n is nondecreasing, we conclude that

g_n(x_n) → x as n → ∞ , ∀x > 0, ∀{x_n}_{n ≥ 1} : x_n > 0 , x_n → x . (11.2)

As stated in Lemma 3.2 of [FIN], for any δ′ > 0 there exist positive constants C′ and C″ such that

g_n(x) ≤ C′ x^{1−δ′} for n^{−1/α} ≤ x ≤ 1 and n ≥ C″ . (11.3)

Since U_n(x) = g_n(V(x + 1/n) − V(x)), we can write S_n(k/n) = Σ_{j=0}^{k−1} g_n(V((j + 1)/n) − V(j/n)).

Proof. We recall that V can be obtained by means of the identity dV = Σ_{j∈J} v_j δ_{x_j}, where the random set ξ = {(x_j, v_j) : j ∈ J} is the realization of an inhomogeneous Poisson point process on R × R_+ with intensity c v^{−1−α} dx dv, for a suitable positive constant c. Given y > 0, let us define

J_{n,y} := {r ∈ {0, 1, . . . , n − 1} : V((r + 1)/n) − V(r/n) ≥ y} .

Note that the set J_y is always finite. Reasoning as in the proof of Proposition 3.1 in [FIN], and in particular using also (11.3), one obtains for P-a.a. V that

lim sup_{n↑∞} Σ_{r : 0 ≤ r < n , r ∉ J_{n,δ}} g_n(V((r + 1)/n) − V(r/n)) = 0 , ∀δ > 0 . (11.6)

We claim that, given δ > 0, for a.a. V it holds

J_{n,δ} = {r ∈ {0, 1, . . . , n − 1} : ∃j ∈ J_δ such that x_j ∈ (r/n, (r + 1)/n]} (11.7)

eventually in n. Let us suppose that (11.7) is not satisfied. Since the set in the r.h.s.
is trivially included in J_{n,δ}, there exists a sequence of integers r_n with 0 ≤ r_n < n such that a_n := V((r_n + 1)/n) − V(r_n/n) ≥ δ while v_j < δ for all j ∈ J with x_j ∈ (r_n/n, (r_n + 1)/n]. We introduce the càdlàg function V̄(t) = Σ_{j∈J : x_j ≤ t} v_j I(v_j < δ) and we note that, if v_j < δ for all j ∈ J with x_j ∈ (r_n/n, (r_n + 1)/n], then a_n = V̄((r_n + 1)/n) − V̄(r_n/n). Passing to a subsequence if necessary, we can suppose that r_n/n converges to some point x. It then follows that V̄(x+) − V̄(x−) ≥ δ, in contradiction with the fact that V̄ has only jumps smaller than δ. This concludes the proof of our claim.
Due to the above claim and to (11.2), we conclude that a.s., given δ > 0, it holds

lim_{n↑∞} sup_{1 ≤ k ≤ n} | Σ_{r ∈ J_{n,δ} , r < k} g_n(V((r + 1)/n) − V(r/n)) − Σ_{j ∈ J_δ : x_j ≤ k/n} v_j | = 0 . (11.8)

Combining (11.8) and (11.6), we conclude that for any ε > 0 one can a.s. fix δ > 0 small enough that

max_{0 ≤ k ≤ n} | S_n(k/n) − Σ_{j ∈ J_δ : x_j ≤ k/n} v_j | ≤ ε (11.9)

for n large enough. On the other hand, a.s. one can fix δ small enough that Σ_{j ∉ J_δ : x_j ∈ [0,1]} v_j is bounded by ε. This last bound and (11.9) imply (11.5).

Lemma 11.2. For P-a.a. V and for all f ∈ C_c(R) it holds

lim_{n↑∞} (1/n) Σ_{k=0}^{n} f(S_n(k/n)) = ∫ f(x) dV^{−1}(x) . (11.10)

Proof. Since f is uniformly continuous, by Lemma 11.1 it is enough to prove (11.10) with S_n(k/n) replaced by V(k/n). Approximating f by stepwise functions with jumps at rational points, it is enough to prove that, for fixed t ∈ Q, for P-a.a. V the limit (11.10) holds with S_n(k/n) replaced by V(k/n) and with f(x) = I(x ≤ t). This last check is immediate.
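The convergence dm_n ⇒ d(V^{−1})_* has a transparent counting interpretation: dm_n places mass 1/n at points close to V(k/n), so its distribution function at t counts the k's with V(k/n) ≤ t, which is approximately V^{−1}(t). A toy numerical check with a deterministic, strictly increasing stand-in for V (a hypothetical choice for illustration only; the V of the text is an α-stable subordinator):

```python
# Hypothetical smooth stand-in for the subordinator V.
V = lambda t: t * t              # increasing on [0, 1], V(0) = 0
V_inv = lambda s: s ** 0.5       # its inverse, the claimed limit cdf

n = 2000
atoms = [V(k / n) for k in range(1, n + 1)]   # atoms of dm_n, mass 1/n each

def cdf_mn(t):
    # Distribution function of dm_n = (1/n) sum_k delta_{V(k/n)} at t:
    # counts the atoms V(k/n) <= t, normalized by n.
    return sum(1 for a in atoms if a <= t) / n

# sup-distance between the cdf of dm_n and V^{-1} on a grid
err = max(abs(cdf_mn(t / 10) - V_inv(t / 10)) for t in range(1, 10))
```

For the random V, flat stretches of V^{−1} correspond to jumps of V; the atoms V(k/n) skip those gaps, exactly as the limiting distribution function V^{−1} does.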
We now have all the tools needed to prove Point (i) of Theorem 2.5. Indeed, by Lemma 11.1, ℓ_n = S_n(1) → V(1) P-a.s. Moreover, by Lemma 11.2 the measure dm_n defined in (2.10) weakly converges to the measure d(V^{−1})_*. In order to get Point (i) of Theorem 2.5 it is enough to apply Theorem 2.1.

11.2. Proof of Point (ii). If E(e^{−λτ(x)}) = e^{−λ^α}, one can replace L_2(n) with 1 in (11.1) and in the above definition of U_n(x), and one can define τ_n(x) directly by means of (10.1). In this case, definition (2.8) gives S_n(k/n) = V((k + 1)/n) and therefore dm_n = (1/n) Σ_{k=1}^{n+1} δ_{V(k/n)}. It is simple to prove that a.s. dm_n weakly converges to dm := d(V^{−1})_*. Hence the assumptions of Theorem 2.1 are fulfilled with ℓ_n = V((n + 1)/n), ℓ = V(1) and dm = d(V^{−1})_*, for almost all realizations of V. As a consequence, one derives Point (ii) in Theorem 2.5.

11.3. Proof of Point (iii). The proof of Point (iii) of Theorem 2.5 follows from Proposition 2.2 once we prove (2.17) with m = V. As in the proof of Lemma 10.2, we denote by 0 < y_1 < y_2 < · · · < y_N < 1 the points in [0,1] where V has a jump larger than 1/2 (note that V is continuous at 0 and 1 a.s.). We set a_i := V(y_i−), b_i := V(y_i) and remark that the function V^{−1} is constant on [a_i, b_i]. Then we fix ε > 0 (which is a random number) such that the following properties hold: (i) the intervals U_i := [a_i − ε, b_i + ε], i = 1, . . . , N, are disjoint and included in [0, V(1)]; (ii) V has no jump at a_i − ε and b_i + ε, for all i = 1, . . . , N; (iii) for all i = 1, . . . , N, the bound (11.11) holds. Note that, since V^{−1} is continuous a.s. and flat on U_i, condition (iii) is satisfied for ε small enough. Moreover, due to condition (ii), it holds V^{−1}(x) < V^{−1}(y) < V^{−1}(z) if y ∈ {a_i − ε, b_i + ε} and x < y < z. Let now f be an eigenfunction of the operator −D_{V^{−1}} D_x on U_i with Dirichlet boundary conditions.
Writing λ for the associated eigenvalue, equation (4.9) in Lemma 4.1 yields (11.12). Combining (11.11) and (11.12), we conclude that λ ≥ 2. Hence N^{U_i}_{V^{−1},D}(1) = 0. We now observe that the set W = [0, V(1)] \ ∪_{i=1}^{N} U_i is the union of N + 1 intervals and its total length is smaller than V^{(1)}(1) (see the proof of Lemma 10.2 for the definition of V^{(1)}).
It follows that we can partition W into at most 2V^{(1)}(1) + N subintervals A_r of length bounded by 1/2. Since the dV^{−1}-mass of any subinterval A_r is bounded by the total dV^{−1}-mass of [0, V(1)] (which is a.s. equal to 1), by the estimate (4.11) in Lemma 4.1 we get that all eigenvalues of the operator −D_{V^{−1}} D_x restricted to any subinterval A_r (with Dirichlet b.c.) are at least 2; hence N^{A_r}_{V^{−1},D}(1) = 0. We now apply Corollary 8.9, observing that we are in the same setting as in Theorem 8.8 (recall that V^{−1} is continuous a.s. and recall our condition (ii), thus leading to (i)-(iii) in Theorem 8.8). By Corollary 8.9 we conclude the desired bound (2.17) on the counting function.

12.1. Proof of Proposition 2.4. We consider the diffusively rescaled random walk X^{(n)} on Z_n with jump rates

c_n(x, y) = E(τ(0)^{−a})^2 E(τ(0)) n^2 τ(nx)^{−1+a} τ(ny)^{a} if |x − y| = 1/n , 0 otherwise .
By the ergodic theorem and the assumption E(τ(0)^{−a}) < ∞, it holds lim_{n↑∞} S_n(⌊xn⌋/n) = x for all x ≥ 0 (a.s.). In particular, it holds ℓ_n = S_n(1) → 1. Since π²k² is the k-th eigenvalue of −∆ with Dirichlet conditions outside (0,1), by Theorem 2.1 it remains to prove that, a.s., for all f ∈ C_c([0, ∞)) the limit (12.1) holds. By the ergodic theorem and the assumption E(τ(0)) < ∞, the total mass of dm_n, i.e. Σ_{k=0}^{n} H_n(k/n), converges to 1 a.s. Hence, by a standard approximation argument with stepwise functions, it is enough to prove (12.1) for functions f of the form f = I([0, t)). By the ergodic theorem, a.s. it holds: for any ε > 0 there exists a random integer n_0 such that S_n(k/n) < t for all k ≤ (t − ε)n and S_n(k/n) > t for all k ≥ (t + ε)n. Therefore, for f as above and n ≥ n_0, we can bound

(1/(n E(τ(0)))) Σ_{k∈N : k ≤ (t−ε)n} τ(k) ≤ dm_n(f) ≤ (1/(n E(τ(0)))) Σ_{k∈N : k ≤ (t+ε)n} τ(k) .
Applying again the ergodic theorem, the conclusion is immediate.

12.2. Proof of Proposition 2.6. We only sketch the proof, since the technical steps are easy and similar to the ones discussed above. We consider the diffusively rescaled random walk X^{(n)} on Z_n with jump rates

c_n(x, y) = n^2 E(τ(0)) τ(nx ∨ ny)^{−1} if |x − y| = 1/n , 0 otherwise .
The rates c_n(x, y) for |x − y| = 1/n can be written as c_n(x, y) = 1/(H_n(x ∨ y) U_n(x ∨ y)), where H_n(x) = 1/n and U_n(x) = τ(nx)/(n E(τ(0))). By the ergodic theorem and the assumption E(τ(0)) < ∞, a.s. it holds lim_{n↑∞} S_n(⌊nx⌋/n) = x for all x ≥ 0. In particular, a.s. ℓ_n = S_n(1) → 1, and the conclusion follows by applying Theorem 2.1 as before.

Appendix A. Proof of Lemma 4.1

For simplicity of notation we write ℓ = ℓ_m. As already observed, the Dirichlet eigenvalues are all simple and the λ-eigenspace is spanned by ψ(·, λ). The fact that ψ(·, λ) is a real function for any real λ is a simple consequence of the expression of ψ(·, λ) as a series, given at page 30 in [KK0] and recalled in the proof of Lemma 5.2.
As discussed in [KK0][Section 2], the function C ∋ λ → ψ(ℓ, λ) ∈ C is entire and has only positive zeros, which are all simple. It is well known that the set of zeros of an entire function on C either is all of C or is a countable (finite or infinite) set without accumulation points. We can exclude the first alternative, since we know that the zeros of ψ(ℓ, ·) must lie on the half-line (0, ∞). In particular, if there are infinitely many eigenvalues, they must diverge to +∞.
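These spectral features (positive, simple eigenvalues diverging to +∞) can be observed in the simplest discrete instance: for m equal to Lebesgue measure, −D_m D_x reduces to −∆ on (0,1), and the Dirichlet discretization is the tridiagonal matrix n²·tridiag(−1, 2, −1), with exact eigenvalues 2n²(1 − cos(πk/n)) → π²k², the values appearing in Proposition 2.4. A quick numerical sanity check (an illustration, not part of the proof):

```python
import numpy as np

n = 400
# Dirichlet discretization of -Delta on (0, 1): the rate-n^2
# nearest-neighbour generator acting on the n - 1 interior sites.
A = n * n * (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1))
evals = np.linalg.eigvalsh(A)                 # sorted in increasing order
# exact spectrum: 2 n^2 (1 - cos(pi k / n)); limit as n -> infty: (pi k)^2
first_five = evals[:5]
limits = np.array([(np.pi * k) ** 2 for k in range(1, 6)])
```

Replacing the constant off-diagonal entries by τ-dependent conductances gives the generators studied in Section 12; being Jacobi matrices with nonzero off-diagonal entries, they too have simple positive Dirichlet eigenvalues, mirroring the continuum statement above.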
It remains to prove the last statement concerning (4.9) and the estimate (4.11). By definition, F is a Dirichlet eigenfunction of −D_m D_x with eigenvalue λ if and only if, for some b ∈ C, F solves the integral equation (A.1). It is simple to check that the identity (A.4) is equivalent to (4.9). On the other hand, we know that (A.4) is equivalent to equation (A.1) together with (A.3), and that the latter is equivalent to F(ℓ) = 0.