Local equilibrium properties of ultraslow diffusion in the Sinai model

We perform numerical studies of a thermally driven, overdamped particle in a quenched random force field, known as the Sinai model. We compare the unbounded motion on an infinite one-dimensional domain to the motion in bounded domains with reflecting boundaries and show that the unbounded motion is at every time close to the equilibrium state of a finite system of growing size. This is due to time scale separation: inside wells of the random potential, equilibration is relatively fast, while the motion across major potential barriers is ultraslow. The quantities we study are the time dependent mean squared displacement, the time dependent mean energy of an ensemble of particles, and the time dependent entropy of the probability distribution. Using a very fast numerical algorithm, we can explore times up to $10^{17}$ steps and thereby also study finite-time crossover phenomena.


Introduction
Among many models for subdiffusion, the Sinai model sticks out due to the fact that diffusion is ultraslow. This means that the mean squared displacement grows slower than any power of time, namely like $\ln^4 t$; see [1] for a recent review on other such systems. Physically, the Sinai model describes the one-dimensional thermal random motion of a particle in a random potential, more specifically, a potential which is constructed from a Brownian path. If the spatial domain is infinite, as is usually assumed when writing down the model, the potential can have arbitrarily high barriers and arbitrarily deep wells. However, due to the recurrence properties of Brownian paths in one dimension, there will be potential zero-crossings at arbitrarily far distances from the origin. This model was introduced by Yakov Sinai [2] as a special case of models with a site-dependent jump probability and has found much attention in the literature since then, see, e.g., [3][4][5][6][7][8] for some thorough analysis. The concept has found many applications, including the dynamics of random field magnets and dislocation dynamics [7], glass dynamics [9], aging phenomena [10], random-field Ising models [11,12] and helix-coil boundaries in random heteropolymers [13,14]. With the inherently quenched heterogeneity of biomolecules, Sinai-type models describe mechanical DNA unzipping [15,16], translocation of biopolymers through nanopores [17,18], and molecular motors [19]. Also, quantum transport in disordered topological quantum wires [20] has been related to the Sinai model.
More generally, there are many phenomena of ultraslow diffusion in disordered systems of low dimension, such as in vacancy-induced motion [21,22], biased motion in exclusion processes [23], local relaxation dynamics in DNA [24], paper crumpling under a heavy piston [25] or compaction of granular systems [26], glassy systems [27], statistics of extreme events [28], the ABC model [29], dynamics in nonlinear maps [30], interacting many-particle systems [31], in dynamics of cooling granular gases [32,33], and short range correlated Gaussian potentials [34].
Previous works concern the properties of a random walk in the Sinai model, for example the probability density function (PDF), transport phenomena and the mean squared displacement, based on long-time disorder-averaged dynamics obtained by different approaches such as scaling arguments [7,34], the renormalisation group technique [46] and a discrete random walk model [49,53,54], as well as approaches based on time-averaged observables [35]. However, experimental observations often relate to non-equilibrium properties of the system, which is the focus of the present paper. More specifically, based on a discrete random walk model, we look into correlation phenomena and ageing when starting ensembles of independent trajectories in the same realisation of the random potential but with their individual thermal noises, by analysing the ensemble mean potential energy and the Shannon entropy of the time dependent probability density. We then average over the disorder of the potential. In our setup all particles start from the origin, and the potentials are constructed as random walks from this point in both directions. The main conclusion of this work is that, due to the extreme slowness of the diffusion, the system is always in quasi-equilibrium on the domain explored so far. This is formalised by the concept of infinite, non-normalised densities, which was developed for non-confining, i.e., asymptotically flat one-dimensional potentials [55], also for log potentials [56], and which allows one to represent a time dependent density by the invariant density on a finite domain dressed by a time dependent factor. For Sinai diffusion a remarkable result was obtained by Golosov: for a given realisation of a potential landscape, all trajectories with the same initial condition in the deepest well stay close together forever [57][58][59][60].
More practically, Golosov proved that in the long time limit, the distribution of the relative distance y = x(t) − m(t), where m(t) is the most probable position, after averaging over random potentials, tends to a limit distribution [57]. However, the renormalisation group analysis showed that, for the same thermal initial conditions, the existence of the limit distribution for the random variable y does not imply that its moments remain finite in the long time limit [58][59][60]. Here we scrutinise the case of non-equilibrium initial conditions and show that quasi-equilibrium states in the random potential emerge due to a time scale separation with respect to escape times over larger local potential maxima. In contrast to the Golosov result, however, these "local equilibria" depend on the starting point of the trajectory and are non-universal in this sense.
Our paper is organised as follows. In section 2 we introduce an ergodic lattice hopping model for the dynamics of Sinai diffusion, followed by a numerical scheme using a Markov matrix approach in section 3. Sections 4 and 5 report time dependent properties of the Sinai model, namely the mean squared displacement and the mean potential energy together with the Shannon entropy, respectively, and section 6 discusses the Golosov localisation effect. The equivalence of the unbounded motion in an infinite system and the equilibrium state of a finite system is then investigated in section 7. We discuss the different temperature regimes of the infinite system with open boundary conditions (open system) and the finite system with reflecting boundary conditions (closed system) in section 8, and a comparison with infinite densities is presented in section 9. Finally, a summary and discussion are provided in the last section.

Lattice Sinai model
We study a version of the Sinai model on the infinite lattice of integers, also called the lattice hopping model [61,62]. To create the random potential, we start a lattice random walk at site i = 0 in both directions, i.e., we define a path V_i with V_0 = 0 and |V_{i±1} − V_i| = 1, so that from one lattice site to its neighbour, V_i jumps upward or downward by one unit with equal probability. Then V_i forms a lattice Brownian path in i, which here is a spatial coordinate. We then interpret V_i as the potential for an overdamped particle which is driven by thermal noise. The balance between the deterministic downhill motion of an overdamped particle in this potential V and thermal noise is determined by the parameter ǫ ∈ [0, 1], which enters in the probability for the particle at site i to hop to the left, to hop to the right, or to stay:

q_{i→i±1} = [1 − ǫ (V_{i±1} − V_i)]/4,    q_{i→i} = 1 − q_{i→i+1} − q_{i→i−1},    (1)

where evidently the sum q_{i→i+1} + q_{i→i−1} + q_{i→i} = 1 irrespective of ǫ. Since |V_i − V_{i±1}| = 1, the probability to jump to a neighbouring site is given by one of the two values 1/4 − ǫ/4 (uphill) and 1/4 + ǫ/4 (downhill), while the probability to stay is one of the three values 1/2, 1/2 − ǫ/2, or 1/2 + ǫ/2, if the site i is on a slope, at a maximum, or at a minimum of the potential, respectively (see figure 1). This restricts 0 ≤ ǫ ≤ 1, where ǫ = 0 is the infinite temperature limit, in which the potential does not influence the hopping rates, and ǫ = 1 is the zero-temperature limit, in which only downhill motion and resting, i.e. deterministic motion without fluctuations, are possible and no particle can escape a potential minimum.
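As an illustration, the lattice Brownian potential and the hopping probabilities of equation (1) can be generated in a few lines. The following Python sketch is our own illustration (function names are not from the paper's code); it builds the potential on i ∈ [−L, L] and evaluates the hop probabilities at an interior site:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_potential(L):
    """Lattice Brownian path V_i on sites i = -L..L (array index L+i), V_0 = 0,
    with |V_{i+1} - V_i| = 1, grown from the origin in both directions."""
    steps_right = rng.choice([-1, 1], size=L)
    steps_left = rng.choice([-1, 1], size=L)
    V = np.zeros(2 * L + 1)
    V[L + 1:] = np.cumsum(steps_right)       # sites i = 1..L
    V[:L][::-1] = np.cumsum(steps_left)      # sites i = -1..-L
    return V

def hop_probabilities(V, idx, eps):
    """q_{i->i+1}, q_{i->i-1}, q_{i->i} of equation (1) for an interior array index idx."""
    q_right = (1.0 - eps * (V[idx + 1] - V[idx])) / 4.0
    q_left = (1.0 - eps * (V[idx - 1] - V[idx])) / 4.0
    return q_right, q_left, 1.0 - q_right - q_left
```

Since each neighbour differs by ±1 in potential, the jump probabilities indeed only take the values (1 − ǫ)/4 (uphill) and (1 + ǫ)/4 (downhill).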
In contrast to implementations of this model without resting probabilities, i.e. q_{i→i} = 0, this version is ergodic (no splitting into even and odd sub-lattices) [49,62]. The even bigger advantage is that, as we will show below, equation (1) has an exact analytical solution if we restrict the dynamics to a finite domain with reflecting boundary conditions. Before we do so, let us discuss a special, non-random potential with infinite walls which has a V-shape: V_i = |i|. The stationary probability distribution P_i is given through the detailed balance condition as P_i q_{i→i+1} = P_{i+1} q_{i+1→i}. We compare this to the analytical solution of the Fokker-Planck equation, continuous in time and space, with the potential V(x) = c|x|, where the constant c carries the physical units. It is the Boltzmann distribution P_B(x) = Z^{−1} exp(−c|x|/k_B T), where Z is the normalising partition function, k_B is the Boltzmann constant, and T is the temperature. Note that due to the Einstein relation the diffusion coefficient reads D = k_B T/η, where η is the damping. For a lattice point i on the slopes of this potential, the ratio of the hopping rates is the ratio (1 − ǫ)/(1 + ǫ) of uphill probability and downhill probability. The same ratio evaluated for the stationary solution of the Fokker-Planck equation is simply e^{−ca/k_B T}, with the lattice spacing a = 1, which together yields

(1 − ǫ)/(1 + ǫ) = e^{−c/k_B T},  i.e.  ǫ = tanh(c/(2 k_B T)).    (2)

Hence, we can relate ǫ also quantitatively to temperature and, as stated before, T = 0 corresponds to ǫ = 1 and T = ∞ to ǫ = 0. In the numerical simulations we set c = 1.
If we consider the system with its random potential on a finite domain i ∈ [−L, L] with reflecting boundary conditions, then an analytical expression for its invariant density P_i can be shown to be the Boltzmann distribution. The derivation works as follows. The invariant distribution P_i satisfies detailed balance, P_i q_{i→i+1} = P_{i+1} q_{i+1→i}. We fix arbitrarily the value P_0 = 1 and normalise all P_i after we have calculated them. The precise discrete space solution is easily obtained in the following way: P_{i+1} = P_i q_{i→i+1}/q_{i+1→i} and hence P_{i+1} = P_0 ∏_{k=0}^{i} q_{k→k+1}/q_{k+1→k}. Inserting the transition probabilities defined in equation (1), we see that every ratio q_{k→k+1}/q_{k+1→k} can only assume the value (1 − ǫ)/(1 + ǫ) or its inverse, depending on whether the jump is uphill or downhill. Hence, in the product, equal numbers of uphill and downhill jumps cancel their contributions, so that the result is

P_i = P_0 [(1 − ǫ)/(1 + ǫ)]^{V_i} = P_0 e^{−V_i/k_B T},    (3)

where the latter equality is a consequence of equation (2). So we see that the equilibrium probabilities follow a Boltzmann distribution, which can be normalised for finite L by adjusting P_0. These P_i can be calculated numerically with high accuracy, so that we can numerically evaluate all kinds of averages in thermodynamic equilibrium. The above calculation represents a closed system with reflecting boundaries, since there is no in- or outflow of probability to lattice sites outside [−L, L].
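The telescoping product above makes the invariant density a one-liner. The sketch below (our own illustration) evaluates the Boltzmann form of equation (3) and verifies detailed balance against the transition probabilities:

```python
import numpy as np

def invariant_density(V, eps):
    """Invariant density on a finite lattice with reflecting boundaries.
    Detailed balance P_{i+1} = P_i q_{i->i+1}/q_{i+1->i} telescopes to
    P_i = P_0 * ratio**V_i with ratio = (1-eps)/(1+eps), i.e. a Boltzmann
    distribution (c = 1, so ratio = exp(-1/k_B T))."""
    ratio = (1.0 - eps) / (1.0 + eps)
    P = ratio ** (V - V[0])        # fix P_0 = 1
    return P / P.sum()             # normalise afterwards
```

Because every uphill factor is ratio and every downhill factor is 1/ratio, only the net height V_i survives in the exponent, as in the derivation above.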
In order to study the dynamics of this model, in a straightforward numerical simulation one would first generate the random potential V i and then iterate a trajectory by random jumps from one lattice site to one of its neighbours according to the probabilities of equation (1). Repeating this many times for the same random potential, one would simulate an ensemble of non-interacting particles from which one can approximate time dependent position distributions. We are interested in such distributions as a function of time. However, since the diffusion in this potential is extremely slow, we need a much faster iteration scheme, so that we are able to average also over many random potentials.

Numerical scheme for the non-equilibrium system
In the following, we focus on the initial condition P_i(t = 0) = δ_{i,0}, where all particles start at the lattice site i = 0. Instead of time consuming single particle simulations, we use a Markov matrix approach: the discreteness of our physical space and time allows us to summarise a single step in the time evolution of the probability P(t) by the multiplication of a Markov matrix with the vector P(t), where the elements of the Markov matrix are the transition probabilities q_{i→j} from one lattice site to another. These transition probabilities are given by equation (1), and hence the Markov matrix M has nonzero entries only on the diagonal and the two secondary diagonals. Hence, P(t + 1) = MP(t). Instead of performing this multiplication step by step in time, we simply square the actual Markov matrix repeatedly and thereby create a sequence of matrices M², M⁴, M⁸, etc., which generate P(t_k), where t_k = 2^k, by k matrix multiplications. Even though a single such operation scales like N³, where N is the rank of the matrix, we can quickly reach large t_k. If the initial condition is P_0(t = 0) = 1 and P_{i≠0}(t = 0) = 0, then the distribution at time t_k = 2^k is simply the central column of M^{2^k}, i.e., the column corresponding to the starting site. Since there is a non-zero probability that a particle hops one step to the right in every iteration step, after 2^k time steps in principle a range of 2^k lattice points might be explored by a trajectory. However, this probability, although theoretically strictly non-zero, is in practice extremely small.
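A minimal sketch of this matrix-squaring scheme (our own illustration): the matrix below uses reflecting boundaries, realised by putting the missing boundary outflow into the staying probability, which conserves probability and leaves the detailed-balance solution unchanged; an open system would instead truncate the matrix and accept leakage.

```python
import numpy as np

def markov_matrix(V, eps):
    """Tridiagonal Markov matrix for the hopping model on the sites of V,
    column-stochastic so that P(t+1) = M @ P(t); reflecting boundaries."""
    N = len(V)
    M = np.zeros((N, N))
    for i in range(N):
        if i + 1 < N:
            M[i + 1, i] = (1.0 - eps * (V[i + 1] - V[i])) / 4.0
        if i - 1 >= 0:
            M[i - 1, i] = (1.0 - eps * (V[i - 1] - V[i])) / 4.0
        M[i, i] = 1.0 - M[:, i].sum()   # staying probability closes the column
    return M

def propagate_doubling(M, P0, k):
    """P(t_k) with t_k = 2^k via k matrix squarings instead of 2^k multiplications."""
    Mk = M.copy()
    for _ in range(k):
        Mk = Mk @ Mk
    return Mk @ P0
```

With k = 58 squarings this reaches t = 2^58 ≈ 10^17 steps, which is the regime used in the paper; the sketch makes no attempt at the round-off safeguards discussed below.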
While the disorder-averaged mean squared displacement (MSD) grows very slowly in time, the motion in a single realisation of the potential is more complicated. Actually, as we will illustrate later in more detail, particles, and also the time dependent probability distribution, explore the lattice in a highly intermittent way. If a particle is trapped (the probability is localised) in a deep potential well, then for a long time the lattice will not be explored any further; only on a much larger time scale will we see hopping to an even deeper well farther away from the origin. This slowness implies that, on average over many such potentials, the exploration horizon grows only like ln² t [7,8]. This is also true for most individual realisations of the random potential, so that the rank 2L + 1 of the matrix, which is given by the range [−L, L] of the lattice which we model explicitly, can be chosen much smaller than N = 2^{k_max}. Since our (truncated) Markov matrix M does not conserve probability (there is leakage out of the finite range of the lattice which we consider), the norm of P will eventually decrease. We can detect this numerically, and we stop the time evolution when this leakage exceeds a total probability of 0.01. The time at which this occurs depends on the individual realisation of the random potential, on the size of the resolved domain, and on the temperature T or the noise strength ǫ, respectively. In all the analyses below, we therefore vary the modelling range [−L, L] of the open system, and we show results only up to times for which the leakage of probability was sufficiently small. For comparison, we also include results obtained for closed systems (reflecting boundaries) on equally large domains, obtained by the same numerical scheme.
On a standard workstation we thereby reach t > 10^17 time steps for an ensemble of 10^4 potential landscapes. Actually, there is another problem besides leakage which prevents us from going to much larger times: numerical inaccuracy. For k larger than 58, the matrix elements become very heterogeneous in magnitude, so that in squaring such a matrix we add very large and very small numbers. In such a summation, the small numbers tend to be truncated by round-off. This expresses itself in resultant matrices which, from some number of squarings onward, violate the normalisation of probability considerably, independent of leakage. Hence, we trust our simulations only up to 2^58 ≈ 10^17 time steps.
Direct numerical validations for this algorithm are contained in figures 2-4, where in the long time limit, averages of the Markov matrix simulations for finite, closed domains are compared to the numerical evaluation of the exact invariant probability distributions in the same potentials, equation (3).

Mean squared displacement
We first reproduce the well known result for the ensemble averaged mean squared displacement, which is defined as

MSD(t) = ⟨⟨x²(t)⟩⟩ = ⟨∑_i i² P_i(t)⟩,    (4)

where the latter equality is correct only for an ensemble of trajectories starting at i = 0, i.e., P_i(0) = δ_{i,0}. The disorder average ⟨·⟩ is taken over realisations of the potential landscape, and the ensemble average over the thermal noises. In a trajectory simulation, one would, for every potential landscape, run a large number of trajectories with their own thermal noises. In the Markov matrix approach, the ensemble average over the thermal noise is already built in. The numerical results shown in figure 2 are in excellent agreement with the theoretical prediction MSD(t) ∝ ln⁴ t [7,11] if time t is sufficiently large, namely

MSD(t) ≃ λ² ln⁴(t/τ),    (5)

with the length scale λ and the time scale τ defined as

λ = (k_B T)²/γ,    τ = ηλ²/(k_B T) = η(k_B T)³/γ².    (6)

Here, η is the friction coefficient and γ denotes the strength of the disorder. Note that in all numerical simulations we use η = γ = k_B = 1. For smaller t we observe some deviations from the asymptotic behaviour, which depend on the temperature T. This is emphasised in figure 2, where we divide the numerically determined disorder-averaged MSD by the asymptotic behaviour, R(t) = MSD(t)/ln⁴ t. This exercise leads us to the conclusion that if we want to observe asymptotic properties in the shortest simulation time, we should use ǫ in the range 0.6-0.8, or, respectively, k_B T ≈ 0.7-0.5. However, we will usually perform our numerical simulations for a whole range of ǫ-values.
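Given the density vector from the Markov matrix evolution, the MSD definition above reduces to a second moment over the lattice sites; a minimal sketch (our own helper, with the disorder average done by an outer loop over potentials):

```python
import numpy as np

def msd(P, sites):
    """Ensemble-averaged squared displacement for trajectories started at i = 0:
    sum_i i^2 P_i(t), evaluated for one realisation of the potential."""
    return float(np.sum(sites.astype(float) ** 2 * P))
```

A disorder average would accumulate `msd(P, sites)` over many random potentials at each t_k = 2^k and divide by the number of realisations.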

More time dependent properties
On an infinite lattice, there is no stationary state. Due to the randomness of the potential in the Sinai model, which possesses only a statistical self-similarity, we cannot expect simple behaviour here. In order to gain insight, we calculate the time dependent mean potential energy and the time dependent Shannon entropy of the probability distribution P_i(t),

E(t) = ⟨∑_i V_i P_i(t)⟩,    (7)

S(t) = −⟨∑_i P_i(t) ln P_i(t)⟩,    (8)

where the sums extend over the whole infinite lattice and 0 ln 0 = 0. We study both quantities as averages over many realisations of the potential landscape. Numerical results obtained by the Markov matrix method are shown in figures 3 and 4.
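Both observables are direct sums over the density vector; a short sketch (our own helpers) for one realisation, with the disorder average again done outside:

```python
import numpy as np

def mean_energy(P, V):
    """Mean potential energy sum_i V_i P_i(t) for one realisation."""
    return float(np.sum(V * P))

def shannon_entropy(P):
    """Shannon entropy -sum_i P_i ln P_i with the convention 0 ln 0 = 0."""
    nz = P[P > 0]
    return float(-np.sum(nz * np.log(nz)))
```

The masking of zero entries implements 0 ln 0 = 0 exactly, which matters because most of the lattice carries (numerically) vanishing probability.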
The mean potential energy drops as a function of time. This is to be expected since, the longer the particle moves through the potential landscape, the deeper the potential wells in which it will typically get trapped. This is also exemplified by a sequence of snapshots of the time dependent probability distributions P_i(t) in figure 5. In every instance, most of the probability is concentrated in deep potential wells, but as time goes on, deeper and deeper wells are explored, and more shallow wells are vacated. Empirically, we observe the behaviour

E(t) ≃ −c(T) ln t,    (9)

with a temperature dependent pre-factor c(T). Numerics indicates that this pre-factor is c(T) ≈ 1.3 k_B T for T < 1 (see figure 3, right panel). This observation can be explained as follows. Suppose a particle has explored the distance x(t). Since V(x) is Brownian motion, the deepest well in this range scales like V_min(x) ∝ −√(x(t)), and since x(t) goes like ln² t, we obtain the stated scaling of E(t) with ln t. This means that E(t) is controlled by the minimum of the potential explored by the particle up to time t, which is in agreement with other results in this paper. Despite this non-stationarity, the snapshots of the time dependent density show some similarity in their clustering (see figure 5). This clustering is quantified by the Shannon entropy, equation (8). For a uniform distribution over N lattice points, S = ln N, whereas for the initial δ-peak it is S = 0. Numerically, we observe convergence to an ǫ-dependent constant, S(t → ∞) = const. Its numerical value suggests that the density asymptotically concentrates on a few lattice points. Figure 5 makes it plausible that this clustering of the probability density takes place inside the deepest potential well in the explored region of the given potential.
So the intuition which we gain from this low-temperature non-equilibrium simulation is that at time t the particles explore a range L_eff of the potential which scales like ln² t, and that they settle down in the absolute minimum of this part of the potential. As time goes on, the range grows, and therefore new and even deeper absolute minima are explored. This leads to a decrease of the mean potential energy but to constancy of the entropy. The energy barrier after release from x = 0 at t = 0 is of order √(γx), and the time required to cross this barrier is given by the Arrhenius law

t ≃ τ exp(√(γx)/k_B T).

Here γ represents the strength of the disorder and τ defines a fundamental time scale. According to these scaling relations, after time t the particle has typically covered the distance x²(t) ≃ ln⁴(t/τ), see [7].

Golosov effect
We now address the localisation effect of the Sinai model described by Golosov in [57] (see also [7]) by computing the standard deviation σ(t) = [⟨x²(t)⟩ − ⟨x(t)⟩²]^{1/2} in a semi-infinite domain, where a particle starts its motion from x = 0 with a reflecting boundary condition at the origin. In [57] it was stated that this σ(t) asymptotically approaches a finite (disorder dependent) value for large t, while the mean value ⟨x(t)⟩ is governed by a function m(t) which tracks the deepest well in the explored range of the random potential. More precisely, Golosov proved that the disorder-averaged relative distance x(t) − m(t) converges in the long time limit towards a limit distribution. Hence, in the Golosov scenario the width of a packet of particles within a single realisation of the random potential does not grow with time in the long time limit. However, taking an average over different disorders leads to a divergent standard deviation. This is due to those configurations of the random potential in which there is more than a single deep well, and where the particles typically are localised in different, spatially separated minima, which leads σ(t) to diverge in the infinite-time limit (see also [53,54]). Indeed, it has been shown that the disorder-averaged standard deviation of the Sinai model has the long-time asymptotic [10,58,63]

⟨σ(t)⟩ ∝ λ ln²(t/τ),    (10)

where λ and τ are defined in equation (6). In figure 6 (left panel) we show the disorder-averaged standard deviation for different temperatures (see equation (2)) and compare it with the asymptotic behaviour (10). As can be seen, for large ǫ (low T) there is very good agreement between the analytical prediction and the numerical data. For small ǫ (high T), however, the result is affected by the finiteness of the system size L, and we observe a deviation from the theory. Moreover, in the right panel of figure 6 we show the PDF of the standard deviation for ǫ = 0.6 at different times. As can be seen, at long times the PDF of σ(t) is almost time independent, which is in agreement with Golosov's theorem, with a power-law decay whose exponent guarantees normalisation but which, without cut-off, would yield a diverging mean value. The cut-off at large σ(t), however, is a function of t, since σ(t) in every individual potential is strictly bounded by the largest distance a particle can travel in time t from the origin, and this propagation is extremely slow.

(Figure 5 caption: Snapshots of the time dependent probability distribution for a single, arbitrarily chosen potential (ǫ = 0.6). One clearly sees the localisation of probability in the deepest potential well inside the exploration range, when the initial distribution is P_0(0) = 1 and P_{i≠0}(0) = 0. The black line represents the random potential, and the initial δ-function distribution at x = 0 is shown in red.)
Therefore, the mean values of these PDFs are finite at any finite time, but grow slowly in time and eventually diverge, in full agreement with equation (10). In this sense, the statement of Golosov's theorem about an asymptotic shape of the distribution of σ(t) and equation (10) are not in contradiction, as illustrated by our numerics, due to the power-law tail with time dependent cut-off.
Our analysis in terms of entropy, however, shows that entropy S(t) converges to a finite value, which characterises the localisation of the thermal particles regardless of whether this takes place in a single or in multiple wells, and which is also insensitive to the distance between these wells. Therefore, the entropy is a much more suitable indicator for localisation than the standard deviation.
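The packet width discussed above follows directly from the density vector; a short sketch (our own helper) that also illustrates the multi-well divergence mechanism, where a density split between two distant wells yields a large σ even though each cluster is narrow:

```python
import numpy as np

def packet_std(P, sites):
    """Standard deviation sigma(t) of a thermal packet, from its density P_i(t)."""
    m1 = float(np.sum(sites * P))
    m2 = float(np.sum(sites.astype(float) ** 2 * P))
    return np.sqrt(m2 - m1 ** 2)
```

Two equally weighted δ-peaks at ±d give σ = d regardless of how sharply each peak is localised, which is exactly why the disorder-averaged σ diverges while the entropy stays finite.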

Equivalence of equilibrium and non-equilibrium dynamics
In this section we want to prove our claim that the unbounded motion of the infinite system is always in close vicinity of an equilibrium solution of a system whose size is given by the average exploration range of the unbounded motion at the respective time.
For the open systems, we therefore fix the range [−L, L] on which we model its time evolution, and we stop the iterations when probability starts to leak out through the open boundaries. We then calculate the entropy and the energy of this probability distribution in the given potential, and again perform an average over 10000 realisations of the random potential. These values will then be compared to those calculated with the Boltzmann distribution equation (3) in the same potentials.
The dependence of both the entropy and the mean potential energy on temperature for different lattice sizes L is shown in figures 7 and 8, respectively.
Following the above argument, the random potential is given as a random walk, whose deviation from the origin scales like √L. Assuming the same scaling for the deepest potential well on the finite range 2L, we expect the equilibrium mean potential energy, which for small T is dominated by exactly the deepest well, to scale like −√(2L). This is analogous to the above argument, with L taking the place of t. Indeed, when re-scaling E in this way, we obtain a nice data collapse for very small and very large temperatures, see figure 8. Actually, the data collapse for T ≫ 1 is no surprise, since the mean potential energy is zero in this limit of a uniform distribution, independent of L. For the intermediate range, one finds a data collapse as well when re-scaling also the temperature by 1/√(2L), see figure 8. This result can be explained as follows. Consider a system of size [−L, L]. We order the potential values at the lattice points from minimum to maximum; the two lowest potential traps are called V_min and V_next. We could add more minima to the argument below, but we use only the two deepest valleys and assume a Boltzmann distribution. In this approximation, the mean energy reads

E = [V_min e^{−V_min/k_B T} + V_next e^{−V_next/k_B T}]/Z,    (11)

where Z is the partition function of this two level system, Z = exp(−V_min/k_B T) + exp(−V_next/k_B T). Note that V_min and V_next are typically negative. Now, the usual argument is that V_min = c_min √L and V_next = c_next √L, in which c_min and c_next are statistically independent of L and specific to the system. Inserting this into the ensemble average gives

⟨E⟩/√L = ⟨[c_min e^{−c_min √L/k_B T} + c_next e^{−c_next √L/k_B T}]/Z⟩,    (12)

in which the exponents depend on T and L only through the combination k_B T/√L. Therefore, a plot of ⟨E⟩/√L versus T/√L is L independent, as shown in figure 8 (right panel). As mentioned, we can also add further minima, and the same trick still works. Furthermore, the ensemble average over the disorder, once added, gives a non-random result (the result does not depend on the specific values of c_min and c_next).
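The collapse argument can be checked numerically; the sketch below (our own illustration, with arbitrary values for c_min and c_next) evaluates the two-level mean energy above and confirms that E/√L depends on T/√L only:

```python
import numpy as np

def two_level_energy(c_min, c_next, L, T):
    """Mean energy of the two-deepest-wells approximation,
    with V_min = c_min*sqrt(L), V_next = c_next*sqrt(L) and k_B = 1."""
    V = np.array([c_min, c_next]) * np.sqrt(L)
    w = np.exp(-(V - V.min()) / T)   # shift by V_min for numerical stability
    return float(np.sum(V * w) / np.sum(w))
```

Doubling √L and T together leaves all Boltzmann exponents unchanged and simply doubles the energy, which is the content of the data collapse in figure 8.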
The entropy, in contrast, turns out to be independent of the system size L in the T → 0 limit, since only the shape of the deepest well and its degeneracy matter. The shape can be assumed to be independent of L. There is only a very slow increase with L, since the bigger L, the larger the probability that there is a second, independent well of the same depth and hence a larger degeneracy of the minimum.
As said, the open system results shown in figures 7 and 8 show mean values taken at the time when the total probability in the range [−L, L] starts to drop, which means that the support of the time dependent probabilities starts to extend beyond the range [−L, L]. Thus the time dependent exploration range L_eff becomes larger than the modelled domain, and this occurs after a time given by L_eff ∝ ln² t. Hence, for the open system, the behaviour E(t) ∝ −ln t observed above translates into E(t) ∝ −√(L_eff(t)), with L_eff ≈ 0.57L at this time, and is thus consistent with the equilibrium result E(L) ∝ −√L. This analysis therefore suggests that the non-equilibrium dynamics relaxes close to equilibrium in the finite range it has explored so far. This is an evident consequence of the time scale separation of the problem: ultraslow diffusion across large scales, but thermal noise which is strong enough to yield fast local relaxation.

(Figure 8 caption: Disorder-average of the mean potential energy in equilibrium for Sinai models of different size L. We rescale the energy by 1/√(2L), since this is how the well depths scale. Symbols represent the results corresponding to the Boltzmann distribution; blue dotted and orange dashed lines show the simulation results for an open system and for reflecting boundary conditions, respectively. At low temperatures the mean potential energy in the system with reflecting boundaries deviates from the equilibrium values due to the numerical error of the Markov matrix algorithm. The same phenomenon is observed for the open system.)

Three temperature regimes and scaling
For the two types of dynamics, the finite-time simulation of a closed finite-L system and of an open system, it is reasonable to distinguish low, intermediate, and high temperature regimes. In the low temperature limit, both types of dynamics are related through the relationship between the system size L and the exploration horizon L_eff, as explained above.
In the high temperature limit, the equilibrium distribution of a finite size system is uniform; hence the mean potential energy is zero, and the entropy of this distribution is ln L. In the non-equilibrium situation, high temperature implies irrelevance of the potential and hence free diffusion with its well known properties. In particular, while the mean potential energy remains 0 for all times, the entropy increases as ln t without upper bound. Employing again the scaling of the MSD in time in order to translate time into system size, we now have L_eff ∝ √t. The finite-time entropy of the infinite system therefore depends on the effective system size as S ≃ ln L_eff² = 2 ln L_eff. Apart from the mismatch of the prefactor, the difference of the distributions (Gaussian versus uniform) makes it evident that in the high temperature regime there is no equivalence of the non-equilibrium and equilibrium behaviour.
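The unbounded logarithmic growth of the entropy for free diffusion can be checked directly at ǫ = 0, where each step convolves the density with the kernel (1/4, 1/2, 1/4). The sketch below is our own illustration and tests only the logarithmic form (entropy increments constant under t → 4t), not the prefactor:

```python
import numpy as np

def free_entropy(t, L):
    """Shannon entropy of the eps = 0 hopping walk (stay 1/2, hop 1/4 each side)
    after t steps on a lattice of 2L+1 sites, started as a delta peak at the centre."""
    P = np.zeros(2 * L + 1)
    P[L] = 1.0
    kernel = np.array([0.25, 0.5, 0.25])
    for _ in range(t):
        P = np.convolve(P, kernel, mode="same")   # one free hopping step
    nz = P[P > 0]
    return float(-np.sum(nz * np.log(nz)))
```

Choosing L large compared with the spread √(t/2) keeps boundary truncation of `mode="same"` negligible over the simulated times.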
Finally, let us briefly discuss the regime of intermediate temperatures. We call a temperature intermediate if the equilibrium density is spread over many more wells of the potential than the deepest one but is still localised. Due to the scaling of the energy landscape with system size, we expect equilibrium systems of different size to be equivalent when the temperature divided by √L is the same. This is indeed what we observe in figure 8 in the central range of the temperature scale. Note that whenever we compare energies for different system sizes, we normalise them by √L, since this is how the depth of the deepest well scales.

Relation to infinite densities
Recently, there has been much progress in the study of systems with asymptotically flat potentials. The Boltzmann distribution e −V (x)/k B T for flat potentials is not normalisable, since it is constant for large |x|. This gives rise to the notion of infinite densities and to approaches to deal with these, see [55,56,[64][65][66][67][68][69][70][71][72][73][74]. Inspired by these works, we study  Figure 9. Ratios of the empirical probabilities r i = P i (t)/e −Vi/kB T after different times t. Evidently, for every time there is a core region where r i = O(1), and exterior regions where r i ≈ 0, with very narrow transitions (except at short times since there was not sufficient time to relax to a flat shape). Different lines represent different times t = 2 j up to j = 40. Here we use ǫ = 0.6 (or k B T ≈ 0.7) and L = 256.
here the ratio $r_i$ of the time dependent, numerically generated probability $P_i(t)$ to the non-normalised Boltzmann distribution on the lattice, $P_i^B = e^{-V_i/k_B T}$, namely $r_i = P_i(t)/P_i^B$. For large $|i|$, $P_i(t) = 0$, and so is the ratio $r_i$. Inside the exploration range, however, this ratio is an $i$-independent constant $c(t)$, up to a very narrow scatter. We eliminate this constant by summing up the products $P_i(t) e^{V_i/k_B T}$, which is essentially (but not precisely) the norm of the Boltzmann distribution inside the exploration range; we then use this value as the normalisation of the Boltzmann distribution, see figure 9 for an illustration for a single random potential. This is another way to verify the validity of the concept of local equilibrium: the ensemble of the open system on the infinite lattice explores some range, inside of which it almost always represents, to a good approximation, the equilibrium distribution of a finite, closed system, while outside this range the probability to find the particle is essentially zero, with a narrow transition in between. Note that the time steps between the different snapshots in figure 9 grow exponentially, $t = 2^j$.

Conclusions
Sinai diffusion belongs to the class of classical hard problems in statistical mechanics, and numerous questions on its detailed behaviour are still open. Here we consider the local equilibrium behaviour of a thermal particle in the presence of a random force field. We make use of a very fast numerical scheme to study the time dependent densities, the mean potential energy, the Shannon entropy, and the mean squared displacement of an ensemble of particles, both in a bounded domain with reflecting boundaries and in an open system, averaging over 10^4 random potentials given by a Brownian path. Our analysis in terms of the time dependent densities shows that while the system is in a non-equilibrium state, as manifested in the time dependent mean squared displacement, it still exhibits properties which are inherently related to thermal equilibrium. For example, with this insight we could use simple scaling arguments, based on the extreme value statistics of the potential minima, equation (13), to find the mean energy of the system, see figure 8. The remaining discrepancy might be due to different definitions of parameters such as D_eff in the lattice hopping model.
In contrast to the setup analysed by Golosov, who considered the localisation of a particle packet released in the deepest well of a specific realisation of the random potential, we here consider the case where ensembles of particles are seeded at an arbitrary position of the random potential and allowed to evolve independently. While Golosov stated that the variance of an ensemble of particles should be approximately constant over time, we find instead that the variance continues to increase even in the long time limit, while the Shannon entropy converges to a constant. Together this implies that the time dependent PDF is localised in a small number of potential wells, but that the distance between the wells populated at a given time typically grows in time.
In agreement with Golosov, we find that in an ensemble average over different random potentials there is convergence in time to a unique distribution with power-law tails and a time dependent cut-off, which hence yields a finite variance at any finite time.
Comparing the unbounded motion of the particles in an infinite one-dimensional domain with the motion in finite, bounded domains with reflecting boundaries, we demonstrate that the unbounded motion is at all times close to the equilibrium state of a finite system of growing size. This observation is due to a distinct time scale separation: inside the wells of the random potential there is relatively fast equilibration, while the motion across major potential barriers is ultraslow. Our results shed new light on the equilibration behaviour of particle packets in quenched, disordered potential landscapes.
Studying the time averaged spreading characteristics of a non-normalised state would be an interesting extension of the present work. The quantitative characterisation in terms of trajectories, such as the time averaged MSD and the width of the particle packet, will be of use in the analysis of dynamic phenomena in strongly disordered energy landscapes. For instance, the strong ageing observed in simulations of single proteins may indicate that protein dynamics belongs to this class of problems [75].
Another open problem is how to relate the infinite densities, namely the fact that within a certain range the system is in a non-normalisable Boltzmann-Gibbs state, to the well known disorder-averaged propagator P(x, t), equation (13).