Information transport in classical statistical systems

For "static memory materials" the bulk properties depend on boundary conditions. Such materials can be realized by classical statistical systems which admit no unique equilibrium state. We describe the propagation of information from the boundary to the bulk by classical wave functions. The dependence of wave functions on the location of hypersurfaces in the bulk is governed by a linear evolution equation that can be viewed as a generalized Schrödinger equation. Classical wave functions obey the superposition principle, with local probabilities realized as bilinears of wave functions. For static memory materials the evolution within a subsector is unitary, as characteristic for the time evolution in quantum mechanics. The space-dependence in static memory materials can be used as an analogue representation of the time evolution in quantum mechanics - such materials are "quantum simulators". For example, an asymmetric Ising model on a Euclidean two-dimensional lattice represents the time evolution of free relativistic fermions in two-dimensional Minkowski space.


Introduction
Memory and information transport are key issues in information technology. The study of statistical systems of Ising spins or bits has shaped the conceptual advances for the role of information [1]. The understanding of computations with materials able to conserve memory [2][3][4][5] may well influence future information processing. In this paper we propose a formalism for the problem of (static) information transport based on the concept of classical wave functions. It resembles the derivation of the wave function from the path integral in quantum mechanics by Feynman [6]. However, our approach remains entirely rooted in classical statistics, describing classical Ising spins in thermal equilibrium or more elaborate static states in classical statistical systems.
We investigate probability distributions in classical statistics for systems with boundaries and a "bulk" limited by the boundaries. How is a change in the boundary conditions reflected by the observables within the bulk? This amounts to the question of how a signal propagates from the boundary into regions within the bulk, or how information is transported within the bulk. In turn, this issue is directly related to the question of how information can be transported from one boundary to another, say between the two ends of a wire. We address these questions in a static context, for example thermal equilibrium, without any dependence on time. The absence of a genuine time evolution associates the bulk of such static states with a generalized notion of "equilibrium state", even for situations where the latter is not unique.
One finds a rather rich variety of different possible behaviors for the information transport. For the most common situation the boundary properties are not relevant for the bulk. Boundary information is lost within a finite correlation length, either monotonically or as damped oscillations. This situation is realized if the bulk equilibrium state is unique. A neighboring case is the power-like decay in the case of an infinite correlation length, as for critical phenomena. Most interesting in our context are static memory materials, for which expectation values of observables in the bulk depend on the boundary conditions. This becomes possible if no unique bulk equilibrium state exists, as in the case of spontaneous symmetry breaking. Information can now be transported from the boundary to the bulk, imprinted on the properties of the degenerate "equilibrium states." For example, we find models with oscillating local probabilities in the bulk, with details of the oscillations depending on the boundary conditions.
In general, local probabilities and therefore the expectation values of local observables A(t) will depend on the location t of some hypersurface in the bulk. For an Ising model with a finite correlation length ξ the boundary information will be exponentially erased, ⟨A(t)⟩ = Ā + O(e^{−∆t/ξ}), (1) for ∆t = min(t_f − t, t − t_in) larger than ξ, with Ā the "bulk expectation value" or equilibrium value. At a phase transition ξ diverges and Ā is approached with a power law in ∆t. We will see that this loss of memory of boundary conditions is characteristic for all systems with a unique equilibrium state. The loss of memory of boundary information is, however, not the only possibility. One may ask under which circumstances memory of boundary conditions is kept, for example by an oscillating behavior ⟨A(t)⟩ = a_0 cos(ω t + α), (2) with α depending on the boundary conditions. We find such a behavior for highly interesting "static memory materials" where bulk observables keep high sensitivity to boundary conditions. Static memory materials can be realized if no unique equilibrium state for the bulk exists. Degenerate generalized equilibrium states occur, in particular, in the case of spontaneous symmetry breaking or in the presence of conserved quantities. A simple example for a static memory material is the asymmetric diagonal two-dimensional Ising model, with action S_cl = −β Σ_{t,x} s(t + ǫ, x + ǫ) s(t, x), (3) taken in the limit β → ∞. The points (t, x) are on a square lattice, and local observables at given t correspond to Ising spins s(x) = ±1 or occupation numbers n(x) = (s(x) + 1)/2 ∈ {1, 0}. The interactions are only on one diagonal, and β may be considered as a combination of interaction strength and inverse temperature.
For suitable boundary conditions the expectation values ⟨s(t, x)⟩ at fixed x oscillate with t, similar to eq. (2). For β → ∞ the asymmetric diagonal Ising model constitutes a simple cellular automaton [7][8][9][10][11]. For the evolution from a hypersurface at t to the next hypersurface at t + ǫ each spin up or particle at x hops to the next site on the right at t + ǫ, and each spin down or hole does the same. The system can be visualized as propagating fermions, with occupation numbers n(t, x) either one or zero. Cellular automata may be realized by experimental setups or as computing architectures. Finite β or the addition of other interactions may be considered as probabilistic perturbations of the deterministic cellular automaton. Also a probabilistic distribution of boundary values turns this model into a genuine classical statistical system. Our general formalism embeds deterministic cellular automata into a classical statistical probabilistic setting. We will further be interested in the conditions for the realization of general static memory materials that do not need to be deterministic cellular automata.
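The deterministic hopping rule can be sketched in a few lines of code. This is a hypothetical minimal illustration, not the paper's construction; in particular, periodic boundary conditions in x are assumed here, since the x-boundary is not specified in the text.

```python
# beta -> infinity asymmetric diagonal Ising model as a cellular automaton:
# at each step from t to t + epsilon every occupation number hops one site
# to the right, n(t + eps, x + 1) = n(t, x)  (periodic chain assumed).
def hop_right(config):
    """One deterministic evolution step on a periodic chain."""
    return [config[-1]] + config[:-1]

# A boundary configuration at t_in is transported unchanged through the bulk;
# on a periodic chain of L sites it reappears after L steps, so no boundary
# information is ever lost.
config = [1, 0, 0, 1, 0]
history = [config]
for _ in range(len(config)):
    config = hop_right(config)
    history.append(config)
```

After a full revolution the initial configuration recurs, which illustrates why the bulk expectation values keep complete memory of the boundary.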
Our investigation addresses a particular form of "equilibrium signal transport" for which the usual dynamics of a system plays no role. In particular, the signal is available "simultaneously" at all locations t. This could be an interesting aspect for numerical computations, in particular since reading out the information at t by measuring some local observable A(t) would influence the probability distribution, similar to quantum mechanics. An experimental realization of static memory materials may use recent advances in spin-based information technology [12][13][14]. In our context it is important that the system obeys classical statistics, at least to a good approximation. This requires control of the quantum fluctuations in the real world [15].
It is the aim of our paper to develop a general formalism for the problem of information transport. It is based on the central concept of classical wave functions. While Feynman's path integral for quantum mechanics involves a complex weight factor, our investigation concerns a real positive weight factor ∼ exp(−S_cl). We therefore expect analogies, but also important differences as compared to quantum mechanics. Similar to quantum mechanics, the classical wave functions obey a linear evolution law realizing the superposition principle. Probabilities are bilinears in the wave functions, such that interference is, in principle, possible. As in quantum mechanics, the expectation values of observables involve bilinears of the wave functions. The main differences from quantum mechanics are twofold. Instead of the complex wave function in quantum mechanics, the classical statistical setting involves two different real wave functions. While their evolution is linear, it is, in general, not unitary. The norm of the wave functions can change. Only for particular static memory materials does the evolution become unitary and the two wave functions can be identified. In this situation the system obeys all axioms of quantum mechanics.
The transfer matrix formalism [16][17][18] introduces the notion of non-commuting operators into classical statistics. In a sense it can be viewed as the analog of the Heisenberg picture in quantum mechanics. Our approach supplements a Schrödinger picture for classical statistics by implementing classical wave functions. Their derivation from the partition function or functional integral shows many analogies to Feynman's derivation [6] of the quantum wave function from the path integral.
We will demonstrate our general considerations with several examples for which the boundary value problem can be solved explicitly. The first is the well-known one-dimensional Ising model [19,20] in a homogeneous magnetic field. This model serves as a demonstration of how the concept of a classical wave function can be used in practice. We recover the known results of the exact solution [21] directly from the evolution of the classical wave function. The expectation values of local spins for arbitrary boundary conditions are described explicitly.
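The loss of boundary memory in this example can be sketched numerically. The following hedged illustration (the values of β and h are arbitrary, and the construction is a standard transfer-matrix computation, not the paper's exact solution) normalizes the 1D Ising transfer matrix to a step evolution operator with largest eigenvalue one, evolves the wave function and its conjugate from the two boundaries into the bulk, and compares the bulk magnetization with the known equilibrium value.

```python
import numpy as np

beta, h = 1.0, 0.3
s_vals = np.array([1.0, -1.0])
# transfer matrix T[s', s] = exp(beta*s'*s + h*(s' + s)/2), real and symmetric
T = np.exp(beta * np.outer(s_vals, s_vals)
           + 0.5 * h * (s_vals[:, None] + s_vals[None, :]))
S = T / np.linalg.eigvalsh(T).max()   # step evolution operator, largest eigenvalue 1

qtilde = np.array([1.0, 0.0])         # boundary condition at t_in: spin up
qbar = np.array([1.0, 1.0])           # uniform boundary term at t_f
for _ in range(50):                   # move 50 layers into the bulk from both sides
    qtilde = S @ qtilde
    qbar = S.T @ qbar

# local magnetization deep in the bulk: the boundary memory is exponentially
# erased and the equilibrium value sinh(h)/sqrt(sinh(h)^2 + exp(-4*beta)) remains
s_bulk = (qbar * s_vals * qtilde).sum() / (qbar * qtilde).sum()
```

Both wave functions converge to the dominant eigenvector of S within a few correlation lengths, so s_bulk no longer depends on the chosen boundary spin.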
As a second explicit solution we consider the "four-state oscillator chain". This is again a one-dimensional Ising type model, now with two species of spins s_1 and s_2. For a particular choice of next-neighbor interactions this system shows oscillatory behavior of expectation values of local observables. These oscillations are damped, however, since they occur in a sector for which the eigenvalues of the step evolution operator obey |λ| < 1. Only in a particular limit does this system become a memory material with all four eigenvalues of S obeying |λ| = 1. It is then a "unique jump chain", corresponding to a simple cellular automaton [7][8][9][10][11].
Our third example is the asymmetric diagonal two-dimensional Ising model (3), as well as neighboring models. For β → ∞ we find the explicit solution of the boundary value problem in terms of the free propagation of relativistic Weyl, Majorana, Majorana-Weyl or Dirac fermions in two dimensions. The limit β → ∞ of this model can again be associated with cellular automata. This model describes a genuine memory material, and we briefly discuss a possible realization in experiments or by numerical simulation. It constitutes an existence proof for static memory materials within the setting of classical statistical systems. It is a quantum simulator with non-trivial Hamiltonian and associated oscillations of local probabilities and expectation values of local observables.
The aim of the present paper is mainly on the conceptual side. We only briefly discuss some aspects of an experimental realization of static memory materials, or of computer realizations of such materials. It is our hope that the concepts developed here are helpful for a future practical realization of static memory materials, and that such objects will reveal interesting new aspects for information processing. This paper is organized as follows. In sec. 2 the wave function q̃(t) and the conjugate wave function q̄(t) are defined as suitable "functional integrals" over variables at t′ < t or t′ > t, respectively. Thus q̃(t) depends on the "initial boundary factor" f_in, while q̄(t) involves the "final boundary factor" f̄_f. The local probability distribution p(t) can be expressed as a bilinear in q̃(t) and q̄(t). In sec. 3 we discuss the evolution (t-dependence) of q̃(t) and q̄(t), and therefore of p(t). The explicit solution for the classical wave function q̃(t) of the one-dimensional Ising model in a homogeneous magnetic field contains the full information about the equilibrium properties in the bulk, as well as the quantitative approach to equilibrium as one moves from the boundary to the bulk. We show that the monotonic information loss in the Ising model is not the only possible behavior. This is done by the explicit construction of classical probability distributions for which the local probability distribution p(t) oscillates with t. The evolution of quantum mechanics is found if the step evolution operator S is a rotation. This generalizes to subsectors of S acting on a suitable subspace of wave functions. In sec. 4 we discuss the generalized Schrödinger equation. This linear differential evolution equation can be obtained if a suitable continuum limit ǫ → 0 can be realized. We also introduce the density matrix for mixed state boundary conditions. We summarize the basic concepts and the relation to the quantum formalism in sec. 5.
To gain a first impression of the content of the formalism the reader could start with this section.
Sec. 6 is devoted to the discussion of simple static memory materials. We first discuss "unique jump chains" with properties similar to cellular automata. This is extended in sec. 7 to Ising type models describing the propagation of free fermions in one space and one time dimension. They realize oscillating local probabilities and expectation values of the type of eq. (2). We briefly discuss the possibility of an experimental realization. In sec. 8 we comment on the general conditions for memory materials. For memory materials the wave function, or a subsector of the density matrix, follows a quantum evolution. We discuss some conceptual implications of our results in the concluding sec. 9, in particular a possible impact on the foundations of quantum mechanics and ideas of an emergent time [22][23][24].

Functional integral for occupation numbers and classical wave functions
In secs. 2 to 4 we discuss the basic formalism for our discussion of information transport in static classical statistical systems. Since the formalism of classical wave functions is unfamiliar to most readers we proceed in detail. An overview of the formalism is provided in sec. 5, to which a reader mainly interested in examples for static memory materials may jump. We employ a discrete setting such that all quantities are mathematically well defined as long as the number of degrees of freedom remains finite. The continuum limit of an infinite number of degrees of freedom does not pose a particular problem for the investigations of this paper.

Spin chains
In order to be specific we consider a one-dimensional wire with locations in the wire labeled by a discrete position variable t, and endpoints at both sides denoted by t_in and t_f, t_in ≤ t ≤ t_f. For the particular set of observables used to define the probability distribution we take here as "variables" the occupation numbers n_γ(t). They can take the values zero or one, implying the relation n_γ² = n_γ. The occupation numbers could also express bits of information. They can be directly related to Ising spins s_γ = 2n_γ − 1 that take the values ±1. For our purposes it will be more convenient to work directly with the occupation numbers, but it is evident that a configuration sum or functional integral for occupation numbers can be translated directly to a functional integral for Ising spins.
In this sense we discuss here generalized Ising models. In analogy to the Ising model we mean by "functional integral" a sum over distributions of discrete numbers. For a finite number of degrees of freedom this is a finite sum. All expressions are therefore regularized. The index γ labels different species of occupation numbers at a given t. It could comprise location or momentum labels. Discrete variables with more than two values can be constructed by combining several n_γ. For example, integer values from zero to three are obtained as a binary code from two occupation numbers. The generalization to infinitely many discrete values or continuous values is straightforward. In this sense our setting will correspond to the most general quasi-local classical statistical system. A "local configuration" or set of occupation numbers n_γ(t) at a given t can also be expressed in a fermionic language. For n_γ = 1 a fermion of species γ is present, while for n_γ = 0 it is absent.
The overall classical statistical probability distribution p[n] is specified by associating to every sequence of values of the occupation numbers n_γ(t), e.g. {n} = {n_γ1(t_1), n_γ2(t_1), . . . , n_γ1(t_2), . . . }, a probability p[n] ≥ 0. (5) We investigate "quasi-local" probability distributions that can be written in the form p[n] = f̄_f ∏_t K(t) f_in. (6) Here K(t) involves only variables n_γ(t′) with t′ in the neighborhood of t, while the boundary terms f_in and f̄_f involve only n(t′) in the vicinity of t_in or t_f, respectively. This is a generic class of statistical systems. It comprises Ising models and all statistical systems with local interactions. Mostly we consider only interactions between neighboring t-layers, with K(t) depending on n_γ(t) and n_γ(t + ǫ). Also f_in will depend only on n_γ(t_in), and f̄_f on n_γ(t_f). This setting is easily generalized to multi-dimensional systems. In this case t labels a sequence of hypersurfaces, and the index γ comprises a label for the location within the particular hypersurface at a given t. For two-dimensional Ising models the observables n_γ(t) may be associated with Ising spins s(t, x) = ±1, and K(t) contains the information on next-neighbor or diagonal interactions.
We employ the label t for the different hypersurfaces in analogy with time, but we stress again that we have not introduced any a priori concept of time or dynamics. The wire is seen as a timeless or static object. Its description is given by a static probability distribution depending on boundary conditions. Thus our systems do not represent dynamical systems with time-dependent boundary conditions as studied in the wide field of signal transmission. Nevertheless, the dependence of expectation values of observables on t shares many features with a time evolution. We therefore sometimes associate t with an emergent time and use the corresponding language.
The boundary conditions can be collected in a "boundary matrix" b, given by the product of the initial and final boundary terms, b = f_in f̄_f. This may be extended to a statistical distribution of boundary conditions, b = Σ_α w_α f_in^(α) f̄_f^(α), with w_α appropriate weights. General local observables A(t) can be constructed from n_γ(t). In our context the central question of information transport asks how the expectation value of a local observable in the bulk, ⟨A(t)⟩, responds to a change in the boundary matrix b. The question of "equilibrium signal transport" from one boundary to another can be explored as a special case. The boundary matrix b is varied only by varying the initial condition f_in at one end ("initial time"). One then investigates observables A(t_f) at the other end ("final time") in their response to the variation of b. For the asymmetric diagonal Ising model (3) the initial signal is completely transmitted to the final boundary.
We start by introducing the concepts of a local probability distribution and classical wave function, and discuss subsequently how the local probability distribution and wave functions are obtained from the "overall probability distribution" p[n]. We employ a normalized p[n], corresponding to a partition function Z = 1. The partition function is expressed as a product of step evolution operators, multiplied by boundary terms. Splitting the functional integral for Z into parts with t′ < t and t′ > t introduces the concept of the classical wave function q̃(t) and the conjugate wave function q̄(t).

Local probabilities and wave functions
We begin with the simple case where γ takes only two values, with occupation numbers n_1 and n_2. The species can be associated with spin up and spin down of a fermion. There are four different local states at a given t that we label by τ = 1 . . . 4. For τ = 1 two fermions are present, n_1 = 1, n_2 = 1. For τ = 2 only one fermion with spin down is present, n_1 = 0, n_2 = 1, while for τ = 3 one has only a spin up fermion, n_1 = 1, n_2 = 0. For τ = 4 no fermion is present, n_1 = n_2 = 0. Local probabilities p_τ(t) for the four states obey the usual requirements, p_τ(t) ≥ 0, Σ_τ p_τ(t) = 1. At a given t the expectation values of local occupation numbers or local observables A(t) follow the standard classical statistical rule, ⟨A(t)⟩ = Σ_τ p_τ(t) A_τ(t), (9) with A_τ(t) the value of the observable in the state τ. A general local observable A(t) can be expressed as a linear combination of four basis observables constructed from products of occupation numbers, as 1, n_1, n_2, n_1 n_2. For a given t the expectation values of the occupation numbers are given by ⟨n_1(t)⟩ = p_1(t) + p_3(t), ⟨n_2(t)⟩ = p_1(t) + p_2(t). We will discuss later how the local probabilities p_τ(t) can be computed from the overall probability distribution p[n] which depends on the sequence of occupation numbers {n} for all t. We will express the set of local probabilities p_τ(t) in terms of a "classical wave function" q̃_τ(t) and a "conjugate classical wave function" q̄_τ(t) as p_τ(t) = q̄_τ(t) q̃_τ(t) (11) (no sum over τ here). In the next section we will formulate the t-dependence of the local probabilities p_τ(t) in terms of the wave function and its conjugate. The four real numbers q̃_τ can be seen as the coefficients of an abstract wave function in the occupation number basis, and similarly for the conjugate wave function.
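A minimal numerical illustration of this four-state bookkeeping follows; the wave function entries below are arbitrary example numbers, chosen only so that the local probabilities come out positive and normalized.

```python
import numpy as np

# States tau = 1..4 carry occupation numbers (n1, n2) = (1,1), (0,1), (1,0), (0,0).
n1_vals = np.array([1, 0, 1, 0])
n2_vals = np.array([1, 1, 0, 0])

# Local probabilities as bilinears of the two real wave functions,
# p_tau = qbar_tau * qtilde_tau (no sum over tau); arbitrary example numbers.
qtilde = np.array([0.6, 0.8, 0.2, 0.4])
qbar = np.array([0.5, 0.5, 0.5, 0.5])
p = qbar * qtilde                      # = [0.3, 0.4, 0.1, 0.2]
assert np.isclose(p.sum(), 1.0) and np.all(p >= 0)

# standard classical statistical rule <A(t)> = sum_tau p_tau A_tau,
# here for the two occupation numbers
exp_n1 = (p * n1_vals).sum()           # = p_1 + p_3
exp_n2 = (p * n2_vals).sum()           # = p_1 + p_2
```

The sums reproduce exactly the combinations p_1 + p_3 and p_1 + p_2 quoted in the text.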
As local basis observables we employ four basis functions h_τ(n) given by h_1 = n_1 n_2, h_2 = (1 − n_1) n_2, h_3 = n_1 (1 − n_2), h_4 = (1 − n_1)(1 − n_2). (12) In this short notation we have suppressed the t-argument. It will always be understood that local basis observables h_τ(t) depend on occupation numbers n_γ(t). The local basis observables are eigenfunctions of the occupation numbers with eigenvalues one or zero, in the sense n_γ h_τ = (n_γ)_τ h_τ, with (n_γ)_τ the value of n_γ in the state τ. With the configuration sum ∫dn = Π_γ Σ_{n_γ = 0,1}, one easily verifies three central relations: the first two are the product rule, h_τ h_ρ = δ_τρ h_τ (no sum over τ), (14) and the integration rule, ∫dn h_τ = 1, (15) while the third is the sum rule stating that the sum of the basis functions obeys Σ_τ h_τ = 1. (16) The wave function f(t) and the conjugate wave function f̄(t) are defined as f(t) = q̃_τ(t) h_τ(t), f̄(t) = q̄_τ(t) h_τ(t). (17) (Sums over repeated indices are implied unless stated otherwise.) Similarly, the local observables A(t) can be written as A(t) = A_τ(t) h_τ(t). In particular, the observable for the local occupation number N_γ(t) is simply given by n_γ(t). With eq. (11) we can write the expectation value (9) as ⟨A(t)⟩ = ∫dn f̄(t) A(t) f(t), (19) where ∫dn denotes the sum over local configurations according to eq. (15). We can generalize the notion of local observables by introducing operators A′(t) as matrices with elements A′_τρ(t). Their expectation value is expressed by the "quantum rule" ⟨A(t)⟩ = q̄_τ(t) A′_τρ(t) q̃_ρ(t). (20) Local observables that are constructed as sums of products of occupation numbers are represented by diagonal operators, A′_τρ(t) = A_τ(t) δ_τρ. (21) For our simple setting the operators A′ are 4×4-matrices with elements A′_τρ. Occupation numbers correspond to diagonal operators, (N′_γ)_τρ = (n_γ)_τ δ_τρ. For diagonal operators the equivalence of the expressions in eq. (20) with eqs. (9) and (19) is easily verified by using the properties of the basis functions and the definition (11). The physical interpretation of observables represented by off-diagonal operators is discussed in ref. [25].
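The three central relations can be checked by brute force for M = 2, evaluating the four basis functions on all four local configurations. This is a small self-contained sketch of that check.

```python
import itertools

# basis functions h_1 = n1*n2, h_2 = (1-n1)*n2, h_3 = n1*(1-n2), h_4 = (1-n1)*(1-n2)
def h(tau, n1, n2):
    return [n1 * n2, (1 - n1) * n2, n1 * (1 - n2), (1 - n1) * (1 - n2)][tau]

configs = list(itertools.product([0, 1], repeat=2))   # all local configurations

# product rule: h_tau * h_rho = delta_{tau rho} h_tau on every configuration
product_ok = all(h(t, *c) * h(r, *c) == (t == r) * h(t, *c)
                 for t in range(4) for r in range(4) for c in configs)
# integration rule: summing over local configurations gives 1 for each h_tau
integration_ok = all(sum(h(t, *c) for c in configs) == 1 for t in range(4))
# sum rule: sum_tau h_tau = 1 on every configuration
sum_ok = all(sum(h(t, *c) for t in range(4)) == 1 for c in configs)
```

Each basis function equals one on exactly one local configuration and zero on the others, which is why all three relations hold.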
Our setting can be extended to an arbitrary number of species γ ∈ {1, . . . , M}. The sequences of occupation numbers n_γ ∈ {0, 1} describe now N = 2^M local states. The corresponding basis functions h_τ, τ ∈ {1, . . . , 2^M}, are defined by the possible products of M factors n_γ or 1 − n_γ, in analogy to eq. (12). If we label τ by a sequence of occupation numbers, say (0, 1, 1, 0, 1 . . . ), the corresponding basis function h_τ = (1 − n_1) n_2 n_3 (1 − n_4) n_5 . . . is obtained by inserting n_γ for each value one, and 1 − n_γ for each value zero. The basis functions obey n_γ h_τ = (n_γ)_τ h_τ, where (n_γ)_τ "reads out" the corresponding value zero or one in the sequence τ. The three central relations (14) to (16) remain valid for arbitrary M.

Overall probability distribution
The overall probability distribution p[n] associates to each sequence of occupation numbers {n} = {n_γ(t)} a probability. We consider a discrete set of equidistant locations t ∈ {t_in, t_in + ǫ, t_in + 2ǫ, . . . , t_f}. (The continuum limit ǫ → 0 can be taken, if appropriate, at the end.) A sequence of occupation numbers specifies first all n_γ(t_in), then all n_γ(t_in + ǫ) and so on. If there is only one species (M = 1), the sequences are the configurations of an Ising chain. For an arbitrary number of species the sequences are the configurations of generalized Ising models. For the formal part of our discussion we generalize to an arbitrary weight function w[n] which is not necessarily positive. For the classical statistical systems of interest w[n] will then be restricted to be positive semidefinite. A general observable A[n] depends on the sequence of occupation numbers {n_γ(t)}. Expectation values obey the standard statistical rule ⟨A⟩ = ∫Dn w[n] A[n]. (24) We switch here to the standard "functional integral" notation. (The naming "functional integral" corresponds to the continuum limit ǫ → 0 where n_γ(t) become discrete functions of t. For finite ǫ the functional integral is simply a finite configuration sum which can be considered as an appropriate regularization of the functional integral.) The symbol Σ_{n} denotes the sum over all configurations of occupation numbers at arbitrary t. We will equivalently use the symbol ∫Dn for functional integration, ∫Dn = Σ_{n} = Π_t Π_γ Σ_{n_γ(t) = 0,1}. (25) Eq. (24) assumes that the weight distribution is normalized, ∫Dn w[n] = 1. Boundary conditions at t_in and t_f can be implemented by weight distributions of the form w[n] = f̄_f K[n] f_in, (27) where K[n] is independent of the boundary conditions. The "initial boundary term" f_in only depends on the occupation numbers at t_in, e.g. n_γ(t_in), while the "final boundary term" f̄_f involves only occupation numbers at t_f. In this paper we discuss quasi-local probabilities, for which K[n] is a product of local factors. We will concentrate on next-neighbor interactions where K[n] = Π_t K(t). (29) The product is over all time points t_in ≤ t ≤ t_f − ǫ.
The factor K(t) depends on two sets of neighboring occupation numbers n_γ(t) and n_γ(t + ǫ). Extensions beyond next-neighbor interactions are discussed in app. A. They do not lead to new structural elements. Local observables A(t) depend only on local occupation numbers n_γ(t) at a given t. Their expectation value is given by the local probability distribution p(t) = p[n_γ(t)] = p_τ(t) h_τ(t) = p_τ(t) h_τ[n(t)] according to eq. (9). With eq. (24) the local probability distribution obtains from the overall weight distribution as a configuration sum over occupation numbers at t′ different from t, p(t) = ∫Dn(t′ ≠ t) w[n]. (30) It will be our aim to express the t-dependence or evolution of the local probabilities p_τ(t) by an evolution law for the t-dependent wave function f(t) and conjugate wave function f̄(t) as given by eq. (17). From the evolution of the wave functions we can compute directly the evolution of expectation values of local observables A(t) according to eq. (19), and thereby investigate the propagation of information from the boundary into the bulk.

Partition function and step evolution operator
The partition function Z can be written as a functional integral, Z = ∫Dn f̄_f K[n] f_in = ∫Dn w[n]. (31) By the definition of the overall weight distribution (27) it equals one, Z = 1. (32) The real initial and final wave functions f̄_f and f_in depend only on n_γ(t_f) and n_γ(t_in), respectively. They associate weights to different initial and final boundary conditions. Their normalization should be compatible with eq. (32). In eq. (31) the factor K[n] takes a real value for each configuration {n}. Consider now the quasi-local probability distribution (29) with next-neighbor interactions. Arbitrary K(t) can be expanded as K(t) = S_τρ(t) h_τ(t + ǫ) h_ρ(t), (33) with basis functions h_τ(t + ǫ) and h_ρ(t) given in terms of n_γ(t + ǫ) and n_γ(t), respectively. Due to the identity (14) eq. (33) is indeed the most general function of n_γ(t + ǫ) and n_γ(t). The "step evolution operator" S, with matrix elements S_τρ, is a central quantity for our discussion. We only consider invertible S that can be diagonalized by complex regular matrices. The multiplicative normalization of K(t) is chosen such that the largest absolute value of the eigenvalues of S equals one. For t-independent S we observe the identity ∫dn(t + ǫ) K(t + ǫ) K(t) = (S²)_τρ h_τ(t + 2ǫ) h_ρ(t), (34) where S² denotes the matrix multiplication, (S²)_τρ = S_τσ S_σρ. This generalizes to chains of K-factors. The matrix multiplication property allows us to write Z = q̄_τ(t_f) (S^G)_τρ q̃_ρ(t_in), (37) with G = (t_f − t_in)/ǫ the number of steps between t_f and t_in and S^G the matrix product of G factors S.
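The factorization of Z into a boundary-contracted matrix product can be verified by brute force for a single species (two local states). The step evolution operator and boundary vectors below are arbitrary positive example numbers, not taken from any model in the text.

```python
import itertools
import numpy as np

S = np.array([[0.7, 0.2],
              [0.3, 0.8]])          # arbitrary positive step evolution operator
q_in = np.array([0.9, 0.1])         # initial wave function coefficients
qbar_f = np.array([0.5, 0.5])       # final conjugate wave function coefficients
G = 4                               # number of steps between t_in and t_f

# brute-force configuration sum over all sequences (n(t_1), ..., n(t_{G+1}));
# the state index 0/1 plays the role of tau, and S[new, old] follows the
# convention K(t) = S_{tau rho} h_tau(t + eps) h_rho(t)
Z_sum = 0.0
for seq in itertools.product([0, 1], repeat=G + 1):
    w = qbar_f[seq[-1]] * q_in[seq[0]]
    for m in range(G):
        w *= S[seq[m + 1], seq[m]]
    Z_sum += w

# the same partition function as a matrix product qbar(t_f) . S^G . qtilde(t_in)
Z_matrix = qbar_f @ np.linalg.matrix_power(S, G) @ q_in
```

The two numbers agree identically; summing the chain of S-elements over the intermediate indices is exactly matrix multiplication.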
Eq. (37) shows the close relation of the step evolution operator S with the transfer matrix. What is particular is the normalization. We employ a normalized partition function Z = 1. Eq. (37) still leaves some freedom in the relative normalization of S, q̃(t_in) and q̄(t_f). We will choose a normalization where the largest absolute value among the eigenvalues of S is normalized to one. If S depends on t one replaces S^G by the ordered product of matrices S(t_i), with S(t_f − ǫ) to the left and S(t_in) to the right.

Wave function from functional integral
The wave function f(t) and conjugate wave function f̄(t) correspond to partial functional integrations of the overall weight distribution w[n]. Here f(t) involves an integration over the variables n(t′) with t′ < t, while f̄(t) results from a similar integration over the n(t′ > t). For this purpose we split the product of local factors, K[n] = K_>(t) K_<(t), with K_<(t) = Π_{t′ = t_in}^{t − ǫ} K(t′), K_>(t) = Π_{t′ = t}^{t_f − ǫ} K(t′). (38) The wave functions are defined as f(t) = ∫Dn(t′ < t) K_<(t) f_in, f̄(t) = ∫Dn(t′ > t) f̄_f K_>(t). (39) The integral ∫Dn(t′ < t) restricts the product in eq. (25) to t′ < t, such that f(t) still depends on the occupation numbers n_γ(t); similarly, f̄(t) also depends on n_γ(t). These definitions allow us to express Z in terms of the wave function and conjugate wave function, Z = ∫dn(t) f̄(t) f(t). By suitable normalizations of f_in, f̄_f and K(t) we can always implement Z = 1.
Inserting the definitions (38) and (39) into eq. (30) expresses the local probabilities in terms of the wave functions, p(t) = f̄(t) f(t). Expanding in basis functions according to eq. (17) yields eq. (11). This identifies the wave functions in eq. (11) with the ones constructed from eq. (39). For diagonal local observables (21) the expectation values therefore obey eq. (20). Knowledge of the wave functions is sufficient for the determination of ⟨A(t)⟩, while all additional information contained in w[n] is not relevant.
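The bilinear form of the local probabilities can be checked by brute force for a single species: summing the overall weight over all variables at t′ ≠ t reproduces q̄_τ(t) q̃_τ(t). The step evolution operator and boundary vectors are arbitrary positive example numbers.

```python
import itertools
import numpy as np

S = np.array([[0.6, 0.1],
              [0.4, 0.9]])           # arbitrary positive step evolution operator
q_in = np.array([0.8, 0.2])
qbar_f = np.array([0.5, 0.5])
G, k = 4, 2                          # G + 1 layers, probe layer t = t_k

# brute-force partial configuration sum: fix n(t_k), sum everything else
p_sum = np.zeros(2)
for seq in itertools.product([0, 1], repeat=G + 1):
    w = qbar_f[seq[-1]] * q_in[seq[0]]
    for m in range(G):
        w *= S[seq[m + 1], seq[m]]
    p_sum[seq[k]] += w
p_sum /= p_sum.sum()                 # normalize (Z = 1 convention)

# wave-function bilinear: qtilde evolves forward from t_in,
# qbar backward from t_f (contraction with the remaining S factors)
qtilde = np.linalg.matrix_power(S, k) @ q_in
qbar = np.linalg.matrix_power(S.T, G - k) @ qbar_f
p_bilinear = qbar * qtilde
p_bilinear /= p_bilinear.sum()
```

Both constructions give the same local probability distribution at t_k, since the partial configuration sums on the two sides of t_k are precisely the two wave functions.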

Positivity of overall probability distribution
A local probabilistic setting (11) is realized if the product q̄_τ(t) q̃_τ(t) is positive semidefinite for all t, and for each τ individually. While this condition is sufficient for a well defined local probability distribution, it is necessary but not sufficient for a well defined overall probability distribution. The latter requires positivity of all w[n] = p[n]. The overall probability distribution (6) can be represented as a product of elements of step evolution operators. We use the explicit representation w[n] = f̄_f Π_t K(t) f_in, where f_in = q̃_τ(t_in) h_τ(t_in) is the initial wave function and f̄_f = q̄_τ(t_f) h_τ(t_f) the final conjugate wave function. Insertion of eq. (33) and use of eq. (14) yields w[n] = w_{ρ_1 ρ_2 ... ρ_{G+1}} h_{ρ_1}(t_1) h_{ρ_2}(t_2) . . . h_{ρ_{G+1}}(t_{G+1}), (45) with (no summations in the following expression) w_{ρ_1 ρ_2 ... ρ_{G+1}} = q̄_{ρ_{G+1}}(t_{G+1}) S_{ρ_{G+1} ρ_G}(t_G) . . . S_{ρ_2 ρ_1}(t_1) q̃_{ρ_1}(t_1). (46) Here we have numbered the time arguments, t_in = t_1, t_in + ǫ = t_2, . . . , t_f = t_{G+1}. We could specify w[n] directly by the product of step evolution operators (45) and (46). However, arbitrary S will not yield a positive overall probability distribution p[n] obeying eqs. (5) and (6). In general, eqs. (45) and (46) define a weight distribution that can take negative values for certain sequences {n}. A classical probability distribution requires that all weights are positive. In turn, this requires all coefficients w_{ρ_1 ρ_2 ... ρ_{G+1}} to be positive semidefinite, since they can be identified with the probabilities to find a particular sequence of occupation numbers. If negative w_{ρ_1 ... ρ_{G+1}} occur, we deal with the weight function for some generalized statistical setting, but no longer with classical statistics. Positivity of w can always be realized if all elements of the step evolution operator as well as q̃_{ρ_1}(t_1) and q̄_{ρ_{G+1}}(t_{G+1}) are positive or zero. This property is, however, not necessary. We do not necessarily require translation invariance, such that the step evolution operators for different t may differ. For example, there could be purely positive S (all S_ρτ ≥ 0) and purely negative S (all S_ρτ ≤ 0), with an even number of purely negative S.
For establishing a necessary condition for the positivity of $w$ we introduce the product (no sum over $\rho_m$)
$$B_{\rho_m,\alpha\beta}(t_m) = S_{\alpha\rho_m}(t_m)\,S_{\rho_m\beta}(t_{m-1}). \qquad (47)$$
This factor appears in $w$, while the other factors do not depend on the index $\rho_m$ for $2 \leq m \leq G$. If $B_{\rho_m,\alpha\beta}(t_m)$ has different signs for different $\rho_m$, for some given values of $\alpha$, $\beta$ and $t_m$, there are necessarily elements $w_{\rho_1\ldots\rho_m\ldots\rho_{G+1}}$ that change sign as the index $\rho_m$ varies. Therefore some elements of $w$ must be negative, and the weight factor is not a classical statistical probability distribution. A necessary condition for a classical statistical system is that two consecutive matrix elements $S_{\alpha\rho_m}(t_m)$ and $S_{\rho_m\beta}(t_{m-1})$ either have the same sign for all $\rho_m$, or that at least one of the two elements vanishes. This should hold independently of $\alpha$, $\beta$ and $t_m$. For a more extended discussion of which types of step evolution operators are compatible with the notion of an overall classical statistical probability distribution $p[n]$ we refer to ref. [25]. In the present paper we only discuss examples for which $S_{\tau\rho}(t)$, $q_\tau(t_{\rm in})$ and $\bar q_\tau(t_f)$ are all positive semidefinite.
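As a concrete illustration of this sign condition, the following sketch (the helper name and the test matrices are ours, not from the text) checks whether two consecutive step evolution operators are compatible with a positive overall weight distribution:

```python
import numpy as np

def consecutive_signs_consistent(S_next, S_prev):
    """Necessary positivity condition: for every pair (alpha, beta) the
    products B_rho = S_next[alpha, rho] * S_prev[rho, beta] must not
    contain both positive and negative entries as rho varies."""
    for alpha in range(S_next.shape[0]):
        for beta in range(S_prev.shape[1]):
            B = S_next[alpha, :] * S_prev[:, beta]
            if (B > 0).any() and (B < 0).any():
                return False
    return True

S_pos = np.array([[0.7, 0.3], [0.3, 0.7]])    # purely positive step
S_mix = np.array([[0.7, -0.3], [0.3, 0.7]])   # mixed signs within one step

ok = consecutive_signs_consistent(S_pos, S_pos)    # consistent signs
bad = consecutive_signs_consistent(S_mix, S_pos)   # violates the condition
```

The purely positive pair passes, while the mixed-sign operator fails, in line with the statement that positive elements are sufficient but sign consistency is only necessary.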

Evolution
The dependence of the local probabilities $p_\tau(t)$ on the location $t$ can be described by the $t$-dependence of the wave function $f(t)$ and the conjugate wave function $\bar f(t)$,
$$p_\tau(t) = \bar q_\tau(t)\,q_\tau(t) \qquad (48)$$
(no sum over $\tau$ here). We will often call the $t$-dependence of the wave function a "time evolution", even though it describes the dependence on some general location. The notion of evolution distinguishes the coordinate $t$, for which the evolution is considered, from other possible coordinates $x$ in a multidimensional setting. It does not mean that $t$ and $x$ are conceptually different.

Evolution of wave functions
The evolution of the wave function $f(t)$ follows directly from its definition (39). Using the explicit expressions (17) and (33), together with the relations (14) and (15), yields eq. (50), which establishes the linear evolution law
$$\tilde q(t+\epsilon) = S(t)\,\tilde q(t). \qquad (51)$$
Analogous steps give for the conjugate wave function the evolution
$$\bar q(t) = S^T(t)\,\bar q(t+\epsilon), \qquad (52)$$
or, for invertible matrices $S$,
$$\bar q(t+\epsilon) = \big(S^T(t)\big)^{-1}\,\bar q(t). \qquad (53)$$
The time evolution of the classical wave function $\tilde q$ and the conjugate wave function $\bar q$ is therefore described by the step evolution operator $S$, as encoded in the overall probability distribution by eqs. (29) and (33). The name "step evolution operator" reflects the role of $S$ for the evolution in a minimal discrete time step.
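A minimal numerical sketch of these two evolution laws (with a randomly chosen positive $S$; all variable names are ours) shows that the bilinear $Z = \bar q\cdot\tilde q$ is invariant under one evolution step:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.uniform(0.1, 1.0, (4, 4))   # positive step evolution operator

q = rng.uniform(0.1, 1.0, 4)        # wave function q(t)
qbar = rng.uniform(0.1, 1.0, 4)     # conjugate wave function qbar(t)

# One step: q(t+eps) = S q(t), qbar(t+eps) = (S^T)^{-1} qbar(t)
q_next = S @ q
qbar_next = np.linalg.solve(S.T, qbar)   # solves S^T x = qbar

# The partition function Z = qbar . q is invariant under the step
Z_before = qbar @ q
Z_after = qbar_next @ q_next
```

The invariance follows algebraically from $\bar q^T S^{-1} S \tilde q = \bar q^T \tilde q$.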

Classical Ising-spin systems
For a classical Ising-spin system the factor $K(t)$ can be expressed by an exponential, $K(t) = \exp\big(-L(t)\big)$, where $L(t)$ depends on $n_\gamma(t)$ and $n_\gamma(t+\epsilon)$. For positive boundary factors $f_{\rm in}$, $\bar f_f$ the product $\bar f_f \prod_t K(t)\, f_{\rm in}$ is positive semidefinite. Therefore its normalization by $\bar Z$ yields a normalized probability distribution. For general $L(t)$ the partition function and the classical wave functions are not yet normalized. We therefore take $\bar Z > 0$ arbitrary, while we will later achieve $Z = 1$ by a suitable additive constant in $L(t)$ and a normalization of the boundary factors. For the Ising model with general $\bar Z > 0$ one has the standard functional integral expression for expectation values of local observables, where we employ in eq. (31) the classical action $S_{\rm cl}[n] = \sum_t L(t)$. (Recall that a local observable $A(t)$ is an arbitrary function of the occupation numbers $n_\gamma(t) = \big(s_\gamma(t)+1\big)/2$ at a given $t$.) Using the factorization (38) one arrives at eq. (59), with $\bar Z$ given by eq. (40). We can express $A(t)$ as a linear combination of basis observables $h_\tau(t)$ according to eq. (18). The expansion (17) of $f(t)$ and $\bar f(t)$ in terms of basis functions defines the local probabilities as $p_\tau(t) = \bar Z^{-1}\,\bar q_\tau(t)\,q_\tau(t)$ (sum over $\tau$ only if indicated), such that eq. (59) is consistent with eq. (19). We want to work with a normalization of the wave function for which eq. (11) holds. By a suitable additive constant in $L(t)$ and/or suitable multiplicative constants in $f_{\rm in}$ and $\bar f_f$ we can always achieve $Z = 1$, such that $p_\tau = \bar q_\tau q_\tau$. We will adjust the additive constant in $L(t)$ such that the largest eigenvalue of $S$ obeys $|\lambda| = 1$. Then only the product $f_{\rm in}\bar f_f$ has to be normalized accordingly.
The most general action involving next-neighbor interactions and local terms can be written as
$$L(t) = M_{\tau\rho}(t)\,h_\tau(t+\epsilon)\,h_\rho(t). \qquad (61)$$
By use of eqs. (14) and (16) one finds the expansion of $K(t)$ in the basis functions, and therefore the matrix elements of the step evolution operator,
$$S_{\tau\rho}(t) = \exp\big(-M_{\tau\rho}(t)\big) \quad\text{(no sum)}. \qquad (63)$$
The step evolution operator $S$ has only positive elements, $S_{\tau\rho} \geq 0$. In this case we call $S$ a "positive matrix". (In the present paper a positive matrix means that all matrix elements are positive or zero, in distinction to the positivity of all eigenvalues.) The trivial evolution $S_{\tau\rho} = \delta_{\tau\rho}$ is found if the diagonal elements of $M$ vanish, $M_{\tau\tau} = 0$, while all off-diagonal elements diverge, $M_{\tau\rho} \to \infty$ for $\tau \neq \rho$. Our choice of basis functions allows for a straightforward and efficient computation of the step evolution operator or transfer matrix by eqs. (61) and (63). All these statements apply equally to multicomponent Ising spins or occupation numbers $n_\gamma(t)$. The standard Ising-type models assume translation symmetry, with $M_{\tau\rho}$ and $S_{\tau\rho}$ independent of $t$. As an explicit example we next describe the one-dimensional Ising model in our formalism. A few generalized Ising-type models are discussed in app. B.
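The elementwise exponential relation between the action coefficients $M$ and the step evolution operator, including the trivial-evolution limit, can be sketched as follows (the concrete numbers are illustrative only):

```python
import numpy as np

# Step evolution operator from the next-neighbor action:
# S_{tau rho} = exp(-M_{tau rho})  (no sum), so all elements are positive.
M = np.array([[0.0, 2.0],
              [2.0, 0.0]])
S = np.exp(-M)            # elementwise exponential, a "positive matrix"

# Trivial evolution: vanishing diagonal elements of M and diverging
# off-diagonal elements give S close to the unit matrix.
M_trivial = np.array([[0.0, 50.0],
                      [50.0, 0.0]])
S_trivial = np.exp(-M_trivial)
```

With $M_{\tau\tau}=0$ and large off-diagonal $M_{\tau\rho}$, `S_trivial` is the unit matrix up to corrections $e^{-50}$.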

One-dimensional Ising model
The one-dimensional Ising model in a homogeneous magnetic field is one of the best known exact solutions in statistical physics. It is therefore a good example to demonstrate our formalism based on classical wave functions explicitly. The known properties of the exact solution are recovered from the solution of the evolution equation for the wave function. Furthermore, the use of the occupation number basis for the computation of the transfer matrix and step evolution operator, as well as the solution of the model based on the behavior of classical wave functions, can be generalized to many other spin models.
The one-dimensional Ising model with next-neighbor interaction is described by a single spin $s(t) = \pm 1$ for each site $t$. The factor $K(t)$ in eqs. (33) and (54) reads
$$K(t) = \exp\Big[\beta\,s(t+\epsilon)\,s(t) + \frac{\gamma}{2}\big(s(t+\epsilon)+s(t)\big)\Big]. \qquad (64)$$
Here we use a single occupation number $n(t) = \big(1+s(t)\big)/2$ and basis functions $h_1 = n$, $h_2 = 1-n$. The constants $\beta$ and $\gamma$ are related to the next-neighbor coupling $J$ and the magnetic field $h$ by $\beta = J/(k_B T)$, $\gamma = h/(k_B T)$. From the relation (14) one infers the expansion coefficients of $K(t)$ in terms of the basis functions and obtains the well known result for the transfer matrix
$$\bar S = \begin{pmatrix} e^{\beta+\gamma} & e^{-\beta} \\ e^{-\beta} & e^{\beta-\gamma} \end{pmatrix}. \qquad (70)$$
At this point the probability distribution is not normalized, and one may compute $\bar Z(\beta,\gamma)$ from eq. (37). The normalization can be achieved by a multiplicative constant,
$$S = e^{-\varphi}\,\bar S. \qquad (71)$$
Employing the same factor for the boundary term $\bar q(t_f)\,q(t_{\rm in})$ we obtain $Z$ from eq. (37). We can identify $\varphi$ with the free energy per degree of freedom.

Free energy

The association of the free energy density $\varphi$ with the normalization of the step evolution operator in eq. (71) permits us to compute $\varphi(\beta,\gamma)$ from the evolution of the wave function. For this purpose we recall the evolution of the wave function,
$$\tilde q(t_{\rm in}+n\epsilon) = S^n\,\tilde q(t_{\rm in}). \qquad (74)$$
Since $S$ is a real symmetric matrix it can be diagonalized by orthogonal transformations, and the eigenvalues $\lambda_j$ are real. Eigenvectors with $\lambda_j > 1$ will grow, while eigenvectors with $\lambda_j < 1$ decrease. Consider now the final "time" $t_f$, where $Z = \bar q_\tau(t_f)\,\tilde q_\tau(t_f)$. We conclude that for any finite $\bar q(t_f)$ this relation can hold only if $\tilde q(t_f)$ is finite and different from zero. (We exclude here particularly "fine-tuned" final states $\bar q(t_f)$ that are orthogonal to a diverging part of $\tilde q(t_f)$.) For $G \to \infty$ this requires that one eigenvalue of $S$ equals one, while the other eigenvalue is smaller than one. This condition fixes the multiplicative factor $e^{-\varphi}$ in the step evolution operator or normalized transfer matrix: the largest eigenvalue of $S$ should be equal to one. This in turn determines $\varphi$ as a function of $\beta$ and $\gamma$. In consequence, the thermodynamics can be extracted from the normalization of the step evolution operator.
From the eigenvalue condition $\det(S-1) = 0$ one infers
$$e^{\varphi} = e^{\beta}\cosh\gamma \pm \big(e^{2\beta}\sinh^2\gamma + e^{-2\beta}\big)^{1/2}. \qquad (80)$$
For the appropriate sign this is the standard exact solution for the free energy of the one-dimensional Ising model. The largest eigenvalue of $\bar S$ matters, corresponding to the plus sign in eq. (80). We observe that $e^{\varphi}$ is an eigenvalue of the transfer matrix $\bar S$ in eq. (70), as expected. From the free energy $F = (G+1)\varphi$ the magnetization and the thermodynamic quantities of the Ising model can be computed. In other words, the requirement that the wave function $\tilde q(t)$ neither diverges nor vanishes for $(t_f - t_{\rm in})/\epsilon \to \infty$ can be used for the computation of the thermodynamic equilibrium properties. For more general models with not too large $N = 2^M$ this property may be useful for a determination of equilibrium properties by a numerical solution of the discrete evolution equation (51).
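The statement that the free energy density follows from the largest eigenvalue of the transfer matrix can be checked numerically (the function name is ours; the transfer matrix is the standard one quoted above):

```python
import numpy as np

def phi_from_transfer_matrix(beta, gamma):
    """Free energy per site from the largest eigenvalue of the
    one-dimensional Ising transfer matrix Sbar."""
    Sbar = np.array([[np.exp(beta + gamma), np.exp(-beta)],
                     [np.exp(-beta),        np.exp(beta - gamma)]])
    lam_max = np.linalg.eigvalsh(Sbar).max()   # Sbar is real symmetric
    return np.log(lam_max)

beta, gamma = 0.7, 0.3
# closed-form expression, plus sign of the eigenvalue condition
phi_exact = np.log(np.exp(beta) * np.cosh(gamma)
                   + np.sqrt(np.exp(2 * beta) * np.sinh(gamma) ** 2
                             + np.exp(-2 * beta)))
```

Both evaluations agree to machine precision, confirming that $e^{\varphi}$ is the largest eigenvalue of $\bar S$.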
If the largest eigenvalue of $S$ equals one, and all other eigenvalues are smaller than one, the product $S^n$ will converge for $n \to \infty$ to a scaling form $S_*$. This implies the matrix identity $S\,S_* = S_*\,S = S_*$. In app. C we use this relation in order to determine $\varphi$ and $S_*$. For the one-dimensional Ising model the explicit scaling form is the projector $S_* = v_+ v_+^T$, with $v_+$ the normalized eigenvector to the eigenvalue one. One verifies $\det S_* = 0$, corresponding to one eigenvalue one and the other zero.
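For $\gamma = 0$ the convergence of $S^n$ to the scaling form and the identity $S\,S_* = S_*$ can be verified directly (a sketch with illustrative parameters):

```python
import numpy as np

beta = 1.2                               # gamma = 0 for simplicity
Sbar = np.array([[np.exp(beta), np.exp(-beta)],
                 [np.exp(-beta), np.exp(beta)]])
S = Sbar / (2 * np.cosh(beta))           # largest eigenvalue normalized to one

S_star = np.linalg.matrix_power(S, 200)  # S^n for large n
```

For $\gamma = 0$ the eigenvector to the eigenvalue one is $v_+ = (1,1)/\sqrt{2}$, so $S_*$ has all entries $1/2$; the second eigenvalue $\tanh\beta$ is suppressed as $(\tanh\beta)^{200}$.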
The explicit form of the step evolution operator reads $S = e^{-\varphi}\bar S$, with $\bar S$ given by eq. (70). For $\gamma = 0$ one has the particularly simple form $g = 1$ and
$$S = \frac{1}{2\cosh\beta}\begin{pmatrix} e^{\beta} & e^{-\beta} \\ e^{-\beta} & e^{\beta} \end{pmatrix}, \qquad S_* = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}.$$
Knowing $S_*$ we can fix (for $G \to \infty$) the normalization of the initial and final factors, with eq. (37) reading $Z_b = \bar q_\tau(t_f)\,(S_*)_{\tau\rho}\,q_\rho(t_{\rm in})$. We can decompose $q(t_{\rm in})$ and $\bar q(t_f)$ into eigenvectors of $S_*$ with eigenvalues one or zero. Only the eigenvectors with eigenvalue one contribute to $Z_b$. They can be normalized by multiplication with $1/\sqrt{Z_b}$ in order to achieve $Z = 1$.

Magnetization
We can compute the magnetization in the bulk directly from the asymptotic behavior of the wave function. Exploiting eq. (74) for large $n$, only the largest eigenvalue of $S$ contributes. Thus the wave function $\tilde q(t)$ converges inside the bulk to an equilibrium value $\tilde q_*$, given by
$$S\,\tilde q_* = \tilde q_*. \qquad (88)$$
This fixes the wave function up to a multiplicative constant. A similar argument for the conjugate wave function yields for large $m$ (using $S = S^T$ in eq. (52)) the condition $S\,\bar q_* = \bar q_*$ for the equilibrium conjugate wave function. For properly normalized boundary conditions ($Z_b = 1$) we can employ the normalization (40), $\bar q_*^T\,\tilde q_* = 1$. Knowing the bulk wave functions explicitly we can compute expectation values in the bulk from eq. (43). For example, the occupation number $N' = \mathrm{diag}(1,0)$ has in the bulk the expectation value $\langle n\rangle_* = \bar q_*^T N' \tilde q_* = \Delta$. This translates to the average spin $s_* = 2\Delta - 1$, where $\Delta = \frac12\big(1 + g\sinh\gamma\,e^{2\beta}\big)$ and $1/g = \big(1 + e^{4\beta}\sinh^2\gamma\big)^{1/2}$.
We observe that $s_*$ only depends on the parameter combination $\delta = e^{2\beta}\sinh\gamma$. For $\delta \to \infty$ one finds the usual saturation $s_* \to 1$. As expected, $s_*$ coincides with the mean spin as computed from the free energy, $s_* = \partial\varphi/\partial\gamma$.

Effect of boundary conditions

Besides the asymptotic behavior in the bulk, the explicit knowledge of the evolution of the wave functions permits a detailed description of the influence of boundary conditions on the expectation values of local spins. For the boundary problem we split the initial wave function $\tilde q(t_{\rm in})$, which fixes the boundary term, into a part proportional to $\tilde q_*$ and a part proportional to the eigenvector $v_-$ of $S$ with eigenvalue $\lambda_-$,
$$\tilde q(t_{\rm in}) = c_1\,\tilde q_* + c_2\,v_-.$$
The evolution equation (74) yields $\tilde q(t_{\rm in}+n\epsilon) = c_1\tilde q_* + c_2\lambda_-^n v_-$. Arbitrary initial wave functions approach $c_1\tilde q_*$ as $n$ increases, according to
$$\tilde q(t) = c_1\,\tilde q_* + c_2\,e^{-(t-t_{\rm in})/\xi}\,v_-. \qquad (104)$$
The correlation length $\xi$ is related to the second eigenvalue of $S$,
$$\lambda_- = e^{-\epsilon/\xi}. \qquad (105)$$
For a quantitative estimate of the memory of the boundary conditions after $n$ steps into the bulk we need the second eigenvalue $\lambda_-$ of $S$. Since one eigenvalue of $S$ equals one, we compute $\lambda_-$ from the determinant, $\lambda_- = \det S = e^{-2\varphi}\det\bar S$. Restricting the discussion to $\gamma = 0$ one finds $\lambda_- = \tanh\beta$, such that $\xi/\epsilon = -1/\ln\tanh\beta$. The correlation length $\xi$ diverges for $\beta \to \infty$. It is the same correlation length as the one defined by the decay of the correlation function for the known exact solution of the Ising model. The conjugate wave function for $t$ in the vicinity of $t_{\rm in}$, with large $G \to \infty$, can be approximated by the equilibrium value $\bar q_*$, where the normalization is chosen such that $Z = 1$. The expectation value $\langle n(t)\rangle$ then follows from eq. (104), with the simple final result that the deviation from the bulk value decays as $e^{-(t-t_{\rm in})/\xi}$. We observe that the orthogonality of eigenvectors to different eigenvalues implies $\bar q_*^T v_- = 0$. The bounds $0 \leq \langle n(t_{\rm in})\rangle \leq 1$ limit the allowed values of $\delta\tilde q(t_{\rm in})$, while $\tilde q(t_{\rm in})$ can be arbitrary by use of the constant $c_1$. We conclude that the classical wave function $\tilde q(t)$, together with the conjugate wave function $\bar q(t)$, solves the issue of the influence of boundary conditions for the one-dimensional Ising model completely. This method is closely analogous to the use of the transfer matrix.
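For $\gamma = 0$ the second eigenvalue is the standard Ising result $\lambda_- = \tanh\beta$, giving $\xi = -\epsilon/\ln\tanh\beta$. The exponential loss of boundary memory can be illustrated numerically (variable names are ours):

```python
import numpy as np

beta, eps = 0.8, 1.0
S = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]]) / (2 * np.cosh(beta))

q_star = np.array([1.0, 1.0]) / np.sqrt(2)    # equilibrium wave function
v_minus = np.array([1.0, -1.0]) / np.sqrt(2)  # eigenvector, lambda_- = tanh(beta)

q = 0.9 * q_star + 0.4 * v_minus              # boundary wave function at t_in
deviations = []
for n in range(1, 21):
    q = S @ q
    deviations.append(abs(v_minus @ q))       # boundary memory after n steps

xi = -eps / np.log(np.tanh(beta))             # correlation length
ratio = deviations[1] / deviations[0]         # equals exp(-eps/xi) = tanh(beta)
```

Each step suppresses the deviation from equilibrium by one factor of $\lambda_- = e^{-\epsilon/\xi}$.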
The simple concept of wave functions allows a straightforward use of methods known from quantum mechanics. It is suitable for more complex models and the discussion of general properties.

Different types of evolution
Many classical statistical systems share the qualitative properties of the one-dimensional Ising model. For this class of systems the transfer matrix $\bar S$ has positive real eigenvalues. The step evolution operator $S$ is then related to $\bar S$ by a multiplicative constant, chosen such that the largest eigenvalue of $S$ equals one. If the second largest eigenvalue $\lambda_-$ is separated from the largest eigenvalue, it defines a finite correlation length $\xi$ according to eq. (105). The memory of boundary terms is exponentially damped, similar to eq. (112). For some subset of initial conditions the damping even occurs faster, according to other eigenvalues of $S$ smaller than $\lambda_-$. This scenario with a separated largest eigenvalue of the transfer matrix is realized if the dimension of $\bar S$ is finite and all elements are strictly positive, $\bar S_{\tau\rho} > 0$.
This loss of boundary information is, however, not the most general behavior. Interesting situations arise if the largest eigenvalue of S becomes degenerate. In this limit the correlation length diverges. This situation is typically realized for critical phenomena. For practical purposes it is sufficient that ξ exceeds the size of the system t f − t in , as realized for the strongly coupled or low temperature Ising model for large β.
One may also encounter unstable situations where the transfer matrix $\bar S$ has negative eigenvalues. For the one-dimensional Ising model this is realized for $\beta < 0$. Typically, the equilibrium state $\tilde q_*$ is not invariant under translations by $\epsilon$ in this case. The equilibrium state may still have translation symmetry with respect to translations by $2\epsilon$, as for the antiferromagnetic Ising model with $\beta < 0$.
It is possible that $S$ has complex eigenvalues. This can occur if $S$ is not symmetric. In this case one expects oscillatory behavior. An example is the four-state oscillator chain, as realized by the four-by-four matrix
$$S = \begin{pmatrix} 1-\eta & 0 & 0 & \eta \\ \eta & 1-\eta & 0 & 0 \\ 0 & \eta & 1-\eta & 0 \\ 0 & 0 & \eta & 1-\eta \end{pmatrix}. \qquad (114)$$
The four eigenvalues are
$$\lambda_0 = 1, \qquad \lambda_{1\pm} = 1-\eta \pm i\eta, \qquad \lambda_2 = 1-2\eta.$$
For $0 \leq \eta \leq 1$ all entries of $S$ are positive and the model can be realized with Ising spins. Since some elements of $S$ vanish it is a type of "constrained Ising model". Its detailed realization is discussed in app. B. We may avoid negative eigenvalues by requiring $\eta < 1/2$. The (unnormalized) eigenvectors to the eigenvalues $\lambda_0$, $\lambda_{1+}$, $\lambda_{1-}$, $\lambda_2$ are given by $v_0 = (1,1,1,1)$, $v_{1+} = (1,-i,-1,i)$, $v_{1-} = (1,i,-1,-i)$, $v_2 = (1,-1,1,-1)$. Since the real part of $\lambda_{1\pm}$ is smaller than one, one encounters damped oscillations of the wave function if the initial wave function contains components $\sim v_{1\pm}$. Whenever a unique largest eigenvalue one of $S$ is separated from the other eigenvalues $\lambda_i$, with $|\lambda_i| < 1$, one may suspect a loss of memory of boundary conditions with a finite correlation length. The situation is subtle, however. The reason is that an exponentially decaying $\tilde q - \tilde q_*$ may be compensated by an exponentially increasing $\bar q - \bar q_*$. This could lead to undamped oscillations of the local probabilities. In the next section we will define a density matrix as a bilinear in $\tilde q$ and $\bar q$. The general solution of the evolution equation (172) indeed admits undamped oscillatory behavior [25]. One then has to investigate whether such density matrices can be realized by suitable boundary conditions. The answer is negative, such that the evolution is indeed described by damped oscillations [25].
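Assuming the cyclic convention for the four-state oscillator chain written above (the orientation of the cycle is a convention; the transposed matrix works equally), its complex spectrum can be checked numerically:

```python
import numpy as np

def oscillator_chain(eta):
    """Four-state oscillator chain: (1-eta) on the diagonal and eta on
    the cyclic neighbor below it (indices modulo four)."""
    S = (1 - eta) * np.eye(4)
    for tau in range(4):
        S[tau, (tau - 1) % 4] += eta
    return S

eta = 0.3
S = oscillator_chain(eta)
lams = np.linalg.eigvals(S)

# expected spectrum: 1, 1 - 2*eta, and the complex pair 1 - eta +- i*eta
expected = np.array([1.0, 1 - 2 * eta,
                     1 - eta + 1j * eta, 1 - eta - 1j * eta])
```

For $\eta = 0.3 < 1/2$ all eigenvalues have positive real part and the complex pair produces damped oscillations; for $\eta = 1$ the matrix becomes a pure cyclic permutation with eigenvalues $\pm 1, \pm i$.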
One may ask if there are simple exact models where the loss of boundary information by the damping towards the equilibrium state is absent. This is indeed the case. A sufficient condition for keeping the memory of boundary conditions at $t_{\rm in}$ far inside the bulk, and even at the opposite boundary at $t_f$, is the presence of more than one largest eigenvalue of $S$ with $|\lambda| = 1$. (Largest is meant here in the sense of largest $|\lambda|$.) The simplest and rather trivial example is $S = 1$. In this case the expectation value of a local observable $A(t)$ is simply given by $\langle A(t)\rangle = \bar q_\tau(t_{\rm in})\,\hat A_{\tau\rho}\,q_\rho(t_{\rm in})$. This holds independently of $t$, including $t = t_f$. The expectation value obviously depends on the initial wave function $q(t_{\rm in})$. A somewhat less trivial example is
$$S = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. \qquad (118)$$
In the notation of eq. (12), with two species of occupation numbers interpreted as fermionic particles at two different sites, this corresponds to one particle jumping from one site to the other, while states with no particles or two particles, one at each site, are $t$-independent. The matrix $S$ has three eigenvalues $+1$ and one eigenvalue $-1$. Since $S^2 = 1$ the evolution is periodic with period $2\epsilon$. For even $G = (t_f - t_{\rm in})/\epsilon$ one has $m(t) = n(t)$, while $m$ and $n$ differ for odd $G$. In particular, one finds at $t_f$ the occupation numbers set at $t_{\rm in}$, in the same or in exchanged order for even or odd $G$, respectively. In both cases the information on the boundary at $t_{\rm in}$ is transported completely to the other boundary at $t_f$. Let us restrict the discussion for simplicity to even $G$ and to $t$-independent local observables, represented by the same operator $\hat A$ for each $t$. Operators commuting with $S$ have static expectation values. On the other hand, if $\hat A' S = -S \hat A'$, the expectation value oscillates with period $2\epsilon$, $\langle A'(t+\epsilon)\rangle = -\langle A'(t)\rangle$. An example is $\hat A' = \mathrm{diag}(0,-1,1,0)$, which measures the difference $n_2 - n_1$. Obviously, this example realizes oscillating local probabilities $p_\tau(t)$ with period $2\epsilon$. Indeed, the wave function obeys the oscillating behavior $q_2(t+\epsilon) = q_3(t)$, $q_3(t+\epsilon) = q_2(t)$, while $q_1$ and $q_4$ are static. The same holds for the conjugate wave function, such that $p_2(t+\epsilon) = p_3(t)$, $p_3(t+\epsilon) = p_2(t)$, with static $p_1$ and $p_4$.
A further example of complete information transport is given by eq. (114) with $\eta = 1$. It corresponds to the cyclic jump $\tilde q_\tau(t+\epsilon) = \tilde q_{\tau-1}(t)$, with indices understood modulo four. This evolution is periodic with period $4\epsilon$. Correspondingly, the eigenvalues of $S$ are $\pm 1, \pm i$. With $S^T S = 1$ the step evolution operator describes a rotation. The conjugate wave function $\bar q(t)$ follows the same evolution as $\tilde q(t)$ according to eq. (53). Therefore the local probabilities $p_\tau(t)$ oscillate with period $4\epsilon$, according to $p_\tau(t+\epsilon) = p_{\tau-1}(t)$. These simple examples already demonstrate that the issue of information transport differs substantially from a simple approach to equilibrium in the bulk whenever more than one eigenfunction to eigenvalues of $S$ with $|\lambda| = 1$ exists. In this case there is no unique equilibrium wave function $\tilde q_*$ as in the Ising model, cf. eq. (88). Memory of boundary conditions is expected to occur due to a "degeneracy" of generalized equilibrium wave functions.

Quantum mechanics
Let us investigate the particular case where $S = R$ is a rotation matrix. Then $f(t)$ and $\bar f(t)$ obey the same evolution law. If they are equal at one particular time point $\bar t$ they will remain equal for all $t$. The final wave function can be chosen accordingly. (This is formally achieved by solving the evolution law (49) or (51) in order to compute $f(t_f)$, and then choosing $\bar f_f(t_f) = f(t_f)$.) We will assume here the boundary condition $\bar f(t_f) = f(t_f)$, while more general choices of $\bar f_f$ are discussed in ref. [25].
A normalized classical wave function $q(t)$ is defined [26] by the square root of the local probability up to a sign function $\hat s_\tau(t) = \pm 1$,
$$q_\tau(t) = \hat s_\tau(t)\,\sqrt{p_\tau(t)}.$$
According to eq. (48) it is realized if $\bar q_\tau(t) = q_\tau(t)$. For $\bar f(t) = f(t)$ we can indeed replace both $\tilde q$ and $\bar q$ by a common normalized classical wave function $q$.
A normalized wave function obeys $\sum_\tau q_\tau^2(t) = 1$, and this normalization is preserved by the evolution if $S$ describes a rotation. In this case it is sufficient that the initial wave function $f_{\rm in}$ is normalized. For a normalized wave function also $Z$ is normalized, $Z = 1$. An evolution that preserves the length of the vector $q_\tau(t)$ will be called unitary. (In the present case of real $q_\tau$ the unitary transformations are rotations.) We can now identify $p_\tau(t) = q_\tau^2(t)$ with the local probability for finding at time $t$ an appropriate combination of occupation numbers. For $M$ species of occupation numbers this is precisely the setting of quantum mechanics for the special case of a real $2^M$-component wave function.
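A short sketch of the unitary case: for a rotation step evolution operator the norm of the wave function, and hence the sum of the local probabilities $p_\tau = q_\tau^2$, is preserved step by step (an illustrative two-by-two rotation):

```python
import numpy as np

theta = 0.3
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # step evolution = rotation

q = np.array([0.6, 0.8])          # normalized initial wave function
norms = []
for _ in range(50):
    q = S @ q
    norms.append(q @ q)           # sum_tau q_tau^2, preserved by rotations

p = q ** 2                        # local probabilities after 50 steps
```

The local probabilities oscillate with $t$, while their sum stays equal to one, as in quantum mechanics with a real wave function.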
If the rotation matrix $R_{\tau\rho}$ is compatible with a complex structure we can order the $2^M$ real components into a complex $2^{M-1}$-component wave function that obeys a unitary evolution, as we will see in the next section.
Expectation values of occupation numbers at a given $t$ can be computed from the normalized wave function as $\langle n_\gamma(t)\rangle = \sum_\tau q_\tau^2(t)\,(n_\gamma)_\tau$. Employing the diagonal operators $N'_\gamma$ defined in eq. (22) we also have the quantum rule for expectation values,
$$\langle n_\gamma(t)\rangle = q^T(t)\,N'_\gamma\,q(t). \qquad (132)$$
For a normalized wave function and $\bar f = f$ the basic definition of these expectation values is given by the functional integral
$$\langle n_\gamma(t)\rangle = \int\mathcal{D}n\;p[n]\,n_\gamma(t). \qquad (134)$$
In accordance with the previous general discussion, eq. (132) follows from eq. (134) by using the split (38) and the definition of the wave functions (39). The question arises under which circumstances the step evolution operator can be a rotation if $p[n]$ is a classical probability distribution. Acceptable local probabilities are found for arbitrary rotation matrices $S = R$, since $q_\tau^2 \geq 0$. In contrast, a positive overall probability distribution is, in general, not realized. For arbitrary rotations the weight distribution $w[n]$, defined by eqs. (45) and (46), is not necessarily positive. In case of translation symmetry and for finite $M$ it is easy to convince oneself that step evolution operators corresponding to infinitesimal rotations cannot be realized by a positive $p[n]$. Indeed, for infinitesimal rotations the factor $B_{\rho_m,\alpha\beta}(t_m)$ in eq. (47) exhibits both positive and negative values as $\rho_m$ is varied. We conclude that in case of translation symmetry and finite $M$ the realization of a quantum evolution by step evolution operators that are rotation matrices differing only infinitesimally from one is not compatible with an overall classical probability distribution $p[n]$. Nevertheless, one can find interesting examples of quantum evolution. They can be realized by extensions of the examples (118) and (127), or if translation symmetry is abandoned. It is also possible that a subsystem follows a quantum evolution even though the total system does not. Furthermore, for $M \to \infty$ new possibilities open up. The notion of infinitesimal may then refer to the neighborhood in a dense space of states.
In any case, particular conditions are needed for the whole system to follow a quantum evolution. Generic generalized Ising-type models (54) and (61) do not follow a unitary evolution. Since all matrix elements $S_{\tau\rho}$ are positive semidefinite, the matrix $S$ is a rotation matrix only in special cases. Such special cases are static memory materials and will be discussed in secs. 6 to 8. While generic Ising-spin systems do not represent a quantum system of the type discussed above, we will argue that the evolution of the local probabilities admits undamped oscillatory behavior under a rather wide range of circumstances. For these cases a suitable subsystem follows a unitary time evolution. We briefly discuss in app. D that the generic evolution can be viewed as a non-linear unitary evolution of a suitably defined normalized wave function. The quantum subsystems are then subsystems that follow a linear unitary evolution.

Generalized Schrödinger equation
This section addresses the continuum limit of the evolution equations (51) and (53). In the continuum limit the wave functions obey a linear differential equation, the generalized Schrödinger equation. If the evolution admits a complex structure, the generalized Schrödinger equation takes the usual complex form. The generalization of the Hamilton operator is, however, in general not hermitian for classical statistical systems. Similar to quantum mechanics one can define a density matrix, which evolves according to a generalized von Neumann equation.

Continuous evolution limit
A continuous time evolution can be realized if the change of the wave function after one or a few time steps is small in some sense. For small changes after two time steps we define $W(t)$ by
$$\tilde q(t+\epsilon) - \tilde q(t-\epsilon) = 2\epsilon\,W(t)\,\tilde q(t), \qquad (135)$$
such that
$$\partial_t\tilde q(t) = W(t)\,\tilde q(t). \qquad (136)$$
If the limit $\epsilon \to 0$ can be taken, $\tilde q(t)$ is a differentiable function and $\partial_t\tilde q$ becomes the standard derivative. For the particular case where $S$ is independent of $t$ and deviates from the unit matrix only by elements $\sim\epsilon$ one has $S_{\tau\rho} = \delta_{\tau\rho} + \epsilon\,W_{\tau\rho}$, i.e. $S = 1 + \epsilon W$.
This is, however, not the only way in which a continuum limit can be realized. The evolution equation (136) is a real equation for a real wave function. We can formally write it as a complex equation, which makes the difference to the Schrödinger equation in quantum mechanics apparent. We split $W$ into a hermitian (symmetric) and an antihermitian (antisymmetric) part and write without loss of generality $W = -iH + J$, with $H$ and $J$ hermitian matrices. For real wave functions $\tilde q$ and real $W$ the Hamilton operator $H$ is antisymmetric and purely imaginary, while $J$ is real and symmetric. This yields a generalized Schrödinger equation,
$$i\partial_t\tilde q = H\tilde q + iJ\tilde q. \qquad (139)$$
For quantum systems with $t$-invariant orthogonal $S$ the matrix $J$ vanishes. For the conjugate wave function eq. (53) implies $\partial_t\bar q = -\tilde W^T\bar q$. For $t$-independent $S$ one has $\tilde W = W$ and therefore $i\partial_t\bar q = H\bar q - iJ\bar q$. For quantum systems $\tilde q$ and $\bar q$ obey the same evolution equation, which amounts to the Schrödinger equation for the normalized wave function $q$. For $J \neq 0$ the evolution of $\tilde q$ and $\bar q$ differs.

We may discuss simple examples. For the one-dimensional Ising model with $\gamma = 0$ the continuous evolution is realized for $\beta \to \infty$. In this case one has
$$W = \omega\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad \omega = \frac{e^{-2\beta}}{\epsilon}.$$
The equilibrium solution $\tilde q_* \sim (1,1)^T$ corresponds to the zero eigenvalue of $W$. The general solution of the generalized Schrödinger equation (139),
$$\tilde q(t) = c_1\,\tilde q_* + c_2\,e^{-2\omega(t-t_{\rm in})}\begin{pmatrix} 1 \\ -1 \end{pmatrix},$$
is the equivalent of eq. (104), with $\xi^{-1} = 2\omega$. Another example is the four-state oscillator chain with step evolution operator (114). For small $\eta$ we define $\omega = \eta/\epsilon$ and extract
$$W = \omega\begin{pmatrix} -1 & 0 & 0 & 1 \\ 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \end{pmatrix}.$$
The general solution of the evolution equation (139) shows damped oscillations, as expected.
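The continuum limit can be illustrated for the Ising example above: repeated application of $S = 1 + \epsilon W$ approaches the analytic solution with decay rate $2\omega$ (the parameters are illustrative):

```python
import numpy as np

omega, eps = 0.5, 1e-3
W = omega * np.array([[-1.0, 1.0],
                      [1.0, -1.0]])
S = np.eye(2) + eps * W             # step evolution close to the unit matrix

q = np.array([1.0, 0.0])            # initial wave function at t_in
steps = 2000                        # evolve to t - t_in = steps * eps = 2
for _ in range(steps):
    q = S @ q

# analytic continuum solution: equilibrium part plus decaying mode
t = steps * eps
q_exact = (0.5 * np.array([1.0, 1.0])
           + 0.5 * np.exp(-2 * omega * t) * np.array([1.0, -1.0]))
```

The discrete iteration and the solution of $\partial_t\tilde q = W\tilde q$ agree up to corrections of order $\epsilon$.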

Complex wave function and generalized Schrödinger equation
In quantum mechanics one is used to employing complex wave functions. Rather trivially, every complex wave function can be written as a real wave function with twice the number of components. A hermitian Hamiltonian becomes in this real language an antisymmetric, purely imaginary matrix, such that $-iH$ is real, as in eq. (139).
In the opposite direction, a real wave function can be combined into a complex wave function with half the number of components if a suitable complex structure exists. This complex structure is very useful in quantum mechanics. One often encounters a complex structure also for the evolution in classical statistics described in this paper. A complex structure does not need a unitary evolution. For many interesting cases there exists a basis for which the $N \times N$-matrix $W$ can be written in terms of $N/2 \times N/2$-matrices $W_1$ and $W_2$ in the form
$$W = \begin{pmatrix} W_1 & -W_2 \\ W_2 & W_1 \end{pmatrix}. \qquad (147)$$
We can then write the evolution equation in the form of a complex generalized Schrödinger equation,
$$i\partial_t\psi = G\psi, \qquad G = \hat H + i\hat J. \qquad (148)$$
For the generalized Schrödinger equation $G$ is not hermitian. The hermitian and antihermitian parts are associated to $\hat H = \hat H^\dagger$ and $\hat J = \hat J^\dagger$, respectively, with $\hat H = -W_{2S} + iW_{1A}$, $\hat J = W_{1S} + iW_{2A}$, and $W_{iS}$, $W_{iA}$ the symmetric and antisymmetric parts of $W_i$. According to the block structure (147) we group the components of $\tilde q$ as $\tilde q = (\tilde q_1, \tilde q_2)$ and define the complex wave function as
$$\psi = \tilde q_1 + i\tilde q_2. \qquad (151)$$
The equivalence of eq. (148) with eq. (139) is established by insertion of eq. (151). The complex $N/2 \times N/2$-matrices $\hat H$ and $\hat J$ correspond to the antisymmetric and symmetric parts of $W$, respectively. For the conjugate wave function we employ $\bar\psi = \bar q_1 + i\bar q_2$, which obeys eq. (154). For quantum systems with $\bar q = \tilde q$ one has $\bar\psi = \psi^*$. For $J = 0$, eq. (154) is the complex conjugate of eq. (148). Thus for antisymmetric $W$ and $\bar q = \tilde q$ we recover the standard complex Schrödinger equation of quantum mechanics if a complex structure (147) exists. With eq. (153) and $\bar\psi = \psi^*$ it is easy to verify that eq. (43) becomes in the complex formulation the standard expression of quantum mechanics for the expectation value of observables.
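The equivalence of the real block form and the complex formulation can be checked with a simple Euler integration (random $W_1$, $W_2$; the identification $G = iW_1 - W_2$ follows from $i\partial_t\psi = G\psi$ as derived above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2
W1 = rng.normal(size=(n, n))
W2 = rng.normal(size=(n, n))

# real block form of the evolution generator: W = [[W1, -W2], [W2, W1]]
W = np.block([[W1, -W2], [W2, W1]])
G = 1j * W1 - W2                     # i dpsi/dt = G psi

q = rng.normal(size=2 * n)           # real 2n-component wave function
psi = q[:n] + 1j * q[n:]             # complex n-component wave function

eps, steps = 1e-3, 1000              # Euler steps of the two equations
for _ in range(steps):
    q = q + eps * (W @ q)
    psi = psi + eps * (-1j) * (G @ psi)

psi_from_q = q[:n] + 1j * q[n:]      # complex wave function rebuilt from q
```

Both integrations implement the same linear map, so the complex wave function built from the real components stays equal to the directly evolved one.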
For a generic evolution with $J \neq 0$ one observes that $\mathrm{Re}\,(\bar\psi^\dagger\psi)$ is independent of $t$, in accordance with eq. (40). On the other hand, one finds
$$\partial_t(\psi^\dagger\psi) = 2\,\psi^\dagger\hat J\,\psi. \qquad (156)$$
The form of eq. (156) shows that the antihermitian part of $G$ acts as a generalized damping term that can change the norm $|\psi|$.

Density matrix
For "pure states" the real classical density matrix $\rho'_{\tau\rho}(t)$ obtains by multiplication of the wave function with its conjugate,
$$\rho'_{\tau\rho}(t) = \tilde q_\tau(t)\,\bar q_\rho(t).$$
(Primes are used here in order to make the distinction to an equivalent complex formulation easier to follow.) The diagonal elements are the local probabilities (no sum here), $\rho'_{\tau\tau}(t) = p_\tau(t)$. This holds by virtue of eq. (11), and no particular conditions on $\tilde q$ or $\bar q$ need to be imposed except for the normalization $Z = 1$. For pure states the quantum expression for expectation values of local observables follows directly from eq. (43),
$$\langle A(t)\rangle = \mathrm{tr}\big(A'(t)\,\rho'(t)\big). \qquad (159)$$
Mixed states are obtained for the general boundary conditions (8); the expression (159) continues to hold. The time evolution of the density matrix follows directly from eqs. (51) and (53),
$$\rho'(t+\epsilon) = S(t)\,\rho'(t)\,S^{-1}(t).$$
In the continuum limit, $\epsilon \to 0$, one finds with eqs. (135), (136) and (141), and for sufficiently smooth $\rho'(t)$, the evolution equation $\partial_t\rho' = W\rho' - \rho'\tilde W$. For $S$ independent of $t$ one has $W = \tilde W$. More generally, we concentrate in the following on the case $\tilde W(t) = W(t)$, for which
$$\partial_t\rho' = [W, \rho'].$$
If $W$ admits a complex structure according to eq. (147), the pure state density matrix obtains from the complex wave function $\psi$ and conjugate wave function $\bar\psi$ as $\rho_{\alpha\beta}(t) = \psi_\alpha(t)\,\bar\psi^*_\beta(t)$. This generalizes to mixed states according to the generalized boundary condition (8). The complex evolution equation,
$$i\partial_t\rho = [G, \rho],$$
is the generalization of the von Neumann equation to non-hermitian $G$. We conclude that the quantities carrying the local information, such as the wave functions $\tilde q$, $\bar q$ or $\psi$, $\bar\psi$, or the density matrices $\rho'$ or $\rho$, all obey linear evolution equations. The superposition principle holds. Memory materials can be realized if $J$ or $\hat J$ vanish on a subspace with more than one (real) dimension. (The one-dimensional case corresponds to a unique equilibrium state.) Oscillating local probabilities further require $H \neq 0$ or $\hat H \neq 0$.
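The pure-state density-matrix relations (bilinear construction, probabilities on the diagonal, conjugation by $S$ in one step) can be sketched as follows (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.uniform(0.1, 1.0, (3, 3))        # a positive step evolution operator

q = rng.uniform(0.1, 1.0, 3)             # wave function
qbar = rng.uniform(0.1, 1.0, 3)          # conjugate wave function
qbar = qbar / (qbar @ q)                 # normalize so that Z = qbar . q = 1

rho = np.outer(q, qbar)                  # rho'_{tau rho} = q_tau qbar_rho
p = np.diag(rho)                         # local probabilities, sum to Z = 1

# one evolution step: rho'(t+eps) = S rho'(t) S^{-1}
rho_next = S @ rho @ np.linalg.inv(S)

# the same step on the wave functions reproduces the evolved pure state
q_next = S @ q
qbar_next = np.linalg.solve(S.T, qbar)
```

Conjugation by $S$ preserves the trace, i.e. the normalization $Z = 1$, and keeps the pure-state bilinear form intact.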
Similar to quantum mechanics, the density matrix is a convenient concept for the description of subsystems. "Reduced" or "coarse grained" density matrices for subsystems can be obtained by taking a partial trace. A necessary condition for a unitary evolution of the subsystem is the vanishing of $J$ or $\hat J$ in the reduced evolution equation. In other words, the symmetric part of $W$ or the antihermitian part of $G$ has to commute with the reduced $\rho$. This condition is, however, not sufficient to guarantee a non-trivial behavior of a subsystem such as undamped oscillations. An additional condition is the realization of the potentially oscillating behavior by appropriate boundary conditions.
A quantum behavior for subsystems typically holds for idealized isolated subsystems. In many circumstances the isolation may only be approximate, resulting in decoherence [27][28][29] or syncoherence [30] for the subsystem. Decoherence or syncoherence in subsystems can be described by additional terms in the evolution equation. In their presence a pure state density matrix can evolve into a mixed state density matrix (decoherence), or a mixed state can evolve into a pure state (syncoherence).

Basic concepts and quantum formalism
Before proceeding to detailed examples of static memory materials it may be useful to summarize the main features of our formalism of classical wave functions and density matrices. For readers less interested in the formal aspects of this paper we have written this section to be self-contained, necessarily involving some repetition of material from the previous sections. Our formalism is based on the notion of a "classical wave function" $\tilde q(t)$ and the conjugate wave function $\bar q(t)$ [26]. Here $t$ denotes the location of a hypersurface in the bulk, with boundary conditions set at $t_{\rm in}$ and $t_f$. The expectation values of local observables can be computed from $\tilde q(t)$ and $\bar q(t)$. It is therefore sufficient to understand the $t$-dependence or "evolution" of these wave functions.
It is remarkable that the formalism for information transport in classical statistical systems turns out to be conceptually close to quantum mechanics. It resembles Feynman's derivation of the wave function from the path integral for quantum mechanics [6]. In particular, the change of local probabilities between different positions of hypersurfaces in the bulk is described by a linear evolution equation for a classical density matrix, rather than by the local probabilities alone. Indeed, we can construct from $\tilde q(t)$ and $\bar q(t)$ a "classical density matrix" $\rho'(t)$ at a given location $t$. It is bilinear in $\tilde q(t)$ and $\bar q(t)$, $\rho'_{\tau\rho} = \tilde q_\tau\,\bar q_\rho$, and permits the computation of expectation values by the standard quantum formula $\langle A(t)\rangle = \mathrm{tr}\big(A'(t)\,\rho'(t)\big)$. Similar to quantum mechanics, $A'(t)$ is an operator associated to the local observable $A(t)$. The local probabilities are the diagonal elements of the density matrix $\rho'(t)$.
The evolution law for the density matrix is linear, while any formulation in terms of the local probability distribution alone would result in a complicated non-linear equation. It is the presence of additional local information in the off-diagonal elements of the density matrix that renders the evolution simple and allows for the superposition principle for solutions. The central issue for information transport is the evolution of the wave function between two neighboring points or hypersurfaces t and t + ε. The evolution law for the classical wave function is linear,

q(t + ε) = S(t) q(t),   (167)

with S the "step evolution operator". The step evolution operator is related to the transfer matrix [16][17][18] by a suitable multiplicative renormalization. For the step evolution operator S all eigenvalues obey |λ| ≤ 1, with a set of "largest eigenvalues" |λ| = 1.
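As a minimal numerical illustration of this renormalization we may take the transfer matrix of the ordinary one-dimensional Ising chain (an assumption of this sketch, not one of the models of the text) and divide it by its largest eigenvalue:

```python
import numpy as np

beta = 1.5
# transfer matrix of the ordinary 1D Ising chain (illustrative choice,
# not a model from the text): T_{ss'} = exp(beta*s*s'), s, s' = +-1
T = np.array([[np.exp(beta),  np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])

# step evolution operator: multiplicative renormalization of T such that
# the largest eigenvalue becomes exactly one
S = T / np.max(np.linalg.eigvalsh(T))

lam = np.sort(np.linalg.eigvalsh(S))
print(lam)   # [tanh(beta), 1.0]: all |lambda| <= 1, unique largest eigenvalue
```

Here the unique largest eigenvalue signals the approach to a unique equilibrium state, in contrast to the memory materials discussed below.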
One can often define a continuum limit ε → 0, which reads for t-independent S

∂_t q(t) = W q(t),   W = (S − 1)/ε.   (168)

Eq. (168) is a linear differential equation for the wave function and constitutes the generalized Schrödinger equation.
The usual complex Schrödinger equation can always be written in the real form (168) by splitting a complex wave function into real and imaginary parts, ψ = q_R + i q_I, such that it becomes a real linear first-order differential equation for q = (q_R, q_I). Inversely, if W has appropriate properties for the introduction of a complex structure (cf. sec. 4) we can write eq. (168) in the familiar complex form

i ∂_t ψ = G ψ.

It is a key difference between the evolution of the wave function for classical statistical systems and the Schrödinger equation for quantum mechanics that G is, in general, not hermitian, or W not antisymmetric. Decomposing G into its hermitian and antihermitian parts,

G = Ĥ + i Ĵ,   Ĥ = Ĥ†,   Ĵ = Ĵ†,

one finds that Ĵ is responsible for the loss of information as t increases from some boundary or initial value t_in into the bulk. For memory materials Ĵψ vanishes in the bulk. The evolution in the bulk then obeys the Schrödinger equation and is unitary. Static memory materials are therefore quantum simulators for which the dependence of observables on the location t traces the time evolution of observables in a quantum system with the same Ĥ.
Similarly, the dependence of the density matrix on t obeys an evolution equation that is a generalization of the von Neumann equation in quantum mechanics,

∂_t ρ′ = [W, ρ′].   (172)

In particular, this equation describes the t-dependence of the local probabilities, which are given by the diagonal elements of ρ′, p_τ(t) = ρ′_ττ(t). In the presence of a complex structure eq. (172) translates to a generalized von Neumann equation for the complex density matrix ρ,

i ∂_t ρ = [G, ρ] = [Ĥ, ρ] + i [Ĵ, ρ].   (173)

One recovers the unitary evolution according to the von Neumann equation for Ĵ = 0, or more generally, for [Ĵ, ρ] = 0. For Ĵ ≠ 0 eq. (173) yields a modification of the von Neumann equation. Also in classical statistics a pure-state density matrix remains a pure-state density matrix in the course of the evolution. For decoherence in subsystems one expects additional terms similar to the Lindblad equation [31][32][33].
It is remarkable that the simple linear time evolution (172) and (173) is formulated for a density matrix. No such simple evolution law can be formulated in terms of the local probabilities p_τ(t) = ρ′_ττ(t) alone. The density matrix arises as the natural object for the description of information transport in a completely classical statistical context. It is not an object specific to quantum systems. The off-diagonal elements of the density matrix should be considered as a genuine part of the local information at a given t. Only with this information, which goes beyond the local probabilities, can a simple evolution law be formulated.
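A small sketch may illustrate how the evolution ∂_t ρ′ = [W, ρ′] acts on a pure-state classical density matrix in the unitary subsector; the 2×2 antisymmetric generator and the initial state are illustrative choices, not taken from the text:

```python
import numpy as np

w = 0.7   # illustrative rotation frequency
def U(t):
    # exp(W t) for the antisymmetric generator W = [[0, -w], [w, 0]]
    return np.array([[np.cos(w*t), -np.sin(w*t)],
                     [np.sin(w*t),  np.cos(w*t)]])

rho0 = np.diag([1.0, 0.0])       # pure-state classical density matrix
t = 0.9
rho = U(t) @ rho0 @ U(t).T       # solves d(rho)/dt = [W, rho]

# purity is preserved (rho^2 = rho) and the diagonal elements are the
# local probabilities, p1 + p2 = 1, oscillating with t
assert np.allclose(rho @ rho, rho)
assert np.isclose(np.trace(rho), 1.0)
assert np.isclose(rho[0, 0], np.cos(w*t)**2)
```

The oscillating diagonal elements show explicitly how local probabilities can change with t even though no information is lost.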
The linearity of the evolution equation implies the superposition principle. Again, this central principle for the wave interpretation is not a specific quantum feature. It characterizes classical statistical information transport as well. Particle-wave duality appears in classical statistics: the discrete "particle properties" are associated with the discrete values of observables, while the continuous wave aspect arises from the continuous probabilistic description in terms of classical wave functions or the density matrix.
If the step evolution operator S admits a unique largest eigenvalue |λ| = 1, the evolution is characterized by the approach to a unique equilibrium state, which is the eigenstate to this largest eigenvalue. In this case the memory of boundary conditions is lost in the bulk, with a typical behavior as in eq. (1). Indeed, the eigenstates to eigenvalues |λ| < 1 are damped towards zero by multiple repetition of eq. (167). Static memory materials can be realized if the set of largest eigenvalues of S is not unique. In particular, for complex largest eigenvalues, λ = e^{iα}, α ≠ 0, π, one finds an oscillatory behavior similar to eq. (2). Such undamped oscillations occur in the subsector of wave functions that are eigenstates to eigenvalues |λ| = 1. The boundary information concerning this subsector is transported inside the bulk without loss of memory.
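The interplay of damped and oscillating subsectors can be made concrete with a toy step evolution operator (an illustrative construction, not from the text) that combines an invariant direction, a rotating pair with |λ| = 1, and a damped direction with |λ| < 1:

```python
import numpy as np

alpha = 2*np.pi/5
R = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])

# illustrative 4x4 step evolution operator with eigenvalues
# {1, e^{i alpha}, e^{-i alpha}, 0.5}
S = np.zeros((4, 4))
S[0, 0] = 1.0        # invariant "equilibrium" direction
S[1:3, 1:3] = R      # rotating pair, |lambda| = 1, undamped oscillation
S[3, 3] = 0.5        # damped direction, |lambda| < 1

q = np.array([1.0, 1.0, 0.0, 1.0])
for _ in range(60):  # 60 steps; 60 is a multiple of the oscillation period 5
    q = S @ q

# the |lambda| < 1 component has been damped away; the |lambda| = 1
# subsector still carries the boundary information
print(q)   # ~ [1, 1, 0, 0]
```

After many steps only the subsector with |λ| = 1 survives, which is precisely the behavior described in the following paragraph.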
The hermitian and antihermitian parts of G (or antisymmetric and symmetric parts of W) have a direct relation to the eigenvectors of the step evolution operator. For the subspace of eigenvectors to eigenvalues |λ| = 1 the length of the vector q is conserved. The evolution within this subspace is unitary, with antisymmetric W or hermitian G. The symmetric part of W or antihermitian part of G acts only on the subspace corresponding to eigenvalues |λ| < 1. These parts of the wave function go to zero for large Δt = t − t_in, typically exponentially with a correlation length ξ. For the asymptotic behavior inside the bulk the components of q corresponding to |λ| < 1 can be neglected. The asymptotic evolution inside the bulk is therefore always the unitary "quantum evolution".
For systems with a unique equilibrium state (unique eigenvalue |λ| = 1) the "quantum evolution" is trivial, however, corresponding to Ĥ = 0. A non-vanishing Hamiltonian Ĥ, with the typical associated oscillatory behavior of the wave function, can only be realized if the step evolution operator has more than one eigenvalue |λ| = 1. (Memory of boundary conditions can also be conserved for Ĥ = 0, provided the equilibrium state is not unique. In this case the memory corresponds to t-independent expectation values of local observables that depend on the boundary conditions.) The asymmetric diagonal Ising model (3) with β → ∞ is a memory material for which all eigenvalues of the step evolution operator obey |λ| = 1. The complete boundary information is transported into the bulk. Correspondingly, the classical wave function obeys a real Schrödinger equation (168) with antisymmetric W. We will find that this model actually describes the quantum field theory for free massless relativistic fermions in two-dimensional Minkowski space. This suggests that the association of the variable t with an emergent effective time may have a deeper meaning.
In a complex formulation the Schrödinger equation for a one-particle excitation involves the momentum operator P̂, with generalizations to multi-particle states. The information in the initial wave function ψ(t_in, x) is transported completely into the bulk,

ψ(t, x) = ψ(t_in, x − t + t_in).

Initial oscillations in x translate to oscillations in t for fixed x, e.g.

ψ(t_in, x) = c e^{iωx}  ⟹  ψ(t, x) = c e^{iω(x − t + t_in)}.
Many other initial conditions are possible. Distributions that are located around x 0 at t in will propagate into the bulk as wave packets, localized around x 0 + t − t in .
The realization of such memory materials could offer new possibilities for information processing. A large amount of information could be transported, in the ideal case complete sequences of bits. Even if individual bits cannot be controlled separately, any probabilistic distribution of initial bits will be transported into the bulk. Furthermore, the initial information is available at every t. The shift of characteristic features in x for different t could also be exploited.

Simple static memory materials
In secs. 6 to 8 we discuss detailed examples and general features of static memory materials. Static classical statistical systems with the property that the information about the boundary conditions at t_in is not completely lost at the other boundary at t_f, or inside the bulk at arbitrary t, may be called "static memory materials". In short, the material keeps memory of its boundary conditions. Memory materials can be realized in the presence of more than one eigenvalue λ of the step evolution operator S with |λ| = 1. If all eigenvalues of S obey |λ| = 1 the information is completely transmitted. If only a subset of eigenvalues obeys |λ_i| = 1, the information relating to the subsystem corresponding to this subset is transmitted, while other parts of the boundary information can be lost by exponential damping. The limits between the different cases get somewhat washed out in the presence of eigenvalues with |λ| only slightly smaller than one. If some of the eigenvalues λ_i with |λ_i| = 1 have an imaginary part, the local probability distribution will typically show oscillations. In this section we discuss materials for which all eigenvalues of the step evolution operator obey |λ_i| = 1. In sec. 8 we investigate systems where only a subset of the λ_i has unit absolute value.

Unique jump chains
One of the simplest examples of a memory material is the "unique jump chain". It is a one-dimensional chain, labeled by positions t_i, on which an excitation, defect or impurity, called "particle" for our purpose, can propagate or "jump" from a given site at t_i to another one at the next neighbor at t_i + ε. Let the number of particles first be conserved, e.g. the same for all t_i. The chain may have at each t_i several sites labeled by γ. Alternatively, γ may label different internal properties of the particle. For a unique jump chain there is a unique possibility for every particle configuration at t to jump to a particle configuration at t + ε. All elements of the step evolution operator S are one or zero, with a single one in each column. This property should hold for jumps in both directions, e.g. from t to t + ε or from t + ε to t, such that S is invertible, with a single one in each row. Thus S is a rotation matrix and unique jump chains are quantum systems. If we take boundary conditions such that q̄ = q, the evolution can be described by a normalized classical wave function q(t).
The realization by Ising spins involves constrained Ising models, as discussed in app. B. The elements M_τρ in eq. (61) vanish or diverge according to eq. (63). For our particular example neighboring configurations with a different number of particles are forbidden. For neighboring configurations with the same particle number all possibilities except one are forbidden as well. For the simplest case with two species, γ = 1, 2, there are two possibilities for neighboring one-particle states:

(a)  t + ε:  01  10        (b)  t + ε:  10  01
     t:      01  10             t:      01  10
While (a) corresponds to S = 1, (b) is realized for S given by eq. (118). The limitation to a conserved particle number is instructive, but not necessary. What is needed is only the uniqueness of the jump. The step evolution operator (127) can be seen as describing particle-number non-conserving jumps. Alternatively, it can be viewed as describing a single particle with four different internal properties labeled by τ. The period of oscillations is 4ε in this case.
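A unique jump chain is conveniently represented as a permutation matrix. The following sketch, with an illustrative four-state cycle, checks the properties stated above: a single one in each row and column, orthogonality, and a period of 4ε.

```python
import numpy as np

# illustrative unique jump chain on N = 4 configurations:
# tau -> sigma(tau), a single cycle of length 4
sigma = [1, 2, 3, 0]
S = np.zeros((4, 4))
for tau, s in enumerate(sigma):
    S[s, tau] = 1.0                     # a single one in each column ...

# ... and, since the jump is unique in both directions, in each row too
assert (S.sum(axis=0) == 1).all() and (S.sum(axis=1) == 1).all()
assert np.allclose(S.T @ S, np.eye(4))  # S is a rotation (orthogonal) matrix

# repeated jumps close the cycle after 4 steps: oscillation period 4*eps
assert not np.allclose(np.linalg.matrix_power(S, 2), np.eye(4))
assert np.allclose(np.linalg.matrix_power(S, 4), np.eye(4))
```

Permutations with several shorter cycles realize the smaller periods discussed next.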
For M different species of occupation numbers n_γ(t), γ = 1 … M, τ = 1 … 2^M, a suitable choice of S can achieve oscillations with a maximum period of τ_p = Nε, N = 2^M. Smaller periods are also possible, by closing circles of jumps after n steps. Thus possible periods are τ_p = nε, 2 ≤ n ≤ N. There is no need that t_f − t_in precisely corresponds to the completion of some oscillation. Different subsystems may also exhibit different periods. We conclude that for large G = (t_f − t_in)/ε and large M rather arbitrary time evolutions of expectation values ⟨A(t)⟩ can be realized without loss of memory. (A minimal period of 2ε is simply a consequence of discrete time steps.) The way to realize unique jump chains is to forbid certain types of neighboring configurations, implementing zero elements S_τρ. Characterizing Ising models by M_τρ in eqs. (61) and (63), with step evolution operators S_τρ = exp(−M_τρ), this can be achieved for classical Ising spin systems by letting M_τρ → ∞ for the forbidden combinations of sets of occupation numbers. If the allowed finite M_τρ all have the same value, the normalization of the probability distribution by an additive constant in L will lead for the finite values to M_τρ = 0, S_τρ = 1. As an alternative we can achieve the unique jumps by letting M_τρ → −∞ for the allowed transitions, keeping finite M_τρ for the forbidden transitions. If all M_τρ are equal for the allowed jumps, the normalization of the step evolution operator will again result in S_τρ = 1 or 0 for the allowed or forbidden jumps.
More generally, it is sufficient that the elements M_τρ belong to two classes with small and large values. All elements of the class with small M_τρ should be equal. It is then sufficient that the difference between the large M_τρ and the small ones diverges. After subtraction of the additive normalization constant the small elements all equal zero, while all large elements diverge. The infinite difference between the large and small elements of M can be achieved by the "zero temperature limit" β → ∞, similar to the Ising model.
Unique jump chains bear resemblance to deterministic cellular automata [7][8][9][10][11]. For a given initial sequence of occupation numbers at t_in the sequence at every later time step is uniquely fixed. For every time step from t to t + ε the sequence changes in a deterministic way. The probabilistic element then concerns only the probabilities of initial sequences at t_in. These are determined by both the boundary factors at t_in and t_f.

Single-particle propagation in two dimensions
Let us next describe a model for the propagation of a single particle, realized by a single impurity, excitation or defect, in two dimensions. We use the N = 2^M values of τ in order to label the coordinate x of a second dimension. Correspondingly, the wave function f_τ(t) can be written as f(t, x). Neighboring τ correspond to neighboring x, with τ = (x − x_in + ε)/ε. For finite M and finite ε the coordinate x extends from x_in to x_f = x_in + (N − 1)ε. We may adopt for simplicity periodic boundary conditions with x_f + ε identified with x_in, or

f(t, x + Nε) = f(t, x).   (179)

The step evolution operator S_τρ is assumed to be independent of t and can be written as S(x, y). Let us consider the particular unique jump chain

S(x, y) = δ(x, y + ε),

with discrete δ-function, δ(x, y) = 1 for x = y and δ(x, y) = 0 otherwise. This implies the evolution of the wave function

f(t + ε, x) = f(t, x − ε).   (181)

The matrix S has ones adjacent to the diagonal, with one additional entry in the opposite corner from the periodic identification. For N = 4 and periodic boundary conditions it reads

S = ⎛ 0 0 0 1 ⎞
    ⎜ 1 0 0 0 ⎟
    ⎜ 0 1 0 0 ⎟
    ⎝ 0 0 1 0 ⎠,

describing an evolution of the same type as for eq. (127), with period 4ε. For large N we may consider the jump from x to x + ε as a small change and use the approximation of a continuous evolution equation. From

f(t + ε, x) − f(t − ε, x) = f(t, x − ε) − f(t, x + ε)

we obtain, by dividing both sides by 2ε,

∂_t f(t, x) = −∂_x f(t, x).   (184)

This describes the propagation of a particle moving in the positive x-direction as t increases. The general solution reads

f(t, x) = g(x − t),

with an arbitrary function g. Applying eq. (181) n times yields f(t + nε, x) = f(t, x − nε).
For periodic boundary conditions in x, eq. (179), the time period is Nε. The different powers S^n can be seen as elements of the abelian rotation group SO(2), more precisely of the subgroup Z_N. With S^N = 1 and S^n ≠ 1 for 1 ≤ n < N, the N eigenvalues of S are given by

λ_n = exp(2πi n/N),   n = 0, 1, …, N − 1.

For N → ∞ we can interpret S as an infinitesimal SO(2) rotation. The Schrödinger equation (184) describes the propagation of a single Majorana-Weyl fermion [34] in one time and one space dimension. The occupation number is one or zero, as appropriate for a fermion. The Weyl condition implies that the particle moves only in one direction, towards increasing x in our case. The Majorana condition permits a real one-component wave function q(t, x). Majorana-Weyl spinors do not admit a mass term, implying the energy-momentum relation E = |p|. It is remarkable that a quantum system with time evolution arises from a classical statistical setting that has not introduced time as an external parameter. The coordinate t has been introduced just as a position label on a one-dimensional chain. Nevertheless, all properties of the quantum theory for a Majorana-Weyl fermion in two-dimensional Minkowski space are represented by this classical statistical system.
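The Z_N structure of the cyclic unique jump chain can be verified directly; N = 8 is an arbitrary illustrative choice:

```python
import numpy as np

N = 8   # illustrative chain length
# cyclic shift S(x, y) = delta(x, y + eps): one unit in +x per time step
S = np.roll(np.eye(N), 1, axis=0)

# S generates the subgroup Z_N of SO(2): S^N = 1 ...
assert np.allclose(np.linalg.matrix_power(S, N), np.eye(N))

# ... and all N eigenvalues are pure phases exp(2*pi*1j*n/N), so no part
# of the boundary information is damped away
lam = np.linalg.eigvals(S)
assert np.allclose(np.abs(lam), np.ones(N))
```

Since every eigenvalue lies on the unit circle, repeated application of S only rotates the wave function and never shrinks it.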

Ising models for massless relativistic fermions
In this section we describe a class of asymmetric diagonal Ising models for which the "zero temperature state" for β → ∞ is exactly solvable. They are perfect memory materials for which the boundary information is completely transmitted to the bulk. The solutions of the evolution equation can be interpreted as the propagation of an arbitrary number of free massless fermions in two-dimensional Minkowski space.

Multi-fermion systems
In the previous example we have treated t and x in a different fashion: t labels the sites of a chain, while x labels different sequences of M occupation numbers with values one or zero. We next discuss memory materials where t and x are treated equally as locations on a two-dimensional lattice. Instead of n_γ(t) we now consider occupation numbers n(t, x), with x corresponding to γ. With periodic boundary conditions in x, a static memory material can be realized as an Ising-type model with interactions only among diagonal neighbors in one direction. In terms of Ising spins s(t, x) = 2n(t, x) − 1 the classical action S_cl = Σ_t L(t) reads

L(t) = −(β/2) Σ_x [s(t + ε, x + ε) s(t, x) − 1].   (191)

We consider an attractive interaction, β > 0, and the limit β → ∞. This "asymmetric diagonal Ising model" can be solved exactly in terms of the free propagation of an arbitrary number of Weyl fermions. The solution is easily understood by considering two neighboring sequences of x-occupation numbers, {n(t + ε, x)} and {n(t, x)}. Whenever the occupation numbers n(t + ε, x + ε) and n(t, x) are the same, either both one or both zero, the factor K(t) in eq. (54) receives a factor one. If they are different, n(t + ε, x + ε) ≠ n(t, x), a factor e^{−β} occurs. The leading term in K(t) thus copies the sequence {n(t, x)} from t to t + ε, translated by one unit in the positive x-direction. For this configuration of two neighboring displaced sequences K(t) assumes the value K(t) = 1. In other words, the leading term copies a bit sequence n(x) at t to t + ε, displacing all bits by one unit in x. A copy error in one bit reduces K(t) by a factor e^{−β}, and errors in k bits produce a penalty e^{−kβ}. In the limit β → ∞ the weight of configurations with copy errors goes to zero. This realizes a unique jump chain, where every sequence n(x) at t is mapped uniquely to a sequence n′(x) at t + ε.
This model is a perfect copy machine for any initial sequence n(x) at t_in to later times, with an ε-displacement in the x-direction for every ε-advance in t. The wave function f(t) is a function of the configuration [n(x)] of occupation numbers n(t, x), f(t) = f(t; [n(x)]). In this language the general exact solution of the evolution equation takes the form

f(t + ε; [n(x)]) = f(t; [ñ(x)]),
where [ñ(x)] obtains from [n(x)] by shifting all zeros and ones of the sequence by one place to the left. In other words, at t + ε the value of f for a given configuration [n(t + ε, x)] is the same as the value of f at t for the configuration [ñ(t, x)]. We may phrase this situation in terms of the transfer matrix S̄. A given sequence n(x) corresponds to a given basis function h_τ. The sequence displaced by one unit, e.g. n′(x) = n(x + ε), corresponds to a different basis function h_α(τ). (The map τ → α(τ) depends on the particular ordering in τ.) The transfer matrix element S̄_ρτ equals one whenever ρ = α(τ), while all other elements are suppressed by factors e^{−kβ}, with k the "number of errors". The relative size of the suppression remains the same if we normalize S̄ by a multiplicative factor in order to obtain the step evolution operator S. For β → ∞ one has S̄ = S, where S_ρτ = 1 for ρ = α(τ), and S_ρτ = 0 for ρ ≠ α(τ). We recognize a unique jump chain. The evolution of the wave function is given by

q_τ(t + ε) = S_τρ q_ρ(t).

The displacement by one x-unit of the copied n(x)-sequence does not change the number of ones in this sequence. If we associate each n(x) = 1 with a particle present at x (and n(x) = 0 with no particle present), the total number of particles is the same at t and t + ε. The step evolution operator conserves the particle number. We can therefore decompose the wave function q_τ(t) into sectors with different particle numbers F. They do not mix under the evolution and can be treated separately.
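In the limit β → ∞ the copy machine acts as a deterministic cellular automaton. A minimal sketch, with an arbitrary illustrative bit sequence, checks that the particle number is conserved and the sequence is transported without loss:

```python
import numpy as np

# beta -> infinity limit of the asymmetric diagonal Ising model: the unique
# jump copies the bit sequence one unit in the +x direction per time step
def step(n):
    return np.roll(n, 1)

n0 = np.array([0, 1, 1, 0, 0, 0, 1, 0])   # illustrative boundary sequence
n = n0.copy()
for _ in range(3):
    n = step(n)

assert n.sum() == n0.sum()                  # particle number F is conserved
assert np.array_equal(n, np.roll(n0, 3))    # sequence transported without loss
print(n)   # [0 1 0 0 1 1 0 0]
```

Since the map is a permutation of configurations, any probabilistic mixture of boundary sequences is transported in the same lossless way.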
The sectors with F = 0 and F = M are static. The sector with F = 1 describes the propagation of a single particle. It is characterized by a single-particle wave function q^(1)(t, x), where x denotes the location of the single particle, i.e. the position of the unique one in the sequence [n(x)]. Its evolution obeys eq. (181), and the previous discussion of the propagation of a single particle can be taken over completely for this wave function. A similar discussion holds for F = M − 1, where x now denotes the position of a hole.
The subsector with F = 2 is characterized by the positions x and y of the two particles. There is no distinction which particle sits at x and which one at y, and we may use an antisymmetric wave function,

q^(2)(t; x, y) = −q^(2)(t; y, x).

Using the variables s = (x + y)/2, r = y − x, the evolution obeys q(t + ε; s, r) = q(t; s − ε, r).
The distance r between the occupied sites remains the same for all t. The evolution equation has the same structure as eq. (184), with the inter-particle distance r an additional label not affected by the evolution. The case F = M − 2 can be treated in the same way, now with two "holes" (sites with n = 0 in an environment of n = 1) playing the role of particles. This setting is easily generalized to sectors with arbitrary F, all inter-particle distances being conserved.

Single-particle wave function
For a single particle (F = 1) the wave function q(t, x) depends on the position x of the particle or the unique Ising spin up. In the continuum limit the evolution reads

∂_t q(t, x) = ∫_y W(x, y) q(t, y),

with W(x, y) a derivative operator,

W(x, y) = −∂_x δ(x − y).

The two-dimensional lattice with points (t, x) = (n′ε, m′ε) can be decomposed into an "even sublattice" with n′ + m′ even and an "odd sublattice" where n′ + m′ is odd. The propagation of a particle on a diagonal does not mix the points of the even and odd sublattices. We can employ this observation for the introduction of a simple complex structure. The complex conjugation corresponds to an involution K acting on q(t, x) = q(n′, m′) by reversing the sign on the odd sublattice,

K q(n′, m′) = (−1)^{n′+m′} q(n′, m′).

The real part q_R may be identified with q on the even sublattice and the imaginary part q_I with q on the odd sublattice: for even t we may restrict the positions of q_R to even x, while for odd t the even sublattice corresponds to odd x and we use x − ε as label, such that q_R and q_I live on a common grid. The complex wave function is defined by

ψ(t, x) = q_R(t, x) + i q_I(t, x).

The map K translates to the complex conjugation of ψ. The matrix W does not mix q_R and q_I. It is antisymmetric and acts in the same way on the two blocks q_R and q_I, such that in the language of eq. (152) one has W_1S = W_2S = W_2A = 0. In the complex formulation W is represented by the hermitian Hamiltonian

Ĥ = P̂ = −i ∂_x.

We recognize the momentum operator P̂ and the standard form of the Schrödinger equation for a simple free massless Weyl fermion in two dimensions,

i ∂_t ψ = Ĥ ψ = −i ∂_x ψ.   (206)

As an example for a specific initial condition we may consider

ψ(t_in, x) = c e^{iωx},

with the normalization c chosen such that ∫_x |ψ(t_in, x)|² = 1. With this initial condition the solution of the evolution equation (206) reads

ψ(t, x) = c e^{iω(x − t + t_in)}.

For fixed x the solution oscillates with t, with period 2π/ω. This example provides a simple existence proof of static memory materials with oscillating local probabilities in the bulk. Depending on the initial conditions the Schrödinger equation (206) has many other solutions, for example propagating wave packets. The expectation value of the momentum operator P̂ can be computed from the usual rule of quantum mechanics and is conserved. We also stress that a "particle" is an excitation with respect to a given "ground state".
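The conservation of ⟨P̂⟩ under the shift evolution can be checked numerically. The wave packet parameters below are illustrative assumptions, and the lattice momenta are obtained by Fourier transform:

```python
import numpy as np

N = 256
x = np.arange(N)
k0 = 2*np.pi*20/N                     # mean momentum (illustrative)
psi = np.exp(-0.5*((x - 64.0)/8.0)**2) * np.exp(1j*k0*x)
psi /= np.linalg.norm(psi)            # normalized wave packet around x0 = 64

def mean_momentum(psi):
    # <P> evaluated in Fourier space; k are the lattice momenta
    k = 2*np.pi*np.fft.fftfreq(len(psi))
    w = np.abs(np.fft.fft(psi))**2
    return float(np.sum(k*w)/np.sum(w))

p0 = mean_momentum(psi)
psi_t = np.roll(psi, 50)   # 50 unique-jump steps: psi(t, x) = psi(t_in, x - t + t_in)

# the evolution is unitary and conserves <P>
assert np.isclose(np.linalg.norm(psi_t), 1.0)
assert np.isclose(mean_momentum(psi_t), p0)
```

The packet simply travels to larger x, realizing the wave-packet solutions mentioned above.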
For the discussion above we have assumed the ground state to be the unique configuration with F = 0, e.g. all spins down. Many other ground states are possible, for example with half filling. The criterion for a possible ground state is its independence of t. All probability amplitudes f(t_in; [n(x)]) that are invariant under translations in x lead to such a "ground state". Local excitations (as compared to a given ground state) are described by classical wave functions that obey eq. (198); see ref. [35] for a related discussion.

Memory materials for Dirac, Weyl, Majorana and Majorana-Weyl fermions in two dimensions
The Ising-type model (191) describes a quantum field theory for free Weyl fermions in two-dimensional Minkowski space. (For a more detailed discussion of wave functions and the realization of Lorentz symmetry cf. refs. [35, 36].) The extension to free Dirac fermions is straightforward. Dropping the Weyl constraint one has "left movers" as well as "right movers". A free massless Dirac fermion can be viewed as two Weyl fermions, one moving to the left and the other to the right. Correspondingly, we introduce two species of occupation numbers n_α(t, x) at each site of the lattice, α = 1, 2, γ = (α, x). The weight factor in the partition function is now specified by (β → ∞)

L(t) = −(β/2) Σ_x [s_1(t + ε, x + ε) s_1(t, x) + s_2(t + ε, x − ε) s_2(t, x) − 2].   (210)

While the species α = 1 describes the right movers, the species α = 2 accounts for the left movers. Each species α = 1, 2 can actually be viewed as describing two distinct Majorana-Weyl spinors. They correspond to particles on the even or odd sublattice. Altogether we distinguish four sorts of particles that propagate independently. They correspond to occupation numbers n_α^(even) and n_α^(odd), or n_αR and n_αI. The total number of particles for each sort is conserved separately. We could therefore introduce separate wave functions for each sector, with F_αδ the number of particles of sort (α, δ) and σ denoting the locations of these particles at different x. They are not mixed by the evolution. A Majorana constraint can be imposed by eliminating one of the sublattices, keeping, for example, only the odd sublattice. The odd sublattice is again a quadratic lattice, now with lattice distance ε′ = √2 ε. The diagonal neighbors on the original lattice become next neighbors on the sublattice. Majorana-Weyl or Majorana spinors can therefore be realized by Ising models with next-neighbor interactions. What is important is that species α = 1 has only interactions in one direction, say the z_1-direction, while species α = 2 has only interactions in the orthogonal z_2-direction.
A memory material for Majorana spinors is realized for

L = −(β/2) Σ_{z_1,z_2} [s_1(z_1 + ε′, z_2) s_1(z_1, z_2) + s_2(z_1, z_2 + ε′) s_2(z_1, z_2) − 2].   (212)

The coordinates t and x are given by the diagonals of the (z_1, z_2)-lattice, t = (z_1 + z_2)/2, x = (z_2 − z_1)/2. Initial and final boundaries at constant t correspond now to diagonals on the (z_1, z_2)-lattice. Also the evolution described by the transfer matrix proceeds in a diagonal direction. We have described here two-dimensional fermions by a "Euclidean quantum field theory" for Ising spins. The fermionic character is expressed by the property that occupation numbers take the values one or zero. We have not put much weight on the issue of statistics. Usually, quantum field theories for fermions are described by Grassmann functional integrals, which make the fermionic statistics manifest by the anticommuting properties of the Grassmann variables. In ref. [37] we construct an explicit map between the generalized Ising model (210) and an equivalent Grassmann functional integral.

Experimental and computer realization of two-dimensional static memory materials
One may wonder if static memory materials exist in nature for certain solids, if they can be technically produced, or if they can be experimentally simulated by ultracold atoms on a lattice. If one of the memory materials describing the propagation of free fermions could be realized for practical use in a computer, it would provide new algorithmic possibilities. The amount of information stored in whole bit sequences could be substantial. The realization of oscillating information or the transport of information along diagonal directions may open new aspects. The close analogy to quantum evolution may suggest some analogies to quantum computers.
A "Majorana-Weyl material" corresponds to eq. (212) with s_2 left out. The crucial ingredient is the absence of the interaction in one of the directions and the limit of very large β. The limit β → ∞ can be approached by lowering the temperature towards the ground state. The absence of other interactions is more difficult to realize. Consider a weight factor e^{−S_cl} with

S_cl = −β Σ_{z_1,z_2} [s(z_1 + ε′, z_2) s(z_1, z_2) + σ s(z_1, z_2 + ε′) s(z_1, z_2)].   (214)

Depending on the sign of σ the interaction in the z_2-direction will favor an alignment of spins in this direction, or alternating signs. Without imposing boundary conditions the ground state then has uniform signs or alternating stripes. In both cases it is unique up to an overall flip of signs for all spins. One may therefore expect that a large part of the information is damped out for any σ ≠ 0. In the presence of fixed values of the spins at the initial boundary the aligned state involves errors in the transmission of boundary information. For small enough |σ| there will be a competition between the ordering tendency due to σ and the memory of the initial spin sequence on a boundary. For β → ∞ the question whether memory of the initial boundary is transmitted or not amounts to the determination of the minimum of S_cl in the presence of fixed values of boundary spins. Consider first σ > 0 with a tendency to alignment of all spins. For simplicity we begin with the case where boundary conditions are given for fixed z_1 = t̄. One may compute the difference in action ΔS_cl between a state where the initial information is transmitted and a state where it is lost. For the first state the spins at x̄ = z_2 have for all t̄ the same value as at t̄_in, while for the second state they are all positive or all negative for all t̄ ≥ t̄_in + ε. Consider an initial state where all s(t̄_in, x̄) are positive except for an interval [x̄_−, x̄_+] with D sites where they are all negative.
The difference in action amounts to

ΔS_cl = 2β(2σG − D),

since the "memory state" has 2G spin flips on the sides of the domain with negative spins, while the "no-memory state" has D errors in the transition from t̄_in to t̄_in + ε. The boundary information is transmitted for D > 2σG, and lost for D < 2σG. For small D and large G very small σ is required for memory transmission. Otherwise memory is transmitted only partially: for given σ and G the transmitted part of the memory is the one stored in large enough domains, D > 2σG. The situation for σ < 0 is similar. (An exception is the case of periodicity in x̄ with an odd number of x̄-points. This necessitates a defect in the stripe sequence, and the information about the location of the defect is transmitted for arbitrary values of σ.) Using similar arguments one finds that imposing boundary conditions at fixed t leads to the same result as for boundary conditions at fixed t̄. For transmitting a maximum amount of information one would attempt to realize |σ| ≪ 1. Since interactions in one of the directions often cannot be eliminated completely, a good strategy may be to realize some tunable random distribution of σ such that σ vanishes on average. Another possibility is the use of symmetry: for σ = 0 the weight factor (214) is invariant under a reversal of the sign of all s for odd z_2, while σ ≠ 0 breaks this symmetry.
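The bookkeeping of this domain-wall argument can be written down directly; the helper name memory_transmitted is our own illustrative choice:

```python
# sigma, G and D as in the text; memory_transmitted is an illustrative
# helper name, not from the text
def memory_transmitted(D, sigma, G):
    # memory state: 2G frustrated sigma-bonds along the two domain walls;
    # no-memory state: D copy errors in the first time step
    return D > 2*sigma*G

assert memory_transmitted(10, 0.01, 100) is True    # 10 > 2: domain survives
assert memory_transmitted(10, 0.10, 100) is False   # 10 < 20: domain is erased
```

The comparison makes explicit that for fixed σ only sufficiently large domains survive a long chain.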
Beyond the preparation of the material one needs the ability to impose an initial wave function on some boundary, for example by imposing a fixed sequence of spins on this boundary. Similarly, a read-out mechanism has to measure the expectation values of spins on some "final boundary". If the boundaries are at fixed z_1 the transfer of information is in the z_1-direction. This realizes the mathematically trivial case S = 1. For a realization of the fermion system described above the boundaries should be at fixed t and therefore on diagonals of the (z_1, z_2)-lattice.
We finally observe that the periodic boundary conditions in the x-direction introduced before are of a purely technical nature and are not needed for real systems. The information propagation is a local phenomenon. For the information at a given (t, x) only the "past light cone" at t_in, x_in = x ± (t − t_in) plays a role. If the range in x is large enough, this light cone does not feel the presence of the periodic boundary conditions. Our memory material realizes in a simple way the causal structure of a quantum field theory, with a maximal velocity of propagation of information. (In our units the "speed of information" or speed of light is set to one.) While a material realization of static memory materials is an experimental challenge, their numerical realization on a computer seems rather straightforward. The probability distribution p[n] can be sampled by a suitable Monte-Carlo algorithm. There is no problem in implementing only diagonal interactions in one direction and investigating the limit β → ∞. Finite large β can be used to investigate how the loss of memory sets in. Suitable boundary conditions can be imposed without particular problems. Expectation values of local observables can be "measured" for all t. Such a "computer experiment" should reveal the predicted oscillatory patterns.
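A minimal sketch of such a computer experiment is given below. This is our own illustration, not the authors' code: the lattice size, the value of β and the number of sweeps are arbitrary choices, and the Metropolis update assumes the purely diagonal action S_cl = −β Σ s(t+ǫ, x+ǫ) s(t, x), periodic in x, with the boundary row at t_in held fixed:

```python
import numpy as np

def sweep(s, beta, rng):
    """One Metropolis sweep of the asymmetric diagonal Ising model.
    The row t = 0 holds the fixed boundary spins and is never updated."""
    T, X = s.shape
    for t in range(1, T):
        for x in range(X):
            # diagonal neighbours of s[t, x]: below-left and above-right
            h = s[t - 1, (x - 1) % X]
            if t + 1 < T:
                h += s[t + 1, (x + 1) % X]
            dS = 2 * beta * s[t, x] * h   # action change under a spin flip
            if rng.random() < np.exp(-dS):
                s[t, x] = -s[t, x]

rng = np.random.default_rng(0)
T, X, beta = 12, 16, 3.0
s = rng.choice([-1, 1], size=(T, X))
boundary = s[0].copy()
for _ in range(200):
    sweep(s, beta, rng)
assert np.array_equal(s[0], boundary)      # the boundary row stays frozen
assert set(np.unique(s)) <= {-1, 1}        # configurations remain Ising spins
```

For large β the sampled configurations should transport the boundary row along the diagonals, so that expectation values at fixed x oscillate with t as described in the text.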

Oscillating local probabilities
The asymmetric diagonal Ising model (191) with β → ∞ provides a simple example of a classical statistical system with oscillating local probabilities. The probability distribution (216) is considered here, for simplicity, on a torus in the x-direction. We have taken f̄_f = 1, while f_in = f_in[s(t_in, x)] specifies boundary values at t_in. The conjugate wave function q̄(t_f) corresponding to f̄_f = 1 reads q̄_τ(t_f) = 1. It is invariant under the evolution, resulting in q̄_τ(t) = 1 for all t and τ (217). For the particular conjugate wave function (217) the local probability distribution is given by the wave function. Consider now the boundary term (219). Since f_in is positive for all values of the initial spins s(t_in, x) = 2n(t_in, x) − 1, the distribution p[n] in eq. (216) indeed defines a probability distribution (for an appropriate normalization constant c). The boundary term determines the expectation values of the spins or occupation numbers at the initial boundary. The periodicity of h̄(x) results in periodic g(x). From the general solution of the evolution equation for the wave function (192) we can compute the local probabilities for all t, cf. eq. (48) for f̄(t) = 1. (There should be no confusion between the "initial magnetic field" h̄(x) and the basis functions h_τ.) From the periodicity of h̄ we conclude the periodicity of f. The local probabilities p_τ(t) are periodic functions of t. Correspondingly, the expectation values of spins at a given position x are periodic in time. The initial wave function (219) could be realized by a periodic magnetic field present only at the initial boundary t_in. In terms of fermion numbers this initial state contains contributions from all sectors with fixed particle numbers. If we want to realize a one-fermion state, the corresponding boundary wave function f_in should vanish for all configurations that contain more than one or zero particles.
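The origin of the periodicity can be illustrated with a toy unique jump operator on N = 4 states. This is a hypothetical example of ours, not one of the paper's equations: with the conjugate wave function q̄_τ = 1, the local probabilities equal the components of f and return to their initial values after N steps:

```python
import numpy as np

N = 4
S = np.roll(np.eye(N), 1, axis=0)    # unique jump operator: tau -> tau+1 mod N
f = np.array([0.7, 0.1, 0.1, 0.1])   # positive, normalized initial wave function
# with conjugate wave function q_tau = 1, the local probabilities are p_tau(t) = f_tau(t)
p = [f.copy()]
for _ in range(N):
    f = S @ f
    p.append(f.copy())
assert np.allclose(p[N], p[0])       # local probabilities are periodic in t
assert np.isclose(p[1].sum(), 1.0)   # each step conserves total probability
```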
An example of a positive wave function with positive p[n] is given by eq. (227). Local probabilities and expectation values are again periodic, obeying eqs. (225) and (226). We have discussed the corresponding one-particle wave function above.

General static memory materials
Based on the examples discussed so far one may ask for the general conditions for memory materials. For low N we can explicitly classify the positive matrices that lead to several "largest eigenvalues" |λ| = 1. This is done in app. E for N = 2 and N = 4. For large N we may rather concentrate on general properties, such as the absence of a unique equilibrium state or the presence of conserved quantum numbers.

General conditions for static memory materials
An important ingredient for the realization of static memory materials is a degeneracy of the equilibrium state. By equilibrium state we mean here the bulk, with boundaries moved far away. If the equilibrium state is unique, there will always be a tendency to approach this state as t starts to deviate from the boundary at t_in. Uniqueness of the equilibrium state goes hand in hand with a loss of memory of boundary conditions. In case of degeneracy, however, there is an ambiguity which may prevent the complete loss of memory. Typically, the information that can be transmitted by a memory material reflects a degeneracy of the equilibrium state. For the Weyl material given by eq. (191), β → ∞, the degeneracy concerns an arbitrary sequence of s(x) at a given t. For every such sequence there exists an associated equilibrium state. A particular equilibrium state is realized whenever all spins on a given diagonal have the same sign; which sign does not matter. More generally, the non-uniqueness of the equilibrium state is a necessary condition for a memory material. Conversely, the preservation of boundary information in a memory material implies that the equilibrium state cannot be unique.
Let us next assume that the different degenerate equilibrium states can be labeled by the expectation values ⟨A_i(t̄)⟩ of local observables at some given t̄. If those are sufficient to map out completely the degeneracy of the equilibrium state, the expectation values ⟨A_i(t)⟩ are fixed in terms of ⟨A_i(t̄)⟩ for all t. This can describe a non-trivial evolution of ⟨A_i(t)⟩, rather than an approach to a unique value as for a unique equilibrium state. A memory material is then realized if one can connect ⟨A_i(t̄)⟩ to some appropriate initial boundary conditions.

Spontaneous symmetry breaking
A rather common origin of a degenerate equilibrium state is spontaneous symmetry breaking. Consider a two-dimensional Ising model with Z_2-symmetry at low enough temperature, such that the Z_2-symmetry is spontaneously broken. The equilibrium state has either ⟨s⟩ = m or ⟨s⟩ = −m. Depending on the boundary condition at t_in, the bulk will be in one of the two equilibrium states. This information can be transmitted to another boundary at t_f. Though this is a rather trivial example, with two eigenvalues of S equal to one and all other |λ| < 1, it demonstrates that static memory materials exist in nature. For models with spontaneous symmetry breaking in the bulk, the selection of the equilibrium state depends on the boundary condition.
We can understand the fermion models of the preceding section in terms of a symmetry which multiplies, for a given t, each s(x) with δ(x) = ±1. If at t + ǫ one multiplies s(x + ǫ) with δ(x), the expression L(t) in eq. (191) is invariant. This extends to other L(t′) by multiplying appropriately shifted variables s(x + kǫ) with δ(x). As a result, all spins on a given diagonal can be multiplied by the same sign without changing S_cl = Σ_t L(t). This model can be decomposed into independent one-dimensional Ising models. For one-dimensional Ising models spontaneous symmetry breaking is possible in the zero temperature limit β → ∞. In this case the boundary conditions decide which of the two ground states of the one-dimensional Ising model is realized, all spins up or all down. This happens for every diagonal independently. For finite β the one-dimensional Ising model has a unique ground state and no longer features spontaneous symmetry breaking. The Ising model (191) for finite β is no longer a memory material.
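The statement about the one-dimensional chains can be checked from the standard 2 × 2 transfer matrix of the one-dimensional Ising model, a textbook computation sketched here with arbitrary values of β: the second eigenvalue, normalized to the largest one, equals tanh β, so the two eigenvalues degenerate (and memory survives) only for β → ∞:

```python
import numpy as np

def transfer_eigs(beta):
    """Eigenvalues of the 1d Ising transfer matrix T_{s s'} = exp(beta*s*s'),
    s = +-1, normalized by the largest eigenvalue (step evolution operator)."""
    T = np.exp(beta * np.array([[1.0, -1.0], [-1.0, 1.0]]))
    lam = np.linalg.eigvalsh(T)[::-1]    # descending: 2*cosh(beta), 2*sinh(beta)
    return lam / lam[0]                  # normalized: 1, tanh(beta)

# finite beta: second eigenvalue below one -> unique equilibrium, memory decays
assert transfer_eigs(2.0)[1] < 1.0
# beta -> infinity: tanh(beta) -> 1, degenerate eigenvalues, memory preserved
assert abs(transfer_eigs(20.0)[1] - 1.0) < 1e-6
```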
An example of a memory material at finite β is a three-dimensional generalized Ising model on a lattice with points (t, x, y), given by

L(t) = −β Σ_{x,y} [s(t + ǫ, x + ǫ, y)s(t, x, y) + s(t, x, y + ǫ)s(t, x, y) − c(β)]. (228)
This model is composed of independent two-dimensional Ising models on diagonal hypersurfaces. In the infinite volume limit (infinitely many points in the t-, x- and y-directions) spontaneous symmetry breaking occurs for β > β_c. The boundary information at t_in decides for every surface separately which of the two ground states is selected. This constitutes an example of a memory material that is not a unique jump chain. Local probabilities and observables at fixed x and y may oscillate with t for suitable boundary conditions. Much more complicated and interesting situations of information transport can be conceived in the presence of spontaneous symmetry breaking. Still rather simple, but somewhat less trivial, examples concern the propagation of topological defects.

Conserved quantities
A simple obstruction to the existence of a unique equilibrium state is the presence of conserved quantities. Examples are conserved particle numbers or a conserved number of topological defects. If the evolution conserves at each step a certain quantity, for example the total number of spins up, this quantity is the same at every t as at t_in. Different values of the conserved quantity specify different equilibrium states or families of equilibrium states, such that a unique equilibrium state is excluded. The memory of the conserved quantity is transmitted, and a static memory material can be realized. In app. B we explicitly discuss some simple models with conserved quantities.
In case of conserved quantities the step evolution operator becomes block diagonal, with different sectors corresponding to the different possible values c_k of the conserved quantity C. The expectation value of C at t_in is given in terms of the probabilities p_k(t_in) that C takes the value c_k at t_in, ⟨C⟩(t_in) = Σ_k p_k(t_in) c_k. By a conserved quantity we understand that ⟨C(t)⟩ is independent of t. Typically this is realized by t-independent probabilities p_k(t). In this case the relative weight of the different sectors is not changed by the evolution. Therefore, each block of S corresponding to a given sector must have at least one eigenvalue with |λ| = 1.
If each sector has precisely one eigenvalue λ = 1, and all other eigenvalues |λ| < 1, the memory is preserved in a rather simple fashion. Each sector approaches its own separate equilibrium state. The total "equilibrium state" is the weighted sum of the individual equilibrium states, and the transmitted information concerns the probabilities p k . Our example of oscillating local probabilities for generalized Ising models with conserved particle numbers demonstrates that more complex memory transport is possible.
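The simple block structure can be sketched with a small hypothetical example of ours (not one of the models in the text): a three-state system whose step evolution operator is block diagonal in two sectors of a conserved particle number, with each block chosen as a positive matrix with unit column sums so that it carries an eigenvalue λ = 1, in the manner of a Markov chain:

```python
import numpy as np

B0 = np.array([[1.0]])                       # empty sector: a single state
B1 = np.array([[0.8, 0.3],
               [0.2, 0.7]])                  # one-particle sector: columns sum to one
S = np.block([[B0, np.zeros((1, 2))],
              [np.zeros((2, 1)), B1]])       # block-diagonal step evolution operator

p = np.array([0.5, 0.4, 0.1])                # probabilities of the three states
for _ in range(50):
    p = S @ p
assert np.isclose(p[0], 0.5)                 # sector weight p_0 is unchanged
assert np.isclose(p[1] + p[2], 0.5)          # sector weight p_1 is unchanged
# each block carries an eigenvalue of modulus one
assert np.isclose(np.max(np.abs(np.linalg.eigvals(B1))), 1.0)
```

Within the one-particle block the distribution relaxes to that sector's own equilibrium, while the weights p_k of the sectors are transmitted unchanged, exactly the "rather simple fashion" of memory preservation described above.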
For observables that are expressed by t-independent operators A′, and for t-independent step evolution operators S, the expectation value ⟨A⟩ is independent of t if A′ commutes with S, [A′, S] = 0 (230). This follows directly from eqs. (43), (51) and (53). Conversely, a quantity is conserved for a given state if S^{−1} A′ S = A′. If we require this condition for all possible states, one recovers the condition (230). For every t the expectation value of conserved quantities is given by the boundary conditions. Despite the possible presence of conserved quantities, a unique equilibrium state cannot preserve memory of boundary conditions. In case of a unique equilibrium state in the bulk and large enough t_f − t_in we can replace q̄(t_in) by the equilibrium conjugate wave function q̄_*. For A′ commuting with S the initial value ⟨A(t_in)⟩ is then uniquely given by eq. (233). It cannot be chosen freely, and no information associated to ⟨A(t)⟩ can be transported into the bulk. (In particular, this holds for the conserved quantities.) The scaling step operator S_*, defined by eq. (81), is a projector; its eigenvalues are one or zero. This limit matrix S_* always exists for a unique equilibrium state, where only one eigenvalue equals one. For degenerate bulk states S_* may no longer exist. There are then no longer unique equilibrium wave functions q_*, q̄_*, and non-trivial information can now be encoded in ⟨A(t_in)⟩. If S_* exists, more than one of its eigenvalues differs from zero for degenerate bulk states.

Memory materials as asymptotic states
The components of the wave function along eigenvectors of S with eigenvalues |λ| < 1 decrease in length as t increases. For the asymptotic behavior of the wave function at large t − t_in (typically t − t_in ≫ ξ) all contributions from eigenvectors with |λ| < 1 have become very close to zero. Neglecting them, one is left with the subspace spanned by the eigenvectors with eigenvalues |λ| = 1. For wave functions q belonging to this subspace the length |q| no longer changes as t increases. The evolution far inside the bulk always becomes the unitary evolution of quantum mechanics.
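A minimal numerical sketch of this asymptotics (our own toy matrix, not a model from the text) combines a rotation block with |λ| = 1 and one decaying direction with |λ| = 1/2: after many steps the decaying component has died out, while the length of the wave function within the |λ| = 1 subspace is preserved:

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
S = np.array([[c, -s, 0.0],     # 2x2 rotation block: eigenvalues exp(+-i*theta)
              [s,  c, 0.0],
              [0.0, 0.0, 0.5]]) # decaying direction: eigenvalue 1/2

q = np.array([1.0, 2.0, 3.0])
for _ in range(60):
    q = S @ q
assert abs(q[2]) < 1e-12        # the |lambda| < 1 component has died out
# the length within the |lambda| = 1 subspace is preserved: unitary evolution
assert np.isclose(np.linalg.norm(q[:2]), np.linalg.norm([1.0, 2.0]))
```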
The unitary evolution can be trivial. If the step evolution operator projected on the "bulk subspace" is the unit matrix, one has H = 0 and the wave function q_* becomes independent of t. This is automatically the case if the equilibrium state is unique, cf. the discussion of the Ising model in sec. 3. Such a behavior may also occur in the presence of conserved quantities or for degenerate equilibrium states. In this case memory of boundary conditions is preserved in the bulk, while local probabilities are independent of t. There are, however, also models for which the evolution in the bulk follows a non-trivial quantum mechanical evolution with H ≠ 0. A very simple example with M = 4 is displayed in app. E, where we discuss explicitly how the appropriate two-component subsystem is constructed. Other examples are the two-dimensional fermion models discussed in sec. 7.
In summary, the quantum evolution with orthogonal step evolution operators for suitable subsystems is the generic case for large t − t in (or t f − t) far inside the bulk. If this evolution is not trivial, the generalized Ising model can be used as a quantum simulator of a quantum system with the same H. The space-dependence of observables in the generalized Ising model can be used to "measure" the time dependence of observables in quantum mechanics.
The class of quantum systems that can be realized as appropriate memory materials is restricted by the spectrum of the step evolution operator, more precisely by the set of eigenvalues with |λ| = 1. If the (sub)space of eigenvalues with |λ| = 1 has finite dimension M ′ , one can have at most M ′ different eigenvalues. This can describe discrete quantum mechanics, where the continuous time evolution is replaced by the evolution operator for discrete time steps, U (t+ǫ, t). The time evolution of continuous quantum mechanics requires M ′ → ∞. This is realized for the continuum limit of the two-dimensional fermion models discussed before.

Quantum evolution in generalized Ising models and unique jump chains
For generalized Ising models with a positive step evolution operator, S_τρ ≥ 0, one may investigate the conditions for a "unitary evolution" corresponding to quantum mechanics. We want to classify positive orthogonal N × N matrices, S^T S = 1, for finite N. The only possibilities are unique jump operators. One concludes that a quantum evolution for the whole system (not subsystems) that does not correspond to the almost deterministic unique jump chains requires an infinite number of degrees of freedom M. In practice this condition is not very restrictive, since N increases very rapidly with M.
The orthogonality condition Σ_j S_ij S_kj = δ_ik requires for S_ij ≥ 0 that all terms in the sum vanish separately for i ≠ k (236). Furthermore, invertibility of S requires that each column and each row has at least one non-zero element. We want to show that two or more non-vanishing elements in a given row or column lead to a contradiction with the property (236). Then positive orthogonal matrices have in each column and each row precisely one positive non-zero element. The normalization condition (235) for i = k implies that all these elements equal one, and S is therefore a "unique jump operator". In order to establish the contradiction we assume that for a given row i there are two non-zero elements S_{ij_1} and S_{ij_2}. Then eq. (236) implies S_{kj_1} = S_{kj_2} = 0 for all k ≠ i. More generally, each S_ij ≠ 0 "blocks" the column j for all other rows k, in the sense that S_kj = 0. Once we have distributed N non-zero values S_ij (for different i), all columns are blocked. If for a given row i we have already blocked two columns j_1 and j_2, at most N − 2 other rows can admit a non-zero element before all columns are blocked. There then remains at least one row without any non-zero element, which contradicts the invertibility of S.
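The conclusion can be verified by brute force for small N. The following check is our own sketch, with an arbitrary trial set of entries: it scans all 3 × 3 matrices with entries in {0, 1/√2, 1} and finds that the only positive orthogonal ones are the six permutation ("unique jump") matrices:

```python
import numpy as np
from itertools import product

def is_positive_orthogonal(S, tol=1e-12):
    """S_ij >= 0 and S^T S = 1."""
    return bool(np.all(S >= 0)) and np.allclose(S.T @ S, np.eye(len(S)), atol=tol)

vals = [0.0, 1.0 / np.sqrt(2), 1.0]
found = 0
for entries in product(vals, repeat=9):
    S = np.array(entries).reshape(3, 3)
    if is_positive_orthogonal(S):
        found += 1
        assert sorted(S.sum(axis=0)) == [1, 1, 1]  # exactly one unit entry per column
        assert sorted(S.sum(axis=1)) == [1, 1, 1]  # ... and per row
assert found == 6                                  # the 6 permutation matrices of N = 3
```

Candidates with entries 1/√2 never pass: a column with two such entries blocks two rows at once, so the remaining columns cannot all be completed to an orthonormal non-negative set, in line with the counting argument above.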
For finite N we conclude that orthogonal step evolution operators that are not unique jump operators require some of the elements S_τρ to be negative. This cannot be realized by a classical Ising spin system with positive step evolution operator (63). We recall, however, that negative elements S_τρ can still be compatible with a positive weight distribution w[n]. Also, subsystems can follow a quantum evolution even for positive S that are not unique jump operators. We present a simple explicit example in app. E. In the limit N → ∞ the simple counting argument above is not necessarily valid. With the action (228) we have realized an Ising-type model that admits spontaneous symmetry breaking for finite β in the "infinite volume limit" corresponding to N → ∞. This constitutes an example of a quantum evolution of a subsystem that is realized by step evolution operators that are not unique jump operators.

Extension to quantum statistics
Even though the main emphasis of the present paper is on information transport in classical statistical systems, many aspects of the formalism can be taken over to information transport in quantum statistical systems in thermal equilibrium. For this purpose we employ the Matsubara formalism for quantum statistics, which is based on a "functional integral" where Euclidean time is compactified on a torus with circumference 1/T. In our two-dimensional setting of sec. 7 the periodic variable x then corresponds to (discretized) Euclidean time, while t plays the role of a space coordinate. The wave function and density matrix describe the evolution of the quantum statistical equilibrium system in space and can therefore be used for an investigation of information transport in space. The two-dimensional models then describe one-dimensional quantum statistical models, with appropriate generalization to higher dimensions.

Conclusions and discussion
We have developed a formalism for understanding the influence of boundary conditions on observables in the bulk for static classical statistical systems. Key elements are local probabilities at particular points t of a wire or, more generally, on particular hypersurfaces of a multidimensional system. Local probabilities permit the computation of expectation values of local observables. They obtain from the overall probability distribution by integrating out all degrees of freedom except those on the given hypersurface.
The transport of information between neighboring hypersurfaces is, however, not described in terms of the local probabilities alone. It can instead be encoded in the evolution equation for classical wave functions. More generally, the evolution of the local probabilities is described by a density matrix which obeys a generalization of the von Neumann equation. The local probabilities are the diagonal elements of this density matrix. It is remarkable that the evolution of the local probabilities is not described by a linear first order differential equation involving the probabilities alone. The appropriate linear first order differential equation also needs the off-diagonal elements of the density matrix as necessary local information for the evolution. A structure well known from quantum mechanics arises naturally in the context of classical statistics.
The analogy to quantum mechanics becomes even more striking if the density matrix of the system (or an appropriate subsystem) obeys a unitary evolution. In this case the evolution equation for the density matrix is given precisely by the von Neumann equation with an appropriate hermitian Hamiltonian. We have presented several explicit examples for static classical statistical systems with this property. For these systems we have solved the boundary problem exactly by use of the quantum formalism.
For a non-unitary evolution the generalized von Neumann equation can describe the approach to "equilibrium states" in the bulk. The only difference between the evolution of the classical wave function and the Schrödinger equation for the quantum wave function concerns the lack of hermiticity of the evolution operator G in eq. (148) -and similarly for the evolution of the density matrix (165). The antihermitian part of G drives the wave function or density matrix towards generalized equilibrium states. This family of states is reached asymptotically in the bulk. On these asymptotic states G acts as a hermitian Hamiltonian operator and the evolution becomes unitary.
One may view the family of asymptotic states in the bulk as a subsystem obeying the unitary "quantum evolution". Even for pure classical states of the overall system the subsystem may be described by a mixed state in terms of a "coarse grained" density matrix. Projecting the overall evolution equation onto the subsystem typically generates additional terms, similar to the Lindblad equation [31][32][33]. They account for decoherence [27][28][29] or syncoherence [30].
In case of a unique equilibrium state the unitary evolution of the subsystem is a trivial t-independent behavior, while oscillations may be found if no unique equilibrium state exists. If the asymptotic bulk state has more than one independent component of the wave functionq, the system constitutes a static memory material. For such memory materials the properties of a boundary remain relevant for observables in the bulk. Boundary information is transported from the boundary to the bulk, and also to some other boundary. The usual complete loss of memory of boundary conditions in the approach to an equilibrium bulk state does not occur. Very generally, static memory materials become possible if there is no unique equilibrium state in the bulk. If one encounters with comparable probability a family of "equilibrium states" in the bulk, the boundary information can be transported as selection and evolution within this family.
For static memory materials one often encounters oscillating local probabilities and expectation values. This occurs whenever the Hamiltonian Ĥ acting on the asymptotic states differs from zero. We have discussed the possible experimental realization of static memory materials, as well as their numerical simulation. The very particular properties of static memory materials may find applications in information processing or elsewhere. Boundary information becomes available "simultaneously" at all local t-hypersurfaces.
Realizing a static memory material in experiment or by numerical simulation constitutes a "quantum simulator". The observed oscillations in space as a function of t are mapped one to one to the oscillations in time of the corresponding quantum system described by the same von Neumann equation. The time dependence of quantum systems can be made visible by the space dependence in static memory materials.
The close correspondence between the evolution in space of local probabilistic information and the time evolution in quantum mechanics suggests that quantum systems can be understood as appropriate subsystems of classical statistical systems, as proposed earlier [30,38]. In the present paper we have not addressed the issue of non-commuting observables. This is related to observables represented by off-diagonal operators. A typical off-diagonal operator appearing in our setting is the Hamiltonian. Also derivatives of local observables with respect to t can be represented by off-diagonal operators [24]. A more extensive discussion of which types of observables are described by non-diagonal operators can be found in refs. [25,37].
The particular "quantum properties" related to non-commuting operators, such as the uncertainty relation, are rooted in "incomplete statistics" [39]. Incomplete statistics emerges naturally in our setting, where the local probabilistic information is given by the density matrix. The density matrix contains statistical information beyond the local probabilities. This information is sufficient for the computation of expectation values of additional observables, namely the ones represented by off-diagonal operators. It is incomplete in the sense that the probabilities for finding simultaneously the values of two non-commuting observables are not part of the local information contained in the density matrix. Typically, the overall probability distribution may include the information about joint probabilities, but the latter is lost in the reduction to the local density matrix. Quite generally, incomplete statistics characterizes subsystems. In this case the appropriate subsystem is associated to the local probabilistic information, as given by the density matrix.
A second quantum issue that we have not addressed here concerns correlations. Different types of sequences of measurements and associated conditional probabilities are described by different correlation functions [30]. Ideal classical sequences of measurements are described by the classical correlation function which involves joint probabilities. While computable from the overall probability distribution, the classical correlation needs information beyond the local subsystem characterized by the density matrix. Ideal quantum measurements are described by quantum correlations that are computable from the information of the subsystem. The quantum correlations involve non-commuting operator products and violate [30] Bell's inequalities [40].
We can associate the sequence of hypersurfaces with a concept of probabilistic time [24]. In this view time emerges as an ordering concept of general statistical systems. In our case it orders the hypersurfaces on which "local physics" is defined. Remarkably, time and quantum mechanics emerge together in our formalism. (The emergent time should not be confounded with some "external time" in dynamical systems. With respect to external dynamics our systems are static.) In two- or higher-dimensional classical statistical systems there is no a priori difference between time and space directions. The time direction is simply the one in which the transport of statistical information is studied. As one of our examples we have studied asymmetric Ising models in two dimensions. For these models there is no difference between the time direction (t) and the space direction (x). Nevertheless, the system describes the propagation of free fermions in Minkowski space, with the associated Lorentz symmetry. The asymmetric signature of the Minkowski metric arises as a result of the evolution in a particular t-direction. This picture describes the world by an overall probability distribution, covering past, present and future. Quantum mechanics arises by "integrating out" the past and future, concentrating on the probabilistic information of the "present" local subsystem.
The present paper has developed and employed the quantum formalism for the description of static memory materials or quantum simulators. We have presented a few explicit examples for static classical overall probability distributions that realize a unitary quantum evolution and induce oscillating local probabilities in the bulk. Nevertheless, our examples still describe rather simple physical situations as the propagation of non-interacting massless fermions in two-dimensional spacetime. It will be interesting to see if more complex situations can be realized by classical overall probability distributions, as massive particles in a potential with typical quantum effects as interference and tunneling.
Despite its present limitations, this paper contributes a possible answer to the question why our probabilistic description of the world uses probability amplitudes or wave functions rather than directly the probabilities. Probability amplitudes arise as the natural concept if one deals with local observables defined on a hypersurface at the location t for static systems, or at a given time in quantum mechanics. Local probabilities are obtained by integrating out the "past" and the "future". Each of these integrations separately produces a probability amplitude, with local probabilities bilinear in these amplitudes. Typical evolution equations are linear in the amplitudes, implementing the superposition principle for their possible solutions. Corresponding evolution laws formulated only in terms of local probabilities are complicated non-linear equations. It is the simplicity of the linear evolution law for the wave functions or the density matrix that singles out these "quantum concepts" for the description of the local probabilistic information.
If one accepts the view that time and evolution are ordering concepts in a classical statistical description of our world, our approach also answers the question why evolution is described by quantum mechanics. Integrating out the past and future, with the number of time steps going to infinity, all information is lost except that pertaining to subsystems described by quantum mechanics. Moving boundaries to infinity, any remaining non-trivial evolution is described by quantum mechanics.

Acknowledgment

This work is part of and supported by the DFG Collaborative Research Centre "SFB 1225 (ISOQUANT)".

Appendix A Beyond next-neighbor interactions
In the main text we have concentrated on quasi-local probability distributions (6) for which K(t) involves only spins or occupation numbers on two neighboring t-layers. In this appendix we consider more general quasi-local probability distributions and show that the discussion in the main text also covers these more general settings. The representation of K[n] as a product of factors K(t), each involving only occupation numbers on two neighboring layers, can be extended to products of factors involving three or more neighboring layers. We will show that this extended setting can be represented in terms of interactions on a block lattice, with suitable block-occupation numbers on two neighboring layers of the block lattice. In this sense our discussion of next-neighbor interactions covers these more extended settings as well.
As an example we consider a product of building blocks G(t), each containing three neighboring occupation numbers. Here the product Π′_t extends from t = t_in + ǫ to t = t_f − ǫ. The initial and final factors now involve two neighboring occupation numbers, cf. eq. (A.2). (Factors involving only n(t_in) or n(t_f) are a special case of eq. (A.2).) For simplicity we assume the number of lattice points to be even.
We can group two neighboring lattice points into points of a block lattice with sites at t + ǫ/2. For the blocks the values of t are given by t = t_in + 2mǫ, such that the block sites are at t_in + ǫ/2, t_in + 5ǫ/2 and so on. We employ extended wave functions for the occupation numbers in the blocks. For M different occupation numbers on a given site, the number of occupation numbers on the block sites is 2M. Thus N = 2^M states on a single site, labeled by τ, imply N² = 2^{2M} states for each site of the block lattice, labeled now by the double index (τ, ρ). Correspondingly, the wave functions q̄_{ρτ}(t + ǫ/2) can be viewed as N²-component vectors or N × N matrices. The initial and final factors in eq. (A.2) can be expressed directly in terms of the block wave functions. The factors G(t + ǫ) and G(t + 2ǫ) involve only the occupation numbers belonging to the blocks at t + ǫ/2 and t + 5ǫ/2. We group them together into a "block evolution factor" involving only two neighboring blocks. With this we recover the structure of next-neighbor interactions between blocks. The only change is the extended wave function (A.3), which now involves an extended number of states, with index ρ̄ = (ρ, τ) taking N² values. This is easily generalized: factors with up to four neighboring occupation numbers involve the N³ different states in blocks of three sites, and so on. The setting with a block lattice can also be employed in the case of next-neighbor interactions (29) and (33). The "double site" wave function q̄_{ρτ}(t + ǫ/2) then transports additional information beyond that contained in the "simple site" wave function q̄_τ(t).
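The bookkeeping of the double index can be sketched in a few lines (a minimal illustration of ours; the helper names are hypothetical): two neighboring sites with N states each combine into one block-site index ρ̄ = τ·N + ρ taking N² values:

```python
N = 4  # e.g. M = 2 occupation numbers per site, N = 2**M states

def to_block(tau, rho):
    """Combine the states (tau, rho) of two neighbouring sites into one block index."""
    return tau * N + rho

def from_block(rho_bar):
    """Recover the pair (tau, rho) from the block index."""
    return divmod(rho_bar, N)

assert from_block(to_block(3, 1)) == (3, 1)   # the mapping is invertible
# the block lattice carries N**2 distinct states per block site
assert len({to_block(t, r) for t in range(N) for r in range(N)}) == N ** 2
```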

Appendix B Generalized Ising models
In this appendix we display several simple generalized Ising models that can be solved exactly. They provide explicit examples for several features discussed in the main text. In particular, constraints can be used in order to formulate situations with conserved quantities.

B.1 Constrained Ising models
Ising spins can be associated with occupation numbers, s_γ = 2n_γ − 1, for particles being present (n_γ = 1, s_γ = 1) or absent (n_γ = 0, s_γ = −1). Two Ising spins, γ = 1, 2, correspond to two species of particles. We can construct models for which the total particle number N_tot = n_1 + n_2 is preserved by the evolution. A conserved particle number enhances the symmetry of the model. For example, if the particle number is conserved modulo two, one has an additional Z_2-symmetry. With respect to this symmetry the configurations with N_tot = 0, 2 are considered as even, while those with N_tot = 1 are odd. The step evolution operator conserves the Z_2-symmetry if it does not mix sectors with even and odd N_tot.
In the notation of sec. 2 with basis functions h_τ, τ = 1 … 4, this implies for the step evolution operator S that all elements S_τρ must vanish for which τ refers to an even basis element and ρ to an odd one, or τ to an odd element and ρ to an even one. In the occupation number basis, even (odd) elements have an even (odd) number of occupied states n_γ = 1. With the definitions (12) the basis elements h_1 and h_4 are even, while h_2 and h_3 are odd. The Z_2-symmetry maps h_2,3 → −h_2,3, with h_1,4 invariant. A Z_2-symmetric evolution is therefore realized by S_12 = S_13 = S_42 = S_43 = 0, and the same for the transposed elements, S_21 = S_31 = S_24 = S_34 = 0. With eq. (63) this implies that the corresponding elements M_τρ must diverge.
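The selection rule above is easy to check numerically. The following sketch (our own illustration) tests whether a 4 × 4 step evolution operator mixes the even sector (h_1, h_4) with the odd sector (h_2, h_3):

```python
import numpy as np

# With basis elements ordered as (h1, h2, h3, h4), h1 and h4 are even while
# h2 and h3 are odd. S respects the Z2-symmetry iff it has no even-odd mixing:
# S12 = S13 = S42 = S43 = 0 and S21 = S31 = S24 = S34 = 0.
EVEN, ODD = [0, 3], [1, 2]          # 0-based indices of (h1, h4) and (h2, h3)

def is_z2_symmetric(S, tol=1e-12):
    """True if S has no matrix elements between the even and odd sectors."""
    mixing = abs(S[np.ix_(EVEN, ODD)]).max(), abs(S[np.ix_(ODD, EVEN)]).max()
    return max(mixing) < tol

# Example: a sector-diagonal S conserves the symmetry ...
S_ok = np.zeros((4, 4))
S_ok[np.ix_(EVEN, EVEN)] = [[0.7, 0.3], [0.3, 0.7]]
S_ok[np.ix_(ODD, ODD)]   = [[0.5, 0.5], [0.5, 0.5]]
# ... while any single even-odd entry breaks it.
S_bad = S_ok.copy()
S_bad[0, 1] = 0.1                   # an S12-type element

print(is_z2_symmetric(S_ok), is_z2_symmetric(S_bad))   # True False
```

The numerical entries of S_ok are placeholders; only the pattern of vanishing elements matters for the symmetry.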
Conserved quantities and symmetries such as the Z_2-symmetry are often more easily expressed as properties of the step evolution operator S, where they can be realized by vanishing elements. For a more direct comparison with Ising models we continue here with the formulation as a standard action, characterized by M_τρ. We may consider a class of models with part of the elements of M given by eq. (B.2). The limit κ → ∞ of our example then effectively imposes the constraint n_1(t + ǫ) + n_2(t + ǫ) = n_1(t) + n_2(t). (B.3) The model is therefore a "constrained Ising model", where the total number of spins up is required to be the same for two neighboring sites. This constraint is implemented effectively by a term in the action, L_κ = κ n′_1 n′_2 (−3 n_1 n_2 + 3 n_1 + 3 n_2 − 1), (B.4) with n′_γ = n_γ(t + ǫ), n_γ = n_γ(t), taking the limit κ → ∞. In terms of Ising spins this takes the form of eq. (B.5). For many models considered in this paper certain entries of the step evolution operator vanish. This can be seen as a constraint that certain transitions between configurations at t and configurations at t + ǫ are not allowed. Typically, vanishing elements of the step evolution operator S can be implemented by "constraint terms" of the type of eq. (B.4) with κ → ∞.

B.2 Four-step oscillator chain
As an example of a constrained Ising model we may take the four-step oscillator chain specified by the evolution operator (114). It can be realized by an Ising model with a constraint term, taking κ → ∞. Here we use the shorthands h′_τ = h_τ(t + ǫ), h_τ = h_τ(t). We can write this in terms of Ising spins using eq. (12). This model has interactions involving up to four spins. Its purpose is not a particular interest in its realization, but rather a simple explicit demonstration of oscillating local probabilities.
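The oscillation itself can be illustrated with a minimal stand-in. We do not reproduce the explicit operator (114) here; instead we use a generic four-state unique jump operator, a cyclic permutation matrix, which already exhibits the period-four behavior of the local probabilities:

```python
import numpy as np

# Stand-in illustration (not the explicit operator (114)): a generic
# four-state unique jump operator given by a cyclic permutation matrix.
S = np.zeros((4, 4))
for tau in range(4):
    S[(tau + 1) % 4, tau] = 1.0      # state tau jumps to state tau + 1 (mod 4)

q = np.array([1.0, 0.0, 0.0, 0.0])  # wave function localized in the first state
history = [q.copy()]
for _ in range(4):
    q = S @ q
    history.append(q.copy())

# After four steps the wave function returns to itself, so the local
# probabilities (bilinears of the wave function) oscillate with period four.
print(np.allclose(history[4], history[0]))   # True
```

Any initial wave function is mapped back to itself after four steps, since S^4 = 1 for this unique jump chain.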
Since S is symmetric, S* is also symmetric, and the coefficient in front of τ_2 has to vanish, where the second relation employs eq. (C.6) and coincides with eq. (C.8). Together with the coefficient of τ_0 we are left with three independent equations for the four variables a_0, a_1, a_3 and ϕ. The relation SS* = S* is linear in S* and can therefore not determine the overall normalization of the coefficients a_μ. For the normalization we employ the knowledge that S* has one eigenvalue one and the other eigenvalue zero. This requires the additional condition Tr S* = 1, in accordance with eq. (80). The advantage of using the relations (C.10) and (81) in more complicated situations is the possibility to exploit directly the symmetries of S and S*.
In this appendix we discuss properties of step evolution operators for memory materials, concentrating on a small number of local states N. We investigate regular positive N × N-matrices with more than one eigenvalue |λ| = 1, while all other eigenvalues have absolute value smaller than one. We proceed by specifying the eigenvalues and constructing positive matrices that realize these eigenvalues.
The key restriction arises from the requirement that all matrix elements are positive or vanish. The simplest example for N = 2 has eigenvalues λ ∈ {1, −1}. The only positive matrix obeying this condition is τ_1. This constitutes a unique jump operator. For a = 1, b = i one needs A = 0, and each one of the six terms in eq. (E.2) has to vanish. The conditions A = 0 and B = 0 can be obeyed only if each column and each row has a unique non-zero element. Indeed, without loss of generality we can take S_12 ≠ 0, such that S_21 = 0. Then invertibility of S requires that either S_31 or S_41 differs from zero, and we take without loss of generality S_31 ≠ 0, S_13 = 0. From B = 0 one infers S_23 = 0, such that invertibility requires S_24 ≠ 0, S_42 = 0. In turn, B = 0 now needs S_41 = 0 and therefore S_43 ≠ 0, S_34 = 0. Finally, B = 0 requires S_32 = 0, S_14 = 0.
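The N = 2 statement can be verified directly; the matrix τ_1 is standard, only the numerical check below is ours:

```python
import numpy as np

# Simple numerical check of the N = 2 statement above: tau_1 is a positive
# matrix (a unique jump operator) with eigenvalues {1, -1}.
tau_1 = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
eigenvalues = sorted(np.linalg.eigvals(tau_1).real)
print(np.allclose(eigenvalues, [-1.0, 1.0]))   # True
```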

E.1 Traceless 4×4-matrices
With det S = −S_12 S_24 S_43 S_31 = −1 the normalization of the product of the non-zero elements is fixed. The most general positive 4×4-matrices with eigenvalues λ = (1, −1, i, −i) are therefore characterized by three positive numbers S_12, S_24, S_43, with S_31 = (S_12 S_24 S_43)^{−1}, or by suitable permutations of rows and columns of these matrices. Rotation matrices are realized only for S_12 = S_24 = S_43 = S_31 = 1, for which we recover the step evolution operators of the unique jump chains.
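This characterization can be checked numerically. The sketch below (our own check) builds such a matrix from three arbitrary positive numbers and verifies the spectrum and the determinant:

```python
import numpy as np

# Numerical check of the statement above: with positive elements S_12, S_24,
# S_43 and S_31 = (S_12 S_24 S_43)**(-1), the eigenvalues are (1, -1, i, -i).
s12, s24, s43 = 0.5, 2.0, 4.0                  # arbitrary positive numbers
s31 = 1.0 / (s12 * s24 * s43)

S = np.zeros((4, 4))
S[0, 1], S[1, 3], S[3, 2], S[2, 0] = s12, s24, s43, s31   # S_12,S_24,S_43,S_31

lam = np.linalg.eigvals(S)
spectrum_ok = all(min(abs(lam - t)) < 1e-8 for t in (1, -1, 1j, -1j))
print(spectrum_ok, np.isclose(np.linalg.det(S), -1.0))   # True True
```

The matrix acts as a weighted 4-cycle on the states; since the product of the four weights equals one, S^4 = 1 and the eigenvalues are the fourth roots of unity.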
We next consider a = 1, −1 < b^2 ≤ 1, such that A = 1 + b^2 > 0, det S = b^2 > −1. As a first example, one may achieve B = 0 by the choice (E.7). The examples (E.7) and (E.11), with the restrictions on the non-zero elements as discussed above, realize all the eigenvalues λ = (1, −1, b, −b). This demonstrates that many 4×4-matrices can have two different largest eigenvalues λ = ±1. These matrices need not be unique jump operators.

E.2 Matrices close to unique jump operators
We may investigate more generally the 4×4-matrices close to the unique jump operators, with eigenvalues λ ∈ {1, −1, b_1, b_2}, |b_1| ≤ 1, |b_2| ≤ 1. (E.20) For this purpose we make a suitable ansatz. Since S is close to Ŝ, the eigenvalues of S also have to be close to the ones of Ŝ, and we take eq. (E.26). In lowest order one concludes that c_1 + c_2 is real, while c_1 − c_2 is purely imaginary.

E.3 Unitary subsystems
For E < 0 the asymptotic evolution in the bulk proceeds in the eigenspace of the eigenvalues λ = ±1. The eigenvectors to the other eigenvalues approach zero asymptotically. We are interested in constructing the effective two-state system corresponding to the two largest eigenvalues. Let us first compute the eigenvectors to the two largest eigenvalues. For λ = ±1 one finds to lowest order, respectively, the eigenvectors (E.36). The asymptotic wave function will be a linear combination of v_+ and v_−, cf. eq. (E.37). The four-component wave function will therefore be reduced to an effective two-component wave function q̃_r(t). We are interested in the evolution of this subsystem and want to construct an effective 2×2 step evolution operator S_r, such that q̃_r(t + ǫ) = S_r q̃_r(t). (E.38) For this purpose we first have to specify a suitable basis for the reduced system. Let us assume that we are mainly interested in observables that can be computed from the first two components q̃_1 and q̃_2 of the wave function. It will then be useful to define the reduced system such that q̃_{r,1}(t) = q̃_1(t), q̃_{r,2}(t) = q̃_2(t). (E.39) The other two components q̃_3(t) and q̃_4(t) can be expressed by eq. (E.37) in terms of q̃_1(t) and q̃_2(t). This procedure will define the reduced step evolution operator S_r. We first express q̃_1 and q̃_2 in terms of w_+ and w_−. This relates q̃_r and w_±,
q̃_r = Q w,  w = (w_+, w_−)^T,  Q = \begin{pmatrix} 1 & 1 \\ 1 + x_2 & −1 − y_2 \end{pmatrix}.
(E.41) We can invert eq. (E.41), cf. eqs. (E.42), (E.43). From eq. (E.37) we can express q̃_3 and q̃_4 in terms of w_± and therefore in terms of q̃_{r1} and q̃_{r2},
q̃_3 = (1/2) [(x_3 + x_4 − y_3 − y_4) q̃_{r1} + (2 + x_3 + x_4 + y_3 + y_4) q̃_{r2}],
q̃_4 = (1/2) (2 + x_2 + x_4 + y_2 + y_4) q̃_{r1} + …
These results can be inserted into the evolution equation for q̃_r,
q̃_{r1}(t + ǫ) = T_11 q̃_{r1}(t) + T_12 q̃_{r2}(t) + (1 + T_13) q̃_3(t) + T_14 q̃_4(t),
q̃_{r2}(t + ǫ) = (1 + T_21) q̃_{r1}(t) + T_22 q̃_{r2}(t) + T_23 q̃_3(t) + T_24 q̃_4(t).
In this case the diagonal elements of V vanish. Up to rescalings, the step evolution operator of the reduced system is a unique jump operator. This is perhaps not surprising, since the only positive 2×2-matrix with eigenvalues ±1 is given by τ_1. The precise form of the reduced step evolution operator S_r depends on the basis that we choose for parametrizing the degrees of freedom of the subsystem. A change of basis for the subsystem results in a transformation S_r → S′_r = D S_r D^{−1}. For example, if we choose the eigenfunctions w_+ and w_− as basis functions, the reduced step evolution operator becomes S′_r = τ_3. This demonstrates that the reduced step evolution operator no longer needs to be positive.
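The reduction described above can be sketched numerically. The construction below is our own and does not use the explicit x_i, y_i parametrization of the text: for a sample 4×4 operator with spectrum (1, −1, b, −b) we build S_r from the first two components of the eigenvectors v_± and verify that S_r squares to the identity, i.e. that up to a change of basis the reduced operator acts like τ_3.

```python
import numpy as np

# Our own numerical sketch (not the x_i, y_i parametrization of the text):
# reduce a 4x4 operator with spectrum (1, -1, b, -b) to a 2x2 operator S_r
# acting on the first two wave-function components.
rng = np.random.default_rng(0)
b = 0.3
P = rng.normal(size=(4, 4))                  # generic invertible basis change
S = P @ np.diag([1.0, -1.0, b, -b]) @ np.linalg.inv(P)

lam, V = np.linalg.eig(S)
v_plus  = V[:, np.argmin(abs(lam - 1.0))].real   # eigenvector to lambda = +1
v_minus = V[:, np.argmin(abs(lam + 1.0))].real   # eigenvector to lambda = -1

# Asymptotically q = w_+ v_plus + w_- v_minus; the reduced wave function is
# q_r = (q_1, q_2) = A w, with A the first two rows of (v_plus, v_minus).
A = np.column_stack([v_plus, v_minus])[:2, :]
S_r = A @ np.diag([1.0, -1.0]) @ np.linalg.inv(A)   # q_r(t + eps) = S_r q_r(t)

# S_r squares to the identity: up to a change of basis it acts like tau_3,
# and up to rescalings it is a unique jump operator.
print(np.allclose(S_r @ S_r, np.eye(2)))   # True
```

Choosing the eigenfunctions w_± themselves as basis would diagonalize the reduced operator to τ_3, mirroring the basis dependence discussed above.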