Numerical simulation of critical dissipative non-equilibrium quantum systems with an absorbing state

The simulation of out-of-equilibrium dissipative quantum many-body systems is a problem of fundamental interest to a number of fields in physics, ranging from condensed matter to cosmology. For unitary systems, tensor network methods have proved successful, and extending these to open systems is a natural avenue for study. In particular, an important question concerns the possibility of approximating the critical dynamics of non-equilibrium systems with tensor networks. Here, we investigate this by performing numerical simulations of a paradigmatic quantum non-equilibrium system with an absorbing state: the quantum contact process. We consider the application of matrix product states and the time-evolving block decimation algorithm to simulate the time-evolution of the quantum contact process at criticality. In the Lindblad formalism, we find that the Heisenberg picture can be used to improve the accuracy of simulations over the Schrödinger approach, which can be understood by considering the evolution of operator-space entanglement. Furthermore, we also consider a quantum trajectories approach, which we find can reproduce the expected universal behaviour of key observables for a significantly longer time than direct simulation of the average state. These improved results provide further evidence that the universality class of the quantum contact process is not directed percolation, which is the class of the classical contact process.


Introduction
Despite the recent experimental progress in probing the emergent behaviour of out-of-equilibrium ensembles of cold atoms or trapped ions [1][2][3][4][5][6], a clear understanding of these quantum non-equilibrium systems remains a major challenge. While in classical settings the study of such systems, of their phases and critical phenomena, is well developed, options for going beyond semi-classical treatments or the physics of exactly solvable quantum models are rather limited. This is especially true for open (dissipative) systems, which are of interest as they have the potential to display a rich set of novel non-equilibrium physics, e.g. critical dynamical behaviour or dynamical phase transitions, not possible in closed (unitary) settings.
Here, we explore the simulation of critical dynamics in dissipative quantum many-body systems, in the case of the quantum contact process (QCP) [7][8][9][10][11]. The QCP is an attractive model to study for a number of reasons. Firstly, the QCP is the coherent version of the well-understood classical contact process (CCP) [12], which exhibits a non-equilibrium phase transition (NEPT) in the directed percolation (DP) universality class, even in 1d. Formulating both the CCP and QCP in the Lindblad formalism then allows for a direct comparison between the performance of a given numerical approach in the classical and quantum cases.
Secondly, the QCP contains an absorbing state, which leads to a rich, and computationally challenging, universal dynamics [11]. To explore the link between the specific dynamics of the problem and the computational difficulty, one can compare simulations performed in the Schrödinger picture with those of the Heisenberg picture: in dissipative systems generically, there is an asymmetry between dynamics in the two pictures.

The classical and quantum contact processes

Lindblad formalism for open quantum dynamics

In many physically relevant scenarios, it is not possible to characterise the time-evolution of a quantum system by means of unitary closed dynamics. In fact, the simple presence of thermal surroundings or of stochastic processes, e.g. measurements of the system or of its output, makes the study of the time-evolution of the system much more involved. These open quantum systems (OQSs) are not described by the usual Schrödinger equation. Instead, their average dynamics is implemented, in the Markovian regime, by means of Lindblad-type master equations. These describe the evolution of the average quantum state ρ(t), where the average might mean an expectation over a stochastic process or over the degrees of freedom of an environment, according to the physical scenario one has in mind.
Concretely, the quantum state of the system ρ(t) obeys the Lindblad differential equation [33],

∂_t ρ(t) = 𝓛[ρ(t)] = −i[H, ρ(t)] + Σ_μ ( J_μ ρ(t) J_μ† − ½{J_μ† J_μ, ρ(t)} ),   (1)

where the map 𝓛, also known as the Lindblad generator, is made of two different contributions: a coherent part generated by the Hamiltonian H and a dissipative part specified by the jump operators J_μ. Here {A, B}=AB+BA, and this structure guarantees a physical meaning for the evolved quantum state ρ(t). Note that the dynamics implemented by (1) does not preserve the purity of states and, therefore, in general ρ(t) will represent a mixed state.
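As a concrete illustration of (1), the master equation can be integrated directly for a single dissipative two-level system. The sketch below (plain NumPy with an RK4 stepper; the Hamiltonian, jump operator and rates are illustrative choices, not the many-body model studied in this work) verifies that the evolution preserves the trace while producing a mixed state:

```python
import numpy as np

# Single two-level site: |0> = empty, |1> = occupied (illustrative example)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # decay |1> -> |0>

gamma, Omega = 1.0, 2.0                          # illustrative rates
H = Omega * sx
J = np.sqrt(gamma) * sm
JdJ = J.conj().T @ J

def lindblad_rhs(rho):
    """Right-hand side of the Lindblad equation (1)."""
    comm = -1j * (H @ rho - rho @ H)
    diss = J @ rho @ J.conj().T - 0.5 * (JdJ @ rho + rho @ JdJ)
    return comm + diss

def rk4_step(rho, dt):
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    return rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # initially occupied
for _ in range(1000):
    rho = rk4_step(rho, 0.01)

# Trace is preserved and rho remains a valid (now mixed) state
print(np.trace(rho).real)        # ~1.0
print(np.linalg.eigvalsh(rho))   # non-negative populations
```

Dense integration like this is of course limited to very small systems; it serves here only to make the structure of (1) explicit.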

Model definitions
As illustrated in figure 1, in a contact process, lattice sites can be either occupied (|•⟩) or empty (|○⟩) and evolve under processes of spontaneous decay and branching. The CCP, i.e. a contact process with classical branching (figure 1(a)), can be represented in the Lindblad formalism, thus representing the CCP as an OQS. The evolution of the CCP is then given by a Lindblad master equation with a zero Hamiltonian and a dissipative contribution given by two terms, 𝒟 = 𝒟_dec + 𝒟_br. For a system of size L, the decay term is defined as

𝒟_dec[ρ] = γ Σ_{k=1}^{L} ( σ_k^− ρ σ_k^+ − ½{n_k, ρ} ),

and the parameter γ sets the strength of the spontaneous decay. The term 𝒟_br implements classical branching, with jump operators of the form √(Γ/2) n_{k±1} σ_k^+ creating a particle at a site adjacent to an occupied one, at a rate set by Γ. In the limit L → ∞, the CCP exhibits a NEPT between a unique steady state devoid of particles and a degenerate steady state with finite particle density [12]. The former steady state is known as the absorbing state, as once it is reached during the dynamics it cannot be escaped, and the corresponding phase is called the absorbing or inactive phase.
Absorbing phases occur when the strength of the decay process, characterised by the parameter γ, is sufficient to overcome the process of branching, characterised by the parameter Γ. Conversely, when the process of branching is strong enough relative to decay, i.e. the dimensionless branching rate is large, Γ/γ ≫ 1, a steady state with finite particle density can be found, in what is known as the active phase.

Figure 1 (caption, continued): (b) Corresponding rules for the quantum contact process, consisting of coherent branching with strength Ω and decay. (c) The classical contact process displays an absorbing-state phase transition, as illustrated by the three trajectories shown: one typical of the absorbing phase, one at criticality and one typical of the active phase (the values of Γ/γ for these are 2, 6.75 and 10 respectively). Since during a classical trajectory sites can only be empty or occupied with certainty, all squares are either black or white. (d) As with the classical contact process, the quantum contact process displays an absorbing-state phase transition. This is illustrated again by three typical trajectories, with Ω/γ=2, 6 and 10. These show a similar behaviour to the corresponding classical trajectories, as expected by the construction of the dynamical rules for the quantum process. However, the coherent branching in the quantum case leads to expected occupancies different from zero or one.
In the QCP, branching is coherent and represented by the Hamiltonian

H = Ω Σ_k ( σ_k^x n_{k+1} + n_k σ_{k+1}^x ).

The parameter Ω thus sets the strength of coherent evolution and, as a consequence, of the branching/coagulation process. The relevant dimensionless parameter is then Ω/γ, which determines the relative strength of coherent evolution, tending to create excitations, and dissipation, tending to remove them. As with the CCP, in the limit L → ∞, the QCP displays a non-equilibrium absorbing-state phase transition [7][8][9]. In fact, below the critical points, Ω<Ω_c and Γ<Γ_c, in both models the stationary state is the same unique absorbing state,

ρ_s = |○⟩⟨○|^{⊗L},

i.e. the state with all sites empty. Above the critical point, instead, the stationary state is degenerate and has a finite density of particles. Thus, both models have an active phase, though the stationary state in this phase will differ between the CCP and the QCP.
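The defining property of the absorbing state can be checked numerically for small systems. The following sketch builds dense QCP-like operators for L=4 sites, with coherent branching of strength Ω and decay of rate γ (an illustrative brute-force construction, not the tensor-network approach of this work), and verifies that the empty state is a fixed point of the generator:

```python
import numpy as np

L = 4
gamma, Omega = 1.0, 6.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
num = np.array([[0, 0], [0, 1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, k):
    """Embed a single-site operator at site k of the L-site chain."""
    out = np.array([[1.0 + 0j]])
    for j in range(L):
        out = np.kron(out, op if j == k else I2)
    return out

# Coherent branching: H = Omega * sum_k sigma_x at k conditioned on occupied neighbours
H = np.zeros((2**L, 2**L), dtype=complex)
for k in range(L):
    for j in (k - 1, k + 1):
        if 0 <= j < L:
            H += Omega * site_op(sx, k) @ site_op(num, j)

jumps = [np.sqrt(gamma) * site_op(sm, k) for k in range(L)]

def lindblad(rho):
    out = -1j * (H @ rho - rho @ H)
    for J in jumps:
        JdJ = J.conj().T @ J
        out += J @ rho @ J.conj().T - 0.5 * (JdJ @ rho + rho @ JdJ)
    return out

# The empty (vacuum) state is annihilated by the full generator
rho_abs = np.zeros((2**L, 2**L), dtype=complex)
rho_abs[0, 0] = 1.0   # |00...0><00...0| in this basis ordering
print(np.max(np.abs(lindblad(rho_abs))))   # -> 0.0
```

Both the Hamiltonian and every jump operator annihilate the vacuum, so the generator vanishes on it exactly, which is what makes the state absorbing.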

Non-equilibrium setting
The specific setting we investigate is that of an initial single-seed state evolving under the contact process dynamics [34], see figure 1. In this scenario, the initial state of the system is a product state, unoccupied at all sites except the central one, x_seed = floor(L/2)+1. Therefore, the initial state can be written as

|ψ(0)⟩ = |○ ⋯ ○ • ○ ⋯ ○⟩,

with the single occupied site at x_seed. Starting from an initial seed state, the contact process can be characterised by the survival probability, P_sur, the total density, N_a, and the seed-site density, n_seed, defined as

P_sur(t) = 1 − Tr[ ρ(t) |○⟩⟨○|^{⊗L} ],   N_a(t) = Σ_{k=1}^{L} ⟨n_k(t)⟩,   n_seed(t) = ⟨n_{x_seed}(t)⟩,

where ⟨n_k(t)⟩ = Tr[n_k ρ(t)]. As illustrated in figures 1(c) and (d), in the absorbing phase, all clusters generated from a single seed die out, so that P_sur(t), N_a(t) and n_seed(t) all tend to zero as t → ∞. In contrast, within the active phase in the limit L → ∞, these observables tend to non-zero values as t → ∞. At criticality, the observables are characterised by universal power-law behaviour, which defines the exponents δ, Θ and z as:

P_sur(t) ∼ t^{−δ},   (9)
N_a(t) ∼ t^{Θ},   (10)
n_seed(t) ∼ t^{Θ−1/z}.   (11)
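For a small chain, the three observables can be evaluated directly from a density matrix. A sketch (dense matrices, illustrative only), evaluated here on the initial seed state, for which all three quantities equal one:

```python
import numpy as np

L = 5
x_seed = L // 2 + 1   # 1-indexed central site, as in the text
num = np.array([[0, 0], [0, 1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, k):
    """Embed a single-site operator at (1-indexed) site k."""
    out = np.array([[1.0 + 0j]])
    for j in range(1, L + 1):
        out = np.kron(out, op if j == k else I2)
    return out

# Seed state: all sites empty except the central one
psi0 = np.zeros(2**L, dtype=complex)
psi0[1 << (L - x_seed)] = 1.0    # basis index with a single 1-bit at the seed site
rho = np.outer(psi0, psi0.conj())

def observables(rho):
    p_empty = rho[0, 0].real                     # overlap with the vacuum projector
    P_sur = 1.0 - p_empty
    n_k = [np.trace(site_op(num, k) @ rho).real for k in range(1, L + 1)]
    N_a = sum(n_k)
    n_seed = n_k[x_seed - 1]
    return P_sur, N_a, n_seed

print(observables(rho))   # -> (1.0, 1.0, 1.0) for the initial seed state
```

The same three functions of ρ(t) are what the MPS simulations track, just evaluated from tensor contractions rather than dense matrices.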

Universality classes of the contact processes
While the CCP has been established to belong to the 1d DP universality class [12], the universal properties of the QCP, in particular the values of its exponents and the class to which it belongs, are under debate. However, there have been a number of studies performed from different perspectives. For instance, the QCP has been examined from the perspective of mean-field theory [7,10], the functional renormalisation group [8,9] and tensor networks [11]. The transition has been argued to be continuous [9], with the critical point estimated as Ω_c ≈ 6γ [11]. A selection of critical exponents for the QCP have been estimated in [11] as well as in [8], where the latter includes the effects of classical branching. Table 1 collects a number of these estimated exponents for the QCP, as well as those of the DP universality class for comparison. We also emphasise that the QCP can in principle be experimentally realised in Rydberg quantum simulators [7,[35][36][37][38], providing a possible check for theoretical results in the future.

Time-evolving block decimation in closed quantum systems
The numerical simulations we perform are based on MPSs and the time-evolving block decimation (TEBD) algorithm [13], which we briefly summarise in this section. The TEBD algorithm has been applied extensively in closed quantum systems, on which we focus here. We consider many-body quantum systems (in 1d) made of L equal components, with a single-site Hilbert space of dimension K. In TEBD, the state |ψ⟩ of such a system is represented as an MPS [14],

|ψ⟩ = Σ_{s_1,…,s_L} A^{[1]}_{s_1} A^{[2]}_{s_2} ⋯ A^{[L]}_{s_L} |s_1 … s_L⟩,

where each A^{[k]}_{s_k} is a matrix of dimensions at most χ×χ. The total number of parameters in the MPS representation is then O(L K χ²), where χ, the maximum matrix dimension across the system, is known as the bond dimension. In principle, any state in the many-body Hilbert space can be written in this form. In general, however, the exact representation of a state as an MPS is not efficient, as the required number of parameters increases exponentially with L. Nonetheless, frequently, relevant quantum states belong to the set of states which can be represented as an MPS with χ = O(poly(L)), or approximated to some arbitrary accuracy by such an MPS [39]. These are said to be represented/approximated efficiently by MPSs.
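The successive-SVD construction underlying MPS factorisations can be sketched as follows (an exact, i.e. untruncated, decomposition of a random state; the index conventions are one possible choice):

```python
import numpy as np

def to_mps(psi, L, K=2):
    """Exact MPS factorisation of a state vector by repeated reshapes and SVDs.
    Each returned tensor has indices (left bond, physical, right bond)."""
    tensors = []
    rest = psi.reshape(1, -1)
    for site in range(L - 1):
        chi_left = rest.shape[0]
        mat = rest.reshape(chi_left * K, -1)
        U, s, Vh = np.linalg.svd(mat, full_matrices=False)
        tensors.append(U.reshape(chi_left, K, -1))
        rest = np.diag(s) @ Vh
    tensors.append(rest.reshape(rest.shape[0], K, 1))
    return tensors

L = 6
rng = np.random.default_rng(0)
psi = rng.standard_normal(2**L) + 1j * rng.standard_normal(2**L)
psi /= np.linalg.norm(psi)
mps = to_mps(psi, L)

# Contract the MPS back together and compare with the original vector
vec = mps[0]
for A in mps[1:]:
    vec = np.tensordot(vec, A, axes=([vec.ndim - 1], [0]))
vec = vec.reshape(-1)
print(np.allclose(vec, psi))   # -> True
```

Without truncation the bond dimension grows to 2^{L/2} in the middle of the chain, which is precisely the exponential cost that efficient MPS approximations avoid.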
As a trivial example, product states can be represented by MPSs with χ=1. More generally, the L dependence of the von Neumann entanglement entropy,

S = −tr[ρ_A log ρ_A] = −tr[ρ_B log ρ_B],

where ρ_{A/B} is the reduced density matrix of subsystem A/B generated by a bipartition of the system, ρ_A = tr_B |ψ⟩⟨ψ|, can be used to characterise the efficiency of an MPS representation. In one spatial dimension, an entanglement area law, S ∼ L⁰, for a given state suggests that the state can be efficiently represented as an MPS. This is indeed true for the ground states of gapped local Hamiltonians [40], or for states with exponentially decaying correlations [41]. However, generally the scaling of S alone is not enough to establish the accuracy of an efficient MPS approximation (in fact, all Rényi entropies with index α<1 must be used [42,43]). Nonetheless, the entanglement entropy is useful in practice, particularly as a number of physical systems have been shown to obey area laws [44].
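The entanglement entropy of a bipartition follows directly from the singular values of the reshaped state vector, as in the following sketch (natural logarithm; the two test states are standard examples):

```python
import numpy as np

def entanglement_entropy(psi, dim_A):
    """Von Neumann entropy S = -Tr[rho_A log rho_A] across the bipartition A|B,
    computed from the Schmidt (singular) values of the reshaped state vector."""
    mat = psi.reshape(dim_A, -1)
    s = np.linalg.svd(mat, compute_uv=False)
    p = s**2
    p = p[p > 1e-15]            # drop numerically zero Schmidt weights
    return float(-np.sum(p * np.log(p)))

# Product state of two qubits: no entanglement, S = 0
up = np.array([1.0, 0.0])
print(entanglement_entropy(np.kron(up, up), 2))

# Bell state: maximal two-qubit entanglement, S = log 2
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(entanglement_entropy(bell, 2))
```

A bond dimension χ can support at most S = log χ across a cut, which is why area-law states are the natural targets for MPS methods.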
Assuming the initial state of the system to be one represented efficiently by an MPS, then an MPS representation of the time-evolved state can be found by applying some set of operators [14], typically those constituting a Trotter decomposition of a quantum dynamics [45]. When these operators are applied, the resulting MPS will generally have a larger bond dimension than before. In fact, over time this will lead to an exponential increase in the required value of χ and generally time-evolution can only be treated exactly with MPS for short times [46]. This can be linked to the scaling of S with time: if the entanglement entropy is growing linearly with time, as is the case in common quantum quench scenarios [47], an efficient MPS approximation of the exact state is impossible [42].
Given the build-up of bond-dimension in an MPS representation over time, the key to performing time-evolution with MPSs is to repeatedly approximate the time-evolved MPSs, thus keeping the total number of parameters under control.
To achieve this, consider the MPS approximation to the state at time t, |ψ_t⟩, and assume it has a bond dimension χ, which is the maximum we will allow. To approximate the state at time t′, we then apply the operator Ô, so that |ψ_{t′}⟩ = Ô|ψ_t⟩. The new state |ψ_{t′}⟩ will now have some higher bond-dimension χ′, beyond the maximum we allow in our simulation. To remedy this, we want to find an MPS approximation to |ψ_{t′}⟩ that has bond dimension χ. Calling this approximation |φ_{t′}⟩, we then want to solve the minimisation problem

|φ_{t′}⟩ = argmin_{|φ⟩ with bond dimension χ} ‖ |ψ_{t′}⟩ − |φ⟩ ‖,   (14)

where ‖·‖ indicates the Hilbert-space norm. One can iterate this procedure to produce an approximation to the time-evolution of the state, allowing one to calculate desired observables along the way [14].

Table 1. Relevant exponents of the QCP and CCP. The exponents δ, z and Θ can be associated to non-equilibrium observables when starting from an initial seed-state, see (9)-(11). The exponent α can be associated to the decay of excitation density when starting from a homogeneous fully-occupied state [34]. The exponents of the CCP are given by the DP universality class [12]. The QCP estimates of δ, z and Θ are given in this work (see section 6) with two standard errors, while the value of α indicated by an asterisk is estimated in [11]. The two final columns give exponents for the QCP with the inclusion of classical branching, at the tricritical point where quantum and classical branching compete, estimated using the functional renormalisation group [8]. Note that for DP α=δ, a consequence of the rapidity-reversal symmetry that is characteristic of the class [34].

The TEBD algorithm is, in essence, a simple approximation to the solution of (14), given by considering successive bipartitions of the system at k = 1, 2, …, L. At each cut, one performs a Schmidt decomposition of the state |ψ_{t′}⟩ and discards a sufficient number of the smallest Schmidt coefficients so as to reduce the bond-dimension to χ.
More specifically, across the cut at site k the Schmidt coefficients, λ_j^{[k]}, are placed into non-ascending order, λ_1^{[k]} ≥ λ_2^{[k]} ≥ ⋯ ≥ 0. The MPS approximation with bond-dimension χ at site k is then constructed by retaining the χ largest Schmidt coefficients. In other words, all Schmidt coefficients with j ≥ χ+1 are discarded, and the corresponding discarded weight,

ε_k = Σ_{j ≥ χ+1} (λ_j^{[k]})²,   (15)

measures the error in this approximation. At any given cut, this approximation is optimal. However, overall, the error in the approximation depends on the error made at each cut. Summing the discarded weight from every cut then provides an approximation for the overall error, and the resulting state is a simple approximation to the solution of (14).
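A single truncation step, and the relation between the discarded weight and the approximation error at that cut, can be sketched as follows (a generic normalised matrix stands in for the two-block wavefunction across the cut):

```python
import numpy as np

def truncate_cut(theta, chi):
    """Schmidt-truncate one cut: keep the chi largest singular values of the
    two-block wavefunction matrix theta and report the discarded weight (15)."""
    U, lam, Vh = np.linalg.svd(theta, full_matrices=False)
    discarded = float(np.sum(lam[chi:] ** 2))
    return U[:, :chi], lam[:chi], Vh[:chi, :], discarded

rng = np.random.default_rng(0)
theta = rng.standard_normal((8, 8))
theta /= np.linalg.norm(theta)       # normalised state across the cut

U, lam, Vh, eps = truncate_cut(theta, chi=4)
approx = U @ np.diag(lam) @ Vh
err2 = np.linalg.norm(theta - approx) ** 2

# The squared error of the optimal rank-chi approximation equals the discarded
# weight; in TEBD the kept Schmidt values are then renormalised.
print(np.isclose(err2, eps))   # -> True
```

This is the Eckart–Young optimality of the truncated SVD, which is what makes the per-cut truncation in TEBD optimal.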
In order to calculate the Schmidt decomposition at a given cut when using MPS, singular value decompositions can be used to place the MPS in 'canonical form' about the chosen cut. Details for implementing this can be found in, e.g., [14]. However, we note here that when applying non-unitary operators to the MPS, as is the case in open systems, the canonical form of the MPS is destroyed, so that truncation at a given cut would no longer be optimal. This can be remedied by simply recalculating the necessary Schmidt decompositions after the application of such operators (i.e. 'recanonicalising'), prior to performing the truncations. We will use this method throughout, though we remark that many more sophisticated variations exist [15].

Simulation of universal dynamics in the double-space

Double-space representation of Lindblad dynamics
Perhaps the most straightforward way to apply TEBD to the study of Lindblad dynamics, (1), is to represent the density matrix ρ(t) as a vector, |ρ(t)⟩, in a 'doubled' Hilbert space of dimension K^{2L}. Mapped in this way, the evolution of the quantum state can then be shown to be generated by the following Schrödinger-like equation,

∂_t |ρ(t)⟩ = 𝓛 |ρ(t)⟩,   (16)

where 𝓛 is the representation of the Lindblad map (1) in the double space. It is possible to show that one must have

𝓛 = −i H_D + Σ_μ [ J_μ ⊗ J_μ* − ½ ( J_μ†J_μ ⊗ 𝟙 + 𝟙 ⊗ (J_μ†J_μ)^T ) ],

where H_D is Hermitian and has the form

H_D = H ⊗ 𝟙 − 𝟙 ⊗ H^T,

where * means complex conjugation and T matrix transposition. Solutions to (16) give the evolved state up to time t, |ρ(t)⟩ = e^{𝓛t} |ρ(0)⟩, where |ρ(0)⟩ is the initial condition for the density matrix in the vectorised representation. By performing a Trotter decomposition of the time-evolution operator e^{𝓛t}, which we choose to be a second-order scheme, one can apply the TEBD algorithm naturally to approximate |ρ(t)⟩ using an MPS with bond-dimension χ. From the approximation of |ρ(t)⟩, observables can then be calculated as ⟨O(t)⟩ = ⟨𝟙| Ô |ρ(t)⟩, where Ô is the double-space representation of the operator O and ⟨𝟙| is the double-space state representation of the identity operator. When considering the MPS representation of |ρ(t)⟩, a natural quantity to consider is the operator space entanglement,

S̃ = −tr[ρ̃_A log ρ̃_A] = −tr[ρ̃_B log ρ̃_B],   (21)

where ρ̃_{A/B} is the reduced state of the normalised vector |ρ(t)⟩ across a bipartition of the chain. The value of S̃ plays an analogous role to the von Neumann entropy, S, for closed quantum systems and provides a characterisation of the computational difficulty of TN simulations.
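For small systems the double-space generator can be constructed explicitly, which is useful for benchmarking MPS results. A sketch for a single dissipative qubit (column-stacking vectorisation, vec(AXB) = (Bᵀ ⊗ A) vec X; the operators and rates are illustrative):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
gamma, Omega = 1.0, 2.0
H = Omega * sx
J = np.sqrt(gamma) * sm

def double_space_generator(H, jumps):
    """Matrix representation of the Lindblad map acting on vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    Lmat = -1j * (np.kron(I, H) - np.kron(H.T, I))   # -i H_D
    for Jop in jumps:
        JdJ = Jop.conj().T @ Jop
        Lmat += np.kron(Jop.conj(), Jop) \
              - 0.5 * (np.kron(I, JdJ) + np.kron(JdJ.T, I))
    return Lmat

Lmat = double_space_generator(H, [J])
rho0 = np.array([[0, 0], [0, 1]], dtype=complex)       # occupied site
rho_t = (expm(Lmat * 5.0) @ rho0.reshape(-1, order='F')).reshape(2, 2, order='F')

# Trace preservation and hermiticity survive the double-space evolution
print(np.trace(rho_t).real)   # ~1.0
```

In the many-body setting the same generator is never built densely; instead its Trotterised gates are applied to the MPS form of |ρ(t)⟩.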

Schrödinger picture results
The evolution of the total density, survival probability and seed-site density are shown for the CCP and QCP in figure 2. In both cases the same TEBD algorithm is used, with a fixed Trotter step of γδt=0.1 and bond-dimensions of χ=16, 32, 64 and χ=256, 512, 1024 for the CCP and QCP respectively. The system size was set to L=256, for which the observables agree closely with those calculated using L=128 up to γt=10. The simulations are performed with γ=1 at the estimated critical points, Γ=6.75γ and Ω=6γ for the CCP and QCP respectively. For the CCP the critical point was estimated by scanning various values of Γ and finding where both the total density and the survival probability show little deviation from a straight line in a log-log plot. In the case of the QCP, we take the previously estimated critical value of Ω=6γ [11]. Both sets of observables show the correct qualitative behaviour, in line with the expectations of the critical dynamics, (9)-(11).
To approximate the critical exponents, power-law fits were performed to N_a(t), P_sur(t) and n_seed(t), thus estimating δ, Θ and z. To provide a best estimate of these values, the simulations with the largest χ were used for the fits, while the absolute differences between these estimates and those obtained by fitting at half the maximum χ were used for the error estimates.
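The fitting procedure can be sketched with synthetic data standing in for the TEBD output (the exponent value and the form of the finite-χ deviation below are illustrative assumptions):

```python
import numpy as np

def powerlaw_exponent(t, y):
    """Fit y ~ t^a on a log-log scale; return the exponent a."""
    slope, _ = np.polyfit(np.log(t), np.log(y), 1)
    return slope

t = np.linspace(5.0, 10.0, 50)
Theta = 0.32                                   # illustrative exponent only
N_chi_max = t**Theta                           # stand-in for the largest-chi data
N_chi_half = t**Theta * (1 + 0.01 * t / 10)    # stand-in for the half-chi data

best = powerlaw_exponent(t, N_chi_max)
err = abs(best - powerlaw_exponent(t, N_chi_half))
print(best)   # recovers the exponent of the clean synthetic data
print(err)    # difference between the two fits, used as the error bar
```

The error bar defined this way measures finite-χ sensitivity only; as discussed below, the choice of fitting interval contributes a separate systematic error.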
In both the CCP and QCP, while all the different bond-dimension simulations agree at early times, at later times the low bond-dimension runs deviate considerably. This suggests that finite-bond effects can lead to a significant build-up of errors in observables, as in the closed quantum system case, even for classical states. However, the bond-dimensions needed to reach convergence until γt=20 are very modest for the CCP. Since the fits for the CCP performed over γt ∈ [5,10] lead to estimated exponents within a few percent of the true 1d DP values, we see that critical dynamics of the CCP can be accurately simulated with MPSs in the double-space, and critical exponents estimated with small errors.

Figure 2. Critical dynamics in the classical contact process (a)-(c): (a) The total density, N_a(t), calculated using the TEBD algorithm in the double space, with bond-dimensions χ=16, 32 and 64. At late times, finite bond-dimension effects can be seen clearly. To establish the critical exponents, power-law fits were performed in the interval γt ∈ [5,10] for χ=32 and 64, with the latter fit determining the best estimate of Θ and the difference between the two used as the error. (b) The survival probability, P_sur(t), calculated using the same MPS approximations as for N_a(t). As in that case, significant finite-bond effects can be seen, but again only a modest χ=64 is needed to accurately approximate the observable. (c) The evolution of the seed-site density, n_seed(t); a power-law fit determines the exponent Θ − 1/z. Combined with the estimated Θ from (a), this leads to an estimate of z, albeit with a relatively high error estimate due to error propagation. Critical dynamics in the quantum contact process (d)-(f): approximations of universal observables for the quantum contact process at Ω=6γ, estimated with TEBD in the double-space. These can be compared with the corresponding plots for the classical contact process, which are estimated using an identical algorithm and show similar qualitative behaviour. See also figure 6 for the same quantities and analysis using a trajectories approach. (d) The total density, N_a(t), with bond-dimensions χ=256, 512 and 1024. Compared with the corresponding classical plot, the finite-bond effects are larger and much higher bond-dimensions are required to achieve convergence to a given time. (e) The survival probability, P_sur(t). Once again finite-χ effects are significant, and to reach γt=10 a considerably higher bond-dimension is likely required. (f) The seed-site density evolution n_seed(t) for the QCP. The estimate of the exponent z is obtained, as in the classical case, from a power-law fit within γt ∈ [2,4], with error propagation leading to a relatively large error estimate for this value.
Compared with the CCP, simulation of the QCP requires much larger values of χ to achieve convergence in the examined observables to a given time, with χ=1024 showing large deviations from a power-law by γt=10. As such, fitting over γt ∈ [2,4], where the χ=1024 simulations closely follow a power-law, was used to establish these exponents.
We emphasise that, while the errors shown quantify the finite bond-dimension effects in the estimate of the exponents when fitting over the chosen interval, there can still be considerable errors in the exponent estimates due to the interval used when performing the fits. In the case of the CCP, fitting over γt ∈ [1,5] as opposed to γt ∈ [5,10] only increases the errors in the estimated exponents from around 1% to around 5%. This suggests that, for the CCP, finite-time errors are relatively small, despite the short time interval over which the fits are performed. One might then hope that this is also the case for the QCP, though without substantially higher bond-dimensions, allowing one to reach much longer times, this cannot be confirmed.

Operator space entanglement in the Schrödinger picture
To understand why the QCP is much harder to simulate than the CCP, we can compare the evolution of the operator space entanglement entropy, S̃. Taking the maximum value of the entanglement entropy across all bipartitions throughout, the evolutions of S̃ for the CCP and QCP are shown in figure 3(a).
In both the CCP and QCP, S̃ shows a clear 'barrier' behaviour, where initially it grows rapidly to a peak around γt=0.5 before decaying to a lower final value. This is consistent with an initial period of branching/coagulation evolution, where correlations build up rapidly and spontaneous decay is irrelevant, followed by a period where the latter becomes relevant and removes correlations/excitations from the state. While the overall picture seems the same for both the CCP and QCP, in the classical case the barrier is clearly much lower than in the QCP. Given that the operator space entanglement entropy should characterise the error in simulations, the difference between S̃ in the CCP and QCP helps explain the difference in accuracies found in observables. The relationship between S̃ and errors is illustrated in figure 3(b), which shows the error in the simulations of the QCP over time, equal to the square root of the discarded weight defined in (15). As the value of χ is increased, the error at each time drops, approximately halving when the bond-dimension is doubled. As with the entropy, the error shows a peak-like structure, with the peak occurring shortly after that of the entropy.
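The operator space entanglement of (21) can be computed directly for small density matrices, as in the following sketch (two sites; note that, unlike the pure-state entropy, S̃ is sensitive to classical correlations as well as quantum entanglement):

```python
import numpy as np

def operator_space_entropy(rho, dA):
    """Entropy of the normalised vectorised density matrix across the cut A|B."""
    dB = rho.shape[0] // dA
    # Regroup indices (iA,iB,jA,jB) -> ((iA,jA),(iB,jB)) and take the SVD
    vec = rho.reshape(dA, dB, dA, dB).transpose(0, 2, 1, 3).reshape(dA * dA, dB * dB)
    vec = vec / np.linalg.norm(vec)
    s = np.linalg.svd(vec, compute_uv=False)
    p = s**2
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log(p)))

# Product (uncorrelated) density matrix: S~ = 0
rho_A = np.diag([0.7, 0.3])
rho_B = np.diag([0.5, 0.5])
print(operator_space_entropy(np.kron(rho_A, rho_B), 2))

# Classically correlated mixture of |00> and |11>: S~ = log 2
rho_corr = 0.5 * (np.diag([1.0, 0, 0, 0]) + np.diag([0, 0, 0, 1.0]))
print(operator_space_entropy(rho_corr, 2))
```

The second example illustrates why even the classical CCP exhibits a non-zero entropy barrier in the double space: correlated classical mixtures already cost bond dimension.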

Heisenberg picture in the double-space
To study the Heisenberg picture dynamics with TEBD in the double-space, one takes a representation of an operator O as a vector |O⟩, then evolves it through the dual Lindbladian 𝓛†, such that

∂_t |O(t)⟩ = 𝓛† |O(t)⟩.

In the case of closed quantum systems, the Heisenberg picture can be used to extend the maximum time over which simulations are accurate by roughly a factor of two [15]. Intuitively, this is expected because the dual dynamics of the unitary evolution, U(t)†, is equal to the original dynamics run backwards in time. Thus, one might expect that the dual dynamics is not radically different in terms of computational difficulty from the usual dynamics, and both may be accurately approximated up to the same time, thus doubling the maximum time when combined.

Figure 3. Evolutions of the operator space entanglement and discarded weight: (a) Operator space entanglement, S̃, for the CCP and QCP. In both cases a barrier-like structure can be seen, consistent with the initial build-up of correlations followed by a decrease due to dissipation. In terms of bond-dimension, the value of S̃ converges quickly, with the curves of S̃ using χ=16 and 128 for the CCP and QCP respectively differing by approximately 10⁻⁷ and 10⁻² from higher-χ results, and thus appearing identical to those shown on the scale plotted. The fact that the barrier for the CCP is much lower than for the QCP suggests that the dynamics should be much easier to approximate in the former case. (b) The evolution of the error estimate for the QCP, defined here as the square root of the sum of squared Schmidt coefficients discarded during truncation at each time-step, with the time-integrated discarded weight, which provides a measure of the total error to that time, displayed in the inset. As for S̃, the error shows a barrier-like structure, consistent with a build-up of errors that leads to the significant finite-χ effects seen in figure 2. As the value of χ is doubled, the barrier peak decreases correspondingly, leading to more accurate approximation of observables over longer times.
In contrast, for open quantum systems, the dynamics implemented by 𝓛† is in principle completely different from that of 𝓛. This is exemplified by the specific case of the QCP, where the dual dynamics does not have an absorbing state. Furthermore, when studying the survival probability, the Heisenberg picture can be implemented using a homogeneous initial vector, in contrast to the evolution starting from an initial seed state in the Schrödinger picture. In cases such as the QCP, where the differences in the dual dynamics are significant, it might then be that performance in the Heisenberg picture is also substantially modified relative to that of the Schrödinger picture.
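The duality between the two pictures, Tr[O ρ(t)] = Tr[O(t) ρ(0)], can be verified explicitly for a small system by exponentiating both 𝓛 and its adjoint (single dissipative qubit, illustrative parameters, column-stacking vectorisation):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
num = np.array([[0, 0], [0, 1]], dtype=complex)
gamma, Omega = 1.0, 2.0
H, J = Omega * sx, np.sqrt(gamma) * sm
I = np.eye(2)
JdJ = J.conj().T @ J

# Schroedinger-picture generator on vec(rho)
Lmat = (-1j * (np.kron(I, H) - np.kron(H.T, I))
        + np.kron(J.conj(), J)
        - 0.5 * (np.kron(I, JdJ) + np.kron(JdJ.T, I)))

t = 1.7
rho0 = np.array([[0, 0], [0, 1]], dtype=complex)

# Evolve the state with L, or the observable with the dual generator L^dagger
rho_t = (expm(Lmat * t) @ rho0.reshape(-1, order='F')).reshape(2, 2, order='F')
O_t = (expm(Lmat.conj().T * t) @ num.reshape(-1, order='F')).reshape(2, 2, order='F')

lhs = np.trace(num @ rho_t)   # Schroedinger picture expectation
rhs = np.trace(O_t @ rho0)    # Heisenberg picture expectation
print(np.isclose(lhs, rhs))   # -> True
```

The two evolutions are generated by different matrices, which is the formal counterpart of the different entropy barriers observed numerically in the two pictures.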
To explore this, we calculate the survival probability for the QCP in the Heisenberg picture using TEBD and χ=256, 512, along with the operator-space entanglement entropy, as shown in figure 4. As can be seen, the entropy displays a characteristic barrier as in the Schrödinger picture. However, the barrier is substantially lower in the Heisenberg picture than in the Schrödinger picture (though it is also less sharply peaked). Correspondingly, the survival probability shows dramatically reduced finite bond-dimension effects, with the χ=512 approximation showing reasonable power-law behaviour up until γt=10, leading to an estimated exponent of δ=0.27±0.04 when fit over γt ∈ [2,4].
On a practical level, these results suggest that the Heisenberg picture might allow for a more accurate approximation of the survival probability, P_sur, and thus δ, at cheaper computational cost (lower bond-dimension). Other observables, such as the total density, can also be approximated in the Heisenberg picture, e.g. in order to establish the exponent Θ with greater accuracy, though we do not discuss this direction further.
Trajectories for the QCP

Entanglement distribution

As an alternative to the Lindblad formalism and double-space approach, we now consider a stochastic unravelling of the master equation, realised algorithmically by Quantum Jump Monte Carlo (QJMC) [49,50]. In this quantum trajectories (QTs) approach, each individual quantum trajectory corresponds to a pure state evolution. Therefore, the standard TEBD method for closed quantum systems can be applied quite directly to simulate a given trajectory, with the sample means over all trajectories providing an approximation for the observables of the average (Lindblad) dynamics.
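A minimal first-order QJMC unravelling can be sketched for a single decaying, driven qubit (illustrative parameters; in this work the same logic is applied to MPS trajectory states rather than dense vectors). At each step the norm loss under the effective non-Hermitian Hamiltonian H_eff = H − (i/2) J†J sets the jump probability:

```python
import numpy as np

rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
gamma, Omega, dt = 1.0, 2.0, 0.001
H = Omega * sx
J = np.sqrt(gamma) * sm
Heff = H - 0.5j * (J.conj().T @ J)

def trajectory(t_max):
    psi = np.array([0, 1], dtype=complex)        # start occupied
    for _ in range(int(t_max / dt)):
        # First-order no-jump evolution with the non-Hermitian Heff
        phi = psi - 1j * dt * (Heff @ psi)
        p_jump = 1.0 - np.vdot(phi, phi).real    # norm loss = jump probability
        if rng.random() < p_jump:
            psi = J @ psi                        # quantum jump (decay event)
        else:
            psi = phi
        psi = psi / np.linalg.norm(psi)
    return psi

# Sample mean over trajectories approximates the Lindblad expectation value
n_avg = np.mean([abs(trajectory(2.0)[1])**2 for _ in range(200)])
print(n_avg)
```

Each trajectory stays pure, so the ordinary von Neumann entropy applies along the evolution; this is what makes S_traj a random variable with a distribution, as discussed next.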
As before, we will be interested in quantifying the presence of entanglement in the dynamics and how this affects the accuracy of TN calculations of universal non-equilibrium physics. Since each individual trajectory in QJMC is governed by a pure state evolution, the von Neumann entanglement entropy of the state has the usual physical meaning. However, unlike the closed system case, the entanglement of a trajectory, S_traj, can now be considered a random variable and thus associated to a distribution/probability. While this might seem to be a complicating feature compared to the single operator space entanglement entropy in the double-space case, it is actually very helpful for building a picture of the accuracy of simulations using MPSs: if we associate to a given bond-dimension, χ, some characteristic maximum entanglement 'cutoff', S̄_c(χ), then we expect that the set of trajectories with S_traj ≪ S̄_c(χ) will be well approximated, while those with S_traj ≈ S̄_c(χ) will be subject to finite bond-dimension effects. In other words, for low-entanglement trajectories the entanglement cutoff will be irrelevant, but for high-entanglement trajectories it will be relevant.

Figure 4. (a) The survival probability for the QCP, calculated in the Heisenberg picture with bond-dimensions χ_H and compared with the Schrödinger picture results. While the Schrödinger picture approximations deviate dramatically from the expected power-law behaviour, the Heisenberg picture simulations perform much better, and the χ_H=512 case can be approximated by a power-law until γt=10. (b) The entropy barrier for the dynamics in the Heisenberg picture and Schrödinger picture. In the Heisenberg picture, the evolution of S̃ shows the same barrier structure as for the Schrödinger picture. However, the shape of the barrier is different, with a considerably lower peak. This suggests that the same bond-dimension, and therefore the same computational cost, would lead to a significantly higher accuracy approximation, consistent with the results for P_sur.
To explore these issues, we first consider the distribution of S_traj in 1000 trajectories simulated with TEBD for χ=16, 128 and 256. The evolution of each trajectory was calculated up until γt=10 using a Trotter step of γδt=0.01, chosen so that the order of the associated error in the QJMC (which is a first-order scheme) is comparable with the second-order scheme used in the double-space case. The system size was fixed at L=128, for which observables agreed with those simulated with L=256 in the double-space case, see section 2. The entanglement distributions for γt=1, 5 and 10 are shown in figure 5, with the rows illustrating the evolution of the entanglement distributions for χ=16, 128 and 256 respectively. Details of the histogram construction are given in the caption.
In figure 5(a), when γt=1, all the histograms displayed are very similar: each shows the same bimodality, with one peak at S=0 attributable to the unentangled absorbing state, the only notable difference being that the lowest bond-dimension distribution carries slightly more weight near S=0. As such, the means of the distributions, ⟨S⟩, shown as vertical dashed lines, are very close. This similarity can be explained by comparing these distributions with the maximum value of S found for any trajectory at any time. These values, plotted as dotted vertical lines, can be considered as a measurement of the cutoff, S̄. For all of χ=16, 128 and 256, the values of S̄ lie well above the support of the distribution, and we can expect the effect of the cutoff to be minimal. In fact, given the separation between the support and S̄, we might expect there to be essentially no effect, and it is interesting that there is still a clear discrepancy at S=0 for χ=16. This discrepancy suggests that the presence of an absorbing state poses challenges for numerical simulations.
In figure 5(b), where γt=5, the distributions of entropies for χ=16 and χ=256 differ significantly, with a large proportion of trajectories for χ=256 displaying entropies larger than the entanglement cutoff for χ=16, leading to an artificial build-up of weight around S̄ for the χ=16 simulations. Thus, we can conclude that the finite-entanglement cutoff is relevant at this time for χ=16, and the effect on the mean is clearly visible. This is in contrast to the case of χ=128, which retains a good agreement with the χ=256 simulations.
[Displaced fragment of the figure 5 caption: "…The second column displays the three distributions at γt=5. By this time the χ=16 distribution differs significantly from the others, showing a wall-like behaviour near the corresponding maximum entanglement, while the other distributions largely agree. (c) By γt=10, shown in the third column, the distributions for χ=128 and 256 differ visibly. However, the difference represents only a small fraction of the overall weight. One might then expect that simple quantities, such as the densities, calculated with both χ=128 and χ=256 will be similar, as is indeed found for the observables displayed in figure 6."]
In figure 5(c), where γt=10, the support of the distribution for χ=256 is close to the entanglement cutoff S_c(χ=256), and above S_c(χ=128). This corresponds to a noticeable difference in the entanglement distributions for χ=128 and χ=256, though the difference is only slight compared to that of χ=16. In fact, since only a small weight appears in the χ=256 distribution above the value of S_c(χ=128), we might expect that the accuracy of the χ=128 simulations for observables will still be reasonably good. Moreover, we might expect that the χ=256 simulations themselves are accurate, due to the fact that there is no significant build-up of weight near S_c(χ=256), which in the χ=16 case provided a clear indication of the entanglement cutoff's relevance.

Universal dynamics with trajectories
To assess the accuracy of the trajectories approach for the QCP and investigate the relationship between the entanglement distributions and the approximation of observables, we repeat the analysis for the QCP performed in section 4, as shown in figure 6 for χ=64, 128 and 256. From the analysis of the entanglement distributions, figure 5, we expect to find good accuracy up to γt=10 for χ=128 and 256, and indeed the curves of N_a(t), P_sur(t) and n_seed(t) overlap closely for these bond-dimensions, with χ=64 deviating more noticeably. Compared to the double-space simulations, figure 2, the observables calculated from trajectories are much more accurate: with χ=128 and 256 they converge up to γt=10 within statistical errors (given as two standard errors from the sample mean and indicated by the shaded region in the plots). Furthermore, they display the expected power-law behaviours, (9)-(11), for this whole period. This allows for fits to be performed over a longer region of time, γt ∈ [2, 10], thus helping to eliminate finite-time errors. Since the curves for χ=64, 128 and 256 lie well within the same shaded region for each observable, we can conclude that the finite bond-dimension effects are essentially negligible relative to the statistical errors in this region. As such, we provide purely statistical error estimates on the estimated exponents, obtained via a statistical bootstrap (see figure 6 for details). While these error estimates are large (corresponding to an approximate 95% confidence interval), they can easily be reduced by increasing the number of samples.
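The bootstrap procedure for the exponent errors can be sketched as follows, with synthetic data standing in for the actual trajectory observables (the exponent value, noise model and fit window below are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 1000 "trajectories", each a noisy sample of a
# power law N(t) ~ t^theta over the fit window gamma*t in [2, 10].
t = np.linspace(2.0, 10.0, 30)
theta_true = 0.32  # illustrative exponent, not a fitted value
samples = t[None, :] ** theta_true * rng.lognormal(0.0, 0.3, size=(1000, 30))

def fit_theta(data):
    """Exponent from a least-squares fit of log<N> against log t."""
    return np.polyfit(np.log(t), np.log(data.mean(axis=0)), 1)[0]

# Bootstrap: refit the power law to 1000 resampled datasets of
# 1000 trajectories each, drawn with replacement.
estimates = []
for _ in range(1000):
    resampled = samples[rng.integers(0, len(samples), size=len(samples))]
    estimates.append(fit_theta(resampled))

theta_hat = fit_theta(samples)
error = 2 * np.std(estimates)  # quoted as two standard deviations
```

Because the resampling acts on whole trajectories, the resulting spread reflects the trajectory-to-trajectory statistical fluctuations, precisely the error source the text identifies as dominant over finite bond-dimension effects.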

Conclusions and outlook
In this paper, we have made use of MPSs to study the critical dynamics of the classical and quantum contact processes, which display non-equilibrium absorbing-state phase transitions. For the QCP we have shown how the Heisenberg picture, where the dynamics does not display an absorbing state and can be implemented using a uniform initial vector, can be used to improve the accuracy of simulations in the Lindblad formalism over and above the Schrödinger picture. Moreover, when combining TEBD with a Quantum Jump Monte Carlo approach, we find that the expected critical behaviour can be reproduced with much higher accuracy and for longer times than with the Schrödinger-picture Lindblad approach. In all approaches, we show that the entanglement in the MPSs can be used to understand these differences clearly, providing both a useful diagnostic tool and a physical picture that links the numerical method to the dynamics in question.
[Displaced caption. Figure 6. Critical dynamics in the QCP using the trajectories approach: plots showing the evolution of N_a(t), P_sur(t) and n_seed(t), calculated using QJMC and TEBD (see the second row of figure 2 for the same quantities calculated using the double-space approach). The error estimates for the exponents are calculated by bootstrap: the relevant power-laws are refit to 1000 datasets of 1000 trajectories generated by resampling, and the error is then given as twice the standard deviation of the resulting empirical distributions. (a) The evolution of N_a(t), calculated from the sample mean of 1000 trajectories with χ=64, 128 and 256. The shaded region indicates the statistical uncertainty for the χ=256 estimate, quantified as twice the standard error. All three curves lie within this region, indicating that finite bond-dimension effects are small relative to statistical error. All curves show a roughly power-law behaviour up to γt=10, a significant improvement over the double-space case (figure 2). The critical exponent Θ was estimated by fitting a power-law for γt ∈ [5, 10] (shown as the dashed black line). (b) The evolution of P_sur(t), calculated using the same method as for N_a(t). Once again all three curves lie well within the shaded region and display an approximate power-law behaviour; fitting for γt ∈ [5, 10] establishes δ. (c) The evolution of n_seed(t). As with N_a(t) and P_sur(t), the convergence with bond-dimension and agreement with a power-law are dramatically improved relative to the double-space results of figure 2. However, the presence of statistical errors still leads to a relatively high uncertainty on the estimate of the critical exponent z, calculated from the power-law fit n_seed(t) ∼ t^(Θ−1/z), using the value of Θ from (a).]
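The extraction of z from the seed-density fit can be spelled out in one line of arithmetic, assuming the standard spreading form n_seed(t) ∼ t^(Θ−1/z) in one dimension (the numbers below are placeholders, not the paper's fitted values):

```python
# Illustrative only: recover z from a fitted n_seed exponent, assuming
# n_seed(t) ~ t^(Theta - 1/z). Both numbers below are hypothetical.
theta = 0.32   # hypothetical exponent from the N_a(t) fit
slope = -0.68  # hypothetical fitted exponent of n_seed(t)
z = 1.0 / (theta - slope)
print(z)  # ~ 1.0 for these placeholder values
```

Since z enters only through the small difference Θ − slope, the statistical uncertainties of both fits propagate into a comparatively large uncertainty on z, consistent with the caption's remark.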
The difference in accuracy found between the Lindblad and quantum trajectories (QT) approaches in the case of the critical QCP emphasises, as has been noted previously [17], that when considering the simulation of open quantum systems with TNs, one should examine the different approaches to simulating the dynamics carefully. Given the observed superiority of a QT approach for capturing the critical QCP dynamics, it would be interesting to know whether this result is more general, and whether approaches such as QJMC can allow one to study critical dynamics in other systems at higher accuracy than is otherwise possible.
Finally, in the process of investigating these primary issues, we have also provided several results for the critical physics of the QCP. The most convincing conclusion we can draw from these is that the universality class of the QCP cannot be DP, as evidenced by the fact that the best estimate of the exponent, δ=0.26, is far from those of 1d DP and 2d DP, see figure 6 and table 1. This was also confirmed by the results from the double-space calculations, shown in figure 2. However, since finite bond-dimension errors are small relative to statistical errors when using QJMC, we can use the statistical error to quantify the difference between exponents more carefully. With a standard error of 0.02 for the estimate δ=0.26, the 1d DP value of δ=0.16 lies five standard errors from the QCP estimate, while the 2d DP value of δ=0.45 lies 9.5 standard errors away. As such, it seems that the QCP universality class is genuinely different from directed percolation, though the presence of finite-time errors prevents us from stating this with absolute certainty.
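The quoted standard-error distances follow directly from the numbers in the text, as a quick check confirms:

```python
# Distances of the DP reference exponents from the QCP estimate
# delta = 0.26 with standard error 0.02 (values quoted in the text).
delta_qcp, se = 0.26, 0.02
dp_1d, dp_2d = 0.16, 0.45
print((delta_qcp - dp_1d) / se)  # ~ 5 standard errors to 1d DP
print((dp_2d - delta_qcp) / se)  # ~ 9.5 standard errors to 2d DP
```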
Given these results, it is of interest to understand whether the QCP can be associated with some other known universality class, and to identify exactly what the relevant quantum contributions might be that push the QCP away from 1d DP. An interesting remark in this regard concerns the rapidity-reversal symmetry present in DP [34], which leads to the relation α=δ between the two exponents characterising the decay of the density, see table 1. While we have only investigated the value of δ in this work, the value of α has been estimated previously in [11] as α=0.36±0.08. This value lies 5 standard errors from our estimated δ=0.26, which seems to suggest that rapidity reversal is indeed broken in the QCP, though confirmation of this will require a better determination of α, with QJMC offering a promising approach given the results presented here.