Numerical solutions to Einstein's equations in a shearing-dust Universe: a code comparison

A number of codes for general-relativistic simulations of cosmological structure formation have been developed in recent years. Here we demonstrate that a sample of these codes produces consistent results beyond the Newtonian regime. We simulate solutions to Einstein's equations dominated by gravitomagnetism, a vector-type gravitational field that does not exist in Newtonian gravity and produces frame dragging, the leading-order post-Newtonian effect. We calculate the coordinate-invariant effect on intersecting null geodesics by performing ray tracing in each independent code. With this observable quantity, we assess and compare each code's ability to compute relativistic effects.


I. INTRODUCTION
The flat Λ Cold Dark Matter (ΛCDM) model is the backbone of modern cosmology. Originally proposed in the context of the inflationary scenario [1] and to accommodate observations of structures on large scales [2], it has emerged as the concordance cosmological model [3,4] after the discovery of the accelerating expansion of the Universe [5,6]. Based on the assumption that the Universe is statistically homogeneous and isotropic on large scales (specifically, adopting the Friedmann-Lemaître-Robertson-Walker (FLRW) model in general relativity (GR) with a cosmological constant Λ), ΛCDM successfully explains the majority of our cosmological observations in a surprisingly simple framework [7,8]. Yet ΛCDM faces a number of challenges, both theoretical and observational. While a cosmological constant representing vacuum energy [9,10] is the simplest possible form of dark energy, the measured value is difficult to justify from a theoretical standpoint [11-13]. With the continuous improvement of cosmological observations, a number of tensions have started to emerge [see, e.g. 14], particularly between low- and high-redshift measurements of some cosmological parameters. For example, a significant tension exists between supernova [15] and cosmic microwave background [7] measurements of the present Hubble expansion rate, H_0 [16]. The former depends on calibration of the cosmic distance ladder [17], while the latter depends on assuming ΛCDM as the cosmological model. Also assuming ΛCDM, a tension is present between high- and low-redshift observations of σ_8, the parameter measuring the growth of structure [18,19]. Recently, some evidence for a spatially curved universe has been claimed [20,21] and disputed [22], with some authors suggesting the possibility of structure formation-induced curvature [23]. Motivated theoretically and by these tensions, a number of alternatives to ΛCDM have been considered. These range from an interacting vacuum scenario [see, e.g. 24-27, and references therein], to scalar fields [28] and modified gravity models [see e.g. 29-32, and references therein]. However, ΛCDM is still largely preferred when Bayesian model comparisons are carried out [33-37].
With the increasing precision of current and upcoming cosmological surveys [38-40], the ΛCDM model will be truly put to the test. In view of these future observations and their target 1% precision, current state-of-the-art cosmological N-body simulations of structure formation aim at the same precision in theoretical predictions. However, considering that these N-body simulations are mostly based on the Newtonian approximation in lieu of full GR, it is timely to address the possibility that some percent-level GR effects may be missed, potentially biasing the inferred likelihood of cosmological parameters. Understanding the role of general-relativistic effects on observations will thus be crucial in correctly interpreting these precision data.
While the extensions and alternatives to ΛCDM mentioned above explore new physics, one can also consider the inclusion of existing physics that is neglected by the standard cosmological model. In particular, in recent years a number of general-relativistic codes have been developed for cosmology, employing either a formally exact treatment of the metric [41-46] or an approximate scheme [47,48]. These tools provide new ways to study aspects of GR beyond the limited scope of known analytic solutions and perturbative expansions around them. For instance, they have been applied to quantify gravitational back-reaction of small-scale structures [49-54], light-cone projection effects [55-58], and the impact of relativistic species [59,60] on the evolution and observation of large-scale structure. These codes have proven reliable through comparisons to both linearized and exact GR solutions (e.g. [61-64]), and have in turn been used to validate the applicability of traditional Newtonian simulations to cosmology in the weak-field limit (e.g. [45,65,66]).
Here, we compare several codes within a controlled setup that features an artificially large gravitomagnetic vector potential as part of the metric, generating a frame-dragging effect. Connected to the rotation of masses, frame dragging has been measured in the gravitational field of the Earth [67]. Gravitomagnetism and frame dragging are purely relativistic and absent from Newtonian gravity, a theory based on a single scalar potential (see however [68], where this effect is computed from a Newtonian code using a post-Friedmann approximation [69]). There are only a few known analytic solutions that exhibit frame dragging, e.g. the Kerr and Kerr-Newman solutions, although the effect is ubiquitous in GR, for example in rotating neutron stars [see, e.g. 70]. Numerical cosmological solutions with large frame-dragging effects have also been studied [71].
In the limit where linear cosmological perturbation theory provides an accurate description of the behavior of a spacetime, frame dragging is associated with the presence of vector modes [72,73] and is well understood and under analytic control. However, vector modes may also be sourced through nonlinear processes [see, e.g. 74-80], potentially interfering with our ability to measure phenomena such as primordial gravitational waves of sufficiently small amplitude [81]. They can also produce a gradient in the matter density field [82]. While such vector modes are not expected to be a dominant contribution to either the dynamics of our Universe or the propagation of information through it, simulating such a physical system nevertheless provides us with a way to explore the regime of validity of approximate numerical and analytic models. At the nonlinear level, we must seek to validate both perturbation theory and approximate numerical approaches against fully general-relativistic calculations.
In this work, we do precisely this. We describe and perform a comparison between linear theory, relativistic simulations that use approximate methods to treat relevant nonlinear effects, and numerical relativity simulations that provide exact numerical general-relativistic solutions. We present the initial data for our simulations in Section II, describe the observables we compute in Section III, give an overview of the different computational frameworks in Section IV, discuss our results in Section V and conclude in Section VI.
Unless otherwise stated, we use Latin indices to represent spatial indices, which take values 1, 2, and 3, and Greek indices to represent spacetime indices, which take values 0, 1, 2, and 3, with repeated indices implying summation. We set the speed of light c = 1.

II. INITIAL DATA
GR admits a well-posed initial-value formulation [see, e.g. 83]. A consequence of this is that in cosmological simulations we do not need to specify an a-priori fixed background. We choose coordinates such that our initial Cauchy surface is described by a fixed coordinate time, t = t_*. The line element in the 3+1 decomposition is [84]

ds^2 = -α^2 dt^2 + γ_ij (dx^i + β^i dt)(dx^j + β^j dt),

where x^i ∈ {x, y, z} are coordinates on the three-dimensional space-like hypersurface, γ_ij is the spatial metric, α is the lapse function, and β^i is the shift vector. While we will set initial conditions non-perturbatively, we can draw a connection to linear perturbation theory both for intuition and for comparison. We choose the initial lapse, shift, spatial metric, and extrinsic curvature according to eqs. (2)-(5), respectively. This can be regarded as a snapshot at an initial time t_* of an exact perturbation of a reference Einstein-de Sitter (EdS) model with Hubble expansion rate H_*. Here, L is the characteristic length scale of the vector perturbation that also determines the size of the simulation volume (the initial conditions are compatible with periodic boundary conditions that identify y = L with y = 0), b is a dimensionless amplitude, and the asterisk indicates that a quantity is evaluated on the initial Cauchy surface. The connection with linear perturbation theory will become apparent shortly. Having fixed the initial conditions for the metric, we can proceed by solving the Hamiltonian and momentum constraint equations on the initial surface to obtain valid initial data for the matter. We assume that the matter can be described (at least initially) as a perfect fluid with vanishing pressure, such that the stress-energy tensor is given by

T^µν = ρ_0 u^µ u^ν,

where ρ_0 is the rest mass-energy density and u^µ is the four-velocity of the fluid. For collisionless matter the fluid description can break down at some point in the evolution due to stream crossing, and we will comment on this issue later.
The Hamiltonian and momentum constraint equations are, respectively,

(3)R + K^2 - K_ij K^ij = 16πG ρ,
D_j (K^ij - γ^ij K) = 8πG S^i,

where ρ and S^i are the energy and momentum density measured by an observer moving along the normal to the hypersurface, the 3-curvature (3)R and covariant derivative D_j are associated with the 3-metric γ_ij, and K = γ^ij K_ij. Together with the mass-shell condition g_µν u^µ u^ν = -1, this yields a closed system of equations from which we can determine ρ_0^* and u^µ_*. Solving this system, given the initial conditions (2)-(5), we obtain explicit expressions for ρ_0^* and the remaining components of u^µ_*, with u^y_* = u^z_* = 0. It is worth noting at this point that values b > 2H_*^2 L^2/π are unphysical, as they violate the weak energy condition ρ_0^* > 0 at t_*. For H_*L > π√(2/3), i.e. for exact perturbations outside the Hubble horizon, the physical range of b is even more restricted, becoming bounded from above by b < (4/3)√(3H_*^2 L^2 - π^2). All of the explicit expressions given so far are of course valid in the coordinate system we chose, in particular with β^i = 0. We can therefore use these expressions directly to set the initial conditions in those simulations that use such coordinates. However, in some simulations we will instead use the coordinates of the so-called Poisson gauge, in which vector perturbations are completely carried by the shift. In this gauge we denote the line element as

ds^2 = a^2(t) [ -(1 + 2ψ) dt^2 - 2B_i dt dx^i + ((1 - 2φ)δ_ij + h_ij) dx^i dx^j ],

where a(t) is the scale factor of the reference EdS model, ψ and φ are the scalar potentials, the shift B_i is the transverse gravitomagnetic vector potential, and h_ij is transverse and traceless.
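As a concrete sanity check (not part of any code's pipeline; all names below are illustrative), the Hamiltonian constraint above can be evaluated numerically. In the unperturbed b = 0 limit the extrinsic curvature is pure trace, K_ij = -H_* γ_ij, the slice is flat, and the constraint must reduce to the Friedmann equation of the reference EdS model, ρ_0^* = 3H_*^2/(8πG):

```python
import numpy as np

# Hamiltonian constraint for a flat slice ((3)R = 0):
#   (3)R + K^2 - K_ij K^ij = 16 pi G rho
# EdS snapshot: gamma_ij = delta_ij and K_ij = -H_* gamma_ij (pure trace).

G = 1.0          # geometric units
H_star = 0.1     # initial Hubble rate (arbitrary illustrative value)

gamma = np.eye(3)                  # flat spatial metric
K_ij = -H_star * gamma             # EdS extrinsic curvature
gamma_inv = np.linalg.inv(gamma)

K_trace = np.einsum('ij,ij->', gamma_inv, K_ij)   # K = gamma^ij K_ij
K_up = gamma_inv @ K_ij @ gamma_inv               # K^ij (both indices raised)
KK = np.einsum('ij,ij->', K_up, K_ij)             # K_ij K^ij
R3 = 0.0                                          # flat slice

rho = (R3 + K_trace**2 - KK) / (16 * np.pi * G)
print(rho, 3 * H_star**2 / (8 * np.pi * G))       # the two values agree
```

This recovers the Friedmann equation term by term: K^2 = 9H_*^2 and K_ij K^ij = 3H_*^2, so 16πGρ = 6H_*^2.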
For the particular initial data chosen, a closed expression for the coordinate transformation x^µ → x̃^µ exists for an infinitesimal (t - t_*) around the initial Cauchy hypersurface. The metric variables in Poisson gauge are then fixed on the initial hypersurface, with φ_* = h_ij^* = 0. We note that, from the point of view of the Poisson gauge, the extrinsic curvature is only needed to provide the initial data for the propagating gravitational waves (i.e. the free part, or homogeneous solution, of h_ij). Due to the weak-field approximation, this part completely decouples from the remaining dynamics in gevolution, while it is neglected from the outset in gramses; these are the two codes detailed below that take their initial data in this coordinate system. Therefore the extrinsic curvature in Poisson gauge is not required for setting initial conditions in this work.
From (16) it is clear that the dimensionless parameter b measures the strength of the gravitomagnetic field on the initial hypersurface. Neglecting b^2 terms in the above expressions, we obtain initial conditions for the first-order solutions in the two gauges, and find φ_* = ψ_* in Poisson gauge, as expected. We discuss this further in Section III A below.

III. BEHAVIOR OF OBSERVABLES
We now want to construct an observable that can be used to "measure" the frame-dragging effect even in the nonperturbative case. Consider an observer comoving with the fluid and located at a point of symmetry where spacetime is invariant under a parity transformation. Without loss of generality we can choose the observer to be located at the origin, x_O = y_O = z_O = 0. Now consider two events A and B on the initial hypersurface that emit a flash of light in all directions, placed at fixed coordinates within the coordinate system that we introduced on the initial hypersurface. The null geodesics that connect each of these two events with the worldline of the observer get "lensed" by the frame-dragging effect (see Figure 1 for an illustration). The ray coming from A travels close to a plane of symmetry, while the ray coming from B travels almost orthogonal to it. They will therefore be affected in different ways. One effect is that the angle ϑ between the two incoming rays is not exactly 90 degrees in the frame of the observer. The non-vanishing dot-product of the two direction vectors (in the observer's rest frame),

cos ϑ = δ_ij (e^i_µ k^µ_A)(e^j_ν k^ν_B) / ((u_µ k^µ_A)(u_ν k^ν_B)),     (17)

is therefore an observable that directly relates to the frame-dragging effect. Here u^µ is the observer's four-velocity (which coincides with that of the fluid in this case), k^µ_A, k^ν_B denote the two null vectors of the incoming geodesics, and the e^i_µ are the basis vectors of the Fermi frame that, up to rotations, is fixed by the requirement that u^µ e^i_µ = 0 and g^µν e^i_µ e^j_ν = δ^ij.
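The construction above can be sketched numerically: given the metric, the observer's four-velocity, and the two null vectors at the observer, one builds a spatial triad by Gram-Schmidt orthonormalization against u^µ and takes the dot product of the normalized direction vectors. The following Python sketch (function names and the Minkowski test case are illustrative, not any code's actual implementation) shows the idea; two rays arriving along orthogonal directions give a vanishing dot product:

```python
import numpy as np

def fermi_frame(g, u):
    """Orthonormal spatial triad orthogonal to u (Gram-Schmidt in metric g)."""
    basis = [u]
    for seed in np.eye(4):
        v = seed.copy()
        for e in basis:
            low = g @ e                              # index lowered with g
            v = v - (v @ low) / (e @ low) * e        # project out e
        if abs(v @ g @ v) > 1e-12:                   # skip degenerate seeds
            basis.append(v / np.sqrt(v @ g @ v))
        if len(basis) == 4:
            break
    return basis[1:]                                 # the three spatial legs

def direction_dot(g, u, kA, kB):
    """Dot product of the two unit arrival directions in the observer frame."""
    triad = fermi_frame(g, u)
    nA = np.array([e @ g @ kA for e in triad]) / (-(u @ g @ kA))
    nB = np.array([e @ g @ kB for e in triad]) / (-(u @ g @ kB))
    return nA @ nB

# Minkowski sanity check: rays arriving along -x and -y are orthogonal.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
u = np.array([1.0, 0.0, 0.0, 0.0])
kA = np.array([1.0, -1.0, 0.0, 0.0])                 # null vector
kB = np.array([1.0, 0.0, -1.0, 0.0])                 # null vector
print(direction_dot(g, u, kA, kB))                   # 0 (90-degree arrival)
```

The normalization by -(u_µ k^µ) makes each direction a unit vector, since for a null vector the spatial norm in the observer's frame equals its measured energy.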

A. Linear regime
We first analyze a solution in the regime where the amplitude of the perturbation b is small, and a linear treatment, i.e. a first-order expansion in b about an EdS background, is therefore a good approximation. In this regime it is convenient to work with first-order gauge-invariant variables, an approach pioneered by Bardeen in his seminal work [72]. If matter has vanishing pressure, the first-order gauge-invariant vector mode decays like 1/a^2. Noting that a ∝ t^2 if the Universe is matter dominated (where t is conformal time), we find the linear solution corresponding to our initial conditions.
In Poisson gauge, the same linear solution is carried entirely by the shift; thus in this gauge the entire first-order metric perturbation is encoded in B_i. In a generic gauge, Bardeen's gauge-invariant potential is a linear combination of the shift and the time derivative of the transverse-vector part of γ_ij. Through the momentum constraint this potential is sourced by one of the two matter vector velocity perturbations, namely the one representing the vorticity of u^µ. Using the momentum constraint we can therefore relate the gauge-invariant quantity between the two gauges, where the left-hand side of the resulting relation represents the gauge-invariant vector mode of the metric in Poisson gauge and the right-hand side the same quantity in the other gauge. We can also solve the null geodesic equations perturbatively. According to eq. (17), in order to obtain the direction vectors we have to contract the null vectors at the observer with the basis vectors that provide local Fermi coordinates, which can be constructed perturbatively as well. At leading order, the resulting closed-form expression (21) for the dot-product has several intuitive properties. First, it is directly proportional to the amplitude b of the vector perturbation. Second, in the limit H_*L ≪ 1 it asymptotes to 3b/4, which is independent of H_*L. This makes sense because deep inside the horizon the time it takes for the light to reach the observer is much shorter than the dynamical time scale of the perturbation. The observable hence becomes insensitive to time evolution. Third, in the limit H_*L ≫ 1 quite the opposite is true, and the asymptotic value is 2b H_*^{-2} L^{-2}. The signal gets damped because the vector mode decays significantly while the light travels through the spacetime.

B. Nonlinear regime
Beyond linear order a minor complication arises because the two flashes of light do not arrive at exactly the same time. The angle between them remains uniquely defined in the observer's inertial reference frame, and thus one needs to keep track of the rotation of that frame with respect to any coordinate system that is used for the calculation. The arrival vector of the first ray must therefore be parallel transported along the observer's world line in order for the angles to be comparable. On the other hand, the time delay can be seen as another observable that is linked to the frame-dragging effect.
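The parallel-transport step is a straightforward ODE integration of dv^µ/dλ + Γ^µ_αβ T^α v^β = 0 along the observer's world line. A minimal RK4 sketch is given below; for a self-contained test it uses flat 2D space in polar coordinates as a stand-in geometry (not the paper's spacetime), where the transported vector is known analytically:

```python
import numpy as np

def christoffel_polar(x):
    """Nonzero symbols of flat 2D space in (r, theta) coordinates."""
    r = x[0]
    G = np.zeros((2, 2, 2))
    G[0, 1, 1] = -r          # Gamma^r_{theta theta}
    G[1, 0, 1] = 1.0 / r     # Gamma^theta_{r theta}
    G[1, 1, 0] = 1.0 / r     # Gamma^theta_{theta r}
    return G

def transport(v0, curve, tangent, christoffel, lam0, lam1, n=1000):
    """RK4 integration of dv/dlam = -Gamma(curve) . tangent . v along the curve."""
    def rhs(lam, v):
        G = christoffel(curve(lam))
        return -np.einsum('mab,a,b->m', G, tangent(lam), v)
    h = (lam1 - lam0) / n
    v, lam = np.array(v0, float), lam0
    for _ in range(n):
        k1 = rhs(lam, v)
        k2 = rhs(lam + h/2, v + h/2*k1)
        k3 = rhs(lam + h/2, v + h/2*k2)
        k4 = rhs(lam + h, v + h*k3)
        v = v + h/6*(k1 + 2*k2 + 2*k3 + k4)
        lam += h
    return v

# Transport v = (1, 0) a quarter turn along the circle r = 2; the analytic
# result is (cos(pi/2), -sin(pi/2)/r0) = (0, -0.5), with norm preserved.
r0 = 2.0
v = transport([1.0, 0.0], lambda l: np.array([r0, l]),
              lambda l: np.array([0.0, 1.0]), christoffel_polar, 0.0, np.pi/2)
print(v)
```

In the actual calculation the same equation is integrated along the observer's world line in the simulated 4-metric, with the Christoffel symbols evaluated from the numerical metric.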
In Figure 1 we illustrate this visually, depicting photon trajectories as they traverse a spacetime with a large vector mode perturbation. The deflection of the photons is clearly visible in this case, along with the time delay. In the background, stream crossings in the density field can be seen as nonlinear collapse occurs. We plot the ADM density in a gauge with a harmonic lapse condition and zero shift, which differs from the rest density at O(b^2), e.g. as noted in eq. (30).

IV. COMPUTATIONAL FRAMEWORKS
We compute the observable (17) using four different relativistic codes: gevolution, gramses, the Einstein Toolkit (ET), and CosmoGRaPH. Each code employs a different relativistic approach to evolving the initial conditions presented in Section II, as detailed below. This study therefore provides a valuable comparison of these different computational methods in the context of a problem applicable exclusively to relativistic codes.
We evolve the gauge-appropriate initial conditions (as per Section II) with each code for several light-crossing times. This allows sufficient time for the light pulses to reach the observer even in the case of a large perturbation. For each code we perform 30 simulations with different initial conditions. Specifically, we choose ten values H_*L = 2^n for n ∈ {-7, ..., 2}, and three initial perturbation amplitudes for each choice of H_*L, namely b ∈ {0.5, 0.05, 0.005} × H_*^2 L^2. This particular choice is motivated by the fact that b/(H_*^2 L^2), much like the compactness parameter of compact objects, measures how extreme the matter configuration is. We remind the reader that on sub-horizon scales b is bounded from above by 2H_*^2 L^2/π, beyond which the matter configuration would no longer satisfy the weak energy condition. Some intuition can also be gained from considering the curl part of the momentum constraint in Poisson gauge. Here the Laplacian of the gravitomagnetic potential B_i, whose initial amplitude is given by b, is equal to the curl part of the momentum density (∆^2 B_i = 16πG a^2 (∇ × ∇ × T^0)_i at leading order). Hence, a simple dimensional analysis shows that the latter has to approach the critical density as b approaches the value H_*^2 L^2, up to some factors of order unity. The chosen range of values for b allows us to sample those cases that we expect to be well within the linear regime, as well as those in which the initial perturbation has sufficient time to develop into the nonlinear regime during the time the light pulse takes to reach the observer.
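The simulation grid described above is small enough to enumerate directly; a few lines of Python (illustrative, not taken from any of the codes) reproduce the 30 (H_*L, b) pairs and verify that all of them respect the weak-energy bound b < 2H_*^2 L^2/π:

```python
import math

# Ten horizon ratios H_*L = 2^n for n in {-7, ..., 2}, each with three
# amplitudes b = {0.5, 0.05, 0.005} x (H_*L)^2.
runs = [(2.0**n, c * (2.0**n)**2)
        for n in range(-7, 3)
        for c in (0.5, 0.05, 0.005)]

print(len(runs))                                       # 30 initial-condition sets
print(all(b < 2 * HL**2 / math.pi for HL, b in runs))  # True: bound satisfied
```

The bound holds by construction since the largest amplitude ratio, 0.5, is below 2/π ≈ 0.637.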
We trace the path of the light pulses emitted at events A and B on the initial surface to the observer at the origin, where we then calculate the observable (17). To this end, we integrate the geodesic equation in Wolfram Mathematica (wolfram.com/mathematica; see Appendix D) for the metric obtained with each code, until each light pulse reaches the observer's position. The corresponding boundary-value problem is solved using a shooting method (employing a built-in root-finding algorithm). In order to take into account the delay between the two observed signals, we parallel-transport one of the photon four-vectors along the observer's world line before taking the dot product. Our ray-tracing method was validated by comparing three independent implementations, including an extension to the mescaline code [54].
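The shooting method can be illustrated in a few lines: integrate the ray equation from the source with a trial launch parameter, measure by how much the ray misses the observer, and drive that miss to zero with a root finder. The sketch below (our actual computation uses Mathematica's built-in root finding, not this code) replaces the geodesic equation with a toy ray equation y'' = -0.1, chosen so the answer is known in closed form:

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def miss(slope, target=0.3):
    """Integrate the toy ray ODE y'' = -0.1 from x = 0 to 1 and return
    how far the ray lands from the target height."""
    sol = solve_ivp(lambda x, u: [u[1], -0.1], (0.0, 1.0),
                    [0.0, slope], rtol=1e-10, atol=1e-12)
    return sol.y[0][-1] - target          # y(1) - y_target

# Find the launch slope that hits the target: analytically y(1) = s - 0.05,
# so the root is s = 0.35.
slope = brentq(miss, -1.0, 1.0)
print(slope)
```

In the real problem the "launch parameter" is the emission direction of the null geodesic and the "miss" is the spatial distance from the observer's world line at arrival.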
We confirm the numerical convergence of the observable (17) for each code by simulating multiple spatial resolutions; see Appendix B for details. We also compare the constraint violation in the numerical relativity codes CosmoGRaPH and the ET in Appendix A.
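A standard way to quantify convergence from runs at several resolutions is to fit the error model f(h) = f_* + C h^p with h = 1/N. For three resolutions that are not in a 2:1 ratio, such as N = 64, 80, 96, the observed order p solves (f_1 - f_2)/(f_2 - f_3) = (h_1^p - h_2^p)/(h_2^p - h_3^p). A sketch on synthetic data (the generic textbook estimator, not necessarily the exact procedure of Appendix B):

```python
import numpy as np
from scipy.optimize import brentq

N = np.array([64.0, 80.0, 96.0])
h = 1.0 / N

# Synthetic measurements with a known second-order error term:
f_exact, C = 0.75, 3.0
f = f_exact + C * h**2

def residual(p):
    """Difference between the measured and modeled ratios of successive errors."""
    lhs = (f[0] - f[1]) / (f[1] - f[2])
    rhs = (h[0]**p - h[1]**p) / (h[1]**p - h[2]**p)
    return lhs - rhs

p = brentq(residual, 0.5, 6.0)
print(p)    # recovers the injected convergence order, p = 2
```

With the order in hand, Richardson extrapolation of the same three values provides both an error bar and an estimate of the continuum result.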

A. gevolution
The public cosmological N-body code gevolution is based on a weak-field expansion of Einstein's equations in Poisson gauge, which facilitates a vastly more efficient (yet ultimately approximate) computation in most cosmological settings [47,65]. This is mainly due to the fact that non-relativistic particle motion allows for a superior convergence rate of the time integrator, making gevolution an extremely useful tool in relativistic cosmology. A crucial feature in this respect is the spin decomposition of the metric which separates the dynamical spin-2 field from the constraints. The latter evolve on time scales determined by the non-relativistic matter. In our present setup, however, this advantage does not always play out, as we are exploring a parameter space that allows for highly relativistic particles.
The vector mode of the metric, B_i, is kept only to linear order in gevolution, but its source term, which is the spin-1 part of the momentum density, is treated nonperturbatively. To the extent that the solution maintains δ^ij B_i B_j ≪ 1, which is true in cosmology and even for most of the parameter space studied here, this yields a self-consistent framework. Of course, the numerical solution will only be accurate up to corrections quadratic in B_i. We shall investigate the performance of this approximation in the regime of large B_i in comparison with codes using numerical relativity.
Our main results shown in the next section are based on simulations with periodic domains of N = 96 grid points in each direction to sample the spacetime, and we also use the same number of particles. In order to study numerical convergence we also performed all simulations with N = 64, and some with N = 48.

B. gramses
The recently introduced N-body code gramses [48,85] implements a constrained formulation of GR [86-88], in which the Einstein equations are cast into a system composed of three hyperbolic equations for the evolution of tensor degrees of freedom, and a set of ten elliptic-type equations that explicitly include the constraints. Its current version neglects the hyperbolic part by using the conformal flatness approximation and solves the elliptic system using multigrid relaxation, which allows it to compute the two scalar and two vector modes of the metric. The code inherits the adaptive mesh refinement and massive parallelisation infrastructure from its parent code, ramses [89].
In gramses, the spatial coordinates are defined by the minimal distortion gauge (or generalized Dirac gauge) condition [88,90], ∂_i h^ij = 0, where h_ij corresponds to the deviation from a conformally flat spatial metric. Notice that this is generally different from the Poisson-gauge metric (11), in which h_ij is both transverse and traceless. Furthermore, β^i might carry both scalar (longitudinal) and vector degrees of freedom, whereas in Poisson gauge only the latter is allowed. However, the initial data (16) is actually fully compatible with the conformal flatness condition h_ij^* = 0, so that the spatial coordinates on the initial hypersurface are equivalent in these two gauges, as are the initial shifts. Moreover, in this code the time coordinate is fixed by a constant mean curvature (CMC) slicing condition, which (5) satisfies, and then (12) also applies.
gramses obtains the metric and extrinsic curvature components by solving elliptic-type equations on a mesh, which means the mesh resolution places a limit on the accuracy of its solutions through the discretisation error. In the results shown below we have used a mesh with 256^3 cells, while we have found that using 128^3 and 64^3 cells leads to larger inaccuracies even in the linear regime, where higher-order terms neglected by the conformal flatness approximation are subdominant. The same discretisation error occurs for all equations being solved, and so it can affect particle trajectories and thereby accumulate over time. It is therefore important to choose a sufficiently fine grid to suppress this error. Note that, for finite differencing at a fixed order, the discretisation error is determined by the number of cells per side rather than the physical size of a cell.

C. Einstein Toolkit
The Einstein Toolkit (ET; einsteintoolkit.org) is an open-source numerical relativity code built on the Cactus infrastructure [41,91]. Comprising about 100 individual modules, the ET contains codes to evolve the vacuum Einstein equations using either the Baumgarte-Shapiro-Shibata-Nakamura (BSSN) [92,93] or the conformal and covariant Z4 [94] formalism. In addition, it contains codes for relativistic (magneto-)hydrodynamics [95,96], employing a fluid approximation for the matter distribution, and the Carpet adaptive mesh refinement and MPI driver [97]. Many in-built initial-condition setups are also available, along with constraint violation and analysis modules, and simulation management.
The ET was first adopted for cosmological simulations in [43,44] (see also [64]), and has since been shown to be a viable code for fully relativistic simulations of largescale structure formation. Further studies have included structure formation growth rates [43], primordial gravitational waves [98], global backreaction and curvature [54], and the effect of small-scale inhomogeneities on the local expansion rate [51].
While the ET is capable of evolving spacetimes in an arbitrary gauge, in this work the initial conditions are evolved using a harmonic lapse condition and zero shift. Because this is a fully covariant calculation, the final observable computed will be independent of the gauge used.
Here we use the ET to simulate three periodic, cubic domains with resolutions N^3, where N = 64, 80, and 96, for each set of initial conditions. Having multiple resolutions allows us to quantify numerical errors for each simulation, details of which are given in Appendix B.

D. CosmoGRaPH
Similar to the ET, CosmoGRaPH [63,99] is an open-source numerical relativity code employing the BSSN formulation of Einstein's equations in order to evolve the metric. It incorporates SAMRAI [100] to provide full adaptive mesh refinement and MPI capabilities. CosmoGRaPH was developed to explore general-relativistic effects in cosmological spacetimes, and to probe the applicability of novel numerical methods to cosmological problems. The framework can evolve matter sources including N-body systems, perfect fluids, and scalar fields, and can further perform ray tracing through these spacetimes as they dynamically evolve, in either a forward or time-reversed setting, in order to compute various cosmological observables.
CosmoGRaPH has been demonstrated capable of obtaining solutions with sufficient accuracy to robustly resolve relativistic corrections, down to the level of numerical precision. It has been used in a cosmological context to explore spacetimes in both weak-field [42,53] and strong-gravity [71] limits, and to explore observable properties of these spacetimes [55]. Similar to the ET, CosmoGRaPH is capable of utilizing an arbitrary gauge through the choice of lapse and shift; however, for this work a harmonic slicing condition and zero shift are used. The periodic domain is simulated with resolution N_x = N_z = 1 in the x- and z-directions, and N_y = 64, 96, and 128 for each set of initial conditions, with N_p = N_y^2/8 particles. The particle number is chosen to scale this way in order to obtain convergence of the physical field configurations and constraint violation [53].

V. RESULTS

In gevolution, the weak-field approximation includes all terms in the expansion up to O(b), and so for large perturbations (and for H_*L ≳ 1) we do expect to see a difference with respect to the ET and CosmoGRaPH. A similar statement can be made about gramses, which, due to the conformal flatness approximation, neglects tensor modes that might be excited during the evolution of the system at O(b^2) and beyond.
The top panel of Figure 2 shows results for the smallest-amplitude perturbation, i.e. b = 0.005 × H_*^2 L^2. For this case we see agreement with the linear solution to within 0.03% for all codes and for all values of H_*L. To within the numerical accuracy of the simulations performed, the only resolvable effect is a 0.003% deviation from the linear solution for the ET with H_*L = 4. Especially at large values of H_*L the performance of the linear prediction may seem surprising, but it can be understood from the following considerations. First, working in Poisson gauge where the vector perturbation is in the shift, we can see that the linear expression for the shift is in fact exact on the initial hypersurface, cf. eqs. (16) and (19).
The shift also appears only linearly in the geodesic equation (see Appendix D), and the terms due to the lapse perturbation, which is formally O(b^2), would alone not lead to a deflection in the considered setup due to symmetry. Hence their effect appears only at one order higher, namely O(b^3). There are no new O(b^2) corrections as one moves away from the initial hypersurface because the matter dynamics are mainly due to inertia.
Considering the perturbations b = 0.05 × H_*^2 L^2, shown in the middle panel of Figure 2, we see measurable deviation from the linear solution for values of H_*L ≳ 1. These deviations are below ∼ 1% in all codes, and the numerical relativity codes ET and CosmoGRaPH agree within their quoted numerical accuracy. The results from gevolution and gramses show a qualitatively similar deviation from the linear prediction, and are well within expected truncation errors from higher-order terms of O(b^2) (i.e. quadratic in the shift or higher, see Section IV A) in the weak-field expansion. However, in the case of gramses the roughly constant deviation from the linear solution for H_*L ≲ 1 is a result of the mesh discretization error (see Section IV B).
In the most extreme case with b = 0.5 × H_*^2 L^2, close to the limit set by the weak energy condition and shown in the bottom panel of Figure 2, we see the strongest deviations from the linear prediction. Even so, the linear prediction still holds to a good approximation for H_*L ≤ 1, as confirmed by all four codes. In this regime, gevolution shows a persistent ∼ 2% error, which is consistent with dropping terms of order B_i ∂^2 B_i from Einstein's equations. The difference between gevolution and the other three codes for this value of b, for all values of H_*L, is therefore still well within the expected O(b^2) truncation error from the weak-field expansion. We have clipped the points for H_*L = 4 in the bottom panel of Figure 2 to ensure deviations at smaller H_*L can be resolved. In this case, we find a deviation from the linear solution for gevolution of -0.619, for gramses of -0.68, and for CosmoGRaPH of -0.8183 ± 0.0004. We could not find a null geodesic that connects the source and observer in the ET simulation in this case, possibly indicating the presence of a horizon. We suggest this is a result of the fluid approximation used in the ET, since all other codes use a particle description and do not have this issue.
To highlight the difference between codes in the extreme regime shown in the bottom panel of Figure 2, we also plot the observable (17) from each code relative to that calculated in CosmoGRaPH in Figure 3. Here we see gevolution and gramses agree with CosmoGRaPH to within ∼ 1% and 0.1% for small values of H_*L, respectively. This difference grows to order unity for the largest box size, H_*L = 4. For H_*L < 1 the ET and CosmoGRaPH agree within their numerical errors; however, at H_*L = 1 we see a small deviation, which increases to ∼ 15% for H_*L = 2. In this case we expect stream crossing has occurred, a regime in which we no longer trust the fluid approximation implemented in the ET, and the N-body (Vlasov) description used in CosmoGRaPH is more physically relevant. However, the fluid approximation is more numerically accurate than N-body per computational cost, as can be seen clearly in the top panel of Figure 2. This is due to the additional numerical error introduced by smoothing over the particle distribution at each time step in order to source the metric evolution, see e.g. [53]. Regardless, the inability of the fluid approximation to capture stream crossings implies that once these occur, the results from the ET should not necessarily be considered representative of collisionless matter.

VI. CONCLUSIONS
In this work we have compared the computational approaches of four independent, relativistic, cosmological simulation codes. In the process we have also explored the validity of the linear approximation for the frame-dragging effect.
We summarize our main findings as follows:
• For perturbations with amplitude b = 0.005 × H_*^2 L^2, we find a match to linear theory within 0.03% for all codes and for all box sizes studied here.
• For larger perturbations, with amplitudes b ∈ {0.05, 0.5} × H_*^2 L^2, we find agreement with linear theory within 0.1% and 1%, respectively, for all sub-horizon box sizes H_* L ≲ 1. However, gevolution has a persistent deviation of almost 2% for the highest perturbation amplitude, which is nonetheless consistent with its approximation scheme.
• In CosmoGRaPH, the deviation from linear theory is ∼80% for H_* L = 4 and b = 0.5 × H_*^2 L^2. We expect CosmoGRaPH to provide the most trustworthy results in this extreme case; a regime in which O(b^2) effects are relevant and stream crossing occurs.
• The weak-field approximation used in gevolution agrees well with the numerical relativity codes in most cases studied here. Any deviations seen are well within the expected O(b^2) for all choices of parameters.
• In gramses, deviations from the linear solution in cases well into the linear regime are dominated by the mesh discretization error, while for larger perturbations they are due to the conformal flatness approximation, and are within the expected O(b^2) truncation error.
• We find agreement within numerical uncertainties between the numerical relativity codes, the ET and CosmoGRaPH, in most cases. Exceptions are those in which stream crossing occurs, at which point the fluid description used in the ET no longer resolves the full phase-space dynamics, and CosmoGRaPH provides a more physically relevant result.
The test performed here is unique in that it is applicable only to codes that include general-relativistic effects. Each code differs in its approximation of GR, its numerical method, or both. This study therefore provides an important test of these approximations and their limits in describing nonlinear dynamics.

ACKNOWLEDGMENTS
We thank the Sexten Center for Astrophysics for hosting a workshop on GR effects in cosmological large-scale structure in July 2018 that helped this project to get off the ground. JA acknowledges funding by STFC.

CONSTRAINT VIOLATION
Here we analyse the constraint violation for the numerical relativity codes, ET and CosmoGRaPH, for two select simulation cases. These codes are based upon hyperbolic formulations of Einstein's equations, which evolve the dynamical equations but do not explicitly enforce the constraint equations. The constraints (7) and (8) are instead used as a diagnostic tool to check whether solutions have drifted too far from the physical constraint surface, i.e. to determine how well energy and momentum have been conserved in a general-relativistic sense.
Although this check is a good diagnostic, it does not necessarily imply validity of solutions, as numerical error can violate the dynamical evolution equations while still preserving the constraints. For a timestep sufficiently small that the error in the dynamical evolution is subdominant, the remaining error will be dominated by truncation error when evolving vacuum or fluid solutions, or by particle noise in the N-body case. In both cases, we can compute the rate at which the constraint violation in simulations converges to zero, and compare this to theoretical expectations.
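Such a convergence-rate check can be sketched in a few lines. This is a minimal illustration, not code from any of the simulations compared here; the function name is ours, and we assume the error scales as C ∆x^p so that the effective order p follows from error norms measured at two resolutions:

```python
import math

def convergence_order(err_coarse, err_fine, dx_coarse, dx_fine):
    """Effective convergence order p, assuming the error scales as C * dx**p.

    err_coarse, err_fine: error norms (e.g. constraint-violation L2 norms)
    measured at grid spacings dx_coarse > dx_fine.
    """
    return math.log(err_coarse / err_fine) / math.log(dx_coarse / dx_fine)

# Toy check: errors produced by an exactly 4th-order scheme.
p = convergence_order(err_coarse=1.6e-5, err_fine=1e-6, dx_coarse=2.0, dx_fine=1.0)
```

Comparing the measured p against the formal order of the scheme then tells us whether the dominant error source behaves as expected.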
In the case of stream crossings, or caustics, it has been found that the constraint violation in the vicinity of a caustic will not converge in general [53,101]. This is due to the presence of a mild singularity in the vicinity of caustics, where curvature scalars can diverge, yet the metric remains in a weak-field limit and the spacetime is geodesically complete. In the N-body case, we therefore expect the constraint violation to be well behaved before stream crossings, and poorly behaved after. We expect the constraint violation to be well behaved at all times in the corresponding fluid limit (assuming all relevant scales are resolved, i.e. that all gradients in the fluid remain consistent between resolutions).
In Figure 4 we show the constraint violation as a function of the approximate scale factor of the simulation, for the ET (blue curves) and CosmoGRaPH (green curves). The top panel shows the simulation with H_* L = 0.0625, and the bottom panel shows H_* L = 2. Both simulations shown here have perturbation amplitude b = 0.5 × H_*^2 L^2. Solid curves show the highest resolution, dashed curves the medium resolution, and dotted curves the lowest resolution. These sets of resolutions differ slightly between the ET and CosmoGRaPH; see Sections IV C and IV D, respectively, for details. Specifically, Figure 4 shows the L2 error of the normalised Hamiltonian constraint violation, i.e.,

\sqrt{\frac{1}{N}\sum_i \left(\frac{H_i}{[H]_i}\right)^2},

where H_i is the Hamiltonian constraint violation at grid cell i (for an exact solution we have H_i = 0 everywhere), and the normalisation is

[H] = \sqrt{R^2 + (K^2)^2 + (K_{ij}K^{ij})^2 + (16\pi\rho)^2}, (23)

where ρ = α^2 ρ_0 (u^0)^2 is the mass-energy density on the simulation hypersurfaces (not necessarily in the rest frame of the matter), and [H]_i is (23) evaluated at grid cell i.
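As a concrete illustration, this diagnostic can be computed as below. This is a sketch, not code from either simulation: the cell-wise normalisation [H]_i is taken to be the root-sum-square of the individual Hamiltonian-constraint terms, a common choice in the numerical-relativity literature, and the exact definition used by each code may differ.

```python
import numpy as np

def normalised_hamiltonian_L2(R, K, KijKij, rho):
    """L2 norm of the Hamiltonian constraint violation, normalised cell by cell.

    R:      Ricci scalar of the spatial slice
    K:      trace of the extrinsic curvature
    KijKij: contraction K_ij K^ij
    rho:    mass-energy density on the hypersurface
    """
    H = R + K**2 - KijKij - 16.0 * np.pi * rho  # Hamiltonian constraint (zero for exact data)
    norm = np.sqrt(R**2 + (K**2)**2 + KijKij**2 + (16.0 * np.pi * rho)**2)
    return np.sqrt(np.mean((H / norm)**2))

# Toy check: fields that satisfy the constraint exactly give (near-)zero violation.
rho = np.ones((8, 8, 8))
K = np.zeros_like(rho)
KijKij = 0.5 * np.ones_like(rho)
R = KijKij + 16.0 * np.pi * rho - K**2  # chosen so that H = 0 identically
err = normalised_hamiltonian_L2(R, K, KijKij, rho)
err_violated = normalised_hamiltonian_L2(R + 1.0, K, KijKij, rho)
```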
For the smaller box size, in the top panel of Figure 4, both simulations show convergence of the L2 error with increasing resolution. For the larger box size, H_* L = 2, in the bottom panel of Figure 4, the perturbation has more time to grow within a few light-crossing times of the box. For the N-body approach used in CosmoGRaPH, convergence is found up until a stream crossing occurs, at a scale factor of a ∼ 2. After this time, inexact cancellation of numerically large (and formally infinite) contributions to the Hamiltonian constraint leads to a lack of convergence of the constraint violation. For the ET we see convergence until approximately the same time as CosmoGRaPH in this case, after which the constraint violation is approximately the same at all resolutions. For this particular simulation, gradients are no longer consistent between resolutions, and so we do not expect convergence of the constraints in general. After stream crossing, we do not expect the fluid approach used by the ET to be representative of collisionless matter. Nevertheless, convergence of the metric itself can be found, as well as of observable results, as discussed in the next section.

A. Numerical convergence of observable
Here we check the numerical convergence of the observable presented in Figure 2 as a function of resolution for all codes. We must ensure that our numerical calculations provide results with a sufficient degree of precision to be meaningful, i.e. that they approach a continuum-limit solution. We expect different rates of convergence for each code, due to the different dominant error sources, which depend on the numerical approximations made in each case.
We compute the convergence rate by evaluating the observable at three different resolutions, ∆x_1, ∆x_2, and ∆x_3. For a method of order p, the error will be O(∆x^p), and the convergence rate of the observable angle ϑ is given by

c = \frac{\vartheta_{\Delta x_1} - \vartheta_{\Delta x_2}}{\vartheta_{\Delta x_2} - \vartheta_{\Delta x_3}},

and the theoretical convergence rate is

c_{\rm th} = \frac{\Delta x_1^p - \Delta x_2^p}{\Delta x_2^p - \Delta x_3^p}. (25)

Figure 5 shows the convergence rate relative to the theoretical convergence rate for each code, for the set of simulations in the top panel of Figure 2. For this small perturbation, we expect all results to match the linear solution to a good approximation, and so it is an ideal case in which to test numerical convergence. From Figure 5 we see all codes give close to their expected numerical convergence rate for all values of H_* L. We note a few peculiar convergence values for CosmoGRaPH, gevolution, and the ET, at H_* L = 0.0078125, 0.03125, and 2, respectively. In the case of CosmoGRaPH, we believe this is due to truncation error surpassing the error introduced by particle noise, leading to a different convergence rate than expected, and thus a different effective method order p; this additionally results in a poorly extrapolated data point and error bar, as seen in Figure 2. The remaining simulations appear to show normal numerical convergence of, e.g., the constraint violation in the case of the ET, and so we believe the simulations themselves are providing reliable results. We therefore suggest that the non-convergence of the observable is related to the root-finding algorithm implemented in the ray-tracing code used, but we do not investigate this further since all other points show good convergence.
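The three-resolution test can be sketched as follows. The helper names are ours, and the synthetic data simply assume a single, pure p-th-order error term:

```python
import numpy as np

def measured_rate(th1, th2, th3):
    """Convergence rate of an observable evaluated at resolutions dx1 > dx2 > dx3."""
    return (th1 - th2) / (th2 - th3)

def theoretical_rate(dx1, dx2, dx3, p):
    """Expected rate for a scheme of order p."""
    return (dx1**p - dx2**p) / (dx2**p - dx3**p)

# Synthetic observable with a pure p = 2 error term: theta(dx) = theta* + A * dx**p.
theta_star, A, p = 0.1, 0.03, 2
dx = np.array([4.0, 2.0, 1.0])
th = theta_star + A * dx**p
ratio = measured_rate(*th) / theoretical_rate(*dx, p)  # ≈ 1 for a clean p-th-order scheme
```

In Figure 5 it is this ratio (measured over theoretical rate) that is plotted, so values near unity indicate convergence at the expected order.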

B. Error calculation
For the ET and CosmoGRaPH, simulations were run at three different spatial resolutions for each value of b and H_* L. This allows us to quantify the numerical error on the observables we compute from these simulations, as well as to estimate values in the N → ∞ limit using a Richardson extrapolation. The values and error bars are calculated by fitting curves consistent with the expected order of convergence of the scheme used, and extrapolating to find the continuum-limit solution. The numerical error bars shown in Figure 2 are computed using differences between numerical values and the extrapolated, continuum-limit solutions in the case of the ET, and using the distribution of extrapolated values in the case of CosmoGRaPH.

FIG. 5. Convergence of the observable (17) calculated in simulations with b = 0.005 × H_*^2 L^2. We show the convergence rate relative to its expected value (25), for the Einstein Toolkit (blue diamonds), CosmoGRaPH (green squares), gramses (orange triangles), and gevolution (red circles).
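A minimal sketch of the Richardson extrapolation just described, assuming a single error term A ∆x^p with known order p (the actual fitting procedure used for Figure 2 may be more elaborate; the function name is ours):

```python
def richardson_extrapolate(th_coarse, th_fine, dx_coarse, dx_fine, p):
    """Two-level Richardson extrapolation to the continuum limit.

    Assumes theta(dx) = theta0 + A * dx**p and solves for theta0 from
    the observable measured at two resolutions dx_coarse > dx_fine.
    """
    r = (dx_coarse / dx_fine) ** p
    return (r * th_fine - th_coarse) / (r - 1.0)

# Synthetic data: second-order (p = 2) errors around a known continuum value.
theta0, A = 0.25, 0.01
theta = lambda dx: theta0 + A * dx**2
theta_extrap = richardson_extrapolate(theta(2.0), theta(1.0), 2.0, 1.0, p=2)
err_estimate = abs(theta(1.0) - theta_extrap)  # difference used as an error-bar proxy
```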

C. INITIAL CONDITIONS WITH BSSN VARIABLES
The BSSN conformal factor and the trace of the extrinsic curvature follow directly from the metric and extrinsic curvature given above. Because the determinant γ = 1, the conformal BSSN metric is simply γ̄_ij = γ*_ij; the conformally related trace-free part of the extrinsic curvature, Ā_ij, and the conformal, contracted Christoffel symbols, Γ̄^i, follow in the same way. Lastly, the 3+1 (ADM) density is ρ = α^2 ρ_0 (u^0)^2, and the relativistic gamma factor is W = αu^0, whose explicit form involves the combination 16 H_*^2 L^2 − 3 b^2 cos^2(2πy/L).
The geodesic equation parallel transports the velocity vector in its own direction, k^ν ∇_ν k^µ = 0 for a photon 4-vector k^µ, or with k^µ → u^µ for ordinary matter. In order to integrate this numerically, we can cast it into a 3+1 form conducive to numerical integration, and in which the effect of frame-dragging due to a nonzero shift is transparent.
Lastly, the observable, eq. (17), can be rewritten without reference to the basis vectors of an observer at rest, u^µ ∝ (1, 0, 0, 0), provided the two vectors are observed simultaneously. In order to obtain this simple expression we also used the fact that β^i vanishes at our observer's location due to symmetry. For non-simultaneous arrivals, one vector must be parallel transported along the observer's trajectory. For observers at rest with respect to the coordinate system, the parallel-transport equation acquires one term that is new compared to eq. (33), while its second-to-last term becomes sensitive to the observer's velocity. In terms of 3+1 variables, for the metric and observer considered in this work, and for a gauge choice that respects the symmetry of the problem (so that ∂_i α = 0 at y = 0), this expression simplifies to an equation that is integrated purely in time at y = 0. The observable, eq. (34), is then evaluated using the metric at the time the second ray arrives.
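The explicit 3+1 form of the geodesic equation is not fully recoverable from the text above. For reference, a standard way of writing it (a sketch in common notation, not necessarily the exact expression labelled eq. (33) here) is, for a ray with covariant momentum p_µ parametrised by coordinate time t:

```latex
\frac{\mathrm{d}x^i}{\mathrm{d}t} = \gamma^{ij}\,\frac{p_j}{p^t} - \beta^i ,
\qquad
\frac{\mathrm{d}p_i}{\mathrm{d}t} = -\alpha p^t\,\partial_i \alpha
  + p_j\,\partial_i \beta^j
  - \frac{p_j p_k}{2\,p^t}\,\partial_i \gamma^{jk} ,
```

where the −β^i term in the first equation is the frame-dragging contribution of a nonzero shift.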