Towards Cosmography of the Local Universe

Anisotropies in the distance-redshift relation of cosmological sources are expected due to large-scale inhomogeneities in the local Universe. When the observed sources are tracing a large-scale matter flow in a general spacetime geometry, the distance-redshift relation with its anisotropies can be described with a geometrical prediction that generalises the well-known Friedmann-Lemaître-Robertson-Walker result. Furthermore, it turns out that a finite set of multipole coefficients contains the full information about a finite-order truncation of the distance-redshift relation of a given observer. The multipoles of the distance-redshift relation are interesting new cosmological observables that have a direct physical interpretation in terms of kinematical quantities of the underlying matter flow. Using light cones extracted from $N$-body simulations we quantify the anisotropies expected in a $\Lambda$ cold dark matter cosmology by running a Markov chain Monte Carlo analysis on the observed data. In this observational approach the survey selection implements an implicit smoothing scale over which the effective rest frame of matter is fitted. The perceived anisotropy therefore depends significantly on the redshift range and distribution of sources. We find that the multipoles of the expansion rate, as well as the observer's velocity with respect to the large-scale matter flow, can be determined robustly with our approach.


INTRODUCTION
In modern cosmology, data are often analysed within a particular field theory and with assumptions about the energy-momentum content, for instance general relativity with ordinary matter, cold dark matter, radiation, and dark energy. A complementary way to analyse data is through cosmography, where the field theory of gravity is left unspecified. Cosmography thus presents an independent approach to constraining the kinematics and the curvature of the Universe, without relying on a particular dynamical theory (Weinberg 1972; Ellis et al. 1985).
Cosmographic approaches are particularly suitable for low-redshift measurements, but can also be applied in the intermediate- to high-redshift regime (see e.g. Cattoen & Visser 2007; Vitagliano et al. 2010; Capozziello et al. 2020). The low-redshift regime of cosmology is sensitive to the large-scale structures within our cosmic neighbourhood. The anisotropies that are expected in cosmological observables due to cosmic structures are commonly modelled using peculiar velocity corrections as predicted by a Λ cold dark matter (ΛCDM) cosmology. This procedure has the disadvantage that one must trust that the derived corrections indeed give a complete description of the anisotropies in the observations. An alternative approach is to incorporate the anisotropies into the cosmography itself, thereby fitting for the anisotropies instead of making a model prediction for them. Such cosmographic treatments can determine the recent expansion history of the Universe in a model-independent way (Ellis et al. 1985; Clarkson et al. 2012). Moreover, estimates of the impact of local structure on the measurements on various scales might provide insights into cosmological tensions and anomalies seen across a growing number of complementary cosmological data sets (Perivolaropoulos & Skara 2022; Abdalla et al. 2022).
To formulate a cosmography that is valid for describing the anisotropy in our cosmological measurements, we must go beyond the spatially maximally-symmetric Friedmann-Lemaître-Robertson-Walker (FLRW) spacetimes and formulate observables geometrically in the absence of metric symmetries. Such formulations were pioneered by Kristian & Sachs (1966) and later developed by, e.g., MacCallum & Ellis (1970); Clarkson & Umeh (2011); Clarkson et al. (2012); Umeh (2013); Heinesen (2021a,b); Maartens et al. (2023). The application of such frameworks to the analysis of data has so far been very limited (though see Dhawan et al. 2023; Cowell et al. 2023) due to the high quality of data and good sky coverage necessary to measure anisotropic features in the observables robustly. However, as data sets continue to grow, cosmographic analyses accounting for the imprint of large-scale structure in observations are now within reach.
Cosmographic analyses can be done with a variety of astrophysical observables. In this paper, we focus on the angular diameter distance and analyse the anisotropic distance-redshift data for observers within a ΛCDM cosmology. We use the UNITY simulations presented in Coates et al. (2021), which were performed with the general-relativistic N-body code gevolution (Adamek et al. 2016). Using relativistic ray tracing we calculate distances and redshifts over the full skies of synthetic observers within the simulations. We investigate the distance-redshift cosmography from Heinesen (2021a) as inferred by the observers, presenting constraints on the dominant multipoles in generalised cosmological parameters for different redshift intervals (see Heinesen & Macpherson 2022; Cowell et al. 2023, for estimates of the expected dominant multipoles).
Our high-resolution simulations present a realistic scatter of data points around any possible notion of a large-scale matter congruence. Here, a matter congruence is defined as a set of timelike integral curves that provide a threading of the observed patch of spacetime. To make such a notion more precise, we run a simulation where small-scale perturbations are filtered out in the initial conditions, leading to a situation where a smooth matter congruence is uniquely defined. Additionally, in a separate set of simulations we compare our results from gevolution to those found in the large-scale numerical-relativity fluid simulations presented in Macpherson & Heinesen (2021).
In Section 2, we introduce the cosmography of Heinesen (2021a) and recast the Taylor series expansion into an arguably more suitable Padé approximant, as well as discuss fluctuations with respect to this smooth congruence model. In Section 3 we discuss details of the UNITY simulations and the ray-tracing procedure, and in Section 4 we outline our analysis of the synthetic observations. We present our main results in Section 5 and conclude in Section 6.

FORMALISM
In this section, we reformulate the anisotropic cosmography of astrophysical sources relative to an observer in an arbitrary spacetime without exact symmetries. Following Heinesen (2021a), we first consider a situation where overlapping streams of the sources (shell crossing) can be ignored at the scales of interest, such that the astrophysical sources constitute a non-singular congruence of world lines generated by a 4-velocity field $u^\mu$. In this treatment we first assume that the observer is comoving with the congruence of sources, and then generalise the result to arbitrary observer frames. Furthermore, we assume that the photons emitted from the sources and received at the observer are described in the geometric-optics limit, and that photon number is conserved, such that Etherington's reciprocity theorem applies.
Taylor series cosmographic expansion
When Etherington's reciprocity theorem applies, the luminosity distance to the emitting source can be written as $d_L = d_A (1+z)^2$ for a light beam in a general geometry, where $d_A$ is the angular diameter distance (also known as the area distance) to the emitting source and $z$ is the redshift.
Consider the case where $z$ is a parameter along each of the photon null rays, such that the function $z \mapsto d_L(z, e)$ is well defined for each direction $e$ of incoming light as seen on the observer's sky. If this function is analytic, its Taylor series expansion approximates the luminosity distance in some cosmic neighbourhood of the observer:

$$ d_L(z, e) = d_L^{(1)}(e)\, z + d_L^{(2)}(e)\, z^2 + d_L^{(3)}(e)\, z^3 + \mathcal{O}(z^4) . \qquad (1) $$

The coefficients of the series expansion can be represented as

$$ d_L^{(1)} = \frac{1}{H_o} , \qquad d_L^{(2)} = \frac{1 - Q_o}{2 H_o} , \qquad d_L^{(3)} = -\frac{1 - Q_o - 3 Q_o^2 + J_o + R_o}{6 H_o} , \qquad (2) $$

where the effective cosmological parameters

$$ H \equiv -\frac{1}{E^2} \frac{dE}{d\lambda} , \qquad Q \equiv -1 - \frac{1}{E H^2} \frac{dH}{d\lambda} , \qquad R \equiv 1 + Q - \frac{1}{2 E^2 H^2} k^\mu k^\nu R_{\mu\nu} , \qquad J \equiv \frac{1}{E^2 H^3} \frac{d^2 H}{d\lambda^2} - 4 Q - 3 \qquad (3) $$

generalise the FLRW Hubble, deceleration, curvature, and jerk parameters, respectively, in the distance-redshift cosmography, and a subscript $o$ denotes evaluation at the observer.
Here, $k^\mu$ is the photon 4-momentum, $\lambda$ is an affine parameter of the geodesic defined by $k^\mu \nabla_\mu \lambda = 1$, the operator $d/d\lambda \equiv k^\mu \nabla_\mu$ is the directional derivative along the incoming null ray, $E = -u_\mu k^\mu$ is the photon energy as measured by the sources/observer, and $R_{\mu\nu}$ is the Ricci curvature of the spacetime. The effective cosmological Hubble, deceleration, jerk, and curvature parameters $\{H, Q, J, R\}$ contain information about local kinematics and curvature effects that impact the luminosity distance-redshift relation, and might for instance include anisotropic expansion of space, bulk flow of sources, and lensing of photons.
In this framework, the anisotropic Hubble parameter can be written as the exact multipole decomposition

$$ H(e) = \frac{1}{3}\theta - e^\mu a_\mu + e^\mu e^\nu \sigma_{\mu\nu} , \qquad (4) $$

where $\theta$ is the volume expansion rate in the source frame, $\sigma_{\mu\nu}$ is the anisotropic deformation rate of space as traced by matter (shear), $a^\mu \equiv u^\nu \nabla_\nu u^\mu$ is the 4-acceleration of the source frame, and $e^\mu$ denotes the spatial unit four-vector in direction $e$ in that frame. The effective deceleration parameter can similarly be written in terms of the exact multipole decomposition

$$ Q(e) = \bar{q} + e^\mu\, {}^{1}q_{\mu} + e^\mu e^\nu\, {}^{2}q_{\mu\nu} + e^\mu e^\nu e^\kappa\, {}^{3}q_{\mu\nu\kappa} + e^\mu e^\nu e^\kappa e^\lambda\, {}^{4}q_{\mu\nu\kappa\lambda} , \qquad (5) $$

with coefficients (6) that we do not reproduce here (see Heinesen 2021a, for the explicit expressions); in those expressions, indices between angled brackets ⟨⟩ denote the traceless and symmetric part of the tensor in the involved indices, and $D_\mu$ is the spatial covariant derivative in the frame of the congruence. The operator $d/d\tau \equiv u^\mu \nabla_\mu$ is the derivative along the world lines of the source congruence, and $\omega_{\mu\nu}$ is the vorticity tensor describing the rotation of the congruence. The higher-order effective cosmological parameters, $J$ and $R$, can similarly be written in terms of multipole expansions (see Heinesen 2021a, for their detailed expressions).
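The structure of this decomposition can be illustrated numerically. The sketch below uses hypothetical kinematic values for a geodesic dust congruence, in units where the monopole $\theta/3$ equals one; it evaluates the anisotropic Hubble parameter and recovers the monopole by averaging over antipodal coordinate directions (the dipole and the traceless quadrupole cancel exactly in such an average).

```python
import numpy as np

def hubble_aniso(e, theta, sigma, a):
    """Anisotropic effective Hubble parameter of a matter congruence:
    H(e) = theta/3 - e.a + e.sigma.e (monopole, dipole, quadrupole)."""
    return theta / 3.0 - e @ a + e @ sigma @ e

# hypothetical kinematic quantities, in units where theta/3 = 1
theta = 3.0                         # volume expansion rate
a = np.zeros(3)                     # geodesic (dust) congruence
sigma = np.array([[ 0.02,  0.01,  0.00],
                  [ 0.01, -0.01,  0.00],
                  [ 0.00,  0.00, -0.01]])  # symmetric, traceless shear

# averaging over the six coordinate axes cancels the dipole and the
# quadrupole (sigma is traceless), leaving only the monopole theta/3
dirs = [s * v for s in (+1.0, -1.0) for v in np.eye(3)]
monopole = np.mean([hubble_aniso(e, theta, sigma, a) for e in dirs])
print(monopole)  # ~ 1.0 (= theta/3, up to roundoff)
```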
The radius of convergence of Eq. (1) and the magnitude of the remainder for the truncation of the cosmography at third order depend on the particularities of the spacetime and the astrophysical survey geometry, and must in principle be assessed for each physical scenario of interest (see, e.g., Macpherson & Heinesen 2021, for estimates of the remainder terms within some numerical examples).

Padé approximant cosmography
We may take Eq. (1) as a first ansatz for an approximation in the cosmic neighbourhood of the observer. However, in the real Universe, the congruence described by our timelike vector field $u^\mu$ can be understood as capturing the coarse-grained dynamics at large scales. Individual astrophysical sources would then have peculiar motion with respect to this large-scale average, which would manifest as a scatter in the Hubble diagram. The observed redshift therefore has particularly large random fluctuations, which renders it discontinuous and non-monotonic along the lines of sight in the late-time, nonlinear Universe. On the other hand, a continuous and monotonic observable, at least at low cosmological distances, is the angular diameter distance. When considered as a function of the affine parameter along the line of sight, $d_A$ is mainly perturbed by weak gravitational lensing and by the relativistic aberration due to the peculiar motion of the observer. These perturbations are very smooth as long as no caustics are encountered on the light cone. In an attempt to arrive at a framework which is useful for observational constraints in the late-time Universe, we therefore reformulate the distance-redshift relation using $d_A$ as the parameter in which the series expansion is performed. The inverse series of Eq. (1), expressed in terms of $d_A$, reads

$$ z(e, d_A) = H_o(e)\, d_A + \frac{3 + Q_o(e)}{2}\, H_o^2(e)\, d_A^2 + \frac{16 + 11 Q_o(e) + J_o(e) + R_o(e)}{6}\, H_o^3(e)\, d_A^3 + \mathcal{O}(d_A^4) . \qquad (7) $$

As suggested by Heinesen (2021a), it is convenient to define the following:

$$ \hat{Q}(e) \equiv \left[ 3 + Q(e) \right] H^2(e) , \qquad \hat{J}(e) \equiv \left[ J(e) + R(e) - 1 \right] H^3(e) , \qquad (8) $$

which have a naturally-truncating, finite multipole decomposition within the cosmographic framework. Each multipole of these parameters contains physical information about the kinematical properties of the congruence $u^\mu$. For the case of $\hat{Q}_o(e)$, this can be seen in Eq. (6) (see also the discussion in Umeh 2013), while for $\hat{J}_o(e)$ this is given in Appendix B of Heinesen (2021a). With the new quantities in Eqs. (8), Eq. (7) simplifies to

$$ z(e, d_A) = H_o(e)\, d_A + \frac{1}{2} \hat{Q}_o(e)\, d_A^2 + \frac{1}{6} \left[ \hat{J}_o(e) + 11 H_o(e) \hat{Q}_o(e) - 16 H_o^3(e) \right] d_A^3 + \mathcal{O}(d_A^4) . \qquad (9) $$

The third-order Taylor series expansions discussed in this section are expected to provide good approximations to the cosmographic model at very small distances. However, at larger distances the truncation error can increase very rapidly. An alternative to the truncated series is the Padé approximant, which can often provide improved behaviour over a finite distance interval. Padé approximants have already been proposed and implemented in place of Taylor series expansions for more accurate constraints from distance-redshift data (e.g. Nesseris & García-Bellido 2013; Sapone et al. 2014; Demianski et al. 2017), and have been shown to have radii of convergence beyond z = 1 (Gruber & Luongo 2014), which is the strict upper limit for the radius of convergence of Taylor series expansions.
We propose a Padé approximant of order [3/2], Eq. (10), chosen to have the same Taylor series expansion as Eq. (9) up to $\mathcal{O}(d_A^4)$. Since we only fix the first three terms in the Taylor series, other Padé approximants of order [3/2] are possible, and our choice is mainly guided by the simplicity of the expression in terms of our preferred variables. For the FLRW background solution, the truncated Taylor series (9) has an error of ≈ 0.8% with respect to the exact ΛCDM angular diameter distance at redshift z ≈ 0.15. Our expression (10), using a Padé approximant with the same number of parameters, reduces this error to better than 0.2% (see Figure 1).
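The size of these truncation errors can be checked numerically. The sketch below assumes a flat ΛCDM background with $\Omega_m = 0.315$ (an illustrative value) in units $c = H_0 = 1$, and compares the third-order Taylor series against the exact angular diameter distance at z = 0.15; on the Padé side it uses a simple [2/1] approximant built from the same three Taylor coefficients, as a stand-in for the [3/2] form of Eq. (10).

```python
import math

Om = 0.315                            # assumed flat LCDM matter density
E = lambda z: math.sqrt(Om*(1+z)**3 + 1 - Om)

def d_A(z, n=1000):
    """Exact flat-FLRW angular diameter distance in units c/H0 = 1,
    via composite Simpson integration of the comoving distance."""
    h = z / n
    s = 1/E(0) + 1/E(z)
    s += sum((4 if k % 2 else 2) / E(k*h) for k in range(1, n))
    return (h/3) * s / (1 + z)

# FLRW limits of the cosmographic parameters (units H0 = 1); the hatted
# quantity follows the definitions in the text (Jhat = 0 for flat LCDM)
q0 = 1.5*Om - 1                       # deceleration parameter
Qhat = 3 + q0
t2, t3 = Qhat/2, (11*Qhat - 16)/6     # Taylor coefficients of z(d_A)

z_true = 0.15
d = d_A(z_true)
z_taylor = d + t2*d**2 + t3*d**3      # third-order Taylor prediction

# a simple [2/1] Pade approximant matching the same three Taylor
# coefficients (an illustrative choice, not the [3/2] form of Eq. (10))
b1 = -t3/t2
a2 = t2 + b1
z_pade = (d + a2*d**2) / (1 + b1*d)

print(abs(z_taylor/z_true - 1))       # ~0.7% truncation error
print(abs(z_pade/z_true - 1))         # ~0.1%: clearly improved
```

Even this minimal rational form already reduces the error by a factor of several, which is the qualitative behaviour exploited by Eq. (10).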

Fluctuations about the congruence
Eq. (10) can be understood as a model for the 'cosmographic' redshift of the underlying coarse-grained matter congruence. The observed redshift of astrophysical sources is more realistically given by the model plus some stochastic component $\delta z$, representing the peculiar velocities of those sources with respect to the inhomogeneous congruence, i.e.

$$ 1 + z_{\rm obs} = \left[ 1 + z(e, d_A) \right] \left( 1 + \delta z \right) . \qquad (11) $$

This expression, combined with Eq. (10), generalises the common interpretation of observed redshifts in an FLRW cosmology to the inhomogeneous case.
Here we are still assuming that the observation is made in the rest frame of the congruence. In reality, the observer can have some velocity with respect to the large-scale congruence which may even need to be determined from the same data. It is therefore more useful to describe the observation in an arbitrary frame and allow for three additional parameters that characterise the boost between the frames. These three parameters are included in our joint parameter fit, and thus no knowledge about the velocity of the observer is assumed.
We denote the peculiar three-velocity vector of the observer with respect to the congruence $u^\mu$ as $\beta$, with length $\beta = v/c$. Special relativity implies that the observed redshift in the boosted frame, $z'_{\rm obs}$, is given by

$$ 1 + z'_{\rm obs} = \gamma \left( 1 - e' \cdot \beta \right) \left( 1 + z_{\rm obs} \right) , \qquad \gamma \equiv \frac{1}{\sqrt{1 - \beta^2}} , \qquad (12) $$

where $e'$ is the observed position in the boosted frame. The latter is different from $e$ in the rest frame of the congruence due to relativistic aberration. They are related by

$$ e' = \frac{\gamma^{-1} e + \left( 1 - \gamma^{-1} \right) (e \cdot \hat{\beta}) \hat{\beta} + \beta}{1 + e \cdot \beta} , \qquad \hat{\beta} \equiv \beta / \beta . \qquad (13) $$

The aberration also changes the angular distance, which follows from $d_A'^2 \, d\Omega' = d_A^2 \, d\Omega$, where $d\Omega$ is the solid angle element. We therefore find the well-known result that

$$ d_A' = \frac{d_A}{\gamma \left( 1 - e' \cdot \beta \right)} . \qquad (14) $$

Putting everything together we have a model for the observed redshift in the boosted frame. If the velocity is small, i.e. $v \ll c$, we find

$$ z'_{\rm obs} \approx z(e, d_A) + \left[ 1 + z(e, d_A) \right] \left( \delta z - e' \cdot \beta - \delta z \, e' \cdot \beta \right) . \qquad (15) $$

The aberration of the original anisotropy pattern is a second-order effect since we may write

$$ z(e, d_A) = z(e', d_A) + (e - e') \cdot \nabla_e z(e, d_A) \big|_{e = e'} + \dots , \qquad (16) $$

with $e - e' \approx (e' \cdot \beta) e' - \beta$. The second- and higher-order corrections in the above equation are therefore strongly suppressed for the low multipoles of the distance-redshift relation we are interested in, allowing us to make the further approximation that $z(e, d_A) \approx z(e', d_A)$. If we also drop the term proportional to $\delta z \, e' \cdot \beta$ from Eq. (15), which is quadratic in velocities, we finally arrive at

$$ z'_{\rm obs} = z(e', d_A) + \left[ 1 + z(e', d_A) \right] \left( \delta z - e' \cdot \beta \right) , \qquad (17) $$

where $d_A$ is related to the observed angular distance $d_A'$ through Eq. (14). This is the expression we use to constrain the anisotropies in our cosmography model (10) using the observed redshifts, angular diameter distances, and sky positions measured by the observers in the simulation. We describe the process of constraining the parameters of the cosmography in Section 4.2.
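The boost transformation described above can be prototyped directly. The sketch below (hypothetical numbers) applies the Doppler shift, the aberration of the line of sight, and the angular-distance change to a triple (z_obs, e, d_A):

```python
import numpy as np

def boost_observables(z, e, dA, beta):
    """Transform (z_obs, e, d_A) from the rest frame of the congruence
    to the frame of an observer moving with 3-velocity beta (in units
    of c): Doppler shift, aberration, and solid-angle change."""
    b2 = beta @ beta
    gamma = 1.0 / np.sqrt(1.0 - b2)
    e_par = (e @ beta) / b2 * beta if b2 > 0 else 0.0 * beta
    e_perp = e - e_par
    e_p = (e_par + beta + e_perp / gamma) / (1.0 + e @ beta)  # aberration
    doppler = gamma * (1.0 - e_p @ beta)   # (1+z') = doppler * (1+z)
    z_p = doppler * (1.0 + z) - 1.0
    dA_p = dA / doppler                    # solid-angle change
    return z_p, e_p, dA_p

# illustrative numbers: a source at z_obs = 0.05 seen along e, with the
# observer moving at ~300 km/s (1e-3 c) toward the source
e = np.array([1.0, 0.0, 0.0])
z_p, e_p, dA_p = boost_observables(0.05, e, 0.2, np.array([1e-3, 0.0, 0.0]))
print(z_p < 0.05, dA_p > 0.2)  # -> True True (blueshift, appears farther)
```

A quick consistency check is that the combination $d_A (1 + z)$ is left unchanged by the boost, as follows from combining the Doppler and solid-angle transformations above.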

SIMULATIONS
In order to assess how well the parameters encoding the anisotropy of the distance-redshift relation can be measured when assuming the cosmographic model presented in the previous section, we use synthetic source catalogues that we generate from simulations.In Section 3.1, we describe the existing UNITY simulation data we use for our main analysis.In Section 3.2 we introduce new, low-resolution simulations, which represent 'smooth' analogues of the UNITY simulations from our main analysis.

UNITY simulation suite
We use data from existing simulations described in Coates et al. (2021): the UNITY simulation suite consists of 34 independent realisations of a flat ΛCDM Universe with cosmological parameters h = 0.67, Ω_b = 0.049, Ω_c = 0.27, T_CMB = 2.7255 K and a minimal-mass normal hierarchy of neutrinos with m_ν = 0.06 eV. Gaussian initial conditions follow a power law for primordial perturbations with amplitude A_s = 2.1 × 10⁻⁹ at the pivot scale of 0.05 Mpc⁻¹ and spectral index n_s = 0.96. The simulated volume is a cube of 4032 Mpc/h on each side, and the matter (baryons and cold dark matter are treated as a single species) is sampled using 2304³ mass elements that form a collisionless N-body ensemble. In the present analysis we only use one of the realisations available, but the remaining ones could be analysed in the future to generate an ensemble of observers and quantify the cosmic variance in the cosmographic parameters.
The simulations were performed using the N-body code gevolution (Adamek et al. 2016), which adopts a weak-field expansion of general relativity to incorporate relativistic effects. The numerical output consists of data for the matter distribution and metric perturbations on the past light cone of an observer placed at a random location inside the simulation. In the present study we therefore do not consider any specific observer selection effects: as our synthetic observations span hundreds of megaparsecs, we expect such selection effects to be rather unimportant. We ran some quick checks on a few other realisations to convince ourselves that the realisation we use here is not a statistical outlier.
For convenience, the data are discretised in terms of concentric shells around the observer that are evenly spaced in comoving coordinates at intervals of 1.75 Mpc/h. Each shell is pixelised using the HEALPix framework (Górski et al. 2005) at a resolution adapted to the surface area of the shell, in order to sample the simulated matter and metric perturbations as close as possible to the resolution of 1.75 Mpc/h, which corresponds to the Cartesian mesh used in the particle-mesh algorithm of the code.
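Such a resolution-matching scheme can be sketched as follows; this is an illustrative reconstruction (assuming the HEALPix pixel area 4πr²/(12 N_side²) at shell radius r, and rounding to the nearest power of two in log space), not the actual pipeline code.

```python
import math

def shell_nside(r_comoving, target=1.75):
    """Pick the HEALPix N_side for a shell at comoving radius
    r_comoving (Mpc/h) such that the typical pixel side length,
    sqrt(4 pi r^2 / (12 N_side^2)), is as close as possible to the
    target resolution (Mpc/h)."""
    if r_comoving <= 0:
        return 1
    # ideal (real-valued) N_side for this shell radius
    n_ideal = math.sqrt(math.pi / 3.0) * r_comoving / target
    # HEALPix requires N_side to be a power of two
    k = max(0, round(math.log2(n_ideal)))
    return 2**k

for r in (1.75, 100.0, 1000.0, 4000.0):
    print(r, shell_nside(r))
```

Nearby shells share the same N_side, and the resolution doubles roughly every time the shell radius doubles, keeping the pixel size close to the 1.75 Mpc/h mesh scale.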
The discretised data are post-processed using a simple ray tracer that works in the Born approximation (see Lepori et al. 2020, for details of the ray-tracing procedure). Assuming that the lines of sight are unperturbed has the benefit that observables can be computed very efficiently by essentially performing weighted sums of pixel maps corresponding to consecutive shells down the past light cone. The ray tracer computes two observables for each pixel: the observed redshift z′_obs and the angular distance d′_A, where here the primes indicate that the observation is done in the rest frame corresponding to the time slicing of Poisson gauge, which closely matches the rest frame of the cosmic microwave background. In a Universe with inhomogeneous matter, this frame does not in general coincide with the local rest frame of the matter flow, i.e. one expects to see some bulk flows with respect to the cosmic microwave background. In our model, this is accounted for by the observer boost β.
Each observable includes all relativistic perturbations at first order. For the observed redshift these are the Doppler shift due to peculiar motion of the source, the gravitational redshift, the integrated Sachs-Wolfe effect and the Shapiro time delay, while the angular distance is only perturbed by the weak-lensing convergence. In order to assign the peculiar velocity, we take a mass-weighted average of the velocity of all particles in each pixel. It is worth pointing out that, although all effects are added linearly by the ray tracer, they arise from a completely non-perturbative underlying solution.
For this work we assume that astrophysical sources are good tracers of the underlying matter, i.e. we sidestep the issue of bias and use a density of sources that is directly proportional to the matter density.This also avoids the issue of empty pixels in which no unique velocity can be assigned.

'Smooth' simulations
The simulations described in the previous section contain small-scale nonlinear structures at low redshift; thus, constructing a large-scale congruence description of these simulations is non-trivial (e.g. Clarkson et al. 2011). We attempt to do this by directly fitting the cosmographic framework outlined in Section 2, which essentially introduces a 'smoothing length' corresponding to the redshift range of the data we include. We also perform the same fits using a simulation which should have a uniquely defined large-scale congruence description. By comparing this fit to direct calculations of the effective cosmological parameters (3) in the same simulation (in a manner equivalent to those presented in Macpherson & Heinesen 2021), we can assess how well the fit matches the 'true' congruence description of the simulation.
We take the UNITY simulation outlined in the previous section and remove all perturbations with wavelengths shorter than 670 Mpc from the initial data by applying a low-pass filter. We decrease the spatial resolution of this simulation by a factor of four and the mass resolution by a factor of eight, justified by the lack of small-scale power in the simulation. Aside from this resolution change, we perform the simulation with parameters as described above. The filter is applied only to the initial data, and naturally some structures beneath this scale will form through the course of the simulation. However, the power on small scales will be significantly reduced at late times with respect to the original UNITY simulations. We thus consider the scale of 670 Mpc as an approximate smoothing scale for the late-time simulation, which is much larger than the resolution scale of approximately 10 Mpc. For this simulation, we use the analysis code of Macpherson & Heinesen (2021) to calculate the effective cosmological parameters (3) by taking numerical derivatives of dynamical quantities at the observer position. We then use the data obtained by ray tracing to constrain these same parameters and compare the results. Macpherson & Heinesen (2021) performed their analysis on simulations from the Einstein Toolkit (Löffler et al. 2012). To check for consistency when changing the simulation framework, we replicated one of those simulations with gevolution and repeated the analysis. The Einstein Toolkit uses a fluid approximation for the matter field in numerical relativity while gevolution uses an N-body particle description in weak-field general relativity. We find good agreement between these two cases (see Appendix A) and thus conclude that, under the right conditions, we can find a large-scale congruence description of the N-body particle simulation data.
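Such a low-pass filter amounts to a sharp cut in Fourier space. A minimal sketch (a toy 64³ white-noise field standing in for the actual initial conditions, with box size and cut wavelength taken from the text; illustrative only) is:

```python
import numpy as np

rng = np.random.default_rng(7)

# toy initial conditions: a Gaussian random field on a coarse grid
# covering a box the size of the UNITY volume (illustrative resolution)
n, box = 64, 4032.0                       # grid points per side, Mpc/h
delta = rng.normal(size=(n, n, n))

# low-pass filter: zero all Fourier modes with wavelength < 670
k = 2 * np.pi * np.fft.fftfreq(n, d=box/n)   # comoving wavenumbers
kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
kmag = np.sqrt(kx**2 + ky**2 + kz**2)
kcut = 2 * np.pi / 670.0

dk = np.fft.fftn(delta)
dk[kmag > kcut] = 0.0                     # symmetric cut keeps the field real
delta_smooth = np.real(np.fft.ifftn(dk))

print(delta.std(), delta_smooth.std())    # variance strongly reduced
```

Only a few hundred of the 64³ modes survive the cut, which is why the filtered field is so much smoother than the original.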
It is worth pointing out that due to the noncommutation of averaging and time evolution in nonlinear general relativity, we do not in general expect the large-scale congruence of the UNITY simulation to coincide with that of the simulation with smoothed initial data.

PARAMETER CONSTRAINTS
In this section we discuss details of our method for constraining the anisotropic cosmography using synthetic observations from the UNITY simulations.First, we describe the assumptions we make and how they reduce the degrees of freedom of the model cosmography, and then we outline the pipeline we use to constrain these degrees of freedom using simulation data.

Dominant multipoles
The effective cosmological parameters in the Padé approximant (10) of the distance-redshift relation contain a hierarchy of multipoles. The multipole decompositions of the Hubble and deceleration parameters are given in (4) and (5), respectively, and the curvature and jerk parameter decompositions can be found in Heinesen (2021a). This full hierarchy contains many degrees of freedom (though see Heinesen & Macpherson 2022, for a reduction in the degrees of freedom by a factor of two), and constraining all of them with current data is difficult. We can make some simplifying assumptions to reduce the number of degrees of freedom and allow the best possible constraints. For a congruence of collisionless matter the acceleration vector a^μ vanishes, which removes the dipole component of the anisotropic Hubble parameter. In this work, we thus focus on constraining the monopole and quadrupole of the Hubble parameter as well as the independent multipoles of the deceleration parameter (monopole, dipole, quadrupole and octopole). According to Eq. (6), the hexadecapole of the deceleration parameter is fully determined by the quadrupole of the Hubble parameter, and we build this constraint into our model. We fix Ĵ_o = 0, which is close to the value for a flat FLRW universe with a realistic value of the present energy density of radiation, Ĵ_o/H_0³ = 2Ω_rad ≈ 10⁻⁴. Note the "−1" that enters the definition of Ĵ_o in Eq. (8) to see the connection to the canonical jerk parameter.

Markov chain Monte Carlo (MCMC) analysis
We constrain the components of each anisotropy tensor by modelling the logarithm of the likelihood as

$$ \ln \mathcal{L} = -\frac{1}{2} \sum_i \left[ \frac{\left( \delta z^{(i)} \right)^2}{\sigma^2\!\left( d_A^{(i)} \right)} + \ln \sigma^2\!\left( d_A^{(i)} \right) \right] , \qquad (18) $$

up to an irrelevant constant. In this formula, we compute the residual $\delta z^{(i)}$ for each individual source from Eq. (17), i.e., we solve that equation for $\delta z$ using the observed redshift $z'_{\rm obs}$, the observed direction $e'$ and the observed angular distance $d_A'$ of each observed pixel $i$. The model parameters are given by the boost velocity vector $\beta$ (three components) and all the parameters that enter Eq. (10) describing the cosmography (22 independent parameters after fixing the four-acceleration and the jerk). Furthermore, we add two nuisance parameters, $\sigma_0$ and $s_1$, to model the Gaussian scatter,

$$ \sigma(d_A) = \sigma_0 \left( 1 + s_1 d_A \right) , \qquad (19) $$

in order to allow a certain evolution of the scatter with distance. This means that the Gaussian scatter needs to be evaluated for each pixel separately, and we try to find a global fit for $\sigma_0$ and $s_1$. The underlying assumption of this likelihood model is that $\delta z$ is indeed a Gaussian random variable with zero expectation value and a variance that mildly depends on angular distance. Our results show that this is a reasonable assumption in the presence of small-scale nonlinear structure, but not such a good approximation for the case where we filtered the initial conditions. Details are discussed in the next section.
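A minimal sketch of this likelihood evaluation is given below; the residuals are obtained by solving Eq. (17) for δz, and the linear parameterisation of the distance-dependent scatter is an assumption of this sketch, which may differ in detail from the exact nuisance model used in the analysis.

```python
import numpy as np

def residuals(z_obs, e, beta, z_model):
    """Redshift residuals obtained by solving Eq. (17) for dz:
    dz = (z_obs - z_model) / (1 + z_model) + e . beta,
    where z_model = z(e', d_A) is the cosmographic prediction."""
    return (z_obs - z_model) / (1.0 + z_model) + e @ beta

def log_likelihood(dz, dA, sigma0, s1):
    """Gaussian log-likelihood up to an irrelevant constant, with a
    distance-dependent scatter; sigma(d_A) = sigma0 * (1 + s1 * d_A)
    is an assumed linear form for the two nuisance parameters."""
    var = (sigma0 * (1.0 + s1 * dA))**2
    return -0.5 * np.sum(dz**2 / var + np.log(var))

# toy usage with hypothetical numbers for two sources
e = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
dz = residuals(np.array([0.051, 0.080]), e,
               np.array([1e-3, 0.0, 0.0]),
               np.array([0.050, 0.081]))
ll = log_likelihood(dz, np.array([0.2, 0.3]), 1e-3, 0.5)
print(dz, ll)
```

In an MCMC run, a function of this kind would be evaluated at every step, with z_model recomputed from the sampled cosmographic parameters.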
Our data vectors consist of randomly selected pixels sampling the full sky at redshifts below a chosen threshold. We set the probability for selecting a pixel proportional to its comoving matter density. Since we want to model the distance-redshift relation over a certain baseline distance, we want the sources roughly evenly distributed across that distance. The volume factor naturally favours objects at large distance, and we compensate this fact by scaling the probability by an additional factor proportional to the solid angle of the pixel. In this way, for any given solid angle of the observations, the data are roughly evenly distributed in distance. This is of course just one possible selection function, and we expect that different selection functions may lead to different results depending on how much they favour low-redshift sources over high-redshift ones, or vice versa. We use different redshift thresholds to gain some first insights into this effect.

Fig. 2.- Multipole power $C_\ell$ for the quadrupole (ℓ = 2) of $H_o/H_0$, which measures the shear tensor of the coarse-grained matter congruence relative to the expansion, and for the independent multipoles of $\hat{Q}_o/H_0^2$ that characterise the anisotropy in the deceleration parameter. The error bars indicate 95% confidence intervals. For all anisotropy parameters the power drops nearly exponentially as the fitted range in redshift z increases (horizontal axis). The dashed lines show results from a simulation where perturbations below the scale of 670 Mpc were filtered out in the initial data, and hence a well-defined matter congruence exists. For this case, the horizontal dotted lines show an alternative measurement using the properties of null geodesics at the observer location (z → 0).
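The effect of the solid-angle weighting can be illustrated with a toy catalogue (hypothetical radii, and an acceptance probability proportional to 1/r², mimicking pixels of fixed physical size):

```python
import numpy as np

rng = np.random.default_rng(42)

# toy source catalogue: radii drawn uniformly in comoving volume
# between r_min and r_max (illustrative values in Mpc/h), so the
# pdf of the comoving distance r grows like r^2
r_min, r_max = 45.0, 450.0
u = rng.random(400_000)
r = (r_min**3 + (r_max**3 - r_min**3) * u)**(1.0/3.0)

# the volume factor favours large r; weighting the selection by the
# pixel solid angle (proportional to 1/r^2 for pixels of fixed
# physical size) compensates it, leaving sources even in distance
accept = rng.random(r.size) < (r_min / r)**2
r_sel = r[accept]

counts, _ = np.histogram(r_sel, bins=5, range=(r_min, r_max))
print(counts / counts.mean())   # all bins close to 1: uniform in r
```

Without the 1/r² acceptance factor, the outermost bin would contain roughly two orders of magnitude more sources than the innermost one for these radii.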
Our analysis does not require a lower redshift cutoff, and we even include sources with negative redshift. In fact, it may be important to keep such sources for an accurate fit of the observer velocity. We vary the redshift range of sources in our synthetic catalogues up to a maximum of z = 0.15 to ensure a small truncation error in our Padé approximant. Keeping the number of sources at low redshift approximately constant, this results in a different number of sources as we increase the maximum redshift. We tested different values for the density of sources, and as one expects, a larger number of sources generally leads to tighter posterior contours, while the results remain consistent with one another. Our final choice strikes a balance between constraining power and the cost of running the MCMC chains, while also being guided by the typical size of existing source catalogues from real observations. In the UNITY simulation the number we used ranges from about 2000 sources at z = 0.04 up to about 7500 sources at z = 0.15. For the 'smooth' simulation we have fewer sources, ranging between about 500 at z = 0.025 and about 2000 at z = 0.15.

RESULTS
Figure 2 shows the multipole power $C_\ell$ as a function of the maximum redshift z_max of the fitting interval. Solid curves represent constraints in the fully nonlinear UNITY simulation and dashed curves represent constraints in the simulation with filtered initial data. We show the power in the quadrupole of $H_o/H_0$ (dark blue) as well as the dipole (brown), quadrupole (red), and octopole (orange) of $\hat{Q}_o/H_0^2$, where $H_0 = \theta/3$ is the monopole of the expansion rate $H_o$. The error bars indicate 95% confidence intervals. The smaller error bars in the 'smooth' case are due to the much reduced scatter in the data, owing to the lack of small-scale perturbations. The horizontal dotted lines (without error bars) indicate the multipole amplitudes found by taking numerical derivatives along null rays at the observer location in the 'smooth' simulation, following the analysis of Macpherson & Heinesen (2021) (see Appendix A for more details).
The 'smooth' simulations have structure below 670 Mpc removed from the initial data, corresponding to the diameter of the light cone at z ≈ 0.08. Consequently there is relatively little evolution of the multipole amplitude across the entire redshift range probed. The opposite is true for the simulation that includes small-scale nonlinear structure: anisotropies are very large if the fitting range includes only very low redshifts, and they drop nearly exponentially as the fitting range is enlarged. However, the anisotropy levels always remain larger than in the case of the smoothed initial data, even beyond the smoothing scale. We reiterate that the initial data of these simulations are otherwise identical; removing structure beneath 670 Mpc is the only difference.
The dotted lines in Figure 2, which are obtained by taking numerical derivatives of the distance-redshift relation at the observer location in the 'smooth' simulation, are mostly in fair agreement with our MCMC analysis, with the notable exception of the quadrupole of $\hat{Q}_o$, where the measurements appear to disagree by more than two orders of magnitude. In this particular case, the apparent disagreement is a result of projecting the posterior onto $C_\ell$, which is a quadratic quantity. We show in Figure 7 (discussed in more detail below) that the posterior of the quadrupole components of $\hat{Q}_o$ at low redshift encloses the zero quadrupole value within its 1σ contours. However, $C_\ell$ effectively measures the squared distance from zero in the five-dimensional parameter space of quadrupole components. The steep increase in volume as one moves away from zero initially overcomes the decrease in likelihood, giving a large value for the marginalised $C_\ell$ but also an error bar of similar order. This effect is problematic for multipoles whose full posterior overlaps with the origin. On the other hand, if the entire posterior can be seen to be far away from zero, the $C_\ell$ become a useful measure of the distance.
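This projection effect is easy to reproduce with a toy posterior: five Gaussian quadrupole components, each individually consistent with zero, yield a marginalised squared amplitude that piles up away from zero (synthetic samples, illustrative only).

```python
import numpy as np

rng = np.random.default_rng(0)

# posterior samples of the five quadrupole components; each component
# is consistent with zero at the 1 sigma level (toy Gaussian posterior)
sig = 1.0
q = rng.normal(0.0, sig, size=(200_000, 5))

# the quadrupole power measures the squared distance from the origin
# in the five-dimensional component space, so its marginal piles up
# away from zero even though zero anisotropy is inside the contours
C2 = np.sum(q**2, axis=1)
print(C2.mean())               # ~5 sig^2: far from the true value 0
print(np.quantile(C2, 0.025))  # even the lower tail stays well above 0
```

The marginal of C2 follows a chi-squared distribution with five degrees of freedom here, whose entire 95% interval sits away from the origin despite the underlying components being consistent with zero.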
The physical reason for the expected smallness of the quadrupole term relative to the dipole and octopole of Q_o can be seen in Eq. (6): while the dipole and octopole are dominated by spatial gradients of the matter flow field, the leading term of the quadrupole is the time derivative of the shear (assuming a geodesic flow). For cosmological matter perturbations, time derivatives are generally much smaller than spatial gradients.
Fig. 3.-2D marginalised posteriors of σ_0, H_0 (in units of 100 km s^-1 Mpc^-1) and q for different fitting ranges in redshift z (different colours). The sampling is roughly uniform in observed distance d_A to the sources. Results from a simulation where perturbations below the scale of 670 Mpc were filtered out in the initial data are shown in orange and red. For this case, the red dashed lines show an alternative measurement of H_0 and q using numerical derivatives along null geodesics at the observer location (z → 0). The values for the fiducial FLRW background model are indicated by grey dashed lines. The inset labelled "Zoom-in" magnifies a region highlighted in the 2D plot for the parameters H_0 and q.
Cowell et al. (2023) placed an upper limit of 2.88% on the amplitude of the quadrupole in the Hubble parameter, relative to the monopole, using Pantheon+ supernova data. The data in that work lie in the redshift range z = 0.023 to z = 0.15. The amplitude for a maximum redshift of z = 0.15 that we find from C_2 in Figure 2 is ≲ 1.6% (at 95% confidence), which is consistent with the current upper limit from supernovae and suggests that a detection should be possible in the near future. We note that this amplitude is similar for both the UNITY and 'smooth' simulations we use here.
Figure 3 shows two-dimensional marginalised posterior distributions of the monopole of the Hubble parameter, H_0, and the monopole of the deceleration parameter, q. The exact monopole of Q_o would receive small corrections due to the anisotropy of H_o, but we neglect these higher-order terms here. In other words, q parameterises the monopole of Q_o, which has a direct interpretation in terms of kinematical quantities of the cosmographic model as specified in Eq. (6). Figure 3 also includes posteriors for the nuisance parameter σ_0 defined in Eq. (19). Grey to progressively darker blue contours show an increasing maximum redshift of the sources as stated in the legend. Orange and red contours show constraints in the 'smooth' simulations. Grey dashed lines in the panels with H_0 and q represent the true values of these parameters in the FLRW limit of the simulations, while the red dashed lines indicate the corresponding measurements based on numerical derivatives at the observer location in the 'smooth' simulation. As a general trend, the posterior contours tend to move through parameter space and progressively shrink as the redshift range used in the fit increases. While the tightening of contours is partly due to the increasing number of sources, the path of the best fit illustrates the dependence on the sampled volume. At low redshift, q is particularly poorly constrained, as may be expected, but even at z = 0.15 the contours have not yet converged fully to the FLRW limit. For the 'smooth' simulation, the contours at low and high redshift are roughly compatible, and both are in fair agreement with the UNITY simulation when data up to z = 0.1 or higher are included in the fit; see the panel labelled "Zoom-in" in Figure 3. The fitted value of σ_0 for UNITY is clustered around 8 × 10^-4, which is commensurate with the expected velocity dispersion. In the case with filtered initial data the scatter is significantly smaller, of the order of a few × 10^-5.
The monopole values found via numerical derivatives (red dashed lines in Figure 3) should correspond closely to the constrained parameters in the 'smooth' simulation case. While we find good agreement in the case of H_0, the two calculations of the deceleration parameter disagree at the level of several σ.
Figure 4 shows a histogram of the observed scatter in redshift of the individual sources in the UNITY simulation with respect to the best-fit cosmographic model, Eq. (10), and the fiducial FLRW model (blue and red, respectively). In the former case, the scatter of each source is normalised to the inferred standard deviation σ_dA^(i) of the best-fit model, Eq. (19), whereas in the latter case we normalise the scatter to the standard deviation of the entire distribution. We find excellent agreement with the Gaussian model assumed in the likelihood analysis (dark blue curve), and the normality test using the Kolmogorov-Smirnov statistic is passed easily with a p-value of 0.2 (rising to 0.9 if the sample mean is subtracted). The scatter with respect to the fiducial FLRW model is also Gaussian, yet with a standard deviation that is about 19% larger than for the best-fit cosmography. In addition, the central value of the distribution is biased significantly: the Kolmogorov-Smirnov test rules out a zero-centred Gaussian at more than 10σ in this case.
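The kind of normality check described here can be sketched as follows; the residuals, per-source scatters, and bias value below are mock placeholders, not the paper's data, but the standardisation and test mirror the procedure in the text.

```python
import numpy as np
from scipy import stats

# Hypothetical sketch of the normality check: each source's residual is
# divided by its own modelled standard deviation, and the standardised
# residuals are compared to a unit Gaussian with a KS test.
rng = np.random.default_rng(1)
n_sources = 5000
sigma_i = rng.uniform(0.5, 2.0, n_sources)   # per-source scatter (mock)
residuals = rng.normal(0.0, sigma_i)         # mock residuals, data - model

standardised = residuals / sigma_i           # should be ~ N(0, 1)
stat, p_value = stats.kstest(standardised, 'norm')
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.3f}")

# A shifted (biased) sample is strongly rejected, analogous to the
# residuals taken with respect to the fiducial FLRW model:
stat_b, p_b = stats.kstest(standardised + 0.5, 'norm')
print(f"biased sample: KS statistic = {stat_b:.4f}, p-value = {p_b:.2e}")
```

The per-source normalisation is what allows a single unit Gaussian to serve as the reference distribution despite heteroscedastic scatter.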
A corresponding histogram for the 'smooth' simulation is shown in Figure 5. Since there is no intrinsic velocity dispersion in this model, the small residual redshift errors are dominated by non-Gaussian numerical noise due to discretisation. We confirm this by conducting a resolution study, showing that the scatter almost doubles when we decrease the resolution of the 'smooth' simulation by a factor of two. As a consequence, the Gaussian likelihood model is not a good approximation in this case, and we caution that the corresponding posteriors of our MCMC analysis should therefore be understood as being only indicative. While the mean of the data can still be inferred accurately, the model provides only a poor description of the scatter around that mean, making it hard to interpret the size of the posterior contours.
It is interesting to note that the cosmographic model provides a better and less biased description of the data than the fiducial FLRW model in all cases shown in Figures 4 and 5. We attribute this to the fact that the cosmographic model already accounts for some of the departure from the homogeneous and isotropic limit by fitting an inhomogeneous coarse-grained matter congruence to the observations. This provides a better physical model for the distribution of sources.
Figure 6 shows two-dimensional posterior contours for the five independent components of the quadrupole of H_o, the shear tensor. The anisotropy in the expansion rate is typically of the order of a few percent in the redshift ranges studied, and it systematically decreases towards larger redshift ranges. It is worth pointing out that the contours generally exclude the isotropic FLRW model (grey dashed lines) with high significance, and that convergence towards isotropy does not follow a straight path in parameter space. This means that the best-fit quadrupole shape depends quite strongly on the redshift window over which it is inferred. The red dashed lines in Figure 6 indicate the quadrupole shape determined from numerical derivatives at the observer location within the 'smooth' simulation. The corresponding contours from the MCMC analysis (very tight orange and red contours) are mutually consistent across redshift windows and agree extremely well with this alternative measurement. We remind the reader that the confidence contours should not be taken at face value in this case, since we know that the Gaussian likelihood model is not a good description.
Figure 7 shows the same two-dimensional posteriors of the quadrupole components of H_o together with the quadrupole components of the deceleration parameter Q_o. The panels showing the shear, in the lower-right part of the plot, are zoomed-in versions of Figure 6 to make the contours for the 'smooth' simulation more visible. The upper-left part of the plot shows the five independent components of the quadrupole of Q_o. As noted earlier, the posterior for the individual quadrupole components in the 'smooth' simulation, when measured at low redshift (orange contours), is compatible with zero at 1σ in all cases. The large value of C_ℓ seen in Figure 2 therefore needs to be interpreted as a projection effect from the five-dimensional parameter space. The five panels highlighted in light green in the lower-left part of Figure 7 show the degeneracy between H_o and Q_o in the anisotropic distance-redshift relation. The degeneracy is considerably weaker in the 'smooth' simulation, where the quadrupole of H_o can be measured very robustly. In the UNITY simulation the degeneracy remains significant even at high redshift. Using prior knowledge about the expected smallness of the deceleration quadrupole might therefore yield a more robust constraint.
In Figure 8 we show two-dimensional constraints on the components of the dipolar contributions to the distance-redshift relation. We assume from the outset that the expansion rate has a vanishing dipole, as it would be proportional to the acceleration at the observer; see Eq. (4). Therefore, the possible sources of a dipole anisotropy are the dipole in the deceleration parameter, (¹q_x, ¹q_y, ¹q_z), and the observer velocity with respect to the best-fit congruence, (β_x, β_y, β_z). As previously, with increasing maximum redshift we generally see the contours for the deceleration anisotropy progress towards the isotropic limit. This progression is much slower for the observer boost, which is still very far from zero even for the largest redshift interval, which extends to z = 0.15. For that case, the posteriors of the components of β overlap with the ones measured in the 'smooth' simulation. For each of the three dipole axes, there is a noticeable degeneracy between the amplitude of the deceleration dipole and the observer velocity, as can be seen from the fact that the corresponding contours are tilted. The degeneracy is largest at low redshift and gradually disappears as the redshift interval grows; this is expected since the two effects act differently across redshift. As a guide to the eye, we have highlighted the three relevant panels in light green.
The dashed red lines in Figure 8 indicate (in the relevant panels) the deceleration dipole measured in the 'smooth' simulation using numerical derivatives at the observer location or the relative velocity of the observer frame with respect to the matter at that same location.The MCMC analysis recovers the observer boost very accurately from both redshift windows (extremely tight orange and red contours), whereas the dipole of the deceleration is not measured as well, especially when using only data at low redshifts z < 0.025.

CONCLUSION
Based on the cosmographic framework by Heinesen (2021a) for analysing the distance-redshift relation in a model-independent way, we have developed a new approach to constrain the anisotropy in the expansion rate, deceleration, and higher-order differentials of the distance-redshift relation in the real Universe that is inhomogeneous on small scales. These new observables encapsulate information about the inhomogeneities within the local volume over which the data are taken. The observational selection function introduces an effective averaging procedure, and as the probed volume increases we see a trend towards the FLRW limit, as expected in the simulated ΛCDM model of our analysis. The departure from FLRW as a function of the selection window can be seen as a new observable that can be predicted for any cosmological model, opening up a new way to test and constrain different possible scenarios. Our analysis can be applied directly to real observations.
Here we used synthetic data, generated with ΛCDM N-body simulations with relativistic ray tracing, to develop our analysis. Since a third-order Taylor series expansion of the distance-redshift relation (including expansion rate, deceleration, and jerk) has an error larger than 1% already at redshift z = 0.2, we employ a Padé approximant that is better behaved over an extended redshift range and limits our truncation error to below 0.2%. We find that the lowest multipoles in the expansion rate and deceleration can be detected at high significance with a few thousand sources that sample the full sky and have a scatter that is dominated by peculiar motion. Under such optimal measurement conditions, the observed deviation from FLRW is still significant even at redshift z = 0.15, i.e. in a volume measuring more than 1 Gpc in diameter. In particular, the variation of the expansion rate can be as large as 1% across the sky, while the deceleration parameter can have variations of order 10%. Results for different redshift ranges are summarised in Figure 2; C_ℓ effectively measures the variance of the expansion rate or deceleration across the sky, hence C_ℓ^(1/2) indicates the relative fluctuation amplitude.
In relation to this, it is worth noting that deviations from the ΛCDM initial conditions could change the quoted amplitudes of the anisotropies at various scales. If such deviations happen in our Universe, as some empirical observations of large-scale configurations and flows of matter may indicate (see, e.g., Migkas et al. 2021; Peebles 2023; Watkins et al. 2023; von Hausegger 2024), this could lead to measurements of multipoles that differ quantitatively in amplitude from those of the fiducial observer of this analysis. If we are atypical observers in one way or another, this can also cause differences in the measured data relative to the fairly typical ΛCDM observer of our analysis. The methods that we have presented in this work can deal with both of these scenarios: it is precisely the strength of the cosmographic framework presented here that it allows one to remain agnostic about the particularities of the ambient geometry and the perturbation statistics.
We compare our new analysis to previous work where the multipoles of the distance-redshift relation are computed directly from the kinematical properties of the matter congruence at the observer location. This requires a unique notion of matter congruence, which only exists in a Universe where the overlap of multiple streams of matter (often called 'shell-crossing') is strictly avoided. We therefore run a simulation using initial conditions where small-scale perturbations were artificially removed by applying a low-pass filter. We find that the local definition of the expansion rate agrees well with our new analysis when applied to the same simulation. The agreement is indeed extremely precise for the monopole and quadrupole of the expansion rate, where both methods give the same departure from the FLRW limit. The inferred deceleration parameter and its multipoles, when measured using our new analysis, do not match the local measurement as closely. This may be related to the fact that the latter measurement is based on numerical derivatives of data, which are prone to numerical issues, such as particle aliasing when performing a particle-to-mesh projection, especially for higher derivative orders. We also note, however, that our MCMC analysis is based on a Gaussian likelihood model, which does not describe the true scatter of the data well when small-scale structure (otherwise the primary cause of scatter) has been artificially suppressed.
While finalising this article, an interesting study by Kalbouneh et al. (2024) appeared that is in many ways complementary to our work. Using a toy model for a large inhomogeneity in the local Universe that still admits a unique matter congruence, the anisotropies in the distance-redshift relation are discussed in the cosmographic framework. Their discussion of the role of the observer frame is consistent with our approach, showing that it can be included in the analysis as we both suggest. While they advocate using a Taylor series expansion in redshift for the distance-redshift relation (similar to earlier work, including some of our own, e.g. Heinesen 2021a; Heinesen & Macpherson 2022), we believe that using the inverse series, i.e. an expansion in angular diameter distance, has advantages when dealing with real data. In particular, the angular diameter distance is a smooth parameter along any null ray and is in practice strictly monotonic over a significant portion of the local light cone. The irreducible, intrinsic scatter of the data is instead linked mainly to Doppler contributions that affect the observed redshift, rendering this quantity discontinuous and strongly non-monotonic. Our approach does not require binning in redshift and can effortlessly incorporate negative redshifts, which could be relevant for accurately measuring large observer boosts (it is of course physically impossible to find negative values of the angular diameter distance of a source).
In the realistic case where highly nonlinear structures form in the Universe, the notion of a matter congruence becomes ambiguous. Our approach provides an observational procedure to construct an effective congruence for a given survey footprint, allowing us to quantify the expected deviations from the FLRW limit. Since these deviations could be measured out to very large scales, they could be used as new probes in the analysis of cosmic large-scale structure, shedding new light on the questions of how data should be interpreted in an inhomogeneous Universe and how exactly the FLRW limit is approached when taking summary statistics.
The simulations used in this work were originally run on the DiRAC DIaL system, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). Reusing existing simulation data greatly reduces the carbon footprint of our research activity. JA, RD and MK acknowledge financial support from the Swiss National Science Foundation. Support for HJM was provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51514.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. CC is supported by the UK Science & Technology Facilities Council Consolidated Grant ST/T000341/1.

APPENDIX
COMPARISON WITH THE EINSTEIN TOOLKIT
In this Appendix, we consider two simulations with identical initial data performed with different codes: gevolution and the Einstein Toolkit (ET; Löffler et al. 2012; Zilhão & Löffler 2013). The purpose of this test is to ensure that our calculations of the cosmographic parameters using N-body simulation data are consistent with a simulation of a strict fluid congruence in a carefully controlled setup, which we describe below.
The simulation performed with the ET mimics the setup used in Macpherson et al. (2019) and Macpherson & Heinesen (2021), namely that of a matter-dominated universe initialised as a linearly perturbed Einstein-de Sitter (EdS) model. The initial conditions are set up with FLRWSolver (Macpherson et al. 2017) as Gaussian random fluctuations following the matter power spectrum at z_ini = 1000 from CLASS (Blas et al. 2011). To replicate the same spacetime solution with gevolution, we generate the initial data for the ET simulation, which adopts a fluid approximation for the matter content, and then translate these to particle displacement fields and velocities for the N-body ensemble of gevolution using a method described in Adamek et al. (2022).
The matter power spectrum we use to generate the initial data has initial perturbations smaller than ∼ 200 Mpc/h removed; namely, we set P(k > k_max) = 0 for k_max = 2π/λ_min and λ_min = 200 Mpc/h. The power at k < k_max is set according to the output power spectrum from CLASS. We choose this particular scale cut to ensure that structures appear at large scales only, and thus should remain well within the perturbative regime where the fluid description is valid. For this test, we choose an initial smoothing scale that is distinct from that of the smoothed simulations used in the main text, in order to replicate the initial data of an existing ET simulation. In the presence of nonlinear structures, we might expect different results when comparing simulations performed under a fluid approximation and with an N-body description, especially when sampling small enough scales to resolve shell-crossing, in which case the fluid approximation breaks down completely.
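A sharp cut of this kind can be sketched with a discrete Fourier transform; the grid size and the white-noise field below are placeholders rather than the actual initial-condition pipeline, but the zeroing of all modes with |k| > k_max is the same operation.

```python
import numpy as np

# Minimal sketch (assumed implementation) of the sharp low-pass filter
# described above: all Fourier modes with |k| > k_max = 2*pi/lambda_min
# are set to zero in a periodic box, leaving large-scale power intact.
N, L = 64, 1280.0                    # grid points per side, box size in Mpc/h
lambda_min = 200.0                   # smallest retained wavelength in Mpc/h
k_max = 2 * np.pi / lambda_min

field = np.random.default_rng(3).normal(size=(N, N, N))   # mock initial field

k1d = 2 * np.pi * np.fft.fftfreq(N, d=L / N)              # wavenumbers
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
k_mag = np.sqrt(kx**2 + ky**2 + kz**2)

field_k = np.fft.fftn(field)
field_k[k_mag > k_max] = 0.0                              # sharp k-space cut
filtered = np.fft.ifftn(field_k).real

print(np.sum(k_mag <= k_max), "of", N**3, "modes retained")
```

With L = 1280 Mpc/h the fundamental mode is 2π/L, so only modes within roughly six fundamental wavenumbers of the origin survive the cut.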
Both simulations use a mesh of 256³ points, and the N-body simulation is run with 1024³ particles to reduce noise from the particle discretisation. We choose a physical box length of L = 1.28 Gpc/h for both simulations. The simulations are both initialised in the Newtonian gauge, and the lapse and shift chosen in the ET simulation will maintain this gauge as long as the perturbations remain in the linear regime (see Macpherson et al. 2019, for more details on the gauge used). The relativistic N-body simulation with gevolution uses the Newtonian gauge throughout.
Figure 9 shows two-dimensional slices of the fluid scalars in the final snapshots of the two simulations run with gevolution (left panels) and the ET (right panels). For each simulation, we show the density field, expansion scalar, shear scalar, and acceleration magnitude (top-left to bottom-right on each side, respectively). While the outputs of the two simulations show good overall agreement on large scales, the particle-to-mesh projection used to construct the fields from the N-body ensemble leads to small-scale numerical artefacts that are particularly noticeable in the bottom-right subplot showing the acceleration.
The smooth density and velocity fields constructed from gevolution were then passed through the same post-processing code as for the ET data (mescaline; see Macpherson et al. 2019). For both simulations, we calculate the effective cosmological parameters on the slice at redshift z = 0 (see Macpherson & Heinesen 2021, for further details on the calculation itself). We first calculate the effective Hubble, deceleration, jerk, and curvature parameters for a set of 100 observers randomly placed in each simulation domain (with equivalent positions between simulations), with lines of sight drawn in the direction of 12 × N_side² HEALPix pixels with N_side = 32. Figure 10 shows a set of Mollweide projections of the effective Hubble, deceleration, curvature, and jerk (top-left to bottom-right, respectively, in each set of panels) for one observer placed at the same location in the simulations from gevolution (left panels) and ET (right panels). With the exception of curvature, all parameters are normalised by their EdS counterparts. We notice that both the anisotropic signatures and the amplitudes of the parameters are very close between the simulations, with the exception of the jerk parameter (bottom right in both sets of panels), which shows visible differences in anisotropic signature and a larger difference in amplitude.
Since this analysis is based on numerical derivatives of the fields at the observer location, we expect that high-order derivatives, like the ones that enter the calculation of the jerk, may become contaminated by the particle discretisation, possibly requiring a higher-order particle-to-mesh projection to mitigate (we used 'cloud-in-cell' projection). However, we do not expect that our MCMC analysis in the main text suffers from similar shortfalls, as no derivatives of data are taken at all in this case.
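For readers unfamiliar with the cloud-in-cell scheme mentioned here, a one-dimensional sketch of the deposit step is given below; the function name, grid size, and particle sample are illustrative, not the actual projection code used in the simulations.

```python
import numpy as np

# One-dimensional sketch of the 'cloud-in-cell' (CIC) particle-to-mesh
# projection: each particle deposits its mass onto the two nearest grid
# points with linear weights. This first-order scheme conserves mass,
# but the residual grid-scale noise it leaves is amplified whenever the
# gridded field is differentiated, as discussed in the text.
def cic_deposit_1d(positions, masses, n_grid, box_size):
    """Deposit particle masses onto a periodic 1D grid with CIC weights."""
    dx = box_size / n_grid
    x = (positions / dx) % n_grid        # positions in grid units
    i_left = np.floor(x).astype(int)     # left neighbour cell
    frac = x - i_left                    # fractional offset within the cell
    grid = np.zeros(n_grid)
    np.add.at(grid, i_left % n_grid, masses * (1.0 - frac))
    np.add.at(grid, (i_left + 1) % n_grid, masses * frac)
    return grid

rng = np.random.default_rng(7)
pos = rng.uniform(0.0, 100.0, size=10_000)
mass = np.ones_like(pos)
rho = cic_deposit_1d(pos, mass, n_grid=128, box_size=100.0)
print(rho.sum())    # total deposited mass equals total particle mass
```

Higher-order kernels (e.g. triangular-shaped-cloud) spread each particle over more grid points, which suppresses this aliasing at the cost of extra smoothing.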
Next, we consider a set of 1000 observers (again placed at identical positions in the two simulations), each with 300 randomly drawn lines of sight (lines of sight are unique for each observer but identical between simulations). For each line of sight we calculate the four effective cosmological parameters and average the result over the sky for each observer. This procedure was done in Macpherson & Heinesen (2021) to provide an estimate of each observer's "measurement" of these parameters, and to assess how they might vary with observer position.
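The sky-averaging step can be sketched as follows; all amplitudes below are made up for illustration, and the anisotropy model is a generic dipole plus trace-free quadrupole rather than the full expressions of the paper. Averaging over isotropically drawn lines of sight estimates the monopole, since both the dipole and the trace-free quadrupole vanish under a full-sky average.

```python
import numpy as np

# Illustrative sketch: sample random lines of sight uniformly on the
# sphere and average an anisotropic signal
#   H(n) = H_0 * (1 + d.n + n.Q.n),
# with d a dipole vector and Q a trace-free quadrupole matrix (both
# mock values). The sky average then estimates the monopole H_0.
rng = np.random.default_rng(11)
n_los = 300

# Uniform random unit vectors (lines of sight).
v = rng.normal(size=(n_los, 3))
n_hat = v / np.linalg.norm(v, axis=1, keepdims=True)

H0 = 68.0                               # monopole (mock value)
d = np.array([0.03, -0.01, 0.02])       # dipole components (mock)
Q = np.array([[0.02, 0.005, 0.0],       # trace-free quadrupole (mock)
              [0.005, -0.01, 0.0],
              [0.0, 0.0, -0.01]])

H_los = H0 * (1.0 + n_hat @ d + np.einsum('ij,ni,nj->n', Q, n_hat, n_hat))
print(H_los.mean())   # close to H0, up to sampling noise
```

With 300 lines of sight the residual sampling noise on the mean is at the sub-percent level for anisotropies of this size, which is why a few hundred directions per observer suffice for this estimate.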
Figure 11 shows these sky averages of the four effective cosmological parameters (panels; normalised by their EdS counterparts) for all observers in the simulations from gevolution (x-axes) and ET (y-axes). Solid black lines in each panel show a 1:1 correspondence, i.e. what we would expect if the ET and gevolution results were identical. The effective Hubble, deceleration, and curvature parameters show good agreement between the codes. We note that there are some small differences; however, we emphasise that we are comparing two independent codes which have different numerical errors as well as distinct descriptions of the matter field itself. As in Figure 10, and for the reasons already discussed, we do not find good agreement for the effective jerk parameter.
We conclude that the simulation run with gevolution provides a very good description of the large-scale congruence in this controlled situation in which we expect it to exist.

Fig. 1.-We show the relative error on the distance-redshift relation as a function of redshift for the third-order Taylor series expansion (9) and the [3/2]-order Padé approximant (10) in red and blue, respectively. Both approximations use the same expansion rate, deceleration parameter, and jerk, and were compared to an exact ΛCDM background solution with Ω_m = 0.31, Ω_rad = 10^-4, and vanishing curvature.
Fig. 4.-Scatter of individual sources around the modelled distance-redshift relation. The best-fit cosmography is shown in blue, and the scatter is normalised according to the best-fit Gaussian inferred in the likelihood analysis (dark blue line). The fiducial FLRW model is shown in red, and we normalise the scatter by the standard deviation in this case, i.e. without assuming a model. The histogram shows that the assumed Gaussian likelihood model provides a fair description of the data, here for the case of approximately 5000 sources with redshifts z < 0.1.
Fig. 5.-Same as Fig. 4, but for the simulation with filtered initial conditions and using approximately 2000 sources with redshifts z < 0.1. The Gaussian likelihood model is clearly not a very good description of the data in this case.

Fig. 6.-2D marginalised posteriors of the quadrupolar contributions to H_o for different fitting ranges in redshift z (different colours). The sampling is roughly uniform in observed distance d_A to the sources. The five independent components of σ_µν, the shear tensor of the fitted coarse-grained matter congruence, govern the quadrupole anisotropy of the Hubble parameter. Results from a simulation where perturbations below the scale of 670 Mpc were filtered out in the initial data are shown in orange and red. For this case, the red dashed lines show an alternative measurement using the properties of null geodesics at the observer location (z → 0). All shown quantities would be zero in an exact FLRW model (grey dashed lines).

Fig. 8.-2D marginalised posteriors of dipolar contributions to the distance-redshift relation for different fitting ranges in redshift z (different colours). The sampling is roughly uniform in observed distance d_A to the sources. The three components of ¹q characterise the dipole anisotropy of the deceleration parameter, whereas the three components of the velocity vector β parameterise the peculiar velocity of the observer with respect to the fitted coarse-grained matter congruence. Results from a simulation where perturbations below the scale of 670 Mpc were filtered out in the initial data are shown in orange and red. For this case, the red dashed lines show an alternative measurement using the properties of null geodesics at the observer location (z → 0). All shown quantities would be zero in an exact FLRW model (grey dashed lines). The panels highlighted in light green show the degeneracies between pairs of parameters that affect the same dipole axis.
Fig. 9.-2D slices at redshift z = 0 of the density, scalar expansion, shear scalar, and acceleration scalar (top-left to bottom-right, respectively) for gevolution data (a) and ET data (b) for runs with identical initial conditions. The physical box length is 1.28 h^-1 Gpc and all modes with wavelength below 200 h^-1 Mpc were removed from the initial conditions.
Fig. 10.-Sky maps of the effective parameters for the same observer in the simulation from gevolution (a) and ET (b), with domain size 1.28 h^-1 Gpc and all power beneath 200 h^-1 Mpc removed from the initial conditions. Each parameter is shown normalised by its EdS counterpart, with the exception of curvature.