Clustering of dark matter in the cosmic web as a probe of massive neutrinos

The large-scale structure of the Universe is distributed in a cosmic web. Studying the distribution and clustering of dark matter particles and halos may open a new horizon for studying the physics of the dark Universe. In this work, we investigate the nearest neighbour statistics and the spherical contact function in cosmological models with massive neutrinos. For this task, we use the relativistic N-body code gevolution and study particle snapshots at three different redshifts. In each snapshot, we find the halos and evaluate the letter functions for them. We show that a generic behaviour appears in the nearest neighbour function, $G(r)$, and the spherical contact function, $F(r)$, which makes these statistics promising tools to constrain the total neutrino mass.


INTRODUCTION
The data sets from ongoing (Aghamousa et al. 2016) and upcoming (Amendola et al. 2018) large-scale structure (LSS) surveys not only allow tighter constraints on the cosmological parameters but also make it possible to address some fundamental questions in physics and cosmology, e.g., the equation of state of dark energy, the characteristics of dark matter, and the total neutrino mass. With such a wealth of raw data, the task is to extract as much information as possible from these data sets in a computationally efficient way. Due to the presence of non-linearities, especially in the late-time Universe, this is a challenging task. On one hand, linear perturbation theory (LPT) still holds on large scales (Dodelson 2003), where the density contrast field can also be considered a Gaussian random field. There, the traditional 2-point correlation function contains all the information and is the most suitable statistic for extracting information from the LSS data (Romanello et al. 2023). On the contrary, at small scales (Cooray & Sheth 2002), the nonlinear gravitational evolution of matter leads to a non-Gaussian field as a result of mode couplings. In this situation, information diffuses into the higher-order correlation functions, and the 2-point correlation function no longer contains all the information, so it fails to grasp the complete content of the data set. The direct way to capture the information leaked from the second moment is to compute higher-order correlation functions, e.g., the bispectrum (Sefusatti et al.
2016). Going to deeper non-linear regimes, it is not straightforward that the information is stored in the low moments of the correlation function; rather, the whole probability distribution function (PDF) incorporates the non-Gaussian effects. One approach to circumvent these difficulties is to consider the 2-point correlation function (or its Fourier counterpart, the power spectrum) equipped with extra information, e.g., the marked power spectrum (Stoyan 1984), or to calculate it in different environments such as voids, nodes, sheets and filaments in real space (Bonnaire et al. 2022), or in redshift space (Bonnaire et al. 2023). Parallel to these efforts, many nonlinear semi-analytical models have been introduced that use the relation between linear-scale matter distribution statistics and nonlinear observables such as the number density of dark matter halos. Peak theory (Bardeen et al. 1986; Sheth & Tormen 1999) and excursion set theory (Bond et al. 1991; Sheth et al. 2001; Zentner 2007; Nikakhtar & Baghram 2017; Nikakhtar et al. 2018) are two non-linear structure formation models which use the relation between linear and non-linear scales to probe the distribution of matter over a wide range of scales. Another strategy is to search for other statistical measures, sensitive to all higher-order correlation functions, that are computed for one point in space. To mention a few: the PDF of the matter density field smoothed at some radius (Uhlemann et al. 2020; Boyle et al. 2021; Gough & Uhlemann 2022), k-nearest neighbour cumulative distribution functions (CDF) or peaked-CDF (Banerjee & Abel 2020; Banerjee et al. 2022; Yuan et al. 2023), and the pioneering work of White (1979) using the void probability function to study the clustering of tracers, e.g., halos or galaxies. Fard et al.
(2022) employed the nearest neighbour CDF, G(r), and the spherical contact CDF, F(r), within the context of the standard model of cosmology based on the cosmological constant Λ and cold dark matter (CDM), known as the ΛCDM model, using the SMDPL simulations (Moliné et al. 2023) with different mass cuts and at redshifts z = 0, 0.5, 1. They demonstrated that these statistical measures can differentiate between samples constructed by mass cuts at various redshifts.
The subsequent query is whether these statistics (i.e., G and F) can distinguish between different extensions of ΛCDM. In this paper, we compute these summary statistics for the base ΛCDM cosmology and for cosmologies with massive neutrinos, νΛCDM. After a brief review of the impact of massive neutrinos on late-time cosmology and of the statistics used in this work in Section 2, we describe the simulations and the data preparation in Section 3. The results of the analysis are depicted in Section 4. Finally, besides wrapping up, we point out future directions in Section 5.

THEORETICAL BACKGROUND
In this section, we present the theoretical background needed for this work. In the first subsection, we review the idea of one-point statistics, and in the second subsection, we address the effect of massive neutrinos on large-scale structure.

One-point statistics
In spatial point analysis, we use the cumulative distribution functions of the nearest neighbour distances (NNDs), such as G(r) and F(r), to reveal the clustering features of different samples. By comparing these functions to those calculated for a Poisson point process, known as complete spatial randomness (CSR), with the same average intensity (roughly, the number of points per volume), we can effectively identify and analyse spatial patterns.
In our case, the nearest neighbour distances measure the separation between two events from the data set (here, the positions of halos in the halo catalogue) or the distance between an event and a reference point (in our analysis, the reference points are randomly generated points in the simulation box with the same intensity as the halo catalogue). The nearest neighbour CDF, G(r), indicates the proportion of halos having their first nearest neighbour within a distance r′ ≤ r. On the other hand, the spherical contact CDF, F(r), shows the fraction of reference points with an event point (the position of a halo) within a distance r′ ≤ r. The presence of clustering leads to higher values of G(r) and F(r) in comparison to CSR (Hand 2008). Moreover, Fard et al. (2022) argued that G(r) is advantageous for studying high-density regions and small scales, while F(r) comes in handy to investigate larger scales. An interpretable combination of these two functions is the J-function, J(r) = [1 − G(r)]/[1 − F(r)], which equals one for CSR, while in the presence of clustering it is less than unity. The letter functions (i.e., G, F, and J) are the statistics we use in this work. In the context of cosmology, whenever dealing with new statistics, two natural questions always arise: how much information do these statistics contain, and what is the connection between these statistics and the classic n-point correlation functions? To address these, White (1979) introduced the conditional correlation functions Ξ_n, where the integrals are over an empty volume of interest V, except for the points x_1, …, x_n, and ξ_n are the n-point correlation functions. Note that by definition ξ_0 = 0 and ξ_1 = 1. White (1979) showed that the probability of finding an empty sphere of radius r around a point, i.e., the void probability P_0(r), is related to Ξ_0 through P_0(r) = exp[Ξ_0(V(r))], which is just the complement of F(r). The probability of a halo having its first nearest neighbour on the surface or inside of a
sphere of volume V is the complement of having no other halo within this distance. These relations, along with equation 2, manifest the dependence of these statistics not only on the 2-point correlation function but on all the higher-order correlations. Accordingly, it is reasonable to use these letter functions to investigate cosmological models, especially in the non-linear regime.
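The estimators described above are straightforward to implement. Below is a minimal Python sketch (our illustration, not the authors' pipeline) that estimates G(r), F(r), and J(r) for a point set in a periodic cube using a k-d tree; the function name and arguments are assumptions for this example.

```python
import numpy as np
from scipy.spatial import cKDTree

def letter_functions(halos, box, radii, n_random=None, rng=None):
    """Estimate the letter functions G(r), F(r), J(r) for a 3-D point set.

    halos : (N, 3) positions in a periodic cube of side `box`.
    radii : array of radii at which the CDFs are evaluated.
    """
    rng = np.random.default_rng(rng)
    n_random = n_random or len(halos)
    tree = cKDTree(halos, boxsize=box)  # periodic boundary conditions

    # G(r): CDF of the distance from each halo to its nearest neighbour
    # (k=2 because the closest point to a halo is the halo itself).
    d_nn, _ = tree.query(halos, k=2)
    g = np.array([(d_nn[:, 1] <= r).mean() for r in radii])

    # F(r): CDF of the distance from random reference points to the
    # nearest halo (the spherical contact, or "empty space", distribution).
    randoms = rng.uniform(0.0, box, size=(n_random, 3))
    d_sc, _ = tree.query(randoms, k=1)
    f = np.array([(d_sc <= r).mean() for r in radii])

    # J(r) = (1 - G)/(1 - F): equals 1 for complete spatial randomness
    # and drops below 1 for clustered patterns.
    with np.errstate(divide="ignore", invalid="ignore"):
        j = (1.0 - g) / (1.0 - f)
    return g, f, j
```

For a uniform (CSR) point set, G and F nearly coincide and J stays close to unity; for a clustered catalogue, G rises faster than F and J falls below one.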

Massive neutrinos in cosmology
The detection of atmospheric neutrino oscillations indicates that at least two of the three neutrino flavours of the standard model of particle physics have mass. The latest data release from the KATRIN experiment (Aker et al. 2022) suggests that the upper limit for the total neutrino mass would be around M_ν ≲ 0.8 eV (90% C.L.). For cosmology, this experimental fact means that neutrinos belong to the matter sector rather than the radiation sector at late times. Consequently, the neutrino energy density at redshift zero reads ω_ν ≡ Ω_ν h² = Σm_ν / 93.14 eV, with the neutrino mass fraction f_ν = ω_ν/ω_m. Hence, cosmological data sets can constrain the total neutrino mass through constraints on ω_ν, independently of the laboratory kinematic constraints. This bound would be around M_ν ≲ 0.12 eV (Aghanim et al. 2020), with severe restrictions on the nature of dark energy to be near the cosmological constant (Hannestad 2005). Unlike cold dark matter particles, neutrinos fall out of equilibrium as relativistic particles. Due to the Universe's expansion, neutrinos lose energy and eventually become non-relativistic. This process introduces a non-relativistic length scale λ_nr as an ultimate scale beyond which neutrinos act like cold dark matter. Below this maximum scale, neutrinos stream freely and, due to their thermal velocities, cannot cluster effectively under the influence of gravity. Accordingly, for a given redshift and neutrino species, the free-streaming wave number is defined as (Agarwal & Feldman 2011) k_fs(z) = 0.82 a² [H(z)/H_0] (m_ν/1 eV) h Mpc⁻¹, with m_ν the mass of one neutrino species, a the scale factor, H(z) the Hubble parameter, and H_0 = 100h km s⁻¹ Mpc⁻¹ the Hubble constant. Moreover, the free-streaming of the neutrinos smooths the matter density fluctuation field below λ_fs ∼ k_fs⁻¹.
This process leads to less clustering in the Universe and a suppression of structures compared to the ΛCDM universe. LoVerde (2014) shows that the presence of massive neutrinos reduces the abundance of massive halos, which host galaxy clusters, by slowing down the collapse. Non-linear clustering of massive neutrinos counteracts this delay slightly, but the outcome is a reduction in the number of massive halos of about 1% for a total neutrino mass below 0.5 eV. There is also a claim that the free-streaming of light relics induces a cut-off on the minimum mass of halos (Schneider et al. 2013): the lighter the relic particles, the smaller the cut-off in halo mass. One of the main cosmological observables for probing the effect of massive neutrinos is the matter power spectrum, both in the linear and non-linear regimes (Hannestad et al. 2020). To study the non-linear regime, one often uses the halo model approach. In this context, Hannestad et al. (2020) show that the suppression in power starts at k ≃ 0.01 h Mpc⁻¹, decreases up to k ≃ 1 h Mpc⁻¹, and continues to larger wavenumbers as a plateau. Hannestad et al. (2020) refer to the former feature as a "slide" and the latter as a "spoon", and state that the spoon feature is a generic behaviour detected in different simulations. As an example, Rossi (2017) used simulations of the cosmic web together with Lyman-alpha observations to study the effect of neutrinos on small scales. There are also studies of the effect of massive neutrinos on void statistics and distributions (Massara et al. 2015) and on the bias parameter (Hassani et al. 2022). In this work, instead of using the power spectrum, we use 1-point statistics to investigate the distribution of matter in the presence of massive neutrinos. We will point out a probable relation between some specific scales introduced in the two frameworks (power spectrum and 1-point statistics).
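As a concrete illustration, the free-streaming scale can be evaluated with the common fitting form k_fs(z) ≈ 0.82 [H(z)/H_0] (1+z)⁻² (m_ν/1 eV) h Mpc⁻¹ for a flat ΛCDM background. The coefficient and default density parameters below are our assumptions for this sketch; the paper's equation may differ in detail.

```python
import math

def k_free_streaming(m_nu_eV, z, omega_m=0.31, omega_lambda=0.69):
    """Free-streaming wavenumber in units of h/Mpc for one neutrino species
    of mass m_nu_eV [eV] at redshift z, assuming a flat LCDM background.
    Fitting form: k_fs ~ 0.82 * [H(z)/H0] / (1+z)^2 * (m_nu / 1 eV)."""
    hubble_ratio = math.sqrt(omega_lambda + omega_m * (1.0 + z) ** 3)
    return 0.82 * hubble_ratio / (1.0 + z) ** 2 * m_nu_eV
```

For m_ν = 0.1 eV at z = 0 this gives k_fs ≈ 0.08 h Mpc⁻¹, i.e. a free-streaming length of order 10 h⁻¹ Mpc, comparable to the scales where the letter functions deviate most in the results below.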

SIMULATION AND SAMPLE PREPARATION
In the following subsections, we discuss the simulation used and the data preparation process.

Simulation
To study the effect of neutrino mass on the statistics introduced in Section 2, we use the gevolution code (Adamek et al. 2016a,b; https://github.com/gevolution-code). Gevolution is a relativistic N-body code based on the weak-field expansion of general relativity. In this code, one can treat massive neutrinos as separate relativistic species or through their linear perturbations, using the transfer functions obtained from CLASS (Blas et al. 2011). Adamek et al. (2017) showed that the results of the two approaches are consistent. In this work we use the halo catalogues of the simulations presented in Adamek et al. (2017). Specifically, we use three simulations with total neutrino masses M_ν = 0.06, 0.20, 0.30 eV plus one for the vanilla ΛCDM cosmology. The fiducial cosmology parameters are given in Table 1. For the cosmologies with massive neutrinos, we set the density parameter of the cold dark matter according to the relation ω_c ≡ Ω_c h² = 0.12038 − ω_ν, to keep the total matter density (ω_b + ω_c + ω_ν) unchanged. In this relation, ω_ν is computed using Eq. 3, and ω_c = 0.12038 is the value we consider for the cold dark matter density in the ΛCDM scenario. These simulations contain 4096³ particles and grid points in a cube of comoving box size L = 2048 h⁻¹ Mpc, initialised at redshift z = 100. Using these high-resolution simulations we study the letter functions at z = 0. Moreover, to study the redshift evolution of the 1-point statistics, we run lower-resolution simulations with L = 300 h⁻¹ Mpc and N_grid = N_pcl = 1024³, and output snapshots at three redshifts z = 0, 0.5, 1. The results for these simulations are presented in Appendix A. It is worth mentioning that we construct the halo catalogues for each snapshot using the ROCKSTAR halo finder (Behroozi et al. 2012).
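The fixed-matter-budget bookkeeping above can be sketched as follows, assuming the standard conversion ω_ν ≈ Σm_ν / 93.14 eV (our assumption for Eq. 3; the exact coefficient used in the paper may differ slightly):

```python
def omega_nu(total_mass_eV):
    """Neutrino density parameter omega_nu = Omega_nu * h^2, from the
    standard relation sum(m_nu) / 93.14 eV (assumed form of Eq. 3)."""
    return total_mass_eV / 93.14

def omega_cdm(total_mass_eV, omega_c_lcdm=0.12038):
    """CDM density chosen so that the total matter density
    omega_b + omega_c + omega_nu stays at its LCDM value:
    omega_c = 0.12038 - omega_nu."""
    return omega_c_lcdm - omega_nu(total_mass_eV)

# The three massive-neutrino simulations used in this work:
for m in (0.06, 0.20, 0.30):
    print(f"M_nu = {m:.2f} eV -> omega_nu = {omega_nu(m):.5f}, "
          f"omega_c = {omega_cdm(m):.5f}")
```

The point of the subtraction is that all four simulations share the same total matter density, so differences in the letter functions trace the neutrino physics rather than a change in Ω_m.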

Data Preparation
Data preparation follows these steps. First, we remove all sub-halos from the halo catalogue. Then we eliminate the halos which do not meet the virial condition T/|U| < 2, where T and U stand for the kinetic and potential energies, respectively. As we use a specific mass cut for sample preparation, we omit the sub-halos to control the effect of substructures; in observational terms, this means that we use a volume-limited sample for our analysis. The halos which do not meet the virial condition are considered not to be gravitationally bound objects (Bett et al. 2007). At this stage, if we compute the nearest neighbour distance and plot its probability distribution function (PDF), referred to as the NND-PDF, against the comoving radius r, we observe a bimodal behaviour in the PDF.
This bimodality appears in all simulations regardless of neutrino mass; see the black dotted lines in the top and bottom panels of Figure 1, for ΛCDM and M_ν = 0.3 eV respectively. Since our analysis depends on the behaviour of probability functions, it is critical to understand whether this is something physical or just an artefact of the resolution of the simulation or of the halo finder used. Our study suggests that most halos contributing to the first mode are intersecting halos, meaning that these halos overlap with each other or share common spatial volume. To illustrate this, we compare the distance between neighbouring halos with the sum of their virial radii and find that some of the neighbouring halos indeed intersect.
If we omit the intersecting halos from the sample and then compute the NND, we end up with the red dash-dotted line in Figure 1.
As we discuss in Appendix B, if we do the same computations in redshift space, using redshift-space distances instead of comoving ones, we obtain a smooth PDF with one peak, and most of the contribution from the intersecting halos introduced by the halo-finding method disappears; this is presented by the solid yellow line in Figure 1. To do this, we use the plane-parallel approximation and reposition all halos along one Cartesian axis of the simulation box using the relation between the comoving position, x_r, and the one in redshift space, x_s = x_r + [v · n̂ / (a H(z))] n̂, with n̂ the unit vector along the line of sight, a the scale factor, and H(z) the Hubble parameter at redshift z. We consider two catalogues in redshift space: one that takes into account all halos and another that is filtered to eliminate intersecting halos. Eventually, the halo catalogues in redshift space present a smooth NND-PDF with one peak, the solid yellow and dashed blue lines in Figure 1. Hereafter, we do all computations for the catalogues in redshift space unless stated otherwise. The results are almost independent of the specific direction chosen in the simulation because of the statistical isotropy of the sample. Note that in the upcoming results the distances are redshift-space distances obtained from equation 5.
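The plane-parallel mapping described above amounts to a one-line shift of each halo along the line of sight. A minimal sketch follows; the function name, unit conventions, and the periodic wrap are our assumptions for this example.

```python
import numpy as np

def to_redshift_space(pos, vel, a, H, box, los=2):
    """Shift comoving positions into redshift space along one Cartesian
    axis (plane-parallel approximation): x_s = x_r + v_los / (a H).

    pos : (N, 3) comoving positions [Mpc/h]
    vel : (N, 3) peculiar velocities [km/s]
    H   : Hubble parameter at the snapshot redshift [h km/s/Mpc]
    box : box size [Mpc/h]; shifted halos are wrapped back periodically
    los : index of the line-of-sight axis (default z)
    """
    out = np.array(pos, dtype=float)
    out[:, los] += vel[:, los] / (a * H)
    out[:, los] %= box  # periodic wrap for halos pushed outside the box
    return out
```

Note that wrapping is a choice: for the SMDPL comparison discussed in Appendix B, halos that leave the box are simply counted as lost rather than wrapped.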

RESULTS
As mentioned earlier, to study the impact of neutrino mass, we use point-process analysis tools to investigate the clustering of the halo catalogues. We specifically use the so-called letter functions, i.e., G(r), the nearest neighbour distance cumulative distribution function; F(r), the spherical contact cumulative distribution function; and the J-function as defined in equation 1. The main task is to compare the letter functions for the base ΛCDM and the cosmologies with massive neutrinos (M_ν = 0.06, 0.2, 0.3 eV). To do this, we compute the letter functions for each of the halo catalogues in the different cosmologies. Then, we plot the difference of these functions between the catalogues with massive neutrinos and ΛCDM, i.e., Δ_Y = Y_νΛCDM − Y_ΛCDM, where Y is a member of the set {G, F, J}.
As the G and F functions are CDFs, we subtract them at each radius to compare two distinct cosmological models; this method is used, e.g., in Banerjee & Abel (2020) and Uhlemann et al. (2020).
We also compare the J-function in the same way for consistency. The errors, for each simulation and for each Y, are determined using the jackknife method (Wolter & Wolter 2003). For this task, we divide the simulation box into N³ sub-boxes, omit each sub-box in turn, and calculate the desired quantity on the remaining volume; we then use the mean of these leave-one-out estimates as the data point and their appropriately scaled standard deviation as the error. The total error bars on Δ_Y are calculated by combining the uncorrelated errors of Y_νΛCDM and Y_ΛCDM. The error bars are smaller than the size of the points.
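The sub-box jackknife described above can be sketched as follows. Here `statistic` is any callable returning the quantity of interest (e.g. G at a fixed radius), and the (n−1)/n scaling is the standard delete-one jackknife variance factor; this is our rendering of the procedure, not the authors' exact code.

```python
import numpy as np

def jackknife(positions, box, n_side, statistic):
    """Delete-one spatial jackknife.

    Splits the box into n_side^3 equal sub-boxes, recomputes `statistic`
    with each sub-box left out, and returns the mean of the leave-one-out
    estimates together with the jackknife error.
    """
    cell = np.floor(positions / (box / n_side)).astype(int)
    cell = np.clip(cell, 0, n_side - 1)  # guard points exactly on the edge
    labels = (cell[:, 0] * n_side + cell[:, 1]) * n_side + cell[:, 2]
    n = n_side**3
    estimates = np.array([statistic(positions[labels != k]) for k in range(n)])
    mean = estimates.mean(axis=0)
    err = np.sqrt((n - 1) / n * np.sum((estimates - mean) ** 2, axis=0))
    return mean, err
```

Errors on a difference such as Δ_Y then follow by combining the jackknife errors of the two cosmologies, assumed uncorrelated.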

Letter functions' dependence on the total neutrino mass
In Figure 2, we plot Δ_Y versus distance in redshift space at redshift z = 0. The blue circles, green squares, and red triangles represent the catalogues with total neutrino masses of 0.06, 0.2, and 0.3 eV, respectively. The top panel depicts Δ_G, which manifests a generic behaviour: starting at zero, reaching a mild maximum at around ∼ 2 h⁻¹ Mpc, then decreasing and hitting its minimum around ∼ 7 h⁻¹ Mpc, and finally returning to zero. The depth of the Δ_G minimum is highly sensitive to the sum of the neutrino masses: as the total mass increases, Δ_G deepens. On the contrary, the peak of Δ_G is only slightly affected by the change in the total mass. To interpret this behaviour, recall that G(r) is the probability of having the first nearest neighbour at distances less than r for the halos in the catalogue. Wherever Δ_G takes a positive value, finding the immediate neighbour is more probable in the cosmology with massive neutrinos than in the ΛCDM cosmology. A negative value of Δ_G translates to larger distances between neighbouring halos in neutrino cosmologies. Accordingly, the shape of Δ_G suggests that a small portion of halos in the cosmology with massive neutrinos have their nearest neighbour at smaller distances than their counterparts in the base ΛCDM, while for a large number of halos the immediate neighbour is at further distances.
To interpret Figure 2, we relate the statistical properties of individual points to the behaviour of the matter power spectrum when massive neutrinos are included in the matter sector. Typically, the main influence of massive neutrinos, attributed to their free streaming, is to dampen clustering at smaller scales and decrease the matter power spectrum. We can explore the nonlinear regime of the matter power spectrum using the halo model (Cooray & Sheth 2002). The halo model suggests that, on small scales, the power spectrum is mainly affected by how matter is distributed within individual halos. On larger scales, within the quasi-linear regime, the power spectrum is derived from knowledge of both the matter distribution within halos and the number density of dark matter halos. Hannestad et al. (2020) show that, in the context of the halo model, the presence of massive neutrinos leads to a spoon-like shape in the matter power spectrum, meaning that the deviation from ΛCDM has its extremum at a scale of approximately ∼ 1 h⁻¹ Mpc. One step further, in order to relate the matter power spectrum to the distribution of DM halos, one should account for the halo-bias term. Hassani et al.
(2022) investigate the scale-dependent nature of the halo bias in the presence of massive neutrinos; their analysis illustrates that the bias parameter tends to be larger on smaller scales, which typically correspond to lighter halos. We argue that the peak of Δ_G at a smaller scale arises from the one-halo term and the bias parameter at this scale. On the other hand, the minimum observed at a larger scale is a consequence of the free-streaming effect, i.e., the dissipation of structures due to the presence of massive neutrinos. In the middle panel, Δ_F also reveals a generic behaviour: it starts at zero, decreases, reaches a minimum at around ∼ 10 h⁻¹ Mpc, and then increases back to zero. Again, the total neutrino mass changes the depth of the minimum: as the sum of the neutrino masses increases, the minimum value decreases. F(r) probes the clustering of the point process by measuring the distances between a set of randomly generated points and the halos from the catalogue. The shortfall of this probability in the neutrino cosmologies relative to their ΛCDM counterpart suggests that a randomly located sphere with a radius around ∼ 10 h⁻¹ Mpc in a νΛCDM universe is less populated than one placed in the ΛCDM universe. It is worth mentioning that, in general, F(r) is expected to probe larger scales than G(r).
This happens because in G(r) both points are chosen from the halo samples, while in F(r) one of the points is chosen randomly. In the bottom panel of Figure 2, we plot Δ_J. The J-function encapsulates the information of the G and F functions as defined in equation 1. The deviation of this function from unity toward smaller values represents clustering in the point pattern. A positive value of Δ_J means that halos in the νΛCDM cosmologies are less clustered than the halos in the ΛCDM one, and vice versa. The generic behaviour of Δ_J, inherited from the G and F functions, once more suggests slightly excessive clustering at a smaller scale and a fall-off of clustering on intermediate scales in the cosmologies with neutrinos relative to the base model. Note that the extrema of G and F occur on the same scale, while the extremum of J is at a larger scale; this is caused by the shape of G together with the small correction of F in the denominator. An interesting point is that the matter power spectrum of ΛCDM is always larger than that of the massive-neutrino cosmologies, as neutrinos cannot cluster as effectively as CDM particles. Yet we see a maximum in Δ_G, which seems to hint at enhanced clustering at certain scales; this, however, concerns the halos, not the CDM particles. Although the maximum is less significant in comparison to the minimum of Δ_G, it could be a specific fingerprint of massive neutrinos. This fingerprint can be used to distinguish massive neutrinos from other modifications of the cold dark matter scenario, e.g., Kousha et al.
(2023). These results propose that the notable signature of a cosmology with massive neutrinos is the dip in the G and F plots on scales of ∼ 7 − 10 h⁻¹ Mpc. The dip in G and F is consistent with the reduced power in the cosmology with massive neutrinos on the aforementioned scales. In order to propose an observational probe, we have to translate this distribution of dark matter halos into the galaxy distribution. This translation is very complicated due to the halo occupation distribution and the bias parameter.

Mass Cut
In this subsection, at z = 0, we investigate the dependence of our results on the mass cut. The mass cut specifies the minimum mass; we keep only halos with masses equal to or larger than it. By applying mass cuts to our halo catalogues, we study various halo populations that have undergone distinct dynamics. In Figure 3, we plot the ΔG, ΔF and ΔJ functions versus the radius in redshift space; the colour bar indicates the mass cut for each curve. By increasing the mass cut, we observe that the minima in Δ_Y, where Y = {G, F}, deepen and occur at larger scales. This behaviour is very similar to the behaviour of the matter power spectrum for different halo mass ranges at small scales (Hannestad et al. 2020). Higher mass cuts eliminate low-mass dark matter halos, which has the consequence of finding the nearest neighbours at larger distances. Among all letter functions, the J-function shows the largest deviation from unity for large mass cuts.

CONCLUSIONS AND REMARKS
Massive neutrinos affect the LSS mainly in the non-linear regime. Accordingly, one-point statistics, which probe the LSS on non-linear scales, are useful as a complementary method to traditional power spectrum calculations. In the data preparation, we follow the same procedure for the massive neutrino and ΛCDM cosmologies. We start the simulations from the same seed number and consider the same spatial resolution and halo-finding process. This approach helps mitigate discrepancies arising from the simulations' randomness, numerics, or differences in spatial resolution. Consequently, the differences between the letter functions in the two cosmologies become more physical, minimising susceptibility to artificial effects introduced by the simulation. In addition to the halo catalogue obtained from the halo finder, we prepare a halo catalogue in redshift space. In this process, we transform the positions of dark matter halos to those in redshift space observed by a distant observer, and then we eliminate the halos whose separation is less than the sum of their virial radii. In Appendix B we show how this correction eliminates the artificial effect of the halo-finding process. We compute the difference of the letter functions, namely the G, F, and J functions, between the νΛCDM and the base ΛCDM cosmologies. The specific shape, an excess on small scales and a deficit on larger scales, is probably related to the characteristic scales appearing in the matter power spectrum: in the context of the halo model, the one-halo term dominates on smaller scales, while the two-halo term of the matter power spectrum affects the larger scales. The investigation of this relation is beyond the scope of this work and is the subject of further study. The redshift evolution of the letter functions is discussed in Appendix A.
We show that the difference between the two models occurs on almost the same scale throughout the redshift evolution, and the difference is more significant at higher redshifts. This suggests that high-z surveys will be more suitable for distinguishing the neutrino cosmology from ΛCDM using the letter functions. The depths of Δ_G and Δ_F are highly sensitive to the sum of the neutrino masses: as the total neutrino mass increases, Δ_G deepens. On the contrary, the peak of Δ_G is only slightly affected by the change in the total neutrino mass. An extension of this work would be to investigate mock catalogues and study the distribution of galaxies in the light of 1-point statistics in both the neutrino cosmology and the standard model.
The plausibility of this conclusion, though, needs further consideration. Knebe et al. (2011) provide an extensive comparison of various halo-finding algorithms. As mentioned in that work, for the FoF algorithm, the building blocks, i.e., the clusters (then to be recognised as halos), reside in almost non-overlapping volumes in coordinate space. Weighted graphs are used to construct the clusters, and the identification process is based on graphs without loops (minimum spanning trees) between particles and their spatial proximity. The ROCKSTAR algorithm also starts grouping the particles using an FoF scheme at large distances, then continues the halo identification process in phase space and uses a modified metric to decide whether two particles belong to one halo. For two particles with positions and velocities (x_1, v_1) and (x_2, v_2), this metric reads d = (|x_1 − x_2|²/σ_x² + |v_1 − v_2|²/σ_v²)^{1/2}, where σ_x and σ_v are the standard deviations of the particle positions and velocities of the subgroup, respectively. The second term under the square root is a positive quantity, and it can cause two particles that are close in coordinate space to move apart in phase space. This metric can result in identifying multiple halos within a set of particles that would be considered a single halo based on their spatial proximity in coordinate space. It also leads to overlapping halos in coordinate space, as opposed to halos identified by the FoF algorithm. Next, we discuss the impact of transforming the halo positions to redshift space using equation 5. As per Section 3, we use the plane-parallel approximation and adjust the z-component of the positions by v_z/(aH), where v_z is the z-component of the velocity. In this process, a few percent of the halos from both halo catalogues (FoF and ROCKSTAR from the SMDPL simulation) leave the simulation box, which indicates the stretching along the z-axis. Repositioning the halos in this way increases their distances, which shows itself in an overall shift of both NND-PDFs towards larger distances. This repositioning seems to counteract the effect of the phase-space metric for the ROCKSTAR catalogues, making the two NND-PDFs more similar to each other.
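The phase-space metric described above can be sketched directly. This is a toy two-particle version for illustration; in ROCKSTAR itself, σ_x and σ_v are computed per particle subgroup during the hierarchical refinement.

```python
import numpy as np

def phase_space_distance(x1, v1, x2, v2, sigma_x, sigma_v):
    """ROCKSTAR-style phase-space metric between two particles:
    d = sqrt(|x1 - x2|^2 / sigma_x^2 + |v1 - v2|^2 / sigma_v^2).

    A large velocity difference separates particles that are close in
    coordinate space, which is how spatially overlapping halos can arise
    from a single FoF group."""
    dx2 = np.sum((np.asarray(x1, float) - np.asarray(x2, float)) ** 2)
    dv2 = np.sum((np.asarray(v1, float) - np.asarray(v2, float)) ** 2)
    return float(np.sqrt(dx2 / sigma_x**2 + dv2 / sigma_v**2))
```

Two coincident particles with a one-sigma velocity offset sit at unit phase-space distance, even though a purely spatial (FoF) linking would treat them as the same object.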

Figure 1. The nearest-neighbour distance PDF for the ΛCDM model (top) and M_ν = 0.3 eV (bottom). The black dotted lines represent the PDF in comoving space, the red dash-dotted lines show it after removing the intersecting halos, the blue dashed lines depict the PDF in redshift space, and the solid yellow lines show it after omitting the remaining intersecting halos in redshift space. The x-axis represents the distance in real/redshift space, depending on the space in which the analysis is performed.

Figure 2. Δ_Y versus radius: blue circles correspond to M_ν = 0.06 eV, green squares to M_ν = 0.2 eV, and red triangles to M_ν = 0.3 eV. The top panel shows Δ_G, the middle Δ_F, and the bottom Δ_J.

Figure 3. Δ_Y versus radius in redshift space for M_ν = 0.3 eV. The colour bar shows the minimum mass of the sample.

Table 1. The cosmological parameters of the standard model, where ω_b = Ω_b h² and ω_c = Ω_c h² are the baryonic and dark matter densities normalised to the critical density. h = H_0/100, n_s, T_CMB, and A_s are respectively the reduced Hubble parameter, the spectral index, the CMB temperature, and the amplitude of scalar perturbations.