Comparison of two different integration methods for the (1 + 1)-Dimensional Schrödinger-Poisson Equation

We compare two different numerical methods to integrate in time spatially delocalized initial densities using the Schrödinger-Poisson equation system as the evolution law. The basic equation is a nonlinear Schrödinger equation with an auto-gravitating potential created by the wave function density itself. The latter is determined as a solution of Poisson's equation modelling, e.g., non-relativistic gravity. For reasons of complexity, we treat a one-dimensional version of the problem whose numerical integration is still challenging because of the extreme long-range forces (being constant in the asymptotic limit). Both of our methods, a Strang splitting scheme and a basis function approach using B-splines, are compared with respect to numerical convergence and efficiency. Overall, our Strang-splitting evolution compares favourably with the B-spline method. In particular, by using an adaptive time stepper, rather large one-dimensional boxes can be treated. These results give hope for extensions to two spatial dimensions for not too small boxes and for the large evolution times necessary for describing, for instance, dark matter formation over cosmologically relevant scales.


Introduction
Nonlinear Schrödinger equations (NLSE) with (non-)local interactions constitute a physically intriguing and phenomenologically rich model for a range of seemingly disparate systems. If the interaction is local and its coupling strength constant in time, the Gross-Pitaevskii equation represents an accurate mean-field description of Bose-Einstein condensates in d ≤ 3 dimensions [1]. Experimentally observable phenomena such as superfluidity or quantum vortices are theoretically recovered within the scope of this NLSE. If the interaction is long-range and identical to the Green's function of Poisson's equation, the Schrödinger-Poisson equation (SP) with constant coupling strength, itself an NLSE, is used in nonlinear optics as a d < 3 steady-state model of light propagating through dispersive media [2,3,4,5]. In cosmology, one recovers SP with a time-dependent interaction strength as the non-relativistic equation of motion of a scalar field minimally coupled to space-time geometry in d = 3 spatial dimensions, see e.g. [6]. This model is known in cosmology as fuzzy dark matter [7]. In both cases, the Gross-Pitaevskii and the SP equation, nonlinear effects such as soliton formation [8,9], soliton evolution [10], and interaction with transient interference fringes are accurately predicted by the model, e.g. [10,11]. Fuzzy dark matter provides an alternative approach to modelling large-scale structure formation through the evolution of a complex scalar field, the wave function of the SP system. Using only the dark matter without the baryonic part is certainly an approximation for the real universe, but for some purposes, such as the formation of structures on large, intergalactic scales, it is believed to be realistic nevertheless because dark matter dominates over baryonic matter [12]. Depending on one's viewpoint, one may interpret such a description of dark matter either as structure formation driven by a cosmic Bose-Einstein condensate trapped in its own gravitational potential [7,13], with the cosmic expansion determining the time-dependent interaction strength, or as a field-theoretical approximation to the classical collisionless Boltzmann equation [14,15]. In the latter sense, SP simulations may be seen as an alternative method to sample the phase-space evolution of cold dark matter [14,15,16]. The SP system finds application in many other contexts, such as nonlinear optics [3,17] and the investigation of structure formation on small scales in exotic Bose-Einstein condensates, see e.g. [18,19,20]. The recent review by Paredes et al. [21] conveniently collects the different physical motivations for the SP system.
From a numerical point of view, integrating the three-dimensional SP system for realistic cosmological initial conditions poses a challenging problem. Its Hamiltonian is nonlinear, and the nonlocal interaction leads to strong mode coupling, forcing one to resolve all spatial scales down to the microscopic de Broglie scale, below which Heisenberg's uncertainty principle effectively implements a diffusion mechanism, see e.g. [22]. Allowing for a time-dependent coupling strength complicates the formulation of a time-evolution operator and deprives us of energy conservation already at the analytical level. Multiple numerical approaches and semi-analytical approximations have emerged to (partially) combat these issues in a top-down fashion, i.e. as simplifications applied to the full-fledged, three-dimensional problem: Refs. [23,24] use adaptive-mesh-refinement techniques to alleviate the high cost of resolving many orders of magnitude in spatial scale (in cosmological applications usually five to six orders) at the expense of non-unitary time evolution. The authors of [25] recast SP into an approximately correct hydrodynamic form and apply smoothed-particle hydrodynamics to discretize and evolve the resulting equations. This increases the computational efficiency substantially and makes norm conservation manifest. However, due to the approximate nature of the hydrodynamic formalism, one loses certain aspects of the full SP phenomenology, interference effects in particular. Ref. [26] introduced a promising hybrid method in which the large-scale dynamics is followed by means of a phase-augmented N-body problem, from which the SP wave function can be reconstructed via a WKB-like gaussian-beam decomposition in small-scale, high-density regions. This is ideal for deep zoom-in simulations of highly compact objects; application to cosmological box sizes is still pending. The most widely used technique for integrating SP in three dimensions is the pseudo-spectral or Fourier method [27,28,29], cf. Sec. 3.1. The method is second-order accurate in time and spectrally accurate in space. Moreover, its structure lends itself to an easy implementation in existing astrophysical codes. However, its major bottleneck is the requirement of a uniform spatial grid, which translates into run times of O(10^7) CPU hours for small to moderately sized cosmological boxes [28].
The purpose of this work is to make progress on the bottom-up approach: instead of tackling the d = 3 SP system directly, we first build a comprehensive numerical understanding in lower dimensions, here d = 1. In a previous work, see Ref. [22], we focused on the physical implications of this approach and showed that, by introducing an external confinement, important features of the three-dimensional setting, in particular the correct interaction range, can be implemented with only one spatial degree of freedom. A one-dimensional setting is also attractive in its own right, since it allows one to scan broad parameter spaces and to generate larger ensembles of simulations with random initial conditions [22].
In this work, we focus more on the numerical aspects of the bottom-up approach and integrate the d = 1 SP system without confinement, a setting numerically more challenging than the confined situation due to the stronger mode coupling. Compared to the discussion in [22], we complicate the task even further by allowing for a time-dependent coupling strength in all our considerations, modelling the expansion of the universe during the evolution. Our main goal is to compare two numerical approaches: the aforementioned pseudo-spectral method and a B-spline approach commonly used in atomic and molecular physics, see e.g. [30,31,32,33]. Our results are as follows: first, we are confident that our numerical approach is well converged. This is underpinned by a direct comparison of wave functions and by testing a series of conservation laws. Second, we demonstrate that the pseudo-spectral method, in particular combined with an advanced adaptive time stepper, is the most efficient choice for treating larger system sizes. These results give us hope for an extension of a fully converged method to higher dimensions.
The paper is organized as follows: Sec. 2 gives details on the exact initial boundary value problem we want to solve and summarizes key properties of the one-dimensional SP system. Sec. 3 outlines our numerical methods. An in-depth comparison between the pseudo-spectral and the B-spline method in terms of accuracy and performance is given in Sec. 4. Building on these results, we introduce a time-adaptive extension to the plain pseudo-spectral approach and showcase its effectiveness in Sec. 5. We conclude in Sec. 6.

Theoretical background
We begin with a compact discussion of the SP system in one spatial dimension. As noted in our introductory remarks, SP arises as a challenging equation of motion in various contexts [21]. Here, we are primarily interested in SP at a numerical level. However, to make the problem of integrating SP well-defined and sufficiently challenging, initial conditions, boundary conditions and parameters should be chosen in a physically reasonable manner. To do so, we take inspiration from cosmology, where the three-dimensional SP system arises as a dark-matter-only approximation in the non-relativistic limit of a scalar field coupled to an expanding background cosmology [6]. In this application, the "wave function" models a complex but classical scalar field. Its density, |ψ|^2, is interpreted as the density of dark matter, which evolves self-consistently in its own gravitational potential [23].
Section 2.1 introduces the SP system in one spatial dimension. In Sec. 2.2 we elaborate on our choice of initial conditions, while Sec. 2.3 summarises important conservation properties of SP, which we use in Sec. 4 to benchmark our numerical results.

Schrödinger-Poisson equation
The equation to solve is the one-dimensional Schrödinger-Poisson equation or (1 + 1)-SP, where the +1 stands for the time variable. It can be derived from the three-dimensional version by assuming a homogeneous matter distribution in the two transverse dimensions and factorizing the wave function into the transverse directions and the direction of interest [22,34]. In momentum (k) space, the remaining interaction kernel is recognisable as a lower-dimensional analogue of the three-dimensional gravity kernel: in both cases, the spectrum scales as 1/k^2. The x-space kernel, on the other hand, looks qualitatively different, leading asymptotically to a constant force, i.e. a linear potential satisfying free-space boundary conditions. As analysed in detail in [22], the latter lacks a length scale that could be associated with a finite interaction range. Not only does this introduce distinct differences in the asymptotic system state and the underlying relaxation mechanism of the one-dimensional dynamics compared to the full three-dimensional situation; the absence of a clear length scale also complicates the numerical problem considerably, see e.g. [35]. We note in passing that a similar behaviour, namely the evolution towards nonequilibrium quasi-stationary states and extremely long relaxation times to equilibrium, is well known from other long-range interacting classical systems, including one-dimensional ones as in our case here, see for instance [36,37,38,39].
The coupled equations we evolve in time read, in their dimensionless form in the co-moving frame,

i ∂_t ψ(x, t) = [ −(1/2) ∂_x^2 + a(t) V(x, t) ] ψ(x, t),    (1)
∂_x^2 V(x, t) = δρ(x, t),    (2)

where V is the nonlocal, nonlinear potential given by the convolution of the density fluctuations δρ(x, t) ≡ |ψ(x, t)|^2 − 1. Here we subtract the uniform background since, in the co-moving frame, we are only interested in the density evolution induced by the initial cosmological density contrast, see e.g. [6,23,27,28,29,40]. Effectively, we work in a finite box of length L, and we apply periodic boundary conditions:

ψ(x + L, t) = ψ(x, t),  V(x + L, t) = V(x, t).    (3)

Then, the norm conservation of the wave function corresponds to the conservation of mass during the evolution:

M ≡ ∫_0^L |ψ(x, t)|^2 dx = L.    (4)

Note that the enforced equality in Eq. (4) corresponds to a compatibility condition for Poisson's equation, Eq. (2), with our choice of periodic boundary conditions: if M ≠ L, then Eq. (2), together with the boundary conditions of Eq. (3), would be ill-defined. Moreover, since Eq. (1) is nonlinear, it is not possible to satisfy Eq. (4) by normalizing an obtained numerical solution a posteriori. Hence, a norm-conserving, unitary time evolution is a necessity for the numerical treatment of Eq. (1), and the simplest check of the quality of the numerics is to compute the total norm or mass from the evolved wave function.
A physically more insightful representation of the interaction potential V(|ψ(x, t)|^2) follows from the integral solution of Eq. (2). Let G(x, x′) denote the interaction kernel, i.e. the gravitational potential of a point source at x′ experienced at x. We then have:

V(x, t) = ∫_0^L G(x, x′) |ψ(x′, t)|^2 dx′.    (5)

G(x, x′) can be given analytically [41,42]:

G(x, x′) = −(1/L) Σ_{l=1}^{N−1} e^{i k_l (x − x′)} / k_l^2.    (6)

Here k_l = 2πl/L, l ∈ {0, ..., N − 1}, for a spatial discretisation with N grid points. We absorbed the aforementioned compatibility condition into G(x, x′) by suppressing the singular k = 0 (l = 0) mode in its spectrum. This explains why the density ρ(x, t) ≡ |ψ(x, t)|^2, and not just the density contrast δρ(x, t), enters the convolution of Eq. (5). Consequently, the spatial mean of the potential is zero, ⟨V⟩_L = 0, which complies with the imposed periodicity of the spatial domain.
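The spectral inversion of Poisson's equation with the suppressed k = 0 mode fits in a few lines. The following sketch (assuming a standard numpy FFT convention and the dimensionless units above; the function name is ours) returns a zero-mean potential directly from the density, which, as explained in the text, is equivalent to sourcing with the density contrast:

```python
import numpy as np

def solve_poisson_periodic(psi, L):
    """Solve d^2 V / dx^2 = |psi|^2 - 1 on a periodic grid via FFT.

    The singular k = 0 mode is suppressed, which enforces <V>_L = 0 and
    makes sourcing with the full density rho equivalent to sourcing with
    the density contrast delta_rho.
    """
    N = psi.size
    rho = np.abs(psi) ** 2
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    rho_k = np.fft.fft(rho)
    V_k = np.zeros_like(rho_k)
    nonzero = k != 0
    V_k[nonzero] = -rho_k[nonzero] / k[nonzero] ** 2  # kernel -1/k^2
    return np.real(np.fft.ifft(V_k))
```

For a single-mode density ρ = 1 + ε cos(k_1 x), the result agrees with the analytic potential −(ε/k_1^2) cos(k_1 x) to machine precision.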
The time-dependent coupling a(t) ∈ (0, 1], in the cosmological context known as the scale factor modelling the expansion of the universe, encapsulates our choice of background cosmology and is given by the solution of Friedmann's equation in a flat and radiation-free universe, see e.g. [43]:

ȧ(t)/a(t) = H_0 √( Ω_{m,0} a^{−3} + Ω_{Λ,0} ).    (7)

Here the present-day density parameters Ω_{m,0} = 0.3 for the dark matter (as usual for a dark-matter-only approximation, absorbing all background matter density into this single parameter [28,29,40]) and Ω_{Λ,0} = 0.7 for the dark energy are used. For today's Hubble constant we take the specific value H_0 = 68 km s^{−1} Mpc^{−1} for our simulations, thus making our choice of cosmological parameters {Ω_{m,0}, Ω_{Λ,0}, H_0} compatible with observational constraints, see e.g. [44]. The reader is referred to [45] for an illustration of the temporal behaviour of the scale factor. Under realistic conditions, the integration is initialised with a(t_0) ≈ 10^{−2}. Subsequently, a(t) is strictly monotonically increasing, and two distinct regimes may be identified. For t < 40, the background expansion, measured by the value of a(t), proceeds slowly, so that Eq. (1) is close to free. Once t > 40, a(t) grows rapidly in a power-law-like manner, thus increasing the importance of the nonlinear interaction term on a short time scale. Section 4 will report results obtained from numerical simulations of Eqs. (1), (2) and (7) with two different numerical methods.
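Tabulating a(t) along the simulation time grid amounts to integrating the flat, radiation-free Friedmann equation as an ODE. A minimal sketch with a fixed-step RK4 integrator is given below; setting H_0 = 1 is an assumed normalisation fixing the code time unit, not the paper's convention:

```python
import numpy as np

def scale_factor(t_grid, a0=1e-2, H0=1.0, Om=0.3, OL=0.7):
    """Integrate da/dt = H0 * a * sqrt(Om * a**-3 + OL) with RK4.

    H0 = 1 fixes an (assumed) code time unit; a0 ~ 1e-2 matches the
    initial scale factor quoted in the text.
    """
    def rhs(a):
        return H0 * a * np.sqrt(Om * a ** -3 + OL)

    a = np.empty_like(t_grid)
    a[0] = a0
    for i in range(len(t_grid) - 1):
        h = t_grid[i + 1] - t_grid[i]
        k1 = rhs(a[i])
        k2 = rhs(a[i] + 0.5 * h * k1)
        k3 = rhs(a[i] + 0.5 * h * k2)
        k4 = rhs(a[i] + h * k3)
        a[i + 1] = a[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return a
```

In the early, matter-dominated regime the numerical solution reproduces the analytic growth a^{3/2}(t) = a_0^{3/2} + (3/2) H_0 √Ω_{m,0} t.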

Initial Conditions
In [22] most of our analysis focused on spatially localised, synthetic initial conditions, e.g. a gravitationally unstable gaussian wave packet.The aforementioned nonlocality of the interaction kernel G(x, x ′ ), on the other hand, suggests that spatially delocalized initial conditions are more challenging to integrate correctly.
In what follows, all simulations depart from cosmological initial conditions adapted to the one-dimensional situation at hand. At the numerical level relevant for this work, these may be understood as a random and spatially delocalised realisation of the wave function, where prior expectations about the properties of |ψ|^2 are absorbed into a long-range spatial correlation. The long-range correlations, which manifest dynamically through the long-range potential, together with the double time dependence of the potential through the varying density as well as the fast-varying scale factor a(t), make the numerical problem challenging. To keep the discussion compact, we refer to [34] for a detailed exposition and only mention the key points of the initial-condition construction.
Initialising the wave function at t_0 = 0 requires us to specify two degrees of freedom at each grid location x_n, the modulus via the density contrast δ(x_n, t_0) and the phase S(x_n, t_0):

ψ(x_n, t_0) = √(1 + δ(x_n, t_0)) e^{i S(x_n, t_0)}.    (8)

We may already deduce from this ansatz that the spatially averaged density contrast must satisfy ⟨δ⟩_L = 0 in order to fulfil Eq. (4) at the initial time t_0. The modulus is expected to obey gaussian statistics at early (a(t) < 10^{−2}) times [46]. More precisely, if Σ_{nm} = ξ(x_n, x_m; t_0) = ⟨δ(x_n, t_0) δ(x_m, t_0)⟩ denotes the real-space covariance matrix, then δ(x_n, t_0) ∼ N(0, Σ) for the discretized situation at hand, where N(0, Σ) denotes the normal distribution with zero mean and covariance Σ. Note that the zero mean is a consequence of ergodicity, so that ⟨δ⟩ = ⟨δ⟩_L = 0 by construction.
For long-range correlations, Σ will be a dense covariance matrix, and sampling from this high-dimensional normal distribution is thus intractable for high-resolution grids with N = O(10^6) grid points. Fortunately, the cosmological principle, i.e. the assumption of statistical isotropy and homogeneity [12], guarantees the one-dimensional matter power spectrum P(k; t_0) = F[ξ(r; t_0)] to be diagonal in momentum space. It is for this reason that the construction of cosmological initial conditions is usually carried out in k-space.
The form and time dependence of the one-dimensional matter power spectrum are set by (i) the linear theory of structure formation [46], governing the three-dimensional matter power spectrum P_3D(k; t_0), and (ii) a dimensional reduction from P_3D(k; t_0) to P(k; t_0). For the purpose of this work, we treat step (i) as a black box and rely on well-established linear Boltzmann codes such as AxionCAMB, see [47], to provide the three-dimensional matter power spectrum at the simulation start t_0. With P_3D(k; t_0) at hand, we proceed as in [7,22] and find its lower-dimensional analogue P(k; t_0) by demanding a dimension-independent variance ξ_3D(0; t_0) = ξ(0; t_0).
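The k-space construction described above can be illustrated with a short sketch that draws one realisation of δ(x_n, t_0) from a given power spectrum. Both the function name and the overall normalisation convention below are illustrative assumptions, not the exact pipeline of [34]; the essential points are the diagonal k-space covariance, the Hermitian symmetry making the field real, and the vanishing k = 0 mode enforcing ⟨δ⟩_L = 0:

```python
import numpy as np

def sample_density_contrast(P_of_k, N, L, rng=None):
    """Draw one periodic, statistically homogeneous gaussian field
    delta(x) with power spectrum P(k).

    Sampling happens in k-space, where the covariance is diagonal.
    The half-spectrum plus irfft enforces Hermitian symmetry (real
    field); zeroing the k = 0 mode enforces <delta>_L = 0.
    """
    rng = np.random.default_rng(rng)
    k = 2 * np.pi * np.fft.rfftfreq(N, d=L / N)
    amp = np.sqrt(P_of_k(k) * L / 2)  # assumed normalisation convention
    modes = amp * (rng.standard_normal(k.size)
                   + 1j * rng.standard_normal(k.size))
    modes[0] = 0.0  # enforce zero spatial mean
    return np.fft.irfft(modes, n=N) * N / L  # assumed FFT convention
```

A realisation drawn this way has exactly zero mean, independent of the chosen spectrum.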
Finally, the initial phase function S(x_n, t_0) follows from mass conservation in the linear evolution regime [34], which leads to a Poisson-type equation for S that is solvable with the numerical procedure outlined in Sec. 3 for the gravitational potential.

Conserved Quantities of (1 + 1)-SP
Lacking analytical solutions, a crucial part of our numerical test harness, Sec. 4, is to assess deviations of numerically deduced observables, such as mass, momentum and energy, from their analytical counterparts, which are manifestly conserved in the dynamics described by Eq. (1). Here we summarize these conserved quantities.
Note that Eqs. (1)-(2) are the Euler-Lagrange equations of an action (10) with an associated Hamiltonian density (11); see [34] for the explicit expressions. Noether's theorem then implies the aforementioned mass conservation, Eq. (4), due to the invariance of the action under global phase changes ψ(x, t) → e^{iϕ} ψ(x, t). Spatial translation invariance, ψ(x, t) → ψ(x − x′, t), a consequence of the periodicity and symmetry of the kernel (6), assures conservation of linear momentum [34]. The total momentum is defined by

P(t) = ∫_0^L Im[ψ*(x, t) ∂_x ψ(x, t)] dx    (13)

for both static (a(t) = const.) and dynamic (a(t) ≠ const.) background cosmologies. Finally, the total energy,

E(a) = K + a W,  K = (1/2) ∫_0^L |∂_x ψ|^2 dx,  W = (1/2) ∫_0^L V |ψ|^2 dx,    (14)

is only conserved for a = const. It is only then that ψ(x, t) → ψ(x, t − t′) constitutes a symmetry of the action (10). If background expansion is allowed, the action producing Eq. (1) loses its time-translation symmetry and Eq. (14) becomes a function of time. Its total time derivative, Eq. (16), is then set by the explicit time dependence of a(t). Note that this result is different from the three-dimensional considerations of [14], which recover the well-known Layzer-Irvine equation, dE/dt = −(ȧ/a)(2K + W), instead of Eq. (16). This is a consequence of the form of the interaction kernel (6).
Nevertheless, it is still possible to define a conserved energy, E_tot(a), by compensating for the loss implied by Eq. (16), i.e. by subtracting from E(a) the time integral of its total time derivative.
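The conserved quantities above double as runtime diagnostics. A minimal sketch of the corresponding grid estimators is given below; the concrete prefactors follow the usual SP conventions stated in Eq.-style form above and would need to be matched to the paper's exact dimensionless units:

```python
import numpy as np

def diagnostics(psi, V, a, dx):
    """Grid estimates of the monitored quantities: total mass M,
    total momentum P, and the energy budget E = K + a * W.

    Uses the trapezoidal-free Riemann sum on a periodic grid and a
    central difference for the spatial derivative.
    """
    rho = np.abs(psi) ** 2
    M = dx * rho.sum()
    # periodic central difference for d(psi)/dx
    dpsi = (np.roll(psi, -1) - np.roll(psi, 1)) / (2 * dx)
    P = dx * np.imag(np.conj(psi) * dpsi).sum()
    K = 0.5 * dx * (np.abs(dpsi) ** 2).sum()
    W = 0.5 * dx * (V * rho).sum()
    return M, P, K + a * W
```

For a plane wave e^{ikx} with vanishing potential, this recovers M = L, P ≈ kL and E ≈ k^2 L / 2 up to the finite-difference truncation error.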

Numerical methods
We proceed by introducing two independent numerical methods capable of integrating the nonlinear and nonlocal Schrödinger-Poisson equation in an expanding background cosmology: the pseudospectral Fourier method, the go-to method for integrating Eq. (1) in astrophysical settings [27,28,29], and a Crank-Nicolson-based B-spline integrator, popular for applications in atomic and molecular physics [30].
Although the temporal integration proceeds differently in the two schemes, their discrete time-evolution operators share a set of common features. Most importantly, both are formally second-order accurate in time and manifestly unitary [48]. Thus, the suitability of the spatial discretization for Eq. (1) will determine which of the two numerical schemes is more appropriate for simulating Schrödinger-Poisson.
As discussed in Sec. 1, the uniform spatial grid used by the pseudospectral approach is its major bottleneck in large-scale cosmological simulations, in any number of dimensions. It is for this reason that we choose to benchmark this "state-of-the-art" approach against a more adaptive B-spline discretization.

Understanding how this change in the spatial discretization of Eq. (1) impacts numerical accuracy and computational efficiency will help to decide which approach should be developed further, in particular to extend our studies in a bottom-up fashion to two spatial dimensions, i.e. the (2 + 1) problem [5].

Pseudospectral or Fourier Method
The pseudospectral approach discussed below has emerged, despite its high computational cost for high-resolution simulations, see e.g. [28], as the canonical approach for evolving Eqs. (1)-(2). The scheme is simple to implement, second-order accurate in time, can be spectrally accurate in space, is by construction unitary [48], and is approximately energy conserving for a(t) = const. In the case of cosmological applications, it may also be understood as the wave-dynamical equivalent of the symplectic kick-drift-kick integrator, omnipresent in N-body computations, and is thus easily integrated into existing cosmological codes. The following two sections summarize the method's key ingredients.

Time Integrator
The task at hand is to construct a numerically tractable approximation to the time-evolution operator of Eq. (1). To this end, we follow the operator-splitting technique, i.e. we split Eq. (1) into subproblems and find, ideally exact, time-evolution operators for the subproblems. A suitable composition of the latter then yields a time integrator for Eq. (1).
We realize that Eq. (1) may be split into the kinetic subproblem

i ∂_t ψ = −(1/2) ∂_x^2 ψ    (18)

and the potential subproblem

i ∂_t ψ = a(t) V(x, t) ψ.    (19)

While Eq. (18) and the Poisson equation can be solved exactly in Fourier space, Eq. (19) is more easily treated in real (x) space.
A permissible integrator composition is given by the three-stage Strang splitting [49] already used in Ref. [22], which explored the fully dynamic case a(t) mostly by extrapolation of numerical results for the static scenario, or by limited investigations for a ≈ const. In this paper, we make progress by treating the expanding case with explicitly time-dependent a(t) more thoroughly, a much harder scenario for obvious reasons.
The time evolution of the full problem then reads

Û_{K+V}(t + ∆t, t) = Û_K(∆t/2) Û_V(t + ∆t, t) Û_K(∆t/2) + O(∆t^3),    (20)

where Û_K and Û_V are the time-evolution operators of the kinetic and potential subproblem, respectively. Since Û_K can be computed exactly, cf. Eq. (22), it is permissible to combine consecutive kinetic operator applications across adjacent time steps,

Û_K(∆t/2) Û_K(∆t/2) = Û_K(∆t).    (21)

This reduces the effective number of operator applications.

Spatial discretization and Time Evolution Operators
With the overall structure of ÛK+V in place, we proceed to specify ÛK and ÛV .
We choose an equidistant grid of N points in x-space, x_n = n∆x, and the conjugate k-space grid k_n = 2πn/L, n ∈ {0, ..., N − 1}. The subproblems are solved separately; in the following, F and F^{−1} stand for the Fourier transform and its inverse. In Fourier space, the kinetic part is solved exactly:

ψ(x, t + ∆t) = F^{−1}[ e^{−i k^2 ∆t/2} F[ψ(x, t)] ].    (22)

The isolated potential part,

i ∂_t ψ(x_n, t) = a(t) V(x_n, t) ψ(x_n, t),    (23)

conserves the norm of ψ component-wise due to the real nature of V:

∂_t |ψ(x_n, t)|^2 = 0,    (24)

implying |ψ(x_n, t)|^2 = |ψ(x_n, t_0)|^2. We conclude that the evolution operator of Eq. (23), Û_V, depends explicitly on time only through a(t), and we approximate this remaining time dependence by the mid-point method:

ψ(x_n, t + ∆t) = e^{−i a(t + ∆t/2) V(x_n, t) ∆t} ψ(x_n, t).    (25)

We apply the convolution theorem and solve the Poisson equation in k-space using the convolution kernel −1/k^2, see Eq. (6):

V(x, t) = F^{−1}[ −F[|ψ(x, t)|^2]/k^2 ],  k ≠ 0.    (26)

We close with two remarks. Firstly, given that the trivial kinetic subproblem is solved in k-space, it would be desirable to do the same for the potential subproblem as well. Unfortunately, we cannot efficiently solve the above equations in k-space only, because of two nonlinearity-related problems: (i) in order to calculate the potential V, we need the point-wise norm |ψ(x_n, t)|^2 in x-space, not just the wave function itself; and (ii) V would be highly nonlocal in k-space, and it is hence more efficient to apply Û_V in x-space, where it is trivially diagonal.
Secondly, note that (i) the operators in Eq. (22) and Eq. (25) are unitary by construction, and (ii) the Strang splitting, Eq. (20), is a composition of single unitary operators [48]. Thus, the overall time integration is guaranteed to be unitary and norm-preserving up to accumulated numerical noise. Unitarity, just like symplecticity of integrators for classical Hamiltonian problems [50,51], allows for a long time evolution with minimal norm loss, which in particular remains stable also for relatively large time steps, in contrast to non-unitary integrators. Such stability over relatively large windows of time steps is well known for other nonlinear problems [52,53] and forms the basis of our method, which additionally has to deal with a fast-varying scale factor entering the nonlinear long-range potential.
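The kick-drift-kick structure described in this section can be sketched compactly. The snippet below performs one Strang step (half kinetic drift in k-space, full potential kick in x-space with the scale factor at the midpoint, second half drift); normalisation conventions and the function name are assumptions for illustration:

```python
import numpy as np

def strang_step(psi, t, dt, a_of_t, L):
    """One kick-drift-kick Strang step for the 1D SP system (sketch).

    The Poisson inversion with suppressed k = 0 mode is inlined; every
    factor applied to psi is a pure phase, so the step is unitary.
    """
    N = psi.size
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

    def half_drift(psi):
        # exact kinetic evolution over dt/2 in k-space
        return np.fft.ifft(np.exp(-0.5j * k ** 2 * (dt / 2)) * np.fft.fft(psi))

    def potential(psi):
        rho_k = np.fft.fft(np.abs(psi) ** 2)
        V_k = np.zeros_like(rho_k)
        V_k[k != 0] = -rho_k[k != 0] / k[k != 0] ** 2
        return np.real(np.fft.ifft(V_k))

    psi = half_drift(psi)
    V = potential(psi)
    psi = np.exp(-1j * a_of_t(t + dt / 2) * V * dt) * psi  # midpoint a(t)
    return half_drift(psi)
```

Because every stage is either an FFT or a pointwise phase factor, the norm is conserved to machine precision, matching the unitarity argument above.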

B-spline method
In atomic and molecular physics, the use of B-spline basis functions is very common [30,31,32,33], since this basis can often be adapted to the problem at hand. Its advantage is that the number of basis functions can often be reduced to a minimum due to the localised nature of the B-splines; the number of necessary functions is typically much smaller than the number of grid points N in our pseudospectral method. In the following, we provide a short overview of the method of B-splines and refer to the specialised literature for more details, see e.g. [54,55,56].

Spatial Discretization
B-splines are polynomial functions with compact support; cardinal B-splines are defined on an equidistant grid. A cardinal B-spline of first order is the indicator function

φ_1(x) = 1 for x ∈ [0, 1),  φ_1(x) = 0 otherwise.    (27)

A cardinal B-spline of order p, p ∈ N, is defined as the convolution

φ_p(x) = (φ_{p−1} ∗ φ_1)(x),    (28)

or equivalently by the recursion

φ_p(x) = [ x φ_{p−1}(x) + (p − x) φ_{p−1}(x − 1) ] / (p − 1).    (29)

From Eqs. (27) and (28) it follows that the Fourier transform of φ_p(x) is

F[φ_p](k) = [ (1 − e^{−ik}) / (ik) ]^p.    (30)

For a given interval [a, b] and for j = 1, . . ., N, we define the basis functions B_j(x) as shifted copies of φ_p, adapted to the grid spacing and periodically wrapped at the interval boundaries. Expanding the wave function in this basis, ψ(x, t) = Σ_j c_j(t) B_j(x), turns the Schrödinger equation into a linear system for the coefficient vector involving the overlap matrix A_{ij} = ∫ B_i(x) B_j(x) dx. It is the overlap matrix which causes off-diagonal elements in A. More precisely, A consists of a (p + 1)-wide band around the diagonal and triangular parts of size p in the lower-left and upper-right corners. The main disadvantage of the method is that the application of a matrix-vector product is much more involved compared to a diagonal or small-banded matrix in an orthogonal basis. The main advantage is that the number of basis functions is typically much smaller than the number of grid points used in an equidistant spatial basis such as in Sec. 3.1. The trade-off between these two aspects determines the usefulness of the B-spline approach. Anticipating results from below, we will actually see that for our gravitational problem a large number of B-splines is nevertheless necessary, such that the method becomes very time- and memory-consuming, making it less well adapted to our problem.
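The recursion for cardinal B-splines translates directly into code. The sketch below evaluates φ_p pointwise via the uniform-knot recursion (a variant of the Cox-de Boor recursion, equivalent to the convolution definition; support [0, p] is assumed):

```python
import numpy as np

def cardinal_bspline(p, x):
    """Evaluate the cardinal B-spline of order p with support [0, p],
    via the uniform-knot recursion
        phi_p(x) = [x phi_{p-1}(x) + (p - x) phi_{p-1}(x - 1)] / (p - 1),
    anchored at the first-order indicator function phi_1."""
    x = np.asarray(x, dtype=float)
    if p == 1:
        return np.where((x >= 0) & (x < 1), 1.0, 0.0)
    return (x * cardinal_bspline(p - 1, x)
            + (p - x) * cardinal_bspline(p - 1, x - 1)) / (p - 1)
```

Two quick sanity checks: φ_2 is the hat function (φ_2(0.5) = 0.5), and integer shifts of φ_p form a partition of unity, which is what makes B-spline expansions well conditioned.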
The time evolution over one step ∆t is realized with a Crank-Nicolson scheme,

(1 + i∆t/2 Ĥ) ψ(t + ∆t) = (1 − i∆t/2 Ĥ) ψ(t),    (34)

where Ĥ = Ĥ_K + Ĥ_V, and the time-dependent part Ĥ_V = (1/∆t) ∫_t^{t+∆t} a(t′) V(t′) dt′ is calculated with the help of the trapezoidal rule,

Ĥ_V ≈ [ a(t) V(t) + a(t + ∆t) V(t + ∆t) ] / 2.    (35)

Here, the potential V(t + ∆t) is needed at each time step. However, it depends on the unknown wave function ψ(t + ∆t). To circumvent this problem we use a predictor-corrector step, as applied in other nonlinear problems [52,53]. That is, we predict an intermediate solution ψ̃(t + ∆t) by applying the above-mentioned scheme (34) with

Ĥ_V ≈ a(t) V(t);    (37)

then we use this predicted wave function to solve the Poisson equation and obtain Ṽ(t + ∆t), which we use in the corrector step,

Ĥ_V ≈ [ a(t) V(t) + a(t + ∆t) Ṽ(t + ∆t) ] / 2.    (38)

In this way, we need to solve a system of four uncoupled linear equations at each time step (two each for the predictor and the corrector step, respectively). One corrector step is typically sufficient to deal with the time-dependent potential; similar to the case of the Gross-Pitaevskii equation [52,53], more corrector steps do not significantly improve the outcome, as we checked. The Hamiltonian is represented in the non-orthogonal B-spline basis, and Eq. (34) takes the form

(A + i∆t/2 H) c(t + ∆t) = (A − i∆t/2 H) c(t),    (39)

with the overlap matrix A and the Hamiltonian matrix H in the B-spline basis. This is the most time-consuming step in our integrator, since the matrices involved in Eq. (39) have a banded structure combined with nonzero triangular parts in the lower-left and upper-right corners due to the periodic boundary conditions. Summarizing, Eq. (39) provides the time evolution of our state, where H is calculated using the predictor and corrector steps (37) and (38), and the potential V(t) is obtained by solving the Poisson equation using a direct matrix inversion of the Laplace operator. As in Eq. (25), we approximate a(t) with the mid-point rule.
Altogether, due to the Crank-Nicolson scheme, the B-spline method is also unitary and hence norm-conserving, just like the pseudospectral method of Sec. 3.1.1. Norm (or mass) conservation is crucial in our approach in order to keep numerical errors small and to allow for not-too-small values of the time step ∆t.
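The algebraic structure of the Crank-Nicolson predictor-corrector step can be sketched without the B-spline machinery. The snippet below works on a plain periodic finite-difference grid, i.e. with the overlap matrix replaced by the identity and a caller-supplied Poisson solver standing in for the banded solve; it is a simplified illustration, not the paper's B-spline implementation:

```python
import numpy as np

def cn_predictor_corrector(psi, V, t, dt, a_of_t, dx, poisson):
    """One Crank-Nicolson step with a single predictor-corrector pass.

    `poisson` is a placeholder callable returning V from psi. Dense
    matrices are used for clarity, not speed.
    """
    N = psi.size
    # dense periodic second-derivative matrix
    D2 = (-2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
          + np.eye(N, k=N - 1) + np.eye(N, k=-(N - 1))) / dx ** 2
    HK = -0.5 * D2

    def cn_solve(H):
        # Cayley form: (1 + i dt/2 H) psi_new = (1 - i dt/2 H) psi_old
        A = np.eye(N) + 0.5j * dt * H
        B = np.eye(N) - 0.5j * dt * H
        return np.linalg.solve(A, B @ psi)

    # predictor: freeze the potential term at time t
    psi_pred = cn_solve(HK + np.diag(a_of_t(t) * V))
    # corrector: trapezoidal average with the predicted potential
    V_pred = poisson(psi_pred)
    HV = 0.5 * (a_of_t(t) * np.diag(V) + a_of_t(t + dt) * np.diag(V_pred))
    return cn_solve(HK + HV)
```

Since the Hamiltonian matrix is Hermitian in each solve, the Cayley transform is unitary and the norm is preserved to solver precision, mirroring the norm-conservation argument above.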

Numerical Results
In the following, we provide an in-depth numerical comparison of the pseudospectral method of Sec. 3.1 and the B-spline approach of Sec. 3.2, applied to the numerically challenging problem of integrating the Schrödinger-Poisson equation in an expanding, flat FLRW universe, i.e. when a(t) is given as the solution of the background Eq. (7). Sec. 4.1 establishes PS as our baseline scheme through a comprehensive study of its convergence and accuracy as a function of the spatio-temporal grid. The obtained high-resolution reference solutions are then used in Sec. 4.2 for an exhaustive comparison against the independently implemented Crank-Nicolson B-spline approach: Sec. 4.2.1 investigates deviations and systematics in the nonlinear matter power spectrum, while Secs. 4.2.2-4.2.4 assess the adherence of both schemes to the conservation properties introduced in Sec. 2.3. We close by benchmarking both schemes in terms of CPU runtime and memory in Sec. 4.2.5.
For reference, the chosen box sizes are, in our dimensionless units, L = 100, 500, and 1000, corresponding to cosmologically relevant sizes of L ≈ 2, 10, and 20 Mpc for the parameters reported in Sec. 2.1. The typical distance between galaxies is of O(1 Mpc), for instance.
For small L, the initial state, as described in Sec. 2.2, is mostly a single-period sinusoidal function. The amplitude fluctuations of these states first rise to δ ∼ 0.15 at L ≈ 100; beyond that, the number of periods grows, making the state highly oscillatory. All simulations depart from an initial coupling strength of a(t_0) ≈ 0.01 at the initial time t_0 = 0.

Pseudospectral Method
The first point we want to check is the convergence of the PS method with respect to the system size in the number of points in space and time. Hence, we look for the convergence plateau in both grid spacings (∆x, ∆t). A fine-grained solution with the parameters N = 2^20, ∆t = 10^{−5} serves as the reference relative to which errors are calculated. Figure 2 shows the error ∆ϵ, which is computed as the relative Euclidean norm of the difference to the reference solution,

∆ϵ = ||ψ − ψ_ref||_2 / ||ψ_ref||_2.    (40)

The black squares indicate our reference solution in Fig. 2. The parameter space spans three orders of magnitude in the space domain, N ∈ {2^10, ..., 2^20}, as well as in the time domain, ∆t ∈ {2^0, ..., 2^10} · 10^{−5}. Three box sizes are plotted row-wise. a(t) is the natural choice to indicate the time evolution, since it couples the nonlinear potential to the Schrödinger equation and therefore provides the best physical insight. For example, we can expect matter fluctuations to be dispersed randomly at a ≪ 1 and to clump as a → 1. Errors induced by the space grid and the time grid are clearly distinguishable in Fig. 2. On the one hand, an insufficient Fourier basis manifests itself as a sharp edge of ∆ϵ in the N-direction (y-axis). The problem becomes more nonlinear either when going to a larger box size or with increasing scale factor a(t) during the integration. This implies increasingly compact density distributions in x-space and hence a wider distribution in k-space, which has to be compatible with the maximal resolved wavenumber set by the grid spacing L/N. The minimum requirement N_min for a converged solution is that all modes in the power spectrum are resolved, i.e. that the natural roll-off dives into the noise floor of numerical precision. We refer here already to Sec. 4.2.1, in which we discuss the intricacies of the spatial representation in more detail; see in particular Fig. 5 there.
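The relative Euclidean error used throughout this convergence study is simple to compute; the sketch below additionally subsamples the reference onto the coarse grid, which is possible here because all grids contain N = 2^m points (our assumption about the grid hierarchy, consistent with the N range quoted above):

```python
import numpy as np

def relative_error(psi, psi_ref):
    """Relative Euclidean distance ||psi - psi_ref|| / ||psi_ref||.

    If the coarse run uses fewer grid points, the reference solution is
    subsampled onto the coarse grid by striding; this assumes both grids
    contain a power-of-two number of points sharing the same box.
    """
    stride = psi_ref.size // psi.size
    ref = psi_ref[::stride]
    return np.linalg.norm(psi - ref) / np.linalg.norm(ref)
```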
Although a smaller grid N < N_min does not resolve all of the small-scale dynamics, especially as a → 1, the integrator still stays stable. On the other hand, ∆ϵ approaches the convergence plateau in ∆t quadratically, ∆ϵ ∝ ∆t^2, as the second-order splitting scheme suggests; see Fig. 3. Even though the absolute value of the errors changes, the shape of the temporal convergence stays approximately constant. We deem ∆t = 8 × 10^-5 the first temporal grid spacing inside the convergence plateau.
One has to consider that if the box size is changed, the different degree of nonlinearity drastically affects the evolution dynamics. As the box size increases, the total mass increases due to the normalization condition, see Eq. (4), and the local potential wells become deeper on average, which is a prerequisite for more dynamics of the wave function. At the same time, this adds to the computational cost, because more Fourier modes are needed to resolve the emerging peculiar local velocities v_pec, which are defined by local phase changes of the wave function. In the Madelung representation ψ = √ρ e^{iS}, the peculiar velocity is set by the phase gradient, v_pec ∝ ∂_x S. Since the exponential function is 2π-periodic, we need to restrict phase jumps between adjacent grid points to ∂_x S < 2π/∆x, which in fact is the maximal phase velocity we can numerically faithfully simulate.
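To make the splitting structure concrete, a single second-order Strang step for a toy (1+1) Schrödinger-Poisson system might look as sketched below. The units, the coupling constant, and the Poisson convention (sourcing by the density contrast) are simplifying assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def strang_step(psi, dt, k, coupling=1.0):
    """One Strang step for a toy 1D Schroedinger-Poisson system:
    half kinetic step (k-space), full potential step (x-space, with the
    potential from Poisson's equation), half kinetic step."""
    # half kinetic step: multiply by exp(-i k^2/2 * dt/2) in Fourier space
    psi = np.fft.ifft(np.exp(-0.25j * k**2 * dt) * np.fft.fft(psi))
    # solve d^2V/dx^2 = coupling * (|psi|^2 - mean) spectrally
    rho_k = np.fft.fft(np.abs(psi)**2 - np.mean(np.abs(psi)**2))
    k2 = k**2
    k2[0] = 1.0                   # avoid division by zero at k = 0
    V_k = -coupling * rho_k / k2
    V_k[0] = 0.0                  # the zero mode is fixed by the mean subtraction
    V = np.fft.ifft(V_k).real
    # full potential step in real space
    psi = np.exp(-1j * dt * V) * psi
    # second half kinetic step
    return np.fft.ifft(np.exp(-0.25j * k**2 * dt) * np.fft.fft(psi))
```

Since every substep is a pure phase multiplication in either x- or k-space, the step is unitary and the total mass is conserved to machine precision, as discussed for the full scheme in the text.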

Comparison of the pseudospectral and the B-spline method
With the scope of high computational performance in mind, two aspects can be checked for improvements: the spatial and the temporal representation of the problem. Regarding the former, we now compare our PS method with the B-spline method introduced in Sec. 3.2. Our B-spline method has been implemented independently by the Calì group, which makes the comparison with the PS technique developed in Parma/Heidelberg particularly meaningful.
As a first measure, we directly compare the wave functions from the PS method, ψ_PS, and the B-spline method, ψ_BS, via their distance ∆ = |ψ_PS − ψ_BS|. In Fig. 4 both wave functions are shown, as well as their difference, at various stages of the integration, given in terms of the scale factor a(t); please note the difference in scale between the two. At the simulation start, at a ≈ 0.01, corresponding to a redshift z = 100, numerical errors of O(10^-20) or below appear, which are within our numerical precision (standard double precision was used throughout). At a = 0.1, the densest region, on the left of the panels in Fig. 4, begins to collapse due to the gravitational attraction. As expected, in the areas of depletion and accumulation of mass, the differences between the codes begin to show. Both solutions are always close, and the difference grows to at most O(10^-5) at the end of the simulation, i.e. at a = 1. We can say that the two independently developed methods agree reasonably well. This is so far the best confirmation of the quality of our methods, next to self-consistent checks such as the ones presented in the next sections.

Power Spectra
Going to larger box sizes, L ∈ [500, 1000], shows the weakness of the B-spline method. This is best seen when looking at the power spectra of the density fluctuations.

Figure 4: Wave function in a small box (L = 100) of the B-spline (grey dashed line) and the Fourier (blue solid line) method, and their difference ∆ (black solid line), at various stages of the integration, given in terms of the scale factor: a = 0.01 (A), a = 0.1 (B), and a = 1 (C). ∆ is limited by numerical precision at the beginning of the simulation and grows to O(10^-5) at the end, at a = 1 (C). We further see that after a short initial stage the differences between the two methods are approximately proportional to the density itself (B, C), as explained in the main text.

Figure 5 presents the power spectra
for positive modes only, since the spectra are symmetric around zero for real densities. The PS results are shown in blue, underlying the B-spline ones in grey. For this plot, the number of B-spline basis functions, N_Spline = 125000, was chosen very large, such that a Fourier basis of that size can resolve the entire spectrum.
In the lower part of the spectrum, k < N_Spline/2, both power spectra follow each other closely, as they should. We observe, however, additional peaks in the B-spline spectra at k = nN_Spline, n ∈ N, which exist early on and grow in time not only in amplitude but also in extent. They originate in an inherent lack of resolution of the B-spline basis at those scales, as can be seen from the Fourier transform, Eq. (30); see Fig. 6. To ensure convergence of the result, the spot where the B-splines have a dip in resolution must be far outside the natural roll-off of the power spectrum. A Fourier (PS) method, on the contrary, can be limited to the roll-off and therefore seems better suited to solve the Schrödinger-Poisson equation than the B-spline approach, which must include a very large number of basis functions to avoid problems in the momentum-space resolution. This goes against the expectation that a B-spline method would demand only a small basis size. As a consequence, its main advantage, see Sec. 3.2, is lost, and the method works efficiently only for small box sizes. Currently, it is not efficient to use B-spline basis sizes greater than 150k due to the high CPU time required. However, the B-spline code still has some room for improvement, since its routines could, in principle, be parallelized. There seems to be only little improvement possible by further optimisation of the B-spline parameters to enhance the spatial resolution. Alternatively, one may try a different set of basis functions, or some kind of adaptive mesh refinement [23,24,58,59,60,61].
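A power spectrum of the kind compared above can be sketched in a few lines. The normalization below is illustrative, not necessarily the paper's convention.

```python
import numpy as np

def power_spectrum(delta, L):
    """One-sided power spectrum of real density fluctuations delta(x)
    sampled on a uniform grid in a box of size L (sketch normalization)."""
    N = delta.size
    delta_k = np.fft.rfft(delta) / N            # one-sided FFT for real input
    k = 2 * np.pi * np.arange(delta_k.size) / L  # non-negative wavenumbers
    return k, np.abs(delta_k)**2
```

A single-cosine fluctuation then puts all power into its own mode, which is a quick sanity check before looking at full simulation spectra.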

Conservation of Mass
A physical way to assess the accuracy of our integrators is to check how well the norm, i.e. the total mass, is conserved. We can expect good mass conservation, since the integrators used are unitary by design. We measure the absolute deviation ∆M(t) = |M(t) − M(t_0)|, cf. Eq. (12). Figure 7 shows how the mass evolves with time, beginning at |∆M| ≈ 0, where 'approximately' means below O(10^-12). The upper row derives from the PS method, the lower row from the B-spline approach. The three panels reflect the different box sizes L = 100, 500, and 1000. The PS method conserves the mass well (|∆M| < 10^-6) in all cases, while the B-spline method struggles with large box sizes, where the error becomes ∆M = O(10^-3). It is worth mentioning the correlation between the increase of mass and the generation of power above the noise floor in Fig. 5, as discussed in Sec. 4.2.1.
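On a discrete grid, the mass check reduces to a sum over the density; the following sketch (with an illustrative normalization, not the paper's code) shows the idea.

```python
import numpy as np

def mass_deviation(psi, dx, M0):
    """Absolute deviation |M(t) - M0| of the discrete total mass
    M(t) = sum |psi|^2 dx from its initial value M0 (sketch)."""
    return abs(np.sum(np.abs(psi)**2) * dx - M0)
```

Since unitary evolution only rotates phases, any pure-phase change of ψ leaves this quantity at machine-precision zero.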
Figure 6: Power spectra of (i) the state in the large box (L = 1000) at the end of the simulation (blue line), (ii) a single B-spline on an N = 10^6 grid with order 1 (purple line), and (iii) single B-splines on an N = 125000 grid with order 4 (orange line), 6 (green), and 8 (red). At multiples of the grid size N, the single splines have minima in their power spectrum, e.g. at 125k for the green curve. That is where power is generated in simulation runs (blue curve). Splines of different order exhibit their minima at the same position. Whether the additionally generated power is negligible depends on how far outside the natural roll-off of the power spectrum these minima lie; e.g., the first minimum of the purple B-spline is well outside the depicted range.
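The resolution dips of single B-splines at multiples of the grid size can be reproduced in a minimal numerical experiment. An order-1 B-spline is simply the indicator function of one grid cell, whose sinc-like spectrum vanishes exactly at mode indices k = n·N_grid; the grid sizes below are hypothetical small values chosen for speed, not those of Fig. 6.

```python
import numpy as np

# Dense sampling of the unit box and a coarse B-spline knot grid.
M = 4096                       # number of dense samples on [0, 1)
n_knots = 128                  # B-spline grid size ("N" in the text)
width = M // n_knots           # dense samples per knot interval

# Order-1 B-spline = box function over a single knot interval.
spline = np.zeros(M)
spline[:width] = 1.0

# Its power spectrum has exact zeros at mode indices k = n * n_knots,
# the "dips in resolution" discussed in the text.
power = np.abs(np.fft.rfft(spline))**2
```

Higher-order splines (repeated convolutions of this box) share the same zero positions, consistent with the statement in the caption that splines of different order exhibit their minima at the same place.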

Conservation of Momentum
The discrete version of Eq. (13) takes more thought. Here the results may be very sensitive to how the derivative is actually computed, especially when low-order finite-differencing schemes are applied. We take the most accurate route and use the spectral derivative, ∂_x ψ = F^{-1}[ ik F[ψ] ], with F the discrete Fourier transform. While for our PS results the situation and the error are similar to the case of mass conservation discussed above, we observe that the B-spline approach leads to relatively large errors in the momentum conservation, in particular for larger box sizes and longer integration times, i.e. larger a(t), even exceeding 10^-1 for a → 1 at L = 500 and 1000. Those relatively large errors arise from taking the spatial derivative in position representation, which is necessary since the B-spline integrator works exclusively in the spatial x-representation, see Sec. 3.2. We have checked this by explicitly differentiating our PS results in position representation in a similar manner, either with a three-point or a five-point method, which for large a(t) > 0.1 induces a similar increase in error.
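The spectral-derivative route for the total momentum can be sketched as follows; units with ħ = m = 1 and the overall normalization are assumptions for illustration.

```python
import numpy as np

def spectral_momentum(psi, L):
    """Total momentum <P> = Im[ sum conj(psi) d_x psi ] dx, with the
    spatial derivative evaluated spectrally: d_x psi = IFFT(i k FFT(psi)).
    Sketch with hbar = m = 1; not the paper's actual routine."""
    N = psi.size
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumber grid
    dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))  # spectral derivative
    return np.imag(np.sum(np.conj(psi) * dpsi)) * (L / N)
```

For a normalized plane wave e^{i k0 x} this recovers k0 to machine precision, whereas low-order finite differences would be exact only in the continuum limit.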

Energy Evolution
Figure 9 depicts the relative deviation, cf. Eq. (17), as a function of the scale factor a for various box sizes and integrators, including the time-adaptive extension of the PS method, which we present in Sec. 5. To arrive at this result, and in line with the computation of the linear momentum, we compute the instantaneous kinetic energy by evaluating the spatial derivative in k-space for the PS method. The temporal integration in Eq. (17) is approximated by a more physical uniform sampling in a(t) rather than in t. All methods show a rather small maximal relative deviation of 10^-3, which is only reached at the final integration time, i.e. at a = 1.

Numerical performance tests
With the cosmological background in mind, the simulations discussed above represent reasonable box sizes of a few Mpc, the typical distance between galaxies, to a few tens of Mpc. All PS-based simulations were executed on the bwUniCluster through Heidelberg University^1 with Intel Xeon Gold 6230 processors with 1.25 MB of L1 cache, which supports basis sizes up to N ≈ 2^14 without additional RAM. Fourier transforms are sped up with the Fast Fourier Transform (FFT) package FFTW and parallelized with OpenMP on 16 cores. The computational resources needed for those simulations are shown in Fig. 10. With a time step ∆t = 10^-2, much larger than appropriate from the convergence study, even hypothetical basis sizes of
O(10^8) would be feasible. The CPU time can be linearly rescaled if smaller time steps are used, while the memory consumption is unaffected by a change of ∆t alone. As expected for FFTs, the CPU time approaches a scaling ∝ N log N and the memory (RAM) a linear scaling ∝ N at large basis sizes. At medium basis sizes (N = 10^5, 10^6), caching limitations unfortunately play a significant role in the time consumption.
The bwUniCluster's main ('Thin') nodes have a memory capacity of O(100 GB), which limits us to basis sizes N ≤ 10^9. CPU time and memory thus have plenty of headroom for our one-dimensional simulations. The situation would surely change in higher dimensions: in 3D, for example, N = 10^9 means 10^3 grid points in every dimension. That is why many Fourier-based approaches only handle resolutions up to N = 1024^3, see also [27,40]. Whether memory consumption will become a limiting factor depends on how the physical behaviour changes when we move to higher dimensions. Since the interaction potential is more confined in 2D and 3D, and the force law is not asymptotically constant as in 1D, we have reason to believe that the situation is less problematic than in an upscaled 1D version. However, [23] reported a dynamic range of O(10^5) to be necessary to resolve compact halo cores. On the one hand, a box size of a few Mpc is needed to accumulate enough mass for the formation of a halo core, i.e. in order to simulate cosmologically interesting evolutions. On the other hand, the spatial resolution must be at least a few tens of pc to capture highly oscillating wave functions.
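A back-of-envelope estimate makes the 3D memory constraint concrete (this assumes a single complex double-precision field; a real splitting code additionally holds several copies plus FFT work arrays).

```python
# Memory for one complex double-precision field on a 1024^3 grid.
N_3d = 1024**3                  # total grid points at 1024 points per dimension
bytes_per_point = 16            # complex128: two 8-byte doubles
gib = N_3d * bytes_per_point / 2**30
print(f"one field copy: {gib:.0f} GiB")   # 16 GiB per copy
```

With several such copies in flight, an O(100 GB) node is indeed close to the N = 1024^3 ceiling quoted in the text.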
Bigger nodes or larger supercomputers could deliver a maximum performance upgrade of O(10-100). That might still not be enough for full-fledged three-dimensional simulations, which is why we investigate an optimization of the performance in the next section. We will see there that with an optimized time stepper a speed-up factor of a few tens can be gained in CPU time for our (1+1) SP problem.

Optimized time stepping
While working well for small box sizes and a small number of basis functions, our B-spline integrator, presented in Sec. 3.2, unfortunately turned out to perform less well than our PS (Fourier) evolution code, see Sec. 3.1. Hence, in the following we exclusively discuss the PS method.
Besides trying to change the spatial grid representation, we attacked the temporal discretization. The first improvement we tried was a higher-order integrator with respect to the second-order Strang splitting reported above. While obviously more expensive in the necessary number of operators, such a higher-order integrator might allow for larger time steps and thus overall result in a gain in CPU time. With a fixed time step, as in our original code reported in Sec. 3.1, this was not able to give significant gains. However, by adding the possibility of an adaptive time step, we achieved significant improvements, which are reported in the following.

Fourth-order integrator
A generalized splitting scheme of order p has the following form:

Û(∆t) = Û_K(b_s ∆t) Û_V(a_{s-1} ∆t, t_{s-1}) Û_K(b_{s-1} ∆t) · · · Û_V(a_1 ∆t, t_1) Û_K(b_1 ∆t) ,   (46)

where a_i (i = 1, 2, ..., s − 1) and b_i (i = 1, 2, ..., s) are the splitting coefficients and t_i = t_0 + Σ_{j=1}^{i} b_j ∆t. Here s is the number of stages, which does not necessarily coincide with the accuracy p; this is true only for certain low-order schemes, such as the Strang splitting of Eq. (20). A plethora of higher-order integrators is on the market; see e.g. [62], which gives some insight into the construction of these schemes, and we refer to [63] for a collection of splitting coefficients. Please note that any scheme of the form of Eq. (46) is again norm preserving, since each single evolution over one time step is, independently of the precise value of the step.
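The alternating structure of such a scheme can be captured by a small driver routine. The interface below (callables for the kinetic and potential substeps) is a sketch, not the paper's actual code.

```python
import numpy as np

def splitting_step(psi, dt, a_coeffs, b_coeffs, kinetic, potential):
    """One step of a generic splitting scheme in the spirit of Eq. (46):
    s kinetic substeps with coefficients b_i alternate with s-1 potential
    substeps with coefficients a_i.  `kinetic` and `potential` are
    callables (psi, tau) -> psi applying the respective propagator."""
    for a, b in zip(a_coeffs, b_coeffs):      # pairs up the first s-1 stages
        psi = kinetic(psi, b * dt)
        psi = potential(psi, a * dt)
    return kinetic(psi, b_coeffs[-1] * dt)    # closing kinetic substep
```

Because each substep is applied as its own (unitary) propagator, the composed step stays norm preserving for any coefficient choice, as noted in the text.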
We chose a fourth-order method, since it offers a significant improvement in accuracy while being compatible with the adaptive time-step selection explained in Sec. 5.2. In particular, we use the splitting scheme 'Emb 4/3 BM PRK/A' from [62]. It is an embedded scheme, meaning it has two co-working integrators which can be compared along the integration: one is third-order accurate (s = 4 stages) and acts as the 'controller', the other is fourth-order accurate (s = 7 stages) and acts as the 'worker'. Both share the first three splitting coefficients, which reduces the effective number of stages. Instead of the two Fourier transforms per time step of the Strang splitting reported in Sec. 3.1.1, 17 (sic!) of them are necessary in the new scheme. That already exploits the 'first same as last' (FSAL) property, whereby consecutive kinetic operators Û_K(b_last ∆t) · Û_K(b_1 ∆t) are applied directly, without the need of an in-between Fourier transform.
An adaptive time-step selection is then implemented by comparing the fourth- and the third-order integrator. This comparison gives an intrinsic, time-dependent error estimate. The advantage of such an intrinsic error estimate is that other checks, such as those based on the famous Courant-Friedrichs-Lewy (CFL) criterion for many finite-differencing schemes [48,64], as applied e.g. in [24,58,65] in our context, are not required.

Adaptive Time Stepper
As discussed in detail in [66,67], by comparison of the worker (fourth-order) and the controller (third-order) we obtain an error estimate which is used to determine the next step size ∆t_new from the current step ∆t after each time step. The rule from [66,67] reads

∆t_new = ∆t · min( α_max, max( α_min, α (tol/∆ϵ_loc)^{1/(order+1)} ) ) ,   (47)

where ∆ϵ_loc acts as a time-local absolute error measure and 'order' accounts for the order of the chosen splitting scheme, in our case fourth ('w' for worker) and third order ('c' for controller). The choice of the absolute rather than the relative error, see Eq. (40), proved advantageous, since the latter showed a stronger dependence on the box size L. Following [66], we chose the parameters α_max = 4, α_min = 0.25, α = 0.9, tested by an optimization 'by hand'. Besides the reasonable choice α_max > 1 and α_min < 1, Ref. [66] states that the exact values of the αs are not so important; the time step is then mostly self-organizing well within the mentioned limits of the parameters. Our simulations confirm this. What is important is to avoid too high rates of change of the time step, which is controlled by the two parameters α_max and α_min. When the wave function ψ becomes quasi-stationary, hence in a quasi-stable regime, occasional large jumps of ∆t to large values are prevented by α_max. On the other hand, when two peaks in the density |ψ|^2 merge due to their gravitational attraction, too fast changes to small ∆t are controlled by α_min. Typically, for our cosmological simulations the initial time step can be chosen relatively large, since the mentioned problems typically do not occur at early times; the self-adaption of the time step is quite fast, typically within just a few steps, during which substantial errors could otherwise accumulate. Our choice of ∆t_ini = 10^-3 turns out to be sufficient for most purposes, also in comparison with the simulation using the second-order Strang splitting. From the perspective of an end-user, getting a solution of a desired accuracy is reasonably simple: only the targeted error
per time step, tol, has to be chosen, which is then compared to the local error ∆ϵ_loc, see Eq. (48). If the ratio tol/∆ϵ_loc > 1, the time step is increased; if tol/∆ϵ_loc < 1, it is decreased, following the algorithm of Ref. [66]. As mentioned after Eq. (46), adapting ∆t still gives a unitary time evolution for each single step, and hence the entire evolution remains norm preserving.
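The step-size rule can be condensed into a few lines. The parameter values are those quoted in the text; the exact exponent convention of Refs. [66,67] is an assumption here (the controller's third order is used).

```python
def next_step(dt, err_loc, tol, order=3,
              alpha=0.9, alpha_min=0.25, alpha_max=4.0):
    """Step-size controller in the spirit of the rule described in the
    text: grow the step when tol/err_loc > 1, shrink it otherwise, with
    the rate of change capped by alpha_min and alpha_max (sketch)."""
    factor = alpha * (tol / err_loc) ** (1.0 / (order + 1))
    return dt * min(alpha_max, max(alpha_min, factor))
```

The caps make the controller robust: a quasi-stationary phase cannot blow the step up by more than a factor of 4 per step, and a merger event cannot shrink it by more than a factor of 4 per step.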
A few results based on our adaptive-time-step PS integrator have already been shown in Figs. 7 to 9, where the conservation of mass and momentum and the energy evolution were tested. In all three cases, the adaptive technique compared favourably with the constant-time-step PS and the B-spline methods.
Next, we investigate the behaviour of the adaptive integrator in more detail in the following systematic manner. Three box sizes L ∈ {100, 500, 1000} are chosen, as for all previous plots. To guarantee convergence in the space domain, we set the spatial grid size to the large value N = 2^20 ≈ 10^6. The initial time step is ∆t_ini = 10^-3 for our adaptive stepper. A scan of tolerance parameters tol ∈ {10^-1, 10^-2, 10^-3, 10^-4, 10^-5} controls the size of the time steps chosen by the rule of Eq. (47). The relative Euclidean norm ∆ϵ, see Eq. (40), between the adaptive-step solution and the reference solution is computed. As before, our reference are the simulations with the second-order Strang splitting at ∆t = const. = 10^-5 and the same spatial grid N = 2^20. Figure 11 shows these results; a second-order solution with ∆t = 8 × 10^-5 serves as a guide therein. This guiding solution is systematically approached by the adaptive ones as tol becomes smaller, see the left panel. The maximal accuracy of any solution is limited by the fidelity of the reference itself, which is estimated by ∆ϵ of a nearby solution within the convergence plateau (see Fig. 2). Taking the guiding and the reference solutions for this error estimate, we conclude that the reference is O(10^-5) accurate at the end of the simulations, i.e. at a(t) = 1.
The right panel in Fig. 11 shows how the time step evolves in time, measured in the scale factor a(t). All lines depart from the initial value ∆t(a_ini) = 10^-3. The time step grows until a ≈ 0.02 for large tol. Whenever ∆t ≥ 1, the evaluation of the step size becomes unstable and ∆ϵ fails to serve as a local error estimate; see the data with tol = 10^-1 in Fig. 11. The value of tol for which this happens depends on the box size L, which governs the total mass in our system. When ∆t ≤ 1, the time step is automatically reduced, typically monotonically but with possible minor fluctuations on top, e.g. at instances when the evolution changes fast due to an increased nonlinearity; see at a ≈ 0.1 in Fig. 11(B, right panel). A posteriori, after having done such detailed simulations as shown in Fig. 11(B), one may redo the computations with the specific series of time steps, e.g. extracted from the power-law scaling of the optimal time steps suggested by the figure. However, there cannot be any a priori knowledge of those optimal values, and the initial steps, for which a relatively large time step is possible (see the discussion above), would have to be excluded anyhow.
The total CPU time of our PS computation with and without adaptive time stepping is shown in the left panel of Fig. 12. All results are obtained with the hardware reported in Sec. 4.2.5. In Fig. 12, every box size uses the largest stable tolerance parameter tol, extracted from figures such as Fig. 11 for each case independently. An important result that we can read off Fig. 12 is that all adaptive runs are much faster than the standard Strang splitting method. By how much is displayed in the right panel of Fig. 12, where we show the speed-up factor for adaptive solutions with the highest possible tolerance with respect to the runs with the second-order PS method and constant time steps. Albeit these are not always the most precise solutions, the error ∆ϵ is below 10^-5 in all cases; for L = 500 this is shown in Fig. 11. Remarkable speed-up factors of O(10) to O(100) are possible, in particular for the larger spatial bases with N ≳ 10^5, where they help most.
Three immediate applications of this time gain come to mind. Firstly, medium-sized computations become feasible on desktop computers without massive parallelization. Secondly, scanning a larger parameter set, e.g. of cosmological initial states or of different box sizes (impacting the total mass of the system), can be done much faster. Lastly, it might allow us to tackle more complex problems that require larger basis sizes, also with possible extensions to higher spatial dimensions in mind.

Conclusion
We provided a detailed convergence analysis of our pseudospectral (PS) method for the simulation of wave-like fuzzy dark matter in one spatial, (1+1), dimension. Our analysis is partly based on a direct comparison with an independent technique based on B-splines. This important comparison confirms that both methods essentially work and produce valid physical results for a problem for which not many analytical results exist, and which is complicated by delocalised cosmological initial conditions and a time-dependent, nonlinear potential. In particular, detailed error measures, either directly confronting two independent wave-function solutions or based on conserved quantities, have been studied to underpin the convergence of our results.
Unfortunately, the B-spline technique turned out to be less efficient in time and memory consumption than our PS method, typically leading to considerably larger errors for larger system sizes. A considerable increase of the basis size would be necessary to mitigate those errors, which would only be possible with a massive parallelization of the method.
Our results show how the PS numerical simulations depend on time (CPU) and memory resources. An important advancement is the implementation of an adaptive-time-step fourth-order splitting algorithm that gives a significant speed-up of the order O(10) to O(100) with respect to our second-order code with fixed time steps, without losing unitarity in the evolution. With this speed-up, we hope to explore more complex terrain, e.g. by extending our approach to (2+1) dimensions. At least for small quadratic boxes this should immediately be possible, as Refs. [14,23,28,29] show for two- and three-dimensional boxes with grid sizes of a few thousand points per direction. This means that the memory for the spatial representation will be the limiting factor for simulations in higher dimensions. In that case new solutions, e.g. based on finite elements [68] or adaptive mesh refinement [23,24,58,59,61,69], should be identified, implemented, and tested in convergence and quality in full detail.

Figure 1 :
Figure 1: A set of 14 B-splines of order p = 4 for periodic boundary conditions in the interval [a, b].
where x_j = a + (j − 1)∆x, with ∆x = (b − a)/N. These functions form a complete basis of N B-splines of order p for periodic boundary conditions in the interval [a, b]. Basic B-spline functions are exemplarily shown in Fig. 1 for p = 4 and N = 14. As can be seen in the figure, the B-spline basis functions are not orthogonal: the overlap matrix B, with elements B_jk = ⟨B_j|B_k⟩, is not diagonal. Therefore, the matrix representation of an operator Â differs from its representation in an orthogonal basis in the following way: if Ã_ik are the expansion coefficients of the action of Â on the B-spline |B_i⟩,

Figure 2 :
Figure 2: Convergence of the PS method with respect to different parameters in the (1+1)-SP. Each cell colour-maps the error ∆ϵ with respect to the black-square reference. The three box sizes L ∈ {100, 500, 1000} are plotted row-wise. The time development is given column-wise in terms of the scale factor a. The N-grid is in powers of 2 to ensure overlapping spatial nodes. To make the plot easier to read, N and ∆t have ticks at powers of 10. Errors induced by constraints in ∆x show a sharp edge in the N-direction, which moves toward higher N as the nonlinearity increases. Errors in the time domain, on the contrary, create a smooth saturation with decreasing ∆t that does not move with time.

Figure 3 :
Figure 3: Scaling of the convergence of the Fourier-Strang integrator as a function of the time step ∆t for the box sizes shown in the legend. As expected from the second-order scheme, the plateau is approached as ∆ϵ ∝ ∆t^2.

Figure 5 :
Figure 5: (a) Evolution of the power spectra in the largest box studied here, L = 1000, in terms of the scale factor a(t), see legend. The Fourier (PS) method (blue) underlies the B-splines in grey. Modes k < N_Spline/2 coincide in their spectra. (b) Zoom into the 'oversampled' B-splines. Artificial peaks show up at multiples of the grid size, at k = nN_Spline, n ∈ N (black dashed lines). They exist early on and expand further during the integration.

Figure 7 :

Figure 7: Mass conservation of the PS, the B-spline, and the new adaptive method (to be discussed in Sec. 5) for three box sizes. The Fourier method conserves the total mass below the 10^-6 range, whereas the B-spline method is at an advantage for smaller L but struggles with the larger boxes, where |∆M| grows to O(10^-3). The adaptive method seems to provide the best solutions: it starts at |∆M| < O(10^-12) and stays bounded by |∆M| = O(10^-9).

Figure 8 :

Figure 9 :
Figure 9: Energy test from Eq. (45) vs. a(t) for box sizes L = 100 and L = 500. In the former case, results from the PS (dark green and brown lines) and the B-spline (wine-red line) methods are compared. Local averages over small windows are taken along a uniform grid in a(t), with standard deviations (shaded areas). As in previous plots, the time-adaptive PS (brown line) tends to work best, but all methods show a rather small maximal error of 10^-3 at a = 1. The minima shortly before, in the brown and wine-red curves, are due to sign changes of the argument of the modulus in Eq. (45).

Figure 10 :
Figure 10: Total CPU time and memory requirements as a function of the basis size N. The expected scalings ∝ N log N and ∝ N are recovered for large N. Small-basis simulations are more efficient as long as the important vectors fit into the processor's cache; runs with medium-sized N scale worse, because the limiting factor is then the memory access. The left half shows data which was mapped linearly to match a time step ∆t = 10^-5. The arrows mark the minimal requirements for the specific problem sizes L = 100 and L = 1000 from our previous figures.

Figure 11 :
Figure 11: Left panel (A): Deviation ∆ϵ, see Eq. (40), from the reference solution (Strang splitting, ∆t = const. = 10^-5, N = 2^20) as a function of the stepping parameter tol (see the legend for its values) of our adaptive time stepper. All data are for the L = 500 box. Five adaptive, fourth-order solutions with different tol parameters are shown, as well as one converged second-order solution with a constant time step ∆t = 8 × 10^-5. Right panel (B): Evolution of the time step ∆t during the integration for the adaptive stepper. The time-step selection works well for tol ≤ 10^-2.