Bayesian inference for near-field interferometric tests of collapse models

We explore the information which proposed matterwave interferometry experiments with large test masses can provide about parameterizable extensions to quantum mechanics, such as have been proposed to explain the apparent quantum-to-classical transition. Specifically, we consider a matterwave near-field Talbot interferometer and Continuous Spontaneous Localisation (CSL). Using Bayesian inference we compute the effect of decoherence mechanisms including pressure and blackbody radiation, find estimates for the number of measurements required, and provide a procedure for optimal choice of experimental control variables. We show that in a MAQRO-like experiment it is possible to reach masses of $\sim10^9\,\text{u}$ and we quantify the bounds which can be placed on CSL. These specific results can be used to inform experimental design, and the general approach can be applied to other parameterizable models.


I. INTRODUCTION
Quantum mechanics is a remarkably successful physical theory for predicting microscopic behavior, successfully predicting and explaining phenomena such as the spectra of blackbody radiation [1], atom interferometry [2], semiconductors [3], and the properties of lasers [4], to name only a few. There remains no experiment that falsifies or limits the regime of validity of quantum mechanics, and yet, at its core, there is an apparent problem where unitary evolution described by the Schrödinger equation gives way to a probabilistic description through the Born rule. A thorough exploration of these problems and ideas can be found in Bassi et al. [5].
One intriguing possibility is that the apparent quantum-to-classical transition is real and there exists objective spontaneous decoherence of the wavefunction, with the mechanism more effective for larger superpositions, where "large", in this context, means large in both mass and spatial separation [6]. Assessing this possibility is an experimental task.
Experimental evidence which so far constrains these stochastic theories comprises both interferometric and non-interferometric measurements, the latter recording the (absence of) a predicted anomalous heating in optomechanical systems. Such experiments include clamped cantilever systems using ferromagnetic spheres up to hundreds of ng [7], tests with gravitational wave detectors using purely macroscopic masses [8], and levitated spheres in linear Paul traps [9] or magneto-levitation traps [10]. Of interferometric experimental tests, the largest superpositions to date come from Talbot-Lau matterwave interferometry with 25 ku macromolecules [11]. These molecular beam-line experiments are remarkable, but extending beyond this mass limit likely requires an alternative approach.
The near-field Talbot effect, applied to matterwave experiments with levitated nanoparticles [12], has emerged as a
promising approach to interferometric experimental tests of stochastic theories [13]. Creating such superpositions requires considerable effort to localize and measure position with sufficient accuracy and to mitigate sources of decoherence, principally interactions with blackbody radiation and collisions with background gas. This becomes extremely challenging for superpositions which might inform the objective decoherence models, as the timescale for free evolution, set by the Talbot time $t_T = md^2/h$ where $m$ is the mass, $d$ is the grating pitch, and $h$ is Planck's constant, can exceed tens of seconds. Given such long free-evolution times, even with adequate position resolution and mitigation of decoherence, freefall under gravity would mean that an Earth-bound experiment would be impractically tall. Hence, space-borne experiments, where the particle remains nominally at rest relative to the apparatus, have been proposed and studied [14][15][16], including a design study by the European Space Agency [17].
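The scaling of the Talbot time with mass and grating pitch can be illustrated with a short calculation; the particle masses and 100 nm pitch below are illustrative values in the range discussed for MAQRO-like proposals, not fixed design parameters:

```python
# Talbot time t_T = m d^2 / h for a nanoparticle matterwave interferometer.
h = 6.62607015e-34  # Planck's constant, J s
u = 1.66053907e-27  # atomic mass constant, kg

def talbot_time(mass_u, pitch_m):
    """Talbot time t_T = m d^2 / h, in seconds, for a mass in u and pitch in m."""
    return mass_u * u * pitch_m**2 / h

d = 100e-9  # grating pitch, m (illustrative)
print(talbot_time(1e8, d))  # a few seconds at 1e8 u
print(talbot_time(1e9, d))  # tens of seconds at 1e9 u
```

Even before decoherence is considered, these timescales set the free-flight requirement that motivates a space-borne platform.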
The extent to which such an experiment would constrain particular theories is often expressed by excluding regions in a parameter space representing the unknowns of the theory; most often this is Continuous Spontaneous Localization (CSL), which is parameterized by a rate $\lambda_c$ and a length-scale $r_c$ [5]. Excluded regions are computed by noting that observation of an interference pattern with a given visibility would not be possible if there existed a given magnitude of objective collapse. Predicted interference patterns may also be compared with classical predictions, perhaps with the aid of a figure of merit, to quantify the degree to which the proposed experiment necessitates a quantum description. This approach does not assign a confidence interval, nor does it compute the number of measurements necessary to exclude a region of parameter space.
In this manuscript we improve upon this method of excluding regions of parameter space by use of Bayesian inference, which assigns to these regions not a binary value of excluded or not, but a real-valued probability density. We simulate the results of a hypothetical matterwave interferometry experiment and apply a Bayesian treatment to these results, accounting for various sources of decoherence such as blackbody radiation and collisions with gas molecules, as shown in Fig. 1. This allows us to predict the bounds which such an experiment could reasonably set on the parameters of CSL. In contrast with related work which considers tests of quantum mechanics more broadly [18], we focus specifically on near-field matterwave interferometry, consider the two-dimensional parameter space of a specific collapse model with straightforward extensions to higher dimensions, and extend the description beyond the point-like particle approximation, as proves necessary for experimentally interesting scenarios. This approach provides the ability to quantify the information gain which should be expected under given conditions, compute the number of points necessary to reach a desired confidence, and apply the extensive toolset of Bayesian optimal experiment design. We are able to explore the experimental tradeoffs and data acquisition requirements that are acutely relevant for the design of future experiments, including a space-borne platform such as a MAQRO-like experiment [19]. We base our simulations on such an experiment, with parameters given in Table I.

II. TALBOT INTERFEROMETER
We consider a near-field Talbot interferometer consisting of a spherical nanoparticle of mass $m$, prepared in a thermal state of a harmonic trap of frequency $\Omega$, released and allowed to evolve freely for time $t_1$ before interaction with a phase grating of pitch $d$ and phase amplitude $\phi_0$, and finally free evolution for time $t_2$ before the arrival position is measured [12]. The initial thermal state has Gaussian position width $\sigma_x$ and momentum width $\sigma_p$.
The probability density for the particle arriving at position $x$ is given by a Fourier series with period $D$,
$$p(x) = \frac{1}{Z} \sum_n B_n R_n \exp\!\left(\frac{2\pi i n x}{D}\right), \tag{1}$$
where $Z = \sqrt{2\pi}\,\sigma_p (t_1 + t_2)$, $t_T = md^2/h$ is the Talbot time, $h$ is Planck's constant, and $D = d(t_1+t_2)/t_1$ is the geometrically magnified grating period to be expected at the detector [12,13].
The functions B n are the generalized Talbot coefficients: they account for coherent and incoherent interaction of a spherical particle with a phase grating realized by a standing light wave from a pulsed laser [20].The terms R n describe decoherence and reduce the amplitude of each spatial frequency component: these account for the various sources of environmental decoherence, including blackbody radiation and collisions with gas particles, and the effect of any stochastic modification to quantum mechanics.
Decoherence mechanisms, which we assume each act at a constant rate during the experiment [12, Sup. Eq. 26], are characterised by a rate $\Gamma$ and a function $f(x)$ which describes the spatial extent of the localizing interaction; the decoherence coefficients $R_n$ for each mechanism are computed from these two quantities, and a full treatment of the decoherence mechanisms is given in Supplementary B. Finite position resolution will impact any experiment, and the measured position $x$ will be distributed about the true value. We model this as a convolution with a Gaussian kernel of width $\sigma_m$, which may have a time dependence, and which is described in Supplementary B 1 for a MAQRO-like experiment; via Fourier transforms and the convolution theorem we find that it can be included as a decoherence term, $R^{\text{meas}}_n = \exp\left[-(2\pi n\sigma_m/D)^2/2\right]$. Any experiment will operate over a finite spatial region $S$, and we include this via a multiplicative window $W(x)$. For brevity, we describe the pitch of the pattern using $k = 2\pi/D$ and combine the pure Talbot terms into a single coefficient.
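As a concrete example, the resolution term above can be evaluated directly; the fringe period and resolution values below are illustrative:

```python
import numpy as np

def r_meas(n, sigma_m, D):
    """Decoherence factor from finite position resolution:
    R^meas_n = exp(-(2*pi*n*sigma_m/D)**2 / 2).
    Convolving the fringe pattern with a Gaussian of width sigma_m
    damps the n-th Fourier component by this factor."""
    return np.exp(-0.5 * (2 * np.pi * n * sigma_m / D) ** 2)

D = 180e-9       # magnified fringe period, ~180 nm (illustrative)
sigma_m = 10e-9  # position resolution, 10 nm (illustrative)
orders = np.arange(0, 7)
print(r_meas(orders, sigma_m, D))  # higher orders are suppressed more strongly
```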

III. BAYESIAN MODEL
We consider any collapse model which manifests as a decoherence process and is described by a set of free parameters, which we write as the vector $\theta$. The combined effect of several decoherence mechanisms is found by multiplying their separate coefficients, and so we split the total decoherence into model and other contributions,
$$R_n(\theta) = R^{\text{mod}}_n(\theta)\, R^{\text{oth}}_n,$$
where $R^{\text{mod}}_n(\theta)$ and $R^{\text{oth}}_n$ are respectively the effects of the decoherence from the objective collapse model and from the various environmental sources, including the $R^{\text{meas}}_n$ described above. Hence, rewriting Eq. (1), we obtain the probability density for measuring position $x$ given specific values $\theta$, which is the likelihood $p(x|\theta)$. Assuming independent and identically distributed position measurements, the joint probability of all $N$ measurements $\mathbf{x} = (x_1, x_2, \ldots, x_N)$ is $p(\mathbf{x}|\theta) = \prod_{i=1}^{N} p(x_i|\theta)$. Through Bayes' theorem [21], these data can be used to find the posterior probability for the parameters $\theta$,
$$p(\theta|\mathbf{x}) \propto p(\mathbf{x}|\theta)\, p(\theta),$$
where the constant of proportionality is $1/p(\mathbf{x})$, often called the 'evidence', which we neglect here as it has no dependence on the parameters $\theta$.
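The update itself is a straightforward grid computation. The sketch below uses a hypothetical one-parameter toy likelihood (a single damped fringe), not the full model of Eq. (1), to show the mechanics: accumulate log-likelihoods over i.i.d. arrival positions and renormalize:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for p(x|theta): a fringe pattern on the unit window whose
# visibility is damped by a single decoherence parameter theta.
# (Illustrative only -- not the full model of Eq. (1).)
def likelihood(x, theta):
    # Normalized over x in [0, 1): the cosine integrates to zero.
    return 1.0 + np.exp(-theta) * np.cos(2 * np.pi * x)

thetas = np.linspace(0.0, 5.0, 200)  # parameter grid
dtheta = thetas[1] - thetas[0]
prior = np.ones_like(thetas)
prior /= prior.sum() * dtheta        # flat, normalized prior

# Simulate N i.i.d. arrival positions (here: fringe-free, fully decohered
# data), accumulate the log-likelihood, and normalize the posterior.
x_data = rng.uniform(0.0, 1.0, size=500)
log_post = np.log(prior)
for x in x_data:
    log_post += np.log(likelihood(x, thetas))
log_post -= log_post.max()           # stabilize before exponentiating
posterior = np.exp(log_post)
posterior /= posterior.sum() * dtheta
print(posterior.sum() * dtheta)      # normalized to ~1
```

Working in log space avoids underflow when $N$ is large, which matters for the $N \sim 10^4$ datasets considered later.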
The prior $p(\theta)$ encapsulates our assumptions before considering any data, and must be approached with care. Specifically, while e.g. a uniform probability across $\theta$ may seem natural, this choice is not invariant under reparameterization, i.e. our physical predictions would change if we wrote the model in a different form.
The canonical choice for an uninformative prior is the Jeffreys prior, proportional to the square root of the determinant of the Fisher information matrix. However, this prior does not extend well to multi-dimensional problems [22, §5.2.9], giving unphysical posteriors for the collapse model which we consider here.
We choose the Maximal Data Information Prior (MDIP) which, while derived from the likelihood of a measurement and therefore dependent on the choice of parameterization, minimizes the arbitrary choices and is designed to maximize the information gain from a single measurement [23]. Further, it has no difficulty with problems of arbitrary dimension [24]. It is given by [25]
$$p(\theta) \propto \exp\left[\int p(x|\theta) \ln p(x|\theta)\, dx\right].$$
The design of this prior ensures that even for a small number of measurements the posterior quickly becomes dominated by updates from the data. The prior is approximately flat but does inherit some shape from the experimental design, reminiscent of plots from the literature on experiments which seek to constrain the parameters of CSL.
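For a likelihood whose entropy is known, the MDIP exponent can be evaluated in closed form; e.g. for an assumed Gaussian likelihood of width $\sigma(\theta)$ the exponent is $-\ln(\sigma\sqrt{2\pi e})$, so $p(\theta) \propto 1/\sigma(\theta)$. A numerical check of that exponent:

```python
import numpy as np

def neg_entropy_gaussian(sigma, n=400001, half_width=12.0):
    """Numerically evaluate  integral p ln(p) dx  for a N(0, sigma^2) density."""
    x = np.linspace(-half_width * sigma, half_width * sigma, n)
    dx = x[1] - x[0]
    p = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return np.sum(p * np.log(p)) * dx

sigma = 2.0
numeric = neg_entropy_gaussian(sigma)
analytic = -np.log(sigma * np.sqrt(2 * np.pi * np.e))
print(numeric, analytic)  # the two agree closely
# The MDIP weight is exp(numeric), i.e. p(theta) proportional to 1/sigma(theta).
```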
The MDIP makes no reference to other experiments, which provide considerable information on what values of the CSL parameters are plausible. Therefore we also consider a prior, which we term the "Experimental Prior", informed by the results of all previous experiments. We define this Experimental Prior as constant over all values of $\theta$ that have not yet been excluded by experiment, and zero for those values which have been excluded by previous experiments [26].

IV. CONTINUOUS SPONTANEOUS LOCALIZATION
Until now our discussion has been of general collapse models; from here we focus on the model of Continuous Spontaneous Localization (CSL) [5]. CSL can be described as a decoherence term of the form of Eq. (2), defined by rate and length-scale parameters which are to be estimated empirically. The decoherence effects of CSL enter the total decoherence of the interferometer as $R^{\text{mod}}_n(\theta)$, where $\theta = [\lambda_c, r_c]$ is a vector containing the CSL parameters. The description was recently extended to include non-point-like particles [13]. For a spherical test particle of radius $R$, mass $m$, and constant density $\rho$ in $0 \le r \le R$, the rate acquires a dimensionless pre-factor in which $j_1$ is the spherical Bessel function of the first kind, Si is the sine integral, and $m_0$ is the atomic mass constant. These terms arise via the Fourier transform of the uniform-density sphere, $\tilde{\mu}(q) = 4\pi\hbar R^2 \rho\, j_1(qR/\hbar)/q$; further details are given in Supplementary B 2.

V. POSTERIOR PROBABILITIES
Our discussion now focuses on the application of Bayesian inference to Talbot interference experiments in the specific case of the CSL model. We compute the expectation value of the posterior distribution under the hypothesis that $\theta = 0$, i.e. that the CSL effect, if it exists, is vanishingly small on the scale accessible to the experiment. For a more general discussion of hypothesis falsification in the context of macroscopic superpositions, see [29].
Fig. 2 shows prior and posterior distributions for simulated measurements of a silicon nanoparticle with a mass of $m = 10^8\,$u, for parameters typical of a MAQRO-like experiment, as given in Table I. The phase parameter $\phi_0$ and second free-fall time $t_2$ are chosen via the optimization process described in Sec. VI A to maximize the change in visibility brought on by the collapse model. The sources of decoherence used in the simulations are discussed in Supplementary B.
The posterior distributions in Fig. 2 are each from a single realization of $N$ simulated arrival positions. While in principle we should compute the expectation value over many realizations, the distributions are indistinguishable for $N \gtrsim 1000$.
We indicate on the posteriors the values suggested by Adler [28], based on the rate of latent image formation in photography or etched-track detection, and the value suggested by Ghirardi, Rimini, and Weber [27], chosen to ensure that quantum systems maintain their coherence for times comparable to the age of the universe while macroscopic systems collapse in fractions of a second. The black line in the plots indicates the lower bound on the CSL parameters; it is derived not from experiments but from the requirement that macroscopic superpositions do not maintain their coherence for long periods of time. The specific values are chosen such that a graphene disk with radius 10 µm collapses in 0.01 s, ensuring that the smallest object detectable by the human eye collapses within the time resolution of the eye [26]. It should be noted that the Experimental Prior we consider does not include these lower limits, remaining non-zero for all values below the empirically determined upper bounds. The plot region is finite in extent and contains much of the probability space of interest. The logarithmic scale means the integrated region of lower $\lambda_c$ and $r_c$ is small, and the distribution decays quickly in the region of larger $\lambda_c$ and $r_c$; the probability is small in regions of larger $\lambda_c$ and $r_c$ which have also been previously excluded by non-interferometric experiments.
The posteriors include an upper-bound line for the possible values of the CSL parameters $\lambda_c$ and $r_c$. This curve depends on the decoherence strength $\Lambda$, which governs the rate at which each spatial frequency is reduced by a given source of decoherence; the relationship between the CSL parameters and the decoherence strength is given in Supplementary A 4. To define the location of the upper bound, we adjust the value of $\Lambda$ via iterative bisection until the integral of the posterior below the line is approximately 95% of the total, i.e. there is a 95% probability that, based on the measurements, the true value falls below this line.
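The bound-finding step can be sketched as follows; the toy posterior and the proxy relation $\Lambda \propto \lambda_c/r_c^2$ below are placeholders (the true relation is model-specific, Supplementary A 4), and only the bisection logic is the point:

```python
import numpy as np

# Toy posterior on a log-spaced (lambda_c, r_c) grid (illustrative shape only).
lam = np.logspace(-20, -5, 120)  # rate parameter grid
rc = np.logspace(-9, -4, 110)    # length-scale grid
L, R = np.meshgrid(lam, rc, indexing="ij")
post = np.exp(-(np.log10(L) + 14) ** 2 / 8 - (np.log10(R) + 7) ** 2 / 4)
post /= post.sum()

# Hypothetical decoherence strength; the real Lambda(lambda_c, r_c) relation
# is given in Supplementary A 4.  Here: Lambda proportional to lambda_c/r_c^2.
strength = L / R**2

def mass_below(Lambda):
    """Posterior probability in the region with decoherence strength < Lambda."""
    return post[strength < Lambda].sum()

# Bisect (in log-Lambda) until 95% of the posterior lies below the curve.
lo, hi = np.log10(strength.min()), np.log10(strength.max())
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mass_below(10**mid) < 0.95:
        lo = mid
    else:
        hi = mid
Lambda95 = 10 ** (0.5 * (lo + hi))
print(mass_below(Lambda95))  # close to 0.95
```

Because the posterior mass is a monotonically increasing function of $\Lambda$, simple bisection converges reliably.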
As we collect data about particle arrival positions, a plausible region of $\theta$ begins to emerge, leaving behind an upper triangle of classicality. We begin to see the difference between regions after $N \sim 4\times10^3$, and the difference only increases with $N$. Beyond $N \sim 10^4$ we reach an asymptotic limit where acquisition of more data does not appreciably change the distribution. Quantification of these statements, and comparison across different scenarios, is the subject of the next sections.

VI. INFORMATION GAIN
Information theory provides an objective means to quantify the information gained from a given experiment. Such a measure of the information about CSL provided by a measurement is essential to make informed decisions about the number of measurements we should plan to take and, more broadly, about the optimal design of an experiment.
The Kullback-Leibler divergence offers a measure of the information contained within the posterior probability through comparison with the prior [21,[30][31][32]. The information contained in data $\mathbf{x}$ is hence
$$H(\mathbf{x}) = \int p(\theta|\mathbf{x}) \ln\frac{p(\theta|\mathbf{x})}{p(\theta)}\, d\theta. \tag{11}$$
We can compute the expected information for a given experiment by integrating over all possible outcomes, weighted by the probability of each outcome occurring,
$$\langle H \rangle = \int p(\mathbf{x})\, H(\mathbf{x})\, d\mathbf{x}, \tag{12}$$
where $p(\mathbf{x})$ is the 'evidence', seen in Eq. (7) where it was treated as a proportionality constant. It can be obtained as
$$p(\mathbf{x}) = \int p(\mathbf{x}|\theta)\, p(\theta)\, d\theta. \tag{13}$$
The expected information of Eq. (12) is found by integration over the space of all possible experimental outcomes $\mathbf{x}$. Due to the large number of dimensions in $\mathbf{x}$, direct integration is numerically intractable, so we use the Monte-Carlo integration method described in [33], rewriting Eq. (12) as
$$\langle H \rangle \approx \frac{1}{M} \sum_{i=1}^{M} \ln\frac{p(\mathbf{x}^{(i)}|\theta^{(i)})}{p(\mathbf{x}^{(i)})}, \tag{14}$$
where the values of $\theta^{(i)}$ are drawn from the prior $p(\theta)$ and the data $\mathbf{x}^{(i)}$ are drawn from the conditional likelihood $p(\mathbf{x}|\theta = \theta^{(i)})$. Although there is no analytical expression for the evidence $p(\mathbf{x}^{(i)})$ in this calculation, due to the low dimensionality of $\theta$ we are able to calculate it via numerical integration [34].
We estimate the uncertainty of $\langle H \rangle$ using the statistical error $\Delta = \sqrt{(\langle H^2 \rangle - \langle H \rangle^2)/M}$, where $M$ is the number of Monte-Carlo iterations performed. These bounds are shown in our plots as shaded regions around the mean values. Fig. 3 shows the expected information gain under the assumption that CSL is false, i.e. that the measured data are distributed according to $p(x_i|\theta = 0)$, as a function of the number of data points $N$ for the MAQRO scenario, using the Maximal Data Information Prior and the Experimental Prior. This quantifies the earlier statements about the asymptotic behavior of the distribution; we see that the information gain per data point falls to negligible values beyond $N \sim 10^4$. We can also see the difference in the amount of information gained per data point for the different priors.
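The estimator and its error bar can be validated on a toy model where $\langle H \rangle$ has a closed form: for an assumed Gaussian prior $\theta \sim N(0, \tau^2)$ and a single Gaussian measurement $x|\theta \sim N(\theta, \sigma^2)$, the evidence is $N(0, \sigma^2 + \tau^2)$ and $\langle H \rangle = \tfrac{1}{2}\ln(1 + \tau^2/\sigma^2)$:

```python
import numpy as np

rng = np.random.default_rng(1)
tau, sigma = 2.0, 1.0  # prior and likelihood widths (illustrative)
M = 200_000            # Monte-Carlo samples

def log_gauss(x, mu, s):
    """Log-density of N(mu, s^2) evaluated at x."""
    return -0.5 * ((x - mu) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))

theta = rng.normal(0.0, tau, M)  # theta^(i) drawn from the prior
x = rng.normal(theta, sigma)     # x^(i) drawn from p(x|theta^(i))
# Per-sample log ratio ln[p(x|theta)/p(x)], as in Eq. (14):
H = log_gauss(x, theta, sigma) - log_gauss(x, 0.0, np.hypot(sigma, tau))

est = H.mean()
err = np.sqrt((np.mean(H**2) - est**2) / M)  # statistical error Delta
exact = 0.5 * np.log(1 + tau**2 / sigma**2)
print(est, "+/-", err, "exact:", exact)
```

The Monte-Carlo estimate agrees with the analytic value to within a few multiples of $\Delta$, which is the behavior relied on for the shaded uncertainty bands.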
The expected information gain after $10^4$ measurements for the two priors is shown in Fig. 4. Panel (a) shows the expected information gain using the Markov Chain Monte-Carlo method described in Eq. (14), whilst panel (b) shows the information we would gain if there is no CSL effect. We see that the choice of prior has a large impact on the computed information gain. This is because the information of Eq. (11) gives the amount of information contained in the posterior relative to the prior, and the MDIP is designed to be least informative. We see in the simulations using the MDIP that the MAQRO-like scenario (given in Table I) achieves a maximum information near $m \sim 10^9\,$u. At this point the radius of the silicon nanoparticle is comparable to the grating width, and to achieve an appreciable phase shift therefore requires a large fluence for the pulsed laser enacting the grating, leading inevitably to large decoherence from this interaction.

FIG. 4. Maximum expected information for different priors as a function of particle mass. The orange lines show the information gained from the MDIP, which is mass dependent, and the blue lines show the information gained from using a prior motivated by previous experimental results. (a) shows the expected information found through Markov Chain Monte-Carlo integration, distributing our measurements by $p(\mathbf{x})$ as described in Eq. (14), while (b) shows the expected information under the assumption that there is no CSL effect, i.e. $\theta = 0$.
We note that by using a non-invariant prior we become bound by the choice of parameterization, which in this case is linear in $\lambda_c$ and $r_c$.
Previous works have characterized the capabilities of matterwave experiments by the single decoherence parameter $\lambda_c$ at a fixed $r_c = 10^{-7}\,$m [12,35], which allows simple comparison with the GRW and Adler suggested values for the CSL parameters. Hence, we compute the value of $\lambda_c$ at this choice of $r_c$ to facilitate comparison with our measure $\langle H \rangle$. Fig. 5 shows that the expected value of $\lambda_c$ given $\theta = 0$ decreases as the particle mass is increased. This continues until the particle reaches a mass of $10^9\,$u, where the information falls to 0; at this point the value of $\lambda_c$ rises significantly. This value at high masses is an artefact of the finite integration region of the parameter space we are considering. The information gain provides a more general metric as it makes use of the full parameter space.

FIG. 5. The lowest upper bound that can be put on the value of $\lambda_c$ at $r_c = 10^{-7}\,$m as a function of particle mass. The value of the upper bound is found from the posterior after $10^4$ measurements starting from either the MDIP (orange line) or an experimentally motivated prior (blue line). We note the difference in the parameter $\lambda_c$ for large masses ($\gtrsim 10^9\,$u), which occurs because regions of the Experimental Prior are set to identically 0, such that the 95% confidence line is found to be lower than for posteriors generated from the MDIP.

A. Optimal Experimental Design
Our experiment is parameterized by a set of control parameters $C$: a vector containing all the parameters that we can, in principle, choose in advance of performing the experiment, consisting of the particle mass $m$, the free-fall times $t_1$ and $t_2$, and the phase parameter $\phi_0$. We seek values of $C$ which maximize the information gain expected from each measurement. In principle we would choose $C$ so as to maximize the value of $\langle H \rangle$; however, this is computationally expensive, so as a proxy we find $C$ such that the introduction of CSL has a maximal effect on the expected fringe visibility.
We define the visibility $\nu_{\text{sin}}$ as the first-order ($n = 1$) term from Eq. (1), which captures the maximum amplitude of the interference pattern without any decoherence effects, and the reduced visibility $\nu_{\text{red}}$ as the same term after the CSL decoherence factor is applied. We then find $C$ such that the difference $\nu_{\text{sin}}(C) - \nu_{\text{red}}(C)$ is maximized. The optimum values of the parameters $\phi_0$ and $t_2$ are shown for various particle masses in Fig. 8 in the supplementary.
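A minimal sketch of this proxy optimization, using hypothetical stand-ins for the visibilities (a sinusoidal Talbot-phase dependence and an exponential CSL reduction at an assumed rate) rather than the full $B_n$ and $R_n$:

```python
import numpy as np

t_T = 2.5        # Talbot time, s (illustrative, ~1e8 u with 100 nm pitch)
gamma_csl = 0.3  # hypothetical CSL decoherence rate, 1/s

# Illustrative stand-ins for the first-order visibility with and without CSL.
def nu_sin(t2, phi0):
    return np.abs(np.sin(phi0) * np.sin(np.pi * t2 / t_T))

def nu_red(t2, phi0):
    return nu_sin(t2, phi0) * np.exp(-gamma_csl * t2)

# Grid search over the control parameters C = [t2, phi0] for the largest
# CSL-induced visibility change.
t2_grid = np.linspace(0.1, t_T, 400)
phi0_grid = np.linspace(0.1, np.pi, 300)
T2, P0 = np.meshgrid(t2_grid, phi0_grid, indexing="ij")
diff = nu_sin(T2, P0) - nu_red(T2, P0)

i, j = np.unravel_index(np.argmax(diff), diff.shape)
print("optimal t2 =", t2_grid[i], "s, optimal phi0 =", phi0_grid[j])
```

The optimum balances two effects visible in the toy model: longer $t_2$ accumulates more CSL decoherence, but the Talbot visibility itself vanishes as $t_2$ approaches the revival condition.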
For the purpose of this study we consider the mass $m$ to be a given parameter and not subject to experimental control. This is a simplification because, in a real experiment, we anticipate that $m$ will be measured in situ for each particle. Further, we assume that the time before the grating, $t_1$, is also given, as this depends on the mass and the grating pitch, and does not crucially affect the interference pattern provided it is sufficiently long for spatial coherence to emerge. Hence, we focus our study of control parameters on $C = [t_2, \phi_0]$. Using the method of parameter optimization described above, we plot the maximum expected information $\langle H \rangle_\infty$ in the limit of a large number of measurements $N$, for an experiment using a particle of a given mass, as shown in Fig. 4. As shown in Fig. 3, after $N \approx 10^4$ the information gained per new data point becomes small, so we take $N = 10^4$ as the limit of large $N$. Armed with these tools, we proceed to compare the ultimate limits of expected information under different experimental conditions.

VII. SCENARIO COMPARISON
The notion of expected information facilitates comparison across different scenarios. The MDIP described earlier inherits structure from the experimental design; however, due to the mostly flat nature of the prior, we can use it to compare the information gain between experiments.
Fig. 6 shows the expected information gain given $\theta = 0$ for various environmental pressures. It shows that in order to maximize the mass of the test particle we must decrease the pressure, because at higher pressures the decoherence is dominated by collisions with residual gas particles. While this general point is well known, we are now able to quantify this pressure: we observe that pressures below $10^{-14}\,$hPa are sufficient to maximise the particle mass and the information we can gain (see Figs. 9 and 10 in the supplementary). A further consideration is the maximum allowable measurement uncertainty, and in many scenarios this increases linearly with free-flight time. Fig. 7 shows the expected information gain given $\theta = 0$ for various drift uncertainty parameters $\sigma_m$ given in Table I. Minimising the increase in position uncertainty allows us to gain information from higher masses. However, for drifts better than 10 nm in 100 s, as was suggested by the ESA design study [17], there is negligible effect on expected information for MAQRO-like experiments.

FIG. 7. The expected information under the assumption that there is no measurable CSL effect, for various drift uncertainty parameters, as a function of particle mass for a range of scenarios in a MAQRO-like experiment. Each curve corresponds to a different increase in position uncertainty per 100 s (0.1 nm, 1 nm, 10 nm, 30 nm, 100 nm).

VIII. CONCLUSIONS AND FUTURE WORK
We have presented a procedure and computed the probability density which a MAQRO-like experiment is likely to assign to the parameters of CSL, a popular example of a parameterizable macrorealistic extension to quantum mechanics. This work provides a toolbox for exploring specific scenarios as part of a design study. We show that in a MAQRO-like experiment around $10^4$ measured arrival locations are needed to saturate the information gain. We also find that environmental pressures below $10^{-14}\,$hPa are sufficient to maximise the particle mass with which we can achieve superpositions; this is in contrast to previous proposals that suggest tolerance of much higher pressures, and we quantify the consequences if this stringent pressure requirement cannot be met. Finally, we show that provided the increase in position uncertainty due to spacecraft drift is kept below 10 nm per 100 s, the range of particle masses that can achieve superpositions is maximised.
Future work will include optimal experimental design, in which we provide a strategy for choosing the optimal control parameters based on all previously acquired data $\mathbf{x}$.
The current description and approach of MAQRO is to minimize known sources of decoherence so that the effect of CSL can be observed.Any claim of observing nonzero CSL parameters will then be dependent on confident knowledge of these decoherence sources.An improved approach would be to distinguish the effect of CSL from other sources by its dependence on parameters such as mass and free-flight time.Future work will apply the Bayesian description herein to define a strategy which can infer parameters of CSL in the presence of imperfectly known sources of decoherence.
As discussed in the main text, it is necessary to include a finite window $W(x)$ so that the distribution is normalizable. This window must cover a large number of oscillations, and the sampling must be sufficiently fine that the oscillations are well resolved. The grating pitch is 100 nm, which is scaled geometrically by $(t_1 + t_2)/t_1$ to, typically, 180 nm. We include terms up to 6th order and so, by Nyquist, must sample once every 10 nm. We chose a 10 µm square-edged window, which covers $\sim 50$ complete oscillations and which we sample with 1000 points.
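These sampling choices can be checked directly; the magnification factor of 1.8 below is an illustrative value that reproduces the typical 180 nm period quoted above:

```python
# Sampling requirements for resolving the magnified fringe pattern.
d = 100e-9   # grating pitch, m
mag = 1.8    # geometric magnification (t1 + t2)/t1, typical (illustrative)
D = mag * d  # fringe period at the detector, ~180 nm
n_max = 6    # highest Fourier order retained

finest_period = D / n_max            # shortest period present, ~30 nm
nyquist_spacing = finest_period / 2  # must sample at least this finely

window = 10e-6  # 10 um square-edged window
points = 1000
spacing = window / points  # 10 nm per sample
print(D / 1e-9, spacing / 1e-9, window / D)  # period, sample spacing, oscillations
assert spacing <= nyquist_spacing  # 10 nm satisfies the Nyquist criterion
```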

Grating phase
The grating phase depends on the grating laser focus area $a_G$, the pulse energy $E_G$, and the properties of the nanoparticle. For a given mass, we choose the pulse area such that it covers the thermal distribution after expansion for time $t_1$, and we then choose the pulse energy such that the phase parameter [20] matches the optimum value according to the visibility argument described in the main text.

Optimum parameters
The specific experiment is defined by the set of control parameters $C$, as discussed in the main text. For many of the parameters, we can use physical arguments to set their values; for example, the residual gas pressure $P_g$ should be as low as possible to minimize decoherence due to gas collisions. However, the parameters $\phi_0$ and $t_2$ do not have such obvious values, so we follow the method given in the main text to choose the optimum values for these parameters. The optimum values are given for various particle masses in Fig. 8.
CSL for an extended particle
From reference [13, Eq. 6] we obtain the rate $\Gamma_{\text{CSL}}$ and resolution function $f_{\text{CSL}}$ for an extended particle, in which $\tilde{\mu}(q)$ [38] is the Fourier transform of the mass density function $\mu(\mathbf{x})$. For the uniform-density sphere, $\mu(\mathbf{x}) = \rho$ for $\|\mathbf{x}\| \le R$, we use spherical symmetry to write $\tilde{\mu}(\mathbf{q}) = \tilde{\mu}(\|\mathbf{q}\|)$, and we find
$$\tilde{\mu}(q) = \frac{4\pi\hbar}{q}\, \rho R^2\, j_1(qR/\hbar). \tag{B4}$$
Hence, using $\alpha = q r_c/\hbar$ and $M = (4/3)\pi R^3 \rho$, we obtain the expressions given in the main text. For a point-like particle, $R \to 0$, these general results should reduce to the point-like CSL expressions $\Gamma = (M/m_0)^2 \lambda_c$ and $f_{\text{CSL}}(x) = \sqrt{\pi}\,(r_c/x)\, \mathrm{erf}(x/2r_c)$. Each expression contains a term of the form $j_1(ab)/b$, which becomes $a/3$ in the limit $b \to 0$; using the integral results, we find that the point-like CSL expressions are indeed recovered in this limit.
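The small-argument limit $j_1(x)/x \to 1/3$ used in this reduction is easy to verify numerically:

```python
import math

def j1(x):
    """Spherical Bessel function of the first kind: j1(x) = sin(x)/x^2 - cos(x)/x."""
    return math.sin(x) / x**2 - math.cos(x) / x

for x in (1.0, 0.1, 1e-3):
    print(x, j1(x) / x)  # ratio approaches 1/3 as x -> 0
```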

Blackbody and collisional decoherence
This section summarizes the implementation used in this work, following reference [12].
Blackbody radiation leads to decoherence through absorption, emission, and Rayleigh scattering. Rates per unit wave-number $k$ at temperature $T$ are computed from the blackbody spectrum and the cross-sections $\sigma_{\text{abs}}(k) = k\, \mathrm{Im}[\chi(k)]$ for absorption and emission, and $\sigma_{\text{sca}}(k) = k^4 |\chi(k)|^2/6\pi$ for Rayleigh scattering. The susceptibility $\chi$ is found from the material permittivity $\epsilon$ via the Clausius-Mossotti relation,
$$\chi = 3V\, \frac{\epsilon - 1}{\epsilon + 2},$$
where $V$ is the particle volume; the particle is assumed sub-wavelength for all relevant blackbody wavelengths.
For simplicity, and confident that it gives an upper estimate for the decoherence and that the difference is small, we assume the particle does not cool during free flight. Hence, the equations for decoherence due to emission and absorption of blackbody radiation are the same, with the only difference being the temperature.
To compute the corresponding Talbot decoherence coefficient $R_n$ requires the total rate $\Gamma$ and the function $f$. The total rate is $\Gamma = \int_0^\infty \gamma(k, T)\, dk$ for each of the mechanisms, with a corresponding localization function $f$ for absorption (similarly for emission) and for scattering. We treat collisional events with background gas as resolving position far better than the grating wavelength, and hence we require only the rate $\Gamma_{\text{col}}$; this is computed as in reference [12].

Appendix C: Talbot Coefficients for a Mie Particle
We derive the scattering decoherence for a particle in the standing-wave grating using the method described in [39]. In contrast with previous work [20], we find that scattering into the θ and ϕ polarizations must be treated as separate processes.
The electromagnetic field that forms the bath is expanded in plane-wave modes, where $\hat{a}_{\mathbf{k},\nu}$ ($\hat{a}^\dagger_{\mathbf{k},\nu}$) is the annihilation (creation) operator, $\varepsilon_{\mathbf{k},\nu}$ is the polarization vector, $\mathbf{k}$ is the wave vector, $\nu$ denotes the two independent polarization directions θ and ϕ, $\omega_k = ck$, and $k = |\mathbf{k}|$. We also define the quantization volume $V_q = L^3$, set by the boundaries of the experiment, where $L$ is the length of the box.
We are then able to perform the calculations given in [39] whilst maintaining the explicit dependence on the phase to obtain the transition matrix. For a standing wave with linear polarization, the mode function $|c\rangle$ can be written under the assumption that the laser field has a very large spot area, and from this we obtain the scattered field. If we assume that the wave is polarized in the $x$ direction, we recover the vector scattering amplitude, where $n_0 = \frac{I_0}{cF_0}\,\sigma_{\text{abs}}\,\phi_0$ is the mean number of absorbed photons and we have used the absorption cross-section $\sigma_{\text{abs}}$ given by Mie theory.
We take the Fourier coefficients of this total decoherence mask and, with use of Graf's addition theorem, convolve it with the coherent grating effects as described in [20].

FIG. 1. Illustration of the scenario considered, showing a particle localized in a harmonic dipole trap (a), the wavefunction of which, when released, expands to cover several fringes of a standing-wave grating (b) formed by retroreflection of a light pulse. The arrival location of particles is recorded (c) some time after the grating. In typical scenarios the number of data points is small, and while a histogram of arrival positions (left) may be sufficient to evidence the wave nature of the particle, the Bayesian inference approach (right; see main text) makes fuller use of the available information and can hope to constrain the free parameters of CSL.

FIG. 2 .
FIG. 2. Probability distributions $p(\theta|\mathbf{x})$ for the rate ($\lambda_c$) and length-scale ($r_c$) parameters of the CSL model with MAQRO-like parameters, as detailed in the text. These posteriors are generated using the Maximal Data Information Prior, with $N$ points drawn from the distribution $p(x|\theta = 0)$. (a) $N = 0$ is the prior; (b-d) are for $N$ as indicated. The GRW value [27], Adler values [28], and the lower bound discussed in the text are motivated by theoretical considerations and are shown only for comparison. The red dashed upper bound is found such that 95% of the probability distribution lies below this line.

FIG. 3 .
FIG. 3. Expected information ⟨H⟩ as a function of the number of data points $N$ for a MAQRO-like scenario, as discussed in the text, with a nanoparticle of mass $m = 10^8\,$u and setting $\theta = 0$. The shaded region around each line indicates the standard deviation of the estimate. The information gain is small beyond $N \sim 10^4$, and the variance of our estimate changes only very slowly. These calculations are computationally expensive; for a given $N$, the value of ⟨H⟩ is found from 200 realisations of $H(\mathbf{x})$.

FIG. 8 .
FIG. 8. Optimum values of $\phi_0$ and $t_2$ for various masses of nanosphere. The bounds on $t_2$ are chosen to ensure that $t_2 \approx t_T$, avoiding excessive flight times which increase decoherence whilst allowing enough time for the interference effect to take place.

TABLE I .
Control parameters used in the MAQRO-like scenario. The values of the free-fall time $t_2$ and phase parameter $\phi_0$ are optimized for each new run based on the method described above.
FIG. 10. Total effect of environmental decoherence on the fringe visibilities, $R^{\text{oth}}_1$, for various experimental scenarios with varying residual gas pressure.