Vector wave dark matter and terrestrial quantum sensors

(Ultra)light spin-1 particles, dark photons, can constitute all of dark matter (DM) and have beyond-Standard-Model couplings. This can lead to a coherent, oscillatory signature in terrestrial detectors that depends on the coupling strength. We provide a signal analysis and statistical framework for inferring the properties of such DM by taking into account (i) the stochastic and (ii) the vector nature of the underlying field, along with (iii) the effects due to the Earth's rotation. Owing to equipartition, on time scales shorter than the coherence time the DM field vector typically traces out a fixed ellipse. Taking this ellipse and the rotation of the Earth into account, we highlight a distinctive three-peak signal in Fourier space that can be used to constrain DM coupling strengths. Accounting for all three peaks, we derive latitude-independent constraints on such DM couplings, unlike those stemming from single-peak studies. We apply our framework to the search for ultralight B − L DM using optomechanical sensors, demonstrating the ability to delve into previously unprobed regions of this DM candidate's parameter space.


Introduction
Dark matter (DM) dominates the non-relativistic matter content in our cosmos. However, we know exceptionally little about the constituent particles/fields of DM. Apart from the fact that they must interact gravitationally, we do not know their mass, spin, or other potential interactions [1,2]. Astrophysical observations allow for a broad range of masses for the dark matter "particles": 10⁻¹⁹ eV ≲ m ≲ few × M_⊙ [3,4]. Theoretical models include particle masses that span this range, with ultralight bosons at the lower end and composite particles/primordial black holes at the upper end [5][6][7][8][9][10].
Among the variety of possibilities, the case of ultralight, bosonic dark matter is particularly intriguing. These include, for instance, the QCD axion [11][12][13], axion-like particles and other scalars [14][15][16][17], and vector particles [18,19]. A wide-ranging observational and experimental program is currently exploring models that can be tested with contemporary technology. Assuming a local dark matter density ρ ∼ GeV cm⁻³ with a typical virial velocity of v₀ ∼ 10⁻³ c, for particle masses smaller than a few eV, the typical particle number within a de Broglie volume becomes sufficiently large to allow for a classical field theory description: N_dB ≃ (ρ/m)(h/mv₀)³ ∼ 10⁶⁶ (10⁻¹⁵ eV/m)⁴.
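This occupancy estimate is straightforward to check numerically. The sketch below is our own illustration: the only inputs are standard SI constants and the fiducial choices ρ = 1 GeV cm⁻³ and v₀ = 10⁻³ c quoted above.

```python
# Order-of-magnitude check of the de Broglie occupancy number
# N_dB ~ (rho/m) * (h / (m v0))^3 for ultralight dark matter.

h = 6.626e-34                  # Planck constant, J s
c = 2.998e8                    # speed of light, m/s
eV = 1.602e-19                 # J per eV

rho = 1e9 * eV / c**2 / 1e-6   # 1 GeV/cm^3 converted to kg/m^3
v0 = 1e-3 * c                  # typical virial speed, m/s

def occupancy(m_eV):
    """Particles per de Broglie volume for a DM mass m_eV (in eV)."""
    m = m_eV * eV / c**2       # mass in kg
    lam_dB = h / (m * v0)      # de Broglie wavelength, m
    return (rho / m) * lam_dB**3

print(f"N_dB ~ {occupancy(1e-15):.1e}")   # ~1e66, matching the text
```

The m⁻⁴ scaling is manifest: lowering the mass by one decade raises the occupancy by four.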
In our detection scheme, the sensor points in a fixed direction relative to the local tangent plane of the experiment, rotating with the Earth at an angular frequency ω_⊕ ≡ 2π/(1 sidereal day) ≈ 7.3 × 10⁻⁵ Hz. We will show that the signal in Fourier space contains three distinct peaks located at the angular frequencies m, m − ω_⊕, and m + ω_⊕, with the last two arising due to the Earth's rotation.¹ Since ω_⊕ corresponds to a mass scale of ∼ 5 × 10⁻²⁰ eV, which is already outside of the allowed mass window, we naturally have m > ω_⊕. To resolve these three distinct peaks, we shall therefore enforce that the observation time always satisfies T_obs > 1 d. With this basic setup, we will concentrate on the short observation time regime in this paper: experimental expedition timescales which are much smaller than the coherence timescale of the wave DM field,

T_obs ≪ τ_coh ≡ h/mv₀².  (1.1)

This means that, at most, our framework is valid for masses that give a coherence time of at least 1 d, corresponding to m ≲ 5 × 10⁻¹⁵ eV. For longer expedition times, this upper mass limit decreases: for example, for a runtime of 1 yr, we have m ≲ 10⁻¹⁶ eV. This gives us the mass range 5 × 10⁻²⁰ eV ≲ m ≲ 5 × 10⁻¹⁵ eV where our analysis is appropriate². We limit ourselves to working within the coherent regime for two reasons. Foremost, we wish to highlight how the inherent stochasticity of the full 3-dimensional vector field should be treated when drawing inferences. This randomness is roughly only manifest for observation times shorter than a coherence time, since the random variables dictating the behaviour of the field can be treated as sampled once every coherence patch. For longer observation times, this randomness is averaged over. Secondly, remaining in the coherent regime allows us to treat the signal as appearing within only three bins in Fourier space. For observation times outside of this regime, we would instead begin to resolve the shape of the dark matter halo velocity distribution, complicating our inference. Constraining ourselves to this regime thus simplifies the problem, emphasizing the role that the vector nature of this type of dark matter plays when performing inferences.
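The quoted mass window can be sanity-checked by evaluating τ_coh = h/mv₀² directly. The sketch below is our own, using the standard conversion 1 eV ↔ 2.418 × 10¹⁴ Hz and v₀ = 10⁻³ c; it confirms that m = 5 × 10⁻¹⁵ eV gives a coherence time of roughly ten days, while m = 10⁻¹⁶ eV gives more than a year.

```python
# Coherence time tau_coh = h/(m v0^2), expressed via the Compton frequency.

EV_TO_HZ = 2.418e14       # Compton frequency in Hz per eV of mass
V0_OVER_C = 1e-3          # virial speed in units of c
DAY = 86164.0             # sidereal day, s
YEAR = 3.156e7            # year, s

def tau_coh(m_eV):
    """Coherence time in seconds: 1 / (nu_Compton * (v0/c)^2)."""
    return 1.0 / (m_eV * EV_TO_HZ * V0_OVER_C**2)

for m in (5e-15, 1e-16):
    print(f"m = {m:.0e} eV : tau_coh = {tau_coh(m) / DAY:.3g} sidereal days")
```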
Given the recent and substantial research and development efforts in quantum technologies, we are specifically interested in timely studies aimed at understanding the potential of quantum optomechanical sensors in the direct detection of dark matter. Mechanical detectors have a rich history in tests of gravity, including LIGO, and in recent years there has been a surge in efforts to explore their potential in quantum sensing for fundamental physics investigations (see reviews [52][53][54]). We are only beginning to understand the new opportunities for dark matter searches [55][56][57][58][59][60] in light of significant advances in quantum readout and control of mechanical sensing devices using optical or microwave light [61][62][63]. Accurately modeling the dark matter signal and the associated statistics is crucial for drawing representative inferences, guiding these quantum research and development efforts, and aiding in experimental design. This is particularly relevant for large-scale accelerometer projects, such as the one proposed by the Windchime collaboration [64]. Such sensors have demonstrated potential as powerful probes of the wavelike signature produced by ultralight dark matter [65].
In this paper, we devise the analysis strategy for ultralight vector dark matter in the coherent regime to draw more representative exclusion inferences in the future. We begin by laying the theoretical groundwork for this DM paradigm in Section 2. Considering equipartition between the longitudinal and transverse modes of the VDM field, we derive the associated signal in both the time and frequency domains. We do this by taking into account both its stochastic and polarization properties, as well as accounting for the rotation of the Earth. We then perform statistical analyses of the DM signal in the frequency domain in Section 3. We derive a limit on a generalised parameter that is independent of the vector dark matter model and experimental parameters, which can be recast for concrete choices of them. Unlike other studies that focus solely on a single peak, our findings reveal that the signal power is distributed across three distinct peaks. Accounting for this distribution ensures the retention of constraining power, regardless of the experiment's location on Earth. Finally, in Section 4, we apply our framework to a concrete dark matter model and sensor: B − L dark matter and the canonical optomechanical light cavity.
We have also included 5 appendices in this paper. In Appendix A, we discuss the stochastic behaviour of the vector field given an unequal distribution of power amongst its longitudinal and transverse modes, as well as their ultimate equipartition (owing to non-linear gravitational dynamics). In Appendix B, we derive the marginal likelihood for the three-peak signal and also show that the powers in the three peaks are uncorrelated. In Appendix C, we discuss the applicability of our results in the context of the gradient of a scalar, deriving limits on the appropriate generalized parameter. In Appendix D, we derive the likelihood in the case that the vector field is 'linearly polarized' and show that the covariance matrix is not diagonal. Finally, in Appendix E, we derive an updated limit on the gauge coupling of a new, long-range B − L coupled fifth force given the latest MICROSCOPE results. Throughout the rest of this paper, we will work in natural units, whereby ℏ = c = 1. Moreover, we sometimes quote the DM mass in units of Hz where it is more appropriate to treat it as an angular frequency. The conversion from Hz to eV is given by the relation m = ℏω/c² ≈ 4.14 × 10⁻¹⁵ eV [ω/(2π Hz)].
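The quoted conversion factor can be verified in a couple of lines (our own check, using standard SI constants):

```python
# Conversion from an ordinary frequency nu (Hz) to a DM mass in eV,
# via m = hbar * omega / c^2 with omega = 2 pi nu.
import math

HBAR = 1.054571817e-34   # J s
EV = 1.602176634e-19     # J per eV

def mass_in_eV(nu_hz):
    """DM mass (eV) whose Compton angular frequency is omega = 2 pi nu."""
    omega = 2 * math.pi * nu_hz
    return HBAR * omega / EV

print(f"nu = 1 Hz  ->  m = {mass_in_eV(1.0):.3e} eV")   # ~4.14e-15 eV
```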

The Dark Photon Field
The random vector field Â of mass m, at any given location and time, can be decomposed into a complex envelope Ψ(x, t) modulating the rapid Compton oscillation e^{−imt}, where, in the non-relativistic limit and assuming free field evolution, each Fourier mode of the complex 3-vector field Ψ(x, t) evolves as Ψ_k(t) = Ψ_k e^{−ik²t/2m}. We use hatted notation to indicate that a quantity is stochastic.
With non-linear gravitational clustering, we expect an equipartition between longitudinal and transverse polarizations of the plane waves in our vicinity, regardless of whether the early universe production mechanism favors one over the other. In Appendix A, we discuss this equipartition in greater detail, providing evidence for it via 3D halo-formation simulations of the Schrödinger field Ψ.
Given this equipartition, the spectrum f_k of the Fourier amplitudes is fixed by the local mass density. Here, V is the volume and ρ is the local mass density. To be explicit, we work with a finite volume V so that k is discretized.
We can define Ψ_k = √f_k ε̂_k such that, for every k, ε̂_k is a set of 6 real (3 complex) i.i.d.s (independent and identically distributed random variables) with unit norm, ⟨ε̂†_k · ε̂_p⟩ = δ_{k,p}. In other words, for every k there are 5 real random numbers that are uniformly distributed on a unit S⁵. With this, a realization of the random vector field is given by the sum over these modes. In this paper, we are interested in the short observation time limit, T_obs ≪ τ_coh = m/k₀² (where k₀ = mv₀ denotes the typical wavenumber). In this case, we can neglect the k²/2m factor in the time-varying sinusoid. Subsequently, we have a summation over many monochromatic waves (all oscillating with frequency m), with different amplitudes and phases for different values of k. Assuming that the halo function is well behaved (meaning any nth moment Σ_k kⁿ f_k is finite), we can use the central limit theorem to conclude that Â(x, t) is described by three complex Gaussian random variables ŵ.⁴ For T_obs ≪ τ_coh, we can also safely assume that the distance the Earth sweeps during the observation time is negligible compared to the de Broglie length. As such, we shall set x = y and further set x = 0, assuming statistical homogeneity of the DM field. Decomposing the complex Gaussian random variables ŵ into Euler form, ŵ_j = α̂_j e^{−iφ̂_j}, we may write Â_j(t) = √(2ρ/3) (α̂_j/m) sin(mt + φ̂_j). The three α̂_j are independent Rayleigh-distributed random variables, while the three φ̂_j are independent uniformly distributed angles (ranging from 0 to 2π), as per Eq. (2.5). The three components become statistically independent, mimicking three independent scalars.⁵ Also note that our result is similar to the case of the gradient of a scalar [46], in the sense that there are 6 independent normal random variables describing the DM field at a given location on short time scales.
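This stochastic description is easy to realize numerically. Below is a minimal sketch (our own illustration) that draws the six random parameters; the Rayleigh scale 1/√2 is an assumption chosen so that ⟨α_j²⟩ = 1, the normalization used for the deterministic comparison later in the text.

```python
# Sampling the stochastic parameters of the field at a fixed point:
# each component behaves as A_j(t) ~ alpha_j sin(m t + phi_j), with
# alpha_j Rayleigh distributed and phi_j uniform on [0, 2 pi).
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

alpha = rng.rayleigh(scale=1 / np.sqrt(2), size=(N, 3))   # amplitudes
phi = rng.uniform(0, 2 * np.pi, size=(N, 3))              # phases

# Equivalent complex-Gaussian picture: w_j = alpha_j exp(-i phi_j) has
# independent, normally distributed real and imaginary parts.
w = alpha * np.exp(-1j * phi)

mean_sq = (alpha**2).mean(axis=0)
cross = np.corrcoef(alpha[:, 0]**2, alpha[:, 1]**2)[0, 1]
print("<alpha_j^2>           =", mean_sq)   # ~ (1, 1, 1)
print("corr(alpha_x^2, alpha_y^2) =", cross)   # ~ 0: independent components
```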

Equipartition and Ellipses
In this subsection, we further justify our assumption of equipartition in the previous section.

Equipartition and Vector Field Ellipses:
There exist various production mechanisms in which disparate amounts of longitudinal (spin-0) and transverse (spin-1) helicities are produced [20,21,23,25]. That is, for every k, there could be different amounts of longitudinal and transverse components of Ψ_k. However, owing to non-linear gravitational clustering, such disparity between the two sectors is expected to have disappeared by now within our local cosmic vicinity. With non-linear gravitational clustering, we expect virialization, leading to the equipartition of energy within all three degrees of freedom (dof). This results in 2/3 of the total power being contained within the (two) transverse dof and the remaining 1/3 within the longitudinal one. In this case, the vector field at each point (within a coherence region), which is formed out of a sum over a large number of Fourier modes, roughly traces out a randomly oriented ellipse (as opposed to oscillating along some fixed direction). This can be seen by noting that, on time and length scales much smaller than the coherent ones, the spin current [31,66] is negligible, since it scales with the typical speed σ. Hence, the local spin density (given by s = A × Ȧ) is conserved, implying that the local field vector A in general executes a two-dimensional sinusoidal motion, i.e. it sweeps an ellipse. See Fig. 1 for a visualization and a description of this evolution (a video link is given in the caption). Such elliptical motion is appropriately used in, for example, [67][68][69].
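Both properties, planarity of the motion and constancy of the spin density, can be checked directly for the local field A(t) = Re[w e^{−imt}] with a fixed complex amplitude w. The sketch below is our own illustration; the amplitude w is an arbitrary complex 3-vector chosen for the demonstration.

```python
# A(t) = Re[w exp(-i m t)] sweeps a fixed ellipse: the motion stays in the
# plane spanned by Re(w) and Im(w), and s = A x dA/dt is constant in time.
import numpy as np

m = 2 * np.pi                                         # Compton angular frequency
w = np.array([1.0 + 0.3j, 0.7 - 0.5j, 0.2 + 0.1j])    # illustrative amplitude

t = np.linspace(0.0, 3.0, 400)                        # three oscillation periods
phase = np.exp(-1j * m * t)[:, None]
A = np.real(w[None, :] * phase)                       # field vector A(t)
Adot = np.real(-1j * m * w[None, :] * phase)          # dA/dt

s = np.cross(A, Adot)                                 # spin density at each time
normal = np.cross(w.real, w.imag)                     # normal of the ellipse plane

planarity = np.abs(A @ normal).max()                  # A(t) . normal, vanishes
spin_spread = np.abs(s - s.mean(axis=0)).max()        # s is time-independent
print(planarity, spin_spread)                         # both at numerical zero
```

Analytically, s = m (Re w × Im w), so circular polarization (orthogonal Re w, Im w of equal length) maximizes the spin, while linear polarization (parallel Re w, Im w) gives s = 0.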
Random Linear Polarization: Some studies make the assumption of linear polarization; i.e. the vector field oscillates along a line in a fixed direction, with this direction changing randomly every coherence time (see, for example, [41,[70][71][72]]). However, as we argued above based on equipartition, an elliptical motion of the vector field is what one should expect. Nevertheless, we note that a preference for linear (circular) polarization could be generated by allowing for a non-gravitational attractive (repulsive) self-interaction [73][74][75]. While these works argue for such a preference in isolated soliton-like configurations, whether a significant preference for linear (circular) polarization within each coherence patch can be achieved dynamically remains to be seen.
Fixed Direction: There could exist a misalignment production mechanism for vectors where the entire observable Universe (or at least a large portion of it) has the vector field oscillating in a fixed direction [76,77]. Such setups indeed lead to a fixed direction of oscillation of the vector field (apart from randomly oriented small perturbations). This would be distinct from the equipartition case we consider. However, there are difficulties associated with their production mechanisms [78][79][80].

The Detector Signal
Let ζ(t) be the "antenna" direction; the time-series signal is then given by Ŝ(t) = g Ê(t) · ζ(t). Here, g is some overall coefficient containing the coupling constant and any other possible model parameters, and we have used the fact that the produced dark electric field is approximately given by E ≈ ∂_t A in the non-relativistic limit of the vector DM field. In this work, we will take the antenna to always point towards the zenith; however, we will comment on this assumption in Section 3. With ϕ denoting the latitude (where ϕ = 0° and ϕ = 90° respectively correspond to the equator and the poles), and ω_⊕ denoting the angular rotation frequency of the Earth, we have ζ(t) = (cos ϕ cos(ω_⊕ t), cos ϕ sin(ω_⊕ t), sin ϕ)ᵀ. Then, the signal Ŝ(t) is

Ŝ(t)|_{T_obs ≪ τ_coh} ≈ g √(2ρ/3) [α̂_x cos ϕ cos(mt + φ̂_x) cos(ω_⊕ t) + α̂_y cos ϕ cos(mt + φ̂_y) sin(ω_⊕ t) + α̂_z sin ϕ cos(mt + φ̂_z)].  (2.6)

Figure 1: At each spatial point within a coherence region of size 2π/mv₀ (the scale of the interference "granules"), the dark matter field A (orange vector) traces out an approximately fixed ellipse (for 2π/mc² = τ_Comp ≪ t ≪ τ_coh = 2π/mv₀²). These ellipses change their size and orientation on the coherence time scale and smoothly connect with each other from one coherence region to another. For the duration of the measurement, we expect to find ourselves within one such coherence region. The detection signal S(t) is proportional to the dot product of the dark electric field E ≃ ∂_t A and the detector orientation ζ(t) (green arrow). Also see Fig. 2. The snapshot of the actual simulated field (leftmost panel) is taken from [31]. For a movie of the vector field behaviour based on the simulations, visit https://www.youtube.com/watch?v=bbw6yFRLS7s.

Ultimately, we perform our analysis in Fourier space, considering the (one-sided) periodogram generated by Eq. (2.6). This is proportional to the mod-square of the discrete Fourier transform of Ŝ(t) and is given by Eq. (2.7). Here, ω is the angular frequency, N is the number of points sampled in the time domain, and ∆t ≡ T_obs/N is the sampling interval. The factor of two accounts for the 'folding' of the result from negative angular frequencies to positive ones, producing the one-sided periodogram that ignores the former. We define the dimensionless parameter β in Eq. (2.8), where σ is the noise power spectral density (PSD). Typically, A is an acceleration or a force for accelerometer studies. With these definitions, the signal periodogram, normalised by the noise PSD (which we call the excess power λ), is λ(ω) ≡ P(ω)/σ, with the explicit form given in Eq. (2.9). Here, we have defined the 'sum' and 'difference' angular frequencies, s ≡ m + ω_⊕ and d ≡ m − ω_⊕, respectively. The δ_{a,b} are Kronecker delta functions over angular frequencies. We note that σ can be frequency dependent. This means that the expected noise PSD within each signal-containing bin can be different. However, for the remainder of this work, we take σ to be approximately constant, which is a good approximation over the frequency range of the signal, ∆ω ∼ ω_⊕ ≡ 2π/T_⊕ ≈ 7.3 × 10⁻⁵ Hz, where T_⊕ = 23.93 h is the Earth's sidereal period. We comment on this approximation further in Section 4, where we consider a concrete sensor and DM model.

Fig. 2 shows an example of a signal in the short observation time regime, in both the time (Eq. (2.6)) and Fourier (Eq. (2.9)) domains. To generate them, we have taken A = 1 [A], m = 2π Hz, ϕ = 45°, T_obs = 10 T_⊕, and, for the purposes of fast convergence, T_⊕ = 100 s. Here, [A] denotes the units of A, which depend on the quantity being measured by the experiment. We have also taken α ≡ (α_x, α_y, α_z)ᵀ = (1, 0.7, 0.2)ᵀ and φ ≡ (φ_x, φ_y, φ_z)ᵀ = (π/2, π/4, π/3)ᵀ. When running our later simulations, we sample these six variables independently from their respective distributions.

Figure 2: Example signal in the short observation time regime, shown in the time (left) and Fourier (right) domains. The signal depends on the shape, orientation and period of the vector field A(t), as well as the detector axis ζ(t), which rotates with the Earth, and a model-dependent coupling g; also refer to Fig. 1. For the time domain signal, we show its behaviour over several coherence times (upper) and sidereal days (lower), the latter of which is the expected observed signal in our sensor. The inset shows the oscillation of the signal over several Compton times. Note the appearance of three peaks: a single Compton peak at frequency m in the middle and two additional ones appearing due to the Earth's rotation, a difference peak at d ≡ m − ω_⊕ and a sum peak at s ≡ m + ω_⊕. The time domain signal is simulated from Eq. (2.6), the periodogram of which is taken via Eq. (2.7). For the simulation, we have used the observation time T_obs = 10 T_⊕, and the results are expressed in terms of A ≡ g√(2ρ/3) (where ρ is the local DM density). For additional details on the simulation, see the text. The peaks are contained within bins whose widths correspond to the resolution of the frequency-space data, given by ∆ω = 2π/T_obs. Note that the general elliptical behaviour of the DM field allows for different power in the sum and difference peaks. In contrast, a linearly polarized DM field would lead to equal power in these peaks.
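The three-peak structure can be reproduced with a few lines of code. The sketch below is our own noise-free simulation of Eq. (2.6) with the illustrative parameter choices quoted above (A = 1, ϕ = 45°, T_obs = 10 T_⊕, sped-up T_⊕ = 100 s); the periodogram normalization is a standard convention assumed here, not necessarily identical to Eq. (2.7).

```python
# Simulate the time-domain signal of Eq. (2.6) and recover the three peaks
# at m - omega_E, m, and m + omega_E in its one-sided periodogram.
import numpy as np

T_E = 100.0                        # sped-up sidereal "day", s
w_E = 2 * np.pi / T_E              # Earth's rotation angular frequency
m = 2 * np.pi                      # Compton angular frequency (1 Hz)
lat = np.deg2rad(45.0)             # latitude phi
T_obs = 10 * T_E
N = 2**14
t = np.arange(N) * (T_obs / N)

alpha = np.array([1.0, 0.7, 0.2])                  # fixed Rayleigh draws
phi = np.array([np.pi / 2, np.pi / 4, np.pi / 3])  # fixed phases

# Eq. (2.6) with the prefactor A = g sqrt(2 rho / 3) set to 1
S = (alpha[0] * np.cos(lat) * np.cos(m * t + phi[0]) * np.cos(w_E * t)
     + alpha[1] * np.cos(lat) * np.cos(m * t + phi[1]) * np.sin(w_E * t)
     + alpha[2] * np.sin(lat) * np.cos(m * t + phi[2]))

# One-sided periodogram; the bin spacing is 2 pi / T_obs, so the sum and
# difference frequencies sit exactly 10 bins from the Compton bin.
P = 2 * (T_obs / N**2) * np.abs(np.fft.rfft(S))**2
k_m = int(round(m * T_obs / (2 * np.pi)))          # index of the Compton bin

peaks = {"difference": P[k_m - 10], "Compton": P[k_m], "sum": P[k_m + 10]}
print(peaks)                                       # three distinct nonzero peaks
print("off-peak power:", P[k_m + 5])               # ~ 0 (no noise added)
```

With these (generic) phases, the sum and difference peaks carry different power, as expected for an elliptically polarized field.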
There are three characteristic timescales within the signal: the Compton scale, the Earth's rotation period, and the coherent (de Broglie) scale. The first two of these are visible in the larger panels of Fig. 2. The Earth's rotation period is evident from the time domain signal, which we have shown for three full rotation periods. The Compton scale is much faster than this scale (see inset), making the signal appear solid in shape. Crucially, we see that the vector ULDM field leaves a characteristic three-peak signal in the Fourier domain.⁶ One peak is present at the Compton frequency ω = m, which previous frequency-space analyses have focused on [37,39]. However, a further two peaks manifest as a result of the Earth's rotation, spaced ω_⊕ away from the Compton peak. These additional peaks, the use of which has been ignored in previous accelerometer analyses,⁷ only appear once T_obs ≥ T_⊕; shorter observation times do not give us enough resolution in the frequency domain to resolve them.⁸ We call the peak at the Compton frequency the Compton peak, that at s the sum peak, and that at d the difference peak.
We argued earlier that, within a coherence patch, we expect the vector field to undergo an elliptical motion with period 2π/m (see Fig. 1), as opposed to the linear one commonly used in the literature. In both the linear and elliptical cases, the time domain signal, S(t), is sinusoidal and contains the angular frequencies m and m ± ω_⊕. Repeating the analysis of [41] in the time domain, but without the linear polarization assumption, we expect qualitatively similar results (with more statistical spread on the time-averaged power). However, there are some important differences when analysing the expected signal in Fourier space.
In the elliptical case, the power contained in the m and m ± ω_⊕ peaks is statistically uncorrelated (see Appendix B). On the other hand, in the linear polarization case, the power at m and m ± ω_⊕ is correlated, with equal power in the sum and difference peaks. This can be seen by noting that, in this case, all the components of the vector are in phase. The statistical independence in the elliptical case significantly simplifies our analysis pipeline for projected sensitivities. Furthermore, the distinction in power at m ± ω_⊕ is also relevant in the case of a detection, since we would expect different powers in the elliptical case. We show the statistical independence arising in the elliptical case in Appendix B and the statistical dependence of the sum and difference peaks arising in the linear polarization case in Appendix D.
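This contrast can be checked with a quick Monte Carlo. Writing the complex component amplitudes as c_j = α_j e^{iφ_j}, the sum- and difference-peak powers are proportional to |c_x ∓ i c_y|² (a decomposition one can read off from Eq. (2.6); overall latitude factors are dropped). The sketch below is our own illustration of the two polarization models.

```python
# Peak-power correlations: elliptical (equipartitioned) vs linear polarization.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Elliptical case: three i.i.d. circular complex Gaussian amplitudes
c = (rng.normal(size=(N, 3)) + 1j * rng.normal(size=(N, 3))) / np.sqrt(2)
P_sum = np.abs(c[:, 0] - 1j * c[:, 1])**2      # sum-peak power (arb. units)
P_diff = np.abs(c[:, 0] + 1j * c[:, 1])**2     # difference-peak power
r_ell = np.corrcoef(P_sum, P_diff)[0, 1]

# Linear case: one complex amplitude along a random real direction n
n = rng.normal(size=(N, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
amp = rng.rayleigh(scale=1 / np.sqrt(2), size=N) * np.exp(
    1j * rng.uniform(0, 2 * np.pi, size=N))
cl = amp[:, None] * n
Pl_sum = np.abs(cl[:, 0] - 1j * cl[:, 1])**2
Pl_diff = np.abs(cl[:, 0] + 1j * cl[:, 1])**2

print("elliptical corr(P_sum, P_diff):", r_ell)              # ~ 0
print("linear polarization, peaks equal:", np.allclose(Pl_sum, Pl_diff))
```

The linear case gives exactly equal sum and difference powers (|n_x ∓ i n_y|² = n_x² + n_y² for real n), while in the elliptical case c_x ∓ i c_y are independent complex Gaussians, so the two powers decorrelate.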

Statistical Analysis
We now consider the projected exclusion limits that a generic experiment would be able to set using our three-peak analysis. To do this, we use a series of likelihood-ratio tests.

Signal Likelihood
For our likelihood, we follow a hybrid frequentist-Bayesian approach, defining a marginalized likelihood in which all nuisance parameters are integrated out. In our case, these are the random Rayleigh parameters, α, and the random uniform DM phases, φ. Such a hybrid approach has already been used in the context of ultralight bosonic dark matter [43,81]. Our work differs from Ref. [81], since they focused on an axion-like signal as opposed to that from vector DM. It goes beyond Ref. [43], since they did not consider the peaks arising from the rotation of the Earth in their analysis.
The full likelihood in Fourier space is well known to follow a non-central χ² distribution with two degrees of freedom [82]. In our case, the non-centrality parameter is the total signal amplitude in Eq. (2.9). The marginalized likelihood is then given by Eq. (3.1), where Π describes the priors of our random parameters and p is the random variable we expect to measure in an experiment. We can express the result of Eq. (3.1) completely analytically; we provide a full derivation in Appendix B and only quote the final result here. The likelihoods in the signal-containing bins, which we call the Compton and sum/difference likelihoods, are given in Eq. (3.2). Note that, when β = 0, we correctly retrieve central χ² distributions in each bin, corresponding to the background-only case. The result for the Compton peak matches that of Refs. [43,81]. The result for the sum/difference peaks is new. For completeness, we also derive the equivalent of Eq. (3.2) for the linear polarization case in Appendix D.

In Fig. 3, we show a comparison between a numerical simulation of the signal likelihoods and the analytical results of Eq. (3.2). For the former, we begin from Eq. (2.6), simulating 10⁶ realisations of the time-domain signal. We take A = 1 [A], T_obs = 10 T_⊕, σ² = 5 [A]² Hz⁻¹, and ϕ = 45°. For the purposes of fast convergence in our simulations, we take m = 2π Hz and T_⊕ = 100 s. For each run, we sample α and φ from independent Rayleigh and uniform distributions respectively, as given by Eq. (2.5). To generate the time domain signal, we add Gaussian-distributed white noise with zero mean and variance σ_t² = σ²/∆t. Finally, we compute the periodogram of the signal following Eq. (2.7), dividing by σ² to produce the excess power in each frequency bin. The resulting normalised distributions of p(ω) for the Compton bin (ω = m) and the sum/difference bins (ω = m ± ω_⊕) are in excellent agreement with our analytical result. We also show the likelihood governing the deterministic result for the Compton peak: a non-central χ² with two degrees of freedom and non-centrality parameter given by the last term of Eq. (2.9). Here, instead of integrating out the nuisance parameter α_z², we set it to its expectation value, ⟨α_z²⟩ = 1. We see that higher values of p are favoured in this case, which would ultimately lead to an overly aggressive constraint on β.
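The qualitative effect of marginalizing over the Rayleigh amplitude can be seen in a compact Monte Carlo (our own sketch, not the full pipeline above): the bin power is non-central χ² with 2 d.o.f., and in the stochastic case the non-centrality is itself random, λ = λ₀ α_z² with α_z² exponentially distributed (⟨α_z²⟩ = 1), while the deterministic case fixes α_z² = 1. The value λ₀ = 4 is an arbitrary illustrative signal strength, not a number from the text.

```python
# Compare the deterministic and marginalized (stochastic) excess-power
# distributions in a single signal bin.
import numpy as np

rng = np.random.default_rng(2)
N = 400_000
lam0 = 4.0   # illustrative signal strength (assumption)

def draw_power(lam):
    """Noncentral chi^2, 2 d.o.f.: |Gaussian noise + signal phasor|^2."""
    g = rng.normal(size=(2, len(lam)))
    return (g[0] + np.sqrt(lam))**2 + g[1]**2

p_det = draw_power(np.full(N, lam0))                 # alpha_z^2 fixed to 1
p_sto = draw_power(lam0 * rng.exponential(size=N))   # alpha_z^2 ~ Exp(1)

print("means     :", p_det.mean(), p_sto.mean())     # both ~ 2 + lam0 = 6
print("variances :", p_det.var(), p_sto.var())       # stochastic case is broader
```

The two cases share the same mean, but marginalization broadens the distribution (extra variance λ₀² from the random amplitude), shifting weight to both low and high p relative to the deterministic curve.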
The full likelihood over all of frequency space is then given by the product of the likelihoods in each frequency bin, where p_i represents the excess power density in the i-th frequency bin, p is the full data vector, and the product runs over all N_bins frequency bins. Ultimately, since our signal only manifests in three bins, it suffices to consider only those bins that could potentially contain a signal, and we may ignore all others. We can express the likelihood in this way because each bin is statistically uncorrelated, as we show in Appendix B. This is in contrast with the analysis performed in Ref. [46], where a similar study was conducted for the gradient of a scalar in the time domain. There, a complicated covariance matrix had to be computed to account for correlations in the signal at different times. In Fourier space, these covariances disappear. The power of performing this analysis in the frequency domain is thus not only that the signal is contained within a small number of bins, but also that these bins are statistically independent, which allows us to treat the statistics in a significantly simpler way.

Figure 3: Comparison of the simulated signal likelihoods with the analytical results of Eq. (3.2). Here, ϕ is the latitude of the experiment, p (a random variable) is the value of the measured excess power, and β is defined as per Eq. (2.8). Also shown as a dashed line is the deterministic result for the Compton peak, where the stochastic variable α_z² is set to its expectation value, ⟨α_z²⟩ = 1.
Crucially, once the latitude of the experiment, ϕ, is fixed, the likelihood depends on the product of all experimental variables only through the dimensionless parameter β. This means that we can set a more holistic limit that is independent of the specifics of an experiment. Once the form of A (which depends on both the experiment and the DM model), the observation time T_obs, and the noise profile σ are known, the ensuing limit on β can be recast as one on the model parameters of interest. This makes our analysis, both the results and the overall logic, as generally useful as possible.

Projected Exclusions
To derive our limits, we construct the one-sided log-likelihood-ratio test statistic, defined in Eq. (3.4), where β̂ is the value of β that maximises the likelihood given the observed data set p, characterising the best-fit model. This statistic is defined as a piecewise function, as we only expect excess signals to be disfavoured when excluding a value of β in a one-sided test. This corresponds to values of β greater than the best-fit value; values below it are deemed under-fluctuations and considered consistent with observation. This statistic tells us how consistent the data are with a signal defined by β compared to the best-fit model, with zero representing perfect consistency and large values indicating high inconsistency.
The general idea is that we want to exclude those values of β that lead to excessive values of q_β. We do this by considering the distribution f(q_β) that we expect to arise when many hypothetical experiments perform a measurement. Generating the α_conf % confidence level (CL) limit then depends on finding the value of the test statistic, q_β^lim, for which an experiment has an α_conf % probability of attaining that value or below. That is, we solve for q_β^lim in Eq. (3.5). From q_β^lim, we then find the β_lim that gives us this value of the test statistic. This is our α_conf % CL limit on our parameter of interest, β.
Ultimately, we compute 90% CL limits (α_conf = 0.90) via a series of Monte Carlo (MC) simulations, following the above procedure in each MC run.⁹ For a given choice of β, we begin by simulating the distribution f(q_β). We do this by simulating 10⁶ experiments, sampling the data p for each signal bin directly from the verified likelihoods given in Eq. (3.2). In each run, we find β̂ and compute the distribution of q_β for a range of β and ϕ values. We then fit our distributions to the ansatz of Eq. (3.6), where ϖ ∈ [0, 1] is a scale factor ensuring that the distribution is normalised to 1. This ansatz is inspired by the asymptotic result of Chernoff (where ϖ = 1/2), which itself is a limiting case of Wilks' theorem when the true value of β lies on the boundary of its domain (which is true in our case, since our background-only data set is defined by β = 0) [84,85].

Figure 4: Projected 90% CL exclusion limits as a function of the latitude ϕ. Our three-peak analysis generally provides a better or comparable limit relative to either of the two single-peak analyses and is largely latitude-independent.
For each run, we find the best-fit value of ϖ. From Eq. (3.6), we can then invert Eq. (3.5) to find q_β^lim; for the α_conf % CL limit, this yields the closed-form result of Eq. (3.7). In generating our distributions, we find a weak dependence on the chosen values of β and ϕ; however, this leads to only a small change in the ultimate value of q_β^lim. We take the mean value over our fits as its best estimator, yielding q_β^lim ≃ 2.43, and use this throughout the rest of our study. Note that this is close to the asymptotically expected result of q_β^lim ≃ 2.71 for a 1 d.o.f. problem [83]. We could have derived a similar result by integrating the resulting q_β histograms; however, we attain a good fit to Eq. (3.6), and it provides us with a closed-form solution for q_β^lim, as per Eq. (3.7). With q_β^lim at hand, we may derive the main result of this section, β_lim. We once again follow an MC approach, generating 10⁶ data sets consistent with a background-only observation (setting β = 0) and finding, for each hypothetical experiment, the value of β for which Eq. (3.4) returns q_β = q_β^lim. This produces a distribution of limits, for which we take the median as our best estimator. We also produce the 1σ error bars on our limits by finding the 16th and 84th percentiles of the distribution in β_lim. We show the 90% CL limit arising from our three-peak analysis in Fig. 4. Also shown are the corresponding results from two single-peak analyses, focusing solely on the Compton peak and on either one of the sum or difference peaks. These results follow the same MC procedure as above but take as the full likelihood only the Compton or the sum/difference likelihood given in Eq. (3.2). We see that the three-peak analysis produces a limit that is largely latitude-independent, rising slightly towards the pole. This is because the sensitivity axis at this latitude only has a component parallel to the Earth's rotation axis and is thus only able to pick out the Compton peak.
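The logic of the limit-setting MC can be illustrated in miniature. The sketch below is a heavily simplified stand-in, not the full pipeline: it uses a single bin with a deterministic non-central χ²₂ likelihood (rather than the marginalized three-peak likelihood of Eq. (3.2)), background-only pseudodata, and the threshold q_β^lim = 2.43 quoted above.

```python
# Toy version of the MC limit procedure: for each background-only
# pseudo-experiment, find the smallest beta whose one-sided q_beta
# exceeds Q_LIM, then take the median over experiments.
import numpy as np

rng = np.random.default_rng(3)
Q_LIM = 2.43                     # threshold fitted in the text

def logL(p, lam):
    """Log-density of a noncentral chi^2 with 2 d.o.f. (via Bessel I0)."""
    return -0.5 * (p + lam) + np.log(np.i0(np.sqrt(lam * p)))

def beta_limit(p, betas=np.linspace(0.0, 60.0, 6001)):
    """Smallest beta with one-sided q_beta >= Q_LIM for measured power p."""
    ll = logL(p, betas)
    i_hat = int(np.argmax(ll))                  # grid MLE of the signal strength
    qs = -2.0 * (ll - ll[i_hat])                # likelihood-ratio statistic
    idx = min(i_hat + np.searchsorted(qs[i_hat:], Q_LIM), len(betas) - 1)
    return betas[idx]

# Background-only pseudodata: the excess power is central chi^2 with 2 d.o.f.
p_bkg = rng.chisquare(2, size=1000)
limits = np.array([beta_limit(p) for p in p_bkg])
median_limit = np.median(limits)
print("median 90% CL upper limit on beta (toy model):", median_limit)
```

As in the text, the median over pseudo-experiments serves as the best estimator of the projected limit, and percentiles of `limits` would give the error band.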
This latitude-independent limit is in contrast with the analyses that focus on only single peaks, which are both highly sensitive to where the experiment is placed. For the study focusing on the Compton peak, the constraining power is optimal at the pole, where all of the power is contained in the Compton frequency bin, and it rapidly declines towards the equator, where the Compton peak disappears. Note that this result and the three-peak result join at the pole, with no difference between the approaches. Conversely, for an analysis focusing on one of the sum or difference peaks, the situation is the opposite. In this case, the results of this and the three-peak methods do not converge, since half the power is contained in a single one of the sum or difference peaks at the equator; the three-peak analysis captures all of this power, whereas the single-peak analysis misses half of it. Thus, the strength of our analysis is that the constraining power is retained no matter where an experiment is placed, such that its latitude is rendered largely irrelevant from the viewpoint of constraining a ULDM signal.
We emphasize that Fig. 4 is to be interpreted as a set of exclusion lines whereby background pseudodata is generated and the assumed DM signal strength is constrained. The key point is that the level of this constraint depends on the assumption one makes about the nature of the DM signal given a non-detection. Taking this signal to be only a single peak in Fourier space leads to constraints that generally depend on the latitude of the experiment, rapidly weakening towards latitude extremes. On the other hand, employing the full signal model consisting of three peaks yields stronger constraints that are almost independent of the experiment's placement.
Throughout the above analysis, we assumed that the sensitivity axis pointed in the zenith direction. However, we can relax this assumption and consider what our results would look like if this axis pointed in some other direction, for instance perpendicular to the zenith, namely the East/West or North/South directions.¹⁰ If North/South-pointing, all of the curves in Fig. 4 would be flipped about the line ϕ = 45°. While the strongest limit for the Compton peak would now occur at the equator instead of the pole (and vice versa for both the sum and difference peaks), crucially we would still retain a largely latitude-independent constraint for our three-peak analysis. This is because, throughout the experiment, the directionality of the detector traces out a cone, making it sensitive to the vector DM power in all three directions. If, instead, the axis pointed East/West, we would only see the sum and difference peaks. This is because, throughout the experimental run, the directionality of the detector is restricted to lie in a plane (necessarily perpendicular to the rotation axis of the Earth). It is therefore always insensitive to the power contained along the perpendicular direction, which is tied to the standalone Compton peak. One final possibility is that the directionality of the detector traces out a line throughout the experiment. This is only possible when it points parallel to the Earth's rotation axis (at any given latitude). In this case, we would naturally be oblivious to the Earth's rotation and hence to the sum and difference peaks, and we would only be able to resolve the Compton peak.
In summary, the best-case scenario is when the detector's sensitivity axis traces out a cone. In this case, we capture all three peaks, since we are sensitive to the vector DM power across all three directions (the Earth's rotation axis and the two orthogonal directions).
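The cone/plane/line geometry described above can be checked with a few lines of vector algebra. The frame conventions below (local East/North/Up basis, inertial z axis along the Earth's rotation axis) and the example latitude are our own illustrative choices:

```python
import numpy as np

phi = np.deg2rad(29.76)                       # example latitude (Houston, as in the text)
theta = 2*np.pi*np.linspace(0.0, 1.0, 500)    # rotation angle over one sidereal day

def inertial_axis(e, n, u):
    """Express a fixed local (East, North, Up) unit vector in a non-rotating
    frame whose z axis is the Earth's rotation axis."""
    east  = np.array([-np.sin(theta),              np.cos(theta),               np.zeros_like(theta)])
    north = np.array([-np.sin(phi)*np.cos(theta), -np.sin(phi)*np.sin(theta), np.cos(phi)*np.ones_like(theta)])
    up    = np.array([ np.cos(phi)*np.cos(theta),  np.cos(phi)*np.sin(theta), np.sin(phi)*np.ones_like(theta)])
    return e*east + n*north + u*up

zenith = inertial_axis(0.0, 0.0, 1.0)                  # cone: z-component fixed at sin(phi)
ew     = inertial_axis(1.0, 0.0, 0.0)                  # plane: z-component identically zero
polar  = inertial_axis(0.0, np.cos(phi), np.sin(phi))  # line: parallel to the rotation axis
```

The zenith axis keeps a constant component sin(ϕ) along the rotation axis (a cone, all three peaks), the East/West axis stays in the equatorial plane (sum/difference peaks only), and the axis-parallel orientation is a fixed line (Compton peak only), as argued in the text.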
For comparison, in Appendix D we also derive the scaling of the limits on β under the assumption of linear polarization, using a simpler two-peak Asimov analysis. We find that the limits are mostly affected towards the equator, with the largest scaling factor being ∼1.2. This difference becomes larger for higher desired confidence levels.

4 Application to Accelerometer Studies
As an application of our analysis strategy, we consider a concrete sensor and DM model. As our sensor, we take the canonical optomechanical light cavity, which can be used to perform acceleration measurements by continuously measuring the distance between fixed and movable cavity mirrors. As our model, we consider 'dark photon' DM stemming from a gauged U(1)_B−L symmetry, leading to wavelike DM in the ultralight regime that couples to the difference between baryon number, B, and lepton number, L. Gauging such a charge is popular in the context of particle physics, since it naturally leads to the introduction of right-handed neutrinos and, hence, can account for the non-zero neutrino masses [86-88]. Motivation for such an ultralight gauge boson can also be found in [89,90]. In the associated Lagrangian density, A′_µν ≡ ∂_µ A′_ν − ∂_ν A′_µ is the field strength tensor of the new field, m is the mass of the field, and j^µ_B−L is the B−L vector current. Explicitly, j^µ_B−L = Σ_f Q^f_B−L f̄ γ^µ f, where the sum runs over all fermions in the SM and Q^f_B−L is the B−L charge of the fermion f. A multitude of studies have considered this combination and set limits or projections on B−L-coupled DM [36,37,39,42,43,45].
However, those works that performed a Fourier-space analysis only had access to the Compton peak. This is because the total experiment integration times had to satisfy T_obs ∼ 1 h < T_⊕ to maintain experimental stability, dictated by retaining the coherence of the laser in optomechanical cavity setups. If one can instead measure for at least ∼1 day, the other peaks can be resolved. As we discussed in Section 3, an analysis for an axial sensor that does not account for these additional peaks is sub-optimal, as it fails to capture the full signal and therefore suffers from a signal loss at a range of latitudes. Furthermore, a proper vector treatment of the DM field in Fourier space, which includes the stochasticity of the ULDM field variables and the effect of the rotation of the Earth, has not been done; ignoring the randomness of the nuisance variables leads to an overly aggressive constraint, as was illustrated in Fig. 3 and also pointed out in Refs. [43,81]. Our more holistic three-peak strategy, which also accounts for this stochasticity, retains the full signal and is largely latitude-independent in its constraining power. Therefore, this choice of sensor and DM model makes for an excellent case study with which to showcase the improved constraining power of our method.

4.1 Recasting Generalised Limits onto B − L Dark Matter
To recast our limits on β shown in Fig. 4 into one on the parameter of interest for this model, the gauge coupling strength g_B−L, we must define four quantities. Firstly, we must make clear what the quantity A is for this experiment, which will depend on both the model and the signal of interest for this sensor. Secondly, we must choose an observation time, T_obs. Thirdly, we must specify the concrete noise profile, σ(ω), for this type of experiment. Lastly, and least importantly following from our discussion above, we must choose a latitude for the experiment. Once these are known, Eq. (2.8) can be rearranged for g_B−L, giving us our model- and experiment-specific 90% CL limit.
For B−L-coupled dark photon dark matter, the relevant signal is a differential acceleration. This is given by Eq. (2.6) in the time domain, with a now concrete choice for the amplitude. Here, g_B−L is the gauge coupling strength of the model, ∆_ij is the differential B−L charge per nucleon between materials i and j, and a_0 ≡ √(2ρ/3)/u ≃ 10^12 m s^−2 is a characteristic acceleration imparted by the field to each nucleon (with u being the atomic mass unit). For most materials, ∆_ij ∼ 0.1, which we take in the following analysis [37]. Note that, for this particular model, g ≡ g_B−L ∆_ij /u. For our observation time, we take T_obs = 10T_⊕ to be firmly in the regime where the three peaks can be resolved. For light cavities, which may only be able to remain coherent over the scale of hours rather than days, such a runtime is optimistic. However, since our aim here is merely to showcase how our method can be used concretely and to compare to the works of Refs. [37,39], we do not see this as an issue. Our strategy is general and can be applied to any axial sensor and vector-like DM model, and we have settled on this configuration only for the sake of argument; other sensor technologies, such as magnetically levitated sensors, do not have this issue. Moreover, multiple cavities or data-stacking techniques can be employed to mitigate it.
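As a sanity check of the quoted characteristic acceleration, one can redo the unit conversion numerically. The local DM density ρ = 0.4 GeV cm⁻³ used below is an assumed fiducial value:

```python
import math

# CODATA constants (SI)
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
eV   = 1.602176634e-19   # J
u_eV = 931.494e6         # atomic mass unit in eV

rho_GeV_cm3 = 0.4        # assumed local DM density in GeV / cm^3

inv_eV_in_m = hbar * c / eV                      # 1 eV^-1 expressed in metres
cm_in_inv_eV = 1e-2 / inv_eV_in_m                # 1 cm expressed in eV^-1
rho_eV4 = rho_GeV_cm3 * 1e9 / cm_in_inv_eV**3    # density in natural units, eV^4

a0_eV = math.sqrt(2.0 * rho_eV4 / 3.0) / u_eV    # a_0 = sqrt(2 rho / 3) / u, in eV
a0_SI = a0_eV * (eV / hbar) * c                  # eV -> s^-1, then * c -> m s^-2
# a0_SI comes out at ~7e11 m/s^2, i.e. of order 10^12 m/s^2 as quoted
```

With a coupling g_B−L ∼ 10⁻²⁵ and ∆_ij ∼ 0.1, this yields differential accelerations of order 10⁻¹⁴ m s⁻², at the level probed by equivalence-principle experiments.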
We model the background according to Refs. [37,39]. Namely, we split the total expected background PSD σ², which we write as S_aa as per convention, into thermal, shot-noise, and back-action components. In what follows, we do not discuss the forms of these noise terms; we instead refer the reader to Refs. [54,91] for a review of the topic. In the thermal component, γ represents the coupling between the sensor and the thermal bath of temperature T, m_s is the mass of the sensor, and k_B is Boltzmann's constant. Typically, we parametrise the thermal coupling as γ ≡ ω_0/Q, where ω_0 is the resonance frequency of the cavity and Q is its quality factor. In the measurement-added noise terms, the shot noise and the back-action noise, κ is the cavity loss, which quantifies the efficiency of the optical modes of the cavity, L is the cavity length, ω_L is the angular frequency of the laser, and P_L is its power; these terms also involve the mechanical and cavity susceptibilities. Our choices for all of the above parameters except for the laser power, which we expand on below, are summarised in Table 1. These are in keeping with the choices made in Refs. [37,39]. The choice of where to tune the laser power is critical to obtaining competitive limits for a wide range of dark matter masses. We have found that we can achieve excellent limits for low dark matter masses, which are of most relevance to our work, by tuning the laser power such that the back-action and shot-noise components are minimised at low frequencies, that is, by finding that P_L for which ∂_P_L [S^SN_aa(ω → 0) + S^BA_aa(ω → 0)] = 0 for a given choice of ω_0. For the resonant frequencies we have considered, the required laser power ranges from P_L ∼ 10^−12 W for f_0 = 0.1 Hz to P_L ∼ 10^−8 W for f_0 = 10 Hz. We note that this choice of power tuning is different from the strategies usually employed in other studies. Typically,
the laser power is tuned so that the measurement-added noise is either minimised on resonance or, as LIGO implements it, well above resonance [92]. However, we found that both of these choices degrade the limit that we can draw at low masses, increasing the background at low frequencies beyond the thermal noise floor. For these other strategies to be beneficial both at their respective frequency targets and at low frequencies, the thermal noise would have to be significantly lower than that achieved with our choice of sensor mass, quality factor, and bath temperature.
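The power-tuning condition ∂_P_L[S^SN_aa(ω → 0) + S^BA_aa(ω → 0)] = 0 can be sketched generically: at low frequency the shot noise scales as 1/P_L and the back-action noise as P_L, so the optimum sits where the two balance. The coefficients below are placeholders for illustration, not the values implied by Table 1:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Placeholder low-frequency coefficients (not the paper's Table 1 values):
A_SN = 1e-20    # shot noise:        S_SN(omega -> 0) = A_SN / P_L
B_BA = 1e-4     # back-action noise: S_BA(omega -> 0) = B_BA * P_L

def measurement_noise(P_L):
    return A_SN / P_L + B_BA * P_L

# minimise in log-space for numerical robustness over many decades of power
res = minimize_scalar(lambda lp: measurement_noise(10.0**lp),
                      bounds=(-15.0, 0.0), method="bounded")
P_opt = 10.0**res.x
P_analytic = np.sqrt(A_SN / B_BA)   # the two components balance at the optimum
```

For these placeholder coefficients the optimum lands at P_L = 10⁻⁸ W; the general point is that the stationarity condition is equivalent to equating the two measurement-added components at low frequency.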
In the treatment of our backgrounds, we have neglected seismic noise, which can become important at frequencies below ∼10 Hz. Since we require the use of two materials to measure the differential acceleration peculiar to B−L DM, we can envision constructing two sensors: one with a movable mirror made of material i and another of material j. By subtracting their signals, we both isolate the differential acceleration signature and remove backgrounds common to both, including the seismic noise component [65]. Finally, we choose Houston as the location of our experiment, at latitude ϕ = 29.76°. From Fig. 4, we find that β_lim ≃ 3.1 for this latitude. However, we note that this information is almost unnecessary for the three-peak analysis, since it is largely location-independent. We may then rearrange Eq. (2.8), accounting for Eq. (4.3), to give us our limit on ultralight B−L-coupled DM.

Figure 5: The 90% CL limits on the gauge coupling for ultralight B−L DM placed by an optomechanical cavity setup using our statistical framework. The limits using our three-peak analysis strategy (solid) are shown for three resonance frequencies, f_0 = 0.1 Hz, 1 Hz, and 10 Hz. Existing bounds from the Eöt-Wash [93,94] and MICROSCOPE experiments are shown in grey. For MICROSCOPE, we show bounds based on the first [94-97] and final [98] results, the latter of which we compute in Appendix E. The vertical shaded region indicates where the observation time T_obs = 10T_⊕ becomes greater than the coherence time, where our framework is no longer valid. The top axis shows the Compton frequency for a given DM mass in Hz.

We show our limits in Fig. 5 for three choices of resonance frequency: 0.1 Hz, 1 Hz, and 10 Hz. For all but the last of these frequencies, we are able to exclude new regions of
the ULDM B−L parameter space, previously best excluded by the fifth-force satellite experiment MICROSCOPE [98] and the torsion-balance experiment Eöt-Wash [93]. This showcases the power of such sensors in searching for this DM candidate, as was first pointed out in Ref. [65]. In a single-peak analysis focusing solely on the Compton peak, our limits would be weakened by approximately a factor of 2, as can be seen from Fig. 4. This difference becomes more dramatic for sensors located closer to the equator.
For the existing limits outlined above, we extract the Eöt-Wash limit from [94]; however, we recompute the MICROSCOPE limit. This is because the result of [94], at the time of writing, is based on [96], which computed constraints from the first MICROSCOPE results [95,97]. We update this limit to reflect the final results given in [98], following the reasoning outlined in [99,100]; see Appendix E for details. At low masses, we find that g_B−L ≲ 7 × 10^−26, improving the limit given in [94], which we also show in Fig. 5, by approximately a factor of 6.2. Compared to the B−L limits computed in [99,100], we find an improvement by approximately a factor of 2.6.
We only extend our limits to where they are appropriate. At the higher mass range, we are limited by keeping our observation time shorter than the coherence time, T_obs ≪ τ_coh ≃ 10 d × (5 × 10^−15 eV/m). At the lower mass range, we must keep our observation time longer than a day, T_obs ≳ 2π/ω_⊕, to be able to resolve the three peaks; this corresponds to m ≃ 5 × 10^−20 eV. This leads to the mass range 5 × 10^−20 eV ≲ m ≲ 5 × 10^−15 eV in which our study is valid.
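The quoted boundaries of this window follow directly from the Compton period and τ_coh = 2π/(m v_0²); a quick numerical check, with the assumed v_0 = 10⁻³:

```python
import math

HBAR_eVs = 6.582119569e-16   # hbar in eV s
V0 = 1.0e-3                  # assumed virial velocity in units of c
DAY = 86400.0                # seconds

def compton_period(m_eV):
    """Field oscillation period 2*pi/m, in seconds."""
    return 2.0 * math.pi * HBAR_eVs / m_eV

def coherence_time(m_eV):
    """tau_coh = 2*pi / (m * v0^2), in seconds."""
    return compton_period(m_eV) / V0**2

t_low  = compton_period(5e-20) / DAY   # lower edge: oscillation period ~ 1 day
t_high = coherence_time(5e-15) / DAY   # upper edge: coherence time ~ 10 days
```

The same function reproduces the τ_coh ≃ 50 d quoted later for m = 10⁻¹⁵ eV, confirming the internal consistency of the stated mass window.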
We note that σ(ω), though mostly slowly varying over the frequency width ∆ω = ω_⊕, does exhibit a large gradient around the resonance frequency of the cavity. In the neighbourhood of this frequency, our assumption that σ(ω) does not vary greatly over this range is incorrect, and a more careful analysis in which all three peaks take on different noise levels would have to be conducted for more representative limits. However, we do not expect our limits to differ greatly from our calculation; at any rate, they would still smoothly join the regimes on either side of the resonance frequency, where our assumption holds.
5 Future Directions

5.1 Longer Observation Times
In this work, we have focused on the wave vector DM signal within time scales much shorter than the coherence time, τ_coh = 2π/(m v_0²) ≃ 50 d × (10^−15 eV/m). This allowed us to treat the amplitudes and phases of the three different components of the vector as constant random variables. Realistically, there would be modulations giving these random variables time dependence, with corrections of order O(t/τ_coh). Simultaneously, there would be spatial variations/correlations due to the finite distance covered by the Earth/detector during the observation, leading to corrections of order O(v_0 t/ℓ_coh) = O(t/τ_coh) again. For longer observation times, T_obs ≫ τ_coh, such modulations necessarily need to be taken into account. While we can easily generate realistic time series for such long time-scale signals (see the top panel of Fig. 2), a more comprehensive treatment of how they affect our limits is left for future work.
Nevertheless, we comment on a simplified study that could be done within the incoherent regime. In this limit, the stochasticity of the field is averaged over as O(T_obs/τ_coh) coherent patches cross the Earth. Thus, for T_obs ≫ τ_coh, the randomness in the signal disappears, and we are left with a deterministic signal. Moreover, the signal in Fourier space loses its coherent T_obs enhancement, reaching its maximal value at T_obs = τ_coh (cf. Eq. (2.8)).
In a simplified experimental study, one could then analyse the data by first splitting the long time series into N_coh ∼ T_obs/τ_coh smaller, independent time series of coherence-time duration. Each one of these series would lead to our three-peak signal in Fourier space with randomly drawn Rayleigh amplitudes and uniform phases. One could then average over all N_coh PSDs, resulting in a deterministic amplitude where the randomness is no longer manifest. Crucially, this procedure would also average the noise within each signal-containing bin, resulting in a noise suppression by the factor N_coh^−1/2. Similar arguments have been made in Refs. [37,39]. To make inferences, we could then redefine our β parameter as an 'incoherent' version of itself. This parameter can then be used in the deterministic likelihood, which takes the form of a non-central χ² distribution (cf. Eq. (3.1)), to find the limit β_lim^incoh given the observed (averaged) PSD.
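The PSD-stacking idea can be demonstrated on pure noise: averaging the periodograms of N_coh independent chunks suppresses the bin-to-bin scatter by roughly N_coh^−1/2. A minimal sketch with white Gaussian noise, all parameters illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_chunk = 1024        # samples per coherence-time chunk (illustrative)
n_coh = 64            # number of independent coherent patches

# white Gaussian noise stream, split into n_coh chunks of one "coherence time"
x = rng.normal(size=(n_coh, n_chunk))

# one-sided periodogram of each chunk (sampling rate set to 1 for simplicity)
psd = 2.0 * np.abs(np.fft.rfft(x, axis=1))**2 / n_chunk

stacked = psd.mean(axis=0)                 # averaged PSD over coherent patches
single_scatter = psd[0, 1:-1].std()        # exponential statistics: std ~ mean
stacked_scatter = stacked[1:-1].std()      # suppressed by ~ n_coh**-0.5
ratio = single_scatter / stacked_scatter   # expect roughly sqrt(n_coh) = 8
```

A single-chunk periodogram bin is exponentially distributed (standard deviation comparable to its mean), while the stacked estimate concentrates around the deterministic level, which is the mechanism behind the noise suppression quoted above.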

5.2 Expanding the Mass Window
The detection scheme considered in this paper is one where the detector points in a specific fixed direction locally on Earth while rotating along with it. As noted above, in the short-observation-time scenario we are bounded from below by a day (corresponding to m ≃ 5 × 10^−20 eV) and from above by the coherence time scale τ_coh ≃ 10 d × (5 × 10^−15 eV/m) to be able to resolve the distinctive three peaks. To have a large working window in which we can neglect the previously mentioned corrections, the masses we can probe using this setup lie in the range 5 × 10^−20 eV ≲ m ≲ 5 × 10^−15 eV. To expand this window towards higher masses, we need to relax the lower bound coming from T_obs > 1 d.
Instead of a fixed detector rotating with the Earth, a detector can also be made to rotate at an angular frequency faster than a day. For example, if ω_exp ≃ 2π/(1 min), then the detector needs to collect data for only a few minutes to be able to isolate the three peaks. With τ_coh ≃ 15 min × (5 × 10^−12 eV/m), we can probe masses up to m ≃ 5 × 10^−12 eV. Note that while the time signal would contain modulations due to the Earth's rotation, these effects would be suppressed by O(ω_⊕/ω_exp). Furthermore, in frequency space, this modulation would split the three peaks at ω_left = m − ω_exp, ω_middle = m, and ω_right = m + ω_exp into nine peaks (each splitting into three). However, unless the frequency resolution ∆ω becomes smaller than ∼ω_⊕, the experiment would not be able to resolve this splitting due to the Earth's rotation. Thus, we expect our whole analysis in this paper to carry forward, with ω_⊕ replaced by ω_exp everywhere. Such a setup would also be beneficial for the optomechanical light cavities we considered in Section 4, whose typical laser coherence times are of the order of hours rather than days.
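The peak structure for a rotating sensitivity axis can be reproduced with a short simulation. All frequencies and the latitude below are illustrative, chosen so that the peaks land exactly on Fourier bins; the axis traces a cone at rate ω_exp, so the projected signal carries power at m and m ± ω_exp:

```python
import numpy as np

rng = np.random.default_rng(3)
f_compton = 1.0               # Compton frequency in Hz (illustrative)
f_rot = 0.05                  # platform rotation frequency in Hz (>> 1/day)
fs, T = 16.0, 400.0           # sampling rate and duration; peaks land on bins
t = np.arange(0.0, T, 1.0 / fs)

lat = np.deg2rad(45.0)
# sensitivity axis traces a cone about the rotation axis (here the z axis)
axis = np.vstack([np.cos(lat) * np.cos(2*np.pi*f_rot*t),
                  np.cos(lat) * np.sin(2*np.pi*f_rot*t),
                  np.sin(lat) * np.ones_like(t)])

# one coherent realisation of the field: Rayleigh amplitudes, uniform phases
alpha = rng.rayleigh(scale=1.0, size=3)
phase = rng.uniform(0.0, 2*np.pi, size=3)
field = alpha[:, None] * np.cos(2*np.pi*f_compton*t + phase[:, None])

signal = (axis * field).sum(axis=0)
power = np.abs(np.fft.rfft(signal))**2
freq = np.fft.rfftfreq(t.size, 1.0 / fs)

# the three strongest bins sit at f_compton and f_compton +/- f_rot
peaks = np.sort(freq[np.argsort(power)[-3:]])
```

Replacing `f_rot` with the sidereal frequency recovers the Earth-rotation case of the main text; the analysis is otherwise unchanged, as argued above.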

6 Conclusions
We have provided an analysis strategy for inferring the properties of ultralight vector dark matter from terrestrial experiments, taking into account the stochastic and vector nature of the field (see Fig. 1). Our main results are suited for observation times that are longer than a sidereal day but shorter than the coherence time. They are as follows:

• We focused on the signal in Fourier space, deriving the power spectral density that such dark matter is expected to leave on an axial sensor that is sensitive to its oscillatory signal. Accounting for the rotation of the Earth, we found that the signal manifests as three peaks at definite frequencies but with random amplitudes (see Fig. 2).
• We derived the likelihoods in each of the signal-containing bins in Fourier space. We did this by considering the marginal likelihood after integrating out the six random variables exhibited by the ULDM signal in the coherent regime: the three Rayleigh amplitudes and the three uniformly distributed DM phases (see Eq. (3.2) and Fig. 3). We found that the general elliptical motion of the vector field, arising out of equipartition, afforded us a simpler analysis in Fourier space than the linear polarization assumption, because in the former all peaks become statistically uncorrelated.
• We drew exclusion limits on a generalised, dimensionless parameter that can be reinterpreted in the context of a concrete sensor setup and dark matter model. We did this via a series of log-likelihood-ratio tests following a hybrid frequentist-Bayesian approach. Crucially, we found that, unlike analyses focusing on only a single peak, our approach retains constraining power for experimental setups at all latitudes. This is because we make use of the entire DM signal, which is distributed across all three peaks, instead of restricting ourselves to the signal in any one peak, which depends on the latitude of the experiment (see Fig. 4).
• We considered a specific sensor technology (the optomechanical light cavity) and dark matter model (ultralight dark matter stemming from a new gauged U(1)_B−L symmetry) as a concrete application of our analysis strategy. We recast our general limit onto one on the gauge coupling of this model, g_B−L, finding that long-exposure cavities can rule out previously unexplored regions of the B−L parameter space (see Fig. 5).
In this work, we have established a framework for future experimental efforts in the detection of ultralight vector dark matter. Novel direct-detection probes require an understanding of how the signal of ultralight vector dark matter behaves in our local neighborhood and manifests itself in a sensor. We hope that our work aids in (i) designing search strategies using emerging detector technologies that are not traditionally used for dark matter searches, and (ii) understanding how well a given model can be tested in the context of calls for Big Science projects using quantum sensing [64].
the large volume limit. Here, the cross-correlation between â and b̂ vanishes, i.e. ⟨â_i* b̂_j⟩ = 0, k/m is our velocity relative to the rest frame of the halo, and σ is the velocity dispersion. Owing to their Gaussian nature, we can combine the two random variables and define ŵ_j ≡ √Υ_0 â_j + √(1 − 2Υ_0) b̂_j. This new combined random variable has zero mean, and its variance equals the sum of the two individual variances in Eq. (A.6) (weighted by Υ_0 and (1 − 2Υ_0), respectively). We can also extract the factor ρ/m. With all of this, we obtain the form of Â_j(0, t), where ⟨ŵ_j⟩ = 0. With the form of the stochastic vector field derived, we now need to generate the random variables ŵ. We can do this by finding an operator matrix G such that its square gives the right-hand side of the two-point correlation in Eq. (A.9). Then, we simply pick a 3-dimensional normal complex random variable, say ĥ (which can equivalently be thought of as a 6-dimensional normal random variable), and hit it with G to get ŵ. That is, we have ŵ_i = G_ij ĥ_j. (A.10) While this serves as a procedure to generate the stochastic random vector field Â, the problem simplifies dramatically when the power is equipartitioned between the longitudinal and transverse modes, that is, when Υ_0 = 1/3. We expect this to be the case when the vector field accounts for (at least the majority of) the virialized dark matter around us. We discuss this next.

Figure 6: Fractional power in the transverse and longitudinal modes around halo formation for four different simulations with different Υ(0): Υ(0) = 0 (solid), Υ(0) = 1/6 (dashed), Υ(0) = 1/3 (dot-dashed), and Υ(0) = 1/2 (dotted). The convergence of the fractional power in transverse modes towards 2/3 and that in longitudinal modes towards 1/3 demonstrates the ultimate equipartition of power; that is, Υ(t) → 1/3.
on an S⁵ so that it is unit normalized, that is, ⟨ε_i*^k ε_j^p⟩ = δ^{kp} δ_ij. We then hit it with the transverse and longitudinal projection operators to obtain ε_i. With these, we construct the full transverse and longitudinal initial fields, and hence the full field. With this as the initial condition, we evolve the SP system (A.15). In Fig. 6, we present simulation results for various initial conditions, including different values of Υ(0). There, we plot the fractional powers in the two sectors. With Υ → Υ_0 = 1/3, we achieve equipartition, and the stochastic vector field, cf. Eq. (A.8), takes the simple form of Â_j(0, t) that we have used in Eq. (2.3). This gives rise to the "random ellipse" picture discussed in the main text.
To conclude, we have shown that even if the vector field is initialized with an unequal distribution of power between its longitudinal and transverse modes, non-linear gravitational dynamics eventually leads to equipartition. However, we have only begun to explore this topic and leave a detailed study (including a k-dependent Υ) for future work.

B Derivation of Marginal Likelihood with Stochastic Field Amplitude
The full signal in time space is the sum of the DM signal and a noise term N ∼ N(0, σ_t), with σ_t² ≡ σ²/∆t and σ² being the equivalent of the noise PSD in frequency space. From this, we obtain the (two-sided) periodogram, P′, normalised by the noise PSD. Here, both X_R and X_I are such that X_i ∼ N(0, 1/√2). The one-sided, noise-normalised periodogram, P/σ², therefore follows a non-central χ² distribution with non-centrality parameter λ. Introducing randomness in the parameters, the α's and φ's, with prior Π′({α_i, φ_i}), yields the marginalized likelihood, where I_0 is the modified Bessel function of the first kind. To evaluate the marginalization integral, we first note that the prior factorizes into that for α_z and that for the set {α_x, α_y, φ_x, φ_y}. The latter four random variables (which correspond to the ω = s and ω = d peaks) can be redefined as two 2D random vectors x and y with relative angle π/2 − (φ_x − φ_y). Here, the subscripts 1 and 2 correspond to the two components of the vectors along the two directions of the 2D Euclidean space. Since the α's are Rayleigh distributed and the φ's are uniformly distributed on (0, 2π), the four variables {x_1, x_2, y_1, y_2} are normally distributed with zero mean and variance equal to 1/2. Furthermore, we can redefine the x's and y's as x_i + y_i = u_i and x_i − y_i = v_i for i = {1, 2}, with Π′(α_z) = 2α_z e^{−α_z²}. Using the series representation of the Bessel function, together with Gamma-function identities, the five random variables can then be integrated out analytically, and we arrive at the marginalized (and normalized) likelihood. This likelihood can be split into three individual likelihoods for the sum/difference peaks and the Compton peak, as given in Eq. (3.2). The forms of the likelihoods for the sum and difference peaks are equivalent.
To treat the total likelihood as the product of the individual likelihoods in each frequency bin, we must check that the covariance matrix is diagonal. We consider a signal-only analysis, discarding the noise, since the noise merely adds to the power and is uncorrelated between different frequency bins. The value of the sum peak, for instance, may be written as P₁ = (A² T_obs/8) [α_x² + α_y² + 2α_x α_y sin(φ_y − φ_x)] cos²ϕ. We wish to compute the covariances between the three peaks, which we can do using the expression for the raw moments of the Rayleigh distribution with, for us, σ = 1/√2. Aside from this, we need only ⟨sin(φ_y − φ_x)⟩ = 0. We then obtain a diagonal covariance matrix: crucially, the covariance between peaks is 0, allowing us to treat them as statistically independent and hence permitting us to express the total likelihood as the product of the individual likelihoods.
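The vanishing covariances can be verified by a direct MC over the random variables; the peak expressions below keep only the random factors, dropping the common A²T_obs and latitude-dependent prefactors, which do not affect correlations:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000
s = 1.0 / np.sqrt(2.0)                 # Rayleigh scale such that <alpha^2> = 1

ax, ay, az = rng.rayleigh(s, (3, N))
dphi = rng.uniform(0.0, 2*np.pi, N)    # phi_y - phi_x is itself uniform

P_sum  = ax**2 + ay**2 + 2*ax*ay*np.sin(dphi)   # sum-peak random factor
P_diff = ax**2 + ay**2 - 2*ax*ay*np.sin(dphi)   # difference-peak random factor
P_comp = 2 * az**2                              # Compton-peak random factor

corr = np.corrcoef([P_sum, P_diff, P_comp])
off_diag = np.abs(corr[np.triu_indices(3, k=1)])   # all consistent with zero
```

The sum/difference cancellation works because the variance of the common term α_x² + α_y² exactly matches that of the cross term 2α_x α_y sin(φ_y − φ_x) for Rayleigh amplitudes and uniform phases, so the three peaks decorrelate.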

C The Case of the Gradient of a Scalar
In this case, there is a preferential direction because ∇a points in the direction of the local DM velocity. Aligning the lab's working coordinate system such that this local velocity vector is parallel to the z axis, the amplitudes associated with the three different directions in Eq. (2.9) are not all the same. Effectively, there is an extra factor associated with the z direction, and the random signal in frequency space (cf. Eq. (2.9)) takes the following form:

λ(ω_n) = (β²/4) { [α_x² + α_y² + 2α_x α_y sin(φ_y − φ_x)] cos²ϕ δ_{ω_n,s} + [α_x² + α_y² − 2α_x α_y sin(φ_y − φ_x)] cos²ϕ δ_{ω_n,d} + 4Υ α_z² sin²ϕ δ_{ω_n,m} },   (C.1)

where we follow the notation of [46]. Proceeding as in Appendix B, i.e. making redefinitions of the variables so that they become independent and the integral becomes analytically tractable, we can evaluate the marginalized likelihood. The overall result is that Y simply gets rescaled by γ. Following the same MC analysis as outlined in Section 3, we show the 90% CL limits for the case of the gradient of a scalar in Fig. 7. The largest difference in this case is the increased constraining power of the Compton peak compared to the vector case shown in Fig. 4. This is because of the scaling that its amplitude receives by the factor γ > 1.

Figure 7: The 90% CL limits derived from our MC analysis (the same as in Fig. 4 but for the case of the gradient of a scalar). We show the results of our three-peak analysis and those of two single-peak analyses focusing on the Compton peak and on one of the sum or difference peaks. The shaded region indicates the 1σ error bar on our three-peak analysis. Our three-peak analysis generally provides a limit better than or comparable to either of the two single-peak analyses and is largely latitude-independent.

D Linear Polarization Statistics
Here we present the marginal likelihood for the linear polarization case. To get the relevant non-centrality parameter, λ′(ω), we can set φ_x = φ_y in Eq. (2.9). As in Section 3, we verify our analytical expressions for the likelihoods via a series of MC simulations. We begin from Eq. (2.6), this time setting all φ_i equal, drawing them from a single uniform distribution, φ_i ∼ U(0, 2π). We draw each of the three Rayleigh variables independently from their respective distributions, as given in Eq. (2.5). For each simulation, we compute the PSD as in Eq. (2.7) and consider the distributions of the values of each of the Compton, sum, and difference peaks, normalised by some noise level. For our simulations, we take A = 1 [A], T_obs = 10T_⊕, σ² = 5 [A]² Hz⁻¹, and ϕ = 45°. For the purposes of fast convergence, we also take m = 2π Hz and T_⊕ = 100 s. Our results for 10⁴ simulations are shown in Fig. 8, displaying excellent agreement with our derived likelihoods in Eq. (D.4).
As in Appendix B, we can also compute the covariance matrix for the linear polarization case. Following the approach there, we find a non-diagonal covariance matrix Σ: the sum and difference peaks have a non-zero covariance, with the Compton peak remaining statistically uncoupled from the other two. Due to this non-diagonal covariance matrix, we cannot simply write the total likelihood as in Eq. (3.3); we instead require a more complicated treatment accounting for the non-zero covariances.
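A matching MC for the linear polarization case illustrates the qualitative change. In the simplified reading below, all phases are equal, so the sin(φ_y − φ_x) cross term vanishes identically; the sum and difference peaks then carry the same random amplitude and become fully correlated, while the Compton peak remains uncoupled:

```python
import numpy as np

rng = np.random.default_rng(11)
N = 200_000
s = 1.0 / np.sqrt(2.0)                 # Rayleigh scale such that <alpha^2> = 1

ax, ay, az = rng.rayleigh(s, (3, N))
# linear polarization: all phases equal, so the 2*ax*ay*sin(dphi) term vanishes
P_sum  = ax**2 + ay**2
P_diff = ax**2 + ay**2                 # identical random factor as the sum peak
P_comp = 2 * az**2

corr = np.corrcoef([P_sum, P_diff, P_comp])
```

This is why the product factorization of Eq. (3.3) fails under the linear polarization assumption: the sum/difference block of the covariance matrix is no longer diagonal, in contrast to the elliptical case.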

D.1 Effect of Elliptical versus Linear Polarization on Limits
We comment on the effect that the linear polarization assumption has on our limits compared to the more realistic elliptical polarization treatment. Since the sum and difference peaks are correlated in the linear polarization case (cf. Eq. (D.5)), a full three-peak analysis is difficult to perform without accounting for the full covariance matrix. However, we can conduct a simplified, uncorrelated two-peak analysis in which we consider both the Compton peak and one of the sum/difference peaks. We can then compare the results of this analysis with a similar two-peak analysis in the elliptical polarization case to learn how the limits should scale between these assumptions. Since we are only interested in this scaling, we perform a simpler Asimov analysis in which the data are assumed to be perfectly consistent with the background [83]. The result of an Asimov analysis is expected to converge asymptotically to the true result in the limit of high statistics.
The two-peak likelihood considering the Compton peak and one of the two sum/difference peaks is the product of the corresponding individual likelihoods, indexed for the elliptical and linear polarization cases and following from Eq. (3.2) and Eq. (D.4), respectively. In an Asimov analysis, we replace the data vector p with the expectation values in the background-only case, which can be shown to be p = 2 for each bin. In this case, using the log-likelihood-ratio test statistic

Figure 9: The scaling of the limits on β assuming elliptical versus linear polarization as a function of latitude ϕ. The 90% (solid) and 99.7% (dashed) limits are shown, with the latter equivalent to a 3σ confidence level. Near the equator, the difference is more pronounced because the difference in the likelihoods is greater; the likelihoods are equal at the poles.
given in Eq. (3.4), we will get that β = 0 for this data.The problem then becomes finding that β for which q β reaches a value that we can exclude to our desired confidence level.This is the same procedure we followed in Section 3, and we take q lim β ≃ 2.43 for the 90% CL limit as we found there.For comparison, we also consider the case when one wants to draw a limit at the 3σ level, equivalent to a ∼ 99.7% CL limit.Solving Eq. (3.7), we get that q lim β ≃ 8.49 in this case.
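The limit-setting step described above amounts to a one-dimensional root find: q_β grows monotonically from zero at β = 0, and β_lim is the value at which it crosses q_β^lim. The sketch below illustrates this with a deliberately simplified single-bin likelihood (an exponential with mean 2(1 + β), a stand-in for the marginalised likelihoods of Eq. (3.2)/(D.4), not the paper's full expression) and Asimov data p = 2:

```python
import math

def q_of_beta(beta, p_asimov=2.0):
    """Asimov log-likelihood-ratio test statistic for a single bin.

    Illustrative stand-in likelihood (an assumption, not the paper's
    Eq. (3.2)): the noise-normalised excess power p is exponentially
    distributed with mean 2(1 + beta), i.e. the stochastic signal has
    been marginalised over. The background-only Asimov data set p = 2.
    """
    mean = 2.0 * (1.0 + beta)
    logl_sig = -math.log(mean) - p_asimov / mean
    logl_bkg = -math.log(2.0) - p_asimov / 2.0
    return -2.0 * (logl_sig - logl_bkg)

def beta_limit(q_lim, lo=0.0, hi=1e6, tol=1e-10):
    """Bisect for the beta at which q_of_beta crosses q_lim."""
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        if q_of_beta(mid) < q_lim:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# q_lim = 2.43 for the 90% CL one-sided limit; 8.49 for ~3 sigma (99.7% CL)
for q_lim in (2.43, 8.49):
    print(f"q_lim = {q_lim}: beta_lim ~ {beta_limit(q_lim):.2f}")
```

The same root-finding structure applies to the full two-peak likelihoods; only `q_of_beta` changes between the elliptical and linear polarization cases.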
We show our results in Fig. 9. We find that the scaling is greater towards the equator. This is because the difference in the form of the likelihoods is greater there, since only the sum/difference likelihood changes between the two polarization assumptions. At the poles, only the Compton peak is present and the two likelihoods are equivalent, so the two limits converge to the same value. At the 90% CL, which we have used throughout this work, the limit scales at worst by a factor of ∼1.2. As the CL grows, this factor increases; for example, at the 3σ level, the scaling from the linear to the elliptical assumption becomes ∼2.3. Nevertheless, in the incoherent regime, we expect both limits to match, since the stochasticity of the field vanishes in that limit.

Figure 3 :
Figure 3: Example likelihoods for each of the three signal peaks once the stochastic variables have been marginalised over. The bars show the result of a numerical simulation of the noise-normalised periodogram values beginning from Eq. (2.6), while the solid lines show the analytical result of Eq. (3.2). Here, ϕ is the latitude of the experiment, p (a random variable) is the value of the measured excess power, and β is defined as per Eq. (2.8). Also shown as a dashed line is the deterministic result for the Compton peak, where the stochastic variable α_z^2 is set to its expectation value, ⟨α_z^2⟩ = 1.

Figure 4 :
Figure 4: The 90% CL limits (β_lim) on the dimensionless parameter β ≡ A^2 T_obs/(2σ^2) (see Eq. (2.8)) derived from our MC analysis. We show the results of our three-peak analysis and those of two single-peak analyses focusing on the Compton peak and on one of the sum or difference peaks. The shaded region indicates the 1σ error bar on our three-peak analysis. The dotted line indicates the latitude of Houston, which we use in Section 4. Our three-peak analysis generally provides a limit comparable to or better than either of the two single-peak analyses and is largely latitude-independent.

Figure 5 :
Figure 5: The 90% CL limits on the gauge coupling for ultralight B − L DM placed by an optomechanical cavity setup using our statistical framework. The limits from our three-peak analysis strategy (solid) are shown for three resonance frequencies, f_0 = 0.1 Hz, 1 Hz, and 10 Hz. Existing bounds from the Eöt-Wash [93, 94] and MICROSCOPE experiments are shown in grey. For MICROSCOPE, we show the bounds based on the first [94-97] and final [98] results, the latter of which we compute in Appendix E. The vertical shaded region indicates where the observation time T_obs = 10 T_⊕ becomes greater than the coherence time, where our framework is no longer valid. The top axis shows the Compton frequency in Hz for a given DM mass.

Figure 7 :
Figure 7: The 90% CL limits (β_lim) on the dimensionless parameter β ≡ ρ g_eff^2 σ_v^2 T_obs/σ_n^2 (see Eq. (C.2)) derived from our MC analysis (the same as in Fig. 4 but for the case of the gradient of a scalar). We show the results of our three-peak analysis and those of two single-peak analyses focusing on the Compton peak and on one of the sum or difference peaks. The shaded region indicates the 1σ error bar on our three-peak analysis. Our three-peak analysis generally provides a limit comparable to or better than either of the two single-peak analyses and is largely latitude-independent.

Figure 8 :
Figure 8: Example likelihoods for each of the three signal peaks once the stochastic variables have been marginalised over in the linear polarization case. The bars show the result of a numerical simulation of the noise-normalised periodogram values beginning from Eq. (2.6), while the solid lines show the analytical result of Eq. (D.4). Here, ϕ is the latitude of the experiment, p (a random variable) is the value of the measured excess power, and β is defined as per Eq. (2.8).

Figure 9 :
Figure 9: Scaling of the limit on the dimensionless parameter β (cf. Eq. (2.8)) assuming elliptical versus linear polarization, as a function of latitude ϕ. The 90% (solid) and 99.7% (dashed) limits are shown, with the latter equivalent to a 3σ confidence level. Near the equator, the difference is more pronounced because the difference in the likelihoods is greater; the likelihoods are equal at the poles.

Table 1 :
The optomechanical cavity configuration we have assumed in this work.