Einstein@Home All-sky Search for Continuous Gravitational Waves in LIGO O2 Public Data

We conduct an all-sky search for continuous gravitational waves in the LIGO O2 data from the Hanford and Livingston detectors. We search for nearly monochromatic signals with frequency 20.0 Hz ≤ f ≤ 585.15 Hz and spin-down −2.6 × 10−9 Hz/s ≤ ḟ ≤ 2.6 × 10−10 Hz/s. We deploy the search on the Einstein@Home volunteer-computing project and follow up the waveforms associated with the most significant results with eight further search stages, reaching the best sensitivity ever achieved by an all-sky survey up to 500 Hz. Six of the inspected waveforms pass all the stages, but they are all associated with hardware injections, which are fake signals simulated at the LIGO detectors for validation purposes. We recover all these fake signals with consistent parameters. No other waveform survives, so we find no evidence of a continuous gravitational wave signal at the detectability level of our search. We constrain the h0 amplitude of continuous gravitational waves at the detectors as a function of the signal frequency, in half-Hz bins. The most constraining upper limit, at 163.0 Hz, is h0 = 1.3 × 10−25, at the 90% confidence level. Our results exclude neutron stars with spin periods shorter than 5 ms and equatorial ellipticities larger than 10−7, closer than 100 pc. These are deformations that neutron star crusts could easily support, according to some models.


INTRODUCTION
Continuous gravitational waves are expected in a variety of astrophysical scenarios: from rotating neutron stars, if they present some sort of asymmetry with respect to their rotation axis or through the excitation of unstable r-modes (Lasky 2015; Owen et al. 1998); from the fast inspiral of dark-matter objects (Horowitz & Reddy 2019; Horowitz et al. 2020); or through superradiant emission of axion-like particles around black holes (Arvanitaki et al. 2015; Zhu et al. 2020).
The expected gravitational wave amplitude at the Earth is several orders of magnitude smaller than that of signals from compact binary inspirals, but because the signal is long-lasting one can integrate it over many months and increase the signal-to-noise ratio (SNR) very significantly.
The most challenging searches for this type of signal are the all-sky surveys, where one looks for a signal from a source that is not known. The main challenge of these searches is that the number of waveforms that can be resolved over months of observation is very large, and so the sensitivity of the search is limited by its computational cost.
In this paper we present the results from an all-sky search for continuous gravitational wave signals with frequency f between 20.0 Hz and 585.15 Hz and spin-down −2.6 × 10 −9 Hz/s ≤ ḟ ≤ 2.6 × 10 −10 Hz/s, carried out thanks to the computing power donated by the volunteers of the Einstein@Home project.
The results from the Einstein@Home search are further processed using a hierarchy of eight follow-up searches, similarly to what was previously done for recent Einstein@Home searches (Abbott et al. 2017; Ming et al. 2019; Papa et al. 2020).
We use LIGO O2 public data (Abbott et al. 2019b; Vallisneri et al. 2015; LIGO 2019) and achieve a significantly higher sensitivity than the LIGO Collaboration O2 results in the same frequency range (Abbott et al. 2019a; Palomba et al. 2019). Our results complement those of the high-frequency Falcon search (Dergachev & Papa 2020), which covers the range from 500 to 1700 Hz.
The plan of the paper is the following: we introduce the signal model and generalities about the search in Sections 2 and 3, respectively. In Sections 4 and 5 we detail the Einstein@Home search and the follow-up searches. Constraints on the gravitational wave amplitude and the source ellipticity are derived in Section 6, and conclusions are drawn in Section 7.

THE SIGNAL
The search described in this paper targets nearly monochromatic gravitational wave signals of the form described, for example, in Section II of Jaranowski et al. (1998). At the output of a gravitational wave detector the signal has the form
\[
h(t) = F_+(\alpha, \delta, \psi; t)\, h_+(t) + F_\times(\alpha, \delta, \psi; t)\, h_\times(t), \tag{1}
\]
where $F_+(\alpha, \delta, \psi; t)$ and $F_\times(\alpha, \delta, \psi; t)$ are the detector beam-pattern functions for the "+" and "×" polarizations, $(\alpha, \delta)$ the right ascension and declination of the source, $\psi$ the polarization angle and $t$ the time at the detector. The waveforms $h_+(t)$ and $h_\times(t)$ take the form
\[
h_+(t) = A_+ \cos\Phi(t), \qquad h_\times(t) = A_\times \sin\Phi(t), \tag{2}
\]
with the "+" and "×" amplitudes
\[
A_+ = \frac{1}{2}\, h_0\, (1 + \cos^2\iota), \qquad A_\times = h_0 \cos\iota. \tag{3}
\]
The angle between the total angular momentum of the star and the line of sight is $0 \leq \iota \leq \pi$, and $h_0 \geq 0$ is the intrinsic gravitational wave amplitude. $\Phi(t)$ of Eq. 2 is the phase of the gravitational wave signal at time $t$. If $\tau_{\rm SSB}$ is the arrival time of the wave with phase $\Phi(t)$ at the solar system barycenter, then $\Phi(t) = \Phi(\tau_{\rm SSB}(t))$.
The gravitational wave phase as a function of $\tau_{\rm SSB}$ is assumed to be
\[
\Phi(\tau_{\rm SSB}) = \Phi_0 + 2\pi \left[ f\, (\tau_{\rm SSB} - \tau_{0\rm SSB}) + \frac{1}{2}\, \dot{f}\, (\tau_{\rm SSB} - \tau_{0\rm SSB})^2 \right]. \tag{4}
\]
We take $\tau_{0\rm SSB} = 1177858472.0$ (TDB in GPS seconds) as the reference time.
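As an illustration, this second-order phase model is straightforward to evaluate numerically. The sketch below is a minimal stand-in (the function and argument names are ours, not from any search code), using the reference time quoted in the text:

```python
import math

# Reference time of the search, TDB in GPS seconds (from the text).
TAU_0_SSB = 1177858472.0

def gw_phase(tau_ssb, f, fdot, phi0=0.0):
    """Second-order phase model: phi0 + 2*pi*(f*dt + 0.5*fdot*dt**2),
    with dt the time elapsed since the reference time, in seconds."""
    dt = tau_ssb - TAU_0_SSB
    return phi0 + 2.0 * math.pi * (f * dt + 0.5 * fdot * dt * dt)
```

Over the eight-month observing span, the spin-down term contributes a phase of order 2π·½·|ḟ|·T², which for ḟ = −2.6 × 10⁻⁹ Hz/s is millions of cycles, hence the need to grid finely in ḟ.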

The data
We use LIGO O2 public data from the Hanford (LHO) and Livingston (LLO) detectors, between GPS time 1167983370 (Jan 09 2017) and 1187731774 (Aug 25 2017). These data have been treated to remove spurious noise due to the LIGO laser beam jitter, the calibration lines and the mains power lines (Davis et al. 2019).
We additionally remove very loud short-duration glitches (Steltner et al. 2020a) and substitute Gaussian noise in frequency bins affected by line contamination (Covas et al. 2018). This is a procedure common to all Einstein@Home searches and it prevents spectral contamination from spreading to many nearby signal frequencies. The list of cleaned frequency bins can be found at (Steltner et al. 2020c, and Suppl. Mat.).
As is customary, the input to our searches is in the form of short time-baseline (30-minute) Fourier transforms (SFTs). These are grouped in segments of variable duration, corresponding to the coherent time baselines of the various searches, as shown in Figure 1. The first gap in the data, starting at 70 days, is due to spectral contamination in LHO, based on which we decided to exclude this period from the analysis. The second large gap, starting at ≈ 120 days, is due to an interruption of the science run for detector commissioning.

The detection statistics
For each search we partition the data in N_seg segments, with each segment spanning a duration T_coh. The data of both detectors from each segment i are combined coherently to construct a matched-filter detection statistic, the F-statistic (Cutler & Schutz 2005). The F-statistic depends on the template waveform that is being tested for consistency with the data. The F-statistic at a given template point is the log-likelihood ratio of the data containing Gaussian noise plus a signal with the shape given by the template, to the data being purely Gaussian noise.
The F-statistic values, one per segment ($\mathcal{F}_i$), are summed and, after dividing by $N_{\rm seg}$, this yields our core detection statistic (Pletsch & Allen 2009; Pletsch 2010):
\[
\bar{\mathcal{F}} := \frac{1}{N_{\rm seg}} \sum_{i=1}^{N_{\rm seg}} \mathcal{F}_i. \tag{5}
\]
The $\bar{\mathcal{F}}$ is the average of $\mathcal{F}$ over segments, in general computed at different templates for every segment. The resulting $\bar{\mathcal{F}}$ is an approximation to the detection statistic at some template "in between" the ones used to compute the single-segment $\mathcal{F}_i$. In fact these "in-between" templates constitute a finer grid, based on which the summations of Eq. 5 are performed.
The most significant Einstein@Home results are saved in the top-list that the volunteer computer (host) returns to the Einstein@Home server. For these results the host also re-computes F at the exact fine-grid template point. We indicate the re-computed statistic with a subscript "r", as for example in F_r.
In Gaussian noise, N_seg × 2F follows a chi-squared distribution with 4N_seg degrees of freedom, χ²_{4N_seg}(ρ²). The non-centrality parameter ρ² ∝ h0² T_data/S_h, where T_data is the duration of time for which data is available and S_h is the strain power spectral density of the noise. The expected SNR squared is equal to ρ² (Jaranowski et al. 1998). For simplicity, in the rest of the paper, when we refer to the SNR we mean the SNR squared.
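This distributional statement is easy to check numerically: draw samples of N_seg × 2F from the non-central chi-squared distribution and compare the sample mean of 2F against its expectation, (4 N_seg + ρ²)/N_seg. The segment count and non-centrality below are illustrative values, not those of any stage of the search:

```python
import numpy as np

rng = np.random.default_rng(0)
n_seg, rho2 = 60, 50.0   # illustrative segment count and non-centrality (SNR^2)

# In noise plus a signal, N_seg * 2F_bar follows a non-central chi-squared
# distribution with 4*N_seg degrees of freedom and non-centrality rho2.
samples = rng.noncentral_chisquare(df=4 * n_seg, nonc=rho2, size=200_000)

mean_2f_bar = samples.mean() / n_seg            # sample mean of 2F_bar
expected_2f_bar = (4 * n_seg + rho2) / n_seg    # = 4 + rho2/N_seg
```

Setting rho2 = 0 recovers the pure-noise case, with expectation 4 for 2F averaged over segments.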
If the noise contains some coherent instrumental or environmental signal, it is very likely that for some of the templates the distribution of F will have a non-zero non-centrality parameter, even though there is no astrophysical signal. The reason is that in this case the data looks more like noise plus a signal than like pure Gaussian noise.
It is possible to identify a non-astrophysical signal if it presents features that distinguish it from the astrophysical signals that the search is targeting, for example if it is present in only one of the two detectors, or if it is present for only part of the observation time. In the past we have used these signatures to construct ad-hoc vetoes, such as the F-stat consistency veto (Aasi et al. 2013a) and the permanence veto (Behnke et al. 2015; Aasi et al. 2013b). These vetoes are still widely used, although with different names: the "single interferometer veto" in (Sun et al. 2020; Jones & Sun 2020) and the "persistency veto" of (Abbott et al. 2019a; Astone et al. 2014).
We incorporated the ideas of the F-stat consistency veto and of the permanence veto in the design of a new detection statistic, β_{S/GLtL}. The new detection statistic is an odds ratio that tests the signal hypothesis against a noise model which, in addition to Gaussian noise, also includes single-detector continuous or transient spectral lines (Keitel et al. 2014; Keitel 2016). The subscript "L" in β_{S/GLtL} stands for line, "G" for Gaussian and "tL" for transient line. We use this detection statistic to rank the Einstein@Home results. In this way we limit the number of results that make it into the top-list but would later be discarded by the vetoes. This frees up space on the top-list for other, more interesting, results.

The search grids
The template waveform is defined by the signal frequency, the spin-down and the source sky position. The range searched in each of these variables is gridded in such a way that the fractional loss in SNR, or mismatch, due to a signal falling in between grid points is on average 0.5.
The grids in frequency and spin-down are each described by a single parameter, the grid spacing, which is constant over the search range. The sky grid is approximately uniform on the celestial sphere orthogonally projected onto the ecliptic plane. The tiling is a hexagonal covering of the unit circle with hexagon edge length d:
\[
d(m_{\rm sky}) = \frac{\sqrt{m_{\rm sky}}}{\pi f \tau_E}, \tag{6}
\]
with $\tau_E \simeq 0.021$ s being half of the light travel time across the Earth and $m_{\rm sky}$ a constant which controls the resolution of the sky grid. The sky grids are constant over 5-Hz bands and the spacings are the ones associated through Eq. 6 with the highest frequency in each 5-Hz band. The resulting number of templates used to search 50-mHz bands as a function of frequency is shown in Fig. 2. The grid spacings and $m_{\rm sky}$ are given in Table 1.
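A short sketch of how the sky resolution scales with frequency, assuming the hexagon edge length takes the form d = √m_sky/(π f τ_E) used in previous Einstein@Home searches (the m_sky value below is arbitrary, not the one from Table 1):

```python
import math

TAU_E = 0.021  # half the light travel time across the Earth, in seconds

def hexagon_edge(f, m_sky):
    """Sky-grid hexagon edge length d(m_sky) at signal frequency f (Hz)."""
    return math.sqrt(m_sky) / (math.pi * f * TAU_E)

# The edge length shrinks as 1/f, so the number of sky templates,
# which scales as 1/d^2, grows as f^2 -- the trend visible in Fig. 2.
ratio = hexagon_edge(100.0, 1.0) / hexagon_edge(200.0, 1.0)
```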

The Monte Carlos and the assumed signal population
The loss in signal-to-noise ratio $\mu(\vec\lambda_0)$ due to the parameters $\vec\lambda_0$ of a signal not perfectly matching the parameters $\vec\lambda_0 \pm \Delta\vec\lambda$ of the template can be described by a quadratic form, as long as the signal and the template parameters are fairly close, i.e. as long as $\Delta\vec\lambda$ is small:
\[
\mu(\vec\lambda_0, \Delta\vec\lambda) \approx g_{ij}\, \Delta\lambda^i\, \Delta\lambda^j. \tag{7}
\]
The metric $g_{ij}$ for the search at hand can be estimated, at least numerically. Setting up a search is a matter of deciding what loss in SNR one is willing to accept (fixing the mismatch), picking a tiling method and setting up a grid accordingly. Once the search grid is established, one can determine the computational cost of the search. If that is found to be too high, one must decide whether to reduce T_coh or to increase the mismatch, and repeat the procedure. Ultimately the best operating point is a compromise between computational cost and sensitivity.
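The quadratic-form approximation can be made concrete with a toy two-dimensional metric in frequency and spin-down offsets (the numerical values are invented for illustration and bear no relation to the actual search metric):

```python
import numpy as np

# Toy diagonal metric acting on (delta_f, delta_fdot) offsets.
# The entries are illustrative placeholders only.
g = np.array([[1.0e4, 0.0],
              [0.0,   1.0e18]])

def metric_mismatch(delta):
    """Mismatch mu = g_ij * dlambda^i * dlambda^j for a small offset
    `delta` between the signal and the template parameters."""
    return float(delta @ g @ delta)
```

A zero offset gives zero mismatch, and the mismatch grows quadratically with the offset, which is why halving a grid spacing quarters the worst-case metric mismatch.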
It turns out that for the first stages of all-sky surveys with T_coh of at least several hours, the optimal grids typically have spacings beyond the range where the metric approximation of Eq. 7 holds. In particular we find that the metric mismatch overestimates the actual mismatch. This is good because it means that in order to achieve a certain maximum mismatch level, we need fewer templates than the metric predicts. On the other hand it means that we cannot predict the mismatch analytically. Instead we must resort to simulating signals, searching for them with a given grid and measuring the loss in SNR with respect to a perfectly matched template. And we have to do this many times, to probe different signals ($\vec\lambda_0$ values) and different random offsets between the template grid and the signal parameters. This is the basic reason why in this paper we often refer to Monte Carlo studies. In all these studies the choice of signal parameters $\vec\lambda_0$ represents our target source population, which we assume to be uniformly distributed in spin frequency, log-uniformly distributed in spin-down, with orientation cos ι uniformly distributed between −1 and 1, polarization angle ψ uniformly distributed in |ψ| ≤ π/4, and source position uniformly distributed on the sky (uniform in 0 ≤ α ≤ 2π and in −1 ≤ sin δ ≤ 1).
The Monte Carlo studies make the results robust and simple to interpret: All systematic effects in the analysis, both known and unknown, are automatically incorporated.
We note that since this analysis was carried out, a new metric ansatz was suggested (Allen 2019), which shows that the metric mismatch generically overestimates the actual mismatch, and shows how to extend the range of validity of the metric approximation.This might mitigate the need for such extensive Monte Carlo studies.
THE EINSTEIN@HOME SEARCH

The distribution of the computational load on Einstein@Home

This search leverages the computing power of the Einstein@Home project, which is built upon the BOINC (Berkeley Open Infrastructure for Network Computing) architecture (BOINC 2020; Anderson 2004; Anderson et al. 2006): a system that uses the idle time on volunteer computers to solve scientific problems that require large amounts of computing power.
The total number of templates that we searched with Einstein@Home is 7.9 × 10 17. The search is split into work-units (WUs) sized to keep the average Einstein@Home volunteer computer busy for about 8 CPU-hours. A total of 8 million WUs are necessary to cover the entire parameter space, representing of order 10 000 CPU-years of computing.
Each WU searches 98 277 129 500 templates, covering 50 mHz, the entire spin-down range and a portion of the sky. Out of the detection statistic values computed for these templates, the WU-search returns to the Einstein@Home server only the information of the highest 7 500 β_{S/GLtL} results.
This search ran on Einstein@Home between April 2018 and July 2019, with an interruption of 8 months at the request of the LIGO/Virgo Collaboration, after the authors left the Collaboration.

Post-processing of the Einstein@Home search
We refer to a waveform template and the associated search results as a "candidate". All in all the Einstein@Home search returns 6.0 × 10 10 candidates: the top 7 500 candidates per WU × 8 million WUs. This is where the post-processing begins.
The post-processing consists of the following steps:
• Banding: as described in the previous section, each volunteer computer searches for signals with frequency within a given 50-mHz band, with spin-down between −2.6 × 10 −9 and 2.6 × 10 −10 Hz/s, and a portion of the sky. The first step of the post-processing is to gather together all results that pertain to the same 50-mHz band. We compute some basic statistics from these results and produce a series of diagnostic plots, which we can conveniently access through a GUI (graphical user interface) tool that we have developed for this purpose. This provides an overview of the result-set in any 50-mHz band.
• Identification of disturbed bands: as done in previous Einstein@Home searches (Papa et al. 2020; Ming et al. 2019; Abbott et al. 2017; Zhu et al. 2016; Singh et al. 2016), we identify bands that present very significant deviations of the detection statistics from what we expect of a reasonably clean noise background. Such deviations can arise from spectral disturbances or from extremely loud signals. We do not exclude these bands from further inspection, but we do flag them, as this information is necessary when we set upper limits. We mark 273 50-mHz bands as disturbed.
• Clustering: in this step we identify clusters of candidates that are close enough in parameter space that they are likely due to the same root cause. We associate with each cluster the template values of the candidate with the highest β_{S/GLtL,r}, which we also refer to as the cluster seed. We use a new clustering method (Steltner et al. 2020b) that identifies regions in frequency, spin-down and sky position that harbour an over-density of candidates, a typical signal signature. This method achieves a lower false dismissal of signals at fixed false-alarm rate than the previous clustering (Singh et al. 2017), by tracing the SNR reduction function with no assumption on its profile in parameter space. An occupancy veto is also applied, requiring at least 4 candidates to be associated with a cluster. Most candidates have fewer than three nearby partners, so this clustering procedure greatly reduces the number of candidates, namely from 6.0 × 10 10 to 350 145.
• Follow-up searches: after the clustering we have 350 145 candidates, shown in Figure 3. Of these, 1 352 come from bands that have been marked as disturbed. We follow all of them up as detailed in the next Section. The list of the disturbed 50-mHz bands is provided in (Steltner et al. 2020c, and Suppl. Mat.).
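To make the clustering-plus-occupancy-veto idea concrete, here is a deliberately simplified stand-in. It bins candidates into fixed cells, which is not the adaptive method of Steltner et al. (2020b); the cell sizes and candidate tuples are invented for illustration:

```python
def occupancy_clusters(cands, df=1e-4, dfdot=1e-11, min_occupancy=4):
    """Group (f, fdot, statistic) candidates into frequency/spin-down
    cells, apply the occupancy veto (at least `min_occupancy` members
    per cell), and return each surviving cluster's loudest member,
    i.e. the cluster seed."""
    cells = {}
    for f, fdot, stat in cands:
        key = (round(f / df), round(fdot / dfdot))
        cells.setdefault(key, []).append((f, fdot, stat))
    return [max(members, key=lambda c: c[2])
            for members in cells.values()
            if len(members) >= min_occupancy]

# Five candidates piling up near 100 Hz (a signal-like over-density)
# plus one isolated candidate, which the occupancy veto removes.
seeds = occupancy_clusters(
    [(100.0 + i * 1e-6, 0.0, 10.0 + i) for i in range(5)] + [(200.0, 0.0, 50.0)]
)
```

Note that the isolated candidate is discarded even though it has the highest detection statistic: isolated loud outliers are more typical of disturbances than of signals, which spread their power over many neighbouring templates.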
In order to give a sense of the overall set of Einstein@Home results, in the left panels of Figure 3 we show the detection statistic value of the most significant result from every 50-mHz band. The large majority of the results fall within the expected range for noise only. Most of the highest detection statistic values stem from hardware injections or from disturbed bands and are due to spectral contamination, i.e. signals (as opposed to noise fluctuations) of non-astrophysical origin.

THE FOLLOW-UP SEARCHES
Each stage takes as input the candidates that have survived the previous stage. Waveforms around the nominal candidate parameters are searched, so that if the candidate were due to a signal it would not be missed in the follow-up. The extent of the volume to search is based on the results of injection-and-recovery Monte Carlo studies and is broad enough to contain the true signal parameters for 99.8% of the signal population. For this reason we also refer to this volume as the "signal-containment region". The containment region in the sky is a circle in the orthogonally projected ecliptic plane with radius r_sky.
The search set-ups for Stages 1-8 are chosen so that the SNR of a signal would increase from one stage to the next. This is achieved in two ways: by increasing the T_coh of the search and/or by using a finer grid, and hence decreasing the average mismatch. The mismatch distributions of the various searches are shown in Figure 4. We note that even though the average mismatch of Stage 8 is larger than that of the previous two stages, this does not imply that the expected SNR for a signal out of Stage 8 is smaller. In fact, because of the larger T_coh used in Stage 8, the expected SNR for a signal out of Stage 8 is larger than that of the same signal out of Stage 7 or 6. This can be seen by comparing the values of R_8, R_7 and R_6 in Table 1 (the quantity R_a is defined below in Eq. 8 and is related to the expected SNR increase at Stage a with respect to Stage 0).
We cluster the results of each search and consider the most significant cluster. We associate with the cluster the parameters of the member with the highest detection statistic value, and refer to this as the candidate from that follow-up stage.
We veto candidates at stage a whose SNR does not increase as expected for signals with respect to Stage 0. We do this by setting a threshold on the quantity
\[
R_a := \frac{2\bar{\mathcal{F}}^{\,a}_{\rm r} - 4}{2\bar{\mathcal{F}}^{\,0}_{\rm r} - 4}. \tag{8}
\]
The threshold is set based on signal injection-and-recovery Monte Carlos, as shown in Figure 5. The values are given in Table 1. Because of the large number of candidates in the first four follow-up stages, the R_a thresholds for a = 1 ... 4 are stricter than those used for the last four stages. All the parameters of the searches, as well as the number of candidates surviving each stage, are shown in Table 1.
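A minimal sketch of this veto, assuming R_a compares the excess of the per-segment average 2F over its Gaussian-noise expectation of 4 between stage a and Stage 0 (this functional form is our assumption for illustration; the numbers below are invented):

```python
def snr_ratio(twoF_bar_a, twoF_bar_0):
    """Assumed form of R_a: ratio of the per-segment SNR^2 estimates.
    In Gaussian noise E[2F_bar] = 4 + rho^2/N_seg, so 2F_bar - 4
    tracks the recovered signal power in each stage."""
    return (twoF_bar_a - 4.0) / (twoF_bar_0 - 4.0)

def passes_veto(twoF_bar_a, twoF_bar_0, threshold):
    # A candidate survives stage a only if its SNR grew at least as
    # much as injection-and-recovery Monte Carlos predict for signals.
    return snr_ratio(twoF_bar_a, twoF_bar_0) >= threshold
```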
Only six candidates are left at the output of Stage 8. They are due to fake signals present in the data stream for validation purposes, the so-called hardware injections. In fact there are six hardware injections with parameters that fall in our search volume, those with ID 0, 2, 3, 5, 10, 11 (LIGO & Virgo 2018). We recover them all with consistent parameters.

UPPER LIMITS
Based on our null result we set 90%-confidence frequentist upper limits on the gravitational wave amplitude h0 in half-Hz bands. The upper limit value is the smallest signal amplitude that would have produced a signal above the sensitivity level of our search for 90% of the signals of our target population (see Section 3.4). We establish the detectability of signals based on injection-and-recovery Monte Carlos. The upper limits are shown in Figure 6 and provided in machine-readable format at (Steltner et al. 2020c, and Suppl. Mat.).
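The upper-limit procedure can be sketched as a bisection over injection amplitude: inject signals of amplitude h0, measure which fraction is recovered above the search's detection level, and find the smallest h0 reaching 90% efficiency. The detection model below, a Gaussian-distributed recovered amplitude against a fixed threshold, is a toy stand-in for the full search pipeline, with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_efficiency(h0, n_inj=2000):
    """Toy injection-and-recovery step: fraction of n_inj simulated
    signals of amplitude h0 whose recovered amplitude exceeds a
    hypothetical detection threshold of 1e-25."""
    recovered = h0 + rng.normal(0.0, 2e-26, size=n_inj)
    return np.mean(recovered > 1.0e-25)

def upper_limit(target=0.9, lo=5e-26, hi=5e-25, steps=40):
    """Bisect for the smallest amplitude whose detection efficiency
    reaches the target confidence level."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if detection_efficiency(mid) >= target:
            hi = mid
        else:
            lo = mid
    return hi

h0_90 = upper_limit()
```

In the real analysis each efficiency evaluation is itself a full search over the injected data, which is why the upper limits are only computed per half-Hz band rather than per template.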
Our upper limits do not hold in some 50-mHz bands, namely those marked as disturbed and those associated with the hardware injections. Even though we have followed-up candidates from these bands, we cannot exclude that a signal with strength below the disturbance but above the detection threshold, and hence above the upper limit, could be hidden by the loud disturbance, for example by being associated with its large noise-cluster. Another reason why we cannot guarantee that our upper limit holds in the presence of a disturbance is the saturation of the Einstein@Home top-list that a loud disturbance produces. This prevents candidates from quieter parameter-space regions in that band from being recorded. Given how loud the hardware injections are, for similar reasons, we also exclude the 50-mHz bands associated with these. The 50-mHz bands where the upper limits do not hold are provided at (Steltner et al. 2020c, and Suppl. Mat.).

Upper limits are also not given in some half-Hz bands. This happens for two reasons: 1) all the 50-mHz bands in a half-Hz band are disturbed; 2) the bin-cleaning procedure. In Section 3.1 we explained that we remove contaminated frequency bins and substitute them with Gaussian noise. If a signal were present in the cleaned-out bins, it too would be removed. So in the half-Hz bands affected by cleaning, the upper-limit Monte Carlos include the cleaning step after the signal has been added to the data. In this way the loss in detection efficiency due to the cleaning procedure is naturally folded into the upper limit. When a large fraction of the half-Hz bins is cleaned out, however, the detection efficiency may not reach the target 90% level. In this case we do not give an upper limit in the affected band. The list of half-Hz bands for which we do not give upper limits is given in (Steltner et al. 2020c, and in the Suppl. Mat.).

Table 1. Overview of the searches. We show the values of the following parameters: the number of segments N_seg and the coherent time baseline of each segment T_coh; the grid spacings δf, δḟ and m_sky; the average mismatch ⟨µ⟩; the parameter-space volume searched around each candidate, ±Δf, ±Δḟ and r_sky, expressed in units of the side of the hexagonal sky-grid tile of the Stage 0 search (Eq. 6); the threshold value R_a used to veto candidates of Stage a (Eq. 8); the number of templates searched (N_in) and how many of those survive and make it to the next stage (N_out). The first search, Stage 0, is the Einstein@Home search, hence the searched volume is the entire parameter space. The other searches are the follow-up stages.
Based on the upper limits, we compute the sensitivity depth D of the search (Behnke et al. 2015) and find values between 49 and 56 1/√Hz. This is consistent with, and slightly better than, the performance of previous Einstein@Home searches (Dreissigacker et al. 2018). We provide the power spectral density estimate used to derive the sensitivity depth at (Steltner et al. 2020c, and in the Suppl. Mat.).
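The sensitivity depth is simply the ratio of the noise amplitude spectral density to the smallest detectable signal amplitude. As a rough consistency check on the quoted range, one can plug in the best upper limit from the abstract together with an O2-like noise floor near 163 Hz (the √S_h value below is illustrative, not the actual estimate provided in the supplementary material):

```python
def sensitivity_depth(sqrt_Sh, h0_ul):
    """Sensitivity depth D = sqrt(S_h) / h0_UL, in 1/sqrt(Hz)."""
    return sqrt_Sh / h0_ul

# Illustrative noise floor of ~6.5e-24 / sqrt(Hz), together with the
# h0 upper limit quoted at 163 Hz, lands inside the quoted 49-56 range.
depth = sensitivity_depth(6.5e-24, 1.3e-25)
```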
We can express the h0 upper limits as upper limits on the ellipticity ε of a source modelled as a triaxial ellipsoid spinning around a principal moment of inertia axis Î, at a distance D:
\[
\varepsilon = \frac{c^4}{4\pi^2 G}\,\frac{h_0\, D}{I f^2}, \tag{9}
\]
with f the gravitational wave frequency and I the principal moment of inertia. The ellipticity ε upper limits are plotted in Figure 7.
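Numerically, with a fiducial moment of inertia I = 10^38 kg m^2 (our assumption for this illustration), the most constraining upper limit translates into an ellipticity of a few times 10^-7 at 100 pc, consistent with the value quoted in the conclusions:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8      # speed of light, m/s
I_ZZ = 1.0e38    # fiducial principal moment of inertia, kg m^2 (assumed)
PC = 3.086e16    # parsec in meters

def ellipticity(h0, f_gw, dist_pc):
    """eps = c^4 h0 D / (4 pi^2 G I f_gw^2), with f_gw the gravitational
    wave frequency (twice the spin frequency) and D the distance."""
    D = dist_pc * PC
    return C**4 * h0 * D / (4.0 * math.pi**2 * G * I_ZZ * f_gw**2)

# Best upper limit h0 = 1.3e-25 at 163 Hz, for a source at 100 pc.
eps = ellipticity(1.3e-25, 163.0, 100.0)
```

A 163-Hz gravitational wave frequency corresponds to a spin period of ≈ 12 ms under the assumption that the star emits at twice its spin frequency.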
If the spin-down of the signal were just due to the decreasing spin rate of the neutron star, then our search could not probe ellipticities higher than the spin-down limit ellipticity corresponding to the highest spin-down rate considered in the search, −2.6 × 10 −9 Hz/s.This is indicated in Figure 7 by a dashed line.
Proper motion can reduce the apparent spin-down (Shklovskii 1970), so in principle we could detect a signal from a source with ellipticity above the dashed line. However, even in extreme cases (source distance 8 kpc, spin period 1 ms and large proper motion of 100 mas/yr (Hobbs et al. 2005), or source distance 10 pc, spin period 1 ms and tangential velocity of 1000 km/s), the change in maximum detectable ellipticity is negligible.
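The magnitude of the proper-motion contribution to the apparent frequency derivative (|ḟ|/f = µ²D/c for transverse proper motion µ, following Shklovskii 1970) can be checked directly; for the first extreme case quoted above it is several orders of magnitude below the searched spin-down range:

```python
# Shklovskii (1970) contribution to the apparent frequency derivative:
# |fdot| / f = mu^2 * D / c, for transverse proper motion mu.
MAS_PER_YR = 4.8481e-9 / 3.1557e7   # mas/yr -> rad/s
PC = 3.086e16                        # parsec in meters
C = 2.998e8                          # speed of light, m/s

def shklovskii_fdot(f_gw, mu_mas_yr, dist_pc):
    """Magnitude of the apparent spin-down induced by proper motion."""
    mu = mu_mas_yr * MAS_PER_YR
    return f_gw * mu**2 * dist_pc * PC / C

# Extreme case from the text: 8 kpc, 100 mas/yr, top of the search band.
fdot_pm = shklovskii_fdot(585.15, 100.0, 8000.0)
```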

CONCLUSIONS
We present the results from an Einstein@Home search for continuous, nearly monochromatic gravitational waves with frequency between 20.0 and 585.15 Hz, and spin-down between −2.6 × 10 −9 and 2.6 × 10 −10 Hz/s. We use LIGO O2 public data and compare it against 7.9 × 10 17 waveforms. We follow up the most significant 350 145 candidates through a hierarchy of eight searches, each more sensitive but requiring more per-template computing power than the previous one.
No candidate survives all the stages. This is the most sensitive search performed on this parameter space on O2 data, and it sets the most stringent upper limits on the intrinsic gravitational wave amplitude h0. The most constraining h0 upper limit is 1.3 × 10 −25 at 163.0 Hz, corresponding to a neutron star at, say, 100 pc, having an ellipticity of 5 × 10 −7 and rotating with a spin period of ≈ 12 ms. Our results thus exclude neutron stars with spin periods shorter than 12 ms, within 100 pc of Earth, with ellipticities in the few × 10 −7 range, and reach the 1 × 10 −7 mark for spin periods of 5 ms.
These results probe a plausible range of pulsar ellipticity values, well within the boundaries of what the crust of a standard neutron star is expected to support, at the ≈ 10 −5 level (Johnson-McDaniel & Owen 2013). Since the closest neutron star is expected to be at a distance of about 10 pc, it is likely that there are several hundred within 100 pc. On the other hand, recent analyses of the population of known pulsars suggest that their ellipticity should lie in the 10 −9 decade (Woan et al. 2018; Bhattacharyya 2020), which we reach only for sources with spin periods shorter than 5 ms and within 10 pc. When the O3 LIGO data is released, its sensitivity improvement with respect to the O2 data used here (Buikema et al. 2020) will allow us to extend the reach of our search and probe ellipticities in the 10 −9 decade at these higher frequencies.

Figure 1 .
Figure 1. Segmentation of the data used for the Einstein@Home search and the follow-up stages. The lower two bars show the input SFTs. The first gap in the data, starting at 70 days, is due to spectral contamination in LHO, based on which we decided to exclude this period from the analysis. The second large gap, starting at ≈ 120 days, is due to an interruption of the science run for detector commissioning.

Figure 2 .
Figure 2. Number of searched templates per 50-mHz band as a function of frequency. The sky resolution increases with frequency, causing the increase in the number of templates. The number of templates in frequency and spin-down is 14 970 and 8 855, respectively.

Figure 3 .
Figure 3. Detection statistic values of candidates as a function of frequency. The candidates coming from undisturbed bands are blue circles, those from disturbed bands are red triangles and those from hardware injections are green squares. An unconventional vertical scale is used in all plots, which is linear below 10 and log10 elsewhere. Left panels: β_{S/GLtL,r} and 2F_r value of the loudest candidate (the candidate with the highest β_{S/GLtL,r}) over 50 mHz, the entire sky and the full spin-down range, out of the Einstein@Home search. The increase in detection statistics with frequency is due to the number of searched templates increasing with frequency, as shown in Fig. 2. The orange gridded area in the lower left panel indicates the 3σ expected range in Gaussian noise. Right panels: detection statistic values of the 350 145 candidates that are followed up. By comparing the right and left panels one can see how we "dig" below the level of the loudest 50-mHz candidate with our follow-up stages.

Figure 4 .
Figure 4. Mismatch distributions for the various follow-up searches, based on 1 000 injection-and-recovery Monte Carlos. The search set-ups are chosen so that the SNR of a signal increases from one stage to the next. This is achieved either by increasing the T_coh of the search and/or by decreasing the mismatch. We note that even though the average mismatch of Stage 8 is larger than that of the previous two stages, this does not imply that the expected SNR for a signal out of Stage 8 is smaller.

Figure 5 .
Figure 5. Distributions of R_a of candidates from signal injection-and-recovery Monte Carlos (solid lines) and from the actual search (shaded areas). The dashed-shaded areas show the R_a bins associated with the hardware injections. The dashed vertical lines mark the R_a threshold values. The dashed horizontal lines mark the 1-candidate level in the search results.

Figure 6 .
Figure 6. Smallest gravitational wave amplitude h0 that we can exclude for the assumed population of signals (see Section 3.4). We compare our results with the latest literature: the Falcon search (Dergachev & Papa 2020) and the LIGO results (Abbott et al. 2019a) on the same data. There are multiple curves associated with the LIGO results because different analysis pipelines were used.

Figure 7 .
Figure 7. Upper limits on the ellipticity of a source at a given distance (black). We also show the recent upper limits from the low-ellipticity all-sky search of Dergachev & Papa (2020). The dashed line is the spin-down ellipticity for the highest spin-down rate probed by each search.