2-OGC: Open Gravitational-wave Catalog of Binary Mergers from Analysis of Public Advanced LIGO and Virgo Data

We present the second Open Gravitational-wave Catalog (2-OGC) of compact-binary coalescences, obtained from the complete set of public data from Advanced LIGO’s first and second observing runs. For the first time we also search public data from the Virgo observatory. The sensitivity of our search benefits from updated methods of ranking candidate events including the effects of nonstationary detector noise and varying network sensitivity; in a separate targeted binary black hole merger search we also impose a prior distribution of binary component masses. We identify a population of 14 binary black hole merger events with probability of astrophysical origin >0.5 as well as the binary neutron star merger GW170817. We confirm the previously reported events GW170121, GW170304, and GW170727 and also report GW151205, a new marginal binary black hole merger whose high primary mass suggests it may have formed through hierarchical merger. We find no additional significant binary neutron star merger or neutron star–black hole merger events. To enable deeper follow-up as our understanding of the underlying populations evolves, we make available our comprehensive catalog of events, including the subthreshold population of candidates and posterior samples from parameter inference of the 30 most significant binary black hole candidates.


Introduction
The Advanced LIGO (LIGO Scientific Collaboration et al. 2015) and Virgo (Acernese et al. 2015) observatories have ushered in the age of gravitational-wave astronomy. The first and second observing runs (O1 and O2) of Advanced LIGO and Virgo covered the period from 2015 to 2017. This provided a total of 171 days of multidetector observing time. To date, these instruments have observed a population of binary black holes (BBHs) and a single binary neutron star, GW170817, which has become one of the most observed astronomical events (Abbott et al. 2017a). Ten BBH mergers and a single binary neutron star merger have been reported in this period by the LIGO and Virgo Collaborations (Abbott et al. 2019a). Several independent analyses have examined publicly released data (Antelis & Moreno 2019;Nitz et al. 2019a;Venumadhav et al. 2019a), including an analysis targeting BBH mergers that reported several additional candidates .
The first open gravitational-wave catalog (1-OGC) searched for compact-binary coalescences during O1 (Nitz et al. 2019a). We extend that analysis to cover both O1 and O2 while incorporating Virgo data for the first time. During the first observing run, only the two LIGO instruments were observing. Joint three-detector observing with the Virgo instrument began in 2017 August during the second observing run.
We make additional improvements to our search by accounting for short-time variations in the network sensitivity and power spectral density (PSD) estimates directly in our ranking of candidate events. A similar procedure for tracking PSD variations was independently developed in Venumadhav et al. (2019a, 2019b) and Zackay et al. (2019b). We produce a comprehensive catalog of candidate events from our matched-filter search, which covers binary neutron star (BNS), neutron star-black hole (NSBH), and BBH mergers. While not individually significant on their own, subthreshold candidates can be correlated with gamma-ray burst candidates (Nitz et al. 2019c), high-energy neutrinos (Countryman et al. 2019), optical transients (Andreoni et al. 2019; Setzer et al. 2019), and other counterparts to uncover new, fainter sources.
In addition to our broad search, we conduct a targeted analysis to uncover fainter BBH mergers. It is possible to confidently detect BBH mergers that are not individually significant in the context of the wider search space by considering their consistency with the population of confidently observed BBH mergers. The collection of highly significant detected events constrains astrophysical rates and distributions with relatively small uncertainties (Abbott et al. 2019b). For this reason, we do not yet employ this technique for binary neutron star or NSBH populations, as their rates and mass and spin distributions are much less constrained. We improve over the BBH-focused analysis introduced in Nitz et al. (2019a) by considering an explicit population prior (Dent & Veitch 2014). This focused approach is most directly comparable to the results of Venumadhav et al. (2019b), who consider only BBH mergers, rather than a broad parameter search such as that employed in Abbott et al. (2019a).
We find eight highly significant BBH mergers at false alarm rates less than 1 per 100 yr in our full analysis, along with the binary neutron star merger GW170817. No other individually significant BNS or NSBH sources were identified. However, if the population of these sources were to be better understood, it may be possible to pick out fainter mergers from our population of candidates. When we apply a ranking to search candidates that optimizes search sensitivity for a population of BBH mergers similar to that already detected, we identify a further six such mergers with a probability of astrophysical origin above 50%. These include GW170818 and GW170729, which were reported in Abbott et al. (2019a), along with GW170121, GW170727, and GW170304, which were reported in Venumadhav et al. (2019b). We report one new marginal BBH candidate, GW151205. Our results are broadly consistent with both Venumadhav et al. (2019b) and Abbott et al. (2019a).

LIGO and Virgo Observing Period
We analyze the complete set of public LIGO and Virgo data (Vallisneri et al. 2015). The distribution of multidetector analysis time and the evolution of the observatories' sensitivities over time are shown in Table 1 and Figure 1, respectively. To date, there have been 288 days of Advanced LIGO and Virgo observing time. Two or more instruments were observing during 171 days. There were only 15.2 days of full LIGO-Hanford, LIGO-Livingston, and Virgo joint observing. O2 was the first time that Virgo conducted joint observing with the LIGO interferometers since initial LIGO (Abbott et al. 2016a). The Virgo instrument significantly surpassed the average BNS range during the last VSR2/3 science run (∼10 Mpc; Abadie et al. 2012) to achieve an average of 27 Mpc during the joint observing period of O2. While the amount of triple-detector observing time is limited during the first two observing runs, the ongoing third observing run will considerably improve the availability of three-detector joint observing time. The methods demonstrated here will be applicable to future analysis of the O3 multidetector data set. Our analysis during triple-detector time remains sensitive to signals that appear only in two of the three detectors, as discussed below in Section 3.3.
We note that there are ∼117 days of single-detector observing time. In this work we do not consider the detection of gravitational-wave mergers during this time; however, methods for assigning meaningful significance to such events have been proposed (Callister et al. 2017) and will be investigated in future work. Single-detector observing time has been used in follow-up analyses where a merger could be confirmed by electromagnetic observations (Abbott et al. 2019c; Nitz et al. 2019c).

Search for Binary Mergers
We use a matched-filtering approach as implemented in the open source PyCBC library (Allen 2005; Usman et al. 2016; Nitz et al. 2019b). This toolkit has been similarly employed in LIGO/Virgo collaboration and independent analyses (Abbott et al. 2019a; Nitz et al. 2019a). We extend the approach used in the 1-OGC analysis (Nitz et al. 2019a) to handle the analysis of Virgo data. We also incorporate improvements to the ranking of candidates by accounting for time variations in the PSD and network sensitivity.
The search procedure can be summarized as follows. The data from each detector are correlated against a set of possible merger signals. Matched filtering is used to calculate a signal-to-noise (S/N) time series for each potential signal waveform. Our analysis identifies peaks in these time series and follows up the peaks with a set of signal consistency tests. These single-detector candidates are then combined into multidetector candidates by enforcing astrophysically consistent time delays between detectors, as well as enforcing identical component masses and spins. Finally, these candidates are ranked by the ratio of their signal and noise model likelihoods (see Section 3.3).

Search Space
Our analysis targets a wide range of BNS, NSBH, and BBH mergers. We perform a matched filter on the data with waveform models that span the range of desired detectable sources. Although the space of possible binary component masses and spins is continuous, we must select a discrete set of points in this space as templates to correlate against the data: we use the set of ∼400,000 templates introduced in Dal Canton & Harry (2017), which has been previously used in Nitz et al. (2019a) and Abbott et al. (2019a). This bank of templates is suitable for the detection of mergers up to binary masses of several hundred solar masses, under the conditions that the dominant gravitational-wave emission mode is adequate to describe the signal and that the effects of precession caused by misalignment of the orbital and component object angular momenta can be neglected (Dal Canton & Harry 2017). Neglecting precession causes a 7% (14%) loss in sensitivity to BBH (NSBH) sources with mass ratio 5 (14) when assuming an isotropic distribution of the components' spins; the loss is negligible for mergers with comparable component masses. Figure 2 shows the distribution of template detector-frame component masses. We use the spinning effective-one-body model (SEOBNRv4) for templates corresponding to mergers with higher (redshifted, detector frame) total mass (Taracchini et al. 2014; Bohé et al. 2016). The TaylorF2 post-Newtonian model is used in all other cases (Sathyaprakash & Dhurandhar 1991; Droz et al. 1999; Blanchet 2002; Faye et al. 2012).

Note. We use here the abbreviations H, L, and V for the LIGO-Hanford, LIGO-Livingston, and Virgo observatories, respectively. Note that some data (∼0.5%) may not be analyzed due to analysis constraints. Only the indicated combinations of observatories were operating for each time period, hence each is exclusive of all others.

Single-detector Candidates
The first stage of our analysis is to identify single-detector candidates. These correspond to peaks in the S/N time series of a particular template waveform. Each is assigned a ranking statistic as we will discuss below. In this work, we do not explicitly conduct a search for sources that only appear in a single detector; however, a ranking of single-detector candidates forms the first stage of our analysis. For each template waveform and detector data set we calculate a signal-to-noise time series ρ(t) using matched filtering. This can be expressed using a frequency-domain convolution as

ρ(t) = 4 ∫_{f_l}^{f_h} [h̃*(f) s̃(f) / S_n(f)] e^{2πift} df,

where h̃ is the normalized (Fourier domain) template waveform and s̃ is the detector data. S_n is the noise PSD of the data, which is estimated using Welch's method. The integration range extends from a template-dependent lower frequency limit f_l (ranging from 20 to 30 Hz in our search) to an upper cutoff f_h given by the Nyquist frequency of the data. Peaks in the S/N time series are collected as single-detector candidates (triggers). To control the rate of single-detector candidates to be examined, our analysis preclusters these triggers: only those that are among the loudest 100 every ∼1 s within a set of predefined chirp-mass bins are kept. The binning ensures that loud triggers from a specific region (which may be caused by non-Gaussian noise artifacts) do not cause quiet signals elsewhere in parameter space to be missed.
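As an illustration of this frequency-domain correlation, the structure of the computation can be sketched with NumPy. The function name and the flat-PSD toy inputs below are our own, not part of the PyCBC implementation, and the FFT normalization is only schematic:

```python
import numpy as np

def matched_filter_snr(data_f, template_f, psd, delta_f):
    """Sketch of an S/N time series from a frequency-domain correlation.

    data_f, template_f: one-sided frequency-domain arrays on the same grid.
    psd: noise power spectral density on that grid.
    Returns a complex S/N series, one sample per FFT bin.
    """
    n = len(data_f)
    # <h|h>: noise-weighted inner product used to normalize the template.
    sigma_sq = 4.0 * delta_f * np.sum(np.abs(template_f) ** 2 / psd)
    # Correlate the data against the template, weighted by the PSD ...
    integrand = np.conj(template_f) * data_f / psd
    # ... then inverse-FFT to evaluate the correlation at every time lag.
    return 4.0 * delta_f * n * np.fft.ifft(integrand) / np.sqrt(sigma_sq)
```

In the real pipeline the template bank, Welch PSD estimation over 512 s segments, and FFT conventions are considerably more involved; this sketch only shows the shape of the correlation.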
We remove candidates where the instrument state indicates the data may be adversely affected by instrumental or environmental noise artifacts as indicated by the Gravitational-Wave Open Science Center (GWOSC; Vallisneri et al. 2015; Abbott et al. 2016b, 2018). This affects ∼0.5% of the observation period. However, there remain classes of transient non-Gaussian noise in the LIGO data which produce triggers with large values of S/N (Nuttall et al. 2015; Abbott et al. 2016b, 2018; Cabero et al. 2019). The surviving single-detector candidates are subjected to the signal consistency tests introduced in Allen (2005) and Nitz (2018). These tests check that the accumulation of signal power as a function of frequency, and power outside the expected signal band, respectively, are consistent with an astrophysical explanation. They produce two statistic values which are χ² distributed: χ²_r and χ²_r,sg, respectively (Nitz 2018). These are used to reweight (Babak et al. 2013) the single-detector signal strength in two stages. This reweighting allows candidates that closely match an expected astrophysical source to be assigned a statistic value similar to their matched filter S/N, while downweighting many classes of non-Gaussian noise transients. For all candidates we apply

ρ̃ = ρ [(1 + (χ²_r)³)/2]^(−1/6) for χ²_r > 1, and ρ̃ = ρ otherwise.

For short-duration, higher-mass templates we additionally apply

ρ̂ = ρ̃ (χ²_r,sg / 4)^(−1/2) for χ²_r,sg > 4.

The latter test is only applied to these short-duration, higher-mass signals as the test is computationally intensive and has the greatest impact for short-duration signals which may be otherwise confused with some classes of transient non-Gaussian noise (Nitz 2018; Cabero et al. 2019). Otherwise, we set ρ̂ = ρ̃. This statistic ρ̂ is the same as used in the 1-OGC analysis (Nitz et al. 2019a) and the LVC O2 catalog (Abbott et al. 2019a). We further improve upon this by accounting for short-term changes in the overall PSD estimate. The issue of PSD variation was also addressed in Venumadhav et al. (2019a).
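A minimal sketch of this two-stage reweighting, assuming the standard functional forms from Babak et al. (2013) and Nitz (2018); the function names are ours:

```python
def reweighted_snr(rho, rchisq):
    """First-stage reweighting: down-weight candidates whose accumulation
    of power over frequency is inconsistent with a real signal."""
    if rchisq <= 1.0:
        return rho
    return rho * ((1.0 + rchisq ** 3) / 2.0) ** (-1.0 / 6.0)

def sg_reweighted_snr(rho_tilde, sg_rchisq):
    """Second-stage reweighting using the sine-Gaussian consistency test,
    applied only to short-duration, higher-mass templates."""
    if sg_rchisq <= 4.0:
        return rho_tilde
    return rho_tilde * (sg_rchisq / 4.0) ** -0.5
```

A candidate with χ²_r at or below unity keeps its matched-filter S/N unchanged, while noise transients with large χ² values are strongly suppressed.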
Previously we modeled the PSD for each detector as a function of frequency, S_n(f), which is estimated on a 512 s timescale. We now introduce a time-dependent factor v_S(t) which accounts for short-term O(10 s) variations in sensitivity, estimated using the method described in Mozzon et al. (2020). Short-term variation in the PSD will introduce variation in ρ, as we use the estimated PSD S(f) to calculate it. The PSD estimated over short timescales, S_s(f), can differ from a PSD estimated over a longer duration, S_l(f), if the noise is nonstationary.
To track the variation in the PSD we use the variance of the S/N, which in the absence of a signal is determined by the ratio of the short-term to the long-term PSD. To estimate this variance, we first filter the detector data s̃(f) with a normalized filter whose Fourier-domain amplitude approximates that of CBC templates. Using Parseval's theorem, we can then estimate the variation in the PSD at a given time t_0 by averaging the squared magnitude of the convolution between the filter and the data over an interval Δt, where Δt is chosen to match the typical timescale of nonstationarity.
After finding v_S(t), we evaluate its correlation with the S/Ns and rates of noise triggers empirically. The rate of noise triggers above a given statistic threshold is R_N(ρ̂ > ρ̂*), where the statistic ρ̂ is (proportional to) the S/N obtained by matched filtering using the long-duration PSD S_l(f). The noise trigger rate varies over time due to the nonstationarity of the PSD and is thus a function of the short-duration PSD variation measure. Since the S/N scales as 1/√S(f), we naively expect the noise trigger rate to be a function of a "corrected" S/N ρ̂/√v_S; in practice we allow for a more general dependence, which we write in terms of a fitting function f_N for the expected noise distribution. Empirically we find, for data without strong localized non-Gaussian transients (glitches), that the trigger rate falls off exponentially above threshold. Linearizing the PSD variation measure v_S(t) around unity, v_S = 1 + ε_S, the logarithm of the trigger rate above a given threshold then varies linearly with ε_S. By determining the slope of the log-rate versus ε_S dependence for various thresholds we estimate κ ≈ 0.33; thus if we construct a corrected statistic ρ̂/v_S(t)^κ, the rate of noise triggers above a given threshold of the corrected statistic is on average no longer affected by variation in v_S(t).
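The resulting correction reduces to a one-line rescaling (function name ours; the exponent is the empirical value quoted above):

```python
def corrected_snr(rho_hat, v_s, kappa=0.33):
    """Rescale the reweighted S/N by the short-term PSD variation measure.

    v_s = 1 corresponds to stationary noise and leaves the statistic
    unchanged; kappa ~ 0.33 is the empirically fitted exponent.
    """
    return rho_hat / v_s ** kappa
```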
The analysis of Venumadhav et al. (2019a) included a similar correction factor, and Zackay et al. (2019b) indicate a modest improvement in sensitivity for the sources they consider. In our analysis, the greatest improvement is for sources corresponding to long-duration templates (BNS and NSBH), while there is negligible improvement for the shorter-duration BBH sources.

Multidetector Coincident Candidates
In the previous section, we discussed how we identify single-detector candidates and assign them a ranking statistic. We now combine single-detector candidates from multiple detectors to form multidetector candidates (Davies et al. 2020). We introduce a new ranking statistic formed from models of the relative signal and noise likelihoods for a particular candidate. This ranking statistic is based on the expected rates of signal and noise candidates, and is thus comparable across different combinations of detectors by design. We are then able to search for coincident triggers in all available combinations of detectors (for instance, during HLV time, coincidences can be formed in HL, HV, LV, and HLV), compare the resulting candidates to one another, and cluster and combine false alarm rates while maintaining near-optimal sensitivity.
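As a toy illustration of enforcing astrophysically consistent time delays when forming coincidences, one might write the following; the detector separations are approximate light-travel times, and the names and the timing slack are our own:

```python
from itertools import combinations

# Approximate maximum light-travel times between detector pairs (seconds).
MAX_DELAY = {("H1", "L1"): 0.010, ("H1", "V1"): 0.027, ("L1", "V1"): 0.026}

def consistent_pairs(trigger_times, slack=0.002):
    """Return detector pairs whose trigger times are consistent with an
    astrophysical time delay (plus a small timing slack)."""
    pairs = []
    for a, b in combinations(sorted(trigger_times), 2):
        limit = MAX_DELAY[(a, b)] + slack
        if abs(trigger_times[a] - trigger_times[b]) <= limit:
            pairs.append((a, b))
    return pairs
```

In the actual analysis, coincident candidates must additionally share identical template masses and spins, and all allowed combinations (HL, HV, LV, HLV) are ranked against one another.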
Our signal model is composed of two parts. First, the overall network sensitivity of the analysis at the time of the candidate. Assuming a spatially homogeneous distribution of sources, the signal rate is directly proportional to the sensitive volume. We approximate this factor using the instantaneous range of the least sensitive instrument contributing to the multidetector candidate for a given template labeled by i, σ_min,i, relative to a representative range over the analysis, defined by the median network sensitivity in the HL detector network for that template, σ_HL,i. Note that the detectors that contribute to the candidate are not necessarily all of the detectors available at that time. The second part is the probability, given an isotropic and volumetric population of sources, that an astrophysical signal would be observed to have a particular set of parameters θ, including time delays, relative amplitudes, and relative phases between the network of observatories. This probability distribution p(θ|S) is calculated by a Monte Carlo method similarly to Nitz et al. (2017). For this work we have extended this technique to three detectors for the first time. Combined, our model for the rate density of signals recovered with network parameters θ in a combination of instruments characterized by σ_min,i can be expressed as

r_S,i(θ) ∝ (σ_min,i / σ_HL,i)³ p(θ|S).

The noise model is calculated in the same manner as in Nitz et al. (2017). We treat the noise from each detector as being independent and fit our single-detector ranking statistic to an exponential slope. This fit is performed separately for each template. The fit parameters (such as the slope and overall amplitude of the exponential) are initially noisy due to low number statistics, so they are smoothed over the template space using a three-dimensional Gaussian kernel in the template duration, effective spin χ_eff, and symmetric mass ratio η parameters.
The rate density of noise events in the ith template with contributing detectors labeled by n and single-detector rankings {ρ̂_n} can be summarized as

r_N,i({ρ̂_n}) = A_{n} ∏_n r_n,i exp(−α_i ρ̂_n),

where r_n,i and α_i are the overall amplitude and slope of the exponential noise rate model, respectively. The prefactor A_{n} is the time window for which coincidences can be formed, which depends on the combination of detectors {n} being considered. The three-detector coincidence rate is vastly reduced compared to the two-detector rate; in a representative stretch of O2 data, the HLV coincidence rate is found to be around a factor of 10^4 lower than that of HL coincidences. Details of both the signal and noise model calculations will be provided in Davies et al. (2020).
The ranking statistic for a given candidate in template i is the log of the ratio of these two rate densities:

Λ_i = log(r_S,i / r_N,i),

where we drop the dependences on θ and {ρ̂_n} for simplicity of notation. Typically, one signal event (or loud noise event) in the gravitational-wave data stream may give rise to a large number of correlated candidate multidetector events within a short time, in different templates and with different combinations of detectors {n}. To calculate the significance of such a "cluster" of events, we will approximate their arrival as a Poisson process: in order to do this, we keep the event from each cluster with highest Λ, typically the highest-ranked event within a 10 s time window, and discard the rest.
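A toy version of this rate-density ratio for a two-detector coincidence might look like the following; all names and numbers are illustrative, and the actual models are those of Davies et al. (2020):

```python
import math

def ranking_statistic(rhos, sigma_min, sigma_hl, p_theta,
                      amplitude, slope, window):
    """Toy log signal-to-noise rate-density ratio for one coincidence.

    rhos: dict mapping detector name -> reweighted single-detector statistic.
    The signal rate scales with the cube of the least sensitive range;
    the noise rate is a product of per-detector exponential fits times
    the allowed coincidence window.
    """
    log_signal = 3.0 * math.log(sigma_min / sigma_hl) + math.log(p_theta)
    log_noise = math.log(window)
    for det, rho in rhos.items():
        log_noise += math.log(amplitude[det]) - slope[det] * rho
    return log_signal - log_noise
```

Louder triggers or a more sensitive network at the candidate's time increase the statistic, while a larger coincidence window or noisier detectors decrease it.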
Comparing this new statistic to the one employed for the 1-OGC analysis (Nitz et al. 2019a) using a simulated population of mergers, we find an average 8% increase in the detectable volume during the O1 period at a fixed false alarm rate of 1 per 100 yr. This population is isotropically distributed in sky location and orientation, while the mass distribution is scaled to ensure a constant rate of signals above a fixed S/N across the log-component-mass search space in Figure 2. In addition to this improved sensitivity for events where H1 and L1 contribute, this search will also benefit by analyzing times where Virgo and only one LIGO detector are operating (as in Table 1), and also by improved sensitivity in times when all three detectors are operating, due to the ability to form three-detector events. Such sensitivity improvements are detailed in Davies et al. (2020).

Statistical Significance
In the previous section we introduced the ranking statistic used in our analysis. We empirically measure the statistical significance of a particular value of our ranking statistic by comparing it to a set of false (noise) candidate events produced in numerous fictitious analyses. Each analysis is generated by time-shifting the data from one detector by an amount greater than is astrophysically allowed by light travel time considerations (Babak et al. 2013; Usman et al. 2016). Otherwise, each time-shifted analysis is treated in an identical manner as the search itself. By repeating this procedure, upwards of 10^4 yr worth of false alarms can be produced from just a few days of data. By construction, the results of these analyses cannot contain true multidetector astrophysical candidates, but may contain coincidences between astrophysical sources and instrumental noise. We use a hierarchical procedure as in Abbott et al. (2016c) and Nitz et al. (2019a) to minimize the impact of astrophysical contamination while retaining an unbiased rate of false alarms (Capano et al. 2017): a candidate with large Λ is removed from the estimation of background for less significant candidates. This method has been employed to detect significant events in numerous analyses (Abbott et al. 2009, 2019a; Abadie et al. 2012; Nitz et al. 2019a; Venumadhav et al. 2019b). The validity of the resulting background estimate follows from an assumption that the times of occurrence of noise events are statistically independent between different detectors; see Was et al. (2010) and Capano et al. (2017) for further discussion of empirical background estimation and the time shift method. This is a reasonable assumption for detectors separated by thousands of kilometers (Abbott et al. 2016b).
The time shift method has the advantage that no other assumptions about the noise need be accurate: the populations and morphology of noise artifacts need not be uncorrelated or different between detectors, only the times at which they occur. In fact the LIGO and Virgo instruments share common components and environmental coupling mechanisms which may produce similar classes of non-Gaussian artifacts.

Targeting Binary Black Hole Mergers
Given a population of individually significant BBH mergers, it is possible to incorporate knowledge about the overall distribution and rate of sources to identify weaker candidates. A similar approach was employed in Nitz et al. (2019a) and is the basis of astrophysical significance statements in Abbott et al. (2019a). In this catalog we improve over the strategy of Nitz et al. (2019a), which considered an excessively conservative parameter space for BBH and did not use an explicit model of the distribution of signals and noise within that space. In addition, we restrict to sources that are consistent with our signal models by imposing a threshold on our primary signal consistency test to reject any single-detector candidate with χ²_r > 2.0. Simulated signals within our target population, and the individual highly significant candidates previously detected, are consistent with this choice. (The full, non-BBH-specific analysis allows a much greater deviation from our signal models before rejection of a candidate.) As a first step in obtaining the targeted BBH results we restrict the analysis to a subspace of the full search, illustrated in Figure 2. Rather than applying this constraint after obtaining the set of "clustered" candidates via selecting the highest ranked event within 10 s windows, as in Nitz et al. (2019a), here we apply the constraint to candidates prior to the clustering step. This allows us to choose a less extensive BBH region containing fewer templates than employed in Nitz et al. (2019a) without loss of sensitivity. (The previous method used a wider BBH template set to allow for the possibility that a signal inside the intended target region is recovered only by a template lying outside that region, due to clustering.) Our BBH region is bounded in component masses as illustrated in Figure 2, with detector-frame chirp mass ℳ < 60 M⊙. The upper boundary is consistent with the redshifted detector-frame masses that would be obtained by the observed highest-mass sources near the detection threshold.
Applying a prior over the intrinsic parameters of the distribution of detectable sources was proposed in Dent & Veitch (2014) and tested in Nitz et al. (2017). In this work, we impose an explicit detection prior that is flat over chirp mass.
As seen in Figure 2, the distribution of templates is highly nonuniform. The BBH region of the template bank is placed first using a stochastic algorithm (Ajith et al. 2014; Dal Canton & Harry 2017), where the density of templates directly correlates to the density of effectively independent noise events. The template density over chirp mass ℳ scales as ℳ^(−11/3), which we verify empirically for our bank. Our detection statistic aims to follow the relative rate density of signal versus noise events at fixed S/N, and we make the simple choice of assuming a signal density flat over ℳ: thus the ranking statistic receives an extra term describing the ratio of signal-to-noise densities over component masses, proportional to (11/3) log ℳ up to an additive constant. Roughly, any given lower-mass template is less likely to detect a signal than a higher-mass template, given that templates are much sparser at high masses.
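A sketch of the chirp mass and the resulting prior term; the reference mass here is an arbitrary illustrative choice that only shifts the statistic by a constant:

```python
import math

def chirp_mass(m1, m2):
    """Detector-frame chirp mass from component masses (solar masses)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def mass_prior_term(mchirp, mchirp_ref=20.0):
    """Extra ranking-statistic term for a signal density flat in chirp mass
    against a template (noise) density scaling as mchirp**(-11/3)."""
    return (11.0 / 3.0) * math.log(mchirp / mchirp_ref)
```

Adding this term boosts higher-mass candidates relative to lower-mass ones, reflecting the sparser template (and hence noise-event) density at high masses.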
Our choice of BBH region and detection prior has a similar effect as the highly constrained search space and multiple chirp mass bins used in Venumadhav et al. (2019b) but avoids the multiple boundary effects present there and provides a more clearly implemented and astrophysically motivated prior distribution. Furthermore, our method provides a path forward to more accurate assessment of lower-S/N candidates as our understanding of the overall population evolves.
To estimate the probability p_astro that a given candidate is astrophysical in origin, we combine the background of this targeted BBH analysis with the estimated distribution of observations. We improve upon the analysis in Nitz et al. (2019a), which employed an analytic model of the signal distribution and a fixed conservative rate of mergers, by using the mixture model method developed in Farr et al. (2015), similar to that employed in Abbott et al. (2019a). This method requires the distribution of noise and signals over our ranking statistic, which we take from our time-slide background estimates and a population of simulated signals, respectively. Using a simulated set of mergers that is isotropically distributed in orientation and uniformly distributed over mass to cover the targeted BBH region, we find that the targeted BBH analysis recovers a factor of 1.5-1.6 more sources at a fixed false alarm rate of 1 per 100 yr than the full parameter space analysis. The majority of this change in sensitivity is attributed to the inclusion of only background events consistent with BBH mergers. The choice of ranking statistic to optimize sensitivity to a target BBH signal population has a smaller effect.
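For a single candidate, the mixture-model estimate reduces to a simple ratio; this sketch assumes the two-component form of Farr et al. (2015) with illustrative inputs:

```python
def p_astro(rate_signal, pdf_signal, rate_noise, pdf_noise):
    """Two-component mixture estimate of astrophysical probability.

    pdf_signal / pdf_noise: signal and noise distributions of the ranking
    statistic evaluated at the candidate's value; the rates are the
    expected foreground and background counts.
    """
    foreground = rate_signal * pdf_signal
    background = rate_noise * pdf_noise
    return foreground / (foreground + background)
```

In the full method the rates themselves are inferred jointly from the whole candidate list rather than fixed in advance, so p_astro for each candidate depends on the entire population.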

Observational Results
We present compact binary merger candidates from the complete set of public LIGO and Virgo data spanning the observing runs from 2015 to 2017. This comprises roughly 171 days of multidetector observing time, which we divide into 31 subanalyses. Except as noted, each analysis contains ∼5 days of observing time, which allows for estimation of the false alarm rate to <1 per 10,000 yr. This interval allows us to track changes in the detector configuration which may result in time-changing detector quality. All data were retrieved from GWOSC (Vallisneri et al. 2015), and we have used the most up-to-date version of the bulk data released. We note that an exceptional data release was produced by GWOSC which contains background data relating to GW170608. We have analyzed this data release separately to preserve consistent data quality.
The top candidates sorted by FAR from the complete analysis are given in Table 2. All of the most significant candidates were observed by LIGO-Hanford and LIGO-Livingston, which are the two most sensitive detectors in the network and contribute the bulk of the observing time. There are eight BBH candidates and one BNS candidate at a FAR of less than 1 per 100 yr. These sources are confidently detected in the full analysis without optimizing the search for any specific population of sources. The next most significant candidates correspond to GW170729, GW170121, GW170727, and GW170818, respectively. A similar PyCBC-based analysis was performed in Abbott et al. (2019a) but used a higher single-detector S/N threshold than employed in our analysis (ρ > 5.5 versus 4.0); as the latter three events were found with ρ ≤ 5.1 in the LIGO-Hanford detector, we would not expect this earlier analysis to identify them.

Binary Black Holes
Using the targeted BBH analysis introduced in Section 3.5, we report results for BBH mergers consistent with the existing set of highly significant merger events in Table 3. The probability that a candidate is astrophysical in origin, p_astro, is calculated for the most significant candidates. Our analysis identifies 14 BBH candidates with p_astro > 50%, meeting the standard detection criteria introduced in Abbott et al. (2019a) and similarly followed in Venumadhav et al. (2019b). Our results are broadly consistent with the union of those two analyses, as our candidate list includes all previously claimed BBH detections. We confirm the observation of GW170121, GW170304, and GW170727, reported in Venumadhav et al. (2019b), as significant. We also report the marginal detection of GW151205.
Several marginal events reported in Venumadhav et al. (2019a, 2019b) are found as top candidates, but do not meet our detection threshold based on estimated probability of astrophysical origin. Numerous differences between these two analyses, including template bank placement, treatment of data, choice of signal consistency test, and method for assigning astrophysical significance, may be the cause of the reported differences. The consistency of results for less marginal candidates indicates that differences in analysis sensitivity are likely small. Cross comparison with a common set of simulated signals would be required for a more precise assessment. Future analyses incorporating more sophisticated treatment of the source distribution may yield different results for the probability of astrophysical origin for some subthreshold candidates. For example, 151216+09:24:16UTC, which was first identified in Nitz et al. (2019a) and is now assigned p_astro ≈ 0.2, could obtain a higher probability of being astrophysical under a model with a distribution of detected mergers peaked close to its apparent component masses, rather than uniform over ℳ as taken here. In any case, the astrophysical probability we assign assumes that the candidate event, if astrophysical, is drawn from an existing population. The prior applied here to the population distribution over component masses could be extended to the distribution over component-object spins. (Here, we implicitly apply a prior over spins which mirrors the density of templates, which is not far from uniform over χ_eff.) As 151216+09:24:16UTC may have high component spins, if the set of highly significant observations does not include any comparable systems, its probability of astrophysical origin could be arbitrarily small, depending on the choice of prior distribution over spins.
We infer the properties of our BBH candidates using Bayesian parameter inference implemented in the PyCBC library. We use the IMRPhenomPv2 model, which describes the dominant gravitational-wave mode of the inspiral-merger-ringdown of precessing, noneccentric binaries (Hannam et al. 2014; Schmidt et al. 2015). For each candidate, we use a prior isotropic in sky location and binary orientation. As in Abbott et al. (2019a), our prior on each component object's spin is uniform in magnitude and isotropic in orientation.
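This spin prior can be made concrete with a short sketch: spin magnitudes drawn uniformly in [0, 1) and orientations isotropically (uniform in the cosine of the tilt angle), combined into the mass-weighted effective spin χ_eff. The masses, seed, and sample size below are illustrative assumptions:

```python
import random

def sample_spin(rng):
    """One component spin: magnitude uniform in [0, 1),
    orientation isotropic (uniform in the cosine of the tilt angle)."""
    return rng.random(), rng.uniform(-1.0, 1.0)

def draw_chi_eff(m1, m2, rng):
    """Mass-weighted aligned spin combination:
    chi_eff = (m1*a1*cos(t1) + m2*a2*cos(t2)) / (m1 + m2)."""
    a1, c1 = sample_spin(rng)
    a2, c2 = sample_spin(rng)
    return (m1 * a1 * c1 + m2 * a2 * c2) / (m1 + m2)

# Assumed, illustrative component masses (solar masses) and a fixed seed.
rng = random.Random(1)
samples = [draw_chi_eff(30.0, 25.0, rng) for _ in range(20000)]
# The implied chi_eff prior is symmetric about zero and bounded by +/-1,
# which is why a posterior peaked near chi_eff ~ 0.5-0.7 is informative.
```

The symmetry of this prior about zero is what makes the strongly positive χ_eff posteriors discussed below stand out from the other candidates.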
Since many of the candidates are at large (>1 Gpc) distances, we assume a prior which is uniform in comoving volume, and a prior uniform in source-frame component mass. We use standard ΛCDM cosmology (Planck Collaboration et al. 2016) to relate the comoving volume to luminosity distance, and to redshift the masses to the detector frame. This choice of prior differs from previous analyses (Abbott et al. 2019a; Venumadhav et al. 2019a, 2019b), which used a prior uniform in volume (ignoring cosmological effects) and detector-frame masses. A prior uniform in comoving volume assigns lower weight to large luminosity distances than a prior uniform in volume. Consequently, the luminosity distances we obtain for some candidates are slightly lower than previously reported values (e.g., we obtain 1400 Mpc). The marginalized parameter estimates of the component masses, effective spin, and luminosity distance for the top 30 BBH candidates are given in Table 3. Plots of the marginalized posteriors for the BBH candidates with p_astro > 0.5 are shown in Figure 3. For candidates previously reported by the LVC, our results broadly agree with existing parameter estimates (Abbott et al. 2019a). Similarly, we find no clear evidence for precession in our candidates. For 151216+09:24:16 UTC, Venumadhav et al. (2019b) report an effective spin estimate that excludes χ_eff ≈ 0, in contrast to our analysis.
Notes.
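The mapping between redshift, luminosity distance, and detector-frame masses used here can be sketched with a stdlib-only flat ΛCDM calculation. The H0 and Ωm values below are assumed, Planck-like numbers for illustration, not the exact parameters of the analysis:

```python
import math

# Assumed flat-LCDM parameters, close to Planck 2015 (illustrative values).
H0 = 67.7           # Hubble constant, km/s/Mpc
OMEGA_M = 0.31      # matter density; dark energy density is 1 - OMEGA_M
C_KMS = 299792.458  # speed of light, km/s

def comoving_distance(z, steps=10000):
    """D_C = (c/H0) * integral_0^z dz'/E(z') for flat LCDM,
    evaluated with the trapezoidal rule."""
    E = lambda zp: math.sqrt(OMEGA_M * (1 + zp) ** 3 + (1 - OMEGA_M))
    dz = z / steps
    total = 0.5 * (1 / E(0.0) + 1 / E(z))
    for i in range(1, steps):
        total += 1 / E(i * dz)
    return (C_KMS / H0) * total * dz  # Mpc

def luminosity_distance(z):
    """D_L = (1 + z) * D_C in a flat universe."""
    return (1 + z) * comoving_distance(z)

def source_frame_mass(m_detector, z):
    """Masses redshift as m_det = (1 + z) * m_src."""
    return m_detector / (1 + z)
```

For example, a merger at z = 0.2 sits at roughly 1 Gpc in luminosity distance, and a detector-frame mass of 36 M⊙ there corresponds to a source-frame mass of 30 M⊙.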
x: also identified in GWTC-1 (Abbott et al. 2019a); y: 1-OGC (Nitz et al. 2019a); z: Venumadhav et al. (2019a). Candidates are sorted by FAR evaluated for the entire bank of templates. Note that the ranking statistic and false alarm rate may not have a strictly monotonic relationship due to varying data quality between subanalyses. The mass and spin parameters listed are associated with the template waveform yielding the highest-ranked multidetector event for each candidate, and may differ significantly from full Bayesian parameter estimates. Masses are quoted in the detector frame, and are thus larger than source-frame masses by a factor (1+z), where z is the source redshift. a: the FAR is limited only by the available background data. A short analysis period is used for the GW170608 data, which was released separately due to an instrument angular control procedure affecting data from the Hanford observatory (Abbott et al. 2017b).
Notes.
x: also identified in GWTC-1 (Abbott et al. 2019a); y: 1-OGC (Nitz et al. 2019a); z: Venumadhav et al. (2019a). The source-frame masses, χ_eff, and luminosity distance, D_L, are estimated with Bayesian parameter inference (see Section 4.1) and are given with 90% credible intervals. a: the false alarm rate is limited by false coincidences arising from the candidate's time-shifted LIGO-Livingston single-detector trigger. If removed from its own background, the FAR is <1 per 10,000 yr. b: parameter estimates for this candidate are derived only from the LIGO-Hanford and Virgo detectors. LIGO-Livingston was operating at the time, but did not produce a trigger that contributed to the event (see the discussion in Section 4.1).
This indicates that the discrepancy in χ_eff between our analysis and that of Venumadhav et al. (2019b) cannot be entirely explained by differences in prior choice; the difference may be due to differing analysis methods.
We find three other events with p_astro < 0.3 that have χ_eff and masses similar to those of 151216+09:24:16 UTC; these are illustrated in Figure 4. The four events differ from the other events listed in Table 3 in that the posterior distribution of χ_eff strongly deviates from the prior, with the posterior peak between χ_eff ≈ 0.5 and ≈ 0.7. All four events also have similar chirp masses. If these events are drawn from a new population of BBHs, then ongoing and future observing runs should yield candidates with similar properties at high astrophysical significance. Alternatively, they may indicate a common noise feature selected by our analysis.
GW151205, a BBH merger with p_astro = 0.53, may challenge standard stellar formation scenarios if astrophysical. Models that account for pulsational pair-instability supernovae or pair-instability supernovae in stellar evolution suggest the maximum mass of the remnant black hole is ∼40-50 M⊙ (Belczynski et al. 2016; Woosley 2017; Marchant et al. 2019; Woosley 2019; Stevenson et al. 2019). We estimate that there is >95% probability that the primary black hole has a source-frame mass >50 M⊙, which may suggest formation through an alternate channel such as a hierarchical merger. Studies have proposed that GW170729 may have a similar origin (Kimball et al. 2020; Khan et al. 2020; Yang et al. 2019). However, Fishbach et al. (2019) showed that when all of the BBHs are analyzed together, GW170729 is consistent with a single population of binaries formed from the standard stellar formation channel. Likewise, GW151205 will need to be analyzed jointly with the other events to determine whether one or more populations are present.
The least significant candidate in the targeted BBH analysis, 170818+09:34:45 UTC, was identified in the LIGO-Hanford and Virgo detectors by the search pipeline; the parameter estimates in Table 3 are derived using these observatories alone. However, the LIGO-Livingston detector was operational at the time of the event. Our search does not currently enforce that a candidate observed only in a subset of detectors is consistent with a lack of observation in the others. We find that if LIGO-Livingston is included in the parameter estimation analysis, the log likelihood ratio is significantly reduced. This suggests that the event is not astrophysical in origin. In contrast to Zackay et al. (2019a), we find that 151216+09:24:16 UTC has support at zero effective spin. The candidate bears striking resemblance to 170201+11:03:12 UTC and, to a lesser extent, 151217+03:47:49 UTC and 170629+04:13:55 UTC.

Neutron Star Binaries
Our analysis identified GW170817 as a highly significant merger; however, no further individually significant BNS or NSBH mergers were identified. As the population of BNS and NSBH sources is not yet well constrained, we cannot reliably employ the methodology used to optimize search sensitivity to an astrophysical BBH merger distribution. However, BNS mergers in particular are prime candidates for the observation of electromagnetic counterparts such as gamma-ray bursts and kilonovae. It may be possible, by correlating with auxiliary data sets, to determine whether weak candidates are astrophysical in origin. An example is the subthreshold search of Fermi-GBM and 1-OGC triggers (Nitz et al. 2019c), which defined, based on galactic neutron star observations (Ozel et al. 2012), a likely BNS merger region spanning masses from 1.03 to 1.36 M⊙ and effective spin |χ_eff| < 0.2. This region is highlighted in Figure 2 and the top candidates are shown in Table 4.
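Assuming the quoted mass range refers to the template chirp mass (the quantity listed for BNS candidates in Table 4), a membership test for this region is straightforward. The example system below uses assumed, GW170817-like component masses for illustration:

```python
def chirp_mass(m1, m2):
    """Chirp mass (m1*m2)^(3/5) / (m1+m2)^(1/5),
    in the same units as the inputs."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def in_bns_region(m1, m2, chi_eff):
    """Membership test for the likely-BNS region quoted in the text,
    under the assumption that the mass bounds apply to the chirp mass:
    1.03 < chirp mass < 1.36 (solar masses) and |chi_eff| < 0.2."""
    return 1.03 < chirp_mass(m1, m2) < 1.36 and abs(chi_eff) < 0.2

# Assumed, GW170817-like components (solar masses); lands inside the region.
print(round(chirp_mass(1.46, 1.27), 3), in_bns_region(1.46, 1.27, 0.0))
```

A heavy BBH-like system, or a BNS-mass system with large effective spin, fails the same test, which is how the region restricts the subthreshold candidate list.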

Data Release
We provide supplementary materials online with information on each of the ∼10^6 subthreshold candidates (Nitz et al. 2020). Reported information includes the candidate event time, the S/N in each observatory, and the results of the signal-consistency tests performed. A separate listing of candidates within the BBH region discussed in Section 3.5 is also provided, including estimates of the probability of astrophysical origin p_astro for the most significant of these candidates. To help distinguish between this large number of candidates, our ranking statistic and estimate of the false alarm rate are also provided for every event. Configuration files for the analyses performed and analysis metadata are also provided. For the 30 most significant BBH candidates, we also release the posterior samples from our Bayesian parameter inference.

Conclusions
The 2-OGC catalog of gravitational-wave candidates from compact-binary coalescences, spanning the full range of BNS, NSBH, and BBH mergers, is an analysis of the complete set of LIGO and Virgo public data from the observing runs in 2015-2017. A third observing run (O3) began in 2019 April (Abbott et al. 2016a). Alerts for several dozen merger candidates have been issued to date during this run. The first half of the run (O3a) ended on 2019 October 1, with a planned release of the corresponding data in Spring 2021. As these data are not yet released, the catalog here covers only the first two observing runs.
We use a matched-filtering, template-based approach to identify candidates and improve over the 1-OGC analysis (Nitz et al. 2019a) by incorporating corrections for time variations in PSD estimates and network sensitivity. Furthermore, we have demonstrated extending a PyCBC-based analysis to handle data from more than two detectors. The 2-OGC catalog contains the most comprehensive set of merger candidates to date, including 14 BBH mergers with p_astro > 50% along with the single BNS merger GW170817. We independently confirm many of the results of Abbott et al. (2019a) and Venumadhav et al. (2019b). We find no additional individually significant BNS or NSBH mergers; however, we provide our full set of subthreshold candidates for further analysis (Nitz et al. 2020).
Note. The chirp mass of the candidate's associated template waveform is given in the detector frame. All candidates here were found by the LIGO-Hanford and LIGO-Livingston observatories. The table lists the false alarm rate for each candidate in the context of the full search (FAR_FULL) or just the selected BNS region (FAR_BNS).
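The matched-filtering step at the core of the search can be illustrated with a toy time-domain example: sliding a normalized template across the data recovers the offset of an injected signal. The template shape, data length, and injection offset below are all illustrative assumptions, not real detector strain or the PyCBC implementation:

```python
import math

def matched_filter_snr(data, template):
    """Time-domain matched filter: slide the template over the data and
    return the template-normalized correlation at each offset."""
    norm = math.sqrt(sum(h * h for h in template))
    n = len(template)
    return [sum(data[i + j] * template[j] for j in range(n)) / norm
            for i in range(len(data) - n + 1)]

# Toy example: a short sinusoidal "chirp" template injected into
# otherwise quiet data at a known offset.
template = [math.sin(0.3 * t * t) for t in range(64)]
data = [0.0] * 256
for j, h in enumerate(template):
    data[100 + j] += h

snr = matched_filter_snr(data, template)
peak = max(range(len(snr)), key=snr.__getitem__)
print(peak)  # the correlation peaks at the injection offset
```

A production search additionally whitens the data with a PSD estimate, filters against a large template bank, and ranks coincident peaks across detectors, but the core operation is this correlation.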