Recent Advances and Challenges of Waveform‐Based Seismic Location Methods at Multiple Scales

Source locations provide fundamental information on earthquakes and lay the foundation for seismic monitoring at all scales. Seismic source location, as a classical inverse problem, has experienced significant methodological progress during the past century. Unlike the conventional traveltime-based location methods that mainly utilize kinematic information, a new category of waveform-based methods has emerged, adapted from migration and stacking techniques in exploration seismology and including partial waveform stacking, time reverse imaging, wavefront tomography, and full waveform inversion. Waveform-based methods have shown promising results in characterizing weak seismic events at multiple scales, especially the abundant microearthquakes induced by hydraulic fracturing in unconventional and geothermal reservoirs or foreshock and aftershock activity associated with tectonic earthquakes. This review presents a comprehensive summary of the current status of waveform-based location methods through elaboration of the methodological principles, categorization, and connections, as well as illustration of the applications to natural and induced/triggered seismicity, ranging from laboratory acoustic emission to field hydraulic fracturing-induced seismicity, regional tectonic, and volcanic earthquakes. Taking into account recent developments in instrumentation and the increasing availability of more powerful computational resources, we highlight recent accomplishments and prevailing challenges of different waveform-based location methods and what they promise to offer in the near future.


Introduction
Seismic source location is a classical inverse problem in seismology and geoengineering. A seismic source location specifies the origin time and spatial coordinates of seismic energy release generated by a seismic source (earthquake), and it also refers to the process of locating a seismic event (e.g., Lomax et al., 2009). This review concerns only the location of individual events under the point source assumption; the location and imaging of a finite source or fault are not covered. Observed seismicity either occurs naturally in seismogenic areas (e.g., Shearer, 2009) or is triggered/induced by anthropogenic/industrial operations, including mining, water reservoir impoundment, fluid injection/extraction operations, oil and gas field exploration, and geothermal exploitation (McGarr et al., 2002; Stump, 2002; Zang et al., 2014; Elsworth et al., 2016; Grigoli et al., 2017; Keranen & Weingarten, 2018; Foulger et al., 2018; Zoback & Kohli, 2019). In particular, several cases of felt earthquakes with Mw > 3 related to hydraulic fracturing or fluid injection have been reported worldwide, including in the United States, the United Kingdom, Western Canada, and Southwestern China (Atkinson et al., 2016; Bao & Eaton, 2016; Clarke et al., 2014; Holland, 2013; Lei et al., 2017; Meng et al., 2019), as well as a recently reported event of Mw 5.5 near an enhanced geothermal system site in South Korea (Kim et al., 2018). In addition to the aforementioned field or exploration scale scenarios, seismicity also occurs at broader scales, varying from laboratory to local and global scales (Eaton, 2018; Hardy, 2003; Shearer, 2009). In earthquake seismology (covering local to global scales), seismic waves and related attributes (e.g., first arrival times) of tectonic earthquakes or tectonic and volcanic tremors are used to estimate velocity-depth models for the Earth's crust and the deeper interior and to provide earthquake locations and focal mechanism estimates.
Seismic activity associated with millimeter-scale fractures and kilometer-scale faults shares highly similar physical processes (Goodfellow & Young, 2014; Kwiatek et al., 2011), although the dominant frequency content and energy release (magnitude) of laboratory acoustic emissions (AEs) differ from those of globally felt tectonic earthquakes. An illustration of differences in scales of seismic activity with different ranges of corner frequency, magnitude, source radius, and detectable distance is shown in Figure 1. The seismic source radius, magnitude, and the detectable distance or range decrease with increasing corner frequency. Laboratory studies are generally based on rock compression or fracturing tests and high-frequency AE monitoring, and they contribute to revealing the small-scale deformation and failure process of brittle rock (Lockner et al., 1991; Scholz, 1968). Seismic waves are used for delineating near-surface or subsurface (down to 10 km) structures, which are associated with exploration and development of metals, minerals, and hydrocarbons, while regional and global earthquakes are used in earthquake seismology to characterize fault zones and the deeper interior (down to 100 km or more; Gibowicz & Kijko, 1994; Stein & Wysession, 2003; Yilmaz, 2015; Ikelle & Amundsen, 2018). For applications at all scales, earthquake source location plays one of the most fundamental roles in the workflow. It provides fundamental information on the earthquake that generates the detected seismic waves and lays the foundation for subsequent seismological analysis and characterization, such as magnitude calculation, seismicity pattern delineation, source zone monitoring, and source mechanism and stress field estimation.

Figure 1. Scaling of corner frequency and source radius versus moment magnitude and detectable distance associated with different-scale seismic events. Several assumptions are adopted in this schematic diagram, for example, a peak energy approximation for corner frequency and a pure shear source model for source radius, and average attenuation is assumed for magnitude calculation. Detectable distance is roughly indicated as a critical distance beyond which an event of a given magnitude cannot be detected with a generic receiver. The illustration considers values in a general sense and for illustrative purposes only; different geological settings may result in different relationships between these parameters. Modified after Hardy (2003), Kwiatek et al. (2011), Zoback and Gorelick (2012), and Eaton et al. (2018).
The seismic source location problem has experienced significant methodological progress during the past century (e.g., Thurber & Rabinowitz, 2000; Lomax et al., 2009; see also Figure 2 for an overview). The earliest seismic source location method was based on manual triangulation (e.g., Milne, 1886; Pujol, 2004), making use of the arrival time differences between the compressional wave (P wave) and the shear wave (S wave) to calculate the distance between a source and a station. Once arrival time differences for three or more stations are available, the hypocenter can be determined by intersecting equidistant lines from the stations. This procedure is also known as the graphical approach. Alternatively, Geiger (1912) introduced an iteratively linearized traveltime inversion that requires phase picking, and this algorithm laid the foundation for subsequent numerical location methods. Since the 1980s, digital seismic recordings and computer technologies have enabled seismic source location to be performed with computational or numerical methods rather than manual approaches (e.g., Sambridge & Kennett, 1986; Thurber & Rabinowitz, 2000). Later, nonlinear methods that did not involve linearization emerged, and direct (regular or stochastic) searches over the target space, such as the genetic algorithm (Kennett & Sambridge, 1992) and Monte Carlo searches (Sambridge & Mosegaard, 2002), were utilized to determine the source location. All these methods use picked arrival times, and in principle, they involve searching for the minimum misfit between the theoretical and observed traveltimes by applying either an inversion or a grid search. Traveltime-based location methods require phase picking and only make use of the traveltime information. They are also known as picking- or ray-based methods (Z. Li & van der Baan, 2016; Pesicek et al., 2014).
Many modifications have been introduced to improve the performance of the traveltime-based methods, most of which lie in the construction of the misfit function and the inversion strategy, such as the joint hypocenter location method (e.g., Douglas, 1967; Pujol, 1992), relative location methods (e.g., Fitch, 1975; Grechka et al., 2015; Spence, 1980), plain grid search, master-station or station-pair double difference methods (e.g., Font et al., 2004; Zhang et al., 2010; Zhou, 1994), joint location and velocity inversion (e.g., Block et al., 1994; Jansky et al., 2010; Z. Zhang et al., 2017; Diekmann et al., 2019), the double difference relocation method (Waldhauser & Ellsworth, 2000; H. Zhang & Thurber, 2003; Waldhauser & Schaff, 2008; Kwiatek et al., 2013), and cluster-based relocation methods (G. Lin et al., 2007; Matoza et al., 2013; Trugman & Shearer, 2017).

Figure 2. Historical evolution of seismic source location methods, with the most significant conceptual cornerstones highlighted. Red colors indicate the direct involvement of waveform information in the location process. Only several prominent methods are explicitly mentioned; closer investigation is provided in the main text.
As an alternative to the conventional traveltime-based methods, modern waveform-based seismic source location methods require neither phase identification nor phase picking and can detect and locate events with low signal-to-noise ratio (SNR) by exploiting migration and stacking techniques commonly used in exploration seismology (e.g., Beskardes et al., 2018; Cesca & Grigoli, 2015; Gajewski et al., 2007; Pesicek et al., 2014; Zhebel & Eisner, 2015). The basic principle of waveform-based methods is to locate the source by focusing or reconstructing the source energy onto a discrete grid of points with a certain migration or imaging operator, which uses traveltime information. It is worth noting that there are several different terminologies for the waveform-based location methods, such as waveform migration or stacking, backprojection imaging, beamforming, and coherence scanning (Cesca & Grigoli, 2015; Gajewski et al., 2007; Maxwell, 2014). Here, for simplicity and to avoid ambiguity, the term "waveform-based" is used throughout the manuscript for the type of location methods using waveforms and/or waveform techniques, rather than exploiting only the timing of seismic phase arrivals as traveltime-based methods do. For waveform-based methods, no phase picking is required. Every possible source location is first considered through a stacking and imaging process; then, a detection and picking criterion (or imaging condition) is utilized to select the final location. The procedure can be summarized as "locate and then pick" (Eaton, 2018). In contrast, the general procedure of traveltime-based methods is phase picking first and then locating the events through inversion strategies, namely, "pick and then locate." The origin of waveform-based methods can be traced back to the 1980s: McMechan (1982) proposed a pioneering recipe for locating earthquake events through wavefield extrapolation.
Subsequently, seismic emission tomography was developed, utilizing maxima of the semblance (a measure of coherence) over space, time, and channels to detect and locate microseismic events (Kiselevitch et al., 1991). An advance on this method, so-called passive seismic emission tomography, has also been proposed to image microseismic emission energy with surface monitoring arrays (Duncan, 2005). This method has efficiently produced promising source location results. The basic idea behind these approaches is the same as diffraction stacking in exploration seismology, namely, stacking the waveforms from individual receivers along the one-way traveltime moveout for all possible source locations and origin times. In addition to the Kirchhoff diffraction stacking operator, the cross-correlation stacking operator is another well-established, interferometric-type migration operator for source location (Schuster et al., 2004). Cross-correlation stacking shares with the master-station and station-pair methods mentioned above the principle of exploiting differential traveltimes at pair-wise stations from common events. This method has been applied to microseismic monitoring in the oil and gas industry (Eisner et al., 2008; Grandi & Oates, 2009; L. Li et al., 2015; Trojanowski & Eisner, 2017), volcanic tremor data (Droznin et al., 2015), mining-induced seismicity (Dales, 2018; L. Li et al., 2018), and slow (tectonic tremor and low-frequency earthquakes) and regular earthquakes (Poiata et al., 2016; Ruigrok et al., 2017). In this review, we call it the interferometric-type method. Most of these methods use only direct P waves, as the information on the velocity models is usually limited and does not allow the use of other arrivals. Therefore, they are also known as partial waveform stacking (PWS; e.g., Pesicek et al., 2014).
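The semblance measure underlying (passive) seismic emission tomography can be sketched in a few lines. The following minimal function is our own illustrative construction (its name and windowing choices are not from the cited studies): it assumes the traces have already been moveout-corrected for one candidate source, and perfectly coherent traces yield a semblance of 1.

```python
import numpy as np

def semblance(aligned, half_window):
    """Semblance (coherence measure) of moveout-corrected traces.

    aligned     : (n_receivers, n_samples) traces already shifted to a
                  candidate source's traveltime moveout.
    half_window : half-width (in samples) of the sliding coherence window.
    Returns a (n_samples,) semblance trace with values in [0, 1]."""
    n_rec, n_samp = aligned.shape
    box = np.ones(2 * half_window + 1)
    # Energy of the stacked trace versus total energy of the input traces
    num = np.convolve(aligned.sum(axis=0) ** 2, box, mode="same")
    den = n_rec * np.convolve((aligned ** 2).sum(axis=0), box, mode="same")
    return num / np.maximum(den, 1e-12)
```

Scanning this quantity over candidate locations and selecting its maxima over space and time reproduces the detect-and-locate logic described above.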
Besides the aforementioned waveform stacking techniques, time reverse imaging (TRI) and full waveform inversion (FWI) are also well-known waveform-based methods for source location. TRI locates the seismic source by performing a numerical backpropagation of the recorded full waveforms through a fixed velocity model. This approach was initially proposed by McMechan (1982) and was further developed by, for example, Gajewski and Tessmer (2005), Larmat et al. (2008), Artman et al. (2010), and T. Zhu (2014). The basic principle of FWI is to find a model (e.g., velocities and source parameters) that can generate synthetic waveforms that best match the observed waveforms. Though FWI involves high nonlinearity and requires expensive computational effort, it is increasingly used not only for subsurface velocity estimation in exploration seismology (e.g., Virieux & Operto, 2009), but also for seismic source location (e.g., Kaderli et al., 2015; Shekar & Sethi, 2019). More recently, a largely data-driven technique known as wavefront tomography (WT) has been introduced, which combines kinematic information estimated from wavefront attributes with time-reverse modeling to jointly estimate the source location and the velocity structure (Diekmann et al., 2019).
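The match-the-waveforms principle of FWI-based location can be caricatured by a grid search with a trivial forward model. The sketch below is purely illustrative and not a published algorithm: it assumes a homogeneous medium, a known origin time and source wavelet (here a Ricker wavelet), and geometrical spreading only, and it selects the candidate source minimizing the L2 waveform misfit. Real FWI additionally updates the velocity model, typically via adjoint methods.

```python
import numpy as np

def ricker(t, f0):
    """Ricker wavelet with peak frequency f0 (Hz)."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def locate_by_waveform_misfit(obs, dt, receivers, grid, velocity, f0, t0):
    """Grid-search source location by least-squares waveform fitting.

    Synthetics are delayed, geometrically spread Ricker wavelets in a
    homogeneous medium; the origin time t0 is assumed known for simplicity."""
    n_rec, n_samp = obs.shape
    t = np.arange(n_samp) * dt
    best_idx, best_misfit = -1, np.inf
    for ig, src in enumerate(grid):
        dist = np.linalg.norm(receivers - src, axis=1)
        syn = np.array([ricker(t - t0 - d / velocity, f0) / max(d, 1e-6)
                        for d in dist])
        misfit = np.sum((obs - syn) ** 2)  # L2 waveform misfit
        if misfit < best_misfit:
            best_idx, best_misfit = ig, misfit
    return best_idx
```

The high nonlinearity mentioned above appears here as the rugged dependence of the misfit on the candidate location, which is why gradient-based FWI needs a good starting model while grid searches remain robust but expensive.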
So far, there is no universal name or categorization for these relatively modern location methods involving waveform techniques. There are alternative ways of categorizing seismic location methods besides the concise but rough division into traveltime- and waveform-based methods (see Figures 2 and 3 for more details). For instance, conventional traveltime inversion involves data domain operations only, and waveform stacking techniques generally determine a source location from an imaging function in the image domain, while FWI is an example of a hybrid domain method. Another possible classification strategy is based on the distinction between ray-based modeling (e.g., traveltime inversion and stacking methods involving theoretical traveltimes) and full waveform modeling (e.g., TRI and FWI). Without loss of generality, here we name these methods simply waveform-based methods, though some of them also adopt kinematic information, for example, traveltimes and wavefront attributes, and involve both data and image domain operations. Table 1 lists studies and applications of waveform-based location methods applied to seismic sources observed at different scales. A detailed description of the transition from traveltime-based to waveform-based methods, and of the increasing interest in waveform-based methods at multiple scales, is presented in the next section. So far, PWS is still the dominant approach for data at local to global scales. Although TRI and FWI have been under development for more than a decade, research concerning their applicability and potential in seismic source location is still ongoing.
In this work, we aim to provide insight into waveform-based location methods through a comprehensive review of their recent advances and challenges. The methodological theories for different categories of waveform-based methods are followed by representative applications ranging from laboratory AE to field hydraulic fracturing-induced seismicity, regional tectonic, and volcanic earthquakes. These examples illustrate the success of applications to various cases of seismic activity covering a wide range of scales. We also discuss current challenges and perspectives associated with waveform-based methods. By elaborating the methodological principles, presenting categorization and connections, as well as illustrating their applications at multiple scales, this review aims to build a framework for better understanding the idea of waveform-based seismic source location and conducting further research.

Figure 3. As in Figure 2, red colors indicate a stronger (more explicit) utilization of full-waveform information in the location procedure. Taking into account implementational aspects, a precise positioning of methods is rarely achievable. Possible deviations from the chosen positions are denoted by arrows. The color of these arrows indicates how means of data abstraction or data availability let traditionally separated techniques approach each other.

The Rise of Waveform-Based Location Methods
Starting from graphical approaches like the circle method, seismic source location matured significantly in the 1960s, spurred by the rise of modern-day computers. Figure 2 illustrates the historical evolution and major milestones of quantitative location methods. With computational infrastructure available, the first practical applications of Geiger's least squares traveltime inversion (Geiger, 1912) became possible, leading to a rapid development of quantitative traveltime-based methods. As a result, nonlinear extensions, capable of honoring, for example, seismic anisotropy, were subsequently introduced in the 1980s. At the same time, owing to the increasing availability of large data volumes, relational techniques, like the double-difference, the master-station, or the master-event method, were developed, which allowed for improved earthquake location estimates in well-instrumented or seismically active areas. After the 1980s, the installation of dense station networks and broadband instruments sparked the ongoing trend of utilizing full-waveform information for seismic location.
As a more or less natural extension of established traveltime inversion, FWI, aiming at utilizing amplitude and phase information in the location framework, was introduced. At roughly the same time, following the first experiments in the 1980s and early 1990s, the 2000s saw a rapid transfer of waveform-based imaging techniques from hydrocarbon exploration. Tailored to the location problem, methods like coherence-based WT or PWS (Kirchhoff migration) originally emerged in a controlled-source context and were built on the diffraction concept. Likewise, other emerging array-processing techniques such as TRI or matched-field processing have their origin in the adjacent fields of (hydro)acoustics. Both FWI and the aforementioned source imaging techniques have in common that their successful application and further development continuously rely on improvements in data acquisition systems. Although in this paper we focus on emerging waveform-based techniques, it should be noted that traveltime-based methods remain a very trusted and continuously developing approach for seismic source location, with improvements concentrating on the reliable and automated identification and picking of seismic phases.
Because of the sparse distribution of early seismological monitoring stations, single-channel or few-channel techniques, such as phase-picking-based methods, were favored. Traveltime-based location methods have been continuously improved and undoubtedly dominated location routines during the 20th century. The phase picking required by these techniques is time consuming, error prone, and sometimes subjective, especially for weak aftershocks or microseismic data, which typically have relatively low SNR (e.g., Chambers et al., 2010; Liao et al., 2012). Though advanced and automated procedures can ensure an efficient picking process (e.g., Akram & Eaton, 2016; Ross et al., 2018; Withers et al., 1998), robust phase identification and picking are still challenging for low SNR events, and missed arrival times and picking errors may directly cause location errors. Other nonnegligible limitations of most traveltime- or ray-based methods include their inability to utilize multichannel coherency (Poiata et al., 2016) or to consider secondary features (like scattering) or finite-frequency effects (Dahlen et al., 2000; H. Zhang & Thurber, 2003).
The limitations imposed by a low SNR on the picking process can be considered one of the main incentives for the development of alternative location methods. The advent of dense local networks for aftershock and seismic swarm observation, as well as the interest of industry in passive seismic methods for reservoir and fracture monitoring with even denser networks, naturally induced interest in exploiting the redundancy in the recorded seismic data. This led to the rise of multichannel approaches for seismic source location, which are well known in reflection seismology and formed the foundation for the "waveform-based methods" (beamforming had already been used in seismology for some time). Specifically, an increasing demand for noise-robust and efficient location approaches has arisen since the early 2000s, due to the rapid development of microseismic monitoring for hydraulic fracturing in unconventional oil and gas fields. Recent interest in microseismic monitoring has been driven by the global expansion in the development of unconventional oil and gas resources, especially shale and tight sands in North America. Microseismic monitoring is a passive seismic technique enabling the analysis of microearthquakes, which are used to map fracture reactivation/creation and fluid movements. Microseismic monitoring is used in mining, fluid injection and extraction, hydraulic fracturing, and other geoengineering operations (Grechka & Heigel, 2017; Maxwell, 2014; Shapiro, 2015). Passive seismic monitoring has been applied in the mining industry and geothermal reservoirs for more than 60 years (Gibowicz & Kijko, 1994; Obert & Duvall, 1945). Currently, microseismic monitoring is frequently applied in stimulations of unconventional resources (Maxwell, 2014; Rutledge & Phillips, 2003). Seismic source location is the most crucial step of microseismic processing, and it is needed for subsequent source mechanism inversion and geomechanical analysis.
A multitude of studies have been dedicated to the improvement of waveform-based microseismic location during the past decade (e.g., Anikiev et al., 2014; Cesca & Grigoli, 2015; Trojanowski & Eisner, 2017; L. Li et al., 2018; Diekmann et al., 2019; see also Table 1 for a more representative list of references). The main motivation of these studies is to improve locations of weak microseismic events, as the Gutenberg-Richter empirical law (Gutenberg & Richter, 1956) implies an exponential increase in the number of events if we can locate weaker events. Since the ability to detect and locate a larger number of events offers the possibility of a better characterization of the stimulated rock, the methodologies focus on locating events as weak as possible. This eventually leads to the use of waveform-based location methods, which allow detection and location of weaker events by stacking the waveforms.
With the increasing development and application of hydraulic fracturing in unconventional oil and gas fields, AE monitoring of laboratory hydraulic fracturing tests has drawn attention. AE location methods are directly adapted from earthquake location, and traveltime inversion acts as the dominant approach (e.g., Ge, 2003a, 2003b). In recent years, a category of signal- or waveform-based analysis techniques has been developed for AE location and source mechanism inversion (Grosse & Ohtsu, 2008; Ishida et al., 2017), benefiting from improvements in computation, communication, and storage technologies. Both PWS and TRI have shown promising results in locating high-frequency AE events (e.g., López-Comino et al., 2017; Saenger et al., 2011). In addition, the densification of seismic networks and the increased volumes of recorded seismic data at regional and global scales are another important reason for the increased interest in waveform-based methods, which benefit the tectonic earthquake location problem. Earthquakes are predominantly related to failures, faults, or tectonic boundaries across all scales of the crust, and the most reliable structural information on these features has been gained through the location of associated seismicity. Waveform-based methods like seismic migration are still the primary processes for the construction of structural images in controlled source studies. Thus, waveform-based location methods promise to eventually provide a more direct, data-driven characterization of source and structure.
Nevertheless, the distinction between traveltime-based and waveform-based methods is not as strict as it appears. Stacking techniques require traveltimes (obtained from the velocity model), and TRI works with velocities as well. WT provides velocities (inversion) and source locations (folding them with the waveform provides a source image). FWI stands out since it is the only technique combining inversion and imaging in one step. The clearest distinction seems to be between single-channel and multichannel styles of approach, though phase picking can also be enhanced by multichannel techniques (through modern techniques like machine learning). It should be acknowledged that continued advances in traveltime-based methods are emerging that we do not discuss here. Upon closer inspection, it is no surprise that PWS underpins most of the successful applications of waveform-based location in the literature. It can be viewed as a direct descendant of Kirchhoff migration, which, owing to its flexibility, efficiency, and robustness, can still be considered the most trusted controlled-source imaging method to date (Etgen et al., 2009).

Methodologies
As many factors fuel the current rise of waveform-based seismic source location techniques, a systematic and consistent classification of different location methods is desirable yet challenging. Figure 3 presents an attempt to perform a categorization in physical or information-theoretical terms, that is, based on how explicitly the methods make use of the full information content encoded in a seismic time series at all considered stations. The four poles of the displayed diagram are interrelated either via means of abstraction or instrumentation. The source imaging techniques all have their origin in fields where spatial wavefield sampling is reliably dense (controlled-source seismology, optics, or acoustics), whereas more indirect but flexible inversion algorithms like traveltime and full-waveform inversion are less restricted in that regard. TRI and WT represent direct counterparts and can only be distinguished by the degree of data abstraction. Just like a source location alone, traveltimes and wavefronts can be viewed as idealized abstractions that systematically neglect additional information about the source. The utilization of full-waveform information, in turn, despite the valuable additional details contained, remains practically challenging and still often leads to unreliable results. It is a combination of strengths, namely, the insightfulness of waveform information and the stability of data abstractions like traveltimes, that makes hybrid approaches like PWS arguably the most successful candidates so far. As indicated by the arrows in Figure 3, improved instrumentation and advances in wavefield reconstruction, as well as intermediate forms of abstraction, such as characteristic functions (CFs) or waveform fitting, lead the different mindsets to approach each other.
The common principle of waveform-based location methods is focusing or reconstructing the seismic energy radiated by the source into a point with a certain migration or imaging operator. From a theoretical perspective, these methods are based on the Kirchhoff-Helmholtz formula and Huygens' principle, which are fundamental representation theorems in elastodynamics (Kirchhoff, 1883; Pao & Varatharajulu, 1976; Born & Wolf, 1980; Aki & Richards, 2002; B. B. Baker & Copson, 2003). These theorems provide a theoretical framework for seismic imaging and state that the elastic wavefield at any point within a subsurface region can be completely reconstructed through a weighted integration of the wavefield and its normal derivative measured on a closed surrounding surface. This is also the theoretical foundation of Kirchhoff migration (e.g., Schneider, 1978), a routinely used processing technique in exploration seismology. As opposed to the forward propagation of seismic waves from the source to the receiver, source location through waveform-based methods can be regarded as backpropagation of seismic energy from the receivers to the source zone (Figure 4). Thus, the waveform-based methods are based on the reciprocity theorem in a general sense. Currently, most waveform-based methods provide absolute locations, in contrast to relative locations, which are inverted based on known master events. Relative locations basically measure the source locations relative to master events and are often used in high-precision relocation procedures. Though a few pioneering studies (see the discussion in later parts of section 3.1) on combining waveform techniques with the relative location idea have shown promising results, these new and hybrid methods are still in an early stage of development. Here we focus on waveform-based, absolute location methods only.
In this section, we present a systematic introduction to the theory and development of waveform-based location methods, including PWS, TRI, WT, and FWI. These methods all have in common that they either directly or indirectly utilize waveform information in the location process (cf. Figure 3). Specific aspects of implementation, ranging from data abstraction through CFs to velocity model building or the generation of traveltime tables, will be discussed in the following section. While PWS has already proven to be successful in challenging environments, other methods, such as WT, TRI, or FWI, have either just appeared on the radar or still face significant practical challenges that need to be addressed. Accordingly, the following section emphasizes implementational aspects where recent advances have been made and discusses the potential and challenges of each method in more detail.

Partial Waveform Stacking
PWS methods are currently the most widely used among waveform-based location methods, especially for the detection and location of seismic events at local and regional scales. These methods usually make use of primary phases only. This category mainly includes diffraction stacking and cross-correlation stacking, which correspond to Kirchhoff-type and interferometric-type stacking, respectively.
Kirchhoff migration is a weighted backprojection of wavefields onto an isochron surface (a surface of equal traveltime) that passes through the scattering object (Esmersoy & Miller, 1989). When the scattering object shrinks to a point or an edge, diffraction theory applies (e.g., Klem-Musatov, 1994; B. Schwarz, 2019). The seismic source can be approximated by a point in most scenarios ranging from laboratory to global scales of seismic monitoring, because the source-receiver distance usually exceeds five prevailing wavelengths (e.g., see the discussion on the dominant frequency of microseismic sources in Eisner et al., 2013). Therefore, diffraction stacking can be naturally applied to source imaging, and numerous studies have provided developments and applications of Kirchhoff-type methods for source location, which are based on stacking the waveforms from individual receivers along a theoretical traveltime moveout curve (Figure 5a; e.g., Kao & Shan, 2004; T. Baker et al., 2005; Gajewski et al., 2007; Zhebel & Eisner, 2015). In earthquake seismology, Kao and Shan (2004) first proposed the source-scanning algorithm to identify seismic event location and origin time with a brightness function, which is constructed by stacking the recorded seismograms along theoretical traveltime moveouts. Gajewski et al. (2007) proposed the use of diffraction stacking for source location and demonstrated its automation and feasibility with numerical examples in passive seismic scenarios. They indicated that the location uncertainty was highly dependent on the [...].

Figure 5. Sketch of the principle of partial waveform stacking methods. (a) Kirchhoff-type method, which is based on one-way traveltimes from the source to the receivers. (b) Interferometric-type method, which is based on differential traveltimes at pair-wise stations from common events.
As indicated by their migration operators, Kirchhoff-type methods search over both source location and origin time, whereas interferometric-type methods stack the cross-correlograms of waveforms from pair-wise receivers, in which the origin time is decoupled and need not be inverted. The generalized equation of partial waveform stacking and those of the commonly used characteristic functions (CFs) are summarized in equations (1) and (2).
Another well-established, interferometric-type migration operator for source location is cross-correlation stacking (Schuster et al., 2004), which originated from seismic interferometric imaging and is based on differential traveltimes at pair-wise stations from common events (Figure 5b; e.g., Eisner et al., 2008; L. Li et al., 2015). Instead of stacking waveforms along one-way traveltime curves as in diffraction stacking, cross-correlograms are stacked along differential traveltime curves, taking into account the correlation among receiver channels. In fact, the idea of utilizing differential traveltimes at pair-wise receivers from common events emerged as the master-station method in earthquake seismology in the 1990s (e.g., H. Zhou, 1994). Font et al. (2004) extended the master-station method to the maximum intersection method by incorporating differential traveltimes from all unique receiver pairs. Analogous to the double-difference method (Waldhauser & Ellsworth, 2000), Zhang et al. (2010) proposed the station-pair double-difference method and used it to successfully locate nonvolcanic tremors. Recently, several studies have implemented this idea in waveform-based methods with cross-correlation techniques from seismic interferometry (e.g., Dales et al., 2017; Droznin et al., 2015; Grandi & Oates, 2009; Poiata et al., 2016; Ruigrok et al., 2017). By including higher-order cross correlations, the single cross-correlation stacking has been extended to a double-correlation method and a more generalized high-order correlation method, which showed superior lateral imaging resolution for locating tremors with sparse station networks. A multichannel coherency migration method has also been proposed for locating earthquakes using continuous seismic data, based on stacking Pearson correlation coefficients of waveforms from different receiver pairs or groups.
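To make the interferometric-type operator concrete, the following Python sketch (a toy 1-D geometry with invented numbers, not any cited implementation) stacks pair-wise cross-correlograms at the differential-traveltime lags predicted for each trial source position; because only traveltime differences enter, the unknown origin time drops out:

```python
import numpy as np
from itertools import combinations

def xcorr_stack(traces, tt, dt):
    """Interferometric-type stacking: for each trial source (column of tt,
    theoretical one-way traveltimes in seconds), sum the pair-wise
    cross-correlograms evaluated at the differential-traveltime lag."""
    n_rec, n_t = traces.shape
    image = np.zeros(tt.shape[1])
    for i, j in combinations(range(n_rec), 2):
        cc = np.correlate(traces[i], traces[j], mode="full")
        for s in range(image.size):
            # the peak of cc sits at lag = (arrival_i - arrival_j) / dt
            lag = int(round((tt[i, s] - tt[j, s]) / dt))
            idx = (n_t - 1) + lag
            if 0 <= idx < cc.size:
                image[s] += cc[idx]
    return image

# toy example: 3 receivers on a line, trial sources on a 1-D grid
dt, n_t, v = 0.001, 1000, 2.0
rec_x = np.array([0.0, 1.0, 2.0])
src_grid = np.linspace(0.2, 1.8, 9)
tt = np.abs(rec_x[:, None] - src_grid[None, :]) / v
traces = np.zeros((3, n_t))
for i in range(3):                      # spike arrivals, origin time 0.1 s
    traces[i, int(round((0.1 + tt[i, 4]) / dt))] = 1.0
image = xcorr_stack(traces, tt, dt)     # maximum at trial source index 4
```

In practice the traces would first be converted to a CF and the correlograms band-limited; the spike example above only verifies the lag bookkeeping.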
Synthetic and field microseismic data sets demonstrated that multichannel coherency migration can produce improved source energy focusing. Besides the imaging operator, PWS methods also differ in the preprocessing of the input waveforms and in the event detection and location criteria. Various CFs are adopted to improve the SNR and compensate for the side effects of source radiation patterns; for example, the input data are converted to

1. envelope (Gharti et al., 2010; Liao et al., 2012; Zeng et al., 2014);
2. semblance (Furumoto et al., 1990; W. Zhang & Zhang, 2013; Staněk et al., 2015; C. Zhang et al., 2019);
3. short-term average to long-term average ratio (STA/LTA; Drew et al., 2013; Grigoli et al., 2014; L. Li et al., 2018);
4. kurtosis (Langet et al., 2014; Poiata et al., 2016);
5. multichannel coherency.

Different CFs exhibit different imaging resolution and noise suppression performance. Cesca and Grigoli (2015) investigated the location performance of different CFs with both synthetic and field mining-induced seismicity and concluded that kurtosis traces are sensitive to noise, envelope traces produce more stable results but lower imaging resolution, and STA/LTA traces offer the best overall balance of location resolution and noise resistance. Trojanowski and Eisner (2017) tested the performance of several CFs with diffraction stacking and cross-correlation stacking operators, showed that semblance and cross-correlation stacking best enhance the quality of the stacking profiles, and demonstrated that accounting for the source mechanism by polarization correction most efficiently improves the capability of suppressing noise and retrieving weak microseismic events. Beskardes et al. (2018) compared the performance of envelope, STA/LTA, and kurtosis with raw waveforms for local earthquake detection and location.
Their comparative study of an aftershock sequence showed that stacking kurtosis waveforms produced the most robust detection of weak events, while stacking raw waveforms gave the best spatial resolution and magnitude estimates. Therefore, a two-phase backprojection imaging process is suggested (Beskardes et al., 2018): backprojection of kurtosis waveforms for event detection and location within large data volumes, followed by backprojection of the raw seismograms of the previously detected events to improve the location accuracy and obtain more reliable magnitudes. A sketch of the principle of PWS methods is shown in Figure 6.
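As an illustration of such preprocessing, here is a short Python sketch of three single-channel CFs discussed above (envelope, STA/LTA, and sliding-window kurtosis); the window lengths and smoothing choices are illustrative, not those of any particular cited study:

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

def cf_envelope(u):
    """Envelope: instantaneous amplitude via the Hilbert transform."""
    return np.abs(hilbert(u))

def cf_sta_lta(u, ns, nl):
    """Ratio of short-term to long-term average of the trace energy."""
    energy = u ** 2
    sta = np.convolve(energy, np.ones(ns) / ns, mode="same")
    lta = np.convolve(energy, np.ones(nl) / nl, mode="same")
    return sta / (lta + 1e-12)          # small constant avoids division by 0

def cf_kurtosis(u, nt):
    """Sliding-window kurtosis (fourth normalized central moment)."""
    out = np.zeros_like(u, dtype=float)
    for k in range(nt, len(u)):
        out[k] = kurtosis(u[k - nt:k], fisher=False)
    return out

# a noisy trace with one impulsive arrival at sample 500
rng = np.random.default_rng(0)
u = rng.normal(0.0, 0.1, 1000)
u[500] += 5.0
```

All three CFs peak near the impulsive arrival, but with different sharpness and noise sensitivity, which is the trade-off discussed by Cesca and Grigoli (2015) and Beskardes et al. (2018).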
Generally, PWS can be written in a simplified form as creating an image function for all possible origin times t_0:

S(x, t_0) = \int \sum_{i=1}^{N} M\big[\mathrm{CF}\big(u_i(t)\big)\big] \, \delta\big(t - t_0 - t_{i,x}\big) \, \mathrm{d}t, \qquad (1)

\mathrm{CF}_{\mathrm{env}}(t) = \sqrt{u(t)^2 + H[u(t)]^2}, \quad
\mathrm{CF}_{\mathrm{sem}}(t) = \frac{\sum_{\tau=t}^{t+nt}\big[\sum_{i=1}^{N} u_i(\tau)\big]^2}{N \sum_{\tau=t}^{t+nt} \sum_{i=1}^{N} u_i(\tau)^2}, \quad
\mathrm{CF}_{\mathrm{STA/LTA}}(t) = \frac{\frac{1}{ns}\sum_{\tau=t-ns}^{t} u(\tau)^2}{\frac{1}{nl}\sum_{\tau=t-nl}^{t} u(\tau)^2}, \quad
\mathrm{CF}_{\mathrm{kur}}(t) = \frac{\frac{1}{nt}\sum_{\tau=t-nt}^{t}\big(u(\tau) - \bar{u}\big)^4}{\sigma^4}, \quad
\mathrm{CF}_{\mathrm{mcc}}(t) = \frac{2}{N(N-1)} \sum_{i=1}^{N} \sum_{j>i} \frac{\sum_{\tau=t}^{t+nt}\big(u_i(\tau) - \bar{u}_i\big)\big(u_j(\tau) - \bar{u}_j\big)}{nt \, \sigma_i \sigma_j}. \qquad (2)

Figure 6. The simplified sketch of the migration and stacking process for partial waveform stacking. (a) Input waveform generation; (b) subsurface grid discretization; (c) source location picking after migration and stacking of the input waveforms. The raw data are processed to generate input waveforms, which are then stacked to form the imaging value of each potential grid point; lastly, the source location is determined by picking the grid point(s) with the maximum imaging value. The illustration presents the general form of the theory and stacking process, neglecting some details (e.g., the origin time issue) and the probable differences and variations among methods. Hot colors of the imaging profiles indicate increased energy due to constructive interference.
where x is the source coordinates; t_0 is the origin time; t is the time sample in the waveform data u; N is the number of receivers; CF(t) is the characteristic function; M is the migration or imaging operator, for example, M_K[·] and M_I[·] are the imaging operators for Kirchhoff-type and interferometric-type stacking methods; δ is the Dirac delta function; and t_{i,x} is the theoretical traveltime from receiver i to source x. Please note that equation (1) is a generalized form of the PWS method; different combinations of CFs and migration operators lead to different specific formulae. The above stacking-based location problem can be regarded as a global optimization problem, in which the source coordinates and origin time corresponding to the maximum imaging value max{S(x, t_0)} are selected as the inverted source parameters. There are five representative CFs in equation (2), that is, envelope, semblance, STA/LTA, kurtosis, and multichannel coherency, where u is the input waveform (not necessarily the raw waveform), H indicates the Hilbert transform, nt is the length of the selected time window for semblance and multichannel coherency, ns and nl are the lengths of the short and long time windows for STA/LTA, ū is the mean value of u over nt samples, i and j are station indices, and σ is the standard deviation of the corresponding input waveforms. As the formulae denote, the envelope is computed as the amplitude after the Hilbert transform, semblance represents the ratio of signal energy to total energy, STA/LTA is the short-term average to long-term average ratio, kurtosis is defined as the fourth normalized central moment (a higher-order statistic), and the two-dimensional (2-D) multichannel coherency is based on the Pearson correlation coefficients of pair-wise groups of records.
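As a concrete illustration of the Kirchhoff-type branch of equation (1), the Python sketch below stacks CF amplitudes along theoretical moveout curves over a grid of trial locations and origin times; the 1-D geometry and all numbers are hypothetical:

```python
import numpy as np

def diffraction_stack(cf, tt, dt, n_t0):
    """Kirchhoff-type stacking: S(x, t0) = sum_i CF_i(t0 + t_{i,x}).
    cf: (n_rec, n_samples) characteristic functions,
    tt: (n_rec, n_src) theoretical traveltimes in seconds,
    n_t0: number of trial origin-time samples."""
    n_rec, n_samp = cf.shape
    S = np.zeros((tt.shape[1], n_t0))
    shifts = np.round(tt / dt).astype(int)
    for s in range(S.shape[0]):
        for i in range(n_rec):
            sh = shifts[i, s]
            m = min(n_t0, n_samp - sh)
            if m > 0:                       # shift trace back by t_{i,x}
                S[s, :m] += cf[i, sh:sh + m]
    return S

# toy example: 3 receivers on a line, spike arrivals from a source at x = 1.0
dt, n_t, v = 0.001, 1000, 2.0
rec_x = np.array([0.0, 1.0, 2.0])
src_grid = np.linspace(0.2, 1.8, 9)
tt = np.abs(rec_x[:, None] - src_grid[None, :]) / v
traces = np.zeros((3, n_t))
for i in range(3):                          # true origin time: sample 100
    traces[i, 100 + int(round(tt[i, 4] / dt))] = 1.0
S = diffraction_stack(np.abs(traces), tt, dt, 200)
s_best, t0_best = np.unravel_index(np.argmax(S), S.shape)
```

Picking max{S(x, t_0)} then gives the inverted location and origin time; in a field implementation the spike traces would be replaced by one of the CFs of equation (2).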
Currently, utilizing a combination of direct P and S phases for seismic source location has drawn attention, due to the potential of introducing more constraints (e.g., Gharti et al., 2010; Grigoli et al., 2013; Pesicek et al., 2014). However, major challenges remain in building and calibrating an S-wave velocity model suitable for waveform stacking. Furthermore, the interference from multiple phases might outweigh their additional constraints when the assumed phases are submerged in noisy data or inaccurate velocity models are used (e.g., L. Li et al., 2018). In theory, besides the primary phases, scattered or coda waves (including reflected and converted waves) can also be incorporated into the stacking process to enhance the coherence of the source energy. However, the traveltimes of these secondary phases depend even more strongly on the velocity model than those of the primary phases, and using such phases may introduce additional bias.

Time Reverse Imaging
TRI is based on the time reversal (TR) theory and technique, which was first established in the medical field and in nondestructive evaluation (e.g., Fink, 1999). TR is robustly stable in focusing seismic source energy in both space and time. It is currently evolving into a standard approach for seismic source location and has been demonstrated to be a reliable location method in various fields, such as ocean acoustics and seismology (e.g., Artman et al., 2010; Douma et al., 2015; Gajewski & Tessmer, 2005; Kuperman et al., 1998). In TRI, the recorded full waveforms are extrapolated backward in time to focus the source energy; the theoretical foundation is the reciprocity of the elastic wave equation. In principle, this approach shares the same essence as the reverse time migration (RTM) technique in exploration seismology, which is based on the time-reversal invariance of wavefields and enables optimal focusing of the source energy if a good velocity model is used (e.g., Baysal et al., 1983; Wapenaar et al., 2010; H.-W. Zhou et al., 2018). The inputs of RTM include both the forward and the reversed wavefields, and the goal is to image structures, while TRI only backpropagates the recorded waveforms to focus the source energy. The general workflow of TRI consists of three steps (Figure 7): the recorded seismograms at all receivers are first reversed in time; then numerical modeling (e.g., finite difference) is conducted with the time-reversed seismograms as source wavelets; finally, similar to RTM, wavefield backpropagation is followed by an imaging condition or a focusing criterion (equations (3)–(5)), which allows the determination of the source origin time and location.
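The three-step workflow can be illustrated with a deliberately minimal 1-D acoustic finite-difference experiment (grid, wavelet, and receiver positions are arbitrary toy choices, not a field-ready scheme): forward modeling records a point source at two stations, the records are reversed in time and reinjected at the receiver positions, and a product-type imaging condition picks the position where the two backpropagated fields overlap longest, that is, the true source position:

```python
import numpy as np

def fd1d(nx, nt, c2, sources):
    """Minimal 1-D acoustic finite-difference propagator (fixed edges).
    sources: dict {grid index: source time series injected there}."""
    u_prev, u_curr = np.zeros(nx), np.zeros(nx)
    history = np.zeros((nt, nx))
    for t in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]
        u_next = 2.0 * u_curr - u_prev + c2 * lap
        for pos, w in sources.items():
            if t < len(w):
                u_next[pos] += w[t]
        history[t] = u_next
        u_prev, u_curr = u_curr, u_next
    return history

# toy setup: zero-mean wavelet at grid point 100, two receivers
nx, nt, c2 = 201, 400, 0.25                  # Courant number 0.5: stable
src_pos, recs = 100, (20, 180)
t_axis = np.arange(nt)
wavelet = -(t_axis - 30.0) / 100.0 * np.exp(-0.5 * ((t_axis - 30.0) / 10.0) ** 2)

# step 1: forward modeling; record the wavefield at the two stations
fwd = fd1d(nx, nt, c2, {src_pos: wavelet})

# step 2: reverse each record in time and reinject it at its receiver,
# running the backpropagation a little longer than the recording
back = [fd1d(nx, nt + 120, c2, {r: fwd[:, r][::-1].copy()}) for r in recs]

# step 3: imaging condition -- multiply the two backpropagated wavefields
# and sum over time; the image maximum marks the recovered source position
image = np.abs((back[0] * back[1]).sum(axis=0))
x_recovered = int(np.argmax(image))
```

The product of the two wavefields is the simplest multiplicative imaging condition; in 2-D/3-D field settings, elastic propagators and the imaging conditions discussed below replace this toy choice.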
This category of location methods has been referred to as time reverse modeling (Gajewski & Tessmer, 2005; Steiner et al., 2008; T. Zhu, 2014), TR/reverse imaging (Artman et al., 2010; Kawakatsu & Montagner, 2008; Larmat et al., 2008; Sun et al., 2015), TR extrapolation (Z. Li & van der Baan, 2016), RTM (Nakata & Beroza, 2016), and reverse time imaging (Zou et al., 2014). Here, we use the term "time reverse imaging" for simplicity. During the past decade, many studies have shown the feasibility of TRI in locating seismic sources at laboratory and regional scales (e.g., Larmat et al.; see Table 1 for more references). The application of TR for seismic source location was first proposed by McMechan (1982), who used it to locate earthquakes. Gajewski and Tessmer (2005) further demonstrated its feasibility in locating low-SNR events with 2-D and three-dimensional (3-D) synthetic acoustic examples and revealed its great potential for very weak events recorded by large passive seismic monitoring networks. Steiner et al. (2008) applied TRI to synthetic and real low-frequency microtremors originating from a small underground area in an oil reservoir. They showed that the approach was feasible for microtremor location and could also provide a possible technique for reservoir location, though the characteristics of the source radiation significantly affect location accuracy. T. Zhu (2014) improved the quality of TRI by compensating for attenuation and dispersion effects in attenuating media. Z. Li and van der Baan (2016) proposed a representation-theorem-based reverse time extrapolation scheme for microseismic location, which can combine the three-component particle velocities (displacements) with the pressure wavefield and eliminate ghost events in downhole monitoring. Werner and Saenger (2018) investigated the significance of station distributions, complex velocity models, and SNRs with respect to location accuracy with 3-D numerical examples.
They obtained several prerequisites for applying TRI reliably at the field scale: the interstation distance should be smaller than the source depth, and the aperture of the array should be at least twice the source depth; an increased number of stations can alleviate the influence of noise; and additional scattering of wavefields resulting from a complex velocity model can improve location accuracy.
Theoretically, the imaging conditions proposed for RTM can be adapted to TRI for seismic source location. Among the various imaging conditions for RTM, the cross-correlation imaging condition is a commonly used implementation and enables reliable amplitude reconstruction and good imaging resolution (e.g., Chattopadhyay & McMechan, 2008). Artman et al. (2010) introduced physically meaningful elastic time-reverse imaging conditions for passive seismic data. Sun et al. (2015) extended traditional TRI to a hybrid multiplicative TR imaging algorithm, which is based on multiplication between wavefields from receiver groups (equation (5)) and applies a causal integration over time to obtain the source evolution process of multiple asynchronous sources. Both synthetic and field data examples revealed the feasibility and superiority of the method in imaging the spatial and temporal distribution of microseismic events (T. Zhu et al., 2019). Nakata and Beroza (2016) proposed the geometric-mean imaging condition, which multiplies all receiver wavefields at each space and time sample (equation (4)) and then takes a summation over the time axis. Compared with the conventional arithmetic-mean imaging condition (equation (3)), this imaging condition can reduce the computational cost and increase the spatial resolution of the source location image. These imaging conditions for TRI in source location are shown in equations (3)–(5):

I_{\mathrm{AM}}(x, t) = \sum_{i=1}^{N} R_i(x, t), \qquad (3)

I_{\mathrm{GM}}(x, t) = \prod_{i=1}^{N} R_i(x, t), \qquad (4)

I_{\mathrm{H}}(x, t) = \prod_{j=1}^{g} \sum_{i=1}^{n_j} R_i^{(j)}(x, t), \qquad (5)
where x is the source coordinates and t is the time sample, N is the number of receivers, R_i is the backpropagated wavefield from receiver i, g is the total number of groups, and n_j is the number of receivers in each group j, that is, \sum_{j=1}^{g} n_j = N. Similar to equation (1), the source coordinates and time corresponding to the maximum imaging value max{I(x, t)} are determined as the inverted source parameters. Equation (3) denotes summation over all the receiver wavefields, while equation (4) indicates multiplication of these wavefields. On the one hand, the summation results in nonzero values across the wave propagation paths, whereas the multiplication leads to nonzero values only at the focused sources (Sun et al., 2015; Sun et al., 2016); the imaging resolution of multiplication is therefore expected to be higher than that of summation. On the other hand, the summation operation in equation (3) is implicit, since all the receiver wavefields can be backpropagated at once, whereas the multiplication has to be conducted explicitly, which increases the computational cost. To balance the efficiency and resolution of TRI, the hybrid multiplicative TRI was proposed by Sun et al. (2015), as shown in equation (5). The summation- and multiplication-based imaging conditions can also be termed the arithmetic-mean and geometric-mean imaging conditions, respectively (Nakata & Beroza, 2016). If the entire set of receiver wavefields is treated as one single group, the hybrid multiplicative imaging condition reduces to the autocorrelation imaging condition of Artman et al. (2010). The performance of these imaging conditions in different situations (e.g., complex velocity models, irregular array geometries, complex source mechanisms, and earthquake clusters) still needs further investigation.
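The qualitative difference between equations (3)–(5) can be demonstrated with purely synthetic "backpropagated wavefields": Gaussian pulses on a 1-D grid that all cross at a hypothetical source position x = 50 at time t = 60 (all numbers below are arbitrary):

```python
import numpy as np

nx, nt, n_rec = 101, 121, 4
x = np.arange(nx)
R = np.zeros((n_rec, nt, nx))
starts = [0, 100, 20, 80]                   # pulse positions at t = 0
for i, s in enumerate(starts):
    speed = (50 - s) / 60.0                 # all pulses reach x = 50 at t = 60
    for t in range(nt):
        R[i, t] = np.exp(-0.5 * ((x - (s + speed * t)) / 2.0) ** 2)

# eq. (3): arithmetic mean -- swaths along every propagation path remain
arith = np.abs(R.sum(axis=0)).max(axis=0)
# eq. (4): geometric mean -- nonzero essentially only at the focus
geom = np.abs(R.prod(axis=0)).max(axis=0)
# eq. (5): hybrid -- sum within two groups, multiply across groups
hybrid = np.abs(R[:2].sum(axis=0) * R[2:].sum(axis=0)).max(axis=0)
```

The arithmetic-mean image is of order one along every pulse trajectory, whereas the geometric-mean and hybrid images collapse onto the crossing point, mirroring the resolution argument made above.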
This methodology is extremely sensitive to the correctness of the velocity model, because focusing of the energy is required not only for the direct waves but also for all multiply scattered and refracted waves. This makes the methodology suitable only for very simple media or, more realistically, for extremely well-known velocity models. This poses significant challenges for microseismicity, where the wavelengths of the propagating waves range from a few meters (downhole) to tens of meters (surface) and velocity models must therefore be constructed very accurately; it is also often problematic for natural seismicity at the regional scale, where only very general knowledge of the underground structure is available. As a result, this methodology remains more popular and applicable in theoretical studies.

Wavefront Tomography
Seismic source location methods can be divided into three broad categories based on the role of the velocity model (Grechka & Heigel, 2017): (i) Geiger-type inversion with a priori fixed velocities, (ii) tomographic techniques that iteratively update the velocity structure in the location process, and (iii) relative location methods that position events relative to each other and to the selected velocity distribution. Due to the lack of sufficient constraints and the related significant computational demands, the challenge of locating and characterizing an unknown source in a largely unknown medium is commonly divided into isolated subproblems. While this separate treatment of source and medium properties has led to significant success and can still be considered common practice, there also exist techniques that aim at a joint estimation of the source location and the velocity distribution, thereby accounting for the coupled nature of the full inverse problem (cf. Grechka & Heigel, 2017). A recent attempt at confronting the practical challenges of this underdetermined joint inversion is based on an analysis of local wavefield coherence (B. Schwarz et al., 2016; Diekmann et al., 2019). The method builds on the notion of wavefronts, which can be reliably tracked when a sufficient number of stations in a spatial vicinity are considered. There are multiple analytical functions that relate the geometrical attributes of wavefronts to traveltimes (e.g., Höcht et al., 1999; Jäger et al., 2001; B. Schwarz & Gajewski, 2017). The most natural extension of the planar wavefront assumption commonly employed in beamforming is the second-order traveltime expansion around a reference station,

t(x) \approx t'_0 + p^{\mathrm{T}} (x - x_0) + \frac{1}{2} (x - x_0)^{\mathrm{T}} H (x - x_0), \qquad (6)

where p is the horizontal slowness, x − x_0 is the lateral distance vector to the reference location, and t'_0 denotes the reference onset time. In addition to the first-order term, the 2 × 2 matrix H encodes the wavefront curvatures of the locally coherent wavefield.
For every regularized central data point (x_0, t'_0) within a predefined local aperture, the maximization of a coherence measure like the semblance norm (cf. equation (2)) leads to a maximally coherent stack and local estimates of the coefficient vector p and the coefficient matrix H (Jäger et al., 2001; Neidell & Taner, 1971). Equation (6) acts as a multidimensional fitting operator, along which a coherence measure is estimated. The wavefront attributes contained in p and H represent local approximations of the wavefield and can be traced back to the source. As illustrated in Figure 8, for sufficiently dense arrays, this approximation can be used not only to enhance waveforms through coherent summation but also to estimate new (imaginary) station responses, as they would have been recorded at an intermediate location. For a fixed central location, when the array aperture becomes small in comparison with its distance to the source, the process reduces to conventional beamforming and results in estimates of the directionality of the incoming seismic radiation. In the general case of larger apertures, it additionally includes the second-order moveout term, which contains information on the emerging wavefront's curvature. Crucially, in contrast to Kirchhoff migration, event- and phase-consistent traveltimes need not be computed using a predefined velocity model but rather are automatically extracted from the data. The fact that coherence analysis can be fully driven by collective wavefield patterns in turn makes it very suitable for velocity inversion.
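A minimal numerical sketch of the geometry behind equation (6) (reduced to one lateral dimension, with hypothetical velocity and depth, and noise-free "picked" onsets instead of a semblance scan) shows how slope and curvature constrain a point source in a homogeneous medium:

```python
import numpy as np

v, depth = 2.0, 3.0                         # km/s and km, toy values
x = np.linspace(-0.5, 0.5, 21)              # local aperture around x0 = 0
t = np.sqrt(depth ** 2 + x ** 2) / v        # onset times at the array

# least-squares fit of t(x) ~ t0' + p*x + 0.5*h*x**2 (eq. (6), 1-D form)
A = np.column_stack([np.ones_like(x), x, 0.5 * x ** 2])
t0_ref, p, h = np.linalg.lstsq(A, t, rcond=None)[0]

# for a point source in a homogeneous medium the curvature relates to the
# source distance via h = 1 / (v * R), so with a known near-surface velocity
R_est = 1.0 / (v * h)                       # recovered source depth
```

In the actual workflow, p and H are estimated by maximizing a coherence measure such as semblance directly on the waveforms; the fit above merely illustrates how, given the near-surface velocity, slope (here ≈ 0, i.e., the source lies below the reference station) and curvature translate into a location estimate.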
Given the near-surface velocity, the estimated slope and curvature values can be used to characterize the actual geometry of the wavefront as it arrives at the surface array. This is the underlying idea of WT (Duveneck, 2004), which bears close resemblance to eikonal tomography (F.-C. Lin et al., 2009) but is applicable to body waves and unknown source locations. Just like conventional traveltime tomography, it is based on a least squares misfit of picked and modeled data, where, however, in addition to traveltime information, the inversion also takes slopes and curvatures into account. As these represent collective properties of the wavefield, they directly relate and stabilize adjacent measurements. With the near-surface constant-velocity backprojection as a starting guess, trial source locations are iteratively updated together with the velocity structure. In coherence analysis, attribute fields describe the whole recorded event locally, and WT typically delivers multiple location estimates for a single event (illustrated in Figure 8). This redundancy can in turn be used to additionally constrain the inversion by linking phase-consistent contributions.

Figure 8. Sketch of the wavefront-based workflow for the joint inversion of velocity structure and source locations (Diekmann et al., 2019). Coherence analysis is commonly based on the maximization of the semblance norm and results in estimates of the reference onset time (indicated by a red dot) and the tilt and curvature of a seismic event as it is recorded at the surface. In addition to an improved signal-to-noise ratio, new (imaginary) station responses can be generated. The estimated wavefront attributes then allow for multiple estimates of the source location and a joint inversion of the long-wavelength velocity structure. Location estimates and the waveform-based uncertainty are indicated by black dots and a black circle, respectively.
Due to the intrinsic possibility of data enhancement and regularization, and because a robust estimation of the velocity structure becomes feasible, the wavefront picture naturally complements all other waveform-based seismic location methods, including FWI. Improved locations can usually be obtained by combining the estimated velocity model and reverse-time imaging of the enhanced data set (Diekmann et al., 2019). The method appears to be efficient and accurate as long as the data coverage is dense and the underlying velocity structure captured within a single local aperture is only moderately complex. Similar to PWS, kinematic summation trajectories (here wavefronts) are combined with the utilization of waveform coherence, which makes this approach a hybrid method. Also in analogy to PWS, the concept of diffraction is naturally honored (B. Schwarz, 2019), and the same workflow has been successfully applied to controlled-source back-scattering. Owing to different instrumentation and scale-related differences in wavefield complexity, the array-based detection of directional information has dominated in earthquake studies (vespagrams, slowness-backazimuth analysis; e.g., Rost & Thomas, 2002), whereas in exploration seismology, wavefront tilt is often neglected and curvatures have traditionally been at the center of attention (controlled-source velocity spectra; Taner & Koehler, 1969). In the framework of WT, where slopes and curvatures are combined, both approaches are unified. With the continuing rise of dense nodal deployments and the emergence of quasi-continuous strain measurements with ground-coupled fiber-optic cables, the utilization of collective properties of the wavefield is possible on very short length scales and can be expected to continue and intensify in the foreseeable future.

Full Waveform Inversion
FWI is a tool for geophysical imaging based on waveform matching, that is, on minimizing the residuals between synthetic and observed waveforms (e.g., Virieux & Operto, 2009; see also Figure 9). Aside from many impressive applications in earthquake seismology, FWI has led to the first highly resolved regional and global images of the Earth's mantle (e.g., A. Fichtner et al., 2009; Bozdağ et al., 2016). The different methods of FWI differ in the way of measuring the difference between observed and synthetic waveforms (the misfit), in the inversion strategy for minimizing this misfit, and in the choice of inversion parameters, for example, source location, moment tensor, and velocities. The misfit function is also known as the objective function, although there is little objective about the particular choice of misfit function. Enhanced computational resources and methods that enable efficient computation of the gradient of the misfit function, for example, the adjoint-state method (Fichtner et al., 2006; Plessix, 2006), are the two crucial factors that make FWI applicable to field data sets.

Figure 9. Sketch of full waveform inversion for source parameters, for example, source location, origin time, source mechanism, and velocity model. In contrast to other waveform-based location methods, FWI relies on extensive forward modeling, starting with a reasonable guess of the long-wavelength velocity structure and source characteristics to arrive at full synthetic waveforms that are repeatedly compared with the observations.

Several existing misfit functions of FWI for source location (and velocity inversion) are

J(m) = \frac{1}{2} \sum_{i=1}^{N} \left\| u_i^{\mathrm{syn}}(m) - u_i^{\mathrm{obs}} \right\|^2, \qquad (7)

J(m, v) = \frac{1}{2} \sum_{i=1}^{N} \left\| u_i^{\mathrm{syn}}(m, v) - u_i^{\mathrm{obs}} \right\|^2 + c \left\| s(x, y, z, t) \right\|_1, \qquad (8)

J(m, v) = \frac{1}{2} \sum_{i=1}^{N} \left\| u_i^{\mathrm{syn}} * u_{\mathrm{ref}}^{\mathrm{obs}} - u_i^{\mathrm{obs}} * u_{\mathrm{ref}}^{\mathrm{syn}} \right\|^2, \qquad (9)
where m is the source model vector, including the spatial and temporal source components, that is, s(x, y, z, t) or f(x, y, z)w(t); u_i^syn and u_i^obs are the synthetic and observed data at receiver i; and N is the number of receivers. c is a positive scalar coefficient that controls the weight of the sparse model s(x, y, z, t), v is the velocity model, * denotes the convolution operator, and u_ref is the data of a reference trace. Equations (7) and (8) are commonly used misfit functions (e.g., Kaderli et al., 2018; Michel & Tsvankin, 2014; Shekar & Sethi, 2019), and equation (9) was recently proposed by Wang and Alkhalifah (2018), in which the unknown source origin time is decoupled by convolving reference traces with the observed and modeled traces. When FWI is applied to the source location problem, the source location (possibly together with the velocity model) can be inverted and imaged in one step using the above misfit functions.
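The behavior of the least-squares misfit of equation (7) versus the convolution-based, source-independent misfit in the spirit of equation (9) can be sketched in a few lines of Python (the trace-by-trace convolution and the single reference receiver are illustrative simplifications):

```python
import numpy as np

def l2_misfit(u_syn, u_obs):
    """Eq. (7): least-squares waveform misfit summed over receivers."""
    return 0.5 * np.sum((u_syn - u_obs) ** 2)

def conv_misfit(u_syn, u_obs, ref):
    """Eq. (9)-style misfit: convolve each synthetic trace with the observed
    reference trace and vice versa, which cancels an unknown source wavelet
    (and hence the origin time) common to all traces."""
    res = 0.0
    for i in range(u_syn.shape[0]):
        d = np.convolve(u_syn[i], u_obs[ref]) - np.convolve(u_obs[i], u_syn[ref])
        res += 0.5 * np.sum(d ** 2)
    return res

# toy data: identical spike Green's functions, two different source wavelets
rng = np.random.default_rng(1)
g = np.zeros((3, 50))
for i, k in enumerate([5, 12, 20]):
    g[i, k] = 1.0                            # impulse arrival per receiver
w_obs, w_syn = rng.normal(size=8), rng.normal(size=8)
u_obs = np.array([np.convolve(gi, w_obs) for gi in g])
u_syn = np.array([np.convolve(gi, w_syn) for gi in g])
```

Because the synthetic and observed data here differ only by the source wavelet, the convolution-based residual vanishes (up to round-off) while the plain L2 misfit does not; this is exactly the decoupling of the unknown source signature exploited by Wang and Alkhalifah (2018).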
In earthquake seismology, FWI has been used to solve for the event location and source mechanism for more than two decades (e.g., Y. Wu & McMechan, 1996; Ramos-Martinez & McMechan, 2001; Sokos & Zahradnik, 2008). Y. Wu and McMechan (1996) used FWI to determine the spatial coordinates and origin time of a synthetic double-couple source in a 2-D scenario, assuming the velocity model and the source time function are well known. The principal advantage of applying FWI to microseismicity is the short duration of the microseismic event, so that there is no need to invert for the source time function, which can be approximated by a delta function. Recent studies are attempting to include the velocity model in the inversion. In the context of passive seismic monitoring, Michel and Tsvankin (2014, 2017) utilized FWI to invert for the source parameters (including source location, origin time, and moment tensor) and the velocity model (see Figure 9) in a sequential fashion, in which the adjoint-state method is adopted to compute the gradient. Synthetic examples based on 2-D layered VTI (transversely isotropic with a vertical symmetry axis) media have illustrated the feasibility of the sequential inversion for source parameters and the VTI model. Kaderli et al. (2015) tested an acoustic FWI method to invert for the spatiotemporal source characteristics with a homogeneous velocity model, in which the spatial and temporal components of the source are updated alternately. Sun et al. (2016) combined the FWI technique with time-reversal imaging to obtain a least squares time-reversal imaging method, which can provide an accurate source location and source function after a sufficient number of linear iterations. These inverted source parameters can then be further incorporated into the FWI for velocity model updates, as shown in a simple 2-D acoustic model. Huang et al. (2017a, 2017b) developed an acoustic-wave-equation-based FWI method to locate microseismic events.
They proposed to evaluate the Fréchet derivatives with respect to the location parameters and apply the truncated Gauss-Newton method to accelerate the inversion process. Synthetic and field microseismic data from the above studies have been used to show the practical feasibility of the proposed methods. Wang and Alkhalifah (2018) developed a source-function-independent FWI method to invert for the microseismic source location, source function, and velocity model simultaneously. The effect of the unknown source origin time is mitigated by convolving reference traces with the observed and modeled traces (equation (9)); hence, this methodology is very sensitive to the selection of the reference trace. Shekar and Sethi (2019) formulated an FWI scheme for determining the spatiotemporal source function of microseismic events, in which an additional regularization term in the misfit function (equation (8)) exploits the sparsity inherent in the microseismic location problem. Synthetic examples based on the 2-D SEG/EAGE overthrust model have illustrated the possibility of such FWI applications (Shekar & Sethi, 2019). Most recent studies of FWI for source location and velocity are limited to 2-D synthetic acoustic scenarios, which are still far from field application; it is therefore hard to judge whether they will be practical, because microseismic signals are dominated by S-wave energy (in the vicinity of the source), which is not captured by acoustic models. Furthermore, the radiation complexity is simplified in most of these studies by assuming isotropic (explosive) sources. While FWI for a fixed velocity medium is already quite successful in source location, further studies on how to improve the performance (both accuracy and efficiency) of FWI including the velocity model are still needed.

Comparison and Relation of Different Methods
In a general sense, waveform-based methods locate the seismic source through backpropagation of source energy: TRI uses the reciprocity principle explicitly through backpropagation of seismic wavefields, while the other methods make use of it by means of generating waveforms and/or traveltime tables. The backprojection process of waveform-based methods can also be regarded as a generalized type of beamforming with an image function. Although the idea of waveform stacking is quite similar, the PWS methods distinguish themselves from beamforming in at least two major aspects (e.g., Rost & Thomas, 2002; Liao et al., 2012; Poiata et al., 2016; L. Li et al., 2018): 1. First, the nature of the problem solved by beamforming is the determination of the direction from which the source energy arrives, rather than the spatial and temporal distribution of seismic sources. 2. Second, PWS methods stack only waveforms of selected phases (primary phases in most cases) within specific time intervals, while beamforming sums up all observed phases through a summation over time. More specifically, the general beamforming result is equal to the sum of the autocorrelations and approximately delayed cross correlations of the waveforms, and the cross-correlation term dominates the beamforming value.
An advantage of waveform-based methods is their ability to suppress noise; however, varying degrees of redundancy are exploited by the different methods. The classical diffraction stacking method using single-channel-based CFs (e.g., envelope and STA/LTA) does not consider the correlation among receiver channels, while multichannel-based CFs (e.g., semblance) and the cross-correlation stacking operator exploit mutual channel coherency. For TRI and FWI, the entire waveforms, including multiples and scattered waves in addition to the direct waves, are utilized; thus, in theory, more accurate source locations and higher-resolution velocity models can be achieved because the scattered waves add illumination points for the source. However, these methods require a much more accurate velocity model, which is often not available; therefore, various imaging functions and conditions are adopted to overcome poor focusing due to inaccurate models. A principal difference between the aforementioned methods also lies in the distinction between ray-based and wave-based modeling techniques: the former corresponds to the traveltime calculation of PWS, and the latter to the full waveform modeling of TRI and FWI. In general, PWS is more robust because it only requires the velocity model to be correct on average between the source and receivers along the whole ray, while TRI and FWI require the model to be accurate not only on average but also to reproduce all reflections and refractions, a rare achievement. Furthermore, joint velocity inversion and location may eventually become generally applicable, although it is extremely hard to identify the well-constrained parts of the velocity model and to avoid overfitting the data and projecting model errors into the source locations.
As discussed above, the critical issue for all methods is the velocity model, since most of these methods need accurate velocity models to locate seismic sources correctly. The PWS methods need the velocity model only to compute the theoretical traveltime table; this means that a smooth velocity model that accurately predicts the average velocity along the path between source and receiver is sufficient, which is a great advantage of PWS methods. TRI and FWI also require a velocity model to simulate full seismic waveforms that must realistically fit the observed data. Given the high-frequency nature of microseismic events (see Figure 1), this may require a velocity model capable of reproducing synthetic seismograms at hundreds to several thousands of Hz in downhole monitoring and tens of Hz in surface monitoring. FWI of active seismic data has been partially successful when low-frequency waveforms can be inverted initially to fit the long wavelengths of the velocity model; unfortunately, no current FWI or TRI method has demonstrated this capability with microseismic data sets. Joint inversion of velocity and location is an alternative solution. The wavefront-based method provides an alternative for simultaneous location and velocity inversion through WT of the long-wavelength parts of the model. A better understanding and estimation of the role of the velocity model in waveform-based location remain a challenge and a direction for future study, which will be discussed in detail in section 5.1. Another common shortcoming of TRI and FWI methods is their relatively high computational cost, due to the storage and transmission of large volumes of waveforms during the waveform backprojection process. Moreover, current TRI usually does not allow the flexible implementation of CFs as in the PWS methods described above, and TRI generally requires the data to be recorded with a denser and more regular array, which may not be feasible for all configurations.
Sparse monitoring arrays or any limited spatial coverage will lead to insufficient spatial sampling of the wavefield and thus degrade waveform-based location results. For TRI, FWI, and WT, the high nonlinearity of the inversion process, which involves multiple source parameters (e.g., source coordinates and source mechanisms) as well as the velocity model, makes it challenging to obtain stable and reliable location results.
Advantages and disadvantages of the waveform-based location methods discussed above are summarized in Table 2. It is also worth noting that different methods may be appropriate for different types of data input, depending on several factors, for example, the application scale (frequencies), the geometry of the monitoring array, and the monitoring targets and objectives. Table 3 lists several open-source codes for waveform-based location methods.
Waveform-based methods seldom rely purely on waveform information; instead, they also exploit traveltime information in an explicit or implicit way. For instance, WT combines kinematic information from wavefront attributes with dynamic information through waveform coherence to obtain a source location image. In this sense, the method can also be regarded as a hybrid method. Another example of a hybrid method is an explicit combination of automatic waveform stacking and manual picking corrections (Bardainne, 2011). This method involves a two-step procedure: First, a microseismic source location is imaged using a fully automated migration technique; then the unreliable picks or detections are manually verified and corrected, and a refined location is obtained by combining the waveform-based stacking function with the conventional traveltime-based function. The method delivers more reliable results but also increases the processing time; thus, it offers a trade-off between location efficiency and reliability.

10.1029/2019RG000667
Reviews of Geophysics
In addition to the above-mentioned methods, there are several variants of empirical waveform-based seismic detection and/or location methods based on correlation of waveforms, such as the template matching method and matched field processing. The main difference between these and the above-discussed techniques is the use of empirical waveforms instead of a complex velocity model. Though these methods also exploit waveform information in the seismic detection and/or location process, they differ somewhat from the waveform-based methods discussed here and cannot be strictly classified into any of the above four subcategories.
The template matching method, also known as the matched filtering algorithm, is an efficient approach for detecting weak events from continuous waveforms using cross correlation against a reference, the template event (Eisner et al., 2008; Gibbons & Ringdal, 2006; Peng & Zhao, 2009; Shelly et al., 2007). The method detects weak events with waveforms similar to the template event, assuming that they occur at a similar location and have a similar mechanism as the template event. This fully automatic approach has been extensively applied to the detection of regular and low-frequency earthquakes (e.g., Beaucé et al., 2019; Frank & Shapiro, 2014; Janská & Eisner, 2012; Shelly et al., 2007), foreshock and aftershock activity (e.g., Peng & Zhao, 2009; Sugan et al., 2014; M. Zhang & Wen, 2015), and microseismic events induced by hydraulic fracturing recorded by small networks (e.g., Caffagni et al., 2016; Eisner et al., 2008). The method can approach PWS by selecting the template waveform only around the P or S wave arrival and using corresponding velocity models for relative stacking between the template (master) and detected events (e.g., Cieplicki et al., 2012), or it can approach FWI by selecting a much larger template interval (e.g., the whole P to S wave interval). This flexibility means the methodology does not strictly fit into the PWS category discussed above.
For large data sets collected by dense or regional networks comprising long time series (e.g., 1-10 years), high-performance computing exploiting multiple central processing units (CPUs; Vuan et al., 2018) and/or graphics processing units (GPUs; Beaucé et al., 2018) is required to achieve feasible computing times. Alternatively, more efficient frequency domain operation (Senobari et al., 2019) may be applied. Template matching is commonly used to detect small events or low-SNR signals (SNR < 1); it requires prior knowledge of template events in the target area and, in its basic form, does not locate the detected events relative to the templates. If template matching is also used for relative location, the stacking process is applied to the cross-correlation functions and is designed to produce a high stack value, serving as a detection function, over all possible relative locations:

CC(x, y, z, τ) = (1/N) Σᵢ NCCᵢ(Δtᵢ(x, y, z) + τ),

where CC is the stack of cross correlations and NCCᵢ is the normalized cross correlation between the template (master) waveform and the detected event waveform at the ith receiver. The coordinates (x, y, z) represent the spatial position over which we search for the detected event, Δtᵢ is the relative traveltime between the master and detected events, and τ stands for the delay between origin times. The equation is normalized by the number of receivers (N) so that it is independent of the number of receivers and has a maximum value of 1. The advantage of this approach is its low sensitivity to the velocity model in the overburden; the drawback is the need for strong master events.
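Under stated assumptions (integer-sample lags, a toy two-receiver geometry, hypothetical function names), the stack of normalized cross correlations described above can be sketched as:

```python
import math

def ncc(template, data, lag):
    """Normalized cross correlation of a template with data at a given lag."""
    seg = data[lag:lag + len(template)]
    if len(seg) < len(template):
        return 0.0
    et = math.sqrt(sum(t * t for t in template))
    es = math.sqrt(sum(s * s for s in seg))
    if et == 0 or es == 0:
        return 0.0
    return sum(t * s for t, s in zip(template, seg)) / (et * es)

def cc_stack(templates, data, moveouts, tau):
    """CC for one trial relative location: average NCC over N receivers,
    shifted by the relative moveout dt_i of the trial position plus the
    origin-time delay tau. Maximum value is 1 by construction."""
    n = len(templates)
    return sum(ncc(templates[i], data[i], moveouts[i] + tau)
               for i in range(n)) / n
```

When the trial moveouts and delay match the true relative position of a detected event whose waveforms repeat the templates, the stack reaches 1; mismatched moveouts destroy the alignment and the stack drops.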
There is continuous ongoing improvement of the method, and several studies have extended it to a detection-and-location approach by incorporating waveform stacking techniques (e.g., Withers et al., 1999; Cieplicki et al.). A 2019 study presented a detection and location approach that combines a PWS approach (Frank & Shapiro, 2014), used to estimate the target seismic activity that serves as the template events, with template matching for small event extraction, followed by a relative relocation step. Such composite approaches provide an efficient tool for generating detailed seismic catalogs.
Matched field processing is an array-processing technique that originated in ocean acoustics for tracking submarines or marine mammals (Baggeroer et al., 1993). It is an extension of plane wave beamforming that aims at locating coherent noise sources through correlation of phase information from the measured wavefield and a synthetic wavefield. The approach considers each grid node as a potential source position and yields the phase match as the corresponding imaging value, which indicates the estimate of the source location. Recently, matched field processing has been successfully tested for seismic source location in environmental and exploration seismology, such as hydrothermal vents of geysers (Cros et al., 2011; Legaz et al., 2009), hydrocarbon reservoirs (Chmiel et al., 2019; Corciulo et al., 2012), and CO2 degassing zones (Umlauft & Korn, 2019). A similar approach based on the time-reverse principle and using the strain Green's tensor as a matched filter was applied to locate nonimpulsive seismic events along the mid-oceanic ridge system west of Mexico (Solano et al., 2017). Compared to the above waveform-based methods, which mainly work in the time domain, matched field processing is a frequency domain method that takes into account the complex phase, instead of the amplitude, of the waveforms. An important advantage of the method is its capability to detect long-duration tremor-like events in the frequency domain, which might not produce a constructive stack in the time domain.
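The frequency domain phase matching at the heart of matched field processing can be illustrated with a single-frequency Bartlett processor, a common formulation in the matched-field literature. This minimal sketch and its names are assumptions for illustration, not code from the cited studies.

```python
import cmath

def bartlett_power(data_ft, green_ft):
    """Bartlett (linear) matched-field processor at a single frequency.

    data_ft : complex Fourier coefficients of the recorded wavefield,
              one per receiver
    green_ft: synthetic (replica) Fourier coefficients computed for a
              trial source position
    Returns a normalized match in [0, 1]; because of the normalization,
    the relative phases across receivers, not the amplitudes, dominate.
    """
    num = sum(d * g.conjugate() for d, g in zip(data_ft, green_ft))
    den = (sum(abs(d) ** 2 for d in data_ft) *
           sum(abs(g) ** 2 for g in green_ft))
    return abs(num) ** 2 / den if den > 0 else 0.0
```

Scanning trial positions and frequencies, the grid node whose replica phases line up with the data yields a match near 1, marking the source location estimate.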
Although very efficient for weak event location, both template matching and matched field processing have a practical limitation: the seismic sources (i.e., the master event and the target event) must be located close to each other relative to the distance between the master event and the receiver array. In addition, the source mechanisms must be similar for the matched filter or field to be effective. Thus, their applicability is limited by the available set of template (master) events and is therefore restricted to postprocessing (Gibbons & Ringdal, 2006; J. Wang et al., 2015). The big advantage of these methods is their low sensitivity to the velocity model, which makes them very attractive compared to all of the above-mentioned methods.

Applications
Recent studies and applications at different scales have already revealed the performance of waveform-based methods and can provide guidance on their applicability and limits. Next, we elaborate on several representative examples of waveform-based location methods, ranging from laboratory scale to field and regional scales. So far, only PWS methods are applied as processing routines in field applications; other waveform-based methods, e.g., TRI and FWI, are still in their infancy for field application, perhaps due to their high computational expense, as well as the other issues listed in Table 2. Therefore, most examples discussed here are based on PWS, which has been successfully applied at multiple scales.

Laboratory and Small Scale 4.1.1. AE Monitoring Experiments
AE monitoring under controlled experimental conditions has been widely used to investigate rock fracturing (e.g., Lockner et al., 1991; Sellers et al., 2003; Johnson et al., 2008; Benson et al., 2008; Goebel et al., 2013; Bolton et al., 2019). Traditional AE technology is limited by data acquisition systems and computer storage capabilities, and it is mainly based on statistical parameters (e.g., event count, location, type of failure, and energy) that characterize AE activity (e.g., Ishida et al., 2017; Mair et al., 2007). These conventional approaches are referred to as parametric AE analysis (Grosse & Ohtsu, 2008; Hardy, 2003). A conspicuous shortcoming of parametric analysis is that AE parameters provide only limited insight into the fracturing mechanism. To improve the reliability of AE technology and further quantify the processing results, researchers proposed another category of signal-based techniques that makes use of the recorded AE waveforms (Grosse & Ohtsu, 2008; Ishida et al., 2017; Stanchits et al., 2015). Waveform-based AE location is a relatively new application of signal-based techniques. At least four main challenges make reliable source characterization of AE events difficult: the low SNR and high-frequency content of AE data, the high occurrence rate of AE events, and unreliable sensor calibration (Gu, 2016). Another specific challenge for waveform-based location and characterization of AE is the extremely short interval between different phases and reflections from the sample boundaries. For this reason, PWS methods are thought to be preferable for AE studies in the foreseeable future. Saenger et al. (2011) applied TRI to locate and characterize AE events with a numerical model of a concrete block.
The forward and backward propagation of the wavefields was simulated with the rotated staggered-grid finite-difference method, and the maximum value at each grid point throughout the entire modeling time was stored to image the event location. Through a comparison between source TRI with limited knowledge of boundary values and full TRI with a complete set of boundary conditions, they demonstrated the feasibility of TRI in locating source areas and characteristics of AEs with a limited number of sensors and effective elastic properties. Gu (2016) applied the source-scanning algorithm, a PWS method, along with two traveltime-based location methods to laboratory AE data. The laboratory loading experiment was conducted on a cylindrical Berea sandstone sample (diameter 36.43 mm and height 76.7 mm), and eight piezoelectric sensors were attached to the rock sample and connected to the data acquisition system to collect AE data. The AE data segment was divided into small windows, and the source-scanning algorithm was then applied to each window under the assumption that each window contained only one event. Detection and location were achieved simultaneously by setting a threshold on the imaging value. This study showed that the source-scanning algorithm could provide an initial guess of the AE locations and roughly delineate the fracture shape. In another monitoring experiment, the in situ network consisted of 11 AE sensors and four accelerometers. In this experiment, the acquisition system with piezoelectric sensors was improved to record continuous signals at a 1-MHz sampling rate. Homogeneous velocity models obtained from active ultrasonic transmission tests were used in an envelope-based diffraction stacking method to locate AE events (Figure 10). Continuous waveforms along with P and S wave traveltimes were used in the waveform stacking process. Stacking of CFs (Figure 10b) yielded the preliminary location (Figure 10c) and a quantified detector function (Figure 10d).
López-Comino et al. (2017) presented a novel study on the implementation of sufficiently robust and automatic algorithms to detect and locate massive microseismic or AE activity. More than 4,000 AE events were located using STA/LTA-based waveform stacking and coherence analysis techniques. The absolute locations of the largest 258 events (with a detector level threshold of 1,500, calculated with the normalized amplitude envelope as the CF) delineated the fracture geometry. A low detector threshold can enhance catalog completeness, whereas a higher detector threshold may better exclude false detections. Moreover, the location results were further refined by a master event stacking approach (F. Grigoli et al., 2016), which produced spatially clustered locations that outlined the main fracture plane. Jackknife resampling of the STA/LTA configuration was utilized to estimate the location uncertainty, and the median uncertainty was reduced from 1.43 to 0.64 m by incorporating a master event of high accuracy and maximal coherence value.
In summary, small-scale hydraulic fracturing experiments generate high-rate, high-frequency AE events. PWS techniques allow automatic detection and location of AE events using continuous waveforms.

Field and Local Scale 4.2.1. Mining-Induced Seismicity
Seismicity associated with underground mining has been observed since early last century (e.g., Cook, 1976; S. J. Gibowicz, 1990) and is tightly related to the safety and productivity of mining. For instance, rockbursts are very often the major cause of fatalities in mines. Seismic monitoring can provide a powerful tool for the detection and evaluation of seismic events occurring around underground mines (e.g., Gibowicz & Kijko, 1994; T. Li et al., 2007; Ma et al., 2019). Two examples of waveform-based location methods applied to mining-induced seismic events are shown in this section. A temporary microseismic monitoring network (HAMNET) was deployed from June 2006 to July 2007 to record coal mining-induced seismicity in the Ruhr region, Germany (Bischoff et al., 2010; Grigoli et al., 2013; Wehling-Benatelli et al., 2013). The HAMNET network consisted of 15 three-component surface seismic stations deployed in a region of about 3 km × 2 km (Figure 11a). The monitored longwall panel has a spatial extent of about 1 km × 0.3 km and is located at a depth of about 1.1 km. The exploitation of the coal seam was performed from August 2006 to April 2007. More than 7,000 events were detected and located using P wave onsets and a homogeneous velocity model.
Based on tests with 200 selected events from the above-described data set, L. Li et al. (2018) provided a systematic analysis of correlation-based location methods, unified the formulae by describing cross-correlation stacking in terms of beamforming, and compared the performance of two PWS methods (i.e., diffraction stacking and cross-correlation stacking) with different stochastic optimization algorithms. The waveforms for all the events had already been truncated into separate files, and the raw waveforms were band-pass filtered (5-30 Hz). The STA/LTA traces, calculated from the energy of the filtered waveforms (equation (2)), were used as the input waveforms. Only P waves were used to locate these events at first, and the result was then compared with that obtained using multiple phases. Figure 11b shows the seismograms and STA/LTA traces of a sample field event, which has an ML magnitude of −0.8. The lateral imaging resolution of the two methods is of reasonable quality due to the proper lateral coverage of the surface monitoring array (Figures 11c and 11d). The unknown origin time is canceled out in the correlation process of cross-correlation stacking, and the basic imaging pattern is the intersection of hyperbolae (hyperboloids) in the 2-D (3-D) scenario. The decoupling of the origin time from the location process alleviates the origin time-depth trade-off, which is an inherent characteristic of diffraction stacking (e.g., Anikiev et al., 2014), but significantly lowers the vertical imaging resolution.
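The cancellation of the unknown origin time in cross-correlation stacking can be illustrated with a toy sketch: only the differential traveltimes between station pairs enter the image function, so no origin-time search is needed. The function names, integer-sample lags, and spike-like traces below are assumptions for illustration.

```python
def xcorr_at_lag(a, b, lag):
    """Cross-correlation sum of traces a and b at an integer lag."""
    return sum(a[i] * b[i + lag] for i in range(len(a))
               if 0 <= i + lag < len(b))

def cc_stack_image(traces, tt_samples):
    """Cross-correlation stacking value for one trial source position.

    tt_samples: predicted traveltimes (in samples) from the trial node to
    each receiver. Only the differential times t_j - t_i enter, so the
    unknown origin time cancels out of the imaging condition.
    """
    n = len(traces)
    stack = 0.0
    for i in range(n):
        for j in range(i + 1, n):            # N(N-1)/2 station pairs
            stack += xcorr_at_lag(traces[i], traces[j],
                                  tt_samples[j] - tt_samples[i])
    return stack
```

In the usage below, the arrivals are offset by an arbitrary (unknown) origin-time shift of 10 samples, yet the trial node with the correct differential moveouts still maximizes the stack, while a wrong node yields no coherent contribution.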
Another example is waveform-based microseismic event location in the Schlema-Alberoda mining area (Germany). Some of the microseismic events that occurred between 2005 and 2012 in this uranium mine area were located through a Kirchhoff-type approach (a PWS method) to investigate the long-term role of mining-induced seismicity (Hassani et al., 2018). A homogeneous and a 3-D velocity model were compared for seismic migration. The resulting images were comparable and exhibited only minor differences in the subsurface reflectors. However, the waveform-based location method was more sensitive to lateral and vertical variations in the velocity model than the seismic migration algorithm. Waveform-based methods can provide high-precision results using only P waves when an accurate velocity model is available, but a homogeneous or average velocity model may also lead to unreliable locations. Comparison between the located seismic sources and the 3-D reflection seismic image of the area confirmed that some of the reflectors can be directly correlated with the induced seismicity.

Hydraulic Fracturing-Induced Local Seismicity
Hydraulic fracturing stimulation is a methodology used in the oil and gas industry to produce hydrocarbons from unconventional reservoirs or to create heat exchangers in geothermal exploration. Hydraulic fracturing induces microearthquakes, which are located and characterized to map the hydraulic fractures. The monitoring arrays can be installed either in nearby monitoring wells or deployed at the (near-)surface. In contrast to the limited number of receivers (usually fewer than 20) in a well, surface arrays may consist of several hundred stations in different layouts such as circles, regular grids, patches, or star-like patterns. A high number of receivers and spatially dense recordings represent ideal conditions for waveform-based location methods.
In the context of unconventional gas reservoirs, processing of data acquired with large array configurations was initially tailored to mapping high energy radiation (Duncan, 2005). It was then refined to map microseismic events as proposed in academic studies, for example, detecting high semblance over space, time, and stations (Kiselevitch et al., 1991); evaluating the brightness function created by stacking absolute amplitudes in the source-scanning algorithm (e.g., Kao & Shan, 2004, 2007; Liao et al., 2012); and stacking envelopes of waveforms along body-wave moveouts (Gharti et al., 2010) or using semblance-weighted stacking (W. Zhang & Zhang, 2013). These methods cope with the polarity changes arising from the source mechanism's radiation pattern but provide only location estimates. Later, many authors started to develop methods to determine not only the event location but also the source mechanism. Gharti et al. (2011) combined TRI with source mechanism estimation; Rodriguez et al. (2012) presented a method for the simultaneous determination of the source origin time, location, and radiation pattern. Zhebel and Eisner (2015) developed a workflow that combines migration-type location capabilities with the estimation of the source's full moment tensor. With further improvements, Anikiev et al. (2014) applied this method to a competitive data set acquired during hydraulic fracturing in shales in Oklahoma. Staněk et al. (2015) improved the detection by adding semblance computation of stacked amplitudes along the moveout as another criterion to the traditional STA/LTA. This allowed them to eliminate false detections and detect more weak events (Figure 12), which were further interpreted with a bedding plane slip model (Staněk & Eisner, 2017).
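The semblance criterion mentioned above can be sketched as follows: it is the standard semblance coefficient, the ratio of stacked-trace energy to the summed single-trace energies within a window along the trial moveout. The function name, data layout, and windowing below are assumptions for illustration, not any published implementation.

```python
def semblance(traces, moveouts, window):
    """Semblance coefficient along a trial moveout (minimal sketch).

    traces  : list of per-receiver sample lists
    moveouts: predicted arrival sample of the trial source at each receiver
    window  : half-width (in samples) of the analysis window around the
              predicted arrival
    Semblance = energy of the stacked trace / (N * sum of trace energies);
    it reaches 1 for perfectly coherent, identical arrivals.
    """
    n = len(traces)
    num = 0.0
    den = 0.0
    for k in range(-window, window + 1):
        s = 0.0   # stacked amplitude at this window sample
        e = 0.0   # summed single-trace energy at this window sample
        for trace, m in zip(traces, moveouts):
            x = trace[m + k] if 0 <= m + k < len(trace) else 0.0
            s += x
            e += x * x
        num += s * s
        den += e
    return num / (n * den) if den > 0 else 0.0
```

Because semblance is normalized by the trace energies, it rewards waveform coherence along the moveout rather than raw amplitude, which is why it is useful both as a stacking operator and as an extra detection criterion alongside STA/LTA.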

Regional and Global Scale 4.3.1. Natural Events
Seismic source location and characterization are a key challenge in the analysis of both natural and anthropogenic seismicity. Detailed seismic catalogs are the backbone of many studies in seismology, such as characterizing the crustal stress field and estimating Earth structure, and their practical value is directly related to seismic monitoring and earthquake forecasting. The development and application of waveform-based detection and location methods for the analysis of seismic activity at regional and global scales were pushed forward by the densification of seismic instrument deployments as permanent networks (e.g., Figure 13d) or temporary observations (e.g., Figure 13a), as well as by the accessibility of computational resources. Although traveltime-based methods continue to be the standard procedure for earthquake detection and location used by seismological centers, waveform-based methods exploiting the coherency of seismic wavefield characteristics are attracting more attention due to their capability to efficiently process large volumes of seismic data in a fully automatic way. Here we illustrate the application of waveform-based cross-correlation stacking to the detection and location of subduction earthquakes in central Chile (Poiata et al., 2016) and of low-frequency earthquakes (Katsumata & Kamaya, 2003), short transient events hidden in tectonic tremor signals, in western Shikoku, Japan (Poiata et al., 2018), as well as volcano-tectonic earthquakes at Uturuncu volcano. Compared to diffraction stacking, which directly utilizes the waveforms of individual stations (i.e., N traces), cross-correlation stacking exploits the multichannel coherency of different station pairs (i.e., N(N−1)/2 station pairs) and can extract more information from continuous seismic recordings.
The selected applications cover what are currently the main areas of interest and concern in seismic source seismology, extending to the problems of seismic activity monitoring and source characterization. Automatic detection and location of earthquakes during foreshock/aftershock and swarm sequences or regular background activity in seismically active environments, such as subduction zones or active faults, provide important information about the state of the stress field in the area. The ability to track the evolution of seismic activity in greater detail (down to low magnitudes) during, for example, foreshock sequences can potentially yield information about the initiation stage of larger earthquakes. Low-frequency earthquakes and tectonic tremor are a class of slow earthquakes (K. Obara, 2002) that contain a unique source of information about the mechanical properties of subduction zones during the seismic cycle. Observed in the transition zones surrounding the regions of large (megathrust) earthquakes, slow earthquakes might represent a unique source of information for improving seismic hazard and risk assessment (K. Obara & Kato, 2016). Thus, detailed monitoring and characterization of slow earthquake activity represent an important topic in seismic source seismology. In seismic environments associated with volcanoes, volcanic earthquakes such as volcano-tectonic events and long-period events are generally associated with the movement of magma below the Earth's surface. Volcano-seismic monitoring is an important tool for the prediction of volcanic eruptions, as, in many cases, a precursory increase of the seismic activity rate precedes large eruptions. Currently, there are about 1,500 potentially active volcanoes worldwide, and more than 200 volcanoes are seismically monitored (Thompson, 2014). Volcano hazards pose great danger to the daily life of nearby populated regions and can affect extended areas far from a volcano.
Therefore, rapid (automatic) and accurate detection and location of associated seismic events (e.g., volcano-tectonic earthquakes and volcanic tremors) are essential for early-warning and hazard assessment purposes, as well as for studying the mechanism and structure of volcanic systems.
With the increased volumes of data available from networks and arrays installed for natural seismic activity observations (in both tectonic and volcanic environments), waveform-based detection and location methods are becoming an increasingly important tool for efficient and fully automated processing of waveform data, with the ability to provide detailed and uniform catalogs. The examples presented here illustrate this point.
An example application of this approach to the detection and location of subduction earthquakes in central Chile recorded by 22 selected regional seismic stations is shown in Figures 13a-13c on a 3-min-long velocity record containing two events. Only vertical components are considered under the assumption of P wave propagation in a homogeneous velocity model. Simultaneous detection and location are performed when the value of the image function, normalized by the number of station pairs, exceeds a threshold value of 0.7 (Figure 13c). In this implementation, the CFs (Figure 13b) correspond to the time-frequency kurtosis estimated by the multiband filter approach (Poiata et al., 2016) providing an efficient enhancement of signals like small aftershocks hidden in the coda of large events. This increases the number of the detected and located small earthquakes. An extended application of the same PWS method (Poiata et al., 2016) to a larger data set in the subduction zone of central Chile is shown in the study of Ruiz et al. (2017), analyzing the precursory (foreshock) activity of the 2017 Valparaiso earthquake using the recordings of the temporary and permanent stations of the National Seismological Center of the University of Chile.
Since the discovery of tectonic tremor by K. Obara (2002), one standard technique for the detection and location of such seismic events has been based on envelope cross-correlation stacking (K. Obara, 2002; Kao & Shan, 2004; Wech & Creager, 2008; Ide, 2010). A typical signal of deep tectonic tremor recorded at surface stations (Figure 13d) appears as energy arrivals that are coherent across the stations (Figure 13e). Thus, envelope CFs estimated in the predominant frequency band of tremor signals (2-10 Hz) are used in the PWS location schemes, assuming S waves as the dominating phase. The tectonic tremor signals may contain small-magnitude impulsive events called low-frequency earthquakes (Figure 13f; Katsumata & Kamaya, 2003) mixed within the tremor signal (Figures 13e and 13f). Detection and location of these events are highly challenging, since their waveforms are characterized by very low SNR. Poiata et al. (2018) demonstrated that an automatic PWS detection and location scheme based on time-frequency kurtosis and waveform-based cross-correlation stacking allows for efficient automatic extraction of such events. Figures 13d-13g show an example of the detection and location of low-frequency earthquakes using this approach applied to 9-day continuous records of tremor activity recorded by Hi-net (Okada et al., 2004) seismic stations in western Shikoku (Japan). Theoretical S wave traveltimes were calculated assuming a one-dimensional velocity model of the area, defined on a 3-D grid covering the region of interest (Figures 13d and 13g). The maximum likelihood source location is estimated from the stacked station-pair cross-correlated kurtosis CFs projected according to the theoretical time delays (Figure 13f). An event detection is declared each time the normalized imaging function exceeds the threshold value of 0.65 (Figure 13f).
The spatial location corresponds to the maximum of the determined image function, and the timing is calculated from the obtained location and the automatically picked S wave arrivals at the stations (Figure 13f). This analysis provided a catalog of 2,212 low-frequency earthquakes showing details of migration and clustering along the strike of the subducting slab. In comparison, the low-frequency earthquake catalog provided by the Japan Meteorological Agency, based on manual picking and conventional traveltime-based methods, contains 341 events for the same period of time (see Poiata et al., 2018, for more details).
Although there have been many efforts to increase network coverage and station density (Dougherty et al., 2019), seismic monitoring arrays with poor coverage and sparse deployment are still very common, especially in remote areas. Another example illustrating the performance of cross-correlation stacking is the location of volcano-tectonic earthquakes recorded by sparse stations at Uturuncu volcano. Figure 14a shows a regional elevation map of Uturuncu volcano, Bolivia. As shown, 15 three-component seismometers were temporarily deployed surrounding the inflating Uturuncu to monitor volcanic activity (Jay et al., 2012). The largest interstation spacing is about 40 km. Four different CFs, that is, envelope (Kao & Shan, 2007), STA/LTA (Drew et al., 2013), kurtosis (Langet et al., 2014), and Pearson correlation coefficients, are compared using the same example volcano-tectonic event. Figure 14b shows the migration profiles of the four different migration methods; the catalog location of the example event is displayed as the red star in Figure 14a. The source energy of the envelope and STA/LTA migration is not well focused due to strong scattering in the recorded wavefield. The horizontal locations of this seismic event obtained with the kurtosis and correlation methods are consistent with the catalog location. However, the located event depths are deeper than the catalog depth (by 1.72 and 1.92 km, respectively). Nevertheless, compared to the horizontal location of a seismic event, the event depth is often not well constrained, especially for surface arrays. As discussed in section 4.1.2, the trade-off between event depth and origin time often makes event depth determination problematic and more difficult. The cross-correlation stacking method was then applied to locate triggered volcano-tectonic events in 4 hr of continuous seismic data.
The recorded waveform data at one station are shown in Figure 14c; the many small-magnitude events they contain are difficult and time consuming to pick manually. Because the triggered earthquakes start immediately after the surface wave train of the Mw 8.8 Maule earthquake, many events occurred within a short time period, with very close or overlapping waveforms. Figure 14d shows the variation of the stacked maximum coherency values at different trial origin times. The many local peaks rising from the background noise in Figure 14d correspond to potential seismic events. In total, 560 seismic events are identified in Figure 14d by tagging the maximum value of each local peak (the number depends on the applied threshold criterion). Of these, 322 seismic events with clear phase arrivals have been verified; they are shown as red dots in Figure 14d. Figure 14e shows the locations of the 322 verified seismic events (colored dots) and the 114 current catalog events (black dots). The distribution of automatically located seismic events is consistent with that of the catalog events. There are two main earthquake clusters. One is located in the northern part of the study area, close to the volcano; this cluster occurred earlier (6-8 a.m.), and its events were mainly triggered by the surface waves (Love and Rayleigh waves) of the Maule earthquake. The other cluster occurred from 8 to 10 a.m. and is located in the southern part of the study area; its events were mainly triggered by the surface wave overtones of the Maule earthquake.
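To make the contrast between characteristic functions concrete, the toy sketch below (a simplified illustration, not the exact implementations of the cited studies) computes two of the single-channel CFs mentioned above, STA/LTA and sliding-window kurtosis, on a synthetic noisy trace with one impulsive arrival; the trace, window lengths, and onset sample are all made-up example values.

```python
import numpy as np

def sta_lta(x, ns, nl):
    """STA/LTA characteristic function: ratio of short-term to
    long-term average energy; jumps sharply at impulsive arrivals."""
    e = np.concatenate(([0.0], np.cumsum(x ** 2)))
    sta = (e[ns:] - e[:-ns]) / ns
    lta = (e[nl:] - e[:-nl]) / nl
    n = min(len(sta), len(lta))
    return sta[-n:] / (lta[-n:] + 1e-12)

def sliding_kurtosis(x, win):
    """Kurtosis characteristic function: Gaussian noise gives values
    near 3, while a window catching an impulsive onset gives much
    larger values, regardless of the polarity of the arrival."""
    out = np.zeros(len(x) - win)
    for i in range(len(out)):
        w = x[i:i + win]
        s = w.std() + 1e-12
        out[i] = np.mean(((w - w.mean()) / s) ** 4)
    return out

# Synthetic noisy trace with one impulsive arrival at sample 600.
rng = np.random.default_rng(42)
trace = 0.1 * rng.standard_normal(1000)
trace[600:620] += 5.0 * np.exp(-0.2 * np.arange(20)) * np.sin(np.arange(20))

cf_ratio = sta_lta(trace, ns=10, nl=100)
cf_kurt = sliding_kurtosis(trace, win=100)
```

Both CFs discard polarity information (they operate on energy and higher-order statistics), which is exactly why they stack constructively regardless of the radiation pattern, at the cost of reduced resolution discussed later in this review.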

Induced Events
Many human activities can induce seismic activity, whose magnitudes and patterns vary from continued microseismicity (e.g., mines, hydrocarbon, and geothermal fields), intense microseismicity after a certain event (e.g., hydraulic fracturing), and moderate-magnitude seismic activity (e.g., hydrocarbon fields and mines), to catastrophic activity of large magnitude (e.g., reservoir-induced seismicity; Suckale, 2009). Induced seismicity monitoring is important for both industrial operations and public safety. Monitoring of the associated microseismicity is essential for reservoir characterization, for example, fracture geometry delineation, fluid migration, and reservoir geomechanical analysis, while larger-magnitude earthquakes are mainly exploited for potential geohazard management and mitigation (L. Li, Tan, Wood, et al., 2019). Several cases of local and regional earthquakes associated with hydraulic fracturing have been reported worldwide, such as in south-central Oklahoma, the United States (Holland, 2013); the Blackpool area, the United Kingdom (Clarke et al., 2014); the West Canadian Sedimentary Basin, Canada (Atkinson et al., 2016; Bao & Eaton, 2016); and the Sichuan Basin, China (Lei et al., 2017). Caffagni et al. (2016) adopted a new Matched Filtering Algorithm for detecting and analyzing microseismic events recorded by downhole monitoring arrays. The method requires a set of well-located events that serve as templates to detect and analyze new events, and involves stacking multichannel cross correlations of the continuous waveform data with the waveforms of master events. Besides detecting weak events, as conventional template matching techniques do, this algorithm can also locate events by maximizing the amplitude of stacked envelope functions surrounding the template event. The moveout correction of the envelope functions is determined using a lookup table generated in 2-D cylindrical coordinates, defined by radial distance and depth.
This approach has been successfully applied to several field data sets associated with hydraulic fracturing in western Canada, such as the multistage treatment at the Garrington field in Alberta (Caffagni et al., 2016), regional earthquakes associated with hydraulic fracturing in the Duvernay shale play (Bao & Eaton, 2016), and the Tony Creek Dual Microseismic Experiment in the Kaybob-Duvernay region of Alberta. This waveform-based Matched Filtering Algorithm was demonstrated to produce a more complete catalog, which enables more robust interpretation of the induced microseismicity and potentially provides novel insights into the corresponding dynamic rupture processes. A more generalized form of waveform stacking referring to template waveforms in 3-D Cartesian coordinates has also been reported by F. Grigoli et al. (2016). Other variants of waveform-based relative location methods involve cross-correlation waveform stacking based on differential traveltimes between template and target events (e.g., M. Zhang & Wen, 2015; L. Li et al., 2016). Verdon et al. (2017) introduced another example of a waveform-based method applied to induced (micro)seismicity associated with a multiwell, multistage hydraulic fracturing treatment in Oklahoma. The surface monitoring array consisted of 17 three-component broadband seismometers, and the target depth for the stimulation was approximately 3.8 km. A total of 155 events were detected and located using a STA/LTA-based PWS method, including 17 events with relatively higher SNR and larger magnitudes (0.0 < Mw < 1.0) that were also detected and located using automated phase picking. Waveform-based location methods enabled a lowering of the detection threshold and could be used at industrial sites to maximize the information on induced seismic events recorded by small, sparse seismometer arrays deployed at the surface.
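The core of such template-matching detectors, stacking per-channel normalized cross-correlation functions to suppress incoherent noise, can be sketched as follows; this is a minimal toy under assumed synthetic waveforms, not the published algorithm of Caffagni et al. (2016), and all trace lengths, noise levels, and the embedding sample are illustrative.

```python
import numpy as np

def normxcorr(data, tmpl):
    """Normalized cross-correlation of a short template against a
    longer continuous trace (values in [-1, 1])."""
    nt = len(tmpl)
    t = (tmpl - tmpl.mean()) / (tmpl.std() * nt)
    out = np.empty(len(data) - nt + 1)
    for i in range(len(out)):
        w = data[i:i + nt]
        out[i] = np.sum(t * (w - w.mean())) / (w.std() + 1e-12)
    return out

def stacked_detection(traces, templates):
    """Average per-channel correlation functions; coherent stacking
    suppresses noise that is incoherent across channels."""
    cc = np.mean([normxcorr(d, t) for d, t in zip(traces, templates)],
                 axis=0)
    return int(np.argmax(cc)), float(cc.max())

# Embed a master-event waveform at sample 300 of three noisy channels.
rng = np.random.default_rng(1)
templates = [np.sin(2 * np.pi * 0.05 * np.arange(50))
             * np.exp(-0.05 * np.arange(50)) for _ in range(3)]
traces = []
for tmpl in templates:
    d = 0.2 * rng.standard_normal(1000)
    d[300:350] += tmpl
    traces.append(d)

idx, peak = stacked_detection(traces, templates)
```

Because the correlation is normalized per channel, a weak event buried in noise can still produce a clear stacked peak, which is the basis for the lowered detection thresholds reported in the studies above.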

The Role of the Velocity Model
The velocity model is necessary for both seismic forward modeling and inversion. Waveform-based location methods need velocity models to compute theoretical traveltime tables or generate synthetic waveforms. For example, TRI and FWI require waveform modeling and extrapolation with a good velocity model; inaccuracies in the velocity model accumulate and thus lead to location errors. The most common type of seismic data for velocity model calibration are controlled shots (usually perforation shots), whose source locations and origin times are known. As opposed to the weak elastic anisotropy of the Earth at the global scale, the anisotropy of tight reservoirs (e.g., kerogen-rich shales) at the exploration scale may be strong. The high magnitude of observed anisotropy is another challenge that impedes reliable field microseismic application, and the small-scale natural and hydraulic fractures in these formations make the anisotropy even more complicated (Gajewski et al., 2009; Grechka, 2007; Grechka et al., 2011). Moreover, industrial activities (e.g., hydraulic stimulation) may produce or extend fractures that change the local seismic velocity (Fehler et al., 1998), which in turn affects seismic location. Another example illuminating the influence of the velocity model involves the role of multiple phases in the imaging process. Waveform-based location is sensitive to the velocity model, especially when multiply scattered and reflected waves are used. While these later arrivals increase illumination of the source, they also increase the sensitivity of the source location to the scatterers and diffractors in the model. A fundamental problem of downhole waveform location is that the use of high-frequency radiation of the source is not accompanied by sufficient small-scale complexity in the utilized velocity model. Controlled-source full-waveform investigations make use of low-frequency information to stabilize the inversion process.
However, such low-frequency contributions are currently not available in microseismic monitoring. Future research will perhaps focus on overcoming these instrumental limitations.
The velocity model can be improved by taking advantage of both active and passive seismic data. Both the traveltimes and waveform amplitudes of recorded data contain information on velocity structure and can be utilized to constrain the inversion either tomographically or through migration principles (Grechka & Heigel, 2017). The joint estimation of both source characteristics and the seismic velocity distribution is expected to confront model uncertainties in a more effective manner (see sections 3.3 and 3.4). A potential problem of joint location and velocity inversion is the trade-off between the source location, origin time, and velocity model, which might lead to significant bias in the inversion results if it is not treated properly (Jansky et al., 2010). In addition, in the context of microseismic monitoring, it is difficult to obtain an accurate tomographic model due to the poor coverage of the receiver arrays (e.g., downhole monitoring) or the limited number of well-located microseismic events (Bardainne & Gaucher, 2010). Reservoir imaging using passive seismic data and seismic migration techniques can help resolve field-scale velocity variations, but this emerging area still faces many challenges (Huang & Zhu, 2019; see section 5.4). The relative location method (Fitch, 1975; Spence, 1980) has been demonstrated to be an effective alternative, which can compensate for implicit velocity errors and anomalies and yield correct relative locations of the located events. Waveform-based relative location efforts are likewise on the rise and promise to provide additional, experience-driven constraints by relying on a catalog of formerly well-located events. By introducing master events, F. Grigoli et al. (2016) improved standard diffraction stacking to master-event waveform stacking with additional traveltime correction terms estimated at each station. L. Li et al. (2018) proposed two novel waveform-based relative location methods, relative correlation stacking and hybrid correlation stacking, which are based on the differential traveltime from an event pair (i.e., a master event and a target event) to a receiver and the double differential traveltime from an event pair to a receiver pair, respectively. The combination of relative location methods and waveform-based methods has great potential since it inherits advantages from both sides: Information from master events can help to compensate for the negative effect of velocity model uncertainty and is also applicable to weak seismic events through waveform stacking, though the location results and selection criteria of master events need to be further investigated.
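The way master-event corrections absorb velocity-model error can be illustrated with a deliberately simple 1-D toy, which is not any of the published schemes above; the station distances, velocities, and origin-time shift are invented numbers, and the "true" and "model" media differ only in a constant velocity.

```python
import numpy as np

def station_corrections(t_obs_master, t_calc_master):
    """Per-station static corrections from a well-located master event;
    they absorb velocity-model error along each source-station path."""
    return t_obs_master - t_calc_master

def corrected_residuals(t_obs_target, t_calc_target, corr):
    """Traveltime residuals of a target event after applying the
    corrections; demeaning removes the unknown origin-time shift."""
    r = t_obs_target - (t_calc_target + corr)
    return r - r.mean()

# Toy setup: the "true" medium is slightly faster than the model.
x_sta = np.array([5.0, 12.0, 20.0, 33.0])   # epicentral distances (km)
v_true, v_model = 3.2, 3.0                   # km/s
t_obs_master = x_sta / v_true
t_calc_master = x_sta / v_model
corr = station_corrections(t_obs_master, t_calc_master)

# A target event colocated with the master but occurring 0.7 s later:
t_obs_target = x_sta / v_true + 0.7
res = corrected_residuals(t_obs_target, x_sta / v_model, corr)
```

For a target colocated with the master, the corrected, demeaned residuals vanish despite the wrong model velocity; the correction degrades gracefully as the master-target separation grows, which is why the selection criteria for master events matter.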

Resolution and Uncertainty
Location uncertainty estimates generally describe the location as a Gaussian distribution with mean and standard deviation. Factors contributing to location uncertainty include the input data quality (e.g., SNR), the location algorithm (e.g., the imaging operator), the velocity model, and the coverage of the monitoring network. Location uncertainty estimation for waveform-based methods is still in its early stages and far from being standardized, though there exist several attempts on this topic (see Table 1). For example, the imaging resolution or distribution can provide preliminary knowledge about the location uncertainty (Anikiev et al., 2014; Kao & Shan, 2004), and the bootstrap or jackknife method can help to extract statistical information on the location results of waveform-based methods (Grigoli et al., 2013; L. Li et al., 2018). According to several existing studies of PWS, the estimated location uncertainty ranges from tens of meters to about 200 m, comparable to the corresponding prevailing wavelengths (Grigoli et al., 2013; Pesicek et al., 2014; L. Li et al., 2018).
It may be argued that the image function, introduced in section 3 as a central ingredient of most waveform-based location methods, can to some extent be viewed as a probability distribution of the source location. In analogy with the Gaussian model, the detected peak of the image function represents the most likely source location for the given observations. This previously noted "locate, then pick" mentality clearly distinguishes the source imaging methods from conventional inversion, in that the method itself leads to a landscape of possible solutions and their likelihoods, rather than a specific solution only. As can be observed in section 4 for essentially all considered examples, the sharpness of the image function is strongly affected by limited and sparse apertures and directional biases, clearly reflecting the respective location uncertainties.
All current methods of uncertainty assessment are limited to testing a fixed velocity model. We argue that the estimation of location uncertainty should also consider the uncertainty of the velocity model, which may be a main contributor (Gesret et al., 2015; Huang et al., 2017b). For traveltime-based methods, the velocity uncertainty can be implicitly contained in the observed or theoretical traveltimes, while it is not straightforward to estimate velocity uncertainty with waveforms. One potential solution is adding perturbations to the theoretical traveltime table. Given the short development course of waveform-based methods and the difference in location uncertainty estimation between the two types of methods, more systematic benchmarking studies of seismic source location algorithms are needed, though several recent comparisons of microseismic waveform-based and traveltime-based location algorithms can be found in the literature (e.g., Pesicek et al., 2014; Trojanowski & Eisner, 2017; L. Li et al., 2018; Wuestefeld et al., 2018; Grigoli, Scarabello, et al., 2018). Note that full waveform-based methods have additional sources of uncertainty from, for example, the density, S wave velocity, and attenuation models. Compared to the high nonlinearity and more accurate velocity models required by other methods (see Table 2), PWS methods can be expected to provide improved uncertainty estimates when model uncertainty is included, as partially demonstrated with the field applications in section 4. In addition, only band-limited waveforms are acquired and utilized in field applications. Thus, the prevailing wavelength or the size of the Fresnel volume of the wavefield, along with the grid spacing, roughly determines the range of source imaging resolution and/or location uncertainty for waveform-based methods (Gajewski & Tessmer, 2005).
The recent emergence of denser acquisition across scales may help solve the prevailing problem of undersampling and resolution. With this ongoing transition from waveforms to wavefields, the increasing volume of dense measurements promises a more constrained and thus less uncertain reconstruction of the properties of a seismic source and the surrounding medium. Compared to local or regional scales, waveforms of small-scale events and laboratory experiments are characterized by their high-frequency content, resulting in shorter prevailing wavelengths. In these scenarios, an increased image resolution can in principle be achieved. However, the related wavefield complexity and the increased sensitivity of the records act as additional sources of uncertainty, and reasonably accurate and detailed velocity information is required. This may be regarded as one of the major current challenges that waveform-based methods will have to overcome in order to be successful in AE investigations.
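The bootstrap idea cited above (Grigoli et al., 2013; L. Li et al., 2018) can be sketched in its simplest form, relocating an event many times with stations resampled with replacement and taking the scatter of the relocations as an uncertainty proxy. The sketch below uses a traveltime grid search as a stand-in for the waveform image function, purely to keep the example self-contained; all geometry, velocities, and noise levels are invented.

```python
import numpy as np

def locate(stations, t_obs, grid, v):
    """Grid search minimizing origin-time-free traveltime misfit."""
    best, best_err = grid[0], np.inf
    for g in grid:
        r = t_obs - np.linalg.norm(stations - g, axis=1) / v
        err = np.sum((r - r.mean()) ** 2)  # demeaning removes origin time
        if err < best_err:
            best, best_err = g, err
    return best

def bootstrap_scatter(stations, t_obs, grid, v, n_boot=50, seed=0):
    """Relocate with stations resampled with replacement; the scatter
    of the relocations approximates the location uncertainty."""
    rng = np.random.default_rng(seed)
    locs = []
    for _ in range(n_boot):
        k = rng.integers(0, len(stations), len(stations))
        locs.append(locate(stations[k], t_obs[k], grid, v))
    return np.array(locs).std(axis=0)

# Eight stations on a ring, a source at (1, 2), slightly noisy picks.
ang = np.linspace(0, 2 * np.pi, 8, endpoint=False)
stations = 10.0 * np.column_stack([np.cos(ang), np.sin(ang)])
true_src, v = np.array([1.0, 2.0]), 3.0
rng = np.random.default_rng(7)
t_obs = np.linalg.norm(stations - true_src, axis=1) / v \
        + 0.02 * rng.standard_normal(8)
grid = np.array([[x, y] for x in np.arange(-5.0, 5.5, 0.5)
                        for y in np.arange(-5.0, 5.5, 0.5)])
sigma = bootstrap_scatter(stations, t_obs, grid, v)
```

In a waveform-based workflow, `locate` would be replaced by the stacking/imaging operator itself; the resampling logic is unchanged, which is what makes the bootstrap attractive as a method-agnostic uncertainty wrapper.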

Complex Source Mechanisms
In contrast to controlled sources in active seismic exploration, both natural and induced/triggered (micro)earthquakes have more complex source mechanisms. Seismicity at different scales may be governed by different physical processes and involve different source mechanisms. Tectonic earthquakes are likely to rupture as shear failure, while volcanic-tectonic events may be characterized by other processes, such as collapses, rock bursts, tensile cracks, and explosions (e.g., Cesca & Grigoli, 2015). The underlying mechanism of slow earthquakes, on the other hand, is still not fully understood. Seismicity induced by industrial operations, for example, fluid injection or extraction, may involve more complex source mechanisms (from pure shear, over different combinations of shear and tensile, to pure tensile) due to pore pressure and stress disturbances (e.g., Eaton et al., 2014). Laboratory AE measurements of rock materials also contain information about hybrid failure mechanisms, besides simple tensile and shear types of failure (e.g., Ramsey & Chester, 2004; Q.-Z. Zhu, 2017). Complex source mechanisms in anisotropic media may result in apparent nonshear components of the source radiation. Consequently, the polarities of the amplitudes can be either positive or negative, depending on the takeoff angle for each pair of source and receiver positions. For example, a pure double-couple strike-slip or dip-slip source has a radiation pattern such that the stacking value equals zero at the true source position if no polarity correction is applied and the receivers are distributed symmetrically (e.g., Staněk et al., 2015; Zhebel & Eisner, 2015). As mentioned in section 3.1, various CFs have been applied to remove the effects of waveform polarity changes.
However, transforming the original waveforms with CFs loses source mechanism information and reduces imaging resolution, and stacking may also become less effective in terms of noise elimination; that is, the detectability of weak events may be lower. Complex source mechanisms also inhibit the applicability of methods that assume constant source characteristics (i.e., nearly colocated sources), such as template matching and matched field processing.
An alternative approach to compensate for the effects of waveform polarity changes is the joint inversion of location and source mechanism. The diffraction stacking method presented in Gajewski et al. (2007) has been further improved by Anikiev et al. (2014), Chambers et al. (2014), and Zhebel and Eisner (2015) by accounting for the source mechanism. They proposed to include polarity corrections in diffraction stacking through a joint location and moment tensor inversion: the full moment tensor representing the source mechanism is inverted, and amplitude polarities are subsequently corrected at each time step before stacking. The method has been applied to field microseismic data acquired with a dense star-like surface array during hydraulic fracturing in shale gas reservoirs. Location quality was consistent with manual processing, and the stability of the inverted source mechanisms for large surface arrays was demonstrated by Staněk et al. (2014), who showed that high-quality source mechanisms can be obtained even with a high level of noise in the data. Afterward, Liang et al. (2016) and Yu et al. (2018) proposed a similar joint inversion strategy by extending the source-scanning algorithm to the joint source-scanning algorithm, which scans both locations and focal mechanisms. They carried out a theoretical and numerical analysis of the feasibility of the new method and demonstrated its superior robustness and efficiency with field data from surface microseismic monitoring in the Sichuan Basin.
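The destructive stacking caused by a double-couple radiation pattern, and its repair by polarity correction, can be shown with a few lines of code; this is a hypothetical toy with a four-lobed sin(2*azimuth) P-wave pattern and noise-free amplitudes, not the full moment tensor inversion of the studies cited above.

```python
import numpy as np

def stack_at_trial_source(amplitudes, predicted_polarity=None):
    """Sum picked amplitudes across receivers; if a trial mechanism is
    supplied, flip each trace by its predicted polarity sign first."""
    if predicted_polarity is None:
        return amplitudes.sum()
    return (np.sign(predicted_polarity) * amplitudes).sum()

# Hypothetical four-lobed (strike-slip-like) P radiation pattern
# sampled by eight receivers evenly distributed in azimuth.
az = np.pi / 8 + np.linspace(0, 2 * np.pi, 8, endpoint=False)
radiation = np.sin(2 * az)           # alternating lobe signs
amplitudes = radiation.copy()        # noise-free picked amplitudes

raw_stack = stack_at_trial_source(amplitudes)
corrected_stack = stack_at_trial_source(amplitudes, radiation)
```

With symmetric receiver coverage the uncorrected stack cancels exactly at the true source, whereas the polarity-corrected stack is fully constructive; in the joint inversion the predicted polarities come from the trial moment tensor rather than being known in advance.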

From Source Imaging to Reservoir Imaging
Unlike exploration seismology, which uses man-made active seismic sources, passive seismic imaging utilizes passive sources, such as natural and induced seismicity, ocean waves, and ambient seismic noise, to image geological structure, locate seismic sources, and explore underground oil, gas, and other resources (Daneshvar et al., 1995; Schuster et al., 2004). Once a seismic event is accurately located and its unknown source parameters are inverted, it can resemble an active source as used in exploration seismology. Advanced seismic migration techniques, along with the weak diffractions and reflections, can then be used to image subsurface structures. For example, Dyer et al. (2008) and Reshetnikov et al. (2015) demonstrated the feasibility of seismic reflection imaging techniques for imaging fractures and faults in the vicinity of microseismic clouds in geothermal reservoirs, although the uncertainty of these inversions remains questionable.
With the rise of interest in passive seismic monitoring of oilfields and production sites, the concepts of global-scale earthquake characterization techniques have gained increasing interest in the exploration seismology community, and a technology exchange between earthquake seismology and exploration seismology has already emerged. It is quite promising to extend waveform-based methods to reservoir imaging by linking active and passive seismic techniques, for example, using TRI methods. In exploration seismology, RTM has been routinely used to image subsurface structures. With passive seismic records, TRI can be used to image the passive seismic source (e.g., extended finite-fault source characteristics), as well as the surrounding fractures and structures. L. Li et al. (2015) applied TRI to synthetic microseismic events monitored with a surface array and a known velocity model and could simultaneously image seismic sources and surrounding reservoir structures. Lin and Zhang (2016) and Lin et al. (2018) applied RTM to microseismic records containing strongly scattered seismic waves for a synthetic downhole monitoring system. Zhu and Sun (2016) and T. Zhu et al. (2019) proposed a workflow of data-driven and source-independent fracture imaging for passive seismic data. In this method, no source information is required, and zero-lag cross correlations of the back-propagated direct and scattered wavefields are used to image the scatterers and fractures. Numerical examples and an ultrasonic data set have indicated the approach's improved performance in imaging fractures compared to standard RTM, while field microseismic data from the Marcellus Shale demonstrated the feasibility of the proposed method in imaging subsurface natural fractures (T. Zhu et al., 2019).
Although only a few exploratory studies have been conducted, passive seismic reservoir imaging bears great potential for providing new opportunities in reservoir characterization (Grechka & Heigel, 2017). However, several technical issues still need further study, for example, the influence of the irregular spatial distribution of sources and receivers and reliable wavefield separation from low-SNR passive seismic data (Grechka & Heigel, 2017).
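The zero-lag cross-correlation imaging condition at the heart of these RTM/TRI schemes can be reduced to a 1-D toy. Here both wavefields are written analytically as traveling Gaussian pulses instead of being computed by wave-equation extrapolation, and the grid, velocity, and scatterer position are invented; the point is only that the image I(x) = sum over t of u_src(x,t) * u_rec(x,t) peaks where the forward and back-propagated fields coincide in time.

```python
import numpy as np

def zero_lag_image(u_src, u_rec):
    """Zero-lag cross-correlation imaging condition:
    I(x) = sum over t of u_src(x, t) * u_rec(x, t)."""
    return np.sum(u_src * u_rec, axis=-1)

# 1-D toy: a pulse from x = 0 passes a scatterer at x = 40; the
# scattered wave is recorded down-dip and propagated backward in time.
# Both fields are written analytically as traveling Gaussians (v = 1).
x = np.arange(101)[:, None]       # grid positions
t = np.arange(201)[None, :]       # time samples
x_scat = 40
u_src = np.exp(-((t - x) ** 2) / 8.0)                  # forward field
u_rec = np.exp(-((t - (2 * x_scat - x)) ** 2) / 8.0)   # back-propagated

image = zero_lag_image(u_src, u_rec)
```

The two Gaussians overlap in time only at x = 40, so the image focuses on the scatterer; in a real implementation both fields would come from finite-difference extrapolation through the velocity model, and wavefield separation of direct and scattered energy becomes the critical (and difficult) preprocessing step noted above.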

Computational Efficiency
Waveform-based location methods involve waveform storage and transmission, which raises new challenges for real-time application, especially for large monitoring arrays and large target regions (Pesicek et al., 2014; Xue et al., 2015). For example, PWS took tens of minutes to several hours when hundreds of receivers and millions of grid points were considered (Li, Xie, et al., 2017), let alone TRI and FWI, which are more computationally demanding (see Table 2). Currently, there are two solutions to this issue. One is to take advantage of high-performance computing technologies and adopt parallelization for both the waveform modeling and the inversion process. Parallelization techniques based on software improvements (e.g., OpenMP and the message passing interface) and hardware improvements (e.g., GPUs) have been utilized in seismic forward modeling since the early 2000s (e.g., Bohlen, 2002; Michéa & Komatitsch, 2010; Thorbecke & Draganov, 2011). For most waveform-based methods, the discretized grid points representing potential source points are independent from each other; thus, the corresponding simulations and image function evaluations lend themselves naturally to parallel processing. Xue et al. (2015, 2016) described the performance of two new GPU features (Hyper-Q and GPUDirect, technologies introduced to improve GPU performance and usability) applied to microseismic location. In tests on a realistic 3-D microseismic data set, they demonstrated that GPU-accelerated TRI (requiring only several seconds with two NVIDIA Tesla C2070 [Fermi] graphics cards) can be 30 times faster than execution on a CPU (requiring several minutes on a personal workstation with an 8-core Intel Xeon E-2650 CPU).
Although several minutes or even hours are acceptable for academic or seismic hazard monitoring purposes, (near-)real-time processing is highly desirable for industrial field applications and would also benefit methodological studies and nonindustrial applications for natural seismicity monitoring (e.g., monitoring of seismic activity at volcanoes).
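Why grid points parallelize so naturally can be seen in a serial sketch of diffraction stacking; every iteration of the outer loop touches only its own row of the image, so it maps directly onto OpenMP threads, MPI ranks, or GPU blocks. The geometry, sampling, and synthetic characteristic functions below are all invented for illustration.

```python
import numpy as np

def diffraction_stack(cf, dt, stations, grid, v):
    """Diffraction stacking over a grid of trial sources. Each grid
    point is evaluated independently of all others, which is why the
    outer loop parallelizes trivially."""
    # traveltimes from each grid point to each station, in samples
    tt = np.linalg.norm(grid[:, None, :] - stations[None, :, :],
                        axis=2) / v
    shifts = np.round(tt / dt).astype(int)
    n_tau = cf.shape[1] - int(shifts.max())  # trial origin-time samples
    img = np.zeros((len(grid), n_tau))
    for g in range(len(grid)):               # independent -> parallel
        for s in range(len(stations)):
            img[g] += cf[s, shifts[g, s]: shifts[g, s] + n_tau]
    return img

# Synthetic test: CF spikes consistent with grid point 3, origin sample 50.
stations = np.array([[0.0, 5.0], [3.0, 7.0], [8.0, 4.0], [10.0, 10.0]])
grid = np.column_stack([np.arange(10.0), np.zeros(10)])
v, dt = 2.0, 0.05
tt = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2) / v
shifts = np.round(tt / dt).astype(int)
cf = np.zeros((4, 400))
cf[np.arange(4), 50 + shifts[3]] = 1.0
img = diffraction_stack(cf, dt, stations, grid, v)
```

Only when the moveout of the spikes matches a grid point do all four contributions align, so the image maximum recovers both the source grid point and the origin-time sample; distributing the outer loop across workers changes nothing about the result.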
Another promising solution is incorporating optimization algorithms into waveform-based methods. Conventional gradient-based optimization algorithms are not well suited to waveform-based methods, since the imaging function has local extrema and a decent initial model cannot be ensured. Fortunately, stochastic optimization algorithms are powerful alternatives. Stochastic algorithms are generally inspired by natural laws or empirical principles and can be classified as global optimization algorithms due to their ability to search for the extremum over the whole search space, though they are not guaranteed to converge to the global optimum consistently. These strategies are derivative-free and only need random models as initial models; they include evolutionary algorithms (e.g., the genetic algorithm), swarm intelligence (e.g., particle swarm optimization), and simulated annealing (e.g., Yang, 2010). Several such algorithms have proven effective for solving seismic source location problems; they enable efficient convergence and preserve event clustering while ensuring a high level of location accuracy. For example, Kennett et al. (2000) utilized the neighborhood algorithm to obtain reliable hypocenter and source mechanism estimates for a real earthquake. Gharti et al. (2010) completed pioneering work on incorporating differential evolution into waveform-based methods. Pesicek et al. (2014) and Verdon et al. (2017) successfully utilized a covariance matrix adaptation evolution strategy and the neighborhood algorithm, respectively, in diffraction stacking-based microseismic location. L. Li, Tan, Xie, et al. (2019) implemented three stochastic algorithms to improve the efficiency of waveform-based methods and proposed a parameter tuning workflow comprising two types of repeated tests, intended to enhance algorithmic performance by evaluating the trade-off between accuracy and efficiency.
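As a minimal illustration of how such a derivative-free optimizer replaces exhaustive grid search, the sketch below implements a bare-bones DE/rand/1/bin differential evolution loop and applies it to a toy origin-time-free traveltime misfit; the population sizes, station ring, and source position are invented, and this is not the implementation of any of the cited studies.

```python
import numpy as np

def differential_evolution(f, bounds, n_pop=20, n_gen=80,
                           fw=0.7, cr=0.9, seed=0):
    """Minimal DE/rand/1/bin minimizer: derivative-free and started
    from random models, no initial location guess required."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, (n_pop, len(lo)))
    cost = np.array([f(p) for p in pop])
    for _ in range(n_gen):
        for i in range(n_pop):
            a, b, c = pop[rng.choice(n_pop, 3, replace=False)]
            mutant = np.clip(a + fw * (b - c), lo, hi)
            cross = rng.random(len(lo)) < cr
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial < cost[i]:         # greedy selection
                pop[i], cost[i] = trial, f_trial
    best = np.argmin(cost)
    return pop[best], cost[best]

# Toy objective: origin-time-free traveltime misfit for a 2-D source.
ang = np.linspace(0, 2 * np.pi, 6, endpoint=False)
stations = 8.0 * np.column_stack([np.cos(ang), np.sin(ang)])
true_src, v = np.array([2.0, -1.0]), 3.0
t_obs = np.linalg.norm(stations - true_src, axis=1) / v

def misfit(src):
    r = t_obs - np.linalg.norm(stations - src, axis=1) / v
    return float(np.sum((r - r.mean()) ** 2))

bounds = np.array([[-8.0, 8.0], [-8.0, 8.0]])
best, best_cost = differential_evolution(misfit, bounds)
```

In a waveform-based workflow the objective would be the negative image function evaluated at a continuous trial position, so only a tiny fraction of the search volume is ever simulated, which is where the reported efficiency gains come from; the cost of a single objective evaluation then dominates the run time.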
The progress of waveform-based location methods toward real-time application can be supported by state-of-the-art high-performance computing and advanced inversion strategies. On the one hand, high-performance computing facilities are not universally available, nor are cloud computing services. On the other hand, the performance of optimization algorithms is not only problem dependent but also data dependent. Besides the algorithmic aspects, many other issues, such as the SNR, the source-receiver geometry, and the velocity model, also affect the final performance of optimization algorithms and need further investigation.

Conclusions
We present a comprehensive review of the current status of waveform-based seismic location methods and their applications at different scales, ranging from laboratory AEs to regional earthquakes. Seismic monitoring has been widely used in the wider areas of geophysics, seismology, AE, and engineering. In seismic monitoring, source location plays a central role, in that it provides fundamental information about crustal failure and active faults. Waveform-based location methods enable the detection and location of weak seismic events recorded during underground activity monitoring, which is beneficial for characterizing tectonically and volcanically active areas and for unconventional resource development.
We elaborate on the evolution, principles, and categorization of waveform-based methods, classifying them into four categories: PWS, TRI, WT, and FWI. The framework of waveform-based location methods builds on the same theoretical foundation as seismic migration and imaging, and the location process can be regarded as backpropagation of seismic energy from the receivers to the source region. Specifically, the source is localized by focusing or reconstructing the source energy at discrete grid points with a certain migration or imaging operator. A significant advantage of waveform-based methods is their noise resistance. Different preprocessing workflows have been introduced to address the challenges posed by wavefield interference, high noise levels, and polarity changes due to the radiation characteristics of the source. For example, conventional diffraction stacking generally utilizes single-channel-based CFs, while cross-correlation stacking exploits mutual channel coherency. Waveform stacking methods generally make use of direct waves, while TRI and FWI take advantage of the entire waveform, including multiples and scattered waves. WT in turn integrates collective kinematic attributes (of wavefronts) and waveform information in a hybrid framework. Owing to the additional constraints from entire waveforms, more accurate source locations with higher certainty may be expected, given reliable velocity models and phase recognition. However, as the underlying problem is highly nonlinear, reliable starting conditions, that is, a priori information on the source and the velocity model, are required. Currently, it appears unlikely that passive seismic FWI or TRI will exist as standalone tools; rather, they may coexist with the other techniques discussed in this review, which can provide these initial conditions.
The feasibility and potential of waveform-based methods have been revealed through representative applications at multiple scales associated with natural and induced seismicity. So far, most field applications of waveform-based methods have focused on PWS at the exploration and local scales, for example, for the monitoring of hydraulic fracturing in unconventional oil and gas reservoirs and of local earthquakes. Meanwhile, these methods have been effectively adapted to detect and locate low-frequency seismic events recorded at local and regional scales. Emerging applications of other methods to rockburst and AE events with comparably high-frequency content also exist, though they involve larger challenges in location uncertainty and velocity reliability. As a general remark, the waveform-based theories and methodologies introduced in this review can be directly applied and/or easily adapted to seismic source location problems at different scales.
Finally, we discuss perspectives and summarize challenges that need to be addressed in future investigations. Arguably, the velocity model currently represents the Achilles heel of most seismic location methods, since it directly determines the reliability of the location results. Strategies that jointly invert for location and velocity structure may help to confront this problem, though many challenges remain for field applications. Recent work on FWI and wavefront attributes of passive seismic events shows promising results in constraining the velocity model while locating the source. The imaging resolution achievable by waveform-based methods is largely determined by the prevailing wavelength of the events under consideration. The high computational effort can be alleviated by making use of high-performance computing facilities and advanced optimization algorithms. Passive seismic reservoir imaging is another promising perspective that could provide new opportunities in reservoir characterization. Because of the increased demand for active CO2 reduction from the atmosphere, sequestration technologies may provide an important contribution to meeting climate change targets in the future. Passive seismic monitoring is expected to play an important role in winning public acceptance of such facilities, and waveform-based methods most likely represent the primary means of realizing such monitoring systems.

Glossary
Acoustic emission (AE): Seismic wave radiation from dynamic formation of a microcrack. Due to the small crack lengths, the frequencies of the radiated waves are generally in the kHz (acoustic or ultrasonic) range.
Backprojection: Use of the wave equation to track waves backward in time. Also called backpropagation.
Beamforming: A procedure for forming an array beam. Time delays are applied to the seismograms of an array to bring the desired signal recorded at different receivers or stations into phase, followed by summing of the outputs with appropriate weights. Also called delay-and-sum processing.
Characteristic function (CF): Nonlinear transformation of the seismogram, providing information that can be exploited by a picking algorithm for event detection and phase picking or directly by a stacking algorithm for event location.
Coherency: The property of two or more waveform traces having an in-phase relationship. Also, a measure of the similarity of two or more functions.
Data domain: The data domain or space denotes the set of all possible values of the data.
Diffraction theory: A theory describing the principle and phenomenon of diffraction, which is defined as bending of waves around the corners of an obstacle or an aperture and propagating into the region of the obstacle's geometric shadow. The diffracting object or aperture effectively becomes a secondary source of the propagating wave.
Fault: A discontinuous surface separating two rock masses, across which there has been significant displacement as a result of rock mass movement. Faults are also where earthquakes commonly occur.
Full waveform inversion (FWI): The process of using the whole waveform data instead of their particular parameter characteristics to estimate the model parameters of Earth structure and seismic source, by minimizing the misfit between the observed and synthetic seismograms.
Hydraulic fracturing (HF): The process of pumping fluid into a wellbore or the formation at high injection rate and pressure to fracture the rocks.
Image domain: The image domain or space denotes the set of all possible imaging values.
Inverse problem: The problem of determining the parameters of a physical system from measured data. The process of solving an inverse problem is called inversion, which involves searching for a distribution of parameters which fits the observed set of measurements. For instance, geophysical inversions generally derive a model to describe the subsurface that is consistent with the field data.
Kirchhoff-Helmholtz theory: A theory that combines the Helmholtz equation (a time-independent form of the wave equation) with the Kirchhoff integral theorem (the wave function at a certain point can be represented as the integral/sum of wave contributions from the surroundings). It is expressed as the Kirchhoff-Helmholtz representation/integral, which states that the elastic wavefield at any point within a subsurface region can be reconstructed through a weighted integration of the wavefield and its normal derivative measured on a closed surface.
Kirchhoff migration: A migration technique that integrates along diffraction curves and places the results at the crests of the diffraction curves.
Matched field processing: An array-processing technique that originated from underwater acoustics. It aims to locate seismic noise sources through comparison of phase matches between measured and synthetic wavefields over a certain frequency bandwidth.
Microseismic event: An earthquake with a magnitude less than 3. See also Seismic event.
Migration: An inversion operation involving rearrangement of seismic information elements so that reflections and diffractions are placed at their true locations. Also called imaging, the transformation of seismic data recorded as waveforms into a scaled version of the true geometry of subsurface geologic features that produced the recorded seismic waveforms.
Model domain: The model domain or space denotes the set of all possible values of model parameters.
Objective function: A function expressing the quality of all points in the model space. Usually objective functions are defined in terms of misfit functions, which quantify the disagreement between observed and theoretical data.
Origin time: The time of occurrence of initial energy release of a seismic event. Also called excitation time.
Partial waveform stacking (PWS): A subcategory of waveform-based seismic source location methods that locates events by stacking partial waveforms (generally the primary phases).
Passive seismic monitoring: Seismic investigations that rely solely on passive monitoring or listening capabilities, without any controlled seismic energy being actively emitted by the investigator. Earthquake seismology, ambient noise studies, and microseismic monitoring are examples of passive seismic techniques, which are used for the detection and characterization of fractures and faults, amplitude spectra of ground noise, and so forth.
Receiver: An observation point or equipment where ground motion is detected and a seismogram is recorded. Also called station.
Resolution: The ability to separate two features that are close together. The minimum separation of two bodies before their individual identities are lost. For instance, a high imaging resolution of the seismic source means the source energy is highly focused and well distinctive in the imaging profile.
Reverse time migration (RTM): A seismic migration method based on backward propagation of the wavefield in time. The image of the subsurface is obtained by seeking the best match between the extrapolation of time-reversed waveform data and the theoretical waveforms based on an estimated velocity model and source parameters.
Seismic event: A coherent lineup on traces or an arrival of a new seismic wave, usually accompanied by a phase change and an increase in amplitude on a seismic record. A seismic event may indicate a reflection, refraction, diffraction, or other type of wavefront, as well as movement of the earth resulting from the abrupt release of accumulated strain energy, for example, tremors, natural earthquakes, microseismic events, and acoustic emissions, which may cover different magnitudes and frequencies and may occur naturally or be triggered/induced artificially. See also Seismicity and Triggered/induced seismicity.
Seismic source location: The spatial position and time of occurrence for a seismic event. Also refers to the process of locating a seismic event.
Seismic phase: An event on a seismogram section marking the arrival of a new wave, indicated by a change of period and amplitude.
Seismic tremor: A weak vibration of the ground usually lasting for a long time period. Tremor can be observed in a variety of tectonically active areas around the world and can be classified into different types according to their origin, such as tectonic tremor and volcanic tremor. See also Seismic event.
Seismicity: The likelihood and distribution of earthquakes. Also a general term for the activity (e.g., the frequency and intensity) of earthquakes.
Semblance: A measure of multichannel coherence, and it is basically the energy of the stack normalized by the mean energy of the components of the stack.
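As a minimal pure-Python sketch of this definition, semblance can be computed as the energy of the stack divided by N times the summed energy of the individual traces (the example traces are hypothetical):

```python
# Semblance sketch: stack energy normalized by N times the total trace energy.
def semblance(traces):
    """Return the semblance (0 to 1) of a list of equal-length traces."""
    n = len(traces)
    # Energy of the stacked trace: sum over time of the squared column sums.
    num = sum(sum(col) ** 2 for col in zip(*traces))
    # N times the summed energy of all individual traces.
    den = n * sum(a * a for tr in traces for a in tr)
    return num / den if den else 0.0

coherent = semblance([[1.0, 2.0, 1.0], [1.0, 2.0, 1.0]])    # identical traces
incoherent = semblance([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # misaligned spikes
# coherent evaluates to 1.0; incoherent is much smaller (0.5 here).
```

Perfectly aligned, identical traces give a semblance of 1, which is why semblance is a common stacking criterion in PWS-type location methods.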
Short-term average to long-term average ratio (STA/LTA): The average energy (including both signal and noise) in a short-time window divided by the average energy of waveforms in a long-time window.
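The STA/LTA ratio can be sketched in a few lines of pure Python; the window lengths and the synthetic trace below are hypothetical, and real detectors typically use recursive averages and trigger thresholds:

```python
# STA/LTA sketch: short- over long-window average energy, with both
# trailing windows ending at the current sample.
def sta_lta(trace, nsta, nlta):
    """Return the STA/LTA ratio for each sample from index nlta onward."""
    ratios = []
    for i in range(nlta, len(trace) + 1):
        sta = sum(a * a for a in trace[i - nsta:i]) / nsta  # short window
        lta = sum(a * a for a in trace[i - nlta:i]) / nlta  # long window
        ratios.append(sta / lta if lta else 0.0)
    return ratios

# Unit-amplitude "noise" followed by a stronger "signal" onset:
trace = [1.0] * 8 + [4.0, 4.0]
ratios = sta_lta(trace, nsta=2, nlta=8)
# The ratio is ~1 during noise and rises well above 1 at the onset.
```

A detection is usually declared when the ratio exceeds a preset threshold, which is how STA/LTA serves as a simple characteristic function for event detection.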
Signal-to-noise ratio (SNR): The energy or amplitude of the signal divided by all remaining (noise) energy or amplitude at the same channels and time window.
Source mechanism: The mechanism that describes the deformation (motion) and the state of the stress field in the source region that generate the recorded seismic waveforms. Also known as focal mechanism. Two different forms are generally used to express the source mechanism. The first is the fault plane solution, which describes the initial slippage of the faulting with strike, dip, and rake. The second is the moment tensor, which represents the movement on a fault during an earthquake with a second-order tensor consisting of nine force couples in the Cartesian coordinate system.
Stacking: The process of summing several seismic traces with corrected time delays (also known as moveout).
Template matching: A filtering technique that maximizes the output in response to a template signal. Filtering with a matched filter is equivalent to cross-correlating with the signal, and it is very powerful for identifying the presence of a given waveform or event in the presence of noise.
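The equivalence between matched filtering and cross-correlation can be illustrated with a minimal pure-Python sketch; the trace and template below are hypothetical, and production detectors operate on continuous multichannel data with detection thresholds:

```python
import math

# Matched-filter sketch: normalized cross-correlation of a template
# against a longer trace.
def matched_filter(trace, template):
    """Return the normalized cross-correlation at each lag (values in [-1, 1])."""
    nt = len(template)
    tnorm = math.sqrt(sum(a * a for a in template))
    out = []
    for i in range(len(trace) - nt + 1):
        seg = trace[i:i + nt]
        snorm = math.sqrt(sum(a * a for a in seg))
        cc = sum(a * b for a, b in zip(seg, template))
        out.append(cc / (snorm * tnorm) if snorm else 0.0)
    return out

# A trace containing one copy of the template starting at index 2:
template = [1.0, 2.0, 1.0]
trace = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0]
cc = matched_filter(trace, template)
# The correlation peaks at 1.0 exactly where the template occurs.
```

A peak near 1 flags an occurrence of the template waveform, which is the basis of template-matching event detection.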
Time reverse imaging (TRI): An imaging technique for source or structure imaging through backpropagating the recorded full waveforms in time.
Tomography: The science and technology of creating images of the interior of a body. For instance, seismic tomography refers to an imaging method using a set of seismic traveltime, amplitude, or waveform data to deduce the velocity and/or attenuation/scattering structure of the Earth's interior.
Traveltime: The time taken by seismic waves to propagate from one point to another.
Triggered/Induced seismicity: Minor earthquakes and tremors that are caused by human activity that alters the stresses and strains in the Earth's crust. See also Seismicity.
Uncertainty: Random variation in the values assumed by a variable. For instance, the location uncertainty is embodied by the set of locations resulting from random variation of the input data.
Waveform: The complete analog or sufficiently dense sampled digital representation of a continuous wave group (e.g., of a seismic phase) or of a whole wave train. The seismic waveforms of a whole wave train are also called seismograms.