Synthetic aperture source localization

Many state-of-the-art methods in source localization require large numbers of sensors and perform poorly, or require additional sensors, when emitters of interest transmit highly correlated waveforms. We present a new source localization technique which employs a cross-correlation measure of the time difference of arrival (TDOA) for signals recorded at two separate platforms, at least one of which is in motion. This data is backprojected through a process of synthetic aperture source localization (SASL) to form an image of the locations of the emitters in a region of interest (ROI). This method has the advantage of not requiring any a priori knowledge of the number of emitters in the scene. Nor does it rest on an ability to identify regions of the data which come from individual emitters, though if this capability is present it may improve image quality. We demonstrate that this method is capable of localizing emitters which transmit highly correlated waveforms, though complications arise when several such emitters are present in the scene. We discuss these complications and strategies to mitigate them.


Introduction
The detection and localization of sources of electromagnetic (EM) radiation has many applications in both the civilian and defense communities. An example of such an application is the search for modern personal and commercial ships and airplanes in the event of a crash or other catastrophic failure; such craft contain emergency radio sources for exactly such purposes [1].
In figure 1 we show an example of the scenario we consider in this paper, in which we use the data from two receivers to identify the location of stationary sources emitting deterministic but unknown waveforms. We assume that at least one of the receivers is moving so that their relative velocity is non-zero.
Source localization is an example of a classical inverse problem. As such, it has been studied by researchers in acoustics, optics, radar, sonar, and biophysics, with each paper using the technical language of its particular discipline. The existing literature can be organized in a variety of different ways. In [2,3] the authors divide approaches into two-step methods and one-step methods. In two-step methods, such as [1], the first step is to estimate certain scalar parameters (the relative time delays and Doppler shifts), and the second step is to use those parameters, typically in systems of polynomial equations, to obtain the source locations. One-step methods, on the other hand, operate directly on the received signals to obtain the source locations, and these methods are sometimes referred to as 'direct position determination'. Our work falls into this second class.
Of the one-step methods, an important class formulates the problem in terms of detection and hypothesis testing [4,5]. In many of the one-step methods, the source is hypothesized to lie at various candidate source positions, and the discrepancy between the (cross-correlation of the) actual signals and the computed signals from a hypothesized source is measured in terms of the minimum mean square error [6]. One then selects the candidate source position that results in the minimum discrepancy between the two. These approaches typically consider stationary receivers and a single moving source, and typically the data come from a single 'look', that is over a time interval too short to allow appreciable platform motion [2,[6][7][8][9], so that the relative time delays and Doppler shifts can be considered constants. Some variants of these methods address multiple emitters, but require the user to know or guess the number [7,8,10]. Moreover, for combining data from different looks, these methods are currently restricted to forming an incoherent sum only. These methods are often not well equipped to identify and localize sources in a scene containing several emitters which are transmitting highly similar waveforms. This motivates a search for methods which can improve on these drawbacks.
An early attempt to overcome this last difficulty was presented in [11] where the author used signals obtained at a pair of synchronized receivers to form a cross-ambiguity function for each look. The cross-ambiguity 'images' from different looks were then combined non-coherently.
Traditional active imaging methods, however, typically involve coherent summation, often using filtered backprojection [12,13]. Moreover, the problem of locating a source is closely related [14] to the problem of passive imaging, which is the problem of forming an image of the reflectivity of a scene illuminated by one or more unknown transmitters. (The scatterers that are illuminated by transmitters can be considered to be sources of reflected energy.) This problem has been studied via cross-correlating the signals at different receivers, typically assuming that the reflectivity is stochastic and delta-correlated in order to eliminate cross terms that would otherwise arise [13,[15][16][17][18][19]. The cross-correlated signals are then added coherently to form an image of the scene radiance.
To form coherent images of deterministic but unknown sources, the work [20] used a synthetic-aperture imaging approach together with the assumption that the transmitted signals are orthogonal in a certain sense. This assumption, however, may not be appropriate for some situations, and moreover the approach did not lead to a method for estimating the resolution of the resulting image.
The work in this paper draws inspiration from passive imaging, in that we use filtered backprojection to form an image of the location of stationary sources from data measured over a synthetic aperture [29][30][31][32][33][34][35]. Our assumption about the sources, however, is that they are deterministic but unknown. We make no orthogonality assumption about the waveforms. Consequently, in the process of passive coherent imaging, we must carefully treat the contribution of the cross-terms.
Coherent summation of the cross-correlation function from stationary receivers typically produces an image containing both true sources and 'ghost' sources, which result from the cross-terms [36][37][38][39]. These ghost sources can occur in any correlation imaging technique in which the data is a function quadratic in the received signals [21]. We show that coherent integration of cross-correlation data, over a synthetic aperture formed by receiver motion, causes the image of the actual sources to focus and the image of the ghost sources to smear out.
We refer to our method as synthetic aperture source localization (SASL). The mathematical formulation of this method is similar to traditional synthetic aperture radar (SAR). In traditional SAR, however, the aim is to form an image of scatterers from measurements of the reflections of a known signal [40,41] transmitted from a known location. SASL, on the other hand, both detects and geolocates emitters, via a similar imaging process, but without prior information about the transmitted signal. SASL is capable of localizing several RF emitters using only two receiving antennas, provided their relative velocity is nonzero.
SASL overcomes the array requirements of localization systems such as MUSIC and ESPRIT, which must measure a scene with more receivers than there are transmitters present [10,22]. We are able to do this because we use data from many locations. We are able to analyze the resolution of the resulting image, and find that the resolution depends on the transmitter bandwidth and center frequency, and on the geometry of the flight path relative to the source locations. This paper begins by developing a model for the cross-correlation data recorded at the two platforms. We then show how this data is filtered and backprojected to create an image of the target scene and how artifacts arise in the image. We prove two theorems which together demonstrate that, at least in the case of one stationary and one moving receiver, the cross terms will not produce focused phantom emitter peaks. We then conclude with some numerical simulations of the method under ideal noiseless conditions.

Figure 1. An example of the scenario we consider in this paper. Here the airplane is assumed to carry a receiver, and its data is combined with received signals from one of the stationary antennas.

The problem statement and data model
The goal of SASL is to form an image that shows the locations e_n, n = 1, 2, . . ., of all emitters of electromagnetic energy. We assume that these emitters all lie on a flat plane.
We model the propagation of electromagnetic waves with the scalar wave equation
(∇² − c0⁻² ∂²/∂t²) u(x, t) = j(x, t).
Here j(x, t) represents the distribution of emitter waveforms; we take this distribution to be of the form j(x, t) = Σ_n p_n(t) δ(x − e_n). For simplicity we assume that all antennas are isotropic. Although technically this is unrealistic, antennas do receive signals even through their sidelobes, and inclusion of the antenna patterns would significantly complicate the analysis. Let S_i(t) be the signal recorded at the ith receiver. Then we model the received signals as
S_i(t) = ∫∫ [e^{−i2πf(t − |x − y|/c0)} / (4π|x − y|)] J(y, f) dy df,
with x the location of the ith receiver at the time of recording. Here J(y, f) = F{j(y, t)} is the temporal Fourier transform of j, and (x, t), (x′, t) are the two spatiotemporal locations at which the signals are recorded. We assume that the clocks on the two receivers are synchronized. We allow the two receivers to move along paths x = γ1(t) and x′ = γ2(t), respectively. The motion of the receivers, however, is much slower than the speed of light. Using the language of traditional synthetic-aperture radar (SAR) imaging, we thus describe time using two scales. The 'fast time,' denoted by t, is the time scale of the wave propagation, that is, the time it takes for a signal to travel from an emitter to the receiver. The 'slow time', denoted by s, is the time scale of the platform's motion. This slow time variable thus describes how the platforms move around the scene as the data is collected. For each slow time point s_i there is thus a vector of fast time samples describing the signal recorded by the platform at that slow time location. The separation between slow time and fast time can be accomplished mathematically by application of a fast-time windowing function centered at time t = s (see [20] for example); we omit this windowing process for simplicity. Thus, making this 'start-stop approximation', we use x = γ1(s) and x′ = γ2(s).
We then take our data collected from the scene to be the cross correlation of these two signals. That is,
d(s, t) = ∫ S1(t + t′) S2*(t′) dt′ = ∫∫∫ e^{−i2πf(t − r(s, y, y′)/c0)} A(s, y, y′) J(y, f) J*(y′, f) dy dy′ df,
where r(s, y, y′) = |γ1(s) − y| − |γ2(s) − y′| and A(s, y, y′) collects the geometric spreading factors. Here J* denotes the complex conjugate of J.
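To make the data model concrete, the following Python sketch simulates the two received signals and their fast-time cross-correlation for one slow-time look. All geometry, sampling, and waveform parameters here are illustrative choices of ours, not values from the paper, and the correlation is computed circularly within the recording window.

```python
import numpy as np

c0 = 3.0e8                      # propagation speed (m/s)
fs = 50e6                       # fast-time sampling rate (Hz); illustrative
t = np.arange(0, 50e-6, 1/fs)   # fast-time axis for one slow-time look

def chirp(tt, fc, bw, T):
    """Complex linear-FM chirp of duration T, center frequency fc, bandwidth bw."""
    k = bw / T
    on = (tt >= 0) & (tt <= T)
    return np.where(on, np.exp(2j*np.pi*((fc - bw/2)*tt + 0.5*k*tt**2)), 0.0)

# One emitter and one (hypothetical) waveform p(t).
e0 = np.array([100.0, -50.0, 0.0])
p = lambda tt: chirp(tt, 30e6, 10e6, 20e-6)

def received(gamma):
    """Signal at receiver position gamma: a delayed, 1/(4*pi*R)-spread copy."""
    R = np.linalg.norm(gamma - e0)
    return p(t - R/c0) / (4*np.pi*R)

g1 = np.array([-3000.0, 2000.0, 0.0])     # receiver 1 (e.g. stationary)
g2 = np.array([-2000.0, -3000.0, 500.0])  # receiver 2, at this slow time

S1, S2 = received(g1), received(g2)

# Fast-time cross-correlation d(s, .) computed circularly via the FFT.
d = np.fft.ifft(np.fft.fft(S1) * np.conj(np.fft.fft(S2)))

# Its peak lag recovers the TDOA (|g1 - e0| - |g2 - e0|)/c0.
lag = int(np.argmax(np.abs(d)))
if lag > len(t)//2:
    lag -= len(t)               # unwrap negative lags
R1, R2 = np.linalg.norm(g1 - e0), np.linalg.norm(g2 - e0)
print(lag/fs, (R1 - R2)/c0)     # the two delays agree to within one sample
```

The FFT-based correlation is a standard trick; for a single emitter the peak of |d| sits on the TDOA, which is exactly the quantity the backprojection below exploits.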

A single emitter
If the field being measured is due to a single isotropic emitting antenna, located at e0, and radiating an EM field, then the source function takes the form j(x, t) = p(t) δ(x − e0), so that J(y, f) = P(f) δ(y − e0). Here P(f) corresponds to the Fourier transform of the waveform sent to the emitting antenna. The particular waveform sent to the emitting antenna can take a wide variety of forms; however, a chirped waveform is the most common in radar transmission [12,23].
Using this approximation the cross-correlation data is
d(s, t) = ∫∫∫ e^{−i2πf(t − r(s, y, y′)/c0)} A(s, y, y′) |P(f)|² V(y) V(y′) dy dy′ df,
where V(y) = δ(y − e0) is the source intensity function we desire to image.

Multiple emitters
We now consider the form of the cross-correlation data when multiple isotropic emitters are present. To do so we extend our previous source function to the case of N point-like emitters, each transmitting its own waveform. We thus have j(x, t) = Σ_{n=1}^{N} p_n(t) δ(x − e_n), where e_n is the location of the nth emitter. Our data is then modeled by
d(s, t) = Σ_{n,m} ∫ e^{−i2πf(t − r(s, e_n, e_m)/c0)} A(s, e_n, e_m) P_n(f) P*_m(f) df.
We can then separate the contribution to the cross-correlation data from those terms which are the correlation of the two copies of a signal from one emitter recorded at each platform and those terms which result from the correlation of two signals which were emitted from different points in the scene. Doing so produces d(s, t) = d_D(s, t) + d_C(s, t), where d_D(s, t) is the data due to the correlation of each emitted signal with itself.
Here we have chosen the subscript D to denote the fact that these are the diagonal terms as opposed to the off diagonal cross terms. These diagonal terms may be thought of as the 'correct' emitter data, or the data we would collect if there were no correlation between signals emitted from different sources. Equivalently, these terms are the data we would record if we were able to separate the data contributions from each emitter before the correlation operation was performed.
The second term d C (s, t) is then the contribution of the cross terms, that is, those terms wherein a signal from one emitter is cross correlated with the signal from a different emitter.
We shall assume that the expression Σ_{n=1}^{N} |P_n(f)|² δ(y − e_n), which appears in the diagonal d_D of the cross-correlation (12), can be approximated by an expression B(f)V(y), where V(y) = Σ_{n=1}^{N} δ(y − e_n) and B(f) accounts for the frequency content of the signals under observation. B is referred to as the power spectral density. The effect of this approximation is, in the diagonal term only, to spread all the emitted energy in the scene around to all the emitter locations. We justify the use of this simplification in section 2.3. With the approximation above, the diagonal term of the data can be written
d_D(s, t) = ∫∫ e^{−i2πf(t − r(s, y)/c0)} A(s, y) B(f) V(y) dy df,
where r(s, y) = r(s, y, y) and similarly for A. This approximation to the diagonal data has the general form of a Fourier integral operator applied to the source density function V. This observation motivates the imaging approach below in section 3.1.

Estimating the power spectral density
The function B(f), containing information about the frequency content of the signals transmitted from a target scene, is unknown to us since we assume no knowledge of the emitters prior to localization. Thus we must estimate it in order to account for its presence in our backprojection filter derived in section 3.3. If only one emitter, located at e0 and transmitting the waveform P(f), is present in the scene, then the t-autocorrelation of the signal (3) corresponds to taking V(y) = δ(y − e0) in (9) with r(s, e0, e0) = |γ1(s) − e0| − |γ1(s) − e0| = 0. Denoting the autocorrelation in the t variable by A we have
A(s, t) = ∫ e^{−i2πft} |P(f)|² / (4π|γ1(s) − e0|)² df.
Thus, in the single emitter case Σ_{n=1}^{N} |P_n(f)|² δ(y − e_n) = B(f)V(y) holds true, and B(f) can be extracted from the signal data by Fourier transforming the autocorrelation function. The factor (4π|γ1(s) − e0|)² is constant for a single slow time look and can be treated as a simple scale factor which is absorbed into the data. Additionally, in the case that the first receiver is stationary, γ1(s) = constant, the scale factor is constant over all slow time looks.
We generalize the results obtained above to the case of several emitters in the scene. Autocorrelation of the signal recorded at one receiver produces a superposition of the diagonal powers |P_n(f)|², each scaled by its geometric spreading factor, together with cross products of the form Σ_{m≠n} P_n(f) P*_m(f). Here we take B(f) = Σ_{n=1}^{N} |P_n(f)|² and approximate it by setting B′(f) = F{A(s, t)}. This provides a scaled estimate of the spectral content of the signal which can be used to unambiguously define a filter Q to be applied in the imaging operator. While B′(f) provides less information for us to construct the filter than does the perfect matched filter employed in traditional SAR, we must be satisfied with something less than a perfect estimate of the spectral content, since we assume that the emitted waveforms are unknown and inseparable in our recorded data. We shall write Σ_{m≠n} P_n(f) P*_m(f) for convenience in discussing the cross term components of the data.
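A small numerical sketch of this estimate (with hypothetical waveforms of our own choosing): the fast-time autocorrelation of one receiver's record is computed circularly with the FFT, and Fourier transforming it returns |S1(f)|², i.e. the diagonal powers plus the cross products discussed above, which is exactly the scaled estimate B′(f).

```python
import numpy as np

fs = 50e6
t = np.arange(0, 50e-6, 1/fs)

def chirp(tt, fc, bw, T):
    """Complex linear-FM chirp: duration T, center frequency fc, bandwidth bw."""
    k = bw / T
    on = (tt >= 0) & (tt <= T)
    return np.where(on, np.exp(2j*np.pi*((fc - bw/2)*tt + 0.5*k*tt**2)), 0.0)

# One receiver's record: two overlapping emitter signals (delays arbitrary).
S1 = chirp(t - 5e-6, 15e6, 10e6, 20e-6) + chirp(t - 8e-6, 22e6, 8e6, 20e-6)

# Fast-time autocorrelation A(s, .), computed circularly via the FFT.
A = np.fft.ifft(np.fft.fft(S1) * np.conj(np.fft.fft(S1)))

# B'(f) = F{A}: by the Wiener-Khinchin relation this is just |S1(f)|^2,
# i.e. the sum of the |P_n|^2 terms plus the P_n P_m* cross products.
Bprime = np.fft.fft(A).real
freqs = np.fft.fftfreq(len(t), 1/fs)

# Numerically, the estimate equals the squared magnitude spectrum:
print(np.max(np.abs(Bprime - np.abs(np.fft.fft(S1))**2)))
```

This makes the limitation visible as well: B′ cannot separate the diagonal powers from the cross products, which is why it is only a scaled estimate rather than a matched filter.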

Emitter imaging
We turn now to the derivation and analysis of the back projection operator which shall act on our data to form the final image.

Image formation
The image formation strategy is to construct an approximate inverse [12,17,40] to the operator of (13). This is done by filtered backprojection, which can be thought of as a form of matched filtering. The idea is to construct a Fourier integral operator that is a filtered adjoint of the forward operator of (13); this filtered adjoint has a phase which is the negative of that of (13), with y replaced by the hypothesized location z, and the (filtered) adjoint integrates over the data variables s and t. Thus we look for the approximate inverse in the form
I(z) = ∫∫ e^{−i2πf r(s, z)/c0} Q(s, f, z) D(s, f) ds df,   (20)
where D(s, f) denotes the fast-time Fourier transform of d(s, t) and the filter Q is determined below.
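The structure of this filtered adjoint can be illustrated with a toy Python sketch. We build synthetic fast-time spectra for a single source using a hypothetical phase convention D(s, f) = exp(+i 2π f r(s, y)/c0), with unit amplitudes and the filter Q set to one, and backproject with the conjugate phase; the resulting image peaks at the true location. Geometry and parameters are invented for illustration.

```python
import numpy as np

c0 = 3.0e8
freqs = np.linspace(10e6, 30e6, 64)       # fast-time frequencies (Hz)
slow = np.linspace(0.0, 10.0, 8)          # slow-time samples

g1 = np.array([-3000.0, 2000.0])          # stationary receiver (2-D, flat scene)
def g2(s):                                # moving receiver (hypothetical path)
    return np.array([400.0*s - 2000.0, -3000.0])

def r(s, z):                              # range difference r(s, z, z)
    return np.linalg.norm(g1 - z) - np.linalg.norm(g2(s) - z)

def backproject(D, z_grid, Q=lambda s, f, z: 1.0):
    """Filtered adjoint: sum Q * exp(-i 2 pi f r(s,z)/c0) * D(s,f) over s, f."""
    image = np.zeros(len(z_grid), dtype=complex)
    for i, z in enumerate(z_grid):
        for j, s in enumerate(slow):
            phase = np.exp(-2j*np.pi*freqs*r(s, z)/c0)
            image[i] += np.sum(Q(s, freqs, z) * phase * D[j])
    return image

# Synthetic diagonal data for one source at z0 (unit amplitude, no filter).
z0 = np.array([100.0, -50.0])
D = np.array([np.exp(2j*np.pi*freqs*r(s, z0)/c0) for s in slow])

z_grid = [np.array([x, -50.0]) for x in np.linspace(-500.0, 500.0, 41)]
I = backproject(D, z_grid)
best = max(range(len(z_grid)), key=lambda i: abs(I[i]))
print(z_grid[best])    # the peak lands at (or next to) the true x = 100
```

Only at z = z0 do all (s, f) contributions add in phase, which is the mechanism the stationary-phase analysis below makes precise.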

Image analysis
Substituting our expression (12) for d(s, t) into (20), we obtain a sum of two contributions, where we have denoted the source intensity function of the diagonal terms by V(y) and that of the cross terms by W(y, y′). We are interested in forming an image which recovers the function V(y) as accurately as possible. The function W(y, y′) is a nuisance which we would like to eliminate, or at least minimize the presence of. The effect of not having W(y, y′) = 0 will become apparent in our numerical simulations.
We thus wish to choose a filter which recovers V(y) as accurately as possible, regardless of its impact on the undesirable term W(y, y′). We therefore consider only the first term in equation (21) when constructing Q. Thus we imagine that we can form an image Ĩ(z) according to
Ĩ(z) = ∫∫ e^{−i2πf r(s, z)/c0} Q(s, f, z) D_D(s, f) ds df,   (22)
where D_D denotes the fast-time Fourier transform of d_D, and where the kernel K of this operator, acting on V, is
K(z, y) = ∫∫ e^{i2πf(r(s, y) − r(s, z))/c0} Q(s, f, z) A(s, y) B(f) ds df.
Here we have written r(s, y) = r(s, y, y) and similarly for A(s, y). We would like to choose the filter Q so that K is as close to the delta function δ(y − z) as possible. That is, we hope to find a choice of Q so that K ≈ ∫ e^{i2π(y−z)·ξ} dξ, at least around those points which contribute most to the image I(z). This will allow us to reconstruct V(y) as accurately as possible.
To determine the critical points which contribute most to the value of I(z) we apply the method of stationary phase. A more detailed look at the method of stationary phase is presented in [24].

Stationary phase for the image kernel.
The points at which the phase of K is stationary are those for which
r(s, y) = r(s, z)   and   ∂r/∂s (s, y) = ∂r/∂s (s, z),
where, writing R_{1,s,y} = γ1(s) − y (and similarly for R_{2,s,y}), the hat denotes the unit vector: R̂ = R/|R|. The first condition amounts to a restriction that the points y and z must lie on the same curve of time difference of arrival (TDOA), while the second equation is a condition on the frequency difference of arrival (FDOA) of the two points.
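These two conditions can be probed numerically. In the hypothetical geometry below (our own choice of paths and points), we locate a point z ≠ y on the same TDOA curve as y by bisection, and check that the FDOA-type quantity ∂r/∂s differs there, so the phase is stationary only at y = z.

```python
import numpy as np

c0 = 3.0e8
g1 = np.array([-3000.0, 2000.0, 0.0])            # stationary receiver
def g2(s):                                        # moving receiver (hypothetical)
    return np.array([100.0*s - 2000.0, -3000.0, 500.0])

def r(s, y):                                      # range difference r(s, y, y)
    return np.linalg.norm(g1 - y) - np.linalg.norm(g2(s) - y)

def drds(s, y, h=1e-4):                           # numerical d/ds of r (FDOA term)
    return (r(s + h, y) - r(s - h, y)) / (2*h)

s0 = 5.0
y = np.array([100.0, -50.0, 0.0])
target = r(s0, y)

# Bisect along the line y2 = 300 for a point with the same TDOA at s0.
f_line = lambda a: r(s0, np.array([a, 300.0, 0.0])) - target
lo, hi = -2000.0, 2000.0
for _ in range(80):
    mid = 0.5*(lo + hi)
    if f_line(lo)*f_line(mid) <= 0:
        hi = mid
    else:
        lo = mid
z = np.array([0.5*(lo + hi), 300.0, 0.0])

print(r(s0, z) - r(s0, y))        # ~0: z lies on the same TDOA curve as y
print(drds(s0, z) - drds(s0, y))  # nonzero: the FDOA condition fails at z
```

Points sharing the TDOA but not the FDOA contribute only non-stationary (rapidly oscillating) phase, so their contributions largely cancel in the image integral.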
One solution of (24) and (25) is the point y = z. These are the points in image space which correspond to the correct location in the scene. In [24] it is shown that, when one receiver is stationary throughout the entire data collection, these are the only critical points. When both receivers are in motion, the FDOA curves are solutions of high-degree polynomials and can be quite complicated. A full analysis of this case is a subject of further work.

Stolt-like change of variables.
In order to force the phase of K to match that of the delta function in a region around y = z, we use the identity (27) together with a change of variables (s, f) → ξ similar to that of Stolt [25]. Applying this change of variables allows us to make the substitution of ξ for the data variables in the kernel integral, which introduces the factor η⁻¹, where η = ∂ξ/∂(s, f) is the Jacobian determinant of the change of variables; η⁻¹ is referred to as the Beylkin determinant [12,13] for the PSF. A more detailed analysis of this factor is presented in [24].
Our goal is to choose Q so that (30) is as close as possible to δ(y − z) = ∫ e^{i2πξ·(y−z)} dξ.

Derivation of the filter Q
First we note that applying the Stolt change of variables puts K into the form of a pseudodifferential operator (ΨDO), provided that A satisfies a certain 'symbol estimate' (see [40] and references therein). From (30), we then desire to choose Q to be the reciprocal of all of the non-phase factors in the integrand, so that their product will be approximately equal to unity in those regions near a critical point. There are two difficulties with this choice. The first is that A is a function of the scene variable y while Q is a function of the image variable z. The second difficulty is that we do not know the value B(f) a priori.
The second difficulty has already been addressed in our estimation of B(f) by B′(f). To address the first problem with our desired filter, recall that the geometric spreading factors of the diagonal terms take the form
A(s, y) = [(4π|γ1(s) − y|)(4π|γ2(s) − y|)]⁻¹.
We have shown that the bulk of the image integral comes from those points satisfying y = z. We thus evaluate the geometric spreading factor A for y = z, so that we calculate A(s, z, z) for the purposes of determining Q.
Finally, we account for any regions in which the function A is zero by application of a smooth cutoff function χ(s, f, z) which is equal to one over the data collection manifold and zero outside it. Our filter then takes the form
Q(s, f, z) = χ(s, f, z) η(s, f, z) / [A(s, z, z) B′(f)],
the reciprocal of the non-phase factors in (30) restricted to the support of the data. The imaging operator is then (20) with this choice of Q. Since K is a ΨDO, standard theorems of microlocal analysis [12,26] imply that the singularities of the target function V(y) will be reconstructed in their correct locations and with their correct orientations, provided they were visible to the receiver during the data collection.
We have thus constructed an imaging operator which is guaranteed to reconstruct the diagonal data terms in their correct locations. This allows for the localization of the emitters which give rise to those terms. We shall now consider the resolution which can be achieved using this imaging method.

Resolution analysis
With our choice of filter, the kernel of our imaging operator is approximately
K(z, y) ≈ ∫_Ω e^{i2πξ·(y−z)} dξ,
where Ω is the region of ξ for which we have data. The resolution of our image is then determined by this data collection manifold (DCM) [12], that is, the region of ξ where we have data.
Recall that we have defined ξ = (f/c0) Ξ(s, y, z), where Ξ is defined in equation (29). The critical points of this function occur at y = z. At these critical points, we may write the scene surface as (x1, x2, ψ(x1, x2)), with ψ some smooth function describing terrain elevation. Then we obtain
ξ = (f/c0) Pψ (R̂_{1,s,z} − R̂_{2,s,z}),   (40)
where Pψ denotes the projection onto the plane tangent to the surface at z. We note here that the two unit vectors on the right-hand side of (40) point along the lines connecting the locations of the receivers to the point z in the scene. The projection matrix operating on the difference of the unit vectors projects these unit vectors onto the plane tangent to the surface at the point z. Thus, for a flat earth, the change of variables reduces to
ξ = (f/c0) (R̂_{1,s,z} − R̂_{2,s,z})_{1,2},   (41)
the first two components of the difference of unit vectors. In the resolution analysis below, we will use the convention that the interval in Fourier space [−b, b] corresponds to a null-to-null resolution of 1/b. This comes from the calculation ∫_{−b}^{b} e^{i2π(δx)ξ_x} dξ_x = 2b sinc(2πb(δx)). In other words, better resolution is obtained from a large region in ξ.
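The null-to-null convention can be verified numerically (the value of b below is arbitrary): a midpoint-rule quadrature of the point-spread integral over [−b, b] reproduces the closed form sin(2πb δx)/(π δx) and vanishes at δx = 1/(2b), giving a null-to-null width of 1/b.

```python
import numpy as np

b = 10.0                          # half-width of the Fourier support (arbitrary units)
n = 4000
dxi = 2*b/n
xi = -b + (np.arange(n) + 0.5)*dxi    # midpoint-rule nodes on [-b, b]

def kernel(dx):
    """Midpoint-rule quadrature of the point-spread integral over [-b, b]."""
    return (np.exp(2j*np.pi*dx*xi).sum() * dxi).real

dx = 0.0123
closed = np.sin(2*np.pi*b*dx)/(np.pi*dx)   # 2b sinc(2 pi b dx), unnormalized sinc
print(kernel(dx), closed)                  # quadrature matches the closed form

print(kernel(0.0))        # mainlobe peak: exactly 2b
print(kernel(1/(2*b)))    # first null at dx = 1/(2b); null-to-null width is 1/b
```

The same calculation shows why a larger ξ extent b directly translates into a narrower mainlobe.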

Cross range resolution.
We shall consider the cross-range (along-track) resolution of an image created from the data collected by a receiver flying a straight line along the flight path γ 2 (s) = (s, 0, h) where h is a constant height above the ground plane. We shall assume that the first receiver is stationary so that γ 1 (s) = (γ 1x , γ 1y , γ 1z ). Figure 2 shows the DCM for a point (10 km, 10 km) at the center of the scene under observation for γ 1 = (10 km, 20 km).
We obtain an expression for the along-track resolution by considering a pair of points which differ only in their azimuthal component. That is, y − z = (y_x − z_x, 0, 0). The phase expression of (36) in K is then 2π(y_x − z_x)ξ_x. This implies that it is only the first coordinate of ξ which affects the along-track resolution. From equation (41) this is given by
ξ_x = (f/c0) (R̂_{1,x} − R̂_{2,x}),
where R̂_x denotes the x-component of the unit vector R̂. Thus we see that the resolution depends on the emitter bandwidth and center frequency, and on the geometry of the receivers relative to the emitter position. Better resolution is obtained when the x-component of the difference of unit vectors is large, which means that resolution is better if the receivers are on opposite sides of the emitter. Co-locating the receivers, for example, results in very poor resolution. Better resolution also corresponds to a large angular aperture.
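The geometric factor can be illustrated numerically. In this sketch (positions are our own), we compare the x-component of the difference of the two scene-to-receiver unit vectors for receivers on opposite sides of the emitter versus nearly co-located receivers.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

z = np.array([10e3, 10e3, 0.0])    # scene point under consideration

def factor_x(g1, g2):
    """|x-component of (Rhat_1 - Rhat_2)|, which scales the usable xi_x extent."""
    return abs((unit(z - g1) - unit(z - g2))[0])

opposite  = factor_x(np.array([0.0, 10e3, 0.0]), np.array([20e3, 10e3, 500.0]))
colocated = factor_x(np.array([0.0, 10e3, 0.0]), np.array([0.0, 10e3, 500.0]))
print(opposite, colocated)   # opposite sides: near the maximum of 2; co-located: near 0
```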

Range resolution.
We turn now to the other component of the image quality, the range resolution. From our argument in section 3.4.1, the range resolution of the constructed SASL image is determined by the range of values of
ξ_y = (f/c0) (R̂_{1,y} − R̂_{2,y}).
Note that the quantity in the parentheses is bounded by 2. For the flight path described above we have γ_{2y} = 0 and γ_{1y} constant. The measure of the resolvability of target emitters in the range direction thus depends mainly upon the range of values of f, that is, on the bandwidth of the transmitted waveform.
For the data collection geometry in figure 4, and assuming a transmitter with bandwidth B_w, we find that the range of values taken on by ξ_y is an interval of half-width roughly B_w R_T / (2c0), where R_T depends on the receiver locations. As above, co-location of the two receivers provides the worst resolution. The resolution in the range direction for an emitter with bandwidth B_w is thus ∆_r ≈ 2c0/(B_w R_T). Numerical examples of resolution estimates, together with associated simulations, are given in [24].
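As a quick arithmetic check of this estimate (the numbers are illustrative, with B_w matching the simulated chirps of the numerical section):

```python
c0 = 3.0e8
Bw = 20e6                       # emitter bandwidth (Hz)
for RT in (2.0, 1.0, 0.5):      # geometry factor; bounded above by 2
    delta_r = 2*c0/(Bw*RT)      # range-resolution estimate (m)
    print(RT, delta_r)          # 2.0 -> 15 m, 1.0 -> 30 m, 0.5 -> 60 m
```

The best achievable range resolution at this bandwidth (R_T at its bound of 2) is 15 m, degrading as the receivers approach co-location.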
While the discussion above describes the resolution of the SASL image, the effect of the cross terms on the image has not yet been considered. We are interested in what can be shown regarding where and when they will appear in the image and whether this information can help us mitigate their appearance. We begin by examining what can be said regarding how the cross terms in the data are projected into the image space.

The cross term backprojections
Since every cross term is the result of exactly two emitters in the scene, we can restrict our analysis to the two-emitter case. The multi-emitter case then follows easily as the sum over the cross term effects of all pairs of emitters present. From (12) we have that the cross terms in the data due to two emitters in the scene located at e1 and e2 are
d_C(s, t) = ∫ e^{−i2πf(t − r(s, e1, e2)/c0)} A(s, e1, e2) P1(f) P*2(f) df + ∫ e^{−i2πf(t − r(s, e2, e1)/c0)} A(s, e2, e1) P2(f) P*1(f) df.
For simplicity we will consider an analysis of the first term only. The analysis of the second term is identical. The backprojection operator acts on the first term of d_C(s, t) to yield, for each slow time step s, a backprojected return along the hyperbola
|γ1(s) − z| − |γ2(s) − z| = |γ1(s) − e1| − |γ2(s) − e2|.
This backprojection will have an amplitude A_{e1e2}(s) determined by the geometric spreading factors and the filter. Note that the location of the cross term backprojection for a given slow-time step is a function of the locations of both emitters and both receivers. We have already used the pseudolocal property of a ΨDO to demonstrate that the backprojections of diagonal terms interfere constructively in regions in which an emitter is located and destructively elsewhere, regardless of flight path.

Figure 2. The data collection manifold (DCM) when the stationary receiver γ1 is located at (10 km, 20 km), the source is at (10 km, 10 km), and the moving receiver γ2 travels along the x axis.
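The contrast between diagonal and cross terms can be checked numerically. For the hypothetical geometry below (our own choice), the emitter e1 lies on every backprojection curve of its diagonal term, while a grid search shows that no single point lies on all of the cross term's curves, so the cross-term energy smears rather than focusing.

```python
import numpy as np

g1 = np.array([-3000.0, 2000.0, 0.0])                 # stationary receiver
def g2(s):                                             # moving receiver
    return np.array([400.0*s - 2000.0, -3000.0, 500.0])

e1 = np.array([100.0, -50.0, 0.0])
e2 = np.array([-400.0, 300.0, 0.0])

def r(s, z):                                           # r(s, z, z)
    return np.linalg.norm(g1 - z) - np.linalg.norm(g2(s) - z)

slow = np.linspace(0.0, 10.0, 11)

def worst_residual(rhs, z):
    """Max over slow time of the TDOA mismatch between z and a term's curves."""
    return max(abs(r(s, z) - rhs(s)) for s in slow)

diag_rhs  = lambda s: np.linalg.norm(g1 - e1) - np.linalg.norm(g2(s) - e1)
cross_rhs = lambda s: np.linalg.norm(g1 - e1) - np.linalg.norm(g2(s) - e2)

print(worst_residual(diag_rhs, e1))     # 0: e1 is a focused point of its term

xs = np.linspace(-1500.0, 1500.0, 61)
best = min(worst_residual(cross_rhs, np.array([x, y, 0.0]))
           for x in xs for y in xs)
print(best)                              # bounded away from 0: no focused point
```

This is exactly the behavior the two theorems below establish analytically for one stationary and one moving receiver.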
We will now present two theorems which demonstrate that this is not the case for the cross terms when one receiver is held stationary and the other flies at a constant height h above the ground plane. We shall arbitrarily denote receiver γ1 as the stationary receiver. Indeed, we are able to prove that, with the exception of a small number of specific flight paths easily avoided by a pilot, the cross term backprojections will never focus at a point which is not the location of an emitter. Thus, except for these few flight paths, the only peaks in the SASL image will be the locations of the target emitters.
3.5.1. The backprojection envelope. Before presenting the theorems regarding the backprojection of the cross terms it is necessary to briefly introduce some additional mathematical foundations.
The envelope of a family of curves is a curve that is tangent to every curve in the family [27,28].
If, for some data term, we can demonstrate that the envelope of the family of backprojection curves degenerates to a single point, that is, that all such curves overlap at a single location, then those backprojections will constructively interfere in the image producing a strong peak. If, on the other hand, the backprojection curves of a cross term do not intersect at a single point we will have shown that any artifacts due to the backprojection imaging process for that term will gradually blur out along some curve in the image as the receiver moves along its flight path.
First we note the following theorem: If the curves of the family f (x, y, C) = 0 have no singular points, the curve defined by the system of equations f (x, y, C) = 0, f C (x, y, C) = 0 is the envelope of the family provided it also has no singular points [27,28].
The backprojection hyperbolas for a term in the data collection are the family of curves
f(z, s) = |γ1(s) − z| − |γ2(s) − z| − (|γ1(s) − e1| − |γ2(s) − e2|) = 0.
Differentiating with respect to the parameter s, and noting that since the first receiver is stationary, γ1(s) is constant and thus γ̇1(s) = 0, we obtain
∂f/∂s = γ̇2(s) · [ (γ2(s) − e2)/|γ2(s) − e2| − (γ2(s) − z)/|γ2(s) − z| ] = 0.
Thus the envelope is the curve satisfying the system of equations f(z, s) = 0 and ∂f/∂s (z, s) = 0. Note that neither equation has any singular points, since we assume that the receiver platforms are separated some distance from the scene of emitters.
We define a focused point of a family of curves as a point through which every member of the family passes. We shall first prove that if such a point exists for the family of backprojection hyperbolas in our model, then it is unique. Thus, if all the backprojections for a data term constructively interfere at a point, they do not constructively interfere at any other point in the scene. Once uniqueness is demonstrated we will examine existence.
For simplicity of notation we shall allow e i to refer both to the location of the ith emitter in the scene as well as shorthand to reference the emitter itself and do the same with the receiver located at γ i (s) at slow time step s.

Theorems on cross-term focusing.
In the appendices we include the proofs of the following theorems. Theorem 1. Let two receivers observe a scene, with one (γ2(s)) in motion and the other (γ1(s)) stationary, and let γ2(s) fly at a constant height h above the ground plane. If γ2(s) flies any flight path through or around the target scene for which its velocity vector is not persistently aimed at γ1(s), then: if the family of hyperbolas, over which a data term is backprojected according to the imaging operator defined in equation (35), has a focused point, that focused point is unique.
The proof of this theorem can be found in appendix A. The uniqueness of focused points in the SASL image having been established, we can turn to the question of existence. Before doing so however we briefly illustrate the result of placing the stationary receiver at various points relative to the flightpath of the moving receiver. In figure 3 we plot examples of the family of backprojection hyperbolas for one receiver flying a straight line flight path along the horizontal axis and for five possible locations of the other, stationary, receiver.
We see that, as proven in theorem 1, when the receiver in motion flies directly toward or away from the stationary receiver (figures 3(b) and (f)) the backprojections constructively interfere at the emitter location and also at a second mirror point on the opposite side of the aircraft. This is reminiscent of the left-right ambiguity encountered in traditional monostatic SAR. See [12] for more on this.
However, when the stationary receiver is moved to another location in the scene (figures 3(c)-(e)) the backprojections do not pile up at any such point.
Our second theorem demonstrates that the only focused points in the SASL image are the emitter locations.

Theorem 2. If, as described in the data collection model of theorem 1, one receiver is stationary and not in the same vertical plane with the moving receiver's flight path, the only focused points for all backprojection hyperbolas in the image given by equation (35) are the emitter locations.
The proof of this theorem can be found in appendix B. Since the cross term backprojections do not focus at points other than the locations of emitters, the contribution of the cross terms to the SASL image will be to gradually blur out along the envelope of the family of backprojections as the receiver travels along its path. We will now consider numerical simulations which illustrate this insight.

Numerical examples
First we consider the case in which the transmitted signals are dissimilar, having different bandwidths, center frequencies, and duty cycles. In this first example we have a scene which contains nine emitters transmitting signals with center frequencies in the range of 10 MHz to 50 MHz and bandwidths between 10 MHz and 20 MHz. We assume each emitter is transmitting isotropically and that the data is collected in the absence of noise.
The noise which will be present in real-world data is due to complex multiplicative factors, rather than the well-studied case of additive white noise. In the interest of brevity and clarity we therefore leave a detailed discussion of noise for a future paper and consider only the noiseless case here.
In figure 4 we show the scene geometry. For these simulations we take one receiver to be stationary and ground-based. The other receiver flies a straight line flight path at a height of 1 km above the ground plane.
We display the resulting backprojected images for this scene in figures 5 and 6. In both figures, the image on the left assumes the signal contributions are separable before any processing is done; this ensures that d_C(s,t) = 0 by construction. On the right in figures 5 and 6, we use the more realistic model in which we assume such separation is impossible a priori. In this case the term d_C(s,t) is not necessarily zero, and the magnitude of its terms will depend on the degree of correlation between the different signals. Figures 5(a) and (b) show the backprojected images for the case in which each emitter transmits a different waveform. Here we find that, despite including the contribution to the data set which arises from correlation between different emitters in the scene, the resulting image remains highly accurate, with few additional artifacts.
We conclude from this that the cross terms will have little to no effect on the ability to identify and localize emitters in the scene when the emitters transmit uncorrelated waveforms. This should accurately reflect the situation in many target scenes: we expect that operators of emitters in a scene will often intentionally transmit waveforms which are uncorrelated with those transmitted by other operators in the same scene, in order to distinguish their own signals from those of the surrounding emitters. However, some scenes of interest may contain one or more pairs of highly correlated emitters. In figure 6, we simulate an example in which nine emitters in a target scene, located at the same points as those in the previous case, emit identical chirps. Since the waveforms are identical and are transmitted simultaneously, the correlation between the signals arriving from different emitters should approximate a worst-case scenario, providing insight into the limitations of our method. For this example these chirps have center frequency f_c = 30 MHz and bandwidth b_w = 20 MHz.
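To see concretely why identical waveforms are a worst case, consider a toy sample-domain model (the sample rate and integer-sample delays below are illustrative assumptions): each receiver records the sum of two delayed copies of the same chirp, and the cross-correlation then contains cross-term peaks at spurious lags whose strength is comparable to the true TDOA peaks.

```python
import numpy as np

fs = 200e6
t = np.arange(0, 20e-6, 1/fs)
k = 20e6 / t[-1]
s = np.cos(2*np.pi*20e6*t + np.pi*k*t**2)  # one chirp: fc = 30 MHz, bw = 20 MHz

def delayed(sig, d, total):
    out = np.zeros(total)
    out[d:d+len(sig)] = sig
    return out

N = len(s) + 400
# Two emitters transmit the SAME waveform; each receiver records the sum
# of two differently delayed copies (delays in samples, illustrative).
r1 = delayed(s, 50, N) + delayed(s, 300, N)    # receiver 1
r2 = delayed(s, 120, N) + delayed(s, 260, N)   # receiver 2

xc = np.abs(np.correlate(r2, r1, mode='full'))
val = {lag: xc[lag + N - 1] for lag in (70, -40, -180, 210)}
# Lags 70 and -40 are the true (diagonal-term) TDOAs; lags -180 and 210
# are cross-term peaks from correlating one emitter's copy with the other's.
print(val[-180] / val[70])   # close to 1: the phantom is as strong as a true peak
```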
In this case the backprojection images which result from the two data sets are significantly different. In figure 6(a) the emitters are clearly distinguishable and no phantoms or excessive artifacts are present. In figure 6(b), however, several artifacts are present in the reconstructed scene, and some of these artifacts are roughly on par with the strength of the true emitters. Furthermore, some of the true emitters appear slightly more blurred than in figure 6(a), and in the center line the emitter positions overlap with some of the artifacts. This causes the emitter strengths to be incorrectly increased with respect to the strength of the reconstruction of the emitters along the top and bottom of the scene.
We conclude that when a scene contains several identical emitters the effect of the cross terms can be of significant concern.
We also show a simulation in which the two platforms observe the same scene previously shown in figure 6. In this case however, the platform in motion flies a longer flight path which is curved to observe the scene from a greater number of viewing angles. This is shown in figure 7(b).
The results of the two flight paths are compared side by side in figures 8(a) and (b). As previously seen, the cross term artifacts contribute significantly to the final image in figure 8(a). As proven in the two theorems above, while the cross term artifacts do not disappear when the scene is viewed from a larger number of angles, their contribution becomes more spread out along the envelope of the family of backprojections. On the other hand, viewing the scene from a wider variety of angles causes the emitter locations imaged by the diagonal data terms to become more sharply focused.
Finally we present a series of images created with apertures of progressively larger sizes to illustrate how the image of the emitters gradually sharpens while the image phantoms due to the cross terms blur out as the aperture lengthens. In figure 9 we have a scene with two emitters transmitting identical signals. We see that the two emitter locations become more discernible as the aperture lengthens while the cross term phantoms gradually become less of a nuisance.
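The aperture-length effect can be illustrated with a simplified geometric model in which each backprojection is an idealized hyperbola (no waveform model; the geometry below is an illustrative assumption). We first locate a phantom point on the cross-term hyperbola at the aperture center, then count how often that hyperbola revisits the phantom as the aperture grows: on a short aperture the phantom is "hit" at nearly every slow time, while on a long aperture the hits become a vanishing fraction, which is the blurring-out described above.

```python
import numpy as np

g1 = np.array([0.0, 6000.0, 0.0])       # stationary receiver (illustrative)
e1 = np.array([-1000.0, 2000.0, 0.0])   # two ground-plane emitters
e2 = np.array([1000.0, 2000.0, 0.0])

def g2(x):
    # moving receiver on a straight path at 1 km altitude
    return np.array([x, 0.0, 1000.0])

def rdiff(z, g):
    return np.linalg.norm(g1 - z) - np.linalg.norm(g - z)

def cross_tdoa(x):
    # cross-term TDOA: e1's signal at g1 against e2's signal at g2(x)
    return np.linalg.norm(g1 - e1) - np.linalg.norm(g2(x) - e2)

# Locate a phantom point on the cross-term hyperbola at the aperture
# center (x = 0) by bisection along the line {(0, y, 0)}.
lo, hi = 0.0, 3000.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if rdiff(np.array([0.0, mid, 0.0]), g2(0.0)) > cross_tdoa(0.0):
        lo = mid
    else:
        hi = mid
phantom = np.array([0.0, 0.5 * (lo + hi), 0.0])

def hit_fraction(z, xs, width=30.0):
    # fraction of slow times at which the cross-term hyperbola passes
    # within `width` meters (in range difference) of the point z
    return float(np.mean([abs(rdiff(z, g2(x)) - cross_tdoa(x)) < width
                          for x in xs]))

short = np.linspace(-50.0, 50.0, 11)        # short aperture
long_ = np.linspace(-8000.0, 8000.0, 321)   # long aperture
frac_short = hit_fraction(phantom, short)
frac_long = hit_fraction(phantom, long_)
print(frac_short, frac_long)  # phantom persists on the short aperture only
```

By contrast, a diagonal-term hyperbola passes through its emitter exactly, at every slow time, for any aperture, which is why lengthening the aperture sharpens the emitters while diluting the phantoms.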

Conclusions and future work
In conclusion, we have demonstrated a new approach to source localization using a backprojection imaging technique we have named synthetic aperture source localization (SASL). This technique has advantages over current state-of-the-art methods in that it images the emitters without attempting to identify signal sources a priori. The SASL method also shows promise in improving the state of the art for localizing highly correlated sources, which are a difficulty for many other source localization methods.
In this analysis we have made some assumptions that might seem restrictive but in fact have minimal effects.
• The assumption of isotropic antenna beam patterns in practice has little effect, for several reasons: (a) for many signals of interest, adequate resolution can be obtained from a narrow angular aperture, over which an antenna beam varies little; (b) the combination of cross-correlation, backprojection, and coherent imaging acts as an effective noise filter, suppressing signals that do not transform as sources. This means that low signal intensity in one receiver is not a problem.
• Although our analysis of cross terms was only for the case when one receiver is stationary, we expect the method to work well with two or more receivers in arbitrary motion (although of course some trajectories are preferable). We have here limited ourselves to the simplest case of one moving platform, because it is easiest to test and simpler to study theoretically. But having two moving receiver platforms is actually an advantage, because it increases the effective synthetic aperture.
• Although in deriving the imaging operator we used an approximation that assigns the same energy spectral density to all emitters, we see from the simulations that the method actually works better for sources emitting distinct waveforms. This is because cross-correlation tends to suppress cross terms.
A number of issues we leave for the future:
• the effects of noise on the resulting SASL image (see discussion in [24]);
• the effects of errors in the receiver positions;
• the effects of clock synchronization error and methods for synchronizing the receiver clocks;
• inclusion of the effects of realistic antenna beam patterns;
• analysis of the more complicated case when both receivers are moving.
Appendix A. Proof of theorem 1

This implies that the difference between the distances from the second receiving platform to the points z_0 and z_1 remains the same as the platform moves from γ_2(s_1) to γ_2(s_2). Since we have postulated that z_0 ≠ z_1, the platform must move along some hyperboloid with the two focused points as its foci.
This can be seen more clearly by setting |γ_2(s_1) − z_0| − |γ_2(s_1) − z_1| = C, where C is constant; this is just a specific realization at a single point along the flight path of γ_2(s). Then we have |γ_2(s) − z_0| − |γ_2(s) − z_1| = C for all s, where z_0 and z_1 are fixed. This implies that the flight path γ_2 is restricted to a hyperboloid with its vertex in the ground plane, with foci at the locations z_0 and z_1.
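The constraint |γ_2(s) − z_0| − |γ_2(s) − z_1| = C is the classical focal characterization of a hyperbola. The following sketch verifies this in a 2D cross-section with illustrative foci and C: points on the standard hyperbola with semi-axis a = C/2 and b² = c² − a² (c being the focal half-distance) satisfy the constant-difference condition.

```python
import numpy as np

z0 = np.array([0.0, -3.0])   # foci (illustrative); focal half-distance c = 3
z1 = np.array([0.0, 3.0])
C = 4.0                      # constant range difference, so a = C/2 = 2
a = C / 2
c = np.linalg.norm(z1 - z0) / 2
b = np.sqrt(c**2 - a**2)     # b^2 = c^2 - a^2

# Parameterize the branch nearer to z1: (x, y) = (b sinh t, a cosh t)
ts = np.linspace(-2.0, 2.0, 50)
pts = np.stack([b * np.sinh(ts), a * np.cosh(ts)], axis=1)

resid = [abs(np.linalg.norm(p - z0) - np.linalg.norm(p - z1) - C) for p in pts]
print(max(resid))   # ~0: every such point satisfies the constant-difference condition
```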
If we allow the perpendicular bisector of the line segment joining z_0 and z_1 to form the x-axis of a new coordinate system and the line through z_0 and z_1 to form the y-axis, the receiving platform must follow a flight path constrained to the surface of some hyperboloid like the one shown in figure A1.
Note that the hyperboloid shown is merely a single example of the infinite number of such hyperboloids which satisfy our conditions at this point. We will soon see that the hyperboloid must also satisfy other constraints that will restrict the hyperboloid to the degenerate case of a plane located between the two focused points.
From (A.6) and (A.10) it follows that, for z_1 to be a focused point of the family, the stationary receiver γ_1(s) must also lie on the same hyperboloid that constrains the flight path of γ_2(s). We refer to this hyperboloid as the 'flight' hyperboloid. We can use the knowledge that both receivers must lie on the same flight hyperboloid, along with our knowledge of the family of TDOA backprojections, to demonstrate that no such point z_1 can exist unless the flight hyperboloid under consideration is actually the plane bisecting the line segment z_0 z_1. First consider the case in which the stationary receiver γ_1 hovers at a height h above the ground, that is, at the same height as the moving platform. Then we may define a coordinate system as shown in figure A2. Here we define the y-axis to be the line through z_0 and z_1 and the x-axis to be the perpendicular bisector of the line segment z_0 z_1. In figure A2 these coordinate axes are shown in black. We define the plane z = 0 in this coordinate system to be the ground plane.
Since one branch of a hyperbola is a concave curve, any line in the plane may intersect it in at most two points. In particular, the y-axis of the coordinate system we have defined can intersect the TDOA hyperbola (shown in figure A3(b)) in at most two points. We know that one of these points is z_0. We have hypothesized that the other point of intersection is the point z_1 for all locations of the receiver γ_2(s).
We can define a second coordinate system, shown in red, using the locations of the two receivers. We denote this second coordinate system by (x̃, ỹ, z̃). Since γ_2 and γ_1 are located on the same flight hyperboloid at the same height, the angle θ at which the x̃-axis intersects the y-axis must be between 0 and π/2. First, note that for any hyperbola, all of the points of the curve are contained between the asymptotes, and these asymptotes pass through the origin. Thus, when we consider the flight hyperbola, the line γ_2 γ_1 between the two receivers must have a slope whose magnitude is less than the magnitude of the slope of the asymptotes, and it must intersect the y-axis of the z_0 z_1 coordinate system above the x-axis, as shown in figure A2.
We thus denote by I the magnitude of the distance from the origin of the z_0 z_1 coordinate axes to the point of intersection with the line γ_2 γ_1, and we note that I is strictly positive whenever the two receivers are on a non-degenerate flight hyperbola, as shown in figure A2.
The locations of the two focused points with respect to this second coordinate system are then determined by I and θ. Here we have denoted by I_x the distance along the x-axis at which the line z_0 z_1 crosses it, and in a slight abuse of notation we have used z_0 to denote the magnitude |z_0| = |z_1|. Now, since γ_1 and γ_2 are equally spaced across the ỹ-axis, we can define an elliptic cylindrical coordinate system (µ, ν, z) based on γ_2 and γ_1, namely equation (A.16). An example of such a coordinate system is shown graphically in figure A3(a). In order for z_1 to be a focused point it must lie on both the ground plane and the same TDOA hyperboloid as z_0 for all locations of γ_2(s). In our elliptic cylindrical coordinate system, the z-coordinates of z_0 and z_1 are both zero, because the ground plane is flat and both emitters are located on the ground. Thus, for z_1 to be a focused point, the conditions in equations (A.17) and (A.18) must hold.

Note that when θ = π/2, the receivers γ_1 and γ_2 are equally spaced across the y-axis and the TDOA hyperbola degenerates to the straight line forming the y-axis of the z_0 z_1 coordinate axes. In this configuration the TDOA hyperbola clearly passes through both z_1 and z_0, and this is true for all flight paths constrained by (A.11). However, since z_1 is a focused point only if it is hit by every TDOA hyperbola, we can simply note that θ = π/2 satisfies the condition in equation (A.11) and constrain our analysis to those points for which θ ≠ π/2.

Since, for the geometry shown in figure A2, 0 < θ < π/2 and 0 < ν_0 < π/2, we have 0 < cos(θ) and 0 < cos(ν_0), so equation (A.29) implies that cosh(µ_1) > cosh(µ_0). Since µ is strictly positive, this implies µ_1 > µ_0. Likewise 0 < sin(θ) and 0 < sin(ν_0), so we must have 0 < sinh(µ_0) − sinh(µ_1). However, since sinh is strictly increasing, this requires µ_0 > µ_1, which contradicts µ_1 > µ_0.
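The elliptic cylindrical coordinates invoked here have the property that makes this argument work: confocal hyperbolas are level sets of ν. With foci at (±a, 0), the distances to the foci are r_1 = a(cosh µ + cos ν) and r_2 = a(cosh µ − cos ν), so the range difference r_1 − r_2 = 2a cos ν is independent of µ. The sketch below verifies this standard identity with an arbitrary focal half-distance a (it illustrates the generic coordinate system, not the paper's specific equation (A.16)).

```python
import numpy as np

a = 2.5   # focal half-distance (illustrative); foci at (-a, 0) and (a, 0)
f1 = np.array([-a, 0.0])
f2 = np.array([a, 0.0])

def to_cartesian(mu, nu):
    # elliptic coordinates: x = a cosh(mu) cos(nu), y = a sinh(mu) sin(nu)
    return np.array([a * np.cosh(mu) * np.cos(nu),
                     a * np.sinh(mu) * np.sin(nu)])

# r1 - r2 = 2 a cos(nu) for every mu: a constant-TDOA hyperbola
# is a curve of constant nu.
max_err = 0.0
for mu in (0.3, 1.0, 2.0):
    for nu in (0.4, 1.1):
        p = to_cartesian(mu, nu)
        diff = np.linalg.norm(p - f1) - np.linalg.norm(p - f2)
        max_err = max(max_err, abs(diff - 2 * a * np.cos(nu)))
print(max_err)   # ~0
```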
Consequently there cannot be two focused points when 0 < θ < π/2. This implies that θ = 0, which means that the flight path must lie in the degenerate hyperboloid, namely the plane y = 0, and it must be that γ_2(s) and γ_1(s) are confined to this plane. Any other flight path will not produce a second focused point.
Thus, when the two receivers are at equal height, a second focused point is produced only if the moving receiver flies a flight path for which it is always flying directly toward or away from the stationary receiver.
The case in which the stationary receiver is not at the same height as the moving receiver is only slightly more complicated. When the moving and stationary receivers are confined to the same flight hyperboloid but do not necessarily occupy the same height in the scene, a few additional details need to be addressed. As previously mentioned, when the receiver in motion is constrained to fly at a constant height within the flight hyperboloid defined by |γ_2(s) − z_0| − |γ_2(s) − z_1| = C, its path must follow along some hyperbola at a height above the ground plane; let this height be h_2.
Similarly, regardless of its height, the stationary receiver must also be on some hyperbola at a height h_1 above the ground plane, not necessarily equal to h_2. Note that we could have h_1 = 0 in the case in which our stationary receiver is located on a short tower near the scene of interest, for example. This situation is shown graphically in figure A4.
This more general data collection geometry can be broken down into a few broad cases. Treating each of these cases in turn, we can show that each resulting geometry produces a set of conditions on the locations of z_0 and z_1 equivalent to those found in equations (A.17) and (A.18), which we have already shown lead to a contradiction.
The cases can be broken down as follows. For any position of the two receivers constrained to the flight hyperboloid as shown in figure A2, the line γ_1 γ_2 must either intersect the y-axis above the x-axis, intersect it below the x-axis, or not intersect the y-axis at all.
When the line γ_1 γ_2 intersects the y-axis above the x-axis, the value of I is positive and the result is a set of conditions identical to those previously investigated in equations (A.17) and (A.18).
When the line γ_1 γ_2 intersects the y-axis below the x-axis, the value of I is negative. In this case we simply replace I with −I in equations (A.17) and (A.18). Similarly, we now have I_x on the negative side of the x-axis, so that we will have −I_x + (z_0 − (−I)) cos(θ) rather than I_x − (z_0 − I) cos(θ) whenever the x-axis intersects the y-axis at a point where x is less than zero. These two alterations merely change the overall sign in our expressions for z_0x and z_1x, and we arrive at exactly the same two conditions as in equations (A.27) and (A.28), which we found led to a contradiction. Thus, in this case also, the two receivers must lie on the bisecting plane of z_0 z_1.

Finally, it may be true that there is a point of the flight path at which the two receivers lie on the same line of constant x. In this case, the line between them will have no y-intercept. The simplest way to handle this point is to remark that the behavior at any one location along the flight path is inconsequential: a focused point must satisfy the constraints for all slow times s, and we have already shown that every other location of the moving receiver fails to satisfy the conditions in equations (A.17) and (A.18), so it does not matter whether this single point can satisfy them.

Thus, regardless of the height of the stationary receiver, in order for a second focused point to appear in the image, the two receivers must be confined to the same plane bisecting the line segment z_0 z_1. Since both receivers have known trajectories which are under the control of the data collection designer, we can safely assume that this one highly specific data collection path can be avoided.
Thus, if the receiver in motion flies at a constant height above the ground plane, then whenever a focused point exists in the family of backprojections it is unique, and the envelope contains only that point regardless of the data collection flight path, unless for some portion of the flight path the moving receiver flies directly toward or away from the stationary receiver's position. □

Appendix B. Proof of theorem 2
We shall prove this theorem by exhausting the possible cases in which the receiver/emitter geometries can be set up. We first examine the diagonal terms and then move to the cross terms.
Proof of theorem 2. Case 1: The diagonal terms. The contribution to the image of any diagonal term in the data takes the form of a backprojection of the product Q(s, f, z) d_D(s, t), where d_D(s, t) is the data term being backprojected onto the image. Thus, a diagonal term projects the product Q(s, f, z) d_D(s, t) onto the image for each pair of receiver positions γ_1(s), γ_2(s).
In the case of a single diagonal term, the data from only one emitter is present by definition; let the position of this emitter be e_1. As previously shown, these backprojections are over hyperbolas each of which passes through e_1, so by theorem 1 the diagonal term focuses at the emitter location and only there.
Case 2: The cross terms. The contribution to the image of a cross term takes the same form, where d_C(s,t) is the data term resulting from either the correlation of the signal from emitter e_1 with the signal from emitter e_2 or vice versa. Without loss of generality we will examine the case in which the term under consideration arises from the correlation of the signal emitted by e_1 and received at the receiver at γ_1(s) with the signal emitted by e_2 and received at the receiver at γ_2(s).
As we have seen, this results in the backprojection of the product Q(s, f, z) d_C(s, t) along the hyperbola which is the intersection of the ground plane and the hyperboloid defined by the corresponding TDOA. Assume first that there exists a focused point for these backprojections located at one of the emitters, and let that point be denoted z_0, where z_0 = e_1 or z_0 = e_2.
Subcase 2A: We first consider the case where z_0 = e_2, that is, the case in which a focused point occurs at the location of the emitter whose signal was recorded at the moving receiver. (We handle the case z_0 = e_1 below in Subcase 2B.) Since z_0 is a focused point for all s, and since we have postulated z_0 = e_2, we must have |γ_1(s) − e_1| = |γ_1(s) − e_2| for all s. This is true if and only if the two emitters lie on the same sphere centered at the stationary receiver. Since we assume that the emitters are located within the same flat ground plane, this condition amounts to a restriction of the locations of the two emitters to a circle in the ground plane centered at the x, y coordinates of the stationary receiver. Thus, a focused point for the cross-term data exists at the location of the emitter whose signal was received at the moving receiver only when the two emitters are equidistant from the stationary receiver (see figure 3).
Subcase 2B: Now consider the case z_0 = e_1, the location of the emitter whose signal was recorded at the stationary receiver. Since z_0 is a focused point, we must have |γ_2(s) − e_1| = |γ_2(s) − e_2| for all s. This is possible for a moving receiver only when it flies a path contained entirely within the plane which bisects the line segment joining z_0 = e_1 and e_2 ≠ e_1 and is perpendicular to the ground plane. In the rare case that the moving receiver flies a flight path exactly between two emitters in the scene in this manner, there will be a focused point, due to the cross term of the two emitters, at the location of the emitter whose signal was received at the stationary receiver (see figure 3).
To summarize the situations in which the backprojection image contains a focused point due to a cross term arising at the location of one of the emitters:
• If two emitters are equidistant from the stationary receiver, then each emitter is the focused point of the backprojection of the cross term in which that emitter's signal is the signal received at the moving receiver.
• If the moving receiver flies a flight path in a plane such that it is at all times exactly between two emitters, so that it is always equidistant from both of them, then each emitter is the focused point of the backprojection of the cross term in which that emitter's signal is the signal received at the stationary receiver.
• Otherwise, there are no focused points at the location of either emitter due to the cross terms. In these cases, since a focused point is unique, no other focused points exist in the image due to that data term.
Subcase 2C: Now assume that there exists a focused point in the image due to a cross term which is not located at an emitter's position. Let this point be denoted by z_0, where z_0 ≠ e_1 and z_0 ≠ e_2. Then for any two points of the flight path of γ_2, call them s_1 and s_2, it must be true that |γ_2(s_1) − z_0| − |γ_2(s_2) − z_0| = |γ_2(s_1) − e_2| − |γ_2(s_2) − e_2|. Allowing the two points of the flight path, γ_2(s_1) and γ_2(s_2), to form the foci of a new coordinate system, we see that z_0 must lie on the same hyperbola as e_2 for all points along the flight path of γ_2. Thus z_0 is a focused point of the family of backprojections which contain e_2. However, since e_2 is contained in every member of the family by definition, it is also a focused point. Since a focused point for a family is unique by theorem 1, we must have z_0 = e_2; but z_0 ≠ e_2 by assumption. Thus, no such focused point z_0 exists.
Conclusion: The cross terms do not produce focused points in the image except for certain specific geometries. Those geometries are:
(1) When two emitters are equidistant from the stationary receiver. In this case the cross term due to the signal received by the moving receiver and cross-correlated with the signal received at the stationary receiver will focus at the emitter which transmitted the signal recorded at the moving receiver.
(2) When two emitters are at all times equidistant from the moving receiver, that is, when the moving receiver flies a trajectory constrained to a plane between the two emitters that makes a right angle with the ground plane. In this case, the cross term due to the signal recorded at the stationary receiver and cross-correlated with the one received at the moving receiver will focus at the location of the emitter whose signal was received by the stationary receiver.
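The equidistance condition of Subcase 2A can be sanity-checked numerically: when two emitters are the same distance from the stationary receiver, the cross-term backprojection hyperbola passes through e_2 at every slow time. The receiver positions and radius below are illustrative assumptions.

```python
import numpy as np

g1 = np.array([0.0, 0.0, 0.0])   # stationary receiver at the origin
R = 4000.0                       # both emitters on a circle of radius R about g1
e1 = np.array([R * np.cos(0.3), R * np.sin(0.3), 0.0])
e2 = np.array([R * np.cos(1.2), R * np.sin(1.2), 0.0])

residuals = []
for x in np.linspace(-5000.0, 5000.0, 9):
    gm = np.array([x, 7000.0, 1000.0])   # moving receiver positions
    # cross-term TDOA: e1's signal at g1 correlated with e2's signal at gm
    tdoa = np.linalg.norm(g1 - e1) - np.linalg.norm(gm - e2)
    # residual of the backprojection hyperbola equation evaluated at z = e2
    residuals.append(abs((np.linalg.norm(g1 - e2) - np.linalg.norm(gm - e2)) - tdoa))
print(max(residuals))   # ~0: e2 lies on every cross-term hyperbola
```

The residual vanishes identically because |g1 − e_1| = |g1 − e_2| cancels the stationary-receiver leg, exactly as in the proof.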
Thus, excluding flight paths contained in a single vertical plane containing the stationary receiver, the only focused points for any family of backprojection hyperbolas in the image are the emitter locations, regardless of the flight path of the moving receiver. Furthermore, with the exception of a few special cases, the focused points will be due to the diagonal terms in our data. □