EM and SAGE Algorithms for DOA Estimation in the Presence of Unknown Uniform Noise

The existing expectation maximization (EM) and space-alternating generalized EM (SAGE) algorithms have only been applied to direction of arrival (DOA) estimation under known noise. In this paper, the two algorithms are designed for DOA estimation in unknown uniform noise. Both the deterministic and random signal models are considered. In addition, a new modified EM (MEM) algorithm applicable to this noise assumption is proposed. These EM-type algorithms are then improved to ensure stability when the source powers are unequal. After this improvement, simulation results illustrate that the EM algorithm converges similarly to the MEM algorithm, the SAGE algorithm outperforms the EM and MEM algorithms for the deterministic signal model, and the SAGE algorithm cannot always outperform the EM and MEM algorithms for the random signal model. Furthermore, simulation results show that, when processing the same snapshots from the random signal model, the SAGE algorithm derived for the deterministic signal model can require the fewest computations.


Introduction
Direction of arrival (DOA) estimation is an important part of array signal processing, and several high-resolution estimation techniques have been developed in the literature [1,2]. In particular, the maximum likelihood (ML) technique plays a critical role since it offers excellent accuracy and spatial resolution. However, ML direction finding problems are non-convex, and their solutions are difficult to obtain in closed form.
One computationally efficient method for solving ML estimation problems is the classic expectation maximization (EM) algorithm [3,4], which has been employed for ML direction finding [5,6]. Each iteration of the EM algorithm is composed of an expectation step (E-step) and a maximization step (M-step). At the M-step, however, the EM algorithm updates all of the parameter estimates simultaneously, which causes slow convergence. To speed up the convergence of the EM algorithm, the space-alternating generalized EM (SAGE) algorithm was proposed in [7]. References [8,9] show that the SAGE algorithm does yield faster convergence for DOA estimation.
The existing EM and SAGE algorithms are usually derived under known noise [5,6,8,9], i.e., noise without unknown parameters, which may be unrealistic in certain applications. In fact, many seminal works on ML direction finding consider the so-called unknown uniform noise model [1,2,10-12], i.e., the noise covariance matrix can be expressed as τI_K, where τ is the only unknown noise parameter and I_K is the K × K identity matrix. Under this noise assumption, a computationally attractive alternating projection algorithm for computing the deterministic ML DOA estimator is presented in [10]. The authors in [11] investigate the statistical performance of this ML estimator and derive the Cramer-Rao lower bound. Moreover, some statistical properties of both the deterministic and random ML estimators have been studied in the literature. The main contributions of this paper are as follows:
• We apply and design the EM and SAGE algorithms for DOA estimation in unknown uniform noise. In particular, we derive the SAGE algorithm for random ML direction finding, which is not discussed in [7,8,22].
• We propose a new MEM algorithm applicable to the unknown uniform noise assumption.
• We improve these EM-type algorithms to ensure stability when the source powers are unequal.
• Via simulation we show that the EM algorithm converges similarly to the MEM algorithm and that the SAGE algorithm outperforms the EM and MEM algorithms for the deterministic signal model. However, the SAGE algorithm cannot always outperform the EM and MEM algorithms for the random signal model.
• Via simulation we show that, when processing the same snapshots from the random signal model, the SAGE algorithm derived for the deterministic signal model can require the fewest iterations and computations.
The rest of this paper is outlined as follows: in Section 2, we formulate both the deterministic and random ML direction finding problems in unknown uniform noise. In Sections 3-5, we design the EM, MEM, and SAGE algorithms, respectively. We analyze some convergence properties of these EM-type algorithms in Section 6 and provide simulation results to compare the convergence of these EM-type algorithms in Section 7. Finally, we conclude this paper in Section 8.

Signal Model and Problem Statement
An array of K sensors is assumed to receive the plane waves emitted from G narrowband sources, which share the same known center wavelength χ. We use the Cartesian coordinate ζ_k = [x_k y_k z_k]^T and the spherical coordinate (1, µ_g, η_g) to locate the kth sensor and the direction of the gth source, respectively. Here, [·]^T denotes transpose, and µ_g and η_g denote the elevation and azimuth angles of the gth source, respectively. For convenience, we transform (1, µ_g, η_g) into the corresponding Cartesian coordinate γ_g = [sin(µ_g)cos(η_g) sin(µ_g)sin(η_g) cos(µ_g)]^T. Let the origin be the reference point such that the signal received at the array is written as

x(t) = ∑_{g=1}^G b(ξ_g)m_g(t) + v(t), t = 1, . . . , L, (1)

where b(ξ_g) is the steering vector toward the gth source, m_g(t) is the signal of the gth source, and v(t) is complex Gaussian noise of zero mean and covariance τI_K, i.e., v(t) ∼ CN(0, τI_K) with 0 = [0 · · · 0]^T. EM-type algorithms need to define the unavailable complete data. According to the classic EM paradigm for superimposed signals [5,6], we construct the complete data from the L independent snapshots x(t), the incomplete data of the EM algorithm, by

h_g(t) = b(ξ_g)m_g(t) + v_g(t), g = 1, . . . , G, (2)

where the h_g(t)'s are the complete data and x(t) = ∑_{g=1}^G h_g(t). Moreover, the v_g(t)'s are mutually uncorrelated and v_g(t) ∼ CN(0, β_g τI_K), where β = [β_1 · · · β_G]^T > 0 and 1^T β = 1 with 1 = [1 · · · 1]^T. Since the incomplete- and complete-data LLFs require the distributions of the m_g(t)'s, we adopt the following two statistical models separately.
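As a concrete illustration of this signal model, the following sketch simulates L snapshots for a uniform linear array. The array geometry, source angles, and noise power are illustrative choices, not values from the paper, and the function names are hypothetical.

```python
import numpy as np

def direction_cosine(mu, eta):
    """Cartesian direction vector gamma_g from elevation mu and azimuth eta (radians)."""
    return np.array([np.sin(mu) * np.cos(eta),
                     np.sin(mu) * np.sin(eta),
                     np.cos(mu)])

def steering_vector(zeta, gamma, wavelength):
    """Steering vector for sensors at Cartesian positions zeta (K x 3),
    with phase delays measured relative to the origin (the reference point)."""
    return np.exp(1j * 2 * np.pi / wavelength * zeta @ gamma)

rng = np.random.default_rng(0)
K, G, L, wavelength, tau = 8, 2, 100, 1.0, 0.1

# Illustrative geometry: uniform linear array on the x-axis, half-wavelength spacing.
zeta = np.zeros((K, 3))
zeta[:, 0] = 0.5 * wavelength * np.arange(K)

# Two sources in the horizontal plane (mu = 90 degrees) at azimuths 25 and 75 degrees.
gammas = [direction_cosine(np.pi / 2, np.deg2rad(a)) for a in (25.0, 75.0)]
B = np.stack([steering_vector(zeta, g, wavelength) for g in gammas], axis=1)  # K x G

# Unit-power circular complex Gaussian source signals and noise v(t) ~ CN(0, tau I_K).
m = (rng.standard_normal((G, L)) + 1j * rng.standard_normal((G, L))) / np.sqrt(2)
v = np.sqrt(tau / 2) * (rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L)))
x = B @ m + v  # K x L matrix of incomplete-data snapshots
```

Each column of `x` is one snapshot x(t); the complete data h_g(t) would be formed by splitting `v` across sources according to the weights β_g.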

EM Algorithm
In this section, we design and derive the EM algorithm for solving problems (5) and (8). The E- and M-steps at the rth iteration are introduced below. Let [·]^(r), E{·}, and D{·} denote an iterative value at the rth iteration, expectation, and covariance, respectively; [·]^(0) is an initial estimate.

M-Step
Update the estimates of Ψ and τ by solving (18), which is difficult to decompose into parallel subproblems due to τ. To obtain Ψ^(r) and τ^(r) easily, we rewrite (18) as (19). We then divide the M-step into the following two CM-steps based on the ECM algorithm [21], i.e., the algorithm becomes the ECM algorithm. For convenience, we still call it the EM algorithm.
• First CM-step: Estimate Ψ while holding τ = τ^(r−1) fixed. Then, problem (19) can be decomposed into G parallel subproblems, one per source.
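The two CM-steps follow the usual ECM pattern: maximize the conditional expectation over one block of parameters with the other block held fixed, then swap. The following toy illustration of this pattern fits a Gaussian mean and variance by conditional maximization; it is not the paper's actual subproblems, only a sketch of the alternation.

```python
import numpy as np

def ecm_toy(y, iters=20):
    """Toy ECM: fit the mean psi and variance tau of Gaussian data by
    alternating two conditional maximization (CM) steps per iteration."""
    psi, tau = 0.0, 1.0  # initial estimates
    for _ in range(iters):
        # First CM-step: update psi with tau held fixed
        # (the sample mean maximizes the LLF over psi for any fixed tau).
        psi = y.mean()
        # Second CM-step: update tau with the new psi held fixed.
        tau = ((y - psi) ** 2).mean()
    return psi, tau

rng = np.random.default_rng(1)
y = rng.normal(3.0, 2.0, size=10_000)  # true mean 3, true variance 4
psi_hat, tau_hat = ecm_toy(y)
```

Because each CM-step maximizes the same objective over its own block, the objective is non-decreasing across iterations, which is the monotonicity property the ECM algorithm inherits from EM.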

MEM Algorithm
In the previous section, β is fixed and known. In this section, we regard β as a parameter to be estimated and, thus, propose an MEM algorithm applicable to the unknown uniform noise assumption.

Deterministic Signal Model
Based on (25), the complete-data LLF in (4) is rewritten as (26).

E-Step
Calculate the conditional expectation of the complete-data LLF in (26).

M-Step
Update the estimates of Ψ and τ by solving the G parallel subproblems.

E-Step
Calculate the conditional expectation of the complete-data LLF in (34).

SAGE Algorithm
In the SAGE algorithm, each iteration consists of G cycles, and only the parameters associated with one source are updated per cycle. At the qth cycle of the rth iteration, the SAGE algorithm first constructs the complete data as in (48) [7,8]. Then, the E- and M-steps at the qth cycle of the rth iteration are introduced below.
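The cycle structure can be sketched abstractly as follows. The blockwise updates below solve a toy quadratic maximization, not the direction finding subproblems; the sketch only shows how each of the G cycles updates one parameter block while holding the others at their latest values.

```python
def sage_like_cycles(update_fns, theta, iters):
    """Structural sketch of SAGE-style iterations: each iteration runs G
    cycles, and the q-th cycle updates only the q-th parameter block,
    holding all other blocks at their latest values."""
    for _ in range(iters):
        for q, update in enumerate(update_fns):  # G cycles per iteration
            theta[q] = update(theta)             # update block q only
    return theta

# Toy objective: maximize f(a, b) = -(a-1)^2 - (b-2)^2 - (a-b)^2.
# Each cycle maximizes over one variable with the other fixed (closed form):
updates = [
    lambda th: (1.0 + th[1]) / 2.0,  # argmax over a given b
    lambda th: (2.0 + th[0]) / 2.0,  # argmax over b given a
]
a, b = sage_like_cycles(updates, [0.0, 0.0], iters=50)
# Converges to the stationary point a = 4/3, b = 5/3 of f.
```

Because each cycle reuses the freshest estimates of the other blocks, the iterates typically move faster per unit of work than a simultaneous update of all blocks, which mirrors why SAGE tends to converge faster than EM.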

Deterministic Signal Model
Based on (48), we have h_q(t) ∼ CN(b(ξ_q)m_q(t), τI_K), and the h_g(t)'s with g ≠ q are deterministic. The complete-data LLF is expressed accordingly.

M-Step
Update the estimates of ξ_q, m_q, and τ by solving min_{ξ_q∈Ω, m_q, τ>0} of the corresponding objective, from which ξ_q^(r,q), m_q^(r,q), and τ^(r,q) are obtained. Moreover, the other parameter estimates are not updated at this cycle, and their iterative values are held fixed. Since the m_q(t)'s in (55) are unrelated to τ^(r,q−1), we can omit (56) and treat τ as a nuisance parameter.

Random Signal Model
Based on (48), we have h_q(t) ∼ CN(0, N_q) with N_q = ρ_q b(ξ_q)b^H(ξ_q) + τI_K. However, the distribution of h_g(t) with g ≠ q is only associated with m_g(t). The complete-data LLF is written in terms of P̂_g = (1/L) ∑_{t=1}^L |m_g(t)|^2, where | · | denotes modulus.

Convergence Point
It is easy to verify that the above EM-type algorithms satisfy standard regularity conditions [5,21,26] and always converge to stationary points of J (Ψ, τ). Of course, the convergence points of these EM-type algorithms depend on their initial points. To generate appropriate initial points, we can employ the method presented in [10] using the deterministic signal model.

Complexity and Stability
At the rth iteration, the computational burdens of the above EM-type algorithms lie in solving the G maximization problems for ξ_g^(r). Thus, these EM-type algorithms have almost the same computational complexity per iteration, so an algorithm with faster convergence requires fewer iterations and fewer computations. However, when the source powers are unequal, we have found via simulation that the DOA estimates of multiple sources, updated by (75), tend to collapse onto the true DOA of the source with the largest power. Accordingly, these EM-type algorithms may be unstable. To address this issue, we can reduce the difference between ξ_g^(r) and ξ_g^(r−1) by merely increasing Tr{Γ_g P̂_g^(r)} rather than maximizing it, which guarantees the monotonicity of these EM-type algorithms [3]. As a good example, Algorithm 1 in the next section gives excellent simulation results.
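One simple way to "increase rather than maximize" is a generalized-EM-style backtracking step: move only a fraction of the way from the previous estimate toward the full maximizer, keeping the largest step that still increases the objective. This is an illustrative sketch of the safeguard, not necessarily the paper's exact rule; the step schedule and objective are hypothetical.

```python
def gem_update(obj, theta_old, theta_max, steps=(1.0, 0.5, 0.25, 0.1)):
    """Generalized-EM-style safeguard: instead of jumping to the full
    maximizer theta_max (which can destabilize multi-source updates when
    source powers differ), take the largest fractional step toward it
    that still increases the surrogate objective."""
    for s in steps:
        theta_try = theta_old + s * (theta_max - theta_old)
        if obj(theta_try) > obj(theta_old):
            return theta_try
    return theta_old  # no improving step found; keep the previous estimate

# Hypothetical concave objective peaking at theta = 2: the full jump to 5
# would decrease it, so the safeguard accepts the half step instead.
obj = lambda th: -(th - 2.0) ** 2
theta = gem_update(obj, theta_old=0.0, theta_max=5.0)
```

Since every accepted step increases the surrogate objective and a rejected step leaves the estimate unchanged, the overall iteration remains monotone, which is exactly the property invoked from [3].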

Deterministic Signal Model
To compare the convergence of EM, MEM, and SAGE, Figure 1 plots their J(Ψ^(r), τ^(r)), η_1^(r), and η_2^(r) values under one trial. In Figure 1, it is easy to see that EM, MEM, and SAGE converge to a consistent (η_1, η_2) estimate given an accurate initial point. Moreover, EM converges similarly to MEM, while SAGE converges faster than EM and MEM. Figures 2 and 3 show two scatter plots of (η_1, η_2) estimates under 200 trials. Note that in each of Figures 2 and 3, the SAGE algorithm requires the least processing time due to its fastest convergence and thus performs the fewest computations.
Moreover, the two sources in Figure 2 are not closely spaced, so they are very unlikely to be mixed up, and the desirable points in Figure 2 are centered around the true position (25°, 75°). However, the two sources in Figure 3 are closely spaced, and the desirable points are centered around (78°, 70°) or (70°, 78°), i.e., these EM-type algorithms are likely to mix up closely spaced sources.
According to these simulations, we can conclude that (1) EM converges similarly to MEM, and (2) SAGE outperforms EM and MEM.

Random Signal Model
To compare the convergence of EM, MEM, and SAGE, Figure 4 plots their J(Ψ^(r), τ^(r)), η_1^(r), and η_2^(r) values under one trial. In Figure 4, we can observe that, given an accurate initial point, EM, MEM, and SAGE converge to a consistent (η_1, η_2) estimate. Moreover, EM converges similarly to MEM, while SAGE converges faster than EM and MEM. Figures 5 and 6 imply that EM converges similarly to MEM but that, compared to EM and MEM, SAGE is less efficient at avoiding convergence to an undesirable stationary point of J(Ψ, τ) in Figure 5 and more efficient in Figure 6. Note that in each of Figures 5 and 6, the SAGE algorithm requires the least processing time due to its fastest convergence and thus performs the fewest computations. In addition, these EM-type algorithms are likely to mix up closely spaced sources, so the desirable points in Figure 5 are centered around (25°, 75°) and the desirable points in Figure 6 are centered around (78°, 70°) or (70°, 78°).
Based on the above figures, we can conclude that (1) EM converges similarly to MEM, and (2) SAGE cannot always outperform EM and MEM.

Figure 6. Scatter plot of (η_1, η_2) estimates.

Deterministic and Random Signal Models
Snapshots from the random signal model can be processed by the EM-type algorithms derived for the deterministic signal model, which means that we can compare these algorithms across both signal models. The above simulation results have shown that EM converges similarly to MEM, so for simplicity we only compare EM and SAGE for both signal models in this subsection.
Based on Figure 7, Figure 8 compares the numbers of iterations required by these algorithms. We can observe that EM for the deterministic signal model generally needs more iterations than EM for the random signal model. The reason is that EM for the deterministic signal model must update more parameter estimates at each iteration and thus converges more slowly than EM for the random signal model. Moreover, SAGE for the deterministic signal model generally needs fewer iterations than SAGE for the random signal model. More importantly, SAGE for the deterministic signal model always requires the fewest iterations in each trial. Thus, we can conclude that SAGE for the deterministic signal model is superior to the other algorithms in computational cost.

Figure 7. Scatter plot of (η_1, η_2) estimates.

Figure 9 shows the root mean square error (RMSE) performance of DOA estimation obtained by the SAGE algorithm for the deterministic and random signal models. In Figure 9, η_1 = 60°, η_2 = 120°, ρ_1 = 0 dB, ρ_2 = 1 dB, and τ = 3 dB. Each RMSE is computed from 1000 independent realizations. We can observe that as the number of sensors K increases, the SAGE algorithm for each signal model yields smaller RMSEs, which indicates that increasing the number of sensors can improve the accuracy of DOA estimation.

Figure 9. RMSEs of the SAGE algorithm.
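For reference, the RMSE over independent realizations is computed as below. The synthetic estimates here are illustrative stand-ins for the per-trial DOA estimates, not outputs of the SAGE algorithm.

```python
import numpy as np

def rmse(estimates, truth):
    """Root mean square error of DOA estimates (in degrees) over
    independent trials, relative to the true angle."""
    e = np.asarray(estimates, dtype=float) - truth
    return np.sqrt(np.mean(e ** 2))

rng = np.random.default_rng(2)
truth = 60.0  # true azimuth eta_1 in degrees
# 1000 hypothetical per-trial estimates scattered around the true angle.
estimates = truth + rng.normal(0.0, 0.5, size=1000)
err = rmse(estimates, truth)
```

For unbiased estimates, this RMSE approaches the standard deviation of the estimation error, which is why it is the usual yardstick against the Cramer-Rao lower bound.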

Conclusions
In this paper, we applied and designed the EM and SAGE algorithms for DOA estimation in unknown uniform noise and proposed a new MEM algorithm applicable to this noise assumption. We then improved these EM-type algorithms to ensure stability when the source powers are unequal. After this improvement, simulation results illustrated that the EM algorithm converges similarly to the MEM algorithm, the SAGE algorithm outperforms the EM and MEM algorithms for the deterministic signal model, and the SAGE algorithm cannot always outperform the EM and MEM algorithms for the random signal model. In addition, simulation results indicated that when these EM-type algorithms process the same snapshots from the random signal model, the SAGE algorithm derived for the deterministic signal model can require the fewest iterations and computations.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available because they are not drawn from public datasets but were generated by simulating the signal models described in this paper.

Conflicts of Interest:
The authors declare no conflicts of interest.