Robust Adaptive LCMV Beamformer Based on an Iterative Suboptimal Solution

The main drawback of the closed-form solution of the linearly constrained minimum variance (CF-LCMV) beamformer is the dilemma between acquiring a long observation time for stable covariance matrix estimates and a short observation time to track the dynamic behavior of targets, which leads to poor performance under low signal-to-noise ratio (SNR), low jammer-to-noise ratios (JNRs), and small numbers of snapshots. Additionally, CF-LCMV suffers from a heavy computational burden, which mainly comes from the two matrix inversions needed to compute the optimal weight vector. In this paper, we derive a low-complexity Robust Adaptive LCMV beamformer based on an Iterative Suboptimal solution (RAIS-LCMV) using the conjugate gradient (CG) optimization method. A steepest-descent weight update strategy is adopted to obtain a simple iteration process. The merit of our proposed method is threefold. Firstly, the RAIS-LCMV beamformer reduces the complexity of CF-LCMV remarkably. Secondly, the RAIS-LCMV beamformer adjusts its output adaptively based on the measurements, with comparable convergence speed. Finally, the RAIS-LCMV algorithm is robust against low SNR, low JNRs, and small numbers of snapshots. Simulation results demonstrate the superiority of our proposed algorithms.


Introduction
The linearly constrained minimum variance (LCMV) beamformer is a conventional and powerful tool for signal enhancement in multiple-source cases. The conventional LCMV beamformer is a closed-form LCMV (CF-LCMV) beamformer, obtained directly via Lagrange multipliers and discussed extensively in many works, such as [1,2,4]. As a generalization of the minimum variance distortionless response (MVDR) beamformer, the conventional LCMV beamformer minimizes the array output power while maintaining a constant response in the direction of the signal of interest (SOI). However, the CF-LCMV beamformer requires inversions of the input data covariance matrix R_xx and of C^H R_xx^{-1} C (C is a constraint matrix), which constitute the main computational burden of the CF-LCMV algorithm. Additionally, the robustness of CF-LCMV is poor when small numbers of snapshots and low SNR and JNRs are present [16].
Numerous adaptive versions of LCMV have been reported in recent decades to overcome the drawbacks of CF-LCMV listed above, including the LCMV with stochastic gradient (LCMV-SG) algorithm [13], the LCMV with recursive least squares (LCMV-RLS) algorithm [6], and so on. Among these algorithms, LCMV-SG has low complexity but converges slowly with correlated data inputs [9]. LCMV-RLS has fast convergence but suffers from high complexity and numerical instability. To reduce the computational complexity and improve the stability of CF-LCMV, a low-complexity constrained affine-projection (CAP) algorithm was recently proposed to update the weight vector of linearly constrained adaptive filters [12]. The CAP beamformer is robust, but the complexities of CAP and its variants are still high.
Comparatively speaking, the conjugate gradient (CG) technique offers an attractive trade-off between performance and complexity and has been adopted in many related works, such as [10,9,11]. Among them, [10] is a convex optimization framework; in each iteration, the algorithm in [10] must solve a constrained least-squares problem, which leads to a heavier computational burden. Based on the conventional MVDR criterion, [11] proposed a stronger constraint set to constrain the magnitude of the array output, and the so-called set-membership (SM) technique is adopted to specify a bound on the magnitude of the estimation error or the array output, which reduces the computational complexity through data-selective updates. The method in [9] is based on a constrained constant modulus (CCM) criterion, and a modified conjugate gradient (MCG) method and a conventional conjugate gradient (CCG) method are used to derive the optimal weight vector. The weight update strategy of these two approaches is recursive least squares (RLS), which is more complicated than the steepest-descent technique. The two approaches improve performance to some extent at the cost of a more complicated iteration process. Additionally, they depend on many parameters, and their performance degrades if these parameters are not set accurately. In this paper, using the CG technique and the steepest-descent technique, we derive a robust adaptive LCMV beamformer based on an iterative suboptimal solution (RAIS-LCMV) to improve the performance of the CF-LCMV beamformer. Compared with the aforementioned approaches, our approach is based only on the conventional LCMV framework and needs no additional constraints in the cost function. Hence, it requires only a few simple iteration steps. Firstly, we derive the updating procedure of RAIS-LCMV using the CG technique. The way to determine the parameters used in RAIS-LCMV is addressed subsequently. Furthermore, we discuss the convergence speed and computational complexity of the RAIS-LCMV algorithm. Compared with some existing LCMV beamformers, our proposed algorithm has low complexity and achieves better performance in cases with small numbers of snapshots and low SNR and JNRs.

Problem Formulation
Consider D point source signals (including signals and interferences) impinging on an array of M sensors with arbitrary geometry from directions θ_1, θ_2, ..., θ_D. The output y(k) of the array beamformer can be expressed as y(k) = w^H x(k), where k = 1, 2, ..., K is the time index, K is the number of snapshots, x(k) is the M × 1 received signal vector, w is the M × 1 complex beamformer weight vector to be estimated, and (·)^H stands for Hermitian transpose.
For a minimum variance distortionless response (MVDR) beamformer, the objective is to minimize the array output energy subject to a linear constraint on the desired direction-of-arrival (DOA), i.e., minimize w^H R_xx w subject to w^H a(θ_0) = 1, where a(θ_0) is the steering vector of the desired DOA and R_xx = E{x(k) x^H(k)} is the covariance matrix of the received signal x, with E{·} being the expectation operator.
For LCMV-type beamformers, w is the solution to the following multiple linearly constrained optimization problem: minimize w^H R_xx w subject to C^H w = f, where C is the aforementioned constraint matrix and f is a constraint vector. For example, if we need to generate unit gains in L DOAs θ_i (i = 1, 2, ..., L) while forming nulls in the other DOAs, the constraint matrix and constraint vector can be expressed as C = [a(θ_1), ..., a(θ_D)] and f = [1, ..., 1, 0, ..., 0]^T (with L ones followed by D − L zeros), respectively. Based on (3), (4), and (5), using the Lagrange multiplier technique, one can obtain a closed-form solution for the optimal weight vector w as follows [13]: w_opt = R_xx^{-1} C (C^H R_xx^{-1} C)^{-1} f. Equation (6) is the well-known optimal weight vector of the LCMV beamformer and is a closed-form solution to (3). The closed-form solution (6) can constrain the interferences while keeping a simultaneous response to multiple expected signals. To obtain the optimal weight vector, we need to compute two matrix inversions in (6): one is the inversion of the covariance matrix R_xx, with size M × M; the other is the inversion of the matrix C^H R_xx^{-1} C, with size D × D. For large arrays or adaptive phased array radar (APAR) [18], the number of array sensors is very large, so the computational complexity of the covariance matrix inversion is very high; decreasing the computational burden of CF-LCMV is therefore a serious problem. Additionally, the main drawback of the aforementioned two beamformers is the dilemma between acquiring a long observation time for stable covariance matrix estimates and a short observation time to track the dynamic behavior of targets, leading to poor performance under low SNR, low JNRs, and small numbers of snapshots.
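As a concrete illustration of the closed-form weight in (6), the computation can be sketched in a few lines of NumPy. The steering-vector helper, the half-wavelength ULA geometry, and the white-noise covariance below are illustrative assumptions for this sketch, not details taken from the text.

```python
import numpy as np

def cf_lcmv_weights(R, C, f):
    """Closed-form LCMV weight w = R^{-1} C (C^H R^{-1} C)^{-1} f, as in (6)."""
    Rinv_C = np.linalg.solve(R, C)        # R^{-1} C without forming R^{-1} explicitly
    middle = C.conj().T @ Rinv_C          # C^H R^{-1} C, size D x D
    return Rinv_C @ np.linalg.solve(middle, f)

# Illustrative setup: 8-element half-wavelength ULA (an assumption of this sketch)
M = 8
def steer(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

C = np.column_stack([steer(0.0), steer(-20.0)])  # unit gain at 0 deg, null at -20 deg
f = np.array([1.0, 0.0])
R = np.eye(M)                                    # white-noise covariance for the sketch
w = cf_lcmv_weights(R, C, f)
```

By construction C^H w = f, so the response toward 0° is exactly one while −20° is nulled, whatever covariance is supplied.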
In [1], Breed and Strauss derived an equivalent generalized sidelobe canceller (GSC) structure of LCMV (GSC-LCMV). GSC-LCMV is obtained by splitting the applied filter into two components, i.e., w = w_q − w_s. The components w_q and w_s lie in the column subspace of the constraint matrix C and in its null subspace, respectively. The GSC-LCMV formulation of the problem can be expressed as follows, where the matrix C_a ∈ C^{D×(D−L)} is composed of the orthocomplement vectors of the constraint matrix C. It should be noted that GSC-LCMV also requires two matrix inversions, and its computational complexity is only slightly decreased when the number of signals is far less than the number of array elements. Overall, the computational burden of GSC-LCMV is still heavy, and its performance remains poor with small numbers of snapshots and low SNRs and JNRs.
In order to ease the computational burdens of CF-LCMV and GSC-LCMV and to obtain better performance under small numbers of snapshots and low SNRs and JNRs, we derive a low-complexity RAIS-LCMV algorithm using the conjugate gradient (CG) optimization technique. The proposed RAIS-LCMV beamformer does not require a matrix inversion in each iteration, and the suboptimal weight vector is computed iteratively within a small number of iterations. Compared to the CF-LCMV and GSC-LCMV beamformers and some other existing iterative methods, the RAIS-LCMV beamformer has lower computational complexity and shows great robustness to SNR, JNRs, and the number of snapshots.

Robust Adaptive LCMV Beamformer Based on an Iterative Suboptimal Solution (RAIS-LCMV)
Consider the objective optimization function of (3), which is based on the typical least mean square (LMS) criterion. One can obtain a solution for the weight vector by using the conjugate gradient (CG) optimization algorithm [7,8,14]. In order to derive an iterative suboptimal solution of the weight vector w, the proposed RAIS-LCMV beamformer is designed from the following optimization problem in real-valued form, where Real{·} is the operator taking the real part and λ ∈ C^{D×1} is a Lagrange multiplier vector. Calculating the conjugate gradient of J(w) with respect to w gives (9), in which (10) holds. It is noted that λ^H C^H w is a complex scalar; thus we have (11). Based on (11), (10) can be expanded to (12). Since λ^H C^H = (Cλ)^H and Cλ ∈ C^{M×1}, using the rules of gradient derivation, the first term on the right side of (12) is zero and the second term on the right side of (12) is Cλ, so we have (13). Based on the results above, we can calculate ∇_w J(w) as (14). The following updating formula is used to compute the weight vector w(n + 1): w(n + 1) = w(n) − µ ∇_w J(w(n)), where µ is a non-negative step size parameter, which decides the convergence of our method. Substituting (14) into (15) and writing λ(n) to make the iteration index explicit, we have (16), where λ(n) varies with the index n. We require that the weight vector w(n + 1) satisfy the constraint condition C^H w(n + 1) = f in each iteration. Premultiplying both sides of (15) by C^H yields (18). Combining (17) and (18), we obtain the equivalent form (19); consequently, the expression for λ(n) is given by (20). Substituting (20) into (16), we obtain the iterative equation of the weight vector w(n + 1) as (21), and simplifying (21) further yields (22). Denoting P and G as in (23) and (24), we can rewrite the iterative process of the weight vector w(n + 1) in the compact form (25). Equation (25) is the updating formula of the estimated suboptimal weight vector. Because the covariance matrix is not in an iterative form, the beamformer constructed by (25) is not an adaptive one. For simplicity, we denote the updating procedure of (25) as a simple iterative suboptimal solution of LCMV (SIS-LCMV) beamformer. Note that the covariance matrix R_xx is unavailable in real time, so the SIS-LCMV beamformer is still torn between the need for a long observation time for stable covariance matrix estimates and the need for a short observation time to track the dynamic behavior of targets.
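A minimal sketch of the SIS-LCMV recursion follows, assuming the compact form (25) reads w(n+1) = P(I − µR̂_xx)w(n) + G with P = I − C(C^H C)^{-1}C^H and G = C(C^H C)^{-1}f; these expressions for P and G are my inference from the surrounding derivation (P is described as the orthocomplement matrix of C and G as the min-norm solution), not formulas quoted from the text.

```python
import numpy as np

def sis_lcmv(X, C, f, mu, n_iter=20, delta=1e-3):
    """SIS-LCMV sketch: iterate w <- P (I - mu*R) w + G, with the sample
    covariance R = (1/K) sum_k x(k) x(k)^H held fixed across iterations."""
    M, K = X.shape
    R = X @ X.conj().T / K                     # sample covariance, as in (26)
    CHC_inv = np.linalg.inv(C.conj().T @ C)
    P = np.eye(M) - C @ CHC_inv @ C.conj().T   # assumed projector onto null(C^H)
    G = C @ CHC_inv @ f                        # assumed min-norm solution of C^H w = f
    w = np.zeros(M, dtype=complex)             # w(0) = 0, so w(1) = G
    for _ in range(n_iter):
        w_new = P @ (w - mu * (R @ w)) + G
        if np.linalg.norm(w_new - w) ** 2 < delta:  # termination on iterate change
            return w_new
        w = w_new
    return w
```

Because every iterate has the form P(·) + G and C^H P = 0, the constraint C^H w = f holds after the first iteration for any step size; µ should still be chosen inside (0, 2/λ_max) for convergence.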
Generally speaking, an available approximation of R_xx is the sample covariance matrix R̂_xx = (1/K) Σ_{k=1}^{K} x(k) x^H(k), in which K is the total number of snapshots. Hence, R_xx in the weight updating formula of SIS-LCMV should be substituted by this estimate. The drawback of SIS-LCMV is that it needs a long observation time for a stable covariance matrix estimate. In order to solve the problem of long observation time and to improve the dynamic behavior of the beamformer, the approximation of the covariance matrix in (26) can be replaced by the instantaneous estimate R̂_xx(k) = x(k) x^H(k). In this case, the suboptimal weight updating process of our proposed iterative adaptive LCMV beamformer can be rewritten as (29) and (30), in which the iteration index k = 1, 2, ..., K is the index of the measurement number and the scalar y(k) is the instantaneous output of the beamformer in the kth observation time. Using the updating formulae (29) and (30), we obtain an adaptive iterative suboptimal solution of LCMV (AIS-LCMV) beamformer. It should be noted that the updating process of the AIS-LCMV beamformer improves the dynamic behavior of the SIS-LCMV beamformer remarkably. However, the estimation error can be large due to the poor approximation of the covariance matrix. In order to improve the performance of AIS-LCMV, we adopt the following recursive strategy for the covariance matrix: R̂_xx(k) = β R̂_xx(k − 1) + x(k) x^H(k), where β is a forgetting factor, and R̂_xx(k) and R̂_xx(k − 1) are the estimates of the covariance matrix in the kth and (k − 1)th observation times, respectively. Based on the recursive estimate of the covariance matrix in (31), we obtain the RAIS-LCMV algorithm as in (32). Obviously, the smoothing technique used here reduces the estimation error of the covariance matrix effectively.
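The per-snapshot flavor of the algorithm can be sketched as below, again assuming the compact update w ← P(I − µR̂)w + G with P = I − C(C^H C)^{-1}C^H and G = C(C^H C)^{-1}f inferred from the derivation. To keep a fixed step size inside the stability bound, this sketch uses a normalized recursion βR̂(k−1) + (1−β)x(k)x^H(k); that scaling is my assumption for the illustration, whereas (31) in the text has no (1−β) factor.

```python
import numpy as np

def rais_lcmv(X, C, f, mu=0.1, beta=0.998):
    """RAIS-LCMV sketch: per-snapshot update of both the covariance
    estimate and the weight vector (normalized forgetting-factor variant)."""
    M, K = X.shape
    CHC_inv = np.linalg.inv(C.conj().T @ C)
    P = np.eye(M) - C @ CHC_inv @ C.conj().T   # assumed projector onto null(C^H)
    G = C @ CHC_inv @ f                        # assumed min-norm solution of C^H w = f
    R = np.eye(M)                              # R(0) = I, as in the parameter section
    w = np.zeros(M, dtype=complex)             # w(0) = 0
    for k in range(K):
        x = X[:, k]
        R = beta * R + (1 - beta) * np.outer(x, x.conj())  # normalized recursion
        w = P @ (w - mu * (R @ w)) + G
    return w
```

As with the batch variant, the structure P(·) + G keeps C^H w = f satisfied at every snapshot, so only the convergence of the adaptive part depends on µ and β.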
It should be noted that AIS-LCMV and RAIS-LCMV are special cases of the SIS-LCMV algorithm. Among them, AIS-LCMV is simply the case of K = 1 in the estimation of the covariance matrix in (26), while RAIS-LCMV adopts a data-reusing strategy to improve the estimation precision of the covariance matrix. We believe that RAIS-LCMV outperforms the AIS-LCMV beamformer in both tracking speed and stable covariance matrix estimation. Additionally, the computational complexity of RAIS-LCMV is lower than that of SIS-LCMV, because the latter needs more observation time to compute the approximation of the covariance matrix, and the robustness of RAIS-LCMV is superior to that of SIS-LCMV and AIS-LCMV in cases with small numbers of snapshots and low SNRs and JNRs. We now summarize the three listed algorithms, SIS-LCMV, AIS-LCMV, and RAIS-LCMV, in Tab. 1, Tab. 2, and Tab. 3, respectively. In Tab. 1, N is the total number of iterations in SIS-LCMV and ε is the difference between two adjacent estimated weight vectors; δ is a small threshold we set. In Tab. 2 and Tab. 3, the termination condition of the iterative process is determined only by the number of snapshots K.

Parameters Selection
The parameters to be considered here are the step size parameter µ, the initial weight vector w(0), the total iteration number N (or, equivalently, the iteration error threshold δ), the initial covariance matrix R̂_xx(0), and the forgetting factor β. Among them, the initial covariance matrix R̂_xx(0) is usually set to the identity matrix of size M × M, and the forgetting factor β is always selected as a constant close to one, e.g., β = 0.998. Since AIS-LCMV and RAIS-LCMV are just special cases of SIS-LCMV, for the sake of simplicity we take only the SIS-LCMV algorithm as an example to demonstrate the selection of the parameters. The conclusions can be applied to the AIS-LCMV and RAIS-LCMV algorithms directly.

The Step Size Parameter µ
The first parameter to be determined is the step size µ. In order to determine its admissible range, we substitute (17) and (24) into (25), which yields (33). Further simplifying (33), we obtain a compact form in w as (34). Note that P is a full-rank matrix if the signals and interferences are not coherent.
Premultiplying both sides of (34) by P^{-1} yields (35). Recall that the eigendecomposition of R_xx is given as R_xx = Q Λ Q^H, where Q is a unitary matrix whose columns are the normalized eigenvectors and the diagonal matrix Λ is composed of the eigenvalues λ_1, λ_2, ..., λ_M. Substituting (36) into (35), we have (39). Because Q^H Q = Q Q^H = I, (39) can be rewritten as (40). Premultiplying both sides of (40) by Q^H and denoting b(n) = Q^H w(n), we obtain (42). Equivalently, each row of (42) has the recursion form b_i(n + 1) = (1 − µλ_i) b_i(n). If the initialized value of b_i(n) is b_i(0), then b_i(n) = (1 − µλ_i)^n b_i(0). Obviously, lim_{n→∞} b_i(n) exists if |1 − µλ_i| < 1, i.e., −1 < 1 − µλ_i < 1, or equivalently 0 < µ < 2/λ_i. To ensure that the step size µ satisfies (45) for all i, we require 0 < µ < 2/λ_max, where λ_max is the largest eigenvalue of R_xx.

The Initialized Weight Vector w (0)
The second parameter to be determined in our proposed algorithms listed in Tab. 1, Tab. 2, and Tab. 3 is w(0). Recall that G in (25) is a fixed part in each iteration and that the expression for G is just a min-norm solution to (3). If we choose the initial condition w(0) = 0, then w(1) = G; that is to say, w(1) is the min-norm solution of LCMV. In other words, the expected weight vector becomes the min-norm solution after only one iteration. Meanwhile, setting w(0) = 0 ensures the constraint condition in each iteration. Based on w(0) = 0, we can calculate w(n + 1) by (49). Because P is the orthocomplement matrix of C, we have C^H P = 0. Premultiplying both sides of (49) by C^H, we find that the weight vector in each iteration satisfies the constraint condition C^H w(n + 1) = f. From the analysis above, we find that the choice of w(0) = 0 is reasonable and gives a good suboptimal solution for the weight vector.
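The min-norm property of G can be verified numerically. Assuming G takes the form C(C^H C)^{-1}f (my inference from the derivation), it should coincide with the minimum-norm solution of the underdetermined system C^H w = f, which NumPy's least-squares solver returns; the matrix sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M, D = 8, 3
C = rng.standard_normal((M, D)) + 1j * rng.standard_normal((M, D))
f = np.array([1.0, 0.0, 0.0])

G = C @ np.linalg.inv(C.conj().T @ C) @ f              # assumed closed form of G
w_mn, *_ = np.linalg.lstsq(C.conj().T, f, rcond=None)  # min-norm solution of C^H w = f

print(np.allclose(G, w_mn))            # True: G is exactly the min-norm solution
print(np.allclose(C.conj().T @ G, f))  # True: the constraint holds after one iteration
```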

The Iteration Number N and δ
The third parameter to be determined is the iteration number N. The value of N can be set in advance and must be large enough to approach the optimal weight vector w_opt. Empirically, any N ≥ 20 is appropriate, because a suboptimal weight vector is reached after only several iterations in most cases. This selection of N is justified subsequently by the convergence analysis in Section 5 and the simulation results in Section 6.
As an alternative, the iteration error of the weight vector is also an efficient way to control the termination of our proposed SIS-LCMV, AIS-LCMV, and RAIS-LCMV algorithms: the iteration stops when ||ŵ(n) − ŵ(n − 1)||_2^2 < δ, where ŵ(n) is the estimate at the nth iteration and ||·||_2 is the ℓ2-norm. We set δ = 10^{−3} in our simulations unless stated otherwise. A smaller δ requires more iterations to reach a stable point.

Discussion
In this section, we evaluate the performance of our proposed algorithms from two aspects, i.e., convergence performance and computational complexity.

Convergence Analysis
It is obvious that the orthocomplement matrix P of C has the properties (53). Based on (49) and (53), with w(0) = 0, the relationship between the weight vectors w(n + 1) and w(n) can be further rewritten as (54); hence, (54) can be rewritten as (55). Note that the finite sum of a matrix power series can be expressed as (56). Letting A = I − µR_xx in (56), (55) can be equivalently expressed as (57). The first term on the right side of (57) is the transient component of w(n), and the second term is the steady-state component. Equation (55) shows that the estimated suboptimal weight vector of our proposed algorithm consists of two parts: a fixed part and an adaptive part. The fixed part is the aforementioned min-norm solution G of the constraint condition C^H w = f. The adaptive part is (1/µ) P (I − (I − µR_xx)^n) R_xx^{-1} G, which is related to the covariance matrix of the received data. To prove the convergence of our proposed methods, we only need to consider the limit of this term as n → ∞. From the step size selection in (46), we know that (I − µR_xx)^n → 0 as n → ∞ for any µ in 0 < µ < 2/λ_max. Therefore, (57) can be rewritten as (58). Equation (58) tells us that the stable point of the suboptimal weight vector is (1/µ) P R_xx^{-1} G + G when the step size lies in the range 0 < µ < 2/λ_max. That is to say, our proposed algorithms are stable when finding the suboptimal weight vector.
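The stability claim can be spot-checked numerically. Assuming the iteration takes the form w(n+1) = P(I − µR_xx)w(n) + G with P = I − C(C^H C)^{-1}C^H and G = C(C^H C)^{-1}f (my reading of the compact update), any µ in (0, 2/λ_max) should drive successive iterates to a fixed point that still satisfies the constraints. All matrices below are randomly generated for illustration; the diagonal loading on R merely keeps it well conditioned for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
M, D = 8, 2
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T / M + np.eye(M)      # Hermitian positive definite "R_xx"
C = rng.standard_normal((M, D)) + 1j * rng.standard_normal((M, D))
f = np.array([1.0, 0.0])

CHC_inv = np.linalg.inv(C.conj().T @ C)
P = np.eye(M) - C @ CHC_inv @ C.conj().T
G = C @ CHC_inv @ f
mu = 1.0 / np.linalg.eigvalsh(R)[-1]    # inside (0, 2/lambda_max)

w = np.zeros(M, dtype=complex)
diffs = []
for _ in range(300):
    w_new = P @ (w - mu * (R @ w)) + G
    diffs.append(np.linalg.norm(w_new - w))
    w = w_new

print(diffs[-1] < 1e-10)                # True: the iteration has converged
print(np.allclose(C.conj().T @ w, f))   # True: the constraint holds at the limit
```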

Complexity Evaluation
In this section, we compare the computational complexity of our proposed algorithms, i.e., SIS-LCMV, AIS-LCMV, and RAIS-LCMV, with the aforementioned CF-LCMV [19] and GSC-LCMV [1] and with two other iterative algorithms, i.e., the constrained affine-projection (CAP) algorithm [12] and the low-complexity addition or removal of sensors/constraints in LCMV beamformers [17]. The computational analysis is based on the complexity of the basic operations defined in Tab. 4. For convenience of comparison, the computational complexities are summarized in Tab. 5. The computational complexities of CF-LCMV and GSC-LCMV can be derived directly from the closed-form solutions of the optimal weights. It should be noted that we consider only the L = 1 case for the CAP algorithm, because it has the lowest computational complexity among all cases L ≥ 1 (L is the data reuse number; see [12] for details), which makes the complexity comparison fair. Although multiple updating algorithms are proposed in [17], we consider only the typical sensor update incremental LCMV (SUI-LCMV) algorithm for the sake of simplicity. For the listed iterative algorithms, we count only the computation in each iteration. Note that our proposed algorithms do not need to compute P and G in each iteration, because they are computed in the initialization step; hence, the per-iteration cost of our proposed algorithms does not depend on D. From Tab. 5, it is obvious that our proposed algorithms have lower computational complexity than the other listed algorithms; among them, AIS-LCMV has the lowest computational complexity. Tab. 5. Computational complexity of the five listed LCMV algorithms.

Simulation and Results
In this section, we compare the performance of our proposed algorithms with the CF-LCMV, GSC-LCMV, CAP, and SUI-LCMV algorithms in terms of computational cost, robustness, and convergence.

Comparison of Computational Complexity
In this section, the computational complexities of the five aforementioned algorithms are validated with simulation data. From Tab. 5, we note that CF-LCMV and GSC-LCMV are direct solutions without any iteration process, and their computational complexities far exceed those of the other listed algorithms. We therefore do not plot the operation counts of the CF-LCMV and GSC-LCMV algorithms in the following simulations. For convenience of comparison, we compare the operation counts of our proposed algorithms only with the two existing iterative adaptive LCMV algorithms, CAP and SUI-LCMV.
Considering that the source number D is usually much smaller than the number of sensors M, we consider only the case D ≤ M without loss of generality. To test the relationship between the computational complexity of the listed LCMV algorithms and the number of sensors, we fix the number of sources D and change only the number of sensors M. Fig. 1 gives the operation counts of the listed LCMV algorithms versus the number of sensors when the number of sources D is five; here the number of sensors varies from 20 to 100 in increments of 10. To test the relationship between the computational complexity of the listed algorithms and the number of sources, we change the number of sources D from 5 to 15 while keeping the number of sensors M varying as before. Fig. 2 gives the operation counts of the listed LCMV algorithms when the number of sources D is 15.
From Fig. 1 and Fig. 2, we observe that our proposed SIS-LCMV, AIS-LCMV, and RAIS-LCMV algorithms have competitive computational complexity in both cases.For the three proposed LCMV algorithms, the calculations of AIS-LCMV are smaller than that of the other two algorithms.In the former case, the calculations of AIS-LCMV are a bit smaller than that of SUI-LCMV while the calculations of RAIS-LCMV are close to that of CAP.In the latter case, the calculations of AIS-LCMV are smaller than that of SUI-LCMV and the calculations of RAIS-LCMV are smaller than that of CAP.As the number of sources increases, the superiority of our proposed algorithms is remarkable.

Accuracy in Beamforming and Interference Constraining
This experiment is carried out in a beamforming application. In this scenario, a uniform linear array with M = 8 antennas and element spacing equal to half a wavelength is used in a system with D = 4 users. The signal of the user whose look-direction is 0° is of interest, and the other three signals, whose incident angles are −20°, −40°, and −50°, respectively, are treated as interferences or jammers. In order to show the performance of our proposed algorithms, we also compare them with the constrained affine-projection (CAP) algorithm [12]. Because GSC-LCMV is only a different formulation of CF-LCMV and the two have the same beamforming performance, we omit GSC-LCMV in this simulation. Additionally, SUI-LCMV is a sensor update incremental algorithm and is not appropriate for the fixed-sensor case, so we do not consider its performance in this experiment. The parameters used for the CAP algorithm in this experiment are L = 1, δ = 10^{−4}, and µ = 0.05. The forgetting factor β used in RAIS-LCMV is 0.998, and the initial covariance matrix used in RAIS-LCMV is set to the identity matrix, i.e., R̂_xx(0) = I_{M×M}. Firstly, we consider the performance of the different algorithms against the snapshot number. The signal-to-noise ratio (SNR) is set to 0 dB, and jammer-to-noise ratios (JNRs) of 10 dB are used. The beamforming performance versus the number of snapshots with fixed SNR, JNRs, and number of sensors is depicted in Fig. 3 and Fig. 4. It is evident that the listed iterative adaptive algorithms, including our proposed algorithms and the CAP algorithm, are more robust to small numbers of snapshots, while CF-LCMV is very sensitive to the number of snapshots and performs poorly when few snapshots are available.
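The constraint geometry of this scenario can be reproduced in a few lines to confirm that an LCMV weight with a unit-gain constraint at 0° and zero-gain constraints at the jammer angles places exact nulls there. For brevity the sketch uses the constraint-only weight C(C^H C)^{-1}f (equivalent to (6) with R = I) rather than a full data simulation; that simplification is an assumption of this illustration.

```python
import numpy as np

M = 8                                   # ULA with half-wavelength spacing
def steer(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

angles = [0.0, -20.0, -40.0, -50.0]     # SOI followed by the three jammers
C = np.column_stack([steer(a) for a in angles])
f = np.array([1.0, 0.0, 0.0, 0.0])      # unit gain on the SOI, nulls on the jammers

# With R = I the LCMV weight (6) reduces to the min-norm solution C (C^H C)^{-1} f
w = C @ np.linalg.solve(C.conj().T @ C, f)

grid = np.linspace(-90, 90, 361)
pattern = np.abs(np.array([steer(a).conj() @ w for a in grid]))
print(abs(pattern[180] - 1.0) < 1e-10)  # True: unit gain at 0 degrees
```

Plotting 20·log10(pattern) over the grid reproduces the kind of beampattern shown in Fig. 3 through Fig. 6, with deep nulls at −20°, −40°, and −50°.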
Secondly, we fix the number of snapshots N at 20 and the number of sensors M at 8. Fig. 5 shows the beamforming performance of the listed algorithms with an SNR of 10 dB and JNRs of 20 dB. Fig. 6 plots the beamforming performance of the listed algorithms with an SNR of −10 dB and JNRs of −20 dB. It is evident that our proposed algorithms and the CAP algorithm are robust to low SNR and JNRs, while CF-LCMV shows degraded performance in the low-SNR, low-JNR case, as demonstrated in Fig. 6. From the four listed figures, we observe that our proposed algorithms have competitive performance in both cases. Among SIS-LCMV, AIS-LCMV, and RAIS-LCMV, RAIS-LCMV gives the best trade-off between robustness and tracking speed. It is noted that the sidelobe level of RAIS-LCMV is lower than those of the other algorithms, including CAP, in the four listed simulation cases. AIS-LCMV has the lowest computational complexity but is more sensitive to low SNR, low JNRs, and small numbers of snapshots and sensors. SIS-LCMV is not an adaptive beamformer, but it outperforms CF-LCMV under the same conditions. Finally, we also compare the performance of RAIS-LCMV with the CMV-CCG (constrained minimum variance conventional conjugate gradient) [9], CCM-MCG (constrained constant modulus modified conjugate gradient) [9], and SM-CG (set membership conjugate gradient) [11] algorithms. The number of snapshots, the number of sensors, the SNR, the JNRs, and the directions of the sources are all the same as in the above case. The parameters of the CMV-CCG, CCM-MCG, and SM-CG algorithms are listed in Tab. 6; these parameters are the ones used in the simulations of [11] and [9]. It is worth pointing out that we consider only the fixed-bound case of δ for the sake of simplicity. The results are shown in Fig. 7.
From the figure, we observe that our proposed RAIS-LCMV has competitive beamforming performance with respect to the other listed algorithms. Our proposed algorithm works well with a small number of snapshots, while CCM-MCG, CMV-CCG, and SM-CG need more snapshots to obtain good performance (more than 3000 in the simulation parts of the original papers). Hence, the convergence speed of RAIS-LCMV is better than those of the above three algorithms, because the steepest-descent strategy is adopted.

Convergence Speed
In this scenario, we fix the number of sensor elements at eight. The four sources have the aforementioned directions-of-arrival. The number of snapshots is set to 1000 in order to test the convergence of our proposed algorithms under different SNR and JNR cases. We compare the convergence speed of our proposed algorithms with that of the CAP algorithm. Fig. 8 and Fig. 9 show the convergence speed of our proposed algorithms and the CAP algorithm under different SNRs and JNRs. Note that the vertical axis in Fig. 8 and Fig. 9 is the squared ℓ2-norm of the difference between the weight vectors of two adjacent iterations, i.e., ∆w(k) = ||ŵ(k) − ŵ(k − 1)||_2^2, where ŵ(k) and ŵ(k − 1) are the estimates at the kth and (k − 1)th snapshots, respectively. We observe from both figures that our algorithms present better convergence speed in both the SNR and JNR cases. Among them, the convergence of AIS-LCMV is slightly poorer than that of the CAP algorithm because of its poor approximation of the covariance matrix. The convergence of SIS-LCMV and RAIS-LCMV is superior to that of the CAP algorithm regardless of the SNR and JNRs. We also see from the two figures that our proposed algorithms perform very similarly to the CAP algorithm in convergence speed: they all reach a stable point after only several iterations, as demonstrated in Fig. 8 and Fig. 9. In other words, our algorithms need only several snapshots to obtain the suboptimal weight vector under both lower and higher SNR and JNR cases.

Conclusion
In this paper, we derived a low-complexity RAIS-LCMV algorithm using the conjugate gradient (CG) optimization method. Our proposed algorithms can, in certain applications, substantially improve robustness to low SNR, low JNRs, and small numbers of snapshots.
Through theoretical analysis and simulation results, we show that our proposed algorithms effectively overcome the dilemma between acquiring a long observation time for stable covariance matrix estimates and a short observation time to track the dynamic behavior of targets. The parameters used in our algorithms are easily determined, and the performance of our proposed algorithms is superior to that of existing iterative adaptive algorithms, such as CAP and SUI-LCMV, in interference suppression, computational complexity, and robustness. Simulation results show the efficacy of our proposed algorithms.