
Low-Complexity Joint 3D Super-Resolution Estimation of Range Velocity and Angle of Multi-Targets Based on FMCW Radar

1 School of Information Science and Engineering, Harbin Institute of Technology, Weihai 264209, China
2 School of Electronic Science, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(17), 6474; https://doi.org/10.3390/s22176474
Submission received: 11 July 2022 / Revised: 22 August 2022 / Accepted: 25 August 2022 / Published: 28 August 2022
(This article belongs to the Section Electronic Sensors)

Abstract

Joint multi-dimensional parameter estimation of multiple targets is introduced to achieve super-resolution sensing of range, velocity, azimuth angle, and elevation angle in frequency-modulated continuous waveform (FMCW) radar systems. In this paper, a low-complexity joint 3D super-resolution estimation of the range, velocity, and angle of multiple targets is proposed for an FMCW radar with a uniform linear array. The proposed method first constructs a size-reduced 3D matrix in the frequency domain from the system model of the FMCW radar. A low-complexity three-level cascaded 1D spectrum estimation, implemented by applying the Lagrange multiplier method, is then developed on this matrix. Finally, the proposed joint 3D super-resolution algorithm is validated by numerical experiments and with a 77 GHz FMCW radar built by Texas Instruments, achieving significant estimation performance improvements compared with conventional algorithms.

1. Introduction

In recent years, with the development of millimeter-wave (mm-wave) semiconductor technology, frequency-modulated continuous wave (FMCW) radar sensor systems have become easy to deploy and popular [1], and many commercial radars are used for automotive applications [2,3], UAV detection [4,5], and medical diagnosis [6,7,8], among others. Radar sensor systems play an important role in these applications by providing state features such as the range, velocity, and angle of targets. Consequently, various estimation algorithms for FMCW radar have been proposed. In traditional algorithms [9,10,11], the range, radial velocity, and angle of the targets are estimated by taking the 3D fast Fourier transform (FFT) of the beat-frequency (BF) signal, which is obtained by dechirping the received signal with the transmitted signal. Although this approach is simple and efficient, its resolution is limited by hardware factors such as the bandwidth of the transmitted signal, the receiving antenna aperture, and the number of transmitted chirps [12]. Because of these physical limitations, the resolution of the radar system often cannot meet practical needs, which has led to intensive research on super-resolution estimation algorithms for FMCW radar, especially in multi-target detection scenarios.
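To make the conventional pipeline concrete, the following is a minimal sketch of the 3D-FFT processing described above, assuming the beat signal is stored as a numpy cube of shape (N antennas, M samples, L chirps); the function name and peak-picking rule are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def fft3d_peaks(cube, num_peaks):
    """Pick the strongest bins of the 3D FFT of a beat-signal cube."""
    spectrum = np.abs(np.fft.fftn(cube))            # angle-range-Doppler magnitude
    order = np.argsort(spectrum, axis=None)[::-1]   # strongest bins first
    # each flat index maps back to an (angle, range, Doppler) bin triple whose
    # resolution is fixed by N, the bandwidth Bw, and L, respectively
    return [np.unravel_index(i, spectrum.shape) for i in order[:num_peaks]]
```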
The conventional one-dimensional (1D) multiple signal classification (MUSIC) algorithm [13,14], Root-MUSIC algorithm [15,16], and ESPRIT algorithm [17,18], among others, provide super-resolution estimation through subspace-based techniques. However, these algorithms only estimate the direction of arrival (DOA), and the estimation of a single parameter cannot meet practical needs. Range, velocity, and angle can be estimated sequentially, solving each estimation problem independently, but this leads to a severe pairing problem among the estimated results as the number of targets increases [19]. Therefore, many joint estimation algorithms without pairing have been suggested in recent years. For example, L-shaped, matrix, or other receive-array geometries are used for joint super-resolution estimation of elevation and azimuth based on the two-dimensional (2D) MUSIC algorithm [20,21]. Additionally, the 2D MUSIC algorithm [22,23,24] achieves super-resolution joint estimation of range and angle by extending conventional 1D MUSIC in the angle domain to 2D MUSIC in the range-angle domain. However, the practical difficulty of this kind of algorithm is the substantial computation required to construct the covariance matrix and to perform multi-dimensional peak searching; the cost grows rapidly with the dimension. To reduce the computation, dimensionality reduction and beam projection have been applied to the 2D MUSIC algorithm [25,26,27], and the ESPRIT algorithm has been used to replace the 2D MUSIC peak search [28]. However, such low-complexity super-resolution algorithms have rarely been studied for the joint 3D estimation of range, velocity, and angle. Although some 3D joint estimation algorithms based on 3D MUSIC have been proposed [29,30], they do not solve the problem of high complexity in 3D.
Hence, in this paper, we propose a low-complexity super-resolution joint 3D estimation algorithm for the range, velocity, and angle of multiple targets. Based on 3D MUSIC, the algorithm reduces computation in two stages. First, we reduce matrix computation and data storage by extracting the useful frequency information in the beat signal. Then, the conventional 3D peak search is transformed into a three-level cascaded 1D search by applying the Lagrange multiplier method. Additionally, the algorithm uses a three-dimensional spatial smoothing technique [31] to handle coherent echoes. The results of simulations and actual experiments show that the algorithm not only substantially reduces computation, but also maintains super-resolution capability.
This paper is organized as follows: Section 2 presents the radar system and signal model. Section 3 introduces this low-complexity super-resolution algorithm. The results of experiments and the performance analysis of this algorithm are discussed in Section 4. Finally, the conclusion is presented in Section 5.

2. Signal Model

Consider a SIMO FMCW radar system consisting of a uniform linear array (ULA); its block diagram is shown in Figure 1.
A chirp is a frequency-modulated signal whose frequency increases or decreases linearly over a bandwidth of Bw Hz within a sweep duration of Tc s. The linearly increasing version transmitted by the FMCW radar can be modeled as [32]:
$$ s_t(t) = \begin{cases} e^{j2\pi\left(f_c t + \frac{\mu}{2}t^2\right)}, & 0 \le t < T_c \\ 0, & \text{elsewhere} \end{cases} \tag{1} $$
where fc is the carrier frequency and μ = Bw/Tc is the chirp slope, i.e., the ratio of the transmitted chirp bandwidth to its sweep duration. Considering K narrowband sources impinging on the ULA of the radar system, reflected by K moving targets in the far field, the received signal can be written as:
$$ s_r(t) = \sum_{k=1}^{K} s_t(t-\tau_k) + w(t) \approx \sum_{k=1}^{K} e^{j2\pi\frac{\mu}{2}\left(t-\frac{2R_k}{c}\right)^2}\, e^{j2\pi f_c\left(t-\tau_k\right)} + w(t) \tag{2} $$
$$ \tau_k = \frac{1}{c}\left[2\left(R_k + (l-1)T_c V_k + V_k t\right) + (n-1)\,d\sin\theta_k\right] \tag{3} $$
where K is the number of targets; Rk, θk, and Vk are the range, angle, and velocity of the k-th target, respectively; c is the speed of light; d is the element spacing of the uniform linear array; n = 1…N, where N is the number of receive antenna elements; l = 1…L, where L is the number of transmitted chirps; and w(t) is the transformed additive white Gaussian noise (AWGN) signal. The term τk stands for the propagation delay of the received chirp signal, and it is induced by range, velocity, and angle at the same time, as explained in [33]. To be more specific, the term Rk + (l − 1)TcVk + Vkt denotes the instantaneous range at time t for the l-th received chirp reflected from the k-th moving target, and it induces the phase shift due to range and velocity. Meanwhile, the term (n − 1)d sinθk denotes the additional path length at the n-th sensor element for the incoming signal reflected from the k-th target, and it induces the phase shift due to angle. The beat signals are obtained from the received signals through mixers, and the down-converted signal can be expressed as:
$$ s_b(t) = s_r(t)\, s_t^{*}(t) = \sum_{k=1}^{K} b_k\, e^{j2\pi\frac{2\mu}{c}\left(R_k+\frac{f_c}{\mu}V_k\right)t}\, e^{j2\pi\frac{f_c}{c}\,2(l-1)T_c V_k}\, e^{j2\pi\frac{f_c}{c}(n-1)d\sin\theta_k} + w(t) \tag{4} $$
where bk is the complex amplitude of the signal reflected by the k-th target. After analog-to-digital conversion, we obtain the discrete beat signal:
$$ S_b[n,m,l] = \sum_{k=1}^{K} b_k\, e^{j2\pi\frac{2\mu}{c}\left(R_k+\frac{f_c}{\mu}V_k\right)\frac{m-1}{f_s}}\, e^{j2\pi\frac{f_c}{c}d(n-1)\sin\theta_k}\, e^{j2\pi\frac{f_c}{c}\,2(l-1)T_c V_k} + w[n,m,l] \tag{5} $$
where fs is the sampling frequency, m = 1…M, M is the number of time samples of one chirp signal, and w[n, m, l] is the AWGN signal after discretization.
The received radar data cube with dimension N × M × L can be constructed as shown in Figure 2. In Figure 2, y[n] is the 2D data matrix of the beat signal received by the n-th antenna, described by Equation (6), where W is the additive white Gaussian noise matrix.
$$ y[n] = \begin{bmatrix} S_b[n,1,1] & S_b[n,2,1] & \cdots & S_b[n,M,1] \\ S_b[n,1,2] & S_b[n,2,2] & \cdots & S_b[n,M,2] \\ \vdots & \vdots & \ddots & \vdots \\ S_b[n,1,L] & S_b[n,2,L] & \cdots & S_b[n,M,L] \end{bmatrix} + W_{L\times M} \tag{6} $$
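As a reference for the later sections, the following is a minimal sketch that synthesizes the discrete beat-signal cube of Equation (5) for K unit-amplitude point targets; the parameter names follow Section 2 and the values follow the simulation setup of Section 4.1, while the function name and SNR handling are illustrative assumptions.

```python
import numpy as np

c = 3e8
fc, Bw, Tc = 77e9, 300e6, 40e-6
fs, M, L, N = 7e6, 280, 12, 6
mu = Bw / Tc                          # chirp slope
d = c / fc / 2                        # half-wavelength element spacing

def beat_cube(R, V, theta, snr_db=10.0):
    """R, V, theta: length-K sequences of range (m), velocity (m/s), angle (rad)."""
    n = np.arange(N)[:, None, None]   # antenna index (angle phase)
    m = np.arange(M)[None, :, None]   # fast-time index (range phase)
    l = np.arange(L)[None, None, :]   # chirp index (velocity phase)
    cube = np.zeros((N, M, L), dtype=complex)
    for Rk, Vk, thk in zip(R, V, theta):
        cube += (np.exp(1j * 2*np.pi * (2*mu/c) * (Rk + fc*Vk/mu) * m / fs)
               * np.exp(1j * 2*np.pi * (fc/c) * d * n * np.sin(thk))
               * np.exp(1j * 2*np.pi * (fc/c) * 2 * Tc * Vk * l))
    noise = (np.random.randn(N, M, L) + 1j*np.random.randn(N, M, L)) / np.sqrt(2)
    return cube + noise * 10 ** (-snr_db / 20)

# e.g., two targets: cube = beat_cube([30, 50], [-3, 4], np.radians([-20, 35]))
```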

3. The Proposed Low-Complexity Super-Resolution Algorithm

In previous multi-dimensional joint estimation work [22,23,24], matrix operations and multi-dimensional peak searching are the main causes of high complexity, and for coherent signals, smoothing is necessary [34]. In this paper, matrix block selection and spatial smoothing are performed first, and the complexity-reducing estimation is then applied.

3.1. Targets-Located Blocks Selection

As described in Section 2, the time dimension of the 3D radar data cube is much larger than the angle and velocity dimensions. Selecting only the time-dimension blocks that contain target range information significantly reduces the size of the conventional 3D covariance matrix.
As shown in Figure 3, the target-located blocks of interest, corresponding to temporal frequencies, can be selected from the matrix Y2, which is obtained by performing a 1D FFT in the range domain (range-FFT). The peaks in the temporal frequencies fall into several blocks, for example, Block 1 (B1) with index range [m1, m2] and Block 2 (B2) with index range [m3, m4]; each block of data is then jointly estimated separately. The relationship between range and temporal frequency f is $R = \frac{f c}{2\mu}$, where c is the speed of light. This process has two further advantages: it yields a coarse estimate of the range, and it filters out part of the white noise, improving the signal-to-noise ratio (SNR).
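A minimal sketch of this block selection, assuming the cube layout of Section 2; the local-maximum rule and threshold are illustrative assumptions rather than the paper's exact detector.

```python
import numpy as np

def select_blocks(cube, width=10, thresh_db=-20.0):
    """Range-FFT along fast time, then pick index windows around each peak."""
    Y2 = np.fft.fft(cube, axis=1)                    # range-FFT per chirp
    profile = np.abs(Y2).mean(axis=(0, 2))           # averaged range profile
    floor = profile.max() * 10 ** (thresh_db / 20)   # crude detection threshold
    peaks = [m for m in range(1, len(profile) - 1)
             if profile[m] > floor
             and profile[m] >= profile[m - 1] and profile[m] >= profile[m + 1]]
    half = width // 2
    blocks = [(max(p - half, 0), min(p - half + width, Y2.shape[1]) - 1)
              for p in peaks]                        # (m1, m2) index pairs
    return blocks, Y2
```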

3.2. Decorrelation Processing

The eigenvalue decomposition (EVD) of the covariance matrix yields the signal subspace and the noise subspace. When the target echo signals are correlated, the dimension of the signal subspace is no longer equal to the number of targets [35,36]. It is therefore necessary to decorrelate the data, and spatial smoothing is an effective means of doing so. Frequency-domain processing and spatial smoothing need to be performed together; the process is shown in Figure 4.
We define a small cube of size h1 × h2 × h3, identified by the red frame in Figure 4, and scan all possible positions in the radar data cube: there are p1 = N − h1 + 1 positions in the angle dimension, p2 = M − h2 + 1 positions in the time dimension, and p3 = L − h3 + 1 positions in the velocity dimension, and each sub-matrix is processed in the frequency domain as in Figure 3. The phase shifts between adjacent samples in the h1, h2, and h3 dimensions are induced by angle, range, and velocity, respectively, as shown in [36].
Frequency-domain processing is performed on the data of each h1 × h2 × h3 sub-matrix, and the covariance matrix of each block is then obtained. This paper takes the selected block B1, which contains $\hat K$ targets, as an example, and denotes its data matrix as Y3 for simplicity. Y3 is the sub-matrix of Y2 with range index [m1, m2], Y2 is the matrix of Y1 after the range-FFT, and Y1 can be expressed as:
$$ Y_1 = Y\left[\tilde p_1 : \tilde p_1 + h_1 - 1,\; \tilde p_2 : \tilde p_2 + h_2 - 1,\; \tilde p_3 : \tilde p_3 + h_3 - 1\right] \tag{7} $$
Three exponential terms, containing the phase shifts induced by range, angle, and velocity, respectively, appear in Equation (5). A conceptual illustration of the three kinds of phase shift is given in Figure 5a, and each kind has a corresponding 1D steering vector in conventional 1D MUSIC. It is important to note that the orthogonality between the signal subspace and the noise subspace still holds in the extension to 3D MUSIC. Since, by Equation (5), the phase shift in the received signal comprises all three kinds, the steering vector for 3D MUSIC is the Kronecker product of the three 1D steering vectors mentioned above. We call it the 3D steering vector, and the search space for 3D MUSIC is a 3D grid over range, velocity, and angle. Therefore, the covariance matrix of B1 can be constructed as:
$$ D = \frac{1}{p_1 p_2 p_3} \sum_{\tilde p_1 = 1}^{p_1} \sum_{\tilde p_2 = 1}^{p_2} \sum_{\tilde p_3 = 1}^{p_3} \frac{1}{h_1 h_4 h_3}\, C C^H \tag{8} $$
where h4 = m2 − m1 + 1, C = vec(Y3) is the vectorized data of the block, and vec(·) denotes vectorization, namely reshaping the matrix of size h1 × h4 × h3 into a vector of size h1h4h3 × 1. To be more specific, if we define the vector Aij = Y3(:, i, j) with i = 1, 2, …, h4 and j = 1, 2, …, h3, the size of Aij is h1 × 1. Then, defining Bi = [Ai1; Ai2; …; Aih3] of size h1h3 × 1, the vector C can be written as C = [B1; B2; …; Bh4] of size h1h4h3 × 1, arranged in the same order as the 3D steering vector. Obviously, Aij contains the phase shift induced by angle, Bi contains the phase shifts induced by angle and velocity, and C contains the phase shifts induced by angle, velocity, and range at the same time. The construction of C is shown in Figure 5b.
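The smoothing and covariance construction can be sketched as follows, assuming the window slides over all p1·p2·p3 positions, each sub-cube is range-FFT'd over its h2 samples, and [m1, m2] indexes the selected block within that FFT; the function and variable names are illustrative.

```python
import numpy as np

def smoothed_covariance(cube, h1, h2, h3, m1, m2):
    """Spatially smoothed covariance D of Equation (8) for one selected block."""
    N, M, L = cube.shape
    h4 = m2 - m1 + 1
    D = np.zeros((h1 * h4 * h3, h1 * h4 * h3), dtype=complex)
    count = 0
    for q1 in range(N - h1 + 1):
        for q2 in range(M - h2 + 1):
            for q3 in range(L - h3 + 1):
                Y1 = cube[q1:q1+h1, q2:q2+h2, q3:q3+h3]
                Y2 = np.fft.fft(Y1, axis=1)          # range-FFT of the sub-cube
                Y3 = Y2[:, m1:m2+1, :]               # target-located block
                # vec(Y3): angle varies fastest, then velocity, then range,
                # matching the construction of C described above
                C = Y3.transpose(1, 2, 0).reshape(-1)
                D += np.outer(C, C.conj())
                count += 1
    return D / (count * h1 * h4 * h3)
```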

3.3. Low Complexity Joint 3D Estimation of Range-Velocity-Angle

The signal subspace and the noise subspace can be obtained after performing EVD on the constructed covariance matrix D:
$$ D = U \Lambda U^H \tag{9} $$
where $\Lambda = \mathrm{diag}\left(\lambda_1, \lambda_2, \ldots, \lambda_{h_1 h_4 h_3}\right)$. The noise subspace can be defined as:
$$ U_N = \left[U_{\hat K+1}, U_{\hat K+2}, \ldots, U_{h_1 h_4 h_3}\right] \in \mathbb{C}^{\,h_1 h_4 h_3 \times \left(h_1 h_4 h_3 - \hat K\right)} \tag{10} $$
Defining the term R + fcV/μ in the first exponential of Equation (5) as G, the 3D MUSIC spectrum can be calculated as:
$$ P(G, \theta, V) = \frac{1}{\left(\beta(G)\otimes\upsilon(V)\otimes\alpha(\theta)\right)^H U_N U_N^H \left(\beta(G)\otimes\upsilon(V)\otimes\alpha(\theta)\right)} \tag{11} $$
where
$$ \begin{aligned} \alpha(\theta) &= \left[1,\; e^{j2\pi\frac{f_c}{c}d\sin\theta},\; \ldots,\; e^{j2\pi\frac{f_c}{c}d(h_1-1)\sin\theta}\right]^{\mathrm T} \\ \upsilon(V) &= \left[1,\; e^{j2\pi\frac{f_c}{c}2T_c V},\; \ldots,\; e^{j2\pi\frac{f_c}{c}2(h_3-1)T_c V}\right]^{\mathrm T} \\ \tilde\beta(G) &= \left[1,\; e^{j2\pi\frac{2\mu}{c}G\frac{1}{f_s}},\; \ldots,\; e^{j2\pi\frac{2\mu}{c}G\frac{h_2-1}{f_s}}\right]^{\mathrm T} \end{aligned} \tag{12} $$
where β(G) consists of the entries of $\tilde\beta(G)$ in the index range [m1, m2] after the FFT, and ⊗ denotes the Kronecker product. The term β(G) ⊗ υ(V) ⊗ α(θ) is usually called the 3D steering vector, and it represents the set of phase delays of the received signal at each sensor element. Since three arguments specify the testing vector, the calculated spectrum P(G, θ, V) is a 3D matrix, and the range, velocity, and angle estimates of all targets can be obtained from its 3D peaks.
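A sketch that evaluates the pseudospectrum of Equation (11) at one test point (G, V, θ), with Un the noise subspace of Equation (10); the constants follow the simulation setup of Section 4.1, and treating m1, m2 as indices into the h2-point FFT of β̃(G) is an assumption of this sketch.

```python
import numpy as np

c, fc, Tc, fs = 3e8, 77e9, 40e-6, 7e6
mu = 300e6 / 40e-6                    # chirp slope Bw / Tc
d = c / fc / 2                        # half-wavelength spacing

def music_value(Un, G, V, theta, h1, h3, h2, m1, m2):
    alpha = np.exp(1j*2*np.pi*(fc/c)*d*np.sin(theta)*np.arange(h1))  # angle
    ups = np.exp(1j*2*np.pi*(fc/c)*2*Tc*V*np.arange(h3))             # velocity
    beta_t = np.exp(1j*2*np.pi*(2*mu/c)*G*np.arange(h2)/fs)          # time domain
    beta = np.fft.fft(beta_t)[m1:m2+1]             # keep the selected block
    a3d = np.kron(beta, np.kron(ups, alpha))       # 3D steering vector
    proj = Un.conj().T @ a3d                       # projection onto noise subspace
    return 1.0 / np.real(np.vdot(proj, proj))
```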
For the calculated 3D MUSIC spectrum P(G, θ, V), conventional joint estimation methods find the $\hat K$ largest values by 3D peak searching, and the range, velocity, and angle of the targets are estimated from the indexes of the corresponding 3D peaks. However, 3D peak searching imposes a heavy computational burden. The conventional Lagrange multiplier method [25,26] is adopted to reduce this complexity, so we define:
$$ M(G, V, \theta) = \frac{1}{P(G, V, \theta)} = \left(\beta(G)\otimes\upsilon(V)\otimes\alpha(\theta)\right)^H U_N U_N^H \left(\beta(G)\otimes\upsilon(V)\otimes\alpha(\theta)\right) = \left(\upsilon(V)\otimes\alpha(\theta)\right)^H Q_1(G) \left(\upsilon(V)\otimes\alpha(\theta)\right) \tag{13} $$
where $Q_1(G) = \left(\beta(G)\otimes I_{h_1 h_3}\right)^H U_N U_N^H \left(\beta(G)\otimes I_{h_1 h_3}\right)$. The problem then becomes a quadratic optimization problem, and the Lagrange multiplier method is used to reduce its dimension. We impose $e_1^H\left(\upsilon(V)\otimes\alpha(\theta)\right) = 1$ and $e_2^H\alpha(\theta) = 1$ to eliminate the trivial all-zero solution, where $e_1 = [1, 0, \ldots, 0]^{\mathrm T}$ of size $h_1 h_3 \times 1$ and $e_2 = [1, 0, \ldots, 0]^{\mathrm T}$ of size $h_1 \times 1$. The optimization problem can be defined as:
$$ \min_{G,V,\theta}\; \left(\upsilon(V)\otimes\alpha(\theta)\right)^H Q_1(G)\left(\upsilon(V)\otimes\alpha(\theta)\right) \quad \text{s.t.}\;\; e_1^H\left(\upsilon(V)\otimes\alpha(\theta)\right) = 1,\;\; e_2^H\alpha(\theta) = 1 \tag{14} $$
Letting $T(V,\theta) = \upsilon(V)\otimes\alpha(\theta)$, the cost function is:
$$ L(G, V, \theta) = T(V,\theta)^H Q_1(G)\, T(V,\theta) + \lambda\left(e_1^H T(V,\theta) - 1\right) + \eta\left(e_2^H \alpha(\theta) - 1\right) $$
where λ and η are the Lagrange multipliers. Taking the derivatives of L(G, V, θ) gives:
$$ \frac{\partial L(G,V,\theta)}{\partial T(V,\theta)} = 2\,Q_1(G)\,T(V,\theta) + \lambda e_1 = 0 \tag{15} $$
$$ \frac{\partial L(G,V,\theta)}{\partial \alpha(\theta)} = 2\,Q_2(G,V)\,\alpha(\theta) + \eta e_2 = 0 \tag{16} $$
where $Q_2(G,V) = \left(\beta(G)\otimes\upsilon(V)\otimes I_{h_1}\right)^H U_N U_N^H \left(\beta(G)\otimes\upsilon(V)\otimes I_{h_1}\right)$.
Firstly, according to Equation (15), we get $T(V,\theta) = \kappa\, Q_1^{-1}(G)\, e_1$, where κ is a constant. Since $e_1^H T(V,\theta) = 1$, $\kappa = \frac{1}{e_1^H Q_1^{-1}(G)\, e_1}$, so that $\hat T(V,\theta) = \frac{Q_1^{-1}(G)\, e_1}{e_1^H Q_1^{-1}(G)\, e_1}$. Substituting $\hat T(V,\theta)$ into Equation (13), G can be estimated as:
$$ \hat G = \arg\min_G \frac{1}{e_1^H Q_1^{-1}(G)\, e_1} = \arg\max_G\; e_1^H Q_1^{-1}(G)\, e_1 \tag{17} $$
Therefore, through a 1D local search over $G \in \left[\frac{m_1 c}{2\mu}, \frac{m_2 c}{2\mu}\right]$, the $\hat G_k$ $(k = 1, \ldots, \hat K)$ of every target in the block can be obtained.
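A sketch of this first cascaded 1D search of Equation (17); beta_of is assumed to return the length-h4 block steering vector β(G) (e.g., via the FFT-and-slice step sketched earlier), and the grid and names are illustrative.

```python
import numpy as np

def search_G(Un, G_grid, beta_of, h1, h3):
    """Scan G and return the reduced spectrum e1^H Q1(G)^-1 e1 of Equation (17)."""
    I = np.eye(h1 * h3)
    P = Un @ Un.conj().T                                # noise-subspace projector
    spectrum = np.empty(len(G_grid))
    for i, G in enumerate(G_grid):
        B = np.kron(beta_of(G).reshape(-1, 1), I)       # beta(G) ⊗ I_{h1 h3}
        Q1 = B.conj().T @ P @ B
        spectrum[i] = np.real(np.linalg.inv(Q1)[0, 0])  # e1^H Q1^{-1} e1
    return spectrum                                     # its peaks give the G_k
```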
Then, Equation (16) can be rewritten as:
$$ \frac{\partial L(\hat G_k, V, \theta)}{\partial \alpha(\theta)} = 2\,Q_2(\hat G_k, V)\,\alpha(\theta) + \eta e_2 = 0 \tag{18} $$
Similarly, according to Equation (18), we get $\alpha(\theta) = \varepsilon\, Q_2^{-1}(\hat G_k, V)\, e_2$, where ε is a constant. Since $e_2^H\alpha(\theta) = 1$, $\varepsilon = \frac{1}{e_2^H Q_2^{-1}(\hat G_k, V)\, e_2}$, so that $\hat\alpha(\theta) = \frac{Q_2^{-1}(\hat G_k, V)\, e_2}{e_2^H Q_2^{-1}(\hat G_k, V)\, e_2}$. V can then be estimated as:
$$ \hat V = \arg\max_V\; e_2^H Q_2^{-1}(\hat G_k, V)\, e_2 \tag{19} $$
Through a 1D search, $\hat V_k$ $(k = 1, \ldots, \hat K)$ of the k-th target can be obtained, and the range of the k-th target follows from:
$$ \hat R_k = \hat G_k - \frac{f_c}{\mu}\hat V_k \tag{20} $$
Finally, the least squares method is used to estimate the angle of the k-th target. Substituting $\hat G_k$ and $\hat V_k$ into $\hat\alpha(\theta)$, we get:
$$ \hat\alpha(\theta_k) = \frac{Q_2^{-1}(\hat G_k, \hat V_k)\, e_2}{e_2^H Q_2^{-1}(\hat G_k, \hat V_k)\, e_2} \tag{21} $$
For $\alpha(\theta) = \left[1,\; e^{j2\pi\frac{f_c}{c}d\sin\theta},\; \ldots,\; e^{j2\pi\frac{f_c}{c}d(h_1-1)\sin\theta}\right]^{\mathrm T}$, let $\hat q_k = \mathrm{phase}\left(\hat\alpha(\theta_k)\right)$; then $\hat q_k$ can be expressed as:
$$ \hat q_k = \left[0,\; \frac{f_c}{c}d,\; \ldots,\; \frac{f_c}{c}d(h_1-1)\right]^{\mathrm T} 2\pi\sin\theta_k = p\, b_k \tag{22} $$
where $p = \left[0,\; \frac{f_c}{c}d,\; \ldots,\; \frac{f_c}{c}d(h_1-1)\right]^{\mathrm T}$ and $b_k = 2\pi\sin\theta_k$. The least squares problem is:
$$ \min_{b_k}\; \left\| p\, b_k - \hat q_k \right\|_2^2 \tag{23} $$
The solution is $b_k = \left(p^{\mathrm T} p\right)^{-1} p^{\mathrm T} \hat q_k$, and the angle can be estimated as:
$$ \hat\theta_k = \arcsin\left(\frac{b_k}{2\pi}\right) \tag{24} $$
Therefore, only $1 + \hat K$ 1D searches are needed to estimate the range, velocity, and angle of the targets, which greatly reduces the computational complexity compared with the 3D search.
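The closed-form angle step of Equations (22)-(24) can be sketched as follows, assuming the estimated steering vector is normalized so that its first entry is 1; the phase unwrapping is an implementation detail assumed here.

```python
import numpy as np

def estimate_angle(alpha_hat, fc=77e9, d=None, c=3e8):
    """Least-squares angle from the estimated steering vector (radians)."""
    d = c / fc / 2 if d is None else d            # assume half-wavelength spacing
    q = np.unwrap(np.angle(alpha_hat))            # phase(alpha_hat), Eq. (22)
    p = (fc / c) * d * np.arange(len(alpha_hat))  # p vector of Eq. (22)
    b = (p @ q) / (p @ p)                         # b_k = (p^T p)^{-1} p^T q_k
    return np.arcsin(b / (2 * np.pi))             # Eq. (24)
```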
We summarize the steps for the proposed low complexity algorithm in Figure 6.

4. Experimental Results and Performance Analysis

This section presents the results of several experiments, mainly including three simulation experiments in Section 4.1 and four actual indoor and outdoor experiments in Section 4.2, and the performance analysis of the proposed algorithm in Section 4.3.

4.1. Simulation Experiment

We consider the following simulated FMCW radar parameters: carrier frequency fc = 77 GHz, sweep duration Tc = 40 μs, signal bandwidth Bw = 300 MHz, time sampling frequency fs = 7 MHz, number of time samples M = Tcfs = 280, number of chirps L = 12, number of array antennas N = 6, and antenna spacing d = λ/2. The truncated length in the temporal domain for block construction, illustrated in Figure 3, is selected as 10, namely, m2 − m1 + 1 = m4 − m3 + 1 = 10. The size of the spatial smoothing window is set to h1 = 4, h2 = 250, h3 = 8, with p1 = 2, p2 = 30, p3 = 4.

4.1.1. Detection Simulation

The first simulation experiment is conducted to verify the effectiveness of the proposed algorithm in a scenario with six targets: Target 1 [30 m, −3 m/s, −20°], Target 2 [50 m, 4 m/s, 35°], Target 3 [50.1 m, 6 m/s, 20°], Target 4 [70 m, 5 m/s, 40°], Target 5 [100 m, 7 m/s, −30°], and Target 6 [100.5 m, −4 m/s, 30°]; the proposed algorithm is used for super-resolution estimation of the targets. Note that Target 2 and Target 3 are very close to each other, separated by only 0.1 m in the range domain, and Target 5 and Target 6 are spaced 0.5 m apart. The estimation process is shown in Figure 7 and the final estimated results are listed in Table 1.
According to the proposed algorithm, the range, angle, and velocity can be estimated sequentially for each target following the flowchart presented in Figure 6. As shown in Figure 7, four estimated range blocks, B1 to B4 (shown in Figure 7(a1–a4)), are selected for all targets in the first step. Block B2 contains the range information of Target 2 and Target 3, and block B4 contains the range information of Target 5 and Target 6. The cascaded estimation processes are then performed for blocks B1 to B4, as shown in Figure 7b–e, respectively. To be more specific, Figure 7(b1) presents the estimated G and Figure 7(b2) the estimated V for Target 1. Figure 7c shows the estimation process of G and V for Target 2 and Target 3: Figure 7(c1) shows the estimated G of the two targets, and Figure 7(c2,c3) show the corresponding estimated V for each of them. Finally, through the 1D searches over G and V, the peak values are used to calculate G and the velocity for each target by Equations (17) and (19), respectively, and the estimated G and velocity are then used to calculate the corresponding range and angle by Equations (20) and (24).
This simulation experiment demonstrates the cascaded estimation process of the proposed algorithm and verifies its effectiveness in a simulated environment. The results in Table 1 show that the proposed algorithm accurately achieves the joint 3D estimation of the range, velocity, and angle of multiple targets.

4.1.2. Algorithm Accuracy

The second simulation experiment compares the accuracy of the proposed algorithm with the 3D FFT and 3D MUSIC algorithms, using the root mean squared error (RMSE) as the metric, defined as
$$ \mathrm{RMSE} = \frac{1}{K}\sum_{k=1}^{K}\sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(\hat\zeta_n - \zeta_k\right)^2} $$
where K is the number of targets, N is the number of experiments per target, $\hat\zeta_n$ is the parameter estimated in the n-th experiment, and $\zeta_k$ is the true parameter of the k-th target. In this experiment, the number of detections for each target is N = 100, and a total of K = 5 targets are tested. The results are shown in Figure 8.
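For reference, a minimal sketch of this RMSE metric; est is assumed to be a (K, N) array of the N estimates per target and truth the K true parameter values.

```python
import numpy as np

def rmse(est, truth):
    est, truth = np.asarray(est), np.asarray(truth)
    per_target = np.sqrt(np.mean((est - truth[:, None]) ** 2, axis=1))
    return per_target.mean()          # average the per-target RMSEs over K targets
```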
Through the second simulation experiment, it can be found that the accuracy of the proposed algorithm is much higher than that of the 3D FFT algorithm under the same experimental conditions, and slightly higher than that of the 3D MUSIC algorithm.

4.1.3. Complexity Analysis

Section 3 introduced how the proposed algorithm greatly reduces complexity. In this simulation experiment, we analyze its complexity and compare it with the 3D MUSIC and 2D MUSIC algorithms. The main contributors to the complexity are the FFT, the correlation matrix, the eigenvalue decomposition, and the peak search. The complexity of the proposed algorithm is O(h1h2h3 log2 h2 + (mh1h3)^3 + (h1mh3 − K)(nr h1h3 (h1h3 + m) + K nv h1 (h1 + mh3)) + Kh1). The complexity of 3D MUSIC is O((h1h2h3)^3 + (h1h2h3 − K) h1h2h3 nr nv na), and that of 2D MUSIC is O((h1h2)^3 + (h1h2 − K) h1h2 nr nv), where K is the number of targets, m is the frequency-domain extraction length, and nr, nv, and na are the numbers of search steps in range, velocity, and angle, respectively. The results are shown in Figure 9 and indicate that the complexity of the proposed algorithm is much lower than that of the 3D MUSIC algorithm, and even lower than that of the 2D MUSIC algorithm.

4.2. Actual Experiment

The actual experiments were carried out in both indoor and outdoor environments to demonstrate the feasibility of the proposed algorithm. The indoor environment is a small microwave anechoic chamber of size (L × W × H) 2.4 m × 2.4 m × 2 m, as shown in Figure 10, and the outdoor environment is on a road. The FMCW radar sensor system is the Texas Instruments (TI) AWR1443 radar device.
Consider the following FMCW radar parameters: carrier frequency fc = 77 GHz, sweep duration Tc = 58 μs, signal bandwidth Bw = 1.16 GHz, time sampling frequency fs = 5 MHz, number of time samples M = Tcfs = 290, number of chirps L = 12, number of array antennas N = 4, and antenna spacing d = λ/2. The length of the frequency-domain extraction is 10. The size of the spatial smoothing window is set to h1 = 3, h2 = 260, h3 = 8, with p1 = 1, p2 = 30, p3 = 4.

4.2.1. Corner Reflector Detection

The first and second actual experiments detect corner reflectors (regularly shaped targets) indoors in the experimental scenarios shown in Figure 11: two stationary targets in Experiment 1, and one stationary plus one moving target in Experiment 2. Specific information for each target is listed in Table 2.
After applying the proposed algorithm, the estimation process of G and V is as shown in Figure 12. As in the simulation experiments, the range and angle of each target are obtained from Equations (20) and (24) once G and V are available; the final estimated results are listed in Table 3.
To demonstrate the complexity reduction of the proposed algorithm in practice, the computation times on real radar data were measured for 3D-MUSIC, 3D-FFT, and the proposed algorithm. All implementations were run on a dual Intel(R) Xeon(R) Gold 5218 CPU 2.3 GHz server with 128 GB of memory running Ubuntu Linux 20.04, and the comparison results are listed in Table 4. The 3D-FFT completes in a fraction of a second but with very low resolution; the 3D-MUSIC algorithm needs several days, while the proposed algorithm executes in just a few seconds.
The SNR of Experiments 1 and 2 is estimated at about 15 dB, and the calculated RMSEs are shown in Table 5. Corner reflectors were selected as the experimental targets, and the calculated RMSEs of range and velocity are slightly larger than the corresponding simulated results in Figure 8a,b. The angle of Target 1 in Experiment 1 is estimated as −2.8°, while the center of Target 1 (a corner reflector) was set at an angle of −4°, so the RMSE of the angle is around 1.49°. However, the precision of the angle estimates is very high, with a spread of only about 0.028°, similar to the simulated RMSE in Figure 8c.

4.2.2. Irregular-Shape Target Detection and 2D Imaging

In comparison with corner reflector detection, it is also necessary to test the proposed algorithm on a target with an irregular shape. A small metal knife of irregular shape is set up in the same plane as the radar system, with the angle of its center very small, as shown in Figure 13a. The target is detected over 1500 trials, and the estimated R̂ and θ̂ are used to calculate the positions x = R̂ sinθ̂ and y = R̂ cosθ̂ in the coordinate system shown in Figure 13b. Because the proposed algorithm achieves super-resolution estimation of the range, velocity, and azimuth angle of the target, a shape profile of the metal knife along the x-axis, namely the projection of the target in the azimuth cross-range, is estimated successfully, as shown in Figure 14a. By contrast, 3D FFT has worse resolution, especially in the angle domain, and thus only a single point-like target is estimated, as shown in Figure 14b.
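A small sketch of this imaging step, assuming the angle estimates are given in degrees; each trial contributes one scatter point of the profile.

```python
import numpy as np

def to_xy(R_hat, theta_hat_deg):
    theta = np.radians(np.asarray(theta_hat_deg))
    R = np.asarray(R_hat)
    return R * np.sin(theta), R * np.cos(theta)   # x = R sin(theta), y = R cos(theta)
```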

4.2.3. Targets Detection in Outdoor Environment

The fourth actual experiment is an outdoor target detection experiment conducted in the scenario shown in Figure 15. The targets are a moving person and a stationary bicycle. The estimation process of G and V is shown in Figure 16, and the results are listed in Table 6.

4.3. Discussion

A number of experimental trials were conducted in this section to verify the effectiveness of the proposed algorithm. The results show that the proposed algorithm successfully achieves the joint 3D estimation of range, velocity, and angle for multiple targets with super resolution and low complexity compared with the conventional 3D FFT and 3D MUSIC algorithms. Moreover, the accuracy of the proposed algorithm is much higher than that of the 3D FFT algorithm under the same experimental conditions, and slightly higher than that of the 3D MUSIC algorithm because the block selection slightly improves the SNR. According to the analysis in Section 4.1.3, the complexity of the proposed algorithm is lower than that of the conventional 3D MUSIC algorithm, and even lower than that of the 2D MUSIC algorithm. However, like conventional 3D MUSIC, the proposed method still involves a variety of matrix operations, such as the EVD, and the two stages of 1D spectrum searching are unavoidable. Thus, it remains difficult to implement the algorithm on an FPGA or DSP for real-time applications.

5. Conclusions

This paper has presented a low-complexity 3D joint super-resolution estimation algorithm for an FMCW radar system, implemented with the Lagrange multiplier method and rank-reduction techniques. Various experiments, including simulations, corner reflector detection and irregular-shape target detection in an anechoic chamber, and person and bicycle detection in an outdoor environment, were conducted to validate the proposed algorithm. The experimental results verified its super-resolution and low-complexity performance. However, the algorithm still had to be run on a PC because of the matrix operations and search costs involved. A simpler version of the suggested algorithm and a novel hardware design would be needed for a real-time FMCW radar detection system.

Author Contributions

Y.L. and Q.L. contributed equally to this paper. Conceptualization, Y.L.; methodology, Y.L. and Q.L.; software, Z.W.; validation, Y.L., Q.L. and Z.Z.; formal analysis, Y.L.; investigation, Z.W.; resources, Z.Z.; data curation, Q.L.; writing—original draft preparation, Q.L.; writing—review and editing, Y.L.; visualization, Q.L.; supervision, Z.Z.; project administration, Y.L. and Z.Z.; funding acquisition, Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant numbers 62001143, 61871149, 62171150), the Natural Science Foundation of Shandong Province (grant numbers ZR2020QF006, ZR2020YQ46), the Major Scientific and Technological Innovation Project of Shandong Province (grant numbers 2020CXGC010705, 2021ZLGX05), and the Natural Science Foundation of Hunan Province (grant number 2022JJ40562). The APC was funded by grant 2021ZLGX05.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Acknowledgments

We thank Haozhen Bai for helping with the experiments and helpful discussions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Patole, S.M.; Torlak, M.; Wang, D.; Ali, M. Automotive radars: A review of signal processing techniques. IEEE Signal Process. Mag. 2017, 34, 22–35. [Google Scholar] [CrossRef]
  2. Lee, S.; Jung, Y.; Lee, M.; Lee, W. Compressive Sensing-Based SAR Image Reconstruction from Sparse Radar Sensor Data Acquisition in Automotive FMCW Radar System. Sensors 2021, 21, 7283. [Google Scholar] [CrossRef] [PubMed]
  3. Son, Y.-S.; Sung, H.-K.; Heo, S.W. Automotive Frequency Modulated Continuous Wave Radar Interference Reduction Using Per-Vehicle Chirp Sequences. Sensors 2018, 18, 2831. [Google Scholar] [CrossRef] [PubMed]
  4. Passafiume, M.; Rojhani, N.; Collodi, G.; Cidronali, A. Modeling Small UAV Micro-Doppler Signature Using Millimeter-Wave FMCW Radar. Electronics 2021, 10, 747. [Google Scholar] [CrossRef]
  5. Coluccia, A.; Parisi, G.; Fascista, A. Detection and Classification of Multirotor Drones in Radar Sensor Networks: A Review. Sensors 2020, 20, 4172. [Google Scholar] [CrossRef] [PubMed]
  6. Turppa, E.; Kortelainen, J.M.; Antropov, O.; Kiuru, T. Vital Sign Monitoring Using FMCW Radar in Various Sleeping Scenarios. Sensors 2020, 20, 6505. [Google Scholar] [CrossRef] [PubMed]
  7. Iyer, S.; Zhao, L.; Mohan, M.P.; Jimeno, J.; Siyal, M.Y.; Alphones, A.; Karim, M.F. mm-Wave Radar-Based Vital Signs Monitoring and Arrhythmia Detection Using Machine Learning. Sensors 2022, 22, 3106. [Google Scholar] [CrossRef] [PubMed]
  8. Antolinos, E.; García-Rial, F.; Hernández, C.; Montesano, D.; Llorente, J.I.G.; Grajal, J. Cardiopulmonary Activity Monitoring Using Millimeter Wave Radars. Remote Sens. 2020, 12, 2265. [Google Scholar] [CrossRef]
  9. Hyun, E.; Lee, J.H. A Method for Multi-target Range and Velocity Detection in Automotive FMCW Radar. In Proceedings of the 12th International IEEE Conference on Intelligent Transportation System, St. Louis, MO, USA, 4–7 October 2009; pp. 7–11. [Google Scholar]
  10. Sediono, W. Method of measuring Doppler shift of moving targets using FMCW maritime radar. In Proceedings of the 2013 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE), Bali, Indonesia, 26–29 August 2013; pp. 378–381. [Google Scholar] [CrossRef]
  11. Winkler, V. Range Doppler detection for automotive FMCW radar. In Proceedings of the 4th IEEE European Radar Conference, Munich, Germany, 10–12 October 2007; pp. 166–169. [Google Scholar]
  12. Manokhin, G.O.; Erdyneev, Z.T.; Geltser, A.A.; Monastyrev, E.A. MUSIC-based algorithm for range-azimuth FMCW radar data processing without estimating number of targets. In Proceedings of the 2015 IEEE 15th Mediterranean Microwave Symposium (MMS), Lecce, Italy, 30 November–2 December 2015; pp. 1–4. [Google Scholar] [CrossRef]
  13. Schmidt, R.O. Multiple emitter location and signal parameter estimation. In Proceedings of the RADC Spectral Estimation Workshop; Saxpy Computer Corporation: Pleasanton, CA, USA, 1979; pp. 243–258. [Google Scholar]
  14. Zhou, L.; Zhao, Y.-J.; Cui, H. High resolution wideband DOA estimation based on modified MUSIC algorithm. In Proceedings of the 2008 International Conference on Information and Automation, Changsha, China, 20–23 June 2008; pp. 20–22. [Google Scholar] [CrossRef]
  15. Iizuka, T.; Toriumi, Y.; Ishiyama, F.; Kato, J. Root-MUSIC Based Power Estimation Method with Super-Resolution FMCW Radar. In Proceedings of the 2020 IEEE/MTT-S International Microwave Symposium (IMS), Los Angeles, CA, USA, 4–6 August 2020; pp. 1027–1030. [Google Scholar] [CrossRef]
  16. Hwang, H.K.; Aliyazicioglu, Z.; Yakovlev, A. Direction of arrival estimation using a root-MUSIC algorithm. In Proceedings of the International Multi-Conference of Engineers and Computer Scientists, Hong Kong, China, 19–21 March 2008; Volume II. [Google Scholar]
  17. Lemma, A.; van der Veen, A.-J.; Deprettere, E. Analysis of joint angle-frequency estimation using ESPRIT. IEEE Trans. Signal Process. 2003, 51, 1264–1283. [Google Scholar] [CrossRef]
  18. Roy, R.; Kailath, T. ESPRIT-estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 984–995. [Google Scholar] [CrossRef]
  19. Gurcan, Y.; Yarovoy, A. Super-resolution algorithm for joint range-azimuth-Doppler estimation in automotive radars. In Proceedings of the 2017 European Radar Conference (EURAD), Nuremberg, Germany, 11–13 October 2017; pp. 73–76. [Google Scholar] [CrossRef]
  20. Porozantzidou, M.G.; Chryssomallis, M.T. Azimuth and elevation angles estimation using 2-D MUSIC algorithm with an L-shape antenna. In Proceedings of the 2010 IEEE Antennas and Propagation Society International Symposium, Toronto, ON, Canada, 11–17 July 2010; pp. 1–4. [Google Scholar] [CrossRef]
  21. Zheng, Z.; Yang, Y.; Wang, W.-Q.; Li, G.; Yang, J.; Ge, Y. 2-D DOA estimation of multiple signals based on sparse L-shaped array. In Proceedings of the 2016 International Symposium on Antennas and Propagation (ISAP), Okinawa, Japan, 24–28 October 2016; pp. 1014–1015. [Google Scholar]
  22. Belfiori, F.; van Rossum, W.; Hoogeboom, P. Application of 2D MUSIC algorithm to range-azimuth FMCW radar data. In Proceedings of the 2012 9th European Radar Conference, Amsterdam, The Netherlands, 31 October–2 November 2012; pp. 242–245. [Google Scholar]
  23. Seo, J.; Lee, J.; Park, J.; Kim, H.; You, S. Distributed Two-Dimensional MUSIC for Joint Range and Angle Estimation with Distributed FMCW MIMO Radars. Sensors 2021, 21, 7618. [Google Scholar] [CrossRef] [PubMed]
  24. Ivanov, S.; Kuptsov, V.; Badenko, V.; Fedotov, A. An Elaborated Signal Model for Simultaneous Range and Vector Velocity Estimation in FMCW Radar. Sensors 2020, 20, 5860. [Google Scholar] [CrossRef] [PubMed]
  25. Liu, W.; He, J.; Yu, W. A Computationally Efficient Scheme for FMCW Radar Detection and Parameter Estimation. In Proceedings of the 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Chongqing, China, 11–13 December 2019; pp. 1–5. [Google Scholar] [CrossRef]
  26. Zhang, X.; Xu, L.; Xu, L.; Xu, D. Direction of Departure (DOD) and Direction of Arrival (DOA) Estimation in MIMO Radar with Reduced-Dimension MUSIC. IEEE Commun. Lett. 2010, 14, 1161–1163. [Google Scholar] [CrossRef]
  27. Beizuo, Z.; Xiong, X.; Xiaofei, Z. DOA and Polarization Estimation with Reduced-Dimensional MUSIC Algorithm for L-shaped Electromagnetic Vector Sensor Array. In Proceedings of the 2019 IEEE 4th International Conference on Signal and Image Processing (ICSIP), Wuxi, China, 19–21 July 2019; pp. 61–64. [Google Scholar] [CrossRef]
  28. Fang, W.-H.; Fang, L.-D. Joint Angle and Range Estimation With Signal Clustering in FMCW Radar. IEEE Sens. J. 2019, 20, 1882–1892. [Google Scholar] [CrossRef]
  29. Zhao, E.; Zhang, F.; Zhang, D.; Pan, S. Three-dimensional Multiple Signal Classification (3D-MUSIC) for Super-resolution FMCW Radar Detection. In Proceedings of the 2019 IEEE MTT-S International Wireless Symposium (IWS), Guangzhou, China, 19–22 May 2019; pp. 1–3. [Google Scholar] [CrossRef]
  30. Wisudawan, H.N.; Ariananda, D.D.; Hidayat, R. 3-D MUSIC Spectrum Reconstruction for Joint Azimuth-Elevation-Frequency Band Estimation. In Proceedings of the 2020 54th Asilomar Conference on Signals, Systems, and Computers, Virtual, 1–4 November 2020; pp. 1250–1254. [Google Scholar] [CrossRef]
  31. Pillai, S.; Kwon, B. Forward/backward spatial smoothing techniques for coherent signal identification. IEEE Trans. Acoust. Speech Signal. Process. 1989, 37, 8–15. [Google Scholar] [CrossRef]
  32. Nam, H.; Li, Y.-C.; Choi, B.; Oh, D. 3D-Subspace-Based Auto-Paired Azimuth Angle, Elevation Angle, and Range Estimation for 24G FMCW Radar with an L-Shaped Array. Sensors 2018, 18, 1113. [Google Scholar] [CrossRef] [PubMed]
  33. Li, Y.-C.; Oh, D.; Kim, S.; Chong, J.-W. Dual Channel S-Band Frequency Modulated Continuous Wave Through-Wall Radar Imaging. Sensors 2018, 18, 311. [Google Scholar] [CrossRef] [PubMed]
  34. Schoor, M.; Yang, B. Subspace based DOA estimation in the presence of correlated signals and model errors. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009. [Google Scholar]
  35. Chen, Y.-M. On spatial smoothing for two-dimensional direction-of-arrival estimation of coherent signals. IEEE Trans. Signal Process. 1997, 45, 1689–1696. [Google Scholar] [CrossRef]
  36. Wang, H.; Liu, K.R. 2-D spatial smoothing for multipath coherent signal separation. IEEE Trans. Aerosp. Electron. Syst. 1998, 34, 391–405. [Google Scholar] [CrossRef]
Figure 1. Block diagram of FMCW radar.
Figure 2. The received radar data cube.
Figure 3. Frequency domain processing.
Figure 4. Frequency domain processing and spatial smoothing.
Figure 5. Understanding of the 3D steering vector C: (a) three kinds of phase shifts; (b) construction of the vector elements.
Figure 6. Flowchart of the proposed low-complexity algorithm.
Figure 7. Simulation results of the experiment for six targets at SNR = 10 dB. (a) The targets-located blocks selection for range estimation. (b) The G and V estimation process of B1. (c) The G and V estimation process of B2. (d) The G and V estimation process of B3. (e) The G and V estimation process of B4.
Figure 8. The results of the accuracy comparison: (a) RMSE of range; (b) RMSE of velocity; (c) RMSE of angle.
Figure 9. The comparison results of complexity: (a) complexity versus the number of time samples at a search precision of 0.01; (b) complexity versus the search precision with 750 time samples.
Figure 10. Indoor experimental environment: (a) the inside view of the microwave anechoic chamber; (b) illustration of the size of the chamber and the coordinate system.
Figure 11. Experimental scenarios: (a,c) the scene and model diagram of Experiment 1, with two stationary targets; (b,d) the scene and model diagram of Experiment 2, with one stationary and one moving target.
Figure 12. The estimation process of G and V: (a) targets-located blocks selection of Experiment 1; (b) the G and V results of Experiment 1; (c) targets-located blocks selection of Experiment 2; (d) the G and V results of Experiment 2.
Figure 13. The experimental scene diagram: (a) the experimental scene; (b) the model diagram of the experiment.
Figure 14. The result of the experiment: (a) imaging result of the proposed algorithm; (b) imaging result of the 3D FFT algorithm.
Figure 15. Outdoor experimental scenes: (a) the outdoor environment; (b) the model diagram of the environment.
Figure 16. The estimation process of G and V: (a) targets-located blocks selection; (b) the G and V results of the experiment.
Table 1. The estimated results of the first experiment.

| Target Number | R̂ | V̂ | θ̂ |
|---|---|---|---|
| 1 | 29.9976 m | −2.9965 m/s | −20.1844° |
| 2 | 50.0114 m | 4.087 m/s | 34.2881° |
| 3 | 50.0857 m | 5.8879 m/s | 20.7431° |
| 4 | 69.9981 m | 5.0175 m/s | 40.2631° |
| 5 | 100.0015 m | 6.9885 m/s | −29.9204° |
| 6 | 100.5007 m | −3.987 m/s | 29.9461° |
Table 2. Specific information of targets.

| | Target 1 | Target 2 |
|---|---|---|
| Experiment 1 | [2.312 m, 0 m/s, −4°] | [2.403 m, 0 m/s, 30°] |
| Experiment 2 | [2.295 m, 0 m/s, 0°] | [1.832 m, −0.5 m/s, 12°] |
Table 3. The estimated results of Experiment 1 and Experiment 2.

| | Target No. | R̂ | V̂ | θ̂ |
|---|---|---|---|---|
| Experiment 1 | 1 | 2.2987 m | 0 m/s | −2.8483° |
| | 2 | 2.4447 m | 0 m/s | 28.1479° |
| Experiment 2 | 1 | 2.2834 m | 0 m/s | −0.26748° |
| | 2 | 1.8321 m | −0.5 m/s | 11.7486° |
Table 4. The comparison results of computation times for real radar data.

| | 3D-FFT | 3D-MUSIC | The Proposed Algorithm |
|---|---|---|---|
| Experiment 1 | 0.1115 s | 3 days | 1.4936 s |
| Experiment 2 | 0.1123 s | 3 days | 1.5018 s |
Table 5. The calculated RMSE for actual experiments.

| | R | V | θ |
|---|---|---|---|
| Experiment 1 | 0.0273 m | 0 m/s | 1.4913° |
| Experiment 2 | 0.0721 m | 0.0630 m/s | 0.8776° |
Table 6. The estimated results of Experiment 4.

| | Target No. | R̂ | V̂ | θ̂ |
|---|---|---|---|---|
| Experiment 4 | 1 | 6.0998 m | 1.1 m/s | −16.3641° |
| | 2 | 5.2129 m | 0 m/s | −2.1378° |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
