Article

Long-Range Drone Detection of 24 G FMCW Radar with E-plane Sectoral Horn Array

Byunggil Choi, Daegun Oh, Sunwoo Kim, Jong-Wha Chong and Ying-Chun Li
1 Collaborative Robots Research Center, Daegu Gyeongbuk Institute of Science and Technology, Daegu 42988, Korea
2 Department of Electronic Engineering, Hanyang University, Seoul 04763, Korea
* Author to whom correspondence should be addressed.
Sensors 2018, 18(12), 4171; https://doi.org/10.3390/s18124171
Submission received: 28 September 2018 / Revised: 13 November 2018 / Accepted: 24 November 2018 / Published: 28 November 2018
(This article belongs to the Section Remote Sensors)

Abstract

In this work, a 24-GHz frequency-modulated continuous-wave (FMCW) radar system with two sectoral horn antennas and one transmitting lens antenna for long-range drone detection is presented. The present work demonstrates the detection of a quadcopter-type drone using the implemented radar system up to a distance of 1 km. Moreover, a 3D subspace-based algorithm is proposed for the joint range-azimuth-Doppler estimation of long-range drone detection. The effectiveness of the long-range drone detection is verified with the implemented radar system through a variety of experiments in outdoor environments. This is the first such demonstration for long-range drone detection with a 24-GHz FMCW radar.

1. Introduction

The use of civilian drones has stirred public concern in recent years due to the threat they pose to public safety and national security, and civilian drone surveillance has become a very important but largely unexplored topic [1,2,3]. Many efforts have been made toward drone detection based on various techniques, such as audio detection/classification of drones in [4,5,6] and the use of cameras to track the movements of drones in [7,8,9]. This work focuses on long-range drone detection using a radar system.
Generally, small drones have a few pairs of rotor blades, and the blades rotate when a drone is flying. The micro-scale movements of the blades produce additional Doppler shifts, referred to as micro-Doppler features [10]. Due to the rotation of the blades, the phenomenon of "blade flashes" can be captured by analyzing the micro-Doppler features [11]. A blade flash is induced by the strong reflection from the rotor blades when a pair of blades is momentarily aligned normal to the radar beam. Thus, the analysis of micro-Doppler features is commonly utilized for drone classification [12,13]. As presented in [14], micro-Doppler signatures, including blade flashes due to the propeller blades, were observed from a "DJI Phantom 3 Standard" drone using a 94-GHz FMCW radar at a range of 120 m. Instead of FMCW radar, an X-band CW radar at 9.7 GHz was used in [15] to collect data for micro-Doppler feature analysis of drones, with distances between the radar and the targets limited to 3 m to 150 m. In [16], a coherent pulsed radar, NetRAD, operating at 2.4 GHz was utilized for data collection to extract the micro-Doppler features of drones at ~60 m from the baseline. The work in [17] aimed to classify birds and drones using the BirdRad radar system operating at 3.25 GHz, with drones observed at ranges of 300 m to 400 m. In [18], a distributed FMCW radar system with one transmitter and one receiver was proposed to detect drones within a 500 m range, and only the range and Doppler of the drones were obtained through 2D FFT processing. The conventional radar systems in [14,15,16,17] achieved drone classification successfully, but their effective detection ranges are limited to at most a few hundred meters. Although the radar systems in [17,18] can detect drones at longer ranges, from 300 m to 500 m, only micro-Doppler or range and Doppler information is estimated, by micro-Doppler feature analysis or simple FFT processing. However, both the effective detection of drones at long range and the three-dimensional (3D) estimation of range/azimuth/Doppler for the detected drones are important.
In this work, a 24-GHz FMCW radar system is implemented with a transmitting lens horn antenna and a two-element receiving sectoral antenna array for the joint range-azimuth-Doppler 3D detection of drones over a long range of up to 1 km. The FMCW signals should be transmitted with high power and high gain. Thus, the power amplifier (PA) circuit is designed to achieve a 38-dBm output using four PA chips, each with a 34-dBm output P1dB. The 38-dBm PA output signals are emitted by a high-gain lens horn antenna of 25 dBi. Moreover, the receiving antenna array, with an element spacing of one wavelength at 24 GHz, is designed in the form of E-plane sectoral horn antennas [19] to achieve a high gain of 14 dBi. Based on the designed two-element sectoral horn antenna array, a 3D subspace-based algorithm is proposed for the joint range-azimuth-Doppler 3D estimation of long-range drones. In Section 2, we give the system model for the proposed algorithm and the implemented radar system. In Section 3, we present the conventional algorithms. In Section 4, we present the proposed algorithm. In Section 5, we describe the implementation of the designed radar system. In Section 6, the conducted experiments and their results are presented. We summarize our results and discuss future work in Section 7.
Several experiments were conducted in outdoor environments, and the good performance of the proposed 3D subspace-based algorithm and of the implemented 24-GHz FMCW radar system, with one transmitting lens antenna and two receiving sectoral horn antennas, was verified experimentally.

2. System Model

In our implemented system, the transmitted P FMCW chirp pulses can be defined by:
$$s_p(t) = \sum_{p=0}^{P-1} s(t - pT_{PRI}), \quad \text{where} \quad s(t) = \begin{cases} \exp\!\left[j2\pi\!\left(f_c t + \frac{\mu}{2}t^2\right)\right], & 0 \le t < T_{sym} \\ 0, & \text{elsewhere,} \end{cases} \tag{1}$$
where p = 0, 1, …, P − 1, TPRI is the pulse repetition interval (PRI), fc denotes the carrier frequency, μ is the rate of change of the instantaneous frequency of the chirp signal, and Tsym is the duration of the FMCW chirp pulse. The bandwidth B of the transmitted FMCW chirp pulses is then B = μTsym. For one observation of the targets, P pulses are collected for parameter estimation.
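To make the signal model concrete, the following minimal Python sketch generates the pulse train of Equation (1) at complex baseband. The chirp duration, PRI, and pulse count are illustrative assumptions; the band and sampling rate follow Sections 5.1 and 5.3.

```python
import numpy as np

# Sketch of the FMCW pulse train of Equation (1); T_sym, T_PRI, and P are
# assumed values, not the paper's exact configuration.
fc = 24.125e9          # carrier (Hz), mid-band of the 24.025-24.225 GHz sweep
B = 200e6              # sweep bandwidth (Hz), as stated in Section 5.1
T_sym = 1e-3           # chirp duration (s) -- assumed
mu = B / T_sym         # chirp rate (Hz/s), so that B = mu * T_sym
fs = 12.5e6            # ADC sampling rate from Section 5.3
t = np.arange(0, T_sym, 1 / fs)

# Baseband-equivalent single chirp; the fc term is handled by the RF front
# end, which avoids having to sample at tens of GHz.
s = np.exp(1j * 2 * np.pi * (0.5 * mu * t**2))

# P pulses, one per row, separated by T_PRI in slow time.
P, T_PRI = 64, 2e-3    # assumed pulse count and PRI
pulses = np.tile(s, (P, 1))
```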
As seen in Figure 1, we assume that the reflected signals from the K targets arrive at the receiving sectoral horn array with (ϕk, vk, τk), k = 0, 1, …, K − 1, where ϕk, vk, and τk are the azimuth angle, radial velocity, and time delay of the k-th target, respectively. We define xl,p(t) as the received signal of the p-th pulse on the l-th antenna element, l = 0, 1, …, L − 1. Thus, the signal representation of the two-element sectoral antenna array can be modeled as:
$$x_{l,p}(t) = \sum_{k=0}^{K-1} a_k \exp\!\left(j\frac{2\pi l d \sin\phi_k}{\lambda}\right) s_p(t - \tau_k) + w_{l,p}(t), \tag{2}$$
where ak is the complex response of the antenna to the k-th received signal, d is the spacing between the sensors, λ denotes the wavelength, c is the propagation speed of the wavefronts, and wl,p(t) is the additive white Gaussian noise (AWGN) of the p-th pulse at the l-th antenna.
The received FMCW signals can be transformed into a sinusoidal waveform by the de-chirp operation [20], which involves multiplying the received signal with a transmitted chirp replica (the reference signal). The resulting sinusoid is called the beat signal, as in [21], and after the de-chirp operation it can be represented by:
$$y_{l,p}(t) = x_{l,p}(t)\, s^*(t) = \sum_{k=0}^{K-1} a_k \exp\!\left(j\frac{2\pi l d \sin\phi_k}{\lambda}\right) \exp\!\left(j2\pi\!\left(\mu\tau_k t + f_c\tau_k - \frac{\mu}{2}\tau_k^2\right)\right) + \bar{w}_{l,p}(t), \tag{3}$$
where $\bar{w}_{l,p}(t)$ denotes the transformed AWGN signal. Assuming that the k-th target, at a direct-path distance Rk from the antenna, moves with a constant radial velocity vk over the P pulses, the direct-path distance between the antenna and the k-th object for the p-th pulse becomes Rp,k = Rk + vk pTPRI = Rk + vk t. Thus, the time delay τk of the k-th target for the p-th pulse is:
$$\tau_k = \frac{2R_{p,k}}{c} = \frac{2(R_k + v_k t)}{c}. \tag{4}$$
Substituting Equation (4) into Equation (3) and applying the approximation of [22], Equation (3) can be rewritten as:
$$y_{l,p}(t) = \sum_{k=0}^{K-1} a_k \exp\!\left(j\frac{2\pi l d \sin\phi_k}{\lambda}\right) \exp\!\left(j\left(2\pi f_c\frac{2(R_k + v_k t)}{c} + \frac{4\pi\mu R_k}{c}t + \gamma_k\right)\right), \tag{5}$$
where $\gamma_k = \frac{2Bv_k}{T_{sym}c}t^2$ denotes the range-Doppler coupling, which can be neglected as described in [15].
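The de-chirp operation and the resulting beat frequency can be checked numerically. The sketch below delays a baseband chirp by the two-way delay of Equation (4), with zero velocity for simplicity and assumed chirp parameters, and verifies that the beat tone lands at f_b = 2μR/c.

```python
import numpy as np

c, fc, B, T_sym = 3e8, 24.125e9, 200e6, 1e-3   # T_sym is an assumed value
mu, fs = B / T_sym, 12.5e6                     # chirp rate; ADC rate from Section 5.3
R = 500.0                                      # assumed target range (m)
tau = 2 * R / c                                # two-way delay, Equation (4) with v = 0
t = np.arange(0, T_sym, 1 / fs)

ref = np.exp(1j * 2 * np.pi * 0.5 * mu * t**2)                      # baseband Tx replica
rx = np.exp(1j * 2 * np.pi * (-fc * tau + 0.5 * mu * (t - tau)**2)) # delayed echo
beat = rx * np.conj(ref)                       # de-chirp: conjugate-multiply by replica

# The beat tone should sit at f_b = 2*mu*R/c (~667 kHz for these numbers).
f = np.fft.fftfreq(t.size, 1 / fs)
print(abs(f[np.argmax(np.abs(np.fft.fft(beat)))]), 2 * mu * R / c)
```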
After analog-to-digital conversion, the discrete-time model for yl,p(t) of Equation (3), with a sampling frequency fs = 1/Ts satisfying the Nyquist criterion, is yl,p[n] = yl,p(nTs) for p = 0, …, P − 1, l = 0, …, L − 1, and n = 0, …, N − 1, where N = Tsym/Ts. Thus, the signal received by the l-th antenna element can be represented by
$$\mathbf{Y}_l = \begin{bmatrix} y_{l,0}[0] & y_{l,0}[1] & \cdots & y_{l,0}[N-1] \\ y_{l,1}[0] & y_{l,1}[1] & \cdots & y_{l,1}[N-1] \\ \vdots & \vdots & \ddots & \vdots \\ y_{l,P-1}[0] & y_{l,P-1}[1] & \cdots & y_{l,P-1}[N-1] \end{bmatrix}. \tag{6}$$
Then, there are three kinds of phase shifts in the received signal model, as depicted in Figure 2.
The three kinds of phase shifts, the range-induced phase shift κk, the azimuth-induced phase shift ξk, and the Doppler-induced phase shift ρk, can be defined from the obtained beat signal of Equation (3) as:
$$\kappa_k = \exp\!\left(j\frac{4\pi\mu R_k}{c}T_s\right), \quad \xi_k = \exp\!\left(j\frac{2\pi d\sin\phi_k}{\lambda}\right), \quad \text{and} \quad \rho_k = \exp\!\left(j\frac{4\pi f_c T_{PRI}}{c}v_k\right). \tag{7}$$
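As a numeric illustration of Equation (7), the following snippet evaluates the three phase shifts for an assumed target; T_sym and T_PRI are placeholder values, and d = λ matches the implemented array.

```python
import numpy as np

# Numeric check of the three phase shifts of Equation (7); T_sym and T_PRI
# are assumed values, d = lambda as in the implemented array.
c, fc, B = 3e8, 24.125e9, 200e6
T_sym, T_PRI, fs = 1e-3, 2e-3, 12.5e6
mu, Ts = B / T_sym, 1 / fs
lam = c / fc
d = lam                                    # one-wavelength element spacing
R, phi, v = 500.0, np.deg2rad(10), 3.0     # assumed target parameters

kappa = np.exp(1j * 4 * np.pi * mu * R * Ts / c)        # range-induced, per sample
xi    = np.exp(1j * 2 * np.pi * d * np.sin(phi) / lam)  # azimuth-induced, per element
rho   = np.exp(1j * 4 * np.pi * fc * T_PRI * v / c)     # Doppler-induced, per pulse
print(np.angle(kappa), np.angle(xi), np.angle(rho))
```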

3. Conventional Algorithms

To make the proposed 3D estimation algorithm in Section 4 easier to understand, the conventional 1D [23] and 2D [24] super-resolution algorithms based on correlation matrices are introduced in this section.

3.1. Conventional 1D Auto-Correlation Matrix

Conventional super-resolution techniques, such as MUltiple SIgnal Classification (MUSIC) [25] and ESPRIT [26], are based on the eigen-decomposition of the auto-correlation matrix of the preceding signal model in Equation (3). For instance, in [23], the temporal auto-correlation matrix RT is utilized for range estimation, and it can be obtained from the sampled data of the p-th pulse received by the l-th antenna (here, p = 1 and l = 1) as
$$\mathbf{R}_T = \sum_{n=0}^{N-L_1} \mathbf{y}_n \mathbf{y}_n^H, \tag{8}$$
where
$$\mathbf{y}_n = \begin{bmatrix} y_{l,p}[n] & y_{l,p}[n+1] & \cdots & y_{l,p}[n+L_1-1] \end{bmatrix}^T. \tag{9}$$
Here, L1 denotes the selection parameter satisfying 2 ≤ L1 < N. The sampled data of the p-th pulse received by the l-th antenna comprise N elements, [yl,p[0], yl,p[1], …, yl,p[N − 1]], which are divided into N − L1 + 1 consecutive segments of length L1. The n-th segment includes the elements [yl,p[n], yl,p[n + 1], …, yl,p[n + L1 − 1]], as shown in Equation (9). To be specific, the first segment includes the first L1 elements [yl,p[0], yl,p[1], …, yl,p[L1 − 1]], the second segment includes the elements [yl,p[1], yl,p[2], …, yl,p[L1]], and so on; the last segment includes the elements [yl,p[N − L1], yl,p[N − L1 + 1], …, yl,p[N − 1]]. The range-induced phase shift can be obtained by the multiplication of adjacent elements of yn, for example, yl,p[0]y*l,p[1], where [•]* denotes complex conjugation. Therefore, the temporal auto-correlation matrix RT includes the range-induced phase shifts. The matrix RT can then be decomposed by eigenvalue decomposition (EVD), and the conventional 1D MUSIC algorithm is adopted for range estimation. The steering vector of the 1D MUSIC algorithm can be defined by:
$$\mathbf{s}_q = \begin{bmatrix} 1 & \exp\!\left(j\frac{2\pi}{Q}q\right) & \cdots & \exp\!\left(j\frac{2\pi}{Q}(L_1-1)q\right) \end{bmatrix}^T. \tag{10}$$
The pseudo-spectrum is obtained by the MUSIC algorithm. By the peak detection method, K peaks can be detected, and the indexes {q0, q1, …, qK−1} at which the K peaks are found are used for range estimation based on the relationship in κk of Equation (7), such that:
$$\hat{R}_k = \frac{c}{2\mu}\cdot\frac{q_k}{Q T_s}. \tag{11}$$
This method is also effective for 1D angle estimation or 1D Doppler estimation.
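A minimal Python sketch of this 1D procedure, written in the classical noise-subspace MUSIC formulation, is given below; the grid size Q and the window length L1 are user choices, and all parameter values are assumptions of the sketch.

```python
import numpy as np

def music_range(y, K, L1, Q, mu, Ts, c=3e8):
    """1D MUSIC range estimation per Section 3.1 (one pulse, one antenna)."""
    N = y.size
    # Temporal auto-correlation matrix of Equation (8): sum of outer
    # products of the N - L1 + 1 sliding segments of length L1.
    R = np.zeros((L1, L1), dtype=complex)
    for n in range(N - L1 + 1):
        seg = y[n:n + L1].reshape(-1, 1)
        R += seg @ seg.conj().T
    # EVD; eigenvalues come back ascending, so the first L1 - K
    # eigenvectors span the noise subspace.
    _, U = np.linalg.eigh(R)
    Un = U[:, :L1 - K]
    # Scan the steering vector of Equation (10) over q = 0 .. Q - 1.
    spec = np.empty(Q)
    for q in range(Q):
        s = np.exp(1j * 2 * np.pi * q / Q * np.arange(L1))
        spec[q] = 1.0 / np.abs(s.conj() @ Un @ Un.conj().T @ s)
    # K largest bins stand in for proper local-maxima peak detection.
    peaks = np.sort(np.argsort(spec)[-K:])
    return (c / (2 * mu)) * (peaks / (Q * Ts))   # Equation (11)
```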

3.2. Conventional 2D Auto-Correlation Matrix

The authors of [24] showed that the 1D auto-correlation matrix can be extended to a spatial-temporal auto-correlation matrix for the joint 2D estimation of range and azimuth angle. The spatial-temporal auto-correlation matrix is defined based on the sampled sequences of the p-th pulse by:
$$\mathbf{R}_{ST} = \sum_{l=0}^{L-L_2}\sum_{n=0}^{N-L_1} \mathbf{y}_{l,n}\mathbf{y}_{l,n}^H, \tag{12}$$
where:
$$\mathbf{y}_{l,n} = \begin{bmatrix} \mathbf{m}_{l,n} \\ \mathbf{m}_{l+1,n} \\ \vdots \\ \mathbf{m}_{l+L_2-1,n} \end{bmatrix}, \quad \text{where} \quad \mathbf{m}_{l,n} = \begin{bmatrix} y_{l,p}[n] \\ y_{l,p}[n+1] \\ \vdots \\ y_{l,p}[n+L_1-1] \end{bmatrix}. \tag{13}$$
In Equation (13), the vector yl,n includes two kinds of phase shift: the range-induced phase shift between the elements yl,p[n] and yl,p[n + 1], and the azimuth-induced phase shift between the elements yl,p[n] and yl+1,p[n]. Similar to the method in Section 3.1, the conjugate products of the corresponding adjacent elements can be utilized for the phase shift calculation. For example, the range-induced phase shift can be obtained by the multiplication yl,p[n]y*l,p[n + 1], and the azimuth-induced phase shift by the multiplication yl,p[n]y*l+1,p[n]. As shown in Equation (13), the vector yl,n consists of the stacked vectors ml,n, which makes it possible to estimate the range-induced and azimuth-induced phase shifts simultaneously. Further, a dual-shift-invariant structure exists in the spatial-temporal auto-correlation matrix RST, and we conceptually call RST a 2D matrix based on the parameter space (here, range and azimuth). The spatial-temporal auto-correlation matrix in Equation (12) can likewise be factorized by EVD, and the corresponding 2D steering vector is defined such that
$$\mathbf{s}_{2D} = \mathbf{s}_q \otimes \mathbf{s}_w = \begin{bmatrix} 1 & \exp\!\left(j\frac{2\pi}{Q}q\right) & \cdots & \exp\!\left(j\frac{2\pi}{Q}(L_1-1)q\right) \end{bmatrix}^T \otimes \begin{bmatrix} 1 & \exp\!\left(j\frac{2\pi}{W}w\right) & \cdots & \exp\!\left(j\frac{2\pi}{W}(L_2-1)w\right) \end{bmatrix}^T, \tag{14}$$
for q = 0, …, Q − 1 and w = −W/2, …, W/2 − 1, where ⊗ denotes the Kronecker product.
After the K peaks are detected from the obtained 2D pseudo-spectrum, the paired indexes $\{q_k, w_k\}_{k=0}^{K-1}$ are obtained. Finally, the paired range and azimuth angle estimates are given by
$$\hat{R}_k = \frac{c}{2\mu}\cdot\frac{q_k}{Q T_s}, \quad \hat{\theta}_k = \arcsin\!\left(\frac{\lambda w_k}{d W}\right). \tag{15}$$
Similarly, the conventional 2D auto-correlation matrix can also be utilized for range-Doppler estimation, azimuth-Doppler estimation, or azimuth-elevation angle estimation.
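The stacking of Equations (12) and (13) and the Kronecker steering of Equation (14) can be sketched as follows. The Kronecker ordering must mirror the stacking order, an implementation detail the equations leave implicit; Y is assumed to hold one pulse of data with one row per antenna element.

```python
import numpy as np

def spatial_temporal_corr(Y, L1, L2):
    """Spatial-temporal auto-correlation of Equations (12)-(13); Y is (L, N)."""
    L, N = Y.shape
    D = L1 * L2
    R = np.zeros((D, D), dtype=complex)
    for l in range(L - L2 + 1):
        for n in range(N - L1 + 1):
            # Stack L2 antenna segments of L1 consecutive samples each, so
            # the vector carries both range- and azimuth-induced shifts.
            y = np.concatenate([Y[l + i, n:n + L1] for i in range(L2)])
            y = y.reshape(-1, 1)
            R += y @ y.conj().T
    return R

def steer_2d(q, w, Q, W, L1, L2):
    """2D steering vector of Equation (14) via the Kronecker product."""
    sq = np.exp(1j * 2 * np.pi * q / Q * np.arange(L1))   # range component
    sw = np.exp(1j * 2 * np.pi * w / W * np.arange(L2))   # azimuth component
    # Antenna blocks are outermost in the stacking above, so the azimuth
    # component must be the left Kronecker factor here.
    return np.kron(sw, sq)
```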

4. Proposed Algorithm

Since the proposed method was developed for the joint estimation of range, azimuth angle, and velocity for FMCW radar, we propose a 3D spatial-temporal auto-correlation matrix that reduces the size of the processed matrix.
As shown in Figure 2, there are three kinds of phase shifts in the received signal model. We provide the simplified version in Figure 3 and further explain the 3D phase shifts here. Each kind of phase shift can be calculated by multiplying adjacent elements. For instance, the range-induced phase shift can be obtained from products such as y0,0[0]y*0,0[1], the azimuth-induced phase shift from y0,0[0]y*1,0[0], and the Doppler-induced phase shift from y0,0[0]y*0,1[0]. The proposed method is developed on the basis of these 3D phase shifts, by organizing the 3D steering vector and calculating the 3D pseudo-spectrum, as shown in this section.

4.1. Proposed 3D Subspace-Based Algorithm

To jointly estimate the 3D parameters of range, Doppler, and azimuth angle, the proposed 3D auto-correlation matrix R is defined as:
$$\mathbf{R} = \sum_{l=0}^{L-L_3}\sum_{p=0}^{P-L_2}\sum_{n=0}^{N-L_1} \mathbf{y}_{l,p,n}\mathbf{y}_{l,p,n}^H, \tag{16}$$
where
$$\mathbf{y}_{l,p,n} = \begin{bmatrix} \mathbf{M}_{l,p,n} \\ \mathbf{M}_{l+1,p,n} \\ \vdots \\ \mathbf{M}_{l+L_3-1,p,n} \end{bmatrix}, \quad \mathbf{M}_{l,p,n} = \begin{bmatrix} \mathbf{m}_{l,p,n} \\ \mathbf{m}_{l,p+1,n} \\ \vdots \\ \mathbf{m}_{l,p+L_2-1,n} \end{bmatrix}, \quad \mathbf{m}_{l,p,n} = \begin{bmatrix} y_{l,p}[n] \\ y_{l,p}[n+1] \\ \vdots \\ y_{l,p}[n+L_1-1] \end{bmatrix}. \tag{17}$$
Here, L1, L2, and L3 denote the selection parameters satisfying 2 ≤ L1 < N, 2 ≤ L2 < P, and 2 ≤ L3 ≤ L, respectively. In Equation (17), the vector yl,p,n includes three kinds of phase shift: the range-induced phase shift between the adjacent elements yl,p[n] and yl,p[n + 1], the azimuth-induced phase shift between the adjacent elements yl,p[n] and yl+1,p[n], and the Doppler-induced phase shift between the adjacent elements yl,p[n] and yl,p+1[n]. Similar to the method in Section 3.2, the range-induced phase shift can be obtained by the multiplication yl,p[n]y*l,p[n + 1], the azimuth-induced phase shift by the multiplication yl,p[n]y*l+1,p[n], and the Doppler-induced phase shift by the multiplication yl,p[n]y*l,p+1[n]. Therefore, the stacked spatial-temporal auto-correlation matrix R contains all three kinds of phase shift: range-induced, azimuth-induced, and Doppler-induced. The matrix R is conceptually called a 3D matrix due to the three estimated parameters. A 3D shift-invariant structure then exists in the proposed 3D auto-correlation matrix R, and the 3D auto-correlation matrix can be factorized using EVD by
$$\mathbf{R} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{U}^H, \quad \text{where} \quad \mathbf{U} = [\mathbf{U}_s \;\; \mathbf{U}_n], \quad \boldsymbol{\Sigma} = \begin{bmatrix} \boldsymbol{\Sigma}_s & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Sigma}_n \end{bmatrix}, \tag{18}$$
where Us denotes the signal subspace of R; Un denotes the noise subspace of R; Σs = diag(σ0, σ1, …, σK−1) denotes the diagonal matrix of the K eigenvalues corresponding to the K column vectors of Us; and Σn denotes the diagonal matrix of the remaining L1L2L3 − K eigenvalues, which are equivalent to the noise variance. After the EVD of R, the matrices U and Σ are given as the pairs [Us Un] and [Σs Σn], respectively. However, the EVD cannot separate the signal subspace and the noise subspace automatically. In this paper, the minimum description length (MDL) criterion [27] is used to separate the signal subspace and the noise subspace: the eigenvalues derived in Equation (18) are processed by the MDL criterion to obtain the estimated number of targets $\hat{K}$.
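A compact sketch of the MDL criterion of [27], as it is commonly written for eigenvalues sorted in descending order, is shown below; Ns denotes the number of averaged outer products used to build R, an input assumption of this sketch.

```python
import numpy as np

def mdl_order(eigvals, Ns):
    """MDL model-order estimate after Wax & Kailath [27].

    eigvals: positive eigenvalues of R, sorted descending.
    Ns: number of snapshots (outer products) averaged into R.
    """
    M = eigvals.size
    mdl = np.empty(M - 1)
    for k in range(M - 1):
        tail = eigvals[k:]
        geo = np.exp(np.mean(np.log(tail)))   # geometric mean of noise eigenvalues
        ari = np.mean(tail)                   # arithmetic mean of noise eigenvalues
        mdl[k] = -Ns * (M - k) * np.log(geo / ari) \
                 + 0.5 * k * (2 * M - k) * np.log(Ns)
    return int(np.argmin(mdl))                # estimated number of targets K_hat
```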
The 3D steering vector for calculating the 3D pseudo-spectrum can be defined by:
$$\begin{aligned} \mathbf{s}_q &= \begin{bmatrix} 1 & \exp\!\left(j\frac{2\pi}{Q}q\right) & \cdots & \exp\!\left(j\frac{2\pi}{Q}(L_1-1)q\right) \end{bmatrix}_{1\times L_1}, \\ \mathbf{s}_w &= \begin{bmatrix} 1 & \exp\!\left(j\frac{2\pi}{W}w\right) & \cdots & \exp\!\left(j\frac{2\pi}{W}(L_2-1)w\right) \end{bmatrix}_{1\times L_2}, \\ \mathbf{s}_g &= \begin{bmatrix} 1 & \exp\!\left(j\frac{2\pi}{G}g\right) & \cdots & \exp\!\left(j\frac{2\pi}{G}(L_3-1)g\right) \end{bmatrix}_{1\times L_3}, \end{aligned} \tag{19}$$
for q = 0, …, Q − 1, w = −W/2, …, W/2 − 1, and g = 0, …, G − 1, respectively. The 3D pseudo-spectrum is then calculated as
$$\mathrm{PseudoSpectrum}_{3D} = \frac{1}{\mathbf{s}_{q,w,g}^H \mathbf{U}_s \mathbf{U}_s^H \mathbf{s}_{q,w,g}}, \tag{20}$$
where $\mathbf{s}_{q,w,g} = \mathbf{s}_q \otimes \mathbf{s}_w \otimes \mathbf{s}_g$.
By the peak detection method, K peaks can be detected in the 3D pseudo-spectrum, yielding the three index sets $\{q_k\}_{k=0}^{K-1}$, $\{w_k\}_{k=0}^{K-1}$, and $\{g_k\}_{k=0}^{K-1}$ at which the K peaks are found, such that
$$\{q_k, w_k, g_k\}_{k=0}^{K-1} = \left\{ \max{}_k \left[ \mathrm{PseudoSpectrum}_{3D} \right] \right\}_{k=0}^{K-1}, \tag{21}$$
where maxk[•] denotes the k-th largest value; that is, the index triple {qk, wk, gk} corresponds to the k-th peak of the 3D pseudo-spectrum.
Since the three indexes of the 3D pseudo-spectrum are estimated, the ranges, azimuth angles, and velocities of the K targets can be obtained from the relationships in κk, ξk, and ρk of Equation (7), for k = 0, …, K − 1:
$$\hat{R}_k = \frac{c}{2\mu}\cdot\frac{q_k}{Q T_s}, \quad \hat{\theta}_k = \arcsin\!\left(\frac{\lambda w_k}{d W}\right), \quad \text{and} \quad \hat{v}_k = \frac{c\, g_k}{2 f_c T_{PRI} G}. \tag{22}$$
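The following sketch ties Equations (19) to (22) together. It uses the noise-subspace projector I − UsUs^H in the denominator (the classical MUSIC form; Equation (20) is written in terms of Us), and it pairs the L2 steering component with Doppler and the L3 component with azimuth so as to mirror the stacking of Equation (17). The grid sizes Q, G, and W and the simple largest-bin peak picking are assumptions of the sketch.

```python
import numpy as np

def pseudo_spectrum_3d(Us, L1, L2, L3, Q, G, W):
    """3D pseudo-spectrum over (range q, Doppler g, azimuth w) grids."""
    Pn = np.eye(Us.shape[0]) - Us @ Us.conj().T   # noise-subspace projector
    spec = np.empty((Q, G, W))
    for q in range(Q):
        s_rng = np.exp(1j * 2 * np.pi * q / Q * np.arange(L1))       # length L1
        for g in range(G):
            s_dop = np.exp(1j * 2 * np.pi * g / G * np.arange(L2))   # pulse axis
            for wi in range(W):
                w = wi - W // 2                                      # w = -W/2 ... W/2 - 1
                s_az = np.exp(1j * 2 * np.pi * w / W * np.arange(L3))  # antenna axis
                # Kronecker ordering mirrors Equation (17): antennas
                # outermost, then pulses, then fast-time samples.
                s = np.kron(np.kron(s_az, s_dop), s_rng)
                spec[q, g, wi] = 1.0 / np.abs(s.conj() @ Pn @ s)
    return spec

def estimate_targets(spec, K, mu, Ts, lam, d, fc, T_PRI, c=3e8):
    """Convert the K largest spectrum bins to (range, azimuth, velocity)."""
    Q, G, W = spec.shape
    out = []
    for idx in np.argsort(spec, axis=None)[-K:]:   # largest bins as peak stand-ins
        q, g, wi = np.unravel_index(idx, spec.shape)
        w = wi - W // 2
        out.append(((c / (2 * mu)) * q / (Q * Ts),             # range, Equation (22)
                    np.degrees(np.arcsin(lam * w / (d * W))),  # azimuth angle
                    c * g / (2 * fc * T_PRI * G)))             # radial velocity
    return out
```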

4.2. Comparison with Previous Works

This paper proposes the joint 3D estimation of the range, azimuth angle, and velocity of drones for FMCW radar. Many previous works exist on drone detection or classification using radar systems. The algorithm and radar system designed in this paper aim to detect drones at long ranges of up to 1 km, and a comparison between the previous works and this work is shown in Table 1. It is apparent that the implemented radar system can detect drones at a longer range than the previous works.

5. Implementation of 24-GHz FMCW Drone Detection Radar

In this section, the structure of the implemented 24-GHz FMCW long-range drone detection radar system is introduced, including the transmitter (Tx) and receiver (Rx) antennas, the 24-GHz transceiver and IF, the data-logging system, and the data-processing system. The data-logging platform logs and transmits the raw radar data to a PC, where the saved raw data are processed by an implementation of the proposed algorithm in MATLAB 2016b (The MathWorks, Inc., Natick, MA, USA).

5.1. Tx Lens Antenna and Rx Sectoral Horn Antennas

The Tx antenna in the implemented FMCW radar system employs a lens antenna, as shown in Figure 4, and Figure 5 shows the simulated Tx antenna radiation pattern. The designed lens antenna has a high gain of 25 dBi, side-lobe levels lower than 5 dB, dimensions of 20 cm × 20 cm × 5 cm (including the radome, as shown in Figure 4), and a 4° half-power beam width (HPBW). The Tx lens antenna can operate at any frequency in the 22 to 25 GHz range, and the implemented FMCW radar system operates over 24.025 to 24.225 GHz with a 200-MHz bandwidth.
Two designed E-plane sectoral horn antennas are utilized as the Rx antenna array in our implemented FMCW radar system, as shown in Figure 6; the two receiving antennas are spaced one wavelength apart horizontally, resulting in a field of view of ±30° in azimuth.
The antenna spacing d of the receiving antenna array determines the field of view (FOV) of the radar system: the FOV is ±90° when d = λ/2 and ±30° when d = λ. Generally, for a standard horn antenna, the gain can be improved by increasing the aperture dimensions, but the FOV of the radar system is then decreased. However, for long-range drone detection, a high gain of the receiving antenna is required. In our implemented system, the designed E-plane sectoral horn antenna is adopted as the receiving antenna: its aperture is enlarged in the E-plane to provide a higher gain, while the FOV of the radar system is unchanged. The E-plane sectoral horn antenna is designed to have a 14-dBi gain, with an E-plane beam width of 20° and an H-plane beam width of 50°, as shown in Figure 7.
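The spacing-to-FOV relation quoted above follows from requiring the azimuth phase 2πd sin(φ)/λ to stay unambiguous, i.e., φmax = arcsin(λ/(2d)); a two-line check:

```python
import numpy as np

# Half-angle FOV versus element spacing: the azimuth phase 2*pi*d*sin(phi)/lam
# stays unambiguous while |sin(phi)| < lam / (2 * d).
lam = 3e8 / 24.125e9
for d in (lam / 2, lam):
    phi_max = np.degrees(np.arcsin(min(1.0, lam / (2 * d))))
    print(f"d = {d / lam:.1f} lambda -> FOV = +/-{phi_max:.0f} deg")
# prints +/-90 deg for d = lambda/2 and +/-30 deg for d = lambda,
# matching the +/-30 deg FOV of the implemented one-wavelength array.
```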

5.2. 24-GHz Transceiver and IF

The 24-GHz FMCW RF system was implemented with the evaluation board EV-RADAR-MMIC2 from Analog Devices (Norwood, MA, USA), which enables the user to evaluate the performance of a radar chipset comprising a two-channel transmitter chip (ADF5901), a four-channel receiver chip (ADF5904), and a fractional-N synthesizer chip (ADF4159) for FMCW operation, all designed by Analog Devices. The evaluation board works with the baseband adapter board EV-ADAR-D2S, which physically attaches to the EV-RADAR-MMIC2 and converts the differential baseband signals of the ADF5904 to single-ended signals. Signals can propagate through circuits in differential or single-ended signaling mode [28,29]: differential signaling transfers a signal over two complementary signal paths, whereas single-ended signaling transmits it over one signal path referenced to ground. The evaluation board is controlled from a laptop over USB via a microcontroller interface board (the SDP adapter board), which also physically attaches to the main board. Evaluation software provided by the manufacturer offers a user interface from which all functions of the three chips can be controlled, such as transmit or receive channel enable, FMCW chirp duration, bandwidth, and so forth. A block diagram of the implemented FMCW radar system is shown in Figure 8, and the applied boards, EV-RADAR-MMIC2, ADAR-D2S, and the SDP adapter board (all designed by Analog Devices), are shown in Figure 9a,b.
Since the implemented FMCW radar system was developed for long-range drone detection of up to 1 km, the Tx signal must be transmitted with a high gain. As shown in Figure 8, the transmission signal generated by the EV-RADAR-MMIC2 evaluation board is first fed to a designed 24-GHz power amplifier (PA) with 32-dB gain. A photograph of the designed 24-GHz PA is shown in Figure 9c; the PA is mounted with a cooling fan, and the output/input interface employs an end-launch waveguide-to-coaxial adapter with a working frequency range of 18 GHz to 26.5 GHz. Figure 9d shows the structure of the designed 24-GHz PA; the circuit consists of four identical single PA chips, PA1-PA4. The single PA chip, the MAAP-011146-STD developed by MACOM, provides 24-dB gain and a 34-dBm output P1dB (the 1-dB compression point of the power amplifier). In our designed circuit, as shown in Figure 9d, four MAAP-011146-STD chips are connected in parallel to achieve the 32-dB gain and 38-dBm output P1dB.
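As a back-of-envelope check (our arithmetic, assuming ideal lossless power combining), four 34-dBm chips combined in parallel would give

$$P_{1\mathrm{dB}}^{\mathrm{ideal}} = 34\ \mathrm{dBm} + 10\log_{10} 4 \approx 34 + 6 = 40\ \mathrm{dBm},$$

so the reported 38-dBm output P1dB corresponds to roughly 2 dB of combining and interconnect loss, which is plausible for a four-way combiner at 24 GHz.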
For each receiving channel, the output of the adapter board EV-ADAR-D2S is first amplified by a voltage gain control amplifier (VGA), as shown in Figure 10c, with a gain of 21 dB; the gain of the designed VGA ranges from −11 dB to 32 dB.
Since the coupling between the Tx antenna and the Rx antenna array is significant, with a maximum value of −30 dB, the amplified beat signals from the first VGA are fed to a high-pass filter (HPF), as shown in Figure 10a, with a 3-dB cutoff frequency of 73.8 kHz. Figure 11 shows the performance of the HPF in terms of insertion loss and return loss. The curve for CH1 (S21) presents the insertion loss of the filter versus frequency: frequencies below 73.8 kHz are rejected by the HPF. The curve for CH2 (S22) presents the return loss versus frequency, which is very low above 73.8 kHz, at about −20 dB. Thus, the designed HPF passes frequencies above 73.8 kHz with very low return loss, and the strong signals caused by the coupling between the Tx antenna and the Rx antenna array are attenuated. Following the HPF, a second VGA is applied, and the amplified beat signals are then transferred to the data-logging platform. Figure 10b shows the coaxial cables used for the connections between the modules.
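To relate the cutoff to range: a target at range R produces a beat frequency fb = 2μR/c, so the 73.8-kHz cutoff maps to a minimum range that depends on the chirp rate. Assuming a chirp duration of Tsym = 1 ms (not stated in this section) with the 200-MHz sweep, μ = 2 × 10^11 Hz/s and

$$R_{\min} = \frac{c\,f_b}{2\mu} = \frac{(3\times 10^8)\,(73.8\times 10^3)}{2\,(2\times 10^{11})} \approx 55\ \mathrm{m},$$

i.e., under this assumption, returns and leakage from closer than roughly 55 m fall below the cutoff and are suppressed.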

5.3. Data-Logging Platform

The data-logging platform was implemented to transfer up to 8 channels of ADC input signals to the PC in real time; the 12-bit ADC samples the received beat signals at a sampling frequency of 12.5 MHz. The platform mainly consists of DSP and FPGA chips, namely a TI TMS320C6455 and a Stratix EP3C25F32. The beat signals sampled by the ADC are first stored in the FIFO of the FPGA and then transferred to DDR2 SDRAM through a direct memory access mechanism. The saved data can be handled by the DSP chip or transferred to the PC over 1-Gbit LAN through the RJ45 interface, as shown in Figure 12; the raw radar data saved on the PC are then processed by the proposed super-resolution algorithm.

6. Experiments

This section presents several experiments that were conducted to investigate the performance of the proposed 3D subspace-based algorithm and the developed 24-GHz drone-detection FMCW radar system. The received data of the radar system are sampled by the FPGA and DSP board and then processed by the proposed algorithm in MATLAB 2016b, with computation accelerated by the graphics processing unit (GPU) of a GeForce GTX 1060 (3 GB memory version) graphics card.
The experiments were carried out on the roof of building R3 of the Daegu Gyeongbuk Institute of Science and Technology (DGIST) in breezy and sunny weather, and the experiment scene is shown in Figure 13. The Tx antenna and the Rx antennas were positioned on a designed rigid system case, and the 24-GHz transceiver and IF were placed inside it. The whole radar system was arranged on a steady fixture at a height of 1.5 m, and the fronts of all the Tx/Rx antennas were aligned to the horizontal level to ensure that the boresight of the antennas pointed upward. Thus, the Cartesian coordinate system modeled in Figure 1 was established, with the y-axis along the boresight of the Rx antenna array (facing the sky) and the x-axis along the Rx antenna array.
Four flight paths of the drones are depicted in Figure 14, all lying in the x-y plane. Path 1 was a horizontal line at a height of around 1000 m, with the azimuth angle kept inside the FOV (±30°) of the radar system. Path 2 was a vertical line along the y-axis, Path 3 was along the line x = 50 (a path parallel to the y-axis), and Path 4 was along the line x = −50. The range in the direction of the y-axis for Paths 2, 3, and 4 was limited to 200 m to 1000 m.
We chose two quadcopter drones (QDs) manufactured by DJI Corporation; photographs of QD1 and QD2 are shown in Figure 15. Three sets of experiments were designed to investigate the proposed algorithm and the implemented FMCW radar system: (1) Experiment 1: QD1 flew along Path 1; (2) Experiment 2: QD2 flew along Path 2; (3) Experiment 3: QD1 and QD2 flew at the same time along Path 3 and Path 4, respectively.

6.1. Stationary Clutter Mitigation

In a realistic environment, there is much undesired stationary clutter, such as stationary targets around the radar system, multiple reflections from walls, and strong Tx/Rx coupling signals, which render drone detection difficult. A subspace projection approach has been proposed for wall clutter mitigation in through-wall radar imaging (TWRI) systems [30]. A TWRI system deals with the imaging and detection of targets behind walls and in enclosed structures. Generally, wall reflections are often stronger than target reflections, and they tend to persist over a long duration of time. The work in [30] proposed a subspace-based wall clutter mitigation approach to suppress the spatial zero-frequency and low-frequency components that correspond to wall reflections. In our implemented radar system, the reflections induced by stationary clutter are similar to the wall reflections in a TWRI system. Therefore, we adopted the wall clutter mitigation technique of [30] to mitigate the reflections induced by stationary clutter. For instance, in the detection results without the stationary clutter mitigation approach, the real drone target is masked by the strong undesired signals, as shown in Figure 16a. Since there are many stationary clutter signals, the separation of the signal subspace and the noise subspace in Equation (18) fails, and the drone-induced signal is assigned to the noise subspace by the MDL method. Therefore, preprocessing for stationary clutter mitigation is applied before performing the EVD of the 3D auto-correlation matrix R in Equation (18). Following the subspace projection approach in [30], a stationary clutter subspace projection operator Pc is utilized to mitigate the signals induced by stationary clutter:
$$\hat{\mathbf{R}} = \mathbf{P}_c \tilde{\mathbf{R}}, \tag{23}$$
where $\tilde{\mathbf{R}} = \mathbf{R} - \mathbf{m}\mathbf{e}^T$, $\mathbf{m}$ is the mean of the columns of $\mathbf{R}$, and $\mathbf{e}^T = [1, \ldots, 1]$. More details on the subspace projection approach can be found in Section IV of [30]. With the stationary-clutter-induced signals mitigated, the drone-induced signals can be separated into the signal subspace. The obtained matrix $\hat{\mathbf{R}}$ is then processed by EVD:
$$\hat{\mathbf{R}} = \hat{\mathbf{U}}\hat{\boldsymbol{\Sigma}}\hat{\mathbf{U}}^H, \quad \text{where} \quad \hat{\mathbf{U}} = [\hat{\mathbf{U}}_s \;\; \hat{\mathbf{U}}_n], \quad \hat{\boldsymbol{\Sigma}} = \begin{bmatrix} \hat{\boldsymbol{\Sigma}}_s & \mathbf{0} \\ \mathbf{0} & \hat{\boldsymbol{\Sigma}}_n \end{bmatrix}. \tag{24}$$
The obtained $\hat{\mathbf{U}}_s$ is utilized for the spectrum calculation instead of $\mathbf{U}_s$ in Equation (20). By mitigating the stationary-clutter-induced signals with the subspace projection approach, a drone can be successfully detected, as shown in Figure 16 and in the results of Figure 17, Figure 18 and Figure 19 in the next subsection.
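A minimal sketch of this preprocessing is given below, assuming the clutter subspace is taken as the n_c dominant eigenvectors of the mean-removed matrix; the cited work derives the projector more carefully, so this is an illustration of the idea rather than a faithful reimplementation.

```python
import numpy as np

def mitigate_clutter(R, n_c=1):
    """Stationary-clutter mitigation in the spirit of Equation (23)."""
    m = R.mean(axis=1, keepdims=True)
    R_tilde = R - m                      # R_tilde = R - m e^T
    # Dominant eigenvectors of the mean-removed data are taken as the
    # stationary-clutter subspace (an assumption of this sketch).
    _, U = np.linalg.eigh(R_tilde @ R_tilde.conj().T)
    Uc = U[:, -n_c:]                     # eigh sorts ascending; take the largest
    Pc = np.eye(R.shape[0]) - Uc @ Uc.conj().T
    return Pc @ R_tilde                  # R_hat = Pc R_tilde, then EVD as in (24)
```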

6.2. Experiments Results

Generally, it is hard to represent a 3D pseudo-spectrum in figures. Thus, the obtained 3D pseudo-spectrum is represented as two spectra: the range-azimuth spectrum and the range-Doppler spectrum. As explained in Section 4, the k-th peak of the 3D pseudo-spectrum has three indexes {qk, wk, gk}, and the estimates of the ranges, azimuth angles, and velocities of the K targets are calculated through the relationships in Equation (22). Once the peaks are detected, the estimates of range, azimuth, and velocity can be calculated.
Figure 17 shows one frame of the detection results of Experiment 1 for QD1 on Path 1: the range-Doppler map in Figure 17a and the range-azimuth map in Figure 17b. It can be seen that QD1 was flying at a range of around 966 m, at an azimuth angle of 13.5° and a radial speed of about −0.7 m/s. The calculated absolute speed was around 3 m/s, which approximately equals the value displayed on the controller. One frame of the detection results of Experiment 2 for QD2 on Path 2 is shown in Figure 18. The estimated range was around 1005 m, at an azimuth angle of 4.5° and a radial speed of about 3.4 m/s. In Experiment 2, according to many detection results, drone detection became difficult for the implemented FMCW radar system when the distance of the drone from the radar system exceeded about 1005 to 1010 m. For Experiment 3, the two drones were flying at the same time. One frame of the detection results in Figure 19 shows that QD1 was moving away from the radar system at a range of around 339 m, at an azimuth angle of 8.8° and a radial speed of about 3.1 m/s, while QD2 was moving toward the radar system at a range of around 247 m, at an azimuth angle of −11.5° and a radial speed of about 3 m/s.
By analyzing the raw data of all frames for one experiment set, the flight path can be estimated, and the estimated flight paths of all experiment sets are presented in Figure 20. When the drones flew at locations with a large azimuth angle, the detection results were influenced by the narrow beam width of the Tx antenna, as shown in Figure 20a. Since the flight of the drones was affected by the wind, it can be seen that the drones could not fly stably above a 400 m altitude, but the estimated results are acceptable given the difficulty of long-range drone detection.

7. Conclusions

A prototype of a 24-GHz frequency-modulated continuous-wave (FMCW) radar system with two sectoral horn antennas and one transmitting lens antenna for long-range drone detection was presented, and a 3D subspace-based algorithm was proposed for the joint range-azimuth-Doppler estimation of drones at long range. In a realistic outdoor environment, three sets of experiments were conducted to detect two quadcopter drones, and the subspace projection approach was utilized to mitigate the stationary clutter. The experimental results showed that the feasible distance of drone detection with the implemented 24-GHz FMCW radar system is up to 1 km. Additionally, the effectiveness and performance of the proposed 3D subspace-based algorithm were verified. The proposed algorithm constructs the 3D auto-correlation matrix for lower computational complexity and avoids having to pair the estimated range, azimuth, and Doppler parameters. Since the proposed method still comprises a variety of matrix operations and a 3D spectrum search, it was difficult to implement the algorithm in hardware. Instead, the processing of the raw radar data with the proposed algorithm was accomplished in MATLAB 2016b on a high-performance PC.
The proposed algorithm is realized in MATLAB and applied to the raw radar data of the experiments; future work is to implement the proposed algorithm on an FPGA and DSP for real-time processing. Moreover, the estimation of radar targets is organized under the assumption that the radar system remains stationary. It will be necessary to develop a new signal model and algorithm for the situation where the radar system is placed on a moving platform.

Author Contributions

All authors conceived and designed the system and experiments together; B.C., Y.-C.L. and D.O. performed the experiments and analyzed the data. D.O. was the lead developer of the hardware used and contributed to the experimental work. S.K. and J.-W.C. contributed analysis tools and the evaluation of the system.

Funding

This work was supported by the DGIST R&D Program of the Ministry of Science and ICT (18-ST-01).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ding, G.; Wu, Q.; Zhang, L.; Lin, Y.; Tsiftsis, T.A.; Yao, Y. An Amateur Drone Surveillance System Based on the Cognitive Internet of Things. IEEE Commun. Mag. 2018, 56, 29–35.
2. Trundle, S.S.; Slavin, A.J. Drone Detection Systems. U.S. Patent 2017/0092.138A1, 30 March 2017.
3. Moses, A.; Rutherford, M.J.; Valavanis, K.P. Radar-based detection and identification for miniature air vehicles. In Proceedings of the 2011 IEEE International Conference on Control Applications (CCA), Denver, CO, USA, 28–30 September 2011; pp. 933–940.
4. Mezei, J.; Fiaska, V.; Molnár, A. Drone sound detection. In Proceedings of the 2015 16th IEEE International Symposium on Computational Intelligence and Informatics (CINTI), Budapest, Hungary, 19–21 November 2015; pp. 333–338.
5. Mezei, J.; Molnár, A. Drone sound detection by correlation. In Proceedings of the 2016 IEEE 11th International Symposium on Applied Computational Intelligence and Informatics (SACI), Timisoara, Romania, 12–14 May 2016; pp. 509–518.
6. Kim, J.; Park, C.; Ahn, J.; Ko, Y.; Park, J.; Gallagher, J.C. Real-time UAV sound detection and analysis system. In Proceedings of the 2017 IEEE Sensors Applications Symposium (SAS), Glassboro, NJ, USA, 13–15 March 2017; pp. 1–5.
7. Busset, J.; Perrodin, F.; Wellig, P.; Ott, B.; Heutschi, K.; Rühl, T.; Nussbaumer, T. Detection and tracking of drones using advanced acoustic cameras. Proc. SPIE 2015, 9647, 96470F.
8. Liu, H.; Wei, Z.; Chen, Y.; Pan, J.; Lin, L.; Ren, Y. Drone Detection Based on an Audio-Assisted Camera Array. In Proceedings of the 2017 IEEE Third International Conference on Multimedia Big Data (BigMM), Laguna Hills, CA, USA, 19–21 April 2017; pp. 402–406.
9. Park, J.; Kim, D.H.; Shin, Y.S.; Lee, S. A comparison of convolutional object detectors for real-time drone tracking using a PTZ camera. In Proceedings of the 2017 17th International Conference on Control, Automation and Systems (ICCAS), Jeju, Korea, 18–21 October 2017; pp. 696–699.
10. Oh, B.; Guo, X.; Wan, F.; Toh, K.; Lin, Z. An EMD-based micro-Doppler signature analysis for mini-UAV blade flash reconstruction. In Proceedings of the 2017 22nd International Conference on Digital Signal Processing (DSP), London, UK, 23–25 August 2017; pp. 1–5.
11. Chen, V.C.; Li, F.; Ho, S.-S.; Wechsler, H. Micro-Doppler effect in radar: Phenomenon, model, and simulation study. IEEE Trans. Aerosp. Electron. Syst. 2006, 42, 2–21.
12. Kim, Y.; Nazaroff, M.; Oh, D. Extraction of drone micro-Doppler characteristics using high resolution time-frequency transforms. Microw. Opt. Technol. Lett. 2018, 60, 2949–2954.
13. Park, J.; Rios, J.; Moon, T.; Kim, Y. Micro-Doppler based classification of human activities on water via transfer learning of convolutional neural networks. Sensors 2016, 16, 1990.
14. Rahman, S.; Robertson, D.A. Millimeter-wave micro-Doppler measurements of small UAVs. In Proceedings of the SPIE Radar Sensor Technology XXI, Anaheim, CA, USA, 17 April 2017; Ranney, K.I., Doerry, A., Eds.; SPIE: Bellingham, WA, USA, 2017; Volume 10188, p. 101880T.
15. Oh, B.; Guo, X.; Wan, F.; Toh, K.; Lin, Z. Micro-Doppler Mini-UAV Classification Using Empirical-Mode Decomposition Features. IEEE Geosci. Remote Sens. Lett. 2018, 15, 227–231.
16. Fioranelli, F.; Ritchie, M.; Griffiths, H.; Borrion, H. Classification of loaded/unloaded micro-drones using multistatic radar. Electron. Lett. 2015, 51, 1813–1815.
17. Torvik, B.; Olsen, K.E.; Griffiths, H. Classification of Birds and UAVs Based on Radar Polarimetry. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1305–1309.
18. Shin, D.; Jung, D.; Kim, D.; Ham, J.; Park, S. A Distributed FMCW Radar System Based on Fiber-Optic Links for Small Drone Detection. IEEE Trans. Instrum. Meas. 2017, 66, 340–347.
19. Rudge, A.W.; Milne, K.; Olver, A.D.; Knight, P. Simple horn antenna. In The Handbook of Antenna Design; The Institution of Engineering and Technology: Stevenage, UK, 1982; Volume 1, Chapter 7.7.
20. Pollock, B.; Goodman, N.A. Structured de-chirp for compressive sampling of LFM waveforms. In Proceedings of the 2012 IEEE 7th Sensor Array and Multichannel Signal Processing Workshop (SAM), Hoboken, NJ, USA, 17–20 June 2012; pp. 37–40.
21. Oh, D.; Ju, Y.; Nam, H.; Lee, J. Dual smoothing DOA estimation of two-channel FMCW radar. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 904–917.
22. Winkler, V. Range Doppler detection for automotive FMCW radars. In Proceedings of the 2007 European Radar Conference, Munich, Germany, 10–12 October 2007; pp. 166–169.
23. Li, X.; Pahlavan, K. Super-resolution TOA estimation with diversity for indoor geolocation. IEEE Trans. Wirel. Commun. 2004, 3, 224–234.
24. Oh, D.; Lee, J. Low-Complexity Range-Azimuth FMCW Radar Sensor Using Joint Angle and Delay Estimation Without SVD and EVD. IEEE Sens. J. 2015, 15, 4799–4811.
25. Schmidt, R.O. Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag. 1986, 34, 276–280.
26. Roy, R.; Kailath, T. ESPRIT-Estimation of Signal Parameters via Rotational Invariance Techniques. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 984–995.
27. Wax, M.; Kailath, T. Detection of signals by information theoretic criteria. IEEE Trans. Acoust. Speech Signal Process. 1985, 33, 387–392.
28. Brooks, D. Differential Signals: Rules to Live By. Available online: https://www.ieee.li/pdf/essay/differential_signals.pdf (accessed on 9 November 2018).
29. Pinkle, C. The Basics: Single-Ended and Differential Signaling. Available online: https://www.allaboutcircuits.com/technical-articles/the-why-and-how-of-differential-signaling (accessed on 9 November 2018).
30. Tivive, F.H.C.; Bouzerdoum, A.; Amin, M.G. A Subspace Projection Approach for Wall Clutter Mitigation in Through-the-Wall Radar Imaging. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2108–2122.
Figure 1. Radar system scenario with the two-element sectoral horn antenna array.
Figure 2. Illustration of the 3D phase shifts.
Figure 3. Simplified version of Figure 2.
Figure 4. Illustration of Tx lens antenna, (a) front view; (b) side view; (c) back view.
Figure 5. Simulated Tx Lens Horn Antenna Radiation Pattern.
Figure 6. Illustration of the Rx antennas, (a) the implemented E-plane sectoral horn antenna; (b) two-element Rx antenna array with the distance d between the centers of the individual antennas.
Figure 7. Measured Rx E-plane sectoral horn antenna radiation pattern (normalized).
Figure 8. Block diagram of the implemented 24-GHz transceiver and Intermediate Frequency (IF) module.
Figure 9. (a) Photograph of the adopted Analog Devices evaluation board; (b) layout of the Analog Devices evaluation board; (c) photograph of the designed 24-GHz power amplifier (PA); (d) layout of the designed 24-GHz power amplifier (PA).
Figure 10. Submodules of IF, (a) high-pass filter; (b) coaxial cables; (c) voltage gain control amplifier (VGA).
Figure 11. Performance of the high-pass filter.
Figure 12. Photograph of the developed data-logging platform.
Figure 13. (a) Illustration of the experimental set up; (b) photograph of the experiments.
Figure 14. Illustration of the designed flight paths for the drones, (a) horizontal flight in Path 1 for Experiment 1, and vertical flight in Path 2 for Experiment 2; (b) two vertical paths for Experiment 3.
Figure 15. Photographs of drones used in the experiments, (a) quadcopter drone 1 (QD1); (b) quadcopter drone 2 (QD2).
Figure 16. Illustration of the detection results, (a) without the clutter mitigation approach; and (b) with the clutter mitigation approach.
Figure 17. Illustration of one frame of detection results of Experiment 1, (a) range-Doppler map; (b) range-angle map.
Figure 18. Illustration of one frame of detection results of Experiment 2, (a) range-Doppler map; (b) range-angle map.
Figure 19. Illustration of one frame of detection results of Experiment 3, (a) range-Doppler map; (b) range-angle map.
Figure 20. Estimated flight paths, (a) estimated path 1; (b) estimated path 2; (c) estimated path 3; (d) estimated path 4.
Table 1. Comparison of drone classification/detection with the previous works.

| Ref. | Function of System | Radar System | Parameters (Dimensions) | Detection Range |
|---|---|---|---|---|
| [14] | Drone classification | 94 GHz FMCW | Range (1D) | 120 m |
| [15] | Drone classification | 9.7 GHz CW | Range (1D) | 3~150 m |
| [16] | Drone classification | 2.4 GHz pulsed radar | Range (1D) | ~60 m |
| [17] | Drone classification | BirdRad radar | Range (1D) | 300~400 m |
| [18] | Drone detection | 24 GHz FMCW | Range/Doppler (2D) | 500 m |
| This work | Drone detection | 24 GHz FMCW | Range/Azimuth/Doppler (3D) | Up to 1 km |
