Article

Designing Waveform Sets with Good Correlation and Stopband Properties for MIMO Radar via the Gradient-Based Method

College of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073,
*
Author to whom correspondence should be addressed.
Sensors 2017, 17(5), 999; https://doi.org/10.3390/s17050999
Submission received: 16 March 2017 / Revised: 24 April 2017 / Accepted: 27 April 2017 / Published: 1 May 2017
(This article belongs to the Section Remote Sensors)

Abstract

Waveform sets with good correlation and/or stopband properties have received extensive attention and are widely used in multiple-input multiple-output (MIMO) radar. In this paper, we aim at designing unimodular waveform sets with good correlation and stopband properties. To formulate the problem, we construct two criteria to measure the correlation and stopband properties and then establish an unconstrained problem in the frequency domain. After deducing the phase gradient and the step size, an efficient gradient-based algorithm with guaranteed monotonicity is proposed to minimize the objective function directly. For the design problem that does not consider correlation weights, we develop a simplified algorithm, which requires only a few fast Fourier transform (FFT) operations and is even more efficient. Because both algorithms can be implemented via FFT operations and the Hadamard product, they are computationally efficient and can be used to design waveform sets with a large waveform number and waveform length. Numerical experiments show that the proposed algorithms outperform state-of-the-art algorithms in terms of computational complexity.

1. Introduction

Waveform design has received considerable attention in recent years [1] and has been employed in many applications, including polarimetric radar [2], multiple-input multiple-output (MIMO) radar [3,4], stealth communications [5] and code-division multiple-access (CDMA) systems. As a branch of general waveform design research [1], the design of waveform sets (i.e., multidimensional waveforms) is an important research topic in MIMO radar. Generally, waveform sets are desired to have a good correlation property, which can effectively improve radar resolution, detection performance, imaging quality, the ability to obtain information and the accuracy of MIMO channel estimation [6,7,8]. In recent years, many scholars have devoted themselves to designing waveform sets with a good correlation property. The main research covers two aspects: one is waveform sets with good auto- and cross-correlation properties [9,10,11,12,13,14,15,16], and the other is complementary sets of sequences (CSS) [17,18,19,20,21,22,23,24,25].
Waveform sets with good auto- and cross-correlation properties, also known as orthogonal waveform sets (OWS), have low autocorrelation sidelobes and low cross-correlation levels. In the early stage, simulated annealing- [9] and cross entropy-based [10] methods were proposed for OWS design. However, due to their high computational complexity, these methods are not suited to designing long waveforms. To improve the computational efficiency, the CAN (cyclic algorithm-new) algorithm [11] based on the fast Fourier transform (FFT) was proposed to minimize the autocorrelation sidelobes and the cross-correlation. This algorithm is computationally efficient and can be used for the design of long waveforms. As it is impossible to design completely orthogonal waveform sets (i.e., with both the autocorrelation sidelobes and the cross-correlation equal to zero) [11,16], [11] proposes to design waveform sets that are orthogonal only over specified intervals and extends the classical WeCAN (weighted cyclic algorithm-new) [26] algorithm to MIMO radar. To solve the same problem, [12] develops the LBFGS (limited-memory Broyden, Fletcher, Goldfarb and Shanno) iterative algorithm, which is more efficient than the WeCAN algorithm. However, because of the complicated line search rule for determining the step size, the LBFGS iterative algorithm is still time consuming. Recently, the majorization-minimization (MM)-based algorithms (i.e., MM-Corr (MM-correlation) and MM-WeCorr (MM-weighted correlation)) were proposed in [14]. These two algorithms are also based on FFT operations and are much faster than the CAN and WeCAN algorithms [11].
Another kind of waveform set with a good correlation property is the complementary set of sequences (CSS). A waveform set is called a CSS if and only if the autocorrelation sum of its waveforms is a delta function [17]. CSS design was proposed to overcome the difficulty of generating a single unimodular waveform with an ideal (impulse-like) autocorrelation. A common application of CSS is pulse compression [18,19,20]. In pulse compression radar, the complementary sequences are used to modulate consecutive pulses in a coherent pulse train. Then, the autocorrelation sidelobes can be reduced via coherent [19] or noncoherent [18] accumulation, which can be regarded as the process of obtaining the autocorrelation sum of the complementary sequences. Moreover, due to the good correlation property, CSS has been widely applied to CDMA systems [21], ISI (intersymbol interference) channel estimation [22], orthogonal frequency division multiplexing (OFDM) [23], and many other areas. The main methods of designing CSS are analytical construction methods, which have great limitations in generating long waveforms. To overcome this problem, [24] introduces a computational framework based on an iterative twisted approximation (ITROX) for the design of periodic complementary sets of sequences. Subsequently, [25] extends the CAN algorithm [26] and develops a fast algorithm named CANARY (CAN complementary). Additionally, [14] applies the MM method to the design of CSS.
In addition to the good correlation property, waveform sets are expected to have a good stopband property when radar systems work in a crowded electromagnetic environment. Waveforms with the stopband property, also known as sparse frequency waveforms (SFW) in the literature, are waveforms with several frequency stopbands. The applications of SFW include ultra-wide bandwidth (UWB) systems [27], high frequency surface wave radar (HFSWR) [28,29] and cognitive radar [30]. Designing waveforms with the stopband property can effectively overcome narrowband interference in congested frequency bands. At present, there are many research works on the design of a single waveform with the stopband property [31,32,33,34,35,36], but they cannot be used in MIMO systems. Therefore, [36] proposes an iterative algorithm combined with the steepest descent (SD) method for MIMO waveform design. By searching along the gradient direction, the convergence speed of this algorithm is improved. However, the computation of the step size along the gradient direction is complicated, which makes the algorithm still costly. In order to improve the computational efficiency, [37] proposes an algorithm named MDISAA-SFW (multi-dimensional iterative spectral approximation algorithm-SFW) based on alternating projection and phase retrieval.
In this paper, we consider the problem of designing unimodular waveform sets with good correlation and stopband properties and propose a gradient-based algorithm, i.e., Gra-WeCorr-SFW (gradient-weighted correlation-SFW). By using the relationship between the correlation function and the power spectrum density (PSD), the design problem is formulated as an unconstrained minimization problem in the frequency domain. Then, the phase gradient is deduced, and its matrix form is given. In order to avoid searching the step size, we use the Taylor series expansion to derive the step size, which is more efficient than the traditional searching methods. Since both the gradient and the step size can be implemented via the FFT operations and the Hadamard product, the proposed algorithm has high computational efficiency. We also deduce the simplified algorithm named Gra-Corr-SFW (gradient-correlation-SFW), which is faster than the Gra-WeCorr-SFW, for the design problem without considering the correlation weights.
The rest of the paper is organized as follows. In Section 2, the design problem is formulated. In Section 3, we develop a gradient-based algorithm by deducing the phase gradient and the step size and then summarize the algorithm. In Section 4, the simplified algorithm for the design problem without considering the correlation weights is derived. Section 5 provides several numerical experiments to verify the effectiveness of the proposed algorithms. Finally, Section 6 gives the conclusions.
Notation: Boldface upper case letters denote matrices, while boldface lower case letters denote column vectors. $(\cdot)^*$, $(\cdot)^T$ and $(\cdot)^H$ denote the complex conjugate, transpose and conjugate transpose, respectively. $\|\cdot\|$ and $\|\cdot\|_F$ denote the Euclidean norm and the Frobenius norm. $\mathrm{Re}(\cdot)$ and $\mathrm{Im}(\cdot)$ denote the real and imaginary part, respectively. $\mathrm{Diag}(\mathbf{x})$ denotes a diagonal matrix formed with the column vector $\mathbf{x}$. $\circ$ denotes the Hadamard product. $x(m)$ denotes the $m$-th element of the vector $\mathbf{x}$. $\mathbf{x}^l$ is the $l$-th iteration of $\mathbf{x}$. $\mathbf{1}_N$ is the all-one vector of length $N$. $\mathbf{I}_N$ denotes the $N \times N$ identity matrix. $\mathcal{F}(\mathbf{x})$ and $\mathcal{F}^{-1}(\mathbf{x})$ denote the $2N$-point FFT and IFFT (inverse FFT) operations on $\mathbf{x}$, respectively. $\mathcal{F}_c(\mathbf{X})$ and $\mathcal{F}_c^{-1}(\mathbf{X})$ represent the FFT and IFFT of each column of the matrix $\mathbf{X}$, respectively. In the (I)FFT operations, if the length of $\mathbf{x}$ is less than $2N$, $\mathbf{x}$ is padded with trailing zeros to length $2N$. $e^{(\cdot)}$ is the exponential applied elementwise to a scalar, vector or matrix.

2. Problem Formulation

As mentioned in the Introduction, this paper focuses on the problem of designing waveform sets with good correlation and stopband properties. Therefore, we first establish two criteria in the frequency domain to measure the correlation and stopband properties and then formulate the waveform set design problem.

2.1. The Criterion for Good Correlation Property

Let $\{\mathbf{x}_m\}_{m=1}^{M}$ be the complex waveform set to be designed. The vector form of each waveform can be expressed as:
$$\mathbf{x}_m = \left[ x_m(1), x_m(2), \ldots, x_m(N) \right]^T,\quad m = 1, \ldots, M,$$
where N is the waveform length and M denotes the number of waveforms. Then, the correlation function of $\mathbf{x}_i$ and $\mathbf{x}_j$ is defined as:
$$r_{ij}(k) = \sum_{n=k+1}^{N} x_i(n) x_j^*(n-k) = r_{ji}^*(-k),\quad i, j = 1, \ldots, M,\ k = 1-N, \ldots, N-1.$$
Here, we consider the design problems of complementary sets of sequences and waveform sets with both good auto- and cross-correlation properties. For the complementary set of sequences, the common criterion is the complementary integrated sidelobe level (CISL) metric [14,25], which is defined as:
$$\mathrm{CISL} = \sum_{k=1-N,\, k \neq 0}^{N-1} \left| \sum_{m=1}^{M} r_{mm}(k) \right|^2.$$
For the waveform sets with both good auto- and cross-correlation properties, we consider the following more general weighted measure [14]:
$$\psi = \sum_{i=1}^{M} \sum_{j=1}^{M} \sum_{k=1-N}^{N-1} w_t(k) \left| r_{ij}(k) \right|^2 - w_t(0) M N^2,$$
where $w_t(k) = w_t(-k) \geq 0,\ k = 0, \ldots, N-1$, denote the weights assigned to different time lags. In the specified intervals that are expected to have sidelobes as low as possible, $w_t(k) = 1$; otherwise, $w_t(k) = 0$.
Let:
$$\mathbf{r}_{ij} = \left[ r_{ij}(0), r_{ij}(1), \ldots, r_{ij}(N-1), 0, r_{ij}(1-N), \ldots, r_{ij}(-1) \right]^T,\quad \mathbf{w}_t = \left[ w_t(0), w_t(1), \ldots, w_t(N-1), 1, w_t(1-N), \ldots, w_t(-1) \right]^T$$
be the correlation vector and the corresponding correlation weight vector, respectively. The Fourier transform of the correlation function r i j (i.e., power spectrum density (PSD)) can be written as:
$$\mathbf{p}_{ij} = \mathbf{F} \mathbf{r}_{ij},\quad \mathbf{r}_{ij} = \mathbf{F}^{-1} \mathbf{p}_{ij} = \frac{1}{2N} \mathbf{F}^H \mathbf{p}_{ij},$$
where F is the 2 N × 2 N discrete Fourier transform (DFT) matrix with the following expression:
$$\mathbf{F}(m, n) = e^{-j \frac{2\pi m n}{2N}},\quad 1 \leq m, n \leq 2N.$$
According to (5) and (6), (4) can be expressed in the frequency domain as:
$$\psi = \sum_{i=1}^{M} \sum_{j=1}^{M} \mathbf{r}_{ij}^H \mathbf{D}_t \mathbf{r}_{ij} - w_t(0) M N^2 = \frac{1}{4N^2} \sum_{i=1}^{M} \sum_{j=1}^{M} \mathbf{p}_{ij}^H \mathbf{F} \mathbf{D}_t \mathbf{F}^H \mathbf{p}_{ij} - w_t(0) M N^2,$$
where $\mathbf{D}_t = \mathrm{Diag}(\mathbf{w}_t)$. It is worth noting that the value of the $(N+1)$-th element of $\mathbf{w}_t$ has no effect on the objective function. In order to facilitate the derivation below, the $(N+1)$-th element of $\mathbf{w}_t$ is set to one.
Actually, when $w_t(k) = 1,\ k = 1-N, \ldots, N-1$ (i.e., the correlation weights are not taken into account), Criterion (8) is equivalent to Criterion (3) (see Appendix A), which means that Criterion (3) is a special case of Criterion (8). Thus, here, we just consider Criterion (8). By ignoring the constant term in (8), the criterion related to the correlation property is given by:
$$J_{CF} = \frac{1}{4N^2} \sum_{i=1}^{M} \sum_{j=1}^{M} \mathbf{p}_{ij}^H \mathbf{F} \mathbf{D}_t \mathbf{F}^H \mathbf{p}_{ij}.$$
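As a numerical illustration of (5)-(9), the following NumPy sketch (with hypothetical sizes $N = 64$, $M = 3$ and an arbitrary lag mask) verifies that $\mathcal{F}^{-1}(\mathbf{f}_i \circ \mathbf{f}_j^*)$ reproduces the aperiodic correlations and then evaluates the weighted criterion (4); it is only a sanity check, not part of the proposed algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 3                                          # hypothetical sizes
X = np.exp(1j * 2 * np.pi * rng.random((N, M)))       # unimodular waveform set, columns x_1..x_M

F = np.fft.fft(X, n=2 * N, axis=0)                    # 2N-point spectra f_1..f_M (zero-padded FFT)

def corr_vec(i, j):
    """r_ij stacked as [r(0),...,r(N-1), 0, r(1-N),...,r(-1)] = IFFT(f_i o f_j*)."""
    return np.fft.ifft(F[:, i] * np.conj(F[:, j]))

# Check one lag against the direct definition r_ij(k) = sum_n x_i(n) x_j*(n-k)
i, j, k = 0, 1, 5
direct = np.sum(X[k:, i] * np.conj(X[:N - k, j]))
assert np.allclose(corr_vec(i, j)[k], direct)

# Weighted correlation measure (4): psi = sum_ij sum_k w_t(k)|r_ij(k)|^2 - w_t(0) M N^2
w_t = np.zeros(2 * N)
w_t[:31] = 1.0                                        # suppress lags 0..30 (so w_t(0) = 1)
w_t[-30:] = 1.0                                       # and lags -30..-1 (w_t symmetric)
psi = sum(np.sum(w_t * np.abs(corr_vec(p, q)) ** 2)
          for p in range(M) for q in range(M)) - M * N ** 2
print(psi)
```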

2.2. The Criterion for the Good Stopband Property

The waveform set with a good stopband property means that the PSD of each waveform has several frequency stopbands. Without loss of generality, we consider that the frequency is normalized. Define the set of frequency stopbands as:
$$\Omega_f = \bigcup_{k=1}^{n_s} (f_{k1}, f_{k2}) \subseteq [0, 1],$$
where ( f k 1 , f k 2 ) denotes one stopband and n s denotes the number of the stopbands. Considering the 2 N -point FFT operations, the corresponding point set of the stopband set Ω f can be expressed as:
$$\bar{\Omega}_f = \bigcup_{k=1}^{n_s} (2N f_{k1}, 2N f_{k2}) \subseteq [0, 2N].$$
Define the frequency weight vector as:
$$\mathbf{w}_f = \left[ w_f(1), w_f(2), \ldots, w_f(2N) \right]^T,\quad w_f(p) = \begin{cases} 1, & p \in \bar{\Omega}_f \\ 0, & \text{otherwise}. \end{cases}$$
As the PSD of each waveform is nonnegative, the criterion related to the stopband property (or PSD) can be formulated as:
$$J_{PSD} = \sum_{i=1}^{M} \left\| \mathbf{w}_f \circ \mathbf{p}_{ii} \right\|^2 = \sum_{i=1}^{M} \mathbf{p}_{ii}^H \mathbf{D}_f \mathbf{p}_{ii},$$
where $\mathbf{D}_f = \mathrm{Diag}(\mathbf{w}_f)$.
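For illustration, a small NumPy sketch of (12) and (13) is given below; the stopband edges are placeholders chosen only to show the construction of $\mathbf{w}_f$ and the evaluation of $J_{PSD}$.

```python
import numpy as np

def stopband_weights(stopbands, N):
    """Frequency weight vector w_f of length 2N (Eq. 12): 1 on stopband bins, 0 elsewhere."""
    w_f = np.zeros(2 * N)
    grid = np.arange(2 * N) / (2 * N)                 # normalized frequency of each FFT bin
    for f1, f2 in stopbands:
        w_f[(grid > f1) & (grid < f2)] = 1.0
    return w_f

def j_psd(X, w_f):
    """Stopband criterion J_PSD = sum_i p_ii^H D_f p_ii (Eq. 13), p_ii being 2N-point PSDs."""
    P = np.abs(np.fft.fft(X, n=2 * X.shape[0], axis=0)) ** 2
    return float(np.sum(w_f[:, None] * P ** 2))

rng = np.random.default_rng(1)
N, M = 64, 3
X = np.exp(1j * 2 * np.pi * rng.random((N, M)))
w_f = stopband_weights([(0.20, 0.25), (0.55, 0.60)], N)   # hypothetical stopbands
print(j_psd(X, w_f))
```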

2.3. The Minimization Problem

In order to obtain good correlation and stopband properties, both $J_{CF}$ and $J_{PSD}$ should be minimized. Thus, the waveform design problem can be regarded as a multi-objective optimization (also known as Pareto optimization) problem, i.e., finding the Pareto optimal solutions that satisfy the constraints. Here, we apply the traditional weighting method to solve the problem and, thus, formulate a single-objective function as follows:
$$J_T = \lambda J_{PSD} + (1-\lambda) J_{CF} = \sum_{i=1}^{M} \mathbf{p}_{ii}^H \mathbf{Q} \mathbf{p}_{ii} + \sum_{i=1}^{M} \sum_{j=1}^{M} \mathbf{p}_{ij}^H \bar{\mathbf{Q}} \mathbf{p}_{ij},$$
where $\lambda \in [0, 1]$ is a weight coefficient by which we can balance the relative weight between $J_{CF}$ and $J_{PSD}$, and:
$$\mathbf{Q} = \lambda \mathbf{D}_f,\quad \bar{\mathbf{Q}} = \frac{1}{4N^2} (1-\lambda) \mathbf{F} \mathbf{D}_t \mathbf{F}^H.$$
When $\lambda = 0$, (14) is the criterion for designing waveform sets with a good correlation property. When $\lambda \in (0, 1]$, (14) becomes the criterion for SFW design. Generally, to maximize the transmitter efficiency and reduce the hardware requirements, the unimodular constraint [38] is imposed in waveform design. Therefore, the design problem can be formulated as the following minimization problem:
$$\min_{\{\mathbf{x}_m\}_{m=1}^{M}} J_T \quad \text{s.t.}\ \left| x_m(n) \right| = 1,\ n = 1, \ldots, N,\ m = 1, \ldots, M.$$
Let $\boldsymbol{\phi}_m$ denote the phase vector of $\mathbf{x}_m$, i.e., $\mathbf{x}_m = \left[ e^{j\phi_m(1)}, e^{j\phi_m(2)}, \ldots, e^{j\phi_m(N)} \right]^T$. Then, Problem (16) can be reformulated as the following unconstrained problem with regard to $\{\boldsymbol{\phi}_m\}_{m=1}^{M}$:
$$\min_{\{\boldsymbol{\phi}_m\}_{m=1}^{M}} J_T = \sum_{i=1}^{M} \mathbf{p}_{ii}^H \mathbf{Q} \mathbf{p}_{ii} + \sum_{i=1}^{M} \sum_{j=1}^{M} \mathbf{p}_{ij}^H \bar{\mathbf{Q}} \mathbf{p}_{ij}.$$

3. Problem Optimization via the Gradient Method

In this section, we optimize Problem (17) by using the gradient-based algorithm, which is able to guarantee that the objective function is monotonically decreasing at each iteration. In the following, we first derive the phase gradient and the step size and then briefly summarize the algorithm.

3.1. Phase Gradient

To facilitate the derivation, let $\phi_m(n) = \phi_{mn}$. Before deriving the phase gradient $\nabla_{\boldsymbol{\phi}_m} J_T$, we first deduce the derivative of $J_T$ with respect to the phase $\phi_{mn}$:
$$\frac{\partial J_T}{\partial \phi_{mn}} = \partial \left( \sum_{i=1}^{M} \mathbf{p}_{ii}^H \mathbf{Q} \mathbf{p}_{ii} + \sum_{i=1}^{M} \sum_{j=1}^{M} \mathbf{p}_{ij}^H \bar{\mathbf{Q}} \mathbf{p}_{ij} \right) \Big/ \partial \phi_{mn} = \partial \left( \mathbf{p}_{mm}^H (\mathbf{Q} + \bar{\mathbf{Q}}) \mathbf{p}_{mm} \right) \Big/ \partial \phi_{mn} + \partial \left( \sum_{j=1, j \neq m}^{M} \mathbf{p}_{mj}^H \bar{\mathbf{Q}} \mathbf{p}_{mj} + \sum_{i=1, i \neq m}^{M} \mathbf{p}_{im}^H \bar{\mathbf{Q}} \mathbf{p}_{im} \right) \Big/ \partial \phi_{mn}.$$
Since $\sum_{i=1, i \neq m}^{M} \mathbf{p}_{im}^H \bar{\mathbf{Q}} \mathbf{p}_{im} = \sum_{i=1, i \neq m}^{M} \mathbf{p}_{mi}^H \bar{\mathbf{Q}}^* \mathbf{p}_{mi}$ (see Appendix B), (18) can be rewritten as:
$$\frac{\partial J_T}{\partial \phi_{mn}} = \partial \left( \mathbf{p}_{mm}^H (\mathbf{Q} + \bar{\mathbf{Q}}) \mathbf{p}_{mm} \right) \Big/ \partial \phi_{mn} + 2 \sum_{i=1, i \neq m}^{M} \partial \left( \mathbf{p}_{mi}^H \mathrm{Re}(\bar{\mathbf{Q}}) \mathbf{p}_{mi} \right) \Big/ \partial \phi_{mn}.$$
The first term in (19) can be simplified as:
$$\frac{\partial \mathbf{p}_{mm}^H (\mathbf{Q} + \bar{\mathbf{Q}}) \mathbf{p}_{mm}}{\partial \phi_{mn}} = \frac{\partial \mathbf{p}_{mm}^T (\mathbf{Q} + \bar{\mathbf{Q}}) \mathbf{p}_{mm}}{\partial \phi_{mn}} = \frac{\partial \mathbf{p}_{mm}^T}{\partial \phi_{mn}} \left( \mathbf{Q} + \bar{\mathbf{Q}} + (\mathbf{Q} + \bar{\mathbf{Q}})^T \right) \mathbf{p}_{mm} = 2 \frac{\partial \mathbf{p}_{mm}^T}{\partial \phi_{mn}} \mathrm{Re}(\mathbf{Q} + \bar{\mathbf{Q}}) \mathbf{p}_{mm},$$
where the third equality follows from the fact that $(\mathbf{Q} + \bar{\mathbf{Q}})^T = (\mathbf{Q} + \bar{\mathbf{Q}})^*$. Let:
$$\mathbf{y}_{mm} = \mathrm{Re}(\mathbf{Q}) \mathbf{p}_{mm},\quad \bar{\mathbf{y}}_{mi} = \mathrm{Re}(\bar{\mathbf{Q}}) \mathbf{p}_{mi},\quad i = 1, \ldots, M;$$
then (20) can be expressed as:
$$\frac{\partial \mathbf{p}_{mm}^H (\mathbf{Q} + \bar{\mathbf{Q}}) \mathbf{p}_{mm}}{\partial \phi_{mn}} = 2 \frac{\partial \mathbf{p}_{mm}^T}{\partial \phi_{mn}} \left( \mathbf{y}_{mm} + \bar{\mathbf{y}}_{mm} \right) = 2 \sum_{k=1}^{2N} \frac{\partial p_{mm}(k)}{\partial \phi_{mn}} \left( y_{mm}(k) + \bar{y}_{mm}(k) \right).$$
According to (A4) and (A5), we can deduce the derivative $\partial p_{mm}(k) / \partial \phi_{mn}$ as follows:
$$\frac{\partial p_{mm}(k)}{\partial \phi_{mn}} = \frac{\partial f_m(k) f_m^*(k)}{\partial \phi_{mn}} = 2 \mathrm{Re}\left( \frac{\partial f_m^*(k)}{\partial \phi_{mn}} f_m(k) \right) = 2 \mathrm{Re}\left( -j x_m^*(n) e^{j \frac{2\pi}{2N} n k} f_m(k) \right).$$
By substituting (23) into (22), we have:
$$\frac{\partial \mathbf{p}_{mm}^H (\mathbf{Q} + \bar{\mathbf{Q}}) \mathbf{p}_{mm}}{\partial \phi_{mn}} = 4 \mathrm{Re}\left( -j x_m^*(n) \sum_{k=1}^{2N} f_m(k) \left( y_{mm}(k) + \bar{y}_{mm}(k) \right) e^{j \frac{2\pi}{2N} n k} \right).$$
To compute the second term of (19), we first deduce $\partial \left( \mathbf{p}_{mi}^H \mathrm{Re}(\bar{\mathbf{Q}}) \mathbf{p}_{mi} \right) / \partial \phi_{mn}$ $(i \neq m)$:
$$\frac{\partial \mathbf{p}_{mi}^H \mathrm{Re}(\bar{\mathbf{Q}}) \mathbf{p}_{mi}}{\partial \phi_{mn}} = 2 \mathrm{Re}\left( \frac{\partial \mathbf{p}_{mi}^H}{\partial \phi_{mn}} \mathrm{Re}(\bar{\mathbf{Q}}) \mathbf{p}_{mi} \right) = 2 \mathrm{Re}\left( \sum_{k=1}^{2N} \frac{\partial p_{mi}^*(k)}{\partial \phi_{mn}} \bar{y}_{mi}(k) \right).$$
Similarly to the derivation of (23), it is easy to obtain that:
$$\frac{\partial p_{mi}(k)}{\partial \phi_{mn}} = j x_m(n) e^{-j \frac{2\pi}{2N} n k} f_i^*(k),\quad i \neq m.$$
On the basis of (25) and (26), $\partial \left( \mathbf{p}_{mi}^H \mathrm{Re}(\bar{\mathbf{Q}}) \mathbf{p}_{mi} \right) / \partial \phi_{mn}$ can be further denoted as:
$$\frac{\partial \mathbf{p}_{mi}^H \mathrm{Re}(\bar{\mathbf{Q}}) \mathbf{p}_{mi}}{\partial \phi_{mn}} = 2 \mathrm{Re}\left( -j x_m^*(n) \sum_{k=1}^{2N} f_i(k) \bar{y}_{mi}(k) e^{j \frac{2\pi}{2N} n k} \right).$$
By substituting (24) and (27) into (19), $\partial J_T / \partial \phi_{mn}$ can be simplified as:
$$\frac{\partial J_T}{\partial \phi_{mn}} = 4 \mathrm{Re}\left( -j x_m^*(n) \sum_{k=1}^{2N} \left( f_m(k) y_{mm}(k) + \sum_{i=1}^{M} f_i(k) \bar{y}_{mi}(k) \right) e^{j \frac{2\pi}{2N} n k} \right).$$
Let $\mathbf{z}_m$ be the inverse FFT (IFFT) of $\mathbf{f}_m \circ \mathbf{y}_{mm} + \sum_{i=1}^{M} \mathbf{f}_i \circ \bar{\mathbf{y}}_{mi}$, i.e.,
$$\mathbf{z}_m = \mathcal{F}^{-1}\left( \mathbf{f}_m \circ \mathbf{y}_{mm} + \sum_{i=1}^{M} \mathbf{f}_i \circ \bar{\mathbf{y}}_{mi} \right),\quad z_m(n) = \frac{1}{2N} \sum_{k=1}^{2N} \left( f_m(k) y_{mm}(k) + \sum_{i=1}^{M} f_i(k) \bar{y}_{mi}(k) \right) e^{j \frac{2\pi}{2N} n k};$$
then $\partial J_T / \partial \phi_{mn}$ in (28) becomes:
$$\frac{\partial J_T}{\partial \phi_{mn}} = 8N \, \mathrm{Re}\left( -j x_m^*(n) z_m(n) \right).$$
By stacking (30) in a vector, the phase gradient $\nabla_{\boldsymbol{\phi}_m} J_T$ is given by:
$$\nabla_{\boldsymbol{\phi}_m} J_T = \left[ \partial J_T / \partial \phi_{m1}, \ldots, \partial J_T / \partial \phi_{mN} \right]^T = 8N \, \mathrm{Re}\left( -j \mathbf{x}_m^* \circ \mathbf{z}_m(1:N) \right),$$
where $\mathbf{z}_m(1:N)$ denotes the first $N$ elements of $\mathbf{z}_m$. It is worth noting that $\mathbf{y}_{mm}$ and $\bar{\mathbf{y}}_{mi}$ defined in (21) can be calculated by the Hadamard product and the FFT operations:
$$\mathbf{y}_{mm} = \lambda \mathbf{D}_f \mathbf{p}_{mm} = \lambda \mathbf{w}_f \circ \mathbf{p}_{mm},\quad \bar{\mathbf{y}}_{mi} = \mathrm{Re}(\bar{\mathbf{Q}}) \mathbf{p}_{mi} = \frac{1-\lambda}{8N^2} \left( \mathbf{F} \mathbf{D}_t \mathbf{F}^H + \mathbf{F}^H \mathbf{D}_t \mathbf{F} \right) \mathbf{p}_{mi} = \frac{1-\lambda}{4N} \left( \mathcal{F}\left( \mathbf{w}_t \circ \mathcal{F}^{-1}(\mathbf{p}_{mi}) \right) + \mathcal{F}^{-1}\left( \mathbf{w}_t \circ \mathcal{F}(\mathbf{p}_{mi}) \right) \right).$$
From the derivation above, it is easy to see that the calculation of the phase gradient is somewhat cumbersome. In order to make the calculation process concise, we write the gradient in matrix form. Let $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_M]$ and $\boldsymbol{\Phi} = [\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_M]$ denote the matrix of the waveform set and the corresponding phase matrix, respectively. Then, the spectrum matrix of $\mathbf{X}$ is $\mathbf{S} = \mathcal{F}_c(\mathbf{X}) = [\mathbf{f}_1, \ldots, \mathbf{f}_M]$. Define:
$$\mathbf{Y} = \left[ \mathbf{y}_{11}, \ldots, \mathbf{y}_{MM} \right] = \lambda \left( \mathbf{w}_f \mathbf{1}_M^T \right) \circ \left( \mathbf{S} \circ \mathbf{S}^* \right),\quad \mathbf{Y}_1 = \left[ \sum_{i=1}^{M} \mathbf{f}_i \circ \bar{\mathbf{y}}_{1i}, \ldots, \sum_{i=1}^{M} \mathbf{f}_i \circ \bar{\mathbf{y}}_{Mi} \right];$$
then the matrix form of (29) is given by:
$$\mathbf{Z} = [\mathbf{z}_1, \ldots, \mathbf{z}_M] = \mathcal{F}_c^{-1}\left( \mathbf{S} \circ \mathbf{Y} + \mathbf{Y}_1 \right).$$
Thus, according to (31), the matrix of the phase gradient $\mathbf{G}$ can be expressed as:
$$\mathbf{G} = \left[ \nabla_{\boldsymbol{\phi}_1} J_T, \ldots, \nabla_{\boldsymbol{\phi}_M} J_T \right] = 8N \, \mathrm{Re}\left( -j \mathbf{X}^* \circ \mathbf{Z}(1:N, :) \right),$$
where $\mathbf{Z}(1:N, :)$ denotes the submatrix formed by the first $N$ rows of $\mathbf{Z}$. By defining $\mathbf{Y}$ and $\mathbf{Y}_1$, we can easily calculate the gradient matrix. However, it is still hard to calculate $\mathbf{Y}_1$ directly by (33). To efficiently calculate $\mathbf{Y}_1$, define:
$$\bar{\mathbf{Y}} = \begin{bmatrix} \bar{\mathbf{y}}_{11} & \cdots & \bar{\mathbf{y}}_{1M} \\ \vdots & & \vdots \\ \bar{\mathbf{y}}_{M1} & \cdots & \bar{\mathbf{y}}_{MM} \end{bmatrix} \in \mathbb{C}^{2NM \times M},\quad \mathbf{S}_{rep} = \begin{bmatrix} \mathbf{S} \\ \vdots \\ \mathbf{S} \end{bmatrix} \in \mathbb{C}^{2NM \times M}.$$
Then it is easy to verify that:
$$\left( \mathbf{S}_{rep} \circ \bar{\mathbf{Y}} \right) \mathbf{1}_M = \mathrm{vec}\left( \mathbf{Y}_1 \right),$$
where $\mathrm{vec}(\mathbf{Y}_1)$ denotes the column vector consisting of all of the columns of $\mathbf{Y}_1$. Thus, we can obtain $\mathbf{Y}_1$ via the inverse operation of $\mathrm{vec}(\cdot)$, i.e.,
$$\mathbf{Y}_1 = \mathrm{vec}^{-1}\left( \left( \mathbf{S}_{rep} \circ \bar{\mathbf{Y}} \right) \mathbf{1}_M \right),$$
where $\bar{\mathbf{Y}}$ can be calculated by (32). It is easy to see that each $\bar{\mathbf{y}}_{mi}$ can be computed with four FFT (IFFT) operations. Since $\bar{\mathbf{y}}_{mi} = \bar{\mathbf{y}}_{im}^*$, the calculation of $\bar{\mathbf{Y}}$ takes $4 \cdot \frac{M(M+1)}{2} = 2M^2 + 2M$ FFT (IFFT) operations.
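For readability, the following NumPy sketch computes the gradient matrix of (35) with explicit loops over (32)-(34) instead of the $\mathbf{S}_{rep}$/$\mathrm{vec}(\cdot)$ trick of (36)-(38); the function name and assumed shapes ($\mathbf{X}$ of size $N \times M$, $\mathbf{w}_t$ and $\mathbf{w}_f$ of length $2N$) are our own conventions, not part of the original derivation.

```python
import numpy as np

def wecorr_gradient(X, lam, w_t, w_f):
    """Sketch of the phase-gradient matrix G in (35) for the weighted criterion (17)."""
    N, M = X.shape
    S = np.fft.fft(X, n=2 * N, axis=0)                     # columns f_1..f_M
    Y = lam * w_f[:, None] * np.abs(S) ** 2                # y_mm = lam * w_f o p_mm  (Eq. 32)

    def ybar(p):                                           # ybar_mi = Re(Qbar) p_mi  (Eq. 32)
        return (1 - lam) / (4 * N) * (np.fft.fft(w_t * np.fft.ifft(p))
                                      + np.fft.ifft(w_t * np.fft.fft(p)))

    Y1 = np.zeros((2 * N, M), dtype=complex)               # Y1[:, m] = sum_i f_i o ybar_mi (Eq. 33)
    for m in range(M):
        for i in range(M):
            Y1[:, m] += S[:, i] * ybar(S[:, m] * np.conj(S[:, i]))
    Z = np.fft.ifft(S * Y + Y1, axis=0)                    # Eq. (34)
    return 8 * N * np.real(-1j * np.conj(X) * Z[:N, :])    # Eq. (35)
```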

3.2. Step Size Calculation via Taylor Series Expansion

The traditional methods for obtaining the step size are line search methods, which require many iterations and are thus quite time consuming. To reduce the computational expense, here we propose to calculate the step size directly. Assume that $\boldsymbol{\Phi}^l$ is the phase matrix of the current iteration point $\mathbf{X}^l = [\mathbf{x}_1^l, \ldots, \mathbf{x}_M^l] = e^{j\boldsymbol{\Phi}^l}$, and $\mathbf{D}^l = [\mathbf{d}_1^l, \ldots, \mathbf{d}_M^l]$ is the descent direction. Then, the new iteration point can be denoted as:
$$\boldsymbol{\Phi}^{l+1} = \boldsymbol{\Phi}^l + \mu \mathbf{D}^l,\quad \mathbf{X}^{l+1} = e^{j\boldsymbol{\Phi}^{l+1}} = \mathbf{X}^l \circ e^{j\mu\mathbf{D}^l},$$
i.e.,
$$\mathbf{x}_i^{l+1} = \mathbf{x}_i^l \circ e^{j\mu\mathbf{d}_i^l},\quad i = 1, \ldots, M,$$
where $\mu$ is the step size. Thus, the line search problem can be formulated as the following minimization problem:
$$\min_{\mu > 0} h(\mu) = J_T\left( \mathbf{X}^{l+1} \right) = \sum_{i=1}^{M} \left( \mathbf{p}_{ii}^{l+1} \right)^H \mathbf{Q} \mathbf{p}_{ii}^{l+1} + \sum_{i=1}^{M} \sum_{j=1}^{M} \left( \mathbf{p}_{ij}^{l+1} \right)^H \bar{\mathbf{Q}} \mathbf{p}_{ij}^{l+1}.$$
By taking the derivative of (41), we have:
$$h'(\mu) = 2 \mathrm{Re}\left( \sum_{i=1}^{M} \left( \frac{\partial \mathbf{p}_{ii}^{l+1}}{\partial \mu} \right)^H \mathbf{Q} \mathbf{p}_{ii}^{l+1} + \sum_{i=1}^{M} \sum_{j=1}^{M} \left( \frac{\partial \mathbf{p}_{ij}^{l+1}}{\partial \mu} \right)^H \bar{\mathbf{Q}} \mathbf{p}_{ij}^{l+1} \right).$$
To simplify (41) and (42), we first deduce $\mathbf{p}_{ij}^{l+1}$. By using the Taylor series expansion, $\mathbf{p}_{ij}^{l+1}$ can be approximated as (see Appendix C):
$$\mathbf{p}_{ij}^{l+1} \approx \mathbf{f}_i \circ \mathbf{f}_j^* + j\left( \mathbf{f}'_i \circ \mathbf{f}_j^* - \mathbf{f}_i \circ \mathbf{f}_j'^* \right)\mu - \frac{1}{2}\left( \mathbf{f}''_i \circ \mathbf{f}_j^* + \mathbf{f}_i \circ \mathbf{f}_j''^* - 2\mathbf{f}'_i \circ \mathbf{f}_j'^* \right)\mu^2,$$
where:
$$\mathbf{f}_i = \mathcal{F}\left( \mathbf{x}_i^l \right),\quad \mathbf{f}'_i = \mathcal{F}\left( \mathbf{x}_i^l \circ \mathbf{d}_i^l \right),\quad \mathbf{f}''_i = \mathcal{F}\left( \mathbf{x}_i^l \circ \mathbf{d}_i^l \circ \mathbf{d}_i^l \right).$$
Let
$$\mathbf{p}_{ij}^l = \mathbf{f}_i \circ \mathbf{f}_j^*,\quad \mathbf{c}_{ij} = j\left( \mathbf{f}'_i \circ \mathbf{f}_j^* - \mathbf{f}_i \circ \mathbf{f}_j'^* \right),\quad \bar{\mathbf{c}}_{ij} = -\frac{1}{2}\left( \mathbf{f}''_i \circ \mathbf{f}_j^* + \mathbf{f}_i \circ \mathbf{f}_j''^* - 2\mathbf{f}'_i \circ \mathbf{f}_j'^* \right);$$
then (43) can be rewritten as:
$$\mathbf{p}_{ij}^{l+1} \approx \mathbf{p}_{ij}^l + \mathbf{c}_{ij}\mu + \bar{\mathbf{c}}_{ij}\mu^2 = \bar{\mathbf{p}}_{ij}^{l+1}.$$
By replacing $\mathbf{p}_{ij}^{l+1}$ with $\bar{\mathbf{p}}_{ij}^{l+1}$, the approximate function of $h(\mu)$ in (41) can be denoted as:
$$h_1(\mu) = \sum_{i=1}^{M} \left( \bar{\mathbf{p}}_{ii}^{l+1} \right)^H \mathbf{Q} \bar{\mathbf{p}}_{ii}^{l+1} + \sum_{i=1}^{M} \sum_{j=1}^{M} \left( \bar{\mathbf{p}}_{ij}^{l+1} \right)^H \bar{\mathbf{Q}} \bar{\mathbf{p}}_{ij}^{l+1}.$$
Since $\mathbf{p}_{ij}^{l+1} = \bar{\mathbf{p}}_{ij}^{l+1}$ holds when $\mu = 0$, it is easy to verify that:
$$h(0) = h_1(0),\quad h'(\mu)\big|_{\mu=0} = h'_1(\mu)\big|_{\mu=0},$$
which indicates that $h(\mu)$ and $h_1(\mu)$ have the same function value and slope at $\mu = 0$. As $\mathbf{D}^l$ is a descent direction, the slopes of these two functions are less than zero. Thus, each of these two functions has at least one minimum point greater than zero. Since $h(\mu)$ is sensitive to the waveform phases, the optimal step size of $h(\mu)$ is very small and close to zero. Consequently, we can use the minimum point of $h_1(\mu)$ to approximate the optimal step size.
To calculate the minimum point of $h_1(\mu)$, we replace $\mathbf{p}_{ij}^{l+1}$ in (42) with $\bar{\mathbf{p}}_{ij}^{l+1}$; then, the derivative of $h_1(\mu)$ with respect to $\mu$ is given by:
$$h'_1(\mu) = 2 \mathrm{Re}\left( \sum_{i=1}^{M} \left( \mathbf{c}_{ii} + 2\bar{\mathbf{c}}_{ii}\mu \right)^H \mathbf{Q} \left( \mathbf{p}_{ii}^l + \mathbf{c}_{ii}\mu + \bar{\mathbf{c}}_{ii}\mu^2 \right) + \sum_{i=1}^{M}\sum_{j=1}^{M} \left( \mathbf{c}_{ij} + 2\bar{\mathbf{c}}_{ij}\mu \right)^H \bar{\mathbf{Q}} \left( \mathbf{p}_{ij}^l + \mathbf{c}_{ij}\mu + \bar{\mathbf{c}}_{ij}\mu^2 \right) \right) = a\mu^3 + b\mu^2 + c\mu + d,$$
where:
$$\begin{aligned} a &= 2\mathrm{Re}\left( 2\sum_{i=1}^{M} \bar{\mathbf{c}}_{ii}^H \mathbf{Q} \bar{\mathbf{c}}_{ii} + 2\sum_{i=1}^{M}\sum_{j=1}^{M} \bar{\mathbf{c}}_{ij}^H \bar{\mathbf{Q}} \bar{\mathbf{c}}_{ij} \right), \\ b &= 2\mathrm{Re}\left( \sum_{i=1}^{M} \left( 2\bar{\mathbf{c}}_{ii}^H \mathbf{Q} \mathbf{c}_{ii} + \mathbf{c}_{ii}^H \mathbf{Q} \bar{\mathbf{c}}_{ii} \right) + \sum_{i=1}^{M}\sum_{j=1}^{M} \left( 2\bar{\mathbf{c}}_{ij}^H \bar{\mathbf{Q}} \mathbf{c}_{ij} + \mathbf{c}_{ij}^H \bar{\mathbf{Q}} \bar{\mathbf{c}}_{ij} \right) \right), \\ c &= 2\mathrm{Re}\left( \sum_{i=1}^{M} \left( 2\bar{\mathbf{c}}_{ii}^H \mathbf{Q} \mathbf{p}_{ii}^l + \mathbf{c}_{ii}^H \mathbf{Q} \mathbf{c}_{ii} \right) + \sum_{i=1}^{M}\sum_{j=1}^{M} \left( 2\bar{\mathbf{c}}_{ij}^H \bar{\mathbf{Q}} \mathbf{p}_{ij}^l + \mathbf{c}_{ij}^H \bar{\mathbf{Q}} \mathbf{c}_{ij} \right) \right), \\ d &= 2\mathrm{Re}\left( \sum_{i=1}^{M} \mathbf{c}_{ii}^H \mathbf{Q} \mathbf{p}_{ii}^l + \sum_{i=1}^{M}\sum_{j=1}^{M} \mathbf{c}_{ij}^H \bar{\mathbf{Q}} \mathbf{p}_{ij}^l \right). \end{aligned}$$
To simplify the calculation, let:
$$\mathbf{h}_{ij} = \mathcal{F}^{-1}\left( \mathbf{c}_{ij} \right) = \frac{1}{2N}\mathbf{F}^H \mathbf{c}_{ij},\quad \bar{\mathbf{h}}_{ij} = \mathcal{F}^{-1}\left( \bar{\mathbf{c}}_{ij} \right) = \frac{1}{2N}\mathbf{F}^H \bar{\mathbf{c}}_{ij}.$$
Then (50) can be rewritten as:
$$\begin{aligned} a &= 4\mathrm{Re}\left( \lambda \mathbf{w}_f^T \sum_{i=1}^{M} \bar{\mathbf{c}}_{ii}^* \circ \bar{\mathbf{c}}_{ii} + (1-\lambda) \mathbf{w}_t^T \sum_{i=1}^{M}\sum_{j=1}^{M} \bar{\mathbf{h}}_{ij}^* \circ \bar{\mathbf{h}}_{ij} \right), \\ b &= 2\mathrm{Re}\left( \lambda \mathbf{w}_f^T \sum_{i=1}^{M} \left( 2\bar{\mathbf{c}}_{ii}^* \circ \mathbf{c}_{ii} + \mathbf{c}_{ii}^* \circ \bar{\mathbf{c}}_{ii} \right) + (1-\lambda) \mathbf{w}_t^T \sum_{i=1}^{M}\sum_{j=1}^{M} \left( 2\bar{\mathbf{h}}_{ij}^* \circ \mathbf{h}_{ij} + \mathbf{h}_{ij}^* \circ \bar{\mathbf{h}}_{ij} \right) \right), \\ c &= 2\mathrm{Re}\left( \lambda \mathbf{w}_f^T \sum_{i=1}^{M} \left( 2\bar{\mathbf{c}}_{ii}^* \circ \mathbf{p}_{ii}^l + \mathbf{c}_{ii}^* \circ \mathbf{c}_{ii} \right) + (1-\lambda) \mathbf{w}_t^T \sum_{i=1}^{M}\sum_{j=1}^{M} \left( 2\bar{\mathbf{h}}_{ij}^* \circ \mathbf{r}_{ij}^l + \mathbf{h}_{ij}^* \circ \mathbf{h}_{ij} \right) \right), \\ d &= 2\mathrm{Re}\left( \lambda \mathbf{w}_f^T \sum_{i=1}^{M} \mathbf{c}_{ii}^* \circ \mathbf{p}_{ii}^l + (1-\lambda) \mathbf{w}_t^T \sum_{i=1}^{M}\sum_{j=1}^{M} \mathbf{h}_{ij}^* \circ \mathbf{r}_{ij}^l \right), \end{aligned}$$
where the first term of each equality follows from the fact that $\mathbf{a}^H \mathbf{Q} \mathbf{b} = \lambda \mathbf{a}^H \mathbf{D}_f \mathbf{b} = \lambda \mathbf{w}_f^T \left( \mathbf{a}^* \circ \mathbf{b} \right)$ ($\mathbf{a}$, $\mathbf{b}$ being arbitrary vectors), and the second term of each equality follows from the fact that $\mathbf{a}^H \bar{\mathbf{Q}} \mathbf{b} = \frac{1-\lambda}{4N^2} \mathbf{a}^H \mathbf{F} \mathbf{D}_t \mathbf{F}^H \mathbf{b} = (1-\lambda) \mathbf{w}_t^T \left( \mathcal{F}^{-1}(\mathbf{a})^* \circ \mathcal{F}^{-1}(\mathbf{b}) \right)$. Let $h'_1(\mu) = 0$; then, the minimum point of $h_1(\mu)$ can be obtained by solving the following cubic equation:
$$a\mu^3 + b\mu^2 + c\mu + d = 0.$$
It is well known that a cubic equation with real coefficients has three roots, at least one of which is real. Therefore, we choose the positive root that is closest to zero as the estimate of the step size.
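A minimal sketch of this root selection, using numpy.roots (the fallback value is our own safeguard for degenerate cases, not part of the derivation):

```python
import numpy as np

def step_size_from_cubic(a, b, c, d, fallback=1e-6):
    """Return the positive real root of a*mu^3 + b*mu^2 + c*mu + d = 0 closest to zero (Eq. 53)."""
    roots = np.roots([a, b, c, d])
    real_pos = roots.real[(np.abs(roots.imag) < 1e-10) & (roots.real > 0)]
    return real_pos.min() if real_pos.size else fallback

# Example: 2*mu^3 - 3*mu^2 - 11*mu + 6 has roots {-2, 0.5, 3}; the step chosen is 0.5
print(step_size_from_cubic(2.0, -3.0, -11.0, 6.0))
```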
In order to facilitate the calculation, we write the above derivation in matrix form. Define:
$$\mathbf{S} = \left[ \mathbf{f}_1, \ldots, \mathbf{f}_M \right],\quad \mathbf{S}' = \left[ \mathbf{f}'_1, \ldots, \mathbf{f}'_M \right],\quad \mathbf{S}'' = \left[ \mathbf{f}''_1, \ldots, \mathbf{f}''_M \right];$$
then, according to (44), we have:
$$\mathbf{S} = \mathcal{F}_c\left( \mathbf{X}^l \right),\quad \mathbf{S}' = \mathcal{F}_c\left( \mathbf{X}^l \circ \mathbf{D}^l \right),\quad \mathbf{S}'' = \mathcal{F}_c\left( \mathbf{X}^l \circ \mathbf{D}^l \circ \mathbf{D}^l \right).$$
Let:
$$\begin{aligned} \mathbf{P} &= \left[ \mathbf{p}_{11}^l, \ldots, \mathbf{p}_{M1}^l, \mathbf{p}_{12}^l, \ldots, \mathbf{p}_{M2}^l, \ldots, \mathbf{p}_{1M}^l, \ldots, \mathbf{p}_{MM}^l \right], \\ \mathbf{C} &= \left[ \mathbf{c}_{11}, \ldots, \mathbf{c}_{M1}, \mathbf{c}_{12}, \ldots, \mathbf{c}_{M2}, \ldots, \mathbf{c}_{1M}, \ldots, \mathbf{c}_{MM} \right], \\ \bar{\mathbf{C}} &= \left[ \bar{\mathbf{c}}_{11}, \ldots, \bar{\mathbf{c}}_{M1}, \bar{\mathbf{c}}_{12}, \ldots, \bar{\mathbf{c}}_{M2}, \ldots, \bar{\mathbf{c}}_{1M}, \ldots, \bar{\mathbf{c}}_{MM} \right], \\ \mathbf{P}_d &= \left[ \mathbf{p}_{11}^l, \mathbf{p}_{22}^l, \ldots, \mathbf{p}_{MM}^l \right],\quad \mathbf{C}_d = \left[ \mathbf{c}_{11}, \mathbf{c}_{22}, \ldots, \mathbf{c}_{MM} \right],\quad \bar{\mathbf{C}}_d = \left[ \bar{\mathbf{c}}_{11}, \bar{\mathbf{c}}_{22}, \ldots, \bar{\mathbf{c}}_{MM} \right]. \end{aligned}$$
Then, the inverse Fourier transforms of $\mathbf{P}$, $\mathbf{C}$, $\bar{\mathbf{C}}$ can be denoted as:
$$\mathbf{R} = \mathcal{F}_c^{-1}\left( \mathbf{P} \right),\quad \mathbf{H} = \mathcal{F}_c^{-1}\left( \mathbf{C} \right),\quad \bar{\mathbf{H}} = \mathcal{F}_c^{-1}\left( \bar{\mathbf{C}} \right).$$
According to (52), (56) and (57), the coefficients of the cubic equation can be reformulated as:
$$\begin{aligned} a &= 4\mathrm{Re}\left( \lambda \mathbf{w}_f^T \left( \bar{\mathbf{C}}_d^* \circ \bar{\mathbf{C}}_d \right) \mathbf{1}_M + (1-\lambda) \mathbf{w}_t^T \left( \bar{\mathbf{H}}^* \circ \bar{\mathbf{H}} \right) \mathbf{1}_{M^2} \right), \\ b &= 2\mathrm{Re}\left( \lambda \mathbf{w}_f^T \left( 2\bar{\mathbf{C}}_d^* \circ \mathbf{C}_d + \mathbf{C}_d^* \circ \bar{\mathbf{C}}_d \right) \mathbf{1}_M + (1-\lambda) \mathbf{w}_t^T \left( 2\bar{\mathbf{H}}^* \circ \mathbf{H} + \mathbf{H}^* \circ \bar{\mathbf{H}} \right) \mathbf{1}_{M^2} \right), \\ c &= 2\mathrm{Re}\left( \lambda \mathbf{w}_f^T \left( 2\bar{\mathbf{C}}_d^* \circ \mathbf{P}_d + \mathbf{C}_d^* \circ \mathbf{C}_d \right) \mathbf{1}_M + (1-\lambda) \mathbf{w}_t^T \left( 2\bar{\mathbf{H}}^* \circ \mathbf{R} + \mathbf{H}^* \circ \mathbf{H} \right) \mathbf{1}_{M^2} \right), \\ d &= 2\mathrm{Re}\left( \lambda \mathbf{w}_f^T \left( \mathbf{C}_d^* \circ \mathbf{P}_d \right) \mathbf{1}_M + (1-\lambda) \mathbf{w}_t^T \left( \mathbf{H}^* \circ \mathbf{R} \right) \mathbf{1}_{M^2} \right). \end{aligned}$$

3.3. Algorithm Summary

After deducing the phase gradient and the step size, it is easy to solve the unconstrained problem (17) by using the conjugate gradient algorithm (CGA). Here, we apply the classical Polak–Ribiere–Polyak CGA (PRP-CGA) to deal with the waveform design problem. The searching direction of the PRP-CGA can be expressed as:
$$\mathbf{d}^{l+1} = -\mathbf{g}^{l+1} + \frac{\left( \mathbf{g}^{l+1} - \mathbf{g}^l \right)^T \mathbf{g}^{l+1}}{\left\| \mathbf{g}^l \right\|^2} \mathbf{d}^l,$$
where g l and d l are the gradient vector and the direction vector, respectively. For the problem here, g l and d l are defined as:
$$\mathbf{g}^l = \left[ \left( \mathbf{g}_1^l \right)^T, \ldots, \left( \mathbf{g}_M^l \right)^T \right]^T,\quad \mathbf{d}^l = \left[ \left( \mathbf{d}_1^l \right)^T, \ldots, \left( \mathbf{d}_M^l \right)^T \right]^T,$$
where $\mathbf{g}_m^l = \nabla_{\boldsymbol{\phi}_m} J_T\left( \mathbf{X}^l \right),\ m = 1, \ldots, M$. Since the step size is an approximate value, $\left( \mathbf{g}^{l+1} \right)^T \mathbf{d}^l$ is not equal to zero. Thus, $\mathbf{d}^{l+1}$ may not be a descent direction, i.e.,
$$\left( \mathbf{g}^{l+1} \right)^T \mathbf{d}^{l+1} = -\left\| \mathbf{g}^{l+1} \right\|^2 + \frac{\left( \mathbf{g}^{l+1} - \mathbf{g}^l \right)^T \mathbf{g}^{l+1}}{\left\| \mathbf{g}^l \right\|^2} \left( \mathbf{g}^{l+1} \right)^T \mathbf{d}^l < 0$$
is not always satisfied. To guarantee that the search direction is a descent direction, we adopt the following modified direction:
$$\tilde{\mathbf{d}}^{l+1} = -\mathbf{g}^{l+1} + \frac{\left( \mathbf{g}^{l+1} - \mathbf{g}^l \right)^T \mathbf{g}^{l+1}}{\left\| \mathbf{g}^l \right\|^2} \mathbf{d}^l,\quad \mathbf{d}^{l+1} = \begin{cases} \tilde{\mathbf{d}}^{l+1}, & \left( \mathbf{g}^{l+1} \right)^T \tilde{\mathbf{d}}^{l+1} < 0 \\ -\mathbf{g}^{l+1}, & \left( \mathbf{g}^{l+1} \right)^T \tilde{\mathbf{d}}^{l+1} \geq 0. \end{cases}$$
Actually, (59) can also be expressed as:
$$\mathbf{D}^{l+1} = -\mathbf{G}^{l+1} + \frac{\left( \mathbf{g}^{l+1} - \mathbf{g}^l \right)^T \mathbf{g}^{l+1}}{\left\| \mathbf{g}^l \right\|^2} \mathbf{D}^l.$$
Since
$$\left\| \mathbf{g}^l \right\|^2 = \sum_{m=1}^{M} \left\| \mathbf{g}_m^l \right\|^2 = \left\| \mathbf{G}^l \right\|_F^2,\quad \left( \mathbf{g}^{l+1} - \mathbf{g}^l \right)^T \mathbf{g}^{l+1} = \left\| \mathbf{G}^{l+1} \right\|_F^2 - \mathbf{1}_N^T \left( \mathbf{G}^{l+1} \circ \mathbf{G}^l \right) \mathbf{1}_M,$$
(63) can be rewritten as:
$$\mathbf{D}^{l+1} = -\mathbf{G}^{l+1} + \frac{\left\| \mathbf{G}^{l+1} \right\|_F^2 - \mathbf{1}_N^T \left( \mathbf{G}^{l+1} \circ \mathbf{G}^l \right) \mathbf{1}_M}{\left\| \mathbf{G}^l \right\|_F^2} \mathbf{D}^l.$$
Thus, (62) can be expressed in the following matrix form:
$$\tilde{\mathbf{D}}^{l+1} = -\mathbf{G}^{l+1} + \frac{\left\| \mathbf{G}^{l+1} \right\|_F^2 - \mathbf{1}_N^T \left( \mathbf{G}^{l+1} \circ \mathbf{G}^l \right) \mathbf{1}_M}{\left\| \mathbf{G}^l \right\|_F^2} \mathbf{D}^l,\quad \mathbf{D}^{l+1} = \begin{cases} \tilde{\mathbf{D}}^{l+1}, & \mathbf{1}_N^T \left( \mathbf{G}^{l+1} \circ \tilde{\mathbf{D}}^{l+1} \right) \mathbf{1}_M < 0 \\ -\mathbf{G}^{l+1}, & \mathbf{1}_N^T \left( \mathbf{G}^{l+1} \circ \tilde{\mathbf{D}}^{l+1} \right) \mathbf{1}_M \geq 0. \end{cases}$$
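In code, the modified direction (66) amounts to a few Frobenius norms and Hadamard products; a possible NumPy sketch (with $N \times M$ real gradient and direction matrices) is:

```python
import numpy as np

def prp_direction(G_new, G_old, D_old):
    """Modified PRP direction of (66): fall back to steepest descent if not a descent direction."""
    beta = (np.linalg.norm(G_new, 'fro') ** 2 - np.sum(G_new * G_old)) \
           / np.linalg.norm(G_old, 'fro') ** 2
    D_tilde = -G_new + beta * D_old
    return D_tilde if np.sum(G_new * D_tilde) < 0 else -G_new
```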
On the basis of the above derivation, the gradient-based algorithm, which we call Gra-WeCorr-SFW, is summarized in Algorithm 1.

4. Simplified Algorithm for the Design Problem without Considering the Correlation Weights

In Section 3, we presented a gradient-based algorithm to handle Problem (17). From (9), we can see that the criterion $J_{CF}$ can be simplified when $w_t(k) = 1,\ k = 1-N, \ldots, N-1$ (i.e., the correlation weights are not taken into account). Thus, in this section, we derive a simplified algorithm for the design problem without considering the correlation weights. Let $w_t(k) = 1,\ k = 1-N, \ldots, N-1$; then, the criterion $J_{CF}$ can be simplified as:
$$J_{CF} = \frac{1}{2N} \sum_{i=1}^{M} \sum_{j=1}^{M} \mathbf{p}_{ij}^H \mathbf{p}_{ij} = \frac{1}{2N} \sum_{i=1}^{M} \sum_{j=1}^{M} \mathbf{p}_{ii}^H \mathbf{p}_{jj} = \frac{1}{2N} \sum_{i=1}^{M} \mathbf{p}_{ii}^H \left( \sum_{j=1}^{M} \mathbf{p}_{jj} \right).$$
Thus, we can rewrite Problem (17) as:
$$J_T = \lambda \sum_{i=1}^{M} \mathbf{p}_{ii}^H \mathbf{D}_f \mathbf{p}_{ii} + \lambda_1 \sum_{i=1}^{M} \mathbf{p}_{ii}^H \left( \sum_{j=1}^{M} \mathbf{p}_{jj} \right),$$
where $\lambda_1 = \frac{1-\lambda}{2N}$. As in Section 3, we deduce the gradient and the step size.
Algorithm 1: Gra-WeCorr-SFW.
Initialization: $l = 0$, $\lambda$, $M$, $N$, $\mathbf{w}_t$, $\mathbf{w}_f$, $\mathbf{X}^0 = [\mathbf{x}_1^0, \ldots, \mathbf{x}_M^0]$,
       $\mathbf{S} = \mathcal{F}_c(\mathbf{X}^0)$, $\mathbf{G}^0$, $\mathbf{D}^0 = -\mathbf{G}^0$.
Repeat
1: $\mathbf{S}' = \mathcal{F}_c(\mathbf{X}^l \circ \mathbf{D}^l)$, $\mathbf{S}'' = \mathcal{F}_c(\mathbf{X}^l \circ \mathbf{D}^l \circ \mathbf{D}^l)$.
2: Compute $\mathbf{P}$, $\mathbf{C}$, $\bar{\mathbf{C}}$, $\mathbf{P}_d$, $\mathbf{C}_d$, $\bar{\mathbf{C}}_d$ according to (45).
3: $\mathbf{R} = \mathcal{F}_c^{-1}(\mathbf{P})$, $\mathbf{H} = \mathcal{F}_c^{-1}(\mathbf{C})$, $\bar{\mathbf{H}} = \mathcal{F}_c^{-1}(\bar{\mathbf{C}})$.
4: Compute the coefficients $a$, $b$, $c$, $d$ according to (58).
5: Solve the cubic Equation (53), and then choose the
 positive root that is closest to zero as the step size $\mu^l$.
6: $\mathbf{X}^{l+1} = \mathbf{X}^l \circ e^{j\mu^l \mathbf{D}^l}$, $\mathbf{S} = \mathcal{F}_c(\mathbf{X}^{l+1})$.
7: $\mathbf{Y} = \lambda (\mathbf{w}_f \mathbf{1}_M^T) \circ (\mathbf{S} \circ \mathbf{S}^*)$.
8: Compute $\bar{\mathbf{Y}}$ according to (32).
9: $\mathbf{S}_{rep} = [\mathbf{S}^T, \ldots, \mathbf{S}^T]^T \in \mathbb{C}^{2NM \times M}$, $\mathbf{Y}_1 = \mathrm{vec}^{-1}\left( (\mathbf{S}_{rep} \circ \bar{\mathbf{Y}}) \mathbf{1}_M \right)$.
10: $\mathbf{Z} = \mathcal{F}_c^{-1}(\mathbf{S} \circ \mathbf{Y} + \mathbf{Y}_1)$.
11: $\mathbf{G}^{l+1} = 8N \, \mathrm{Re}\left( -j (\mathbf{X}^{l+1})^* \circ \mathbf{Z}(1:N, :) \right)$.
12: Compute the searching direction $\mathbf{D}^{l+1}$ according to (66).
13: $l = l + 1$.
Until convergence

4.1. Phase Gradient

According to (68), we deduce the derivative $\partial J_T / \partial \phi_{mn}$ as:
$$\frac{\partial J_T}{\partial \phi_{mn}} = 2\mathrm{Re}\left( \frac{\partial \mathbf{p}_{mm}^H}{\partial \phi_{mn}} \left( \lambda \mathbf{w}_f \circ \mathbf{p}_{mm} + \lambda_1 \sum_{j=1}^{M} \mathbf{p}_{jj} \right) \right).$$
Let:
$$\mathbf{y}_m = \lambda \mathbf{w}_f \circ \mathbf{p}_{mm} + \lambda_1 \sum_{j=1}^{M} \mathbf{p}_{jj};$$
then, according to (23), (69) can be rewritten as:
$$\frac{\partial J_T}{\partial \phi_{mn}} = 2\mathrm{Re}\left( \sum_{k=1}^{2N} \frac{\partial p_{mm}^*(k)}{\partial \phi_{mn}} y_m(k) \right) = 4\mathrm{Re}\left( -j x_m^*(n) \sum_{k=1}^{2N} f_m(k) y_m(k) e^{j \frac{2\pi}{2N} n k} \right).$$
By stacking (71) in a vector, the gradient is given by:
$$\nabla_{\boldsymbol{\phi}_m} J_T = 8N \, \mathrm{Re}\left( -j \mathbf{x}_m^* \circ \mathbf{z}_m(1:N) \right),$$
where $\mathbf{z}_m = \mathcal{F}^{-1}\left( \mathbf{f}_m \circ \mathbf{y}_m \right)$. Define $\mathbf{Y}_2 = [\mathbf{y}_1, \ldots, \mathbf{y}_M]$, $\mathbf{Z}_1 = [\mathbf{z}_1, \ldots, \mathbf{z}_M]$; then, it is easy to verify that:
$$\mathbf{Y}_2 = \lambda \left( \mathbf{w}_f \mathbf{1}_M^T \right) \circ \left( \mathbf{S} \circ \mathbf{S}^* \right) + \lambda_1 \left( \mathbf{S} \circ \mathbf{S}^* \right) \mathbf{1}_M \mathbf{1}_M^T,\quad \mathbf{Z}_1 = \mathcal{F}_c^{-1}\left( \mathbf{S} \circ \mathbf{Y}_2 \right).$$
Thus, the gradient matrix can be expressed as:
$$\mathbf{G} = 8N \, \mathrm{Re}\left( -j \mathbf{X}^* \circ \mathbf{Z}_1(1:N, :) \right).$$

4.2. Step Size

Similarly to (41), the line search problem here can be denoted as:
$$\min_{\mu > 0} h(\mu) = \lambda \sum_{i=1}^{M} \left( \mathbf{p}_{ii}^{l+1} \right)^H \mathbf{D}_f \mathbf{p}_{ii}^{l+1} + \lambda_1 \sum_{i=1}^{M} \left( \mathbf{p}_{ii}^{l+1} \right)^H \sum_{j=1}^{M} \mathbf{p}_{jj}^{l+1}.$$
By taking the derivative of $h(\mu)$, we have:
$$h'(\mu) = 2\mathrm{Re}\left( \lambda \sum_{i=1}^{M} \left( \frac{\partial \mathbf{p}_{ii}^{l+1}}{\partial \mu} \right)^H \mathbf{D}_f \mathbf{p}_{ii}^{l+1} + \lambda_1 \sum_{i=1}^{M} \left( \frac{\partial \mathbf{p}_{ii}^{l+1}}{\partial \mu} \right)^H \sum_{j=1}^{M} \mathbf{p}_{jj}^{l+1} \right).$$
Then, the approximate derivative can be obtained by replacing $\mathbf{p}_{pq}^{l+1}$ with $\bar{\mathbf{p}}_{pq}^{l+1}$ defined in (46):
$$h'(\mu) \approx 2\mathrm{Re}\left( \lambda \sum_{i=1}^{M} \left( \mathbf{c}_{ii} + 2\bar{\mathbf{c}}_{ii}\mu \right)^H \mathbf{D}_f \left( \mathbf{p}_{ii}^l + \mathbf{c}_{ii}\mu + \bar{\mathbf{c}}_{ii}\mu^2 \right) + \lambda_1 \sum_{i=1}^{M} \left( \mathbf{c}_{ii} + 2\bar{\mathbf{c}}_{ii}\mu \right)^H \sum_{j=1}^{M} \left( \mathbf{p}_{jj}^l + \mathbf{c}_{jj}\mu + \bar{\mathbf{c}}_{jj}\mu^2 \right) \right).$$
According to the definition in (56), we have:
$$\sum_{i=1}^{M} \mathbf{p}_{ii}^l = \mathbf{P}_d \mathbf{1}_M,\quad \sum_{i=1}^{M} \mathbf{c}_{ii} = \mathbf{C}_d \mathbf{1}_M,\quad \sum_{i=1}^{M} \bar{\mathbf{c}}_{ii} = \bar{\mathbf{C}}_d \mathbf{1}_M,$$
where $\mathbf{P}_d$, $\mathbf{C}_d$ and $\bar{\mathbf{C}}_d$ can be expressed as the following equalities according to (45):
$$\mathbf{P}_d = \mathbf{S} \circ \mathbf{S}^*,\quad \mathbf{C}_d = -2\,\mathrm{Im}\left( \mathbf{S}' \circ \mathbf{S}^* \right),\quad \bar{\mathbf{C}}_d = -\mathrm{Re}\left( \mathbf{S}'' \circ \mathbf{S}^* - \mathbf{S}' \circ \mathbf{S}'^* \right).$$
Thus, (77) can be written more compactly as:
$$h'(\mu) \approx a_1\mu^3 + b_1\mu^2 + c_1\mu + d_1,$$
where:
$$\begin{aligned} a_1 &= 4\mathrm{Re}\left( \lambda \mathbf{w}_f^T \left( \bar{\mathbf{C}}_d^* \circ \bar{\mathbf{C}}_d \right) \mathbf{1}_M + \lambda_1 \left( \bar{\mathbf{C}}_d \mathbf{1}_M \right)^H \left( \bar{\mathbf{C}}_d \mathbf{1}_M \right) \right), \\ b_1 &= 2\mathrm{Re}\left( \lambda \mathbf{w}_f^T \left( 2\bar{\mathbf{C}}_d^* \circ \mathbf{C}_d + \mathbf{C}_d^* \circ \bar{\mathbf{C}}_d \right) \mathbf{1}_M + \lambda_1 \left( 2\left( \bar{\mathbf{C}}_d \mathbf{1}_M \right)^H \left( \mathbf{C}_d \mathbf{1}_M \right) + \left( \mathbf{C}_d \mathbf{1}_M \right)^H \left( \bar{\mathbf{C}}_d \mathbf{1}_M \right) \right) \right), \\ c_1 &= 2\mathrm{Re}\left( \lambda \mathbf{w}_f^T \left( 2\bar{\mathbf{C}}_d^* \circ \mathbf{P}_d + \mathbf{C}_d^* \circ \mathbf{C}_d \right) \mathbf{1}_M + \lambda_1 \left( 2\left( \bar{\mathbf{C}}_d \mathbf{1}_M \right)^H \left( \mathbf{P}_d \mathbf{1}_M \right) + \left( \mathbf{C}_d \mathbf{1}_M \right)^H \left( \mathbf{C}_d \mathbf{1}_M \right) \right) \right), \\ d_1 &= 2\mathrm{Re}\left( \lambda \mathbf{w}_f^T \left( \mathbf{C}_d^* \circ \mathbf{P}_d \right) \mathbf{1}_M + \lambda_1 \left( \mathbf{C}_d \mathbf{1}_M \right)^H \left( \mathbf{P}_d \mathbf{1}_M \right) \right). \end{aligned}$$
By solving the cubic equation a 1 μ 3 + b 1 μ 2 + c 1 μ + d 1 = 0 , the step size can be easily obtained.

4.3. Algorithm Summary

In the previous two subsections, the gradient and step size for Problem (68) were derived. The simplified algorithm (which we call Gra-Corr-SFW) for the design problem without considering the correlation weights is summarized in Algorithm 2. It is easy to observe that both Algorithm 1 and Algorithm 2 can be easily implemented by (I)FFT operations and the Hadamard product. Since the Hadamard product is more efficient than the (I)FFT operation, we mainly use the number of (I)FFT operations to measure the time complexity of the algorithms. From Section 3, we know that the calculation of $\bar{\mathbf{Y}}$ needs $2M^2 + 2M$ (I)FFT operations. Thus, the Gra-WeCorr-SFW requires $5M^2 + 6M$ (I)FFT operations at each iteration. Compared to the Gra-WeCorr-SFW, the Gra-Corr-SFW is simpler and only needs $4M$ (I)FFT operations at each iteration. It is worth noting that the calculation of the cubic function is very simple and needs only a small amount of computation.
Algorithm 2: Gra-Corr-SFW.
Initialization: $l = 0$, $\lambda$, $\lambda_1$, $M$, $N$, $\mathbf{w}_t$, $\mathbf{w}_f$, $\mathbf{X}^0 = [\mathbf{x}_1^0, \ldots, \mathbf{x}_M^0]$,
       $\mathbf{S} = \mathcal{F}_c(\mathbf{X}^0)$, $\mathbf{G}^0$, $\mathbf{D}^0 = -\mathbf{G}^0$.
Repeat
1: $\mathbf{S}' = \mathcal{F}_c(\mathbf{X}^l \circ \mathbf{D}^l)$, $\mathbf{S}'' = \mathcal{F}_c(\mathbf{X}^l \circ \mathbf{D}^l \circ \mathbf{D}^l)$.
2: $\mathbf{P}_d = \mathbf{S} \circ \mathbf{S}^*$, $\mathbf{C}_d = -2\,\mathrm{Im}(\mathbf{S}' \circ \mathbf{S}^*)$, $\bar{\mathbf{C}}_d = -\mathrm{Re}(\mathbf{S}'' \circ \mathbf{S}^* - \mathbf{S}' \circ \mathbf{S}'^*)$.
3: Compute the coefficients $a_1$, $b_1$, $c_1$, $d_1$ according to (81).
4: Solve the cubic Equation (80), and then choose the positive root
 which is closest to zero as the step size $\mu^l$.
5: $\mathbf{X}^{l+1} = \mathbf{X}^l \circ e^{j\mu^l \mathbf{D}^l}$, $\mathbf{S} = \mathcal{F}_c(\mathbf{X}^{l+1})$.
6: $\mathbf{Y}_2 = \lambda (\mathbf{w}_f \mathbf{1}_M^T) \circ (\mathbf{S} \circ \mathbf{S}^*) + \lambda_1 (\mathbf{S} \circ \mathbf{S}^*) \mathbf{1}_M \mathbf{1}_M^T$.
7: $\mathbf{Z}_1 = \mathcal{F}_c^{-1}(\mathbf{S} \circ \mathbf{Y}_2)$.
8: $\mathbf{G}^{l+1} = 8N \, \mathrm{Re}\left( -j (\mathbf{X}^{l+1})^* \circ \mathbf{Z}_1(1:N, :) \right)$.
9: Compute the searching direction $\mathbf{D}^{l+1}$ according to (66).
10: $l = l + 1$.
Until convergence
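To make the flow of Algorithm 2 concrete, the sketch below gives one possible NumPy reading of it; for brevity it uses the plain steepest-descent direction $\mathbf{D} = -\mathbf{G}$ instead of the PRP update (66), and the sizes, weight coefficient and stopbands are placeholders rather than the settings used in Section 5.

```python
import numpy as np

def gra_corr_sfw(N=128, M=3, lam=0.9, stopbands=((0.20, 0.25), (0.55, 0.60)),
                 iters=500, seed=0):
    """Simplified Gra-Corr-SFW sketch (steepest-descent directions, approximate step size)."""
    rng = np.random.default_rng(seed)
    lam1 = (1.0 - lam) / (2 * N)
    grid = np.arange(2 * N) / (2 * N)
    w_f = np.zeros(2 * N)
    for f1, f2 in stopbands:                               # frequency weights of (12)
        w_f[(grid > f1) & (grid < f2)] = 1.0

    Phi = 2 * np.pi * rng.random((N, M))
    X = np.exp(1j * Phi)
    for _ in range(iters):
        S = np.fft.fft(X, n=2 * N, axis=0)
        Pd = np.abs(S) ** 2                                # columns p_11..p_MM
        # Gradient, Eqs. (73)-(74)
        Y2 = lam * w_f[:, None] * Pd + lam1 * Pd.sum(axis=1, keepdims=True)
        Z1 = np.fft.ifft(S * Y2, axis=0)
        G = 8 * N * np.real(-1j * np.conj(X) * Z1[:N, :])
        D = -G                                             # steepest descent instead of (66)
        # Step size via the cubic Equation (80) with coefficients (81)
        S1 = np.fft.fft(X * D, n=2 * N, axis=0)            # S'
        S2 = np.fft.fft(X * D * D, n=2 * N, axis=0)        # S''
        Cd = -2 * np.imag(S1 * np.conj(S))
        Cdb = -np.real(S2 * np.conj(S) - S1 * np.conj(S1))
        P1, C1, C1b = Pd.sum(1), Cd.sum(1), Cdb.sum(1)
        a1 = 4 * (lam * w_f @ (Cdb * Cdb).sum(1) + lam1 * (C1b @ C1b))
        b1 = 2 * (lam * w_f @ (2 * Cdb * Cd + Cd * Cdb).sum(1)
                  + lam1 * (2 * (C1b @ C1) + C1 @ C1b))
        c1 = 2 * (lam * w_f @ (2 * Cdb * Pd + Cd * Cd).sum(1)
                  + lam1 * (2 * (C1b @ P1) + C1 @ C1))
        d1 = 2 * (lam * w_f @ (Cd * Pd).sum(1) + lam1 * (C1 @ P1))
        roots = np.roots([a1, b1, c1, d1])
        pos = roots.real[(np.abs(roots.imag) < 1e-10) & (roots.real > 0)]
        if pos.size == 0:
            break                                          # degenerate case: stop
        Phi = Phi + pos.min() * D                          # smallest positive root as step size
        X = np.exp(1j * Phi)
    return X
```

Adding the PRP update of (66) and the stopping rules of Section 5 to this skeleton would recover the full per-iteration structure of Algorithm 2.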
To analyze the convergence speed, Table 1 presents the per iteration computational complexities of the proposed and existing algorithms. As shown in Table 1, the proposed Gra-WeCorr-SFW requires fewer (I)FFT operations than the MM-WeCorr-acc (MM-WeCorr-acceleration) at each iteration. Compared to these two algorithms, the per iteration computational complexities of the rest of the three algorithms (MM-Corr-acc (MM-Corr-acceleration), MDISAA-SFW and Gra-Corr-SFW), which do not consider the correlation weights, are much smaller. In addition to the per iteration computational complexity, the iteration number is also an important factor affecting the convergence speed. Thus, it is difficult to compare the convergence performance of the algorithms via the per iteration computational complexity. In the following section, several numerical experiments are provided to show the convergence performance of the proposed algorithms.

5. Numerical Experiments

To illustrate the effectiveness and superiority of the proposed algorithms, several numerical experiments are presented in this section. We first validate the monotonicity of the proposed algorithms and then assess the performance of the algorithms by designing three different waveform sets. The proposed algorithms are compared with the MM-Corr-acc [14], MM-WeCorr-acc [14] and MDISAA-SFW [37] algorithms, where MM-Corr-acc and MM-WeCorr-acc are the state-of-the-art algorithms for designing waveform sets with a good correlation property, and MDISAA-SFW is the state-of-the-art algorithm for designing the orthogonal waveform set (OWS) with the stopband constraint.
All of the experiments are performed on a PC with a 3.60-GHz i7-4790 CPU and 8GB RAM. The software environment is MATLAB 2012b. In the following experiments, all of the algorithms are initialized by the unimodular waveform sets with random phases.

5.1. Verification of The Monotonicity

In this subsection, we investigate the monotonicity of the proposed algorithms in three different waveform design problems, which are respectively complementary sets of sequences (CSS) design, waveform set design with zero correlation zone and orthogonal waveform set (OWS) design with the stopband constraint. In order to measure the monotonicity, define the relative error of the l-th iteration as follows:
$$\varepsilon_r^l = \frac{h(\mu^l) - h(\dot{\mu}^l)}{h(0) - h(\dot{\mu}^l)},$$
where $\mu^l$ denotes the approximate step size of the $l$-th iteration, which is obtained via the Taylor series expansion, and $\dot{\mu}^l$ denotes the optimal step size obtained by the search method. The smaller the value of $\varepsilon_r^l$ is, the closer the approximate step size is to the optimal step size. Thus, $\varepsilon_r^l$ can be used to express the accuracy of the approximate step size. When $\varepsilon_r^l < 1$, we have $h(\mu^l) < h(0)$, which means the objective function is decreasing at the $l$-th iteration. Therefore, we can measure the monotonicity of the algorithms by using the following peak relative error $P_{re}$:
$$P_{re} = \max\left\{ \varepsilon_r^l,\ l = 1, \ldots, N_I \right\},$$
where $N_I$ is the iteration number.
To simulate $\varepsilon_r^l$ and $P_{re}$, we use the Gra-WeCorr-SFW to design waveform sets with $M = 3$ waveforms, each of length $N = 256$. The simulation parameters of the different design problems are shown in Table 2. Additionally, we stop the algorithm after 200 iterations. Figure 1 shows the evolution curves of the relative error with respect to the iteration number. From Figure 1, we can see that the relative error is somewhat larger at the initial iterations. However, it decreases rapidly and stays substantially below $10^{-4}$ after several iterations. In the whole iteration process, the relative error is less than one, which means that the objective function is monotonically decreasing. Figure 2 shows the peak relative error of 100 random trials. It is easy to see that the peak relative error in all three cases is very small and less than one. This indicates that the monotonicity of the Gra-WeCorr-SFW is not affected by the initial iteration point. Since the Gra-Corr-SFW is a simplified version of the Gra-WeCorr-SFW, it can also guarantee that the objective function is monotonically decreasing at each iteration.

5.2. CSS Design or OWS Design

In Appendix A, we have demonstrated that the CSS design is equivalent to the OWS design. In this subsection, we apply the Gra-Corr-SFW to design CSS (or OWS). Since the lower bound of the CISL is zero, we choose $\mathrm{CISL} \leq \varepsilon$ as the stopping criterion for all of the algorithms in this experiment. The weight coefficient $\lambda$ and the correlation weights are the same as Case 1 in Table 2. Figure 3 shows the normalized autocorrelation sum of the waveform sets designed by the Gra-Corr-SFW, where the normalized autocorrelation sum is defined as:
$$r_{sum}(k) = 20\log_{10} \frac{\left| \sum_{m=1}^{M} r_{mm}(k) \right|}{\sum_{m=1}^{M} r_{mm}(0)},\quad k = 1-N, \ldots, N-1.$$
From Figure 3, we can see that the autocorrelation sum of the CSS is essentially a delta function. Additionally, as $\varepsilon$ decreases, the sidelobes of the CSS become lower and lower.
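The normalized autocorrelation sum (84) only requires the 2N-point PSDs of the designed waveforms; a short NumPy sketch (our own helper, assuming the waveform set is stored as an N x M matrix) is:

```python
import numpy as np

def autocorr_sum_db(X):
    """r_sum(k) of (84) in dB for a waveform set X (N x M), over lags k = 1-N, ..., N-1."""
    N = X.shape[0]
    F = np.fft.fft(X, n=2 * N, axis=0)
    r = np.fft.ifft(np.abs(F) ** 2, axis=0).sum(axis=1)   # sum_m r_mm over the stacked 2N lags
    lags = np.concatenate([r[N + 1:], r[:N]])             # reorder to k = 1-N, ..., N-1
    return 20 * np.log10(np.abs(lags) / np.abs(r[0]))     # r[0] = sum_m r_mm(0) = M*N
```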
Next, we compare the performance between the proposed Gra-Corr-SFW and the state-of-the-art MM-Corr-acc by designing waveform sets with different waveform number M and waveform length N. For both algorithms, we choose M and N as follows:
$$M = 3k,\quad N = 256k,\quad k = 1, \ldots, 5.$$
The evolution curves of MM-Corr-acc and Gra-Corr-SFW are shown in Figure 4a,b, respectively. From these two subfigures, we observe that the convergence speed of the proposed Gra-Corr-SFW is much faster than that of the MM-Corr-acc. Even for $(N = 1280, M = 15)$, the proposed algorithm takes 1.35 s to converge to $\mathrm{CISL} \leq 10^{-13}$, while the MM-Corr-acc takes 48.11 s.
Further, to eliminate the randomness, we repeat the algorithms 100 times for each $(M, N)$ pair and record the average iteration number $N_I$, the average running time $t$ and the average peak sidelobe level of the autocorrelation sum $P_{sum}$, where $P_{sum}$ is defined as:
$$P_{sum} = \max\left\{ r_{sum}(k),\ k = 1, \ldots, N-1 \right\}.$$
Then, we choose $\varepsilon = 10^{-1}$ for the stopping criterion. The performance parameters $N_I$, $t$, $P_{sum}$ of these two algorithms are provided in Table 3. As can be seen from this table, since the algorithms stop iterating when $\mathrm{CISL} \leq 10^{-1}$, the $P_{sum}$ values of these two algorithms are basically the same. At the same time, the average running time in Table 3 shows that the Gra-Corr-SFW is an order of magnitude faster than the MM-Corr-acc. From Table 1, we can see that the per iteration computational complexities of both the MM-Corr-acc and the Gra-Corr-SFW are $O(8MN\log_2 N)$. Therefore, the reason for the slower convergence of the MM-Corr-acc may be that the MM strategy in this algorithm makes the objective function loose, so that the iteration number of the MM-Corr-acc is larger than that of the Gra-Corr-SFW (as shown in Table 3).

5.3. Waveform Set Design with Zero Correlation Zone

To verify the effectiveness of the Gra-WeCorr-SFW, we consider the problem of suppressing the correlation sidelobes at specified intervals, i.e., designing a waveform set with a zero correlation zone (ZCZ), and compare the performance with the MM-WeCorr-acc. The experimental parameters are the same as Case 2 in Table 2. For both algorithms, we choose the value of the objective function $\psi$ in (4) as the stopping criterion, i.e., $\psi \leq \varepsilon$. The auto- and cross-correlations of the waveform sets designed by MM-WeCorr-acc and Gra-WeCorr-SFW are shown in Figure 5. From this figure, we can see that the proposed algorithm can also generate a waveform set whose correlation sidelobes are almost zero at the specified intervals.
Similar to Figure 4, for each $(M, N)$ pair, we simulate the evolution curves of the objective function $\psi$ with respect to the running time. The results of these two algorithms are presented in Figure 6. It is easy to see that the Gra-WeCorr-SFW is faster than the MM-WeCorr-acc. In addition, we can also find that when the objective function $\psi$ is less than $10^{-11}$, the convergence speed of the MM-WeCorr-acc decreases, especially for a large $(M, N)$ pair.
Further, Table 4 presents the comparison of performance parameters between MM-WeCorr-acc and Gra-WeCorr-SFW, where $P_{zcz}$ is the peak sidelobe level in the zero correlation zone, defined as:
$$P_{zcz} = \max\left\{ 20\log_{10} \frac{\left| r_{ij}(k) \right|}{N},\ i, j = 1, \ldots, M,\ k \in [51, 80] \right\}.$$
Similarly, for each $(M, N)$ pair, the algorithms are repeated 100 times. As can be seen from Table 4, the $P_{zcz}$ of the Gra-WeCorr-SFW is slightly lower than that of the MM-WeCorr-acc. Moreover, due to fewer iterations, the average running time of the Gra-WeCorr-SFW is about 50% of that of the MM-WeCorr-acc, which indicates that the former is more computationally efficient.

5.4. OWS Design with the Stopband Constraint

In this subsection, we consider the problem of designing OWS with the stopband constraint. Since the Gra-Corr-SFW is a simplified version of the Gra-WeCorr-SFW and needs far fewer (I)FFT operations, it is more efficient than the Gra-WeCorr-SFW. Thus, we only investigate the performance of the Gra-Corr-SFW and compare it with the MDISAA-SFW. Here, $\left\| \mathbf{X}^{l+1} - \mathbf{X}^l \right\|_F \leq 10^{-4}$ is employed as the stopping criterion, and the experimental parameters are the same as Case 3 in Table 2. First, we apply these two algorithms to design OWS with the stopband constraint. Suppose the waveform number $M$ is three and the waveform length $N$ is 256. The correlation levels and spectral power of the waveform sets designed by MDISAA-SFW and Gra-Corr-SFW are shown in Figure 7 and Figure 8. From Figure 7, we can see that the waveform set designed by the MDISAA-SFW has better autocorrelation performance, while the waveform set designed by the Gra-Corr-SFW has better cross-correlation performance. This is because the MDISAA-SFW optimizes the autocorrelation explicitly and, thus, places more emphasis on suppressing the autocorrelation sidelobes. Figure 8 indicates that the stopband spectral power of the waveform set designed by the Gra-Corr-SFW is less than that of the waveform set designed by the MDISAA-SFW.
In order to fully compare the algorithms, we define the peak autocorrelation sidelobe P a c , peak cross-correlation P c c , peak stopband power P s p and integrated sidelobe level (ISL) [16] as follows:
$$\begin{aligned} P_{ac} &= \max\left\{ 20\log_{10} \frac{\left| r_{ii}(k) \right|}{N},\ i = 1, \ldots, M,\ k = 2, \ldots, N-1 \right\}, \\ P_{cc} &= \max\left\{ 20\log_{10} \frac{\left| r_{ij}(k) \right|}{N},\ i, j = 1, \ldots, M,\ i \neq j,\ k = 0, \ldots, N-1 \right\}, \\ P_{sp} &= \max\left\{ P_{stop}^i,\ i = 1, \ldots, M \right\}, \\ ISL &= \sum_{i=1}^{M} \sum_{k=1-N,\,k \neq 0}^{N-1} \left| r_{ii}(k) \right|^2 + \sum_{i=1}^{M} \sum_{j=1,\,j \neq i}^{M} \sum_{k=1-N}^{N-1} \left| r_{ij}(k) \right|^2, \end{aligned}$$
where $P_{stop}^i$ denotes the peak stopband power of the waveform $\mathbf{x}_i$. Among the above four performance parameters, $P_{ac}$ and $P_{cc}$ are used to measure the autocorrelation and cross-correlation performances, respectively; $P_{sp}$ is the parameter related to the stopband performance; and $ISL$ indicates the overall sidelobe performance. Since the decrease of the bandwidth leads to the broadening of the main lobe, $r_{ii}(1),\ i = 1, \ldots, M$, can be regarded as part of the autocorrelation main lobe. Thus, the definition of $P_{ac}$ does not take $k = 1$ into account. Here, we consider waveform sets with $M = 3$ waveforms and each waveform of length $N \in \{256, 512, 1024\}$. For each $(M, N)$ pair, we repeat these two algorithms 100 times and record the average values of the performance parameters. The comparison of the performance parameters between MDISAA-SFW and Gra-Corr-SFW is shown in Table 5. In addition to the parameters defined in (88), the average iteration number $N_I$ and the running time $t$ are provided in Table 5. From this table, we obtain three findings, as follows. First, the $P_{cc}$ and $P_{sp}$ of the Gra-Corr-SFW are lower than those of the MDISAA-SFW, which means that the cross-correlation and stopband performances of the Gra-Corr-SFW are better than those of the MDISAA-SFW. However, in terms of autocorrelation performance, the Gra-Corr-SFW is inferior to the MDISAA-SFW. From the $ISL$ in Table 5, it is easy to see that the overall sidelobe performance of the Gra-Corr-SFW is better than that of the MDISAA-SFW. Furthermore, although the per iteration computational complexity of the Gra-Corr-SFW is slightly higher than that of the MDISAA-SFW (see Table 1), the Gra-Corr-SFW needs fewer iterations due to the fast convergence of the gradient algorithm, which makes the proposed algorithm more efficient. Finally, as the waveform length increases, both the correlation and stopband performances of these two algorithms improve, especially the cross-correlation performance.
Next, we investigate the performance of the algorithms under different $\lambda$. The average values (100 trials) of $P_{ac}$, $P_{cc}$ and $P_{sp}$ are shown in Figure 9. From this figure, we can see that as $\lambda$ increases, the $P_{ac}$ and $P_{cc}$ of these two algorithms change slowly, while the $P_{sp}$ is sensitive to the change of $\lambda$. It can also be observed that, except for the autocorrelation performance, the cross-correlation and stopband performances of the Gra-Corr-SFW are better than those of the MDISAA-SFW. This means that the stopband property can be easily obtained by using the Gra-Corr-SFW. Moreover, since the correlation performance changes slowly as $\lambda$ increases, we can choose a relatively large $\lambda$ (e.g., $\lambda \in (0.85, 0.95)$) for the Gra-Corr-SFW to obtain both good correlation and stopband properties.

6. Conclusions

In this paper, we propose an efficient algorithm named Gra-WeCorr-SFW for designing waveform sets with good correlation and stopband properties. The algorithm optimizes the objective function directly and can guarantee that the objective function is monotonically decreasing at each iteration. By changing the design parameters, the Gra-WeCorr-SFW can be used to generate different waveform sets, such as CSS, waveform sets with a ZCZ and OWS with the stopband constraint. As the main steps can be implemented by FFT operations and the Hadamard product, the proposed algorithm is computationally efficient and can be used to design waveform sets with large M and N. Besides, a simplified version of the Gra-WeCorr-SFW (named Gra-Corr-SFW) is proposed for the design problem without considering the correlation weights. Compared to the Gra-WeCorr-SFW, the simplified algorithm requires fewer FFT operations and is faster. Numerical experiments show that the proposed algorithms are faster than the state-of-the-art algorithms (MM-WeCorr-acc and MM-Corr-acc) when designing CSS or waveform sets with a ZCZ. In the case of designing OWS with the stopband constraint, the simplified algorithm has better stopband performance and computational efficiency compared with the MDISAA-SFW.

Acknowledgments

This work is supported by the Key Projects in the National Science and Technology Pillar Program during the Twelfth Five-year Plan Period (No.2014BAK12B00) and the National Natural Science Foundation of China (No.61101186 and No.61101179).

Author Contributions

Liang Tang conceived of and devised the idea. Liang Tang and Yongfeng Zhu designed and performed the experiments. Yongfeng Zhu and Qiang Fu analyzed the experiment results. All authors contributed to writing and editing the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MIMO	multiple-input multiple-output
(I)FFT	(inverse) fast Fourier transform
CSS	complementary sets of sequences
OWS	orthogonal waveform set
CAN	cyclic algorithm-new
WeCAN	weighted cyclic algorithm-new
LBFGS	limited-memory Broyden, Fletcher, Goldfarb and Shanno
MM	majorization-minimization
MM-Corr	MM-correlation
MM-Corr-acc	MM-correlation-acceleration
MM-WeCorr	MM-weighted correlation
MM-WeCorr-acc	MM-weighted correlation-acceleration
CDMA	code-division multiple-access
OFDM	orthogonal frequency division multiplexing
SFW	sparse frequency waveforms
HFSWR	high frequency surface wave radar
UWB	ultra-wide bandwidth
MDISAA-SFW	multi-dimensional iterative spectral approximation algorithm-SFW
PSD	power spectrum density
Gra-WeCorr-SFW	gradient-weighted correlation-SFW
Gra-Corr-SFW	gradient-correlation-SFW
CISL	complementary integrated sidelobe level

Appendix A

Proof. 
Due to the unimodular constraint, the energy of each waveform is $N$, i.e., $r_{ii}(0) = N,\ i = 1, \ldots, M$. Then, (3) can be rewritten as:
$$\mathrm{CISL} = \sum_{k=1-N}^{N-1} \left| \sum_{m=1}^{M} r_{mm}(k) \right|^2 - \left| \sum_{m=1}^{M} r_{mm}(0) \right|^2 = \sum_{k=1-N}^{N-1} \left| \sum_{m=1}^{M} r_{mm}(k) \right|^2 - M^2 N^2.$$
Since $\left| \sum_{m=1}^{M} r_{mm}(k) \right|^2 = \sum_{i=1}^{M} \sum_{j=1}^{M} r_{ii}^*(k) r_{jj}(k)$, we have:
$$\mathrm{CISL} = \sum_{i=1}^{M} \sum_{j=1}^{M} \sum_{k=1-N}^{N-1} r_{ii}^*(k) r_{jj}(k) - M^2 N^2 = \sum_{i=1}^{M} \sum_{j=1}^{M} \mathbf{r}_{ii}^H \mathbf{r}_{jj} - M^2 N^2.$$
By substituting (6) into (A2), CISL can be expressed as:
$$\mathrm{CISL} = \frac{1}{2N} \sum_{i=1}^{M} \sum_{j=1}^{M} \mathbf{p}_{ii}^H \mathbf{p}_{jj} - M^2 N^2.$$
Assume $\mathbf{f}_m$ is the $2N$-point discrete Fourier transform (DFT) vector of the waveform $\mathbf{x}_m$, $m = 1, \ldots, M$, i.e.,
$$f_m(i) = \sum_{n=1}^{N} x_m(n) e^{-j n \omega_i},\quad \omega_i = \frac{2\pi}{2N} i,\quad i = 1, \ldots, 2N;$$
then the PSD vector $\mathbf{p}_{ij}$ can be written as:
$$\mathbf{p}_{ij} = \mathbf{f}_i \circ \mathbf{f}_j^*.$$
According to (A5), it is easy to verify that $\mathbf{p}_{ii}^H \mathbf{p}_{jj} = \mathbf{p}_{ij}^H \mathbf{p}_{ij}$. Thus, (A3) can be denoted as:
$$\mathrm{CISL} = \frac{1}{2N} \sum_{i=1}^{M} \sum_{j=1}^{M} \mathbf{p}_{ij}^H \mathbf{p}_{ij} - M^2 N^2.$$
Let $w_t(k) = 1,\ k = 1-N, \ldots, N-1$; then, we have $\mathbf{D}_t = \mathrm{Diag}(\mathbf{1}_{2N}) = \mathbf{I}_{2N}$. Thus, (8) can be simplified as:
$$\psi_1 = \frac{1}{2N} \sum_{i=1}^{M} \sum_{j=1}^{M} \mathbf{p}_{ij}^H \mathbf{p}_{ij} - M N^2.$$
From (A6) and (A7), we can observe that the only difference between CISL and $\psi_1$ is the constant term, i.e., CISL and $\psi_1$ are equivalent. The proof is complete. ☐
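A quick numerical check of this equivalence is sketched below (NumPy, arbitrary unimodular set): CISL computed from the autocorrelation sum and $\psi_1$ computed from the PSDs should differ only by the constant $MN^2(M-1)$, i.e., $\mathrm{CISL} = \psi_1 - MN^2(M-1)$ under our reconstruction of (A6) and (A7).

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 32, 3
X = np.exp(1j * 2 * np.pi * rng.random((N, M)))
F = np.fft.fft(X, n=2 * N, axis=0)

# CISL from the autocorrelation sum (Eq. 3)
acs = np.fft.ifft(np.abs(F) ** 2, axis=0).sum(axis=1)     # sum_m r_mm over stacked lags
cisl = np.sum(np.abs(acs) ** 2) - np.abs(acs[0]) ** 2     # drop the k = 0 term

# psi_1 from the PSDs (Eq. A7)
psi1 = sum(np.sum(np.abs(F[:, i] * np.conj(F[:, j])) ** 2)
           for i in range(M) for j in range(M)) / (2 * N) - M * N ** 2

assert np.isclose(cisl, psi1 - M * N ** 2 * (M - 1))      # differ only by the constant term
```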

Appendix B

Proof. 
According to (A5), it is easy to verify that:
$$\mathbf{p}_{im} = \mathbf{p}_{mi}^*.$$
By substituting (A8) into $\sum_{i=1, i\neq m}^{M} \mathbf{p}_{im}^H \bar{\mathbf{Q}} \mathbf{p}_{im}$, we have:
$$\sum_{i=1, i\neq m}^{M} \mathbf{p}_{im}^H \bar{\mathbf{Q}} \mathbf{p}_{im} = \sum_{i=1, i\neq m}^{M} \mathbf{p}_{mi}^T \bar{\mathbf{Q}} \mathbf{p}_{mi}^* = \sum_{i=1, i\neq m}^{M} \left( \mathbf{p}_{mi}^H \bar{\mathbf{Q}}^T \mathbf{p}_{mi} \right)^T = \sum_{i=1, i\neq m}^{M} \mathbf{p}_{mi}^H \bar{\mathbf{Q}}^T \mathbf{p}_{mi}.$$
Since:
$$\bar{\mathbf{Q}}^H = \left( \frac{1}{4N^2} (1-\lambda) \mathbf{F} \mathbf{D}_t \mathbf{F}^H \right)^H = \bar{\mathbf{Q}},$$
we have:
$$\bar{\mathbf{Q}}^T = \left( \bar{\mathbf{Q}}^H \right)^* = \bar{\mathbf{Q}}^*.$$
Thus, (A9) can be rewritten as:
$$\sum_{i=1, i\neq m}^{M} \mathbf{p}_{im}^H \bar{\mathbf{Q}} \mathbf{p}_{im} = \sum_{i=1, i\neq m}^{M} \mathbf{p}_{mi}^H \bar{\mathbf{Q}}^* \mathbf{p}_{mi}.$$
The proof is complete. ☐

Appendix C

Proof. 
Here, we temporarily replace the subscripts $i, j$ with $p, q$ (i.e., $\mathbf{p}_{pq}^{l+1}$) to avoid confusion between the index $j$ and the imaginary unit $j$. According to (A4), (A5) and (40), the $k$-th element of $\mathbf{p}_{pq}^{l+1}$ can be written as:
$$p_{pq}^{l+1}(k) = f_p^{l+1}(k) \left( f_q^{l+1}(k) \right)^* = \sum_{n=1}^{N} \sum_{m=1}^{N} x_p^l(n) \left( x_q^l(m) \right)^* e^{j\mu\left( d_p^l(n) - d_q^l(m) \right)} e^{-j \frac{2\pi}{2N} (n-m) k},$$
where $x_p^l(n)$, $d_p^l(n)$ denote the $n$-th elements of $\mathbf{x}_p^l$ and $\mathbf{d}_p^l$. By using the Taylor series, the expansion of $e^{j\mu( d_p^l(n) - d_q^l(m))}$ in (A13) (of which we keep the first three terms) is given by:
$$e^{j\mu\left( d_p^l(n) - d_q^l(m) \right)} \approx 1 + j\mu\left( d_p^l(n) - d_q^l(m) \right) + \frac{\left( j\mu\left( d_p^l(n) - d_q^l(m) \right) \right)^2}{2} = 1 + j\mu\left( d_p^l(n) - d_q^l(m) \right) - \frac{\mu^2}{2}\left( d_p^l(n) - d_q^l(m) \right)^2.$$
Thus, $p_{pq}^{l+1}(k)$ can be approximated as:
$$p_{pq}^{l+1}(k) \approx \sum_{n=1}^{N}\sum_{m=1}^{N} x_p^l(n) \left( x_q^l(m) \right)^* e^{-j \frac{2\pi}{2N}(n-m)k} + j\mu \sum_{n=1}^{N}\sum_{m=1}^{N} x_p^l(n)\left( x_q^l(m) \right)^* \left( d_p^l(n) - d_q^l(m) \right) e^{-j \frac{2\pi}{2N}(n-m)k} - \frac{\mu^2}{2} \sum_{n=1}^{N}\sum_{m=1}^{N} x_p^l(n)\left( x_q^l(m) \right)^* \left( d_p^l(n) - d_q^l(m) \right)^2 e^{-j \frac{2\pi}{2N}(n-m)k}.$$
Let:
$$\mathbf{f}_i = \mathcal{F}\left( \mathbf{x}_i^l \right),\quad \mathbf{f}'_i = \mathcal{F}\left( \mathbf{x}_i^l \circ \mathbf{d}_i^l \right),\quad \mathbf{f}''_i = \mathcal{F}\left( \mathbf{x}_i^l \circ \mathbf{d}_i^l \circ \mathbf{d}_i^l \right).$$
Then, (A15) can be rewritten as:
$$p_{pq}^{l+1}(k) \approx f_p(k) f_q^*(k) + j\mu\left( f'_p(k) f_q^*(k) - f_p(k) \left( f'_q(k) \right)^* \right) - \frac{\mu^2}{2}\left( f''_p(k) f_q^*(k) + f_p(k)\left( f''_q(k) \right)^* - 2 f'_p(k) \left( f'_q(k) \right)^* \right),$$
where $f_p(k)$, $f'_p(k)$, $f''_p(k)$ are the $k$-th elements of $\mathbf{f}_p$, $\mathbf{f}'_p$, $\mathbf{f}''_p$. By representing (A17) as a vector, we have:
$$\mathbf{p}_{pq}^{l+1} \approx \mathbf{f}_p \circ \mathbf{f}_q^* + j\left( \mathbf{f}'_p \circ \mathbf{f}_q^* - \mathbf{f}_p \circ \mathbf{f}_q'^* \right)\mu - \frac{1}{2}\left( \mathbf{f}''_p \circ \mathbf{f}_q^* + \mathbf{f}_p \circ \mathbf{f}_q''^* - 2\mathbf{f}'_p \circ \mathbf{f}_q'^* \right)\mu^2.$$
The proof is complete. ☐

References

  1. Blunt, S.D.; Mokole, E.L. Overview of radar waveform diversity. IEEE Aerosp. Electron. Syst. Mag. 2016, 31, 2–42. [Google Scholar] [CrossRef]
  2. Cheng, X.; Aubry, A.; Ciuonzo, D.; De Maio, A.; Wang, X. Robust waveform and filter bank design of polarimetric radar. IEEE Trans. Aerosp. Electron. Syst. in press. [CrossRef]
  3. He, H.; Li, J.; Stoica, P. Waveform Design for Active Sensing Systems: A Computational Approach; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  4. Chen, C.Y.; Vaidyanathan, P.P. MIMO radar waveform optimization with prior information of the extended target and clutter. IEEE Trans. Signal Process. 2009, 57, 3533–3544. [Google Scholar] [CrossRef]
  5. Ciuonzo, D.; De Maio, A.; Foglia, G.; Piezzo, M. Intrapulse radar-embedded communications via multiobjective optimization. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 2960–2974. [Google Scholar] [CrossRef]
  6. Li, J.; Stoica, P. MIMO Radar Signal Processing; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  7. Li, J.; Stoica, P.; Zheng, X.Y. Signal synthesis and receiver design for MIMO radar imaging. IEEE Trans. Signal Process. 2008, 56, 3959–3968. [Google Scholar] [CrossRef]
  8. Wang, Z.J.; Babu, P.; Palomar, D.P. Design of PAR-constrained sequences for MIMO channel estimation via majorization-minimization. IEEE Trans. Signal Process. 2016, 64, 6132–6144. [Google Scholar] [CrossRef]
  9. Deng, H. Polyphase code design for orthogonal netted radar systems. IEEE Trans. Signal Process. 2004, 52, 3126–3135. [Google Scholar] [CrossRef]
  10. Khan, H.A.; Zhang, Y.Y.; Ji, C.L.; Stevens, C.J.; Edwards, D.J.; O’Brien, D. Optimizing polyphase sequences for orthogonal netted radar. IEEE Signal Process. Lett. 2006, 13, 589–592. [Google Scholar] [CrossRef]
  11. He, H.; Stoica, P.; Li, J. Designing unimodular sequence sets with good correlations-including an application to MIMO radar. IEEE Trans. Signal Process. 2009, 57, 4391–4405. [Google Scholar] [CrossRef]
  12. Wang, Y.C.; Dong, L.; Xue, X.; Yi, K.C. On the design of constant modulus sequences with low correlation sidelobes levels. IEEE Commun. Lett. 2012, 16, 462–465. [Google Scholar] [CrossRef]
  13. Wu, H.; Fu, Q.; Li, Y.X.; Xia, Y. Designing sequence sets with good correlation properties based on PSD fitting. Electron. Lett. 2015, 51, 1193–1195. [Google Scholar] [CrossRef]
  14. Song, J.; Babu, P.; Palomar, D.P. Sequence set design with good correlation properties via majorization-minimization. IEEE Trans. Signal Process. 2016, 64, 2866–2879. [Google Scholar] [CrossRef]
  15. O’Donnell, B.; Baden, J.M. Fast gradient descent for multi-objective waveform design. In Proceedings of the 2016 IEEE Radar Conference, Philadelphia, PA, USA, 2–6 May 2016. [Google Scholar]
  16. He, H.; Stoica, P.; Li, J. On aperiodic-correlation bounds. IEEE Signal Process. Lett. 2010, 17, 253–256. [Google Scholar] [CrossRef]
  17. Bomer, L.; Antweiler, M. Periodic complementary binary sequences. IEEE Trans. Inf. Theory 1990, 36, 1487–1494. [Google Scholar] [CrossRef]
  18. Levanon, N. Noncoherent radar pulse compression based on complementary sequences. IEEE Trans. Aerosp. Electron. Syst. 2009, 45, 742–747. [Google Scholar] [CrossRef]
  19. Wu, H.; Song, Z.; Liu, Z.; Fu, Q. Complete complementary sequence for MIMO radar waveform design with low range sidelobes. In Proceedings of the 2015 IET International Radar Conference, Hangzhou, China, 14–16 October 2015. [Google Scholar]
  20. Li, F.; Chen, J.; Zhang, L. Optimisation of complete complementary codes in MIMO radar system. Electron. Lett. 2010, 46, 1157–1159. [Google Scholar] [CrossRef]
  21. Tseng, S.M.; Bell, M.R. Asynchronous multicarrier DS-CDMA using mutually orthogonal complementary sets of sequences. IEEE Trans. Commun. 2000, 48, 53–59. [Google Scholar] [CrossRef]
  22. Spasojevic, P.; Georghiades, C.N. Complementary sequences for ISI channel estimation. IEEE Trans. Inf. Theory 2001, 47, 1145–1152. [Google Scholar] [CrossRef]
  23. Schmidt, K.U. Complementary sets, generalized Reed-Muller codes, and power control for OFDM. IEEE Trans. Inf. Theory 2007, 53, 808–814. [Google Scholar] [CrossRef]
  24. Soltanalian, M.; Stoica, P. Computational design of sequences with good correlation properties. IEEE Trans. Signal Process. 2012, 60, 2180–2193. [Google Scholar] [CrossRef]
  25. Soltanalian, M.; Naghsh, M.M.; Stoica, P. A fast algorithm for designing complementary sets of sequences. Signal Process. 2013, 93, 2096–2102. [Google Scholar] [CrossRef]
  26. Stoica, P.; He, H.; Li, J. New algorithms for designing unimodular sequences with good correlation properties. IEEE Trans. Signal Process. 2009, 57, 1415–1425. [Google Scholar] [CrossRef]
  27. Lindenfeld, M.J. Sparse frequency transmit and receive waveform design. IEEE Trans. Aerosp. Electron. Syst. 2004, 40, 851–861. [Google Scholar] [CrossRef]
  28. Kassab, R.; Lesturgie, M.; Fiorina, J. Performance analysis of interrupted sparse HFSWR waveform coded with successive fast Fourier transforms. Electron. Lett. 2009, 45, 525–526. [Google Scholar] [CrossRef]
  29. Liu, W.X.; Lesturgie, M.; Lu, Y.L. Real-time sparse frequency waveform design for HFSWR system. Electron. Lett. 2007, 43, 1387–1389. [Google Scholar] [CrossRef]
  30. Haykin, S. Cognitive radar: A way of the future. IEEE Signal Process. Mag. 2006, 23, 30–40. [Google Scholar] [CrossRef]
  31. Wang, G.H.; Mai, C.Y.; Sun, J.P.; Lu, Y.L. Sparse frequency waveform analysis and design based on ambiguity function theory. IET Radar Sonar Navig. 2016, 10, 707–717. [Google Scholar]
  32. Liang, J.L.; So, H.C.; Li, J.; Farina, A. Unimodular sequence design based on alternating direction method of multipliers. IEEE Trans. Signal Process. 2016, 64, 5367–5381. [Google Scholar] [CrossRef]
  33. Feng, X.; Song, Y.C.; Zhou, Z.Q.; Zhao, Y.N. Designing unimodular waveform with low range sidelobes and stopband for cognitive radar via relaxed alternating projection. Int. J. Antennas Propag. 2016, 2016. [Google Scholar] [CrossRef]
  34. Liang, J.; So, H.C.; Leung, C.S.; Li, J.; Farina, A. Waveform design with unit modulus and spectral shape constraints via Lagrange programming neural network. IEEE J. Sel. Topics Signal Process. 2015, 9, 1377–1386. [Google Scholar] [CrossRef]
  35. Rowe, W.; Stoica, P.; Li, J. Spectrally constrained waveform design. IEEE Signal Process. Mag. 2014, 31, 157–162. [Google Scholar] [CrossRef]
  36. Wang, G.; Lu, Y. Designing single/multiple sparse frequency waveforms with sidelobe constraint. IET Radar Sonar Navig. 2011, 5, 32–38. [Google Scholar] [CrossRef]
  37. Zhao, Y.N.; Li, F.C.; Zhang, T.; Zhou, Z.Q. Computational design of optimal waveforms for MIMO radar via multi-dimensional iterative spectral approximation. Multidim. Syst. Signal Process. 2016, 27, 43–60. [Google Scholar] [CrossRef]
  38. Yue, W.; Zhang, Y.; Liu, Y.; Xie, J. Radar constant-modulus waveform design with prior information of the extended target and clutter. Sensors 2016, 16. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Evolution of the relative error with respect to the iteration number.
Figure 2. The peak relative errors of 100 random trials.
Figure 3. The autocorrelation sum of the waveform sets designed by Gra-Corr-SFW with different ε (N = 256, M = 3).
Figure 4. Evolution of the CISL with respect to the running time: (a) MM-Corr-acc; (b) Gra-Corr-SFW (ε = 10⁻¹³).
Figure 5. The normalized auto- and cross-correlations of the waveform sets designed by MM-WeCorr-acc and Gra-WeCorr-SFW (ε = 10⁻¹⁰).
Figure 6. Evolution of the objective function with respect to the running time: (a) MM-WeCorr-acc; (b) Gra-WeCorr-SFW (ε = 10⁻¹³).
Figure 7. The correlation levels of the waveform sets designed by MDISAA-SFW and Gra-Corr-SFW.
Figure 8. The spectral power of the waveform sets designed by MDISAA-SFW and Gra-Corr-SFW.
Figure 9. The comparison of P_ac, P_cc and P_sp between MDISAA-SFW and Gra-Corr-SFW versus λ from 0 to 1: (a) P_ac; (b) P_cc; (c) P_sp (M = 3, N = 256).
Table 1. The per iteration computational complexities of different algorithms.

| Algorithm | Number of (I)FFTs | Complexity |
|---|---|---|
| MM-WeCorr-acc | 6M² + 8M | O((12M + 16)MN log₂N) |
| MM-Corr-acc | 4M | O(8MN log₂N) |
| MDISAA-SFW | 3M | O(6MN log₂N) |
| Gra-WeCorr-SFW | 5M² + 6M | O((10M + 12)MN log₂N) |
| Gra-Corr-SFW | 4M | O(8MN log₂N) |
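To give a concrete sense of where the FFT counts in Table 1 come from, the minimal NumPy sketch below (our own illustration, not code from any of the compared algorithms; the function name and the choice of 2N-point padding are ours) computes every aperiodic auto- and cross-correlation of an M × N waveform set using M forward FFTs of length 2N plus one inverse FFT per waveform pair, the same length-2N FFT building block whose per-iteration usage Table 1 tallies.

```python
import numpy as np

def aperiodic_correlations(X):
    """All aperiodic auto-/cross-correlations of an M x N waveform set X.

    Zero-padding to length 2N turns circular correlation into aperiodic
    correlation, so M forward FFTs plus one inverse FFT per waveform pair
    recover every correlation lag.
    """
    M, N = X.shape
    F = np.fft.fft(X, n=2 * N, axis=1)             # M forward FFTs of length 2N
    R = np.empty((M, M, 2 * N - 1), dtype=complex)
    for m in range(M):
        for l in range(M):
            c = np.fft.ifft(F[m] * np.conj(F[l]))  # one inverse FFT per pair
            # reorder the lags to run from -(N-1) through 0 to N-1
            R[m, l] = np.concatenate((c[-(N - 1):], c[:N]))
    return R

# Toy usage: M = 3 unimodular waveforms of length N = 256.
rng = np.random.default_rng(0)
X = np.exp(2j * np.pi * rng.random((3, 256)))
R = aperiodic_correlations(X)
print(R.shape)                   # (3, 3, 511)
print(round(abs(R[0, 0, 255])))  # zero-lag autocorrelation equals N = 256
```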
Table 2. Simulation parameters of different design problems.

| Case | λ | Correlation weights {w_k}, k = 1 − N, ..., N − 1 | Stopband Ω_f |
|---|---|---|---|
| Case 1 a | 0 | w_k = 1, k = 1 − N, ..., N − 1 | — |
| Case 2 b | 0 | w_k = 1 for k = 0 and for k ∈ [−80, −51] ∪ [51, 80]; w_k = 0 otherwise | — |
| Case 3 c | 0.9 | w_k = 1, k = 1 − N, ..., N − 1 | (0.04, 0.21), (0.23, 0.25), (0.28, 0.37), (0.39, 0.49), (0.52, 0.56) |

a: CSS design; b: waveform design with zero correlation zone; c: OWS design with stopband constraint.
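As a concrete reading of Table 2, the short sketch below (again our own illustration; the lag grid, the N-point normalized frequency grid and all variable names are assumptions rather than the authors' code) builds the Case 2 correlation weights and the Case 3 stopband mask.

```python
import numpy as np

N = 256  # waveform length used in the simulations

# Case 2: correlation weights over the lags k = -(N-1), ..., N-1
k = np.arange(-(N - 1), N)
w = np.zeros(k.shape, dtype=float)
w[k == 0] = 1.0                                   # w_k = 1 at the zero lag
w[(np.abs(k) >= 51) & (np.abs(k) <= 80)] = 1.0    # w_k = 1 on [-80, -51] and [51, 80]

# Case 3: stopband mask on an assumed normalized frequency grid p/N, p = 0, ..., N-1
stopbands = [(0.04, 0.21), (0.23, 0.25), (0.28, 0.37), (0.39, 0.49), (0.52, 0.56)]
freqs = np.arange(N) / N
in_stopband = np.zeros(N, dtype=bool)
for f_lo, f_hi in stopbands:
    in_stopband |= (freqs >= f_lo) & (freqs <= f_hi)

print(int(w.sum()))            # 61 weighted lags: k = 0 plus 2 * 30 zone lags
print(int(in_stopband.sum()))  # number of grid frequencies inside the stopbands
```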
Table 3. The comparison of the performance parameters between MM-Corr-acc and Gra-Corr-SFW.

| (M, N) | MM-Corr-acc t (s) | MM-Corr-acc N_I | MM-Corr-acc P_sum (dB) | Gra-Corr-SFW t (s) | Gra-Corr-SFW N_I | Gra-Corr-SFW P_sum (dB) |
|---|---|---|---|---|---|---|
| M = 3, N = 256 | 1.249 | 1687 | −63.8 | 0.071 | 239 | −65.9 |
| M = 6, N = 512 | 0.914 | 496 | −68.7 | 0.110 | 132 | −71.4 |
| M = 9, N = 768 | 1.558 | 460 | −72.0 | 0.178 | 131 | −74.7 |
| M = 12, N = 1024 | 2.531 | 484 | −74.4 | 0.265 | 132 | −77.5 |
| M = 15, N = 1280 | 4.483 | 587 | −76.1 | 0.343 | 138 | −79.1 |
Table 4. The comparison of the performance parameters between MM-WeCorr-acc and Gra-WeCorr-SFW (ε = 10⁻¹⁰).

| (M, N) | MM-WeCorr-acc t (s) | MM-WeCorr-acc N_I | MM-WeCorr-acc P_zcz (dB) | Gra-WeCorr-SFW t (s) | Gra-WeCorr-SFW N_I | Gra-WeCorr-SFW P_zcz (dB) |
|---|---|---|---|---|---|---|
| M = 3, N = 256 | 0.584 | 335 | −167.5 | 0.248 | 164 | −168.2 |
| M = 6, N = 512 | 2.691 | 407 | −178.1 | 1.348 | 176 | −179.5 |
| M = 9, N = 768 | 10.167 | 445 | −184.5 | 5.149 | 178 | −186.3 |
| M = 12, N = 1024 | 37.760 | 486 | −189.7 | 15.106 | 179 | −191.5 |
| M = 15, N = 1280 | 84.759 | 549 | −193.5 | 30.683 | 182 | −195.1 |
Table 5. The comparison of the performance parameters between MDISAA-SFW and Gra-Corr-SFW.

| Algorithm | N | P_ac (dB) | P_cc (dB) | P_sp (dB) | ISL | N_I | t (s) |
|---|---|---|---|---|---|---|---|
| MDISAA-SFW | 256 | −13.6 | −13.9 | −17.5 | 893,573 | 11,932 | 2.512 |
| MDISAA-SFW | 512 | −13.9 | −16.6 | −17.9 | 3,532,447 | 18,602 | 6.045 |
| MDISAA-SFW | 1024 | −14.1 | −19.3 | −18.2 | 14,013,417 | 27,268 | 14.080 |
| Gra-Corr-SFW | 256 | −12.5 | −15.9 | −20.3 | 832,482 | 1216 | 0.274 |
| Gra-Corr-SFW | 512 | −12.9 | −18.6 | −22.2 | 3,321,729 | 2100 | 1.013 |
| Gra-Corr-SFW | 1024 | −13.3 | −21.1 | −23.8 | 13,237,903 | 3577 | 3.033 |
