A New MCMC Sampling Based Segment Model for Radar Target Recognition

One of the main tools in radar target recognition is the high resolution range profile (HRRP). However, it is very sensitive to the aspect angle. One solution to this problem is to assume that the consecutive samples of the HRRP are independent and identically distributed (IID) within small frames of aspect angles, an assumption which does not hold in reality. Nevertheless, based on this assumption, some models have been developed to characterize the sequential information contained in multi-aspect radar echoes; consequently, they only consider the short-term dependency between consecutive samples. Here, we propose an alternative model, the segment model, to address the shortcomings of these assumptions. In addition, using a Markov chain Monte Carlo (MCMC) based Gibbs sampler as an iterative approach to estimate the parameters of the segment model, we show that the proposed method estimates the parameters with quite satisfactory accuracy and computational load.


Introduction
Radar target recognition was profoundly influenced by the development of high resolution radars, which made it possible to extract much more information from targets. One such feature vector, among the most powerful tools for radar target recognition, is the high resolution range profile [1,2,3,4,5]. However, it is a strong function of the target-radar aspect angle [6,7].
In order to solve this problem and benefit from the information of consecutive range profiles in the recognition process, a mathematical model should be developed for the statistical relation between consecutive range profiles. In [8,9], some solutions are proposed using the Gaussian distribution and its variants. There (i.e., in [8,9]), the consecutive range profiles are assumed to be independent and identically distributed within an aspect frame.
In [10,11,12], according to the physical behavior of the linear and rotational movement of the target, and taking electromagnetic backscattering considerations into account, a dynamic system (DS) model is proposed to capture the short-term dependency between consecutive samples of the HRRP in a segment of the HRRP sequence.
It should also be noted that in all of the aforementioned works (i.e., [8,9,10,11,12]), the signal's amplitudes are considered as the features extracted from the HRRP. In this paper, however, the locations of the dominant peaks are considered as the observed data, which are less sensitive to target fluctuations and receiver noise [13]. The feature extraction process follows the approach proposed in [14].
On the other hand, the hidden Markov model (HMM) is one of the most important tools for modeling non-stationary signals such as radar returns, speech, and handwriting, and is widely used for recognizing patterns formed from such signals [1,15]. This model is used as a basis in many signal processing applications [3,16,17]. For example, in the field of speech recognition, the model can be used to recognize phonemes (or other parts of the speech signal) in conjunction with each other [17]. In this model, the observations are emitted through a state sequence in which the transition to each state depends only on the previous one. In the previous HMM-based works on radar target recognition (e.g., [18,15,8]), each state can emit only one observation. However, by relaxing the number of observations emitted in each state, we obtain a more general model called the "segment model" [2], so that the dependency of an observation sample is not restricted to its previous sample.
Estimating the parameters of this model from the observations is the most important task in the training phase of the proposed target recognition procedure. To this end, we suggest a new approach based on the Markov chain Monte Carlo (MCMC) technique [19]. We use Gibbs sampling to generate the samples for the estimation phase of the algorithm. This technique is quite efficient, reducing sampling from a high-dimensional distribution to sampling from a series of low-dimensional distributions [20]. Moreover, it has further advantages, such as avoiding convergence to local maxima and a quite admissible computational burden [21].
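To illustrate the mechanism, the following minimal sketch runs a Gibbs sampler on a bivariate Gaussian toy problem (the correlation value and iteration count are illustrative assumptions, unrelated to the radar model): each full conditional is one-dimensional, which is exactly how the technique reduces high-dimensional sampling to a series of low-dimensional draws.

```python
import numpy as np

# Gibbs sampler for a bivariate Gaussian with correlation rho (a toy
# problem, not the radar model): each full conditional is a 1-D Gaussian,
# so the joint draw reduces to a sequence of low-dimensional draws.
rng = np.random.default_rng(0)
rho = 0.8
n_iter = 5000
x = y = 0.0
samples = np.empty((n_iter, 2))
for k in range(n_iter):
    # x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2)
    x = rng.normal(rho * y, np.sqrt(1.0 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1.0 - rho**2))
    samples[k] = (x, y)
emp_rho = np.corrcoef(samples[1000:].T)[0, 1]   # discard burn-in
```

After burn-in, the empirical correlation of the chain approaches the target value of 0.8, even though no two-dimensional draw was ever performed.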
The remainder of this paper is organized as follows. We introduce the linear segment model and its related assumptions in Sec. 2. In Sec. 3, we propose the Gibbs sampling procedure to estimate the parameters of the linear segment model, and the experimental results are shown in Sec. 4. Finally, Sec. 5 concludes the paper.

Problem Formulation

Feature Extraction
A common transmit signal in a high resolution radar is the wideband linear frequency modulated (LFM) chirp pulse. The echo of such a signal from an extended target can be written as

y_n = Σ_{k=1}^{K} α_k e^{j2π f_k n} + e_n,   n = 1, . . ., N,

where α_k and f_k denote the complex amplitude and frequency of the k'th dominant backscatterer, respectively, and K is the number of backscatterers. The time index is denoted by n, and N is the number of range profile samples. In addition, e_n is the additive receiver noise.
Letting µ be the LFM rate, we have

f_k = (2µ/c)(R_k − R_0),

where R_k is the k'th scatterer's range, R_0 is the reference range of the target, and c is the speed of light.
In this paper, we choose the f_k's as the feature vector for target recognition. In order to extract the feature vector, we use the RELAX algorithm [14]. Also, note that Y denotes the matrix of observed features.
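As a hedged sketch of this feature-extraction step (RELAX itself is not reproduced here; a plain zero-padded periodogram peak-pick stands in for it, and the signal parameters are illustrative assumptions):

```python
import numpy as np

# Stand-in for the feature extractor: pick the K largest spectral peaks of
# a zero-padded periodogram (the paper uses RELAX; this simple substitute
# only illustrates the "dominant peak locations" feature vector).
def dominant_peak_locations(echo, K, n_fft=4096, min_sep=0.02):
    spec = np.abs(np.fft.rfft(echo, n=n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0)        # normalized frequency
    peaks = []
    for i in np.argsort(spec)[::-1]:             # strongest bins first
        if all(abs(freqs[i] - p) > min_sep for p in peaks):
            peaks.append(freqs[i])               # suppress nearby sidelobes
        if len(peaks) == K:
            break
    return np.sort(np.array(peaks))              # ascending order, as o_i

# Synthetic echo of K = 2 scatterers at normalized frequencies 0.1 and 0.3
rng = np.random.default_rng(1)
n = np.arange(256)
echo = np.cos(2 * np.pi * 0.1 * n) + np.cos(2 * np.pi * 0.3 * n)
echo = echo + 0.1 * rng.standard_normal(n.size)
feat = dominant_peak_locations(echo, K=2)
```

In practice, RELAX iteratively re-estimates and subtracts each scatterer, which resolves closely spaced peaks far better than this simple stand-in.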

Segment Model
As mentioned in the introduction, the main difference between the previously used HMMs and the segment model is that the segment model relaxes the number of observations emitted in each state. A complete tutorial on the segment model can be found in [2]. Next, we introduce the parameters of the model. If we denote the set of possible states by {S_i}, i = 1, . . ., N, and the state of the m'th segment by h_m, then, under the Markov chain assumption (i.e., the dynamics of the system are described by a Markov chain), we have

P(h_m | h_{m−1}, h_{m−2}, . . ., h_1) = P(h_m | h_{m−1}).

The vector of prior probabilities is denoted by π = [π_1, π_2, . . ., π_N], with π_i = P(h_1 = S_i). We denote the vector of transition probabilities from state S_i by w_i, i.e.,

w_i = [w_{i1}, . . ., w_{iN}],   where   w_{ij} = P(h_m = S_j | h_{m−1} = S_i).    (5)

The emission behavior can be described by a joint probability density P(Y_m | h_m = S_i, l_m) for each state, where Y_m is the vector of observations emitted in the m'th segment and l_m denotes the length of Y_m. As mentioned before, this is the key point of the segment model: in the previously used HMM, each state can emit only one output, whereas in the segment model each state can produce a stream of observations of length l_m, which, as will be explained later, can be considered a random variable with a Poisson distribution. This is why, unlike the HMM, there is no need to include w_ii in (5).

If we denote the vector of all observations by Y = [o_1, o_2, . . ., o_T] and the boundaries of the m'th segment by s_m and e_m (note that e_m − s_m + 1 = l_m), then we have Y_m = [o_{s_m}, o_{s_m+1}, . . ., o_{e_m}]. Note that each observation (o_i) contains the dominant peak locations of the i'th radar return in ascending order. We denote its dimension by K.
Here, for the segment length (l_m), we consider the Poisson distribution with parameter ε_i. In addition, for the observations (Y_m), we assume a Gaussian process with standard deviation σ and a mean that varies along a linear trajectory; in other words, the observations of each state (S_i) are produced along a straight line, perturbed by Gaussian noise. Denoting the starting and end points' coordinates of the linear trajectory by b_s and b_e, respectively, we have

Y_m ∼ N( b_s + (b_e − b_s) Z_{l_m}, σ² I ),    (10)

where Z_l is a time-index vector of length l that maps segments of different durations onto the range zero to one. The reason for using the Poisson distribution is the approximately constant rate at which the target stays in an aspect frame. In addition, this is a commonly used distribution in similar works on the segment model (e.g., [2,20]).
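The generative process described above can be sketched as follows (the state count, transition matrix, Poisson means, trajectory endpoints, and σ are all illustrative assumptions; K = 1 peak per return for brevity):

```python
import numpy as np

# Generative sketch of the linear segment model: a Markov chain (with no
# self-transitions) picks the state of each segment, the segment length is
# Poisson(eps_i), and the observations drift linearly from b_s to b_e with
# Gaussian noise. All numeric values are illustrative; K = 1 for brevity.
rng = np.random.default_rng(2)
N = 3
pi = np.full(N, 1.0 / N)                 # prior state probabilities
W = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],           # w_ii = 0, as in the model
              [0.5, 0.5, 0.0]])
eps = np.array([20.0, 30.0, 25.0])       # Poisson segment-length means
b_s = np.array([0.10, 0.20, 0.30])       # trajectory starting points
b_e = np.array([0.20, 0.30, 0.40])       # trajectory end points
sigma = 0.01

def sample_sequence(n_segments):
    obs, states = [], []
    h = rng.choice(N, p=pi)
    for _ in range(n_segments):
        l = max(1, rng.poisson(eps[h]))
        z = np.linspace(0.0, 1.0, l)     # the time-index vector Z_l
        mean = b_s[h] + (b_e[h] - b_s[h]) * z
        obs.append(mean + sigma * rng.standard_normal(l))
        states.append(int(h))
        h = rng.choice(N, p=W[h])        # no self-transition
    return np.concatenate(obs), states

Y, q = sample_sequence(5)
```

Because w_ii = 0, consecutive segments always belong to different states, and the segment boundaries appear in the data as changes of slope in the peak-location tracks.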
We can summarize the model parameters in one parameter set λ = {π, {w_i}, {ε_i}, B, σ}, where B collects the trajectory endpoints b_s and b_e of all states. Our goal is to estimate (B, σ). However, the other parameters will also be used in our proposed procedure, as will be shown.

The Gibbs Sampling Procedure
1. Initialization phase: beginning from the initial distribution and sampling X^(0).
2. Sampling phase: at iteration k, drawing each component of X^(k) from its conditional distribution given the most recent values of all other components.
3. Estimation phase: estimating the desired parameters from the generated samples.
Next, we demonstrate the Gibbs sampler steps for the parameters of the linear segment model.

Initialization Phase
To initiate the Gibbs sampler, we need to specify the prior distributions. We define the priors as follows.
First of all, the prior vector of the initial state probabilities is sampled from an (N − 1)-dimensional Dirichlet distribution with equal coefficients.
The parameters of the Poisson distributions governing the number of observations emitted in each state, l_m, can be sampled from the Gamma distribution [20],

ε_i ∼ Gamma(u, v).    (14)

The vector w_i can be sampled initially from an (N − 1)-dimensional Dirichlet distribution with equal coefficients. Finally, the parameters related to the emission can be sampled from the uniform and Rayleigh distributions: the trajectory endpoints b_s and b_e from a uniform distribution and the standard deviation σ from a Rayleigh distribution.
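A sketch of this initialization phase is given below, with illustrative hyper-parameter values (the uniform range and Rayleigh scale are assumptions; u = 2, v = 0.2 are the values reported in Sec. 4):

```python
import numpy as np

# Sketch of the initialization phase: one draw from each prior family
# named above. The uniform range and Rayleigh scale are illustrative
# assumptions; u = 2, v = 0.2 are the values reported in Sec. 4.
rng = np.random.default_rng(3)
N, K = 3, 2
u, v = 2.0, 0.2

pi0 = rng.dirichlet(np.ones(N))                    # initial-state probabilities
eps0 = rng.gamma(shape=u, scale=1.0 / v, size=N)   # Poisson rates eps_i
W0 = np.zeros((N, N))                              # transition rows, w_ii = 0
for i in range(N):
    W0[i, np.arange(N) != i] = rng.dirichlet(np.ones(N - 1))
b_s0 = rng.uniform(0.0, 0.5, size=(N, K))          # trajectory starting points
b_e0 = rng.uniform(0.0, 0.5, size=(N, K))          # trajectory end points
sigma0 = rng.rayleigh(scale=0.05)                  # noise standard deviation
```

Each transition row is drawn over only the N − 1 reachable states, which keeps the diagonal of the transition matrix at zero from the very first iteration.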

Sampling Phase
The sampling procedure for each of the model parameters is as follows. Here, the superscript (k) denotes the estimated parameter at iteration k.
Step 2: Sampling the Poisson parameters (ε_i),

ε_i^(k) ∼ Gamma( u + d_i^(k−1), v + m_i^(k−1) ),

where d_i^(k−1) denotes the total number of observations in state S_i (or, equivalently, the sum of the segments' lengths in state S_i), and m_i^(k−1) is the number of segments in state S_i at iteration k − 1.
Step 3: Sampling the transition probabilities,

w_i^(k) ∼ Dir( 1 + n_1, . . ., 1 + n_{i−1}, 1 + n_{i+1}, . . ., 1 + n_N ),

where n_j is the number of transitions from state S_i to state S_j at iteration k − 1. Note that n_i is omitted, since w_ii does not exist in this model, as explained before.
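Steps 2 and 3 are conjugate updates, so they reduce to standard Gamma and Dirichlet draws. A sketch with illustrative counts follows (the Gamma(u, v) and flat Dirichlet priors follow the initialization phase; the values of d_i, m_i, and n_j below are placeholders for statistics read off the current state sequence):

```python
import numpy as np

# Conjugate conditional draws for Steps 2-3. The sufficient statistics
# d_i, m_i, n_j below are illustrative counts that would be read off the
# current state sequence q^(k-1).
rng = np.random.default_rng(4)
N = 3
u, v = 2.0, 0.2

d = np.array([120.0, 95.0, 140.0])   # total observations emitted in S_i
m = np.array([5.0, 4.0, 6.0])        # number of segments labelled S_i
n = np.array([[0, 3, 2],
              [2, 0, 2],             # n_j: transitions S_i -> S_j
              [3, 3, 0]])

# Step 2: Gamma-Poisson conjugacy -> eps_i ~ Gamma(u + d_i, v + m_i)
eps = rng.gamma(shape=u + d, scale=1.0 / (v + m))

# Step 3: Dirichlet draw over the N - 1 reachable states (n_i omitted)
W = np.zeros((N, N))
for i in range(N):
    off = np.arange(N) != i
    W[i, off] = rng.dirichlet(1.0 + n[i, off])
```

The Gamma draw concentrates around (u + d_i)/(v + m_i), i.e., roughly the average segment length observed in state S_i, which is exactly the quantity ε_i is meant to track.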
Step 4: Sampling the state trajectory's starting and end points (b_s, b_e). It can be shown that the corresponding conditional density can be sampled using a Gaussian distribution, in which k_1 and k_2 are drawn from normal distributions; the parameters A and B normalize the summations in C and D.
Step 5: Sampling the standard deviation (σ). If we denote the mean of the Gaussian distribution related to the sample at time t by µ_t, the conditional density of σ given the residuals y_t − µ_t follows from (10).

Step 6: Sampling the state sequence (q),

q_t^(k) ∼ f( q_t | q_{τ_a}^(k), . . ., q_{t−1}^(k), q_{t+1}^(k−1), . . ., q_{τ_b}^(k−1), Y ),

where τ_a and τ_b delimit a window around t, chosen so that the above conditional depends only on the states in the neighborhood of q_t. In effect, this results in choosing a state sequence in q that depends on the state q_t, so that the computational burden is reduced.
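For Step 5, a common conjugate treatment is an inverse-gamma draw for σ² from the squared residuals y_t − µ_t; the sketch below uses this standard form as an illustrative stand-in, since the paper's exact conditional (derived under its Rayleigh prior) is not reproduced here. The hyper-parameters a0, b0 and the synthetic residuals are assumptions.

```python
import numpy as np

# Step 5 sketch: with the segment means mu_t fixed, draw sigma from the
# squared residuals. An inverse-gamma update for sigma^2 is used as a
# standard conjugate stand-in (not the paper's exact Rayleigh-prior
# conditional); a0, b0 and the synthetic residuals are illustrative.
rng = np.random.default_rng(5)
resid = rng.normal(0.0, 0.02, size=500)   # y_t - mu_t, true sigma = 0.02
a0, b0 = 1.0, 1e-4                        # weak inverse-gamma hyper-params
a_post = a0 + resid.size / 2.0
b_post = b0 + 0.5 * np.sum(resid**2)
sigma2 = 1.0 / rng.gamma(shape=a_post, scale=1.0 / b_post)
sigma = np.sqrt(sigma2)
```

With 500 residuals, the draw concentrates tightly around the true value 0.02, mirroring the convergence behavior reported in Sec. 4.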

Estimation Phase
After the sampling phase, we use the MAP criterion to select the best parameter set λ^(k*) from the produced samples (the λ^(k)'s),

k* = argmax_k P( λ^(k) | Y ),    (30)

where k* stands for the index of the best set. The probability in (30) can be simplified using the Bayes relation,

P( λ^(k) | Y ) ∝ P( Y | q^(k), λ^(k) ) P( q^(k) | λ^(k) ) P( λ^(k) ).

The first two expressions on the right-hand side can be computed using the model formulation through equations (3)-(10), and P(λ^(k)) is the prior probability of λ^(k) at the k'th iteration, based on the related distributions discussed in Sec. 3.2.
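The estimation phase then amounts to keeping the iterate with the largest log-posterior. A minimal sketch (the stored sets and log-posterior values are illustrative placeholders for λ^(k) and log P(λ^(k) | Y)):

```python
import numpy as np

# Estimation-phase sketch: keep the sampled parameter set with the largest
# log-posterior. The stored sets and log-posterior values are placeholders
# for lambda^(k) and log P(lambda^(k) | Y).
rng = np.random.default_rng(6)
n_keep = 10
sampled_sets = [{"sigma": float(s)} for s in rng.uniform(0.01, 0.05, n_keep)]
log_post = rng.standard_normal(n_keep)    # stand-in for the MAP objective
k_star = int(np.argmax(log_post))         # index of the best set
best = sampled_sets[k_star]
```

Working in log space avoids underflow when the likelihood is a product over hundreds of observations, which is why Figs. 5, 10, and 11 plot the logarithm of the MAP probability.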

Simulations
In order to test the performance of the algorithm, we use a sequence of observations produced by a scatterer that follows linear trajectories over T = 500 time steps. Its path (or, equivalently, the observations) is plotted in Fig. 1. A linear segment model is considered for this target. It should be noted that, for (14), the initial parameters u = 2 and v = 0.2 worked well in practice.
Using the Gibbs sampler described in Sec. 3, the algorithm was run for 3000 iterations. The values of the states' starting points, b_s, during the 3000 iterations are shown in Fig. 2. In addition, the values of the end points, b_e, are depicted in Fig. 3. As can be seen, the algorithm converges quite fast, and after 200 iterations the results are satisfactory.
Furthermore, the estimated value of σ is plotted in Fig. 4. Similarly, after 200 iterations, the algorithm has converged to the true value, i.e., 0.02. Moreover, the logarithm of the MAP probability function of (30) is shown in Fig. 5.
For a more practical radar simulation, we next consider a target with two dominant scatterers in each state. In addition, for simplicity of comparison, we assume three aspect-angle frames (i.e., three states of the model). Subsequently, the performance of the estimation (used as the learning phase of target recognition) is tested for two extreme cases: 1. a rotating target with non-uniform angular velocity; 2. a non-rotating target with radial velocity relative to the radar.
The observations from the two scatterers of the target are plotted in Figs. 6 and 7 for the two aforementioned cases, respectively.
As can be seen from the scatterers' movement in Fig. 6, the state transitions occur faster in the first case (i.e., the rotating target). The corresponding state numbers for the two cases (i.e., the rotating and non-rotating target) and their estimates obtained by the proposed algorithm are depicted in Fig. 8 and Fig. 9, respectively.
As can be seen, in both cases, the emission boundaries are estimated with more than 99 percent accuracy.
Finally, the logarithms of the MAP probability functions for the aforementioned two cases are shown in Figs. 10 and 11. As in the previous results, the accuracy and fast convergence of the algorithm can be seen in these figures as well.

Conclusion
In this paper, inspired by speech recognition techniques, we framed the segment model for the problem of radar target recognition. Indeed, the speech signal and the radar echo signal share many characteristics. For example, both are autoregressive (AR) processes in the short term. Besides, the HMM performs quite well for both target and speech recognition. Considering these similarities, a new segment model was developed for HRRP-based radar target recognition. In addition, a new approach, based on the Gibbs sampler, was presented to estimate the parameters of this model.
For each segment, and correspondingly for each Markov state, we assumed a linear trajectory. In other words, the scatterer's track was split into linear trajectories, such that each segment could be characterized by a state. Finally, the state parameters of each segment were estimated through the Gibbs sampling approach.
In this way, we could overcome the major problem of HRRP-based target recognition techniques, i.e., sensitivity to the aspect angle. Simulations verified the theoretical results we developed.
For future work, our goal is to extend the linear model to a polynomial model, i.e., modeling each segment with a polynomial trajectory.

Fig. 5. The logarithm of MAP probability vs. the iteration number.

Fig. 8. The state number for the rotating target.

Fig. 11. The logarithm of MAP probability vs. the iteration number for the non-rotating target.