Article

A Robust Constellation Diagram Representation for Communication Signal and Automatic Modulation Classification

Pengfei Ma, Yuesen Liu, Lin Li, Zhigang Zhu and Bin Li
1 The 54th Research Institute of China Electronics Technology Group Corporation, Shijiazhuang 050081, China
2 School of Electronic Engineering, Xidian University, Xi’an 710071, China
3 Xi’an Satellite Control Center, Xi’an 710043, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Electronics 2023, 12(4), 920; https://doi.org/10.3390/electronics12040920
Submission received: 7 January 2023 / Revised: 2 February 2023 / Accepted: 8 February 2023 / Published: 12 February 2023
(This article belongs to the Special Issue Machine Learning for Radar and Communication Signal Processing)

Abstract: Automatic modulation recognition is a necessary part of cooperative and noncooperative communication systems and plays an important role in both military and civilian fields. Although the constellation diagram (CD) is an essential feature for distinguishing digital modulations, it is difficult to extract in a noncooperative, complex communication environment. Frequency offset, especially nonlinear frequency offset, is a central problem in such environments and greatly degrades the extraction of the traditional CD and the performance of modulation recognition methods. In this paper, we propose an antifrequency offset constellation diagram (AFO-CD) extraction method, which combines the constellation diagram with a convolutional neural network (CNN). The proposed representation captures how the CD changes with time and enables us to suppress the influence of frequency offset efficiently. Additionally, a residual-unit-based classifier is designed for multiscale feature extraction and modulation classification. The experimental results demonstrate that the proposed method effectively improves recognition accuracy and has good application prospects in complex electromagnetic environments.

1. Introduction

With the rapid development of information technology, automatic modulation recognition (AMC) technology plays a crucial role in wireless communication systems [1,2,3,4] and has been used for a variety of applications, both civilian and military [5,6]. Meanwhile, communication signals and the electromagnetic environment are becoming much denser and more complex, which creates various difficulties for the classification of unknown modulation modes [7].
To achieve efficient AMC, existing techniques are mainly divided into two categories: likelihood-based (LB) [8] and feature-based (FB) [9] methods. The LB methods are built on probability statistics and likelihood functions; they achieve high precision but at high computational complexity. The FB methods rely on feature extraction and classifier design. Commonly used features include instantaneous features, high-order statistics, the cyclic spectrum, and the wavelet transform. The classifiers mainly include unsupervised clustering [10], decision trees [11], artificial neural networks [12], Bayesian classifiers [13], and support vector machines (SVM) [14].
Deep learning (DL) is an important branch of artificial intelligence and a crucial direction of machine learning (ML) [15]. Deep learning overcomes the dependence of handcrafted feature extraction on professional experience and makes full use of the large amount of data available in communication systems. Feature representation, sequence representation, and image representation are the three kinds of inputs commonly used for deep learning networks.
Feature representation processes the signal into one or more features, such as higher-order cumulant features [16,17], cyclic spectrum [18], etc. Sequence representation processes signals into one-dimensional signal sequences, including amplitude-phase sequences [19,20], I/Q sequences [21], etc. Image representation processes signals into two-dimensional matrices, and then the classic image recognition algorithm can be used for modulation recognition. Image representation includes time-frequency diagrams [22], bispectrum diagrams [23], constellation diagram (CD) [24], etc.
This paper proposes an antifrequency offset constellation diagram (AFO-CD), which is a novel signal representation method. Compared with existing CD-based AMC methods, the proposed method represents the change of the CD with time, makes full use of the timing characteristics of the signal, and suppresses the influence of the frequency offset; it also has higher generalization and antiinterference abilities. Additionally, a constellation-specific recurrent neural network is designed as the classifier, which significantly improves the recognition performance.
The rest of this paper is organized as follows. Section 2 reviews related works. Section 3 presents the proposed AFO-CD algorithm and the deep learning network structure. The experimental results are given in Section 4, and Section 5 concludes the paper.

2. Related Works

To improve the recognition accuracy of modulation types, various deep learning networks have been proposed. Compared with traditional ML-based algorithms, DL-based approaches have clear advantages and feasibility. In [25], AlexNet and GoogLeNet are introduced for AMC. A modulation recognition algorithm based on ResNet50 and multifeature fusion is proposed to solve the problem of low accuracy under low signal-to-noise ratio (SNR) [26]. VGG networks such as VGG-16 and VGG-19 have shown good performance in image classification and are used for AMC by converting the sampled data of communication signals into gray images [27]. An end-to-end bidirectional long short-term memory (Bi-LSTM) network is proposed for AMC in [28], which has low computational complexity at low SNR. A combinatorial model, the convolutional long short-term deep neural network (CLDNN), is introduced in [29]; it combines the advantages of individual networks, including CNN, LSTM, and DNN. In [30], a CNN incorporating a time-frequency attention mechanism is proposed. Additionally, multimodal convolutional features are utilized to realize signal recognition [31,32].
Compared with the methods mentioned above, the CD is an essential feature for different digital modulations and also turns the modulation recognition problem into an image classification problem. For example, a graphic constellation projection (GCP) algorithm is proposed in [33], in which a deep belief network (DBN) is employed to mine the signal features and classify the modulation types. The CD maps the signal amplitude and phase onto a two-dimensional complex plane; it represents the information of a specific signal and illustrates the relative distribution characteristics of the different modulation states. However, in a noncooperative complex communication environment, the CD is greatly affected by the frequency offset.
To address the problem above, we reproduce some of the methods mentioned in the literature and test them on our data. Because the cumulants of white Gaussian noise above the second order are zero, we first adopt a recognition method based on high-order cumulants, using SVM and Bayesian models, which are commonly used in machine learning, as classifiers. The results show that these two methods are stable at low signal-to-noise ratio but perform poorly under high frequency offset; the selected high-order cumulants may simply not suit the model, whereas deep learning methods avoid manual feature selection. We therefore also experiment with deep learning methods. Considering that current deep learning methods mainly deal with two-dimensional data structures, we select VGG-16 and GoogLeNet, which perform well in image classification, and the results show that both models perform well. However, since both models consist only of simple convolutional and pooling layers, they lack the ability to capture temporal correlation. Therefore, the GRU module is utilized to solve the problem of information forgetting in long time series.
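As a point of reference, the cumulant-plus-SVM baseline discussed above can be sketched as follows. This is a minimal illustration rather than the exact baseline used in the paper: the specific cumulants (C20, C40, C42), the power normalization, and the variable names `signals` and `labels` are assumptions.

```python
# Minimal sketch (not the paper's exact baseline): high-order cumulant features
# fed to an SVM classifier, assuming complex baseband samples are available.
import numpy as np
from sklearn.svm import SVC

def cumulant_features(x):
    """C20, C40, C42 of a complex baseband sequence x, normalized by power."""
    x = x - x.mean()                       # remove any DC component
    m20 = np.mean(x ** 2)
    m21 = np.mean(np.abs(x) ** 2)          # signal power
    m40 = np.mean(x ** 4)
    m42 = np.mean(np.abs(x) ** 4)
    c40 = m40 - 3 * m20 ** 2               # fourth-order cumulant C40
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2   # fourth-order cumulant C42
    return np.array([np.abs(m20) / m21,
                     np.abs(c40) / m21 ** 2,
                     np.abs(c42) / m21 ** 2])

# Hypothetical usage: `signals` is a list of complex arrays, `labels` their classes.
# features = np.stack([cumulant_features(s) for s in signals])
# clf = SVC(kernel="rbf").fit(features, labels)
```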

3. Antifrequency Offset CD Extraction and Modulation Recognition Algorithm

3.1. The Proposed Model

The proposed modulation recognition method based on AFO-CD is shown in Figure 1. After signal reception, mixing, and downconversion, the in-phase/quadrature (I/Q) data are available. Our model includes AFO-CD extraction and modulation recognition. First, the AFO-CD is extracted as the feature for AMC. Then the AFO-CD features are fed into the residual double-gated recurrent neural network (RDGNN), which acts as the classifier, to obtain the modulation type.
This paper considers the additive white Gaussian noise (AWGN). The received digital signal is
r(t) = s(t) + n(t),
where s(t) represents an intermediate-frequency or high-frequency modulated signal, and n(t) represents noise.
A modulated signal with a carrier frequency of f_0 can be expressed as
s(t) = A(t)cos(2πf_0 t + ϕ(t) + φ_0),
where A(t) represents the amplitude, ϕ(t) is the signal phase, and φ_0 is the initial phase.
There are various modulation types of modern digital signals, which are suitable for different communication environments. Generally speaking, signals are mainly divided into four categories: multiple amplitude shift keying (MASK), multiple phase shift keying (MPSK), multiple frequency shift keying (MFSK), and multiple quadrature amplitude modulation (MQAM).
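For concreteness, the passband signal and noise models above can be simulated directly. The following is a minimal sketch, not the authors' MATLAB generator; the 4PSK mapping, the unit amplitude, the zero initial phase, and the use of the Table 1 values for sampling frequency, carrier frequency, and chip rate are assumptions.

```python
# Minimal sketch: a 4PSK passband signal s(t) = A(t)cos(2*pi*f0*t + phi(t) + phi0)
# with A(t) = 1 and phi0 = 0, plus AWGN, giving r(t) = s(t) + n(t).
import numpy as np

fs, f0, fd = 5e9, 370e6, 20e6          # sampling rate, carrier, chip rate (Table 1)
n_sym, sps = 1000, int(fs // fd)       # symbols per realization, samples per symbol

rng = np.random.default_rng(0)
phases = rng.choice([0, np.pi / 2, np.pi, 3 * np.pi / 2], size=n_sym)  # 4PSK symbol phases
phi = np.repeat(phases, sps)                    # piecewise-constant phase phi(t)
t = np.arange(phi.size) / fs
s = np.cos(2 * np.pi * f0 * t + phi)            # modulated passband signal

snr_db = -5                                     # one of the SNR points in Table 1
noise_power = np.mean(s ** 2) * 10 ** (-snr_db / 10)
r = s + rng.normal(scale=np.sqrt(noise_power), size=s.shape)   # received signal
```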

3.2. AFO-CD Feature Extraction

Current digital communication signals commonly adopt amplitude and phase modulation. In fact, the digital amplitude-phase modulation of a signal can be uniquely represented by its CD. Therefore, the CD is an efficient tool for analyzing digital modulation signals. In a noncooperative communication system, the carrier frequency is first estimated, and the signal is downconverted to baseband. Due to the estimation error, the baseband signal is always affected by a frequency offset, which has a significant effect on the CD.
The baseband signal is written as
s(t) = A(t)exp(j(2πΔf t + ϕ(t) + φ_0)),
where Δf is the frequency offset. It can be seen that the frequency offset affects the signal phase, while the amplitude remains unchanged. Traditional CD projection algorithms project all the data points of a signal onto a single diagram, which leads to ambiguity of the CD and directly affects the modulation recognition of the signal.
In fact, the signal data points can be projected onto a sequence of CDs that shows how the projection changes with time. Based on our previous work in [34], the AFO-CD uses the fast projection algorithm to process the I/Q signal into a constellation matrix C and linearly maps the matrix to an 8-bit gray-scale constellation. More specifically, the signal is divided into M segments, each formed by N_m data points. The first CD, C_1, includes only the first N_m data points; the second CD, C_2, includes the first 2N_m data points. Consequently, the n-th CD, C_n, includes the former n segments, which is written as
C_n = f(nN_m),
where f(·) denotes the fast projection algorithm.
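The cumulative projection can be sketched as follows. This is a minimal sketch only: the exact fast projection algorithm of [34] is not reproduced here, and the 2-D histogram over the I/Q plane, the 64-bin grid, the I/Q range, and the five segments are all assumptions.

```python
# Minimal sketch of the cumulative CD projection C_n = f(n * N_m). The exact
# fast projection algorithm of [34] is not reproduced; a 2-D histogram over the
# I/Q plane stands in for it, mapped to an 8-bit gray-scale matrix.
import numpy as np

def project_cd(iq, bins=64, iq_range=1.5):
    """Project complex baseband samples onto a bins x bins gray-scale constellation."""
    h, _, _ = np.histogram2d(iq.real, iq.imag, bins=bins,
                             range=[[-iq_range, iq_range], [-iq_range, iq_range]])
    h = h / (h.max() + 1e-12)              # linear mapping ...
    return (255 * h).astype(np.uint8)      # ... to 8-bit gray levels

def afo_cd_sequence(iq, n_segments=5):
    """The n-th CD projects the first n segments (n * N_m points) of the signal."""
    n_m = len(iq) // n_segments
    return np.stack([project_cd(iq[:(n + 1) * n_m]) for n in range(n_segments)])

# afo_cd_sequence(baseband_iq) -> array of shape (n_segments, 64, 64)
```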
To illustrate how the projection of data points changes over time, we select three modulations, namely 2PSK, 4PSK, and 8PSK, for demonstration. The top, middle, and bottom rows of Figure 2 show the CD sequences {C_1, C_2, C_3, C_4, C_5} of 2PSK, 4PSK, and 8PSK, respectively. In each row, the five subfigures from left to right represent the projections of increasing numbers of data points onto the constellation diagram. It can be seen from Figure 2 that, as the number of data points increases, the CDs of the different signals become similar and difficult to distinguish.
Based on the characteristics of the abovementioned CDs, the feature extraction process of the AFO-CD is shown in the feature extraction module of Figure 1 (left side). For the I/Q signal, the fast projection algorithm is used to project the signal onto F CDs. For different signals, I and Q have a unique corresponding relationship; therefore, one-dimensional convolution with one-way sliding is used for feature extraction. For each CD, two layers of one-dimensional convolution with a width of 3 and two layers of average pooling with a width of 2 are used to extract high-dimensional features, after which a fusion operation is performed along the convolution kernel dimension of the features. The convolution stride is 1, and the pooling stride is 2. The convolutional layers use ReLU as the activation function.
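A minimal Keras sketch of this per-CD front end is given below. The kernel widths, strides, pooling widths, and ReLU activation follow the description above; the filter count, the 64×64 CD size, the number of CDs F = 5, and sharing a single front end across all CDs are assumptions.

```python
# Minimal Keras sketch of the per-CD front end: two width-3 Conv1D layers
# (stride 1, ReLU) and two width-2 average poolings (stride 2), with the
# outputs of all CDs fused along the kernel (channel) dimension.
import tensorflow as tf
from tensorflow.keras import layers

def cd_front_end(cd_size=64, filters=32):
    inp = layers.Input(shape=(cd_size, cd_size))   # one CD: rows as steps, columns as channels
    x = layers.Conv1D(filters, 3, strides=1, padding="same", activation="relu")(inp)
    x = layers.AveragePooling1D(pool_size=2, strides=2)(x)
    x = layers.Conv1D(filters, 3, strides=1, padding="same", activation="relu")(x)
    x = layers.AveragePooling1D(pool_size=2, strides=2)(x)
    return tf.keras.Model(inp, x, name="cd_front_end")

def afo_cd_features(n_cds=5, cd_size=64, filters=32):
    front = cd_front_end(cd_size, filters)          # one shared front end for all F CDs
    inp = layers.Input(shape=(n_cds, cd_size, cd_size))
    feats = [front(inp[:, i]) for i in range(n_cds)]
    fused = layers.Concatenate(axis=-1)(feats)      # fusion in the kernel dimension
    return tf.keras.Model(inp, fused, name="afo_cd_features")
```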

3.3. CD-Based Deep Learning Structure

Recurrent neural networks have memory and can effectively process time series of any length, so they are often used for modulation recognition. However, they have difficulty modeling dependencies between states at long intervals because of vanishing or exploding gradients. To solve this problem, a gating mechanism is introduced into the recurrent neural network. The gating mechanism can selectively add new information while forgetting previously accumulated information, thereby effectively controlling the speed of information accumulation. The gated recurrent unit (GRU) is a gate-based recurrent neural network whose structure is simpler than that of the long short-term memory network. Since the proposed CD changes with time, a recurrent neural network can be introduced for the classification.
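For reference, one standard formulation of the GRU update (these equations are textbook material rather than taken from this paper) is
z_t = σ(W_z x_t + U_z h_{t−1} + b_z),
r_t = σ(W_r x_t + U_r h_{t−1} + b_r),
h̃_t = tanh(W_h x_t + U_h (r_t ⊙ h_{t−1}) + b_h),
h_t = z_t ⊙ h_{t−1} + (1 − z_t) ⊙ h̃_t,
where σ is the logistic sigmoid, ⊙ denotes element-wise multiplication, x_t is the input at time t, z_t and r_t are the update and reset gates, and h_t is the hidden state. The update gate decides how much of the previous state is kept, while the reset gate controls how much of it enters the candidate state h̃_t.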
As shown in Figure 1, the classifier is the proposed RDGNN; its input is the AFO-CD feature, and its output is the modulation type of the signal. The RDGNN contains a residual unit, two gated recurrent units, and a fully connected layer. In order to make the channel dimensions of the different branches consistent, the residual unit in RDGNN adds a one-dimensional convolution to the direct mapping branch: the direct mapping in Figure 1 contains one layer of one-dimensional convolution with a width of 1, which adjusts the number of channels, while the other branch contains two layers of one-dimensional convolution with a width of 3. ReLU is used as the activation function in the residual unit. The two gated recurrent unit layers in RDGNN each contain 200 neurons. The last layer of the network is a fully connected layer, which uses Softmax as the activation function to output the modulation prediction.
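A minimal Keras sketch of this classifier, under stated assumptions, is shown below. The residual structure, the width-1 direct-mapping convolution, the two width-3 convolutions, the two 200-unit GRU layers, and the Softmax output follow the description above; the input shape, the filter count, and the eight output classes (Section 4) are assumptions.

```python
# Minimal Keras sketch of the RDGNN classifier: a residual unit whose direct
# mapping is a width-1 Conv1D and whose other branch is two width-3 Conv1Ds,
# followed by two 200-unit GRU layers and a Softmax output layer.
import tensorflow as tf
from tensorflow.keras import layers

def rdgnn(input_steps=16, input_channels=160, filters=64, n_classes=8):
    inp = layers.Input(shape=(input_steps, input_channels))    # fused AFO-CD features
    shortcut = layers.Conv1D(filters, 1, padding="same")(inp)  # width-1 conv adjusts channels
    x = layers.Conv1D(filters, 3, padding="same", activation="relu")(inp)
    x = layers.Conv1D(filters, 3, padding="same")(x)
    x = layers.ReLU()(layers.Add()([shortcut, x]))              # residual addition, then ReLU
    x = layers.GRU(200, return_sequences=True)(x)
    x = layers.GRU(200)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)      # modulation prediction
    return tf.keras.Model(inp, out, name="RDGNN")
```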

4. Experiment

4.1. Implementation Details

We implement the model on the TensorFlow platform, and our experiments run on an Nvidia GeForce RTX 2080Ti GPU, a high-performance graphics processing unit for deep learning. The GPU allows rapid iteration over the architecture design and parameters of deep neural networks and greatly shortens the time required for the experiments. For simulated signals, the model is trained by the following steps: (1) obtain the baseband complex samples of the modulated signals from computer simulations, where each CD is generated from 1000 symbol samples; (2) label each CD according to the modulation class of the sample; (3) collect 1200 labeled images at different SNRs and different maximum frequency offsets for each modulation class to form a dataset; (4) divide the 1200 labeled images into a training set and a test set; (5) feed the training set to the network for training and obtain the trained model after at most 200 epochs; (6) test the trained model on the test set. In this paper, the symbol length of a signal is equal to the number of points projected onto the CD.
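The training configuration of Table 2 maps directly onto the standard Keras training loop. The sketch below is illustrative only: `model` stands for the feature extractor plus RDGNN described in Section 3, and `x_train`, `y_train`, `x_test`, `y_test` stand for the labeled CD dataset built in the steps above; integer class labels and using the test set for validation are assumptions.

```python
# Minimal sketch of the training setup in Table 2: Adam, learning rate 0.001,
# batch size 64, up to 200 epochs (dropout 0.5 is assumed to live inside `model`).
import tensorflow as tf

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",   # integer modulation labels assumed
              metrics=["accuracy"])
history = model.fit(x_train, y_train,
                    batch_size=64, epochs=200,
                    validation_data=(x_test, y_test))
test_loss, test_acc = model.evaluate(x_test, y_test, batch_size=64)
```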
Here, we consider eight kinds of signals, 4ASK, 2PSK, 4PSK, 8PSK, 16QAM, 32QAM, 64QAM, and 128QAM, generated with MATLAB. Based on the existing signal and noise models, we consider various parameters for the simulation, such as the carrier frequency, sampling frequency, and bandwidth. The main parameters of the simulation experiment are listed in Table 1.

4.2. Recognition Performance Comparisons of Different Neural Networks

In order to study the influence of different deep neural networks on the recognition performance, a variety of neural networks are designed as classifiers to compare with RDGNN. The training parameters are shown in Table 2. Table 3 shows several different neural network structures proposed in this paper.
In Table 3, in addition to RDGNN, we also show the double-gated recurrent neural network (DGNN), which contains two gated recurrent units (GRUs) and one fully connected layer. The DGNN is mainly used to study the influence of the residual unit on the recognition results and uses the same parameters as the RDGNN network. The convolutional gated recurrent neural network (CDGNN) consists of two layers of 1D convolution, two pooling layers, two gated recurrent units, and one fully connected layer. Compared with RDGNN, CDGNN replaces the residual unit with convolution and pooling and lacks the direct mapping branch; the data are fed into the gated recurrent units after two convolutional layers extract features. The four-layer convolutional neural network (FCNN) includes four one-dimensional convolutional layers, four pooling layers, and two fully connected layers. FCNN is a classic convolutional neural network structure that contains no gated recurrent units; through the comparison with FCNN, the benefit of the recurrent structure for classifying AFO-CD features is studied.
It is clear from Figure 3 that the recognition performance of RDGNN is generally better than that of the other three networks. At high signal-to-noise ratio, the recognition accuracies of the four structures are similar, but when the signal-to-noise ratio is −5 dB and −10 dB, the recognition accuracy of RDGNN reaches 89.47% and 96.34%, respectively, which is much higher than that of the other three networks. Compared with DGNN and CDGNN, RDGNN has an additional residual unit, which is equivalent to two more convolutional layers for feature extraction. At the same time, the residual unit of RDGNN directly maps the underlying features to a high dimension, which also improves its recognition accuracy. Compared with FCNN, the combination of the residual unit and the gated recurrent units also performs better than the convolution operation alone. Therefore, the RDGNN structure proposed in this paper achieves better classification performance.

4.3. Performance Comparisons of AF-RDGNN with Current Mainstream Methods

To verify the effectiveness and superiority of the AF-RDGNN, we compare the proposed method with five other methods under the same simulation conditions and parameter settings: two machine learning methods, SVM and naive Bayes, and three deep learning methods, GCP-DBN, TCI-GoogLeNet, and VGG-16. The methods based on TCI-GoogLeNet and VGG-16, with their deep layers, perform well in image classification, and GCP-DBN is similar to our method, which is why these deep learning methods are chosen for comparison. First, the recognition accuracy of the different methods when the signal-to-noise ratio ranges from −10 dB to 10 dB and the maximum frequency offset is 50 kHz is reported in Figure 5. According to Figure 5, our proposed AF-RDGNN has an advantage over the entire SNR range at high frequency offset, and its recognition accuracy exceeds 99% when the SNR is higher than 8 dB, which indicates good antifrequency offset performance. The recognition performance of SVM is similar to that of naive Bayes, with an accuracy of approximately 82% under the various SNR conditions. GCP-DBN and TCI-GoogLeNet are strongly affected by the SNR, whereas the performance of VGG-16 is relatively stable over the whole SNR range.
Figure 4 shows the recognition accuracy of AF-RDGNN for each modulated signal. It can be seen that the proposed method works well on ASK and PSK signals, with recognition accuracy above 97% under the various SNRs. For QAM-class signals, the classification accuracy keeps improving as the SNR increases; in particular, for 16QAM and 128QAM, the accuracy improves by nearly 20% as the SNR increases. Note that the classification accuracies of the QAM signals are worse than those of the other modulation types. To further demonstrate the antifrequency offset capability of the proposed method, we study the recognition accuracy of the six methods in the next section.

4.4. Recognition Performance of AF-RDGNN under Different Frequency Offset

As shown in Figure 6, the recognition accuracies of the six methods decrease as the maximum frequency offset increases. When the SNR is −5 dB and the maximum frequency offset ranges from 25 kHz to 100 kHz, the recognition accuracy of AF-RDGNN varies from 97.5% to 92.5%. The recognition accuracies of the two deep learning methods TCI-GoogLeNet and VGG-16 decrease slowly with increasing frequency offset, and all three of these methods are relatively stable. In contrast, the recognition accuracies of the SVM and naïve Bayes machine learning methods decrease rapidly, which shows to a certain extent that deep learning methods outperform traditional machine learning methods against frequency offset. The recognition accuracy of GCP-DBN is relatively low, mainly because the data length is short and the correlation between sequences is not strong.

4.5. Performance Comparisons on Real Signals

Next, we test the performance of the methods on real measured signals. The acquisition process is as follows. First, RF signals are generated by a Keysight N5180 X-series signal generator. Then an analog-to-digital converter (ADC) board is used to collect and store the signals with a 5 GHz sampling rate and 14-bit quantization. Finally, the collected signals are downconverted to baseband. The parameters of the real measured signals are listed in Table 4.
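The final downconversion step can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' processing chain: the complex mixing, the FIR low-pass filter length, and the decimation factor are choices made here for demonstration, using the carrier, bandwidth, and sampling rate from Table 4.

```python
# Minimal sketch of digital downconversion to baseband: complex mixing at the
# carrier, FIR low-pass filtering, and decimation. Filter length and decimation
# factor are illustrative choices, not the authors' settings.
import numpy as np
from scipy.signal import firwin, lfilter

fs, fc, bw = 5e9, 500e6, 20e6                # ADC rate, carrier, bandwidth (Table 4)

def downconvert(x, decim=100):
    t = np.arange(x.size) / fs
    bb = x * np.exp(-2j * np.pi * fc * t)    # mix the real samples down to baseband
    taps = firwin(255, bw / 2, fs=fs)        # low-pass at half the signal bandwidth
    bb = lfilter(taps, 1.0, bb)              # suppress the image at 2*fc and out-of-band noise
    return bb[::decim]                       # decimate to a manageable sample rate
```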
We use the real measured signals to verify our method, again comparing the AF-RDGNN with the other five methods, SVM, naive Bayes, GCP-DBN, TCI-GoogLeNet, and VGG-16, under the same conditions and parameter settings. Figure 7 and Figure 8 show the recognition accuracy at different maximum frequency offsets when the SNR is −5 dB and −10 dB, respectively. It can be seen that the proposed AF-RDGNN achieves superior recognition performance. In particular, in Figure 8, at the lower SNR, the accuracy of the AF-RDGNN is much better than that of the other methods, and its average accuracy is improved by more than 5% compared with them. These experimental results demonstrate that the proposed AF-RDGNN performs well against frequency offset.
Figure 9 and Figure 10 show the confusion matrices of AF-RDGNN for the different modulated signals at a frequency offset of 50 kHz when the SNR is −5 dB and −10 dB, respectively. Observe that at high frequency offset, the method has low accuracy for QAM-class signals. For 2ASK and MPSK signals, the antinoise performance is relatively strong because of their stable CD features. The results are similar to those of the simulation experiments above: the classification accuracies of QAM signals are worse than those of the other modulation types, and the proposed method has better recognition performance for ASK-class and PSK-class signals.

5. Conclusions

This paper proposes a signal modulation recognition method based on AFO-CD features. Compared with model-based machine learning methods, the proposed data-driven method achieves superior accuracy for various modulated signals and shows good antifrequency offset characteristics. Experimental results show that the proposed AF-RDGNN attains higher classification accuracy for digital communication signals under high frequency offsets. In future work, improving the classification accuracy of QAM signals deserves further study, which would further improve the overall recognition performance. In addition, more datasets, containing both simulated and real signals, can be used to test the proposed method.

Author Contributions

Conceptualization, Z.Z. and P.M.; methodology, Z.Z. and P.M.; funding acquisition, Z.Z. and L.L.; investigation, Y.L.; validation, Y.L. and B.L.; visualization, Z.Z. and B.L.; writing—original draft, Y.L.; writing—review and editing, Z.Z. and L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the National Natural Science Foundation of China under Grants # 62071349, # 62203343 and # U21A20455, and the Key Research and Development Program of Shaanxi (Program No. 2023-YBGY-223).

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to express their appreciation to the editors for their rigorous and efficient work and the reviewers for their helpful suggestions, which greatly improved the presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nguyen, T.V.; Nguyen, V.D.; da Costa, D.B.; An, B. Hybrid User Pairing for Spectral and Energy Efficiencies in Multiuser MISO-NOMA Networks With SWIPT. IEEE Trans. Commun. 2020, 68, 4874–4890. [Google Scholar] [CrossRef]
  2. Huynh-The, T.; Hua, C.H.; Kim, J.W.; Kim, S.H.; Kim, D.S. Exploiting a low-cost CNN with skip connection for robust automatic modulation classification. In Proceedings of the 2020 IEEE Wireless Communications and Networking Conference (WCNC), Seoul, Republic of Korea, 25–28 May 2020; pp. 1–6. [Google Scholar]
  3. Zhu, Z.; Yi, Z.; Li, S.; Li, L. Deep Muti-Modal Generic Representation Auxiliary Learning Networks for End-to-End Radar Emitter Classification. Aerospace 2022, 9, 732. [Google Scholar] [CrossRef]
  4. Zhu, Z.; Ji, H.; Zhang, W.; Li, L.; Ji, T. Complex Convolutional Neural Network for Signal Representation and Its Application to Radar Emitter Recognition. IEEE Commun. Lett. 2023. early access. [Google Scholar] [CrossRef]
  5. Mitola, J.; Maguire, G. Cognitive radio: Making software radios more personal. IEEE Pers. Commun. 1999, 6, 13–18. [Google Scholar] [CrossRef]
  6. Yang, P.; Xiao, Y.; Xiao, M.; Guan, Y.L.; Li, S.; Xiang, W. Adaptive Spatial Modulation MIMO Based on Machine Learning. IEEE J. Sel. Areas Commun. 2019, 37, 2117–2131. [Google Scholar] [CrossRef]
  7. Mendis, G.J.; Wei-Kocsis, J.; Madanayake, A. Deep Learning Based Radio-Signal Identification with Hardware Design. IEEE Trans. Aerosp. Electron. Syst. 2019, 55, 2516–2531. [Google Scholar] [CrossRef]
  8. Dobre, O.A.; Abdi, A.; Bar-Ness, Y.; Su, W. Survey of automatic modulation classification techniques: Classical approaches and new trends. IET Commun. 2007, 1, 137–156. [Google Scholar] [CrossRef]
  9. Huang, S.; Yao, Y.; Wei, Z.; Feng, Z.; Zhang, P. Automatic Modulation Classification of Overlapped Sources Using Multiple Cumulants. IEEE Trans. Veh. Technol. 2017, 66, 6089–6101. [Google Scholar] [CrossRef]
  10. Yang, F.; Yang, L.; Wang, D.; Qi, P.; Wang, H. Method of modulation recognition based on combination algorithm of K-means clustering and grading training SVM. China Commun. 2018, 15, 55–63. [Google Scholar]
  11. Furtado, R.S.; Torres, Y.P.; Silva, M.O.; Colares, G.S.; Pereira, A.M.C.; Amoedo, D.A.; Valadão, M.D.M.; Carvalho, C.B.; da Costa, A.L.A.; Júnior, W.S.S. Automatic Modulation Classification in Real Tx/Rx Environment using Machine Learning and SDR. In Proceedings of the 2021 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 10–12 January 2021; pp. 1–4. [Google Scholar]
  12. Ya, T.; Lin, Y.; Wang, H. Modulation Recognition of Digital Signal Based on Deep Auto-Ancoder Network. In Proceedings of the 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C), Prague, Czech Republic, 25–29 July 2017; pp. 256–260. [Google Scholar]
  13. Wong, M.L.D.; Ting, S.K.; Nandi, A.K. Naïve Bayes classification of adaptive broadband wireless modulation schemes with higher order cumulants. In Proceedings of the 2008 2nd International Conference on Signal Processing and Communication Systems, Gold Coast, QLD, Australia, 15–17 December 2008; pp. 1–5. [Google Scholar]
  14. Lv, J.; Zhang, L.; Teng, X. A modulation classification based on SVM. In Proceedings of the 2016 15th International Conference on Optical Communications and Networks (ICOCN), Hangzhou, China, 24–27 September 2016; pp. 1–3. [Google Scholar]
  15. Huynh-The, T.; Hua, C.H.; Kim, D.S. Encoding Pose Features to Images with Data Augmentation for 3-D Action Recognition. IEEE Trans. Ind. Inform. 2020, 16, 3100–3111. [Google Scholar] [CrossRef]
  16. Wang, A.; Li, R. Research on digital signal recognition based on higher order cumulants. In Proceedings of the 2019 International Conference on Intelligent Transportation, Big Data & Smart City (ICITBS), Changsha, China, 12–13 January 2019; pp. 586–588. [Google Scholar]
  17. Abdelmutalab, A.; Assaleh, K.; El-Tarhuni, M. Automatic modulation classification using hierarchical polynomial classifier and stepwise regression. In Proceedings of the 2016 IEEE Wireless Communications and Networking Conference, Doha, Qatar, 3–6 April 2016; pp. 1–5. [Google Scholar]
  18. Ma, J.; Jiang, F. Automatic Modulation Classification Using Fractional Low Order Cyclic Spectrum and Deep Residual Networks in Impulsive Noise. In Proceedings of the 2021 IEEE MTT-S International Wireless Symposium (IWS), Nanjing, China, 23–26 May 2021; pp. 1–3. [Google Scholar]
  19. Rajendran, S.; Meert, W.; Giustiniano, D.; Lenders, V.; Pollin, S. Deep Learning Models for Wireless Signal Classification with Distributed Low-Cost Spectrum Sensors. IEEE Trans. Cogn. Commun. Netw. 2018, 4, 433–445. [Google Scholar] [CrossRef]
  20. Hong, S.; Zhang, Y.; Wang, Y.; Gu, H.; Gui, G.; Sari, H. Deep Learning-Based Signal Modulation Identification in OFDM Systems. IEEE Access 2019, 7, 114631–114638. [Google Scholar] [CrossRef]
  21. Zheng, S.; Qi, P.; Chen, S.; Yang, X. Fusion Methods for CNN-Based Automatic Modulation Classification. IEEE Access 2019, 7, 66496–66504. [Google Scholar] [CrossRef]
  22. Xie, L.; Wan, Q. Cyclic Feature-Based Modulation Recognition Using Compressive Sensing. IEEE Wirel. Commun. Lett. 2017, 6, 402–405. [Google Scholar] [CrossRef]
  23. Liu, A.-S.; Zhu, Q. Automatic modulation classification based on the combination of clustering and neural network. J. China Univ. Posts Telecommun. 2011, 18, 13–38. [Google Scholar]
  24. Doan, V.S.; Huynh-The, T.; Hua, C.H.; Pham, Q.V.; Kim, D.S. Learning Constellation Map with Deep CNN for Accurate Modulation Recognition. In Proceedings of the GLOBECOM 2020—2020 IEEE Global Communications Conference, Taipei, Taiwan, 7–11 December 2020; pp. 1–6. [Google Scholar]
  25. Peng, S.; Jiang, H.; Wang, H.; Alwageed, H.; Zhou, Y.; Sebdani, M.M.; Yao, Y.D. Modulation Classification Based on Signal Constellation Diagrams and Deep Learning. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 718–727. [Google Scholar] [CrossRef] [PubMed]
  26. Liu, X.; Wu, Z.; Tang, C. Modulation Recognition Algorithm Based on ResNet50 Multi-feature Fusion. In Proceedings of the 2021 International Conference on Intelligent Transportation, Big Data & Smart City (ICITBS), Xi’an, China, 27–28 March 2021; pp. 677–680. [Google Scholar]
  27. Sun, D.; Chen, Y.; Liu, J.; Li, Y.; Ma, R. Digital Signal Modulation Recognition Algorithm Based on VGGNet Model. In Proceedings of the 2019 IEEE 5th International Conference on Computer and Communications (ICCC), Chengdu, China, 6–9 December 2019; pp. 1575–1579. [Google Scholar]
  28. Wei, S.; Qu, Q.; Zeng, X.; Liang, J.; Shi, J.; Zhang, X. Self-Attention Bi-LSTM Networks for Radar Signal Modulation Recognition. IEEE Trans. Microw. Theory Tech. 2021, 69, 5160–5172. [Google Scholar] [CrossRef]
  29. Jiang, J.; Wang, Z.; Zhao, H.; Qiu, S.; Li, J. Modulation recognition method of satellite communication based on CLDNN model. In Proceedings of the 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE), Kyoto, Japan, 20–23 June 2021; pp. 1–6. [Google Scholar]
  30. Lin, S.; Zeng, Y.; Gong, Y. Learning of Time-Frequency Attention Mechanism for Automatic Modulation Recognition. IEEE Wirel. Commun. Lett. 2022, 11, 707–711. [Google Scholar] [CrossRef]
  31. Zhu, Z.; Ji, H.; Li, L. Deep Multi-modal Subspace Interactive Mutual Network for Specific Emitter Identification. IEEE Trans. Aerosp. Electron. Syst. 2023. early access. [Google Scholar] [CrossRef]
  32. Li, L.; Dong, Z.; Zhu, Z.; Jiang, Q. Deep-learning Hopping Capture Model for Automatic Modulation Classification of Wireless Communication Signals. IEEE Trans. Aerosp. Electron. Syst. 2022. early access. [Google Scholar] [CrossRef]
  33. Wang, F.; Wang, Y.; Chen, X. Graphic Constellations and DBN Based Automatic Modulation Classification. In Proceedings of the 2017 IEEE 85th Vehicular Technology Conference (VTC Spring), Sydney, NSW, Australia, 4–7 June 2017; pp. 1–5. [Google Scholar]
  34. Lin, L.; Meng, L.; Zhigang, Z.; Shiyao, L.; Chuanjin, D. An Efficient Digital Modulation Classification Method Using the Enhanced Constellation Diagram. Available online: https://www.researchgate.net/publication/365374312_An_Efficient_Digital_Modulation_Classification_Method_Using_the_Enhanced_Constellation_Diagram (accessed on 16 November 2022).
Figure 1. An overview of the proposed model.
Figure 2. The CD sequences of three PSK signals.
Figure 3. Comparison of recognition accuracy of different neural network structures when the maximum frequency offset is 50 kHz.
Figure 4. Accuracy comparisons of eight modulation types in different SNRs.
Figure 5. Comparison of recognition accuracy of different methods when the maximum frequency offset is 50 kHz.
Figure 6. Comparison of the recognition accuracy of different methods when the SNR is −5 dB.
Figure 7. Performance comparison on real measured signal at different frequency offsets when the SNR is −5 dB.
Figure 8. Performance comparison on real measured signal at different frequency offsets when the SNR is −10 dB.
Figure 9. Confusion matrix for different modulated signals at a frequency offset of 50 kHz when the SNR is −5 dB.
Figure 10. Confusion matrix for different modulated signals at a frequency offset of 50 kHz when the SNR is −10 dB.
Table 1. Parameters of the simulation data.
Sampling frequency f_s: 5 GHz
Carrier frequency f_c: 370 MHz
Bandwidth B_d: 40 MHz
Chip rate f_d: 20 MHz
Number of signal samples M: 1200
SNR range: −10∼10 dB
Frequency offset range: 25∼100 kHz
Training set : test set ratio: 5:1
Training samples per SNR for each signal: 1000
Test samples per SNR for each signal: 200
Signal length (discrete points): 1000
Table 2. Training parameters of the network.
Learning rate: 0.001
Batch size: 64
Epochs: 200
Optimizer: Adam
Dropout: 0.5
Table 3. Parameters of the network (layer sequence of each structure).
RDGNN: Residual unit → GRU → GRU → Fully Connected
DGNN: GRU → GRU → Fully Connected
CDGNN: 1D Convolution → Average Pooling → 1D Convolution → Average Pooling → GRU → GRU → Fully Connected
FCNN: 1D Convolution → Average Pooling → 1D Convolution → Average Pooling → 1D Convolution → Average Pooling → 1D Convolution → Average Pooling → Fully Connected → Fully Connected
Table 4. Parameters of the real measured signals.
Sampling frequency f_s: 5 GHz
Sampling time: 1 ms
Carrier frequency f_c: 500 MHz
Signal bandwidth B_d: 20 MHz
Signal amplitude A: 50 mV
Signal length (discrete points): 32,000
Number of signal samples M: 1200