Open Access
29 December 2022 Special Section Guest Editorial: Synthetic Aperture Radar Imaging Technology in Deep Learning: New Trends and Viewpoints
Abstract

Guest Editors Achyut Shankar, Li Zhang, Yu-Chen Hu, and Prabhishek Singh introduce the Special Section on Synthetic Aperture Radar Imaging Technology in Deep Learning: New Trends and Viewpoints.

Deep learning has transformed many synthetic aperture radar (SAR) image processing tasks. SAR images are used to detect and track ships, predict ocean waves, monitor farmland, support military operations, and assess the damage caused by floods and earthquakes. Because of its long wavelength and penetrating ability, a SAR sensor can capture images at any time of day or night. However, the random, coherent interaction of the high-frequency electromagnetic radiation emitted by the SAR sensor with the target area causes constructive and destructive interference, which appears as speckle noise and degrades image clarity, making information extraction difficult. Beyond speckle noise, SAR images also suffer from geometric distortion, system nonlinear effects, and range migration, all of which must be studied. Depending on the application, SAR is operated in three principal modes: stripmap SAR is used to map large areas of terrain; spotlight SAR images a small area of terrain from multiple viewing angles; and inverse SAR is used to track the motion of a target. Deep learning methods such as the convolutional neural network (CNN) enable remarkable image classification and restoration. Experts, academicians, researchers, and scientists therefore need to develop new SAR image processing methods and SAR raw-signal modeling techniques to support the design of new SAR systems. A total of ten papers are published in this special section.
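The speckle described above is commonly modeled as multiplicative, unit-mean gamma-distributed noise on the intensity image; for an L-look image the noise variance shrinks as 1/L. The sketch below illustrates that standard model with synthetic data; it is a generic illustration, not code from any paper in this section, and the uniform "reflectivity" image is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_speckle(image, looks=4):
    """Simulate multiplicative speckle on an intensity image.

    For an L-look intensity SAR image, speckle is commonly modeled as
    gamma-distributed multiplicative noise with unit mean:
    I = R * n, with n ~ Gamma(shape=L, scale=1/L).
    """
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
    return image * noise

# Hypothetical scene with uniform reflectivity 100
clean = np.full((64, 64), 100.0)
speckled = add_speckle(clean, looks=4)
print(speckled.mean())  # stays close to 100, since the noise has unit mean
```

Increasing `looks` (multi-look averaging) reduces the speckle variance, which is why despeckling and multi-looking are central preprocessing steps for the applications surveyed here.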
The main goals of the section are to identify the fundamental research questions in SAR image processing that matter for real-world SAR and other remote sensing applications using deep learning techniques; to track progress on these remote sensing problems; and to let experts, academicians, researchers, and scientists share their success stories of applying advanced deep learning techniques to real-world SAR and other remote sensing applications.

Kumar et al. propose a cloud-computing approach to enterprise image management that is faster, more reliable, and less cumbersome than traditional methods. Combining cloud computing, computer vision, and image processing, the method addresses the challenge of image management at scale. The authors discuss the concept at length and analyze the requirements, the feasibility study, and the architectural design in depth. They also describe the implementation, compare various image processing algorithms in detail, and empirically evaluate the approach against traditional methods in terms of cost, image processing algorithm accuracy, data-entry time, search time, and so on. Finally, they highlight the method and image processing algorithm found to be most effective in practice. Chen et al. replace the standard Fourier transform with a two-dimensional discrete wavelet transform and multi-resolution analysis to implement convolution, yielding more precise graph data. They investigate the spectral clustering approach of the graph wavelet neural network and supplement it with a local-correlation-preserving support vector machine classifier. Compared with a cascade classifier, this classifier has a simpler structure yet still delivers fast and reliable results: the algorithm's average detection time is 359 ms per frame, with 93.40% accuracy and a 96.27% recall rate on the test set.
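The two-dimensional discrete wavelet transform that Chen et al. substitute for the Fourier transform splits an image into one approximation and three detail subbands at each level. A minimal sketch of a single-level 2-D Haar decomposition, written from scratch in NumPy purely for illustration (the paper's own transform and graph construction are not reproduced here), looks like this:

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar discrete wavelet transform.

    Returns the approximation (LL) and detail (LH, HL, HH) subbands,
    each half the size of the input along both axes.
    """
    # Along rows: orthonormal average/difference of adjacent column pairs
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Repeat along columns on adjacent row pairs
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "image"
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (4, 4)
```

Because the Haar transform is orthonormal, the total energy of the four subbands equals that of the input, which is what makes multi-resolution analysis a lossless change of basis for the convolution step.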

Mahapatra et al. optimize the robustness of a watermarking technique based on a CNN autoencoder while keeping the watermark invisible to the human eye. To evaluate the algorithm's efficacy fully, a two-network model is presented, consisting of an embedding network and an extraction network. Convolutional autoencoders form the backbone of the embedding network. A convolutional neural network (CNN) first extracts feature maps from the cover and mark images; the mark and cover feature maps are then merged by concatenation. To recover the secret message from the watermarked and cover images, the extraction model uses block-level transposed convolution and the rectified linear unit (ReLU) activation. Deng et al. employ a bottom-up approach to deconvolution, fusing its features with those of the preceding layer, followed by top-down downsampling and fusion of the high-level features. The newly constructed top-level feature layer is used for abnormal-behavior recognition. On the pedestrian abnormal action recognition dataset, with identical parameters, the paper compares the accuracy of three state-of-the-art methods (C3D, R3D, and R(2 + 1)D) with the feature-fusion-based C3D network algorithm proposed in the paper and finds that the latter achieves significant improvements.

Kaur et al. use post-disaster satellite images to estimate whether structures were damaged. They use PCA to visualise the data and the VGG16 model to extract features from the input images. The VGG16 features are classified with K-nearest-neighbor (KNN), logistic regression, decision tree, random forest, and XGBoost methods. On a matched test set, the KNN classifier achieves an accuracy of 97%, while logistic regression achieves 96%. Huang et al. propose a vision-based intelligent rail traffic signal management system. To control the track intelligently, the system detects when too many people are waiting and then adjusts the display time of the signal lights in real time based on the image. The paper demonstrates that the enhanced algorithm for vehicle target detection and recognition achieves higher accuracy (over 90%) than the original approach.
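The feature-then-classify pipeline described for Kaur et al. (deep features followed by KNN) can be sketched in a few lines. The example below uses randomly generated 4-D vectors as stand-ins for VGG16 descriptors and a from-scratch majority-vote KNN; the class names and data are hypothetical and do not come from the paper.

```python
import numpy as np

def knn_predict(train_feats, train_labels, query_feats, k=3):
    """Classify each query by majority vote among its k nearest
    training feature vectors (Euclidean distance)."""
    preds = []
    for q in query_feats:
        dists = np.linalg.norm(train_feats - q, axis=1)
        nearest = train_labels[np.argsort(dists)[:k]]
        values, counts = np.unique(nearest, return_counts=True)
        preds.append(values[np.argmax(counts)])
    return np.array(preds)

# Hypothetical 4-D "features" standing in for VGG16 descriptors:
# class 0 = damaged structures, class 1 = intact structures
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 4)),
               rng.normal(2.0, 0.3, size=(20, 4))])
y = np.array([0] * 20 + [1] * 20)

queries = np.vstack([rng.normal(0.0, 0.3, (5, 4)),
                     rng.normal(2.0, 0.3, (5, 4))])
print(knn_predict(X, y, queries, k=3))  # → [0 0 0 0 0 1 1 1 1 1]
```

In practice the 4-D toy vectors would be replaced by the (much higher-dimensional) activations of a pretrained VGG16, optionally reduced with PCA before the distance computation.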

Lin et al. developed a model for extracting and detecting scattering information from aerial images of targets. First, a scattering center feature extraction module is built to identify and acquire the candidate scattering centers of aircraft targets in SAR images. Next, an adaptive noise suppression module is presented to reduce ambient noise by acquiring and exploiting global information. Finally, they assemble a dataset of SAR aircraft images and run a battery of detection tests on it. Singh et al. provide a comparison of several approaches to despeckling SAR images, taking methodology, aims, benefits, and drawbacks into account. The goal of their study is to analyse the state of the art of non-conventional approaches to despeckling SAR images. Chakravarty et al. place particular emphasis on HSI classification, making use of several machine learning techniques, such as support vector machine, K-nearest neighbor, and CNN; principal component analysis and minimum noise fraction were used to prune redundant and noisy bands from the dataset. Yang et al. propose using deep vision learning with dynamic scene images and intelligent control to aid industrial feeding. First, they apply the interframe difference technique to a feeding video to obtain an image of the fish while it is eating. The feeding condition of the fish is then cast as a binary classification problem, and the feeding frequency of the fish is computed using a modified VGG16 model; a modified YOLOv5 model is then used for residual bait detection.
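The interframe difference technique that Yang et al. apply to the feeding video is a classic motion-detection primitive: threshold the absolute difference of consecutive frames to get a binary mask of moving pixels. A minimal sketch on two tiny synthetic grayscale frames (the 2x2 bright patch standing in for a fish is hypothetical):

```python
import numpy as np

def interframe_difference(prev_frame, curr_frame, threshold=25):
    """Binary motion mask from the absolute difference of two
    consecutive grayscale frames."""
    # Cast to a signed type so the subtraction cannot wrap around
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Two hypothetical 8x8 frames: a bright 2x2 patch moves one pixel right
prev_frame = np.zeros((8, 8), dtype=np.uint8)
curr_frame = np.zeros((8, 8), dtype=np.uint8)
prev_frame[2:4, 2:4] = 200
curr_frame[2:4, 3:5] = 200

mask = interframe_difference(prev_frame, curr_frame)
print(mask.sum())  # → 4 (the column vacated plus the column entered)
```

The resulting mask isolates the moving fish so that only those regions need to be passed to the downstream VGG16 classifier and YOLOv5 detector.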

© 2022 Society of Photo-Optical Instrumentation Engineers (SPIE)
Achyut Shankar, Li Zhang, Yu-Chen Hu, and Prabhishek Singh "Special Section Guest Editorial: Synthetic Aperture Radar Imaging Technology in Deep Learning: New Trends and Viewpoints," Journal of Electronic Imaging 32(2), 021601 (29 December 2022). https://doi.org/10.1117/1.JEI.32.2.021601
KEYWORDS
Synthetic aperture radar
Deep learning
Image processing
Feature extraction
Imaging technologies
Image classification
Image sensors
