Open Access (CC BY 4.0 license), published by De Gruyter, January 24, 2024

Toward automated hail disaster weather recognition based on spatio-temporal sequence of radar images

Liuping Wang, Ziyi Chen, Jinping Liu, Jin Zhang, and Abdulhameed F. Alkhateeb
From the journal Demonstratio Mathematica

Abstract

Hail, a severe convective weather phenomenon, is seriously hazardous to people’s lives and property. This article proposes a multi-step recurrent hail weather recognition model based on radar images, called LSTM-C3D, which builds on long short-term memory (LSTM) networks and integrates an attention mechanism and a network-voting optimization scheme to achieve intelligent recognition and accurate classification of hailstorm weather. Based on radar echo data in the strong-echo region, LSTM-C3D selectively fuses long- and short-term temporal feature information from hail meteorological images and effectively focuses on the salient features to achieve intelligent recognition of hail disaster weather. Meteorological scans from 11 Doppler weather radars deployed across the Hunan Province of China are used as the experimental and application objects for extensive validation and comparison experiments. The results show that the proposed method realizes automatic extraction of radar reflectivity image features, with hail identification accuracy in the strong-echo region reaching 91.3%. It can also effectively predict convective storm movement trends, laying a theoretical foundation for reducing the misjudgment of extreme disaster weather.

MSC 2010: 68T07

1 Introduction

China has a vast territory and complex terrain; hailstorms have a large ripple effect and affect wide disaster areas. Hail frequently occurs together with strong winds and strikes objects at speeds of more than 10 m/s, significantly harming crop production. However, hail has a short life cycle. It develops rapidly, covers areas ranging from a few kilometers to tens of kilometers, and generally lasts only a few minutes to a few hours. It often occurs concurrently with other strong convective weather, such as high winds and heavy precipitation, making this catastrophic weather challenging to predict. Beyond crops, hailstorms can cause significant damage to buildings, transportation, industrial and mining infrastructure, and power and communication facilities, and can even endanger people’s lives. Therefore, timely identification of hail is of great significance for hail prevention.

Numerous academics have studied hail identification and forecasting extensively in recent years. Early studies primarily analyzed the meteorological conditions during hail occurrence and identified or predicted hail weather by obtaining their statistical laws and characteristics. The assumption is that hail is more likely to occur if the current climatic conditions align with the statistical laws observed during past hail events. To achieve early identification of hailstorms, hail identification was often based on a single salient feature of radar weather data, such as weather radar reflectivity, differential reflectivity, radial velocity, or vertically integrated liquid (VIL). For example, Beal et al. [1] used radar reflectivity and radar velocity data to analyze features such as hail season, hail size, and hail path; Pilorz et al. [2] combined radar reflectivity with temperature stratification data from ERA5 reanalysis to convert the VIL obtained by vertically integrating radar reflectivity through convective clouds using the convective cloud-top height, thereby improving the accuracy of large-hail detection. The researchers also applied the method to radar and ERA5 data for the Midwestern United States from 2010 to 2018, and the results show that the approach is effective in detecting large hail. However, the aforementioned methods consider only the meteorological characteristics and formation mechanisms of hail and often do not yield the expected results in practical applications.

To improve hail identification accuracy, some researchers adopted multiple features, models, and data sources to identify hailstorms. For example, Murillo and Homeyer [3] analyzed more than 10,000 storms objectively identified via echo-top tracking of radar echoes and nearly 6,000 hail reports, combining single-polarization and dual-polarization metrics to provide the most proficient distinction between severe and non-severe hail and to identify individual hail occurrences. In addition, they proposed a correction to the “maximum expected size of hail” (MESH) metric and showed that it improves the spatio-temporal agreement between reported hail sizes and radar-based estimates for the studied cases. Wang et al. [4] used multiple parameters from dual-polarization radar data, such as differential reflectivity, differential phase, and linear polarization reflectivity, to analyze and extract various features, including the structure, precipitation, and wind field of convective clouds. Furthermore, their algorithm accounts for radar data noise and spatial resolution, improving the accuracy and reliability of convective feature extraction; they showed that the probability of hail (POH) index can be applied to real thunderstorm events and is reliable and informative for hail core detection. Merino et al. [5] obtained observational data on hail precipitation from meteorological and radar stations in different locations; by performing an empirical orthogonal function (EOF) analysis on the observational data, they identified the region’s weather patterns and trends of hail precipitation. Wang et al. [6] proposed ten features based on three factors: the height and thickness of the cell nucleus, radar echo intensity, overhang structure, and horizontal reflectivity gradient, which were converted into volumetric composite features (VMCFs) and height gradient composite features (HGCFs) by principal component analysis.
By analyzing S-band radar data for 49 hail cases and 35 severe rainfall cases, the time series showed a significant increase in VMCF or HGCF values in the early stage of hail storms. The experimental results show that the method achieves real-time hail prediction. However, since most of these methods are based on individual cases, it is difficult to achieve a high degree of consistency with the occurrence of local climate disasters, and it is even more challenging to quickly develop preventive measures based on changes in the natural environment. As a result, these methods perform poorly in distinguishing between hail and heavy rain. Research has shown that, in many cases, these methods can result in a relatively high false-positive (FP) rate for hail events.

Due to the vigorous development of machine learning theory and practice, various machine learning algorithms, including traditional machine learning algorithms (e.g., linear regression, logistic regression, naive Bayes, support vector machines [7], and neural networks [8]) and deep learning methods [9], have played an active and important role in strong convective monitoring, short-term proximity forecasting, and short-term forecasting [10]. Their application effect is often significantly better than that of traditional methods, which rely on statistical features or accumulated subjective experience. Machine learning methods can more effectively extract strong convective features from small- and medium-scale observations with high spatial and temporal resolution, providing a more comprehensive and powerful automated identification and tracking capability for strong convective monitoring [11]. They effectively integrate multi-source observation data and numerical forecast models and extract more practical information for strong convection proximity forecasting and warning. At the same time, they can effectively enhance the application of global and high-resolution regional numerical models to strong convective weather forecasting by interpreting and post-processing their forecasts [12]. Guo et al. [13] extracted the echo properties of strong convective areas from radar reflectivity images, combined a discretization method based on rough set theory that discretizes meteorological data and builds a decision table, and used an improved artificial fish swarm algorithm for attribute reduction to obtain the minimum attribute set. Finally, an objective model for identifying strong convective weather was established. The experimental results show that the model can identify strong convective weather with a small number of attributes and high recognition accuracy. Ding et al.
[14] integrated traditional features and image features to identify hailstorms. The structural features of hail clouds can be identified using radar images. However, the aforementioned methods require a huge amount of data and have long hail identification cycles, which are no longer suitable for modern hail identification requirements.

With the widespread adoption of next-generation Doppler radar, the underlying data storage is becoming more extensive, which makes it possible to identify hailstorms using convolutional neural networks (CNNs), which have achieved great success in image recognition. For example, many successful models on the international standard ImageNet data set are based on CNNs, such as AlexNet [15], VGG [16], GoogLeNet [17], and ResNet [18]. With the success of CNNs in image processing, more and more researchers have applied them to weather forecasting [19]. Zhang et al. [20] proposed a lightweight radar echo contour prediction model based on a convolutional autoencoder and long short-term memory [21]. A CNN encoder extracts the features of the radar echo image, the extracted features are used as input predictions for the LSTM, and finally the features are restored to their original size and shape using a CNN decoder. Experimental results show that the model can predict changes in radar images effectively and in a timely manner and can meet the needs of agricultural production. Gooch and Chandrasekar [22] proposed a method for image classification of precipitation states on color weather radar scans. The technique uses transfer learning to reduce the data required to perform this classification by two orders of magnitude, thereby increasing throughput and applying deep learning tools to this task.

The aforementioned methods harness machine learning (especially deep learning models) to automatically extract radar weather image features. However, these methods often analyze and process weather image information at a single moment without considering the spatio-temporal correlation characteristics of radar weather data. Thus, the accuracy of hail hazard identification still needs improvement. This study proposes a hail prediction model, LSTM-C3D, based on a long short-term memory network, and uses generative adversarial networks [23] to augment the data and enlarge the sample space of the data set. In addition, a voting network structure is designed to enhance feature extraction at different levels of the image feature space, improving the logic of data processing. Finally, a strongly correlated discriminative model of regional hail occurrence is established based on radar echo data in the strong-echo region, effectively improving hail hazard prediction accuracy.

The remainder of this article is organized as follows: Section 2 briefly analyzes the problem of hail hazard weather; Section 3 details the design of the hail identification convolutional network model, LSTM-C3D; Section 4 presents the data pre-processing procedure and experimental results and compares the performance of LSTM-C3D with representative state-of-the-art methods. Finally, Section 5 gives the conclusion and future outlook.

2 Challenging issues in hail disaster weather recognition in Hunan Province of China

Hunan Province is located in the middle reaches of the Yangtze River, with the vast majority of its territory south of Dongting Lake, hence the name Hunan (“south of the lake”). The Xiangjiang River runs through the province from south to north, so Hunan is abbreviated as Xiang. It lies between longitudes 108°47′E and 114°15′E and latitudes 24°38′N and 30°08′N. It borders Jiangxi in the east along the Mufu and Wugong mountain systems; Guizhou in the west along the eastern edge of the Yunnan-Guizhou Plateau; Chongqing in the northwest along the Wuling Mountains; Guangdong and Guangxi in the south along the Nanling Mountains; and Hubei in the north along the Binhu Plain. Surrounded by mountains on three sides, with lower topography in the north, the land of Sanxiang is sometimes controlled by cold fronts from the northwest and sometimes influenced by warm, humid air currents from the southwest. The climate is therefore variable, sunny one moment and rainy the next, suddenly cold and suddenly hot, making sudden disaster weather easy to form.

Hail is a very violent meteorological disaster caused by strong convective weather systems. It is sudden and intense, readily bringing large losses to agriculture, construction, communication, electricity, transportation, and people’s lives and property, and may induce flash floods, geological disasters, and urban and rural flooding. Figure 1 shows the damage to agriculture and buildings caused by extreme hail weather. Therefore, accurate identification and forecasting of hail weather is one of the key research objectives of meteorological researchers.

Figure 1: Outside damage caused by hailstorms: left: houses damaged by strong convective weather; right: apple production area in Hunan Province affected by hailstorms.

2.1 Situation and disaster in Hunan Province

According to incomplete records of hailstorm weather in Hunan Province, at around 10:10 p.m. on March 19, 2013, Jingzhou Miao and Dong Autonomous County in Hunan Province was suddenly hit by a very large hailstorm lasting nearly 30 min. A total of more than 14,800 households and 44,700 people were affected in the county, with direct economic losses of about 163 million yuan. Among them, 45,590 houses were damaged; 159,000 mu of crops were affected; the plants, equipment, raw materials, and finished products of 35 enterprises were severely damaged; and some electric power, water supply, transportation, water conservancy, commerce, storage, and other facilities were damaged. From March 4 to 6, 2018, wind and hail damage occurred in some areas of Hunan Province. The disaster affected a total of 173,000 people in Changsha, Zhuzhou, and other cities, with more than 710 people urgently relocated, 59 houses collapsed, more than 7,000 houses damaged to varying degrees, and direct economic losses of RMB 110 million. On May 11, 2021, some townships in Xintian County, Hunan Province, were hit by hailstorms, damaging tens of thousands of mu of flue-cured tobacco in Xinxu, Xinlong, and Taoling Townships, of which more than 6,700 mu was completely destroyed, with economic losses estimated at more than 35 million yuan.

In summary, it is very important to minimize losses through advance forecasting before extreme hailstorms occur. However, owing to the sudden, localized, and short-lived character of the hail process, its forecasting has long been unsatisfactory; even for nowcasts only half an hour ahead, progress has been very slow, and the most serious difficulty lies in identifying hail echoes. To address this problem, this article adopts sequence data from radar stations to capture the weather trend and ensure the correctness of the hail identification results as much as possible.

2.2 New-generation Doppler weather radar

The data used in this study are the raw Doppler meteorological data acquired by the new-generation Doppler weather radar. The data include Julian time, volume scan mode, reflectivity factor, radial velocity, and spectral width. Among them, the reflectivity factor (in dBZ) can identify cumulus, stratus, and cumulonimbus cloud systems, as well as weather features such as fronts and squall lines. Meteorological observers can determine echo intensity from the reflectivity factor and distinguish weather such as rain, snow, wind, lightning, and hail; these characteristics provide an important scientific basis for weather forecasting [24]. In general, the higher the reflectivity intensity, the greater the possibility of strong convective weather. To facilitate observation, the reflectivity range is usually divided into 15 levels; each color corresponds to a range of reflectivity intensity, and black indicates invalid echoes.

In the specific representation of radar reflectivity images, the spatial distribution of radar reflectivity can be described by the distribution of colors: green indicates weak reflectivity, yellow indicates medium reflectivity, red indicates strong reflectivity, and purple indicates the strongest reflectivity. Therefore, we can judge the magnitude of radar reflectivity from the color to analyze the weather situation. For example, if the entire image is green or yellow, the reflectivity in the region is relatively weak and the weather is relatively good. If there are red or purple areas in the image, the reflectivity in the region is strong, and weather phenomena such as heavy rain or hail may be present. Additionally, the time-series changes of radar reflectivity images can describe the evolution of the weather: a significant change of color in the image indicates that weather conditions are changing. The correspondence between reflectivity intensity and color is shown in Figure 2.

Figure 2: Left: correspondence between basic colors of the radar reflectivity map and reflectivity intensity; right: reflectivity image.
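As an illustration of the color-to-intensity correspondence, the following sketch maps a reflectivity value to one of 15 levels. The specific dBZ thresholds and color grouping here are assumptions for demonstration, not the radar product’s official palette:

```python
import bisect

# Illustrative 15-level reflectivity scale: thresholds at 0, 5, ..., 70 dBZ.
# The color grouping below is an assumption for demonstration only.
LEVELS = list(range(0, 75, 5))  # 15 bin edges
COLORS = ["black"] + ["green"] * 3 + ["yellow"] * 3 + ["red"] * 4 + ["purple"] * 4

def dbz_to_level(dbz):
    """Return (level_index, color_name) for a reflectivity value in dBZ."""
    if dbz < 0:
        return 0, "black"  # treat negative echoes as invalid
    idx = min(bisect.bisect_right(LEVELS, dbz) - 1, len(LEVELS) - 1)
    return idx, COLORS[idx]

print(dbz_to_level(50))  # a strong echo falls in a "red" bin
```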

The radar reflectivity image is derived from the reflectivity factor in the basic product, which takes the location of the radar as the origin, and the color taken at each coordinate point in the coordinate system is used to reflect the reflectivity intensity of the radar echo at the corresponding spatial location, as shown in Figure 2. During the processing of radar echo data, the reflectivity of the corresponding elevation angle is read from the radar base data. The point with the highest reflectivity is identified, and it is determined whether the reflectivity at that point is greater than 45 dBZ. If the reflectivity intensity in the area exceeds 45 dBZ, it can be initially concluded that hail may be present in the region. However, this is only a preliminary assessment and requires comprehensive analysis and judgment in conjunction with other factors.
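The preliminary 45 dBZ screening described above can be sketched as follows; the grid layout and function name are illustrative:

```python
import numpy as np

HAIL_DBZ_THRESHOLD = 45.0  # screening threshold from the text; preliminary only

def strong_echo_candidate(reflectivity):
    """Locate the maximum-reflectivity point and flag a possible hail region.

    `reflectivity` is a 2-D array of dBZ values for one elevation angle.
    Returns (row, col, max_dbz, is_candidate).
    """
    refl = np.asarray(reflectivity, dtype=float)
    r, c = np.unravel_index(np.nanargmax(refl), refl.shape)
    max_dbz = float(refl[r, c])
    return int(r), int(c), max_dbz, bool(max_dbz > HAIL_DBZ_THRESHOLD)

grid = np.full((4, 4), 20.0)
grid[2, 3] = 52.0  # one strong-echo pixel
print(strong_echo_candidate(grid))  # (2, 3, 52.0, True)
```

A True flag here is only a preliminary assessment; as the text notes, it must be combined with other factors before concluding that hail is present.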

3 Proposed method

To make hail hazard weather forecasting more accurate and efficient, we use the proposed LSTM-C3D to predict and track radar echo shapes in conjunction with the characteristics of the radar echo data used. This section describes the encoder-decoder network-based LSTM-C3D prediction model framework, including its general structure and the specifics of how each block was designed.

3.1 Model architecture

In hailstorm weather image data processing, the new-generation Doppler weather radar performs individual volume scans. After pre-processing the collected data, the data are augmented with deep convolutional generative adversarial networks (DCGANs), and spatio-temporal data are generated from uniformly formatted images. LSTM-C3D then carries out feature vector extraction. In the information-encoding part, gradient highway units (GHUs) and causal LSTM computing units select and fuse long- and short-term feature information, encode the complete sequence spatio-temporally, and map the long sequence to a low-dimensional feature space. The decoded information is further extracted using an improved 3D ConvNet and finally mapped to specific categories through global pooling and a fully connected layer. LSTM-C3D is used to analyze the characteristics of the data, after which candidate regions are divided. On this basis, a network structure centered on the attention mechanism is constructed to obtain the key information of hail weather characteristics and improve model accuracy. Finally, to further improve the accuracy of hail detection, VGG, ResNet, and AlexNet are selected in this study to construct a triple-branch structure, and a voting network is used to improve the generality of the model. A region proposal network (RPN) algorithm processes the feature data in the regional convolutional network; after the feature map is obtained, softmax classifies whether hail is present [25]. The structure of this model is shown in Figure 3.

Figure 3: Structure of the LSTM-C3D model.

In hail identification problems, it is usually necessary to provide labeled training sample sets to train the model. Once the classifier has acquired classification capability through continuous training, it can accurately identify the presence or absence of hail from the latest input Doppler radar data. Figure 4 shows the complete meteorological image information processing flow for hail using LSTM-C3D.

Figure 4: Schematic of the overall system.

3.2 Causal LSTM

To improve the accuracy of hail identification, this task uses sequence recognition to memorize the trend of radar data changes. Compared with the traditional ST-LSTM [26], the causal LSTM [27] can better solve the problem of recursive depth by adding a nonlinear layer, thus increasing the network depth and memory capacity; its cell structure is shown in Figure 5. A causal LSTM cell contains dual memories, a temporal memory $C_t^k$ and a spatial memory $M_t^k$, where the subscript $t$ indicates the time step and the superscript $k$ indicates the $k$th hidden layer in the stacked causal LSTM network. The current temporal memory depends directly on its previous state $C_{t-1}^k$, controlled by the forgetting gate $f_t$, the input gate $i_t$, and the input modulation gate $g_t$. The current spatial memory $M_t^k$ depends on the deep transition path $M_t^{k-1}$. For the bottom layer ($k = 1$), the uppermost spatial memory at time $(t-1)$ is assigned to $M_t^{k-1}$. The causal LSTM differs significantly from the original spatio-temporal LSTM in that it uses a cascade mechanism in which the spatial memory is a function of the temporal memory via another set of gate structures. This newly designed cascaded memory outperforms the simple concatenation structure of the spatio-temporal LSTM because of the significant increase in recursion depth along the spatio-temporal transition path.

Figure 5: Causal LSTM cell structure.

The causal LSTM equations are expressed as follows, where $*$ stands for convolution; $\odot$ denotes the element-wise (Hadamard) product; $\sigma$ is the sigmoid function; square brackets indicate tensor concatenation; $X_t$ denotes the input; $H$ denotes the output (hidden state); $C$ denotes the temporal memory state; and $g$, $i$, and $f$ are the gate vectors representing the weights of the input and output information, with the primed vectors in equations (3) and (4) forming the second gate set acting on the spatial memory. $C$ enters the formulas to jointly determine the opening and closing of the gates: $C_{t-1}^k$ in equation (1) and $C_t^k$ in equation (3) form a cascade through a recursive relationship. In addition, $\tanh$ is applied in the last step to add nonlinearity:

(1) $[g_t, i_t, f_t]^{\top} = [\tanh, \sigma, \sigma]^{\top}\,(W_1 * [X_t, H_{t-1}^k, C_{t-1}^k])$,

(2) $C_t^k = f_t \odot C_{t-1}^k + i_t \odot g_t$,

(3) $[g_t', i_t', f_t']^{\top} = [\tanh, \sigma, \sigma]^{\top}\,(W_2 * [X_t, C_t^k, M_t^{k-1}])$,

(4) $M_t^k = f_t' \odot \tanh(W_3 * M_t^{k-1}) + i_t' \odot g_t'$,

(5) $O_t = \tanh(W_4 * [X_t, C_t^k, M_t^k])$,

(6) $H_t^k = O_t \odot \tanh(W_5 * [C_t^k, M_t^k])$.
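A minimal numerical sketch of equations (1)–(6) follows, using matrix multiplications in place of the convolutions for brevity; the weights are random stand-ins, not trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # feature dimension; matrices replace the conv filters W1..W5 for brevity

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random "filters"; in the real model these are learned convolution kernels.
W1 = rng.standard_normal((3 * d, 3 * d)) * 0.1  # produces g, i, f
W2 = rng.standard_normal((3 * d, 3 * d)) * 0.1  # produces g', i', f'
W3 = rng.standard_normal((d, d)) * 0.1
W4 = rng.standard_normal((d, 3 * d)) * 0.1
W5 = rng.standard_normal((d, 2 * d)) * 0.1

def causal_lstm_step(x, h_prev, c_prev, m_below):
    """One causal LSTM update following equations (1)-(6).

    x: input X_t; h_prev: H_{t-1}^k; c_prev: C_{t-1}^k;
    m_below: spatial memory M_t^{k-1} from the layer below.
    """
    z = W1 @ np.concatenate([x, h_prev, c_prev])           # eq. (1)
    g, i, f = np.tanh(z[:d]), sigmoid(z[d:2*d]), sigmoid(z[2*d:])
    c = f * c_prev + i * g                                 # eq. (2)
    z2 = W2 @ np.concatenate([x, c, m_below])              # eq. (3)
    g2, i2, f2 = np.tanh(z2[:d]), sigmoid(z2[d:2*d]), sigmoid(z2[2*d:])
    m = f2 * np.tanh(W3 @ m_below) + i2 * g2               # eq. (4)
    o = np.tanh(W4 @ np.concatenate([x, c, m]))            # eq. (5)
    h = o * np.tanh(W5 @ np.concatenate([c, m]))           # eq. (6)
    return h, c, m

x = rng.standard_normal(d)
h, c, m = causal_lstm_step(x, np.zeros(d), np.zeros(d), np.zeros(d))
print(h.shape, c.shape, m.shape)  # (8,) (8,) (8,)
```

The cascade is visible in the code: the spatial memory `m` is computed from the freshly updated temporal memory `c`, not from the previous hidden state alone.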

Although the causal LSTM captures short-term dynamics well, it tends to suffer from difficulties in gradient backpropagation over long horizons: over longer transitions, the temporal memory may forget obsolete features. Theoretical evidence shows that highway layers can efficiently pass gradients in very deep feedforward networks, and the gradient highway unit (GHU) plays the analogous role in recurrent networks, preventing long-term gradients from vanishing rapidly. Its cell structure is shown in Figure 6. GHUs are introduced between the first and second stacked layers of the encoder. The input of the GHU is the output of the current lower layer together with the GHU state from the previous moment; that is, the GHU connects the current moment with the previous moment, which effectively alleviates the vanishing-gradient problem.

Figure 6: GHU cell structure.

The GHU equations are expressed as follows, where the $W$ terms represent convolution filters. $S_t$ is the switch gate: it adaptively blends the transformed input $P_t$ with the previous hidden state $Z_{t-1}$:

(7) $P_t = \tanh(W_{px} * X_t + W_{pz} * Z_{t-1})$,

(8) $S_t = \sigma(W_{sx} * X_t + W_{sz} * Z_{t-1})$,

(9) $Z_t = S_t \odot P_t + (1 - S_t) \odot Z_{t-1}$.
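Equations (7)–(9) can be sketched in the same simplified style, again substituting matrix multiplications for the convolutions with random stand-in weights:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # feature dimension

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Matrix stand-ins for the convolution filters W_px, W_pz, W_sx, W_sz.
Wpx, Wpz = rng.standard_normal((d, d)) * 0.1, rng.standard_normal((d, d)) * 0.1
Wsx, Wsz = rng.standard_normal((d, d)) * 0.1, rng.standard_normal((d, d)) * 0.1

def ghu_step(x, z_prev):
    """One gradient-highway update following equations (7)-(9)."""
    p = np.tanh(Wpx @ x + Wpz @ z_prev)   # eq. (7): transformed input
    s = sigmoid(Wsx @ x + Wsz @ z_prev)   # eq. (8): switch gate
    return s * p + (1.0 - s) * z_prev     # eq. (9): gated blend

z = ghu_step(rng.standard_normal(d), np.zeros(d))
print(z.shape)  # (8,)
```

When the switch gate saturates toward 0, the previous state passes through unchanged, which is what keeps the gradient path short.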

Considering the low frequency of hail in actual meteorological processes, little data is directly available for training. Additionally, a small area is cropped from the original radar map during data processing. Therefore, in the encoder, the number of vertically stacked layers is reduced, which on the one hand relieves the difficulty of backpropagating gradients through an overly deep network, and on the other hand reduces the number of network parameters, thus speeding up training and online extrapolation. Since small radar-map regions are used as input, shallower stacking is sufficient to extract spatially valid information; this prevents the network from over-fitting to value changes at single pixels and keeps the focus on overall trend changes. Meanwhile, the causal computation unit of the encoder is designed with two scales of computation: large convolution kernels for capturing the overall radar echo changes, and small convolution kernels for capturing local echo generation and dissipation. In particular, for the strong-echo information that often accompanies hailstorms, the rate of its generation and dissipation is correlated with the presence of hail, so the smaller convolution kernels can sensitively capture these detailed changes.

Correspondingly, in the decoder, the spatio-temporal information at both scales stored by the encoder is extracted; after the 3D CNN block, the features are finally mapped to categories for classification using global pooling and a fully connected layer.

3.3 C3D

In the design of the decoder, inspired by the C3D network [28], the feature extraction structure is similar to that of C3D but improved upon the original. There are eight convolution layers (Conv1, Conv2, Conv3a, Conv3b, Conv4a, Conv4b, Conv5a, and Conv5b), five pooling layers (Pool1-Pool5), three fully connected layers (Fc6, Fc7, and Fc8), and one softmax classification layer; each convolution layer and fully connected layer is followed by a ReLU nonlinearity. The fully connected layers use dropout to prevent overfitting during training. The numbers of convolution kernels in the eight convolution layers are 64, 128, 256, 256, 512, 512, 512, and 512, in turn; the kernel size is (3, 3, 3) and the convolution stride is (1, 1, 1). Among the five pooling layers, the first (Pool1) uses a sampling window of (2, 2, 1) with stride (2, 2, 1), so that temporal information is retained as long as possible in the early layers, while the other pooling layers (Pool2-Pool5) use a window of (2, 2, 2) with stride (2, 2, 2). After several convolution-pooling operations, the features are down-sampled to obtain more globally representative features. The fully connected layers (Fc6, Fc7, Fc8) contain 8,192, 4,096, and 4,096 output units, respectively. The last layer of the network is the softmax classification layer, whose output distinguishes hail zones from non-hail zones. For this adaptation task, the input channel number of C3D is set to 3, and the category output dimension is set to 2.
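The pooling configuration described above can be traced with a small shape calculator. The input size (112 × 112 pixels × 16 frames) is an assumption for illustration; with kernel (3, 3, 3), stride 1, and "same" padding, only the pooling layers change the feature-map size:

```python
# Pool1 keeps the time axis so early temporal information survives;
# Pool2-Pool5 halve all three axes. Sizes are (height, width, time).
POOLS = [(2, 2, 1)] + [(2, 2, 2)] * 4

def trace_shapes(h, w, t):
    """Return the feature-map size after each pooling layer (floor division)."""
    shapes = [(h, w, t)]
    for ph, pw, pt in POOLS:
        h, w, t = h // ph, w // pw, t // pt
        shapes.append((h, w, t))
    return shapes

shapes = trace_shapes(112, 112, 16)
print("input:", shapes[0])
for i, s in enumerate(shapes[1:], 1):
    print(f"after Pool{i}: {s}")
```

Running this shows the time axis surviving Pool1 intact (112, 112, 16) → (56, 56, 16) and then shrinking with the spatial axes through Pool5.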

3.4 Attention mechanism

Most hail-image methods determine positive and negative samples by intersection-over-union (IoU) matching, which captures only the features constituting the larger part of the data while neglecting the identification and localization of other salient, influential features, so the conditions for hail generation cannot be fully understood. In addition, some data features disappear after multiple layers of deep convolution, so a shallow network structure in the recognition model cannot extract sufficient semantic information. To address these issues, an attention mechanism is used to obtain more accurate hail data features. In this article, we design the network structure of the spatial attention mechanism shown in Figure 7: a residual layer performs a pooling operation to reduce the dimension of the image feature vector, and image spatial feature information is then extracted through a 3 × 3 convolution kernel with padding of 1. This information is processed by the sigmoid function and weighted against the pooled image feature vector information in the scale layer.

Figure 7: Structure of attention mechanism.
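A minimal numpy sketch of this attention pathway (pool, 3 × 3 convolution with padding 1, sigmoid, scale-layer weighting); the kernel weights are random stand-ins for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv3x3_same(img, kernel):
    """Naive 3x3 'same' convolution (padding 1, stride 1) on a 2-D map."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + 3, c:c + 3] * kernel)
    return out

def spatial_attention(feat, kernel):
    """Pool the feature map, build a sigmoid mask, and reweight the pooled map."""
    pooled = 0.25 * (feat[0::2, 0::2] + feat[1::2, 0::2]
                     + feat[0::2, 1::2] + feat[1::2, 1::2])  # 2x2 average pool
    mask = sigmoid(conv3x3_same(pooled, kernel))             # weights in (0, 1)
    return pooled * mask                                     # scale-layer weighting

feat = rng.standard_normal((8, 8))
out = spatial_attention(feat, rng.standard_normal((3, 3)) * 0.1)
print(out.shape)  # (4, 4)
```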

3.5 Voting networks

Voting networks are typically used to process the output results of multiple classifiers in order to obtain more accurate classification results. In the voting network, the data commonly used are the prediction results of multiple classifiers on the same input data. For example, the same image can be fed into multiple different CNNs, each network outputting a probability distribution for a class, and these probability distributions are then passed to the voting network for integration.

In this article, we used AlexNet, ResNet50, and VGG deep learning neural network benchmark models to build a three-branch structural fusion network model, the structure of which is shown in Figure 8. Since the size of the convolutional kernel is an important hyperparameter, it determines the size of the receptive field of the convolutional layer when extracting image features. Different sizes of convolutional kernels can capture features of different scales, thus improving the feature extraction ability of the network. Specifically, using different sizes of convolutional kernels can help the network extract feature information of different scales, thereby enhancing the feature expression ability of the network. Therefore, the first branch uses a 1 × 1 convolution kernel to extract the candidate regions of the feature map. The smaller convolution kernel can capture more local details and improve the diversity and stability of the features. The second branch uses a 3 × 3 convolutional kernel to extract candidate regions from the feature map, which can capture relatively larger image features. The third branch uses two 3 × 3 convolutional kernels in series to extract candidate regions from the feature map, and its receptive field is equivalent to a 5 × 5 convolutional kernel. Specifically, the first 3 × 3 convolutional kernel can capture local features of the input image, while the second 3 × 3 convolutional kernel can combine these local features to form a more global feature representation. This not only reduces the number of model parameters and improves the computational efficiency of the model but also enhances the nonlinear expression ability of the network, improving the accuracy and robustness of feature extraction. The final model can adapt to multi-scale image feature extraction, increase the generality of the model, and improve the accuracy of the model determination.

Figure 8: Structure of the voting network.

For the voting network, we adopt the idea of a soft voting classifier, which takes the average probability of all models predicting samples to be a certain class as the standard, and the type corresponding to the highest probability is the final prediction result.
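This soft-voting rule can be sketched as follows (the three branch probability vectors and the class ordering are illustrative, not taken from the article's experiments):

```python
import numpy as np

def soft_vote(prob_dists):
    """Average the class-probability distributions of several classifiers
    and return the index of the most probable class plus the average."""
    avg = np.mean(np.asarray(prob_dists), axis=0)
    return int(np.argmax(avg)), avg

# Three branches scoring the classes ("no hail", "hail"):
branch_probs = [[0.40, 0.60],   # e.g. AlexNet branch
                [0.55, 0.45],   # e.g. ResNet50 branch
                [0.30, 0.70]]   # e.g. VGG branch
label, avg = soft_vote(branch_probs)
print(label)  # -> 1, i.e. the "hail" class wins on average probability
```

Note that the middle branch alone would have voted "no hail"; averaging the probabilities lets the more confident branches outweigh it.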

4 Experimental results and discussions

This section covers the following points: first, the radar echo data and pre-processing steps used in the proposed model are described. Second, the model is evaluated using metrics commonly used in this field to measure performance, with detailed ablation experiments on the baseline model and the additional modules. Finally, the training results of the proposed model are compared against popular models in the field to confirm their accuracy and reliability.

4.1 Experimental data and implementation

4.1.1 Data set

Data sets were constructed using Doppler weather radar, regional automatic station, and lightning positioning data from 11 cities in Hunan Province (Changsha, Xiangtan, Yueyang, Changde, Zhangjiajie, Huaihua, Shaoyang, Hengyang, Yongzhou, Chenzhou, and Yiyang) from 2015 to 2021. The original Doppler meteorological data were compressed and stored by a new-generation Doppler weather radar, with each volume-scan file stored in .bz2 format. After decompression, a binary .bin file is obtained that records all volume-scan parameters in the base-data format standard, including Julian time, volume-scan mode, reflectivity factor, radial velocity, and spectrum width. The temporal resolution of the volume-scan base data is 6 min, i.e., one volume scan is completed and one record is written every 6 min. The weather radar base data are read at elevation angles of 0.5°, 1.45°, 2.4°, 3.35°, 4.3°, and 6.0°. The hail data include hail station reports and other manual records collected in Hunan Province during the same period.
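The decompression step described above can be sketched with Python's standard bz2 module (file paths are illustrative; parsing the radar-specific .bin base-data layout is beyond this sketch):

```python
import bz2

def decompress_scan(src_path, dst_path):
    """Decompress one .bz2 volume-scan file to a raw .bin base-data file."""
    with bz2.open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        dst.write(src.read())
```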

4.1.2 Data pre-processing

For a certain city/county/district with hail, it is necessary to find the radar station echo map with its correlation and slice it. The specific steps are as follows:

First, the actual distance between the radar station and the live station is calculated from the latitude and longitude of the two stations, and the correspondence between the live station and the radar station is determined.

Second, find the corresponding radar station data according to the hail record. If the live record is a single moment, the processing is as follows: starting from the radar scan whose recording time is nearest to the live time, five consecutive scans are taken as the input sequence, i.e., a total of 30 min. If the live record covers a period of time, the handling is as follows: first, determine the radar recording times nearest to the start time and the end time, respectively; then, starting from the scan nearest to the end time, save five consecutive scans, and repeat from each preceding scan in turn until the scan nearest to the start time is reached. All data are stored in groups of five, coded as radar station serial number – live station serial number, to facilitate subsequent processing.
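The single-moment case can be sketched as follows (minute-based timestamps and the 6 min scan interval are assumptions consistent with the data description; this is not the authors' code):

```python
import bisect

def nearest_index(scan_times, t):
    """Index of the scan whose timestamp is nearest to time t (sorted input)."""
    i = bisect.bisect_left(scan_times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(scan_times)]
    return min(candidates, key=lambda j: abs(scan_times[j] - t))

def input_sequence(scan_times, live_time, length=5):
    """Five consecutive scans ending at the scan nearest the live time."""
    end = nearest_index(scan_times, live_time)
    start = max(0, end - length + 1)
    return scan_times[start:end + 1]

scans = list(range(0, 120, 6))          # one scan every 6 min
print(input_sequence(scans, 61))        # -> [36, 42, 48, 54, 60]
```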

Third, the last scan in the sequence is selected, and the reflectivity data in the radar base data are read at the 2.4° elevation angle and saved as a NumPy matrix. The specific azimuth values are held in the 0th column of the matrix, which is convenient for slicing the reflectivity matrix in the next step. Figure 9 shows the evolution of radar reflectivity over five frames in a radar detection area in Hunan.

Figure 9: Visualization of radar reflectivity evolution in a region over 30 min: (a) start, (b) 6th minute, (c) 12th minute, (d) 18th minute, and (e) 24th minute.

Finally, the program calculates the azimuth angle between the radar station and the live station and cuts a slice centered on the live station spanning the azimuth angle ±30°, from 48 km at the proximal end to 48 km at the distal end, i.e., a 60 × 96 area, and finds the point with the highest reflectivity within it. If that point exceeds 45 dBZ, its azimuth and radial bin numbers are saved for slicing the remaining data and elevation angles in the sequence. For all six elevation angles, the area spanning the azimuth angle ±16°, 16 km at the proximal end, and 48 km at the distal end, i.e., 32 × 64, centered on that point is selected. The slices of the six elevation angles are stacked along the first dimension, yielding data of size (6 × 32 × 64) per frame (the sequence length is 5), and the data are given the label "hail exists." The same process is applied to radar data from live records without hail (historical heavy-precipitation data), and the label "no hail" is given to those area data.
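A numerical sketch of this strong-echo localization and slicing (the array shapes and the implied 1° × 1 km bin sizes are assumptions inferred from the 60 × 96 and 32 × 64 dimensions above; the synthetic data are illustrative):

```python
import numpy as np

def locate_strong_echo(refl, az_center, rng_center, threshold=45.0):
    """Search the (azimuth +/-30 bins) x (range +/-48 bins) window around
    the live station; return the strongest point's indices if > threshold."""
    a0, r0 = az_center - 30, rng_center - 48
    window = refl[a0:a0 + 60, r0:r0 + 96]          # 60 x 96 search area
    ai, ri = np.unravel_index(np.argmax(window), window.shape)
    if window[ai, ri] <= threshold:
        return None
    return int(a0 + ai), int(r0 + ri)

def slice_sample(volume, az, rng):
    """Stack 32 x 64 slices of all six elevation angles -> (6, 32, 64).
    Range window: 16 bins proximal, 48 bins distal of the strong echo."""
    return np.stack([volume[e, az - 16:az + 16, rng - 16:rng + 48]
                     for e in range(volume.shape[0])])

gen = np.random.default_rng(0)
volume = gen.uniform(0, 40, size=(6, 360, 200))    # 6 elevations, synthetic
volume[:, 100, 80] = 55.0                          # planted strong echo
pos = locate_strong_echo(volume[2], az_center=100, rng_center=80)
sample = slice_sample(volume, *pos)
print(pos, sample.shape)   # -> (100, 80) (6, 32, 64)
```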

After forming the labeled hail sample set, the data are augmented with DCGANs, using Gaussian-distributed random numbers as the DCGAN input, and the sample set is partitioned 7:2:1 into the training, validation, and test sets.
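The 7:2:1 partition can be sketched as follows (shuffling with a fixed seed is an assumption added for reproducibility):

```python
import random

def split_721(samples, seed=42):
    """Shuffle and partition samples into 70% train, 20% val, 10% test."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_val = n * 7 // 10, n * 2 // 10  # integer math avoids float error
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_721(range(1000))
print(len(train), len(val), len(test))  # -> 700 200 100
```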

4.2 Performance indicators

In the field of weather recognition and prediction, accuracy (A), precision (P), recall (R), and F1 score are commonly used to measure model performance, and this article uses these metrics to evaluate the hail hazard weather recognition model. Applying them to the model yields two types of result images: predictions with hail and predictions without hail. True positives (TP), false positives (FP), true negatives (TN), and false negatives (FN) are the key quantities used to compute the metrics. Specifically, TP_hail refers to the case where hail occurs in the real situation and hail appears in the detection result; FP_hail indicates the case where no hail occurs in the real situation but hail appears in the detection result; TN_hail refers to the case where no hail occurs in the real situation and no hail appears in the detection result, i.e., the model detects correctly; and FN_hail indicates the case where hail occurs in the real situation but no hail appears in the detection result. The calculation formulas for accuracy, precision, recall, and F1 score [29] are as follows:

Accuracy (A) is the proportion of correctly predicted samples among all samples and is given by the following equation:

(10) Accuracy_hail = (TP_hail + TN_hail) / (TP_hail + FP_hail + FN_hail + TN_hail).

Precision (P), also known as the check-accuracy rate, is computed on the predicted results: it is the probability that a sample predicted to be positive is actually positive. Its formula is as follows:

(11) Precision_hail = TP_hail / (TP_hail + FP_hail).

Recall (R), also known as the check-all rate, is computed on the original samples: it is the probability that an actually positive sample is predicted to be positive. Its formula is as follows:

(12) Recall_hail = TP_hail / (TP_hail + FN_hail).

The F1 score is the harmonic mean of precision and recall, reflecting model performance comprehensively; its formula is as follows:

(13) F1_hail = (2 × Precision_hail × Recall_hail) / (Precision_hail + Recall_hail).
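Equations (10)–(13) can be checked with a short script (the confusion-matrix counts are illustrative, not the article's experimental results):

```python
def hail_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)     # Eq. (10)
    precision = tp / (tp + fp)                      # Eq. (11)
    recall = tp / (tp + fn)                         # Eq. (12)
    f1 = 2 * precision * recall / (precision + recall)  # Eq. (13)
    return accuracy, precision, recall, f1

# Illustrative counts only:
a, p, r, f1 = hail_metrics(tp=80, fp=10, tn=100, fn=10)
print(round(a, 3), round(p, 3), round(r, 3), round(f1, 3))  # -> 0.9 0.889 0.889 0.889
```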

4.3 Recognition results and analysis

4.3.1 Validation experiments

To analyze the prediction effect more intuitively, three sets of individual cases were selected from the test data set for analysis. The baseline model itself is used to generate future 10-frame predictions from five frames of historical radar maps. The experimental results are shown in Figure 10 (0–1 h).

Figure 10: Radar echoes at 6 min intervals (5 frames): (a) model input, (b) from 6 to 30 min, (c) prediction from 6 to 30 min, (d) from 36 to 60 min, and (e) prediction from 36 to 60 min.

As can be seen from the results in Figure 10, the proposed method consistently captures the evolution of echoes of different intensities and their locations fairly accurately, indicating that the model can "learn" and "understand" the development of strong convective weather and that the evolution of strong convective features learned from the training set generalizes well to new "future" scenarios. Moreover, the model effectively identifies and removes "noise" (radar clutter) from the input radar image (the noise in the upper-left part of the live image), thanks to the filtering effect inherent in the model's convolution operations. It is worth noting that the model also consistently and accurately tracks the movement of the nascent echoes just entering the study area in the lower-right part of the figure. This is because the model encounters similar situations many times in the massive training set, so it learns the movement trend of such radar echoes appearing at the boundary and gives reasonable predictions. However, the edge details of the echoes of different intensities are gradually lost over time. A likely reason is that the decoder network generates the predicted output iteratively: the difficulty of prediction gradually increases, so the confidence in each predicted value decreases and is fed back into the input of the next step, and the prediction uncertainty accumulates. In the visualized images, this appears as edge details that slowly blur and disappear. Figure 11 shows the model's predictions versus the actual situation.

Figure 11: Model predictions vs actual conditions: a.1–c.1 for 0.5 h and a.2–c.2 for 1 h.

As seen in Figure 11, the model accurately tracks the movement of convective storms in strong convective weather. The position and development of strong echoes are close to the corresponding radar observations, but there are shortcomings as well. Compared with the real situation, the edge contours of echoes of different intensities in the predicted radar echo maps are blurred and some details are lost. For example, when the real image shows several single thunderstorm echoes at nearby locations, which are difficult for the model to distinguish, they merge into one larger echo in the predicted image, eventually creating a false-alarm problem. One possible reason for this phenomenon is the inherent uncertainty of this weather prediction problem (the nonlinear, rapid evolution of strong convection), which propagates quickly with time and makes it almost impossible to predict the exact location of future echoes. Therefore, to reduce prediction errors, the model tends to produce fuzzy forecasts to improve the forecast "hit rate." Details that are instead "copied" from the original radar echoes, which are always changing dynamically, bring more false alarms (the "false alarm rate" increases), and the tracked echo positions become inaccurate (the forecast position runs somewhat ahead). In general, the 0.5 and 1 h nowcasts of the baseline model for the various strong convective echo areas agree well with the actual conditions.

4.3.2 Ablation experiments

Figure 12 shows the results of parameter optimization. The accuracy of the model gradually increases with the training period and eventually stabilizes, as the model has converged once the training period exceeds 800. Comparing the resulting models shows that, for the same training cycle, adding the attention mechanism and the voting-network structure significantly improves accuracy. At the same time, in actual training, increasing the number of branches of the voting network does not noticeably improve the final accuracy but does increase training and prediction time, so the final hail prediction system adopts the three-branch model. At a training period of 800 with the three-branch voting network, the accuracies of the baseline model, the baseline with the added attention-focus structure, and the baseline with both the attention-focus structure and the voting network are 91.3, 93.4, and 97.8%, respectively. Adding the attention-focus mechanism improves the model's accuracy, and the voting network further enhances its performance.

Figure 12: Comparison of LSTM-C3D models with different modules added.

4.3.3 Comparative experiments

To illustrate the role of the attention-focused structure and voting network established in this article in hailstorm weather prediction, the hailstorm label is advanced by 1 h: the ground observation data of the previous 3 h (including the current time) and the derived physical quantities are used as features, and the weather conditions of the following 1 h are used as labels for model training. This yields a 1 h advance prediction model of greater practical significance. The performance of classic 3D ConvNets, CNN-DNN [30], and other networks is compared with the proposed model, including training time, accuracy, and recall of hailstorm prediction [31]; the experimental results are shown in Table 1.
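The label-advancing step can be sketched as pairing each 3 h feature window with the hail label 1 h later (a 6 min frame interval, hence 30 feature frames and a 10-frame lead, is an assumption consistent with the radar data's temporal resolution):

```python
def make_lead_samples(frames, labels, window=30, lead=10):
    """Pair each `window`-frame feature history with the label `lead`
    frames (here 1 h) after the window's last frame."""
    samples = []
    for end in range(window - 1, len(frames) - lead):
        history = frames[end - window + 1:end + 1]  # 3 h of features
        samples.append((history, labels[end + lead]))
    return samples

frames = list(range(50))                 # 50 synthetic radar frames
labels = [0] * 45 + [1] * 5              # hail appears near the end
pairs = make_lead_samples(frames, labels)
print(len(pairs), pairs[-1][1])          # -> 11 1
```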

Table 1: Performance comparison

Model         Training time (h)   Accuracy (%)   Precision (%)   Recall (%)   F1 score (%)
3D ConvNets   6                   52.8           55.1            51.2         53.1
CNN-DNN       13                  73.2           73.0            72.6         72.8
CNN           16                  65.3           68.4            61.7         64.9
LSTM-C3D      18                  91.3           88.3            86.7         87.5

According to the experimental results, the LSTM-C3D proposed in this article is better than several comparison models, and the reasons are as follows:

Compared with 3D ConvNets, the proposed LSTM-C3D better handles the recurrence-depth problem thanks to the added LSTM module with a cascading mechanism and the GHU, which prevents long-term gradients from vanishing rapidly. CNNs are known for their parallelization and computational efficiency: they can handle large spatio-temporal sequences of radar echoes efficiently on hardware such as GPUs, resulting in fast processing. However, because a CNN's convolution kernels and pooling layers only process two-dimensional data, the three-dimensional data must be converted to two dimensions before being input, which may cause information loss. Additionally, CNNs have limited ability to capture temporal dependencies due to their focus on convolutions. In contrast, CNN-DNN can effectively integrate data from different dimensions and assess whether the network performs better when presented with data from multiple sources. However, the CNN-DNN model may struggle to predict global changes, as it primarily focuses on local features, making it difficult to capture long-term dependencies between sequences.

As seen from Table 1, the proposed model is superior to the other models in accuracy and recall, but its training time is relatively long. The main reason is that the model in this study has more layers in the CNN structure and more complex parameter adjustment. Since hail weather forecasting focuses on image processing and discrimination in the later stage, the model emphasizes the accuracy and recall indicators. On this basis, the F1 score, the harmonic mean of the two, is further used to reflect model performance comprehensively. The F1 score is 87.5%, which shows that the hailstorm recognition model performs well overall and can meet actual operational needs to a certain extent. These results show that the model in this study can meet meteorologists' needs for hailstorm prediction in the era of big data.

4.3.4 Model performance analysis

This section compares the training time, accuracy, precision, recall, and F1 scores of 3D ConvNets, CNN-DNN, CNN, and LSTM-C3D.

The comparison results are shown in Table 1. Because LSTM-C3D has a large number of parameters, its actual training speed is somewhat slower; we therefore deploy and run the model on a GPU with 8 GB of memory or more to ensure speed. As can be seen from the generated images, our model's predictions are much closer to the real images. Overall, combining observation and analysis, the model used in this study performs best among the compared models.

5 Conclusion

With the continuing densification of the new-generation Doppler weather radar network, radar spatio-temporal data have gradually become an important data type, widely used in fields such as transportation, meteorology, and environmental protection. This study proposes an LSTM-C3D model with an attention mechanism and voting-network optimization. Taking the meteorological scanning data of 11 Doppler weather radars in Hunan Province as the specific experimental and application object, a large number of confirmatory and comparative experiments are carried out. The results show that the proposed model can automatically extract features from radar reflectivity images, and the accuracy of hail recognition in the strong-echo region reaches 91.3%. However, although the model achieves accurate identification on the sample set, the hailfall processes used in this study are limited in number and all located in Hunan Province. Therefore, collecting and calibrating more hail data is one of the most critical next tasks, and subsequent studies will further enrich the data to enhance model generalization and improve early-warning and forecasting capabilities. At the same time, the encoder-decoder structure has inherent drawbacks: the encoder compresses the spatio-temporal features of the entire input sequence into a fixed-size intermediate vector, and the decoder predicts future sequence data from this vector alone. This compression and decoding process inevitably loses some spatio-temporal information, resulting in lost details and blurred edges. To address this issue, a more expressive attention module will be used in the future to capture dynamic global information.

Acknowledgements

This research work was funded by Institutional Fund Projects under grant no. (IFPIP: 489-135-1443). The authors gratefully acknowledge technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.

  1. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  2. Conflict of interest: The authors state that there is no conflict of interest.

  3. Human participants: This article does not contain any studies with human participants performed by any of the authors.

  4. Informed consent: The authors have read and agreed to submit this version of the manuscript.

References

[1] A. Beal, R. Hallak, L. D. Martins, J. A. Martins, G. Biz, A. P. Rudke, et al., Climatology of hail in the triple border Paraná, Santa Catarina (Brazil) and Argentina, Atmos. Res. 234 (2020), 104747, DOI: https://doi.org/10.1016/j.atmosres.2019.104747.

[2] W. Pilorz, M. Ziȩba, J. Szturc, and E. Łupikasza, Large hail detection using radar-based VIL calibrated with isotherms from the ERA5 reanalysis, Atmos. Res. 274 (2022), 106185, DOI: https://doi.org/10.1016/j.atmosres.2022.106185.

[3] E. M. Murillo and C. R. Homeyer, Severe hail fall and hailstorm detection using remote sensing observations, J. Appl. Meteorol. Climatol. 58 (2019), no. 5, 947–970, DOI: https://doi.org/10.1175/JAMC-D-18-0247.1.

[4] C. Wang, C. Wu, and L. Liu, Integrated convective characteristic extraction algorithm for dual polarization radar: Description and application to a convective system, Remote Sens. 15 (2023), no. 3, 808, DOI: https://doi.org/10.3390/rs15030808.

[5] A. Merino, J. L. Sánchez, S. Fernández-González, E. García-Ortega, J. L. Marcos, C. Berthet, et al., Hailfalls in southwest Europe: EOF analysis for identifying synoptic pattern and their trends, Atmos. Res. 215 (2019), 42–56, DOI: https://doi.org/10.1016/j.atmosres.2018.08.006.

[6] P. Wang, J. Shi, J. Hou, and Y. Hu, The identification of hail storms in the early stage using time series analysis, J. Geophys. Res. Atmos. 123 (2018), no. 2, 929–947, DOI: https://doi.org/10.1002/2017JD027449.

[7] S. Gupta, A. D. Dileep, and V. Thenkanidiyoor, Recognition of varying size scene images using semantic analysis of deep activation maps, Mach. Vis. Appl. 32 (2021), no. 2, 52.1–52.19, DOI: https://doi.org/10.1007/s00138-021-01168-8.

[8] J. Liu, J. He, Z. Tang, Y. Xie, W. Gui, T. Ma, et al., Frame-dilated convolutional fusion network and GRU-based self-attention dual-channel network for soft-sensor modeling of industrial process quality indexes, IEEE Trans. Syst. Man Cybern. Syst. 52 (2022), no. 9, 5989–6002, DOI: https://doi.org/10.1109/TSMC.2021.3130232.

[9] S. Park, S. Han, S. Kim, D. Kim, S. Park, S. Hong, et al., Improving unsupervised image clustering with robust learning, In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 12278–12287, DOI: https://doi.org/10.1109/CVPR46437.2021.01210.

[10] L. Li, J. Zhang, J. Yan, Y. Jin, Y. Zhang, Y. Duan, et al., Synergetic learning of heterogeneous temporal sequences for multi-horizon probabilistic forecasting, AAAI 35 (2021), no. 10, 8420–8428, DOI: https://doi.org/10.1609/aaai.v35i10.17023.

[11] M. Cai, Y. Shi, J. Liu, J. P. Niyoyita, H. Jahanshahi, and A. A. Aly, DRKPCA-VBGMM: fault monitoring via dynamically-recursive kernel principal component analysis with variational Bayesian Gaussian mixture model, J. Intell. Manuf. 34 (2023), 2625–2653, DOI: https://doi.org/10.1007/s10845-022-01937-w.

[12] P. Wang, W. Lv, C. Wang, and J. Hou, Hail storms recognition based on convolutional neural network, WCICA 2018 (2018), 1703–1708, DOI: https://doi.org/10.1109/WCICA.2018.8630701.

[13] J. Guo, Z. Lu, C. Wang, H. Wu, and X. Ding, Severe convection weather identification model based on rough set theory and artificial fish swarm algorithm, IEEE, New York, USA, 2021, pp. 7113–7118, DOI: https://doi.org/10.23919/CCC52363.2021.9549755.

[14] Y. Ding, X. Yu, J. Zhang, and X. Xu, Application of linear predictive coding and data fusion process for target tracking by Doppler through-wall radar, IEEE Trans. Microw. Theory Tech. 67 (2019), no. 3, 1244–1254, DOI: https://doi.org/10.1109/TMTT.2018.2885973.

[15] S. Lu, Z. Lu, and Y. D. Zhang, Pathological brain detection based on AlexNet and transfer learning, J. Comput. Sci. 30 (2019), 41–47, DOI: https://doi.org/10.1016/j.jocs.2018.11.008.

[16] Q. Guan, Y. Wang, B. Ping, D. Li, J. Du, Y. Qin, et al., Deep convolutional neural network VGG-16 model for differential diagnosing of papillary thyroid carcinomas in cytological images: a pilot study, J. Cancer 10 (2019), no. 20, 4876–4882, DOI: https://doi.org/10.7150/jca.28769.

[17] R. U. Khan, X. Zhang, and R. Kumar, Analysis of ResNet and GoogleNet models for malware detection, J. Comput. Virol. Hacking Tech. 15 (2019), no. 1, 29–37, DOI: https://doi.org/10.1007/s11416-018-0324-z.

[18] Z. Lu, X. Jiang, and A. Kot, Deep coupled ResNet for low-resolution face recognition, IEEE Signal Process. Lett. 25 (2018), no. 4, 526–530, DOI: https://doi.org/10.1109/LSP.2018.2810121.

[19] F. Ji, H. Zhang, Z. Zhu, and W. Dai, Blog text quality assessment using a 3D CNN-based statistical framework, Future Gener. Comput. Syst. 116 (2021), 365–370, DOI: https://doi.org/10.1016/j.future.2020.10.025.

[20] L. Zhang, Z. Huang, W. Liu, Z. Guo, and Z. Zhang, Weather radar echo prediction method based on convolution neural network and long short-term memory networks for sustainable e-agriculture, J. Clean. Prod. 298 (2021), 126776, DOI: https://doi.org/10.1016/j.jclepro.2021.126776.

[21] Y. Yu, X. Si, C. Hu, and J. Zhang, A review of recurrent neural networks: LSTM cells and network architectures, Neural Comput. 31 (2019), no. 7, 1235–1270, DOI: https://doi.org/10.1162/neco_a_01199.

[22] S. R. Gooch and V. Chandrasekar, Improving historical data discovery in weather radar image data sets using transfer learning, IEEE Trans. Geosci. Remote Sens. 59 (2020), no. 7, 5619–5629, DOI: https://doi.org/10.1109/TGRS.2020.3015663.

[23] J. Liu, J. He, Y. Xie, W. Gui, Z. Tang, T. Ma, et al., Illumination-invariant flotation froth color measuring via Wasserstein distance-based cycleGAN with structure-preserving constraint, IEEE Trans. Cybern. 51 (2021), no. 2, 2168–2275, DOI: https://doi.org/10.1109/TCYB.2020.2977537.

[24] H. Huang, H. Xu, F. Chen, C. Zhang, and A. Mohammadzadeh, An applied type-3 fuzzy logic system: Practical Matlab Simulink and M-files for robotic, control, and modeling applications, Symmetry 15 (2023), no. 2, 475, DOI: https://doi.org/10.3390/sym15020475.

[25] J. Liu, L. Xu, Y. Xie, T. Ma, J. Wang, Z. Tang, et al., Toward robust fault identification of complex industrial processes using stacked sparse-denoising auto-encoder with softmax classifier, IEEE Trans. Cybern. 53 (2023), no. 1, 428–442, DOI: https://doi.org/10.1109/TCYB.2021.3109618.

[26] Q. Tang, M. Yang, and Y. Yang, ST-LSTM: A deep learning approach combined spatio-temporal features for short-term forecast in rail transit, J. Adv. Transp. 2019 (2019), 1–8, DOI: https://doi.org/10.1155/2019/8392592.

[27] Y. Wang, Z. Gao, M. Long, J. Wang, and S. Y. Philip, PredRNN++: Towards a resolution of the deep-in-time dilemma in spatio-temporal predictive learning, ICML 2018 (2018), 5123–5132.

[28] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, Learning spatio-temporal features with 3D convolutional networks, ICCV 2015 (2015), 4489–4497, DOI: https://doi.org/10.1109/ICCV.2015.510.

[29] G. Fan, Z. Deng, Q. Ye, and B. Wang, Machine learning-based prediction models for patients no-show in online outpatient appointments, DSM 2 (2021), 45–52, DOI: https://doi.org/10.1016/j.dsm.2021.06.002.

[30] M. Pullman, I. Gurung, M. Maskey, R. Ramachandran, and S. A. Christopher, Applying deep learning to hail detection: A case study, IEEE Trans. Geosci. Remote Sens. 57 (2019), no. 12, 10218–10225, DOI: https://doi.org/10.1109/TGRS.2019.2931944.

[31] A. Mohammadzadeh and B. Firouzi, A new path following scheme: safe distance from obstacles, smooth path, multi-robots, J. Ambient. Intell. Humaniz. Comput. 14 (2023), 1–13, DOI: https://doi.org/10.1007/s12652-023-04565-1.

Received: 2023-02-20
Revised: 2023-05-06
Accepted: 2023-06-01
Published Online: 2024-01-24

© 2024 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
