Article

Research on Feature Extraction of Meteorological Disaster Emergency Response Capability Based on an RNN Autoencoder

1 College of Computer and Information, Hohai University, Nanjing 211100, China
2 Business School, Hohai University, Nanjing 211100, China
3 State Grid Jiangsu Electric Power Company Ltd., Research Institute, Nanjing 211100, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(8), 5153; https://doi.org/10.3390/app13085153
Submission received: 11 February 2023 / Revised: 11 April 2023 / Accepted: 19 April 2023 / Published: 20 April 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Climate change has increased the frequency of various types of meteorological disasters in recent years. Identifying the primary factors that limit meteorological disaster emergency response capability through the evaluation of that capability, and proposing corresponding improvement measures, is of great practical importance. The evaluation of meteorological disaster emergency response capability still faces several issues. Most existing methods rely on qualitative analysis, which makes it difficult to handle fuzzy factors and leads to conclusions that are subjective and insufficiently rigorous. The evaluation models themselves are also complex and hard to simulate and analyze, which limits their adoption in practice. Deep learning techniques have made it easier to collect and process large amounts of data, opening new avenues for progress in the emergency management of weather-related disasters. In this paper, we propose a Recurrent Neural Network (RNN)-based dynamic capability feature extraction method. A meteorological disaster emergency response capability evaluation index system is first built through the determination of evaluation content and the selection of indexes, and an encoder based on the encoder–decoder architecture is then constructed for dynamic feature extraction. A series of experiments shows that the RNN autoencoder-based dynamic capability rating method used in this paper not only efficiently extracts capability features from time series data and reduces their dimensionality, but also reduces the evaluation model's focus on simple and abnormal samples, concentrates learning on difficult samples, and achieves higher accuracy. As a result, it is better suited to the problem of evaluating meteorological disaster emergency response capability.

1. Introduction

The recent increase in the frequency of meteorological disasters has made strengthening meteorological disaster emergency management and improving emergency management capacity major concerns for governments at all levels. Improving emergency management capacity requires a thorough assessment of the current system for managing meteorological disasters. Some academics have qualitatively evaluated the building of emergency management capacity for preventing and mitigating meteorological disasters, but no quantitative evaluation index system or evaluation methods have been developed.
The evaluation of emergency management capability is typically regarded as a crucial component of enhancing local governments' emergency management capabilities. Government performance management is a comprehensive assessment of the procedures, effectiveness, and efficiency of the government and its agencies in carrying out their duties and completing their objectives. The evaluation is conducted using a scientific assessment index system and is the primary means by which, based on its findings, government work can be improved, administrative costs can be decreased, and government effectiveness can be increased. In research on evaluating the emergency management capacity of local governments, Bryen D. N. (2009) argued that governmental management agencies should be in charge of the emergency management of natural disasters and that social organizations already have specialized emergency management capacity. Following a case study of the state of Florida, the author concluded that local governments should build emergency management capacity that covers pre-disaster preparedness, disaster response, and disaster reduction, as well as post-disaster recovery and construction [1]. From the perspectives of organizational technology, information, cooperative communication, and cultural building, Corbacioglu Sitki et al. (2006) investigated the factors that affect emergency response capacity [2]. According to Kathy Zantal-Wiener et al. (2010), the Government Performance and Results Act (GPRA) can serve as the framework for conducting a thorough evaluation with the addition of the performance monitoring used in the U.S. education sector [3]. JinFeng Wang et al. (2012) created a coal mine inundation disaster emergency response capability evaluation model using a wavelet neural network [4]. According to Sugumaran et al. (2017), a successful emergency management organization can reduce disaster losses; they therefore proposed a model for evaluating emergency organization capacity and offering guidance for various local emergency management organizations [5].
Academics conduct a great deal of research in the area of natural disasters. The theoretical study of crisis management has gradually caught the attention of academics due to the social crisis events brought on by various human rights movements, ideological shifts, and economic crises. Theories of natural disaster management have been developed and improved with more in-depth research directions, and numerous academics have conducted evaluations and studies regarding government decision-making mechanisms and decision implementation to guarantee the scientific effectiveness of policies that deal with natural disasters.
Most academics have studied how laws and regulations are created, how management systems are built, and how emergency plans are developed for the management of natural disasters in order to enhance government emergency management capabilities. To gradually regulate the emergency management of natural disasters, the United States passed the Flood Control Act, the Natural Disaster Mitigation Act, and other laws starting in the 1980s [6]. The Disaster Relief and Emergency Assistance Act, which was passed in the United States in 1974, requires the federal government to assist state and local governments and establishes guidelines for rescue and relief efforts following a natural disaster. By establishing a single act like the Disaster Relief Act, the Japanese government was able to carry out disaster prevention and mitigation. The Basic Law for Disaster Countermeasures was later passed by the Japanese government, which changed the country’s disaster emergency management laws from a single response to a multi-hazard comprehensive system that focuses on institution building, disaster early warning, plan creation, emergency response, and finally, post-disaster recovery and reconstruction [7]. The Japanese government’s disaster prevention and mitigation management is based on the Fundamental Countermeasures for Natural Disasters Act, which also supports the regulation of emergency response work [8].
Many academics have carried out more creative research in this field regarding management mechanisms and institution building. Using network analysis and Petri net model construction techniques, Guo Xuesong et al. (2020) analyzed organizational collaboration and "fragmented" management and discussed how to link and coordinate management organizations in terms of emergency management organizations and disaster response procedures, naming the "fragmented" management links. By establishing a process analysis system model and fusing it with a system evaluation method, the article also investigated the creation of emergency management information systems [9]. Tao Peng (2016) proposed a path for improving emergency management capabilities based on the concept of social networking. To demonstrate that innovative reforms in the development of grassroots emergency management systems can be achieved through networking, he first verified the viability of the networked management scheme. On this basis, he analyzed several aspects of local governments' emergency system development ideas, emergency management system support, integration of resources and coordination, and innovation, and then presented pertinent policy recommendations [10]. In the context of the big data era, Zhou Hui (2016) examined how the government handles emergencies, identified the causes of existing problems, and provided useful recommendations for institutional innovation [11]. Similarly, Li (2013) examined the issues with China's emergency response system and suggested approaches to effectively improve the current state of emergency management in China by utilizing technology and big data thinking [12].
Building emergency response capacity in terms of disaster prevention and mitigation requires the creation of a disaster emergency response capacity evaluation index system [13]. In order to create an emergency response capacity rating system for state and local governments, the Federal Emergency Management Agency (FEMA) of the United States revised its Capability Assessment for Readiness (CAR) system in 2000. According to the actual situation in Chinese cities, Liu, J. et al. [14] developed a framework system for evaluating urban emergency response capacity. This framework includes 18 categories, 76 attributes, and 405 characteristics that reflect all facets of the development of urban emergency response capacity in China. Wang, Y. et al. [15] created an urban disaster emergency management capacity evaluation index system by grading the evaluation indexes of urban disaster emergency response capacity from the perspective of system theory. A hierarchical analysis was used by Wang, Z.Q. et al. [16] to assess the meteorological disaster emergency management capacity evaluation index system, which included 3 secondary indicators and 15 tertiary indicators. A meteorological service guarantee capability evaluation index system was created by Wen, X. et al. [17] and evaluated using a multi-level fuzzy comprehensive judgment method. In order to empirically analyze the meteorological disaster emergency management capacity of 31 provinces (municipalities and autonomous regions) in China, Yao, X.J. et al. [18] established an evaluation index system of meteorological disaster emergency management and used the projection pursuit model. According to the emergency management tenet that "scientific prevention is better than disaster relief," Wang, Y. et al. [19] established an evaluation system of regional meteorological disaster emergency defense capability based on the improved CRiteria Importance Through Intercriteria Correlation (CRITIC) method.
Currently, Principal Component Analysis (PCA), random forest, and other feature extraction techniques are frequently used in machine learning. These dimensionality reduction techniques, however, are effective at obtaining linear features from cross-sectional data but are unable to characterize nonlinear relationships or obtain features from time series. Additionally, the traditional autoencoder, despite being able to learn the nonlinear relationships among capability features, still cannot successfully extract the features of time series. For this reason, the Recurrent Neural Network (RNN) algorithm and the encoder–decoder model framework from the field of machine translation are used to build a capability feature extraction method in this paper.
Researchers in the field of machine translation construct and train translation models using RNN networks and the encoder–decoder framework. A translation model trained on a large corpus encodes an utterance in a source language (such as Chinese) into a specific semantic representation (i.e., the hidden layer of the RNN) and then decodes it into the same or another language (e.g., English). This semantic representation can be thought of as linguistic feature extraction: various linguistic symbols can be mapped to the semantic level.
The capability feature extraction method based on a Recurrent Neural Network autoencoder is proposed in this paper as a solution to the feature extraction problem of multidimensional time series capability data of meteorological disaster emergency capability [20]. The capability feature set of meteorological disaster emergency subjects is constructed by integrating the existing theoretical research results and management practices. On this basis, the specific steps of building emergency capability feature extraction are studied. Finally, the results of feature extraction are interpreted by the Shapley Additive Explanations (SHAP) method, which lays the foundation for understanding the feature extraction results.
Natural disaster management has grown to be a top priority for all governments. Enhancing emergency management capability requires evaluation, but the methods used today lack a quantitative index system. Despite the fact that many academics have studied how laws, management systems, and emergency plans are developed, the majority of research techniques use qualitative analysis, which makes it challenging to deal with ambiguous factors and leads to subjective and inadequately rigorous conclusions. Simulation and analysis are made more difficult by the complexity of evaluation models, which limits their usefulness in real-world applications. The evaluation of meteorological disaster emergency response capability using RNN autoencoder-based dynamic feature extraction is a novel method proposed in this paper. This ground-breaking approach addresses the issues of dynamicity and inconsistent length of capability data, leading to a more precise assessment [13].
Additionally, the suggested approach makes use of deep learning techniques to gather and analyze large amounts of data, opening up new opportunities for development in the area of meteorological disaster emergency management. It is demonstrated that the RNN autoencoder-based dynamic feature extraction method efficiently and accurately extracts ability features from time series data while reducing dimensionality. This makes it more appropriate for assessing emergency response capabilities for meteorological disasters, particularly in circumstances where simple and unusual samples could result in a flawed assessment.
To build a meteorological disaster emergency response capability evaluation index system, the proposed method starts with the selection of evaluation content and indexes. The development of an encoder for dynamic feature extraction based on the encoder–decoder architecture comes next. Then, to effectively extract ability features and reduce dimensionality, the RNN autoencoder deep learning ability dynamic rating method is used. This method is demonstrated through a series of experiments to increase the evaluation model’s accuracy by concentrating on challenging samples.
In conclusion, the suggested RNN autoencoder-based dynamic feature extraction method offers a more precise and effective method for assessing the capability of emergency response to meteorological disasters. The method has significant practical implications for improving emergency management capabilities in the face of rising meteorological disasters due to its effectiveness in efficiently extracting ability features from time series data, reducing dimensionality, and improving accuracy.

2. Materials and Methods

Accurate assessment of the emergency response capacity for meteorological disasters is a prerequisite and is based on a scientific index system. The emergency response capability of meteorological disasters is influenced by a variety of factors, and when screening indicators, the following four principles are observed [14]: (1) A guiding principle. A structured and hierarchical index system must take into account all factors influencing the emergency response capacity because the weather disaster emergency is a complete system. (2) The representativeness principle. It is not very realistic to choose all the indicators because the indicators of meteorological disaster emergency response capacity encompass economic, legal, management, medical, and other aspects. As a result, the indicators that are chosen should be representative and ensure that they are closely related to the meteorological disaster response capacity. (3) The feasibility principle. When choosing evaluation indicators, we should take into account more readily available data and information, straightforward and practical evaluation methods, and evaluation results with broad applicability in order to assist relevant departments in raising the level of emergency management. (4) The dynamicity principle. The factors influencing meteorological disasters change continuously as society develops, so the evaluation indices chosen should be consistent with time and space, as this can reflect the process of changes in the capability of meteorological disaster emergency response as society develops [12].

2.1. Analysis of Capability Feature Extraction Problem and Solution Ideas

To extract the main features that are consistent with the three-dimensional theory of capability through the RNN encoder (analogous to the semantic connotation in machine translation), we use the large amount of capability index data collected in practical management work as the object from which features are to be extracted in this section. Finally, we map the main features back to the original indicators (analogous to the original language or another language in machine translation). This is shown in Figure 1.
In order to provide high-quality input data for creating an accurate evaluation model for constructing the capability of weather disaster emergency subjects, the encoder part obtained from the training is used as a feature extractor to extract the capability features of the time series samples and reduce the capability feature dimension.

2.2. RNN Autoencoder-Based Method for Extracting Capability Features

The feature extraction concept illustrated in Figure 1 states that, in order to build a method for feature extraction based on this network structure, one must first create a neural network structure that corresponds to the characteristics of the capability data.

2.2.1. RNN Autoencoder Architecture Design

(1)
Method for supervised feature extraction
A Recurrent Neural Network (RNN) is a neural network model dealing with time series, which enables the model to be trained and generalized to variable length time series samples by sharing parameters at different moments, and thus, it can dynamically extract features for contingency [21].
The encoder–decoder architecture is a variant of the RNN model. Data compression, feature extraction, and anomaly detection are just a few of the machine learning tasks that have been successfully completed using autoencoders, which are a class of neural networks. We provide an overview of the architecture of autoencoders below.
The encoder and the decoder are the two main components that make up an autoencoder’s architecture. The encoder is in charge of reducing the dimensions of the input data, and the decoder is in charge of reassembling the input data from the compressed representation. The reconstruction error, also known as the difference between input data and output, is minimized by the architecture of an autoencoder.
Typically, multi-layer perceptrons (MLPs), or fully connected neural networks, are used to implement the encoder and decoder. The encoder converts the input data into a compressed representation using one or more hidden layers. Hyperparameters like the number of hidden layers and the number of neurons in each layer are typically figured out through trial and error.
The decoder, which is the exact opposite of the encoder, converts the compressed representation into a reconstructed output using one or more hidden layers. The decoder can reconstruct the input data with little loss of information because the number of hidden layers and neurons in each layer is typically the same as those in the encoder.
The mean-squared error (MSE), which gauges the discrepancy between the input data and output, is frequently used as the loss function to train an autoencoder. Stochastic gradient descent (SGD) or one of its variants is typically the optimization algorithm used to minimize the loss function.
Autoencoders are neural networks that can learn to compress and reconstruct data, to put it briefly. They consist of a decoder that reconstructs the input data from the compressed representation and an encoder that compresses the input data into a lower-dimensional representation. The loss function used to train the autoencoder is typically the mean-squared error, and the architecture of an autoencoder is typically implemented using fully connected neural networks. Figure 2 shows how it maps the input sequence to the output sequence.
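For illustration, the following is a minimal Python (PyTorch) sketch of such a fully connected autoencoder; the layer sizes, learning rate, and random batch are illustrative assumptions, not the configuration used later in this paper.

import torch
import torch.nn as nn

# Minimal fully connected autoencoder: the encoder compresses the input and the
# decoder reconstructs it; training minimizes the MSE reconstruction error.
class MLPAutoencoder(nn.Module):
    def __init__(self, input_dim=20, hidden_dim=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        code = self.encoder(x)        # compressed representation
        return self.decoder(code)     # reconstruction of the input

model = MLPAutoencoder()
loss_fn = nn.MSELoss()                # reconstruction error
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.rand(32, 20)                # an illustrative batch of input vectors
loss = loss_fn(model(x), x)
loss.backward()                       # one SGD step on the reconstruction error
optimizer.step()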
Assume that the input is a sequence of meteorological disaster response capability characteristics.
x = (x^1, x^2, \ldots, x^{T_x})        (1)
where T_x is the length of the sequence x. If x^t is an M-dimensional feature vector, then
x^t = (x_1^t, x_2^t, \ldots, x_M^t)        (2)
where x_n^t denotes the nth-dimensional original capability characteristic of the weather disaster emergency subject to be evaluated at moment t, and M is the dimension of the initial capability characteristics.
The output of the model is a sequence of ability levels.
y = (y^1, y^2, \ldots, y^{T_y})        (3)
where T_y is the length of the sequence y.
The calculation method of the intermediate state h^t is shown in Equation (4).
h^t = g(W_{hh} h^{t-1} + W_{xh} x^t),  t \in \{1, 2, \ldots, T_x\}        (4)
where g stands for the activation function. W_{xh}, W_{hy}, and W_{hh} are the mapping matrices of input values to hidden states, hidden states to output values, and hidden states to hidden states, respectively. h^0 is usually initialized as a zero vector. The hidden state records the information of the current input and the historical state, and is a high-level summary and generalization of the historical input. The other nodes are calculated in a similar way to Equation (4).
As can be seen from the figure, the encoder–decoder architecture of the RNN consists of an encoder (Encoder) that reads the input sequence x (capability features) and a decoder (Decoder) that produces the output sequence y (capability levels). The state h^t of the hidden layer is the intermediate result of encoding, which finally yields the encoded value a (the capability feature).
Once the input sequence x has been processed by the Encoder through a forward computation, the Encoder produces a final hidden state vector a as its output. If W_{xh}, W_{hy}, and W_{hh} have been trained to be optimal, a is the temporal feature vector of the input sequence. In machine translation problems, a is usually taken as the semantic implication of the input utterance, and a can be passed to the decoder for translation into another language sequence.
a = g(W_{hh} h^{T_x})        (5)
If the hidden state dimension is m, then a = (a_1, a_2, \ldots, a_m). Here, g is the activation function; to reduce the risk of the gradient vanishing, ReLU is usually used as the activation function inside the RNN.
From a temporal perspective, a is the result obtained by linear spatial mapping, nonlinear transformation, and weighting of the time series data features, so through RNN learning, a can accumulate the temporal information that is most meaningful for predicting y and attenuate the nonsignificant temporal information.
From the standpoint of the feature dimension, a more compact and refined capability feature is created by reducing the dimensionality, usually setting the hidden layer dimension below the input dimension and removing redundant information from the feature dimension through data training. Combining the roles of the two aforementioned perspectives, the self-coding mechanism that produces a through the RNN can be used as an efficient extraction method for capability features.
After performing the aforementioned analysis and looking at Figure 1, it is clear that the hidden states of earlier moments will influence later hidden states, but that this influence will wane over time. This is in line with the law that states that weather disaster emergency subjects build their capability features over time, allowing the final state to reflect the capability sequence features. The RNN-based encoder–decoder architecture is built to realize the dynamic extraction of capability sequence features when the data samples are sufficient.
(2)
Unsupervised feature extraction method
As training samples for the architecture depicted in Figure 2, a sizable number of capability traits and capability level time series data of weather disaster emergency response capability subjects are needed.
S = \{(x_i, y_i) \mid i = 1, 2, \ldots, N\}        (6)
where S denotes the set of samples used to train the feature encoder, x_i is the weather disaster emergency subject capability feature data shown in Equation (1), and y_i is shown in Equation (3).
However, due to the late start of meteorological disaster emergency management, there are still a large number of meteorological disaster emergency subjects that have not applied for capacity evaluation, i.e., there is no capacity level sequence y. According to the relevant data from the network platform, as of December 2020, only the relevant network platform has saved the capacity-related data of 23,025 meteorological disaster emergency capacity subjects. In contrast, only 1249 capacity evaluation subjects (valid as of 2020) were officially announced in 2017. The number of subjects with multiple consecutive capability evaluation levels is low.
The feature extraction architecture based on RNN autoencoder is proposed in this paper to address the issue of a small number of samples with ability level sequence y, as shown in Figure 3.
Unlike a conventional encoder, the autoencoder structure shown in Figure 3 sets the training samples for feature extraction as
S = \{(x_i, x_i) \mid i = 1, 2, \ldots, N\}        (7)
The data are the weather disaster emergency subject capability feature data shown in Equation (1); the input sequence and the output sequence are both x_i. This structure belongs to unsupervised learning for feature extraction. The output result a of the hidden layer serves as the foundation for the autoencoder's reconstruction of the input features x. The input weights W_{xh}, the hidden layer weights W_{hh}, the decoder weights W_{h'h'} and W_{h'x}, and the biases b are continuously optimized during training to lower the reconstruction error for the input x.
The capability level of the weather disaster emergency subject at each instant is not the prediction target for the autoencoder feature extraction structure shown in Figure 3; rather, the reconstruction of x from the capability characteristic encoding value a retains the key features, reflecting the capability evolution trend on the time scale. Because a is the output of the hidden layer with nonlinear characteristics, it can also learn the time-dependent characteristics of various types of capability information. This nonlinear feature is challenging to extract using conventional PCA and Constrained Principal Component Analysis (CPCA) feature extraction techniques, and it complies with the national capability information restoration mechanism's requirement, which records significant capability feature data within a specific time frame and adopts a forgetting mechanism after that.
In terms of dimensional characteristics, the autoencoders form a pair of mutually inverse mapping relations.
f: x→a  d: a→x
where f mapping is obtained by training the input layer weight matrix, and d mapping is obtained by learning the output layer weights. This mapping relationship is similar to the restricted Boltzmann machine [22], which is able to extract features from the input perspective better by data reconstruction.

2.2.2. Capability Feature Extraction Method

(1)
Model parameters
Assume that, in the feature extraction method shown in Figure 3, the Encoder is denoted by f, which is responsible for mapping the raw capability data x to the feature extraction layer a, and the Decoder is denoted by d, which is responsible for mapping a to the reconstructed features \hat{x}. Let the number of capability sequence samples of building weather disaster emergency subjects be N; then the set of capability feature sequences is
X = (x_1, x_2, \ldots, x_N)        (8)
For the ith feature sequence,
x_i = (x_i^1, x_i^2, \ldots, x_i^{T_i})        (9)
According to the RNN network structure, each node of the encoder–decoder architecture constructed in this paper is computed as follows.
h_i^t = g(W_{hh} h_i^{t-1} + W_{xh} x_i^t)        (10)
a_i = g(W_{hh} h_i^{T_i})        (11)
h_i'^t = g(W_{h'h'} h_i'^{t-1})        (12)
\hat{x}_i^t = g(W_{h'x} h_i'^t)        (13)
where \hat{x}_i^t is the reconstruction of the input x by the autoencoder, i \in \{1, 2, \ldots, N\}, and t \in \{1, 2, \ldots, T_i\}. h_i^0 is the initial value of the capability feature, and a random number greater than or equal to 0 can generally be chosen as the initial value according to the connotation of the capability feature. The initial hidden state of the Decoder is set to the encoding result, h_i'^0 = a_i. W_{xh} denotes the connection weight matrix from the input x to the hidden layer h of the Encoder. In the process of capability feature extraction, the mapping relationship between the original capability features and the output features can be obtained by learning this matrix, as shown in Equation (15). In this equation, (w_{xh})_{i,j} is the connection weight between the ith-dimensional input x_i and the hidden state (capability feature) h_j, which represents the contribution of x_i to h_j. Unlike general linear dimensionality reduction methods, these weight effects are also modulated by the activation function g.
W_{xh} = \begin{pmatrix} (w_{xh})_{1,1} & \cdots & (w_{xh})_{1,M} \\ \vdots & \ddots & \vdots \\ (w_{xh})_{m,1} & \cdots & (w_{xh})_{m,M} \end{pmatrix}_{m \times M}        (15)
W_{hh}, as shown in Equation (16), is the connection weight of the Encoder's hidden state at moment t − 1 on its hidden state at moment t. In capability feature extraction, it expresses the influence of the Encoder's previous capability information on the current capability information, where (w_{hh})_{i,j} is the connection weight between the hidden state (capability feature) h_i at moment t − 1 and the hidden state (capability feature) h_j at moment t, representing the contribution of h_i to h_j. Again, these contributions are also modulated by the activation function g. By learning this matrix, useful information about the prior capabilities can be retained in the final extracted features.
W_{hh} = \begin{pmatrix} (w_{hh})_{1,1} & \cdots & (w_{hh})_{1,m} \\ \vdots & \ddots & \vdots \\ (w_{hh})_{m,1} & \cdots & (w_{hh})_{m,m} \end{pmatrix}_{m \times m}        (16)
W_{h'h'}, as shown in Equation (17), is the connection weight of the Decoder's hidden state at moment t − 1 on its hidden state at moment t. The Decoder is able to reconstruct the capability data from the capability features compressed into a. During capability feature extraction, W_{h'h'} denotes the influence of the Decoder's previous capability information on the current capability information.
W_{h'h'} = \begin{pmatrix} (w_{h'h'})_{1,1} & \cdots & (w_{h'h'})_{1,m} \\ \vdots & \ddots & \vdots \\ (w_{h'h'})_{m,1} & \cdots & (w_{h'h'})_{m,m} \end{pmatrix}_{m \times m}        (17)
W_{h'x}, as shown in Equation (18), represents the weight matrix connecting the Decoder's hidden state h' to the reconstructed features. In the process of capability feature extraction, by learning this matrix, the mapping relation between the Decoder's hidden state and the reconstructed features \hat{x} can be obtained.
W_{h'x} = \begin{pmatrix} (w_{h'x})_{1,1} & \cdots & (w_{h'x})_{1,m} \\ \vdots & \ddots & \vdots \\ (w_{h'x})_{M,1} & \cdots & (w_{h'x})_{M,m} \end{pmatrix}_{M \times m}        (18)
(2)
Feature extraction steps
(a) Feature coding
The capability extraction process begins with the raw capability data x_i being mapped by f to the final state a_i, which serves as the feature representation of this sequence; the mapping function is
a_i = f(x_i)        (19)
If the dimension of the capability feature is M, then x_i^t = (x_{i,1}^t, x_{i,2}^t, \ldots, x_{i,M}^t), with t \in \{1, 2, \ldots, T_i\}. a_i is the coded value of the ith sample, and if an m-dimensional feature sequence is to be extracted, then a_i = (a_{i,1}, a_{i,2}, \ldots, a_{i,m}).
(b) Feature decoding
By mapping the extracted a_i through d to the output \hat{x}_i:
\hat{x}_i = d(a_i)        (20)
where \hat{x}_i is the reconstruction result of the ith capability feature sequence; see Equation (21).
\hat{x}_i = (\hat{x}_i^1, \hat{x}_i^2, \ldots, \hat{x}_i^{T_i})        (21)
For i \in \{1, 2, \ldots, N\}, the set of reconstructed capability feature sequences is
\hat{X} = (\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_N)        (22)
(c) Loss function calculation
In order to extract the optimal features, the set of reconstructed features \hat{X} of the RNN autoencoder needs to be as close as possible to the real set of capability feature sequences X, so its learning process minimizes the following L2 loss function.
L(\hat{X}, X) = \sum_{t=1}^{T} L^t = \sum_{i=1}^{N} \lVert \hat{x}_i - x_i \rVert^2        (23)
(d) Parameter learning
The time-based backpropagation algorithm, Backpropagation Through Time (BPTT) [23], is used to learn the parameters of the encoder–decoder architecture. The actual solution process can be combined with optimization strategies such as stochastic gradient descent (SGD); after several iterations of training, stopping when the termination condition or the predefined number of training rounds is reached, the final encoder f is obtained.
Given a sample set S = (X, X) with N samples and the set of capability feature sequences X = (x_1, x_2, \ldots, x_N), the capability feature of the ith feature sequence after dimensionality reduction is
a_i = f(x_i)        (24)
where x_i \in X, x_i = (x_i^1, x_i^2, \ldots, x_i^{T_i}), x_i^t = (x_{i,1}^t, x_{i,2}^t, \ldots, x_{i,M}^t), and a_i = (a_{i,1}, a_{i,2}, \ldots, a_{i,m}).
The set of samples after feature extraction is S' = (X, A), where A = (a_1, a_2, \ldots, a_N) \in \mathbb{R}^{m \times N}. According to the principle of feature extraction, the dimension of the final state a can be set to be less than the input feature dimension, i.e., m < M, which can effectively reduce the risk of overfitting of the encoder–decoder architecture while forcing it to capture the most significant features in the capability sequence for the purpose of dimensionality reduction.

2.3. Weather Disaster Emergency Response Capability Subject Capability Feature Set Construction

From the government’s perspective, the purpose of dynamically evaluating and supervising the capacity of the construction of weather disaster emergency subjects is to provide decision support for project authorities at all levels and project legal persons and relevant units in carrying out daily management, government procurement, weather disaster emergency access, bidding and tendering, administrative approval, evaluation of merits and awards, qualification management, and other specific work, so as to form an efficient weather disaster emergency subject as well as individual emergency response capacity constraints.
In this paper, we summarize the relevant laws and regulations, the literature, and the practice of capacity evaluation of the main meteorological disaster emergency response bodies in different regions. Based on the collected literature, the obtained capacity evaluation indicators were statistically analyzed, and the indicators cited more than 10 times were retained. For example, the indicator "disaster recognition capability" includes indicators such as "automatic weather station coverage rate" and "hydrological station network density"; the indicator "disaster relief capability" includes indicators such as "number of professional rescue team drills" and "number of doctors per 1000 people".
After the above analysis and research, it is initially determined that the capability evaluation indexes should include disaster identification capability, engineering defense capability, disaster rescue capability, etc. The specific indexes are selected in Table 1. The evaluation criteria for emergency capability are shown in Table 2.

2.4. Steps of Weather Disaster Emergency Response Capability Feature Extraction

The extraction of efficient feature representation from the ability feature set using the constructed RNN autoencoder requires the steps of feature data processing, RNN autoencoder training, and feature extraction interpretation. Feature data processing solves the problems of missing data and inconsistent magnitudes; encoder training is the process of building a feature extraction model using the processed data; feature extraction interpretation uses a neural network interpretation method to theoretically analyze the feature extraction results and ensure that the feature extraction results are meaningful for capability evaluation and management.

2.4.1. Feature Data Processing

(1)
Processing of missing features
The filling in of weather disaster emergency information is nonmandatory in nature, so some data are missing. To address the problem of missing capability features, first set a threshold value; if the percentage of missing values of a feature exceeds this threshold, delete the feature; if the threshold is not exceeded, use the KNN algorithm to find the k samples closest to the sample where the missing value is located and take the average of their corresponding feature values to fill in the missing values (see the sketch at the end of this subsection). Usually, this step is based on the 80/20 rule, and the deletion threshold for missing features is set to 80% [24].
(2)
Data dimensionless processing
Since the composition of the capability features of the construction weather disaster emergency subject is relatively complex, and the dimensional differences between the capability features are large, the unprocessed features are often not comparable with each other. In order to reflect the actual situation accurately and effectively, the samples are dimensionlessly processed to ensure that the different features have equivalence and homoscedasticity. In this paper, min-max normalization is used, and the specific process is as follows: let x_{max} and x_{min} be the maximum and minimum values of the attribute x; then, the original value x can be mapped to a value \hat{x} in the interval [0, 1] by min-max normalization, which is given by
\hat{x} = \frac{x - x_{min}}{x_{max} - x_{min}}        (25)
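A minimal Python sketch of the two preprocessing steps above, using pandas and scikit-learn; the file name, the assumption that all columns are numeric indicators, and the value of k are illustrative, not the exact pipeline used in this paper.

import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler

# Illustrative capability feature table (file and column contents are hypothetical).
df = pd.read_excel("capability_features.xlsx")

# (1) Drop features whose missing-value ratio exceeds the 80% threshold,
#     then fill the remaining gaps with the average of the k nearest samples.
missing_ratio = df.isna().mean()
df = df.loc[:, missing_ratio <= 0.8]
filled = KNNImputer(n_neighbors=5).fit_transform(df)

# (2) Min-max normalization: map every feature into [0, 1], as in Equation (25).
normalized = pd.DataFrame(MinMaxScaler().fit_transform(filled), columns=df.columns)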

2.4.2. RNN Autoencoder Training

The RNN autoencoder used for feature extraction is solved by the BPTT algorithm combined with a gradient-based parameter optimization method for model training. The gradient of the loss L with respect to the current node is first calculated recursively, starting from the output gradient at the last moment of the decoder part d in Figure 3, and then the model parameters are updated along the gradient direction.
Starting recursively from the loss at the final moment, the gradient of the total loss L (Equation (23)) with respect to the loss L^t at moment t is
\frac{\partial L}{\partial L^t} = 1        (26)
On this basis, the reconstruction error of the capability input feature x is propagated backwards along the Decoder's outputs to the tth output moment, and the gradient of the total loss L with respect to the output \hat{x} at moment t is
\frac{\partial L}{\partial \hat{x}_i^t} = \frac{\partial L}{\partial L^t} \frac{\partial L^t}{\partial \hat{x}_i^t} = \frac{\partial L^t}{\partial \hat{x}_i^t}        (27)
The specific computation starts at the end of the sequence (moment T) and proceeds in reverse. At moment T, the hidden state h'^T has no subsequent node, so its gradient is
\frac{\partial L}{\partial h'^T} = \frac{\partial \hat{x}^T}{\partial h'^T} \frac{\partial L}{\partial \hat{x}^T} = W_{h'x}^{\mathsf{T}} \frac{\partial L}{\partial \hat{x}^T}        (28)
The gradient is then propagated backwards through time: when t < T, the hidden state h'^t has h'^{t+1} and \hat{x}^t as two subsequent nodes, and its gradient is calculated as
\frac{\partial L}{\partial h'^t} = \frac{\partial \hat{x}^t}{\partial h'^t} \frac{\partial L}{\partial \hat{x}^t} + \frac{\partial h'^{t+1}}{\partial h'^t} \frac{\partial L}{\partial h'^{t+1}} = W_{h'x}^{\mathsf{T}} \frac{\partial L}{\partial \hat{x}^t} + W_{h'h'}^{\mathsf{T}} \frac{\partial L}{\partial h'^{t+1}}        (29)
Once the gradients of the internal nodes have been calculated, the gradients of the loss with respect to the parameters can be found. Because the RNN autoencoder shares parameters across moments, to remove ambiguity we define a dummy variable W^{(t)} at moment t as a copy of W, and then the gradients of the loss with respect to the weights are obtained as follows.
\frac{\partial L}{\partial W_{h'x}} = \sum_t \sum_i \frac{\partial L}{\partial \hat{x}_i^t} \frac{\partial \hat{x}_i^t}{\partial W_{h'x}^{(t)}} = \sum_t \frac{\partial L}{\partial \hat{x}^t} (h'^t)^{\mathsf{T}}        (30)
\frac{\partial L}{\partial W_{h'h'}} = \sum_t \sum_i \frac{\partial L}{\partial h_i'^t} \frac{\partial h_i'^t}{\partial W_{h'h'}^{(t)}} = \sum_t \left( W_{h'x}^{\mathsf{T}} \frac{\partial L}{\partial \hat{x}^t} + W_{h'h'}^{\mathsf{T}} \frac{\partial L}{\partial h'^{t+1}} \right) (h'^{t-1})^{\mathsf{T}}        (31)
According to the calculated gradients \partial L / \partial W_{h'x} and \partial L / \partial W_{h'h'}, the weight matrices W_{h'x} and W_{h'h'} are respectively updated, which constitutes one round of training for the RNN model.
According to the above theoretical derivation, the process of RNN autoencoder training for the capability features of weather disaster emergency subjects is shown in Algorithm 1.
Algorithm 1. RNN autoencoder training algorithm
Input:
Set of capability feature samples S shown in Equation (7), upper limit of error E, number of iterations K, learning rate η, batch size batchsize
Output:
Weights and biases of the capability feature extraction model
  • epoch = 1
  • While epoch < K and error > E do the following:
  • For each batch of data x in S, fetched in batches of size batchsize, do the following:
  •   T = length(x)
  •   For each time step t in range(1, T + 1), obtain the hidden layer h^t by stepwise forward calculation according to Equations (9)–(12).
  •   Set a = h(T), which is the coding result of the sample, i.e., the feature extraction result.
  •   For each time step t in range(1, T + 1), follow Equations (13)–(18) to obtain the step-by-step reconstruction x̂ of the input x from a.
  •   Calculate the loss function for the sample reconstruction according to Equations (19)–(23), and set L to its value.
  •   Set error = L
  •    Calculate the gradient of each weight and bias according to Equations (26)–(31).
  •    Update the input weights Wxh, hidden layer weights Whh, and bias b of the encoder part, and Wh’x, Wh’h’, and bias b of the decoder part, according to the gradient descent algorithm using the following formulas: W = W − η ∂L/∂W; b = b − η ∂L/∂b
  • End For
  • End While
The algorithm trains an RNN autoencoder to extract capability features from the input sequence samples S. The algorithm uses a forward computation to obtain the final hidden state vector a, which is the coding result of the sample. The algorithm then reconstructs the input sequence from the coding result and calculates the loss function for the sample reconstruction. The weights and biases of the capability feature extraction model are updated using the gradient descent algorithm, until the maximum number of iterations K is reached, or the error falls below the specified upper limit E.
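As a concrete illustration of Algorithm 1, the following Python (PyTorch) sketch trains an RNN autoencoder by minimizing the L2 reconstruction error. The class and variable names, the use of nn.RNN with linear decoder layers, the synthetic fixed-length batches, and the hyperparameter values are illustrative assumptions rather than the exact implementation used in this paper; the 24-dimensional input and 3-dimensional hidden state follow the settings described later in Section 2.5.

import torch
import torch.nn as nn

class RNNAutoencoder(nn.Module):
    # Encoder compresses a capability sequence into its final hidden state a;
    # the decoder unrolls from a with no external input, as in Figure 3.
    def __init__(self, input_dim=24, hidden_dim=3):
        super().__init__()
        self.encoder = nn.RNN(input_dim, hidden_dim, batch_first=True, nonlinearity="relu")
        self.dec_hh = nn.Linear(hidden_dim, hidden_dim)   # plays the role of W_{h'h'}
        self.dec_hx = nn.Linear(hidden_dim, input_dim)    # plays the role of W_{h'x}

    def encode(self, x):                       # x: (batch, T, input_dim)
        _, h_T = self.encoder(x)               # final hidden state of the encoder
        return h_T.squeeze(0)                  # a, shape (batch, hidden_dim)

    def forward(self, x):
        a = self.encode(x)
        h = a                                  # decoder initial state h'^0 = a
        recon = []
        for _ in range(x.size(1)):             # unroll the decoder for T steps
            h = torch.relu(self.dec_hh(h))             # h'^t = g(W_{h'h'} h'^{t-1})
            recon.append(torch.relu(self.dec_hx(h)))   # x̂^t = g(W_{h'x} h'^t)
        return torch.stack(recon, dim=1)       # (batch, T, input_dim)

model = RNNAutoencoder()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Illustrative training loop over synthetic fixed-length sequences.
X = torch.rand(128, 6, 24)                     # 128 subjects, 6 time steps, 24 indicators
for epoch in range(100):
    for batch in X.split(16):                  # batch size 16
        optimizer.zero_grad()
        loss = loss_fn(model(batch), batch)    # L2 reconstruction error (Equation (23))
        loss.backward()                        # BPTT gradients
        optimizer.step()

features = model.encode(X).detach()            # 3-dimensional capability features a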

2.4.3. Interpretation Methods for Feature Extraction

The results of capability feature extraction provide a reference for capability evaluation, but they need to be recognized and accepted by managers and the evaluated subjects. For this reason, the feature extraction results need to have good interpretability. By interpretability, we mean choosing to express the model in a way that humans can understand based on their perception of reality. In data science, it is often difficult to balance the performance and interpretability of a model. In recent years, with the development of computing power and the increase in data volume, deep learning models based on neural networks have become more and more important in the field of data mining, but these models are "black boxes", i.e., they cannot explain their prediction results, which greatly limits the credibility of the models.
To address this shortcoming, Lundberg et al. [25] proposed a model interpretation framework, the Shapley Additive Explanations (SHAP) method, based on game theory, which achieves model interpretation by calculating the importance of each feature in the sample.
The SHAP method is an additive explanatory model, i.e., each feature is considered as a contributor and its marginal contribution is calculated, and the final prediction of the model is equal to the sum of the marginal contributions of all features.
\mathrm{SHAP}_{feature}(x) = \sum_{set:\, feature \in set} \left[ |set| \binom{F}{|set|} \right]^{-1} \left[ \mathrm{Predict}_{set}(x) - \mathrm{Predict}_{set \setminus feature}(x) \right]        (32)
where x is a sample, feature is a feature of the sample, set is a feature subset containing feature, and F is the total number of features; Predict is the prediction of the model for the given feature subset, and \mathrm{Predict}_{set}(x) - \mathrm{Predict}_{set \setminus feature}(x) denotes the marginal contribution of feature to the model when set is the feature subset. \mathrm{SHAP}_{feature}(x) is equal to the weighted sum of the marginal contributions of feature over all such subsets.
If no features are known, then the output of the model can be interpreted as the average prediction over the dataset as a baseline, with each feature having a corresponding contribution value that explains how the model goes from the baseline to the new prediction of the model containing this feature; the sum of the SHAP values for each feature of the sample is the difference between the model prediction and the dummy model without any feature.
The comprehensive analysis above shows that, compared with traditional machine learning interpretation methods, such as random forest, the SHAP method can not only give the importance ranking of each feature, but also illustrate the specific impact of a feature on the prediction results, effectively unifying the global and local interpretation of machine learning models.
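The following is a minimal Python sketch of how such a SHAP analysis could be run with the shap package against a trained encoder; the wrapper function, the flattening of sequences for KernelExplainer, the background-sample size, and the variable names (model, X, carried over from the earlier training sketch) are illustrative assumptions rather than the exact analysis pipeline used in this paper.

import shap
import torch

def encode_feature(flat_x, k=0):
    # KernelExplainer works on 2-D inputs, so sequences are flattened and restored here;
    # the function returns the kth extracted capability feature (change k for feature 2 or 3).
    x = torch.tensor(flat_x, dtype=torch.float32).reshape(-1, X.shape[1], X.shape[2])
    return model.encode(x).detach().numpy()[:, k]

background = X.reshape(X.shape[0], -1).numpy()[:50]     # small background sample
explainer = shap.KernelExplainer(encode_feature, background)
shap_values = explainer.shap_values(background[:10])    # per-indicator contributions
shap.summary_plot(shap_values, background[:10])         # importance plot, as in Figure 6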

2.5. Performance Analysis of Weather Disaster Emergency Response Capability Feature Extraction Method

This section covers data acquisition and feature extraction.
(1) Data sources. To study the capability characteristics of meteorological disaster emergency capability subjects, we need data on these subjects as the research data. Through a literature and web search, this paper mainly takes social third-party capability information platforms and government capability information platforms as data sources, specifically including Qichacha and Tianyancha (third-party enterprise information platforms), the national subject capability information public system, the national water conservancy construction weather disaster emergency capability information platform, and the national highway construction weather disaster emergency capability information management system. Additionally, we also obtained data from the national construction meteorological disaster emergency supervision public service platform, which provides information on the capability behavior and capability status of construction meteorological disaster emergency subjects.
In this paper, from the perspective of the behavior and state of construction meteorological disaster emergency subjects, we need to collect the capability behavior and capability state data of the capability subjects, so we crawl a wide range of capability information management platforms, such as the national subject capability information public system, the national water conservancy construction meteorological disaster emergency capability information platform, and the national construction meteorological disaster emergency supervision public service platform, for the capability behavior and capability status data of construction meteorological disaster emergency subjects.
(2) Data collection steps. In this paper, the capability data collection is carried out with Python, and the specific steps are as follows.
The first step is to send a request to the URL of the target website with the help of the requests library in Python, get the response from the remote server, and obtain the HTML data of the target web page.
The second step is to parse and extract the capability data in the target web page. The XPath method is used to perform rule matching and to analyze and locate the HTML nodes, and on this basis, the capability data required in this paper are parsed and obtained from the target web pages.
The third step is to use the pandas library in Python to store the capability data in a local xlsx file. The data collection procedure was prepared according to the above steps. The data captured from the major capability information platforms in this paper mainly include capability characteristic information, the time of occurrence of behavior, the subject capability level, and the time of the subject capability level assessment, and they form relatively complete time series samples. A total of 4052 capability data points of meteorological disaster emergency subjects, including survey, design, construction, supervision, and other types, were obtained, and Table 3 describes the statistics of some capability features after data pre-processing. The capability rating model constructed in this paper requires each time series sample to have relatively complete capability characteristics and at least two records of capability rating, so samples with missing or abnormal information were filtered out.
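A minimal Python sketch of the three collection steps above; the target URL, the XPath expressions, and the column names are hypothetical placeholders rather than the actual platforms and page structures crawled in this paper.

import requests
from lxml import etree
import pandas as pd

# Step 1: request the target page (URL is a placeholder).
url = "https://example.gov.cn/capability/list"
html = requests.get(url, timeout=30).text

# Step 2: locate the capability records with XPath (expressions are illustrative).
tree = etree.HTML(html)
records = []
for row in tree.xpath("//table[@id='capability']//tr")[1:]:   # skip the header row
    cells = [c.strip() for c in row.xpath("./td/text()")]
    if cells:
        records.append(cells)

# Step 3: store the parsed records locally as an xlsx file with pandas.
columns = ["subject", "indicator", "value", "record_time"]    # hypothetical fields
pd.DataFrame(records, columns=columns).to_excel("capability_data.xlsx", index=False)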
After data collection, we combined and cleaned the raw data in order to determine each subject's meteorological disaster emergency capability sequence. Table 3 and Table 4 show, respectively, statistics for some capability features and the capability ratings of some subjects over the past few years. The cleaned data were then used to train and test the proposed RNN autoencoder-based feature extraction method.
Table 4 shows the capability ratings of some entities in past years, where "N/A" indicates that there is no rating or that the capability rating is missing for that year. An asterisk (*) is used to hide the specific name of the unit. AAA, AA, A, BBB, and CCC are the evaluation criteria for emergency capability given in Table 2.
(3) The setting of the Encoder dimension m. According to the theoretical analysis in the preceding part, scholars usually divide the connotation of capability into 2–5 aspects. For this reason, the dimension m of the Encoder was set to values from 2 to 5 in turn. Experimental comparison showed that the reconstruction accuracy of the samples is higher when m is 3, so the dimension m of the Encoder's final state is set to 3.
(4) Specific settings of hyperparameters. The encoder–decoder architecture is constructed and trained using the sample X of the characteristic sequences of building weather disaster emergency subject capabilities, and the specific settings of the hyperparameters are shown in Table 5.
(5) Autoencoder code. The structure of the model is shown in Figure 4. The processed 24-dimensional features are converted into 3-dimensional features by the Encoder as the final state of feature extraction and are mapped back to 24 dimensions by the Decoder. X is then input into the Encoder; the output value is the dimensionality-reduced feature of the capability sequence, and its distribution is shown in Figure 5.
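A brief Python sketch of how the dimensionality-reduced features could be extracted with the trained Encoder and visualized as in Figure 5; model and X are illustrative names carried over from the earlier training sketch, and levels is an assumed array of numeric capability-level codes, not the exact plotting code used in this paper.

import matplotlib.pyplot as plt

features = model.encode(X).detach().numpy()        # shape (N, 3)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
sc = ax.scatter(features[:, 0], features[:, 1], features[:, 2], c=levels, s=15)
ax.set_xlabel("feature 1"); ax.set_ylabel("feature 2"); ax.set_zlabel("feature 3")
fig.colorbar(sc, label="capability level")
plt.show()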
From Figure 5, it can be seen that, after the features are extracted by Encoder, the distribution of samples of different capability levels in space has different ranges, so it can be preliminarily inferred that the dynamic features of the capability sequence of the construction weather disaster emergency subject extracted by Encoder will reduce the discrimination difficulty of the classification model.

3. Results

3.1. Interpretation of Feature Extraction Results

In order to analyze the meaning of the ability feature extraction results, this section uses the SHAP analysis method of machine learning to explore the connotation of the extraction results. The SHAP method is based on the idea of game theory and achieves the importance ranking of the input features by quantifying the contribution of each input feature of the model to the output results. In order to overcome the un-interpretability of RNN and extend the applicability of the encoder–decoder architecture constructed in this paper, the constructed Encoder is interpreted and analyzed with the help of Python’s SHAP framework, so as to reflect the importance of the original ability features to the ability sequence features, as shown in Figure 6.
The hidden layer-to-hidden layer transfer matrix obtained from training is
W_{hh} = \begin{pmatrix} 0.9518 & 0.0936 & 0.1264 \\ 0.0419 & 0.8325 & 0.0397 \\ 0.1070 & 0.0946 & 0.7910 \end{pmatrix}        (33)
As can be seen from Figure 6, the "number of flood control projects", "clean water guarantee number", and "proportion of urban buildings with lightning protection devices" have a greater impact on feature 1; this kind of indicator directly reflects physical disaster prevention facilities and provides a reliable guarantee. "Weather radar station coverage" and "hydrological station network density" have a greater impact on feature 2; these facilities mean that disaster prevention subjects have better forecasting and warning ability for coming disasters. "Automatic weather station coverage rate" has a greater impact on feature 3; more automatic weather stations mean that disaster prevention agents can understand weather conditions faster and more comprehensively and can prepare for possible disasters.
Observing Equation (33), we can find that the absolute values of the main diagonal elements of W_{hh} are larger and the absolute values of the elements at other positions are smaller, which indicates that, as time progresses, a feature at the current moment is more influenced by the same feature at the previous moment and less influenced by other features. This phenomenon is most pronounced for feature 1 and less pronounced for feature 3. This is because the impact of disaster recognition capability is relatively stable and lasting, while social control capability is highly variable, difficult to evaluate, and unstable.

3.2. Model Performance Comparison

To test whether the feature extraction method proposed in this paper can provide high-quality input to the capability evaluation model, i.e., whether it can effectively improve the accuracy of capability evaluation, the extraction results of this method are compared with those of the Principal Component Analysis (PCA) method [26], the Common Principal Component Analysis (CPCA) method [27], and the random forest (RF) method [28]. Since the feature extraction results provide an efficient set of features for capability evaluation, to guarantee the comparability of the four methods, a KNN query is used to test the effect of feature extraction on the classification of capability evaluation.
In the field of capability evaluation, PCA-SVM, CPCA-SVM, and CPCA-RF represent a spectrum of commonly used feature extraction techniques. PCA-SVM and CPCA-SVM are dimensionality reduction and classification algorithms that are based on the combination of Principal Component Analysis and Support Vector Machines. CPCA-RF is a variant of the PCA method that incorporates classification by random forest. By comparing our method to these well-established approaches, we aimed to demonstrate the efficacy and potential benefits of our RNN autoencoder-based method for extracting relevant features and enhancing classification performance.
The specific process is as follows.
In the first step, the features of each object in the sample set S are extracted using PCA-SVM, CPCA-SVM, CPCA-RF, and this paper’s method (RNN autoencoder), respectively. The original sample set S is mapped to the feature space to construct SA.
In the second step, the dynamic time warping (DTW) distance algorithm is used to find the K closest samples for each sample in SA.
In the third step, the accuracy of various methods for the sample K-nearest neighbor finding is counted.
The four methods extracted the first three features separately in order to ensure the consistency of feature extraction during the accuracy comparison. The final calculated model accuracy results are shown in Figure 7.
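A minimal Python sketch of the DTW-based K-nearest-neighbor accuracy check described above; the DTW routine is a plain dynamic-programming implementation, and encoded_seqs and levels are illustrative names for the extracted feature sequences and their capability ratings rather than the exact evaluation code used in this paper.

import numpy as np

def dtw_distance(a, b):
    # Classic dynamic time warping between two feature sequences a (Ta, d) and b (Tb, d).
    Ta, Tb = len(a), len(b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Ta, Tb]

def knn_accuracy(sequences, labels, K=5):
    # Leave-one-out: a query counts as correct when the majority label of its
    # K DTW-nearest neighbors matches its own label.
    n, correct = len(sequences), 0
    for i in range(n):
        dists = [dtw_distance(sequences[i], sequences[j]) if j != i else np.inf
                 for j in range(n)]
        votes = [labels[j] for j in np.argsort(dists)[:K]]
        if max(set(votes), key=votes.count) == labels[i]:
            correct += 1
    return correct / n

# Example call (encoded_seqs and levels are illustrative):
# print(knn_accuracy(encoded_seqs, levels, K=5))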
The experimental results of comparing the feature extraction method proposed in this paper with the PCA-SVM method, CPCA-SVM method, and CPCA-RF method are presented in Figure 7. To ensure the comparability of the four methods, the KNN query was used to test the effect of feature extraction on the classification of capability evaluation. The results show that the RNN autoencoder proposed in this paper achieves the highest K-nearest neighbor finding accuracy of 0.65 at K equal to 1, outperforming the other three algorithms. Additionally, the proposed method consistently maintains an accuracy above 0.5 as the value of K increases. While the CPCA-RF algorithm and CPCA-SVM algorithm perform better at K equal to 1 and 5, their performance decays more quickly as K increases to 10. Although the overall classification accuracy of the four methods is not high, the proposed feature extraction method outperforms the other three methods in classifying ability subjects using KNN. These results demonstrate that the proposed feature extraction method is effective in providing high-quality input to the capability evaluation model and improving the accuracy of capability evaluation.
We also employed the Support Vector Machine (SVM) as an alternative classification method to compare model performance. The SVM is a widely used classification algorithm applicable to both linear and nonlinear problems. We compared SVM classification performance on the features produced by the RNN autoencoder, PCA-SVM, CPCA-SVM, and CPCA-RF feature extraction methods.
The specific process is as follows.
In the first step, the features of each object in the sample set S were extracted using PCA-SVM, CPCA-SVM, CPCA-RF, and the method proposed in this study (RNN autoencoder). The original sample set S was mapped to the feature space, constructing SA.
In the second step, SA was divided into a training set and a test set, and an SVM classifier with a linear kernel function was trained on the training set.
In the third step, the performance of the SVM classifier was assessed using the test set. Performance metrics, such as accuracy, precision, recall, and F1 score, were calculated.
Finally, the performance of SVM classifiers based on different feature extraction methods was compared. Results are shown in Table 6.
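A compact sketch of this SVM-based comparison, assuming scikit-learn is available, is shown below; `X_features` and `y_grades` are hypothetical names for the extracted feature vectors and capability grade labels, and macro averaging of the multi-class metrics is an assumption made for illustration rather than a detail taken from the experiments.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate_with_linear_svm(X_features, y_grades, test_size=0.3, seed=42):
    # Train a linear-kernel SVM on extracted features and report the four
    # metrics listed in Table 6 (macro-averaged for multi-class grades).
    X_train, X_test, y_train, y_test = train_test_split(
        X_features, y_grades, test_size=test_size, random_state=seed, stratify=y_grades)
    clf = SVC(kernel="linear")
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "precision": precision_score(y_test, y_pred, average="macro"),
        "recall": recall_score(y_test, y_pred, average="macro"),
        "f1": f1_score(y_test, y_pred, average="macro"),
    }

# The same routine would be run once per feature extraction method
# (RNN autoencoder, PCA-SVM, CPCA-SVM, CPCA-RF) to populate Table 6.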
According to these metrics, the RNN autoencoder outperforms the PCA-SVM, CPCA-SVM, and CPCA-RF methods in accuracy, precision, recall, and F1 score. This indicates that the RNN autoencoder feature extraction method provides higher-quality input for the evaluation of meteorological disaster emergency response capability, thereby improving evaluation accuracy and performance.
There are several differences between the models that contribute to the superior performance of our proposed method.
RNN autoencoders are capable of capturing temporal dependencies in time series data, which is crucial for analyzing and evaluating the dynamic capability sequences of meteorological disaster emergency response. In contrast, PCA and CPCA methods are based on linear transformations that might not be able to adequately capture the temporal structure of the data.
Our RNN autoencoder method employs an encoder–decoder architecture that learns a compact representation of the input data while minimizing the reconstruction error. This allows for more efficient feature extraction compared to PCA and CPCA methods, which rely on orthogonal transformations that may not always lead to the most informative features.
The RNN autoencoder can handle nonlinear relationships between input features, as it employs activation functions like ReLU to learn complex patterns in the data. This is in contrast to PCA and CPCA, which assume linear relationships between features.
Lastly, our method benefits from the adaptability of RNNs, which can learn to focus on the most relevant features for the task at hand. This provides an advantage over PCA, CPCA, and random forest methods, which do not have the same level of flexibility in adapting to the specific requirements of the capability evaluation problem.
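To make these architectural points concrete, the following PyTorch sketch outlines a single-layer, unidirectional RNN encoder–decoder with ReLU activations trained to minimize reconstruction error, broadly consistent with the hyperparameters in Table 5; the hidden size, code size, and layer layout are illustrative assumptions, not the exact network used in our experiments.

import torch
import torch.nn as nn

class RNNAutoencoder(nn.Module):
    # Minimal RNN encoder-decoder: the encoder compresses a capability
    # sequence of shape (batch, T, n_features) into a low-dimensional code,
    # and the decoder reconstructs the sequence from that code.
    def __init__(self, n_features, hidden_size=32, code_size=3):
        super().__init__()
        self.encoder_rnn = nn.RNN(n_features, hidden_size, num_layers=1,
                                  nonlinearity="relu", batch_first=True)
        self.to_code = nn.Linear(hidden_size, code_size)
        self.from_code = nn.Linear(code_size, hidden_size)
        self.decoder_rnn = nn.RNN(hidden_size, hidden_size, num_layers=1,
                                  nonlinearity="relu", batch_first=True)
        self.output = nn.Linear(hidden_size, n_features)

    def forward(self, x):
        _, h_last = self.encoder_rnn(x)            # h_last: (1, batch, hidden)
        code = self.to_code(h_last[-1])            # (batch, code_size)
        # Repeat the code at every time step as the decoder input.
        dec_in = self.from_code(code).unsqueeze(1).repeat(1, x.size(1), 1)
        dec_out, _ = self.decoder_rnn(dec_in)
        return self.output(dec_out), code

# Training loop sketch (batch size 64, 100 epochs, Adam with lr = 0.001, as in Table 5):
# model = RNNAutoencoder(n_features=len(indicator_columns))
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss_fn = nn.MSELoss()
# for epoch in range(100):
#     for batch in loader:                         # batches of capability sequences
#         recon, _ = model(batch)
#         loss = loss_fn(recon, batch)
#         optimizer.zero_grad(); loss.backward(); optimizer.step()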
In conclusion, our RNN autoencoder-based method shows promising results in extracting features for capability evaluation in the context of meteorological disaster emergency response. The superior performance of our method compared to the other methods is attributed to its ability to capture temporal dependencies, efficiently learn compact representations of the input data, handle nonlinear relationships, and adapt to the specific requirements of the problem. We believe that our approach has the potential to significantly improve capability evaluation in disaster management and response, and we plan to further refine and expand upon our method in future research.

4. Conclusions

This paper proposes a novel RNN autoencoder-based feature extraction method for evaluating meteorological disaster emergency response capability. The technique tackles the problem of feature extraction from multidimensional time series data. To characterize disaster defense capability grades, the study first constructs a meteorological disaster capability evaluation index system that combines real data with the evaluation content. An encoder–decoder architecture is then used to dynamically extract features from the capability sequence of each disaster management subject, providing a solid foundation for disaster management.
Several experiments were conducted to evaluate the effectiveness of the proposed approach. Using the KNN query, its performance was compared with the PCA-SVM, CPCA-SVM, and CPCA-RF methods. Although none of the four methods achieved particularly high classification accuracy, the proposed method performed better than the other three when categorizing the capability levels of subjects. The technique therefore works well for feature extraction and can support disaster management.
The findings of this paper highlight the importance of addressing sample imbalance when evaluating meteorological disaster defense capability. Extracting effective features is only the first step, and in future work we will build a more accurate evaluation model. Specifically, we plan to develop an encoder-based adaptive focal deep learning model and training algorithm for dynamic capability evaluation, aimed at resolving the double imbalance in sample proportions, and to explore category weights and adaptive focal loss to address the imbalance in sample percentages. Combining these approaches should improve the accuracy and performance of existing methods for evaluating disaster defense capability, ultimately leading to better disaster management and response.
However, there is still much work to be done in developing and testing methods that can handle sample imbalance more effectively. Future research in this area could focus on improving the performance and accuracy of existing methods, as well as exploring new techniques to address the issue of sample imbalance. Additionally, further investigation into the sources of information required for dynamic evaluation of capability could provide insights into how to better collect and parse data to improve the evaluation process. Ultimately, by continuing to develop and refine methods for evaluating meteorological disaster defense capability, we can improve disaster management and response techniques and minimize the harm caused by these catastrophic events.

Author Contributions

Conceptualization, J.T. and R.Y.; methodology, J.T. and G.Y.; software, J.T. and G.Y.; validation, J.T., R.Y., G.Y. and Y.M.; formal analysis, J.T. and R.Y.; investigation, J.T., R.Y. and G.Y.; resources, J.T.; data curation, J.T.; writing—original draft preparation, J.T. and G.Y.; writing—review and editing, J.T., Q.D. and R.Y.; visualization, G.Y.; supervision, Q.D. and Y.M.; project administration, Y.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a research project of State Grid Corporation (4000-202318098A-1-1-ZN).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to acknowledge Hohai University and State Grid Jiangsu Electric Power Company for providing the materials used in the experiments. We would also like to thank the journal editors and reviewers for their recognition of this paper and for their invaluable comments and suggestions, which greatly improved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bryen, D.N. Communication during times of natural or man-made emergencies. J. Pediatr. Rehabil. Med. 2009, 2, 123–129. [Google Scholar] [CrossRef]
  2. Corbacioglu, S.; Kapucu, N. Organisational Learning and Selfadaptation in Dynamic Disaster Environments. Disasters 2006, 30, 212–233. [Google Scholar] [CrossRef] [PubMed]
  3. Zantal-Wiener, K.; Horwood, T.J. Logic modeling as a tool to prepare to evaluate disaster and emergency preparedness, response, and recovery in schools. New Dir. Eval. 2010, 2010, 51–64. [Google Scholar] [CrossRef]
  4. Wang, J.F.; Feng, L.J.; Zhai, X.Q. A system dynamics model of flooding emergency capability of coal mine. Prz. Elektrotechniczny 2012, 88, 209–211. [Google Scholar]
  5. Wang, X.; Sugumaran, V.; Zhang, H.; Xu, Z. A Capability Assessment Model for Emergency Management Organizations. Inf. Syst. Front. 2018, 20, 653–667. [Google Scholar] [CrossRef]
  6. Love, P.E.; Matthews, J. Quality, requisite imagination and resilience: Managing risk and uncertainty in construction. Reliab. Eng. Syst. Saf. 2020, 204, 107172. [Google Scholar] [CrossRef]
  7. Hosseini, A.; Faheem, A.; Titi, H.; Schwandt, S. Evaluation of the long-term performance of flexible pavements with respect to production and construction quality control indicators. Constr. Build. Mater. 2019, 230, 116998. [Google Scholar] [CrossRef]
  8. Francom, T.; Markham, C.; Pridmore, A.; Geisbush, J. Identifying Geotechnical Risk and Assigning Ownership on Water and Wastewater Pipeline Projects Using Alternative Project Delivery Methods. In Proceedings of the Sessions of the Pipelines Conference, Phoenix, AZ, USA, 6–9 August 2017. [Google Scholar] [CrossRef]
  9. Guo, X.; Kapucu, N. Assessing social vulnerability to earthquake disaster using rough analytic hierarchy process method: A case study of Hanzhong City, China. Saf. Sci. 2020, 125, 104625. [Google Scholar] [CrossRef]
  10. Peng, T. Exploration of the grid-based path of emergency management system innovation for grassroots governments. Risk Disaster Crisis Res. 2016, 4, 119–178. [Google Scholar]
  11. Zhou, H.; Zhao, Y.; Shen, Q. Risk assessment and management via multi-source information fusion for undersea tunnel construction. Autom. Constr. 2020, 111, 103050.1–103050.16. [Google Scholar] [CrossRef]
  12. Li, L.; Kuang, Z.; Mo, J.; Meng, C. Assessment of risk ranking for autumn drought in guangxi province based on ahp and gis. Trans. Chin. Soc. Agric. Eng. 2013, 29, 193–201. [Google Scholar]
  13. Chen, G.-H.; Tao, L.; Zhang, H.-W. Study on the methodology for evaluating urban and regional disaster carrying capacity and its application. Saf. Sci. 2009, 47, 50–58. [Google Scholar] [CrossRef]
  14. Liu, J.; Liu, M.; Xu, S.; Du, Q.; Zhu, J.; Zhu, X. A survey on integrated and comprehensive disaster reduction technology in the era of big data. Geomat. Inf. Sci. Wuhan Univ. 2020, 45, 1107–1116. [Google Scholar]
  15. Wang, Y.; Wang, W.; Zhang, X.; Xu, D. Novel Interleaved Single-Stage AC/DC Converter with a High Power Factor and High Efficiency. J. Power Electron. 2011, 11, 245–255. [Google Scholar] [CrossRef]
  16. Wang, Z.Q.; Xie, Y.N.; Li, H. On innovation of the emergency management in China in the era of big data. In Proceedings of the 11th International Conference on Public Administration, Padjadjaran Univ, Fac Social & Polit Sci, Bandung, Indonesia, 9–11 December 2015; University Electronic Science & Technology China Press: Chengdu, China, 2015; pp. 687–692. [Google Scholar]
  17. Wen, X.; DEStech Publicat Inc. Systematic strategy on emergency management of China international petroleum cooperation. In Proceedings of the 2nd International Conference on Social Science (ICSS), Changsha, China, 18 October 2015; Destech Publications, Inc.: Lancaster, UK, 2015; pp. 180–184. [Google Scholar]
  18. Yao, X.J.; Panaye, A.; Doucet, J.P.; Zhang, R.S.; Chen, H.F.; Liu, M.C.; Hu, Z.D.; Fan, B.T. Comparative Study of QSAR/QSPR Correlations Using Support Vector Machines, Radial Basis Function Neural Networks, and Multiple Linear Regression. J. Chem. Inf. Comput. Sci. 2004, 44, 1257–1266. [Google Scholar] [CrossRef] [PubMed]
  19. Wang, Y.; Xiao, F.; Zhang, L.; Gong, Z. Research on Evaluation of Meteorological Disaster Governance Capabilities in Mainland China Based on Generalized λ-Shapley Choquet Integral. Int. J. Environ. Res. Public Health 2021, 18, 4015. [Google Scholar] [CrossRef] [PubMed]
  20. Yuan, S.; Tang, Z.; Tian, J.; Cao, H. A Resonant Push–Pull DC–DC Converter. In Proceedings of the 3rd International Conference on Electrical Engineering and Information Technologies for Rail Transportation (EITRT), Changsha, China, 20–22 October 2017; Lecture Notes in Electrical Engineering. Springer: Singapore, 2018; pp. 48267–48278. [Google Scholar] [CrossRef]
  21. Graves, A.; Mohamed, A.R.; Hinton, G. Speech recognition with deep recurrent neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Vancouver, BC, Canada, 26–31 May 2013; pp. 6645–6649. [Google Scholar]
  22. Tan, S.; Mayrovouniotis, M.L. Reducing data dimensionality through optimizing neural network inputs. AIChE J. 1995, 41, 1471–1480. [Google Scholar] [CrossRef]
  23. Ismail, S.; Ahmad, A.; Halton, G. Recurrent neural network with backpropagation through time algorithm for Arabic recognition. In Proceedings of the 18th European Simulation Multiconference, Magdeburg, Germany, 13–16 June 2004; Scs Europe: Ghent, Belgium, 2004; pp. 29–33. [Google Scholar]
  24. Zhou, S.-Y.; Huang, A.-C.; Wu, J.; Wang, Y.; Wang, L.-S.; Zhai, J.; Xing, Z.-X.; Jiang, J.-C.; Huang, C.-F. Establishment and assessment of urban meteorological disaster emergency response capability based on modeling methods. Int. J. Disaster Risk Reduct. 2022, 79, 103180. [Google Scholar] [CrossRef]
  25. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017; Information Processing Systems (Nips): La Jolla, CA, USA, 2017; p. 30. [Google Scholar]
  26. Maji, K.; Kumar, G. Inverse analysis and multi-objective optimization of single-point incremental forming of AA5083 aluminum alloy sheet. Soft Comput. 2019, 24, 4505–4521. [Google Scholar] [CrossRef]
  27. Li, H. Multivariate time series clustering based on common principal component analysis. Neurocomputing 2019, 349, 239–247. [Google Scholar] [CrossRef]
  28. Jiang, J.; Wu, Z.; Xu, M.; Jia, J.; Cai, L. Comparing feature dimension reduction algorithms for gmm-svm based speech emotion recognition. In Proceedings of the Annual Summit and Conference of the Asia-Pacific-Signal-and-Information-Processing-Association (APSIPA), Kaohsiung, Taiwan, 29 October–1 November 2013. [Google Scholar]
Figure 1. Feature extraction idea.
Figure 2. Encoder–decoder architecture.
Figure 3. The architecture of the autoencoder built in this paper.
Figure 4. Visualization of the encoder–decoder architecture.
Figure 5. Distribution of capability sequence characteristics.
Figure 6. Visualization of the importance of original capability characteristics on capability sequence characteristics.
Figure 7. Comparison of the accuracy of K-nearest neighbor classification based on different feature extraction methods.
Table 1. Evaluation index system of meteorological disaster emergency capability.
Target layer: Meteorological disaster emergency response capability.
Guideline layer and corresponding indicator layer:
Disaster recognition capability: automatic weather station coverage rate; hydrological station network density; weather radar station coverage; satellite cloud map receiving station coverage.
Engineering defense capability: proportion of urban buildings with lightning protection devices; number of flood control projects; number of shelters.
Disaster relief capability: number of professional rescue team drills; rate of professional rescue team equipment meeting the standard; number of doctors per 1000 people; first aid vehicle response time; number of emergency vehicles per 1000 people.
Resource security capacity: number of military, armed police, and public security personnel available for mobilization; number of people guaranteed clean water; number of backup generator sets; contingency reserve rate.
Behavioral responsiveness: disaster risk perception status; disaster prevention awareness status; disaster information dissemination awareness status; disaster input awareness status; disaster prevention action awareness status.
Social control capability: organizational system construction status; command staff quality; information release management status; extent to which government functions are performed; weather disaster emergency regulations construction status; meteorological disaster emergency plan construction status.
Table 2. Evaluation criteria for emergency capability.
Capability Level | Evaluation Value
AAA | 8–10
AA | 6–<8
A | 4–<6
BBB | 2–<4
CCC | 0–<2
Table 3. Statistics of some ability characteristics. (Each characteristic has 4052 samples; the minimum value and 25th percentile are 0 and the maximum value is 1 for every characteristic.)
Characteristic | Average Value | Variance | 50th Percentile | 75th Percentile
Automatic weather station coverage rate | 0.029 | 0.048 | 0.016 | 0.036
Hydrographic station network density | 0.010 | 0.034 | 0 | 0.006579
Weather radar station coverage | 0.058 | 0.079 | 0.043 | 0.086
Number of flood control works | 0.009 | 0.045 | 0 | 0
Ratio of lightning protection device installation | 0.005 | 0.038 | 0 | 0
Clean water guarantee number | 0.011 | 0.049 | 0 | 0
Number of spare generating sets | 0.015 | 0.059 | 0 | 0
Organizational system construction status | 0.040 | 0.061 | 0.008 | 0.086
Information release management | 0.023 | 0.082 | 0 | 0
Weather disaster emergency regulations construction | 0.019 | 0.060 | 0 | 0
Number of shelters | 0.016 | 0.038 | 0 | 0
Number of doctors per 1000 people | 0.003 | 0.023 | 0 | 0
Status of disaster information dissemination awareness | 0.005 | 0.035 | 0 | 0
Command staff quality | 0.012 | 0.031 | 0 | 0
Table 4. Information on the ability levels of some subjects in past years.
Company Name | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017
*** Fourteen Bureau Group Co. | AAA | N/A | N/A | N/A | N/A | N/A | N/A | AAA
*** Meteorological Bureau Co. | AAA | N/A | N/A | AAA | N/A | N/A | N/A | AAA
** 22nd Bureau Group Co. | AAA | N/A | N/A | AAA | N/A | N/A | N/A | AAA
** Water Conservancy Bureau | AAA | N/A | N/A | AAA | N/A | N/A | AAA | N/A
** City Water Conservancy Construction Corporation | N/A | N/A | N/A | BBB | N/A | A | N/A | AA
** Municipal government | AA | N/A | N/A | N/A | AAA | AA | N/A | N/A
... | ... | ... | ... | ... | ... | ... | ... | ...
Table 5. Encoder–decoder architecture hyperparameter settings.
Hyperparameter | Role | Value
Number of layers of the RNN | Changes the complexity of the model | 1
RNN single/bi-directional | Whether current characteristics are influenced by future characteristics | One-way (unidirectional)
Activation function type | Learning nonlinear relationships | ReLU
Batch size of input samples per training round | Trade-off between model accuracy and training efficiency | 64
Number of model training rounds | Makes the model continuously learn the distribution of the training data | 100
Learning rate | Controls the step size of the weight update | 0.001
Adaptive learning rate algorithm | Automatically adjusts the learning rate according to a strategy to find the global optimum | Adam
Table 6. Comparison results of models based on the SVM classification method.
Method | Accuracy | Precision | Recall | F1 Score
RNN Autoencoder | 0.68 | 0.70 | 0.72 | 0.71
PCA-SVM | 0.49 | 0.51 | 0.53 | 0.52
CPCA-SVM | 0.54 | 0.56 | 0.58 | 0.57
CPCA-RF | 0.59 | 0.61 | 0.63 | 0.62
Back to TopTop