Semantic-Based Sports Music Information Fusion and Retrieval in Wireless Sensor Networks

Wireless sensor networks (WSNs) have developed rapidly in recent years. Formed at the intersection of multiple disciplines, they integrate embedded technology, sensor technology, distributed technology, wireless communication technology, and modern networking into a brand-new information acquisition platform. The characteristics of sensor networks make information fusion a research hot spot for WSNs: fusion can achieve high performance at low energy and communication cost, which is of great significance to sensor network research. This paper studies semantic-based sports music information fusion and retrieval in wireless sensor networks. WSNs face various attacks, including eavesdropping, replay, Sybil, and DoS attacks, so network security must be considered when designing sensor network solutions. This article summarizes and analyzes the existing WSN secure data fusion schemes for this issue and compares them by category. The paper applies the spatial correlation detection algorithm and the CBA, FTD, and DFWD algorithms, enriching the research on information fusion and retrieval in wireless sensor networks with exploratory significance; a model of the problem was also established and studied, and reliable data were obtained. The experimental results show that when these methods are used to diagnose faults in a WSN, the correct diagnosis rate of the model exceeds 77%.

1. Introduction

1.1. Background. WSNs are currently attracting international attention as a highly integrated frontier research field involving multiple disciplines and a high degree of cross-disciplinary knowledge. Developments in sensor technology, microelectromechanical systems technology, computer technology, and radio communication have promoted the emergence and development of modern WSNs. After the Internet, WSNs are an IT hotspot technology with a significant impact on human lifestyles in the 21st century. As early as 1999, they were listed by BusinessWeek among the most influential technologies of the 21st century, and in 2000 the US Department of Defense named them one of five cutting-edge areas of national defense. These miniature, inexpensive, low-power sensor nodes (devices) can sense the surrounding environment and process data, and they can spontaneously form a sensor network through wireless communication, making it convenient to collect, analyze, and process useful information. WSNs have very broad application prospects in many real-world fields, with good applications in the military, target tracking, environmental monitoring, health monitoring, and other areas. Later, in 2003, MIT's Technology Review listed WSNs first among its ten emerging technologies of the future; in the same year, BusinessWeek published a report in its "Future Technology Special Edition" identifying WSNs as one of the world's four future high-tech industries. Since the promotion of sports dance in China in the 1980s, some experts and scholars have advocated incorporating Chinese elements so that sports dance can quickly become one of the national fitness programs, integrate harmoniously with international sports dance culture, and enhance the cultural taste of Chinese sports dance.
Under the influence of this theory, Chinese sports dance uses Chinese elements in its music and clothing, making the sport quickly accepted by the Chinese people and effectively promoting the development of national fitness sports.
1.2. Significance. The basic function of a WSN is to sense, collect, process, and return data from the monitoring area. The resources of sensor nodes are very limited, especially battery energy, which directly determines the service life of the entire network. A WSN usually contains many wireless sensor nodes, each composed of an embedded processor, sensing components, a power module, and a communication part. Sensor nodes are usually deployed by random placement and use wireless communication to self-organize the network topology at any time. Each node has a strong collaboration capability, and from the perspective of overall system behavior the network is "intelligent." Owing to their peer-to-peer measurement and distributed collaboration, wireless sensor networks have a wide application range, flexible operation, high measurement accuracy, and intelligence, with broad application prospects in fields such as the military, industry, and agricultural production. At the same time, requirements on the reliability and sustainability of the network are increasing: fault diagnosis of sensor nodes is very important, and real-time understanding of the state of the network plays an important role.

1.3. Related Work.
Since the end of the 20th century, the wireless sensor network has been a highly interdisciplinary field, tightly integrating sensor, embedded computing, wireless network, and communication technologies, and scholars at home and abroad have paid more and more attention to its research. Because the environments in which WSNs are deployed are extremely complex and harsh, the probability of sensor node failure is much higher than in other systems. Li and Wang proposed that wireless sensor networks, as an emerging technology integrating information acquisition, transmission, and processing, play an important role in the field of computer networks; reliable information fusion is a key technology for reducing energy consumption and data transmission in WSNs and has broad prospects. In recent years, with the development of information technology, research on reliable WSN information fusion methods has attracted wide attention from domestic and foreign scholars and has gradually diversified [1]. Wu et al. noted that wireless sensor networks have been widely used in the military, transportation, medical, and other fields. The design of a WSN routing protocol is limited by the availability of only local information and by the power supply, and the communication ability and access points of sensor networks are limited; how to improve node energy efficiency and network lifetime is a current research hotspot, for which an improved LEACH algorithm was proposed [2]. Wang et al. studied the state estimation problem of wireless sensor networks (WSNs) with finite energy. First, a multirate estimation model is established; then, calculation is carried out on the basis of matrix weighting, and, based on the optimal fusion criterion, a new two-step information fusion algorithm is designed.
Compared with existing methods, the proposed fusion algorithm can greatly reduce communication cost, allowing a wireless sensor network to effectively save sensor energy [3]. Although these analyses are thorough, some of the theories put forward lack practical significance, and a shortcoming of this prior research is the absence of a specific analysis of the research process.
1.4. Innovation. The innovative points of this article are as follows: (1) Innovation in topic selection: this article takes a brand-new perspective, and its research on wireless sensor networks, sports music, information fusion, and retrieval has exploratory significance. (2) Innovation in method: the spatial correlation detection algorithm and the CBA, FTD, and DFWD algorithms are used for sports music information fusion and retrieval. (3) Innovation in project practice: owing to their peer-to-peer measurement and distributed collaboration, wireless sensor networks have a wide application range, flexible operation, high measurement accuracy, and intelligence, with broad application prospects in fields such as the military, industry, and agricultural production.

2. Overview of Wireless Sensor Network Detection Methods

2.1. Overview of Detection Methods. Wireless sensor networks have important application value in many fields, such as detecting whether an event has occurred and reporting it to users. The core of event detection research is the design of effective detection techniques, and since the beginning of the 21st century many domestic and foreign scholars have studied it. This paper summarizes the existing event area detection algorithms, which can be roughly divided into two categories: algorithms based on spatial correlation and algorithms based on spatiotemporal correlation [4]. Based on the spatial correlation between nodes, a Bayesian fault-tolerant algorithm (BFTA) has been proposed. Its limitations are, first, the need to calculate prior probabilities; second, the error rate of the classification decision; third, its sensitivity to the representation of the input data; and fourth, its assumption of independent sample attributes, so that it performs poorly when the attributes are correlated. The BFTA algorithm [5] is executed on the premises that normal nodes exhibit spatial correlation while faulty nodes do not, that all nodes have the same error rate, and that the missed alarm rate and false alarm rate [6] are consistent. The node error rate is defined as the probability that the state observed by a node differs from the actual condition of the event, both taking the value "1" or "0." Let L_n denote the node perception data under normal conditions and L_e the node perception data under abnormal conditions; the best decision threshold is then (L_n + L_e)/2. If the environmental noise obeys the normal distribution N(0, σ²), the value of the error rate p can be obtained according to formula (1):

p = Q((L_e − L_n)/(2σ)),    (1)

where Q(·) denotes the tail probability of the standard normal distribution.

2.2. Detection Algorithm.
The CBA, FTD, and DFWD algorithms are three different improvements of the above BFTA fault-tolerant detection algorithm. The CBA algorithm improves performance by, for example, reducing the missed alarm rate; the FTD algorithm reduces communication between network nodes while still effectively completing event detection; and the DFWD algorithm improves the reliability of event detection and reduces the false positive rate. The improved methods and their advantages and disadvantages are shown in Table 1.
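As a rough sketch of the BFTA-style decision rule above (the midpoint threshold (L_n + L_e)/2 under zero-mean Gaussian noise), the following example uses illustrative signal levels and a noise standard deviation that are not taken from the paper:

```python
import math
import random

def bfta_decision(reading, l_normal, l_event):
    """Classify a reading as event (1) or normal (0) using the midpoint
    decision threshold (L_n + L_e) / 2 described for BFTA."""
    threshold = (l_normal + l_event) / 2.0
    return 1 if reading >= threshold else 0

def error_rate(l_normal, l_event, sigma):
    """Theoretical per-node error rate under zero-mean Gaussian noise
    N(0, sigma^2): the probability that noise pushes a reading across
    the midpoint threshold, i.e. Q((L_e - L_n) / (2 * sigma))."""
    q_arg = (l_event - l_normal) / (2.0 * sigma)
    # Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(q_arg / math.sqrt(2.0))

# Illustrative values: normal level 10, event level 20, noise sigma 2
random.seed(0)
p = error_rate(10.0, 20.0, 2.0)
noisy_normal = 10.0 + random.gauss(0.0, 2.0)
print(bfta_decision(noisy_normal, 10.0, 20.0), round(p, 4))
```

With these values the theoretical error rate is below 1%, which is why the later fault-tolerant refinements focus on correlated faults rather than isolated noise.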

2.2.1. CBA Algorithm. In this algorithm, T_j represents the credibility of each sensor node [7], and the credibility-weighted mean of the observation values of a node's neighbors is used to decide whether an event has occurred; the discriminant function is denoted f(X̄_i, X_i). Let X_j (j = 1, 2, ⋯, k) be the observation data of the neighbor nodes s_j of node s_i, with the node credibility initialized to T_max. First, the weighted average X̄_i over node s_i and its neighbor nodes is calculated; then the judgment function f(X̄_i, X_i) is used to decide the state of node s_i: if the weighted mean satisfies the decision condition the node reports an event, and otherwise it reports 0.

2.2.2. FTD Algorithm. The algorithm implements event detection according to a node's neighbors and the neighbors of those neighbors [8]. Each node sets different credibility values according to its neighbors' neighbor nodes and judges through these credibilities whether reliable detection of the event has been achieved. First, the sum of the observations of the neighbor nodes of node s_i is calculated; from it the credibility of node s_i is computed, followed by the credibility of the neighbors of s_i's neighbors.

2.2.3. DFWD Algorithm. The DFWD algorithm is based on hypothesis testing: node s_j is considered to have detected an abnormality only if most of its neighbor nodes also meet the hypothesis testing conditions. The test compares the variance function of the node perception data, Var(t), against the expected functions Exp_n(t) and Exp_e(t), which represent the expected node perception data in the normal area and the event area, respectively.
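The neighbor-quorum idea behind the DFWD-style hypothesis test can be sketched as follows; the quorum fraction and the boolean-alarm abstraction are simplifying assumptions of this sketch, not the paper's exact test:

```python
def dfwd_style_decision(own_alarm, neighbor_alarms, quorum=0.5):
    """Hypothetical sketch of a DFWD-style rule: a node's own alarm is
    confirmed as an event only if at least a quorum fraction of its
    neighbors also report an alarm; otherwise the alarm is treated as
    a likely node fault / false positive."""
    if not own_alarm or not neighbor_alarms:
        return False
    agreeing = sum(1 for alarm in neighbor_alarms if alarm)
    return agreeing / len(neighbor_alarms) >= quorum

# An alarm echoed by 3 of 4 neighbors -> confirmed event
print(dfwd_style_decision(True, [True, True, True, False]))   # True
# An isolated alarm with no neighbor support -> likely faulty node
print(dfwd_style_decision(True, [False, False, False, False]))  # False
```

This is the mechanism that lets DFWD suppress false positives from individual faulty nodes while still propagating genuine, spatially correlated events.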

2.3. Information Processing Methods at the Sink Node of the Wireless Sensor Network. In a WSN, the sensor information of ordinary nodes in the monitoring area is transmitted to the sink node; according to the data packet transmission method and the ability of ordinary sampling points in the monitoring area to fuse data [9], information processing methods can be divided into centralized and distributed. The characteristic of the centralized structure is that sensor nodes transmit their information directly to the sink node, which then performs the data fusion. This structure minimizes information loss, but wireless sensor network nodes are deployed very densely, so multiple sensor nodes may observe the same target (a many-to-one situation), and the resulting redundant information severely drains the energy of the network. The distributed method is also called in-network processing [10]. When sensor nodes transmit information via the cluster head nodes in the network, the cluster head node screens the collected information and preprocesses it accordingly. One method for a moving sink to collect information is as follows: first, a two-layer grid is established over the entire network, and hierarchical monitoring is performed on this grid, with event-driven or query-driven monitoring of events of interest in the environment. For monitoring applications, when the sink is moving, an agent mechanism reselects the direct agent and the main agent to ensure that the sink can continue to collect data from the event source or the query source node. The cluster head node then transmits the processed information to the sink node, which finally carries out the information processing.
In this way, information collection in the wireless sensor network can be carried out efficiently; both mechanisms reduce the scale of information transmission [11] and hence energy consumption [12], channel utilization is improved, and the lifetime of the wireless sensor network is extended. Therefore, for wireless sensor networks with high energy-saving requirements, as in this article, a distributed structure is used to process data. In a wireless sensor network, there are many ways to classify the data fusion methods at the sink node; the main methods are shown in Figure 1.
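A minimal sketch of the distributed (in-network) approach, in which a cluster head forwards a fixed-size summary instead of raw readings, might look like this; the summary fields are illustrative, not prescribed by the paper:

```python
def cluster_head_fuse(readings):
    """Hypothetical pre-processing at a cluster head: instead of
    forwarding every raw reading to the sink, forward one fixed-size
    summary record (count, mean, min, max), cutting packet volume
    from len(readings) values to four."""
    n = len(readings)
    mean = sum(readings) / n
    return {"count": n, "mean": mean,
            "min": min(readings), "max": max(readings)}

# Five raw temperature readings collapse into one summary packet
summary = cluster_head_fuse([21.0, 21.4, 20.8, 35.0, 21.1])
print(summary["count"], round(summary["mean"], 2))
```

The sink still sees the outlier through the `max` field, while the per-round transmission cost becomes independent of the cluster size.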

3. Simulation Experiment

3.1. Feasibility Experiment. The experimental scenario assumes that 4000 nodes are randomly generated in 633 RCs distributed over a 6400-square-meter site. The sensing radius [13] is 15 meters, and the initial energy of each node is 90 joules. The node distribution is shown in Figure 2.
First, consider the energy consumption of the wireless sensor network without attacks and without interference from other network nodes [14], with a transmit radius of 10 meters for SORCA-W and 40 meters for SORCA, as shown in Figure 3.
It can be seen from the power consumption model that when the distance between communicating nodes is below the crossover distance, the transmitter's power consumption is proportional to the square of the distance; otherwise, it is proportional to the fourth power of the distance. As long as the normal communication of nodes is not affected [15, 16], a large amount of energy can be saved by shortening the transmit radius of a node.
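A hedged sketch of this power consumption behavior, using the widely cited first-order radio model; the constants are common illustrative values, not taken from the paper:

```python
import math

# First-order radio model constants (illustrative, not from the paper)
E_ELEC = 50e-9       # J/bit, transceiver electronics energy
EPS_FS = 10e-12      # J/bit/m^2, free-space amplifier coefficient
EPS_MP = 0.0013e-12  # J/bit/m^4, multipath amplifier coefficient
D0 = math.sqrt(EPS_FS / EPS_MP)  # crossover distance (~88 m here)

def tx_energy(bits, d):
    """Transmit energy: proportional to d^2 below the crossover
    distance D0, and to d^4 beyond it."""
    if d < D0:
        return bits * (E_ELEC + EPS_FS * d * d)
    return bits * (E_ELEC + EPS_MP * d ** 4)

# Shrinking the transmit radius from 40 m to 10 m saves energy per bit
print(tx_energy(1, 40.0) > tx_energy(1, 10.0))  # True
```

This illustrates the claim above: the d² (and eventually d⁴) growth is exactly why reducing the transmit radius, as SORCA-W does, conserves node energy.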

3.2. Continuous Wiener Process Acceleration Model. When studying the state estimation problem of linear systems, the Wiener process is used to describe the product performance degradation trajectory. In the test, when product performance degrades to a certain threshold, the stress level is changed, so that the stress-level change time of the product is a random variable obeying the inverse Gaussian distribution; on this basis, a step-stress accelerated degradation model of the product is established. The continuous Wiener process acceleration (CWPA) model is the most basic such model. The purposes of this experiment are as follows: (1) Through this experiment, we understand how to model the physical system and how to convert the resulting continuous time-invariant state differential equation into a discrete-form state equation suitable for the Kalman filter, together with related Kalman data processing topics such as noise reduction, real-time computation, and applications in navigation and control.
(2) Because the sensor measurement data must be generated artificially in the simulation, this experiment also explains how to simulate the dynamic system to generate sensor measurements close to the actual values. (3) The influence of the type and error of the physical quantity measured by the sensor on the experimental results is explained. In the mathematical modeling of actual linear time-invariant physical systems [17], the system often needs to be expressed in the form of a continuous time-invariant state differential equation of the general form ẋ(t) = Ax(t) + Bu(t) + w(t). In the simulation experiment, the target motion trajectory and measured values generated by the simulation are shown in Figure 4.
According to the above-measured values, the following position and velocity estimates can be obtained by the standard Kalman filter [18], as shown in Figures 5 and 6.
The figures show that the larger the error covariance of the generated measurements, the larger the estimation error. Because the magnitude of the coordinate values is much larger than that of the speed values, the errors must be normalized; after normalization, the relative error of the position is lower than the relative error of the speed. This is because the sensor directly measures only the target position, while the speed is obtained by indirect calculation. If the observation matrix H [19] is modified so that the target position and speed are measured at the same time, the accuracy of both the filtered position and speed estimates will improve.
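The position-only measurement setup discussed above can be sketched with a small one-dimensional constant-velocity Kalman filter; all numeric values (noise variances, trajectory, initial covariance) are illustrative assumptions, not the paper's parameters:

```python
import random

def kalman_cv(measurements, dt=1.0, q=0.01, r=1.0):
    """1-D constant-velocity Kalman filter measuring position only
    (H = [1, 0]); velocity is inferred indirectly, which is why its
    relative error exceeds the position's. State x = [pos, vel],
    F = [[1, dt], [0, 1]]; q and r are illustrative noise levels."""
    pos, vel = measurements[0], 0.0
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0  # covariance P
    estimates = []
    for z in measurements:
        # Predict: x = F x, P = F P F' + Q (Q = diag(q, q) here)
        pos += dt * vel
        n00 = p00 + dt * (p01 + p10) + dt * dt * p11 + q
        n01 = p01 + dt * p11
        n10 = p10 + dt * p11
        n11 = p11 + q
        p00, p01, p10, p11 = n00, n01, n10, n11
        # Update with position measurement z: K = P H' / (H P H' + r)
        s = p00 + r
        k0, k1 = p00 / s, p10 / s
        innovation = z - pos
        pos += k0 * innovation
        vel += k1 * innovation
        # P = (I - K H) P, using pre-update values
        u00 = (1.0 - k0) * p00
        u01 = (1.0 - k0) * p01
        u10 = p10 - k1 * p00
        u11 = p11 - k1 * p01
        p00, p01, p10, p11 = u00, u01, u10, u11
        estimates.append((pos, vel))
    return estimates

# Simulated target: constant velocity 2 units/step, noisy position-only
# measurements (sigma = 1), mimicking the experiment described above
random.seed(1)
true_pos = [2.0 * t for t in range(50)]
meas = [p + random.gauss(0.0, 1.0) for p in true_pos]
final_pos, final_vel = kalman_cv(meas)[-1]
print(round(final_pos, 1), round(final_vel, 2))
```

Note that the velocity estimate converges only after a transient, since it is reconstructed from successive position innovations rather than measured directly.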

3.3. Experimental Data Set. This experiment uses the KDD99 data set [20], which most researchers use for training and testing network intrusion detection systems. The experimental data consist of more than 500,000 network connection records for training and more than 200,000 network connection records for testing, as shown in Figure 7.
Each record of this data set contains a class identifier and 42 fixed characteristic attributes, of which 5 are symbolic variables, 7 are discrete variables, and 32 are continuous variables. The data text format is shown in Table 2.
The last column is the category identifier, which marks whether the record is normal. The training set contains 24 types of attacks, and the test set contains another 18 types [21]. The data include a large amount of normal network traffic and various intrusion behaviors and are relatively representative. The intrusion data involve four categories and 21 subcategories of intrusion attacks; the categories are DOS, Probe, R2L, and U2R [22], and intrusion detection classification is performed accordingly. Taking into account the total number of samples and the balance of the data, the attack data in the data set are marked as abnormal [23].
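Marking every non-normal record as abnormal, as described above, can be sketched on KDD99-style comma-separated records; the shortened example rows are illustrative, not actual data set lines:

```python
def label_record(record_line):
    """Hypothetical parsing of one KDD99-style connection record:
    comma-separated feature values with the attack type as the last
    field (records end with a trailing dot). Everything that is not
    'normal' is marked abnormal, as in the experiment above."""
    fields = record_line.strip().rstrip(".").split(",")
    return "normal" if fields[-1] == "normal" else "abnormal"

# Shortened, illustrative records (real rows carry dozens of features)
print(label_record("0,tcp,http,SF,242,14420,normal."))  # normal
print(label_record("0,tcp,http,SF,54540,8314,back."))   # abnormal
```

Collapsing the 21 attack subcategories into a single "abnormal" label is what keeps the class balance workable for the binary diagnosis experiments that follow.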

3.4. The Relationship between the S-LEACH Algorithm and the Improved BP Neural Network Algorithm. Since the model uses a network model based on a clustering mechanism, the S-LEACH routing algorithm provides a reasonable data collection and transmission mechanism for model construction. That is, the network is clustered according to the improved cluster head election algorithm, and the improved BP neural network fusion algorithm is then used within each cluster's routing structure to fuse the data collected by the sensors of the different types of member nodes in the cluster. Each cluster extracts the characteristic value of the fused sensor data and transmits it to the sink node through a combination of single-hop and multihop routing. In this way, the sink node receives characteristic data that reflect the current situation of the entire network rather than raw sensor data, which greatly reduces the energy consumed by transmitting large amounts of data. The S-LEACH routing algorithm is a network-level data fusion algorithm that uses cluster heads to summarize data and improve the energy efficiency of the WSN, while the BP neural network algorithm extracts features that reflect the collected data through a trained neural network model. At the fusion level, the two complement each other and make up for each other's shortcomings. Using the combination of the two algorithms to build a WSN-based data fusion model can effectively improve the efficiency of data collection and extend the life cycle of the network. In the WSN, the data collected by a node are used as the input of the BP neural network, and the characteristic values of the sensor data are extracted through the neural network model [24]. In this way, sensor data occupying different numbers of data bytes can be converted into simple characteristic values and sent to the convergence center, realizing fused processing of the data.
In view of the above assumptions, this chapter establishes a multilevel data fusion model, including a first-level fusion model and a second-level fusion model. Since the S-LEACH algorithm in this paper is based on multihop routing between clusters, the second-level fusion adopts an intercluster fusion model based on the weighted average algorithm.
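A minimal sketch of the weighted-average intercluster fusion step; the weights and values are illustrative, and the weighting scheme (here, by cluster size) is an assumption, since the paper does not specify how weights are chosen:

```python
def weighted_average_fuse(values, weights):
    """Second-level (intercluster) fusion sketch: each cluster head's
    extracted feature value is weighted before the sink combines them
    into a single network-level feature."""
    if len(values) != len(weights) or sum(weights) <= 0:
        raise ValueError("values/weights must match and weights be positive")
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Three cluster heads report a fused temperature feature; the larger
# cluster gets proportionally more weight (illustrative numbers)
print(weighted_average_fuse([21.0, 22.0, 25.0], [10, 8, 2]))
```

Weighting lets a small, possibly anomalous cluster contribute without dominating the network-wide feature that the sink ultimately sees.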

3.5. Data Analysis. In this article, the one-versus-one method is used to realize multiclass fault classification. First, the training samples are used to train the LS-SVM diagnosis model, and the trained model is then used to diagnose simulated faults. The RBF kernel is used as the kernel function of the LS-SVM, and cross-validation is used to determine the model parameters, namely, the hyperparameter δ and the RBF kernel parameter ∂. The reduced fault sample data are randomly divided into 6 nonintersecting subsets S1, S2, S3, S4, S5, S6. In each training round, 5 of the subsets are used as training samples and the remaining subset as the test set. In this way, the classification result can be compared with the original fault type of the test sample, and the accuracy reflects the classifier's ability to classify unknown samples. This article combines cross-validation with grid search to determine the best parameter pair (δ, ∂) that maximizes the classification accuracy. Based on experience, the parameter grids ∂ = (0.02, 0.06, 10, 50) and δ = (5, 10, 50, 300) are used. Table 3 shows part of the experimental results. From Table 3, the training accuracy and test accuracy are highest (0.99 and 0.98, respectively) at δ = 0.5, ∂ = 1, so the optimal parameter combination is δ = 0.5, ∂ = 1. The corresponding results for the standard SVM are given in Table 4.
By comparing Tables 3 and 4, it can be found that although the highest classification accuracy obtained by LS-SVM training is not as high as that of SVM, the classification accuracy of SVM depends heavily on the choice of parameters: with different parameters, the difference in classification accuracy is very large. As shown in Table 4, the highest classification accuracy reaches about 98%, while the lowest is only about 30%. For LS-SVM, Table 3 shows that no matter which set of parameters is selected, the classification accuracy does not vary greatly with the model parameters and is usually not less than 55%. This shows that LS-SVM has a stronger generalization ability than SVM.
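The 6-fold cross-validation grid search described above can be sketched as follows; the toy RBF-vote classifier stands in for the LS-SVM (which is not implemented here), and the data, parameter grid, and fold assignment are all illustrative:

```python
import math

def kfold_indices(n, k=6):
    """Split n sample indices into k disjoint folds (the paper uses
    6 subsets S1..S6, training on 5 and testing on the sixth)."""
    folds = [[] for _ in range(k)]
    for i in range(n):
        folds[i % k].append(i)
    return folds

def rbf_classify(x, samples, labels, gamma):
    """Toy 1-D RBF-kernel classifier standing in for LS-SVM: each
    training sample votes for its label with weight
    exp(-gamma * (x - xi)^2)."""
    scores = {}
    for xi, yi in zip(samples, labels):
        scores[yi] = scores.get(yi, 0.0) + math.exp(-gamma * (x - xi) ** 2)
    return max(scores, key=scores.get)

def grid_search(samples, labels, gammas, k=6):
    """Return (gamma, accuracy) maximizing mean k-fold accuracy."""
    folds = kfold_indices(len(samples), k)
    best = None
    for gamma in gammas:
        correct = total = 0
        for fold in folds:
            held_out = set(fold)
            tr_x = [x for i, x in enumerate(samples) if i not in held_out]
            tr_y = [y for i, y in enumerate(labels) if i not in held_out]
            for i in fold:
                correct += rbf_classify(samples[i], tr_x, tr_y, gamma) == labels[i]
                total += 1
        acc = correct / total
        if best is None or acc > best[1]:
            best = (gamma, acc)
    return best

# Two well-separated 1-D classes; any sensible gamma should score well
xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 5.0, 5.2, 5.4, 5.6, 5.8, 6.0]
ys = ["a"] * 6 + ["b"] * 6
print(grid_search(xs, ys, gammas=[0.1, 1.0, 10.0]))
```

The same loop structure applies unchanged when the inner classifier is replaced by a real LS-SVM and the grid spans both δ and ∂.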

3.6. Experimental Results. Since WSNs are generally used in complex environments, meteorological conditions and changes in the ionospheric state will interfere with the shortwave channel, and factors such as noise will also cause a certain deviation in the data measured by the nodes. Therefore, this paper selects another 200 sets of test sample data whose faults are subject to external interference; the diagnosis results are shown in Figure 8. It can be seen from Figure 8 that the correct diagnosis rate of the RSLS-SVM model is significantly higher than those of the LS-SVM model and the rough set method, whose accuracies are only 80% and 70% when the data reliability is 90%. When the data reliability reaches about 98.5%, the diagnosis accuracy of the RSLS-SVM model stabilizes at 95%, while the LS-SVM model and the rough set method have not yet stabilized, reaching only 85% and 77%. When the data reliability reaches about 99%, the correct diagnosis rate of the LS-SVM model stabilizes at 88%, and the rough set method stabilizes at a correct diagnosis rate of 71%. This shows the feasibility and superiority of combining rough set theory with LS-SVM. The experimental results also show that fault diagnosis with the rough set alone takes 1.213 s with a diagnosis accuracy of only 79%, the lowest, while diagnosis with the LS-SVM model takes 0.321 s, greatly reducing the diagnosis time. The RSLS-SVM model, which combines the rough set with LS-SVM, takes only 0.180 s and reaches a diagnostic accuracy of 98%, better than using either the rough set or the LS-SVM model alone: the diagnosis is much more efficient, the accuracy is improved, and the fault tolerance is better.
The whole diagnosis process shows that, when using this method to diagnose WSN faults, diagnostic personnel only need to collect the symptoms of a failing WSN node, without deep knowledge of the WSN domain, which improves the adaptability of the diagnostic algorithm and broadens its range of applications.

4. Conclusions
In the modern era of rapid scientific development, information fusion participates in more and more human production processes, such as aviation and railway information, environmental monitoring, and forest fire prevention. Many scholars are committed to the research and refinement of fusion algorithms in order to make their application scope wider and more efficient. The main work of this paper includes the following points: (1) research on the theory and technology of multisensor data fusion; (2) in view of the fact that the limited fusion threshold in data-level fusion algorithms affects the accuracy of the fusion result, this paper proposes a data fusion algorithm applied to homogeneous multisensor systems. With the continuing application of WSNs, research on secure data fusion methods in WSNs will keep improving, including research on specific data fusion methods, on the types of attacks that can be resisted, on encryption methods, and on network efficiency. Compared with existing schemes, the two WSN secure data fusion schemes proposed in this paper perform well, but this does not mean they are perfect; they still need to be improved.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there are no potential competing interests regarding this paper.