
This paper discusses the internal correlation between meshless discrete data and learning samples, and between the recursive operations of meshless dynamic analysis and the information-transmission mode of cyclic convolutional neural networks. A cyclic convolutional neural network based on the meshless method is established, and an agent model of the cyclic convolutional neural network based on dynamic characteristics is demonstrated. The method inherits the flexible node configuration of the meshless discrete model, which improves the universality and adaptability of cyclic convolutional neural networks. In addition, because of the historical-memory property of the recurrent module, the network can analyze continuous data efficiently, so the solution of the dynamic analysis is accelerated without affecting the calculation accuracy. The accuracy and effectiveness of the method are studied experimentally on a group of examples.


Introduction
Recurrent neural networks (RNNs) are a class of deep-learning models for processing continuous (sequential) data. An RNN applies a recursive neuron whose internal messages propagate along the data sequence, and its internal loop module has memory and parameter-sharing features, so good results are obtained when extracting the characteristics of continuous information. RNNs have critical applications in natural language processing, time-series data, and related fields. Conventional neural networks generally adopt a single one-dimensional vector for network learning and transmission [1]; if a convolutional neural network (CNN) is added to the network, the number of hyperparameters can be reduced.
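The memory and parameter-sharing behavior described above can be illustrated with a minimal recurrence. This is only a sketch: the sizes, the tanh nonlinearity, and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 4, 8
W_x = rng.normal(scale=0.1, size=(d_hid, d_in))   # input weights, shared across all steps
W_h = rng.normal(scale=0.1, size=(d_hid, d_hid))  # recurrent weights, shared across all steps
b = np.zeros(d_hid)

def rnn_forward(sequence):
    """Process a sequence step by step; h carries the 'historical memory'."""
    h = np.zeros(d_hid)
    for x_t in sequence:
        h = np.tanh(W_x @ x_t + W_h @ h + b)  # the same parameters are reused at every step
    return h

seq = rng.normal(size=(5, d_in))  # a toy sequence of 5 time steps
h_final = rnn_forward(seq)
print(h_final.shape)  # (8,)
```

Because `W_x`, `W_h`, and `b` are reused at every time step, the parameter count is independent of the sequence length, which is the parameter-sharing property the text refers to.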
Firstly, the correlation between unraster discrete data and deep learning training samples, Newmarkbased dynamic analysis process and continuous information transmission mode of periodic convolutional neural networks is compared. The results show an inherent consistency in the operation process and the transmission from data to data. A new circular convolutional neural network structure suitable for the meshless method is proposed. A proxy model of a cyclic convolutional neural network based on dynamic characteristics is established [2]. The periodic convolutional neural network model shown by the rasterless dynamic analysis method can fully use the node model in the masterless process. The technique has the characteristics of discrete flexibility and high precision of numerical sampling. This method can significantly improve the calculation speed of unraster dynamic analysis and maintain high accuracy.
2 Meshless discrete modeling of cyclic convolutional neural networks

Symbol Representation
In this paper, the knowledge base consists of three parts: Q, a set of entities; P, a set of relations; and K, the set of all collected triples [3]. A triple (q, p, t) ∈ K has head entity q ∈ Q and relation p ∈ P. The embeddings of entity q and relation p are d-dimensional vectors, where |P| is the number of relations, |Q| is the number of entities, and d is the embedding size. The remaining notation covers, for an entity q, its semantic neighbor set, its topological neighbor set, its combined adjacent set, and its name-word set; further symbols denote a unit's nearest neighbors, a unit's name, the encoding of a unit name, a unit's structure representation, and a relation's representation.
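A small data-structure sketch of the notation above: a knowledge base K holding triples (q, p, t) over entity set Q and relation set P, each embedded into a d-dimensional space. The concrete names and sizes here are illustrative assumptions.

```python
import numpy as np

Q = {"unit_a", "unit_b", "unit_c"}            # entity set Q
P = {"part_of", "adjacent_to"}                # relation set P
K = {("unit_a", "part_of", "unit_b"),
     ("unit_b", "adjacent_to", "unit_c")}     # collected triples K

# |Q| entities and |P| relations, each embedded into a d-dimensional space
d = 16
rng = np.random.default_rng(1)
entity_emb = {q: rng.normal(size=d) for q in Q}
relation_emb = {p: rng.normal(size=d) for p in P}

head, rel, tail = next(iter(K))
print(entity_emb[head].shape)  # (16,)
```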

Elements of the model
This paper presents the overall structure of the meshless cyclic convolutional neural network (Figure 1). The CCTA system consists of four modules. 1) A fused-representation generation model: neighboring entities are obtained from the literal and topological neighborhoods and encoded, then blended with knowledge of the entity's name, producing a more meaningful merged representation of the entity. 2) An entity-relation interaction model: a large amount of feature information is captured by rearranging and reconstructing the obtained representations. 3) A meshless cyclic convolutional neural network model: the input features are weighted by a meshless cross-dimensional discrete interaction method, and a cyclic convolution operation then extracts the relationships among entities and their attributes. 4) Triple scoring: the feature map is flattened and mapped to the embedding dimension of the entity, and the score and probability of the triple are obtained by dot multiplication and normalization.

Fusion representation generation model
The fusion representation generation module generates the representation of entity q. Its implementation includes entity-neighbor generation and encoding, entity name and structure encoding, and fusion representation generation; Figure 2 shows the construction. 1) Generation and encoding of entity neighbors. Before generating a new adjacency relation, this paper uses the text features of the entity to construct the corresponding adjacency relation [4]: correlative words are extracted from the entity's caption by a name-matching method, forming a meaningful semantic neighborhood. The semantic neighborhood set of q is built from the set of words in q's caption, and the topological neighborhood set of q is taken from the knowledge base; the final adjacent set of q is formed from the intersection of these two groups. If the number of adjacent nodes exceeds the preset neighbor count, the first nodes are selected as the final group of adjacent nodes [5]; if the number of adjacent nodes is smaller, additional adjacent nodes are sampled arbitrarily until the preset count is reached. The selection of physical neighbors is shown in Figure 3, where an example pair of neighbors is selected.
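The neighbor-set completion rule above (take the first n neighbors when there are enough, otherwise resample until n are obtained) can be sketched as follows. The function name and the parameter n are illustrative; the paper's own neighbor count is not given here.

```python
import random

def select_neighbors(neighbors, n, seed=0):
    """Return exactly n neighbors: truncate if too many, resample if too few."""
    rng = random.Random(seed)
    if len(neighbors) >= n:
        return neighbors[:n]                       # keep the first n adjacent nodes
    extra = [rng.choice(neighbors) for _ in range(n - len(neighbors))]
    return neighbors + extra                       # pad by arbitrary sampling

print(select_neighbors(["a", "b", "c"], 2))  # ['a', 'b']
print(len(select_neighbors(["a", "b"], 5)))  # 5
```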

Figure 3. Selection of physically adjacent entities
The physical neighborhood of q is obtained using the above method. First, each neighbor entity is initialized by embedding it into a d-dimensional space, giving the original neighborhood representation. This original representation sequence is then fed into an L-layer Transformer encoder, and the hidden-layer states of layer L are averaged, through an activation function, to give the neighborhood representation of the entity. The best test results are obtained with the layer setting used in this paper.
2) Encoding of the entity name and structure. The article uses the name of an entity to enhance its representation [6]. The words of the entity name are initialized with word2vec and their word vectors are averaged; the name representation of the entity is then obtained by reducing the dimension to the entity embedding space through a fully connected layer.
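The name-encoding step can be sketched as below: average pre-trained word vectors over the tokens of the entity name, then project through one fully connected layer. The vocabulary, dimensions, and parameters here are stand-ins for the trained components described in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d_word, d_ent = 300, 64
# stand-in for a pre-trained word2vec vocabulary
word_vecs = {w: rng.normal(size=d_word) for w in ["jilin", "power", "plant"]}
W = rng.normal(scale=0.01, size=(d_ent, d_word))  # fully connected weight (learned)
b = np.zeros(d_ent)                               # fully connected offset (learned)

def encode_name(tokens):
    avg = np.mean([word_vecs[t] for t in tokens], axis=0)  # word2vec average
    return W @ avg + b                                     # project to entity space

print(encode_name(["jilin", "power"]).shape)  # (64,)
```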
Here the weight and offset of the fully connected layer are learned. The name representation and the structure representation of the entity are then added together [7], which gives the name-structure representation of the entity. 3) Fusion representation generation. The fused representation is obtained by combining the neighborhood representation obtained above with the name-structure representation.
This paper presents three different fusion methods.
1) Gated fusion. Since the neighborhood representation and the name-structure representation contribute differently to the fused representation, this article introduces a gate: the entity's combined representation is controlled by a learned gating indicator that adjusts the ratio of the two parts.
2) Additive fusion. The neighborhood representation and the name-structure representation are simply added to give the fused representation. 3) Concatenation fusion. The neighborhood representation and the name-structure representation are concatenated, and a fully connected layer maps the concatenated vector back to the entity dimension to obtain the fused representation.
Here the mapping matrix and the bias of the fully connected layer are learned.
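The three fusion variants can be sketched side by side, combining a neighborhood representation `q_adj` with a name-structure representation `q_name`. All parameter names and the sigmoid gate construction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
q_adj, q_name = rng.normal(size=d), rng.normal(size=d)

# 1) Gated fusion: a learned gate g balances the two parts.
g = 1.0 / (1.0 + np.exp(-rng.normal(size=d)))   # sigmoid-activated gate (stand-in)
fused_gate = g * q_adj + (1.0 - g) * q_name

# 2) Additive fusion: simply add the two representations.
fused_add = q_adj + q_name

# 3) Concatenation fusion: concatenate, then map back to d dimensions
#    through a fully connected layer.
W = rng.normal(scale=0.1, size=(d, 2 * d))      # mapping matrix (learned)
b = np.zeros(d)                                 # bias (learned)
fused_cat = W @ np.concatenate([q_adj, q_name]) + b

print(fused_gate.shape, fused_add.shape, fused_cat.shape)
```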

Entity-relation interaction model
Suppose the fused representation of the entity and the representation of the relation are given, and an interaction feature is to be produced. This paper uses two steps to complete the comprehensive interaction of entity and relation.
1) Feature rearrangement. The fused entity representation and the relation p each produce corresponding arbitrary configurations [8]. Without constraints the number of configurations would be huge, so the number of arbitrary configurations is limited to a finite quantity V, represented by a group whose ith element is one configuration. The feature-rearrangement procedure is shown in Figure 4.
2) Feature reconstruction. A reconstruction function is determined and applied to each rearranged configuration, so that formerly adjacent features are no longer close together after regrouping. The reconstructed configurations are then spliced into a tensor.
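A hedged sketch of the rearrangement/reconstruction steps: permute segments of the concatenated entity and relation vectors (rearrangement), reshape each arrangement into a 2-D map, and stack the maps into a tensor (reconstruction and splicing). The number of views and the map shape are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 16
e, p = rng.normal(size=d), rng.normal(size=d)   # entity and relation vectors

def rearrange(e, p, n_views=4, seed=0):
    """Produce n_views arbitrary configurations of the concatenated features."""
    idx_rng = np.random.default_rng(seed)
    base = np.concatenate([e, p])
    return [base[idx_rng.permutation(2 * d)] for _ in range(n_views)]

def reconstruct(views, shape=(8, 4)):
    """Reshape each configuration into a 2-D map and splice along a new axis."""
    return np.stack([v.reshape(shape) for v in views], axis=0)

tensor = reconstruct(rearrange(e, p))
print(tensor.shape)  # (4, 8, 4)
```

The permutation guarantees that features adjacent in the original vectors are generally separated in the reconstructed maps, which is the stated goal of the reconstruction step.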

Meshless discretization in the cyclic convolutional neural network model
Not all of the reconstructed tensors are directly applicable to completion, so a pooling operation B is introduced when solving the meshless discretization. Operation B concatenates the max-pooling and average-pooling results of the input tensor, where [;] denotes concatenation and 0 denotes the first dimension, over which the max-pooling and average-pooling operations are performed. The input tensor Y is then transmitted to three meshless discrete branches; the specific operations of the three branches are given below.
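The pooling operation B can be sketched directly: concatenate the max-pool and average-pool of the input tensor along its first dimension. The function name is illustrative.

```python
import numpy as np

def pool_b(y):
    """[max ; avg] pooling over the first dimension, stacked back along axis 0."""
    return np.stack([y.max(axis=0), y.mean(axis=0)], axis=0)

y = np.arange(24, dtype=float).reshape(2, 3, 4)
print(pool_b(y).shape)  # (2, 3, 4): first axis reduced to the 2 pooled maps
```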
The first branch captures cross-channel interactions between the F and V dimensions. First, Y is rotated 90 degrees counterclockwise along the F axis. The rotated tensor is pooled by operation B and then convolved, and the attention weights are generated through a Sigmoid activation function [9]. The resulting attention weights are multiplied element-wise by the rotated tensor, which is then rotated 90 degrees clockwise along the F axis, while the initial input Y is kept. In the formulas for the first branch, the rotation operators denote 90-degree counterclockwise and clockwise rotations along the F axis.
The remaining symbols denote the convolution kernel, the convolution operation, and the activation function.
The second branch captures the interaction between the V and A dimensions. First, Y is rotated 90 degrees counterclockwise along the A axis, and operation B is performed in the F direction [10]. The attention weights are generated through a Sigmoid activation function and multiplied by the rotated tensor, which is then rotated 90 degrees clockwise along the A axis, while the original input Y is maintained. In the formulas for the second branch, the rotation operators denote 90-degree counterclockwise and clockwise rotations along the A axis, and the remaining symbols denote the convolution kernel, the convolution operation, and the activation function.
In the final branch, the first dimension of the input tensor Y is reduced to 2 by operation B. An attention weight is generated through a Sigmoid activation function and multiplied by Y to get the final tensor of the branch; here the symbols denote the convolution kernel, the convolution operation, and the activation function.
A simple average of the three branches yields the output tensor, which is then introduced into the cyclic convolutional neural network [11]. The feature map is obtained after a cyclic convolution, whose symbols denote the cyclic convolution kernel, the cyclic convolution operation, and the corresponding activation function.
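The rotate-pool-weight-rotate-back data flow of the branches, and the final averaging, can be sketched in NumPy. This is a simplified illustration only: the convolution is replaced by a mean over the pooled maps, and the third branch's weighting is reduced to a scalar gate, so the shapes and flow (not the learned operations) are what the sketch demonstrates.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def branch(y, axes):
    """Rotate, pool along axis 0, form sigmoid attention weights, apply, rotate back."""
    y_rot = np.rot90(y, k=1, axes=axes)                          # 90 deg counterclockwise
    pooled = np.stack([y_rot.max(axis=0), y_rot.mean(axis=0)])   # [max ; avg] pooling B
    weight = sigmoid(pooled.mean(axis=0))                        # attention weights
    y_att = y_rot * weight                                       # broadcast multiply
    return np.rot90(y_att, k=-1, axes=axes)                      # 90 deg clockwise, back

y = np.random.default_rng(5).normal(size=(4, 6, 8))  # toy tensor with three dimensions
out = (branch(y, (0, 1)) + branch(y, (0, 2)) + y * sigmoid(y.mean())) / 3.0
print(out.shape)  # (4, 6, 8): averaging the branches preserves the input shape
```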

Data set preprocessing
This paper introduces the agent modeling method for meshless dynamic analysis based on cyclic convolutional neural networks and examines its prediction accuracy [12]. This study uses four elasticity problems as the training objects (Figure 5), each with a specified elastic modulus and Poisson's ratio.
All the examples are discretized without a grid. The generality of the agent model in this paper includes two aspects; the first concerns the data structure: the proposed meshless method can receive data of any size.

Specific network parameters and training
The Adam optimization method is used to optimize the network performance index. Figure 6 shows the specific data-transfer flow of the meshless dynamic analysis. The network parameters are initialized with the Xavier normal distribution [13]. The convergence of the parameters of the corresponding network agent model is shown in Figure 7: the value of the MSE loss function gradually decreases as the number of iterations increases. The results show that the parameter convergence of the agent model based on meshless dynamic analysis is good, and its accuracy improves with iteration.
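The training setup described above (Xavier-style initialization, an MSE loss, and Adam updates) can be sketched on a toy one-layer model. The real network and data are those of the paper and are not reproduced here; everything below is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(6)
n_in, n_out = 10, 1
limit = np.sqrt(6.0 / (n_in + n_out))            # Xavier/Glorot uniform bound
W = rng.uniform(-limit, limit, size=(n_out, n_in))

X = rng.normal(size=(64, n_in))
y = X @ rng.normal(size=(n_in, n_out))           # synthetic linear targets

m = np.zeros_like(W); v = np.zeros_like(W)       # Adam moment accumulators
lr, b1, b2, eps = 1e-2, 0.9, 0.999, 1e-8
losses = []
for t in range(1, 201):
    pred = X @ W.T
    err = pred - y
    losses.append(float(np.mean(err ** 2)))      # MSE loss
    grad = 2.0 * err.T @ X / len(X)
    m = b1 * m + (1 - b1) * grad                 # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2            # second-moment estimate
    W -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)

print(losses[0] > losses[-1])  # True: the MSE decreases as iterations increase
```

As in Figure 7 of the paper, the loss curve decreases with the iteration count; here that behavior follows directly from the bias-corrected Adam updates on a well-posed toy problem.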

Conclusion
This paper discusses in depth meshless dynamic analysis and the learning and prediction process of long short-term memory (LSTM) networks, while making full use of the accuracy and discrete flexibility of the meshless method in generating different types of sample data. A dynamic agent structure based on the cyclic convolution method is established. The agent model of the cyclic convolutional neural network for meshless dynamic computation can obtain the dynamic response at any time quickly and conveniently. The agent modeling method based on the meshless dynamic cyclic convolutional neural network is validated on four different deformation problems. The results show that the method can effectively accelerate the computation of meshless dynamic analysis.

Funding
This work was supported by the Department of Science and Technology of Jilin Province: Research on automatic extraction technology of flame combustion feature information of chain grate furnace based on machine vision (No. 2020C018-5).