Mathematical models for information classification and recognition of multi-target optical remote sensing images

Haiqing Zhang and Jun Han
From the journal Open Physics

Abstract

Traditionally, a three-dimensional model is used to classify and recognize multi-target optical remote sensing image information; such a model can only identify a specific class of targets and therefore has certain limitations. A mathematical model for multi-target optical remote sensing image information classification and recognition is designed. A local adaptive threshold segmentation algorithm is used to segment the multi-target optical remote sensing images, reducing the influence of gray-level differences between images and improving the accuracy of feature extraction. Multiple features are extracted from the remote sensing image information, and the multi-target optical remote sensing image information is identified by a chaotic time series analysis method. The experimental results show that the proposed model can effectively classify and recognize multi-target optical remote sensing image information: the average recognition rate is above 95%, the maximum robustness is 0.45, the maximum recognition speed growth rate is 98%, and the maximum average recognition time is only 14.30 s. The model therefore offers a high recognition rate, strong robustness, and high recognition efficiency.

1 Introduction

An optical remote sensing image usually refers to image data acquired by sensors operating in the visible and part of the infrared bands [1,2]. Such images are intuitive and easy to interpret, their spatial resolution is usually high, and under clear weather conditions they are rich in content with obvious target structure features [3,4], which makes them convenient for target classification and recognition [5]. With the development of remote sensing technology and pattern recognition technology, research on multi-target classification and recognition of optical remote sensing images has attracted wide attention. Its development has broad significance in seismic observation, military reconnaissance, and other fields [6].

The military significance of multi-target optical remote sensing image information classification and recognition is that it can monitor the ground conditions of hostile countries and regions, reconnoiter the military facilities of the other side, and help analyze local terrain in specific operations [7].

  1. Strategic investigation, i.e., using remote sensing images to find local military targets with important strategic significance.

  2. Tactical reconnaissance, i.e., using remote sensing images to discover local tactical targets and to understand local military deployment and operational intentions [8].

  3. Military surveying and mapping, i.e., according to the imaging mechanism of remote sensing images, quantitative measurement of terrain parameters such as the position and elevation of ground targets [9] is used to draw and revise military topographic maps.

  4. Marine monitoring, i.e., the use of remote sensing images to monitor the ship and its wake during navigation, so as to determine the precise position, course, and speed of the ship [10].

  5. Image terminal guidance technology, i.e., when the attack distance is long, onboard image detection is used for automatic recognition and positioning of the target [11] and for controlling the missile's flight [12].

The civil value of classification and recognition of multi-target optical remote sensing image information lies in using land satellites or aircraft to acquire remote sensing images of an area for resource investigation, disaster monitoring, resource exploration, agricultural planning, and urban planning [13]. Using computers to interpret and analyze remote sensing images saves manpower and speeds up the analysis. In addition, ocean color remote sensing can be used to monitor the marine ecosystem, which is of great significance to the development of marine fishery resources, the marine environment, coastal erosion and siltation, near-shore water pollution, red tide, and harmful algae outbreaks.

At present, there are some problems in multi-target detection and recognition based on optical remote sensing images. For example, literature [14] proposed a multi-target optical remote sensing image information classification and recognition model based on a radial basis kernel function. An adaptive-threshold fast detection algorithm segments the image; on the basis of the support vector machine with a radial basis kernel function, global and local features are extracted separately, the AdaBoost algorithm adjusts the weights of weak classifiers, and a strong classifier is finally obtained to classify and recognize the test target images. However, this model can only handle the detection and recognition of specific types of targets and cannot detect and recognize multiple targets at the same time, which is a certain limitation. Literature [15] proposed a multi-target optical remote sensing image information classification and recognition model based on feature extraction. Its target extraction method satisfies translation, rotation, and scale invariance simultaneously; it uses a one-against-one support vector machine with a radial basis kernel function to build a target recognition probability model and, on this basis, designs a weighted fusion strategy for the multi-feature decision layer to achieve multi-target classification. However, this model has poor adaptability and robustness with respect to the selected features and target rotation.

Because traditional multi-target optical remote sensing image information classification and recognition adopts a three-dimensional model, it can only recognize a specific category of targets, which is a certain limitation. To solve this problem, this paper designs a mathematical model of multi-target optical remote sensing image information classification and recognition based on the idea of multi-target recognition. The research mainly involves three stages: target segmentation and detection, feature extraction, and target recognition [15]. The target segmentation and detection stage is an important preparatory step for extracting remote sensing image information: based on multi-target optical remote sensing image target detection, the image is divided into several regions according to its features [16]. The mathematical model is then established by a chaotic time series analysis method to realize accurate classification and recognition of the image information [17].

2 Analysis of mathematical model of information classification and recognition

The traditional model can only deal with the detection and recognition of specific types of targets and cannot detect and recognize multiple targets at the same time; it has certain limitations and suffers from poor robustness. For this reason, a mathematical model for multi-target optical remote sensing image classification and recognition is designed, which effectively improves the robustness and recognition efficiency of optical remote sensing image classification and recognition.

2.1 Multi-target optical remote sensing image segmentation

Threshold segmentation is a global image segmentation method [18]; other typical segmentation methods include watershed segmentation, region-tracking segmentation, and clustering segmentation [19]. Traditional threshold segmentation mostly depends on the gray-level distribution of the image having a good bimodal shape. Because all the remote sensing images in this paper are optical remote sensing images, the segmentation algorithm should adapt to changes in illumination and weather [20]. Because the image background is often complex and the gray levels within a multi-target optical remote sensing image differ considerably, it is difficult to segment multiple targets simultaneously with a fixed threshold [21]. In this paper, a local adaptive threshold segmentation algorithm is used to segment the multiple targets in the optical remote sensing image. In general, this kind of algorithm has strong adaptability: it determines the binarization threshold at each pixel position according to the pixel value distribution of the neighborhood block in which the pixel is located. For an optical remote sensing image f of size H × H, f(x, y) denotes the gray value of the pixel in row x and column y. In this paper, a threshold is obtained by weighting each neighborhood block in the image, which yields a threshold plane for the whole image, denoted T(x, y). The binarization of the image is then completed with this threshold plane. After thresholding, the pixel gray value g(x, y) is expressed by formula (1):

(1) $g(x,y) = \begin{cases} \text{black}, & f(x,y) \le T(x,y) \\ \text{white}, & f(x,y) > T(x,y) \end{cases}$

The classical local adaptive thresholding algorithms include Bernsen, Niblack, and Sauvola. In this paper, the Sauvola method is selected: the weighted mean m(x, y) and standard deviation s(x, y) of the current image point are computed within a w × w window to obtain the corresponding threshold, and the variance term is adjusted adaptively. For the central pixel of the window, given the parameter k and the maximum value R of the standard deviation s(x, y), the threshold formula is as follows:

(2) $T(x,y) = m(x,y)\left[1 + k\left(\frac{s(x,y)}{R} - 1\right)\right]$.

In formula (2), the choice of the first parameter w has a great influence on the threshold segmentation result: if w is too large, the degree of adaptation is low, local processing loses its meaning, and the algorithm runs slowly; if w is too small, the degree of adaptation is high, and noise may interfere with the foreground or the background.

The second parameter k also affects the segmentation result: as k increases, the segmented target becomes wider; as k decreases, it becomes narrower.

The third parameter R is set to the maximum value of the standard deviation; the threshold is then adjusted according to the contrast of the pixel's local neighborhood through the local weighted mean m(x, y) and standard deviation s(x, y) of the multi-target optical remote sensing image.

When some areas of the image have high contrast, s(x, y) ≈ R and therefore T(x, y) ≈ m(x, y). When the contrast of the current pixel's neighborhood is low, the threshold T(x, y) falls below the local mean, so the shadowed part of the background of the multi-target optical remote sensing image can be successfully eliminated.
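The thresholding rule of formulas (1) and (2) can be summarized in a short sketch. The following minimal NumPy/SciPy implementation is an illustration only; the default window size w, sensitivity k, and dynamic range R are assumptions chosen for 8-bit imagery, not values reported in the paper.

import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(image, w=25, k=0.2, R=128.0):
    """Binarize a gray-level image with the Sauvola rule of formula (2):
    T(x, y) = m(x, y) * (1 + k * (s(x, y) / R - 1)).
    w, k, and R are illustrative defaults, not values from the paper."""
    img = image.astype(np.float64)
    # Local mean m(x, y) and local standard deviation s(x, y) over a w x w window.
    mean = uniform_filter(img, size=w)
    mean_sq = uniform_filter(img * img, size=w)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    # Threshold plane T(x, y) of the whole image.
    T = mean * (1.0 + k * (std / R - 1.0))
    # Formula (1): pixels at or below the threshold become black (0), the rest white (255).
    return np.where(img <= T, 0, 255).astype(np.uint8)

An equivalent ready-made routine, threshold_sauvola in scikit-image, can be used instead of the hand-rolled local statistics above.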

2.2 Feature extraction of multi-objective optical remote sensing image

Multi-target optical remote sensing image feature extraction is a very important step in multi-target optical remote sensing image information classification and recognition [22]. In this paper, hierarchical BoF-SIFT features, improved shape context (SC) features, and Hu invariant moment features are selected to describe the multi-target optical remote sensing images; together they can accommodate the translation, scaling, and rotation differences of optical remote sensing images [23].

2.2.1 Hierarchical BoF-SIFT features

Given the large differences in the scale, zoom, and rotation of targets in remote sensing images [24], and the interference of light intensity, complex backgrounds, and shadows in the external environment, accurate target classification and recognition is very difficult [25]. Because scale-invariant feature transform (SIFT) features are among the most robust feature descriptors and are widely used in many fields, this paper proposes a hierarchical BoF-SIFT feature based on the traditional SIFT feature to represent remote sensing images [26].

The SIFT feature dimension is not unique, because the number and positions of the feature points detected for each target in a multi-target optical remote sensing image differ. First, feature points with a fixed number and fixed locations are used to deal with this problem. Second, the bag-of-features (BoF) idea represents the target characteristics visually, reflecting both the similarity of identical targets and the differences between different targets. Third, an image pyramid expresses the spatial distribution of the target features in more detail. Combining these three points yields the hierarchical BoF-SIFT feature representation.

The process of generating hierarchical BoF-SIFT features is shown in Figure 1 and proceeds in the following steps; a code sketch of the pipeline is given after the figure.

  1. Generate SIFT descriptors: the total number of samples is m, and each target is a 100 × 100 pixel image. To fully represent the local features of the target with a uniform feature dimension, a two-pixel border is removed to avoid edge effects, and the remaining region is sampled at 11 × 11 = 121 feature points spaced 8 pixels apart, each centered in a 16 × 16 neighborhood. The SIFT descriptor of each feature point is a 4 × 4 × 8 = 128-dimensional vector, so SIFT descriptors of m × 121 feature points are obtained over all samples.

  2. Generate BoF-SIFT features: K-means++ clustering is applied to the SIFT descriptors of the m × 121 feature points extracted from all samples, which makes up for the shortcoming of the random selection of seed points in traditional K-means clustering.

  3. Construct the image pyramid: the sample is divided into three layers; the first layer is the whole sample, the second layer is divided into 2 × 2 sub-blocks, and the third layer into 4 × 4 sub-blocks, giving 21 sub-blocks in total.

  4. Generate hierarchical BoF-SIFT features: the features of the 21 sub-blocks of the pyramid are projected onto the K cluster centers, yielding 21 K-dimensional histogram feature vectors.

Figure 1: Schematic diagram of the hierarchical BoF-SIFT feature extraction algorithm.
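A compact sketch of steps 1–4 is given below. It follows the grid spacing, descriptor size, and three-layer pyramid stated in the text, but the use of OpenCV's SIFT descriptor and scikit-learn's K-means++ implementation, as well as the function names and default parameters (e.g., K = 20), are assumptions for illustration rather than the authors' exact implementation.

import numpy as np
import cv2
from sklearn.cluster import KMeans

def dense_sift_descriptors(img_gray_100x100, spacing=8, border=10, patch=16):
    """Step 1: 128-D SIFT descriptors at a fixed 11 x 11 grid of keypoints (uint8 grayscale chip)."""
    sift = cv2.SIFT_create()
    coords = np.arange(border, 100 - border + 1, spacing)        # 11 positions: 10, 18, ..., 90
    kps = [cv2.KeyPoint(float(x), float(y), patch) for y in coords for x in coords]
    _, desc = sift.compute(img_gray_100x100, kps)                # (121, 128)
    return kps, desc

def build_vocabulary(all_descriptors, K=20):
    """Step 2: K-means++ clustering of the m x 121 descriptors gathered from all samples."""
    return KMeans(n_clusters=K, init="k-means++", n_init=10).fit(all_descriptors)

def hierarchical_bof(kps, desc, kmeans, K=20, img_size=100):
    """Steps 3-4: 1 + 4 + 16 = 21 pyramid sub-blocks, each summarized as a K-bin word histogram."""
    words = kmeans.predict(desc)
    pts = np.array([kp.pt for kp in kps])
    hists = []
    for level in (1, 2, 4):                                      # 1x1, 2x2, 4x4 grids
        cell = img_size / level
        for by in range(level):
            for bx in range(level):
                in_block = (pts[:, 0] // cell == bx) & (pts[:, 1] // cell == by)
                hists.append(np.bincount(words[in_block], minlength=K))
    return np.concatenate(hists).astype(np.float64)              # 21 * K dimensions

With K = 20, using one, two, or all three pyramid layers gives feature dimensions of 20, 20 + 80, and 20 + 80 + 320, matching the eigenvector dimensions reported later in Table 2.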

2.2.2 Improved SC features

The shape context (SC) feature is an intuitive representation that describes the vector relationship between each feature point and all the other feature points extracted from the multi-target optical remote sensing image. For remote sensing images whose targets differ in size and orientation, the standard SC feature does not classify and recognize well. This paper therefore makes four improvements to the traditional SC feature algorithm; a code sketch follows the list.

  1. A descriptor is generated only at the center of the sampling points, which removes a large amount of redundant information and saves running time [27].

  2. The radius is divided uniformly to reduce the influence of uneven radius division on the density of the polar-coordinate sampling points.

  3. By counting the distances from all sampling points to the center point, the maximum distance is automatically selected as the outer radius of the polar coordinates, which overcomes the lack of scale invariance of the SC feature.

  4. The symmetry axis of the target is found from the distances between the sampling points and the center and is rotated to the vertical direction, which gives the improved SC feature rotation adaptability.
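A minimal sketch of the improved SC descriptor, reflecting the four modifications above, is given below. The bin counts and the particular way the dominant axis is estimated (here, from the farthest sampling point) are assumptions for illustration; the paper does not specify them.

import numpy as np

def improved_shape_context(points, n_r=5, n_theta=12):
    """points: (N, 2) array of sampling-point coordinates on the target contour.
    Returns a single n_r x n_theta histogram descriptor computed at the center."""
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)                     # improvement 1: one descriptor at the center
    vec = points - center
    dist = np.linalg.norm(vec, axis=1)
    r_max = dist.max()                               # improvement 3: auto outer radius -> scale adaptivity
    # Improvement 4 (assumed realization): rotate the dominant axis to the vertical direction.
    far = vec[np.argmax(dist)]
    axis_angle = np.arctan2(far[1], far[0])
    theta = np.mod(np.arctan2(vec[:, 1], vec[:, 0]) - axis_angle + np.pi / 2, 2 * np.pi)
    # Improvement 2: uniform (not logarithmic) radial bins over [0, r_max].
    r_bin = np.minimum((dist / (r_max + 1e-12) * n_r).astype(int), n_r - 1)
    t_bin = np.minimum((theta / (2 * np.pi) * n_theta).astype(int), n_theta - 1)
    hist = np.zeros((n_r, n_theta))
    np.add.at(hist, (r_bin, t_bin), 1)               # accumulate point counts per polar bin
    return (hist / len(points)).ravel()              # normalized descriptor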

2.2.3 Hu invariant moment features

Hu invariant moments are obtained by a nonlinear combination of the low-order (second- and third-order) normalized central moments of the multi-target optical remote sensing image. These features are invariant to translation, scaling, and rotation, are fast to compute, and identify larger objects in the images well. Table 1 lists some of the Hu invariant moment eigenvalues for the four classes of remote sensing images.

Table 1

Partial representation of Hu invariant moment features

Target image | Data 1 | Data 2 | Data 3 | Data 4 | Data 5 | Data 6 | Data 7
Ship | 2.82 × 10−3 | 3.94 × 10−6 | 6.79 × 10−9 | 6.43 × 10−9 | 4.24 × 10−17 | 1.27 × 10−11 | 2.61 × 10−18
Aircraft | 1.88 × 10−3 | 7.14 × 10−7 | 1.88 × 10−9 | 2.82 × 10−10 | 2.04 × 10−19 | 2.05 × 10−13 | 1.74 × 10−20
Automobile | 1.16 × 10−3 | 5.25 × 10−7 | 1.56 × 10−10 | 2.00 × 10−11 | 1.02 × 10−21 | 7.78 × 10−15 | 4.61 × 10−22
Oil tank | 2.19 × 10−3 | 1.50 × 10−8 | 1.64 × 10−11 | 3.07 × 10−10 | 2.11 × 10−0 | 2.02 × 10−14 | 5.01 × 10−21
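The seven Hu moments of a segmented target chip can be computed directly with OpenCV, as sketched below. The Otsu binarization used as preprocessing is an assumption; the paper does not state how the target chips were prepared before the moments were taken.

import cv2

def hu_moment_features(gray_chip):
    """Seven Hu invariant moments of a target chip (uint8 grayscale).
    They are built from 2nd- and 3rd-order normalized central moments and are
    invariant to translation, scale, and rotation."""
    # Assumed preprocessing: Otsu binarization of the segmented target region.
    _, binary = cv2.threshold(gray_chip, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()   # shape (7,), magnitudes like those in Table 1
    return hu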

2.3 Mathematical model of image information classification and recognition

Based on the multiple features of the multi-target optical remote sensing image information described above, the chaotic time series analysis method is used in the mathematical model to classify and recognize the multi-target optical remote sensing image information. The framework of the model is shown in Figure 2.

Figure 2: Framework of the mathematical analytical model established by the chaos analysis method.

Thus, any differential and topological invariants of the original, unknown mathematical model can be expanded and calculated in the reconstructed phase space, and a new mathematical model is established in the reconstructed m-dimensional phase space to realize the prediction, analysis, and guidance of the original unknown mathematical model [28]. The analysis flow chart according to chaos theory is shown in Figure 3.

Figure 3: Model building and analysis process.

First, the classification error rate of the multi-target optical remote sensing image classification is mapped into a set of probability density functions using a classifier channel mapping function. The m-dimensional vectors that form the reconstructed m-dimensional phase space are expressed as follows:

(3) $X_n = \left[x(n),\ x(n+\tau),\ \ldots,\ x(n+(m-1)\tau)\right]$.

In the formula, n = 1, 2, …, N, and each $X_n$ is a point of the multi-target optical remote sensing image in the m-dimensional phase space obtained by the reconstruction mapping; its nearest neighbor is $X_{\eta(n)}$. Here x denotes the univariate time series of the multi-target optical remote sensing image and $\tau$ is the delay; j and l are indices and $X_j$ is the corresponding phase-space vector. Taking the Euclidean distance as the distance scale, the distance $R_m(n)$ between the two points is:

(4) $R_m(n) = \left\| X_{\eta(n)} - X_n \right\|_2^{(m)} = \min_{j = N_0, \ldots, N,\ j \ne n} \left\| X_n - X_j \right\|_2 = \sqrt{\sum_{l=0}^{m-1} \left[ x(\eta(n)+l\tau) - x(n+l\tau) \right]^2}$.

In the chaotic mapping phase space, when the embedding dimension is increased from m to m + 1, the distance $R_{m+1}(n)$ between the point and its nearest neighbor becomes:

(5) $R_{m+1}(n) = \left\| X_{\eta(n)} - X_n \right\|_2^{(m+1)} = \sqrt{\sum_{l=0}^{m} \left[ x(\eta(n)+l\tau) - x(n+l\tau) \right]^2} = \sqrt{\left[R_m(n)\right]^2 + \left[ x(\eta(n)+m\tau) - x(n+m\tau) \right]^2}$.
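The delay embedding of formula (3) and the nearest-neighbor distances of formulas (4) and (5) can be computed as in the following sketch; the delay τ and embedding dimension m are free parameters here, and the brute-force neighbor search is an illustrative simplification.

import numpy as np

def delay_embed(x, m, tau):
    """Formula (3): X_n = [x(n), x(n + tau), ..., x(n + (m - 1) * tau)]."""
    x = np.asarray(x, dtype=float)
    N = len(x) - (m - 1) * tau
    return np.array([x[n:n + m * tau:tau] for n in range(N)])

def nearest_neighbor_distances(x, m, tau):
    """Formulas (4) and (5): distance R_m(n) to the nearest neighbor in dimension m,
    and the same pair's distance R_{m+1}(n) after adding the (m + 1)-th coordinate."""
    x = np.asarray(x, dtype=float)
    Xm = delay_embed(x, m, tau)
    N = len(x) - m * tau                       # points that also survive the (m + 1)-embedding
    Rm = np.empty(N)
    Rm1 = np.empty(N)
    for n in range(N):
        d = np.linalg.norm(Xm[:N] - Xm[n], axis=1)
        d[n] = np.inf                          # exclude the point itself
        eta = int(np.argmin(d))                # nearest-neighbor index eta(n)
        Rm[n] = d[eta]                         # formula (4)
        extra = x[eta + m * tau] - x[n + m * tau]
        Rm1[n] = np.sqrt(Rm[n] ** 2 + extra ** 2)   # formula (5)
    return Rm, Rm1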

How should the classification information be compared with the original information to judge whether the difference is significant, and hence whether the chaotic probability analysis mapping classification holds [29]? A batch of chaotic probability analyses of the mapping information of the multi-target optical remote sensing images yields discriminant statistics $Q_s$, whose mean $\overline{Q}_s$ is compared with the original value $Q_0$. In addition, the influence of the spread of these $Q_s$ values should be considered. For a fixed mean, the larger the standard deviation $\sigma_s$, the more scattered the $Q_s$ are [30]: some $Q_s$ may lie very close to $Q_0$, so the difference between $\overline{Q}_s$ and $Q_0$ is not significant; conversely, the smaller $\sigma_s$ is, the more significant the difference between $\overline{Q}_s$ and $Q_0$. Therefore, a difference saliency S can be defined to characterize the difference between the chaotic probability analysis mapping information of the multi-target optical remote sensing image and the original information:

(6) $S = \dfrac{\left|\overline{Q}_s - Q_0\right|}{\sigma_s}$.

In formula (6), $\overline{Q}_s$ represents the mean value of the discriminant statistic over the N batches of chaotic probability analysis of the multi-target optical remote sensing image mapping information, and $\sigma_s$ represents the standard deviation of this discriminant statistic, given by formula (7):

(7) $\sigma_s = \sqrt{\dfrac{1}{N-1}\sum_{i=1}^{N}\left(Q_i - \overline{Q}_s\right)^2}$.

Assuming that the $Q_s$ values of the mapping information of the multi-target optical remote sensing image follow a normal distribution, the probability density $p(Q_s)$ of the classification and recognition result of the multi-target optical remote sensing image is:

(8) $p(Q_s) = \dfrac{1}{\sqrt{2\pi}\,\sigma_s}\exp\left[-\dfrac{\left(Q_s - \overline{Q}_s\right)^2}{2\sigma_s^2}\right]$,

(9) $\displaystyle\int_{-\infty}^{+\infty} p(Q_s)\,\mathrm{d}Q_s = 1$.

To reject the chaotic probability analysis mapping classification of the multi-target optical remote sensing images, S must be large enough that the distribution of $Q_s$ is far from $Q_0$, or $Q_0$ lies outside most of the distribution of $Q_s$ [31]. Because the normal distribution $p(Q_s)$ of the classification and recognition results extends to positive and negative infinity in $Q_s$, $Q_0$ cannot be required to lie completely outside the $p(Q_s)$ distribution. The confidence level generally accepted by the scientific community is 95%; that is, in statistical tests, the probability of rejecting the chaotic probability analysis mapping classification is $\alpha = 5\%$. If the difference between $Q_0$ and $\overline{Q}_s$ exceeds a certain critical value $Q_c$, then:

(10) $p\left(\left|Q_0 - \overline{Q}_s\right| > Q_c\right) \le 0.05$.

Normalizing to the standard normal distribution ($\overline{Q}_s = 0$, $\sigma_s = 1$), the final classification criterion is as follows:

(11) $0.025 = \displaystyle\int_{z_2}^{+\infty} p(Q_s)\,\mathrm{d}Q_s = 1 - \int_{-\infty}^{z_2} p(Q_s)\,\mathrm{d}Q_s$.

In the formula, $z_2 = -z_1$ denotes the boundary between the rejection region and the confidence interval established by the mapping classification of the multi-target optical remote sensing images, and d is a multi-characteristic coefficient.
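A small sketch of the saliency test of formulas (6)–(11) follows: given the N batch statistics and the original value Q_0, it computes the mean, the standard deviation of formula (7), the saliency S of formula (6), and applies the two-sided 95% criterion; the critical value 1.96 is the standard normal quantile implied by formula (11).

import numpy as np

def difference_saliency(Q_batches, Q0):
    """Formulas (6) and (7): mean, standard deviation, and saliency S of the
    N batch discriminant statistics Q_s relative to the original value Q0."""
    Q = np.asarray(Q_batches, dtype=float)
    Q_mean = Q.mean()
    sigma_s = np.sqrt(np.sum((Q - Q_mean) ** 2) / (len(Q) - 1))   # formula (7)
    S = abs(Q_mean - Q0) / sigma_s                                # formula (6)
    return Q_mean, sigma_s, S

def reject_mapping_classification(Q_batches, Q0, z_crit=1.96):
    """Formulas (10) and (11): two-sided test at the 95% confidence level under the
    normality assumption of formula (8); reject when Q0 falls outside the central
    95% of the distribution of Q_s, i.e. when S exceeds the standard normal quantile."""
    _, _, S = difference_saliency(Q_batches, Q0)
    return S > z_crit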

3 Results

To verify the practical effect of the mathematical model of multi-target optical remote sensing image information classification and recognition constructed in this paper, an experimental test was carried out. The overall experimental scheme is as follows: optical remote sensing images obtained from the web are used as the experimental sample data and processed into a uniform format. The classification and recognition model based on the radial basis kernel function from literature [14] and the classification and recognition model from literature [15] are selected for comparative tests, with model robustness, model recognition efficiency, and the time consumed by classification and recognition as the experimental indicators. Higher robustness indicates better stability, higher recognition efficiency indicates a better recognition effect, and shorter recognition time indicates better overall performance. The experimental process is as follows.

3.1 Experimental setup

The algorithm is tested on a self-built remote sensing image database containing ships, aircraft, automobiles, and oil tanks [32], with 74 × 4 images in total; some of them are illustrated in Figure 4.

Figure 4: Sample target images from the remote sensing image database: (a) ships, (b) aircraft, (c) automobile, and (d) oil tank.

3.2 Validity analysis

3.2.1 Influence of training samples and cluster centers on the recognition rate

First, the numbers of images in the experimental training and test sets are determined. The training sets contained 68, 108, 148, and 188 images; the test set contained 108 images; the numbers of cluster centers were 10, 20, 30, and 40; and the number of pyramid layers was 3. The model is used to classify and recognize the multi-target optical remote sensing image information under these different training-sample and cluster-center settings. The resulting recognition rate curves are shown in Figure 5.

Figure 5: Average recognition rate for different numbers of training samples and cluster centers.

As can be seen from Figure 5, in which Tr represents the threshold parameter, when the number of training samples is fixed, the recognition rate curve reaches its maximum at 20 cluster centers; when the number of cluster centers is fixed, the average recognition rate reaches its maximum at 188 training samples [33]. Therefore, the recognition rate is highest when the number of training samples is 188 and the number of cluster centers is 20.

3.2.2 Influence of pyramid layer number on the recognition rate

Under the above optimal parameters, the image pyramid features (rather than the highest-level features alone) of the training samples and the test samples are used to train and test the model. The average recognition rates are shown in Table 2.

Table 2

Image recognition rates of pyramid

Pyramid layers | Eigenvector dimension | Average recognition rate (%)
1 | 20 | 95.90
2 | 20 + 80 | 97.80
3 | 20 + 80 + 320 | 98.70

Table 2 shows that the more pyramid layers are used, the higher the recognition rate; with three layers the recognition rate reaches 98.70% [34,35]. The average recognition rate of this model is above 95%, so it can effectively classify and recognize multi-target optical remote sensing image information.

3.3 Model performance analysis

To further analyze the performance of the model, seven multi-target optical remote sensing image information recognition experiments were conducted, comparing the proposed model with the models of literature [14] and literature [15].

3.3.1 Model robustness test

Robustness is the key to model survival in abnormal and dangerous situations. Statistical results of the robustness tests of the three models are shown in Figure 6.

Figure 6: Robustness test results of the three models.

Analysis of Figure 6 shows that, as the number of experiments increases, the broken line of the proposed model always lies above those of the analytical conceptual model and the data-based mechanistic (DBM)-based three-dimensional model. The highest robustness of the proposed model is 0.45 and the lowest is 0.1; the highest robustness of the analytical conceptual model is 0.43 and the lowest is 0.07; and for the DBM-based model the maximum robustness is 0.37 and the minimum is 0.05. This shows that the robustness of the proposed model is higher than that of the other two models. Robustness is an important parameter describing the stability of a model, so the experimental results show that the proposed model has the advantage of high stability in classification and recognition.

3.3.2 Model recognition efficiency test

The results of the recognition speed growth rate of the three models are calculated, and the results are shown in Figure 7.

Figure 7: Recognition speed growth rate test results of the three models.

Figure 7 shows the recognition speed growth rates of the three models. The broken line of the model in this paper lies clearly above those of the other two models. As the number of experiments increases, the maximum recognition speed growth rate of the model in this paper reaches 98%, that of the conceptual model reaches 91.8%, and that of the DBM-based three-dimensional model reaches only 91.5%, which shows that the recognition speed of the proposed model grows faster and the model is more efficient.

The three models are each used for classification and recognition four times, and the time-consumption data are collected. The results are shown in Tables 3–5.

Table 3

Recognition time consumption of the conceptual model

Number of multi-target optical remote sensing images | First run (s) | Second run (s) | Third run (s) | Fourth run (s)
50 | 8.11 | 9.22 | 8.03 | 10.25
100 | 13.45 | 13.12 | 13.05 | 13.08
150 | 17.25 | 16.49 | 16.02 | 15.16
200 | 21.27 | 21.34 | 21.16 | 21.59
250 | 25.22 | 26.82 | 26.46 | 27.36
300 | 32.45 | 33.12 | 33.44 | 35.45
350 | 39.36 | 40.45 | 40.54 | 40.73
400 | 50.24 | 50.65 | 50.45 | 50.18
Average time consumption | 25.91 | 26.40 | 26.14 | 26.72
Table 4

Recognition time consumption of the model in this paper

Number of multi-target optical remote sensing images | First run (s) | Second run (s) | Third run (s) | Fourth run (s)
50 | 1.12 | 1.22 | 1.03 | 1.25
100 | 3.25 | 3.16 | 3.15 | 3.08
150 | 7.25 | 7.49 | 7.02 | 7.16
200 | 10.26 | 10.38 | 10.26 | 10.69
250 | 15.2 | 15.86 | 15.46 | 15.36
300 | 19.45 | 20.13 | 20.64 | 19.45
350 | 25.36 | 25.48 | 25.94 | 25.03
400 | 30.25 | 30.69 | 29.16 | 29.15
Average time consumption | 14.01 | 14.30 | 14.08 | 13.89
Table 5

Recognition time consumption of the DBM-based 3D model

Number of multi-target optical remote sensing images | First run (s) | Second run (s) | Third run (s) | Fourth run (s)
50 | 4.15 | 4.22 | 5.03 | 5.22
100 | 9.15 | 9.12 | 9.09 | 9.09
150 | 13.25 | 13.21 | 13.25 | 13.78
200 | 17.27 | 17.26 | 17.16 | 17.29
250 | 20.22 | 21.56 | 21.36 | 21.75
300 | 25.65 | 26.58 | 26.48 | 26.48
350 | 30.36 | 31.45 | 30.15 | 31.58
400 | 36.24 | 35.64 | 36.45 | 30.59
Average time consumption | 19.53 | 19.88 | 19.87 | 19.47

Tables 3–5 show that, for the same experiments, the maximum average recognition time of the conceptual model is 26.72 s, that of the proposed model is 14.30 s, and that of the DBM-based three-dimensional model is 19.88 s. By comparison, the proposed model takes the shortest time and is the most efficient.

4 Conclusions

To solve the problems that the traditional multi-target optical remote sensing image classification and recognition model can only recognize targets of a specific category, has poor robustness, and offers low recognition efficiency and long recognition time, a mathematical model of multi-target optical remote sensing image information classification and recognition is designed. The mathematical model constructed in this paper mainly includes three stages: target segmentation and detection, feature extraction, and target recognition. When the number of training samples is 188 and the number of clustering centers is 20, the recognition rate of the proposed model reaches 98.70%, and the average recognition rate is above 95%; it can therefore effectively classify and recognize multi-target optical remote sensing image information, and the classification effect is remarkable. The maximum robustness of the model is 0.45 and the lowest is 0.1; both values are clearly higher than those of the traditional analytical conceptual model and the DBM-based three-dimensional model. The maximum recognition speed growth rate is 98% and the maximum average recognition time is only 14.30 s, indicating low time consumption and a fast-growing recognition speed, which reflects the superior classification and recognition performance of the model. The designed mathematical model for classification and recognition of multi-target optical remote sensing image information can realize multi-target classification and recognition; it improves the reliability and credibility of geological exploration, environmental and disaster monitoring, precision agriculture, and remote sensing image analysis, improves the quality and detail of surveys, which is of great significance in surveying, mapping, archaeology, and related fields, and raises the level of intelligence and automation of image processing, which is of great significance to the progress of modern society. In the next step, the recombination of multi-target optical remote sensing images can be studied further to improve the classification and recognition accuracy.

References

[1] Biswas D, Poria S, Patra SN. Analysis of different growth mechanisms from phenomenological consideration. J Interdisciplin Math. 2017;20(2):443–59. doi: 10.1080/09720502.2015.1123439.

[2] Elia M, Schipani D. Involutions, trace maps, and pseudorandom numbers. J Discrete Math Sci Cryptograph. 2018;21(3):735–47. doi: 10.1080/09720529.2018.1441962.

[3] Gao W, Wang W. New isolated toughness condition for fractional (g, f, n)-critical graph. Colloq Math. 2017;147(1):55–65. doi: 10.4064/cm6713-8-2016.

[4] Kang L, Du HL, Du X, Wang HT, Ma WL, Wang ML, et al. Study on dye wastewater treatment of tunable conductivity solid-waste-based composite cementitious material catalyst. Desalin Water Treat. 2018;125(2):296–301. doi: 10.5004/dwt.2018.22910.

[5] Chen H, Xie L, Feng W, Zheng L, Zhang Y. Topic segmentation on spoken documents using self-validated acoustic cuts. Soft Comput. 2015;19(1):47–59. doi: 10.1007/s00500-014-1383-9.

[6] Euhus DM. Understanding mathematical models for breast cancer risk assessment and counseling. Breast J. 2015;7(4):224–32. doi: 10.1046/j.1524-4741.2001.20012.x.

[7] Delfmann P, Breuker D, Matzner M, Becker J. Supporting information systems analysis through conceptual model query – the diagramed model query language (DMQL). Commun Assoc Inform Syst. 2015;37(24):473–509. doi: 10.17705/1CAIS.03724.

[8] Leng B, Zhang X, Yao M, Xiong Z. A 3D model recognition mechanism based on deep Boltzmann machines. Neurocomputing. 2015;151(151):593–602. doi: 10.1016/j.neucom.2014.06.084.

[9] Zyout I, Czajkowska J, Grzegorzek M. Multi-scale textural feature extraction and particle swarm optimization based model selection for false positive reduction in mammography. Comput Med Imaging Graph. 2015;46(16):95–107. doi: 10.1016/j.compmedimag.2015.02.005.

[10] Stasyuk AI, Goncharova LL. Mathematical models and methods of the analysis of computer networks of control of power supply of railways traction substations. J Autom Inform Sci. 2017;49(2):50–60. doi: 10.1615/JAutomatInfScien.v49.i2.50.

[11] Calvetti D, Cheng Y, Somersalo E. Uncertainty quantification in flux balance analysis of spatially lumped and distributed models of neuron–astrocyte metabolism. J Math Biol. 2016;73(6–7):1–27. doi: 10.1007/s00285-016-1011-7.

[12] Peluso E, Murari A, Gelfusa M, Lungaroni M, Talebzadeh S, Gaudio P, et al. On determining the prediction limits of mathematical models for time series. J Instrument. 2016;11(7):C07013. doi: 10.1088/1748-0221/11/07/C07013.

[13] Pan X, Wang J, Zhang X, Mei Y, Shi L, Zhong G. A deep-learning model for the amplitude inversion of internal waves based on optical remote-sensing images. Int J Remote Sens. 2018;39(3):607–18. doi: 10.1080/01431161.2017.1390269.

[14] Shaidurov GY, Kudinov D. System mathematical models for the formation of signals and synchronous interference with the use of pulsed non-explosive seismic sources. Siberian Fed Univ. 2015;8(4):442–53. doi: 10.17516/1997-1397-2015-8-4-442-453.

[15] Piacentino A, Gallea R, Cardona F, Lo Brano V, Ciulla G, Catrini P. Optimization of trigeneration systems by mathematical programming: influence of plant scheme and boundary conditions. Energy Convers Manage. 2015;104:100–14. doi: 10.1016/j.enconman.2015.03.082.

[16] Ferretti E. Some new findings on the mathematical structure of the cell method. Int J Math Models Methods Appl Sci. 2015;27(9):473–86.

[17] Sarbaz Y, Pourakbari H. A review of presented mathematical models in Parkinson’s disease: black- and gray-box models. Med Biol Eng Comput. 2016;54(6):1–14. doi: 10.1007/s11517-015-1401-9.

[18] Baumanns S, Jansen L, Selva-Soto M, Tischendorf C. Analysis of semi-discretized differential algebraic equation from coupled circuit device simulation. Comput Appl Math. 2015;34(3):933–55. doi: 10.1007/s40314-014-0157-4.

[19] Zhang Z, Xu H, Chen K, Shan P. Elastic resource provisioning system based on openstack cloud platform. Autom Instrument. 2017;54(78):593–5. doi: 10.1007/978-3-319-60753-5_8.

[20] Sakar MG, Saldır O, Akgül A. Numerical solution of fractional Bratu type equations with Legendre reproducing kernel method. Int J Appl Comput Math. 2018;4(5):126. doi: 10.1007/s40819-018-0562-2.

[21] Akgül A. A new method for approximate solutions of fractional order boundary value problems. Neural Parallel Sci Comput. 2014;22(1–2):223–37.

[22] Akgül A, Cordero A, Torregrosa JR. Solutions of fractional gas dynamics equation by a new technique. Math Methods Appl Sci. 2019;43(2):1349–58. doi: 10.1002/mma.5950.

[23] Akgül A. Reproducing kernel Hilbert space method based on reproducing kernel functions for investigating boundary layer flow of a Powell–Eyring non-Newtonian fluid. J Taibah Univ Sci. 2019;13(1):858–63. doi: 10.1080/16583655.2019.1651988.

[24] Baleanu D, Fernandez A, Akgül A. On a fractional operator combining proportional and classical differintegrals. Mathematics. 2020;8(3):360. doi: 10.3390/math8030360.

[25] Akgül A, Cordero A, Torregrosa JR. A fractional Newton method with 2αth-order of convergence and its stability. Appl Math Lett. 2019;98:344–51. doi: 10.1016/j.aml.2019.06.028.

[26] Akgül A. A novel method for a fractional derivative with non-local and non-singular kernel. Chaos Solit Fract. 2018;114:478–82. doi: 10.1016/j.chaos.2018.07.032.

[27] Akgül A. New reproducing kernel functions. Math Problems Eng. 2015;2015:158134. doi: 10.1155/2015/158134.

[28] Song CJ, Han ZZ, Shi L. Simulation of high speed artillery projectile cluster target resolution. Comput Simulat. 2016;33(7):20–3. doi: 10.2991/mcei-15.2015.46.

[29] Stavrinidis C, Newerla A. Generation of simplified spacecraft mathematical models with equivalent dynamic characteristics. J Spacecraft Rockets. 2015;32(1):117–25. doi: 10.2514/3.26583.

[30] Caraballo T, Herrera-Cobos M, Marín-Rubio P. An iterative method for non-autonomous nonlocal reaction-diffusion equations. Appl Math Nonlin Sci. 2017;2(1):73–82. doi: 10.21042/AMNS.2017.1.00006.

[31] Gao W, Wang WF. The fifth geometric-arithmetic index of bridge graph and carbon nanocones. J Differ Equ Appl. 2017;23(1–2SI):100–9. doi: 10.1080/10236198.2016.1197214.

[32] Lakshminarayana G, Vajravelu K, Sucharitha G, Sreenadh S. Peristaltic slip flow of a Bingham fluid in an inclined porous conduit with Joule heating. Appl Math Nonlin Sci. 2018;3(1):41–54. doi: 10.21042/AMNS.2018.1.00005.

[33] Mi C, Wang J, Mi W, Huang Y, Zhang Z, Yang Y, et al. Research on regional clustering and two-stage SVM method for container truck recognition. Discrete Contin Dyn Syst Ser S. 2019;12(4–5):1117–33. doi: 10.3934/dcdss.2019077.

[34] Ting MY. Definite integral automatic analysis mechanism research and development using the “find the area by integration” unit as an example. Eurasia J Math Sci Technol Educ. 2017;13(7):2883–96. doi: 10.12973/eurasia.2017.00724a.

[35] Xu S, Fan K. A silent revolution: from sketching to coding – a case study on code-based design tool learning. Eurasia J Math Sci Technol Educ. 2017;13(7):2959–77. doi: 10.12973/eurasia.2017.00730a.

Received: 2020-04-01
Revised: 2020-09-03
Accepted: 2020-09-06
Published Online: 2020-12-10

© 2020 Haiqing Zhang and Jun Han, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
