Eigen Solution of Neural Networks and Its Application in Prediction and Analysis of Controller Parameters of Grinding Robot in Complex Environments



Introduction
Grinding is one of the most environment-related manufacturing processes in complex environments. These processes are often scattered, inefficient, contaminated, labor-intensive, and, above all, field-intensive, and they account for 12-15% of overall manufacturing costs and 20-30% of total manufacturing time [1]. The surface roughness of the casting block and the prediction of the grinding force are important aspects of optimizing and monitoring the grinding process. The grinding process is controlled by the grinding robot servo control system. Neural networks can help reduce data dimensionality, and optimizing neural network training can enhance the learning and adaptation performance of robots [2]. With their powerful approximation ability, neural networks have been utilized in many promising fields, such as modeling and identification of complex nonlinear systems, optimization, and automatic control. The components integrated into a complex system may interact with each other and complicate control [3]. Consequently, many grinding robot servo control systems are constructed with neural networks.
Adaptive neural network controllers have been studied in many respects. Approximation-based controllers have been designed for induction motors with input saturation [4] and for a 3-DOF robotic manipulator subject to backlash-like hysteresis and friction [5]. Parameter-based controllers have been designed to identify unknown robot kinematic and dynamic parameters for robot manipulators with finite-time convergence [6] and to perform haptic identification for uncertain robot manipulators [7]. Predictive controllers have been designed for predicting the behaviour of electronic circuits [8], trajectory tracking [9], trajectory tracking of underactuated surface vessels [10], attitude tracking control of a quad tilt rotor aircraft [11], and driving the tracking error to converge to a neighborhood of the origin [12]. A federal Kalman filter based on neural networks has been used in the velocity and attitude matching of transfer alignment [13]. An iterative neural dynamic programming method has been provided for affine and nonaffine nonlinear systems, using system data rather than accurate system models [14]. Our work inherits the merits of these adaptive neural network controllers.
Different neural network controller frameworks have been constructed for different nonlinear systems. A general framework of the nonlinear recurrent neural network was proposed for solving the online generalized linear matrix equation with a global convergence property [15]. Another framework, which combines the modified frequency slice wavelet transform and convolutional neural networks, was proposed for automatic atrial fibrillation beat identification [16]. A stability criterion for an impulsive stochastic reaction-diffusion cellular neural network framework was derived via fixed-point theory [17]. A class of coupled fractional-order neural networks consisting of N identical subnetworks was shown to have (r + 1)^n locally Mittag-Leffler stable equilibria [18]. Our work was implemented on the basis of these frameworks.
The concept of convolutional neural networks made the framework implementation of multilayer neural networks possible [19]. A pretraining method for hidden layer node weights using an unsupervised restricted Boltzmann machine was proposed to solve the overfitting problem in gradient descent training of multilayer neural networks [20]. The Dropout strategy was proposed as a high-efficiency method of training deep neural network weights with nonsaturating neurons; it can prevent overfitting and improve accuracy [21]. Reinforcement learning theory was introduced into deep neural networks to improve their reasoning ability [22]. On this basis, these developments were systematically described and formally named deep learning [23]. Improved variants of deep learning theory [24] have also appeared. Deep learning theory has been widely used because of its high efficiency and high accuracy [25]. The Go program AlphaGo was constructed using deep neural networks and successfully defeated human Go experts [26]. Deep learning theory has achieved notable success in the adaptive output feedback control of unmanned aircraft [27], image quality assessment [28], image character recognition in natural environments [29], image change detection [30], image classification [31], and robotic guidance [32]. A semiactive nonsmooth control method with deep learning was proposed to suppress the harmful effect of surface motion on building structures [33]. Deep neural networks can therefore be used in controllers to process massive amounts of unsupervised data in complex scenarios. Neural networks have been widely studied and applied because of their ability to solve complex linearly inseparable problems; their deep learning ability in particular has significant advantages in cross-domain big data analysis with unstructured and unidentified patterns. Further, the conditions for the existence and global exponential synchronization of almost periodic solutions of delayed quaternion-valued neural networks were investigated [34], and the function projective synchronization problem of neural networks with mixed time-varying delays and uncertain asymmetric coupling was investigated [35]. Neural networks have made great achievements both in determining success or failure and in improving performance.
However, the robot dynamic model is often poorly known due to the complexity of the robot mechanism, let alone the various uncertainties, such as parametric uncertainties or modeling errors, that exist in the robot dynamics. It is therefore a key problem to find the relationship between changes in neural network structure and changes in the input and output environments, and thereby their mutual influence. What are the relationships between changes in neural network structure and changes in the input and output environments? How do they influence each other? These are the fundamental credit assignment problems [36]. This is a hotspot in current research on the interaction of neural networks with their environments, especially the interaction of input and output data with neural networks [37]. The following problems remain to be solved: (1) the quantitative description of input environments, the macrostructure of neural networks, and output environments; (2) the quantitative description of the relationships among input environments, the macrostructure of neural networks, and output environments; and (3) experimental verification of these relationships. Solving these problems provides support for the wider application of neural networks.
To solve these problems, our work focuses on the eigen solution of neural networks for robots in complex environments, including kinematic model parameter uncertainties, dynamic model parameter uncertainties, and external disturbances, by directly mapping the morphological subfeatures to the controller parameters of the grinding robot. The concept and the eigen solution theorems of neural networks are studied in the second part of the paper. The third part applies the neural network eigen solution rule to the prediction and analysis of the controller parameters of the grinding robot, verifying the correctness of the theory from a practical point of view. The last part of the paper summarizes our research work. The main contributions of this paper include the following.
(1) To describe the relationship between changes in neural networks, changes in input environments, and changes in output environments, this paper constructs the neural network eigen solution theory, which defines the concept of

Eigen Solution of Neural Networks
In our study, we seek the essential relationship between changes in neural network structure and changes in the input and output environments, and their mutual influence, in the presence of kinematic model parameter uncertainties, dynamic model parameter uncertainties, and external disturbances. We first define neural networks from the perspective of the input and output environments, and a consistent approximation theorem is proposed on the basis of this concept. The concept of a solution of neural networks is then defined, along with the corresponding concepts of isomorphism solution, eigen solution, complete solution, and partial solution. With these concepts and this theorem, the eigen solution existence theorem, the consistency theorem of complete and partial solutions, and the no solution theorem are proposed. The relations among the definitions and theorems are described in detail (see Figure 1).
Definition 1 (neural networks). The neural networks are defined as (Input, Net, Output) (see Figure 2). Input = {S_i} is the input object set. Net = {W_{l,k,ij}, b_l} comprises the network layers (l > 0, i > 0, j > 0), where W_{l,k,ij} is the k-th convolutional kernel [i × j] of the l-th layer. If the network does not use convolution, let k = 1; then i is the i-th element in the (l − 1)-th layer, j is the j-th element in the l-th layer, and W_{l,ij} is the weight between the i-th element and the j-th element. b_l is the bias in the l-th layer. Output = {T_i} is the output target vector set. ψ is the feature acquisition function, f_l is the neural network function, and f is the target function corresponding to the target vector set {T_i}. Let the input object vector set {S_i}_1 have the solution {Net_tr}_1 corresponding to the neural networks {W_{l,k,ij}, b_l}.
The feature data object vector set {S_i'}_r is given as {S_i'}_r = ψ({S_i}_1), where ψ is the eigen acquisition function; that is, the original input object vector set {S_i}_1 is transformed into the feature data object vector set {S_i'}_r. If {S_i'}_r has the solution {Net_tr}_r, the solution {Net_tr}_r is called an eigen solution of {Net_tr}_1, and the eigen solutions {Net_tr}_r and {Net_tr}_1 are isomorphism solutions of each other. In this study, we focus on the neural network eigen solution.

Theorem 1 (consistent approximation theorem). Let the output objective vector set {T_i} have the corresponding objective function f. The residual function e_l between the neural networks and the output target vector is defined accordingly. If ∫_R g(x) dx ≠ 0 and f and f_l are continuous, then

lim_{n→∞ or l→∞} e_l = 0,

that is, e_l converges and monotonically declines.

Proof
(1) If the neural networks are feedforward neural networks and the hidden layer number is l = 1, then Theorem 1 is proved by the consistent approximation theorem of feedforward neural networks proposed in [38]. (2) If the neural networks are feedforward neural networks and the hidden layer number is l ≠ 1, then f_l is given accordingly, and the converted problem is consistent with the consistent approximation theorem of feedforward neural networks proposed in [39]; hence, Theorem 1 is true. (3) If the neural networks are recurrent neural networks NET, the feedback structure of the recurrent neural networks is expanded in the time dimension. Let the network NET run from time t_0; every time step of network NET is expanded into one of the layers of the feedforward neural networks NET*, and each layer has exactly the same activity values as the corresponding time step of network NET. For any τ ≥ t_0, the connection weight w_ij between the neuron or external input j and neuron i of network NET is copied into w_ij(τ + 1) between the neuron or external input j in the τ-th layer and neuron i in the (τ + 1)-th layer of network NET* (see Figure 3). As time goes by, the depth of NET* can become infinitely deep; that is, the recurrent neural networks NET are equivalent to feedforward neural networks as n → ∞, which coincides with the consistent approximation theorem of feedforward neural networks proposed in [38] and thus also validates Theorem 1.
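The unrolling argument in case (3) can be illustrated numerically: copying the recurrent weights into successive feedforward layers reproduces the recurrent activations exactly. A minimal sketch, where the network sizes and the tanh activation are illustrative assumptions rather than the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))   # recurrent weights (neuron -> neuron)
U = rng.normal(size=(3, 2))   # input weights (external input -> neuron)

def run_recurrent(x_seq, h0):
    """Run the recurrent network NET for len(x_seq) time steps."""
    h = h0
    for x in x_seq:
        h = np.tanh(W @ h + U @ x)
    return h

def run_unrolled(x_seq, h0):
    """NET*: one feedforward layer per time step, weights copied from NET."""
    layers = [(W.copy(), U.copy()) for _ in x_seq]  # w_ij copied into layer tau+1
    h = h0
    for (W_tau, U_tau), x in zip(layers, x_seq):
        h = np.tanh(W_tau @ h + U_tau @ x)
    return h

x_seq = [rng.normal(size=2) for _ in range(5)]
h0 = np.zeros(3)
# The unrolled feedforward network has exactly the same activity values
assert np.allclose(run_recurrent(x_seq, h0), run_unrolled(x_seq, h0))
```

Because every layer of NET* carries an exact copy of the recurrent weights, the two runs agree step by step, which is the equivalence the proof relies on.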
Theorem 2 (eigen solution existence theorem). If the neural networks {W_{l,k,ij}, b_l} have a solution {Net_tr}, then the neural networks {W_{l,k,ij}, b_l} have an eigen solution {Net_tr}_r.
Proof. Take the eigen acquisition function ψ(x) = x; then {S_i'}_r equals the original input object vector set, and so {Net_tr}_r is obtained as the eigen solution. Proof finished.

Theorem 3 (consistency theorem of complete and partial solutions). The consistency theorem of complete and partial solutions shows that the output traits have a mutually reinforcing effect under joint action: the more output traits are involved in prediction, the more output traits can be predicted.
Proof. The target function corresponding to the output target vector set {T_i}_Part is f, and the partial solution is {Net_tr}_Part corresponding to the mapping function f_l of the neural networks {W_{l,k,ij}, b_l}. The solution exists, so the approximation e_l = 0 holds, the convergence and monotone decrease of e_l also hold, and f and f_l are continuous. When the output target vector set is {T_i}_Max, the target function is still f, and the complete solution is {Net_tr}_Max corresponding to the mapping function f_l of the neural networks {W_{l,k,ij}, b_l}, but the number of target vector set nodes increases (see Figure 4).
Therefore, the number of effective hidden nodes increases. According to the consistent approximation theorem, e_l = 0 is approximated more strictly; that is, the trained neural networks {W_{l,k,ij}, b_l} with the input object vector set {S_i} can also yield the complete solution {Net_tr}_Max.
Proof finished.
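The construction in the proof of Theorem 2, taking the eigen acquisition function ψ(x) = x, can be checked with a toy experiment: since the identity map leaves the training data unchanged, deterministic training yields the same solution for the original and the feature-transformed inputs. The linear unit and learning rate below are illustrative assumptions, not the paper's network:

```python
import numpy as np

def train(S, T, epochs=200, lr=0.1):
    """Deterministic gradient descent on a linear unit (an illustrative 'Net')."""
    w = np.zeros(S.shape[1])
    for _ in range(epochs):
        w -= lr * S.T @ (S @ w - T) / len(T)
    return w

psi = lambda x: x                      # eigen acquisition function psi(x) = x
S = np.array([[0., 1.], [1., 0.], [1., 1.]])
T = np.array([1., 2., 3.])

w_orig = train(S, T)                   # solution {Net_tr}
w_eig = train(psi(S), T)               # eigen solution Net_tr^r
assert np.allclose(w_orig, w_eig)      # identical, since psi(S) == S
```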
Theorem 4 (no solution theorem). If the trained neural networks {W_{l,k,ij}, b_l} with the input processing object vector set {S_i} have no complete solution, then the trained neural networks {W_{l,k,ij}, b_l} with the input processing object vector set {S_i} have no partial solution. The no solution theorem shows that if the analyzed data objects are random and exhibit no nonrandom trait, then the prediction of the deep learning controller is also random; the controller cannot extract a nonrandom rule from random inputs. In other words, the prediction of a deep learning controller cannot come from nothing.
Proof. When the output target vector set is {T_i}_Max, the target function is f, and the complete solution corresponding to the mapping function f_l of the neural networks {W_{l,k,ij}, b_l} does not exist, so e_l = 0 does not hold.
The target function corresponding to the target vector set {T_i}_Part is f, and the input processing object vector set for training the neural networks {W_{l,k,ij}, b_l} is {S_i}. Here, the number of target vector set nodes is reduced, and the number n of effective nodes in the hidden layer is reduced, so e_l = 0 holds even less strictly; that is, the trained neural networks have no partial solution. Proof finished.

Prediction and Analysis of Controller Parameters of Grinding Robot
3.1. Grinding Robot Controller Model. In the grinding robot servo control system, a dynamic real-time adaptive positioning mechanism driven by the machine vision system is added to the traditional robot servo control system. Our grinding robot controller model describes the closed-loop control process for robot grinding (see Figure 5). The dynamic real-time adaptive control method of the grinding robot works as follows: the image I_t of the processed casting block surface is obtained using the vision measurement and positioning system at time t. L_t and W_t are the length and width of the corresponding defective area waiting to be handled. From these, an appropriate knowledge feature value S_t is computed. In our research, we set S_t = L_t × W_t × F_flat, where F_flat is the roughness of the casting block surface.
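The knowledge feature value S_t = L_t × W_t × F_flat defined above can be computed directly; a minimal sketch, with the numeric values purely illustrative:

```python
def knowledge_feature(L_t, W_t, F_flat):
    """S_t = L_t * W_t * F_flat: defective-area size scaled by surface roughness."""
    return L_t * W_t * F_flat

# A defective area 12 mm x 8 mm with roughness factor 0.4 (illustrative values)
S_t = knowledge_feature(12.0, 8.0, 0.4)
assert abs(S_t - 38.4) < 1e-9
```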
The speed v_p and position p of the robotic arm end, the feed speed Vc of the grinding wheel translational motion, the peripheral speed Vw of the grinding wheel rotational motion, the axial displacement f_a of the casting block motion, and the radial displacement f_r of the wheel motion are obtained at time t from the feedback information carried by the real-time knowledge feature value S_t.
The speed of the robotic arm end coincides with the speed of the grinding wheel, and the coefficients k_1-k_4 are determined by experiment based on the casting block material.
The robot controller achieves adaptive control through dynamic real-time feedback information, wherein F_r is the reference force, q_r and q_{r−n} are the joint angles, q̇_r and q̇_{r−n} are the robot joint angular velocities, τ is the joint driving moment, f is the force applied to the casting block by the end of the robot, q_m is the feedback joint angle, and q̇_m is the feedback joint angular velocity.
The working process algorithm of the grinding robot controller model describes the closed-loop control process (see Figure 6).
First, the surface image I of the block is acquired with the vision measurement and positioning system, and then the geometrical features of image I are extracted with the same system. By judging the quality of the casting block, the grinding decision is made. When the surface of the block does not meet the quality requirement, the robot grinding traits are derived from the geometrical features of image I by the robot grinding controller in the grinding robot servo control system. Finally, the grinding robot executes the grinding operation with the robot grinding traits, and the surface image I of the block is fed back to the vision measurement and positioning system to form a closed loop.
The working process algorithm describes the specific control method of the grinding robot controller (see Figure 7).
Figure 4: Changing of the neural network target vector set.

After the geometrical features of image I are obtained, the robot grinding traits are generated by the grinding robot servo control system in the controller, which contains the prediction model of the trained deep neural networks. Then, the joint angles are computed from the robot grinding traits with the kinematics module, and the robot joint angular velocities are computed from the joint angles with the force control module. The grinding robot executes the grinding operation with the robot grinding traits, joint driving moments, and joint angular velocities.

Prediction Model of Controller Parameters of the Grinding Robot

The research object of our study is the robot grinding process for the casting block of an engine. The block is one of the important parts of the engine, and the quality of the casting block directly affects automobile performance. With the development of automobile engine technology, high dimensional accuracy and mechanical performance are required of a high-quality casting block. The automobile engine under study is a 1.5 L engine; the maximum outline dimension of the block is 400 mm × 320 mm × 253 mm, its weight is 38 kg, and its material is HT300. The controller parameters of the grinding robot that affect grinding efficiency and quality include the feed speed Vc of the grinding wheel translational motion, the peripheral speed Vw of the grinding wheel rotational motion, the axial displacement f_a of the workpiece motion of the block, and the radial displacement f_r of the wheel motion (see Figure 8). The specific controller parameters of the grinding robot are the output data, and their type and number are related to the machining method. The block is located on the bracket of the track, while the grinding wheel is located on the actuator side of the robot. During the process, the workpiece is fixed, whereas the grinding wheel moves.
According to the DIN EN ISO 4288 and ASME B46.1 standards, the input parameters are the features Fin = (Ra, Rz, Ry) of the workpiece surface morphology: Ra is the arithmetical mean deviation of the workpiece surface morphology, Rz is the ten-point height of irregularities of the workpiece surface morphology, and Ry is the maximum height of the workpiece surface morphology. The value of Ra of the workpiece surface after grinding ranges from 0.01 μm to 0.8 μm [40]. The features (Ra, Rz, Ry) are statistical extracts and dimensionality reductions of the surface morphology and cannot characterize all of it. They are only empirical descriptions of the surface morphology and cannot characterize its essential features; they are only statistical averages of the surface morphology, with considerable information loss. The DIN EN ISO 4287 and ASME B46.1 standards give a more explicit and more informative description of the surface morphology using four features, namely, surface rugosity, standard deviation, skewness, and kurtosis [39, 41, 42], presented in (10)-(13). But even these features do not operate on all pixels of the surface morphology and cannot give significant and systematic descriptive statistics.
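For intuition, two of the standard roughness parameters can be computed from a sampled height profile. A minimal sketch: the formulas follow the common textbook readings of Ra (arithmetical mean deviation from the mean line) and Ry (maximum peak-to-valley height), and the profile values are purely illustrative:

```python
import numpy as np

def Ra(profile):
    """Arithmetical mean deviation of the profile from its mean line."""
    z = np.asarray(profile, dtype=float)
    return np.mean(np.abs(z - z.mean()))

def Ry(profile):
    """Maximum height: highest peak to deepest valley over the evaluation length."""
    z = np.asarray(profile, dtype=float)
    return z.max() - z.min()

profile = [0.1, -0.1, 0.2, -0.2, 0.1, -0.1]   # illustrative sampled heights (um)
assert abs(Ra(profile) - 0.13333333) < 1e-6
assert abs(Ry(profile) - 0.4) < 1e-9
```

As the surrounding text notes, such scalar parameters average over the whole profile, which is exactly the information loss the moment subgraphs below are meant to avoid.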
To solve the above problems, we improve the definitions of surface rugosity, standard deviation, skewness, and kurtosis in [26-28]. On the one hand, we give a unified definition of the surface morphological features with definite mathematical meanings; the features correspond to the first, second, third, and fourth moments, respectively. On the other hand, the definitions of the features are extended to all pixels of the surface morphological image, which reduces the information loss to an acceptable extent while extracting the essential features at the same time. The features are defined in (14)-(17) by improving the method of calculating depth from gray level [43].
Here, γ is the reflectivity of the block surface to the incident light, In is the intensity at the block surface, and In × cos α is the vertical component of that intensity. f is the focal length of the camera, and u_A is the object distance from the camera to the block surface. g_ij is the gray value of the pixel of the block surface image I at position (i, j), and I_ij is the depth value of that pixel. A and B are the maximum values of the coordinates i and j; μ is the average depth of all pixels of the block image, and σ is the mean square deviation of the depth of all pixels of the block image. M_ij,k is the k-th moment of the depth of the pixel at position (i, j). M_ij,1, the first moment, denotes the rugosity of the surface image at position (i, j); M_ij,2, the second moment, represents the standard deviation; M_ij,3, the third moment, signifies the skewness; and M_ij,4, the fourth moment, indicates the kurtosis of the surface image at position (i, j).
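Since equations (14)-(17) are not reproduced here, the following sketch only illustrates one plausible reading of the construction: recover a depth map from the gray levels, then form the k-th standardized moment at every pixel. Both the depth-from-gray model and the per-pixel moment M_ij,k = ((I_ij − μ)/σ)^k are assumptions for illustration, not the paper's exact formulas:

```python
import numpy as np

def moment_subgraphs(gray, gamma=0.65, In=1000.0, alpha=0.0, scale=1.0):
    """Sketch of the moment subgraphs: depth from gray level, then per-pixel
    standardized moments. The depth model (gray divided by the reflected
    intensity gamma * In * cos(alpha)) is an illustrative assumption."""
    g = np.asarray(gray, dtype=float)
    depth = scale * g / (gamma * In * np.cos(alpha))   # assumed depth-from-gray model
    mu = depth.mean()                                  # average depth of all pixels
    sigma = depth.std()                                # mean square deviation of depth
    z = (depth - mu) / sigma
    # k = 1..4: rugosity, standard deviation, skewness, kurtosis subgraphs
    return {k: z ** k for k in (1, 2, 3, 4)}

gray = np.arange(16.0).reshape(4, 4)    # illustrative 4 x 4 gray-level image
M = moment_subgraphs(gray)
assert abs(M[1].mean()) < 1e-9          # standardized first moment averages to 0
assert abs(M[2].mean() - 1.0) < 1e-9    # standardized second moment averages to 1
```

Under this reading, averaging M_ij,3 and M_ij,4 over all pixels recovers the classical skewness and kurtosis of the depth distribution, while the per-pixel subgraphs retain the spatial information that the scalar parameters discard.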
M_ij,1, M_ij,2, M_ij,3, and M_ij,4 are called the subgraphs of rugosity, standard deviation, skewness, and kurtosis of the generalized surface morphology, and they form the basis of this study.

Grinding Robot Prediction Algorithm of Controller Parameters

Grinding parameter prediction is completed by the trained deep neural networks (see Figure 9), which are part of the grinding robot controller. The surface image I of the part of the block is analyzed by the algorithm, and the geometrical features of the obtained surface image are extracted. In the constructed model, the surface roughness of the block surface is obtained by analyzing the surface of the workpiece, including the corresponding moment features. All the moment features of the image are input into the grinding controller, which describes the roughness of the workpiece surface and determines the feed speed Vc, the peripheral speed Vw, the axial displacement f_a, and the radial displacement f_r of the grinding robot actuator.
There is a complex nonlinear relationship between the rugosity M_ij,1, standard deviation M_ij,2, skewness M_ij,3, and kurtosis M_ij,4 on one side and the feed speed Vc, peripheral speed Vw, axial displacement f_a, and radial displacement f_r on the other, as expressed in (18). This nonlinear relationship is described using deep neural networks.
Further, we give the process of generating the deep neural networks (see Figure 10). The feature subgraphs M_ij,k of the block surface image I are generated after the surface image I of the ground part of the block is processed. The initial values of the weights and biases of the deep neural networks are set. The deep neural networks are trained using the (I, Fouts) training set, which has been labeled with empirical knowledge; I denotes the block images and Fouts the corresponding output grinding features. The trained deep neural networks are then fine-tuned to obtain one stage result. This iterative process continues until all the data in the training set have been used, yielding the well-trained deep neural networks.
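The iterative training process above can be sketched with a toy stand-in: minibatches of labeled (input, trait) pairs drive gradient updates until the training set is exhausted, and the training error decreases, consistent with Theorem 1. The network size, learning rate, and synthetic data below are illustrative assumptions, not the paper's tiny-dnn model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for the (I, Fouts) training set: 4 feature inputs -> 4 grinding traits
X = rng.normal(size=(40, 4))                 # e.g. flattened moment features (illustrative)
Y = np.tanh(X @ rng.normal(size=(4, 4)))     # labeled traits standing in for (Vc, Vw, f_a, f_r)

# One-hidden-layer MLP; sizes and learning rate are illustrative assumptions
W1, b1 = rng.normal(size=(4, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 4)) * 0.5, np.zeros(4)

def loss():
    H = np.tanh(X @ W1 + b1)
    return np.mean((H @ W2 + b2 - Y) ** 2)

loss_before = loss()
lr, batch = 0.01, 4                          # cf. minibatch training in the paper
for epoch in range(300):                     # iterate until the training set is exhausted
    for i in range(0, len(X), batch):
        xb, yb = X[i:i + batch], Y[i:i + batch]
        H = np.tanh(xb @ W1 + b1)            # forward pass
        out = H @ W2 + b2
        d_out = 2 * (out - yb) / len(xb)     # gradient of the squared error
        d_H = d_out @ W2.T * (1 - H ** 2)    # backpropagate through tanh
        W2 -= lr * H.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * xb.T @ d_H;  b1 -= lr * d_H.sum(0)

assert loss() < loss_before                  # training error decreases, cf. Theorem 1
```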
Our proposed model has multiple features and multiple targets, which makes it complicated. It needs to consider the correlations among rugosity, standard deviation, skewness, and kurtosis and their influence on the deep neural networks; the correlations among the feed speed Vc, peripheral speed Vw, axial displacement f_a, and radial displacement f_r and their influence on the deep neural networks; and, further, the correlations between the former group and the latter. The assumptions of the deep neural networks should also be respected, namely, that the network nodes are independent of each other, and overfitting phenomena that violate this assumption should be avoided.
The inputs of our model are several feature subgraphs rather than a single graph, and the model predicts the grinding traits rather than merely classifying a given object. Thus, the model processes several feature subgraphs to predict several grinding traits in a multi-input, multioutput relationship, which differs from the one-to-one relationship of single predetermined object classification.

Grinding Robot Prediction Algorithm of Controller Parameters

The images of the block surface after grinding are taken using the vision system, and all the images are labeled with the corresponding grinding trait parameter values, as shown in (17). There are 400 labeled training data, measured experimentally and labeled according to the years of working experience of skilled technicians.
In (14), g_ij is the gray value of the block surface image; the reflectance γ of the block surface is 0.65; the focal length f is 50 mm; the object distance to the block surface is 0.5 m; α is 0 because a uniform light source is used; and the intensity In at the block surface is 1000 lx. The extended 3D shape of the block surface and its features are obtained by (17). Gray, Ra, M_1, M_2, M_3, and M_4 are the gray level, roughness, rugosity, standard deviation, skewness, and kurtosis; all these features are the inputs of the deep neural network controller.
We obtain the correlations between the input feature parameters of the neural network controller (see Figure 11). It can be seen that the correlations among the parameters are very complex, with both positive and negative correlations; the degrees of correlation also differ and cannot be expressed with an analytical formula.
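Pairwise correlations of the kind shown in Figure 11 are typically summarized with a Pearson correlation matrix; a minimal sketch, where the synthetic columns stand in for features such as Gray, Ra, and the moments (purely illustrative data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative columns standing in for three input features over 400 samples
gray = rng.normal(size=400)
features = np.stack([
    gray,
    0.7 * gray + 0.3 * rng.normal(size=400),   # positively correlated with gray
    -0.5 * gray + 0.5 * rng.normal(size=400),  # negatively correlated with gray
])

R = np.corrcoef(features)            # Pearson correlation matrix, cf. Figure 11
assert np.allclose(np.diag(R), 1.0)  # each feature correlates perfectly with itself
assert R[0, 1] > 0 and R[0, 2] < 0   # mixed positive and negative correlations
```

The off-diagonal entries take both signs and varying magnitudes, which matches the paper's observation that the dependencies cannot be captured by a single analytical formula.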
The correlations between the output parameters of the neural network controller, which act as the robot grinding traits, are also obtained (see Figure 12). It can be seen that these correlations are likewise very complicated, with both positive and negative correlations; the degrees of correlation also differ and cannot be expressed with an analytical formula.

Figure 9: Prediction algorithm framework of controller parameters of grinding robot.

Predicted Result Analysis of Controller Parameters of the Grinding Robot

In the experiment, the training of the deep learning neural networks for block grinding finished within 78 hours and 32 minutes in a computing environment of Windows Server 2010, an Intel Core i5-4308U CPU at 2.80 GHz, and 8.00 GB of RAM. The prediction time of the trained deep learning neural networks for block grinding is within 2 seconds.
In our controller, the model is implemented on the deep learning open-source framework tiny-dnn [44]. In the deep learning neural network prediction training model, the minibatch_size is 4, the num_epochs is 3000, the activation function is tanh, and its value ranges from −0.8 to +0.8. In the input layer, there are 400 labeled input data, and the default normalized size n_length × n_width is [32 × 32]. In the feature extraction layer, convolution and sampling are performed for each feature subgraph; the convolution kernel is [5 × 5], and the sampling kernel is [2 × 2]. In the fuzzy analysis layer, the number of jump positions n_s is 10 and the initial value of the quantum interval θ_r^j is 0.1; the output layer is fully connected. The inputs include the subgraphs of rugosity M_ij,1, standard deviation M_ij,2, skewness M_ij,3, and kurtosis M_ij,4 of the block surface morphology, and each feature subgraph works independently. The full connection between the fuzzy quantum analysis layer and the output layer of the deep neural network is processed with all the feature subgraphs. The outputs are the feed speed Vc, the peripheral speed Vw, the axial displacement f_a, and the radial displacement f_r.
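For the sizes quoted above, the feature-map dimensions can be checked arithmetically. The sketch assumes a valid convolution with stride 1 and non-overlapping sampling, since padding and stride are not stated in the text:

```python
def conv_out(n, k):
    """Output side length of a valid convolution with stride 1 (assumed)."""
    return n - k + 1

def pool_out(n, k):
    """Output side length of non-overlapping k x k sampling (assumed)."""
    return n // k

n = 32                 # normalized input subgraph, 32 x 32
n = conv_out(n, 5)     # 5 x 5 convolution kernel -> 28 x 28
assert n == 28
n = pool_out(n, 2)     # 2 x 2 sampling kernel -> 14 x 14
assert n == 14
```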

Deep Learning Neural Network Training Results.
The training results of the deep learning neural network for the block surface controller parameters of the grinding robot are obtained as the number of samples increases (see Figure 13).
The accuracy of all the output features exceeds 80% in the prediction of the deep learning neural network controller under the given empirically labeled data set. The accuracy of the grinding traits increases fastest when there are fewer than 600 samples. The accuracy of all the grinding traits reaches more than 95% at 1000 samples and still increases slowly with further training. In the final training result, the prediction accuracies of the feed speed Vc, peripheral speed Vw, axial displacement f_a, and radial displacement f_r are above 98% against the labeled training data set.
Similarly, the training results of the deep learning neural network for the block surface controller parameters of the grinding robot are obtained as the number of training iterations increases (see Figure 14).
The accuracy of all the output features exceeds 80% in the prediction of the deep learning neural network controller under the given empirically labeled data set. The accuracy of the grinding traits increases fastest when the number of training iterations is less than 600. The accuracy of all the grinding traits reaches more than 95% at 1500 iterations and still increases slowly with further training. In the final training result, the prediction accuracies of the feed speed Vc, peripheral speed Vw, axial displacement f_a, and radial displacement f_r are above 97% against the labeled training data set.
From the training results of the neural networks in Figures 13 and 14 and from (19)-(22), we can see that e_l,Vc, e_l,Vw, e_l,f_a, and e_l,f_r converge and monotonically decrease. The experimental results verify the correctness of Theorem 1 (consistent approximation theorem).

Predicted Experimental Results with Multioutput.
The trained deep neural networks are used to predict the traits of casting block grinding. A sample of 100 data items is used as the input of the trained deep neural networks. The predicted results for the feed speed Vc, peripheral speed Vw, axial displacement f_a, and radial displacement f_r of the grinding wheel are obtained accordingly (see Figures 15-18). The black curve represents the labeled grinding trait data of the block surface; the other colored curves are the predicted results of the trained deep neural networks, and the average accuracies of the predicted results are given (see Table 1).
From Figures 15-18, g_ij[32×32], g_ij[48×48], and g_ij[52×52] are input data sets of different sizes; M_ij,1[32×32], M_ij,2[32×32], M_ij,3[32×32], and M_ij,4[32×32] are the feature data sets into which g_ij[32×32] is transformed, and each input data set corresponds to the respective output data set {Vc, Vw, f_a, f_r}. In the given set of input feature data, at least one data set can be found that yields the eigen solution Net_tr^r of the solution {Net_tr} of the neural networks {W_i,k,ij, b_i} corresponding to the data set g_ij[32×32], and the prediction accuracy of the eigen solution Net_tr^r is equivalent to that of the neural network solution {Net_tr} corresponding to the data set g_ij[32×32]; the correctness of Theorem 2 (eigen solution existence theorem) is therefore verified. The following conclusions are drawn from Figures 15-18 and the average accuracies of the predicted results in Table 1.
(1) The trained deep learning neural networks of our model can predict the feed speed Vc, peripheral speed Vw, axial displacement f_a, and radial displacement f_r simultaneously, and their average minimum prediction accuracies are above 95%. The more pixels the input data samples take from the image, the higher the corresponding prediction accuracy: the green, blue, and red curves are the predicted results of our model with pixels [32×32, 48×48, and 52×52], where the accuracy improves by 1% in turn. (2) Our proposed deep neural network reflects well the influence of the surface features on the robot grinding traits, as follows. (a) At least one of the features rugosity M_ij,1, standard deviation M_ij,2, skewness M_ij,3, and kurtosis M_ij,4 acts on all the grinding traits of feed speed Vc, peripheral speed Vw, axial displacement f_a, and radial displacement f_r. (b) The joint prediction accuracy of the robot grinding traits with all block-surface features is within the same precision level as the optimal prediction accuracy of each independent robot grinding trait predicted from rugosity M_ij,1, standard deviation M_ij,2, skewness M_ij,3, or kurtosis M_ij,4 alone. Furthermore, the joint prediction accuracy is sometimes even superior to the optimal single-feature prediction precision. As seen in Figure 17, the joint prediction accuracy of the axial displacement is superior to the optimal prediction precision obtained from rugosity M_ij,1. All the joint predicted results of the robot grinding traits reach their optimal prediction accuracy at the same time, and the joint predicted results do not interfere with each other. (c) The prediction accuracies of the individual features differ. As shown in Figures 15, 16, and 18, the average prediction accuracy of rugosity M_ij,1 is higher than the joint prediction accuracy when the dotted curve corresponds to the image with pixels [32×32]. The predicted results of standard deviation M_ij,2 and skewness M_ij,3 fluctuate with larger amplitude, and their accuracies are also lower than the joint accuracy. The predicted result of kurtosis M_ij,4 is a flat line; the corresponding controller has almost no prediction capability, and its prediction accuracy is only a random probability. (3) There are severe deviations at times 78-83 and 87-95, as shown in Figure 15. A similar deviation appears at times 4 and 85-95, as shown in Figure 10.
At times 68-72 and 86-92, as shown in Figure 17, as well as at times 66-68 and 84-94, as indicated in Figure 18, severe abnormalities were recorded; these are caused by the emergence of new data samples. The solution to this problem is to label the new data samples and add them to the training set, so that the model can produce reasonable predictions when a similar situation occurs in the future. Therefore, given the standard image of a grinded block surface, we can accurately obtain the optimal configuration values of the feed speed Vc, peripheral speed Vw, axial displacement f_a, and radial displacement f_r using the well-trained deep neural network controller.

Predicted Experimental Results with Single Output.
In the independent experiment with a single output, the many-to-one deep learning-oriented prediction model is adopted, and the inputs are the moment feature graphs of the grinding casting-block surface corresponding to the first, second, third, and fourth moments. Each of the moment feature graphs works in a relatively independent network. The output between the fuzzy analysis layer and the output layer of the network is a single parameter: the peripheral speed Vw, feed speed Vc, axial displacement f_a, or radial displacement f_r. The predicted results are obtained (see Figures 19-22). The average accuracies of the predicted results are given in Table 2.
In the independent experiment, g_ij[32×32], g_ij[48×48], and g_ij[52×52] are input data sets of different sizes; M_ij,1[32×32], M_ij,2[32×32], M_ij,3[32×32], and M_ij,4[32×32] are the feature data sets into which g_ij[32×32] is transformed, and each input data set corresponds to the respective output data set {Vc}, {Vw}, {f_a}, or {f_r}. The solution of the independent single-output experiment is a partial solution Net_tr^Part,i of the complete solution Net_tr^Max,i of the multioutput experiment. The predicted results of the partial solutions Net_tr^Part,i are shown in Figures 19-22. The data sets M_ij,1 of g_ij[32×32], g_ij[48×48], and g_ij[52×52] have partial solutions for every output data set {Vc}, {Vw}, {f_a}, and {f_r}; similarly, M_ij,3 has a partial solution for {Vw}. In the multioutput experiment, the feature data sets M_ij,1, M_ij,2, M_ij,3, and M_ij,4 of the input data sets g_ij[32×32], g_ij[48×48], and g_ij[52×52] have independent complete solutions Net_tr^Max,i corresponding to {Vc, Vw, f_a, f_r}, as shown in Figures 15-18. All of M_ij,1, M_ij,2, and M_ij,3 have complete solutions Net_tr^Max,i, except for M_ij,4. From the above, we conclude that a partial solution must correspond to a complete solution; thus, our experiment verifies the correctness of Theorem 3 (consistency theorem of complete solution and partial solution). The independent experiment shows that M_ij,4 has no complete solution Net_tr^Max,i corresponding to {Vc, Vw, f_a, f_r}, thus verifying the correctness of Theorem 4 (no solution theorem).
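For the linear special case, the consistency of complete and partial solutions can be checked directly: fitting the four outputs {Vc, Vw, f_a, f_r} jointly by least squares gives, column by column, exactly the same weights as fitting each single output separately. A small sketch (a linear model is an assumption made for tractability; the paper's networks are nonlinear):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))   # input feature data set
Y = rng.normal(size=(100, 4))   # targets standing in for {Vc, Vw, f_a, f_r}

# Complete solution: all four outputs fitted jointly.
W_complete, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Partial solutions: each output fitted independently, then stacked.
W_partial = np.column_stack(
    [np.linalg.lstsq(X, Y[:, k], rcond=None)[0] for k in range(4)]
)
```

In this linear setting the two results coincide exactly; for the nonlinear networks of the paper the claim is the weaker consistency of prediction accuracy verified experimentally above.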

As shown in Figure 19, the black curve is the original empirical data of the peripheral speed of the grinding wheel. The other curves are the predicted results of the deep learning-oriented controllers. From the figures and the average accuracies in Table 2, we draw the following conclusions.
(1) When the output parameter is only the feed speed, the single-output prediction accuracies of each moment are within the same percentage-point range as the multioutput prediction accuracies of each moment, as the blue and red curves show; the multioutput prediction accuracies do not interfere with each other despite the interaction of the multioutput parameters. (2) In this experiment, only the first moment is functional, and the other moments are not, as shown by the dotted straight lines in the figures. The predicted results of the second, third, and fourth moments are only random results. The multioutput prediction accuracies are consistent with the predicted result of the first moment, which means that the first moment has the superb effect in the multioutput prediction. (3) There are 3 nonfunctional moments in the single-output experiment, 2 more than in the multioutput experiment. This phenomenon shows that a superb-effect moment exists among the moments that function in the multioutput experiment.
The predicted result of the feed speed of the grinding casting-block surface is similar to the predicted result of the peripheral speed, as shown in Figure 20. The differences are as follows. (1) Compared with Figure 19, the second and fourth moments do not work in this experiment; the predicted experimental accuracy of the fourth moment with single output is reduced by 10%, which also shows that a superb-effect moment exists among the moments that function in the multioutput experiment. (2) The first and third moments function in this experiment, which means that the input feature parameters are determined by the grinding surface morphology itself and not by the order of the moments.
From the above analysis, it can be seen that (1) high prediction accuracy is obtained consistently across multi-input multioutput prediction, multi-input single-output prediction, and single-input single-output prediction. The optimal prediction accuracy is obtained at the same time for all the predicted results, which do not interfere with each other because of the superb effect. (2) This consistency of optimal prediction accuracy across multi-input multioutput, multi-input single-output, and single-input single-output prediction fully demonstrates that the grinding data have an inherent law that can be captured and predicted by the designed deep learning neural network controller. To verify this further, we conducted an experiment with random output, as follows.

Predicted Experimental Results with Random Output.
In the random-output experiment, we adopt the same proposed deep learning-oriented prediction model, and the inputs are the moment graphs of the grinding-surface morphology data. Each moment graph works in a relatively independent network. All the output data values of the feed speed Vc, peripheral speed Vw, axial displacement f_a, and radial displacement f_r are randomly generated by computer simulation. The predicted results of the random-output experiment are obtained (see Figures 23-26). The black curves are generated from random data, and the blue curves are the predicted results of our controller. The average accuracies of the predicted results with random output are given in Table 3, and the prediction results are in accordance with the random data; that is, we get random results.
The analysis of the random-output experiment yields a very interesting result. If the analyzed object data are generated at random, there is no nonrandom feature to learn, and the deep learning controller produces a random predicted result in which no nonrandom law can be found. In other words, the deep learning prediction controller cannot create predictions out of nothing, which proves the validity of the proposed model from another side. The stronger the law, the more the input features function and the higher the prediction accuracy. The random-output experiment further verifies the correctness of Theorem 4.

Conclusions
In our study, we present the eigen solution theory of general neural networks and quantitatively describe the input environments, the macroscopic structure, the output environments, and their relationships for neural networks. The eigen solution theory is applied and validated in the prediction of the controller parameters of a grinding robot in complex environments with the proposed deep learning neural networks, which provides theoretical support for their wider application.

Theorem 3 (consistency theorem of complete solution and partial solution).
If the neural networks {W_l,k,ij, b_l} have a solution {Net_tr}, then there exists an eigen solution Net_tr^r for the neural networks {W_l,k,ij, b_l}. If a partial solution {Net_tr}_Part is obtained by training the neural networks {W_l,k,ij, b_l} with the input processing-object vector set {S_i}, then the complete solution {Net_tr}_Max can be obtained by training the neural networks {W_l,k,ij, b_l} with the input processing-object vector set {S_i}.

Figure 3: Recurrent neural networks are expanded into feedforward neural networks.

Figure 6: The working process algorithm of grinding robot controller model.

Figure 7: The working process algorithm of controlling method.

Figure 12: The correlation between the output parameters.

Figure 13: Trends of accuracy changing of the deep learning neural network for casting-block grinding with samples increasing.

Figure 14: Trends of accuracy changing of the deep learning neural network for casting-block grinding with training times increasing.
Figure 15: Predicted results of feed speed of grinding wheel.

Figure 16: Predicted results of peripheral speed of grinding wheel.
Figure 17: Predicted results of axial displacement of grinding wheel.

Figure 18: Predicted results of radial displacement of grinding wheel.

Figure 19: Predicted results of peripheral speed of grinding wheel.

Figure 20: Predicted results of feed speed of grinding wheel.

Figure 21: Predicted results of radial displacement of grinding wheel.

Figure 22: Predicted results of axial displacement of grinding wheel.

Figure 23: Predicted results of peripheral speed of grinding wheel.

Figure 24: Predicted results of feed speed of grinding wheel.

Figure 25: Predicted results of radial displacement of grinding wheel.

Figure 26: Predicted results of axial displacement of grinding wheel.

… to obtain m different solutions Net_tr^r, r = 1, …, m. All these solutions are called isomorphism solutions of the neural networks {W_l,k,ij, b_l} for each other.
Definition 2 (solution of neural networks). In the study, {Net_tr} is a trained neural network, {Input_test} is a given test set, and {Output_test} is the output target vector set corresponding to {Input_test}.
Definition 4 (complete solution and partial solution of neural networks). With the output target vector set {T_i}, i ∈ [1, Max], if the output target vector set includes all the target vectors T_i^Max, then {Net_tr}_Max is called the complete solution of the neural networks {W_l,k,ij, b_l}. If the output target vector set is only part of the target vectors T_i^Part, then {Net_tr}_Part is called the partial solution of the neural networks {W_l,k,ij, b_l}.

… rugosity M_ij,1, standard deviation M_ij,2, skewness M_ij,3, and kurtosis M_ij,4 of surface image I in position (i, j). The feature values are put into the well-trained deep neural networks, and the values of the robot grinding traits are obtained when the network computation finishes. Then, the grinding robot controller sends the control instructions to the other mechanisms of the robot.
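The four surface features can be computed per image position with ordinary moment formulas. In this sketch, the rugosity M_ij,1 is taken as the local mean gray level, which is an assumption, since the excerpt does not reproduce the paper's exact definition:

```python
import numpy as np

def surface_features(patch):
    """First four moment features of one grayscale image patch."""
    x = patch.astype(float).ravel()
    m1 = x.mean()             # rugosity M_ij,1 (local mean, assumed)
    sd = x.std()              # standard deviation M_ij,2
    z = (x - m1) / sd
    m3 = np.mean(z ** 3)      # skewness M_ij,3
    m4 = np.mean(z ** 4) - 3  # excess kurtosis M_ij,4
    return m1, sd, m3, m4

patch = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 patch
m1, sd, m3, m4 = surface_features(patch)
```

These four values per position form the feature data sets M_ij,k that the networks take as input.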

Table 1: Average accuracies of predicted results with multioutput.

Average accuracies of predicted results with single output.