Model Predictive Control of Nonlinear System Based on GA-RBP Neural Network and Improved Gradient Descent Method

A model predictive control (MPC) method based on a recursive backpropagation (RBP) neural network and a genetic algorithm (GA) is proposed for a class of nonlinear systems with time delays and uncertainties. In the offline modeling stage, a multistep-ahead predictor with a GA-RBP neural network is designed, where a GA-BP neural network is used as the one-step prediction model and GA is employed to train the initial weights and biases of the BP neural network. The incorporation of GA into RBP reduces the possibility of the BP neural network falling into a local optimum instead of reaching the global optimum. In the online optimizing stage, a multistep-ahead GA-RBP neural network predictor and an improved gradient descent method (IGDM) are proposed to efficiently solve the online optimization problem of nonlinear MPC by minimizing a modified quadratic criterion. The designed MPC strategy avoids the information loss incurred when linearizing the controlled system and computing the Hessian matrix and its inverse. Experimental results show that the proposed approach can reduce the computational burden and improve the performance of MPC (i.e., maximum overshoot, calculation time, rise time, and RMSE tracking error) for nonlinear controlled systems.


Introduction
As an advanced technique, model predictive control (MPC), including dynamic matrix control (DMC), generalized predictive control (GPC), and receding horizon control (RHC), has been widely used in industrial processes because of its ability to handle constraints and time delays and to adapt to various dynamic models and control objectives [1][2][3][4]. In this control strategy, the process behavior is predicted with an approximate model, and the control sequence is obtained by optimizing a performance evaluation function. Compared to conventional control methods such as PID controllers, MPC is a desirable optimal control strategy that, based on linear dynamic models, solves a finite-horizon open-loop optimal control problem when the operating points of the process are limited to a small area [5]. However, most industrial processes operate over a wide range of operating points and exhibit severe nonlinearities, which can degrade the control and optimization performance of linear MPC. The development of nonlinear MPC has been motivated by inherent nonlinearity and by production requirements across various operating regions. Nonlinear MPC has attracted enormous attention over the last two decades; it predicts the future behavior of the system by constructing a nonlinear prediction model. The effectiveness of nonlinear MPC depends on the construction of an appropriate first-principle or empirical model from industrial or laboratory data and on the derivation of an accurate online linear approximation of the nonlinear model for optimization and prediction [6][7][8]. However, it remains challenging to build system models that are both accurate and computationally efficient for complex nonlinear control problems [9,10].
Generally, there are two approaches to generating system models: first-principle and data-driven models. First-principle models are derived from theory to describe the real physical behavior of a system. However, solving a first-principle model is often computationally demanding, because the model must be solved for every candidate value of the unknown parameters. Data-driven models, in contrast, are constructed by algorithmically inferring relationships or parameters from a dataset using machine learning, without prior knowledge of the system. Many increasingly promising data-driven models have recently been applied to nonlinear MPC to improve model prediction accuracy, such as Takagi-Sugeno fuzzy models [11], Nonlinear Autoregressive models with eXogenous variables (NARX) [12], Support Vector Machines (SVM) [13], Volterra series [14], and Artificial Neural Networks (ANN) [15][16][17][18]. Among these modeling methods, the ANN approach employs a nonlinear empirical model that represents input-output data; it can mimic process models by learning patterns from a sufficiently large dataset and is thus better suited to modeling nonlinear systems than the other methods. ANN is considered an excellent tool for modeling in nonlinear MPC, as it can approximate any continuous and differentiable function to an arbitrarily high degree of accuracy [19,20]. Hornik et al. [21] proposed a fuzzy wavelet NN for MPC to model a liquid level system. Han et al. [22] presented a self-organizing NN as the prediction model of a nonlinear system to perform real-time MPC. Ławryńczuk [23] developed nonlinear MPC using a radial basis function neural network (RBFNN). MPC based on an Elman NN for delayed dynamic systems was introduced in [24].
As one of the simplest and most effective feed-forward neural networks, the backpropagation neural network (BPNN) has also been extensively used in nonlinear plant identification and modeling. However, BP networks have many drawbacks, such as the difficulty of choosing an appropriate network architecture and of training the weights of a fixed architecture, and the resulting local minima and slow convergence. To overcome these problems and improve the reliability of BPNN, heuristic algorithms such as the genetic algorithm (GA) [25], Particle Swarm Optimization (PSO) [26], and Simulated Annealing (SA) [27] have been combined with BP to avoid local minima and to achieve global convergence quickly and correctly. Because it is based on natural selection and genetics, the GA has been introduced as an evolutionary computation method that does not require problem-specific knowledge and is thus superior in terms of robustness and in solving nonlinear, parallel, and complex problems [28]. The optimization of the neural network architecture and the training of the weights of a fixed architecture are two important ways in which GA can serve as a robust global optimization method for neural networks [29]. Related experimental results demonstrated that the combination of GA and BPNN is superior to either method alone in generalization ability and convergence speed [30,31]. Despite these promising benefits, the reported prediction results of the GA-BP neural network become inaccurate when it is used iteratively over the prediction horizon. For determining the optimal input moves by solving an optimization problem in the nonlinear MPC framework, the capability to perform satisfactory multistep-ahead predictions, rather than only one-step-ahead predictions, is considered important [8,32]. Hence, it is necessary to evaluate and study the multistep-ahead prediction behavior of neural-network-based MPC models.
Despite the algorithmic developments to improve the convergence rate, it is still necessary to solve numerical optimization problems online at each iteration, which is a fundamental limiting factor for the practical implementation of MPC owing to the low computational accuracy and efficiency of the receding horizon optimization. In the optimization process of MPC, the optimal control sequence is obtained and its first element is applied to the nonlinear system at each sampling instant. A large number of algorithms have been studied to solve the control problem of nonlinear MPC. Lu [33] proposed a systematic design methodology for stable predictive control (SPC) based on recurrent wavelet neural networks for nonlinear plants. Stefan et al. [34] presented Nesterov's fast gradient method combined with a multilayer neural network to control a nonlinear system with input constraints. Kittisupakorn et al. developed a Newton-Raphson algorithm to solve the nonlinear MPC control problem for a steel pickling process. Yang et al. [36] designed a nonlinear MPC optimization problem for reference tracking of dynamic positioning ships by incorporating disturbance estimation into the rolling optimization.
This paper proposes a novel strategy for nonlinear MPC, in which a recursive neural network is combined with the improved gradient descent method (IGDM) to model nonlinear dynamic systems with time delays and uncertainties and to solve the online optimization problem. The receding horizon optimization problem of the nonlinear system is solved by the proposed IGDM, where the control input sequence is calculated by minimizing a modified cost function. The designed control strategy avoids the information loss incurred when linearizing the nonlinear system and solves the online optimization problem in the implementation of MPC. Two different nonlinear systems are selected to verify the modeling and tracking abilities of the proposed algorithm. The main contributions of this paper can be summarized as follows: (1) To obtain an accurate prediction model, we present a recursive BP neural network optimized by GA (GA-RBP) to construct the multistep-ahead model. It can be adapted to predict the future dynamic behavior of nonlinear systems with time delays and uncertainties.
(2) An IGDM is developed to solve the online optimization problem, which can improve the control performance and reduce the computational burden.
(3) Combining the GA-RBP neural network predictor with online optimization based on IGDM, we propose a new online MPC algorithm for a class of nonlinear systems with time delays and uncertainties. The simulation results for the nonlinear systems show that the proposed strategy has better robustness and tracking ability than the traditional GPC. The rest of this paper is organized as follows. Section 2 introduces the principles of nonlinear MPC and the BPNN model. In Section 3, the GA-BP neural network and the multistep-ahead GA-RBP neural network predictor are described. An IGDM is proposed to address the online optimization problem of MPC in Section 4. The experimental results are presented in Section 5 and the conclusions are summarized in Section 6.

Nonlinear Model Predictive Control (NMPC).
The essence of NMPC is to predict the future output of the nonlinear dynamic process based on a nonlinear prediction model. The NMPC controller is designed to obtain the control input sequence by minimizing a multistage cost function, where y_r and y_c represent the desired reference signal and the estimated output values, respectively, N_0 denotes the minimum prediction horizon, N_p is the maximum prediction horizon, N_u is the control horizon, and λ_j is a weighting factor on the control input, generally chosen as zero or a small value. The only constraint on the values of N_0 and N_u is that these bounds must be less than or equal to N_p. In the process of modeling a dynamic stochastic system, the system can be represented as a nonlinear autoregressive moving average model with exogenous input (NARMAX) [37,38] of the form y(t) = f(y(t − 1), . . . , y(t − n_y), u(t − t_d − 1), . . . , u(t − t_d − n_u)) + d(t), where f is an unknown nonlinear function, u and y denote the input and output of the process, d represents zero-mean white noise, n_u and n_y represent the maximum lags of the inputs and outputs, respectively, and t_d is the delay time.
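For concreteness, the elided multistage cost can be written in the standard GPC form using the symbols defined above (a reconstruction consistent with this description; the paper's exact expression may differ):

J(t) = \sum_{j=N_0}^{N_p} \left[ y_r(t+j) - y_c(t+j) \right]^2 + \sum_{j=1}^{N_u} \lambda_j \left[ \Delta u(t+j-1) \right]^2,

minimized at each sampling instant subject to the system dynamics and any input constraints.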

BP Neural Network Model.
The BPNN is a classical feed-forward neural network that employs an error backpropagation algorithm and uses the gradient descent method to adjust weights and biases. A typical BPNN architecture consists of an input layer, hidden layers, and an output layer. As shown in Figure 1, the input layer and the hidden layer of the BPNN contain m and n neurons, respectively. The weights and biases of the BPNN are adjusted to minimize the error between y(k) and y_m(k), where y(k) and y_m(k) are the real system output and the output of the BPNN, respectively. In Figure 1, the basic BPNN structure contains one input layer, one hidden layer, and one output layer. I_j, j = 1, 2, . . . , m, H_i, i = 1, 2, . . . , n, and O represent the nodes of each layer of the BPNN. ω_ij is the connecting weight between the jth input node and the ith hidden node, and ω_i denotes the connecting weight between the ith hidden node and the output node. b_i and b represent the bias vectors of the hidden layer and the output layer, respectively. H_I and H_O are the input and output of the hidden layer, respectively, and O_O is the output of the network. The output of the BPNN can be expressed as O_O = Σ_i ω_i φ(Σ_j ω_ij x_j + b_i) + b, where φ is the sigmoid function and x_j is the input of the BPNN; this network output serves as the output of the one-step-ahead prediction model. The testing dataset is chosen randomly to assess the performance of the BPNN model, and the validation dataset is used to ensure that the model neither overfits nor underfits during training.
In the process of modeling nonlinearities, overfitting is a typical problem for neural network methods when a model has many parameters and the network structure is very complex. In order to avoid overfitting, a general technique named early stopping is used, with a validation set employed to stop model training at the peak of validation accuracy [39]. The error on the validation set is monitored during the training session. In the early stopping method, when the validation error starts to increase after a number of iterations, training is stopped, and the weights at the minimum of the validation error are returned for optimum network complexity.
Furthermore, underfitting occurs when the model is not well trained in the training process. Underfitting of an NN means that the model has too few parameters to capture the underlying structure of the observed data and thus provides poor prediction or generalization. To solve this problem, the hyperparameters, including the number of hidden neurons and layers, the activation function, the loss function, and the regularization parameters, are adjusted. To obtain an optimal neural network, grid search is employed in this paper, which is an exhaustive search approach for hyperparameter tuning [40]. In grid search, each hyperparameter is assigned several possible values, forming a multidimensional grid. It has been reported that random grid search is more efficient than full grid search, both theoretically and empirically [41,42].
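As a concrete illustration of the early stopping mechanism described above, the sketch below trains a simple linear model by gradient descent and returns the weights recorded at the minimum of the validation error (a minimal sketch; the function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def train_with_early_stopping(x_tr, y_tr, x_val, y_val,
                              lr=0.1, max_epochs=2000, patience=50):
    """Gradient-descent training of a 1-D linear model w*x + b with
    early stopping: keep the weights at the minimum validation error."""
    rng = np.random.default_rng(0)
    w, b = rng.normal(), rng.normal()
    best = (np.inf, w, b)
    wait = 0
    for _ in range(max_epochs):
        # one gradient step on the training set (MSE loss)
        err = w * x_tr + b - y_tr
        w -= lr * 2.0 * np.mean(err * x_tr)
        b -= lr * 2.0 * np.mean(err)
        val_err = np.mean((w * x_val + b - y_val) ** 2)
        if val_err < best[0]:
            best, wait = (val_err, w, b), 0   # new validation minimum
        else:
            wait += 1
            if wait >= patience:              # validation error stopped improving
                break
    _, w, b = best                            # weights at the val-error minimum
    return w, b
```

The same pattern applies to BPNN training: monitor the validation error each epoch, keep the best weights, and stop once the error has not improved for `patience` epochs.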

GA-BP Neural Network.
To address the conventional BP algorithm's sensitivity to initial weights and its tendency to fall into local optima, this paper combines GA with BP to establish the prediction model. GA is a global search technique based on the mechanics of natural selection and natural genetics, which can search for the optimal weight and bias vector of the BP algorithm without falling into local extrema [43]. Based on the self-adaptive probabilistic optimization of GA and the good nonlinear fitting ability of BPNN, the combined algorithm has the advantages of strong global search capability and high learning accuracy.
In the GA, the individual parameters are encoded as strings of numbers called chromosomes. The process starts by randomly initializing a population of potential solutions, which are then evaluated using a fitness function to identify the best individuals in the population. Additionally, genetic operators such as mutation and crossover are applied: crossover selects two random parent chromosomes and combines them to form two child chromosomes, while mutation maintains population diversity and avoids convergence to a local optimum. The main characteristics that affect the convergence of the GA include the population size, the number of iterations, the number of survivors, and the mutation and crossover rates. The combination of these parameters has a major influence on the convergence time and the optimal solution. Thus, it is necessary to tune the GA and find the best configuration regarding convergence time so as to minimize the average difference between the GA's output and the measured data. However, the number of possible parameters and their different values makes parameter tuning very time-consuming [44]. In this paper, a factorial design approach, a well-known methodology based on statistical considerations, is used to choose appropriate GA parameters. A description of this tuning approach can be found in [45,46]. Figure 2 shows the process of GA-BP neural network modeling, which includes the selection of the BPNN structure, the optimization by GA, and the prediction by the BPNN. The procedure of the GA-BP neural network is as follows. Specifically, the BPNN first reads in the training data and, combined with the initial weights and biases transferred by GA, completes the first training pass to obtain the initial prediction model. Then, the BPNN reads in the validation data, uses the initial prediction model to complete the predictions, and evaluates the initial prediction results.
If the fitness convergence conditions are satisfied, the initial prediction model becomes the final prediction model; if not, the GA iterates and transfers the next group of training parameters to the BPNN until the fitness convergence conditions of the prediction model are met.
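The GA loop described above can be sketched as follows: a population of candidate weight vectors for a small single-hidden-layer sigmoid network is evolved by tournament selection, one-point crossover, and Gaussian mutation, and the fittest individual (lowest MSE) is returned as the initial weights for subsequent BP training. This is a minimal sketch under simplified assumptions (real-coded chromosomes, a 1-n-1 network); all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def net(w, x, n_hid=5):
    """1-n_hid-1 sigmoid network; w packs [w_ih, b_h, w_ho, b_o]."""
    w_ih, b_h = w[:n_hid], w[n_hid:2 * n_hid]
    w_ho, b_o = w[2 * n_hid:3 * n_hid], w[3 * n_hid]
    h = 1.0 / (1.0 + np.exp(-(np.outer(x, w_ih) + b_h)))   # hidden layer
    return h @ w_ho + b_o                                   # linear output

def ga_init_weights(x, y, pop=40, gens=60, pm=0.1, n_hid=5):
    """Evolve initial network weights: fitness = -MSE, tournament
    selection, one-point crossover, Gaussian mutation, elitism.
    Returns the best weight vector and the best-MSE history."""
    dim = 3 * n_hid + 1
    popu = rng.normal(0.0, 1.0, (pop, dim))
    mse = lambda w: float(np.mean((net(w, x, n_hid) - y) ** 2))
    hist = []
    for _ in range(gens):
        fit = np.array([-mse(w) for w in popu])
        hist.append(-fit.max())                 # best MSE this generation
        elite = popu[np.argmax(fit)].copy()     # elitism: keep the best
        children = []
        for _ in range(pop // 2):
            i, j, k, l = rng.integers(0, pop, 4)
            p1 = popu[i] if fit[i] > fit[j] else popu[j]   # tournament
            p2 = popu[k] if fit[k] > fit[l] else popu[l]
            cut = int(rng.integers(1, dim))                # one-point crossover
            c1 = np.concatenate([p1[:cut], p2[cut:]])
            c2 = np.concatenate([p2[:cut], p1[cut:]])
            for c in (c1, c2):                             # Gaussian mutation
                mask = rng.random(dim) < pm
                c[mask] += rng.normal(0.0, 0.3, mask.sum())
            children += [c1, c2]
        popu = np.array([elite] + children[:pop - 1])
    fit = np.array([-mse(w) for w in popu])
    return popu[np.argmax(fit)], hist
```

The returned weight vector would then seed ordinary BP (gradient-descent) training, as in the GA-BP scheme; elitism guarantees that the best fitness never degrades between generations.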

Multistep-Ahead Prediction Model Based on Recursive Neural Network.
Since the controlled output has to be predicted several steps ahead for the optimal control of the nonlinear system, a multistep prediction model is required [26,32]. The methods for calculating multistep-ahead predictions for the MPC model using the NN model are based on the recursive prediction strategy: successive one-step-ahead predictions in which the outputs of each step are fed back as inputs for the next prediction step. A new GA-RBP neural network is presented as a multistep-ahead predictor to obtain the ahead-predicted values of nonlinear systems. According to equation (3), the NARMAX one-step-ahead model gives the one-step predicted output y(t + 1) of the nonlinear model; applying the model again yields y(t + 2), and repeating the recursion gives the d-step-ahead predicted value. The structure of the recursive neural network predictor is shown in Figure 3. Specifically, d iterations are required for prediction. Based on the fact that a continuous feed-forward neural network with a single hidden layer and a sigmoid transfer function can approximate any complex continuous mapping with arbitrary precision [19][20][21], the recursive BPNN is constructed as a single-hidden-layer neural network to prevent the problem of vanishing gradients. When the prediction horizon is larger, the prediction accuracy is affected by the accumulated errors generated by the recursive procedure. In order to offset this error, a variable y_a(t) is introduced as the difference between the system output and the predicted output of the predictor, y_a(t) = y(t) − y_m(t).
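The recursion above can be sketched directly: with a one-step model y(k + 1) = f(u(k), y(k)) (the n_u = n_y = 1, t_d = 0 case), each predicted output is fed back as the regressor for the next step. A minimal sketch with illustrative names; any trained one-step predictor, such as the GA-BP network, can play the role of f:

```python
def multistep_predict(f, u_future, y_last):
    """Recursive multistep-ahead prediction: apply the one-step model
    repeatedly, feeding each prediction back as the next step's output
    regressor (successive one-step-ahead predictions)."""
    preds, y = [], y_last
    for u in u_future:
        y = f(u, y)          # one-step-ahead prediction
        preds.append(y)      # y is reused as the regressor next step
    return preds

# e.g. with a toy one-step model y(k+1) = 0.5*y(k) + u(k):
# multistep_predict(lambda u, y: 0.5*y + u, [1.0, 1.0], 0.0)  → [1.0, 1.5]
```

Because each step consumes the previous prediction, model errors accumulate with the horizon length, which is exactly why the correction term y_a(t) is introduced above.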
The estimated output value of the ahead predictor can then be derived as y_c(t + i) = y_m(t + i) + y_a(t).

IGDM Online Optimization
To solve numerical optimization problems online at each iteration of the MPC algorithm, online optimization techniques are used to solve the dynamic optimization problems in MPC for superior performance. The essence of the online optimization problem is to acquire an optimal control input sequence at each sampling instant. The optimal control input is obtained by minimizing the cost function in equation (1) over a finite prediction horizon. Let the initial prediction error be e(t) = y_r(t) − y_c(t); the minimization of the cost function can then be described in terms of the weighting factors α and 1 − α on the input and output, respectively, where α determines the control performance of the system and e(t) is the prediction error between the predicted output and the desired reference signal. The vectors y_r(t), y_m(t), e(t), and Δu(t) are denoted as follows: y_r(t) = [y_r(t + 1), y_r(t + 2), . . . , y_r(t + N_p)]^T, y_m(t) = [y_m(t + 1), y_m(t + 2), . . . , y_m(t + N_p)]^T, e(t) = [y_r(t + 1) − y_m(t + 1), y_r(t + 2) − y_m(t + 2), . . . , y_r(t + N_p) − y_m(t + N_p)]^T, and Δu(t) = [Δu(t), Δu(t + 1), . . . , Δu(t + N_u − 1)]^T. The objective of the receding horizon optimization is to minimize the cost function in equation (12). At each iteration on the cost function, an intermediate control vector is generated, from which a series of control increments Δu(t), Δu(t + 1), . . . , Δu(t + N_u − 1) can be derived. For a multivariable function E(θ) with input θ = [θ_1, θ_2, . . . , θ_n]^T, the optimal θ* is obtained by minimizing E(θ), and the parameters are updated as θ_{k+1} = θ_k − η p_k, where p_k is the search direction derived from the gradient ∇E(θ_k), η denotes the step size, and k is the current iteration.
Based on the Taylor expansion formula, the cost function can be approximated locally; according to the IGDM update rule, the optimization problem is then solved iteratively, where the control increment is Δu(t) = u(t) − u(t − 1) and the Jacobian matrix collects the derivatives of the predicted outputs with respect to the control inputs. The optimization proceeds recursively until the optimum control input u*(k) is obtained.
Furthermore, the predicted output of the system y_m(t + i) and its derivatives are required to calculate the updated control input u(k + 1). Based on the chain rule, the elements of the Jacobian matrix can be computed through the neural network model. The proposed GA-RBP neural network predictive control algorithm mainly comprises identification and optimization; the multistep-ahead predictor is obtained using the recursive technique. The proposed predictive control algorithm proceeds as follows:
(1) Select the training dataset, validation dataset, and testing dataset.
(2)-(4) Train the GA-BP one-step model offline and construct the multistep-ahead predictor by the recursive technique; the predictor will be used to predict the system output.
(5) Specify the initial sequence of control inputs u_0(k) and initialize the minimum prediction horizon N_0 and the input weighting factor α.
(6) At the current sampling time k, use the past input and output samples and the current sample to predict the current and future outputs of the system.
(7) Correct the multistep-ahead predicted values y_c according to the error between y_c and the current reference signal y_r as in equation (11).
(8) Compute the Jacobian matrix of equation (19) and obtain the control moves; the control input sequence at the current sampling time k is calculated such that y_c approximates the reference signal y_r in an optimal way.
(9) Take the sequence of control inputs from step (8) and apply its first element u(k) to control the system.
(10) Repeat steps (6)-(9) until the simulation time is reached.
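A minimal sketch of the gradient-descent receding-horizon step for a single control move (scalar input, one-step horizon) is shown below; the cost matches the α / (1 − α) weighting described earlier, and a finite-difference quotient stands in for the network Jacobian ∂y_m/∂u (function names, step size, and iteration count are illustrative assumptions, not the paper's exact IGDM update):

```python
def igdm_step(predict, u_prev, y_r, alpha=0.01, eta=0.5, iters=100):
    """One receding-horizon move by gradient descent: minimise
        E(du) = (1 - alpha) * (y_r - predict(u_prev + du))**2 + alpha * du**2
    over the control increment du, then return the new control input."""
    du, h = 0.0, 1e-5
    for _ in range(iters):
        y = predict(u_prev + du)
        dy_du = (predict(u_prev + du + h) - y) / h           # Jacobian approx.
        grad = -2.0 * (1.0 - alpha) * (y_r - y) * dy_du + 2.0 * alpha * du
        du -= eta * grad                                     # gradient step
    return u_prev + du
```

In the full algorithm this step is repeated at every sampling instant with the GA-RBP predictor in place of `predict`, and only the first element of the resulting control sequence is applied to the plant.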
Figure 3: The structure of the recursive neural network predictor.
Since a model of the process is used directly online for prediction, MPC techniques can be applied to multivariable processes, including cases where the numbers of control inputs and controlled variables differ. The IGDM technique aims at identifying good starting points for the optimization so as to reduce the number of iterations required to reach the optimum. The iteration process of the GA stops when the number of iterations reaches its maximum, so the computational complexity of the GA can be described approximately in terms of the prediction horizon H_p, the control horizon H_u, and the input and output dimensions n_u and n_y. It is noted that the control horizon H_u has a significantly bigger impact on the computational complexity than the prediction horizon H_p [18,34].

Numerical Experiments
To illustrate the effectiveness and efficiency of the GA-RBP neural network algorithm, numerical experiments are presented in comparison with three other models: BPNN, RBFNN, and Support Vector Regression (SVR). The GA controlling parameters are selected based on the factorial design approach as follows: the population size is chosen from 30 to 100, the number of generations is set within 25 to 75, and the crossover and mutation probabilities are selected from 0.30 to 0.85 and from 0.05 to 0.55, respectively. Based on the prediction model built with the GA-RBP neural network strategy, the IGDM is applied to calculate the control law for the system. Three criteria are selected to evaluate the performance of the proposed algorithm: the mean absolute error (MAE), the root mean square error (RMSE), and the calculation time. Generally, the smaller the MAE and RMSE are, the better the prediction performance is. The formulas are MAE = (1/N) Σ |y_m(i) − y(i)| and RMSE = sqrt((1/N) Σ (y_m(i) − y(i))²), where N is the number of samples and y_m(i) and y(i) are the predicted output and the system output, respectively. All experiments are implemented in MATLAB (version 9.6, R2019a) on a personal computer running Windows 7 with an Intel Core i7 CPU at 2.5 GHz and 12 GB RAM.
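The two error criteria can be stated as one-liners (a straightforward rendering of the formulas above):

```python
import numpy as np

def mae(y_pred, y_true):
    """MAE = (1/N) * sum_i |y_m(i) - y(i)|."""
    return float(np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_true))))

def rmse(y_pred, y_true):
    """RMSE = sqrt((1/N) * sum_i (y_m(i) - y(i))**2)."""
    return float(np.sqrt(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2)))
```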

System I.
A typical highly nonlinear discrete system is described by equation (23), and the time-varying reference signal y_r(t) by equation (22). The delay time is t_d = 0 and the maximum lags of the system are n_u = 1 and n_y = 1, respectively. The prediction and control horizons of the proposed control method are selected as N_p = 2 and N_u = 1, respectively. The initial values are u(1) = 1 and y(1) = 0. A random function is used to generate 2000 data points as the input of the nonlinear discrete system, and the output data of the controlled system are obtained according to equation (23). The input vector of the GA-BP prediction model is [u(k − 1), y(k − 1)]^T, and y(k) is the output.
In the BP and GA-BP algorithms, each hidden layer consists of 20 neurons. The population size of the GA is chosen as 30, the number of generations as 50, and the crossover and mutation probabilities as 0.4 and 0.2, respectively. The penalty coefficient is C = 150 and the hyperparameter of the Gaussian kernel is σ = 0.6 in SVR. Each modeling algorithm is trained offline with the input-output data pairs obtained from the training data, and the target RMSE is set as 0.001. The prediction errors of GA-BP, BP, RBF, and SVR are shown in Figure 4. The MAE and RMSE values on the training, validation, and testing datasets are shown in Table 1. It can be observed that GA-BP has good predictive performance, and its MAE and RMSE test errors are small, which shows that the model has good generalization ability. To further verify the nonlinear accuracy of the GA-BP neural network model, the performance of each model is compared in Table 2. It can be seen that the GA-BP algorithm achieves smaller MAE and RMSE than the other models, which demonstrates that GA-BP is suitable for nonlinear system modeling. Although GA-BP requires more training time for system modeling, the GA-BP neural network can avoid falling into local optima by adjusting the network parameters, and thus its prediction accuracy is greatly improved.
Combined with the proposed GA-RBP neural network predictor, the IGDM is used to calculate the control moves of the system. The system output is required to track the reference signal of equation (22). Figure 5 shows the comparison between the system output and the reference signal. The satisfactory performance obtained is due to the accurate representation of the nonlinear dynamics of the process by these neural network models. The tracking time for GA-RBP neural network predictive control (GA-RBPNNPC) is less than those of the other methods, which reflects that the proposed controller can track the reference signal quickly. The absolute tracking error between the system output and the reference signal of the proposed algorithm is shown in Figure 6, which demonstrates that the proposed GA-RBP neural network model closely approximates the dynamic process and that the tracking performance of the proposed IGDM is satisfactory. The simulation results for the nonlinear system are compared with an RBF neural network predictive control (RBFNNPC) and a traditional GPC. The control parameters of RBFNNPC are the same as those of the GA-RBP neural network predictive control. The GPC parameters are set as N_p = 16, N_u = 1, and λ = 5. The control performance of the proposed algorithm and the other algorithms is listed in Table 3. It can be observed that the proposed control method has superior performance for this system, which indicates that the GA-RBP neural network predictive controller has better control ability than the other methods. It can be inferred that the proposed algorithm improves the control performance of MPC, including the maximum overshoot, calculation time, and RMSE error value.
System II.
A nonlinear dynamic system with time delay, given by equation (25), is considered. The reference signal y_r(t) and the external disturbance signal d(t) are given in equation (26); the reference is y_r(t) = 0.9 for 0 < t ≤ 200, 0 for 200 < t ≤ 400, 0.7 for 400 < t ≤ 600, −0.3 for 600 < t ≤ 800, and 0.4 for 800 < t ≤ 1000. The sampling time is 1 second. The delay time is t_d = 1, and the initial values of the system input and output are u(1) = u(2) = 1 and y(1) = y(2) = 0, respectively. The maximum lags in the outputs and inputs are 1 and 2, respectively. A random function is used to generate 3000 data points as the input of the nonlinear discrete system, and the output data of the controlled system are obtained based on equation (25). The crossover and mutation probabilities of the GA are chosen as 0.3 and 0.1, respectively. The SVR parameters are selected as C = 80 and σ = 0.2. To compare the predictive performance of BP and GA-BP, the same number of neurons and the same learning rate are used in the modeling process. The input and output variables of the proposed GA-BP neural network model are specified by y_m(k) = f(u(k − 1), u(k − 2), y(k − 1), y(k − 2)). The design specifications for the GA-BP neural network model are given in Table 4. It can be inferred from the cross-validation results that the GA-BP is well trained and can be used as a prediction model. It can be seen from Figure 7 and Table 5 that the GA-BP model is more accurate than the other methods for the time-delay nonlinear system. After the offline training stage, the weights and biases were used as the connection parameters of the neural network, and the two-step-ahead prediction results were obtained by the recursive algorithm. The constructed GA-RBP predictor is combined with the IGDM to track the set-point-change signal of equation (26).
Following the reference trajectory of equation (26), a series of set-point changes is considered. As shown in Figures 8 and 9, the proposed algorithm always reaches a stable value in a relatively short time when the operating point changes, because the controller adopts a receding horizon optimization method. It can also be seen that there is no steady-state error in tracking the desired reference. The MPC based on the GA-RBP algorithm tracks the reference trajectory quickly and stably. In order to test the robustness and disturbance rejection of the proposed algorithm, an external disturbance is introduced into the system. The simulation results of the RBFNNPC and traditional GPC algorithms are compared under set-point changes and constant external disturbances. The control parameters of RBFNNPC are the same as those of the GA-RBP neural network predictive control. The GPC parameters are selected as in [2], namely N_p = 20, N_u = 1, and λ = 10. Figures 10 and 11 show the output response and the corresponding absolute errors of the system.
As shown in Figures 10 and 11, the system output starts fluctuating when an external disturbance occurs. However, the proposed MPC algorithm damps the effect of the disturbance, and the overall performance becomes reasonable after around 50 seconds. It can be seen that the proposed method ensures that the process response tracks the set points without unrestrained overshoot or oscillations. Compared with the set-point tracking results of the other controllers, our controller has smaller overshoots and shorter rise times. The system output converges faster to the desired value owing to the accurate representation of the nonlinear dynamics of the process by the GA-RBP neural network predictive controller and the use of the IGDM at the current sampling instant to obtain the control value in equation (18). Meanwhile, the evaluation criteria for the proposed algorithm include the number of hidden nodes, the maximum overshoots, the calculation time, the rise time, and the RMSE. It can be seen from Table 6 that the proposed control method has superior performance in calculation time and RMSE error values, which indicates that the proposed approach has better adjustment ability than the other methods and thus can reduce the computational burden and optimize the control performance of MPC. The proposed control method has satisfactory prediction and tracking abilities, has no difficulty in rejecting the external disturbance, and compensates for the influence of model mismatch in time. It can also be concluded that the proposed IGDM realizes real-time control of complex nonlinear processes.

Conclusions
In this paper, an MPC method based on the GA-RBP neural network predictor is proposed, and the reference signal tracking and disturbance rejection behavior of nonlinear systems is studied. Owing to the global search characteristic of GA, the optimum interconnection values can be obtained, which improves the convergence rate and stability of the BP algorithm. The GA-RBP neural network predictor is applied to model two highly nonlinear systems, and the prediction accuracy shows that it has satisfactory performance. Simultaneously, the multistep-ahead GA-RBP neural network predictor is combined with the IGDM to address the online optimization problem of nonlinear MPC. Finally, the proposed method is applied to these systems to evaluate the performance of our strategy in set-point-change signal tracking and external disturbance rejection, and its effectiveness is demonstrated by computer simulation. It can be observed from the simulation results that the GA-RBP neural network predictive control algorithm presents satisfactory performance for processes with time delays, where the traditional GPC algorithm fails owing to the nonlinearities. The proposed method achieves more accurate predictive performance and addresses the online optimization of nonlinear MPC, and the MPC strategy based on GA-RBP and IGDM can track the desired reference signal with satisfactory performance. However, when the GA-RBP modeling method faces small sample sizes or uneven sample distributions, its prediction ability is limited and needs to be improved. In general, there are two ways to solve this problem. One is data augmentation, to generate as many training samples as possible. The other is sampling and data synthesis, where the training dataset is processed to obtain more training samples, turning an unbalanced dataset into a balanced one.
In industrial applications, the delay time of the nonlinear system may be unknown or uncertain. e analysis and design of a controller for nonlinear systems with time delay are more complex if the delays are unknown or time varying.
This problem can be addressed by optimal control methods, including the linear matrix inequality (LMI) approach [47,48], Lyapunov-Krasovskii functionals [49,50], and fuzzy logic models [51,52], which have been proven to yield robust MPC controllers for constrained processes in practice. The incorporation of GA-RBP into these optimal control methods for the MPC model would provide a new approach for a class of nonlinear systems with time delays and uncertainties. It is promising that the new combined algorithm would have good prediction and tracking performance for nonlinear systems with unknown time delays. The study of this type of nonlinear system in industrial applications will be a further direction.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.