1 Introduction

Model predictive control (MPC) is a control algorithm capable of handling constrained, complex systems and has been widely used in the process industry. However, MPC faces a practical problem: how to deal with uncertainty in the dynamic model so that the system can still guarantee good performance. Establishing an accurate prediction model when the system is subject to uncertainty is therefore one of the most active topics in the MPC field[1-3].

The subspace method identifies a system from input and output data, and its algorithm relies on simple and reliable linear algebra tools. Its strong ability to handle multivariable systems makes it well suited for building predictive control models[4, 5].

Subsequently, predictive control based on subspace identification appeared in different studies. Some of them used the subspace predictor matrix directly, while others used subspace identification to obtain a state-space model as the prediction model. Kadali et al.[1] proposed a predictive control design method that uses the subspace predictor directly and retains the typical characteristics of predictive control. In recent years, subspace predictive control (SPC) has produced further research results[6, 7]. Huang and Kadali[6] designed the SPC within the generalized predictive control (GPC) framework. However, no forgetting factor mechanism was considered, which limited the tracking speed of the algorithm parameters. In [8], a novel recursive predictor-based subspace identification method was presented to identify linear time-invariant systems with multiple inputs and multiple outputs; the method runs in real time and can operate in either open or closed loop. Alenanya and Shang[9] developed a recursive version of the constrained least squares (CLS) subspace algorithm. The advantage of this approach is that the CLS subspace algorithm is a linear regression problem, so its recursive form can be obtained without updating the observability matrix or recomputing SVD or QR decompositions[9]. Many further SPC methods have been studied in recent years[10-12].

However, the SPC method is rarely applied in actual industrial processes. In this paper, a subspace predictive control algorithm is derived. Considering the time-varying characteristics of the lime calcination process, a variable-forgetting-factor recursive strategy, the adaptive subspace predictive control (ASPC) strategy, is proposed. This strategy uses the model-matching error to calculate a time-varying forgetting factor, which is embedded in the construction of the data matrices. The data thus track the current system information more closely, and the optimal control is obtained. Finally, an incremental control law is employed to eliminate the steady-state error.

2 Problem statement

We consider the following linear time-invariant (LTI) discrete-time system subject to noise:

$$\left\{ {\matrix{{x(k + 1) = Ax(k) + Bu(k) + Ke(k)} \hfill \cr {y(k) = Cx(k) + Du(k) + e(k)} \hfill \cr } } \right.$$
(1)

where k denotes the k-th sampling instant and K is the steady-state Kalman gain. \(u(k) \in {{\bf{R}}^{{n_u}}},\,x(k) \in {{\bf{R}}^{{n_x}}},\,y(k) \in {{\bf{R}}^{{n_y}}}\) are the input, the unmeasurable state, and the output, respectively. {e(k)} is zero-mean Gaussian white noise, and A, B, C, D are matrices of compatible dimensions.

For system (1), the control structure is shown in Fig. 1, and the input and output data sets are

$$\left({\left[ {\matrix{{u(1)} \cr {u(2)} \cr \vdots \cr {u(n)} \cr } } \right],\,\left[ {\matrix{{y(1)} \cr {y(2)} \cr \vdots \cr {y(n)} \cr } } \right]} \right)$$
Fig. 1

The SPC system

Here a predictor-based method is adopted that recursively identifies \(L_\omega\) and \(L_u\) online, which avoids an explicit modeling step. The process can be described as finding the optimal prediction of the future outputs from the past input/output matrix \(W_p\) and the future input matrix \(U_f\). This can be expressed as a linear predictor:

$${\hat Y_f} = {L_\omega }{W_p} + {L_u}{U_f}.$$
(2)

The least-squares prediction of \(Y_f\) is \({\hat Y_f}\), which can be obtained by solving the following minimum-variance problem:

$$\mathop {\min }\limits_{{L_\omega },\,{L_u}} ||{Y_f} - ({L_\omega }\quad {L_u})\left({\matrix{{{W_p}} \hfill \cr {{U_f}} \hfill \cr } } \right)||_F^2.$$
(3)
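As an aside for readers implementing (2) and (3): when the stacked regressor has full row rank, the Frobenius-norm problem (3) is an ordinary least-squares problem solved by a pseudo-inverse. The following Python sketch illustrates this on synthetic data; all dimensions and the noiseless construction are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper):
# W_p stacks past outputs/inputs, U_f stacks future inputs; columns are data windows.
W_p = rng.standard_normal((6, 50))
U_f = rng.standard_normal((3, 50))
L_w_true = rng.standard_normal((4, 6))
L_u_true = rng.standard_normal((4, 3))
Y_f = L_w_true @ W_p + L_u_true @ U_f      # noiseless future outputs

# Solve min_{L_w, L_u} || Y_f - [L_w  L_u] [W_p; U_f] ||_F^2   (equation (3))
Phi = np.vstack([W_p, U_f])                # stacked regressor
L = Y_f @ np.linalg.pinv(Phi)              # least-squares solution
L_w, L_u = L[:, :6], L[:, 6:]

Y_hat = L_w @ W_p + L_u @ U_f              # linear predictor, equation (2)
```

With noise-free data and a full-row-rank regressor, the recovered predictor matrices coincide with the generating ones; with noisy data they are the least-squares estimates.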

3 ASPC with time-varying forgetting factor

\(L_\omega\) and \(L_u\) play an important role in solving for the predictive control law, so the foremost problem is to obtain accurate estimates of these two matrices. In each iteration, the model is updated by adding new data[13, 14]. On this basis, a novel recursive subspace identification algorithm with forgetting factors is proposed.

3.1 Hankel matrices with time-varying forgetting factor

In this paper, we calculate the time-varying forgetting factor from the model-matching error. First, we take the current model error and a weighted accumulation of past errors as a measure of how well the model matches. Define the model-matching degree as

$$\Delta (k) = \alpha {e^2}(k) + \beta \sum\limits_{j = 1}^L {{\theta ^j}{e^2}(k - j)} $$
(4)

where \(e(k) = \hat y(k) - y(k)\), \(\hat y(k)\) is the predicted output at time k, y(k) is the actual output at time k, α is the weight of the current error, β is the combination weight of past errors, θ is the memory parameter of the matching error, and L is the length of the past-error window. Then we define the time-varying forgetting factor \(\lambda(k) = \varepsilon/\Delta(k)\), where \(\varepsilon < \min \Delta(k)\) represents the permissible level of model-matching error.

When the model-matching error is large, the forgetting factor significantly accelerates the forgetting of old data, so the model is corrected by the new data. When the matching error is close to the permissible level, the model is corrected only slightly[15].
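A minimal sketch of this mechanism follows; the parameter values α, β, θ, L and ε are illustrative assumptions, not tuned values from the paper.

```python
import numpy as np

def forgetting_factor(errors, alpha=0.5, beta=0.5, theta=0.9, L=10, eps=1e-3):
    """Time-varying forgetting factor lambda(k) = eps / Delta(k), with
    Delta(k) from equation (4). `errors` holds e(k-L), ..., e(k).
    All parameter defaults are illustrative assumptions."""
    e = np.asarray(errors, dtype=float)
    e_k = e[-1]                                  # current error e(k)
    past = e[-1 - L:-1][::-1]                    # e(k-1), ..., e(k-L)
    weights = theta ** np.arange(1, len(past) + 1)
    delta = alpha * e_k ** 2 + beta * np.sum(weights * past ** 2)
    return eps / delta

# Large recent error -> small lambda: old data are forgotten quickly.
lam_big_error = forgetting_factor([0.0] * 10 + [1.0])
# Small recent error -> larger lambda: the model is corrected only slightly.
lam_small_error = forgetting_factor([0.0] * 10 + [0.05])
```

The two calls reproduce the qualitative behavior described above: a larger matching error yields a smaller λ(k).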

We construct the new Hankel matrices \(\tilde U_p^{\lambda (k)}\) and \(\tilde Y_p^{\lambda (k)}\) from the new input u(k) and output y(k):

$$\tilde U_p^{\lambda (k)} = \left[ {\matrix{{\Pi {{(k)}^k}u(0)} \hfill & \cdots \hfill & {\Pi {{(k)}^{i - 1}}u(k - i + 1)} \hfill \cr {\Pi {{(k)}^{k - 1}}u(1)} \hfill & \cdots \hfill & {\Pi {{(k)}^{i - 2}}u(k - i + 2)} \hfill \cr \vdots \hfill & \vdots \hfill & \vdots \hfill \cr {\Pi {{(k)}^{k - i + 1}}u(i - 1)} \hfill & \cdots \hfill & {u(k)} \hfill \cr } } \right]$$
(5)
$$\tilde Y_p^{\lambda (k)} = \left[ {\matrix{{\Pi {{(k)}^k}y(0)} \hfill & \cdots \hfill & {\Pi {{(k)}^{i - 1}}y(k - i + 1)} \hfill \cr {\Pi {{(k)}^{k - 1}}y(1)} \hfill & \cdots \hfill & {\Pi {{(k)}^{i - 2}}y(k - i + 2)} \hfill \cr \vdots \hfill & \vdots \hfill & \vdots \hfill \cr {\Pi {{(k)}^{k - i + 1}}y(i - 1)} \hfill & \cdots \hfill & {y(k)} \hfill \cr } } \right]$$
(6)

where \(\Pi(k)^i = \lambda(k)\lambda(k-1) \cdots \lambda(k-i+1)\). The new Hankel matrices can also be written as \(\tilde Y_p^{\lambda (k)} = {\Lambda _p}(k){Y_p}\Lambda (k)\) and \(\tilde U_p^{\lambda (k)} = {\Lambda _m}(k){U_p}\Lambda (k)\), where m and p are the corresponding dimensions, and \(\Lambda_p(k)\), \(\Lambda_m(k)\) and \(\Lambda(k)\) are

$${\Lambda _p}(k) = {\rm{diag}}\{ \Pi {(k)^{i - 1}}{I_p}, \cdots, \Pi {(k)^1}{I_p},{I_p}\} $$
(7)
$${\Lambda _m}(k) = {\rm{diag}}\{ \Pi {(k)^{i - 1}}{I_m}, \cdots, \Pi {(k)^1}{I_m},{I_m}\} $$
(8)
$$\Lambda (k) = {\rm{diag}}\{ \Pi {(k)^{i - 1}}, \cdots, \Pi {(k)^1},1\}.$$
(9)

\(\tilde U_f^{\lambda (k)}\) and \(\tilde Y_f^{\lambda (k)}\) are defined analogously.
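As an illustration of (5)-(9), the weighted Hankel matrix for a scalar channel can be built by multiplying each sample u(t) by the accumulated weight \(\Pi(k)^{k-t}\). The dimensions and the constant λ sequence below are illustrative assumptions:

```python
import numpy as np

def weighted_hankel(signal, i, lambdas):
    """Exponentially weighted past-data Hankel matrix, equations (5)-(9).
    `signal` is u(0..k) or y(0..k) (scalar channel for simplicity);
    `lambdas` holds lambda(0..k). Weight Pi(k)^j = lambda(k)...lambda(k-j+1)."""
    k = len(signal) - 1
    ncols = k - i + 2                       # columns for samples 0 .. k-i+1
    # Pi(k)^j for j = 0..k: cumulative products of the most recent lambdas
    pi = np.concatenate(([1.0], np.cumprod(np.asarray(lambdas)[::-1])))
    H = np.empty((i, ncols))
    for row in range(i):
        for col in range(ncols):
            t = row + col                   # sample index of u(t)
            H[row, col] = pi[k - t] * signal[t]
    return H

u = np.arange(1.0, 8.0)                     # u(0..6), so k = 6
H = weighted_hankel(u, i=3, lambdas=np.full(7, 0.9))
```

The bottom-right entry is the newest sample u(k) with weight 1, and the top-left entry is the oldest sample u(0) with the heaviest discount, matching the pattern of (5).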

3.2 ASPC control law design

Consider the following objective function, taking \(N_1 = 1\):

$$\matrix{{J = \sum\limits_{j = 1}^{{N_2}} {{{({r_{k + j}} - {{\hat y}_{k + j|k}})}^2}} + \sum\limits_{j = 1}^{{N_u}} {\lambda {{({u_{k + j - 1}})}^2}} = } \hfill \cr {{{({r_f} - {{\hat y}_f})}^{\rm{T}}}({r_f} - {{\hat y}_f}) + u_f^{\rm{T}}(\lambda I){u_f}.} \hfill \cr }$$
(10)

Here, the future output range equals the prediction horizon, from k + 1 to \(k + N_2\), and the future input range equals the control horizon, from k to \(k + N_u - 1\). \(r_f\) is the reference input. In the subspace framework, the optimal prediction of the future output vector can be represented in terms of the past data and the future input.

$${\hat y_f} = {L_w}{w_p} + {L_u}{u_f}$$
(11)

where \({w_p} = {[{y_{k - i + 1}} \cdots {y_{k - 1}}\;{y_k}\;{u_{k - i}} \cdots {u_{k - 2}}\;{u_{k - 1}}]^{\rm{T}}}\) and \({u_f} = {[{u_k}\;{u_{k + 1}} \cdots {u_{k + {N_u} - 1}}]^{\rm{T}}}\). Substituting (11) into the objective function (10), we have

$$J = \sum\limits_{j = 1}^{{N_2}} {{{({r_{k + j}} - ({L_w}{w_p} + {L_u}{u_f}))}^2}} + \sum\limits_{j = 1}^{{N_u}} {\lambda {{({u_{k + j - 1}})}^2}}.$$
(12)

Over the finite horizon \(\{N_2, N_u\}\), we set the derivative of J to zero and obtain the SPC control law:

$${u_f} = {(\lambda I + L_u^{\rm{T}}{L_u})^{ - 1}}L_u^{\rm{T}}({r_f} - {L_w}{w_p})$$
(13)

where \(w_p = W_p(:,1)\) is the controller input vector. The ASPC control method with time-varying forgetting factor is as follows:

  • Step 1. Construct the Hankel matrices \({U_p},\,{U_f},\,{Y_p},\,{Y_f}\) and \({W_p} = \left[ {\matrix{{{Y_p}} \cr {{U_p}} \cr } } \right]\)

  • Step 2. Calculate

    $$\left[ {\matrix{{{W_p}} \hfill \cr {{U_f}} \hfill \cr {{Y_f}} \hfill \cr } } \right] = \left[ {\matrix{{{R_{11}}} \hfill & 0 \hfill & 0 \hfill \cr {{R_{21}}} \hfill & {{R_{22}}} \hfill & 0 \hfill \cr {{R_{31}}} \hfill & {{R_{32}}} \hfill & {{R_{33}}} \hfill \cr } } \right]\left[ {\matrix{{Q_1^{\rm{T}}} \hfill \cr {Q_2^{\rm{T}}} \hfill \cr {Q_3^{\rm{T}}} \hfill \cr } } \right].$$
    (14)
  • Step 3. Calculate \(L = [{L_w}\quad {L_u}] = [{R_{31}}\quad {R_{32}}]{\left[ {\matrix{{{R_{11}}} & 0 \cr {{R_{21}}} & {{R_{22}}} \cr } } \right]^\dagger }\) and get the predictive controller parameters:

    $$\matrix{{{L_w} = L(:,1:N(m + l))} \hfill \cr {{L_u} = L(:,N(m + l) + 1:{\rm{end}}).} \hfill \cr }$$
    (15)
  • Step 4. Construct the controller input:

    $${w_p} = {[{y_{k - i + 1}} \cdots {y_{k - 1}}\;{y_k}\;{u_{k - i}} \cdots {u_{k - 2}}\;{u_{k - 1}}]^{\rm{T}}}.$$
    (16)
  • Step 5. Calculate the predictive control sequence \({u_f} = {(\lambda I + L_u^{\rm{T}}{L_u})^{ - 1}}L_u^{\rm{T}}({r_f} - {L_w}{w_p})\) and apply the first control move \(u_f(1)\).

  • Step 6. Measure the new system output and return to Step 2.
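Steps 1-5 can be sketched numerically as follows. This is a minimal illustration: the dimensions, the synthetic noise-free data, and the weight λ are assumptions, and the RQ factorization of (14) is obtained as the QR factorization of the transposed data matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions (not from the paper)
n_wp, n_uf, n_yf, N = 6, 3, 3, 60
W_p = rng.standard_normal((n_wp, N))
U_f = rng.standard_normal((n_uf, N))
Y_f = (rng.standard_normal((n_yf, n_wp)) @ W_p
       + rng.standard_normal((n_yf, n_uf)) @ U_f)   # noise-free outputs

# Step 2: RQ factorization of the stacked data (equation (14)),
# computed via the QR factorization of the transpose: H = R.T @ Q.T.
H = np.vstack([W_p, U_f, Y_f])
Q, R = np.linalg.qr(H.T)
Rt = R.T                                 # lower-triangular factor
R11 = Rt[:n_wp, :n_wp]
R21 = Rt[n_wp:n_wp + n_uf, :n_wp]
R22 = Rt[n_wp:n_wp + n_uf, n_wp:n_wp + n_uf]
R31 = Rt[n_wp + n_uf:, :n_wp]
R32 = Rt[n_wp + n_uf:, n_wp:n_wp + n_uf]

# Step 3: predictor matrices L_w, L_u
top = np.block([[R11, np.zeros((n_wp, n_uf))], [R21, R22]])
L = np.hstack([R31, R32]) @ np.linalg.pinv(top)
L_w, L_u = L[:, :n_wp], L[:, n_wp:]

# Step 5: SPC control law (13) for a reference r_f
lam = 0.1                                # illustrative control weight
w_p = W_p[:, [0]]
r_f = np.ones((n_yf, 1))
u_f = np.linalg.solve(lam * np.eye(n_uf) + L_u.T @ L_u,
                      L_u.T @ (r_f - L_w @ w_p))
```

Since the synthetic outputs lie exactly in the row space of the past and future data, the extracted \(L_w, L_u\) reproduce \(Y_f\) up to numerical precision.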

3.3 Incremental control

The control law above tracks the reference trajectory through the control variable u rather than the control increment Δu, so a steady-state error remains. We therefore use the objective function with the input increment:

$$\matrix{{J = \sum\limits_{j = 1}^{{N_2}} {{{({r_{k + j}} - {{\hat y}_{k + j|k}})}^2}} + \sum\limits_{j = 1}^{{N_u}} {\lambda {{(\Delta {u_{k + j - 1}})}^2}} = } \hfill \cr {{{({r_f} - {{\hat y}_f})}^{\rm{T}}}({r_f} - {{\hat y}_f}) + \Delta u_f^{\rm{T}}(\lambda I)\Delta {u_f}.} \hfill \cr }$$
(17)

Suppose \(e_k\) is integrated noise, that is,

$${e_{k + 1}} = {e_k} + {a_{k + 1}}$$
(18)

where \(a_k\) is a white-noise signal and \(\Delta = 1 - z^{-1}\) is the difference operator, so that \(e_k = a_k/\Delta\). Then we get

$${z_{k + 1}} = A{z_k} + B\Delta {u_k} + {K^f}{a_k}$$
(19)
$$\Delta {y_k} = C{z_k} + D\Delta {u_k} + {a_k}$$
(20)

where \(z_k = x_k - x_{k-1}\). The subspace input-output expression then becomes

$$\Delta {Y_f} = {\Gamma _i}{Z_f} + {H_i}\Delta {U_f} + H_i^s{A_f}$$
(21)
$$\Delta {\hat Y_f} = {\Gamma _i}{z_k} + {H_i}\Delta {u_f} = {L_w}\left[ {\matrix{{\Delta {y_p}} \hfill \cr {\Delta {u_p}} \hfill \cr } } \right] + {L_u}\Delta {u_f}.$$
(22)

The j-step-ahead predictor is

$$\matrix{{{y_{k + j - 1}} - {y_{k - 1}} = (C{A^{j - 1}} + \cdots + C){z_k} + } \hfill \cr {[(C{A^{j - 2}}B + \cdots + D)\Delta {u_k} + \cdots + D\Delta {u_{k + j - 1}}] + } \hfill \cr {[{a_k} + {a_{k + 1}} + \cdots + {a_{k + j - 1}}].} \hfill \cr }$$
(23)

Equation (11) becomes

$${\hat y_f} = {{\bar y}_k} + \Gamma _{{N_2}}^\circ {z_k} + {S_{{N_2},{N_u}}}\Delta {u_f}$$
(24)

where \(\Gamma _{N_2 }^\circ\) is the improved generalized observability matrix and \(S_{N_2, N_u }\) is the \((N_2 m \times N_u l)\)-dimensional dynamic matrix, with \({{\bar y}_k} = \left[ {\matrix{{{y_k}} \cr {{y_k}} \cr \vdots \cr {{y_k}} \cr } } \right],\;\Gamma _{{N_2}}^\circ = \left[ {\matrix{C \cr {CA + C} \cr \vdots \cr {C{A^{{N_2} - 1}} + \cdots + C} \cr } } \right]\) and \({S_{{N_2},{N_u}}} = {L_u}(1:{N_2}m,\,1:{N_u}l)\left[ {\matrix{{{I_l}} & \cdots & 0 \cr \vdots & \ddots & \vdots \cr {{I_l}} & \cdots & {{I_l}} \cr } } \right]\). The matrix \(L_w^\circ\) is constructed from \(L_\omega\):

$$L_w^\circ (j,:) = \sum\limits_{q = 1}^j {{L_w}(q,:)} $$
(25)

where \(1 \leqslant j \leqslant N_2\), and F is the zero-input response:

$$F = {{\bar y}_k} + L_w^\circ \left[ {\matrix{{\Delta {y_p}} \hfill \cr {\Delta {u_p}} \hfill \cr } } \right]$$
(26)

Substituting the prediction equation (24) into the objective function (17) yields

$$\matrix{{J = {{({r_f} - F - {S_{{N_2},{N_u}}}\Delta {u_f})}^{\rm{T}}}({r_f} - F - {S_{{N_2},{N_u}}}\Delta {u_f}) + } \hfill \cr {\Delta u_f^{\rm{T}}(\lambda I)\Delta {u_f}.} \hfill \cr }$$
(27)

Setting the derivative of J to zero gives the incremental control law:

$$\Delta {u_f} = {(S_{{N_2},{N_u}}^{\rm{T}}{S_{{N_2},{N_u}}} + \lambda I)^{ - 1}}S_{{N_2},{N_u}}^{\rm{T}}({r_f} - F).$$
(28)

Because only \(\Delta u_f(1)\) is executed and the sequence is recalculated at every sampling interval, within each interval we only need \(\Delta {u_k} = \Delta {u_f}(1) = {m_l}({r_f} - F)\), where \(m_l\) is the first row of the matrix \({(S_{{N_2},{N_u}}^{\rm{T}}{S_{{N_2},{N_u}}} + \lambda I)^{ - 1}}S_{{N_2},{N_u}}^{\rm{T}}\). So \(u_k\) is

$${u_k} = {u_{k - 1}} + \Delta {u_k}.$$
(29)
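The receding-horizon computation in (28) and (29) can be sketched as follows; the dynamic matrix, horizons, and weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

N2, Nu = 8, 3                               # prediction / control horizons (illustrative)
S = rng.standard_normal((N2, Nu))           # dynamic matrix S_{N2,Nu} (assumed known)
r_f = np.ones((N2, 1))                      # reference trajectory
F = rng.standard_normal((N2, 1))            # zero-input (free) response
lam = 0.5                                   # control weighting

# Incremental control law, equation (28)
du_f = np.linalg.solve(S.T @ S + lam * np.eye(Nu), S.T @ (r_f - F))

# Receding horizon: only the first increment is applied, equation (29)
m_l = (np.linalg.inv(S.T @ S + lam * np.eye(Nu)) @ S.T)[0:1, :]   # first row
du_k = (m_l @ (r_f - F)).item()
u_prev = 0.0                                # previous control (illustrative)
u_k = u_prev + du_k
```

Only the first increment of the optimized sequence is applied, and the whole optimization is repeated at the next sampling instant with fresh data.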

4 Simulation study

4.1 Rotary kiln model

The rotary kiln is key equipment in the metallurgical industry. Product quality, energy consumption, production safety and cost are directly related to its running state. The calcination temperature in the rotary kiln is an important process parameter that plays a decisive role in ensuring product quality. The lime kiln under study has a capacity of 600 t/d and dimensions of 4.0 m × 60 m. The main factor affecting the quality of lime is the calcining-zone temperature, whose ideal range is 1260-1340°C. In this paper, the calcining-zone temperature is taken as the controlled variable, and the gas flow is taken as the control input.

The calcining-zone state-space model of the rotary kiln is built using the PO-MOESP subspace method.

Its corresponding controlled auto-regressive integrated moving average (CARIMA) model is

$$G({z^{ - 1}}) = {{0.02283{z^{ - 1}} - 0.04466{z^{ - 2}} + 0.02179{z^{ - 3}}} \over {1 - 2.911{z^{ - 1}} + 2.824{z^{ - 2}} - 0.912{z^{ - 3}}}}.$$
(30)
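As a quick open-loop check, model (30) can be simulated directly in its difference-equation form. The coefficients below are taken from (30), while the unit-step input and horizon length are illustrative choices:

```python
import numpy as np

# Difference-equation form of (30):
# y(k) = 2.911 y(k-1) - 2.824 y(k-2) + 0.912 y(k-3)
#      + 0.02283 u(k-1) - 0.04466 u(k-2) + 0.02179 u(k-3)
a = [2.911, -2.824, 0.912]        # denominator coefficients moved to the right-hand side
b = [0.02283, -0.04466, 0.02179]  # numerator coefficients

N = 50
u = np.ones(N)                    # unit step in the gas flow (illustrative input)
y = np.zeros(N)
for k in range(1, N):
    y[k] = sum(a[j] * y[k - 1 - j] for j in range(3) if k - 1 - j >= 0) \
         + sum(b[j] * u[k - 1 - j] for j in range(3) if k - 1 - j >= 0)
```

Such an open-loop response is useful mainly for sanity-checking the identified coefficients; the controlled behavior is what the simulations below evaluate.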

4.2 Simulation results

Set the target temperature to 1200°C and the simulation time to 500 min. According to the demands of the industrial site, constrain u to the range 0 to 11 000 m³/h and Δu to the range −1000 to 1000 m³/h. The control effect is shown in Fig. 2. The simulation result indicates that the ASPC method achieves good control performance and eliminates the steady-state error. We then compare it with GPC in Fig. 3. Both methods eliminate the steady-state error, but the adaptive subspace predictive control responds faster than GPC.

Fig. 2

The diagram of ASPC control effect

Fig. 3

The comparison diagram of ASPC and GPC

Suppose the set temperature changes from 1200°C to 1300°C at 200 min, and back to 1200°C at 350 min. The control effect is shown in Fig. 4. As can be seen, the subspace method has better tracking ability.

Fig. 4

The comparison diagram of set temperature changed

In Fig. 5, the controlled plant changes at time 300 min: \(B(z^{-1})\) is changed into

$$B({z^{ - 1}}) = 0.02283 - 0.54466{z^{ - 1}} + 0.02179{z^{ - 2}}.$$
Fig. 5

The comparison diagram of controlled object changed

When the model mismatches, the adaptive subspace predictive control achieves better results. Fig. 6 studies the model-matching error Δ(k) when λ is 1, 0.9 and time-varying, respectively.

Fig. 6

The comparison diagram of model matching errors

5 Conclusions

Focusing on the performance degradation caused by model mismatch, we proposed a variable-forgetting-factor subspace predictive control strategy. The time-varying forgetting factor is calculated from the model-matching error. After designing the control law, an incremental control law is derived, which eliminates the steady-state error. Finally, a simulation taking the rotary kiln as the controlled object was carried out.