1 Introduction

Designing a controller with learning ability has long been an interesting topic for control engineers[1]. Against this background, iterative learning control (ILC), an effective intelligent control technique, came up in the 1970s[2]. Nowadays, ILC has been extensively applied in practice, e.g., in industrial robotics, intelligent transport systems, and biomedical engineering[3-8]. Meanwhile, theoretical research on ILC itself has developed greatly; e.g., the modeling has been extended from ordinary differential (or difference) systems to distributed parameter systems, from discrete-time systems to continuous-time systems, and from deterministic systems to stochastic ones[9-15].

On the other hand, in recent years, as most physical or chemical processes can be described by distributed parameter systems, ILC for distributed parameter systems has become a research hot-spot. In [16], ILC for first-order hyperbolic distributed parameter systems was discussed by using finite approximation. A class of second-order hyperbolic elastic systems was studied in [17] by using a differential-difference iterative learning algorithm. Xu et al.[18] designed P-type and D-type iterative learning algorithms based on semigroup theory for a class of parabolic distributed parameter systems. A tension control system was studied in [19] by using a PD-type learning algorithm. In [20], by employing a P-type learning control algorithm, a class of single-input single-output coupled nonlinear distributed parameter systems (consisting of hyperbolic and parabolic equations) was studied, and the convergence conditions, convergence speed, and robustness of the iterative learning algorithm were discussed. Recently, without any simplification or discretization of the 3D dynamics in the time, space, and iteration domains, Huang et al.[21] proposed ILC for a class of inhomogeneous heat equations.

However, it should be pointed out that, compared with discrete systems described by ordinary difference equations, studies on ILC for discrete distributed parameter systems governed by partial difference equations are very limited. ILC for the traffic density of a freeway model described by first-order discrete hyperbolic systems, including tracking and identification, was studied in [5-7]. ILC for spatiotemporal dynamics discretized by the Crank-Nicolson scheme was discussed in [22], using linear matrix inequality conditions. In fact, applications and related topics, including the boundedness, stability, and oscillation of discrete distributed parameter systems governed by partial difference equations, have been studied by a number of authors[23-26]. In particular, parabolic-type partial difference equations were considered in [27, 28].

In this paper, the ILC technique is applied to a class of discrete parabolic distributed parameter systems described by partial difference equations, in which the coefficients are uncertain but bounded. Under given initial and boundary conditions, a P-type iterative learning law is proposed for these systems, and a detailed convergence analysis of the tracking error is given. We use neither linear matrix inequality conditions nor finite approximations, although the iterative process involves three different domains (time, space, and iteration). Instead, we use the discrete Green formula and an analogous discrete Gronwall inequality to estimate the states of the learning system, and then apply a fixed point argument to obtain the convergence results.

The remainder of this paper is organized as follows. In Section 2, the problem formulation and some preliminaries are described. In Section 3, sufficient conditions guaranteeing the convergence of the tracking error of the ILC system are established with rigorous analysis. Numerical simulations are presented in Section 4, and Section 5 concludes the paper.

2 Preliminaries

Consider the following single-input single-output discrete parabolic distributed parameter system governed by partial difference equations

$$\left\{ \begin{aligned} &\Delta_2 q(i,j) = a(j)\,\Delta_1^2 q(i-1,j) + b(j)\,q(i,j) + c(j)\,u(i,j)\\ &y(i,j) = l(j)\,q(i,j) + m(j)\,u(i,j) \end{aligned} \right.$$
(1a, 1b)

where i and j are the spatial and time discrete variables, respectively, with 1 ≤ i ≤ I and 0 ≤ j ≤ J for given integers I and J; {a(j)}, {b(j)}, {c(j)}, {l(j)}, and {m(j)} are uncertain bounded real sequences with a(j) > 0; and q(i, j), u(i, j), y(i, j) ∈ R (for a fixed pair i, j) denote the state, control input, and output of the discrete system (1), respectively. In system (1), the partial differences are defined as usual, i.e.,

$$\left\{ \begin{aligned} &\Delta_1 q(i,j) = q(i+1,j) - q(i,j)\\ &\Delta_2 q(i,j) = q(i,j+1) - q(i,j)\\ &\Delta_1^2 q(i-1,j) = \Delta_1\left(\Delta_1 q(i-1,j)\right) = q(i+1,j) - 2q(i,j) + q(i-1,j). \end{aligned} \right.$$
(2)

The zero boundary condition of system (1) is

$$q(0,j) = 0 = q(I + 1,j),\quad 1 \leq j \leq J$$
(3)

and the initial condition is

$$q(i,0) = \varphi (i,0),\quad 1 \leq i \leq I.$$
(4)
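To make the recursion concrete, the following Python sketch (ours, not part of the paper; the function name step and the array layout are illustrative) advances the state of (1a) by one time step under the zero boundary condition (3):

```python
import numpy as np

def step(q, u, a, b, c):
    """One time step of (1a): maps q(., j) to q(., j+1).
    q and u hold indices 0..I+1; entries 0 and I+1 are the boundary values."""
    q_next = np.zeros_like(q)
    for i in range(1, len(q) - 1):
        lap = q[i + 1] - 2.0 * q[i] + q[i - 1]   # Delta_1^2 q(i-1, j)
        q_next[i] = q[i] + a * lap + b * q[i] + c * u[i]
    q_next[0] = q_next[-1] = 0.0                 # zero boundary condition (3)
    return q_next
```

Starting from q(i, 0) = φ(i, 0) as in (4), repeated calls generate q(i, j) for j = 1, ⋯, J.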
  • Remark 1. From (1a), if $q^j = {\rm col}(q(1,j), \cdots, q(I,j))$ and $u^j = {\rm col}(u(1,j), \cdots, u(I,j))$, then we have the vector recurrence relation

    $$q^{j+1} - q^j = A_j q^j + c(j)\,u^j,\quad j = 0,1,2,\cdots,J$$

    where

    $$A_j = a(j)\begin{pmatrix} -2 & 1 & 0 & \cdots & 0 \\ 1 & -2 & 1 & \cdots & 0 \\ & \ddots & \ddots & \ddots & \\ 0 & \cdots & 1 & -2 & 1 \\ 0 & \cdots & 0 & 1 & -2 \end{pmatrix} + b(j)E$$
    (5)

    with E being the I × I identity matrix. Furthermore, if \(a(j) = a = {{\Delta t} \over {{h^2}}}\) (h is the space stepsize and Δt is the time stepsize, or sampling time), b(j) = Δt × b, and c(j) = Δt × c, then (1a) is a discretization of the following parabolic distributed parameter system

    $${{\partial q(x,t)} \over {\partial t}} = {{{\partial ^2}q(x,t)} \over {\partial {x^2}}} + bq(x,t) + cu(x,t).$$
    (6)

    Generally, to ensure numerical stability, a suitable sampling period must be selected when discretizing a continuous system; for example, the forward difference method requires \({{\Delta t} \over {{h^2}}} < {1 \over 2}\). However, in this paper we only consider the partial difference system (1) itself (with step ratio 1). Examples illustrating the reasonableness of this setting were given in [23], e.g., the temperature distribution of a "very long" thin rod, and a model of population dynamics with spatial migrations. A minimal numerical check of the vector form (5) is sketched below.
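As a spot check of Remark 1 (our sketch; the helper A_matrix is an illustrative name), the following code verifies that the tridiagonal matrix (5) reproduces the pointwise right side of (1a) under the zero boundary condition (3):

```python
import numpy as np

def A_matrix(a, b, I):
    """The matrix A_j of (5): a(j) * tridiag(1, -2, 1) plus b(j) times the identity."""
    T = (np.diag(-2.0 * np.ones(I)) + np.diag(np.ones(I - 1), 1)
         + np.diag(np.ones(I - 1), -1))
    return a * T + b * np.eye(I)

rng = np.random.default_rng(0)
I, a, b, c = 8, 0.3, -0.1, 0.5                     # arbitrary test values
q_in, u_in = rng.standard_normal(I), rng.standard_normal(I)
q = np.concatenate(([0.0], q_in, [0.0]))           # pad with the zero boundary values
lap = q[2:] - 2.0 * q[1:-1] + q[:-2]               # Delta_1^2 q(i-1, j) for i = 1..I
pointwise = a * lap + b * q_in + c * u_in          # right side of (1a)
assert np.allclose(pointwise, A_matrix(a, b, I) @ q_in + c * u_in)
```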

For the controlled object described by system (1), let the desired system output be y_d(i, j). The aim is to seek a corresponding desired input u_d(i, j) such that the output of system (1),

$${y^{\ast}}(i,j) = l(j){q_d}(i,j) + m(j){u_d}(i,j)$$
(7)

approximates the desired output y_d(i, j). Since the system is uncertain, it is not easy to obtain the desired control directly. Instead, we gradually construct the control sequence u_k(i, j) by a learning control method.

In order to obtain the control input sequence u_k(i, j), we use the P-type iterative learning control algorithm

$${u_{k + 1}}(i,j) = {u_k}(i,j) + \gamma (j){e_k}(i,j)$$
(8)

where the tracking error is e_k(i, j) = y_d(i, j) − y_k(i, j), y_k(i, j) is the k-th output corresponding to the k-th input u_k(i, j), and γ(j) is the learning gain.

Assume that in the learning process the state of the system starts from the same initial value in every iteration, i.e.,

$${q_k}(i,0) = \varphi (i,0),\;0 \leq i \leq I,\;k = 1,2, \cdots .$$
(9)
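The overall learning procedure can then be summarized by the following sketch (ours; simulate is an assumed placeholder that rolls system (15) forward from the fixed initial state (9) and returns the output y_k):

```python
import numpy as np

def p_type_ilc(y_d, gamma, simulate, n_iters):
    """P-type ILC (8): u_{k+1}(i, j) = u_k(i, j) + gamma(j) e_k(i, j)."""
    u = np.zeros_like(y_d)              # initial control guess u_1 = 0
    max_errors = []
    for _ in range(n_iters):
        y = simulate(u)                 # output y_k of system (15) under u_k
        e = y_d - y                     # tracking error e_k = y_d - y_k
        max_errors.append(np.abs(e).max())
        u = u + gamma * e               # learning update (8); gamma may vary with j
    return u, max_errors
```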

For convenience, the discrete L2 norm ∥·∥ and the discrete L2(λ) norm ∥·∥_λ used in this paper are defined as follows:

$$\left\Vert g \right\Vert = {\left({\sum\limits_{i = 1}^I {g{{(i)}^2}}} \right)^{{\textstyle{1 \over 2}}}},\quad g(i) \in {\rm{R}}$$
(10)
$${\Vert {{f_k}} \Vert _\lambda} = {(\mathop {\sup}\limits_{0 \leq j \leq J} \{({\Vert {{f_k}(\cdot ,j)} \Vert ^2}{\lambda ^j})\})^{{\textstyle{1 \over 2}}}}$$
(11)

where 0 < λ < 1, f_k(i, j) ∈ R, 1 ≤ i ≤ I, 0 ≤ j ≤ J. According to these definitions, we have

$${\left\Vert {{f_k}} \right\Vert _\lambda} \leq \mathop {\sup}\limits_{0 \leq j \leq J} \left\Vert {{f_k}(\cdot ,j)} \right\Vert \leq {\lambda ^{- J}}{\left\Vert {{f_k}} \right\Vert _\lambda}.$$
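For reference, a direct implementation of the two norms (ours; the array f is assumed to hold f_k(i, j) at f[i-1, j]) reads:

```python
import numpy as np

def l2_norm(g):
    """Discrete L2 norm (10) of g(1), ..., g(I)."""
    return np.sqrt(np.sum(g ** 2))

def lambda_norm(f, lam):
    """Discrete L2(lambda) norm (11): square root of sup_j ||f(., j)||^2 * lam^j."""
    J = f.shape[1] - 1
    return np.sqrt(np.max(np.sum(f ** 2, axis=0) * lam ** np.arange(J + 1)))
```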

To show the convergence of ILC for system (1), two lemmas are given as follows.

  • Lemma 1 (An analogous discrete Gronwall inequality of difference type)[27]. Let {v(i)}, {B(i)}, and {D(i)} be real sequences defined for i ≥ 0 satisfying

    $$v(i + 1) \leq B(i)v(i) + D(i),\quad B(i) \geq 0,\quad i \geq 0.$$
    (12)
    Then

    $$v(j) \leq \prod\limits_{i = 0}^{j - 1} B(i)\,v(0) + \sum\limits_{i = 0}^{j - 1} D(i)\prod\limits_{s = i + 1}^{j - 1} B(s),\quad j \geq 0.$$
    (13)
  • Lemma 2 (Discrete Green's formula)[25]. Under the initial and boundary conditions (3) and (4), for system (1) we have (a numerical spot check is sketched after this lemma)

    $$\sum\limits_{i = 1}^I q(i,j)\,\Delta_1^2 q(i-1,j) = q(I+1,j)\,\Delta_1 q(I,j) - q(1,j)\,\Delta_1 q(0,j) - \sum\limits_{i = 1}^I \left(\Delta_1 q(i,j)\right)^2 = -\sum\limits_{i = 0}^I \left(\Delta_1 q(i,j)\right)^2.$$
    (14)
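The identity (14) is easy to confirm numerically. The following sketch (ours) checks it on random data satisfying the zero boundary condition (3):

```python
import numpy as np

rng = np.random.default_rng(1)
I = 12
q = np.concatenate(([0.0], rng.standard_normal(I), [0.0]))   # q(0, j) = q(I+1, j) = 0
lhs = sum(q[i] * (q[i + 1] - 2 * q[i] + q[i - 1]) for i in range(1, I + 1))
rhs = -sum((q[i + 1] - q[i]) ** 2 for i in range(0, I + 1))
assert np.isclose(lhs, rhs)                                  # both sides of (14) agree
```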

3 Main results

When the control input u(i, j) of system (1) is taken to be u_k(i, j), generated by the above learning control scheme (8), the corresponding system dynamics become

$$\left\{ \begin{aligned} &\Delta_2 q_k(i,j) = a(j)\,\Delta_1^2 q_k(i-1,j) + b(j)\,q_k(i,j) + c(j)\,u_k(i,j)\\ &y_k(i,j) = l(j)\,q_k(i,j) + m(j)\,u_k(i,j). \end{aligned} \right.$$
(15a, 15b)

The initial condition of system (15) is again (9), and the boundary condition of system (15) is zero, i.e.,

$${q_k}(0,j) = 0 = {q_k}(I + 1,j),\quad 1 \leq j \leq J.$$
(16)

3.1 State estimation

In this subsection, in order to obtain convergence conditions for the tracking error, we estimate the state q_k(i, j) of system (15a); a numerical spot check of the resulting bound is sketched after the proof.

  • Proposition 1. Under the initial and boundary conditions (9) and (16), for q_k in (15) we have

    $$\sum\limits_{i = 1}^I {{q_k}{{(i,j + 1)}^2}} \leq {C_1}(j)\sum\limits_{i = 1}^I {q_k^2} (i,j) + {C_2}(j)\sum\limits_{i = 1}^I {u_k^2} (i,j)$$
    (17)

    where

    $$\begin{aligned} C_1(j) &= 1 + 2b(j) + \vert c(j)\vert + 8a^2(j) + 4\left(b(j) - 2a(j)\right)^2\\ C_2(j) &= \vert c(j)\vert + 4c^2(j). \end{aligned}$$
  • Proof. We divide the proof into three steps. First, by (15a) and (2), we have

    $$\matrix{{{q_k}(i,j + 1) =} \hfill \cr{\quad a(j)\Delta _1^2{q_k}(i - 1,j) + \left({1 + b(j)} \right){q_k}(i,j) + c(j){u_k}(i,j).} \hfill \cr}$$
    (18)

    Multiplying the two sides of (18) by q k (i, j) yields

    $$\matrix{{{q_k}(i,j){q_k}(i,j + 1) = a(j){q_k}(i,j)\Delta _1^2{q_k}(i - 1,j) +} \hfill \cr{\quad \quad \quad \quad \left({1 + b(j)} \right)q_k^2(i,j) + c(j){q_k}(i,j){u_k}(i,j).} \hfill \cr}$$
    (19)

    Note that

    $$\matrix{{{{({\Delta _2}{q_k}(i,j))}^2} = {{\left({{q_k}(i,j + 1) - {q_k}(i,j)} \right)}^2} =} \hfill \cr{\quad q_k^2(i,j + 1) - 2{q_k}(i,j){q_k}(i,j + 1) + q_k^2(i,j).} \hfill \cr}$$
    (20)

    Then, from (19) and (20), we obtain

    $$\matrix{{q_k^2(i,j + 1) =} \hfill \cr{\quad {{\left({{\Delta _2}{q_k}(i,j)} \right)}^2} + 2{q_k}(i,j){q_k}(i,j + 1) - q_k^2(i,j) =} \hfill \cr{\quad {{\left({{\Delta _2}{q_k}(i,j)} \right)}^2} + 2a(j){q_k}(i,j)\Delta _1^2{q_k}(i - 1,j) +} \hfill \cr{\quad \left({1 + 2b(j)} \right)q_k^2(i,j) + 2c(j){q_k}(i,j){u_k}(i,j).} \hfill \cr}$$
    (21)

    Second, summing up both sides of (21) from i = 1 to I, we get

    $$\matrix{{\sum\limits_{i = 1}^I {q_k^2(i,j + 1) =}} \hfill \cr{\quad \sum\limits_{i = 1}^I {{{\left({{\Delta _2}{q_k}(i,j)} \right)}^2} + 2a(j)\sum\limits_{i = 1}^I {{q_k}} (i,j)\Delta _1^2{q_k}(i - 1,j) +}} \hfill \cr{\quad \left({1 + 2b(j)} \right)\sum\limits_{i = 1}^I {q_k^2(i,j) + 2c(j)\sum\limits_{i = 1}^I {{q_k}} (i,j){u_k}(i,j) \buildrel \Delta \over =}} \hfill \cr{\quad {\Sigma _1} + {\Sigma _2} + {\Sigma _3} + {\Sigma _4}.} \hfill \cr}$$
    (22)

    For ∑1, by (15a), we have

    $$\matrix{{{\Delta _2}{q_k}(i,j) = \left({b(j) - 2a(j)} \right){q_k}(i,j) + a(j){q_k}(i + 1,j) +} \hfill \cr{\quad \quad \quad \quad \;\;a(j){q_k}(i - 1,j) + c(j){u_k}(i,j).} \hfill \cr}$$
    (23)

    Then,

    $$\matrix{{{\Sigma _1} = \sum\limits_{i = 1}^I {{{\left({{\Delta _2}{q_k}(i,j)} \right)}^2} \leq}} \hfill \cr{\quad \sum\limits_{i = 1}^I {4\{{{\left({b(j) - 2a(j)} \right)}^2}q_k^2(i,j) + {a^2}(j)q_k^2(i + 1,j) +}} \hfill \cr{\quad {a^2}(j)q_k^2(i - 1,j) + {c^2}(j)u_k^2(i,j)\} .} \hfill \cr}$$
    (24)

    Further, by the boundary condition (16), we get

    $$\matrix{{\sum\limits_{i = 1}^I {{a^2}} (j)q_k^2(i + 1,j) + {a^2}(j)q_k^2(i - 1,j) \leq} \hfill \cr{\quad {a^2}(j)[q_k^2(2,j) + q_k^2(3,j) + \cdots + q_k^2(I,j)] +} \hfill \cr{\quad {a^2}(j)[q_k^2(1,j) + q_k^2(2,j) + \cdots + q_k^2(I - 1,j)] \leq} \hfill \cr{\quad \sum\limits_{i = 1}^I {2{a^2}} (j)q_k^2(i,j).} \hfill \cr}$$

    Therefore, by (24), we have

    $$\Sigma_1 \leq \sum\limits_{i = 1}^I \left\{ 4\left[\left(b(j) - 2a(j)\right)^2 + 2a^2(j)\right] q_k^2(i,j) + 4c^2(j)\,u_k^2(i,j) \right\}.$$
    (25)

    By Lemma 2 and a(j) > 0, j ≥ 0, we have

    $$\begin{aligned} \Sigma_2 &= 2a(j)\sum\limits_{i = 1}^I q_k(i,j)\,\Delta_1^2 q_k(i-1,j)\\ &= 2a(j)\left[ q_k(I+1,j)\,\Delta_1 q_k(I,j) - q_k(1,j)\,\Delta_1 q_k(0,j) - \sum\limits_{i = 1}^I \left(\Delta_1 q_k(i,j)\right)^2 \right]\\ &= -2a(j)\sum\limits_{i = 0}^I \left(\Delta_1 q_k(i,j)\right)^2 \leq 0. \end{aligned}$$
    (26)

    For ∑4,

    $$\Sigma_4 = 2c(j)\sum\limits_{i = 1}^I q_k(i,j)\,u_k(i,j) \leq \vert c(j)\vert \sum\limits_{i = 1}^I \left( q_k^2(i,j) + u_k^2(i,j)\right).$$
    (27)

    Finally, substituting the estimates (25)-(27) of ∑1, ∑2, and ∑4 into (22), we obtain

    $$\matrix{{\sum\limits_{i = 1}^I {q_k^2(i,j + 1) \leq}} \hfill \cr{\quad \sum\limits_{i = 1}^I {4[{{(b(j) - 2a(j))}^2} + 2{a^2}(j)]q_k^2(i,j) +}} \hfill \cr{\quad 4{c^2}(j)\sum\limits_{i = 1}^I {u_k^2} (i,j) + \left({1 + 2b(j)} \right)\sum\limits_{i = 1}^I {q_k^2(i,j) +}} \hfill \cr{\quad \vert c(j)\vert \sum\limits_{i = 1}^I {\left({q_k^2(i,j) + u_k^2(i,j)} \right) =}} \hfill \cr{\quad [1 + 2b(j) + \vert c(j)\vert + 8{a^2}(j) +} \hfill \cr{\quad 4{{(b(j) - 2a(j))}^2}]\sum\limits_{i = 1}^I {q_k^2(i,j) +}} \hfill \cr{\quad [\vert c(j)\vert + 4{c^2}(j)]\sum\limits_{i = 1}^I {u_k^2(i,j).}} \hfill \cr}$$
    (28)
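The bound (17) can also be tested numerically. The following sketch (ours; the coefficient values are arbitrary, with a > 0 as required) evaluates both sides of (17) for one time step on random data:

```python
import numpy as np

rng = np.random.default_rng(2)
I, a, b, c = 10, 0.4, -0.2, 0.6                               # arbitrary, a > 0
q = np.concatenate(([0.0], rng.standard_normal(I), [0.0]))    # zero boundary (16)
u = np.concatenate(([0.0], rng.standard_normal(I), [0.0]))
lap = q[2:] - 2.0 * q[1:-1] + q[:-2]                          # Delta_1^2 q_k(i-1, j)
q_next = q[1:-1] + a * lap + b * q[1:-1] + c * u[1:-1]        # one step of (15a)
C1 = 1 + 2 * b + abs(c) + 8 * a ** 2 + 4 * (b - 2 * a) ** 2   # C_1(j) of Proposition 1
C2 = abs(c) + 4 * c ** 2                                      # C_2(j) of Proposition 1
assert np.sum(q_next ** 2) <= C1 * np.sum(q[1:-1] ** 2) + C2 * np.sum(u[1:-1] ** 2)
```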

3.2 Convergence analysis of ILC

In this subsection, with Lemma 1 and Proposition 1, we establish sufficient conditions for the convergence of the ILC system (15) under the initial and boundary conditions (9) and (16). The following theorem is the main result of this paper.

  • Theorem 1. For the ILC system (15) under the initial condition (9) and boundary condition (16), if the gain γ(j) of algorithm (8) satisfies \(|1 - m(j)\gamma (j){|^2} < {1 \over 2},\quad 0 \le j \le J\), then

    $$\mathop {\lim}\limits_{k \rightarrow \infty} {\left\Vert {{e_k}(\cdot ,j)} \right\Vert ^2} = 0,\quad 0 \leq j \leq J.$$
    (29)
  • Proof. In the following, for simplicity of presentation, we denote

    $${\bar u_k}(i,j) \buildrel \Delta \over = {u_{k + 1}}(i,j) - {u_k}(i,j)$$
    (30)
    $${\bar y_k}(i,j) \buildrel \Delta \over = {y_{k + 1}}(i,j) - {y_k}(i,j)$$
    (31)
    $${\bar q_k}(i,j) \buildrel \Delta \over = {q_{k + 1}}(i,j) - {q_k}(i,j).$$
    (32)

    According to the learning control algorithm (8), we have

    $$\matrix{{{e_{k + 1}}(i,j) =} \hfill \cr{\quad {e_k}(i,j) + {y_k}(i,j) - {y_{k + 1}}(i,j) =} \hfill \cr{\quad {e_k}(i,j) + l(j)\left({{q_k}(i,j) - {q_{k + 1}}(i,j)} \right) +} \hfill \cr{\quad m(j)\left({{u_k}(i,j) - {u_{k + 1}}(i,j)} \right) =} \hfill \cr{\quad {e_k}(i,j) + l(j)\left({{q_k}(i,j) - {q_{k + 1}}(i,j)} \right) -} \hfill \cr{\quad m(j)\gamma (j){e_k}(i,j) =} \hfill \cr{\quad \left({1 - m(j)\gamma (j)} \right){e_k}(i,j) +} \hfill \cr{\quad l(j)\left({{q_k}(i,j) - {q_{k + 1}}(i,j)} \right).} \hfill \cr}$$
    (33)

    Let

    $$\lambda_{m\gamma} = \sup\limits_{0 \leq j \leq J}\left(1 - m(j)\gamma(j)\right)^2$$
    (34)
    $$\lambda_l = \sup\limits_{0 \leq j \leq J} l^2(j).$$
    (35)

    Then, (33) implies

    $$e_{k + 1}^2(i,j) \leq 2{\lambda _{m\gamma}}e_k^2(i,j) + 2{\lambda _l}\bar q_k^2(i,j).$$
    (36)

    Summing up both sides of (36) with respect to i, we obtain

    $$\sum\limits_{i = 1}^I e_{k+1}^2(i,j) \leq 2\lambda_{m\gamma}\sum\limits_{i = 1}^I e_k^2(i,j) + 2\lambda_l\sum\limits_{i = 1}^I \bar q_k^2(i,j).$$
    (37)

    By (37), in order to prove Theorem 1, we must estimate \(\bar q_k^2(i,j)\). For the learning system (15), it can be easily verified that \(\bar q_k(i,j)\) satisfies

    $$\left\{ \begin{aligned} &\Delta_2 \bar q_k(i,j) = a(j)\,\Delta_1^2 \bar q_k(i-1,j) + b(j)\,\bar q_k(i,j) + c(j)\,\bar u_k(i,j)\\ &\bar y_k(i,j) = l(j)\,\bar q_k(i,j) + m(j)\,\bar u_k(i,j). \end{aligned} \right.$$
    (38)

    Under the initial condition (9) and the zero boundary condition (16), comparing (38) with (15) and applying Proposition 1, we obtain

    $$\sum\limits_{i = 1}^I {\bar q_k^2} (i,j + 1) \leq {C_1}\sum\limits_{i = 1}^I {\bar q_k^2(i,j) + {C_2}\sum\limits_{i = 1}^I {\bar u_k^2(i,j)}}$$
    (39)

    where

    $$0 \leq C_1 = \sup\limits_{0 \leq j \leq J} \vert C_1(j)\vert,\qquad C_2 = \sup\limits_{0 \leq j \leq J} C_2(j).$$
    (40)

    Then, using Lemma 1 and the identical initialization (9), which gives \(\bar q_k(i,0) = 0\), we have

    $$\sum\limits_{i = 1}^I \bar q_k^2(i,j) \leq C_1^{\,j}\sum\limits_{i = 1}^I \bar q_k^2(i,0) + \sum\limits_{t = 0}^{j-1} C_2 \sum\limits_{i = 1}^I \bar u_k^2(i,t)\,C_1^{\,j-t-1} = \sum\limits_{t = 0}^{j-1} C_2 \sum\limits_{i = 1}^I \bar u_k^2(i,t)\,C_1^{\,j-t-1}.$$
    (41)

    On the other hand, by learning law (8), it is easy to see

    $$\bar u_k^2(i,j) = \left(u_{k+1}(i,j) - u_k(i,j)\right)^2 = \gamma^2(j)\,e_k^2(i,j) \leq \gamma^2 e_k^2(i,j)$$
    (42)

    where \(\gamma \buildrel \Delta \over = \sup_{0 \leq j \leq J} \vert\gamma(j)\vert\).

    Substituting (42) into (41), and according to (37), we can obtain that

    $$\matrix{{\sum\limits_{i = 1}^I {e_{k + 1}^2} (i,j) \leq} \hfill \cr{\quad 2{\lambda _{m\gamma}}\sum\limits_{i = 1}^I {e_k^2} (i,j) + 2{\lambda _l}\left({\sum\limits_{t = 0}^{j - 1} {{C_2}} \sum\limits_{i = 1}^I {{\gamma ^2}e_k^2} (i,t)C_1^{j - t - 1}} \right).} \hfill \cr}$$
    (43)

    Multiplying both sides of (43) by λ^j, where 0 < λ < 1 is chosen such that C₁λ < 1, we get

    $$\begin{aligned} \sum\limits_{i = 1}^I e_{k+1}^2(i,j)\lambda^j &\leq 2\lambda_{m\gamma}\sum\limits_{i = 1}^I e_k^2(i,j)\lambda^j + 2\lambda_l\left(\sum\limits_{t = 0}^{j-1} C_2 \sum\limits_{i = 1}^I \gamma^2 e_k^2(i,t)\,C_1^{\,j-t-1}\right)\lambda^j\\ &\leq 2\lambda_{m\gamma}\sum\limits_{i = 1}^I e_k^2(i,j)\lambda^j + 2\lambda_l\left(\sum\limits_{t = 0}^{j-1} C_2 \sum\limits_{i = 1}^I \gamma^2 e_k^2(i,t)\lambda^t\,C_1^{\,j-t-1}\lambda^{j-t}\right). \end{aligned}$$
    (44)

    Using the definition of norm ∥·∥ λ , (44) becomes

    $$\matrix{{\sum\limits_{i = 1}^I {e_{k + 1}^2} (i,j){\lambda ^j} \leq 2{\lambda _{m\gamma}}\left\Vert {{e_k}} \right\Vert _\lambda ^2 +} \hfill \cr{\quad 2{C_2}{\lambda _l}{\gamma ^2}(\sum\limits_{t = 0}^{j - 1} {\left\Vert {{e_k}} \right\Vert _\lambda ^2{{({C_1}\lambda)}^{j - t - 1}}\lambda}).} \hfill \cr}$$
    (45)

    Since \(\sum _{t = 0}^{j - 1}({({C_1}\lambda)^{j - t - 1}}\lambda) \le {\lambda \over {1 - {C_1}\lambda}}\), we have

    $$\sum\limits_{i = 1}^I {e_{k + 1}^2} (i,j){\lambda ^j} \leq (2{\lambda _{m\gamma}} + {\lambda \over {1 - {C_1}\lambda}} \times 2{C_2}{\lambda _l}{\gamma ^2})\left\Vert {{e_k}} \right\Vert _\lambda ^2.$$
    (46)

    By the condition of Theorem 1, 2λ_{mγ} < 1; hence, by continuity, there exists a sufficiently small λ such that

    $$2\lambda_{m\gamma} + \frac{\lambda}{1 - C_1\lambda} \times 2C_2\lambda_l\gamma^2 < 1.$$
    (47)

    If \(\rho \buildrel \Delta \over = 2{\lambda _{m\gamma}} + {\lambda \over {1 - {C_1}\lambda}} \times 2{C_2}{\lambda _l}{\gamma ^2}\) (a numerical illustration of choosing such a λ is sketched after Remark 3), then (46) means

    $$\left\Vert {{e_{k + 1}}} \right\Vert _\lambda ^2 \leq \rho \left\Vert {{e_k}} \right\Vert _\lambda ^2.$$
    (48)

    Thus, from (48), we have

    $$\left\Vert {{e_{k + 1}}} \right\Vert _\lambda ^2 \leq {\rho ^k}\left\Vert {{e_1}} \right\Vert _\lambda ^2.$$
    (49)

    Therefore, \(||{e_{k + 1}}||_\lambda ^2 \to 0\) as \(k \to \infty\). Finally, by

    $${\left\Vert {{e_k}} \right\Vert _\lambda} \leq \mathop {\sup}\limits_{0 \leq j \leq J} \left\Vert {{e_k}(\cdot ,j)} \right\Vert \leq {\lambda ^{- J}}{\left\Vert {{e_k}} \right\Vert _\lambda}$$

    we obtain

    $$\mathop {\lim}\limits_{k \rightarrow \infty} {\left\Vert {{e_k}(\cdot ,j)} \right\Vert ^2} = 0,\quad 0 \leq j \leq J.$$

  • Remark 2. Here, we only consider a single-input single-output system governed by a partial difference equation together with an output equation. It is not difficult to extend the corresponding results to multi-input multi-output systems described by partial difference equations.

  • Remark 3. Clearly, from the conclusion of Theorem 1 and the definition of the discrete L2 norm, we obtain pointwise convergence of the tracking error, i.e., e_k(i, j) converges to zero asymptotically as k tends to infinity, for all (i, j) with 1 ≤ i ≤ I, 0 ≤ j ≤ J.
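To illustrate the final step of the proof of Theorem 1, the following sketch (ours; all numerical values are illustrative placeholders, not taken from the paper) searches for a λ with C₁λ < 1 that makes the contraction factor ρ of (47) and (48) smaller than 1; such a λ exists whenever 2λ_{mγ} < 1:

```python
import numpy as np

def rho(lam, lam_mg, lam_l, C1, C2, g):
    """Contraction factor of (47): 2*lam_mg + lam / (1 - C1*lam) * 2*C2*lam_l*g**2."""
    return 2 * lam_mg + lam / (1 - C1 * lam) * 2 * C2 * lam_l * g ** 2

lam_mg, lam_l, C1, C2, g = 0.2, 0.25, 6.5, 2.04, 0.8   # illustrative bounds only
for lam in np.geomspace(1e-1, 1e-6, 6):                # try smaller and smaller lambda
    if C1 * lam < 1 and rho(lam, lam_mg, lam_l, C1, C2, g) < 1:
        print("lambda =", lam, "gives rho =", rho(lam, lam_mg, lam_l, C1, C2, g))
        break
```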

4 Simulation

In order to illustrate the effectiveness of the proposed ILC scheme, a numerical example is given as follows. Let the desired output of system (15) be \({y_d}(i,j) = {1 \over 8}j\sin ({{i - 1} \over 5})\), let the space and time ranges be 1 ≤ i ≤ 10 and 1 ≤ j ≤ 100, and let the state initial and boundary values be zero. The system coefficients are a(j) = 0.2 + e^{−2j}, b(j) = −0.3, c(j) = 0.6, l(j) = 0.5 − e^{−3j}, m(j) = 0.8 + e^{−j}, and the gain is γ(j) = 0.8. It is then easy to verify that the condition of Theorem 1 holds: |1 − m(j)γ(j)|² ≤ (1 − 1.8 × 0.8)² ≈ 0.194 < 1/2. The simulation results are shown in Figs. 1-4; a re-implementation sketch is given below.
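The following self-contained sketch (our re-implementation, with variable names of our choosing; it reproduces the qualitative behavior rather than the exact figures) applies the P-type law (8) to system (15) with the stated coefficients:

```python
import numpy as np

I, J, K = 10, 100, 20
ii, jj = np.arange(1, I + 1), np.arange(J + 1)
y_d = (jj / 8.0) * np.sin((ii[:, None] - 1) / 5.0)   # desired output y_d(i, j)

b, c, gamma = -0.3, 0.6, 0.8
a = 0.2 + np.exp(-2.0 * jj)                          # a(j)
l = 0.5 - np.exp(-3.0 * jj)                          # l(j)
m = 0.8 + np.exp(-1.0 * jj)                          # m(j)

u = np.zeros((I, J + 1))                             # u_1 = 0
for k in range(1, K + 1):
    q = np.zeros((I + 2, J + 1))                     # rows 0 and I+1: boundary (16)
    for j in range(J):                               # march (15a) forward in time
        lap = q[2:, j] - 2 * q[1:-1, j] + q[:-2, j]  # Delta_1^2 q_k(i-1, j)
        q[1:-1, j + 1] = q[1:-1, j] + a[j] * lap + b * q[1:-1, j] + c * u[:, j]
    y = l * q[1:-1, :] + m * u                       # output equation (15b)
    e = y_d - y                                      # tracking error e_k
    u = u + gamma * e                                # P-type update (8)
    print(k, np.abs(e).max())                        # max tracking error per iteration
```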

Fig. 1. Desired output \({y_d}(i,j) = {1 \over 8}j\sin ({{i - 1} \over 5})\)

Fig. 2. The twentieth iterative output y_k(i, j) (k = 20)

Fig. 3. The error surfaces at different iterations

Fig. 4. Maximum tracking error versus number of iterations

Fig. 1 shows the desired output surface, and Fig. 2 shows the output surface at the twentieth iteration. Fig. 3 gives the error surfaces at iterations 8, 10, 15, and 20, and Fig. 4 plots the maximum absolute tracking error against the iteration number. Numerically, at iterations 10 and 15, the maximum absolute tracking errors are 1.2182 × 10^{−3} and 3.99 × 10^{−6}, respectively. The effectiveness of the proposed ILC law (8) for the discrete distributed parameter system (1) is thus validated.

5 Conclusions

In this paper, we have considered ILC for a class of discrete parabolic distributed parameter systems with coefficient uncertainty, and a P-type learning law has been established for such systems. Compared with discrete systems described by ordinary difference equations, ILC of discrete parabolic distributed parameter systems is more involved. It has been shown that, under the given conditions, the P-type ILC law guarantees asymptotic convergence of the tracking error in the discrete L2 norm over the entire time interval through the iterative learning process. In the future, the control scheme will be applied to a freeway model with a diffusion term[29].