Structured stabilisation of superlinear delay systems by bounded discrete-time state feedback control

Taking into consideration that the system has different structures in different Markovian modes, this paper studies the structured stabilisation of a class of superlinear hybrid stochastic delay systems by feedback control based on discrete-time state observations. The controller is designed on a bounded state area, rather than on every observable state, in order to reduce the control cost. The time delay is more general in that the classical differentiability assumption is relaxed. Compared with the existing papers on the discrete-state-feedback stabilisation problem, a new method to estimate the difference between the current-time state and the discrete-time state is presented, as a result of which the conditions imposed on the underlying system and the control function are less restrictive. Meanwhile, the Lyapunov functional used in this paper is modified to adapt to this change. Finally, an application to stochastic structured neural networks is given to demonstrate the practicability of the developed theory.

(1.2) Here x(t) ∈ R^d, r(t) is a Markov chain taking values in S, W(t) is a Brownian motion, the non-negative constant δ stands for the system time lag, and t_τ = [t/τ]τ, where [t/τ] is the integer part of t/τ. This stabilisation problem for stochastic systems was first proposed by Mao (2013). Traditionally, the system coefficients f and g should satisfy the linear growth condition (see, e.g., Li and Kou (2017) and You, Liu, Lu, Mao, and Qiu (2015)). But recently, Mei, Fei, Fei, and Mao (2020) eased this restriction and brought this stabilisation problem into the superlinear area. Although the theory developed therein has made great progress and more real models can be included, such as the competitive model (Liu & Bai, 2017; Zhang & Teng, 2011) and the ocean temperature oscillator (Suarez & Schopf, 1988), there are still four questions deserving our further discussion.
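For intuition about the ingredients of (1.2), the sketch below (purely illustrative, not part of the paper's method) samples a path of the mode process r(t), a continuous-time Markov chain with generator Q; states are 0-indexed here only for programming convenience.

```python
import numpy as np

def simulate_markov_chain(Q, r0, T, rng=None):
    """Sample a path of a continuous-time Markov chain with generator Q.

    Q[i][j] (i != j) is the transition rate from state i to state j and
    Q[i][i] = -sum_{j != i} Q[i][j].  Returns the jump times and the
    state held from each jump time onward.
    """
    rng = np.random.default_rng() if rng is None else rng
    Q = np.asarray(Q, dtype=float)
    times, states = [0.0], [r0]
    t, i = 0.0, r0
    while t < T:
        rate = -Q[i, i]
        if rate <= 0.0:                      # absorbing state: stay forever
            break
        t += rng.exponential(1.0 / rate)     # holding time ~ Exp(rate)
        if t >= T:
            break
        probs = Q[i].copy()
        probs[i] = 0.0
        probs /= rate                        # embedded-chain jump probabilities
        i = int(rng.choice(len(probs), p=probs))
        times.append(t)
        states.append(i)
    return times, states
```

A two-state chain with assumed rates q_12 = 1 and q_21 = 2, for example, alternates between the two modes with exponential holding times.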
Q1. Structured stabilisation
Firstly, we emphasise the key ingredient in Mei et al. (2020) for the stabilisation purpose, namely condition (1.3), which is indeed more advanced than the conventional linear growth condition.
However, it is required to hold in all modes; in particular, χ_i3 and χ̄_i3 should be strictly positive for every i ∈ S. This seems a little restrictive in reality, as this structure might be lost in some modes. For example, the system studied in Fei, Hu, Mao, and Shen (2018) loses this structure in mode 1, since χ_13 = χ̄_13 = 0. Thus, to deal with this situation, we need to consider structured stabilisation.
To the best of the authors' knowledge, although structured stability has drawn many researchers' interest (e.g. Fei et al. (2018), Lu, Song, and Zhu (2022) and Shen, Mei, and Deng (2019)), there are few results on structured stabilisation. Recently, however, Shi, Mao, and Wu (2022) made some efforts on this problem. They successfully designed a discrete-time state feedback control for hybrid stochastic differential equations with different structures in different modes. But they did not consider time delay in their systems, which can actually influence the mode-structure classification (see Assumption 4 and Example 1 later on). As a result, the structured stabilisation of hybrid SDDEs deserves our investigation.

Q2. Estimation of discrete-time state observations
Secondly, let us say more about condition (1.3). The reader might wonder why we need to give two similar inequalities at the same time, particularly since the first one can be deduced from the other. This actually arises from the effect of discrete-time state observations. To deal with this effect, we usually decompose the drift coefficient of the controlled system (1.2) as (f(x(t), x(t − δ), t, r(t)) + u(x(t), t, r(t))) + (u(x(t_τ), t, r(t)) − u(x(t), t, r(t))) and hope the second term (or |x(t) − x(t_τ)|) can be made small enough if the observation duration τ is sufficiently small. Currently, one popular method is to estimate the moment of x(t) − x(t_τ) directly. The resulting estimate (see, e.g., equation (56) in Mei et al. (2020) and equation (4.27) in Shi et al. (2022)) then unavoidably forces us to give both inequalities in condition (1.3). But is it possible to modify the estimation so that condition (1.3) can be relaxed? In this paper, we give a positive answer to this question (see Lemma 5). Owing to this modification, only the first inequality of condition (1.3) is required in the stability analysis.
Q3. Bounded-state-area control
Thirdly, we highlight that in many papers studying discrete-state-feedback stabilisation, such as Mao (2013), Mei et al. (2020), Ren, Yin, and Sakthivel (2018), Shi et al. (2022) and You et al. (2015), the control function u(x, t, i) is designed on every observable discrete-time state, such as the linear form ν_i x(t_τ) in the example of Mei et al. (2020). But this sometimes seems a little rough and leads to unnecessary cost. In general, the control cost is proportional to |u(x(t_τ), t, r(t))|, so the cost goes up as the state value |x(t_τ)| increases. In particular, if the initial data is large, the cost in the beginning stage will be relatively high. This begs a question: is it really necessary to impose control on every discrete-time state? The answer, at least in this paper, is negative. We will design the feedback control on a bounded state area (see Rule 1 and Remark 1).

Q4. Variable time delays
Finally, let us comment on the delay function in Mei et al. (2020), which is assumed to be a constant δ. In a slew of real-world SDDE models, the time delay is a variable function of time (see, e.g., Dong and Mao (2022), Gugat and Dick (2011), Gugat, Dick, and Leugering (2013), Gugat and Tucsnak (2011), Min, Xu, Zhang, and Ma (2018), Sun, Sun, and Chen (2020) and Wang, Liu, and Liu (2008)). Therefore, it seems a little unreasonable to keep considering a constant delay. Moreover, rather than the widely imposed condition that the time delay is differentiable with derivative less than one (see, e.g., Min et al. (2018) and Wang et al. (2008)), in this paper we consider the time delays recently studied in Dong and Mao (2022), which meet a weaker assumption (namely Assumption 1). This allows us to include more practical time delays, such as the periodic switching delay (Gugat & Tucsnak, 2011) and the sawtooth delay (Sun et al., 2020). Differently from Dong and Mao (2022), however, the delay function here no longer needs to be bounded below by a positive number.
This paper is devoted to addressing these four issues. In theory, the conditions on the original system and the control function are less restrictive. In practice, not only can a much wider class of hybrid stochastic systems be covered, but the control costs can also be reduced significantly.

Notation
Throughout this paper, we work on a complete probability space (Ω, F, P) with a filtration {F_t}_{t≥0} satisfying the usual conditions (that is, it is increasing and right-continuous, and F_0 contains all P-null sets). We let W(t) = (W_1(t), . . ., W_m(t))^T be an m-dimensional Brownian motion, and r(t) a right-continuous Markov chain taking values in a finite state space S = {1, . . ., N} with transition rate matrix Q = (q_ij)_{N×N} given by

P(r(t + ϵ) = j | r(t) = i) = q_ij ϵ + o(ϵ) if i ≠ j, and 1 + q_ii ϵ + o(ϵ) if i = j,

as ϵ ↓ 0. Here q_ij ≥ 0 is the transition rate from i to j if i ≠ j, while q_ii = −∑_{j≠i} q_ij. We assume that the Markov chain r(t) and the Brownian motion W(t) are independent. We consider the hybrid SDDE

dx(t) = f(x(t), x(t − δ(t)), t, r(t)) dt + g(x(t), x(t − δ(t)), t, r(t)) dW(t). (2.1)

Here, δ : R_+ → [0, ∆] denotes the system delay, f : R^d × R^d × R_+ × S → R^d is the drift coefficient and g : R^d × R^d × R_+ × S → R^{d×m} is the diffusion coefficient. As a standing hypothesis, we assume that the coefficients f(x, y, t, i) and g(x, y, t, i) are locally Lipschitz continuous in x and y (see Theorem 3.15 in Mao and Yuan (2006)). In order to drive this equation, we need to know the initial data, which is given by the segment {x(t) : −∆ ≤ t ≤ 0} = ξ. The delay function considered in this paper should satisfy the following assumption, which is clearly less restrictive than the widely imposed differentiability condition.
Assumption 1. Assume that the delay function δ(t) is a Borel measurable function satisfying the measure condition of Dong and Mao (2022) with constant ∆*, where Leb(·) denotes the Lebesgue measure on the real line.

Assumption 1 is not so strong and can be met by many time-variable delay functions in practice. For example, a piecewise-defined delay on the intervals [2k, 2k + 2) satisfies it with ∆* = 2. Moreover, if δ(t) is a Lipschitz continuous function with Lipschitz coefficient ĥ ∈ (0, 1), then Assumption 1 is satisfied with ∆* = 1/(1 − ĥ). For more details about Assumption 1, we refer the reader to Dong and Mao (2022). Differently from that paper, however, the delay function δ(t) considered here does not need to be bounded below by a positive constant. Next, we present a useful lemma to tackle the time-delay effect under our new Assumption 1. It can be shown by an analysis similar to that of Lemma 2.2 in Dong and Mao (2022), so we omit the proof given the page limit.
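To make the examples after Assumption 1 concrete, here is a small illustrative sketch (an assumption of ours, not the paper's exact examples): a sawtooth delay that resets on the intervals [2k, 2k + 2), which is neither differentiable at the reset points nor bounded below by a positive constant, and a Lipschitz delay with coefficient ĥ, for which the text gives ∆* = 1/(1 − ĥ).

```python
import math

def sawtooth_delay(t, h=2.0):
    """delta(t) = t - 2k on [2k, 2k + 2) for h = 2: Borel measurable and
    bounded by Delta = h, but it hits 0 at every reset point, so it is
    NOT bounded below by a positive constant."""
    return t - math.floor(t / h) * h

def lipschitz_delay(t, h_hat=0.5):
    """delta(t) = 1 + h_hat*sin(t): Lipschitz with coefficient h_hat < 1,
    so by the remark above it meets Assumption 1 with
    Delta* = 1/(1 - h_hat); its values stay in [1 - h_hat, 1 + h_hat]."""
    return 1.0 + h_hat * math.sin(t)
```

Neither function is differentiable-with-derivative-below-one in the classical sense required by the older literature (the sawtooth is not even continuous in slope), which is exactly the extra generality Assumption 1 buys.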
Lemma 1. Under Assumption 1, for any T > 0 and any continuous function φ, estimate (2.3) holds.

Although the linear growth condition is not of our interest, we still do not want the system coefficients to grow too sharply. Hence the following polynomial growth condition is required.
Assumption 2. Assume that there exist non-negative constants such that the polynomial growth conditions (2.4) and (2.5) hold.

But note that Assumption 2 cannot guarantee that hybrid SDDE (2.1) has a unique global solution. For this purpose, a Khasminskii-type condition is always needed, which now arises widely in the study of superlinear stochastic systems (see, e.g., Fei et al. (2018), Mao and Yuan (2006) and Mei et al. (2020)). Differently from these references, however, in this paper we are more interested in the case where the structure of hybrid SDDE (2.1) does not remain of the same type in all modes. For simplicity, we divide the mode space S into two parts, S_1 = {1, . . ., N_1} and S_2 = {N_1 + 1, . . ., N}, where 1 ≤ N_1 < N. The subsystems of hybrid SDDE (2.1) in S_1-modes and S_2-modes satisfy the classical Khasminskii-type condition and the generalised Khasminskii-type condition, respectively.
Assumption 3. Let q ≥ p + 1. For i ∈ S_1, suppose that there exist constants ã_i ∈ R and b̃_i ≥ 0 such that (2.6) holds for all (x, y, t), and that Ã = −(q + p − 1)diag(ã_1, . . ., ã_{N_1}) − (q_ij)_{i,j∈S_1} is a non-singular M-matrix. For i ∈ S_2, assume that there exist constants γ̂_i, b̂_i, d̂_i ≥ 0 and ĉ_i > 0 such that (2.7) holds for all (x, y, t).

For the theory of M-matrices, the reader can refer to Section 2.6 in Mao and Yuan (2006). We have mentioned before that time delay can influence our mode-structure classification. Let us now give an example to explain this.
Theorem 1. Let Assumptions 1, 2 and 3 hold, and further assume that conditions (2.8)–(2.10) are satisfied. Then hybrid SDDE (2.1) admits a unique global solution.

Proof. Combining (2.9) with (2.10), we can derive the estimate (2.11). Since the system coefficients are locally Lipschitz continuous, there is a unique maximal local solution x(t) on t ∈ [0, σ_e) by Theorem 7.12 in Mao and Yuan (2006), where σ_e is the explosion time. Let k_0 > 0 be sufficiently large that k_0 ≥ ∥ξ∥. For each integer k ≥ k_0, define the stopping time σ_k. If we can show that σ_∞ = ∞ a.s., then σ_e = ∞ a.s., and the solution x(t) is global. Then, for any k ≥ k_0 and t ≥ 0, we derive (2.12) and (2.13) from the Itô formula and (2.11). The proof is therefore complete. □

From now on, since the subsequent stability analysis is our main focus, we will not mention the conditions of Theorem 1 explicitly but always assume they hold.

Design of discrete-time feedback control
Suppose hybrid SDDE (2.1) is unstable; we want to design a feedback control to stabilise it. In theory, the design of the feedback control is based on continuous-time state observations. But in practice, the state can only be observed at discrete times, say 0, τ, 2τ, . . .. Letting t_τ = [t/τ]τ, our controlled system actually becomes

dx(t) = [f(x(t), x(t − δ(t)), t, r(t)) + u(x(t_τ), t, r(t))] dt + g(x(t), x(t − δ(t)), t, r(t)) dW(t) (3.1)

on t ≥ 0 with initial data ξ and r_0.
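The role of t_τ can be seen in a simple Euler–Maruyama sketch of a system of the form (3.1): the control input is refreshed only at the observation instants 0, τ, 2τ, . . . and held constant in between. The scalar system, the linear control u(x) = −2x, and all parameter values below are illustrative assumptions of ours, not taken from the paper.

```python
import numpy as np

def simulate_discrete_feedback(f, g, u, x0, T, dt, tau, rng=None):
    """Euler-Maruyama path of dx = (f(x) + u(x_obs)) dt + g(x) dW, where
    x_obs = x(t_tau) is updated only at the observation times k*tau
    (tau is assumed to be an integer multiple of dt)."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(round(T / dt))
    obs_every = int(round(tau / dt))
    x = np.empty(n_steps + 1)
    x[0] = x0
    x_obs = x0                          # state observed at t = 0
    for n in range(n_steps):
        if n % obs_every == 0:          # fresh observation at t_tau = n*dt
            x_obs = x[n]
        dW = rng.normal(0.0, np.sqrt(dt))
        x[n + 1] = x[n] + (f(x[n]) + u(x_obs)) * dt + g(x[n]) * dW
    return x

# Assumed unstable scalar system dx = 0.5*x dt + 0.1*x dW under the
# assumed feedback u(x) = -2*x acting on the discretely observed state.
path = simulate_discrete_feedback(
    f=lambda x: 0.5 * x, g=lambda x: 0.1 * x, u=lambda x: -2.0 * x,
    x0=5.0, T=10.0, dt=1e-3, tau=0.05, rng=np.random.default_rng(1))
```

With the feedback switched off the same path grows in expectation, while the discretely observed feedback drives it toward zero, which is the behaviour the stabilisation theory formalises.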

Bounded-state-area feedback control
Before giving our control design, in addition to all the conditions of Theorem 1, we need another assumption on our original unstable system (2.1) for the stabilisation aim.
Assumption 4. For i ∈ S_2, there are non-negative constants γ_i, b_i, d_i and positive constants c_i such that (3.2) and (3.3) hold for all (x, y, t).

The reader might find that condition (3.2) could be deduced from condition (2.6) with a_i = â_i = ã_i and b_i = b̂_i = b̃_i.
Is it necessary to give them at the same time? Actually, these parameters also influence the value of τ, so we need to give them separately. The same reason applies to condition (3.3).
On the other hand, the positivity of D_b, D and D_d cannot be derived from Assumption 3; their roles will be explained in Remark 3. In other words, Assumptions 3 and 4 are different, and neither can be deduced from the other. As a result, Assumption 4 is indeed required.
Next, we will introduce how to design the feedback control u(x, t, i) according to the mode-structure classification in Assumption 4. For convenience, for any 0 < a < b, we denote by B_a the closed ball {x ∈ R^d : |x| ≤ a} and by B_b − B_a the annulus {x ∈ R^d : a < |x| ≤ b}.

Rule 1. For i ∈ S_2, the control in this mode can be designed as follows: (i) when x ∈ B_{R_i}, design u(x, t, i) such that a constant K > 0 can be found satisfying the required bound. Here, when R_i = 0, we take u(x, t, i) = 0 for all x ∈ R^d.

Next, let us make some comments on this control rule.
Remark 1. If we pay attention to hybrid SDDE (2.1) on S_1, we find that these subsystems might already be stable, so there is no need to impose any control when i ∈ S_1. But this does not mean the whole system is stable. Thus we need to design controls for the S_2-modes, and these are imposed on a bounded state area. In fact, we can rewrite the right-hand side of (3.3) accordingly, and hence we do not need to impose any control when |x| exceeds R_i.
Remark 2. It should also be pointed out that we could in fact let u(x, t, i) = 0 for x ∉ B_{R_i} and (t, i) ∈ R_+ × S_2. But in our control scheme, we set an additional connect area B_{2R_i} − B_{R_i} and require u(x, t, i) to vanish only when |x| ≥ 2R_i. This is needed for the continuity of u(x, t, i) in x, which guarantees the existence of a unique global solution of the controlled system (3.1); it also guarantees the global Lipschitz continuity of u(x, t, i) in x with the same Lipschitz coefficient K assumed on B_{R_i}, as stated in Lemma 2.
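A concrete way to realise the connect area of Remark 2 is to taper a control linearly to zero between |x| = R and |x| = 2R. The sketch below uses an assumed linear core u = −κx inside B_R; it is one possible construction for illustration, not the paper's prescribed control, but it is continuous, vanishes for |x| ≥ 2R, and is globally Lipschitz in x.

```python
import numpy as np

def bounded_area_control(x, kappa, R):
    """Bounded-state-area control with an ASSUMED linear core:
    u = -kappa*x on |x| <= R, tapered linearly to zero on the connect
    area R < |x| < 2R, and identically zero for |x| >= 2R."""
    x = np.asarray(x, dtype=float)
    norm = np.linalg.norm(x)
    if norm <= R:
        return -kappa * x
    if norm >= 2.0 * R:
        return np.zeros_like(x)
    # linear taper: factor 1 at |x| = R, factor 0 at |x| = 2R
    factor = (2.0 * R - norm) / R
    return -kappa * factor * x
```

The taper factor is continuous at both |x| = R and |x| = 2R, which is exactly the property Remark 2 needs for the existence of a unique global solution of (3.1).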
From Rule 1 and the above discussions, after choosing an appropriate κ_i, the design of u(x, t, i) on R^d × R_+ × S_1 and on R^d × R_+ × S_2 is complete, and Lemma 2 gives the global Lipschitz bound (3.6). By discussing the positions of x and y, it is easy to show this lemma, so we omit the proof. We also observe from Lemma 2 that u(x, t, i) meets the linear growth condition (3.7). Conditions (2.4) and (2.5) tell us that f(0, 0, t, i) ≡ 0 and g(0, 0, t, i) ≡ 0. Therefore, the controlled system (3.1) admits a trivial solution. Moreover, in analogy to the proof of Theorem 1, we can show under Assumption 4 and Rule 1 that the controlled system (3.1) has a global solution with finite moments, sup over [0, t] being finite for any t ≥ 0.

Additional rules on control function
From Assumption 4 and Rule 1, we observe that the controlled system (3.1) also has different structures in different modes. For i ∈ S_1, since u(x, t, i) ≡ 0, the estimates of Assumption 3 remain valid for every (x, y, t). For the S_2-modes, we have the following lemma.

Lemma 3. Let Assumption 4 and Rule 1 hold.
It is easy to show this lemma by discussing the positions of x, so we leave it to the reader. The control function u(x, t, i) designed by Rule 1 alone might still not stabilise the original system (2.1). Hence we need to impose some additional conditions.

Rule 2.
Ensure that the κ_i we choose in Rule 1 make A = −2diag(a_1, . . ., a_N) − Q a non-singular M-matrix, where the a_i are the same as in Assumption 4 and Lemma 3.
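Rule 2 can be checked numerically. The sketch below (with assumed illustrative numbers for a_i and Q) uses a standard equivalent test from Section 2.6 of Mao and Yuan (2006): a Z-matrix (non-positive off-diagonal entries) is a non-singular M-matrix if and only if all of its leading principal minors are positive.

```python
import numpy as np

def is_nonsingular_M_matrix(A, tol=1e-10):
    """Z-matrix test plus positivity of all leading principal minors,
    one of the classical equivalent characterisations of a
    non-singular M-matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    off_diag = A - np.diag(np.diag(A))
    if np.any(off_diag > tol):               # off-diagonals must be <= 0
        return False
    return all(np.linalg.det(A[:k, :k]) > tol for k in range(1, n + 1))

def rule2_matrix(a, Q):
    """Build A = -2*diag(a_1, ..., a_N) - Q as in Rule 2."""
    return -2.0 * np.diag(a) - np.asarray(Q, dtype=float)

# Illustrative check with ASSUMED a_i = -1 in both modes and an assumed
# generator Q; the resulting A = [[3, -1], [-2, 4]] passes the test.
Q = [[-1.0, 1.0], [2.0, -2.0]]
A = rule2_matrix([-1.0, -1.0], Q)
```

In an application one would feed in the a_i produced by the chosen κ_i and search for values of κ_i that pass this test, in the spirit of Remark 3.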
Let (η_1, . . ., η_N)^T = A^{−1}(1, . . ., 1)^T. Define a function U on R^d × S, in which the η_i and the D_q given in Assumption 4 appear, and define the associated function LU acting on U(x, j). The estimation of LU(x, y, t, i) is the key ingredient for the subsequent stability analysis. Here, for the convenience of the reader, we state it as the following lemma.

Lemma 4. Let Assumption 4 and Rules 1 and 2 hold. Then we have the estimate
(3.8)
The proof of Lemma 4 is quite similar to the estimation of LŨ(x, y, t, i) in (2.11), so we omit it. For the stability aim, we always want LU(x, y, t, i) to be negative, which forces us to give the following rule.

Rule 3.
Also ensure that the κ_i in Rule 1 make the corresponding numbers have the required signs.

But the reader may wonder whether appropriate κ_i can be found to fulfil Rules 2 and 3 simultaneously. The following remark dispels this concern.

Remark 3.
Since A_1 is a non-singular M-matrix, as required in Assumption 4, there is a constant κ large enough that −2diag(a_1, . . ., a_{N_1}, γ_{N_1+1} − κ, . . ., γ_N − κ) − Q is a non-singular M-matrix. Therefore, we can choose κ_i = κ for all i ∈ S_2, and Rule 2 holds. Then, for sufficiently large κ, since D_b, D and D_d are positive, Rule 3 can also be satisfied. Certainly, in applications, we need to make use of the special forms of f and g to choose the κ_i wisely.

The upper bound of observation duration
Now, we introduce a method to determine the value of τ*, the upper bound of τ. Let η_M = max_{i∈S_2} η_i and set a domain E. As ε approaches the boundary of E or 0, ϕ(ε) goes to zero. Therefore, there exists a number ε* ∈ E such that ϕ(ε*) = max_{ε∈E} ϕ(ε). Then the upper bound of the observation duration is given by τ* = ϕ(ε*) = max_{ε∈E} ϕ(ε). It will also be very useful later that τ < τ* ≤ 1/K. From now on, we always require τ < τ*.

Lyapunov functional
The main method to study stability in this paper is the technique of Lyapunov functionals. For this purpose, we define the segment process and the Lyapunov functional built on it, in which the four coefficients are positive constants to be determined later.
By the generalised Itô formula and the fundamental theorem of calculus, we can show that V(x_t, t, r(t)) is in fact an Itô process on t ≥ 0, with its Itô differential as displayed, where M(t) is a continuous local martingale vanishing at t = 0. The explicit form of M(t) is of no use in this paper, so we omit it here; it can be found in Theorem 1.45 in Mao and Yuan (2006). Also, from (4.2), we have to estimate Ū(t).
We can substitute this into (4.3) and then take expectations on both sides. The required assertion then follows if we use (2.5). □

Exponential stabilisation
In this part, we demonstrate that the original unstable system (2.1) can be stabilised, in the sense of mean-square exponential stability, by the feedback control designed in this paper. For this purpose, the following remark is helpful.

An application to neural networks
Consider a stochastic delay neural network with 10 neurons perturbed by a scalar Brownian motion W(t), operating in two modes, busy and free. In free mode, the jth neuron obeys the Hopfield model dx_j(t) = (· · ·) dt + σ x_j(t) dW(t), while in busy mode it is described by the Cohen–Grossberg neural network dx_j(t) = −Γ x_j(t)(· · ·) dt + (· · ·). Here Π_jk and Π̄_jk stand for the connection weights from neuron k to neuron j in free mode and busy mode, respectively, and δ(t) is the system time lag. For more information about these two types of neural network, we cite Blythe, Mao, and Liao (2001), Wang, Shu, Fang, and Liu (2006) and Ye, Michel, and Wang (1995) for reference.
This neural network switches from one mode into the other according to a Markov chain r(t) on the state space S = {1, 2} (1 for free mode, 2 for busy mode) with a given transition rate matrix. The network parameters are ϱ = 0.15, ρ = 0.3, ρ̄ = 0.15, Γ = 3, P = 2.5, σ = 0.3, σ̄ = 0.1. The connection weights Π_jk and Π̄_jk can be obtained from the network connection graphs in Fig. 1.
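Since the connection graphs are given in Fig. 1 and the exact mode equations appear in the paper's displays, the sketch below is only an organisational illustration of such a two-mode simulation: the 0/1-valued weights, the generator Q, and the tanh-type drifts standing in for the Hopfield and Cohen–Grossberg dynamics are all assumptions of ours, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters from the text; weights and Q are ASSUMED for illustration.
d = 10
varrho, Gamma, P = 0.15, 3.0, 2.5
sigma, sigma_bar = 0.3, 0.1
Pi = rng.choice([0.0, 0.2], size=(d, d), p=[0.7, 0.3])      # free-mode weights (assumed)
Pi_bar = rng.choice([0.0, 0.2], size=(d, d), p=[0.7, 0.3])  # busy-mode weights (assumed)
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])                    # assumed generator

def drift(x, mode):
    """Simplified stand-in drifts: Hopfield-type in free mode (0) and
    Cohen-Grossberg-type in busy mode (1); both forms are assumptions."""
    if mode == 0:
        return -varrho * x + Pi @ np.tanh(x)
    return -Gamma * x * (P - Pi_bar @ np.tanh(x))

dt, n_steps = 1e-3, 5000
x = rng.normal(0.0, 1.0, size=d)
mode = 0
for n in range(n_steps):
    # approximate the mode switch: probability -Q[mode, mode]*dt per step
    if rng.random() < -Q[mode, mode] * dt:
        mode = 1 - mode
    sig = sigma if mode == 0 else sigma_bar
    dW = rng.normal(0.0, np.sqrt(dt))      # scalar Brownian increment
    x = x + drift(x, mode) * dt + sig * x * dW
```

In the paper's actual example one would replace the assumed weights by those read off Fig. 1 and the stand-in drifts by the displayed Hopfield and Cohen–Grossberg equations, then add the bounded-state-area control of Rule 1 in the unstable mode.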

Conclusion
In this paper, we have designed a discrete-time state feedback control on a bounded state area to stabilise a class of structured hybrid SDDEs with more general time delays. Not only can more general stochastic systems be covered, but the control is also less costly. The conditions given on the original system and the control function are less restrictive and can be verified easily in practice, in particular compared with Assumption 6 in Mei et al. (2020) and Lemma 4.3 in Shi et al. (2022). For convenience, we only divided the system into two proper subsystems, which satisfy the Khasminskii-type structure and the generalised Khasminskii-type structure, respectively. But owing to a mathematical restriction, we could not impose any control in the former subsystem. Our future work will be devoted to this problem.

Fig. 1. The neural network connections in free mode (left) and busy mode (right).