Article

Convergence of a Class of Delayed Neural Networks with Real Memristor Devices

1 Dipartimento di Ingegneria dell'Informazione e Scienze Matematiche, Università di Siena, Via Roma 56, 53100 Siena, Italy
2 Dipartimento di Ingegneria dell'Informazione, Università degli Studi di Firenze, Via S. Marta 3, 50139 Firenze, Italy
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(14), 2439; https://doi.org/10.3390/math10142439
Submission received: 31 May 2022 / Revised: 17 June 2022 / Accepted: 5 July 2022 / Published: 13 July 2022
(This article belongs to the Special Issue Neural Networks and Learning Systems II)

Abstract

Neural networks with memristors are promising candidates to overcome the limitations of traditional von Neumann machines via the implementation of novel analog and parallel computation schemes based on the in-memory computing principle. Of special importance are neural networks with generic or extended memristor models that are suited to accurately describing real memristor devices. The manuscript considers a general class of delayed neural networks where the memristors obey a relevant and widely used generic memristor model, the voltage threshold adaptive memristor (VTEAM) model. Due to physical limitations, the memristor state variables evolve in a closed compact subset of the space; therefore, the network can be mathematically described by a special class of differential inclusions named differential variational inequalities (DVIs). By using the theory of DVIs and the Lyapunov approach, the paper proves some fundamental results on the convergence of solutions toward equilibrium points, a dynamic property that is extremely useful in neural network applications to content addressable memories and real-time signal processing. The conditions for convergence, which hold in the general nonsymmetric case and for any constant delay, are given in the form of a linear matrix inequality (LMI) and can be readily checked numerically. To the authors' knowledge, the obtained results are the only ones available in the literature on the convergence of neural networks with real generic memristors.

1. Introduction

Von Neumann computing machines are currently facing severe limitations in analyzing big data and handling the hard tasks which arise in the Internet of Things (IoT) or cloud computing [1,2,3]. These problems are due to the huge power needed for the continuous exchange of data between the central processing unit (CPU) and the memory (e.g., the RAM), which are placed at distinct physical locations. The use of emerging nanoscale devices, such as the memristor, is a promising way to alleviate some of the above problems via the implementation of new analog and parallel neuromorphic computing paradigms. Memristors enable the implementation of in-memory computing systems where the same devices perform the computation and are also able to memorize the result of the computation, thus mimicking some basic principles of a biological brain [4,5,6,7,8].
One crucial aspect to account for when studying memristor neural architectures is the memristor model that is used. The ideal memristor, which was introduced in the original seminal paper by Leon Chua in 1971 [9], is the most basic and simplest model available in the literature. However, real memristor devices implemented in nanotechnology cannot be modeled in a sufficiently accurate way using ideal memristors [10,11,12,13]. Rather, more complex models, named generic or extended memristors, need to be used for real devices. The reader is referred to [14,15,16,17,18,19] for a classification and discussion of the hierarchy of memristor models and for the main models used in the literature.
One of the most fundamental dynamic properties of a neural network is the convergence of solutions toward equilibrium points. It is hard to overemphasize the importance of convergence, since it is an essential property for implementing content addressable memories and for designing neural networks that solve combinatorial optimization problems and several other tasks in the field of image and signal processing [20,21,22]. There is a huge literature devoted to the convergence of traditional neural networks without memristors; see, e.g., [23,24,25,26,27,28,29] and references therein. On the other hand, the study of the convergence of memristor neural networks is only in its infancy, and very few results are available in the literature. The authors of [30,31,32] addressed the convergence of Hopfield-type and cellular neural networks with ideal memristors. General results on convergence have been established in the case of symmetric interconnections between neurons [30,32], and for cooperative interconnections [31], using the flux-charge analysis method developed in [15,33]. The method makes it possible to show that the state space can be decomposed into invariant manifolds for the dynamics and that, on each manifold, the dynamics is equivalent to that of a memristorless neural network, provided that flux and charge are used as variables in place of voltage and current. Other results on convergence have been established for memristors modeled as switching devices [34,35,36]. However, the usefulness and significance of such models in describing real memristor devices have not yet been clarified. As far as the authors are aware, no results on convergence are available to date for the important case of memristor neural networks with generic or extended memristors.
The goal of this manuscript is to study the convergence of a class of memristor neural networks with generic memristors obeying the voltage threshold adaptive memristor (VTEAM) model [17]. This is a relevant and widely studied model that is extremely flexible and accurately fits real memristor devices. Moreover, it is computationally efficient and appropriate for circuit simulation tools. In the neural network, we also account for the possible presence of constant delays. This feature is of importance since, in practice, delays are unavoidable due to the interneuron distance and the finite signal transmission speed. Moreover, the presence of delays enables neural networks to tackle the solution of some specific classes of tasks in real time, including motion detection [37] and inverse problems [38]. It is worth remarking that, due to physical limitations, the evolution of the memristor state is constrained in a closed compact interval. We therefore find it useful to model the memristor neural network via a class of delayed differential inclusions named differential variational inequalities (DVIs) [39]. It is known that DVIs are the most adequate mathematical tool to describe systems with constraints evolving in a closed subset of the space. In the paper, we provide some easily testable sets of conditions on the interconnection and delayed interconnection matrices ensuring the convergence of solutions for the memristor neural network. The conditions, which are expressed in the form of a linear matrix inequality (LMI) [40], are applicable in the general nonsymmetric case and for any value of the constant delay. Examples and numerical simulations are provided to illustrate the obtained dynamic results.

2. Preliminaries

In this section, we recall some basic properties of tangent and normal cones, and a class of differential inclusions named DVIs, that are used in the manuscript. The reader is referred to [39] for a more thorough treatment.

2.1. Tangent and Normal Cones

Consider a non-empty closed convex set $Q \subseteq \mathbb{R}^n$. The tangent cone to $Q$ at $x \in Q$ is given by [41]

$$T_Q(x) = \left\{ v \in \mathbb{R}^n : \liminf_{\rho \to 0^+} \frac{\operatorname{dist}(x + \rho v, Q)}{\rho} = 0 \right\}$$

where $\operatorname{dist}(z, Q) = \inf_{y \in Q} \|z - y\|$ is the distance of $z \in \mathbb{R}^n$ from the set $Q$. Furthermore, the normal cone to $Q$ at $x \in Q$ is

$$N_Q(x) = \left\{ p \in \mathbb{R}^n : \langle p, v \rangle \le 0, \ \forall v \in T_Q(x) \right\}.$$

If, in particular, $Q = K^n = [-\ell, \ell]^n$ is a hypercube, it can be verified that $T_{K^n}(z) = u(z_1) \times u(z_2) \times \cdots \times u(z_n)$ for any $z \in K^n$, where $u(\rho) = [0, +\infty)$ if $\rho = -\ell$, $u(\rho) = (-\infty, +\infty)$ if $|\rho| < \ell$, and $u(\rho) = (-\infty, 0]$ if $\rho = \ell$. Moreover, it can be easily checked that $N_{K^n}(z) = w(z_1) \times w(z_2) \times \cdots \times w(z_n)$ for any $z \in K^n$, where $w(\rho) = (-\infty, 0]$ if $\rho = -\ell$, $w(\rho) = \{0\}$ if $|\rho| < \ell$, and $w(\rho) = [0, +\infty)$ if $\rho = \ell$.
The tangent and normal cones enjoy the following properties [39,42].
Property 1.
Suppose that $Q \subseteq \mathbb{R}^n$ is a non-empty closed convex set. Then:
  • for any $x \in Q$, $T_Q(x)$ and $N_Q(x)$ are non-empty closed convex cones in $\mathbb{R}^n$;
  • the normal cone to $Q$ is a monotone operator, i.e., given any $x, y \in Q$ and any $n_x \in N_Q(x)$, $n_y \in N_Q(y)$, we have $\langle x - y, n_x - n_y \rangle \ge 0$;
  • if $Q = K^n = [-\ell, \ell]^n$ and $P = \operatorname{diag}(p_1, p_2, \ldots, p_n) \ge 0$, then we have $\langle x - y, P(n_x - n_y) \rangle \ge 0$ for any $x, y \in K^n$ and any $n_x \in N_{K^n}(x)$, $n_y \in N_{K^n}(y)$.
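As a quick illustration, the monotonicity of the normal cone to a hypercube can be checked numerically. The following Python sketch is ours (the helper and all names are not from the paper); it samples one element of $N_{K^n}(x)$ according to the componentwise description above and verifies the sign of the inner product.

```python
import numpy as np

def normal_cone_sample(x, lo, hi, scale=1.0):
    """Return one element of N_K(x) for the hypercube K = [lo, hi]^n:
    nonpositive components on lower faces, nonnegative on upper faces,
    zero in the interior (cf. the description of w(.) above)."""
    n = np.zeros_like(x)
    n[x <= lo] = -scale
    n[x >= hi] = +scale
    return n

# Check <x - y, n_x - n_y> >= 0 on random points of K = [-1, 1]^3.
rng = np.random.default_rng(0)
lo, hi = -1.0, 1.0
for _ in range(1000):
    x = np.clip(rng.uniform(-1.5, 1.5, 3), lo, hi)
    y = np.clip(rng.uniform(-1.5, 1.5, 3), lo, hi)
    nx, ny = normal_cone_sample(x, lo, hi), normal_cone_sample(y, lo, hi)
    assert (x - y) @ (nx - ny) >= -1e-12
```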

2.2. Differential Variational Inequalities

Let $Q \subseteq \mathbb{R}^n$ be a non-empty closed convex set and $F : Q \to \mathbb{R}^n$. A differential variational inequality (DVI) is a problem of the following form [39] (p. 265): find an absolutely continuous function $x(t)$, $t \in [t_a, t_b]$, such that

$$x(t) \in Q, \quad t \in [t_a, t_b] \quad (1)$$

and

$$\dot{x}(t) \in F(x(t)) - N_Q(x(t)), \quad \text{for almost all (a.a.)}\ t \in [t_a, t_b]. \quad (2)$$
From a mathematical viewpoint, a DVI is a special class of differential inclusions whose solutions evolve in a closed convex subset of $\mathbb{R}^n$. The next property summarizes some fundamental results on DVIs in [39] (Ch. 5) that are needed in the paper.
Property 2.
Let $Q \subseteq \mathbb{R}^n$ be a non-empty compact convex set and assume $F : Q \to \mathbb{R}^n$ is continuous in $Q$. Then, for any initial condition $x_0 \in Q$, the DVI (1) and (2) has at least one solution $x(t)$, $t \in [0, +\infty)$, such that $x(0) = x_0$. Furthermore, there exists at least one solution $\xi \in Q$ of the algebraic inclusion $0 \in F(\xi) - N_Q(\xi)$; hence, $x(t) = \xi$, $t \ge 0$, is a stationary solution to (1) and (2).
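Property 2 also suggests a simple way to approximate the solutions of a DVI numerically when $Q$ is a box: a projected explicit Euler scheme, in which every step is clipped back onto $Q$, acts as the discrete counterpart of subtracting an element of the normal cone $N_Q(x)$. The sketch below is our illustration under this box assumption, not an algorithm from the paper.

```python
import numpy as np

def projected_euler(F, x0, lo, hi, h=1e-3, steps=20_000):
    """Projected explicit Euler for  x'(t) in F(x(t)) - N_Q(x(t)),  Q = [lo, hi]^n."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(steps):
        x = np.clip(x + h * F(x), lo, hi)   # Euclidean projection onto the box Q
    return x

# Example: F(x) = -x + 2 pushes the state toward 2, but Q = [-1, 1]^2 caps it;
# the iteration settles at x = (1, 1), where 0 in F(x) - N_Q(x) holds.
print(projected_euler(lambda x: -x + 2.0, [0.0, -0.5], -1.0, 1.0))
```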

3. Memristor Neural Network

3.1. VTEAM Memristor Model

The ideal memristor was introduced by Leon Chua in the seminal 1971 paper [9] as the fourth basic passive circuit element. Let $v(t)$ (resp., $i(t)$) be the voltage (resp., current) of the memristor and let $\varphi(t) = \int_{-\infty}^{t} v(\sigma)\, d\sigma$ (resp., $q(t) = \int_{-\infty}^{t} i(\sigma)\, d\sigma$) be the flux (resp., charge) of the memristor. An ideal flux-controlled memristor is, by definition, a circuit element satisfying the constitutive relation

$$q(t) = \hat{q}(\varphi(t))$$

where $\hat{q} : \mathbb{R} \to \mathbb{R}$, $\hat{q} \in C^1(\mathbb{R})$, is a given non-linear function. By differentiating in time, we obtain that an ideal memristor satisfies the quasi-static Ohm's law

$$i(t) = g(\varphi(t))\, v(t)$$

where $g(\varphi) = \hat{q}'(\varphi)$ is named the memductance, and the state equation is

$$\dot{\varphi}(t) = v(t).$$
Later on, more general classes of memristors were introduced to better model real memristor devices with respect to an ideal memristor. In the manuscript, we assume that the memristor is described by the generic memristor model [14]
$$i(t) = g(x(t))\, v(t)$$

$$\dot{x}(t) = \tilde{h}(x(t), v(t))$$

where $g(\cdot) : \mathbb{R} \to \mathbb{R}$, $g \in C^0(\mathbb{R})$, is the memductance and $\tilde{h} : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$, $\tilde{h} \in C^0(\mathbb{R} \times \mathbb{R})$, is a non-linear function. We stress that the state variable $x$ of a generic memristor, in general, does not coincide with the flux $\varphi$.
In particular, in the paper, we focus on the VTEAM memristor model introduced in [17] for which the state evolution is ruled by the equation
$$\dot{x}(t) = \tilde{h}(x(t), v(t)) = h(v(t)) = \begin{cases} k_{off} \left( \dfrac{v(t)}{v_{off}} - 1 \right)^{\alpha_{off}} & 0 < v_{off} < v(t) \\[4pt] 0 & v_{on} < v(t) < v_{off} \\[4pt] k_{on} \left( \dfrac{v(t)}{v_{on}} - 1 \right)^{\alpha_{on}} & v(t) < v_{on} < 0 \end{cases} \quad (5)$$

where $k_{on}, k_{off}, v_{on}, v_{off}, \alpha_{on}, \alpha_{off}$ are model parameters; the parameters are all positive, except for $v_{on}$ and $k_{on}$, which are negative. Note that this is a model with a voltage threshold; that is, the state does not change ($dx(t)/dt = 0$) when the voltage $v(t)$ belongs to the interval $[v_{on}, v_{off}]$. Additionally, from a physical viewpoint, the state variable $x(\cdot)$ has to satisfy the hard constraint

$$x(t) \in K \triangleq [x_{on}, x_{off}]$$

where $-\infty < x_{on} < x_{off} < +\infty$. Such a constraint is usually enforced mathematically by using some suitable window functions in (5) [17]. However, it can be more simply and effectively guaranteed by rewriting (5) as the following DVI

$$\dot{x}(t) \in h(v(t)) - N_K(x(t))$$

where $N_K(x(t))$ is the normal cone to $K$ at $x(t) \in K$.
The memductance $g(\cdot)$ is not explicitly defined by the VTEAM model and can be any continuous function such that $g(x) \in [1/R_{OFF}, 1/R_{ON}]$ for any $x \in [x_{on}, x_{off}]$, where $R_{ON}, R_{OFF} > 0$. For instance, the memductance can be obtained from a memristance $1/g(\cdot)$ that depends linearly on the state, e.g.,

$$g(x(t)) = \left[ R_{ON} + \frac{R_{OFF} - R_{ON}}{x_{off} - x_{on}} \left( x(t) - x_{on} \right) \right]^{-1} \quad (7)$$

as well as from an exponential dependence, e.g.,

$$g(x(t)) = \frac{1}{R_{ON}}\, e^{\lambda \frac{x(t) - x_{on}}{x_{off} - x_{on}}}$$

where $\lambda$ is a fitting parameter and $e^{\lambda} = R_{ON}/R_{OFF}$.
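For concreteness, the VTEAM state equation (5) and the linear memductance (7) translate directly into code. The following Python sketch is ours (the paper's simulations use MATLAB), and the parameter values below are placeholders rather than the ones used in Section 6.

```python
import numpy as np

# Placeholder VTEAM parameters (v_on and k_on must be negative).
k_off, k_on = 10.0, -10.0        # nm/s
v_off, v_on = 10e-3, -10e-3      # V
a_off, a_on = 3.0, 3.0           # threshold exponents (assumed values)
x_on, x_off = 0.0, 10.0          # nm, hard bounds of the state
R_on, R_off = 0.1, 1.0           # Ohm

def h(v):
    """State derivative h(v) of eq. (5): zero inside the threshold band."""
    if v > v_off:
        return k_off * (v / v_off - 1.0) ** a_off
    if v < v_on:
        return k_on * (v / v_on - 1.0) ** a_on
    return 0.0

def g_lin(x):
    """Memductance of eq. (7): memristance linear in the state x."""
    return 1.0 / (R_on + (R_off - R_on) * (x - x_on) / (x_off - x_on))
```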

3.2. Memristor Delayed Neural Network Model

In the following, we consider a neural network with delays whose basic cell is formed by the interconnection of a capacitor $C$ and a memristor obeying the VTEAM model. By letting, for simplicity, $C = 1$, the memristive neural network can be described by the following set of delayed differential inclusions, for $i = 1, \ldots, N$,

$$\dot{v}_i(t) = -i_{M_i}(t) + \sum_{j=1}^{N} a_{ij} f(v_j(t)) + \sum_{j=1}^{N} a_{ij}^{\tau} f(v_j(t - \tau)) \quad (9)$$

$$\dot{x}_i(t) \in h(v_i(t)) - N_K(x_i(t)) \quad (10)$$

where $v_i(t)$ are the capacitor voltages and $i_{M_i}(t) = g(x_i(t))\, v_i(t)$ are the memristor currents ($i = 1, \ldots, N$). Moreover, for $i, j = 1, \ldots, N$, $a_{ij}$ (resp., $a_{ij}^{\tau}$) are the neuron interconnections (resp., delayed neuron interconnections), while the neuron activation $f(\cdot) : \mathbb{R} \to \mathbb{R}$ is Lipschitz in $\mathbb{R}$, i.e., there exists $L$ such that $|f(\rho_1) - f(\rho_2)| \le L |\rho_1 - \rho_2|$ for any $\rho_1, \rho_2 \in \mathbb{R}$; it is bounded, i.e., $-\infty < f_m \le f(\rho) \le f_M < +\infty$ for any $\rho \in \mathbb{R}$; and it satisfies $f(0) = 0$. Finally, $\tau > 0$ is a constant delay. The neural network has an additive interconnecting structure that is typical of classic models, such as the Hopfield or cellular neural network models.
Equations (9) and (10) can be recast in vector form as follows
$$\dot{V}(t) = -G(X(t))\, V(t) + A F(V(t)) + A^{\tau} F(V(t - \tau)) \quad (11)$$

$$\dot{X}(t) \in H(V(t)) - N_{K_X}(X(t)) \quad (12)$$

where $V(t) = (v_1(t), \ldots, v_N(t))^{\top}$ and $X(t) = (x_1(t), \ldots, x_N(t))^{\top}$ ($\top$ denotes the transpose). Additionally, we denote $G(X(t)) = \operatorname{diag}(g(x_1(t)), \ldots, g(x_N(t)))$, $F(V(t)) = (f(v_1(t)), \ldots, f(v_N(t)))^{\top}$, $H(V(t)) = (h(v_1(t)), \ldots, h(v_N(t)))^{\top}$, and by $N_{K_X}(X(t))$ the normal cone to the hypercube $K_X = [x_{on}, x_{off}]^N$ at $X(t)$.
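A natural discrete-time counterpart of (11) and (12) is a forward-Euler step in which the delayed term is read from a circular history buffer and the memristor states are projected back onto $K_X$. The following Python sketch is our illustration (all function and variable names are ours; f, g and h are assumed to act elementwise on arrays):

```python
import numpy as np

def step(V, X, V_hist, k, A, A_tau, f, g, h, x_on, x_off, dt):
    """One Euler step of (11)-(12); V_hist holds the last round(tau/dt) samples."""
    d = V_hist.shape[0]
    V_delayed = V_hist[k % d]                          # approximates V(t - tau)
    V_new = V + dt * (-g(X) * V + A @ f(V) + A_tau @ f(V_delayed))
    X_new = np.clip(X + dt * h(V), x_on, x_off)        # clip realizes -N_K(x_i)
    V_hist[k % d] = V                                  # overwrite oldest sample
    return V_new, X_new
```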

4. Existence and Uniqueness of the Solution

Let us consider the initial value problem (IVP) associated with the delayed memristor neural network (9) and (10)

$$\dot{V}(t) = -G(X(t))\, V(t) + A F(V(t)) + A^{\tau} F(V(t - \tau)) \quad (13)$$

$$\dot{X}(t) \in H(V(t)) - N_{K_X}(X(t)) \quad (14)$$

$$V(t) = \phi(t), \quad t \in [-\tau, 0] \quad (15)$$

$$X(t) = \psi(t), \quad t \in [-\tau, 0] \quad (16)$$

with initial conditions $(\phi(t), \psi(t))$, where $\phi \in C([-\tau, 0], \mathbb{R}^N)$ and $\psi \in C([-\tau, 0], K_X)$.
A solution $(V(t), X(t))$, $t \in [-\tau, \tilde{t}]$, of the IVP is a continuous function in $t \in [-\tau, \tilde{t}]$ such that:
  • $(V(t), X(t)) \in \mathbb{R}^N \times K_X$, $t \in [-\tau, \tilde{t}]$;
  • $(V(t), X(t)) = (\phi(t), \psi(t))$, $t \in [-\tau, 0]$;
  • $(V(t), X(t))$ is absolutely continuous in $[0, \tilde{t}]$;
  • $(\dot{V}(t), \dot{X}(t))$ satisfies (13) and (14) for a.a. $t \in [0, \tilde{t}]$.
We find it useful to bring the analysis of the differential inclusions (9) and (10) back to that of a DVI in a compact convex subset of $\mathbb{R}^N \times \mathbb{R}^N$, so that we can apply Property 2 to establish the existence of solutions. To this end, we prove the following.
Property 3.
Let $(V(t), X(t))$, $t \ge 0$, be a solution of the IVP (13)–(16). Let $\tilde{f} = \max\{|f_m|, f_M\}$ and consider the hypercube

$$K_V \triangleq [-\bar{v}_{\varepsilon}, \bar{v}_{\varepsilon}]^N \quad (17)$$

where

$$\bar{v}_{\varepsilon} = \max_{i = 1, \ldots, N} R_{OFF} \left( \sum_{j=1}^{N} |a_{ij}| \tilde{f} + \sum_{j=1}^{N} |a_{ij}^{\tau}| \tilde{f} + \varepsilon \right)$$

and $\varepsilon > 0$. Then, $V(t)$ enters $K_V$ in finite time and stays there thereafter.
Proof. 
Consider $V(t) \in \mathbb{R}^N$, $V(t) \notin K_V$, and let $\tilde{N}$ be the set of indexes $i = 1, \ldots, N$ such that $|v_i(t)| \ge \bar{v}_{\varepsilon}$. We note that, for any $i \in \tilde{N}$, we have

$$\dot{v}_i(t) \cdot v_i(t) = -g(x_i(t))\, v_i^2(t) + \sum_{j=1}^{N} a_{ij} f(v_j(t))\, v_i(t) + \sum_{j=1}^{N} a_{ij}^{\tau} f(v_j(t - \tau))\, v_i(t) \le -\frac{1}{R_{OFF}} |v_i(t)|^2 + \sum_{j=1}^{N} |a_{ij}| \tilde{f}\, |v_i(t)| + \sum_{j=1}^{N} |a_{ij}^{\tau}| \tilde{f}\, |v_i(t)| = |v_i(t)| \left( -\frac{1}{R_{OFF}} |v_i(t)| + \sum_{j=1}^{N} |a_{ij}| \tilde{f} + \sum_{j=1}^{N} |a_{ij}^{\tau}| \tilde{f} \right) \le |v_i(t)| \left( -\frac{\bar{v}_{\varepsilon}}{R_{OFF}} + \sum_{j=1}^{N} |a_{ij}| \tilde{f} + \sum_{j=1}^{N} |a_{ij}^{\tau}| \tilde{f} \right) \le -\varepsilon\, |v_i(t)| < 0.$$

As a consequence, the set $K_V$ is positively invariant and globally attractive for $V(\cdot)$. Additionally, since $|\dot{v}_i(t)| = |\dot{v}_i(t) \cdot v_i(t)| / |v_i(t)|$, it also holds, for any $i \in \tilde{N}$, that $|\dot{v}_i(t)| \ge \varepsilon$; i.e., each component of $V(\cdot)$ which is outside the set $[-\bar{v}_{\varepsilon}, \bar{v}_{\varepsilon}]$ approaches that set with a speed not smaller than $\varepsilon$, thus entering the set in finite time. □
In the paper, we are interested in the asymptotic behavior as $t \to +\infty$ of the neural network solutions. On the basis of Property 3, it is enough to consider an IVP whose initial condition for the variables $V(\cdot)$ is given by $\phi \in C([-\tau, 0], K_V)$. Therefore, in what follows, we consider the IVP for a DVI in the compact convex set $K_V \times K_X$

$$\dot{V}(t) \in -G(X(t))\, V(t) + A F(V(t)) + A^{\tau} F(V(t - \tau)) - N_{K_V}(V(t)) \quad (19)$$

$$\dot{X}(t) \in H(V(t)) - N_{K_X}(X(t)) \quad (20)$$

$$V(t) = \phi(t), \quad t \in [-\tau, 0] \quad (21)$$

$$X(t) = \psi(t), \quad t \in [-\tau, 0] \quad (22)$$

with initial conditions $(\phi(t), \psi(t))$, where $\phi \in C([-\tau, 0], K_V)$ and $\psi \in C([-\tau, 0], K_X)$.
A solution $(V(t), X(t))$, $t \in [-\tau, \tilde{t}]$, of the IVP (19)–(22) is defined in the same way as for the IVP (13)–(16), the only difference being that, for the IVP (19)–(22), we have $(V(t), X(t)) \in K_V \times K_X$, $t \in [-\tau, \tilde{t}]$.
Property 4.
Given any initial conditions $(\phi(t), \psi(t))$, where $\phi \in C([-\tau, 0], K_V)$ and $\psi \in C([-\tau, 0], K_X)$, there exists a unique solution $(V(t), X(t))$, $t \ge -\tau$, of the IVP (19)–(22).
Proof. 
A. Existence of the solution
The proof of the existence of solutions for the system can be achieved through the method of steps.
In the first step, we show the existence of a solution in $t \in [-\tau, \tau]$. Let us define the following IVP in $t \in [0, \tau]$ for a DVI without delay:

$$\begin{pmatrix} \dot{W}(t) \\ \dot{Y}(t) \\ \dot{\theta}(t) \end{pmatrix} \in \begin{pmatrix} -G(Y(t))\, W(t) + A F(W(t)) + A^{\tau} F(\phi(\theta(t))) \\ H(W(t)) \\ 1 \end{pmatrix} - \begin{pmatrix} N_{K_V}(W(t)) \\ N_{K_X}(Y(t)) \\ N_{[-\tau, 0]}(\theta(t)) \end{pmatrix} = \mathcal{F} \begin{pmatrix} W(t) \\ Y(t) \\ \theta(t) \end{pmatrix} - N_{K_V \times K_X \times [-\tau, 0]} \begin{pmatrix} W(t) \\ Y(t) \\ \theta(t) \end{pmatrix}, \qquad \begin{pmatrix} W(0) \\ Y(0) \\ \theta(0) \end{pmatrix} = \begin{pmatrix} \phi(0) \\ \psi(0) \\ -\tau \end{pmatrix} \quad (23)$$

where $W \in \mathbb{R}^N$, $Y \in \mathbb{R}^N$ and $\theta \in \mathbb{R}$. The map $\mathcal{F} : K_V \times K_X \times [-\tau, 0] \to \mathbb{R}^{2N+1}$ is continuous in $K_V \times K_X \times [-\tau, 0]$ and $(W(0), Y(0), \theta(0)) \in K_V \times K_X \times [-\tau, 0]$. Therefore, according to Property 2, there exists at least one solution to the IVP.
Solving (23) with respect to $\theta$, we obtain $\theta(t) = t - \tau$ in $t \in [0, \tau]$. Now set

$$\begin{pmatrix} V(t) \\ X(t) \end{pmatrix} = \begin{cases} (\phi(t), \psi(t))^{\top}, & t \in [-\tau, 0] \\ (W(t), Y(t))^{\top}, & t \in [0, \tau]. \end{cases}$$

We have $(W(0), Y(0)) = (\phi(0), \psi(0))$; therefore, $(V(t), X(t))$ is continuous in $[-\tau, \tau]$ and belongs to $K_V \times K_X$. Additionally, $(V(t), X(t))$ is absolutely continuous in $t \in [0, \tau]$, and $(\dot{V}(t), \dot{X}(t))$ satisfies (19) and (20) for a.a. $t \in [0, \tau]$. As a consequence, $(V(t), X(t))$ is a solution of the DVI in $t \in [-\tau, \tau]$.
As the inductive step, let us assume that $(V(t), X(t))$ is a solution in $t \in [-\tau, m\tau]$, where $m \ge 1$ is an integer. We define in $t \in [-\tau, 0]$ the functions $V_{m\tau} \in C([-\tau, 0], K_V)$ and $X_{m\tau} \in C([-\tau, 0], K_X)$ as $V_{m\tau}(t) = V(t + m\tau)$ and $X_{m\tau}(t) = X(t + m\tau)$, respectively. Following the procedure in the first step, and choosing $(V_{m\tau}(t), X_{m\tau}(t))$ as the initial condition of the IVP, we obtain a solution $(V_m(t), X_m(t))$ in $t \in [-\tau, \tau]$.
Setting $(V(t), X(t)) = (V_m(t - m\tau), X_m(t - m\tau))$ in $t \in [m\tau, (m+1)\tau]$, we obtain that $(V(t), X(t))$ is a solution of the system in $t \in [-\tau, (m+1)\tau]$, with initial condition $(V(t), X(t)) = (\phi(t), \psi(t))$ in $t \in [-\tau, 0]$. Indeed, it can be observed that:
  • by the inductive hypothesis, $(V(t), X(t))$ with initial condition $(V(t), X(t)) = (\phi(t), \psi(t))$ in $t \in [-\tau, 0]$ is a solution in $t \in [-\tau, m\tau]$;
  • since $(V_m(\cdot), X_m(\cdot))$ is absolutely continuous in $[0, \tau]$, we have that $(V(t), X(t))$ is absolutely continuous in $t \in [m\tau, (m+1)\tau]$;
  • $(V(t), X(t))$ is absolutely continuous in $t \in [-\tau, m\tau]$;
  • the limit of $(V(t), X(t))$ as $t \to m\tau^{+}$ is equal to the limit of $(V_m(t - m\tau), X_m(t - m\tau))$ as $t \to m\tau^{+}$, i.e., we have $(V_m(0), X_m(0)) = (V_{m\tau}(0), X_{m\tau}(0)) = (V(m\tau), X(m\tau))$; therefore, $(V(t), X(t))$ is continuous at $t = m\tau$;
  • $(V_m(t), X_m(t))$ is a solution in $t \in [0, \tau]$ and satisfies (19) and (20); as a consequence, $(V(t), X(t))$ satisfies (19) and (20) in $t \in [m\tau, (m+1)\tau]$.
In conclusion, $(V(t), X(t))$ is a solution in $t \in [-\tau, (m+1)\tau]$ of the system (19) and (20) with the initial condition $(V(t), X(t)) = (\phi(t), \psi(t))$ in $t \in [-\tau, 0]$. By induction, this result can be extended to any $t \ge -\tau$.
B. Uniqueness of the solution
Suppose, for contradiction, that there exist two solutions $(V_1(t), X_1(t))$ and $(V_2(t), X_2(t))$ of (19)–(22). The distance between the solutions is defined as

$$\Delta(t) = \frac{1}{2} \left\| \begin{pmatrix} V_1(t) \\ X_1(t) \end{pmatrix} - \begin{pmatrix} V_2(t) \\ X_2(t) \end{pmatrix} \right\|_2^2.$$
We want to prove that the distance is $0$ for any $t \in [-\tau, m\tau]$, where $m$ is a generic integer greater than or equal to $0$.
The case $m = 0$ is immediate, since $(V_1(t), X_1(t)) - (V_2(t), X_2(t)) = (\phi(t), \psi(t)) - (\phi(t), \psi(t)) = 0$ for $t \in [-\tau, 0]$. Applying again the method of steps, let us assume that $\Delta(t)$ is equal to $0$ in $t \in [-\tau, m\tau]$. Since $(V_1(t), X_1(t))$ and $(V_2(t), X_2(t))$ are solutions of (19) and (20), when $t \in [m\tau, (m+1)\tau]$ we have that:
$$\dot{V}_1(t) = -G(X_1(t))\, V_1(t) + A F(V_1(t)) + A^{\tau} F(V_1(t - \tau)) - n_{V_1, t}, \qquad \dot{X}_1(t) = H(V_1(t)) - n_{X_1, t}$$

and

$$\dot{V}_2(t) = -G(X_2(t))\, V_2(t) + A F(V_2(t)) + A^{\tau} F(V_2(t - \tau)) - n_{V_2, t}, \qquad \dot{X}_2(t) = H(V_2(t)) - n_{X_2, t}$$

where $n_{V_1, t} \in N_{K_V}(V_1(t))$, $n_{X_1, t} \in N_{K_X}(X_1(t))$, $n_{V_2, t} \in N_{K_V}(V_2(t))$ and $n_{X_2, t} \in N_{K_X}(X_2(t))$.
As a consequence,

$$\dot{\Delta}(t) = \left\langle \begin{pmatrix} V_1(t) \\ X_1(t) \end{pmatrix} - \begin{pmatrix} V_2(t) \\ X_2(t) \end{pmatrix}, \begin{pmatrix} \dot{V}_1(t) \\ \dot{X}_1(t) \end{pmatrix} - \begin{pmatrix} \dot{V}_2(t) \\ \dot{X}_2(t) \end{pmatrix} \right\rangle = \langle V_1(t) - V_2(t), \dot{V}_1(t) - \dot{V}_2(t) \rangle + \langle X_1(t) - X_2(t), \dot{X}_1(t) - \dot{X}_2(t) \rangle \quad (28)$$

where:

$$\langle V_1(t) - V_2(t), \dot{V}_1(t) - \dot{V}_2(t) \rangle = -\langle V_1(t) - V_2(t), G(X_1(t)) V_1(t) - G(X_2(t)) V_2(t) \rangle + \langle V_1(t) - V_2(t), A (F(V_1(t)) - F(V_2(t))) \rangle + \langle V_1(t) - V_2(t), A^{\tau} (F(V_1(t - \tau)) - F(V_2(t - \tau))) \rangle - \langle V_1(t) - V_2(t), n_{V_1, t} - n_{V_2, t} \rangle$$

$$\langle X_1(t) - X_2(t), \dot{X}_1(t) - \dot{X}_2(t) \rangle = \langle X_1(t) - X_2(t), H(V_1(t)) - H(V_2(t)) \rangle - \langle X_1(t) - X_2(t), n_{X_1, t} - n_{X_2, t} \rangle.$$

According to Property 1:

$$\langle V_1(t) - V_2(t), n_{V_1, t} - n_{V_2, t} \rangle \ge 0, \qquad \langle X_1(t) - X_2(t), n_{X_1, t} - n_{X_2, t} \rangle \ge 0.$$

Additionally, remembering that $\Delta(t) = 0$ in $t \in [-\tau, m\tau]$, we observe that $V_1(t - \tau) = V_2(t - \tau)$. As a consequence, starting from (28), we obtain

$$\dot{\Delta}(t) \le -\langle V_1(t) - V_2(t), G(X_1(t)) V_1(t) - G(X_2(t)) V_2(t) \rangle + \langle V_1(t) - V_2(t), A (F(V_1(t)) - F(V_2(t))) \rangle + \langle X_1(t) - X_2(t), H(V_1(t)) - H(V_2(t)) \rangle = \left\langle \begin{pmatrix} V_1(t) - V_2(t) \\ X_1(t) - X_2(t) \end{pmatrix}, \begin{pmatrix} -G(X_1(t)) V_1(t) + A F(V_1(t)) + G(X_2(t)) V_2(t) - A F(V_2(t)) \\ H(V_1(t)) - H(V_2(t)) \end{pmatrix} \right\rangle.$$
The functions $-G(X) V + A F(V)$ and $H(V)$ are Lipschitz continuous; therefore, there exists $\kappa \ge 0$ such that

$$\dot{\Delta}(t) \le 2 \kappa \Delta(t).$$

From the Gronwall lemma, we obtain

$$0 \le \Delta(t) \le \Delta(m\tau) \exp(2 \kappa (t - m\tau)) = 0$$

i.e., $\Delta(t) = 0$ for $t \in [m\tau, (m+1)\tau]$.
By induction, this result can be extended to any $t \ge -\tau$, proving the uniqueness of the solution of (19)–(22). □

5. Main Results on Convergence

An equilibrium point of the memristor neural network (19) and (20) is a constant solution $(V(t), X(t)) = (\xi_V, \xi_X) \in K_V \times K_X$, $t \ge -\tau$, of the IVP (19)–(22). Denote by $\pi_E$ the set of equilibrium points of (19) and (20). Since $f(0) = 0$ and $h(0) = 0$, we have that

$$\pi \triangleq \{ (0, \xi_X) : 0 \in \mathbb{R}^N, \ \xi_X \in K_X \} \subseteq \pi_E.$$

Note that there is a continuum of non-isolated equilibrium points, which is a typical feature of neural networks with non-volatile memristors [15].
Example 1.
Let us show via an example that, in the general case, the memristor neural network (19) and (20) can have equilibrium points not belonging to the set $\pi$. Let $N = 2$, $f(\rho) = (1/2)(|\rho + 1| - |\rho - 1|)$, $-1 < v_{on} < 0 < v_{off} < 1$, $g(x)$ as in (7), and choose

$$A = \begin{pmatrix} \dfrac{2}{R_{ON}} & 0 \\[4pt] 0 & \dfrac{2}{R_{ON}} \end{pmatrix}; \qquad A^{\tau} = \begin{pmatrix} 0 & -\dfrac{2}{R_{ON}} \\[4pt] -\dfrac{2}{R_{ON}} & 0 \end{pmatrix}.$$

If $(\tilde{v}_1, \tilde{v}_2, \tilde{x}_1, \tilde{x}_2)$ is an equilibrium point, the following relations hold true:

$$\tilde{v}_1 = \frac{2}{R_{ON}\, g(\tilde{x}_1)} \big( f(\tilde{v}_1) - f(\tilde{v}_2) \big), \qquad \tilde{v}_2 = \frac{2}{R_{ON}\, g(\tilde{x}_2)} \big( f(\tilde{v}_2) - f(\tilde{v}_1) \big).$$

Since $f(\rho)$ is a piecewise linear function, the equilibrium points of the network can be easily found explicitly. It can be verified that, in addition to the points in $\pi$, also $(4 R_{OFF}/R_{ON}, -4, x_{off}, x_{on})$ and $(-4, 4 R_{OFF}/R_{ON}, x_{on}, x_{off})$ are equilibrium points for the network. Note that, since $|\tilde{v}_i| > 1 > \max\{|v_{on}|, v_{off}\}$, only the extremal points $x_{on}$ and $x_{off}$ can satisfy $0 \in h(\tilde{v}_i) - N_K(\tilde{x}_i)$.
Definition 1.
The memristor neural network (19) and (20) is said to be convergent if, for any initial conditions $(\phi(t), \psi(t))$, where $\phi \in C([-\tau, 0], K_V)$ and $\psi \in C([-\tau, 0], K_X)$, the solution of the IVP (19)–(22) converges toward an equilibrium point as $t \to +\infty$.
We will address convergence under two different sets of conditions for the neuron activations f ( · ) and the interconnection and delay interconnection matrices A and A τ . Firstly, we enforce the following hypotheses.
Assumption 1.
We have $f(0) = 0$ and $0 < f(v_i)/v_i \le 1$ for any $v_i \in \mathbb{R}$, $v_i \ne 0$, and any $i = 1, 2, \ldots, N$.
Two interesting special cases are the sigmoidal (i.e., bounded and strictly increasing) activation $f(\rho) = (2/\pi) \arctan(\pi \rho / 2)$ and the piecewise-linear activation $f(\rho) = (1/2)(|\rho + 1| - |\rho - 1|)$, used in the popular Hopfield neural network [43] and cellular neural network [21] models, respectively. However, it is worth noting that neuron activation functions satisfying Assumption 1 need not be monotone increasing.
Assumption 2.
There exist a diagonal positive definite matrix $P \in \mathbb{R}^{N \times N}$ and a symmetric positive definite matrix $Z \in \mathbb{R}^{N \times N}$ such that the following LMI holds true:

$$S_{ws} = \begin{pmatrix} 2 P G_{ws} - P A - A^{\top} P - Z & -P A^{\tau} \\ -(A^{\tau})^{\top} P & Z \end{pmatrix} > 0$$

where we have let

$$G_{ws} = \operatorname{diag}(1/R_{OFF}, \ldots, 1/R_{OFF}) \in \mathbb{R}^{N \times N}.$$
It is worth noting that, in order to satisfy Assumption 2, it is necessary that the matrix $A - G_{ws}$ is Hurwitz, namely that the real part of each eigenvalue of $A$ is less than $1/R_{OFF}$.
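Since Assumption 2 is a standard LMI feasibility problem, it can be checked numerically with any semidefinite programming tool. The sketch below is ours and relies on the cvxpy package (an assumed dependency, not something used in the paper):

```python
import numpy as np
import cvxpy as cp

def check_assumption_2(A, A_tau, R_off, eps=1e-6):
    """Search for diagonal P > 0 and symmetric Z > 0 such that S_ws > 0."""
    N = A.shape[0]
    G_ws = np.eye(N) / R_off
    p = cp.Variable(N)                       # diagonal entries of P
    Z = cp.Variable((N, N), symmetric=True)
    P = cp.diag(p)
    S = cp.bmat([[2 * P @ G_ws - P @ A - A.T @ P - Z, -P @ A_tau],
                 [-A_tau.T @ P,                        Z]])
    S = (S + S.T) / 2                        # make symmetry explicit for cvxpy
    prob = cp.Problem(cp.Minimize(0),
                      [p >= eps, S >> eps * np.eye(2 * N)])
    prob.solve()
    return prob.status == cp.OPTIMAL
```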
Theorem 1.
Suppose that Assumptions 1, 2 hold. Then the memristor neural network (19) and (20) is convergent. More specifically:
(1) $V(t) \to 0$ as $t \to +\infty$ for any initial conditions;
(2) $X(t)$ converges in finite time, i.e., there exist $X^* \in K_X$ and $t_f < +\infty$ such that $X(t) = X^*$, $t \ge t_f$, where $X^*$ and $t_f$ depend upon the initial conditions.
Proof. 
Let us consider the candidate Lyapunov function $W(\cdot, \cdot) : \mathbb{R}^{N+1} \to \mathbb{R}$ given by

$$W(V(t), t) = \sum_{i=1}^{N} 2 p_i \int_{0}^{v_i(t)} f(s)\, ds + \sum_{i=1}^{N} \sum_{j=1}^{N} \int_{t - \tau}^{t} f(v_i(s))\, z_{ij}\, f(v_j(s))\, ds.$$
Note that $W(V(t), t) \ge 0$ for any $t \ge 0$. Let

$$T = \begin{pmatrix} F(V(t)) \\ F(V(t - \tau)) \end{pmatrix} \in \mathbb{R}^{2N}.$$

Consider now the matrix

$$S = S_{ws} + \begin{pmatrix} 2 P (G(X(t)) - G_{ws}) & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 2 P G(X(t)) - P A - A^{\top} P - Z & -P A^{\tau} \\ -(A^{\tau})^{\top} P & Z \end{pmatrix}$$

obtained by replacing $G_{ws}$ with $G(X(t))$ in $S_{ws}$. Observe that $G(X(t)) - G_{ws} \ge 0$; consequently, $S_{ws} > 0$ implies $S > 0$.
The time derivative of $W(\cdot, \cdot)$ can be written as

$$\frac{d W(V(t), t)}{dt} = -T^{\top} S T + 2 F^{\top}(V(t)) P G(X(t)) \big( F(V(t)) - V(t) \big),$$

where $2 F^{\top}(V(t)) P G(X(t)) ( F(V(t)) - V(t) ) \le 0$, due to Assumption 1.
We have

$$\frac{d W(V(t), t)}{dt} \le -T^{\top} S T \le -\Lambda_m(S) \|T\|^2 \le -\Lambda_m(S) \sum_{i=1}^{N} f(v_i(t))^2 \le 0$$

for any $t \ge 0$, where $\Lambda_m(M)$ denotes the minimum eigenvalue of a matrix $M$. Since $S > 0$, $\Lambda_m(S) > 0$. By integrating the previous inequality, we obtain

$$W(V(t), t) \le W(V(0), 0) - \Lambda_m(S) \sum_{i=1}^{N} \int_{0}^{t} f(v_i(s))^2\, ds.$$
Assume, for contradiction, that $V(t) \nrightarrow 0$ as $t \to +\infty$. From Property 3, we know that, after a certain finite time, the dynamics of $V(t)$ evolves in the hypercube $K_V = [-\bar{v}_{\varepsilon}, \bar{v}_{\varepsilon}]^N$; hence, $\|V(t)\| \le \bar{v}_{\varepsilon} \sqrt{N}$.
Furthermore, the norm of $\dot{V}(t)$ is bounded as well. In fact, from (19), in the interior of the hypercube we have

$$\|\dot{V}(t)\| = \| -G(X(t))\, V(t) + A F(V(t)) + A^{\tau} F(V(t - \tau)) \| \le \|G(X(t))\, V(t)\| + \|A F(V(t))\| + \|A^{\tau} F(V(t - \tau))\|.$$

We know that the entries of $G(X)$ belong to $[G_{OFF}, G_{ON}]$, with $G_{OFF} = 1/R_{OFF}$ and $G_{ON} = 1/R_{ON}$, and, from the boundedness of $f$, $|f(\rho)| \le \tilde{f}$ for any $\rho \in \mathbb{R}$. As a consequence, $\|\dot{V}(t)\|$ is bounded by

$$\|\dot{V}(t)\| \le G_{ON} \bar{v}_{\varepsilon} \sqrt{N} + \sqrt{\sum_{i=1}^{N} \Big( \sum_{j=1}^{N} |a_{ij}| \tilde{f} \Big)^2} + \sqrt{\sum_{i=1}^{N} \Big( \sum_{j=1}^{N} |a_{ij}^{\tau}| \tilde{f} \Big)^2} \le G_{ON} \bar{v}_{\varepsilon} \sqrt{N} + N \sqrt{N} \max_{1 \le i, j \le N} |a_{ij}|\, \tilde{f} + N \sqrt{N} \max_{1 \le i, j \le N} |a_{ij}^{\tau}|\, \tilde{f}$$

where $\tilde{f}$ is the maximum absolute value of the function $f$, defined in Property 3.
Hence, there exist $h \in \{1, \ldots, N\}$ and a sequence $\{t_k\}$, with $t_k \to +\infty$ as $k \to +\infty$, such that $v_h(t_k) \to \nu \ne 0$ as $k \to +\infty$. Then, $f(v_h(t_k)) \to f(\nu) \ne 0$ as $k \to +\infty$. Assume, without loss of generality, $f(\nu) > 0$. Since $f(v_h(t_k)) \to f(\nu) > 0$ and $\|\dot{V}(\cdot)\|$ is bounded, there exist $\bar{k}$ and $\epsilon > 0$ such that, for $k \ge \bar{k}$, we have $f(v_h(t)) > (1/2) f(\nu) > 0$ for $t \in [t_k - \epsilon, t_k + \epsilon]$. Taking into account that $d W(V(t), t)/dt \le 0$ for any $t$, we can write
$$\lim_{t \to +\infty} W(V(t), t) \le W(V(0), 0) - \Lambda_m(S) \sum_{i=1}^{N} \int_{0}^{+\infty} f(v_i(t))^2\, dt \le W(V(0), 0) - \Lambda_m(S) \sum_{k \ge \bar{k}} \int_{t_k - \epsilon}^{t_k + \epsilon} \frac{f(\nu)^2}{4}\, dt = -\infty.$$

However, this contradicts the non-negativity of $W(\cdot, \cdot)$.
Now, since $V(t) \to 0$ as $t \to +\infty$, there exists a time instant $t_f > 0$ such that $\max_i |v_i(t)| \le \min\{|v_{on}|, v_{off}\}$ for any $t \ge t_f$. Taking into account (5), this implies $\dot{X}(t) = 0$ for any $t \ge t_f$, i.e., $X(t)$ converges to some $X^* \in K_X$ in finite time $t_f$. □
In Theorem 1, we have proved the convergence of V ( · ) 0 and the convergence in finite time of X ( · ) under suitable hypotheses on the interconnection and delay interconnection matrix (cf. Assumption 2) and on the neuron activations f ( · ) (cf. Assumption 1). However, we have been unable to give an estimate of the convergence speed of V ( · ) or of the convergence time of X ( · ) . In what follows, we pose a different assumption on the interconnections that enables us to obtain such quantitative estimates and also permits us to weaken the restrictions in Assumption 1 for the activations.
Assumption 3.
We have $0 \le f(v_i)/v_i \le \sigma$ for some $\sigma > 0$, for any $v_i \in \mathbb{R}$, $v_i \ne 0$, and any $i = 1, 2, \ldots, N$.
Assumption 4.
There exist three symmetric positive definite matrices $P \in \mathbb{R}^{N \times N}$, $Z \in \mathbb{R}^{N \times N}$ and $\Sigma_3 \in \mathbb{R}^{N \times N}$ such that the following LMI holds true:

$$S_0 = \begin{pmatrix} P G_{ws} + G_{ws} P - \Sigma (\Sigma_3 + Z) \Sigma & P A & P A^{\tau} \\ A^{\top} P & \Sigma_3 & 0 \\ (A^{\tau})^{\top} P & 0 & Z \end{pmatrix} > 0$$

where we have let

$$\Sigma = \operatorname{diag}(\sigma, \ldots, \sigma) \in \mathbb{R}^{N \times N}.$$
We preliminarily recall the following two results.
Lemma 1
([44]). Given any real matrices $\Sigma_1$, $\Sigma_2$ and a symmetric positive definite matrix $\Sigma_3 = \Sigma_3^{\top} > 0$ of appropriate dimensions, the following inequality holds:

$$\Sigma_1^{\top} \Sigma_2 + \Sigma_2^{\top} \Sigma_1 \le \beta\, \Sigma_1^{\top} \Sigma_3 \Sigma_1 + \beta^{-1} \Sigma_2^{\top} \Sigma_3^{-1} \Sigma_2$$

where $\beta > 0$ is a scalar.
Lemma 2
(Schur Complement [40]). Given a symmetric matrix

$$M = \begin{pmatrix} A & B \\ B^{\top} & C \end{pmatrix}$$

the following equivalence holds true:

$$M > 0 \iff C > 0 \ \text{and} \ A - B C^{-1} B^{\top} > 0.$$
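A quick numerical illustration of Lemma 2 (ours, for intuition only): build a random symmetric block matrix and compare the two sides of the equivalence.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = 4 * np.eye(n) + 0.1 * rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n))
C = 4 * np.eye(n)

M = np.block([[A, B], [B.T, C]])
lhs = np.all(np.linalg.eigvalsh(M) > 0)                 # M > 0 ?
schur = A - B @ np.linalg.inv(C) @ B.T                  # A - B C^{-1} B^T
rhs = np.all(np.linalg.eigvalsh(C) > 0) and np.all(np.linalg.eigvalsh(schur) > 0)
assert lhs == rhs
```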
Proposition 1.
Suppose that Assumptions 3 and 4 hold. Let

$$0 < k \le \min \left\{ \frac{\Lambda_m(S_0)}{4 \Lambda_M(P)},\ \frac{1}{2\tau} \ln \left( 1 + \frac{\Lambda_m(S_0)}{2 \Lambda_M\big(P A^{\tau} Z^{-1} (P A^{\tau})^{\top}\big)} \right) \right\} \quad (43)$$

where $\Lambda_M(M)$ denotes the maximum eigenvalue of the matrix $M$. Then, we have

$$S_k = \begin{pmatrix} P G_{ws} + G_{ws} P - 2 k P - \Sigma (\Sigma_3 + Z) \Sigma & P A & P A^{\tau} \\ A^{\top} P & \Sigma_3 & 0 \\ (A^{\tau})^{\top} P & 0 & e^{-2 k \tau} Z \end{pmatrix} > 0.$$
Proof. 
In order to show that $S_k > 0$, let us consider

$$\Omega_k = P G_{ws} + G_{ws} P - 2 k P - \Sigma (\Sigma_3 + Z) \Sigma - P A \Sigma_3^{-1} A^{\top} P - e^{2 k \tau} P A^{\tau} Z^{-1} (A^{\tau})^{\top} P$$

and

$$\Omega_0 = P G_{ws} + G_{ws} P - \Sigma (\Sigma_3 + Z) \Sigma - P A \Sigma_3^{-1} A^{\top} P - P A^{\tau} Z^{-1} (A^{\tau})^{\top} P.$$

Due to Assumption 4, and by applying Lemma 2 to $S_0$, we have $\Omega_0 > 0$, and we can rewrite $\Omega_k$ as

$$\Omega_k = \Omega_0 - 2 k P + (1 - e^{2 k \tau})\, P A^{\tau} Z^{-1} (A^{\tau})^{\top} P.$$

Since $P A^{\tau} Z^{-1} (A^{\tau})^{\top} P > 0$ and consequently $\Lambda_M(P A^{\tau} Z^{-1} (A^{\tau})^{\top} P) > 0$, if we choose $k$ as in (43), the following inequality holds:

$$\Lambda_m(\Omega_k) \ge \Lambda_m(\Omega_0) - 2 k \Lambda_M(P) - (e^{2 k \tau} - 1)\, \Lambda_M(P A^{\tau} Z^{-1} (A^{\tau})^{\top} P) > 0$$

and hence $\Omega_k > 0$. Taking into account the previous result, and by applying Lemma 2 to $S_k$, we finally conclude that $S_k > 0$. □
Theorem 2.
Suppose that Assumptions 3 and 4 hold. Then the memristor neural network (19) and (20) is convergent. More specifically:
(1) $V(t) \to 0$ exponentially as $t \to +\infty$ with convergence rate $k$ as in (43), i.e.,

$$\|V(t)\| \le \sqrt{\frac{\Lambda_M(P) + \Lambda_M(\Sigma Z \Sigma) \frac{1 - e^{-2 k \tau}}{2 k}}{\Lambda_m(P)}}\ \max_{-\tau \le \theta \le 0} \|V(\theta)\|\ e^{-k t}, \quad t \ge 0;$$

(2) $X(t)$ converges in finite time, i.e., there exists $X^* \in K_X$ such that $X(t) = X^*$, $t \ge t_f$, where

$$t_f = \frac{1}{k} \log \left( \sqrt{\frac{\Lambda_M(P) + \Lambda_M(\Sigma Z \Sigma) \frac{1 - e^{-2 k \tau}}{2 k}}{\Lambda_m(P)}}\ \frac{\max_{-\tau \le \theta \le 0} \|V(\theta)\|}{\hat{v}} \right)$$

and $\hat{v} = \min\{|v_{on}|, v_{off}\}$.
Proof. 
Let us consider the following candidate Lyapunov function:

$$W(V(t), t) = e^{2 k t}\, V^{\top}(t) P V(t) + \int_{t - \tau}^{t} e^{2 k s} F^{\top}(V(s)) Z F(V(s))\, ds.$$

We have $W(V(t), t) \ge e^{2 k t} \Lambda_m(P) \|V(t)\|^2$; moreover, the time derivative of (44) can be written as

$$\frac{d W(V(t), t)}{dt} = 2 k e^{2 k t} V^{\top}(t) P V(t) + e^{2 k t} V^{\top}(t) P \dot{V}(t) + e^{2 k t} \dot{V}^{\top}(t) P V(t) + e^{2 k t} F^{\top}(V(t)) Z F(V(t)) - e^{2 k (t - \tau)} F^{\top}(V(t - \tau)) Z F(V(t - \tau)) = e^{2 k t} \Big[ V^{\top}(t) \big( 2 k P - P G(X(t)) - G(X(t)) P \big) V(t) + F^{\top}(V(t)) A^{\top} P V(t) + V^{\top}(t) P A F(V(t)) + F^{\top}(V(t - \tau)) (A^{\tau})^{\top} P V(t) + V^{\top}(t) P A^{\tau} F(V(t - \tau)) + F^{\top}(V(t)) Z F(V(t)) - e^{-2 k \tau} F^{\top}(V(t - \tau)) Z F(V(t - \tau)) \Big].$$
By applying Lemma 1 with $\Sigma_1 = F(V(t))$, $\Sigma_2 = A^{\top} P V(t)$ and $\beta = 1$, we obtain the following inequality:

$$F^{\top}(V(t)) A^{\top} P V(t) + V^{\top}(t) P A F(V(t)) \le F^{\top}(V(t)) \Sigma_3 F(V(t)) + V^{\top}(t) P A \Sigma_3^{-1} A^{\top} P V(t).$$

Similarly, letting $\Sigma_1 = F(V(t - \tau)) e^{-k \tau}$, $\Sigma_2 = (A^{\tau})^{\top} P V(t) e^{k \tau}$, $\Sigma_3 = Z$ and $\beta = 1$, by Lemma 1 we obtain

$$F^{\top}(V(t - \tau)) (A^{\tau})^{\top} P V(t) + V^{\top}(t) P A^{\tau} F(V(t - \tau)) \le e^{-2 k \tau} F^{\top}(V(t - \tau)) Z F(V(t - \tau)) + e^{2 k \tau} V^{\top}(t) P A^{\tau} Z^{-1} (A^{\tau})^{\top} P V(t).$$
As a consequence, we can write the following inequality involving the time derivative of (44):

$$\frac{d W(V(t), t)}{dt} \le e^{2 k t} \Big\{ V^{\top}(t) \big( 2 k P - P G(X(t)) - G(X(t)) P \big) V(t) + F^{\top}(V(t)) \Sigma_3 F(V(t)) + V^{\top}(t) P A \Sigma_3^{-1} A^{\top} P V(t) + F^{\top}(V(t)) Z F(V(t)) + e^{2 k \tau} V^{\top}(t) P A^{\tau} Z^{-1} (A^{\tau})^{\top} P V(t) \Big\}.$$

Noting that $\Sigma_3 + Z > 0$, by Assumption 3 the following inequality holds:

$$F^{\top}(V(t)) (\Sigma_3 + Z) F(V(t)) \le V^{\top}(t)\, \Sigma (\Sigma_3 + Z) \Sigma\, V(t)$$

and hence

$$\frac{d W(V(t), t)}{dt} \le -e^{2 k t}\, V^{\top}(t)\, \Omega\, V(t)$$

with the matrix $\Omega$ defined as

$$\Omega = P G(X(t)) + G(X(t)) P - 2 k P - \Sigma (\Sigma_3 + Z) \Sigma - P A \Sigma_3^{-1} A^{\top} P - e^{2 k \tau} P A^{\tau} Z^{-1} (A^{\tau})^{\top} P.$$
Noting that $G(X(t)) - G_{ws} \ge 0$, Proposition 1 ensures that $\Omega \ge \Omega_k > 0$.
Now, let us prove that $V(t) \to 0$ exponentially as $t \to +\infty$ and give an estimate of the convergence rate. Since $d W(V(t), t)/dt \le 0$, we can assert that $W(V(t), t) \le W(V(0), 0)$, $t \ge 0$. Note that

$$W(V(0), 0) = V^{\top}(0) P V(0) + \int_{-\tau}^{0} e^{2 k s} F^{\top}(V(s)) Z F(V(s))\, ds \le \Lambda_M(P) \|V(0)\|^2 + \int_{-\tau}^{0} e^{2 k s} V^{\top}(s) \Sigma Z \Sigma V(s)\, ds \le \Lambda_M(P) \|V(0)\|^2 + \Lambda_M(\Sigma Z \Sigma) \frac{1 - e^{-2 k \tau}}{2 k} \max_{-\tau \le \theta \le 0} \|V(\theta)\|^2.$$

Recalling that $W(V(t), t) \ge e^{2 k t} \Lambda_m(P) \|V(t)\|^2$, we finally obtain

$$\|V(t)\|^2 \le \frac{e^{-2 k t}}{\Lambda_m(P)} \left[ \Lambda_M(P) \|V(0)\|^2 + \Lambda_M(\Sigma Z \Sigma) \frac{1 - e^{-2 k \tau}}{2 k} \max_{-\tau \le \theta \le 0} \|V(\theta)\|^2 \right]$$

and hence

$$\|V(t)\| \le \sqrt{\frac{\Lambda_M(P) + \Lambda_M(\Sigma Z \Sigma) \frac{1 - e^{-2 k \tau}}{2 k}}{\Lambda_m(P)}}\ \max_{-\tau \le \theta \le 0} \|V(\theta)\|\ e^{-k t}. \quad (45)$$
This shows that V ( · ) is exponentially convergent to 0 with a convergence rate k.
Let us define $\hat{v} = \min\{|v_{on}|, v_{off}\}$. Since $V(t) \to 0$ as $t \to +\infty$, there exists a time instant $t_f > 0$ such that $\max_i |v_i(t)| \le \hat{v}$ for any $t \ge t_f$. Taking into account (5), this implies $\dot{X}(t) = 0$ for any $t \ge t_f$, i.e., $X(t)$ converges to some $X^* \in K_X$ in finite time $t_f$.
The worst-case estimate of $t_f$ is obtained by setting $\|V(t_f)\| = \hat{v}$ in (45). We conclude that

$$t_f = \frac{1}{k} \log \left( \sqrt{\frac{\Lambda_M(P) + \Lambda_M(\Sigma Z \Sigma) \frac{1 - e^{-2 k \tau}}{2 k}}{\Lambda_m(P)}}\ \frac{\max_{-\tau \le \theta \le 0} \|V(\theta)\|}{\hat{v}} \right). \quad \square$$
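The quantitative estimates of Theorem 2 are straightforward to evaluate once LMI certificates are available. The following Python sketch (ours; all inputs are assumed to be given matrices satisfying Assumption 4) computes an admissible rate $k$ from (43) and the corresponding worst-case settling time $t_f$:

```python
import numpy as np

def rate_and_settling_time(S0, P, A_tau, Z, Sigma, tau, v_hat, V_max):
    """k from (43) and worst-case t_f of Theorem 2 (V_max = max ||V(theta)||)."""
    lmin = lambda M: np.linalg.eigvalsh(M).min()
    lmax = lambda M: np.linalg.eigvalsh(M).max()
    Q = P @ A_tau @ np.linalg.inv(Z) @ (P @ A_tau).T
    k = min(lmin(S0) / (4 * lmax(P)),
            np.log(1 + lmin(S0) / (2 * lmax(Q))) / (2 * tau))
    c = np.sqrt((lmax(P) + lmax(Sigma @ Z @ Sigma)
                 * (1 - np.exp(-2 * k * tau)) / (2 * k)) / lmin(P))
    t_f = np.log(c * V_max / v_hat) / k
    return k, t_f
```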
The next result is an immediate consequence of the results on convergence in the two theorems.
Proposition 2.
Under the assumptions of Theorem 1, or the assumptions of Theorem 2, for the memristor neural network we have $\pi = \pi_E$.

6. Numerical Simulations

In this section, we report on some simulations performed using MATLAB to illustrate the dynamic behavior of the considered memristor neural network.
In the simulations, the state evolution of the VTEAM model (5) is described by the following parameters:

$$v_{off} = 10\ \text{mV}, \quad k_{off} = 10\ \text{nm/s}, \quad \alpha_{off} = 3 \times 10^{-8}, \qquad v_{on} = -10\ \text{mV}, \quad k_{on} = -10\ \text{nm/s}, \quad \alpha_{on} = 3 \times 10^{-8}.$$
For the memductance, we have chosen the linear dependence (7), setting its parameters as follows:

$$R_{OFF} = 1\ \Omega, \quad x_{off} = 10\ \text{nm}, \qquad R_{ON} = 0.1\ \Omega, \quad x_{on} = 0\ \text{nm}.$$
We considered a two-neuron neural network with interconnection and delay interconnection matrices

$$A = \begin{pmatrix} -1.25 & 10 \\ -12.5 & -1.25 \end{pmatrix}; \qquad A^{\tau} = \begin{pmatrix} -1.25 & -1.25 \\ 12.5/8 & -12.5/8 \end{pmatrix}.$$

The delay $\tau$ is set to $250$ ms and the neuron activations are given by the piecewise-linear function $f(\rho) = (1/2)(|\rho + 1| - |\rho - 1|)$.
According to Property 3, the hypercube $K_V$ in (17) is defined by $\bar{v}_{\varepsilon} = 16.875$ V. Substituting the matrices

$$P = \begin{pmatrix} 0.63 & 0 \\ 0 & 0.51 \end{pmatrix}, \qquad Z = \begin{pmatrix} 1.25 & 0.06 \\ 0.06 & 1.04 \end{pmatrix}$$

into the matrix $S_{ws}$ defined in Assumption 2, i.e.,

$$S_{ws} = \begin{pmatrix} 2 P G_{ws} - P A - A^{\top} P - Z & -P A^{\tau} \\ -(A^{\tau})^{\top} P & Z \end{pmatrix} = \begin{pmatrix} 1.585 & 0.015 & 0.7875 & 0.7875 \\ 0.015 & 1.255 & -0.7969 & 0.7969 \\ 0.7875 & -0.7969 & 1.25 & 0.06 \\ 0.7875 & 0.7969 & 0.06 & 1.04 \end{pmatrix}$$

and computing the eigenvalues of $S_{ws}$, i.e.,

$$\lambda_{S_{ws}} = \begin{pmatrix} 0.027 & 0.2749 & 2.2959 & 2.5323 \end{pmatrix}$$

we observe that $S_{ws} > 0$; therefore, the assumptions of Theorem 1 hold for the neural network.
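These quantities are easy to double-check numerically. In the Python sketch below (ours, not from the paper), the matrices carry the signs reconstructed above; note that $\bar{v}_{\varepsilon}$ depends only on the entry magnitudes of $A$ and $A^{\tau}$, so it is insensitive to that reconstruction.

```python
import numpy as np

A     = np.array([[-1.25,  10.0 ], [-12.5,   -1.25  ]])
A_tau = np.array([[-1.25,  -1.25], [ 12.5/8, -12.5/8]])
P     = np.diag([0.63, 0.51])
Z     = np.array([[1.25, 0.06], [0.06, 1.04]])
R_off, f_tilde = 1.0, 1.0          # f~ = 1 for the piecewise-linear activation

# Bound of Property 3 (the epsilon term dropped): max over the rows.
v_bar = (R_off * (np.abs(A).sum(1) + np.abs(A_tau).sum(1)) * f_tilde).max()
print(v_bar)                        # 16.875, the size of K_V quoted above

# Eigenvalues of S_ws, to verify Assumption 2.
G_ws = np.eye(2) / R_off
S_ws = np.block([[2 * P @ G_ws - P @ A - A.T @ P - Z, -P @ A_tau],
                 [-A_tau.T @ P,                        Z         ]])
print(np.linalg.eigvalsh(S_ws))     # all positive, matching the values above
```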

6.1. Example 1

In this first example, we evaluate the state evolution of the neural network by setting sinusoidal initial conditions for the neuron voltages,

$$V(t) = \begin{pmatrix} 2.35 \sin(50 \pi t + 0.2 \pi) \\ 1.15 \sin(80 \pi t + 0.3 \pi) \end{pmatrix}\ \text{V}$$

for $t \in [-\tau, 0]$, and testing four different initial conditions for the memristor states, i.e.,

$$X_1(0) = \begin{pmatrix} 0.5 \\ 0.5 \end{pmatrix}\ \text{nm}, \quad X_2(0) = \begin{pmatrix} 0.5 \\ 9.5 \end{pmatrix}\ \text{nm}, \quad X_3(0) = \begin{pmatrix} 9.5 \\ 0.5 \end{pmatrix}\ \text{nm}, \quad X_4(0) = \begin{pmatrix} 9.5 \\ 9.5 \end{pmatrix}\ \text{nm}.$$
Figure 1 shows the solutions obtained starting from the initial conditions thus defined. It can be observed that, in all the simulations, the voltages converge to 0, as predicted by Theorem 1, even if the trajectories are influenced by the memristor state initial conditions and dynamics. However, note that the asymptotic values of the memristor states are different for each simulation, i.e., they depend upon the initial conditions.
Figure 2 shows the time-domain behavior of the specific simulation performed with initial condition $X_1(0) = (0.5, 0.5)^{\top}$ nm. From the plot, we can see that when the voltages reach the threshold $v_{on}$ (resp., $v_{off}$), the memristor dynamics stops, i.e., the memristor states no longer change.

6.2. Example 2

In the second example, we set the initial conditions for the memristor states at

$$X(0) = \begin{pmatrix} 5 \\ 5 \end{pmatrix}\ \text{nm}$$

and we evaluated the system dynamics starting from four different sinusoidal initial conditions for the neuron voltages, i.e.,

$$V_1(t) = \begin{pmatrix} 2.35 \sin(50 \pi t + 0.2 \pi) \\ 1.15 \sin(80 \pi t + 0.3 \pi) \end{pmatrix}\ \text{V}, \quad V_2(t) = \begin{pmatrix} 2.35 \sin(50 \pi t + 0.2 \pi) \\ -1.15 \sin(80 \pi t + 0.3 \pi) \end{pmatrix}\ \text{V}, \quad V_3(t) = \begin{pmatrix} -2.35 \sin(50 \pi t + 0.2 \pi) \\ 1.15 \sin(80 \pi t + 0.3 \pi) \end{pmatrix}\ \text{V}, \quad V_4(t) = \begin{pmatrix} -2.35 \sin(50 \pi t + 0.2 \pi) \\ -1.15 \sin(80 \pi t + 0.3 \pi) \end{pmatrix}\ \text{V}.$$
Figure 3 shows the solutions obtained starting from the initial conditions defined above. The simulations show that, regardless of the initial condition, the neuron voltages always converge to 0. The different time evolution of the neuron voltages, however, determines different dynamics of the memristor states. In fact, even if the initial memristor state is the same in each of the four analyzed cases, the subsequent dynamics are different, with the memristors reaching different asymptotic state values at the end of each simulation. This is better illustrated in Figure 4 for the simulation performed with initial condition $V_1(t) = (2.35 \sin(50 \pi t + 0.2 \pi), 1.15 \sin(80 \pi t + 0.3 \pi))^{\top}$ V.

7. Conclusions

The paper has established some fundamental results on trajectory convergence for a class of differential inclusions, named DVIs, modeling delayed neural networks with real generic memristors obeying the VTEAM model. The conditions for convergence hold for any constant delay, and they can be applied to nonsymmetric neuron interconnection matrices. Moreover, they can be effectively checked numerically, since they are expressed in the form of LMIs. Although the VTEAM model can be used to fit several real memristor devices, it is of interest in future work to extend the obtained results to other classes of real memristors, such as those modeled by extended memristors.

Author Contributions

Conceptualization, M.D.M., M.F., R.M., L.P., G.I. and A.T.; methodology, M.D.M., M.F., R.M., L.P., G.I. and A.T.; formal analysis, M.D.M., M.F., R.M., L.P., G.I. and A.T.; writing—original draft preparation, M.D.M., M.F., R.M., L.P., G.I. and A.T.; writing—review and editing, M.D.M., M.F., R.M., L.P., G.I. and A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Italian Ministry of Education, University and Research (MIUR) grant PRIN 2017LSCR4K 002 (“Analogue Computing with Dynamic Switching Memristor Oscillators: Theory, Devices and Applications (COSMO)”).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Waldrop, M.M. The chips are down for Moore's law. Nat. News 2016, 530, 144.
  2. Williams, R.S. What's Next? [The end of Moore's law]. Comp. Sci. Eng. 2017, 19, 7–13.
  3. Zidan, M.A.; Strachan, J.P.; Lu, W.D. The future of electronics based on memristive systems. Nat. Electron. 2018, 1, 22.
  4. Yang, J.J.; Williams, R.S. Memristive devices in computing system: Promises and challenges. ACM J. Emerg. Technol. Comput. Syst. 2013, 9, 11.
  5. Li, C.; Wang, Z.; Rao, M.; Belkin, D.; Song, W.; Jiang, H.; Yan, P.; Li, Y.; Lin, P.; Hu, M.; et al. Long Short-Term Memory Networks in Memristor Crossbar Arrays. Nat. Mach. Intell. 2019, 1, 49.
  6. Ielmini, D.; Pedretti, G. Device and Circuit Architectures for In-Memory Computing. Adv. Intell. Syst. 2020, 2, 2000040.
  7. Ielmini, D.; Wong, H.S.P. In-memory computing with resistive switching devices. Nat. Electron. 2018, 1, 333–343.
  8. Xia, Q.; Yang, J.J. Memristive crossbar arrays for brain-inspired computing. Nat. Mater. 2019, 18, 309–323.
  9. Chua, L.O. Memristor-The missing circuit element. IEEE Trans. Circuit Theory 1971, 18, 507–519.
  10. Chua, L.O.; Kang, S.M. Memristive devices and systems. Proc. IEEE 1976, 64, 209–223.
  11. Hajri, B.; Aziza, H.; Mansour, M.M.; Chehab, A. RRAM device models: A comparative analysis with experimental validation. IEEE Access 2019, 7, 168963–168980.
  12. Chen, P.Y.; Yu, S. Compact modeling of RRAM devices and its applications in 1T1R and 1S1R array design. IEEE Trans. Electron Devices 2015, 62, 4022–4028.
  13. Mazumder, P.; Kang, S.M.; Waser, R. Special issue on memristors: Devices, models, and applications. Proc. IEEE 2012, 100, 1911–1919.
  14. Chua, L. Everything You Wish to Know about Memristors But Are Afraid to Ask. Radioengineering 2015, 24, 319–368.
  15. Corinto, F.; Forti, M.; Chua, L.O. Nonlinear Circuits and Systems with Memristors; Springer: Berlin/Heidelberg, Germany, 2021.
  16. Kvatinsky, S.; Friedman, E.G.; Kolodny, A.; Weiser, U.C. TEAM: Threshold adaptive memristor model. IEEE Trans. Circuits Syst. I Reg. Pap. 2013, 60, 211–221.
  17. Kvatinsky, S.; Ramadan, M.; Friedman, E.G.; Kolodny, A. VTEAM: A general model for voltage-controlled memristors. IEEE Trans. Circuits Syst. II Express Briefs 2015, 62, 786–790.
  18. Khalid, M. Review on various memristor models, characteristics, potential applications, and future works. Trans. Electr. Electron. Mater. 2019, 20, 289–298.
  19. Ascoli, A.; Corinto, F.; Senger, V.; Tetzlaff, R. Memristor model comparison. IEEE Circuits Syst. Mag. 2013, 13, 89–105.
  20. Haykin, S. Neural Networks: A Comprehensive Foundation; Prentice-Hall: Hoboken, NJ, USA, 1999.
  21. Chua, L.O.; Yang, L. Cellular neural networks: Theory. IEEE Trans. Circuits Syst. 1988, 35, 1257–1272.
  22. Chua, L.O.; Yang, L. Cellular neural networks: Applications. IEEE Trans. Circuits Syst. 1988, 35, 1273–1290.
  23. Cohen, M.A.; Grossberg, S. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man Cybern. 1983, 13, 815–825.
  24. Hirsch, M. Convergent activation dynamics in continuous time networks. Neural Netw. 1989, 2, 331–349.
  25. Chua, L.O. Special Issue on Nonlinear Waves, Patterns and Spatio-temporal Chaos in Dynamic Arrays. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 1995, 42, 557–823.
  26. Michel, A.N.; Farrell, J.A.; Porod, W. Qualitative analysis of neural networks. IEEE Trans. Circuits Syst. 1989, 36, 229–243.
  27. Di Marco, M.; Forti, M.; Grazzini, M.; Pancioni, L. Limit Set Dichotomy and Convergence of Cooperative Piecewise Linear Neural Networks. IEEE Trans. Circuits Syst. I Reg. Pap. 2011, 58, 1052–1062.
  28. Di Marco, M.; Forti, M.; Grazzini, M.; Pancioni, L. Convergent Dynamics of Nonreciprocal Differential Variational Inequalities Modeling Neural Networks. IEEE Trans. Circuits Syst. I Reg. Pap. 2013, 60, 3227–3238.
  29. Zhang, H.; Wang, Z.; Liu, D. A Comprehensive Review of Stability Analysis of Continuous-Time Recurrent Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1229–1262.
  30. Di Marco, M.; Forti, M.; Pancioni, L. Complete stability of feedback CNNs with dynamic memristors and second-order cells. Int. J. Circuit Theory Appl. 2016, 44, 1959–1981.
  31. Di Marco, M.; Forti, M.; Pancioni, L. Convergence and Multistability of Nonsymmetric Cellular Neural Networks with Memristors. IEEE Trans. Cybern. 2017, 47, 2970–2983.
  32. Di Marco, M.; Forti, M.; Pancioni, L. Memristor standard cellular neural networks computing in the flux–charge domain. Neural Netw. 2017, 93, 152–164.
  33. Corinto, F.; Forti, M. Memristor Circuits: Flux–Charge Analysis Method. IEEE Trans. Circuits Syst. I Reg. Pap. 2016, 63, 1997–2009.
  34. Yao, W.; Wang, C.; Cao, J.; Sun, Y.; Zhou, C. Hybrid multisynchronization of coupled multistable memristive neural networks with time delays. Neurocomputing 2019, 363, 281–294.
  35. Nie, X.; Zheng, W.X.; Cao, J. Multistability of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. Neural Netw. 2015, 71, 27–36.
  36. Pershin, Y.V.; Di Ventra, M. On the validity of memristor modeling in the neural network literature. Neural Netw. 2020, 121, 52–56.
  37. Chua, L.O.; Roska, T. Cellular Neural Networks and Visual Computing: Foundation and Applications; Cambridge University Press: Cambridge, UK, 2005.
  38. Yin, W.; Yang, W.; Liu, H. A neural network scheme for recovering scattering obstacles with limited phaseless far-field data. J. Comput. Phys. 2020, 417, 109594.
  39. Aubin, J.P.; Cellina, A. Differential Inclusions. Set-Valued Maps and Viability Theory; Springer: Berlin, Germany, 1984.
  40. Boyd, S.P.; El Ghaoui, L.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in System and Control Theory; SIAM: Philadelphia, PA, USA, 1994; Volume 15.
  41. Aubin, J.P.; Frankowska, H. Set-Valued Analysis; Birkhäuser: Boston, MA, USA, 1990.
  42. Di Marco, M.; Forti, M.; Grazzini, M.; Pancioni, L. On global exponential stability of standard and full-range CNNs. Int. J. Circuit Theory Appl. 2008, 36, 653–680.
  43. Hopfield, J. Neural Networks and Physical Systems with Emergent Collective Computational Abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558.
  44. Sanchez, E.N.; Perez, J.P. Input-to-state stability (ISS) analysis for dynamic neural networks. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 1999, 46, 1395–1398.
Figure 1. Simulations of the memristor neural network considered in Example 1. Each color represents a different solution, obtained for a specific memristor initial condition. The initial conditions are labeled by the red markers. The final memristor states are labeled by the black dots. The black dashed box in the right plot is the hypercube $K_X = [x_{on}, x_{off}]^2$ bounding the memristor states.
Figure 2. Time domain behavior for the solution in Example 1 having initial condition $X_1(0) = (0.5, 0.5)^{\top}$ nm.
Figure 3. Simulations of the memristor neural network considered in Example 2. Each color represents a different solution, obtained for a specific neuron voltage initial condition. The initial conditions are labeled by the red markers. The final memristor states are labeled by the black dots. The black dashed box in the right plot is the hypercube $K_X = [x_{on}, x_{off}]^2$ bounding the memristor states.
Figure 4. Time domain behavior for the state solution in Example 2 with $V_1(t) = (2.35 \sin(50 \pi t + 0.2 \pi), 1.15 \sin(80 \pi t + 0.3 \pi))^{\top}$ V.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
