A Neural Network Based on a Nonsmooth Equation for a Box Constrained Variational Inequality Problem

The variational inequality framework holds significant prominence across various domains, including economics and finance, network transportation, and game theory. In this work, a novel approach based on a neural network model is introduced to address a box constrained variational inequality problem. The original problem is first reformulated as a nonsmooth equation, after which a neural network model is devised to solve this reformulated equation. The study comprehensively investigates the inherent characteristics and properties of this neural network model. Employing the Lyapunov function method, the stability of the proposed neural network model is rigorously demonstrated in the Lyapunov sense. Furthermore, the efficacy of the proposed technique is substantiated through numerical simulations, providing empirical support for its applicability and effectiveness.


Introduction
Variational inequalities serve as a comprehensive framework for examining numerous optimization problems and possess significant applications across various domains such as economics, engineering, and transportation, among others, as detailed in the monograph [1] and associated literature. As elucidated in [2], variational inequalities represent a contemporary extension of variational principles, with historical roots tracing back to seminal works by Euler, Lagrange, and the Bernoulli brothers. The concepts and methodologies of variational inequalities are currently employed across a spectrum of scientific disciplines, showcasing their efficacy and innovation, as evidenced by scholarly contributions [3–10].
In this article, we study the box constrained variational inequality problem (BVIP(l, u, F)) as follows: find x ∈ X = {x ∈ R^n : l ≤ x ≤ u} such that
⟨F(x), y − x⟩ ≥ 0, ∀y ∈ X. (1)
The utilization of neural networks in addressing variational inequalities, especially within engineering applications demanding real-time solutions, has garnered significant attention in recent years, as discussed extensively in the literature [18–31]. Neural networks offer promising prospects due to their potential for efficient hardware implementation and their ability to provide real-time solutions, which may be challenging to achieve with traditional numerical algorithms, particularly in high-dimensional and dense problem settings. Various neural network architectures and methodologies have been proposed to tackle different aspects of variational inequality problems. For instance, in [24], the authors introduce a novel neural network approach for constrained variational inequalities that is stable in the Lyapunov sense. Similarly, [25] presents a projection neural network for variational inequalities, with proven stability under certain conditions. Moreover, for mixed variational inequalities, a proximal projection neural network method is proposed in [26], with convergence properties established under Lipschitz continuity conditions. These studies highlight the versatility and efficacy of neural networks in addressing diverse variational inequality formulations. Despite these advancements, most existing neural network approaches focus on general variational inequality problems and may not fully exploit the specific structure of BVIP(l, u, F). This presents an opportunity to develop specialized neural network architectures tailored to the unique characteristics of box constrained variational inequality problems, potentially leading to greater efficiency and effectiveness in solving such problems.
The introduction of specialized neural networks tailored to time-varying equations has significantly advanced the field, as evidenced by the Zeroing Neural Network (ZNN) model discussed in [32, 33]. The ZNN model offers exponential convergence towards theoretical solutions of time-varying equations, a notable improvement over existing methods. Building upon this foundation, subsequent research efforts have yielded valuable outcomes, as documented in studies such as [34–37]. One limitation of the classic ZNN is that it requires infinite time to converge to the theoretical solution.
To address this limitation, new neural networks were introduced in [38]. Moreover, they have been shown to be effective in tackling nonconvex Quadratic Programming (QP) problems [39]. However, as computational scales increase, the time required to obtain results becomes prohibitive, necessitating even faster convergence for practical applications. In response to this need, a neural network with varying parameters was developed in [40–42]. This innovation represents a significant advancement in accelerating convergence and addresses the challenges posed by larger computational scales. In addition, in [43], the authors integrate a redefined error monitor function into the neural network design. This integration enhances control over mobile redundant manipulators during tracking tasks, offering superior performance in terms of overshoot, robustness, and convergence speed compared to traditional neural networks, as demonstrated in [44]. These advancements underscore the potential of specialized neural networks for complex dynamic equations and hold promise for future research in the field.
This paper introduces a novel neural network method aimed at solving (1). Stability of the proposed neural network is analyzed in the Lyapunov sense, and convergence of the solution trajectory is guaranteed. Compared to existing studies, the article's main contributions can be summarized as follows: (1) utilizing the structure of BVIP(l, u, F), the paper provides a nonsmooth equation formulation for solving the problem defined by (1), and a neural network method is then proposed to tackle (1); (2) in contrast to the method proposed in [6] and the classical neural network method described in [31], the proposed method yields faster convergence of the solution trajectory towards the equilibrium point, as evidenced by the numerical experiments in Section 5; (3) unlike neural networks relying on projection functions as discussed in [6], the neural network in this paper operates without estimating any parameters, simplifying the computational process.
Overall, these contributions highlight the effectiveness of the proposed neural network in addressing BVIP(l, u, F), offering advancements over existing approaches and demonstrating promise for future applications.
The structure of the paper is as follows: Section 2 gives preliminaries necessary for the subsequent sections; Section 3 introduces a neural network model for a nonsmooth equation; Section 4 establishes the consistency and stability analysis results; Section 5 conducts several numerical tests.
The notations specified below will be used throughout this paper: A^T denotes the transpose of a matrix A; ⟨x, y⟩ represents the inner product of x and y; ‖z‖ denotes the Euclidean norm of z ∈ R^n. In addition, for φ: R^n ⟶ R ∪ {±∞}, ∇φ(x) signifies the gradient of φ at x. Furthermore, for ϕ: R^n ⟶ R^m, Jϕ(x) represents the Jacobian matrix of ϕ evaluated at x.

A Neural Network Based on a Nonsmooth Equation
A neural network based on a nonsmooth equation formulation of BVIP(l, u, F) is proposed in this section. Foundational concepts such as P matrix, P_0 function, uniform P function, Clarke subdifferential, isolated equilibrium point, stability in the sense of Lyapunov, exponential stability, and asymptotic stability are sourced from [1, 45, 46]. We adopt a nonsmooth equation formulation of BVIP(l, u, F) from [47]. First, we introduce ψ(a, b) = (1/2)ϕ(a, b)^2, where ϕ: R^2 ⟶ R,
ϕ(a, b) = √(a^2 + b^2) − a − b,
is the Fischer-Burmeister (F-B) function [48].
Then Ψ, Φ: R^n ⟶ R^n are defined componentwise for i = 1, 2, . . ., n. We also introduce G: R^n ⟶ R^n, built from Φ, together with the merit function f(x) = (1/2)‖G(x)‖^2. According to Theorem 2.2 in [47], f(x) is nonnegative, and f(x) = 0 if and only if x ∈ R^n is a solution of BVIP(l, u, F). Moreover, f is continuously differentiable whenever F is continuously differentiable.
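As a concrete illustration, the F-B function and a merit function of this type can be sketched in a few lines of Python. Since the componentwise formulas of [47] are not reproduced above, the composition G_i(x) = ϕ(x_i − l_i, ϕ(u_i − x_i, −F_i(x))) used below is an assumed standard choice, not necessarily the exact one of [47]:

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister function: phi(a, b) = 0 iff a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a * a + b * b) - a - b

def G(x, l, u, F):
    """Assumed componentwise box residual G_i(x) = fb(x_i - l_i, fb(u_i - x_i, -F_i(x))).

    With this composition, G(x) = 0 iff x solves BVIP(l, u, F).
    """
    Fx = F(x)
    return fb(x - l, fb(u - x, -Fx))

def f(x, l, u, F):
    """Merit function f(x) = 0.5 * ||G(x)||^2; nonnegative, zero exactly at solutions."""
    g = G(x, l, u, F)
    return 0.5 * float(g @ g)
```

With this composition, f serves as the nonnegative residual that the dynamics of the next section drive to zero; for example, for the hypothetical one-dimensional data l = 0, u = 2, F(x) = x − 1, the interior point x = 1 gives G(x) = 0 and f(x) = 0.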
Utilizing the steepest descent method for BVIP(l, u, F), we now consider a first-order neural network model as follows:
dx(t)/dt = −τ∇f(x(t)), x(t_0) = x_0, (8)
where t_0 refers to the initial time and τ > 0 is a factor that determines the step size in simulation. If τ is greater than 1, a larger step can be utilized during the simulation process. In addition, Figure 1 shows the block diagram framework of (8).
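A minimal simulation of these dynamics can be sketched with SciPy's RK45 integrator (an analogue of MATLAB's ode45). Everything below is an illustrative assumption rather than the paper's code: the residual uses the F-B composition G_i(x) = ϕ(x_i − l_i, ϕ(u_i − x_i, −F_i(x))), the gradient of f is approximated by central finite differences to sidestep subdifferential computations, and the one-dimensional test problem (l = 0, u = 2, F(x) = x − 1, with interior solution x* = 1) is hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

def fb(a, b):
    """Fischer-Burmeister function."""
    return np.sqrt(a * a + b * b) - a - b

def merit(x, l, u, F):
    """f(x) = 0.5 * ||G(x)||^2 with the assumed residual G_i = fb(x_i - l_i, fb(u_i - x_i, -F_i(x)))."""
    g = fb(x - l, fb(u - x, -F(x)))
    return 0.5 * float(g @ g)

def grad_merit(x, l, u, F, h=1e-6):
    """Central-difference approximation of the gradient of the merit function."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (merit(x + e, l, u, F) - merit(x - e, l, u, F)) / (2 * h)
    return g

def simulate(x0, l, u, F, tau=1.0, t_end=20.0):
    """Integrate dx/dt = -tau * grad f(x) from x0 with RK45; return the final state."""
    rhs = lambda t, x: -tau * grad_merit(x, l, u, F)
    sol = solve_ivp(rhs, (0.0, t_end), x0, method="RK45")
    return sol.y[:, -1]
```

Increasing τ rescales the right-hand side and therefore speeds up convergence of the trajectory, matching the role ascribed to τ above.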

Consistency and Stability of (8)
We focus on the consistency and stability analysis of the proposed neural network (8) in this section. We begin by examining the connection between the equilibrium points of (8) and the solutions of BVIP(l, u, F).
Theorem 1. Suppose that x* is a solution of BVIP(l, u, F). Then x* is also an equilibrium point of (8). Conversely, if x* is an equilibrium point of (8) and all elements V ∈ ∂_C G(x*) are nonsingular, or if l_i and u_i, i = 1, 2, . . ., n, are finite and F is a P_0 function, then x* is a solution of BVIP(l, u, F).
Proof. If x* is a solution of BVIP(l, u, F), then according to [47], f(x*) = 0, implying G(x*) = 0. Let V be an element of ∂_C G(x*); then according to [49],
∇f(x*) = V^T G(x*) = 0, (9)
which means that x* is an equilibrium point of (8). Conversely, if ∇f(x*) = 0 and all V ∈ ∂_C G(x*) are nonsingular, then from (9) we have G(x*) = 0, hence f(x*) = 0, indicating that x* is a solution of BVIP(l, u, F). If F is a P_0 function, the conclusion follows directly from Theorem 4.2 in [47]. □

Journal of Mathematics
Next, we study the trajectory of the solution to (8).

Lemma 2. The function f(x(t)) decreases or remains constant as the variable t increases.
Proof. Since
df(x(t))/dt = ⟨∇f(x(t)), dx(t)/dt⟩ = −τ‖∇f(x(t))‖^2 ≤ 0,
the function f(x(t)) decreases or remains constant as the variable t increases.

□
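The monotone decrease asserted by Lemma 2 can be checked numerically. The sketch below integrates the steepest-descent dynamics for a hypothetical one-dimensional instance (l = 0, u = 2, F(x) = x − 1, with an assumed F-B residual G(x) = ϕ(x − l, ϕ(u − x, −F(x))); none of this comes from the paper's examples) and samples the merit function along the trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

def fb(a, b):
    # Fischer-Burmeister function.
    return np.sqrt(a * a + b * b) - a - b

def merit(x, l, u, F):
    # f(x) = 0.5 * ||G(x)||^2 with the assumed box residual.
    g = fb(x - l, fb(u - x, -F(x)))
    return 0.5 * float(g @ g)

def grad(x, l, u, F, h=1e-6):
    # Central finite-difference gradient of the merit function.
    out = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        out[i] = (merit(x + e, l, u, F) - merit(x - e, l, u, F)) / (2 * h)
    return out

# Hypothetical 1-D instance; the solution x* = 1 lies in the interior of [0, 2].
l, u = np.array([0.0]), np.array([2.0])
F = lambda x: x - 1.0
tau = 1.0

sol = solve_ivp(lambda t, x: -tau * grad(x, l, u, F),
                (0.0, 10.0), np.array([1.9]),
                t_eval=np.linspace(0.0, 10.0, 50))
# Sample f along the trajectory; Lemma 2 predicts a nonincreasing sequence.
f_vals = [merit(sol.y[:, k], l, u, F) for k in range(sol.y.shape[1])]
```

Up to integrator and finite-difference tolerances, the sampled values decrease monotonically toward zero, in line with the lemma.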
We next define the level set of the starting point x_0 as L(x_0) = {x ∈ R^n : f(x) ≤ f(x_0)}.
Theorem 3. For an arbitrary initial state x_0 ∈ R^n:
(i) there exists exactly one maximal solution x(t), t ∈ [t_0, τ(x_0)), and τ(x_0) > t_0;
(ii) if X is bounded or F satisfies the uniform P property, then τ(x_0) = +∞.

□
Inspired by Corollary 4.3 in [50], the following result can be obtained.
Theorem 4. Let x(t), t ∈ [t_0, τ(x_0)), be the unique maximal solution of the differential equation model (8). If τ(x_0) = +∞ and {x(t)} is bounded, then
lim_{t⟶∞} ∇f(x(t)) = 0. (14)
Furthermore, if x* denotes the convergence point of the trajectory x(t) and all elements V ∈ ∂_C G(x*) are nonsingular, then x* is a solution of BVIP(l, u, F).
Proof. According to Lemma 2, f(x(t)) has a lower bound, and model (8) is the steepest descent dynamics for the unconstrained minimization problem (6) corresponding to (1). Therefore, according to Corollary 4.3 in [50], the trajectory of (8) reaches a steady state, and the conclusion is established.
Furthermore, if x* is the convergence point of the trajectory x(t), then lim_{t⟶∞} x(t) = x*. According to (14), it can be concluded that ∇f(x*) = 0. Since all V ∈ ∂_C G(x*) are nonsingular, the conclusion follows from Theorem 1. □
Remark 5. If l_i and u_i, i = 1, 2, . . ., n, are finite, then the nonsingularity of the elements of ∂_C G(x*) in Theorem 4 can be replaced by the P_0 property of the function F.
Theorem 6. If x* is an isolated equilibrium point of (8), then x* is asymptotically stable for (8).
Proof. First, it is demonstrated that f(x) serves as a Lyapunov function for (8) over Ω*, a neighborhood of x*. By the definition of f(x), it is nonnegative on R^n. Because x* is isolated, f(x*) = 0 and f(x) > 0 for any x ∈ Ω*\{x*}. Next, we verify the second condition in the definition of a Lyapunov function:
df(x(t))/dt = ⟨∇f(x(t)), dx(t)/dt⟩ = −τ‖∇f(x(t))‖^2 ≤ 0.
Thus, f(x) acts as a Lyapunov function for (8) over Ω*. As x* is an isolated equilibrium point, we have df(x(t))/dt < 0 for all x ∈ Ω*\{x*}. By Lemma 5.3 in [46], it follows that x* is asymptotically stable for (8). □
Theorem 7. Let x* be a solution of BVIP(l, u, F) and let J_xF(x*) be a P matrix. Then x* is exponentially stable for (8).
Proof. Let x* be a solution of BVIP(l, u, F). Then G(x*) = 0, and for every V ∈ ∂_C G(x*),
∇f(x*) = V^T G(x*) = 0,
so x* is an equilibrium point. Suppose that x* is not an isolated equilibrium point. Then we can select a sequence {x^k} which converges to x* as k tends to infinity and satisfies
(V^k)^T G(x^k) = 0, (17)
for some V^k ∈ ∂_C G(x^k). Since J_xF(x*) is a P matrix, we know from Corollary 5.3 in [47] that V^k is nonsingular when k is large enough, which, by (17), means that G(x^k) = 0. Therefore, x^k is a solution of BVIP(l, u, F) for k large enough. However, by [1], when J_xF(x*) is a P matrix, BVIP(l, u, F) has at most one solution, which is a contradiction. As a result, x* is an isolated equilibrium point. By Theorem 6, x* is asymptotically stable. By Corollary 5.3 in [47], there exist c > 0 and δ > 0 such that for every x ∈ B(x*, δ) and every V ∈ ∂_C G(x), V is invertible and satisfies ‖V^{−1}‖ ≤ c. Hence there exist κ_1 > 0 and κ_2 > 0 such that
κ_1‖x − x*‖ ≤ ‖G(x)‖ ≤ κ_2‖x − x*‖, ∀x ∈ B(x*, δ).
By Proposition 2.4 in [47], G is semismooth, which, by Proposition 2.4 in [50], yields the expansion
G(x) = G(x*) + V(x − x*) + o(‖x − x*‖), for any V ∈ ∂_C G(x).
The proof that follows resembles that of Theorem 5.5 in [46]; we write it out for completeness.
Let δ be sufficiently small such that
‖G(x) − G(x*) − V(x − x*)‖ ≤ ϵ‖x − x*‖, ∀x ∈ B(x*, δ),
for some 0 < ϵ < κ_1. Next, let τ be the time at which the solution first exits the ball B(x*, δ). We have G(x*) = 0, and for all t ∈ I = [t_0, τ), by [52, Corollary 2.1], we have the equivalence of ‖G(x(t))‖ and ‖x(t) − x*‖.

This yields a contradiction. Therefore, τ = +∞ and the proof is finished. □

Numerical Tests
In this section, several instances of box-constrained variational inequalities are used to validate the developed neural network model. Our simulations are based on MATLAB (R2018b) and its ode45 solver. The examples come from [31].
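Since the concrete problem data of [31] are not reproduced here, the sketch below runs the same kind of experiment on a hypothetical affine instance F(x) = Mx + q, where M is a diagonally dominant tridiagonal matrix (hence a P matrix) and the box is [0, 10]^4, again with an assumed F-B residual and a finite-difference gradient; it illustrates the workflow only, not the paper's reported figures:

```python
import numpy as np
from scipy.integrate import solve_ivp

def fb(a, b):
    # Fischer-Burmeister function.
    return np.sqrt(a * a + b * b) - a - b

def merit(x, l, u, F):
    # f(x) = 0.5 * ||G(x)||^2 with the assumed box residual.
    g = fb(x - l, fb(u - x, -F(x)))
    return 0.5 * float(g @ g)

def grad(x, l, u, F, h=1e-6):
    # Central finite-difference gradient of the merit function.
    out = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        out[i] = (merit(x + e, l, u, F) - merit(x - e, l, u, F)) / (2 * h)
    return out

# Hypothetical affine instance F(x) = Mx + q; not an example from [31].
M = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [0.0, 0.0, 1.0, 4.0]])
q = np.array([-1.0, -2.0, -3.0, -4.0])
l, u = np.zeros(4), 10.0 * np.ones(4)
F = lambda x: M @ x + q

def final_merit(tau, t_end=10.0):
    # Integrate the dynamics for a given tau and report the terminal residual.
    sol = solve_ivp(lambda t, x: -tau * grad(x, l, u, F),
                    (0.0, t_end), np.ones(4))
    return merit(sol.y[:, -1], l, u, F)
```

Running final_merit for several values of τ illustrates qualitatively the behavior discussed in this section: the trajectory drives the merit function toward zero, and a larger τ does so in less simulated time.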
We know from Figures 2 and 3 that the trajectories of the solutions of the neural networks based on models (8), (27), and (29) for the box constrained variational inequality problem (1) all converge to the equilibrium point x* = (4/3, 7/9, 4/9, 2/9)^T. Moreover, compared with the neural networks based on models (27) and (29), the trajectories of the solutions of the neural network based on model (8) converge to the equilibrium point faster.
Figure 5 shows the numerical test results for Example 3 based on model (8) with n = 4.
Figures 6 and 7 show the numerical test results based on model (8) for variational inequality 1.1 with different values of τ.

Conclusions
To conclude, this study introduces a neural network approach for the box-constrained variational inequality problem. Alongside establishing the existence and convergence of the neural network trajectories, we also examine the stability of solutions, including asymptotic stability and exponential stability. Finally, numerical experiments demonstrate the effectiveness of the neural network method. Of course, like all algorithms, the method put forward in the present study also has drawbacks; for example, because subdifferential estimation is involved, computing efficiency may be limited. A smoothing method may be able to address this drawback, which may be a topic of our future research.