An Adaptive Zeroing Neural Network with Non-Convex Activation for Time-Varying Quadratic Minimization

Abstract: Time-varying quadratic minimization (TVQM) has attracted increasing interest in position tracking control and communication engineering. While traditional zeroing neural network (ZNN) models are effective in solving TVQM problems, their convergence rate cannot adapt to the error state when the commonly used convex activation functions are employed. To address this issue, we propose an adaptive non-convex activation zeroing neural network (AZNNNA) model in this paper. Using Lyapunov theory, we theoretically analyze the global convergence and noise-immune characteristics of the proposed AZNNNA model under both noise-free and noise-perturbed scenarios. We also provide computer simulations to illustrate the effectiveness and superiority of the proposed model: compared with existing ZNN models, the proposed AZNNNA model offers better efficiency, accuracy, and robustness, as demonstrated by the simulation experiments in this article.


Introduction
Quadratic minimization (QM) is a widely studied branch of optimization theory, with applications in various fields such as image processing [1,2], communication engineering [3], robot kinematics [4], and energy system design [5]. While numerical algorithms can efficiently solve static QM problems, they are unable to handle large-scale time-varying quadratic minimization (TVQM) problems with real-time requirements due to their serial processing mechanism. Thus, it is crucial to develop a more effective solution framework to address TVQM problems.
Previous studies have shown that traditional gradient-based neural network (GNN) models can effectively solve static or time-invariant matrix problems by designing a scalar-valued error function and driving it toward zero. For example, Nikolova and Chan proposed a static QM model for image restoration [1] and used a gradient linearization iterative method to solve the resulting static QM problems. However, this method cannot trace the exact solution online and is therefore inadequate for time-varying quadratic minimization (TVQM) problems.
To address the limitations of traditional algorithms in handling time-varying problems, Zhang et al. developed the original zeroing neural network (OZNN) model [6]. The OZNN model uses derivative information to predict the evolution direction of the problem to be solved, resulting in high accuracy [7,8]. As a result, the OZNN model has found extensive use in automatic control and signal processing [9,10]. However, the OZNN model is sensitive to noise interference, which reduces its solution accuracy when solving time-varying problems [11].
Moreover, the OZNN model requires artificial setting and adjustment of the scale factor, leading to time-consuming iterative adjustment when applied to practical engineering applications [12]. To overcome these challenges, Yan et al. proposed a noise-tolerant zeroing neural network (NTZNN) model [13] that can effectively resist various noise disturbances, thereby solving the corresponding problems caused by noise interferences. Additionally, Wang et al. investigated the robustness of the proposed bounded ZNN model [14].
The NTZNN model has been further developed to improve its convergence performance. For instance, Xiao et al. introduced the limited-time robust neural network (LTRNN) model [15], which not only has noise-resistant capabilities but can also converge in finite time, unlike its predecessors. Li et al. proposed the finite-time convergent and noise-rejection recurrent neural network (FTNRZNN) model [16], which performs similarly to the LTRNN model and can also solve other time-varying nonlinear equations. In addition, a novel design framework with finite- and predefined-time convergence for the zeroing neural network was proposed by Xiao et al. [17]. Liao et al. proposed the prescribed-time convergent and noise-tolerant Z-type neural dynamics (PTCNTZND) model [18], which not only converges in finite time but also accelerates the process of reaching the optimal solution and has anti-noise capabilities. Furthermore, a parameter-changing and complex-valued zeroing neural network (PC-CVZNN) model was introduced by Xiao et al. [19], which can solve time-varying complex-valued linear matrix equations in finite time and achieves superior performance owing to the integration of a new parameter-change function. However, these models share a common limitation: they cannot adaptively adjust their convergence speed. To address this, Jia et al. proposed an adaptive fuzzy control strategy for the zeroing neural network (AFT-ZNN) model [20], which adjusts its convergence speed through an adaptive fuzzy control value based on the computed error, resulting in superior performance and faster convergence.
Recently, some researchers have also proposed robust and noise-tolerant ZNN models with applications to dynamic complex matrix equation solving [21,22] and mobile manipulator path tracking [23,24]. Additionally, improved recurrent neural networks have been proposed for text classification and dynamic Sylvester equation solving [25].
Inspired by the residual learning framework [26], we combine its advantages with those of the above zeroing neural network models and propose an adaptive zeroing neural network with non-convex activation (AZNNNA) model design framework for solving the TVQM problem. The proposed AZNNNA model can be applied in fields such as robotics and acoustic source localization, as demonstrated in the work of Jin et al. [27], and it can resist linear noise and solve dynamic Sylvester equation problems, as shown in the work of Han et al. [28]. Overall, the main advantage of the AZNNNA model over existing zeroing neural network models is its ability to adaptively adjust its convergence speed and improve its representation capability for complex and nonlinear problems. The main contributions of this paper can be summarized as follows:
• An adaptive zeroing neural network with non-convex activation (AZNNNA) model design framework is proposed and investigated for the first time. Compared with existing zeroing neural network models, the proposed AZNNNA model performs well in terms of convergence and robustness.
The rest of this paper is structured into five sections. Section 2 presents the problem description and method. Section 3 outlines the design framework of the adaptive coefficient, the non-convex activation function, and the evolution scheme of the proposed AZNNNA model. Section 4 presents theoretical analyses of the AZNNNA model from the perspective of Lyapunov stability theory, verifying its global convergence and robustness against noise disturbances when solving the TVQM problem. Section 5 presents the relevant quantitative simulation experiments and surveys the results. Finally, Section 6 summarizes the conclusions.

Problem Description and Related Solution Formula
Generally, the time-varying quadratic minimization (TVQM) problem can be written as

min_{x(t) ∈ R^n} x^T(t)P(t)x(t)/2 + q^T(t)x(t),   (1)

where the given matrix P(t) ∈ R^{n×n} and coefficient vector q(t) ∈ R^n are smooth and time-varying, and the matrix P(t) is positive definite for any time t ∈ [0, +∞).
Moreover, x(t) ∈ R^n is the unknown time-varying vector to be solved, and the superscript T denotes the transpose of a vector or matrix. For the convenience of statement, the following function is defined:

F(x(t), t) = x^T(t)P(t)x(t)/2 + q^T(t)x(t).   (2)

Thus, the gradient of the function F(x(t), t) can be described in the following form:

∇F(x(t), t) = ∂F(x(t), t)/∂x(t) = P(t)x(t) + q(t).   (3)

It is worth noting that, by zeroing the above gradient ∇F(x(t), t) at each time instant t ∈ [0, +∞), the theoretical solution of the TVQM problem (1) can be obtained in real time. Therefore, the following equation is equivalent to the TVQM problem (1):

P(t)x(t) + q(t) = 0.

Specifically, the theoretical time-varying solution x*(t) ∈ R^n to (1), as the minimum point of F(x(t), t) at any time instant t, satisfies P(t)x*(t) + q(t) = 0. The following error function is arranged to monitor and revise the development direction of the solving system:

e(t) = P(t)x(t) + q(t).   (4)

On the basis of the original zeroing neural network (OZNN) construction framework, the evolution of the error function (4) should satisfy ė(t) = −ηΩ(e(t)), where η > 0 represents the scale factor and Ω(·): R^n → R^n denotes the activation function. Therefore, the OZNN model for solving the TVQM problem (1) can be designed as

P(t)ẋ(t) = −Ṗ(t)x(t) − q̇(t) − ηΩ(P(t)x(t) + q(t)),   (5)

where Ṗ(t), ẋ(t), and q̇(t) represent the time derivatives of P(t), x(t), and q(t), respectively.
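To make the OZNN evolution scheme above concrete, the following minimal Python sketch integrates the OZNN dynamics with a forward-Euler solver and the linear activation Ω(e) = e. The 2-dimensional P(t) and q(t) here are hypothetical illustrations of ours, not the example used later in this paper.

```python
import numpy as np

# Hypothetical time-varying coefficients (P(t) is positive definite for all t)
P    = lambda t: np.array([[2 + np.sin(t), 0.0], [0.0, 2 + np.cos(t)]])
Pdot = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
q    = lambda t: np.array([np.sin(t), np.cos(t)])
qdot = lambda t: np.array([np.cos(t), -np.sin(t)])

def ozn_solve(x0, eta=100.0, dt=1e-3, T=5.0):
    """Forward-Euler integration of the OZNN evolution
    P(t) x'(t) = -P'(t) x(t) - q'(t) - eta * Omega(P(t) x(t) + q(t)),
    with the linear activation Omega(e) = e."""
    x, t = np.asarray(x0, dtype=float), 0.0
    while t < T:
        e = P(t) @ x + q(t)                      # error function e(t)
        rhs = -Pdot(t) @ x - qdot(t) - eta * e   # right-hand side of the model
        x = x + dt * np.linalg.solve(P(t), rhs)  # P(t) is invertible
        t += dt
    return x

x_final = ozn_solve([1.0, 1.0])
x_star = -np.linalg.solve(P(5.0), q(5.0))  # theoretical solution at t = T
```

By construction the induced error obeys ė(t) = −ηe(t), so a larger scale factor η drives the residual ‖P(t)x(t) + q(t)‖₂ down faster, up to the Euler discretization floor.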

AZNNNA Model Construction
In this section, aiming at the shortcomings of the existing zeroing neural network models, an adaptive zeroing neural network with non-convex activation (AZNNNA) model is formulated as follows:

ė(t) = −ℓ(‖e(t)‖₂)Ω(e(t)) − k∫₀ᵗ e(τ)dτ,   (6)

where the time-varying parameter ℓ(·) > 0 represents the adaptive scale coefficient, and k > 0 is a scaling coefficient adjusted to control the influence of the integral term. The integral term −k∫₀ᵗ e(τ)dτ penalizes the accumulated error and drives the integration of e(t) toward zero. The adaptive coefficient ℓ(·) can be constructed as

ℓ(‖e(t)‖₂) = ‖e(t)‖₂^γ + a,   (7)

where γ > 0, a > 1, and ‖·‖₂ denotes the 2-norm of a vector. Next, we define the non-convex projection Z_Λ(G) = argmin_{O∈Λ} ‖O − G‖₂ with 0 ∈ Λ; that is, Z_Λ(G) is the projection of G onto the (possibly non-convex) value set Λ of the activation function. The following two examples can be utilized to explain the construction of the non-convex activation function:
• Bounded situation with a saturation activation function.
• Non-convex situation with a saturation activation function, where the parameters u₁, u₂, and u₃ satisfy 0 < u₁ < u₂ and u₃ < −u₁ < 0.
Therefore, the proposed AZNNNA model for solving the TVQM problem (1) can be written as

P(t)ẋ(t) = −Ṗ(t)x(t) − q̇(t) − ℓ(‖e(t)‖₂)Ω(e(t)) − k∫₀ᵗ e(τ)dτ.   (8)

In addition, the AZNNNA model (8) is inevitably interfered with by various noises in the real-time solution system. Therefore, the noise-perturbed AZNNNA model for solving the TVQM problem (1) is described as

P(t)ẋ(t) = −Ṗ(t)x(t) − q̇(t) − ℓ(‖e(t)‖₂)Ω(e(t)) − k∫₀ᵗ e(τ)dτ + ϑ(t),   (9)

where ϑ(t) ∈ R^n is the noise interference term.
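A minimal sketch of the two new ingredients follows, assuming γ = 3 for the adaptive coefficient and reading the non-convex value set as Λ = [u₃, −u₁] ∪ {0} ∪ [u₁, u₂] (so that 0 ∈ Λ, as the definition requires); the function names are ours, not the paper's.

```python
import numpy as np

def adaptive_coeff(e, gamma=3.0, a=5.0):
    """Adaptive scale coefficient: l(||e||_2) = ||e||_2**gamma + a, with a > 1."""
    return np.linalg.norm(e) ** gamma + a

def nonconvex_saturation(g, u1=1.0, u2=2.0, u3=-2.0):
    """Elementwise projection Z_Lambda(g) = argmin_{o in Lambda} |o - g| onto the
    non-convex set Lambda = [u3, -u1] U {0} U [u1, u2], 0 < u1 < u2, u3 < -u1 < 0."""
    g = np.asarray(g, dtype=float)
    candidates = np.stack([
        np.clip(g, u3, -u1),     # nearest point of the negative interval
        np.clip(g, u1, u2),      # nearest point of the positive interval
        np.zeros_like(g),        # 0 is also a member of Lambda
    ])
    nearest = np.argmin(np.abs(candidates - g), axis=0)
    return np.take_along_axis(candidates, nearest[None, :], axis=0)[0]
```

For example, `nonconvex_saturation([0.2, 0.6, 3.0, -1.5])` maps each component to its nearest admissible value, here `[0.0, 1.0, 2.0, -1.5]`: small activations snap to 0, values in the gap snap to ±u₁, and large values saturate at u₂ or u₃.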
The characteristic comparison between the existing recurrent neural network and the proposed AZNNNA model in solving the TVQM problem (1) is shown in Table 1.
Considering that convergence is a key criterion for the AZNNNA model (8), the following theorem and its proof are presented to analyze the global convergence of the AZNNNA model (8).

Table 1. Comparison between existing recurrent neural network models and the proposed AZNNNA model for solving the TVQM problem (1).

Model | Non-Convex Activation | Adaptive Coefficient | Anti-Perturbation | Integral Information Involved
OZNN model in [6] | No | No | No | No
GNN model in [29] | No | No | No | No
NCZNN model in [30] | No | No | Yes | No
PTCZNN model in [31] | No | No | No | No
MZNN model in [32] | No | No | Yes | Yes
The proposed AZNNNA model (8) | Yes | Yes | Yes | Yes

Theorem 1. Given any solvable TVQM problem (1), the calculated solution vector x(t) of the proposed AZNNNA model (8) globally converges from any randomly generated initial state to the theoretical solution of the TVQM problem (1).
Proof of Theorem 1. The ith subsystem of the AZNNNA model (8) evolves as

ė_i(t) = −ℓ(‖e(t)‖₂)Ω(e_i(t)) − k∫₀ᵗ e_i(δ)dδ.   (10)

The following Lyapunov candidate function is adopted to investigate the global convergence of the subsystem (10) [33]:

V_i(t) = e_i²(t)/2 + k(∫₀ᵗ e_i(δ)dδ)²/2.   (11)

Obviously, when e_i(t) ≠ 0 or ∫₀ᵗ e_i(δ)dδ ≠ 0, the Lyapunov candidate function satisfies V_i(t) > 0, and V_i(t) = 0 if and only if e_i(t) = ∫₀ᵗ e_i(δ)dδ = 0; that is, V_i(t) is positive definite. Taking the time derivative of (11) yields

V̇_i(t) = e_i(t)ė_i(t) + k e_i(t)∫₀ᵗ e_i(δ)dδ = −ℓ(‖e(t)‖₂)e_i(t)Ω(e_i(t)) ≤ 0,

since ℓ(·) > 0 and the activation function satisfies e_i(t)Ω(e_i(t)) ≥ 0, with equality only at e_i(t) = 0. Hence V̇_i(t) is negative semi-definite and vanishes only at e_i(t) = 0. Therefore, on the basis of Lyapunov theory, each e_i(t), and thus the error function e(t), globally converges to zero as time goes on. That is to say, the proposed AZNNNA model globally converges to the theoretical solution of the TVQM problem. The proof is complete.
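The Lyapunov argument can be checked numerically on a scalar subsystem. This sketch (ours, using a saturation activation clipped to [−1, 1] and the experiment section's settings ℓ = |e|³ + 5, k = 5) integrates the subsystem (10) with forward Euler and records the candidate function (11), which should never increase beyond discretization error.

```python
import numpy as np

def simulate_subsystem(e0=1.0, k=5.0, dt=1e-4, T=5.0):
    """Euler integration of the scalar subsystem (10):
    e' = -l(|e|) * Omega(e) - k * integral(e), with Omega(e) = clip(e, -1, 1)."""
    e, alpha = e0, 0.0                              # alpha = integral of e
    V = []
    for _ in range(int(T / dt)):
        V.append(0.5 * e**2 + 0.5 * k * alpha**2)   # Lyapunov candidate (11)
        coeff = abs(e)**3 + 5.0                     # adaptive coefficient
        de = -coeff * np.clip(e, -1.0, 1.0) - k * alpha
        alpha += dt * e
        e += dt * de
    return np.array(V)

V = simulate_subsystem()
# V is (numerically) nonincreasing and decays toward zero
```

Along the trajectory, consecutive differences of V are negative up to an O(dt²) discretization term, matching V̇_i(t) ≤ 0 in the proof.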

Robustness of AZNNNA Model under Different Noise Situations
In this section, we present three theorems to prove the robustness of the noise-perturbed AZNNNA model (9) under constant noise, linear noise, and bounded random noise interference, respectively. Theorem 2. The calculated solution vector x(t) of the AZNNNA model (9), used to solve the TVQM problem (1) under constant noise ϑ(t) = ϑ ∈ R^n, globally converges to the theoretical solution of problem (1).

Proof of Theorem 2.
For further analysis, according to the Laplace transform method [32], the ith sub-element of the AZNNNA model (9) under constant noise ϑ_i(t) = ϑ_i can be written as

ė_i(t) = −ℓ(‖e(t)‖₂)e_i(t) − k∫₀ᵗ e_i(δ)dδ + ϑ_i,   (12)

where the activation function is taken in its linear region, i.e., Ω(e_i) = e_i. According to the construction method of the adaptive coefficient ℓ(·), we can draw the conclusion that lim_{t→∞} ℓ(‖e(t)‖₂) = a. Therefore, applying the Laplace transform to (12) gives

ιe_i(ι) − e_i(0) = −a e_i(ι) − (k/ι)e_i(ι) + ϑ_i/ι,   (13)

which can be rearranged as

e_i(ι) = (ι e_i(0) + ϑ_i)/(ι² + aι + k).   (14)

In the end, it can be seen that the poles of the transfer function ι/(ι² + aι + k) are ι₁ = (−a + √(a² − 4k))/2 and ι₂ = (−a − √(a² − 4k))/2. Since a > 0 and k > 0, both poles are located in the left half-plane (when a² < 4k they form a complex-conjugate pair with negative real part −a/2), which shows the stability of the solution system. Therefore, applying the final value theorem to (14), we obtain

lim_{t→∞} e_i(t) = lim_{ι→0} ι e_i(ι) = lim_{ι→0} ι(ι e_i(0) + ϑ_i)/(ι² + aι + k) = 0.

In summary, the residual error ‖e(t)‖₂ of the AZNNNA model (9) for solving the TVQM problem (1) globally converges to zero under constant noise ϑ(t) = ϑ ∈ R^n, no matter how large that constant noise is.
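The pole locations and the final-value argument can be verified numerically. The sketch below (ours) checks that both roots of ι² + aι + k lie in the left half-plane and that the linearized error dynamics ė = −a·e − k∫e + ϑ reject a large constant noise.

```python
import numpy as np

a, k = 5.0, 5.0
poles = np.roots([1.0, a, k])  # roots of iota**2 + a*iota + k

def constant_noise_response(vartheta=50.0, e0=1.0, dt=1e-3, T=10.0):
    """Euler integration of e' = -a*e - k*integral(e) + vartheta (constant noise)."""
    e, alpha = e0, 0.0
    for _ in range(int(T / dt)):
        de = -a * e - k * alpha + vartheta
        alpha += dt * e
        e += dt * de
    return e

e_final = constant_noise_response()  # residual vanishes despite vartheta = 50
```

Both poles have negative real parts, and the integral state α absorbs the constant disturbance (α → ϑ/k), so the residual error itself settles at zero, exactly as the final value theorem predicts.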
Next, we provide Theorem 3 to study and prove the robustness of the proposed AZNNNA model (9) under linear noise interference. Theorem 3. The residual error ‖e(t)‖₂ of the AZNNNA model (9) under linear noise ϑ(t) = ϑt ∈ R^n for solving the TVQM problem (1) eventually converges to ‖ϑ‖₂/k, where k is the scale coefficient in (8). Notably, lim_{t→∞} ‖e(t)‖₂ → 0 as the parameter k → +∞.
Proof of Theorem 3. The Laplace transform of the AZNNNA model (9) can be utilized as in Theorem 2, where ϑ_i/ι² is the Laplace transform of the linear noise ϑ_i t:

ιe_i(ι) − e_i(0) = −a e_i(ι) − (k/ι)e_i(ι) + ϑ_i/ι².   (16)

Similar to Theorem 2, Equation (16) can be rearranged as

e_i(ι) = (ι e_i(0) + ϑ_i/ι)/(ι² + aι + k).

Next, on the basis of the final value theorem, we obtain

lim_{t→∞} e_i(t) = lim_{ι→0} ι e_i(ι) = lim_{ι→0} (ι² e_i(0) + ϑ_i)/(ι² + aι + k) = ϑ_i/k.

It can be seen from the above that, as t → ∞, the error e_i(t) of each subsystem of the AZNNNA model (9) converges to the fixed value ϑ_i/k. All in all, under linear noise ϑ(t), the residual error of the AZNNNA model eventually converges to

lim_{t→∞} ‖e(t)‖₂ = ‖ϑ‖₂/k.

The proof is thus completed.
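This limit can likewise be checked numerically. In the sketch below (ours), the linearized error dynamics are driven by linear noise ϑt; the residual settles at ϑ/k rather than at zero, and shrinks as k grows.

```python
import numpy as np

def linear_noise_residual(vartheta=2.0, a=5.0, k=5.0, dt=1e-3, T=10.0):
    """Euler integration of e' = -a*e - k*integral(e) + vartheta*t (linear noise)."""
    e, alpha, t = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        de = -a * e - k * alpha + vartheta * t
        alpha += dt * e
        e += dt * de
        t += dt
    return e

res_k5 = linear_noise_residual(k=5.0)    # settles near vartheta/k = 0.4
res_k50 = linear_noise_residual(k=50.0)  # larger k -> smaller steady-state error
```

The two runs illustrate the trade-off stated in the theorem: the integral gain k sets the steady-state error floor under ramp-like disturbances.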
In the above content, we described the robustness of the proposed AZNNNA model (9) under constant noise and linear noise. However, the influence of nonlinear dynamic noise on the solution system cannot be ignored. It is worth noting that one commonly encountered nonlinear dynamic noise can be regarded as a special kind of fast-changing random noise. The AZNNNA model (9) can resist such random noise while avoiding the limitations of convex activation functions and redundant preprocessing procedures. In order to analyze and prove the robustness of the AZNNNA model (9) under bounded random noise interference, we propose the following theorem.

Theorem 4. Suppose the upper and lower bounds of the bounded random noise ϑ(t) = ξ(t) ∈ R^n are ν⁺ and ν⁻, respectively. Then, the steady-state error lim_{t→∞} ‖e(t)‖₂ of the AZNNNA model (9) under the bounded random noise ξ(t) remains bounded, with an upper bound proportional to max{|ν⁺|, |ν⁻|} that depends on the parameters a and k, as quantified case-by-case in the proof below.
Proof of Theorem 4. The ith subsystem of the AZNNNA model (9) under bounded random noise ξ(t) can be expressed as

ė_i(t) = −a e_i(t) − k∫₀ᵗ e_i(δ)dδ + ξ_i(t),   (17)

where, as known from Theorem 2, the activation is taken in its linear region and lim_{t→∞} ℓ(‖e(t)‖₂) = a. Letting α_i(t) = ∫₀ᵗ e_i(δ)dδ, when t → +∞ Equation (17) can be transformed into the second-order differential equation

α̈_i(t) + a α̇_i(t) + k α_i(t) = ξ_i(t),   (18)

whose characteristic roots are j₁ = (−a + √(a² − 4k))/2 and j₂ = (−a − √(a² − 4k))/2. Next, we divide the proof into three cases according to the values of a and k.
(1) The first case is a² > 4k: both roots are real and negative, with j₁ − j₂ = √(a² − 4k). Combining Equation (18) with the solution framework for second-order differential equations, e_i(t) = α̇_i(t) consists of exponentially decaying terms and the forced response ∫₀ᵗ h(t − δ)ξ_i(δ)dδ with the kernel h(t) = (j₁e^{j₁t} − j₂e^{j₂t})/(j₁ − j₂). Next, according to the triangle inequality in [34], we have

lim sup_{t→∞} |e_i(t)| ≤ max{|ν⁺|, |ν⁻|} ∫₀^∞ |h(δ)|dδ ≤ 2 max{|ν⁺|, |ν⁻|}/√(a² − 4k).

Therefore, we can draw the following conclusion:

lim sup_{t→∞} ‖e(t)‖₂ ≤ 2√n max{|ν⁺|, |ν⁻|}/√(a² − 4k),

where n represents the dimension of the bounded random noise ξ(t).
(2) The second case is a² = 4k: this situation is similar to the first one, with the repeated root j = j₁ = j₂ = −a/2, so the forced-response kernel becomes h(t) = (1 + jt)e^{jt}. According to Theorem 1 proposed in the paper [34], for any ε ∈ (0, |j|) there exists a constant κ > 0 such that t e^{jt} ≤ κ e^{(j+ε)t}. Combining this inequality with the triangle inequality and simplifying, we obtain

lim sup_{t→∞} ‖e(t)‖₂ ≤ 4√n max{|ν⁺|, |ν⁻|}/a.

(3) The third case is a² < 4k: in this case, the homogeneous part of Equation (17) takes the form

e_i(t) = e_i(0) exp(jt)(j sin(φt)/φ + cos(φt)),

where j = −a/2 and φ = √(4k − a²)/2. This situation is similar to the first one, so we have

lim sup_{t→∞} ‖e(t)‖₂ ≤ 4√(nk) max{|ν⁺|, |ν⁻|}/(a√(4k − a²)).

Combining the above three cases, the steady-state error lim_{t→∞} ‖e(t)‖₂ of the AZNNNA model (9) remains bounded under the interference of bounded random noise ξ(t). So far, the theoretical proof is complete.
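As a numerical sanity check of the first case (a² > 4k), the sketch below (ours) drives the linearized error dynamics with uniform random noise in [0.5, 3] and verifies that the steady-state error stays within a bound of the form 2 max{|ν⁺|, |ν⁻|}/√(a² − 4k) for a scalar subsystem.

```python
import numpy as np

def random_noise_response(a=5.0, k=5.0, lo=0.5, hi=3.0, dt=1e-3, T=10.0, seed=0):
    """Euler integration of e' = -a*e - k*integral(e) + xi(t), where xi(t)
    is drawn uniformly from [lo, hi] at every step (bounded random noise)."""
    rng = np.random.default_rng(seed)
    e, alpha = 0.0, 0.0
    history = []
    for _ in range(int(T / dt)):
        de = -a * e - k * alpha + rng.uniform(lo, hi)
        alpha += dt * e
        e += dt * de
        history.append(e)
    return np.array(history)

e_hist = random_noise_response()
bound = 2 * 3.0 / np.sqrt(5.0**2 - 4 * 5.0)  # case 1: a**2 = 25 > 20 = 4*k
tail = np.abs(e_hist[len(e_hist) // 2:])     # steady-state portion
```

In practice the tail error sits far below the bound, since the integral term absorbs the constant component of the noise and the dynamics strongly attenuate its fast fluctuations.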

Experiments and Results
In this section, we first summarize and visualize the simulation experiments of the proposed AZNNNA model (8) for solving the TVQM problem (1). Second, we compare the performance of the AZNNNA model (8) with state-of-the-art neural network models, namely the GNN model [29], the non-convex and bound-constraint zeroing neural network (NCZNN) model [30], and the modified zeroing neural network (MZNN) model [32]. All simulations are implemented in MATLAB R2016a on a computer equipped with an Intel Core i5-12400F 2.50 GHz CPU and 16 GB RAM.

Time-Varying Quadratic Minimization Example
The following time-varying quadratic minimization example, with bounded-constraint and non-convex activation functions, is applied in this simulation part. The time-varying matrix and vector of the TVQM problem (1) are constructed as follows: The adaptive coefficient ℓ(·) and scale coefficient k of the proposed AZNNNA model are set as ℓ(‖e(t)‖₂) = ‖e(t)‖₂³ + 5 and k = 5, respectively. The corresponding quantitative simulation results of the example are arranged in Figures 1-5. Note that the scale parameters of the MZNN model [32], the GNN model [29], and the NCZNN model [30] are likewise set to 5. Additionally, the parameter λ of the MZNN model [32] is set to 5.
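Since the example's P(t) and q(t) cannot be reproduced here, the following Python sketch (ours, with a hypothetical 2-D positive-definite P(t) and a plain saturation activation) shows how the AZNNNA model (8) with ℓ(‖e(t)‖₂) = ‖e(t)‖₂³ + 5 and k = 5 would be simulated, mirroring the noise-free setup reported in Figures 1 and 2.

```python
import numpy as np

# Hypothetical time-varying coefficients (P(t) is positive definite for all t)
P    = lambda t: np.array([[2 + np.sin(t), 0.0], [0.0, 2 + np.cos(t)]])
Pdot = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
q    = lambda t: np.array([np.sin(t), np.cos(t)])
qdot = lambda t: np.array([np.cos(t), -np.sin(t)])

def aznnna_solve(x0, k=5.0, dt=2e-4, T=3.0):
    """Euler integration of the AZNNNA model (8):
    P x' = -P' x - q' - l(||e||_2) * Omega(e) - k * integral(e),
    with l(||e||_2) = ||e||_2**3 + 5 and saturation Omega(e) = clip(e, -2, 2)."""
    x = np.asarray(x0, dtype=float)
    alpha = np.zeros_like(x)             # integral of the error function e(t)
    t = 0.0
    for _ in range(int(T / dt)):
        e = P(t) @ x + q(t)
        coeff = np.linalg.norm(e)**3 + 5.0
        rhs = -Pdot(t) @ x - qdot(t) - coeff * np.clip(e, -2.0, 2.0) - k * alpha
        alpha += dt * e
        x += dt * np.linalg.solve(P(t), rhs)
        t += dt
    return x, t

x3, t3 = aznnna_solve([1.0, 1.0])
x_star = -np.linalg.solve(P(t3), q(t3))  # theoretical solution x*(t) = -P(t)^(-1) q(t)
```

Within the 3 s horizon, x(t) tracks the moving minimizer x*(t); the adaptive coefficient makes the early decay much faster than with a fixed scale factor, since ℓ grows with the error norm.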

Results in Noise-Free Case
The visualization results of the AZNNNA model for solving the TVQM problem example without noise are presented in Figures 1 and 2. Figure 1a displays the theoretical solution x₁* and the calculated solution x₁, and Figure 1b shows the theoretical solution x₂* and the calculated solution x₂; the theoretical and calculated solutions are represented by the red and blue lines, respectively. As seen in Figure 1a,b, starting from randomly generated initial values, the computed solution trajectory of the AZNNNA model (8) converges to the theoretical solution within 3 s. As shown in Figure 2a, starting from stochastically generated initial values, the residual error ‖e(t)‖₂ of the proposed AZNNNA model (8) rapidly approaches zero, which shows that the solving system promptly converges to the theoretical solution. Compared with the other three models, the AZNNNA model (8) has the fastest convergence speed. The logarithm of the residual error ‖e(t)‖₂ is shown in Figure 2b, which depicts the solution precision in detail. As demonstrated in Figure 2b, the AZNNNA model (8) achieves higher precision in solving the noise-free TVQM problem than the GNN model and the NCZNN model. Specifically, the proposed AZNNNA model achieves a convergence precision on the order of 10⁻¹⁵, while the NCZNN and MZNN models only achieve precisions on the orders of 10⁻³ and 10⁻¹⁰, respectively. Although the AZNNNA model (8) and the MZNN model reach a comparable solution accuracy for the TVQM problem (Section 5.1) in the noise-free case, the proposed AZNNNA model (8) converges faster. Therefore, the proposed AZNNNA model not only converges faster than the other three models but also attains higher accuracy.
In this section, we analyze and discuss the results obtained for the following three different types of noise.

• In the case of constant noise: the amplitude of the constant noise is set as ϑ = [5, 5]^T. As shown in Figure 3, starting from a randomly generated initial value, even when the AZNNNA model (8) is interfered with by the constant noise ϑ, its residual error ‖e(t)‖₂ still accurately converges to zero. Moreover, although the accuracy of the MZNN model under constant noise interference is the same as that of the AZNNNA model (8), the AZNNNA model (8) converges faster than the MZNN model.
• In the case of linear noise: the quantitative simulation results of the AZNNNA model (8) for solving the TVQM problem example (Section 5.1) under linear time-varying noise ϑ(t) = ϑt ∈ R^n are shown in Figure 4. Under the interference of the linear noise ϑt, the AZNNNA model (8) proposed in this paper achieves the highest solution accuracy and the fastest convergence speed among the four models.
• In the case of bounded random noise: the quantitative simulation results of the AZNNNA model (8) for solving the TVQM problem example (Section 5.1) under bounded random noise ϑ(t) = ξ(t) ∈ [0.5, 3]² are shown in Figure 5. It can be seen from Figure 5 that the AZNNNA model (8) again achieves the highest solution accuracy and the fastest convergence speed among the four models.
Therefore, the proposed AZNNNA model (8) exhibits higher robustness and stability than the other ZNN models under different noise conditions. A further performance comparison between the existing ZNN models and the proposed AZNNNA model for solving the TVQM problem (1) under different noise conditions is shown in Table 2.