The study of new fixed-point iteration schemes for solving absolute value equations

Absolute value equations (AVEs) arise in a wide range of problems across scientific computing, engineering, management science, and operations research. This article presents two fixed-point iteration schemes designed for such AVEs. Furthermore, we establish convergence conditions for the schemes under suitable assumptions. Lastly, the theoretical findings are validated using numerical examples, which emphasize the distinct advantages offered by the newly developed schemes. These results are highly promising and may stimulate future research in this domain.


Introduction
Consider a matrix A ∈ ℜ^{n×n} and a vector b ∈ ℜ^n. We examine the AVE given by

Ax − |x| = b, (1.1)

where A is the coefficient matrix, specifically an M-matrix or a strictly diagonally dominant matrix, and |x| represents the vector with elements |x_1|, …, |x_n|. Expanding our analysis, we also consider a more general form of the AVE,

Ax − B|x| = b, (1.2)

where B is a matrix in ℜ^{n×n}. Notably, AVE (1.2) simplifies to AVE (1.1) when B = I_n, with I_n denoting the n × n identity matrix.
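For illustration, a small instance of AVE (1.1) can be constructed by fixing a solution and computing the right-hand side. The sketch below uses an arbitrarily chosen strictly diagonally dominant matrix (a hypothetical example, not one of the paper's test problems) and verifies that the chosen vector satisfies Ax − |x| = b.

```python
import numpy as np

# Hypothetical 3x3 strictly diagonally dominant coefficient matrix.
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])

x_star = np.array([1.0, -2.0, 3.0])   # chosen solution
b = A @ x_star - np.abs(x_star)       # right-hand side of AVE (1.1)

# The AVE residual vanishes at the constructed solution.
residual = np.linalg.norm(A @ x_star - np.abs(x_star) - b)
print(residual)  # 0.0
```

Constructing problems this way (solution first, then b) is the standard device used in the numerical tests of Section 3, where the exact solution is known in advance.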
The system of AVEs constitutes a crucial class of nonlinear and non-differentiable systems with widespread applicability across diverse domains. Such problems are encountered in various fields, including bimatrix games, contact problems, linear and convex quadratic programming, and network pricing, among others (refer to [1][2][3][4][5][6][7] for detailed discussions). Furthermore, the importance of developing numerical algorithms and theoretical frameworks for AVEs goes beyond their theoretical implications, encompassing a vast array of potential applications as well as substantial economic value. Therefore, addressing the challenges posed by AVEs is not only theoretically valuable but also economically significant.
The study of numerical approaches for AVEs involves a comprehensive investigation into solution structures, mathematical hypotheses, algebraic frameworks, and the implementation of robust preconditioners and high-performance numerical algorithms. In recent years, there has been a notable increase in interest in numerical procedures for AVEs, with many research publications offering different strategies. This growing attention highlights the significance of refining strategies and processes for effectively managing the complexities inherent in AVEs, and the evolving landscape of numerical methods reflects a heightened focus on understanding solution features and a continuous pursuit of innovative procedures for the challenges raised by AVEs. For example, Salkuyeh [8] introduced the Picard-HSS approach for AVEs and discussed its convergence conditions. Khan et al. [9] offered a new technique for AVEs based on a generalized Newton's method and Simpson's rule. Chen et al. [10] contributed by suggesting exact and inexact Douglas-Rachford splitting procedures tailored for solving large-scale sparse AVEs, accompanied by a thorough exploration of the convergence theorems relevant to their study. Noor et al. [11] presented minimization strategies for AVEs and analyzed the convergence of these strategies under reasonable conditions. Iqbal et al. [12] offered the Levenberg-Marquardt method for AVEs and examined its convergence in detail. Abdallah et al. [13] reformulated the AVE problem as a linear complementarity problem (LCP) and used a smoothing strategy to compute solutions. Li [14] introduced a preconditioned AOR (accelerated over-relaxation) approach to solve AVEs. Prokopyev [15] studied the unique characteristics of AVEs and their affinity with LCPs. Ke and Ma [16] proposed an SOR-like approach for AVEs and discussed its convergence in detail. Based on this, Chen et al.
[17] thoroughly examined the approach proposed in [16] and derived optimal parameters for the SOR-like scheme. Fakharzadeh and Shams [18] introduced a mixed-type splitting method for computing AVE solutions and examined its convergence. Earlier, Zhao and Shao [19] introduced a matrix-splitting iteration method for generalized AVEs, explaining the convergence aspects of their suggested approach. Ali and Pan [20] proposed a novel generalized iteration methodology tailored for AVEs, furnishing an insightful theoretical discussion of their analysis. Hladík and Moosaei [21] investigated AVE solvability conditions; building on existing solvability results, they delivered valuable insights by presenting two distinct generalizations of sufficient conditions that ensure AVE solvability. Zhou et al. [22] introduced an innovative strategy for solving generalized AVEs utilizing a modified Newton-based matrix splitting iteration; their investigation contained a detailed analysis of aspects such as convergence rate, stability, and robustness across a variety of scenarios. In addition, Zhang et al. [23] introduced a newly devised two-sweep iteration strategy, which further broadens the range of techniques available for addressing AVEs.
In recent years, Dehghan and Hajarian [24], Mao et al. [25], and Li et al. [26] have designed various approaches to computing LCPs utilizing fixed-point algorithms. This investigation aims to extend that strategy to the realm of AVEs by leveraging the fixed-point principle and formulating useful schemes for solving AVE (1.1). The key contributions of this work can be sketched as follows: first, we decompose the matrix A into three distinguishable parts (diagonal, strictly lower-, and strictly upper-triangular matrices) and integrate them into the fixed-point formula to derive novel iterative schemes. In addition, we establish the convergence properties of these newly formulated schemes in a variety of contexts.
The paper is structured as follows: Section 2 discusses the analysis and design of new schemes for computing AVE (1.1) and examines their convergence results. In Section 3, numerical simulation results are presented along with a detailed analysis of outcomes and a validation of the proposed methodologies. In Section 4, conclusions are drawn based on a comprehensive analysis of the previous sections, demonstrating implications and emphasizing the general relevance of the work.

Proposed schemes
This section explains the proposed schemes for solving AVE (1.1). First, we discuss several preliminary findings in order to lay the groundwork for the rest of the discussion.
Throughout this analysis, the symbols ρ(A), Tdg(A), and |A| = (|a_ij|) signify the spectral radius, the tridiagonal matrix, and the absolute value of the matrix A, respectively. A matrix A = (a_ij) ∈ ℜ^{n×n} is termed a Z-matrix if its off-diagonal components are exclusively nonpositive. If A is a Z-matrix and is nonsingular with A^{−1} ≥ 0, it is termed an M-matrix. Additionally, the matrix A is considered strictly row diagonally dominant when the inequality |a_ii| > Σ_{j≠i} |a_ij| holds for each i = 1, 2, …, n.

In order to introduce the new schemes, we split the matrix A into the following structure:

A = D − L − U, (2.3)

where D, L, and U stand as the diagonal, strictly lower-triangular, and strictly upper-triangular parts of A, respectively. The research conducted in [27,28] demonstrated an equivalent transformation of the AVE (1.1) into a fixed-point system of equations, given by

x = x − τΩ(Ax − |x| − b), (2.4)

with τ a positive constant and Ω a diagonal matrix with positive diagonal elements. Upon selecting Ω = D^{−1} (see [29,30] for more details), the expression in equation (2.4) can be expressed in the following manner:

x = x − τD^{−1}(Ax − |x| − b). (2.5)

As a result of combining equations (2.3) and (2.5), we obtain

x = (1 − τ)x + τD^{−1}((L + U)x + |x| + b),

or, equivalently,

x = ((1 − τ)I_n + τD^{−1}(L + U))x + τD^{−1}(|x| + b),

where I_n is the identity matrix. Now we can express our iterative scheme, denoted as Scheme I, for AVE (1.1) as

x^{(k+1)} = ((1 − τ)I_n + τD^{−1}(L + U))x^{(k)} + τD^{−1}(|x^{(k)}| + b), k = 0, 1, 2, …. (2.6)

Next, we consider the convergence of Scheme I.
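A minimal sketch of a Jacobi-type fixed-point sweep of this kind is given below. The update formula is an assumption reconstructed from the splitting A = D − L − U and the fixed-point form (2.5), since the paper's original equations are not fully legible; the test problem is likewise hypothetical, not one of the paper's examples.

```python
import numpy as np

def scheme_one(A, b, tau=1.0, tol=1e-6, max_iter=500):
    """Jacobi-type fixed-point sweep for Ax - |x| = b (a sketch of update (2.6))."""
    d = np.diag(A)                       # diagonal entries of A
    LU = np.diag(d) - A                  # L + U, since A = D - L - U
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        x = (1 - tau) * x + tau * (LU @ x + np.abs(x) + b) / d
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k
    return x, max_iter

# Hypothetical strictly diagonally dominant test problem with known solution.
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])
x_star = np.array([1.0, -2.0, 3.0])
b = A @ x_star - np.abs(x_star)

x, its = scheme_one(A, b)
print(np.allclose(x, x_star, atol=1e-4))  # True
```

Each sweep costs one matrix-vector product plus a diagonal solve, which is what makes fixed-point schemes of this type attractive for large sparse A.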
Theorem 2.1. Consider the sequence {x^{(k)}} generated using the procedure described by formula (2.6) in Scheme I, and let x* represent the solution of AVE (1.1). Then, the following inequality holds:

|x^{(k+1)} − x*| ≤ (|(1 − τ)I_n + τD^{−1}(L + U)| + τ|D^{−1}|)|x^{(k)} − x*|.

Proof. Let x* be a solution of equation (1.1). In this case,

x* = ((1 − τ)I_n + τD^{−1}(L + U))x* + τD^{−1}(|x*| + b). (2.7)

Upon subtracting equation (2.7) from equation (2.6), the resulting expression is given by

x^{(k+1)} − x* = ((1 − τ)I_n + τD^{−1}(L + U))(x^{(k)} − x*) + τD^{−1}(|x^{(k)}| − |x*|).

By applying the absolute value to each side and using ||x^{(k)}| − |x*|| ≤ |x^{(k)} − x*|, we subsequently deduce the inequalities

|x^{(k+1)} − x*| ≤ |(1 − τ)I_n + τD^{−1}(L + U)||x^{(k)} − x*| + τ|D^{−1}||x^{(k)} − x*|,

or, equivalently,

|x^{(k+1)} − x*| ≤ (|(1 − τ)I_n + τD^{−1}(L + U)| + τ|D^{−1}|)|x^{(k)} − x*|.

Note that the matrix |D^{−1}| is nonnegative. As stated in references [29,30], when ρ(|(1 − τ)I_n + τD^{−1}(L + U)| + τ|D^{−1}|) < 1, the iterative sequence {x^{(k)}} generated by Scheme I converges towards the solution x* of the AVE (1.1).
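The sufficient condition of Theorem 2.1 can be checked numerically before iterating. The sketch below evaluates the spectral radius of a comparison matrix of the reconstructed form ρ(|(1 − τ)I + τD^{−1}(L + U)| + τ|D^{−1}|); this form is an assumption, as the paper's exact matrix is not fully legible, and the test matrix is hypothetical.

```python
import numpy as np

def scheme_one_radius(A, tau=1.0):
    """Spectral radius of the (reconstructed) comparison matrix of Theorem 2.1."""
    n = A.shape[0]
    Dinv = np.diag(1.0 / np.diag(A))
    LU = np.diag(np.diag(A)) - A        # L + U
    M = np.abs((1 - tau) * np.eye(n) + tau * Dinv @ LU) + tau * np.abs(Dinv)
    return float(np.max(np.abs(np.linalg.eigvals(M))))

# For this strictly diagonally dominant matrix the condition holds.
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])
rho = scheme_one_radius(A)
print(rho < 1.0)  # True, so the Scheme I error bound contracts
```

For this matrix the comparison matrix is 0.25 times a symmetric 0/1 tridiagonal pattern, so its spectral radius is (1 + √2)/4 ≈ 0.604, comfortably below one.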
To establish the uniqueness of the solution, assume that x̄* is another solution to AVE (1.1). This implies that

x̄* = ((1 − τ)I_n + τD^{−1}(L + U))x̄* + τD^{−1}(|x̄*| + b),

and subtracting this identity from (2.7), which we rewrite as

|x* − x̄*| ≤ (|(1 − τ)I_n + τD^{−1}(L + U)| + τ|D^{−1}|)|x* − x̄*|,

implies, since the spectral radius of the right-hand matrix is less than one, that x* = x̄*. This concludes the proof. □

Next, we will examine Scheme II. Recall that the AVE (1.1) can be restated as expressed in equation (2.5), or, equivalently, as

Dx = (D − τA)x + τ(|x| + b). (2.8)

Let Γ = τI_n, where I_n represents the identity matrix, and 0 < τ ≤ 1. Equations (2.3) and (2.8) together imply the following successively:

Dx = ((1 − τ)D + τ(L + U))x + τ(|x| + b),

(D − τL)x = ((1 − τ)D + τU)x + τ(|x| + b),

and finally

x = (D − τL)^{−1}(((1 − τ)D + τU)x + τ(|x| + b)).

The iterative sequence developed by our Scheme II for computing AVE (1.1) is represented by

x^{(k+1)} = (D − τL)^{−1}(((1 − τ)D + τU)x^{(k)} + τ(|x^{(k)}| + b)), k = 0, 1, 2, …. (2.9)

Next, we employ the subsequent theorem to examine the convergence of Scheme II.
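A Gauss-Seidel-type sweep along these lines can be sketched as follows. The update is the reconstructed form of (2.9) stated above — an assumption, since the paper's original equations are not fully legible — and the test problem is hypothetical.

```python
import numpy as np

def scheme_two(A, b, tau=0.8, tol=1e-6, max_iter=500):
    """Gauss-Seidel-type sweep, a sketch of the reconstructed update (2.9):
    (D - tau*L) x_{k+1} = ((1 - tau)*D + tau*U) x_k + tau*(|x_k| + b)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                 # strictly lower part (A = D - L - U)
    U = -np.triu(A, 1)                  # strictly upper part
    M = D - tau * L                     # lower triangular; cheap to solve
    N = (1 - tau) * D + tau * U
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        x = np.linalg.solve(M, N @ x + tau * (np.abs(x) + b))
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k
    return x, max_iter

# Same hypothetical strictly diagonally dominant test problem as before.
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])
x_star = np.array([1.0, -2.0, 3.0])
b = A @ x_star - np.abs(x_star)

x, its = scheme_two(A, b)
print(np.allclose(x, x_star, atol=1e-4))  # True
```

Because the most recent components of x enter through the lower-triangular solve, sweeps of this type typically need fewer iterations than the Jacobi-type Scheme I on the same problem.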
Theorem 2.2. Consider the iterative sequence {x^{(k)}} generated through Scheme II, and let x* represent the solution to AVE (1.1). Then, the following inequality holds:

|x^{(k+1)} − x*| ≤ |(D − τL)^{−1}|(|(1 − τ)D + τU| + τI_n)|x^{(k)} − x*|.

Proof. Since x* solves (1.1), it satisfies

x* = (D − τL)^{−1}(((1 − τ)D + τU)x* + τ(|x*| + b)).

Subtracting this identity from (2.9) and simplifying the right-hand side, we obtain

x^{(k+1)} − x* = (D − τL)^{−1}(((1 − τ)D + τU)(x^{(k)} − x*) + τ(|x^{(k)}| − |x*|)).

By applying the absolute values on both sides and using ||x^{(k)}| − |x*|| ≤ |x^{(k)} − x*|, we successively acquire

|x^{(k+1)} − x*| ≤ |(D − τL)^{−1}|(|(1 − τ)D + τU| + τI_n)|x^{(k)} − x*|.

Certainly, if the condition ρ(|(D − τL)^{−1}|(|(1 − τ)D + τU| + τI_n)) < 1 is satisfied, the sequence of iterates {x^{(k)}} formed by Scheme II converges.
The demonstration of uniqueness for Scheme II follows a similar explanation as the proof presented in Theorem 2.1 and is excluded from the discussion.□

Numerical tests
Here, we present findings from a variety of numerical tests illustrating the effectiveness of the developed schemes. The assessments focus on factors such as the number of iteration steps (referred to as IT), CPU time (referred to as CPU), and relative residual error (referred to as RES). The RES is represented by the expression

RES = ‖Ax^{(k)} − |x^{(k)}| − b‖ / ‖b‖,

and the iteration is stopped once RES ≤ 10^{−6}. We conducted our simulations utilizing MATLAB 2016a on a system equipped with an Intel(R) Core(TM) i5-3337U CPU at 1.80 GHz and 4.00 GB RAM.
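The stopping rule can be expressed compactly as below; the 2-norm is an assumption (the paper does not show which norm it uses), and the small test problem is hypothetical.

```python
import numpy as np

def res(A, x, b):
    """Relative residual RES = ||A x - |x| - b|| / ||b|| (2-norm assumed)."""
    return np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)

A = np.array([[4.0, -1.0],
              [-1.0, 4.0]])
x_star = np.array([1.0, -1.0])
b = A @ x_star - np.abs(x_star)

print(res(A, x_star, b) <= 1e-6)  # True: the stopping test passes at the solution
```

Normalizing by ‖b‖ makes the tolerance 10^{−6} meaningful across the different problem sizes n used in the tables.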
We first conducted numerical investigations to verify the convergence conditions of both theorems; as reported in Table 1, both schemes satisfy the convergence conditions and perform effectively. To thoroughly evaluate the implementation of our developed schemes, we carried out the following tests.
Example 3.1. Here the coefficient matrix A is a block-tridiagonal matrix. We consider the effectiveness of the proposed Schemes I and II in comparison with three existing approaches: the AOR and SOR approaches [14] and the mixed-type splitting approach (MSA) [18]. A summary of the comparison can be found in Table 2.
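The precise entries of the block-tridiagonal test matrix are not recoverable from the text. The sketch below builds a representative strictly diagonally dominant block-tridiagonal matrix of the kind commonly used in AVE experiments, Tdg(−I, S, −I) with S = tridiag(−1, 8, −1); the specific values are an illustrative assumption, not the paper's definition.

```python
import numpy as np

def block_tridiag(m, diag_val=8.0):
    """n x n (n = m^2) block-tridiagonal matrix: S = tridiag(-1, diag_val, -1)
    on the diagonal blocks, -I on the off-diagonal blocks (illustrative only)."""
    S = diag_val * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    I = np.eye(m)
    n = m * m
    A = np.zeros((n, n))
    for i in range(m):
        A[i*m:(i+1)*m, i*m:(i+1)*m] = S
        if i + 1 < m:
            A[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = -I
            A[(i+1)*m:(i+2)*m, i*m:(i+1)*m] = -I
    return A

A = block_tridiag(10)   # n = 100, matching the first graphical test size
print(A.shape)  # (100, 100)
```

With diag_val = 8, each row's off-diagonal absolute sum is at most 4, so the matrix is strictly diagonally dominant and both convergence theorems apply.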
Moreover, we consider problem sizes of n = 100 and n = 1600 and compare all strategies graphically. A graphical representation of the outcomes can be found in Fig. 1.
Table 2 provides a comprehensive analysis of the solution x* across various n values for each procedure. The results demonstrate that the schemes we introduced exhibit superior effectiveness compared to AOR, SOR, and MSA, especially in terms of both "IT" and "CPU".
Furthermore, we have included the convergence curves for all implemented methodologies for problem sizes of n = 100 and n = 1600 as well. The curves in Fig. 1 indicate the superiority of our new schemes over the other methods.

Example 3.2. The problem dimension satisfies n = m². The vector b is expressed as b = Ax* − |x*|, with x* = ones(n, 1) ∈ ℜ^n. We employ the same stopping criterion and initial vector as specified in reference [27]. To evaluate the efficacy of our suggested schemes, we perform a comparative analysis against the approach presented in [27] (denoted MM), the SOR-like approximate optimal method (SAOM) [17], and the method outlined in [31] (referred to as NM). The outcomes of these comparisons are reported in Table 3.
Additionally, we explore problem sizes of n = 256 and n = 2401, conducting a graphical comparison of all strategies. The visual representation of the results is presented in Fig. 2. Table 3 illustrates the effectiveness and accuracy of the various methods on the given problem. Upon closer examination, it is evident that Scheme I achieves superior results to all other techniques, including Scheme II. Furthermore, our proposed Schemes I and II consistently produce superior results when compared to the alternative methods, particularly in terms of both "IT" and "CPU". Moreover, we have incorporated convergence plots for all methodologies utilized, assuming problem sizes of n = 256 and n = 2401. The graphs in Fig. 2 vividly demonstrate the superior performance of our schemes compared to the MM, SAOM, and NM methods.

Example 3.3. Consider the matrix A = (a_ij) ∈ ℜ^{n×n} with a_ii = 4 and a_{i,i+1} = a_{i+1,i} = 0.5 (i = 1, 2, …, n).
Define the vector b as b = (A − I_n)e, where I_n is the identity matrix of order n and e is an n × 1 vector with all elements equal to unity. Notably, the exact solution is denoted by x* = (1, 1, …, 1)^T.
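The construction of this example can be sketched as follows; the tridiagonal values are reconstructed from the partly garbled statement above, so treat them as an assumption, and a small n stands in for the paper's n = 3000 and n = 7000.

```python
import numpy as np

n = 8                                    # small stand-in for n = 3000 or 7000
# Tridiagonal A with a_ii = 4 and a_{i,i+1} = a_{i+1,i} = 0.5 (reconstructed).
A = 4.0 * np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)

e = np.ones(n)                           # vector of all ones
b = (A - np.eye(n)) @ e                  # b = (A - I_n) e

print(np.allclose(A @ e - np.abs(e), b))  # True: e = (1, ..., 1)^T solves (1.1)
```

Since |e| = e, the identity Ae − |e| = (A − I_n)e makes e the exact AVE solution by construction, which is why the experiments can report errors against a known x*.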
For this example, we initialize the vector as x^{(0)} = (x_1, x_2, …, x_n)^T, with x_i = 0.001i. We compare the proposed techniques with the minimization technique from [11] (referred to as MT), the SOR-like approximate optimal method (SAOM) [17], and the modified search direction iteration method from [32] (denoted MSD). An overview of the results is given in Table 4, and a graphical representation for n = 3000 and n = 7000 can be found in Fig. 3.
Based on the outcomes given in Table 4, all of the tested schemes rapidly compute the solution of system (1.1). Notably, our proposed schemes require fewer iterations (IT) and less computation time (CPU) than the alternative techniques. We have also included convergence graphs for all methodologies, with problem sizes set at n = 3000 and n = 7000. The visuals in Fig. 3 compellingly demonstrate the superior efficacy of our strategies when contrasted with the MT, SAOM, and MSD methods. In summary, these findings strongly support the feasibility and significant advantages of the suggested schemes for AVEs.

Table 4
The results of Example 3.3 when employing the parameter values 1 and 0.8.

Conclusions
In this article, we have presented two new iterative schemes designed to compute solutions of equation (1.1). In addition, we have established convergence theorems for the suggested schemes. Finally, several numerical experiments have been described, demonstrating the applicability and effectiveness of the proposed schemes.
In this analysis, our investigation specifically focused on AVEs where the coefficient matrix is an M-matrix or a strictly diagonally dominant matrix. We intend to focus our future studies on AVEs with broader classes of coefficient matrices.

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Table 1
Convergence requirement prescribed by Theorems 2.1 and 2.2.

Table 3
The results of Example 3.2 when employing the parameter values 1 and 0.8.