The general tensor regular splitting iterative method for multilinear PageRank problem

The paper presents an iterative scheme called the general tensor regular splitting iterative (GTRS) method for solving the multilinear PageRank problem, which is based on a (weak) regular splitting technique and further accelerates the iterative process by introducing a parameter. The method yields familiar iterative schemes through the use of specific splitting strategies, including fixed-point, inner-outer, Jacobi, Gauss-Seidel and successive overrelaxation methods. The paper analyzes the convergence of these solvers in detail. Numerical results are provided to demonstrate the effectiveness of the proposed method in solving the multilinear PageRank problem.


Introduction
As the most crucial algorithm for web ranking in web search engines, Google's PageRank has been extensively researched over the years [1][2][3][4][5][6][7][8]. Gleich et al. [9] extended PageRank to higher-order Markov chains and proposed the following multilinear PageRank problem:

x = αPx^{m−1} + (1 − α)v,    (1.1)

where α ∈ (0, 1) is a parameter and v ∈ R^n is a stochastic vector, i.e., v ≥ 0 and ‖v‖_1 = 1, with R^n denoting the set of all n-dimensional real vectors. The stochastic solution x ∈ R^n is called the multilinear PageRank vector.
Here, P = (p_{i1 i2 ··· im}) is an mth-order n-dimensional stochastic tensor representing an (m − 1)th-order Markov chain, namely,

p_{i1 i2 ··· im} ≥ 0 and Σ_{i1=1}^{n} p_{i1 i2 ··· im} = 1 for all i2, …, im ∈ ⟨n⟩,

where ⟨n⟩ = {1, 2, ··· , n}. The tensor-vector product Px^{m−1} is a vector in R^n and its definition is given in Section 2.
In particular, the multilinear PageRank model (1.1) reduces to a higher-order Markov chain problem when α = 1 (see [10] for the general case). When m = 2, (1.1) simplifies to the classical PageRank problem as described in [11].
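For the m = 2 case just mentioned, (1.1) becomes the linear system x = αPx + (1 − α)v, which the power method solves directly. A minimal sketch follows; the function name and the column-stochastic convention for P are illustrative choices, not from the paper:

```python
import numpy as np

def pagerank(P, v, alpha=0.85, tol=1e-10, kmax=1000):
    """Classical (m = 2) PageRank: iterate x <- alpha * P @ x + (1 - alpha) * v
    until the 1-norm change drops below tol. P is assumed column-stochastic."""
    x = v.copy()
    for _ in range(kmax):
        y = alpha * P @ x + (1 - alpha) * v
        if np.abs(y - x).sum() < tol:
            return y
        x = y
    return x

# Small random example: a 4-node column-stochastic transition matrix
n = 4
P = np.random.rand(n, n)
P /= P.sum(axis=0, keepdims=True)   # columns sum to 1
v = np.full(n, 1.0 / n)             # uniform teleportation vector
x = pagerank(P, v, alpha=0.85)
```

Because P is column-stochastic and v is stochastic, each iterate stays stochastic, and the iteration contracts with rate α.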
The model (1.1) can be transformed into x = P̄x^{m−1}, where P̄ = αP + (1 − α)V and V is an mth-order n-dimensional tensor with (V)_{i i2···im} = v_i. Clearly, P̄ is also a transition probability tensor.

In recent years, the multilinear PageRank problem has attracted much attention. The theory of existence and uniqueness of solutions for the multilinear PageRank problem is a fundamental basis for analyzing the convergence of algorithms. Gleich et al. [9] pointed out that the multilinear PageRank vector in (1.1) is unique for α < 1/(m−1). This condition can be easily applied to compute the stochastic solution of (1.1) when α < 1/2. However, when 1/2 < α < 1, and especially when α ≈ 1, it becomes increasingly challenging to solve (1.1). This challenge inspired researchers to search for more general conditions, such as those discussed in the studies by Li et al. [12,13], Huang and Wu [14], Fasino and Tudisco [15] and Liu et al. [16]. It should be noted that the merits of these conditions cannot be compared theoretically, and no single condition is optimal in numerical experiments.
Gleich et al. first proposed Newton's method to solve this problem in [9]. Subsequently, Meini and Poloni [17] and Guo et al. [18] proposed the Perron-Newton iterative algorithm and the multistep Newton method, respectively. These algorithms involve gradient computation and have shown excellent performance; however, when solving asymmetric tensor models, the difficulty of gradient computation can make them expensive. Unlike gradient algorithms, non-gradient algorithms typically do not require the gradient information of the objective function; instead, they estimate the optimal solution through other means. Non-gradient algorithms have proven to be effective in solving this problem and have gained a significant amount of attention in recent years. For example, Gleich et al. [9] proposed the fixed-point method, the shifted fixed-point method, the inner-outer method and the inverse method. Liu et al. [19] introduced several relaxation methods. Other novel methods have been proposed by various scholars in an effort to achieve better convergence performance (see References [20][21][22][23] and their respective citations). Hence, how to design high-speed, robust and flexible non-gradient algorithms has become a key challenge in solving the multilinear PageRank problem.
As an important class of non-gradient algorithms, splitting algorithms have significant applications in tensor computations. For example, Liu et al. [24] proposed a tensor splitting algorithm for solving tensor equations and high-order Markov chain models. Cui et al. [25] proposed an iterative refinement method using higher-order singular value decomposition to solve general multilinear systems. Jiang and Li [26] proposed a new preconditioner for improving the AOR-type method. Cui and Zhang [27] proposed a new constructive way to estimate bounds on the H-eigenvalues of two kinds of interval tensors. This paper explores a tensor splitting algorithm for solving the multilinear PageRank problem.
The main contributions of this paper can be outlined as follows: (1) The general tensor regular splitting (GTRS) iterative method is proposed to address the multilinear PageRank problem. (2) Five typical splitting methods of the GTRS iteration are offered. (3) The convergence analysis for the proposed method is given based on the uniqueness condition provided by Li et al. [13]. (4) Several numerical examples are provided, and the outcomes significantly outperform the existing ones in most cases.
The remainder of this work is organized as follows. In Section 2, we present some essential definitions and key properties. By making use of the (weak) regular splitting of the coefficient matrix I − αPx^{m−2}, we propose the GTRS iterative method in Section 3. Section 4 is devoted to the convergence analysis of the GTRS method. In Section 5, some application experiments are presented to demonstrate the effectiveness of our method. Finally, the conclusion is given in Section 6.

Preliminaries
In this section, we present a brief overview of the notations and definitions that are necessary for this paper.
An mth-order n-dimensional tensor A with n^m entries is defined as A = (a_{i1 i2···im}), a_{i1 i2···im} ∈ R, i1, i2, …, im ∈ ⟨n⟩, where R is the real field. A is called non-negative (positive) if every entry satisfies a_{i1 i2···im} ≥ 0 (> 0). For any two matrices A = (a_{ij})_{n×n}, B = (b_{ij})_{n×n} ∈ R^{n×n}, A ≥ B means that a_{ij} ≥ b_{ij} for i, j ∈ ⟨n⟩.
Definition 1. [13] Let A be an mth-order n-dimensional tensor, x be an n-dimensional vector and x_i represent the ith entry of x. Then Ax^{m−r} is an rth-order n-dimensional tensor given by

(Ax^{m−r})_{i1···ir} = Σ_{i_{r+1},…,i_m ∈ ⟨n⟩} a_{i1···ir i_{r+1}···i_m} x_{i_{r+1}} ··· x_{i_m}.    (2.1)

Definition 2. [13] Let A be an mth-order n-dimensional tensor and both x and y be n-dimensional vectors; then we obtain the product (2.2). In particular, when r = 1 and r = 2, we obtain Ax^{m−1} and Ax^{m−2} from (2.1). Notice that Ax^{m−1} = (Ax^{m−2})x, so (2.2) can be expressed accordingly.

Definition 3. [13] Let A be an mth-order n-dimensional tensor and both x and y be n-dimensional vectors. A^{(k)}_{xy} is an n × n matrix whose (i, j)th entry is defined as in [13]. It should be noted that (1.1) can be rewritten as (2.3).

Definition 4. A real square matrix Â is an M-matrix if it satisfies the following conditions: Â is a Z-matrix (i.e., all of its off-diagonal elements â_{ij} ≤ 0) and Â can be expressed as hI − B̂, where B̂ is a nonnegative matrix (b̂_{ij} ≥ 0) whose spectral radius satisfies ρ(B̂) < h.
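For a third-order tensor (m = 3), the products Ax^{m−1} and Ax^{m−2} from Definition 1 can be sketched with NumPy's `einsum`; the helper name and the random test tensor are illustrative only:

```python
import numpy as np

def tensor_vec_products(A, x):
    """For a 3rd-order tensor A (shape n x n x n) and a vector x, return
    Ax^2 (a vector) and Ax (an n x n matrix), following
    (Ax^2)_i = sum_{j,k} a_ijk x_j x_k and (Ax)_ij = sum_k a_ijk x_k."""
    Ax2 = np.einsum('ijk,j,k->i', A, x, x)   # Ax^{m-1}, a vector in R^n
    Ax1 = np.einsum('ijk,k->ij', A, x)       # Ax^{m-2}, an n x n matrix
    return Ax2, Ax1

# Sanity check of the identity Ax^{m-1} = (Ax^{m-2}) x on a random stochastic tensor
n = 4
A = np.random.rand(n, n, n)
A /= A.sum(axis=0, keepdims=True)            # sums over the first index equal 1
x = np.full(n, 1.0 / n)                      # uniform stochastic vector
Ax2, Ax1 = tensor_vec_products(A, x)
```

Note that a stochastic tensor applied to a stochastic vector again yields a stochastic vector, since the first-index sums of A are all 1.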
Definition 5. [28] Let O be the n × n null matrix and Ǎ, M̌ and Ň be n × n real matrices with Ǎ = M̌ − Ň. The splitting Ǎ = M̌ − Ň is called a regular splitting if M̌^{−1} ≥ O and Ň ≥ O, and a weak regular splitting if M̌^{−1} ≥ O and M̌^{−1}Ň ≥ O.

Definition 6. Let z ∈ R^n, 0 ∈ R^n be the zero vector and z_+ = max(z, 0). We define the projection proj(·) by proj(z) = z_+ / ‖z_+‖_1. It is clear that proj(z) is a stochastic vector.
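Assuming proj renormalizes the nonnegative part z_+ to unit 1-norm (a natural reading of Definition 6, whose exact formula is not fully reproduced above), a minimal sketch is:

```python
import numpy as np

def proj(z):
    """Project z onto the stochastic vectors: clip negative entries to zero,
    then renormalize to unit 1-norm. Assumes z has at least one positive entry."""
    z_plus = np.maximum(z, 0.0)
    return z_plus / z_plus.sum()

y = proj(np.array([0.5, -0.2, 0.7]))  # nonnegative and sums to 1
```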

The GTRS iterative method
Building upon the success of the general inner-outer iterative method (see [29]) for solving the classical PageRank problem, we introduce the GTRS iterative method as an efficient way to compute the multilinear PageRank vector. Let

I − αPx^{m−2} = M̌x − Ňx    (3.1)

be a (weak) regular splitting. Based on (3.1), the iterative method, i.e., GTRS, for (1.1) is constructed as scheme (3.2), with 0 < φ < 1 and b = (1 − α)v. The scheme (3.2) is derived from the following matrix splitting. Let Ǔx ≥ 0 be the strictly upper-triangular part of the matrix Px^{m−2}, Ďx ≥ 0 be a diagonal matrix and Ľx ≥ 0 be a lower-triangular matrix such that Px^{m−2} = Ďx + Ľx + Ǔx (3.3).

Next, we give several typical practical choices of the matrices M̌x and Ňx. Derived from the above matrix splitting (3.3), the following well-known iterative methods for solving (1.1) are obtained:
(1) Let M̌_{x_k} = I and Ň_{x_k} = αPx_k^{m−2}; then we obtain the fixed-point iteration (denoted GTRS-FP), scheme (3.4).
(2) A second choice yields the inner-outer iteration (denoted GTRS-IO), scheme (3.5), where 0 < β < α.
(3) A third choice yields the Jacobi iteration (denoted GTRS-JCB), scheme (3.6).
(4) A fourth choice yields the Gauss-Seidel iteration (denoted GTRS-GS), scheme (3.7).
(5) A fifth choice yields the successive overrelaxation iteration (denoted GTRS-SOR), scheme (3.8).

Finally, the whole GTRS iterative method can be outlined as follows.

Algorithm 1: The GTRS iterative method.
Require: M̌x, Ňx, α, φ, β, ω, v, the maximum iteration number k_max, the termination tolerance ε and an initial stochastic vector x_0.
Ensure: x.
At each step k, the scheme (3.2) produces y_{k+1}, which is projected back onto the stochastic vectors via x_{k+1} ← proj(y_{k+1}); the loop terminates once the stopping criterion is met or k reaches k_max.
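The general scheme (3.2) is not reproduced in full above, but its simplest member, the fixed-point splitting M̌x = I, Ňx = αPx^{m−2}, reduces for m = 3 to x_{k+1} = proj(αPx_k^2 + (1 − α)v). A sketch under that assumption (tensor normalization, names and the chosen α are illustrative):

```python
import numpy as np

def gtrs_fp(P, v, alpha=0.4, tol=1e-10, kmax=1000):
    """Fixed-point (GTRS-FP-style) iteration for a 3rd-order stochastic
    tensor P: x_{k+1} = proj(alpha * P x_k^2 + (1 - alpha) * v)."""
    n = len(v)
    x = np.full(n, 1.0 / n)
    for k in range(kmax):
        y = alpha * np.einsum('ijk,j,k->i', P, x, x) + (1 - alpha) * v
        y = np.maximum(y, 0.0)
        y /= y.sum()                     # projection onto stochastic vectors
        if np.abs(y - x).sum() < tol:
            return y, k + 1
        x = y
    return x, kmax

# Random stochastic test tensor; alpha = 0.4 < 1/(m-1) guarantees uniqueness
n = 5
P = np.random.rand(n, n, n)
P /= P.sum(axis=0, keepdims=True)
v = np.full(n, 1.0 / n)
x, iters = gtrs_fp(P, v, alpha=0.4)
# 1-norm residual of (1.1) at the computed vector
res = np.abs(0.4 * np.einsum('ijk,j,k->i', P, x, x) + 0.6 * v - x).sum()
```

For α < 1/(m−1) the map is a contraction in the 1-norm, so the iteration converges for any stochastic starting vector.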

Convergence analysis
In this section, we analyze the convergence of the GTRS iterative method. We establish the convergence of the GTRS iteration by using the uniqueness condition of the solution to the multilinear PageRank problem in [13]. First, we give several lemmas which will be used later.

Lemma 1. [13] Let x and y be n-dimensional stochastic vectors, and let J^{(k)} (k = 2, 3, ··· , m) be mth-order n-dimensional tensors defined as in [13]; then J^{(k)}_{xy}∆x = 0, where ∆x = x − y.

Lemma 2. [13] Let P be an mth-order stochastic tensor and v be a stochastic vector; then the multilinear PageRank model (1.1) has a unique solution if α < 1/min{µ, ν}, where µ and ν are the constants determined by P as defined in [13].

Lemma 3. Let P be an mth-order n-dimensional stochastic tensor, and let y and z be n-dimensional stochastic vectors; then we have

‖Px^{m−2}∆y‖_1 ≤ γ‖∆y‖_1,

where ∆y = y − z and γ = min{γ̲, γ̄} with γ̲ = 1 − min{···}.

Proof. Let ∆y = y − z and ∆y_i be the ith entry of ∆y, and let J^{(2)} be defined as in Lemma 1. Bounding each entry of Px^{m−2}∆y and setting γ = min{γ̲, γ̄}, we obtain ‖Px^{m−2}∆y‖_1 ≤ γ‖∆y‖_1.
This completes the proof.
Lemma 4. [13] Let P be an mth-order n-dimensional stochastic tensor, and let x, y and z be n-dimensional stochastic vectors. For any set …, the stated inequality holds. The proof is completed.
Theorem 1. Let P be an mth-order n-dimensional stochastic tensor, α < 1/min{µ, ν} and x be the exact solution of (1.1). Then, for an arbitrary initial stochastic guess x_0, the iterative sequence {x_k}_{k=0}^∞ generated by (3.2) is convergent.

Proof. Note that x is the solution of (1.1); hence (4.4) holds. Taking the 1-norm of both sides of (4.4) and applying Lemma 4, we obtain (4.5). From (4.5) and Lemma 6, the GTRS iteration converges linearly with the convergence rate δ_{x_k} when µ_{x_k} < 1/2.

Remark 1. In Theorem 1, the value of δ_{x_k} is associated with x_k, which is difficult to estimate in practical scenarios. Therefore, we provide convergence analyses for specific splittings of the GTRS iterative method, namely (3.4)-(3.8).
Theorem 2. Let P be an mth-order n-dimensional stochastic tensor, x be a solution of (1.1) and α < min{···, 1/min{µ, ν}}; then, for an arbitrary initial stochastic guess x_0, the iterative sequence {x_k}_{k=0}^∞ generated by (3.4) is convergent.

Proof. It is easy to verify that α < 1/min{µ, ν} under the stated bound, so the solution is unique by Lemma 2. Noting that x is the solution of (1.1) yields (4.8). Subtracting (4.8) from (4.7), we obtain (4.9). After some intermediate estimates based on (4.7), combining (4.9) with Lemmas 3 and 4 gives (4.10). Then, by (4.10) and Lemma 5, we obtain the stated estimate. By the above proof, ς < 1; the proof is completed.
Theorem 3. Let P be an mth-order n-dimensional stochastic tensor, x be a solution of (1.1) and α < min{···, 1/min{µ, ν}}; then, for an arbitrary initial stochastic guess x_0, the iterative sequence {x_k}_{k=0}^∞ generated by (3.5) is convergent.

Proof. Clearly, (1.1) has a unique solution under the stated bound by Lemma 2. By (3.5), we obtain (4.11). By the same argument as in the proof of Theorem 2, combining (4.11) with Lemmas 3 and 4 yields (4.12). By (4.12) and Lemma 5, the stated estimate follows. It is easy to check that ζ < 1, and the proof is completed.
Theorem 4. Let P be an mth-order n-dimensional stochastic tensor, x be a solution of (1.1) and α satisfy a bound of the same form as in Theorems 2 and 3; then, for an arbitrary initial stochastic guess x_0, the iterative sequence {x_k}_{k=0}^∞ generated by (3.6) is convergent.

Proof. Let ê_k = y_k − x and e_k = x_k − x. By (3.6), we obtain (4.13). Combining this with Lemmas 3 and 4, and applying Lemma 6 to (4.13), we obtain the stated estimate. Notice that ξ < 1. This completes the proof of the theorem.
Theorem 5. Let P be an mth-order n-dimensional stochastic tensor, x be a solution of (1.1) and α < min{···, 1/min{µ, ν}}; then, for an arbitrary initial stochastic guess x_0, the iterative sequence {x_k}_{k=0}^∞ generated by (3.7) is convergent.

Proof. It is similar to the proof of Theorem 4.
Remark 2. For the uniqueness conditions stated in other papers, such as those presented in [12,13], we can also offer a corresponding convergence analysis for our proposed algorithm. Here, we omit the details.

Numerical experiments
In this section, we present numerical experiments to demonstrate the merits of the proposed algorithm.
All experiments were performed using MATLAB R2016a on a 64-bit Windows 10 computer equipped with a 1.00 GHz Intel Core i4-29210M CPU and 8.00 GB of RAM. We used three measures to evaluate the effectiveness of the proposed method: the iteration number (denoted by IT), the computing time in seconds (denoted by CPU) and the error (denoted by err). We opted to employ large damping factors during the computation, since in this case iterative methods typically face major convergence problems when solving multilinear PageRank problems. Therefore, we tested the damping parameter values α = 0.95, 0.99, 0.995 and 0.999. We set the maximum iteration number to 1,000, the initial value to x_0 = e/n, where e is an n-dimensional vector with all entries equal to 1, and the termination tolerance to ε = 10^{−10}. We searched the parameter φ (or ω) from 0.1 to 1 with step size 0.01 and β ∈ (0, α) from 0.1 to α with step size 0.01.

Next, we provide the numerical analysis for the GTRS iteration discussed in Section 3, using four examples. We compare our approach with the relaxation methods presented by Liu et al. [19] and with the TARS method; the TARS iterative method is described in Algorithm 2. For clarity, we denote Algorithms 1-4 in the work of Liu et al. as Al_1, Al_2, Al_3 and Al_4, respectively.
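The exact definition of err is not reproduced above; a common choice, assumed here purely for illustration, is the 1-norm residual of (1.1) at the current iterate:

```python
import numpy as np

def err(P, v, x, alpha):
    """Assumed error measure for (1.1) with a 3rd-order tensor P:
    the 1-norm residual ||alpha * P x^2 + (1 - alpha) * v - x||_1.
    (The paper's exact definition of err is not reproduced here.)"""
    r = alpha * np.einsum('ijk,j,k->i', P, x, x) + (1 - alpha) * v - x
    return np.abs(r).sum()

# Example: the residual vanishes at x = v when alpha = 0
n = 4
P = np.random.rand(n, n, n)
P /= P.sum(axis=0, keepdims=True)
v = np.full(n, 1.0 / n)
```

A stopping rule of the form err < ε then matches the tolerance ε = 10^{−10} used in the experiments.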
Example 1. In this example, we considered three tensors from the practical problems in [30,31]. Example (i) involves DNA sequence data, Example (ii) uses interpersonal relationship data and Example (iii) involves physicists' occupational mobility data. The numerical results are listed in Tables 1-3, where the minimum CPU time in each row is indicated in bold font.
From Tables 1-3, we have the following observations and remarks: (1) Although our algorithms require more iterative steps, their computational times are typically shorter than those of Al_1, Al_2, Al_3, Al_4 and TARS, provided that the parameters are chosen appropriately. (2) Considering which of GTRS-FP, GTRS-IO, GTRS-JCB, GTRS-GS and GTRS-SOR attains the least CPU time, we observe that they account for approximately 0%, 91.7%, 0%, 0% and 8.3% of the cases, respectively. Therefore, in this example, GTRS-IO appears to be the most efficient of the five tested algorithms.
(3) Among the five types of splitting in the GTRS iteration, GTRS-JCB may incur a higher CPU time than the other methods, but it requires fewer iterative steps. This indicates that GTRS-JCB converges in fewer iterations, while the other methods may require more iterations to achieve the same convergence performance.
Next, we examine the relationship between the iteration count and the parameters α and φ in the GTRS iteration based on the inner-outer splitting, using Example 1(ii). Choosing β ∈ {0.2, 0.4, 0.6, 0.8} and varying φ from 0.1 to 1 in steps of 0.01, we obtain Figure 1. It shows that the GTRS iteration requires fewer iteration steps for a larger φ, which is more evident for a larger α, such as α = 0.999. The iteration number also changes significantly depending on the chosen β, especially for β = 0.2.

Example 2. [12,19] Let Γ be a directed graph with node set V = ⟨n⟩ = D ∪ P, where D and P denote the sets of dangling nodes and pairwise connected ones, respectively. Denote by n_p (n_p ≥ 2) the number of nodes in P, that is, P = {1, 2, ..., n_p}. A nonnegative tensor A is constructed by taking each entry a_{i1 i2···il} randomly in (0, 1) and normalizing the entries to obtain a stochastic tensor.

Based on the data presented in Table 4, the following observations can be made: (1) The GTRS iteration requires significantly shorter CPU times than Al_1-Al_4 and TARS. This suggests that the GTRS iterative method is more efficient in terms of computational time when appropriate parameters are chosen.
(2) For a larger n_p, such as n_p = 120, GTRS-SOR exhibits shorter CPU times than the other tested algorithms.
(3) For n = 60, in terms of CPU time, GTRS-IO achieves the best performance when α = 0.9 or 0.95, while GTRS-FP, GTRS-JCB, GTRS-GS and GTRS-SOR outperform GTRS-IO when α = 0.99 or 0.999. This implies that the behavior and reliability of these methods do not exhibit a monotonic relationship with α.
Furthermore, Figure 2 displays the relationship between CPU time and err at each step, with n = 150 and n_p = 120; the values of err are plotted on a base-10 logarithmic scale. Figure 2 shows that the GTRS iterative method performs better than Al_1-Al_4 and TARS in terms of both CPU time and error.

Example 3. [13] We generated third-order stochastic tensors of dimension n = 100, 150 and 200 using the MATLAB function rand. The results are reported in Table 5.
From Table 5, we observe the following: (1) If the parameters are chosen suitably, the GTRS iterative method achieves shorter CPU times than Al_1-Al_4 and TARS.
As depicted in Figure 3, the GTRS iterative method consistently outperforms Al_1-Al_4 and TARS in terms of both CPU time and error.
Example 4. [13] Let t denote the density of zero entries in a tensor; P is a positive tensor when t = 0 and a zero tensor when t = 1. Sparse test tensors with different t were generated using the MATLAB function randsample.
In Example 4, we generated third-order tensors of dimension 200 with densities ranging from 0.1 to 0.9 in steps of 0.1. The numerical results are presented in Table 6, from which we draw the following conclusions: (1) When suitable parameters are found, our proposed algorithm consistently outperforms the methods presented in [19] and the TARS method. (2) Out of the 36 test cases, in terms of minimum CPU time, GTRS-SOR, GTRS-GS, GTRS-IO, GTRS-JCB and GTRS-FP account for about 36.1%, 0%, 44.5%, 19.4% and 0%, respectively. Based on these results, GTRS-SOR appears to be the most competitive algorithm, particularly when t = 0.4 or 0.6.
(3) When t is less than 0.5, GTRS-JCB shows better convergence performance than GTRS-GS and GTRS-FP. In most cases, GTRS-IO requires a shorter CPU time than the other splitting methods of GTRS type, especially when t = 0.8.
Next, we discuss the relationship between the number of iterative steps and φ for different values of α. Given the superior convergence properties of the GTRS iterative method with successive overrelaxation splitting, and the significant variation in the number of iterations in Example 4, we selected this splitting for the experiment. Specifically, we used ω ∈ {0.3, 0.5, 0.7, 0.9}, t ∈ {0.4, 0.6} and varied φ from 0.1 to 1 in increments of 0.01. The associated comparison results are plotted in Figures 4 and 5. They indicate that increasing φ improves the performance of the GTRS iterative method, as evidenced by the reduced number of iterations required.

Figures 6-9 show the convergence history of the GTRS iteration under different tensor densities and values of α. From Figures 6-9, we find that the proposed algorithms converge faster than the existing methods in most cases, with GTRS-IO performing best. Notice that GTRS-FP, GTRS-IO and GTRS-JCB always require less computing time than the existing methods, since they attain smaller errors, which reduces the number of iterations required for convergence.
Different parameter settings can lead to optimal performance for each GTRS variant. By selecting appropriate values of α and n, the computational efficiency and accuracy of the GTRS iteration can be improved; note that the optimal parameter settings may vary depending on the specific problem and dataset being analyzed. In general, the GTRS iteration achieves better convergence performance than Al_1-Al_4 and TARS in solving the multilinear PageRank problem.

Conclusions
In this paper, based on the (weak) regular splitting of the matrix I − αPx^{m−2}, we have presented the GTRS iterative method for solving the multilinear PageRank problem (1.1) and proved its overall convergence. In addition, we provided several splittings of the GTRS iteration and established the corresponding convergence theorems. Numerical experiments indicated that, with appropriately chosen parameters, the GTRS iterative method is superior to the existing methods in [19] and the TARS method. Overall, this research contributes to the advancement of non-gradient algorithms and their practical use in tensor computations. In future work, we plan to investigate the latest uniqueness conditions to establish better convergence properties.

Figure 1 .
Figure 1. The iterative steps for Example 1(ii) with different values of α and φ.

Figure 2 .
Figure 2. The relationship between CPU time and err for different values of α in Example 2.

Figure 3 .
Figure 3. The relationship between CPU time and err for different values of α in Example 3.

Figure 5 .
Figure 5. Taking t=0.6, the iterative steps with different values of α and φ for Example 4.

Figure 9 .
Figure 9. The relationship between CPU time and err for different values of t with α = 0.999 for Example 4.

Table 1 .
Comparison of GTRS iterative method with Al

Table 2 .
Comparison of GTRS iterative method with Al

Table 3 .
Comparison of GTRS iterative method with Al

Table 4 .
Comparison of GTRS iterative method with Al

Table 5 .
Comparison of GTRS iterative method with Al

Table 6 .
Comparison of GTRS iterative method with Al

Figure 4. Taking t = 0.4, the iterative steps with different values of α and φ for Example 4.
Figure 6. The relationship between CPU time and err for different values of t with α = 0.90 for Example 4.

Figure 7. The relationship between CPU time and err for different values of t with α = 0.95 for Example 4.

Figure 8. The relationship between CPU time and err for different values of t with α = 0.99 for Example 4.