SIR - an Efficient Solver for Systems of Equations

The Semi-Implicit Root solver (SIR) is an iterative method for globally convergent solution of systems of nonlinear equations. Since its publication, SIR has proven robust for a great variety of problems. We here present MATLAB and MAPLE codes for SIR that can be easily implemented in any application where linear or nonlinear systems of equations need to be solved efficiently. The codes employ recently developed efficient sparse matrix algorithms and improved numerical differentiation. SIR convergence is quasi-monotonous and approaches second order in the proximity of the real roots. Global convergence is usually superior to that of Newton's method, which is a special case of SIR. Furthermore, the algorithm cannot land on local minima, as may be the case for Newton's method with linesearch.


Motivation and significance
Systems of algebraic equations generally need to be solved computationally, whether with direct or iterative methods. The Semi-Implicit Root solver (SIR [5]), reported on here, was developed in order to improve on the global convergence characteristics of the widely used Newton method. Linesearch [6] is often combined with Newton's method to improve convergence but, unlike SIR, it may lead to local extrema rather than to the roots of the equations. SIR development was initially inspired by semi-implicit PDE algorithms ([1], [2], [3]) and evolved as a robust equation solver for the time-spectral method GWRM [4] for systems of PDEs; it is, however, generally applicable to systems of equations.
Recently, the algorithm and its coding in MATLAB and MAPLE have been substantially improved with respect to efficiency; updated software and matrix handling are now used. We believe it is beneficial to present ready-to-use codes, which is the motivation for this paper. First, a brief overview of SIR is provided in section 2. For a thorough explanation of the SIR algorithm the reader is advised to consult [5]. An example application is given in section 3. Pseudocode, describing the algorithm in detail, can be found in section 4. In the Appendices, MATLAB and MAPLE codes are provided.

Software description
The roots of the single equation f(x) = 0, with f(x) ≡ x − ϕ(x), are found by SIR after a reformulation in iterative form as

    (1 + β) x_{i+1} = β x_i + ϕ(x_i),    (1)

where β is a real and arbitrary parameter.
Eq. (1) has the same roots as the original equation. We cast it into the form

    x_{i+1} = α(x_i − ϕ(x_i)) + ϕ(x_i),    (2)

where α = β/(1 + β) is a parameter for optimizing global and local convergence. Introducing Φ(x; α) ≡ α(x − ϕ(x)) + ϕ(x), it may be shown that convergence requires |∂Φ/∂x| < 1 to hold for the iterates x_i in a neighbourhood of the root [5]. Newton's method (for a single equation also known as the Newton–Raphson method) assumes |∂Φ/∂x| = 0 everywhere, thus potentially achieving maximized, second order convergence near the root. Newton's method is, however, not globally convergent, because the linear approximation breaks down for initial iterates x_0 positioned too far from the root. SIR remedies this problem by enforcing monotonic convergence. The trick is to choose appropriate values of ∂Φ/∂x ≡ R at each iteration i. Thus SIR iterates the equation using

    x_{i+1} = Φ(x_i; α_i),    α_i = (R_i − ϕ′(x_i))/(1 − ϕ′(x_i)),    (3)

where ϕ′(x) = dϕ/dx. Typically R_i is initially given a value in the interval [0.5, 0.99], whereafter SIR automatically reduces it towards zero to achieve second order convergence in the vicinity of the root. Since monotonic convergence is guaranteed, SIR will consecutively find all real roots x of the equation.
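The scalar iteration (3), combined with the reduction of R towards zero discussed in the Pseudocode section, is only a few lines of code. The following Python sketch is illustrative only; the function name sir_scalar, the tolerances and the test equation x = cos(x) are our choices, not part of the published MATLAB/MAPLE codes:

```python
import math

def sir_scalar(phi, dphi, x0, R0=0.95, kappa=0.5, tol=1e-12, max_iter=100):
    # One-dimensional SIR: x_{i+1} = Phi(x_i; alpha_i) with
    # alpha_i = (R_i - phi'(x_i)) / (1 - phi'(x_i)), so that dPhi/dx = R_i.
    x, R = x0, R0
    for _ in range(max_iter):
        dp = dphi(x)
        alpha = (R - dp) / (1.0 - dp)
        x_new = alpha * (x - phi(x)) + phi(x)     # Phi(x; alpha)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
        R *= kappa   # reduce R towards zero: second order convergence near the root
    return x

# Test equation x = cos(x); its root is the Dottie number, about 0.739085.
root = sir_scalar(math.cos, lambda x: -math.sin(x), 1.0)
```

Note that for R = 0 the iteration reduces exactly to Newton's method for f(x) = x − ϕ(x), which is the sense in which Newton's method is a special case of SIR.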
Generalizing to systems of N equations, SIR now solves

    x_{i+1} = Φ(x_i; A),    (4)

where

    Φ_m(x; A) ≡ Σ_n α_mn (x_n − ϕ_n(x)) + ϕ_m(x).    (5)

Here the matrix A has components α_mn, the Jacobian matrix J has components J_mn = ∂(x_m − ϕ_m(x))/∂x_n, δ_mn is the Kronecker delta and I is the identity matrix. It is usually most economic to obtain A by solving the linear matrix relation

    A J = J + R − I,    (6)

which follows from the requirement ∂Φ_m/∂x_n = R_mn = R_m δ_mn (a diagonal matrix), using sparse matrix methods. When solving multidimensional equations, strictly monotonous convergence cannot be guaranteed, since there is no known effective procedure (like root bracketing in the 1D case) to safely maintain the direction from a starting point towards a root. A procedure for "quasi-monotonicity" is thus employed in SIR [5]. SIR is also safeguarded against certain pitfalls. An evident problem can be seen in Eq. (6): there is a risk that the A matrix may become singular. The cure for this is subiteration, where the R_m values are modified towards unity. See [5] for further discussion of measures that enhance convergence.
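The linear matrix relation for A can be illustrated numerically. The following Python sketch (dense NumPy in place of the sparse methods used in the actual codes; the sample point is arbitrary) obtains A for the example system of section 3 and verifies the defining requirement ∂Φ_m/∂x_n = R_mn:

```python
import numpy as np

def phi(x):
    # Example system of section 3: x1 = cos(x2), x2 = 3*cos(x1)
    return np.array([np.cos(x[1]), 3.0 * np.cos(x[0])])

def jacobian(x):
    # J_mn = d(x_m - phi_m(x))/dx_n
    return np.array([[1.0, np.sin(x[1])],
                     [3.0 * np.sin(x[0]), 1.0]])

x = np.array([1.0, 1.0])        # arbitrary sample point
R = np.diag([0.95, 0.95])       # diagonal matrix of slope parameters R_m
J = jacobian(x)
I = np.eye(2)

# Solve A J = J + R - I for A (via the transposed system J^T A^T = B^T)
A = np.linalg.solve(J.T, (J + R - I).T).T

# Since dphi/dx = I - J, the slope of Phi is dPhi/dx = A J + (I - J),
# which should equal the prescribed diagonal matrix R.
dPhi = A @ J + (I - J)
```
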
Summarizing, at each iteration the SIR algorithm reduces the R_m values towards zero to approach second order convergence. If non-monotonous convergence becomes pronounced in any dimension, local subiteration, increasing the R_m values towards unity, is used.

Illustrative example
SIR has been compared to Newton's method with line-search (NL) for a large set of standard problems [5]. In several cases it features superior convergence characteristics, in particular when singularities appear in the computation of the A matrix.
It is well known that the NL method may sometimes land on local minima rather than on the roots of the equations. An example of this was provided in [5], where the concept of "convergence diagrams" was also introduced. Since iterative solvers can be very sensitive to the starting points x_0, the diagram displays, using colour marking, the quality of convergence for a set of 61 by 61 uniformly distributed x_0 ∈ [−5, 5]². The two equations solved are x_1 = cos(x_2) and x_2 = 3cos(x_1).
The convergence diagrams clearly show that the global convergence properties of the SIR and NL schemes are complex even for this seemingly simple problem. It is seen that, in subiteration mode (SIR-s), convergence is rapid for nearly all starting points, whereas standard SIR converges for about one third of the starting points. The NL method cannot match the convergence of SIR-s, due to excessive landing on non-zero local minima, and performs similarly to standard SIR.
Recently, we have introduced a number of measures to improve SIR efficiency. An important example concerns the time-consuming computation of the A matrix, which acts as a "scaffold" when building the solution. The scaffold need not be perfect, however. In nonlinear computations it may suffice to recompute A during the first few iterations only (parameter mA; see the MAPLE code), whereafter it still accurately leads to convergence. The fact that A is constant for linear systems of equations is also useful.
The MATLAB implementation has been significantly improved by minimizing the number of symbolic operations performed.

MATLAB code for the example
Below follows a MATLAB coding of the example described above.
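Alongside the MATLAB version, the same computation can be sketched in Python. This is an illustrative, dense-matrix rendition of the basic SIR iteration without subiteration; the starting point near one of the roots, and all names, are our choices rather than the paper's:

```python
import numpy as np

def phi(x):
    # The example system: x1 = cos(x2), x2 = 3*cos(x1)
    return np.array([np.cos(x[1]), 3.0 * np.cos(x[0])])

def jacobian(x):
    # J_mn = d(x_m - phi_m(x))/dx_n for this example
    return np.array([[1.0, np.sin(x[1])],
                     [3.0 * np.sin(x[0]), 1.0]])

def sir(x0, R0=0.95, kappa=0.5, tol=1e-12, max_iter=60):
    x = np.asarray(x0, dtype=float)
    R = np.full(x.size, R0)
    I = np.eye(x.size)
    for _ in range(max_iter):
        J = jacobian(x)
        # Solve A J = J + R - I (dense here; sparse methods in the real codes)
        A = np.linalg.solve(J.T, (J + np.diag(R) - I).T).T
        x_new = A @ (x - phi(x)) + phi(x)   # x_{i+1} = Phi(x_i; A)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
        R = kappa * R    # drive R towards 0 for second order convergence
    return x

# Starting point chosen near one of the roots (illustrative choice)
x = sir([-0.7, 2.3])
```

At the returned point the residuals x_1 − cos(x_2) and x_2 − 3cos(x_1) are negligibly small.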

Pseudocode
The semi-implicit real root solver, given by Eqs. (4) and (5), has been coded in MATLAB and in MAPLE. Pseudocode is provided below. Let us first, however, discuss the basic steps as the computation proceeds. After parameter and procedure definitions and initialization with the estimate x_0, there are three main steps in the iteration loop.
Firstly, at each iteration step new positions x_{i+1} and, when required, the matrix A are computed. Secondly, an optional validity check of the iteration step is carried out if convergence is slow, i.e. if at least one member of the set {|f_m(x_{i+1})| − |f_m(x_i)|, m = 1, ..., N} is greater than zero. Here it is controlled that convergence is quasi-monotonous and that the A matrix is not singular or near-singular. Large K values result in costly function evaluations; we have found, however, that the value K = 1 is appropriate for most problems, including those of Table 1 in [5]. The value M_c = 0 would correspond to strictly monotonous convergence, which usually is unattainable. Instead, by choosing a small, negative value for M_c, quasi-monotonicity is allowed.
Furthermore, if the A matrix with components α_mn is near-singular, fatally large steps could result. We here require that all |α_mn| ≤ α_max, where α_max = 2 is a good default value.
If either of these criteria is not satisfied for all problem dimensions, new values R_m = (3R_m + 1)/4 are computed, for at most I_s subiteration steps, in each relevant dimension m. This procedure causes the slope of the corresponding hypersurface to approach unity in the direction m, which supports monotonic convergence. The local gradient of the hypersurface is zero for all other dimensions.
As a final, third step, the slope parameters R_m are reduced towards zero to facilitate convergence approaching second order; thus we have R_m^{i+1} = κ R_m^i, where standard values are R_m^0 = 0.95 and κ = 0.5. The iterations proceed until the desired accuracy is achieved or until the assigned number of iterations is exceeded. It should be remarked that the use of subiterations, by calling SIR-s, slows down the algorithm due to additional numerical operations and should only be used when required.
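The two competing R-updates are easy to examine numerically. This Python fragment (illustrative only) shows that repeated reduction R ← κR drives R towards zero, while repeated subiteration R ← (3R + 1)/4 drives R back towards unity:

```python
# kappa-reduction: R <- kappa * R, towards 0 (second order convergence)
R, kappa = 0.95, 0.5
reduced = [R]
for _ in range(10):
    R = kappa * R
    reduced.append(R)

# subiteration: R <- (3R + 1)/4, back towards 1 (supports monotonicity)
R = 0.5
subiterated = [R]
for _ in range(10):
    R = (3.0 * R + 1.0) / 4.0
    subiterated.append(R)
```

The subiteration map has the fixed point R = 1 and contracts the distance to it by a factor 3/4 per step, so R approaches unity geometrically.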
A formal pseudocode for SIR is given below and in [5]. In Appendices A and B, MATLAB and MAPLE codes for SIR are provided.

Impact
SIR is routinely used as a robust solver of systems of nonlinear algebraic equations in GWRM applications [5]. The GWRM is a time-spectral PDE solver that has been applied to linear and nonlinear PDEs. The latter include Burgers' equation, a nonlinear wave equation and chaotic equations relevant for numerical weather prediction.
The algorithm has extended global convergence as compared to Newton's method and avoids landing on local minima, which can be problematic for Newton's method with linesearch. Thus SIR has the potential to be widely used in a number of computational physics areas.

Conclusion
SIR is a recently developed solver for general systems of nonlinear algebraic equations. Global convergence is often superior to that of Newton's method and Newton's method with linesearch, as shown in an example. SIR is also simple to code; compact MATLAB and MAPLE codes are included as appendices.
Subiteration check (fragment of the SIR pseudocode):

if sub and max(conv) > 0 then
    for j = 1 to I_s do
        S_1 = subiterated sequence for monotonicity test (see text)
        S_2 = sequence of |A_nj| for all dimensions n
        if max(S_2) < a_c and min(S_1) ≥ M_c then
            return false
        R_n = (3R_n + 1)/4