Article

An Efficient Class of Traub-Steffensen-Like Seventh Order Multiple-Root Solvers with Applications

1 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal, Sangrur 148106, India
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Authors to whom correspondence should be addressed.
Symmetry 2019, 11(4), 518; https://doi.org/10.3390/sym11040518
Submission received: 20 March 2019 / Revised: 4 April 2019 / Accepted: 8 April 2019 / Published: 10 April 2019
(This article belongs to the Special Issue Symmetry with Operator Theory and Equations)

Abstract: Many higher order multiple-root solvers that require derivative evaluations are available in the literature. In contrast, higher order multiple-root solvers without derivatives are difficult to develop, and such techniques are still rare. Motivated by this fact, we develop a new family of higher order derivative-free solvers for computing multiple zeros by using a simple approach. The stability of the techniques is checked through the complex geometry shown by drawing basins of attraction. Applicability is demonstrated on practical problems, which illustrate the efficient convergence behavior. Moreover, the comparison of numerical results shows that the proposed derivative-free techniques are good competitors of the existing techniques that require derivative evaluations in the iteration.

1. Introduction

Solving nonlinear equations is an important task in numerical analysis and has numerous applications in engineering, mathematical biology, physics, chemistry, medicine, economics, and other disciplines of applied sciences [1,2,3]. Due to advances in computer hardware and software, the problem of solving nonlinear equations by computational techniques has acquired the additional advantage of handling lengthy and cumbersome calculations. In the present paper, we consider iterative techniques for computing a multiple root, say $\alpha$ with multiplicity $m$, of a nonlinear equation $f(x) = 0$, that is, $f^{(j)}(\alpha) = 0$ for $j = 0, 1, 2, \ldots, m-1$ and $f^{(m)}(\alpha) \neq 0$. The solution $\alpha$ can be calculated as a fixed point of some function $M : D \subset \mathbb{C} \to \mathbb{C}$ by means of the fixed point iteration:
$$x_{n+1} = M(x_n), \quad n \geq 0, \qquad (1)$$
where $x \in D$ is a scalar.
Many higher order techniques, based on the quadratically-convergent modified Newton's scheme (see [4]):
$$x_{n+1} = x_n - m\,\frac{f(x_n)}{f'(x_n)}, \qquad (2)$$
have been proposed in the literature; see, for example, [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20] and the references therein. The techniques based on Newton's method or Newton-like methods require the evaluation of first order derivatives. There is another class of multiple-root techniques involving derivatives of both the first and second order; see [5,21]. However, higher order derivative-free techniques to handle the case of multiple roots are yet to be explored. The main difficulty in developing such techniques lies in establishing their convergence order. Derivative-free techniques are important in situations where the derivative of the function $f$ is difficult to compute or is expensive to obtain. One such derivative-free technique is the classical Traub–Steffensen method [22], which replaces the derivative in the classical Newton's method with a suitable approximation based on the difference quotient,
$$f'(x_n) \approx \frac{f(x_n + \beta f(x_n)) - f(x_n)}{\beta f(x_n)}, \quad \beta \in \mathbb{R} \setminus \{0\},$$
or writing more concisely:
$$f'(x_n) \approx f[x_n, t_n],$$
where $t_n = x_n + \beta f(x_n)$ and $f[x_n, t_n] = \dfrac{f(t_n) - f(x_n)}{t_n - x_n}$ is a first order divided difference. In this way, the modified Newton's scheme (2) assumes the form of the modified Traub–Steffensen scheme:
$$x_{n+1} = x_n - m\,\frac{f(x_n)}{f[x_n, t_n]}. \qquad (3)$$
The Traub–Steffensen scheme (3) is a noticeable improvement of Newton’s scheme, since it maintains the quadratic convergence without using any derivative.
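As an illustration, the modified Traub–Steffensen step (3) takes only a few lines of code. The helper below is our own sketch (names and guard clauses are ours, not from the paper); the guards simply stop the iteration once double-precision arithmetic can no longer resolve the divided difference:

```python
def traub_steffensen(f, x0, m, beta=0.01, tol=1e-12, max_iter=50):
    """Modified Traub-Steffensen iteration (3): x <- x - m*f(x)/f[x, t],
    where t = x + beta*f(x) and f[x, t] is a first order divided difference."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        t = x + beta * fx
        if fx == 0.0 or t == x:
            return x            # f vanished, or beta*f(x) fell below one ulp
        dd = (f(t) - fx) / (t - x)   # divided difference f[x, t] ~ f'(x)
        if dd == 0.0:
            return x
        x_new = x - m * fx / dd
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = (x^2 - 1)^2 has a double root at x = 1 (m = 2).
root = traub_steffensen(lambda x: (x * x - 1.0) ** 2, x0=1.5, m=2)
```

Note that no derivative of $f$ appears anywhere, yet the iteration retains the quadratic convergence of the modified Newton scheme (2).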
The aim of the present contribution is to develop derivative-free multiple-root iterative techniques with high computational efficiency, which means the techniques that may attain a high convergence order using as small a number of function evaluations as possible. Consequently, we develop a family of derivative-free iterative methods of seventh order convergence that requires only four function evaluations per full iteration. The scheme is composed of three steps, out of which the first step is the classical Traub–Steffensen iteration (3) and the last two steps are Traub–Steffensen-like iterations. The methodology is based on the simple approach of using weight functions in the scheme. Many special cases of the family can be generated depending on the different forms of weight functions. The efficacy of the proposed methods is tested on various numerical problems of different natures. In the comparison with existing techniques requiring derivative evaluations, the new derivative-free methods are observed to be computationally more efficient.
We summarize the contents of the rest of the paper. In Section 2, the scheme of the seventh order multiple-root solvers is developed and its order of convergence is determined. In Section 3, basins of attraction are presented to check the stability of the new methods. To demonstrate the performance and comparison with existing techniques, the new techniques are applied to solve some practical problems in Section 4. Concluding remarks are given in Section 5.

2. Development of the Family of Methods

Given a known multiplicity $m \geq 1$, we consider a three-step iterative scheme with the first step as the Traub–Steffensen iteration (3) as follows:
$$\begin{aligned} y_n &= x_n - m\,\frac{f(x_n)}{f[x_n, t_n]}, \\ z_n &= y_n - m\,u\,H(u)\,\frac{f(x_n)}{f[x_n, t_n]}, \\ x_{n+1} &= z_n - m\,v\,G(u, w)\,\frac{f(x_n)}{f[x_n, t_n]}, \end{aligned} \qquad (4)$$
where $u = \left(\frac{f(y_n)}{f(x_n)}\right)^{1/m}$, $v = \left(\frac{f(z_n)}{f(x_n)}\right)^{1/m}$, and $w = \left(\frac{f(z_n)}{f(y_n)}\right)^{1/m}$. The function $H : \mathbb{C} \to \mathbb{C}$ is analytic in a neighborhood of 0, and the function $G : \mathbb{C} \times \mathbb{C} \to \mathbb{C}$ is holomorphic in a neighborhood of $(0, 0)$. Note that the second and third steps are weighted by the factors $H(u)$ and $G(u, w)$, so these factors are called weight factors or weight functions.
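To make the construction concrete, here is a minimal real-arithmetic sketch of one pass of Scheme (4). The helper names (`mth_root`, `scheme4_step`), the guard clauses, and the restriction to real ratios are our own simplifications; the weight pair used in the demo, $H(u) = 1 + 2u - u^2$ and $G(u,w) = 1 + 2u + w + w^2$, is one admissible choice satisfying the conditions derived in Theorem 1 below:

```python
import math

def mth_root(t, m):
    """Signed real m-th root, a pragmatic stand-in for the principal branch."""
    return math.copysign(abs(t) ** (1.0 / m), t)

def scheme4_step(f, x, m, H, G, beta=0.01):
    """One iteration of the three-step scheme (4)."""
    fx = f(x)
    if fx == 0.0:
        return x
    t = x + beta * fx
    if t == x:                              # beta*f(x) below one ulp: stop
        return x
    dd = (f(t) - fx) / (t - x)              # f[x_n, t_n]
    y = x - m * fx / dd                     # first step: Traub-Steffensen (3)
    fy = f(y)
    if fy == 0.0:
        return y
    u = mth_root(fy / fx, m)
    z = y - m * u * H(u) * fx / dd          # second step
    fz = f(z)
    if fz == 0.0:
        return z
    v = mth_root(fz / fx, m)
    w = mth_root(fz / fy, m)
    return z - m * v * G(u, w) * fx / dd    # third step

H = lambda u: 1 + 2 * u - u ** 2
G = lambda u, w: 1 + 2 * u + w + w ** 2

# Demo: triple root at 2 of f(x) = (x - 2)^3, starting from x = 2.5.
x = 2.5
for _ in range(4):
    x = scheme4_step(lambda s: (s - 2.0) ** 3, x, m=3, H=H, G=G)
```

Each pass uses only four evaluations of $f$ (at $x_n$, $t_n$, $y_n$, and $z_n$) and no derivatives.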
We shall find conditions under which the scheme (4) achieves convergence order as high as possible. In order to do this, let us prove the following theorem:
Theorem 1.
Let $f : \mathbb{C} \to \mathbb{C}$ be an analytic function in a region enclosing a multiple zero $\alpha$ with multiplicity m. Assume that the initial guess $x_0$ is sufficiently close to α; then the iteration scheme defined by (4) possesses seventh order convergence, provided that the following conditions are satisfied:
$$H(0) = 1, \quad H'(0) = 2, \quad H''(0) = -2, \quad |H'''(0)| < \infty,$$
$$G_{00}(0,0) = 1, \quad G_{10}(0,0) = 2, \quad G_{01}(0,0) = 1, \quad G_{20}(0,0) = 0, \quad |G_{11}(0,0)| < \infty,$$
where $G_{ij}(0,0) = \frac{\partial^{i+j}}{\partial u^i \partial w^j} G(u,w)\big|_{(0,0)}$.
Proof. 
Let the error at the $n$th iteration be $e_n = x_n - \alpha$. Using the Taylor expansion of $f(x_n)$ about $\alpha$, we have that:
$$f(x_n) = \frac{f^{(m)}(\alpha)}{m!} e_n^m + \frac{f^{(m+1)}(\alpha)}{(m+1)!} e_n^{m+1} + \frac{f^{(m+2)}(\alpha)}{(m+2)!} e_n^{m+2} + \cdots + \frac{f^{(m+7)}(\alpha)}{(m+7)!} e_n^{m+7} + O\left(e_n^{m+8}\right)$$
or:
$$f(x_n) = \frac{f^{(m)}(\alpha)}{m!} e_n^m \left(1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + C_4 e_n^4 + C_5 e_n^5 + C_6 e_n^6 + C_7 e_n^7 + O(e_n^8)\right), \qquad (5)$$
where $C_k = \frac{m!}{(m+k)!} \frac{f^{(m+k)}(\alpha)}{f^{(m)}(\alpha)}$ for $k \in \mathbb{N}$.
Using (5) in $t_n = x_n + \beta f(x_n)$, we obtain that:
$$t_n - \alpha = e_n + \beta\,\frac{f^{(m)}(\alpha)}{m!} e_n^m \left(1 + C_1 e_n + C_2 e_n^2 + \cdots + C_7 e_n^7 + O(e_n^8)\right). \qquad (6)$$
Taylor's expansion of $f(t_n)$ about $\alpha$ is given as:
$$f(t_n) = \frac{f^{(m)}(\alpha)}{m!} (t_n - \alpha)^m \left(1 + C_1 (t_n - \alpha) + C_2 (t_n - \alpha)^2 + \cdots + C_7 (t_n - \alpha)^7 + O\left((t_n - \alpha)^8\right)\right). \qquad (7)$$
By using Equations (5)–(7) in the first step of (4), after some simple calculations, it follows that:
$$y_n - \alpha = \frac{C_1}{m} e_n^2 + \frac{2mC_2 - (m+1)C_1^2}{m^2} e_n^3 + \frac{1}{m^3}\left((m+1)^2 C_1^3 - m(4+3m) C_1 C_2 + 3m^2 C_3\right) e_n^4 + \sum_{i=1}^{3} \omega_i e_n^{i+4} + O(e_n^8), \qquad (8)$$
where $\omega_i = \omega_i(m, C_1, C_2, \ldots, C_7)$ are given in terms of $m, C_1, C_2, \ldots, C_7$. The expressions of $\omega_i$ are very lengthy, so they are not written explicitly.
Taylor's expansion of $f(y_n)$ about $\alpha$ is given by:
$$f(y_n) = \frac{f^{(m)}(\alpha)}{m!} \left(\frac{C_1}{m}\right)^m e_n^{2m} \Bigg(1 + \frac{2mC_2 - (m+1)C_1^2}{C_1}\, e_n + \frac{1}{2mC_1^2}\Big((3 + 3m + 3m^2 + m^3) C_1^4 - 2m(2 + 3m + 2m^2) C_1^2 C_2 + 4(m-1)m^2 C_2^2 + 6m^2 C_1 C_3\Big) e_n^2 + \sum_{i=1}^{4} \bar{\omega}_i e_n^{i+2} + O(e_n^8)\Bigg), \qquad (9)$$
where $\bar{\omega}_i = \bar{\omega}_i(m, C_1, C_2, \ldots, C_7)$.
By using (5) and (9), we get the expression of $u$ as:
$$u = \frac{C_1}{m} e_n + \frac{2mC_2 - (m+2)C_1^2}{m^2} e_n^2 + \sum_{i=1}^{5} \eta_i e_n^{i+2} + O(e_n^8), \qquad (10)$$
where $\eta_i = \eta_i(m, C_1, C_2, \ldots, C_7)$ are given in terms of $m, C_1, C_2, \ldots, C_7$, with one explicitly-written coefficient $\eta_1 = \frac{1}{2m^3}\left((2m^2 + 7m + 7)C_1^3 - 2m(3m+7)C_1 C_2 + 6m^2 C_3\right)$.
We expand the weight function $H(u)$ in a neighborhood of 0 by the Taylor series; then we have that:
$$H(u) \approx H(0) + u\,H'(0) + \frac{1}{2} u^2 H''(0) + \frac{1}{6} u^3 H'''(0). \qquad (11)$$
Inserting Equations (5), (9), and (11) in the second step of Scheme (4) and simplifying,
$$z_n - \alpha = \frac{A}{m} C_1 e_n^2 + \frac{1}{m^2}\Big(2mA C_2 + C_1^2\big({-1} - mA + 3H(0) - H'(0)\big)\Big) e_n^3 + \frac{1}{2m^3}\Big(6Am^2 C_3 + 2m C_1 C_2\big({-4} - 3Am + 11H(0) - 4H'(0)\big) + C_1^3\big(2 + 2Am^2 - 13H(0) + 10H'(0) + m(4 - 11H(0) + 4H'(0)) - H''(0)\big)\Big) e_n^4 + \sum_{i=1}^{3} \gamma_i e_n^{i+4} + O(e_n^8), \qquad (12)$$
where $A = 1 - H(0)$ and $\gamma_i = \gamma_i(m, H(0), H'(0), H''(0), H'''(0), C_1, C_2, \ldots, C_7)$.
In order to attain higher order convergence, the coefficients of $e_n^2$ and $e_n^3$ should simultaneously vanish. That is possible only for the following values of $H(0)$ and $H'(0)$:
$$H(0) = 1, \quad H'(0) = 2. \qquad (13)$$
By using the above values in (12), we obtain that:
$$z_n - \alpha = \frac{(9 + m - H''(0))C_1^3 - 2m C_1 C_2}{2m^3}\, e_n^4 + \sum_{i=1}^{3} \gamma_i e_n^{i+4} + O(e_n^8). \qquad (14)$$
Expansion of $f(z_n)$ about $\alpha$ leads us to the expression:
$$f(z_n) = \frac{f^{(m)}(\alpha)}{m!} (z_n - \alpha)^m \left(1 + C_1(z_n - \alpha) + C_2(z_n - \alpha)^2 + O\left((z_n - \alpha)^3\right)\right). \qquad (15)$$
From (5), (9), and (15), we get the expressions of $v$ and $w$ as:
$$v = \frac{(9 + m - H''(0))C_1^3 - 2m C_1 C_2}{2m^3}\, e_n^3 + \sum_{i=1}^{4} \tau_i e_n^{i+3} + O(e_n^8) \qquad (16)$$
and:
$$w = \frac{(9 + m - H''(0))C_1^2 - 2m C_2}{2m^2}\, e_n^2 + \sum_{i=1}^{5} \varsigma_i e_n^{i+2} + O(e_n^8), \qquad (17)$$
where $\tau_i$ and $\varsigma_i$ are some expressions of $m, H''(0), H'''(0), C_1, C_2, \ldots, C_7$.
Expanding the function $G(u, w)$ in a neighborhood of the origin $(0, 0)$ by Taylor series:
$$G(u, w) \approx G_{00}(0,0) + u\,G_{10}(0,0) + \frac{1}{2} u^2 G_{20}(0,0) + w\big(G_{01}(0,0) + u\,G_{11}(0,0)\big), \qquad (18)$$
where $G_{ij}(0,0) = \frac{\partial^{i+j}}{\partial u^i \partial w^j} G(u, w)\big|_{(0,0)}$.
Then, by substituting (5), (16), (17), and (18) into the last step of Scheme (4), we obtain the error equation:
$$e_{n+1} = \frac{1}{2m^3}\big({-1} + G_{00}(0,0)\big)\, C_1 \big(2m C_2 - (9 + m - H''(0)) C_1^2\big)\, e_n^4 + \sum_{i=1}^{3} \xi_i e_n^{i+4} + O(e_n^8), \qquad (19)$$
where $\xi_i = \xi_i(m, H''(0), H'''(0), G_{00}, G_{01}, G_{10}, G_{20}, G_{11}, C_1, C_2, \ldots, C_7)$.
It is clear from Equation (19) that we will obtain at least fifth order convergence if $G_{00}(0,0) = 1$. Moreover, we can use this value in $\xi_1 = 0$ to obtain:
$$G_{10}(0,0) = 2. \qquad (20)$$
By using $G_{00}(0,0) = 1$ and (20) in $\xi_2 = 0$, the following equation is obtained:
$$C_1\big(2mC_2 - C_1^2(9 + m - H''(0))\big)\Big(2mC_2\big({-1} + G_{01}(0,0)\big) + C_1^2\big(11 + m({-1} + G_{01}(0,0)) - (9 - H''(0))G_{01}(0,0) + G_{20}(0,0)\big)\Big) = 0, \qquad (21)$$
which further yields:
$$G_{01}(0,0) = 1, \quad G_{20}(0,0) = 0, \quad \text{and} \quad H''(0) = -2. \qquad (22)$$
Using the above values in (19), the final error equation is given by:
$$\begin{aligned} e_{n+1} = \frac{1}{360 m^6}\Big(& 360 m^3\big((47+5m)C_2^3 - 6m C_3^2 - 10m C_2 C_4\big) + 120 m^3 C_1\big((623+78m)C_2 C_3 - 12m C_5\big) \\ & - 60 m^2 C_1^3 C_3\big(1861 + 1025m + 78m^2 + 12H'''(0)\big) \\ & + 10 m C_1^4 C_2\big(32383 + 9911m^2 + 558m^3 + 515H'''(0) + 396G_{11}(0,0) + 36m(900 + 6H'''(0) + G_{11}(0,0))\big) \\ & - 60 m^2 C_1^2\big(6m(67+9m)C_4 + C_2^2(3539 + 1870m + 135m^2 + 24H'''(0) + 6G_{11}(0,0))\big) \\ & - C_1^6\big(95557 + 20605m + 978m^4 + 2765H'''(0) + 10890G_{11}(0,0) + m^2(90305 + 600H'''(0) + 90G_{11}(0,0)) \\ &\quad + 5m(32383 + 515H'''(0) + 396G_{11}(0,0))\big)\Big)\, e_n^7 + O(e_n^8). \qquad (23) \end{aligned}$$
Hence, the seventh order convergence is established. □

Forms of the Weight Function

Numerous special cases of the family (4) are generated based on the forms of weight functions H ( u ) and G ( u , w ) that satisfy the conditions of Theorem 1. However, we restrict ourselves to simple forms, which are given as follows:
I. Some particular forms of H ( u )
Case I(a). When H ( u ) is a polynomial weight function, e.g.,
$$H(u) = A_0 + A_1 u + A_2 u^2.$$
By using the conditions of Theorem 1, we get $A_0 = 1$, $A_1 = 2$, and $A_2 = -1$. Then, $H(u)$ becomes:
$$H(u) = 1 + 2u - u^2.$$
Case I(b). When H ( u ) is a rational weight function, e.g.,
$$H(u) = \frac{1 + A_0 u}{A_1 + A_2 u}.$$
Using the conditions of Theorem 1, we get that $A_0 = \frac{5}{2}$, $A_1 = 1$, and $A_2 = \frac{1}{2}$. Therefore,
$$H(u) = \frac{2 + 5u}{2 + u}.$$
Case I(c). When H ( u ) is a rational weight function, e.g.,
$$H(u) = \frac{1 + A_0 u + A_1 u^2}{1 + A_2 u}.$$
Using the conditions of Theorem 1, we get $A_0 = 3$, $A_1 = 1$, and $A_2 = 1$. $H(u)$ becomes:
$$H(u) = \frac{1 + 3u + u^2}{1 + u}.$$
Case I(d). When H ( u ) is a rational function of the form:
$$H(u) = \frac{1 + A_0 u}{1 + A_1 u + A_2 u^2}.$$
Using the conditions of Theorem 1, we get $A_0 = 1$, $A_1 = -1$, and $A_2 = 3$. Then,
$$H(u) = \frac{1 + u}{1 - u + 3u^2}.$$
II. Some particular forms of G ( u , w )
Case II(a). When G ( u , w ) is a polynomial weight function, e.g.,
$$G(u, w) = A_0 + A_1 u + A_2 u^2 + (A_3 + A_4 u + A_5 u^2)\, w.$$
Using the conditions of Theorem 1, we get $A_0 = 1$, $A_1 = 2$, $A_2 = 0$, and $A_3 = 1$. Therefore, $G(u, w)$ becomes:
$$G(u, w) = 1 + 2u + (1 + A_4 u + A_5 u^2)\, w,$$
where $A_4$ and $A_5$ are free parameters.
Case II(b). When G ( u , w ) is a rational weight function, e.g.,
$$G(u, w) = \frac{B_0 + B_1 u + B_2 w + B_3 u w}{1 + A_1 u + A_2 w + A_3 u w}.$$
Using the conditions of Theorem 1, we have $B_0 = 1$, $B_1 = 2$, $B_2 = 1$, $A_1 = 0$, and $A_2 = 0$. Then,
$$G(u, w) = \frac{1 + 2u + w + B_3 u w}{1 + A_3 u w},$$
where $A_3$ and $B_3$ are free parameters.
Case II(c). When $G(u, w)$ is the sum of two weight functions $H_1(u)$ and $H_2(w)$. Let $H_1(u) = A_0 + A_1 u + A_2 u^2$ and $H_2(w) = B_0 + B_1 w + B_2 w^2$; then $G(u, w)$ becomes:
$$G(u, w) = A_0 + A_1 u + A_2 u^2 + B_0 + B_1 w + B_2 w^2.$$
By using the conditions of Theorem 1, we get:
$$G(u, w) = 1 + 2u + w + B_2 w^2,$$
where B 2 is a free parameter.
Case II(d). When G ( u , w ) is the sum of two rational weight functions, that is:
$$G(u, w) = \frac{A_0 + A_1 u}{1 + A_2 u} + \frac{B_0 + B_1 w}{1 + B_2 w}.$$
By using the conditions of Theorem 1, we obtain that:
$$G(u, w) = 2u + \frac{1}{1 - w}.$$
Case II(e). When $G(u, w)$ is the product of two weight functions, that is:
$$G(u, w) = \frac{A_0 + A_1 u}{1 + A_2 u} \times \frac{B_0 + B_1 w}{1 + B_2 w}.$$
Using the conditions of Theorem 1, we get:
$$G(u, w) = (1 + 2u)(1 + w).$$
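All four forms of $H(u)$ listed above share the local behavior $H(0) = 1$, $H'(0) = 2$, $H''(0) = -2$ required by Theorem 1. A quick numerical check with central differences (a throwaway script of our own, not part of the paper) confirms this:

```python
cases = {
    "I(a)": lambda u: 1 + 2*u - u**2,
    "I(b)": lambda u: (2 + 5*u) / (2 + u),
    "I(c)": lambda u: (1 + 3*u + u**2) / (1 + u),
    "I(d)": lambda u: (1 + u) / (1 - u + 3*u**2),
}

h = 1e-5
for name, H in cases.items():
    H0 = H(0.0)
    H1 = (H(h) - H(-h)) / (2*h)           # central difference ~ H'(0)
    H2 = (H(h) - 2*H0 + H(-h)) / h**2     # second difference ~ H''(0)
    print(name, round(H0, 6), round(H1, 6), round(H2, 4))
```

Indeed, expanding each rational form in a series gives $H(u) = 1 + 2u - u^2 + O(u^3)$, so the four cases differ only in their third and higher order terms, which Theorem 1 leaves free.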

3. Complex Dynamics of Methods

Here, our aim is to analyze the complex dynamics of the proposed methods based on the visual display of the basins of attraction of the zeros of a polynomial $p(z)$ in the complex plane. Analysis of the complex dynamical behavior gives important information about the convergence and stability of an iterative scheme. The idea of analyzing complex dynamics was introduced by Vrscay and Gilbert [23]. Later on, many researchers used this concept in their work; see, for example, [24,25,26] and the references therein. We choose some of the special cases (corresponding to the above forms of $H(u)$ and $G(u, w)$) of family (4) to analyze the basins. Let us choose the combinations of special Cases II(c) (for $B_2 = 1$) and II(d) with I(a), I(b), I(c), and I(d) in (4) and denote the corresponding new methods by NM-i(j), i = 1, 2 and j = a, b, c, d.
We take the initial point as $z_0 \in D$, where $D$ is a rectangular region in $\mathbb{C}$ containing all the roots of $p(z) = 0$. The iterative methods starting at a point $z_0$ in the rectangle either converge to a zero of the function $p(z)$ or eventually diverge. The stopping criterion for convergence is a tolerance of $10^{-3}$, up to a maximum of 25 iterations. If the desired tolerance is not achieved in 25 iterations, we do not continue and declare that the iterative method starting at the point $z_0$ does not converge to any root. The strategy adopted is the following: a distinct color is assigned to the basin of attraction of each zero. If the iteration starting from the initial point $z_0$ converges, the point is painted with the color assigned to the attained zero; if it fails to converge in 25 iterations, the point is painted black.
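The coloring strategy just described can be sketched in a few lines. For brevity, this illustrative script (our own simplification, with hypothetical names) iterates only the first, Traub–Steffensen step (3) on $p(z) = (z^2 - 1)^2$ rather than the full seventh order methods:

```python
def basin_index(z0, roots, m, beta=0.01, tol=1e-3, max_iter=25):
    """Index of the root attained from z0, or -1 (painted black) on failure."""
    def p(z):
        q = z * z - 1.0
        return q * q                  # p(z) = (z^2 - 1)^2, double roots at +-1
    z = z0
    for _ in range(max_iter):
        pz = p(z)
        t = z + beta * pz
        if t == z:
            break
        dd = (p(t) - pz) / (t - z)    # divided difference p[z, t]
        if dd == 0:
            return -1
        z_new = z - m * pz / dd       # Traub-Steffensen step (3)
        if abs(z_new - z) < tol:
            z = z_new
            break
        z = z_new
    for k, r in enumerate(roots):
        if abs(z - r) < 1e-2:
            return k
    return -1

# 40x40 sample of the [-2, 2] x [-2, 2] rectangle (the text uses 400x400)
grid = [[basin_index(complex(-2 + 4*i/39, -2 + 4*j/39), (1.0, -1.0), m=2)
         for i in range(40)] for j in range(40)]
```

Mapping each index in `grid` to a color (and -1 to black) reproduces pictures of the kind shown in the figures; the full NM methods only change the iteration inside the loop.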
To view complex geometry, we analyze the attraction basins for the methods NM-1(a–d) and NM-2(a–d) on the following two polynomials:
Problem 1.
In the first example, we consider the polynomial $p_1(z) = (z^2 - 1)^2$, which has the zeros $\{\pm 1\}$, each with multiplicity two. In this case, we use a grid of $400 \times 400$ points in a rectangle $D \subset \mathbb{C}$ of size $[-2, 2] \times [-2, 2]$ and assign the color green to each initial point in the basin of attraction of the zero $1$ and the color red to each point in the basin of attraction of the zero $-1$. Basins obtained for the methods NM-1(a–d) and NM-2(a–d) are shown in Figure 1, Figure 2, Figure 3 and Figure 4 corresponding to $\beta = 0.01, 0.002$. Observing the behavior of the methods, we see that the method NM-2(d) possesses fewer divergent points and therefore has better stability than the remaining methods. Notice that there is only a small difference in the basins for the rest of the methods with the same value of β. Notice also that the basins become wider as the parameter β assumes smaller values.
Problem 2.
Let us take the polynomial $p_2(z) = (z^3 + z)^3$, which has the zeros $\{0, \pm i\}$, each with multiplicity three. To see the dynamical view, we consider a rectangle $D = [-2, 2] \times [-2, 2] \subset \mathbb{C}$ with $400 \times 400$ grid points and allocate the colors red, green, and blue to each point in the basin of attraction of $-i$, 0, and $i$, respectively. Basins for this problem are shown in Figure 5, Figure 6, Figure 7 and Figure 8 corresponding to the parameter choices $\beta = 0.01, 0.002$ in the proposed methods. Observing the behavior, we see that again, the method NM-2(d) has better convergence behavior due to fewer divergent points. Furthermore, observe that in each case, the basins become larger for the smaller values of β. The basins of the remaining methods other than NM-2(d) are almost the same.
From the graphics, we can easily observe the behavior and applicability of any method. If we choose an initial guess $z_0$ in a region where different basins of attraction touch each other, it is difficult to predict which root the iterative method starting from $z_0$ will attain. Therefore, the choice of $z_0$ in such a region is not a good one. Neither the black regions nor the regions where different basins meet are suitable for choosing the initial guess $z_0$ when a particular root is required. The most intricate geometry lies between the basins of attraction, and this corresponds to the cases where the method is more demanding with respect to the initial point. We conclude this section with the remark that the convergence behavior of the proposed techniques depends on the value of the parameter $\beta$: the smaller the value of β, the better the convergence of the method.

4. Numerical Examples and Discussion

In this section, we implement the special cases NM-1(a–d) and NM-2(a–d) of the family (4) that were considered in the previous section to obtain the zeros of nonlinear functions. This not only illustrates the methods practically, but also serves to test the validity of the theoretical results. The theoretical order of convergence is confirmed by calculating the computational order of convergence (COC) using the formula (see [27]):
$$\text{COC} = \frac{\ln\left|(x_{n+1} - \alpha)/(x_n - \alpha)\right|}{\ln\left|(x_n - \alpha)/(x_{n-1} - \alpha)\right|}.$$
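In code, the COC is a one-liner applied to three consecutive iterates. The helper below is our own illustration; as a sanity check, it recovers the expected order two for Newton's method on a simple root:

```python
import math

def coc(x2, x1, x0, alpha):
    """Computational order of convergence from three consecutive iterates."""
    e2, e1, e0 = abs(x2 - alpha), abs(x1 - alpha), abs(x0 - alpha)
    return math.log(e2 / e1) / math.log(e1 / e0)

# Newton's method on f(x) = x^2 - 2 (simple root sqrt(2), order two expected).
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

order = coc(xs[3], xs[2], xs[1], math.sqrt(2.0))
```

For the seventh order methods of this paper, the same computation requires multiple-precision iterates, since in double precision the errors underflow after one or two steps.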
The performance is compared with some well-known higher order multiple-root solvers such as the sixth order methods by Geum et al. [8,9], which are expressed below:
First method by Geum et al. [8]:
$$\begin{aligned} y_n &= x_n - m\,\frac{f(x_n)}{f'(x_n)}, \\ x_{n+1} &= y_n - Q_f(u, s)\,\frac{f(y_n)}{f'(y_n)}, \end{aligned}$$
where $u = \left(\frac{f(y_n)}{f(x_n)}\right)^{1/m}$ and $s = \left(\frac{f'(y_n)}{f'(x_n)}\right)^{1/(m-1)}$, and $Q_f : \mathbb{C}^2 \to \mathbb{C}$ is a holomorphic function in a neighborhood of the origin $(0, 0)$. The authors also studied various forms of the function $Q_f$ leading to sixth order convergence of this scheme. We consider the following four special cases of the function $Q_f(u, s)$ and denote the corresponding methods by GKN-1(j), j = a, b, c, d:
(a)
$$Q_f(u, s) = m\left(1 + 2(m-1)(u - s) - 4us + s^2\right)$$
(b)
$$Q_f(u, s) = m\left(1 + 2(m-1)(u - s) - u^2 - 2us\right)$$
(c)
$$Q_f(u, s) = \frac{m + a u}{1 + b u + c s + d u s},$$
where $a = \frac{2m}{m-1}$, $b = 2 - 2m$, $c = \frac{2(2 - 2m + m^2)}{m-1}$, and $d = 2m(m-1)$
(d)
$$Q_f(u, s) = \frac{m + a_1 u}{1 + b_1 u + c_1 u^2} \cdot \frac{1}{1 + d_1 s},$$
where $a_1 = \frac{2m(4m^4 - 16m^3 + 31m^2 - 30m + 13)}{(m-1)(4m^2 - 8m + 7)}$, $b_1 = \frac{4(2m^2 - 4m + 3)}{(m-1)(4m^2 - 8m + 7)}$,
$c_1 = \frac{4m^2 - 8m + 3}{4m^2 - 8m + 7}$, and $d_1 = 2(m-1)$.
Second method by Geum et al. [9]:
$$\begin{aligned} y_n &= x_n - m\,\frac{f(x_n)}{f'(x_n)}, \\ z_n &= x_n - m\,G_f(u)\,\frac{f(x_n)}{f'(x_n)}, \\ x_{n+1} &= x_n - m\,K_f(u, v)\,\frac{f(x_n)}{f'(x_n)}, \end{aligned}$$
where $u = \left(\frac{f(y_n)}{f(x_n)}\right)^{1/m}$ and $v = \left(\frac{f(z_n)}{f(x_n)}\right)^{1/m}$. The function $G_f : \mathbb{C} \to \mathbb{C}$ is analytic in a neighborhood of 0, and $K_f : \mathbb{C}^2 \to \mathbb{C}$ is holomorphic in a neighborhood of $(0, 0)$. Numerous cases of $G_f$ and $K_f$ have been proposed in [9]. We consider the following four special cases and denote the corresponding methods by GKN-2(j), j = a, b, c, d:
(a)
$$G_f(u) = \frac{1 + u^2}{1 - u}, \qquad K_f(u, v) = \frac{1 + u^2 - v}{1 - u + (u - 2)v}$$
(b)
$$G_f(u) = 1 + u + 2u^2, \qquad K_f(u, v) = 1 + u + 2u^2 + (1 + 2u)v$$
(c)
$$G_f(u) = \frac{1 + u^2}{1 - u}, \qquad K_f(u, v) = 1 + u + 2u^2 + 2u^3 + 2u^4 + (2u + 1)v$$
(d)
$$G_f(u) = \frac{(2u - 1)(4u - 1)}{1 - 7u + 13u^2}, \qquad K_f(u, v) = \frac{(2u - 1)(4u - 1)}{1 - 7u + 13u^2 - (1 - 6u)v}$$
Computations were carried out in the programming package Mathematica with multiple-precision arithmetic. Numerical results shown in Table 1, Table 2, Table 3 and Table 4 include: (i) the number of iterations ($n$) required to converge to the solution, (ii) the values of the last three consecutive errors $e_n = |x_{n+1} - x_n|$, (iii) the computational order of convergence (COC), and (iv) the elapsed CPU time (CPU-time). The necessary iteration number ($n$) and elapsed CPU time are calculated by using $|x_{n+1} - x_n| + |f(x_n)| < 10^{-350}$ as the stopping criterion.
The convergence behavior of the family of iterative methods (4) is tested on the following problems:
Example 1.
(Eigenvalue problem). Finding the eigenvalues of a large square matrix is one of the difficult tasks in applied mathematics and engineering. Even finding the roots of the characteristic equation of a square matrix of order greater than four is a big challenge. Here, we consider the following $6 \times 6$ matrix:
$$M = \begin{pmatrix} 5 & 8 & 0 & 2 & 6 & 6 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 6 & 18 & 1 & 1 & 13 & 9 \\ 3 & 6 & 0 & 4 & 6 & 6 \\ 4 & 14 & 2 & 0 & 11 & 6 \\ 6 & 18 & 2 & 1 & 13 & 8 \end{pmatrix}.$$
The characteristic equation of the above matrix ( M ) is given as follows:
$$f_1(x) = x^6 - 12x^5 + 56x^4 - 130x^3 + 159x^2 - 98x + 24.$$
This function has one multiple zero, $\alpha = 1$, of multiplicity three. We choose the initial approximation $x_0 = 0.25$. Numerical results are shown in Table 1.
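That $\alpha = 1$ is a zero of multiplicity exactly three can be verified directly from the coefficients, since $f_1(1) = f_1'(1) = f_1''(1) = 0$ while $f_1'''(1) \neq 0$. A small script of our own (with ad hoc helper names) performs this check in exact integer arithmetic:

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial given as [a_n, ..., a_0] by Horner's rule."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

def poly_deriv(coeffs):
    """Coefficients of the derivative polynomial."""
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

f1 = [1, -12, 56, -130, 159, -98, 24]
d1 = poly_deriv(f1)
d2 = poly_deriv(d1)
d3 = poly_deriv(d2)

# f1, f1', f1'', f1''' evaluated at x = 1
values = [poly_eval(p, 1) for p in (f1, d1, d2, d3)]
```

The first three values vanish and the fourth does not, confirming the stated multiplicity $m = 3$.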
Example 2.
(Kepler's equation). Let us consider Kepler's equation:
$$f_2(x) = x - \alpha \sin(x) - K = 0, \quad 0 \leq \alpha < 1, \ 0 \leq K \leq \pi.$$
A numerical study for different values of the parameters $\alpha$ and $K$ has been performed in [28]. As a particular example, let us take $\alpha = \frac{1}{4}$ and $K = \frac{\pi}{5}$. Considering this particular case four times with the same values of the parameters, the required nonlinear function is:
$$f_2(x) = \left(x - \frac{1}{4}\sin x - \frac{\pi}{5}\right)^4.$$
This function has one multiple zero at $\alpha = 0.80926328\ldots$ of multiplicity four. The required zero is calculated using the initial approximation $x_0 = 1$. Numerical results are displayed in Table 2.
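Even the basic Traub–Steffensen step (3) alone locates this zero when supplied with the multiplicity $m = 4$; the snippet below is our own double-precision illustration (the paper's tables use multiple-precision arithmetic, which double precision cannot match for a quartic zero):

```python
import math

def f2(x):
    return (x - 0.25 * math.sin(x) - math.pi / 5.0) ** 4

x, m, beta = 1.0, 4, 0.01
for _ in range(20):
    fx = f2(x)
    t = x + beta * fx
    if fx == 0.0 or t == x:     # beta*f(x) below one ulp: stop
        break
    dd = (f2(t) - fx) / (t - x)  # divided difference f[x, t]
    step = m * fx / dd
    x -= step
    if abs(step) < 1e-12:
        break
```

Because $f_2$ is a fourth power, $f_2(x)$ underflows rapidly near the root, which is exactly why the guard on `t == x` is needed in double precision.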
Example 3.
(Isentropic supersonic flow). Consider the isentropic supersonic flow around a sharp expansion corner. The relationship between the Mach number before the corner (i.e., $M_1$) and after the corner (i.e., $M_2$) is given by (see [3]):
$$\delta = b^{1/2}\left(\tan^{-1}\left(\frac{M_2^2 - 1}{b}\right)^{1/2} - \tan^{-1}\left(\frac{M_1^2 - 1}{b}\right)^{1/2}\right) - \left(\tan^{-1}\left(M_2^2 - 1\right)^{1/2} - \tan^{-1}\left(M_1^2 - 1\right)^{1/2}\right),$$
where $b = \frac{\gamma + 1}{\gamma - 1}$ and $\gamma$ is the specific heat ratio of the gas.
As a particular case study, the equation is solved for $M_2$ given that $M_1 = 1.5$, $\gamma = 1.4$, and $\delta = 10^{\circ}$. Then, we have that:
$$\tan^{-1}\frac{\sqrt{5}}{2} - \tan^{-1}\sqrt{x^2 - 1} + \sqrt{6}\left(\tan^{-1}\sqrt{\frac{x^2 - 1}{6}} - \tan^{-1}\frac{1}{2}\sqrt{\frac{5}{6}}\right) - \frac{11}{63} = 0,$$
where $x = M_2$.
Considering this particular case three times with the same values of the parameters, the required nonlinear function is:
$$f_3(x) = \left(\tan^{-1}\frac{\sqrt{5}}{2} - \tan^{-1}\sqrt{x^2 - 1} + \sqrt{6}\left(\tan^{-1}\sqrt{\frac{x^2 - 1}{6}} - \tan^{-1}\frac{1}{2}\sqrt{\frac{5}{6}}\right) - \frac{11}{63}\right)^3.$$
This function has one multiple zero at $\alpha = 1.8411027704\ldots$ of multiplicity three. The required zero is calculated using the initial approximation $x_0 = 1.5$. Numerical results are shown in Table 3.
Example 4.
Next, consider the standard nonlinear test function:
$$f_4(x) = \left(-\frac{1}{x^2} + x + \cos\frac{\pi x}{2} + 1\right)^4,$$
which has a multiple zero at $\alpha = 0.72855964390156\ldots$ of multiplicity four. Numerical results are shown in Table 4 with the initial guess $x_0 = 0.5$.
Example 5.
Consider the standard test function, which is given as (see [8]):
$$f_5(x) = \left(x - \sqrt{3}\,x^3 \cos\frac{\pi x}{6} + \frac{1}{x^2 + 1} - \frac{11}{5} + 4\sqrt{3}\right)(x - 2)^4.$$
The multiple zero of function f 5 is α = 2 with multiplicity five. We choose the initial approximation x 0 = 1.5 for obtaining the zero of the function. Numerical results are exhibited in Table 5.
Example 6.
Consider another standard test function, which is given as:
$$f_6(x) = \sin\left(\frac{x\pi}{3}\right) e^{x^2 - 2x - 3} \cos(x - 3) + \frac{x^2 - 9}{27}\, e^{2(x-3)} - \frac{x - 3}{28}\left(x^3 + 1\right) + x \cos\frac{x\pi}{6},$$
which has a zero α = 3 of multiplicity three. Let us choose the initial approximation x 0 = 3.5 for obtaining the zero of the function. Numerical results are shown in Table 6.
Example 7.
Finally, consider yet another standard function:
$$f_7(x) = \cos(x^2 + 1)\, x \log\left(x^2 - \pi + 2\right) + \frac{1}{3}\left(x^2 + 1 - \pi\right).$$
The zero of function f 7 is α = 1.4632625480850 with multiplicity four. We choose the initial approximation x 0 = 1.3 to find the zero of this function. Numerical results are displayed in Table 7.
It is clear from the numerical results shown in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 that the accuracy of the successive approximations increases as the iterations proceed, which shows the stable nature of the methods. Moreover, the present methods, like the existing ones, show consistent convergence behavior. We display the value zero of $|e_n|$ at the iteration for which $|x_{n+1} - x_n| + |f(x_n)| < 10^{-350}$. The values of the computational order of convergence exhibited in the penultimate column of each table verify the theoretical order of convergence. However, this is not true for the existing methods GKN-1(a–d) and GKN-2(a) in Example 2. The entries in the last column of each table show that the new methods use less computing time than the existing methods, which confirms the computationally efficient nature of the new methods. Similar numerical tests, performed on many problems of different types, have confirmed these conclusions to a large extent.
We conclude the analysis with an important problem regarding the choice of the initial approximation $x_0$ in the practical application of iterative methods. The expected convergence speed of iterative methods is achieved in practice only if the selected initial approximation is sufficiently close to the root. Therefore, when applying the methods for solving nonlinear equations, special care must be taken in guessing close initial approximations. Recently, an efficient procedure for obtaining a sufficiently close initial approximation has been proposed in [29]. For example, applying the procedure to the function of Example 1 in the interval [0, 1.5] using the statements:
f[x_] = x^6 - 12x^5 + 56x^4 - 130x^3 + 159x^2 - 98x + 24; a = 0; b = 1.5;
k = 1; x0 = 0.5*(a + b + Sign[f[a]]*NIntegrate[Tanh[k*f[x]], {x, a, b}])
in the programming package Mathematica yields the close initial approximation $x_0 = 1.04957$ to the root $\alpha = 1$.
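The same computation is easy to reproduce outside Mathematica; the sketch below (our own, with a plain composite Simpson rule standing in for NIntegrate) evaluates $x_0 = \frac{1}{2}\big(a + b + \operatorname{sign}(f(a)) \int_a^b \tanh(k f(x))\, dx\big)$:

```python
import math

def f(x):
    return x**6 - 12*x**5 + 56*x**4 - 130*x**3 + 159*x**2 - 98*x + 24

def simpson(g, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

a, b, k = 0.0, 1.5, 1.0
x0 = 0.5 * (a + b + math.copysign(1.0, f(a))
            * simpson(lambda x: math.tanh(k * f(x)), a, b))
```

The computed `x0` agrees with the value 1.04957 quoted above, providing an initial guess close to the triple root $\alpha = 1$.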

5. Conclusions

In the present work, we have designed a class of seventh order derivative-free iterative techniques for computing multiple zeros of nonlinear functions with known multiplicity. The convergence analysis shows seventh order convergence under standard assumptions on the nonlinear function whose zeros we seek. Some special cases of the class were stated. They were applied to solve some nonlinear equations and were also compared with existing techniques. Comparison of the numerical results showed that the presented derivative-free methods are good competitors of the existing sixth order techniques that require derivative evaluations. We conclude with the remark that, unlike methods that use derivatives, methods without derivatives are rare in the literature. Moreover, such algorithms are good alternatives to Newton-like iterations in situations where derivatives are difficult to compute or expensive to obtain.

Author Contributions

Methodology, J.R.S.; writing, review and editing, J.R.S.; investigation, D.K.; data curation, D.K.; conceptualization, I.K.A.; formal analysis, I.K.A.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008. [Google Scholar]
  2. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
  3. Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992. [Google Scholar]
  4. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365. [Google Scholar] [CrossRef]
  5. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269. [Google Scholar] [CrossRef]
  6. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. On developing fourth-order optimal families of methods for multiple roots and their dynamics. Appl. Math. Comput. 2015, 265, 520–532. [Google Scholar] [CrossRef]
  7. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R.; Kanwar, V. An optimal fourth-order family of methods for multiple roots and its dynamics. Numer. Algorithms 2016, 71, 775–796. [Google Scholar] [CrossRef]
  8. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400. [Google Scholar] [CrossRef]
  9. Geum, Y.H.; Kim, Y.I.; Neta, B. A sixth–order family of three–point modified Newton–like multiple–root finders and the dynamics behind their extraneous fixed points. Appl. Math. Comput. 2016, 283, 120–140. [Google Scholar] [CrossRef]
  10. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135. [Google Scholar] [CrossRef]
  11. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292. [Google Scholar]
  12. Liu, B.; Zhou, X. A new family of fourth-order methods for multiple roots of nonlinear equations. Nonlinear Anal. Model. Control 2013, 18, 143–152. [Google Scholar]
  13. Neta, B. Extension of Murakami’s high-order nonlinear solver to multiple roots. Int. J. Comput. Math. 2010, 87, 1023–1031. [Google Scholar] [CrossRef]
  14. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774. [Google Scholar] [CrossRef]
  15. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881. [Google Scholar] [CrossRef]
  16. Soleymani, F.; Babajee, D.K.R. Computing multiple zeros using a class of quartically convergent methods. Alex. Eng. J. 2013, 52, 531–541. [Google Scholar] [CrossRef]
  17. Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353. [Google Scholar] [CrossRef]
  18. Victory, H.D.; Neta, B. A higher order method for multiple zeros of nonlinear functions. Int. J. Comput. Math. 1983, 12, 329–335. [Google Scholar] [CrossRef]
  19. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206. [Google Scholar] [CrossRef]
  20. Zhou, X.; Chen, X.; Song, Y. Families of third and fourth order methods for multiple roots of nonlinear equations. Appl. Math. Comput. 2013, 219, 6030–6038. [Google Scholar] [CrossRef]
  21. Osada, N. An optimal multiple root-finding method of order three. J. Comput. Appl. Math. 1994, 51, 131–133. [Google Scholar] [CrossRef]
  22. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982. [Google Scholar]
  23. Vrscay, E.R.; Gilbert, W.J. Extraneous fixed points, basin boundaries and chaotic dynamics for Schröder and König rational iteration functions. Numer. Math. 1988, 52, 1–16. [Google Scholar] [CrossRef]
  24. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46. [Google Scholar] [CrossRef]
  25. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
  26. Lotfi, T.; Sharifi, S.; Salimi, M.; Siegmund, S. A new class of three-point methods with optimal convergence order eight and its dynamics. Numer. Algorithms 2015, 68, 261–288. [Google Scholar] [CrossRef]
  27. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  28. Danby, J.M.A.; Burkardt, T.M. The solution of Kepler’s equation. I. Celest. Mech. 1983, 40, 95–107. [Google Scholar] [CrossRef]
  29. Yun, B.I. A non-iterative method for solving non-linear equations. Appl. Math. Comput. 2008, 198, 691–699. [Google Scholar] [CrossRef]
Figure 1. Basins of attraction for NM-1 (for β = 0.01) in polynomial p1(z).
Figure 2. Basins of attraction for NM-1 (for β = 0.002) in polynomial p1(z).
Figure 3. Basins of attraction for NM-2 (for β = 0.01) in polynomial p1(z).
Figure 4. Basins of attraction for NM-2 (for β = 0.002) in polynomial p1(z).
Figure 5. Basins of attraction for NM-1 (for β = 0.01) in polynomial p2(z).
Figure 6. Basins of attraction for NM-1 (for β = 0.002) in polynomial p2(z).
Figure 7. Basins of attraction for NM-2 (for β = 0.01) in polynomial p2(z).
Figure 8. Basins of attraction for NM-2 (for β = 0.002) in polynomial p2(z).
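Basin plots such as those in the figures above are produced by iterating the method from each point of a grid in the complex plane and coloring the point by the root it converges to. The following is a minimal sketch of that classification step using the classical second-order Traub-Steffensen iteration w = z + βf(z), z ← z − f(z)/f[w, z] (not the seventh-order NM-1/NM-2 schemes) on the illustrative polynomial f(z) = z² − 1; the helper name `basin_index` and the test polynomial are our own choices, not the article's.

```python
def basin_index(z, f, roots, beta=0.01, tol=1e-8, max_iter=100):
    """Index of the root that the iteration started at z converges to, or -1."""
    for _ in range(max_iter):
        fz = f(z)
        if abs(fz) < tol:
            # classify the starting point by the nearest root
            return min(range(len(roots)), key=lambda i: abs(z - roots[i]))
        w = z + beta * fz
        df = (f(w) - fz) / (w - z)  # first-order divided difference f[w, z]
        if df == 0:
            return -1               # undefined step: mark as divergent
        z = z - fz / df
    return -1                       # no convergence within max_iter

f = lambda z: z * z - 1
roots = [1.0, -1.0]
print(basin_index(1.5 + 0.0j, f, roots))   # converges to +1 (index 0)
print(basin_index(-1.5 + 0.0j, f, roots))  # converges to -1 (index 1)
```

Sweeping `basin_index` over a rectangular grid and mapping each returned index to a color yields figures of the kind shown above; smaller β (e.g. 0.002 versus 0.01) changes the basin boundaries.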
Table 1. Comparison of the performance of methods for Example 1.

Methods    n   |e_{n-3}|       |e_{n-2}|       |e_{n-1}|        COC      CPU-Time
GKN-1(a)   4   5.46 × 10^-3    2.40 × 10^-14   1.78 × 10^-82    6.0000   0.05475
GKN-1(b)   4   5.65 × 10^-3    3.22 × 10^-14   1.13 × 10^-81    6.0000   0.05670
GKN-1(c)   4   5.41 × 10^-3    2.80 × 10^-14   5.59 × 10^-82    6.0000   0.05856
GKN-1(d)   4   7.52 × 10^-3    4.85 × 10^-13   3.78 × 10^-74    6.0000   0.05504
GKN-2(a)   4   2.85 × 10^-3    1.57 × 10^-16   4.32 × 10^-96    6.0000   0.07025
GKN-2(b)   4   9.28 × 10^-3    1.58 × 10^-12   4.13 × 10^-71    6.0000   0.05854
GKN-2(c)   4   7.11 × 10^-3    1.87 × 10^-13   6.53 × 10^-77    6.0000   0.06257
GKN-2(d)   5   1.03 × 10^-5    3.87 × 10^-30   1.07 × 10^-176   6.0000   0.07425
NM-1(a)    4   1.62 × 10^-3    1.79 × 10^-19   3.58 × 10^-131   7.0000   0.04675
NM-1(b)    4   1.62 × 10^-3    1.85 × 10^-19   4.63 × 10^-131   6.9990   0.05073
NM-1(c)    4   1.62 × 10^-3    1.96 × 10^-19   5.92 × 10^-131   6.9998   0.05355
NM-1(d)    4   1.60 × 10^-3    1.02 × 10^-19   4.36 × 10^-133   6.9990   0.05077
NM-2(a)    4   1.37 × 10^-3    5.56 × 10^-20   1.02 × 10^-134   6.9997   0.05435
NM-2(b)    4   1.37 × 10^-3    5.77 × 10^-20   1.35 × 10^-134   6.9998   0.05454
NM-2(c)    4   1.38 × 10^-3    5.98 × 10^-20   1.77 × 10^-134   6.9996   0.05750
NM-2(d)    4   1.34 × 10^-3    2.97 × 10^-20   8.00 × 10^-137   6.9998   0.05175
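The COC column in the tables reports the computational order of convergence, estimated from three consecutive absolute errors by the standard formula (cf. Weerakoon and Fernando [27]): COC ≈ ln(|e_{n+1}|/|e_n|) / ln(|e_n|/|e_{n-1}|). A minimal sketch of that computation (the helper name `coc` and the sample errors are ours, assuming an asymptotic constant of 1):

```python
import math

def coc(e_prev, e_curr, e_next):
    """Computational order of convergence from three consecutive absolute errors."""
    return math.log(e_next / e_curr) / math.log(e_curr / e_prev)

# A seventh-order method roughly satisfies e_{n+1} ~ C * |e_n|^7;
# with C = 1 the estimate recovers the order:
print(round(coc(1e-2, 1e-14, 1e-98), 4))  # → 7.0
```

Values close to 6 or 7 in the tables therefore confirm the theoretical sixth- and seventh-order convergence of the GKN and NM methods, respectively.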
Table 2. Comparison of the performance of methods for Example 2.

Methods    n   |e_{n-3}|       |e_{n-2}|       |e_{n-1}|        COC      CPU-Time
GKN-1(a)   5   1.90 × 10^-25   2.55 × 10^-76   1.19 × 10^-228   3.0018   2.1405
GKN-1(b)   5   1.74 × 10^-25   1.94 × 10^-76   5.28 × 10^-229   3.0018   2.1445
GKN-1(c)   5   2.31 × 10^-25   4.56 × 10^-76   6.79 × 10^-228   3.0018   2.1835
GKN-1(d)   5   1.75 × 10^-25   1.97 × 10^-76   5.51 × 10^-229   3.0018   2.1797
GKN-2(a)   5   3.52 × 10^-19   5.28 × 10^-117  1.22 × 10^-233   1.1923   1.8047
GKN-2(b)   4   9.33 × 10^-9    6.67 × 10^-53   8.88 × 10^-318   6.0000   1.4452
GKN-2(c)   4   3.74 × 10^-9    9.33 × 10^-56   2.27 × 10^-335   6.0000   1.4415
GKN-2(d)   4   1.64 × 10^-8    2.04 × 10^-51   7.50 × 10^-309   6.0000   1.4492
NM-1(a)    3   1.91 × 10^-1    5.70 × 10^-10   6.59 × 10^-70    7.0000   0.9845
NM-1(b)    3   1.91 × 10^-1    6.02 × 10^-10   1.04 × 10^-69    7.0000   0.9650
NM-1(c)    3   1.91 × 10^-1    6.32 × 10^-10   1.58 × 10^-69    7.0000   0.9570
NM-1(d)    3   1.90 × 10^-1    9.62 × 10^-11   2.48 × 10^-76    7.0540   0.8590
NM-2(a)    3   1.91 × 10^-1    5.70 × 10^-10   6.59 × 10^-70    7.0000   0.9842
NM-2(b)    3   1.91 × 10^-1    6.02 × 10^-10   1.04 × 10^-69    7.0000   0.9607
NM-2(c)    3   1.91 × 10^-1    6.32 × 10^-10   1.58 × 10^-69    7.0000   0.9767
NM-2(d)    3   1.91 × 10^-1    9.68 × 10^-11   2.63 × 10^-76    7.0540   0.7460
Table 3. Comparison of the performance of methods for Example 3.

Methods    n   |e_{n-3}|       |e_{n-2}|       |e_{n-1}|        COC      CPU-Time
GKN-1(a)   4   2.17 × 10^-8    4.61 × 10^-25   1.01 × 10^-151   6.0000   1.3047
GKN-1(b)   4   2.17 × 10^-8    4.60 × 10^-25   2.27 × 10^-151   6.0000   1.2852
GKN-1(c)   4   2.11 × 10^-8    4.21 × 10^-25   1.03 × 10^-151   6.0000   1.3203
GKN-1(d)   4   1.77 × 10^-8    2.48 × 10^-25   2.68 × 10^-151   6.0000   1.2970
GKN-2(a)   4   4.83 × 10^-7    1.36 × 10^-41   6.84 × 10^-249   6.0000   1.2382
GKN-2(b)   4   4.90 × 10^-7    2.89 × 10^-41   1.21 × 10^-246   6.0000   1.2440
GKN-2(c)   4   4.88 × 10^-7    2.22 × 10^-41   1.98 × 10^-247   6.0000   1.2422
GKN-2(d)   4   4.89 × 10^-7    3.22 × 10^-41   2.62 × 10^-246   6.0000   1.2577
NM-1(a)    4   7.85 × 10^-9    1.56 × 10^-60   0                7.0000   1.0274
NM-1(b)    4   7.85 × 10^-9    1.58 × 10^-60   0                7.0000   1.0272
NM-1(c)    4   7.89 × 10^-9    1.60 × 10^-60   0                7.0000   1.0231
NM-1(d)    4   7.84 × 10^-9    1.31 × 10^-60   0                7.0000   1.0235
NM-2(a)    4   7.69 × 10^-9    1.35 × 10^-60   0                7.0000   1.0398
NM-2(b)    4   7.69 × 10^-9    1.37 × 10^-60   0                7.0000   1.0742
NM-2(c)    4   7.69 × 10^-9    1.38 × 10^-60   0                7.0000   1.0467
NM-2(d)    4   7.68 × 10^-9    1.13 × 10^-60   0                7.0000   1.0192
Table 4. Comparison of the performance of methods for Example 4.

Methods    n   |e_{n-3}|       |e_{n-2}|       |e_{n-1}|        COC      CPU-Time
GKN-1(a)   4   7.20 × 10^-6    1.80 × 10^-30   4.39 × 10^-178   6.0000   0.1017
GKN-1(b)   4   7.21 × 10^-6    1.85 × 10^-30   5.32 × 10^-178   5.9999   0.0977
GKN-1(c)   4   7.42 × 10^-6    2.52 × 10^-30   3.84 × 10^-177   5.9999   0.1055
GKN-1(d)   4   8.83 × 10^-6    1.30 × 10^-29   1.34 × 10^-172   5.9999   0.1015
GKN-2(a)   4   2.15 × 10^-5    8.22 × 10^-28   2.60 × 10^-162   5.9999   0.1132
GKN-2(b)   4   2.39 × 10^-5    4.22 × 10^-27   1.27 × 10^-157   5.9999   0.1052
GKN-2(c)   4   2.33 × 10^-5    2.57 × 10^-27   4.61 × 10^-159   5.9999   0.1055
GKN-2(d)   4   2.43 × 10^-5    5.31 × 10^-27   5.83 × 10^-157   5.9999   0.1095
NM-1(a)    4   2.87 × 10^-6    1.03 × 10^-37   8.12 × 10^-258   6.9999   0.0720
NM-1(b)    4   2.88 × 10^-6    1.06 × 10^-37   9.60 × 10^-258   6.9999   0.0724
NM-1(c)    4   2.88 × 10^-6    1.08 × 10^-37   1.13 × 10^-257   6.9999   0.0722
NM-1(d)    4   2.83 × 10^-6    7.39 × 10^-38   6.09 × 10^-259   6.9999   0.0782
NM-2(a)    4   2.80 × 10^-6    8.55 × 10^-38   2.15 × 10^-258   6.9999   0.0732
NM-2(b)    4   2.80 × 10^-6    8.74 × 10^-37   2.54 × 10^-258   6.9999   0.0723
NM-2(c)    4   2.80 × 10^-6    8.93 × 10^-38   3.00 × 10^-258   6.9999   0.0746
NM-2(d)    4   2.76 × 10^-6    6.09 × 10^-38   1.56 × 10^-259   6.9999   0.0782
Table 5. Comparison of the performance of methods for Example 5.

Methods    n   |e_{n-3}|       |e_{n-2}|       |e_{n-1}|        COC      CPU-Time
GKN-1(a)   4   1.20 × 10^-5    6.82 × 10^-31   2.31 × 10^-182   6.0000   0.5820
GKN-1(b)   4   1.20 × 10^-5    6.86 × 10^-31   2.40 × 10^-182   6.0000   0.5860
GKN-1(c)   4   1.21 × 10^-5    7.72 × 10^-31   5.18 × 10^-182   6.0000   0.5937
GKN-1(d)   4   1.58 × 10^-5    1.00 × 10^-29   6.51 × 10^-175   6.0000   0.5832
GKN-2(a)   4   3.17 × 10^-5    1.64 × 10^-28   3.21 × 10^-168   6.0000   0.7120
GKN-2(b)   4   3.50 × 10^-5    6.90 × 10^-28   4.05 × 10^-164   6.0000   0.6992
GKN-2(c)   4   3.41 × 10^-5    4.42 × 10^-28   2.09 × 10^-165   6.0000   0.6915
GKN-2(d)   4   3.54 × 10^-5    8.45 × 10^-28   1.56 × 10^-163   6.0000   0.6934
NM-1(a)    4   2.35 × 10^-6    1.81 × 10^-40   2.92 × 10^-279   7.0000   0.3712
NM-1(b)    4   2.35 × 10^-6    1.84 × 10^-40   3.31 × 10^-279   7.0000   0.3360
NM-1(c)    4   2.35 × 10^-6    1.87 × 10^-40   3.74 × 10^-279   7.0000   0.3555
NM-1(d)    4   2.33 × 10^-6    1.41 × 10^-40   4.23 × 10^-280   7.0000   0.3633
NM-2(a)    4   2.25 × 10^-6    1.34 × 10^-40   3.65 × 10^-280   7.0000   0.3585
NM-2(b)    4   2.25 × 10^-6    1.37 × 10^-40   4.15 × 10^-280   7.0000   0.3592
NM-2(c)    4   2.25 × 10^-6    1.39 × 10^-40   4.70 × 10^-280   7.0000   0.3791
NM-2(d)    4   2.24 × 10^-6    1.05 × 10^-40   5.20 × 10^-281   7.0000   0.3467
Table 6. Comparison of the performance of methods for Example 6.

Methods    n   |e_{n-3}|       |e_{n-2}|       |e_{n-1}|        COC      CPU-Time
GKN-1(a)   4   5.04 × 10^-4    6.20 × 10^-22   2.15 × 10^-129   6.0000   3.8670
GKN-1(b)   4   9.53 × 10^-4    4.36 × 10^-20   3.98 × 10^-118   6.0000   4.1287
GKN-1(c)   4   1.37 × 10^-4    2.87 × 10^-25   2.43 × 10^-149   5.9999   3.8866
GKN-1(d)   4   2.53 × 10^-3    5.53 × 10^-17   6.03 × 10^-99    6.0000   4.5195
GKN-2(a)   5   4.22 × 10^-7    8.51 × 10^-41   9.95 × 10^-81    5.4576   5.5310
GKN-2(b)   4   7.24 × 10^-3    4.58 × 10^-14   2.94 × 10^-81    6.0000   3.9647
GKN-2(c)   4   4.43 × 10^-3    1.12 × 10^-15   2.90 × 10^-91    5.9995   3.7772
GKN-2(d)   8   8.78 × 10^-10   1.75 × 10^-55   1.09 × 10^-329   6.0000   6.2194
NM-1(a)    4   8.78 × 10^-3    1.35 × 10^-15   2.76 × 10^-105   7.0000   1.9372
NM-1(b)    4   3.50 × 10^-6    4.38 × 10^-41   2.10 × 10^-285   7.0000   1.5625
NM-1(c)    4   3.57 × 10^-6    5.15 × 10^-41   6.69 × 10^-285   7.0000   1.5662
NM-1(d)    4   1.83 × 10^-6    2.66 × 10^-43   3.70 × 10^-301   7.0000   1.5788
NM-2(a)    4   3.42 × 10^-6    3.63 × 10^-41   5.51 × 10^-286   7.0000   1.5900
NM-2(b)    4   3.50 × 10^-6    4.36 × 10^-41   2.05 × 10^-285   7.0000   1.5585
NM-2(c)    4   3.57 × 10^-6    5.13 × 10^-41   6.53 × 10^-285   7.0000   1.6405
NM-2(d)    4   1.82 × 10^-6    2.62 × 10^-43   3.30 × 10^-301   7.0000   1.3444
Table 7. Comparison of the performance of methods for Example 7.

Methods    n   |e_{n-3}|       |e_{n-2}|       |e_{n-1}|        COC      CPU-Time
GKN-1(a)   4   6.61 × 10^-5    8.80 × 10^-25   4.90 × 10^-144   6.0000   1.7305
GKN-1(b)   4   6.87 × 10^-5    1.15 × 10^-24   2.57 × 10^-143   6.0000   1.7545
GKN-1(c)   4   6.35 × 10^-5    7.67 × 10^-25   2.38 × 10^-144   6.0000   1.7150
GKN-1(d)   4   1.15 × 10^-4    8.83 × 10^-23   1.82 × 10^-131   6.0000   1.7852
GKN-2(a)   4   5.57 × 10^-6    8.57 × 10^-32   1.14 × 10^-186   6.0000   1.6405
GKN-2(b)   4   1.27 × 10^-4    1.23 × 10^-22   1.02 × 10^-130   6.0000   1.7813
GKN-2(c)   4   7.49 × 10^-5    2.89 × 10^-24   9.62 × 10^-141   6.0000   1.7382
GKN-2(d)   4   1.18 × 10^-3    9.34 × 10^-17   2.31 × 10^-95    6.0000   1.9150
NM-1(a)    4   5.19 × 10^-5    1.05 × 10^-28   1.42 × 10^-194   7.0000   1.0077
NM-1(b)    4   5.29 × 10^-5    1.23 × 10^-28   4.63 × 10^-194   7.0000   0.9062
NM-1(c)    4   5.37 × 10^-5    1.41 × 10^-28   1.23 × 10^-193   7.0000   1.0040
NM-1(d)    4   2.73 × 10^-5    7.07 × 10^-31   5.57 × 10^-210   7.0000   1.0054
NM-2(a)    4   5.14 × 10^-5    9.79 × 10^-29   8.91 × 10^-195   7.0000   0.8867
NM-2(b)    4   5.24 × 10^-5    1.16 × 10^-28   3.02 × 10^-194   7.0000   0.9802
NM-2(c)    4   5.33 × 10^-5    1.34 × 10^-28   8.30 × 10^-194   7.0000   0.9412
NM-2(d)    4   2.60 × 10^-5    5.06 × 10^-31   5.39 × 10^-211   7.0000   0.9142

Cite as: Sharma, J.R.; Kumar, D.; Argyros, I.K. An Efficient Class of Traub-Steffensen-Like Seventh Order Multiple-Root Solvers with Applications. Symmetry 2019, 11, 518. https://doi.org/10.3390/sym11040518
