Optimal One-Point Iterative Function Free from Derivatives for Multiple Roots

Deepak Kumar 1, Janak Raj Sharma 2 and Ioannis K. Argyros 3,*
1 Department of Mathematics, Chandigarh University, Gharuan, Mohali, Punjab 140413, India; deepak.babbi@gmail.com
2 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal, Punjab 148106, India; jrshira@yahoo.co.in
3 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Correspondence: iargyros@cameron.edu


Introduction
Finding the roots of nonlinear functions is a significant problem in numerical mathematics and has numerous applications in different parts of the applied sciences [1,2]. In this paper, we consider iterative techniques to locate a multiple root α of multiplicity m (that is, g^(j)(α) = 0 for j = 0, 1, 2, . . . , m − 1 and g^(m)(α) ≠ 0) of a nonlinear equation g(x) = 0.
The most widely used method for finding a multiple root of g(x) = 0 is the quadratically convergent modified Newton's method (MNM) (see [3]):

x_{n+1} = x_n − m g(x_n)/g′(x_n).    (1)

In the literature, there are many iterative methods of different orders of convergence for approximating the multiple roots of g(x) = 0 (see, for example, [4-19]). Such methods require the evaluation of derivatives of either first order, or first and second order. The motivation for developing higher order methods is closely related to the Kung-Traub conjecture [20], which establishes an upper bound on the order of convergence attainable with a given number of functional evaluations: ρ_c ≤ 2^(µ−1), where ρ_c is the order of convergence and µ is the number of function evaluations per iteration. Methods that attain this bound are called optimal methods.
In contrast to methods that require derivative evaluations, derivative-free techniques for multiple roots are exceptionally rare. The principal difficulty in constructing such techniques is establishing their convergence order. Derivative-free techniques are important in circumstances where the derivative of the function g is hard or costly to evaluate. One such derivative-free scheme is the classical Traub-Steffensen iteration [21], which replaces the derivative g′ in the traditional Newton's iteration with a suitable approximation based on a difference quotient,

g[w_n, x_n] = (g(w_n) − g(x_n))/(w_n − x_n),

the first order divided difference, with w_n = x_n + βg(x_n), β ∈ R − {0}.
The modified Newton's method (1) then becomes the modified Traub-Steffensen method

x_{n+1} = x_n − m g(x_n)/g[w_n, x_n].    (2)

The modified Traub-Steffensen method (2) is a noticeable improvement over Newton's method, because it maintains quadratic convergence without using any derivative.
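As an illustration, the modified Traub-Steffensen iteration (2) can be sketched in a few lines of Python. This is a minimal sketch in double precision, not the authors' Mathematica implementation; the function name, the guard against underflow of βg(x_n), and the test function (x − 1)²(x + 2) are our own.

```python
def modified_traub_steffensen(g, x0, m, beta=1e-4, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - m*g(x_n)/g[w_n, x_n] with w_n = x_n + beta*g(x_n)."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        w = x + beta * gx
        if w == x:  # beta*g(x) has fallen below machine resolution; stop
            return x
        dd = (g(w) - gx) / (w - x)  # first order divided difference g[w_n, x_n]
        x_new = x - m * gx / dd
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Double root at x = 1 of g(x) = (x - 1)^2 (x + 2), so multiplicity m = 2
root = modified_traub_steffensen(lambda x: (x - 1)**2 * (x + 2), x0=1.3, m=2)
```

The underflow guard matters for multiple roots: near α, g(x_n) is O(e_n^m), so w_n and x_n coincide in floating point well before e_n reaches machine epsilon.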
The principal objective of this work is to design a general class of derivative-free multiple-root methods, including (2), by utilising the weight-function approach. Particular methods are obtained when specific weight functions (which may depend on one or more parameters) are chosen. The rest of the paper is organised as follows. In Section 2, a second-order scheme is developed and its convergence is analysed. In Section 3, basins of attraction are displayed to check the stability of the methods. Numerical experiments on various equations are carried out in Section 4 to demonstrate the applicability and efficiency of the schemes introduced here. Concluding remarks are given in Section 5.

Formulation of Method
Given a known multiplicity m > 1, we consider the following one-point scheme for multiple roots, built on the Traub-Steffensen approximation (2):

x_{n+1} = x_n − G(Θ_n),    (3)

where the function G : C → C is analytic in a neighbourhood of zero and Θ_n = g(x_n)/g[w_n, x_n]. In the sequel, we study the convergence of the proposed iterative scheme (3). For clarity, the results are obtained separately for different cases of the multiplicity m.

Theorem 1. Assume that g : C → C is an analytic function in a domain containing a multiple zero (say, α) of multiplicity m = 2. Suppose that the initial point x_0 is close enough to α; then the convergence order of formula (3) is at least 2, provided that G(0) = 0, G′(0) = 2 and |G″(0)| < ∞.
Proof. Assume that the error at the n-th stage is e_n = x_n − α. Using the Taylor expansion of g(x_n) about α and keeping in mind that g(α) = 0, g′(α) = 0 and g″(α) ≠ 0, we have

g(x_n) = (g″(α)/2!) e_n^2 (1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + · · · ),    (4)

where C_k = (2!/(2 + k)!) g^(2+k)(α)/g″(α) for k ∈ N. Similarly, the Taylor expansion of g(w_n) about α is

g(w_n) = (g″(α)/2!) e_{w_n}^2 (1 + C_1 e_{w_n} + C_2 e_{w_n}^2 + C_3 e_{w_n}^3 + · · · ),    (5)

where e_{w_n} = w_n − α = e_n + β (g″(α)/2!) e_n^2 (1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + · · · ). By using (4) and (5), we obtain

Θ_n = (1/2) e_n − (1/4)(C_1 + β g″(α)/2) e_n^2 + O(e_n^3).    (6)

Expanding the weight function G(Θ_n) in a neighbourhood of the origin by Taylor series, we have

G(Θ_n) = G(0) + G′(0)Θ_n + (1/2) G″(0)Θ_n^2 + O(Θ_n^3).

Using this expression together with (6) in (3), it follows that

e_{n+1} = −G(0) + (1 − (1/2) G′(0)) e_n + O(e_n^2).    (7)

In order to obtain second order convergence, the constant term and the coefficient of e_n in (7) must vanish simultaneously. This is possible only for the following values of G(0) and G′(0):

G(0) = 0,  G′(0) = 2.

By using these values in (7), the error relation is given by

e_{n+1} = (1/8)(4C_1 + 2βg″(α) − G″(0)) e_n^2 + O(e_n^3).

Thus, the theorem is proven.
Theorem 2. Assume that g : C → C is an analytic function in a domain containing a multiple zero (say, α) of multiplicity m = 3. Suppose that the initial point x_0 is close enough to α; then the convergence order of formula (3) is at least 2, provided that G(0) = 0, G′(0) = 3 and |G″(0)| < ∞.
Proof. Suppose that the error at the n-th iteration is e_n = x_n − α. Using the Taylor expansion of g(x_n) about α and keeping in mind that g(α) = 0, g′(α) = 0, g″(α) = 0 and g‴(α) ≠ 0, we have

g(x_n) = (g‴(α)/3!) e_n^3 (1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + · · · ),    (10)

where C_k = (3!/(3 + k)!) g^(3+k)(α)/g‴(α) for k ∈ N. Additionally, the Taylor expansion of g(w_n) about α is

g(w_n) = (g‴(α)/3!) e_{w_n}^3 (1 + C_1 e_{w_n} + C_2 e_{w_n}^2 + · · · ),    (11)

where e_{w_n} = w_n − α = e_n + β (g‴(α)/3!) e_n^3 (1 + C_1 e_n + C_2 e_n^2 + · · · ). By using (10) and (11), we obtain

Θ_n = (1/3) e_n − (C_1/9) e_n^2 + O(e_n^3).    (12)

Expanding the weight function G(Θ_n) in a neighbourhood of the origin by Taylor series and inserting the resulting expression together with (12) into (3), we obtain

e_{n+1} = −G(0) + (1 − (1/3) G′(0)) e_n + O(e_n^2).    (13)

In order to obtain second order convergence, the constant term and the coefficient of e_n in (13) must vanish simultaneously. This is possible only for the following values of G(0) and G′(0):

G(0) = 0,  G′(0) = 3.

By using these values in (13), the error relation is given by

e_{n+1} = (1/18)(6C_1 − G″(0)) e_n^2 + O(e_n^3).

Thus, the theorem is proven.
Remark 1. We observe from the above results that the number of conditions on G(Θ_n) needed to attain quadratic convergence of method (3) is 2 in both cases m = 2 and m = 3. These cases also satisfy the common conditions G(0) = 0 and G′(0) = m. Nevertheless, their error equations differ from each other, since the parameter β does not appear in the quadratic error term for m = 3. Indeed, for every m ≥ 3 the e_n^2 term of the error equation is free of β. We shall prove this fact in the next theorem.
For multiplicity m ≥ 3, we prove the order of convergence of scheme (3) in the following theorem:

Theorem 3. Let g : C → C be an analytic function in a region enclosing a multiple zero (say, α) of multiplicity m ≥ 3. Assume that the initial guess x_0 is sufficiently close to α; then the iteration scheme defined by (3) has second order of convergence, provided that G(0) = 0, G′(0) = m and |G″(0)| < ∞.
Proof. Let the error at the n-th iteration be e_n = x_n − α. Using the Taylor expansion of g(x_n) about α, we have

g(x_n) = (g^(m)(α)/m!) e_n^m (1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + · · · ),    (16)

where C_k = (m!/(m + k)!) g^(m+k)(α)/g^(m)(α) for k ∈ N. Using (16) in w_n = x_n + βg(x_n), we obtain

e_{w_n} = w_n − α = e_n + β (g^(m)(α)/m!) e_n^m (1 + C_1 e_n + C_2 e_n^2 + · · · ).    (17)

The Taylor expansion of g(w_n) about α is given as

g(w_n) = (g^(m)(α)/m!) e_{w_n}^m (1 + C_1 e_{w_n} + C_2 e_{w_n}^2 + · · · ).    (18)

From (16)-(18), it follows that

g[w_n, x_n] = (g^(m)(α)/m!) e_n^(m−1) (m + (m + 1)C_1 e_n + O(e_n^2)),    (19)

since w_n − x_n = O(e_n^m) contributes to the divided difference only at higher order when m ≥ 3. By using (16) and (19), we obtain

Θ_n = (1/m) e_n − (C_1/m^2) e_n^2 + O(e_n^3).    (20)

Expanding the weight function G(Θ_n) in a neighbourhood of the origin by Taylor series and inserting the resulting expression together with (20) into scheme (3), we obtain the error

e_{n+1} = −G(0) + (1 − (1/m) G′(0)) e_n + O(e_n^2).    (21)

In order to obtain second order convergence, the constant term and the coefficient of e_n in (21) must vanish simultaneously. That is possible only for the following values of G(0) and G′(0):

G(0) = 0,  G′(0) = m.    (22)

By using these values in (21), the error relation is given by

e_{n+1} = (1/(2m^2))(2mC_1 − G″(0)) e_n^2 + O(e_n^3).

Hence, second order convergence is established. This completes the proof of the theorem.

Remark 2.
It is important to note that the parameter β, which is used in w_n, appears in the quadratic error term only in the case m = 2, not for m ≥ 3. For m ≥ 3 we have observed that this parameter appears in the coefficients of e_n^3 and higher order terms. However, such terms are not needed to establish the required quadratic order of convergence.
Numerous methods of the family (3) can be generated from forms of the function G(Θ_n) that satisfy the conditions (22). However, we restrict the choices to low order polynomials and simple rational functions. Accordingly, the following simple forms, satisfying the conditions (22), are chosen, where a_1, a_2, a_3, a_4 and a_5 are free parameters.
The method corresponding to each of the above forms is defined accordingly; the resulting methods are denoted M1-M8. For example, the third form yields Method 3 (M3):

x_{n+1} = x_n − mΘ_n/(1 + a_3 mΘ_n).
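As an illustrative sketch (in Python, not the authors' Mathematica code), the family (3) can be implemented with a pluggable weight function G. Here we instantiate the rational weight of M3, G(Θ) = mΘ/(1 + a_3 mΘ), which satisfies G(0) = 0 and G′(0) = m for any a_3; the test function and parameter values below are our own choices.

```python
def one_point_family(g, G, x0, beta=1e-4, tol=1e-10, max_iter=50):
    """Scheme (3): x_{n+1} = x_n - G(Theta_n), Theta_n = g(x_n)/g[w_n, x_n]."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        w = x + beta * gx
        if w == x:  # beta*g(x) below machine resolution; stop
            return x
        theta = gx / ((g(w) - gx) / (w - x))
        x_new = x - G(theta)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

m, a3 = 2, 1/10
G3 = lambda t: m * t / (1 + a3 * m * t)   # M3 weight: G(0) = 0, G'(0) = m
# Double root at x = 1 of g(x) = (x - 1)^2 (x + 2)
root = one_point_family(lambda x: (x - 1)**2 * (x + 2), G3, x0=1.3)
```

Any other weight satisfying (22), e.g. the linear choice G(Θ) = mΘ recovering (2), can be passed in the same way.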

Basins of Attraction
Our aim here is to analyse the new methods using a graphical tool, the basins of attraction of the zeros of a polynomial p(z) in the Argand plane. Examining the basins of attraction of the roots produced by an iterative scheme provides significant insight into its convergence. This idea was first introduced by Vrscay and Gilbert [22]. More recently, many researchers have utilised this concept in their work; see, for instance, [23,24] and the references therein. We consider different choices of the function G(Θ_n) of family (3) to study the basins of attraction.
We select an initial point z_0 belonging to D, where D is a rectangular region in C containing all the roots of the equation g(z) = 0. An iterative method starting at a point z_0 ∈ D may converge to a zero of the function g(z) or diverge. To assess the basins, we take a tolerance of 10^(−3) as the stopping criterion for convergence, allowing a maximum of 25 iterations. If this tolerance is not achieved within the allowed iterations, the process is terminated and the iteration starting from z_0 is declared divergent. While drawing the basins, the following convention is adopted: a colour is allotted to each initial guess z_0 according to the basin of attraction of the zero to which the iteration converges, and if the method fails to converge within the allowed number of iterations, the point is painted black.
We draw the basins of attraction obtained by applying the methods M1-M8 (choosing β = 10^(−2), 10^(−4), 10^(−8)) to the following two polynomials: Problem 1. In the first example, we consider the polynomial P_1(z) = (z^5 − 2z^4 − 2z^3 + 8z^2 − 8z)^2, whose zeros {±2, 0, 1 ± i} all have multiplicity 2. In this case, we use a grid of 500 × 500 points in a rectangle D ⊂ C of size [−3, 3] × [−3, 3] and assign the colours cyan, green, yellow, red and blue to initial points in the basins of attraction of the zeros "−2", "1 + i", "0", "1 − i" and "2", respectively. The basins obtained for the methods M1-M8 are shown in Figures 1-3, corresponding to β = 10^(−2), 10^(−4), 10^(−8). Observing the behaviour of the methods, we see that M5 possesses the smallest number of divergent points, followed by M4 and M6. On the contrary, M3 and M7 have the highest numbers of divergent points, followed by M2 and M8. Notice that the basins become wider as the parameter β assumes smaller values. From the graphics, we can judge the behaviour and suitability of a method under given conditions. If we pick an initial point z_0 in a zone where several basins of attraction meet one another, it is difficult to predict which root will be reached by the iterative method starting at z_0; hence, such a zone is not a good choice for z_0. Both the black zones and the zones with many different colours are unsuitable for choosing z_0 when a specific root is sought. The most intricate pictures arise when the boundaries between basins of attraction are highly complex; these correspond to the cases in which the method is most demanding with respect to the initial point and its dynamical behaviour is least predictable. We close this section with the remark that the convergence behaviour of the proposed methods depends on the value of the parameter β: the smaller the value of β, the better the convergence of the method.
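A minimal sketch of the basin computation described above, in plain Python and without the plotting. We classify a single starting point with the modified Traub-Steffensen iteration (2) for m = 2 (rather than all of M1-M8), using the same tolerance 10^(−3) and cap of 25 iterations; the grid sweep and colouring are left as a comment.

```python
ROOTS = [-2.0, 0.0, 2.0, 1 + 1j, 1 - 1j]

def p1(z):
    # Problem 1: every zero has multiplicity 2
    return (z**5 - 2*z**4 - 2*z**3 + 8*z**2 - 8*z)**2

def classify(z0, beta=1e-4, m=2, max_iter=25, tol=1e-3):
    """Return the root whose basin contains z0, or None (painted black)."""
    z = z0
    for _ in range(max_iter):
        for r in ROOTS:
            if abs(z - r) < tol:
                return r
        gz = p1(z)
        w = z + beta * gz
        if w == z:  # beta*p1(z) below machine resolution
            break
        z = z - m * gz / ((p1(w) - gz) / (w - z))
    return None

# To reproduce a basin picture: sweep a 500x500 grid over [-3,3]x[-3,3]
# and colour each point according to classify(x + 1j*y).
```

Python's built-in complex arithmetic makes the same iteration formula work unchanged in the Argand plane.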

Numerical Results
This section is devoted to demonstrating the convergence behaviour of the presented family. In this regard, we consider the special cases M1-M8 of the proposed class, choosing a_1 = 1/10, a_2 = 1/4, a_3 = 1/10, a_4 = 1/10 and a_5 = 1/5. Performance is compared with the classical modified Newton's method (1). We select four test problems for comparison.
To check the theoretical order of convergence, we obtain the computational order of convergence (COC) using the formula (see [25]):

COC ≈ ln(|x_{n+1} − x_n|/|x_n − x_{n−1}|) / ln(|x_n − x_{n−1}|/|x_{n−1} − x_{n−2}|).

All computations were performed in the programming package Mathematica using multiple-precision arithmetic with 4096 significant digits. The numerical results displayed in Tables 1-4 include: (i) the number of iterations (n) required to converge to the solution such that |x_{n+1} − x_n| + |f(x_n)| < 10^(−100), (ii) the values of the last three consecutive errors |x_{n+1} − x_n|, (iii) the residual error |f(x_n)| and (iv) the computational order of convergence (COC).
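The COC formula above can be checked with a short Python sketch. Here we feed it the iterates of classical Newton's method on a simple root, for which the COC should be close to 2; this is an illustrative example of ours, not one of the paper's test problems.

```python
import math

def coc(xs):
    """Computational order of convergence from the last four iterates."""
    d1 = abs(xs[-3] - xs[-4])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-1] - xs[-2])
    return math.log(d3 / d2) / math.log(d2 / d1)

# Iterates of Newton's method on f(x) = x^2 - 2 (simple root sqrt(2), order 2)
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))

rho = coc(xs)  # close to 2 for a quadratically convergent sequence
```

In double precision only a few iterations are usable before |x_{n+1} − x_n| underflows to zero, which is why the paper's tables are computed with 4096-digit arithmetic.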
Example 1 (Eigenvalue problem). One of the challenging tasks in linear algebra is computing the eigenvalues of a large square matrix, and finding the zeros of the characteristic polynomial of a square matrix of order greater than 4 is equally demanding. We consider the following 9 × 9 matrix.
The characteristic polynomial of the matrix (M) is given as This function has one multiple zero α = 3 with multiplicity 4. We choose the initial approximation x_0 = 2.75 and obtain the numerical results shown in Table 1.

Example 2 (Beam designing model). We consider a beam positioning problem (see [26]) in which a beam of length r units leans against the edge of a cubical box with sides of length 1 unit, such that one end of the beam touches the wall and the other end touches the floor, as shown in Figure 7.
The problem is: what should be the distance along the floor from the base of the wall to the bottom of the beam? Let y be the distance along the beam from the floor to the edge of the box, and let x be the distance from the bottom of the box to the bottom of the beam. Then, for a given value of r, we have The positive solution of the equation is a double root x = 2. We consider the initial guess x_0 = 1.8 to find this root. Numerical results of the various methods are shown in Table 2.
Example 3 (Van der Waals equation of state). The van der Waals equation of state, (P + a_1 n^2/V^2)(V − na_2) = nRT, explains the behaviour of a real gas by introducing into the ideal gas equation two additional parameters, a_1 and a_2, specific to each gas. In order to determine the volume V of the gas in terms of the remaining parameters, we are required to solve a nonlinear equation in V.
Given the constants a_1 and a_2 of a particular gas, one can find values of n, P and T such that this equation has three real roots. Using particular values, we obtain the nonlinear function f_1(x) = x^3 − 5.22x^2 + 9.0825x − 5.2675, which has three roots: a multiple zero α = 1.75 of multiplicity two and a simple zero ξ = 1.72. We seek the multiple zero α = 1.75 and consider the initial guess x_0 = 1.8 to obtain this zero. The numerical results so obtained are shown in Table 3.

Conclusions
In the foregoing work, we have developed a class of optimal second order derivative-free methods for solving nonlinear univariate equations. Convergence analysis has established second order convergence under standard assumptions on the nonlinear functions whose zeros are sought. Some special cases of the class were discussed. Their stability was tested by analysing the complex geometry of their basins of attraction. We have noticed from the graphics that the basins become wider and wider as the parameter β assumes smaller values. The methods were employed to solve some nonlinear univariate equations and were also compared with existing techniques. We conclude the paper with the remark that derivative-free techniques may prove to be good alternatives to Newton-like methods in cases where derivatives are expensive to evaluate.

Author Contributions:
All authors contributed equally and worked together to develop the present manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.

Conflicts of Interest:
The authors declare no conflict of interest.