Approximating Multiple Roots of Applied Mathematical Problems Using Iterative Techniques

1 Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
2 Department of Mathematics, Guru Nanak Dev University, Amritsar 143005, India
3 Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, 46022 València, Spain
* Author to whom correspondence should be addressed.
Axioms 2023, 12(3), 270; https://doi.org/10.3390/axioms12030270
Submission received: 4 January 2023 / Revised: 8 February 2023 / Accepted: 23 February 2023 / Published: 6 March 2023
(This article belongs to the Special Issue Approximation Theory and Related Applications II)

Abstract

In this study, we propose a new family of iterative methods for approximating roots with known multiplicity of nonlinear equations. We identified a gap in the literature: few schemes can approximate multiple roots when the nonlinear operator is non-differentiable. Therefore, we present iterative methods that do not use any derivative of the nonlinear operator in their iterative expressions. With the new technique, we obtain better numerical results for the Planck radiation, Van der Waals, beam designing, and isothermal continuous stirred tank reactor problems. Divided difference and weight function approaches are adopted for the construction of our schemes. The convergence order is studied thoroughly in Theorems 1 and 2 for multiplicity p ≥ 2. The obtained numerical results illustrate preferable outcomes compared to the existing methods in terms of absolute residual errors, number of iterations, computational order of convergence (COC), and absolute error difference between two consecutive iterations.
MSC:
41A25; 41A58; 49M15; 65G99; 65H10

1. Introduction

Let f : ℂ → ℂ be a complex-valued function that possesses derivatives up to the required order. Let ker f = {α ∈ ℂ : f^{(j)}(α) = 0, j = 0, 1, 2, …, p − 1, and f^{(p)}(α) ≠ 0, p ∈ ℕ} be the kernel of the function f. This kernel consists of the roots of f, and p is the multiplicity of the root. Therefore, to find ker f, one should find the roots of the equation f(x) = 0. In most cases, it is almost impossible to find the exact roots, and one needs an iterative approach to approximate them. Several research papers provide iterative techniques for approximating a solution α of the nonlinear equation f(x) = 0 with multiplicity p > 1. The well-known modified Newton's method [1] is one of the simplest and most popular iterative methods for multiple roots, which is given by

x_{t+1} = x_t − p f(x_t)/f'(x_t),   t = 0, 1, 2, …    (1)
The convergence order of the modified Newton's method is quadratic for every p ≥ 1. Based on this method, many others (Hansen and Patrick [2], Osada [3], Neta [4], Sharifi et al. [5], Soleymani et al. [6], Zhou et al. [7], Li et al. [8], and the Chebyshev–Halley methods [9]) have been published; the theoretical treatment of iterative methods can be found in [10,11]. However, all these methods use the derivative of the function f, or even higher-order derivatives, in their iterative expressions.
We must account for the fact that, sometimes, the derivatives of f do not exist, or are too expensive to compute; this motivates derivative-free methods for multiple roots, which remain scarce in the literature. The reason is that it is not easy to retain the same convergence order as in the simple-root case, and the associated computations are hard and time-consuming. However, with the rapid development of digital computers, advanced computer languages, and software, the construction of derivative-free methods for multiple roots of nonlinear equations has become a new area of interest. In such methods, derivatives are usually replaced by first-order divided differences.
Traub–Steffensen's method [12] is one of the derivative-free methods, in which the derivative f'(x_t) in Equation (1) is replaced with the first-order divided difference f[η_t, x_t] = (f(η_t) − f(x_t))/(η_t − x_t), where η_t = x_t + γ f(x_t) and γ ∈ ℝ, γ ≠ 0. Therefore, Equation (1) becomes
x_{t+1} = x_t − p f(x_t)/f[η_t, x_t].    (2)
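As a concrete illustration (ours, not from the paper), iteration (2) takes only a few lines of Python; the function name, tolerances, and underflow guard below are our own choices:

```python
def traub_steffensen(f, x0, p, gamma=0.01, tol=1e-12, max_iter=100):
    """Derivative-free Traub-Steffensen iteration (2) for a zero of multiplicity p."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        eta = x + gamma * fx
        if eta == x:                      # gamma*f(x) underflowed; x cannot improve in double precision
            return x
        dd = (f(eta) - fx) / (eta - x)    # divided difference f[eta_t, x_t] replacing f'(x_t)
        x_next = x - p * fx / dd
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# double root of (x - 2)^2 approached from x0 = 1.8
root = traub_steffensen(lambda x: (x - 2.0)**2, 1.8, p=2)
```

Passing the multiplicity p is essential: for a double root, the classical Steffensen scheme (p = 1) would only converge linearly, while (2) restores quadratic convergence.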
Recently, some higher order derivative free methods have been proposed by different researchers on the basis of Traub–Steffensen’s method [12]. The methods by Kumar et al. [13], Behl et al. [14], Sharma et al. [15,16], Dong [17,18], and Kumar et al. [19] are some examples of derivative free methods.
Motivated by the existing derivative-free methods for multiple roots, we develop a new derivative-free multipoint iterative method. The advantages of our techniques are that they yield smaller residual errors, require fewer iterations, and display smaller error differences and a more stable computational order of convergence (COC). In addition, the proposed scheme uses as few functional evaluations as possible to achieve a high convergence order. The convergence order of the new family is four.
The rest of the paper is organized as follows. Section 2 includes the construction as well as the convergence analysis of the new family. Some special cases of the newly developed family are discussed in Section 3. In Section 4, various numerical examples are considered to confirm the theoretical results. Finally, concluding remarks are provided in Section 5.

2. Construction of Higher-Order Scheme

Here, we construct a fourth-order family of Steffensen-type methods [12] for multiple zeros (p ≥ 2), defined by

y_t = x_t − p f(x_t)/f[η_t, θ_t],
x_{t+1} = y_t − G(κ_t) f(x_t)/f[η_t, θ_t],    (3)
where η_t = x_t + γ f(x_t), θ_t = x_t − γ f(x_t), γ ∈ ℝ, γ ≠ 0, and p ≥ 2 is the known multiplicity of the required zero. In addition, G(κ_t) is a single-variable weight function and f[η_t, θ_t] = (f(η_t) − f(θ_t))/(η_t − θ_t) is a first-order divided difference. Moreover, κ_t = (f(y_t)/f(x_t))^{1/p} is a multi-valued function. We consider its principal analytic branch (see [20]), that is, the principal root given by κ_t = exp((1/p) log(f(y_t)/f(x_t))), with log(f(y_t)/f(x_t)) = log|f(y_t)/f(x_t)| + i arg(f(y_t)/f(x_t)) for −π < arg(f(y_t)/f(x_t)) ≤ π. The choice of arg(z) for z ∈ ℂ agrees with that of log(z) to be employed later in the numerical experiments section. In an analogous way, κ_t = |f(y_t)/f(x_t)|^{1/p} exp((i/p) arg(f(y_t)/f(x_t))) = O(e_t).
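To make the two-step structure of (3) concrete, here is a small Python sketch of the special case G(κ) = pκ/(1 − 2κ) (the weight named RM1 in Table 1). Variable names and stopping rules are ours, and we assume f(y_t)/f(x_t) stays nonnegative near a real root of even multiplicity, so the real p-th root coincides with the principal branch:

```python
def rm1(f, x0, p, gamma=0.01, tol=1e-12, max_iter=100):
    """Sketch of scheme (3) with the weight G(k) = p*k/(1 - 2k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        eta, theta = x + gamma * fx, x - gamma * fx
        if eta == theta:                            # gamma*f(x) underflowed in double precision
            return x
        dd = (f(eta) - f(theta)) / (eta - theta)    # central divided difference f[eta_t, theta_t]
        y = x - p * fx / dd                         # first substep of (3)
        kappa = (f(y) / fx) ** (1.0 / p)            # principal p-th root (real, nonnegative here)
        x_next = y - (p * kappa / (1 - 2 * kappa)) * fx / dd
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

Note that each iteration evaluates f only at x_t ± γf(x_t) and y_t besides f(x_t) itself; no derivative of f appears anywhere.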
In Theorem 1, we show that the constructed scheme (3) attains fourth-order convergence for every γ ≠ 0, without evaluating any derivative.
Theorem 1. 
Let f : D ⊂ ℂ → ℂ be analytic in a region D surrounding a required zero x*, which is a solution of multiplicity p = 2 of the equation f(x) = 0. Then, the scheme (3) has fourth-order convergence when

G(0) = 0,  G'(0) = 2,  G''(0) = 8,  |G'''(0)| < ∞,    (4)
and satisfies the following error equation
e_{t+1} = [ (G'''(0) − 66)/96 · B_1^3 − (1/4) B_1 B_2 ] e_t^4 + O(e_t^5),

where e_t = x_t − x* is the error at the t-th iteration and B_i = (2!/(2+i)!) · f^{(2+i)}(x*)/f^{(2)}(x*), i = 1, 2, 3, are the asymptotic error constants.
Proof. 
We develop the Taylor series expansions of f(x_t), f(η_t), and f(θ_t) around x = x*, under the assumptions f(x*) = f'(x*) = 0 and f^{(2)}(x*) ≠ 0, which are given, respectively, by

f(x_t) = (f^{(2)}(x*)/2!) e_t^2 [1 + B_1 e_t + B_2 e_t^2 + B_3 e_t^3 + B_4 e_t^4 + O(e_t^5)],    (5)
f(η_t) = (f^{(2)}(x*)/2!) e_t^2 [ 1 + (γΔ_2 + B_1) e_t + (1/4)(10 B_1 γΔ_2 + 4 B_2 + γ^2 Δ_2^2) e_t^2 + (1/4)(5 B_1 γ^2 Δ_2^2 + 6 B_1^2 γΔ_2 + 4(3 B_2 γΔ_2 + B_3)) e_t^3 + (1/8)(B_1(28 B_2 γΔ_2 + γ^3 Δ_2^3) + 4(4 B_2 γ^2 Δ_2^2 + 7 B_3 γΔ_2 + 2 B_4) + 14 B_1^2 γ^2 Δ_2^2) e_t^4 + O(e_t^5) ],    (6)
and
f(θ_t) = (f^{(2)}(x*)/2!) e_t^2 [ 1 + (B_1 − γΔ_2) e_t + (1/4)(−10 B_1 γΔ_2 + 4 B_2 + γ^2 Δ_2^2) e_t^2 + (1/4)(5 B_1 γ^2 Δ_2^2 − 6 B_1^2 γΔ_2 + 4(B_3 − 3 B_2 γΔ_2)) e_t^3 + (1/8)(14 B_1^2 γ^2 Δ_2^2 − B_1(28 B_2 γΔ_2 + γ^3 Δ_2^3) + 4(4 B_2 γ^2 Δ_2^2 − 7 B_3 γΔ_2 + 2 B_4)) e_t^4 + O(e_t^5) ],    (7)

where Δ_2 = f^{(2)}(x*).
By adopting expressions (5)–(7), we further obtain

f(x_t)/f[η_t, θ_t] = (1/2) e_t − (B_1/4) e_t^2 + (1/8)(3 B_1^2 − 4 B_2) e_t^3 + (1/16)(B_1(20 B_2 − γ^2 Δ_2^2) − 9 B_1^3 − 12 B_3) e_t^4 + O(e_t^5),    (8)
and, by using the expression (8) in the first step of (3), we have
y_t − x* = (B_1/2) e_t^2 + (B_2 − (3/4) B_1^2) e_t^3 + (1/8)(B_1(γ^2 Δ_2^2 − 20 B_2) + 9 B_1^3 + 12 B_3) e_t^4 + O(e_t^5).    (9)
Now, using expression (9), we obtain

f(y_t) = (Δ_2/2) e_t^2 [ (1/4) B_1^2 e_t^2 + B_1(B_2 − (3/4) B_1^2) e_t^3 + (1/16)(2 B_1^2(γ^2 Δ_2^2 − 32 B_2) + 29 B_1^4 + 24 B_1 B_3 + 16 B_2^2) e_t^4 + O(e_t^5) ],    (10)
and
κ_t = (f(y_t)/f(x_t))^{1/2} = (B_1/2) e_t + (B_2 − B_1^2) e_t^2 + (1/16)(2 B_1(γ^2 Δ_2^2 − 26 B_2) + 29 B_1^3 + 24 B_3) e_t^3 + O(e_t^4).    (11)
Expression (11) demonstrates that κ_t = O(e_t). So, we expand the weight function G(κ_t) in a neighborhood of the origin, obtaining:

G(κ_t) = G(0) + G'(0) κ_t + (1/2!) G''(0) κ_t^2 + (1/3!) G'''(0) κ_t^3.    (12)
Now, we have the following expression by inserting Equation (11) in the scheme (3)
e_{t+1} = −(G(0)/2) e_t + (1/4) B_1 (G(0) − G'(0) + 2) e_t^2 + (1/16)[ 8 B_2 (G(0) − G'(0) + 2) − B_1^2 (6 G(0) − 10 G'(0) + G''(0) + 12) ] e_t^3 + Ω_1 e_t^4 + O(e_t^5),    (13)
where Ω_1 = Ω_1(γ, Δ_2, B_1, B_2, B_3, G(0), G'(0), G''(0), G'''(0)) is a function of the previously defined parameters.
From expression (13), we deduce that the scheme (3) reaches at least fourth-order convergence if

G(0) = 0,  G(0) − G'(0) + 2 = 0,  6 G(0) − 10 G'(0) + G''(0) + 12 = 0,

which further provides

G(0) = 0, G'(0) = 2, G''(0) = 8.    (14)
Next, by inserting the above expression (14) into (12), we obtain

e_{t+1} = [ (G'''(0) − 66)/96 · B_1^3 − (1/4) B_1 B_2 ] e_t^4 + O(e_t^5),    (15)

provided |G'''(0)| < ∞. Hence, the scheme (3) has fourth-order convergence for p = 2. □
Theorem 2. 
Let f : D ⊂ ℂ → ℂ be analytic in a region D surrounding a required zero x*, which is a solution of multiplicity p ≥ 3 of the equation f(x) = 0. Then, the scheme (3) has fourth-order convergence when

G(0) = 0,  G'(0) = p,  G''(0) = 4p,  |G'''(0)| < ∞,    (16)
and satisfies the following error equation
e_{t+1} = [ (G'''(0) − 3p(p + 9))/(6p^4) · M_1^3 − (1/p^2) M_1 M_2 ] e_t^4 + O(e_t^5),

where e_t = x_t − x* is the error at the t-th iteration and M_i = (p!/(p+i)!) · f^{(p+i)}(x*)/f^{(p)}(x*), i = 1, 2, 3, are the asymptotic error constants.
Proof. 
We obtain the Taylor series expansions of f(x_t), f(η_t), and f(θ_t) around x = x*, under the assumptions f(x*) = f'(x*) = f''(x*) = ⋯ = f^{(p−1)}(x*) = 0 and f^{(p)}(x*) ≠ 0, which are provided, respectively, by

f(x_t) = (f^{(p)}(x*)/p!) e_t^p [1 + M_1 e_t + M_2 e_t^2 + M_3 e_t^3 + M_4 e_t^4 + O(e_t^5)],    (17)
f(η_t) = (f^{(p)}(x*)/p!) e_t^p [1 + M_1 e_t + Γ_1 e_t^2 + Γ_2 e_t^3 + Γ_3 e_t^4 + O(e_t^5)],    (18)
and
f(θ_t) = (f^{(p)}(x*)/p!) e_t^p [1 + M_1 e_t + Γ̄_1 e_t^2 + Γ̄_2 e_t^3 + Γ̄_3 e_t^4 + O(e_t^5)],    (19)
where Δ_p = f^{(p)}(x*), p = 3, 4, 5, …, Γ_i = Γ_i(γ, Δ_p, M_1, M_2, M_3, M_4), and Γ̄_i = Γ̄_i(γ, Δ_p, M_1, M_2, M_3, M_4); some of them are provided below:

Γ_1 = (1/2)(γΔ_3 + 2 M_2) for p = 3, and Γ_1 = M_2 for p ≥ 4,

and

Γ̄_1 = (1/2)(2 M_2 − γΔ_3) for p = 3, and Γ̄_1 = M_2 for p ≥ 4.
By adopting expressions (17)–(19), we obtain further
f(x_t)/f[η_t, θ_t] = (1/p) e_t − (M_1/p^2) e_t^2 + (1/p^3)((p+1) M_1^2 − 2p M_2) e_t^3 − (1/p^4)((p+1)^2 M_1^3 − p(3p+4) M_1 M_2 + 3p^2 M_3) e_t^4 + O(e_t^5).    (20)
Using expression (20) in the first substep of (3), we have

e_{y_t} = y_t − x* = (M_1/p) e_t^2 − (1/p^2)((p+1) M_1^2 − 2p M_2) e_t^3 + (1/p^3)((p+1)^2 M_1^3 − p(3p+4) M_1 M_2 + 3p^2 M_3) e_t^4 + O(e_t^5).    (21)
Now, using expression (21), we obtain

f(y_t) = (f^{(p)}(x*)/p!) e_{y_t}^p [1 + M_1 e_{y_t} + M_2 e_{y_t}^2 + M_3 e_{y_t}^3 + M_4 e_{y_t}^4 + O(e_{y_t}^5)],    (22)
and
κ_t = (f(y_t)/f(x_t))^{1/p} = (M_1/p) e_t + (1/p^2)(2p M_2 − (p+2) M_1^2) e_t^2 + O(e_t^3).    (23)
Expression (23) demonstrates that κ_t = O(e_t). So, we expand the weight function G(κ_t) in a neighborhood of the origin in the following way:

G(κ_t) = G(0) + G'(0) κ_t + (1/2!) G''(0) κ_t^2 + (1/3!) G'''(0) κ_t^3.    (24)
We have the following expression by inserting equation (23) in the scheme (3)
e_{t+1} = −(G(0)/p) e_t + (1/p^2) M_1 (G(0) − G'(0) + p) e_t^2 + (1/(2p^3))[ 4p M_2 (G(0) − G'(0) + p) − M_1^2 (G''(0) + 2(p+1) G(0) − 2(p+3) G'(0) + 2p(p+1)) ] e_t^3 + C e_t^4 + O(e_t^5),    (25)

where C = C(γ, Δ_p, M_1, M_2, M_3, G(0), G'(0), G''(0), G'''(0)) depends on the parameters defined before.
From expression (25), we deduce that the scheme (3) reaches at least fourth-order convergence if

G(0) = 0,  G(0) − G'(0) + p = 0,  G''(0) + 2(p+1) G(0) − 2(p+3) G'(0) + 2p(p+1) = 0,

which further provides

G(0) = 0, G'(0) = p, G''(0) = 4p.    (26)
Next, by inserting the above expression (26) into (24), we obtain

e_{t+1} = [ (G'''(0) − 3p(p + 9))/(6p^4) · M_1^3 − (1/p^2) M_1 M_2 ] e_t^4 + O(e_t^5),    (27)

provided |G'''(0)| < ∞. Hence, the scheme (3) has fourth-order convergence for p ≥ 3. □

3. Special Cases

In this section, we show that we can develop as many new derivative-free methods for multiple roots as we can build weight functions, provided that each weight function satisfies the conditions of Theorems 1 and 2. Some important special cases are listed in Table 1.

4. Numerical Results

In this section, the efficiency and convergence of the newly generated methods are checked on some nonlinear problems. For this purpose, we consider the RM1, RM2, RM3, and RM4 methods and compare them with other existing derivative-free fourth-order methods.
First of all, we compare them with the following fourth-order method proposed by Kumar et al. [19]:
y_t = x_t − p f(x_t)/f[η_t, x_t],
x_{t+1} = y_t − ((p+2) κ_t/(1 − 2κ_t)) · f(x_t)/(f[η_t, x_t] + 2 f[η_t, y_t]),    (28)

where η_t = x_t + γ f(x_t), γ ∈ ℝ, γ ≠ 0, p ≥ 2 is the known multiplicity of the required zero, κ_t = (f(y_t)/f(x_t))^{1/p} is a multi-valued function, and f[η_t, x_t] = (f(η_t) − f(x_t))/(η_t − x_t) is a first-order divided difference. We denote the scheme (28) by (SKM).
We also contrast them with a fourth-order scheme provided by Zafar et al. [21], which is defined as follows:
y_t = x_t − p f(x_t)/(f'(x_t) + (p/2) f(x_t)),
x_{t+1} = y_t − (p κ_t (11 − 2κ_t)/(2κ_t + 3)) · f(x_t)/(f'(x_t) + 2p f(x_t)),    (29)

where κ_t = (f(y_t)/f(x_t))^{1/p} and p ≥ 2 is the known multiplicity of the required zero. We denote the scheme (29) by (ZM).
We chose a fourth-order method presented by Behl et al. [22], which is provided below:
y_t = x_t − p f(x_t)/f[η_t, x_t],
x_{t+1} = x_t − p [ 1 + (κ_t (1 − 2ακ_t)/(1 − 2κ_t)) (1 + s_t^2 + 2(1 − α) s_t^2) ] f(x_t)/f[η_t, x_t],    (30)

where η_t = x_t + γ f(x_t), γ ∈ ℝ, γ ≠ 0, α ∈ ℝ, p ≥ 2 is the known multiplicity of the required zero, and f[η_t, x_t] = (f(η_t) − f(x_t))/(η_t − x_t) is a first-order divided difference. Further, κ_t = (f(y_t)/f(x_t))^{1/p} and s_t = (f(y_t)/f(η_t))^{1/p} are multi-valued functions. We denote the scheme (30) by (BM).
We consider the following method provided by Kansal et al. [23]:
y_t = x_t − p f(x_t)/f'(x_t),
x_{t+1} = y_t − p κ_t (1 + 2 s_t + (13/2) s_t^2) f(x_t)/f'(x_t),    (31)

where κ_t = (f(y_t)/f(x_t))^{1/p} and s_t = κ_t/(1 + κ_t) are multi-valued functions. We denote the scheme (31) by (MKM).
The computational work is performed in the Mathematica programming software [24], selecting the parameter value γ = 0.01. The numerical results are depicted in Table 2, Table 3, Table 4 and Table 5. The tables include the number of iterations t required to obtain the root with the stopping criterion |x_{t+1} − x_t| + |f(x_t)| < 10^{−200}, the estimated errors |x_{t+1} − x_t|, and the residual errors |f(x_t)| of the considered function. In addition, the computational order of convergence (COC) is obtained using the following formula:

COC ≈ log |(x_{t+2} − α)/(x_{t+1} − α)| / log |(x_{t+1} − α)/(x_t − α)|,  t = 1, 2, …
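The COC formula above can be evaluated directly once the root α is known; a minimal Python helper (the function name is our own) applied to the last three iterates:

```python
import math

def coc(iterates, alpha):
    """Computational order of convergence from the last three iterates and the known root alpha."""
    e0, e1, e2 = (abs(x - alpha) for x in iterates[-3:])
    return math.log(e2 / e1) / math.log(e1 / e0)

# errors shrinking quadratically (1e-2 -> 1e-4 -> 1e-8) give a COC of about 2
order = coc([0.01, 1e-4, 1e-8], 0.0)
```

In practice the iterates must be stored in high precision, since the errors in the tables below fall far beneath double-precision resolution.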
In order to illustrate the applicability of our scheme, we chose four real-life problems, which are described in Examples 1–4.
Example 1. 
Van Der Waals equation of state:
In 1873, Van der Waals modified the ideal gas law (PV = nRT) after realising that no real gas is ideal, adjusting the pressure and volume terms. His equation of state is presented as

(P + a n^2/V^2)(V − nb) = nRT,

where P is the pressure, V is the volume, R is the universal gas constant, and T is the absolute temperature. The constants a and b represent the magnitude of the intermolecular attraction and the excluded volume, respectively; they are specific to a particular gas. The above equation can also be written as

P V^3 − (nbP + nRT) V^2 + a n^2 V − a b n^3 = 0.
For a particular gas, the problem reduces to the following polynomial of degree three in x:

f(x) = x^3 − 5.22 x^2 + 9.0825 x − 5.2675.

According to the fundamental theorem of algebra, the above polynomial has three roots; among them, x = 1.75 is a multiple root of multiplicity p = 2 and x = 1.72 is a simple zero. The numerical results for this problem are given in Table 2.
Example 2. 
Planck’s radiation problem:
Consider Planck's radiation equation, which determines the spectral density E_λ of the electromagnetic radiation emitted by a black body in thermal equilibrium at a definite temperature [25]:

E_λ = (8πch/λ^5) · 1/(e^{ch/(kTλ)} − 1),
where T, λ, k, h, and c are, respectively, the absolute temperature of the black body, the wavelength of the radiation, the Boltzmann constant, Planck's constant, and the speed of light in the medium (vacuum).
To find the wavelength λ for which the energy density E_λ is maximal, the necessary condition E'_λ = 0 provides the following equation:

(ch/(λkT)) e^{ch/(λkT)} / (e^{ch/(λkT)} − 1) = 5.
Setting x = ch/(λkT), the corresponding nonlinear function is

f(x) = (e^{−x} − 1 + x/5)^p.
The approximate zero of f(x) is x* = 4.965114231744276303698759… with multiplicity p = 4; using this solution, one can easily determine the wavelength λ from the relation x = ch/(λkT). The computational results are provided in Table 3.
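A short double-precision check (ours; the paper's tables use much higher precision and a different initial guess policy) that the Traub–Steffensen step with p = 4 locates x* ≈ 4.9651 from the starting guess x = 4:

```python
import math

def f(x):
    # quadruple zero at x* ~= 4.965114...
    return (math.exp(-x) - 1.0 + x / 5.0) ** 4

p, gamma, x = 4, 0.01, 4.0
for _ in range(50):
    fx = f(x)
    if fx == 0.0:
        break
    eta = x + gamma * fx
    if eta == x:                   # step underflowed; double precision is exhausted
        break
    dd = (f(eta) - fx) / (eta - x)
    x -= p * fx / dd
# x approximates the root 4.965114...
```

Because the zero has multiplicity four, f(x) decays like the fourth power of the error, so a plain double-precision run stalls after a few iterations; the qualitative convergence, however, is already visible.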
Example 3. 
Beam Designing Model
Consider a beam designing problem in which a beam of length r units leans against the edge of a cubical box with sides of length 1 unit, such that one end of the beam touches the wall and the other end touches the floor. The question is: what is the distance along the floor from the base of the wall to the bottom of the beam? Let y be the distance along the beam from the floor to the edge of the box, and let x be the distance from the bottom of the box to the bottom of the beam. Then, for a given value of r, we have

x^4 + 4x^3 − 24x^2 + 16x + 16 = 0.
The solution x = 2 of the above equation is a double root. We consider the initial guess x_0 = 1.8 to find the root. Table 4 shows the numerical results of the various methods for this problem.
Example 4. 
Isothermal Continuous Stirred Tank Reactor Problem [26]
The test equation corresponding to this problem is provided as follows:
x^4 + 11.50 x^3 + 47.49 x^2 + 83.06325 x + 51.23266875 = 0.

The solution of the equation is x = −2.85 with multiplicity 2. We consider the initial guess x_0 = −2.7 to find the root. The numerical results of the various methods for this problem are shown in Table 5.

5. Concluding Remarks

  • We presented new derivative-free, multipoint iterative techniques that can handle multiple zeros (p ≥ 2) of nonlinear models.
  • Divided difference and weight function approaches are the main pillars on which the construction of our scheme rests.
  • Our scheme (3) is optimal in the sense of the Kung–Traub conjecture, because it uses only three evaluations of f at different points per iteration.
  • Many new weight functions satisfying the hypotheses of Theorems 1 and 2 are depicted in Table 1; each of them corresponds to a new iterative technique.
  • Our techniques provide better numerical results in terms of residual errors, stable COC, absolute error between two consecutive iterations, and number of iterations, as compared to the existing ones (see Table 2, Table 3, Table 4 and Table 5). The best result for each problem is highlighted in the tables; in Table 2 and Table 3 it is attained by the new method RM2, while Table 5 shows that the new methods need fewer iterations than the known ones to reach the same tolerance.
  • Finally, we conclude that our schemes are a good alternative to the existing methods. Our scheme is not yet applicable to systems of nonlinear equations; in the future, we will work in this direction.

Author Contributions

Conceptualization, R.B., H.A., E.M. and T.S.; methodology, R.B., H.A., E.M. and T.S.; software, R.B., H.A., E.M. and T.S.; validation, R.B., H.A., E.M. and T.S.; formal analysis, R.B., H.A., E.M. and T.S.; investigation, R.B., H.A., E.M. and T.S.; resources, R.B., H.A., E.M. and T.S.; data curation, R.B., H.A., E.M. and T.S.; writing—original draft preparation, R.B., H.A., E.M. and T.S.; writing—review and editing, R.B., H.A., E.M. and T.S.; visualization, R.B., H.A., E.M. and T.S.; supervision, R.B., H.A., E.M. and T.S.; project administration, R.B., H.A., E.M. and T.S.; funding acquisition, R.B., H.A., E.M. and T.S. All authors have read and agreed to the published version of the manuscript.

Funding

Tajinder Singh acknowledges CSIR for financial support under the Grant No. 09/0254(11217)/2021-EMR-I.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365.
  2. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269.
  3. Osada, N. An optimal multiple root finding method of order three. J. Comput. Appl. Math. 1994, 51, 131–133.
  4. Neta, B. New third order nonlinear solvers for multiple roots. Appl. Math. Comput. 2008, 202, 162–170.
  5. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774.
  6. Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353.
  7. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206.
  8. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135.
  9. Hernández, M.A.; Salanova, M.A. A family of Chebyshev type methods in Banach spaces. Int. J. Comput. Math. 1996, 61, 145–154.
  10. Krasnosel'skii, M.A.; Vainikko, G.M.; Zabreiko, P.P. Approximate Solution of Operator Equations; Nauka: Moscow, Russia, 1969.
  11. Kurchatov, V.A. On a method for solving nonlinear functional equations. Dokl. AN USSR 1969, 189, 247–249.
  12. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation: Englewood Cliffs, NJ, USA, 1964.
  13. Kumar, D.; Sharma, J.R.; Argyros, I.K. Optimal one-point iterative function free from derivatives for multiple roots. Mathematics 2020, 8, 709.
  14. Behl, R.; Alharbi, S.K.; Mallawi, F.O.; Salimi, M. An optimal derivative free Ostrowski's scheme for multiple roots of nonlinear equations. Mathematics 2020, 8, 1809.
  15. Sharma, J.R.; Kumar, S.; Jäntschi, L. On derivative free multiple root finders with optimal fourth order convergence. Mathematics 2020, 8, 1091.
  16. Sharma, J.R.; Kumar, S.; Jäntschi, L. On a class of optimal fourth order multiple root solvers without using derivatives. Symmetry 2019, 11, 1452.
  17. Dong, C. A basic theorem of constructing an iterative formula of the higher order for computing multiple roots of an equation. Math. Numer. Sin. 1982, 11, 445–450.
  18. Dong, C. A family of multipoint iterative functions for finding multiple roots of equations. Int. J. Comput. Math. 1987, 21, 363–367.
  19. Kumar, S.; Kumar, D.; Sharma, J.R.; Cesarano, C.; Agarwal, P.; Chu, Y.M. An optimal fourth order derivative-free numerical algorithm for multiple roots. Symmetry 2020, 12, 1038.
  20. Ahlfors, L.V. Complex Analysis; McGraw-Hill: New York, NY, USA, 1979.
  21. Zafar, F.; Cordero, A.; Torregrosa, J.R. Stability analysis of a family of optimal fourth-order methods for multiple roots. Numer. Algorithms 2018, 81, 947–981.
  22. Behl, R.; Bhalla, S.; Magreñán, Á.A.; Moysi, A. An optimal derivative free family of Chebyshev–Halley's method for multiple zeros. Mathematics 2021, 9, 546.
  23. Kansal, M.; Behl, R.; Mahnashi, M.A.A.; Mallawi, F.O. Modified optimal class of Newton-like fourth-order methods for multiple roots. Symmetry 2019, 11, 526.
  24. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media: Champaign, IL, USA, 2003.
  25. Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education: New Delhi, India, 2006.
  26. Douglas, J.M. Process Dynamics and Control; Prentice-Hall: Englewood Cliffs, NJ, USA, 1972.
Table 1. Some special cases of our scheme (3). Every method shares the two-step structure y_t = x_t − p f(x_t)/f[η_t, θ_t], x_{t+1} = y_t − G(κ_t) f(x_t)/f[η_t, θ_t], with the listed weight function.

Case (Name)    | Weight function G(κ_t)
Case-1 (RM1)   | p κ_t / (1 − 2κ_t)
Case-2 (RM2)   | p κ_t / (1 − 2κ_t − κ_t^2)
Case-3 (RM3)   | (p κ_t + κ_t^2) / (1 − 2κ_t − κ_t^2)
Case-4 (RM4)   | p κ_t / (1 − 2κ_t + κ_t^2)
Case-5 (RM5)   | p/(1 − κ_t) − p + p κ_t^2
Case-6 (RM6)   | p κ_t + 2p κ_t^2
Case-7 (RM7)   | p (exp(2κ_t) − sin κ_t − 1)
Table 2. Comparison of different iterative methods based on Example 1.

Methods    | t | |e_{t−2}|      | |e_{t−1}|       | |e_t|           | |f(x_{t+1})|     | COC
RM1        | 7 | 6.11 × 10^−26 | 9.67 × 10^−98   | 6.08 × 10^−385  | 2.71 × 10^−3068  | 4.0
RM2        | 7 | 3.24 × 10^−32 | 2.57 × 10^−123  | 1.00 × 10^−487  | 1.66 × 10^−3891  | 4.0
RM3        | 7 | 2.55 × 10^−30 | 9.77 × 10^−116  | 2.11 × 10^−457  | 6.39 × 10^−3649  | 4.0
RM4        | 7 | 3.44 × 10^−22 | 1.63 × 10^−82   | 8.13 × 10^−324  | 7.65 × 10^−2579  | 4.0
ZM         | 8 | 1.61 × 10^−48 | 3.95 × 10^−189  | 5.36 × 10^−376  | 7.09 × 10^−2999  | 6.0
BM (α = 0) | 7 | 1.21 × 10^−19 | 3.52 × 10^−72   | 2.48 × 10^−282  | 1.14 × 10^−2246  | 4.0
BM (α = 1) | 7 | 6.04 × 10^−26 | 9.22 × 10^−98   | 5.01 × 10^−385  | 5.78 × 10^−3069  | 4.0
MKM        | 7 | 3.47 × 10^−22 | 6.70 × 10^−83   | 9.33 × 10^−326  | 3.68 × 10^−2595  | 4.0
SKM        | 7 | 7.75 × 10^−28 | 1.67 × 10^−105  | 3.59 × 10^−416  | 1.77 × 10^−3318  | 4.0
Table 3. Comparison of different iterative methods based on Example 2.

Methods    | t | |e_{t−2}|      | |e_{t−1}|       | |e_t|           | |f(x_{t+1})|     | COC
RM1        | 5 | 6.78 × 10^−26 | 2.43 × 10^−105  | 3.99 × 10^−423  | 9.96 × 10^−6778  | 4.0
RM2        | 5 | 5.63 × 10^−26 | 1.09 × 10^−105  | 1.56 × 10^−424  | 2.29 × 10^−6800  | 4.0
RM3        | 5 | 6.19 × 10^−26 | 1.64 × 10^−105  | 8.12 × 10^−424  | 7.79 × 10^−6789  | 4.0
RM4        | 5 | 8.11 × 10^−26 | 5.23 × 10^−105  | 9.04 × 10^−422  | 5.82 × 10^−6756  | 4.0
ZM         | 7 | 1.55 × 10^−33 | 2.73 × 10^−132  | 7.74 × 10^−264  | 1.11 × 10^−4214  | 6.0
BM (α = 0) | 5 | 9.61 × 10^−26 | 1.08 × 10^−104  | 1.72 × 10^−420  | 2.01 × 10^−6735  | 4.0
BM (α = 1) | 5 | 6.75 × 10^−26 | 2.38 × 10^−105  | 3.71 × 10^−423  | 3.07 × 10^−6778  | 4.0
MKM        | 5 | 6.30 × 10^−26 | 1.76 × 10^−105  | 1.07 × 10^−423  | 6.52 × 10^−6787  | 4.0
SKM        | 5 | 6.77 × 10^−26 | 2.42 × 10^−105  | 3.90 × 10^−423  | 7.03 × 10^−6778  | 4.0
Table 4. Comparison of different iterative methods based on Example 3.

Methods    | t  | |e_{t−2}|      | |e_{t−1}|       | |e_t|           | |f(x_{t+1})|     | COC
RM1        | 6  | 1.71 × 10^−18 | 1.56 × 10^−73   | 1.09 × 10^−293  | 1.57 × 10^−2346  | 4.0
RM2        | 6  | 3.97 × 10^−22 | 6.44 × 10^−89   | 4.48 × 10^−356  | 2.65 × 10^−2847  | 4.0
RM3        | 6  | 2.09 × 10^−21 | 4.93 × 10^−86   | 1.54 × 10^−344  | 5.02 × 10^−2755  | 4.0
RM4        | 6  | 1.71 × 10^−17 | 2.87 × 10^−69   | 2.29 × 10^−276  | 2.08 × 10^−2207  | 4.0
ZM         | 10 | 1.06 × 10^−35 | 1.31 × 10^−140  | 2.56 × 10^−280  | 4.92 × 10^−2236  | 6.0
BM (α = 0) | 7  | 4.22 × 10^−45 | 6.32 × 10^−179  | 3.18 × 10^−714  | 9.96 × 10^−5709  | 4.0
BM (α = 1) | 7  | 9.11 × 10^−26 | 6.14 × 10^−51   | 4.75 × 10^−203  | 6.70 × 10^−809   | 1.3
MKM        | 6  | 1.20 × 10^−18 | 2.20 × 10^−74   | 2.42 × 10^−297  | 3.07 × 10^−2376  | 4.0
SKM        | 8  | 7.55 × 10^−50 | 1.37 × 10^−198  | 1.40 × 10^−396  | 6.22 × 10^−3169  | 6.0
Table 5. Comparison of different iterative methods based on Example 4.

Methods    | t | |e_{t−2}|      | |e_{t−1}|       | |e_t|           | |f(x_{t+1})|     | COC
RM1        | 6 | 1.37 × 10^−49 | 2.02 × 10^−198  | 9.39 × 10^−794  | 4.12 × 10^−6349  | 4.0
RM2        | 6 | 1.36 × 10^−49 | 1.94 × 10^−198  | 8.12 × 10^−794  | 1.27 × 10^−6349  | 4.0
RM3        | 6 | 1.36 × 10^−49 | 1.94 × 10^−198  | 8.11 × 10^−794  | 1.27 × 10^−6349  | 4.0
RM4        | 6 | 1.38 × 10^−49 | 2.09 × 10^−198  | 1.09 × 10^−793  | 1.33 × 10^−6348  | 4.0
ZM         | 6 | 3.15 × 10^−35 | 9.46 × 10^−70   | 2.24 × 10^−277  | 4.84 × 10^−1107  | 1.3
BM (α = 0) | 6 | 4.90 × 10^−48 | 4.80 × 10^−192  | 4.41 × 10^−768  | 2.10 × 10^−6143  | 4.0
BM (α = 1) | 6 | 4.44 × 10^−48 | 3.18 × 10^−192  | 8.31 × 10^−769  | 3.17 × 10^−6149  | 4.0
MKM        | 6 | 1.37 × 10^−49 | 1.97 × 10^−198  | 8.62 × 10^−794  | 2.07 × 10^−6349  | 4.0
SKM        | 6 | 4.43 × 10^−48 | 3.14 × 10^−192  | 7.93 × 10^−769  | 2.18 × 10^−6149  | 4.0
Behl, R.; Arora, H.; Martínez, E.; Singh, T. Approximating Multiple Roots of Applied Mathematical Problems Using Iterative Techniques. Axioms 2023, 12, 270. https://doi.org/10.3390/axioms12030270