Article

A New Approach to Multiroot Vectorial Problems: Highly Efficient Parallel Computing Schemes

1 Faculty of Engineering, Free University of Bozen-Bolzano (BZ), 39100 Bolzano, Italy
2 Department of Mathematics and Statistics, Riphah International University, I-14, Islamabad 44000, Pakistan
3 Department of Mathematics, National University of Modern Languages (NUML), Islamabad 44000, Pakistan
* Author to whom correspondence should be addressed.
Fractal Fract. 2024, 8(3), 162; https://doi.org/10.3390/fractalfract8030162
Submission received: 1 December 2023 / Revised: 8 February 2024 / Accepted: 22 February 2024 / Published: 12 March 2024
(This article belongs to the Special Issue Feature Papers for Numerical and Computational Methods Section)

Abstract:
In this article, we construct an efficient family of simultaneous methods for finding all the distinct as well as multiple roots of polynomial equations. Convergence analysis proves that the order of convergence of the newly constructed family of simultaneous methods is seventeen. Fractal-based simultaneous iterative algorithms are thoroughly examined. Using self-similar features, fractal-based simultaneous schemes can converge to solutions faster, saving the computational time and resources necessary for solving nonlinear equations. Fractal analysis illustrates the global convergence behavior of the newly developed methods, compared with single root-finding procedures, for solving fractional-order polynomials that arise in complex engineering applications. Some real problems from various branches of engineering, along with some higher-degree polynomials, are considered as test examples to show the global convergence property of simultaneous methods and the performance and efficiency of the proposed family of methods. Furthermore, computational efficiencies, CPU times, and residual graphs are presented to validate the efficiency and robustness of the newly introduced family of methods as compared to existing methods in the literature.

1. Introduction

Consider a nonlinear polynomial equation of degree m:

f(\varsigma) = \prod_{i=1}^{n} (\varsigma - \zeta_i)^{\sigma_i}, \qquad (1)
with real or complex exact roots ζ_1, …, ζ_n of respective unknown multiplicities σ_1, …, σ_n (σ_1 + ⋯ + σ_n = m). Generally, the multiplicities of the roots are not given in advance; however, researchers are working on numerical methods which approximate the unknown multiplicities; see, e.g., [1,2,3,4,5,6,7]. Solving (1) is one of the primal problems of science in general, and of applied and computational mathematics in particular. Nonlinear equations play a crucial role in providing more realistic and precise descriptions of phenomena, enabling scientists and engineers to understand, simulate, and predict the behavior of diverse systems. From the microscopic world of quantum mechanics to the macroscopic scale of climate modeling, nonlinear equations serve as the foundation for capturing the dynamic and non-trivial relationships that characterize the behavior of materials, physical processes, and complex systems. Their application extends to optimization problems [8], signal processing [9], and the modeling of population dynamics [10], emphasizing their pervasive role in advancing our understanding and facilitating the design and optimization of systems in science and engineering [11,12]. In essence, the importance of nonlinear equations lies in their ability to bridge the gap between theoretical models and the intricate realities of the natural and engineered world, providing a powerful tool for analysis, simulation, and innovation. The numerical iterative methods which approximate the roots of (1) may be classified into two main groups: iterative methods which estimate one root at a time (see, e.g., Chicharro et al. [13], Kung and Traub [14], King et al. [15], Cordero et al. [16], Chun et al. [17], Behl [18], Kou et al. [19], Chun and Ham [20], Jarratt [21], Chicharro and Cordero [22], Ostrowski [23], and Cordero and Garcia-Maimo [24]), and iterative techniques which find all roots of (1) simultaneously.
Simultaneous iterative methods are more precise, stable, and consistent than single root-finding methods, and they can also be used for parallel computation. Karl Weierstrass developed a derivative-free simultaneous technique [25], known in the literature as the Weierstrass method, which was rediscovered by Kerner [26], Durand [27], Dochev [28], and Presic [29] to estimate all roots of nonlinear equations. In 1962, Dochev showed that the Weierstrass iterative technique converges locally with order two. Later, Dochev and Byrnev [30] and Börsch-Supan [31] developed derivative-free simultaneous techniques of convergence order three, whereas Ehrlich [32] developed a simultaneous method with derivatives. Based on the approaches of Ehrlich and Börsch-Supan, Nourein [33,34] developed two fourth-order simultaneous schemes in 1977. In the literature, the Nourein techniques are obtained by combining two approaches, namely the Ehrlich method with Newton's correction and the Börsch-Supan method with the Weierstrass correction. Sakurai, Torii, and Sugiura [35] developed and studied a fifth-order simultaneous method with derivatives in 1991 using the Padé approximation. In 2019, Proinov et al. [36] presented a detailed convergence analysis of the Sakurai–Torii–Sugiura method, followed by local and semilocal convergence results [37]. Petković et al. [38] used Halley's single root-finding method as a correction to accelerate the Sakurai–Torii–Sugiura method up to convergence order six. There have been numerous implementations of Nourein's method to construct simultaneous methods with accelerated convergence, including [39,40,41,42,43,44] and the references therein.
Motivated by the aforementioned work, the primary goal of this study is to develop a family of simultaneous methods that are both more efficient and of higher convergence order. Using appropriate corrections allows us to achieve seventeenth-order convergence with the fewest functional evaluations required in each step, resulting in very high computational efficiency for the newly constructed family of numerical schemes for finding distinct as well as multiple roots. So far, only the Petković method [45] of order 10 (abbreviated as NN in what follows) and the Gargantini–Farmer–Loizou method of convergence order 2N + 1 (where N is a positive integer) [46,47] exist in the literature, and their efficiency is low. The main contributions of this study are as follows.
  • Three novel simultaneous methods are proposed to find all the distinct and multiple roots of (1).
  • A local convergence analysis demonstrates that the newly developed methods attain seventeenth-order convergence.
  • We provide an in-depth complexity analysis to illustrate how effective the new approach is as compared to existing methods.
  • Using fractal behavior, the newly developed methods are numerically evaluated for stability and efficiency.
  • Utilizing a variety of stopping criteria, the method’s general applicability to a wide range of nonlinear engineering problems is thoroughly investigated.
Global convergence behavior of simultaneous iterative methods is characterized by analyzing their basins of attraction with initial guesses taken far from the roots. The most frequently used classical approach of the first category, for finding a single root of (1), is Newton's method:
\varsigma^{(\vartheta+1)} = \varsigma^{(\vartheta)} - \frac{f(\varsigma^{(\vartheta)})}{f'(\varsigma^{(\vartheta)})}, \quad (\vartheta = 1, 2, \ldots). \qquad (2)
Method (2) has optimal convergence order two, with efficiency index 1.41. If we use the Weierstrass correction [48,49,50]

\frac{f(\varsigma_i^{(\vartheta)})}{f'(\varsigma_i^{(\vartheta)})} \approx w(\varsigma_i^{(\vartheta)}) = \frac{f(\varsigma_i^{(\vartheta)})}{\prod_{\substack{j=1\\ j\neq i}}^{n} \bigl(\varsigma_i^{(\vartheta)} - \varsigma_j^{(\vartheta)}\bigr)}, \qquad (3)
in (2), we obtain the Weierstrass method, which estimates all distinct roots of (1) simultaneously:

\varsigma_i^{(\vartheta+1)} = \varsigma_i^{(\vartheta)} - \frac{f(\varsigma_i^{(\vartheta)})}{\prod_{\substack{j=1\\ j\neq i}}^{n} \bigl(\varsigma_i^{(\vartheta)} - \varsigma_j^{(\vartheta)}\bigr)}. \qquad (4)
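As an illustration of how such simultaneous schemes are implemented, the Weierstrass (Durand–Kerner) iteration (4) can be coded in a few lines. The following is a minimal NumPy sketch, not the authors' implementation; the function name `weierstrass` and the test cubic are our own choices:

```python
import numpy as np

def weierstrass(coeffs, z, tol=1e-12, max_iter=100):
    """Durand-Kerner (Weierstrass) iteration: refine all approximations z
    of the roots of the monic polynomial with the given coefficients
    simultaneously; second-order convergence for simple roots."""
    p = np.poly1d(coeffs)
    for _ in range(max_iter):
        w = np.empty_like(z)
        for i in range(len(z)):
            # Weierstrass correction w_i = f(z_i) / prod_{j != i} (z_i - z_j)
            denom = np.prod([z[i] - z[j] for j in range(len(z)) if j != i])
            w[i] = p(z[i]) / denom
        z = z - w
        if np.max(np.abs(w)) < tol:
            break
    return z

# f(x) = x^3 - 6x^2 + 11x - 6 has the simple roots 1, 2, 3
roots = weierstrass([1, -6, 11, -6],
                    np.array([0.4 + 0.9j, 1.8 - 0.3j, 3.3 + 0.2j]))
```

Starting from rough complex guesses, all three roots are refined at once; this all-at-once structure is what makes this class of methods attractive for parallel computation.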
Method (4) has convergence order two. Later, in 1973, Ehrlich [51] and Alefeld and Herzberger [52] proposed the following third-order convergent methods:

\varsigma_i^{(\vartheta+1)} = \varsigma_i^{(\vartheta)} - \frac{1}{\dfrac{1}{N(\varsigma_i^{(\vartheta)})} - \sum\limits_{\substack{j=1\\ j\neq i}}^{n} \dfrac{1}{\varsigma_i^{(\vartheta)} - \varsigma_j^{(\vartheta)}}}, \qquad (5)
and
\varsigma_i^{(\vartheta+1)} = \varsigma_i^{(\vartheta)} - \frac{1}{\dfrac{1}{N_i(\varsigma_i^{(\vartheta)})} - \sum\limits_{j=1}^{i-1} \dfrac{1}{\varsigma_i^{(\vartheta)} - \varsigma_j^{(\vartheta+1)}} - \sum\limits_{j=i+1}^{n} \dfrac{1}{\varsigma_i^{(\vartheta)} - \varsigma_j^{(\vartheta)}}},
where N(\varsigma_i^{(\vartheta)}) = \frac{f(\varsigma_i^{(\vartheta)})}{f'(\varsigma_i^{(\vartheta)})}. Nourein [33] accelerated the convergence order of (5) from three to four:

\varsigma_i^{(\vartheta+1)} = \varsigma_i^{(\vartheta)} - \frac{1}{\dfrac{1}{N_i(\varsigma_i^{(\vartheta)})} - \sum\limits_{\substack{j=1\\ j\neq i}}^{n} \dfrac{1}{\varsigma_i^{(\vartheta)} - \varsigma_j^{(\vartheta)} + \dfrac{f(\varsigma_j^{(\vartheta)})}{f'(\varsigma_j^{(\vartheta)})}}}.
Milovanovic et al. [53] proposed the following fourth-order convergent method:

\varsigma_i^{(\vartheta+1)} = \varsigma_i^{(\vartheta)} - \frac{1}{\dfrac{1}{N_i(\varsigma_i^{(\vartheta)})} - \sum\limits_{j=1}^{i-1} \dfrac{1}{\varsigma_i^{(\vartheta)} - \varsigma_j^{(\vartheta+1)}} - \sum\limits_{j=i+1}^{n} \dfrac{1}{\varsigma_i^{(\vartheta)} - \varsigma_j^{(\vartheta)} + N_j(\varsigma_j^{(\vartheta)})}}.
Petković et al. [54] accelerated the convergence order of (5) from three to six:

\varsigma_i^{(\vartheta+1)} = \varsigma_i^{(\vartheta)} - \frac{1}{\dfrac{1}{N_i(\varsigma_i^{(\vartheta)})} - \sum\limits_{\substack{j=1\\ j\neq i}}^{n} \dfrac{1}{\varsigma_i^{(\vartheta)} - Z_j^{(\vartheta)}}},

where Z_j^{(\vartheta)} = \varsigma_j^{(\vartheta)} - \dfrac{f(y_j^{(\vartheta)}) - f(\varsigma_j^{(\vartheta)})}{2 f(y_j^{(\vartheta)}) - f(\varsigma_j^{(\vartheta)})} \cdot \dfrac{f(\varsigma_j^{(\vartheta)})}{f'(\varsigma_j^{(\vartheta)})} (Ostrowski's fourth-order correction) and y_j^{(\vartheta)} = \varsigma_j^{(\vartheta)} - \dfrac{f(\varsigma_j^{(\vartheta)})}{f'(\varsigma_j^{(\vartheta)})}.
Petković et al. [45] accelerated the convergence order of (5) from three to ten:

\varsigma_i^{(\vartheta+1)} = \varsigma_i^{(\vartheta)} - \frac{1}{\dfrac{1}{N_i(\varsigma_i^{(\vartheta)})} - \sum\limits_{\substack{j=1\\ j\neq i}}^{n} \dfrac{1}{\varsigma_i^{(\vartheta)} - v_j^{(\vartheta)}}},

where

v_j^{(\vartheta)} = u_j^{(\vartheta)} - \bigl(y_j^{(\vartheta)} - u_j^{(\vartheta)}\bigr)\, \frac{f(u_j^{(\vartheta)})\, f'(\varsigma_j^{(\vartheta)})}{f(\varsigma_j^{(\vartheta)})\bigl[f(\varsigma_j^{(\vartheta)}) - f(u_j^{(\vartheta)})\bigr] - 2\bigl[f(y_j^{(\vartheta)}) - f(\varsigma_j^{(\vartheta)})\bigr]\bigl[f(y_j^{(\vartheta)}) - f(u_j^{(\vartheta)})\bigr]},

u_j^{(\vartheta)} = y_j^{(\vartheta)} - \frac{f(\varsigma_j^{(\vartheta)})\, f(y_j^{(\vartheta)})}{f'(\varsigma_j^{(\vartheta)})} \cdot \frac{f(\varsigma_j^{(\vartheta)})}{\bigl[f(\varsigma_j^{(\vartheta)}) - f(y_j^{(\vartheta)})\bigr]^{2}}, \qquad y_j^{(\vartheta)} = \varsigma_j^{(\vartheta)} - \frac{f(\varsigma_j^{(\vartheta)})}{f'(\varsigma_j^{(\vartheta)})}.
To our knowledge, this contribution is unique. A review of the literature reveals that not much work has been done on higher-order, consistent, efficient, and reliable simultaneous methods for finding all distinct and multiple roots of (1). The paper is organized in the following way: In Section 2, fractal behavior (a representation of the global convergence behavior of the proposed methods) and the development and analysis of the simultaneous methods are covered. A detailed discussion of the computational cost analysis of the simultaneous methods is provided in Section 3. Section 4 deals with numerical test problems and engineering applications. Section 5 concludes the study and discusses future research directions.

2. Family of Simultaneous Methods: Construction and Convergence Framework

Using the weight-function technique, Rafiq [55] proposed the following family of fifteenth-order iterative methods for finding a single root of (1):

z^{(\vartheta+1)} = w^{(\vartheta)} - \frac{f(w^{(\vartheta)})}{f[\varsigma^{(\vartheta)}, w^{(\vartheta)}] + \bigl(f[y^{(\vartheta)}, \varsigma^{(\vartheta)}, y^{(\vartheta)}] - f[y^{(\vartheta)}, \varsigma^{(\vartheta)}, w^{(\vartheta)}] - f[h^{(\vartheta)}, \varsigma^{(\vartheta)}, w^{(\vartheta)}]\bigr)\bigl(\varsigma^{(\vartheta)} - w^{(\vartheta)}\bigr)}, \qquad (11)

where

w^{(\vartheta)} = h^{(\vartheta)} - G(\upsilon^{(\vartheta)})\, \frac{f(h^{(\vartheta)})}{f[\varsigma^{(\vartheta)}, h^{(\vartheta)}] - f[\varsigma^{(\vartheta)}, y^{(\vartheta)}, h^{(\vartheta)}]\,\bigl(\varsigma^{(\vartheta)} - h^{(\vartheta)}\bigr)},

h^{(\vartheta)} = y^{(\vartheta)} - \frac{1}{2}\, \frac{f(\varsigma^{(\vartheta)}) + \beta f(y^{(\vartheta)})}{f(\varsigma^{(\vartheta)}) + (\beta - 2) f(y^{(\vartheta)})} \cdot \frac{f(y^{(\vartheta)})\, f(\varsigma^{(\vartheta)}) + f(y^{(\vartheta)})^{2}}{f(y^{(\vartheta)}) - f(\varsigma^{(\vartheta)})} \cdot \frac{y^{(\vartheta)} - \varsigma^{(\vartheta)}}{f(\varsigma^{(\vartheta)})},

y^{(\vartheta)} = \varsigma^{(\vartheta)} - \frac{f(\varsigma^{(\vartheta)})}{f'(\varsigma^{(\vartheta)})}, \qquad \upsilon^{(\vartheta)} = \frac{f(h^{(\vartheta)})}{f(\varsigma^{(\vartheta)})}.
In (11), f[\varsigma^{(\vartheta)}, w^{(\vartheta)}] and f[y^{(\vartheta)}, \varsigma^{(\vartheta)}, w^{(\vartheta)}] are divided differences, defined as:

f[\varsigma^{(\vartheta)}, w^{(\vartheta)}] = \frac{f(\varsigma^{(\vartheta)}) - f(w^{(\vartheta)})}{\varsigma^{(\vartheta)} - w^{(\vartheta)}},

and

f[y^{(\vartheta)}, \varsigma^{(\vartheta)}, w^{(\vartheta)}] = \frac{f[y^{(\vartheta)}, \varsigma^{(\vartheta)}] - f[\varsigma^{(\vartheta)}, w^{(\vartheta)}]}{y^{(\vartheta)} - w^{(\vartheta)}}.
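The divided differences above map directly onto code. A small sketch (the helper names `dd1` and `dd2` are ours, for illustration):

```python
def dd1(f, a, b):
    """First-order divided difference f[a, b] = (f(a) - f(b)) / (a - b)."""
    return (f(a) - f(b)) / (a - b)

def dd2(f, a, b, c):
    """Second-order divided difference f[a, b, c]."""
    return (dd1(f, a, b) - dd1(f, b, c)) / (a - c)

# For a quadratic, the second-order divided difference equals its
# leading coefficient at any three distinct nodes.
quad = lambda x: x**2
val = dd2(quad, 1.0, 2.0, 4.0)  # equals 1.0
```

Divided differences of this kind let the method reuse already-computed function values instead of evaluating extra derivatives, which is what keeps the per-step functional cost low.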
If G(\upsilon^{(\vartheta)}) is a real-valued function of \upsilon^{(\vartheta)} = \frac{f(h^{(\vartheta)})}{f(\varsigma^{(\vartheta)})} such that G(0) = G'(0) = 1 and |G''(0)| < \infty, then the family of methods (11) has convergence order fifteen. If \zeta is a simple root of (1) and \epsilon = \varsigma - \zeta, the error equation is given by:

\varsigma^{(\vartheta+1)} - \zeta \sim \bigl(\varsigma^{(\vartheta)} - \zeta\bigr)^{15}\, C_2^{2} C_3 C_4 \bigl(C_3 C_2^{5} + 2 C_3^{2} C_2^{3} - C_3 C_4 C_2^{2}\bigr),

where

C_k = \frac{f^{(k)}(\zeta)}{k!\, f'(\zeta)}, \quad k = 2, 3, \ldots,

or

\varsigma^{(\vartheta+1)} - \zeta = O(\epsilon^{15}).
We consider here three concrete fifteenth-order members of the family of iterative methods presented in [55]:
Method-1: (N1)
Selecting G(\upsilon^{(\vartheta)}) = 1 + \upsilon^{(\vartheta)} + \alpha\, (\upsilon^{(\vartheta)})^{2}, \alpha \in \mathbb{R}, the family of iterative schemes becomes:

z_1^{(\vartheta)} = w_1^{(\vartheta)} - \frac{f(w^{(\vartheta)})}{f[\varsigma^{(\vartheta)}, w^{(\vartheta)}] + \bigl(f[y^{(\vartheta)}, \varsigma^{(\vartheta)}, y^{(\vartheta)}] - f[y^{(\vartheta)}, \varsigma^{(\vartheta)}, w^{(\vartheta)}] - f[h^{(\vartheta)}, \varsigma^{(\vartheta)}, w^{(\vartheta)}]\bigr)\bigl(\varsigma^{(\vartheta)} - w^{(\vartheta)}\bigr)},

where

w_1^{(\vartheta)} = h^{(\vartheta)} - \left[1 + \frac{f(h^{(\vartheta)})}{f(\varsigma^{(\vartheta)})} + \alpha \left(\frac{f(h^{(\vartheta)})}{f(\varsigma^{(\vartheta)})}\right)^{2}\right] \frac{f(h^{(\vartheta)})}{f[\varsigma^{(\vartheta)}, h^{(\vartheta)}] - f[\varsigma^{(\vartheta)}, y^{(\vartheta)}, h^{(\vartheta)}]\,\bigl(\varsigma^{(\vartheta)} - h^{(\vartheta)}\bigr)},

and h^{(\vartheta)}, y^{(\vartheta)}, and \upsilon^{(\vartheta)} are as defined in (11).
Method-2: (N2)
Choosing G(\upsilon^{(\vartheta)}) = \bigl(1 - \alpha\, \upsilon^{(\vartheta)}\bigr)^{-1/\alpha}, \alpha \neq 0, \alpha \in \mathbb{R}, the family of iterative schemes is given by:

z_2^{(\vartheta)} = w_2^{(\vartheta)} - \frac{f(w^{(\vartheta)})}{f[\varsigma^{(\vartheta)}, w^{(\vartheta)}] + \bigl(f[y^{(\vartheta)}, \varsigma^{(\vartheta)}, y^{(\vartheta)}] - f[y^{(\vartheta)}, \varsigma^{(\vartheta)}, w^{(\vartheta)}] - f[h^{(\vartheta)}, \varsigma^{(\vartheta)}, w^{(\vartheta)}]\bigr)\bigl(\varsigma^{(\vartheta)} - w^{(\vartheta)}\bigr)},

where

w_2^{(\vartheta)} = h^{(\vartheta)} - \left(1 - \alpha\, \frac{f(h^{(\vartheta)})}{f(\varsigma^{(\vartheta)})}\right)^{-1/\alpha} \frac{f(h^{(\vartheta)})}{f[\varsigma^{(\vartheta)}, h^{(\vartheta)}] - f[\varsigma^{(\vartheta)}, y^{(\vartheta)}, h^{(\vartheta)}]\,\bigl(\varsigma^{(\vartheta)} - h^{(\vartheta)}\bigr)},

and h^{(\vartheta)}, y^{(\vartheta)}, and \upsilon^{(\vartheta)} are as defined in (11).
Method-3: (N3)
Considering G(\upsilon^{(\vartheta)}) = 1 + \upsilon^{(\vartheta)} + \alpha\, (\upsilon^{(\vartheta)})^{2} + \gamma\, (\upsilon^{(\vartheta)})^{3}, \alpha, \gamma \in \mathbb{R}, the family of iterative schemes is given as follows:

z_3^{(\vartheta)} = w_3^{(\vartheta)} - \frac{f(w^{(\vartheta)})}{f[\varsigma^{(\vartheta)}, w^{(\vartheta)}] + \bigl(f[y^{(\vartheta)}, \varsigma^{(\vartheta)}, y^{(\vartheta)}] - f[y^{(\vartheta)}, \varsigma^{(\vartheta)}, w^{(\vartheta)}] - f[h^{(\vartheta)}, \varsigma^{(\vartheta)}, w^{(\vartheta)}]\bigr)\bigl(\varsigma^{(\vartheta)} - w^{(\vartheta)}\bigr)},

where

w_3^{(\vartheta)} = h^{(\vartheta)} - \left[1 + \frac{f(h^{(\vartheta)})}{f(\varsigma^{(\vartheta)})} + \alpha \left(\frac{f(h^{(\vartheta)})}{f(\varsigma^{(\vartheta)})}\right)^{2} + \gamma \left(\frac{f(h^{(\vartheta)})}{f(\varsigma^{(\vartheta)})}\right)^{3}\right] \frac{f(h^{(\vartheta)})}{f[\varsigma^{(\vartheta)}, h^{(\vartheta)}] - f[\varsigma^{(\vartheta)}, y^{(\vartheta)}, h^{(\vartheta)}]\,\bigl(\varsigma^{(\vartheta)} - h^{(\vartheta)}\bigr)},

and h^{(\vartheta)}, y^{(\vartheta)}, and \upsilon^{(\vartheta)} are as defined in (11).
Suppose (1) has m distinct roots. Then f(\varsigma) and f'(\varsigma) can be written as:

f(\varsigma) = \prod_{j=1}^{m} (\varsigma - \varsigma_j), \qquad f'(\varsigma) = \sum_{k=1}^{m} \prod_{\substack{j=1\\ j\neq k}}^{m} (\varsigma - \varsigma_j), \qquad \text{so that} \qquad f'(\varsigma_k) = \prod_{\substack{j=1\\ j\neq k}}^{m} (\varsigma_k - \varsigma_j).
This implies

\frac{f'(\varsigma)}{f(\varsigma)} = \sum_{j=1}^{m} \frac{1}{\varsigma - \varsigma_j}, \quad \text{i.e.,} \quad \frac{f(\varsigma)}{f'(\varsigma)} = \frac{1}{\dfrac{1}{\varsigma - \varsigma_i} + \sum\limits_{\substack{j=1\\ j\neq i}}^{m} \dfrac{1}{\varsigma - \varsigma_j}}.
This provides the Aberth–Ehrlich method (5):

\varsigma_i^{(\vartheta+1)} = \varsigma_i^{(\vartheta)} - \frac{1}{\dfrac{1}{N(\varsigma_i^{(\vartheta)})} - \sum\limits_{\substack{j=1\\ j\neq i}}^{m} \dfrac{1}{\varsigma_i^{(\vartheta)} - \varsigma_j^{(\vartheta)}}}, \quad \text{where } N(\varsigma_i^{(\vartheta)}) = \frac{f(\varsigma_i^{(\vartheta)})}{f'(\varsigma_i^{(\vartheta)})}.
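For reference, a minimal NumPy sketch of the Aberth–Ehrlich iteration (5); this is an illustrative implementation on a test cubic, not the authors' code:

```python
import numpy as np

def aberth_ehrlich(coeffs, z, iters=50, tol=1e-12):
    """Aberth-Ehrlich simultaneous iteration: third-order convergence
    to all simple roots of the polynomial with the given coefficients."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    for _ in range(iters):
        step = np.zeros_like(z)
        for i in range(len(z)):
            pv = p(z[i])
            if pv == 0:                      # already an exact root
                continue
            inv_N = dp(z[i]) / pv            # 1/N(z_i) = f'(z_i)/f(z_i)
            s = sum(1.0 / (z[i] - z[j]) for j in range(len(z)) if j != i)
            step[i] = 1.0 / (inv_N - s)
        z = z - step
        if np.max(np.abs(step)) < tol:
            break
    return z

# f(x) = x^3 - 1: the three cube roots of unity
roots = aberth_ehrlich([1, 0, 0, -1],
                       np.array([1.2 + 0.2j, -0.4 + 1.1j, -0.4 - 1.1j]))
```

The repulsion sum over the companion approximations is what prevents two estimates from collapsing onto the same root, which is the practical advantage over running Newton's method from several starting points.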
Now, from (25), an improved approximation of \frac{f(\varsigma_i^{(\vartheta)})}{f'(\varsigma_i^{(\vartheta)})} is formed by replacing \varsigma_j^{(\vartheta)} with z_{vj}^{(\vartheta)}:

\frac{f(\varsigma_i^{(\vartheta)})}{f'(\varsigma_i^{(\vartheta)})} \approx \frac{1}{\dfrac{1}{N(\varsigma_i^{(\vartheta)})} - \sum\limits_{\substack{j=1\\ j\neq i}}^{m} \dfrac{1}{\varsigma_i^{(\vartheta)} - z_{vj}^{(\vartheta)}}}, \quad v = 1, 2, 3.
Using (21) in (5), we have:

\varsigma_i^{(\vartheta+1)} = \varsigma_i^{(\vartheta)} - \frac{1}{\dfrac{1}{N(\varsigma_i^{(\vartheta)})} - \sum\limits_{\substack{j=1\\ j\neq i}}^{m} \dfrac{1}{\varsigma_i^{(\vartheta)} - z_{vj}^{(\vartheta)}}}.
In the case of multiple roots:

\varsigma_i^{(\vartheta+1)} = \varsigma_i^{(\vartheta)} - \frac{\sigma_i}{\dfrac{1}{N(\varsigma_i^{(\vartheta)})} - \sum\limits_{\substack{j=1\\ j\neq i}}^{n} \dfrac{\sigma_j}{\varsigma_i^{(\vartheta)} - z_{vj}^{(\vartheta)}}},
where ζ_1, …, ζ_n are now multiple roots of respective unknown multiplicities σ_1, …, σ_n (σ_1 + ⋯ + σ_n = m), and the corrections z_{vj}^{(\vartheta)} (v = 1, 2, 3) are obtained by applying N1–N3 componentwise:

z_{1j}^{(\vartheta)} = w_{1j}^{(\vartheta)} - \frac{f(w_j^{(\vartheta)})}{f[\varsigma_j^{(\vartheta)}, w_j^{(\vartheta)}] + \bigl(f[y_j^{(\vartheta)}, \varsigma_j^{(\vartheta)}, y_j^{(\vartheta)}] - f[y_j^{(\vartheta)}, \varsigma_j^{(\vartheta)}, w_j^{(\vartheta)}] - f[h_j^{(\vartheta)}, \varsigma_j^{(\vartheta)}, w_j^{(\vartheta)}]\bigr)\bigl(\varsigma_j^{(\vartheta)} - w_j^{(\vartheta)}\bigr)},

with

w_{1j}^{(\vartheta)} = h_j^{(\vartheta)} - \left[1 + \frac{f(h_j^{(\vartheta)})}{f(\varsigma_j^{(\vartheta)})} + \alpha \left(\frac{f(h_j^{(\vartheta)})}{f(\varsigma_j^{(\vartheta)})}\right)^{2}\right] \frac{f(h_j^{(\vartheta)})}{f[\varsigma_j^{(\vartheta)}, h_j^{(\vartheta)}] - f[\varsigma_j^{(\vartheta)}, y_j^{(\vartheta)}, h_j^{(\vartheta)}]\,\bigl(\varsigma_j^{(\vartheta)} - h_j^{(\vartheta)}\bigr)};

z_{2j}^{(\vartheta)} and z_{3j}^{(\vartheta)} are defined by the same expression as z_{1j}^{(\vartheta)}, with w_{1j}^{(\vartheta)} replaced, respectively, by

w_{2j}^{(\vartheta)} = h_j^{(\vartheta)} - \left(1 - \alpha\, \frac{f(h_j^{(\vartheta)})}{f(\varsigma_j^{(\vartheta)})}\right)^{-1/\alpha} \frac{f(h_j^{(\vartheta)})}{f[\varsigma_j^{(\vartheta)}, h_j^{(\vartheta)}] - f[\varsigma_j^{(\vartheta)}, y_j^{(\vartheta)}, h_j^{(\vartheta)}]\,\bigl(\varsigma_j^{(\vartheta)} - h_j^{(\vartheta)}\bigr)},

w_{3j}^{(\vartheta)} = h_j^{(\vartheta)} - \left[1 + \frac{f(h_j^{(\vartheta)})}{f(\varsigma_j^{(\vartheta)})} + \alpha \left(\frac{f(h_j^{(\vartheta)})}{f(\varsigma_j^{(\vartheta)})}\right)^{2} + \gamma \left(\frac{f(h_j^{(\vartheta)})}{f(\varsigma_j^{(\vartheta)})}\right)^{3}\right] \frac{f(h_j^{(\vartheta)})}{f[\varsigma_j^{(\vartheta)}, h_j^{(\vartheta)}] - f[\varsigma_j^{(\vartheta)}, y_j^{(\vartheta)}, h_j^{(\vartheta)}]\,\bigl(\varsigma_j^{(\vartheta)} - h_j^{(\vartheta)}\bigr)};

and, for all three corrections,

h_j^{(\vartheta)} = y_j^{(\vartheta)} - \frac{1}{2}\, \frac{f(\varsigma_j^{(\vartheta)}) + \beta f(y_j^{(\vartheta)})}{f(\varsigma_j^{(\vartheta)}) + (\beta - 2) f(y_j^{(\vartheta)})} \cdot \frac{f(y_j^{(\vartheta)})\, f(\varsigma_j^{(\vartheta)}) + f(y_j^{(\vartheta)})^{2}}{f(y_j^{(\vartheta)}) - f(\varsigma_j^{(\vartheta)})} \cdot \frac{y_j^{(\vartheta)} - \varsigma_j^{(\vartheta)}}{f(\varsigma_j^{(\vartheta)})}, \qquad y_j^{(\vartheta)} = \varsigma_j^{(\vartheta)} - \frac{f(\varsigma_j^{(\vartheta)})}{f'(\varsigma_j^{(\vartheta)})}, \qquad \upsilon_j^{(\vartheta)} = \frac{f(h_j^{(\vartheta)})}{f(\varsigma_j^{(\vartheta)})}.
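To make the structure of the multiplicity-weighted update concrete, the sketch below applies it with the corrections z_{vj} replaced by the current approximations themselves, which reduces it to the basic third-order multiple-root Aberth step; it illustrates only the outer update, not the full seventeenth-order MN1 scheme:

```python
import numpy as np

def multiplicity_step(p, dp, z, sigma, iters=3):
    """Simultaneous multiple-root update:
    z_i <- z_i - sigma_i / ( f'(z_i)/f(z_i) - sum_{j!=i} sigma_j/(z_i - z_j) )."""
    z = np.array(z, dtype=complex)
    n = len(z)
    for _ in range(iters):
        znew = z.copy()
        for i in range(n):
            pv = p(z[i])
            if pv == 0:            # exact root reached, leave it fixed
                continue
            inv_N = dp(z[i]) / pv
            s = sum(sigma[j] / (z[i] - z[j]) for j in range(n) if j != i)
            znew[i] = z[i] - sigma[i] / (inv_N - s)
        z = znew
    return z

# p(x) = (x - 1)^2 (x - 2)^3 with multiplicities sigma = (2, 3)
p  = lambda x: (x - 1)**2 * (x - 2)**3
dp = lambda x: 2*(x - 1)*(x - 2)**3 + 3*(x - 1)**2 * (x - 2)**2
approx = multiplicity_step(p, dp, [1.1, 2.1], [2, 3])
```

Note that the multiplicities enter both the numerator and the repulsion sum, so a cluster of σ_j coincident roots is handled by a single approximation rather than σ_j nearly-identical ones.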
Thus, we have constructed a family of three simultaneous methods of seventeenth order for finding all distinct as well as multiple roots of nonlinear polynomial equations, abbreviated as MN1–MN3.

Convergence Analysis

Here, we establish the convergence order of the simultaneous methods MN1–MN3 via the following theorem.
Theorem 1.
Let ξ_1, ξ_2, …, ξ_n be the roots of (1) with respective multiplicities σ_1, …, σ_n (σ_1 + ⋯ + σ_n = m). If the initial estimates ς_1^{(0)}, …, ς_n^{(0)} are sufficiently close to the roots, then MN1–MN3 have seventeenth-order convergence.
Proof. 
Let \epsilon_i = \varsigma_i^{(\vartheta)} - \xi_i and \epsilon_i' = \varsigma_i^{(\vartheta+1)} - \xi_i be the errors in the estimates \varsigma_i^{(\vartheta)} and \varsigma_i^{(\vartheta+1)}, respectively. Considering MN1–MN3, we have:

\varsigma_i^{(\vartheta+1)} = \varsigma_i^{(\vartheta)} - \frac{\sigma_i}{\dfrac{1}{N(\varsigma_i^{(\vartheta)})} - \sum\limits_{\substack{j=1\\ j\neq i}}^{n} \dfrac{\sigma_j}{\varsigma_i^{(\vartheta)} - z_{vj}^{(\vartheta)}}}, \quad (v = 1, 2, 3),

where N(\varsigma_i^{(\vartheta)}) = \frac{f(\varsigma_i^{(\vartheta)})}{f'(\varsigma_i^{(\vartheta)})}. Then, obviously, for distinct roots:

\frac{1}{N(\varsigma_i^{(\vartheta)})} = \frac{f'(\varsigma_i^{(\vartheta)})}{f(\varsigma_i^{(\vartheta)})} = \sum_{j=1}^{n} \frac{1}{\varsigma_i^{(\vartheta)} - \xi_j} = \frac{1}{\varsigma_i^{(\vartheta)} - \xi_i} + \sum_{\substack{j=1\\ j\neq i}}^{n} \frac{1}{\varsigma_i^{(\vartheta)} - \xi_j}.
Thus, for multiple roots we have, from MN1–MN3:

\varsigma_i^{(\vartheta+1)} = \varsigma_i^{(\vartheta)} - \frac{\sigma_i}{\dfrac{\sigma_i}{\varsigma_i^{(\vartheta)} - \xi_i} + \sum\limits_{\substack{j=1\\ j\neq i}}^{n} \dfrac{\sigma_j}{\varsigma_i^{(\vartheta)} - \xi_j} - \sum\limits_{\substack{j=1\\ j\neq i}}^{n} \dfrac{\sigma_j}{\varsigma_i^{(\vartheta)} - z_{vj}^{(\vartheta)}}},

so that

\varsigma_i^{(\vartheta+1)} - \xi_i = \varsigma_i^{(\vartheta)} - \xi_i - \frac{\sigma_i}{\dfrac{\sigma_i}{\varsigma_i^{(\vartheta)} - \xi_i} - \sum\limits_{\substack{j=1\\ j\neq i}}^{n} \sigma_j\, \dfrac{z_{vj}^{(\vartheta)} - \xi_j}{(\varsigma_i^{(\vartheta)} - \xi_j)(\varsigma_i^{(\vartheta)} - z_{vj}^{(\vartheta)})}},

i.e.,

\epsilon_i' = \epsilon_i - \frac{\sigma_i}{\dfrac{\sigma_i}{\epsilon_i} - \sum\limits_{\substack{j=1\\ j\neq i}}^{n} \sigma_j\, \dfrac{z_{vj}^{(\vartheta)} - \xi_j}{(\varsigma_i^{(\vartheta)} - \xi_j)(\varsigma_i^{(\vartheta)} - z_{vj}^{(\vartheta)})}} = \epsilon_i - \frac{\sigma_i \epsilon_i}{\sigma_i - \epsilon_i \sum\limits_{\substack{j=1\\ j\neq i}}^{n} \sigma_j\, \dfrac{z_{vj}^{(\vartheta)} - \xi_j}{(\varsigma_i^{(\vartheta)} - \xi_j)(\varsigma_i^{(\vartheta)} - z_{vj}^{(\vartheta)})}}. \qquad (29)

Writing E_i = \dfrac{\sigma_j}{(\varsigma_i^{(\vartheta)} - \xi_j)(\varsigma_i^{(\vartheta)} - z_{vj}^{(\vartheta)})} and using z_{vj}^{(\vartheta)} - \xi_j = O(\epsilon_j^{15}) (the corrections z_{vj}^{(\vartheta)} are fifteenth-order approximations of the roots), (29) yields

\epsilon_i' = \frac{-\epsilon_i^{2} \sum_{j\neq i} E_i\, \epsilon_j^{15}}{\sigma_i - \epsilon_i \sum_{j\neq i} E_i\, \epsilon_j^{15}}. \qquad (31)
Assuming |\epsilon_i| = |\epsilon_j| = O(|\epsilon|), from (31) we have:

\epsilon_i' = O(\epsilon^{17}),

hence the theorem. □
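In practice, the theoretical order can be checked against the computational order of convergence obtained from three successive error norms; a small sketch with synthetic second-order error data for illustration:

```python
import math

def observed_order(e0, e1, e2):
    """Computational order of convergence from three successive error
    norms: p ~ log(e2/e1) / log(e1/e0)."""
    return math.log(e2 / e1) / math.log(e1 / e0)

# errors decaying as e_{k+1} = e_k^2 (order two) give p = 2
p_hat = observed_order(1e-2, 1e-4, 1e-8)
```

The same formula applied to the error columns of the numerical tables below gives an empirical estimate of the seventeenth order claimed by the theorem.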
Fractal Analysis of the Simultaneous Schemes: To generate the fractal basins of attraction of the iterative schemes N1–N3, MN1–MN3, and NN for determining the roots of a nonlinear equation, we take the real and imaginary parts of the starting approximation as the two axes over a 250 × 250 mesh in the complex plane. We use ‖ς^{(ϑ+1)} − ς^{(ϑ)}‖ < 10^{-3} as the stopping criterion and allow a maximum of 3 iterations. A distinct color is assigned to each root to mark the region from which the iterative scheme converges to it, with black used in all other cases. Brighter colors within a basin indicate fewer iterations. Figure 1a–g and Figure 2a–g illustrate the fractal behavior of N1–N3, MN1–MN3, and NN for fractional-order nonlinear polynomials.
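The basin pictures described above can be reproduced with a simple grid classifier. The sketch below uses Newton's method on f(ς) = ς³ − 1 as a stand-in for the schemes N1–N3 and MN1–MN3; the grid size, iteration cap, and tolerance here are illustrative choices, not the 250 × 250 / 3-iteration settings of the figures:

```python
import numpy as np

def basin_grid(f, df, roots, n=100, box=2.0, iters=30, tol=1e-3):
    """Label each grid point of [-box, box]^2 by the index of the root
    Newton's method reaches from it (0..len(roots)-1), or -1 on failure."""
    xs = np.linspace(-box, box, n)
    labels = -np.ones((n, n), dtype=int)
    for i, re in enumerate(xs):
        for j, im in enumerate(xs):
            z = complex(re, im)
            for _ in range(iters):
                d = df(z)
                if d == 0:
                    break
                z -= f(z) / d
            dists = [abs(z - r) for r in roots]
            k = int(np.argmin(dists))
            if dists[k] < tol:
                labels[j, i] = k   # row = imaginary axis, column = real axis
    return labels

# basins for f(z) = z^3 - 1 (roots: the cube roots of unity)
cube_roots = [np.exp(2j * np.pi * k / 3) for k in range(3)]
L = basin_grid(lambda z: z**3 - 1, lambda z: 3 * z**2, cube_roots, n=60)
```

Coloring the label array (e.g., with `matplotlib.pyplot.imshow`) reproduces the familiar three-lobed Newton fractal; counting non-negative labels gives the "converged points" statistics quoted below.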
Figure 1a–g shows the basins of attraction of the iterative methods N1–N3, MN1–MN3, and NN, respectively, for the nonlinear function f(\varsigma) = \varsigma^{312/99} - \frac{1}{3}.
Figure 2a–g shows the basins of attraction of the iterative methods N1–N3, MN1–MN3, and NN, respectively, for the nonlinear function f(\varsigma) = e^{\varsigma^{210/99}} - \log(\varsigma).
The elapsed times in Table 1 and the color brightness in Figure 1a–g and Figure 2a–g show the dominance of MN1–MN3 over N1–N3 and NN, respectively.
The iterative schemes N1–N3, MN1–MN3, and NN exhibit fractal behavior when solving f(\varsigma) = \varsigma^{312/99} - \frac{1}{3} and f(\varsigma) = e^{\varsigma^{210/99}} - \log(\varsigma), as seen in Figure 1a–g and Figure 2a–g, respectively. When solving the fractional-order nonlinear equation f(\varsigma) = \varsigma^{312/99} - \frac{1}{3}, the schemes N1–N3, MN1–MN3, and NN converge to 510,003, 534,673, 507,432, 618,654, 638,765, 632,154, and 588,975 points out of a total of 640,000, respectively. While generating the fractal of f(\varsigma) = e^{\varsigma^{210/99}} - \log(\varsigma), N1–N3, MN1–MN3, and NN converge to 543,254, 543,565, 546,754, 632,455, 63,765, 638,765, and 523,451 points out of a total of 640,000, respectively. Furthermore, MN1–MN3 have a far higher convergence rate than NN and, owing to their global convergence, converge to more points of the complex plane than NN.

3. Computational Aspect

We compare the percentage computational cost of the Petković method (NN) [45] and the new family of methods MN1–MN3. The computational efficiencies (CE) are computed as:

\mathrm{CE} = \frac{\log u}{D}, \qquad (33)

where u and D are the order of convergence and the computational cost of MN1–MN3 and NN, respectively. Applying (33) and the data given in Table 2, we calculate the percentage computational cost ratio \rho(\text{MN1–MN3}, \text{NN}) [56] as:

\rho(\text{MN1–MN3}, \text{NN}) = \left(\frac{\mathrm{CE}(\text{MN1–MN3})}{\mathrm{CE}(\text{NN})} - 1\right) \times 100.
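The percentage comparison is a one-liner once CE is available; in the sketch below the cost figures are hypothetical placeholders (the actual values of D come from Table 2, which is not reproduced here):

```python
import math

def comp_efficiency(order, cost):
    # CE = log(order) / cost per iteration cycle
    return math.log(order) / cost

def pct_gain(ce_new, ce_ref):
    # rho = (CE_new / CE_ref - 1) * 100
    return (ce_new / ce_ref - 1.0) * 100.0

# hypothetical costs: even if MN1 (order 17) costs 60 units per cycle
# against 50 for NN (order 10), MN1 remains the more efficient scheme
rho = pct_gain(comp_efficiency(17, 60), comp_efficiency(10, 50))
```

The point of the exercise: a higher order only pays off if log u grows faster than the per-cycle cost D, which is exactly what the ρ percentages in Figure 3 quantify.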
Figure 3a–d clarifies these percentage computational costs graphically. It is obvious from Figure 3b–d that MN1–MN3 are more efficient than NN.

4. Numerical Results

We use CAS Maple 18 with 64-digit floating-point arithmetic for all numerical calculations, with the stopping criterion:

e_i^{(\vartheta)} = \left\| \varsigma_i^{(\vartheta+1)} - \rho_i \right\|_2 \leq 10^{-30},

where e_i^{(\vartheta)} represents the absolute error in the 2-norm. In all numerical calculations we take \alpha = 0.008, \beta = 0.001, \gamma = 1.5.
Numerical test problems from [57,58,59,60] are provided in Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12 and Table 13 and in Algorithm 1.
Algorithm 1: Simultaneous scheme MN1 for approximating all distinct and multiple roots of (1).

Step 1: Choose initial approximations \varsigma_i^{(0)} (i = 1, \ldots, n), a tolerance \epsilon > 0, and a maximum number of iterations tt; set \vartheta = 0.
Step 2: For j = 1, \ldots, n, compute

y_j^{(\vartheta)} = \varsigma_j^{(\vartheta)} - \frac{f(\varsigma_j^{(\vartheta)})}{f'(\varsigma_j^{(\vartheta)})},

then h_j^{(\vartheta)}, \upsilon_j^{(\vartheta)} = \frac{f(h_j^{(\vartheta)})}{f(\varsigma_j^{(\vartheta)})}, w_{1j}^{(\vartheta)}, and z_{1j}^{(\vartheta)} as defined for method MN1 above.
Step 3: Update

\varsigma_i^{(\vartheta+1)} = \varsigma_i^{(\vartheta)} - \frac{\sigma_i}{\dfrac{1}{N(\varsigma_i^{(\vartheta)})} - \sum\limits_{\substack{j=1\\ j\neq i}}^{n} \dfrac{\sigma_j}{\varsigma_i^{(\vartheta)} - z_{1j}^{(\vartheta)}}}, \quad \text{where } N(\varsigma_i^{(\vartheta)}) = \frac{f(\varsigma_i^{(\vartheta)})}{f'(\varsigma_i^{(\vartheta)})}, \; i = 1, \ldots, n.

Step 4: If e_i^{(\vartheta)} = \left\| \varsigma_i^{(\vartheta+1)} - \varsigma_i^{(\vartheta)} \right\| \leq 10^{-30} for all i, or \vartheta > tt, stop; otherwise set \vartheta = \vartheta + 1 and return to Step 2.
Here, we solve some standard nonlinear polynomials arising in a biomedical engineering application.
Application in Bio-engineering
Example 1: Large molecules known as ligands bind to cell-surface receptors. Similar to digestive enzymes, receptors are specialized proteins with unique binding attributes that often perform a job when specific ligands are bound, such as transferring a ligand across a cell membrane or activating a signal to turn on specific genes. The process whereby a ligand attaches to many receptors at once, because it has multiple binding sites, is referred to as "cross-linking" or "aggregating" receptors. Hormones, antibody–antigen complexes, and other extracellular signaling molecules act as ligands, causing receptors on the cell surface to congregate. Perelson [57] presented a model for the equilibrium binding of multivalent ligands to cell-surface receptors. In this model, the multivalent ligand is presumed to have v active sites available for binding to receptors on the surfaces of suspended cells. By coupling one of its v binding sites to a receptor on the cell surface, a ligand molecule in solution can bind to the surface of a cell (see Figure 4). The orientation of the ligand, however, may restrict the variety of binding sites that can interact with a cell once a single link has been created between the ligand and the cell receptor. Thus, a ligand may present f total binding sites for adhering to a cell surface, which determines its potential to attach to receptors on the cell surface that have multiple identical binding regions.
The concentration of unbound receptors on the cell surface at equilibrium can be calculated using the following equation:

RR_T = RR_{eq}\left[1 + \frac{v\, LL_0}{KK_0}\bigl(1 + KK_{\varsigma}\, RR_{eq}\bigr)^{f-1}\right],
where
v is the total number of binding sites on the ligand;
f is the total number of binding sites available for attaching to an individual cell;
KK_\varsigma is the equilibrium cross-linking constant (1/(# ligands/cell));
LL_0 is the ligand concentration in solution (M);
KK_D is the three-dimensional dissociation constant for ligand binding in solution to cells (M);
RR_{eq} is the equilibrium number of free receptors on the outside of the cell (# receptors/cell);
RR_T is the total number of receptors on the platelet surface.
In order to bind blood platelet cells, a plasma protein known as von Willebrand factor [57] engages in multiple receptor–ligand interactions. The following parameter values are estimated for the von Willebrand factor–platelet system:
(i) v = 18.00;
(ii) f = 9.00;
(iii) KK_0 = 7.73 \times 10^{-5} M;
(iv) KK_\varsigma = 5.80 \times 10^{-5} cells per number of ligands;
(v) LL_0 = 2.0 \times 10^{-9} M;
(vi) RR_T = 10{,}688.00 receptors per cell.
By replacing RR_{eq} with \varsigma, we seek the equilibrium concentration of unbound (free) receptors on a platelet by solving:

f_1(\varsigma) = \varsigma\left[1 + \frac{v\, LL_0}{KK_0}\bigl(1 + KK_{\varsigma}\, \varsigma\bigr)^{f-1}\right] - RR_T = 0.

Substituting the values of v, f, KK_0, KK_\varsigma, LL_0, and RR_T into f_1(\varsigma) and simplifying, we obtain a nonlinear polynomial of degree 9:

f_2(\varsigma) = \varsigma\left[1 + 18\,\frac{2 \times 10^{-9}}{7.73 \times 10^{-5}}\bigl(1 + 5.8 \times 10^{-5}\,\varsigma\bigr)^{8}\right] - 10688 = 0,

or

f_2(\varsigma) = 5.964127997 \times 10^{-38}\,\varsigma^{9} + 8.226383444 \times 10^{-33}\,\varsigma^{8} + 4.964196905 \times 10^{-28}\,\varsigma^{7} + 1.711792037 \times 10^{-23}\,\varsigma^{6} + 3.689206976 \times 10^{-19}\,\varsigma^{5} + 5.088561346 \times 10^{-15}\,\varsigma^{4} + 4.386690815 \times 10^{-11}\,\varsigma^{3} + 2.160931436 \times 10^{-7}\,\varsigma^{2} + 1.000465718\,\varsigma - 10688 = 0.
We take the following initial guessed values:

\varsigma_{1,2}^{(0)} = 69696 \mp 17330\,i, \quad \varsigma_{3,4}^{(0)} = 45422 \mp 41931\,i, \quad \varsigma_{5,6}^{(0)} = 2235 \mp 42268\,i, \quad \varsigma_7^{(0)} = 10470, \quad \varsigma_{8,9}^{(0)} = 22153 \mp 18201\,i.
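As an independent cross-check, the degree-9 polynomial f_2 can be handed to NumPy's companion-matrix root finder; a sketch using the coefficients as printed above:

```python
import numpy as np

# coefficients of f2, highest degree first (values as printed in the text)
c = [5.964127997e-38, 8.226383444e-33, 4.964196905e-28, 1.711792037e-23,
     3.689206976e-19, 5.088561346e-15, 4.386690815e-11, 2.160931436e-7,
     1.000465718, -10688.0]
roots = np.roots(c)

# the single positive real root approximates the free-receptor count RR_eq
real_root = next(r.real for r in roots if abs(r.imag) < 1.0 and r.real > 0)
```

Since all coefficients except the constant are positive, f_2 is strictly increasing for ς > 0 and has exactly one positive real root; it lies close to the initial guess ς_7^{(0)} = 10470.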
For the nonlinear problem f_1(\varsigma), the numerical results of the iterative schemes MN1–MN3 and NN are shown in Table 3 and Table 4, respectively.
The simultaneous schemes NN and MN1 exhibit fractal behavior when solving f_2(\varsigma), as seen in Figure 5a,b. Solving f_2(\varsigma) with NN and MN1 takes 0.016774 s and 0.0145 s, respectively. Furthermore, MN1 has a far higher convergence rate than NN: in the complex plane, NN converges to 512,412 points while MN1 converges to 631,243 points out of a total of 640,000. Due to global convergence, MN2 and MN3 exhibit the same fractal behavior as MN1.
Example 2: [58] Standard polynomial of degree 10,000 with 21 multiple roots of multiplicities 600, 100, 100, 200, 200, 200, 200, 300, 300, 400, 400, 500, 500, 600, 600, 700, 700, 800, 800, 900, and 900, respectively.
Consider

f_3(\varsigma) = (\varsigma - 4)^{600} (\varsigma^2 - 1)^{100} (\varsigma^4 - 16)^{200} (\varsigma^2 + 9)^{300} (\varsigma^2 + 16)^{400} (\varsigma^2 + 2\varsigma + 5)^{500} (\varsigma^2 + 2\varsigma + 2)^{600} (\varsigma^2 - 2\varsigma + 2)^{700} (\varsigma^2 - 4\varsigma + 5)^{800} (\varsigma^2 - 2\varsigma + 10)^{900},

with exact real and complex roots:

\xi_1 = 4, \; \xi_{2,3} = \pm 1, \; \xi_{4,5} = \pm 2, \; \xi_{6,7} = \pm 2i, \; \xi_{8,9} = \pm 3i, \; \xi_{10,11} = \pm 4i, \; \xi_{12,13} = -1 \pm 2i, \; \xi_{14,15} = -1 \pm i, \; \xi_{16,17} = 1 \pm i, \; \xi_{18,19} = 2 \pm i, \; \xi_{20,21} = 1 \pm 3i.
The initial guessed values below were randomly selected for global convergence:

\varsigma_1^{(0)} = 4.2 + 0.1i, \; \varsigma_2^{(0)} = 1.2 + 0.1i, \; \varsigma_3^{(0)} = 2.2 + 0.1i, \; \varsigma_4^{(0)} = 2.2 - 0.1i, \; \varsigma_5^{(0)} = 0.2 + 2.1i, \; \varsigma_6^{(0)} = 0.2 + 3.1i, \; \varsigma_7^{(0)} = 0.2 - 3.1i, \; \varsigma_8^{(0)} = 1.2 + 2.1i, \; \varsigma_9^{(0)} = 1.2 - 2.1i, \; \varsigma_{10}^{(0)} = 1.2 - 2.1i, \; \varsigma_{11}^{(0)} = 1.2 + 1.1i, \; \varsigma_{12}^{(0)} = 1.2 - 1.1i, \; \varsigma_{13}^{(0)} = 1.2 + 1.1i, \; \varsigma_{14}^{(0)} = 1.2 - 1.1i, \; \varsigma_{15}^{(0)} = 2.2 + 1.1i, \; \varsigma_{16}^{(0)} = 2.2 - 1.1i, \; \varsigma_{17}^{(0)} = 1.2 + 3.1i, \; \varsigma_{18}^{(0)} = 1.2 - 3.1i, \; \varsigma_{19}^{(0)} = 0.2 + 4.1i, \; \varsigma_{20}^{(0)} = 1.2 - 4.1i, \; \varsigma_{21}^{(0)} = 1.1 + 0.2i.
For distinct roots, we also consider the corresponding polynomial with all multiplicities set to one:

f_3^{*}(\varsigma) = (\varsigma - 4)(\varsigma^2 - 1)(\varsigma^4 - 16)(\varsigma^2 + 9)(\varsigma^2 + 16)(\varsigma^2 + 2\varsigma + 5)(\varsigma^2 + 2\varsigma + 2)(\varsigma^2 - 2\varsigma + 2)(\varsigma^2 - 4\varsigma + 5)(\varsigma^2 - 2\varsigma + 10).
Table 5 and Table 6 show the numerical results of the iterative schemes MN1–MN3 and NN for the nonlinear equations f_3(\varsigma) and f_3^{*}(\varsigma), respectively.
The simultaneous schemes NN and MN1 exhibit fractal behavior when solving f_3(\varsigma), as seen in Figure 6a,b. Solving f_3(\varsigma) with NN and MN1 takes 0.008774 s and 0.00135 s, respectively. Furthermore, MN1 has a far higher convergence rate than NN: in the complex plane, NN converges to 212,413 points while MN1 converges to 621,345 points out of a total of 640,000. Due to global convergence, MN2 and MN3 exhibit the same fractal behavior as MN1.
Example 3: [58] Standard polynomial of degree 1000 with four multiple roots of multiplicities 100, 200, 300, and 400, respectively.
Consider a beam-positioning problem resulting in the nonlinear polynomial equation:

f_4(\varsigma) = \bigl(\varsigma - (0.3 + 0.6i)\bigr)^{100} \bigl(\varsigma - (0.1 + 0.7i)\bigr)^{200} \bigl(\varsigma - (0.7 + 0.5i)\bigr)^{300} \bigl(\varsigma - (0.3 + 0.4i)\bigr)^{400}. \qquad (35)

The exact roots of (35) are \xi_1 = 0.3 + 0.6i, \xi_2 = 0.1 + 0.7i, \xi_3 = 0.7 + 0.5i, \xi_4 = 0.3 + 0.4i.
The initial estimates for f_4(\varsigma) are:

\varsigma_1^{(0)} = 0.301 + 0.601i, \quad \varsigma_2^{(0)} = 0.100 + 0.702i, \quad \varsigma_3^{(0)} = 0.702 + 0.498i, \quad \varsigma_4^{(0)} = 0.289 + 0.400i.

For distinct roots:

f_4^{*}(\varsigma) = \bigl(\varsigma - (0.3 + 0.6i)\bigr)\bigl(\varsigma - (0.1 + 0.7i)\bigr)\bigl(\varsigma - (0.7 + 0.5i)\bigr)\bigl(\varsigma - (0.3 + 0.4i)\bigr).
Table 7 and Table 8 present the computational outcomes of MN1–MN3 and NN for the nonlinear equations f_4(\varsigma) and f_4^{*}(\varsigma), respectively.
The simultaneous methods NN and MN1 exhibit fractal behavior when solving f4(ς), as seen in Figure 7a,b. Solving f4(ς) takes 0.036774 s with NN and 0.0245 s with MN1. Furthermore, MN1 has a far higher convergence rate than NN: out of a total of 640,000 points in the complex plane, NN converges from 610,115 points while MN1 converges from 637,017. Owing to their global convergence, MN2 and MN3 exhibit the same fractal behavior as MN1.
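With multiplicities in the hundreds, f4 cannot be evaluated directly in double precision: |f4(ς)| under- or overflows long before the iterates are accurate. One standard workaround (an illustrative sketch, not a technique stated in the paper) is to work with log|f| = Σ m_k log|ς − ξ_k| using the factored form:

```python
import math

# (root, multiplicity) pairs of the factored polynomial f4
FACTORS = [(0.3 + 0.6j, 100), (0.1 + 0.7j, 200),
           (0.7 + 0.5j, 300), (0.3 + 0.4j, 400)]

def log_abs_f(z, factors):
    """log|f(z)| for f given in factored form; each term contributes
    m * log|z - root|, so huge multiplicities never overflow."""
    return sum(m * math.log(abs(z - r)) for r, m in factors)

# Direct evaluation of |f(z)| here would be about 10^1130,
# far beyond the ~1e308 range of a double:
print(log_abs_f(10 + 10j, FACTORS))
```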
Example 4: [58] Standard polynomial of degree 18 with four multiple roots.
A nonlinear polynomial equation for a beam alignment problem is as follows:
f5(ς) = ς^18 + 12ς^16 + 268ς^14 + 278ς^12 + 3471ς^10 + 324696ς^8 + 620972ς^6 − 2270592ς^4 − 28303951ς^2 − 25704900.
The exact roots of (36) are ξ1,2 = 3.0 ± 1.2i, ξ3,4 = 2.4 ± 3.4i, ξ5 = 1.7, ξ6,7 = 0.6 ± 2.7i, ξ8,9 = ±1.8i, ξ10,11 = ±0.9i, ξ12,13 = −0.6 ± 2.7i, ξ14 = −1.7, ξ15,16 = −2.4 ± 3.4i, ξ17,18 = −3.0 ± 1.2i.
The initial estimates for f 5 ( ς ) are:
ς1(0) = 1.1 + 2.1i, ς2(0) = 1.1 − 2.1i, ς3(0) = −1.1 + 2.1i, ς4(0) = −1.1 − 2.1i, ς5(0) = 2.2 + 0.1i, ς6(0) = −2.1 + 0.1i, ς7(0) = 0.2 + 0.9i, ς8(0) = 0.2 + 1.1i, ς9(0) = 3.1 + 1.9i, ς10(0) = 3.2 − 1.9i, ς11(0) = −3.2 + 1.9i, ς12(0) = −3.2 − 1.9i, ς13(0) = 2.1 + 2.9i, ς14(0) = 2.1 − 2.9i, ς15(0) = −2.1 + 2.9i, ς16(0) = −2.1 − 2.9i, ς17(0) = 0.1 + 3.1i, ς18(0) = 0.1 − 2.9i.
Tables 9 and 10 show the numerical results of the iterative schemes MN1–MN3 and NN for the nonlinear equation f5(ς).
The simultaneous methods NN and MN1 exhibit fractal behavior when solving f5(ς), as seen in Figure 8a,b. Solving f5(ς) takes 0.041294 s with NN and 0.0215445 s with MN1. Furthermore, MN1 has a far higher convergence rate than NN: out of a total of 640,000 points in the complex plane, NN converges from 217,615 points while MN1 converges from 621,971. Owing to their global convergence, MN2 and MN3 exhibit the same fractal behavior as MN1.
Example 5: [58] Standard polynomial of degree 12 with two multiple roots with multiplicity 2 and 8 distinct roots.
Consider the nonlinear polynomial equation:
f6(ς) = ς^12 − (2 + 5i)ς^11 − (1 − 10i)ς^10 + (12 − 25i)ς^9 − 30ς^8 − ς^4 + (2 + 5i)ς^3 + (1 − 10i)ς^2 − (12 − 25i)ς + 30.
The exact roots of (37) are ξ1,2 = √2/2 ± (√2/2)i, ξ3,4 = −√2/2 ± (√2/2)i, ξ5,6 = 1 ± 2i, ξ7,8 = ±i, ξ9 = 2i, ξ10 = 3i, ξ11,12 = ±1.
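The coefficients of (37) can be cross-checked against the listed roots: they are consistent with the factorization f6(ς) = (ς^8 − 1)(ς^4 − (2 + 5i)ς^3 − (1 − 10i)ς^2 + (12 − 25i)ς − 30), whose zeros are the eight 8th roots of unity together with 1 ± 2i, 2i, and 3i. A quick sketch that expands this product by coefficient convolution (the factorization is our reading of the root list, not something stated explicitly in the text):

```python
import numpy as np

# factor whose zeros are the eight 8th roots of unity
p8 = [1, 0, 0, 0, 0, 0, 0, 0, -1]                      # s^8 - 1
# quartic factor with zeros 1 +/- 2i, 2i, 3i
q4 = [1, -(2 + 5j), -(1 - 10j), (12 - 25j), -30]

f6 = np.polymul(p8, q4)    # coefficient convolution gives the degree-12 poly
print(f6)
```

The expanded coefficients match (37) term by term, and `np.roots(f6)` recovers the listed zeros numerically.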
The initial estimates for f 6 ( ς ) are:
ς1(0) = 1.30 + 0.2i, ς2(0) = −1.3 + 0.2i, ς3(0) = 0.30 − 1.20i, ς4(0) = 0.30 + 1.20i, ς5(0) = 0.5 + 0.5i, ς6(0) = 0.50 − 0.5i, ς7(0) = −0.5 + 0.5i, ς8(0) = −0.5 − 0.50i, ς9(0) = 0.2 + 2.2i, ς10(0) = 0.20 + 2.30i, ς11(0) = 1.30 + 2.20i, ς12(0) = 1.3 − 2.20i.
Tables 11 and 12 show the numerical results of the iterative schemes MN1–MN3 and NN for the nonlinear equation f6(ς).
The simultaneous methods NN and MN1 exhibit fractal behavior when solving f6(ς), as seen in Figure 9a,b. Solving f6(ς) takes 0.0745321 s with NN and 0.0349 s with MN1. Furthermore, MN1 has a far higher convergence rate than NN: out of a total of 640,000 points in the complex plane, NN converges from 572,416 points while MN1 converges from 635,097. Owing to their global convergence, MN2 and MN3 exhibit the same fractal behavior as MN1.
Example 6: Model of blood rheology [59,60]. Blood, a non-Newtonian fluid, is modeled as a Casson fluid. According to the Casson fluid model, a simple fluid such as blood flows through a tube in such a way that the velocity gradient is confined to a region near the wall, while the central portion of the fluid moves as a plug with little deformation.
We model the plug flow of Casson fluids [60] using the following nonlinear polynomial equation:
G = 1 − (16/7)√ς + (4/3)ς − (1/21)ς^4,
where G measures the reduction in flow rate. Using G = 0.40 in (38), we obtain:
f7(ς) = (1/441)ς^8 − (8/63)ς^5 − 0.05714285714ς^4 + (16/9)ς^2 − 3.624489796ς + 0.36.
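The Weierstrass (Durand–Kerner) correction that underlies simultaneous schemes of this kind updates all approximations at once via ς_i ← ς_i − f(ς_i)/(a_n Π_{j≠i}(ς_i − ς_j)). A minimal sketch applied to f7 (a basic Weierstrass iteration with standard spiral starting values, not the higher-order MN schemes of the paper):

```python
import numpy as np

def weierstrass(coeffs, iters=300):
    """Basic Weierstrass/Durand-Kerner simultaneous root iteration.
    `coeffs` are polynomial coefficients, highest degree first."""
    p = np.poly1d(coeffs)
    n = len(coeffs) - 1
    an = coeffs[0]
    # standard choice of distinct complex starting values
    z = np.array([(0.4 + 0.9j) ** k for k in range(1, n + 1)])
    for _ in range(iters):
        for i in range(n):
            prod = an * np.prod(z[i] - np.delete(z, i))
            z[i] = z[i] - p(z[i]) / prod   # Weierstrass correction
    return z

# coefficients of f7, highest power first
f7 = [1/441, 0, 0, -8/63, -0.05714285714, 0, 16/9, -3.624489796, 0.36]
roots = weierstrass(f7)
print(sorted(roots, key=abs)[0])   # smallest-magnitude root, near 0.1046986515
```

All eight approximations are refined together at each sweep, which is the defining feature of simultaneous (parallel) schemes.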
The simultaneous methods NN and MN1 exhibit fractal behavior when solving f7(ς), as seen in Figure 10a,b. Solving f7(ς) takes 0.016774 s with NN and 0.0145 s with MN1. Furthermore, MN1 has a far higher convergence rate than NN: out of a total of 640,000 points in the complex plane, NN converges from 630,819 points while MN1 converges from 639,514. Owing to their global convergence, MN2 and MN3 exhibit the same fractal behavior as MN1.
The exact roots of f7(ς) are:
ζ1 = 0.1046986515, ζ2 = 3.822389235, ζ3 = 1.553919850 + 0.9404149899i, ζ4 = −1.238769105 + 3.408523568i, ζ5 = −2.278694688 + 1.987476450i, ζ6 = −2.278694688 − 1.987476450i, ζ7 = −1.238769105 − 3.408523568i, ζ8 = 1.553919850 − 0.9404149899i.
We take the following initial guesses:
ς1(0) = 0.1, ς2(0) = 3.8, ς3(0) = 1.5 + 0.9i, ς4(0) = −1.2 + 3.4i, ς5(0) = −2.2 + 1.9i, ς6(0) = −2.2 − 1.9i, ς7(0) = −1.2 − 3.4i, ς8(0) = 1.5 − 0.9i.
Table 13 shows the numerical results of the iterative schemes MN1–MN3 and NN for the nonlinear equation f7(ς).
Figure 11a–f shows the residual error graphs of the simultaneous methods MN1–MN3 and NN for the polynomial equations used in Examples 1–6, respectively.

5. Conclusions

Here, we have developed three families of simultaneous methods of convergence order seventeen for finding all the real, complex, distinct, and multiple roots of (1). The fractal behavior of the simultaneous schemes shown in Figures 1–10 is explored in detail; in terms of regions of convergence and divergence, the newly constructed methods MN1–MN3 converge from more starting points than the NN method while requiring less elapsed time. The fractal behavior of the simultaneous techniques clearly indicates that the newly developed methods are more stable, consistent, and reliable than the existing methods N1–N3 and NN for solving nonlinear complex engineering problems. The numerical results in Tables 1–13 and Figure 11a–f show that the newly developed methods MN1–MN3 outperform the existing method NN in terms of computational cost, efficiency, CPU time, and residual errors. In future work, we will construct higher-order efficient, optimal, and stable iterative methods for finding simple as well as all distinct and multiple roots of (1).
In the future, we will develop and generalize higher-order simultaneous methods for solving linear and nonlinear systems of equations. We may also explore applications of these results in other areas, for example, the dynamics of nonlinear inventory management systems, the numerical solution of fractional SIR epidemiological models, and stability and optimal control strategies for novel epidemic models of COVID-19 [61,62].

Author Contributions

Conceptualization, M.S. and B.C.; methodology, M.S. and N.R.; software, M.S.; validation, M.S.; formal analysis, B.C. and N.A.M.; investigation, M.S.; resources, B.C.; writing—original draft preparation, M.S. and B.C.; writing—review and editing, B.C.; visualization, M.S. and B.C.; supervision, B.C.; project administration, B.C.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

The work is supported by the Provincia Autonoma di Bolzano/Alto Adige – Ripartizione Innovazione, Ricerca, Università e Musei (contract no. 19/34). Bruno Carpentieri is a member of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM), and this work was partially supported by INdAM-GNCS under Progetti di Ricerca 2022.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

References

  1. Parida, P.K.; Gupta, D.K. An improved method for finding multiple roots and its multiplicity of nonlinear equations in R. Appl. Math. Comput. 2008, 202, 498–503. [Google Scholar] [CrossRef]
  2. Miyakoda, T. Iterative methods for multiple zeros of a polynomial by clustering. J. Comput. Appl. Math. 1989, 28, 315–326. [Google Scholar] [CrossRef]
  3. Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamic. J. Egyp. Math. Soc. 2013, 21, 346–353. [Google Scholar] [CrossRef]
  4. Traub, J.F. Iterative methods for the solution of equations. Am. Math. Soc. 1982, 312, 1–10. [Google Scholar] [CrossRef]
  5. Lagouanelle, J.L. Sur une méthode de calcul de l'ordre de multiplicité des zéros d'un polynôme. C. R. Acad. Sci. Paris Sér. A 1966, 262, 626–627. [Google Scholar]
  6. Petković, M.S.; Petković, L.D.; Džunić, J. Accelerating generators of iterative methods for finding multiple roots of nonlinear equations. Comput. Math. Appl. 2010, 59, 2784–2793. [Google Scholar] [CrossRef]
  7. Johnson, T.; Tucker, W. Enclosing all zeros of an analytic function—A rigorous approach. J. Comput. Appl. Math. 2009, 228, 418–423. [Google Scholar] [CrossRef]
  8. Fu, D.; Si, Y.; Wang, D.; Xiong, Y. An accelerated neural dynamics model for solving dynamic nonlinear optimization problem and its applications. Chaos Solitons Fractals 2024, 180, 114542. [Google Scholar] [CrossRef]
  9. Awwal, A.M.; Wang, L.; Kumam, P.; Mohammad, H.; Watthayu, W. A projection Hestenes Stiefel method with spectral parameter for nonlinear monotone equations and signal processing. Math. Comput. Appl. 2020, 25, 27. [Google Scholar] [CrossRef]
  10. Vyas, S.; Golub, M.D.; Sussillo, D.; Shenoy, K.V. Computation through neural population dynamics. Annu. Rev. Neurosci. 2020, 43, 249–275. [Google Scholar] [CrossRef]
  11. Ali, K.; Ahmad, A.; Ahmad, S.; Nisar, K.S.; Ahmad, S. Peristaltic pumping of MHD flow through a porous channel: Biomedical engineering application. Waves Random Complex Media 2023, 1, 1–30. [Google Scholar] [CrossRef]
  12. Ali, H.S.; Habib, M.A.; Miah, M.M.; Akbar, M.A. Solitary wave solutions to some nonlinear fractional evolution equations in mathematical physics. Heliyon 2020, 6, 1–12. [Google Scholar] [CrossRef]
  13. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 2013, 780153. [Google Scholar] [CrossRef]
  14. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  15. King, R. Family of fourth-order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  16. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. New modifications of Potra–Pták’s method with optimal fourth and eighth orders of convergence. J. Comput. Appl. Math. 2010, 234, 2969–2976. [Google Scholar] [CrossRef]
  17. Chun, C. Fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 10, 454–459. [Google Scholar] [CrossRef]
  18. Behl, R.; Kanwar, V.; Sharma, K.K. Modified optimal families of fourth-order Jarratt’s method. Int. J. Pure Appl. Math. 2013, 84, 331–343. [Google Scholar] [CrossRef]
  19. Kou, J.; Li, Y.; Wang, X. A composite fourth-order iterative method for solving non-linear equations. Appl. Math. Comput. 2007, 184, 471–475. [Google Scholar] [CrossRef]
  20. Chun, C.; Ham, Y. Some fourth-order modifications of Newton’s method. Appl. Math. Comput. 2008, 197, 654–658. [Google Scholar] [CrossRef]
  21. Jarratt, P. Some efficient fourth-order multipoint methods for solving equations. BIT 1969, 9, 119–124. [Google Scholar] [CrossRef]
  22. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. Generating root-finder iterative methods of second order: Convergence and stability. Axioms 2019, 8, 55. [Google Scholar] [CrossRef]
  23. Ostrowski, A.M. Solution of Equations and Systems of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  24. Cordero, A.; García-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P.; Vindel, P. Chaos in King’s iterative family. Appl. Math. Lett. 2013, 26, 842–848. [Google Scholar] [CrossRef]
  25. Weierstrass, K. Neuer Beweis des Satzes, dass jede ganze rationale Function einer Veränderlichen dargestellt werden kann als ein Product aus linearen Functionen derselben Veränderlichen. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin 1891, 2, 1085–1101. [Google Scholar]
  26. Kerner, I.O. Ein gesamtschrittverfahren zur berechnung der nullstellen von polynomen. Numer. Math. 1966, 8, 290–294. [Google Scholar] [CrossRef]
  27. Durand, E. Solutions numériques des équations algébriques. Tome I: Équations du type F(x) = 0, racines d'un polynôme; Masson: Paris, France, 1960; pp. 279–281. [Google Scholar]
  28. Dochev, K. Modified Newton method for the simultaneous computation of all roots of a given algebraic equation (in Bulgarian). Phys. Math. J. Bulg. Acad. Sci. 1962, 5, 136–139. [Google Scholar]
  29. Prešić, S. Un procédé itératif pour la factorisation des polynômes. C. R. Acad. Sci. Paris 1966, 262, 862–863. [Google Scholar]
  30. Dochev, K.; Byrnev, P. Certain modifications of Newton’s method for the approximate solution of algebraic equations. USSR Comput. Math. Math. Phy. 1964, 4, 174–182. [Google Scholar] [CrossRef]
  31. Börsch-Supan, W. Residuenabschätzung für Polynom-Nullstellen mittels Lagrange-Interpolation. Numer. Math. 1970, 14, 287–296. [Google Scholar] [CrossRef]
  32. Ehrlich, L.W. A modified Newton method for polynomials. Commun. ACM 1967, 10, 107–108. [Google Scholar] [CrossRef]
  33. Nourein, A.W. An improvement on Nourein’s method for the simultaneous determination of the zeroes of a polynomial (an algorithm). J. Comput. Appl. Math. 1977, 3, 109–112. [Google Scholar] [CrossRef]
  34. Anourein, A.W.M. An improvement on two iteration methods for simultaneous determination of the zeros of a polynomial. Int. J. Comput. Math. 1977, 6, 241–252. [Google Scholar] [CrossRef]
  35. Sakurai, T.; Torii, T.; Sugiura, H. A high-order iterative formula for simultaneous determination of zeros of a polynomial. J. Comput. Appl. Math. 1991, 38, 387–397. [Google Scholar] [CrossRef]
  36. Proinov, P.D.; Ivanov, S.I. Convergence analysis of Sakurai-Torii-Sugiura iterative method for simultaneous approximation of polynomial zeros. J. Comput. Appl. Math. 2019, 357, 56–70. [Google Scholar] [CrossRef]
  37. Proinov, P.; Ivanov, S.I. Local and semilocal convergence of an accelerated Sakurai-Torii-Sugiura method with Newton's correction. In Proceedings of the International Conference of Numerical Analysis and Applied Mathematics (ICNAAM 2018), Rhodes, Greece, 13–18 September 2018. [Google Scholar]
  38. Petković, M.S.; Stefanovic, L.V. On some improvements of square root iteration for polynomial complex zeros. J. Comput. Appl. Math. 1986, 15, 13–25. [Google Scholar] [CrossRef]
  39. Wang, D.R.; Wu, Y.J.; Wang, D. Some modifications of the parallel Halley iteration method and their convergence. Computing 1987, 38, 75–87. [Google Scholar] [CrossRef]
  40. Petković, M.S.; Triokovic, S.; Herceg, D. On Euler-like methods for the simultaneous approximation of polynomial zeros. Jpn. J. Ind. Appl. Math. 1998, 15, 295–315. [Google Scholar] [CrossRef]
  41. Proinov, P.D.; Ivanov, S.I.; Petković, M.S. On the convergence of Gander’s type family of iterative methods for simultaneous approximation of polynomial zeros. Appl. Math. Comput. 2019, 349, 168–183. [Google Scholar] [CrossRef]
  42. Neves Machado, R.; Lopes, L.G. Ehrlich-type methods with King’s correction for the simultaneous approximation of polynomial complex zeros. Math. Stat. 2019, 7, 129–134. [Google Scholar] [CrossRef]
  43. Proinov, P.D.; Vasileva, M.T. A new family of high-order Ehrlich-type iterative methods. Mathematics 2021, 9, 1855. [Google Scholar] [CrossRef]
  44. Shams, M.; Carpentieri, B. Efficient Inverse Fractional Neural Network-Based Simultaneous Schemes for Nonlinear Engineering Applications. Fractal Fract. 2023, 7, 849. [Google Scholar] [CrossRef]
  45. Petković, M.S.; Petković, L.D.; Džunić, J. On an efficient simultaneous method for finding polynomial zeros. Appl. Math. Lett. 2014, 28, 60–65. [Google Scholar] [CrossRef]
  46. Proinov, P.D.; Petkova, M.D. Local and semilocal convergence of a family of multi-point Weierstrass-type root finding methods. Mediterr. J. Math. 2020, 17, 107. [Google Scholar] [CrossRef]
  47. Proinov, P.D.; Vasileva, M.T. On the convergence of high-order Ehrlich-type iterative methods for approximating all zeros of a polynomial simultaneously. J. Ineq. Appl. 2015, 1, 1–25. [Google Scholar] [CrossRef]
  48. Chinesta, F.; Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Simultaneous roots for vectorial problems. Comput. Appl. Math. 2023, 42, 227. [Google Scholar] [CrossRef]
  49. Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. An iterative scheme to obtain multiple solutions simultaneously. Appl. Math. Lett. 2023, 145, 108738. [Google Scholar] [CrossRef]
  50. Shams, M.; Kausar, N.; Araci, S.; Oros, G.I. Numerical scheme for estimating all roots of non-linear equations with applications. AIMS Math. 2023, 8, 23603–23620. [Google Scholar] [CrossRef]
  51. Aberth, O. Iteration methods for finding all zeros of a polynomial simultaneously. Math. Comp. 1973, 27, 339–344. [Google Scholar] [CrossRef]
  52. Alefeld, G.; Herzberger, J. Introduction to Interval Computation; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar]
  53. Milovanović, G.V.; Petković, M.S. On computational efficiency of the iterative methods for the simultaneous approximation of polynomial zeros. ACM Trans. Math. Softw. 1986, 12, 295–306. [Google Scholar] [CrossRef]
  54. Petković, M.S.; Petković, L.D.; Džunić, J. On an efficient method for the simultaneous approximation of polynomial multiple root. Appl. Anal. Discret. Math. 2014, 8, 73–94. [Google Scholar] [CrossRef]
  55. Rafiq, N. Numerical Solution of Nonlinear Equations; CASPAM, Bahauddin Zakariya University: Multan, Pakistan, 2014. [Google Scholar]
  56. Mir, N.A.; Shams, M.; Rafiq, N.; Akram, S.; Ahmed, R. On Family of Simultaneous Method for Finding Distinct as Well as Multiple Roots of Non-linear Equation. Punjab Univ. J. Math. 2020, 52, 31–44. [Google Scholar]
  57. Fournier, R.L. Basic Transport Phenomena in Biomedical Engineering; Taylor & Franics: New York, NY, USA, 2007. [Google Scholar]
  58. Farmer, M.R. Computing the Zeros of Polynomials Using the Divide and Conquer Approach. Ph.D. Thesis, Department of Computer Science and Information Systems, Birkbeck, University of London, London, UK, 2014. [Google Scholar]
  59. Griffiths, D.V.; Smith, I.M. Numerical Methods for Engineers, 2nd ed.; Special Indian Edition; CRC: Boca Raton, FL, USA, 2011. [Google Scholar]
  60. Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006. [Google Scholar]
  61. Prodanov, D. Analytical parameter estimation of the SIR epidemic model. Applications to the COVID-19 pandemic. Entropy 2020, 23, 59. [Google Scholar] [CrossRef]
  62. Papa, F.; Binda, F.; Felici, G.; Franzetti, M.; Gandolfi, A.; Sinisgalli, C.; Balotta, C. A simple model of HIV epidemic in Italy: The role of the antiretroviral treatment. Math. Biosci. Eng. 2018, 15, 181–207. [Google Scholar]
Figure 1. (a–g) Basins of attraction of the iterative methods N1–N3, MN1–MN3, and NN for the nonlinear function f(ς) = ς^(312/99) − 1/3.
Figure 2. (a–g) Basins of attraction of the iterative methods N1–N3, MN1–MN3, and NN for the nonlinear function f(ς) = e^(ς^(210/99)) − log(ς).
Figure 3. (a–d) Graphical comparison of the percentage computational costs. It is obvious from (b–d) that MN1–MN3 are more efficient than NN.
Figure 4. Binding of a multivalent ligand in solution to univalent receptors on a cell surface.
Figure 5. (a,b) shows fractal behavior of NN and MN1 for solving f 2 ( ς ) .
Figure 6. (a,b) shows fractal behavior of NN and MN1 for solving f 3 ( ς ) .
Figure 7. (a,b) shows fractal behavior of NN and MN1 for solving f 4 ( ς ) .
Figure 8. (a,b) shows fractal behavior of NN and MN1 for solving f 5 ( ς ) .
Figure 9. (a,b) shows fractal behavior of NN and MN1 for solving f 6 ( ς ) .
Figure 10. (a,b) shows fractal behavior of NN and MN1 for solving f 7 ( ς ) .
Figure 11. (a–f) Residual error graphs of the simultaneous methods MN1–MN3 and NN for the polynomial equations used in Examples 1–6, respectively.
Table 1. Elapsed time in seconds.
Method | N1 | N2 | N3 | MN1 | MN2 | MN3 | NN
f(ς) = ς^(312/99) − 1/3 | 5.1293 | 6.1422 | 4.3231 | 0.0714 | 0.0152 | 0.0142 | 0.1235
f(ς) = e^(ς^(210/99)) − log(ς) | 4.1609 | 5.2382 | 3.4319 | 0.0124 | 0.0193 | 0.0147 | 0.2145
Table 2. Basic operations per iteration.
Operation | MN1 | MN2 | MN3 | NN
Additions and subtractions | 6ℵ | 7ℵ | 9ℵ | 8ℵ
Multiplications | 1ℵ | 1ℵ | 1ℵ | 6ℵ
Divisions | 2ℵ | 1ℵ | 2ℵ | 2ℵ
where = 1 + 2 ; 1 = m 2 ; 2 = O ( m ) .
Table 3. Parallel numerical scheme residual error comparison of NN and MN1–MN3.
Method | CPU-Time | e1(2) | e2(2) | e3(2) | e4(2) | e5(2) | e6(2) | e7(2) | e8(2) | e9(2)
NN | 0.406 | 0.0 | 0.0 | 0.0 | 5.7 × 10^−17 | 3.2 × 10^−21 | 0.0 | 0.0 | 0.0 | 0.0
MN1 | 0.016 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
MN2 | 0.156 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
MN3 | 0.159 | 0.0 | 0.0 | 0.0 | 6.0 × 10^−37 | 0.9 × 10^−45 | 0.0 | 0.0 | 0.0 | 0.0
Table 4. Approximate roots of f2(ς) using MN1–MN3, up to 25 decimal places.
Roots | Approximate Roots after Second Iteration Using Simultaneous Methods MN1–MN3
ς 1 ( 2 ) − 59,696.70991140727878983742649 − 17,330.45629984850327806571946 × i
ς 2 ( 2 ) − 59,696.70991140727878983742649 + 17,330.45629984850327806571946 × i
ς 3 ( 2 ) − 35,422.49261941473703632706384 − 41,931.87069228758403349587288 × i
ς 4 ( 2 ) − 35,422.49261941473703632706384 + 41,931.87069228758403349587288 × i
ς 5 ( 2 ) − 1235.689293801075002171094064 − 42,268.27946411286842679890715 × i
ς 6 ( 2 ) − 1235.689293801075002171094064 + 42,268.27946411286842679890715 × i
ς 7 ( 2 ) 10,470.78502392132516903681496 + 0 × i
ς 8 ( 2 ) 22,153.98207243945655027389363 − 18,201.65072991473532250086939 × i
ς 9 ( 2 ) 22,153.98207243945655027389363 + 18,201.65072991473532250086939 × i
Table 5. Parallel numerical scheme residual error comparison of NN and MN1–MN3.
Method | CPU-Time | e1(2) | e2(2) | e3(2) | e4(2) | e5(2) | e6(2) | e7(2)
NN | 4.547 | 0.1 × 10^−11 | 3.6 × 10^−12 | 3.9 × 10^−10 | 3.3 × 10^−12 | 1.0 × 10^−10 | 3.0 × 10^−11 | 3.0 × 10^−11
MN1 | 1.547 | 1.1 × 10^−114 | 3.2 × 10^−85 | 3.3 × 10^−113 | 3.6 × 10^−114 | 3.0 × 10^−110 | 3.6 × 10^−98 | 0.0
MN2 | 1.320 | 1.3 × 10^−224 | 1.0 × 10^−115 | 3.0 × 10^−114 | 1.6 × 10^−118 | 0.1 × 10^−117 | 6.9 × 10^−115 | 0.0
MN3 | 1.016 | 1.0 × 10^−214 | 3.0 × 10^−115 | 1.0 × 10^−123 | 3.5 × 10^−117 | 3.0 × 10^−114 | 2.0 × 10^−101 | 0.0
Method | e8(2) | e9(2) | e10(2) | e11(2) | e12(2) | e13(2) | e14(2) | e15(2) | e16(2) | e17(2)
NN | 2.0 × 10^−12 | 2.3 × 10^−12 | 1.3 × 10^−101 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2 × 10^−102 | 9.3 × 10^−90 | 8.9 × 10^−112
MN1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 9.5 × 10^−201
MN2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 2.2 × 10^−121
MN3 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 3.1 × 10^−131
Method | e18(2) | e19(2) | e20(2) | e21(2)
NN | 7.8 × 10^−91 | 1.0 × 10^−107 | 9.8 × 10^−112 | 9.2 × 10^−111
MN1 | 8.7 × 10^−108 | 3.5 × 10^−158 | 9.6 × 10^−319 | 3.2 × 10^−113
MN2 | 1.2 × 10^−228 | 1.0 × 10^−118 | 3.2 × 10^−319 | 1.3 × 10^−103
MN3 | 1.0 × 10^−128 | 9.0 × 10^−128 | 1.2 × 10^−219 | 1.1 × 10^−123
Residual errors for finding all the multiple roots of the polynomial of degree 10200 with 21 multiple roots using MN1–MN3 and NN, respectively:
Method | CPU-Time | e1(2) | e2(2) | e3(2) | e4(2) | e5(2) | e6(2) | e7(2)
NN | 41.406 | 3.0 × 10^−3 | 3.6 × 10^−3 | 1.0 × 10^−9 | 3.0 × 10^−3 | 3.1 × 10^−2 | 2.0 × 10^−3 | 4.8 × 10^−7
MN1 | 12.510 | 5.5 × 10^−18511 | 3.7 × 10^−3661 | 3.0 × 10^−3224 | 3.0 × 10^−1192 | 2.0 × 10^−498 | 1.1 × 10^−1518 | 2.5 × 10^−3866
MN2 | 11.320 | 3.3 × 10^−17588 | 3.8 × 10^−2661 | 3.6 × 10^−3224 | 0.9 × 10^−1214 | 5.5 × 10^−385 | 1.0 × 10^−1517 | 5.0 × 10^−3513
MN3 | 13.033 | 3.6 × 10^−14561 | 3.9 × 10^−4125 | 3.6 × 10^−3112 | 0.8 × 10^−2112 | 2.6 × 10^−396 | 3.6 × 10^−1311 | 3.6 × 10^−1501
Method | e8(2) | e9(2) | e10(2) | e11(2) | e12(2) | e13(2) | e14(2) | e15(2) | e16(2) | e17(2)
NN | 1.0 × 10^−3 | 3.9 × 10^−2 | 9.3 × 10^−1014 | 3.6 × 10^−3145 | 9.1 × 10^−1112 | 0.0 | 0.0 | 0.0 | 6.0 × 10^−1103 | 7.8 × 10^−330
MN1 | 0.0 | 0.0 | 9.0 × 10^−3215 | 4.9 × 10^−4120 | 7.8 × 10^−1125 | 0.0 | 0.0 | 0.0 | 0.0 | 8.2 × 10^−1205
MN2 | 0.0 | 0.0 | 5.6 × 10^−5125 | 0.5 × 10^−4412 | 8.1 × 10^−1225 | 0.0 | 0.0 | 0.0 | 0.0 | 3.6 × 10^−1225
MN3 | 0.0 | 0.0 | 1.2 × 10^−6132 | 5.9 × 10^−4 | 4.5 × 10^−4255 | 0.0 | 0.0 | 0.0 | 0.0 | 4.6 × 10^−1135
Method | e18(2) | e19(2) | e20(2) | e21(2)
NN | 6.3 × 10^−2001 | 3.6 × 10^−1152 | 6.3 × 10^−3011 | 3.6 × 10^−1222
MN1 | 0.0 | 9.8 × 10^−50122 | 7.2 × 10^−4513 | 4.5 × 10^−4135
MN2 | 0.0 | 8.0 × 10^−50145 | 3.3 × 10^−4873 | 1.2 × 10^−4014
MN3 | 0.0 | 9.8 × 10^−50143 | 0.3 × 10^−4713 | 3.2 × 10^−4781
Table 6. Approximate roots of f3(ς) using MN1–MN3, up to 25 decimal places.
Roots | Approximate Roots after Second Iteration Using Simultaneous Methods MN1–MN3
ς 1 ( 2 ) 3.9999999999999999999999862205 − 2.534958216275585934291378436 × 10−22 × i
ς 2 ( 2 ) − 0.9999999999999999999999893629 − 5.690633353809298633369220348 × 10−26 × i
ς 3 ( 2 ) 1.9999999999999999999989781561 + 2.614467550760436593548580136 × 10−20 × i
ς 4 ( 2 ) − 1.999999999999999999999930336 + 3.160564818685589784758770362 × 10−23 × i
ς 5 ( 2 ) − 2.323214931157996872628855725 × 10−21 + 2.0000000000000000000342 × i
ς 6 ( 2 ) − 2.529298089824518863489277124 × 10−22 − 1.999999999999999999999938556 × i
ς 7 ( 2 ) 1.023976376364266788722524957 × 10−21 + 3.000000000000000000005376510 × i
ς 8 ( 2 ) − 2.840865161134075189641229891 × 10−24 − 2.999999999999999999999993380 × i
ς 9 ( 2 ) − 1.0000000000000000000000000011529 + 2.00000000000000000000000000280 × i
ς 10 ( 2 ) − 0.99999999999999999999999997808790 − 2.00000000000000000000000105176 × i
ς 11 ( 2 ) − 1.0000000000000000000000000003310 + 0.999999999999999999999999960241 × i
ς 12 ( 2 ) − 1.00000000000000000000000000005614 − 1.0000000000000000000000001786 × i
ς 13 ( 2 ) 0.9999999999999999999998888888966375 + 0.99999999999999999999997438242 × i
ς 14 ( 2 ) 1.00000000000000000000000000004082 − 0.9999999999999999999999999976687 × i
ς 15 ( 2 ) 2.00000000000000000000000000304411 + 1.000000000000000000000001584937 × i
ς 16 ( 2 ) 2.00000000000000000000000000050095 − 0.9999999999999999999999999538760 × i
ς 17 ( 2 ) 1.00000000000000000000000007480959 + 3.000000000000000000000012531548 × i
ς 18 ( 2 ) 0.999999999999999999999999676240 − 3.000000000000000000000000000014860 × i
ς 19 ( 2 ) 6.7038343474826752003668885234 × 10−21 + 4.00000000000000000000005924627 × i
ς 20 ( 2 ) 10.593133679422746292651200009402 − 3.19174226899999915825206220041498 × i
ς 21 ( 2 ) 1.00000000000000000000000000032127 − 5.51392157952973235574917078 × 10−24 × i
Table 7. Parallel numerical scheme residual error comparison of NN and MN1–MN3.
Method | CPU-Time | e1(2) | e2(2) | e3(2) | e4(2)
NN | 0.235 | 3.1 × 10^−27 | 0.0 | 2.1 × 10^−41 | 0.0
MN1 | 0.201 | 0.0 | 0.0 | 0.0 | 0.0
MN2 | 0.211 | 0.0 | 0.0 | 0.0 | 0.0
MN3 | 0.191 | 0.0 | 0.0 | 0.0 | 0.0
Simultaneous finding of all multiple roots:
NN | 0.335 | 1.2 × 10^−1105 | 0.0 | 1.2 × 10^−1613 | 0.0
MN1 | 0.101 | 7.2 × 10^−2114 | 1.2 × 10^−3801 | 1.2 × 10^−5305 | 5.2 × 10^−6662
MN2 | 0.121 | 5.2 × 10^−2100 | 4.2 × 10^−3710 | 3.2 × 10^−5211 | 6.2 × 10^−6653
MN3 | 0.311 | 9.2 × 10^−2100 | 7.2 × 10^−3601 | 1.9 × 10^−5305 | 8.2 × 10^−6714
Table 8. Approximate roots of f4(ς) using MN1–MN3, up to 25 decimal places.
Roots | Approximate Roots after Second Iteration Using Simultaneous Methods MN1–MN3
ς 1 ( 2 ) 0.3000000000000000000000000000 + 0.6000000000000000000000000000 × i
ς 2 ( 2 ) 0.1000000000000000000000000000 + 0.7000000000000000000000000000 × i
ς 3 ( 2 ) 0.7000000000000000000000000000 + 0.5000000000000000000000000000 × i
ς 4 ( 2 ) 0.3000000000000000000000000000 + 0.4000000000000000000000000000 × i
Table 9. Parallel numerical scheme residual error comparison of NN and MN1–MN3.
Method | CPU-Time | e1(2) | e2(2) | e3(2) | e4(2) | e5(2) | e6(2) | e7(2)
NN | 0.235 | 0.3 × 10^−87 | 3.0 × 10^−91 | 1.0 × 10^−90 | 4.3 × 10^−93 | 5.0 × 10^−83 | 2.1 × 10^−12 | 2.1 × 10^−72
MN1 | 0.201 | 7.3 × 10^−101 | 3.2 × 10^−101 | 2.0 × 10^−110 | 1.3 × 10^−125 | 3.4 × 10^−137 | 6.0 × 10^−112 | 0.3 × 10^−101
MN2 | 0.211 | 1.7 × 10^−111 | 0.1 × 10^−101 | 1.2 × 10^−101 | 5.5 × 10^−225 | 7.6 × 10^−134 | 8.0 × 10^−112 | 9.3 × 10^−104
MN3 | 0.191 | 8.3 × 10^−111 | 9.0 × 10^−111 | 2.1 × 10^−101 | 4.3 × 10^−125 | 3.0 × 10^−132 | 2.8 × 10^−112 | 8.3 × 10^−101
Method | e8(2) | e9(2) | e10(2) | e11(2) | e12(2) | e13(2) | e14(2) | e15(2)
NN | 2.1 × 10^−82 | 2.5 × 10^−73 | 9.3 × 10^−74 | 3.0 × 10^−77 | 2.1 × 10^−73 | 1.3 × 10^−112 | 3.0 × 10^−98 | 2.0 × 10^−85
MN1 | 6.1 × 10^−114 | 7.0 × 10^−113 | 8.3 × 10^−115 | 3.5 × 10^−117 | 2.2 × 10^−114 | 9.3 × 10^−117 | 9.0 × 10^−111 | 2.9 × 10^−135
MN2 | 3.3 × 10^−131 | 5.8 × 10^−101 | 1.5 × 10^−110 | 3.3 × 10^−118 | 0.6 × 10^−115 | 6.3 × 10^−119 | 0.9 × 10^−112 | 5.0 × 10^−131
MN3 | 9.0 × 10^−120 | 7.6 × 10^−100 | 7.3 × 10^−115 | 6.0 × 10^−120 | 1.7 × 10^−116 | 7.3 × 10^−121 | 4.4 × 10^−115 | 2.5 × 10^−117
Method | e16(2) | e17(2) | e18(2)
NN | 1.3 × 10^−95 | 3.0 × 10^−95 | 5.0 × 10^−91
MN1 | 2.3 × 10^−111 | 9.0 × 10^−102 | 0.1 × 10^−119
MN2 | 1.3 × 10^−115 | 5.8 × 10^−113 | 2.9 × 10^−203
MN3 | 5.3 × 10^−101 | 3.9 × 10^−114 | 8.0 × 10^−117
Table 10. Approximate roots of f5(ς) using MN1–MN3, up to 25 decimal places.
Roots | Approximate Roots after Second Iteration Using Simultaneous Methods MN1–MN3
ς 1 ( 2 ) 0.8603435385822107751310968108 + 1.854362078138018927197345307 × i
ς 2 ( 2 ) 0.8603435385824826826164634926 − 1.854362078137389759121145467 × i
ς 3 ( 2 ) − 0.8603435385824826826191976484 + 1.854362078137389759116467203 × i
ς 4 ( 2 ) − 0.8603435385824826826250491509 − 1.854362078137389758498169291 × i
ς 5 ( 2 ) 2.099515617155402337664929826 + 2.0018942089444809594275 × 10−25 × i
ς 6 ( 2 ) − 2.099515617155402337664930488 + 8.500261481816581859095 × 10−65 × i
ς 7 ( 2 ) − 0.1153355754810685747732055498 − 0.9452308584818627592298044050 × i
ς 8 ( 2 ) 5.735924944438839111033359624 + 9.9691958921656837672916427681 × i
ς 9 ( 2 ) 2.780333195217222311406266628 + 1.5314159752240276458048625411 × i
ς 10 ( 2 ) 10.38458363502628061602412598 − 2.2689074595992585372659252860 × i
ς 11 ( 2 ) − 2.889237782026642801550622077 + 1.387876808887673226733098622 × i
ς 12 ( 2 ) − 2.889237274653490024449890492 − 1.387876796309858631439562289 × i
ς 13 ( 2 ) 2.378876612988062257880708956 + 3.4667457639218525242487229114 × i
ς 14 ( 2 ) 2.378876612988074253739225478 − 3.4667457639218476882627822682 × i
ς 15 ( 2 ) − 2.378876612988074361358332472 + 3.466745763921847828159258028 × i
ς 16 ( 2 ) − 2.378876615468194672405838207 − 3.466745742882904234182513076 × i
ς 17 ( 2 ) − 1.705508831120901418129885320 × 10−50 + 3.1838193018332206021608 × i
ς 18 ( 2 ) 2.684827765525257608577503573 × 10−34 − 3.18381930183322060216087 × i
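Tabulated approximations such as those in Table 10 can be checked directly by evaluating the polynomial at each value; the magnitude |f(ς_i^(2))| is exactly the residual reported in the error tables. A minimal check via Horner's rule, using an assumed toy polynomial x² − 2 in place of the paper's f₅ (which is defined earlier in the article):

```python
def horner(coeffs, x):
    """Evaluate a polynomial with coefficients listed from highest degree down."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# assumed example: p(x) = x^2 - 2 with the approximation x ~ sqrt(2);
# the absolute value of p at the approximation plays the role of e_i^(k)
approx = 1.4142135623730951
residual = abs(horner([1.0, 0.0, -2.0], approx))
```

Horner evaluation is numerically stable and costs only n multiplications for a degree-n polynomial, which matters when residuals down to 10⁻²⁰⁰ are being resolved in multi-precision arithmetic.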
Table 11. Residual error comparison of the parallel numerical schemes NN and MN1–MN3.

| Method | CPU-Time | e_1^(2) | e_2^(2) | e_3^(2) | e_4^(2) | e_5^(2) | e_6^(2) | e_7^(2) | e_8^(2) |
|---|---|---|---|---|---|---|---|---|---|
| NN | 3.547 | 1.0 × 10⁻⁵⁷ | 3.1 × 10⁻⁴⁰ | 8.4 × 10⁻⁵¹ | 6.6 × 10⁻⁵² | 1.1 × 10⁻⁴⁷ | 0.1 × 10⁻⁵¹ | 3.1 × 10⁻⁴⁷ | 0.1 × 10⁻⁵⁹ |
| MN1 | 1.201 | 0.0 | 8.4 × 10⁻⁶¹ | 8.4 × 10⁻⁶² | 9.6 × 10⁻⁶² | 1.6 × 10⁻⁶² | 0.1 × 10⁻⁶² | 3.1 × 10⁻⁶³ | 6.1 × 10⁻⁶² |
| MN2 | 2.201 | 1.5 × 10⁻⁶¹ | 5.2 × 10⁻⁶⁴ | 5.5 × 10⁻⁶¹ | 0.9 × 10⁻⁶² | 0.1 × 10⁻⁶² | 9.1 × 10⁻⁶² | 6.1 × 10⁻⁶⁵ | 7.1 × 10⁻⁶² |
| MN3 | 2.101 | 0.0 | 7.1 × 10⁻⁶⁰ | 5.4 × 10⁻⁶⁴ | 0.6 × 10⁻⁶² | 7.1 × 10⁻⁶² | 9.1 × 10⁻⁶² | 3.8 × 10⁻⁶² | 8.1 × 10⁻⁶² |

| Method | e_9^(2) | e_10^(2) | e_11^(2) | e_12^(2) |
|---|---|---|---|---|
| NN | 3.5 × 10⁻⁴¹ | 4.1 × 10⁻⁵² | 3.4 × 10⁻⁵⁷ | 3.7 × 10⁻⁵⁹ |
| MN1 | 9.5 × 10⁻⁶³ | 9.1 × 10⁻⁶¹ | 7.7 × 10⁻⁶⁰ | 0.7 × 10⁻⁶³ |
| MN2 | 3.5 × 10⁻⁶⁵ | 8.1 × 10⁻⁶² | 3.4 × 10⁻⁶⁴ | 9.1 × 10⁻⁶¹ |
| MN3 | 1.0 × 10⁻⁶⁴ | 4.8 × 10⁻⁶⁵ | 6.4 × 10⁻⁶⁷ | 6.7 × 10⁻⁶⁷ |
Table 12. Approximate roots of f₆(ς) computed with MN1–MN3, up to 25 decimal places.

| Root | Approximate root after second iteration using simultaneous methods MN1–MN3 |
|---|---|
| ς_1^(2) | 0.9999999999999625082889225593 + 5.230898364697391997065172835 × 10⁻¹³i |
| ς_2^(2) | −1.000000000000000000437788933 − 6.025279497445305009774428506 × 10⁻¹⁹i |
| ς_3^(2) | 4.646633054497955014523803834 × 10⁻²³ − 1.000000000000000000000104279i |
| ς_4^(2) | 7.070081680587045867351347555452430 + 1.896634372172127350402444677884i |
| ς_5^(2) | 0.707106781188722682564395565555579 + 0.70710678111831121600988084789i |
| ς_6^(2) | 0.707106781186547524204653788888888 − 0.707106708811865475234335246205i |
| ς_7^(2) | −0.707106781186225449342615220000005 + 0.70710678118305664052862413909i |
| ς_8^(2) | −0.707106781186547522667534642222232 − 0.707106781122865475118767454190i |
| ς_9^(2) | 0.3078726620356213541889978222222356 + 2.57715246122258727612850281995i |
| ς_10^(2) | 0.272083407184584274611022222616488 × 10⁻¹ + 3.017417711237066199457739396i |
| ς_11^(2) | 0.999998299931370519781229455555571 + 1.999995421705299555855269774625i |
| ς_12^(2) | 0.9999999999999999966472269015554873 − 1.999999999999999999407695317157i |
Table 13. Residual error comparison of the parallel numerical schemes NN and MN1–MN3.

| Method | CPU-Time | e_1^(2) | e_2^(2) | e_3^(2) | e_4^(2) | e_5^(2) | e_6^(2) | e_7^(2) | e_8^(2) |
|---|---|---|---|---|---|---|---|---|---|
| NN | 0.506 | 7.3 × 10⁻⁴² | 8.1 × 10⁻⁶¹ | 4.0 × 10⁻³⁴ | 1.1 × 10⁻³⁵ | 3.0 × 10⁻²⁷ | 4.1 × 10⁻³⁷ | 1.0 × 10⁻³¹ | 5.0 × 10⁻³⁸ |
| MN1 | 0.216 | 6.0 × 10⁻⁵⁵ | 9.9 × 10⁻⁷⁰ | 3.0 × 10⁻⁴² | 6.1 × 10⁻³⁹ | 2.0 × 10⁻³⁷ | 5.0 × 10⁻³⁸ | 1.2 × 10⁻⁴⁰ | 5.0 × 10⁻⁴¹ |
| MN2 | 0.266 | 5.0 × 10⁻⁵⁷ | 9.8 × 10⁻⁷² | 6.0 × 10⁻⁴³ | 8.1 × 10⁻³⁹ | 4.1 × 10⁻³⁸ | 4.1 × 10⁻³⁸ | 5.0 × 10⁻⁴¹ | 3.0 × 10⁻⁴² |
| MN3 | 0.259 | 3.5 × 10⁻⁵⁸ | 5.0 × 10⁻⁷⁸ | 4.4 × 10⁻⁴⁴ | 1.9 × 10⁻³⁹ | 0.0 × 10⁻³⁰ | 5.0 × 10⁻³⁷ | 7.0 × 10⁻⁴⁰ | 4.0 × 10⁻⁴⁷ |
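The CPU-Time columns report processor seconds per run. The paper does not specify its timing harness, but a comparable measurement can be sketched with Python's process-time clock (the workload below is a stand-in, not the actual MN1–MN3 iteration):

```python
import time

def measure_cpu_time(fn, *args):
    """Return (result, elapsed CPU seconds) for a single call to fn."""
    start = time.process_time()   # CPU time, unaffected by sleep/IO waits
    result = fn(*args)
    return result, time.process_time() - start

# usage: time any computation, e.g. summing a series as a stand-in workload
value, elapsed = measure_cpu_time(sum, range(100000))
```

Using `process_time` rather than wall-clock time matches the spirit of a CPU-time comparison: it excludes time the process spends descheduled, so repeated runs on a loaded machine remain comparable.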
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Shams, M.; Rafiq, N.; Carpentieri, B.; Ahmad Mir, N. A New Approach to Multiroot Vectorial Problems: Highly Efficient Parallel Computing Schemes. Fractal Fract. 2024, 8, 162. https://doi.org/10.3390/fractalfract8030162

