1 Introduction

Fractional calculus is a generalization of classical calculus, and many of its properties carry over. Many real-life problems can be described by using mathematical tools from fractional calculus, because of the higher degree of freedom they offer compared to classical calculus tools [1, 2].

In order to find the solution \(\bar {x}\in \mathbb {R}\) of a nonlinear equation f(x) = 0, where \(f:I\subseteq \mathbb {R}\longrightarrow \mathbb {R}\) is a continuous function in \(I\subseteq \mathbb {R}\), some fractional Newton-type methods for solving nonlinear equations have been proposed in recent years by using the Riemann-Liouville, Caputo and conformable fractional derivatives (see [3,4,5]). Our goal is to design a conformable vectorial Newton-type method and to compare it with the classical vectorial Newton method in terms of convergence analysis and numerical stability.

Let us first introduce some preliminary concepts related to the scalar conformable derivative. The left conformable fractional derivative of a function \(f:[a,\infty )\longrightarrow \mathbb {R}\), starting from a, of order α ∈ (0,1], \(\alpha ,a,x\in \mathbb {R}\), a < x, is defined as (see [11])

$$ (T_{\alpha}^{a}f)(x)=\lim_{\varepsilon\longrightarrow0}\frac{f(x+\varepsilon(x-a)^{1-\alpha})-f(x)}{\varepsilon}. $$
(1)

If this limit exists, f is said to be α-differentiable. If f is differentiable, \((T_{\alpha }^{a}f)(x)=(x-a)^{1-\alpha }f^{\prime }(x)\). If f is α-differentiable in (a,b), for some \(b\in \mathbb {R}\), then \((T_{\alpha }^{a}f)(a)= \underset {x\rightarrow a^+}{\lim }(T_{\alpha }^{a}f)(x)\). It is also easy to see that \(T_{\alpha }^{a}C=0\), where C is a constant.
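Definition (1) and the relation \((T_{\alpha }^{a}f)(x)=(x-a)^{1-\alpha }f^{\prime }(x)\) can be checked numerically. The following Python sketch is our own illustration (the function names are ours, not from the paper); it approximates the limit in (1) with a small finite ε:

```python
def conformable_derivative(f, x, a, alpha, eps=1e-7):
    """Approximate the left conformable derivative (T_alpha^a f)(x)
    by taking a small finite eps in the limit definition (1)."""
    return (f(x + eps * (x - a) ** (1 - alpha)) - f(x)) / eps

# For a differentiable f, the result should match (x - a)^(1 - alpha) * f'(x).
f = lambda t: t ** 3
x, a, alpha = 2.0, 0.5, 0.7
approx = conformable_derivative(f, x, a, alpha)
exact = (x - a) ** (1 - alpha) * 3 * x ** 2
```

The same routine also reproduces \(T_{\alpha }^{a}C=0\) for a constant function, since the numerator vanishes identically.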

The conformable fractional derivative is arguably the most natural definition of a fractional derivative, and it involves a low computational cost because it does not require the evaluation of special functions, such as the Gamma or Mittag-Leffler functions.

Recently, a fractional Newton-type method by using conformable derivative has been designed for solving nonlinear equations in [5] with the following iterative expression:

$$ x_{k+1}=a+\left( (x_{k}-a)^{\alpha}-\alpha\frac{f(x_{k})}{(T_{\alpha}^{a}f)(x_{k})}\right)^{1/\alpha},\quad k=0,1,2,\dots $$
(2)

where \((T_{\alpha }^{a}f)(x_{k})\) is the left conformable fractional derivative of order α, α ∈ (0,1], starting at a, with a < xk, ∀k. When α = 1, the classical Newton-Raphson method is obtained.
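As an illustration only (this sketch and its names are ours; the scheme assumes a < xk and a positive radicand at every step), iteration (2) can be written in Python, obtaining the conformable derivative from \((T_{\alpha }^{a}f)(x)=(x-a)^{1-\alpha }f^{\prime }(x)\):

```python
def conformable_newton(f, df, x0, a, alpha, tol=1e-12, max_iter=100):
    """Scalar conformable Newton-type method (2).
    df is the classical derivative of f; the conformable derivative
    is recovered as (x - a)**(1 - alpha) * df(x)."""
    x = x0
    for _ in range(max_iter):
        Tf = (x - a) ** (1 - alpha) * df(x)
        x_new = a + ((x - a) ** alpha - alpha * f(x) / Tf) ** (1 / alpha)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = x^2 - 2, root sqrt(2); with alpha = 1 this is Newton-Raphson.
root = conformable_newton(lambda x: x * x - 2, lambda x: 2 * x, 2.0, 0.0, 0.8)
```

Setting α = 1 in the call reproduces the classical Newton-Raphson iterates.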

In [5], the quadratic convergence of this method is proven by using a suitable conformable Taylor series (see [6]), as stated in the next result.

Theorem 1

([5]) Let \(f:I\subseteq \mathbb {R}\longrightarrow \mathbb {R}\) be a continuous function in the interval \(I\subseteq \mathbb {R}\) containing the zero \(\bar {x}\) of f(x). Let \((T_{\alpha }^{a}f)(x)\) be the conformable fractional derivative of f(x) starting from a, with order α, for any α ∈ (0,1]. Let us suppose that \((T_{\alpha }^{a}f)(x)\) is continuous and not null at \(\bar {x}\). If an initial approximation x0 is sufficiently close to \(\bar {x}\), then the local order of convergence of the conformable fractional Newton-type method

$$ x_{k+1}=a+\left( (x_{k}-a)^{\alpha}-\alpha\frac{f(x_{k})}{(T_{\alpha}^{a}f)(x_{k})}\right)^{1/\alpha},\quad k=0,1,2,\dots $$

is at least 2, where 0 < α ≤ 1, and the error equation is

$$ e_{k+1}=\alpha(\bar{x}-a)^{\alpha-1}C_{2}{e_{k}^{2}}+O\left( {e_{k}^{3}}\right), $$
(3)

where \(C_{j}=\frac {1}{j!\alpha ^{j-1}}\frac {(T_{\alpha }^{a}f)^{(j)}(\bar {x})}{(T_{\alpha }^{a}f)(\bar {x})}\) for \(j=2,3,4,\dots \)

Remark 1

It can be shown, by using the conformable product and chain rules stated in [11], that the asymptotic constant of the error equation (3) can be expressed as

$$ \begin{array}{@{}rcl@{}} \alpha(\bar{x}-a)^{\alpha-1}C_{2}&=&\alpha(\bar{x}-a)^{\alpha-1}\frac{1}{2\alpha}\frac{(T_{\alpha}^{a}f)^{(2)}(\bar{x})}{(T_{\alpha}^{a}f)(\bar{x})} \\ &=&\frac{\alpha(\bar{x}-a)^{\alpha-1}}{2\alpha} \left[\frac{(\bar{x}-a)^{2-2\alpha}f^{\prime\prime}(\bar{x})+(1-\alpha)(\bar{x}-a)^{1-2\alpha}f^{\prime}(\bar{x})}{(\bar{x}-a)^{1-\alpha}f^{\prime}(\bar{x})}\right] \\ &=&\frac{1}{2}\left[\frac{f^{\prime\prime}(\bar{x})}{f^{\prime}(\bar{x})}+\frac{1-\alpha}{\bar{x}-a}\right] \\ &=&c_{2}+\frac{1}{2}\frac{1-\alpha}{\bar{x}-a}, \end{array} $$
(4)

where \(c_{j}=\frac {1}{j!}\frac {f^{(j)}(\bar {x})}{f^{\prime }(\bar {x})}\) for \(j=2,3,4,\dots \) is the classical asymptotic error constant; in this case, j = 2. It can also be proven that the error equation of iterative scheme (2), obtained by using the classical Taylor series, is:

$$ e_{k+1}=\left( c_{2}+\frac{1}{2}\frac{1-\alpha}{\bar{x}-a}\right){e_{k}^{2}}+O\left( {e_{k}^{3}}\right). $$
(5)

So, (4) and (5) show that the error equations obtained with both Taylor series (the classical one and that provided in [6]) are the same.
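The coincidence of (4) and (5) can also be observed numerically. In the following sketch of ours, scheme (2) is applied to f(x) = x2 − 2 with a = 0 and α = 0.8, and the ratio \(e_{k+1}/{e_{k}^{2}}\) is compared with the constant \(c_{2}+\frac {1}{2}\frac {1-\alpha }{\bar {x}-a}\):

```python
alpha, a = 0.8, 0.0
root = 2 ** 0.5                                  # zero of f(x) = x^2 - 2
f = lambda x: x * x - 2
Tf = lambda x: (x - a) ** (1 - alpha) * 2 * x    # conformable derivative of f

errors, x = [], 2.0
for _ in range(6):
    errors.append(abs(x - root))
    x = a + ((x - a) ** alpha - alpha * f(x) / Tf(x)) ** (1 / alpha)

c2 = 2 / (2 * 2 * root)                          # f''(root) / (2 f'(root))
constant = c2 + 0.5 * (1 - alpha) / (root - a)   # constant in (4) and (5)

# pick consecutive errors small enough for the asymptotics to hold,
# yet large enough that double-precision rounding is negligible
k = max(i for i, e in enumerate(errors) if e > 1e-4)
ratio = errors[k + 1] / errors[k] ** 2
```

The observed ratio approaches the predicted constant up to higher-order terms in the error.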

Remark 2

As predicted by Traub [7] (Theorem 2-8), since the conformable Newton-type method proposed in [5] and the classical one have the same order of convergence, the asymptotic error constant of the conformable Newton-type method equals the asymptotic error constant of the classical one plus an additional term.

In both error equations, (3) and (5), when α = 1 we recover the error equation of the classical Newton method. In this work, we use both Taylor series to carry out the convergence analysis, in this case for a vector valued function.

The method proposed in [5], as seen in Theorem 1, can only be used to solve scalar nonlinear problems. In order to design a conformable vectorial Newton method to find the solution \(\bar {x}\in \mathbb {R}^{n}\) of a nonlinear system \(F(x)=\hat {0}\), with coordinate functions \(f_{1},\dots ,f_{n}\), where \(F:D\subseteq \mathbb {R}^{n}\longrightarrow \mathbb {R}^{n}\) is a sufficiently Fréchet-differentiable function in an open convex set D, we first state the existing concepts and results that will be necessary.

First, for the convergence analysis of methods for nonlinear systems by means of the classical Taylor series, the following notation can be found in [8, 9]:

Definition 1

Let \(F:D\subseteq \mathbb {R}^{n}\longrightarrow \mathbb {R}^{n}\) be sufficiently Fréchet-differentiable in D. The q th derivative of F at \(u\in \mathbb {R}^{n}\), \(q\in \mathbb {N}\), q ≥ 1, is the q-linear function \(F^{(q)}(u):\mathbb {R}^{n}\times \cdots \times \mathbb {R}^{n}\longrightarrow \mathbb {R}^{n}\) such that \(F^{(q)}(u)(v_{1},\dots ,v_{q})\in \mathbb {R}^{n}\). It can be observed that:

  1.

    \(F^{(q)}(u)(v_{1},\dots ,v_{q-1},\cdot )\in {\mathscr{L}}(\mathbb {R}^{n})\), being \({\mathscr{L}}(\mathbb {R}^{n})\) the space of linear mappings of \(\mathbb {R}^{n}\longrightarrow \mathbb {R}^{n}\).

  2.

    \(F^{(q)}(u)(v_{\sigma _{1}},\dots ,v_{\sigma _{q}})=F^{(q)}(u)(v_{1},\dots ,v_{q})\), for any permutation σ of \(\{1,\dots ,q\}\).

From properties above, we can use the following notation:

  1.

    \(F^{(q)}(u)(v_{1},\dots ,v_{q})=F^{(q)}(u)v_{1}{\cdots } v_{q}\).

  2.

    \(F^{(q)}(u)v^{q-1}F^{(p)}(u)v^{p}=F^{(q)}(u)F^{(p)}(u)v^{q+p-1}\).

In [10], we can find a definition of the conformable partial derivative, as shown next:

Definition 2

Let f be a function of n variables \(x_{1},\dots ,x_{n}\). The conformable partial derivative of f of order α ∈ (0,1] with respect to xi > a = 0 is defined as:

$$ \frac{\partial_{0}^{\alpha}}{\partial x_{i}^{\alpha}}f(x_{1},\dots,x_{n})=\lim_{\epsilon\rightarrow 0}\frac{f(x_{1},\dots,x_{i}+\epsilon x_{i}^{1-\alpha},\dots,x_{n})-f(x_{1},\dots,x_{n})}{\epsilon}, $$
(6)

In [10], the conformable Jacobian matrix is also defined:

Definition 3

Let f and g be functions of two variables x and y whose partial derivatives exist and are continuous, with x > a1 and y > a2, where \(a=(a_{1},a_{2})=(0,0)=\hat {0}\). Then the conformable Jacobian matrix is given by:

$$ \begin{array}{@{}rcl@{}} F_{\hat{0}}^{\alpha(1)}(x)= \left( \begin{array}{cc} \frac{\partial_{0}^{\alpha}f}{\partial x^{\alpha}} & \frac{\partial_{0}^{\alpha}f}{\partial y^{\alpha}} \\ \frac{\partial_{0}^{\alpha}g}{\partial x^{\alpha}} & \frac{\partial_{0}^{\alpha}g}{\partial y^{\alpha}} \end{array}\right)= \left( \begin{array}{cc} x^{1-\alpha}\frac{\partial f}{\partial x} & y^{1-\alpha}\frac{\partial f}{\partial y} \\ x^{1-\alpha}\frac{\partial g}{\partial x} & y^{1-\alpha}\frac{\partial g}{\partial y} \end{array}\right), \end{array} $$
(7)

This can be directly extended to higher dimensions and, as will be seen in the next section, a need not be null.

Another necessary concept, the Hadamard product, can be found in [12]:

Definition 4

Let A = (aij)m×n and B = (bij)m×n be m × n matrices. The Hadamard product is defined by \(A\odot B:=(a_{ij}b_{ij})_{m\times n}\).

Remark 3

An analogous concept to the Hadamard product is the Hadamard power, where \(A^{\odot r}=\underbrace {A\odot A\odot \cdots \odot A}_{r\text { times}}\) for \(r\in \mathbb {N}\); element-wise, \(A^{\odot r}=(a_{ij}^{r})_{m\times n}\), which extends the definition to any \(r\in \mathbb {R}\).
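In NumPy terms (our illustration, not part of the original work), the Hadamard product and the Hadamard power are simply the element-wise `*` and `**` operations:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

had_prod = A * B      # Hadamard product: (a_ij * b_ij)
had_pow = A ** 0.5    # Hadamard power with real exponent r = 0.5: (a_ij ** r)
```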

In the next section, the new concepts and results needed to design a vectorial conformable Newton-type method are stated.

In this manuscript, the design and convergence analysis of the proposed method are presented in Section 3, the numerical tests and numerical stability are discussed in Section 4, and the conclusions are given in Section 5.

2 New concepts and results

Since in (6) we have \(x_{i}\in (0,\infty )\), we can define the conformable partial derivative in \(x_{i}\in (a,\infty )\) as follows:

Definition 5

Let f be a function of n variables \(x_{1},\dots ,x_{n}\). The conformable partial derivative of f of order 0 < α ≤ 1 in \(x_{i}\in (a,\infty )\) is defined as

$$ \frac{\partial_{a}^{\alpha}}{\partial x_{i}^{\alpha}}f(x_{1},\dots,x_{n})= \lim_{\epsilon\rightarrow 0}\frac{f(x_{1},\dots,x_{i}+\epsilon(x_{i}-a)^{1-\alpha},\dots,x_{n})-f(x_{1},\dots,x_{n})}{\epsilon}. $$
(8)

In the case xi = a, \(\frac {\partial _{a}^{\alpha }}{\partial x_{i}^{\alpha }}f(x_{1},\dots ,a,\dots ,x_{n})=\underset {x_{i}\rightarrow a^{+}}{\lim }\frac {\partial _{a}^{\alpha }}{\partial x_{i}^{\alpha }}f(x_{1},\dots ,x_{i},\dots ,x_{n})\).

This derivative is linear, and the product, quotient and chain rules are satisfied, just as for the conformable derivative given in [11]. In the next result, a relation between the classical partial derivative and the conformable partial derivative is stated:

Theorem 2

Let f be a differentiable function of n variables \(x_{1},\dots ,x_{n}\), with xi > a. Then,

$$ \frac{\partial_{a}^{\alpha}}{\partial x_{i}^{\alpha}}f(x_{1},\dots,x_{n})=(x_{i}-a)^{1-\alpha}\frac{\partial}{\partial x_{i}}f(x_{1},\dots,x_{n}). $$
(9)

Proof

Let \(h=\epsilon(x_{i}-a)^{1-\alpha}\), so that \(\epsilon=h(x_{i}-a)^{\alpha-1}\). Then we have

$$ \begin{array}{@{}rcl@{}} \frac{\partial_{a}^{\alpha}}{\partial x_{i}^{\alpha}}f(x_{1},\dots,x_{n})&=& \lim_{h\rightarrow 0}\frac{f(x_{1},\dots,x_{i}+h,\dots,x_{n})-f(x_{1},\dots,x_{n})}{h(x_{i}-a)^{\alpha-1}} \\ &=&(x_{i}-a)^{1-\alpha}\lim_{h\rightarrow 0}\frac{f(x_{1},\dots,x_{i}+h,\dots,x_{n})-f(x_{1},\dots,x_{n})}{h} \\ &=&(x_{i}-a)^{1-\alpha}\frac{\partial}{\partial x_{i}}f(x_{1},\dots,x_{n}). \end{array} $$
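Relation (9) can be verified numerically; the sketch below (with our own naming, for illustration) approximates (8) with a small finite ε for \(f(x_{1},x_{2})=x_{1}^{2}x_{2}\):

```python
def conf_partial(f, x, i, a, alpha, eps=1e-7):
    """Approximate the conformable partial derivative (8) w.r.t. x_i, for x_i > a."""
    xp = list(x)
    xp[i] += eps * (x[i] - a) ** (1 - alpha)
    return (f(xp) - f(x)) / eps

f = lambda x: x[0] ** 2 * x[1]
x, a, alpha = [2.0, 3.0], 0.5, 0.6

approx = conf_partial(f, x, 0, a, alpha)
exact = (x[0] - a) ** (1 - alpha) * (2 * x[0] * x[1])   # right-hand side of (9)
```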

We can also define the conformable Jacobian matrix for \(x_{1}\in (a_{1},\infty )\) and \(x_{2}\in (a_{2},\infty )\), where x = (x1,x2) and a = (a1,a2):

Definition 6

Let f and g be coordinate functions of a vector valued function \(F:\mathbb {R}^{2}\longrightarrow \mathbb {R}^{2}\) in variables x1 > a1 and x2 > a2, where x = (x1,x2) and a = (a1,a2), such that their respective partial derivatives exist and are continuous. Then, the conformable Jacobian matrix is given by

$$ \begin{array}{@{}rcl@{}} F_{a}^{\alpha(1)}(x)= \left( \begin{array}{cc} \frac{\partial_{a_{1}}^{\alpha}f}{\partial x_{1}^{\alpha}} & \frac{\partial_{a_{2}}^{\alpha}f}{\partial x_{2}^{\alpha}} \\ \frac{\partial_{a_{1}}^{\alpha}g}{\partial x_{1}^{\alpha}} & \frac{\partial_{a_{2}}^{\alpha}g}{\partial x_{2}^{\alpha}} \end{array}\right)= \left( \begin{array}{cc} (x_{1}-a_{1})^{1-\alpha}\frac{\partial f}{\partial x_{1}} & (x_{2}-a_{2})^{1-\alpha}\frac{\partial f}{\partial x_{2}} \\ (x_{1}-a_{1})^{1-\alpha}\frac{\partial g}{\partial x_{1}} & (x_{2}-a_{2})^{1-\alpha}\frac{\partial g}{\partial x_{2}} \end{array}\right). \end{array} $$
(10)

This can be directly extended to higher dimensions.
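By Theorem 2, the conformable Jacobian (10) is the classical Jacobian with its i th column scaled by \((x_{i}-a_{i})^{1-\alpha }\). A NumPy sketch of ours (an illustration; function names are not from the paper), using the test function F2 of Section 4:

```python
import numpy as np

def conformable_jacobian(classical_jac, x, a, alpha):
    """Build (10): scale column i of the classical Jacobian by (x_i - a_i)^(1 - alpha).
    Row-vector broadcasting applies the factor column-wise."""
    return classical_jac * (x - a) ** (1 - alpha)

# Example: F(x1, x2) = (x1^2 + x2^2 - 1, x1^2 - x2^2 - 1/2)
x = np.array([2.0, 2.5])
a = np.array([-10.0, -10.0])
J = np.array([[2 * x[0], 2 * x[1]],
              [2 * x[0], -2 * x[1]]])
Ja = conformable_jacobian(J, x, a, 0.9)
```

With α = 1 the scaling factors equal 1 and the classical Jacobian is recovered.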

To analyze the convergence of methods for nonlinear systems by using a conformable Taylor series, we use the following notation, analogous to Definition 1:

Definition 7

Let \(F:D\subseteq \mathbb {R}^{n}\longrightarrow \mathbb {R}^{n}\) be sufficiently α-differentiable in D. The q th conformable derivative of F at \(u\in \mathbb {R}^{n}\) is the α(q)-linear function \(F_{a}^{\alpha (q)}(u):\mathbb {R}^{n}\times \cdots \times \mathbb {R}^{n}\longrightarrow \mathbb {R}^{n}\) such that \(F_{a}^{\alpha (q)}(u)(v_{1},\dots ,v_{q})\in \mathbb {R}^{n}\). It can be observed that:

  1.

    \(F_{a}^{\alpha (q)}(u)(v_{1},\dots ,v_{q-1},\cdot )\in {\mathscr{L}}(\mathbb {R}^{n})\), being \({\mathscr{L}}(\mathbb {R}^{n})\) the space of linear mappings of \(\mathbb {R}^{n}\longrightarrow \mathbb {R}^{n}\).

  2.

    \(F_{a}^{\alpha (q)}(u)(v_{\sigma _{1}},\dots ,v_{\sigma _{q}})=F_{a}^{\alpha (q)}(u)(v_{1},\dots ,v_{q})\), for any permutation σ of \(\{1,\dots ,q\}\).

From properties above, we can use the following notation:

  1.

    \(F_{a}^{\alpha (q)}(u)(v_{1},\dots ,v_{q})=F_{a}^{\alpha (q)}(u)v_{1}{\cdots } v_{q}\).

  2.

    \(F_{a}^{\alpha (q)}(u)v^{q-1}F_{a}^{\alpha (p)}(u)v^{p}=F_{a}^{\alpha (q)}(u)F_{a}^{\alpha (p)}(u)v^{q+p-1}\).

To define a conformable Taylor series for a vector valued function, we proceed in a similar way as in Theorem 4.1 from [11].

Theorem 3

Let us suppose that \(F:\mathbb {R}^{n}\longrightarrow \mathbb {R}^{n}\) is an infinitely α-differentiable vector valued function, for some α ∈ (0,1], around a point \(t_{0}\in \mathbb {R}^{n}\). Then, F has the conformable Taylor power series

$$ F(t)=\sum\limits_{k=0}^{\infty}\frac{F_{t_{0}}^{\alpha(k)}(t_{0})(t-t_{0})^{k\alpha}}{\alpha^{k}k!}, $$
(11)

where \(F_{t_{0}}^{\alpha (k)}(t_{0})\) denotes the conformable derivative applied k times.

Proof

Let F(t) = K0 + K1(tt0)α + K2(tt0)2α + K3(tt0)3α + ⋯. Then, F(t0) = K0.

Applying the conformable derivative once to F and evaluating at t0, we obtain \(F_{t_{0}}^{\alpha (1)}(t_{0})=K_{1}\alpha \), so \(K_{1}=\frac {F_{t_{0}}^{\alpha (1)}(t_{0})}{\alpha }\).

Applying the conformable derivative twice to F and evaluating at t0, we obtain \(F_{t_{0}}^{\alpha (2)}(t_{0})=2K_{2}\alpha ^{2}\), so \(K_{2}=\frac {F_{t_{0}}^{\alpha (2)}(t_{0})}{2\alpha ^{2}}\). Proceeding by induction, we have

$$ K_{n}=\frac{F_{t_{0}}^{\alpha(n)}(t_{0})}{\alpha^{n}n!},~n\in\mathbb{N}. $$
(12)

So, (11) is obtained. □

Thus, F(t) in (11) may be written as

$$ F(t)=F(t_{0})+\frac{F_{t_{0}}^{\alpha(1)}(t_{0})}{\alpha}(t-t_{0})^{\alpha}+\frac{F_{t_{0}}^{\alpha(2)}(t_{0})}{2\alpha^{2}}(t-t_{0})^{2\alpha}+\dots $$

As may be seen, the conformable derivatives start at t0, which is also the point where they are evaluated. This must be avoided in order to define a conformable Newton-type iterative method.

Proceeding as in [6] (Theorem 4.1), we can obtain a new Taylor series from Theorem 3, where the conformable derivatives start at some point \(a=(a_{1},\dots ,a_{n})\in \mathbb {R}^{n}\) different from the point \(b=(b_{1},\dots ,b_{n})\in \mathbb {R}^{n}\) where they are evaluated:

Theorem 4

Let \(F:\mathbb {R}^{n}\longrightarrow \mathbb {R}^{n}\) be an infinitely α-differentiable vector valued function, for some α ∈ (0,1], around a point \(b_{i}\in (a_{i},\infty )\), \(\forall i=1,\dots ,n\), where \(a=(a_{1},\dots ,a_{n})\in \mathbb {R}^{n}\) and \(b=(b_{1},\dots ,b_{n})\in \mathbb {R}^{n}\). Then, F has the conformable Taylor power series

$$ F(t)=F(b)+\frac{F_{a}^{\alpha(1)}(b)}{\alpha}{\Delta}+\frac{F_{a}^{\alpha(2)}(b)}{2!\alpha^{2}}{\Delta}^{2}+\cdots, $$
(13)

where \({\Delta }=H^{\odot \alpha }-L^{\odot \alpha }\), with H = t − a and L = b − a, ⊙ being the Hadamard power.

Proof

Setting t0 = a in (11),

$$ F(t)=F(a)+\frac{F_{a}^{\alpha(1)}(a)}{\alpha}(t-a)^{\alpha}+\frac{F_{a}^{\alpha(2)}(a)}{2\alpha^{2}}(t-a)^{2\alpha}+\cdots $$
(14)

Evaluating (14) at b,

$$ F(b)=F(a)+\frac{F_{a}^{\alpha(1)}(a)}{\alpha}(b-a)^{\alpha}+\frac{F_{a}^{\alpha(2)}(a)}{2\alpha^{2}}(b-a)^{2\alpha}+\cdots, $$
(15)

isolating F(a), we get

$$ F(a)=F(b)-\frac{F_{a}^{\alpha(1)}(a)}{\alpha}(b-a)^{\alpha}-\frac{F_{a}^{\alpha(2)}(a)}{2\alpha^{2}}(b-a)^{2\alpha}-\cdots $$
(16)

Applying the conformable derivative once and twice to F, starting at a, we obtain, respectively,

$$ F_{a}^{\alpha(1)}(a)=F_{a}^{\alpha(1)}(b)-\frac{F_{a}^{\alpha(2)}(a)}{\alpha}(b-a)^{\alpha}-\frac{F_{a}^{\alpha(3)}(a)}{2\alpha^{2}}(b-a)^{2\alpha}-\dots $$
(17)

and

$$ F_{a}^{\alpha(2)}(a)=F_{a}^{\alpha(2)}(b)-\frac{F_{a}^{\alpha(3)}(a)}{\alpha}(b-a)^{\alpha}-\frac{F_{a}^{\alpha(4)}(a)}{2\alpha^{2}}(b-a)^{2\alpha}-\dots $$
(18)

Substituting (16), (17) and (18) into (14), so that all derivatives evaluated at a are replaced by derivatives evaluated at b, we obtain (13), which can be written as

$$ \begin{array}{@{}rcl@{}} F(t)&=&F(b)+\frac{F_{a}^{\alpha(1)}(b)}{\alpha} \left[(t-a)^{\odot\alpha}-(b-a)^{\odot\alpha}\right]\\ &&+\frac{F_{a}^{\alpha(2)}(b)}{2\alpha^{2}}\left[(t-a)^{\odot\alpha}-(b-a)^{\odot\alpha}\right]^{2}+\cdots,\\ &=&F(b)+\frac{F_{a}^{\alpha(1)}(b)}{\alpha}{\Delta}+\frac{F_{a}^{\alpha(2)}(b)}{2\alpha^{2}}{\Delta}^{2}+\cdots, \end{array} $$

and the proof is finished. □
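For the scalar case n = 1, expansion (13) can be checked numerically. In the following sketch of ours, F(t) = et, a = 0, b = 1 and α = 0.8; the iterated conformable derivatives \(F_{a}^{\alpha (1)}(b)\) and \(F_{a}^{\alpha (2)}(b)\) were worked out by hand from \((T_{\alpha }^{0}f)(t)=t^{1-\alpha }f^{\prime }(t)\):

```python
from math import exp

a, b, alpha, t = 0.0, 1.0, 0.8, 1.1

# Iterated conformable derivatives of F(t) = exp(t), starting at a = 0,
# evaluated at b (hand-derived for alpha = 0.8):
F_b = exp(b)                                  # F(b)
D1 = b ** (1 - alpha) * exp(b)                # (T F)(b)    = t^0.2 e^t at t = b
D2 = (0.2 * b ** (-0.6) + b ** 0.4) * exp(b)  # (T(T F))(b) = (0.2 t^-0.6 + t^0.4) e^t

delta = (t - a) ** alpha - (b - a) ** alpha   # Delta in (13), scalar case

series1 = F_b + D1 / alpha * delta                    # first-order truncation
series2 = series1 + D2 / (2 * alpha ** 2) * delta ** 2  # second-order truncation
```

The second-order truncation should approximate exp(1.1) noticeably better than the first-order one.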

Remark 4

With these expressions, we can write the Taylor power series expansion of F around the solution \(\bar {x}\), provided that the conformable Jacobian matrix \(F_{a}^{\alpha (1)}(\bar {x})\) is nonsingular, as shown next:

$$ F\left( x^{(k)}\right)=\frac{F_{a}^{\alpha(1)}(\bar{x})}{\alpha}\left[{\Delta}+C_{2}{\Delta}^{2}+C_{3}{\Delta}^{3}+\cdots+C_{p}{\Delta}^{p}\right]+O\left( {e^{(k)}}^{p+1}\right), $$
(19)

where \({\Delta }=H^{\odot \alpha }-L^{\odot \alpha }\), with \(H=x^{(k)}-a\), \(L=\bar {x}-a\) and \(e^{(k)}=x^{(k)}-\bar {x}\), ⊙ being the Hadamard power, C1 = I, and \(C_{q}=\frac {1}{q!\alpha ^{q-1}}\left [F_{a}^{\alpha (1)}(\bar {x})\right ]^{-1}F_{a}^{\alpha (q)}(\bar {x})\), q ≥ 2.

Remark 5

By using Definition 7, Theorem 2 and Hadamard power, we obtain

$$ F_{a}^{\alpha(1)}(x)=(x-a)^{\odot(1-\alpha)}F^{\prime}(x), $$
(20)

and

$$ F_{a}^{\alpha(1)}(a)=\underset{x\rightarrow a^+}{\lim}(x-a)^{\odot(1-\alpha)}F^{\prime}(x), $$
(21)

respectively, for a vector valued function F, being \(F^{\prime }(x)\) the classical Jacobian matrix. Note that, in (21) \(x\rightarrow a^{+}\) means that \(x_{i}\rightarrow a_{i}^{+}\), \(\forall i=1,\dots ,n\), where \(x=(x_{1},\dots ,x_{n})\in \mathbb {R}^{n}\) and \(a=(a_{1},\dots ,a_{n})\in \mathbb {R}^{n}\).

Moreover, in order to make the convergence analysis of our main proposal, another concept must be introduced.

Theorem 5

Let \(x,y\in \mathbb {R}^{n}\), \(r\in \mathbb {R}\), and be ⊙ the Hadamard power/product. The Newton’s binomial theorem for vector values and fractional power is given by

$$ (x+y)^{\odot r}=\sum\limits_{k=0}^{\infty}\binom{r}{k}x^{\odot(r-k)}\odot y^{\odot k},\quad k\in\{0\}\cup\mathbb{N}, $$
(22)

being the generalized binomial coefficient (see [13])

$$ \binom{r}{k}=\frac{\Gamma(r+1)}{k!{\Gamma}(r-k+1)},\quad k\in\{0\}\cup\mathbb{N}. $$
(23)

Proof

Since the Hadamard power/product is an element-wise power/product, the proof is analogous to the classical one. □
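A numerical sanity check of (22) and (23) (our sketch; for non-integer r the sum is an infinite series, and element-wise |yi| < |xi| is assumed so that the truncated sum converges):

```python
import numpy as np
from math import gamma, factorial

def gen_binom(r, k):
    """Generalized binomial coefficient (23)."""
    return gamma(r + 1) / (factorial(k) * gamma(r - k + 1))

def hadamard_binomial(x, y, r, terms=60):
    """Truncated series (22) for the Hadamard power (x + y) to the r, element-wise.
    Requires |y_i| < |x_i| element-wise for convergence."""
    s = np.zeros_like(x)
    for k in range(terms):
        s = s + gen_binom(r, k) * x ** (r - k) * y ** k
    return s

x = np.array([2.0, 3.0])
y = np.array([0.5, -1.0])
approx = hadamard_binomial(x, y, 0.7)   # should match (x + y) ** 0.7 element-wise
```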

In next section, we deduce the conformable Newton-type method for solving nonlinear systems.

3 Design and convergence analysis

Proceeding as in [5], let us consider the approximation of the function F by the Taylor power series (13) up to order one, evaluated at the solution \(\bar {x}\), as follows:

$$ F(x)\approx F(\bar{x})+\frac{F_{a}^{\alpha(1)}(\bar{x})}{\alpha}{\Delta}. $$
(24)

As \(F(\bar {x})=\hat {0}\), and \({\Delta }=H^{\odot \alpha }-L^{\odot \alpha }\) with H = x − a and \(L=\bar {x}-a\),

$$ F(x)\approx\frac{F_{a}^{\alpha(1)}(\bar{x})}{\alpha}\left[(x-a)^{\odot\alpha}-(\bar{x}-a)^{\odot\alpha}\right]. $$
(25)

Multiplying both sides of (25) by \(\alpha \left [F_{a}^{\alpha (1)}(\bar {x})\right ]^{-1}\) from the left,

$$ \alpha\left[F_{a}^{\alpha(1)}(\bar{x})\right]^{-1}F(x)\approx(x-a)^{\odot\alpha}-(\bar{x}-a)^{\odot\alpha}. $$
(26)

Isolating \(\bar {x}\) from \((\bar {x}-a)^{\odot \alpha }\),

$$ \bar{x}\approx a+\left( (x-a)^{\odot\alpha}-\alpha\left[F_{a}^{\alpha(1)}(\bar{x})\right]^{-1}F(x)\right)^{\odot1/\alpha}. $$
(27)

Regarding the iterates x(k) and x(k+ 1) as approximations of the solution \(\bar {x}\), we obtain the conformable Newton-type method for nonlinear systems:

$$ x^{(k+1)}=a+\left[\left( x^{(k)}-a\right)^{\odot\alpha}-\alpha\left[F_{a}^{\alpha(1)}\left( x^{(k)}\right)\right]^{-1}F\left( x^{(k)}\right)\right]^{\odot1/\alpha},~~k=0,1,2,\dots $$
(28)
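Iteration (28) can be sketched in Python (an illustration of ours; the experiments of Section 4 were run in MATLAB). The conformable Jacobian is built from the classical one via (20), Hadamard powers are element-wise NumPy powers, and a < x(k) component-wise with a positive radicand is assumed at every iteration:

```python
import numpy as np

def conformable_newton_system(F, J, x0, a, alpha, tol=1e-8, max_iter=500):
    """Conformable vectorial Newton-type method (28).
    F: R^n -> R^n; J: classical Jacobian of F."""
    x, a = np.asarray(x0, float), np.asarray(a, float)
    for _ in range(max_iter):
        Ja = J(x) * (x - a) ** (1 - alpha)             # conformable Jacobian via (20)
        step = (x - a) ** alpha - alpha * np.linalg.solve(Ja, F(x))
        x_new = a + step ** (1.0 / alpha)              # Hadamard power 1/alpha
        if np.linalg.norm(F(x_new)) < tol or np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Test system F2 from Section 4: F(x, y) = (x^2 + y^2 - 1, x^2 - y^2 - 1/2)
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1, v[0] ** 2 - v[1] ** 2 - 0.5])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [2 * v[0], -2 * v[1]]])
sol = conformable_newton_system(F, J, [2.0, 2.5], [-10.0, -10.0], 0.9)
```

With α = 0.9 and the initial estimate above, the iterates converge to the root \((\sqrt{3}/2,1/2)\).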

Next, the convergence analysis of the conformable Newton-type method (28) is carried out by using the conformable Taylor series (13) and the classical one.

In the next result, the quadratic convergence of the vectorial Newton-type method (28) is proven by using the conformable Taylor series (13).

Theorem 6

Let \(F:D\subseteq \mathbb {R}^{n}\longrightarrow \mathbb {R}^{n}\) be a continuous function in an open convex set \(D\subseteq \mathbb {R}^{n}\) containing a zero \(\bar {x}\in \mathbb {R}^{n}\) of the vector valued function F(x). Let \(F_{a}^{\alpha (1)}(x)\) be the conformable Jacobian matrix of F starting at \(a\in \mathbb {R}^{n}\), of order α, for any α ∈ (0,1]. Let us suppose that \(F_{a}^{\alpha (1)}(x)\) is continuous and nonsingular at \(\bar {x}\). If an initial approximation \(x^{(0)}\in \mathbb {R}^{n}\) is sufficiently close to \(\bar {x}\), then the local order of convergence of the conformable vectorial Newton method

$$ x^{(k+1)}=a+\left[\left( x^{(k)}-a\right)^{\odot\alpha}-\alpha\left[F_{a}^{\alpha(1)}\left( x^{(k)}\right)\right]^{-1}F\left( x^{(k)}\right)\right]^{\odot1/\alpha},~~k=0,1,2,\dots $$

is at least 2, and the error equation is

$$ e^{(k+1)}=\alpha C_{2}(\bar{x}-a)^{\odot(\alpha-1)}{e^{(k)}}^{2}+O\left( {e^{(k)}}^{3}\right), $$
(29)

being \(C_{j}=\frac {1}{j!\alpha ^{j-1}}\left [F_{a}^{\alpha (1)}(\bar {x})\right ]^{-1}F_{a}^{\alpha (j)}(\bar {x})\), \(j=2,3,4,\dots \), such that a < x(k), ∀k.

Proof

By using Definition 7 and Theorem 4, and regarding \(x^{(k)}=e^{(k)}+\bar {x}\), the conformable Taylor power series expansion of F(x) around \(\bar {x}\) is

$$ \begin{array}{@{}rcl@{}} F\left( x^{(k)}\right)&=&\frac{F_{a}^{\alpha(1)}(\bar{x})}{\alpha}\left[{\Delta}+C_{2}{\Delta}^{2}\right]+O\left( {e^{(k)}}^{3}\right) \\ &=&\frac{F_{a}^{\alpha(1)}(\bar{x})}{\alpha}\left[\left( \left( \bar{x}-a+e^{(k)}\right)^{\odot\alpha}-(\bar{x}-a)^{\odot\alpha}\right)\right. \\ &&+\left.C_{2}\left( \left( \bar{x}-a+e^{(k)}\right)^{\odot\alpha}-(\bar{x}-a)^{\odot\alpha}\right)^{2}\right]+O\left( {e^{(k)}}^{3}\right), \end{array} $$

where \(C_{j}=\frac {1}{j!\alpha ^{j-1}}\left [F_{a}^{\alpha (1)}(\bar {x})\right ]^{-1}F_{a}^{\alpha (j)}(\bar {x})\), \(j=2,3,4,\dots \) Using Theorem 5, that is, (22) and (23), and considering the Hadamard powers (Definition 4 and Remark 3),

$$ \begin{array}{@{}rcl@{}} F\left( x^{(k)}\right)&=&\frac{F_{a}^{\alpha(1)}(\bar{x})}{\alpha}\left[\left( \alpha(\bar{x}-a)^{\odot(\alpha-1)}\right)e^{(k)}\right. \\ &&+\left.\left( \frac{\alpha}{2}(\alpha-1)(\bar{x}-a)^{\odot(\alpha-2)}+\alpha^{2}C_{2}(\bar{x}-a)^{\odot(2\alpha-2)}\right){e^{(k)}}^2\right] + O\left( {e^{(k)}}^{3}\right). \end{array} $$

Regarding (20), and using again Definition 7 and Theorem 5, the conformable Jacobian matrix of \(F\left (x^{(k)}\right )\) can be expressed as

$$ \begin{array}{@{}rcl@{}} F_{a}^{\alpha(1)}\left( x^{(k)}\right)&=&\frac{F_{a}^{\alpha(1)}(\bar{x})}{\alpha}\left[\alpha I+\left( 2\alpha^{2}C_{2}(\bar{x}-a)^{\odot(\alpha-1)}\right)e^{(k)}\right]+O\left( {e^{(k)}}^{2}\right). \end{array} $$

We can set the Taylor power series expansion of \(\left [F_{a}^{\alpha (1)}\left (x^{(k)}\right )\right ]^{-1}\) as

$$ \left[F_{a}^{\alpha(1)}\left( x^{(k)}\right)\right]^{-1}=\left[\frac{1}{\alpha}I+X_{2}e^{(k)}\right]\alpha\left[F_{a}^{\alpha(1)}(\bar{x})\right]^{-1}+O\left( {e^{(k)}}^{2}\right), $$

where X2 is an unknown matrix such that \(\left [F_{a}^{\alpha (1)}\left (x^{(k)}\right )\right ]^{-1}F_{a}^{\alpha (1)}\left (x^{(k)}\right )=I\), which requires the first-order terms to vanish:

$$ \left( 2\alpha C_{2}(\bar{x}-a)^{\odot(\alpha-1)}+\alpha X_{2}\right)e^{(k)}=\hat{0}. $$

Solving for X2,

$$ X_{2}=-2C_{2}(\bar{x}-a)^{\odot(\alpha-1)}. $$

So,

$$ \left[F_{a}^{\alpha(1)}\left( x^{(k)}\right)\right]^{-1} = \left[\frac{1}{\alpha}I + \left( -2C_{2}(\bar{x} - a)^{\odot(\alpha-1)}\right)e^{(k)}\right]\alpha\left[F_{a}^{\alpha(1)}(\bar{x})\right]^{-1}+O\left( {e^{(k)}}^{2}\right). $$

Thus,

$$ \begin{array}{@{}rcl@{}} -\alpha\left[F_{a}^{\alpha(1)}\left( x^{(k)}\right)\right]^{-1}F\left( x^{(k)}\right)&=&-\left( \alpha(\bar{x}-a)^{\odot(\alpha-1)}\right)e^{(k)} \\ &&+\left( \alpha^{2}C_{2}(\bar{x}-a)^{\odot(2\alpha-2)}\right. \\ &&-\left.\frac{\alpha}{2}(\alpha-1)(\bar{x}-a)^{\odot(\alpha-2)}\right){e^{(k)}}^2+O\left( {e^{(k)}}^3\right). \end{array} $$

Then,

$$ \begin{array}{@{}rcl@{}} \left( x^{(k)} - a\right)^{\odot\alpha} - \alpha\left[F_{a}^{\alpha(1)}\left( x^{(k)}\right)\right]^{-1}F\left( x^{(k)}\right)& = &(\bar{x} - a)^{\odot\alpha} + \alpha^{2}C_{2}(\bar{x} - a)^{\odot(2\alpha-2)}{e^{(k)}}^2 \\ &&+O\left( {e^{(k)}}^3\right). \end{array} $$

Using once again Theorem 5,

$$ \begin{array}{@{}rcl@{}} \left[\left( x^{(k)} - a\right)^{\odot\alpha} - \alpha\left[F_{a}^{\alpha(1)}\left( x^{(k)}\right)\right]^{-1}F\left( x^{(k)}\right)\right]^{\odot1/\alpha}& = &\bar{x} - a + \alpha C_{2}(\bar{x} - a)^{\odot(\alpha-1)}{e^{(k)}}^2 \\ &&+O\left( {e^{(k)}}^3\right). \end{array} $$

Let \(x^{(k+1)}=e^{(k+1)}+\bar {x}\),

$$e^{(k+1)}+\bar{x}=a+\bar{x}-a+\alpha C_{2}(\bar{x}-a)^{\odot(\alpha-1)}{e^{(k)}}^{2}+O\left( {e^{(k)}}^3\right).$$

Finally,

$$ e^{(k+1)}=\alpha C_{2}(\bar{x}-a)^{\odot(\alpha-1)}{e^{(k)}}^{2}+O\left( {e^{(k)}}^3\right). $$

And this completes the proof. □

As in (4), it can be shown, by using the product and chain rules and considering (20), that the asymptotic constant in the error equation (29) satisfies

$$ \alpha C_{2}(\bar{x}-a)^{\odot(\alpha-1)}=c_{2}+\frac{1}{2}(1-\alpha)(\bar{x}-a)^{\odot(-1)}, $$

where \(c_{j}=\frac {1}{j!}\left [F^{\prime }(\bar {x})\right ]^{-1}F^{(j)}(\bar {x})\) for \(j=2,3,4,\dots \) is the classical asymptotic error constant for a vector valued function F, and \(F^{\prime }\) is the classical Jacobian matrix. In this case, j = 2.

In the next result, the quadratic convergence of the conformable Newton-type method (28) is stated by using the classical Taylor series:

Corollary 1

Let \(F:D\subseteq \mathbb {R}^{n}\longrightarrow \mathbb {R}^{n}\) be a continuous function in an open convex set \(D\subseteq \mathbb {R}^{n}\) containing a zero \(\bar {x}\in \mathbb {R}^{n}\) of the vector valued function F(x). Let \(F_{a}^{\alpha (1)}(x)\) be the conformable Jacobian matrix of F starting at \(a\in \mathbb {R}^{n}\), of order α, for any α ∈ (0,1]. Let us suppose that \(F_{a}^{\alpha (1)}(x)\) is continuous and nonsingular at \(\bar {x}\). If an initial approximation \(x^{(0)}\in \mathbb {R}^{n}\) is sufficiently close to \(\bar {x}\), then the local order of convergence of the conformable vectorial Newton method

$$ x^{(k+1)}=a+\left[\left( x^{(k)}-a\right)^{\odot\alpha}-\alpha\left[F_{a}^{\alpha(1)}\left( x^{(k)}\right)\right]^{-1}F\left( x^{(k)}\right)\right]^{\odot1/\alpha},~~k=0,1,2,\dots $$

is at least 2, and the error equation is

$$ e^{(k+1)}=\left( c_{2}+\frac{1}{2}(1-\alpha)(\bar{x}-a)^{\odot(-1)}\right){e^{(k)}}^2+O \left( {e^{(k)}}^3\right), $$
(30)

where \(c_{j}=\frac {1}{j!}\left [F^{\prime }(\bar {x})\right ]^{-1}F^{(j)}(\bar {x})\) for \(j=2,3,4,\dots \), such that a < x(k), ∀k.

Remark 6

This confirms that the error equations given in (29) and (30) are the same.

In the next section, numerical tests on several nonlinear systems of equations are performed. We remark that, in all tests, a comparison with the classical Newton-Raphson method (α = 1) is made. The dependence of both methods on the initial estimates is also analyzed by means of the convergence plane.

4 Numerical results

The following tests have been performed by using MATLAB R2020a with double precision arithmetic, ∥F(x(k+ 1))∥ < 10− 8 or ∥x(k+ 1) − x(k)∥ < 10− 8 as stopping criterion, and at most 500 iterations. For each test, we use \(a=(a_{1},\dots ,a_{n})=(-10,\dots ,-10)\) to ensure that ai < xi, \(\forall i=1,\dots ,n\), according to Definitions 5 and 6, and a < x(k), ∀k, according to Theorem 6 and Corollary 1. We also use the Approximated Computational Order of Convergence (ACOC)

$$ ACOC=\frac{\ln(\|x^{(k+1)}-x^{(k)}\|/\|x^{(k)}-x^{(k-1)}\|)}{\ln(\|x^{(k)}-x^{(k-1)}\|/\|x^{(k-1)}-x^{(k-2)}\|)},~~k=2,3,\dots, $$

defined in [14], to check that the theoretical order of convergence is attained in practice. To compare the results for all test vector valued functions, we have used the same initial estimate within each table, with α ∈ (0,1].
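The ACOC above can be computed from the last four iterates; below, a sketch of ours (checked on a synthetic sequence with exact quadratic error decay, not on the paper's MATLAB data):

```python
import numpy as np

def acoc(iterates):
    """Approximated Computational Order of Convergence from the last four iterates."""
    d = [np.linalg.norm(np.asarray(iterates[i + 1]) - np.asarray(iterates[i]))
         for i in range(len(iterates) - 1)]
    return np.log(d[-1] / d[-2]) / np.log(d[-2] / d[-3])

# Synthetic sequence with e_{k+1} = e_k^2 (order 2):
errs = [1e-1, 1e-2, 1e-4, 1e-8, 1e-16]
order = acoc([np.array([1.0 + e]) for e in errs])
```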

From each table, two figures with error curves are provided in order to visualize the error (∥x(k+ 1) − x(k)∥) versus the number of iterations for different values of α. First, a figure with all tested values of α is shown; then, a figure with only some values of α is shown, so that each curve can be distinguished from the others. In the latter case, the chosen curves correspond to the values of α requiring fewer iterations, or to an arbitrary choice when the number of iterations is the same. In each case, the curve corresponding to α = 1 is included whenever possible, so that both methods, the classical one (α = 1) and the one proposed in this paper, can be visualized in the same figure.

Our first test vector valued function is F1(x,y) = (x2 − 2x − y + 0.5,x2 + 4y2 − 4)T with real and complex roots \(\bar {x}_{1}\approx (-0.2222,0.9938)^{T}\), \(\bar {x}_{2}\approx (1.9007,0.3112)^{T}\) and \(\bar {x}_{3}\approx (1.1608-0.6545i,-0.9025-0.2104i)^{T}\). The conformable Jacobian matrix of F1(x,y) is

$$ {F_{a}^{\alpha(1)}}_{1}(x,y)= \left( \begin{array}{cc} (x-a_{1})^{1-\alpha}(2x-2) & (y-a_{2})^{1-\alpha}(-1) \\ (x-a_{1})^{1-\alpha}(2x) & (y-a_{2})^{1-\alpha}(8y) \end{array}\right), $$

being a = (a1,a2) = (− 10,− 10).

In Table 1, we observe for F1(x,y) that the classical Newton method (α = 1) does not find any solution within 500 iterations, whereas the conformable vectorial Newton procedure converges. We can also observe that the ACOC may be even slightly greater than 2 when α ≠ 1. We remark that a complex root is found from a real initial estimate for several values of α when the conformable vectorial Newton method is used. In Figs. 1 and 2, the error curve for the classical Newton procedure (α = 1) is not provided because no solution was found in this case, whereas the remaining error curves cease their erratic behaviour in the last iterations.

Table 1 Results for \(F_{1}(x,y)=\hat {0}\) with initial estimation x(0) = (− 2,− 1.5)T
Fig. 1

Error curves of F1(x,y) for all values of α from Table 1

Fig. 2

Error curves of F1(x,y) for some values of α from Table 1

In Table 2, we can see for F1(x,y), with a different initial estimate, that the classical vectorial Newton scheme and the conformable vectorial Newton method behave similarly in terms of number of iterations and ACOC. Again, the quadratic convergence of the conformable Newton method holds for all α ∈ (0,1]. In Figs. 3 and 4, no erratic behaviour is observed, since the errors decrease at each iteration.

Table 2 Results for \(F_{1}(x,y)=\hat {0}\) with initial estimation x(0) = (− 2,1.5)T
Fig. 3

Error curves of F1(x,y) for all values of α from Table 2

Fig. 4

Error curves of F1(x,y) for some values of α from Table 2

The second test vector valued function is F2(x,y) = (x2 + y2 − 1,x2y2 − 1/2)T with real roots \(\bar {x}_{1}=\left (\sqrt {3}/2,1/2\right )^{T}\), \(\bar {x}_{2}=\left (-\sqrt {3}/2,1/2\right )^{T}\), \(\bar {x}_{3}=\left (\sqrt {3}/2,-1/2\right )^{T}\) and \(\bar {x}_{4}=\left (-\sqrt {3}/2,-1/2\right )^{T}\). The conformable Jacobian matrix of F2(x,y) is

$$ {F_{a}^{\alpha(1)}}_{2}(x,y)= \left( \begin{array}{cc} (x-a_{1})^{1-\alpha}(2x) & (y-a_{2})^{1-\alpha}(2y) \\ (x-a_{1})^{1-\alpha}(2x) & (y-a_{2})^{1-\alpha}(-2y) \end{array}\right), $$

with a = (a1,a2) = (− 10,− 10).
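As a quick numerical sanity check, the conformable Jacobian above can be implemented directly. The sketch below is our own illustration (not code from the paper): it verifies that α = 1 recovers the classical Jacobian and that the residual of F2 at the root \(\bar{x}_1\) vanishes.

```python
import math

# Our illustration: conformable Jacobian of
# F2(x, y) = (x^2 + y^2 - 1, x^2 - y^2 - 1/2)^T with a = (-10, -10).
def conformable_jacobian_F2(x, y, alpha, a=(-10.0, -10.0)):
    sx = (x - a[0]) ** (1 - alpha)   # scaling of the x-column
    sy = (y - a[1]) ** (1 - alpha)   # scaling of the y-column
    return [[sx * 2 * x, sy * 2 * y],
            [sx * 2 * x, sy * (-2 * y)]]

# With alpha = 1 both scalings equal 1, so the classical Jacobian is recovered.
J1 = conformable_jacobian_F2(2.0, -2.5, 1.0)

# Residual of F2 at the root (sqrt(3)/2, 1/2): both components should vanish.
r = (math.sqrt(3) / 2, 0.5)
res = (r[0] ** 2 + r[1] ** 2 - 1, r[0] ** 2 - r[1] ** 2 - 0.5)
```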

It can be seen in Tables 3 and 4 for F2(x,y) that classical Newton’s method and the conformable vectorial Newton scheme behave similarly, both in number of iterations and in ACOC. Figures 5, 6, 7 and 8 show no erratic behaviour, since the errors decrease with the iterations.

Table 3 Results for \(F_{2}(x,y)=\hat {0}\) with initial estimation x(0) = (2,− 2.5)T
Table 4 Results for \(F_{2}(x,y)=\hat {0}\) with initial estimation x(0) = (2,2.5)T
Fig. 5 Error curves of F2(x,y) for all values of α from Table 3

Fig. 6 Error curves of F2(x,y) for some values of α from Table 3

Fig. 7 Error curves of F2(x,y) for all values of α from Table 4

Fig. 8 Error curves of F2(x,y) for some values of α from Table 4

Our third test vector valued function is \(F_{3}(x,y)=(x^{2}-x-y^{2}-1,-\sin x+y)^{T}\) with real roots \(\bar {x}_{1}\approx (-0.8453,-0.7481)^{T}\) and \(\bar {x}_{2}\approx (1.9529,0.9279)^{T}\). The conformable Jacobian matrix of F3(x,y) is

$$ {F_{a}^{\alpha(1)}}_{3}(x,y)= \left( \begin{array}{cc} (x-a_{1})^{1-\alpha}(2x-1) & (y-a_{2})^{1-\alpha}(-2y) \\ (x-a_{1})^{1-\alpha}(-\cos x) & (y-a_{2})^{1-\alpha}(1) \end{array}\right), $$

with a = (a1,a2) = (− 10,− 10).
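All the conformable Jacobians in this section share one structure: column j of the classical Jacobian is multiplied by \((x_{j}-a_{j})^{1-\alpha}\). The following sketch (our own illustration, not code from the paper) implements this column scaling generically and demonstrates it on F3.

```python
import math

# Our illustration: the conformable Jacobian is the classical Jacobian
# with column j scaled by (x_j - a_j)^(1 - alpha).
def conformable_from_classical(J, x, a, alpha):
    n = len(x)
    scale = [(x[j] - a[j]) ** (1 - alpha) for j in range(n)]
    return [[J[i][j] * scale[j] for j in range(n)] for i in range(len(J))]

# Classical Jacobian of F3(x, y) = (x^2 - x - y^2 - 1, -sin x + y)^T.
def J3(x, y):
    return [[2 * x - 1, -2 * y], [-math.cos(x), 1.0]]

x, y, a, alpha = 2.5, -0.5, (-10.0, -10.0), 0.5
Jc = conformable_from_classical(J3(x, y), (x, y), a, alpha)
```

With α = 1 the scaling factors are all 1 and the classical Jacobian is recovered, which is why the classical method appears as the α = 1 row of each table.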

We can see in Table 5 for F3(x,y) that the conformable vectorial Newton procedure requires fewer iterations than classical Newton’s method for lower values of α. It can also be observed that the ACOC may be slightly greater than 2 for lower values of α. In Figs. 9 and 10, the errors decrease at each iteration.

Table 5 Results for \(F_{3}(x,y)=\hat {0}\) with initial estimation x(0) = (2.5,− 0.5)T
Fig. 9 Error curves of F3(x,y) for all values of α from Table 5

Fig. 10 Error curves of F3(x,y) for some values of α from Table 5

In Table 6, we can see for F3(x,y) that the conformable vectorial and classical Newton methods require the same number of iterations, and the ACOC is around 2 in all cases. Again, the errors decrease at each iteration in Figs. 11 and 12.

Table 6 Results for \(F_{3}(x,y)=\hat {0}\) with initial estimation x(0) = (2.5,0.5)T
Fig. 11 Error curves of F3(x,y) for all values of α from Table 6

Fig. 12 Error curves of F3(x,y) for some values of α from Table 6

The fourth test vector valued function is \(F_{4}(x,y)=(x^{2}+y^{2}-4,\ e^{x}+y-1)^{T}\) with real roots \(\bar {x}_{1}\approx (-1.8163,0.8374)^{T}\) and \(\bar {x}_{2}\approx (1.0042,-1.7296)^{T}\). The conformable Jacobian matrix of F4(x,y) is

$$ {F_{a}^{\alpha(1)}}_{4}(x,y)= \left( \begin{array}{cc} (x-a_{1})^{1-\alpha}(2x) & (y-a_{2})^{1-\alpha}(2y)\\ (x-a_{1})^{1-\alpha}(e^{x}) & (y-a_{2})^{1-\alpha}(1) \end{array}\right), $$

with a = (a1,a2) = (− 10,− 10).

We observe in Table 7 for F4(x,y) that, again, the conformable vectorial Newton scheme requires fewer iterations than classical Newton’s method for all values of α. It can also be seen that the ACOC is around 2. Figures 13 and 14 show that the errors decrease with the iterations.

Table 7 Results for \(F_{4}(x,y)=\hat {0}\) with initial estimation x(0) = (− 2.5,− 3.5)T
Fig. 13 Error curves of F4(x,y) for all values of α from Table 7

Fig. 14 Error curves of F4(x,y) for some values of α from Table 7

In Table 8, we observe for F4(x,y) that the conformable vectorial and classical Newton methods require the same number of iterations, and the ACOC is around 2. Again, the errors decrease with the iterations in Figs. 15 and 16.

Table 8 Results for \(F_{4}(x,y)=\hat {0}\) with initial estimation x(0) = (− 2.5,3.5)T
Fig. 15 Error curves of F4(x,y) for all values of α from Table 8

Fig. 16 Error curves of F4(x,y) for some values of α from Table 8

Our fifth test vector valued function is \(F_{5}(x)=\left (f_{1}(x),\dots ,f_{15}(x)\right )^{T}\), where \(x=(x_{1},\dots ,x_{15})^{T}\) and \(f_{i}:\mathbb {R}^{15}\longrightarrow \mathbb {R}\), \(i=1,\dots ,15\), such that

$$ \begin{array}{@{}rcl@{}} f_{i}(x)&=&x_{i}x_{i+1}-1,\quad i=1,2,\dots,13,14 \\ f_{15}(x)&=&x_{15}x_{1}-1, \end{array} $$

with real roots \(\bar {x}_{1}=(-1,\dots ,-1)^{T}\) and \(\bar {x}_{2}=(1,\dots ,1)^{T}\). The conformable Jacobian matrix of F5(x) is

$$ {F_{a}^{\alpha(1)}}_{5}(x)= \left( \begin{array}{ccccccc} \chi_{1,1} & \chi_{1,2} & 0 & {\dots} & {\dots} & 0 & 0 \\ 0 & \chi_{2,2} & \chi_{2,3} & 0 & {\dots} & 0 & 0 \\ & & & {\vdots} & & & \\ 0 & 0 & {\dots} & {\dots} & 0 & \chi_{14,14} & \chi_{14,15} \\ \chi_{15,1} & 0 & {\dots} & {\dots} & {\dots} & 0 & \chi_{15,15} \end{array}\right), $$

where

$$ \begin{array}{@{}rcl@{}} \chi_{1,1}&=&(x_{1}-a_{1})^{1-\alpha}(x_{2}) \\ \chi_{1,2}&=&(x_{2}-a_{2})^{1-\alpha}(x_{1}) \\ \chi_{2,2}&=&(x_{2}-a_{2})^{1-\alpha}(x_{3}) \\ \chi_{2,3}&=&(x_{3}-a_{3})^{1-\alpha}(x_{2}) \\ \chi_{14,14}&=&(x_{14}-a_{14})^{1-\alpha}(x_{15}) \\ \chi_{14,15}&=&(x_{15}-a_{15})^{1-\alpha}(x_{14}) \\ \chi_{15,1}&=&(x_{1}-a_{1})^{1-\alpha}(x_{15}) \\ \chi_{15,15}&=&(x_{15}-a_{15})^{1-\alpha}(x_{1}), \end{array} $$

with \(a=(a_{1},\dots ,a_{15})=(-10,\dots ,-10)\).
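The cyclic structure of F5 and its sparse conformable Jacobian translate directly into code. The sketch below is our own illustration (not from the paper); it checks that \(\bar{x}_{2}=(1,\dots ,1)^{T}\) is a root and that α = 1 recovers the classical sparse Jacobian.

```python
# Our illustration: the cyclic system f_i = x_i * x_{i+1} - 1 (indices mod 15)
# and its conformable Jacobian chi_{i,j} = (x_j - a_j)^(1-alpha) * dF_i/dx_j.
n = 15
a = [-10.0] * n

def F5(x):
    return [x[i] * x[(i + 1) % n] - 1.0 for i in range(n)]

def conformable_J5(x, alpha):
    J = [[0.0] * n for _ in range(n)]
    for i in range(n):
        j = (i + 1) % n
        J[i][i] = (x[i] - a[i]) ** (1 - alpha) * x[j]  # d f_i / d x_i = x_{i+1}
        J[i][j] = (x[j] - a[j]) ** (1 - alpha) * x[i]  # d f_i / d x_{i+1} = x_i
    return J

root = [1.0] * n  # the root \bar{x}_2 = (1, ..., 1)^T from the text
```

Note that each row has only two nonzero entries (the last row wraps around to the first column), which mirrors the banded-plus-corner matrix displayed above.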

It can be observed in Tables 9 and 10 for F5(x) that classical Newton’s method and the conformable vectorial Newton scheme behave similarly, both in number of iterations and in ACOC. Figures 17, 18, 19 and 20 show that the errors decrease with the iterations, so no erratic behaviour is observed.

Table 9 Results for \(F_{5}(x)=\hat {0}\) with initial estimation \(x^{(0)}=(-1.5,\dots ,-1.5)^{T}\)
Table 10 Results for \(F_{5}(x)=\hat {0}\) with initial estimation \(x^{(0)}=(2.5,\dots ,2.5)^{T}\)
Fig. 17 Error curves of F5(x) for all values of α from Table 9

Fig. 18 Error curves of F5(x) for some values of α from Table 9

Fig. 19 Error curves of F5(x) for all values of α from Table 10

Fig. 20 Error curves of F5(x) for some values of α from Table 10

The sixth test vector valued function is \(F_{6}(x)=\left (f_{1}(x),\dots ,f_{10}(x)\right )^{T}\), where \(x=(x_{1},\dots ,x_{10})^{T}\) and \(f_{i}:\mathbb {R}^{10}\longrightarrow \mathbb {R}\), \(i=1,\dots ,10\), such that

$$ f_{i}(x)=x_{i}-1.5\sin(x_{1}+x_{2}+\cdots+x_{9}+x_{10}-x_{i}),\quad i=1,2,\dots,9,10, $$

with real roots \(\bar {x}_{1}\approx (-0.9691,\dots ,-0.9691)^{T}\), \(\bar {x}_{2}\approx (-0.7569,\dots ,-0.7569)^{T}\), \(\bar {x}_{3}\approx (-0.3248,\dots ,-0.3248)^{T}\), \(\bar {x}_{4}=(0,\dots ,0)^{T}\), \(\bar {x}_{5}\approx (0.3248,\dots ,0.3248)^{T}\), \(\bar {x}_{6}\approx (0.7569,\dots ,0.7569)^{T}\) and \(\bar {x}_{7}\approx (0.9691,\dots ,0.9691)^{T}\). The conformable Jacobian matrix of F6(x) is

$$ {F_{a}^{\alpha(1)}}_{6}(x)= \left( \begin{array}{ccccccc} \chi_{1,1} & \chi_{1,2} & {\dots} & {\dots} & {\dots} & \chi_{1,9} & \chi_{1,10} \\ \chi_{2,1} & \chi_{2,2} & {\dots} & {\dots} & {\dots} & \chi_{2,9} & \chi_{2,10} \\ & & & {\vdots} & & & \\ \chi_{9,1} & \chi_{9,2} & {\dots} & {\dots} & {\dots} & \chi_{9,9} & \chi_{9,10} \\ \chi_{10,1} & \chi_{10,2} & {\dots} & {\dots} & {\dots} & \chi_{10,9} & \chi_{10,10} \end{array}\right), $$

where

$$ \begin{array}{@{}rcl@{}} \chi_{1,1}&=&(x_{1}-a_{1})^{1-\alpha}(1) \\ \chi_{1,2}&=&(x_{2}-a_{2})^{1-\alpha}(-1.5\cos(x_{1}+x_{2}+\cdots+x_{9}+x_{10}-x_{1})) \\ \chi_{1,9}&=&(x_{9}-a_{9})^{1-\alpha}(-1.5\cos(x_{1}+x_{2}+\cdots+x_{9}+x_{10}-x_{1})) \\ \chi_{1,10}&=&(x_{10}-a_{10})^{1-\alpha}(-1.5\cos(x_{1}+x_{2}+\cdots+x_{9}+x_{10}-x_{1})) \\ \chi_{2,1}&=&(x_{1}-a_{1})^{1-\alpha}(-1.5\cos(x_{1}+x_{2}+\cdots+x_{9}+x_{10}-x_{2})) \\ \chi_{2,2}&=&(x_{2}-a_{2})^{1-\alpha}(1) \\ \chi_{2,9}&=&(x_{9}-a_{9})^{1-\alpha}(-1.5\cos(x_{1}+x_{2}+\cdots+x_{9}+x_{10}-x_{2})) \\ \chi_{2,10}&=&(x_{10}-a_{10})^{1-\alpha}(-1.5\cos(x_{1}+x_{2}+\cdots+x_{9}+x_{10}-x_{2})) \\ \chi_{9,1}&=&(x_{1}-a_{1})^{1-\alpha}(-1.5\cos(x_{1}+x_{2}+\cdots+x_{9}+x_{10}-x_{9})) \\ \chi_{9,2}&=&(x_{2}-a_{2})^{1-\alpha}(-1.5\cos(x_{1}+x_{2}+\cdots+x_{9}+x_{10}-x_{9})) \\ \chi_{9,9}&=&(x_{9}-a_{9})^{1-\alpha}(1) \\ \chi_{9,10}&=&(x_{10}-a_{10})^{1-\alpha}(-1.5\cos(x_{1}+x_{2}+\cdots+x_{9}+x_{10}-x_{9})) \\ \chi_{10,1}&=&(x_{1}-a_{1})^{1-\alpha}(-1.5\cos(x_{1}+x_{2}+\cdots+x_{9}+x_{10}-x_{10})) \\ \chi_{10,2}&=&(x_{2}-a_{2})^{1-\alpha}(-1.5\cos(x_{1}+x_{2}+\cdots+x_{9}+x_{10}-x_{10})) \\ \chi_{10,9}&=&(x_{9}-a_{9})^{1-\alpha}(-1.5\cos(x_{1}+x_{2}+\cdots+x_{9}+x_{10}-x_{10})) \\ \chi_{10,10}&=&(x_{10}-a_{10})^{1-\alpha}(1), \end{array} $$

with \(a=(a_{1},\dots ,a_{10})=(-10,\dots ,-10)\).
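A corresponding sketch for F6 (again our own illustration, not code from the paper). Since \(f_{i}=x_{i}-1.5\sin (s-x_{i})\) with \(s=x_{1}+\cdots +x_{10}\), the off-diagonal partial derivatives are \(-1.5\cos (s-x_{i})\) (the cosine argument carries the row index i), and each column j is then scaled by \((x_{j}-a_{j})^{1-\alpha}\):

```python
import math

# Our illustration: F6 and its conformable Jacobian. The cosine argument
# s - x_i carries the row index, since f_i = x_i - 1.5*sin(s - x_i).
n = 10
a = [-10.0] * n

def F6(x):
    s = sum(x)
    return [x[i] - 1.5 * math.sin(s - x[i]) for i in range(n)]

def conformable_J6(x, alpha):
    s = sum(x)
    J = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = 1.0 if i == j else -1.5 * math.cos(s - x[i])
            J[i][j] = (x[j] - a[j]) ** (1 - alpha) * d  # column scaling
    return J
```

At the symmetric iterates produced by the symmetric initial guesses used here, s − x_i is the same for every component, so whether the cosine argument carries the row or the column index has no numerical effect in these experiments.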

We can see in Table 11 for F6(x) that the conformable vectorial Newton procedure requires, in general, far fewer iterations than classical Newton’s method. It can also be observed that the ACOC may be slightly greater than 2. In Figs. 21 and 22, we can see that the error curves cease their erratic behaviour in the later iterations.

Table 11 Results for \(F_{6}(x)=\hat {0}\) with initial estimation \(x^{(0)}=(2,\dots ,2)^{T}\)
Fig. 21 Error curves of F6(x) for all values of α from Table 11

Fig. 22 Error curves of F6(x) for some values of α from Table 11

In Table 12, we can observe for F6(x) that classical Newton’s method does not find any solution within 500 iterations, whereas the conformable one converges for most values of α. The ACOC is around 2, except for α = 0.1, where it is much greater. No results are shown for α = 0.7 because the conformable Jacobian matrix becomes singular. Again, in Figs. 23 and 24, the error curves cease their erratic behaviour in the later iterations.

Table 12 Results for \(F_{6}(x)=\hat {0}\) with initial estimation \(x^{(0)}=(3,\dots ,3)^{T}\)
Fig. 23 Error curves of F6(x) for all values of α from Table 12

Fig. 24 Error curves of F6(x) for some values of α from Table 12

In Tables 11 and 12, some errors are exactly zero because double precision arithmetic is used; values very close to zero, rather than zero, would be observed if variable precision arithmetic were used.
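This is a standard floating-point effect: once an update is smaller than the spacing of double-precision numbers near the iterate, the new iterate coincides bit-for-bit with the old one and the measured error is exactly zero. A minimal illustration of this rounding collapse:

```python
import sys

# Double precision cannot distinguish 1 from 1 plus an increment smaller
# than half the gap between consecutive floats, so the difference is 0.0.
eps = sys.float_info.epsilon   # machine epsilon, about 2.22e-16
x = 1.0
x_next = x + eps / 4           # update below the representable gap at 1.0
error = abs(x_next - x)        # exactly 0.0 in double precision
```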

4.1 Numerical stability

In this section, we study the stability of the conformable vectorial Newton method tested above. To that end, we analyze the dependence on initial estimates by means of convergence planes, which were defined in [15] and used in [3,4,5]. Only two dimensions can be visualized in a convergence plane, so we provide them for the vector valued functions F1, F2, F3 and F4.

To construct the convergence planes, from initial estimates (x0,y0) we place the values of x0 on the horizontal axis and the values of α ∈ (0,1] on the vertical axis. Each of the 8 planes in each figure corresponds to a different value of y0 of the initial estimate (x0,y0). Each color represents a different solution found, and a point is painted black when no solution is found after 500 iterations. Each plane is generated on a 400 × 400 grid, with a maximum of 500 iterations and tolerance 0.001.
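The construction just described can be sketched generically. The code below is our own schematic, not the authors' implementation: `solve` is a placeholder for any iterative method taking an initial point and a value of α; here a classical Newton iteration on the scalar toy equation x² − 1 = 0 plays that role, with α simply carried along.

```python
# Our schematic of the convergence-plane construction: a 2D grid over
# (x0, alpha), each cell coloured by the index of the root found, or -1
# (painted black in the figures) when no root is found.
def convergence_plane(solve, roots, x_range, n=400, tol=1e-3, max_iter=500):
    plane = []
    for k in range(n):                      # vertical axis: alpha in (0, 1]
        alpha = (k + 1) / n
        row = []
        for m in range(n):                  # horizontal axis: x0
            x0 = x_range[0] + m * (x_range[1] - x_range[0]) / (n - 1)
            x = solve(x0, alpha, tol, max_iter)
            col = -1                        # -1 means "no solution found"
            if x is not None:
                for idx, r in enumerate(roots):
                    if abs(x - r) < tol:
                        col = idx           # colour by the root reached
            row.append(col)
        plane.append(row)
    return plane

# Stand-in solver: classical Newton on f(x) = x^2 - 1 (alpha unused here).
def newton_x2(x0, alpha, tol, max_iter):
    x = x0
    for _ in range(max_iter):
        if abs(x) < 1e-14:                  # derivative 2x vanishes: give up
            return None
        x = x - (x * x - 1.0) / (2.0 * x)
        if abs(x * x - 1.0) < tol:
            return x
    return None

plane = convergence_plane(newton_x2, [-1.0, 1.0], (-2.0, 2.0), n=20)
```

In the paper's setting, `solve` would be the conformable vectorial Newton method with the second initial component y0 fixed per plane; the grid here is 20 × 20 only to keep the example fast.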

In Fig. 25, we can see for F1(x,y) that in (e), (f), (g) and (h) almost 100% convergence is obtained, whereas in (a), (b), (c) and (d) around 86% convergence is obtained. In every case, the method converges to all the roots, even to the complex root from real initial estimates.

Fig. 25 Convergence planes of F1(x,y). \(\bar {x}_{1}\): green, \(\bar {x}_{2}\): red, \(\bar {x}_{3}\): blue

In Fig. 26, for F2(x,y), almost 100% convergence is obtained in every plane. In (a), (b), (c) and (d) the method converges to 2 of the 4 roots, and in (e), (f), (g) and (h) it converges to the other 2 roots.

Fig. 26 Convergence planes of F2(x,y). \(\bar {x}_{1}\): red, \(\bar {x}_{2}\): green, \(\bar {x}_{3}\): blue, \(\bar {x}_{4}\): yellow

In Fig. 27, for F3(x,y), we can observe that between 77% and 98% convergence is obtained. In every plane, the method converges to both real roots.

Fig. 27 Convergence planes of F3(x,y). \(\bar {x}_{1}\): red, \(\bar {x}_{2}\): blue

In Fig. 28, for F4(x,y), we can see that 100% convergence is obtained in some cases, and nearly 100% in the others. In (a), (b), (c), (d) and (e) the method converges to both real roots, whereas in (f), (g) and (h) it converges to only one root.

Fig. 28 Convergence planes of F4(x,y). \(\bar {x}_{1}\): green, \(\bar {x}_{2}\): blue

We can also observe that, in general, it is possible to find several solutions from the same initial estimate by choosing distinct values of α.

5 Conclusion

In this work, the first conformable fractional Newton-type iterative scheme for solving nonlinear systems was designed, and all the analytical tools required to construct it were introduced. The convergence analysis was carried out, and quadratic convergence is preserved, as in classical Newton’s method for nonlinear systems. It was concluded that the conformable Taylor series introduced in this work and the classical one lead to the same error equation in both versions (the conformable scalar method in [5] and the conformable vectorial method proposed here). Numerical tests were performed, error curves were provided, and the dependence on initial estimates was analyzed, supporting the theory. We observed that the conformable vectorial Newton-type method presents, in some cases, better numerical behaviour than the classical one in terms of number of iterations, ACOC, and the wideness of the basins of attraction of the roots. We also observed that complex roots may be obtained from real initial estimates, and that several roots may be obtained from the same initial estimate by choosing different values of α.