1 Introduction

When considering the numerical approximation of fractional differential equations (FDEs), the matrices involved are intrinsically dense, owing to the nonlocal nature of the underlying operators, even when the approximation technique is of local nature (Finite Differences, Finite Elements, Finite Volumes, Isogeometric Analysis, etc.); see [14, 18,19,20, 22, 25, 34,35,36,37] and references therein. The same situation occurs in the context of distributed order FDEs (see [21] and references therein). When employing equispaced gridding, the resulting structures have the advantage of being of Toeplitz nature, so that the cost of a matrix-vector multiplication is almost linear, i.e., \(O(n\log n)\) arithmetic operations, where n is the matrix order and the constant hidden in the big O is very moderate (we refer to [8, 23] and the references therein). These computational features, combined with the usually large dimensions of the considered linear systems, motivate the search for suitable iterative solvers: typically the most successful iterative procedures belong to the class of preconditioned Krylov methods, to the algorithms of multigrid type, or to clever combinations of them [8, 10,11,12, 23, 29, 31].
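The \(O(n\log n)\) matrix-vector cost mentioned above comes from embedding the \(n\times n\) Toeplitz matrix into a \(2n\times 2n\) circulant, which the FFT diagonalizes. The following is a minimal sketch of ours (the function name and layout are illustrative, not taken from the cited works):

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply T @ x for the Toeplitz matrix with first column
    [t_0, t_1, ..., t_{n-1}] and first row [t_0, t_{-1}, ..., t_{-(n-1)}],
    in O(n log n) operations via circulant embedding and the FFT."""
    n = len(x)
    # First column of the 2n x 2n circulant containing T in its top-left
    # block: [t_0, ..., t_{n-1}, 0, t_{-(n-1)}, ..., t_{-1}].
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    lam = np.fft.fft(c)                        # eigenvalues of the circulant
    y = np.fft.ifft(lam * np.fft.fft(x, len(c)))
    return y[:n].real                          # top block of the circulant product
```

The result agrees with the dense product `T @ x`, while only FFTs of length \(2n\) are performed.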

A crucial piece of information for an appropriate design of an efficient solver of such a kind is the precise knowledge of the asymptotic behavior of the minimal eigenvalue, which in the positive definite case is also related to the asymptotic spectral conditioning: in our setting we emphasize that the considered matrices are real, symmetric, and positive definite.

In this work, starting from [5, 21], we improve the bounds present in the literature. We take into account the techniques already developed in [4, 6]: the additional difficulty, in the present setting, relies on the fact that the generating function is not fixed but rather depends on the matrix order n, which is somehow a challenging novelty with respect to the classical work on the subject [4, 6, 27].

More precisely, in technical terms, we consider the real-valued symbol

$$\begin{aligned} f_{n}(\theta )\equiv \sum _{j=0}^{n-1}|\theta |^{2-jh}h^{jh}, \end{aligned}$$

with \(h\equiv \frac{1}{n}\). In this article we obtain a real interval to which the smallest eigenvalue of the Toeplitz matrix \(A_{n}\equiv T_{n}(f_{n})\) belongs. Recall that if f is a complex-valued function on the unit circle \({\mathbb {T}}\) or, equivalently, on \([-\pi ,\pi ]\) with Fourier coefficients \(\hat{f}_j\) (\(j\in {\mathbb {Z}}\)), then \(T_{n}(f)\) stands for the matrix \((\hat{f}_{j-k})_{j,k=1}^{n}\). The function f is then called the generating function or the symbol of \(T_{n}(f)\). In our case not only the matrix dimension of \(A_{n}\) but also the symbol \(f_{n}\) depends on n.
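For a concrete feel of the definition, the Fourier coefficients \(\hat f_{j-k}\) can be approximated with a single FFT and assembled into \(T_{n}(f)\); a small Python sketch of ours (the function name is illustrative, and the FFT length N is an accuracy parameter we introduce):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_from_symbol(f, n, N=4096):
    """Assemble T_n(f) = (\\hat f_{j-k})_{j,k=1}^n, approximating the
    Fourier coefficients of f by a length-N FFT of samples on [0, 2*pi)."""
    theta = 2*np.pi*np.arange(N)/N
    fhat = np.fft.fft(f(theta))/N             # \hat f_k for k = 0, ..., N-1 (mod N)
    col = fhat[:n]                            # \hat f_0, ..., \hat f_{n-1}
    row = np.concatenate([fhat[:1], fhat[-1:-n:-1]])   # \hat f_0, \hat f_{-1}, ...
    # For a real, even symbol the coefficients are real and T_n(f) is symmetric.
    return toeplitz(col.real, row.real)

# Sanity check: the symbol 2 - 2*cos(theta) generates tridiag(-1, 2, -1).
T = toeplitz_from_symbol(lambda t: 2 - 2*np.cos(t), 5)
```

For trigonometric polynomials such as \(2-2\cos \theta \) the FFT reproduces the coefficients exactly up to rounding.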

Based on the spectral information, a few algorithmic proposals are also discussed, starting from those presented in [12, 13, 21] and being of the type mentioned before: preconditioned conjugate gradient (PCG) algorithms, multigrid solvers (typically the V-cycle), and combinations of them, that is, a V-cycle where the smoothing iterations (one step) incorporate the proposed PCG choices.

The work is organized as follows. Section 2 contains the setting of the problem regarding the minimal eigenvalue of \(A_{n}\), together with its study and solution. Section 3 is devoted to numerical experiments concerning the solution of large linear systems with coefficient matrix \(A_{n}\), including some evidence of the spectral clustering at 1 of the proposed preconditioned matrix-sequences.

2 The problem and its solution

Let \(h\equiv \frac{1}{n}\) and consider the function \(f_{n}:[-\pi ,\pi ]\rightarrow {\mathbb {R}}\) given by

$$\begin{aligned} f_{n}(\theta )\equiv \theta ^{2}\sum _{j=0}^{n-1} |\theta |^{-jh} h^{jh} =\theta ^{2}\frac{1-\frac{1}{n|\theta |}}{1-\big (\frac{1}{n|\theta |}\big )^{\frac{1}{n}}}. \end{aligned}$$

Note that \(f_{n}\) coincides with the function \({\hat{F}}_{n}\) used in [5]. We employ the notation \(A_{n}\equiv T_{n}(f_{n})\), that is, we consider the \(n \times n\) Toeplitz matrix with the symbol \(f_n\). Using that

$$\begin{aligned} 1-\bigg (\frac{1}{n|\theta |}\bigg )^{\frac{1}{n}}= & {} 1-\exp \bigg \{-\frac{\log (n|\theta |)}{n}\bigg \}\\= & {} \frac{\log (n|\theta |)}{n}\bigg \{1+O\bigg (\frac{|\log (n|\theta |)|}{n}\bigg )\bigg \}\quad \text{ as }\quad n\rightarrow \infty , \end{aligned}$$

we obtain

$$\begin{aligned} f_{n}(\theta )= & {} \frac{n\theta ^{2}}{\log (n|\theta |)}\bigg \{1-\frac{1}{n|\theta |}\bigg \}\bigg \{1+O\bigg (\frac{|\log (n|\theta |)|}{n}\bigg )\bigg \}\\= & {} \frac{|\theta |(n|\theta |-1)}{\log (n|\theta |)}\bigg \{1+O\bigg (\frac{|\log (n|\theta |)|}{n}\bigg )\bigg \}\\= & {} \frac{|\theta |(n|\theta |-1)}{\log (n|\theta |)}+O\bigg (\theta ^{2}+\frac{|\theta |}{n}\bigg ). \end{aligned}$$

To simplify the previous expression, we consider now the function g given for \(\sigma >0\) by

$$\begin{aligned} g(\sigma )\equiv \frac{\sigma ^{2}-\sigma }{\log (\sigma )}. \end{aligned}$$
(2.1)

Then we infer the relation \(nf_{n}(\theta )=g(n|\theta |)+r_{n}(\theta )\) where \(r_{n}(\theta )=O(n\theta ^{2}+|\theta |)\) as \(n\rightarrow \infty \).
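The relation \(nf_{n}(\theta )=g(n|\theta |)+r_{n}(\theta )\) can be probed numerically; the following sketch of ours (the helper names are illustrative) evaluates the ratio \(|r_{n}(\theta )|/(n\theta ^{2}+|\theta |)\), which should remain bounded as n grows:

```python
import numpy as np

def f_n(theta, n):
    """The symbol f_n(theta) = sum_{j=0}^{n-1} |theta|^{2-j*h} * h^{j*h}, h = 1/n."""
    h = 1.0/n
    j = np.arange(n)[:, None]
    return np.sum(np.abs(theta)**(2 - j*h) * h**(j*h), axis=0)

def g(s):
    """g(sigma) = (sigma^2 - sigma)/log(sigma), with g(1) = 1 by continuity."""
    s = np.asarray(s, dtype=float)
    num, den = s*s - s, np.log(s)
    return np.where(np.abs(s - 1) < 1e-12, 1.0, num/np.where(den != 0, den, 1.0))

def remainder_ratio(n, theta):
    """|r_n(theta)| divided by the claimed bound n*theta^2 + |theta|."""
    r = n*f_n(theta, n) - g(n*theta)
    return np.abs(r)/(n*theta**2 + theta)
```

On test points spread over \((0,\pi ]\), the ratio stays of order one, consistent with the stated remainder estimate.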

We now establish a connection with the simple-loop method [3, 4]. Consider the Toeplitz matrix \(T_{n}(a)\) with symbol \(a:{\mathbb {T}}\rightarrow {\mathbb {R}}\) given by

$$\begin{aligned} a(t)=-\frac{1}{t}+2-t=4\sin ^{2}\bigg (\frac{\theta }{2}\bigg ), \end{aligned}$$

where \(t=\mathrm {e}^{\mathrm {i}\theta }\). For a symbol b, let \(\lambda _{j}(T_{n}(b))\) and \(\psi _{j}(T_{n}(b))\) denote the jth eigenvalue of \(T_{n}(b)\) and a corresponding normalized eigenvector, respectively. Since a is real-valued, the matrices \(T_{n}(a)\) are Hermitian, their eigenvalues are real, and we arrange them in nondecreasing order, that is,

$$\begin{aligned} \lambda _{1}(T_{n}(a))\leqslant \lambda _{2}(T_{n}(a))\leqslant \cdots \leqslant \lambda _{n}(T_{n}(a)). \end{aligned}$$

It is well known (see, for example, [4, 7]) that

$$\begin{aligned} \lambda _{1}(T_{n}(a))= & {} 4\sin ^{2}\Big (\frac{s}{2}\Big ),\nonumber \\ \psi _{1}(T_{n}(a))(\theta )= & {} \frac{c_{n}\, \mathrm {e}^{\frac{\mathrm {i}}{2}(n+1)\theta }}{(n+1)^{\frac{3}{2}}}\cdot \frac{\cos \big (\frac{(n+1)\theta }{2}\big )}{\sin \big (\frac{\theta -s}{2}\big )\sin \big (\frac{\theta +s}{2}\big )}, \end{aligned}$$
(2.2)

where \(s\equiv \frac{\pi }{n+1}\) and \(c_{n}\) is bounded when \(n\rightarrow \infty \). The constant \(c_{n}\) can be calculated from the relation \(\Vert \psi _{1}(T_{n}(a))\Vert _{2}=1\), see Fig. 1.
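The closed form for \(\lambda _{1}(T_{n}(a))\) in (2.2) is easy to verify numerically; a small sketch of ours:

```python
import numpy as np

def lambda_min_laplacian(n):
    """Smallest eigenvalue of T_n(a) = tridiag(-1, 2, -1)."""
    T = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return np.linalg.eigvalsh(T)[0]

n = 12
s = np.pi/(n + 1)
# The computed eigenvalue agrees with the closed form 4*sin(s/2)**2.
print(lambda_min_laplacian(n), 4*np.sin(s/2)**2)
```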

Fig. 1 The constant \(c_{n}\) in Eq. (2.2) for different values of n. The horizontal gridlines correspond to the values of \(c_{n}\) for some selected n, highlighting the convergence

2.1 Upper bound for \(\lambda _{1}(A_{n})\)

Theorem 2.1

We have

$$\begin{aligned} n\lambda _{1}(A_{n})\leqslant k_{1}+O\Big (\frac{1}{n}\Big ),\quad \text{ as }\quad n\rightarrow \infty , \end{aligned}$$

where the constant \(k_{1}\) is given by

$$\begin{aligned} k_{1}=\bigg \{\int _{0}^{\infty } \frac{u^{2}-u}{(u^{2}-\pi ^{2})^{2}}\cdot \frac{\cos ^{2}(\frac{u}{2})}{\log (u)}\mathrm{d}u\bigg \}\bigg \{\int _{0}^{\infty }\frac{\cos ^{2}(\frac{u}{2})}{(u^{2}-\pi ^{2})^{2}}\mathrm{d}u\bigg \}^{-1}, \end{aligned}$$

and can be numerically approximated as 12.9301.

Proof

Let \(\langle \cdot ,\cdot \rangle \) be the inner product of the Hilbert space \(L^{2}({\mathbb {T}})\). Note that the essential ranges of the symbols \(n^{2} a\) and \(n f_{n}\) have a similar behavior in an interval of the type \(\big [0,O\big (\frac{1}{n}\big )\big ]\). Hence using the well-known formula

$$\begin{aligned} \lambda _{1}(A_{n})=\inf \{\langle A_{n}\psi ,\psi \rangle :\Vert \psi \Vert _{2}=1\}, \end{aligned}$$

and choosing as test function the normalized eigenvector \(\psi _{1}(T_{n}(a))\) from (2.2), we obtain

$$\begin{aligned} \lambda _{1}(A_{n})\leqslant & {} \langle A_{n}\psi _{1}(T_{n}(a)),\psi _{1}(T_{n}(a))\rangle \nonumber \\= & {} \frac{1}{2\pi }\int _{-\pi }^{\pi } f_{n}(\theta )|\psi _{1}(T_{n}(a))(\theta )|^{2}\mathrm{d}\theta \nonumber \\= & {} \mathcal I_{1}(n)+\mathcal I_{2}(n), \end{aligned}$$
(2.3)

where

$$\begin{aligned} \mathcal I_{1}(n)\equiv & {} \frac{1}{2\pi n}\int _{-\pi }^{\pi } g(n|\theta |)|\psi _{1}(T_{n}(a))(\theta )|^{2}\mathrm{d}\theta ,\\ \mathcal I_{2}(n)\equiv & {} \frac{1}{2\pi n}\int _{-\pi }^{\pi }r_{n}(\theta )|\psi _{1}(T_{n}(a))(\theta )|^{2}\mathrm{d}\theta . \end{aligned}$$

It is easy to see that \(\lim _{n\rightarrow \infty }n\,\mathcal I_{1}(n)<\infty \). Indeed

$$\begin{aligned} n\,\mathcal I_{1}(n)= & {} \frac{c_{n}^{2}}{2\pi (n+1)^{3}}\int _{-\pi }^{\pi } g(n|\theta |)\cdot \frac{\cos ^{2}\big (\frac{(n+1)\theta }{2}\big )}{\sin ^{2}\big (\frac{\theta -s}{2}\big )\sin ^{2}\big (\frac{\theta +s}{2}\big )}\mathrm{d}\theta \nonumber \\\sim & {} \frac{16nc_{n}^{2}}{\pi }\int _{0}^{\pi } g(n\theta )\cdot \frac{\cos ^{2}\big (\frac{n\theta }{2}\big )}{((n\theta )^{2}-\pi ^{2})^{2}}\mathrm{d}\theta \nonumber \\\sim & {} \frac{16c_{n}^{2}}{\pi }\int _{0}^{\infty } \frac{u^{2}-u}{(u^{2}-\pi ^{2})^{2}}\cdot \frac{\cos ^{2}\big (\frac{u}{2}\big )}{\log (u)}\mathrm{d}u. \end{aligned}$$
(2.4)

Here the notation \(f\sim g\) means \(\lim _{n\rightarrow \infty }\frac{f(n)}{g(n)}=1\). Using (2.2), a similar calculation produces

$$\begin{aligned} \begin{aligned} c_{n}^{-2}\sim&{} \frac{16n}{\pi }\int _{0}^{\pi }\frac{\cos ^{2}\big (\frac{n\theta }{2}\big )}{((n\theta )^{2}-\pi ^{2})^{2}}\mathrm {d}\theta \\ \sim&{} \frac{16}{\pi }\int _{0}^{\infty }\frac{\cos ^{2}\big (\frac{u}{2}\big )}{(u^{2}-\pi ^{2})^{2}}\mathrm {d}u\\ =&\frac{2}{\pi ^{2}}. \end{aligned} \end{aligned}$$
(2.5)

Note that the last integral does not depend on n. Therefore \(c_{n}\rightarrow \frac{\pi }{\sqrt{2}}\approx 2.2214\), which agrees with Fig. 1. For the second term in (2.3) we recall that \(r_{n}(\theta )=O(n\theta ^{2}+|\theta |)\) and write \(\mathcal I_{2}(n)=O(\Phi (n))\) where

$$\begin{aligned} \Phi (n)= & {} \frac{1}{n^{2}}\int _{-\pi }^{\pi } \{(n|\theta |)^{2}-n|\theta |\}|\psi _{1}(T_{n}(a))(\theta )|^{2}\mathrm{d}\theta \\\sim & {} \frac{16c_{n}^{2}}{n}\int _{-\pi }^{\pi } \{(n|\theta |)^{2}-n|\theta |\}\frac{\cos ^{2}\big (\frac{n|\theta |}{2}\big )}{((n|\theta |)^{2}-\pi ^{2})^{2}}\mathrm{d}\theta \\\sim & {} \frac{32c_{n}^{2}}{n^{2}}\int _{0}^{\infty } \frac{u^{2}-u}{(u^{2}-\pi ^{2})^{2}}\cos ^{2}\Big (\frac{u}{2}\Big )\mathrm{d}u, \end{aligned}$$

which combined with (2.5) produces

$$\begin{aligned} \mathcal I_{2}(n)=O\Big (\frac{1}{n^{2}}\Big ). \end{aligned}$$
(2.6)

Finally, combining (2.3), (2.4), (2.5), and (2.6), the assertion follows. \(\square \)

2.2 Lower bound for \(\lambda _{1}(A_{n})\)

In this part we implement the trick used in [6], which works as follows. Let \(q_{n}:{\mathbb {T}}\rightarrow {\mathbb {C}}\) be given by

$$\begin{aligned} q_{n}(t)\equiv \sum _{j=n}^{\infty } q_{j,n} t^j+\sum _{j=n}^{\infty } q_{-j,n}t^{-j}. \end{aligned}$$
(2.7)

Since \(T_{n}(q_{n})\) is the zero matrix, we clearly have \(T_{n}(a)=T_{n}(a+q_{n})\). Thus instead of working with \(T_{n}(a)\) we can use \(T_{n}(a+q_{n})\), which, for a suitably selected symbol \(q_{n}\), can be advantageous.
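The identity \(T_{n}(a)=T_{n}(a+q_{n})\) is immediate to check in code, since \(T_{n}\) only reads the coefficients with index \(|k|\leqslant n-1\); a small sketch of ours (coefficient values are arbitrary):

```python
import numpy as np
from scipy.linalg import toeplitz

def T(coeffs, n):
    """T_n built from a dictionary {k: \\hat a_k} of Fourier coefficients."""
    return toeplitz([coeffs.get(j, 0.0) for j in range(n)],
                    [coeffs.get(-j, 0.0) for j in range(n)])

n = 6
rng = np.random.default_rng(0)
a_hat = {k: rng.standard_normal() for k in range(-(n - 1), n)}
# q_n: nonzero Fourier coefficients only at |k| >= n, cf. (2.7)
q_hat = {n: 1.7, -n: -0.3, n + 2: 2.5}
aq_hat = {k: a_hat.get(k, 0.0) + q_hat.get(k, 0.0) for k in set(a_hat) | set(q_hat)}
```

The matrices `T(a_hat, n)` and `T(aq_hat, n)` coincide entrywise.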

Theorem 2.2

We have

$$\begin{aligned} n\lambda _{1}(A_{n})\geqslant k_{2}+O\Big (\frac{1}{n}\Big ),\quad \text{ as }\quad n\rightarrow \infty , \end{aligned}$$

where the constant \(k_{2}\) is given by

$$\begin{aligned} k_{2}=\frac{1}{\pi }\int _{0}^{\pi }\frac{\sigma ^{2}-\sigma }{\log (\sigma )}\mathrm{d}\sigma , \end{aligned}$$

which can be approximated numerically as 2.2945.
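The constant \(k_{2}\) can be reproduced with a one-line quadrature; the sketch below is ours, and the point \(\sigma =1\) is handled explicitly because it is a removable singularity of g:

```python
import numpy as np
from scipy.integrate import quad

def g(s):
    """g(sigma) = (sigma^2 - sigma)/log(sigma); g(1) = 1 by continuity."""
    if abs(s - 1.0) < 1e-14:
        return 1.0
    return (s*s - s)/np.log(s) if s > 0 else 0.0

k2 = quad(g, 0.0, np.pi, points=[1.0])[0]/np.pi
print(k2)   # about 2.2945, the value stated in Theorem 2.2
```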

Proof

In our case instead of working with the symbol \(nf_{n}\) we will use

$$\begin{aligned} g_{n}(\theta )\equiv & {} nf_{n}(\theta )+p(n|\theta |)\\= & {} g(n|\theta |)+p(n|\theta |)+r_{n}(\theta ), \end{aligned}$$

where \(p(\sigma )\equiv \sum _{j=1}^{\infty } p_{j}\cos (\sigma j)\), the \(p_{j}\) are real constants, and \(r_{n}\) is the remainder term introduced after (2.1). Let \(t=\mathrm {e}^{\mathrm {i}\theta }\). Then we can write

$$\begin{aligned} p(n|\theta |)=p(n\theta )=\sum _{j=1}^{\infty } \frac{p_{j}}{2} (t^{nj}+t^{-nj}), \end{aligned}$$

which clearly shows the form (2.7). Additionally, the function p is \(2\pi \)-periodic, even, and satisfies \(\int _{0}^{\pi }p(\sigma )\mathrm{d}\sigma =0\). We thus deduce \(T_{n}(p(n|\theta |))=0\), and hence

$$\begin{aligned} T_{n}(nf_{n})= & {} T_{n}(g_{n})\\= & {} T_{n}(g(n|\theta |))+T_{n}(p(n|\theta |))+T_{n}(r_{n})\\= & {} T_{n}(g(n|\theta |))+T_{n}(r_{n}). \end{aligned}$$

Keeping in mind that, for any real-valued symbol \(\beta \), the smallest eigenvalue of \(T_{n}(\beta )\) must be greater than or equal to the infimum of \(\beta \), we obtain

$$\begin{aligned} n\lambda _{1}(A_{n})\geqslant & {} \inf \{g_{n}(\theta ):\theta \in [-\pi ,\pi ]\}\\= & {} \inf \{g(n|\theta |)+p(n|\theta |)+r_{n}(\theta ):\theta \in [-\pi ,\pi ]\}\\= & {} \inf \Big \{g(\sigma )+p(\sigma )+O\Big (\frac{\sigma ^{2}+\sigma }{n}\Big ):\sigma \in [0,n\pi ]\Big \}, \end{aligned}$$

where in the last line we used the variable change \(\sigma =n|\theta |\). Then we obtain

$$\begin{aligned} n\lambda _{1}(A_{n})\geqslant \inf \Big \{g(\sigma )+p(\sigma )+O\Big (\frac{\sigma ^{2}+\sigma }{n}\Big ):\sigma \in [0,M]\Big \}, \end{aligned}$$

for a sufficiently large constant M. Thus we have

$$\begin{aligned} n\lambda _{1}(A_{n})\geqslant \inf \{g(\sigma )+p(\sigma ):\sigma \in [0,M]\}+O\Big (\frac{1}{n}\Big ). \end{aligned}$$
(2.8)

In order to obtain a neat lower bound for \(\lambda _{1}(A_{n})\), we need to choose the coefficients \(p_{j}\) in such a way that

$$\begin{aligned} m\equiv \inf _{0\leqslant \sigma \leqslant M}\{g(\sigma )+p(\sigma )\} \end{aligned}$$

be maximal. Since p is even and \(2\pi \)-periodic (hence symmetric about \(\pi \)) and g is strictly increasing with \(g(0)=0\) and \(g(\infty )=\infty \), the infimum of \(g+p\) over \([0,M]\) must be attained in the interval \([0,\pi ]\), and consequently we infer that

$$\begin{aligned} m=\inf _{0\leqslant \sigma \leqslant \pi }\{g(\sigma )+p(\sigma )\}. \end{aligned}$$

Consider the integral

$$\begin{aligned} k_{2}\equiv \frac{1}{\pi }\int _{0}^{\pi } \{g(\sigma )+p(\sigma )\}\mathrm{d}\sigma =\frac{1}{\pi }\int _{0}^{\pi } g(\sigma )\mathrm{d}\sigma , \end{aligned}$$

which satisfies \(k_{2}\geqslant m\), and take p as

$$\begin{aligned} p(\sigma )={\left\{ \begin{array}{ll} k_{2}-g(|\sigma |),&{}\sigma \in [-\pi ,\pi ];\\ k_{2}-g(|\sigma -2\pi j|),&{}\sigma \in [(2j-1)\pi ,(2j+1)\pi ],\quad j\in {\mathbb {Z}}. \end{array}\right. } \end{aligned}$$

It is immediate that the function p is \(2\pi \)-periodic, that \(g(\sigma )+p(\sigma )=k_{2}\) for \(\sigma \in [-\pi ,\pi ]\), and that \(g(\sigma )+p(\sigma )\geqslant k_{2}\) for \(\sigma \notin [-\pi ,\pi ]\), which implies \(m=k_{2}\). This, combined with (2.8), proves the theorem. \(\square \)
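The construction of p can be checked numerically: folding \(\sigma \) to its distance from the nearest multiple of \(2\pi \) reproduces the periodization, and \(g+p\) equals \(k_{2}\) on \([0,\pi ]\) while staying above it elsewhere. A sketch of ours (the helper names `fold` and `p` are not in the text):

```python
import numpy as np
from scipy.integrate import quad

def g(s):
    """g(sigma) = (sigma^2 - sigma)/log(sigma); g(1) = 1 by continuity."""
    if abs(s - 1.0) < 1e-14:
        return 1.0
    return (s*s - s)/np.log(s) if s > 0 else 0.0

k2 = quad(g, 0.0, np.pi, points=[1.0])[0]/np.pi

def fold(sigma):
    """Distance from sigma to the nearest multiple of 2*pi."""
    return abs((sigma + np.pi) % (2*np.pi) - np.pi)

def p(sigma):
    return k2 - g(fold(sigma))       # the even, 2*pi-periodic extension

grid = np.linspace(0.0, 6*np.pi, 2001)
vals = np.array([g(s) + p(s) for s in grid])
```

On the grid, `vals` equals \(k_{2}\) for \(\sigma \leqslant \pi \) and never drops below \(k_{2}\), matching the argument above.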

Finally, combining Theorems 2.1 and 2.2, we obtain

$$\begin{aligned} 2.2945\approx k_{2}\leqslant n\lambda _{1}(A_{n}) \leqslant k_{1}\approx 12.9301, \end{aligned}$$

for every sufficiently large n.
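The two-sided bound can be probed directly for moderate n; the following sketch is ours (N is an FFT accuracy parameter we introduce) and builds \(A_{n}\) from the symbol before computing \(n\lambda _{1}(A_{n})\):

```python
import numpy as np
from scipy.linalg import toeplitz, eigvalsh

def A(n, N=8192):
    """A_n = T_n(f_n), with the Fourier coefficients of f_n approximated
    by a length-N FFT (f_n is even, so the coefficients are real)."""
    h = 1.0/n
    theta = 2*np.pi*np.arange(N)/N
    ang = np.minimum(theta, 2*np.pi - theta)   # |theta| for theta in [-pi, pi)
    j = np.arange(n)[:, None]
    f = np.sum(ang**(2 - j*h) * h**(j*h), axis=0)
    fhat = (np.fft.fft(f)/N).real
    return toeplitz(fhat[:n])

for n in (32, 64, 128):
    print(n, n*eigvalsh(A(n))[0])
```

According to Theorems 2.1 and 2.2, the printed values should eventually settle between \(k_{2}\approx 2.2945\) and \(k_{1}\approx 12.9301\) as n grows.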

Remark 2.1

In [5] the authors proved that \(\lambda _{1}(A_{n})=O\big (\frac{1}{n}\big )\). The next section will show that our bounds are more precise.

3 A few selected numerical experiments

The current section is divided into two parts. In the first we discuss our theoretical results regarding the bounds on the minimal eigenvalue of \(A_{n}\), as a function of the matrix order n, in comparison with the bounds present in the literature. Regarding [21], which is based on [26], the only relevant observation is that the minimal eigenvalue of \(T_{n}^{-1}(|\theta |^{2})h^\alpha T_{n}(|\theta |^{2-\alpha })\) is well separated from zero for any choice of \(\alpha \in (0,2)\), and this provides a qualitative indication that the minimal eigenvalue of \({A_{n} \over n}\) converges to zero with a speed of \({1 \over n^{2}}\). Concerning [5], the latter claim is indeed proved formally there, but the constants are not computed; in the present article we improve the findings by determining quite precise lower and upper bounds (see Fig. 2).

Fig. 2 The normalized minimum eigenvalue \(n\lambda _{1}(A_{n})\) of \(A_{n}\) for different values of n. The left image includes the lower and upper bounds given by Theorems 2.2 and 2.1, respectively, while the right image shows a standard data range

The second part is devoted to exploiting the spectral information in the use and design of specialized iterative algorithms for solving large linear systems having \(\frac{1}{n}A_{n}\) as coefficient matrix. In Table 1 we employ the standard conjugate gradient (CG), since the coefficient matrix is positive definite, but we do not expect optimality, due to the ill-conditioning of the related matrix-sequence. We then use the preconditioned conjugate gradient (PCG) with Strang and Frobenius optimal preconditioners (see [8, 9, 23, 28]), taking the preconditioner in the algebra of circulant matrices and in the algebra of sine transforms, which contains the Finite Difference discrete Laplacian \(T_{n}(2-2\cos \theta )\). Also in this case it is nontrivial to obtain optimality, due to the conditioning of the involved matrices, which grows to infinity as the matrix size tends to infinity.

A few observations are in order: the considered matrix \(A_{n}\) is real, symmetric, and Toeplitz, and the \(\tau \) algebra is again inherently real and symmetric, with a Toeplitz generator [9]. The latter statement offers a qualitative explanation of the fact that the \(\tau \) preconditioners perform substantially better than their analogs in the circulant algebra (see Table 1): the theoretical ground for the preceding qualitative remark lies in the notion of good algebras developed in [30].

Notice that the tridiagonal \(\tau \) discrete Finite Difference Laplacian \(\Delta _{n}\) is optimal: the related linear system solution is extremely cheap both in a sequential model (via the Thomas algorithm) and in a parallel model of computation (via e.g. classical multigrid methods [17]). We also notice the remarkable difference between the performance of the optimal \(\tau \) preconditioner and that of the optimal circulant preconditioner. The reason is spectral (plus the good algebra argument [30]). The minimal eigenvalue of the optimal circulant preconditioner is averaged, due to a Cesàro sum effect [9, 28], and hence it behaves as \({1\over n}\) (instead of \(1\over n^{2}\)): this explains why the number of iterations grows as \(\sqrt{n}\), in the light of the classical results on conjugate gradient methods [1]. On the other hand, the optimal \(\tau \) preconditioner matches very well the extremal eigenvalues of \(A_{n}\) (see e.g. [9]).
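As an illustration of preconditioning with \(\Delta _{n}\), the following sketch of ours (not the authors' code) lets SciPy's `cg` play the role of the PCG, with a banded Cholesky solve standing in for the linear-cost Thomas algorithm:

```python
import numpy as np
from scipy.linalg import toeplitz, solveh_banded
from scipy.sparse.linalg import cg, LinearOperator

def A(n, N=8192):
    """A_n/n = T_n(f_n)/n via FFT-approximated Fourier coefficients."""
    h = 1.0/n
    theta = 2*np.pi*np.arange(N)/N
    ang = np.minimum(theta, 2*np.pi - theta)
    j = np.arange(n)[:, None]
    f = np.sum(ang**(2 - j*h) * h**(j*h), axis=0)
    return toeplitz((np.fft.fft(f)/N).real[:n])/n

n = 256
An = A(n)
b = np.ones(n)

# Preconditioner Delta_n = T_n(2 - 2cos(theta)), applied through a
# banded symmetric positive definite solve (linear cost per application).
ab = np.vstack([np.full(n, -1.0), np.full(n, 2.0)])   # upper-band storage
ab[0, 0] = 0.0
M = LinearOperator((n, n), matvec=lambda v: solveh_banded(ab, v))

count = [0]
x, info = cg(An, b, M=M, callback=lambda xk: count.__setitem__(0, count[0] + 1))
print(info, count[0])
```

The iteration count stays small and essentially independent of n, in line with the optimality claim; without the preconditioner it grows markedly with the matrix size.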

Thus, as a conclusion, we deduce that the preconditioners \(C_{S,n}\) (Strang-Circulant), \(\tau _{N,n}\) (Natural-\(\tau \)), \(\tau _{F,n}\) (Frobenius-Optimal \(\tau \)), and \(\Delta _{n}\) (Discrete FD Laplacian) are all optimal in the sense that the iteration count is bounded by constants independent of the matrix size n and the cost per iteration is that of the Fast Fourier Transform, which amounts to \(O(n\log n)\) arithmetic operations (for a formal notion of optimality see [2]). As the data indicate, the preconditioners \(\tau _{N,n}\), \(\tau _{F,n}\) are the best, but also \(\Delta _{n}\) is of interest given its sparsity.

In Tables 2, 3 and 4 the minimal and maximal eigenvalues of the considered preconditioned matrices are reported, to highlight the efficiency of the proposed preconditioners. Certainly, the data regarding the preconditioners \(\tau _{N,n}\) and \(\tau _{F,n}\) deserve further attention. The outlier analysis reported in Tables 5 and 6, respectively, seems to show a strong cluster at 1. At the very least, a weak clustering is observed, since the percentage of outliers decreases as n increases: we notice that the weak clustering can be deduced theoretically using the GLT theory [16], while the strong clustering is nontrivial, given the ill-conditioning of \(A_{n}\).

The last choice is a multigrid method designed with the use of the spectral information available. In fact, the projector and the restriction operators are those classically used when dealing with the standard discrete Laplacian \(\Delta _{n}=T_{n}(2-2\cos \theta )\). Indeed, even if \(\frac{1}{n}A_{n}\) is dense and seemingly there are no similarities with \(\Delta _{n}\), from a spectral point of view both matrix-sequences have minimal eigenvalue collapsing to zero as \(1\over n^{2}\), up to some moderate constants.

More precisely, the transfer operators are designed, in a very standard way, as the product of the Toeplitz matrix generated by the symbol \(2+2\cos \theta \) and a proper cutting matrix (see e.g. [15, 17]). A purely algebraic multigrid is considered, so that the matrices at the coarser levels are obtained by projection via the transfer operators, according to the Galerkin condition. As for the smoothers, we tested several choices, considering a standard Gauss-Seidel iteration or PCG with sine transform preconditioners, according to the natural approach and the Frobenius optimal choice: \(\nu _{\text {pre}}\) steps are applied as presmoother and \(\nu _{\text {post}}\) steps as postsmoother.
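In 1D the stated construction reads as follows (a sketch of ours; applied to \(\Delta _{n}\) for illustration, it reproduces, up to scaling, the coarse Laplacian):

```python
import numpy as np

def prolongation(n):
    """P = T_n(2 + 2cos(theta)) * K_n, with K_n the cutting matrix that
    keeps the columns at the odd grid points (n odd, coarse size (n-1)/2)."""
    nc = (n - 1)//2
    T = 2*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    K = np.zeros((n, nc))
    K[1::2, :] = np.eye(nc)              # select every other fine node
    return T @ K

n = 15
P = prolongation(n)
Delta = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Delta_c = P.T @ Delta @ P / 2            # Galerkin coarse matrix (rescaled)
```

The Galerkin condition gives \(P^{T}\Delta _{n}P=2\Delta _{(n-1)/2}\), i.e. the coarse operator is again a discrete Laplacian, which is why the same transfer operators are natural here.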

Table 1 Number of PCG iterations to reach convergence, with respect to a scaled residual less than \(10^{-7}\), with preconditioner \(I_{n}\) (no preconditioning), \(C_{S,n}\) (Strang-Circulant), \(C_{F,n}\) (Frobenius-Optimal Circulant), \(\tau _{N,n}\) (Natural-\(\tau \)), \(\tau _{F,n}\) (Frobenius-Optimal \(\tau \)), and \(\Delta _{n}\) (Discrete FD Laplacian)
Table 2 Minimal and maximal eigenvalues of preconditioned matrices with preconditioner \(C_{S,n}\) (Strang-Circulant) and \(C_{F,n}\) (Frobenius-Optimal Circulant)

As it can be seen in Table 7, the numerical results with the multigrid choice are fully satisfactory, as well: the method is optimal [2] in the sense that the iteration count is bounded by a constant independent of n and the cost per iteration is proportional to that of the matrix-vector product.

A last test deserves special mention: we set as presmoother the PCG with the Finite Difference discrete Laplacian preconditioner and \(\nu _{\text {pre}}=1\), and as postsmoother the PCG with the Natural-\(\tau \) or Frobenius-Optimal \(\tau \) preconditioner and \(\nu _{\text {post}}=1\), but only at the finest level. At all the coarser levels the smoothers are simply chosen as one iteration of the standard Gauss-Seidel method, thus further reducing the computational cost. The number of required iterations is 3, as in case \(\gamma \).

Though the efficiency is comparable with that of the PCG in Table 1 in the present unilevel setting, multigrid could become the most promising choice in the multilevel one, due to the theoretical barriers studied in [24, 32, 33] regarding the non-optimality of the PCG with matrix-algebra preconditioners in the multilevel context.

Table 3 Minimal and maximal eigenvalues of preconditioned matrices with preconditioner \(\tau _{N,n}\) (Natural-\(\tau \)) and \(\tau _{F,n}\) (Frobenius-Optimal \(\tau \))
Table 4 Minimal and maximal eigenvalues of preconditioned matrices with preconditioner \(\Delta _{n}\) (discrete FD Laplacian)
Table 5 Number of left (\(n_{\text {out}}^{l}\)) and right (\(n_{\text {out}}^r\)) outliers (eigenvalues not belonging to \((1 -\varepsilon , 1 + \varepsilon )\), with \(\varepsilon =10^{-1}\) and \(10^{-2}\)) and their percentage with respect to the dimension, in the case of the Natural-\(\tau \) preconditioner \(\tau _{N,n}\)
Table 6 Number of left (\(n_{\text {out}}^{l}\)) and right (\(n_{\text {out}}^r\)) outliers (eigenvalues not belonging to \((1 -\varepsilon , 1 + \varepsilon )\), with \(\varepsilon =10^{-1}\) and \(10^{-2}\)) and their percentage with respect to the dimension, in the case of the Frobenius-Optimal \(\tau \) preconditioner \(\tau _{F,n}\)
Table 7 Number of multigrid iterations to reach convergence, with respect to a scaled residual less than \(10^{-7}\): case \(\alpha \) = Gauss-Seidel (\(\nu _{\text {pre}}=1\))/Gauss-Seidel (\(\nu _{\text {post}}=1\)), case \(\beta \) = Gauss-Seidel (\(\nu _{\text {pre}}=1\))/PCG with Natural-\(\tau \) or Frobenius-Optimal \(\tau \) preconditioner (\(\nu _{\text {post}}=1\)), case \(\gamma \) = PCG with Discrete FD Laplacian preconditioner (\(\nu _{\text {pre}}=1\) or 2)/PCG with Natural-\(\tau \) or Frobenius-Optimal \(\tau \) preconditioner (\(\nu _{\text {post}}=1\)), case \(\delta \) = PCG with Discrete FD Laplacian preconditioner (\(\nu _{\text {pre}}=1\))/PCG with Natural-\(\tau \) or Frobenius-Optimal \(\tau \) preconditioner (\(\nu _{\text {post}}=2\))

Concerning CPU timings, they are consistent with the iteration count, but they are not reported, since our MATLAB implementation of the multigrid is not optimized.

4 Conclusions

In the current article we have considered a type of matrix stemming from the numerical approximation of distributed order FDEs (see [5, 21] for example). The main contribution consists of explicit bounds for the minimal eigenvalue of the involved matrices. In fact, the new bounds improve those already present in the literature and give more accurate spectral information. This knowledge has been used in the design of fast numerical algorithms for the associated linear systems approximating the given distributed order FDEs. An interesting challenge is to consider a d-dimensional version of the considered FDE (see [21]), in order to show how the presented techniques and numerical methods can be adapted and extended in a d-level setting with \(d\geqslant 2\).