Spectral Coarse Spaces for the Substructured Parallel Schwarz Method

The parallel Schwarz method (PSM) is an overlapping domain decomposition (DD) method for solving partial differential equations (PDEs). Similarly to classical nonoverlapping DD methods, the PSM admits a substructured formulation, that is, it can be formulated as an iteration acting on variables defined exclusively on the interfaces of the overlapping decomposition. In this manuscript, spectral coarse spaces are considered to improve the convergence and robustness of the substructured PSM. In this framework, the coarse space functions are defined exclusively on the interfaces. This is in contrast to classical two-level volume methods, where the coarse functions are defined in volume, though with local support. The approach presented in this work has several advantages. First, it allows one to use some of the well-known efficient coarse spaces proposed in the literature, and it facilitates the numerical construction of efficient coarse spaces. Second, the computational work is comparable to or lower than that of standard volume two-level methods. Third, it opens new interesting perspectives, as the analysis of the new two-level substructured method requires the development of a new convergence analysis of general two-level iterative methods. The new analysis sheds light on the optimality of coarse spaces: given a fixed dimension m, the spectral coarse space spanned by the first m dominant eigenvectors is not necessarily the minimizer of the asymptotic convergence factor. Numerical experiments demonstrate the effectiveness of the proposed new numerical framework.


Introduction
Consider a linear problem of the form Au = f, which we assume well posed in a vector space V. To define a two-level method for the solution of this problem, a one-level method and a coarse-correction step are required. One-level Schwarz methods are generally based on a splitting technique: the operator A : V → V is decomposed as A = M − N, where M : V → V is assumed invertible and represents the preconditioner. This splitting leads to a stationary iteration, namely u^{k+1} = M^{-1} N u^k + M^{-1} f, for k = 0, 1, …, and to a preconditioned system M^{-1} A u = M^{-1} f. These are strongly related, since the stationary iteration, if it converges, produces the solution of the preconditioned system; see, e.g., [1] and references therein. Therefore, DD methods can be used as stationary iterations or as preconditioners; see, e.g., [2][3][4][5][6]. Unfortunately, one-level DD methods are in general not scalable, and a coarse correction step is often desirable. See, e.g., [7][8][9][10][11][12] for exceptions and detailed scalability and non-scalability analyses.
A two-level method is characterized by the combination of a one-level method, defined on V, and a coarse correction step, performed on a coarse space V_c. The coarse space V_c is finite dimensional and must satisfy the condition dim V_c ≪ dim V. The mappings between V and V_c are realized by a restriction operator R : V → V_c and a prolongation operator P : V_c → V. In general, the restriction of A : V → V to V_c is defined as A_c = RAP, which is assumed to be an invertible matrix. Now, we distinguish two cases: a two-level stationary method and a two-level preconditioning method. In the first case, a stationary method is used as the first-level method. After each stationary iteration, which produces an approximation u_app, the residual r = f − Au_app is mapped from V to V_c, the coarse problem A_c e = Rr is solved to get e ∈ V_c, and the coarse correction step is defined as u_new = u_app + Pe. This correction provides the new approximation u_new. By repeating these operations iteratively, one gets a two-level stationary method. The preconditioner corresponding to this method is denoted by M_{s,2L}. Notice that this idea is very closely related to two-grid methods. In the second case, the first-level method is purely a preconditioner M^{-1}. The corresponding two-level preconditioner, denoted by M_{2L}, is generally obtained in an additive way: the one-level preconditioner M^{-1} is added to the coarse correction matrix P A_c^{-1} R. When used with appropriate implementations, the two preconditioners M_{2L} and M_{s,2L} require about the same computational effort per Krylov iteration. However, their different structures can lead to different performance of Krylov methods. The literature on two-level DD methods is very rich.
See, e.g., [7,12–19] for references considering two-level Schwarz stationary methods; [20–32] for references considering two-level Schwarz preconditioners; and [31,33–35] for references considering two-level substructuring preconditioners. See also the general classical references [4][5][6] and [36,37]. For any given one-level Schwarz method (stationary or preconditioning), the choices of V_c, P and R strongly influence the convergence behavior of the corresponding two-level method. A common choice is to use as a coarse space the span of the dominant eigenfunctions of the one-level iteration operator G := M^{-1} N. Such a coarse space, and more generally coarse spaces obtained as the span of some given functions, are usually called spectral coarse spaces, to distinguish them from geometric coarse spaces built implicitly using coarser meshes as in a multigrid framework.
In a more general context, fundamental results are presented in [38]: for a symmetric and positive definite A, it is proved that the coarse space of size m that minimizes the energy norm of the two-level iteration operator is exactly the spectral coarse space spanned by the first m dominant eigenfunctions of G. The sharp result of [38] provides a concrete (optimal) choice of V_c minimizing the energy norm of the two-level operator associated to a symmetric and positive definite A.
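To fix ideas, one cycle of the two-level stationary method described above (smoothing, restriction of the residual, coarse solve, prolongated correction) can be sketched as follows; the damped-Jacobi smoother and the spectral coarse space built from dominant eigenvectors of G are illustrative choices of ours:

```python
import numpy as np

# Sketch (our toy setup) of one cycle of a two-level stationary method:
# smoothing, residual restriction, coarse solve A_c e = R r, and the
# coarse correction u_new = u_app + P e. The damped-Jacobi smoother and
# the spectral coarse space are illustrative choices.
rng = np.random.default_rng(0)
n, m = 20, 4
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = rng.standard_normal(n)
M = np.diag(np.diag(A)) / 0.7                 # damped Jacobi, omega = 0.7
G = np.eye(n) - np.linalg.solve(M, A)         # one-level iteration operator

w, V = np.linalg.eigh(G)                      # G is symmetric here
P = V[:, np.argsort(-np.abs(w))[:m]]          # m dominant eigenvectors
R = P.T                                       # restriction
A_c = R @ A @ P                               # Galerkin coarse matrix

def two_level_cycle(u):
    u = G @ u + np.linalg.solve(M, f)         # one stationary (smoothing) step
    r = f - A @ u                             # residual, mapped to V_c by R
    e = np.linalg.solve(A_c, R @ r)           # coarse problem A_c e = R r
    return u + P @ e                          # corrected approximation

u = np.zeros(n)
for _ in range(300):
    u = two_level_cycle(u)
assert np.linalg.norm(A @ u - f) < 1e-10
```

The coarse correction removes exactly the slowly convergent components captured by the coarse space, which is why the combined iteration converges much faster than the smoother alone.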
Unfortunately, computing the eigenfunctions of the one-level method is often infeasible, and thus several works have proposed other spectral coarse spaces which are cheaper to obtain but still contain information about the slow eigenspace of the one-level method.
As a matter of fact, for a Schwarz iterative procedure, the error and the residual generally have very particular forms. The error is harmonic, in the sense of the underlying PDE operator, in the interior of the subdomains (excluding the interfaces). Moreover, it is predominant in the overlap. The residual is predominant on the interfaces and zero outside the overlap. For examples and more details, see, e.g., [13,17]. This difference has motivated, sometimes implicitly, the construction of different coarse spaces. On the one hand, many references use different techniques to define coarse functions in the overlap (where the error is predominant), and then extend them to the remaining part of the neighboring subdomains; see, e.g., [22-26, 28-30, 33]. On the other hand, in other works the coarse space is created by first defining basis functions on the interfaces, and then extending them (in different ways) to the portions of the neighboring subdomains; see, e.g., [7, 12, 13, 15, 17-21, 27, 28]. For a good, compact and complete overview of several of the different coarse spaces, we refer to [28, Section 5]. For other techniques and related discussions, see, e.g., [4, 14-16, 31, 39].
This work differs from the existing literature in that we introduce for the first time two-level substructured Schwarz methods. These are two-level stationary iterative methods, based on the Schwarz iteration, and the term "substructured" indicates that both the one-level iteration and the coarse spaces are defined on the interfaces (or skeletons). As in this manuscript we consider coarse spaces obtained as the span of certain interface functions, we call our two-level substructured Schwarz methods Spectral 2-level Substructured (S2S) methods. In this respect, they are defined in the same spirit as two-level methods whose coarse spaces are extensions in volume of interface basis functions.
Our S2S framework can accommodate several choices for the coarse space, e.g., the span of eigenfunctions of the one-level (substructured) Schwarz iteration operator, or the coarse spaces proposed in several papers such as [17,18,27,28]. From a numerical point of view, the S2S framework has several advantages compared to a classical two-level Schwarz method defined in volume. Since the coarse space functions are defined on the interfaces, less memory storage is required. For a three-dimensional problem with mesh size h, a discrete interface coarse function is an array of size O(1/h^2). This is much smaller than O(1/h^3), which is the size of an array corresponding to a coarse function in volume. For this reason, the resulting interface restriction and prolongation operators are smaller matrices, and thus the corresponding interpolation operations are cheaper to perform. Therefore, assuming that the one-level stationary iteration step and the dimension of the coarse space are the same for an S2S method and for a method in volume, each S2S iteration is generally computationally less expensive. In terms of iteration count, our S2S methods perform similarly to or faster than other two-level methods that use the same DD smoother. Notice also that the pre-computation part, which consists mainly of constructing the coarse space V_c and assembling the operators P, R and A_c, requires the same computational effort as for a method in volume. Moreover, the substructured nature of the S2S framework allows us to introduce two new procedures, based on PCA and on neural networks, to numerically build an efficient coarse space V_c. Direct numerical experiments will show that the coarse spaces generated by these two approaches either outperform the spectral coarse space and other commonly used coarse spaces, or lead to a very similar convergence behavior.
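The PCA-based construction is described in detail in Sect. 5; the following simplified sketch of ours (with an algebraic damped-Jacobi iteration standing in for the Schwarz smoother, and all sizes illustrative) conveys the idea: error snapshots of the one-level method are collected, and their leading principal components are used as coarse functions:

```python
import numpy as np

# Simplified sketch of a PCA-built coarse space (the actual procedure is
# described in Sect. 5; here a damped-Jacobi iteration stands in for the
# substructured Schwarz smoother, and all sizes are illustrative).
rng = np.random.default_rng(1)
n, m = 30, 3
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.diag(np.diag(A)) / 0.7
G = np.eye(n) - np.linalg.solve(M, A)   # one-level error operator

# Error snapshots e^k = G^k e^0 from several random initial errors.
snaps = []
for _ in range(10):
    e = rng.standard_normal(n)
    for _ in range(20):
        e = G @ e
        snaps.append(e / np.linalg.norm(e))

# PCA: the leading left singular vectors of the snapshot matrix span the
# slowly convergent error modes and are used as coarse functions.
U, _, _ = np.linalg.svd(np.array(snaps).T, full_matrices=False)
P = U[:, :m]
R = P.T
A_c = R @ A @ P

T = (np.eye(n) - P @ np.linalg.solve(A_c, R @ A)) @ G
rho_one = max(abs(np.linalg.eigvals(G)))
rho_two = max(abs(np.linalg.eigvals(T)))
assert rho_two < rho_one < 1    # the coarse space accelerates the iteration
```

The snapshots become richer in the dominant (slow) error modes as the iteration proceeds, so their principal components approximate the slow eigenspace without ever computing eigenfunctions explicitly.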
In our substructured framework, the matrix A is non-symmetric; thus the optimality result of [38] does not hold. Relying on our previous work [41], we provide a new general convergence analysis, based on an infinite-matrix representation of the two-level operator, which covers our substructured formulation. This analysis has rather general applicability: it can be used to tackle non-symmetric problems, and it allows us to show, perhaps surprisingly, in which cases a spectral coarse space is not (asymptotically) optimal. Indeed, even for symmetric matrices, [38] provides an optimality result for the norm of the iteration operator, which is generally only an upper bound for the asymptotic convergence factor. Specifically, we show that a spectral coarse space made of the first m dominant eigenfunctions of G is not necessarily the coarse space of dimension m minimizing the spectral radius of the two-level operator based on the Schwarz iteration. We show this both theoretically and numerically, using principal component analysis (PCA) and deep neural networks to build numerically more efficient coarse spaces.
This paper is organized as follows. In Sect. 2, we formulate the classical parallel Schwarz method in substructured form. This is done at the continuous level and represents the starting point for the S2S method introduced in Sect. 3. A detailed convergence analysis is presented in Sect. 4. Section 5 discusses both PCA-based and deep neural network approaches to numerically create an efficient coarse space. Extensive numerical experiments are presented in Sect. 6, where the robustness of the proposed methods with respect to mesh refinement and physical (jumping) parameters is studied. We present our conclusions in Sect. 7. Finally, important implementation details are discussed in the Appendix.

Consider a bounded Lipschitz domain Ω ⊂ R^d for d ∈ {2, 3}, a general second-order linear elliptic operator L, and a function f ∈ L^2(Ω). Our goal is to introduce new domain decomposition methods for the efficient numerical solution of the general linear elliptic problem (1), which we assume to admit a unique solution u ∈ H^1_0(Ω). To formulate our methods we need to fix some notation. Given a bounded set Γ with boundary ∂Γ, we denote by ρ_Γ(x) the function representing the distance of x ∈ Γ from ∂Γ. We can then introduce the space H^{1/2}_{00}(Γ), also known as the Lions–Magenes space; see, e.g., [5,42,43]. Notice that H^{1/2}_{00}(Γ) can be equivalently defined as the space of functions in H^{1/2}(Γ) whose extensions by zero to a superset Γ′ ⊃ Γ are in H^{1/2}(Γ′) [43].
Next, consider a decomposition of Ω into N overlapping Lipschitz subdomains Ω_j, that is, Ω = ∪_{j∈I} Ω_j with I := {1, 2, …, N}. For any j ∈ I, we define the set of neighboring indices N_j := {ℓ ∈ I : Ω_j ∩ ∂Ω_ℓ ≠ ∅}. Given a j ∈ I, we introduce the substructure of Ω_j defined as S_j := ∪_{ℓ∈N_j} (Ω_j ∩ ∂Ω_ℓ), that is, the union of all the portions of the boundaries ∂Ω_ℓ lying in Ω_j. Figure 1 provides an illustration of substructures corresponding to a commonly used decomposition of a rectangular domain. The substructure of Ω is defined as S := ∪_{j∈I} S_j. We denote by E^0_j : L^2(S_j) → L^2(S) the extension-by-zero operator. Now, we consider a set of continuous functions χ_j : S_j → [0, 1], j = 1, …, N, such that Σ_{j∈I} E^0_j χ_j ≡ 1, which means that the functions χ_j form a partition of unity on S. Further, we assume that the functions χ_j, j ∈ I, satisfy the condition (2). For any j ∈ I, we define Γ^int_j := ∂Ω_j ∩ (∪_{ℓ∈N_j} Ω_ℓ) and introduce the corresponding trace and restriction operators τ_j. It is well known that (1) is equivalent to the domain decomposition system (3), where f_j ∈ L^2(Ω_j) is the restriction of f to Ω_j (see, e.g., [5]). Notice that, since τ_ℓ u ∈ H^{1/2}(S_ℓ), the properties of the partition of unity functions χ_ℓ guarantee that χ_ℓ τ_ℓ u lies in H^{1/2}_{00}(S_ℓ). Given a j ∈ I such that ∂Ω_j \ Γ^int_j ≠ ∅, we define the extension operator E_j(v, f_j) for v ∈ H^{1/2}(Γ^int_j). The DD system (3) can then be written as (6). If we define v_j := χ_j τ_j u_j, j ∈ I, then system (6) becomes (7), where g_j := χ_j τ_j E_j(0, f_j) and the operators G_{j,ℓ} map H^{1/2}_{00}(S_ℓ) into H^{1/2}_{00}(S_j). System (7) is the substructured form of (3). The equivalence between (3) and (7) is explained by the following theorem.
Theorem 1 (Equivalence between (3) and (7)) If (u_j)_{j∈I} solves (3), then (v_j)_{j∈I} with v_j := χ_j τ_j u_j solves (7); conversely, if (v_j)_{j∈I} solves (7), then (u_j)_{j∈I} solves (3). Proof The first statement is proved before Theorem 1, where the substructured system (7) is derived. To obtain the second statement, we use (7) and the definition of v_j. The claim follows by using this equality together with the definitions of u_j and E_j.
Take any function w ∈ H^1_0(Ω) and consider the initialization u^0_j := w|_{Ω_j}, j ∈ I. The parallel Schwarz method (PSM) is given by (9) for n ∈ N_+, and has the substructured form (10), initialized by v^0_j := χ_j τ_j(u^0_j) ∈ H^{1/2}_{00}(S_j). Notice that the iteration (10) is well posed in the sense that v^n_j ∈ H^{1/2}_{00}(S_j) for j ∈ I and n ∈ N. Equations (10) and (7) allow us to obtain the substructured PSM in error form, that is (11), for n ∈ N_+, where e^n_j := v_j − v^n_j for j ∈ I and n ∈ N. Equation (7) can be written in the compact form Av = b, where the operator A is defined in (12) in terms of the identities Id_j on L^2(S_j), j ∈ I, and of the operators G_{j,ℓ}. Similarly, we define G as the operator collecting the blocks G_{j,ℓ}, so that A = Id − G.
It is a standard result that the PSM iteration v n = Gv n−1 +b converges; see, e.g., [11] for a convergence result of the PSM in a substructured form, [8-10, 12, 40] for other convergence results and [4,6] for standard references. The corresponding limit is the solution to the problem Av = b.
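A minimal 1D illustration of the substructured viewpoint (our own toy setup, not from the paper: −u'' = 1 on (0,1) with two subdomains overlapping in (0.4, 0.6)) shows that the PSM can indeed be driven by the two interface values alone, and that the interface error contracts geometrically:

```python
import numpy as np

# Minimal 1D illustration (our own toy setup, not from the paper) of the
# substructured PSM: for -u'' = 1 on (0,1) with u(0) = u(1) = 0, decomposed
# into Omega_1 = (0, 0.6) and Omega_2 = (0.4, 1), the iteration acts only on
# the two interface values v_a ~ u(0.4) and v_b ~ u(0.6).
h = 1e-2
x = np.arange(0.0, 1.0 + h / 2, h)     # 101 grid points
ia, ib = 40, 60                        # interface indices: x = 0.4 and x = 0.6

def solve_poisson(n_int, ga, gb):
    """Solve -u'' = 1 on n_int interior nodes with Dirichlet data ga, gb."""
    A = (2 * np.eye(n_int) - np.eye(n_int, k=1) - np.eye(n_int, k=-1)) / h**2
    rhs = np.ones(n_int)
    rhs[0] += ga / h**2
    rhs[-1] += gb / h**2
    return np.linalg.solve(A, rhs)

# Reference global solve, used only to measure the interface error.
u_ref = np.zeros(len(x))
u_ref[1:-1] = solve_poisson(len(x) - 2, 0.0, 0.0)

v_a, v_b = 0.0, 0.0                    # substructured unknowns
errs = []
for _ in range(60):
    u1 = solve_poisson(ib - 1, 0.0, v_b)             # solve on Omega_1
    u2 = solve_poisson(len(x) - ia - 2, v_a, 0.0)    # solve on Omega_2
    v_a, v_b = u1[ia - 1], u2[ib - ia - 1]           # new interface traces
    errs.append(abs(v_a - u_ref[ia]) + abs(v_b - u_ref[ib]))

assert errs[-1] < 1e-9                 # geometric convergence of the traces
```

For this symmetric configuration the interface error contracts by a factor 0.4/0.6 = 2/3 per iteration: the error is discrete harmonic, hence linear, in each subdomain.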

S2S: Spectral Two-Level Substructured Schwarz Method
The idea of the S2S method is to use a coarse space V_c defined as the span of certain linearly independent functions defined on the skeletons of the subdomains Ω_j, for j ∈ I. Consider the space H, endowed with an inner product ⟨·, ·⟩, and a set of m > 0 linearly independent functions ψ_k, k = 1, …, m. Notice that each ψ_k has the form ψ_k = (ψ^1_k, …, ψ^N_k), with the component ψ^j_k defined on S_j. To define a two-level method, we need restriction and prolongation operators. Once the coarse space V_c is constructed, the choice of these operators follows naturally. We define the prolongation operator P : R^m → H and the restriction operator R : H → R^m as Pv := Σ_{k=1}^m v_k ψ_k and Rh := (⟨ψ_1, h⟩, …, ⟨ψ_m, h⟩)^T, for any v = (v_1, …, v_m)^T ∈ R^m and h ∈ H. Notice that, if the functions ψ_k are orthonormal, P is the adjoint operator of R and we have RP = I_m, where I_m is the identity matrix in R^{m×m}. The restriction of the operator A to V_c is the matrix A_c ∈ R^{m×m} obtained in a Galerkin manner, A_c = RAP.
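In a discrete setting, the operators in (13) take a particularly simple form; the following sketch (with arbitrary sizes and an arbitrary invertible matrix A, both chosen by us for illustration) checks the identity RP = I_m for orthonormal coarse functions and assembles the Galerkin coarse matrix:

```python
import numpy as np

# Discrete sketch of the operators in (13): the coarse functions psi_k are
# the columns of Psi, P maps coefficients to sum_k v_k psi_k, and R takes
# the inner products <psi_k, h>. Sizes and the matrix A are illustrative.
rng = np.random.default_rng(2)
n, m = 25, 4
Psi, _ = np.linalg.qr(rng.standard_normal((n, m)))   # orthonormal psi_k

P = Psi        # P v = sum_k v_k psi_k
R = Psi.T      # (R h)_k = <psi_k, h>

# For orthonormal coarse functions, P is the adjoint of R and R P = I_m.
assert np.allclose(R @ P, np.eye(m))

# Galerkin coarse matrix A_c = R A P for some invertible A.
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
A_c = R @ A @ P
assert np.linalg.matrix_rank(A_c) == m   # invertible in this instance
```

With a non-orthonormal basis, RP becomes the Gram matrix of the coarse functions instead of the identity, which is why the choice of inner product matters in practice.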
With the operators P, R and A_c in hand, our two-level method is defined as a classical two-level strategy applied to the substructured problem (7), using the domain decomposition iteration (10) as a smoother. This results in Algorithm 1, where n_1 and n_2 are the numbers of pre- and post-smoothing steps. The well-posedness of Algorithm 1 is proved in the next lemma.
Lemma 1 (Well-posedness of S2S) Consider the inner product space (H, ⟨·, ·⟩), a set of linearly independent functions {ψ_k}_{k=1,…,m} for some m > 0, and let V_c := span{ψ_1, …, ψ_m} be a finite-dimensional subspace of H. Let P and R be defined as in (13) (with ⟨·, ·⟩). If A_c = RAP is invertible and the initialization vector u^0 is chosen in H, then u^{n_2} (computed at Step 5 of Algorithm 1) is in H.
Proof It is sufficient to show that, for a given u^0 ∈ H, all the steps of Algorithm 1 are well posed. Since b ∈ H, G : H → H and A : H → H, Steps 1 and 2 produce u^{n_1} and r in H. Step 3 is well posed because A_c is assumed to be invertible. Since V_c is a subset of H, Pu_c and u^0 in Step 4 lie in H. Clearly, the element u^{n_2} produced by Step 5 is also in H. Therefore, by induction, Algorithm 1 is well posed in H.
The key hypothesis of Lemma 1 is the invertibility of the coarse matrix A_c. An equivalent characterization of this property is proved in Sect. 4.1. This result (and the discussion thereafter) allows us to obtain the invertibility of A_c if, e.g., V_c is a spectral coarse space. Moreover, it is worth remarking that the pseudo-inverse of A_c can be used in case A_c is not invertible; see [38].
Let us now turn our attention to the coarse space V_c. We distinguish two general classes of coarse space functions: global and local coarse functions. Global coarse functions are functions defined directly on the global skeleton of Ω. An ideal choice of global coarse functions would be to define V_c as the span of the dominant eigenfunctions of the one-level operator G. In the context of multigrid methods, this choice is extensively discussed in [38], where the authors prove that, if A and the preconditioner corresponding to the one-level iteration are symmetric, the spectral coarse space minimizes the energy norm of T. This sharp result provides a concrete optimal choice of V_c minimizing the energy norm of T, but the energy norm is generally only an upper bound for the asymptotic convergence factor ρ(T), and the result does not extend to the non-symmetric case (the substructured matrix A is non-symmetric), as we will see in Sect. 4.2. Moreover, in Sect. 5 we will present two numerical approaches, based on PCA and on neural networks, for the construction of global coarse space functions. These are generally different from the spectral ones and may lead to better convergence.
Another possibility is to build local coarse functions using eigenfunctions of the local operators G_j. However, the eigenfunctions of G_j (or G) are known only in very special cases, and their numerical computation can be quite expensive. To overcome this problem, one can define V_c as the span of Fourier basis functions, which can be obtained by solving a Laplace–Beltrami eigenvalue problem on each interface (or skeleton); see, e.g., [27,28]. In this case, assuming that local basis functions ψ^j_k ∈ H^{1/2}_{00}(S_j) (endowed with inner product ⟨·, ·⟩_j) are available, the coarse space V_c can be constructed as V_c := span{e_j ⊗ ψ^j_k : j ∈ I, k = 1, …, m} for some positive integer m, where ⊗ denotes the standard Kronecker product and e_j, for j ∈ I, are the canonical vectors in R^N. In this case, the prolongation and restriction operators defined in (13) act componentwise for any v_1, …, v_N ∈ R^m and any (h_1, …, h_N) ∈ H. We remark that the choice of the inner product ⟨·, ·⟩ (or ⟨·, ·⟩_j for j ∈ I) in the definition of P and R is arbitrary. One possible choice is the classical H^{1/2} inner product. However, this could be too expensive from a numerical point of view. Another possibility is the classical L^2 inner product, which is the choice we make in our implementations.
A detailed convergence analysis that covers our S2S method is presented in Sect. 4. It is based on the general structure of a two-level iteration operator. A direct calculation reveals that one iteration of the S2S method can be written as u^{new} = G^{n_2}(I − P A_c^{-1} R A) G^{n_1} u^{old} + M b, where I is the identity operator over H; see also [13,15,37]. Here, M is an operator acting on the right-hand side vector b, which can be regarded as the preconditioner corresponding to our two-level method. In error form, iteration (15) becomes e^{new} = Te^{old} with T := G^{n_2}(I − P A_c^{-1} R A) G^{n_1}, where e^{new} := u − u^{new} and e^{old} := u − u^{old}. Hence, to prove convergence of the S2S method we study the operator T.
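The error form of the two-level iteration can be checked numerically. In the sketch below (an algebraic stand-in of ours, with a damped-Jacobi smoother rather than the substructured Schwarz operator itself), one explicit smoothing/correction cycle applied to the error coincides with the operator T, and the two-level operator contracts faster than the one-level one:

```python
import numpy as np

# Numerical check (our algebraic stand-in, damped Jacobi instead of the
# Schwarz smoother) of the error form of the two-level iteration: one
# explicit cycle equals T = G^{n2} (I - P A_c^{-1} R A) G^{n1}.
rng = np.random.default_rng(3)
n, m, n1, n2 = 30, 3, 1, 1
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.diag(np.diag(A)) / 0.7
G = np.eye(n) - np.linalg.solve(M, A)          # smoother error operator

w, V = np.linalg.eigh(G)
P = V[:, np.argsort(-np.abs(w))[:m]]           # spectral coarse space
R = P.T
A_c = R @ A @ P

C = np.eye(n) - P @ np.linalg.solve(A_c, R @ A)   # coarse-correction factor
T = np.linalg.matrix_power(G, n2) @ C @ np.linalg.matrix_power(G, n1)

# The two-level operator contracts faster than the smoother alone.
assert max(abs(np.linalg.eigvals(T))) < max(abs(np.linalg.eigvals(G)))

# One explicit cycle on a random error agrees with T e_old.
e_old = rng.standard_normal(n)
e = np.linalg.matrix_power(G, n1) @ e_old            # pre-smoothing
e = e - P @ np.linalg.solve(A_c, R @ (A @ e))        # coarse correction
e = np.linalg.matrix_power(G, n2) @ e                # post-smoothing
assert np.allclose(e, T @ e_old)
```

Studying ρ(T), rather than a norm of T, is precisely what the convergence analysis of the next section is about.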

Convergence Analysis
In this section, we provide convergence results for two-level iterative methods in a general framework that covers the setting of the S2S domain decomposition method presented in Sect. 3.
Let (X, ⟨·, ·⟩) be a complex inner-product space and Ax = b a linear problem, where the operator A : X → X is bijective and b ∈ X is a given vector. Consider a set of m > 0 linearly independent functions {ψ_k}_{k=1,…,m}, and denote by V_c the finite-dimensional subspace of X defined as the span of the functions {ψ_k}_{k=1,…,m}. We denote by P : C^m → X and R : X → C^m the prolongation and restriction operators defined as in (13), and define the matrix A_c := RAP ∈ C^{m×m}. Given a smoothing operator G : X → X, a two-level iterative method (such as the one defined in Algorithm 1) is characterized by the iteration operator T : X → X defined by T := G^{n_2}(I − P A_c^{-1} R A) G^{n_1} (17), where I : X → X is the identity operator. In what follows, the properties of T are analyzed. In particular, the invertibility of A_c is characterized in Sect. 4.1, and the convergence (spectral) properties of T are discussed in the case of global coarse functions in Sect. 4.2 and in the case of local coarse functions in Sect. 4.3.

Invertibility of the Coarse Matrix
The well-posedness of a two-level method (like the S2S method) is essentially related to the invertibility of the coarse operator A_c. Even though one could replace the inverse of A_c with its pseudo-inverse, as discussed, e.g., in [38], in our analysis we will assume that A_c is invertible. The next lemma provides an equivalent characterization of the invertibility of A_c.

Lemma 2 (Invertibility of the coarse operator) If AV_c ∩ V_c^⊥ = {0}, then A_c = RAP has full rank.
Proof The result follows from the rank–nullity theorem if we show that the only element in the kernel of A_c is the zero vector. To do so, we recall the definitions of P and R given in (13). Let z ∈ C^m. Clearly, Pz = 0 if and only if z = 0, and for any z ∈ C^m the function Pz is in V_c. Since A is invertible, APz = 0 if and only if z = 0. Now, suppose that A_c z = R(APz) = 0 for some z ≠ 0. The definition of R then implies that APz ∈ V_c^⊥, while APz is a nonzero element of AV_c; this contradicts the assumption AV_c ∩ V_c^⊥ = {0}. Hence the kernel of A_c is trivial and A_c has full rank.
The following example shows that the invertibility of A does not necessarily imply the invertibility of A_c.

Example 1 Consider an invertible matrix A such that Ae_1 = e_2, and let V_c := span{e_1}. This gives V_c^⊥ = span{e_2}. The prolongation and restriction operators are P = e_1 and R = P^T. Since Ae_1 = e_2 ∈ V_c^⊥, we obtain A_c = RAP = e_1^T A e_1 = 0, which is not invertible.

The condition of Lemma 2 is satisfied for operators of the form A = I − G, as for instance those defined in (12), if the functions ψ_k are eigenfunctions of G. However, it represents only a sufficient condition for the invertibility of A_c: there exist invertible operators A that do not satisfy this condition, but nevertheless lead to an invertible A_c.

Global Coarse Functions
In this section, we study general convergence properties of the operator T. The first theorem characterizes the relation between the kernel of T and the coarse space V c .
Theorem 2 (Kernel of T, coarse space V_c) Let P and R be defined as in (13) by linearly independent functions ψ_1, …, ψ_m such that A_c = RAP is invertible. For any ψ ∈ X it holds that ψ ∈ V_c if and only if (I − P A_c^{-1} R A)ψ = 0.
Proof Assume that ψ ∈ V_c. This implies that there exists a vector z such that ψ = Pz. Hence, we can compute (I − P A_c^{-1} R A)Pz = Pz − P A_c^{-1}(RAP)z = Pz − Pz = 0. Let us now prove the reverse implication. If (I − P A_c^{-1} R A)ψ = 0, then ψ = P(A_c^{-1} R A ψ) ∈ V_c; hence, by contraposition, ψ ∉ V_c implies (I − P A_c^{-1} R A)ψ ≠ 0.
To continue our analysis, we construct a matrix representation of the operator T defined in (17). From now on, for the sake of simplicity, we set n_1 = 1 and n_2 = 0. We further consider the following assumptions: (H1) V_c is the span of m linearly independent functions {p_k}_{k=1}^m ⊂ X, which are used to define the operators P and R as in (13). (H2) The operators A and G have the same linearly independent eigenvectors {ψ_k}_{k=1}^∞; the corresponding eigenvalues of A and G are denoted by λ̃_k and λ_k, respectively.
Remark 1 Notice that hypothesis (H2) is valid in the context of our S2S method, where the operators A and G satisfy the relation A = I − G; hence, they have the same eigenvectors. Moreover, hypothesis (H3) is satisfied if G corresponds to a classical parallel Schwarz method, as in the case of our S2S method. The classical damped Jacobi method is another important instance satisfying (H2) and (H3). Moreover, if one supposes that the vectors {ψ_k}_{k=1}^∞ in (H2) are orthogonal, or that X is finite dimensional, then (H4) is satisfied as well.
Let us now construct a matrix representation of the operator T. Since V_c ⊆ span{ψ_k}_{k=1}^m, the structure of T implies that the set span{ψ_k}_{k=1}^m is invariant under T. Therefore, for any ψ_j there exist at most m + 1 nonzero coefficients t_{j,ℓ} such that Tψ_j = t_{j,j}ψ_j + Σ_{ℓ=1, ℓ≠j}^m t_{j,ℓ}ψ_ℓ. If we order the coefficients t_{j,ℓ} into an infinite matrix denoted by T̃, we obtain the block structure (20), with a leading block T_m acting on the coarse modes, a coupling block X, and a diagonal block Λ_m containing the eigenvalues λ_j for j > m. The infinite matrix T̃ can be regarded as a linear operator acting on the space of sequences.
The matrix representation (20) turns out to be very useful for analyzing the convergence properties of the operator T. The powers T̃^n can be computed by an induction argument, as in (21). If the matrix T_m is nilpotent with degree q ∈ N_+, that is, T_m^p = 0 for all p ≥ q, then for n > q the powers simplify as in (22). Let us begin with the case where the linear operators A and G are bounded and self-adjoint, and the functions {ψ_k}_{k=1}^∞ form an orthonormal basis with respect to an inner product (·, ·) (not necessarily equal to ⟨·, ·⟩) such that (X, (·, ·)) is a Hilbert space. We denote by ‖·‖_X the norm induced by (·, ·) and also the corresponding operator norm. Notice that, since A and G are bounded, T is bounded as well. Thus, we can study the asymptotic convergence factor ρ(T) defined by lim_{n→∞} ‖T^n‖_X^{1/n} = ρ(T); see, e.g., [44, Chapter 17]. Since we assumed that the {ψ_k}_{k=1}^∞ are orthonormal with respect to (·, ·), a direct calculation allows one to prove that ‖T‖_X = ‖T̃‖_2. Hence, we obtain ρ(T) = lim_{n→∞} ‖T^n‖_X^{1/n} = lim_{n→∞} ‖T̃^n‖_2^{1/n}. Notice that, since T is a bounded operator and ‖T‖_X = ‖T̃‖_2, the matrix T̃ is bounded in the ‖·‖_2 norm. Thus, the submatrices X, Λ_m and T_m are bounded in the ‖·‖_2 norm as well. Therefore, Eq. (22) allows us to estimate ρ(T). Recalling (24), one obtains for n > q that ‖T̃^n e_{m+1}‖_2 ≥ |λ_{m+1}|^n, where e_{m+1} ∈ ℓ^2 is the (m+1)-th canonical vector. This estimate implies that ρ(T) = lim_{n→∞} ‖T̃^n‖_2^{1/n} ≥ |λ_{m+1}|; combined with the upper bound obtained from (22), this yields ρ(T) = |λ_{m+1}|. Using Theorem 2, it is possible to see that this value is attained when V_c = span{ψ_k}_{k=1}^m. We can summarize these findings in the next theorem.
Theorem 3 (Convergence of a two-level method) Let hypotheses (H1), (H2), (H3) and (H4) be satisfied, let A and G be self-adjoint, and assume that the functions {ψ_k}_{k=1}^∞ form an orthonormal basis with respect to an inner product (·, ·) such that (X, (·, ·)) is a Hilbert space. Then ρ(T) = lim_{n→∞} ‖T^n‖_X^{1/n} = |λ_{m+1}|, where ‖·‖_X is the operator norm defined in (23).
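This conclusion can be verified numerically in a small symmetric model with A = I − G, orthonormal eigenvectors, and eigenvalues chosen by us for illustration: with the spectral coarse space of dimension m, the spectral radius of T equals |λ_{m+1}|:

```python
import numpy as np

# Numerical check of rho(T) = |lambda_{m+1}| for a spectral coarse space in
# a small symmetric model with A = I - G, orthonormal eigenvectors, and
# eigenvalues chosen by us (sorted by decreasing modulus).
lam = np.array([0.9, 0.7, 0.5, -0.4, 0.2, -0.1])
rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))   # orthonormal eigenvectors
G = Q @ np.diag(lam) @ Q.T
A = np.eye(6) - G                                  # A and G share eigenvectors

m = 2
P = Q[:, :m]                # coarse space: the m dominant eigenvectors
R = P.T
A_c = R @ A @ P
T = (np.eye(6) - P @ np.linalg.solve(A_c, R @ A)) @ G

rho = max(abs(np.linalg.eigvals(T)))
assert abs(rho - abs(lam[m])) < 1e-10   # rho(T) = |lambda_3| = 0.5
```

The coarse correction annihilates the first m eigendirections (Theorem 2), and the remaining spectrum of T is exactly that of G restricted to the complementary modes.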
In the case of a spectral coarse space, the expression of T̃ in (20) simplifies. The following result holds.

Theorem 4 Let the assumptions of Theorem 3 be satisfied and let V_c = span{ψ_k}_{k=1}^m. Then T_m = 0 and X = 0, where T_m and X are the submatrices defined in (20).
Proof Since V_c = span{ψ_k}_{k=1}^m, Theorem 2 implies that T_m = 0. Thus, to obtain the result, it is sufficient to show that all the components of the submatrix X (see (20)) are zero. These components are x_{j,ℓ} = t_{j,ℓ} for j > m and ℓ ≤ m. Thus, we assume that j > m and ℓ ≤ m, recall the formula Tψ_j = t_{j,j}ψ_j + Σ_{k=1, k≠j}^m t_{j,k}ψ_k, and take the inner product with ψ_ℓ to obtain (ψ_ℓ, Tψ_j) = t_{j,ℓ}. Since A and G are self-adjoint, one obtains by a direct calculation that (ψ_ℓ, Tψ_j) = λ_j ([I − APA_c^{-1}R]ψ_ℓ, ψ_j). Since ℓ ≤ m, the orthogonality of the functions {ψ_k}_{k=1}^∞ and hypothesis (H4) imply that ([I − APA_c^{-1}R]ψ_ℓ, ψ_j) = 0. Hence, the result follows.
Theorem 4 implies directly that ρ(T) = |λ_{m+1}| (25). Let us now assume that A is positive definite, so that there exists a unique positive square root operator A^{1/2} [45, Theorem 6.6.4]. A straightforward calculation leads to ‖S‖_A = ‖A^{1/2} S A^{-1/2}‖_X (see, e.g., [46, Section C.1.3] for a finite-dimensional matrix counterpart). Notice that, as for T and T̃, we can obtain a matrix representation of A^{1/2} T A^{-1/2}. If V_c = span{ψ_k}_{k=1}^m, then Theorem 4 implies that ‖T‖_A = |λ_{m+1}|. It has been proved in [38, Theorem 5.5] that this result is optimal, in the sense that, if A and G are symmetric and positive (semi-)definite, then the coarse space V_c = span{ψ_k}_{k=1}^m minimizes the energy norm of the two-level operator T. Clearly, if A has positive and negative eigenvalues (even though it remains symmetric), this result is no longer valid. In this case, as we are going to see in Theorem 6, the coarse space V_c = span{ψ_k}_{k=1}^m is not necessarily (asymptotically) optimal.
The situation is very different if the functions {ψ_k}_{k=1}^∞ are not orthogonal and A is not symmetric. To study this case, we work in a finite-dimensional setting and assume that X = C^N = span{ψ_k}_{k=1}^N. Thus, both T and T̃ are matrices in C^{N×N}, and T is similar to T̃ via the matrix Q := [ψ_1, …, ψ_N]. This means that T and T̃ are similar matrices and, thus, have the same spectrum. Hence, using Theorem 2, we obtain a finite-dimensional counterpart of Theorem 3 which does not require the orthogonality of {ψ_k}_{k=1}^N.
Theorem 5 (Convergence of a two-level method in finite dimension) Assume that X = C^N and let hypotheses (H1), (H2), (H3) and (H4) be satisfied. If V_c = span{ψ_k}_{k=1}^m, then ρ(T) = |λ_{m+1}|.
However, the spectral coarse space V_c = span{ψ_k}_{k=1}^m is not necessarily (asymptotically) optimal. A different choice can lead to better asymptotic convergence, or even to a divergent two-level method. To show these results, we consider an analysis based on the perturbation of functions belonging to the coarse space V_c = span{ψ_k}_{k=1}^m. We have seen in Theorem 2 that an eigenvector of G is in the kernel of the two-level operator T if and only if it belongs to V_c. Assume that the coarse space cannot represent exactly one eigenvector ψ of G. How is the convergence of the method affected? Let us perturb the coarse space V_c using the eigenvector ψ_{m+1}, that is, we replace ψ_m by ψ_m + εψ_{m+1}. In this case, (21) holds with m̃ = m + 1, and the leading block T_m̃(ε) ∈ C^{m̃×m̃} is given by (26), where we make explicit the dependence on ε. Notice that ε = 0 clearly leads to T_m̃(0) = diag(0, …, 0, λ_{m+1}) ∈ C^{m̃×m̃}, and we are back to the unperturbed case, with T̃(0) = T̃ having spectrum {0, λ_{m+1}, …, λ_N}. Now, notice that min_{ε∈R} ρ(T̃(ε)) ≤ ρ(T̃(0)) = |λ_{m+1}|. Thus, it is natural to ask: is this inequality strict? Can one find an ε̂ ≠ 0 such that ρ(T̃(ε̂)) = min_{ε∈R} ρ(T̃(ε)) < ρ(T̃(0))? If the answer is positive, then we can conclude that choosing the coarse vectors equal to the dominant eigenvectors of G is not an optimal choice. Moreover, one can ask the opposite question: can one find a perturbation of the eigenvectors that leads to a divergent method (ρ(T̃(ε)) > 1)? The next key result provides precise answers to these questions in the case m = 1.
Proof Since m = 1, a direct calculation allows us to compute the matrix T̃(ε) given in (26). Hence, point (A) follows recalling (25).
Theorem 6 and its proof say that, if the two eigenvalues λ_1 and λ_2 have opposite signs (but possibly equal modulus), then it is always possible to find an ε ≠ 0 such that the coarse space Ṽ_c := span{ψ_1 + εψ_2} leads to a faster method than V_c := span{ψ_1}, even though both are one-dimensional subspaces. In addition, if λ_3 = 0, the former leads to a two-level operator T with a larger kernel than the one corresponding to the latter. The situation is completely different if λ_1 and λ_2 have the same sign. In this case, the orthogonality parameter γ is crucial. If ψ_1 and ψ_2 are orthogonal (γ = 0), then one cannot improve V_c := span{ψ_1} by a simple perturbation using ψ_2. However, if ψ_1 and ψ_2 are not orthogonal (γ ≠ 0), then one can still find an ε ≠ 0 such that ρ(T̃(ε)) < ρ(T̃(0)).
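The opposite-sign case can be checked numerically. The following minimal NumPy sketch assumes the two-level operator has the form T(ε) = (I − P A_c^{-1} P^⊤ A)G with one pre-smoothing step (our reading of (17)); the toy diagonal smoother and its eigenvalues are illustrative choices, not taken from the paper's experiments.

```python
import numpy as np

# Toy smoother with lambda_1 = -0.9, lambda_2 = 0.9 (opposite signs, equal modulus),
# lambda_3 = 0.5, and orthogonal eigenvectors (the canonical basis, so gamma = 0).
G = np.diag([-0.9, 0.9, 0.5])
A = np.eye(3) - G

def rho_two_level(p):
    # Spectral radius of T = (I - P Ac^{-1} P^T A) G for the 1-D coarse space span{p}.
    P = p.reshape(-1, 1)
    Ac = P.T @ A @ P
    T = (np.eye(3) - P @ np.linalg.solve(Ac, P.T @ A)) @ G
    return max(abs(np.linalg.eigvals(T)))

e1, e2 = np.eye(3)[:, 0], np.eye(3)[:, 1]
rho0 = rho_two_level(e1)                       # unperturbed: rho = |lambda_2| = 0.9
rhos = [rho_two_level(e1 + eps * e2) for eps in np.linspace(-10, 10, 2001)]
print(rho0, min(rhos))                         # a suitable eps drives rho down to |lambda_3| = 0.5
```

The coarse space spanned by the dominant eigenvector alone gives ρ = |λ_2| = 0.9, while a suitable perturbation annihilates both λ_1 and λ_2 and leaves ρ = |λ_3| = 0.5, illustrating that the spectral choice is not optimal.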
Notice that, if |λ_3| = |λ_2|, Theorem 6 shows that one cannot obtain a ρ(T) smaller than |λ_2| using a one-dimensional perturbation. However, if one optimizes the entire coarse space V_c (keeping m fixed), then one can find coarse spaces leading to a better contraction factor of the two-level iteration, even though |λ_3| = |λ_2|.
Theorem 6 has another important consequence. If the eigenvectors ψ_j are not orthogonal and one defines the coarse space V_c using approximations of the ψ_j, then the two-level method is not necessarily convergent. Even though the one-level iteration characterized by G is convergent, a wrong choice of coarse functions can lead to a divergent iteration. This phenomenon is observed numerically in Sect. 6. However, the analysis performed in Theorem 6 suggests a remedy to this situation.

Corollary 1 (Correction of perturbed coarse space functions) Let the hypotheses of Theorem 6 be satisfied. For any r ∈ N it holds that
Moreover, if the coarse space V_c is replaced by G^r V_c (hence ε is replaced by ε_r), there exists an r ∈ N such that ρ(ε_r, γ) < 1 for any γ ∈ [−1, 1].
Corollary 1 has the following important consequence. If some "bad-convergent" eigenvectors of G are not sufficiently well represented by the coarse space functions, one can apply r smoothing steps to the coarse space functions. The new space G r V c is a better approximation to the "bad-convergent" eigenfunctions of G. Therefore, one can replace V c by G r V c to improve the convergence properties of the two-level method.
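This remedy can be illustrated with a small sketch under the same assumptions as before (a toy diagonal smoother and the two-level operator T = (I − P A_c^{-1} P^⊤ A)G, our reading of (17)): smoothing a contaminated coarse vector shrinks its component along the subdominant eigenvector and restores the unperturbed rate.

```python
import numpy as np

# Same-sign eigenvalues: a coarse vector contaminated by psi_2 degrades convergence,
# but applying r smoothing steps to it shrinks the psi_2-component by (0.5/0.9)^r.
G = np.diag([0.9, 0.5, 0.2])
A = np.eye(3) - G

def rho_two_level(p):
    P = p.reshape(-1, 1)
    T = (np.eye(3) - P @ np.linalg.solve(P.T @ A @ P, P.T @ A)) @ G
    return max(abs(np.linalg.eigvals(T)))

p = np.array([1.0, 1.0, 0.0])      # perturbed coarse vector psi_1 + psi_2 (eps = 1)
rates = [rho_two_level(np.linalg.matrix_power(G, r) @ p) for r in range(7)]
print([round(v, 3) for v in rates])  # decreasing toward the unperturbed rate |lambda_2| = 0.5
```

Here ρ drops from about 0.83 at r = 0 toward |λ_2| = 0.5 as r grows, mirroring the mechanism of Corollary 1 at the cost of r extra applications of the smoother.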

Local Coarse Functions
In this section, we consider an operator G having the block form G = [0 G_1; G_2 0], defined on the space 𝒳 := X × X, where X is a Hilbert space endowed with an inner product ((·, ·)). The corresponding operator A is A = I − G. Moreover, we assume that the operators G_j, j = 1, 2, have the same eigenvectors {ψ_k}_{k=1}^∞, forming an orthonormal basis of X with respect to ((·, ·)). The eigenvalues of G_j, for j = 1, 2, are denoted by θ_j(k). This is exactly the structure of the substructured domain decomposition problem introduced in Sect. 2 and corresponding to two subdomains, as the following examples show.
The restriction of A onto the coarse space V c is A c = RAP. Notice that, since in this case A(V c ) ⊆ V c , Theorem 2 guarantees that the operator A c is invertible. Now, we study the spectral properties of T defined in (17).
Theorem 7 (Convergence of the two-level method with local coarse space functions) Consider the coarse space V_c = (span{ψ_1, ψ_2, . . . , ψ_m})^2 and the operators P and R defined in (29). All pairs (ψ_k, ψ_ℓ) with k, ℓ ≤ m are in the kernel of the operator T. Moreover, denoting by ‖S‖_op the operator norm of any S ∈ L(𝒳), it holds that ‖T‖_op = |π_{n_1+n_2}(m + 1)| and ρ(T) = |π_{n_1+n_2}(m + 1)|.
Proof Let us suppose that both n_1 and n_2 are even. The other cases can be treated similarly.
For n_1 even we define π_{n_1}(k) := θ_1(k)^{n_1/2} θ_2(k)^{n_1/2} and study the action of the operator T on a vector [ψ_k, ψ_ℓ]^⊤. We begin with the case k ≤ m and ℓ ≤ m. First, let us compute the action of the operator RAG^{n_1} on [ψ_k, ψ_ℓ]^⊤. Since the operators G_j are diagonalized by the basis {ψ_k}_k, one obtains G^{n_1}[ψ_k, ψ_ℓ]^⊤ = [π_{n_1}(k)ψ_k, π_{n_1}(ℓ)ψ_ℓ]^⊤. Since A is invertible and has the form A = I − G, the eigenvalues θ_j(k) must be different from one. Hence, the product A[π_{n_1}(k)ψ_k, π_{n_1}(ℓ)ψ_ℓ]^⊤ = [π_{n_1}(k)ψ_k − π_{n_1}(ℓ)θ_1(ℓ)ψ_ℓ, π_{n_1}(ℓ)ψ_ℓ − π_{n_1}(k)θ_2(k)ψ_k]^⊤ is nonzero. Now, the application of the restriction operator R gives RA[π_{n_1}(k)ψ_k, π_{n_1}(ℓ)ψ_ℓ]^⊤ = Θ[π_{n_1}(k)e_k, π_{n_1}(ℓ)e_ℓ]^⊤, (30) where e_k and e_ℓ are canonical vectors in R^m and Θ := [I −θ_1(ℓ)I; −θ_2(k)I I], with I the m × m identity matrix. Now, by computing A_c[π_{n_1}(k)e_k, π_{n_1}(ℓ)e_ℓ]^⊤ = RA[π_{n_1}(k)ψ_k, π_{n_1}(ℓ)ψ_ℓ]^⊤ = R[π_{n_1}(k)ψ_k − π_{n_1}(ℓ)θ_1(ℓ)ψ_ℓ, π_{n_1}(ℓ)ψ_ℓ − π_{n_1}(k)θ_2(k)ψ_k]^⊤ = Θ[π_{n_1}(k)e_k, π_{n_1}(ℓ)e_ℓ]^⊤, one obtains the action of A_c^{-1} on the right-hand side of (30), that is A_c^{-1}Θ[π_{n_1}(k)e_k, π_{n_1}(ℓ)e_ℓ]^⊤ = [π_{n_1}(k)e_k, π_{n_1}(ℓ)e_ℓ]^⊤. (31) Using (30) and (31), and that P[e_k, e_ℓ]^⊤ = [ψ_k, ψ_ℓ]^⊤ for k, ℓ ≤ m, we have T[ψ_k, ψ_ℓ]^⊤ = G^{n_2}(G^{n_1}[ψ_k, ψ_ℓ]^⊤ − PA_c^{-1}RAG^{n_1}[ψ_k, ψ_ℓ]^⊤) = 0. (32) This means that all the pairs (ψ_k, ψ_ℓ) with k ≤ m and ℓ ≤ m are in the kernel of T. The result for n_1 odd follows by similar calculations.
Next, let us consider the case k > m and ℓ ≤ m. Recalling that the basis {ψ_k}_k is orthonormal, one has R[π_{n_1}(k)ψ_k, π_{n_1}(ℓ)ψ_ℓ]^⊤ = [0, π_{n_1}(ℓ)e_ℓ]^⊤. Similarly as before, we compute the coarse correction, which cancels only the ℓ-component, and obtain T[ψ_k, ψ_ℓ]^⊤ = G^{n_2}[π_{n_1}(k)ψ_k, 0]^⊤ = [π_{n_1+n_2}(k)ψ_k, 0]^⊤. (33) For the remaining case k > m and ℓ > m, the same arguments as before imply that T[ψ_k, ψ_ℓ]^⊤ = [π_{n_1+n_2}(k)ψ_k, π_{n_1+n_2}(ℓ)ψ_ℓ]^⊤. (34) We can now study the norm of T. To do so, we first use (32), (33) and (34), and the fact that {[ψ_k, ψ_ℓ]^⊤}_{k,ℓ} is a basis of 𝒳, to expand Tv for any v ∈ 𝒳. Since |θ_1(k)| and |θ_2(k)| are non-increasing functions of k, |π(k)| is also a non-increasing function of k. Therefore, using that the basis is orthonormal, we get ‖Tv‖ ≤ |π_{n_1+n_2}(m + 1)| ‖v‖. This upper bound is achieved at v = [ψ_{m+1}, 0]^⊤. Hence, ‖T‖_op = |π_{n_1+n_2}(m + 1)|. Now, a similar direct calculation leads to ‖T^n‖_op = |π_{n(n_1+n_2)}(m + 1)|, which implies that ρ(T) = lim_{n→∞}(‖T^n‖_op)^{1/n} = |π_{n_1+n_2}(m + 1)|.
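The statement of Theorem 7 can be checked numerically. The sketch below assumes G_1 and G_2 are diagonal in the canonical basis, R = P^⊤, and a two-level operator T = (I − P A_c^{-1} R A)G^{n_1} with n_1 = 2, n_2 = 0 (our reading of (17) and (29)); the eigenvalue sequences are arbitrary decreasing choices.

```python
import numpy as np

# Two-interface model: G = [[0, G1], [G2, 0]] with shared eigenvectors
# (the canonical basis) and decreasing eigenvalue moduli theta_j(k).
n, m = 8, 3
theta1 = 0.9 ** np.arange(1, n + 1)
theta2 = 0.8 ** np.arange(1, n + 1)
G = np.block([[np.zeros((n, n)), np.diag(theta1)],
              [np.diag(theta2), np.zeros((n, n))]])
A = np.eye(2 * n) - G

# Local coarse space V_c = (span{psi_1, ..., psi_m})^2.
E = np.eye(n)[:, :m]
P = np.block([[E, np.zeros((n, m))], [np.zeros((n, m)), E]])
R = P.T
Ac = R @ A @ P

# Two-level operator with n1 = 2 pre-smoothing and n2 = 0 post-smoothing steps.
T = (np.eye(2 * n) - P @ np.linalg.solve(Ac, R @ A)) @ np.linalg.matrix_power(G, 2)
rho = max(abs(np.linalg.eigvals(T)))
print(rho, theta1[m] * theta2[m])   # both equal |pi_2(m+1)| = theta_1(m+1) theta_2(m+1)
```

The computed spectral radius coincides with |π_{n_1+n_2}(m+1)| = θ_1(m+1)θ_2(m+1), as predicted.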
Theorems 2, 5 and 7 show that the choice of the basis functions used to construct V_c can drastically affect the convergence of the method. On the one hand, an inappropriate choice of V_c can lead to a two-level method that performs no better than the corresponding one-level method. On the other hand, a good choice of V_c can even turn a divergent stationary method into a convergent one; see, e.g., [13].

Numerical Construction of the Coarse Space
The construction of a good coarse space V_c is not an easy task. Several works rely on the solution of generalized eigenvalue problems on the interfaces; see, e.g., [13,17,18,21,28]. Although one could reuse these techniques to build a coarse space for the S2S method (see the S2S-HEM method discussed in Sect. 6), we now present two alternative numerical approaches for the generation of coarse space functions. The first one relies on principal component analysis (PCA) and shares some similarities with strategies presented in [38,49]. The second approach models the two-level iteration operator as a deep neural network in which the coarse space functions are regarded as variables to be optimized. A similar approach has been presented in the context of multigrid methods in [50]. We refer to [51] for a recent review of alternative ways to combine machine learning and DD methods.
We remark that the S2S framework facilitates the use of these two numerical techniques, which could even be numerically infeasible if applied to a two-level volume method. Indeed, at the discrete level, the substructured coarse functions are much shorter vectors than the corresponding volume ones. This means that, for the PCA approach, one has to compute the SVD of a much smaller matrix, while for the deep neural network approach, the neural net has far fewer parameters to optimize.

A PCA Approach for Coarse Space Generation
The idea that we present in this section is to construct an approximation of the image of the smoother G^r, for some positive integer r. In fact, the image of G^r contains information about the "bad-converging" eigenvectors of G. Notice that im(G^r) = im(G^r X) for any surjective matrix X. Therefore, the idea is to construct a coarse space using the information contained in G^r X, for some randomly chosen matrix X. Clearly, if ρ(G) < 1 and r is large, then one expects the slowest-converging eigenvectors to be predominant in G^r X. Notice also the relation of this idea with the perturbation result of Theorem 6 and with Corollary 1.
Motivated by these observations, we use a principal component analysis (PCA), also known as proper orthogonal decomposition (POD); see, e.g., [52] and references therein. We consider the following procedure.

1. Consider a set of q linearly independent, randomly generated vectors {s_k}_{k=1}^q ⊂ R^{N_s}, where N_s is the number of degrees of freedom on the interfaces, and define the matrix S = [s_1, . . . , s_q]. Here, q ≈ m and m is the desired dimension of the coarse space.
2. Use the vectors s_k as initial vectors and perform r smoothing steps to create the matrix W = G^r S. This computation can be performed in parallel, applying G to each column of S separately. Further, we assume that r is "small".
3. Compute the SVD of W: W = UΣV^⊤. This is cheap (O(q(N_s)^2)) because W ∈ R^{N_s×q} is "small", since q is "small" and the s_k are interface vectors.
4. Since the left-singular vectors (corresponding to the non-zero singular values) span the image of W, we define V_c := span{u_j}_{j=1}^m and P := [u_1, . . . , u_m].

We wish to remark that, in light of Theorem 6 and Corollary 1, one can also use approximations of the eigenfunctions of G (if available) as columns of the matrix S (in step 1 above). A numerical study of the above procedure is presented in Sect. 6. To qualitatively describe the obtained coarse space, we prove the following bound.
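Steps 1-4 can be sketched in a few lines of NumPy. The matrix G below is a generic stand-in contraction, not the actual substructured smoother, and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
Ns, q, m, r = 200, 24, 12, 2      # interface dofs, snapshots, coarse dim, smoothing steps

# Stand-in for the substructured smoother: any contraction works for illustration.
B = rng.standard_normal((Ns, Ns))
G = 0.95 * B / max(abs(np.linalg.eigvals(B)))

# Steps 1-4 of the PCA procedure.
S = rng.standard_normal((Ns, q))                       # 1. random matrix S = [s_1, ..., s_q]
W = np.linalg.matrix_power(G, r) @ S                   # 2. r smoothing steps, W = G^r S
U, sigma, Vt = np.linalg.svd(W, full_matrices=False)   # 3. thin SVD, W = U Sigma V^T
P = U[:, :m]                                           # 4. coarse basis / prolongation
print(P.shape)                                         # (200, 12), orthonormal columns
```

In practice, step 2 is the only expensive part (q · r subdomain solves, column-parallel), while the thin SVD of the N_s × q snapshot matrix is negligible.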
Proof Using the triangle inequality, we split the error into two terms. The first term on the right-hand side is equal to σ_{ℓ+1} by the best-approximation properties of the SVD, and the second term can be bounded in terms of ‖G‖_2.
Despite its very simple proof, Lemma 3 allows us to describe the quality of the created coarse space. Larger values of q and ℓ lead to a smaller error in the approximation of the image of G. Moreover, a smoother G with good contraction properties, namely ‖G‖_2 ≪ 1, leads to a better approximation. Clearly, one can improve the approximation by enlarging r at the cost of extra subdomain solves.

Generating the Coarse Space by Deep Neural Networks
Theorem 6 shows that the spectral coarse space obtained from the first dominant eigenvector of G is not necessarily the one-dimensional coarse space minimizing ρ(T). Now, we wish to go beyond this one-dimensional analysis and optimize the entire coarse space V_c, keeping its dimension m fixed. This is equivalent to optimizing the prolongation operator P whose columns span V_c. Thus, we consider the optimization problem min_{P∈R^{N_s×m}} ρ(T(P)). (35)
To approximately solve (35), we follow the approach proposed in [50]. Due to the Gelfand formula ρ(T) = lim_{k→∞} ‖T^k‖_F^{1/k}, we replace (35) with the simpler optimization problem min_P ‖T(P)^k‖_F^2 for some positive integer k. Here, ‖·‖_F is the Frobenius norm. We then consider the unbiased stochastic estimator [53] ‖T(P)^k‖_F^2 = E[‖T(P)^k z‖_2^2], where z ∈ R^{N_s} is a random vector with Rademacher distribution, i.e. P(z_i = ±1) = 1/2. Finally, we rely on a sample-average approach, replacing the unbiased stochastic estimator with its empirical mean, so that (35) is approximated by min_P (1/M) Σ_{i=1}^M ‖T(P)^k z_i‖_2^2, (36) where the z_i are a set of independent, Rademacher-distributed random vectors. The action of T on the vectors z_i can be interpreted as the feed-forward process of a neural net, where each layer represents one specific step of the two-level method, that is, the smoothing step, the residual computation, the coarse correction, and the prolongation/restriction operations. In our setting, the weights of most layers are fixed and given, and the optimization is performed only on the weights of the layer representing the prolongation step. The restriction layer is constrained to have as weights the transpose of the weights of the prolongation layer. To solve (36), we rely on the stochastic gradient algorithm, which requires at each iteration computing k times the action of T. The optimization parameters are heuristically tuned: a learning rate of 0.1, a mini-batch of ten vectors z_i, a maximum of 20,000 epochs, and the optimization is stopped when the loss function increases for 5 consecutive updates. Each stochastic gradient iteration is expensive, as it is equivalent to performing k iterations of the two-level method. Hence, the proposed deep neural network approach is not computationally efficient for building coarse spaces, unless one considers an offline-online paradigm or a many-query context. We will use this approach in Sect.
6 to go beyond the result of Theorem 6 and show numerically that, given an integer m, a spectral coarse space made by the first m dominant eigenvectors of G is not necessarily the asymptotically optimal coarse space of dimension m.
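The loss in (36) rests on the Rademacher identity E‖Mz‖_2^2 = ‖M‖_F^2, which holds since E[zz^⊤] = I. For a small matrix this can be checked exactly by enumerating all sign vectors; the matrix below is a stand-in for T(P)^k.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6)) / 3        # stand-in for T(P)^k

# Average ||M z||_2^2 over all 2^6 Rademacher sign vectors: the cross terms
# z_i z_j (i != j) cancel in the full enumeration, leaving trace(M^T M) = ||M||_F^2.
vals = [np.sum((M @ np.array(z)) ** 2) for z in product((-1.0, 1.0), repeat=6)]
print(np.mean(vals), np.sum(M ** 2))       # identical up to rounding
```

In the actual training loop the expectation is of course replaced by the empirical mean over a mini-batch, which is what makes the estimator stochastic.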

Numerical Experiments
This section is concerned with the numerical validation of the framework proposed in this manuscript. We first consider a Poisson equation in 2D and 3D rectangular boxes and show the convergence behavior of the S2S method with different coarse spaces and of the SHEM method [17]. In this simplified setting, we also report the computational time and memory storage requirements of the S2S and SHEM methods. We then solve a Poisson problem with many-subdomain decompositions and discuss a further way to build a substructured coarse space, that is, using the SHEM interface functions. Next, we focus on a diffusion problem with highly jumping coefficients and validate Theorem 6, showing how a perturbed coarse space can affect the convergence of the methods. For the sake of completeness, let us recall that the SHEM coarse space is defined as the union of a multiscale and a spectral coarse space. The multiscale coarse space is built by solving, for each internal vertex v_k of the nonoverlapping decomposition Ω = ∪_{i=1}^N Ω_i, a homogeneous boundary value problem on each of the edges Γ_{i,j} := ∂Ω_i ∩ ∂Ω_j that has the vertex v_k as an endpoint, imposing a Dirichlet boundary condition equal to 1 at the vertex v_k and zero at the opposite vertex. The solutions φ^k_{i,j} of these interface boundary value problems are then extended (through the PDE operator, see eq. (14) in [27]) into both subdomains Ω_i and Ω_j and set to zero on the rest of Ω. The spectral coarse space is obtained by solving localized eigenvalue problems on each of the internal edges Γ_{i,j}, and then extending a certain number of the eigenfunctions harmonically into both Ω_i and Ω_j and by zero elsewhere (see eq. (15) in [27]).

Poisson Equation in 2D and 3D Rectangular Boxes
Let us consider a rectangular domain Ω = Ω_1 ∪ Ω_2, where Ω_1 = (−1, δ) × (0, 1) and Ω_2 = (−δ, 1) × (0, 1), and a Poisson equation −Δu = f with homogeneous Dirichlet boundary conditions on ∂Ω. Given an integer ℓ ≥ 2, we discretize each subdomain with a standard second-order finite difference scheme with N_y = 2^ℓ − 1 points in the y direction and N_x = N_y points in the x direction. The overlap has size 2δ with δ = N_ov h, where h is the mesh size and N_ov ∈ N. In our computations, we consider f = 1 and initialize the iterations with a random initial guess. Figure 2 shows the relative error decay for several methods. Specifically, we compare the one-level parallel Schwarz method (G_s (Schwarz) in the figures), a S2S method with a coarse space made of eigenfunctions of G (S2S-G), a S2S method with a coarse space made of eigenfunctions of the operators G_j (S2S-G_j), a S2S method with a coarse space obtained with the PCA procedure (S2S-PCA), a S2S method with a coarse space obtained using deep neural networks (S2S-DNN), and the spectral volume method based on the SHEM coarse space (SHEM); see [27]. For the PCA coarse space, we average the relative error decay over 30 realizations, and the parameters for the PCA procedure are q = 2 dim V_c and r = 2, where dim V_c is the desired size of the spectral coarse space. For the deep neural network approach, the parameters are N = N_s and k = 4. Figure 2 shows that most of the spectral methods have a very similar convergence. Indeed, we have numerically observed that the S2S-G, S2S-G_j and SHEM methods all have the same spectral radius in this simplified setting. We remark that the S2S-PCA method has on average the same convergence behavior as the other two-level methods, even though it can sometimes be slightly different (faster or slower). The S2S-DNN method outperforms the others. In this particular setting, the eigenvalues of G are λ_j = ±√μ_j, where μ_j > 0, ∀ j = 1, . . .
, N_s, are the eigenvalues of G_1 = G_2, and A is symmetric. Hence, we are under the assumptions of point (C) of Theorem 6, and Fig. 2 confirms that a spectral coarse space is not necessarily the coarse space leading to the fastest convergence. Since we claimed that the deep neural network approach is computationally expensive, it is worth remarking that the PCA approach builds a coarse space as efficient as the spectral ones by performing q · r subdomain solves in parallel, instead of solving eigenvalue problems as required by all other two-level methods, either locally (as in the S2S-G_j and SHEM methods) or on the whole skeleton (as in the S2S-G method).
Next, we compare the computational costs required by the S2S method and a spectral volume method in Table 1. For simplicity, we assume n_1 = 1, n_2 = 0. Let A_v = M − N be a volume matrix of size N_v × N_v and A the substructured matrix of size N_s × N_s; P and R are the substructured prolongation and restriction operators, while P_v and R_v are the corresponding volume operators. On each subdomain, we suppose to have N_sub unknowns, and m is the dimension of the coarse space. The cost of the smoothing step is in both cases equal to γ_s(N_sub), where γ_s depends on the choice of the linear solver: e.g., for a Poisson problem, γ_s(N_sub) = N_sub log(N_sub) if a fast Poisson solver is used, or γ_s(N_sub) = bN_sub for sparse banded matrices with bandwidth b; see, e.g., [54]. Table 1 reports the computational cost (C.C.) per iteration.
Notice that the smoother in volume is written as a standard stationary method based on the splitting A_v = M − N. Further, the cost of solving the coarse problem is identical as well, equal to γ_c(m), where m is the size of the coarse space and γ_c depends on the linear solver used. The coarse matrices are usually small, fully dense matrices, so it is reasonable to factorize them using an LU decomposition. In a standard implementation, the S2S method requires subdomain solves when computing the residual, as the matrix-vector multiplication with the matrix A is needed. To avoid this extra cost, in "Appendix 8" we present two alternative algorithms that implement the S2S method efficiently, where the residual is computed cheaply and the two applications of the smoothing operators per iteration are avoided. We further show that these two alternatives have the same convergence behavior as Algorithm 1.
The main advantage of the S2S method is that the restriction and prolongation operators act on the substructures, that is, on objects which are smaller than the corresponding volume counterparts. Thus, the S2S method naturally requires less memory storage. We now discuss the cost of the offline computation phases. To build prolongation and restriction operators in the volume case, one needs to define some functions, usually by solving eigenvalue problems, along the interfaces between nonoverlapping subdomains or in the overlap region between overlapping subdomains. These functions are then extended into the interior of the subdomains, and this extension costs γ_s(N_sub). Notice that the way of extending these functions is not unique, and we refer to [28, Section 5] for an overview. In the substructured framework, we have analyzed theoretically several approaches, among which a global eigenvalue problem (S2S-G), local eigenvalue problems (S2S-G_j), and randomized approaches using either PCA (S2S-PCA) or deep neural networks (S2S-DNN). The relative costs of these approaches with respect to the volume ones are difficult to estimate, as they depend on the features of the problem at hand. Nevertheless, for any method used to generate the interface functions, we do not need to perform any extension step in the substructured framework. Besides the approaches studied theoretically, we emphasize that one can use the interface functions computed in a volume method as a basis for the S2S coarse space. In this way one avoids the extension step and exploits at best the intrinsic substructured nature of the S2S method. In the next section we show numerical results where we use the SHEM interface functions as a basis for the S2S method (called the S2S-HEM method).
Table 2 (caption): The overlap parameter is constant, N_ov = 4. On the right, memory usage expressed in megabytes to store the restriction and prolongation operators of S2S and SHEM.
To conclude, we consider a Poisson equation on a three-dimensional box Ω = (−1, 1) × (0, 1) × (0, 1), decomposed into two overlapping subdomains Ω_1 = (−1, δ) × (0, 1) × (0, 1) and Ω_2 = (−δ, 1) × (0, 1) × (0, 1). Table 2 shows the computational times needed to reach a relative error smaller than 10^{-8}, and the memory required to store the restriction and interpolation operators in sparse format in Matlab for the S2S and SHEM methods. The experiments have been performed on a workstation with an 8-core Intel Core i7-6700 CPU at 3.40 GHz and 32 GB of RAM. We remark that the S2S method requires drastically less memory than the SHEM method, which becomes inefficient for large problems from the memory point of view. Concerning computational times, we observe that the two methods are equivalent in this setting. The substructured restriction and prolongation operators are faster than the volume ones: computing the action of, for instance, the substructured prolongation operator on the largest problem takes about 7 · 10^{-4} seconds, compared to 3 · 10^{-2} seconds for the volume prolongation. However, the bottleneck here is represented by the two, very large, subdomain solves. A many-subdomain decomposition and a parallel implementation in a high-performance programming language should make the advantage of using substructured coarse spaces in terms of computational time more evident.

Decomposition into Many Subdomains
In this section, we consider a Poisson equation in a square domain Ω decomposed into M × M nonoverlapping square subdomains Ω̃_j, j = 1, . . . , M^2 = N. Each subdomain Ω̃_j contains N_sub := (2^ℓ − 1)^2 interior degrees of freedom. The subdomains Ω̃_j are extended by N_ov points to obtain subdomains Ω_j, which form an overlapping decomposition of Ω. Each discrete local substructure is made of one-dimensional segments. Figure 3 provides a graphical representation. Figure 4 compares several versions of the S2S method to the SHEM method. Specifically, we consider a S2S method with a coarse space made of eigenfunctions of G (S2S-G), a S2S method with a coarse space obtained with the PCA procedure (S2S-PCA), a S2S method with a coarse space inspired by the SHEM coarse space (S2S-HEM, that is, S2S Harmonically Enriched Multiscale), and a S2S method with a coarse space obtained with the deep neural network approach (S2S-DNN).
In more detail, we create the HEM coarse space by computing harmonic functions and solving interface eigenvalue problems on each one-dimensional segment that forms the local discrete substructure, similarly to the SHEM coarse space [27]. Notice, however, that the SHEM method extends these interface functions into the interior of the nonoverlapping subdomains, as the method is naturally defined in volume. We do not need to perform this extra step of extending the functions into the neighboring subdomains.
Fig. 3 (caption): The domain Ω is divided into nine non-overlapping subdomains (left). The center panel shows how the diagonal non-overlapping subdomains are enlarged to form overlapping subdomains. On the right, we zoom in on the central subdomain to show the local discrete substructure formed by the degrees of freedom lying on the blue segments (Color figure online).
We report that we have also tried to build a coarse space by simply restricting the volume functions of the SHEM coarse space onto the substructures, and we observed a similar behavior compared to the HEM coarse space. For the PCA approach, we generated q = 2 × dim V_c random vectors and set r = 2. The result we plot is averaged over 30 different random coarse spaces. For the deep neural network, we used k = 4 and N = N_s. The size of the coarse space is set by the SHEM coarse space. In the top-left panel, we consider only multiscale functions, without solving any eigenvalue problem along the interfaces. In the top-right panel, we include the first eigenfunction on each interface, and in the bottom-central panel we include the first and second eigenfunctions. In all cases, we observe that the methods have a similar convergence, which is slightly faster for the substructured methods for smaller coarse spaces. As we already remarked, S2S-G is not necessarily the fastest.

Diffusion Problem with Jumping Diffusion Coefficients
In this paragraph, we test the S2S method for the solution of a diffusion equation −div(α∇u) = f in the square domain Ω := (0, 1)^2, with f := sin(4πx) sin(2πy) sin(2πxy). The domain is decomposed into 16 non-overlapping subdomains and we suppose α = 1 everywhere, except in some channels where α takes large values. Each nonoverlapping subdomain is discretized with N_sub = 2^{2ℓ} cells and enlarged by N_ov cells to create an overlapping decomposition with overlap δ = 2N_ov h. We use a finite-volume scheme and assume that the jumps of the diffusion coefficient are aligned with the cell edges. We consider the two configurations represented in Fig. 5.
We remark that λ(γ, ε*) is the convergence factor of T̃ on span{ψ_1 + ε*ψ_5}, so that the convergence of the S2S method is now determined by the second largest eigenvalue of T̃, i.e. λ_2 = −0.9990, as Fig. 7 shows. We then investigate the performance of the S2S methods and compare them with the SHEM coarse space in the multiple-channel configuration. We set ℓ = 4 and N = 16, which corresponds to N_v = 4096 degrees of freedom, and N_ov = 2. Table 3 shows the number of iterations needed to reach a relative error smaller than 10^{-8} for the S2S-G, S2S-PCA, S2S-HEM and SHEM methods. The relative error is computed with respect to the first iterate. We consider coarse spaces of dimensions 84, 132 and 180, which, for the SHEM and S2S-HEM methods, correspond to multiscale coarse spaces enriched by, respectively, the first, second and third eigenvectors of the interface eigenvalue problems. For the PCA coarse space, we set q = 2N_c and r = 6 if α = 10^6, r = 4 if α = 10^4 and r = 2 if α = 10^2. We remark that for smaller values of r, the S2S-PCA method diverges. This increase in the value of r can be explained by noticing that, for the multichannel configuration, the smoother G has several eigenvalues approximately equal to 1 for large values of α. Thus the PCA procedure, which essentially relies on a power-method idea to approximate the image of G, suffers from the presence of several clustered eigenvalues, and hence does not provide accurate approximations of the eigenfunctions of G. Similarly, also the HEM coarse space, obtained by solving an eigenvalue problem on each segment of the skeleton, can lead to a divergent method. Thus, to improve this coarse space, we apply a few iterations of the smoother to obtain a better V_c.
Table 3 (caption): For each spectral method and value of α, we report the number of iterations to reach a relative error smaller than 10^{-8} with a coarse space of dimension 84 (left), 132 (center) and 180 (right). The discretization parameters are N_v = 4096 and N_ov = 2. The top table refers to the two-channel configuration, the bottom table to the multiple-channel configuration depicted in Fig. 5.
Tables 3 and 4 report the number of iterations needed to reach a tolerance of 10^{-8} when the algorithms are used either as stationary methods or as preconditioners. We remark that all spectral methods have very similar performance, and all methods are robust with respect to the strength of the jumps.

Conclusions
In this work we introduced a new computational framework of two-level substructured Schwarz methods. This framework is called S2S and is based on coarse spaces defined exclusively on the interfaces provided by the overlapping decomposition of the domain. We presented a broad convergence analysis for two-level iterative methods, which covers the proposed substructured framework as a special case. The analysis pushes forward the current understanding of the asymptotic optimality of coarse spaces. From the computational point of view, we have discussed approaches based on PCA and on deep neural networks for the numerical computation of efficient coarse spaces. Finally, the effectiveness of our new methods is confirmed by extensive numerical experiments, in which stationary elliptic problems (possibly with highly jumping diffusion coefficients) are efficiently solved.
Acknowledgements Gabriele Ciaramella and Tommaso Vanzan are members of GNCS (Gruppo Nazionale per il Calcolo Scientifico) of INdAM.
Funding Open access funding provided by EPFL Lausanne. No funding was received.
Data Availability No datasets were generated or analysed during the current study.

Conflict of interest
The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Appendix
In this Appendix, important implementation details of our substructured two-level methods are discussed. We reformulate Algorithm 1 in equivalent forms that are computationally more efficient. This is essential to make our methods computationally equal to or more efficient than other existing strategies. As already remarked in Sect. 6, a naive implementation of Algorithm 1 would lead to a quite expensive method, as the computation of the residual involves a matrix multiplication with A, which requires subdomain solves. Hence, one would need two subdomain solves per iteration. To avoid this extra cost, we use the special form of the matrix A = I − G and propose two new versions of Algorithm 1. These are called S2S-B1 and S2S-B2 and are given by Algorithms 2 and 3. The relations between S2S, S2S-B1 and S2S-B2 are given in the following theorem. Notice that Algorithm 2 requires, for the first iteration, two applications of the smoothing operator G, namely two subdomain solves. The subsequent iterations, given by Steps 6-10, need only one application of the smoothing operator G. Theorem 8 (a) shows that Algorithm 2 is equivalent to Algorithm 1. This means that each iteration of Algorithm 2 after the first is computationally less expensive than one iteration of a volume two-level DD method. Since two-level DD methods generally perform few iterations, it can be important to get rid of the expensive first iteration. For this reason, we introduce Algorithm 3, which overcomes the problem of the first iteration. Theorem 8 (b) guarantees that Algorithm 3 is exactly an S2S method with no pre-smoothing and one post-smoothing step. Moreover, it has the same convergence behavior as Algorithm 2.
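The residual-reuse idea can be sketched as follows; the variable names and the n_1 = 1, n_2 = 0 setting are our own choices for illustration, not the exact statement of Algorithm 2. Assuming the substructured system Au = f with A = I − G, the residual of the smoothed iterate v = Gu + f is f − Av = f − v + Gv, and Gv is exactly what the next smoothing step needs, so each iteration after the first applies G only once.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 6
B = rng.standard_normal((n, n))
G = 0.9 * B / max(abs(np.linalg.eigvals(B)))   # stand-in substructured smoother; A = I - G
A = np.eye(n) - G
f = rng.standard_normal(n)
P = np.linalg.qr(rng.standard_normal((n, m)))[0]
R = P.T
Ac = R @ A @ P
Pt = G @ P                                     # stored once; needed anyway to assemble Ac

def naive_step(u):
    # One S2S iteration (n1 = 1, n2 = 0); A @ v hides a second application of G.
    v = G @ u + f
    r = f - A @ v
    return v + P @ np.linalg.solve(Ac, R @ r)

u0 = rng.standard_normal(n)
u_naive = u0.copy()
v = G @ u0 + f                                 # first iteration: two applications of G
w = G @ v
for _ in range(5):
    u_naive = naive_step(u_naive)
    c = np.linalg.solve(Ac, R @ (f - v + w))   # residual f - A v = f - v + G v, reusing w
    u_cached = v + P @ c
    v = w + Pt @ c + f                         # equals G @ u_cached + f without applying G
    w = G @ v                                  # the single G-application of this iteration
print(np.allclose(u_naive, u_cached))          # True: same iterates, one smoother solve per step
```

The two loops produce identical iterates, but the cached variant applies the smoother (that is, performs the subdomain solves) once per iteration instead of twice.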
We wish to remark that the reformulations S2S-B1 and S2S-B2 require storing the matrix P̃ := GP, which is anyway needed in the assembly phase of the coarse matrix; hence no extra cost is incurred compared to a volume two-level DD method. Finally, we stress that these implementation tricks can be readily generalized to an arbitrary number of pre- and post-smoothing steps.