On the Eigenvalues of the ADER-WENO Galerkin Predictor

ADER-WENO methods have proved extremely useful in obtaining arbitrarily high-order solutions to problems involving hyperbolic systems of PDEs. For example, it has been demonstrated that for the same computational cost as a Runge-Kutta scheme of a certain order, one can obtain an ADER scheme of one higher order of accuracy. Additionally, Runge-Kutta schemes suffer from the presence of Butcher barriers, limiting the order of temporal accuracy that one can comfortably achieve. There are no such limitations present in ADER-WENO schemes. The cumbersome analytical derivation of the temporal derivatives of the solution required by the original ADER formulation has been replaced by the use of a cell-wise local Galerkin predictor. The predictor can take either a discontinuous or a continuous form. The Galerkin predictor is a high-order polynomial reconstruction of the data in both space and time, found as the root of a non-linear system. It has been conjectured that the eigenvalues of certain matrices appearing in these non-linear systems are always zero, leading to desirable system properties for certain classes of PDEs. It is proved here that this is indeed the case for any number of spatial dimensions and any desired order of accuracy, for both the discontinuous and continuous Galerkin variants. This result is independent of the choice of reconstruction basis polynomials.


Background
ADER-WENO methods have proved extremely useful in obtaining arbitrarily high-order solutions to problems involving hyperbolic systems of PDEs. For example, it has been demonstrated that for the same computational cost as a Runge-Kutta scheme of a certain order, one can obtain an ADER scheme of one higher order of accuracy (see Balsara et al. [1]). Additionally, Runge-Kutta schemes suffer from the presence of Butcher barriers (see Butcher [3]), limiting the order of temporal accuracy that one can comfortably achieve. There are no such limitations present in ADER-WENO schemes.
The cumbersome analytical derivation of the temporal derivatives of the solution required by the original ADER formulation (see Toro [10]) has been replaced by the use of a cell-wise local Galerkin predictor. The predictor can take either a discontinuous or a continuous form (see Dumbser et al. [4] and Balsara et al. [2], respectively). The Galerkin predictor is a high-order polynomial reconstruction of the data in both space and time, found as the root of a non-linear system.
It has been conjectured that the eigenvalues of certain matrices appearing in these non-linear systems are always zero, leading to desirable system properties for certain classes of PDEs. It is proved here that this is indeed the case for any number of spatial dimensions and any desired order of accuracy, for both the discontinuous and continuous Galerkin variants. This result is independent of the choice of reconstruction basis polynomials.
The Einstein summation convention is to be assumed throughout this paper.

The ADER-WENO Method
Email address: hj305@cam.ac.uk (Haran Jackson)

Take a non-homogeneous, non-conservative (and, for simplicity, one-dimensional) hyperbolic system of the form:

∂Q/∂t + ∂F(Q)/∂x + B(Q)·∂Q/∂x = S(Q)    (1)

where Q is the vector of conserved variables, F is the conservative non-linear flux, B is the matrix corresponding to the purely non-conservative component of the system, and S(Q) is the algebraic source vector.
Take the set of grid points x_0 < x_1 < ... < x_K and define Δx_i = x_{i+1} − x_i. Take the time steps t_0 < t_1 < ..., defining Δt_n = t_{n+1} − t_n. Following the formulations presented in Dumbser et al. [5, 6] and Balsara et al. [2], the WENO method and the Galerkin method produce at each time step t_n a local polynomial approximation to Q on each space-time cell [x_i, x_{i+1}] × [t_n, t_{n+1}]. Define the scaled space variable:

χ = (x − x_i) / Δx_i

Take a basis {ψ_0, ..., ψ_N} of P_N (the polynomials of degree at most N) and an inner product ⟨·, ·⟩. This basis can be either nodal (ψ_i(χ_j) = δ_ij, where {χ_0, ..., χ_N} are a set of nodal points, such as the Gauss-Legendre abscissae) or modal (such as the Jacobi polynomials).
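As a concrete illustration of the nodal option (a sketch, not part of the original formulation: the degree N = 3 and the mapping of the Gauss-Legendre abscissae to [0, 1] are choices made here), the following Python snippet builds the Lagrange basis on those nodes and verifies the nodal property ψ_i(χ_j) = δ_ij:

```python
import numpy as np

N = 3  # basis degree (an arbitrary illustrative choice)

# Gauss-Legendre abscissae, mapped from [-1, 1] to the scaled cell [0, 1]
nodes, _ = np.polynomial.legendre.leggauss(N + 1)
nodes = 0.5 * (nodes + 1.0)

def psi(i, x):
    """Lagrange basis polynomial psi_i for the node set {chi_0, ..., chi_N}."""
    return np.prod([(x - nodes[j]) / (nodes[i] - nodes[j])
                    for j in range(N + 1) if j != i])

# Nodal property: psi_i(chi_j) = delta_ij
P = np.array([[psi(i, nodes[j]) for j in range(N + 1)]
              for i in range(N + 1)])
print(np.allclose(P, np.eye(N + 1)))  # True
```

With a nodal basis the degrees of freedom are point values at the nodes, which is what makes the nodal representation of F, B, and S below so convenient.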
The WENO method (as used in Dumbser et al. [8]) produces an order-N polynomial reconstruction of the data at time t_n in cell [x_i, x_{i+1}], using {ψ_0, ..., ψ_N} as a basis. This is denoted:

w(χ) = w_γ ψ_γ(χ)

This spatial reconstruction at the start of the time step is to be used as initial data in the problem of finding the Galerkin predictor.
Now define the scaled time variable:

τ = (t − t_n) / Δt_n

Thus, (1) becomes:

∂Q/∂τ + ∂F*(Q)/∂χ + B*(Q)·∂Q/∂χ = S*(Q)

where

F* = (Δt_n/Δx_i) F,  B* = (Δt_n/Δx_i) B,  S* = Δt_n S

The Galerkin Predictor
The non-dimensionalization notation and space-time cell indexing notation will be dropped for simplicity in what follows. Now define the set of spatio-temporal basis functions:

θ_β(χ, τ) = ψ_{β_t}(τ) ψ_{β_x}(χ)

where β enumerates the pairs (β_t, β_x) ∈ {0, ..., N}². Denoting the Galerkin predictor by q, take the following set of approximations:

q ≈ q_β θ_β,  F(q) ≈ F_β θ_β,  B(q) ≈ B_β θ_β,  S(q) ≈ S_β θ_β

The predictor is required to satisfy the Galerkin orthogonality condition arising from the scaled system:

⟨θ_α, ∂q/∂τ + ∂F(q)/∂χ + B(q)·∂q/∂χ − S(q)⟩ = 0    (11)

If {ψ_0, ..., ψ_N} is a nodal basis, the nodal basis representation may be used:

F_β = F(q_β),  B_β = B(q_β),  S_β = S(q_β)

where (χ_β, τ_β) are the coordinates of the node corresponding to basis function θ_β.
If a modal basis is used, F_β, B_β, S_β may be found from the previous values of q_β in the iterative processes described below.

The Discontinuous Galerkin Method
This method of computing the Galerkin predictor allows solutions to be discontinuous at temporal cell boundaries, and is also suitable for stiff source terms.
Integrating (11) by parts in time gives:

∫_0^1 θ_α(χ, 1) q(χ, 1) dχ − ⟨∂θ_α/∂τ, q⟩ + ⟨θ_α, ∂F/∂χ + B·∂q/∂χ − S⟩ = ∫_0^1 θ_α(χ, 0) w(χ) dχ

where w is the reconstruction obtained at the start of the time step with the WENO method. Define the following:

U_αβ = ∫_0^1 θ_α(χ, 1) θ_β(χ, 1) dχ − ⟨∂θ_α/∂τ, θ_β⟩
V_αβ = ⟨θ_α, ∂θ_β/∂χ⟩
W_α = ∫_0^1 θ_α(χ, 0) w(χ) dχ

Thus (writing only the conservative part explicitly; the non-conservative and source terms are treated analogously):

U_αβ q_β = W_α − V_αβ F_β + ⟨θ_α, S(q) − B(q)·∂q/∂χ⟩

This non-linear system in q_β is solved by a Newton method. The source terms must be solved implicitly if they are stiff. Note that W has no dependence on q.
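To illustrate the roles of U, V, and W (a sketch only: it uses the monomial basis ψ_i(χ) = χ^i on [0, 1], one space dimension, B = S = 0, a linear flux F(q) = aq with an assumed speed a = 0.7, and initial data w(χ) = 1 + χ, none of which are prescribed by the paper), the fixed-point iteration below reaches the predictor exactly after N + 1 steps:

```python
import numpy as np

N = 2    # basis degree (arbitrary)
a = 0.7  # scaled advection speed: linear flux F(q) = a*q (assumed example)

# Exact inner products for the monomial basis psi_i(x) = x^i on [0, 1]:
i = np.arange(N + 1)
aleph = 1.0 / (i[:, None] + i[None, :] + 1)               # <psi_i, psi_j>
beth = np.where(i[None, :] > 0,
                i[None, :] / np.maximum(i[:, None] + i[None, :], 1),
                0.0)                                      # <psi_i, psi_j'>
C = np.ones((N + 1, N + 1)) - beth.T      # psi_i(1) psi_j(1) - beth_ji

# With theta_b(chi, tau) = psi_{b_t}(tau) psi_{b_x}(chi), time index major:
U = np.kron(C, aleph)
V = np.kron(aleph, beth)

# WENO data w(chi) = 1 + chi; since psi_i(0) = delta_i0 for monomials,
# W_a = psi_{a_t}(0) <psi_{a_x}, w>
w = np.zeros(N + 1)
w[:2] = 1.0
W = np.kron(np.eye(N + 1)[0], aleph @ w)

# Picard iteration q <- U^{-1}(W - a V q): since U^{-1}V is nilpotent of
# index N + 1, the exact fixed point is reached after at most N + 1 steps
q = np.zeros((N + 1) ** 2)
for _ in range(N + 1):
    q = np.linalg.solve(U, W - a * V @ q)
q_next = np.linalg.solve(U, W - a * V @ q)
print(np.allclose(q, q_next))  # True: already at the fixed point
```

The converged coefficients reproduce the exact in-cell solution q(χ, τ) = 1 + χ − aτ, since that solution happens to lie in the trial space.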

The Continuous Galerkin Method
This method of computing the Galerkin predictor is not suitable for stiff source terms, but it provides substantial savings on computational cost and ensures continuity across temporal cell boundaries.
{ψ_0, ..., ψ_N} must be chosen in such a way that the first N + 1 elements of θ_β have only a spatial dependence. The first N + 1 elements of q are then fixed by demanding continuity at τ = 0:

q_β = w_β,  β = 0, ..., N

where w is the spatial reconstruction obtained at the start of the time step with the WENO method.
For a given vector v ∈ R^{(N+1)²} and matrix X ∈ M_{(N+1)² × (N+1)²}(R), let

v = (v^0, v^1),  X = ( X^00 X^01 ; X^10 X^11 )

where v^0, X^00 are the components relating solely to the first N + 1 components of v. We only need to find the latter components of q, and thus, restricting (11) to the corresponding test functions, a non-linear system of the same form as in the discontinuous case is obtained for the non-fixed degrees of freedom. Note that, as with the discontinuous Galerkin method, W has no dependence on the degrees of freedom in q.

Conjecture
Extending the Galerkin method described in the previous section to three dimensions, the following system must be solved for q:

U_αβ q_β = W_α − V^1_αβ F_β − V^2_αβ G_β − V^3_αβ H_β + ⟨θ_α, S(q) − B(q)·∇q⟩    (19)

where now we have the 3 scaled spatial variables χ_1, χ_2, χ_3, and G, H are the flux components in the second and third spatial directions, respectively. In the case of the continuous Galerkin method, it is assumed that (19) is to be solved for only the non-fixed degrees of freedom in q. The matrices V^i_αβ are defined thus:

V^i_αβ = ⟨θ_α, ∂θ_β/∂χ_i⟩

For the discontinuous Galerkin method, W_α now takes the form:

W_α = w_γ ∫_{[0,1]³} θ_α(χ_1, χ_2, χ_3, 0) Ψ_γ(χ_1, χ_2, χ_3) dχ_1 dχ_2 dχ_3

where Ψ_γ(χ_1, χ_2, χ_3) is an element of the following set, enumerated by γ:

{ψ_i(χ_1) ψ_j(χ_2) ψ_k(χ_3) : 0 ≤ i, j, k ≤ N}

For the continuous Galerkin method, W_α takes an analogous form, built from the degrees of freedom fixed by continuity at τ = 0. Dumbser et al. [4] remark that for the continuous Galerkin case, the eigenvalues of U^{-1}V^i are all 0 for 0 ≤ N ≤ 5, for i = 1, 2, 3. Dumbser and Zanotti [7] state the same result for the discontinuous Galerkin case. This implies that in the conservative, homogeneous case (B = S = 0), owing to the Banach Fixed Point Theorem, existence and uniqueness of a solution are established, and convergence of the iterative procedure to this solution is guaranteed. As noted in Dumbser and Zanotti [7], in the linear case it is implied that the iterative procedure converges after at most N + 1 iterations.
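The remarked eigenvalue property is easy to check numerically. The sketch below (one space dimension for brevity; the monomial basis and the value N = 2 are choices made here, not taken from the cited papers) assembles U and V from exact integrals of the space-time basis functions and confirms that the spectrum of U^{-1}V is numerically zero and that the matrix is in fact nilpotent:

```python
import numpy as np

N = 2  # arbitrary; the property holds for any N
n = N + 1

def U_entry(at, ax, bt, bx):
    # U_ab = psi_at(1) psi_bt(1) <psi_ax, psi_bx> - <psi_at', psi_bt> <psi_ax, psi_bx>
    # using exact integrals for the monomial basis psi_i(x) = x^i on [0, 1]
    boundary = 1.0 / (ax + bx + 1)  # psi_i(1) = 1 for monomials
    dtau = (at / (at + bt) if at > 0 else 0.0) / (ax + bx + 1)
    return boundary - dtau

def V_entry(at, ax, bt, bx):
    # V_ab = <theta_a, d theta_b / d chi>
    return (bx / (ax + bx) if bx > 0 else 0.0) / (at + bt + 1)

pairs = [(t, x) for t in range(n) for x in range(n)]  # time index major
U = np.array([[U_entry(*a, *b) for b in pairs] for a in pairs])
V = np.array([[V_entry(*a, *b) for b in pairs] for a in pairs])

M = np.linalg.solve(U, V)  # U^{-1} V
print(np.max(np.abs(np.linalg.eigvals(M))))              # tiny: all eigenvalues ~ 0
print(np.allclose(np.linalg.matrix_power(M, N + 1), 0))  # True: nilpotent
```

Computed eigenvalues of a nilpotent matrix are only zero up to a roundoff-driven perturbation, so the power test is the sharper numerical diagnostic here.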
In Dumbser et al. [4] it is conjectured that the result concerning the eigenvalues of U^{-1}V^i holds for any N, and for any number of spatial dimensions. A proof of this conjecture is provided here. For the linear-algebra theory required for this section, please consult a standard textbook on the subject, such as Nering [9].

The Discontinuous Galerkin Case
First, given the basis polynomials {ψ_0, ..., ψ_N}, define the following matrices:

ℵ_ij = ⟨ψ_i, ψ_j⟩,  ℶ_ij = ⟨ψ_i, dψ_j/dχ⟩

Note that ℵ is the Gram matrix, which by linear independence of {ψ_0, ..., ψ_N} is invertible. Note also that if p ∈ P_N then p = a_j ψ_j for some unique coefficient vector a. Thus, taking inner products with ψ_i, we have ⟨ψ_i, ψ_j⟩ a_j = ⟨ψ_i, p⟩ for i = 0, ..., N. This produces the following result: if a is the coefficient vector of p, then ℵ^{-1}ℶ a is the coefficient vector of dp/dχ, since (ℶa)_i = ⟨ψ_i, dp/dχ⟩. That is, ℵ^{-1}ℶ represents the differentiation operator on P_N, and since the (N+1)-th derivative of any element of P_N vanishes, (ℵ^{-1}ℶ)^{N+1} = 0. Without loss of generality, take the ordering:

α = α_t(N+1)³ + α_x(N+1)² + α_y(N+1) + α_z

where 0 ≤ α_t, α_x, α_y, α_z ≤ N. Using the same ordering for β, we have:

U_αβ = C_{α_t β_t} ℵ_{α_x β_x} ℵ_{α_y β_y} ℵ_{α_z β_z},  V^1_αβ = ℵ_{α_t β_t} ℶ_{α_x β_x} ℵ_{α_y β_y} ℵ_{α_z β_z}

where C_ij = ψ_i(1) ψ_j(1) − ℶ_ji. Thus:

U = C ⊗ ℵ ⊗ ℵ ⊗ ℵ,  V^1 = ℵ ⊗ ℶ ⊗ ℵ ⊗ ℵ

Therefore:

U^{-1}V^1 = (C^{-1}ℵ) ⊗ (ℵ^{-1}ℶ) ⊗ I ⊗ I

A matrix X is nilpotent (X^k = 0 for some k ∈ N) if and only if all its eigenvalues are 0. The conjecture will be proved if it is shown that (ℵ^{-1}ℶ)^k = 0 for some k ∈ N, as this would imply that (U^{-1}V^1)^k = (C^{-1}ℵ)^k ⊗ (ℵ^{-1}ℶ)^k ⊗ I ⊗ I = 0, and thus all eigenvalues of U^{-1}V^1 are 0. This was established above, with k = N + 1.
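The nilpotency of ℵ^{-1}ℶ can be seen concretely (a sketch; the monomial basis and N = 3 are illustrative choices made here — the argument itself is basis-independent): ℵ^{-1}ℶ maps the coefficients of p to those of dp/dχ, and differentiation annihilates P_N after N + 1 applications:

```python
import numpy as np

N = 3  # arbitrary degree
i = np.arange(N + 1)

# Monomial basis psi_i(x) = x^i on [0, 1], exact inner products:
# aleph_ij = <psi_i, psi_j>  = 1/(i+j+1)   (the Gram matrix)
# beth_ij  = <psi_i, psi_j'> = j/(i+j)     (0 when j = 0)
aleph = 1.0 / (i[:, None] + i[None, :] + 1)
beth = np.where(i[None, :] > 0,
                i[None, :] / np.maximum(i[:, None] + i[None, :], 1), 0.0)

# aleph^{-1} beth sends the coefficient vector of p to that of dp/dx
D = np.linalg.solve(aleph, beth)
p = np.array([0.0, 0.0, 0.0, 1.0])  # p(x) = x^3
print(np.round(D @ p, 6))           # coefficients of dp/dx = 3x^2, i.e. ~ [0, 0, 3, 0]

# Differentiation on P_N is nilpotent of index N + 1, hence so is aleph^{-1} beth
print(np.allclose(np.linalg.matrix_power(D, N + 1), 0))  # True
```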
This proof is easily adapted to show that U −1 V 2 and U −1 V 3 are nilpotent, and clearly extends to any number of spatial dimensions. No specific choice has been made for N ∈ N and thus the result holds in general.
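For instance, under the ordering of the previous section, the adaptation for the second and third spatial directions amounts to moving the nilpotent factor ℵ^{-1}ℶ into the slot of the relevant spatial index:

```latex
U^{-1} V^2 = \left( C^{-1} \aleph \right) \otimes I \otimes \left( \aleph^{-1} \beth \right) \otimes I,
\qquad
U^{-1} V^3 = \left( C^{-1} \aleph \right) \otimes I \otimes I \otimes \left( \aleph^{-1} \beth \right)
```

so each of these products is again nilpotent of index at most N + 1, and all of their eigenvalues are 0.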

The Continuous Galerkin Case
In addition to ℵ, ℶ, we now define ℵ′, ℶ′, where each new matrix is equal to the original with its first row and column removed (the row and column corresponding to the constant-term polynomial ψ_0). Take the following ordering:

α = α_t(N+1)³ + α_x(N+1)² + α_y(N+1) + α_z

where now 0 ≤ α_x, α_y, α_z ≤ N and 0 ≤ α_t ≤ N − 1. Using the same ordering for β, U and V^i again factor as Kronecker products, with the reduced temporal matrices (built from ℵ′, ℶ′) appearing in the temporal slot in place of C and ℵ. The proof for the continuous case follows in the same manner as the proof for the discontinuous case: the spatial factor ℵ^{-1}ℶ of U^{-1}V^i is unchanged, and its nilpotency again forces (U^{-1}V^i)^{N+1} = 0, so that all eigenvalues of U^{-1}V^i are 0.