Wave splitting of Maxwell's equations with anisotropic heterogeneous constitutive relations

The equations for the electromagnetic field in anisotropic media are written in a form containing only the transverse field components relative to a half-plane boundary. The operator corresponding to this formulation is the electromagnetic system's matrix. A constructive proof of the existence of a directional wave-field decomposition with respect to the normal of the boundary is presented. In the process of defining the wave-field decomposition (wave splitting), the resolvent set of the time-Laplace representation of the system's matrix is analyzed. This set is shown to contain a strip around the imaginary axis. We construct a splitting matrix as a Dunford-Taylor type integral over the resolvent of the unbounded operator defined by the electromagnetic system's matrix. The splitting matrix commutes with the system's matrix, and the decomposition is obtained via a generalized eigenvalue-eigenvector procedure. The decomposition is expressed in terms of components of the splitting matrix. The constructive solution to the question of the existence of a decomposition also generates an impedance-mapping solution to an algebraic Riccati operator equation. This solution is the electromagnetic generalization, in anisotropic media, of a Dirichlet-to-Neumann map.


Introduction
Wave field decomposition is a tool for analyzing and computing waves in a configuration characterized by a certain directionality. The wave-field decomposition, or wave splitting, has been used to separate the wave field constituents which are of importance for the analysis on a boundary, both for direct and inverse scattering problems [25,6,14,11,9,35,39] and for the analysis of boundary conditions (see for example [5,27,2,23]).
A remaining challenge in seismic prospecting methods is to incorporate anisotropy into the analysis. The enormous data sets used for studying such inverse problems are at or beyond the limits of today's computers [3]. A common method for attacking such problems is to use wave-field approximations. Such approximations have been developed for and applied to a wide range of hyperbolic equations describing wave propagation in isotropic media. One class of such approximations is based on a decomposition of the wave field into up-/down-going components. Such a decomposition is usually denoted a wave splitting or a wave-field decomposition. There are essentially two types of limitations to the present theory of wave splitting. The traditional operator-based approach has been limited to heterogeneous isotropic materials, see e.g., [38,14,9,28], or to up-/down-symmetric media [16]; this method is based essentially on constructing a certain square-root operator, and it fails once the medium becomes inherently anisotropic. Wave splitting by spectral decomposition of a certain matrix, on the other hand, is restricted to homogeneous or depth-independent materials; here both anisotropic and bi-anisotropic materials have been considered [12,30].
The present paper removes both of these limitations. We present a derivation of a three-dimensional wave splitting for electromagnetic fields in the presence of inherently anisotropic, lossless, heterogeneous constitutive relations. We show the existence of a decomposition by a constructive argument. The decomposition is given for media with anisotropic permittivity and permeability described by self-adjoint, heterogeneous, positive definite matrices. These conditions are sufficient, but not necessary, material conditions for the resolvent set of a certain operator to contain a strip around the imaginary axis. The requirements on the material parameters, in the time-Laplace domain, correspond to a medium with only instantaneous lossless response. The analysis, which uses pseudodifferential calculus, is straightforward if one assumes that the material coefficients depend smoothly on the spatial variables. In order to simplify the analysis this assumption is made. Physically this should not be regarded as a restriction, since the smooth functions densely approximate the square-integrable ones.
The decomposition is constructed through a generalized eigenvalue-eigenvector procedure and a certain commutation of two operators. The construction of the commuting operator, the splitting matrix, is made by means of a functional-analysis approach using the resolvent of the electromagnetic system's matrix, and is analyzed with pseudodifferential calculus with parameters, to prove its existence and to study its behavior. The method is a generalization of the stratified-media case first presented in [12] and extends the theory from the linear acoustic case [22] to the considerably more complex electromagnetic case. The challenges in going from the anisotropic-acoustic case to the electromagnetic case include a more complex differential operator with a non-trivial null space, as well as the analysis of a resolvent operator which here is the inverse of a 4 × 4 matrix of operators.
There is a wealth of literature on wave splitting and its applications. A few references are mentioned below. For time-domain wave splitting see [11], where both the wave equation and Maxwell's equations are considered with both applications and theory. Wave splitting in connection with Bremmer series for linear acoustics [14,8,9], and with uniform asymptotics and normal modes [7,15], has been used to analyze the wave-field constituents. An extension to include dispersion is presented in [28], and wave splitting on structural elements in [21]. The square root of a certain operator is a key step in isotropic wave splitting; this operator has been carefully studied in [17]. A reciprocity-theorem approach to decomposition is used in [33]. The results for anisotropic media include [12,16,22].
Applications of the existing wave-splitting techniques include several successful tools for analyzing the wave field, among them Bremmer series, normal modes and uniform asymptotics. Another spin-off is the development of fast numerical codes to calculate the wave fields. Among their implementations we have 'rational approximations', 'generalized screens' and the 'multiple-forescattering-single-backscattering approximation' [36,18,31,39]. In the active field of time-reversal mirrors, see e.g., [34,40], the wave-splitting techniques have also been used, see e.g., [23]. It is our hope that the extension of the wave-splitting techniques to the inherently anisotropic case will provide a base for generalizing the above-mentioned applications, both analysis tools and fast numerical codes, to general anisotropic media.
The present paper is organized around a set of three propositions that step by step introduce and prove the necessary properties and tools to obtain the decomposition. The analysis is performed in the time-Laplace domain and the procedures impose limitations on the Laplace parameter. In §2 the problem is formulated after a rewriting of Maxwell's equations to a suitable form. In §3 the properties, mostly the spectral properties, of the electromagnetic system's matrix are stated and proved using functional analysis. The propositions impose only the natural condition that the Laplace parameter has to belong to the right half of the complex plane. In §4 the splitting matrix is constructed and several of its properties are shown. The analysis utilizes that the material parameters are self-adjoint and positive, and furthermore requires a mild constraint on the Laplace parameter, in order to obtain a certain ellipticity condition that is needed in the subsequent analysis. The most important property shown in this section is that the generalized eigenvectors of the splitting matrix can be obtained explicitly in terms of the elements of the splitting matrix. In §5 the decomposition is derived in terms of the generalized eigenvectors of the splitting matrix. The last section concludes with a discussion and some observations. Some lengthy intermediate derivations of the electromagnetic system's matrix in §2 are detailed in Appendix A. In Appendix B the special case of an isotropic homogeneous medium is treated using the approach developed in the present paper, and the results are compared with traditional methods. In Appendix C the determinant of the symbol of the electromagnetic system's matrix is given. Furthermore, explicit integrations of the resolvent in symbol representation are presented in terms of residue calculus, and the integrals are stated in terms of the roots of the determinant of the principal symbol of the electromagnetic system's matrix.

The two-way equations for Maxwell's equations
We consider electromagnetic wave motion in heterogeneous anisotropic media with instantaneous response. Let x ∈ R³ be a point in space and t ∈ R be time. The medium is assumed to be independent of time. The following initial conditions on the fields ensure causality:

E(x, t) = 0, D(x, t) = 0 for t < 0 and all x ∈ R³, (2.1)
H(x, t) = 0, B(x, t) = 0 for t < 0 and all x ∈ R³.

The quantities E, D, H, B are all functions of space, x ∈ R³, and time, t ∈ R, with values in R³; we return to the function spaces to which they belong at a later point in the present paper. Above we have used standard units and notation, see [19].
The electromagnetic field satisfies a first-order hyperbolic system of partial differential equations in the time domain, Maxwell's equations, see e.g., [13,20]. We consider Maxwell's equations in the time-Laplace domain. The time-Laplace parameter s is in general complex valued and lies in the right half plane, Re{s} > 0. With the specified initial conditions (2.1), we have ∂_t → s. In this paper x = {x₁, x₂, x₃} are right-handed orthogonal Cartesian coordinates. All the subsequent analysis is carried out in the domain of (x, s); hence there is no need to distinguish between the time-dependent field, E(x, t), and the Laplace-parameter-dependent field, E(x, s).
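The substitution ∂_t → s can be verified with a one-line computation (our display, using the standard one-sided Laplace transform; the paper does not show it explicitly):

```latex
\hat{E}(x,s)=\int_{0}^{\infty}E(x,t)\,e^{-st}\,\mathrm{d}t,
\qquad
\widehat{\partial_t E}(x,s)=\Bigl[E(x,t)\,e^{-st}\Bigr]_{t=0}^{\infty}+s\,\hat{E}(x,s)=s\,\hat{E}(x,s),
```

the boundary term vanishing because of the initial conditions (2.1) and Re{s} > 0.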
To explicitly introduce the material parameters into the equations we assume constitutive relations in which µ = relative anisotropic permeability tensor, ǫ = relative anisotropic permittivity tensor, µ₀ = free-space permeability [H/m] and ǫ₀ = free-space permittivity [F/m]. The relative permeability and permittivity are assumed to be self-adjoint, positive definite 3 × 3 tensors of second rank; that is, the media under consideration have only instantaneous response. Inserting the constitutive relations into Maxwell's equations gives the system (2.4). Before proceeding we re-scale and change the dimensions of the equations, analogously to e.g., [9, p.10], to simplify the subsequent analysis; in the scaling (2.5), c₀ is the speed of light in vacuum. Upon substituting (2.5) into (2.4) we obtain the scaled system (2.6). All the following considerations refer to this transformed space, and for notational simplicity we drop the accents marking the scaled quantities, but remember the change of dimension; in particular the scaled s has dimension m⁻¹, the scaled H and E have dimension (J/m³)^{1/2}, and the scaled sources J_e and K_e have dimension (J/m⁵)^{1/2}. The transformation above is made for dimensional convenience, in particular in the calculus of pseudodifferential operators, see §4.
The 'evolution' of the wave field in space, along a direction of preference, can be expressed in terms of the change of the wave field in the directions perpendicular to it. The direction of preference is taken to be along the x₃-axis (or 'vertical' axis) and the remaining ('horizontal') coordinates are denoted by x_α, x_β, α, β ∈ {1, 2}, or x′ = {x₁, x₂} when convenient. The procedure requires a separate treatment of the vertical components of E and H. From (2.6) we find the vertical field components, (2.7), where Einstein's summation convention for Cartesian tensors is employed for repeated indices α, β ∈ {1, 2}, e.g., µ_{3β}H_β = Σ_{β=1}^{2} µ_{3β}H_β. To project out the third component of a vector we have used the subscript 3; the notation is displayed explicitly in (2.9). By replacing E₃, H₃ in (2.9) with (2.7) we arrive (the derivation is detailed in Appendix A) at

(I∂₃ + A)F = N, (2.10)

where N is a linear combination of the sources and their derivatives, cf. (A.12), and where the elements of the electromagnetic field matrix, F, are given in (2.11). To simplify some of the following calculations we introduce the notation (2.12). There are several possible orderings of the transverse components of E, H in F. The particular choice of combination given in (2.11) has two advantages. First, the given choice ensures that both the matrix operators A₁₂ and A₂₁ are invertible. Secondly, the third component of the Poynting vector, expressed in F, equals (E × H̄)₃, where the bar denotes the complex conjugate. The 4 × 4 electromagnetic system's matrix, A, is here represented by four 2 × 2 blocks, where each block matrix is given by (2.14), in which (2.15) holds. The permeability and permittivity are symmetric 3 × 3 tensors of rank (tensor order) 2 that are bounded from below and from above. Hence the upper-left 2 × 2 matrices of ǫ and µ are bounded below by the constants ǫ̂₀ and µ̂₀ respectively. To show the notation we have

u_α ǫ_{αβ} ū_β ≥ ǫ̂₀ u_α ū_α, (2.16)

for any complex field u = (u₁, u₂).
From the definitions of ε_{αβ} and ν_{αβ} it is clear that they are symmetric matrices (since ǫ and µ are symmetric and all elements are real valued). Furthermore, each is bounded below by the constants ǫ̂₁ and µ̂₁ respectively. This follows from an identity (summation over repeated indices α, β ∈ {1, 2} and j, k ∈ {1, 2, 3}) valid for any complex field u. Since ǫ is positive definite, ε must also be positive definite, and analogously for µ and ν.
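The lower bound on the 2 × 2 blocks can be illustrated numerically. The following sketch (ours, not from the paper; the tensor is a random stand-in) checks the interlacing fact behind the bound: a self-adjoint positive definite 3 × 3 tensor has a positive definite upper-left 2 × 2 block, whose smallest eigenvalue is no smaller than that of the full tensor.

```python
import numpy as np

# Hypothetical numerical check: build a symmetric positive definite
# "permittivity" tensor and compare the smallest eigenvalue of its
# upper-left 2x2 block with that of the full 3x3 tensor.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
eps = M @ M.T + 3 * np.eye(3)            # symmetric positive definite by construction

eig_full = np.linalg.eigvalsh(eps)        # ascending eigenvalues of the full tensor
eig_block = np.linalg.eigvalsh(eps[:2, :2])  # eigenvalues of the upper-left block

# Cauchy interlacing: smallest block eigenvalue >= smallest full eigenvalue > 0
assert eig_block[0] >= eig_full[0] > 0
```

This is the quadratic-form statement (2.16): the block inherits positivity, with a constant controlled by the full tensor.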

Preliminaries
We consider the electromagnetic system's matrix and other operators on Sobolev spaces. Let H^r(R²; C) be the set of functions belonging to the Sobolev space of order r ∈ N with domain in R² and values in C, with a weighted inner product to compensate for the dimension of the derivative. To extend this scalar space to vectors we introduce the corresponding notation for a 4 × 1 matrix, with each element in the Sobolev space of order r. Let F = (Ẽ_a, H̃_a) and G = (Ẽ_b, H̃_b). Then we define the inner product (F, G)_r, where y₀ is a constant of dimension length, used to normalize the change of dimension from the derivatives. All components of the field depend on x, but inner products and norms refer to x′ and we treat x₃ as a parameter. We have adopted the multi-index notation of pseudodifferential calculus [32] above and use k ∈ N². The norm corresponding to the inner product is the induced one, ‖F‖_r = (F, F)_r^{1/2}. The set H^r with the inner product (·, ·)_r is a Hilbert space. For the case r = 0 we recover the Lebesgue space of square-integrable functions, {L², (·, ·)₀}. The analysis of unbounded operators (such as the electromagnetic system's matrix) requires that one specifies the domain of the operator and its embedding space. Below we consider operators on the space {L², (·, ·)₀}, that is A : {L², (·, ·)₀} → {L², (·, ·)₀}. Since the operator is unbounded, we also need to specify its domain, which is H². The domain of an operator on a space is fundamental for the analysis. Here all operators have dense domains and, when necessary, we restrict to dense subsets of their domains for the operation under consideration to be defined. In the case of such a restriction we use the notation A|_q for the operator restricted to this dense subset of its domain, and indicate by q what dense subset is understood to be the restricted domain.
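The weighted inner product can be made concrete on a periodic grid (our sketch; the function `sobolev_norm`, the grid, and the multiplier form (1 + y₀²|ξ|²)^r are our stand-ins for the paper's weighted (·, ·)_r):

```python
import numpy as np

# Sketch: an H^r norm with length scale y0, computed as the Fourier
# multiplier (1 + y0^2 |xi|^2)^r on a 2D periodic grid.
def sobolev_norm(f, r, y0=1.0, L=2*np.pi):
    n = f.shape[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)           # dual variable xi
    kx, ky = np.meshgrid(k, k, indexing="ij")
    weight = (1.0 + y0**2*(kx**2 + ky**2))**r       # (1 + y0^2 |xi|^2)^r
    fhat = np.fft.fft2(f) / n**2                    # normalized Fourier coefficients
    return np.sqrt(np.sum(weight*np.abs(fhat)**2))

x = np.linspace(0, 2*np.pi, 64, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.sin(X)*np.cos(Y)
# r = 0 recovers the (normalized) L2 norm; larger r penalizes oscillation
assert sobolev_norm(f, 2) >= sobolev_norm(f, 0)
```

The constant y₀ keeps the multiplier dimensionless, mirroring the role it plays in the inner product above.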
One can also consider A_{s,λ} as an operator on {H^r, (·, ·)_r} (if r > 0 this is a restriction of the operator defined above) with domain H^{r+2}, and the analysis extends trivially to this case. An alternative method was detailed in [22], where instead of A_{s,λ} one considers the operator γ^{−r} A_{s,λ} γ^r, where γ^r := (1 − y₀² ∂_α ∂_α)^{r/2}. The results in this paper hold also for this class of operators.

Formulation of the problem
To be able to solve the scattering process along the vertical direction separately from the scattering process in the horizontal directions, we diagonalize the operator on the left-hand side of (2.10). This procedure will possibly lead to an additional source term on the right-hand side that accounts for the coupling. To achieve this, we construct a linear operator L which converts two-way fields F to one-way field constituents W, by F = LW. We require that L, when introduced into (2.10), gives the one-way form (2.23), so as to make V, defined by

AL = LV, (2.24)

a block-diagonal matrix of operators. We call L the composition operator and W the wave matrix. The elements of the wave matrix represent locally the down- and up-going constituents. The expression in parentheses on the left-hand side of (2.23) represents the two so-called one-way wave operators. The first term on the right-hand side of (2.23) represents the scattering due to variations of the medium properties in the vertical direction. The scattering due to variations of the medium properties in the horizontal directions is contained in V and, implicitly, in L as well. To investigate whether solutions {L, V} of (2.24) exist, we introduce the column matrices, or generalized eigenvectors, L^±. Upon writing the block-diagonal elements of V (generalized eigenvalues) in terms of S^±, Eqn. (2.24) decomposes into two systems of equations, where S^± are 2 × 2 matrices. The central problem that we consider in the present paper is to show that there exists an operator pair {L, V} such that the above operator equation, (2.24), is satisfied. Since the operators are unbounded we need to modify (2.24) and (2.27) with respect to the domains of the respective operators.
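In the constant-coefficient case the operator relation AL = LV collapses to matrix algebra, which can be sketched numerically (a toy of our own; the 4 × 4 matrix below is a stand-in, not the paper's operator A):

```python
import numpy as np

# Toy sketch: for a constant-coefficient 4x4 system matrix A with no
# purely imaginary eigenvalues, the composition operator L and the
# block-diagonal V of (2.24) reduce to an eigenvector matrix and its
# eigenvalues, sorted by the sign of Re(lambda) into the two one-way parts.
A = np.array([[0., 0., 2., 1.],
              [0., 0., 1., 3.],
              [4., 1., 0., 0.],
              [1., 2., 0., 0.]])
w, v = np.linalg.eig(A)
order = np.argsort(w.real)          # negative real parts first, then positive
L = v[:, order]                     # columns play the role of L^-, L^+
V = np.diag(w[order])               # 'block-diagonal' generalized eigenvalues
assert np.allclose(A @ L, L @ V)    # the decomposition relation A L = L V
```

For heterogeneous media L and V are operators rather than matrices, which is exactly why the existence question below is non-trivial.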
Note that the upper 2 × 2 matrix of the operators L ± combines the transverse electric field strength, and the lower, the magnetic field strength, whereas the elements of W may be physically 'non-observable'.
We now focus on the fundamental question: does there exist a composition operator L that decomposes A in the above sense? To begin to show that there exists such a decomposition of A, we derive properties of the resolvent set of A which enable us to define a certain operator that commutes with A.

Properties of the A s,λ operator
In this section we show that the directional decomposition of the electromagnetic field is closely related to the spectral properties of the operator A. The definition of the splitting matrix requires that there exists a region around the imaginary axis which is free from spectrum. We therefore state the definition of the spectrum explicitly. Consider first the operator A_{s,λ} defined on {L², (·, ·)₀} with domain H²: if the scalar λ ∈ C is such that the range of A_{s,λ} is dense in {L², (·, ·)₀} and A_{s,λ} has a bounded inverse, then λ is in the resolvent set, P(A), of A; we denote this inverse by A⁻¹_{s,λ} and call it the resolvent (at s, λ) of A. All complex numbers not in the resolvent set form a set Σ(A) called the spectrum of A.
To simplify some of the upcoming calculations we use the notation

s = s_r e^{iσ} = s_r cos σ + i s_r sin σ and λ = λ_R + iλ_I. (3.2)

The following proposition gives as a corollary that there exists a strip that belongs to the resolvent set of A.
Remark 1.2. The underlying requirement of self-adjoint material parameters can be replaced by positivity of the real part of the eigenvalues of the two matrices sµ, sǫ in the case of up/down-symmetric materials, i.e. when µ_{3α} = µ_{α3} = 0 and ǫ_{3α} = ǫ_{α3} = 0 for α = 1, 2. This follows directly from the proof of part 1. It is not clear that this extension is valid for the general anisotropic case; we do not pursue this, since our proof of Proposition 2 makes use of the self-adjointness of ǫ, µ.
From Proposition 1 it directly follows that:

Corollary 1.1. For any fixed s ∈ C such that Re{s} > 0, the resolvent set of the electromagnetic system's matrix contains the strip of all λ ∈ C satisfying (3.3). That is, the strip belongs to the resolvent set, P(A). Here I is the 2 × 2 unit matrix; from (A.13) we note that A satisfies an identity used below.
Proof of Corollary 1.1. Given Re{s} > 0 and λ in the strip (3.3), we find (s, λ) ∈ Q, and hence A satisfies properties 1-3 of Proposition 1. These properties are the conditions needed for a point λ to be in the resolvent set.

Properties of a quadratic form
To prove Proposition 1 we introduce some properties of an auxiliary quantity, a quadratic form, C_{s,λ}[F], defined in Lemma 1.1. Using the notation introduced in Proposition 1 and in the definition of the norm in (2.21), we have: with F = (Ẽ, H̃)^T, the form C_{s,λ} is well defined for F ∈ H¹ and bounded from below. Proof. That C_{s,λ} is well defined for F ∈ H¹ is clear, as the quadratic form contains at most one derivative of each field component in each term. To see that C_{s,λ} is bounded from below we take the real part of the integrand; using the notation introduced in (3.2), and the self-adjointness of ε and ν, we obtain the stated expression. Since {s, λ} ∈ Q, it follows that cos σ > 0. For all η > 0 we have the stated inequality; thus we require that η ∈ (0, 1). The largest |λ_R|-strip is obtained in the limit η → 1, yielding the condition (3.9). Hence for a given fixed s such that Re{s} > 0 and for a fixed λ that fulfils (3.9), there exists an optimal η such that the bound from below in (3.8) is maximal. Thus the best choice of lower bound with the given estimates is C₀(s, λ). We note that the bound above is a continuous function of η on [0, 1], and the maximal value is attained (no need for a supremum) in the interior of the interval. Furthermore, the optimal value of η, for which C₀(s, λ) is attained, is obtained as a solution of a second-order equation in η, but here it suffices to know that it exists and is positive. Upon integration we find the stated bound.

Proof of Proposition 1, part 1
To start the proof that the operator A_{s,λ} is bounded from below, we employ Schwarz's inequality (3.11) with F ∈ D(A) = H², where K is the matrix defined in (3.12) and I is a 2 × 2 matrix; K is unitary with respect to ‖·‖₀. On the block-element level, with F = (Ẽ, H̃) (cf. (2.12)), we expand the quadratic form, where the repeated index q indicates summation over the four components and Ẽ* = (Ẽ̄)^T is the conjugate transpose. The first term becomes, after integration over x′ and integration by parts, (3.13), and the second becomes (3.14). The fourth term becomes, after simplification and integration by parts, (3.15), and the last term (3.16). Since ǫ and µ are self-adjoint matrices, the two terms in (3.15) and (3.16) combine, for F ∈ D(A_{s,λ}) ⊂ H¹. By Lemma 1.1 and (3.11) we obtain the lower bound, where C₀(s, λ) is defined in the lemma, and C₀(s, λ) > 0 for {s, λ} ∈ Q.

Proof of Proposition 1, part 2
The inequality from part 1, with C₀ > 0, implies that the null space contains only the zero element. By [29, p.171, Theorem 4.4.1] an operator with trivial null space is one-to-one (injective). Hence the operator A_{s,λ} is one-to-one for {s, λ} ∈ Q.

Proof of Proposition 1, part 3
Let A*_{s,λ} denote the adjoint of A_{s,λ} on L². To show that the operator has dense range it is sufficient to show that the kernel of A*_{s,λ} is trivial. Thus if A*_{s,λ} is bounded from below, the desired result follows directly, cf. Proposition 1, part 2.
The adjoint of A_{s,λ} with respect to the inner product (·, ·)₀ follows by integration by parts, where we have used that ǫ, µ are self-adjoint, and hence that their diagonals are real-valued. The domain of the adjoint is the set D(A*_{s,λ}). To show that A*_{s,λ} is bounded from below we use the same method as in Proposition 1, part 1. First we need a small enough set that contains the domain; this is read off from the form of A*_{s,λ}. To obtain the quadratic form needed to use Lemma 1.1, we once again use the Schwarz estimate. Let G ∈ D(A*_{s,λ}) and F := KG, where the matrix K was introduced in (3.12), is unitary on (·, ·)₀, and has the properties K = K* = K⁻¹ and ‖KG‖₀ = ‖G‖₀. Similarly to (3.11) we find (3.25), with F = {Ẽ, H̃}. Let {s, λ} ∈ Q; from the properties of C_{s,λ}, which is defined for F ∈ H¹, we find the lower bound. From (3.25) and (3.24) it follows that A*_{s,λ} is bounded from below for all G ∈ D(A*_{s,λ}) ⊂ H¹. From the bound from below it follows directly that A*_{s,λ} has a trivial kernel and thus A_{s,λ} has dense range for {s, λ} ∈ Q.

Proof of Proposition 1, part 4
In Section 4 below we consider a Dunford-Taylor integral over the resolvent, and the analysis simplifies if A is closed. From the form of A*_{s,λ} we note that C^∞(R², C⁴) ⊂ D(A*_{s,λ}); hence A*_{s,λ} is densely defined for {s, λ} ∈ Q and thus, by [24, p.168, §III.5.5], the operator A_{s,λ} is closable. The closure is denoted by {A_{s,λ}}_cl.
The range of the closed operator is still dense in {L², (·, ·)₀}. To show that the closed operator is bounded from below, we rely on Corollary VI.1.19 of [24]. This corollary applies to sesquilinear forms in Hilbert spaces, but by Examples 1.23 and 1.3 in [24] we conclude that we can construct the sesquilinear form (A_{s,λ}F, A_{s,λ}G)₀ and that it is closable precisely when A_{s,λ} is closable. Thus, by the above-mentioned corollary, we obtain that the closed form is bounded from below with the same constant, and thus the closed operator is bounded from below for {s, λ} ∈ Q.

Proof of Proposition 1, part 5
Given (s, λ) ∈ Q we have shown in parts 1-3 that A_{s,λ} is one-to-one, has dense range, and is bounded from below. Consequently the inverse exists and is unique. The operator A_{s,λ} can be explicitly inverted in terms of the inverses of two 2 × 2 matrix operators through a quasi-diagonalization. Once again introduce the matrix K (cf. (3.12)), with K⁻¹ = K. From the form of A₂₁, (2.14), we find that its inverse exists and is bounded for Re{s} > 0; thus A₂₁⁻¹ is well defined. The operator KA_{s,λ} is then diagonalized as in (3.29), with factors T_{1;s,λ}, T_{2;s,λ} and the characteristic operator E_{s,λ}, a matrix extension of the 'transverse Helmholtz' operator [14]. For each fixed λ, the operators T_{1;s,λ}, T_{2;s,λ} have explicit inverses. From the quasi-diagonalization (3.29) we obtain an explicit expression for E_{s,λ}, and since (s, λ) ∈ Q the bound from below is positive and E_{s,λ} is invertible. Starting from (3.29) and inverting term by term gives the resolvent.
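The term-by-term inversion can be mimicked with matrices (our stand-ins, not the paper's operators): with the swap matrix K and an invertible block A₂₁, the product KA factors so that only A₂₁ and one 2 × 2 'characteristic' block E (a Schur-type complement, our choice of reduction) need to be inverted.

```python
import numpy as np

# Generic numerical sketch of a quasi-diagonalization in the spirit of (3.29).
Z, I2 = np.zeros((2, 2)), np.eye(2)
K = np.block([[Z, I2], [I2, Z]])                  # swap matrix, K = K^{-1}

A11, A12 = np.array([[1., 2.], [0., 1.]]), np.array([[3., 0.], [1., 2.]])
A21, A22 = np.array([[2., 1.], [1., 2.]]), np.array([[1., 0.], [2., 1.]])
A = np.block([[A11, A12], [A21, A22]])

A21inv = np.linalg.inv(A21)
E = A12 - A11 @ A21inv @ A22                      # characteristic 2x2 block
T1 = np.block([[A21, Z], [A11, I2]])
T2 = np.block([[I2, A21inv @ A22], [Z, E]])
assert np.allclose(K @ A, T1 @ T2)                # the factorization of K A

# invert term by term: A = K T1 T2  =>  A^{-1} = T2^{-1} T1^{-1} K
Ainv = np.linalg.inv(T2) @ np.linalg.inv(T1) @ K
assert np.allclose(A @ Ainv, np.eye(4))
```

In the paper the analogous blocks are operators, and the invertibility of E_{s,λ} is exactly what the bound from below delivers.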

The Splitting Matrix
We proceed with the decomposition of the electromagnetic system's matrix. As the spectrum is absent from the strip (see Corollary 1.1), we define a certain commuting operator through a resolvent integral; this operator will satisfy a number of properties and will be called the splitting matrix. We note that if an operator has a spectral resolution, or even a part of the spectrum which is bounded, then one can define a projector with the help of a Cauchy-type integral, also called Dunford's integral, over the resolvent, with the integration path around the bounded spectral region, see [24, III.6.4] and also [41,32]. For the electromagnetic system's matrix such information about the spectrum is not known; we do know, however, that the spectrum is separated into two parts by a strip around the imaginary axis. The idea here is to accomplish a decomposition by introducing an operator defined by an integral over the resolvent of A, so as to split the two parts we know exist, similarly to the case of bounded spectral regions. We use the Dunford-Taylor integral applied to a closed, unbounded operator as in [4], cf. [24] for accretive operators. This theory is given only for closed paths, or absolutely bounded integrals; hence the extension needed here to non-closed paths requires that we prove that the operator is well defined.
In this section we prove a number of properties of the splitting matrix, among them that it is well defined as a pseudodifferential operator with a parameter, that it is an involution, and that it commutes with the electromagnetic system's matrix. Once the splitting matrix is shown to be well defined we derive its generalized eigenvalues and eigenvectors; the generalized eigenvectors are the key components for the decomposition detailed in the next section.

Definition of the Splitting Matrix
Given a fixed positive constant S_R > 0, let Q₁ denote the corresponding parameter set. From Proposition 1 we note that Q₁ ⊂ Q; hence by Corollary 1.1 the strip |Re{λ}| < S_R √(ǫ̂₁µ̂₁) belongs to the resolvent set of the electromagnetic system's matrix. Thus we can consider the operator B defined through the resolvent integral (4.2). The spatial and time-Laplace dependence is present but not explicit in the notation. The integration path lies within this strip, and hence in the resolvent set. Some of the considerations that follow become simpler if we consider the operator's restriction to C^∞(R², C⁴), denoted •B, and similarly for any operator with the notation •. With the above introduction, we have the following proposition.
Proposition 2. Let B be defined as in (4.2) with {s, K_n} ∈ Q₁. Then B

1. is a pseudodifferential operator with parameters of order 0;

2. has a restriction B|_q which maps H^q into H^{q−1};

3. 'commutes' with A in the sense that on the set H³ we have BAF = ABF;

4. has a restriction, •B, that is an involution;

5. has a restriction •B that has generalized eigenvectors •L^±, unique up to a normalization, with corresponding scalar 'eigenvalues' γ = ±1, satisfying the equation •B •L^± = ± •L^±; the explicit form of •L^± is given in terms of the elements of •B, where •N^± is a normalization in the form of invertible 2 × 2 operator matrices;

6. has a restriction •B that is one-to-one on a core, and thus its element •B₂₁ is invertible on its range;

7. has 'generalized eigenvectors' that extend to a mapping between H̃ and Ẽ, i.e., an impedance mapping. The corresponding map in linear acoustics is a map between the pressure and the vertical particle velocity, cf. [22]. Both of these mappings are the acoustic and electromagnetic equivalents, respectively, of a Dirichlet-to-Neumann map for the wave equation [11].
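A finite-dimensional caricature of these properties is the matrix sign function (our sketch; the matrix is a stand-in, and we compute the sign spectrally rather than via the Dunford-Taylor integral, to which it is equivalent for matrices with no purely imaginary eigenvalues):

```python
import numpy as np

# For a diagonalizable A with no eigenvalues on the imaginary axis,
# B = sign(A) mimics the splitting matrix of Proposition 2:
# B^2 = I (involution), BA = AB, and eigenvalues gamma = +/-1.
A = np.array([[0., 0., 2., 1.],
              [0., 0., 1., 3.],
              [4., 1., 0., 0.],
              [1., 2., 0., 0.]])
w, v = np.linalg.eig(A)
B = (v @ np.diag(np.sign(w.real)) @ np.linalg.inv(v)).real

assert np.allclose(B @ B, np.eye(4))     # involution (property 4)
assert np.allclose(B @ A, A @ B)         # commutes with A (property 3)
```

The eigenvectors of B with eigenvalues ±1 play the role of the generalized eigenvectors •L^± in property 5.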

Proof of Proposition 2, part 1
The operator B is defined through an improper integral over the resolvent. To prove that B is well defined as a pseudodifferential operator with parameter, we first consider the parametrix of A_{s,λ}, then integrate each term of the asymptotic expansion with respect to λ and prove that this step is well defined. Hence we obtain an asymptotic expansion for the symbol of B; via the usual calculus of pseudodifferential operators we thus construct a well-defined operator B.

Pseudodifferential preliminaries
The calculus of pseudodifferential operators can be introduced by means of the Fourier transform, thus defining symbols. For simplicity we use standard notation for the symbols and their compositions; throughout this paper we use the left symbol (in the notation of [32]). The Fourier transform, F, in the plane with respect to the first two variables x′ = {x₁, x₂} has the inverse (4.5). To obtain the left symbol of A₂₁ we let it act upon (4.5) and find that the expression multiplying the exponential in the integrand is the left symbol of A₂₁. To find the appropriate behavior of the symbols we have to consider symbols with parameters. We consider s to be a parameter of the same order as ξ_i. We hence find the principal symbol a₂₁;₁ of a₂₁, which is homogeneous of order one in (ξ′, s).
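The recipe 'apply the operator to a plane wave and divide out the exponential' can be checked numerically in a one-dimensional toy (entirely our construction; the coefficients `c`, `q` are stand-ins, and the paper applies the same recipe to the matrix operator A₂₁):

```python
import numpy as np

# The left symbol p(x, xi) of a differential operator P is read off by
# applying P to the plane wave exp(i x.xi) and dividing out the
# exponential. For P = c(x) d/dx + q(x) this gives
# p(x, xi) = i*xi*c(x) + q(x); we check it with central differences.
c = lambda y: 2.0 + np.sin(y)        # variable coefficient (stand-in)
q = lambda y: 1.0 + y**2

def P(f, y, h=1e-6):
    # central difference for c(y) f'(y) + q(y) f(y)
    return c(y)*(f(y + h) - f(y - h))/(2*h) + q(y)*f(y)

x1, xi1 = 0.7, 3.0
wave = lambda y: np.exp(1j*xi1*y)
symbol = P(wave, x1) / wave(x1)
assert np.allclose(symbol, 1j*xi1*c(x1) + q(x1), atol=1e-4)
```

The principal symbol is then the part of highest homogeneity in (ξ, s), here the i ξ c(x) term.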

Ellipticity of A s,λ
For matrix-valued operators it is the determinant of the symbol that controls the regularity and existence of a parametrix. We require that the coefficients of each term in A_{s,λ} are arbitrarily smooth in (ξ′, s, λ), and thus we can use the criteria in Definition 5.1 together with Proposition 5.1′ of [32, pp. 38, 39] to define ellipticity of A_{s,λ}. To construct the principal symbol of the operator A, α_{;1}(x, ξ′; s), we proceed as above and obtain the remaining elements. Then α_{;1} has homogeneity degree 1 in (ξ′, s, λ). The remaining part of the symbol of A_{s,λ} has a lower degree of homogeneity in (ξ′, s, λ). To ensure the ellipticity of an operator in the parameters (ξ′, s, λ), the estimate (4.11) is needed for some R such that |ξ′|² + |s|² + |λ|² > R², with proper restrictions on the parameters {s, λ}. The properly-supported requirement for ellipticity follows from the fact that A is a classical pseudodifferential operator with smooth coefficients [32]. The upper limit of (4.11) follows directly from the fact that det α_{;1} is a polynomial, homogeneous of order four in (ξ′, s, λ) (see Appendix C), together with the fact that we can dominate this polynomial by (|ξ′|² + |s|² + |λ|²)² times a constant, for some constant R₀ such that |ξ′|² + |s|² + |λ|² > R₀². To prove the lower limit we need a more subtle method.
The lower limit of (4.11): This is a multi-step proof, and a complication arises since the region in ξ ′ , s, λ is not conical. First we prove that the determinant is non-zero on a surface (see Figure 4.1), then we use a scaling argument to extend this to a bound from below of the form (4.11) for a conical region with the surface in Figure 4.1 as 'bottom surface'. In the last step we extend the obtained result to the non-conical domain, so as to include λ R ≠ 0.
Non-zero determinant of α ;1 : Let λ R = 0. To show explicitly that the determinant of α ;1 is non-zero is difficult due to the large number of terms that it contains (see Appendix C). Our scaling argument needs only that the determinant is non-zero on a surface, here part of a sphere, see Figure 4.1. Thus let |s| 2 + |ξ ′ | 2 + |λ I | 2 = R 2 e , for some positive constant R e . We consider two cases: s = 0 and s ≠ 0. For the first case with s = 0, we find from Appendix C that (4.12) holds. Figure 4.1: A schematic picture of the surface |ξ ′ | 2 + |s| 2 + |λ I | 2 = R 2 e , | arg s| < π/2. In particular note that the condition | arg s| < π/2 shrinks the surface away from a half sphere.
Using the restriction that µ and ǫ are self-adjoint, together with the estimate for η 1 > 0, and similarly for 2λ, where R 2 e | s=0,λR=0 = |λ I | 2 + |ξ ′ | 2 and ε (η1) is defined accordingly. To find an explicit expression for C a , let us choose η 1 > 1 so that the stated equation holds. The right-hand side of this equation gives the minimum of the lower eigenvalue of the matrix ε (η1) γδ normalized with ǫ 33 . There exists an η 1 > 1 that fulfils this equation since ǫ̂ 1 > 0, hence the left-hand side of the above expression is positive. In the same way we find an η 2 > 1 such that the analogous equation holds; the constant C a then becomes explicit. For the case where s ≠ 0 we use Schwarz's inequality on an inner product. Thus we introduce the 'matrix' norm, with a corresponding inner product defined analogously and denoted by ·, · s . Both the norm and the inner product depend on (x, ξ ′ , s, λ I , 0). Consider the normal 4 × 4 matrix α * ;1 α ;1 , for λ R = 0, which has eigenvalues κ 1 , . . . , κ 4 , each with a variable dependence (x, ξ ′ ; s, λ I , 0). From the definition of eigenvalues it follows that 0 ≤ κ i ∈ R, where i = 1, . . . , 4, and we use the convention κ 4 ≥ · · · ≥ κ 1 . From the relation we find that it is enough to prove that κ 1 ≠ 0. Schwarz's inequality (cf. (3.11)) gives the corresponding estimate. Thus if for λ R = 0 and s ≠ 0 we can obtain an estimate of the form above, where C b > 0, then from (4.19) and (4.22) the lower bound follows. By (4.20) we obtain this, under some restrictions on s that remain to be derived. Now with the explicit form of α ;1 we obtain an expression where we have used the notation of Proof of Proposition 1, part 1. Since s ≠ 0 we take the real part, which is positive if Re {s} > 0, i.e., if |σ| = | arg s| < π/2 and s ≠ 0. By the above argument, (4.14) and (4.24), the determinant is non-zero on the surface R 2 e = |ξ ′ | 2 + |s| 2 + |λ I | 2 , |σ| < π/2 (4.28) if R e ≠ 0. Hence there exists a lower bound C e = min(C b , C a R 4 e ) such that |det α ;1 | λR=0 ≥ C e > 0 (4.29) on this surface.
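The eigenvalue step above can be illustrated in finite dimensions. The sketch below uses a hypothetical 4 × 4 matrix M as a numerical stand-in for the principal symbol α ;1 at a fixed point: the normal matrix M*M has eigenvalues 0 ≤ κ 1 ≤ · · · ≤ κ 4 with |det M| 2 = κ 1 κ 2 κ 3 κ 4 , so a lower bound on the determinant together with upper bounds on the larger κ i bounds κ 1 away from zero.

```python
import numpy as np

# Hypothetical 4x4 stand-in M for the principal symbol alpha_{;1} at a
# fixed (x, xi'; s, lambda_I, 0); not the paper's explicit symbol.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Eigenvalues of the normal matrix M* M, sorted: 0 <= k1 <= ... <= k4.
kappa = np.sort(np.linalg.eigvalsh(M.conj().T @ M))
det_sq = abs(np.linalg.det(M)) ** 2

# The product relation |det M|^2 = k1 k2 k3 k4 (cf. (4.19)).
assert np.isclose(det_sq, np.prod(kappa))

# A lower bound on |det M| and upper bounds on k2, k3, k4 bound k1:
kappa1_bound = det_sq / np.prod(kappa[1:])
assert np.isclose(kappa[0], kappa1_bound)
```

This is why it is enough to prove κ 1 ≠ 0: the determinant then controls the smallest eigenvalue from below.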
To extend the non-zero determinant, valid for |ξ ′ | 2 + |s| 2 + |λ I | 2 = R 2 e and |σ| < π/2, to a proper bound from below, we use a scaling argument. The homogeneity of det α ;1 allows us to scale ξ ′ , s, λ I to an arbitrary radius greater than R e , and det α ;1 (x, ξ ′ ; s r , σ, λ I , 0) = R −4 e (|ξ ′ | 2 + |s| 2 + |λ I | 2 ) 2 det α ;1 (x, ξ̃ ′ ; s̃ r , σ, λ̃ I , 0), where the ·̃ variables are normalized to lie on the surface |ξ̃ ′ | 2 + |s̃| 2 + |λ̃ I | 2 = R 2 e . Thus from (4.30) we have obtained the bound in the conical domain ξ ′ ∈ R 2 , λ I ∈ R, s ∈ C and | arg s| < π/2. The case λ R ≠ 0: To extend the argument above to include the case λ R ≠ 0 we impose the condition Re {s} > S R , where τ = S R √(ǫ̂ 1 μ̂ 1 ). Let λ R ≤ τ and R e > S R . For large enough R e , where |ξ ′ | 2 + |s| 2 + |λ I | 2 ≥ R 2 e , the worst case for the bound from below of the determinant is attained, where C̃ τ is chosen to include the sum of the maximal material parameters in front of λ R . Using the Hölder and the Jensen inequalities [10, p. 28, Theorem 19] gives the required estimate. Comparing with (4.11) we obtain the condition on the constants. Thus for some given, arbitrary R e > 0, there exists a large enough R such that | det α ;1 | > C 1 |z| 4 for |z| 2 > R 2 , when (C 1 , R) satisfy the corresponding constraints. Hence for ξ ′ ∈ R 2 and {s, λ} ∈ Q 1 , the operator is elliptic. Note that the quadratic form argument of Lemma 1.1 can be applied to the symbol α ;1 to yield a positive lower bound. Consequently (4.23) and (4.20) imply the bound for {s, λ} ∈ Q and |s| 2 + |ξ ′ | 2 + |λ| 2 ≠ 0. However the desired increase in |s| 2 + |ξ ′ | 2 + |λ| 2 does not follow directly, since the domain is not conical. Observe also that (4.41) is true in the region {s, λ} ∈ Q 1 . This result will be used at the end of the proof of part 1.
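The scaling step can be sketched numerically. Below, p is a hypothetical degree-4 homogeneous polynomial (a toy stand-in for det α ;1 ): a lower bound c on the sphere |z| = R e propagates by homogeneity to |p(z)| ≥ (c/R e 4 )|z| 4 on the whole cone obtained by scaling the sphere outwards.

```python
import numpy as np

def p(z):
    # Hypothetical homogeneous polynomial of degree 4 in three variables,
    # positive away from the origin (an illustrative stand-in only).
    x, y, w = z
    return x**4 + 2.0 * x**2 * y**2 + y**4 + w**4

R_e = 1.0
rng = np.random.default_rng(1)
dirs = rng.standard_normal((200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Lower bound c of p on (a sample of) the sphere of radius R_e.
c = min(p(R_e * d) for d in dirs)

# Homogeneity: p(r*z0) = r^4 p(z0), so the bound extends to the cone.
for d in dirs[:20]:
    for r in (2.0, 5.0, 10.0):
        z = r * R_e * d
        assert p(z) >= (c / R_e**4) * np.linalg.norm(z)**4 * (1 - 1e-9)
```

The same mechanism lifts the non-zero determinant on the 'bottom surface' of Figure 4.1 to the bound (4.11) on the conical region.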

The parametrix of A s,λ
We have above shown that A s,λ is elliptic in the pseudodifferential sense, hence the corresponding parametrix is well defined. In the subsequent analysis we are interested only in the principal part. From Proposition 1, part 5, we know that the inverse can be efficiently expressed in terms of E −1 s,λ for s, λ ∈ Q and different combinations of 2 × 2 matrices. Therefore we introduce the notation ·̂ on 2 × 2 matrices; the quantity det e ;1 is a polynomial homogeneous of order 4 in ξ ′ , s, λ. That | det e ;1 | ≠ 0 follows directly from | det α ;1 | ≠ 0 (see (4.41)) together with the observation that | det a 21;1 | ≠ 0 and that | det a 21;1 | is bounded above and below by constants times |ξ ′ | 2 + |s| 2 . Hence it follows that the parametrix of E s,λ is well defined. This is to be expected since the inverse of E s,λ was shown to be well defined in Proposition 1, part 5. With the above considerations we find that the components of the principal symbol of the resolvent, r ;−1 := α −1 ;1 = (a − λI) −1 , are given explicitly (cf. Proposition 1, part 5). Upon integration of the parametrix with respect to λ, the element (r ;−1 ) 12 has an unsuitable form; therefore we use the identity ê ;1 e ;1 = I det e ;1 and rewrite (r ;−1 ) 12 into (det α ;1 )(r ;−1 ) 12 = â 21;1 (ê ;1 e ;1 + (det a 21;1 ) −1 (a 22;1 − Iλ)ê ;1 (a 11;1 − Iλ)â 21;1 ), where 4m − n ′ > 1. We have above deduced the λ dependence for all terms in the symbolic expansion of the symbol of A −1 s,λ , and furthermore, we have the principal part explicitly.
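The identity ê ;1 e ;1 = I det e ;1 used above is, for 2 × 2 blocks, the classical adjugate identity; a symbolic sketch (with a, b, c, d as hypothetical entries of a 2 × 2 block e) is:

```python
import sympy as sp

# The hat operation on a 2x2 block, read as the adjugate (our assumption,
# consistent with the identity e_hat e = I det e used in the text).
a, b, c, d = sp.symbols('a b c d')
e = sp.Matrix([[a, b], [c, d]])
e_hat = sp.Matrix([[d, -b], [-c, a]])   # adjugate of e

# e_hat e = e e_hat = det(e) I
assert sp.simplify(e_hat * e - e.det() * sp.eye(2)) == sp.zeros(2, 2)
assert sp.simplify(e * e_hat - e.det() * sp.eye(2)) == sp.zeros(2, 2)
```

This is what allows (r ;−1 ) 12 to be rewritten with det α ;1 as the only denominator.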

B is a pseudodifferential operator with parameter of order 0
Given the parametrix of the resolvent, we integrate each term of the asymptotic series with respect to λ. To validate this procedure we show below that each of the terms is finite and that the integration does not rearrange the terms with respect to order, i.e., the principal term remains the principal term. We also show that B is an operator corresponding to a symbol that is homogeneous of order 0 in (ξ ′ , s).
As shown in the previous section, each element of the resolvent has the form λ n /(det α ;1 ) m , with 4m − n ≥ 1. The principal symbol has m = 1 and n = 0, 1, 2, 3; all other terms have homogeneity 0 or lower. In the evaluation of the integral we distinguish between two different cases, the principal-value integral corresponding to n = 3, m = 1 and the other cases. We observe that due to the homogeneity of the parametrix terms the case n = 3, m = 1 is the only principal-value integral.
The case m = 1, n = 3 gives a finite result, which we find by evaluating the integral of the integrand (4.55). In order to do this we claim that we can use the representation in which the eigenvalues, i.e., the roots of the fourth order polynomial det α ;1 , are denoted by λ ± 1 , λ ± 2 , where the +(−) indicates that they have positive (negative) real part. Indeed, to show that two of the eigenvalues of a ;1 have positive (negative) real part we first consider the isotropic case. The isotropic det α ;1 has the λ-roots (cf. Appendix B), i.e., two double roots on each side of the strip |Re {λ} | < S R inf x ′ √(ǫ iso µ iso ). The lower bound on A s,λ in Proposition 1, part 1, shows that for each anisotropic material with instantaneous response, the area around the imaginary axis is free from eigenvalues. We introduce a parameter γ ∈ [0, 1] in the material coefficients, interpolating between the isotropic and the anisotropic case, and analogously for µ (γ) . The lower bound on α ;1 , obtained by applying Lemma 1.1 and Proposition 1, parts 1-3, to α ;1 , and apparent in (4.41), ensures that for all γ ∈ [0, 1] the determinant | det α (γ) ;1 | remains bounded away from zero, so that no root crosses the strip along the interpolation. We evaluate the integral by partial fraction decomposition. Assume initially that there are no equal roots; then the integrand splits into simple fractions, where all coefficients D depend only on λ ± 1,2 . Each such fraction is integrated over the imaginary axis to become a logarithmic expression, where the branch cut is along the negative imaginary axis and where |λ R | < |Re λ ± 1,2 |. Concerning the choice of branch cut, observe that the integral above should be summed over each eigenvalue; thus the branch cut of the logarithm has to be chosen such that it agrees for all eigenvalues, hence the negative imaginary axis. Thus we obtain the result for n = 0, . . . , 3, and each term is homogeneous of order 0. To show that the sum is bounded from above we have to eliminate (λ + 1 − λ + 2 ) and (λ − 1 − λ − 2 ) from the denominator. We find that the denominator is bounded away from zero by (2τ ) 4 , since |λ ± R | ≥ τ = S R √(ǫ̂ 1 μ̂ 1 ); this follows from Corollary 1.1, which shows that the strip |λ R | < τ is free from eigenvalues.
Hence the integral is bounded.
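The basic fraction integral underlying this evaluation can be checked numerically. The principal-value integral of 1/(λ − a) along the imaginary axis equals −iπ sign(Re a), which is how the λ-integral separates the roots by the sign of their real part. (A toy computation; the paper's normalization and branch-cut bookkeeping differ in detail.)

```python
import numpy as np

# Parametrize the imaginary axis by lambda = i t with t = tan(theta),
# dt = sec(theta)^2 d(theta); a symmetric grid realizes the principal value.
theta = np.linspace(-np.pi / 2 + 1e-7, np.pi / 2 - 1e-7, 400_001)
t = np.tan(theta)
w = 1.0 / np.cos(theta) ** 2

def pv_integral(a):
    # P.V. of integral of d(lambda)/(lambda - a) over the imaginary axis.
    f = 1j * w / (1j * t - a)
    return np.sum((f[1:] + f[:-1]) * np.diff(theta)) / 2.0

for a in (2.0 + 0.0j, 0.5 + 3.0j, -1.0 + 0.0j, -0.3 - 2.0j):
    assert abs(pv_integral(a) + 1j * np.pi * np.sign(a.real)) < 1e-3
```

Eigenvalues to the right and to the left of the strip thus contribute with opposite signs, which is the mechanism behind the splitting matrix.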
The case with equal eigenvalues follows similarly. Assume λ + 2 = λ + 1 and λ − 1 ≠ λ − 2 ; then, with the integral and (4.60), we find that the result is bounded from above and homogeneous of order 0. The case where λ − 1 = λ − 2 and λ + 2 ≠ λ + 1 is totally analogous. For the case with two pairs of equal eigenvalues we need only find D +,1 and D −,1 , and hence the integral is homogeneous of order 0 and bounded from above since the denominator is bounded from below. We have thus shown that the integral (4.58) is well defined and homogeneous of order 0 for all possible combinations of λ-roots in det α ;1 . Next we show that the remaining terms have homogeneity degree raised by one compared to the corresponding term in the polyhomogeneous expansion of r, and consequently that the integral over λ does not rearrange the symbol expansion. From the construction of the parametrix of A s,λ we know that its asymptotic symbol expansion has a λ dependence of the form (4.54). Using that the determinant det α ;1 is homogeneous of degree 4 in (ξ ′ , s, λ), we find the scaling relation, where we have used 1) that the limits of the integral go to infinity, 2) that the λ-roots scale with η, implying that the strip λ̃ R ≤ ητ is free from poles, 3) that the scaled integration path is equivalent to the integration path λ̃ R = τ /2, since A −1 s,λ is analytic in λ in the resolvent set, and 4) that η is such that ηRe {s} > S R . Thus I n,m is homogeneous of degree n − 4m + 1 in ξ ′ , s. Each term in the polyhomogeneous expansion of r has a λ-dependence in the form of I n,m with a λ-independent coefficient. It follows that each integrated term of the expansion has a homogeneous degree that is one order higher than the homogeneous degree of each term of r. To show that each of the integrals I n,m is bounded from above, for fixed ξ ′ , s such that z ≠ 0, we use the estimate of the lower bound of the determinant for (s, λ) ∈ Q 1 (cf.
§4.2.2 and (4.41)). The principal case 4m − n = 1 is taken care of above, see (4.58). For 4m − n > 1 we have the bound, where for |λ I | < |z| we use the first estimate and for |λ I | > |z| we use the estimate (4.74). Inserting the above estimates into (4.72) yields that the integral is bounded from above, since |z| > 0 and 4m − n − 1 > 0. Thus we find that the asymptotic series expansion of r can be integrated, since each term is finite for Re {s} > S R and | arg s| < π/2. Furthermore, the λ-integral of r ;−m , which is homogeneous of order −m, results in b ;−m+1 , which is homogeneous of order −m + 1. We hence have a well defined polyhomogeneous asymptotic expansion of a pseudodifferential operator with a parameter, of homogeneous degree 0 in (ξ ′ , s); the corresponding operator is represented in the usual way through an oscillatory integral.
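The homogeneity count for I n,m can be sketched with the dominating model integral from the estimates above: I(n, m; |z|) = ∫ |t| n /(|z| 2 + t 2 ) 2m dt scales like |z| n−4m+1 , matching the claimed degree of each integrated term. (A toy model of the bounds, not the paper's exact integrand.)

```python
import numpy as np

def I(n, m, z, T=100.0, N=200_001):
    # Truncated model integral; the grid scales with z so that the
    # homogeneity relation is exact even for the truncated range.
    t = np.linspace(-T * z, T * z, N)
    f = np.abs(t) ** n / (z ** 2 + t ** 2) ** (2 * m)
    return np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0

# Homogeneity: I(n, m; eta*z) = eta^(n - 4m + 1) * I(n, m; z).
for n, m in [(0, 1), (2, 1), (3, 2)]:
    ratio = I(n, m, 2.0) / I(n, m, 1.0)
    assert abs(ratio - 2.0 ** (n - 4 * m + 1)) < 1e-9
```

Since 4m − n > 1 in the non-principal cases, each such term gains one order of homogeneity under the λ-integration, as stated.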
One can use the residue theorem to evaluate the integrals in terms of the roots of the equation det α ;1 = 0. This is done for arbitrary I n,m , 4m − n > 1, in Appendix C.
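A symbolic sketch of the residue evaluation, with a hypothetical set of λ-roots in conjugate pairs, two on each side of the imaginary axis: closing the contour picks up the residues on one side, and as a consistency check the residues of a rational function whose denominator degree exceeds the numerator degree by at least two sum to zero over all poles.

```python
import sympy as sp

lam = sp.symbols('lam')
# Hypothetical lambda-roots, two with negative and two with positive
# real part (illustrative values only).
roots = [-1 + 2 * sp.I, -1 - 2 * sp.I, 3 + sp.I, 3 - sp.I]
expr = lam ** 2 / sp.prod([lam - r for r in roots])

res_left = sum(sp.residue(expr, lam, r) for r in roots if sp.re(r) < 0)
res_all = sum(sp.residue(expr, lam, r) for r in roots)

assert sp.simplify(res_all) == 0            # degree gap >= 2
assert sp.simplify(sp.im(res_left)) == 0    # conjugate pair: real sum
```

The same computation, with λ-derivatives for repeated roots, is what produces the explicit formulas of Appendix C.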
We have above found an oscillatory integral representation of the desired operator B through the λ-integral of the symbol expansion of the resolvent. Its principal symbol is the λ-integral of r ;−1 , where as usual r ;−1 := α −1 ;1 = (a − λI) −1 . One question remains in order to associate B with the λ-integral of A −1 s,λ . It can be reduced to a question of the order of iterated integrals. Towards this end we use an alternative representation of the λ-integral, based on an identity which is similar to the (first) resolvent equation. Denote the right-hand side of this identity by w ;−1 (λ, ξ ′ ; s, x). Clearly the identity holds also if a ;1 is replaced with the operator A. By analyticity of the resolvent we can choose to integrate along the positive imaginary axis, i.e., λ R = 0. Let u be an arbitrary vector in H 1 , and consider the two integrals V 1 and V 2 . Here we have once again used F to denote the Fourier transform with respect to x ′ and F −1 ξ ′ →x ′ to denote the inverse Fourier transform from ξ ′ to x ′ variables. The first integral is the standard way of representing the action of the principal part of B ;0 on u, that is, B ;0 u = V 1 . The second integral V 2 is the λ-integral of the first term of the parametrix corresponding to A s,λ . We thus have two, possibly different, representations of an operator. Below we will show that the two representations are equal. For the principal term the problem is reduced to showing that the two iterated integrals exist and are equal, i.e., that V 1 = V 2 . We have the following result. Lemma 2.1. Let u ∈ H 1 ; then for Re {s} > S R > 0 and | arg s| < π/2 it follows that V 1 = V 2 . This result is proved after the proof of Proposition 2, part 2. We have defined the operator B as the oscillatory integral of the λ-integral of the symbol representation of the resolvent expansion, and shown above that such an operator exists. The desired splitting matrix is however the λ-integral over the oscillatory integral over the resolvent expansion.
The above lemma shows that the principal terms of both these expressions are equal for functions on a dense set in the domain. To continue and show that the remaining terms in the respective symbol expansions are equal, we can once again construct two iterated integrals and apply the proof of Lemma 2.1, i.e., the Fubini theorem, to each term; since all the assumptions carry over, the result remains the same.
Hence we have shown that the two representations of the splitting matrix indeed are equal and can be applied to the wave-splitting procedure below.

Proof of Proposition 2, part 2
The symbol of B for the isotropic homogeneous medium case is given in (B.12) (see Appendix B) and by counting its powers of ξ ′ it follows that the corresponding operator B can be restricted to an unbounded operator on {L 2 , (·, ·) 0 } with domain H 1 and range in H 0 .
To show that this result holds also in general we need to show that the ξ ′ -growth in each of the b-terms is at most linear. To obtain such a result we need good control of the shape of the parametrix r of A s,λ , which is an asymptotic series of polyhomogeneous terms r = r ;−1 + r ;−2 + r ;−3 + . . .. Recall (see e.g., [32]) that r ;−1 := α −1 ;1 , and that in the recursion η ∈ N 2 , i.e., a multi-index, D xj = (1/i) ∂ xj , and if η = (η 1 , η 2 ), then D η x = D η1 x1 D η2 x2 . Let the linear space of homogeneous polynomials of order n be denoted by hp n . The explicit shape of det α ;1 in Appendix C ensures that det α ;1 ∈ hp 4 (s, λ, ξ ′ ; x). Here we use hp n (s, λ, ξ ′ ; x) to indicate that det α ;1 is a homogeneous polynomial in s, λ, ξ ′ , with C ∞ -coefficients depending on x. Given the homogeneous polynomials p 1 ∈ hp n , p 2 ∈ hp m we can consider the space hq n−m of homogeneous rational functions, with elements of the form q = p 1 /p 2 , q ∈ hq n−m . We will restrict hq n even further and require that p 2 is a power of det α ;1 . We note two useful properties: Let q ∈ hq n (s, λ, ξ ′ ; x); then D η x q ∈ hq n (s, λ, ξ ′ ; x), and if q 1 ∈ hq n , q 2 ∈ hq m then q 1 q 2 ∈ hq n+m . In addition to these two spaces we need two additional spaces. The first is a space of block-diagonal matrices hP n , where the two 2 × 2 blocks have elements which are homogeneous polynomials of order n. That is, if p ∈ hP n , then p = diag 2 (P 1 , P 2 ), where P 1 and P 2 are 2 × 2 blocks with each element (P m ) ij ∈ hp n , for m = 1, 2 and i, j = 1, 2. Clearly for g ∈ hP n (λ, ξ ′ ; x) and h ∈ hP m (λ, ξ ′ ; x) we have hg, gh ∈ hP m+n (λ, ξ ′ ; x) and D η x g ∈ hP n (λ, ξ ′ ; x). The second and final space is W −m ; an element h −m is in W −m if it can be written in the form (4.82), where q (m,k,n) j ∈ hq j for a given m, k = 0, . . . , 4m, and n ∈ N k , and g (m,k,n) j ∈ hP j for a fixed m, k = 0, . . . , 4m, and all n. The matrix K is given in (3.12).
Here we have used · (m,k,n) as a way to index the components of h −m . We require the number of elements for each (m, k)-level to be finite, i.e., |N k | < ∞ where N k ⊂ N. We find the convenient properties that if h ∈ W −m then D η x h ∈ W −m , and if in addition g ∈ W −n then gh ∈ W −n−m . Note that the representation (4.82) is not unique, because the numerator of q (m,·,·) −4m may contain a power of s. This non-uniqueness of the representation will be used constructively in the proof of the lemma below.
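The two closure properties of hq n can be checked symbolically. Below, p 1 and p 2 are toy homogeneous polynomials with smooth x-dependent coefficients; p 2 is a degree-4 stand-in for det α ;1 (illustrative choices, not the paper's symbols).

```python
import sympy as sp

x, s, lam, xi, eta = sp.symbols('x s lam xi eta', positive=True)
p1 = (1 + x) * s ** 2 * xi + (1 + x ** 2) * lam ** 3     # in hp_3(s, lam, xi; x)
p2 = s ** 4 + (1 + x ** 2) * lam ** 2 * xi ** 2          # in hp_4(s, lam, xi; x)
q = p1 / p2                                              # in hq_{-1}

def is_homog(expr, deg):
    # expr is homogeneous of degree `deg` in (s, lam, xi) iff scaling all
    # three variables by eta multiplies expr by eta**deg.
    scaled = expr.subs({s: eta * s, lam: eta * lam, xi: eta * xi},
                       simultaneous=True)
    return sp.simplify(sp.cancel(scaled - eta ** deg * expr)) == 0

assert is_homog(q, -1)
assert is_homog(sp.diff(q, x), -1)      # D_x q stays in hq_{-1}
assert is_homog(q * q, -2)              # degrees add under products
```

Differentiation in x only touches the smooth coefficients, which is why the homogeneity degree in (s, λ, ξ ′ ) is preserved.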
We have the following technical lemma. Furthermore, for a fixed m denote the 2 × 2 block-diagonal elements of g (m,0,n) 4m by P 1n , P 2n ; then P 1n = u T w 1n and P 2n = v T w 2n , where each of the elements in the (1,2)-vectors w jn is in hp 4m−1 (λ, ξ ′ ; x) and |N k | < ∞ for each k.
(4.85) Similarly let P 3 , P 4 denote the diagonal 2 × 2 blocks of g (2,0,0) 8 ; then each of these terms is of the form u T w 3 and v T w 4 respectively, where the w k are (1,2)-vectors with each element in hp 7 (λ, ξ ′ ; x).
The construction of r ;−m−1 in (4.81) is a product of finitely many terms; we consequently find that r ;−m−1 has at most a finite number of terms. This ensures that |N k | < ∞ for all k.
To show that the lemma is valid for an arbitrary m ≥ 3 we make the recursive assumption that r ;−m ∈ W −m and r ;−m+1 ∈ W −m+1 with the desired outer-product structure on their respective leading matrices g for the respective range of j and k.
We now calculate r ;−m−1 from (4.81) and show that it satisfies the lemma. Towards this end we consider the following three 4 × 4 matrices: c = diag 2 (c 1 , c 2 ) ∈ hP 0 (ξ ′ ; x), f = diag 2 (f 1 , f 2 ) ∈ hP 0 (ξ ′ ; x), and d. The matrix d is defined by the remaining term. Indeed, the terms containing a ;0 and ∂ η ξ a ;1 with |η| = 1 belong to the kind in (4.86), and the terms containing a ;−1 , ∂ η ξ a ;0 , |η| = 1 and ∂ η ξ a ;1 , |η| = 2 are of the kind (4.87). The lemma follows if we can show that the resulting products of (4.86) and (4.87) are elements in W −m−1 with the desired outer-product structure on the leading order terms. The three terms containing c, d and f respectively are considered separately. For the first term we note that ch 1 ∈ W −m . Since α −1 ;1 ∈ W −1 we immediately find that α −1 ;1 ch 1 ∈ W −m−1 . The outer-product structure on the leading term survives since the leading order term in α −1 ;1 is of the form (4.85). Consider the second term, containing d. We explicitly write out the leading order elements in α −1 ;1 (1/s) dKh 1 . Here we let g (m,k,·) · denote the k:th element in h 1 . The first of these terms, p m+2 , vanishes: the matrices in the product all have an outer-product structure explicitly given above, and from the observations that uv T = 0 and vu T = 0 we find that p m+2 = 0. To show that s −m−1 p m+1 + p 0 ∈ W −m−1 , consider the explicit expression, where we have once again used the notation (4.89). Multiplying block-diagonal matrices, all with elements which are homogeneous polynomials, by other block-diagonal matrices yields that g ∈ hq −4m−4 . This suffices for s −m−1 p m+1 to be a leading term of an element in W −m−1 . By similar matrix algebra for a typical term in p 0 , for a given s j -order, we find that each such term fits into a W −m−1 element. The remaining issue for the d-containing terms is the outer-product structure of p m+1 . Similarly to the c-terms it is clear that M 1n has the appropriate outer-product structure.
The term M 2jk is a bit more subtle, and we need to use the outer-product structure of each of the three matrices. There are two types of terms in g (1,1,j) 3 = p 3 + p 4 , where p 3 := diag 2 (p 1 I, p 2 I) with p 1 , p 2 ∈ hp 3 (λ, ξ ′ ; x), and where the p jk are (1,2)-vectors with each element in hp 2 (λ, ξ ′ ; x). The p 3 -term in M 2jk has the desired outer-product structure. To obtain this result we have repeatedly used that uv T = 0 and vu T = 0. Similarly for the p 4 -term, which once again has the appropriate outer-product structure. We have hence shown that the terms containing d are elements in W −m−1 with the desired outer-product structure. The last kind of terms are those which contain f . These terms are of the form given above. The leading order term q m+1 is explicitly Kf Kg ∈ hP 4m . Observe however that q 0 / det α ;1 ∈ hq −4m−4 and that det α ;1 = p 0 (λ, ξ ′ ; x) + s 2 p 1 (λ, ξ ′ ; x) + s 4 p 2 (λ, ξ ′ ; x), where p 0 ∈ hp 4 , p 1 ∈ hp 2 and p 2 ∈ hp 0 . We note that g 0 p 0 ∈ hP 4m+4 and that terms of the form s 2j−m−1 g 0 p j , j = 1, 2, fit nicely as lower order terms in W −m−1 . Similarly we can consider the terms of q 0 by explicitly calculating the typical s j -order terms and see that each such term fits into W −m−1 , to finally draw the conclusion that α −1 ;1 (1/s) f Kh 2 ∈ W −m−1 and consequently that r ;−m−1 ∈ W −m−1 . The outer-product structure of the f -terms follows directly from (4.94) and the fact that g (1,0,0) 4 has the appropriate outer-product form.
We can now by a recursion argument draw the conclusion that the lemma is valid for all m ≥ 1.
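The outer-product algebra driving this recursion can be sketched numerically: with 1 × 2 row vectors u, v satisfying uv T = 0, blocks of the form u T w multiply according to (u T w 1 )(v T w 2 ) = (w 1 v T ) u T w 2 , and products whose middle contraction is vu T vanish. (u, v, w 1 , w 2 below are hypothetical numerical stand-ins.)

```python
import numpy as np

u = np.array([[1.0, 2.0]])      # 1x2 row vectors with u v^T = 0
v = np.array([[-2.0, 1.0]])
w1 = np.array([[3.0, -1.0]])
w2 = np.array([[0.5, 4.0]])

assert (u @ v.T).item() == 0.0            # orthogonality of u and v

P1, P2 = u.T @ w1, v.T @ w2               # 2x2 outer-product blocks
scalar = (w1 @ v.T).item()                # the middle contraction w1 v^T

# Products of outer-product blocks collapse to scalar multiples:
assert np.allclose(P1 @ P2, scalar * (u.T @ w2))

# A product whose middle contraction is v u^T = 0 vanishes identically,
# as for the term p_{m+2} above:
assert np.allclose((u.T @ v) @ (u.T @ w2), 0.0)
```

This collapse is what keeps the leading blocks of each r ;−m−1 in the stated outer-product form.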
To show that the operator corresponding to (b ;−m ) maps H n → H n−1 , it suffices to show that (b ;−m ) jk ∈ S 1 1,0 , the space of symbols first defined by Hörmander. This means that we need to show the estimate (4.95) for η, β ∈ N 2 . From the properties of W −m we know that ∂ β x r ;−m ∈ W −m for (s, λ) ∈ Q 1 . To show (4.95) for η = (0, 0) we recall that when (s, λ) ∈ Q 1 , det α ;1 is elliptic in the sense of (4.11). This implies that a typical term of r ;−m can be bounded as in (4.96). The leading orders k = 0 have the outer-product structure, and we can therefore apply (4.96) to find that the λ-integral of these terms grows at most linearly in ξ ′ for large values of ξ ′ . Applying the derivative ∂ η ξ ′ to an element in r ;−m , we find that this reduces the growth in ξ ′ by order |η| for the corresponding b ;−m+1 symbol, since the integrand is a sum of rational functions: a sum of polynomials in s, ξ ′ , λ divided by powers of det α ;1 , both with smooth coefficients. The partial derivative is hence a bounded or more regular function in ξ ′ , and due to the ellipticity of det α ;1 and Re {s} ≥ S R there exists a λ-integrable, ξ ′ -independent dominating function, so that we can apply the dominated convergence theorem to show that the interchange of the λ-integral and ∂ η ξ ′ is allowed. It is clear that the partial derivative exists everywhere because the elements of r ;−m are rational functions. We thus find that (b ;−m+1 ) jk ∈ S 1 1,0 and the corresponding operator maps H n to H n−1 as desired. This result is valid for any m, and hence we have the same result for B.
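The S 1 1,0 behaviour can be illustrated with a toy scalar symbol (b below is a hypothetical example, not one of the b ;−m ): it grows linearly in ξ at infinity, and each ξ-derivative lowers the growth by at least one order, as (4.95) requires.

```python
import sympy as sp

xi = sp.symbols('xi', positive=True)
b = xi ** 2 / sp.sqrt(xi ** 2 + 1)        # toy order-1 symbol: b ~ xi

assert sp.limit(b / xi, xi, sp.oo) == 1              # growth of order 1
assert sp.limit(sp.diff(b, xi), xi, sp.oo) == 1      # first derivative: order 0
assert sp.limit(xi * sp.diff(b, xi, 2), xi, sp.oo) == 0   # second: order <= -1
```

For the actual b ;−m the same decay-per-derivative count follows from the rational-function structure over powers of det α ;1 .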

Proof of Lemma 2.1
Showing this result on iterated integrals is a standard application of Fubini's theorem for positive integrands, see e.g. [26]. It states that a non-negative measurable function, with respect to the usual Lebesgue measure over the λ, ξ ′ -domain, has its iterated integrals equal and finite if one of them is finite. We apply this to the iterated integrals V 1 and V 2 . We split the integrand into signed non-negative parts, g = g R+ − g R− + i(g I+ − g I− ), where g k ≥ 0 for each k ∈ {R+, R−, I+, I−}. We consider the iterated integral over each of these non-negative functions separately. In order to show that g k for any k ∈ {R+, R−, I+, I−} is measurable, it is enough to note that w ;−1 is continuous in both λ and ξ ′ , and so is e iξ ′ ·x ′ ; their respective restrictions, e.g., the real non-negative part (or any of the other combinations), are also measurable since they are piecewise continuous. From the assumptions of the lemma we find that u ∈ L 2 and is hence ξ ′ -measurable; consequently, by trivially extending u to a function on the product space, it is measurable in both λ and ξ ′ jointly. We then utilize that products of measurable functions are measurable. Consequently we know that the parts of the integrals V 1 and V 2 corresponding to, e.g., the real positive part exist and are equal. To finish the proof of the lemma we note that the operator corresponding to b ;−m maps H 1 to L 2 ; this was shown in the previous section. Hence the lemma follows.

Proof of Proposition 2, parts 3 and 4
That B and A commute, and that B • B = I, can be shown by introducing two projectors defined from the λ-integral over the resolvent: B can be shown to be the difference of these two projectors, and the sum of the projectors equals the identity. The square of the operator is hence the identity operator. The key to this proof is to show that the two operators P ± := 1/2 (I ± • B) are projectors; this is done by utilizing the (first) resolvent equation. A proof of the projector properties is detailed in [22], Proposition 2, parts 3 and 4, for the acoustic case. The electromagnetic case follows analogously and, since it is somewhat lengthy, we do not repeat it here. Once this is known we note that P + + P − = I, P − P + = 0, etc., on a core set. Consequently, on the operator level, the corresponding identities hold.
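A finite-dimensional sketch of parts 3 and 4: for a matrix A with no eigenvalues near the imaginary axis, the resolvent integral reduces to the matrix sign function B; then B commutes with A, B 2 = I, and P ± = (I ± B)/2 are complementary projectors. (A below is a hypothetical diagonalizable 4 × 4 example, not the electromagnetic system's matrix.)

```python
import numpy as np

# Hypothetical eigenvector matrix S and eigenvalues D, two on each side
# of the imaginary axis (illustrative values).
S = np.array([[1.0, 1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])
D = np.diag([2.0, 3.0, -1.0, -4.0])
A = S @ D @ np.linalg.inv(S)
B = S @ np.diag(np.sign(np.diag(D))) @ np.linalg.inv(S)   # matrix sign of A

I4 = np.eye(4)
assert np.allclose(A @ B, B @ A)        # part 3: commutation
assert np.allclose(B @ B, I4)           # part 4: involution

Pp, Pm = (I4 + B) / 2.0, (I4 - B) / 2.0
assert np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)
assert np.allclose(Pp + Pm, I4) and np.allclose(Pp @ Pm, 0.0)
```

The operator-theoretic proof replaces the eigendecomposition by the resolvent integral, but the algebraic identities are the same.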

Proof of Proposition 2, part 5
Let γ be a scalar and find all (γ, ·) such that (4.100) holds. Writing (4.100) explicitly with 2 × 2 blocks and collecting similar terms yields γ = ±1, since • L ± is assumed to be non-zero. The corresponding eigenvectors are obtained by solving the resulting linear system. Comparison with (4.110) gives that the generalized eigenvectors have the form (4.113), for arbitrary normalization operators. Hence, with use of (4.114), we find that • L ± is also a solution to (4.111). An alternative form of • L ± is obtained if we use (4.115) and (4.111).
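The eigenvalue-eigenvector step can be sketched in finite dimensions: since B is an involution, B(I ± B) = ±(I ± B), so the first block columns of I ± B — built from the blocks B 11 and B 21 only, with no block inverted — provide (generalized) eigenvectors for γ = ±1. (B below is a hypothetical 4 × 4 involution; S is an illustrative eigenvector matrix.)

```python
import numpy as np

S = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 0.0, 2.0, 0.0],
              [0.0, 1.0, 0.0, 2.0]])
B = S @ np.diag([1.0, 1.0, -1.0, -1.0]) @ np.linalg.inv(S)

I4 = np.eye(4)
assert np.allclose(B @ B, I4)        # B is an involution

Hp = (I4 + B)[:, :2]                 # block column [I + B11; B21]
Hm = (I4 - B)[:, :2]                 # block column [I - B11; -B21]
assert np.allclose(B @ Hp, Hp)       # gamma = +1
assert np.allclose(B @ Hm, -Hm)      # gamma = -1
```

The identities B(I ± B) = ±(I ± B) hold for any involution, which is why no inversion of a block of the splitting matrix is needed at this stage.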

Proof of Proposition 2, part 6
That • B is one-to-one follows directly from the fact that • B is an involution. Indeed for any

Proof of Proposition 2, part 7
We now extend • L ± to a larger domain. This is done by using ( B| q ) 11 and ( B| q ) 21 in the place of • B 11 and • B 21 in (4.113), the generalized eigenvector; together with the extension of • N ± to a bounded invertible operator N ± : H q (R 2 , C 2 ) → H q (R 2 , C 2 ), we obtain a generalization of • L ± , where the domain of L ± | q follows from the domain of B| q .

Directional decomposition
We have above collected enough information to proceed and answer the initial question about the existence of {L, V}, i.e., does the decomposition of A exist. Most of the proofs are carried out on the set C ∞ (R 2 , C 4 ), but in the end we extend the results to the general case.
With the definition of the splitting matrix in Proposition 2, in particular the commutation between the splitting matrix and the electromagnetic system's matrix (see Proposition 2, part 3), we obtain the decomposition by the following proposition.

Proposition 3. The equation
has a solution where the columns of L| q are the generalized eigenvectors L ± | q to B| q for q = 1, 3, and where V is a block diagonal matrix with the elements S ± , representing a generalization of the vertical wave number, and where N | q = N + | q = N − | q , q = 1, 3. Then, upon eliminating • S ± , one finds that establishing the decomposition is equivalent to solving the equation

Proof of Proposition 3
To show that L ± | q decomposes A, we begin with the proof on C ∞ (R 2 , C 4 ). By Proposition 2, parts 3 and 5, we have • BH ± = ±H ± , (5.5) and from Proposition 2, part 5, we know that for some arbitrary normalization operator, M ± , the corresponding representation holds. From (5.9) we obtain that the ranges of the left- and right-hand sides have to agree; thus the left-hand side is within the range of • B 21 , hence the inverse is defined on this range, so • S ± is well defined, and it is also the generalized eigenvalue to A. To see that (5.8) gives the same result, we use two of the equations implied by the commutation of • B and A. We rewrite (5.8) and apply (5.11) to obtain an intermediate identity. Applying ±I − • B 11 to both sides and using (4.101) and (4.102), and then (5.12), gives an equation for • S ± that is equivalent to (5.9). Thus we have shown that the two expressions (5.8) and (5.9) are equivalent and that (5.10) is the solution to both.
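A finite-dimensional sketch of the decomposition: the columns H ± of the composition matrix L, taken from the block columns of I ± B for an involution B commuting with A, block-diagonalize A, i.e., A L = L V with V = diag(S + , S − ). (A below is a hypothetical 4 × 4 stand-in for the system's matrix.)

```python
import numpy as np

# Illustrative eigenvector matrix S and eigenvalues D with a spectral
# gap about the imaginary axis (hypothetical values).
S = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 0.0, 2.0, 0.0],
              [0.0, 1.0, 0.0, 2.0]])
D = np.diag([2.0, 3.0, -1.0, -4.0])
A = S @ D @ np.linalg.inv(S)
B = S @ np.diag(np.sign(np.diag(D))) @ np.linalg.inv(S)   # splitting matrix

I4 = np.eye(4)
L = np.hstack([(I4 + B)[:, :2], (I4 - B)[:, 2:]])   # composition matrix [H+ | H-]
V = np.linalg.solve(L, A @ L)                       # V = L^{-1} A L

assert np.allclose(V[:2, 2:], 0.0) and np.allclose(V[2:, :2], 0.0)  # block diagonal
assert np.allclose(np.sort(np.linalg.eigvals(V[:2, :2]).real), [2.0, 3.0])
assert np.allclose(np.sort(np.linalg.eigvals(V[2:, 2:]).real), [-4.0, -1.0])
```

The diagonal blocks of V play the role of S ± , the generalized vertical wave numbers of the two one-way equations.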
Before we extend (5.7) to the general case, we introduce the corresponding matrix operators; furthermore, we may rewrite this equation into a form where, by the definition of B| q , we know that L| q is well defined; the choice of L| 1 follows since L| 1 : H 1 → H 0 . To extend the domain to a larger set, let {F n } ∞ n=1 ⊂ C ∞ (R 2 , C 4 ) be a Cauchy sequence. Then for fixed η > 0 and large enough n the required bound holds.

Discussion of the result

By applying functional analysis to the problem of decomposition of the wave field for the electromagnetic system's matrix we have extended the wave-splitting procedure to anisotropic media whose properties vary with all three spatial coordinates. The result extends beyond the up/down symmetric case. The analysis of the spectrum shows that a strip around the imaginary axis is in the resolvent set. We define a resolvent integral whose contour lies in this strip. Due to the explicit form of the system's matrix, we do not have a full spectral resolution of the operator. Still, the resolvent integral over a path in the resolvent strip is shown to be well defined by applying the elliptic theory of pseudodifferential operators with parameters. Using this resolvent integral we define a splitting matrix. This matrix has the feature that one can construct its generalized eigenvectors, with operator entries, corresponding to the (generalized) eigenvalues ±1. We have above shown that the splitting matrix commutes with the electromagnetic system's matrix. One consequence of this 'commutation' of the operators is that the generalized eigenvectors of the splitting matrix are also generalized eigenvectors of the electromagnetic system's matrix. The corresponding generalized eigenvalue of the system's matrix (a 2 × 2 matrix operator) is the key ingredient in the definition of one-way equations for the electromagnetic case. This 'eigenvalue' is the electromagnetic generalization of the vertical wave number obtained in the linear acoustic case.
One of the features of this procedure of decomposition is that we have constructed the composition operator without having to invert any of the elements of the splitting matrix: the construction relies on the fact that the splitting matrix is an involution, in contrast to the corresponding acoustic case. However, this result can also be carried over to the acoustic case. The removal of the inverse of an element of the splitting matrix from the splitting process is not complete: it remains in the one-way equation obtained after the splitting, even though we have been able to remove it from the composition operator. The generalized eigenvectors of the splitting matrix are used to generate the composition matrix that decomposes the electromagnetic system's matrix.
The traditional approach to the decomposition of the system's matrix gives an algebraic Riccati operator equation. The splitting matrix construction of the decomposition gives us a family of solutions to this operator equation in terms of the elements of an integral over the resolvent of the electromagnetic system's matrix.
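A finite-dimensional analogue of this Riccati connection can be sketched as follows. If the columns [X 1 ; X 2 ] span an invariant subspace of a block matrix A = [[A 11 , A 12 ], [A 21 , A 22 ]], then the impedance-type map L = X 2 X 1 −1 solves the block Riccati equation A 21 + A 22 L − L A 11 − L A 12 L = 0; this standard invariant-subspace form is assumed here as the analogue of the operator equation.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))          # hypothetical block system matrix

# A two-dimensional invariant subspace spanned by eigenvectors.
w, Vec = np.linalg.eig(A)
idx = np.argsort(w.real)[-2:]
X = Vec[:, idx]
X1, X2 = X[:2, :], X[2:, :]
L = X2 @ np.linalg.inv(X1)               # impedance-type map

A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
residual = A21 + A22 @ L - L @ A11 - L @ A12 @ L
assert np.linalg.norm(residual) < 1e-7
```

Choosing the other invariant subspace gives a second solution, mirroring the family of solutions produced by the resolvent-integral construction.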
Once we have obtained the wave decomposition, we can proceed and use the one-way representation to study direct problems, or apply the generalized Bremmer coupling series to study direct and inverse scattering problems.
hence this can be removed by eliminating common factors in the sum of the two residues, similarly to (4.62) and (4.63).
For the case λ + 1 = λ + 2 we obtain that the two residues collapse to one and become Res( λ n /(det α ;1 ) m ; λ + 1 ) = Σ p=0 n ( n p )