Linearized internal functionals for anisotropic conductivities

This paper concerns the reconstruction of an anisotropic conductivity tensor in a second-order elliptic equation from knowledge of so-called power density functionals. This problem finds applications in several coupled-physics medical imaging modalities, such as ultrasound-modulated electrical impedance tomography and impedance-acoustic tomography. We consider the linearization of this nonlinear hybrid inverse problem. We find sufficient conditions for the linearized problem, a system of partial differential equations, to be elliptic, and for the system to be injective. These conditions hold for fewer measurements than those required in recently established explicit reconstruction procedures for the nonlinear problem.


1. Introduction. In the context of hybrid medical imaging methods, a physical coupling between a high-contrast modality (e.g. Electrical Impedance Tomography, Optical Tomography) and a high-resolution modality (e.g. acoustic waves, Magnetic Resonance Imaging) is used in order to benefit from the advantages of both. Without this coupling, the high-contrast modality, usually modeled by an inverse problem involving the reconstruction of the constitutive parameter of an elliptic PDE from knowledge of boundary functionals, leads to a severely ill-posed mathematical problem and suffers from poor resolution. The analysis of this coupling usually involves a two-step inversion procedure in which the high-resolution modality provides internal functionals, from which we reconstruct the parameters of the elliptic equation, thus leading to improved resolution [2,5,14,17,22].
A problem that has received a lot of attention recently concerns the reconstruction of the conductivity tensor γ in the elliptic equation

∇ · (γ∇u) = 0 in X, (1)

from knowledge of internal power density measurements of the form ∇u · γ∇v, where u and v both solve (1) with possibly different boundary conditions. This problem is motivated by a coupling between electrical impedance imaging and ultrasound imaging, and also finds applications in thermo-acoustic imaging. Explicit reconstruction procedures for the above nonlinear problem have been established in [9,6,20,19,17], successively in the 2D, 3D, and nD isotropic case, and then in the 2D and nD anisotropic case. In these articles, the number of functionals may be quite large. The analyses in [17] were recently summarized and pushed further in [18]. If one decomposes γ into the product of a scalar function τ = (det γ)^{1/n} and a scaled anisotropic structure γ̃ such that det γ̃ = 1, the latter reference establishes explicit reconstruction formulas for both quantities, with Lipschitz stability for τ in W^{1,∞} and with the loss of one derivative for γ̃.
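The scalar/anisotropic splitting τ = (det γ)^{1/n}, γ̃ = γ/τ used throughout can be sketched in a few lines (a hypothetical helper, not from the paper):

```python
import numpy as np

def split_conductivity(gamma):
    """Split an SPD tensor into gamma = tau * gamma_tilde with det(gamma_tilde) = 1.

    tau = det(gamma)^(1/n) is the scalar part, gamma_tilde the scaled
    anisotropic structure. Function and variable names are ours.
    """
    n = gamma.shape[0]
    tau = np.linalg.det(gamma) ** (1.0 / n)
    return tau, gamma / tau

# a random symmetric positive-definite tensor as test data
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
gamma = B @ B.T + 3 * np.eye(3)

tau, gamma_tilde = split_conductivity(gamma)
assert abs(np.linalg.det(gamma_tilde) - 1.0) < 1e-10
assert np.allclose(tau * gamma_tilde, gamma)
```
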
In the isotropic case, several works study the above problem in the presence of fewer functionals. The case of one functional is addressed in [4], whereas numerical simulations show good results with two functionals in dimension n = 2 [1,12]. Theoretical and numerical analyses of the linearized inverse problem are considered in [15,16]. The stabilizing nature of a class of internal functionals containing the power densities is demonstrated in [16] using a microlocal analysis of the linearized inverse problem. The above inverse problem is recast as a system of nonlinear partial differential equations in [3] and its linearization analyzed by means of the theory of elliptic systems of equations. It is shown in the latter reference that n + 1 functionals, where n is the spatial dimension, suffice to reconstruct a scalar coefficient γ with elliptic regularity, i.e., with no loss of derivatives, from power density measurements. This was confirmed by two-dimensional simulations in [7]. All known explicit reconstruction procedures require knowledge of a larger number of internal functionals.
In the present work, we study the linearized version of this inverse problem in the anisotropic case: we write an expansion of the form γ^ε = γ_0 + εγ, with γ_0 known and ε ≪ 1, and study the reconstructibility of γ from linearized power densities (LPD). We first proceed by supporting the perturbation γ away from the boundary ∂X, analyze microlocally the symbol of the linearized functionals, and show that, as in [16], a large enough number of functionals allows us to construct a left parametrix and set up a Fredholm inversion. The main difference between the isotropic and anisotropic settings is that the anisotropic part of the conductivity is reconstructed with the loss of one derivative. Such a loss of one derivative is optimal since our estimates are elliptic in nature. It is reminiscent of results obtained for a similar problem in [8].
Secondly, we show how the explicit inversion approach presented in [17,18] carries through linearization, thus allowing for reconstruction of fully anisotropic tensors supported up to the boundary of X. In this case, we derive reconstruction formulas that require a smaller number of power densities than in the non-linear case, giving possible room for improvement in the non-linear inversion algorithms.
For additional information on hybrid inverse problems in other areas of (mostly medical) imaging, we refer the reader to, e.g., [2,5,21,22].

We set boundary conditions (g_1, . . . , g_m) and call u_i^ε the unique solution to (1) with u_i^ε|_{∂X} = g_i, 1 ≤ i ≤ m, and conductivity γ^ε. We consider the measurement functionals

H_{ij}(γ^ε) := ∇u_i^ε · γ^ε∇u_j^ε, 1 ≤ i, j ≤ m. (2)
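At a fixed point, collecting the gradients ∇u_i as columns of a matrix [∇U], the power densities form the symmetric positive semi-definite matrix [∇U]^T γ [∇U]. A tiny numerical illustration (random stand-ins for the gradients; names are ours):

```python
import numpy as np

rng = np.random.default_rng(8)
B = rng.standard_normal((3, 3))
gamma = B @ B.T + 3 * np.eye(3)        # an SPD conductivity at a fixed point
gradU = rng.standard_normal((3, 4))    # columns: gradients of m = 4 solutions

H = gradU.T @ gamma @ gradU            # H_ij = grad u_i . gamma grad u_j
assert np.allclose(H, H.T)             # power densities are symmetric in (i, j)
assert np.all(np.linalg.eigvalsh(H) >= -1e-10)   # and positive semi-definite
```
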

Considering an expansion of the form γ^ε = γ_0 + εγ, where the background conductivity γ_0 is known and uniformly elliptic, and ε is so small that the total γ^ε remains uniformly elliptic, we first look for the Fréchet derivative of (2) with respect to γ at γ_0. Expanding the solutions u_i^ε accordingly as u_i^ε = u_i + εv_i + O(ε²), where

∇ · (γ_0∇u_i) = 0 in X, u_i|_{∂X} = g_i, (3)

the correctors v_i solve

∇ · (γ_0∇v_i) = −∇ · (γ∇u_i) in X, v_i|_{∂X} = 0. (4)

The measurements then take the form

H_{ij}(γ^ε) = H_{ij}(γ_0) + ε dH_{ij}(γ) + O(ε²), (5)

with

dH_{ij}(γ) = ∇u_i · γ∇u_j + ∇v_i · γ_0∇u_j + ∇u_i · γ_0∇v_j, (6)

where the v_i's are linear functions of γ according to (4). In both subsequent approaches, reconstruction formulas are established under the following two assumptions on the behavior of solutions associated with the reference conductivity γ_0. The first hypothesis asks for a basis of gradients of solutions of (3) over a certain subset Ω ⊆ X.
Hypothesis 2.1. For an open set Ω ⊆ X, there exist (g_1, . . . , g_n) ∈ H^{1/2}(∂X)^n such that the corresponding solutions (u_1, . . . , u_n) of (3) with boundary conditions u_i|_{∂X} = g_i have gradients (∇u_1, . . . , ∇u_n) forming a basis of R^n at every point of Ω.

Once Hypothesis 2.1 is satisfied, any additional solution u_{n+1} of (3) gives rise to an n × n matrix Z, defined in (7). As seen in [17,18], such matrices can be computed from the power densities {∇u_i · γ_0∇u_j}_{i,j=1}^{n+1} and help impose orthogonality conditions on the anisotropic part of γ_0. Once enough such conditions are obtained by considering enough additional solutions, the anisotropy is reconstructed explicitly via a generalization of the usual cross-product defined in three dimensions. In the linearized setting, we find that one additional solution such that Z has full rank is enough to reconstruct the linear perturbation γ. We thus formulate our second crucial assumption:

Hypothesis 2.2. Assume that Hypothesis 2.1 holds over some fixed Ω ⊆ X. There exists g_{n+1} ∈ H^{1/2}(∂X) such that the solution u_{n+1} of (3) with boundary condition u_{n+1}|_{∂X} = g_{n+1} has a full-rank matrix Z (as defined in (7)) over Ω.

GUILLAUME BAL, CHENXI GUO AND FRANÇOIS MONARD
Remark 1 (Case γ_0 constant). In the case where γ_0 is constant, it is straightforward to see that the solutions u_i = x_i, 1 ≤ i ≤ n, fulfill Hypothesis 2.1 over X. If Q = {q_{ij}} denotes an invertible constant matrix such that Q : γ_0 = 0, then the boundary condition g_{n+1} := (1/2) q_{ij} x_i x_j|_{∂X} fulfills Hypothesis 2.2, since we have Q = Z.
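For constant γ_0, the quadratic u(x) = ½ x·Qx with Q symmetric has gradient Qx, so ∇ · (γ_0∇u) = tr(γ_0Q) = Q : γ_0, and the condition Q : γ_0 = 0 is exactly what makes u a solution. A small numerical check (a sketch; the random tensors are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
gamma0 = B @ B.T + 3 * np.eye(3)           # constant background conductivity

# Any symmetric Q0, projected so that Q : gamma0 = tr(Q gamma0) = 0.
Q0 = rng.standard_normal((3, 3)); Q0 = Q0 + Q0.T
Q = Q0 - (np.trace(Q0 @ gamma0) / np.trace(gamma0 @ gamma0)) * gamma0
assert abs(np.trace(Q @ gamma0)) < 1e-10   # Q : gamma0 = 0

# u(x) = x.Qx/2 has gradient Qx, so div(gamma0 grad u) = tr(gamma0 Q) = 0:
# check via centered finite differences at a random point.
u = lambda x: 0.5 * x @ Q @ x
x0, h = rng.standard_normal(3), 1e-5
div = 0.0
for i in range(3):
    for j in range(3):
        ei, ej = np.eye(3)[i], np.eye(3)[j]
        d2 = (u(x0 + h*ei + h*ej) - u(x0 + h*ei - h*ej)
              - u(x0 - h*ei + h*ej) + u(x0 - h*ei - h*ej)) / (4*h*h)
        div += gamma0[i, j] * d2           # sum_ij (gamma0)_ij d2u/dxi dxj
assert abs(div) < 1e-3
```
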
Throughout the paper, we use for (real-valued) square matrices A and B the contraction notation A : B := tr(AB^T).

Remark 2. In the treatment of the nonlinear case [6,20,17,18], it has been pointed out that Hypothesis 2.1 may not be systematically satisfied globally in dimension n ≥ 3. A more general hypothesis would come from picking a larger family (of cardinality > n) of solutions whose gradients have maximal rank throughout X. While this additional technical point would not alter qualitatively the present reconstruction algorithms, it would add notational complexity which the authors decided to avoid; see also [8].
2.1. Past work and heuristics for the linearization. In the reconstruction approach developed in [19,17,18] for the non-linear problem, it was shown that not every part of the conductivity is reconstructed with the same stability. Namely, consider the decomposition of the tensor γ′ into the product of a scalar function τ = (det γ′)^{1/n} and a scaled anisotropic structure γ̃′ with det γ̃′ = 1. The following results were then established. Starting from n solutions whose gradients form a basis of R^n over a subset Ω ⊂ X, it was shown that, under knowledge of a W^{1,∞}(X) anisotropic structure γ̃′, the scalar function log det γ′ was uniquely and Lipschitz-stably reconstructible in W^{1,∞}(Ω) from W^{1,∞} power densities. Additionally, if one added a finite number of solutions u_{n+1}, . . . , u_{n+l} such that the family of matrices Z^{(1)}, . . . , Z^{(l)} defined as in (7) imposed enough orthogonality constraints on γ̃′, then the latter was explicitly reconstructible over Ω from the mutual power densities of (u_1, . . . , u_{n+l}). The latter reconstruction was stable in L^∞ for power densities in W^{1,∞} norm, and thus involved the loss of one derivative.
Passing to the linearized setting (recall γ^ε = γ_0 + εγ), and anticipating that one scalar quantity may be more stably reconstructible than the others, this quantity should be the linearized version of log det γ^ε. Standard calculations yield

log det(γ_0 + εγ) = log det γ_0 + log det(I_n + εγ_0^{−1}γ) = log det γ_0 + ε tr(γ_0^{−1}γ) + O(ε²),

and thus the quantity that should be stably reconstructible is tr(γ_0^{−1}γ). The linearization of the product decomposition (τ, γ̃′) above is now a spherical–deviatoric one of the form

γ = (1/n) tr(γ_0^{−1}γ) γ_0 + γ_d, γ_d := γ_0 (γ_0^{−1}γ)_dev, (8)

where dev is the linear projection onto the hyperplane of traceless matrices, A_dev := A − (tr A/n) I_n.

2.2. Microlocal inversion. The above inverse problem in (4)–(6) may be seen as a system of partial differential equations for (γ, {v_j}). This is the point of view considered in [3]. However, {v_j} may be calculated from (4) and the expression plugged back into (6). This allows us to recast dH as a linear operator acting on γ, which is smaller than the original linear system for (γ, {v_j}), but which is no longer differential and instead pseudo-differential. The objective in this section is to show, following earlier work in the isotropic case in [16], that such an operator is elliptic under appropriate conditions. We first fix Ω′ ⊂⊂ X and assume that supp γ ⊂ Ω′, so that integrals of the form ∫_{R^n} e^{ιx·ξ} p(x, ξ) : γ̂(ξ) dξ are well-defined, with p(x, ξ) a matrix-valued symbol whose entries are polynomials in ξ (see [11, p. 267]) and where the hat denotes the Fourier transform γ̂(ξ) = ∫_{R^n} e^{−ιx·ξ} γ(x) dx. We also assume that γ_0 ∈ C^∞(Ω′) and can be extended smoothly by γ_0 = I_n outside Ω′. As pointed out in [16], in order to treat this problem microlocally, one must introduce cutoff versions of the dH_ij operators, which in turn extend to pseudo-differential operators (ΨDO) on R^n.
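The trace formula above is easy to verify numerically by comparing a finite-difference derivative of ε → log det(γ_0 + εγ) with tr(γ_0^{−1}γ) (a sketch with random symmetric data; names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
gamma0 = B @ B.T + 4 * np.eye(4)             # known SPD background
S = rng.standard_normal((4, 4))
gamma = S + S.T                              # symmetric perturbation

eps = 1e-6
# finite-difference derivative of log det at gamma0 in direction gamma
lhs = (np.linalg.slogdet(gamma0 + eps * gamma)[1]
       - np.linalg.slogdet(gamma0)[1]) / eps
rhs = np.trace(np.linalg.solve(gamma0, gamma))   # tr(gamma0^{-1} gamma)
assert abs(lhs - rhs) < 1e-4
```
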
Namely, if Ω′′ is a domain satisfying Ω′ ⊂⊂ Ω′′ ⊂⊂ X and χ_1 is a smooth function supported in X which is identically equal to 1 on a neighborhood of Ω′′, the operator γ → χ_1 dH_ij(χ_1γ) can be made a ΨDO upon considering L_0 = −∇ · (γ_0∇) as a second-order operator on R^n and using standard pseudo-differential parametrices to invert it [13]. We will therefore not distinguish the operators dH_ij from their pseudo-differential counterparts. The task of this section is then to determine conditions under which a given collection of such functionals becomes an elliptic operator acting on γ over Ω′.
Using relations (4) and (6), we aim at writing the operator dH_ij in the form

dH_ij(γ)(x) = (2π)^{−n} ∫_{R^n} e^{ιx·ξ} M_ij(x, ξ) : γ̂(ξ) dξ, (9)

with symbol M_ij(x, ξ) (pseudo-differential terminology is recalled in Sec. 3.1). We first compute the main terms in the symbol expansion of dH_ij (call this expansion M_ij = M_ij|_0 + M_ij|_{−1} + · · ·, where M_ij|_{−k} is homogeneous of degree −k in ξ). From these expressions, we then directly deduce microlocal properties of the corresponding operators. The first lemma shows that the principal symbols M_ij|_0 can never fully invert for γ, no matter how many solutions u_i we pick. When Hypothesis 2.1 is satisfied, the characteristic directions of the principal symbols {M_ij(x, ξ)}_{1≤i,j≤n} reduce to an (n−1)-dimensional subspace of S_n(R). Here and below, we recall that the colon ":" denotes the inner product A : B = tr(AB^T) for (A, B) ∈ S_n(R) and ⊙ denotes the symmetric outer product U ⊙ V := (1/2)(U ⊗ V + V ⊗ U).

(ii) Suppose that Hypothesis 2.1 holds over some Ω ⊆ X. Then for any x ∈ Ω, if P ∈ S_n(R) is such that

M_ij|_0(x, ξ) : P = 0, 1 ≤ i ≤ j ≤ n, (11)

then P is of the form P = γ_0ξ ⊙ η for some vector η satisfying η · ξ = 0.
Since an arbitrary number of zeroth-order symbols can never be elliptic with respect to γ, we then consider the next term in the symbol expansion of dH_ij. We must also add one solution u_{n+1} to the initial collection, exhibiting appropriate behavior, i.e. satisfying Hypothesis 2.2. The collection of functionals we consider below is thus of the form

dH := {dH_ij}_{1≤i≤j≤n+1}, (12)

and emanates from n + 1 solutions (u_1, . . . , u_{n+1}) of (3) satisfying Hypotheses 2.1 and 2.2.
In order to formulate the result, we assume that we can construct a family of unit vector fields (ξ̂_0, ξ̂_1, . . . , ξ̂_{n−1}), homogeneous of degree zero in ξ, smooth in x, and everywhere orthonormal. We then define the family of scalar elliptic zeroth-order ΨDO

T γ = {T_pq γ}_{0≤p≤q≤n−1}, (13)

which can be thought of as a microlocal change of basis after which the operator dH(γ) becomes both diagonal and elliptic. Indeed, we verify (see Section 3.4) that for any k ≥ 1 and γ sufficiently regular, the estimates (14) hold. These estimates come from standard results on pseudo-differential operators [13]. The presence of the constant C_2 indicates that T can be inverted microlocally, but may not be injective.
Composing the measurements dH_ij with appropriate scalar ΨDOs of order 0 and 1, we are then able to recover each component of the operator (13). The well-chosen "parametrices" are made possible by the fact that the collection of symbols M_ij|_0 + M_ij|_{−1} becomes elliptic over Ω′ when Hypotheses 2.1 and 2.2 are satisfied. Rather than using the full collection of measurements dH (12), we will consider the smaller collection {dH_ij}_{1≤i,j≤n} augmented with the n measurement operators

L_i(γ) := dH_{i,n+1} + µ^{−1}µ_j dH_{ij} (summation over j), 1 ≤ i ≤ n, (15)

where (µ_1, . . . , µ_n, µ)(x), known from the measurements {H_ij}_{i,j=1}^{n+1}, are the coefficients in the relation of linear dependence µ_1∇u_1 + · · · + µ_n∇u_n + µ∇u_{n+1} = 0.
We also define the operator L_0^{1/2} ∈ Ψ^1 with principal symbol −ι|A_0ξ|. Our conclusions may be formulated as follows:

Proposition 1. Let the measurements dH defined in (12) satisfy Hypotheses 2.1 and 2.2.
(ii) For any 1 ≤ α ≤ n − 1, there exist {B_{αi}}_{1≤i≤n} ⊂ Ψ^0 such that the relation (17) holds, where the remainder R_α ∘ R can be expressed as a zeroth-order linear combination of the components T_00 and {T_pq}_{1≤p≤q≤n−1} reconstructed in (i).

The presence of the L_0^{1/2} term in part (ii) of Prop. 1 accounts for the loss of one derivative in the inversion process. From Prop. 1, we can then obtain stability estimates of the form (18). This estimate holds for k = 0 using the results of Proposition 1, and in fact for any k ≥ 0 upon updating by standard methods (not detailed here; see [13]) the parametrices in (16) and (17) to inversions modulo operators in Ψ^{−k} (i.e., classical ΨDO of order −k [13]), provided that the coefficients (γ_0, {u_j}) are sufficiently smooth. The presence of the constant C_2 indicates that the reconstruction of γ may be performed up to the existence of a finite-dimensional kernel, as an application of Fredholm theory as in [16].
Equation (18) means that some components of γ are reconstructed with a loss of one derivative while other components are reconstructed with no loss. The latter components are those that can be spanned by the components T_00γ and {T_αβγ}_{1≤α,β≤n−1}. Some algebra shows that the only such linear combination is tr(γ_0^{−1}γ), confirming the heuristics of Sec. 2.1. It can be shown that all other components of γ (i.e. any part of γ_d in (8)) are, to some extent, spanned by the components T_0αγ, and as such cannot be reconstructed with better stability than the loss of one derivative in light of (18). Combining the above results with (14), we arrive at the main stability estimate of the paper, (19). Such an estimate holds for any k ≥ 1.
The above estimate holds with C_2 = 0 when γ → dH(γ) is an injective (linear) operator. Injectivity cannot be verified by microlocal arguments since all inversions are performed up to smoothing operators; see [3] in the isotropic setting. In the next section, we obtain an injectivity result, which allows us to set C_2 = 0 in the above expression. However, the stability estimate (19) is essentially optimal. An optimal estimate, which follows from the above and the equations for (γ, {v_j}), is a two-sided estimate whose left-hand-side inequality is a direct consequence of (19) and the expression of dH, and whose right-hand-side inequality follows directly from the expression of dH. This estimate is clearly optimal. The operator M|_0 is of order 0. If it were elliptic, then γ would be reconstructed with no loss of derivative. However, M|_0 is not elliptic, and the loss of ellipticity is precisely accounted for by the results of Lemma 2.3. As discussed above, it turns out that the only spatial coefficient controlled by M|_0γ is tr(γ_0^{−1}γ), and hence (19).

2.3. Explicit inversion.
Now, allowing γ to be supported up to the boundary, we present a variation of the non-linear resolution technique used in [17,18]. First considering n solutions generated by boundary conditions fulfilling Hypothesis 2.1, we establish an expression (20) for γ in terms of the remaining unknowns (v_1, . . . , v_n), where [∇U] and [∇V] denote the n × n matrices whose j-th columns are ∇u_j and ∇v_j, respectively, and where H = {H_ij}_{i,j=1}^n and dH = {dH_ij}_{i,j=1}^n. In particular, we find from (20) the relation (21). Plugging (20) back into equation (4) for 1 ≤ i ≤ n, one can deduce a gradient equation for the quantity tr(γ_0^{−1}γ), which in turn allows one to reconstruct tr(γ_0^{−1}γ) in a Lipschitz-stable manner with respect to the LPD {dH_ij}_{i,j=1}^n (i.e. without loss of derivative). Now turning to the full reconstruction of γ, we consider an additional solution u_{n+1} generated by a boundary condition fulfilling Hyp. 2.2. The following proposition then establishes how to reconstruct (v_1, . . . , v_n) from dH:

Proposition 2. Assume that (g_1, . . . , g_{n+1}) fulfill Hypotheses 2.1 and 2.2 over X and consider the linearized power densities dH = {dH_ij : 1 ≤ i ≤ j ≤ n + 1, i ≠ n + 1}. Then the solutions (v_1, . . . , v_n) satisfy a strongly coupled elliptic system of the form (22), where the vector fields W_ij are known and only depend on the behavior of γ_0, Z and u_1, . . . , u_n, and where the functionals f_i are linear in the data dH_ij.
When the vector fields W ij are bounded, system (22) satisfies a Fredholm alternative from which we deduce that if (22) with a trivial right-hand side admits no non-trivial solution, then (v 1 , . . . , v n ) is uniquely reconstructed from (22). We can then reconstruct γ from (20).
Remark 3 (Case γ_0 constant). In the case where γ_0 is constant, choosing solutions as in Remark 1, one arrives at a system of the form (22) with W_ij = 0 for i ≠ j, so that the system is decoupled and clearly injective.
The conclusive theorem for the explicit inversion is thus:

Theorem 2.4. Assume that (g_1, . . . , g_{n+1}) fulfill Hypotheses 2.1 and 2.2 over X and consider the linearized power densities dH = {dH_ij : 1 ≤ i ≤ j ≤ n + 1, i ≠ n + 1}. Assume further that the system (22) with trivial right-hand sides has no non-trivial solution. Then γ is uniquely determined by dH and we have the stability estimate (23).

3. Microlocal inversion.

3.1. Preliminaries.
Linear algebra. In the following, we endow the n × n matrices M_n(R) with the inner product ⟨A, B⟩ = A : B = tr(AB^T), for which M_n(R) admits the orthogonal decomposition A_n(R) ⊕ S_n(R) into antisymmetric and symmetric matrices. For two vectors U = (u_1, . . . , u_n)^T and V = (v_1, . . . , v_n)^T in R^n, we denote by U ⊗ V the matrix with entries {u_iv_j}_{i,j=1}^n, and we also define the symmetrized outer product U ⊙ V := (1/2)(U ⊗ V + V ⊗ U). With · denoting the standard dot product on R^n, we have the identities (U ⊗ V)W = (V · W)U and A : U ⊙ V = U · AV for A ∈ S_n(R).

Pseudo-differential calculus. Recall that we denote the set of symbols of order m on X by S^m(X), the space of functions p ∈ C^∞(X × R^n) such that for all multi-indices α and β and every compact set K ⊂ X there is a constant C_{α,β,K} such that |∂_x^β∂_ξ^α p(x, ξ)| ≤ C_{α,β,K}(1 + |ξ|)^{m−|α|} for x ∈ K. Suppose {m_j}_{j=0}^∞ is strictly decreasing with lim m_j = −∞, and suppose p_j ∈ S^{m_j}(X) for each j. We denote an asymptotic expansion of the symbol p ∈ S^{m_0}(X) as p ∼ Σ_j p_j. Given two ΨDO P and Q with respective symbols σ_P and σ_Q and orders d_P and d_Q, we will make repeated use of the symbol expansion of the product operator QP ≡ Q ∘ P (see [11, Theorem (8.37)] for instance)

σ_{QP} = σ_Qσ_P + (1/ι) ∇_ξσ_Q · ∇_xσ_P + O(|ξ|^{d_P+d_Q−2}), (28)

where O(|ξ|^α) denotes a symbol of order at most α. As we will need to compute products of three ΨDO R, P and Q, we record for later use the formula obtained by iterating (28),

σ_{RQP} = σ_Rσ_Qσ_P + (1/ι)(σ_R ∇_ξσ_Q · ∇_xσ_P + ∇_ξσ_R · ∇_x(σ_Qσ_P)) + O(|ξ|^{d_R+d_Q+d_P−2}). (29)

In the next derivations, some operators have matrix-valued principal symbols. However, we will only compose them with operators with scalar symbols, so the above calculus remains valid.
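The algebraic identities of these preliminaries, e.g. A : (U ⊙ V) = U · AV for symmetric A and (U ⊗ V)W = (V · W)U, are used repeatedly below; a quick numerical confirmation (a sketch, names ours):

```python
import numpy as np

def sym_outer(u, v):
    """Symmetrized outer product u ⊙ v = (u⊗v + v⊗u)/2."""
    return 0.5 * (np.outer(u, v) + np.outer(v, u))

rng = np.random.default_rng(3)
u, v, w = rng.standard_normal((3, 4))           # three random 4-vectors
S = rng.standard_normal((4, 4)); S = S + S.T    # a symmetric matrix A

# A : B = tr(A B^T) is the Frobenius inner product
frob = lambda A, B: np.trace(A @ B.T)

# For symmetric A:  A : (u ⊙ v) = u · A v
assert abs(frob(S, sym_outer(u, v)) - u @ S @ v) < 1e-10
# (u ⊗ v) w = (v · w) u
assert np.allclose(np.outer(u, v) @ w, (v @ w) * u)
```
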
For Y a smooth vector field, we will also need in the sequel to express the operator Y · ∇ as a ΨDO, whose symbol is σ_{Y·∇} = σ_{Y·∇}|_1 := ιξ · Y. We now write dH_ij as a ΨDO acting on γ with symbol M_ij as in (9). dH_ij belongs to Ψ^0(X) and we will compute in this paper the first two terms in the expansion of M_ij (call them M_ij|_0 and M_ij|_{−1}), which relies on constructing parametrices of L_0 of increasing order and computing symbols of products of ΨDO based on formula (28). If Q is a parametrix of L_0 modulo Ψ^{−m}, i.e. K ≡ QL_0 − Id ∈ Ψ^{−m}, then straightforward computations based on the relation L_0v_i = P_iγ yield the relation (32), with error operator K_ij defined in (33). For any i, L_0^{−1}P_i denotes the operator γ → v_i, where v_i solves (4), and standard elliptic theory allows us to conclude that L_0^{−1}P_i smoothes by one derivative, so that the error operator K_ij defined in (33) smoothes by m derivatives. In particular, upon computing a parametrix Q of L_0 modulo Ψ^{−m}, the first three terms in (32) are enough to construct the principal part of the symbol M_ij modulo Ψ^{−m}.

3.2. Computation of M_ij|_0. In light of the last remark, we first compute a parametrix Q of L_0 modulo Ψ^{−1}; that is, since L_0 ∈ Ψ^2, we look for a principal symbol of the form σ_Q = q_{−2} + O(|ξ|^{−3}). Clearly, we obtain q_{−2} = l_2^{−1} = (ξ · γ_0ξ)^{−1}. In this case, according to (32) and (28), the principal symbol of dH_ij at order zero is given by (34). M_ij|_0 admits a somewhat more symmetric expression if pre- and post-multiplied by A_0, the unique positive square root of γ_0, so that we may write

A_0 M_ij|_0 A_0 = V_i ⊙ V_j − (ξ̂_0 · V_i) ξ̂_0 ⊙ V_j − (ξ̂_0 · V_j) ξ̂_0 ⊙ V_i, (35)

where we have defined ξ_0 := A_0ξ and x̂ := |x|^{−1}x for any x ∈ R^n − {0}, as well as V_i := A_0∇u_i. This last expression motivates the proof of Lemma 2.3.
Proof of (ii): Recall the expression of A_0M_ij|_0A_0 obtained above. We write S_n(R) as the direct orthogonal sum of the three spaces R ξ̂_0 ⊗ ξ̂_0, S_n({ξ̂_0}^⊥) and ξ̂_0 ⊙ {ξ̂_0}^⊥, with respective dimensions 1, n(n−1)/2 and n − 1. Decomposing A_0^{−1}PA_0^{−1} uniquely into this sum, we write A_0^{−1}PA_0^{−1} = P_1 + P_2 + P_3. Direct calculations then show that (11) implies −P_1 + P_2 = 0, i.e. P_1 = P_2 = 0 (the two components lie in orthogonal subspaces). Hence A_0^{−1}PA_0^{−1} = P_3 ∈ ξ̂_0 ⊙ {ξ̂_0}^⊥, which is precisely the stated form P = γ_0ξ ⊙ η with η · ξ = 0.
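The proof decomposes a symmetric matrix relative to the distinguished direction ξ̂_0. A plausible concrete version of this splitting (our reconstruction, with a fixed unit vector e standing in for ξ̂_0) can be checked numerically:

```python
import numpy as np

n = 4
rng = np.random.default_rng(4)
e = rng.standard_normal(n); e /= np.linalg.norm(e)   # unit direction (xi_hat_0)

P = rng.standard_normal((n, n)); P = P + P.T         # arbitrary symmetric matrix

Pe = np.outer(e, e)
Perp = np.eye(n) - Pe                                # projector onto e-perp

P1 = (e @ P @ e) * Pe      # component along e⊗e                 (dimension 1)
P2 = Perp @ P @ Perp       # symmetric part acting on e-perp     (dim n(n-1)/2)
P3 = P - P1 - P2           # remainder, of the form e ⊙ eta with eta ⊥ e

assert np.allclose(P1 + P2 + P3, P)
# mutual orthogonality in the A : B = tr(A B^T) inner product
frob = lambda A, B: np.trace(A @ B.T)
for X, Y in [(P1, P2), (P1, P3), (P2, P3)]:
    assert abs(frob(X, Y)) < 1e-10
# P3 = e ⊙ eta with eta = 2 P3 e orthogonal to e
eta = 2 * (P3 @ e)
assert abs(eta @ e) < 1e-10
assert np.allclose(P3, 0.5 * (np.outer(e, eta) + np.outer(eta, e)))
```
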
In other words, all symbols of order zero M_ij|_0(x, ξ) are orthogonal to the (x, ξ)-dependent (n−1)-dimensional subspace of symmetric matrices γ_0ξ ⊙ {ξ}^⊥. One must thus compute the next term in the symbol expansion of the operators dH_ij, i.e. M_ij|_{−1}. We will then show that enough symbols of the form M_ij|_0 + M_ij|_{−1} suffice to span the entire space S_n(R) for every x ∈ Ω′ and ξ ∈ S^{n−1}, so that the corresponding family of operators is elliptic as a function of γ.

3.3. Computation of M_ij|_{−1}. As the previous section explained, the principal symbols M_ij|_0 can never span S_n(R). Therefore, we compute the next term M_ij|_{−1} in their symbol expansion. We must first construct a parametrix Q of L_0 modulo Ψ^{−2}, i.e. of the form

σ_Q = q_{−2} + q_{−3} + O(|ξ|^{−4}). (36)

Lemma 3.1. The symbols q_{−2} and q_{−3} defined in (36) have respective expressions

q_{−2} = l_2^{−1} = (ξ · γ_0ξ)^{−1}, (37)
q_{−3} = ι l_2^{−2} (∇ · γ_0) · ξ − 2ι l_2^{−3} (γ_0ξ) · ∇_x l_2. (38)

Proof of Lemma 3.1. Using formula (28) with (Q, P) ≡ (Q, L_0), and using the expansions of σ_Q and σ_{L_0} = l_2 + l_1, with l_2 = ξ · γ_0ξ and l_1 = −ι(∇ · γ_0) · ξ, we get

σ_{QL_0} = q_{−2}l_2 + (q_{−2}l_1 + q_{−3}l_2 + (1/ι)∇_ξq_{−2} · ∇_xl_2) + O(|ξ|^{−2}).

In order to match the expansion 1 + 0 + O(|ξ|^{−2}), the expansion above must satisfy, for large ξ,

q_{−2}l_2 = 1 and q_{−2}l_1 + q_{−3}l_2 + (1/ι)∇_ξq_{−2} · ∇_xl_2 = 0.

Now, we easily have ∇_ξl_2 = 2γ_0ξ and ∇_xl_2 = ∂_{x_i}[γ_0]_{pq}ξ_pξ_qe_i, where e_1, . . . , e_n is the natural basis of R^n. We thus deduce the expression of q_{−3}, from which (38) holds. Finally, q_{−3} is clearly in S^{−3} from this expression, since l_2^{−3} is of order −6. The proof is complete.

We now give the expression of M_ij|_{−1} (or rather, that of A_0M_ij|_{−1}A_0).

Proposition 3 (Expression of M_ij|_{−1}). The expression (39) holds, where we have defined V_i := A_0∇u_i, H_i := A_0∇²u_iA_0, as well as the vector field G given in (40).

Proof of Proposition 3. Assume Q is a parametrix of L_0 modulo Ψ^{−2} and consider formula (32). Since the term γ : ∇u_i ⊙ ∇u_j is of order zero, the computation of M_ij|_{−1} consists in computing the second term in the symbol expansion of R_i ∘ Q ∘ P_j, and the same term with i, j permuted, where we denote R_i := γ_0∇u_i · ∇ with symbol r_{i,1} = ιγ_0∇u_i · ξ. Plugging σ_{R_i} = r_{i,1}, σ_Q = q_{−2} + q_{−3} and σ_{P_i} = p_{i,1} + p_{i,0} into (29) and keeping only the terms that are homogeneous of degree −1 in ξ, we arrive at the expression (41). Note that the multiplications commute because the symbols of Q and R_i are scalar, while that of P_j is matrix-valued. Since M_ij|_{−1} = σ_{R_iQP_j}|_{−1} + σ_{R_jQP_i}|_{−1}, equation (39) will be proved once we show (42).

Proof of (42). Starting from (41), plugging in the expression r_{i,1} = ι(V_i · ξ_0), using the stated identity, and pre- and post-multiplying by A_0 yields the relation (43). Gathering the first and third terms of (43) recombines them into ι|ξ_0|^{−1}V_i · G(ξ_0 ⊙ V_j) (the last term of (42)). As for the second and fourth terms of (43), a direct computation shows that they recombine into the remaining terms of (42), and the argument is complete.
Proof. The symmetry of M is obvious by definition. Taking the gradient of (44), we arrive at (45). Pre- and post-multiplying by A_0, we deduce the claimed expression. The proof is complete, since Hyp. 2.1 ensures that µ never vanishes and V is uniformly invertible, and Hyp. 2.2 ensures that Z is uniformly invertible.
3.4. The T_pq operators. Proof of Proposition 1: As advertised in Sec. 2.2, because of the algebraic form of the symbols of the linearized power density operators, it is convenient for inversion purposes to define the microlocal change of basis T γ = {T_pqγ}_{0≤p≤q≤n−1} as in (13).
To convince ourselves that this collection forms a microlocally invertible operator of γ, let us introduce the zeroth-order ΨDOs P_ijpq with scalar principal symbols σ_{P_ijpq} := (e_i · A_0ξ̂_p)(e_j · A_0ξ̂_q) for 1 ≤ i, j ≤ n and 0 ≤ p, q ≤ n − 1. Then for any 1 ≤ i ≤ j ≤ n, the composition of operators Σ_{p,q} P_ijpq ∘ T_pq has principal symbol γ_ij (repeated indices are summed over), where we have used the following property, true for any smooth vector field V: Σ_p (V · ξ̂_p) ξ̂_p = V, by orthonormality of the ξ̂_p. Thus for any 1 ≤ i ≤ j ≤ n, the composition Σ_{p,q} P_ijpq ∘ T_pq recovers γ_ij = γ : e_i ⊗ e_j up to a regularizing term. This in particular justifies the estimates (14) and the subsequent inversion procedure. We are now ready to prove Proposition 1.
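The proof of Proposition 1 below also rests on expanding an arbitrary vector field in the (generally non-orthogonal) basis (V_1, . . . , V_n) through the Gram matrix H_ij = V_i · V_j. A numerical sanity check of that expansion, W = Σ_{p,q}(H^{−1})_{pq}(W · V_p)V_q (a sketch in NumPy; names are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
V = rng.standard_normal((3, 3))       # columns V_1..V_n, candidate basis of R^n
while abs(np.linalg.det(V)) < 1e-3:   # make sure it really is a basis
    V = rng.standard_normal((3, 3))

H = V.T @ V                           # Gram matrix H_ij = V_i . V_j
Hinv = np.linalg.inv(H)

W = rng.standard_normal(3)            # arbitrary vector (field value at a point)
coeffs = Hinv @ (V.T @ W)             # (H^{-1})_{pq} (W . V_p)
assert np.allclose(V @ coeffs, W)     # recovers W in the basis V_1..V_n
```
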
Proof of Proposition 1. From the fact that (V_1, . . . , V_n) is a basis at every point, and given their dot products H_ij = V_i · V_j, we have the following formula, true for every vector field W:

W = H^{pq}(W · V_p)V_q, (46)

where {H^{pq}} denote the entries of H^{−1} and repeated indices are summed over.

Proof of (i): Reconstruction of the components T_00γ and {T_αβγ}_{1≤α≤β≤n−1}.
We work with M_ij|_0 as computed above, which means that, upon defining Q_αβij ∈ Ψ^0 with suitable scalar principal symbols, relation (16) is satisfied in the sense of operators, since the previous calculation amounts to computing the principal symbol of the composition of operators in (16).
Proof of (ii): Reconstruction of the components {T_0αγ}_{1≤α≤n−1}. It remains to construct appropriate operators that map dH(γ) to the components T_0αγ for 1 ≤ α ≤ n − 1, which is where the additional measurements dH_{i,n+1} come into play. Let (µ_1, . . . , µ_n, µ) be as in (44) and construct the ΨDOs {L_i(γ)}_{i=1}^n as in (15). It is easy to see that, since the µ_i are only functions of x, the terms of fixed homogeneity in the symbol expansion of L_i satisfy the corresponding relations. Then from equation (34) and relation (44), we deduce that σ_{L_i}|_0 = 0, so that L_i ∈ Ψ^{−1}. Moreover, using equation (39) together with relation (44), we deduce that σ_{L_i}|_{−1} is now the principal symbol of L_i. Using relation (46) with W ≡ M^{−1}ξ̂_α, the symmetry of M, and multiplying by M, we obtain a relation from which we deduce the following calculation for 1 ≤ α ≤ n − 1. While the second term gives us the missing components T_0αγ, we claim that the first one is spanned by ξ̂_0 ⊙ ξ̂_0 and {ξ̂_α ⊙ ξ̂_β}_{1≤α≤β≤n−1}. In light of these algebraic calculations, we now build the parametrices. Let L_0^{1/2} ∈ Ψ^1, B_{αi} ∈ Ψ^0, R ∈ (Ψ^0)^{n×n}, R_α ∈ Ψ^0 and R_{αβ} ∈ Ψ^0 be the ΨDOs with the respective principal symbols.

Then the relation (47) implies (17) at the principal symbol level. The operator R can indeed be expressed as a zeroth-order linear combination of the components T_00 and {T_αβ}_{1≤α,β≤n−1}, so that the left-hand side of (17) is expressed as a post-processing of the measurement operators dH_ij only. The proof is complete.
4. Explicit inversion. 4.1. Preliminaries. Finally, for A a matrix and V = [V_1| . . . |V_n], the sum A_ijV_j (summed over j) is nothing but the i-th column of the matrix VA^T.
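This column identity is elementary but easy to get wrong with conventions; a minimal NumPy check (names are ours):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))
V = rng.standard_normal((4, 4))       # columns V_1 .. V_n

# sum_j A_ij V_j equals the i-th column of V A^T
for i in range(4):
    sum_AijVj = sum(A[i, j] * V[:, j] for j in range(4))
    assert np.allclose(sum_AijVj, (V @ A.T)[:, i])
```
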

4.2. Derivation of (20) from Hypothesis 2.1. Let us start from n solutions (u_1, . . . , u_n) fulfilling Hypothesis 2.1, and let (v_1, . . . , v_n) be the corresponding solutions of (4). We also denote [∇U] := [∇u_1| . . . |∇u_n] and [∇V] similarly. We first mention that for any vector field V, we have the formulas (48), which also amount to the matrix relations (49). From the relation (50) we deduce, using (49), the identity (51). The previous equation allows us to express γ in terms of the remaining unknowns (v_1, . . . , v_n). Indeed, taking the tensor product of (51) with H^{ij}γ_0∇u_j and summing over i yields an expression in which we use the identity (49) in the last right-hand side. One may notice that the resulting expression is indeed a symmetric matrix. In matrix notation, using the preliminaries, we arrive at the expression (20).

4.3. Algebraic equations obtained by considering additional solutions. Let us now add another solution u_{n+1} with corresponding solution v_{n+1} at order O(ε). By virtue of Hypothesis 2.1, as in Section 3.4, ∇u_{n+1} may be expressed in the basis (∇u_1, . . . , ∇u_n) as

∇u_{n+1} + µ^{−1}µ_i∇u_i = 0 (summation over i), (53)

where the coefficients µ_i can be expressed as ratios of determinants, or equivalently, computed from the power densities at order ε^0; see [17, Appendix A.3]. For 1 ≤ i ≤ n, we define Z_i := ∇(µ^{−1}µ_i), and notice that we have the two algebraic relations (54). The first one is obtained after applying the operator ∇ · (γ_0 ·) to (53), and the second one after applying an exterior derivative to (53). Moving on to the study of the corresponding solution v_{n+1}, we write

dH_{n+1,j} + µ^{−1}µ_i dH_{ij} = (∇v_{n+1} + µ^{−1}µ_i∇v_i) · γ_0∇u_j, 1 ≤ j ≤ n,

where we have cancelled sums of the form (53). Using the identity (49), we deduce that

∇v_{n+1} + µ^{−1}µ_i∇v_i = H^{pq}(dH_{n+1,p} + µ^{−1}µ_i dH_{ip})∇u_q.
The left-hand side can be considerably simplified by noticing that the second equation of (54) implies [∇U]Z^T = Z[∇U]^T. With this fact in mind, the left-hand side takes the form X_p · ∇v_p, with X_p as computed there, and we obtain the more compact equation (58), with Y_q given in (57). In (56) and (58), the only unknown is the matrix [∇V] := [∇v_1| . . . |∇v_n]. Equations (56) and (58) give us the projection of that matrix onto the space ZA_n(R) and onto the line Rγ_0Z, respectively. As in the non-linear case [17,18], we expect that a rich enough set of such equations, provided by a certain number of additional solutions (u_{n+1}, . . . , u_{n+l}), leads to a pointwise, algebraic reconstruction of [∇V]; however, we do not follow that route here.
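The coefficients µ_i/µ above express one gradient in the basis of the n others and, as noted, can be computed as ratios of determinants. A small numerical sketch (random vectors standing in for the gradients at a fixed point; names are ours):

```python
import numpy as np

rng = np.random.default_rng(7)
G = rng.standard_normal((3, 3))              # columns: gradients of u_1..u_n at x
g_extra = rng.standard_normal(3)             # gradient of u_{n+1} at the same x

# Coefficients c_i with grad u_{n+1} = c_i grad u_i; in the notation
# mu_1 grad u_1 + ... + mu_n grad u_n + mu grad u_{n+1} = 0 this is
# c_i = -mu_i/mu (mu normalized to 1 here).
c = np.linalg.solve(G, g_extra)
assert np.allclose(G @ c, g_extra)

# The same coefficients as ratios of determinants (Cramer's rule)
for i in range(3):
    Gi = G.copy(); Gi[:, i] = g_extra
    assert abs(np.linalg.det(Gi) / np.linalg.det(G) - c[i]) < 1e-8
```
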

4.4. Proof of Proposition 2 and Theorem 2.4. We now show that, provided we use one additional solution u_{n+1} (on top of the basis (u_1, . . . , u_n)) such that the matrix Z has full rank, we can reconstruct (v_1, . . . , v_n) via a strongly coupled elliptic system of the form (22), after which we can reconstruct γ from (∇v_1, . . . , ∇v_n) by formula (20). We next derive this elliptic system.