Inverse diffusion from knowledge of power densities

This paper concerns the reconstruction of a diffusion coefficient in an elliptic equation from knowledge of several power densities. The power density is the product of the diffusion coefficient with the square of the modulus of the gradient of the elliptic solution. Such internal functionals are derived by perturbing the medium of interest with acoustic (plane) waves, which results in small changes in the diffusion coefficient. After appropriate asymptotic expansions and (Fourier) transformation, this allows us to construct the power density of the equation point-wise inside the domain. Such a setting finds applications in ultrasound modulated electrical impedance tomography and ultrasound modulated optical tomography. We show that the diffusion coefficient can be uniquely and stably reconstructed from knowledge of a sufficiently large number of power densities. Explicit expressions for the reconstruction of the diffusion coefficient are also provided. Such results hold for a large class of boundary conditions for the elliptic equation in the two-dimensional setting. In three dimensions, the results are proved for a more restrictive class of boundary conditions constructed by means of complex geometrical optics solutions.


Introduction
Optical tomography (OT) and electrical impedance tomography (EIT) are medical imaging techniques that utilize the large contrast between the optical and electrical response of certain unhealthy tissues and that of healthy tissues. However, because OT and EIT are modeled by diffusion operators that are highly smoothing, the reconstructions are typically very low resolution [3,5,27]. Several recent imaging techniques aim to combine the high contrast in OT and EIT with a high resolution modality, for instance based on magnetic resonances (MREIT) [19,21,22] or, for what is of interest for us here, ultrasound. There are several ways of combining EIT and OT with ultrasound. One way is to use the photo-acoustic or the electro-acoustic effects; see, e.g., [9,13,23,28]. In this paper, we consider another way based on ultrasound modulation.
The inverse problem of UMEIT may be described as follows. The electric potential u solves a diffusion equation (see (1) below) with unknown conductivity σ. The ultrasound modulation, whose modeling is recalled in section 2, provides access to the power density H(x) = σ(x)|∇u(x)|^2 for all x inside the domain of interest. Reconstruction of σ from this one internal functional H(x) is typically not feasible unless additional constraints are imposed; see, e.g., [6]. In this paper, we assume that several such power densities, corresponding to, e.g., different boundary conditions in (1), are known. By polarization or direct measurements as shown in section 2, we then have access to H_ij(x) = σ(x)∇u_i(x) · ∇u_j(x) for u_i and u_j solutions to the same elliptic equation with different boundary conditions. The case of the spatial dimension n = 2 was treated in [12]. This paper aims to generalize these results to the case n = 3 (the results also generalize to the case n ≥ 4, see [20], although we shall not present the details here) and to show that UMEIT is a stable inverse problem, in the sense that, in an appropriate norm, errors on the measurement of the power densities H_ij result in errors on the reconstruction of σ that are of the same order. This should be contrasted with the case of EIT and OT, where the error on σ is roughly proportional to the logarithm of (and hence much larger than) the error on the available (Cauchy) data. Moreover, our derivation is constructive in the sense that it produces an explicit procedure to reconstruct σ from knowledge of a sufficiently large number of internal measurements of the form H_ij.
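To make the data in this inverse problem concrete, the following self-contained sketch (our own illustration; the solver, grid size, and conductivity model are not from the paper) computes approximate power densities H_ij = σ∇u_i · ∇u_j on the unit square for the boundary conditions g_1 = x_1 and g_2 = x_2:

```python
import numpy as np

def solve_conductivity(sigma, g, n=21):
    """Solve div(sigma grad u) = 0 on [0,1]^2 with u = g on the boundary,
    using a 5-point finite-difference scheme with face-averaged sigma."""
    h = 1.0 / (n - 1)
    xs = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    u = np.array(g(X, Y), dtype=float)      # boundary values; interior overwritten
    interior = np.zeros((n, n), dtype=bool)
    interior[1:-1, 1:-1] = True
    idx = -np.ones((n, n), dtype=int)       # numbering of interior unknowns
    idx[interior] = np.arange(interior.sum())
    s = sigma(X, Y)
    m = interior.sum()
    A, b = np.zeros((m, m)), np.zeros(m)
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            k = idx[i, j]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                w = 0.5 * (s[i, j] + s[i + di, j + dj])   # face conductivity
                A[k, k] += w
                if interior[i + di, j + dj]:
                    A[k, idx[i + di, j + dj]] -= w
                else:
                    b[k] += w * u[i + di, j + dj]
    u[interior] = np.linalg.solve(A, b)
    return X, Y, u, h

# Hypothetical smooth conductivity with an inclusion in the center.
sigma = lambda x, y: 1.0 + 0.5 * np.exp(-30.0 * ((x - 0.5) ** 2 + (y - 0.5) ** 2))
X, Y, u1, h = solve_conductivity(sigma, lambda x, y: x)    # g_1 = x_1
_, _, u2, _ = solve_conductivity(sigma, lambda x, y: y)    # g_2 = x_2

def power_density(ua, ub):
    """H_ab = sigma grad(u_a) . grad(u_b), gradients by finite differences."""
    ga, gb = np.gradient(ua, h), np.gradient(ub, h)
    return sigma(X, Y) * (ga[0] * gb[0] + ga[1] * gb[1])

H11, H12, H22 = power_density(u1, u1), power_density(u1, u2), power_density(u2, u2)
```

The matrix-valued field with entries H_ij is symmetric by construction, and its diagonal entries are nonnegative since σ > 0.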
In dimension n = 2, one set of measurements for 1 ≤ i, j ≤ 2 is sufficient to uniquely characterize σ. In dimension n = 3, sets of measurements for 1 ≤ i, j ≤ 3 are sufficient locally but not necessarily globally. We show the existence of sets of measurements for 1 ≤ i, j ≤ 4 that uniquely (and stably) characterize σ. This construction is based on showing that the determinant of a matrix whose columns are the gradients of three of the four solutions to the elliptic equation is positive at every point x of the domain; the three chosen solutions depend on the point x. Such constructions are based on the use of complex geometric optics solutions, which are a convenient tool to show that solutions to elliptic equations satisfy desired qualitative properties [6,7,9,26].
The rest of the paper is structured as follows. The derivation of the power density measurements from ultrasound modulations of media of interest is recalled in section 2. The main results obtained in this paper are presented in section 3. The proofs of the main theorems are given in the following two sections. Section 4 provides the proof of uniqueness, stability, and reconstruction procedures on a domain Ω on which the determinant of a matrix composed of n gradients of solutions is shown to be bounded below by a positive constant. How such results can be patched together to provide global reconstruction results is given in section 5. That determinants can be shown to have a prescribed sign is based on the use of complex geometric optics (CGO) solutions that are presented in section 5.

Derivation of internal functionals
We consider the following elliptic equation:

∇ · (σ(x)∇u) = 0 in X,    u = g on ∂X.    (1)

Here, σ is the unknown diffusion coefficient, which we assume in this paper to be a real-valued, scalar function defined on a simply connected domain X ⊂ R^n for n = 2 or n = 3. We assume that σ is bounded above and below by positive constants so that the above equation admits a unique solution. We also assume that σ is sufficiently smooth so that the solution to the above equation is continuously differentiable on X̄, the closure of X [15]. We denote by ∂X the boundary of X and by g(x) the imposed (sufficiently smooth) Dirichlet boundary conditions. The coefficient σ(x) may model the electrical conductivity in the setting of electrical impedance tomography (EIT) or a diffusion coefficient of particles (photons) in the setting of optical tomography (OT). Both EIT and OT are modalities with high contrast, in the sense that σ(x) takes different values in different tissues and allows one to discriminate between healthy and non-healthy tissues. In OT, high contrast is mostly observed in the absorption coefficient, which is not modeled here [8].
A methodology to couple high contrast with high resolution consists of perturbing the diffusion coefficient acoustically. Let an acoustic signal propagate through the domain. In this presentation, we assume that the sound speed is constant and that the acoustic signal is a plane wave of the form p cos(k · x + ϕ), where p is the amplitude of the acoustic signal, k its wave vector, and ϕ an additional phase. The acoustic signal modifies the properties of the diffusion equation. We assume that this effect is small and that the coefficient in (1) is modified as in (2), where we have defined c = c(x) = cos(k · x + ϕ) and where ε = pΓ is the product of the acoustic amplitude p ∈ R and a measure Γ > 0 of the coupling between the acoustic signal and the modulations of the constitutive parameter in (1); see [18] and references therein. We assume that ε ≪ 1 so that the influence of the acoustic signal on σ_ε admits an asymptotic expansion, which we truncate at second order as displayed in (2). The size of the terms in the expansion is physically characterized by ζ and depends on the specific application. Let u and v be solutions of (1) with fixed boundary conditions g and h, respectively. When the acoustic field is turned on, the coefficients are modified as described in (2), and we denote by u_ε and v_ε the corresponding solutions. Note that u_{−ε} is the solution obtained by changing the sign of p or, equivalently, by replacing ϕ by ϕ + π.
By the standard continuity of the solution to (1) with respect to changes in the coefficients and regular perturbation arguments, we find that u_ε = u_0 + ε u_1 + O(ε^2). Let us multiply the equation for u_ε by v_{−ε} and the equation for v_{−ε} by u_ε, subtract the resulting equalities, and use standard integrations by parts. We obtain that

∫_X (σ_ε − σ_{−ε}) ∇u_ε · ∇v_{−ε} dx = ∫_{∂X} [ σ_ε (∂_ν u_ε) v_{−ε} − σ_{−ε} (∂_ν v_{−ε}) u_ε ] ds(x).    (3)

Here, ds(x) is the standard surface measure on ∂X. We assume that σ_ε ∂_ν u_ε and σ_ε ∂_ν v_ε are measured on ∂X, at least on the support of v_ε = h and u_ε = g, respectively, for all values ε of interest. Note that the above equation holds if the Dirichlet boundary conditions are replaced by Neumann boundary conditions. Let us then define the measurements J_m = J_m(k, ϕ) via (4). We assume that the real-valued functions J_m are known (measured functions). Notice that such knowledge is based on the physical boundary measurement of the Cauchy data of the form (u_ε, σ_ε ∂_ν u_ε) and (v_ε, σ_ε ∂_ν v_ε) on ∂X.
Equating like powers of ε, we find at leading order that J_1 is proportional to ∫_X σ(x) c(x) ∇u_0 · ∇v_0 dx. This quantity may be acquired for all k ∈ R^n and ϕ = 0, π/2, and hence provides the Fourier transform of σ∇u_0 · ∇v_0. Note that when v_ε = u_ε, we find from the expression in (3) that J_2 = 0 in (4), so that the expression for J_1 may be obtained from the available measurements in (4) with an accuracy of order O(ε^2). Note also that σ∇u · ∇v = (σ|∇(u + v)|^2 − σ|∇(u − v)|^2)/4 by polarization. In other words, the limiting measurements (for small ε) in (6) may also be obtained by considering expressions of the form (3) with u_ε = v_ε. In the setting of optical tomography, the coefficient σ_ε in (2) takes the form displayed in (7), where σ̃_ε is the diffusion coefficient, c_ε is the light speed, and n is the spatial dimension. When the pressure field is turned on, the location of the scatterers is modified by compression and dilation. Since the diffusion coefficient is inversely proportional to the scattering coefficient, σ̃_ε is modulated by the local compression of the medium. Moreover, the pressure field changes the index of refraction (the speed) of light as in (8), where γ is a constant (roughly equal to 1/3 for water). This determines the coefficient ζ in the UMOT setting. In the setting of electrical impedance tomography, we simply assume that ζ models the coupling between the acoustic signal and the change in the electrical conductivity of the underlying material. The value of ζ thus depends on the application.
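The polarization identity invoked above, which lets cross power densities be recovered from "diagonal" ones, can be checked numerically; the sketch below is our own illustration on random gradient samples:

```python
import numpy as np

# Check: sigma grad(u).grad(v) = ( sigma|grad(u+v)|^2 - sigma|grad(u-v)|^2 ) / 4.
rng = np.random.default_rng(0)
sigma = rng.uniform(0.5, 2.0, size=100)     # positive conductivity samples
gu = rng.normal(size=(100, 3))              # stand-ins for grad(u)
gv = rng.normal(size=(100, 3))              # stand-ins for grad(v)

lhs = sigma * np.sum(gu * gv, axis=1)
rhs = 0.25 * (sigma * np.sum((gu + gv) ** 2, axis=1)
              - sigma * np.sum((gu - gv) ** 2, axis=1))
assert np.allclose(lhs, rhs)
```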
The objective of ultrasound modulated optical tomography (UMOT) and ultrasound modulated electrical impedance tomography (UMEIT) is to reconstruct the coefficient σ(x) from measurements of the form (6), i.e., since we assume that ζ is known, from measurements of the form H(x) = σ(x)∇u(x) · ∇v(x), where u and v are two solutions of the unperturbed equation with (possibly) different boundary conditions.
In the setting where v_0 = u_0, measurements are of the form H_00(x) = σ(x)|∇u_0|^2(x). Plugging σ = H_00 |∇u_0|^{−2} into the elliptic equation yields the nonlinear equation

∇ · ( H_00(x) |∇u_0|^{−2} ∇u_0 ) = 0 in X,    u_0 = g on ∂X.

We thus observe that the reconstruction of σ may be recast as solving the above nonlinear partial differential equation. It turns out that the above equation is not directly amenable to analysis. However, the above expression makes clear the connection between inverse problems with internal information and nonlinear partial differential equations. The methodology to obtain (6) follows the presentation in [8] and is very similar in spirit to the derivation obtained in [18]. An alternative method, based on physical focusing in time of acoustic signals, has been presented in [2].
Assuming the validity of the above derivation, the mathematical problem of interest in this paper is as follows. We want to reconstruct σ(x) from knowledge of the interior functionals

H_ij(x) = σ(x) ∇u_i(x) · ∇u_j(x),    1 ≤ i, j ≤ m,    (9)

where u_j is the solution to the equation

∇ · (σ∇u_j) = 0 in X,    u_j = g_j on ∂X,    (10)

for appropriate choices of the boundary conditions g_j on ∂X. Obviously, we would like m to be as small as possible and ideally equal to 1. Our results propose explicit reconstructions for m = n in dimension n = 2 and m = n or m = n + 1, depending on additional parameters, in dimension n = 3. The case of dimension n ≥ 4 can be handled in a similar fashion, although we shall not present the mathematically more involved and practically less interesting details in this paper.

Statement of the main results
The main strategy we use to reconstruct σ from knowledge of H = {H_ij}_{ij} is first to write equations for the quantities S_i := √σ ∇u_i that do not involve σ explicitly, and then to show that σ is uniquely determined once the S_i are known. This was implemented in the two-dimensional setting in [12], and the main result of this paper is the extension to the three-dimensional case. The extension to arbitrary dimensions is also feasible, although we shall restrict ourselves to the cases n = 2, 3 for concreteness. We define the quantities S_i and the vector field F in (11). Using the equations (10), the definitions (11), and the fact that σ^{−1/2} S_i is a gradient, we obtain divergence and curl equations for the S_i, where we have defined for n = 2 the product [A, B] := A_x B_y − A_y B_x and [∇, A] := ∂_x A_y − ∂_y A_x for smooth vector fields A and B, while for n = 3, × is the standard cross product.
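The computation behind this system can be sketched as follows, under the convention F := ∇ log √σ (the paper's own normalization of F in (11) may differ by a sign):

```latex
% Sketch: divergence/curl system for S_i = \sqrt{\sigma}\,\nabla u_i,
% under the (assumed) convention F := \nabla\log\sqrt{\sigma}.
\begin{align*}
0 &= \nabla\cdot(\sigma\nabla u_i) = \nabla\cdot(\sqrt{\sigma}\,S_i)
   = \sqrt{\sigma}\left(\nabla\cdot S_i + F\cdot S_i\right)
  &&\Longrightarrow\quad \nabla\cdot S_i = -\,F\cdot S_i,\\
0 &= \nabla\times(\nabla u_i) = \nabla\times(\sigma^{-1/2} S_i)
   = \sigma^{-1/2}\left(\nabla\times S_i - F\times S_i\right)
  &&\Longrightarrow\quad \nabla\times S_i = F\times S_i \quad (n=3).
\end{align*}
% For n = 2, the planar curl relation [\nabla, S_i] = [F, S_i]
% replaces the last equation.
```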
We now wish to eliminate F from these equations and obtain a closed-form equation for the vectors S_i with sources that only involve the known matrix H. Such an elimination requires that we find n vectors S_i that form a basis of R^n for n = 2, 3. In dimension n = 2, it is not difficult to find such a basis for all x ∈ X. In dimension n ≥ 3, we are able to construct such bases on subsets of R^n that cover X, using triplets of vectors that depend on the subset. We thus consider the above equations for S_i and F over an open subset Ω ⊂ X, over which we make the assumption that

det(S_1(x), . . . , S_n(x)) ≥ c_0 > 0 for all x ∈ Ω,    (15)

where the n vectors S_j for 1 ≤ j ≤ n are chosen among the m vectors considered in (9). Since determinants are central in our derivations, we define d(x) := det(S_1, S_2)(x) when n = 2 and D(x) := det(S_1, S_2, S_3)(x) when n = 3, for all x ∈ Ω. Note that det(S_1(x), . . . , S_n(x)) = (det H)^{1/2}(x). Remark 3.1. Since H is an invertible, symmetric, non-negative matrix, it satisfies inequalities obtained as a general property of Gramian matrices (i.e., matrices of dot products of a given family of vectors) by successively using the Cauchy-Schwarz inequality and a property of the cross product. Then, since H = S^T S ∈ C^0(Ω) (by regularity of the solutions and assuming σ continuous), the assumption (15) is equivalent to H being invertible over Ω. In this case, H^{−1} is uniformly bounded over Ω, that is, ‖H^{−1}‖_∞ < +∞. In two dimensions, the constraint (15) is satisfied over the whole domain Ω = X for a large class of boundary conditions g_i, and we then choose m = n = 2 as shown in [1, Theorem 4]. For instance, we may choose g_j(x_1, x_2) = x_j on ∂X; we are then guaranteed that (x_1, x_2) → (u_1, u_2) is a diffeomorphism from X to its image, so that the determinant is clearly signed and, by continuity (assuming that σ is sufficiently smooth so that gradients are continuous functions, which we also assume for the rest of the paper [15]), bounded from below by a positive constant.
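The Gram relation between det S and det H can be sanity-checked numerically; the example below is our own illustration, not from the paper:

```python
import numpy as np

# If H = S^T S, then det(S_1,...,S_n) = +/- sqrt(det H); with a positively
# oriented frame the + sign is taken.
rng = np.random.default_rng(1)
S = rng.normal(size=(3, 3))
if np.linalg.det(S) < 0:
    S[:, [0, 1]] = S[:, [1, 0]]      # enforce positive orientation
H = S.T @ S
assert np.isclose(np.linalg.det(S), np.sqrt(np.linalg.det(H)))
```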
In three dimensions, there is no known guarantee that the determinant of three solutions with prescribed boundary conditions remains positive throughout the domain X independent of the conductivity σ. For boundary conditions of the form g j (x) = x j on ∂X for 1 ≤ j ≤ 3, it is known that the determinant can change signs for some conductivities [10].
Yet, (15) is an important constraint in the derivation of our results. By means of specific complex geometric optics solutions, we are able to construct boundary conditions g_i such that (15) is valid on subsets of the domain X. More precisely, we construct an open covering of X of the form X = ∪_k Ω_k. In other words, we can choose m = 4 boundary conditions such that X is decomposed into a union of overlapping simply connected subsets on which the determinant of three out of the four vectors is positive. Generically, we denote by Ω any of the subdomains Ω_k. We shall see that σ can be reconstructed on all of Ω provided that σ and all S_i are known at one point x_0 ∈ Ω. After patching local reconstructions together, this provides a (unique) reconstruction in the whole domain X.
On each of the subdomains Ω ≡ Ω_k, the available information is H_ij(x) = S_i(x) · S_j(x) for 1 ≤ i ≤ j ≤ n, abbreviated by a function H : Ω → S_n(R) (S_n(R) denotes the n × n real-valued symmetric matrices). Although our goal is ultimately to recover only the conductivity σ, our approach requires the reconstruction of the vector fields S_i, 1 ≤ i ≤ m, and F. Since the data H(x) give us S(x) (the matrix with columns given by the vectors S_i) up to an SO_n(R)-valued function R(x) (i.e., R(x) is a rotation matrix), the unknowns are now the functions R and F, which are of dimension n(n−1)/2 and n, respectively. As will be detailed in section 4, the local reconstruction of R and F proceeds as follows: one first derives divergence and curl equations for the R_j's, the column vectors of the function R : Ω → SO_n(R), with right-hand sides involving F, the R_j's, and the data. Using structural properties of SO_n(R) satisfied at every x ∈ Ω, one is then able to derive an equation (18) for F in which the vector fields V_ij depend only on the data. Plugging this equation back into the divergence/curl system closes the system for the R_j's. From this closed system, one is then able to derive gradient-type equations for the R_j's, where the right-hand sides either depend only on the data (case n = 2) or also depend polynomially on the R_j's (case n = 3). In either case, these gradient equations can be integrated along segments of the form [x_0, x] for x, x_0 ∈ Ω, parameterized by the curve γ_{x_0,x}(t) := x_0 + t(x − x_0), t ∈ [0, 1], in order to reconstruct the R_j's at x locally from knowledge of their value at x_0. Once the R_j's (and therefore the function R) are locally reconstructed around x_0, one can reconstruct σ(x) around x_0 from the knowledge of σ(x_0) by integrating (18) along segments [x_0, x].
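The last step, propagating σ from x_0 along segments, can be illustrated with a synthetic example (the conductivity, the field F, and the step counts below are our own choices, not data from the paper):

```python
import numpy as np

# If F = grad(log sqrt(sigma)) is known, then
# log sigma(x) = log sigma(x0) + integral over [x0,x] of 2 F . dl.
sigma_exact = lambda p: np.exp(p[0] + 2.0 * p[1])     # hypothetical sigma
F = lambda p: np.array([0.5, 1.0])                    # = grad(log sigma)/2 here

def propagate_sigma(x0, x, n=201):
    ts = np.linspace(0.0, 1.0, n)
    pts = x0[None, :] + ts[:, None] * (x - x0)[None, :]   # gamma_{x0,x}(t)
    vals = np.array([2.0 * F(p) @ (x - x0) for p in pts])  # 2 F . gamma'(t)
    incr = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts))  # trapezoid rule
    return sigma_exact(x0) * np.exp(incr)

x0, x = np.array([0.1, 0.2]), np.array([0.7, 0.4])
recovered = propagate_sigma(x0, x)
```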
As we mentioned earlier, we can set Ω = X when n = 2 by choosing illuminations g = (g 1 , g 2 ) such that (15) is satisfied throughout X. Then the above reconstruction procedure yields unique and stable reconstructions, as described in the following theorem.
Theorem 3.2 (2D global uniqueness and stability). Let (H, H̃) be two data sets with coefficients in W^{1,∞}(X), corresponding to the same illumination g = (g_1, g_2) and with determinants satisfying condition (15) throughout X. Let σ and σ̃ be the corresponding conductivities and assume that σ(x_0) = σ̃(x_0) for some x_0 ∈ X. Then we have the stability estimate (21). This result also ensures uniqueness, since (21) implies that H = H̃ ⟹ σ = σ̃.
Note that theorem 3.2 does not require the prescription of the S i 's at the point x 0 because the data H together with the maximum principle allow us to access their values at appropriate boundary points, as will be seen in section 4.2.1, paragraph "reconstruction procedure".
In the three-dimensional case, as mentioned earlier, we do not know of any illumination g that guarantees the constraint (15) throughout X. We then work with m ≥ 3 solutions of (10) and assume that there exists an open covering of X as described in (22). The next lemma, whose proof is given at the beginning of section 5, gives an example of such a setting based on the use of complex geometric optics solutions, if one assumes some regularity on σ.
for ε_i and ε̃_i equal to ±1.
Using m ≥ 3 solutions that satisfy assumption (22), the local reconstruction approach can be applied to reconstruct σ on each of the Ω_i's, and some additional work is then required to patch these reconstructions together and show that the global reconstruction scheme ensures unique and stable reconstructions of σ over X. Global reconstruction procedures are thus presented in section 5 and summarized in the following: Theorem 3.4 (3D global uniqueness and stability). Let X ⊂ R^3 be an open convex bounded domain, and let two sets of m ≥ 3 solutions of (10) generate measurements (H, H̃) whose components belong to W^{1,∞}(X). Assume that one can define a couple (O, τ) such that (22) is satisfied for both sets of solutions S and S̃. Let σ and σ̃ be the conductivities corresponding to the measurements H and H̃, respectively. Then we have the following stability estimate, where ε_0 is the error at the initial point x_0. The uniqueness and stability results are constructive. As we shall see, the reconstruction depends on the choice of curves along which the sets of ordinary differential equations coming from the gradient equations are solved. With noise-free data, the reconstruction is unique, as we just stated. In the presence of noise, the available data may no longer be in the range of the measurement operator mapping the unknown coefficient σ to the measurements H(x). In this case, the reconstruction may depend on the choice of the curve along which the integrations are performed. The compatibility conditions that the data need to satisfy in order for the reconstruction to be independent of the choice of path (a Poincaré-type lemma) are straightforward in dimension n = 2 but seem more challenging in dimension n = 3.
Both theorems provide Lipschitz stability estimates in dimensions n = 2 and n = 3. These estimates show that the reconstruction of the conductivity from knowledge of a sufficient number of power densities is a well-posed problem, unlike what is observed in the Calderón problem, which consists of reconstructing the diffusion coefficient from knowledge of the Dirichlet-to-Neumann map [27]. Thus UMEIT and UMOT allow us to combine the high contrast of optical and electrical properties of domains with the high resolution capabilities of ultrasound.
The rest of the paper is structured as follows. In section 4 we derive the formulas that are necessary for local (global when n = 2) reconstructions and stability results. Section 5 explains the global reconstruction scheme in the case n = 3 and includes the proof of theorem 3.4.

Derivation of local reconstruction formulas
In this section, we prove Theorem 3.2. We first observe that an orthonormal frame (R_1, . . . , R_n) obtained from (S_1, . . . , S_n) can be computed from the data H alone. This fact can be seen, for instance, by noticing that the orientation-preserving Gram-Schmidt procedure that creates the orthonormal R_j's from the S_i's with det R det S > 0 only involves coefficients that depend on the inner products S_i · S_j = H_ij, which are indeed known. Note that S H^{−1/2}(x) is also a rotation-valued function on Ω.
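That S H^{−1/2} is rotation-valued is easy to verify numerically; the example below is our own illustration on a random frame with positive determinant:

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.normal(size=(3, 3))
if np.linalg.det(S) < 0:
    S[:, [0, 1]] = S[:, [1, 0]]       # enforce det S > 0
H = S.T @ S                            # the (Gram) data matrix
w, V = np.linalg.eigh(H)               # H is symmetric positive definite
H_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
R = S @ H_inv_sqrt                     # candidate rotation built from S and H
assert np.allclose(R.T @ R, np.eye(3), atol=1e-10)
assert np.isclose(np.linalg.det(R), 1.0)
```

Indeed, det R = det S / |det S| = +1 once the frame is positively oriented.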
More generally, we take T(x) = {t_ij(x)}_{1≤i,j≤n} to be a matrix-valued function on Ω that satisfies the properties (25). Two examples are the symmetric T = H^{−1/2} and the lower-triangular T in (59) below, obtained by the Gram-Schmidt procedure. Denote by T̃ the matrix constructed from H̃ in the same manner as T is constructed from H. We impose the existence of a constant C_T > 0 such that (29) holds; here C_T only depends on H and H̃ and is completely determined by the way we construct T and T̃. The following simple lemma, whose proof is given in section 4.2.3, justifies the above stability result for the Gram-Schmidt procedure: if (T, T̃) are built from (H, H̃) using the Gram-Schmidt procedure, then the stability estimate (29) holds with a constant C_n(ξ), a real polynomial function in ξ that only depends on the dimension n.
We also define T^{−1} = {t^{ij}}_{1≤i,j≤n} and the vector fields V_ij given in (28). Here and below, repeated indices are summed over. Let Ṽ be a vector-valued matrix constructed from H̃ as V is constructed from H. A simple calculation then yields the estimate in (26).
From S and T, let us now build the matrix R by defining

R(x) := S(x) T(x).    (30)

From conditions (25) and the fact that S(x)^T S(x) = H(x) for every x ∈ Ω, the matrix R in (30) satisfies R^T(x) R(x) = I_n for all x ∈ Ω, as well as det R(x) = 1; thus R(x) ∈ SO_n(R) for all x ∈ Ω. Moreover, plugging relation (30) into equation (12) and using the vector calculus identity ∇ · (f V) = ∇f · V + f ∇ · V, we obtain the divergence equation (31) satisfied by the R_i's. In a similar manner, one can derive the curl-type equations (32)-(33) for the R_i's. We now show that the redundancies in the systems (31)-(32) when n = 2 and (31)-(33) when n = 3 allow us to derive formula (18) for F in terms of the R_i's and the data. This formula is successively proved for n = 2 and n = 3, using vector calculus identities in each dimension.
Remark 4.2 (Geometric motivation for the next proofs). The generalization to n ≥ 4 may be found in [20], where the natural tools come from differential geometry, in particular connections, exterior derivatives, and the Hodge star operator. Once condition (15) is assumed, an expression of equation (18) in the basis (S_1, . . . , S_n) may be proved in general dimension by studying the properties of the dual basis of (S_1, . . . , S_n) with respect to the Euclidean metric. Once (18) is derived, the divergence and curl equations that we have for the S_i's (or equivalently the R_i's) have right-hand sides that depend only on the data H and its partial derivatives, and on the S_i's at zeroth order. The remaining work is to pass from the resulting system of PDEs to full gradient equations (i.e., total covariant derivatives) for the unknown basis (S_1, . . . , S_n) or, equivalently, (R_1, . . . , R_n).

The case n = 2
First notice that because R ∈ SO_2(R), we have the relations J R_i = ε_ij R_j, where ε_12 = −ε_21 = 1 and we have defined the rotation by π/2,

J := ( 0  −1 ; 1  0 ).

In particular, this implies [A, B] = (JA) · B, as well as related identities for any vector field A. Together with the system of equations (31)-(32), these relations allow us to obtain the components of F in the basis (R_1, R_2) (here, (i, j) ∈ I_2), and hence the formula (35) for (i, j) ∈ I_2. We then obtain equation (18) by plugging (35) into the relation F = (F · R_1)R_1 + (F · R_2)R_2 and using the identity (36), whose proof may be found in section 4.1.3 below.
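The planar relations used here can be checked directly; the following is our own numerical illustration:

```python
import numpy as np

# J is rotation by pi/2; for any R in SO(2), the second column is J applied
# to the first, and the planar product satisfies [A, B] = (J A) . B.
J = np.array([[0.0, -1.0], [1.0, 0.0]])
theta = 0.73
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R[:, 1], J @ R[:, 0])      # R_2 = J R_1

rng = np.random.default_rng(3)
A, B = rng.normal(size=2), rng.normal(size=2)
bracket = A[0] * B[1] - A[1] * B[0]           # [A, B] = A_x B_y - A_y B_x
assert np.isclose(bracket, (J @ A) @ B)
```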

The case n = 3
We recall the vector calculus identity ∇ · (A × B) = B · ∇ × A − A · ∇ × B, which holds for any two smooth vector fields A and B, together with the relations R_i × R_j = R_k for (i, j, k) ∈ A_3, which hold for all x ∈ Ω since R(x) ∈ SO_3(R). Combining these relations and using equations (31) and (33), we obtain expressions for the components F · R_i; above, we sum over m but not over (i, j, k) ∈ A_3. We finally arrive at (18) by plugging the three components F · R_i into the relation F = (F · R_1)R_1 + (F · R_2)R_2 + (F · R_3)R_3 and using the identity (38), whose proof may be found in section 4.1.3 below.
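The identity ∇ · (A × B) = B · ∇ × A − A · ∇ × B can be verified symbolically; a quick check (our own, using SymPy) reads:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

def field(name):
    # A generic smooth vector field with undetermined component functions.
    return sp.Matrix([sp.Function(f"{name}{i}")(x, y, z) for i in range(3)])

A, B = field("A"), field("B")

def div(V):
    return sum(sp.diff(V[i], v) for i, v in enumerate((x, y, z)))

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

lhs = div(A.cross(B))
rhs = B.dot(curl(A)) - A.dot(curl(B))
assert sp.simplify(lhs - rhs) == 0
```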

Proof of identities (36) and (38)
We prove these identities in general dimension n ≥ 2. First notice that for any k = 1, . . . , n, the function T(x) satisfies locally the (tautological) relation ∂_k T = V_k T, where we have defined V_k := {∂_k(t_il) t^{lj}}_{i,j} = (∂_k T) T^{−1}. Therefore, by Liouville's formula, we obtain that ∂_k log det T = tr V_k. Finally, noting that det T = d^{−1} when n = 2 and det T = D^{−1} when n = 3, the identities (36) and (38) are proved component by component.
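Liouville's (Jacobi's) formula used in this step, ∂_k log det T = tr((∂_k T) T^{−1}), admits a quick finite-difference check; the matrix family below is our own toy example:

```python
import numpy as np

def T(t):
    # An arbitrary smooth, invertible matrix family (det > 0 near t = 0.4).
    return np.array([[2.0 + t, np.sin(t), 0.1],
                     [0.3, 1.5 + t ** 2, t],
                     [0.0, 0.2 * t, 1.0 + 0.5 * t]])

t0, h = 0.4, 1e-6
dT = (T(t0 + h) - T(t0 - h)) / (2 * h)                 # central difference T'
lhs = (np.log(np.linalg.det(T(t0 + h)))
       - np.log(np.linalg.det(T(t0 - h)))) / (2 * h)   # d/dt log det T
rhs = np.trace(dT @ np.linalg.inv(T(t0)))              # tr(T' T^{-1})
assert np.isclose(lhs, rhs, rtol=1e-5)
```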

Resolution in SO n (R) and stability analyses
Plugging formula (18) into (31), (32) and (33) provides a closed system of equations for the function R. It remains to show that R is then uniquely and stably determined by such equations. We again separate the cases n = 2 and n = 3.

The case n = 2
Since SO_2(R) is a one-dimensional manifold, R = [R_1|R_2] is described by an S^1-valued function θ(x) such that, at each point, R_1(θ) = (cos θ, sin θ)^T and R_2(θ) = J R_1(θ). We wish to derive an equation for ∇θ. Plugging the expression (18) for F into (31), we arrive at equation (39). Let us now derive a differential equation for R = [R_1|R_2]. The relation R^T R = I_2 implies that R^T ∂_i R ∈ A_2(R), the space of two-dimensional antisymmetric matrices, for i = 1, 2. For the sequel, we need the vector calculus identity (40), which holds for any smooth vector field A. We decompose α in the basis (R_1, R_2) and use equations (31). We then use (40), which in this case becomes (R_i · ∇)R_i = [∇, R_i] J R_i = −(∇ · R_j) R_j for (i, j) ∈ I_2, and then equation (39). Finally, given the above parameterization R(θ), one checks using the chain rule that R^T ∂_i R = (∂_i θ) J, i = 1, 2. Using the preceding calculations, we obtain the gradient equation (41) for θ.

Reconstruction procedure. Let H be a given data set corresponding to an illumination g that guarantees condition (15) throughout X. From S, we construct R using the Gram-Schmidt procedure and parameterize R by a function θ : X → S^1 as above. Let x_m ∈ ∂X be a point where the illumination g_1 attains its global minimum. Then, according to the maximum principle [15], u_1 achieves its minimum over X̄ at x_m. In particular, at this point we have the relation R_1(θ(x_m)) = −ν(x_m), where ν(x_m) denotes the outgoing normal vector to X at x_m. Thus θ(x_m) is known. Therefore, we can reconstruct θ(x) at every x ∈ X by integrating (41) along the segment [x_m, x] parameterized by γ_{x_m,x}. Once θ is recovered throughout X, one then reconstructs σ(x) for all x ∈ X from the knowledge of σ(x_0) for some x_0 ∈ X by integrating equation (18) along the segment [x_0, x].

Proof of Theorem 3.2. We now prove the stability results stated in Theorem 3.2.
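The integration of (41) along segments can be illustrated with a synthetic gradient field (the field below is our own toy example, not data from the paper):

```python
import numpy as np

# Recover theta at x by integrating a known grad(theta) along [x_m, x],
# starting from the boundary value theta(x_m).
theta_exact = lambda p: p[0] ** 2 + p[1]            # synthetic angle function
grad_theta = lambda p: np.array([2.0 * p[0], 1.0])  # its exact gradient

def reconstruct_theta(xm, x, n=2001):
    ts = np.linspace(0.0, 1.0, n)
    pts = xm[None, :] + ts[:, None] * (x - xm)[None, :]     # gamma_{xm,x}(t)
    vals = np.array([grad_theta(p) @ (x - xm) for p in pts])
    incr = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts))  # trapezoid rule
    return theta_exact(xm) + incr

xm, x = np.array([0.0, 0.0]), np.array([0.8, 0.3])
recovered_theta = reconstruct_theta(xm, x)
```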
Let two data sets (H, H̃) ∈ W^{1,∞}(X) correspond to identical illuminations g = g̃ and to conductivities σ and σ̃ that coincide at some x_0 ∈ X. Let T, T̃ be built from H, H̃ such that they satisfy conditions (25) and (29). We then define R = ST and R̃ = S̃T̃ and parameterize R and R̃ by their angle functions θ, θ̃ : X → S^1 as above. Using estimates (29) and taking the difference of (41) for θ and θ̃ yields the estimate (42). Moreover, if x_m ∈ ∂X is a point where the illumination g_1 attains its global minimum, we have seen in the reconstruction procedure that R_1(θ(x_m)) = R̃_1(θ̃(x_m)) = −ν(x_m), and thus θ(x_m) = θ̃(x_m). Therefore we have, for any x ∈ X, the bound (43), where ∆(X) denotes the diameter of X. Combined with (42), this yields a stability statement for the reconstruction of the function θ. Turning to the stability of σ, we have a pointwise estimate obtained using the Cauchy-Schwarz inequality and the fact that |∂_θ R_p| ≤ 1 for p = 1, 2. Using estimates (43) and (29) then directly implies the desired bound. The proposition is proved, provided that σ and σ̃ coincide at x_0, which allows us to control any difference |log σ(x) − log σ̃(x)| by ∆(X) ‖∇(log σ − log σ̃)‖_{L∞(X)}. This concludes the proof.

The case n = 3
In the three-dimensional case, the unknown is now an SO_3(R)-valued function R and is thus parameterized in principle by 3 scalar functions. However, because every 3-parameter chart on SO_3(R) has singularities, as shown in [24], using such a chart to represent the unknown function R may yield instabilities during the reconstruction procedure. In this paper, we thus represent R as a 9-vector R = (R_1^T, R_2^T, R_3^T)^T. We then derive gradient equations for the components of R, which allow local reconstruction of R by integration of an appropriate system of ordinary differential equations. An alternative would be a representation that involves as few parameters as possible while still guaranteeing stability; this can be achieved by the 4-parameter quaternion representation, whose details will appear elsewhere.
Let us now find differential equations for R. Taking partial derivatives of the relation R^T R = I_3 yields that for k = 1, 2, 3 we have R^T ∂_k R ∈ A_3(R), the space of antisymmetric 3 × 3 matrices, which we write as in (44). We now focus on expressing the vector fields α_i := α_i^k e_k in terms of the data. In order to do so, we recall that A_3 is the set of direct permutations of (1, 2, 3). We use identities that hold for any smooth vector fields A and B, with A = R_k and B = R_p, and take the dot product with R_j; the cases p = i, p = j, and p = k are treated in turn. We have thus proved that the vector fields α_l can be expressed in terms of the quantities {R_p · (∇ × R_q)}_{1≤p,q≤3}. These quantities, in turn, can be expressed in terms of R and the data using equation (33) for (i, j, k) ∈ A_3. To summarize, using (18) for F and identity (38), we obtain, for (i, j, k) ∈ A_3, expressions showing that there exist vector fields A_pqr such that (51) holds. Left-multiplying equation (44) by R, we obtain the differential equations (52). Since the expression (51) is purely quadratic in the components of R, the above system can be summarized as (53), where the functions Q_α^k : Ω → R^9 are linear combinations of the data vectors V_ij in (28).
Local reconstructibility and stability estimates. The system (53) allows us to reconstruct $R$ locally, provided that we know its value $R(x_0) = R_0$ for some $x_0 \in \Omega$. The reconstruction of $R(x)$ for $x \in \Omega$ is done by integrating the ODE
$$\frac{d}{dt} R(\gamma(t)) = G(\gamma(t), R(\gamma(t))), \qquad R(\gamma(0)) = R_0,$$
where $\gamma$ is chosen to be $\gamma_{x_0, x}$ here, and where the function $G$ is polynomial in the components of $R$ with bounded coefficients provided that $H$ has components in $W^{1,\infty}$. Because $G$ is constructed such that $G \cdot R = 0$, as can be seen in (52), the solution to this system of equations, if it exists, has constant Euclidean norm, equal to $\|R_0\|^2 = 3$. Therefore, the function $R(x) \in \mathbb{R}^9$ remains in the compact set $\sqrt{3}\, S^8$, over which $G$ is Lipschitz in the $R$ variable. Thus, by virtue of standard results for ordinary differential equations [16], the solution to (54) exists, is unique, and its existence can be extended up to $t = 1$. Such a reconstruction procedure is stable in the sense of the following proposition: there exist two constants $C_0, C_1$ such that for every $x, y \in \Omega$, the stated estimate holds.

Proof of proposition 4.4. Let two data sets $H, \tilde{H} : \Omega \to S_3(\mathbb{R})$ with components in $W^{1,\infty}(\Omega)$ correspond to conductivities $(\sigma, \tilde{\sigma})$. Let $T, \tilde{T}$ be built from $H, \tilde{H}$ such that they satisfy conditions (25) and (29). We then define $R = S T^T$ and $\tilde{R} = \tilde{S} \tilde{T}^T$. For the rest of the proof, we write $R = (R_1^T, R_2^T, R_3^T)^T$ and similarly for $\tilde{R}$. Recall that we have equations of the type (55), where the $Q_\alpha^k$'s are linear combinations of the $\{V_{ij}^l\}$ defined in (28). Given the stability condition (29), we obtain estimates of the form (57). We then write the corresponding pointwise relation for the difference $R - \tilde{R}$. Recall that $R$ and $\tilde{R}$ satisfy $\|R\|^2 = \|\tilde{R}\|^2 = 3$ on $\Omega$, so each of their components is bounded by $\sqrt{3}$. Therefore, for any 9-index $\alpha$ with $|\alpha| = 3$, the corresponding monomials satisfy $|R^\alpha - \tilde{R}^\alpha| \le C \|R - \tilde{R}\|$, and we obtain the announced pointwise estimate. Writing $\partial_k \|R - \tilde{R}\|^2 = 2 (R - \tilde{R}) \cdot \partial_k (R - \tilde{R})$ and using the estimates in (57), we obtain the differential inequality for some constants $C_0$ and $C_1$. Let $x, y \in \Omega$. Applying Gronwall's lemma to the function $\|R - \tilde{R}\|$ over the segment $[x, y]$ parameterized by $\gamma_{x,y}$, we get the desired estimate. This concludes the proof.
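The integration step behind the local reconstruction can be sketched numerically. The block below uses the classical Runge-Kutta scheme and a toy right-hand side $G$ (a linear antisymmetric map, so that $G \cdot R = 0$ holds as in (52)); the actual $G$ of the paper is built from the data coefficients $Q_\alpha^k$ and is only assumed here:

```python
import numpy as np

def integrate_R(G, R0, n_steps=1000, t_end=1.0):
    """Classical RK4 for dR/dt = G(t, R) along a unit-time curve.
    When G(t, R) . R = 0 pointwise, ||R|| is conserved up to O(h^4)."""
    h = t_end / n_steps
    R, t = R0.astype(float).copy(), 0.0
    for _ in range(n_steps):
        k1 = G(t, R)
        k2 = G(t + h/2, R + h/2 * k1)
        k3 = G(t + h/2, R + h/2 * k2)
        k4 = G(t + h, R + h * k3)
        R += (h / 6) * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return R

# Toy right-hand side (NOT the paper's G): G(t, R) = A R with A antisymmetric,
# so that G . R = R^T A R = 0 and the norm ||R0|| = sqrt(3) is conserved.
rng = np.random.default_rng(0)
M = rng.standard_normal((9, 9))
A = M - M.T
R0 = np.eye(3).ravel()          # the identity rotation, viewed as a 9-vector
R1 = integrate_R(lambda t, R: A @ R, R0)
```

The conservation of $\|R\| = \sqrt{3}$ along the flow mirrors the argument that keeps the solution on the compact set $\sqrt{3}\, S^8$.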

Gram-Schmidt decomposition
The Gram-Schmidt decomposition is defined as follows and builds a matrix $R$ from $S$ that is orthogonal by construction. In the three-dimensional case, the transition matrix $T$ such that $R = S T^T$ is given by formula (59), and the vector fields $V_{ij}$ defined in (28) take the expression given in (60). When $n = 2$, $T$ and $V_{ij}$ are given by the top-left $2 \times 2$ blocks of (59) and (60), respectively.
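A minimal numerical sketch of such a construction, assuming Gram-Schmidt orthonormalization of the columns of $S$ with a triangular transition matrix $T$ satisfying $R = S T^T$. In the paper, $T$ is computed explicitly from the data matrix $H$ via (59); here it is derived from $S$ itself, for illustration only:

```python
import numpy as np

def gram_schmidt(S):
    """Orthonormalize the columns of S.  Returns R orthogonal and the
    lower-triangular transition matrix T such that R = S @ T.T."""
    n = S.shape[1]
    R = np.zeros_like(S, dtype=float)
    U = np.zeros((n, n))              # S = R @ U, with U upper triangular
    for j in range(n):
        v = S[:, j].astype(float).copy()
        for i in range(j):
            U[i, j] = R[:, i] @ S[:, j]
            v -= U[i, j] * R[:, i]    # remove components along earlier columns
        U[j, j] = np.linalg.norm(v)
        R[:, j] = v / U[j, j]
    T = np.linalg.inv(U).T            # S = R U  implies  R = S @ T.T
    return R, T

S = np.array([[2.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 3.0]])
R, T = gram_schmidt(S)
```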
Proof of lemma 4.1. For simplicity, we only prove the lemma in the two-dimensional case; the three-dimensional case is treated similarly. We first notice that $\|H^{-1}(x)\|_2 \le \|H^{-1}\|_\infty$ over $\Omega$. Since $H$ is a symmetric positive definite matrix, its smallest eigenvalue is bounded above by each diagonal entry, and so $\|H^{-1}\|_\infty^{-1} \le H_{jj}(x)$ over $\Omega$ for $1 \le j \le 2$. Using these estimates and Remark 3.1, we easily derive the stated inequalities for $1 \le i, j \le 2$, which proves the result.
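The pointwise matrix inequality used here, $\|H^{-1}(x)\|_2^{-1} = \lambda_{\min}(H(x)) \le H_{jj}(x)$, is the Rayleigh quotient bound for symmetric positive definite matrices; a quick numerical check on a random SPD matrix:

```python
import numpy as np

# For a symmetric positive definite matrix H, the Rayleigh quotient bound
# lambda_min(H) = ||H^{-1}||_2^{-1} <= e_j^T H e_j = H_jj holds for every j,
# which is the pointwise step used in the proof of the lemma.
rng = np.random.default_rng(1)
M = rng.standard_normal((2, 2))
H = M @ M.T + 0.1 * np.eye(2)             # a random SPD 2x2 "data" matrix
lam_min = 1.0 / np.linalg.norm(np.linalg.inv(H), ord=2)
```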

Global reconstructions in 3D
As we have seen in the previous section, the three-dimensional approach is locally similar to the two-dimensional one, as far as the pointwise derivation of reconstruction formulas is concerned. The main difference is that, because the result of [1, Theorem 4] does not hold when $n = 3$, one must work with an open covering of $X$ and build a global reconstruction procedure that patches the local reconstructions over the domains $\Omega_i$ together in a stable manner.

Proof of lemma 3.3
Let us first justify that the approach presented in the next section is valid, in the sense that we can build illuminations such that a pair $(\mathcal{O}, \tau)$ satisfies condition (22).
Proof of lemma 3.3. One way to fulfill condition (22) is to construct solutions whose gradients are tailored to satisfy this condition up to terms that can be made negligible. We do this by using Complex Geometrical Optics (CGO) solutions, a generalization of the harmonic complex plane waves of the form $e^{\rho \cdot x}$ with $\rho \in \mathbb{C}^n$ such that $\rho \cdot \rho = 0$, so that $\Delta e^{\rho \cdot x} = (\rho \cdot \rho)\, e^{\rho \cdot x} = 0$. These functions were first introduced in [11] in the context of linearized inverse problems, then extended in [25] to non-linear inverse problems. The construction can be carried out for any $n \ge 2$.
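The harmonicity of these complex plane waves is easy to verify numerically. The sketch below takes $\rho = (1, i) \in \mathbb{C}^2$, which satisfies $\rho \cdot \rho = 0$, and applies a second-order finite-difference Laplacian at a sample point:

```python
import numpy as np

# rho = (1, 1j) in C^2 satisfies rho . rho = 1 + (1j)**2 = 0, so the complex
# plane wave u(x) = exp(rho . x) is harmonic.  We check that a centered
# finite-difference Laplacian of u is (numerically) zero.
rho = np.array([1.0, 1.0j])
u = lambda x: np.exp(rho @ x)

x0, h = np.array([0.3, -0.2]), 1e-4
lap = sum(
    (u(x0 + h * e) - 2 * u(x0) + u(x0 - h * e)) / h**2
    for e in np.eye(2)
)
```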
Let $\{g_i\}_{1 \le i \le 4}$ be the traces of the above CGO solutions $(u_{\rho_1}^{\Re}, u_{\rho_1}^{\Im}, u_{\rho_2}^{\Re}, u_{\rho_2}^{\Im})$. These illuminations generate solutions that satisfy the desired properties. Moreover, any boundary conditions $\tilde{g}_i$ in a sufficiently small open neighborhood of the $g_i$ ensure that the maximum of the determinants stays bounded from below by some $c_0 > 0$. This concludes the proof of the lemma.

Global reconstruction procedure
Let us consider $m \ge 3$ solutions of (10) and assume that there exist an open covering $\mathcal{O} = \{\Omega_i\}_{1 \le i \le N}$ with $X \subset \cup_{i=1}^N \Omega_i$, a constant $c_0 > 0$, and a function $\tau$ such that assumption (22) holds. Let us fix $x_0 \in \Omega_{i_0}$ such that $\sigma(x_0)$ and $\{S_i^{\tau(i_0)}(x_0)\}_{1 \le i \le 3}$ are known. We now assume that there exist an integer $K > 0$ and two functions such that the required covering condition holds for every $x \in X$. An example of such a setting is shown in Figure 1.

Figure 1: An example of a setting for the reconstruction of $\sigma$ at some $x \in X$. One can see that $[x_0, y_2(x)] \subset \Omega_1$, $[y_2(x), y_3(x)] \subset \Omega_2$ and $[y_3(x), x] \subset \Omega_4$. In this case, we have $\psi(x, 1) = 1$, $\psi(x, 2) = 2$ and $\psi(x, 3) = 4$.
Here and below, the superscript $-T$ denotes the matrix inverse transpose. That the transfer equation (67) holds can be seen from the following facts:
• one can compute $S^{(k-1)}(y_k)$ from $R^{(k-1)}(y_k)$ by writing $S^{(k-1)}(y_k) = R^{(k-1)}(y_k) \, (T^{(k-1)}(y_k))^{-T}$.
Combining the three formulas above gives (67).
In order for this procedure to yield globally stable reconstructions, we need not only local stability inside each $\Omega_i \in \mathcal{O}$ (guaranteed by proposition 4.4), but also stability as one passes from $\Omega_{\psi(x,k-1)}$ to $\Omega_{\psi(x,k)}$ using formula (67). This is established in the following lemma.
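The patching of local reconstructions across the covering, chaining the reconstructed values at the interface points $y_k(x)$, can be illustrated by a toy one-dimensional analogue (all names here are hypothetical; this is an analogue of the strategy, not of formula (67) itself):

```python
import numpy as np

def reconstruct_patchwise(fprime, patches, f0):
    """1-D analogue of the patching strategy: recover f(x) from its
    derivative, known patch by patch on an ordered covering of [0, 1],
    by integrating on each patch and chaining values at the interfaces."""
    value = f0
    for a, b in patches:
        t = np.linspace(a, b, 201)
        y = fprime(t)
        # composite trapezoidal rule on this patch
        value = value + np.sum((y[1:] + y[:-1]) * np.diff(t) / 2.0)
    return value

# Example: f'(t) = cos(t) with f(0) = 0, so the exact value at x = 1 is sin(1).
patches = [(0.0, 0.4), (0.4, 0.7), (0.7, 1.0)]
approx = reconstruct_patchwise(np.cos, patches, 0.0)
```

As in the elliptic setting, an error made on one patch propagates to all subsequent patches, which is why the lemma below controls the stability of each transfer step.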
We now conclude the proof of theorem 3.4.
Here $i, j$ run over the set $\tau(\psi(x, K))$, and the superscript $(K)$ denotes the fact that we are at the $K$-th (i.e. last) step of the reconstruction. Writing out the corresponding decomposition, we obtain the following bounds. We now apply proposition 4.4 and lemma 5.1 alternately, $K$ times. Define $C_4 = \max(C_0, C_2)$ and $C_5 = \max(C_1, C_3)$, where $C_0, C_1$ are given in proposition 4.4 and $C_2, C_3$ are given in lemma 5.1. Using the fact that $R^{(1)}(x_0)$ and $\tilde{R}^{(1)}(x_0)$ are built from $\{S_l^{\tau(i_0)}(x_0), \tilde{S}_l^{\tau(i_0)}(x_0)\}_{1 \le l \le 3}$, we obtain an inequality of the form (72). Summing the three bounds (70), (71) and (72), we obtain the desired estimate. Since $\varepsilon(x)$ is bounded by $|\varepsilon(x_0)| + \Delta(X) \|\nabla \varepsilon\|_\infty$ for any $x \in X$, inequality (24) is proved.