
A theoretical study for RTE-based parameter identification problems


Published 1 August 2013 © 2013 IOP Publishing Ltd
Citation: Jinping Tang et al 2013 Inverse Problems 29 095002. DOI: 10.1088/0266-5611/29/9/095002


Abstract

This paper provides a theoretical study of reconstructing absorption and scattering coefficients based on the radiative transport equation (RTE) by using the total variation regularization method. The function space for solutions of the RTE is a natural one from the form of the boundary value problem of the RTE. We analyze the continuity and differentiability of the forward operator. We then show that the total variation regularization method can be applied for a stable solution. Convergence of the total variation-minimizing solution in the sense of the Bregman distance is also obtained.


1. Introduction

The radiative transport equation (RTE) as a forward model to describe the propagation or radiation of particles inside a medium arises in a wide variety of applications in sciences and engineering, including neutron transport, optical molecular imaging, infrared and visible light photography in space and atmosphere, heat transfer, astrophysics, atmospheric science (which is also known as remote sensing) and so on. In these areas, based on suitably chosen function spaces, solvability of the RTE, properties of the solutions and numerical analysis of the RTE are intensively studied. We refer the reader to [1, 11–13, 31, 34, 36, 37] and the references therein for details on this subject.

As RTE-based inverse problems are difficult to study theoretically and to solve numerically, many studies of RTE-based inverse problems have so far been limited to employing an approximation of the RTE as the forward model. One of the most popular approximation models is based on the diffusion approximation of the RTE. Theoretical and numerical analysis of diffusion-based biomedical molecular imaging problems can be found in numerous papers; for example, in [16], following the procedure of the standard regularization method with a quadratic penalty term, a rigorous theoretical analysis of diffusion-based coefficient reconstruction in optical tomography is presented; in [24], mathematical theory and numerical analysis of diffusion-based bioluminescence tomography (BLT) are provided; in [25], solution existence and convergence of numerical methods are established for a BLT approach which optimizes optical coefficients when an underlying bioluminescent source distribution is reconstructed to match the measured data. In addition to diffusion-based biomedical molecular imaging, in [22], a theoretical framework of x-ray dark tomography based on the modified Leakeas–Larsen approximation is investigated.

Recently, there has been much research interest in inverse problems based on the RTE in biomedical imaging applications. For linear light source inverse problems, in [23], a comprehensive mathematical framework on the solution existence, uniqueness, continuous dependence on the data and convergence of numerical methods for the BLT problem via standard Tikhonov regularization is established, and in [21, 30], numerical implementations in the two-dimensional case are described. For nonlinear coefficient identification problems, in [7, 33], by analyzing the singular decomposition of the albedo operator in different regimes, stability estimates are derived and uniqueness of the coefficient identification is proved, and in [2, 3, 14, 15], numerical techniques are proposed based on the L1 space, considered to be the 'physically relevant' function space. For general references on the mathematical background and imaging techniques, we refer the reader to [32] and the references therein.

In this paper, we provide a theoretical study of reconstructing the absorption and scattering coefficients based on the RTE by using the total variation regularization method.

In the study of the forward model, we use the $L^{\infty }$-norm for the parameters and choose a function space HA(X × Ω) for the solution of the RTE problem. The function space HA(X × Ω) is a natural one from the form of the RTE and the inflow boundary condition, i.e., it is the space of all functions that are square integrable together with the spatial directional derivative along the angular direction and whose values on the incoming boundary are square integrable. We show the continuity and differentiability of the forward operator with respect to the chosen norms.

The inverse problem of parameter identification is ill-posed, and regularization is needed in solving the inverse problem. Since the parameters are piecewise smooth functions or even piecewise constants in applications, it is beneficial to use total variation regularization. Regularization methods for parameter identification have attracted much attention, and many papers are devoted to studying the solution existence, uniqueness and, most importantly, the stability, convergence and convergence rate of regularization methods. A general reference on regularization methods is [17]. For total variation regularization methods applied to nonlinear inverse problems, theoretical analysis of the convergence and convergence rate can be found in [6, 8–10, 18, 20, 28, 29, 35]. From these papers, under some source conditions, if the forward operator satisfies the nonlinearity conditions in the sense of a metric or the Bregman distance, the convergence rate can be obtained. But in many practical parameter identification problems, the nonlinearity conditions for the forward problem are difficult to check, especially when the selected parameter space is a non-reflexive Banach space, e.g. a BV space. In [26], with the use of the compactness properties of the forward solution spaces, a convergence rate for the total variation regularization of coefficient identification problems in elliptic problems is obtained. In this paper, we demonstrate that the total variation regularization method can be applied for a stable solution and show the convergence of the solutions of the regularized problem as the regularization parameter approaches zero. The convergence is in the sense of the Bregman distance.

The rest of the paper is organized as follows. In section 2, we introduce the notation and function spaces used in this paper and then review the solvability result of the RTE problem based on a naturally chosen function space HA(X × Ω). In section 3, we analyze the continuity and differentiability of the forward operator and then identify the adjoint problem. In section 4, we study the solution existence, stability estimate and convergence of the total variation-minimizing solutions in the sense of Bregman distance.

2. Notation and physical model

The propagation of light within biological media is described by the RTE. In this section, we discuss the RTE problem.

2.1. Notation and the radiative transport equation

We assume that the spatial domain $X\subset \mathbb {R}^{3}$ is bounded with a C1 boundary ∂X. The outward normal vector is denoted by ${\boldsymbol {\nu }}({\boldsymbol {x}})$ for ${\boldsymbol {x}}\in \partial X$. By $\Omega :=\left\lbrace {\boldsymbol {\omega }}\in {\mathbb {R}^{3}}\mid |{\boldsymbol {\omega }}|=1\right\rbrace$, we denote the unit sphere. We will need the following incoming and outgoing boundaries as subsets of Γ = ∂X × Ω:

$\Gamma _{\pm }:=\lbrace ({\boldsymbol {x}},{\boldsymbol {\omega }})\in \partial X\times \Omega \mid \pm \,{\boldsymbol {\omega }}\cdot {\boldsymbol {\nu }}({\boldsymbol {x}})>0\rbrace .$

Denote by ${\rm d}\sigma ({\boldsymbol {\omega }})$ the infinitesimal area element on the unit sphere Ω. In the spherical coordinate system, ${\boldsymbol {\omega }}=(\sin \theta \cos \psi ,\,\sin \theta \sin \psi ,\,\cos \theta )^T$, for 0 ⩽ θ ⩽ π and 0 ⩽ ψ < 2π, we have ${\rm d}\sigma ({\boldsymbol {\omega }})=\sin \theta \,{\rm d}\theta \,{\rm d}\psi$. We need an integral operator S defined by

Equation (2.1): $(Sv)({\boldsymbol {x}},{\boldsymbol {\omega }})=\int _{\Omega }\eta ({\boldsymbol {x}},{\boldsymbol {\omega }}\cdot \hat{{\boldsymbol {\omega }}})\,v({\boldsymbol {x}},\hat{{\boldsymbol {\omega }}})\,{\rm d}\sigma (\hat{{\boldsymbol {\omega }}})$

with η a nonnegative normalized phase function:

Equation (2.2): $\int _{\Omega }\eta ({\boldsymbol {x}},{\boldsymbol {\omega }}\cdot \hat{{\boldsymbol {\omega }}})\,{\rm d}\sigma (\hat{{\boldsymbol {\omega }}})=1$ for a.e. ${\boldsymbol {x}}\in X$ and all ${\boldsymbol {\omega }}\in \Omega$.

In many applications, the function η is independent of ${\boldsymbol {x}}$. However, in our general study, we allow η to depend on ${\boldsymbol {x}}$. Moreover, we can allow η to be a general function of ${\boldsymbol {x}}$, ${\boldsymbol {\omega }}$ and $\hat{{\boldsymbol {\omega }}}$, i.e., in the form $\eta ({\boldsymbol {x}},{\boldsymbol {\omega }},\hat{{\boldsymbol {\omega }}})$. One well-known example is the Henyey–Greenstein phase function (cf [27]):

Equation (2.3): $\eta (t)=\frac{1}{4\pi }\,\frac{1-g^{2}}{(1+g^{2}-2gt)^{3/2}},\qquad t={\boldsymbol {\omega }}\cdot \hat{{\boldsymbol {\omega }}},$

where the parameter g ∈ ( − 1, 1) is the anisotropy factor of the scattering medium. Note that g = 0 for isotropic scattering, g > 0 for forward scattering and g < 0 for backward scattering.
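As a quick numerical illustration (not part of the paper), the following Python sketch evaluates the Henyey–Greenstein phase function in the form (2.3) with $t={\boldsymbol {\omega }}\cdot \hat{{\boldsymbol {\omega }}}$ and checks the normalization (2.2) by quadrature; the function and variable names are ours.

```python
import numpy as np

def henyey_greenstein(t, g):
    """Henyey-Greenstein phase function eta(t), t = omega . omega_hat."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * t) ** 1.5)

# Check the normalization (2.2): by symmetry,
# int_Omega eta dsigma = 2*pi * int_{-1}^{1} eta(t) dt.
g = 0.8                                  # forward-scattering medium (g > 0)
t = np.linspace(-1.0, 1.0, 20001)
print(2.0 * np.pi * np.trapz(henyey_greenstein(t, g), t))   # ~1.0
```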

With the above notation, we can write the boundary value problem (BVP) of the RTE:

Equation (2.4): ${\boldsymbol {\omega }}\cdot \boldsymbol{\nabla }u({\boldsymbol {x}},{\boldsymbol {\omega }})+\mu _{t}({\boldsymbol {x}})\,u({\boldsymbol {x}},{\boldsymbol {\omega }})=\mu _{s}({\boldsymbol {x}})\,(Su)({\boldsymbol {x}},{\boldsymbol {\omega }})+f({\boldsymbol {x}},{\boldsymbol {\omega }})$ in $X\times \Omega$,

Equation (2.5): $u({\boldsymbol {x}},{\boldsymbol {\omega }})=u_{\rm in}({\boldsymbol {x}},{\boldsymbol {\omega }})$ on $\Gamma _{-}$.

Here μt = μa + μs, μa is the macroscopic absorption cross section, μs is the macroscopic scattering cross section, uin is the incoming flow on the incoming boundary and $f({\boldsymbol {x}},{\boldsymbol {\omega }})$ is the internal source function. The expression ${\boldsymbol {\omega }}\cdot \boldsymbol{\nabla }u({\boldsymbol {x}},{\boldsymbol {\omega }})$ is a directional derivative,

${\boldsymbol {\omega }}\cdot \boldsymbol{\nabla }u({\boldsymbol {x}},{\boldsymbol {\omega }})=\sum _{i=1}^{3}\omega _{i}\,\frac{\partial u({\boldsymbol {x}},{\boldsymbol {\omega }})}{\partial x_{i}}.$

In this paper, we consider the inverse problem of parameter identification through knowledge of the incoming flow and corresponding measurement data um on the outgoing boundary. There is no internal source so that the forward problem is

Equation (2.6): ${\boldsymbol {\omega }}\cdot \boldsymbol{\nabla }u({\boldsymbol {x}},{\boldsymbol {\omega }})+\mu _{t}({\boldsymbol {x}})\,u({\boldsymbol {x}},{\boldsymbol {\omega }})=\mu _{s}({\boldsymbol {x}})\,(Su)({\boldsymbol {x}},{\boldsymbol {\omega }})$ in $X\times \Omega$,

Equation (2.7): $u({\boldsymbol {x}},{\boldsymbol {\omega }})=u_{\rm in}({\boldsymbol {x}},{\boldsymbol {\omega }})$ on $\Gamma _{-}$.

We impose the following assumptions on the coefficients.

Assumptions

  • (A1) The function $\mu _{t}({\boldsymbol {x}})$ is uniformly positive and bounded; i.e., there exist two positive constants $\mu _{t}^{1}$ and $\mu _{t}^{2}$, such that $0<\mu _{t}^{1}\le \mu _{t}\le \mu _{t}^{2}<\infty$ a.e. in X.
  • (A2) The function $\mu _{s}({\boldsymbol {x}})$ is uniformly positive and bounded; i.e., there exist two positive constants $\mu _{s}^{1}$ and $\mu _{s}^{2}$, such that $0<\mu _{s}^{1}\le \mu _{s}\le \mu _{s}^{2}<\infty$ a.e. in X.
  • (A3) There is a constant c0 > 0 such that μt − μsc0 > 0 a.e. in X.

Next, we introduce a function space for $u({\boldsymbol {x}},{\boldsymbol {\omega }})$ and show the regularity of the solution of the above BVP within this function space framework.

2.2. Function spaces

We introduce some function spaces that will be needed later in the paper. The space

$H^{1,2}(X\times \Omega ):=\lbrace v\in L^{2}(X\times \Omega )\mid {\boldsymbol {\omega }}\cdot \boldsymbol{\nabla }v\in L^{2}(X\times \Omega )\rbrace$

is a Hilbert space with the inner product

and the corresponding norm $\Vert v\Vert _{H^{1,2}(X\times \Omega )}:=(v,v)_{H^{1,2}(X\times \Omega )}^{1/2}$. We also need the function spaces $L^{2}(\Gamma _{\pm })$ on Γ±, which are Hilbert spaces of functions v on Γ± with the inner products

and corresponding norms $\Vert v\Vert _{L^{2}(\Gamma {\pm })}$.

From a result on the trace of an H1, 2(X × Ω) function ([1]), we know that if $v\in H^{1,2}(X\times \Omega )$ and $v|_{\Gamma _{-}}\in {L^{2}(\Gamma _{-})}$, then $v|_{\Gamma _{+}}\in {L^{2}(\Gamma _{+})}$, and for some constant c depending only on X, the following inequality holds:

Equation (2.8)

It is known that for each $u_{\rm in}\in L^{2}(\Gamma _{-})$, the problem (2.6)–(2.7) has a unique solution $u\in H^{1,2}(X\times \Omega )$. The solution of the BVP (2.6)–(2.7) will be sought from the function space

with the norm

Proposition 2.1. If $v\in H_{A}(X\times \Omega )$, then $v|_{\Gamma _{\pm }}\in {L^{2}(\Gamma _{\pm })}$, and for some constant c depending only on X,

Obviously, we can take c = 1 in the above inequality in bounding $\Vert v|_{\Gamma _{-}}\Vert _{L^{2}(\Gamma _{-})}$.

Corresponding to equation (2.6), we introduce the linear integro-differential operator L by the formula

$Lu:={\boldsymbol {\omega }}\cdot \boldsymbol{\nabla }u+\mu _{t}u-\mu _{s}Su.$

Then we have the following result (cf [1, theorem 3.1]).

Proposition 2.2. Under the assumptions on the coefficients, the norm $\Vert \cdot \Vert _{H_{A}(X\times \Omega )}$ on HA(X × Ω) is equivalent to the following norm:

2.3. Solvability of the RTE problem

Consider the BVP (2.6)–(2.7), assuming that $u_{\rm in}\in L^{2}(\Gamma _{-})$. Introduce the following variational problem.

Variational problem. Find a function $u\in H_{A}(X\times \Omega )$ such that

Equation (2.9)

where

Equation (2.10)

The following result is standard (cf [1]).

Theorem 2.1. Assume (A1)–(A3). Let $u_{\rm in}\in L^{2}(\Gamma _{-})$. Then

  • (1)  
    there exists a unique solution $u\in H_{A}(X\times \Omega )$ to the variational problem (2.9);
  • (2)  
    the solution u satisfies the equation (2.6) a.e. in X × Ω and the boundary condition (2.7) a.e. on Γ;
  • (3)  
    for some constants c and $\tilde{c}>0,$ independent of u and uin, the following bounds hold:
    Equation (2.11)

Based on (2.9), we introduce the weak formulation of the BVP (2.6)–(2.7):

Equation (2.12)

where

Equation (2.13)

Recall the assumptions (A1)–(A3). With the norm equivalence result, proposition 2.2, we can show the boundedness and $\mathit {V}$-ellipticity of the bilinear form a( ·, ·):

Applying the Lax–Milgram lemma, we conclude that the weak formulation (2.12) has a unique solution $u\in H_{A}(X\times \Omega )$. We will use the weak formulation to analyze the properties of the forward operator.
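For intuition about how the forward problem behaves numerically, the sketch below implements a classical discrete-ordinates solver with source iteration for a strongly simplified model: a one-dimensional slab with isotropic scattering. This toy model and all names in it are our own illustration under assumptions (A1)–(A3); it is not the weak formulation above nor any discretization analyzed in the paper.

```python
import numpy as np

def solve_rte_slab(mu_t, mu_s, u_in_left, u_in_right, L=1.0, n_ord=16,
                   tol=1e-10, max_iter=500):
    """Toy 1D slab RTE: m du/dx + mu_t u = (mu_s/2) int_{-1}^{1} u dm'.

    mu_t, mu_s : arrays of cell values satisfying (A1)-(A3)
    u_in_left  : inflow value at x = 0 for ordinates with m > 0
    u_in_right : inflow value at x = L for ordinates with m < 0
    """
    nx = len(mu_t)
    dx = L / nx
    m_ord, w = np.polynomial.legendre.leggauss(n_ord)   # ordinates/weights on [-1, 1]
    u = np.zeros((n_ord, nx))
    phi = np.zeros(nx)                                  # angular average of u
    for _ in range(max_iter):
        q = 0.5 * mu_s * phi                            # isotropic scattering source
        for j, m in enumerate(m_ord):                   # upwind transport sweeps
            if m > 0:                                   # sweep left -> right
                prev = u_in_left
                for i in range(nx):
                    prev = (m / dx * prev + q[i]) / (m / dx + mu_t[i])
                    u[j, i] = prev
            else:                                       # sweep right -> left
                prev = u_in_right
                for i in reversed(range(nx)):
                    prev = (-m / dx * prev + q[i]) / (-m / dx + mu_t[i])
                    u[j, i] = prev
        phi_new = w @ u                                 # quadrature for int u dm'
        converged = np.max(np.abs(phi_new - phi)) < tol
        phi = phi_new
        if converged:
            break
    return u, m_ord, w

# Example: constant coefficients with mu_t - mu_s >= c0 > 0, as in (A3).
nx = 200
u, m_ord, w = solve_rte_slab(np.full(nx, 2.0), np.full(nx, 1.5), 1.0, 0.0)
```

Source iteration converges here because μs/μt < 1, which mirrors assumption (A3).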

2.4. Measurements

The measured light intensity in the optical tomography experiments is generally taken on the boundaries of the regions of interest. There are mainly two types of imaging systems for measurements frequently used in optical tomography, namely, optical fiber-based imaging systems and CCD camera-based imaging systems ([2, 33]). We can treat the measurement data from the imaging system as angularly averaged data or angularly resolved data. Let B be the measurement operator acting in the space HA(X × Ω). For angularly averaged data, the measurement operator can be defined as

Equation (2.14)

where

This is the outgoing current of photons through the boundary. For the angularly resolved data, the measurement operator can be defined as ([33])

This is simply the outgoing boundary value of the solution of the transport equation. Obviously, the angularly resolved data are richer than the angularly averaged data because angular dependence is retained. However, in practice, collecting angularly resolved data is expensive, and angularly averaged data are more commonly used. In this paper, we will focus on the angularly averaged measurement data (2.14).
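As a small illustration of the angularly averaged measurement, the sketch below approximates the outgoing current $\int _{\lbrace {\boldsymbol {\omega }}\cdot {\boldsymbol {\nu }}>0\rbrace }{\boldsymbol {\omega }}\cdot {\boldsymbol {\nu }}({\boldsymbol {x}})\,u({\boldsymbol {x}},{\boldsymbol {\omega }})\,{\rm d}\sigma ({\boldsymbol {\omega }})$ at a fixed boundary point by a product midpoint rule on the sphere. The ${\boldsymbol {\omega }}\cdot {\boldsymbol {\nu }}$ weighting is the standard choice for the outgoing current; the exact weight used in the paper is the one in (2.14), and the names below are ours.

```python
import numpy as np

def outgoing_current(u_boundary, normal, n_theta=64, n_psi=128):
    """Approximate int_{omega.nu > 0} (omega.nu) u(x, omega) dsigma(omega)."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta        # midpoints in theta
    psi = (np.arange(n_psi) + 0.5) * 2.0 * np.pi / n_psi        # midpoints in psi
    dtheta, dpsi = np.pi / n_theta, 2.0 * np.pi / n_psi
    total = 0.0
    for th in theta:
        for ps in psi:
            omega = np.array([np.sin(th) * np.cos(ps),
                              np.sin(th) * np.sin(ps),
                              np.cos(th)])
            cos_out = float(omega @ normal)
            if cos_out > 0.0:                                   # outgoing directions only
                total += cos_out * u_boundary(omega) * np.sin(th) * dtheta * dpsi
    return total

# Isotropic boundary value u = 1: the exact outgoing current is pi.
print(outgoing_current(lambda omega: 1.0, np.array([0.0, 0.0, 1.0])))   # ~3.1416
```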

Proposition 2.3. The measurement operator B: HA(X × Ω) → L2(∂X) for angularly averaged data is well defined, linear and bounded, and for some constant c > 0 depending only on X, we have

Equation (2.15)

Proof. From proposition 2.1,

Let us bound $\Vert Bu\Vert _{L^{2}(\partial X)}$ in terms of $\Vert u\Vert _{L^{2}(\Gamma _{+})}$:

Thus, (2.15) holds. □

3. Forward operator

In an optical tomography problem, the object is illuminated by an incident light source on the boundary and the resulting light intensity u is measured on the boundary, denoted by Bu; as mentioned previously, we use the angularly averaged measurement. This establishes a nonlinear relation between the optical coefficients μt and μs characterizing the medium and the corresponding measurements Bu on the boundary ∂X. The mathematical model is the following.

Forward operator. For a given light source $u_{\rm in}\in L^{2}(\Gamma _{-})$, define

Equation (3.1)

where u denotes the solution of the BVP (2.6)–(2.7), whereas the admissible set is

From theorem 2.1 and proposition 2.3, we see that the forward operator is well defined on D(F).

In the following, we analyze properties of the forward operator.

3.1. Continuity and differentiability

To prove the continuity and differentiability of the forward operator F, we first recall the solvability of the governing BVP

where $f\in L^{2}(X\times \Omega )$. Similar to theorem 2.1, the following result holds.

Theorem 3.1. Assume (A1)–(A3), $f\in L^{2}(X\times \Omega )$ and $u_{\rm in}\in L^{2}(\Gamma _{-})$. Then there is a unique solution $u\in H_{A}(X\times \Omega )$ to the variational problem:

Moreover,

with a constant C depending only on the domain and the bounds of the coefficients.

Theorem 3.2. Assume (A1)–(A3) on (μt, μs) and $(\tilde{\mu }_{t},\tilde{\mu }_{s})$, and let u and $\tilde{u}$ denote the solutions of (2.6)–(2.7) with $u_{\rm in}\in L^{2}(\Gamma _{-})$ corresponding to the coefficients (μt, μs) and $(\tilde{\mu }_{t},\tilde{\mu }_{s})$, respectively. Then

Equation (3.2)

with a constant C depending only on the domain and the bounds of the coefficients.

Proof. We use the weak formulation (2.12) for the solution $\tilde{u}$ with the parameters $(\tilde{\mu }_{t},\tilde{\mu }_{s})$ and the solution u with the parameters (μt, μs). Then the difference $w:=u-\tilde{u}$ satisfies

where $\tilde{L}$ denotes the linear operator in (2.6) corresponding to $(\tilde{\mu }_{t},\tilde{\mu }_{s})$ and we used the fact that $\tilde{L}\tilde{u}=0$ a.e. in X × Ω.

Applying theorem 3.1,

Observe that

We can then obtain the inequality (3.2). □

As a corollary of theorem 3.2 and proposition 2.3, we obtain the Lipschitz continuity of the forward operator.

Theorem 3.3 (Lipschitz continuity). Let the assumptions (A1)–(A3) hold. Then the forward operator F: D(F) → L2(∂X) is Lipschitz continuous with respect to the topologies of $L^{\infty }(X)\times L^{\infty }(X)$ and L2(∂X).

The Lipschitz continuity of the forward operator F indicates a certain differentiability. Differentiability is very important in analyzing convergence properties of the regularization method.

Theorem 3.4 (Differentiability). Let $u_{\rm in}\in L^{2}(\Gamma _{-})$ be given, and let (μt, μs) ∈ D(F), (δμt, δμs) ∈ D(F) be such that (μt + αδμt, μs + αδμs) ∈ D(F) for all $\alpha \in {\mathbb {R}}$ with |α| sufficiently small. Then the Fréchet derivative of the forward operator F at (μt, μs) in the direction (δμt, δμs) is given by

where w is the solution of the following problem:

Equation (3.3)

for all $v\in H_{A}(X\times \Omega )$, and u is the solution of the BVP (2.6)–(2.7). Moreover,

Equation (3.4)

with a constant C depending only on the domain and the bounds of the coefficients.

The proof of theorem 3.4 is similar to that of theorem 3.2 and that of theorem 3.5 below, and is hence omitted. Thus, F'(μt, μs) defines a bounded linear operator from $L^{\infty }(X)\times L^{\infty }(X)$ to L2(∂X) and the estimate (3.4) holds.

Theorem 3.5. The linear Taylor expansion of the forward operator F around (μt, μs) ∈ D(F) is second-order accurate, i.e., for $(\tilde{\mu }_{t},\tilde{\mu }_{s})\in {D(F)}$ the following bound holds:

Equation (3.5)

with a constant CL > 0 depending only on the domain and the bounds of the coefficients, where $\tilde{u}$ and u are the solutions of the BVP (2.6)–(2.7) corresponding to $(\tilde{\mu }_{t},\tilde{\mu }_{s})$ and (μt, μs), respectively, and w is the solution to the problem (3.3).

Proof. Let $m:=\tilde{u}-u-w$. Using (2.12) and (3.3), we can obtain

for all $v\in H_{A}(X\times \Omega )$. Applying theorems 3.1 and 3.2, we obtain from the previous equality that

The result (3.5) then follows. □
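The second-order bound (3.5) can be observed numerically with a standard Taylor-remainder test: the residual $\Vert F(\mu +\alpha \,\delta \mu )-F(\mu )-\alpha F^{\prime }(\mu )\delta \mu \Vert$ should decay like α2. The sketch below assumes two user-supplied callables, forward and derivative, built on top of some discrete forward solver (for example, the slab toy above); these names and the interface are ours, not the paper's.

```python
import numpy as np

def taylor_remainder_test(forward, derivative, mu, dmu, alphas=(1e-1, 5e-2, 2.5e-2)):
    """Check ||F(mu + a*dmu) - F(mu) - a*F'(mu)[dmu]|| = O(a^2), cf. theorem 3.5.

    forward(mu)         -> measurement vector F(mu)
    derivative(mu, dmu) -> directional derivative F'(mu)[dmu]
    """
    F0, dF = forward(mu), derivative(mu, dmu)
    residuals = [np.linalg.norm(forward(mu + a * dmu) - F0 - a * dF) for a in alphas]
    # Observed convergence orders; values close to 2 are consistent with (3.5).
    orders = [np.log(residuals[i] / residuals[i + 1]) / np.log(alphas[i] / alphas[i + 1])
              for i in range(len(alphas) - 1)]
    return residuals, orders
```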

3.2. Adjoint problem

A formula for the derivative operator F'(μt, μs) was derived in theorem 3.4. Here, we derive an adjoint equation for the calculation of the adjoint of the derivative operator F'(μt, μs). The adjoint equation is important in numerical calculations as well as in analyzing the back transport effect in applications. Recall that the Fréchet derivative of the forward operator F can be computed by solving the following BVP (cf (3.3)):

Equation (3.6)

Equation (3.7)

Theorem 3.6 (Adjoint problem). Let (μt, μs) ∈ D(F). Assume $u_{\rm in}\in L^{\infty }(\Gamma _{-})$. Then the adjoint of the derivative operator F'(μt, μs) is given by

where

Equation (3.8)

Equation (3.9)

Here, u denotes the solution to the forward problem (2.6)–(2.7) and $\varphi ({\boldsymbol {x}},{\boldsymbol {\omega }})$ is the solution of the following adjoint problem:

Equation (3.10)

Equation (3.11)

Proof. We multiply both sides of the equation (3.6) by φ and integrate

Equation (3.12)

The left side of (3.12) is equal to

whereas the right side of (3.12) is

So

with dμt defined by (3.8) and dμs defined by (3.9).

Since $u_{\rm in}\in L^{\infty }(\Gamma _{-})$, by [1, lemma 3.2], $u\in L^{\infty }(X\times \Omega )$. By a variant of theorem 2.1, φ ∈ L2(X × Ω). Then it is easy to see that $d_{\mu _{t}}\in L^{2}(X)$. Similarly, $d_{\mu _{s}}\in L^{2}(X)$. □
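In a discrete setting, applying the adjoint $F^{\prime }(\mu _{t},\mu _{s})^{*}$ to a boundary residual therefore requires one forward solve for u, one adjoint solve for φ driven by the residual, and angular integrations of products of u and φ. The sketch below is only a schematic of that assembly step: the integrands used, $-\int _{\Omega }u\varphi \,{\rm d}\sigma$ for the μt-component and $\int _{\Omega }(Su)\varphi \,{\rm d}\sigma$ for the μs-component, are the adjoint-state expressions commonly used for the RTE and are stated here as an assumption; the paper's precise formulas are (3.8) and (3.9).

```python
import numpy as np

def adjoint_gradient(u, phi, Su, weights):
    """Assemble the two components of F'(mu_t, mu_s)* r from angular samples.

    u, phi, Su : arrays of shape (n_ord, n_cells) sampling the forward solution u,
                 the adjoint solution phi (driven by the residual r), and Su
    weights    : angular quadrature weights, shape (n_ord,)
    The integrands are the commonly used adjoint-state forms (an assumption here);
    see (3.8)-(3.9) for the exact expressions.
    """
    d_mu_t = -np.einsum('j,jc->c', weights, u * phi)    # ~ -int_Omega u*phi dsigma
    d_mu_s = np.einsum('j,jc->c', weights, Su * phi)    # ~  int_Omega (Su)*phi dsigma
    return d_mu_t, d_mu_s
```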

4. Inverse problem

The goal of the inverse problem is to reconstruct the optical properties of tissues, i.e., the functions $\mu _{t}({\boldsymbol {x}})$ and $\mu _{s}({\boldsymbol {x}})$, from knowledge of the map Λ: uin ↦ Bu. In practice, only finitely many source–detector pairs can be used in the measurement process, i.e., we have only partial information on the map Λ. This makes the problem underdetermined and the parameter identification problem severely ill-posed. To overcome the ill-posedness, multiple excitations and finitely many observations are used in the numerical analysis, where the finite observations are the discretized measurements at some locations on ∂X ([16]). Here, for convenience in presenting the theory, we consider the observation corresponding to every excitation in continuous form. We perform the experiment several times and determine (μt, μs) by matching the predictions computed from the forward problem with the measured data on ∂X. More precisely, let k0 be the number of experiments. For k = 1, ..., k0, denote by $u_{{\rm in},k}$ the inflow value function for the kth experiment and by mk the corresponding angularly averaged measurement. Then our inverse problem is to determine (μt, μs) such that for k = 1, ..., k0, the solution uk := uk(μt, μs) of the BVP (2.6)–(2.7) with uin replaced by $u_{{\rm in},k}$ satisfies

Mathematically, it means to solve a system of nonlinear equations:

Equation (4.1): $F_{k}(\mu _{t},\mu _{s};{\boldsymbol {x}})=m_{k}({\boldsymbol {x}}),\quad {\boldsymbol {x}}\in \partial X,\ k=1,\ldots ,k_{0},$

where $F_{k}(\mu _{t},\mu _{s};{\boldsymbol {x}})$ is the forward operator corresponding to the inflow boundary value $u_{{\rm in},k}$. Because of the experimental environment and the limited accuracy of laboratory equipment, the measurement mk is not exact. For k = 1, ..., k0, let δk ⩾ 0 be the noise level, i.e. $\Vert m_{k}-m_{t,k}\Vert _{L^{2}(\partial X)}\le \delta _{k}$, where mt, k denotes the true value of mk on ∂X. Then define a noise level vector $\delta =(\delta _{1},\ldots ,\delta _{k_{0}})$. The inverse problem (4.1) is ill-posed; hence regularization is needed in order to obtain a stable solution. To handle possibly discontinuous or highly oscillatory coefficients, we use the output least-squares method with total variation regularization. That is, we minimize the following functional:

Equation (4.2)

over the admissible set

for the coefficient pair μ = (μt, μs). Here β > 0 is a regularization parameter,

The symbol ∫X|∇μt| denotes the total variation semi-norm ([19]) of $\mu _{t}\in L^{1}(X)$:

and ∫X|∇μs| is defined similarly. BV(X) denotes the bounded variation space

It is a Banach space under the norm

For k = 1, ..., k0, let $\lbrace m_{t,k}\rbrace _{k=1}^{k_0}$ be the exact measurement data on ∂X. To simplify the notation, we write μ for (μt, μs), and denote the first term on the right-hand side of (4.2) by J1(μ) and the second term by βJ2(μ). Then our purpose is to analyze the minimization problem:

In the following, we will prove that the problem (Pβ) has a solution μβ. Furthermore, the problem

also has a solution μ†, which is called the total variation-minimizing solution of the system of equations Fk(μ) = mt, k, 1 ⩽ k ⩽ k0, where

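To make the regularized objective concrete in a discrete setting, the sketch below evaluates a discrete analogue of Jβ(μ) = J1(μ) + βJ2(μ): squared data misfits summed over the k0 experiments plus β times an isotropic finite-difference total variation of μt and μs on a 2D pixel grid. The factor 1/2 in the misfit and the particular discrete TV are our own illustrative choices, not taken from (4.2).

```python
import numpy as np

def discrete_tv(mu, h=1.0):
    """Isotropic finite-difference approximation of int_X |grad mu| on a 2D grid."""
    gx = np.diff(mu, axis=0, append=mu[-1:, :]) / h     # forward differences with
    gy = np.diff(mu, axis=1, append=mu[:, -1:]) / h     # replication at the boundary
    return np.sum(np.sqrt(gx**2 + gy**2)) * h**2

def objective(mu_t, mu_s, forward_ops, data, beta, h=1.0):
    """Discrete analogue of J_beta(mu) = J_1(mu) + beta * J_2(mu).

    forward_ops : list of callables F_k(mu_t, mu_s) returning boundary predictions
    data        : list of measured boundary data m_k (matching shapes)
    """
    misfit = sum(0.5 * np.sum((F(mu_t, mu_s) - m) ** 2)      # J_1: output least squares
                 for F, m in zip(forward_ops, data))
    tv = discrete_tv(mu_t, h) + discrete_tv(mu_s, h)         # J_2: TV of both coefficients
    return misfit + beta * tv
```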
4.1. The regularized problems

The main result here concerns the solution existence for both problems (Pβ) and (P). First, we recall some properties of the space BV(X) ([5, 19]).

Proposition 4.1. 

  • (i)  
    Let {μn} be a bounded sequence in BV(X). Then there are a subsequence $\lbrace \mu ^{n_j}\rbrace$ and an element μ ∈ BV(X) such that $\lbrace \mu ^{n_j}\rbrace$ converges to μ in L1(X).
  • (ii)  
    Let {μn} be a sequence in BV(X) which converges to μ in L1(X). Then μ ∈ BV(X) and

Note that if {μn}⊂BV(X) converges to μ ∈ BV(X), then

Equation (4.3)

Theorem 4.1. 

  • (i)  
    A solution μβ of the problem (Pβ) exists.
  • (ii)  
    A solution μ† of the problem (P) exists.

Proof. (i) It is clear that Jβ(μ) ⩾ 0 on the admissible set Qad. There exists a minimizing sequence {μn} ⊂ Qad such that

Equation (4.4)

Hence {Jβ(μn)} is bounded: Jβ(μn) ⩽ J0 for some constant J0. By the definition of Jβ(μ) with μ replaced by μn,

Here, we denote by $u_{k}^{n}=U_{k}(\mu ^{n})$ the solution of the BVP (2.6)–(2.7) with μ replaced by μn, and

is the corresponding measurement on the boundary. From the last inequality, $\lbrace m_{k}^{n}\rbrace$ is bounded in L2(∂X), and $\lbrace \mu _{t}^{n}\rbrace$ and $\lbrace \mu _{s}^{n}\rbrace$ are bounded in BV(X). On the other hand, from theorem 2.1,

So $\lbrace u_{k}^{n}\rbrace$ is uniformly bounded in HA(X × Ω). By possibly extracting a subsequence, there exist $\mu _{t}^{*}$, $\mu _{s}^{*}$ and $u^*_k$ satisfying

Equation (4.5)

Then, there exist further subsequences $\lbrace \mu _t^{n_2}\rbrace \subset \lbrace \mu _t^{n_1}\rbrace$ and $\lbrace \mu _s^{n_2}\rbrace \subset \lbrace \mu _s^{n_1}\rbrace$ such that $\mu _t^{n_2}\rightarrow \mu _t^{*}$ and $\mu _s^{n_2}\rightarrow \mu _s^{*}$ a.e. on X. Note that

Equation (4.6)

Since $\mu _{t}^{n_{2}}\in {Q_{1}}$ for all n2, we have $\mu _{t}^{*}\in {Q_{1}}$, and hence $\mu _{t}^{*}\in {Q_{1}\cap BV(X)}$. Similarly, $\mu _{s}^{*}\in {Q_{2}\cap BV(X)}$. From proposition 4.1,

Equation (4.7)

Now let us show that for k = 1, ..., k0, $F_k(\mu ^{*})=m_k^{*}$ on ∂X. We first note that

Equation (4.8)

Here, $L^{n_{2}}$ denotes the linear integro-differential operator corresponding to the coefficients $\mu ^{n_{2}}$. Next, let us denote by L* the linear integro-differential operator with the coefficients μ*. Let $v\in H_{A}(X\times \Omega )$ be arbitrary but fixed. We will show that

Equation (4.9)

Since $u_{k}^{n_{2}}\rightharpoonup u_{k}^{*}$ in HA(X × Ω), by the definition of the space HA(X × Ω), $u_{k}^{n_{2}}\rightharpoonup u_{k}^{*}$ also in $L^{2}(\Gamma _{-})$, which implies that

Moreover,

Equation (4.10)

For the first term on the right,

Now,

The values $\lbrace \Vert L^{n_{2}}u_{k}^{n_{2}}\Vert _{L^2(X\times \Omega )}\rbrace$ are uniformly bounded. Since $|\mu _{t}^{n_{2}}-\mu _{t}^{*}|\le \mu _{t}^{2}-\mu _t^1$ and $\mu _{t}^{n_{2}}$ converges to $\mu _{t}^{*}$ almost everywhere, an application of the Lebesgue dominated convergence theorem to the term $\left\Vert (\mu _{t}^{n_{2}}-\mu _{t}^{*})v\right\Vert _{L^2(X\times \Omega )}$ shows that

Similarly,

Thus, we obtain

For the second term on the right of (4.10), write

By the same argument, we then conclude that

For the last term on the right of (4.10), we have

due to the condition that $u_{k}^{n_{2}}\rightharpoonup u_{k}^{*}$ in HA(X × Ω).

Combining the above results, we see that (4.9) is valid, implying that for 1 ⩽ kk0, $u_{k}^{*}$ is the solution of the BVP (2.6)–(2.7) with the coefficients $\mu _t^*$ and $\mu _s^*$ corresponding to the inflow value $u_{\rm in,\mathit {k}}$. Furthermore, $F_{k}(\mu ^{*})=m_{k}^{*}$ on ∂X.

Now, let us check that μ* is a minimizer of the problem (Pβ). Note that $u_{k}^{n_{2}}\rightharpoonup u_{k}^{*}$ in HA(X × Ω), so by the definition of the space HA(X × Ω) and [1, lemma 2.2], we conclude that $m_{k}^{n_{2}}\rightharpoonup m_{k}^{*}$ in L2(∂X). Then, by [4, exercise 2.7.2], for 1 ⩽ k ⩽ k0,

so

Together with (4.7), we have

This completes the proof of (i).

(ii) Let {μn} be a sequence in Qad({mt, k}) such that

Equation (4.11)

From the assumption on the absorption and scattering coefficients, it is obvious that {μn} is bounded in L1(X). It then follows from (4.11) that {μn} is bounded in BV(X). Hence, there exist a subsequence $\lbrace \mu ^{n_{1}}\rbrace$ of {μn} and an element μ† such that $\lbrace \mu ^{n_{1}}\rbrace$ converges to μ† in L1(X) and

Equation (4.12)

A subsequence $\lbrace \mu ^{n_{2}}\rbrace$ of $\lbrace \mu ^{n_{1}}\rbrace$ exists, converging to μ† a.e. on X. Since $\lbrace \mu ^{n_{2}}\rbrace \subset Q_{1}\times Q_{2}$, μ† ∈ Qad. On the other hand, for k = 1, ..., k0, we have

As in the proof of (i), we have $u_{k}^{n_{2}}\rightharpoonup u_{k}^{\dagger }$ in HA(X × Ω), and $u_{k}^{\dagger }$ is the solution of the BVP:

and satisfies the measurement boundary condition. Thus, μ† ∈ Qad({mt, k}). Furthermore, from inequalities (4.11) and (4.12), we have

This means μ† is a solution of the problem (P). Thus, the proof is completed. □

Theorem 4.2 (Stability). Let the regularization parameter β > 0 be fixed. For 1 ⩽ k ⩽ k0, let $\lbrace {m_{k}^{n}}\rbrace$ be a sequence which converges to mk in L2(∂X), and let ${\mu _{\beta }^{n}}$ be the minimizers of the problems:

Here, to show explicitly the dependence of J1 on the measurement data, we use $J_{1}(\mu ;m_{k}^{n})$ instead of J1(μ). Then, a subsequence $\lbrace {\mu _{\beta }^{n_2}}\rbrace$ of $\lbrace \mu _{\beta }^{n}\rbrace$ and $\tilde{\mu }\in {Q_{\rm ad}}$ exist, such that $\lbrace {\mu _{\beta }^{n_2}}\rbrace$ converges to $\tilde{\mu }$ in L1(X) and

Equation (4.13)

Furthermore, $\tilde{\mu }$ is a solution to (Pβ).

Proof. For all n and μ ∈ Qad, we have

Equation (4.14)

Since $\lbrace m_{k}^{n}\rbrace$ is a bounded sequence in L2(∂X), from the above inequality, it follows that the sequence $\lbrace \mu _{\beta }^{n}\rbrace$ is bounded in BV(X). By proposition 4.1, a subsequence $\lbrace \mu _{\beta }^{n_{1}}\rbrace$ of $\lbrace \mu _{\beta }^{n}\rbrace$ and $\tilde{\mu }\in {BV(X)}$ exist such that $\lbrace \mu _{\beta }^{n_{1}}\rbrace$ converges to $\tilde{\mu }$ in L1(X) and

Equation (4.15)

Since $\mu _{\beta }^{n_{1}}\in {Q_{1}\times Q_{2}}$ for all n1, it follows that $\tilde{\mu }\in {Q_{1}\times Q_{2}}$. Hence, $\tilde{\mu }\in Q_{\rm ad}$. By the argument used in the proof of theorem 4.1, a subsequence $\lbrace \mu _{\beta }^{n_{2}}\rbrace$ of $\lbrace \mu _{\beta }^{n_{1}}\rbrace$ exists such that $U_{k}(\mu _{\beta }^{n_{2}})$ converges weakly to $U_{k}(\tilde{\mu })$ in HA(X × Ω), and $F_{k}(\mu _{\beta }^{n_{2}})$ converges weakly to $F_{k}(\tilde{\mu })$ in L2(∂X). Thus, we have

So

which means that

Equation (4.16)

Together with (4.14) and (4.15), we obtain that for any μ ∈ Qad,

This implies that $\tilde{\mu }$ is a solution of the problem (Pβ) and

Equation (4.17)

Finally, let us show that (4.13) holds, that is, $J_{2}(\mu _{\beta }^{n_{2}})\rightarrow J_{2}(\tilde{\mu })$ as $n_{2}\rightarrow \infty$. Assume $J_{2}(\mu _{\beta }^{n_{2}})\nrightarrow J_{2}(\tilde{\mu })$. Then from (4.15), we have

Equation (4.18)

From the definition of limit superior, there exists a subsequence $\lbrace \mu _{\beta }^{n_{3}}\rbrace$ of $\lbrace \mu _{\beta }^{n_{2}}\rbrace$ such that $\mu _{\beta }^{n_{3}}$ converges to $\tilde{\mu }$ almost everywhere, $F_{k}(\mu _{\beta }^{n_{3}})\rightharpoonup F_{k}(\tilde{\mu })$ and $J_{2}(\mu _{\beta }^{n_{3}})\rightarrow J_{0}$. Since (4.17) holds, we obtain that

which contradicts (4.16). Thus $J_2(\mu _{\beta }^{n_2})\rightarrow J_2(\tilde{\mu })$, i.e., (4.13) holds. □

4.2. Convergence

We will show that the solutions of the problem (Pβ) with the noisy measurement data $\lbrace m_{k}^{n}\rbrace$ converge toward a solution of the problem (P) as β → 0 and the noise levels $\delta _{k}^n\rightarrow 0$, 1 ⩽ k ⩽ k0. For the noise level vector $\delta ^n=(\delta _1^n,\ldots ,\delta _{k_0}^n)$, the arithmetic mean of the squares of the k0 noise levels is $m(\delta ^{n})=\big[\big(\delta _1^n\big)^2+\cdots +\big(\delta _{k_0}^n\big)^2\big]/k_{0}$.
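For instance, a simple a priori parameter choice that satisfies the hypotheses of theorem 4.3 below is $\beta (\delta )=\sqrt{m(\delta )}$, since then $\beta (\delta )\rightarrow 0$ and $m(\delta )/\beta (\delta )=\sqrt{m(\delta )}\rightarrow 0$ as δ → 0; this particular choice is given only as an illustration and is not prescribed in the paper.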

We measure the convergence with respect to the Bregman distance, which is briefly recalled here. Let B be a Banach space and B* its dual; $R: B\rightarrow (-\infty ,+\infty ]$ is a proper convex functional and ∂R(q) stands for the subdifferential of R at $q\in {\rm Dom}\,R:=\lbrace q\in B\mid R(q)<+\infty \rbrace \ne \emptyset$, defined by

The set ∂R(q) may be empty; however, if R is continuous at q, then it is nonempty. Furthermore, ∂R(q) is convex and weak* compact. In the case ∂R(q) ≠ ∅, for any fixed $p\in B$, we denote

Then for a fixed element q* ∈ ∂R(q),

$D_{R}^{q^{*}}(p,q):=R(p)-R(q)-\langle q^{*},p-q\rangle$

is called the Bregman distance with respect to R and q* of two elements $p,q\in B$.

The Bregman distance is not a metric on B. However, for each q* ∈ ∂R(q), we have $D_{R}^{q^{*}}(p,q)\ge 0$ for any $p\in B$ and $D_{R}^{q^{*}}(q,q)= 0$. Furthermore, when R is strictly convex, $D_{R}^{q^{*}}(p,q)=0$ if and only if p = q.
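As a simple illustration of the definition, let B be a Hilbert space and $R(q)=\frac{1}{2}\Vert q\Vert ^{2}$; then ∂R(q) = {q} and $D_{R}^{q}(p,q)=\frac{1}{2}\Vert p\Vert ^{2}-\frac{1}{2}\Vert q\Vert ^{2}-\langle q,p-q\rangle =\frac{1}{2}\Vert p-q\Vert ^{2}$, so in this special case the Bregman distance reduces to half of the squared norm distance. For the total variation functional J2 used below, no such identification is available and the Bregman distance is, as noted, weaker than a metric.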

Theorem 4.3 (Convergence). For any sequence of positive noise level vectors δn → 0 and regularization parameters βn := β(δn) → 0 such that m(δn)/βn → 0 as n → ∞, let $\lbrace m_{k}^{n}\rbrace$ be a sequence in L2(∂X) such that for every k, $\Vert m_{k}^n-m_{t,k}\Vert _{L^2(\partial X)} \le \delta _k^n$, and let $\lbrace \mu _{\beta _{n}}^{n}\rbrace$ be the minimizers of the problems

Then, a subsequence $\lbrace {\mu _{\beta _{n_2}}^{n_2}}\rbrace$ of $\lbrace {\mu _{\beta _{n}}^{n}}\rbrace$ and an element $\hat{\mu }\in Q_{\rm ad}(\lbrace m_{t,k}\rbrace )$ exist such that

Equation (4.19)

and

Equation (4.20)

Furthermore, $\hat{\mu }$ is a solution to the problem (P) and for all $l\in {\partial (\int _{X}|\nabla (\cdot )|)(\hat{\mu })}$,

Equation (4.21)

Proof. For all n, by the definition of $\mu _{\beta _{n}}^{n}$, for any μ ∈ Qad, we have

In particular, for any μ ∈ Qad({mt, k}),

From the above inequality, we have

Equation (4.22)

Since m(δn)/βn → 0, $\lbrace \mu _{\beta _n}^n\rbrace$ is bounded in BV(X). By proposition 4.1, we conclude that there exist a subsequence $\lbrace \mu _{\beta _{n_{1}}}^{n_{1}}\rbrace$ of $\lbrace \mu _{\beta _{n}}^{n}\rbrace$ and $\hat{\mu }\in {Q_{\rm ad}}$, such that

Equation (4.23)

Equation (4.24)

By the same reasoning as in the proof of theorem 4.2, a subsequence $\lbrace \mu _{\beta _{n_{2}}}^{n_{2}}\rbrace$ of $\lbrace \mu _{\beta _{n_{1}}}^{n_{1}}\rbrace$ exists, such that, for 1 ⩽ k ⩽ k0,

On the other hand,

When $\beta _{n_{2}}\rightarrow 0$ and $m(\delta ^{n_{2}})\rightarrow 0$, the first term on the right of the above inequality approaches zero, and so

Moreover, from

and $\Vert m_{k}^{n_{2}}-m_{t,k}\Vert _{L^{2}(\partial X)}\le \delta _k^{n_2}\rightarrow 0$, we conclude that

i.e., $F_{k}(\mu _{\beta _{n_{2}}}^{n_{2}})\rightarrow m_{t,k}$ in L2(∂X) as $n_{2}\rightarrow \infty$. Recall that $F_{k}(\mu _{\beta _{n_{2}}}^{n_{2}})\rightharpoonup F_{k}(\hat{\mu })$ in L2(∂X). By the uniqueness of the weak limit, $F_{k}(\hat{\mu })=m_{t,k}$, i.e., $\hat{\mu }\in {Q_{\rm ad}(\lbrace m_{t,k}\rbrace )}$.

It remains to prove (4.21). For any μ ∈ Qad({mt, k}), by (4.22) and (4.24), we have

This means that $\hat{\mu }$ is a solution to the problem (P). Again using (4.22) and (4.24), we have

So

Equation (4.25)

From (4.23) and (4.25), by [5, proposition 10.1.2], the sequence $\lbrace \mu _{\beta _{n_{2}}}^{n_{2}}\rbrace$ converges weakly to $\hat{\mu }$ in BV(X). Thus, for all $l\in {\partial (\int _{X}|\nabla (\cdot )|)(\hat{\mu })}$,

which is (4.21). □

5. Summary

In this paper, we present a theoretical analysis of the parameter identification problem for the radiative transport equation (RTE) with angularly averaged measurement data. The parameters are sought in subspaces of $L^{\infty }(X)$, whereas the solution of the RTE BVP is taken from the space of square integrable functions that have square integrable directional derivatives along the angular direction and square integrable traces on the incoming boundary. We analyze properties of the forward operator in this function space setting, including the continuity and differentiability with respect to the parameters. The total variation regularization method is considered for the parameter identification problem, where the parameters may be discontinuous or highly oscillatory, and we show that the method leads to a stable solution. We also show the convergence of the total variation-minimizing solutions in the sense of the Bregman distance. The lack of compactness of the function spaces presents a major challenge for the theoretical analysis of the problem, and this challenge is overcome with careful arguments.

Acknowledgments

The authors thank the anonymous referees for valuable comments. This work is supported by the National Natural Science Foundation of China under grant no. 91230119.
