Identification of Lamé parameters in linear elasticity: a fixed point approach

A fixed point iterative scheme is used for the simultaneous recovery of the Lamé parameters in linear elasticity. Applying the auxiliary problem principle to an output least-squares based regularized minimization problem results in a strongly convergent iterative scheme. When the (coefficient-dependent) energy norm is used, the conditions ensuring strong convergence are much milder and avoid any possibility of over-regularization.

1. Introduction. The following system describes the response of an isotropic membrane or body to a traction applied to its boundary:

$$-\nabla \cdot \sigma = 0 \quad \text{in } \Omega, \tag{1.1a}$$
$$\sigma = 2\mu\,\varepsilon(u) + \lambda\,\operatorname{tr}(\varepsilon(u))\,I, \tag{1.1b}$$
$$u = 0 \quad \text{on } \Gamma_1, \tag{1.1c}$$
$$\sigma n = h \quad \text{on } \Gamma_2. \tag{1.1d}$$

The domain Ω is a subset of R² in the case of a membrane or of R³ in the case of a body, and ∂Ω = Γ₁ ∪ Γ₂ is a partition of its boundary. In this paper, we consider the two-dimensional problem, with some comments about the extension to R³. The vector-valued function u = u(x) represents the displacement of the elastic membrane, and ε(u) = (∇u + (∇u)ᵀ)/2 is the (linearized) strain tensor. The tensor σ is the resulting stress tensor, and the stress-strain law or constitutive equation (1.1b) is derived from the assumption that the elastic membrane is isotropic and that the displacement is small (so that a linear relationship holds approximately). The coefficients µ and λ are the Lamé parameters; the inverse problem studied here is to estimate them from a measurement of u (see [4,15,23]).

In this work we employ a fixed point iterative algorithm for the numerical treatment of the inverse problem of recovering the Lamé parameters in linear elasticity (see [2,3]). Although the idea of using iterative algorithms is by no means new, recent success with these methods in image processing (see [1]) motivates us to employ them for solving inverse problems (cf. [8]). A rather complete theoretical foundation for some projection-type algorithms for parameter identification has already been laid by Kluge [14] (see also [13]). In this paper we generalize the approach of Kluge [13] in two different directions. In the first approach, for the simultaneous recovery of the Lamé parameters we consider the output least-squares type functional employed earlier by Kluge [13] and several others, but instead of a projection-type algorithm we consider a more general iterative scheme based on the so-called auxiliary problem principle. In the second approach we propose a new convex objective functional to minimize. It is shown that in this case the conditions ensuring strong convergence are much less stringent.

Recently the auxiliary problem principle (APP) has attracted a great deal of attention in the optimization community, and it has been noticed that several other important algorithms, such as proximal point methods and Newton-type methods, can be recovered from it. In view of this, we believe that the powerful theory behind the auxiliary problem principle can be used to establish a general approach for solving ill-posed problems. However, in this short paper we present only a very simple version of the APP and postpone a more detailed study, together with a comparison of several important particular cases, to a companion paper.
This work is divided into six sections. In the next section, we review some results for later use. Section 3 deals with the formulation of the inverse problem; there we justify the need for regularization and discuss some features of the regularized problems. Sections 4 and 5 contain the main results, which include the convergence analysis of the proposed algorithm for the objective functional used in Kluge [13] and for a new convex objective functional. The paper concludes with some remarks about the approach.
2. Preliminaries. We begin by fixing some notation. Throughout this work, the dot product of tensors A, B will be denoted by A · B:

$$A \cdot B = \sum_{i,j} A_{ij} B_{ij}.$$

The L² norm of a tensor-valued function A = A(x) is defined by

$$\|A\|_{L^2(\Omega)} = \left( \int_\Omega A \cdot A \, dx \right)^{1/2}.$$

Moreover, the L² norm of a vector-valued function u = u(x) is defined by

$$\|u\|_{L^2(\Omega)} = \left( \int_\Omega u \cdot u \, dx \right)^{1/2},$$

while the H¹ norm of u is defined by

$$\|u\|_{H^1(\Omega)} = \left( \|u\|_{L^2(\Omega)}^2 + \|\nabla u\|_{L^2(\Omega)}^2 \right)^{1/2}.$$

Green's identity, in the present context, is

$$\int_\Omega (\nabla \cdot \sigma) \cdot v \, dx + \int_\Omega \sigma \cdot \varepsilon(v) \, dx = \int_{\partial\Omega} (\sigma n) \cdot v \, ds, \tag{2.3}$$

where σ is assumed to be a symmetric tensor. Using (2.3), we now derive the weak form of (1.1). Let V = {v ∈ H¹(Ω) : v = 0 on Γ₁}. Multiplying both sides of (1.1a) by a test function v ∈ V, integrating over Ω, and applying (2.3) yields

$$\int_\Omega \sigma \cdot \varepsilon(v) \, dx = \int_{\partial\Omega} (\sigma n) \cdot v \, ds.$$

Applying (1.1b)-(1.1d), we obtain the weak form: find u ∈ V such that

$$\int_\Omega \left[ 2\mu\, \varepsilon(u) \cdot \varepsilon(v) + \lambda\, \operatorname{tr}(\varepsilon(u)) \operatorname{tr}(\varepsilon(v)) \right] dx = \int_{\Gamma_2} h \cdot v \, ds \quad \text{for all } v \in V. \tag{2.4}$$

By Korn's inequality (see [6], for example), there exists a constant C > 0 such that

$$\|\varepsilon(u)\|_{L^2(\Omega)} \ge C\, \|u\|_{H^1(\Omega)} \quad \text{for all } u \in V. \tag{2.5}$$

The following inequality, which holds pointwise in Ω, is easy to establish:

$$2\mu\, \varepsilon(u) \cdot \varepsilon(u) + \lambda\, \operatorname{tr}(\varepsilon(u))^2 \ge \min\{2\mu + 2\lambda,\, 2\mu\}\, \varepsilon(u) \cdot \varepsilon(u). \tag{2.6}$$

Combining (2.5) and (2.6), we obtain

$$\int_\Omega \left[ 2\mu\, \varepsilon(u) \cdot \varepsilon(u) + \lambda\, \operatorname{tr}(\varepsilon(u))^2 \right] dx \ge \alpha\, \|u\|_{H^1(\Omega)}^2,$$

where α = C² min{2µ + 2λ, 2µ}. This proves that the bilinear form defining (2.4) is coercive on V whenever 2µ and 2µ + 2λ are bounded away from zero; K denotes the set of coefficient pairs satisfying such bounds, and elements of K will often be denoted by ℓ = (µ, λ). Provided h is regular enough, there is then a unique u ∈ V satisfying (2.4). We mention that in the scalar case (cf. (1.2)), H²-coefficients were recovered, for example, in [10] and [16]. For notational convenience we define a mapping T : K × V × V → R, linear in all three arguments, as follows:

$$T(\ell, u, v) = \int_\Omega \left[ 2\mu\, \varepsilon(u) \cdot \varepsilon(v) + \lambda\, \operatorname{tr}(\varepsilon(u)) \operatorname{tr}(\varepsilon(v)) \right] dx.$$

We reformulate the variational form (2.4) as follows: find u ∈ V such that

$$T(\ell, u, v) = m(v) \quad \text{for all } v \in V,$$

where the functional m(·) is defined by the right-hand side of (2.4).
2.1. An abstract setting. In this section we present an abstract formulation of the problem discussed above. Henceforth, unless the contrary is mentioned explicitly, X and Y are two real Hilbert spaces, A a nonempty closed and convex subset of X, T : X × Y × Y → R a trilinear form, and m : Y → R a bounded linear functional. We assume that there exist positive constants α, β such that for all u, v ∈ Y and for all a ∈ X the following estimates hold:

$$T(a, u, u) \ge \alpha\, \|u\|_Y^2, \qquad T(a, u, v) \le \beta\, \|a\|_X\, \|u\|_Y\, \|v\|_Y.$$

By the Riesz representation theorem, the variational equation

$$T(a, u, v) = m(v) \quad \text{for all } v \in Y \tag{2.10}$$

has a unique solution u for each a ∈ A. We therefore define F : A → Y by the condition that u = F(a) is the solution to (2.10).
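To make the abstract setting concrete, the following minimal sketch realizes (2.10) in finite dimensions after a Galerkin discretization. The matrices `Ks` and the load vector `m_vec` are randomly generated placeholders standing in for a finite element assembly (they are not from the paper); the point is only that T(a, u, v) = vᵀK(a)u is linear in each argument and K(a) is symmetric positive definite for admissible a, so the discrete solution operator F is well defined.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, n_dofs = 4, 12

# Symmetric positive semidefinite matrices standing in for the assembled
# pieces of the trilinear form; K(a) = sum_i a_i K_i is then SPD for a > 0.
Ks = []
for _ in range(n_params):
    M = rng.standard_normal((n_dofs, n_dofs))
    Ks.append(M @ M.T)

m_vec = rng.standard_normal(n_dofs)  # discretization of the functional m(.)

def K(a):
    """Stiffness matrix of the variational equation T(a, u, v) = m(v)."""
    return sum(ai * Ki for ai, Ki in zip(a, Ks))

def F(a):
    """Discrete solution operator: u = F(a) solves K(a) u = m_vec."""
    return np.linalg.solve(K(a), m_vec)
```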
We begin by recalling some useful properties of the solution operator F.

3. Formulation of the Inverse Problem. We will henceforth assume that a (possibly noisy) measurement z of ū is available, where ū and ā together satisfy (2.10). The purpose of this paper is to propose and analyze a method for estimating ā from z. We define the functional J₀ : A → R by

$$J_0(a) = \frac{1}{2}\, \|F(a) - z\|_Y^2, \tag{3.11}$$

where F(a) is the unique solution of (2.10) corresponding to a. The functional J₀ is related to output least-squares functionals considered by several authors; see, for example, [14].
In principle, it is reasonable to estimate ā by minimizing J₀ over A, that is, by solving the following problem: find ā ∈ A such that

$$J_0(\bar a) \le J_0(a) \quad \text{for all } a \in A. \tag{3.13}$$

However, since the inverse problem under consideration is ill-posed, it is necessary to regularize J₀. In the present approach the following observation also justifies the need for regularization. A necessary optimality condition for ā ∈ A to be a solution of (3.13) is the following variational inequality (see [13]):

$$\langle J_0'(\bar a),\, a - \bar a \rangle \ge 0 \quad \text{for all } a \in A. \tag{3.14}$$

As mentioned earlier, in this work we intend to employ iterative methods based on the variational inequality formulation. However, it is well known that, in the context of (3.14), the majority of such algorithms demand some kind of strong monotonicity of J₀'(·) for strong convergence. To check the availability of this strong hypothesis, we begin by exploiting the form of the derivative. It is known that for J₀(·) given in (3.11), we have

$$J_0'(a) = F'(a)^* H(a),$$

where we have used the fact that H(a) = F(a) − z.
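In the discrete sketch above, J₀ and the derivative F'(a)*(F(a) − z) can be evaluated with one additional (adjoint) linear solve. The formula in the comment follows from differentiating K(a)u = m; it is an illustration of (3.11) and (3.14) under the assumptions of the sketch, not notation from the paper.

```python
def J0(a, z):
    """Output least-squares functional J0(a) = 1/2 ||F(a) - z||^2."""
    r = F(a) - z
    return 0.5 * r @ r

def grad_J0(a, z):
    """Gradient J0'(a) = F'(a)*(F(a) - z) via an adjoint solve.

    Differentiating K(a)u = m gives K(a) (du/da_i) = -K_i u, so with the
    adjoint solve K(a) w = F(a) - z (K(a) is symmetric),
        dJ0/da_i = (du/da_i)^T (F(a) - z) = -(K_i u)^T w.
    """
    u = F(a)
    w = np.linalg.solve(K(a), u - z)
    return np.array([-(Ki @ u) @ w for Ki in Ks])
```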
We begin by noticing that the map u ↦ u − z is monotone on Y, that is,

$$\langle (u - z) - (v - z),\, u - v \rangle = \|u - v\|_Y^2 \ge 0 \quad \text{for all } u, v \in Y.$$

By choosing u = F(a_k) and v = F(ā) in the above inequality, we obtain

$$\langle H(a_k) - H(\bar a),\, F(a_k) - F(\bar a) \rangle \ge 0.$$

Estimating the terms F'(a)*H(a) with the help of Lemma 2.1 then yields only a bound of the form

$$\langle J_0'(a_1) - J_0'(a_2),\, a_1 - a_2 \rangle \ge -\kappa\, \|a_1 - a_2\|_X^2$$

for some constant κ > 0. Therefore, the required strong monotonicity property is not fulfilled. In his paper [14], Kluge proposed to incorporate a strongly convex and differentiable regularizing functional R so that τJ₀' + ρR', with R'(·) the derivative of R, becomes strongly monotone. Here τ and ρ are strictly positive constants. More precisely, we define J : A → R by

$$J(a) = \tau J_0(a) + \rho R(a),$$

where R is a strongly convex Gateaux differentiable functional, that is, R' is strongly monotone with modulus of monotonicity κ₀. Notice that the condition

$$\rho \kappa_0 - \tau \kappa > 0 \tag{3.20}$$

now ensures that τF'(·)*H(·) + ρR' is strongly monotone, that is, for all a₁, a₂ ∈ A, we have

$$\langle \tau F'(a_1)^* H(a_1) + \rho R'(a_1) - \tau F'(a_2)^* H(a_2) - \rho R'(a_2),\, a_1 - a_2 \rangle \ge (\rho \kappa_0 - \tau \kappa)\, \|a_1 - a_2\|^2. \tag{3.21}$$

Although all the arguments here are valid for an arbitrary strongly convex functional, we shall stick to the choice

$$R(a) = \frac{1}{2}\, \|a - \tilde a\|_X^2$$

for some ã ∈ X; that is, κ₀ = 1 in (3.20). Therefore, instead of (3.13) we consider the following regularized problem: find ā ∈ A such that

$$J(\bar a) \le J(a) \quad \text{for all } a \in A. \tag{3.22}$$

It follows by standard arguments that the above problem leads to the following variational inequality: find a ∈ A such that

$$\langle \tau F'(a)^*(F(a) - z) + \rho R'(a),\, b - a \rangle \ge 0 \quad \text{for all } b \in A. \tag{3.23}$$
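Continuing the sketch, the quadratic choice R(a) = ½‖a − ã‖² gives R'(a) = a − ã, so the operator of the variational inequality (3.23) takes one line; `a_tilde`, `tau`, and `rho` are illustrative parameters.

```python
def G(a, z, a_tilde, tau, rho):
    """Operator of the variational inequality (3.23):
    G(a) = tau * F'(a)*(F(a) - z) + rho * (a - a_tilde)."""
    return tau * grad_J0(a, z) + rho * (a - a_tilde)
```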

4. A Fixed Point Algorithm.
In this section we employ the so-called auxiliary problem principle (see [5,7,21]) to solve (3.23) and hence to recover the Lamé coefficients in linear elasticity. Assume that 𝒜 : X → R is convex and Gateaux differentiable with 𝒜' as its Gateaux derivative, and let δ > 0. For an approximate solution to (3.23) we proceed as follows. We begin with an initial guess a₀ and solve

$$\min_{a \in A}\; \mathcal{A}(a) + \big\langle \delta\big(\tau F'(a_0)^*(F(a_0) - z) + \rho R'(a_0)\big) - \mathcal{A}'(a_0),\, a \big\rangle. \tag{4.24}$$

Assume that the functional 𝒜 is so chosen that the above minimization problem is uniquely solvable. We denote the unique solution by a₁ and continue by updating a₀.
More precisely, we consider the following algorithm:

Algorithm 1:
(i) At k = 0, start with a₀.
(ii) At step k = n, solve the following problem:

$$\min_{a \in A}\; \mathcal{A}(a) + \big\langle \delta\big(\tau F'(a_n)^*(F(a_n) - z) + \rho R'(a_n)\big) - \mathcal{A}'(a_n),\, a \big\rangle. \tag{4.25}$$

Denote the solution by a_{n+1}.
(iii) Stop if ‖a_{n+1} − a_n‖ is below some threshold; otherwise go back to the previous step.

We remark that for the important choice 𝒜(·) = ½‖·‖², (4.25) leads to the following well-known projection algorithm:

$$a_{n+1} = P_A\big[a_n - \delta\big(\tau F'(a_n)^*(F(a_n) - z) + \rho R'(a_n)\big)\big],$$

where P_A is the projection operator onto the convex set A.
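The following sketch implements Algorithm 1 for the quadratic choice 𝒜(·) = ½‖·‖², i.e. the projection scheme above, reusing `F`, `grad_J0`, and `G` from the previous sketches. The box projection `project_A` is a hypothetical stand-in for P_A (the paper's A is a general closed convex set), and the parameter values are illustrative rather than chosen to satisfy conditions (4.26).

```python
def project_A(a, lo=1e-2, hi=1e2):
    """Stand-in projection P_A: A is modeled as a box of elementwise
    bounds, for which the projection is a componentwise clip."""
    return np.clip(a, lo, hi)

def algorithm1(a0, z, a_tilde, tau=1.0, rho=0.5, delta=1e-3,
               tol=1e-8, max_iter=10_000):
    """Algorithm 1 with the auxiliary functional 1/2 ||.||^2, i.e.
    a_{n+1} = P_A[a_n - delta * G(a_n)]."""
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        a_next = project_A(a - delta * G(a, z, a_tilde, tau, rho))
        if np.linalg.norm(a_next - a) < tol:  # stopping rule (iii)
            break
        a = a_next
    return a_next
```

A synthetic test can set `z = F(a_true)` for some admissible `a_true` (optionally with added noise) and start `algorithm1` from a different initial guess.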
The following result deals with convergence of the above algorithm.
Theorem 4.1. Assume that 𝒜 : A → R is proper, convex, and Gateaux differentiable, and that its Gateaux derivative 𝒜' is strongly monotone with modulus γ. Assume that ā is the unique solution to (3.23). Assume that there exist 0 < s < 1 and r > 0 such that, with c₁ = (r + α‖z‖)/r, c₂ = 2τβ²r²/α⁴ and ‖m‖ ≤ r, the estimates (4.26a) and (4.26b) hold. Then the iterative scheme developed by Algorithm 1 converges strongly to ā.
Proof. Following the ideas of Cohen [5], we introduce the function

$$\Phi(a) = \mathcal{A}(\bar a) - \mathcal{A}(a) - \langle \mathcal{A}'(a),\, \bar a - a \rangle, \tag{4.27}$$

where ā is a solution of (3.23).
In view of the hypotheses on the functional 𝒜 and its derivative 𝒜', the following estimate holds:

$$\Phi(a) \ge \frac{\gamma}{2}\, \|\bar a - a\|^2, \tag{4.28}$$

where γ is the modulus of strong monotonicity of 𝒜'.
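Indeed, (4.28) follows from the strong monotonicity of 𝒜' by integrating along the segment from a to ā:

$$\Phi(a) = \int_0^1 \big\langle \mathcal{A}'\big(a + t(\bar a - a)\big) - \mathcal{A}'(a),\, \bar a - a \big\rangle\, dt \;\ge\; \int_0^1 \gamma\, t\, \|\bar a - a\|^2\, dt \;=\; \frac{\gamma}{2}\, \|\bar a - a\|^2.$$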

To prove the strong convergence, we begin by analyzing the difference Φ(a_{k+1}) − Φ(a_k). For the solution ā of (3.23), we have

$$\langle \tau F'(\bar a)^*(F(\bar a) - z) + \rho R'(\bar a),\, b - \bar a \rangle \ge 0 \quad \text{for all } b \in A.$$

By setting b = a_{k+1}, we have

$$\langle \tau F'(\bar a)^*(F(\bar a) - z) + \rho R'(\bar a),\, a_{k+1} - \bar a \rangle \ge 0.$$

Also, from the auxiliary problem (4.25), we obtain

$$\langle \mathcal{A}'(a_{k+1}) - \mathcal{A}'(a_k) + \delta\big(\tau F'(a_k)^*(F(a_k) - z) + \rho R'(a_k)\big),\, b - a_{k+1} \rangle \ge 0 \quad \text{for all } b \in A.$$

For notational simplicity we write G(·) = τF'(·)*(F(·) − z) + ρR'(·). Combining the above three inequalities with the strong monotonicity estimate (3.21), we obtain (4.30), an upper bound for Φ(a_{k+1}) − Φ(a_k) by a negative multiple of ‖a_k − ā‖². Here κ₁ = ρ − τκ is the constant given by condition (3.20) with κ₀ = 1.
Furthermore, the right-hand side of (4.30) is negative under conditions (4.26). Therefore the sequence {Φ(a_k)} is strictly decreasing and bounded from below (see (4.27) and (4.28)), and hence it must converge. Consequently, the difference Φ(a_{k+1}) − Φ(a_k) must go to zero, which implies strong convergence of a_{k+1} to ā.
Remark 4.2. The conditions (4.26), although ensuring strong convergence and uniqueness, are quite stringent. They suggest that to obtain strong convergence it may be necessary to over-regularize, that is, to take ρ large in comparison to τ.

5. A Convex Objective Functional.
We have noticed that the major drawback of the functional studied in the previous section is its nonconvexity. In this section, we consider another functional to minimize which enjoys the much needed convexity property. Assume that X is a reflexive Banach space and Y is a Hilbert space. Assume that A is a closed and convex subset of X with nonempty topological interior. As earlier, we define a trilinear form T : X × Y × Y → R such that there are constants α > 0 and β > 0 satisfying

$$\alpha\, \|u\|_Y^2 \le T(a, u, u), \qquad T(a, u, v) \le \beta\, \|a\|_X\, \|u\|_Y\, \|v\|_Y.$$

Using our notation, we propose to consider the objective functional J : A → R defined by

$$J(a) = \frac{1}{2}\, T\big(a,\, F(a) - z,\, F(a) - z\big), \tag{5.33}$$

where z is as before.
It is shown in [9] that the functional J is convex and infinitely differentiable, and its first derivative is given by

$$J'(a)\,\delta a = -\frac{1}{2}\, T\big(\delta a,\, F(a) + z,\, F(a) - z\big).$$

We define a regularized objective functional J_ρ : X → R by

$$J_\rho(a) = J(a) + \rho R(a),$$

where R is a Gateaux differentiable and strongly convex functional, with R' as its Gateaux derivative. We consider the following regularized problem: find a ∈ A such that

$$J_\rho(a) \le J_\rho(b) \quad \text{for all } b \in A. \tag{5.35}$$

It follows by standard arguments that a necessary optimality condition for the above problem is the following variational inequality: find a ∈ A such that

$$\langle J_\rho'(a),\, b - a \rangle \ge 0 \quad \text{for all } b \in A. \tag{5.36}$$

With an auxiliary functional 𝒜, as in the previous section, and a positive scalar δ, we consider the following algorithm:

Algorithm 2:
(i) At k = 0, start with a₀.
(ii) At step k = n, solve the following problem:

$$\min_{a \in A}\; \mathcal{A}(a) + \langle \delta\, J_\rho'(a_n) - \mathcal{A}'(a_n),\, a \rangle. \tag{5.37}$$

Denote the solution by a_{n+1}.
(iii) Stop if ‖a_{n+1} − a_n‖ is below some threshold; otherwise go back to the previous step.
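Under the same discrete setting, the convex functional (5.33) and Algorithm 2 with the quadratic 𝒜 read as follows. The gradient implements the derivative formula J'(a)δa = −½T(δa, F(a)+z, F(a)−z) componentwise; unlike `grad_J0`, no adjoint solve is needed. Parameter values are again illustrative; in particular, convexity is what permits the small `rho` here.

```python
def grad_J_mols(a, z):
    """Derivative of the convex functional (5.33):
    dJ/da_i = -1/2 (u + z)^T K_i (u - z), where u = F(a)."""
    u = F(a)
    return np.array([-0.5 * (u + z) @ (Ki @ (u - z)) for Ki in Ks])

def algorithm2(a0, z, a_tilde, rho=1e-3, delta=1e-3,
               tol=1e-8, max_iter=10_000):
    """Algorithm 2 with the auxiliary functional 1/2 ||.||^2: projected
    gradient on J_rho(a) = J(a) + (rho/2) ||a - a_tilde||^2."""
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        step = grad_J_mols(a, z) + rho * (a - a_tilde)
        a_next = project_A(a - delta * step)
        if np.linalg.norm(a_next - a) < tol:  # stopping rule (iii)
            break
        a = a_next
    return a_next
```

For the quadratic 𝒜 the strong monotonicity modulus is γ = 1, so the step-size condition of Theorem 5.1 below reduces to 0 < δ < 2ρ/L².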
The following theorem is devoted to the convergence of the above algorithm.
Theorem 5.1. Assume that 𝒜 : A → R is proper, convex, and Gateaux differentiable, and that its Gateaux derivative 𝒜' is strongly monotone with modulus γ. Let L be the modulus of Lipschitz continuity of J_ρ', and assume that ā is the unique solution to (5.36). If 0 < δ < 2ργ/L², then the sequence {a_k} developed by Algorithm 2 converges strongly to ā.
Proof. The proof is very similar to that of Theorem 4.1 and can also be obtained from the original contribution of Cohen [5].
6. Concluding Remarks. We conclude this paper with a very brief comparison of the two functionals considered here. An essential difference between (3.11) and (5.33) is that the former leads to a variational inequality whose operator 𝓕 satisfies only

$$\langle \mathcal{F}(x) - \mathcal{F}(y),\, x - y \rangle \ge -K_1\, \|x - y\|^2 \quad \text{for all } x, y,$$

with K₁ > 0. In view of this, strong convergence may require over-regularizing the problem. On the contrary, the functional (5.33) is convex, and hence a very small regularization parameter suffices to ensure strong convergence. There are several other advantages of considering (5.33): for example, the parameter τ is not needed, and the principle of iterative regularization can easily be applied to (5.36). Moreover, if instead of the auxiliary problem principle we use the so-called extragradient-type methods (see [22]), then strong convergence can be shown without any regularization.