Kozlov-Maz'ya iteration as a form of Landweber iteration

We consider the alternating method of Kozlov and Maz'ya for solving the Cauchy problem for elliptic boundary-value problems. Considering the case of the Laplacian, we show that this method can be recast as a form of Landweber iteration. In addition to conceptual advantages, this observation leads to some practical improvements. We show how to accelerate Kozlov-Maz'ya iteration using the conjugate gradient algorithm, and we show how to modify the method to obtain a more practical stopping criterion.


Introduction
The Cauchy problem for elliptic equations, where Dirichlet and Neumann data are prescribed simultaneously on a strict subset of the domain boundary, is a prototypical ill-posed problem. It is a linear problem, and can be approached using any of a number of standard regularization techniques, such as Tikhonov regularization [CDJP01], as well as logarithmic convexity methods [Pa87]. Kozlov and Maz'ya [KM90] (see also [KMF91]) introduced a novel method for solving this problem that, while related to the general class of iterative methods, was not evidently one of the standard ones. The method (sometimes known as the alternating method and called here Kozlov-Maz'ya iteration) is straightforward to implement numerically, and is therefore an attractive choice for practical use. There are some drawbacks, however. The formal stopping criterion for Kozlov-Maz'ya iteration involves error estimates in certain Sobolev spaces with fractional derivatives, and arriving at such estimates for real data poses some difficulty. Moreover, it has been observed that Kozlov-Maz'ya iteration converges slowly. Although there have been efforts to accelerate the method using certain relaxation factors [JN99, JLM04], formal proofs of the stability of these ad hoc techniques are not available.
Our primary result is a demonstration that Kozlov-Maz'ya iteration is, in fact, a form of Landweber iteration [La51] between function spaces equipped with suitable norms. This observation yields a number of advantages. First, the extensive body of literature concerning Landweber iteration can be brought to bear on Kozlov-Maz'ya iteration; proofs concerning its stability and rate of convergence can then be quoted from textbooks. Second, standard techniques for accelerating Landweber iteration can be applied to Kozlov-Maz'ya iteration. We indicate here a variation based on the conjugate gradient method that is nearly as simple as standard Kozlov-Maz'ya iteration and leads to a fast, order-optimal regularization method. Finally, we show how to modify one of the function spaces involved in the Landweber and conjugate gradient iterations to obtain a similar method with a more practical stopping criterion.
The motivation for this work comes from an inverse problem in glaciology [MTAS08], which considered the Cauchy problem for a nonlinear elliptic PDE. In that paper, the linearized inverse problems were solved using Kozlov-Maz'ya iteration, accelerated by the techniques described below in Section 5. For simplicity, we focus our attention here on a model elliptic problem for the Laplacian; the extension to more general elliptic operators is straightforward. We remark that Kozlov-Maz'ya iteration has been extended to certain parabolic and hyperbolic inverse problems [BL01] as well as to a degenerate elliptic problem (the Stokes system) [BJKL05]. We do not treat these problems, but it is hoped that the ideas presented here might also be useful in these cases.

Formulation of the model Cauchy problem
Let Ω ⊆ R^n be an open, bounded, and connected set with a Lipschitz boundary ∂Ω. Suppose S and B are nonempty open subsets of ∂Ω sharing a common boundary Π and that ∂Ω = S ∪ B ∪ Π is a Lipschitz dissection as defined in [Mc00] (effectively Π is an embedded Lipschitz hypersurface of ∂Ω). We suppose that boundary data is known on S but unknown on B; the notation suggests that S is an accessible surface and B is an inaccessible base. The Cauchy problem for the Laplacian is the following:

−∆u = f in Ω
u = σ on S     (1)
∂_n u = τ on S.

Here f ∈ (H^1(Ω))^*, σ ∈ H^{1/2}(S), τ ∈ H^{−1/2}(S), and ∂_n denotes the normal derivative.
Our notation and conventions for Sobolev spaces follow [Mc00], except for one case noted below. The space H^s(S) is the set of restrictions of distributions in H^s(∂Ω) to S and has the quotient norm; in this paper we will only use the cases s = −1/2, 0, 1/2. The subset of distributions σ ∈ H^s(∂Ω) such that σ|_B = 0 is denoted by H^s_{00}(S) (and by H̃^s(S) in [Mc00]). It is a closed subspace of H^s(∂Ω) and inherits the norm from the larger space. We will consider elements of H^s_{00}(S) as elements of H^s(S) or as elements of H^s(∂Ω) interchangeably and without comment. Because of the regularity of the sets S and B, there is a natural identification of H^{−1/2}(S) with the dual space of H^{1/2}_{00}(S). More details concerning these conventions can be found in the Appendix.
The key step of recasting Kozlov-Maz'ya iteration as Landweber iteration is making a judicious choice of (equivalent) norms on these boundary Sobolev spaces so that the adjoints of certain operators take on a natural form. Proofs of the equivalence of these norms can be found in the Appendix. For brevity we describe these norms here for distributions on S, with the obvious adjustments needed for distributions on B.
Let ψ ∈ H^{−1/2}(S) and let v_ψ be the solution of

−∆v_ψ = 0 in Ω
∂_n v_ψ = ψ on S
v_ψ = 0 on B.

Then

||ψ||^2_{H^{−1/2}(S)} = ∫_Ω |∇v_ψ|^2.

Similarly, if ψ_1, ψ_2 ∈ H^{−1/2}(S) then

⟨ψ_1, ψ_2⟩_{H^{−1/2}(S)} = ∫_Ω ∇v_{ψ_1} · ∇v_{ψ_2}.

This norm was described in the original work on Kozlov-Maz'ya iteration [KM90].

For φ ∈ H^{1/2}_{00}(S), let v_φ be the harmonic function with v_φ = φ on S and v_φ = 0 on B. Then

||φ||^2_{H^{1/2}_{00}(S)} = ∫_Ω |∇v_φ|^2,

and there is a corresponding inner product.

Let φ ∈ H^{1/2}(S). Let r_φ be the solution of

−∆r_φ = 0 in Ω
r_φ = φ on S
∂_n r_φ = 0 on B.

Then

||φ||^2_{H^{1/2}(S)} = ∫_Ω |∇r_φ|^2 + (1/|S|) (∫_S φ)^2,

where |S| = ∫_S 1. There is a corresponding inner product defined analogously to the other spaces.

Kozlov-Maz'ya iteration
Kozlov-Maz'ya iteration proceeds by alternating between solving boundary-value problems involving the Dirichlet data (σ) and the Neumann data (τ) in turn. Given an initial guess ψ_0 ∈ H^{−1/2}(B), let v_0 be the solution of

−∆v_0 = f in Ω
v_0 = σ on S
∂_n v_0 = ψ_0 on B,

and let w_0 be the solution of

−∆w_0 = f in Ω
∂_n w_0 = τ on S
w_0 = v_0 on B.

The solvability of these equations is discussed briefly in the Appendix.

Then ψ_1 = ∂_n w_0|_B. Sequences {ψ_k}, {v_k}, and {w_k} are obtained by repeating these operations. Letting KM denote the map taking ψ_k to ∂_n w_k|_B, we see that ψ_{k+1} = KM(ψ_k). Note that we use the convention that operators with a script font yield distributions on Ω, whereas operators with a roman font yield distributions on a subset of ∂Ω.
It was proved in [KMF91] that u ∈ H^1(Ω) is a solution of the Cauchy problem (1) if and only if ∂_n u|_B is a fixed point of KM. Moreover, if there exists a solution u of equation (1), then the functions v_k and w_k converge to u in H^1(Ω). And finally, for approximate boundary data (σ_δ, τ_δ), stopping the iteration early according to a discrepancy principle leads to a regularization strategy for solving the Cauchy problem.
There is a dual formulation of Kozlov-Maz'ya iteration obtained via the operator MK, which performs the two solves of KM in the reverse order: one starts with an estimate of the Dirichlet data u|_B rather than the Neumann data ∂_n u|_B. In the following two sections we will show that the original form of Kozlov-Maz'ya iteration and its less-studied dual formulation can be exhibited as forms of Landweber iteration.
The maps D, N, KM, and MK are affine, and it will be useful to have notation for their linear parts. Let D_0, N_0, KM_0, and MK_0 be defined as before but with homogeneous data (f = 0, σ = 0, τ = 0). It then follows that for any ψ, ψ̃ ∈ H^{−1/2}(B) and φ, φ̃ ∈ H^{1/2}(B),

N(ψ) − N(ψ̃) = N_0(ψ − ψ̃),  KM(ψ) − KM(ψ̃) = KM_0(ψ − ψ̃),

and similarly for D and MK.

Landweber iteration (Neumann version)

We will show in this section that the previously described alternating technique is, in fact, the Landweber method applied to the operator equation

N(ψ) = τ,     (3)

where N(ψ) = ∂_n N(ψ)|_S. Note that if u is a solution of the Cauchy problem (1), then N(∂_n u|_B) = τ. Moreover, if ψ solves equation (3), then u = N(ψ) solves the Cauchy problem (1).
Let N_0 be defined similarly to N using N_0 in place of N (i.e. using homogeneous data). Since N(ψ) = N_0(ψ) + N(0), the operator equation (3) can be rewritten

N_0(ψ) = τ − N(0).     (4)

The Landweber method provides a regularization technique for solving equation (4) that proceeds by minimizing the functional

J(ψ) = (1/2) ||N_0(ψ) − (τ − N(0))||^2_{H^{−1/2}(S)}

using a steepest descent algorithm. The gradient of J at ψ is N_0^*(N_0(ψ) − (τ − N(0))), and a steepest descent step is

L_N(ψ) = ψ − a N_0^*(N_0(ψ) − (τ − N(0))),     (5)

where a is a fixed constant chosen so that 0 < a ≤ 1/||N_0||^2. The Landweber method then produces iterates ψ_{k+1} = L_N(ψ_k) starting from an initial value ψ_0, and the functions ψ_k are then approximate solutions of the original operator equation (3).
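The Landweber step just described has a simple finite-dimensional skeleton. The following numpy sketch is purely illustrative and is not the paper's function-space setting: a matrix A stands in for N_0 and a vector g for the data τ − N(0).

```python
import numpy as np

# A minimal finite-dimensional sketch of the Landweber step, not the
# paper's function-space setting: the matrix A stands in for N_0 and the
# vector g for the data tau - N(0). All names here are illustrative.

def landweber(A, g, a, iters, psi0=None):
    """Iterate psi_{k+1} = psi_k - a * A^T (A psi_k - g)."""
    psi = np.zeros(A.shape[1]) if psi0 is None else psi0.copy()
    for _ in range(iters):
        psi = psi - a * (A.T @ (A @ psi - g))
    return psi

# A small consistent example: the iterates converge to the exact solution.
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
g = A @ np.array([1.0, -2.0])           # data generated from psi = (1, -2)
a = 1.0 / np.linalg.norm(A, 2) ** 2     # admissible relaxation factor
psi = landweber(A, g, a, iters=2000)
```

In the paper's setting each product with A or its transpose corresponds to a boundary-value solve, and the relaxation constraint 0 < a ≤ 1/||A||^2 is the analogue of the condition on a above.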
Computation using the Landweber method requires knowledge of the adjoint N_0^*, which has a natural form given our chosen inner products.

Lemma 1. Let ξ ∈ H^{−1/2}(S), and define q to be the solution of

−∆q = 0 in Ω
∂_n q = ξ on S
q = 0 on B.

Then N_0^*(ξ) = ∂_n q|_B.
Proof. Let ψ ∈ H^{−1/2}(B) and ξ ∈ H^{−1/2}(S) be arbitrary. We then let v, w, q, and r be the solutions of the following boundary-value problems:

−∆v = 0 in Ω, v = 0 on S, ∂_n v = ψ on B;
−∆w = 0 in Ω, ∂_n w = ∂_n v on S, w = 0 on B;
−∆q = 0 in Ω, ∂_n q = ξ on S, q = 0 on B;
−∆r = 0 in Ω, r = 0 on S, ∂_n r = ∂_n q on B.

Then

⟨N_0(ψ), ξ⟩_{H^{−1/2}(S)} = ∫_Ω ∇w · ∇q = ∫_S (∂_n w) q = ∫_S (∂_n v) q = ∫_Ω ∇v · ∇q.     (6)

The first equality follows from the definition of the inner product on H^{−1/2}(S), and we have used q = 0 on B and ∂_n v = ∂_n w on S in the subsequent equalities. Similarly,

⟨ψ, ∂_n q|_B⟩_{H^{−1/2}(B)} = ∫_Ω ∇v · ∇r = ∫_B v ∂_n r = ∫_B v ∂_n q = ∫_Ω ∇v · ∇q.     (7)

From equalities (6) and (7) we conclude that ⟨N_0(ψ), ξ⟩_{H^{−1/2}(S)} = ⟨ψ, ∂_n q|_B⟩_{H^{−1/2}(B)}, and since ψ ∈ H^{−1/2}(B) is arbitrary, N_0^*(ξ) = ∂_n q|_B as claimed.
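In a discretization, an adjoint formula like Lemma 1 can be sanity-checked numerically. The sketch below is an illustrative stand-in: MS and MB are arbitrary symmetric positive-definite Gram matrices playing the role of the chosen inner products on the S and B spaces, and A is an arbitrary matrix playing the role of a discretized N_0.

```python
import numpy as np

# Sketch of a numerical check of an adjoint formula such as Lemma 1.
# MS and MB are stand-in SPD Gram matrices for the inner products on the
# S and B spaces; A is an arbitrary stand-in for a discretized N_0. The
# M-adjoint is A* = MB^{-1} A^T MS, and we test <A psi, xi>_S = <psi, A* xi>_B.

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))
MS = np.eye(4) + 0.1 * np.ones((4, 4))   # SPD Gram matrix on "S"
MB = np.diag([1.0, 2.0, 3.0])            # SPD Gram matrix on "B"
Astar = np.linalg.solve(MB, A.T @ MS)    # discrete adjoint w.r.t. MS, MB

psi = rng.normal(size=3)
xi = rng.normal(size=4)
lhs = (A @ psi) @ (MS @ xi)              # <A psi, xi>_S
rhs = psi @ (MB @ (Astar @ xi))          # <psi, A* xi>_B
```

Such a test on random vectors catches sign and transposition errors in an implementation of the adjoint before it is used inside an iteration.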
Proposition 1. The alternating method of [KM90] is exactly the Landweber method (with the constant a = 1) applied to minimizing the functional J.
Proof. Lemmas 2 and 3 proved below establish that

KM_0 = I − N_0^* N_0     (8)

and

KM(0) = N_0^*(τ − N(0)).     (9)

Hence

L_N(ψ) = ψ − N_0^*(N_0(ψ) − (τ − N(0))) = (I − N_0^* N_0)(ψ) + N_0^*(τ − N(0)) = KM_0(ψ) + KM(0) = KM(ψ),

where in the last step we have used the decomposition of the affine map KM.
It remains to establish equations (8) and (9), which is done in the following two lemmas.

Lemma 2. KM_0 = I − N_0^* N_0.
Proof. Let ψ ∈ H^{−1/2}(B) and let v and w solve the boundary-value problems

−∆v = 0 in Ω, v = 0 on S, ∂_n v = ψ on B;
−∆w = 0 in Ω, ∂_n w = 0 on S, w = v on B.

Then N_0(ψ) = ∂_n v|_S and KM_0(ψ) = ∂_n w|_B. On the other hand, let q = v − w. Then q is harmonic, q = 0 on B, and ∂_n q = N_0(ψ) on S, so Lemma 1 gives N_0^* N_0(ψ) = ∂_n q|_B = ψ − KM_0(ψ).

Lemma 3. KM(0) = N_0^*(τ − N(0)).
Proof. Let v and w be the solutions of

−∆v = f in Ω, v = σ on S, ∂_n v = 0 on B;
−∆w = f in Ω, ∂_n w = τ on S, w = v on B.

Then N(0) = ∂_n v|_S and KM(0) = ∂_n w|_B. The difference q = w − v is harmonic, equals zero on B, and satisfies ∂_n q = τ − N(0) on S. By Lemma 1, N_0^*(τ − N(0)) = ∂_n q|_B = ∂_n w|_B − ∂_n v|_B = KM(0).

Remark 1. It follows from Lemma 2 that the operator KM_0 : H^{−1/2}(B) → H^{−1/2}(B) is self-adjoint. This fact was proved independently in [KM90], and it played a central role in their results. The new observation in the current work is that this self-adjointness arises because of equation (8) and that Kozlov-Maz'ya iteration is intimately connected with a minimization procedure.
We have now proved all of the ingredients of Proposition 1.
To justify the use of Landweber iteration with relaxation factor a = 1 in equation (5), we require an estimate of the norm of N_0.

Lemma 4. ||N_0|| ≤ 1.
Proof. Let ψ ∈ H^{−1/2}(B) and let v and r be solutions of the following boundary-value problems:

−∆v = 0 in Ω, v = 0 on S, ∂_n v = ψ on B;
−∆r = 0 in Ω, ∂_n r = ∂_n v on S, r = 0 on B.

Then ||ψ||^2_{H^{−1/2}(B)} = ∫_Ω |∇v|^2, and since r is harmonic, r|_B = 0, and ∂_n r = ∂_n v on S, we also have ||N_0(ψ)||^2_{H^{−1/2}(S)} = ∫_Ω |∇r|^2. Now

∫_Ω |∇r|^2 = ∫_S r ∂_n r = ∫_S r ∂_n v = ∫_Ω ∇r · ∇v ≤ (∫_Ω |∇r|^2)^{1/2} (∫_Ω |∇v|^2)^{1/2},

and hence ||N_0(ψ)|| ≤ ||ψ||.
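In a discrete setting, a norm bound like the one just proved (||N_0|| ≤ 1) can be checked directly by estimating the spectral norm with power iteration on A^T A. The matrix below is an arbitrary stand-in for a discretized operator; names are illustrative.

```python
import numpy as np

# Sketch: checking a norm bound such as ||N_0|| <= 1 in a discrete setting
# by estimating ||A||_2 with power iteration on A^T A. The diagonal matrix
# below is an arbitrary stand-in for a discretized operator.

def operator_norm(A, iters=200, seed=1):
    """Estimate the spectral norm of A by power iteration on A^T A."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=A.shape[1])
    for _ in range(iters):
        x = A.T @ (A @ x)
        x = x / np.linalg.norm(x)
    return np.sqrt(x @ (A.T @ (A @ x)))

A = np.diag([0.9, 0.5, 0.2])   # a contraction: spectral norm 0.9 <= 1
nrm = operator_norm(A)
```

A norm estimate below 1 confirms (for the discretization at hand) that the relaxation factor a = 1 is admissible.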
It is well known that Landweber iteration, together with a stopping principle for the iterations, is a regularization method for solving equation (3) [EHN00]. Translating these standard results to Kozlov-Maz'ya iteration, we obtain the following (which can be deduced, except for the statement concerning optimal convergence rates, from the original paper [KM90]).

Proposition 2. Let u be a solution of the Cauchy problem (1) with Dirichlet data σ and Neumann data τ. Suppose (τ_δ) are approximations of τ such that ||τ − τ_δ||_{H^{−1/2}(S)} < δ. Let ψ_0 ∈ H^{−1/2}(B) and let ψ^δ_n be the first Kozlov-Maz'ya iterate for the data (σ, τ_δ) starting from ψ_0 such that ||τ_δ − N(ψ^δ_n)||_{H^{−1/2}(S)} < λδ, where λ > 1 is a fixed constant. Then ψ^δ_n → ∂_n u|_B in H^{−1/2}(B) as δ → 0, and the rate of convergence is order optimal.
The previous result assumes that the Dirichlet data σ is known exactly. If σ is only approximately known, this corresponds to error in the operator N. It is straightforward to transform this error into increased uncertainty in the right-hand side of equation (4). To do this, we define F_DN : H^{1/2}(S) → H^{−1/2}(S) to be the Dirichlet-to-Neumann map taking μ to ∂_n v|_S, where v is harmonic, v = μ on S, and ∂_n v = 0 on B. A simple computation (left to the reader) shows the following.
Lemma 5. Suppose that σ̃ ∈ H^{1/2}(S) satisfies ||σ − σ̃||_{H^{1/2}(S)} < ε. Let Ñ be the corresponding operator in equation (3), and let τ̃ = F_DN(σ̃ − σ). Then for all ψ ∈ H^{−1/2}(B),

Ñ(ψ) = N(ψ) + τ̃,  where ||τ̃||_{H^{−1/2}(S)} ≤ ||F_DN|| ε.

As a consequence, the corresponding termination condition for Kozlov-Maz'ya iteration when there is error in both τ and σ should be adjusted to

||τ_δ − Ñ(ψ^δ_n)||_{H^{−1/2}(S)} < λ(δ + ||F_DN|| ε).

It is worth remarking that this is an inconvenient criterion to work with in practice: it requires both an estimate for the operator norm of the Dirichlet-to-Neumann map F_DN as well as the size of the error in τ in the space H^{−1/2}(S), which has a rather abstract norm. In fact, in many applications (including the work in [MTAS08] that motivates this paper) the Neumann data τ is known exactly (e.g. τ = 0) but there is error in the Dirichlet data σ.
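The discrepancy-type termination just discussed can be sketched in a few lines: iterate Landweber on noisy data g_δ and stop at the first k for which the residual norm is at most λδ. The example below is a hedged finite-dimensional illustration with a diagonal stand-in operator, not the boundary-value setting of the paper.

```python
import numpy as np

# Sketch of the Morozov discrepancy principle as a stopping rule for
# Landweber iteration on noisy data: stop at the first k for which
# ||A psi_k - g_delta|| <= lam * delta with lam > 1. The diagonal matrix
# is an arbitrary stand-in operator; names are illustrative.

def landweber_discrepancy(A, g_delta, delta, lam=1.1, max_iter=10000):
    a = 1.0 / np.linalg.norm(A, 2) ** 2
    psi = np.zeros(A.shape[1])
    for k in range(max_iter):
        residual = A @ psi - g_delta
        if np.linalg.norm(residual) <= lam * delta:
            return psi, k
        psi = psi - a * (A.T @ residual)
    return psi, max_iter

A = np.diag([1.0, 0.5, 0.1])                  # mildly ill-conditioned
g = A @ np.array([1.0, 1.0, 1.0])             # exact data
noise = np.array([1e-3, -1e-3, 1e-3])
delta = np.linalg.norm(noise)
psi, k = landweber_discrepancy(A, g + noise, delta)
```

Stopping early in this way is what prevents the iteration from fitting the noise in the small singular directions.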
Hence we now consider the Dirichlet version of operator equation (3).

Landweber iteration (Dirichlet version)
In the previous section we recast Kozlov-Maz'ya iteration with operator KM as a form of Landweber iteration by considering a map from Neumann data on B to Neumann data on S with fixed Dirichlet data on S. The dual formulation obtained by swapping the roles of Dirichlet and Neumann data corresponds to Kozlov-Maz'ya iteration with operator MK, but posing it requires a little care. The natural operator equation to consider is

D(φ) = σ,     (10)

where D(φ) = D(φ)|_S. Let φ̃ ∈ H^{1/2}(B) be a fixed estimate admitting an extension in H^{1/2}(∂Ω) that equals σ on S, and write φ = φ̃ + η. Defining D_0 using D_0 in place of D, we rewrite equation (10) as

D_0(η) = σ − D(φ̃).     (11)

We would like to consider D_0 : H^{1/2}(B) → H^{1/2}(S), but the challenge is to find inner products on these spaces such that the resulting adjoint D_0^* leads to a lemma analogous to Lemma 2. Unfortunately, this fails for the norm defined in Section 1.1.3, and it is not clear how to adjust that norm to remedy the situation. We therefore work instead with the spaces H^{1/2}_{00}.
Lemma 6. D_0 maps H^{1/2}_{00}(B) into H^{1/2}_{00}(S), and σ − D(φ̃ + η) ∈ H^{1/2}_{00}(S) for every η ∈ H^{1/2}_{00}(B).

Proof. The claims all follow from the following observation: if φ_1 and φ_2 are distributions in H^{1/2}(S) that admit extensions φ̃_1 and φ̃_2 in H^{1/2}(∂Ω) that are equal on B, then φ_1 − φ_2 ∈ H^{1/2}_{00}(S). Indeed, φ̃_1 − φ̃_2 lies in H^{1/2}(∂Ω), vanishes on B, and restricts to φ_1 − φ_2 on S.

Here we treat D_0 as a map from H^{1/2}_{00}(B) to H^{1/2}_{00}(S). Landweber iteration applied to equation (11) corresponds to starting with an initial estimate η_0 ∈ H^{1/2}_{00}(B) and computing subsequent iterates η_{k+1} = L_D(η_k). We then obtain approximate solutions φ_k = φ̃ + η_k of equation (10). We will show that the iterates φ_k are exactly the iterates produced by Kozlov-Maz'ya iteration with the operator MK starting from the initial estimate φ_0 = φ̃ + η_0. To do this, we first compute the adjoint of D_0 and then prove analogues of Lemmas 2 and 3.
Lemma 7. Let γ ∈ H^{1/2}_{00}(S), and let q and r be the solutions of the boundary-value problems

−∆q = 0 in Ω, q = γ on S, q = 0 on B;
−∆r = 0 in Ω, r = 0 on S, ∂_n r = −∂_n q on B.     (12)

Then D_0^*(γ) = r|_B.

Proof. Let φ ∈ H^{1/2}_{00}(B) and γ ∈ H^{1/2}_{00}(S) be arbitrary, and let v and w be solutions of the following boundary-value problems:

−∆v = 0 in Ω, ∂_n v = 0 on S, v = φ on B;
−∆w = 0 in Ω, w = 0 on S, w = φ on B.

The equation for w is well-posed since φ ∈ H^{1/2}_{00}(B) and hence the prescribed boundary values lie in H^{1/2}(∂Ω); a similar remark holds for q in equation (12).
Notice that v − w and q are harmonic, equal zero on B, and equal D_0(φ) and γ respectively on S. By the definition of the inner product on H^{1/2}_{00}(S) we conclude that

⟨D_0(φ), γ⟩_{H^{1/2}_{00}(S)} = ∫_Ω ∇(v − w) · ∇q.

But

∫_Ω ∇(v − w) · ∇q = −∫_B φ ∂_n q = ∫_B φ ∂_n r = ∫_Ω ∇w · ∇r = ⟨φ, r|_B⟩_{H^{1/2}_{00}(B)},

and since φ is arbitrary, D_0^*(γ) = r|_B.

Lemma 8. MK_0 = I − D_0^* D_0.

Proof. Let φ ∈ H^{1/2}_{00}(B) and let u, v, and w be solutions of the following boundary-value problems:

−∆u = 0 in Ω, ∂_n u = 0 on S, u = φ on B;
−∆v = 0 in Ω, v = 0 on S, v = φ on B;
−∆w = 0 in Ω, w = 0 on S, ∂_n w = ∂_n u on B.

Let q = u − v and r = v − w. Then q and r solve the systems (12) with γ = D_0(φ). Noting that u|_S = D_0(φ) and (∂_n v − ∂_n u)|_B = −∂_n q|_B, it follows from Lemma 7 that

D_0^* D_0(φ) = r|_B.

But v|_B = φ and w|_B = MK_0(φ). Hence D_0^* D_0(φ) = φ − MK_0(φ).

Lemma 9. MK(φ̃) = φ̃ + D_0^*(σ − D(φ̃)).

Proof. Let u, v, and w be solutions of the following boundary-value problems:

−∆u = f in Ω, u = σ on S, u = φ̃ on B;
−∆v = f in Ω, ∂_n v = τ on S, v = φ̃ on B;
−∆w = f in Ω, w = σ on S, ∂_n w = ∂_n v on B.

Notice that v|_S = D(φ̃) and w|_B = MK(φ̃). Let q = u − v and r = w − u. Then q and r solve the systems (12) with γ = σ − D(φ̃), and hence D_0^*(σ − D(φ̃)) = r|_B = MK(φ̃) − φ̃.

The proof of the following proposition exactly follows the proof of Proposition 1, using Lemmas 8 and 9 in place of Lemmas 2 and 3; we omit it.

Proposition 3. The iterates produced by the (Dirichlet) Landweber method and the (Dirichlet) Kozlov-Maz'ya alternating method are identical.
Just as with the Neumann formulation, the operator norm of D_0 is bounded above by 1, which justifies setting the relaxation constant a = 1 in our definition of L_D.

Lemma 10. ||D_0|| ≤ 1.
Proof. Let η ∈ H^{1/2}_{00}(B) and let u and v satisfy the following boundary-value problems:

−∆u = 0 in Ω, ∂_n u = 0 on S, u = η on B;
−∆v = 0 in Ω, v = 0 on S, v = η on B.

Then D_0(η) = u|_S and ||η||^2_{H^{1/2}_{00}(B)} = ∫_Ω |∇v|^2. Notice that u − v is harmonic, equals 0 on B, and equals u on S, so ||D_0(η)||^2_{H^{1/2}_{00}(S)} = ∫_Ω |∇(u − v)|^2. Since ∫_Ω ∇u · ∇(u − v) = 0 (as ∂_n u = 0 on S and u − v = 0 on B),

∫_Ω |∇v|^2 = ∫_Ω |∇u|^2 + ∫_Ω |∇(u − v)|^2 ≥ ∫_Ω |∇(u − v)|^2,

and hence ||D_0(η)|| ≤ ||η||.

Standard results for Landweber iteration (interpreted in the language of Kozlov-Maz'ya iteration) imply the following analogue of Proposition 2.
Proposition 4. Let u be a solution of the Cauchy problem (1) with Dirichlet data σ and Neumann data τ. Suppose (σ_δ) are approximations of σ such that ||σ − σ_δ||_{H^{1/2}_{00}(S)} < δ. Let φ_0 ∈ φ̃ + H^{1/2}_{00}(B) (i.e. let φ_0 admit an extension in H^{1/2}(∂Ω) that equals σ on S) and let φ^δ_n be the first Kozlov-Maz'ya iterate for the data (σ_δ, τ) starting from φ_0 such that ||σ_δ − D(φ^δ_n)||_{H^{1/2}_{00}(S)} < λδ, where λ > 1 is a fixed constant. Then φ^δ_n → u|_B as δ → 0. Moreover, the rate of convergence is order optimal. That is, if u|_B − φ̃ = (D_0^* D_0)^μ(ξ) for some μ > 0 and some ξ ∈ H^{1/2}_{00}(B), then ||φ^δ_n − u|_B||_{H^{1/2}_{00}(B)} = O(δ^{2μ/(2μ+1)}).

The previous result assumes that τ is known exactly. This actually holds in many applications of interest, where τ = 0 represents a stress-free or perfectly insulating boundary condition. In particular, it holds in the motivating problem from [MTAS08]. If τ is only known approximately, then error in τ can be rewritten as expanded error in σ in a procedure analogous to the one described in Lemma 5.
A more serious weakness of Proposition 4 is that it assumes ||σ − σ_δ||_{H^{1/2}_{00}(S)} → 0, which morally implies that the values of σ at the interface of S and B are known exactly. We would prefer to have a theorem treating the case ||σ − σ_δ||_{H^{1/2}(S)} → 0. Nevertheless, Proposition 2 has some application in this case as well.
Suppose σ_δ → σ in H^{1/2}(S), and let u_δ and v_δ be solutions of the following boundary-value problems, and let u_0 and v_0 be the solutions of these problems with σ_δ replaced with its true value σ.
if and only if w solves the Cauchy problem (13). Noting that v_δ|_S ∈ H^{1/2}_{00}(S), and that v_δ|_S → v_0|_S in H^{1/2}(S), Proposition 2 can be applied to the Cauchy problems (13). The stopping criterion then involves the operator norm of the map taking σ_δ ∈ H^{1/2}(S) to v_δ|_S ∈ H^{1/2}_{00}(S).

Conjugate gradient alternative
Since the Kozlov-Maz'ya alternating method is simply a form of the Landweber method, it becomes clear how it might be effectively accelerated. One standard, attractive choice is to use the conjugate gradient method. This strategy, together with the Morozov discrepancy principle, provides a fast, order-optimal regularization scheme (see, e.g., [Ha95]).
For definiteness we treat the Dirichlet case and consider the normal equation

D_0^* D_0(η) = D_0^*(σ − D(φ̃)).

The conjugate gradient algorithm for this problem is summarized in Algorithm 2 below; its implementation rests on the following observation.

Lemma 11. Let w be a harmonic function with w = 0 on B, and let z = −N_0(∂_n w|_B). Then z is harmonic, equals D_0^*(w|_S) on B, and equals zero on S.
Proof. That z is harmonic and equals zero on S follows immediately from the definition of N 0 . On the other hand, inspecting Lemma 7 with w playing the role of q and z playing the role of r in equations (12) we see that z| B = D * 0 (w| S ).
It is worth remarking that z in equation (14) satisfies the weak formulation z|_S = 0 and ∫_Ω ∇z · ∇χ = −∫_Ω ∇w · ∇χ for all test functions χ that equal zero on S. Hence the normal derivative ∂_n w need not be explicitly found when solving for z.
r_{k+1} = r_k − α s;
q_{k+1} = −N_0(∂_n r_{k+1}|_B);
β = ∫_Ω |∇q_{k+1}|^2 / ∫_Ω |∇q_k|^2;
d = q_{k+1} + β d;
k = k + 1;
end
Algorithm 2: Simplified Dirichlet conjugate gradient approach

When the main loop is terminated (e.g. using the discrepancy principle), the regularized solution of the Cauchy problem is then D(u_k|_B). Each iteration of the loop requires solving exactly two boundary-value problems (one for D_0 and one for N_0), just as for Kozlov-Maz'ya iteration. In Section 7 we demonstrate how the number of iterations needed for the conjugate gradient algorithm can be substantially less than for standard Kozlov-Maz'ya iteration.
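The algebraic skeleton behind Algorithm 2 is conjugate gradients applied to the normal equation A^T A φ = A^T g (often called CGNE). The numpy sketch below shows that skeleton with an arbitrary stand-in matrix A; in the paper's setting each product with A or A^T would instead require a boundary-value solve.

```python
import numpy as np

# The algebraic skeleton of the accelerated scheme: conjugate gradients
# applied to the normal equation A^T A phi = A^T g (CGNE). A is an
# arbitrary stand-in matrix; in the paper's setting each product with A
# or A^T would instead require a boundary-value solve.

def cgne(A, g, iters):
    phi = np.zeros(A.shape[1])
    r = A.T @ (g - A @ phi)        # residual of the normal equation
    d = r.copy()
    rr = r @ r
    for _ in range(iters):
        Ad = A.T @ (A @ d)
        alpha = rr / (d @ Ad)
        phi = phi + alpha * d
        r = r - alpha * Ad
        rr_new = r @ r
        d = r + (rr_new / rr) * d  # the beta update, as in Algorithm 2
        rr = rr_new
    return phi

A = np.array([[3.0, 1.0], [1.0, 2.0], [0.0, 1.0]])
g = A @ np.array([2.0, -1.0])      # consistent data
phi = cgne(A, g, iters=2)          # CG converges in at most n = 2 steps here
```

The ratio rr_new / rr plays the role of β in Algorithm 2, and the much faster convergence of CG relative to plain Landweber is what Section 7 observes numerically.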

Variations of the conjugate gradient approach
We have worked with solving the equation D(φ) = σ, where D : H^{1/2}(B) → H^{1/2}(S). By changing the source or target spaces to be L^2 spaces, one obtains three alternative possibilities for the conjugate gradient algorithm.
• [L^2(B) → L^2(S)] This variation was treated in [HL00]. Because the choice of domain has lower regularity than H^{1/2}, one expects the reconstructed solutions to exhibit lower regularity than those of Kozlov-Maz'ya iteration. Indeed, one step of the algorithm involves a boundary condition, equation (15), constructed from the normal derivative of a previously computed harmonic function v. This step has the effect of lowering the regularity of u and is perhaps responsible for the oscillations observed in [HL00], Figures 2, 4, and 6. In particular, we performed the reconstruction of [HL00] Figure 2 using the H^{1/2} → H^{1/2} method (as well as the H^{1/2} → L^2 method described below), and these oscillations are absent.
• [L^2(B) → H^{1/2}(S)] This approach appears in [Kn04], although it is not presented as such. That paper considers a functional G defined in terms of two auxiliary harmonic functions v and w. Noting that v − w is harmonic and equal to zero on B, we see that G can be rewritten as the functional being minimized by the Dirichlet Landweber procedure presented in Section 4. However, [Kn04] initially uses the L^2(B) gradient of G. Since G is not defined on all of L^2 (we do not expect solutions with L^2 boundary data to lie in H^1(Ω)), the L^2 gradient leads to a loss of regularity, and the algorithm has a step similar to equation (15). This trouble is ameliorated in [Kn04] by introducing a smoothing step, effectively recasting the domain as a subspace of H^1(B) and factoring the map through L^2(B). Since the norm used for H^1(B) involves a PDE defined on the domain boundary, the additional step adds some complication to the algorithm when compared to Kozlov-Maz'ya iteration.
• [H^{1/2}(B) → L^2(S)] This combination does not appear to have been previously addressed in the literature, and has some potential interest. Since the domain is H^{1/2}(B), we expect a reconstruction with higher regularity than the method of [HL00]. And since the range is L^2(S), the associated stopping principle will involve L^2 error estimates, which are much easier to obtain than the rather abstract H^{1/2} error estimates.
As in Section 4, we consider the operator equation

(ι ∘ D)(φ) = σ,     (16)

where D : H^{1/2}(B) → H^{1/2}(S) and where ι : H^{1/2}(S) → L^2(S) is the natural embedding. Note that we work with H^{1/2}(B) rather than the more awkward space H^{1/2}_{00}(B). Equation (16) can be rewritten

(ι ∘ D_0)(φ) = σ − (ι ∘ D)(0),

and to apply the Landweber or conjugate gradient methods we need to be able to compute (ι ∘ D_0)^*.
Proposition 5. Let γ ∈ L^2(S), and let w be the solution of

−∆w = 0 in Ω
∂_n w = γ on S
∂_n w = −(1/|B|) ∫_S γ on B     (17)
∫_B w = ∫_S γ.

Then (ι ∘ D_0)^*(γ) = w|_B.

Proof. We first note that there is a solution w ∈ H^1(Ω) of system (17). Indeed, the Neumann data (which belongs to L^2(∂Ω) ⊆ H^{−1/2}(∂Ω)) satisfies the compatibility condition ∫_{∂Ω} ∂_n w = 0. Hence the PDE admits a solution in H^1(Ω) determined uniquely up to a constant, and the final equation then determines the value of the constant.

Now let φ ∈ H^{1/2}(B) and γ ∈ L^2(S) be arbitrary. Let w be the solution of system (17) and let v and q solve

−∆v = 0 in Ω, ∂_n v = 0 on S, v = φ on B;
−∆q = 0 in Ω, ∂_n q = 0 on S, q = w on B.

Then D_0(φ) = v|_S and

⟨φ, w|_B⟩_{H^{1/2}(B)} = ∫_Ω ∇v · ∇q + (1/|B|) (∫_B φ)(∫_B w).

Moreover, ∫_Ω ∇v · ∇q = ∫_Ω ∇v · ∇w, since ∂_n v = 0 on S and q = w on B. Combining all these equations we conclude

⟨φ, w|_B⟩_{H^{1/2}(B)} = ∫_Ω ∇v · ∇w + (1/|B|)(∫_B φ)(∫_S γ) = ∫_S v γ = ⟨(ι ∘ D_0)(φ), γ⟩_{L^2(S)}.
The conjugate gradient algorithm for this problem starts with an initial value φ 0 ∈ H 1/2 (B) and a corresponding u 0 = D(φ 0 ). Given γ ∈ L 2 (S), we define W 0 (γ) = w| B where w is the solution of system (17). We then obtain an analogue of Algorithm 2 by tracking elements of H 1/2 (B) as harmonic functions with zero Neumann data on S.
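The pure Neumann system (17) is solvable only up to a constant, with the constant fixed by the final (mean-value) condition. The finite-dimensional analogue, sketched below with a small graph Laplacian as an arbitrary stand-in, is a singular symmetric system L x = b with compatible right-hand side, solved in the least-squares sense and then shifted to meet a prescribed mean.

```python
import numpy as np

# Sketch of the solvability structure of system (17): a compatible singular
# Neumann-type system has solutions only up to a constant, and a final
# mean-value condition pins the constant down. The small graph Laplacian L
# below is an arbitrary finite-dimensional stand-in.

n = 5
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0                  # "Neumann" ends: rows sum to zero
b = np.array([1.0, 0.0, 0.0, 0.0, -1.0])   # compatible: entries sum to zero

x = np.linalg.lstsq(L, b, rcond=None)[0]   # one solution of the singular system
target_mean = 2.0                          # prescribed mean, as in (17)
x = x - x.mean() + target_mean             # shifting by a constant keeps L x = b
```

The compatibility condition (entries of b summing to zero) mirrors the requirement ∫_{∂Ω} ∂_n w = 0 in the proof of Proposition 5.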

Numerical Results
Let Ω be the domain in the plane bounded above by the x-axis for −1 ≤ x ≤ 1 and bounded below by a parabola passing through the points (−1, 0), (1, 0), and (0, −d) (Figure 1, left). We take S to be the region on the x-axis with −1 < x < 1 and B to be the portion of the boundary with y < 0. The Cauchy problem to solve is

−∆u = f_0 in Ω
u = σ on S
∂_n u = 0 on S,

where f_0 is a constant. This is a model for a glaciological inverse problem where Ω is the cross-section of a glacier and u represents the component of ice velocity orthogonal to the cross-section. The homogeneous Neumann condition arises as a consequence of a zero-stress hypothesis at the ice surface S, and surface velocity measurements are represented by σ. We consider synthetic data σ obtained by numerically solving the problem

−∆u = f_0 in Ω
∂_n u = 0 on S     (18)
u = φ on B,

where φ is a prescribed function; then σ = u|_S. We used a bump-function φ with peak value u_0 and standard deviation s. Figure 1 (right) illustrates values of u on B and S.
All computations were done using the finite element method (and in particular using the FEniCS [LW10] framework). In the following we used depth d = 1/2, forcing term f 0 = 8, peak basal speed u 0 = 1/2, and standard deviation s = 1/3. The forcing term was selected so that the peak value u max of the solution u of system (18) was approximately 1.
The surface measurements σ were perturbed by spatially uncorrelated Gaussian noise with standard deviation u_max · p/100, where p is a constant describing 'percent-noise'. We then applied standard Kozlov-Maz'ya iteration, the H^{1/2} → H^{1/2} conjugate gradient method (Algorithm 2), and the H^{1/2} → L^2 conjugate gradient method (Algorithm 3) to solve the inverse problem. The starting estimate was φ_0 = 0 for all reconstructions.
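The data perturbation step can be sketched as follows. The bump-shaped profile here is an illustrative stand-in for the computed trace σ, not the finite element solution used in the paper; the noise level matches the u_max · p/100 prescription above.

```python
import numpy as np

# Sketch of the data perturbation: spatially uncorrelated Gaussian noise
# with standard deviation u_max * p / 100. The bump-shaped profile below is
# an illustrative stand-in for the computed trace sigma, not the finite
# element solution used in the paper.

rng = np.random.default_rng(42)
x = np.linspace(-1.0, 1.0, 201)
u0, s = 0.5, 1.0 / 3.0
sigma = u0 * np.exp(-x**2 / (2 * s**2))        # stand-in surface data
u_max, p = 1.0, 1.0                            # peak speed and percent-noise
sigma_delta = sigma + rng.normal(0.0, u_max * p / 100.0, size=x.size)
rms_error = np.linalg.norm(sigma_delta - sigma) / np.sqrt(x.size)
```

The rms error of such a perturbation is readily estimated (here it is close to 0.01), which is exactly why an L^2 discrepancy criterion is practical while an H^{1/2} one is not.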
Estimating error in the H 1/2 (S) norm for use in the discrepancy principle is impractical, whereas we have good estimates for the error in the L 2 (S) norm. For all three algorithms, we therefore terminated early based on the L 2 discrepancy. This is formally justified only for the H 1/2 → L 2 algorithm. Nevertheless, the other two algorithms using this modified discrepancy principle appeared to remain stable. It would be interesting to have a formal proof of this observation. Figure 2 shows a graph of the L 2 discrepancies ||u − u k || L 2 (S) for the p = 0.1 reconstruction for each of the three algorithms. As would be expected, the conjugate gradient based algorithms required substantially fewer iterations to reach their target discrepancies. This improvement was less substantial for the larger (and more physically relevant) value of p = 1 where the conjugate gradient algorithms each used six iterations and Kozlov-Maz'ya iteration used nine iterations.
All three algorithms gave qualitatively similar reconstructions for p = 1 and p = 0.1 (Figure 3). The algorithms using H 1/2 (S) for the surface norm tended to have stronger oscillations near the interface of S and B. Figure 4 shows a detail of these oscillations near the boundary point (1, 0) in the p = 0.1 case. Although present in all three reconstructions, the oscillations were more damped by the H 1/2 → L 2 algorithm.
Of the three algorithms used, the H^{1/2} → L^2 algorithm had a number of slight advantages. In addition to having the speed of the conjugate gradient algorithm, it has a provably rigorous stopping principle based on an easily obtained error estimate, and it generated subtly better reconstructions near the interface of S and B.
Appendix

We define H^s_{00}(S) to be the closure in H^s(∂Ω) of the Lipschitz functions with compact support in S. In particular, H^s_{00}(S) is a closed subspace of H^s(∂Ω), though we will commonly identify such functions as elements of H^s(S). Because of the regularity of the common boundary Π, H^{−s}_{00}(S) is naturally isomorphic to the dual of H^s(S) via the pairing

(ψ, v) = ⟨ψ, V⟩,

where V ∈ H^s(∂Ω) is any function with V|_S = v. By reflexivity we also have an isomorphism between H^s(S) and H^{−s}_{00}(S)^*. Note that [Mc00] denotes H^s_{00} by H̃^s.

Mixed Boundary Value Problems
We wish to solve

−∆u = f in Ω
u = φ on S     (19)
∂_n u = τ on B.
Let H^1(Ω, S) = {u ∈ H^1(Ω) : u|_S = 0}. This is a closed subspace of H^1(Ω) (being the kernel of restriction to the boundary followed by restriction to S). For u, v ∈ H^1(Ω, S) let

a(u, v) = ∫_Ω ∇u · ∇v.

This is evidently a continuous bilinear form. Moreover, it is strongly coercive so long as S is nonempty (since we have assumed additionally that S is open). The argument is completely analogous to the corresponding one for the well-known case S = ∂Ω. Now suppose f ∈ (H^1(Ω))^*, φ ∈ H^{1/2}(S), and τ ∈ H^{−1/2}(B). A weak solution of equation (19) is a function u ∈ H^1(Ω) such that u = φ on S and such that

a(u, v) = ∫_Ω f v + ∫_B τ v     (20)

for all v ∈ H^1(Ω, S). The integrals on the right-hand side of equation (20) are to be interpreted as shorthand for the application of linear functionals. In particular, the integral ∫_B τ v denotes the pairing of τ ∈ H^{−1/2}(B) with v|_B ∈ H^{1/2}_{00}(B).

Let ψ ∈ H^{−1/2}(S) and let u be the solution of

−∆u = 0 in Ω
∂_n u = ψ on S
u = 0 on B.