Generalized Tikhonov Method and Convergence Estimate for the Cauchy Problem of the Modified Helmholtz Equation with Nonhomogeneous Dirichlet and Neumann Data

Abstract: We investigate a Cauchy problem for the modified Helmholtz equation with nonhomogeneous Dirichlet and Neumann data. This problem is ill-posed, and regularization techniques are required to stabilize numerical computations. We establish a conditional stability result under an a priori assumption on the exact solution. A generalized Tikhonov method is proposed to solve the problem; we select the regularization parameter by a priori and a posteriori rules and derive sharp convergence estimates for this method. Numerical experiments are implemented to verify that our regularization method is practicable and effective.


Introduction
In practical and theoretical fields such as the Debye-Hückel theory, implicit marching schemes for the heat equation, and the linearization of the Poisson-Boltzmann equation, the modified Helmholtz equation has many important applications (see [1][2][3][4]). For this reason, the forward problem for this equation has attracted extensive attention and has been studied deeply over the past century. However, in scientific research there also arise inverse problems for this equation: often the data on the entire boundary are unknown, and only data on part of the boundary, or at certain interior points of the domain, can be measured. The Cauchy problem of the modified Helmholtz equation belongs to this class of inverse problems. In the present article, we study the Cauchy problem of the modified Helmholtz equation outlined in Equation (1).
This paper establishes the conditional stability of problems (2) and (3), and constructs a generalized Tikhonov regularization method to solve these two problems (see Section 3). Our work is not only an extension of the boundary (or revised) Tikhonov method [20], but also a supplement to the one in [27]. In [27], the author presented a generalized Tikhonov method for an abstract Cauchy problem with inhomogeneous Dirichlet and Neumann data in a bounded domain and derived a priori convergence results for the regularized solutions, but did not establish a posteriori convergence estimates. In this work, we derive sharp a priori and a posteriori convergence results for our regularized solutions and give an a posteriori selection rule for the regularization parameter, which is relatively rare in the literature on the Cauchy problem of the modified Helmholtz equation.
The paper is organized as follows: Section 2 derives the conditional stability results for (2) and (3). Section 3 constructs the regularization methods, and Section 4 states some preliminary results. In Section 5, sharp a priori and a posteriori convergence estimates are established. In Section 6, numerical experiments are carried out to verify the computational performance of the regularized solution. Section 7 gives conclusions and a corresponding discussion.

Conditional Stability
We know that (2) and (3) are both ill-posed in the sense of Hadamard: their solutions do not depend continuously on the given Cauchy data. However, in the study of inverse problems, by assuming a suitable a priori condition on the solution, one can often obtain stability for the considered problem, i.e., conditional stability (see [28][29][30]). Below, we state and prove the conditional stability results for problems (2) and (3). Define the space D_ξ^γ as in (4), where ⟨·, ·⟩ denotes the inner product in L²(0, π) and X_n := X_n(x) = √(2/π) sin(nx) are the eigenfunctions in L²(0, π). According to (4), we define the norm for the space D_ξ^γ as in (5). Using separation of variables, the solutions of (2) and (3) can be expressed as (6) and (7), respectively.

Theorem 1. Let E > 0, K = 1 + k², and let u(T, x) satisfy the a priori bound condition (8). Then, for each fixed 0 < y ≤ T, the estimate (9) holds.

Proof of Theorem 1. Note that, for 0 < y ≤ T and n ≥ 1, e^{√(n²+k²) y}/2 ≤ cosh(√(n²+k²) y) ≤ e^{√(n²+k²) y} and n² + k² ≥ 1 + k²; then (9) follows from (6), (8) and the Hölder inequality.

Theorem 2. Suppose that v(T, x) satisfies the a priori condition (10). Then, for each fixed 0 < y ≤ T, we have the estimate (11) of the form ‖v(y, ·)‖_{L²(0,π)} ≤ 2^{y/(2T)} ⋯.

Proof of Theorem 2. For n ≥ 1, notice that sinh(√(n²+k²) y) ≤ e^{√(n²+k²) y}, that n² + k² ≥ 1 + k² =: K, and that sinh admits a matching lower bound of exponential type; then, from (7), (10) and the Hölder inequality, we obtain the required bound, from which the conditional stability result (11) follows.
In considering an inverse problem, it is necessary to study conditional stability, which has important theoretical significance: from a stability result one can often obtain the uniqueness of the solution and the convergence estimate of a regularization method. A common stability result has the form ‖f‖ ≤ ω(‖g‖), where ω, called the stability function, is monotonically increasing, nonnegative, and satisfies ω(δ) → 0 as δ → 0. Stability results mainly come in two types: (1) Hölder type (ω(δ) = δ^θ, θ ∈ (0, 1)); (2) logarithmic type (ω(δ) = (ln(1/δ))^{−1}). A Hölder-type result tends to zero quickly as δ → 0, whereas a logarithmic-type result decays relatively slowly. We now interpret the conditional stability results of Theorems 1 and 2 in detail. We point out that, in establishing a conditional stability result, the a priori assumption should be imposed appropriately: if the a priori condition is too strong, then the derived result depends excessively on a priori information about the solution; if it is too weak, then the conditional stability estimate cannot be derived easily. From (9) we see that, under the a priori assumption (8), the solution u depends continuously on the Cauchy data ϕ; (11) indicates that the solution v depends continuously on the data ψ under the a priori condition (10); the constants involved depend on γ, y, K, T. From the preceding description, the stability results (9) and (11) both belong to the Hölder type, and based on these two estimates we derive the a posteriori convergence estimates for the regularization methods in Section 5.

Regularization Method
There is a large number of recent papers on conditional stability estimates combined with variational regularization methods, based more or less on reference [29]. In recent years, there have been new works and results in this field, in Hilbert spaces, Hilbert scales, and Banach space settings; see [31][32][33][34][35], etc.
From (6) and (7), we know that the factors cosh(√(n²+k²) y) and sinh(√(n²+k²) y)/√(n²+k²) are unbounded as n → ∞, which can amplify the errors in the measured data, so problems (2) and (3) are both ill-posed. In the following, we design regularized methods to restore the stability of the solutions given by (6) and (7). Our method focuses on generalized Tikhonov regularization under conditional stability estimates.
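The amplification described above can be made concrete with a short computation. The sketch below (our illustration, with hypothetical values of k, y and δ) shows that a perturbation of size δ in the n-th Fourier mode of the data is multiplied by cosh(√(n²+k²) y), which grows like e^{ny}/2:

```python
import numpy as np

# Ill-posedness of problem (2): an error of size delta in the n-th Fourier
# coefficient of the data is amplified by cosh(sqrt(n^2 + k^2) * y) in the
# solution (see (6)); the factor grows like e^{n y}/2 as n increases.
k, y, delta = 0.5, 1.0, 1e-3   # hypothetical wave number, depth, noise level
for n in (1, 5, 10, 20):
    amp = np.cosh(np.sqrt(n**2 + k**2) * y)
    print(f"n = {n:2d}: amplification = {amp:.3e}, error in u = {delta * amp:.3e}")
```

Even at the modest frequency n = 20 the factor exceeds 10⁸, so a 10⁻³ data error destroys the solution without regularization.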

Regularization Method for Problem (2)
For all k > 0, following the idea of [27], we can equivalently transform (2) into an operator equation (12), where the operator is linear, self-adjoint, bounded, and compact, with eigenvalues 1/cosh(√(n²+k²) y); the operator L is linear, self-adjoint, and positive definite, with eigenvalues n² + k² and eigenfunctions X_n, respectively. Let u^δ(0, x) = ϕ^δ(x) be the noisy data and γ ≥ 1. We solve the minimization problem (13) to construct a generalized Tikhonov regularized solution u_α^δ(y, x); hence u_α^δ(y, x) is the solution of the Euler equation (14). From (14), the regularized solution of (2) can be written as (15), where ϕ_n^δ = ⟨ϕ^δ, X_n⟩_{L²(0,π)}, the noisy data ϕ^δ satisfies (16), δ denotes the bound of the measurement error, and α is the regularization parameter. Note that, in a special case, (15) reduces to a boundary (or revised) Tikhonov solution (see [20], etc.), so our work extends previous ones.
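A minimal numerical sketch of this construction follows. Since the exact formula (15) is not reproduced here, we assume the spectral form that results from minimizing ‖Ku − ϕ^δ‖² + α‖L^{γ/2}u‖² with the eigenvalues stated above, namely u_n = cosh(s_n y) / (1 + α(n²+k²)^γ cosh²(s_n y)) · ϕ_n^δ with s_n = √(n²+k²); the function name and parameter values are our own:

```python
import numpy as np

# Sketch of the generalized Tikhonov filter for problem (2), assuming the
# Euler equation yields the spectral representation
#   u_n = cosh(s_n y) / (1 + alpha * (n^2 + k^2)^gamma * cosh(s_n y)^2) * phi_n,
# s_n = sqrt(n^2 + k^2), obtained from minimizing
# ||K u - phi||^2 + alpha * ||L^{gamma/2} u||^2 with the stated eigenvalues.
def tikhonov_solution(phi_n, k, y, alpha, gamma):
    """Map noisy Fourier coefficients of the data to those of u(y, .)."""
    n = np.arange(1, len(phi_n) + 1)
    s = np.sqrt(n**2 + k**2)
    c = np.cosh(s * y)
    return c / (1.0 + alpha * (n**2 + k**2)**gamma * c**2) * phi_n

# With alpha = 0 the multiplier is cosh(s_n y), unbounded in n; with
# alpha > 0 the filter is bounded by 1 / (2 sqrt(alpha * (n^2+k^2)^gamma)).
phi = np.zeros(50); phi[0] = 1.0          # hypothetical data phi(x) = X_1(x)
u = tikhonov_solution(phi, k=0.5, y=1.0, alpha=1e-4, gamma=2)
```

The bound in the last comment follows from the elementary inequality c/(1 + ac²) ≤ 1/(2√a), which is exactly why the regularized multiplier stays uniformly bounded in n.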

Regularization Method for Problem (3)
As in Section 3.1, for all k > 0, we can convert (3) into an operator equation (17), where the operator is linear, self-adjoint, bounded, and compact, with eigenvalues given by the sinh kernel in (7). Let v^δ(0, x) = ψ^δ(x) be the noisy data and γ ≥ 1. We solve the minimization problem (18) to design a generalized Tikhonov regularized solution of (3); using the first-order optimality condition, we obtain that the regularized solution v_β^δ(y, x) satisfies the Euler equation (19). From (19), we can define the regularized solution of (3) as (20), where ψ_n^δ = ⟨ψ^δ, X_n⟩_{L²(0,π)}, the noisy data ψ^δ satisfies (21), δ denotes the bound of the measurement error, and β is the regularization parameter.
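The construction for problem (3) admits an analogous sketch. We again assume a spectral form not reproduced in (20): with g_n = sinh(√(n²+k²) y)/√(n²+k²) (the factor appearing in (7)), generalized Tikhonov minimization gives v_n = g_n / (1 + β(n²+k²)^γ g_n²) · ψ_n^δ; the function name below is hypothetical:

```python
import numpy as np

# Parallel sketch for problem (3), assuming the regularized solution has
# the spectral form
#   v_n = g_n / (1 + beta * (n^2 + k^2)^gamma * g_n^2) * psi_n,
# where g_n = sinh(sqrt(n^2 + k^2) y) / sqrt(n^2 + k^2); this mirrors the
# filter for problem (2) with the cosh kernel replaced by the sinh kernel.
def tikhonov_solution_neumann(psi_n, k, y, beta, gamma):
    """Map noisy Neumann-data coefficients to those of v(y, .)."""
    n = np.arange(1, len(psi_n) + 1)
    lam = n**2 + k**2
    g = np.sinh(np.sqrt(lam) * y) / np.sqrt(lam)
    return g / (1.0 + beta * lam**gamma * g**2) * psi_n
```

As before, g/(1 + a g²) ≤ 1/(2√a) with a = βλ^γ, so the regularized multiplier is uniformly bounded in n for every β > 0.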

Convergence Estimate
This section selects the regularization parameter by a priori and a posteriori rules, respectively, and derives sharp convergence estimates for our method.

Convergence Estimate for the Method of Problem (2)

A Priori Convergence Estimate

Theorem 5. Let the exact solution of (2) be given by (6), let the regularized solution u_α^δ be defined by (15), and let the measured data ϕ^δ satisfy (16). We assume that u satisfies the a priori bound (28) and the regularization parameter α is selected as (29); then the following convergence result (30) can be obtained.

Proof of Theorem 5. Using the triangle inequality, we get (31), where u_α is the solution of (15) for the exact data ϕ. For 0 < y ≤ T and n ≥ 1, e^{√(n²+k²) y}/2 ≤ cosh(√(n²+k²) y) ≤ e^{√(n²+k²) y}; from (15), (16) and (26), we bound the first term. On the other hand, by (6), (15), (26) and (28), we bound the second term. Finally, the proof is completed by combining (29) and (31)-(33).

A Posteriori Convergence Estimate
In Theorem 5, the regularization parameter α is selected by (29); this is an a priori selection rule that requires knowing a bound E of the exact solution. In practice, however, such an a priori bound is not easily acquired, so this rule can be unrealistic. Below, we adopt an a posteriori rule to select α; this method does not need a bound of the solution, and the parameter α depends only on the measured data ϕ^δ and the measurement error bound δ. Reference [37] describes this a posteriori rule for selecting the regularization parameter.
We select the regularization parameter α as the solution of equation (34), where the constant τ > 1. We now state and prove two lemmas that are needed to establish the a posteriori convergence results.

Proof of Lemma 2. It can be easily proven by setting
By the intermediate value theorem for continuous functions on a closed interval, equation (34) has a unique solution whenever ‖ϕ^δ‖ > τδ > 0.
Meanwhile, according to the definition in (5) and the a priori condition (28), we obtain, via the conditional stability result in (9), the bound (44). Finally, combining (40) with (44), we obtain the convergence estimate (38).
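The a posteriori equation (34) can be solved numerically by bisection. The sketch below is our reconstruction, not the paper's exact formula: we assume the residual of the regularized solution has the spectral form r_n(α) = αλ_n^γ c_n² / (1 + αλ_n^γ c_n²) · ϕ_n^δ with λ_n = n² + k² and c_n = cosh(√λ_n · y), so the discrepancy increases monotonically from 0 to ‖ϕ^δ‖ and ‖r(α)‖ = τδ has a unique root when ‖ϕ^δ‖ > τδ, matching the uniqueness argument above:

```python
import numpy as np

# Discrepancy of the assumed spectral residual; monotonically increasing in
# alpha, from 0 (alpha -> 0) to ||phi|| (alpha -> infinity).
def discrepancy(alpha, phi_n, k, y, gamma):
    n = np.arange(1, len(phi_n) + 1)
    lam = n**2 + k**2
    c2 = np.cosh(np.sqrt(lam) * y)**2
    w = alpha * lam**gamma * c2
    return np.linalg.norm(w / (1.0 + w) * phi_n)

# Solve ||r(alpha)|| = tau * delta by bracketing and plain bisection.
def choose_alpha(phi_n, k, y, gamma, tau, delta, iters=200):
    lo, hi = 0.0, 1.0
    while discrepancy(hi, phi_n, k, y, gamma) < tau * delta:
        hi *= 2.0                        # enlarge bracket until root enclosed
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if discrepancy(mid, phi_n, k, y, gamma) < tau * delta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

phi = np.zeros(30); phi[0] = 1.0        # hypothetical single-mode data
alpha_star = choose_alpha(phi, k=0.5, y=1.0, gamma=2, tau=1.1, delta=0.01)
```

Monotonicity of the discrepancy is what guarantees both that bisection converges and that the selected α is unique.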

Convergence Estimate for the Method of Problem (3)

A Priori Convergence Estimate
Theorem 7. Let the exact solution of (3) be given by (7), let the regularized solution v_β^δ be defined by (20), and let the noisy data ψ^δ satisfy (21). We suppose that v satisfies the a priori bound (45) on ‖v(T, ·)‖ and β is taken as (46); then, for 0 < y ≤ T, we can establish the error estimate (47), where C₁ is given in Theorem 4.

A Posteriori Convergence Estimate
We find β such that (51) holds, where τ > 1 is a constant.
By the intermediate value theorem for continuous functions on a closed interval, equation (51) has a unique solution whenever ‖ψ^δ‖ > τδ > 0.

Theorem 8.
Let the exact solution of (3) be given by (7), the regularized solution v_β^δ be defined in (20), and the noisy data ψ^δ satisfy (21). We assume v satisfies the a priori bound (45), and the regularization parameter is chosen by the a posteriori rule (51); then we have (54), where C₁ is given in Theorem 4.

Proof of Theorem 8. Notice the splitting (55).
By (49) and Lemma 5, we obtain the bound (56). Next, we estimate the second term of (55). For fixed 0 < y ≤ T, we have (57); using (21), (51) and (57), we can bound this term. Meanwhile, according to the definition in (5) and the a priori bound condition (45), the conditional stability result (11) yields ‖v_β(y, ·) − v(y, ·)‖ ≤ 2^{y/(2T)} ⋯, giving (60). Finally, combining (56) with (60), we derive the inequality in (54).

Remark 1.
In order to derive sharp error estimates for our method, we impose the stronger a priori assumptions (28) and (45), and apply the inequalities in Theorems 3 and 4 to prove Theorems 5-8. It can be verified that there exist functions satisfying these two assumptions. For instance, let us verify the feasibility of condition (28). Take u(y, x) = sin(x) cosh(√(1 + k²) y). For each fixed k and γ ≥ 1, we can always find positive numbers l and µ such that l > k and µ > γ; then condition (28) holds with the positive number E = E(l, µ) = (π/2)(1 + l²)^{2µ} e^{4T√(1+l²)} cosh²(√(1 + l²) T). This shows that assumption (28) is practicable and that u(y, x) = sin(x) cosh(√(1 + k²) y) satisfies (28). In fact, one can verify that the functions u(y, x) = sin(mx) cosh(√(m² + k²) y) (m ≥ 1 a positive integer) all satisfy condition (28). The justification of assumption (45) is similar, so we omit it.

Numerical Experiments
This section verifies the computational performance of the regularized method through several experiments. For simplicity, we only investigate the numerical efficiency of the regularization method for (2); the case of inhomogeneous Neumann data (3) is similar.
For k = 0.5, 1.5, ε = 0.01, γ = 2, the numerical results for the exact solution u(y, x) and the regularized solution u_α^δ(y, x) at y = 0.4, 0.6, 0.8, 1 are shown in Figures 1 and 2, respectively. For k = 0.5, 1.5, γ = 3, the computed errors for various noise levels ε are shown in Tables 1 and 2. For k = 0.5, 1.5 and ε = 0.01, we also compute the corresponding errors for various γ; the results are presented in Tables 3 and 4 (Table 3: k = 0.5, ε = 0.01, relative root mean square errors for various γ at y = 0.6, 1; Table 4: k = 1.5, ε = 0.01, relative root mean square errors for various γ at y = 0.6, 1).

Tables 1 and 2 show that the numerical results improve as ε goes to zero, which verifies the convergence of our method in practice. Tables 3 and 4 show that, for the same ε, the error decreases as γ becomes larger. Hence, to guarantee a good computational result, we should choose the parameter γ as a relatively large positive number; this can also be seen from the expressions of the regularized solutions (15) and (20).
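An experiment of this kind can be sketched as follows. We take the exact solution u(y, x) = sin(x) cosh(√(1 + k²) y) from Remark 1 and assume a spectral-filter form of the regularized solution; the noise model (a uniform perturbation of size ε on each coefficient) and the relative root mean square error are our assumptions, chosen to mirror the quantities reported in Tables 1-4:

```python
import numpy as np

# Sketch of one experiment for problem (2): exact solution from Remark 1,
# an assumed spectral-filter regularized solution, uniform noise of size
# eps on the data coefficients, and the relative root mean square error.
rng = np.random.default_rng(0)
k, gamma, alpha, eps, y = 0.5, 2.0, 1e-4, 0.01, 0.6
N, M = 50, 200                                 # number of modes / grid points
x = np.linspace(0, np.pi, M)

phi_n = np.zeros(N); phi_n[0] = np.sqrt(np.pi / 2)  # phi(x) = sin(x) in the X_n basis
phi_n_noisy = phi_n + eps * rng.uniform(-1, 1, N)   # measured (noisy) data

n = np.arange(1, N + 1)
s = np.sqrt(n**2 + k**2)
filt = np.cosh(s * y) / (1.0 + alpha * (n**2 + k**2)**gamma * np.cosh(s * y)**2)
Xn = np.sqrt(2 / np.pi) * np.sin(np.outer(n, x))    # eigenfunctions X_n(x) on the grid

u_exact = np.sin(x) * np.cosh(np.sqrt(1 + k**2) * y)
u_reg = (filt * phi_n_noisy) @ Xn

rrmse = np.linalg.norm(u_reg - u_exact) / np.linalg.norm(u_exact)
print(f"relative RMSE at y = {y}: {rrmse:.3e}")
```

Rerunning the loop over ε ∈ {0.1, 0.01, 0.001} or over γ reproduces the qualitative trends of Tables 1-4: the error decreases as ε → 0 and as γ grows.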

Conclusions and Discussion
This paper gives conditional stability estimates for (2) and (3) under an a priori bound assumption on the exact solution. We use a generalized Tikhonov regularization method to overcome the ill-posedness of the two problems. Using both a priori and a posteriori rules for choosing the regularization parameter, we derive sharp error estimates for this method. We also verify the feasibility of our method through the corresponding numerical experiments.
We point out that the expression of the solution is written using the method of separation of variables, so this regularization technique can also be used to investigate other similar problems in a cylindrical region. However, we cannot apply this method to problems in more general domains, which is a limitation of this work.