Self-adaptive alternating direction method of multiplier for a fourth order variational inequality

We propose an alternating direction method of multipliers for the approximate solution of the unilateral obstacle problem with the biharmonic operator. We introduce an auxiliary unknown and an augmented Lagrangian functional to deal with the inequality constraint, and we deduce a constrained minimization problem that is equivalent to a saddle-point problem. The alternating direction method of multipliers is then applied to this problem. Using iterative functions, a self-adaptive rule adjusts the penalty parameter automatically. We show the convergence of the method and describe the penalty parameter approximation in detail. Finally, numerical results are given to illustrate the efficiency of the proposed method.


Introduction
The alternating direction method of multipliers (ADMM), also known as the Uzawa block relaxation method, has many applications in structured optimization problems; see, e.g., variational inequalities [1,2] and contact problems [3-6]. At each iteration, only a linear problem needs to be solved, while the auxiliary unknown and the Lagrange multiplier are computed explicitly. The ADMM is globally convergent for any positive penalty parameter. However, the convergence speed of the method is very sensitive to this parameter, and it is difficult to choose the optimal parameter in practice [7].
The paper is organized as follows. In Sect. 2, some basic results are introduced in order to formulate fourth-order variational inequalities, and the ADMM is applied to the problem. In Sect. 3, we propose the SADMM and show the convergence of the method. In Sect. 4, we present the self-adaptive rule in detail, together with numerical results that illustrate the performance of the method. Finally, some conclusions and perspectives are given in Sect. 5.

Fourth order variational inequality and the alternating direction method of multiplier
Let Ω be a bounded polygonal domain in R^2 with boundary Γ = ∂Ω. For given f ∈ L^2(Ω) and ψ ∈ C(Ω̄) ∩ C^2(Ω) with ψ < 0 on ∂Ω, we consider the following fourth-order variational inequality: find u ∈ K such that

a(u, v − u) ≥ (f, v − u)  for all v ∈ K,  (2.1)

where

a(u, v) = ∫_Ω Δu Δv dx  and  K = {v ∈ H^2_0(Ω) : v ≥ ψ a.e. in Ω}.

The bilinear form a(·, ·) is H^2_0-elliptic and Lipschitz continuous, i.e.,

a(v, v) ≥ α‖v‖^2_{H^2_0(Ω)},  |a(v, w)| ≤ β‖v‖_{H^2_0(Ω)} ‖w‖_{H^2_0(Ω)},

for some α > 0 and β > 0 independent of v, w in H^2_0(Ω). The variational inequality (2.1) can be formulated equivalently as the constrained minimization problem

u = arg min_{v ∈ K} J(v),  J(v) = (1/2) a(v, v) − (f, v).  (2.2)

It is well known that the unique solution u ∈ K of (2.1) can be characterized by (2.2) [8,9,16].
To obtain the solution of the minimization problem (2.2) by the ADMM [2,3,7], we introduce the set

K̃ = {q ∈ L^2(Ω) : q ≥ ψ a.e. in Ω}

and its indicator functional

I_K̃(q) = 0 if q ∈ K̃,  I_K̃(q) = +∞ otherwise.

Introducing an auxiliary unknown p ∈ L^2(Ω) with p = u in Ω, problem (2.2) is equivalent to the constrained minimization problem

min { J(v) + I_K̃(q) : v ∈ H^2_0(Ω), q ∈ L^2(Ω), v = q in Ω }.

For this problem, we define an augmented Lagrangian L_ρ by

L_ρ(v, q, μ) = J(v) + I_K̃(q) + (μ, v − q) + (ρ/2) ‖v − q‖^2_{L^2(Ω)},

where {v, q, μ} ∈ H^2_0(Ω) × L^2(Ω) × L^2(Ω). We consider the following saddle-point problem: find {(u, p), λ} such that

L_ρ(u, p, μ) ≤ L_ρ(u, p, λ) ≤ L_ρ(v, q, λ)  for all {v, q, μ} ∈ H^2_0(Ω) × L^2(Ω) × L^2(Ω).  (2.6)

Then we have the following result for problems (2.2) and (2.6) [1,7].
Lemma 1 Let {(u, p), λ} be the solution of the saddle-point problem (2.6). Then u is the solution of (2.2) and p = u.
From Lemma 1, we can apply the ADMM to the saddle-point problem (2.6) and obtain the solution of the minimization problem (2.2) [2]. The method consists of computing p^{n+1} and u^{n+1} by successive minimizations, followed by a multiplier update:

p^{n+1} = arg min_{q ∈ L^2(Ω)} L_ρ(u^n, q, λ^n),  (2.7)

u^{n+1} = arg min_{v ∈ H^2_0(Ω)} L_ρ(v, p^{n+1}, λ^n),  (2.8)

λ^{n+1} = λ^n + ρ(u^{n+1} − p^{n+1}).  (2.9)

In this method, p^{n+1} in minimization (2.7) can be computed explicitly from u^n and λ^n, namely p^{n+1} = max(ψ, u^n + λ^n/ρ). Minimization (2.8) leads to a linear variational problem that has a unique solution for given λ^n, p^{n+1} and ρ > 0 [2,3]. We thus obtain the following ADMM algorithm.
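To make the iterations concrete, the following is a minimal numerical sketch of the ADMM iterations (2.7)-(2.9) on a toy one-dimensional discretization: minimize (1/2)uᵀAu − fᵀu subject to u ≥ ψ. The stiffness matrix (a squared second-difference operator), the grid, the obstacle, and the choice ρ = 0.2 are illustrative assumptions, not the paper's finite element setup; the mass matrix is taken as the identity for simplicity.

```python
import numpy as np

def admm_obstacle(A, f, psi, rho=0.2, max_it=20000, tol=1e-12):
    """ADMM sketch for: min 1/2 u^T A u - f^T u  s.t.  u >= psi."""
    n = len(f)
    u = np.zeros(n)
    lam = np.zeros(n)
    B = np.linalg.inv(A + rho * np.eye(n))   # factor the (2.8) system once
    for _ in range(max_it):
        # (2.7): explicit update = projection of u^n + lam^n/rho onto {q >= psi}
        p = np.maximum(psi, u + lam / rho)
        # (2.8): linear problem (A + rho I) u = f - lam + rho p
        u_new = B @ (f - lam + rho * p)
        # (2.9): multiplier update
        lam = lam + rho * (u_new - p)
        done = np.linalg.norm(u_new - u) <= tol * (1.0 + np.linalg.norm(u_new))
        u = u_new
        if done:
            break
    return u, p, lam

# Toy data: 1-D "biharmonic" stiffness as the square of the second-difference
# matrix, zero load, and a parabolic obstacle that is negative at the ends.
n = 16
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = L @ L
x = np.linspace(0.0, 1.0, n)
f = np.zeros(n)
psi = 0.5 - 4.0 * (x - 0.5) ** 2
u, p, lam = admm_obstacle(A, f, psi)
```

At convergence u and p coincide and the iterate is feasible, illustrating the global convergence of the scheme for any fixed ρ > 0.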

Self-adaptive alternating direction method of multiplier
As is known, Algorithm 1 (ADMM) converges for any fixed parameter ρ > 0. However, the rate of convergence strongly depends on the penalty parameter, and it is difficult to choose the optimal parameter for individual problems. To improve the efficiency of the ADMM, we use iterative functions and propose a self-adaptive rule to adjust the parameter [2,18-22]. In this paper, we suppose that there is a nonnegative sequence {s_n} such that

Σ_{n=0}^{+∞} s_n < +∞.  (3.1)

Then we propose a self-adaptive alternating direction method of multipliers (SADMM) with an automatic penalty parameter selection as follows.
For the mapping v → (v)_+ in L^2(Ω), we state some basic results that will be needed in the following analysis.
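The statements of these basic results were lost in this extraction; two standard properties of the positive-part mapping that such convergence analyses rely on are its monotonicity and nonexpansiveness in L^2 (both are my assumption of what is meant here, though they are classical). A quick numerical check on random vectors, which stand in for L^2 functions:

```python
import numpy as np

def plus(v):
    """Pointwise positive part (v)_+ = max(v, 0)."""
    return np.maximum(v, 0.0)

rng = np.random.default_rng(0)
a = rng.normal(size=1000)
b = rng.normal(size=1000)

# Nonexpansiveness: ||(a)_+ - (b)_+|| <= ||a - b||
nonexpansive = np.linalg.norm(plus(a) - plus(b)) <= np.linalg.norm(a - b)
# Monotonicity: ((a)_+ - (b)_+, a - b) >= 0
monotone = np.dot(plus(a) - plus(b), a - b) >= 0.0
```

Both properties hold pointwise (|max(a,0) − max(b,0)| ≤ |a − b| for scalars), so they pass for any input vectors, not just this sample.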
For the convergence of Algorithm 2, we now give a lemma that will be needed in the following analysis.
Here we use the mean inequality and an elementary limit argument. It follows from Algorithm 2 (SADMM) and (2.3)-(2.4) that we have the following convergence result for the algorithm [2,7].

Theorem 2
The sequence {(u^n, p^n)} generated by Algorithm 2 (SADMM) is such that u^n converges to u in H^2_0(Ω) and p^n converges to u in L^2(Ω), where u is the solution of (2.4)-(2.5).
Proof From s_n ≥ 0, 0 < ρ_{n+1} ≤ (1 + s_n)ρ_n and (3.18), we have

Substituting (3.22) into (3.15), we obtain (3.23), so that

From (3.18) and ρ_n > 0, we have

It follows that

Let ξ_n := 2s_n + s_n^2 and define C_0, C_1 by

Then, from Lemma 4, we have C_0 < +∞ and C_1 < +∞. From (3.24) it then follows that

which implies that there exists a constant C > 0 such that

From (3.23) we then also have

It follows from (3.18) and (3.27) that

where ρ_L = inf{ρ_n}_{n=0}^{+∞} > 0. Then we have

and

Therefore, u^n converges to u in H^2_0(Ω). It follows that

which proves that p^n converges to p in L^2(Ω). □
In order to obtain the convergence of the sequence {λ^n}, we consider the discrete version of Algorithm 2 (SADMM) using a linear finite element triangulation. Let N be the number of nodes of the triangulation. We will use the following notation: (a) A ∈ R^{N×N} is the stiffness matrix defined by the bilinear form a(u, v); (b) M ∈ R^{N×N} is the mass matrix; (c) f ∈ R^N is the vector of discrete external forces; (d) p ∈ R^N is the discrete auxiliary unknown; (e) λ ∈ R^N is the discrete Lagrange multiplier. Both matrices A and M are symmetric positive definite. For a vector x ∈ R^N, ‖x‖ is the norm defined by the inner product (x, x) = xᵀx, and xᵀ denotes the transpose of x. We have the following convergence theorem [19].
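In the matrix notation above, one sweep of the discrete algorithm can be sketched as follows. Minimizing the discrete augmented Lagrangian over v with p and λ fixed gives the linear system (A + ρM)u = f − Mλ + ρMp; the p-update is the componentwise projection onto {p ≥ ψ}, which is exact when the mass matrix is lumped (diagonal) — an assumption made here for simplicity, not stated in the paper.

```python
import numpy as np

def admm_step(A, M, f, psi, u, lam, rho):
    """One discrete ADMM sweep; assumes a lumped mass matrix M so that the
    p-minimization decouples into a componentwise projection."""
    p = np.maximum(psi, u + lam / rho)                        # explicit p-update
    u_new = np.linalg.solve(A + rho * M, f - M @ lam + rho * (M @ p))
    lam_new = lam + rho * (u_new - p)                         # multiplier update
    return u_new, p, lam_new

# Tiny illustrative data: 2x2 SPD "stiffness", identity mass, zero load.
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
M = np.eye(2)
u1, p1, lam1 = admm_step(A, M, np.zeros(2), np.array([0.5, -1.0]),
                         np.zeros(2), np.zeros(2), rho=1.0)
```

Because A and M are symmetric positive definite, A + ρM is nonsingular for every ρ > 0, so each sweep is well defined.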
Proof As in the proof of (3.15), from the discrete version of Algorithm 2 we easily obtain

where δu^n = A^{-1} M δλ^n. Then we can easily prove the convergence of the sequence {u^n, p^n}. It is well known that every bounded sequence in R^N has a convergent subsequence. It follows from (3.30) and the positive definiteness of A and M that the sequence {λ^n} is bounded, so there exist λ̄ and a subsequence {λ^{n_i}} such that λ^{n_i} converges to λ̄. Since the sequence (λ^{n_i}, u^{n_i}) satisfies (3.3)-(3.4), we have

and

Eliminating the auxiliary unknown p^{n_i+1}, we obtain

Taking the limit in the above equation, we arrive at

Moreover, from (3.2), (3.8) and (3.30), we have

It follows from 0 < ρ_L = inf{ρ_n}_{n=0}^{+∞} ≤ ρ_{n_i} and Lemma 3 that

This implies that

Then, from (3.31) and (3.32), we obtain that (λ̄, u) satisfies the discrete formulation of (2.4)-(2.5). Since the solution of the system (2.4)-(2.5) is unique, we have λ̄ = λ. That is, the subsequence (λ^{n_i}, u^{n_i}) converges to (λ, u). Now we prove that the entire sequence {λ^n} converges to λ. Assume that there exists λ̃ ∈ R^N, λ̃ ≠ λ, such that a subsequence (λ^{n_j}, u^{n_j}) converges to (λ̃, u). Since the subsequences (λ^{n_i}, u^{n_i}) and (λ^{n_j}, u^{n_j}) converge to (λ, u) and (λ̃, u), respectively, there exists n_0 ≥ 0 such that for n_i ≥ n_0 and n_j ≥ n_0 we have

and

where

For a fixed n_i ≥ n_0 and all n_j ≥ n_i ≥ n_0, it follows from (3.24) and (3.33) that

so that

For all n_j ≥ n_i, we then obtain from (3.35) that

This contradicts (3.34) and the assumption that the subsequence {λ^{n_j}} converges to λ̃. It follows that the entire sequence {λ^n} has exactly one limit; that is, once the parameter stops changing, the algorithm behaves like Algorithm 1 with a fixed parameter ρ. □

Numerical experiments
For Algorithm 2, we now propose a procedure to compute the penalty parameter automatically using iterative functions. In the proof of Theorem 3, it follows from (3.30) that the sequence {(u^n, p^n), λ^n} generated by Algorithm 2 satisfies the following inequality [2,19]:

To accelerate convergence, we hope that

Using λ^{n+1} and u^{n+1} in place of λ and u, respectively, we have

For a given τ > 0, we obtain a self-adaptive rule, which adjusts the parameter ρ_{n+1} by

where w_n = ‖λ^{n+1} − λ^n‖ / ‖u^{n+1} − u^n‖, and the sequence {s_n} is generated as follows:

Here c_n denotes the number of times {ρ_n} has changed, i.e.,

For a given constant integer c_max > 0, it is easy to see that the sequences {s_n} and {ρ_n} satisfy conditions (3.1) and (3.5), respectively.
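The displayed formulas of rule (4.1)-(4.2) were lost in this extraction, so the following sketch shows only the common pattern such rules follow: compare the observed ratio w_n = ‖λ^{n+1} − λ^n‖/‖u^{n+1} − u^n‖ with τρ_n, cap the number of parameter changes by c_max, and keep the growth factors (1 + s_n) summable so that condition (3.1) holds. The specific thresholds and the form of s_n below are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def update_rho(rho, u_new, u_old, lam_new, lam_old, c, tau=10.0, c_max=200):
    """Return (rho_next, c_next), where c counts how often rho has changed."""
    du = np.linalg.norm(u_new - u_old)
    dlam = np.linalg.norm(lam_new - lam_old)
    if c >= c_max or du == 0.0:
        return rho, c                 # cap reached: no further adaptation
    w = dlam / du
    if w > tau * rho:                 # rho too small: grow by a factor 1 + s_n
        s = 1.0 / (c + 1) ** 2        # summable steps, so sum s_n < +inf
        return (1.0 + s) * rho, c + 1
    if tau * w < rho:                 # rho too large: shrink
        return rho / 2.0, c + 1
    return rho, c                     # w_n and rho_n balanced: keep rho
```

Since ρ changes at most c_max times and each increase uses a summable s, the resulting sequences satisfy ρ_{n+1} ≤ (1 + s_n)ρ_n with Σ s_n < +∞ and inf ρ_n > 0, as required by the convergence analysis.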
In this section, the proposed methods were implemented in FreeFem++ with quadratic finite elements on an HP Z1 G6 desktop computer. In the following tests, we choose τ = 10 in (4.1), c_max = 200 in (4.2), and the stopping criterion ‖u^{n+1} − u^n‖_2 ≤ 10^{-4} ‖u^{n+1}‖_2 for all computations.

Example 1
We first consider a fourth-order variational inequality on the pentagon Ω = {x ∈ (−0.5, 0.5)^2 : x_1 + x_2 < 0.5} with f = 0 and ψ(x) = 1 − 9|x|^2. We choose the step size h = 1/80 and apply the proposed method to this problem with the penalty parameter ρ = 1000. We present the graphs of the numerical solution u_h and the contact zone {u = ψ} in Figs. 1 and 2, respectively. Our numerical results appear to be in very good agreement with the results in [9].
For the sake of comparison between the ADMM and the SADMM, we report the number of iterations and the corresponding CPU time (in seconds) in Tables 1 and 2, respectively, with respect to the step size h and the initial parameter ρ. One can see that the SADMM performs better than the ADMM with a fixed parameter ρ, because it uses the self-adaptive rule to adjust the parameters ρ_n. In addition, the number of iterations for the SADMM is almost the same for different h and ρ.

Example 2
In this example, we consider the obstacle problem on the L-shaped domain Ω = (−0.5, 0.5)^2 \ [0, 0.5]^2 with f = 0 and a given obstacle ψ. As in the previous example, we first consider a mesh of size h = 1/80 with ρ = 1000. Figures 3 and 4 plot the numerical solution u_h and the contact zone {u = ψ}, respectively. We observe that our numerical results agree very well with the corresponding results in [9,11,14]. We also investigate the convergence behavior of the proposed methods for this example. Tables 3 and 4 display the numerical results for the number of iterations and the CPU time. We notice that the SADMM also shows good performance for all initial penalty parameters ρ.

Conclusion
In this paper, we extend the ADMM to the numerical solution of fourth-order variational inequalities. To improve the performance of the method, we propose the SADMM, which uses a self-adaptive rule to adjust the penalty parameter automatically. The advantage of the method is that each iteration only requires solving a linear problem, and the penalty parameter is chosen easily by using iterative functions. The numerical results show that the self-adaptive rule is attractive and necessary for different initial penalty parameters ρ. Our analysis can also be applied to fourth-order variational inequalities with a curvature obstacle, which we will consider in a forthcoming study.

Figure 1 The numerical solution of example 1

Figure 2 The contact zone of example 1

Figure 3 The numerical solution of example 2

Figure 4 The contact zone of example 2

Table 3
Number of iterations for each method

Table 1
Number of iterations for each method

Table 2
CPU time for each method

Table 4
CPU time for each method