MULTIGRID METHODS FOR SOME QUASI-VARIATIONAL INEQUALITIES
LORI BADEA

Abstract. We introduce four variants of a multigrid method for quasi-variational inequalities composed of a term arising from the minimization of a functional and another one given by an operator. The four variants of the method differ from one another by the argument of the operator. The method assumes that the closed convex set is decomposed as a sum of closed convex level subsets. These methods are first introduced as subspace correction algorithms in a general reflexive Banach space. Under an assumption on the level decomposition of the closed convex set of the problem, we prove that the algorithms are globally convergent if a certain convergence condition is satisfied, and we estimate the global convergence rate. These general algorithms become multilevel or multigrid methods if we use finite element spaces associated with the level meshes of the domain and with the domain decompositions on each level. In this case, the methods are multigrid V-cycles, but the results also hold for other iteration types, for instance W-cycle iterations. We prove that the assumption made in the general convergence theory holds for one-obstacle problems, and write the convergence rate as a function of the number of level meshes. The convergence condition in the theorem imposes an upper bound on the number of level meshes we can use in the algorithms.

1. Introduction. The multigrid or multilevel methods for the constrained minimization of functionals were introduced for the first time in Mandel's papers [20], [21] and [11]. Related methods have been introduced by Brandt and Cryer in [9] and by Hackbusch and Mittelmann in [14]. Later, the method was studied by Kornhuber in [16], and a variant of this method using truncated nodal basis functions was introduced by Hoppe and Kornhuber in [15] and analyzed by Kornhuber and Yserentant in [19]. Evidently, the above list of citations is not exhaustive; for further information, see the review article [13] by Gräser and Kornhuber.
The above cited papers refer almost exclusively to complementarity problems and establish asymptotic convergence rates of the methods. A global convergence rate has been estimated by Tai in [22] for a subset decomposition method. For more complicated convex sets, in the case of the constrained minimization of nonquadratic functionals, two-level methods have been introduced by Badea, Tai and Wang in [8] for a multiplicative method, and by Badea in [2] for its additive variant. In these papers, global convergence rates have been established, too. Also, for variational inequalities arising from the minimization of non-quadratic functionals, global convergence rates have been derived by Badea in [4] (see also [5]) for multigrid methods with constraint level decomposition, and in [3] for standard multigrid methods.
Some of these methods have been extended to variational inequalities of the second kind and to quasi-variational inequalities by Kornhuber in [17] and [18], and by Badea and Krause in [6] (see also [7]). In [1], we have introduced one- and two-level methods for quasi-variational inequalities composed of a term arising from the minimization of a functional and another one given by an operator. In the present paper, we extend to this type of quasi-variational inequalities a multigrid method with constraint level decomposition introduced in [4] for variational inequalities. The method, having four variants which differ from one another by the argument of the operator, assumes that the closed convex set is decomposed as a sum of closed convex level subsets. These methods are first introduced as subspace correction algorithms in a general reflexive Banach space. Under an assumption on the level decomposition of the closed convex set of the problem, we prove that the algorithms are globally convergent if a certain convergence condition is satisfied, and we estimate their global convergence rate. These general algorithms become multilevel or multigrid methods if we use finite element spaces associated with the level meshes of the domain and with the domain decompositions on each level. In this case, the methods become multigrid V-cycles, but the results also hold for other iteration types, for instance W-cycle iterations. Moreover, the iterations of these multigrid methods have an optimal computing complexity. We prove that the assumption made in the general theory holds for one-obstacle problems, and write the convergence rate as a function of the number of level meshes. The problems we are dealing with have a unique solution only if the operator in the non-differentiable term of the inequality is a contraction with a small enough constant. A similar, but stronger, condition is required for the convergence of the algorithms. This convergence condition will impose an upper bound on the number of level meshes we can use in the algorithms.
The paper is organized as follows. In Section 2, we introduce the quasi-variational problem and give an existence and uniqueness result for it. Then, we define the multigrid algorithm in a general framework of reflexive Banach spaces, and prove its convergence under some assumptions. In Section 3, we show that this algorithm can be viewed as a multilevel method if we associate finite element spaces with the level meshes and consider decompositions of the domain at each level. We prove that the assumption made in the previous section holds for convex sets of one-obstacle type. If the decompositions of the domain are made using the supports of the nodal basis functions, we obtain, in Section 4, the multigrid methods. This particular choice of the domain decompositions allows us to obtain better estimates for the convergence rate of the method. In this case, we write the convergence condition and the convergence rate of the method as functions of the number of level meshes.
2. Abstract convergence results. We consider a reflexive Banach space V, and V_1, ..., V_J some closed subspaces of V, where V_J = V. Let K ⊂ V be a nonempty closed convex set, and we assume that there exist some convex sets K_j ⊂ V_j, j = 1, ..., J, such that

K = K_1 + · · · + K_J.    (1)

We also assume that, at each level 1 ≤ j ≤ J, we have I_j closed subspaces V_ji, i = 1, ..., I_j, of V_j, and we shall write I = max_{1≤j≤J} I_j. Now, we assume that there exists a constant C_1 such that the stability estimate (2) holds for any w_ji ∈ V_ji, j = J, ..., 1, i = 1, ..., I_j. Evidently, we can take C_1, for instance, as in (3), but sharper estimates can be available in certain cases. Finally, we make the following

Assumption 1. We assume that there exist two positive constants C_2 and C_3 such that any w ∈ K can be written as w = w_1 + ... + w_J, with w_j ∈ K_j, j = 1, ..., J, such that
- the estimate involving C_2 holds for any v ∈ K, and
- for any w_ji ∈ V_ji satisfying w_j + w_{j1} + ... + w_{ji} ∈ K_j, j = 1, ..., J, i = 1, ..., I_j, there exist v_ji ∈ V_ji, j = 1, ..., J, i = 1, ..., I_j, which satisfy the estimate involving C_3.

Now, we consider a Gâteaux differentiable functional F : K → R, and assume that there exist two real numbers p, q > 1 such that, for any real number M > 0, there exist α_M, β_M > 0 for which (4) holds for any u, v ∈ V with ||u||, ||v|| ≤ M. Above, we have denoted by F′ the Gâteaux derivative of F, and we have emphasized that the constants α_M and β_M may depend on M. It is evident that if (4) holds, then (5) holds for any u, v ∈ V with ||u||, ||v|| ≤ M. Following the approach in [12], we can prove that (6) holds for any u, v ∈ V with ||u||, ||v|| ≤ M. We point out that, since F is Gâteaux differentiable and satisfies (4), F is a convex functional (see Proposition 5.5 in [10], page 25).
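For orientation, conditions of the type (4) in the related framework of [8] and [4] read, with the exponents p, q and constants α_M, β_M introduced above (we give this only as a plausible form of the conditions, not necessarily verbatim the present (4)):

```latex
\langle F'(v) - F'(u),\, v - u \rangle \;\ge\; \alpha_M \,\| v - u \|^{p},
\qquad
\langle F'(v) - F'(u),\, w \rangle \;\le\; \beta_M \,\| v - u \|^{q-1}\,\| w \|,
```

for all u, v, w ∈ V with ||u||, ||v|| ≤ M; the first inequality expresses the uniform monotonicity of F′ on bounded sets, the second its Hölder-type continuity.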
In certain cases, the second inequality in (4) can be refined, and we assume that there exist some constants 0 < β_jk ≤ 1, β_jk = β_kj, j, k = J, ..., 1, such that (7) holds for i = 1, ..., I_j and l = 1, ..., I_k. Evidently, in view of (4), the above inequality always holds for β_jk = 1. We consider an operator T : K → V′ with the property that, for any M > 0, there exists c_M > 0 such that (8) holds for any v, u ∈ K, ||v||, ||u|| ≤ M. Also, if K is not bounded, we assume that F + T is coercive on K, in the sense of (9), for sequences of elements of K whose norms tend to infinity. Now, we consider the quasi-variational inequality (10). Since the functional F is convex and differentiable, problem (10) is equivalent to a minimization problem. Using (5), we can find an M > 0 such that the solution u of (10) satisfies ||u|| ≤ M. Concerning the existence and the uniqueness of the solution of problem (10), we have the following result.
Proposition 1. Let V be a reflexive Banach space and K a closed convex nonempty subset of V. We assume that F is Gâteaux differentiable on K and satisfies (4), and that the operator T satisfies (8). If there exists a constant 0 < θ < 1 such that condition (13) holds, then problem (10) has a unique solution.
Proof of Proposition 1. Let us define the mapping S : K → K which associates with a w ∈ K the solution Sw ∈ K of the variational inequality obtained from (10) by freezing the argument of T at w. Evidently, this inequality has a unique solution Sw ∈ K for any given w ∈ K.
We can easily show that the mapping S is a contraction if condition (13) holds.
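The contraction property of S suggests the basic outer iteration u^{k+1} = S(u^k): freeze the argument of T, solve the resulting variational inequality, and repeat until the iterates stabilize. The following sketch illustrates this for a small discrete obstacle problem with F(v) = ½ v^T A v − f^T v and a linear contraction T(u) = c·u; the matrix, the operator T and the projected Gauss–Seidel inner solver are our illustrative choices, not taken from the paper.

```python
import numpy as np

def solve_vi(A, rhs, phi, u0, sweeps=300):
    """Projected Gauss-Seidel for min 1/2 v^T A v - rhs^T v over v >= phi."""
    u = u0.copy()
    for _ in range(sweeps):
        for i in range(len(rhs)):
            # unconstrained coordinate minimizer, then projection on {v_i >= phi_i}
            r = rhs[i] - A[i] @ u + A[i, i] * u[i]
            u[i] = max(r / A[i, i], phi[i])
    return u

def solve_qvi(A, f, phi, T, tol=1e-10, max_outer=100):
    """Fixed-point iteration u^{k+1} = S(u^k): freeze the operator term T at the
    current iterate, solve the resulting variational inequality, repeat.
    This converges when T is a contraction with a small enough constant
    (an illustrative sketch of the mapping S, not the paper's multigrid method)."""
    u = np.maximum(np.zeros_like(f), phi)  # feasible starting guess
    for _ in range(max_outer):
        u_new = solve_vi(A, f - T(u), phi, u)
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u
```

When the contraction constant of T is small relative to the coercivity constant of A (the discrete counterpart of c_M being small compared with α_M), the outer loop converges geometrically.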
To solve problem (10), we propose four algorithms. The first one can be written as follows.

Algorithm 1. We start the algorithm with a u^0 ∈ K and decompose it as in Assumption 1 with w = u^0, u^0 = u^0_1 + ... + u^0_J, u^0_j ∈ K_j, j = 1, ..., J. At iteration n + 1, n ≥ 0, assuming that we have u^n ∈ K, we decompose it as in Assumption 1 with w = u^n, u^n = u^n_1 + ... + u^n_J, u^n_j ∈ K_j, j = 1, ..., J. Then, for j = J, ..., 1,
- we successively calculate the corrections w^{n+1}_j ∈ V_j, u^n_j + w^{n+1}_j ∈ K_j, by the multiplicative algorithm: we first write w^{n+0/I_j}_j = 0 and, for i = 1, ..., I_j, we successively solve the local inequality (15) for any v_ji ∈ V_ji, and write the updated correction w^{n+i/I_j}_j;
- then, we write the new approximation obtained after the corrections on level j.
The other three algorithms are variants of the above algorithm in which we change the argument of T. Like inequality (10), inequality (15) is equivalent to a minimization problem. The global convergence of the above algorithms is proved by the following theorem.

Theorem 2.1. We consider that V is a reflexive Banach space, V_j, j = 1, ..., J, are closed subspaces of V, and V_ji, i = 1, ..., I_j, are some closed subspaces of V_j, j = 1, ..., J. Let K be a nonempty closed convex subset of V, decomposed as in (1), which satisfies Assumption 1. Also, we assume that F is a Gâteaux differentiable functional on K which satisfies (4), that the operator T satisfies (8), and that the coerciveness condition (9) is satisfied if K is not bounded. Also, we assume that the convergence conditions (18) and (19) hold. Under these conditions, if u is the solution of problem (10), then there exists a constant M > 0 such that ||u||, ||u^0|| and the norms of the approximations u^n + Σ_{k=j+1}^J w^{n+1}_k + w^{n+i/I_j}_j, n ≥ 0, j = 1, ..., J, i = 1, ..., I_j, of u obtained from Algorithm 1 or its variants are bounded by M, and we have the error estimates (20).

Remark 1.
Written explicitly, condition (18) requires the positivity of an expression involving α_M and c_M, and condition (19) a similar requirement. These two conditions imply that there exists a 0 < θ < 1 such that (13) in Proposition 1 holds. However, the two convergence conditions in the statement of Theorem 2.1 are stronger than the existence and uniqueness condition in that proposition, since they require c_M to be small enough in comparison with α_M.
Proof of Theorem 2.1.
Step 1. We prove the boundedness of the approximations of u, for n ≥ 0, j = 1, ..., J, i = 1, ..., I_j. To this end, we introduce a constant M_0 defined in terms of the solution u of problem (10) and the initial guess u^0 in the algorithms.
From the coerciveness condition (9), we get that such an M_0 exists and, evidently, ||u||, ||u^0|| ≤ M_0. We first prove by mathematical induction that ||u^n|| ≤ M_0 for n ≥ 0. To this end, for a given n ≥ 1, let us write M_n for the maximum of the norms of the iterates with k = 0, ..., n, j = 1, ..., J and i = 1, ..., I_j. The variables u and v in equations (4) and (8) will be replaced in this proof only by the solution u of problem (10) or by its approximations u^n + Σ_{k=j+1}^J w^{n+1}_k + w^{n+i/I_j}_j, j = 1, ..., J, i = 1, ..., I_j, obtained from Algorithm 1 or its variants. Using this M_n in (4) and (8), we get that error estimate (20) holds for the iterations k ≤ n + 1. Evidently, the constant C1 will depend on M_n, but we get from this error estimate that the errors do not increase. Therefore, in view of (24), we get that ||u^n|| ≤ M_0 for any n ≥ 0.
We use mathematical induction again to prove that the sequences (u^n + Σ_{k=j+1}^J w^{n+1}_k + w^{n+i/I_j}_j)_n, j = J, ..., 1, i = 1, ..., I_j, are bounded. To this end, we use the boundedness of the sequence (u^n)_n, together with (5), (15) and (8), for each variant of Algorithm 1. We give here the proof for the most involved variant, the proof for the other three variants being similar and simpler. In this case, for M = M_n, from (5) and (15) with v_ji = 0, and then using (8), we obtain a bound on the level corrections. In view of (18), starting from the boundedness of (u^n)_n, and using the above inequality and the coerciveness condition (9), we get by induction, for j = J, ..., 1 and i = 1, ..., I_j, that the sequences (u^n + Σ_{k=j+1}^J w^{n+1}_k + w^{n+i/I_j}_j)_n are bounded. Therefore, we can conclude that there exists an M > 0 as in the statement of the theorem.
3. Multilevel Schwarz methods. We consider a family of regular meshes T_{h_j} of mesh sizes h_j, j = 1, ..., J, over the domain Ω ⊂ R^d. We write Ω_j = ∪_{τ ∈ T_{h_j}} τ and assume that T_{h_{j+1}} is a refinement of T_{h_j} on Ω_j, j = 1, ..., J−1, and that Ω_1 ⊂ Ω_2 ⊂ ... ⊂ Ω_J = Ω. Also, we assume that, if a node of T_{h_j} lies on ∂Ω_j, then it lies on ∂Ω_{j+1}, too, that is, it lies on ∂Ω. Besides, we suppose that dist(x_{j+1}, Ω_j) ≤ C h_j for any node x_{j+1} of T_{h_{j+1}}, j = 1, ..., J−1. In this section, C denotes a generic positive constant independent of the mesh sizes and the number of meshes, as well as of the overlapping parameters and the number of subdomains in the domain decompositions which will be considered later. Since the mesh T_{h_{j+1}} is a refinement of T_{h_j}, we have h_{j+1} ≤ h_j, and we assume that there exists a constant γ, independent of the number of meshes and of their sizes, such that 1 < γ ≤ h_j/h_{j+1} ≤ Cγ, j = 1, ..., J−1.
At each level j = 1, ..., J, we introduce the linear finite element spaces V_{h_j} and, for i = 1, ..., I_j, we write the subspaces V^i_{h_j} associated with the domain decomposition. The functions in V_{h_j}, j = 1, ..., J−1, will be extended by zero outside Ω_j, and the spaces will be considered as subspaces of H^1_0(Ω). We consider the one-obstacle problem whose convex set is K = {v ∈ V_{h_J} : ϕ ≤ v}, with ϕ ∈ V_{h_J}. It has been proved in [4] (see also [5]) that Assumption 1 holds, even in more general settings, for the finite element spaces V_j = V_{h_j} and V_ji = V^i_{h_j}, j = 1, ..., J, i = 1, ..., I_j, and the above convex set K. The level convex sets are taken of the form

K_j = {v ∈ V_{h_j} : ϕ_j ≤ v}, j = J, ..., 1,    (36)

with ϕ_j ∈ V_{h_j}, j = J, ..., 1, and ϕ = ϕ_J + ... + ϕ_1. Also, the constants C_2 and C_3 can be written in terms of J and d, where d is the Euclidean dimension of the space where the domain Ω lies. The constants C_1 and β_jk, j, k = J, ..., 1, can be taken as in (3) and (7), respectively. The level decomposition w = w_1 + ... + w_J, w ∈ K, w_j ∈ K_j, j = 1, ..., J, required by Assumption 1, and applied with w = u^n at the beginning of each iteration of Algorithm 1 and its variants, is made by using the nonlinear operators I_{h_j} : V_{h_{j+1}} → V_{h_j}, j = 1, ..., J−1, which are defined as follows. Let us denote by x_ji a node of T_{h_j}, by φ_ji the linear nodal basis function associated with x_ji and T_{h_j}, and by ω_ji the support of φ_ji. Given a v ∈ V_{h_{j+1}}, we write I_ji v = min_{x ∈ ω_ji} v(x). Finally, we define I_{h_j} v := Σ_{x_ji node of T_{h_j}} (I_ji v) φ_ji. The level decomposition of a w ∈ K is then defined recursively, starting from a v ∈ V_{h_J}, by means of these operators. As we see from the above estimates of the constants C_1–C_3, the convergence rates and the convergence conditions given in Theorem 2.1 depend on the functional F, the operator T, the maximum number I of subdomains on each level, and the number J of levels. The number of subdomains on a level can be associated with the number of colors needed to mark the subdomains such that subdomains with the same color do not intersect each other. Since this number of colors depends in general only on the dimension of the Euclidean space where the domain lies, we can conclude that our convergence rate essentially depends on the number J of levels.
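In one dimension, with the coarse nodes sitting at the even-indexed nodes of the fine mesh (so h_j = 2h_{j+1}), the operator I_{h_j} can be sketched as follows; the uniform-mesh indexing and the boundary handling are illustrative assumptions of ours, not taken from the paper.

```python
import numpy as np

def monotone_restrict(v_fine):
    """Monotone restriction I_{h_j} : V_{h_{j+1}} -> V_{h_j} on a uniform 1-D mesh.

    The coarse nodes are the even-indexed fine nodes.  The value at a coarse
    node x_ji is the minimum of v over the support omega_ji of the coarse
    nodal basis function phi_ji, i.e. over the fine nodes lying in
    [x_ji - h_j, x_ji + h_j]."""
    n_fine = len(v_fine)
    assert n_fine % 2 == 1, "expect 2m+1 fine nodes"
    n_coarse = (n_fine - 1) // 2 + 1
    v_coarse = np.empty(n_coarse)
    for i in range(n_coarse):
        lo = max(0, 2 * i - 2)          # left end of omega_ji on the fine grid
        hi = min(n_fine, 2 * i + 3)     # right end (exclusive), clipped at boundary
        v_coarse[i] = v_fine[lo:hi].min()
    return v_coarse
```

Because the coarse value is a minimum over the support of φ_ji, the coarse function never exceeds the fine one at the coarse nodes; it is this monotonicity that keeps the pieces of the level decomposition inside the level convex sets K_j.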
As functions of J alone, the constants of the previous section can be written as in (40)–(42), where C is a generic constant which does not depend on J. We can then conclude from Theorem 2.1 and Remark 1 that, if the functional F and the operator T have the required properties, then Algorithm 1 and its variants are globally convergent, provided that c_M is small enough in comparison with α_M.
Remark 2. 1) The results of this section have referred to problems in H^1 with Dirichlet boundary conditions, and the functions corresponding to the coarse levels have been extended by zero outside the domains Ω_j, j = J−1, ..., 1. Let us now assume that the problem has mixed boundary conditions: ∂Ω_J = Γ_d ∪ Γ_n, with Dirichlet conditions on Γ_d and Neumann conditions on Γ_n. In this case, if a node of T_{h_j}, j = J−1, ..., 1, lies in Int(Γ_n), we have to assume that all the sides of the elements τ ∈ T_{h_j} having that node are included in Γ_n.
2) Convergence results similar to those presented in this section can be obtained for problems in (H^1)^d.

4. Multigrid methods. In the above multilevel methods, each mesh is a refinement of the one on the previous level, but the domain decompositions are almost independent from one level to another. We obtain similar multigrid methods by decomposing the level domains by means of the supports of the nodal basis functions. Consequently, the subspaces V^i_{h_j}, i = 1, ..., I_j, are one-dimensional spaces generated by the nodal basis functions associated with the nodes of T_{h_j}, j = J, ..., 1. In this case, Algorithm 1 and its variants are V-cycle multigrid iterations. Evidently, similar results can be given for W-cycle multigrid iterations. In [3], it has been proved that, in the case of the multigrid methods, we can take the constants as in (42), where C ≥ 1 is a generic constant independent of the number of meshes. Now, we shall write the convergence rate of the multigrid Algorithm 1 and its variants as a function of the number J of levels and of the constants α_M, β_M and c_M. To this end, we shall use the constants C_1 and max_{k=1,...,J} Σ_{j=1}^J β_{kj} given in (42), and C_2 and C_3 given in (40).
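With the one-dimensional subspaces V^i_{h_j} spanned by the nodal basis functions, the inner correction sweep on a level reduces, for a quadratic F and a frozen argument of T, to a projected Gauss–Seidel iteration. A minimal sketch for the discrete one-obstacle problem min ½ v^T A v − f^T v over v ≥ φ follows; the matrix and the data are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def projected_gauss_seidel(A, f, phi, u, sweeps=1):
    """One-dimensional subspace corrections for min 1/2 v^T A v - f^T v over v >= phi.

    Each step minimizes the quadratic along a single nodal basis direction and
    projects the result back onto the obstacle constraint -- the smoother the
    level iterations reduce to when the V^i_{h_j} are one-dimensional
    (an illustrative sketch, not the paper's code)."""
    u = u.copy()
    n = len(f)
    for _ in range(sweeps):
        for i in range(n):
            # residual with the i-th unknown removed, then 1-D minimization + projection
            r = f[i] - A[i] @ u + A[i, i] * u[i]
            u[i] = max(r / A[i, i], phi[i])
    return u
```

In the multigrid V-cycle, one or a few such sweeps are performed on each level, the corrections being restricted and prolonged between the level spaces V_{h_j}.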
From condition (19), it follows that C_3/C_2 ≤ 1 and, therefore, the constant C1 in (20) can be bounded. Consequently, in view of (40), the constant C1 can be taken of a form depending only on J. Conditions (18) and (19) impose an upper bound on the number of levels J which we can consider in the algorithms, and, in the following, we shall derive such a condition on J. Condition (19) holds if