Weak Nonlinear Bilevel Problems: Existence of Solutions via Reverse Convex and Convex Maximization Problems

Abstract. In this paper, for a class of weak bilevel programming problems, we provide sufficient conditions guaranteeing the existence of global solutions. These conditions are based on the use of reverse convex and convex maximization problems.

1. Introduction. We consider the following weak nonlinear bilevel optimization problem (weak in the sense of [8])

(S)    min_{x∈X} sup_{y∈M(x)} F(x, y),

where, for x ∈ X, M(x) denotes the solution set of the lower level problem

P(x)    min_{y∈Y} f(x, y),

with F, f : R^p × R^q → R, and X and Y two subsets of R^p and R^q, respectively. The formulation of the problem that we consider, called the pessimistic formulation, corresponds to a static noncooperative two-player game in which one of the players has the leadership and full information about the second player. Player 1 (the leader), with objective function F, first announces a strategy x in X; then Player 2 (the follower), with objective function f, reacts optimally by selecting a strategy y(x) ∈ Y. Assume that the solution set M(x) is not always a singleton and that the leader cannot influence the choice of the follower. Then, the leader guards against the worst possible choice of the follower by minimizing the marginal function sup_{y∈M(x)} F(x, y). The presence in the first level of the implicit constraint set M(x), which is an output of the problem P(x), makes the problem (S) difficult to solve: the marginal function is in general neither differentiable nor convex, and hence (S) belongs to the class of nondifferentiable global optimization problems. The difficulty encountered in the investigation of weak nonlinear bilevel problems lies in finding suitable conditions which are not too strong and depend only on the problem's data. In contrast, the strong Stackelberg problem

min_{x∈X} inf_{y∈M(x)} F(x, y),

which corresponds to the optimistic formulation, presents fewer difficulties and hence has received more attention in the literature than the weak Stackelberg problem. It corresponds, for example, to the case where the leader can influence the follower in his choice of the strategies in M(x). Another interesting formulation of the leader's problem corresponds to the case where the leader evaluates the performance of the follower by his optimal value (see for example [6], [21] and [26]). For different applications of bilevel optimization problems, we refer to [9] and [20].
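To make the contrast between the two formulations concrete, here is a small numerical illustration of our own (a hypothetical finite instance, not taken from the paper's development), in which the follower is indifferent, so that M(x) = Y, and the pessimistic and optimistic values differ.

```python
# Hypothetical finite instance (illustration only): the follower's objective
# is constant, so the reaction set M(x) equals Y for every x, and the
# pessimistic (weak) and optimistic (strong) values differ.
X, Y = [0, 1], [0, 1]

def f(x, y):
    """Lower-level objective: the follower is indifferent."""
    return 0.0

F = {(0, 0): 0, (0, 1): 3, (1, 0): 1, (1, 1): 1}  # leader's objective

def M(x):
    """Solution set of P(x): argmin of f(x, .) over Y."""
    best = min(f(x, y) for y in Y)
    return [y for y in Y if f(x, y) == best]

# Weak (pessimistic) value: min over x of sup over M(x) of F(x, y).
weak = min(max(F[x, y] for y in M(x)) for x in X)
# Strong (optimistic) value: min over x of inf over M(x) of F(x, y).
strong = min(min(F[x, y] for y in M(x)) for x in X)
print(weak, strong)  # prints: 1 0
```

Here the leader's guaranteed (pessimistic) value 1 is strictly worse than the optimistic value 0, which is exactly the gap between the two formulations discussed above.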
It is well known that establishing the existence of solutions for weak bilevel programming problems is a difficult task. So, our aim in this paper, which is a continuation of previous works [1]-[4] dealing with the same subject, is to give sufficient conditions ensuring the existence of solutions to weak nonlinear bilevel optimization problems. For this purpose, we establish some relationships between problem (S) and some other well-known global optimization problems. More precisely, under certain assumptions, we show that the existence of solutions to appropriate parameterized reverse convex and convex maximization problems implies the existence of solutions to (S). Similar results using d.c. problems are given in [4]. We note that the relationships between weak bilevel programming problems and such well-known global optimization problems that we provide in this paper are new in the literature. On the same subject, we note that an interesting class of weak nonlinear bilevel optimization problems which admit solutions is given in [17]. On the other hand, reverse convex and convex maximization problems have received great interest from several authors, and nowadays there exists a large number of interesting theoretical and numerical results for such problems. For papers dealing with reverse convex and convex maximization problems, we refer, for example, respectively to [5], [16], [23]-[25], [27]-[29] and [10]-[14], [22]. We also refer to the interesting books on global optimization [15] and [30] and the references therein.
The paper is organized as follows. In Section 2, we recall some results related to convex analysis and reverse convex problems. In Section 3, under appropriate assumptions, we show the existence of solutions for problem (S) using reverse convex and convex maximization problems.
Throughout the paper, we assume that X is a compact subset of R^p and Y is a compact convex subset of R^q. For a given optimization problem, we will use the term solution to mean global solution.
2. Background of convex analysis and reverse convex problems. In this section, we recall some definitions and fundamental results related to convex analysis and reverse convex problems that we will use in the sequel.
Let g : R^n → R ∪ {+∞} be a convex function. The set dom(g) defined by

dom(g) = { y ∈ R^n / g(y) < +∞ }

is called the effective domain of g. The function g is said to be proper if dom(g) ≠ ∅. A vector y* ∈ R^n is said to be a subgradient of g at ȳ ∈ dom(g) if

g(y) ≥ g(ȳ) + ⟨y*, y − ȳ⟩, ∀ y ∈ R^n.

The set of all subgradients of g at ȳ is called the subdifferential of g at ȳ, and is denoted by ∂g(ȳ). Let A be a nonempty closed and convex subset of R^n.
1. The indicator function of A, denoted by ψ_A, is the function defined on R^n by ψ_A(y) = 0 if y ∈ A, and ψ_A(y) = +∞ otherwise.
2. Let a ∈ A. The normal cone to A at a is the set denoted by N_A(a) and defined by

N_A(a) = { y* ∈ R^n / ⟨y*, y − a⟩ ≤ 0, ∀ y ∈ A }.

3. Let a ∈ R^n, and set (see [27])

N*_A(a) = { y* ∈ R^n / ⟨y*, y − a⟩ ≤ 0, ∀ y ∈ A }.

If a ∈ A, then N*_A(a) = N_A(a). For illustration, consider a point a ∉ A, and let p denote the projection of a onto A. Then the vector a − p belongs to N*_A(a), which is therefore nonempty, contrarily to N_A(a).
We recall the following results on subdifferential calculus and optimality conditions (see for example [7] and [19]).

Theorem 2.1. Let h_1, h_2 : R^n → R ∪ {+∞} be two proper convex functions. Assume that there exists a point in dom(h_1) at which h_2 is continuous. Then, for any y ∈ R^n, we have

∂(h_1 + h_2)(y) = ∂h_1(y) + ∂h_2(y).

Theorem 2.2. Let g : R^n → R be a convex function and Z be a nonempty closed convex subset of R^n. Then, we have
• ȳ minimizes g over R^n if and only if 0 ∈ ∂g(ȳ);
• z̄ minimizes g over Z if and only if 0 ∈ ∂g(z̄) + N_Z(z̄).
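As a quick one-dimensional sanity check of the second item of Theorem 2.2, consider the following sketch (hypothetical smooth data of ours, where the subdifferential reduces to the gradient):

```python
# Minimize the smooth convex g(y) = (y - 2)**2 over Z = [0, 1] (hypothetical
# data). The unconstrained minimizer y = 2 lies outside Z, so the constrained
# minimizer sits on the boundary and satisfies 0 in grad g(z) + N_Z(z).
def g(y):
    return (y - 2.0) ** 2

def dg(y):
    return 2.0 * (y - 2.0)  # gradient = unique subgradient of g

Z = [i / 1000.0 for i in range(0, 1001)]  # grid over Z = [0, 1]
z = min(Z, key=g)                         # grid minimizer of g over Z
print(z, dg(z))                           # prints: 1.0 -2.0
# At z = 1 (right endpoint), N_Z(1) = [0, +inf) and -dg(1) = 2 >= 0,
# so 0 belongs to dg(z) + N_Z(z), as Theorem 2.2 requires.
```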
Now, let us recall some properties and results concerning reverse convex problems.
Let f̂, ĝ : R^n → R be two convex functions and D̂ be a nonempty convex subset of R^n. Let us consider the following optimization problem

(P)    min f̂(x)  subject to  x ∈ D̂, ĝ(x) ≥ 0.

We will make the following assumption:

(2.1)    min { f̂(x) / x ∈ D̂ } < min { f̂(x) / x ∈ D̂, ĝ(x) ≥ 0 }.

When this property is satisfied, we say that the reverse convex constraint ĝ(x) ≥ 0 is essential, and (P) is called a reverse convex problem. Otherwise, the problem (P) is equivalent to an ordinary convex programming problem.

Remark 1.
[28] Assumption (2.1) implies that if x̄ is a solution of (P), then ĝ(x̄) = 0. Indeed, let x̄ be a solution of (P), and suppose that ĝ(x̄) > 0. By (2.1), there exists w ∈ D̂ such that f̂(w) < f̂(x̄). It follows from the continuity of ĝ that there exists t ∈ ]0, 1[ such that the point x* = t w + (1 − t) x̄ satisfies ĝ(x*) ≥ 0, with x* ∈ D̂ by convexity. Since f̂ is convex, f̂(x*) ≤ t f̂(w) + (1 − t) f̂(x̄) < f̂(x̄), which contradicts the optimality of x̄. Hence the problem (P) is equivalent to the following

min f̂(x)  subject to  x ∈ D̂, ĝ(x) = 0.

Then, any candidate x ∈ D̂ for optimality in problem (P) must satisfy the condition ĝ(x) = 0. For x̄ ∈ R^n, consider the level set of f̂ relative to D̂, and passing by x̄,

S(f̂, x̄) = { x ∈ D̂ / f̂(x) ≤ f̂(x̄) }.
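The boundary behaviour described in Remark 1 is easy to observe numerically. In the following sketch (hypothetical one-dimensional data of ours, not from the paper), the reverse convex constraint is essential and the computed solution lies exactly on the boundary where ĝ vanishes:

```python
# Hypothetical reverse convex instance: minimize fhat(x) = x**2 over
# D = [-2, 2] subject to the reverse convex constraint ghat(x) = x - 1 >= 0.
def fhat(x):
    return x * x

def ghat(x):
    return x - 1.0

grid = [i / 1000.0 for i in range(-2000, 2001)]    # grid over D = [-2, 2]
feasible = [x for x in grid if ghat(x) >= 0.0]

x_unc = min(grid, key=fhat)      # minimizer of fhat over D alone
x_rc = min(feasible, key=fhat)   # solution of the reverse convex problem
print(x_unc, x_rc, ghat(x_rc))   # prints: 0.0 1.0 0.0
# The constraint is essential (fhat(x_unc) < fhat(x_rc)), and the
# solution satisfies ghat(x_rc) = 0, as Remark 1 predicts.
```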

Theorem 2.3. [27]
Let z be a feasible point of (P) verifying ĝ(z) = 0 and 0 ∉ ∂ĝ(z). Assume that assumption (2.1) and condition (2.2) are satisfied. Then, z is a solution of (P) if and only if

∂ĝ(x) ⊂ N*_{S(f̂, z)}(x), for every x ∈ R^n such that ĝ(x) = 0.

Remark 2. We note that in [27], the compactness of D̂ is mentioned among the hypotheses, but not used in the corresponding proof. So, we have not included this property in Theorem 2.3.

3. Existence of solutions to problem (S). In this section, we give some sufficient conditions ensuring the existence of solutions to problem (S) via other well-known global optimization problems from the literature. More precisely, under certain assumptions, we show that the existence of solutions to appropriate parameterized reverse convex and convex maximization problems implies the existence of solutions to problem (S). Sufficient conditions for (S) using d.c. problems are studied in [4].
3.1. Preliminary results. First, let us introduce the following notations. For x ∈ X, y ∈ R^q and t ∈ R, set

F_x(y) = F(x, y),  f_x(y) = f(x, y),  F̂_x(y) = −F_x(y),  h_x(y, t) = f_x(y) − t.

We will use the following assumptions.
Recall that X is a compact subset of R^p and Y is a compact convex subset of R^q. Let assumptions (3.1)-(3.3) hold. For x ∈ X, consider respectively the following parameterized d.c. and reverse convex problems

DCP(x)    min { F̂_x(y) − f_x(y) / y ∈ Y },

RCP(x)    min { F̂_x(y) − t / (y, t) ∈ Y × R, h_x(y, t) ≥ 0 }.

Remark 3. The following remarks are obvious.
2) Let assumptions (3.1)-(3.3) hold. Then, i) from assumption (3.1), the reverse convex constraint h_x(y, t) ≥ 0 is essential; ii) the problem RCP(x) is equivalent to the following (see Remark 1)

min { F̂_x(y) − t / (y, t) ∈ Y × R, h_x(y, t) = 0 },

that is, the search for solutions can be restricted to the constraint set { (y, t) ∈ Y × R / h_x(y, t) = 0 }. Note that the use of assumption (3.1) in the rest of the paper will be implicit. Otherwise, the problem RCP(x) would be equivalent to a convex programming problem.
Let x ∈ X. Then, we have the following relationship between the problems RCP(x) and DCP(x).

Proposition 1. Let x ∈ X. i) If ȳ_x ∈ Y is a solution of DCP(x), then (ȳ_x, f_x(ȳ_x)) is a solution of RCP(x). ii) Conversely, if (ȳ_x, t̄_x) ∈ Y × R is a solution of RCP(x), then ȳ_x is a solution of DCP(x).
Proof. Let x ∈ X. i) Assume that ȳ_x ∈ Y solves DCP(x). Then,

F̂_x(ȳ_x) − f_x(ȳ_x) ≤ F̂_x(y) − f_x(y), ∀ y ∈ Y.

Let (y, t) ∈ Y × R such that h_x(y, t) ≥ 0, i.e., f_x(y) ≥ t. Then,

F̂_x(y) − t ≥ F̂_x(y) − f_x(y) ≥ F̂_x(ȳ_x) − f_x(ȳ_x).

Since (ȳ_x, f_x(ȳ_x)) is feasible for RCP(x), it follows that (ȳ_x, f_x(ȳ_x)) solves RCP(x). ii) Conversely, assume that (ȳ_x, t̄_x) ∈ Y × R solves RCP(x), so that in particular f_x(ȳ_x) ≥ t̄_x. Let y ∈ Y. Since f_x(y) is a finite real number, let t ∈ R such that f_x(y) ≥ t. Then, (y, t) is feasible for RCP(x), and hence

F̂_x(ȳ_x) − t̄_x ≤ F̂_x(y) − t.

Taking t = f_x(y) and using f_x(ȳ_x) ≥ t̄_x, we get

F̂_x(ȳ_x) − f_x(ȳ_x) ≤ F̂_x(ȳ_x) − t̄_x ≤ F̂_x(y) − f_x(y), ∀ y ∈ Y.

That is, ȳ_x solves DCP(x).
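The correspondence between the d.c. problem and its reverse convex reformulation can be checked by brute force for one fixed x. In the following sketch (hypothetical convex data of ours standing in for F̂_x and f_x, not the paper's), the reverse convex problem is solved by scanning a grid over Y × R:

```python
# Hypothetical data for a fixed x: Fhat (convex) plays the role of the
# negated leader objective and fl (convex) the follower objective on
# Y = [-1, 1]. We compare
#   DCP: min over Y of Fhat(y) - fl(y)
#   RCP: min of Fhat(y) - t over y in Y, t in R, with fl(y) - t >= 0.
def Fhat(y):
    return y * y

def fl(y):
    return abs(y - 0.5)

Ygrid = [i / 200.0 for i in range(-200, 201)]   # grid over Y = [-1, 1]
Tgrid = [j / 200.0 for j in range(-300, 401)]   # grid over t in [-1.5, 2]

dcp_val = min(Fhat(y) - fl(y) for y in Ygrid)
rcp_val = min(Fhat(y) - t for y in Ygrid for t in Tgrid if fl(y) - t >= 0.0)
print(dcp_val, rcp_val)
# The two values agree: at an RCP optimum the reverse constraint is active,
# t = fl(y), which is exactly the reduction used in the proof above.
```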
For x ∈ X, consider the following parameterized convex maximization problem

CMP(x)    max { f_x(y) − t / (y, t) ∈ Y × R, t ≥ F̂_x(y) }.

Then, as above, we have the following relationship between the problems CMP(x) and DCP(x).

Proposition 2. Let x ∈ X. i) If ȳ_x ∈ Y is a solution of DCP(x), then (ȳ_x, F̂_x(ȳ_x)) is a solution of CMP(x). ii) Conversely, if (ȳ_x, t̄_x) ∈ Y × R is a solution of CMP(x), then ȳ_x is a solution of DCP(x).
Proof. i) Let ȳ_x be a solution of DCP(x). Then, ȳ_x is a solution of the problem

max { f_x(y) − F̂_x(y) / y ∈ Y },

and hence,

f_x(y) − F̂_x(y) ≤ f_x(ȳ_x) − F̂_x(ȳ_x), ∀ y ∈ Y.

Let (y, t) ∈ Y × R such that t ≥ F̂_x(y). Then,

f_x(y) − t ≤ f_x(y) − F̂_x(y) ≤ f_x(ȳ_x) − F̂_x(ȳ_x).

Since (ȳ_x, F̂_x(ȳ_x)) is feasible for CMP(x), it follows that (ȳ_x, F̂_x(ȳ_x)) solves CMP(x). ii) Conversely, let (ȳ_x, t̄_x) be a solution of CMP(x). It follows from the feasibility of (ȳ_x, t̄_x) that F̂_x(ȳ_x) ≤ t̄_x, and hence

f_x(ȳ_x) − t̄_x ≤ f_x(ȳ_x) − F̂_x(ȳ_x).

On the other hand, since (ȳ_x, t̄_x) solves CMP(x), for any y ∈ Y the feasible point (y, F̂_x(y)) gives

f_x(y) − F̂_x(y) ≤ f_x(ȳ_x) − t̄_x ≤ f_x(ȳ_x) − F̂_x(ȳ_x),

which means that ȳ_x solves the problem DCP(x).
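A companion brute-force check for the convex maximization reformulation, with the same hypothetical data as before: the convex function f_x(y) − t is maximized over the convex set { (y, t) ∈ Y × R / t ≥ F̂_x(y) }.

```python
# Same hypothetical data for a fixed x as in the previous sketch. We compare
#   CMP: max of fl(y) - t over y in Y, t in R, with t >= Fhat(y)
#   DCP: min over Y of Fhat(y) - fl(y)
# and observe that the optimal values are opposite to each other.
def Fhat(y):
    return y * y

def fl(y):
    return abs(y - 0.5)

Ygrid = [i / 200.0 for i in range(-200, 201)]   # grid over Y = [-1, 1]
Tgrid = [j / 200.0 for j in range(-300, 301)]   # grid over t in [-1.5, 1.5]

cmp_val = max(fl(y) - t for y in Ygrid for t in Tgrid if t >= Fhat(y))
dcp_val = min(Fhat(y) - fl(y) for y in Ygrid)
print(cmp_val, dcp_val)
# At a CMP optimum t = Fhat(y), so cmp_val = -dcp_val, matching the
# reduction used in the proof above.
```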

3.2. Existence of solutions to (S) via reverse convex and convex maximization problems. In this subsection, we give sufficient conditions ensuring the existence of solutions to (S) via reverse convex and convex maximization problems. First, we recall the following result from [2] and [4].

Theorem 3.1. [2], [4] Let assumptions (3.2)-(3.6) hold. If moreover, the following condition is satisfied:

(C2) for any x ∈ X, there exists a common solution to the problems

min { f_x(y) / y ∈ Y }  and  max { F_x(y) / y ∈ Y },

then, the problem (S) has at least one solution.
Remark 4. Since the function F_x(·) is concave, the problem max_{y∈Y} F_x(y) is equivalent to a convex programming problem.
Definition 3.2. A multifunction N : X → 2^Y is said to be lower semicontinuous on X if, for any x ∈ X and any sequence (x_n) converging to x in X, we have

N(x) ⊂ lim inf_{n→+∞} N(x_n),

where

lim inf_{n→+∞} N(x_n) = { y ∈ Y / ∃ y_n ∈ Y, lim_{n→+∞} y_n = y, and y_n ∈ N(x_n), ∀ n ∈ N }.
Remark 5. As is well known, under some mild assumptions, if moreover the multifunction M(·) : X → 2^Y associated with the solution set of the lower level problem is lower semicontinuous on X, then the solution set of (S) is nonempty. However, such a property is strong and rarely satisfied. It is shown in [4] that property (C2) and the lower semicontinuity of M(·) are independent of each other. Therefore, property (C2) can be considered as an alternative condition ensuring the existence of solutions to problem (S). For y ∈ R^q, we denote by ∂f_x(y) the subdifferential of the function f_x(·) at y. Then, we have the following result, which gives sufficient conditions for the existence of solutions to problem (S) via d.c. problems.

Theorem 3.3. [4]
Assume that assumptions (3.2)-(3.6) and the following condition are satisfied:

(3.7) For any x ∈ X, there exists ȳ_x ∈ Y such that i) ȳ_x is a local solution of the problem DCP(x); ii) 0 ∈ ∂f_x(ȳ_x).

Then, the problem (S) has at least one solution.
For the convenience of the reader we give the proof.
Proof. Let x ∈ X be arbitrarily chosen. First, note that by ii) of assumption (3.7), the point ȳ_x minimizes the function f_x over R^q, and hence over the set Y (since ȳ_x ∈ Y). Let N_Y(ȳ_x) denote the normal cone to Y at ȳ_x. Writing problem DCP(x) in its equivalent form min_{y∈R^q} (F̂_x + ψ_Y − f_x)(y), and since ȳ_x is a local solution of problem DCP(x), it follows from [18] that

∂f_x(ȳ_x) ⊂ ∂(F̂_x + ψ_Y)(ȳ_x).

On the other hand, since Y is a nonempty convex set, the function ψ_Y is proper and convex (see for example [19]). Moreover, we have dom(ψ_Y) = Y, and F̂_x is continuous on R^q, hence in particular on Y. Then, by using Theorem 2.1, we get

∂(F̂_x + ψ_Y)(ȳ_x) = ∂F̂_x(ȳ_x) + N_Y(ȳ_x),

and from ii) of assumption (3.7), it follows that 0 ∈ ∂F̂_x(ȳ_x) + N_Y(ȳ_x). Therefore, by Theorem 2.2, ȳ_x minimizes the function F̂_x over the set Y, and hence it is a common solution of the following convex minimization problems min_{y∈Y} f_x(y) and min_{y∈Y} F̂_x(y).

So, it is a common solution to the problems min_{y∈Y} f_x(y) and max_{y∈Y} F_x(y) (since F̂_x = −F_x), i.e., condition (C2) is satisfied.
Finally, using the result of Theorem 3.1, we deduce that the problem (S) has at least one solution.
Theorem 3.4. Assume that assumptions (3.1)-(3.6) and the following assumption are satisfied:

(3.8) For any x ∈ X, there exists (ȳ_x, t̄_x) ∈ Y × R such that i) (ȳ_x, t̄_x) is a solution of the problem RCP(x); ii) 0 ∈ ∂f_x(ȳ_x).

Then, the problem (S) has at least one solution.
Proof. Let x ∈ X be arbitrarily chosen. From assumption (3.8), there exists (ȳ_x, t̄_x) ∈ Y × R which solves RCP(x). It follows from Proposition 1 that ȳ_x solves the problem DCP(x). Then, using the fact that 0 ∈ ∂f_x(ȳ_x) and Theorem 3.3, we deduce the existence of solutions to (S).

Remark 6.
In the case of problem RCP(x), the objective function is g_x(y, t) = F̂_x(y) − t and the reverse convex constraint is h_x(y, t) ≥ 0, so that assumption (2.2) in Theorem 2.3 is formulated accordingly. For (ȳ, t̄) ∈ Y × R, recall the following notation that we will use in the sequel:

S(g_x, (ȳ, t̄)) = { (y, t) ∈ Y × R / g_x(y, t) ≤ g_x(ȳ, t̄) }.

Then, we have the following result.
Theorem 3.5. Assume that assumptions (3.1)-(3.6) and the following condition are satisfied:

(3.9) For any x ∈ X, i) there exists ȳ_x ∈ Y such that, for any y ∈ Y, we have

∂f_x(y) × {−1} ⊂ N*_{S(g_x, (ȳ_x, f_x(ȳ_x)))}(y, f_x(y));

ii) 0 ∈ ∂f_x(ȳ_x).

Then, the problem (S) has at least one solution.
In the following theorem, we consider a class of problems where the objective functions of the leader and the follower are linked by the following inequality:

(3.10) For any (x, y) ∈ X × Y, we have F(x, y) + f(x, y) ≤ 0.
Theorem 3.6. Assume that assumptions (3.1)-(3.6), (3.10) and the following assumption are satisfied:

(3.11) For any x ∈ X, there exists ȳ_x ∈ Y such that i) f_x(ȳ_x) + F_x(ȳ_x) = 0; ii) 0 ∈ ∂f_x(ȳ_x).

Then, the problem (S) has at least one solution.
Proof. Let x ∈ X, and let ȳ_x ∈ Y be given by assumption (3.11). We show that assumption (3.9) is satisfied. Let y ∈ Y and let (y*, t*) ∈ ∂f_x(y) × {−1}. Then, y* ∈ ∂f_x(y) and t* = −1. For (u, s) ∈ S(g_x, (ȳ_x, f_x(ȳ_x))), we have

g_x(u, s) ≤ g_x(ȳ_x, f_x(ȳ_x)) = −F_x(ȳ_x) − f_x(ȳ_x) = 0,

where the last equality follows from i) of assumption (3.11). Thus,

(1)    F̂_x(u) − s ≤ 0.

Since y* ∈ ∂f_x(y), it follows that

(2)    ⟨y*, u − y⟩ ≤ f_x(u) − f_x(y).

On the other hand, from assumption (3.10) we have f_x(u) ≤ −F_x(u) = F̂_x(u). Using (2), we get

⟨(y*, t*), (u, s) − (y, f_x(y))⟩ = ⟨y*, u − y⟩ − s + f_x(y) ≤ f_x(u) − s ≤ F̂_x(u) − s ≤ 0,

where the first and the last inequalities follow from (2) and (1), respectively. That is, ∂f_x(y) × {−1} ⊂ N*_{S(g_x, (ȳ_x, f_x(ȳ_x)))}(y, f_x(y)), and hence assumption (3.9) is satisfied. By the same arguments as in Theorem 3.5, we verify that condition (C3) is satisfied. The nonemptiness of the solution set of problem (S) follows from the same theorem.
Remark 7. Remark that 1) if for any x ∈ X, the function f_x is differentiable on R^q, then the point ȳ_x given in assumption (3.11) is a solution of the following system of q + 1 equations

∇f_x(y) = 0,  f_x(y) + F_x(y) = 0;

2) assumption (3.11) implies that the function f_x(·) is bounded from below by −F_x(ȳ_x).
Finally, we have the following theorem which gives sufficient conditions for the existence of solutions to (S) via parameterized convex maximization problems.
Theorem 3.7. Assume that assumptions (3.2)-(3.6) and the following property are satisfied:

(3.12) For any x ∈ X, there exists (ȳ_x, t̄_x) ∈ Y × R such that i) (ȳ_x, t̄_x) is a solution of the problem CMP(x); ii) 0 ∈ ∂f_x(ȳ_x).

Then, problem (S) has at least one solution.
Proof. Let x ∈ X be arbitrarily chosen. Condition i) of assumption (3.12) and Proposition 2 imply that ȳ_x solves the problem DCP(x). Using the fact that 0 ∈ ∂f_x(ȳ_x), we deduce from Theorem 3.3 that problem (S) has at least one solution.

4. Conclusions. Due to their complex formulation, weak nonlinear bilevel programming problems are known to be difficult from both the theoretical and the numerical points of view. For this reason, such problems are rarely studied in the literature in spite of their importance. In the present paper, in order to contribute to the reduction of these difficulties, we have derived some relationships between a class of weak nonlinear bilevel optimization problems and some other well-known global optimization problems. The established results allow us to obtain the existence of solutions to (S) via reverse convex and convex maximization problems. While the theoretical study of weak bilevel programming problems has seen some development, the numerical side is still in its infancy, and finding numerical algorithms for solving this class of problems is of major importance. This is, however, outside the scope of the present paper and will be the subject of a forthcoming work.