OPTIMALITY RESULTS FOR NONDIFFERENTIABLE VECTOR OPTIMIZATION PROBLEMS WITH VANISHING CONSTRAINTS

At present, many real extremum problems related to the activity of modern man, for example, in industry, economics, optimal control, engineering and mechanics, are modeled by optimization problems with vanishing constraints. In this paper, a class of nondifferentiable vector optimization problems with vanishing constraints is considered in which every component of the involved functions is locally Lipschitz. This kind of extremum problem is generally difficult to deal with because of the special structure of its constraints. The Karush-Kuhn-Tucker necessary optimality conditions are established for the foregoing nonsmooth multicriteria optimization problems under the VC-Cottle constraint qualification. Sufficient optimality conditions are also proved for the considered nondifferentiable vector optimization problem with vanishing constraints under convexity hypotheses.


Introduction
A particular form of a mathematical programming problem which has attracted the attention of the optimization community for more than the past decade is the so-called optimization problem with vanishing constraints. This is a consequence of the fact that many problems from structural and topology optimization can be reformulated as just such an extremum problem. For example, vanishing constraints occur in truss topology design problems if a bar is not realized in the optimal structure, so that constraints (like minimum thickness) disappear at the solution (see [1]). In recent years, topology optimization has become an accepted tool in many engineering, mechanics and industrial applications such as airplane and car manufacturing. Further, the corresponding research shows that the robot motion planning problem can be transformed into a mathematical programming problem with vanishing constraints (see [22]). In addition, such optimization problems are also widely used in the economic dispatch problem (see [18]) and in nonlinear integer optimal control (see [19,22]). Achtziger and Kanzow [1] were the first to introduce the formulation of a mathematical programming problem with vanishing constraints. Since optimization problems with vanishing constraints, in their general form, are quite a new class of mathematical programming problems, only a few papers have been published on this subject so far (see, for example, [2,3,9,10,12-14,16,17,25,29]). However, most of them have been devoted to optimality and duality results for differentiable scalar optimization problems with vanishing constraints. Recently, however, the first results on optimality conditions for some classes of nondifferentiable optimization problems with vanishing constraints have appeared in the literature. In [20], Kazemi and Kanzi analyzed a class of optimization problems with differentiable objective functions and nondifferentiable vanishing constraints for which, by using various qualification conditions, they
obtained several stationary conditions of Karush-Kuhn-Tucker type. Unfortunately, based on a mistaken example, they gave the incorrect statement that it is impossible to replace the smoothness condition on the objective function of the considered optimization problem with vanishing constraints by a Lipschitz condition in order to prove the necessary optimality conditions. Recently, Su and Hang [27] established optimality conditions in terms of upper and lower Hadamard derivatives for the nonsmooth semi-infinite interval-valued mathematical programming problem with vanishing constraints. However, the optimality results for multiobjective programming problems with vanishing constraints have been established in the literature mainly for differentiable optimization problems of such a type (see [4,24,28]). Ardakani et al. [5] considered a multiobjective optimization problem with vanishing constraints in which the objective functions are continuously differentiable and the constraints are convex and not necessarily differentiable. They introduced two new Abadie-type constraint qualifications and presented some necessary conditions for properly efficient solutions of this vector optimization problem, using the convex subdifferential. Nevertheless, there are only a few works devoted to results for vector optimization problems with vanishing constraints in which all involved functions are nondifferentiable (see [6,11]). In [11], Guu et al. considered a nondifferentiable vector semi-infinite optimization problem with vanishing constraints. For such nonsmooth multiobjective programming problems, they derived various stationary conditions and proved strong Karush-Kuhn-Tucker type sufficient optimality conditions under generalized convexity assumptions. Barilla et al.
[6] formulated their results, established for the considered nonsmooth vector optimization problem, in terms of the Mordukhovich subdifferential. Therefore, the main purpose of this paper is to prove the aforesaid necessary optimality conditions for the considered nondifferentiable multiobjective programming problem with vanishing constraints in which all components of the involved functions are locally Lipschitz. Namely, for the considered nondifferentiable multiobjective programming problem with vanishing constraints, we first prove the Karush-Kuhn-Tucker necessary optimality conditions for a feasible solution to be a weak Pareto solution. We also use the Karush-Kuhn-Tucker approach to the considered nondifferentiable multicriteria optimization problem to exploit the special structure of its vanishing constraints. Moreover, the present paper is also an attempt at providing an extension of the well-known Cottle constraint qualification from classical constrained optimization problems to the considered nondifferentiable vector optimization problems with vanishing constraints. Thus, we introduce the so-called VC-Cottle constraint qualification for such nonsmooth extremum problems, under which we prove the Karush-Kuhn-Tucker necessary optimality conditions for nondifferentiable multiobjective programming problems with vanishing constraints. Motivated by the task of finding a (weak) Pareto solution of the considered nondifferentiable multiobjective programming problem with vanishing constraints, we also generalize the definition of an S-stationary point, introduced by Achtziger and Kanzow [1], to the considered multiobjective programming problem with vanishing constraints. Then, we show the equivalence between an S-stationary point and the Karush-Kuhn-Tucker necessary optimality conditions. Finally, we correct the mistaken result and the incorrect statement given by Kazemi and Kanzi [20] that it is not possible to derive the necessary optimality conditions for nonsmooth scalar optimization
problems in which the objective functions are nondifferentiable. In this paper, we not only prove the aforesaid necessary optimality conditions for nonsmooth scalar optimization problems with vanishing constraints in which the objective functions are also nondifferentiable, but we establish them for nondifferentiable multiobjective programming problems with vanishing constraints. At the end of the paper, we also analyze and comment on Example 4.1 in [20].

Preliminaries
The following convention for equalities and inequalities will be used in the paper.
For any x = (x_1, x_2, ..., x_n)^T, y = (y_1, y_2, ..., y_n)^T in R^n, we define:
(i) x = y if and only if x_i = y_i for all i = 1, 2, ..., n;
(ii) x > y if and only if x_i > y_i for all i = 1, 2, ..., n;
(iii) x ≧ y if and only if x_i ≧ y_i for all i = 1, 2, ..., n;
(iv) x ≥ y if and only if x ≧ y and x ≠ y.
In this paper, we use the same notation for row and column vectors when the interpretation is obvious. The notation ⟨·, ·⟩ is used to denote the inner product. A nonempty set X ⊂ R^n is said to be convex if x + α(y − x) ∈ X for all x, y ∈ X and any α ∈ [0, 1]. Now, we recall some definitions and results for locally Lipschitz functions.
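For readers who wish to experiment numerically, the four componentwise order relations above can be sketched in Python (the helper names `eq`, `lt`, `leq`, `le` are hypothetical and introduced here only for illustration):

```python
# Componentwise order relations on R^n used throughout the paper:
# lt encodes x < y (strict in every component),
# leq encodes x ≦ y (inequality in every component, equality allowed),
# le encodes x ≤ y (x ≦ y and x ≠ y).

def eq(x, y):
    return all(xi == yi for xi, yi in zip(x, y))

def lt(x, y):
    # x < y: strict inequality in every component
    return all(xi < yi for xi, yi in zip(x, y))

def leq(x, y):
    # x ≦ y: componentwise inequality, equality allowed
    return all(xi <= yi for xi, yi in zip(x, y))

def le(x, y):
    # x ≤ y: x ≦ y and x ≠ y
    return leq(x, y) and not eq(x, y)
```

Note that ≤ is strictly weaker than <: for x = (0, 0) and y = (1, 0) one has x ≤ y but not x < y, which is exactly the distinction between Pareto and weak Pareto solutions later in the paper.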

Definition 2.3 ([7]). The Clarke generalized subgradient of a locally Lipschitz function φ : R^n → R at u ∈ R^n, denoted by ∂φ(u), is defined as follows:

∂φ(u) := {ξ ∈ R^n : φ⁰(u; v) ≧ ⟨ξ, v⟩ for all v ∈ R^n},

where φ⁰(u; v) := limsup_{y→u, α↓0} [φ(y + αv) − φ(y)]/α is the Clarke generalized directional derivative of φ at u in the direction v.

It is well known that a function φ : X → R defined on a convex set X is said to be convex provided that, for all x, u ∈ X and any α ∈ [0, 1], one has

φ(u + α(x − u)) ≦ αφ(x) + (1 − α)φ(u).

Definition 2.4 ([7, 26]). The subdifferential of a convex function φ : R^n → R at u ∈ R^n is defined as follows:

∂φ(u) := {ξ ∈ R^n : φ(x) − φ(u) ≧ ⟨ξ, x − u⟩ for all x ∈ R^n}.

The following result in convex analysis is well known.

Proposition 2.1 ([7]). Let φ be convex on R^n and locally Lipschitz at u ∈ R^n. Then the Clarke generalized subgradient ∂φ(u) of φ at u coincides with the subdifferential of φ at u in the sense of convex analysis, and φ⁰(u; v) coincides with the classical directional derivative φ′(u; v) := lim_{α↓0} [φ(u + αv) − φ(u)]/α.
As follows from the definition of a convex function and the definition of its subdifferential, the following result is true ([7, 26]): if φ : R^n → R is a convex function, then the inequality

φ(x) − φ(u) ≧ ⟨ξ, x − u⟩

holds for all x, u ∈ R^n and any ξ ∈ ∂φ(u).
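The subgradient inequality can be checked numerically on a simple convex example. The following sketch (illustrative only) uses φ(x) = |x| at u = 0, for which ∂φ(0) = [−1, 1], and verifies φ(x) − φ(u) ≧ ⟨ξ, x − u⟩ over a sample of subgradients and points:

```python
# Numerical illustration of the subgradient inequality
# φ(x) − φ(u) ≧ ⟨ξ, x − u⟩ for the convex function φ(x) = |x| at u = 0,
# where the subdifferential is ∂φ(0) = [−1, 1].

phi = abs
u = 0.0
subgradients = [-1.0, -0.5, 0.0, 0.5, 1.0]      # sample elements of ∂φ(0)
points = [x / 10.0 for x in range(-30, 31)]     # sample points x ∈ [−3, 3]

# Collect any (ξ, x) pair violating the inequality (up to roundoff).
violations = [
    (xi, x)
    for xi in subgradients
    for x in points
    if phi(x) - phi(u) < xi * (x - u) - 1e-12
]
```

Since |x| ≧ ξx whenever |ξ| ≦ 1, the list of violations is empty.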

Corollary 2.1 ([7]). Let the functions φ_i : R^n → R, i = 1, ..., k, be locally Lipschitz at u ∈ R^n. For any scalars α_i, one has

∂(α_1φ_1 + ... + α_kφ_k)(u) ⊆ α_1∂φ_1(u) + ... + α_k∂φ_k(u),

and equality holds if all but at most one of the φ_i are strictly differentiable at u.

Theorem 2.1 ([7]). Let the function φ : R^n → R be locally Lipschitz at a point u ∈ R^n and attain its (local) minimum at u. Then 0 ∈ ∂φ(u).
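Theorem 2.1 can be illustrated with φ(x) = |x|, which is locally Lipschitz, nondifferentiable at its minimizer u = 0, and satisfies 0 ∈ ∂φ(0) = [−1, 1]. A minimal sketch (illustrative, one-dimensional): for a convex function, φ⁰(u; v) equals the classical directional derivative, so 0 ∈ ∂φ(0) amounts to φ′(0; v) ≧ ⟨0, v⟩ = 0 for every direction v, which we check by a one-sided difference quotient:

```python
# φ(x) = |x| attains its minimum at u = 0, where it is not differentiable.
# Theorem 2.1 asserts 0 ∈ ∂φ(0); since φ is convex, this is equivalent to
# φ'(0; v) ≧ 0 for all directions v, checked here numerically.

def dir_deriv(phi, u, v, alpha=1e-8):
    # one-sided directional derivative approximation (φ(u + αv) − φ(u))/α
    return (phi(u + alpha * v) - phi(u)) / alpha

phi = abs
u = 0.0
checks = [dir_deriv(phi, u, v) >= -1e-9 for v in (-2.0, -1.0, -0.3, 0.5, 1.0, 2.0)]
```

Here φ′(0; v) = |v| ≧ 0 in every direction, so every check passes.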

Proposition 2.4 ([7]). Let φ_1 and φ_2 be locally Lipschitz functions at u. Then φ_1φ_2 is also a locally Lipschitz function at u, and one has

∂(φ_1φ_2)(u) ⊆ φ_2(u)∂φ_1(u) + φ_1(u)∂φ_2(u).

If, in addition, φ_1(u) ≧ 0, φ_2(u) ≧ 0 and φ_1, φ_2 are both regular in the sense of Clarke at u, then the equality holds and φ_1φ_2 is regular in the sense of Clarke at u.

Proposition 2.5 ([7]). Let the functions φ_i : R^n → R, i = 1, ..., k, be locally Lipschitz at u, and define φ(x) := max{φ_i(x) : i = 1, ..., k}. Then φ is locally Lipschitz at u and

∂φ(u) ⊆ conv{∂φ_i(u) : i ∈ I(u)},

where I(u) := {i : φ_i(u) = φ(u)} is the set of indices active at u. If, in addition, the functions φ_i are regular in the sense of Clarke for each i ∈ I(u), then equality holds.

Nondifferentiable vector optimization problem with vanishing constraints and optimality conditions
In the paper, we consider the following constrained vector optimization problem:

minimize f(x) := (f_1(x), ..., f_p(x))
subject to g_j(x) ≦ 0, j ∈ J := {1, ..., m},                (MPVC)
           H_t(x) ≧ 0, t ∈ T := {1, ..., q},
           G_t(x)H_t(x) ≦ 0, t ∈ T,

where f_i : R^n → R, i ∈ I := {1, ..., p}, g_j : R^n → R, j ∈ J, and H_t, G_t : R^n → R, t ∈ T, are assumed to be locally Lipschitz functions on R^n. We call (MPVC) a multiobjective programming problem with vanishing constraints. The considered multiobjective programming problem (MPVC) is called a vector optimization problem with vanishing constraints due to its implicit sign constraints G_t(x) ≦ 0, t ∈ T, which vanish immediately whenever H_t(x) = 0, t ∈ T.
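The vanishing behavior can be seen on a toy instance in R^2 (hypothetical data, introduced here only to illustrate the constraint structure, not taken from the paper): one inequality constraint g_1(x) = −x_1 ≦ 0, one vanishing pair H_1(x) = x_2, G_1(x) = x_1 − 1. Whenever H_1(x) = 0, the product constraint G_1(x)H_1(x) ≦ 0 holds trivially, so the implicit constraint G_1(x) ≦ 0 "vanishes":

```python
# Toy (MPVC) constraint system (hypothetical data):
#   g_1(x) = −x_1 ≦ 0,  H_1(x) = x_2 ≧ 0,  G_1(x)H_1(x) = (x_1 − 1) x_2 ≦ 0.

def g1(x): return -x[0]
def H1(x): return x[1]
def G1(x): return x[0] - 1.0

def feasible(x, tol=1e-12):
    return (g1(x) <= tol
            and H1(x) >= -tol
            and G1(x) * H1(x) <= tol)

examples = {
    (0.0, 0.0): feasible((0.0, 0.0)),   # H_1 = 0: G_1 is irrelevant
    (2.0, 0.0): feasible((2.0, 0.0)),   # G_1 > 0 but H_1 = 0: still feasible
    (2.0, 1.0): feasible((2.0, 1.0)),   # G_1 > 0 and H_1 > 0: infeasible
}
```

The point (2, 0) is feasible although G_1(2, 0) = 1 > 0, precisely because the constraint on G_1 vanishes where H_1 = 0.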
Remark 3.1. If p = 1 and all functions constituting (MPVC) are continuously differentiable, then (MPVC) reduces to the smooth scalar optimization problem with vanishing constraints which was introduced for the first time into optimization theory by Achtziger and Kanzow [1]. If p = 1, the objective function is continuously differentiable and J = ∅, then (MPVC) reduces to the scalar optimization problem with vanishing constraints considered by Kazemi and Kanzi [20]. Hence, the results established in the paper for the considered multiobjective programming problem (MPVC) extend similar results previously proved in the literature.
For the purpose of simplifying our presentation, we next introduce some notation which will be used frequently throughout this paper. Let

D := {x ∈ R^n : g_j(x) ≦ 0, j ∈ J, H_t(x) ≧ 0, G_t(x)H_t(x) ≦ 0, t ∈ T}

be the set of all feasible solutions of the problem (MPVC). Now, for any feasible solution x, let us denote the following index sets:

J(x) := {j ∈ J : g_j(x) = 0}, T_+(x) := {t ∈ T : H_t(x) > 0}, T_0(x) := {t ∈ T : H_t(x) = 0}.

Further, let us divide the index set T_+(x) into the following index subsets:

T_{+0}(x) := {t ∈ T_+(x) : G_t(x) = 0}, T_{+-}(x) := {t ∈ T_+(x) : G_t(x) < 0}.

Similarly, the index set T_0(x) can be partitioned into the following three index subsets:

T_{0+}(x) := {t ∈ T_0(x) : G_t(x) > 0}, T_{00}(x) := {t ∈ T_0(x) : G_t(x) = 0}, T_{0-}(x) := {t ∈ T_0(x) : G_t(x) < 0}.

In multiobjective programming problems, the concept of (weak) Pareto optimality (or (weak) efficiency) plays an important role in all optimal decision problems with noncomparable criteria.

Definition 3.1. A feasible point x̄ is said to be a weak Pareto solution (a weakly efficient solution) of (MPVC) if there is no other x ∈ D such that f(x) < f(x̄).

Definition 3.2. A feasible point x̄ is said to be a Pareto solution (an efficient solution) of (MPVC) if there is no other x ∈ D such that f(x) ≤ f(x̄).

In this paper, we use a modified Cottle constraint qualification in proving the Karush-Kuhn-Tucker necessary optimality conditions for the considered multiobjective programming problem (MPVC) with vanishing constraints. We call it the VC-Cottle constraint qualification (VC-CCQ).

Proof (of Theorem 3.1). By assumption, x̄ ∈ D is a weak Pareto solution of the considered multiobjective programming problem (MPVC) with vanishing constraints. At first, we define the auxiliary function F : R^n → R as follows:

F(x) := max{f_i(x) − f_i(x̄), i ∈ I; g_j(x), j ∈ J; −H_t(x), t ∈ T; H_t(x)G_t(x), t ∈ T}. (3.9)

We now show that, for all x ∈ R^n,

F(x) ≧ 0. (3.10)

We proceed by contradiction. Suppose, contrary to this claim, that there exists x̃ ∈ R^n such that F(x̃) < 0. Then, by (3.9), it follows that g_j(x̃) < 0, j ∈ J, −H_t(x̃) < 0 and H_t(x̃)G_t(x̃) < 0 for all t ∈ T. This means that x̃ is feasible in (MPVC). Moreover, we have that the inequalities f_i(x̃) < f_i(x̄), i ∈ I, hold; this is a contradiction to the assumption that x̄ ∈ D is a weak Pareto solution of (MPVC). Hence, the inequality (3.10) must be true.
From the feasibility of x̄ in (MPVC) and (3.10), it follows that F(x̄) = 0, that is, F attains its (global) minimum at x̄; hence, by Theorem 2.1,

0 ∈ ∂F(x̄). (3.12)

Now, we designate by J(x̄) the set of indices j ∈ J for which F(x̄) = g_j(x̄), by T_H(x̄) the set of indices t ∈ T for which F(x̄) = −H_t(x̄), and by T_HG(x̄) the set of indices t ∈ T for which F(x̄) = H_t(x̄)G_t(x̄). Hence, using Proposition 2.5, by (3.13), we get that

∂F(x̄) ⊆ conv({∂f_i(x̄) : i ∈ I} ∪ {∂g_j(x̄) : j ∈ J(x̄)} ∪ {−∂H_t(x̄) : t ∈ T_H(x̄)} ∪ {∂(H_tG_t)(x̄) : t ∈ T_HG(x̄)}). (3.14)

Firstly, we assume that g_j(x̄) < 0, j = 1, ..., m, H_t(x̄) > 0 and G_t(x̄) < 0 for all t ∈ T. Then, (3.12) and (3.14) yield 0 ∈ conv{∂f_i(x̄) : i ∈ I}. Hence, by the definition of a convex hull, there exists λ ∈ R^p, λ ≥ 0, such that 0 ∈ Σ_{i∈I} λ_i ∂f_i(x̄). In this case, we obtain the conditions (3.2)-(3.8) by setting μ_j = 0, j ∈ J, and ϑ_t^H = ϑ_t^G = 0, t ∈ T. Now, we assume that there exists j ∈ J such that g_j(x̄) = 0, or t ∈ T such that H_t(x̄) = 0 or G_t(x̄) = 0. Hence, by the definition of a convex hull, there exist vectors λ ∈ R^p, λ ≧ 0, μ ∈ R^{J(x̄)}, μ ≧ 0, β ∈ R^{T_H(x̄)}, β ≧ 0 and υ ∈ R^{T_HG(x̄)}, υ ≧ 0, not all equal to zero, such that

0 ∈ Σ_{i∈I} λ_i ∂f_i(x̄) + Σ_{j∈J(x̄)} μ_j ∂g_j(x̄) − Σ_{t∈T_H(x̄)} β_t ∂H_t(x̄) + Σ_{t∈T_HG(x̄)} υ_t ∂(H_tG_t)(x̄). (3.15)

Observe that, in (3.15), we can set μ_j = 0 for j ∉ J(x̄). (3.17) Using (3.17) in (3.16), we get the following cases. 2) Let t ∈ T_{+0}(x̄). Then H_t(x̄) > 0 and G_t(x̄) = 0. The first condition implies that t ∉ T_H(x̄), and consequently, β_t = 0. Since we always have υ_t ≧ 0, conditions (3.21) and (3.22) give ϑ_t^H = 0 and ϑ_t^G ≧ 0 for all t ∈ T_{+0}(x̄).
Based on the above cases, we conclude that the necessary optimality conditions (3.2)-(3.8) hold. We see that (3.23) is exactly the necessary optimality condition (3.2). Further, note that the Karush-Kuhn-Tucker necessary optimality condition (3.4) is also fulfilled. Indeed, if g_j(x̄) < 0 for all j ∉ J(x̄), then μ_j = 0 for all j ∉ J(x̄). Hence, we get that μ_j ≧ 0, j = 1, ..., m. Moreover, the Lagrange multiplier λ is not equal to 0, as was shown above. This means that λ ≥ 0, which completes the proof of this theorem.
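The index sets driving the case analysis above can be computed directly from H and G at a given point. A minimal Python sketch (the helper name `partition_index_sets` and the sample data are hypothetical, for illustration only):

```python
# Partition of T into T_+, T_0, T_{+0}, T_{+-}, T_{0+}, T_{00}, T_{0-}
# at a point x, where H and G are lists of functions indexed by t ∈ T.

def partition_index_sets(H, G, x, tol=1e-12):
    T_plus = [t for t, Ht in enumerate(H) if Ht(x) > tol]
    T_zero = [t for t, Ht in enumerate(H) if abs(Ht(x)) <= tol]
    return {
        "T+":  T_plus,
        "T0":  T_zero,
        "T+0": [t for t in T_plus if abs(G[t](x)) <= tol],
        "T+-": [t for t in T_plus if G[t](x) < -tol],
        "T0+": [t for t in T_zero if G[t](x) > tol],
        "T00": [t for t in T_zero if abs(G[t](x)) <= tol],
        "T0-": [t for t in T_zero if G[t](x) < -tol],
    }

# Sample data: H_1(x) = x_1, H_2(x) = x_2, G_1(x) = x_2 − 1, G_2(x) = x_1,
# evaluated at the feasible point x̄ = (1, 0).
H = [lambda x: x[0], lambda x: x[1]]
G = [lambda x: x[1] - 1.0, lambda x: x[0]]
sets = partition_index_sets(H, G, (1.0, 0.0))
```

At x̄ = (1, 0) the first constraint pair has H_1 = 1 > 0, G_1 = −1 < 0 (index in T_{+-}), while the second has H_2 = 0, G_2 = 1 > 0 (index in T_{0+}).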
Various stationarity concepts are widely studied in the literature devoted to differentiable optimization problems with vanishing constraints and are known to be important optimality conditions for such mathematical programming problems (see, for example, [1,13]). Now, we generalize the definition of one of them to the nondifferentiable vectorial case. Namely, we extend the definition of an S-stationary point given in [15] to the considered nonsmooth multiobjective programming problem with vanishing constraints.

Definition 3.4. A feasible solution x̄ is called an S-stationary point of the nonsmooth vector optimization problem (MPVC) with vanishing constraints if there are Lagrange multipliers λ ∈ R^p, μ ∈ R^m, ϑ^H ∈ R^q and ϑ^G ∈ R^q, not all equal to 0, such that

0 ∈ Σ_{i=1}^p λ_i ∂f_i(x̄) + Σ_{j=1}^m μ_j ∂g_j(x̄) − Σ_{t=1}^q ϑ_t^H ∂H_t(x̄) + Σ_{t=1}^q ϑ_t^G ∂G_t(x̄), (3.24)
λ ≥ 0, μ_j ≧ 0 and μ_j g_j(x̄) = 0, j ∈ J, (3.25)
ϑ_t^H = 0 for t ∈ T_+(x̄), ϑ_t^H ≧ 0 for t ∈ T_{00}(x̄) ∪ T_{0-}(x̄), ϑ_t^H free for t ∈ T_{0+}(x̄), (3.26)
ϑ_t^G = 0 for t ∈ T_{0+}(x̄) ∪ T_{00}(x̄) ∪ T_{0-}(x̄) ∪ T_{+-}(x̄), ϑ_t^G ≧ 0 for t ∈ T_{+0}(x̄). (3.27)

Here, it is worth mentioning that the concept of an S-stationary point is equivalent to the standard KKT condition.
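S-stationarity can be verified numerically in the smooth case, where each Clarke subdifferential reduces to the singleton containing the gradient. A minimal sketch with hypothetical one-dimensional data: minimize f(x) = x subject to H(x) = x ≧ 0 and G(x)H(x) ≦ 0 with G(x) ≡ −1. At x̄ = 0 the single index lies in T_{0-}(x̄), for which the standard sign conditions of S-stationarity (as in the smooth MPVC literature) require ϑ^H ≧ 0 and ϑ^G = 0:

```python
# Smooth 1-D instance (hypothetical data): f(x) = x, H(x) = x, G(x) ≡ −1.
# At x̄ = 0: H(x̄) = 0, G(x̄) = −1 < 0, so the index belongs to T_{0-}(x̄).
# Subdifferentials are singletons {derivative}; S-stationarity becomes
#   0 = λ f'(x̄) − ϑ^H H'(x̄) + ϑ^G G'(x̄),  λ > 0,  ϑ^H ≧ 0,  ϑ^G = 0.

grad_f, grad_H, grad_G = 1.0, 1.0, 0.0   # derivatives at x̄ = 0
lam, theta_H, theta_G = 1.0, 1.0, 0.0    # candidate Lagrange multipliers

stationarity = lam * grad_f - theta_H * grad_H + theta_G * grad_G
sign_ok = (lam > 0) and (theta_H >= 0) and (theta_G == 0)
```

The residual of the stationarity equation is zero and all sign conditions hold, so x̄ = 0 (which is indeed the minimizer, since the feasible set is x ≧ 0) passes the S-stationarity test with these multipliers.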

Let x̄ be a KKT point of (MPVC). Then x̄ is an S-stationary point of (MPVC). Conversely, let x̄ be an S-stationary point of (MPVC). Then there are Lagrange multipliers
Based on the foregoing results and the definition of an S-stationary point, we now formulate the Karush-Kuhn-Tucker necessary optimality conditions of S-stationary type.

Theorem 3.2 (Karush-Kuhn-Tucker necessary optimality conditions of S-stationary type). Let x̄ ∈ D be a (weak) Pareto solution of the considered multiobjective programming problem (MPVC) with vanishing constraints and let the VC-Cottle constraint qualification be fulfilled at x̄. Then, there exist Lagrange multipliers λ ∈ R^p, μ ∈ R^m, ϑ^H ∈ R^q and ϑ^G ∈ R^q such that x̄ is an S-stationary point of (MPVC).
In the next theorem, we present Karush-Kuhn-Tucker type sufficient optimality conditions for (MPVC) under convexity assumptions. In order to prove this result for an S-stationary point of (MPVC), we introduce additional notation. Namely, if x̄ ∈ D is an S-stationary point of (MPVC), so that, by Definition 3.4, there exist Lagrange multipliers for which the conditions (3.24)-(3.27) are fulfilled at x̄, then we introduce the following notation:

Theorem 3.3. Let x̄ ∈ D be an S-stationary point of (MPVC). Further, assume that the conditions (3.24)-(3.27) are fulfilled with Lagrange multipliers λ ∈ R^p, μ ∈ R^m, ϑ^H ∈ R^q and ϑ^G ∈ R^q such that the following conditions are satisfied: (3.28) holds for some multipliers as above.

Proof. We proceed by contradiction. Suppose, contrary to the result, that x̄ is not a weak Pareto solution of (MPVC). Then, by Definition 3.1, there exists x̃ ∈ D such that f(x̃) < f(x̄). (3.29) By the definition of an S-stationary point of (MPVC) (see Definition 3.4), (3.29) yields (3.33). Hence, using again Definition 3.4 together with (3.33), and by the convexity hypotheses, the corresponding subgradient inequalities hold. Hence, by Definition 3.4, the above inequalities yield, respectively, an inequality that holds for any ξ_i ∈ ∂f_i(x̄), i ∈ I, ζ_j ∈ ∂g_j(x̄), j ∈ J, ς_t ∈ ∂H_t(x̄), t ∈ T, γ_t ∈ ∂G_t(x̄), t ∈ T. This is a contradiction to (3.28). In order to prove that x̄ is a Pareto solution of (MPVC), the convexity assumption imposed on the objective functions should be replaced by the assumption of strict convexity. However, the proof is similar in such a case and, therefore, it is omitted. Thus, the proof of this theorem is completed.
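The conclusion of a sufficiency result of this kind can be cross-checked by brute force: on a grid, no feasible point should strictly improve every objective at once (Definition 3.1). A minimal sketch on a small convex instance with hypothetical data, f(x) = (x_1, x_2), H_1(x) = x_1 ≧ 0 and G_1(x) = x_2 (so G_1(x)H_1(x) = x_1x_2 ≦ 0), with candidate point x̄ = (0, 0):

```python
# Grid-based check of the weak Pareto property (Definition 3.1) for a
# small convex (MPVC) instance (hypothetical data):
#   f(x) = (x_1, x_2),  H_1(x) = x_1 ≧ 0,  G_1(x)H_1(x) = x_1 x_2 ≦ 0.

def f(x):
    return (x[0], x[1])

def feasible(x, tol=1e-12):
    return x[0] >= -tol and x[0] * x[1] <= tol

x_bar = (0.0, 0.0)
f_bar = f(x_bar)

grid = [(i / 10.0, j / 10.0) for i in range(-20, 21) for j in range(-20, 21)]
# Feasible grid points strictly better than x̄ in EVERY objective:
dominators = [
    x for x in grid
    if feasible(x) and all(fi < fbi for fi, fbi in zip(f(x), f_bar))
]
```

Any strict improvement in f_1 would need x_1 < 0, which violates H_1(x) ≧ 0, so the list of dominators is empty and x̄ passes the (grid-restricted) weak Pareto test.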
In order to illustrate the result established in Theorem 3.3, we present an example of a convex multiobjective programming problem with vanishing constraints.
Example 3.1. Consider the following multiobjective programming problem (MPVC1) with vanishing constraints. Note that x̄ = (0, 0) is a feasible solution of (MPVC1). We shall show, by using the sufficient optimality conditions established in Theorem 3.3, that x̄ = (0, 0) is a weak Pareto solution of (MPVC1). First, we shall show that x̄ is an S-stationary point of (MPVC1).
In fact, one checks that the condition (3.24) is fulfilled and then that the conditions (3.24)-(3.27) are fulfilled. This means, by Definition 3.4, that x̄ = (0, 0) is an S-stationary point of (MPVC1). Further, note that the conditions a) and b) in Theorem 3.3 are fulfilled and all functions constituting (MPVC1) are convex. Then, by Theorem 3.3, we conclude that x̄ = (0, 0) is a weak Pareto solution of (MPVC1).
In [20], Kazemi and Kanzi presented an example of a nondifferentiable scalar optimization problem with vanishing constraints intended to justify the claim that it is not possible to replace the smoothness condition on the objective function f by a Lipschitz condition in the necessary optimality conditions established in the above-mentioned paper. Unfortunately, the analysis provided by these authors in the aforesaid example is incorrect. We shall correct this mistake and show that it is possible to consider nondifferentiable optimization problems with vanishing constraints also with locally Lipschitz objective functions.
Unfortunately, based on the wrong example, they also gave the incorrect statement that it is impossible to generalize their optimality results to optimization problems with vanishing constraints in which the objective functions are also nondifferentiable. Namely, they stated that it is not possible to replace the smoothness condition imposed on the objective function of the considered optimization problem with vanishing constraints by a Lipschitz condition in order to prove the necessary optimality conditions. They illustrated this incorrect statement by an example of a scalar optimization problem with vanishing constraints in which the set of all feasible solutions was calculated incorrectly.
Example 3.2. We consider the nondifferentiable optimization problem with vanishing constraints presented in Example 4.1 of [20], denoted here by (MPVC2). In Example 4.1 of [20], the set of all feasible solutions of (MPVC2) has been defined by S = ({0} × R_+) ∪ (R_+ × {0}), which is completely incorrect if the constraint functions H_1, H_2, G_1 and G_2 are defined as in the formulation of the optimization problem (MPVC2). The correct set of all feasible solutions of (MPVC2) is D = {(x_1, x_2) ∈ R^2 : x_1 ≧ 0 ∧ x_2 ≧ 0}. However, if we define the set D of all feasible solutions of (MPVC2) correctly, then we observe that x̄ = (0, 0) is not its optimal solution, contrary to the result given in Example 4.1 of [20]. Since x̄ = (0, 0) is not a minimizer of (MPVC2), it is not possible to apply the necessary optimality conditions at such a point. However, the necessary optimality conditions have been used by the authors of Example 4.1 in [20] at precisely such a nonoptimal point. Hence, no conclusions can be drawn from such an incorrect example. Thus, the statement given in [20], that it is not possible to give the necessary optimality conditions for optimization problems with vanishing constraints in which the smoothness condition on the objective function f is replaced by a locally Lipschitz condition, is not true.
Remark 3.2. Now, we modify the formulation of the optimization problem with vanishing constraints considered in Example 4.1 of [20] in order to obtain the set of all feasible solutions S = ({0} × R_+) ∪ (R_+ × {0}), that is, the set given in Example 4.1 of [20]. To this end, we have to modify the constraint functions G_1 and G_2; we denote the resulting optimization problem with vanishing constraints by (MPVC3). Note that x̄ = (0, 0) is an optimal solution of (MPVC3) and, moreover, T_{00}(x̄) = {1, 2}. However, we cannot use the Karush-Kuhn-Tucker necessary optimality conditions for (MPVC3), since the suitable constraint qualification is not fulfilled at x̄. For example, note that the VC-Cottle constraint qualification is not fulfilled at x̄ for (MPVC3), since there exist Lagrange multipliers β_1 = 0, β_2 = 0, υ_1 = 1 and υ_2 = 1, not all equal to 0, for which (3.1) is not satisfied. This means that the VC-Cottle constraint qualification is violated and, therefore, the Karush-Kuhn-Tucker necessary optimality conditions cannot be applied to such an extremum problem. In such a case, the Fritz John optimality conditions should be used, that is, the necessary optimality conditions in which the Lagrange multiplier associated with the objective function may be equal to 0 (see [21] for the differentiable scalar case).
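One possible choice of constraint functions producing the feasible set S = ({0} × R_+) ∪ (R_+ × {0}) (hypothetical, since the modified G_1 and G_2 are not reproduced here) is H_1(x) = x_1, H_2(x) = x_2, G_1(x) = x_2 and G_2(x) = x_1: then H_t(x) ≧ 0 forces x ≧ 0 and G_t(x)H_t(x) = x_1x_2 ≦ 0 forces x_1x_2 = 0, i.e. the union of the nonnegative coordinate axes. A sketch under this assumption:

```python
# Feasibility test for the (hypothetical) modified constraint system:
#   H_1(x) = x_1 ≧ 0, H_2(x) = x_2 ≧ 0,
#   G_1(x)H_1(x) = x_2 x_1 ≦ 0, G_2(x)H_2(x) = x_1 x_2 ≦ 0.
# Together: x ≧ 0 and x_1 x_2 = 0, i.e. S = ({0} × R_+) ∪ (R_+ × {0}).

def feasible(x, tol=1e-12):
    return (x[0] >= -tol and x[1] >= -tol
            and x[1] * x[0] <= tol      # G_1(x) H_1(x) ≦ 0
            and x[0] * x[1] <= tol)     # G_2(x) H_2(x) ≦ 0

on_axes = [feasible((0.0, 3.0)), feasible((2.0, 0.0)), feasible((0.0, 0.0))]
off_axes = [feasible((1.0, 1.0)), feasible((-1.0, 0.0))]
```

Points on the nonnegative axes are feasible, while interior points such as (1, 1) and points with a negative coordinate are not, matching the set S used in [20].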

Conclusion
In the paper, a class of nondifferentiable vector optimization problems with vanishing constraints has been considered in which each component of the involved functions is locally Lipschitz. The Karush-Kuhn-Tucker necessary optimality conditions established in the paper show that it is possible to prove such optimality results also for extremum problems with vanishing constraints in which the smoothness condition assumed for the objective function of the considered optimization problem is replaced by a Lipschitz condition. What is more, we have proved these optimality results for a much broader class of such extremum problems in comparison to the scalar optimization problems with vanishing constraints considered in [20]. Namely, we have derived the aforesaid necessary optimality conditions and, under convexity hypotheses, sufficient optimality conditions for nondifferentiable multiobjective programming problems with vanishing constraints in which not only the constraints are nonsmooth, as in [20], but also each component of the multiple objective function is locally Lipschitz. Moreover, we have generalized to the nonsmooth case the definition of an S-stationary point introduced in the literature for smooth scalar optimization problems with vanishing constraints, and we have extended the classical Cottle constraint qualification to the case of nondifferentiable extremum problems with vanishing constraints. Based on these generalizations, we have proved the Karush-Kuhn-Tucker necessary optimality conditions and their equivalence to the aforesaid stationarity concept. Finally, we have corrected the erroneous result and statement given in [20].
It seems that the techniques employed in this paper can be used to prove similar results for other classes of nondifferentiable multiobjective programming problems with vanishing constraints. We shall investigate these problems in subsequent papers.

Theorem 3.1 (Karush-Kuhn-Tucker necessary optimality conditions). Let x̄ ∈ D be a (weak) Pareto solution of the considered multiobjective programming problem (MPVC) with vanishing constraints and let the VC-Cottle constraint qualification (VC-CCQ) be fulfilled at x̄. Then, there exist Lagrange multipliers λ ∈ R^p, μ ∈ R^m, ϑ^H ∈ R^q and ϑ^G ∈ R^q satisfying the conditions (3.2)-(3.8).

A function φ : R^n → R is locally Lipschitz at u ∈ R^n if there exist constants K_u > 0 and ε > 0 such that |φ(y) − φ(z)| ≦ K_u ∥y − z∥ holds for all y, z ∈ u + εB, where B signifies the open unit ball in R^n, so that u + εB is the open ball of radius ε about u.