Handling Optimization under Uncertainty Problems Using the Robust Counterpart Methodology

In this paper we discuss the robust counterpart (RC) methodology for handling optimization under uncertainty, as proposed by Ben-Tal and Nemirovski. This methodology collects the uncertain data in a so-called uncertainty set U and replaces the uncertain problem by its so-called robust counterpart. We apply the RC approach to uncertain conic optimization (CO) problems, with special attention to the robust linear optimization (RLO) problem, and include a discussion of parametric uncertainty in that case. Some new supporting examples are presented to give a clear description of the use of the RC methodology theorem.


Introduction
The robust counterpart (RC) methodology of Ben-Tal and Nemirovski is one of the existing methodologies for handling uncertainty in the data of an optimization problem. Citing Ben-Tal [9], the main challenge in the RC methodology is how and when we can reformulate the robust counterpart of an uncertain problem as a computationally tractable optimization problem, or at least approximate the robust counterpart by a tractable problem. By its definition, the robust counterpart depends heavily on how the uncertainty set is chosen. As a consequence, we can meet this challenge only if this set is chosen in a suitable way.
Our aim in this paper is to review the RC methodology for uncertain conic optimization problems, with a special focus on uncertain linear optimization problems. In particular, we give a more detailed proof of a theorem that is crucial for the RC methodology of Ben-Tal and Nemirovski in the case of robust linear optimization; see Ben-Tal and Nemirovski [9]. As mentioned above, the main challenge in handling an uncertain optimization problem is to determine how and when the robust counterpart can be reformulated as, or at least approximated by, a computationally tractable optimization problem. A detailed proof of this theorem is therefore worth understanding: given the rapid development of the theory and applications of robust optimization (RO), a clear understanding of the RC methodology is always needed, especially for those who are new to the field. This paper aims to clarify the theorem, and some new supporting examples are presented to illustrate its use.

Optimization under Uncertainty: Topics, Difficulties and Methodologies
Optimization under uncertainty refers to the branch of optimization in which the data vector is uncertain. This means that the data vector is not known exactly at the time when the solution has to be determined. The uncertainty may be due to measurement or modelling errors, or simply to the unavailability of the required information at the time of the decision.
Consider an optimization problem of the form

min_x { f(x, ζ) : g_i(x, ζ) ≤ 0, i = 1, ..., m },

where f(x, ζ) denotes the objective function of the problem and the functions g_i(x, ζ) are the constraint functions. These functions depend on the vectors x and ζ, where x is the vector of decision variables and ζ stands for the data specifying a particular problem instance.
By way of example, in the standard linear optimization problem min { c^T x : Ax ≥ b } the data ζ is the triple (c, A, b), where c is the objective vector, A is the constraint matrix and b is the given right-hand side vector of the constraints.
The classical approach in operations research and management science to dealing with uncertainty is stochastic programming (SP); see Ben-Tal and Nemirovski [7]. The uncertainty in the data of the problem is then modelled by a set of random variables whose distributions are assumed to be known. A less sophisticated approach replaces each uncertain component of the data vector by a representative nominal value, usually the mean value, and hence in essence ignores the uncertainty. Gabrel et al. [21] discuss some new developments on bridging RO and stochastic programming. A discussion of RO from the perspective of SP is given by Chen et al. [17]; it focuses on an approach for constructing uncertainty sets for robust optimization using new deviation measures for random variables, termed the forward and backward deviations. These deviation measures capture distributional asymmetry and lead to better approximations of chance constraints.
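To make the risk of the nominal approach concrete, the following sketch (Python with NumPy/SciPy; all data is hypothetical) solves a nominal LP and then checks how often its optimal solution violates the constraint when the constraint matrix is randomly perturbed:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Nominal LP (hypothetical data): min -x1 - x2  s.t.  x1 + x2 <= 1,  x >= 0.
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
x_nom = res.x  # the nominal optimum sits on the constraint boundary

# Perturb each matrix entry by up to 5% and count constraint violations.
trials = 1000
violations = 0
for _ in range(trials):
    A_pert = A * (1.0 + 0.05 * rng.uniform(-1.0, 1.0, size=A.shape))
    if np.any(A_pert @ x_nom > b + 1e-9):
        violations += 1

print(f"nominal solution infeasible in {violations}/{trials} perturbed instances")
```

Because the nominal optimum is a boundary point, roughly half of the perturbed instances make it infeasible, which is exactly what the RC methodology is designed to prevent.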
We use a different approach, the aforementioned robust counterpart (RC) methodology of Ben-Tal and Nemirovski. In this methodology, as mentioned above, it is assumed that the data vector ζ belongs to an uncertainty set U.
Thus, an uncertain optimization problem can be expressed as follows:

{ min_x { f(x, ζ) : g_i(x, ζ) ≤ 0, i = 1, ..., m } : ζ ∈ U }.     (3)

So it is in fact a whole family of optimization problems. One associates with the uncertain problem (3) its so-called robust counterpart. In the following, we briefly discuss how this robust counterpart can be obtained. Consider the uncertain problem (3). First, we remove the uncertainty from the objective function by replacing (3) by the equivalent problem

{ min_{t,x} { t : f(x, ζ) − t ≤ 0, g_i(x, ζ) ≤ 0, i = 1, ..., m } : ζ ∈ U }.     (4)

In the methodology of Ben-Tal and Nemirovski one only considers solutions x that are feasible for this problem for all possible values ζ ∈ U (and for some t). Thus, the set of all so-called robust feasible solutions of (4) is given by

F = { (t, x) : f(x, ζ) ≤ t, g_i(x, ζ) ≤ 0, i = 1, ..., m, ∀ζ ∈ U }.     (5)

The pair (t, x) denotes the column vector obtained by concatenating t and the column vector x.
Now the robust counterpart of the uncertain problem (4) consists of minimizing t over this set:

min_{t,x} { t : (t, x) ∈ F }.     (6)

Obviously, the robust counterpart of (4) represents a worst-case oriented approach: a pair (t, x) is robust feasible only if it satisfies the constraints for all possible values of ζ ∈ U. The optimal solutions of (6) are called robust optimal solutions. Note that the robust counterpart (6) is an optimization problem with usually infinitely many constraints, depending on the uncertainty set U. This implies that the problem may be very hard to solve; only if U is chosen suitably can problem (6) be solved efficiently.
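The worst-case character of the RC can be illustrated with a tiny LP under box (interval) uncertainty. For nonnegative x, the worst case of each uncertain coefficient is its upper endpoint, so the RC is again an LP with tightened coefficients. The sketch below (Python with SciPy; hypothetical data) compares the nominal and robust optimal values:

```python
import numpy as np
from scipy.optimize import linprog

# Uncertain constraint a^T x <= b with box uncertainty a_j in [a_j - d_j, a_j + d_j].
# For x >= 0 the worst case is attained at a + d, so the RC is again an LP
# with the tightened constraint (a + d)^T x <= b.  Data below is hypothetical.
c = np.array([-3.0, -2.0])          # maximize 3 x1 + 2 x2
a = np.array([1.0, 1.0])            # nominal constraint row
d = np.array([0.2, 0.1])            # uncertainty half-widths
b = 4.0

nominal = linprog(c, A_ub=[a], b_ub=[b], bounds=[(0, None)] * 2)
robust = linprog(c, A_ub=[a + d], b_ub=[b], bounds=[(0, None)] * 2)

print("nominal optimum:", -nominal.fun)
print("robust  optimum:", -robust.fun)  # robust value is never better than nominal
```

The robust optimal value is worse than the nominal one: this gap is the "price of robustness" paid for feasibility under every realization of the data.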

Uncertain Conic Problem
In this section we discuss one of the important classes of optimization problems, namely conic optimization (CO). This class concerns the problem of minimizing a linear objective function over the intersection of an affine set and a convex cone, and it is a very useful optimization technique. The importance of this class of problems is due to two facts: many practical nonlinear problems can be modeled as CO problems, and a wide class of CO problems can be solved efficiently by so-called interior-point methods.
The interest in CO was highly stimulated when it became clear that the interior-point methods that were developed in the last two decades for linear optimization (LO) (see, e.g., Jarre [25], Mehrotra [30], Nesterov and Nemirovski [32], Peng et al. [34], Renegar [35], Roos et al. [36], Sturm and Zhang [37]), and which revolutionized the field of LO, could be naturally extended to obtain polynomial-time methods for CO. The elegant theory developed by Nesterov and Nemirovski [32] provides an interior-point method with polynomial complexity if the underlying cone has a so-called self-concordant barrier that is computationally tractable.
This opened the way to a wide spectrum of new applications which cannot be captured by LO, e.g., in image processing, finance, economics, control theory, combinatorial optimization, etc. For a nice survey of both the theory of CO and many new applications, we refer to the books of Ben-Tal and Nemirovski [8] and Ben-Tal et al. [11].
In this paper we do not touch on the algorithmic aspects of interior-point methods for CO. We refer the interested reader to the existing literature, where one can find a wide variety of such methods; see, e.g., the above references. Numerical evidence for the efficiency of these methods has been provided by many authors (see Andersen et al. [1], Jarre [25], Karasan et al. [27], Mehrotra [30], Peng et al. [34], Sturm and Zhang [37]).
The general form of a conic optimization problem is as follows:

min_x { c^T x : A_i(x) ∈ K_i, i = 1, ..., m },     (7)

where the objective function is c^T x, with c, x ∈ R^n. Furthermore, each A_i represents an affine function from R^n to R^{k_i}. Each K_i denotes a convex cone in R^{k_i}; it is either a non-negative orthant (linear constraints), or a Lorentz cone (conic quadratic constraints), or a semidefinite cone (linear matrix inequalities).
The easiest and most well-known case occurs when the cone is the nonnegative orthant of R^m, i.e., when K = R^m_+. Then the above problem gets the form

min_x { c^T x : Ax − b ≥ 0 }.

This is nothing but one of the standard forms of the well-known LO problem. Thus it becomes clear that LO is a special case of CO. When the data associated with (7), i.e., the objective vector together with the data of the affine mappings A_i, is uncertain and only known to belong to some uncertainty set U, we speak of the uncertain conic problem, which has the following form:

{ min_x { c^T x : A_i(x) ∈ K_i, i = 1, ..., m } : ζ ∈ U }.     (8)

The robust counterpart of (8) is the following convex problem:

min_{t,x} { t : c^T x ≤ t, A_i(x) ∈ K_i, i = 1, ..., m, ∀ζ ∈ U }.     (10)

This is a CO problem with usually infinitely many constraints, depending on the uncertainty set U. Hence, in general, this problem may be very hard to solve. In the next section we discuss the robust linear optimization problem and show that for special choices of the uncertainty set the problem (10) is computationally tractable.
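One way to see why conic constraints arise naturally in robust optimization: when the coefficient vector of a linear constraint ranges over a Euclidean ball { a0 + ρu : ‖u‖ ≤ 1 }, the worst case of a^T x equals a0^T x + ρ‖x‖, which is a conic quadratic expression. The NumPy sketch below (hypothetical data) checks this closed form against sampling of the ball's surface:

```python
import numpy as np

rng = np.random.default_rng(1)

# For ellipsoidal uncertainty a in {a0 + rho*u : ||u|| <= 1}, the worst case of
# a^T x is a0^T x + rho*||x||, turning a robust linear constraint into a conic
# quadratic one.  We verify the closed form by sampling the unit sphere.
a0 = np.array([1.0, -2.0, 0.5])
rho = 0.3
x = np.array([2.0, 1.0, -1.0])

closed_form = a0 @ x + rho * np.linalg.norm(x)

u = rng.normal(size=(20000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # points on the unit sphere
sampled = ((a0 + rho * u) @ x).max()

print(closed_form, sampled)  # sampled max approaches the closed form from below
```

The sampled maximum can never exceed the closed form and gets arbitrarily close as the number of samples grows, because the supremum is attained at u = x/‖x‖.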

Robust Linear Optimization and Its Examples
An uncertain linear optimization problem has the following form:

{ min_x { c^T x : Ax ≥ b } : (c, A, b) ∈ U },     (11)

where U is the set of all possible realizations of the data (c, A, b), with (each) matrix A having size m × n. It is important to mention here that the data (c, A, b) is not a random variable; the set U is the representation of the uncertainty in (c, A, b). The following discussion explains this. As mentioned in the previous section, the first step in the RC methodology is to remove the uncertainty from the objective function. This implies that the robust counterpart of (11) is the following semi-infinite optimization problem:

min_{t,x} { t : c^T x ≤ t, Ax ≥ b, ∀(c, A, b) ∈ U }.     (12)

The tractability of (12) depends on the uncertainty set U. The following theorem makes clear that if the set U can be described either by linear constraints, by conic quadratic constraints, or by a semidefinite constraint, then (12) becomes computationally tractable.
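Before turning to the theorem, note the simplest tractable situation: if U is a finite set of scenarios, the semi-infinite RC collapses to an ordinary LP with one copy of the constraints per scenario. A small sketch with hypothetical two-scenario data:

```python
import numpy as np
from scipy.optimize import linprog

# When U is a finite list of scenarios (A_k, b_k), the semi-infinite RC becomes
# an ordinary LP: stack one copy of the constraints per scenario.
c = np.array([1.0, 2.0])
scenarios = [
    (np.array([[1.0, 1.0]]), np.array([1.0])),
    (np.array([[1.5, 0.8]]), np.array([1.2])),
]

A_stack = np.vstack([A for A, _ in scenarios])
b_stack = np.concatenate([b for _, b in scenarios])

# min c^T x  s.t.  A_k x >= b_k for every scenario k,  x >= 0
# (linprog uses A_ub x <= b_ub, so the ">=" rows are negated).
res = linprog(c, A_ub=-A_stack, b_ub=-b_stack, bounds=[(0, None)] * 2)
x = res.x
assert all(np.all(A @ x >= b - 1e-9) for A, b in scenarios)
print("robust x:", x, "value:", res.fun)
```

For infinite uncertainty sets one cannot enumerate constraints this way; the theorem below shows how duality replaces the enumeration.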
Because the following theorem is crucial for this paper, and since the proof in Ben-Tal and Nemirovski [9] is written with less detail, we include a more detailed proof below.

Theorem 1 (Ben-Tal and Nemirovski [9])
Assume that the uncertainty set U in (11) is given as the affine image of a bounded set Z ⊂ R^L, and that Z is given either by
1. a system of linear inequalities P z ≤ p, or
2. a system of conic quadratic inequalities ‖P_i z − p_i‖ ≤ q_i^T z − r_i, i = 1, ..., M, or
3. a linear matrix inequality S_0 + ∑_{l=1}^L z_l S_l ⪰ 0,
where the S_l are symmetric matrices and p, p_i, q_i are vectors.
In cases 2 and 3 we assume that the system of constraints defining Z is strictly feasible. Then the robust counterpart (12) of (11) is equivalent to
1. a linear optimization problem in case 1,
2. a conic quadratic problem in case 2,
3. a semidefinite problem in case 3.
In all cases, the data of the resulting robust counterpart problem are readily given by the data of (11) and the data specifying the uncertainty set. Moreover, the size of the resulting problem is polynomial in the size of the data specifying the uncertainty set.
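The mechanism behind case 1 of the theorem is LP duality: the inner maximization over a polyhedral set Z = { z : Pz ≤ p } is replaced by its dual minimization, which turns the "for all z in Z" requirement into finitely many constraints on the dual variables. The following sketch (hypothetical data, with Z a box) verifies numerically that the two optimal values coincide:

```python
import numpy as np
from scipy.optimize import linprog

# Case 1 in miniature: for Z = {z : P z <= p}, LP duality gives
#   max_{z in Z} w^T z = min { p^T y : P^T y = w, y >= 0 },
# so a semi-infinite robust constraint collapses to finitely many dual
# constraints.  The data below describes the box Z = [-1, 1]^2.
P = np.vstack([np.eye(2), -np.eye(2)])
p = np.ones(4)
w = np.array([0.7, -0.4])

primal = linprog(-w, A_ub=P, b_ub=p, bounds=[(None, None)] * 2)  # max w^T z
dual = linprog(p, A_eq=P.T, b_eq=w, bounds=[(0, None)] * 4)      # min p^T y

print(-primal.fun, dual.fun)  # both equal the l1-norm of w, i.e. 1.1
```

Over a box, the worst case of w^T z is simply the l1-norm of w, and the dual LP recovers exactly that value with a certificate y.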

Proof:
By assumption, the uncertainty set has the following form:

U = { (c, A, b) = (c^0, A^0, b^0) + ∑_{l=1}^L z_l (c^l, A^l, b^l) : z ∈ Z },     (13)

where (c^0, A^0, b^0) is a nominal data vector and Z ⊂ R^L is bounded. The feasible set of (12) is

F = { (t, x) : c^T x ≤ t, Ax ≥ b, ∀(c, A, b) ∈ U },     (14)

with U as given by (13). This implies that the pair (t, x) is robust feasible if and only if

(c^0 + ∑_l z_l c^l)^T x ≤ t  and  (A^0 + ∑_l z_l A^l) x ≥ b^0 + ∑_l z_l b^l,  ∀z ∈ Z.     (15)

Now, let C be the matrix with columns c^1, ..., c^L. Thus, the first constraint in (15) can be rewritten as

(c^0)^T x + z^T C^T x ≤ t,  ∀z ∈ Z.     (17)

Letting (a_i^l)^T denote the i-th row of the matrix A^l, for l = 0, 1, ..., L and i = 1, ..., m, the second constraint in (15) can be written as

(a_i^0)^T x + ∑_l z_l (a_i^l)^T x ≥ b_i^0 + ∑_l z_l b_i^l,  ∀z ∈ Z, i = 1, ..., m.     (18)

Let A_i denote the matrix with columns a_i^1, ..., a_i^L, and b_i the column vector with entries b_i^1, ..., b_i^L. Then the i-th inequality in (18) is equivalent to

(a_i^0)^T x − b_i^0 + z^T (A_i^T x − b_i) ≥ 0,  ∀z ∈ Z.     (19)

From (17) and (19), the pair (t, x) is a feasible solution of the RC (12) if and only if it satisfies

t − (c^0)^T x − z^T C^T x ≥ 0,  ∀z ∈ Z,     (20)

and, for i = 1, ..., m,

(a_i^0)^T x − b_i^0 + z^T (A_i^T x − b_i) ≥ 0,  ∀z ∈ Z.
This holds if and only if (t, x) is such that each of the optimization problems

min_z { t − (c^0)^T x − z^T C^T x : z ∈ Z },  min_z { (a_i^0)^T x − b_i^0 + z^T (A_i^T x − b_i) : z ∈ Z },  i = 1, ..., m,

has a nonnegative optimal value. Now, by assumption, Z is representable as

Z = { z : ∃ u : P z + Q u + r ∈ K }     (24)

for suitable P, Q and r, where K is either a nonnegative orthant, or a direct product of second-order cones, or the semidefinite cone. Consequently, the first of these problems can be written as

min_{z,u} { t − (c^0)^T x − z^T C^T x : P z + Q u + r ∈ K }.     (25)

Note that this conic problem is bounded (since Z is bounded) and K is a self-dual cone. Moreover, if the cone K is nonlinear, then the problem is strictly feasible, by the assumption for cases 2 and 3 in the theorem. Therefore, the optimal value of the problem is equal to the optimal value of its dual problem, by strong conic duality (Ben-Tal and Nemirovski [8]), and the dual problem is solvable. Note that the term t − (c^0)^T x in the objective function does not depend on z and hence can be treated as a constant. Introducing a vector of dual variables y ∈ K, the dual problem of (25) is the following conic problem:

max_y { t − (c^0)^T x − r^T y : P^T y = −C^T x, Q^T y = 0, y ∈ K }.     (26)

Since (25) and (26) have the same optimal value, we may conclude that (t, x) is robust feasible if and only if the optimal value of each such dual problem is nonnegative. At this stage we use that each dual problem is solvable, i.e., has an optimal solution. We may therefore conclude that (t, x) is robust feasible if and only if for each of the problems above there exists a vector y ∈ K for which the dual objective value is nonnegative (27). We conclude from this that the robust counterpart (12) of (11) is equivalent to the problem

min_{t,x,y^0,y^1,...,y^m} { t :  t − (c^0)^T x − r^T y^0 ≥ 0,  P^T y^0 = −C^T x,  Q^T y^0 = 0,  y^0 ∈ K,
  (a_i^0)^T x − b_i^0 − r^T y^i ≥ 0,  P^T y^i = A_i^T x − b_i,  Q^T y^i = 0,  y^i ∈ K,  i = 1, ..., m }.     (28)

This is a linear optimization problem if the cone K is linear, a second-order cone problem if K is a direct product of second-order cones, and a semidefinite problem if K is a semidefinite cone. Hence the proof is complete.
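As a numerical sanity check of the overall reduction, the following sketch (hypothetical data) forms the RC of a tiny uncertain LP with box uncertainty in the constraint matrix. Because each constraint is affine in z, the worst case over a box is attained at a vertex, so here the RC equals the LP that imposes the constraints at all vertices; for richer uncertainty sets one would instead use the dual reformulation from the proof:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Tiny uncertain LP (hypothetical data): min c^T x s.t. A(z) x >= b for all z
# in the box Z = [-1, 1]^2, where A(z) = A0 + z1*A1 + z2*A2.  Each constraint
# is affine in z, so its worst case over the box occurs at a vertex of Z.
c = np.array([1.0, 1.0])
A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
A1 = np.array([[0.1, 0.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [0.0, 0.2]])
b = np.array([1.0, 1.0])

rows = [A0 + z1 * A1 + z2 * A2 for z1, z2 in product([-1, 1], repeat=2)]
A_stack = np.vstack(rows)              # constraints at every vertex of Z
b_stack = np.tile(b, 4)

# linprog uses A_ub x <= b_ub, so the ">=" constraints are negated.
rc = linprog(c, A_ub=-A_stack, b_ub=-b_stack, bounds=[(0, None)] * 2)
x = rc.x

# The robust solution satisfies every realization in the box.
assert all(np.all(R @ x >= b - 1e-9) for R in rows)
print("robust optimal value:", rc.fun)
```

The robust optimum is determined by the two worst-case rows (z1 = -1 and z2 = -1), exactly the rows that the dual certificates of the proof would single out.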
In the next subsection, we present some new simple examples to illustrate the use of Theorem 1.

Some Examples to Illustrate the Use of Theorem 1
We start by considering the following uncertain problem: ( ) where the uncertain coefficient is a given nonnegative number. Obviously the uncertainty is only in the constraint matrix, due to this uncertain parameter. We derive the RC of this problem by applying the method outlined in the proof of Theorem 1.
In the current case we have ( ), so that ( ) and ( ). Using the notation of (13), the set is defined by ( ). In the present case the matrix is given by ( ); hence we have ( ). The matrices are given by ( ), whence it follows that ( ). Furthermore, since the right-hand side is the zero vector, ( ). To proceed we need to find a conic representation of this set. To keep things simple, we observe that the set allows a linear description as follows: ( ), where ( ). We only have to substitute all the computed entities into (28) to obtain the robust counterpart of the given problem. We do this in steps. Note that the constraints are given by (41), where ( ). The constraints for the respective values are: ( ). In other words: ( ). We observe that only one variable appears in the objective function of (27); hence the first group of constraints can be satisfied if and only if ( ).
Similarly, the second group of constraints can be satisfied if and only if ( ), and the third group if and only if ( ). Replacing the corresponding expressions accordingly, we get the following equivalent system of constraints: ( ). In the next example we derive the RC in this more direct way. The aim of the above example, however, was to demonstrate how the RC of a problem that satisfies the hypotheses of Theorem 1 can be obtained in a straightforward way by using the scheme presented in the proof of Theorem 1.

Example 2
For ( ), consider the uncertain problem:

Conclusion
From the discussion above, we may conclude that the RC methodology can be employed to obtain the robust optimal solution of an uncertain CO problem as long as the RC can be represented as a CO problem, whether linear, conic quadratic or semidefinite. From the discussed examples, it can also be concluded that the robust optimal value of the RC does not always depend continuously on the uncertainty parameter.

Figure 1. The robust optimal value function of Example 1.

Figure 2. The robust optimal value function of Example 2. As in the previous example, the `worst' value of the uncertain parameter occurs when ( ). Hence, the RC is given by (60).