Computing the recession cone of a convex upper image via convex projection

It is possible to solve unbounded convex vector optimization problems (CVOPs) in two phases: (1) computing or approximating the recession cone of the upper image and (2) solving the equivalent bounded CVOP where the ordering cone is extended based on the first phase (Wagner et al., 2023). In this paper, we consider unbounded CVOPs and propose an alternative solution methodology to compute or approximate the recession cone of the upper image. In particular, we relate the dual of the recession cone with the Lagrange dual of weighted sum scalarization problems whenever the dual problem can be written explicitly. Computing this set requires solving a convex (or polyhedral) projection problem. We show that this methodology can be applied to semidefinite, quadratic and linear vector optimization problems and provide some numerical examples.


Introduction
Multiobjective optimization, in which there are multiple conflicting objective functions to be minimized simultaneously, has been studied extensively in the literature, as application areas range from engineering to the natural sciences. Vector optimization is a generalization in which values of the objective function might not be compared in an element-wise fashion. Stated in technical terms, the order relation on the objective space is determined by an ordering cone, which may differ from the positive orthant. Most existing solution algorithms assume that the problem is bounded; for unbounded convex vector optimization problems, a two-phase approach was recently proposed in which the recession cone of the upper image is first computed or approximated. This approximation is then used to transform the problem into a bounded one so that existing algorithms can be applied in the second phase.
In this paper, we propose an alternative way of approximating the recession cone of the upper image, that is, the first phase given in [30]. In [30], this is done by solving Pascoletti-Serafini scalarizations (see [24]) and updating an approximation of the desired recession cone in an iterative manner. In this paper, instead of computing an approximation of the recession cone itself as in [30], we consider the dual cone of it. We use a characterization of the dual cone from [29] given in terms of the well-known weighted sum scalarizations. We observe that for some classes of CVOPs, it is possible to write the dual of the recession cone explicitly. Then, computing this set reduces to solving a bounded convex projection problem [18]. For the special case of LVOPs, it is possible to compute the recession cone exactly by solving a bounded polyhedral projection problem [21]. Moreover, in this case, it is possible to reduce the dimension of the projection problem by one.
When the dimension of the objective space is two, the procedure simplifies further and reduces to solving two convex (or linear, if the corresponding problem is linear) scalar optimization problems. Compared to applying the algorithm from [30] or solving a two-dimensional LVOP as in [19], solving two convex or linear programs is simpler and more efficient.
The structure of the paper is as follows. In Section 2 we provide notation and preliminaries. In Section 3 we introduce the convex vector optimization problem and the relevant solution concepts. Section 4 introduces a method for approximating the recession cone of the upper image based on its connection to the set of weights for which the weighted sum scalarization is bounded. In Section 5 we discuss particular problem classes for which this method yields representation in the form of a convex projection problem. Section 6 provides examples.

Preliminaries
Let q ∈ N and R^q be the q-dimensional Euclidean space. Throughout the paper, we primarily use the ℓ2 norm ‖y‖ := ‖y‖_2 = (Σ_{i=1}^q |y_i|²)^{1/2} on R^q. We will shortly remark on results under the ℓp norm ‖y‖_p = (Σ_{i=1}^q |y_i|^p)^{1/p} for p ∈ [1, ∞) and the ℓ∞ norm ‖y‖_∞ = max_{i∈{1,...,q}} |y_i|. The (closed ℓ2) ball centered at a point c ∈ R^q with radius r > 0 is denoted by B(c, r) := {y ∈ R^q | ‖y − c‖ ≤ r}.
The interior, closure, boundary, and convex hull of a set A ⊆ R^q are denoted by int A, cl A, bd A, and conv A, respectively. The (convex) conic hull of A,

cone A := {Σ_{i=1}^n λ_i a_i | n ∈ N, λ_1, . . . , λ_n ≥ 0, a_1, . . . , a_n ∈ A},

is the set of all conic combinations of points from A. The recession cone of a set A is A_∞ := {y ∈ R^q | ∀a ∈ A, ∀t ≥ 0 : a + ty ∈ A}. For two sets A, B ⊆ R^q, their sum is understood as their Minkowski sum A + B := {a + b | a ∈ A, b ∈ B} and their distance is measured via the Hausdorff distance

d_H(A, B) := max { sup_{a∈A} inf_{b∈B} ‖a − b‖, sup_{b∈B} inf_{a∈A} ‖a − b‖ }.

If a different norm is considered, the Hausdorff distance can be defined analogously. We call a set a polyhedron if it is the convex hull of finitely many points and directions. A polyhedron can also be represented as a finite intersection of halfspaces.
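For finite point sets, the Hausdorff distance above can be evaluated directly from the matrix of pairwise distances. A minimal numpy sketch (the function name and example points are ours, not from the paper):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance (l2 norm) between two finite point sets in R^q,
    given as arrays of shape (num_points, q)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    # pairwise distance matrix: entry (i, j) = ||A[i] - B[j]||
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # max over the two directed distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

print(hausdorff([[0.0, 0.0], [1.0, 0.0]], [[0.0, 0.0]]))  # -> 1.0
```

For infinite (e.g. polyhedral) sets the suprema and infima must of course be taken over the whole sets; the finite version is what one uses when comparing finite approximations.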
A set C ⊆ R^q is a cone if λc ∈ C for all c ∈ C and all λ > 0. It is pointed if it does not contain any line through the origin. The dual cone of C is C^+ := {w ∈ R^q | ∀c ∈ C : w^T c ≥ 0}. A cone C ⊆ R^q generates an order on R^q given through x ≤_C y ⟺ y ∈ {x} + C for x, y ∈ R^q. If C is a nontrivial, pointed, convex ordering cone, then ≤_C is a partial order. A function f : R^n → R^q is C-convex if for all x, y ∈ R^n and all λ ∈ [0, 1] it holds f(λx + (1 − λ)y) ≤_C λf(x) + (1 − λ)f(y). Convex projection is a problem of the form

compute Y = {y ∈ R^m | ∃x ∈ R^n : (x, y) ∈ S},  (1)

where S ⊆ R^n × R^m is a convex feasible set. If the feasible set S is a polyhedron, then the problem is a polyhedral projection. Under solving a projection problem, we understand computing the set Y (if polyhedral) or an approximation of it (otherwise) in the sense of finding a representation as in (1). More details on polyhedral projections can be found in [21], and on convex projections in [18,28].

Problem description
In this section, we introduce a convex vector optimization problem (CVOP) and its upper image. The main object of interest in this work is the recession cone of the upper image. Its importance can be seen by the role it plays in the boundedness properties of CVOPs and the appropriate solution concepts.
A convex vector optimization problem is

minimize f(x) with respect to ≤_C subject to h(x) ≤ 0,  (P)

where C ⊆ R^q is a nontrivial, pointed, convex ordering cone with a non-empty interior and h : R^n → R^m and f : R^n → R^q are continuous functions that are R^m_+- and C-convex, respectively. We denote the feasible region by X := {x ∈ R^n | h(x) ≤ 0} and its image by f(X) := {f(x) ∈ R^q | x ∈ X}. The upper image of (P) is defined as P := cl (f(X) + C). Here we are particularly interested in the recession cone of the upper image, that is, P_∞. We encounter it within the boundedness notions for CVOPs recalled below.
Definition 3.1 (see [29]). The problem (P) is called bounded if P ⊆ {y} + cl C for some y ∈ R^q; it is called self-bounded if P ≠ R^q and P ⊆ {y} + P_∞ for some y ∈ R^q.

Note that for a bounded problem, it holds P_∞ = cl C, see [18, Lemma 2.2]. A bounded problem is, in particular, always self-bounded. However, an unbounded problem can be self-bounded or not. An illustration is provided in Figure 1. We refer readers to [29] for more examples and a more in-depth discussion. An appropriate solution concept for a CVOP depends on whether the problem is bounded or not. Solution concepts for bounded CVOPs are proposed in [1,8,20] and for self-bounded problems in [29]. According to these, a solution consists of finitely many minimal elements on the boundary of the upper image P which generate both an inner and an outer approximation of it. The self-bounded case, however, contains challenges: In general, it is difficult to check if a CVOP is self-bounded. Moreover, the solution concept of [29] includes the recession cone P_∞. However, computing P_∞ exactly may not be possible if it is not polyhedral.
Recently, a generalized solution concept was proposed in [30], which includes an approximation of the recession cone P ∞ of the upper image. This solution concept is tailored for unbounded problems, but it is applicable for a CVOP regardless of whether it is (self-) bounded or not. Similarly to the above, it also yields polyhedral approximations of the upper image. We will provide this solution concept below explicitly as it illustrates the importance of approximating the recession cone P ∞ .
First, we define approximations of a convex cone. Interested readers can also compare this to the definition in [7] for convex sets.

Definition 3.2. Let K ⊆ R^q be a nontrivial closed convex cone and let δ > 0. A set Z ⊆ R^q is a δ-inner approximation of K if cone Z ⊆ K and d_H(K ∩ B(0, 1), cone Z ∩ B(0, 1)) ≤ δ; it is a δ-outer approximation of K if K ⊆ cone Z and d_H(K ∩ B(0, 1), cone Z ∩ B(0, 1)) ≤ δ. The approximation is finite if the set Z consists of finitely many elements.
Definition 3.2 differs slightly from [30, Definition 3.3]: Here the ℓ2 norm (and the corresponding Hausdorff distance) is used, while [30] applied the ℓ1 norm to measure distance. The ℓ1 norm was chosen in [30] primarily for algorithmic reasons. Here we opt for the ℓ2 norm for pragmatic reasons: Since we will work with dual cones, the ℓ2 norm has the advantage of being self-dual. Alternatively, we could work with the pair of ℓ1- and ℓ∞ norms, but this would create a cumbersome terminology. When the choice of the norm(s) impacts the results of the paper, we provide corresponding remarks.

Now we can define a solution of a CVOP, where c ∈ int C with ‖c‖ = 1 is a fixed element. First, recall that a point x̄ ∈ X is called a minimizer for (P) if there is no x ∈ X with f(x̄) − f(x) ∈ C \ {0}, and a weak minimizer for (P) if there is no x ∈ X with f(x̄) − f(x) ∈ int C.

Definition 3.3. A pair (X̄, Y) is a (weak) (ε, δ)-solution of (P) if X̄ ≠ ∅ is a set of (weak) minimizers for (P), Y is a δ-outer approximation of P_∞ and it holds P ⊆ conv f(X̄) + cone Y − εc. A (weak) (ε, δ)-solution (X̄, Y) of (P) is a finite (weak) (ε, δ)-solution of (P) if the sets X̄, Y consist of finitely many elements.
An approach to compute a solution of a CVOP in the sense of Definition 3.3 is provided in [30]. It was shown that once an outer approximation of P ∞ is available, the algorithms for bounded CVOPs can be used to find a solution in the sense of Definition 3.3. This is done by transforming the (unbounded) CVOP into a bounded one by replacing the ordering cone with the outer approximation of P ∞ . [30] also contains an algorithm for computing a finite δ-outer approximation of P ∞ .
In this paper, we provide an alternative approach to compute a polyhedral approximation of P ∞ . We consider some special classes of CVOPs for which we can compute a finite δ-outer approximation Y of P ∞ by solving a particular convex projection problem. For example, we will see that if we consider linear vector optimization problems, then we can compute the exact P ∞ by solving a polyhedral projection problem.
Approximating P_∞ via P_∞^+

Let us propose an approach to compute an approximation of P_∞ by approximating its dual P_∞^+. It is based on the known close connection between the dual of the recession cone P_∞^+ and the set of weights for which the weighted sum scalarization of the CVOP is a bounded problem. The boundedness of a scalar (weighted sum scalarization) problem can be verified through the feasibility of its dual problem, assuming strong duality. Expressing the cone P_∞^+ through a set of weights for which the dual problem is feasible can be interpreted through the lens of a projection problem. This interpretation should become clearer for the particular special cases considered in Section 5. Solving this projection problem provides an inner approximation of P_∞^+. We show that an inner approximation of P_∞^+ yields an outer approximation of P_∞ with an appropriate tolerance.
Let us start by recalling the weighted sum scalarization of (P), which for a weight w ∈ R^q is given by

minimize w^T f(x) subject to h(x) ≤ 0.  (P_w)

It is well known that if w ∈ C^+ \ {0}, then an optimal solution of (P_w) is a weak minimizer of (P), see [16, Theorem 5.28]. On the other hand, for a weak minimizer x̄ ∈ R^n, there exists w ∈ C^+ such that x̄ is an optimal solution to (P_w), see [16, Theorem 5.13]. This shows us that for CVOPs one is interested in solving (P_w) for w ∈ C^+. However, the weighted sum scalarization problem may be unbounded for some w ∈ C^+ if (P) is not bounded. The set of weights for which the weighted sum scalarization is bounded, denoted by

W := {w ∈ C^+ | inf_{x∈X} w^T f(x) > −∞},

will play an important role. The following proposition gives a relationship between the dual cone of P_∞ and W.

Proposition 4.1 (see [29]). It holds P_∞^+ = cl W. Moreover, if W = P_∞^+, then (P) is self-bounded.
Recall that f is a C-convex function. Then, for all w ∈ C^+, the function w^T f : R^n → R is convex and hence (P_w) is a convex optimization problem. The Lagrangian L : R^n × R^m → R for (P_w) is given by L(x, u) := w^T f(x) + u^T h(x) and the dual problem is

maximize g(u) subject to u ≥ 0,  (D_w)

where the dual objective function g : R^m → R ∪ {−∞} is given by g(u) := inf_{x∈R^n} L(x, u). We denote the optimal values of (P_w) and (D_w) by p_w and d_w, respectively. We know that weak duality holds between the primal and dual problems (P_w) and (D_w), that is, d_w ≤ p_w. Moreover, we say that strong duality holds if the values of the primal and dual problems are the same, that is, p_w = d_w. From now on, we assume the following.
Assumption 4.2. The problem (P) is feasible and it satisfies a constraint qualification such that the strong duality holds for the pair of (P w ) and (D w ) for any w ∈ C + .
This assumption is satisfied, for example, if the problem has only affine constraints, or if the (generalized) Slater's condition holds, that is, there exists x̄ ∈ ri X such that h(x̄) < 0. Strong duality gives us the following result.
Theorem 4.3. Under Assumption 4.2, it holds W = {w ∈ C^+ | (D_w) is feasible}.

Proof. Since (P) is feasible, (P_w) is also a feasible problem for any w ∈ C^+. Then, p_w < ∞ holds and weak duality implies that the dual problem (D_w) is not unbounded. On the other hand, strong duality implies that (P_w) is bounded if and only if (D_w) is feasible.
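The boundedness-via-dual-feasibility argument can be checked numerically on a scalar toy problem (the data below are entirely our own illustration, not from the paper): for a convex program satisfying Slater's condition, the primal value p_w and the Lagrange dual value d_w coincide.

```python
from scipy.optimize import minimize_scalar

# Toy scalar problem (assumed data): minimize x^2 subject to 1 - x <= 0.
# Lagrangian: L(x, u) = x^2 + u*(1 - x); dual function:
# g(u) = inf_x L(x, u) = u - u^2/4 (inner minimizer x = u/2).
p_w = minimize_scalar(lambda x: x**2, bounds=(1.0, 10.0), method="bounded").fun

g = lambda u: u - u**2 / 4.0
# maximize g over u >= 0 by minimizing -g on a bounded interval
d_w = -minimize_scalar(lambda u: -g(u), bounds=(0.0, 10.0), method="bounded").fun

print(p_w, d_w)  # both approximately 1.0: strong duality holds (Slater point x = 2)
```

Since the dual is feasible (e.g. u = 2 gives g(2) = 1 > −∞), the scalarized problem is bounded, in line with the theorem above.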
For some classes of convex optimization problems, it is possible to write the constraints of the dual problem (D w ) explicitly. In the following section, we will consider these classes, for which we will express P + ∞ explicitly. This will provide a way to compute P + ∞ or an approximation of it.
Recall that the initial aim was to compute P ∞ or its outer approximation. If P + ∞ is determined by finitely many generators, it is easy to compute (the finitely many generators of) P ∞ . What if its (inner) approximation is available instead? Will a dual cone of an approximation of P + ∞ be an approximation of P ∞ ? The following proposition provides an answer.
Proposition 4.4. Let Z ⊆ R q be a finite δ-inner approximation of P + ∞ and Y be a finite set of generating vectors of (cone Z) + , that is, (cone Z) + = cone Y. Then, Y is a finite δ-outer approximation of P ∞ .
Proof. From cone Z ⊆ P_∞^+ it follows that cone Y = (cone Z)^+ ⊇ (P_∞^+)^+ ⊇ P_∞, so Y is an outer approximation and it remains to bound the Hausdorff distance. Assume to the contrary that there exists y ∈ cone Y ∩ B(0, 1) with ({y} + B(0, δ)) ∩ (P_∞ ∩ B(0, 1)) = ∅. First, one shows that then also ({y} + B(0, δ)) ∩ P_∞ = ∅. Second, we use ({y} + B(0, δ)) ∩ P_∞ = ∅ to show that the initial assumption cannot hold. By separation arguments, there exists w ∈ R^q \ {0} such that w^T (y − b) < w^T p for all b ∈ B(0, δ), p ∈ P_∞. In particular, w ∈ P_∞^+ and w^T (y − b) < 0 for all b ∈ B(0, δ). Without loss of generality, we may assume ‖w‖ = 1. The choice of b̄ = −δw shows that it holds w^T y < w^T b̄ = −δ. On the other hand, since w ∈ P_∞^+ ∩ B(0, 1), there exists z ∈ cone Z ∩ B(0, 1) such that ‖w − z‖ ≤ δ. Since z ∈ cone Z, y ∈ cone Y, we have y^T z ≥ 0. Then, using the Cauchy-Schwarz inequality, we obtain w^T y = (w − z)^T y + z^T y ≥ −‖w − z‖ ‖y‖ ≥ −δ, which is a contradiction.
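Proposition 4.4 turns a finite inner approximation Z of P_∞^+ into an outer approximation of P_∞ by dualization. In the plane this dualization step can be made very concrete: the dual cone of a pointed, finitely generated cone in R² is obtained by rotating its two extreme rays by ∓90 degrees. A small sketch (the function name and data are our own illustration; the generators are assumed to lie in a common branch of atan2):

```python
import math
import numpy as np

def dual_cone_generators_2d(Z):
    """Generators of (cone Z)^+ for a finite Z in R^2 whose conic hull is
    pointed (angular spread < pi).  The extreme rays of cone Z are the
    directions of minimal / maximal angle; rotating them by +90 / -90
    degrees gives the two generators of the dual cone."""
    angles = [math.atan2(z[1], z[0]) for z in Z]
    tmin, tmax = min(angles), max(angles)
    assert tmax - tmin < math.pi, "cone Z must be pointed"
    ray = lambda t: np.array([math.cos(t), math.sin(t)])
    return ray(tmax - math.pi / 2), ray(tmin + math.pi / 2)

# Hypothetical inner approximation Z of some dual cone P_infty^+:
# cone Z = {w | 0 <= w1 <= w2}; its dual is spanned by (1,0) and (-1,1).
y1, y2 = dual_cone_generators_2d([(0.0, 1.0), (1.0, 1.0)])
```

In higher dimensions one would instead pass the generators to a polyhedral-duality routine (vertex/facet enumeration), but the q = 2 case already illustrates how Y is obtained from Z.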
Let us now address the issue of the norm used. The above proposition holds for the (self-dual) ℓ 2 norm. Do we get a similar result for other (dual pairs of) norms? For computational purposes, the pair of ℓ 1 and ℓ ∞ norms with polyhedral unit balls are particularly important. The following remark shows that, in the general case, the tolerance is increased, but by less than a factor of two.
Remark 4.5. Let p, r ∈ [1, ∞] satisfy 1/p + 1/r = 1 and consider the dual pair of ℓp and ℓr norms alongside an appropriately adapted Definition 3.2 of approximation of a cone. The following can be shown: If Z ⊆ R^q is a finite δ-inner approximation of P_∞^+ in ℓp and Y is a finite set of generating vectors of (cone Z)^+, then Y is a finite 2δ/(1+δ)-outer approximation of P_∞ in ℓr.
We sketch the proof of this claim. Let B_p and B_r denote the closed balls with respect to the ℓp and ℓr norms. Let y ∈ cone Y ∩ B_r(0, 1). First, we prove by contradiction that ({y} + B_r(0, 2δ/(1+δ))) ∩ (P_∞ ∩ B_r(0, 1)) = ∅ implies ({y} + B_r(0, δ)) ∩ P_∞ = ∅: Assume that y + b ∈ P_∞ \ B_r(0, 1) for some b ∈ B_r(0, δ) and consider the convex optimization problem

minimize ‖y − λ(y + b)‖_r subject to λ ≥ 0, λ ‖y + b‖_r ≤ 1.

Since a coercive function attains a minimum over a closed set, there exists an optimal solution λ*, which satisfies λ*(y + b) ∈ P_∞ ∩ B_r(0, 1). Comparing λ* with the feasible choice λ = 1/‖y + b‖_r ∈ [1/(1+δ), 1) yields ‖y − λ*(y + b)‖_r ≤ (1 − λ)‖y‖_r + λ‖b‖_r ≤ 1 − λ(1 − δ) ≤ 2δ/(1+δ), a contradiction. Second, we show that ({y} + B_r(0, δ)) ∩ P_∞ = ∅ leads to a contradiction: By a separation argument, there exists w ∈ P_∞^+ \ {0} such that w^T (y + b) < 0 for all b ∈ B_r(0, δ). Since we can without loss of generality assume ‖w‖_p = 1, this gives w^T y < −δ, and the argument concludes as in the proof of Proposition 4.4, with Hölder's inequality in place of Cauchy-Schwarz.

Remark 4.6. In [31], a slightly different Hausdorff distance for closed convex cones K_1, K_2 ⊆ R^q is defined, parametrized by p ∈ [1, ∞]. If this Hausdorff distance is used to define δ-inner and -outer approximations of cones in Definition 3.2, then Proposition 4.4 holds, that is, the approximation tolerance is preserved, for any dual pair of norms by [31, Theorem 1]. However, this result cannot be applied to our case since the two distance measures do not coincide in general.

Proposition 4.4 suggests that by computing a δ-inner approximation of P_∞^+, we can generate a δ-outer approximation of P_∞, which can be used to compute a finite (ε, δ)-solution to problem (P). Note that it is sufficient to consider the set W since this set corresponds, up to the closure, to the cone P_∞^+ of interest by Proposition 4.1. What about the closure? The set W can be computed exactly if it is polyhedral (hence closed). Otherwise, it needs to be approximated, in which case an approximation of W is also an approximation of its closure.
From a practical point of view, instead of computing or approximating the cone W, we will compute or approximate the bounded convex set

W_c := {w ∈ W | c^T w ≤ 1}  (4)

for a fixed c ∈ int C with ‖c‖ = 1. The next proposition shows that the set W can be approximated through an approximation of W_c.
Proposition 4.7. Let W_c be as defined in (4) for some c ∈ int C with ‖c‖ = 1 and let δ ∈ (0, 1) be a tolerance. Assume that W̄ is a finite δ-inner approximation of W_c in the sense that it holds W̄ ⊆ W_c and d_H(W_c, conv W̄) ≤ δ.
Then,W is also a finite δ-inner approximation of the cone W .
Proof. Consider an element w ∈ W ∩ B(0, 1). Note that the Cauchy-Schwarz inequality implies W ∩ B(0, 1) ⊆ W_c; therefore, there exists w̄ ∈ conv W̄ with ‖w − w̄‖ ≤ δ. Our proof would be finished if ‖w̄‖ ≤ 1. We proceed with the case ‖w̄‖ > 1, where we show that the orthogonal projection (w^T w̄ / w̄^T w̄) w̄ provides the desired bound: Firstly, since w^T w̄ = ½ (‖w‖² + ‖w̄‖² − ‖w − w̄‖²) ≥ ½ (1 − δ²) > 0, the projection is a nonnegative multiple of w̄ and hence an element of cone W̄. Secondly, the Cauchy-Schwarz inequality gives ‖(w^T w̄ / w̄^T w̄) w̄‖ = |w^T w̄| / ‖w̄‖ ≤ ‖w‖ ≤ 1. And thirdly, for w^T w̄ / w̄^T w̄ = arg min_{α∈R} ‖w − α w̄‖ it holds ‖w − (w^T w̄ / w̄^T w̄) w̄‖ ≤ ‖w − w̄‖ ≤ δ, which proves the claim.
In light of Remark 4.5, let us again address different norms in the context of Proposition 4.7. Keep in mind that the ℓ 1 and ℓ ∞ norms are relevant for computational purposes.
Remark 4.8. Let p, r ∈ [1, ∞] satisfy 1/p + 1/r = 1 and use c ∈ int C with ‖c‖_r = 1 to define the set W_c. The following can be shown: If W̄ is a finite δ-inner approximation of W_c in ℓp, then W̄ is a finite 2δ/(1+δ)-inner approximation of the cone W in ℓp. We sketch the proof again. Let w ∈ W ∩ B_p(0, 1). Since Hölder's inequality implies w ∈ W_c, there exists w̄ ∈ conv W̄ satisfying ‖w − w̄‖_p ≤ δ. Consider the convex optimization problem min_{α≥0} ‖w − α w̄‖_p.
Since a coercive function attains its minimum over a closed set, there exists an optimal solution α* satisfying ‖w − α* w̄‖_p ≤ ‖w − w̄‖_p ≤ δ and ‖α* w̄‖_p ≤ ‖w‖_p + ‖α* w̄ − w‖_p ≤ 1 + δ. The claim follows from (1/(1+δ)) α* w̄ ∈ cone W̄ ∩ B_p(0, 1) satisfying

‖w − (1/(1+δ)) α* w̄‖_p = (1/(1+δ)) ‖(1+δ) w − α* w̄‖_p ≤ (1/(1+δ)) (‖w − α* w̄‖_p + δ ‖w‖_p) ≤ 2δ/(1+δ).

A cone is determined by its base, such as W ∩ {w ∈ R^q | w^T c = 1}. However, a full-dimensional set W_c is preferable for computational purposes. Alternatively, one could aim to replace the base with a (q − 1)-dimensional set generating it. Assume without loss of generality that c_q ≠ 0. Let c_{−q} ∈ R^{q−1} denote the first q − 1 components of c and let w : R^{q−1} → R^q be defined as

w(λ) := (λ_1, . . . , λ_{q−1}, (1 − c_{−q}^T λ) / c_q)^T,

so that c^T w(λ) = 1 holds for all λ ∈ R^{q−1}. Then for the bounded set

Λ := {λ ∈ R^{q−1} | w(λ) ∈ W},  (6)

we have W = cone {w(λ) ∈ R^q | λ ∈ Λ} by construction.
In particular, for the q = 2 case, the set Λ is a bounded interval and it suffices to solve two scalar problems to compute the bounds λ_min := inf {λ ∈ R | w(λ) ∈ W} and λ_max := sup {λ ∈ R | w(λ) ∈ W}. The drawback of considering the (q − 1)-dimensional set Λ arises if the set cannot be computed exactly, but has to be approximated: The approximation error for the set Λ is not preserved for the cone W, and the bound on the tolerance depends on the particular choice of the vector c. Nevertheless, we consider the approach through the set Λ useful in at least two cases: (1) If the set Λ can be computed exactly. (2) In the q = 2 case, when the interval Λ is approximated through two scalar problems, since solvers for scalar problems can in practice usually handle significantly smaller tolerances than algorithms for multiobjective problems or for projection problems.
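The parametrization w(·) and the normalization c^T w(λ) = 1 can be sketched as follows (a minimal illustration; the helper name is ours, and c = (1, 1, 1)^T matches the choice used later in the numerical examples):

```python
import numpy as np

def make_w(c):
    """Return the map w(lam) = (lam, (1 - c_{-q}^T lam) / c_q) from R^{q-1}
    to R^q, which satisfies c^T w(lam) = 1 for every lam (requires c_q != 0)."""
    c = np.asarray(c, dtype=float)
    assert c[-1] != 0.0, "last component of c must be nonzero"
    def w(lam):
        lam = np.asarray(lam, dtype=float)
        return np.append(lam, (1.0 - c[:-1] @ lam) / c[-1])
    return w

w = make_w([1.0, 1.0, 1.0])
wl = w([0.2, 0.3])  # -> [0.2, 0.3, 0.5], and c^T w(lam) = 1
```

The cone W is then recovered from Λ as cone {w(λ) | λ ∈ Λ}, with the caveat on error propagation discussed above.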

Computations for special cases
The solution approach presented in Section 4 is applicable for problems where Assumption 4.2 holds and the set W = {w ∈ C^+ | (D_w) is feasible} can be expressed explicitly through the constraints of the dual problem. In this section, we discuss three cases for which we can write the dual problem, hence the set W, explicitly.
We start with a relatively wide class of semidefinite problems, which is well-studied for the single objective case and has many application areas, see for instance the review paper [3]. There are also some studies that consider the class of semidefinite vector optimization problems in the literature, see [11,12,32]. Similar to the single objective case, the arguments of [3] can be straightforwardly extended to show that linear vector optimization and quadratic convex vector optimization problems with polyhedral ordering cones are special cases of semidefinite vector problems. Nevertheless, we also address linear and quadratic problems individually and provide further observations.
For the problems we consider below, the set of weights W, and consequently also the sets W_c of (4) and Λ of (6), have the form of a convex (or polyhedral) projection. Methods for solving convex (or polyhedral) projections can, therefore, be used to approximate (or compute) the set W_c (or the set Λ). In the light of Proposition 4.7, we obtain an approximation of P_∞^+. Finally, a dual cone of this approximation provides the desired approximation of the recession cone P_∞ of the upper image, as Proposition 4.4 shows.
An outer approximation of the recession cone is needed to solve a CVOP in the sense of Definition 3.3. The method proposed in this paper can be used to replace the first phase of the algorithm proposed in [30]. Keep in mind that if the problem is self-bounded, then the recession cone itself can also be used to solve the problem. If this is not the case, however, an outer approximation of it is needed even if it is possible to compute P ∞ exactly. In the light of Proposition 4.1, unless the set W is known to be closed, we need to look for its inner approximation.

Semidefinite problems
The first class of problems we consider are the semidefinite problems. In the following, S^k denotes the set of symmetric k × k matrices and S^k_+ denotes the set of symmetric, positive semidefinite k × k matrices. Consider a semidefinite vector program in inequality form,

minimize P^T x with respect to ≤_C subject to x_1 F_1 + · · · + x_n F_n + G ⪯ 0,  (SDVP)

for some P ∈ R^{n×q}, F_1, . . . , F_n, G ∈ S^k, k ≥ 2. The weighted sum scalarization for a weight w ∈ C^+ is the scalar semidefinite program

minimize w^T P^T x subject to x_1 F_1 + · · · + x_n F_n + G ⪯ 0,

and its Lagrange dual is

maximize tr(GZ) subject to tr(F_i Z) + e_i^T P w = 0, i ∈ {1, . . . , n}, Z ⪰ 0.
We refer the reader interested in the derivation of the dual problem to [4, Example 5.11].
Assumption 4.2 on constraint qualification is satisfied if there exists x ∈ R^n such that x_1 F_1 + · · · + x_n F_n + G ≺ 0, consider [4, Equation 5.27]. Then strong duality yields the set W in the convex projection form

W = {w ∈ C^+ | ∃Z ⪰ 0 : tr(F_i Z) + e_i^T P w = 0, i = 1, . . . , n},

which can be computed by the method presented in [18]. In the following subsections, we consider two special cases of semidefinite problems for which further simplifications and/or observations can be made.
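As a sanity check of this description of W, consider the special situation where all F_i are diagonal: then only diag(Z) enters the constraints, Z can be chosen diagonal, and positive semidefiniteness reduces to nonnegativity of the diagonal, so membership w ∈ W becomes a linear feasibility problem. A hedged sketch with made-up data (the matrices F_i, P and the helper name are ours, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical diagonal data: with diagonal F_i, the condition
# "exists PSD Z with tr(F_i Z) + e_i^T P w = 0" only involves z = diag(Z),
# and PSD reduces to z >= 0.
F = [np.diag([1.0, 1.0]), np.diag([1.0, 2.0])]   # F_1, F_2 (assumed data)
P = np.array([[-1.0, 0.0], [0.0, -1.0]])         # objective matrix (assumed)

def in_W(w):
    """Membership test w in W via a feasibility LP:
    find z >= 0 with diag(F_i) @ z = -e_i^T P w for every i."""
    A_eq = np.vstack([np.diag(Fi) for Fi in F])   # row i = diag(F_i)
    b_eq = -(P @ np.asarray(w, dtype=float))      # right-hand sides -e_i^T P w
    res = linprog(np.zeros(A_eq.shape[1]), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, None)] * A_eq.shape[1])
    return res.status == 0  # status 0 = feasible solution found

print(in_W([1.0, 1.5]), in_W([1.0, 3.0]))  # for this data: True, False
```

For general (non-diagonal) F_i one needs a genuine semidefinite feasibility check, but the projection structure of W is the same.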

Linear problems
The class of linear vector optimization problems is the most studied vector optimization problem class; many solution approaches are available and some of them are already mentioned in Section 1. Many existing methods are designed to solve bounded problems. Possibly unbounded LVOPs are considered for instance in [19,26]. It has been shown that the recession cone can be computed via the homogeneous problem, which is again a q-dimensional LVOP, see [19,Section 4.6]. The parametric simplex method from [26] is a decision space algorithm and provides the recession directions of the upper image at the final stage of the algorithm together with its vertices.
In this section, we show that to compute the recession directions of an LVOP, it is sufficient to solve a polyhedral projection problem where the dimension of the problem can be decreased from q to q−1. Note that the proposed method is an alternative to computing the recession cone via the homogeneous problem from [19] as both methods form the first phase of the solution approach considered in this paper.
Given matrices P ∈ R n×q , A ∈ R m×n , a vector b ∈ R m , and a polyhedral ordering cone C, we consider the linear vector optimization problem minimize P T x with respect to ≤ C subject to Ax ≤ b.
For a weight vector w ∈ C + , the Lagrange dual (D w ) of the weighted sum scalarization problem (P w ) is given by maximize − b T y subject to A T y = −P w, y ≥ 0.
Applying Proposition 4.1 and Theorem 4.3, we obtain

P_∞^+ = W = {w ∈ C^+ | ∃y ≥ 0 : A^T y = −P w}.  (7)

The problem of computing the set (7) is a polyhedral projection problem. Closure is not needed on the right-hand side of (7) since the set is a polyhedron. The polyhedral dual cone P_∞^+ can be computed exactly, rather than approximated, which is appropriate since the linear problem is self-bounded per Proposition 4.1.
As we suggested in Section 4, instead of computing the cone (7) in R^q, we can compute the (q − 1)-dimensional set

Λ = {λ ∈ R^{q−1} | w(λ) ∈ C^+, ∃y ≥ 0 : −P w(λ) = A^T y},

which also corresponds to solving a polyhedral projection problem. Moreover, we know that Λ is a closed interval if q = 2. In this case, to obtain the bounds of this interval, it suffices to solve the following two scalar linear problems:

minimize / maximize λ subject to w(λ) ∈ C^+, −P w(λ) = A^T y, λ ∈ R, y ≥ 0.
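For a concrete (entirely hypothetical) instance, the two scalar problems above can be handed to any LP solver. The data below are our own illustration, not from the paper: C = R²_+, objective f(x) = (x_1 − x_2, x_2) over the feasible set x ≥ 0, for which one can check by hand that Λ = [0, 1/2]:

```python
import numpy as np
from scipy.optimize import linprog

# Assumed data: f(x) = P^T x with P^T = [[1, -1], [0, 1]], constraints Ax <= b.
P = np.array([[1.0, 0.0], [-1.0, 1.0]])   # so f(x) = (x1 - x2, x2)
A = np.array([[-1.0, 0.0], [0.0, -1.0]])  # Ax <= b encodes x >= 0
b = np.zeros(2)
m = A.shape[0]

# With c = (1, 1), parametrize w(lambda) = (lambda, 1 - lambda).
# Dual feasibility (7): exists y >= 0 with A^T y = -P w(lambda), i.e.
# lambda * (P e1 - P e2) + A^T y = -P e2, linear in z = (lambda, y).
A_eq = np.hstack([P[:, [0]] - P[:, [1]], A.T])
b_eq = -P[:, 1]
bounds = [(0.0, 1.0)] + [(0.0, None)] * m  # w(lambda) in C^+ and y >= 0

cost = np.zeros(1 + m); cost[0] = 1.0
lam_min = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds).x[0]
lam_max = -linprog(-cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun

print(lam_min, lam_max)  # interval Lambda = [0, 0.5] for this data
```

From Λ one recovers W = cone {w(λ) | λ ∈ Λ} and then P_∞ as its dual cone; for this instance P_∞ is generated by (1, 0) and (−1, 1).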

Convex quadratic problems
The last special case that we consider is the class of convex quadratic problems, which is a well-established area of mathematical programming in the scalar case. There are also recent papers that consider this class of problems in the multiobjective setting in different contexts, see [5,10,17,23]. In this section, we show that if the convex quadratic vector optimization problem contains at least one quadratic constraint, then the problem is bounded. Moreover, below we identify several conditions under which it holds P_∞^+ = C^+. We consider the following convex quadratic vector optimization problem

minimize f(x) with respect to ≤_C subject to x^T Q_j x + c_j^T x + r_j ≤ 0, j ∈ {1, . . . , p}, Ax ≤ b,  (QVP)

where Q_j ∈ S^n_+ \ {0}, c_j ∈ R^n, r_j ∈ R for j ∈ {1, . . . , p}, A ∈ R^{m×n}, b ∈ R^m, and the C-convex objective function f = (f_1, . . . , f_q)^T : R^n → R^q is given by f_i(x) = x^T P_i x + d_i^T x with P_i ∈ S^n, d_i ∈ R^n for i = 1, . . . , q. Note that f is C-convex if and only if w^T f is convex for all w ∈ C^+, or equivalently Σ_{i=1}^q w_i P_i ⪰ 0 for all w ∈ C^+. In particular, for C ⊇ R^q_+, convexity of each objective f_1, . . . , f_q implies C-convexity of f. For C = R^q_+, the converse also holds. Now let us look at what we can learn about the problem. The weighted sum scalarization for a weight vector w ∈ C^+ is

minimize w^T f(x) subject to x^T Q_j x + c_j^T x + r_j ≤ 0, j ∈ {1, . . . , p}, Ax ≤ b.

Keeping Theorem 4.3 in mind, we are interested in the weights w for which the dual problem is feasible. Given the infimum term in the dual function, we have feasibility in two cases: if the quadratic expression in x is convex, or if the quadratic expression in x is constant. This yields the following form of the set W,

W = {w ∈ C^+ | ∃ν ≥ 0 : 0 ≠ Σ_{i=1}^q w_i P_i + Σ_{j=1}^p ν_j Q_j ⪰ 0} ∪ {w ∈ C^+ | ∃ν ≥ 0, u ≥ 0 : Σ_{i=1}^q w_i P_i + Σ_{j=1}^p ν_j Q_j = 0, Σ_{i=1}^q w_i d_i + Σ_{j=1}^p ν_j c_j + A^T u = 0}.  (8)

Using the structure of W given by (8), we show in the following two propositions that either the set W itself or its closure is equal to C^+ in some standard cases.
Proposition 5.1. Consider problem (QVP). In each of the following cases, W = P_∞^+ = C^+ holds; in particular, the problem is bounded.
(a) There is at least one nonlinear constraint, that is, p > 0.
(b) The matrices P_1, . . . , P_q are linearly independent.
Proof. For each case, we will show W = C^+. This implies W = P_∞^+ and by Proposition 4.1, the problem is self-bounded. Indeed, it is bounded as we also have P_∞^+ = C^+.
(a) By convexity, we have Q_1, . . . , Q_p ⪰ 0 and Σ_{i=1}^q w_i P_i ⪰ 0 for arbitrary w ∈ C^+. If Σ_{i=1}^q w_i P_i ≠ 0, then the choice of ν = 0 gives 0 ≠ Σ_{i=1}^q w_i P_i + Σ_{j=1}^p ν_j Q_j ⪰ 0. If Σ_{i=1}^q w_i P_i = 0, then the choice of ν_1 = 1, ν_2 = · · · = ν_p = 0 gives 0 ≠ Σ_{i=1}^q w_i P_i + Σ_{j=1}^p ν_j Q_j = Q_1 ⪰ 0. This shows that every w ∈ C^+ belongs to the first set in (8). Together with (8), this implies that W = C^+.
(b) By (a), it is sufficient to consider problems without nonlinear constraints, that is, p = 0. In this case, the cone W given by (8) simplifies to

W = {w ∈ C^+ | 0 ≠ Σ_{i=1}^q w_i P_i ⪰ 0} ∪ {w ∈ C^+ | Σ_{i=1}^q w_i P_i = 0, ∃u ≥ 0 : Σ_{i=1}^q w_i d_i + A^T u = 0}.

Since the C-convexity of the objective implies Σ_{i=1}^q w_i P_i ⪰ 0 for all w ∈ C^+, we can write W = (C^+ \ W_1) ∪ W_2, where

W_1 := {w ∈ C^+ | Σ_{i=1}^q w_i P_i = 0},  W_2 := {w ∈ C^+ | Σ_{i=1}^q w_i P_i = 0, ∃u ≥ 0 : Σ_{i=1}^q w_i d_i + A^T u = 0}.  (9)

If the matrices P_1, . . . , P_q are linearly independent, then W_1 = {0}, since Σ_{i=1}^q w_i P_i = 0 occurs only for w = 0. Since 0 ∈ W_2 and W = (C^+ \ W_1) ∪ W_2, we conclude W = C^+.
Proposition 5.2. Consider problem (QVP) and assume that the problem is nonlinear, that is, there is at least one nonlinear constraint or objective function. If C = R q + , then P + ∞ = R q + .
Proof. By Proposition 5.1 (a), it is sufficient to consider problems without nonlinear constraints, that is, p = 0. In this case, W = (C^+ \ W_1) ∪ W_2, where W_1, W_2 are as in (9). If W_1 = {0}, then P_∞^+ = C^+ follows since P_∞^+ = cl W. Assume w ∈ W_1 \ {0}. Noting that C^+ = R^q_+, we have w_j > 0 for some j ∈ {1, . . . , q}. Consider the diagonal elements of the matrix Σ_{i=1}^q w_i P_i = 0. Since the matrices P_1, P_2, . . . , P_q are positive semidefinite for C = R^q_+, all of their diagonal elements are nonnegative. Then, Σ_{i=1}^q w_i P_i = 0 implies for w_j > 0 that all the diagonal elements of the matrix P_j are zero and, therefore, P_j is the zero matrix.
Since the problem is not linear, there exists i ∈ {1, . . . , q} with P_i ≠ 0. Then, for any w ∈ W_1 we can construct a sequence w^(n) := w + (1/n) e_i ∈ R^q_+ \ W_1 converging to w. Hence, we conclude P_∞^+ = cl (R^q_+ \ W_1) = R^q_+.

We see that the computation of P_∞^+ is only relevant if C ≠ R^q_+, (QVP) has only linear constraints and P_1, . . . , P_q are linearly dependent. In that case, it can be done via computing the sets W_1, W_2 given by (9) and setting P_∞^+ = cl ((C^+ \ W_1) ∪ W_2). As long as the ordering cone is polyhedral, W_2 is in the form of a polyhedral projection, so P_∞^+ can be obtained through computations with polyhedra.
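The set W_1 in this degenerate case can be analyzed through the nullspace of the matrix whose columns are the vectorized P_i: linearly independent P_i give a trivial nullspace and hence W_1 = {0}. A small sketch with hypothetical, deliberately linearly dependent data (all names and matrices are ours):

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical objective Hessians with P_2 = 2 * P_1 (linearly dependent),
# so {w | w_1 P_1 + w_2 P_2 = 0} is a nontrivial line in R^2.
P1 = np.array([[1.0, 0.0], [0.0, 0.0]])
P2 = 2.0 * P1

# Column i of M is vec(P_i); null_space(M) spans {w | sum_i w_i P_i = 0}.
M = np.column_stack([P1.ravel(), P2.ravel()])
N = null_space(M)  # one basis vector, proportional to (2, -1)
```

W_1 is then the intersection of this nullspace with C^+ (here the direction (2, −1) has mixed signs, so the intersection with R²_+ is only the origin), after which W_2 and the closure formula above complete the computation of P_∞^+.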

Numerical Examples
In this section, we provide numerical examples to illustrate the proposed solution methodology. We consider a two-dimensional and a three-dimensional linear problem and two semidefinite programming problems with different objective functions minimized over the same feasible set.

Example 6.1. Consider the illustrative two-dimensional linear example. As outlined in Section 5.2, to identify the recession cone of the upper image, it suffices to solve two scalar linear problems,

minimize / maximize λ subject to (λ, 1 − λ)^T ≥ 0, y ≥ 0, [[4, 2, 1, 1, 1], [1, 1, 1, 2, 4]] y = (λ, 1 − λ)^T.

For the three-dimensional linear problem, we take c = (1, 1, 1)^T ∈ int C, that is, w : R^2 → R^3 is given by w(λ) = (λ_1, λ_2, 1 − λ_1 − λ_2)^T. As explained in Section 5.2, it is possible to compute the set Λ ⊆ R^2 by solving a bounded linear projection problem. The sets W, P_∞, and Λ are displayed in Figure 3.

For the semidefinite problems, we find an approximation of the cone of recession directions W = {w ∈ R^3_+ | ∃Z ⪰ 0 : tr(F_i Z) + e_i^T P w = 0, i = 1, 2, 3} through the convex projection problem of (approximately) computing the set W_c = {w ∈ R^3_+ | ∃Z ⪰ 0 : tr(F_i Z) + e_i^T P w = 0, i = 1, 2, 3, w_1 + w_2 + w_3 ≤ √3}.
The convex projection yields both inner and outer approximations of the set W_c, which generate inner and outer approximations of the cones W and P_∞. All of them are displayed in Figure 4 for the problem with objective P_1. The outer approximation of P_∞, for which the approximation tolerance is guaranteed by Propositions 4.4 and 4.7, is needed as a part of a solution.
In Figure 5 we use the problem with objective P_2 to compare the approximations of P_∞ obtained via the set W_c (left panel) and via the set Λ (right panel); both convex projections were solved with tolerance ε = 0.005. Recall that we only have tolerance guarantees for the approach through the set W_c.