University of Birmingham

Characterization of tropical hemispaces by (P, R)-decompositions

We consider tropical hemispaces, defined as tropically convex sets whose complements are also tropically convex, and tropical semispaces, defined as maximal tropically convex sets not containing a given point. We introduce the concept of (P, R)-decomposition. This yields (to our knowledge) a new kind of representation of tropically convex sets, extending the classical idea of representing convex sets by means of extreme points and rays. We characterize tropical hemispaces as tropically convex sets that admit a (P, R)-decomposition of a certain kind. In this characterization, with each tropical hemispace we associate a matrix with coefficients in the completed tropical semifield, satisfying an extended rank-one condition. Our proof techniques are based on homogenization (lifting a convex set to a cone), and on the relation between tropical hemispaces and semispaces.


Introduction
Max-plus algebra is the algebraic structure obtained when considering the max-plus semifield R max,+ . This semifield is defined as the set R ∪ {−∞} endowed with α ⊕ β := max(α, β) as addition and the usual addition of real numbers, α ⊗ β := α + β, as multiplication. Thus, in the max-plus semifield, the neutral elements for addition and multiplication are −∞ and 0 respectively.
The max-times semifield R max,× is similarly defined as the set [0, +∞) endowed with α ⊕ β := max(α, β) as addition and the usual multiplication of real numbers, α ⊗ β := αβ, as multiplication. Consequently, in the max-times semifield, 0 is the neutral element for addition and 1 is the neutral element for multiplication.
In this paper we consider both of these semifields at the same time, under the common notation T and under the common name tropical algebra. In what follows T denotes either the max-plus semifield R max,+ or the max-times semifield R max,× . We will use 0 to denote the neutral element for addition, 1 to denote the neutral element for multiplication, and T + to denote the set of all invertible elements with respect to the multiplication, i.e., all the elements of T different from 0.
The space T n of n-dimensional vectors x = (x 1 , . . . , x n ), endowed naturally with the componentwise addition (also denoted by ⊕) and λx := (λ ⊗ x 1 , . . . , λ ⊗ x n ) as the multiplication of a scalar λ ∈ T by a vector x, is a semimodule over T. The vector (0, . . . , 0) ∈ T n is also denoted by 0, and it is the identity for ⊕.
In tropical convexity, one first defines the tropical segment joining the points x, y ∈ T n as the set {αx ⊕ β y ∈ T n | α, β ∈ T, α ⊕ β = 1}, and then calls a set C ⊆ T n tropically convex if it contains the tropical segment joining any two of its points (see Fig. 1 below for an illustration of tropical segments in dimension 2). Similarly, the notions of cone, halfspace, semispace, hemispace, convex hull, linear span, convex and linear combination, can be transferred to the tropical setting (precise definitions are given below). Henceforth all these terms used without precisions should always be understood in the max-plus or max-times (i.e. tropical) sense.
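For concreteness, the max-plus operations and the tropical segment above can be sketched in a few lines of code (a minimal illustration, not part of the paper; the tropical zero 0 of T is encoded as -inf in the max-plus model, and the constraint α ⊕ β = 1 reads max(α, β) = 0):

```python
# Max-plus model of T: addition is max, multiplication is +,
# and the additive neutral element 0 of T is represented by -inf.
ZERO = float('-inf')

def t_mul(a, b):
    # tropical multiplication a ⊗ b (the zero element is absorbing)
    return ZERO if ZERO in (a, b) else a + b

def vec_oplus(x, y):
    # componentwise tropical addition x ⊕ y
    return [max(a, b) for a, b in zip(x, y)]

def vec_scale(lam, x):
    # action of a scalar: λx = (λ ⊗ x_1, ..., λ ⊗ x_n)
    return [t_mul(lam, xi) for xi in x]

def segment_point(alpha, beta, x, y):
    # a point αx ⊕ βy of the tropical segment joining x and y;
    # α ⊕ β = 1 becomes max(alpha, beta) == 0 in the max-plus model
    assert max(alpha, beta) == 0.0
    return vec_oplus(vec_scale(alpha, x), vec_scale(beta, y))

# a point of the tropical segment joining (0, 3) and (2, 0) in T^2
p = segment_point(0.0, -1.0, [0.0, 3.0], [2.0, 0.0])
```

Sampling β over [−∞, 0] with α = 0 (and symmetrically) traces the whole segment, a broken line of ordinary segments as in Fig. 1.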
The interest in this convexity (also known as max-plus convexity when T = R max,+ , or max-times convexity or B-convexity when T = R max,× ) comes from several fields, some of which we next review. Convexity in T n and in more general semimodules was introduced by Zimmermann [29] under the name "extremal convexity" with applications e.g. to discrete optimization problems and it was studied by Maslov, Kolokoltsov, Litvinov, Shpiz and others as part of the Idempotent Analysis [17,19,22], inspired by the fact that the solutions of a Hamilton-Jacobi equation associated with a deterministic optimal control problem belong to structures similar to convex cones. Another motivation arises from the algebraic approach to discrete event systems initiated by Cohen et al. [6], since the reachable and observable spaces of certain timed discrete event systems are naturally equipped with structures of cones of T n (see e.g. Cohen et al. [7]). Motivated by tropical algebraic geometry and applications in phylogenetic analysis, Develin and Sturmfels studied polyhedral convex sets in T n thinking of them as classical polyhedral complexes [10].
Many results that are part of classical convexity theory can be carried over to the setting of T n : separation of convex sets and projection operators (Gaubert and Sergeev [14]), minimization of distance and description of sets of best approximation (Akian et al. [1]), discrete convexity results such as Minkowski theorem (Gaubert and Katz [11,12]), Helly, Carathéodory and Radon theorems (Briec and Horvath [2]), colorful Carathéodory and Tverberg theorems (Gaubert and Meunier [13]), to quote a few.
Here we investigate hemispaces in T n , which are convex sets in T n whose complements in T n are also convex. The definition of hemispaces makes sense in other structures once the notion of convex set is defined. Hemispaces also appear in the literature under the names of halfspaces, convex halfspaces, and generalized halfspaces. As general convex sets are quite complicated in many convexity structures, a simple description of hemispaces is highly desirable. Usual hemispaces in R n are described by Lassak in [18]. Martínez-Legaz and Singer [20] give several geometric characterizations of usual hemispaces in R n with the aid of linear operators and lexicographic order in R n .
Hemispaces play a role in abstract convexity (see Singer [27], Van de Vel [28]), where they are used in the Kakutani Theorem to separate two convex sets from each other. The proof of Kakutani Theorem makes use of Zorn's Lemma (relying on the Pasch axiom, which holds both in tropical and usual convexity). A different approach is to start from the separation of a point from a closed convex set, as investigated in many works (e.g., Zimmermann [29], Litvinov et al. [19], Cohen et al. [8,9], Develin and Sturmfels [10], Briec et al. [4]). This Hahn-Banach type result is extended to the separation of several convex sets by an application of non-linear Perron-Frobenius theory by Gaubert and Sergeev in [14].
In the Hahn-Banach approach, tropically convex sets are separated by means of closed halfspaces in T n , defined as sets of vectors x in T n satisfying an inequality of the form ⊕_j γ_j ⊗ x_j ⊕ α ≤ ⊕_i β_i ⊗ x_i ⊕ δ. As shown by Joswig [16], closed halfspaces in T n are unions of several closed sectors, which are convex both tropically and in the ordinary sense.
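In the max-plus model, membership in a closed halfspace of this form is a single comparison of two maxima. The following sketch (with arbitrarily chosen illustrative coefficients γ, β, α, δ, not taken from the paper) also spot-checks tropical convexity on a sampled tropical segment:

```python
ZERO = float('-inf')  # the tropical zero in the max-plus model

# hypothetical coefficients of a closed halfspace in T^2:
# gamma_1 x_1 ⊕ alpha <= beta_2 x_2 ⊕ delta, written with max and +
gamma = [0.0, ZERO]   # the left-hand side involves x_1 only
beta  = [ZERO, 0.0]   # the right-hand side involves x_2 only
alpha, delta = ZERO, 1.0

def dot(c, x):
    # tropical "scalar product" ⊕_k c_k ⊗ x_k
    return max((ci + xi) if ZERO not in (ci, xi) else ZERO
               for ci, xi in zip(c, x))

def in_halfspace(x):
    return max(dot(gamma, x), alpha) <= max(dot(beta, x), delta)

# two points of the halfspace x_1 <= max(x_2, 1)
u, v = [0.0, 2.0], [1.0, ZERO]
assert in_halfspace(u) and in_halfspace(v)

# tropical convexity spot check: points u ⊕ bv with max(0, b) = 0
for b in [ZERO, -2.0, -1.0, 0.0]:
    w = [max(ui, (b + vi) if vi != ZERO else ZERO)
         for ui, vi in zip(u, v)]
    assert in_halfspace(w)
```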
Briec and Horvath [3] proved that the topological closure of any hemispace in T n is a closed halfspace in T n . Hence a closed halfspace captures "almost everything" of the corresponding hemispace. However, the borderline between a hemispace and its complement in T n follows an intricate pattern, unknown in general, with some pieces belonging to one hemispace and the rest to the other. This pattern was not revealed by Briec and Horvath.
The present paper gives a complete characterization of hemispaces in T n by means of the so-called (P , R)-decompositions (see Definition 2.3 below). In dimension 2 the borderline is described explicitly and all the types of hemispaces in T 2 that may appear are shown in Figs. 2 and 3. Thus, our result is more general than the one established in [3] even in dimension 2. In higher dimensions one may use the characterization in terms of (P , R)-decompositions to describe the fine structure of the borderline quite explicitly.
We now describe the basic idea of the proof of this characterization. Let us first recall that like in usual convexity, a closed convex set in T n can be decomposed as the (tropical) Minkowski sum of the convex hull of its extreme points and its recession cone (Gaubert and Katz [11,12]). As a relaxation of this traditional approach, we suggest the concept of (P , R)-decomposition to describe general convex sets in T n . Developed here in the context of tropical convexity, this concept corresponds to that of Motzkin decomposition studied in usual convexity in locally convex spaces (see e.g. [15]). Homogenization, which carries convex sets to convex cones, is another classical tool we exploit in the setting of T n . Next, an important feature of tropical convexity (as opposed to usual convexity) is the existence of a finite number of types of semispaces, i.e., maximal convex sets in T n not containing a given point. These sets were described in detail by Nitica and Singer [23][24][25], who showed that they are precisely the complements of closed sectors. Let us mention that the multiorder principle of tropical convexity [23,24,26,21] can be formulated in terms of complements of semispaces. It follows from abstract convexity that any hemispace is the union of all the complements of semispaces which it contains. These sets are closed sectors of several types. The convex hull in T n of a union of sectors of certain type gives a sector of the same type, perhaps with some pieces of the boundary missing. Some degenerate cases may also appear. Sectors admit a (relatively) simple (P , R)-decomposition, and we can combine such (P , R)-decompositions to obtain a (P , R)-decomposition of the hemispace. So far the method is quite general and geometric, and in dimension 2 sufficient for classification.
For higher dimensions the fact that we deal with hemispaces becomes relevant. It turns out that a hemispace in T n admits a (P , R)-decomposition consisting of unit vectors and linear combinations of two unit vectors. Thus, to characterize a hemispace by means of (P , R)-decompositions we need to understand how the linear combinations of two unit vectors are distributed among the hemispace and its complement. The proof becomes more algebraic and combinatorial, and at this point it becomes convenient to work with cones and their (usual) representation in terms of generators. Using homogenization, we reduce the study of general hemispaces in T n to the study of conical hemispaces in T n+1 (these are hemispaces in T n+1 which are also cones or, equivalently, cones in T n+1 whose complements enlarged with 0 are also cones). We introduce the "α-matrix", whose entries stem from the borderline between a conical hemispace and its complement in two-dimensional coordinate planes. We show that it satisfies an extended rank-one condition, and then we prove that this condition is also sufficient in order for a set to generate a conical hemispace. This part of the proof is more technical and it is given in the last third of the paper, starting with Proposition 4.10 and ending with the proof of Theorem 4.7. We use the rank-one condition to describe the fine structure of the α-matrix, which is an independent combinatorial result of interest, and then use this structure to construct explicitly the complementary conical hemispace for a conical hemispace given by its (P , R)-decomposition. Finally, we translate this result back to the (P , R)-decomposition of general hemispaces, to obtain the main result of the paper (Theorem 4.22).
The paper is organized as follows. Section 2 is occupied with preliminaries on convex sets in T n , and introduces the concept of (P , R)-decomposition. In Section 3 we study semispaces in T n , in order to give, exploiting homogenization, a simpler proof of their characterization than the one given in [23,24]. Hemispaces appear here as unions of (in general, infinitely many) complements of semispaces, i.e., the closed sectors of [16]. Section 4 contains the main results on hemispaces in T n . The purpose of Section 4.1 is to reduce general hemispaces in T n to conical hemispaces in T n+1 . This aim is finally achieved in Theorem 4.5. In view of this theorem, in Section 4.2 we study conical hemispaces only. There we prove Theorem 4.7 as explained above, which gives a concise characterization of conical hemispaces in terms of generators. In Section 4.3, we obtain a number of corollaries of the previous results. In the first place we verify that closed hemispaces in T n are closed halfspaces in T n , a result of [3], see Theorem 4.18 and Corollary 4.20. Finally, the main result of this paper is given in Theorem 4.22 of Section 4.4. It provides a characterization of general hemispaces in T n as convex sets having particular (P , R)-decompositions, and is obtained as a combination of Theorems 4.5 and 4.7.

Preliminaries
In the sequel, for any m, n ∈ Z with m ≤ n, we denote the set {m, m + 1, . . . , n} by [m, n], or simply by [n] when m = 1. The multiplicative inverse of λ ∈ T + (recall that T + := T \ {0}) will be denoted by λ^{-1}. For x ∈ T n we define the support of x by supp(x) := {i ∈ [n] | x_i ≠ 0}. We will say that x ∈ T n has full support if supp(x) = [n]. Otherwise we say that x has non-full support.
The set of vectors {e^{i,n} | i ∈ [n]} ⊆ T n defined by (e^{i,n})_j := 1 if j = i and (e^{i,n})_j := 0 otherwise forms the standard basis in T n . We will refer to these vectors as the unit vectors. In what follows, we will work with unit vectors in both T n and T n+1 . For simplicity of notation, we identify e^{i,n} with e^{i,n+1} for i ≤ n, and write simply e_i for them.
To introduce a topology we need to specialize T to one of the models. Namely, if T = R max,× then we use the topology induced on R n_+ by the usual Euclidean topology of the real space. If T = R max,+ , then our topology is induced by the metric d_∞(x, y) = max_{i∈[n]} |e^{x_i} − e^{y_i}|. Note that the max-plus and max-times semifields are isomorphic, an isomorphism being given by the map x ↦ e^x, under which the two topologies above correspond to each other.
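The isomorphism x ↦ e^x can be checked numerically in a few lines (illustration only): it sends max-plus addition and multiplication to their max-times counterparts, and the neutral elements to each other.

```python
import math

# x -> e^x maps R_max,+ = (R ∪ {-inf}, max, +) onto R_max,x = ([0, inf), max, *)
def iso(a):
    return math.exp(a) if a != float('-inf') else 0.0  # e^{-inf} = 0

a, b = 1.5, -0.5
# tropical addition max is preserved, since exp is increasing
assert iso(max(a, b)) == max(iso(a), iso(b))
# max-plus multiplication a + b becomes ordinary multiplication
assert abs(iso(a + b) - iso(a) * iso(b)) < 1e-12
# the neutral elements correspond: -inf -> 0 and 0 -> 1
assert iso(float('-inf')) == 0.0 and iso(0.0) == 1.0
```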

Tropical cones and tropically convex sets: (P , R)-decomposition and homogenization
We begin by recalling the definition of cones and by describing some relations between them and convex sets.

Definition 2.1. A set V ⊆ T n is called a (tropical) cone if it is closed under (tropical) addition and multiplication by scalars. A cone V in T n is said to be non-trivial when V ≠ {0} and V ≠ T n .

Definition 2.2.
For P , R ⊆ T n , we define the (tropical) convex hull of P to be conv(P ) := {⊕_{y∈P} λ_y y | λ_y ∈ T for y ∈ P and ⊕_{y∈P} λ_y = 1}, and the (tropical) linear span of R, or cone generated by R, to be span(R) := {⊕_{y∈R} λ_y y | λ_y ∈ T for y ∈ R}, where in both cases only a finite number of the scalars λ_y are not equal to 0. We will also consider the (tropical) Minkowski sum of conv(P ) and span(R), which is conv(P ) ⊕ span(R) := {x ⊕ y | x ∈ conv(P ), y ∈ span(R)}.
Observe that span(R) always contains the null vector 0, but conv(P ) does not contain it in general. For this reason, we always have conv(P ) ⊆ conv(P ) ⊕ span(R) and we do not always have span(R) ⊆ conv(P ) ⊕ span(R).
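For a finite generating set R, membership in span(R) can be decided by residuation: for each generator take the largest scalar keeping it below x, and check whether the resulting combination reproduces x. A max-plus sketch (the residuation test is standard; the helper names are ours):

```python
ZERO = float('-inf')  # the tropical zero in the max-plus model

def t_mul(a, b):
    return ZERO if ZERO in (a, b) else a + b

def in_span(x, R):
    """Decide x ∈ span(R) for a finite R ⊆ T^n (max-plus model)."""
    y = [ZERO] * len(x)
    for r in R:
        # largest λ with λ ⊗ r <= x componentwise, by residuation
        finite = [xi - ri for xi, ri in zip(x, r) if ri != ZERO]
        lam = min(finite) if finite else ZERO
        y = [max(yi, t_mul(lam, ri)) for yi, ri in zip(y, r)]
    return y == x

r = [0.0, 1.0]
assert in_span([2.0, 3.0], [r])      # equals 2 ⊗ r
assert not in_span([2.0, 4.0], [r])  # would need two different scalars
assert in_span([ZERO, ZERO], [r])    # the null vector lies in every span
```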
Definition 2.3. Given a convex set C ⊆ T n , we call (P , R)-decomposition of C any pair of sets P , R ⊆ T n such that C = conv(P ) ⊕ span(R). (1)

For each convex set C ⊆ T n at least one decomposition of the form (1) exists: just take P = C and R = ∅. A canonical decomposition of the form (1) can be written for closed convex sets, by the tropical analogue of the Minkowski theorem, due to Gaubert and Katz [11,12].
Recall that homogenization lifts a convex set C ⊆ T n to the cone V_C := {(λx, λ) | x ∈ C, λ ∈ T} ⊆ T n+1 . Reversing the homogenization means taking a section of a cone by a coordinate plane. Below we take only sections of cones in T n+1 by x_{n+1} = α (mostly with α = 1), and not by x_i = α with i ∈ [n].

Definition 2.6. For V ⊆ T n+1 and α ∈ T, the set C_α V := {x ∈ T n | (x, α) ∈ V} is called the coordinate section of V at level α.

The following property of coordinate sections is standard (the proof is given for the reader's convenience).

Proposition 2.7. Let V ⊆ T n+1 be closed under multiplication by scalars, and take any α ≠ 0. Then C_α V = {αx | x ∈ C_1 V}.

Proof. For α ≠ 0 we have (x, α) ∈ V if and only if (α^{-1}x, 1) = α^{-1}(x, α) ∈ V, that is, if and only if α^{-1}x ∈ C_1 V. □

Let us write out a (P , R)-decomposition of a section of a cone generated by a set U ⊆ T n+1 .
Proposition 2.8. If U ⊆ T n+1 , V = span(U ) and the coordinate section C_1 V is non-empty, then C_1 V = conv(P_U ) ⊕ span(R_U ), where P_U := {y ∈ T n | ∃μ ≠ 0, (μy, μ) ∈ U} and R_U := {z ∈ T n | (z, 0) ∈ U}. (3)

Proof. Let x ∈ C_1 V, so that (x, 1) ∈ V = span(U ). Each u ∈ U with non-null last coordinate μ_u can be written as μ_u (y, 1) with y ∈ P_U , and each u ∈ U with null last coordinate is of the form (z, 0) with z ∈ R_U . Thus we can represent (x, 1) = ⊕_{y∈P_U} λ_y (y, 1) ⊕ ⊕_{z∈R_U} λ_z (z, 0) for some λ_y , λ_z ∈ T, with only a finite number of λ_y , λ_z not equal to 0. Comparing the last coordinates gives ⊕_{y∈P_U} λ_y = 1, and hence x = ⊕_{y∈P_U} λ_y y ⊕ ⊕_{z∈R_U} λ_z z ∈ conv(P_U ) ⊕ span(R_U ). Conversely, let x = ⊕_{y∈P_U} λ_y y ⊕ ⊕_{z∈R_U} λ_z z for some λ_y , λ_z ∈ T, with ⊕_{y∈P_U} λ_y = 1 and only a finite number of λ_y , λ_z not equal to 0. Since (y, 1) ∈ V for y ∈ P_U and (z, 0) ∈ V for z ∈ R_U , we conclude that (x, 1) ∈ V, and so x ∈ C_1 V. □

Corollary 2.9. Given P , R ⊆ T n , let U := {(y, 1) | y ∈ P} ∪ {(z, 0) | z ∈ R}. (4) Then, by Proposition 2.8, we have C_1 span(U ) = conv(P_U ) ⊕ span(R_U ), where P_U and R_U are defined by (3). With U given by (4), we have P_U = P and R_U = R.

Recessive elements
We will use the following notions of recessive elements: Definition 2.10. Let C ⊆ T n be a convex set.
(i) Given x ∈ C, the set of recessive elements at x, or locally recessive elements at x, is defined as rec_x C := {z ∈ T n | x ⊕ λz ∈ C for all λ ∈ T}.
(ii) The set of globally recessive elements of C, denoted by recC, consists of the elements that are recessive at each element of C.
There is a close relation between recessive elements and (P , R)-decompositions.

Lemma 2.11. If a convex set C ⊆ T n admits the (P , R)-decomposition C = conv(P ) ⊕ span(R), then span(R) ⊆ rec C.

Proof. Let z ∈ span(R). If x ∈ C, we have x = p ⊕ r for some p ∈ conv(P ) and r ∈ span(R). Then, for any λ ∈ T, x ⊕ λz = p ⊕ (r ⊕ λz) ∈ C, because r ⊕ λz ∈ span(R) as a consequence of the fact that span(R) is a cone. Since this holds for any x ∈ C and λ ∈ T, we conclude that z ∈ rec C. □

For closed convex sets, every locally recessive element is globally recessive:

Proposition 2.12. (See Gaubert and Katz [12].) If a convex set C ⊆ T n is closed, then rec_x C ⊆ rec C for all x ∈ C.

Proposition 2.12 is proved in [12] for the max-plus semifield, and hence it also holds for the max-times semifield, as these two semifields are isomorphic. There are also other useful situations in which a locally recessive element turns into a globally recessive one.

Lemma 2.13. Let C ⊆ T n be a convex set and y ∈ C. If z ∈ rec_y C and supp(y) ⊆ supp(z), then z ∈ rec C.
Proof. Since z ∈ rec_y C we have y ⊕ λz ∈ C for all λ ∈ T, and since supp(y) ⊆ supp(z) we have y ⊕ λz = λz for all λ ≥ μ, where μ := ⊕_{i∈supp(y)} y_i z_i^{-1} if y ≠ 0 and μ := 0 otherwise. Given any β ∈ T, recalling that T denotes either R max,+ = (R ∪ {−∞}, max, +) or R max,× = ([0, +∞), max, ×), we know that there exists λ ∈ T + such that λ > β ⊕ μ. Then, for any x ∈ C, we have x ⊕ βz = x ⊕ (βλ^{-1})(λz) ∈ C, because βλ^{-1} ≤ 1 and x, λz ∈ C. Thus, we conclude that z ∈ rec C. □

Using the above observations, we now show that (P , R)-decompositions can be combined, under certain conditions.

Theorem 2.14. Let {C_γ}_{γ∈Γ} be a family of convex sets in T n , each of which admits the (P , R)-decomposition C_γ = conv(P_γ) ⊕ span(R_γ), and let C := conv(⋃_γ C_γ). Then C = conv(⋃_γ P_γ) ⊕ span(⋃_γ R_γ), if any of the following conditions hold: (i) ⋃_γ R_γ ⊆ rec C; (ii) C is closed; (iii) for each γ ∈ Γ and any z ∈ R_γ there exists y ∈ conv(P_γ) such that supp(y) ⊆ supp(z).

Proof. We first observe that C ⊆ conv(⋃_γ P_γ) ⊕ span(⋃_γ R_γ): indeed, the latter set is convex and contains each C_γ = conv(P_γ) ⊕ span(R_γ), and hence it contains C = conv(⋃_γ C_γ). For the converse inclusion it is enough to show that ⋃_γ R_γ ⊆ rec C. In that case span(⋃_γ R_γ) ⊆ rec C (note that rec C is a cone), and hence C ⊕ span(⋃_γ R_γ) ⊆ C. We know that P_γ ⊆ C_γ ⊆ C, hence conv(⋃_γ P_γ) ⊆ C, and therefore conv(⋃_γ P_γ) ⊕ span(⋃_γ R_γ) ⊆ C ⊕ span(⋃_γ R_γ) ⊆ C.

(i) In this case ⋃_γ R_γ ⊆ rec C holds by assumption. Let us now prove that ⋃_γ R_γ ⊆ rec C also holds in cases (ii) and (iii).

(ii) Each z ∈ R_γ is recessive at every y ∈ C_γ by Lemma 2.11, hence also recessive at y in C (as C_γ ⊆ C), and so by Proposition 2.12 it is globally recessive.

(iii) For z ∈ R_γ, let y ∈ conv(P_γ) be such that supp(y) ⊆ supp(z). By Lemma 2.11 we have z ∈ rec C_γ, so in particular z ∈ rec_y C_γ. It follows that z ∈ rec_y C because C_γ ⊂ C. As supp(y) ⊆ supp(z) and z ∈ rec_y C, we have z ∈ rec C by Lemma 2.13. □

We will also need the following lemma.

Lemma 2.15. For any family {R_γ}_{γ∈Γ} of subsets of T n we have span(⋃_γ span(R_γ)) = span(⋃_γ R_γ).

Tropical semispaces
In this section we aim to give a simpler proof for the structure of semispaces in T n , originally described by Nitica and Singer [23,24], and to introduce hemispaces in T n with some preliminary results on their relation with semispaces.

Conical hemispaces, quasisectors and quasisemispaces
We first introduce and study three objects called conical hemispaces, quasisectors and quasisemispaces. The importance of these lies in the fact that they will be the main tools for studying hemispaces, sectors and semispaces (see Definitions 3.13, 3.14 and 3.21 below) using the homogenization technique.

Definition 3.1. We call conical hemispace a cone V_1 ⊆ T n for which there exists a cone V_2 ⊆ T n such that V_1 ∩ V_2 = {0} and V_1 ∪ V_2 = T n . In this case we call (V_1, V_2) a joined pair of conical hemispaces (since V_2 is a conical hemispace as well). We say that a joined pair (V_1, V_2) of conical hemispaces is non-trivial when V_1 ≠ {0} and V_2 ≠ {0}.
For completeness, we show the relationship between conical hemispaces and hemispaces (for the concept of hemispace, see the Introduction or Definition 3.13 below).

Proposition 3.4. A set V ⊆ T n is a conical hemispace if and only if it is a hemispace and a cone.
Proof. If V is a conical hemispace, then it is a cone and its complement (not enlarged with 0) is a convex set. Thus V is a hemispace and a cone, whence the "only if" part follows. Conversely, assume that V is a hemispace and a cone, and suppose by contradiction that (T n \ V) ∪ {0} is a convex set but not a cone. Then (T n \ V) ∪ {0} contains the sum of any two of its elements but it is not a cone, so it is not a wedge. By Lemma 3.3, V is not a wedge, in contradiction with the fact that it is a cone. Whence the "if" part follows. □

Note that conical hemispaces are almost the same as the "conical halfspaces" of Briec and Horvath [3]. Indeed, the latter "conical halfspaces" are, in our terminology, hemispaces closed under multiplication by any non-null scalar; in [3] it is not required that 0 belong to the "conical halfspace".

Definition 3.5. For any y ≠ 0 in T n and i ∈ supp(y), define the sets W_i(y) := {x ∈ T n | x_j ⊗ y_i ≤ x_i ⊗ y_j for all j ∈ supp(y), and x_j = 0 for all j ∈ [n] \ supp(y)}, which will be referred to as quasisectors of type i.
From this definition it follows that W_i(y) and (T n \ W_i(y)) ∪ {0} are both cones, so they form a joined pair of conical hemispaces. Also note that y ∈ W_i(y) for all i ∈ supp(y).
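In the max-plus model, membership in a quasisector is a finite set of inequality checks; the following sketch (our own illustration, assuming the description of W_i(y) by the inequalities x_j ⊗ y_i ≤ x_i ⊗ y_j on supp(y), with x vanishing outside supp(y)) also spot-checks that the complementary cone is closed under ⊕:

```python
ZERO = float('-inf')  # the tropical zero in the max-plus model

def t_mul(a, b):
    return ZERO if ZERO in (a, b) else a + b

def supp(v):
    return [k for k, vk in enumerate(v) if vk != ZERO]

def in_W(x, y, i):
    """x ∈ W_i(y): x_j ⊗ y_i <= x_i ⊗ y_j on supp(y), x_j = 0 outside."""
    assert i in supp(y)
    return (all(t_mul(x[j], y[i]) <= t_mul(x[i], y[j]) for j in supp(y))
            and all(x[j] == ZERO for j in range(len(y)) if j not in supp(y)))

y = [0.0, 1.0]                       # a point of full support in T^2
assert in_W([0.0, 1.0], y, 0)        # y itself lies in W_0(y)
assert in_W([0.0, ZERO], y, 0)       # the unit vector e_0 as well
assert not in_W([ZERO, 0.0], y, 0)   # e_1 falls in the complementary cone

# the complement of W_0(y), enlarged with 0, is closed under ⊕ (spot check)
a, b = [ZERO, 0.0], [0.0, 2.0]
assert not in_W(a, y, 0) and not in_W(b, y, 0)
assert not in_W([max(p, q) for p, q in zip(a, b)], y, 0)
```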
Proof. The "only if" part follows from the fact that y ∈ W i (y) for i ∈ supp( y). In order to prove the "if" part, assume that i ∈ supp( y) and , hence x i = 0, in contradiction with our assumption. Furthermore, . Then, y can be written as a linear combination of the x i 's: Restating Theorem 3.6 we get the following.
Theorem 3.7. Let V ⊆ T n be a cone and take y ≠ 0 in T n . Then y ∉ V if and only if V ∩ W_i(y) = {0}, or equivalently V ⊆ (T n \ W_i(y)) ∪ {0}, for some i ∈ supp(y).

We are also interested in the following object.

Definition 3.8. A cone V ⊆ T n is called a quasisemispace at y ∈ T n , y ≠ 0, if it is a maximal (with respect to inclusion) cone not containing y.
Proposition 3.9. The quasisemispaces at y ≠ 0 are precisely the sets (T n \ W_i(y)) ∪ {0} for i ∈ supp(y).

Proof. Suppose that V is a quasisemispace at y. Since it is a cone not containing y, Theorem 3.7 implies that it is contained in (T n \ W_i(y)) ∪ {0} for some i ∈ supp(y). By maximality, it follows that it coincides with this set, which is itself a cone not containing y. □

This statement shows that Theorem 3.7 is an instance of a separation theorem in abstract convexity, since it says that when V is a cone in T n , we have y ∉ V if and only if there exists a quasisemispace (T n \ W_i(y)) ∪ {0} (where i ∈ supp(y)) in T n that contains V and does not contain y.
In particular, we obtain the following result.

Corollary 3.10. Each non-trivial cone V can be represented as the intersection of the quasisemispaces (T n \ W_i(y)) ∪ {0} containing it (where y ∉ V, y ≠ 0, and i ∈ supp(y)), and for each complement F of a cone, the set F ∪ {0} can be represented as the union of the quasisectors W_i(y) contained in F ∪ {0} (where y ∈ F and i ∈ supp(y)).

Lemma 3.11. Assume that x, y ∈ T n satisfy supp(x) ∩ supp(y) ≠ ∅. Then, for any i ∈ supp(x) ∩ supp(y), the non-null point z with coordinates z_j := min(x_i^{-1} ⊗ x_j , y_i^{-1} ⊗ y_j) for j ∈ [n] belongs to W_i(x) ∩ W_i(y).

Corollary 3.10 and Lemma 3.11 imply the following (preliminary) result on conical hemispaces.

Theorem 3.12. Let (V_1, V_2) be a joined pair of conical hemispaces in T n . Then there exist disjoint subsets I, J of [n] such that V_1 is the span of the quasisectors of types in I contained in V_1, and V_2 is the span of the quasisectors of types in J contained in V_2.

Proof. As V_1 \ {0} and V_2 \ {0} are complements of cones, Corollary 3.10 yields that V_1 and V_2 are the unions of the quasisectors contained in them. We claim that the quasisectors contained in V_1 and V_2 are of different types. To see this assume that, on the contrary, there exist two points y′ ∈ V_1, y″ ∈ V_2 and an index i ∈ supp(y′) ∩ supp(y″) such that W_i(y′) ⊆ V_1 and W_i(y″) ⊆ V_2. Then, by Lemma 3.11 applied to y′, y″ and i, we conclude that the quasisectors W_i(y′) and W_i(y″), and so the conical hemispaces V_1 and V_2, have a non-null point in common, which is a contradiction. From the discussion above it follows that there exist disjoint subsets I, J of [n] such that V_1 is the union of the quasisectors of types in I contained in it, and V_2 is the union of the quasisectors of types in J contained in it. Finally, since the conical hemispaces V_1 and V_2 are cones, the unions above coincide with their spans. □

Tropical hemispaces, sectors and tropical semispaces
We now turn to convex sets using the homogenization technique. Below we will be interested in the following objects.
Definition 3.13. A convex set H ⊆ T n is called a (tropical) hemispace if its complement T n \ H is also a convex set.

Definition 3.14. For y ∈ T n and i ∈ supp(y) ∪ {n + 1}, the (closed) sector of type i at y is defined as S_i(y) := C_1 W_i((y, 1)) = {x ∈ T n | (x, 1) ∈ W_i((y, 1))}, where W_i((y, 1)) ⊆ T n+1 is the quasisector of Definition 3.5 applied to (y, 1).

Proposition 3.15. For y ∈ T n we have S_i(y) = {x ∈ T n | x_j ⊗ y_i ≤ x_i ⊗ y_j for all j ∈ supp(y), y_i ≤ x_i , and x_j = 0 for all j ∉ supp(y)} for each i ∈ supp(y), and S_{n+1}(y) = {x ∈ T n | x_j ≤ y_j for all j ∈ [n]}.

Proof. By Definition 3.5 applied to (y, 1), we have (x, 1) ∈ W_i((y, 1)) if and only if x_j ⊗ y_i ≤ x_i ⊗ y_j for all j ∈ supp(y), 1 ⊗ y_i ≤ x_i ⊗ 1 (the condition corresponding to j = n + 1), and x_j = 0 for all j ∉ supp(y); for i = n + 1 the conditions reduce to x_j ⊗ 1 ≤ 1 ⊗ y_j for all j ∈ [n]. □

Let us make the following observation, which will be useful in the next section.

Lemma 3.16. For any z ∈ T n , the sets S_{n+1}(αz) are increasing with α, that is, S_{n+1}(αz) ⊆ S_{n+1}(βz) whenever α ≤ β.

The homogenization technique also yields the following analogue of Theorem 3.6.

Theorem 3.19. Let C ⊆ T n be a convex set and take y ∈ T n . Then y ∈ C if and only if C ∩ S_i(y) ≠ ∅ (10) for each i ∈ supp(y) and for i = n + 1.
Proof. The "only if" part is trivial, since S i (y) contains y for each i ∈ supp( y) and for i = n + 1.
For the "if" part, consider the homogenization V C of C. If (10) is satisfied, then for each i ∈ supp( y) and for i = n + 1 there exists By Theorem 3.6, it follows that (y, 1) ∈ V C , and so y ∈ C. 2 Restating Theorem 3.19 we obtain the following. Theorem 3.20. Let C ⊆ T n be a convex set and take y ∈ T n . Then y / ∈ C if and only if C ⊆ S i (y) for some i ∈ supp( y) or i = n + 1.

Definition 3.21.
A convex set of T n is called a (tropical) semispace at y ∈ T n if it is a maximal (with respect to inclusion) convex set of T n not containing y.
Theorem 3.22. There are exactly |supp(y)| + 1 semispaces at y ∈ T n . These are given by the convex sets T n \ S_i(y) for i ∈ supp(y) and i = n + 1.
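In the max-plus model the sectors S_i(y) are cut out by finitely many inequalities, and the semispaces are their set complements. A small sketch (our own illustration, assuming the explicit description of sectors obtained by applying Definition 3.5 to (y, 1)):

```python
ZERO = float('-inf')  # the tropical zero in the max-plus model

def t_mul(a, b):
    return ZERO if ZERO in (a, b) else a + b

def supp(v):
    return [k for k, vk in enumerate(v) if vk != ZERO]

def in_sector(x, y, i):
    """x ∈ S_i(y) for i ∈ supp(y); i = None encodes the type n + 1."""
    n = len(y)
    if i is None:  # sector of type n + 1: componentwise x <= y
        return all(xk <= yk for xk, yk in zip(x, y))
    return (y[i] <= x[i]
            and all(t_mul(x[j], y[i]) <= t_mul(x[i], y[j]) for j in supp(y))
            and all(x[j] == ZERO for j in range(n) if j not in supp(y)))

y = [0.0, 1.0]
# y belongs to each of its sectors, hence to no semispace at y
for i in [0, 1, None]:
    assert in_sector(y, y, i)
# a point of the semispace T^2 \ S_{n+1}(y): it exceeds y in one coordinate
assert not in_sector([0.0, 2.0], y, None)
```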

Proof.
Suppose that C is a semispace at y ∈ T n . Since it is a convex set not containing y, Theorem 3.20 implies that it is contained in T n \ S_i(y) for some i ∈ supp(y) or i = n + 1. By maximality, it follows that it coincides with T n \ S_i(y). □ The following corollary corresponds to Corollary 3.10.

Corollary 3.23. Each convex set C ⊆ T n can be represented as the intersection of the semispaces T n \ S_i(y) containing it (where y ∉ C and i ∈ supp(y) or i = n + 1), and each complement F of a convex set can be represented as the union of the sectors S_i(y) contained in F (where y ∈ F and i ∈ supp(y) or i = n + 1).

In particular, since a hemispace H is the complement of the convex set T n \ H, we obtain:

Theorem 3.25. Any hemispace H ⊆ T n can be represented as the convex hull of the union of all the sectors S_i(y) contained in H (where y ∈ H and i ∈ supp(y) or i = n + 1).

Homogenization and (P , R)-decompositions
Let us start with (P , R)-decompositions of quasisectors and sectors.

Proposition 4.1. For y ≠ 0 in T n and i ∈ supp(y), the quasisectors W_i(y) and the sectors S_i(y) and S_{n+1}(y) can be represented as

W_i(y) = span({e_i ⊕ y_j y_i^{-1} e_j | j ∈ supp(y)}),
S_i(y) = conv({y_i e_i}) ⊕ span({e_i ⊕ y_j y_i^{-1} e_j | j ∈ supp(y)}),
S_{n+1}(y) = conv({0} ∪ {y_j e_j | j ∈ supp(y)}). (12)

Proof. We claim that if x ∈ W_i(y), then x = ⊕_{j∈supp(y)} x_j y_i y_j^{-1} (e_i ⊕ y_j y_i^{-1} e_j). Indeed, the j-th coordinate of the right-hand side equals x_j for each j ∈ supp(y), its i-th coordinate equals ⊕_{j∈supp(y)} x_j y_i y_j^{-1} = x_i (the maximum being attained at j = i, since x_j ⊗ y_i ≤ x_i ⊗ y_j for all j ∈ supp(y)), and its remaining coordinates are 0. This proves our claim. Using this property, we conclude that W_i(y) ⊆ span({e_i ⊕ y_j y_i^{-1} e_j | j ∈ supp(y)}).

For the converse inclusion, let us show that the vector e_i ⊕ y_j y_i^{-1} e_j belongs to W_i(y) for any j ∈ supp(y). Indeed, we have (e_i ⊕ y_j y_i^{-1} e_j)_k = 0 for any k ∈ [n] \ {i, j}, and so in particular for any k ∈ [n] \ supp(y), and (e_i ⊕ y_j y_i^{-1} e_j)_k ⊗ y_i ≤ (e_i ⊕ y_j y_i^{-1} e_j)_i ⊗ y_k for all k ∈ supp(y), the only non-trivial cases being k = i, where both sides equal y_i , and k = j, where both sides equal y_j . Thus, e_i ⊕ y_j y_i^{-1} e_j ∈ W_i(y) by Definition 3.5. Since W_i(y) is a cone, we conclude that span({e_i ⊕ y_j y_i^{-1} e_j | j ∈ supp(y)}) ⊆ W_i(y). This completes the proof of the first equality in (12).

From the first equality in (12) it follows that, given y ∈ T n , for all i ∈ supp((y, 1)) we have W_i((y, 1)) = span({e_i ⊕ (y, 1)_j (y, 1)_i^{-1} e_j | j ∈ supp((y, 1))}), (13) where (y, 1)_j denotes the j-th coordinate of (y, 1) ∈ T n+1 . Hence, by Definition 3.14 and Proposition 2.8, it follows that S_i(y) = conv(P_U ) ⊕ span(R_U ) (14) for all i ∈ supp(y) ∪ {n + 1}, where U := {e_i ⊕ (y, 1)_j (y, 1)_i^{-1} e_j | j ∈ supp((y, 1))} (15) and P_U , R_U are defined by (3). Let i ∈ supp(y) (hence i ≤ n). Then, by (13), the generator corresponding to j = n + 1 is e_i ⊕ y_i^{-1} e_{n+1} = y_i^{-1}(y_i e_i ⊕ e_{n+1}), whose section gives the point y_i e_i ∈ P_U , while for j ∈ supp(y) the generator e_i ⊕ y_j y_i^{-1} e_j has null last coordinate. Therefore, by (15), (z, 0) ∈ U if and only if z = e_i ⊕ y_j y_i^{-1} e_j for some j ∈ supp(y). Consequently, by (14), we obtain the second equality of (12). For i = n + 1, the generators are e_{n+1} (for j = n + 1) and e_{n+1} ⊕ y_j e_j (for j ∈ supp(y)), all with non-null last coordinate, whose sections give P_U = {0} ∪ {y_j e_j | j ∈ supp(y)} and R_U = ∅; this yields the third equality of (12). □
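The second equality in (12) can be sanity-checked numerically in a small max-plus example (illustration only, for y = (0, 1) ∈ T^2 and i the first coordinate): membership in S_i(y), via its defining inequalities, must agree with membership in conv({y_i e_i}) ⊕ span({e_i ⊕ y_j y_i^{-1} e_j}), the latter decided by residuation.

```python
from itertools import product

ZERO = float('-inf')  # the tropical zero in the max-plus model

def t_mul(a, b):
    return ZERO if ZERO in (a, b) else a + b

y = [0.0, 1.0]  # full support, so S_0(y) has no vanishing constraints

def in_sector(x):
    # S_0(y): y_0 <= x_0 and x_j ⊗ y_0 <= x_0 ⊗ y_j for j in supp(y)
    return (y[0] <= x[0]
            and all(t_mul(x[j], y[0]) <= t_mul(x[0], y[j]) for j in range(2)))

p = [y[0], ZERO]                  # the single point y_0 e_0 of conv(P)
R = [[0.0, ZERO], [0.0, 1.0]]     # e_0 ⊕ y_j y_0^{-1} e_j for j = 0, 1

def in_decomposition(x):
    # greatest s ∈ span(R) with s <= x, by residuation; then test p ⊕ s = x
    s = [ZERO, ZERO]
    for r in R:
        finite = [xi - ri for xi, ri in zip(x, r) if ri != ZERO]
        lam = min(finite) if finite else ZERO
        s = [max(sk, t_mul(lam, rk)) for sk, rk in zip(s, r)]
    return [max(pk, sk) for pk, sk in zip(p, s)] == x

values = [ZERO, -1.0, 0.0, 1.0, 2.0]
for x in map(list, product(values, repeat=2)):
    assert in_sector(x) == in_decomposition(x)
```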
Theorem 4.2. Any hemispace H ⊆ T n and any conical hemispace V ⊆ T n admit (P , R)-decompositions in which P consists of multiples of unit vectors (possibly together with the zero vector) and R consists of linear combinations of at most two unit vectors. If H is a hemispace, the resulting (P , R)-decomposition is given by

P = {y_i e_i | y ∈ T n , i ∈ supp(y), S_i(y) ⊆ H} ∪ ⋃{{0} ∪ {y_j e_j | j ∈ supp(y)} | y ∈ T n , S_{n+1}(y) ⊆ H},
R = {e_i ⊕ y_j y_i^{-1} e_j | y ∈ T n , i, j ∈ supp(y), S_i(y) ⊆ H}, (17)

and if V is a conical hemispace, then we have

V = span(R) with R = {e_i ⊕ y_j y_i^{-1} e_j | y ∈ T n \ {0}, i, j ∈ supp(y), W_i(y) ⊆ V}. (18)

Proof. By Theorem 3.25 any hemispace H can be represented as the convex hull of all the sectors contained in H. Consider the (P , R)-decompositions of sectors given in the last two lines of (12). The pair of sets (P , R) which determines the (P , R)-decomposition of the sector S_i(y), for any y ∈ T n and i ∈ supp(y), satisfies condition (iii) of Theorem 2.14, due to the fact that supp(y_i e_i) ⊆ supp(e_i ⊕ y_j y_i^{-1} e_j) for all j ∈ supp(y), and the pair of sets determining the (P , R)-decomposition of the sector S_{n+1}(y) satisfies this condition trivially (since R is empty). Therefore we can combine all the (P , R)-decompositions of the sectors contained in H (in other words, take the unions of all P and all R separately) to obtain a (P , R)-decomposition of H. To form the set P , let us first collect, using Theorem 3.25 and the second line of (12), all the vectors y_i e_i such that S_i(y) ⊆ H (where i ∈ supp(y)). If we have S_{n+1}(y) ⊆ H for some y ∈ T n then, using Theorem 3.25 and the third line of (12), we also add the zero vector and all the vectors y_j e_j , where j ∈ supp(y). This explains the expression for P in (17), in both cases. The set R is composed of the vectors e_i ⊕ y_j y_i^{-1} e_j appearing on the second line of (12), such that S_i(y) ⊆ H and j ∈ supp(y). This explains the last line of (17).
By Theorem 3.12, any conical hemispace V is the linear span of all the quasisectors contained in V. Consider the (P , R)-decomposition of quasisectors given in the first line of (12). By Lemma 2.15, the union of all the sets R appearing in these (P , R)-decompositions of the quasisectors contained in V gives the set R appearing in a (P , R)-decomposition of V (in which P = ∅). By Theorem 3.12 and the first line of (12), R consists of all the vectors e_i ⊕ y_j y_i^{-1} e_j such that W_i(y) ⊆ V and j ∈ supp(y). This shows (18). □

Let us make an observation on the (P , R)-decomposition of Theorem 4.2.

Lemma 4.3.
Let H ⊆ T n be a hemispace, z ∈ T n , and let R be defined by the last line of (17). If S_i(z) ⊆ H for some i ∈ supp(z), then z ∈ span(R).
Proof. Since S_i(z) ⊆ H, by the last line of (17) the set R contains all the vectors of the form e_i ⊕ z_j z_i^{-1} e_j for j ∈ supp(z). Representing z = ⊕_{j∈supp(z)} z_i ⊗ (e_i ⊕ z_j z_i^{-1} e_j), we conclude that z ∈ span(R). □ We shall need the following characterization of joined pairs of conical hemispaces by means of sections.
Lemma 4.4. Let V_1, V_2 ⊆ T n+1 be cones. Then (V_1, V_2) is a joined pair of conical hemispaces in T n+1 if and only if C_α V_1 ∪ C_α V_2 = T n for all α ∈ T and C_α V_1 ∩ C_α V_2 = ∅ for all α ≠ 0, (19) and (C_0 V_1, C_0 V_2) is a joined pair of conical hemispaces in T n . (20)

Proof. Assume first that (V_1, V_2) is a joined pair of conical hemispaces in T n+1 . Let α ∈ T. Then, given any x ∈ T n , since V_1 ∪ V_2 = T n+1 we have (x, α) ∈ V_1 ∪ V_2, and so x ∈ C_α V_1 ∪ C_α V_2. Since x ∈ T n is arbitrary, this shows C_α V_1 ∪ C_α V_2 = T n for any α ∈ T. Suppose now that α ≠ 0 and x ∈ C_α V_1 ∩ C_α V_2; then the non-null vector (x, α) would belong to V_1 ∩ V_2, contradicting the fact that V_1 ∩ V_2 = {0}. Similarly, if a non-null x belonged to C_0 V_1 ∩ C_0 V_2, then the non-null vector (x, 0) would belong to V_1 ∩ V_2, contradicting the fact that V_1 ∩ V_2 = {0}. This shows C_0 V_1 ∩ C_0 V_2 = {0}, and since C_0 V_1 and C_0 V_2 are cones, it completes the proof of (19) and (20).

Conversely, assume that (19) and (20) hold. Given any x ∈ T n and α ∈ T, since C_α V_1 ∪ C_α V_2 = T n we have x ∈ C_α V_1 ∪ C_α V_2. It follows that (x, α) ∈ V_1 ∪ V_2. Since x ∈ T n and α ∈ T are arbitrary, we conclude that V_1 ∪ V_2 = T n+1 , and by (19) and (20) the intersection V_1 ∩ V_2 reduces to {0}. □

The following theorem relates complementary pairs of hemispaces in T n with joined pairs of conical hemispaces in T n+1 through the concept of section.
Theorem 4.5. Let (H_1, H_2) be a complementary pair of hemispaces in T n (that is, H_2 = T n \ H_1), let (P_1, R_1) and (P_2, R_2) be the pairs of sets associated with H_1 and H_2 by (17), and define the cones V_1 := span({(y, 1) | y ∈ P_1} ∪ {(z, 0) | z ∈ R_1}) (21) and V_2 := span({(y, 1) | y ∈ P_2} ∪ {(z, 0) | z ∈ R_2}). (22) Then C_1 V_1 = H_1, C_1 V_2 = H_2, and (V_1, V_2) is a joined pair of conical hemispaces in T n+1 .
Proof. In the first place, observe that by Corollary 2.9 we have C_1 V_1 = conv(P_1) ⊕ span(R_1) = H_1 and C_1 V_2 = conv(P_2) ⊕ span(R_2) = H_2. To prove that (V_1, V_2) is a joined pair of conical hemispaces, we show that (19) and (20) hold, and apply Lemma 4.4. Let us first prove (19). Since (H_1, H_2) is a complementary pair of hemispaces, we have C_1 V_1 ∪ C_1 V_2 = H_1 ∪ H_2 = T n and C_1 V_1 ∩ C_1 V_2 = H_1 ∩ H_2 = ∅. (23) Since V_1 ∩ V_2 and V_1 ∪ V_2 are closed under multiplication by scalars, using (23) and Proposition 2.7 we conclude that C_α V_1 ∪ C_α V_2 = T n and C_α V_1 ∩ C_α V_2 = ∅ for all α ≠ 0. Thus we obtained (19). It remains to prove (20). Eqs. (21) and (22) imply that C_0 V_1 = span(R_1) and C_0 V_2 = span(R_2), so it remains to show that (span(R_1), span(R_2)) is a joined pair of conical hemispaces of T n .
Let us show first that span(R_1) ∪ span(R_2) = T n . Take a vector z ∈ T n . As (H_1, H_2) is a complementary pair of hemispaces, either z ∈ H_1 or z ∈ H_2. Assume z ∈ H_1. By Theorem 3.20 (taking H_2 as C and H_1 as its complement), it follows that S_i(z) ⊆ H_1 for i = n + 1 or for some i ∈ supp(z). If S_i(z) ⊆ H_1 for some i ≠ n + 1, then z ∈ span(R_1) by Lemma 4.3. In the case when S_i(z) ⊄ H_1 for any i ≠ n + 1, we have S_{n+1}(z) ⊆ H_1, and we consider the vectors αz for α ≠ 0: if S_i(αz) ⊆ H_1 or S_i(αz) ⊆ H_2 for some i ≠ n + 1 and some α ≠ 0, then αz, and hence also z, belongs to span(R_1) ∪ span(R_2), again by Lemma 4.3.
We are left with the case when S_{n+1}(αz) ⊆ H_1 or S_{n+1}(αz) ⊆ H_2 for each α. Since by Lemma 3.16 the sets S_{n+1}(αz) are increasing with α, it can only be that either S_{n+1}(αz) ⊆ H_1 for all α, or S_{n+1}(αz) ⊆ H_2 for all α. Assume the first case. Then, we obtain that all vectors x with supp(x) ⊆ supp(z) are in H_1, since x ∈ S_{n+1}(αz) with α = ⊕_{i∈supp(z)} x_i z_i^{-1} holds for every such x. But then S_i(z) ⊆ H_1 for any i ∈ supp(z) (recall that every x ∈ S_i(z) satisfies supp(x) ⊆ supp(z)), implying that z ∈ span(R_1).
We have shown that if z ∈ H 1 then z ∈ span(R 1 ) ∪ span(R 2 ). The same statement holds in the case of z ∈ H 2 (by symmetry). Thus span(R 1 ) ∪ span(R 2 ) = T n is proved, and it remains to show that span(R 1 ) ∩ span(R 2 ) = {0}.
Assume by contradiction that z ∈ span(R_1) ∩ span(R_2) and z ≠ 0. As z ∈ span(R_1), we have z = ⊕_{x∈R_1} β_x x, where only a finite number of the scalars β_x are not equal to 0. Observe that R_1 ≠ ∅ and at least one β_x is not equal to 0 because z ≠ 0. By (17), R_1 is composed of vectors of the form e_i ⊕ y_j y_i^{-1} e_j , where y ∈ T n and i, j ∈ supp(y) are such that S_i(y) ⊆ H_1. Consequently we have β(e_i ⊕ y_j y_i^{-1} e_j) ≤ z for some β ∈ T + , y ∈ T n and i, j ∈ supp(y) such that S_i(y) ⊆ H_1. Since S_i(y) ⊆ H_1, by (17) it follows that y_i e_i ∈ P_1. As z ∈ span(R_2), for the same reasons as above there also exist β′ ∈ T + , y′ ∈ T n and i′, j′ ∈ supp(y′) such that β′(e_{i′} ⊕ y′_{j′} (y′_{i′})^{-1} e_{j′}) ≤ z and y′_{i′} e_{i′} ∈ P_2.

On the (P , R)-decomposition of conical hemispaces
We know that the (P , R)-decomposition of a conical hemispace, as a linear span of quasisectors (Theorem 3.12), consists of unit vectors and linear combinations of two unit vectors (Theorem 4.2). Therefore, to describe the (P , R)-decompositions of a joined pair of conical hemispaces we need to understand how the linear combinations of two unit vectors are distributed among them. With this aim, we first associate with a non-trivial joined pair (V_1, V_2) of conical hemispaces in T n the index sets I := {i ∈ [n] | e_i ∈ V_1} and J := {j ∈ [n] | e_j ∈ V_2}. (24) The following lemma is elementary and will rather serve to define below the coefficients α_ij . In what follows, for some purposes it will be convenient to assume that scalars can also take the value +∞ (the structure which is obtained defining λ ⊕ (+∞) := +∞, (+∞) ⊕ λ := +∞ for λ ∈ T, λ ⊗ (+∞) := +∞, (+∞) ⊗ λ := +∞ for λ ∈ T + and 0 ⊗ (+∞) := 0, (+∞) ⊗ 0 := 0 is usually known as the completed semifield, see for instance [9]) and to adopt the convention e_i ⊕ λe_j = e_j if λ = +∞. (25)

Lemma 4.6. Let (V_1, V_2) be a non-trivial joined pair of conical hemispaces of T n , and let I, J ⊂ [n] be defined as in (24). Then, for any i ∈ I and j ∈ J we have sup{α ∈ T ∪ {+∞} | e_i ⊕ αe_j ∈ V_1} = inf{β ∈ T ∪ {+∞} | e_i ⊕ βe_j ∈ V_2}.

Proof. In the sequel, we will use the fact that every linear combination of two unit vectors belongs either to V_1 or to V_2, which follows from V_1 ∩ V_2 = {0} and V_1 ∪ V_2 = T n .
Assume first that inf{β ∈ T ∪ {+∞} | e_i ⊕ βe_j ∈ V_2} = +∞. Then e_i ⊕ βe_j ∈ V_1 for every β ∈ T, and both sides of the claimed equality are +∞. Observe next that we have the implication

e_i ⊕ βe_j ∈ V_2 and α ≥ β ⟹ e_i ⊕ αe_j ∈ V_2, (26)

since e_j ∈ V_2 and e_i ⊕ αe_j = (e_i ⊕ βe_j) ⊕ αe_j when α ≥ β. Further, the inequality

sup{α ∈ T | e_i ⊕ αe_j ∈ V_1} ≤ inf{β ∈ T ∪ {+∞} | e_i ⊕ βe_j ∈ V_2} (27)

holds, because if we had > in (27), then there would exist α, β ∈ T with α > β such that e_i ⊕ αe_j ∈ V_1 and e_i ⊕ βe_j ∈ V_2. Then by e_i ⊕ βe_j ∈ V_2 and (26) it would follow that e_i ⊕ αe_j ∈ V_2, whence e_i ⊕ αe_j ∉ V_1, a contradiction. If inf{β ∈ T ∪ {+∞} | e_i ⊕ βe_j ∈ V_2} = 0, then the lemma follows from (27). Thus, it remains to consider the case inf{β ∈ T ∪ {+∞} | e_i ⊕ βe_j ∈ V_2} ∈ T_+. In this case, by the definition of the infimum, e_i ⊕ αe_j ∉ V_2 for all α < inf{β ∈ T ∪ {+∞} | e_i ⊕ βe_j ∈ V_2}. Then, since every linear combination of two unit vectors belongs either to V_1 or to V_2, we have e_i ⊕ αe_j ∈ V_1 for all such α, and so (27) must be satisfied with equality. This completes the proof. □

Henceforth, the matrix whose entries are the coefficients (28) will be referred to as the α-matrix (associated with the non-trivial joined pair (V_1, V_2) of conical hemispaces). Besides, with each coefficient α_ij we associate the pair of subsets σ^(−)_ij and σ^(+)_ij of T ∪ {+∞}, collecting the scalars λ for which e_i ⊕ λe_j belongs to V_1 and to V_2, respectively. Thus, by Lemma 4.6 it follows that σ^(−)_ij ∪ σ^(+)_ij = T ∪ {+∞} for any i ∈ I and j ∈ J. In the sequel, we write I_1 + · · · + I_m = I if I_k for k ∈ [m] and I are index sets such that I_1 ∪ · · · ∪ I_m = I and I_1, . . . , I_m are pairwise disjoint.
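The conventions on the completed semifield listed above can be made concrete in a few lines of code. This is a minimal sketch in the max-times setting (the helper names oplus, otimes and combo are ours); note the special rule 0 ⊗ (+∞) = 0 and the convention (25) for e_i ⊕ λe_j with λ = +∞.

```python
import math

# Arithmetic of the completed max-times semifield T ∪ {+∞}.
# The rules below are exactly the ones listed in the text;
# the helper names are our own.

def oplus(a, b):
    # tropical addition: maximum
    return max(a, b)

def otimes(a, b):
    # tropical multiplication: ordinary product, with the
    # extra rule 0 ⊗ (+∞) = (+∞) ⊗ 0 = 0
    if a == 0 or b == 0:
        return 0
    return a * b  # (+∞) ⊗ λ = +∞ for λ ∈ T_+ comes for free

def combo(i, lam, j, n):
    """The vector e_i ⊕ λ e_j in T^n, with the convention (25):
    e_i ⊕ λ e_j = e_j when λ = +∞."""
    if math.isinf(lam):
        return [1.0 if k == j else 0 for k in range(n)]
    x = [0] * n
    x[i] = 1.0                 # 1 is the multiplicative unit of T
    x[j] = oplus(x[j], lam)
    return x
```

For example, combo(0, math.inf, 1, 3) returns the unit vector e_2 = (0, 1, 0), in agreement with (25).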
We now formulate one of the main results of the paper: a characterization of conical hemispaces in terms of their generators. We immediately prove that any conical hemispace fulfills the given conditions; the proof that these conditions are also sufficient will occupy the remainder of this section.
Theorem 4.7. A non-trivial cone V ⊆ T^n is a conical hemispace if and only if it is of the form (31), where I is a non-empty proper subset of [n], J := [n] \ I, and the sets σ^(−)_ij, σ^(+)_ij satisfy the rank-one condition (32) for any i_1, i_2 ∈ I and j_1, j_2 ∈ J.
Proof of the "only if" part of Theorem 4.7. Define V_1 := V and V_2 := (T^n \ V) ∪ {0}. Thus, (V_1, V_2) is a non-trivial joined pair of conical hemispaces in T^n because V is a conical hemispace and non-trivial. Let I and J be the sets defined in (24). Then, I and J satisfy J = [n] \ I, and these sets are non-empty. Indeed, by Theorem 4.2 both V_1 and V_2 are generated by unit vectors and linear combinations of two unit vectors. The distribution of unit vectors is given by I and J. Observe that (33) conforms to this distribution, since for any i ∈ I, e_i belongs to the generators of V_1 as 0 ∈ σ^(−)_ij, and for any j ∈ J, e_j belongs to the generators of V_2 since +∞ ∈ σ^(+)_ij. This obviously implies that no linear combination of e_{i_1} and e_{i_2} with i_1, i_2 ∈ I (resp. of e_{j_1} and e_{j_2} with j_1, j_2 ∈ J) is necessary in (33) to generate V_1 (resp. V_2). For i ∈ I and j ∈ J, the distribution of the linear combinations of e_i and e_j is given by (30). Since the sets σ^(−)_ij and σ^(+)_ij collect precisely the scalars appearing in (30), it follows that (33) also conforms to this distribution.
It remains to prove (32). Assume by contradiction that (32) fails for some i_1, i_2 ∈ I and j_1, j_2 ∈ J. Then, there exist β_{i_1 j_2} ∈ σ^(+)_{i_1 j_2}, β_{i_2 j_1} ∈ σ^(+)_{i_2 j_1}, γ_{i_1 j_1} ∈ σ^(−)_{i_1 j_1} and γ_{i_2 j_2} ∈ σ^(−)_{i_2 j_2} witnessing this failure. For this to hold, the products β_{i_1 j_2} β_{i_2 j_1} and γ_{i_1 j_1} γ_{i_2 j_2} should be in T_+, and hence β_{i_1 j_2}, β_{i_2 j_1}, γ_{i_1 j_1} and γ_{i_2 j_2} should be in T_+. Then, we make the linear combination in which λ satisfies λβ_{i_2 j_1} = γ_{i_1 j_1}, hence also λγ_{i_2 j_2} = β_{i_1 j_2}, and observe that this yields a contradiction. This completes the proof of the "only if" part of Theorem 4.7. The "if" part will be proved later (formally after Remark 4.17, but the preparations for this proof will start right after Corollary 4.9). □

The following result shows that if a non-trivial cone V defined as in (31) is a conical hemispace, then (T^n \ V) ∪ {0} can be defined as V_2 in (33), and the scalars σ_ij are precisely the entries of the α-matrix associated with the non-trivial joined pair of conical hemispaces (V, (T^n \ V) ∪ {0}).
Proposition 4.8. Assume that the non-trivial cone V defined by (31) is a conical hemispace. Then V_1 := V and V_2 := (T^n \ V) ∪ {0}, with V_2 given by (33) in terms of the sets σ^(+)_ij, form a joined pair of conical hemispaces, and we have σ_ij = α_ij for all i ∈ I and j ∈ J, with α_ij defined by (28).
Proof. Let R denote the generating set of V_1 in (33), consisting of the unit vectors e_i for i ∈ I and the vectors e_i ⊕ λe_j with λ ∈ σ^(−)_ij. We first claim that the unit vectors and linear combinations of two unit vectors contained in V_1 are precisely the ones in R. Indeed, given j ∈ J, since e_j ∉ R, it readily follows that e_j ∉ V_1. Then, the unit vectors contained in V_1 are precisely the ones in R (i.e. e_i for i ∈ I). Assume now that e_i ⊕ βe_j ∈ V_1 for some i ∈ I, j ∈ J and β ∈ T_+. Then, we have e_i ⊕ βe_j = ⊕_{y∈R} δ_y y, where only a finite number of the scalars δ_y is not equal to 0. (36) Then 1 = (e_i ⊕ βe_j)_i = (⊕_{y∈R} δ_y y)_i = ⊕_{y∈R} δ_y y_i = ⊕_{y∈R} δ_y, and so δ_y ≤ 1 for all y ∈ R. Besides, since only a finite number of the scalars δ_y is not equal to 0 and β = (e_i ⊕ βe_j)_j = (⊕_{y∈R} δ_y y)_j = ⊕_{y∈R} δ_y y_j, we conclude that β = δ_y y_j for some y ∈ R such that δ_y ≠ 0. Using (36), the claim follows. Finally, the fact that the entries α_ij of the α-matrix associated with (V_1, V_2) coincide with the scalars σ_ij follows from their definition (28) and from (34) and (35). □

Condition (32) will be called the rank-one condition, due to the following observation.

Corollary 4.9. If condition (32) is satisfied and σ_ij ∈ T_+ for all i ∈ I and j ∈ J, then the σ-matrix has rank one, i.e. there exist β_i, γ_j ∈ T_+ such that σ_ij = γ_j^{-1} β_i for all i ∈ I and j ∈ J. In particular, if all the entries of an α-matrix belong to T_+, then it has rank one.
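In the max-times setting, the rank-one property of Corollary 4.9 is an ordinary statement about positive matrices: every 2 × 2 minor must balance, i.e. σ_{i_1 j_1} σ_{i_2 j_2} = σ_{i_1 j_2} σ_{i_2 j_1}, and matrices of the form σ_ij = γ_j^{-1} β_i are exactly those that pass. A quick numerical check (our own illustration, not code from the paper):

```python
import itertools
import numpy as np

def is_rank_one(S, tol=1e-9):
    """Check S[i1,j1]*S[i2,j2] == S[i1,j2]*S[i2,j1] for all 2x2
    submatrices (max-times setting; all entries assumed in T_+,
    i.e. positive reals)."""
    m, n = S.shape
    for i1, i2 in itertools.combinations(range(m), 2):
        for j1, j2 in itertools.combinations(range(n), 2):
            if abs(S[i1, j1] * S[i2, j2] - S[i1, j2] * S[i2, j1]) > tol:
                return False
    return True

# A positive matrix of the form sigma_ij = beta_i / gamma_j passes:
beta = np.array([2.0, 3.0, 5.0])
gamma = np.array([1.0, 4.0])
S = beta[:, None] / gamma[None, :]
assert is_rank_one(S)
# while a generic positive matrix does not:
assert not is_rank_one(np.array([[1.0, 2.0], [3.0, 4.0]]))
```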
In the rest of this subsection, we assume that I is a non-empty proper subset of [n] and V is the non-trivial cone defined by (31), where J := [n] \ I and the sets σ^(−)_ij, each an interval of one of the two admissible forms, satisfy the rank-one condition (32). With the objective of showing that any such cone is a conical hemispace, we first give a detailed description of the "thin structure" of the corresponding σ-matrix that follows from the rank-one condition (32). This description can also be seen as one of the main results.
Proposition 4.10. If the sets J^<_i, J_i, J^0_i and J^∞_i are defined for i ∈ I according to the form of the intervals σ^(−)_ij, then by the rank-one condition (32) the properties (i)–(v) below follow.

Proof. In this proof, we will use three marks to represent an entry of a matrix which belongs to T_+, to T_+ ∪ {+∞} and to T_+ ∪ {0} = T, respectively.
(i) This property readily follows from the definition of the sets J^<_i, J_i, J^0_i and J^∞_i.
(ii) If these conditions are violated, then the σ-matrix has one of the following 2 × 2 minors violating (32).
(iii) If this condition is violated, then the σ-matrix has one of the following 2 × 2 minors violating (32). More precisely, one of the first two minors will appear when the corresponding inclusions do not hold for some i_1, i_2, since then there exist j_1 and j_2 such that σ_{i_1 j_1} ∈ σ^(+)_{i_1 j_1}. By definition, note that there exist subsets L_1, . . . , L_p, K_1, . . . , K_p and J_1, . . . , J_p of J such that the relations above hold. Indeed, if i_1 ∈ I_{r−1} and i_2 ∈ I_r, using Remark 4.11, part (ii) of Proposition 4.10 and the fact that the sets involved are nested, we conclude that one of two cases occurs, and in either case the claim follows. Finally, note the consequence of part (iv) of Proposition 4.10. Observe that V is also generated by the set above, since any vector of the form e_i ⊕ λe_j, where j ∈ J_i and λ < σ_ij, can be expressed as a linear combination of e_i ⊕ σ_ij e_j and e_i. Moreover, defining the cones D_i for i ∈ I, we have V = ⊕_{i∈I} (C_i ⊕ D_i).

Lemma 4.12.
There exist β_h ∈ T_+, for h ∈ I, and γ_j ∈ T_+, for j ∈ ∪_{i∈I} (J_i ∪ J^<_i), such that for each i ∈ I the set of non-null vectors of the cone C_i is the set of vectors satisfying (40).

Proof. Part (v) of Proposition 4.10 implies that there exist β_i, γ_j ∈ T_+ such that σ_ij = γ_j^{-1} β_i for all σ_ij ∈ T_+. Thus, the cone C_i can be equivalently defined by

C_i = span({e_i} ∪ {γ_j e_i ⊕ β_i e_j | j ∈ J_i} ∪ {γ_j e_i ⊕ λβ_i e_j | j ∈ J^<_i, λ < 1}).

Next, any non-null vector x ∈ C_i can be written as a linear combination of vectors in the cones C_ij and C^<_ij with the same coefficient x_i at e_i. The generators of C_ij and C^<_ij satisfy the first and second conditions of (40) respectively, hence x also satisfies all these conditions. Conversely, each non-null vector x satisfying (40) can be written (using similar ideas to those in the proof of Proposition 4.1) as a linear combination of the generators of C_ij and C^<_ij, and so it belongs to C_i. □

Later we will show that certain Minkowski sums of the cones C_i are conical hemispaces. To this end, note that when, for Ĩ ⊆ I, we have J^<_i ∪ J_i = ∅ for all i ∈ Ĩ, the sum ⊕_{i∈Ĩ} C_i takes the form (41). Evidently, any set given by (41) is a conical hemispace.
Theorem 4.14. Given x ∈ T^n with x_i ≠ 0 for some i ∈ I, let h := min{r ∈ [p] | x_t ≠ 0 for some t ∈ I_r} and let x̂ ∈ T^n be the vector that agrees with x on the coordinates supporting ⊕_{i∈I_h} C_i and vanishes elsewhere. Then x ∈ V if and only if x̂ ∈ ⊕_{i∈I_h} C_i.

Proof. The "if" part: Let t ∈ I_h be such that x_t ≠ 0. Then, by the definition of x̂, we can write x as the sum of x̂ and multiples of the vectors e_i, for i ∈ I, and e_i ⊕ λe_j, for i ∈ I_h, j ∈ K_h and λ ∈ T_+. It follows that x ∈ V because x̂ ∈ ⊕_{i∈I_h} C_i ⊆ V, e_i ∈ V for all i ∈ I and e_i ⊕ λe_j ∈ V for all i ∈ I_h, j ∈ K_h and λ ∈ T_+.

The "only if" part: Let x ∈ V. As V = ⊕_{i∈I} (C_i ⊕ D_i), we have x = ⊕_{i∈I} (y^i ⊕ z^i) for some y^i ∈ C_i and z^i ∈ D_i. Note that y^i ⊕ z^i = 0 for i ∈ I_r with r < h, since y^i_i ⊕ z^i_i = x_i = 0 for such vectors. So x = ⊕_{i∈I_r, r≥h} (y^i ⊕ z^i).
We will show that the y^i can be chosen so that x̂ = ⊕_{i∈I_h} y^i ∈ ⊕_{i∈I_h} C_i. For this, observe that for all i ∈ I_h, since e_i ∈ C_i, we can assume x_i = x̂_i = y^i_i, adding x_i e_i to y^i if necessary. This fixes our choice of the y^i. Then by (37), for r > h we have J_r ∪ K_r ⊆ K_h, or equivalently, J_h ∪ L_h ⊆ L_r. It follows from (39) and the above that supp(x̂) is contained in the support of ⊕_{i∈I_h} C_i, and hence x̂ ∈ ⊕_{i∈I_h} C_i. □

We now describe ⊕_{i∈I_r} C_i as the set of vectors lying in a halfspace (42) and satisfying a constraint (43).

Lemma 4.15.
If J_r ≠ ∅, then the non-null elements of the cone ⊕_{i∈I_r} C_i are the vectors x ∈ T^n that satisfy

x_i ≠ 0 for some i ∈ I_r, ⊕_{j∈J_r} γ_j x_j ≤ ⊕_{i∈I_r} β_i x_i and x_j = 0 for j ∉ I_r ∪ J_r, (42)

and, in addition,

γ_j x_j = ⊕_{i∈I_r} β_i x_i ⟹ ∃k ∈ I_r such that γ_j x_j = β_k x_k and j ∈ J_k. (43)

Proof. Assume first that the conditions are satisfied for x ∈ T^n. Given j ∈ J_r, if γ_j x_j = ⊕_{i∈I_r} β_i x_i, let k ∈ I_r be such that β_k x_k = ⊕_{i∈I_r} β_i x_i and j ∈ J_k. Then, the vector y^{kj} := e_k ⊕ x_j x_k^{-1} e_j belongs to C_k because j ∈ J_k and x_j x_k^{-1} = γ_j^{-1} β_k = σ_kj. Given j ∈ J_r such that γ_j x_j < ⊕_{i∈I_r} β_i x_i, let k be any element of I_r such that β_k x_k attains the maximum in ⊕_{i∈I_r} β_i x_i. The vector y^{kj} := e_k ⊕ x_j x_k^{-1} e_j again belongs to C_k, because j ∈ J_k ∪ J^<_k and x_j x_k^{-1} < σ_kj. It follows that x ∈ ⊕_{i∈I_r} C_i as the sum of the vectors x_i e_i for i ∈ I_r and x_k y^{kj} = x_k e_k ⊕ x_j e_j over all the vectors y^{kj} considered above.
Assume now that x ∈ ⊕_{i∈I_r} C_i is non-null. Represent x = ⊕_{i∈I_r} y^i where y^i ∈ C_i. Using (40) we observe that each vector y in C_i for i ∈ I_r satisfies ⊕_{j∈J_r} γ_j y_j ≤ β_i y_i and y_h = 0 for all h ∉ I_r ∪ J_r, hence it lies in the halfspace (42), and so the same holds for x. Besides, the fact that x ≠ 0 and (42) imply that x_i ≠ 0 for some i ∈ I_r. Finally, if γ_j x_j = ⊕_{i∈I_r} β_i x_i, let k ∈ I_r be such that x_j = y^k_j. Since y^k ∈ C_k, by (40) we have γ_j y^k_j ≤ β_k y^k_k, and it follows that γ_j x_j = γ_j y^k_j ≤ β_k y^k_k ≤ β_k x_k ≤ ⊕_{i∈I_r} β_i x_i.
All these inequalities turn into equalities, so we have γ_j y^k_j = β_k y^k_k with y^k ∈ C_k, and hence j ∈ J_k by (40). This shows that the conditions of the lemma are also necessary. □

Proposition 4.16. For each r ∈ [p] the cone ⊕_{i∈I_r} C_i is a conical hemispace.
Proof. The case when J_r = ∅ was treated in (41), so we can assume J_r ≠ ∅. We have shown that the non-trivial elements of ⊕_{i∈I_r} C_i are precisely the elements of T^n that satisfy (42) and (43). In the rest of the proof, we assume that the complement of I_r ∪ J_r in [n] is empty; equivalently, we will show that ⊕_{i∈I_r} C_i is a conical hemispace in the subspace {x | x_i = 0 for i ∉ I_r ∪ J_r}, from which it follows that ⊕_{i∈I_r} C_i is a conical hemispace in T^n. (For this, verify that the complement of a cone lying in {x | x_i = 0 for i ∈ Ĩ}, for Ĩ a subset of [n], is a cone if the restriction of that complement to {x | x_i = 0 for i ∈ Ĩ} is a cone.) Thus, we assume I_r ∪ J_r = [n].
Let us build a "reflection" of ⊕_{i∈I_r} C_i, swapping the roles of I_r and J_r, and the roles of J_k and J^<_k in (42) and (43). Namely, we define it as the set C̃ containing 0 and all the vectors x ∈ T^n that satisfy

⊕_{i∈I_r} β_i x_i ≤ ⊕_{j∈J_r} γ_j x_j, (44)

and, in addition,

β_i x_i = ⊕_{j∈J_r} γ_j x_j ⟹ ∃k ∈ J^<_i such that β_i x_i = γ_k x_k. (45)

We need to show that C̃ is a cone. Evidently, x ∈ C̃ implies λx ∈ C̃ for all λ ∈ T. If x, y ∈ C̃ \ {0} and z = x ⊕ y satisfies (44) with strict inequality, then z ∈ C̃. If not, let i be such that β_i z_i = ⊕_{j∈J_r} γ_j z_j, and assume z_i = x_i. It follows that β_i x_i = ⊕_{j∈J_r} γ_j x_j, and then there exists k ∈ J_r such that γ_k x_k = β_i x_i and k ∈ J^<_i. Further observe that γ_k z_k ≥ γ_k x_k = β_i x_i = β_i z_i = ⊕_{j∈J_r} γ_j z_j ≥ γ_k z_k, and so γ_k z_k = β_i z_i, showing that z satisfies (45) and is in C̃.
We now show that C̃ \ {0} is the complement of ⊕_{i∈I_r} C_i, so that C̃ and ⊕_{i∈I_r} C_i form a joined pair of conical hemispaces. Building the complement of ⊕_{i∈I_r} C_i by negating (42) and (43), we obtain two branches of non-null vectors: those for which the inequality in (42) fails, and those attaining equality in (42) for some j ∈ J_r without a witness k ∈ I_r as in (43). It can be verified that both branches belong to the "reflection" C̃ as defined by (44) and (45).
We are now left to show that ⊕_{i∈I_r} C_i and its "reflection" C̃ do not contain any common non-null vector. We will use (38), i.e., the fact that for each i_1, i_2 ∈ I_r the sets J^<_{i_1} and J^<_{i_2} are comparable by inclusion. This property means that the sets J^<_i, and hence the sets J_i = J \ J^<_i, are nested, so the elements of I_r and J_r can be assumed to be ordered so that the properties (46) are satisfied. Assume now x ∈ (⊕_{i∈I_r} C_i) ∩ C̃ but x ≠ 0. Then, we necessarily have ⊕_{i∈I_r} β_i x_i = ⊕_{j∈J_r} γ_j x_j ≠ 0. Let i_1 ∈ I_r be such that β_{i_1} x_{i_1} = ⊕_{j∈J_r} γ_j x_j. Since x ∈ C̃, there exists j_1 ∈ J^<_{i_1} such that ⊕_{j∈J_r} γ_j x_j = γ_{j_1} x_{j_1}. As x ∈ ⊕_{i∈I_r} C_i, there exists i_2 ∈ I_r such that β_{i_2} x_{i_2} = ⊕_{i∈I_r} β_i x_i = γ_{j_1} x_{j_1} and j_1 ∈ J_{i_2}, and so i_2 < i_1 by (46). Again, using the fact that x ∈ C̃ and β_{i_2} x_{i_2} = ⊕_{j∈J_r} γ_j x_j, we conclude that there exists j_2 ∈ J^<_{i_2} such that ⊕_{j∈J_r} γ_j x_j = γ_{j_2} x_{j_2}, and so j_1 < j_2 by (46). Repeating this argument again and again we obtain infinite sequences i_1 > i_2 > i_3 > · · · and j_1 < j_2 < j_3 < · · · , which is impossible.
Hence, ⊕_{i∈I_r} C_i and C̃ form a joined pair of conical hemispaces. □

Remark 4.17. It can be shown that C̃ = ⊕_{j∈J_r} C̃_j, where the C̃_j are defined as "reflections" of the cones C_i, i.e., cones whose non-null vectors satisfy the analogues of the conditions (40) with the roles of I_r and J_r interchanged. The proof of C̃ = ⊕_{j∈J_r} C̃_j is based on the arguments of Lemmas 4.12 and 4.15. As this observation is just a remark, we will not provide a proof.
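Conditions (42) and (43) of Lemma 4.15 give a direct membership test for the cone ⊕_{i∈I_r} C_i. The following sketch (max-times setting; the data layout, in particular the dictionary Jle playing the role of the sets J_k, is our own) implements that test:

```python
def in_sum_Ci(x, Ir, Jr, beta, gamma, Jle):
    """Membership test for the cone sum of C_i over I_r via (42)-(43)
    (max-times setting). x maps coordinates to values; beta, gamma map
    I_r resp. J_r to positive weights; Jle maps each k in I_r to the
    set playing the role of J_k (coordinates where a tie is allowed)."""
    if all(x.get(i, 0) == 0 for i in Ir):
        return False                 # (42): need x_i != 0 for some i in I_r
    if any(v != 0 for k, v in x.items() if k not in Ir and k not in Jr):
        return False                 # (42): support condition
    lhs = max((gamma[j] * x.get(j, 0) for j in Jr), default=0)
    rhs = max(beta[i] * x.get(i, 0) for i in Ir)
    if lhs > rhs:
        return False                 # (42): halfspace inequality
    for j in Jr:                     # (43): every tie needs a witness k
        if gamma[j] * x.get(j, 0) == rhs:
            if not any(beta[k] * x.get(k, 0) == rhs and j in Jle[k] for k in Ir):
                return False
    return True
```

For example, with I_r = {0}, J_r = {1} and unit weights, the vector with x_0 = x_1 = 1 is accepted only when the tie at j = 1 has the witness 1 ∈ Jle[0].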
Proof of the "if" part of Theorem 4.7. Let C_i ⊂ T^n, for i ∈ I, be defined by (39) (see also (40) for a working equivalent definition, and Lemma 4.15 for an equivalent definition of ⊕_{i∈I_r} C_i). Let the operator x ↦ x̂ be defined as in Theorem 4.14.
Let x belong to the complement V^c := T^n \ V (which in particular means x ≠ 0) and let λ ∈ T_+. If x_i = 0 for all i ∈ I, then λx ∈ V^c is immediate by Remark 4.13 because x ≠ 0. If x_i ≠ 0 for some i ∈ I, let h := min{r ∈ [p] | x_t ≠ 0 for some t ∈ I_r}. Then, x̂ ∉ ⊕_{i∈I_h} C_i by Theorem 4.14 because x ∉ V. Note that for y := λx we have min{r ∈ [p] | y_t ≠ 0 for some t ∈ I_r} = h and ŷ = λx̂. By Theorem 4.14 it follows that y ∈ V^c because ŷ = λx̂ ∉ ⊕_{i∈I_h} C_i. Let now x, y ∈ V^c (which in particular means x ≠ 0 and y ≠ 0) and define z := x ⊕ y. Assume first that x_i = y_i = 0 for all i ∈ I. Then, z_i = 0 for all i ∈ I, and as z ≠ 0, we conclude z ∈ V^c by Remark 4.13.
In the second place, assume x_i ≠ 0 for some i ∈ I but y_t = 0 for all t ∈ I. Then, note that ẑ = x̂ ⊕ w for a suitable vector w supported outside I. If x_1 = x_3 = 0, we have x = x_4(e_2 ⊕ e_4) ⊕ x_2 e_2 ∈ V_1 when x_2 ≤ x_4, with γ := x_4^{-1} x_2 in the corresponding representation.

Closed hemispaces and closed halfspaces
We now consider the case of closed conical hemispaces, and show that these are precisely the closed homogeneous halfspaces, i.e., cones of the form

{x ∈ T^n | ⊕_{j∈J} γ_j x_j ≤ ⊕_{i∈I} β_i x_i and x_l = 0 for l ∈ L}, (48)

where I, J and L are pairwise disjoint subsets of [n] and β_i, γ_j ∈ T_+.

Theorem 4.18. Closed conical hemispaces = closed homogeneous halfspaces.

Proof. Closed homogeneous halfspaces are closed conical hemispaces, since the complement of (48) is given by the set of non-null vectors violating one of its defining constraints, and adding the null vector 0 to this complement we get a cone.
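The first claim of this proof, that the set of vectors violating the halfspace inequality strictly is closed under ⊕ and scaling, can be sanity-checked numerically. This is a randomized illustration in the max-times setting with L = ∅ and toy coefficients of our own choosing:

```python
import random

# Randomized sanity check (max-times, toy instance of our own) that the
# strict complement {x != 0 : max_j g_j x_j > max_i b_i x_i} of a
# homogeneous halfspace is closed under componentwise max and scaling.
random.seed(0)
n, I, J = 4, [0, 1], [2, 3]
b = {0: 2.0, 1: 1.0}
g = {2: 1.0, 3: 3.0}

def in_complement(x):
    return max(g[j] * x[j] for j in J) > max(b[i] * x[i] for i in I)

def sample_complement():
    while True:
        x = [random.uniform(0, 1) for _ in range(n)]
        if in_complement(x):
            return x

for _ in range(1000):
    x, y = sample_complement(), sample_complement()
    z = [max(a, c) for a, c in zip(x, y)]        # tropical sum x ⊕ y
    lam = random.uniform(0.1, 10.0)
    assert in_complement(z)                      # closed under ⊕
    assert in_complement([lam * v for v in x])   # closed under scaling
```

The key point, mirrored in the assertions, is that a strict inequality between maxima is preserved when taking componentwise maxima of two vectors that both satisfy it.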
Conversely, if a conical hemispace V is closed, then in (31) we have σ_ij ∈ T for all i ∈ I and j ∈ J, and the sets σ^(−)_ij can only be closed intervals. Equivalently, the sets J^<_i and J^∞_i of Proposition 4.10 are empty for all i ∈ I, and so K_r = ∅ for r ∈ [p]. Observe that this means that L_r = J if J_r ≠ ∅, which in turn implies p = r. Assume first that p ≥ 2, which implies J_1 ≠ ∅ as mentioned above. Then, we have J_2 ∪ K_2 ⊆ K_1 = ∅ by (37). It follows that J_2 = ∅, and so p = 2. Thus, we have I = I_1 ∪ I_2 and V = ⊕_{i∈I_1∪I_2} C_i. By Lemma 4.15, the cone ⊕_{i∈I_1} C_i can be represented by

⊕_{j∈J_1} γ_j x_j ≤ ⊕_{i∈I_1} β_i x_i and x_j = 0 for j ∈ L_1 ∪ I_2.
(49) Note that this is just condition (42), and condition (43) is always satisfied as J_k = J_1 for all k ∈ I_1. Since J_2 = ∅, it follows that ⊕_{i∈I_2} C_i is generated by {e_i | i ∈ I_2}, and then (49) implies that V = ⊕_{i∈I_1∪I_2} C_i is the set of all vectors satisfying ⊕_{j∈J_1} γ_j x_j ≤ ⊕_{i∈I_1} β_i x_i and x_j = 0 for j ∈ L_1, which is a closed homogeneous halfspace. Note that by Lemma 4.15 we arrive at the same conclusion if we assume that p = 1 and J_1 ≠ ∅.
Finally, if we assume that p = 1 and J_1 = ∅, then V = ⊕_{i∈I_1} C_i is generated by {e_i | i ∈ I_1 = I}, i.e., V = {x ∈ T^n | x_j = 0 for j ∈ J} is a closed homogeneous halfspace. □

We now recall an important observation of [3], which will allow us to easily extend the result of Theorem 4.18 to general hemispaces. For the reader's convenience, we give an elementary proof based on (tropical) segments and their perturbations.

Lemma 4.19. (See Briec and Horvath [3].) Closures of hemispaces = closed hemispaces.

Proof (in the max-times setting, with usual arithmetic). Consider the closure of a hemispace H in R^n_max,×. Since the closure of a convex set is a closed convex set (see e.g. [12,5]), we only need to show that the complement of this closure is also convex. This complement is open, so it consists of all points x ∉ H for which there exists an open "ball" B^ε_x := {u ∈ R^n_max,× | |u_i − x_i| < ε for all i ∈ [n]} such that B^ε_x ⊆ R^n_max,× \ H. We need to show that if x and y have this property, then any linear combination z = λx ⊕ μy with λ ⊕ μ = 1 also does. If we assume λ = 1, then every point sufficiently close to z can be written as a linear combination λ′x′ ⊕ μ′y′ with x′ ∈ B^ε_x and y′ ∈ B^ε_y, so a small enough ball around z is also contained in the complement of the closure. □

Theorem 4.20. (See Briec and Horvath [3].) Closed hemispaces = closed halfspaces.

Proof. We need to consider the case of a closed halfspace that is not necessarily homogeneous, and of a closed hemispace. A general closed halfspace is a set of the form

{x ∈ T^n | ⊕_{j∈J} γ_j x_j ⊕ α ≤ ⊕_{i∈I} β_i x_i ⊕ δ and x_l = 0 for l ∈ L}, (51)

where I, J and L are pairwise disjoint subsets of [n]. As in the case of conical hemispaces, it can be argued that the complement is convex too, so (51) describes a hemispace.
Conversely, by Theorem 4.5, for a general hemispace H ⊆ T^n there exists a conical hemispace V ⊆ T^{n+1} such that H = C_1 V. Even if H is closed, V may not be closed in general. However, if cl(V) denotes the closure of V, then the section C_1 cl(V) still coincides with H. Indeed, for any z = (x, 1) ∈ cl(V) there exists a sequence {z^k}_{k∈N} of vectors of V such that lim_k z^k = z. Since z_{n+1} = 1 and, by Proposition 2.7, C_α V = {αx | x ∈ H} for any non-null α, we can assume that z^k = (λ_k x^k, λ_k) for some λ_k ∈ T and x^k ∈ H.
It follows that lim_k λ_k = 1 and lim_k x^k = x. Thus, x ∈ H because H is closed. Therefore, we conclude that C_1 cl(V) = H. By Lemma 4.19 it follows that the complement of cl(V) is convex, and so (T^{n+1} \ cl(V)) ∪ {0} and cl(V) form a joined pair of conical hemispaces. Then, by Theorem 4.18, cl(V) can be expressed as the solution set of

⊕_{j∈J} γ_j x_j ⊕ αx_{n+1} ≤ ⊕_{i∈I} β_i x_i ⊕ δx_{n+1} and x_j = 0 for j ∈ L,

for some pairwise disjoint subsets I, J and L of [n]. The original hemispace in T^n appears as the section of this closed homogeneous halfspace by x_{n+1} = 1, and so it is of the form (51). □

⊕ span{e_i ⊕ λe_j | i ∈ I \ {n + 1}, j ∈ J, λ ∈ σ^(+)_ij} (53) otherwise. Moreover, if H is a hemispace given by the right-hand side of (52), then the complementary hemispace is given by the right-hand side of (53), and vice versa.
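The homogenization used in this proof, lifting H ⊆ T^n to a cone in T^{n+1} and recovering H as the section x_{n+1} = 1, can be illustrated on a toy example (the concrete set H below is ours, in the max-times setting with n = 1):

```python
# Homogenization and section on a toy example: H = {x in T : x <= 1}
# inside T^1 (this concrete H is ours). Lifting H to the cone
# V = {lam * (x, 1) : x in H, lam in T} in T^2 and taking the section
# x_2 = 1 recovers H.

def in_H(x):
    return 0 <= x <= 1.0

def in_lifted_cone(z):
    # V = {(y, lam) : lam > 0 and y/lam in H} together with the null vector
    y, lam = z
    if y == 0 and lam == 0:
        return True
    return lam > 0 and in_H(y / lam)

# the section C_1 V = {x : (x, 1) in V} equals H:
for x in (0.0, 0.3, 1.0, 2.5):
    assert in_lifted_cone((x, 1.0)) == in_H(x)
# and the section at height alpha is {alpha * x : x in H} (cf. Proposition 2.7):
assert in_lifted_cone((1.0, 2.0))      # 1.0 = 2.0 * 0.5 with 0.5 in H
assert not in_lifted_cone((3.0, 2.0))  # 3.0 / 2.0 = 1.5 not in H
```

This mirrors the proof's mechanism: statements about hemispaces in T^n reduce to statements about conical hemispaces one dimension higher, where the halfspace description of Theorem 4.18 applies.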