Analysis of the weighted Tchebycheff weight set decomposition for multiobjective discrete optimization problems

Scalarization is a common technique to transform a multiobjective optimization problem into a scalar-valued optimization problem. This article deals with the weighted Tchebycheff scalarization applied to multiobjective discrete optimization problems. This scalarization consists of minimizing the weighted maximum distance of the image of a feasible solution to some desirable reference point. By choosing a suitable weight, any Pareto optimal image can be obtained. In this article, we provide a comprehensive theory of this set of eligible weights. In particular, we analyze the polyhedral and combinatorial structure of the set of all weights yielding the same Pareto optimal solution as well as the decomposition of the weight set as a whole. The structural insights are linked to properties of the set of Pareto optimal solutions, thus providing a profound understanding of the weighted Tchebycheff scalarization method and, as a consequence, also of all methods for multiobjective optimization problems using this scalarization as a building block.


Introduction
Multiobjective optimization has gained substantial and increasing attention in the optimization literature. This can be partly explained by the prevalence of multiple, conflicting objectives in practical applications, see e.g. [20] for a prominent example in cancer radiation treatment. From a theoretical point of view, multiobjective optimization is not only an interesting research area but it is also worth studying due to its connections to game theory, inverse optimization, and, among others, robust optimization, see [7,10,11].
A multiobjective optimization problem (MOP) with p objectives, p ∈ ℕ, p ≥ 2, can be stated as min_{x∈X} f(x), where X ⊆ ℝ^n, for n ∈ ℕ, is called the feasible set, and f = (f_1, …, f_p): ℝ^n → ℝ^p is the (vector-valued) objective function. A multiobjective discrete optimization problem (MODO) is an MOP with the additional restriction that X is a finite set. We denote by Y := f(X) := {y ∈ ℝ^p : y = f(x), x ∈ X} the set of images and call ℝ^n and ℝ^p the decision space and the image space, respectively. For images y, ȳ ∈ ℝ^p, the weak componentwise ordering is defined by y ≦ ȳ if and only if y_i ≤ ȳ_i for all i = 1, …, p; the componentwise ordering is defined by y ≤ ȳ if and only if y ≦ ȳ and y ≠ ȳ; and the strict componentwise ordering is defined by y < ȳ if and only if y_i < ȳ_i for all i = 1, …, p. Further, the nonnegative orthant is defined by ℝ^p_≧ := {y ∈ ℝ^p : y ≧ 0}; the sets ℝ^p_≥ and ℝ^p_> are defined analogously. Then, a feasible solution x dominates another feasible solution x' if and only if f(x) ≤ f(x'). A feasible solution x* ∈ X is efficient (weakly efficient) if there does not exist another feasible solution x ∈ X such that f(x) ≤ f(x*) (f(x) < f(x*)). We call an image y = f(x) (weakly) nondominated if x is (weakly) efficient and denote by Y_N (Y_wN) the set of (weakly) nondominated images. For a more detailed and thorough introduction to multiobjective optimization, we refer to [15].
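For finite image sets, these dominance notions translate directly into code. A minimal sketch (helper names ours, minimization throughout):

```python
# Sketch (not from the paper): computing the nondominated set Y_N of a
# finite image set Y by pairwise dominance checks, for minimization.
def dominates(y, z):
    """y dominates z: y <= z componentwise and y != z."""
    return all(a <= b for a, b in zip(y, z)) and y != z

def nondominated(Y):
    # keep exactly those images not dominated by any other image
    return [y for y in Y if not any(dominates(z, y) for z in Y)]

Y = [(1, 5), (2, 2), (3, 3), (5, 1)]
print(nondominated(Y))  # [(1, 5), (2, 2), (5, 1)]; (3, 3) is dominated by (2, 2)
```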
A scalarization systematically transforms an MOP into a single-objective problem using additional parameters, such as weights or reference points. In this context, three questions are of major interest: Is the optimal solution of the scalarized problem guaranteed to be (weakly) efficient? Conversely, can every efficient solution be obtained as an optimal solution of a scalarized problem? And if so, how must the parameters be chosen to obtain a specific efficient solution?
The well-known weighted sum scalarization chooses a nonnegative weight λ_i ≥ 0 for each objective function and solves the problem min_{y∈Y} λ^⊤y, see [35]. The image of every optimal solution of this scalar-valued problem is a (weakly) nondominated image of the original problem if λ ∈ ℝ^p_> (λ ∈ ℝ^p_≥) [18]. By varying the weights, other nondominated images can be found. Weighted sum scalarizations yield so-called supported nondominated images. These are located on the boundary of the convex hull of the set of images and, in general, form a strict subset of Y_N. Those nondominated images that are also extreme points of the convex hull of Y are called extreme supported nondominated images.
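The following sketch (our own toy instance, minimization) illustrates the limitation discussed next: an unsupported nondominated image such as (3, 3) below, which lies above the convex hull of the other images, is not returned by the weighted sum scalarization for any of the sampled weights:

```python
# Sketch (assumption: minimization over a finite image set Y): the weighted
# sum scalarization min_{y in Y} lambda . y can only return supported images.
def weighted_sum_argmin(Y, lam):
    return min(Y, key=lambda y: sum(l * yi for l, yi in zip(lam, y)))

Y = [(0, 4), (4, 0), (3, 3)]  # (3, 3) is nondominated but unsupported
hits = {weighted_sum_argmin(Y, (w, 1 - w)) for w in [i / 10 for i in range(11)]}
print(hits)  # {(0, 4), (4, 0)}: (3, 3) is never returned
```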
The basic idea of the weight set decomposition is quite intuitive and has been explored extensively for the weighted sum scalarization [2,6,19,26,28]. Each nondominated image has an associated weight set component, i.e., the set of all weight vectors for which the weighted sum scalarization yields the same nondominated image. The weight set decomposition is usually taken to be a (minimal) collection of weight set components that cover the weight set, the set of all eligible weights.
Weight set components offer decision makers additional insight into the nondominated set, and can be particularly useful for three or more objectives, when visualization of the nondominated set is difficult. For example, a weight set component with a comparatively large volume is obtained from a nondominated image that is more 'robust' with respect to changes in preferences of the single objectives. The intersections of weight set components also embody the adjacency structure of the supported nondominated images: two supported nondominated images are adjacent, if the dimension of the intersection of their weight set components is equal to the dimension of the weight set minus one. In addition to their value to decision makers, the construction of weight set components may also form an integral part of algorithms for generating sets of nondominated images or approximations to them. The adjacency structure can be especially helpful in the design of interactive methods [2,19,28].
However, the existence of unsupported nondominated images, i.e., images that are nondominated but not supported, and the fact that the corresponding solutions cannot be computed by the weighted sum scalarization limit the applicability of this particular scalarization. This shortcoming motivates the weighted Tchebycheff scalarization, which does not suffer from it.
Let s ∈ ℝ^p be a reference point, λ ∈ ℝ^p_≥ a weight vector, and ‖y‖^λ_∞ := max_{i=1,…,p} |λ_i y_i| the weighted max-norm on ℝ^p. Then, the weighted Tchebycheff scalarization can be stated as TS(λ): min_{x∈X} ‖f(x) − s‖^λ_∞. Typically, the reference point is chosen to be the ideal point y^I, defined by y^I_i := min_{x∈X} f_i(x), i = 1, …, p, or some utopia point y^U < y^I. For weights λ ∈ ℝ^p_> and reference points s ≦ y^I, every optimal solution of TS(λ) is weakly efficient for an MODO. If the optimal solution is unique, it is efficient [32]. Conversely, each nondominated image is indeed optimal for a weighted Tchebycheff scalarization problem with appropriately chosen weights [32].
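In contrast to the weighted sum, the weighted Tchebycheff scalarization can reach unsupported nondominated images. A minimal sketch (assuming reference point s = 0 and images in the positive orthant; toy data ours):

```python
# Sketch (assumptions: s = 0, Y in the positive orthant, minimization):
# TS(lambda) reduces to min over Y of max_i lambda_i * y_i for finite Y.
def tcheb_argmin(Y, lam):
    return min(Y, key=lambda y: max(l * yi for l, yi in zip(lam, y)))

Y = [(1, 5), (5, 1), (4, 4)]  # (4, 4) is nondominated but unsupported
print(tcheb_argmin(Y, (0.5, 0.5)))  # (4, 4): reachable via Tchebycheff
```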
In this work, we provide a first rigorous and comprehensive theory of the weighted Tchebycheff weight set components: we analyze the polyhedral and combinatorial structure of these sets and introduce an adjacency concept for nondominated images.

Related work
The Tchebycheff norm was introduced for biobjective optimization problems by Geoffrion in 1967 [17]. Bowman [8] and Wierzbicki [33], among others, suggest using the (weighted) Tchebycheff norm to find nondominated images of MOPs, even for nonlinear objective functions. To avoid weakly nondominated images that are not nondominated, modifications have been introduced: the lexicographic weighted Tchebycheff scalarization [32] chooses, among all optimal images, one with minimal 1-norm. The augmented weighted Tchebycheff norm [32] adds the 1-norm, scaled by a small parameter, to the objective function. The modified augmented weighted Tchebycheff norm [14,21] additionally uses weights in the augmentation term.
Since the distance to the reference point may provide useful information in the optimization process, many applications of these Tchebycheff scalarization techniques can be found in the context of interactive approaches, see [25] for an overview. For example, Steuer and Choo [32] utilize the (augmented) weighted Tchebycheff scalarization, while Luque et al. [24] develop such an approach for solving convex multiobjective programs using the lexicographic weighted Tchebycheff scalarization. For multiobjective mixed integer linear programming, Alves and Clímaco [1] combine a branch-and-bound approach with adjustments of the reference point, employing the augmented weighted Tchebycheff scalarization.
Weight set decomposition methods for the weighted sum scalarization date back to the work of Yu and Zeleny in 1975 [34], who introduce a generalized simplex method and link basic efficient solutions with the set of weights in the polyhedral cone defined by the corresponding basis matrix. For biobjective problems, the well-known dichotomic search approach [3,12] in fact calculates all extreme supported nondominated images. Benson and Sun [4,5] extend this idea and establish a link between extreme supported nondominated images of a multiobjective linear optimization problem and a partitioning of the weight set.
Przybylski et al. [28] adapt this technique to multiobjective integer programs. They state fundamental properties of the weight set components: each weight set component W^S(y) of an image y is a polytope, and knowing all extreme supported nondominated images is sufficient for its calculation. A weight set component has dimension equal to the dimension of the weight set if and only if the corresponding image is an extreme supported nondominated image, which implies that the set of extreme supported nondominated images is sufficient and necessary to cover the whole weight set. Further, two weight set components intersect in common faces only. That is, there exist a face F of W^S(y) and a face F' of W^S(y') such that F = F' = W^S(y) ∩ W^S(y'). Based on this symmetry, two extreme supported nondominated images are defined to be adjacent if and only if the dimension of the intersection of their weight set components is one less than the dimension of the weight set. Finally, they present an algorithm that computes all extreme supported nondominated images for three objectives using the derived properties by iteratively shrinking supersets of the actual weight set components.
The weight set decomposition is implicitly calculated by the algorithms of Özpeynirci and Köksalan [26] and Bökler and Mutzel [6]. The algorithms of Alves and Costa [2] and Halffmann et al. [19] iteratively augment subsets of the weight set components based on the convexity property. Seipp [31] and Schulze et al. [30] use a weight set decomposition linked with so-called arrangements of hyperplanes in the image space to show that the number of extreme supported nondominated images of multiobjective minimum spanning tree problems and unconstrained multiobjective combinatorial problems, respectively, is polynomially bounded. Correia et al. [13] modify the results of Seipp to enumerate all efficient minimum spanning trees.
For the weighted Tchebycheff scalarization, Eswaran et al. [16] explicitly consider weight set components for biobjective problems. Based on this approach, Ralphs et al. [29] adapt the dichotomic search method to calculate all nondominated images of biobjective discrete optimization problems. Karakaya et al. [23] introduce an adjacency concept based on the weighted Tchebycheff scalarization. Here, two images are called adjacent if the intersection of their weight set components with respect to the weighted Tchebycheff scalarization is non-empty. Bozkurt et al. [9] give a representation of the weight set components as a union of polytopes, which is used to evaluate the quality of efficient solutions and efficient solution sets. In connection with [23], this is recently modified by Karakaya and Köksalan [22].
Fig. 1 A set of five images y^1, …, y^5, illustrated in (a), and their weight set components Λ(y^r), r = 1, …, 5, for both the weighted sum scalarization (b) and the weighted Tchebycheff scalarization (c). Note that, for both scalarizations, the restriction to weights contained in Λ is without loss of generality. All images are nondominated. The image y^4 is not extreme supported. The image y^5 is not supported. The images y^1 and y^2 are adjacent w.r.t. the weighted sum weight set decomposition although their weighted Tchebycheff weight set components do not intersect

Our contribution
We present a rigorous theory on the weight set decomposition approach for the weighted Tchebycheff scalarization of MODOs. As shown in Fig. 1, the weighted Tchebycheff scalarization implies a more complex structure in comparison to the weighted sum scalarization. Our primary contribution is a comprehensive theoretical analysis of this structure and its properties. Knowing this structure may allow for new algorithms in the future, following the methodologies of [2,6,19,26,28], to compute all or subsets of the nondominated images including their weighted Tchebycheff weight set decomposition. Moreover, calculating the weighted Tchebycheff weight set decomposition might also enrich already existing algorithms, see for example [14,21] and the references therein, to provide additional information on the solution set, cf. [9,22].
In Sect. 2, we show that it is necessary and sufficient to consider only the weight set components of nondominated images and establish that weight set components have convexity-related properties: they are star-shaped and convex along rays emanating from a vertex of the weight set. We study the intersection of weight set components in Sect. 3. Such intersections coincide with weight set components of certain weakly nondominated images and, hence, all convexity-related properties also apply, although intersections of star-shaped sets are not star-shaped in general. In Sect. 4, we follow the approach of Bozkurt et al. [9] to describe weight set components as unions of finitely many polytopes. We show that the obtained polytopes induce, for any weight set component, the existence of a so-called polytopal subdivision, which lays the foundation for the dimensional analysis in Sect. 5. In particular, this allows an adaptation and a refinement of the adjacency concepts introduced in [28] and [23], respectively, which 'reveals the organization' of the nondominated set. We close with some concluding remarks in Sect. 6.

Foundations
In this section, we introduce the concept of the weight set decomposition for the weighted Tchebycheff scalarization. We also derive properties connecting the weight set with the nondominated set Y N and investigate convexity properties.
Recall that a polyhedron is the intersection of finitely many halfspaces, and the dimension of a polyhedron P ⊆ ℝ^p is the maximum number of affinely independent points in P minus one. A polyhedron is called a polytope if it is bounded. For w ∈ ℝ^p and z ∈ ℝ, the inequality w^⊤y ≤ z is valid for P if P ⊆ {y ∈ ℝ^p : w^⊤y ≤ z}. A set F ⊆ P is a face of P if there is some valid inequality w^⊤y ≤ z such that F = {y ∈ P : w^⊤y = z}. Note that faces of P are polyhedra themselves and, thus, the notion of dimension carries over. In particular, faces of dimension 0 are called extreme points. Polyhedra can be generalized as follows.

Definition 2.1 (Ziegler [36]) A polytopal complex C is a finite collection of polytopes in ℝ^p such that (i) the empty polytope is in C, (ii) if P ∈ C, then all faces of P are also in C, and (iii) the intersection P ∩ Q of two polytopes P, Q ∈ C is a face of both P and Q.
The dimension dim(C) of the polytopal complex C is the largest dimension of a polytope in C. The underlying set of C is the point set ⋃_{P∈C} P. A subcomplex of a polytopal complex C is a subset C' ⊆ C that is a polytopal complex itself. A polytopal subdivision of a set S ⊆ ℝ^p is a polytopal complex C with underlying set ⋃_{P∈C} P = S. For example, the collection of all faces of a polytope P defines a polytopal subdivision of P itself.

Definition 2.2 (Preparata and Shamos [27]) A set S ⊆ ℝ^p is star-shaped if there exists an element y ∈ S such that θy + (1 − θ)ȳ ∈ S for all ȳ ∈ S and all θ ∈ (0, 1). The set of all such elements y is called the kernel of S and is denoted by ker(S).
In the remainder of this paper, we consider MODOs. Further, we make the following assumption on the reference point used in the weighted Tchebycheff scalarization.

Assumption 1
The reference point s ∈ ℝ^p is a utopia point. Thus, s < y for all images y ∈ Y, and we can also assume that the reference point s used in the weighted Tchebycheff scalarization is the zero vector (s = 0) and Y ⊆ ℝ^p_>. As a consequence of Assumption 1, the problem TS(λ) simplifies to min{‖f(x)‖^λ_∞ : x ∈ X} = min{‖y‖^λ_∞ : y ∈ Y}. Furthermore, it holds that ‖y‖^λ_∞ > 0 for all y ∈ Y and λ ≥ 0. The following proposition extends Theorem 4.5 in [32] to the case of weakly nondominated images.

Proposition 2.3 Let y ∈ Y_wN. Then there exists a weight λ ∈ ℝ^p_> such that y is optimal for TS(λ). If, in addition, y ∈ Y_N, then there exists a weight λ such that y uniquely minimizes TS(λ).
Proof For y ∈ Y_wN, define the weight λ componentwise by λ_i := 1/y_i for i = 1, …, p, so that ‖y‖^λ_∞ = 1. Assume there exists an image ȳ ∈ Y with ‖ȳ‖^λ_∞ < ‖y‖^λ_∞ = 1. Then 1 > λ_j ȳ_j and, thus, y_j > ȳ_j for all j = 1, …, p. This contradicts y ∈ Y_wN. To prove the second statement, we choose again the weight λ defined by λ_i = 1/y_i for i = 1, …, p. Then, similar calculations imply that, for an image ȳ ≠ y in Y with ‖y‖^λ_∞ ≥ ‖ȳ‖^λ_∞, it holds that y_j ≥ ȳ_j for all j = 1, …, p. This is a contradiction to y ∈ Y_N.
Since α‖y‖^λ_∞ = ‖y‖^{αλ}_∞ holds for all scalars α > 0, normalizing the weight λ does not change the optimal solution set of TS(λ). Hence, analogously to the weighted sum method, we restrict the set of eligible parameters to the (normalized) weight set Λ := {λ ∈ ℝ^p_≥ : Σ_{i=1}^p λ_i = 1}. Note that Λ is a (p − 1)-dimensional polytope, and the projection φ : Λ → ℝ^{p−1}, λ ↦ (λ_1, …, λ_{p−1}), is a bijection onto its image that is particularly useful for the visualization of the weight sets of MODOs with three objectives. Next, we introduce the decomposition of the weight set implied by the weighted Tchebycheff scalarization.

Definition 2.4
For an image y ∈ Y, the weight set component of y with respect to the weighted Tchebycheff scalarization is defined by Λ(y) := {λ ∈ Λ : ‖y‖^λ_∞ ≤ ‖ȳ‖^λ_∞ for all ȳ ∈ Y}. Note that λ ∈ Λ(y) if and only if y is optimal for TS(λ), i.e., y = f(x) for some optimal solution x of TS(λ). Obviously, if an image is not weakly nondominated, then its weight set component is empty.
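For finite Y, membership of a weight in a weight set component reduces to comparing weighted Tchebycheff norms. A minimal sketch (assuming s = 0 and Y in the positive orthant; helper names ours):

```python
# Sketch: test lambda in Lambda(y) directly from the definition of the
# weight set component, i.e. y attains the minimal weighted Tchebycheff
# norm over the finite image set Y (assumptions: s = 0, Y > 0).
def tcheb_norm(y, lam):
    return max(l * yi for l, yi in zip(lam, y))

def in_weight_set_component(y, lam, Y):
    return all(tcheb_norm(y, lam) <= tcheb_norm(z, lam) for z in Y)

Y = [(1, 5), (5, 1), (4, 4)]
print(in_weight_set_component((4, 4), (0.5, 0.5), Y))  # True
print(in_weight_set_component((1, 5), (0.5, 0.5), Y))  # False
```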
We introduce a notation for the normalized weight used in the proof of Proposition 2.3 since it plays a major role.

Definition 2.5
For y ∈ Y_wN, we denote the kernel weight of y by λ(y) and define it componentwise by λ(y)_i := (1/y_i) / (Σ_{j=1}^p 1/y_j) for i = 1, …, p. Proposition 2.3 implies that if y is weakly nondominated, then its weight set component is nonempty. Hence, an image is weakly nondominated if and only if its weight set component is nonempty. If y is nondominated, we obtain the following corollary.

Corollary 2.6 Let y ∈ Y_N. Then there exists a scalar ε > 0 such that B_ε(λ(y)) ∩ Λ ⊆ Λ(y).

Proof
The claim follows from Proposition 2.3, the definition of the kernel weight, the finiteness of the feasible set, and the continuity of the function defined by λ ↦ ‖ȳ‖^λ_∞. Hereby, note that, for any given weakly nondominated image ȳ ∈ Y_wN, the function defined by λ ↦ ‖ȳ‖^λ_∞ is continuous since it is the pointwise maximum of finitely many linear functions.
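The kernel weight is straightforward to compute: following the proof of Proposition 2.3, it is the vector of reciprocals 1/y_i, normalized to the weight set. A minimal sketch (helper name ours):

```python
# Sketch: the kernel weight lambda(y), i.e. the normalized vector of
# reciprocals; at this weight all products lambda_i * y_i coincide.
def kernel_weight(y):
    inv = [1.0 / yi for yi in y]
    s = sum(inv)
    return tuple(v / s for v in inv)

lam = kernel_weight((4, 4))
print(lam)  # (0.5, 0.5)
```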
The next propositions show that nondominated images suffice to define the weight set components of all images.

Proposition 2.7
Let an image y ∈ Y be given. Then Λ(y) = {λ ∈ Λ : ‖y‖^λ_∞ ≤ ‖ȳ‖^λ_∞ for all ȳ ∈ Y_N}.

The following proposition shows that every weight λ ∈ Λ is mapped to a nondominated image in Y_N by optimizing TS(λ).

Proposition 2.8 It holds that Λ = ⋃_{y∈Y_N} Λ(y).
Proof For a weight λ ∈ Λ, there exists an image y ∈ Y that is optimal for TS(λ). If y is not nondominated, there exists an image ȳ ∈ Y_N with ȳ ≤ y and, hence, ‖ȳ‖^λ_∞ ≤ ‖y‖^λ_∞. So, ȳ is optimal for TS(λ) and, thus, λ ∈ Λ(ȳ). It follows that Λ ⊆ ⋃_{ȳ∈Y_N} Λ(ȳ). The reverse inclusion holds trivially.

Proposition 2.3 implies another fact about the weight set components: a weight set component of a nondominated image cannot be a subset of another one. In particular, y ∈ Y_N implies Λ(y) \ ⋃_{ȳ∈Y_N, ȳ≠y} Λ(ȳ) ≠ ∅. Thus, Proposition 2.8 states a sufficient and necessary condition to decompose the weight set.
Next, we observe a structural property of the weight set components: in contrast to the weighted sum weight set components, the sets Λ(y) are not necessarily convex.

Example 2.9 Consider the set of nondominated images Y_N = {y^1, y^2, y^3, y^4} together with weights λ^1, λ^2 ∈ Λ(y^1) and a weight λ^3 on the segment between λ^1 and λ^2. One can verify that ‖y^1‖^{λ^3}_∞ > min_{ȳ∈Y} ‖ȳ‖^{λ^3}_∞ and, therefore, λ^3 ∉ Λ(y^1). Consequently, Λ(y^1) is not convex. Figure 2 illustrates the weight set components for Example 2.9. In Sect. 4, we explain how these sets can be computed.
To gain more insight into the structure of weight set components, we subdivide them into smaller subsets according to the index in which the maximum of the associated scalar product (i.e., the one defining the weighted Tchebycheff norm value) is attained. This will be useful to prove a convexity-related property (Corollary 2.12) and to establish a polytopal subdivision of the weight set components.

Definition 2.10
For a weakly nondominated image y ∈ Y_wN and i = 1, …, p, we define the ith dimensional weight set component by Λ(y, i) := {λ ∈ Λ(y) : ‖y‖^λ_∞ = λ_i y_i}.

Figure 2 presents these sets for Example 2.9. With the image set of Example 2.9, one can also show that both λ^1 and λ^2 are contained in Λ(y^1, 1) and, thus, the dimensional weight set components are not necessarily convex. However, we can derive a 'convexity-related' property.

Proposition 2.11
For a weakly nondominated image y ∈ Y_wN, the following holds true: (i) ⋂_{i=1}^p Λ(y, i) = {λ(y)}. (ii) For i = 1, …, p, the ith dimensional weight set component Λ(y, i) is a star-shaped set with λ(y) ∈ ker(Λ(y, i)).

Proof If λ ∈ ⋂_{i=1}^p Λ(y, i), then λ_i y_i = ‖y‖^λ_∞ = λ_j y_j for all i, j = 1, …, p and, therefore, λ is the kernel weight. This shows statement (i).
The first property of Proposition 2.11 states that the kernel weight is the only weight that is contained in all dimensional weight set components. The second justifies the name kernel weight. We immediately get the following corollary.
Corollary 2.12 For every weakly nondominated image y ∈ Y_wN, the weight set component Λ(y) is star-shaped with λ(y) ∈ ker(Λ(y)).

For one-dimensional weight sets (i.e., for two objectives), star-shapedness is equivalent to convexity of the weight set components. This explains why the dichotomic search approach used for the weighted sum scalarization can be adapted to the weighted Tchebycheff scalarization, as proposed in [16,29].
We can also derive a second convexity-related property with the help of the following lemma. Note that, in the lemma, we fix p − 1 entries of a weight λ ∈ ℝ^p_≥ and do not consider normalized weights.

Lemma 2.13
Let an index k, a weight λ ∈ ℝ^p_≥, and a scalar t > 0 be given. If an image y ∈ Y is optimal for both TS(λ) and TS(λ + te_k), where e_k is the kth unit vector in ℝ^p, then y is also optimal for TS(λ + θte_k) for all θ ∈ [0, 1].
Proof First observe that, for any y ∈ Y, θ ∈ [0, 1], and k, λ, and t as given, ‖y‖^{λ+θte_k}_∞ = max{λ_i y_i, (λ_k + θt) y_k}, where i is an index such that ‖y‖^λ_∞ = λ_i y_i. Consider the image y ∈ Y that is optimal for both TS(λ) and TS(λ + te_k), fix i* such that ‖y‖^λ_∞ = λ_{i*} y_{i*}, and fix θ ∈ (0, 1). Let ȳ ∈ Y and fix i such that ‖ȳ‖^λ_∞ = λ_i ȳ_i. We consider two cases for ‖ȳ‖^{λ+te_k}_∞ and show in each case that ‖y‖^{λ+θte_k}_∞ ≤ ‖ȳ‖^{λ+θte_k}_∞. In both cases, we use the observation that λ_{i*} y_{i*} ≤ λ_i ȳ_i, since y is optimal for TS(λ). In the first case, the last inequality follows from the optimality of y for TS(λ + te_k). In the second case, recall that λ_{i*} y_{i*} ≤ λ_i ȳ_i = ‖ȳ‖^{λ+θte_k}_∞; since t > 0 and λ_k ≥ 0, it must hold that y_k ≤ ȳ_k and, hence, ‖y‖^{λ+θte_k}_∞ ≤ ‖ȳ‖^{λ+θte_k}_∞.

Lemma 2.13 shows that, for any pair of weights λ^1 and λ^2 with λ^2 − λ^1 equal to a positive multiple of a unit vector, if an image y is optimal for both TS(λ^1) and TS(λ^2), then y is also optimal for TS(λ), where λ is any convex combination of λ^1 and λ^2.

Fig. 3 The convexity property of Proposition 2.14. The intersections of the line segments H_{k,a} (dashed lines) in (2.3) with the weight set components are always convex sets. The green-gray and violet-gray checkerboard areas represent the intersections of weight set components Λ(y^1) ∩ Λ(y^2) and Λ(y^1) ∩ Λ(y^3), respectively. See Fig. 2 for a representation of the individual weight set components
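The statement of Lemma 2.13 can be sanity-checked numerically on a small instance (our own toy data; this is an illustration, not part of the proof):

```python
# Numeric sanity check of Lemma 2.13: if y is optimal for TS(lambda) and
# TS(lambda + t*e_k), it stays optimal along the segment between the weights.
def ts_value(Y, lam):
    return min(max(l * yi for l, yi in zip(lam, y)) for y in Y)

def is_optimal(y, Y, lam):
    return max(l * yi for l, yi in zip(lam, y)) <= ts_value(Y, lam)

Y = [(4, 1), (1, 4), (2, 2)]
lam1, lam2 = (1.0, 1.0), (2.0, 1.0)   # lam2 = lam1 + 1 * e_1
assert is_optimal((2, 2), Y, lam1) and is_optimal((2, 2), Y, lam2)
for theta in (0.25, 0.5, 0.75):
    mid = tuple((1 - theta) * a + theta * b for a, b in zip(lam1, lam2))
    assert is_optimal((2, 2), Y, mid)
print("Lemma 2.13 holds on this instance")
```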
In order to transfer this result to the weight set, we define, for a given index k ∈ {1, …, p} and a vector a ∈ ℝ^p with a_i > 0 for i ≠ k, the line segments H_{k,a} := {λ ∈ Λ : λ_i = αa_i for all i ≠ k, for some α > 0}. (2.3) Along these line segments, a convexity-related property holds true.
Proposition 2.14 For any k ∈ {1, …, p} and a ∈ ℝ^p such that a_i > 0 for i ≠ k, the intersection Λ(y) ∩ H_{k,a} is convex for all y ∈ Y_wN.

The intersection of weight set components
In this section, we analyze the structure of the intersection of two weight set components. In general, the intersection of two star-shaped sets is not guaranteed to be star-shaped. However, this holds true for the intersection of two weight set components. To prove this, we first define a (possibly artificial) image.

Definition 3.1 For a nonempty set Ȳ ⊆ Y, the local nadir image y^N(Ȳ) is defined componentwise by y^N_i(Ȳ) := max{ȳ_i : ȳ ∈ Ȳ} for i = 1, …, p.

We say that an image ȳ ∈ Ȳ contributes to the local nadir image if ȳ_i = y^N_i(Ȳ) for some i ∈ {1, …, p}.
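A minimal sketch, assuming the local nadir image is defined componentwise as y^N_i(Ȳ) = max{ȳ_i : ȳ ∈ Ȳ} (consistent with the notion of contributing images; helper name ours):

```python
# Sketch: the local nadir image of a finite subset Ybar of images,
# taken as the componentwise maximum.
def local_nadir(Ybar):
    return tuple(max(y[i] for y in Ybar) for i in range(len(Ybar[0])))

print(local_nadir([(1, 5), (4, 4)]))  # (4, 5); both images contribute
```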
In the following, we avoid trivial cases by requiring |Ȳ| ≥ 2. Further, for ease of exposition, we assume that the local nadir image exists in Y. The local nadir image y^N(Ȳ) is dominated and, consequently, Λ(y^N(Ȳ)) ≠ ∅ implies y^N(Ȳ) ∈ Y_wN \ Y_N. The local nadir image is closely related to the intersection of weight set components, as shown in the following proposition.
Proposition 3.3 Let Ȳ ⊆ Y_N with |Ȳ| ≥ 2. Then ⋂_{ȳ∈Ȳ} Λ(ȳ) = Λ(y^N(Ȳ)).

Thus, Proposition 3.3 implies that Corollary 2.12 and Proposition 2.14 hold in fact for intersections of weight set components.

A polytopal subdivision of the weight set components
In the following, we construct a representation of the weight set components as a union of polytopes based on the idea of [9]: for an image y ∈ Y_N, the weight set can be decomposed into p polytopes, where the ith polytope contains all weights such that the weighted Tchebycheff norm of y is attained in the ith index. By taking all nondominated images into account, we can refine this decomposition such that, for each polytope obtained and for any image y ∈ Y_N, the index in which the weighted Tchebycheff norm is attained can be determined exactly. Hence, additional dividing hyperplanes, based on which image is optimal, can be added. In this section, we establish that this construction yields the existence of a polytopal subdivision of each weight set component, which lays a well-defined foundation for a notion of dimension.

We now state the construction formally; based on Example 2.9, each step is illustrated in Fig. 5. Let Y_N = {y^1, …, y^R}. For y^1, recall the ith dimensional weight set component Λ(y^1, i) for i ∈ {1, …, p} (see Definition 2.10). Using y^1, we subdivide the weight set into the p polytopes P^{(i)} := {λ ∈ Λ : λ_i y^1_i ≥ λ_k y^1_k for all k ≠ i}, i = 1, …, p (Fig. 5a). For a weight λ in one of these polytopes, we can then immediately identify the index (pairs) in which the weighted Tchebycheff norm (with weight λ) of y^1 is attained. Using y^2, we can further subdivide the weight set into (at most) p^2 polytopes P^{(i_1,i_2)}, i_1, i_2 = 1, …, p, to identify the weights for which the weighted Tchebycheff norm of y^1 is attained in index i_1 and that of y^2 is attained in index i_2 (Fig. 5b). This reasoning extends to all images y^1, …, y^R: for each image y^r, we choose an index i_r ∈ {1, …, p} and consider

P^{(i_1,…,i_R)} := {λ ∈ Λ : λ_{i_r} y^r_{i_r} ≥ λ_k y^r_k, k ≠ i_r, r = 1, …, R}. (4.1)

Fig. 5 The construction of the polytopal subdivision of the weight set according to (4.1) for Example 2.9 with p = 3. a The image y^1 induces a decomposition of the weight set into three polytopes P^{(i)} = {λ ∈ Λ : λ_i y^1_i ≥ λ_k y^1_k, k ≠ i}. b The images y^1 and y^2 induce a decomposition of the weight set into (at most) p^2 polytopes P^{(i_1,i_2)} = {λ ∈ Λ : λ_{i_r} y^r_{i_r} ≥ λ_k y^r_k, k ≠ i_r, r = 1, 2}. c Taking the other images y^3 and y^4 into account, the weight set can be decomposed into the polytopes P^{(i_1,i_2,i_3,i_4)} = {λ ∈ Λ : λ_{i_r} y^r_{i_r} ≥ λ_k y^r_k, k ≠ i_r, r = 1, 2, 3, 4}. d Each polytope P^{(i_1,i_2,i_3,i_4)} can further be subdivided based on the optimal image for the weighted Tchebycheff scalarizations; for example, P^{(i_1,i_2,i_3,i_4)} is divided into four polytopes Λ(y^r) ∩ P^{(i_1,i_2,i_3,i_4)}. Hereby, note that the polytopes Λ(y^3) ∩ P^{(i_1,i_2,i_3,i_4)} and Λ(y^4) ∩ P^{(i_1,i_2,i_3,i_4)} have dimension one, cf. Fig. 2. e The polytopes can then individually be assigned to the (in some cases multiple) weight set components. See Fig. 2 for a representation of the individual weight set components

Obviously, each set P^{(i_1,…,i_R)} is a polytope, and each weight λ ∈ Λ is contained in at least one polytope of the form (4.1). If λ ∈ P^{(i_1,…,i_R)}, we can deduce that ‖y^r‖^λ_∞ = λ_{i_r} y^r_{i_r} for all r = 1, …, R (Fig. 5e). Hence, by Proposition 2.7, deciding whether λ ∈ Λ(y^r) holds true reduces to checking the R inequalities λ_{i_r} y^r_{i_r} ≤ λ_{i_s} y^s_{i_s}, s = 1, …, R. (4.2) Accordingly, for each image y ∈ Y_N, we collect the nonempty polytopes of the form P^{(i_1,…,i_R)} ∩ Λ(y) in a family C̃(y). (4.3) We state some properties of these families.
Proof Statement (i) is easy to see. For statement (ii), let, without loss of generality, Y_N be enumerated such that y^r = y^1 and y^s = y^2, and define H := {λ ∈ Λ : λ_{i_1} y^1_{i_1} = λ_{j_2} y^2_{j_2}}. On the one hand, H is described by valid inequalities for P^{(i_1,…,i_R)} ∩ Λ(y^1), as shown in (4.2). These inequalities define a face F^1 = P^{(i_1,…,i_R)} ∩ Λ(y^1) ∩ H, if it is nonempty. On the other hand, H is described by valid inequalities for P^{(j_1,…,j_R)} ∩ Λ(y^2). Analogously, the set F^2 = P^{(j_1,…,j_R)} ∩ Λ(y^2) ∩ H is a face of P^{(j_1,…,j_R)} ∩ Λ(y^2). By definition, a weight λ ∈ P^{(i_1,…,i_R)} ∩ Λ(y^1) ∩ P^{(j_1,…,j_R)} ∩ Λ(y^2) satisfies, for r = 1, …, R, λ_{i_r} y^r_{i_r} ≥ λ_k y^r_k for k ≠ i_r and λ_{j_r} y^r_{j_r} ≥ λ_k y^r_k for k ≠ j_r. This implies λ_{i_r} y^r_{i_r} = λ_{j_r} y^r_{j_r}. Since also λ_{i_1} y^1_{i_1} ≤ λ_{i_2} y^2_{i_2} and λ_{j_2} y^2_{j_2} ≤ λ_{j_1} y^1_{j_1} hold, the equality λ_{i_1} y^1_{i_1} = λ_{j_2} y^2_{j_2} holds true. It follows that P^{(i_1,…,i_R)} ∩ Λ(y^1) ∩ P^{(j_1,…,j_R)} ∩ Λ(y^2) = F^1 = F^2. Inclusionwise maximality also follows from the latter equalities.
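To make the cell construction in (4.1) concrete, the sketch below (our own helper names; indices are 0-based, unlike the paper's) checks whether a weight λ lies in P^(i_1,…,i_R), i.e., whether the weighted Tchebycheff norm of each image y^r is attained in index i_r:

```python
# Sketch: membership of a weight lambda in a polytope P^(i_1,...,i_R)
# of (4.1), given finite images y^1,...,y^R and chosen indices (0-based).
def in_cell(lam, images, idx):
    return all(
        lam[i] * y[i] >= lam[k] * y[k]
        for y, i in zip(images, idx)
        for k in range(len(lam)) if k != i
    )

images = [(1, 5), (5, 1), (4, 4)]
lam = (0.5, 0.5)
# norm of (1,5) attained in index 1, of (5,1) in index 0, of (4,4) in index 0
print(in_cell(lam, images, (1, 0, 0)))  # True
```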
This motivates augmenting the families of polytopes C̃(y).

Definition 4.2
For an image y ∈ Y_N, the weight set complex of y with respect to the weighted Tchebycheff scalarization is defined by C(y) := {F : there exists a polytope P ∈ C̃(y) such that F is a face of P}.
Since Proposition 4.1(ii) remains true if y^r is chosen to be equal to y^s, we can conclude that polytopes in C̃(y^r) always intersect in common faces. Hence, Proposition 4.1(i) implies that the weight set complex of y^r is indeed a polytopal subdivision of its weight set component Λ(y^r).

Corollary 4.3 Let y ∈ Y_N. Then C(y) is a polytopal complex such that ⋃_{P∈C(y)} P = Λ(y).

Figure 4 shows the subdivision for Example 2.9. Moreover, Proposition 2.8 can be adapted: the knowledge of all polytopal complexes C(y) is sufficient to cover the weight set, that is, Λ = ⋃_{y∈Y_N} ⋃_{P∈C(y)} P. Here, note a slight but important difference: for the nondominated images y ∈ Y_N, full knowledge of all polytopes in the weight set complex C(y) (and, hence, of Λ(y)) is no longer required, since individual polytopes can belong to multiple weight set complexes. Nevertheless, the knowledge of all nondominated images is still required, and the underlying set of the union of all known polytopes must cover the complete weight set.
For y ∈ Y_N and i ∈ {1, …, p}, let C̃(y, i) denote the family of all polytopes in C̃(y) on which the weighted Tchebycheff norm of y is attained in index i. Then, by construction, the family of polytopes C(y, i) := {F : there exists a polytope P ∈ C̃(y, i) such that F is a face of P} is again a polytopal complex. This construction is consistent with the definition of the dimensional weight set components: it holds that Λ(y, i) = ⋃_{P∈C(y,i)} P and, moreover, P ∈ C(y, i) implies P ∈ C(y); vice versa, for each polytope P ∈ C(y), there is an index i ∈ {1, …, p} such that P ∈ C(y, i). That is, C(y, i) is indeed a subcomplex of C(y), and we call it the ith dimensional weight set complex.
Taking images in Y wN \ Y N into account, the construction of the polytopal subdivision needs to be refined. This can be done by adapting (4.1) and (4.2) based on an enumeration of Y wN . Then, the following result can be analogously derived.

Corollary 4.6
For any weakly nondominated image y ∈ Y_wN, there exists a polytopal subdivision of Λ(y).

Remark 4.7
Due to the construction of these polytopal subdivisions, the intersection of two weight set components induces a polytopal subdivision that uses polytopes of both subdivisions only. That is, Λ(y^1) ∩ Λ(y^2) = ⋃_{P∈C(y^1)∩C(y^2)} P. In particular, C(y^1) ∩ C(y^2) is itself a polytopal complex. Thus, we can compare weight set components based on the polytopes in their polytopal subdivisions. Similarly, the union of two weight set complexes is a polytopal subdivision of the union of the corresponding weight set components. This is particularly important for defining a notion of dimension of (unions and intersections of) weight set components.

The dimension of the weight set components
In this section, we analyze the dimension of the weight set components. First, we define the dimension with respect to the associated polytopal complex. Recall that a polytopal complex is defined via a finite set of polytopes and, for all weight set components, there exists a polytopal subdivision, see Corollary 4.3.

Definition 5.1
For an image y ∈ Y , the dimension of its weight set component Λ(y) is defined by dim(Λ(y)) := dim(C(y)).
Note that the dimension of the dimensional weight set components as well as the intersections or unions of weight set components can be defined analogously by Remarks 4.4 and 4.7. In the following, we distinguish between images in Y N and Y wN \Y N .
We first consider nondominated images. Due to the finite number of polytopes in C(y), Corollary 2.6 immediately implies that the dimension of the corresponding weight set complex C(y) must be equal to p − 1; this is the content of Corollary 5.2.
This raises the question whether an analogue of Corollary 5.2 holds true for weakly nondominated but dominated images. This is not the case.
How can we characterize the dimension of the weight set components of images y w ∈ Y wN \ Y N ? We will conclude that this depends on the images dominating y w . If y w ∈ Y wN \ Y N , then there exists an image y ∈ Y N such that y i ≤ y w i for all i = 1, . . . , p, i.e., Λ(y w , i) ⊆ Λ(y, i) for all i satisfying y w i = y i .
Thus, the dimension of the weight set components of weakly nondominated but dominated images depends on the number of images that dominate y w and on the indices in which those images are strictly better.

Proof This follows from Lemma 5.5.
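The collapse in dimension can be observed numerically. In the following sketch (invented biobjective data, reference point 0, scalarization min max λ_i·y_i; not from the article), y w = (2, 5) is weakly nondominated but dominated by (1, 5), with which it coincides in the second index; its weight set component degenerates to a single weight, so its dimension drops below p − 1 = 1.

```python
# Illustrative sketch (p = 2, reference point 0): the component of a weakly
# nondominated but dominated image collapses to a single weight.

def tcheby(lam, y):
    return max(l * yi for l, yi in zip(lam, y))

Y_N = [(1, 5), (2, 3), (4, 2), (6, 1)]   # nondominated images
yw = (2, 5)                              # dominated by (1, 5), equal in index 2

hits = []
for t in range(7001):
    a = t / 7000                         # grid containing a = 5/7 exactly
    lam = (a, 1 - a)
    best = min(tcheby(lam, y) for y in Y_N)
    if tcheby(lam, yw) <= best + 1e-9:
        hits.append(a)

print(len(hits))  # 1: yw is optimal only at the single weight a = 5/7
```

The surviving weight a = 5/7 is exactly the tie weight between (1, 5) and (2, 3), i.e., a boundary weight of the full-dimensional component of the dominating image (1, 5).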
As a further consequence, we obtain a characterization of nondominated images.

Corollary 5.8
Let int(Λ(y, i)) denote the set of all weights λ ∈ Λ(y, i) such that there exists a scalar ε > 0 with B ε (λ) ⊆ Λ(y, i). An image y ∈ Y is nondominated if and only if int(Λ(y, i)) ≠ ∅ for all i = 1, . . . , p.

Fig. 6 The weight set components Λ(y r ), r = 5, 6, 7, of Example 5.7. Note that y r ∈ Y wN \ Y N for r = 5, 6, 7 and, thus, the interior of at least one dimensional weight set component is empty.
The intersection of weight set components
In Sect. 3, the intersection of weight set components ⋂_{y ∈ Ȳ} Λ(y) for Ȳ ⊆ Y is determined by the weight set component of the (dominated) local nadir image y N (Ȳ ). Thus, the dimension of the intersection sets is characterized by Proposition 5.6. Note that a nonempty intersection implies that all images in Ȳ contribute to y N (Ȳ ), and Ȳ = {y ∈ Y N : y ≦ y N (Ȳ )}. Thus, if all images in Ȳ coincide in at least one index i, it holds that dim(⋂_{y ∈ Ȳ} Λ(y)) = p − 1 and they share at least one ( p − 1)-dimensional polytope in their weight set complexes, in particular, in their ith dimensional weight set components. Notice also that this cannot happen between different dimensional weight set components, as Λ(y 1 , i) ∩ Λ(y 2 , j) ⊆ {λ ∈ Λ : λ i y 1 i = λ j y 2 j } and the dimension of the latter polytope is p − 2. Considering only two nondominated images, we can therefore define a concept of (proper) adjacency with respect to the weighted Tchebycheff scalarization.
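The role of the local nadir image can be sketched as follows (biobjective toy data, reference point 0, scalarization min max λ_i·y_i; the closed-form tie weight assumes p = 2 and is not taken from the article): the weight at which two adjacent nondominated images attain the same weighted Tchebycheff value is the weight balancing their componentwise maximum, i.e., their local nadir image.

```python
# Illustrative sketch (p = 2, reference point 0): two adjacent nondominated
# images tie exactly at the weight that balances their local nadir image.

def tcheby(lam, y):
    return max(l * yi for l, yi in zip(lam, y))

y1, y2 = (1, 5), (2, 3)
yN = tuple(max(a, b) for a, b in zip(y1, y2))   # local nadir image (2, 5)

a = yN[1] / (yN[0] + yN[1])                     # solves a * yN_1 = (1 - a) * yN_2
lam = (a, 1 - a)

v1, v2 = tcheby(lam, y1), tcheby(lam, y2)
print(abs(v1 - v2) < 1e-12)  # True: both images attain the same value at lam
```

Since y1 and y2 do not coincide in any index here, they tie only at this single weight, matching the p − 2 = 0 dimension of a generic intersection in the biobjective case.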

Definition 5.9
Let two images y, ȳ ∈ Y be given. The definition of adjacency of nondominated images with respect to the weighted Tchebycheff scalarization introduced and used in [22,23] aligns with the definition given in Definition 5.9(i). Definitions 5.9(ii) and (iii) are motivated by the concept given in [28]. In conclusion, taking the dimension of the intersection set into account, the notion of adjacency can and should be refined.

Conclusion
Besides the weighted sum and the ε-constraint method, the weighted Tchebycheff method is a frequently applied scalarization technique in multiobjective optimization. The weighted Tchebycheff scalarization problem is closely linked to many other single-objective optimization disciplines, including robust optimization, goal programming, and location theory. It is a building block of many exact and heuristic algorithms, which systematically vary the choice of the weight.
In this article, we provide the first rigorous and comprehensive theory of the set of all eligible weights for the weighted Tchebycheff scalarization. We analyze the polyhedral and combinatorial structure of the set of all weights yielding the same efficient solution as well as the decomposition of the weight set as a whole. To date, analogous research has mostly been published for the weighted sum method. However, there are substantial differences: the weighted Tchebycheff scalarization is able to yield all efficient solutions (i.e., not only the supported ones as in the weighted sum method). Additionally, due to the absence of convexity, the structure of the weight set of the weighted Tchebycheff method is more complex and its analysis is more technical. Through this analysis, convexity-related properties and bounds on the dimension of the weight set components have been proven.
Contrasting the structures of the weight set decomposition of the weighted sum scalarization, the weighted Tchebycheff scalarization provides some additional insights at a higher level. For the weighted sum scalarization, the decomposition describes the gradients of the nondominated part of the convex hull of the set of images as well as information about adjacent nondominated faces. However, it neither provides information about the positioning nor the size of the convex hull in the image space. In fact, nondominated frontiers (of some multiobjective optimization problems) may vary substantially but still share the same weight set decomposition of the weighted sum scalarization (cf. Figure 8(a)).
In contrast, the weight set decomposition of the weighted Tchebycheff norm yields more information about the positioning of the nondominated images. Note that the weight set decomposition includes the knowledge of the local nadir weights. In fact, the weight set decompositions of two sets of nondominated images coincide as long as their sets of local nadir weights coincide. With the local nadir weights λ N for the weight set components known, the local nadir images must be located on the rays defined by D(λ N ) := {t · λ N : t > 0}. These rays narrow down the configuration of the nondominated set, since each nondominated image y must lie within the region determined by all rays of local nadir weights that are contained in the weight set component Λ(y). If the kernel weight λ(y) for the weight set component Λ(y) is additionally known, the nondominated image y must be located on the ray D(λ(y)) := {t · λ(y) : t > 0}. Taking nondominance and the definition of local nadir images into account, the complete nondominated set can be determined up to scaling of the objectives by a multiplicative factor. Figure 8(b) illustrates these observations for a biobjective example. An analogous reasoning is not possible for the weight set decomposition of the weighted sum scalarization.

Fig. 8 Biobjective example of distinct sets of nondominated images (indicated by color), each of which has the same weight set decomposition with respect to the weighted sum scalarization (a) or the weighted Tchebycheff scalarization (b). a The gradient vectors describing the convex hull are equivalent (parallel lines are indicated by line type, e.g., solid, dashed, and dotted) even though the frontiers vary widely in overall shape. b With the local nadir weights for weight set components known, the local nadir images must be located on the associated rays (indicated by dotted lines) and, hence, these rays narrow down the possible location of the nondominated images. If the kernel weights for weight set components are additionally known, then the nondominated images must be located on the associated rays (indicated by dashed lines). In this case, the location of the nondominated set is uniquely determined up to scaling by a multiplicative factor.
Thus, an immediate idea for future research is a thorough analysis of the 'duality' between the weight set decomposition and the image space described informally above and illustrated in Fig. 8. Other directions of research include the algorithmic utilization of the derived properties. Star-shapedness and line convexity may be used to derive outer approximation [28] or inner approximation [2,19] methods that iteratively shrink or augment weight set components, respectively. The properties may also be utilized for interactive approaches with a focus on graphical exploration and presentation of solutions. The idea of weight set decomposition can further be applied to the parameter sets of other scalarizations. For example, weighted p-norm scalarizations or the augmented modified weighted Tchebycheff scalarization yield only nondominated images and theoretically connect the already studied weight set decompositions. This may provide methods for dealing algorithmically with overlapping weighted Tchebycheff weight set components and may reveal additional insights into the image space of multiobjective optimization problems.

Availability of data and material Not applicable.

Conflicts of interest Not applicable.
Code availability Not applicable.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.