Nonlinear expectations of random sets

Sublinear functionals of random variables are known as sublinear expectations; they are convex homogeneous functionals on infinite-dimensional linear spaces. We extend this concept for set-valued functionals defined on measurable set-valued functions (which form a nonlinear space) or, equivalently, on random closed sets. This calls for a separate study of sublinear and superlinear expectations, since a change of sign does not alter the direction of the inclusion in the set-valued setting. We identify the extremal expectations as those arising from the primal and dual representations of nonlinear expectations. Several general construction methods for nonlinear expectations are presented and the corresponding duality representation results are obtained. On the application side, sublinear expectations are naturally related to depth trimming of multivariate samples, while superlinear ones can be used to assess utilities of multiasset portfolios.


Introduction
Fix a probability space (Ω, F, P). A sublinear expectation is a real-valued function e defined on the space L p (R) of p-integrable random variables (with p ∈ [1, ∞]) such that

e(ξ + a) = e(ξ) + a (1.1)

for each deterministic a, the function e is monotone, i.e., e(ξ) ≤ e(ζ) whenever ξ ≤ ζ almost surely, positively homogeneous, i.e., e(cξ) = ce(ξ) for all c ≥ 0, and subadditive, that is,

e(ξ + ζ) ≤ e(ξ) + e(ζ); (1.2)

see Peng [29,30], who brought sublinear expectations to the realm of probability theory and established their close relationship to solutions of backward stochastic differential equations. A superlinear expectation u satisfies the same properties with (1.2) replaced by the superadditivity property

u(ξ + ζ) ≥ u(ξ) + u(ζ). (1.3)

In many studies, the homogeneity property together with sub-(super-)additivity is replaced by convexity of e and concavity of u. The range of values may be extended to (−∞, ∞] for the sublinear expectation and to [−∞, ∞) for the superlinear one. Abstract sublinear functionals have been studied by Fuglede [11], Schmeidler [32] and many further papers in relation to capacities and the Choquet integral and in view of applications to game theory and optimisation.

While the notation e reflects the expectation meaning, the choice of notation u is explained by the fact that the superlinear expectation can be viewed as a utility function: it assigns a higher utility value to the sum of two random variables than the sum of their individual utilities; see Delbaen [7, Chap. 4]. If the random variable ξ models a financial gain, then r(ξ) = −u(ξ) is called a coherent risk measure. The property (1.1) is then termed cash-invariance, and the superadditivity property turns into subadditivity due to the change of sign. The subadditivity of a risk measure means that the risk of the sum of two random variables is at most the sum of their individual risks; this is justified by the economic principle of diversification.
It is easy to see that e is a sublinear expectation if and only if

u(ξ) = −e(−ξ) (1.4)

is a superlinear one, and in this case e and u are said to form an exact dual pair. The sublinearity property yields e(ξ) + e(−ξ) ≥ e(0) = 0, so that −e(−ξ) ≤ e(ξ) and hence u(ξ) ≤ e(ξ). The interval [u(ξ), e(ξ)] generated by an exact dual pair of nonlinear expectations characterises the uncertainty in the determination of the expectation of ξ. In finance, such intervals determine price ranges in illiquid markets; see Madan [24].
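As a concrete illustration, properties (1.1) and (1.4) can be checked numerically on a finite sample space; the scenario set M below is an arbitrary illustrative choice, not one taken from the text.

```python
# Hedged sketch: a sublinear expectation as a supremum of linear expectations
# over a finite family of probability vectors (scenarios), and its exact dual
# pair u(xi) = -e(-xi). All names and the scenario set M are illustrative.

def e(xi, measures):
    """Sublinear expectation: sup over measures of E_Q[xi]."""
    return max(sum(q * x for q, x in zip(Q, xi)) for Q in measures)

def u(xi, measures):
    """Superlinear expectation: the exact dual u(xi) = -e(-xi), as in (1.4)."""
    return -e([-x for x in xi], measures)

# Two probability vectors on a 3-point sample space.
M = [[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]]
xi = [1.0, 2.0, 4.0]

# Cash-invariance (1.1): e(xi + a) = e(xi) + a.
a = 3.0
assert abs(e([x + a for x in xi], M) - (e(xi, M) + a)) < 1e-9
# u(xi) <= e(xi): the interval [u(xi), e(xi)] quantifies expectation uncertainty.
assert u(xi, M) <= e(xi, M)
```

With a single measure in M, the supremum and infimum collapse and e = u is the usual linear expectation.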
We equip the space L p with the σ (L p , L q )-topology based on the standard pairing of L p and L q with 1/p + 1/q = 1. It is usually assumed that e is lower semicontinuous and u is upper semicontinuous in the σ (L p , L q )-topology. Given that e and u take finite values, general results of functional analysis concerning convex functions on linear spaces imply the semicontinuity property if p ∈ [1, ∞) (see Kaina and Rüschendorf [20]); it is additionally imposed if p = ∞. A nonlinear expectation is said to be law-invariant (more exactly, law-determined) if it takes the same value on identically distributed random variables; see Föllmer and Schied [10,Sect. 4.5].
A rich source of sublinear expectations is provided by suprema of conventional (linear) expectations taken with respect to several probability measures. Assuming the σ(L p, L q)-lower semicontinuity, the bipolar theorem yields that this is the only possible case; see Delbaen [7, Sect. 4.5] and Kaina and Rüschendorf [20]. Then

e(ξ) = sup{E[γ ξ] : γ ∈ M}, (1.5)

the supremum of expectations E[γ ξ] over a convex σ(L q, L p)-closed cone M in L q (R +); the superlinear expectation is obtained by replacing the supremum with the infimum. In the following, we assume that (1.5) holds and that the representing set M is chosen in such a way that the corresponding sublinear and superlinear expectations are law-invariant, that is, with each γ, M contains all random variables identically distributed as γ.
A random closed set X in Euclidean space is a random element with values in the family F of closed sets in R d such that {X ∩ K = ∅} is in F for all compact sets K in R d ; see Molchanov [25,Sect. 1.1.1]. In other words, a random closed set is a measurable set-valued function. A random closed set X is said to be convex if X almost surely belongs to the family co F of closed convex sets in R d . For convex random sets in Euclidean space, the measurability condition is equivalent to the condition that the support function of X (see (2.2) below) is a random function on R d with values in (−∞, ∞]. In the set-valued setting, it is natural to replace the inequalities (1.2) and (1.3) with inclusions. For sets, the minus sign corresponds to the reflection with respect to the origin; it does not alter the direction of the inclusion, and so there is no direct link between set-valued sublinear and superlinear expectations.
This paper aims to systematically explore nonlinear set-valued expectations. Section 2 recalls the classical concept of the (linear) selection expectation for random closed sets, introduced by Aumann [4] and Artstein and Vitale [3]; see also Molchanov [25,Sect. 2.1]. The selection expectation E[X] is defined as the closure of the set of expectations E[ξ ] of all integrable random vectors ξ such that ξ ∈ X almost surely (selections of X). In Sect. 2.3, we introduce a suitable convergence concept for (possibly unbounded) random convex sets based on linear functionals applied to the support function.
Nonlinear expectations of random convex sets are introduced in Sect. 3. We refine the properties of nonlinear expectations stated in Molchanov [25, Sect. 2.2.7]. Basic examples of such expectations and more involved constructions are considered, with particular attention to the expectations of random singletons. It is also explained how the set-valued expectation applies to random convex functions and how it is possible to get rid of the homogeneity property and extend the setting to convex/concave functionals.
Among the rather vast variety of nonlinear expectations, it is possible to identify extremal ones: the minimal sublinear expectation of X is the convex hull of nonlinear expectations of all sets from some family that yields X as their union. In the case of selections, this becomes a direct generalisation of the representation of the selection expectation as the set of expectations for all random points almost surely belonging to a random set. The maximal superlinear extension is the intersection of nonlinear expectations of all half-spaces containing the random set. While the two coincide in the linear case and provide two equivalent definitions of the selection expectation, the two constructions differ in general. Similar set-valued functions on linear spaces have been studied by Hamel [12] and Hamel and Heyde [13], and the dual representation in [12,13] appears to be the representation of maximal superlinear expectations in our setting restricted to special random closed sets.
Nonlinear maps restricted to the family L p (R d ) of p-integrable random vectors and sets having the form of a random vector plus a cone have been studied by Cascos and Molchanov [6] and Hamel and Heyde [12,13]; comprehensive duality results have been proved by Drapeau et al. [9]. In our terminology, these studies concern the case when the argument of a superlinear expectation is the sum of a random vector and a convex cone, which in Hamel et al. [14] is allowed to be random, but is the same for all random vectors involved. However, for general set-valued arguments, it does not seem possible to rely on the approach of [9,12,13], since the known techniques of set-valued optimisation theory (see e.g. Khan and Tammer [21]) do not suffice to handle functions whose arguments belong to a nonlinear space.
The key technique suitable to handle nonlinear expectations relies on the bipolar theorem. A direct generalisation of this theorem for functionals of random convex sets is not feasible, since random convex sets do not form a linear space. Section 5 provides duality results for sublinear expectations and Sect. 6 for superlinear ones. Specifically, the constant-preserving minimal sublinear expectations are identified. For the superlinear case, the family of random closed convex sets such that a superlinear expectation contains the origin is a convex cone. However, it is rather tricky to use separation results since linear functions (such as the selection expectation) may have trivial values on unbounded integrable random sets. For instance, the selection expectation of a random half-space with a nondeterministic normal is the whole space; in this case, superlinear expectations are not dominated by any nontrivial linear expectation. In order to handle such situations, the duality results for superlinear expectations are proved for the maximal superlinear expectation. It is shown that the superlinear expectation of a singleton is usually empty; in order to come up with a nontrivial minimal extension, singletons in the definition of the minimal extension are replaced by translated cones. For arguments being the sum of a point and a cone in R d , we recover the results of Hamel and Heyde [12,13].
Some applications are presented in Sect. 7. Sublinear expectations are useful in order to identify outliers in samples of random sets. Such samples often appear in partially identified models in econometrics, e.g. as intervals giving the salary range (see Molchanov and Molinari [27]), or as interval-valued price ranges in finance. The superlinear expectation can be used to assess multivariate risk in finance and to measure multivariate utilities. The superlinearity property is essential, since the utility of the sum of two portfolios described by random sets "dominates" the sum of their individual utilities. We show that the minimal extension of a superlinear expectation is closely related to the selection risk measure of lower random sets considered by Molchanov and Cascos [26]. Allowing the arguments of multiasset utilities to be general convex random sets makes it possible to use iteration-based constructions in the dynamic framework (see Lépinette and Molchanov [23]) and so consider nonlinear extensions of multivariate martingales. The case of random sets having the form of a vector plus a cone is the standard setting in the theory of markets with proportional transaction costs; see Kabanov and Safarian [19]. Superlinear expectations make it possible to assess utilities (and risks) of such portfolios and so develop dynamic hedging strategies; see [23]. Allowing general arguments of superlinear expectations makes it possible to include models of general convex transaction costs (see Pennanen and Penner [31]), most importantly, the setting of limit order books.
The appendix presents a self-contained proof of the fact that vector-valued sublinear expectations of random vectors necessarily split into sublinear expectations applied to each component of the vector. This fact reiterates the point that the set-valued setting is essential for defining multivariate nonlinear expectations.
We use the following notational conventions: X, Y denote random closed convex sets, F is a deterministic closed convex set, ξ and β are p-integrable random vectors and random variables, ζ and γ are q-integrable vectors and variables with 1/p + 1/q = 1, η is usually a random vector with values in the unit sphere S d−1 , u and v are deterministic points from S d−1 .

Integrable random sets and selection expectation
Let X be a random closed set in R d, always assumed to be almost surely nonempty. A random vector ξ is called a selection of X if ξ ∈ X almost surely. Let L p (X) denote the family of (equivalence classes of) p-integrable selections of X for p ∈ [1, ∞), of essentially bounded ones if p = ∞, and of all selections if p = 0. The random closed set X is said to be p-integrable if L p (X) is nonempty. If X is integrable, then its selection expectation is defined by

E[X] = cl{E[ξ] : ξ ∈ L 1 (X)}. (2.1)

The support function of any nonempty set F in R d is defined by

h(F, u) = sup{⟨u, x⟩ : x ∈ F}, u ∈ R d, (2.2)

allowing possibly infinite values if F is not bounded, where ⟨u, x⟩ denotes the scalar product. Due to homogeneity, the support function is determined by its values on the unit sphere S d−1. If X is an integrable random closed set, then its expected support function is the support function of E[X], that is,

E[h(X, u)] = h(E[X], u), u ∈ R d, (2.3)

which may be seen as the dual representation of the selection expectation with (2.1) being its primal representation. Ararat and Rudloff [2] provide an axiomatic Daniell–Stone type characterisation of the selection expectation. The property (2.3) can also be expressed as

E[sup{⟨u, x⟩ : x ∈ X}] = sup{⟨u, x⟩ : x ∈ E[X]},

meaning that in this case it is possible to interchange expectation and supremum. If X is an integrable random closed set and H is a sub-σ-algebra of F, the conditional expectation E[X|H] is identified by its support function, being the conditional expectation of the support function of X; see Hiai and Umegaki [16] and [25, Sect. 2.1.6].
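The interchange of expectation and support function in (2.3) can be verified directly in a finite setting. The sketch below (illustrative names; a purely atomic probability space) computes the Minkowski combination of the realisations, which there plays the role of the selection expectation, and checks h(E[X], u) = E[h(X, u)]:

```python
# Hedged numeric sketch: X takes the finite point-set value X1 with prob 0.4
# and X2 with prob 0.6 on a two-atom probability space. The expectations of
# its selections form the Minkowski combination 0.4*X1 + 0.6*X2, and the
# support function satisfies (2.3). All names and sets are illustrative.

def h(points, u):
    """Support function h(F, u) = sup over F of <u, x> for a finite planar set."""
    return max(u[0] * x + u[1] * y for (x, y) in points)

def minkowski(A, B):
    """Minkowski sum of two finite planar point sets."""
    return {(a0 + b0, a1 + b1) for (a0, a1) in A for (b0, b1) in B}

def scale(A, c):
    """Dilation cA of a finite planar point set."""
    return {(c * x, c * y) for (x, y) in A}

X1 = {(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)}
X2 = {(2.0, 1.0), (3.0, 1.0)}
p = [0.4, 0.6]

EX = minkowski(scale(X1, p[0]), scale(X2, p[1]))

for u in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-1.0, 2.0)]:
    lhs = h(EX, u)                               # h(E[X], u)
    rhs = p[0] * h(X1, u) + p[1] * h(X2, u)      # E[h(X, u)]
    assert abs(lhs - rhs) < 1e-9
```

The equality holds exactly here because the support function is additive under Minkowski sums and positively homogeneous under dilations.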
The dilation (scaling) of a closed set F is defined as cF = {cx : x ∈ F} for c ∈ R. For two closed sets F 1 and F 2, their closed Minkowski sum is defined by

F 1 + F 2 = cl{x + y : x ∈ F 1, y ∈ F 2},

and the sum is empty if at least one summand is empty. If at least one of F 1 and F 2 is compact, the closure on the right-hand side is not needed. We write shortly F + a instead of F + {a} for a ∈ R d.
If X and Y are random closed convex sets, then X + Y is a random closed convex set; see [25, Theorem 1.3.25]. The selection expectation is linear on integrable random closed sets, that is, E[X + Y] = E[X] + E[Y].

In the following, the letter C always refers to a deterministic closed convex cone in R d which is distinct from the whole space. If F = F + C, then F is said to be C-closed. Due to the closed Minkowski sum on the right-hand side, F is also topologically closed. Let co F(R d, C) denote the family of all C-closed convex sets in R d (including the empty set), and let L p (co F(R d, C)) be the family of all p-integrable random sets with values in co F(R d, C). Any such random set is necessarily a.s. nonempty. By

C o = {u ∈ R d : ⟨u, x⟩ ≤ 0 for all x ∈ C}

we denote the polar cone of C.
Example 2.1 If C = (−∞, 0] d, then co F(R d, C) is the family of lower convex closed sets, and a random closed convex set with realisations in this family is called a random lower set.

Example 2.2 Let C be a convex closed cone in R d which does not coincide with the whole space. If X = ξ + C for ξ ∈ L p (R d), then X belongs to the space L p (co F(R d, C)). For each ζ ∈ L q (C o), we have h(X, ζ) = ⟨ξ, ζ⟩.

Support function at random directions
For a random vector η with values in the unit sphere S d−1, the set

H η (X) = {x ∈ R d : ⟨η, x⟩ ≤ h(X, η)}

is the smallest half-space with outer normal η that contains X, and each random closed convex set X satisfies

X = ∩ {H η (X) : η ∈ L 0 (S d−1)}. (2.5)

If X is a.s. C-closed, then (2.5) holds with η running through the family of selections of S d−1 ∩ C o. For each ζ ∈ L q (R d), the support function h(X, ζ) is a random variable with values in (−∞, ∞]; see [22, Lemma 3.1]. While h(X, ζ) is not necessarily integrable, its negative part is always integrable if X is p-integrable. Indeed, choose any ξ ∈ L p (X) and write h(X, ζ) = h(X − ξ, ζ) + ⟨ξ, ζ⟩. The second summand on the right-hand side is integrable, while the first one is nonnegative, since 0 ∈ X − ξ almost surely.

A random closed set X is called Hausdorff-approximable if it appears as the almost sure limit in the Hausdorff metric of random closed sets with at most a finite number of values. It is known [25, Theorem 1.3.18] that all random compact sets are Hausdorff-approximable, as well as those that appear as the sum of a random compact set and a random closed set with at most a finite number of possible values. The random closed set X from Example 2.3 is not Hausdorff-approximable.
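The integrability argument above is easy to test numerically: translating by a selection makes the remaining support-function term nonnegative. A small illustrative check (a hypothetical finite set standing in for a realisation of X):

```python
# Hedged sketch: h(X, zeta) = h(X - xi, zeta) + <xi, zeta> for a point xi of X
# (playing the role of a selection value), and h(X - xi, zeta) >= 0 because
# 0 lies in X - xi. The sets and directions below are illustrative.

def h(points, u):
    """Support function of a finite planar point set."""
    return max(u[0] * x + u[1] * y for (x, y) in points)

X = [(1.0, 2.0), (3.0, -1.0), (2.0, 4.0)]
xi = (1.0, 2.0)  # one of the points of X
X_shift = [(x - xi[0], y - xi[1]) for (x, y) in X]

for zeta in [(1.0, 0.0), (-2.0, 3.0), (0.5, -1.5)]:
    lhs = h(X, zeta)
    rhs = h(X_shift, zeta) + (xi[0] * zeta[0] + xi[1] * zeta[1])
    assert abs(lhs - rhs) < 1e-9
    assert h(X_shift, zeta) >= 0.0  # 0 is in X - xi, so the term is nonnegative
```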
The distribution of a Hausdorff-approximable p-integrable random closed convex set X is uniquely determined by the selection expectations E[γ X] for all γ ∈ L q (R + ), and it actually suffices to let γ run through all measurable indicators; see Hess [15] and [25,Proposition 2.1.33]. If X is Hausdorff-approximable, then its selections ξ are identified by the condition E[ξ 1 A ] ∈ E[X1 A ] for all A ∈ F. By passing to the support functions, we arrive at a variant of Lemma 2.4 with ζ = u1 A for all u ∈ S d−1 and A ∈ F.

Convergence of random closed convex sets
Convergence of random closed sets is typically considered in probability, almost surely or in distribution; see Molchanov [25, Sect. 1.7]. In the following, we define L p-type convergence concepts. The space L p (R d) is equipped with the σ(L p, L q)-topology, that is, ξ n → ξ means that E[⟨ξ n, ζ⟩] → E[⟨ξ, ζ⟩] for all ζ ∈ L q (R d).

Lemma 2.6
Recall that C denotes a generic convex cone in R d which differs from the whole space. If X is a p-integrable random C-closed convex set, then L p (X) is a nonempty convex σ (L p , L q )-closed and L p (C)-closed subset of L p (R d ).
Proof If ξ n ∈ L p (X) and ξ n → ξ ∈ L p (R d) in σ(L p, L q), then E[⟨ξ, ζ⟩] = lim E[⟨ξ n, ζ⟩] ≤ E[h(X, ζ)] for all ζ ∈ L q (R d). Thus ξ is a selection of X by Lemma 2.4. The statement concerning C-closedness is obvious.
A sequence (X n) n∈N in L p (co F(R d, C)) is said to converge to a random set X ∈ L p (co F(R d, C)) scalarly in σ(L p, L q) (shortly, scalarly) if

E[h(X n, ζ)] → E[h(X, ζ)] for all ζ ∈ L q (R d),

where the convergence is understood in the extended line (−∞, ∞]. Since E[h(X n, ζ)] equals the support function of L p (X n) in the direction ζ, this convergence is the scalar convergence L p (X n) → L p (X) as convex sets in L p (R d); see Sonntag and Zălinescu [34].

Definitions
Fix p ∈ [1, ∞] and a convex closed cone C distinct from the whole space R d .

Definition 3.1 A sublinear set-valued expectation is a function E : L p (co F(R d, C)) → co F(R d, C) which is

i) monotone, that is, E(X) ⊆ E(Y) if X ⊆ Y a.s.;
ii) such that F ⊆ E(F) for each deterministic F ∈ co F(R d, C);
iii) homogeneous, that is, E(cX) = cE(X) for all c > 0;
iv) additive on deterministic singletons, that is, E(X + a) = E(X) + a for all a ∈ R d;
v) subadditive, that is,

E(X + Y) ⊆ E(X) + E(Y) (3.1)

for all p-integrable random closed convex sets X and Y. A superlinear set-valued expectation U satisfies the same properties with the exception of ii) replaced by U(F) ⊆ F and (3.1) replaced by the superadditivity property

U(X + Y) ⊇ U(X) + U(Y). (3.2)

The nonlinear expectations E and U are said to be law-invariant if they retain their values on identically distributed random closed convex sets.
If E(X) is empty for some p-integrable random closed convex set X, then E(Y) is empty for all p-integrable random sets Y. Thus each sublinear expectation is either always empty or always nonempty. In view of this, we assume that sublinear expectations take nonempty values. We always exclude the trivial cases when E(X) = R d for all X or U(X) = ∅ for all X.
Note that E(C) is a closed convex cone, which may be strictly larger than C. By Proposition 3.2, U(C) either equals C or is empty. The sublinear (respectively, superlinear) expectation is said to be normalised if E(C) = C (respectively, U(C) = C). We always have E(R d) = R d by property ii). The properties of nonlinear expectations do not imply that they preserve deterministic convex closed sets. A nonlinear expectation is said to be constant-preserving if all nonempty deterministic sets from co F(R d, C) are invariant.
The superlinear and sublinear expectations E and U form a dual pair if U(X) is a subset of E(X) for each p-integrable random closed convex set X. In contrast to the univariate setting, the reflection −X = {−x : x ∈ X} of X with respect to the origin does not alter the direction of set inclusions, so that an exact duality relation like (1.4) is no longer available.

For a sequence (F n) n∈N of closed sets, its lower limit lim inf n→∞ F n is the set of limits of all convergent sequences x n ∈ F n, n ∈ N, and its upper limit lim sup n→∞ F n is the set of limits of all convergent subsequences x n k ∈ F n k, k ∈ N.
The sublinear expectation E is called lower semicontinuous if

h(E(X), u) ≤ lim inf n→∞ h(E(X n), u) for all u ∈ R d, (3.3)

and U is upper semicontinuous if

U(X) ⊇ lim sup n→∞ U(X n)

for any sequence (X n) n∈N of random closed convex sets converging to X in the chosen topology; e.g., E is scalarly lower semicontinuous if (3.3) holds whenever (X n) scalarly converges to X. Note that our lower semicontinuity definition is weaker than its standard variant for set-valued functions, which would require that E(X) is a subset of lim inf n→∞ E(X n); see Hu and Papageorgiou [18, Proposition 2.35].

Proposition 3.3 If X + X′ = R d a.s. with X′ being an independent copy of X, then E(X) = R d for each law-invariant sublinear expectation E.

Proof By subadditivity and law-invariance, R d = E(X + X′) ⊆ E(X) + E(X′) = 2E(X), whence E(X) = R d.

Proposition 3.3 applies if X = H η (0) is a half-space with a non-atomic η, so that each law-invariant sublinear expectation on such random sets takes trivial values.

Remark 3.5
It is possible to consider nonlinear expectations defined only on some special random sets, e.g. singletons or half-spaces. It is then only required that the family of such sets is closed under translations, dilations by positive reals and Minkowski sums.
Remark 3.6 Utility functions of random variables are usually assumed to be superadditive. Risk measures of random variables are defined by inverting the sign and so become subadditive. In order to resemble the terminology common for risk measures, the family co F could be ordered by the reverse inclusion ordering; then the terminology is correspondingly adjusted, e.g. a superlinear expectation becomes sublinear and monotonically decreasing. The use of the reverse inclusion order promoted by Hamel et al. [13,14] is largely motivated by financial terminology, where risk measures are traditionally assumed to be antimonotonic and subadditive; see e.g. Föllmer and Schied [10,Chap. 4]. In the reverse inclusion order, set-valued risk measures become subadditive, exactly as conventional risk measures of random variables are. We, however, systematically consider the conventional inclusion order, and so our set-valued setting extends the setup advocated by Delbaen [7] in the numerical case. He considers utility functions instead of risk measures: utility functions are superlinear and increasing, corresponding to the properties of the superlinear set-valued expectation U . Thus up to a change of terminology, our superlinear expectation corresponds to the sublinear set-valued risk measure of Hamel et al. [13,14]. On the other hand, our sublinear expectation is a different object, which requires a separate treatment. Indeed, in the set-valued framework, a change of sign (that is, the central symmetry) does not alter the direction of the inclusion, and so it is not possible to convert a superlinear function to a sublinear one.
Remark 3.7 Motivated by financial applications, it is possible to replace the homogeneity and sub-(super-)additivity properties with convexity or concavity, e.g. requiring that U(λX + (1 − λ)Y) ⊇ λU(X) + (1 − λ)U(Y) for all λ ∈ [0, 1].
But then U can be turned into a superlinear expectation for random sets in the space R d+1. The arguments of this lifted expectation are random closed convex sets Y = {t} × X with t > 0; they form a family closed under dilations, Minkowski sums and translations by singletons from R + × R d. Note that selections of {t} × X are given by (t, ξ) with ξ being a selection of X.
In view of this, all results in the homogeneous case apply to the convex case if the dimension is increased by one.

Examples
The simplest example is provided by the selection expectation, which is linear and law-invariant on all integrable random convex sets.

Example 3.8 Let

F X = {x ∈ R d : x ∈ X a.s.}

denote the set of fixed points of a random closed set X. If X is almost surely convex, then F X is also convex, and if X is compact with positive probability, then F X is compact. It is easy to see that U(X) = F X defines a superlinear expectation. With a similar idea, it is possible to define the sublinear expectation E(X) = supp X as the support of X, which is the set of points x ∈ R d such that X hits any open neighbourhood of x with positive probability. By the monotonicity property, each sublinear expectation E satisfies E(X) ⊆ E(supp X), since X ⊆ supp X a.s.
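In the finite-valued case, the two constructions of Example 3.8 are easy to compute; the following sketch (illustrative sets, finitely many realisations each of positive probability) evaluates F X as the intersection and supp X as the union of the values:

```python
# Hedged sketch for Example 3.8: if X takes finitely many values, each with
# positive probability, then F_X is the intersection of the values and
# supp X is their union. Names and the sample sets are illustrative.
from functools import reduce

def fixed_points(values):
    """F_X: points belonging to every realisation of X."""
    return reduce(lambda A, B: A & B, values)

def support(values):
    """supp X: points hit by X with positive probability (finite case)."""
    return reduce(lambda A, B: A | B, values)

X_values = [{(0, 0), (1, 0)}, {(0, 0), (0, 1)}, {(0, 0), (1, 1)}]

assert fixed_points(X_values) == {(0, 0)}
assert fixed_points(X_values) <= support(X_values)  # F_X is contained in supp X
assert len(support(X_values)) == 4
```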

Expectations of singletons
The additivity property on deterministic singletons immediately yields the following useful fact.

Lemma 3.11
We have E(X) = {x ∈ R d : 0 ∈ E(X − x)}, and the same holds for the superlinear expectation.
Fix C = {0}. Restricted to singletons, the sublinear expectation is a homogeneous subadditive map E : L p (R d) → co F which is additive on deterministic singletons. Assuming in addition lower semicontinuity, the sublinear expectation then becomes the usual (linear) expectation.
The following result concerns the superlinear expectation of singletons. For a general cone C, a similar result holds with singletons replaced by sets ξ + C.
Proposition 3.12 For each ξ ∈ L p (R d) and any normalised superlinear expectation U, the set U({ξ}) is either empty or a singleton, and U is additive on singletons, so that the superadditivity inclusion turns into an equality.
In view of Proposition 3.12 and if we impose in addition upper semicontinuity on the superlinear expectation, U ({ξ }) equals {E[ξ ]} or is empty for each p-integrable ξ . The family of ξ ∈ L p (R d ) such that U ({ξ }) = ∅ is then a convex cone in L p (R d ).

Nonlinear expectations of random convex functions
A lower semicontinuous convex function f : R d → (−∞, ∞] can be represented as the support function of a convex closed set T f in R × R d, evaluated at points (t, x) with t > 0. This support function is called the perspective transform of f; see Hiriart-Urruty and Lemaréchal [17, Sect. IV.2.2]. Note that f can be recovered by letting t = 1 in the support function of T f. If x → ξ(x) is a random nonnegative lower semicontinuous convex function on R d, then its sublinear expectation can be defined as E(ξ)(x) = h(E(T ξ), (1, x)), and the superlinear one is defined similarly. With this definition, all constructions from this paper apply to random functions.
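One standard way to realise f as such a support function is via the Fenchel conjugate; the particular choice of T f below is a sketch under that assumption, as the original display is not reproduced here:

```latex
% Assumed conjugate-based choice (f^* denotes the Fenchel conjugate of a
% proper lower semicontinuous convex f):
\[
  T_f = \bigl\{ (s, y) \in \mathbb{R} \times \mathbb{R}^d : s + f^*(y) \le 0 \bigr\}.
\]
% Maximising $ts$ over $s \le -f^*(y)$ and using $f^{**} = f$ gives, for $t > 0$,
\[
  h\bigl(T_f, (t, x)\bigr)
  = \sup_{y} \bigl( \langle x, y \rangle - t f^*(y) \bigr)
  = t f\bigl( x / t \bigr),
\]
% so letting $t = 1$ recovers $f$, in line with the text.
```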

Extensions of nonlinear expectations

Minimal extension
The minimal extension of a sublinear set-valued expectation E on random sets from L p (co F(R d, C)) is defined as

co ∪ {E(ξ + C) : ξ ∈ L p (X)},

where co denotes the closed convex hull operation. The extension is called minimal since it is the smallest sublinear expectation compatible with the values of the original expectation on sets ξ + C. It extends a sublinear expectation defined on sets ξ + C to all p-integrable random closed sets X such that X = X + C a.s. In terms of support functions, the minimal extension is given by

sup{h(E(ξ + C), u) : ξ ∈ L p (X)}, u ∈ R d.

Proof The additivity of the minimal extension on deterministic singletons follows from this property of E.
The homogeneity and monotonicity properties of E are obvious. The subadditivity follows from the fact that L p (X + Y ) is the L p -closure of the sum L p (X) + L p (Y ); see [25, Proposition 2.1.6].
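In a finite setting the minimal extension can be enumerated directly; the sketch below (illustrative, with C = {0} and the original expectation acting linearly on singletons, E(ξ + C) = {E[ξ]}) lists the expectations of all selections, whose convex hull is the minimal extension:

```python
# Hedged sketch of the minimal extension: on a finite probability space, the
# selections of a finite-valued random set are the per-atom choices, and the
# minimal extension is the convex hull of their expectations. With C = {0}
# and E(xi + C) = {E[xi]} (a linear choice, for illustration only).
from itertools import product

def selection_expectations(values, probs):
    """Expectations E[xi] over all selections xi of a finite-valued planar X."""
    points = []
    for choice in product(*values):  # pick one point from each realisation
        ex = sum(p * c[0] for p, c in zip(probs, choice))
        ey = sum(p * c[1] for p, c in zip(probs, choice))
        points.append((ex, ey))
    return points  # the closed convex hull of these points is the extension

values = [[(0.0, 0.0), (2.0, 0.0)], [(0.0, 0.0), (0.0, 2.0)]]
probs = [0.5, 0.5]
pts = selection_expectations(values, probs)

# Four selections -> four expectations spanning the convex hull (unit square).
assert (0.0, 0.0) in pts and (1.0, 1.0) in pts
assert len(pts) == 4
```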

Maximal extension
Extending a superlinear expectation U from its values on half-spaces yields its maximal extension

∩ {U(H η (X)) : η ∈ L 0 (S d−1 ∩ C o)}, (4.3)

the intersection of superlinear expectations of random half-spaces almost surely containing X ∈ L p (co F(R d, C)). The maximal extension is the largest superlinear expectation consistent with the values of the original one on half-spaces.

Proposition 4.2 If U is superlinear on half-spaces with the same normal, that is,

U(H η (β)) + U(H η (β′)) ⊆ U(H η (β + β′)) (4.4)

for all β, β′ ∈ L p (R) and η ∈ L 0 (S d−1 ∩ C o), and is scalarly upper semicontinuous on half-spaces with the same normal, then the maximal extension (4.3) is a scalarly upper semicontinuous superlinear expectation.

Proof The additivity on deterministic singletons follows from the fact that H η (X + a) = H η (X) + a for each deterministic a ∈ R d. The homogeneity and monotonicity properties of the extension are obvious. For two p-integrable random closed convex sets X and Y, (4.4) yields that

U(H η (X)) + U(H η (Y)) ⊆ U(H η (X) + H η (Y)) ⊆ U(H η (X + Y)),

so that the maximal extension is superadditive. Assume that (X n) scalarly converges to X. Let x n k ∈ U(X n k) and let (x n k) converge to x. Then H η (X n k) scalarly converges to H η (X), and the scalar upper semicontinuity of U on half-spaces yields that U(H η (X)) ⊇ lim sup k→∞ U(H η (X n k)), whence x ∈ U(H η (X)) for all η. Therefore x ∈ U(X), confirming the upper semicontinuity of the maximal extension. The law-invariance property is straightforward.
It is possible to let η in (4.3) be deterministic and define the reduced maximal extension

∩ {U(H u (X)) : u ∈ S d−1 ∩ C o}.

With this reduced maximal extension, the superlinear expectation is extended from its values on half-spaces with deterministic normal vectors. Note that the reduced maximal extension may be equal to the whole space, e.g. for X being a half-space H η (0) with a nondeterministic normal. The original expectation is dominated by its maximal extension, which in turn is dominated by the reduced maximal extension; the reduced maximal extension is constant-preserving. The reduced maximal extension is particularly useful for Hausdorff-approximable random closed sets.
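The reduced maximal extension can be probed numerically by testing membership against a grid of deterministic directions; the sketch below (illustrative, with U acting linearly on half-spaces, U(H u (β)) = H u (E[β])) checks whether a point lies in the intersection:

```python
# Hedged sketch: the reduced maximal extension intersects, over deterministic
# unit vectors u, the sets U(H_u(X)). Here U(H_u(beta)) = H_u(E[beta]) is a
# linear choice made purely for illustration, so the intersection reduces to
# <u, x> <= E[h(X, u)] for every u on a direction grid.
import math

def h(points, u):
    """Support function of a finite planar point set."""
    return max(u[0] * x + u[1] * y for (x, y) in points)

def in_reduced_extension(x, values, probs, n_dirs=360):
    """Test <u, x> <= E[h(X, u)] over a grid of deterministic directions u."""
    for k in range(n_dirs):
        a = 2 * math.pi * k / n_dirs
        u = (math.cos(a), math.sin(a))
        bound = sum(p * h(v, u) for p, v in zip(probs, values))
        if u[0] * x[0] + u[1] * x[1] > bound + 1e-12:
            return False
    return True

values = [[(0.0, 0.0), (2.0, 0.0)], [(0.0, 0.0), (0.0, 2.0)]]
probs = [0.5, 0.5]
assert in_reduced_extension((0.5, 0.5), values, probs)
assert not in_reduced_extension((2.0, 2.0), values, probs)
```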

Exact nonlinear expectations
It is possible to apply the maximal extension to the sublinear expectation and the minimal extension to the superlinear one. The monotonicity property yields that for each p-integrable random closed set X, the minimal extension of a nonlinear expectation is a subset of the expectation itself, which in turn is a subset of its maximal extension (4.6). It is easy to see that each extension is an idempotent operation, e.g. the minimal extension of the minimal extension of E coincides with the minimal extension of E.

A sublinear expectation is said to be minimal (respectively, maximal) if it coincides with its minimal (respectively, maximal) extension. A superlinear expectation is said to be reduced maximal if it coincides with its reduced maximal extension.
If (4.6) holds with the first two inclusions being equalities (that is, E coincides with its minimal and maximal extensions), then E is called exact. The same applies to superlinear expectations. Note that the selection expectation is exact on all integrable random closed convex sets, its minimality corresponds to (2.1) and maximality is (2.3).
Since random convex closed sets can be represented either as families of their selections or as intersections of half-spaces, the minimal representation of an exact nonlinear expectation may be considered its primal representation, while the maximal representation becomes the dual one.

Duality for minimal sublinear expectations
The minimal sublinear expectation is determined by its restriction on random sets ξ + C; the following result characterises such a restriction.

Lemma 5.1 A function E on random sets of the form ξ + C, ξ ∈ L p (R d), is a σ(L p, L q)-lower semicontinuous normalised sublinear expectation if and only if

h(E(ξ + C), u) = sup{E[⟨ξ, ζ⟩] : ζ ∈ Z u, E[ζ] = u} (5.1)

for all u ≠ 0, where the sets Z u are convex σ(L q, L p)-closed cones in L q (R d) such that Z cu = Z u for all c > 0, Z 0 = {0}, and Z u+v ⊆ Z u + Z v for linearly independent u and v.

Proof (Sufficiency) For linearly independent u and v in R d, each ζ ∈ Z u+v satisfies ζ = ζ 1 + ζ 2 with ζ 1 ∈ Z u and ζ 2 ∈ Z v. Since Z cu = Z u = cZ u for any c > 0, the function h(E(ξ + C), u) is sublinear in u and hence a support function. The additivity property on singletons follows from the construction, since E[⟨ξ + a, ζ⟩] = E[⟨ξ, ζ⟩] + ⟨a, u⟩ whenever E[ζ] = u, for each deterministic a ∈ R d. Furthermore, h(E(C), u) = h(C, u), which implies that E(C) = C. The homogeneity property is obvious. The function E is subadditive, since the supremum of a sum is dominated by the sum of the corresponding suprema. Finally, for u ∈ C o, the set {ζ ∈ Z u : E[ζ] = u} is closed in σ(L q, L p). Since h(E(ξ + C), u) is the support function of the closed set {ζ ∈ Z u : E[ζ] = u} in the direction ξ, it is lower semicontinuous as a function of ξ, so that (3.3) holds.
(Necessity) By Proposition 3.2, the support function is infinite for u ∉ C o. For u ∈ C o, let

A u = {ξ ∈ L p (R d) : h(E(ξ + C), u) ≤ 0}.

By sublinearity, A u is a convex cone in L p (R d), and A cu = A u for all c > 0. Furthermore, A u is closed with respect to the scalar convergence ξ n + C → ξ + C by the assumed lower semicontinuity of E. Hence it is closed with respect to the convergence ξ n → ξ in σ(L p, L q).
Note that 0 ∈ A u, and let Z u be the polar cone of A u in L q (R d). Since A u is convex and σ(L p, L q)-closed, the bipolar theorem yields that A u coincides with the polar cone of Z u, and the representation (5.1) follows.

Theorem 5.2 A function E : L p (co F(R d, C)) → co F(R d, C) is a scalarly lower semicontinuous minimal normalised sublinear expectation if and only if E admits the representation

h(E(X), u) = sup{E[h(X, ζ)] : ζ ∈ Z u, E[ζ] = u}, u ∈ C o, (5.2)

with families Z u satisfying the conditions of Lemma 5.1.
Proof (Sufficiency) The right-hand side of (5.2) is sublinear in u and so is a support function. The additivity on singletons, monotonicity, subadditivity and homogeneity properties of E are obvious. For a deterministic F ∈ co F(R d, C), the sublinearity of the support function yields that E[h(F, ζ)] ≥ h(F, E[ζ]) = h(F, u), so that F ⊆ E(F).
Since the support function of E(X) given by (5.2) is the supremum of scalarly continuous functions of X, the minimal sublinear expectation is scalarly lower semicontinuous.

Remark 5.3
The sets Z u , u ∈ R d , constructed in the proof of necessity in Lemma 5.1 are maximal sets representing the sublinear expectation.

Corollary 5.4
If u ∈ Z u for all u ∈ R d , then E[X] ⊆ E(X) for all p-integrable X and any scalarly lower semicontinuous normalised minimal sublinear expectation E.
Remark 5.5 The sublinear expectation given by (5.2) is law-invariant if and only if the sets Z u are law-complete, that is, with each ζ ∈ Z u , the set Z u contains all random vectors that have the same distribution as ζ . If p = ∞, then the elements of Z u can be represented as vectors composed of probability measures absolutely continuous with respect to P. This is also possible for p ∈ [1, ∞) using measures with p-integrable densities.

Then (5.2) turns into the condition h(E(X), u) = E[h(Z X, u)], whence E(X) = E[Z X]. In this example, h(E(X), u) is not solely determined by h(X, u).
This sublinear expectation is not necessarily constant-preserving.
If the normal η = u is deterministic and the relevant finiteness condition holds, then E(H u (β)) = H u (e u (β)); otherwise, E(H u (β)) = R d . Thus the sublinear expectation of a random half-space with a deterministic normal is either a half-space with the same normal or the whole space.

Consider now the situation when for each u, the value of h(E(X), u) is solely determined by the distribution of h(X, u). This is the case if the supremum in (5.2)
involves only ζ such that ζ = γ u for some γ ∈ L q (R + ). The following result shows that this condition characterises constant-preserving minimal sublinear expectations, which then necessarily become exact ones.

Theorem 5.8 A map E : L p (co F(R d , C)) → co F(R d , C) is a scalarly lower semicontinuous constant-preserving minimal sublinear expectation if and only if
whence E is constant-preserving.
(Necessity) Since E is minimal, the support function of E(X) is given by (5.2). The constant-preserving property yields that E(H u (t)) = H u (t) for all half-spaces H u (t) with u ∈ C o . By the argument from Example 5.7, the minimal sublinear expectation of a half-space H u (t) is distinct from the whole space only if (5.3) holds.
The properties of Z u imply those of M u = {γ : γ u ∈ Z u }, with the numerical sublinear expectation e u defined by an analogue of (1.5). Since the negative part of h(X, u) is p-integrable, it is possible to consistently let e u (h(X, u)) = ∞ in (5.5) if h(X, u) is not p-integrable.

Corollary 5.9 Each scalarly lower semicontinuous constant-preserving minimal sublinear expectation is exact.
Proof Since (5.4) yields that E(H η (X)) = R d if η is random, the maximal extension of E by an analogue of (4.3) reduces to deterministic η, and so E = E is the reduced maximal extension. For u ∈ S d−1 ∩ C o and β ∈ L p (R), we have E(H u (β)) = H u (e u (β)); cf. Example 5.7. Thus the reduced maximal extension of E is given by

Corollary 5.11 Let E be a scalarly lower semicontinuous constant-preserving minimal law-invariant sublinear expectation. Then we have E(E[X|H]) ⊆ E(X) for all X ∈ L p (co F(R d , C)) and any σ -algebra H ⊆ F. In particular, E[X] ⊆ E(X).
Proof The law-invariance of E implies that e u is law-invariant. The sublinear expectation e u is dilatation-monotonic, meaning that e u (E[β|H]) ≤ e u (β) for all β ∈ L p (R); see Föllmer and Schied [10, Corollary 4.59] for this fact derived for risk measures. The statement follows from (5.5).
The following result (Theorem 5.12) identifies the particularly important case when the families M u = M do not depend on u; this essentially means that the sublinear expectation preserves centred balls. Let B r denote the ball of radius r centred at the origin. In this case, the representation (5.6) holds, where e admits the representation (1.5). Furthermore,
By (5.6), the support functions of both sides of (5.7) are identical.
If X = {ξ } is a singleton, there is no need to take the convex hull on the right-hand side of (5.7).
Remark 5.13 Equality (5.6) can be viewed as a scalarisation of the sublinear expectation. Indeed, it represents the convex set E(X) as an intersection of half-spaces H u (e(h(X, u))) and so provides a dual representation of E(X). Such scalarisations have been considered by Hamel and Heyde [13] and Hamel et al. [14] for set-valued risk measures, which are sublinear for the reverse inclusion. In that case, the exact equality may be violated, see (6.5) below, and the scalarisation is defined as the support function of the superlinear expectation.
Example 5.14 For an integrable X and n ∈ N, consider the sublinear expectation E ∪ n (X) = E[conv(X 1 ∪ · · · ∪ X n )], where X 1 , . . . , X n are independent copies of X. It is easy to see that E ∪ n (X) is a minimal constant-preserving sublinear expectation; it is given by (5.6) with the corresponding numerical sublinear expectation e(β) being the expected maximum of n i.i.d. copies of β ∈ L 1 (R). By Corollary 5.9, this sublinear expectation is exact.
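For a discrete β, the numerical sublinear expectation e(β) = E[max(β 1 , . . . , β n )] can be computed exactly; a minimal sketch (the function name is ours, not from the paper):

```python
from fractions import Fraction
from itertools import product

def expected_max(values, n):
    # e(beta) = E[max of n i.i.d. copies of beta] for beta uniformly
    # distributed on `values`, by exact enumeration of all n-tuples
    outcomes = list(product(values, repeat=n))
    return Fraction(sum(max(o) for o in outcomes), len(outcomes))
```

For β uniform on {1, 2, 3}, this gives 2 for n = 1 and 22/9 for n = 2; the value increases with n, in line with the monotone growth of E ∪ n (X).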

Example 5.15
For α ∈ (0, 1), let P α be the family of random variables γ with values in [0, α −1 ] and such that E[γ ] = 1. Furthermore, let M be the cone generated by P α , that is, M = {tγ : γ ∈ P α , t ≥ 0}. In finance, the set P α generates the average value-at-risk, which is the risk measure obtained as an average quantile; see Föllmer and Schied [10, Definition 4.43]. Similarly, the numerical sublinear expectation e and superlinear expectation u generated by this set M are represented as average quantiles. Namely, e(β) is the average of the quantiles of β at levels t ∈ (1 − α, 1), and u(β) is the average of the quantiles at levels t ∈ (0, α).
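Empirically, these average quantiles are tail means of the sorted sample; a minimal sketch under the assumption that αn is (close to) a whole number (helper names are ours):

```python
import numpy as np

def e_alpha(sample, alpha):
    # average of the s-quantiles of beta over s in (1 - alpha, 1):
    # empirically, the mean of the upper alpha-fraction of the sample
    x = np.sort(np.asarray(sample, dtype=float))
    k = max(1, int(round(alpha * len(x))))
    return x[-k:].mean()

def u_alpha(sample, alpha):
    # average of the s-quantiles of beta over s in (0, alpha):
    # empirically, the mean of the lower alpha-fraction of the sample
    x = np.sort(np.asarray(sample, dtype=float))
    k = max(1, int(round(alpha * len(x))))
    return x[:k].mean()
```

The pair is exactly dual in the sense that u(β) = −e(−β) holds for these empirical versions.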

Duality for maximal superlinear expectations
Consider a superlinear expectation defined on L p (co F (R d , C)). If C = {0}, we deal with all p-integrable random closed convex sets. Recall that C o is the polar cone of C.

Theorem 6.1 A map U : L p (co F(R d , C)) → co F(R d , C) is a scalarly upper semicontinuous normalised maximal superlinear expectation if and only if
by the assumed upper semicontinuity of U . Thus A η is a convex σ (L p , L q )-closed cone in L p (R). Consider its positive dual cone M η . Since U (C) = C, we have 0 ∈ U (X) whenever C ⊆ X a.s. In view of this, if β is a.s. nonnegative, then H η (β) a.s. contains zero and so β ∈ A η . Thus each γ from M η is a.s. nonnegative, and the bipolar theorem applies. Since (−t) ∈/ A u , (6.2) implies that the cone M u is strictly larger than {0}. Since U is assumed to be maximal, (4.3) applies. (Sufficiency) It is easy to check that U given by (6.1) is additive on deterministic singletons, homogeneous and monotonic. If F ∈ co F(R d , C) is deterministic, then letting η = u in (6.1) be deterministic and using the nontriviality of M u yields that U (F ) ⊆ F . Furthermore, U (C) = C, since U (C) contains the origin and so is not empty. The superadditivity of U follows, and it is easy to see that U coincides with its maximal extension.
Note that (6.1) admits an equivalent form. If (X n ) scalarly converges to X and x n k → x for x n k ∈ U (X n k ), k ∈ N, then x ∈ U (X), and the upper semicontinuity of U follows.
In contrast to the sublinear case (see Theorem 5.2), the cones M η from Theorem 6.1 need not satisfy additional conditions like those imposed in Lemma 5.1. However, if the intersection in (6.1) is taken over all η ∈ L q (C o ), then one must require that M βη = {γ /β : γ ∈ M η } for all β ∈ L p ((0, ∞)).

Corollary 6.2 If 1 ∈ M η for all η, then U (X) ⊆ E[X] for all p-integrable X and any scalarly upper semicontinuous maximal normalised superlinear expectation U .
Proof Restrict the intersection in (6.1) to deterministic η = u and γ = 1, so that the right-hand side of (6.1) becomes E[X].
Example 6.3 Let X = H η (β) be the half-space with normal η ∈ L 0 (S d−1 ) and β ∈ L p (R). If C = {0}, the maximal superlinear expectation of X is given by the corresponding intersection. Assume that d = 2 and let η = (1, π)/ √ 1 + π 2 with π an almost surely positive random variable. This example represents the case of two currencies exchangeable at rate π without transaction costs. We then have a representation in which u is the numerical superlinear expectation with a suitable representing set. In particular, if β = 0 a.s., then the random set H η (0) describes all portfolios available at price zero for two currencies with the exchange rate π , and the answer is expressed through the points v = (1, e(π)) and w = (1, u(π)) for the exact dual pair e and u of nonlinear expectations with the representing set M.

Reduced maximal extension
The following result can be proved similarly to Theorem 6.1 for the reduced maximal extension from (4.5).

Theorem 6.4 A map U : L p (co F(R d , C)) → co F is a scalarly upper semicontinuous normalised reduced maximal superlinear expectation if and only if
It is possible to take the intersection in (6.3) over a smaller family. The representation (6.3) can be equivalently written as an intersection of half-spaces.
Corollary 6.5 Let U : L p (co F(R d , C)) → co F be a scalarly upper semicontinuous law-invariant normalised reduced maximal superlinear expectation, and let the probability space be non-atomic. Then U is dilatation-monotonic, meaning that for each sub-σ -algebra H ⊆ F and all X ∈ L p (co F(R d , C)),

In particular, U (X) ⊆ E[X].
Proof Since u v (β) given by (6.4) is a law-invariant concave function of β ∈ L p (R) and the probability space is non-atomic, it is dilatation-monotonic, meaning that u v (E[ξ |H]) ≥ u v (ξ ); see Föllmer and Schied [10, Corollary 4.59]. Hence the infimum on the right-hand side of (6.3) written for U (X) is dominated by the infimum corresponding to U (E[X|H]). This implies the inclusion of the two sets.
Example 6.6 If the representing cone in (6.3) is nontrivial and does not depend on v, then (6.3) turns into a simpler formula, where u given by (6.4) is the numerical superlinear expectation with the representing set M. In this case, U (X) is the largest convex set whose support function is dominated by u(h(X, v)), that is, (6.5) holds. Note that u(h(X, ·)) may fail to be a support function. The left-hand side of (6.5) is the scalarisation of the superlinear expectation U (X); cf. Hamel and Heyde [13] and Hamel et al. [14]. For X ∈ L p (co F(R d , C)), this reduced maximal superlinear expectation admits an equivalent representation.
Example 6.7 Let X = ξ + C for a ξ ∈ L p (R d ) and a deterministic convex closed cone C that is different from the whole space. Then

U (ξ + C) is the intersection of the sets E[γ ξ ] + C.
A cone C is said to be a Riesz cone (or lattice cone) if R d with the partial order generated by C is a Riesz space (or a vector lattice), that is, the maximum of any two points from R d is well defined. If this is the case, then U (ξ + C) = x + C for some x, since an intersection of translations of C is again a translation of C; see Aliprantis and Tourky [1, Theorem 1.16].
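For the Riesz cone C = R d − , a translate x + C is the lower orthant {y : y ≤ x}, so an intersection of translates is again a translate, located at the coordinatewise minimum. A quick illustrative sketch of this fact (function names are ours):

```python
import numpy as np

def intersect_translates(points):
    # intersection of the sets x_i + C for C = R^d_-: since
    # x + C = {y : y <= x componentwise}, the intersection equals m + C
    # with m the coordinatewise minimum of the translation points x_i
    return np.min(np.asarray(points, dtype=float), axis=0)

def in_translate(m, y):
    # membership test for the set m + C with C = R^d_-
    return bool(np.all(np.asarray(y, dtype=float) <= np.asarray(m, dtype=float)))
```

This is the simplest instance of the lattice property: the intersection of translations of C is again a translation of C.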
Example 6.8 Let U (X) = E[X 1 ∩ · · · ∩ X n ] for n independent copies of X, noticing that the expectation is empty if the intersection X 1 ∩ · · · ∩ X n is empty with positive probability. This superlinear expectation is not a reduced maximal one. Its reduced maximal extension U (X) is the largest convex set whose support function is dominated by U (H v (X)), v ∈ S d−1 . However, the support function of E[X 1 ∩ · · · ∩ X n ] is the expectation of the largest sublinear function dominated by min(h(X i , v), i = 1, . . . , n), and so U (X) may be a strict subset of U (X). In a particular instance, the minimum is applied coordinatewise to independent copies of ξ , while U (X) is the largest convex set whose support function is dominated by the resulting function, with a possibly strict inequality.

Minimal extension of a superlinear expectation
In any nontrivial case, the superlinear expectation of a nondeterministic singleton is empty.

Proposition 6.9 Let U be a normalised superlinear expectation satisfying the conditions of Proposition 4.2. Then
for all v ∈ S d−1 .
Proof By a variant of Proposition 4.2 for the reduced maximal extension, this extension satisfies the conditions of Theorem 6.4 and hence admits the representation (6.3).
If ξ ∈ L p (R d ), then (6.3) yields an expression for U ({ξ }) which is not empty only if (6.7) holds.
In the setting of Example 6.6, U ({ξ }) is empty unless u(⟨ξ, v⟩) + u(⟨−ξ, v⟩) is nonnegative for all v. The latter means that u(⟨ξ, v⟩) = e(⟨ξ, v⟩) for the exact dual pair of real-valued nonlinear expectations. If this is the case for all ξ ∈ L p (X), then the minimal extension of U (X) is the set F X of fixed points of X; see Example 3.8. Thus it is not feasible to come up with a nontrivial minimal extension of the superlinear expectation if C = {0}.
A possible way to ensure nonemptiness of the minimal extension is to apply it to random sets X from L p (co F(R d , C)) with a cone C having interior points, since then at least one of h(X, v) and h(X, −v) is almost surely infinite for all v ∈ S d−1 . The minimal extension of U is given by (6.8). The following result implies in particular that the union on the right-hand side of (6.8) is a convex set; cf. (4.1).
Theorem 6.10 Let U be a scalarly upper semicontinuous law-invariant normalised reduced maximal superlinear expectation, and let the probability space be nonatomic. Then the minimal extension U given by (6.8) is a law-invariant superlinear expectation.
Proof Let x and x ′ belong to the union on the right-hand side of (6.8) (without closure). Then x ∈ U (ξ + C) and x ′ ∈ U (ξ ′ + C) for some ξ, ξ ′ ∈ L p (X), and the superlinearity of U applies. Since any convex combination of ξ and ξ ′ is again a selection of X, the convexity of U (X) easily follows. Additivity on deterministic singletons, monotonicity and homogeneity are evident from (6.8). If F ∈ co F(R d , C) is deterministic, the normalisation follows from the dilatation-monotonicity of U (see Corollary 6.5). For the superadditivity property, consider x and y from the nonclosed right-hand side of (6.8) for X and Y , respectively. Then x ∈ U (ξ + C) for some ξ ∈ L p (X) and y ∈ U (ξ ′ + C) for some ξ ′ ∈ L p (Y ). Hence x + y belongs to the right-hand side of (6.8) for X + Y . Finally, let F X be the σ -algebra generated by X, that is, F X is generated by the events {X ∩ K ≠ ∅} for all compact sets K in R d . The convexity of X implies that E[ξ |F X ] is a selection of X for any ξ ∈ L p (X). By the dilatation-monotonicity from Corollary 6.5, it is possible to replace ξ ∈ L p (X) in (6.8) by an F X -measurable p-integrable selection of X. The families of F X -measurable selections of X and F Y -measurable selections of Y coincide for two identically distributed random sets X and Y ; see Molchanov [25, Proposition 1.4.5].
Below we establish the upper semicontinuity of the minimal extension.
Theorem 6.11 Assume that p ∈ (1, ∞], U satisfies the conditions imposed in Theorem 6.10, and that 0 / ∈ U (ξ + C) for all nontrivial ξ ∈ L p (C). Then the minimal extension U is scalarly upper semicontinuous.
Proof It suffices to omit the closure in (6.8) and consider x n ∈ U (X n ) with x n → x and X n → X scalarly in σ (L p , L q ). For each n ∈ N, there exists a ξ n ∈ L p (X n ) such that x n ∈ U (ξ n + C).
Assume first that p ∈ (1, ∞) and sup n∈N E[|ξ n | p ] < ∞. Then (ξ n ) n∈N is relatively compact in σ (L p , L q ). Without loss of generality (passing to subsequences if necessary), assume that (ξ n ) converges to ξ in σ (L p , L q ). Since ⟨ξ n , ζ ⟩ ≤ h(X n , ζ ) for all ζ ∈ L q (C o ), taking expectations, letting n → ∞ and using the convergence ξ n → ξ and X n → X yields that E[⟨ξ, ζ ⟩] ≤ E[h(X, ζ )]. By Lemma 2.4, ξ is a selection of X. By the upper semicontinuity of U , the upper limit of (U (ξ n + C)) is a subset of U (ξ + C). Hence x ∈ U (ξ + C) for some ξ ∈ L p (X), so that x ∈ U (X).
Assume now that ‖ξ n ‖ p p = E[|ξ n | p ] → ∞ and let ξ̄ n = ξ n /‖ξ n ‖ p . This sequence is bounded in the L p -norm, and so we can assume without loss of generality that ξ̄ n → ξ̄ in σ (L p , L q ). Since x n /‖ξ n ‖ p ∈ U (ξ n + C)/‖ξ n ‖ p = U (ξ̄ n + C), the upper semicontinuity of U yields that 0 ∈ U (ξ̄ + C). For each ζ ∈ L q (C o ), we have ⟨ξ n , ζ ⟩ ≤ h(X n , ζ ). Dividing by ‖ξ n ‖ p , taking expectations and letting n → ∞ yields that E[⟨ξ̄ , ζ ⟩] ≤ 0. Thus ξ̄ ∈ C almost surely. Given that ξ̄ is nontrivial, this contradicts the fact that U (ξ̄ + C) contains the origin.
The proof for p = ∞ follows the exact same steps, splitting the cases when sup n∈N |ξ n | is essentially bounded (in which case the sequence is relatively compact in σ (L ∞ , L 1 )) and when the essential supremum of (ξ n ) converges to infinity.
The case p = 1 is excluded in Theorem 6.11 since relative compactness in L 1 requires uniform integrability, which is a stronger condition than boundedness in L 1 .
The exact calculation of the minimal extension involves working with all p-integrable selections of X, which is a very rich family even in simple cases like X = ξ + C. Since the minimal extension is contained in U (X) by (6.9), the superlinear expectation U (X) yields a computationally tractable upper bound on it.
Note that the minimal extension of a reduced maximal superlinear expectation is not necessarily a maximal superlinear expectation itself. The following result describes its reduced maximal extension.
Theorem 6.13 Assume that the minimal extension is defined by (6.8), where U is a scalarly upper semicontinuous reduced maximal superlinear expectation with representation (6.6). Then the two expectations coincide on H v (β) for all v ∈ S d−1 ∩ C o and β ∈ L p (R), and the reduced maximal extension of the minimal one coincides with U .
Proof By (6.3), U (H v (β)) = H v (u(β)). In view of (6.9), it suffices to show that each x ∈ H v (u(β)) also belongs to the minimal extension applied to H v (β). Let y be the projection of x onto the subspace orthogonal to v; it then suffices to show that x − y belongs to the minimal extension applied to H v (β). Since ⟨tv, w⟩ ≤ ⟨v, w⟩u(β), we deduce that x ∈ U (ξ + C) ⊆ U (H v (β)). Since the two expectations coincide on half-spaces, the reduced maximal extension of the minimal one coincides with U .
In general, the minimal extension may be a strict subset of U (X), as the following example shows; so superlinear expectations are not necessarily exact even on rather simple random sets of the type ξ + C.
Example 6.14 Consider ξ ∈ R 2 which takes with equal probabilities two possible values: the origin and a = (a 1 , a 2 ). Let X = ξ + C, where C is the cone containing R 2 − with the points (1, −π) and (−π ′ , 1) on its boundary, where π, π ′ > 1. Let M v = M be the family from Example 5.15, and let u be the superlinear expectation with the representing set M. For each β ∈ L 1 (R), u(β) equals the average of the t-quantiles of β over t ∈ (0, α). If α ∈ (0, 1/2] and β takes two values with equal probabilities, then u(β) is the smaller of the two values. Then U (X) = C ∩ (a + C), so that the minimal extension coincides with U (X) in this case. Now assume that α ∈ (1/2, 1). If β with equal probabilities takes two values t and s, then u(β) = max(t, s) − |t − s|/(2α). For v ∈ C o , the linear function x → ⟨x, v⟩ is dominated by (1/(2α))⟨a, v⟩ if ⟨a, v⟩ < 0 and by (1 − 1/(2α))⟨a, v⟩ otherwise. In view of Example 6.12, for the minimal extension, it suffices to consider selections of C which are measurable with respect to the σ -algebra σ (ξ ) generated by ξ . Figure 1 shows the two expectations for π = π ′ = 2, a = (1, −1) and α = 0.7; the minimal extension may indeed be a strict subset of the underlying reduced maximal superlinear expectation.
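The closed form u(β) = max(t, s) − |t − s|/(2α) in Example 6.14 can be checked directly against the average quantile definition for a two-point β; a small numerical sketch (assuming α ∈ (1/2, 1); the function names are ours):

```python
def u_avg_quantile_two_point(t, s, alpha):
    # beta takes the values t and s with equal probabilities, so its
    # q-quantile is min(t, s) for q <= 1/2 and max(t, s) for q > 1/2;
    # average the quantiles over q in (0, alpha) with alpha in (1/2, 1)
    lo, hi = min(t, s), max(t, s)
    return (lo * 0.5 + hi * (alpha - 0.5)) / alpha

def u_closed_form(t, s, alpha):
    # the formula stated in Example 6.14
    return max(t, s) - abs(t - s) / (2 * alpha)
```

Both functions agree for any t, s and α ∈ (1/2, 1), which is the elementary identity behind the example.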

Depth-trimmed regions and outliers
Consider a sublinear expectation E restricted to the family of p-integrable singletons and let C = {0}. The map ξ → E({ξ }) satisfies the properties of depth-trimmed regions imposed by Cascos [5], which are those from Zuo and Serfling [35] augmented by monotonicity and subadditivity. Therefore, the sublinear expectation provides a rather generic construction of a depth-trimmed region associated with a random vector ξ ∈ L p (R d ). In statistical applications, points outside E({ξ }) or its empirical variant are regarded as outliers. The subadditivity property (3.1) means that if a point is not an outlier for the convolution of two samples, then there is a way to obtain this point as the sum of two non-outliers for the original samples.
Consider the sublinear expectation e α (β) given by the average of the quantiles q β (s) of β over s ∈ (1 − α, 1), where q β (s) is an s-quantile of β (in case of nonuniqueness, the choice of a particular quantile does not matter because of the integration). The risk measure r(β) = e α (−β) is called the average value-at-risk. Denote by E α the corresponding minimal sublinear expectation constructed by (5.6), so that h(E α ({ξ }), u) = e α (⟨ξ, u⟩) for all u. The set E α ({ξ }) is the zonoid-trimmed region of ξ at level α; see Cascos [5] and Mosler [28, Sect. 3.1]. This set can be obtained from the family P α ⊆ L 1 (R + ) of all random variables with values in [0, α −1 ] and expectation 1; see Example 5.15. This setting is a special case of Theorem 5.12 with M = {tγ : γ ∈ P α , t ≥ 0}. The value of α controls the size of the zonoid-trimmed region; α = 1 yields a single point, being the expectation of ξ . The subadditivity property of zonoid-trimmed regions was first noticed by Cascos and Molchanov [6].
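An empirical version of h(E α ({ξ }), u) = e α (⟨ξ, u⟩) evaluates the average upper quantile of the projections of the sample onto the direction u; a sketch under the assumption that αn is (close to) a whole number (names are ours):

```python
import numpy as np

def zonoid_support(sample, u, alpha):
    # support function of the empirical zonoid-trimmed region at level
    # alpha: the mean of the upper alpha-fraction of the projections <xi, u>
    proj = np.sort(np.asarray(sample, dtype=float) @ np.asarray(u, dtype=float))
    k = max(1, int(round(alpha * len(proj))))
    return proj[-k:].mean()
```

With α = 1 this returns the projection of the sample mean, so the region shrinks to the expectation of ξ , as stated above.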
Example 7.2 Let X be an integrable random closed convex set. Consider the random set Y in R d+1 given by the convex hull of the origin and {1} × X. The selection expectation Z X = E[Y ] is called the lift expectation of X; see Diaye et al. [8]. If X = {ξ } is a singleton, then Z X is the lift zonoid of ξ ; see Mosler [28].

Parametric families of nonlinear expectations
Consider nonlinear expectations U and E such that U (X) ⊆ E[X] ⊆ E(X) for all random closed sets X ∈ L p (co F(R d , C)). Then it is natural to regard observations of X that do not lie between the superlinear and sublinear expectation as outliers.
Let X 1 , . . . , X n be independent copies of a p-integrable random closed convex set X. For a sublinear expectation E, the map E ∪ n (X) = E(conv(X 1 ∪ · · · ∪ X n )) from (7.1) is also a sublinear expectation; the only slightly nontrivial property is the subadditivity. If X 1 ∩ · · · ∩ X n is a.s. nonempty, then U ∩ n (X) = U (X 1 ∩ · · · ∩ X n ) from (7.2) yields a superlinear expectation. We let U ∩ n (X) = ∅ if X 1 ∩ · · · ∩ X n is empty with positive probability.

Example 7.3
Choosing E(X) = U (X) = E[X] in (7.1) and (7.2) yields a family of nonlinear expectations depending on the parameter n, which are also easy to compute.
It is easily seen that E ∪ n (X) increases and U ∩ n (X) decreases as n increases. Define the depth of F ∈ co F(R d , C) via the inclusions U ∩ n (X) ⊆ F ⊆ E ∪ n (X). It is easy to see that E ∪ 1 (X) = E(X) and U ∩ 1 (X) = U (X). Hence F ∈ co F(R d , C) has depth one if U (X) ⊆ F ⊆ E(X). Note that U ∩ n (X) decreases to the set of fixed points of X and E ∪ n (X) increases to the support of X as n → ∞; see Example 3.8. Thus only closed convex sets F satisfying F X ⊆ F ⊆ supp X may have a positive depth.
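For random compact intervals on the line with E = U = E[·], both maps can be computed exactly by enumeration: the convex hull of a union of intervals [a i , b i ] is [min a i , max b i ] and their intersection is [max a i , min b i ]. A toy sketch (our own helper, with X uniform on a finite list of intervals):

```python
from itertools import product

def union_intersection_expectations(intervals, n):
    # exact E[conv(X1 ∪ ... ∪ Xn)] and E[X1 ∩ ... ∩ Xn] for X uniformly
    # distributed on `intervals` (assumes every n-tuple of outcomes has a
    # nonempty intersection, i.e., max a_i <= min b_i holds throughout)
    outcomes = list(product(intervals, repeat=n))
    k = len(outcomes)
    lo_u = sum(min(a for a, _ in o) for o in outcomes) / k
    hi_u = sum(max(b for _, b in o) for o in outcomes) / k
    lo_i = sum(max(a for a, _ in o) for o in outcomes) / k
    hi_i = sum(min(b for _, b in o) for o in outcomes) / k
    return (lo_u, hi_u), (lo_i, hi_i)
```

For the intervals [0, 2] and [1, 3], the union expectation widens and the intersection expectation narrows as n grows, matching the stated monotonicity of E ∪ n and U ∩ n.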
In order to handle the empirical variant of the preceding concept based on a sample X 1 , . . . , X n of independent observations of X, consider a random closed set X̂ that takes each of the values X 1 , . . . , X n with equal probabilities. Its distribution can be simulated by sampling one of these sets with possible repetitions. Then it is possible to use the nonlinear expectations of X̂ in order to assess the depth of any given convex set, including those from the sample.

Risk and utility of a set-valued portfolio
For a random variable ξ ∈ L p (R) interpreted as a financial outcome or gain, the value e(−ξ) (equivalently, −u(ξ )) is used in finance to assess the risk of ξ . It may be tempting to extend this to the multivariate setting by assuming that the risk is a d-dimensional function of a random vector ξ ∈ L p (R d ), with the conventional properties extended coordinatewise. However, in this case the nonlinear expectations (and so the risk) are marginalised, that is, the risk of ξ splits into a vector of nonlinear expectations applied to the individual components of ξ ; see Theorem A.1.
Moreover, an adequate assessment of the financial risk of a vector ξ is impossible without taking into account the exchange rules that can be applied to its components in order to convert ξ into another financial position. If no exchanges are allowed and only consumption is possible, one arrives at positions being selections of X = ξ + R d − . On the other hand, if the components of ξ are expressed in the same currency with unrestricted exchanges and disposal (consumption) of the assets, each position from the half-space X = {x : x 1 + · · · + x d ≤ ξ 1 + · · · + ξ d } is reachable from ξ . Working with the random set X also eliminates possible nonuniqueness in the choice of ξ , since vectors with identical sums yield the same half-space.
In view of this, it is natural to consider multivariate financial positions as lower random closed convex sets or, equivalently, those from L p (co F(R d , C)) with C = R d − . The random closed set is said to be acceptable if 0 ∈ U (X), and the risk of X is defined as −U (X). The superadditivity property guarantees that if both X and Y are acceptable, then X + Y is acceptable. This is the classical financial diversification advantage formulated in set-valued terms. The value U (X) determines the utility of X, exactly corresponding to the classical properties of utility functions being monotone superlinear functions of random variables. In particular, the superadditivity amounts to the fact that the utility of the sum is larger than or equal to the sum of the utilities.
If X ∈ L p (co F(R d , C)) and C = R d − , the minimal extension (6.8) is called the lower set extension of U . If U is reduced maximal, (6.6) yields a representation in which u(ξ ) = (u(ξ 1 ), . . . , u(ξ d )) is defined by applying the same superlinear expectation u with representing set M to each component of ξ . In other words, U (X) is the closure of the set of all points dominated coordinatewise by the superlinear expectation of at least one selection of X. In Molchanov and Cascos [26], the origin-reflected set −U (X) was called the selection risk measure of X.
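For X = ξ + R d − , every selection is dominated coordinatewise by ξ , so with a componentwise monotone u the lower set extension reduces to checking u(ξ ) itself; a sketch using the average quantile utility of Example 5.15 (scenario rows, asset columns; helper names are ours):

```python
import numpy as np

def u_componentwise(scenarios, alpha):
    # average of the lower alpha-fraction quantiles, applied separately
    # to each column (asset) of the scenario matrix
    x = np.sort(np.asarray(scenarios, dtype=float), axis=0)
    k = max(1, int(round(alpha * x.shape[0])))
    return x[:k].mean(axis=0)

def acceptable(scenarios, alpha):
    # 0 ∈ U(xi + R^d_-) iff every component utility is nonnegative
    return bool(np.all(u_componentwise(scenarios, alpha) >= 0))
```

This mirrors the statement above: a position is acceptable under the lower set extension when some selection, here ξ itself, has all components individually acceptable.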
For set-valued portfolios X = ξ + C, arising as the sum of a singleton ξ and a (possibly random) convex cone C, the maximal superlinear expectation (in our terminology), considered as a function of ξ only and not of ξ + C, was studied by Hamel and Heyde [13] and Hamel et al. [14]. However, if C becomes random, the resulting function of ξ alone is not necessarily law-invariant. The case of general random set-valued arguments was pursued by Molchanov and Cascos [26].
For the purpose of risk (or utility) assessment, one can use any superlinear expectation. However, the sensible choices are the maximal superlinear expectation in view of its closed form dual representation, and the lower set extension in view of its direct financial interpretation (through its primal representation), meaning the existence of a selection (that is, a financial position) with all components acceptable. Example 6.14 provides the numerical calculation of the reduced maximal and the minimal extension (see Figure 1) using the average quantile utility function. Given that the minimal superlinear expectation may be a strict subset of the maximal one (see Example 6.14), the acceptability of X under a maximal superlinear expectation may be a weaker requirement than the acceptability under the lower set extension.
From the financial viewpoint, the acceptability of X = ξ + C (for the payoff ξ ∈ L p (R d ) and a deterministic cone C describing the family of portfolios available at price zero) under the lower set extension (that is, the minimal extension) means the existence of an exchange scenario ξ ′ ∈ L p (C) such that ξ + ξ ′ has all components acceptable. In other words, by exchanging the components of ξ and taking into account the transaction costs imposed by the cone C, it is possible to make all components of ξ individually acceptable. On the other hand, the acceptability of X under the reduced maximal extension means that ⟨ξ, u⟩ is acceptable for all u from the dual cone of C, that is, ξ is acceptable under all price systems determined by C. For instance, this is the case if ξ + ξ ′ has all components acceptable, since ⟨ξ, u⟩ = ⟨ξ + ξ ′ , u⟩ − ⟨ξ ′ , u⟩.
The first term on the right-hand side is acceptable since the dual cone of C is a subset of R d + , while the second term is nonnegative since u belongs to the dual cone of C.
Acknowledgements IM is grateful to Ignacio Cascos for discussions and a collaboration on related works. This work was motivated by the stay of IM at the Universidad Carlos III de Madrid in 2012 supported by the Santander Bank. IM was also supported by the Swiss National Science Foundation grants 200021_153597 and IZ73Z0_152292. The authors thank the referee for a very careful reading of the manuscript and for suggesting a number of improvements.
Funding Note Open access funding provided by University of Bern.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Such a function may be viewed as the restriction of a sublinear set-valued expectation to the family of sets ξ + R d − , with e(ξ ) being the coordinatewise supremum of E(ξ + R d − ). The following result shows that vector-valued sublinear expectations marginalise, that is, they split into sublinear expectations applied to each component of the random vector. The corresponding inequality holds for all ξ ∈ A. It is easy to see that each μ ∈ A o has all components nonnegative, and the bipolar theorem yields the representation (A.1). Since e is constant-preserving, the normalisation of the measures μ is fixed. Assume that two components of μ do not vanish, say μ 1 and μ 2 . Then C μ = {y : ∫ ξ 1 dμ 1 + ∫ ξ 2 dμ 2 ≤ ∫ y 1 dμ 1 + ∫ y 2 dμ 2 } ⊇ [∫ ξ 1 dμ 1 , ∞) × [∫ ξ 2 dμ 2 , ∞) × R × · · · × R.
Thus this latter set C μ does not influence the coordinatewise infimum in (A.1) in comparison to the sets obtained by letting μ ∈ A o 1 ∪ A o 2 . The same argument applies to μ ∈ A o with more than two nonvanishing components. Thus the intersection in (A.1) can be taken over μ ∈ A o 1 ∪ · · · ∪ A o d , whence the result.
A similar result holds for superlinear vector-valued expectations.