Random set solutions to elliptic and hyperbolic partial differential equations

The past decades have seen increasing interest in modelling uncertainty by heterogeneous methods, combining probability and interval analysis, especially for assessing parameter uncertainty in engineering models. A unifying mathematical framework admitting the combination of a wide range of such methods is the theory of random sets, describing input and output of a structural model by set-valued random variables. The purpose of this paper is to highlight the mathematics behind this approach. The modelling and computational implications are discussed and demonstrated with the help of prototypical partial differential equations -- a scalar elliptic equation from elastostatics and hyperbolic systems arising in elastodynamics.


Introduction
Many models in structural engineering are cast in the form of (systems of) ordinary or partial differential equations. Static problems, e.g. from elasticity theory, give rise to elliptic partial differential equations. Dynamic problems (e.g. vibrations of elastic bodies or acoustic wave propagation) are typically formulated in terms of hyperbolic partial differential equations or systems. The quantities of interest are solutions to (elliptic) equilibrium equations or to (hyperbolic) equations of motion. These equations contain coefficients that, as a rule, depend on further parameters, such as material constants or geometric dimensions. In addition, the parameters may vary in space and time. An important task in engineering design is the analysis of the uncertainty of the computed output. This uncertainty traces back to uncertainty in the input parameters, which may be due to many sources: random fluctuations, lack of information, conflicting assessments, random measurement errors, systematic measurement errors, fluctuations due to spatial inhomogeneity, errors made by assigning parameter status to state variables, and model insufficiency, to name a few. Uncertainty in the model itself is important, but will be left aside in this paper.
Available information on parameter variability may consist in frequency distributions obtained from large samples, values from small samples or single measurements, interval bounds, experts' point estimates, or educated guesses from experience.
Accordingly, an increasing desire has arisen in the engineering community to introduce methods of quantifying uncertainty beyond probability, such as interval analysis, set-valued models, fuzzy sets, evidence theory, random sets, sets of probability measures, imprecise probability, lower and upper previsions, info-gap analysis, etc. This paper focusses on combinations of probability and interval analysis. This includes random variables, random fields, intervals, interval fields, random variables with interval parameters and random fields with interval parameters (such as mean, variance, correlation length).
As already noted in [1,50,51], all these types and their combinations can be accommodated in the framework of random sets. Random sets are set-valued random variables, whereby the values may be sets of real numbers, subsets of finite dimensional vector spaces, or subsets of function spaces. For example, random variables are single-valued random sets and intervals are random sets on a one-point probability space. The purpose of this paper is to highlight some important mathematical modelling questions that arise when using random sets. The considerations will be exemplified with the help of a prototypical elliptic equation and a class of hyperbolic systems.
When uncertainties in the coefficient functions of ordinary or partial differential equations are modelled as random sets, random fields with interval parameters, or (finite) interval fields, the solutions of these equations are no longer single-valued, but rather random sets with values in function spaces. Evaluating the solution at a fixed point in space or time will result in a real-valued random set. However, here an important issue arises: in order to properly qualify as a random set, the set-valued solution has to satisfy a certain measurability condition (to be detailed in Section 2.3). It turns out that the essential ingredient for this condition to hold is that the solution should depend continuously on the coefficients of the equation, and in turn also on the hyperparameters of the random variables or random fields defining the coefficients. This continuous dependence is already needed in pure interval modelling to prove that the values of the solutions are intervals. It is also needed in probabilistic modelling to infer that the solutions are random variables or random fields. In these two (classical) cases, the arguments are rather straightforward. However, in models combining interval and probabilistic inputs, additional subtleties are involved. One of the main goals of the present article is to formulate a rigorous mathematical framework that allows one to infer that the solutions are random sets. The elliptic and the hyperbolic case require different arguments, the essence of which will be worked out in the corresponding sections.
Here is a short review of relevant literature on uncertainty quantification in engineering models, combining interval methods with probabilistic methods. Adopting the language of [32], the structural model can be cast in the form Z = g(X, Y), where X and Y refer to different types of inputs in a hybrid probability-interval uncertainty analysis; Z is the system response or output, g is the performance function or input-output map. The map g can be given, for example, by an ordinary or partial differential equation, a finite element model, or a black box algorithm. Here X refers to the probabilistic part, Y to the interval part; both may be multidimensional. If both parts are present, the output is necessarily both random and set-valued. There are three subcases. In the first case, X and Y are considered as separate, non-interacting entities. The set-valued output arises through their combination in the input-output map g. In the second case, X = X(Y), that is, X is a random variable or random field with interval hyperparameters. This is often referred to as an imprecise random variable or an imprecise random field. The third possibility is that Y = Y(X), that is, the interval bounds are random variables. This is usually referred to as a random interval, a special case of a random set.
Typical representative articles concerning the first case are [42,63], where n-DOF structures with interval material parameters under stochastic excitation have been studied. Structures described by stochastic ordinary differential equations with random interval parameters have been addressed in [60], while wave equations with interval propagation speed under space-time white noise excitation have been addressed in [53]. Combinations of intervals and random vectors as input parameters in beams and truss structures have been considered in [22]. The paper [38] merges intervals and random variables by means of evidence theory (finite random sets) in an application to squeal analysis of a brake system.
The second case extends from imprecise random variables as material parameters in frame and grid structures [64] to finite element models with imprecise random field inputs [11,16], employing a Karhunen-Loève expansion with interval coefficients. The applications in Sections 4 and 5 of this paper are in the same spirit. A combination of inputs consisting of imprecise random variables and imprecise random fields can be found in [50].
As mentioned, the third case concerns random interval input and is addressed in the finite dimensional version in [22,51], in combination with stochastic differential equations in the quoted paper [60] and in [55,56].
There are various papers which contrast and compare interval modelling and probabilistic modelling side by side in beam and plate structures [43,62,64,65]. The question of identification and model calibration under hybrid uncertainty has been addressed in the literature as well. A procedure for identifying interval models from measurements has been proposed in [15]. Constructing random sets from statistical data has been considered in [51]. In the recent paper [36] a Bayesian-type updating procedure for random variables with interval constraints has been presented, while a whole range of calibration methods has been combined in [27] for quantification of mixed uncertainties, reliability analysis and design of structures.
Survey papers on modelling uncertainty in engineering by hybrid probability-interval approaches are [6,48,32]. A recent review paper on numerical methods for uncertainty propagation in probability-box models is [14]. Further details and references on numerical algorithms can be found in Subsection 6.2. For the theory of random sets, their interpretations and applications the reader is referred to the monographs [9,41,46], notably [7] for engineering applications.
The plan of the paper is as follows. Section 2 will be devoted to the required mathematical set-up. These facts are known, but it appears useful to elaborate them in some detail for the later applications. Section 3 singles out the principal mathematical argument that allows one to prove that the output quantity is a random set. The combination of arguments giving a full proof of this result seems to be new. Section 4 details the considerations for elliptic equations as an application of the findings of Section 3, including a simple explicit numerical example. Section 5 deals with hyperbolic systems and elaborates the continuity argument in the hyperbolic case, also having a short look at hyperbolic equations and systems in higher space dimensions. Section 6 contains a comparison with the parametric approach and numerical aspects, as well as some remarks on modelling correlations. Detailed proofs can be found in the dissertation of the first author [45].
The present paper is an extended and updated version of the proceedings contribution [34].

Mathematical preliminaries
This section specifies some mathematical notions which are required for a rigorous treatment of random sets.
Recall that a subset S of a finite dimensional vector space such as d-dimensional Euclidean space R^d is open if, for each point x in S, every ball of the form {y ∈ R^d : ‖y − x‖ < η} with center x and sufficiently small radius η is contained in S. The set S is closed if, with each converging sequence whose terms belong to S, the limit also belongs to S. The subset S is bounded if the norms ‖x‖ of the elements of S are all smaller than a fixed constant. Further, finite dimensional Euclidean spaces are separable, that is, they contain a countable dense subset (a sequence of points such that every ball in R^d contains at least one element of the sequence), and they are complete (that is, if x_k is a sequence such that the sum of norms ∑_{k=1}^∞ ‖x_k‖ converges, then the series ∑_{k=1}^∞ x_k converges as well). All these notions generalize literally to infinite dimensional normed spaces. A typical example is the space of continuous functions on a bounded, closed interval I, denoted by C(I), with norm ‖f‖ = max_{x∈I} |f(x)|. This space is separable and complete.

Random variables
A real-valued random variable a is characterized by its distribution function F a (x) which defines the probabilities P of basic events of the form {a ≤ x} via P (a ≤ x) = F a (x). This point of view suffices for most purposes. However, in the sequel families of random variables will be needed, and it will be necessary to specify the domains of the random variables under consideration. The usual mathematical setting is a probability space (Ω, Σ, P ) where Ω is a set (the collection of all elementary events), Σ is the family of measurable subsets of Ω, and P is a probability measure (that assigns to every set belonging to Σ a number between 0 and 1, satisfying the usual axioms). In this point of view, a random variable is a mapping from Ω into the real line such that the events {ω ∈ Ω : a(ω) ≤ x} are measurable (i.e., belong to Σ). Thus a probability P {ω ∈ Ω : a(ω) ≤ x} can be assigned to such events, denoted again by P (a ≤ x), and these probabilities in turn define the distribution function of the random variable a. From the probabilities P (a ≤ x), the probabilities of open and closed intervals, their complements, their countable unions and intersections etc. can be computed. The family of subsets of the real line which can be obtained by indefinite repetition of these set theoretic operations defines the so-called Borel σ-algebra. The Borel σ-algebra can also be characterized as the smallest σ-algebra containing all open subsets of R. A random variable has the property that all sets of the form {ω ∈ Ω : a(ω) ∈ B}, where B is any Borel set, are measurable. Accordingly, random variables are often referred to as measurable functions.
If the probability space is finite, say Ω = {1, . . . , m}, one may take the power set of Ω as σ-algebra, and every real-valued function on Ω is measurable. A typical infinite probability space is standard Gaussian space with Ω = R, Σ the Borel measurable subsets of R, and the probability of an event B given by the Gaussian integral P(B) = (2π)^{−1/2} ∫_B e^{−ω²/2} dω; that is, the probability measure P is given through the Gaussian density e^{−ω²/2}/√(2π). On standard Gaussian space, every continuous real-valued function on Ω is a random variable.
These notions can be easily extended to finite dimensional random variables, i.e., with values in R d (then the joint probability distributions have to be specified). Measurable functions with values in an infinite dimensional normed space can be defined in the same way.

Random fields
A random field is a process that assigns a random variable q(x) to every point x in a region D in space. To define the probabilistic properties of the field, the joint distributions of the values at any finite number of points q(x_1), . . . , q(x_n) should be specified. If the random field is Gaussian, it is completely specified by the mean value µ_q = E(q(x)) and the second moments, i.e., the covariance C(x, y) = COV(q(x), q(y)) for any two points x, y. If it is (weakly) homogeneous and isotropic, the covariance depends only on the distance ρ = ‖x − y‖ of the points and is of the form C(x, y) = σ² c(ρ) with the field variance σ² and the autocorrelation function c(ρ). A typical autocorrelation function is of the form

c(ρ) = e^{−ρ/ℓ},    (1)

where ℓ is the so-called correlation length. The standard method of simulation of a random field is based on the Karhunen-Loève expansion. If q(x) is any mean zero random field with finite second moments given by the covariance C(x, y), its Karhunen-Loève expansion is obtained by solving the eigenvalue problem

∫_D C(x, y) ϕ(y) dy = c ϕ(x),    (2)

which has a sequence of positive eigenvalues c_k and orthonormal eigenfunctions ϕ_k(x) (orthonormality in mean square). Then

q(x) = ∑_{k=1}^∞ √(c_k) ξ_k ϕ_k(x),    (3)

where the ξ_k are uncorrelated random variables with unit variance. If the process is Gaussian, the ξ_k are independent and distributed according to N(0, 1). For the numerical simulation, the spatial region is discretized by a grid and the ϕ_k are taken, e.g., piecewise constant on the grid elements. The eigenvalue problem becomes a matrix eigenvalue problem, and the series (3), with approximate eigenvalues and eigenfunctions, truncated after a finite number M of terms, can be used for Monte Carlo simulations of the field trajectories (i.e., realizations of the field).
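The discretized construction just described is easy to sketch in code. The following is a minimal illustration (not the authors' implementation): the covariance matrix of the exponential kernel (1) is diagonalized numerically, the expansion is truncated, and trajectories are sampled by Monte Carlo. Grid size, correlation length and truncation level are illustrative choices.

```python
import numpy as np

def kl_sample(n_grid=200, sigma=1.0, ell=0.3, n_terms=20, n_samples=1000, seed=0):
    """Monte Carlo simulation of a mean-zero Gaussian random field with
    exponential autocorrelation c(rho) = exp(-rho/ell) on [0, 1], via a
    truncated, discretized Karhunen-Loeve expansion."""
    x = np.linspace(0.0, 1.0, n_grid)
    # covariance matrix C(x_i, x_j) = sigma^2 exp(-|x_i - x_j| / ell)
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)
    # matrix eigenvalue problem replacing the integral equation (2)
    w, V = np.linalg.eigh(C)             # eigenvalues in ascending order
    w, V = w[::-1], V[:, ::-1]           # sort descending
    w_M, V_M = w[:n_terms], V[:, :n_terms]
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((n_terms, n_samples))   # i.i.d. N(0,1), cf. (3)
    q = V_M @ (np.sqrt(w_M)[:, None] * xi)           # one trajectory per column
    return x, w, q
```

In practice one chooses the truncation level M so that the discarded eigenvalues carry only a small fraction of the total variance.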
In case q(x) is a mean zero homogeneous Gaussian random field on D = R with autocorrelation function (1) and variance σ², an alternative to the Karhunen-Loève expansion is the representation as an Ornstein-Uhlenbeck process, namely as solution to the Langevin stochastic differential equation

dq(x) = −(1/ℓ) q(x) dx + σ √(2/ℓ) dW(x),    (4)

see e.g. [5]. Here W = W(x, ω) denotes the Wiener process on the real line. In this case, realizations of the random field can be conveniently generated by simulating solutions to the Langevin equation. An additional advantage of the representation through (4) is that the dependence of q(x) on the field parameters σ, ℓ is more explicit than through the Karhunen-Loève expansion (where σ, ℓ enter in the coefficients c_k and the eigenfunctions ϕ_k). The representation (4) allows one to directly prove that the assignment (σ, ℓ) → q(x, ω) is continuous at fixed x and ω, an important ingredient in the random set case discussed in Section 3.
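Realizations of the Ornstein-Uhlenbeck field can be generated by discretizing the Langevin equation. A minimal sketch using the Euler-Maruyama scheme follows; the drift and diffusion coefficients below are the standard ones producing covariance σ² exp(−|x−y|/ℓ), while step size, domain length and seed are illustrative assumptions.

```python
import numpy as np

def ou_field(sigma=1.0, ell=0.3, x_max=10.0, dx=0.01, seed=1):
    """One realization of the stationary Ornstein-Uhlenbeck field, i.e. a path
    of dq(x) = -(1/ell) q(x) dx + sigma*sqrt(2/ell) dW(x), generated by the
    Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    n = int(x_max / dx)
    q = np.empty(n + 1)
    q[0] = rng.normal(0.0, sigma)            # start in the stationary law
    dW = rng.normal(0.0, np.sqrt(dx), n)     # Wiener increments
    for k in range(n):
        q[k + 1] = q[k] - (q[k] / ell) * dx + sigma * np.sqrt(2.0 / ell) * dW[k]
    return q
```

Over a long path, the empirical standard deviation of the samples should be close to σ, reflecting the stationary variance of the process.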

Random sets
In general, a random set is a set-valued random variable satisfying certain measurability conditions, to be detailed below. The simplest case arises when the underlying probability space is finite. In this case, one speaks of finite random sets or Dempster-Shafer structures. Such a structure is given by finitely many closed subsets A_i, i = 1, . . . , m of a target space A, usually Euclidean space R^d, called the focal elements, each of which comes with a probability weight p_i = p(A_i), with ∑_{i=1}^m p_i = 1. Viewed as a random set, a Dempster-Shafer structure is given by an m-point probability space Ω = {1, 2, . . . , m} with probability masses {p_1, p_2, . . . , p_m}; the assignment i → A_i is the defining set-valued random variable A. However, starting with Subsection 2.4 and particularly in Section 3, random sets defined on infinite probability spaces will be needed. Also, the target space can be infinite-dimensional, for example, a function space containing the system outputs.
In the sequel, (Ω, Σ, P) denotes a general probability space. The target space will be either Euclidean space R^d or, more generally, a normed, complete and separable space A. In analogy to the Borel σ-algebra on R introduced in Subsection 2.1, the Borel σ-algebra on A is defined as the smallest σ-algebra containing all open subsets of A. Given a set-valued map ω → A(ω), where each A(ω) is a subset of the target space A, the upper and lower inverses of a subset B of the target space A are defined by

A^−(B) = {ω ∈ Ω : A(ω) ∩ B ≠ ∅},   A_−(B) = {ω ∈ Ω : A(ω) ⊂ B}.

The requirements for ω → A(ω) to be a random set are: • each A(ω) is closed; • the upper inverses A^−(B) are measurable subsets of Ω for every closed subset B of the target space A.
In the later applications, one would like to admit also open sets B or, more generally, Borel sets B in the second condition. In addition, it will be useful to relate it to the existence of certain measurable selections. Recall that a measurable selection of a random set A is a random variable a such that a(ω) ∈ A(ω) for all ω ∈ Ω; a Castaing representation of A is a sequence of measurable selections a_k, k = 1, 2, 3, . . ., such that A(ω) = cl{a_k(ω) : k = 1, 2, 3, . . .} for every ω ∈ Ω. In this respect, an important technical tool is the fundamental measurability theorem. In order to obtain all subsequent equivalences, one needs to assume that (Ω, Σ, P) is a complete probability space, meaning that every subset of a set of probability zero belongs to Σ. This can always be achieved by enlarging Σ so as to include these subsets.
The fundamental measurability theorem states that, assuming the first condition (each A(ω) closed), the following conditions are equivalent: • the upper inverses A^−(B) are measurable subsets of Ω for every closed subset B of the target space A; • the upper inverses A^−(B) are measurable subsets of Ω for every open subset B of the target space A; • the upper inverses A^−(B) are measurable subsets of Ω for every Borel subset B of the target space A; • ω → A(ω) admits a Castaing representation.
For the rather elaborate proof, the reader is referred to [9,41]. Here are some first consequences of the fundamental measurability theorem. First, the lower inverses of events B can be written as A_−(B) = {ω ∈ Ω : A(ω) ∩ B^c = ∅} = (A^−(B^c))^c, where B^c denotes the complement of B, and hence are measurable due to the fundamental measurability theorem and the measurability of A^−(B^c). Second, thanks to the required measurability properties, it is legitimate to introduce the upper and lower probabilities

P̄(B) = P(A^−(B)),   P̲(B) = P(A_−(B))

for any Borel set B ⊂ A. The concept of random sets encompasses many other well-established methods of uncertainty modelling. Clearly, every interval defines a random set (as a set-valued map on a one-point probability space). Similarly, every random variable a on a probability space (Ω, Σ, P) defines a (single-valued) random set ω → A(ω) = {a(ω)}. The upper and lower probability of an event B then coincide with its probability P(B). Further, every normalized fuzzy set with membership function µ_A can be viewed as a random set. Here the probability space is the interval Ω = [0, 1], equipped with the uniform probability distribution, and the focal elements A(ω) are just the ω-level sets A(ω) = {x : µ_A(x) ≥ ω}. It is not difficult to prove that the possibility measure π(B) = sup_{x∈B} µ_A(x) of a subset B of the real line coincides with its upper probability [25]. Indeed, and rather obviously, A(ω) ∩ B ≠ ∅ precisely when ω ≤ sup_{x∈B} µ_A(x), whence P̄(B) = P{ω : A(ω) ∩ B ≠ ∅} = sup_{x∈B} µ_A(x) = π(B). In the special case of finite random sets, the measurability condition holds automatically; the only remaining condition is that the focal elements A_i are closed.
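For a finite random set, the upper and lower probabilities reduce to finite sums: the weights of the focal elements hitting, respectively contained in, the event B. A small sketch for the special case of interval focal elements and interval events (this restriction is an illustrative assumption, not part of the general theory):

```python
def upper_lower_probability(focal_sets, weights, B):
    """Upper and lower probability of an interval event B for a finite random
    set (Dempster-Shafer structure) with interval focal elements.
    focal_sets: list of intervals (lo, hi); weights: probabilities p_i summing
    to one; B: an interval (b_lo, b_hi)."""
    b_lo, b_hi = B
    upper = sum(p for (lo, hi), p in zip(focal_sets, weights)
                if hi >= b_lo and lo <= b_hi)     # A_i hits B
    lower = sum(p for (lo, hi), p in zip(focal_sets, weights)
                if lo >= b_lo and hi <= b_hi)     # A_i contained in B
    return upper, lower
```

For a single-valued random set (an ordinary random variable on a finite space) the two values coincide, recovering the ordinary probability of B.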

Random sets generated by families of random variables
To set the stage, consider random variables a_λ, defined on the same probability space (Ω, Σ, P), and depending on a set of parameters λ ∈ Λ. Suppose the random variables have values in a target space A. Then one can define a set-valued map ω → A(ω) = {a_λ(ω) : λ ∈ Λ}. The question is whether this assignment defines a random set. Before answering this question, consider the example of an imprecise Gaussian family

a_λ(ω) = µ + σ Φ^{−1}(ω),   λ = (µ, σ) ∈ Λ,

with Λ a closed and bounded rectangle of means and standard deviations. Here the underlying probability space is the unit interval Ω = (0, 1) with the uniform distribution, and Φ is the standard normal distribution function. It follows that each random variable a_λ is distributed according to N(µ, σ²). The corresponding subset of the real line is

A(ω) = {µ + σ Φ^{−1}(ω) : (µ, σ) ∈ Λ}.

Due to the fact that the parameter set Λ is a (two-dimensional) closed and bounded interval and the fact that the assignment (µ, σ) → µ + σ Φ^{−1}(ω) is continuous at fixed ω, it follows that each A(ω) is a closed and bounded interval, indeed. Some members of the family and the resulting random intervals are depicted in Figure 1 (left).

Figure 1: Some members of the family a_λ (left), random set with focal element (right). The parameters λ = (µ, σ) were taken as −1 ≤ µ ≤ 1, 1 ≤ σ ≤ 2.
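The focal intervals of the imprecise Gaussian family can be computed in closed form: the map (µ, σ) → µ + σΦ^{−1}(ω) is linear in the parameters, so its extremes over the parameter box sit at corners, depending on the sign of Φ^{−1}(ω). A sketch using the parameter box of Figure 1 (function name and defaults are illustrative):

```python
from statistics import NormalDist

def focal_interval(omega, mu_lo=-1.0, mu_hi=1.0, sg_lo=1.0, sg_hi=2.0):
    """Focal element A(omega) = {mu + sigma*Phi^{-1}(omega) : (mu, sigma) in
    Lambda} for Lambda = [mu_lo, mu_hi] x [sg_lo, sg_hi].  The extremes are
    attained at corners of the box, by linearity in (mu, sigma)."""
    z = NormalDist().inv_cdf(omega)      # standard normal quantile
    if z >= 0.0:
        return (mu_lo + sg_lo * z, mu_hi + sg_hi * z)
    return (mu_lo + sg_hi * z, mu_hi + sg_lo * z)
```

At the median ω = 1/2 the quantile vanishes and the focal interval is just the interval of means.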
Anticipating that ω → A(ω) is a random set, one may define the upper and lower distribution functions

F̄(x) = P̄((−∞, x]) = P{ω ∈ Ω : A(ω) ∩ (−∞, x] ≠ ∅},   F̲(x) = P̲((−∞, x]) = P{ω ∈ Ω : A(ω) ⊂ (−∞, x]}.    (9)

Anticipating also that the functions a̲, ā bounding the random intervals A(ω) = [a̲(ω), ā(ω)] are random variables, one obtains simply that F̄ is the distribution function of a̲ and F̲ is the distribution function of ā.
Here is the argument why ω → A(ω) is a random set. First, one constructs a Castaing representation by taking a countable dense subset of parameter values λ_1, λ_2, λ_3, . . . in Λ, for example, all points with rational coordinates. Again from the continuity of the assignment λ → a_λ(ω) at fixed ω, one has A(ω) = cl{a_{λ_k}(ω) : k = 1, 2, 3, . . .}. Consequently, for every open subset B of the target space,

A^−(B) = {ω ∈ Ω : A(ω) ∩ B ≠ ∅} = ⋃_{k=1}^∞ {ω ∈ Ω : a_{λ_k}(ω) ∈ B}    (10)

is measurable as a countable union of measurable sets; note that the individual sets are measurable, because the maps ω → a_λ(ω) are measurable at each fixed λ. The fundamental measurability theorem implies that A^−(B) is measurable for every Borel set B, in particular, for every closed set B. This is the required property for ω → A(ω) to be a random set.
Comparison with the concept of p-boxes. Given two cumulative distribution functions F̲ ≤ F̄ on the real line, a probability box or p-box, as introduced in [17], is a collection of cumulative distribution functions F such that F̲(x) ≤ F(x) ≤ F̄(x) for all x. The random set ω → A(ω) above can be viewed as a p-box through (9), but it collects only the cumulative distribution functions corresponding to the random variables a_λ, λ ∈ Λ. Such p-boxes are often referred to as distributional or parametric p-boxes. The argument above shows that distributional p-boxes are random sets, provided the underlying family of random variables depends continuously on the parameter λ. On the other hand, p-boxes that are understood as containing all cumulative distribution functions between F̲ and F̄ (the original concept of [17]) may be referred to as distribution-free or non-parametric p-boxes. One may define a corresponding set-valued map by employing the quasi-inverses F̄^{(−1)} and F̲^{(−1)}. It has been shown in [1] that this assignment defines a random set as well; see also the discussion in [4]. The distributional and the distribution-free p-boxes corresponding to F̲ and F̄ in (9) look the same, but the distribution-free version contains many probability distribution functions other than the ones coming from the Gaussian family (indicated in Figure 1, right).
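For the imprecise Gaussian family, the bounding distribution functions of the distributional p-box can be evaluated pointwise by optimizing Φ((x − µ)/σ) over the parameter box: the expression is monotone in µ and, for fixed sign of x − µ, monotone in σ, so the extremes sit at corners of the box. A sketch under these assumptions (defaults match the box of Figure 1):

```python
from statistics import NormalDist

def pbox_envelope(x, mu_lo=-1.0, mu_hi=1.0, sg_lo=1.0, sg_hi=2.0):
    """Upper and lower cumulative distribution functions of the distributional
    p-box generated by N(mu, sigma^2), (mu, sigma) in the given box.  By
    monotonicity, the envelope is attained at corners of the box."""
    corners = [(m, s) for m in (mu_lo, mu_hi) for s in (sg_lo, sg_hi)]
    vals = [NormalDist(m, s).cdf(x) for (m, s) in corners]
    return max(vals), min(vals)   # (upper CDF, lower CDF) at x
```

Every member distribution of the family then lies between the two envelopes, while the distribution-free p-box with the same envelopes contains many more distribution functions.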

Random sets generated by parametrized random fields
This central section lays down the basic structure that will allow one to prove that set-valued maps obtained as solutions to partial differential equations whose coefficients are given as intervals, random fields, or parametrized random fields, are indeed random sets.

The general measurability result
For the present purpose, the most general situation to be considered is the following.
• (Ω, Σ, P) is a complete probability space; • the target space A is a normed, complete and separable space; • Λ is a bounded and closed subset of some Euclidean space; • a_λ(ω), λ ∈ Λ, is a family of random variables with values in A, that is, the maps ω → a_λ(ω) are measurable for each λ; • the random variables depend continuously on λ, that is, the maps Λ → A : λ → a_λ(ω) are continuous for each ω.
Then the following assertions hold: the set-valued map ω → A(ω) = {a_λ(ω) : λ ∈ Λ} defines a random set; all focal elements A(ω) are bounded and closed subsets of A; in addition, if Λ is a (multi-dimensional) interval and A = R, the sets A(ω) are intervals. Proof of assertion. First, it is clear from the continuity assumption and the compactness of Λ that all A(ω) are bounded and closed subsets of A. The assertion that they are intervals under the additional specializing hypothesis follows from the continuity as well (continuous images of intervals are intervals). To prove the random set property, one has to show that the upper inverses A^−(B) are measurable for every open subset B of A. This follows exactly by the argument given in equation (10), together with the fundamental measurability theorem.
Example. Consider a function u = u(λ, a) that depends on an interval parameter λ, say λ ∈ [λ̲, λ̄], and a fixed random variable a. Such a situation arises, for example, when u is the response of a system with mixed interval and random uncertainty. Both λ and a can be multi-dimensional, so this covers the case of discretized interval and random fields. Assume that the dependence of u on λ and a is continuous. The functions a_λ(ω) = u(λ, a(ω)) form a family of random variables exactly of the type discussed here. Collecting the responses in A(ω) = {u(λ, a(ω)) : λ ∈ Λ} therefore leads to a random set, which is actually a random interval ω → [min_λ u(λ, a(ω)), max_λ u(λ, a(ω))] if the response is one-dimensional.
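This construction can be mimicked numerically: sample the random variable, and for each realization sweep the interval parameter over a grid to obtain a focal interval. In the sketch below, the response function (a random load divided by a stiffness with interval correction) is purely hypothetical, and the grid sweep is a simple stand-in for a proper interval optimization.

```python
import numpy as np

def propagate(g, lam_grid, a_samples):
    """Monte Carlo propagation of mixed uncertainty through u = g(lam, a):
    for each realization of a, sweep the interval parameter lam over a grid
    and record the focal interval [min_lam g, max_lam g]."""
    intervals = []
    for a in a_samples:
        vals = [g(lam, a) for lam in lam_grid]
        intervals.append((min(vals), max(vals)))
    return intervals

# hypothetical response: displacement u = a / (5 + lam), with random load a
# and interval stiffness correction lam in [1, 2]
rng = np.random.default_rng(2)
samples = rng.normal(10.0, 1.0, 500)          # realizations of the random load
lam_grid = np.linspace(1.0, 2.0, 21)          # grid over the interval parameter
focals = propagate(lambda lam, a: a / (5.0 + lam), lam_grid, samples)
```

Because the hypothetical response is monotone in λ, each focal interval is attained at the endpoints of the parameter interval.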

Continuous dependence of random fields on their parameters
A more involved instance of a random set is obtained by taking a random field on the real line (x ∈ R) whose parameters are intervals. For example, consider mean zero random fields q(x, ω, ℓ) with fixed variance σ², but correlation length ℓ varying in an interval [ℓ̲, ℓ̄]. A common probability space for these random fields with different correlation lengths is needed. A convenient way of generating such random fields, in the case of the autocorrelation function (1), is by means of the Langevin equation (4). The common probability space is Wiener space (Ω_W, Σ_W, P_W) (see Appendix).
Thus at every chosen point x, the assignment ω → Q(x, ω) = {q(x, ω, ℓ) : ℓ ∈ [ℓ̲, ℓ̄]} defines a (scalar) random set, as can be seen by the same arguments as in the example above. The trajectories of the random field are interval-valued curves, given by x → [min_ℓ q(x, ω, ℓ), max_ℓ q(x, ω, ℓ)], where the minimum and maximum are obtained by letting ℓ run through [ℓ̲, ℓ̄].
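The interval-valued trajectories can be visualized by driving the Langevin equation with one and the same Wiener path for a grid of correlation lengths and taking the pointwise envelope, a direct numerical analogue of placing all fields on the common Wiener space. A sketch using an Euler-Maruyama discretization (all numerical parameters are illustrative):

```python
import numpy as np

def interval_field_envelope(ell_lo=0.2, ell_hi=0.5, n_ell=16, sigma=1.0,
                            x_max=5.0, dx=0.005, seed=3):
    """One interval-valued trajectory of the random set
    Q(x, omega) = {q(x, omega, ell) : ell in [ell_lo, ell_hi]}:
    Ornstein-Uhlenbeck paths for a grid of correlation lengths are driven by
    the SAME Wiener increments, and the pointwise min/max over ell gives the
    lower/upper bounding curves."""
    rng = np.random.default_rng(seed)
    n = int(x_max / dx)
    dW = rng.normal(0.0, np.sqrt(dx), n)     # one Wiener path for all ell
    z0 = rng.standard_normal()
    paths = np.empty((n_ell, n + 1))
    for i, ell in enumerate(np.linspace(ell_lo, ell_hi, n_ell)):
        q = np.empty(n + 1)
        q[0] = sigma * z0                    # common starting value
        for k in range(n):
            q[k + 1] = q[k] - (q[k] / ell) * dx \
                       + sigma * np.sqrt(2.0 / ell) * dW[k]
        paths[i] = q
    return paths.min(axis=0), paths.max(axis=0)
```

Since all paths start from the same value, the envelope collapses to a point at x = 0 and widens as the paths with different correlation lengths separate.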
Alternatively, the Karhunen-Loève expansion (3) can be used to place the random fields q(x, ω, ℓ) in a common probability space Ω, not depending on ℓ. Formally, this can be done by considering an infinite product of standard Gaussian spaces with random elements ω = (ω_1, ω_2, ω_3, . . .). The required sequence of Gaussian variables ξ_k is then given by ξ_k(ω) = ω_k (see Appendix), and the Karhunen-Loève expansion reads

q(x, ω, ℓ) = ∑_{k=1}^∞ √(c_{k,ℓ}) ω_k ϕ_{k,ℓ}(x).    (11)

In general, there is no way to assert that the eigenvalues c_{k,ℓ} and eigenfunctions ϕ_{k,ℓ}(x) depend continuously on the correlation length ℓ, even after discretization and reduction to a matrix eigenvalue problem. Thus one has to analyze the situation in each case individually. The case of the autocovariance function (1) on a finite interval can be treated explicitly, thanks to the results in [24]. Without restriction of generality, one may assume that the finite interval is D = [−1, 1]. Then the eigenvalue problem (2) becomes

∫_{−1}^{1} e^{−|x−y|/ℓ} ϕ(y) dy = c ϕ(x).    (12)
Following [24], problem (12) can be solved in the following way. First, the two equations

1/ℓ − α tan α = 0,   respectively   α + (1/ℓ) tan α = 0,    (13)

have two sequences of positive solutions denoted by α_k = α_k(ℓ) and α*_k = α*_k(ℓ), for even and odd k, respectively. The resulting eigenvalues and eigenfunctions are

c_k = 2ℓ/(1 + ℓ² α_k²),   ϕ_k(x) = cos(α_k x)/√(1 + sin(2α_k)/(2α_k))    (14)

and

c*_k = 2ℓ/(1 + ℓ² (α*_k)²),   ϕ*_k(x) = sin(α*_k x)/√(1 − sin(2α*_k)/(2α*_k))    (15)

for even and odd k, respectively. Therefore, the one-dimensional random field q(x, ω, ℓ) with covariance function given by equation (1) has the Karhunen-Loève expansion

q(x, ω, ℓ) = σ ∑_k ( √(c_k) ξ_k ϕ_k(x) + √(c*_k) ξ*_k ϕ*_k(x) ).    (16)

For the truncated Karhunen-Loève expansion, one may take a 2M-dimensional standard Gaussian space as probability space Ω, on which the random variables ξ_k and ξ*_k are defined. It is clear that the solutions α_k and α*_k of the deterministic equations (13) depend continuously on ℓ. The same is true for the eigenvalues as well as eigenfunctions shown in (14), (15). Thus, truncating the Karhunen-Loève expansion (16) at a finite M, the resulting random field depends continuously on the correlation length ℓ at fixed ω.
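The transcendental equations (13) can be solved numerically by bracketing exactly one root of each equation between consecutive singularities of the tangent and bisecting. The following sketch computes the eigenvalues of the exponential kernel on [−1, 1] this way; the eigenvalue formula 2(1/ℓ)/(α² + 1/ℓ²) is the standard one for this kernel. As a consistency check, the eigenvalues must sum to the trace of the kernel, ∫_{−1}^{1} dx = 2.

```python
import math

def kl_eigenvalues(ell, n_pairs=10):
    """Eigenvalues of the kernel exp(-|x-y|/ell) on [-1, 1], found by solving
    1/ell - alpha*tan(alpha) = 0 (even modes) and
    alpha + (1/ell)*tan(alpha) = 0 (odd modes) by bisection, one root per
    branch of the tangent.  Returns 2*n_pairs eigenvalues, interleaved."""
    c = 1.0 / ell
    eps = 1e-9

    def bisect(f, a, b):
        fa = f(a)
        for _ in range(100):                 # fixed iteration count
            m = 0.5 * (a + b)
            fm = f(m)
            if (fm < 0) == (fa < 0):
                a, fa = m, fm
            else:
                b = m
        return 0.5 * (a + b)

    eigs = []
    for k in range(n_pairs):
        # even mode: root of c - t*tan(t) in (k*pi, k*pi + pi/2)
        a_even = bisect(lambda t: c - t * math.tan(t),
                        k * math.pi + eps, k * math.pi + math.pi / 2 - eps)
        # odd mode: root of t + c*tan(t) in (k*pi + pi/2, (k+1)*pi)
        a_odd = bisect(lambda t: t + c * math.tan(t),
                       k * math.pi + math.pi / 2 + eps, (k + 1) * math.pi - eps)
        eigs += [2 * c / (a_even**2 + c**2), 2 * c / (a_odd**2 + c**2)]
    return eigs
```

Since the roots increase along the interleaved even/odd sequence, the eigenvalues come out in decreasing order, and the truncation error is controlled by the discarded tail of the eigenvalue sum.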

Random fields as random variables valued in function spaces
Consider the random field q(x, ω, ℓ) as in Subsection 3.2. The paths or trajectories (in space) of the random field are the maps x → q(x, ω, ℓ), with fixed ω and ℓ. It is well-known from the theory of stochastic differential equations that the paths are continuous [5]. This also follows from the Kolmogorov-Chentsov theorem [33], because the autocovariance function (1) satisfies the hypotheses of this theorem. However, for the purposes of the present paper, more is needed.
First, jointly continuous dependence on x and ℓ is required. This is easy to achieve, because one can consider the paths in space and parameter set as the maps (x, ℓ) → q(x, ω, ℓ), with fixed ω, and apply the previous arguments. Indeed, when using a truncated Karhunen-Loève expansion, the argument was just given after (16). When the Langevin equation is used, one may resort to the results of [55,56], which assert the continuous dependence, jointly in x and ℓ.
Second, the random fields have to be interpreted as random variables in the space of continuous functions. This is because they enter as variable coefficients in the partial differential equations to be solved, and the solution depends on the whole trajectories, not only the values of the random fields at individual points x. Let x belong to an interval I and ℓ to an interval L, both closed and bounded. Continuity of the paths in x and ℓ can be stated by saying that the map q(ω) : (x, ℓ) → q(x, ω, ℓ) belongs to the space of continuous functions C(I × L) for fixed ω. Thus the random field induces a map Ω → C(I × L), ω → q(ω). Standard arguments from the theory of stochastic processes show that this map is measurable, that is, a random variable with values in C(I × L). For details of the proof, see [45, Section 1.2]. The argument also shows that • for fixed ℓ ∈ L, the map Ω → C(I), ω → q(ω, ℓ) is measurable; • for fixed ω ∈ Ω, the map L → C(I), ℓ → q(ω, ℓ) is continuous.
Note that the same random field q may be considered in three different ways as a measurable function of ω: as an element of the function space C(I × L), denoted by q(ω); as an element of the function space C(I) at fixed correlation length ℓ, denoted by q(ω, ℓ); and as a real number at each fixed x and ℓ, denoted by q(x, ω, ℓ). Whenever its values are collected in a random set, the corresponding capital letter Q will be used.

The mean values of a parametrized random field as an interval field
Let ω → A(ω) be a random set whose focal elements A(ω) = [a̲(ω), ā(ω)] are closed and bounded intervals. Let {a_k(ω), k = 1, 2, 3, …} be a Castaing representation. The Aumann expectation E(A) is defined as the closure of the set of all expectation values of the measurable selections a_k, that is, E(A) = cl{E(a_k) : k = 1, 2, 3, …}, see [41]. It is clear that in the case considered, the Aumann expectation is simply the interval E(A) = [E(a̲), E(ā)]. Next, consider a random set generated by a parametrized random field ω → Q(x, ω) = {q(x, ω, ℓ) : ℓ ∈ L} as in Subsection 3.2, where L is a closed and bounded interval of correlation lengths. Thanks to the continuous dependence on the parameter ℓ discussed in Subsection 3.2, each Q(x, ω) is a closed and bounded interval at fixed x. One may form the Aumann expectation E(Q(x)), which is a deterministic interval at fixed x. Thus the assignment x → E(Q(x)) defines an interval field.
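Since the Aumann expectation of a random interval is just the interval of endpoint expectations, it is straightforward to estimate by Monte Carlo. A minimal sketch, with a made-up one-parameter family standing in for q(x, ω, ℓ) at a fixed point x (the family and its distribution are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def focal_interval(w):
    # Hypothetical focal element: the interval swept out by the parametrized
    # family q(w, l) = w * l as the parameter l ranges over L = [1, 2].
    lo, hi = w * 1.0, w * 2.0
    return (lo, hi) if lo <= hi else (hi, lo)

# Monte Carlo estimate of the Aumann expectation E(A) = [E(a_lo), E(a_hi)]
omegas = rng.normal(loc=1.0, scale=0.1, size=100_000)
bounds = np.array([focal_interval(w) for w in omegas])
E_lo, E_hi = bounds[:, 0].mean(), bounds[:, 1].mean()
print([E_lo, E_hi])  # approximately [1.0, 2.0], since E(w) = 1
```

The estimate converges to the interval [E(a̲), E(ā)] as the sample size grows.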

Application to elliptic equations
A prototypical stochastic elliptic partial differential equation, arising e.g. in elastostatics, is the boundary value problem

−∇·(a(x) ∇u(x)) = f(x) in D, u(x) = 0 on ∂D, (17)

as discussed, e.g., in [28]. Here D is a bounded open domain in R^d and ∂D is its boundary. For example, in two space dimensions u might be the transversal displacement of a non-uniform membrane under transversal load f. The spatially varying elastic properties are subsumed in the coefficient a(x). Alternatively, in three space dimensions, u(x) could be the pressure of a fluid in a porous medium with permeability a(x) and source rate f(x) [39]. This section serves to demonstrate how the random field methods can be applied in practice. The required solution theory is collected, and a simple numerical example is elaborated.
In the foundations of the finite element method, problem (17) is commonly solved by variational methods. Suitable function spaces are defined as follows: L^2(D) is the space of square integrable functions; H^1(D) is the space of square integrable functions all of whose first order partial derivatives are square integrable as well; H^1_0(D) is the subspace of functions that vanish on the boundary ∂D. Both L^2(D) and H^1_0(D) are normed spaces; the squares of the norms are given by ‖u‖²_{L^2(D)} = ∫_D u(x)² dx and ‖u‖²_{H^1_0(D)} = ∫_D |∇u(x)|² dx. The spaces are complete and separable. The variational formulation of equation (17) then is: find u ∈ H^1_0(D) such that

∫_D a(x) ∇u(x)·∇v(x) dx = ∫_D f(x) v(x) dx (18)

for all v ∈ H^1_0(D).

The deterministic case. Assume that a is a continuous function on D which is bounded from above and below, that is, a ∈ C(D) and α ≤ a(x) ≤ β for some constants α, β > 0 and all x in D. Further, assume that f is square integrable on D. Under these conditions, it is well known that the variational problem (18) has a unique solution u in H^1_0(D) [44,70]. For the purpose of this example, the source term f will be fixed. The solution u depends on the coefficient function a; this dependence will be made explicit in the notation u = u_a. Denote by C(D; α, β) the set of functions which are continuous on D up to the boundary with values between α and β. It is equipped with the norm ‖a‖ = max_{x∈D} |a(x)|. The crucial assertion is that the map C(D; α, β) → H^1_0(D) : a → u_a is continuous. This non-trivial result can be proved by invoking the variational formulation (18), as was done in [45]. A different proof, for more general partial differential equations, can be found in [44].
The random field case. To start with, let a(x, ω), ω ∈ Ω, be a random field with continuous paths lying between α and β. As noted in Subsection 3.3, it can be viewed as a random variable with values in C(D; α, β). Due to the continuity of the above map, the assignment ω → u_{a(ω)} is measurable; thus the solution u_a is a random variable with values in H^1_0(D). Intuitively, this means that u_a is a random field whose paths belong to H^1_0(D). In order to interpret it as a classical random field, one should be able to assign values u_{a(ω)}(x) at each point x in D. In one space dimension, this is easy because H^1_0(D) is continuously imbedded in C(D), and so the pointwise evaluations C(D; α, β) → R : a → u_a(x) are continuous as well. Thus the maps ω → u_{a(ω)}(x) are random variables for each x, and the paths x → u_{a(ω)}(x) are continuous functions. In space dimensions two and three, more theory is needed. First, the input random field a(x, ω) should have Lipschitz continuous paths, that is, paths satisfying an inequality of the form |a(x, ω) − a(y, ω)| ≤ C|x − y| for some constant C and all x, y. This is not true of the random field generated by the Langevin equation, but it is true of the random field generated by the truncated Karhunen-Loève expansion. Then one can invoke regularity theory saying that the solution to problem (17) belongs to H^2(D′) for any open subregion D′ of D and depends continuously on a in this space [44]. Again, H^2(D′) is continuously imbedded in C(D′), and so one obtains a classical random field u_{a(ω)}(x), defined for all x in the interior of D.
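The continuity of the map a → u_a can also be made tangible numerically. The following one-dimensional sketch (finite differences rather than finite elements; the coefficient fields below are invented for illustration) shows that nearby coefficients produce nearby solutions:

```python
import numpy as np

def solve_elliptic_1d(a, f, n=200):
    """Solve -(a(x) u'(x))' = f(x) on (0, 1) with u(0) = u(1) = 0 by a
    standard second-order finite-difference scheme on a uniform grid."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    am = a(x[1:] - h / 2)                  # a at the midpoints x_{i-1/2}
    main = (am[:-1] + am[1:]) / h**2       # diagonal, interior nodes 1..n-1
    off = -am[1:-1] / h**2                 # off-diagonal couplings
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
    return x, u

# Two nearby coefficient fields give nearby solutions, illustrating the
# continuity of a -> u_a asserted in the text.
f = lambda s: np.ones_like(s)
a1 = lambda s: 1.0 + 0.5 * np.sin(2 * np.pi * s)
a2 = lambda s: 1.0 + 0.5 * np.sin(2 * np.pi * s) + 1e-3
x, u1 = solve_elliptic_1d(a1, f)
x, u2 = solve_elliptic_1d(a2, f)
print(np.max(np.abs(u1 - u2)))  # small: the solutions stay close
```

For constant a ≡ 1 and f ≡ 1 the scheme reproduces the exact solution u(x) = x(1 − x)/2, which provides a quick check of the discretization.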
The random set case. Let a(x) be a parametrized random field. More specifically, it will be constructed as follows. In two space dimensions, take a closed and bounded interval I such that D ⊂ I × I and a random field of the form q(x, ω, ℓ) as in Subsection 3.2, and set

a(x, ω, ℓ) = µ(x) + χ(q(x_1, ω, ℓ) q(x_2, ω, ℓ)) (19)

for x = (x_1, x_2) in D.
Here µ(x) > 0 is the mean field (assumed fixed and deterministic) and χ is a cut-off function; this is needed because the individual realizations of a Gaussian random field can be arbitrarily large with non-zero probability.
In the end, the cut-off has to be chosen so as to guarantee that α ≤ a(x, ω, ℓ) ≤ β. One could also use different correlation lengths in the two variables and/or multiply the fields with different standard deviations σ_1, σ_2. The construction in one or three space dimensions is analogous.
When ℓ varies in an interval L, a set-valued solution to (17) can be defined as U(ω) = {u_{a(ω,ℓ)} : ℓ ∈ L}. Combining the continuity of the map a → u_a with the statement at the end of Subsection 3.3, one obtains that, for fixed ω, the map ℓ → u_{a(ω,ℓ)} is continuous from L to H^1_0(D), while for fixed ℓ, the map ω → u_{a(ω,ℓ)} is measurable. It follows from Subsection 3.1 that U(ω) is a random set in the target space H^1_0(D). Again, in one space dimension one may form the set-valued functions U(ω, x) = {u_{a(ω,ℓ)}(x) : ℓ ∈ L} at each x in D. By the same argument, the U(ω, x) are random sets in R; they form the point evaluations of the random set solution U(ω) at the points x in D. In space dimensions two and three, a modified argument as in the random field case has to be applied.
Numerical example. The purpose of this paragraph is to illustrate the method in a simple example, the displacement of an L-shaped membrane with non-uniform, random elastic properties. This has the character of a textbook example, and so all units are taken dimensionless. The L-shaped domain D can be seen in Figure 2 (left). The equation for the displacement is (17). A constant load f(x) ≡ 1 is assumed, and the random field a(x) is taken of the form (19) with constant mean µ(x) ≡ 1, autocovariance function of the form (1), and field variance identically equal to one as well. In this elliptic problem, it suffices to guarantee that the realizations of the random field a(x, ω, ℓ) stay above zero. Thus a lower cut-off function χ was chosen at the value 0.1. To produce the random set solution, the correlation length was taken from the interval 0.5 ≤ ℓ ≤ 1.5.
Numerically, the L-shaped region was subdivided into a grid of size 18 × 18, and problem (17) was solved by the finite element method using a Matlab program. To generate the random field, the Karhunen-Loève expansion truncated at M = 130 was used. The value of a(x, ω, ℓ) entering each finite element was chosen as the average of the values at its four nodes. A Monte Carlo sample size of 500 was used. For each realization of the underlying 2M-dimensional Gaussian variable, see (16), the finite element program was executed with eleven values of ℓ between 0.5 and 1.5, in steps of 0.1.
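A discrete analogue of this sampling step can be sketched as follows. The exponential autocovariance exp(−|x − y|/ℓ) is an assumption standing in for the paper's form (1), and the eigendecomposition of the discretized covariance matrix plays the role of the truncated Karhunen-Loève expansion:

```python
import numpy as np

def kl_sample(ell, n_grid=100, M=20, rng=None):
    """Draw one path of a zero-mean, unit-variance Gaussian field on [0, 1]
    with an (assumed) exponential autocovariance exp(-|x - y| / ell), via a
    truncated eigen-expansion of the discretized covariance matrix -- a
    discrete analogue of the truncated Karhunen-Loeve expansion."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.linspace(0.0, 1.0, n_grid)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # covariance matrix
    lam, phi = np.linalg.eigh(C)                         # ascending order
    lam, phi = lam[::-1][:M], phi[:, ::-1][:, :M]        # M largest modes
    xi = rng.standard_normal(M)                          # i.i.d. N(0, 1)
    return x, phi @ (np.sqrt(np.maximum(lam, 0.0)) * xi)

# One path per correlation length; sweeping ell over a grid in [0.5, 1.5]
# (as in the example) yields the family entering the random set solution.
x, path = kl_sample(ell=0.5, rng=np.random.default_rng(1))
print(path.shape)  # (100,)
```

Keeping the Gaussian coefficients ξ fixed while varying ℓ would reproduce the coupling across correlation lengths used in the example.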
As outlined above, this procedure results in an approximation to the random set solution U(ω). The solution is a random set in H^1_0(D), and in H^2(D′) for any open subregion D′, in fact everywhere except near the sharp inside corner [29]. Consequently, each horizontal or vertical slice through the solution is a random set in the space of continuous functions. At fixed x = (x_1, x_2), each realization of the solution is an interval. The mean (Aumann expectation) is an interval field. Figure 2 (left) shows a realization of the displacement random field at fixed correlation length ℓ = 0.5. Realizations of the full random set solution can be visualized by plotting slices at fixed x_1 or x_2. Such a cut at x_2 = 0.4444 is shown in Figure 2 (center). Above each fixed x_1, the realization is an interval. Finally, the mean value field can also be depicted by slices. Figure 2 (right) shows a slice of this interval field at x_2 = 0.4444. The upper and lower bounding curves of the mean displacement are shown together with the mean solutions at correlation lengths 0.5, 0.6, …, 1.5.
It is clear that the strategy outlined in the simple example (17) can be extended to more general elliptic equations and systems, using the variational formulation (see [44,70] for second order scalar elliptic equations and [12] for more general elliptic systems of elasticity theory).

Application to hyperbolic systems
Many equations from one-dimensional elastodynamics (and other fields, such as acoustics or hydrodynamics) can be written as a hyperbolic system

∂u_i/∂t + a_i(x, t) ∂u_i/∂x = Σ_{j=1}^n f_ij(x, t) u_j + g_i(x, t), i = 1, …, n, (20)

where x ∈ R and t ∈ R denote the spatial and time variable, respectively. The solution vector is (u_1, …, u_n), the driving term is (g_1, …, g_n), and the coefficient matrix is (f_ij)_{i,j=1,…,n}. The coefficients a_i are assumed to be real-valued and signify the propagation speeds of the solution components. The focus of this section will be on the case where the coefficients as well as the initial data are random fields/random sets on a probability space (Ω, Σ, P). A typical example from elastodynamics concerns longitudinal waves in thin, non-homogeneous rods, described by the wave equation

ρ(x) ∂²u/∂t² = ∂/∂x (E(x) ∂u/∂x) + q(x, t), (21)

where u is the longitudinal displacement, ρ the density, E the modulus of elasticity, and q the body force [26, Chapter 2]. Equation (21) is readily cast in the form of (20), resulting in a first-order system (22) with driving term g = q/ρ. A second example is provided by various scalar transport or advection-reaction equations of the form

∂u/∂t + a(x, t) ∂u/∂x = f(x, t) u + g(x, t), u(x, 0) = u_0(x). (23)

The construction of classical, deterministic solutions to the hyperbolic system (20) is a vector-valued version of the construction for the scalar equation (23). Thus it will suffice to briefly indicate how (23) can be treated. The systems case can be found, e.g., in [47, Section 13]. The solution can be constructed by the method of characteristics, which consists in solving the system of differential equations

∂γ(x, t, τ)/∂τ = a(γ(x, t, τ), τ), γ(x, t, t) = x.
This results in the characteristic curves τ → γ(x, t, τ), passing through the point x at time τ = t. Equation (23) is equivalent to the integral equation

u(x, t) = u_0(γ(x, t, 0)) + ∫₀ᵗ [f(γ(x, t, τ), τ) u(γ(x, t, τ), τ) + g(γ(x, t, τ), τ)] dτ, (25)

which in turn can be solved by successive iteration. The issue to be addressed here is when a and f, g (and possibly u_0) are random fields or parametrized random fields on the probability space (Ω, Σ, P). In order to obtain the solution as a random field or as a random set, additional properties of the coefficients are required.
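For the deterministic ingredients, the characteristic construction is easy to implement. The sketch below (the speed field a and initial profile u0 are invented for illustration) integrates the characteristic ODE with a classical Runge-Kutta scheme and evaluates the solution of the homogeneous equation (f = g = 0), which is constant along characteristics:

```python
import numpy as np

def characteristic(x, t, a, tau_end=0.0, steps=200):
    """Integrate the characteristic ODE d(gamma)/d(tau) = a(gamma, tau),
    gamma(tau = t) = x, from tau = t back to tau = tau_end (RK4)."""
    gamma, tau = x, t
    h = (tau_end - t) / steps
    for _ in range(steps):
        k1 = a(gamma, tau)
        k2 = a(gamma + 0.5 * h * k1, tau + 0.5 * h)
        k3 = a(gamma + 0.5 * h * k2, tau + 0.5 * h)
        k4 = a(gamma + h * k3, tau + h)
        gamma += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        tau += h
    return gamma

# For the homogeneous transport equation u_t + a u_x = 0, the solution is
# constant along characteristics: u(x, t) = u0(gamma(x, t, 0)).
a = lambda x, t: 1.0 + 0.2 * np.sin(x)     # smooth, bounded speed field
u0 = lambda x: np.exp(-x**2)               # initial profile
u = lambda x, t: u0(characteristic(x, t, a))
print(u(1.0, 0.0))  # at t = 0 this is just u0(1.0)
```

With nonzero f and g, the same characteristic curves enter the fixed-point iteration for the integral equation (25).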
Following the theory of random differential equations [8,30], a continuous random field solution γ(x, t, τ, ω) can be obtained provided the transport coefficient a(x, t, ω) is pathwise Lipschitz continuous and globally bounded. The next issue is the composition of the random fields f, g, u_0 with γ, for example, (x, t, τ, ω) → f(γ(x, t, τ, ω), τ, ω). This composition is jointly measurable (in all four variables) and pathwise continuous if both γ and f are pathwise continuous random fields. Finally, the integral equation (25), now with random field input, has to be solved by successive approximation (on suitable bounded domains of determinacy), taking account of the measurability at each iteration step. The precise requirements are:
• for fixed ω ∈ Ω, a(x, t, ω) is locally Lipschitz in (x, t);
• |a(x, t, ω)| has a uniform global bound independent of (x, t, ω);
• |f(x, t, ω)| is locally bounded in (x, t) independently of ω;
• f, g and u_0 are pathwise continuous random fields.
Under these assumptions, the integral equation (25) has a pathwise continuous random field solution u(x, t, ω). Note that the assumptions on a and f can be enforced by a cut-off procedure similar to (19). If, in addition, the random fields a, f , g, u 0 are pathwise differentiable, u(x, t, ω) is a pathwise classical solution to the differential equation (23). When solving, e.g., the wave equation (21), one has to take care that the fields ρ and E are sufficiently regular so that the coefficients in system (22) have the required properties. Another approach to hyperbolic systems with random field coefficients, based on weak solutions, can be found in [40]. It applies in any space dimension and leads to a different set of requirements on the coefficients and data.
The random set case. Assume that some or all of the random fields a, f, g, u_0 depend on interval parameters. As in the previous sections, this leads to a set-valued solution ω → U(ω) to (23) in a suitable space to be determined. In order to prove that the set-valued solution is a random set, one may follow exactly the same strategy as in Section 4. Only the continuous dependence of the solution on the coefficients and data requires different arguments, as outlined next. The continuity of the map (a, f, g, u_0) → u = u_{(a,f,g,u_0)} can be split into the continuity of the maps a → γ and (γ, f, g, u_0) → u.
To be specific, start with a bounded and closed initial interval K_0 = [−κ, κ] and a time horizon T > 0. Denote by c the uniform global bound of |a(x, t, ω)|, as listed under the requirements. The region K_T = {(x, t) : |t| ≤ T, |x| ≤ κ − c|t|} is a domain of determinacy for K_0. That is, when (x, t) ∈ K_T, all characteristic curves τ → γ(x, t, τ, ω) joining (x, t) with the x-axis also lie in K_T. Denote by Lip(K_T) the space of Lipschitz continuous functions on K_T and by Γ(K_T × [−T, T]) the subset of C(K_T × [−T, T]) which comprises all characteristic curves γ arising from elements a ∈ Lip(K_T). The first step is to show that the map Lip(K_T) → Γ(K_T × [−T, T]), a → γ, is continuous when the spaces are equipped with the supremum norm. The second step is to prove that the map (γ, f, g, u_0) → u is continuous in the supremum norm. The latter is a rather standard argument in the theory of differential equations using the Gronwall lemma. Details are elaborated in [45]. Since the initial choice of K_0 was arbitrary, the continuity of the map (a, f, g, u_0) → u = u_{(a,f,g,u_0)} with values in C(R²) follows. Thus if the random fields (a, f, g, u_0) depend continuously on a (possibly multidimensional) parameter λ ∈ Λ, the same arguments as in Sections 3 and 4 show that U(ω) = {u_{(a(ω,λ), f(ω,λ), g(ω,λ), u_0(ω,λ))} : λ ∈ Λ} is a random set in C(R²).

The multidimensional case: The generalization to the transport equation in higher space dimensions is straightforward, using the method of characteristics. Here x ∈ R^d and a = (a_1, …, a_d), while u, f and g are still scalar functions. In the multidimensional analogues of (20), (21), it is more challenging to establish continuous dependence of the solution on the data. One approach could be to formulate the problem in terms of semigroup theory and then invoke Kato's perturbation results [35]. For the system of linear acoustics this has been done, e.g., in [23, Section 6]. Another approach takes recourse to the Fourier integral operator representation of the solution [52].
Further results and numerical aspects

The parametric point of view
Given a parametrized family a_λ, λ ∈ Λ, of random variables or random fields, and a map a_λ → u_{a_λ} describing the response of a system, the approach favored in this paper has been to first construct the random set U(ω) = {u_{a_λ(ω)} : λ ∈ Λ} and then to define the lower and upper probabilities of an event B by P̲(B) = P({ω : U(ω) ⊆ B}) and P̄(B) = P({ω : U(ω) ∩ B ≠ ∅}).
An alternative approach, the parametric viewpoint, would consist in skipping the random set step and defining the lower and upper probabilities induced by the family a_λ, λ ∈ Λ, directly, as inf_{λ∈Λ} P(u_{a_λ} ∈ B) and sup_{λ∈Λ} P(u_{a_λ} ∈ B). This approach gives tighter bounds, P̲(B) ≤ inf_{λ∈Λ} P(u_{a_λ} ∈ B) ≤ sup_{λ∈Λ} P(u_{a_λ} ∈ B) ≤ P̄(B), but deprives one of using the concepts of the theory of random sets. The numerical computation of the two types of upper and lower probabilities is different, but of comparable effort. For a comparison of the two approaches, from the conceptual and numerical points of view, see [3,21].

Numerical aspects
As is well known, the computational effort when implementing heterogeneous uncertainty models is high: it involves both Monte Carlo sampling and the computation of interval bounds. Suppose the quantity of interest, given through a model response function a_λ → u_{a_λ} as above, is one-dimensional. This could be the model response at a single point x or the value of a performance criterion. Further, suppose the lower and upper distribution functions F̲(b), F̄(b) of this quantity are sought. The procedure in the random set framework is as follows. If the hypotheses of the previous sections are met, the quantity of interest (26) is a random interval U(ω) = [u̲(ω), ū(ω)], and the lower and upper distribution functions are simply given by F̲(b) = P(ū ≤ b) and F̄(b) = P(u̲ ≤ b). The following algorithm produces a numerical approximation to these lower and upper distribution functions. From there, further probabilistic indicators such as bounds on exceedance probabilities or moment bounds can be computed. It is assumed that the parameter set Λ indexing the family of random variables a_λ is a d-dimensional interval (or a sufficiently regular subset of such an interval). It is also assumed that the random variables a_λ are defined on a common probability space (Ω, Σ, P). The following steps are required:
• Discretize Λ in a grid λ_1, …, λ_M.
• Generate a sample ω_1, …, ω_N distributed according to the underlying probability P on the common probability space Ω.
• For each ω_k, estimate u̲(ω_k) ≈ min_{i=1,…,M} u_{a_{λ_i}}(ω_k) and ū(ω_k) ≈ max_{i=1,…,M} u_{a_{λ_i}}(ω_k). Each [u̲(ω_k), ū(ω_k)] is a realization of the random interval [u̲, ū].
• Then u̲(ω_k) and ū(ω_k), k = 1, …, N, are samples of u̲ and ū, respectively. Their empirical cumulative distribution functions provide estimates for F̄(b) and F̲(b), respectively.
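The four steps can be sketched as follows; the scalar response function below is a cheap hypothetical stand-in for the expensive PDE solve, monotone in the interval parameter:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(omega, lam):
    # Hypothetical scalar response u_{a_lambda}(omega), standing in for an
    # expensive model evaluation; monotone in the interval parameter lam.
    return omega + lam * omega**2

lambdas = np.linspace(0.5, 1.5, 11)        # discretize Lambda
omegas = rng.standard_normal(500)          # sample the common space Omega
# For each omega_k, bound the response over the parameter grid.
responses = np.array([[model(w, lam) for lam in lambdas] for w in omegas])
u_lo, u_hi = responses.min(axis=1), responses.max(axis=1)

def F_lower(b):
    return np.mean(u_hi <= b)   # lower CDF: the whole interval lies below b

def F_upper(b):
    return np.mean(u_lo <= b)   # upper CDF: the interval reaches below b

print(F_lower(1.0), F_upper(1.0))  # F_lower(b) <= F_upper(b)
```

The empirical distribution functions of u_hi and u_lo are exactly the estimates of F̲ and F̄ described in the bullet points.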
In the parametric point of view, the assumptions on the index set Λ are the same. It is not necessary to require that the random variables a_λ are defined on the same probability space. Denote by P_λ the probability distribution of a_λ. The algorithm is as follows:
• Discretize Λ in a grid λ_1, …, λ_M.
• For each λ_i, generate a sample a_{λ_i}(1), …, a_{λ_i}(N) of the random variable a_{λ_i} (that is, generate a sample of size N from the distribution P_{λ_i}).
• Estimate the probability P_{λ_i}((−∞, b]) using the sample, for example by means of the Monte Carlo estimate P_{λ_i}((−∞, b]) ≈ #{j : a_{λ_i}(j) ≤ b}/N. Minimizing and maximizing these estimates over i = 1, …, M yields estimates of the lower and upper distribution functions.

In these crude forms, both algorithms require M · N evaluations of the (generally expensive) model function u. Accordingly, many approaches have been devised to reduce the computational cost. In the random set case, a stochastic response surface, like a polynomial chaos expansion [24], reduces the number M to the level required for the accuracy of the response surface. Evaluating a Monte Carlo sample of size N, given the response surface, is cheap [50]. For further sampling concepts for random sets, see [1].
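A corresponding sketch of the parametric algorithm, with a hypothetical family a_λ ~ N(λ, 1), λ ∈ [0.5, 1.5], and a fixed threshold b = 0 (family, threshold, and sample sizes are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
b = 0.0
lambdas = np.linspace(0.5, 1.5, 11)        # discretize Lambda

# Hypothetical family: a_lambda ~ Normal(lambda, 1); each lambda gets its
# own sample of size N, drawn from its own distribution P_lambda.
probs = []
for lam in lambdas:
    sample = rng.normal(loc=lam, scale=1.0, size=10_000)
    probs.append(np.mean(sample <= b))     # Monte Carlo estimate of P_lambda((-inf, b])

P_low, P_up = min(probs), max(probs)       # bounds over the parameter grid
print(P_low, P_up)  # roughly 0.067 and 0.309 for this family
```

Note that, unlike the random set algorithm, no common probability space is needed: each λ_i is sampled from its own distribution P_{λ_i}.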
Here is a short list of methods currently being developed for the reduction of computational cost in imprecise probability models, in particular for imprecise random fields: approximations by inner and outer bounds [2,3,4,68,77]; propagation methods for p-boxes [13,14,27,54,75,76]; methods employing polynomial chaos expansions [37,49,61]; probability plots, an enhancement of the first order reliability method (FORM) proposed by [31]. A very efficient method appears to be advanced line sampling [10,66], which intertwines the two required loops and can reduce M and N simultaneously. In the parametric point of view, a promising method is importance sampling, or rather reweighting a single importance Monte Carlo sample, which means M = 1; still, N model evaluations are required [18,19,71,72,73,78]. This can of course be aided by a deterministic response surface.

Modelling correlations
The question of how to model correlations in hybrid probability-interval uncertainty analysis has occurred implicitly at various places in this article. The purpose of this subsection is to spell out more explicitly what kind of approaches are possible. Essentially, there are three ways to formulate correlations.
Correlations introduced by the underlying probabilistic model. In the approach to random sets based on families of random variables, correlations can be defined through their joint distributions. These correlations are carried along when replacing distributional hyperparameters by intervals. This is generally the case in random field models with interval correlation lengths, as e.g. in [50] as well as in Section 4. More generally, random fields with families of correlation structures have been considered in [16], especially analyzing the corresponding Karhunen-Loève expansions with interval-valued eigenvalues and eigenfunctions. In models with interval parameters and random field excitation, correlations are introduced through the random field, but become interval-valued in output and/or state variables. Such a situation has been investigated in connection with spectral finite elements (finite elements combined with polynomial chaos expansions) in [11].
Copulas and multivariate capacities. A very interesting and practical approach is due to [1,2], who introduced the notion of random sets of indexable type. These random sets are defined on Ω = (0, 1]^d and have values in a target product space A = A_1 × · · · × A_d. The focal elements are of product form A_1(ω_1) × · · · × A_d(ω_d), where ω = (ω_1, …, ω_d) ∈ Ω and each A_j(ω_j) belongs to a suitable family of subsets of A_j (closed subsets for the purpose of the present paper). A correlation structure can be imposed on a random set of indexable type by prescribing the joint probability measure on Ω with the help of a d-dimensional copula. The approach has been further developed in connection with simulation methods in [4].
Alternatively, copulas can also be introduced so as to act on the target space, rather than on the underlying probability space Ω. A bit of terminology is in order. The upper probability of a Borel subset B ⊂ A defined by (6) can be viewed as a so-called capacity functional [41], say on the family K(A) of compact subsets of A, while the lower probability defines a containment functional. In the multidimensional case, there are two possible ways of extending capacity functionals, following [59]. The first one is to define a capacity functional on the family of sets belonging to K(A_1) × · · · × K(A_d) directly, in which case it is termed a multivariate capacity functional. The second choice is to define a capacity functional on the larger family of sets K(A_1 × · · · × A_d), referred to as a joint capacity functional. It has been shown in [58] that multivariate containment functionals can be reconstructed from their univariate marginals by means of a family of copulas (but not by a single copula, in general). On the other hand, joint capacity functionals can always be reconstructed from their marginal capacities by means of a single copula, provided the marginal capacities are maxitive [59] (this is the case if the marginals arise from normalized fuzzy sets as indicated in Subsection 2.3). The relation of this approach to that of [2] and others has been discussed in [57].
Structured focal elements. The third possibility to model dependence is through the structure of the focal elements. The geometric approach would replace, say in the case of two variables, the rectangles A_1(ω_1) × A_2(ω_2) defining the joint focal sets by rhombic shapes [7,69]. Alternatively, any random set delivers a family of probability distributions, see e.g. [48]. One may keep the focal elements as rectangles but restrict the admissible joint probability distributions [20]. In the case of interval fields, one may use explicitly defined basis functions multiplied by interval factors [74] or spatial dependency functions [62], which are the counterpart to Karhunen-Loève expansions in the interval setting.

Summary and conclusions
The paper addressed mathematical and modelling issues arising when heterogeneous uncertainty enters engineering models. The model inputs were assumed to consist of random variables or random fields, combined with interval parameters. This led to model outputs which were both random and set-valued. This set-up is not new; it has actually been widely adopted in the engineering literature [11,22,38,42,50,51,60,63,64,66], see also the literature review in the introduction. However, few papers have addressed the question whether the set-valued model output satisfies the measurability conditions which make it a random set in the proper sense [3,53,55,56]. The main contribution of this paper was to show how the theoretical measurability questions can be addressed, and that the continuous dependence of the solution and the involved random and interval quantities on their parameters is the essential ingredient. This is important inasmuch as computing upper and lower probabilities in the sense of (6), (7) is only justified when the model output is a random set.
It was assumed here that the underlying probability spaces were infinite and that the quantity of interest was the model output itself. If one is only interested in upper and lower bounds on certain probabilities or moments, one may avoid collecting the output in a random set. Rather, one may work directly with the families of random variables defining the input, propagating them through the model and computing the desired probability bounds. In this parametric approach, which has been adopted and discussed e.g. in [3,16,18,72], the measurability issue is much simpler: it reduces to proving that the propagated variables are measurable (that is, still random variables), which is guaranteed if the input-output map is continuous.
Another simple situation in which the measurability question does not arise is the case of finite random sets, because on a finite probability space (with the power set as the σ-algebra) every map is measurable.
The focus was on model outputs that were generated through solutions to elliptic and hyperbolic partial differential equations. The most difficult part was the proof that the model output depended continuously on the input parameters and hyperparameters. There is no general method of proof; different arguments are needed for each type of equation. It is expected that new arguments will be needed when considering other types of equations, such as parabolic equations. As the paper was oriented toward the theoretical background of a widely adopted framework, only a simple illustrative example was given. Many larger scale applications can be found in the papers quoted above. For the same reason, computational aspects have only been briefly addressed. For more extensive analyses of the computational effort, the reader is referred to [4,14,18,21].
An important future research question is the modelling of correlations. It is expected that the framework of [59] can encompass most other approaches. In particular, it will be interesting to investigate how it applies in the case of interval fields.

Acknowledgements
This work was supported by the FWF doctoral program Computational Interdisciplinary Modelling W1227 of the Austrian Science Fund as well as a doctoral scholarship of the University of Innsbruck. Thanks are due to Thomas Fetz for providing the Matlab code solving the deterministic FE problem (17). The authors are grateful to the two referees for valuable comments. Thanks to the first referee, the important issue of correlation modelling has been included. The detailed and critical comments of the second referee led to many clarifications and to substantial improvements of the presentation.

Appendix: Explicit construction of infinite probability spaces
In Subsection 3.2 it was necessary to introduce common probability spaces for random fields with different variances and correlation lengths, first for random fields defined through the Langevin equation, and second for the infinitely many Gaussian random variables arising in the Karhunen-Loève expansion. The appendix briefly explains the mathematical details.
Wiener space. Here Ω_W = C([0, ∞)) is the space of continuous functions ω = ω(x) on the half-line. It is a metric space; the metric is generated by the semi-norms ‖ω‖_{a,b} = max_{a≤x≤b} |ω(x)|, 0 ≤ a < b < ∞. Then Σ_W is the corresponding Borel σ-algebra. The probability measure P_W is the measure on Ω_W generated by the transition probabilities of the Wiener process [67]. In this setting, the Wiener process W : [0, ∞) × Ω_W → R is defined through the point evaluations W(x, ω) = ω(x). Note that the realizations of the process are identified with the random elements, as is usual practice when constructing probability spaces on which stochastic processes live. The one-point and two-point probabilities are, for points 0 < x < y,

P(W(x) ∈ B_1) = ∫_{B_1} p(x, 0, u) du, P(W(x) ∈ B_1, W(y) ∈ B_2) = ∫_{B_1} ∫_{B_2} p(x, 0, u) p(y − x, u, v) dv du,

with the Gaussian transition kernel p(t, u, v) = (2πt)^{−1/2} exp(−(v − u)²/(2t)), and similarly for the joint probabilities at finitely many points.

Infinite products of Gaussian spaces. For k = 1, 2, 3, …, let Ω_k = R with the Borel σ-algebra Σ_k and the standard Gaussian measure P_k(B_k) = (2π)^{−1/2} ∫_{B_k} exp(−u²/2) du for any Borel subset B_k ⊂ R. The spaces (Ω_k, Σ_k, P_k) are all identical copies of the standard Gaussian space, but the index is needed to form their product. The standard Gaussian variable on Ω_k = R is defined through the identity map id(ω_k) = ω_k. Again, the realizations of the random variable are identified with the random elements.