An analysis of the Rüschendorf transform - with a view towards Sklar's Theorem

In many applications, including financial risk measurement, copulas have proven to be a powerful building block for reflecting multivariate dependence between several random variables, including the mapping of tail dependencies. A famous key result in this field is Sklar's Theorem. Meanwhile, there exist several approaches to prove Sklar's Theorem in its full generality. An elegant probabilistic proof was provided by L. Rüschendorf. To this end he implemented a certain "distributional transform" which naturally transforms an arbitrary distribution function $F$ to a flexible parameter-dependent function which exhibits exactly the same jump sizes as $F$. By using some real analysis and measure theory only (without involving the use of a given probability measure), we delve into the rich underlying structure of the distributional transform. Based on the results derived from this analysis (such as Proposition 2.5 and Theorem 2.12), including a strong and frequent use of the right quantile function, we revisit Rüschendorf's proof of Sklar's Theorem and provide some supplementary observations, including a further characterisation of distribution functions (Remark 2.3) and a strict mathematical description of their "flat pieces" (Corollary 2.8 and Remark 2.9).


Introduction
The mathematical investigation of copulas started in 1951 with the following problem posed by M. Fréchet: suppose one is given n random variables X_1, X_2, . . . , X_n, all defined on the same probability space (Ω, F, P), such that each random variable has a (not necessarily continuous) distribution function F_i (i = 1, 2, . . . , n). What can then be said about the set of all possible n-dimensional distribution functions of the random vector (X_1, X_2, . . . , X_n) (cf. [7])? This question has an immediate answer if the random variables are assumed to be independent, since in this case there exists a unique n-dimensional distribution function of the random vector (X_1, X_2, . . . , X_n), given by the product ∏_{i=1}^n F_i. However, if the random variables are not independent, there was no clear answer to Fréchet's problem.
In [15], A. Sklar introduced the expression "copula" (referring to a grammatical term for a word that links a subject and predicate), and provided answers to some of the questions of M. Fréchet. In the following couple of decades, copulas (which are precisely finite dimensional distribution functions with uniformly distributed marginals) were mainly used in the framework of probabilistic metric spaces (cf. e. g. [13, 14]). Later, probabilists and statisticians became interested in copulas, since copulas define in a "natural way" nonparametric measures of dependence between random variables, allowing one to include a mapping of tail dependencies. Since then, they have come to play an important role in several areas of probability and statistics (including Markov processes and non-parametric statistics), in financial and actuarial mathematics (particularly with respect to the measurement of credit risk), and even in medicine and engineering.
One of the key results in the theory and applications of copulas is Sklar's Theorem (which actually was proven in [13] and not in [15]). It says: for every n-dimensional distribution function F with marginals F_1, . . . , F_n there exists a copula C_F such that F = C_F ∘ (F_1, . . . , F_n). Furthermore, if F is continuous, the copula C_F is unique. Conversely, for any univariate distribution functions H_1, . . . , H_n and any copula C, the composition C ∘ (H_1, . . . , H_n) defines an n-dimensional distribution function with marginals H_1, . . . , H_n.
Since the original proof of (the general, non-continuous case of) Sklar's Theorem is rather complicated and technical, there have been several attempts to provide different and more lucid proofs, involving not only techniques from probability theory and statistics but also from topology and functional analysis (cf. [4]).
Among those different proofs of Sklar's Theorem, there is an elegant, yet rather short, proof provided by L. Rüschendorf, originally published in [12]. He provided a very intuitive and primarily probabilistic approach which allows one to treat general distribution functions (including discrete parts and jumps) in a similar way as continuous distribution functions. To this end, he applied a generalised "distributional transform" which, according to [12], has been used in statistics for a long time in relation to the construction of randomised tests. By making consistent use of the properties of this generalised "distributional transform" together with Proposition 2.1 in [12], the proof of Sklar's Theorem in fact follows immediately (cf. Theorem 2.2 in [12]). Independently of [12], the same idea was used in the (again rather short) proof of Lemma 3.2 in [11]. All key inputs for the proof of Sklar's Theorem are clearly provided by Proposition 2.1 in [12]. However, the proof of the latter result is rather difficult to reconstruct. It says: [12] - Proposition 2.1. Let X, V be two random variables, defined on the same probability space (Ω, F, P), such that V ∼ U(0, 1) and V is independent of X. Let F be the distribution function of the random variable X. Then U := F_V(X) ∼ U(0, 1), and X = F^{-1}(U) P-almost surely.
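Proposition 2.1 lends itself to a quick numerical sanity check. The following Python sketch (a toy illustration only, not part of the original argument; the discrete distribution and the sample size are arbitrary choices) simulates the distributional transform U = F(X−) + V·∆F(X) for a purely discrete X and verifies both claims empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

# a purely discrete toy distribution: P(X=0)=0.2, P(X=1)=0.5, P(X=2)=0.3
xs = np.array([0.0, 1.0, 2.0])
ps = np.array([0.2, 0.5, 0.3])
cdf = np.cumsum(ps)                           # F(0)=0.2, F(1)=0.7, F(2)=1.0
cdf_left = np.concatenate(([0.0], cdf[:-1]))  # left-hand limits F(x-) at the atoms

n = 200_000
idx = rng.choice(len(xs), size=n, p=ps)
X = xs[idx]
V = rng.uniform(size=n)                       # V ~ U(0,1), independent of X

# distributional transform U = F(X-) + V * (F(X) - F(X-))
U = cdf_left[idx] + V * (cdf[idx] - cdf_left[idx])

def quantile(alpha):
    """generalised inverse F^(alpha) = inf{x : F(x) >= alpha}"""
    return xs[np.searchsorted(cdf, alpha)]

# U should be (approximately) uniformly distributed on (0,1) ...
print(abs(U.mean() - 0.5) < 0.01, abs(U.var() - 1/12) < 0.01)
# ... and X is recovered exactly via the generalised inverse
print(np.all(quantile(U) == X))
```

Note that the exact recovery X = F^{-1}(U) holds sample-wise here, not merely in distribution, which is precisely the strength of the transform.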
While carefully studying (and reconstructing) the proof of Sklar's Theorem built on Proposition 2.1 in [12], we recognised that it actually implements key mathematical objects which do not involve probability theory at all and which play an important role beyond statistical applications.
The main contribution of our paper is to provide a thorough analysis of these mathematical building blocks by carefully studying the properties of a real-valued (deterministic) function used in the proof of Proposition 2.1 in [12]: the so-called "Rüschendorf transform". We reveal some interesting structural properties of this function which, to the best of our knowledge, have not been published before, such as e. g. Theorem 2.12, which actually is a result on Lebesgue-Stieltjes measures, strongly built on the role of the right quantile function, which seems not to be widely used in the literature (as opposed to the left quantile function).
Equipped with Theorem 2.12, we then revisit the proof of Proposition 2.1 in [12] (cf. also [10, Chapter 1.1.2]). However, in our approach Proposition 2.1 in [12] is an implication of Theorem 2.12 and Lemma 2.15. For the sake of completeness we include a proof of Sklar's Theorem again (cf. also [10, Chapter 1.1.2]), yet as an implication of Theorem 2.12, finally leading to Remark 2.21.
Last but not least, by observing the significance of the jumps of the lowest generalised inverse, the proof of Theorem 2.12 indicates how to construct the P-null set in Proposition 2.1 in [12] explicitly, leading to Theorem 2.18.

The Rüschendorf Transform
For the moment, let us completely ignore randomness and probability theory. We are "only" working within a subclass of real-valued functions, all defined on the real line, and with suitable subsets of the real line.
Let F : R −→ R be an arbitrary right-continuous and non-decreasing function. Let x ∈ R. Since F is non-decreasing, it is well-known that both the left-hand limit F(x−) := lim_{t↑x} F(t) and the right-hand limit F(x+) := lim_{t↓x} F(t) exist, and that F(x+) = F(x) by right-continuity. We consider the following important transform of F. We call the real-valued function R_F : R × [0, 1] −→ R, given by R_F(x, λ) := F(x−) + λ (F(x) − F(x−)), the Rüschendorf transform of F, and F_λ := R_F(·, λ) the Rüschendorf λ-transform of F. Clearly, we have the following equivalent representation of the Rüschendorf λ-transform F_λ: F_λ(x) = (1 − λ) F(x−) + λ F(x). In particular, for all (x, λ) ∈ R × [0, 1] the following inequality holds: F(x−) ≤ F_λ(x) ≤ F(x).
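To make the λ-transform concrete, the following Python sketch (a hypothetical example: F is chosen as a mixture of an atom of mass 0.5 at 0 and an exponential tail; all names are illustration choices) evaluates F_λ(x) = F(x−) + λ·∆F(x) on a grid and checks the sandwich inequality F(x−) ≤ F_λ(x) ≤ F(x):

```python
import numpy as np

# F: a right-continuous distribution function with one jump of size 0.5 at x = 0:
#   F(x) = 0                           for x < 0
#   F(x) = 0.5 + 0.5 * (1 - exp(-x))  for x >= 0
def F(x):
    x = np.asarray(x, dtype=float)
    return np.where(x < 0, 0.0, 0.5 + 0.5 * (1.0 - np.exp(-x)))

def F_left(x):
    # left-hand limit F(x-); the only discontinuity sits at x = 0
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0, 0.0, F(x))

def F_lambda(x, lam):
    # Rueschendorf lambda-transform: F_lambda(x) = F(x-) + lam * (F(x) - F(x-))
    return F_left(x) + lam * (F(x) - F_left(x))

xs = np.linspace(-2.0, 3.0, 501)
for lam in (0.0, 0.25, 0.5, 1.0):
    vals = F_lambda(xs, lam)
    # sandwich inequality F(x-) <= F_lambda(x) <= F(x) for lam in [0, 1]
    assert np.all(F_left(xs) <= vals) and np.all(vals <= F(xs))

# at the jump, the transform interpolates linearly in lambda between F(0-) and F(0)
print(F_lambda(0.0, 0.0), F_lambda(0.0, 0.5), F_lambda(0.0, 1.0))  # prints: 0.0 0.25 0.5
```

Away from the jump the value of λ is irrelevant; exactly at the jump, λ sweeps linearly through the gap [F(0−), F(0)].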

Assumption 2.2.
In the following we assume throughout that F is bounded on R (i. e., the range F(R) is a bounded subset of R), implying that F(R) ⊆ [c_*, c^*] for some real numbers c_* < c^*. Moreover, let us assume that for any α ∈ (c_*, c^*) the set {x ∈ R : F(x) ≥ α} is non-empty and bounded from below. Without loss of generality, we may assume from now on that c_* = 0 and c^* = 1 (else we would have to work with the function (F − c_*)/(c^* − c_*)).
Although its proof (by contradiction) is mostly an easy calculus exercise with sequences, the following observation, which does not require a right-continuity assumption, should be explicitly noted (cf. also [5, 6, 13]): let F : R −→ R be an arbitrary non-decreasing function. Then the following statements are equivalent: (i) for any α ∈ (0, 1) the set {x ∈ R : F(x) ≥ α} is non-empty and bounded from below; (ii) inf{x ∈ R : F(x) ≥ α} is a well-defined real number for any α ∈ (0, 1).
Hence, given Assumption 2.2, the assumed right-continuity of F and Remark 2.3 imply that (possibly after shifting and stretching F adequately) F actually is a distribution function! Therefore, its generalised inverse function F∧ : (0, 1) −→ R, given by F∧(α) := inf{x ∈ R : F(x) ≥ α}, is well-defined and satisfies F(F∧(α)−) ≤ α for any α ∈ (0, 1) (cf. e. g. [9]). Actually, since F is assumed to be right-continuous, it follows that α ≤ F(F∧(α)). Moreover, the following important inequality is satisfied: F(F∧(α) − δ) < α ≤ F(F∧(α) + ε) for all α ∈ (0, 1), δ > 0, and ε > 0. Hence, F(F∧(α)−) ≤ α ≤ F(F∧(α)) for all α ∈ (0, 1). Also recall from e. g. [14] that {x ∈ R : F(x) ≥ α} = [F∧(α), ∞). Put ∆F(x) := F(x) − F(x−). Then by J_F := {x ∈ R : ∆F(x) > 0} we denote the set of all jumps of F, which is well-known to be at most countable.
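Both quantile functions appearing in our analysis can be illustrated numerically. The following Python sketch (a toy step cdf with a deliberate flat piece; one common definition of the right quantile, F∨(α) = inf{x : F(x) > α}, is used here) checks the Galois-type equivalence F∧(α) ≤ x ⟺ α ≤ F(x) and exhibits a flat piece [F∧(α), F∨(α)]:

```python
import numpy as np

# step cdf with jumps of size 0.5 at x = 1 and x = 2, hence a flat piece
# at level 0.5 on the interval [1, 2)
xs  = np.array([1.0, 2.0])
cdf = np.array([0.5, 1.0])

def F(x):
    """right-continuous step cdf"""
    return np.concatenate(([0.0], cdf))[np.searchsorted(xs, x, side='right')]

def F_low(alpha):   # left quantile  F^(alpha)  = inf{x : F(x) >= alpha}
    return xs[np.searchsorted(cdf, alpha, side='left')]

def F_up(alpha):    # right quantile F^v(alpha) = inf{x : F(x) > alpha}
    return xs[np.searchsorted(cdf, alpha, side='right')]

# Galois-type equivalence: F^(alpha) <= x  <=>  alpha <= F(x)
for alpha in (0.1, 0.5, 0.9):
    for x in (0.0, 1.0, 1.5, 2.0, 3.0):
        assert (F_low(alpha) <= x) == (alpha <= F(x))

# at alpha = 0.5 the two quantiles split apart: the "flat piece" [1, 2] emerges
print(F_low(0.5), F_up(0.5))   # prints: 1.0 2.0
```

The levels α at which F_low(α) < F_up(α) are precisely the levels of the flat pieces of F, which is the phenomenon studied in Corollary 2.8 and Remark 2.9.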
Throughout the remaining part of our paper, we follow the notation of [12] and put ξ := F∧(α) for fixed 0 < α < 1. By taking a closer look at F∧(F_λ(x)), we first note the following observation.
The next result shows an important part of the role of the Rüschendorf transform, which can be understood more easily if one sketches the graph of F including its jumps. Since J_F is at most countable, it follows that J_F = {x_n : n ∈ M}, where either card(M) < ∞ or M = N. By making use of this representation and the canonical construction in [14, Chapter 4.4], we arrive at the following result. Proof. To prove the first set inclusion, we may assume without loss of generality that F is not continuous, which gives the second set inclusion.
To verify the representation of the disjoint union, fix α ∈ (0, 1). Then 0 < λ(α) < 1. Furthermore, a straightforward application of inequality (2.4) (together with (2.5) and the monotonicity assumption on F) shows the graphically clear fact that there is no x ∈ J_F such that (F(x−), F(x)) contains elements of the form F(z), respectively F(w−), for some z, w ∈ R. Now, given the construction of λ(α) above and the listed properties of any of the sets (F(x_n−), F(x_n)), the assertion about the mapping Φ_F follows immediately.
In particular, the following statements are equivalent: Hence, we would obtain α ≤ F(x−) < α, which is a contradiction. Thus, the restricted function F|_{A^+_{λ,α}} is continuous on A^+_{λ,α}. Let u ∈ A^+_{λ,α}. Since F is continuous at u, it follows that F(u) = F(u−). Thus, ∅ ≠ A^+_{λ,α} = B. To prove (ii), suppose that A^+_{λ,α} is non-empty. The previous calculations show that the existence of an element u ∈ A^+_{λ,α} already implies the asserted equality. To finish the proof of (i), we have to verify (2.7). To this end, let ξ < η and x ∈ (ξ, η). Then there exists

In [1], for a large class of distribution functions F, any non-empty set [ξ, η] = [F∧(α), F∨(α)] even emerges as a set of optimal solutions of the so-called "single period newsvendor problem", which asks for the minimisation of coherent risk measures, such as the conditional value-at-risk (which coincides with Expected Shortfall), corresponding to a cost function induced by random demand. Here, one should recall that recently the Basel Committee on Banking Supervision (BCBS) suggested in their updated consultative document "Fundamental review of the trading book" to implement Expected Shortfall at α = 97.5% in a bank's internal market risk model to calculate its minimum capital requirements with respect to market risk.
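For a continuous loss distribution, Expected Shortfall at level α admits the quantile representation ES_α = (1/(1−α)) ∫_α^1 F∧(u) du. The following Python sketch (a toy illustration; the standard normal loss model, sample size, grid and tolerance are all arbitrary choices) approximates this integral by averaging empirical quantiles over a fine grid of levels, at the BCBS level α = 97.5%:

```python
import numpy as np

rng = np.random.default_rng(1)
losses = rng.standard_normal(400_000)   # toy loss sample: standard normal

alpha = 0.975
# ES_alpha = (1/(1-alpha)) * integral_alpha^1 F^(u) du, approximated by
# averaging empirical quantiles on a fine grid of levels u in (alpha, 1)
us = np.linspace(alpha, 1.0, 1001, endpoint=False)[1:]
es = np.quantile(losses, us).mean()

# for a standard normal, the exact value is phi(z_0.975)/(1 - 0.975) ~ 2.34
print(es)
```

Here the left quantile F∧ is replaced by its empirical counterpart `np.quantile`; at levels where the distribution is continuous and strictly increasing, left and right quantiles coincide, so the choice between them does not affect the integral.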
Let B(R) denote the set of all Borel subsets of R. In the following, let µ_F : B(R) −→ [0, ∞] be the Lebesgue-Stieltjes measure of F. For a detailed description of the construction and properties of the Lebesgue-Stieltjes measure (including Lebesgue-Stieltjes integration), we refer the reader to e. g. [2] and [3]. For the convenience of the reader, we recall the following fundamental result (cf. e. g. [2]): if G : R −→ R is right-continuous and non-decreasing, then µ_G((x, y]) = G(y) − G(x) for all x, y ∈ R with x ≤ y. Clearly, this crucial result implies that µ_G((x, y)) = G(y−) − G(x), and hence µ_G({y}) = ∆G(y) for all y ∈ R. Moreover, µ_G(R) = 0 if and only if G is a constant function on R.
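The identity µ_F((x, y]) = F(y) − F(x), and the fact that atoms of µ_F are exactly the jumps of F, can be checked by simulation, since µ_F coincides with the law P(X ∈ ·) of a random variable X with distribution function F. The following Python sketch (a toy illustration; the mixed law, sample size and tolerances are arbitrary choices) does this for the mixture of an atom of mass 0.5 at 0 and an Exp(1) tail:

```python
import numpy as np

rng = np.random.default_rng(2)

# mixed law: atom of mass 0.5 at x = 0, otherwise Exp(1); its cdf:
def F(x):
    x = np.asarray(x, dtype=float)
    return np.where(x < 0, 0.0, 0.5 + 0.5 * (1.0 - np.exp(-x)))

n = 1_000_000
coin = rng.uniform(size=n) < 0.5
X = np.where(coin, 0.0, rng.exponential(size=n))

# mu_F((x, y]) = F(y) - F(x): compare with empirical frequencies
for x, y in [(-1.0, 0.0), (0.0, 1.0), (0.5, 2.0)]:
    emp = np.mean((X > x) & (X <= y))
    assert abs(emp - (F(y) - F(x))) < 0.005

# mu_F({0}) = Delta F(0) = 0.5: the jump of F survives as an atom of mu_F
assert abs(np.mean(X == 0.0) - 0.5) < 0.005
print("Lebesgue-Stieltjes interval masses match the cdf increments")
```

Note the half-open intervals (x, y]: right-continuity of F is exactly what makes this the correct interval type for the increment formula.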

Since in this case A^+_{λ,α} = (ξ, η) ∪· {η}, it consequently follows that µ_F(A^+_{λ,α}) = µ_F((ξ, η)) + ∆F(η). Next, we are going to reveal in detail that the function F is almost "left-invertible" at every x ∈ R which does not belong to the preimage A^+_{λ,α} of a "flat piece" of F. More precisely: In particular, if 0 < F < 1 µ_F-almost everywhere, then F∧(F_λ(x)) = x for µ_F-almost all x ∈ R. Proof. Let 0 < λ ≤ 1. Consider the Borel set N_λ := ⋃_{α ∈ J_{F∧}} A^+_{λ,α}, where J_{F∧} := {α ∈ (0, 1) : ∆F∧(α) > 0} denotes the set of all jumps of the function F∧.† Since the (left-continuous) function F∧ : (0, 1) −→ R is non-decreasing, J_{F∧} is at most countable. Hence, if J_{F∧} ≠ ∅, there exists a subset M of N and a sequence (α_n)_{n ∈ M}, consisting of pairwise distinct elements α_n ∈ J_{F∧}, such that J_{F∧} = {α_n : n ∈ M}. Thus, ⋃_{α ∈ J_{F∧}} A^+_{λ,α} = ⋃·_{n ∈ M} A^+_{λ,α_n}. Corollary 2.11 therefore implies that, in any case, µ_F(N_λ) = 0 and hence R \ N_λ ≠ ∅ (since F cannot be a constant function on the whole real line). Let x ∈ R \ N_λ. First, let J_{F∧} = ∅. Then ξ(x) = η(x). Lemma 2.7 therefore implies that A^+_{λ,α(x)} = ∅. In particular, F∧(F_λ(x)) = x, as above. So, let α(x) ∈ J_{F∧}. Then α(x) = α_m for some m ∈ M, and hence, since x ∉ A^+_{λ,α_m}, it follows once more that x ≤ ξ(x) = F∧(α(x)), and hence F∧(F_λ(x)) = x. Next, we consider the set A^∼_{λ,α}. Again, in line with [12], we put q := F(ξ−) and β := ∆F(ξ) ≥ 0. Moreover, by using a similar argument to the one which showed that the set A_{λ,α} is non-empty, we further obtain that A^∼_{λ,α} is non-empty as well. Observe that only the subset A^∼_{λ,α} of A_{λ,α} depends on the choice of λ ∈ [0, 1].
Apparently, to continue with the calculation of the respective probabilities, we have to consider the following two possible cases: β = 0 and β > 0. Moreover, by taking into account that F(ξ) = α in case (i) (since F is continuous at ξ if β = 0), we arrive at the following important Lemma 2.15. Let F be an arbitrary distribution function. Let α ∈ (0, 1). Put ξ := F∧(α), q := F(ξ−) and β := ∆F(ξ). Let X, V be two random variables, both defined on the same probability space (Ω, F, P), such that V ∼ U(0, 1) and V is independent of X. Then P(F_V(X) ≤ α) = P(X < ξ) + c_β P(X = ξ) + P(X > ξ and F(X) = α), where c_β := 0 if β = 0 and c_β := (α − q)/β if β > 0. To conclude, let us briefly point out that Lemma 2.15 could also be viewed as a building block of a probabilistic limit theorem (whose detailed discussion would, however, exceed the main goal of this paper).

The role of the distribution function of X
From now on, F := F_X = P(X ≤ ·) is given as the distribution function of a given random variable X.
Proposition 2.16. Let X, V be two random variables, both defined on the same probability space (Ω, F, P), such that V ∼ U(0, 1) and V is independent of X. Let F = F_X be the distribution function of X. Then F_V(X) ∼ U(0, 1) is a uniformly distributed random variable. Proof. Let 0 < α < 1. Lemma 2.15, applied to F = F_X, directly leads to P(F_V(X) ≤ α) − α = P(X > ξ and F(X) = α).

Remark 2.17.
It is well-known that in the case of a continuous distribution function, say G = G_X, G(X) is uniformly distributed over (0, 1) (cf. e. g. [5, Proposition 3.1]). However, continuity of G is even a necessary condition for G(X) being uniformly distributed over (0, 1). Otherwise there would exist some x ∈ R with ∆G(x) > 0, implying P(G(X) = G(x)) ≥ P(X = x) = ∆G(x) > 0, which would be a contradiction. Consequently, if F has non-zero jumps, F_V(X) still is uniformly distributed over (0, 1), as opposed to F(X).
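The dichotomy of Remark 2.17 is easy to visualise numerically. The following Python sketch (a toy illustration with a Bernoulli(1/2) random variable; the sample size is an arbitrary choice) shows that F(X) concentrates on two values only, while the transform F_V(X) passes the standard uniformity checks:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

# X ~ Bernoulli(1/2); its cdf F jumps by 0.5 at x = 0 and at x = 1
X = (rng.uniform(size=n) < 0.5).astype(float)
V = rng.uniform(size=n)                     # V ~ U(0,1), independent of X

F_of_X      = np.where(X == 0.0, 0.5, 1.0)  # F(X) takes only the values 0.5 and 1.0
F_left_of_X = np.where(X == 0.0, 0.0, 0.5)  # left-hand limits F(X-)
U           = F_left_of_X + V * (F_of_X - F_left_of_X)  # distributional transform

# F(X) hits only two values -- far from uniform ...
print(np.unique(F_of_X))
# ... whereas F_V(X) behaves like a U(0,1) sample
print(abs(U.mean() - 0.5) < 0.01, abs(U.var() - 1/12) < 0.01)
```

The randomisation by V spreads each atom of F(X) uniformly over the corresponding jump interval, which is exactly the mechanism behind Proposition 2.16.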
In order to complete the proof of the statement of Proposition 2.1 in [12], let us recall that the assumed independence of the random variables X and V implies that the bivariate distribution function F_{(V,X)} of the random vector (V, X) coincides with the product of the distribution functions F_V and F_X. Moreover, since V ∼ U(0, 1), it follows that on B((0, 1]) the measure µ_{F_V} = P(V ∈ ·) coincides with the Lebesgue measure m. Hence, if Φ : R² −→ R denotes an arbitrary non-negative (or bounded) Borel function on (R², B(R²), µ_{F_V} ⊗ µ_{F_X}), an immediate application of the Fubini-Tonelli Theorem leads to Theorem 2.18. Let X, V be two random variables, defined on the same probability space (Ω, F, P), such that V ∼ U(0, 1) and V is independent of X. Let F = F_X be the distribution function of the random variable X. Then X = F∧(F_V(X)) P-almost surely. If in addition P(0 < F(X) < 1) = 1 (for example, if F is continuous), then X = F∧(F(X)) P-almost surely. On the other hand, equality (2.9), together with F_V(X) ∼ U(0, 1), implies that ∫ E[1_{B_λ}(X)] m(dλ) = 0, so that for m-almost all λ ∈ (0, 1] we have µ_F(B_λ) = 0. Similarly, we obtain µ_F({F_λ = 1}) = 0 for m-almost all λ ∈ (0, 1]. Hence, there exists an m-null set L ∈ B((0, 1]) such that 0 < F_λ < 1 µ_F-almost everywhere for all λ ∈ (0, 1] \ L =: A. Thus, given the construction in the proof of Theorem 2.12, it follows that for all λ ∈ A there exists a µ_F-Borel null set N_λ such that for any x ∈ R \ N_λ the value F∧(F_λ(x)) is well-defined and satisfies F∧(F_λ(x)) = x. Hence, N := {V ∉ A} ∪· {X ∈ N_V and V ∈ A} is a P-null set. If in addition P(0 < F(X) < 1) = 1, F∧(F(X)) is well-defined P-a. s. Consequently, since also F∧ is non-decreasing, there exists a P-null set Ñ, satisfying Ω \ Ñ ⊆ Ω \ N ⊆ {0 < V ≤ 1}, such that X(ω) = F∧(F_V(X)(ω)) for all ω ∈ Ω \ Ñ. Remark 2.19. One might easily be led to assume that already a direct application of Proposition 2.5 implies the first statement of Theorem 2.18.
However, in the first instance Proposition 2.5 only implies that the equality X = F∧(F_V(X)) = F∧(F(X−) + V ∆F(X)) at least holds on the set D := {ω ∈ Ω : ∆F(X(ω)) > 0 and 0 < V(ω) < 1}. For the convenience of the reader, we conclude our paper with a full proof of the general version of Sklar's Theorem, built on Theorem 2.18 (cf. also the proof of Theorem 1.2 in [10], respectively the short proof of Lemma 3.2 in [11]), complemented with another interesting and seemingly novel observation (Remark 2.21), induced by Lemma 2.7.