Abstract

The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a sample-path large deviation principle (LDP) for the portfolio's loss process, which enables the computation of the logarithmic decay rate of the probabilities of interest. In addition, we derive exact asymptotic results for a number of specific rare-event probabilities, such as the probability of the loss process exceeding some given function.

1. Introduction

For financial institutions, such as banks and insurance companies, it is of crucial importance to accurately assess the risk of their portfolios. These portfolios typically consist of a large number of obligors, such as mortgages, loans, or insurance policies, and therefore it is computationally infeasible to treat each individual object in the portfolio separately. As a result, attention has shifted to measures that characterize the risk of the portfolio as a whole; see, for example, [1] for general principles concerning the management of credit risk. The best-known metric is the so-called value at risk, see [2], which measures the minimum amount of money that can be lost with a given level of certainty over some given period. Several other measures have been proposed, such as economic capital, the risk-adjusted return on capital (RAROC), or expected shortfall, which is a coherent risk measure [3]. Each of these measures is applicable to market risk as well as credit risk. Measures such as loss given default (LGD) and exposure at default (EAD), on the other hand, apply purely to credit risk. These and other measures are discussed in detail in, for example, [4].

The currently existing methods mainly focus on the distribution of the portfolio loss up to a given point in time (e.g., one year into the future). It can be argued, however, that in many situations it makes more sense to use probabilities that involve the (cumulative) loss process. Highly relevant, for instance, is the event that this process ever exceeds a given function within a certain time window (e.g., between now and one year ahead); this is the event we refer to as (1.1). It is clear that measures of the latter type are intrinsically harder to analyze, as it no longer suffices to know the marginal distribution of the loss process at a given point in time: the event (1.1) corresponds to a union of events, one for each time epoch in the window, and its probability depends on the law of the loss as a process.

In line with the remarks we made above, earlier papers on applications of large-deviation theory to credit risk mainly address the (asymptotics of the) distribution of the loss process at a single point in time; see, for example, [5, 6]. The former paper in addition considers the probability that the increments of the loss process exceed a certain level. Other approaches to quantifying the tail distribution of the losses have been taken by [7], who use extreme-value theory (see [8] for background), and by [9, 10], where the authors consider saddle point approximations to the tails of the loss distribution. Numerical and simulation techniques for credit risk can be found in, for example, [11]. The first contribution of our work concerns a so-called sample-path large deviation principle (LDP) for the average cumulative losses of large portfolios. Loosely speaking, such an LDP means that, for the loss process of a portfolio with a growing number of obligors, we can compute the logarithmic asymptotics of the probability that the average (normalized) loss process lies in a given set of trajectories; we could, for instance, pick a set that corresponds to the event (1.1). Most of the sample-path LDPs that have been developed so far involve stochastic processes with independent or nearly independent increments; see, for instance, the results by Mogul'skiĭ for random walks [12], de Acosta for Lévy processes [13], and Chang [14] for weakly correlated processes. Results for processes with a stronger correlation structure are restricted to special classes of processes, such as Gaussian processes; see, for example, [15]. Our loss process is not covered by these results, and therefore new theory had to be developed. The proof of our LDP relies on “classical” large deviation results (such as Cramér's theorem, Sanov's theorem, and Mogul'skiĭ's theorem); in addition, it relies on the concept of epi-convergence [16].

Our second main result focuses specifically on the event (1.1) of ever exceeding a given barrier function before some time horizon. Whereas we have so far considered the inherently imprecise logarithmic asymptotics of the type displayed in (1.2), we can now compute so-called exact asymptotics: we identify an explicit function to which the probability of interest is asymptotically equivalent, in the sense that their ratio tends to 1 as the portfolio size grows. As is known from the literature, it is in general substantially harder to find exact asymptotics than logarithmic asymptotics. The proof of our result uses the fact that, after discretizing time, the contribution of just a single time epoch dominates the probability of interest; this epoch can be interpreted as the most likely epoch of exceeding the barrier.

Turning back to the setting of credit risk, both of the results we present are derived in a setup where all obligors in the portfolio are i.i.d., in the sense that they behave independently and are statistically identical. A third contribution of our work concerns a discussion on how to extend our results to cases where the obligors are dependent (meaning that, in the terminology of [5], they react to the same “macroenvironmental” variable, conditional upon which they are independent again). We also treat the case of obligor heterogeneity: we show how to extend the results to the situation of multiple classes of obligors.

The paper is structured as follows. In Section 2 we introduce the loss process and describe the scaling under which we work. We also recapitulate a couple of relevant large-deviation results. Our first main result, the sample-path LDP for the cumulative loss process, is stated and proved in Section 3. Special attention is paid to easily checkable sufficient conditions under which this result holds. As argued above, the LDP is a generally applicable result, as it yields an expression for the decay rate of any probability that depends on the entire sample path. Then, in Section 4, we derive the exact asymptotic behavior of the probability that, at some point in time, the loss exceeds a certain threshold, that is, the asymptotics of the probability defined in (1.3). After this we derive a similar result for the increments of the loss process. Finally, in Section 5, we discuss a number of possible extensions of the results we have presented. Special attention is given to allowing dependence between obligors, and to different classes of obligors, each having its own specific distributional properties. In the appendix we have collected a number of results from the literature in order to keep the exposition of the paper self-contained.

2. Notation and Definitions

The portfolios of banks and insurance companies are typically very large; they may consist of several thousands of assets. It is therefore computationally infeasible to estimate the risk of each element, or obligor, in a portfolio separately. This explains why one attempts to assess the aggregate losses resulting from defaults (e.g., bankruptcies, failures to repay loans, or insurance claims) for the portfolio as a whole. The risk in the portfolio is then measured through this (aggregate) loss process. In the following sections we introduce the loss process and the portfolio constituents more formally.

2.1. Loss Process

Let be the probability space on which all random variables below are defined. We assume that the portfolio consists of obligors, and we denote the default time of obligor by . Further, we write for the loss incurred on a default of obligor . We then define the cumulative loss process as where is the default indicator of obligor . We assume that the loss amounts are i.i.d., and that the default times are i.i.d. as well. In addition, we assume that the loss amounts and the default times are mutually independent. In the remainder of this paper, and denote generic random variables with the same distribution as the and , respectively.

Throughout this paper we assume that defaults can only occur on a discrete time grid; in Section 5 we discuss how to deal with default epochs taking continuous values. In some cases we explicitly consider a finite time grid. The extension of the results we derive to a more general grid is completely trivial. The distribution of the default times over the points of this grid is denoted as in (2.3).
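
To fix ideas, the following is a minimal simulation sketch of the loss process just defined, under purely illustrative assumptions: default epochs drawn i.i.d. from a hypothetical distribution on the grid (with an explicit “no default before the horizon” category, a convention introduced here for convenience), and i.i.d. exponentially distributed loss amounts. None of these specific choices is prescribed by the model.

```python
import numpy as np

def simulate_loss_path(n, N, rng, default_probs, loss_sampler):
    """Simulate one path of the cumulative loss process L_n(t), t = 1, ..., N.

    default_probs : length N+1; entry j >= 1 is the probability of a default at epoch j,
                    entry 0 the probability of no default before the horizon
                    (an illustrative convention, not prescribed by the paper).
    loss_sampler  : callable returning i.i.d. loss amounts.
    """
    tau = rng.choice(np.arange(N + 1), size=n, p=default_probs)  # default epochs
    U = loss_sampler(n)                                          # i.i.d. losses at default
    L = np.zeros(N + 1)
    for t in range(1, N + 1):
        L[t] = np.sum(U[(tau > 0) & (tau <= t)])                 # sum of losses of obligors defaulted by t
    return L[1:]

rng = np.random.default_rng(1)
N = 4                                          # illustrative time grid {1, ..., 4}
q = np.array([0.7, 0.1, 0.1, 0.05, 0.05])      # hypothetical default-time distribution
path = simulate_loss_path(10_000, N, rng, q, lambda m: rng.exponential(1.0, m))
print(path / 10_000)                           # one realization of the normalized loss process
```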

Given the distribution of the loss amounts and the default times, our goal is to investigate the loss process. Many of the techniques that have been developed so far first fix a time horizon (typically one year) and then study stochastic properties of the cumulative loss at that time. Measures such as value at risk and economic capital are examples of these “one-dimensional” characteristics. Many interesting measures, however, involve properties of the entire path of the loss process rather than those of just one time epoch, examples being the probability that the loss exceeds some barrier function at some time before the horizon, or the probability that (during a certain period) the loss stays above a certain level. The event corresponding to the former probability might require the bank to attract more capital, or worse, it might lead to the bankruptcy of the bank. The event corresponding to the latter probability might also lead to bankruptcy, as a long period of stress may have substantial negative implications. Having a handle on these probabilities is therefore a useful instrument when assessing the risks involved in the bank's portfolios.

As mentioned above, the number of obligors in a portfolio is typically very large, thus prohibiting analyses based on the specific properties of individual obligors. Instead, it is more natural to study the asymptotic behavior of the loss process as the portfolio size tends to infinity. One could rely on a central-limit-theorem-based approach, but in this paper we focus on rare events by using the theory of large deviations.

In the following subsection we provide some background on large-deviation theory, and we define a number of quantities that are used in the remainder of this paper.

2.2. Large Deviation Principle

In this section we give a short introduction to the theory of large deviations, where, in an abstract setting, the limiting behavior of a family of probability measures on the Borel sets of a complete separable metric space (a Polish space) is studied in an appropriate asymptotic regime. This behavior is referred to as the large deviation principle (LDP), and it is characterized in terms of a rate function. The LDP provides lower and upper exponential bounds for the values that the measures assign to sets in the underlying topological space. Below we state the definition of a rate function, taken from [17].

Definition 2.1. A rate function is a lower semicontinuous mapping into the nonnegative extended reals, that is, a mapping all of whose level sets are closed subsets of the underlying space. A good rate function is a rate function for which all the level sets are compact.

With the definition of a rate function in mind, we state the large deviation principle for a sequence of measures.

Definition 2.2. We say that a family of measures satisfies the large deviation principle with a rate function if (i) (upper bound) for any closed set, the lim sup of the normalized logarithm of the measure of that set is at most minus the infimum of the rate function over the set, and (ii) (lower bound) for any open set, the lim inf of the normalized logarithm of the measure of that set is at least minus the infimum of the rate function over the set. We say that a family of random variables satisfies an LDP with a given rate function if and only if their laws satisfy an LDP with that rate function.

The so-called Fenchel-Legendre transform plays an important role in expressions for the rate function. For an arbitrary random variable $X$, let the logarithmic moment generating function, sometimes referred to as the cumulant generating function, be given by $\Lambda(\theta) = \log \mathbb{E}\,e^{\theta X}$ for $\theta \in \mathbb{R}$. The Fenchel-Legendre transform $\Lambda^*$ of $\Lambda$ is then defined by $\Lambda^*(x) = \sup_{\theta \in \mathbb{R}} (\theta x - \Lambda(\theta))$. We sometimes say that $\Lambda^*$ is the Fenchel-Legendre transform of $X$.
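
As a concrete illustration of these two definitions, the following sketch evaluates the Fenchel-Legendre transform numerically for a Bernoulli random variable (an illustrative choice, e.g., a default indicator with a hypothetical success probability) and compares the result with the known closed form; the transform of any other distribution can be computed in the same way.

```python
import numpy as np
from scipy.optimize import minimize_scalar

p = 0.1                                        # hypothetical Bernoulli parameter

def Lambda(theta):
    """Logarithmic moment generating function of a Bernoulli(p) random variable."""
    return np.log(1.0 - p + p * np.exp(theta))

def Lambda_star(x):
    """Fenchel-Legendre transform: sup over theta of theta * x - Lambda(theta)."""
    res = minimize_scalar(lambda th: -(th * x - Lambda(th)))
    return -res.fun

x = 0.3
print(Lambda_star(x))                                             # numerical value
print(x * np.log(x / p) + (1 - x) * np.log((1 - x) / (1 - p)))    # known closed form, for comparison
```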

The LDP from Definition 2.2 provides upper and lower bounds for the log-asymptotic behavior of the measures involved. In the case of the loss process (2.1), fixed at some time, we can easily establish an LDP by an application of Cramér's theorem (Theorem A.1). This theorem yields that the rate function is given by the Fenchel-Legendre transform of the loss incurred by a single obligor up to that time.

The results we present in this paper involve either the transform associated with the loss amounts only (Section 3), or the transform associated with the losses incurred up to a given time (Section 4). In the following section we derive an LDP for the whole path of the loss process, which can be considered an extension of Cramér's theorem.

3. A Sample-Path Large Deviation Result

In the previous section we have introduced the large deviation principle. In this section we derive a sample-path LDP for the cumulative loss process (2.1). We consider the exponential decay of the probability that the path of the loss process is in some set , as the size of the portfolio tends to infinity.

3.1. Assumptions

In order to state a sample-path LDP, we need to define the topology that we work with. To this end we define the space of all nonnegative and nondecreasing functions on the time grid; this set is identified with a subset of a finite-dimensional Euclidean space. The topology on this space is the one induced by the supremum norm. As we work in a finite-dimensional space, the choice of the norm is not important: any other norm would result in the same topology. We use the supremum norm as this is convenient in some of the proofs in this section.

We identify the space of all probability measures on the time grid with the simplex of probability vectors. For a given probability vector we denote the corresponding cumulative distribution function, obtained by partial summation; it starts at 0 and ends at 1.

Furthermore, we consider the loss amounts as introduced in Section 2.1, a default-time distribution with a given cdf, and a sequence of distributions, each with its own cdf, converging to the former in the sense of pointwise convergence at every grid point. We define two families of measures as in (3.5) and (3.6). Below we state an assumption under which the main result in this section holds. This assumption refers to the definition of exponential equivalence, which can be found in Definition A.2.

Assumption 1. Let , be as above. We assume that and moreover that the measures and as defined in (3.5) and (3.6), respectively, are exponentially equivalent.

From Assumption 1 we learn that the differences between the two measures go to zero at a “superexponential” rate. In the next section, in Lemma 3.3, we provide a sufficient condition, easy to check, under which this assumption holds.

3.2. Main Result

The assumptions and definitions in the previous sections allow us to state the main result of this section. We show that the average loss process satisfies a large deviation principle as in Definition 2.2. It is noted that various expressions for the associated rate function can be found. Directly from the multivariate version of Cramér's theorem [17, Section 2.2.2] it is seen that, under appropriate conditions, a large deviations principle applies with the rate function given in (3.7), expressed in terms of a generic per-obligor loss vector. In this paper we choose to work with another rate function, which has the important advantage that it gives us considerably more precise insight into the system conditional on the rare event of interest occurring. We return to this issue in greater detail in Remark 3.6.

The large deviations principle allows us to approximate a large variety of probabilities related to the average loss process, such as the probability that the loss process stays above a certain time-dependent level or the probability that the loss process exceeds a certain level before some given point in time.

Theorem 3.1. With the set defined in (3.3) and under Assumption 1, the average loss process satisfies an LDP with the rate function given in (3.8), which involves both the distribution of the default times and the Fenchel-Legendre transform of the loss amounts.

Observing the rate function for this sample-path LDP, we see that the effects of the default times and the loss amounts are nicely decoupled into the two terms of the rate function, one involving the distribution of the default epoch (the “Sanov term”, cf. [17, Theorem 6.2.10]), the other one involving the incurred loss size (the “Cramér term”, cf. [17, Theorem 2.2.3]). Observe that we recover Cramér's theorem by considering a time grid consisting of a single point, which means that Theorem 3.1 extends Cramér's result. We also remark that, informally speaking, the optimizing argument in (3.8) can be interpreted as the “most likely” distribution of the loss epoch, given that the path of the normalized loss process is close to the path under consideration.

As a sanity check we calculate the value of the rate function for the “average path” of the normalized loss process, given by the expected loss of a single obligor multiplied by the cumulative distribution of the default times as given in (2.3); this path should give a rate function equal to 0. To see this, we first remark that the rate function is clearly nonnegative everywhere, since both the Sanov term and the Cramér term are nonnegative. A short chain of inequalities, using that the Fenchel-Legendre transform vanishes at the mean (cf. [17, Lemma 2.2.5]), then shows that the rate function of the average path is indeed 0. Hence, if the “average path” lies in the set of interest, then the corresponding decay rate is 0, meaning that the probability of interest decays subexponentially.
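
To make the decoupling concrete, the following sketch evaluates the rate function numerically, assuming it takes the form described above: an infimum, over default-time distributions, of a relative-entropy (“Sanov”) term plus a weighted (“Cramér”) term applied to the increments of the path. The Poisson loss amounts, the default-time distribution, and the test paths are all hypothetical choices made only for illustration.

```python
import numpy as np
from scipy.optimize import minimize

lam = 2.0                                       # hypothetical Poisson parameter for the losses
q = np.array([0.4, 0.3, 0.2, 0.1])              # hypothetical default-time distribution on {1,...,4}

def cramer_term(x):
    """Fenchel-Legendre transform of a Poisson(lam) loss amount (closed form, x >= 0)."""
    return lam if x == 0 else x * np.log(x / lam) - x + lam

def rate(phi):
    """Sketch of the rate function of Theorem 3.1: infimum over default-time
    distributions p of the Sanov term plus the weighted Cramer term."""
    dphi = np.diff(np.concatenate(([0.0], phi)))        # increments of the path

    def objective(z):                                   # softmax parametrization keeps p in the simplex
        p = np.exp(z) / np.sum(np.exp(z))
        sanov = np.sum(p * np.log(p / q))
        cramer = sum(pt * cramer_term(d / pt) for pt, d in zip(p, dphi))
        return sanov + cramer

    return minimize(objective, np.zeros(len(q)), method="Nelder-Mead").fun

print(rate(lam * np.cumsum(q)))                 # the "average path"
print(rate(np.array([1.5, 2.0, 2.5, 3.0])))     # a path above the average path
```

The first call reproduces the sanity check above, returning a decay rate numerically close to 0; the second returns a strictly positive rate for a path lying above the average path.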

In the proof of Theorem 3.1 we use the following lemma, which is related to the concept of epi-convergence, extensively discussed in [16]. After this proof, in which we use a “bare hands” approach, we discuss alternative, more sophisticated ways to establish Theorem 3.1.

Lemma 3.2. Let the index set be compact, and assume that condition (3.10) holds for every point of this set and for every sequence in the set converging to that point. Then the supremum over the set and the lim sup may be interchanged.

Proof. Let . Consider a subsequence . Let and choose such that for all . By the compactness of , there exists a limit point such that along a subsequence . By the hypothesis (3.10) we then have Let to obtain the result.

Proof of Theorem 3.1. We start by establishing an identity from which we show both bounds. We need to calculate the probability that the normalized loss process lies in a given set of paths. For each point on the time grid we record, by means of a “default counter”, the number of defaults at that time. These counters allow us to rewrite the probability as in (3.15), where the loss amounts have been ordered such that the first group corresponds to the losses at time 1, and so forth.
Upper Bound
Starting from Equality (3.15), let us first establish the upper bound of the LDP. To this end, let be a closed set and consider the decay rate An application of Lemma A.3 together with (3.15) implies that (3.16) equals Next, we replace the dependence on in the maximization by maximizing over the set as in (3.3). In addition, we replace the in (3.17) by where the has been defined in (3.4). As a result, (3.16) reads Note that (3.16) equals (3.19), since for each and vector , with , there is a with . On the other hand, we only cover outcomes of this form by rounding off the .
We can bound the first term in this expression from above using Lemma A.5, which implies that the decay rate (3.16) is majorized by Now note that calculating the lim sup in the previous expression is not straightforward due to the supremum over . The idea is therefore to interchange the supremum and the lim sup, by using Lemma 3.2. To apply this lemma we first introduce and note that is a compact subset of . We have to show that for any sequence Condition (3.10) is satisfied, that is, such that the conditions of Lemma 3.2 are satisfied. We observe, with as in (3.18) and as in (3.4) with replaced by , that Since and since differs at most by from , it immediately follows that . For an arbitrary continuous function we thus have . This implies that Inequality (3.22) is established once we have shown that By Assumption 1, we can exploit the exponential equivalence together with Theorem A.7, to see that (3.25) holds as soon as we have that But this inequality is a direct consequence of Lemma A.6, and we conclude that (3.25) holds. Combining (3.24) with (3.25) yields so that indeed the conditions of Lemma 3.2 are satisfied, and therefore This establishes the upper bound of the LDP.
Lower Bound
To complete the proof, we need to establish the corresponding lower bound. Let be an open set and consider We apply Equality (3.15) to this lim inf, with replaced by , and we observe that this sum is larger than the largest term in the sum, which shows that (where we directly switch to the enlarged space ) the decay rate (3.29) majorizes Observe that for any sequence of functions it holds that for all , so that we obtain the evident inequality This observation yields that the decay rate of interest (3.29) is not smaller than where we have used that . We apply Lemma A.5 to the first lim inf in (3.32), leading to since as . The second lim inf in (3.32) can be bounded from below by an application of Lemma A.6. Since is an open set, this lemma yields Upon combining (3.33) and (3.34), we see that we have established the lower bound: This completes the proof of the theorem.

In order to apply Theorem 3.1, one needs to check that Assumption 1 holds. In general, this could be a quite cumbersome exercise. In Lemma 3.3 below, we provide a sufficient, easy-to-check condition under which this assumption holds.

Lemma 3.3. Assume that the logarithmic moment generating function of the loss amounts is finite everywhere. Then Assumption 1 holds.

Remark 3.4. The assumption we make in Lemma 3.3, that is, that the logarithmic moment generating function is finite everywhere, is a common assumption in large deviations theory. We remark that, for instance, Mogul'skiĭ's theorem [17, Theorem 5.1.2] also relies on this assumption; this theorem is a sample-path LDP on a bounded time interval. In Mogul'skiĭ's result the increments are assumed to be i.i.d.; in our model the increments of the loss process are not i.i.d., so that our sample-path result clearly does not fit into the setup of Mogul'skiĭ's theorem.

Remark 3.5. In Lemma 3.3 it was assumed that the logarithmic moment generating function of the loss amounts is finite everywhere, but an equivalent condition is (3.37), which requires that the Fenchel-Legendre transform of the loss amounts grows superlinearly. In other words, this alternative condition can be used instead of the condition stated in Lemma 3.3. To see that both requirements are equivalent, make the following observations. Lemma A.4 states that (3.37) is implied by the assumption in Lemma 3.3. In order to prove the converse, assume that (3.37) holds and that the logarithmic moment generating function is infinite somewhere; without loss of generality we can assume that it is finite up to some point and infinite beyond it. The Fenchel-Legendre transform is then asymptotically at most linear in its argument, which contradicts the assumption that the ratio of the transform to its argument tends to infinity, thus establishing the equivalence.

Proof of Lemma 3.3. Let for some sequence of and . We introduce two families of random vectors and , which have laws and , respectively, as in (3.5)-(3.6). Since we know that for any there exists an such that for all we have that , and thus .
We have to show that for any , For , consider the absolute difference between and , that is, Next we have that for any it holds that , which yields for all the upper bound since the rounded numbers differ at most by 1 from their real counterparts. This means that the difference of the two sums in (3.42) can be bounded by at most elements of the , which are for convenience denoted by . Recalling that the are nonnegative, we obtain Next we bound the probability that the difference exceeds , by using the above inequality: where the last inequality follows from the Chernoff bound [17, Eqn. (2.2.12)] for arbitrary . Taking the log of this probability, dividing by , and taking the lim sup on both sides results in By the assumption, for all . Thus, yields As was arbitrary, the exponential equivalence follows by letting .

Remark 3.6. Large deviations analysis provides us with insight into the behavior of the system conditional on the rare event under consideration happening. In this remark we compare the insight we gain from the rate functions (3.7) and (3.8). We consider the decay rate of the probability of the rare event that the average loss process is in the set , and do so by minimizing the rate function over (where denotes the optimizing argument).
Let, for ease, the random vector of per-obligor losses have a density. Then well-known large deviations reasoning yields that, conditional on the rare event, this vector behaves as if sampled from an exponentially twisted distribution, where the twist corresponds to the optimizing argument in (3.7).
Importantly, the rate function we identified in (3.8) gives more detailed information on the system conditional on being in the rare set. The default times of the individual obligors are to be sampled from the optimizing distribution in (3.8), whereas the claim size of an obligor defaulting at a given time has an exponentially twisted density as well. The rate functions (3.7) and (3.8) are of comparable complexity, as both correspond to a finite-dimensional optimization (where (3.8) also involves the evaluation of the Fenchel-Legendre transform, which is a one-dimensional maximization of low computational complexity).
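
The exponential twisting described in this remark also underlies standard rare-event simulation. The sketch below illustrates the idea in the simplest one-dimensional setting: estimating the probability that a sample mean of i.i.d. Poisson variables exceeds a level, by sampling from the exponentially twisted (again Poisson) law and reweighting with the likelihood ratio. All parameters are hypothetical, and this is only an illustration of the twisting mechanism, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, a, n, runs = 2.0, 3.0, 100, 20_000        # hypothetical parameters; a > lam makes the event rare

# Twisting a Poisson(lam) variable with parameter theta gives a Poisson(lam * e^theta) variable;
# theta_star solves Lambda'(theta) = a, so the twisted mean equals the target level a.
theta_star = np.log(a / lam)
Lambda = lambda th: lam * (np.exp(th) - 1.0)

# Sample under the twisted law and reweight with the likelihood ratio exp(-theta* S + n Lambda(theta*)).
S = rng.poisson(lam * np.exp(theta_star), size=(runs, n)).sum(axis=1)
weights = np.exp(-theta_star * S + n * Lambda(theta_star)) * (S >= n * a)
print("importance-sampling estimate of P(S_n/n >= a):", weights.mean())
print("logarithmic decay rate Lambda*(a)            :", theta_star * a - Lambda(theta_star))
```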

We conclude this section with some examples.

Example 3.7. Assume that the loss amounts have finite support, say contained in a bounded interval. Then the logarithmic moment generating function is clearly finite everywhere, so for any distribution with finite support the assumption of Lemma 3.3 is satisfied, and thus Theorem 3.1 holds. Here the i.i.d. default times can have an arbitrary discrete distribution on the time grid.
In practical applications, one (always) chooses a distribution with finite support for the loss amounts, since the exposure to every obligor is finite. Theorem 3.1 thus clearly holds for any (realistic) model of the loss given default.
An explicit expression for the rate function (3.8), or even the Fenchel-Legendre transform, is usually not available. On the other hand one can use numerical optimization techniques to calculate these quantities.

We next present an example to which Lemma 3.3 applies.

Example 3.8. Assume that the loss amount is measured in a certain unit and takes values on a lattice, and that it has a distribution of Poisson type with some parameter. It is then easy to check that the logarithmic moment generating function is finite everywhere. Further calculations yield an explicit expression for the Fenchel-Legendre transform; dividing this expression by its argument and letting the argument tend to infinity, we observe that the resulting ratio tends to infinity. As a consequence, Remark 3.5 entails that Theorem 3.1 applies. It can also be argued that Theorem 3.1 applies to any distribution with tail behavior comparable to that of a Poisson distribution.
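
For a quick numerical confirmation of the superlinear growth invoked here, one can evaluate the closed-form Fenchel-Legendre transform of a Poisson loss amount (with a hypothetical parameter) and observe that its ratio to the argument keeps growing:

```python
import numpy as np

lam = 2.0                                                    # hypothetical Poisson parameter
Lambda_star = lambda x: x * np.log(x / lam) - x + lam        # Fenchel-Legendre transform, x > 0
for x in [5.0, 50.0, 500.0, 5000.0]:
    print(x, Lambda_star(x) / x)                             # the ratio grows without bound (like log x)
```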

4. Exact Asymptotic Results

In the previous section we established a sample-path large deviation principle on a finite time grid; this LDP provides us with logarithmic asymptotics of the probability that the sample path of the normalized loss process is contained in a given set. The results presented in this section are different in several ways. In the first place, we derive exact asymptotics (rather than logarithmic asymptotics). In the second place, our time domain is not assumed to be finite; instead, we consider all positive integer time epochs. The price to be paid is that we restrict ourselves to special sets, namely, those corresponding to the loss process (or the increment of the loss process) exceeding a given function. We work under the setup that we introduced in Section 2.1.

4.1. Crossing a Barrier

In this section we consider the asymptotic behavior of the probability that the loss process at some point in time is above a time-dependent level. More precisely, we consider the set (4.1) for some barrier function satisfying (4.2), with the cumulative default-time distribution as in (2.3). If we considered a function that does not satisfy (4.2), we would not be in a large deviations setting, in the sense that the probability of the event converges to 1 by the law of large numbers. In order to obtain a more interesting result, we thus limit ourselves to levels that satisfy (4.2). For such levels we state the first main result of this section.

Theorem 4.1. Assume that and that where . Then for as in (4.1) and is such that . The constant follows from the Bahadur-Rao theorem (Theorem A.8), with .

Before proving the result, which relies on arguments similar to those in [18], we first discuss the meaning and implications of Theorem 4.1, and we reflect on the role played by the assumptions. We do so through a sequence of remarks.

Remark 4.2. Comparing Theorem 4.1 to the Bahadur-Rao theorem (Theorem A.8), we observe that the probability of a sample mean exceeding a rare value has the same type of decay as the probability of our interest (i.e., the probability that the normalized loss process ever exceeds some barrier function): a constant times the reciprocal of the square root of the portfolio size, multiplied by an exponentially decaying factor. This similarity can be explained as follows.
First, note that the probability of our interest is the probability of a union of events. Evidently, this probability is larger than the probability of any of the events in this union, and hence also larger than the largest among these; see (4.6). Theorem 4.1 indicates that the inequality in (4.6) is actually tight (under the conditions stated). Informally, this means that the contribution of the maximizing time epoch in the right-hand side of (4.6) dominates the contributions of the other time epochs as the portfolio size grows large. This essentially says that, given that the rare event under consideration occurs, with overwhelming probability it happens at that most likely epoch.

As is clear from the statement of Theorem 4.1, two assumptions are needed to prove the claim; we now briefly comment on the role played by these.

Remark 4.3. Assumption (4.3) is needed to make sure that there is not a time epoch , different from , having a contribution of the same order as . It can be verified from our proof that if the uniqueness assumption is not met, the probability under consideration remains asymptotically proportional to , but we lack a clean expression for the proportionality constant.
Assumption (4.4) has to be imposed to make sure that the contribution of the “upper tail”, that is, of the late time epochs, can be neglected; more formally, their aggregate contribution should be negligible relative to that of the dominant epoch. In order to achieve this, the probability that the normalized loss process exceeds the barrier at large times should be sufficiently small.

Remark 4.4. We now comment on what Assumption (4.4) means. We observe that Assumption (4.4) is fulfilled as soon as the barrier function grows sufficiently fast, which turns out to be the case under extremely mild conditions. Indeed, relying on Lemma A.4, we have in great generality that the Fenchel-Legendre transform of the loss amount grows superlinearly; then any barrier function that grows at least linearly in time satisfies Assumption (4.4). Alternatively, if the loss amount is exponentially distributed (a case that does not satisfy the conditions of Lemma A.4), the Fenchel-Legendre transform grows only linearly, and barrier functions that grow at a slower rate are in this setting clearly not allowed.
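
The following sketch illustrates the structure of Theorem 4.1 numerically, under purely hypothetical model choices (Poisson loss amounts, geometric-type default times, a linearly growing barrier): for each epoch it computes the Cramér rate of exceeding the barrier at that epoch and then locates the minimizing epoch, which plays the role of the most likely time of exceedance. The Bahadur-Rao prefactor is not computed here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

lam, pdef = 1.0, 0.1                               # hypothetical loss and default parameters
F = lambda t: 1.0 - (1.0 - pdef) ** t              # P(default epoch <= t)
M_U = lambda th: np.exp(lam * (np.exp(th) - 1.0))  # mgf of a Poisson(lam) loss amount

def rate_at(t, level):
    """Cramer rate for the normalized loss at time t exceeding `level`."""
    Lambda_t = lambda th: np.log(F(t) * M_U(th) + 1.0 - F(t))   # lmgf of one obligor's loss by time t
    res = minimize_scalar(lambda th: -(th * level - Lambda_t(th)),
                          bounds=(0.0, 6.0), method="bounded")
    return -res.fun

zeta = lambda t: 0.5 + 0.05 * t                    # hypothetical barrier, above the mean path
rates = {t: rate_at(t, zeta(t)) for t in range(1, 51)}
t_star = min(rates, key=rates.get)
print("most likely exceedance epoch:", t_star, " logarithmic decay rate:", rates[t_star])
```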

Proof of Theorem 4.1. We start by rewriting the probability of interest as For an arbitrary instant in we have We first focus on the second part in (4.13). We can bound this by where the second inequality is due to the Chernoff bound [17, Eqn. (2.2.12)]. The independence between the and , together with the assumption that the are i.i.d. and the are i.i.d. yields By (4.4) we have that for some (possibly ). Hence there exists an such that for all where (in case , any suffices) and defined in (4.3). Choosing , we obtain by using the first inequality in (4.17) for where the last inequality trivially follows by bounding the summation (from above) by an appropriate integral. Next we multiply and divide this by , and we apply the Bahadur-Rao theorem, which results in The second inequality in (4.17) yields , for some . Applying this inequality, we see that this bounds the second term in (4.13), in the sense that as , To complete the proof we need to bound the first term of (4.13), where we use that . For this we again use the Bahadur-Rao theorem. Next to this theorem we use the uniqueness of , which implies that for and there exists an , such that This observation yields, with such that , Combining the above findings, we observe Together with the trivial bound this yields Applying the Bahadur-Rao theorem to the right hand side of the previous display yields the desired result.

4.2. Large Increments of the Loss Process

In the previous section we identified the asymptotic behavior of the probability that at some point in time the normalized loss process exceeds a certain level. We can carry out a similar procedure to obtain insight into the large deviations of the increments of the loss process. Here we consider pairs of times for which the increment of the loss between the two times exceeds a threshold. More precisely, we consider the event (4.26). Being able to deal with events of this type, we can, for instance, analyze the likelihood of a large loss occurring during a short period; we remark that with the event (4.1) from the previous subsection one cannot distinguish the case where the loss is zero at all times before a given epoch and then jumps to the level at that epoch from the case where the loss is just below the level at all earlier times and then ends up at the level at that epoch. Clearly, events of the type (4.26) make it possible to distinguish between such paths.

In order to avoid trivial results, we impose a condition similar to (4.2), namely, for all . The law of large numbers entails that for functions that do not satisfy this condition, the probability under consideration does not correspond to a rare event.

A similar probability has been considered in [5], where the authors derive the logarithmic asymptotic behavior of the probability that the increment of the loss over a bounded interval exceeds a threshold depending on a single time parameter only. In contrast, our approach uses a more flexible threshold, which depends on both time points, and in addition we derive the exact asymptotic behavior of this probability.

Theorem 4.5. Assume that and that where . Then for as in (4.27) and is such that . The constant follows from the Bahadur-Rao theorem (Theorem A.8), with .

Remark 4.6. A first glance at Theorem 4.5 tells us that the obtained result is very similar to that of Theorem 4.1. The second condition, that is, Inequality (4.29), however, seems more restrictive than the corresponding condition, that is, Inequality (4.4), due to the additional infimum over the first time point. This assumption has to make sure that the “upper tail” is negligible for any choice of the first time point. In the previous subsection we have seen that, under mild restrictions, the upper tail can be safely ignored when the barrier function grows at least linearly in time. We can extend this claim to our new setting of large increments, as follows.
We first note a simple rewriting of the relevant quantity. Then consider thresholds that, in addition to condition (4.27), satisfy (4.32) for all pairs of time points. Under the conditions of Lemma A.4 we then obtain (4.33), since the second factor remains positive by (4.32) and the first factor tends to infinity by Lemma A.4. Having established (4.33) for all pairs of time points, it is clear that (4.29) is satisfied.
The sufficient condition (4.32) shows that the range of admissible barrier functions is quite substantial, and, importantly, imposing (4.29) is not as restrictive as it seems at first glance.
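
Analogously to the sketch following Remark 4.4, one can locate the most likely pair of time points for a large increment by scanning the pairwise Cramér rates; the model ingredients and the increment barrier below are again hypothetical and chosen so that the barrier stays above the mean increment for every pair.

```python
import numpy as np
from scipy.optimize import minimize_scalar

lam, pdef = 1.0, 0.1                                        # hypothetical parameters, as before
F = lambda t: 1.0 - (1.0 - pdef) ** t                       # P(default epoch <= t)
M_U = lambda th: np.exp(lam * (np.exp(th) - 1.0))           # mgf of a Poisson(lam) loss amount

def rate_increment(s, t, level):
    """Cramer rate for the normalized loss increment over (s, t] exceeding `level`."""
    q_st = F(t) - F(s)                                      # P(s < default epoch <= t)
    Lam = lambda th: np.log(q_st * M_U(th) + 1.0 - q_st)
    res = minimize_scalar(lambda th: -(th * level - Lam(th)),
                          bounds=(0.0, 6.0), method="bounded")
    return -res.fun

zeta = lambda s, t: 0.2 + 0.05 * (t - s)                    # hypothetical increment barrier
rates = {(s, t): rate_increment(s, t, zeta(s, t))
         for s in range(0, 50) for t in range(s + 1, 51)}
s_star, t_star = min(rates, key=rates.get)
print("most likely pair (s*, t*):", (s_star, t_star), " decay rate:", rates[(s_star, t_star)])
```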

Proof of Theorem 4.5. The proof of this theorem is very similar to that of Theorem 4.1. Therefore we only sketch the proof here.
As before, the probability of interest is split up into a “front part” and “tail part.” The tail part can be bounded using Assumption (4.29); this is done analogously to the way Assumption (4.4) was used in the proof of Theorem 4.1. The uniqueness assumption (4.28) then shows that the probability of interest is asymptotically equal to the probability that the increment between time and exceeds ; this is an application of the Bahadur-Rao theorem. Another application of the Bahadur-Rao theorem to the probability that the increment between time and exceeds yields the result.

5. Discussion and Concluding Remarks

In this paper, we have established a number of results with respect to the asymptotic behavior of the distribution of the loss process. In this section we discuss some of the assumptions in more detail and we consider extensions of the results that we have derived.

5.1. Extensions of the Sample-Path LDP

The first part of our work, Section 3, was devoted to establishing a sample-path large deviation principle on a finite time grid. Here we modeled the loss process as a sum of i.i.d. loss amounts multiplied by i.i.d. default indicators. From a practical point of view one can argue that the assumptions underlying our model are not always realistic. In particular, the random properties of the obligors cannot always be assumed independent. In addition, the assumption that all obligors have the same distributional properties will not necessarily hold in practice. Both shortcomings can be dealt with, however, by adapting the model slightly.

A common way to introduce dependence, taken from [5], is to suppose that there is a “macroenvironmental” variable to which all obligors react, but conditional on which the loss epochs and loss amounts are independent. First observe that our results are then valid for any specific realization of this variable. Denoting the exponential decay rate corresponding to a given realization accordingly, the unconditional decay rate is just the maximum over these realization-specific decay rates; this is trivial to prove if the macroenvironmental variable can attain values in a finite set only. A detailed treatment of this is beyond the scope of this paper.

The assumption that all obligors have the same distribution can be relaxed to the case where there is a finite number of different classes of obligors (e.g., determined by their default ratings). We further assume that each class makes up a fixed fraction of the entire portfolio. Then we can extend the LDP of Theorem 3.1 to a more general one by splitting the loss process into per-class loss processes, one for each class. Conditioning on the realizations of these processes, we can derive a rate function in which the optimization runs over the Cartesian product of per-class sets, each factor being as in (3.3). The optimization over this product set follows directly from conditioning on the realizations of the per-class loss processes. We leave out the formal derivation of this result; the multiclass case is notationally considerably more involved than the single-class case, but essentially all steps carry over.

In our sample-path LDP we assumed that defaults can only occur on a finite grid. While this assumption is justifiable from a practical point of view, an interesting mathematical question is whether it can be relaxed; in self-evident notation, one would expect a rate function of an analogous form. It can be checked, however, that the argumentation used in the proof of Theorem 3.1 does not carry over directly; in particular, the choice of a suitable topology plays an important role.

If defaults can occur on a whole continuous interval, we expect, for a nondecreasing and differentiable path, a rate function of the form (5.4), in which the optimization runs over the space of all default-time densities on that interval. One can easily guess the validity of (5.4) from (3.8) by using Riemann sums to approximate the integral. A formal proof, however, requires techniques that are essentially different from the ones used to establish Theorem 3.1, and we therefore leave this for future research.

5.2. Extensions of the Exact Asymptotics

In the second part of the paper, that is, Section 4, we have derived the exact asymptotic behavior for two special events. First we showed that, under certain conditions, the probability that the loss process exceeds a certain time-dependent level is asymptotically equal to the probability that the process exceeds this level at the “most likely” time . The exact asymptotics of this probability are obtained by applying the Bahadur-Rao theorem. A similar result has been obtained for an event related to the increment of the loss process. One could think of refining the logarithmic asymptotics, as developed in Section 3, to exact asymptotics. Note, however, that this is far from straightforward, as for general sets these asymptotics do not necessarily coincide with those of a univariate random variable, cf. [19].

Appendix

Background Results

In this section, we state a number of definitions and results, taken from [17], which are used in the proofs in this paper.

Theorem A.1 (Cramér). Let the summands be i.i.d. real-valued random variables with all exponential moments finite, and consider the laws of their sample averages. Then this sequence of laws satisfies an LDP with rate function given by the Fenchel-Legendre transform of a single summand.

Proof. See, for example, [17, Theorem 2.2.3].

Definition A.2. We say that two families of measures and on a complete separable metric space are exponentially equivalent if there exist two families of -valued random variables and with marginal distributions and , respectively, such that for all

Lemma A.3. For every triangular array , , ,

Proof. Elementary, but also a direct consequence of [17, Lemma 1.2.15].

Lemma A.4. Let for all , then

Proof. This result is a part of [17, Lemma 2.2.20].

Lemma A.5. Let be defined as . Then for any vector , such that , we have that where and as defined in (2.2).

Proof. See [17, Lemma 2.1.9].

Lemma A.6. Define for an i.i.d. sequence of -valued random variables . Let denote the law of in . For any discretization and any , let denote the vector . Then the sequence of laws satisfies the LDP in with the good rate function where is the Fenchel-Legendre transform of .

Proof. See [17, Lemma 5.1.8]. This lemma is one of the key steps in proving Mogul'skiĭ's theorem, which provides a sample-path LDP for random walks on a bounded interval.

Theorem A.7. If an LDP with a good rate function holds for the probability measures , which are exponentially equivalent to , then the same LDP holds for .

Proof. See [17, Theorem 4.2.13].

Theorem A.8 (Bahadur-Rao). Let the summands be a sequence of i.i.d. real-valued random variables. Then the probability that their sample average exceeds a given level admits exact asymptotics, with a constant prefactor that depends on the type of distribution of the summands, as specified by the following two cases. (i) The law of the summand is lattice, that is, after a suitable shift, the random variable is (a.s.) an integer multiple of some span d, and d is the largest number with this property. Under an additional condition, the constant takes the lattice form, involving the optimizing twisting parameter. (ii) If the law of the summand is nonlattice, the constant takes the nonlattice form, with the twisting parameter as in case (i).

Proof. We refer to [20] or [17, Theorem 3.7.4] for the proof of this result.
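
As a numerical sanity check of the Bahadur-Rao approximation, the sketch below uses i.i.d. Exp(1) summands (an illustrative nonlattice choice for which the exact tail is available in closed form via the Gamma distribution) and compares the exact exceedance probability with the approximation built from the exponential decay rate and the nonlattice prefactor.

```python
import numpy as np
from scipy.special import gammaincc

# For Exp(1) summands: Lambda(theta) = -log(1 - theta), theta_a = 1 - 1/a,
# Lambda*(a) = a - 1 - log(a), Lambda''(theta_a) = a^2.  (Illustrative choice of distribution.)
a, n = 2.0, 50
theta_a = 1.0 - 1.0 / a
decay = a - 1.0 - np.log(a)
prefactor = 1.0 / (theta_a * np.sqrt(2.0 * np.pi * a ** 2))   # nonlattice Bahadur-Rao constant

approx = prefactor / np.sqrt(n) * np.exp(-n * decay)
exact = gammaincc(n, n * a)              # P(Gamma(n, 1) >= n a), the exact tail for Exp(1) sums
print("Bahadur-Rao approximation:", approx)
print("exact probability        :", exact)
```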

Acknowledgments

V. Leijdekker would like to thank ABN AMRO Bank for providing financial support. Part of this work was carried out while M. Mandjes was at Stanford University, USA. The authors are indebted to E. J. Balder (Utrecht University, The Netherlands) for pointing out the relevance of epi-convergence to their research.