A proposed benchmark model using a modularised approach to calculate IFRS 9 expected credit loss

Abstract The objective of this paper is to develop a methodology to calculate expected credit loss (ECL) using a transparent-modularised approach utilising three components: probability of default (PD), loss given default (LGD) and exposure at default (EAD). The proposed methodology is described by first providing a methodology to calculate the marginal PD, then the methodology for calculating the marginal recovery rates and resulting LGD, and lastly a methodology to calculate the EAD. These three components are combined to calculate the ECL in an empirical style. Our proposed methodology can be used in two settings: in markets where sophisticated IFRS9 models are developed, as a benchmark against which newly developed IFRS9 models are compared; or, in markets where limited resources or technological sophistication exist, as a methodology to calculate ECL for IFRS9 purposes. This paper includes two such comparative studies to illustrate how our proposed methodology can be used as a benchmark for a newly developed IFRS9 model based on an emerging country's unsecured and secured retail banking portfolio. This paper is, in essence, a step-by-step implementation guide of the proposed IFRS 9 methodology to calculate ECL as well as the use of such a model as a benchmark.

Willem Daniel Schutte ABOUT THE AUTHOR This research was conducted by the Centre for Business Mathematics and Informatics (BMI) at the North-West University. BMI is a leading tertiary risk training and research group for the financial services industry, and is actively involved in applied research projects conducted at several major financial institutions in South Africa. WD Schutte completed his PhD in Mathematical Statistics in 2014. His research is focussed on applied statistics in quantitative risk management and model validation. Tanja Verster completed her PhD in Risk Analysis in 2007. Her research focus is on predictive modelling in the credit scoring environment. Helgard Raubenheimer completed his PhD in Risk Analysis in 2009. His research focus is on quantitative risk management, portfolio optimisation and fixed income modelling. Derek Doody is an employee at a retail bank in South Africa. Peet Coetzee completed his Masters degree in BMI and is an employee at a retail bank in South Africa.

PUBLIC INTEREST STATEMENT
A new accounting standard, IFRS 9, has come into effect, replacing the previous IAS 39 accounting standard. It responds to criticisms that the previous standard was too complex, inconsistent with the way entities manage their businesses and risks, and deferred the recognition of credit losses on loans and receivables until too late in the credit cycle. The new standard requires financial institutions to provide against loans on an expected credit loss (ECL) basis. In this paper, we develop a methodology to calculate ECL under the new IFRS9 standard. Our proposed methodology can be used as a benchmark tool to compare newly developed IFRS9 models. Alternatively, in markets where limited resources or technological sophistication exist, our methodology can be followed to calculate ECL for IFRS9 purposes.

Introduction
Following the global financial crisis, the International Accounting Standards Board (IASB) launched a project to substitute the International Accounting Standard (IAS) 39 with the International Financial Reporting Standard (IFRS) 9 that outlines the requirements for the recognition and measurement of financial instruments in the financial statements of a company (IFRS, 2014). The new standard requires financial institutions to provide against loans on an expected credit loss basis (Cohen & Edwards, 2017). It is expected that IFRS9 will have a material impact on the financial institutions' systems and processes (Beerbaum, 2015).
IFRS9 defines the 12-month expected credit loss (in monetary value) as the "portion of lifetime expected credit losses that represent the expected credit losses that result from default events on a financial instrument that are possible within the 12 months after the reporting date" (Basel Committee on Banking Supervision [BCBS], 2015). According to Beerbaum (2015), this is often misunderstood, as it is not the expected cash shortfalls over the 12-month period, but the entire loss on an asset, weighted by the probability that the loss will occur in the next 12 months. It is, therefore, important to note that this amount is not only the losses expected in the 12 months following the reporting month but includes the expected cash shortfalls over the life of the lending exposure or portfolio of exposures resulting from possible loss events that could materialise in the next 12 months. The cash shortfall can be seen as "the difference between all contractual cash flows that are due to an entity in accordance with the contract and the cash flows that the entity expects to receive". Thus, the approach one should follow to calculate the lifetime expected credit loss (ECL) should account for future expectations pertaining to cash flows and should preferably incorporate market-related information that may influence these future cash flows. Many definitions of default exist; e.g. the Basel definition of default is when either or both of the following two events take place: "A bank considers that the obligor is unlikely to pay its credit obligations to the banking group in full, without recourse by the bank to actions such as realising security (if held)." or "The obligor is past due more than 90 days on any material credit obligation to the banking group. Overdrafts will be considered as being past due once the customer has breached an advised limit or been advised of a limit smaller than current outstandings" (Basel Committee on Banking Supervision [BCBS], 2006).
IFRS9 uses a "three stage model" for expected credit losses based on changes in credit quality since initial recognition, see, e.g. Aptivaa (2016b). Stage 1 applies when credit risk has not increased significantly since initial recognition, and under IFRS9 a 12-month ECL needs to be recognised. Stage 2 applies when credit risk (probability of default) has increased significantly since initial recognition, and a lifetime ECL needs to be recognised under IFRS9. Finally, Stage 3 applies when a financial asset is credit impaired (defaulted), and lifetime ECL needs to be recognised. In conclusion, Stage 2 and Stage 3 attribute a similar lifetime ECL to financial instruments, yet the mechanism of recognition is defined differently. For this reason, the focus of this paper will predominantly be on Stage 1 and Stage 2.
These new IFRS9 models require large amounts of data, not to mention the validation and other model governance/maintenance requirements of the upstream probability of default (PD), loss given default (LGD) and exposure at default (EAD) models. IFRS9, being a principle-based guideline, does not prescribe specific methodologies for lifetime EL estimation, and there is no single methodology suitable for all portfolios. The choice of methodology should be an informed one based on the availability of data as well as the materiality of the portfolio.
The aim of this paper is therefore the development of a transparent, simple, modularised methodology to calculate ECL. This methodology could either be used as an initial benchmark for newly developed complex IFRS9 models or, in markets where limited resources or technological sophistication exists, this methodology could be used to calculate ECL for IFRS9 purposes. In this paper, our proposed methodology is compared to newly developed IFRS9 models of a South African bank, for an unsecured and secured retail portfolio (referred to as Case study 1 and Case study 2, respectively). These comparative studies aim to illustrate how our methodology can be used as a benchmark model. The remainder of this paper is organised as follows. Section 2 contains a literature review on the methods to model ECL and describes the advantages thereof. In Sections 3-5, we discuss our proposed methodology to calculate PD, LGD and EAD, respectively. The values calculated in these sections form the proposed IFRS9 benchmark model. In Section 6, the resulting benchmark ECL methodology is provided. In Section 7, comparative studies illustrate how the proposed methodology can be used to benchmark newly developed IFRS9 models (unsecured and secured portfolios). Conclusions are made in Section 8 with recommendations for future research. Appendix A contains the conceptual design of the PD methodology, as well as the assumptions and mathematical notation of the marginal PD methodology. Appendix B contains some SAS-code that is used to calculate the marginal PD. Please refer to the link to the supplementary material at the end of the paper.

Literature review: indirect methods to model ECL
Various modelling practices exist to model ECL across industries. These approaches include roll rate models, vintage loss models, provision matrices, EL models and the discounted cash-flow method. The highlights, strengths and weaknesses of the different modelling approaches that may be used to forecast credit losses are well documented (Global Public Policy Committee [GPPC], 2016; McPhail & McPhail, 2014). These approaches are divided into two sets: direct and indirect methods. It is often found that the direct methods are referred to as total loss models (i.e. discounted cash flow method), while the indirect methods are referred to as loss component approaches (McPhail & McPhail, 2014). Direct methods entail the modelling of the ECL directly using some technique that contains the ECL drivers as explanatory factors. In contrast, examples of indirect methods include simulation-based models and modularised models. Modularisation is a design approach that subdivides a system into smaller parts that can be independently created and then used in different systems.
The specific indirect method that we will focus on is where the PD, LGD and EAD are modularised. A simplified expression for modelling the expected credit loss (ECL_i) per account i is as follows:

ECL_i = PD_i × LGD_i × EAD_i, (1)

where PD_i, LGD_i and EAD_i are the PD, LGD and EAD of account i over the total time horizon, t ∈ [0, ..., T], where T is determined by the stage (12 months for Stage 1 and lifetime for Stage 2 and 3). Another way of estimating the ECL for account i could be to use the marginal PD, LGD and EAD for each disjoint interval of time. In this case, marginal PD refers to the probability of default for one period and not the total time horizon. This time horizon [0, ..., T] is divided into evenly spaced time intervals, for example, monthly. In each time interval, a fraction of the accounts that survived up to that time may default. The marginal PD is the probability that an account that has survived until the beginning of the time period t defaults during this particular time interval (Hallblad, 2015). The marginal PD can, therefore, be seen as a conditional PD when the formulation is discretised in this way.
If we use marginal PDs, Equation (1) changes as follows:

ECL_i = Σ_{t=1}^{T} PD_{i,t} × LGD_{i,t} × EAD_{i,t}, (2)

where PD_{i,t} is the marginal PD (i.e. the probability of default of account i at time t), LGD_{i,t} is the LGD for account i when an account defaults at time t, and EAD_{i,t} the EAD at time t for account i. As mentioned before, T is dependent on the staging. In a sense, the marginal PD (PD_{i,t}) is a conditional PD, with the condition being that account i has survived up until the beginning of time t. Equation (2) can be further enhanced by taking into account the time value of money:

ECL_i = Σ_{t=1}^{T} PD_{i,t} × LGD_{i,t} × EAD_{i,t} / (1 + r/12)^t, (3)

where r is the annual nominal interest rate for each account (we assume a constant interest rate over the total time horizon without loss of generality).
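The marginal-component calculation described above can be sketched in a few lines of code. The following is a minimal illustration (not the authors' SAS implementation) assuming monthly time intervals, per-interval marginal components supplied as lists, and discounting at the monthly rate r/12; the function name and arguments are hypothetical.

```python
def expected_credit_loss(marginal_pd, lgd, ead, annual_rate, horizon):
    """Sketch of Equations (2)-(3): sum the discounted expected cash
    shortfalls over monthly intervals t = 1..horizon.

    marginal_pd[t-1] : probability account defaults in interval t,
                       conditional on surviving to the start of t
    lgd[t-1], ead[t-1]: LGD and EAD if default occurs in interval t
    annual_rate       : annual nominal interest rate (assumed constant)
    """
    monthly_rate = annual_rate / 12
    ecl = 0.0
    for t in range(1, horizon + 1):
        shortfall = marginal_pd[t - 1] * lgd[t - 1] * ead[t - 1]
        ecl += shortfall / (1 + monthly_rate) ** t  # discount to reporting date
    return ecl
```

With a zero interest rate the discounting drops away and Equation (3) reduces to Equation (2); the horizon would be 12 for Stage 1 and the remaining lifetime for Stages 2 and 3.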
Irrespective of the choice of methodology, according to IFRS9, the estimation of expected credit losses should incorporate not only historical information but also relevant current and future credit information, including forward-looking macro-economic scenarios (Miu & Ozdemir, 2017). The ECL methodology that involves separate estimates of the PD, LGD and EAD is perhaps the best and one of the most popular choices (of an indirect method). The advantages of this approach can be summarised as follows:
• The account-level modelling provides a more granular risk profiling for each individual account/homogeneous pool of accounts;
• Each risk parameter is independently driven by a separate set of economic factors, allowing a more dynamic view of economic impacts;
• The modelling methodology is in line with statistical techniques prevailing in the consumer lending arena and intuitively adoptable by most model developers;
• The lifetime PD that includes forward-looking macro-economic scenarios can directly be used to assess significant increases in credit risk (Aptivaa, 2016b).
Point estimates (PD, LGD, EAD) from credit scoring models can be used to meaningfully segment accounts into distinct risk groups. On the other hand, Bellotti and Crook (2012) showed that, when they considered the distributions of EL given a variety of risk elements (PD, EAD and LGD), these models give broad distributions and the boundaries between the different risk groups become fuzzy. They also found that the inclusion of an EAD distribution had the greatest effect on the distribution of EL.
That being said, IFRS9 does require large amounts of data, not to mention the validation and other model governance/maintenance requirements of the upstream PD and LGD models. IFRS9, being a principle-based guideline, does not prescribe specific methodologies for lifetime EL estimation, and there is no single methodology applicable to all portfolios. The choice of methodology should be an informed one based on the availability of data, the institution's level of sophistication and the materiality of the portfolio. It is undisputed that banks across the globe exhibit different levels of technological sophistication and have access to various degrees of specialised resources. The fact that the Basel Committee allows for a standardised and an internal ratings-based approach (and an advanced internal ratings-based approach) when measuring credit risk attests to this fact (BCBS, 2006). Literature concerning the impact of different levels of sophistication amongst banks is also vast and well known, e.g. see Bülbül et al. (2019) and Wenzelburger and Gersbach (2010) to name but two. McPhail & McPhail (2014) also highlight the challenges for institutions with limited resources.
Based on the above review of some of the most important literature pertaining to IFRS9, we propose an indirect method in this paper due to its advantages when modelling ECL. The indirect method proposed follows a transparent, simple, modularised approach for estimating PD, LGD and EAD. The term structure of each component is identified separately, and integration takes place towards the end, prior to discounting the ECL amounts to the reporting date. The term structure describes the relationship between PD, LGD or EAD and month on book. Later in the paper we also refer to the contractual term of a loan, which refers to the length of the loan, i.e. the number of months originally available to pay back the loan. This proposed modularised approach can be summarised as the calculation of lifetime cash shortfalls based on all possible outcomes or default paths, weighted with historical probabilities.
Each component that is used in the calculation of ECL, namely PD, LGD and EAD, will be discussed in turn. In Section 3, the modularised PD is discussed, the modularised LGD follows in Section 4, followed by the calculation of the EAD in Section 5. In Section 6, the three components are integrated to calculate the ECL.

Modularised PD
The GPPC (2016) defines the probability of default as "an estimate of the likelihood of default over a given time horizon". Furthermore, it is stated that the PD should be a reflection of the bank's present view of the future and that PDs may be calculated for smaller sub-periods within the remaining lifetime of an account. As PD is a component of ECL, it should also reflect past events, current conditions as well as forecasts of future economic conditions. The reflection of past, current and future information relates to the concepts of point-in-time (PIT) and through-the-cycle (TTC). According to Novotny-Farkas (2016), a rating philosophy is used to determine the PD estimates, where the philosophy follows a PIT, TTC, or a hybrid approach. The Financial Conduct Authority (FCA, 2018) describes rating philosophies as a spectrum with two stylised extremes of PIT and TTC, with different hybrid combinations of PIT and TTC in between. Appendix A presents the concept behind the methodology to calculate the marginal PD (both PIT and TTC), followed by two important assumptions used in the methodology. The notation is then given before the methodology is described in mathematical detail in this section.

Marginal PD methodology
In order to explain the methodology, the following example is provided to illustrate the details of calculating the marginal PD. More substantiation about the concept behind the methodology is available in Appendix A.
Suppose the following sample of accounts is used, as presented in Table 1. It is assumed that one starts off with a sample of accounts drawn from across origination cohorts where the state of each account is tracked for each month that the account has been on book, referred to as month on book (MOB). Aptivaa (2016f) states that a month on book approach results in improved loss trend forecasting in the longer term. This is due to the month on book component capturing the maturation information of the loans.
The notation used in Table 1 is as follows: a zero represents accounts that are open and not in default, 1 represents accounts that went into default, 2 represents accounts that were closed, and 3 represents accounts that went into default and were immediately closed. The last-mentioned accounts are referred to as direct write-offs. In the example, it is evident that account A went into default but cured in the next month, while account C remained in default until the end of the observation window. Consideration should also be given to the fact that open accounts might become censored (no further information exists about the account from that time point forward). Accounts E and F are examples, with account F being censored while in default. Furthermore, it might seem peculiar to define a closed or defaulted account as censored. However, these definitions are used to account for possible data anomalies should they exist, e.g. where closed accounts are no longer accounted for and are thus considered censored.
The next step is to perform a frequency-based calculation by counting the number of accounts that moved from one state to another during a month on book interval. The following variables should be calculated by counting the number of accounts per month on book that adhere to any of the following conditions:
• Default and non-default accounts;
• Cured accounts;
• Closed accounts;
• Censored accounts that are either open or closed, with a default or non-default status.
It is important to interpret the definition of censoring in the context of the data. An account is considered censored in a certain month on book, if the state of the account is not available at that point in time, while the state was known in the previous month on book. The status of the censoring (i.e. open in default, etc.) refers to the latest state of the account when the information was available.
After calculating the above-mentioned variables, one should have results similar to Table 2, based on the example provided in Table 1.
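The counting step above can be sketched as follows. The state codes follow the paper's convention (0 = open and not in default, 1 = default, 2 = closed, 3 = direct write-off), while the account histories, the use of None to mark censoring, and the function name are all illustrative assumptions, not the paper's data.

```python
from collections import Counter

# Hypothetical state histories per account, indexed by month on book;
# None marks censoring (state no longer observed). Compare Table 1.
histories = {
    "A": [0, 1, 0, 0],       # defaulted, then cured
    "B": [0, 0, 2, 2],       # closed while not in default
    "C": [0, 1, 1, 1],       # remained in default
    "E": [0, 0, None, None]  # censored while open and not in default
}

def count_transitions(histories):
    """Count, per month on book t, accounts moving from one state to
    another; (t, prev, curr) keys give the frequency table of moves."""
    counts = Counter()
    for states in histories.values():
        for t in range(1, len(states)):
            prev, curr = states[t - 1], states[t]
            if curr is None:
                # censoring status refers to the latest known state
                counts[(t, prev, "censored")] += 1
                break  # no further information after censoring
            counts[(t, prev, curr)] += 1
    return counts
```

Aggregating these per-month counts by target state yields the marginal defaults, cures, closures and censored counts in the style of Table 2.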
Several summary measures can now be calculated from Table 2:
• Accounts at risk: the number of accounts that are open and not in default, i.e. accounts that are at risk of migrating from a non-default to a default or closed status, where t > 0 represents the month on book value.
• Marginal closing of non-default accounts: the number of additional accounts that migrated to a closed status and that were previously not in default. The assumption is that an account can stay in the data set for several months after being closed; one therefore only sums accounts that were previously open and not in default and migrated to a closed status.
• Marginal closing of default accounts: the number of additional accounts that migrated to a closed status and that were previously in default.
• Marginal defaults: the number of additional accounts that migrated to a default state.
It is now possible to estimate the rates that will be used in the second phase: the PD (Equation (8)), the rate of closure, the rate of closure of non-default accounts, the rate of closure of default accounts, and the rate of cure (Equations (10)-(12)). A typical life table approach is then followed (to ensure generality), by assuming an arbitrary starting value for the number of accounts in the life table, say T_1 = 100. For each t, the following life table quantities can then be calculated: the life table number of closures of non-default accounts, the life table number of closures of default accounts, and the life table number of cures. Finally, the monthly marginal through-the-cycle (TTC) PD (Equation (18)) and the monthly marginal point-in-time (PIT) PD (Equation (19)) are obtained. Equation (18) can be used to construct the term structure of the PDs (see Figures 2, 3 and 13), while Equation (19) is used in the calculation of the ECL (see Sections 6 and 7.1.4). To facilitate improved understanding of the order in which the calculations should be done, a flow diagram is supplied in Figure A1 (in Appendix A). For further detail on the practical implementation of a related multistate Markov model in the statistical software package R, the reader can consult Allignol et al. (2011). However, we used SAS and have made available some SAS code (see Appendix B). The next section illustrates how the methodology could be applied.
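The life-table conversion described above can be sketched as follows. This is a simplified reading rather than the paper's full calculation: only defaults and closures of non-default accounts deplete the at-risk population (the paper's life table also tracks cures and closures of defaulted accounts), and the TTC/PIT labels reflect one plausible interpretation of Equations (18) and (19). All names are illustrative.

```python
def marginal_pds(pd_rate, close_rate, start=100.0):
    """Life-table sketch: convert per-month-on-book rates into marginal PDs.

    pd_rate[t]    : observed default rate among accounts at risk at t
    close_rate[t] : observed closure rate of non-default accounts at t
    start         : arbitrary life-table starting value, e.g. T_1 = 100
    """
    at_risk = start
    ttc, pit = [], []
    for p, c in zip(pd_rate, close_rate):
        defaults = at_risk * p
        ttc.append(p)                 # conditional marginal PD (TTC-style)
        pit.append(defaults / start)  # share of original cohort defaulting at t
        at_risk -= defaults + at_risk * c  # survivors carry to the next month
    return ttc, pit
```

Because the at-risk population shrinks each month, the unconditional (survival-weighted) marginal PDs decay even when the conditional rate is constant, which is what produces a downward-sloping PD term structure.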

Illustration of the marginal PD
To illustrate how the proposed marginal PD could be implemented, we use Case study 1's data (for details on Case study 1's data, see Section 7.1). Note that due to the large number of accounts (around 400 000), it is possible to choose different reference periods and base the calculation on segmentation variables, as deemed appropriate. The concept of segmentation in this paper refers to the classification of accounts into homogeneous pools or groups where accounts within the same group exhibit similar credit risk characteristics. Whilst the following illustration is at the portfolio level, financial institutions should perform calculations based on reference periods and segmentations of their choice. Banks commonly segment their portfolios along lines of business, product types and risk characteristics to model more homogeneous groups of loans (McPhail & McPhail, 2014). A good rule of thumb is that a bank should employ more segmentation criteria up to the point where no noticeable loss in stability occurs (McPhail & McPhail, 2014). Figure 1 depicts the four marginal rates (see Equations (8, 10-12)) calculated for the retail portfolio data. Note that all figures are desensitised due to the confidential nature of the data. In Figure 1, the calculations were performed at portfolio level with no further segmentation being performed. It is evident that loans classified as "in default" in the early months of the contract have a material probability of curing from default (solid line). This probability of cure decreases over time. It should be noted that only a few accounts enter a default state at very low month on book values, i.e. month on book less than 3. Such defaults are considered to be "technical defaults" and preferably one should investigate the cause of these defaults to avoid volatility due to the low number of accounts. The cyclical pattern evident in the dotted prepayment line reflects the underlying contractual properties of this portfolio.
The portfolio consists of unsecured retail banking loans with the majority of the loans having a contractual lifetime of 60 months, although loans with a contractual lifetime of 12, 24, 36 and 48 months were also present. The spike at every 12th-month interval is a reflection of the contractual nature of the portfolio. Due to this, it is recommended that segmentation be performed on the contractual term of the loans to ensure that homogeneous pools of accounts are modelled. Note that in terms of Figure 1, one could also translate the concept of "prepayment" used in the graph to closure of the account.
The four marginal rates in Figure 1 are used to construct the life table, as described in Section 3.1. By using Equation (18), the marginal rates are converted into the term structure of the PD (Figure 2). The term structure of the PD is an essential part of the calculation of the ECL, as elaborated in Section 6.
Although segmentation of the portfolio is not a direct aim of this study, it is investigated. From Figure 3 it is evident that segmentation should receive additional focus as the PD term structure is segment dependent. For example, accounts with longer contractual terms (dashed line) have lower PDs during the early time periods of the contract, but the PD stayed at a higher level for a longer period of time compared to contracts with a shorter contractual term (dotted line). The figure also shows the effect when no segmentation is performed (solid line). The impact of segmentation is clearly worth noting, and considerations are given to this matter in Sections 7.1.6 and 8.
It can be argued that the variable, month on book, was used indirectly to segment the portfolio. Contractual term should be explored to possibly further segment the portfolio, e.g. see the initial analysis in Figure 3. In terms of the use of the proposed methodology as a benchmark (see Section 7), users should consider additional segmentation based on contractual features of the underlying portfolio, especially if those features are as obvious as those observed in Figure 3.
In the next section, the LGD component will be discussed, followed by an illustration of the LGD based on the same portfolio of unsecured retail loans (i.e. Case study 1's data).

Modularised LGD
In this section, the focus is on the second component used in the calculation of ECL. The methodology that is proposed to calculate the LGD will be discussed first. This is followed by a description of the procedure used to estimate the recovery rates and LGD. We will illustrate how the modularised LGD could be implemented by first providing a description of the data and parameters used.

LGD methodology
Lifetime LGD is tracked at portfolio level through a recovery curve with monthly time-steps, projected by the most direct multiplicative combination of empirical cohort and month-since-default effects from the margins. The methodology for the calculation will now be explained. Note in advance that, for the LGD calculation, month on book is used for segmentation and month since default is used to group the recoveries.
More formally, the LGD is defined as LGD = 1 − RR, where RR is the recovery rate estimated using a marginal loss approach. The construction of the cumulative recovery curve is done by age of account at default. Therefore, we define the LGD for accounts that default at month on book t as

LGD_t = 1 − RR_t,

with RR_t the recovery rate on accounts that default at month on book t.
The recovery rate RR_t can be expressed as the sum of the marginal recovery rates, RR_t = Σ_{i=1}^{I} MRR_{t,i}, where MRR_{t,i} is the marginal recovery rate (discounted to the defaulted month on book t using the time since default i) for accounts that defaulted at month on book t, and I the recovery period. Thus LGD_t = 1 − Σ_{i=1}^{I} MRR_{t,i}. Note that we assume that we can recover up to I months after default and that the actual marginal recovery rates will only be available for accounts that defaulted more than I months before the current date. For this reason, we need to estimate the future marginal recovery rates.

Estimating the marginal recovery rates and LGD
The marginal recovery rates MRR t;i are estimated by segmenting account recoveries according to default vintage (e.g. years or months) and month on book and calculating the cumulative recoveries according to months since default. This is analogous to constructing a runoff triangle (see Table 3).
The marginal recovery rates MRR_{t,i} are then estimated using the most recent data in the triangle (the diagonal of the triangle). Thus

MRR_{t,i} = (Σ_{n=N−i−k}^{N−i} ACR_{n,t,i} − Σ_{n=N−i−k}^{N−i} ACR_{n,t,i−1}) / Σ_{n=N−i−k}^{N−i} EAD_{n,t},

where ACR_{n,t,i} is the actual cumulative recoveries for vintage n = 1, ..., N, month on book at default t and time since default i = 1, ..., I, and k is the number of vintages used to estimate the marginal recovery rates.
Given the recovery data and the estimated marginal recovery rates MRR_{t,i} for month on book t = 1, ..., T and time since default i = 1, ..., I, we can estimate the LGD as

LGD_t = 1 − Σ_{i=1}^{I} MRR_{t,i}

for month on book t = 1, ..., T.
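The run-off-triangle estimation above can be illustrated with a toy example. The sketch below assumes k = 1 (a single most-recent vintage per diagonal cell), undiscounted recoveries and a fixed month on book at default; the triangle values and EADs are fabricated purely for illustration.

```python
# Hypothetical run-off triangle: cumulative recoveries per vintage (rows)
# by months since default (columns). Later cells of recent vintages are
# unobserved (None), as those recoveries have not yet taken place.
acr = [
    [10.0, 18.0, 22.0],   # oldest vintage: fully observed
    [12.0, 20.0, None],   # more recent vintage
    [11.0, None, None],   # most recent vintage
]
ead = [100.0, 100.0, 100.0]  # EAD per vintage

def lgd_from_triangle(acr, ead):
    """Estimate marginal recovery rates from the latest observed diagonal
    of the triangle, sum them into RR and return LGD = 1 - RR."""
    n_periods = len(acr[0])
    rr = 0.0
    for i in range(n_periods):
        v = len(acr) - 1 - i  # latest vintage with period i observed
        prev = acr[v][i - 1] if i > 0 else 0.0
        rr += (acr[v][i] - prev) / ead[v]  # marginal recovery rate MRR_i
    return 1.0 - rr
```

Reading down the diagonal mirrors the paper's choice of the most recent k vintages: each months-since-default column is estimated from the newest cohort that has reached it, so the incomplete curves of recent cohorts are completed using older cohorts' experience.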

Illustration of the modularised LGD
To illustrate how the proposed modularised LGD could be implemented, we again use Case study 1's data. We start by first providing a description of the data and parameters used. Recovery data are used for all accounts defaulting within a specified time frame (same reference period as used in the PD model). Multiple defaults are treated as separate default events, and a recovery rate of 100% is allocated to cured accounts. The recoveries are discounted using the debit interest rate for each account. Figure 4 depicts the actual LGD at default by month on book for the portfolio. It is observed that the LGD peaks in the third month on book following default and then decreases as the month on book at default value gets larger. High volatility is observed at the end of the curve due to the low number of surviving accounts. Figure 5 depicts the recovery rates per monthly vintage. It is observed that the recovery rates level out between 60 and 80 months since default. For this reason, a recovery period (I) of 84 months is used in the model.
In order to obtain a stable model, yearly vintages are used. The number of recent vintages used in estimating the marginal recovery rates is set at $k = 2$. The segmentation according to month on book is done by binning the data into six-month intervals, and the cumulative recoveries are grouped yearly for the months since default.
Note that LGD is tracked as a recovery curve, but for more recent default cohorts there is incomplete information because the recoveries are yet to take place. The resulting incomplete recovery curve of a particular default cohort is extrapolated from the more complete recovery information of older default cohorts. These older recoveries are averaged, EAD-weighted, and applied proportionately at the appropriate timings to the later cohort. If $k$ is set too large, it should be acknowledged that this calculation might be influenced by the oldest cohorts, whose information appears most frequently in the projection steps, causing a sampling bias that could impact the extrapolation. However, in our case ($k = 2$), this should not cause excessive sampling bias.

In Figure 6, the resulting LGD model (illustrating how the modularised LGD could be implemented) is depicted for semi-annually grouped month on book values and yearly vintages ($k = 2$). Similar to the actual LGD in Figure 4, the LGD is downward sloping for accounts defaulting at month on book values between 0 and 6 months and again for accounts defaulting at values between 78 and 84. As noted earlier, month on book was indirectly used to segment the modularised LGD.

Exposure at default
After calculating the marginal PD and LGD, we focus on the final component used in the calculation of ECL. The EAD is relatively straightforward to calculate, and this section explains the methodology followed to calculate it.

EAD methodology
The EAD is estimated deterministically using the amortisation schedule of each account with discounting. In order to discuss the methodology, some notation is needed. Define $EAD_t$ as the exposure at default at some future month on book $t$, where $t = m, m+1, \ldots, \text{term}$, with $m$ representing the current month on book and term the contractual term of the account. The EAD at the current month on book ($m$) is taken to be the exposure or outstanding balance, i.e. $EAD_m = \text{Exposure}$, and the remaining EADs are estimated as
$$EAD_t = EAD_{t-1}\left(1 + \frac{r}{12}\right) - P, \quad t = m+1, \ldots, \text{term},$$
where $r$ is the annual nominal debit interest rate for the account and $P$ is the contractual monthly instalment. Note that this EAD methodology is only appropriate for amortising loans. Should a revolving product portfolio be considered, a different EAD model should be used, e.g. the zero-one inflated Beta with Pareto tail (Chawla et al., 2016). Furthermore, because the EAD is calculated per account, segmentation is not applicable; segmentation will typically only be applied to the PD and LGD models.
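A minimal sketch of a deterministic EAD schedule for an amortising loan might look as follows. The assumptions are illustrative (monthly compounding at annual nominal rate $r$, a level instalment recomputed from the current balance, no fees or arrears); `ead_schedule` is a hypothetical helper, not the bank's model.

```python
def ead_schedule(balance, r, m, term):
    """Projected EAD_t for t = m, ..., term, given the current balance at
    month on book m, annual nominal rate r and contractual term."""
    i = r / 12.0                       # monthly interest rate
    n = term - m                       # remaining number of payments
    # level instalment that amortises the current balance over n months
    instalment = balance * i / (1 - (1 + i) ** -n) if i > 0 else balance / n
    ead = {m: balance}
    for t in range(m + 1, term + 1):
        balance = balance * (1 + i) - instalment  # roll balance forward
        ead[t] = max(balance, 0.0)
    return ead
```

By construction, the projected exposure declines from the current balance to approximately zero at the contractual term.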

Calculation of ECL
In the previous sections, we discussed the respective components (PD, LGD and EAD) that are used in the calculation of the ECL. In this section, the ECL calculation per account is discussed.
The expected lifetime credit loss ($ECL_{i,m}$) per account $i$ currently at month on book $m$ is calculated as follows:
$$ECL_{i,m} = \sum_{t=1}^{T} PD_{m,m+t} \cdot LGD_{m+t} \cdot \frac{EAD_{i,m+t}}{\left(1 + r/12\right)^{t}},$$
where $r$ is the annual nominal debit interest rate for each account, $PD_{m,m+t}$ is the marginal PD (i.e. the probability of defaulting at time $m+t$, given that the account survived up to time $m$) calculated using Equation (19), $LGD_{m+t}$ is the LGD when an account defaults at month on book $m+t$ from the LGD model in Section 4, and $EAD_{i,m+t}$ is the EAD at month on book $m+t$ for account $i$ as described in Section 5.
The horizon $T$ over which the expected credit loss is calculated is taken to be either $T = \min(\text{remaining term}, 12)$ if the account is in Stage 1, or $T = \text{remaining term}$ if the account is in Stage 2. Note again that for Stage 1, IFRS9 defines the 12-month expected credit loss as a portion of lifetime expected credit losses (BCBS, 2015). It is not the expected cash shortfalls over the 12-month period, but the entire loss on an asset, weighted by the probability that the loss will occur in the next 12 months (Beerbaum, 2015).
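The per-account combination of the three components, including the Stage 1/Stage 2 horizon rule, can be sketched as follows. This is illustrative only; the containers indexed by months ahead ($t = 1, \ldots, T$) and the `stage` flag are assumptions.

```python
def expected_credit_loss(marginal_pd, lgd, ead, r, remaining_term, stage):
    """ECL for one account: sum over t of PD * LGD * discounted EAD.

    marginal_pd[t], lgd[t], ead[t]: values for t months ahead (t = 1..T).
    r: annual nominal debit interest rate; stage: 1 or 2.
    """
    T = min(remaining_term, 12) if stage == 1 else remaining_term
    i = r / 12.0  # monthly discount rate
    return sum(marginal_pd[t] * lgd[t] * ead[t] / (1 + i) ** t
               for t in range(1, T + 1))
```

For Stage 1 the sum is truncated at 12 months (or the remaining term if shorter), while for Stage 2 it runs over the remaining lifetime.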

Case studies
In this section, the methodology that we propose is compared with newly developed IFRS9 models of a large retail bank. Case study 1 is based on an unsecured retail portfolio and Case study 2 on a secured portfolio. The purpose of both case studies was to illustrate how our proposed methodology could be used as a benchmarking tool, not to compare the bank's provisioning model exactly to the proposed methodology.

Case study 1
It should be noted that the model results given in the forthcoming sections were taken from the development phase, when the retail bank investigated different methods to estimate ECL under the new IFRS9 guidelines, and thus constitute a not yet finalised IFRS9 model. For confidentiality reasons, only general information relating to this model is disclosed. It is known that the methodology followed by the bank is vastly different from the one proposed in this paper: the PD was modelled using a hybrid approach (containing both survival analysis and logistic regression models), and the LGD model also incorporated survival analysis.
For comparison reasons, the same data used by the bank in the development of their IFRS 9 model (referred to as "case study") is used to fit our proposed method (referred to as "benchmark"). Each model is then compared on an out-of-sample subset (test sample) of the unsecured loan portfolio using performing accounts only (Stage 1 and Stage 2). Note that the focus of this paper is neither on staging nor segmentation. Therefore, to ensure that the results of the comparison are more or less analogous, the current staging indicator (of this specific bank) is used. The remainder of this section will focus on the data used, followed by a comparison between PD and LGD models. The EAD is not compared since similar methodologies were used for the benchmark and case study. Finally, the expected credit loss of both models is compared. We conclude this first case study with recommendations and remarks about staging and the incorporation of varying economic scenarios.

Data of Case study 1
The test sample provided is confidential and very little information will be provided. The data used are from an unsecured retail portfolio of a major South African bank. The data covered a period from 1 September 2005 to 31 July 2015 (118 months), resulting in nearly 400 000 accounts.

PD comparison of Case study 1
When comparing the PD values of the case study to the benchmark model, it is important to recognise that the benchmark model utilised a multiple default definition. This entails that an account can follow an outcome path where it moves in and out of default at several points in time. In Figure 9, the average PDs (weighted by EAD) of the case study and benchmark model are compared across stages without applying any segmentation based on the original contractual term of the account (see Figure 10 to observe the effect of segmentation). As mentioned in Section 3.2, it would be advisable to incorporate segmentation of the portfolio based on contractual features, as it would enhance the homogeneity of the subset of accounts used in the development of the benchmark model without substantially increasing the complexity of the analysis.
It should be noted that Figures 9-11 aim at highlighting the differences between the stages. Since different methodologies were used to develop the case study model and the benchmark model, it is difficult to compare the results directly to one another. Figure 10 illustrates that the benchmark model follows a similar trend for both stages. However, the benchmark model has a slightly higher average PD for Stage 1 compared to the case study. The opposite is true for Stage 2, where the PDs of the case study are significantly higher than those of the benchmark model. The difference between the Stage 1 PDs can most likely be attributed to the multiple default definition used in the benchmark model. When comparing the Stage 1 and Stage 2 PDs of the benchmark model to each other, the differences between them are far smaller than when performing the same comparison on the case study's PDs. The difference between the case study's Stage 1 and Stage 2 PDs warrants further investigation.
The difference between the case study's Stage 1 and Stage 2 PDs may be due to the way in which segmentation (and/or staging of the accounts, see Section 7.1.6) was treated when the case study model was developed. Since the focus is not on the case study model, no further comments will be made on these differences. We can, however, comment on their possible implications. In our view, the consequence of such a big difference between Stage 1 and Stage 2 PDs will become evident when adverse economic conditions prevail, such as a recession or the downgrading of South Africa's long-term foreign currency sovereign credit rating to sub-investment grade. The anticipated impact is a sudden increase in impairment levels, which might ultimately lead to volatile regulatory capital requirements. In the benchmark model, one can observe a far more gradual transition from Stage 1 to Stage 2 (avoiding the "cliff effect"), due to the fact that the estimation of the PDs was done without explicitly segmenting the accounts beforehand based on, for example, delinquency cycles.
However, to facilitate comparison further, the data were broken down into accounts with contractual terms less than 37 months (0-3 years), between 37 and 60 months (3-5 years), between 61 and 80 months, and more than 80 months. Following the split, one is able to identify whether the differences identified in Figure 9 are observed across all terms. When inspecting Figure 10, for Stage 1 the benchmark PDs and the PDs of the case study are more or less parallel, with the biggest difference materialising for terms greater than 80 months with month on book values in excess of 60 months. For Stage 2, it is evident that the PDs of the case study are substantially higher than the benchmark's Stage 2 PDs. This phenomenon is evident across all terms.

(Figure 9. Average PD term structure per month on book.)

Figure 11 represents the distribution of benchmark and case study PDs per term- and stage-group, respectively. It is interesting to observe that the case study's proportional distribution of Stage 1 PDs is consistent across all terms (left column of Figure 11). In comparison, the benchmark Stage 1 PDs have several higher PD values for accounts with an original term in excess of 80 months. This behaviour can be explained by the fact that the development data used to derive the PDs contained very few accounts with a month on book value in excess of 80 months, and therefore the PDs for these large month on book values are quite data-dependent. The impact of the PDs for large month on book values is not substantial, since only a very small proportion of the test sample (the subset of accounts used in comparing the ECL in Section 7.1.4) had a month on book value larger than 80 months (see Figure 8). Should the EAD, however, be substantially large for these accounts, then it will have an impact.
When comparing Stage 2 PDs, the most obvious difference between the case study's and the benchmark model's PDs is observed for accounts with an original term between 37 and 60 months. In this situation, the proportional distribution of the case study's Stage 2 PDs is spread over higher PD values compared to the benchmark model's PDs. This is expected when one considers Figure 9 and compares the case study's Stage 2 PDs to the benchmark model's PDs. To conclude the PD comparison, it is recommended to calibrate the benchmark model on a homogeneous pool of accounts by segmenting the portfolio based on contractual features. The comparison between the benchmark model and some other bank-specific model must then be done per stage, in order to assess the difference between Stages 1 and 2 for each model, rather than directly comparing each stage of the various models to each other. Such a direct comparison can, however, be done if the aim is to improve the bank's understanding of the dynamics of the model. Finally, it is recommended to backtest any modelled PD against an observed default rate (see Section 7.1.5).

LGD comparison of Case study 1
Figure 12 displays the LGDs per month on book for the case study and the benchmark model, compared to the actual LGD. The main difference between the case study's LGD and the benchmark model's is that the former is upward sloping for higher month on book values and the latter is downward sloping. This is due to volatility in the data for higher month on book values, caused by the low number of accounts present in this segment (also confirmed by Figure 8). It is known that the methodology followed by the bank (case study) is vastly different from the one we propose; therefore, we cannot comment on the exact reason behind the trend of the case study. The LGD models, however, do seem comparable with each other, especially if one compares the benchmark model's trend to the actual LGD.

(Figure 11. Distribution of PDs per stage and term. Figure 12. LGD comparison.)

Table 4 compares the calculated ECL of the case study with the benchmark model based on the test sample. Note that for confidentiality reasons, no actual values are shown; we used a hypothetical exposure of one million rand. It must be noted that both the proposed PD and LGD models are calibrated on the loans portfolio over the same reference period (September 2005 to December 2014). The test sample used to compare the ECL consisted of a subset of accounts in the loans portfolio for one month. This test sample and the data used in our development are one year apart, and the general economic conditions changed only marginally in this period. It is also true that the proposed model has not been adjusted for any macroeconomic drivers. When considering Table 4 for Stage 1, the benchmark model results in a higher ECL value than the case study's model. For Stage 2, the opposite is true, and the case study model's ECL value is much higher than the benchmark model's.
This result is expected given the comparison of the PD model in Section 7.1.2 and the LGD model in Section 7.1.3. The difference in the ECL is primarily driven by the difference between the PD values of the benchmark model and those of the case study (see Figure 10). Taking both Stage 1 and Stage 2 into account, the total ECL for the benchmark model is higher than that of the case study's model, given the majority share of Stage 1.

ECL comparison of Case study 1
In order to analyse the above differences, it is necessary to compare the PD values to the average observed bad rate over the reference period. Table 5 contains the average 12-month PDs for both Stage 1 and Stage 2 accounts, as well as the overall 12-month PD, weighted by exposure. We used a hypothetical exposure of one million rand and show the PDs as percentages relative to the average observed bad rate over the reference period. The weighted 12-month PD of the case study is 58.93% of the average observed bad rate. In contrast, the weighted 12-month PD of the benchmark is in good alignment with the average observed bad rate (98.25% of it).
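The exposure-weighted backtest ratio reported in Table 5 can be computed along these lines (a hypothetical helper; the input vectors are illustrative assumptions, not the bank's data):

```python
def weighted_pd_ratio(pds, exposures, observed_bad_rate):
    """Exposure-weighted average 12-month PD, expressed as a percentage of
    the average observed bad rate over the reference period."""
    weighted_pd = sum(p * e for p, e in zip(pds, exposures)) / sum(exposures)
    return 100.0 * weighted_pd / observed_bad_rate
```

A ratio near 100% indicates that the modelled PDs are well aligned with the observed bad rate, as for the benchmark model above.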

Recommendations based on Case study 1
This comparative study illustrates how our proposed benchmark models can be used to analyse newly developed IFRS9 model estimates. In this comparison, for example, we observed a large jump from Stage 1 to Stage 2 PDs in the case study model, while our benchmark model had a much more gradual shift from Stage 1 to Stage 2. Our model is in this regard favourable in the sense that accounts moving between Stage 1 and Stage 2 will not lead to undue volatility in impairment loss amounts. Furthermore, it is recommended to backtest any modelled PD against observed bad rates to assess the accuracy of the model. The same is true for LGD. Based on the evidence that our proposed model is closely related to the actual LGD and bad rate, we believe that our model produces an accurate ECL, which might be slightly conservative.
Finally, as mentioned before, the comparison illustrates that research should be done into segmentation schemes and staging in order to develop mature IFRS9 models. It is also important to consider the IFRS9 requirement pertaining to the macro-economic adjustment of PDs, which was not addressed in either case study. For these reasons, we provide some notes on staging and on the incorporation of varying economic scenarios.

Note on staging (transfer criteria)
Significant deterioration in a retail portfolio should be determined at a segment/pool level to assess whether an account moves to Stage 2 or not. Significant deterioration needs to be calculated on a homogeneous segment with shared characteristics that sufficiently incorporate forward-looking information to assess whether there has been a significant increase in credit risk. This is clear when one considers the following paragraph under the staging section of GPPC (2016): "2.7.4.7 Relying only on delinquency or other indicators that are insufficiently forward-looking to assess whether there has been a significant increase in credit risk. IFRS9 permits this only when reasonable and supportable forward-looking information is not available without undue cost and effort. Except in very limited cases, it would be expected that a bank would be able to make use of other, qualitative indicators to supplement delinquency, such as credit bureau scores, the use of watch lists, etc."

Further motivation why delinquency cycles should not be used to determine stages is provided by the European Banking Authority (EBA, 2016): "102. IFRS9, paragraph B5.5.2, states that lifetime expected credit losses are generally expected to be recognised before a financial instrument becomes past due and that typically, credit risk increases significantly before a financial instrument becomes past due or other lagging borrower-specific factors (for example a modification or restructuring) are observed."
Therefore, credit institutions' analyses should take into account the fact that the determinants of credit losses very often begin to deteriorate a considerable time (months or, in some cases, years) before any objective evidence of delinquency appears in the lending exposures affected. Credit institutions should be mindful that delinquency data are generally backward-looking, and will seldom on their own be appropriate in the implementation of an ECL approach. For example, within retail portfolios adverse developments in macroeconomic factors and borrower attributes will generally lead to an increase in the level of credit risk long before this manifests itself in lagging information such as delinquency.
In conclusion, using only delinquency cycles for staging will be regarded as borderline compliant with the IFRS9 standard. The methods on how to determine the staging should be reconsidered and researched more thoroughly.

Note on incorporating varying economic scenarios
While banks have been following similar forward-looking macro-economic adjustments of PD for quite some time under the Basel internal ratings-based approach to PD modelling, stress testing and the Comprehensive Capital Analysis and Review (CCAR), IFRS9 has introduced additional complexities in the form of lifetime ECL and lifetime PD for Stage 2 and Stage 3 exposures. Lifetime here refers to the life of the loan or the effective maturity (Miu & Ozdemir, 2017). Regulatory stress testing norms require banks to assess the impact of macro-economic factors on PDs over 1 to 2 years (under CCAR guidelines); under IFRS9, however, they are required to extend the macro-economic adjustment of PDs over the life of the loan, which can extend to over 5 years (long-term project financing loans, for instance); see, e.g. Aptivaa (2016d).
The complexities associated with the adjustment of PD (PIT) for forward-looking macro-economic scenarios bring additional challenges to banks. To mitigate these challenges, IFRS9 provides for the following flexibilities, see, e.g. Aptivaa (2016d):
• Judgmental overrides on long-term macro-economic forecasts can be carried out with valid justifications.
• Instead of a single forward-looking scenario, banks may generate multiple forward-looking scenarios and use them to generate multiple PD (PIT) estimates for a facility/instrument.
While these flexibilities will help in mitigating challenges of long-term macro-economic forecasts, establishing a stable relationship between macro-economic scenarios and PD (PIT) will still remain challenging.
To give a simple example of the application of the above-mentioned flexibilities, one can apply a similar methodology as that used during the development phase of our proposed PD model. The suggestion is to develop the PD term structure for different calendar month periods (called reference periods). This approach yielded three different term structures for the PDs. All three term structures are similarly shaped, but each one is at a slightly different level, i.e. the term structures are parallel to each other. The term structure is also calculated using almost all the available data (accounts on book between September 2005 and December 2014). All of these term structures are displayed in Figure 13.
By probability weighting each of the three different term structures based on its representativeness of expected future economic conditions, an expected term structure can be created, which can be used as input into the ECL calculation. This approach is closely related to the one proposed by Yang et al. (2019), except that their focus is on determining the appropriate probability weighting that should be used.
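Such a probability weighting of term structures can be sketched as follows; the scenario weights are an assumption and would in practice need to reflect expected future economic conditions.

```python
def expected_term_structure(term_structures, weights):
    """Probability-weighted average of several PD term structures.

    term_structures: list of equal-length PD curves (one per scenario).
    weights: scenario probabilities, summing to one.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to one"
    horizon = len(term_structures[0])
    return [sum(w * ts[t] for w, ts in zip(weights, term_structures))
            for t in range(horizon)]
```

The resulting expected term structure can then be fed directly into the ECL calculation in place of a single-scenario PD curve.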
An alternative way to address varying economic scenarios is the use of scalars to adjust the PD. While IFRS9 does not explicitly require the use of scalars, Tasche (2013) mentions that the popular scaled-PD approach can be used to reflect the forecast-period PD curve. Also, according to Aptivaa (2016e), banks generally end up with hybrid PIT and TTC models, which have to be calibrated at each point in the economic cycle to reflect PIT PDs.
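As a loose illustration of a scalar adjustment, one could scale a through-the-cycle PD curve so that its average matches a forecast point-in-time default rate. This is a simplification for illustration only, not Tasche's (2013) formulation:

```python
def scale_pd_curve(ttc_pds, forecast_rate):
    """Scale a TTC PD curve so its average equals the forecast PIT rate,
    capping individual PDs at 1."""
    avg = sum(ttc_pds) / len(ttc_pds)
    scalar = forecast_rate / avg
    return [min(1.0, p * scalar) for p in ttc_pds]
```

Applying a scenario-specific scalar to each forecast, and then probability-weighting the resulting curves, would combine the two approaches described above.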

Case study 2: secured retail portfolio
In this section, the methodology that we propose is again compared with a developed IFRS9 PD model of a large retail bank. For this case study, the portfolio is a secured portfolio (previous case study considered an unsecured portfolio). The methodology followed by the bank to model their PD is similar to the previous case study, namely, a hybrid approach containing both a survival analysis and logistic regression model. Table 6 summarises the differences between the case study PD and benchmark PD methodologies.
The benchmark model's PD term structure is shown in Figure 14 for the secured portfolio. Some similarities are noted when compared to Case study 1's term structure (compare with Figure 2). A notable difference is that the steep decline in PD is not evident in the secured PD term structure; instead, a more gradual decline is observed.
We are interested in the comparison of the calculated ECL of the case study with the benchmark. In the previous case study, we showed that the difference in the ECL is primarily driven by the difference between the PD values of the benchmark model and those of the case study. Therefore, in this case study we will only focus on comparing the PD of the benchmark with the PD of the case study, and keep the LGD and EAD components consistent in the ECL comparison. The case study's IFRS9 EAD and LGD models were used to produce the LGD and EAD values (in both the benchmark and case study ECL calculations) as a means to have consistency within the ECL comparison. The bank's EAD model is based on the outstanding balance given a fixed amortisation schedule (similar to our proposed EAD model), and the bank's LGD model uses a survival curve (comparable to our proposed LGD model that uses a lifetime recovery curve).

We will compare the total ECL values for Stage 1, Stage 2, and the aggregate of Stage 1 and Stage 2, separately. We will denote the benchmark PD by $PD_b$ and the case study PD by $PD_{cs}$, and similarly the benchmark ECL by $ECL_b$ and the case study ECL by $ECL_{cs}$.

Figures 15-17 illustrate the comparison of the total ECL values over a 2-year period. The monthly $ECL_{cs}$ and $ECL_b$ amounts are plotted per month over the 2-year period. The number of accounts is also included in the graphs. The ECL amounts and the number of accounts are on different scales, but due to the confidentiality of the data, no actual values can be disclosed.

Figure 15 compares the Stage 1 total ECL values over the 2-year period. The comparison illustrates that the $ECL_{cs}$ amounts are lower than the $ECL_b$ amounts over the entire 2-year period. In terms of this ECL comparison, it is suggested that the $PD_{cs}$ values result in the $ECL_{cs}$ amount under Stage 1 being underestimated for all months over the 2-year period, in comparison with the $ECL_b$ amounts.
In January 2018, we find a sudden increase in the number of accounts for Stage 1; the number of accounts gradually increases thereafter. This sudden change in the number of accounts is attributed to a change in how the significant increase in credit risk (SICR) was defined. The SICR determines the stage allocation of accounts in terms of Stage 1 and Stage 2, by assessing whether or not there was a significant increase in the credit risk of the accounts since their respective origination dates.

Figure 16 compares the Stage 2 ECL values over the 2-year period. The comparison illustrates that the $ECL_{cs}$ amounts are higher than the $ECL_b$ amounts over the entire 2-year period. In January 2018, we find a sudden decrease in the monthly number of accounts for Stage 2, which corresponds with the change in SICR discussed above. It is clear that $ECL_b$ changes by a similar factor as the number of accounts in January 2018. In comparison, the $ECL_{cs}$ is quite stable before and after the change in the number of accounts in January 2018. Although no comment can be made about the $ECL_{cs}$ due to confidentiality, it is evident that the benchmark model is sensitive to this change in the number of accounts.
In Figure 17, the aggregates of the Stage 1 and Stage 2 ECL values are compared over the 2-year period. It is clear that $ECL_b$ is consistently higher than $ECL_{cs}$. When Figures 15 and 16 are aggregated, the slight increase in the $ECL_b$ of Stage 1 during January 2018 is somewhat cancelled out by the significant decrease in the $ECL_b$ of Stage 2. This has a slight net decreasing impact on the aggregated $ECL_b$ for January 2018. A further comparison between the first and second case studies will now be discussed.

Discussion of the case studies
In the first case study, we showed that the difference in the ECL is primarily driven by the difference between the PD values of the benchmark model and those of the case study. Therefore, we only focused on comparing the PD in the second case study. Both Case studies 1 and 2 showed that the benchmark ECL of Stage 1 is higher than the case study ECL (i.e. the bank is possibly underestimating ECL). Both case studies also showed that the benchmark ECL of Stage 2 is lower than the case study ECL (i.e. the bank is possibly overestimating ECL). When comparing the aggregates of Stages 1 and 2, the case studies are again similar, with both having a higher ECL for the benchmark (the bank is possibly conservative). Note that the exposure composition of the underlying portfolios (i.e. Case study 1 compared to Case study 2) is different, with Stage 1 contributing 90% of the total ECL in Case study 1 but only 40% of the total ECL in Case study 2.

Conclusion and future research
In this paper, we developed a methodology to calculate empirical ECL using a transparent-modularised approach, by providing a methodology to calculate the marginal PD, the marginal recovery rates and resulting LGD, and lastly the EAD. These three components are combined to calculate the ECL. This paper did not focus on forward-looking macro-economic adjustments of the PDs. However, due to the flexibility allowed under IFRS 9, we propose an idea of probability weighting different PD term structures (calculated over different reference periods) to derive an expected PD term structure. We also provided two comparative studies to illustrate the use of our proposed methodology as a benchmark when more sophisticated IFRS 9 models are in production or still being developed.
The first case study was applied to an unsecured portfolio with Stage 1 contributing 90% of the ECL. The second case study was applied to a secured portfolio with Stage 1 contributing only 40% of the ECL. Both case studies revealed that the bank is potentially underestimating the Stage 1 ECL when compared to the benchmark; for Stage 2, the opposite is true. When the aggregated ECL is compared, both case studies indicate that the benchmark ECL is higher than the case study ECL. The case studies illustrate the importance of the PD component in the ECL calculations.
Considering future research, we recommend additional research on the segmentation schemes and staging used in IFRS9 models. Some motivation is as follows: IFRS9 requires the segmentation of assets into homogeneous pools for which a PD term structure is required (IFRS, 2014; McPhail & McPhail, 2014). Hence, prepayment, attrition and default rates are estimated per pool. For each pool, a 12-month or lifetime provision is calculated depending on the stage that the account is allocated to. Staging and pooling are independent of each other, but may or may not have common risk drivers, such as delinquency (GPPC, 2016). For example, if an account moves to Stage 2, the lifetime provision is based on the pool that the account has been assigned to, and not the future loss on the account given that the account has moved to Stage 2. IFRS9 is principle-based; therefore, it is difficult to state that any interpretation of segmentation is wrong. However, based on GPPC (2016), segmentation based solely on delinquency will be regarded as borderline compliant with the standard. The reason is that no objective forward-looking information is used to move accounts to Stage 2 before they become delinquent (the 30-day backstop for Stage 2): a case of too little, too late. Such borderline segmentation schemes may lead to volatile impairment numbers, especially on long-dated portfolios. Therefore, the methods on how to determine the three IFRS9 stages, and on how best to segment the book, should be researched thoroughly. Jong (2016) underlines the importance of staging by proposing several tests to assess the effectiveness of the staging in use. In this regard, the investigation into segmentation performed in Section 3.2 is evidence that improved results may follow if segmentation is thoroughly researched.
Therefore, future research should include investigating the best way to segment a portfolio for the IFRS9 calculations and to conduct additional testing on various segmentation schemes.
The complexity of the many IFRS9 models used in calculating ECL should also receive attention in future research, especially the calculation of PDs. The question is raised whether some of these ECL models are not overly complex. We have, however, proposed a simple ECL methodology in this paper and have shown how to compare complex models to more simplistic ones.
Following from the above, if one compares different methodologies to one another, judging the difference in outcomes of the various models is always difficult. Further research could focus on the quantification and assessment of how far internal models can be allowed to deviate from the benchmark model. A possible solution is to construct confidence bands around estimators based on the standard errors of the estimators under large-sample assumptions. Another possible solution is to construct bootstrap confidence intervals for the various estimators.
In terms of incorporating varying economic scenarios, the establishment of a stable relationship between macro-economic scenarios and PDs will still have to be researched. We recommend the investigation of naïve methods where economic scenarios are linked to parallel shifts in the PD term structure. Such methods can then be expanded by using a probability weighting for each scenario, and thus calculating the required probability-weighted ECL according to the IFRS9 standard.