The Interaction between Monetary and Macroprudential Policy: Should Central Banks “Lean Against the Wind” to Foster Macro-financial Stability?

The extensive harm caused by the financial crisis raises the question of whether policymakers could have done more to prevent the build-up of financial imbalances. This paper contributes to the field of regulatory impact assessment by taking up the revived debate on whether central banks should “lean against the wind”. Currently, there is no consensus on whether monetary policy is, in general, able to support the resilience of the financial system or whether this task is better left to the macroprudential approach of financial regulation. We aim to shed light on this issue by analyzing distinct policy regimes within an agent-based computational macro-model with endogenous money. We find that each policy makes use of its comparative advantage, leading to superior outcomes with respect to its intended objective. In particular, we show that “leaning against the wind” should serve only as a first line of defense in the absence of a prudential regulatory regime, and that price stability does not necessarily imply financial stability. Moreover, macroprudential regulation, as an unburdened policy instrument, is able to dampen the build-up of financial imbalances by restricting credit to the unsustainably high-leveraged part of the real economy. In contrast, leaning against the wind appears to have no positive impact on financial stability, which strengthens proponents of Tinbergen’s principle who argue that both policies are designed for their specific purpose and should be used accordingly.


Introduction
In a competitive environment, banks' private choices concerning money creation are not socially optimal: they burden the economy with externalities and leave the system vulnerable to financial crises. In this context, the focus is on "how to exploit the magic of credit for growth without inciting banks to imprudent lending practices", as Giannini (2011) puts it, and on how to avoid states of the financial system that are macroeconomically destructive instead of growth-supportive.
Historically, central banks emerged as an institutional counterbalance in order to be in control of the banking sector and to restrict the risk of financial imbalances [Haldane and Qvigstad (2014); Hellwig (2014); Stein (2012); Goodhart (1988)]. But over time, the focus turned more and more from (direct) crisis mitigation towards the current dual mandate, since it was generally agreed that inflation represents one of the main sources of financial instability and that achieving price stability would be sufficient to also ensure financial stability [Schwartz (1995)]. The occurrence of the recent financial crisis disabused both practitioners and researchers. 1 In the course of the recent resurgence of interest in the nexus of finance and macroeconomics [Morley (2015)], there are numerous invocations to put such considerations back on the research agenda, emphasizing that the focus on inflation bears the potential of omitting other measures of economic health [Woodford (2012); Walsh (2014); Borio (2014); Stein (2014); Tarullo (2014); George (2014)]. As a consequence, many central banks face calls to expand their policy goals towards financial stability issues. The corresponding debate is mainly about whether to continue to rely entirely on financial regulation and macroprudential policy instruments [Hanson et al. (2011); Criste and Lupu (2014); Tomuleasa (2015)] to ensure financial stability, or to respond directly to financial imbalances through monetary policy.
For the vast majority of central banks around the world, flexible inflation targeting has become the predominant monetary policy regime and proponents argue that financial stability issues can represent a natural extension [Olsen (2015)]. For example, Woodford (2012) states that central banks should implement a policy which is seeking "to deter extreme levels of leverage and of maturity transformation in the financial sector ". Even "modest changes in short-term rates can have a significant effect on firm's incentives to seek high degrees of leverage or excessively short-term sources of funding. Again, this is something that we need to understand better than we currently do; acceptance that monetary policy deliberations should take account of the consequences of the policy decision for financial stability will require a sustained research effort, to develop the quantitative models that will be needed as a basis for such a discussion".
Moreover, R. Bookstaber adds in his speech at the INET conference 2014 that "we have to embed financial regulation deeply within macroeconomics and in particular monetary policy, the interface between those two is untried territory". A similar kind of invocation was also made by Mishkin (2011) who states that "research on the kind of quantitative models needed to analyze this issue should probably be a large part of the agenda for central-bank research staffs in the near term".
But there are not only arguments in favor of an extended flexible inflation targeting: monetary and financial-stability policy are distinct and separate policies with different objectives and different instruments, as Svensson (2012) argues. Thus, a direct central bank response to, say, credit growth would inevitably suggest a violation of Tinbergen's famous "effective assignment principle" [Tinbergen (1952)], i.e. to assign only one objective to each independent policy instrument, which, in turn, implies that policymakers cannot be "the servant of two masters". Therefore, Svensson emphasizes that "[. . . ] the policy rate is not the only available tool, and much better instruments are available for achieving and maintaining financial stability. Monetary policy should be the last line of defence of financial stability, not the first line". Ignoring Tinbergen's principle bears the risk of an overactive monetary policy leading to a highly volatile target rate, which might entail destabilizing effects on the primary goals of the central bank. Also Yellen (2014) and Giese et al. (2013) argue that using macroprudential policy would be the more effective and direct way, while Smets (2014) emphasizes the importance of an appropriate coordination in order to avoid conflicts of interacting policies.
These considerations necessarily raise the question of whether the analysis framework usually used by central banks is the right tool to consult for proper guidance. Existing research in this field is still dominated by studies using DSGE models as the underlying framework for the analysis [Käfer (2014); Chatelain and Ralf (2014); Plosser (2014)]. In this context, Mishkin (2011) states that the underlying linear quadratic framework of the pre-crisis theory of optimal monetary policy has a significant shortcoming, i.e. the financial sector does not play a special role for economic fluctuations. This naturally led to a dichotomy between monetary and financial stability policy, resulting in a situation in which both are conducted separately. 2 However, Adrian and Shin (2008b,a) argue against "the common view that monetary policy and policies toward financial stability should be seen separately, they are inseparable". Moreover, some early studies have argued that the current monetary policy framework could fail to deal with financial instability because it largely ignores the development of variables that are usually linked to financial imbalances, e.g. credit growth or asset prices [Cecchetti et al. (2000); Bordo and Jeanne (2002); Lowe (2002, 2004)]. For a more recent critique see Gelain et al. (2012), who state that the analysis of the nexus between monetary and macroprudential policy "requires a realistic economic model that captures the links between asset prices, credit expansion, and real economic activity. Standard DSGE models with fully rational expectations have difficulty producing large swings in [private sector] debt that resemble the patterns observed" in the data. Also Agénor and da Silva (2014) choose a simple dynamic macroeconomic model of a bank-dominated financial system for their analysis since it "provides [. . . ] a better starting point to think about monetary policy [. . . ] compared to the New Keynesian model [. . . ] which by now is largely discredited. The days of studying monetary policy in models without money (and credit) are over [. . . ] ". 3 Although the framework is continuously extended and meanwhile also takes the banking sector and financial frictions into account 4 , relying entirely on a single kind of model to analyze policy issues might bear the risk of "backing the wrong horse". 5 Hence, the new insights gained in the aftermath of the crisis might be a good reason to approach monetary policy analysis within alternative frameworks. Moreover, Bookstaber (2013) strongly argues in favor of agent-based computational (ACE) frameworks for research on financial stability issues.
We contribute to the literature on regulatory impact assessment and the interactions between monetary policy and financial stability in the following way: First, by providing an agent-based macro model with endogenous money, we contribute to model pluralism in this area. Currently, we are not aware of any comparable studies using an ACE model in this field, except for Popoyan et al. (2015); da Silva and Lima (2015) and, somewhat more broadly, Salle, Yıldızoglu and Sénégas (2013); Salle, Sénégas and Yildizoglu (2013), who analyze the credibility of central banks' inflation target announcements. Second, instead of incorporating only single macroprudential policy instruments (e.g. a loan-to-value ratio (LTV)), our experiments encompass complete regulatory regimes, i.e. Basel II and Basel III. This enables us to run counterfactual simulations of the model relative to a benchmark scenario which is comparable with the economic environment of the pre-crisis period, i.e. a situation with a rather loose regulatory environment (Basel II) and a central bank focusing solely on price and output stability. Based on this benchmark scenario, we then test the impact of a tightened financial regulation, of various degrees of a central bank's response to financial imbalances, and of a combination of both. As also done by Gelain et al. (2012), results are considered in terms of the two objectives of both policies, (macro)economic and financial stability, in order to shed light on potential conflicts and crowding-out effects.
Our experiments provide three main findings. First, assigning more than one objective to the monetary policy instrument in order to achieve price, output and financial stability simultaneously confirms the expected proposition of the Tinbergen principle, in the sense that it is not possible to improve financial stability in addition to the traditional goals of monetary policy. The results of our experiment show that, after a long phase of deregulation, "leaning against the wind" has a positive impact on price and output stability but affects the fragile financial system only marginally. Moreover, in a system in which banks have to comply with tight prudential requirements, a central bank's additional response to the build-up of financial imbalances does not lead to improved outcomes concerning either macroeconomic or financial stability. In contrast, using prudential regulation as an independent and unburdened policy instrument significantly improves the resilience of the system. Second, "leaning against the wind" should only serve as a first line of defense in the absence of prudential financial regulation. If the activity of the banking sector is already guided by an appropriate regulatory framework, the results are in line with Svensson (2012), who argues that "the policy rate is not the only available tool, and much better instruments are available for achieving and maintaining financial stability. Monetary policy should be the last line of defense of financial stability, not the first line". Macroprudential policy dampens the build-up of financial imbalances and contributes to the resilience of the financial system by restricting credit to the unsustainably high-leveraged part of the real economy. This strengthens the view of opponents who argue that both policies are designed for their specific purpose and that they should be used accordingly.
Third, our results confirm that, in line with Adrian and Shin (2008b,a), both policies are inherently connected and, thus, influence each other, which emphasizes that appropriate coordination is indispensable and that the prevailing dichotomy of the currently used linear quadratic framework may lead to misleading results.
The remainder of the paper is organized as follows: in section 2, we give an overview of the structure of the underlying ACE model (a part concerning common macroeconomic stylized facts replicated by the model is relegated to appendix B), followed by a detailed description of the conducted experiments in section 3. Section 4 provides a discussion of the results for different monetary policy rules, comparing their performance in terms of macroeconomic and financial stability. Section 5 concludes.

General Characteristics
The agent-based macroeconomic model (ACE model) presented in the following consists of six types of agents: households and firms representing the real sector, a central bank, a government and a financial supervisory authority forming the public sector, and a set of traditional banks (financial sector). Agents are heterogeneous in their initial endowments (e.g. productivity, number of employees or clients) and interact through a goods, labor and money market in order to pursue their own needs such as consuming or making profit. Figure 1 provides an overview of the relationships between types of agents on a monetary level. As a result of the interaction of heterogeneous agents, the model exhibits common macroeconomic stylized facts emerging through the course of the simulation, such as endogenous business cycles, GDP growth, unemployment rate fluctuations, balance sheet dynamics, leverage/credit cycles and constraints, bank defaults and financial crises, as well as the need for the public sector to stabilize the economy [Riccetti et al. (2015)] (see also appendix B).
Since the model should serve as an experimental lab to analyze policies regarding monetary policy and banking regulation, we focus on the monetary system and model it in great detail. Therefore, we adopt as much as possible of the functionality of the real-world template provided by the Bank of England's "UK Sterling Monetary Framework" [Bank of England (2014c)]. Here, the CB plays a crucial role since it implements monetary policy, as usual in developed countries, by setting a target rate which directly affects the whole set of existing interest rates, in particular the rates charged on loans to the real sector by means of increased refinancing costs. Through the resulting effect on credit demand, the CB's monetary policy transmits to overall economic activity, i.e. to production and price levels and, thus, to inflation and output. Therefore, the presented model is well suited to analyze the question of whether macro-financial stability issues should be an explicit concern of monetary policy decisions or whether they are better left to macroprudential regulation and banking supervision. In the remainder of this section, the behavior of each type of agent is described in detail.

The Real Sector
In order to build a full macroeconomic model, we implemented a stylized real sector consisting of households and firms.

Households
Every household (HH) h (with h = 1, . . . , H) starts with an initial labor skill ψ_h, a draw from a truncated normal distribution, i.e. ψ_h = max(0.5, x) with x ~ N(2, σ²), which determines both the initial productivity and the wage level of h 6 . If a HH is unemployed, it receives unemployment benefit from the government and starts searching for a job out of a fraction (α = 0.95) of all vacancies. On the labor market, HHs offer their labor skill and firms search for a number of workers who satisfy their specific labor skill demand, enabling them to meet their production target. If there are any matchings, i.e. if the HH faces vacancies in its currently observed subset of all vacancies that demand at least ψ_{h,t}, it is hired by a random firm from this individual subset and stays unemployed otherwise. As employees of firms, HHs contribute to the production of a (homogeneous) bundle of goods 7 in an order of magnitude representing their individual labor skill (see the production function of firms (5)). Moreover, HHs plan their weekly consumption level, c^p_{h,t}, and update it once a quarter. c^p_{h,t} is composed of an autonomous part co-varying with the average wage of the previous quarter and a part depending more on the current individual financial situation of HH h, i.e.
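For illustration, the truncated skill draw can be sketched in Python as follows; the value of σ is not reported in this excerpt and is assumed here:

```python
import random

def draw_labor_skill(rng, mu=2.0, sigma=0.5, floor=0.5):
    # psi_h = max(0.5, x) with x ~ N(mu, sigma^2).
    # sigma = 0.5 is an assumption; the paper does not report it here.
    return max(floor, rng.gauss(mu, sigma))

rng = random.Random(42)
skills = [draw_labor_skill(rng) for _ in range(10_000)]
```

By construction, no household starts below the skill floor of 0.5, and the population mean stays close to 2.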
where η represents the HH's adjustment speed to new levels of income and I_{h,t−12} the average weekly income of the previous quarter, including received wages, interest on deposits as well as dividends on an accrual basis. The planned consumption level only deviates from the actual level c_{h,t} in the case in which HH h cannot afford to consume c^p_{h,t} due to a lack of money or of supply. Thus, HH h might be restricted by its current bank deposits D_{h,t}, which are net of taxes on income (τ^I = 0.3), on capital gains (τ^CG = 0.25) and on consumption (τ^VAT = 0.2). D_{h,t} depends on the surplus of income over expenditures since the beginning of the simulation. The HH's sources of income include a mix of wages (w) and unemployment benefits (UB), depending on how long she was unemployed until t, as well as interest on her deposits (i^D). Moreover, at the end of each fiscal year, firms and banks (partially) distribute their profits in the form of dividends (d^F and d^B, respectively). Against these sources of income, the HH's expenditures consist of her previous consumption (until t − 1) and the investments in a firm or bank if she is a stakeholder of a corporation (e^F and e^B, respectively).
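The paper's consumption equation is not reproduced in this excerpt; as an illustration only, a simple partial-adjustment reading of the rule could look as follows, where both the functional form and the value of η are assumptions:

```python
def plan_consumption(prev_plan, avg_income_prev_quarter, eta=0.25):
    # Partial adjustment of the planned consumption level toward the
    # household's recent average weekly income I_{h,t-12} at speed eta.
    # Illustrative stand-in only: the paper's equation also contains an
    # autonomous component co-varying with the average wage, omitted here.
    return prev_plan + eta * (avg_income_prev_quarter - prev_plan)
```

For example, a household planning 100 per week whose average income rises to 200 would, at η = 0.25, raise its plan to 125 in the next quarter.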

Firms
The technology of firms follows the work of Stolzenburg (2015), where the author implements parts of the famous Solow growth model into an agent-based framework [Solow (1956)]. Hence, each firm f (with f = 1, . . . , F ) determines its production target q*_{f,t} for the next quarter according to a simple heuristic based on a target value for its capacity utilization of U* = s_{f,t}/q*_{f,t} = 0.75. Thus, U* < 1 leads to an expected spare production capacity of (1 − U*) q*_{f,t} above sales s_{f,t}, enabling firms to accommodate demand fluctuations. Hence, the firm's production target is set according to q*_{f,t} = s_{f,t}/U*. The production function for the weekly output of each firm is of the Cobb-Douglas type, with the aggregate labor skill currently used by firm f (Ψ_{f,t}) as input and a technology parameter A_t representing technological progress, since the labor productivity of HHs grows at a constant exogenous rate of g_A = 0.012 annually (or g^Q_A = 0.003 per quarter).
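The target-setting heuristic and the exogenous productivity growth can be sketched as follows (a minimal illustration; the Cobb-Douglas exponents are not shown in this excerpt and are therefore omitted):

```python
def production_target(sales_prev_quarter, u_star=0.75):
    # Target utilization U* = s/q* = 0.75 implies q* = s/U*, leaving
    # (1 - U*) q* of spare capacity to absorb demand fluctuations.
    return sales_prev_quarter / u_star

def technology(quarters, a0=1.0, g_q=0.003):
    # Technology parameter A_t grows at 0.3% per quarter (1.2% p.a.).
    return a0 * (1 + g_q) ** quarters
```

A firm that sold 75 units last quarter thus targets 100 units, a quarter of which is held as buffer capacity.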
The wage per unit of labor skill w_{f,t} offered by firms on the labor market also follows a simple heuristic with an update frequency of once per quarter. It grows at the same rate as labor productivity g^Q_A and takes current expected inflation (π^e_t) into account. Current expected inflation is a weighted sum of the annualized monthly inflation rates of the past 2 years, influenced by the CB's inflation target π* weighted by the CB credibility parameter χ_π = 0.25. Moreover, w_{f,t} also depends on the firm's weighted employment gap (Ξ_{f,t}) as an indicator of the firm's ability to hire enough workers to meet its production target given its currently offered wage.
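One plausible reading of the expectation-formation rule is a credibility-weighted blend of the CB target and past inflation; since the paper's exact weighting scheme over the past 24 months is not shown in this excerpt, the uniform weights below are an assumption:

```python
def expected_inflation(past_monthly_annualized, pi_star=0.02, chi=0.25):
    # Blend of the CB's inflation target (weight chi = credibility
    # parameter chi_pi = 0.25) and a uniformly weighted average of the
    # annualized monthly inflation rates of the past two years.
    # The uniform weighting is an assumption of this sketch.
    hist = past_monthly_annualized[-24:]
    pi_bar = sum(hist) / len(hist)
    return chi * pi_star + (1 - chi) * pi_bar
```

With past inflation exactly at the 2% target, the expectation stays at 2%; above-target inflation histories pull the expectation up, but only with weight 1 − χ_π.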
In order to finance its planned production in advance, firms request loans L_{f,t} from banks with a maturity of 10 years. The requested amount is determined by the expected labor costs for the subsequent quarter, given the current production target q*_{f,t}, with q^{-1}_{f,t}(q) as the inverse production function giving the amount of labor skill units needed to produce a given amount of output (here the firm's production target q*_{f,t}). Equation (10) shows that if firms have insufficient funds to cover the expected labor costs entirely through internal financing and, thus, have a positive demand for bank loans, they add a markup of 10% (κ = 1.1) on top of the expected labor costs to maintain an appropriate financial margin for their operational business. 8 After receiving a request from a firm, the bank decides on the interest to charge on the loan (i_{b,f,t}), which depends on the firm's ability to generate sufficient cash flows to meet its debt obligations during the past fiscal year. 9 The firm can then evaluate the profitability of the investment given the offered loan conditions. This decision is based on the internal rate of return: the probability of taking the loan under the offered conditions depends negatively on the offered interest rate i_{b,f,t}.
Hence, there might be cases in which the firm does not take the loan due to the bank's high risk premium as a result of the firm's poor ability to generate a sufficient amount of cash flows. In these cases of a loan rejection, the firm can only employ an amount of workers appropriate to its internal financing capacity.
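The loan demand and the firm's probabilistic acceptance decision can be sketched as follows; the text only states that the acceptance probability declines in the offered rate, so the exponential form and slope below are assumptions:

```python
import math

def loan_demand(expected_labor_cost, internal_funds, kappa=1.1):
    # Firms borrow only the shortfall over internal funds, with a 10%
    # operating margin on expected labor costs (kappa = 1.1).
    return max(0.0, kappa * expected_labor_cost - internal_funds)

def acceptance_prob(i_offered, slope=20.0):
    # Probability of accepting the offered loan conditions, declining in
    # the offered rate. Exponential shape and slope are assumptions;
    # only the negative dependence is stated in the text.
    return math.exp(-slope * i_offered)
```

A firm expecting labor costs of 100 with 50 in internal funds would request 60 (= 1.1 · 100 − 50); a firm that can self-finance entirely requests nothing.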
To set the retail price for a unit of the produced bundle, firms add a mark-up on expected unit costs (µ > 1) and account for expected inflation.
8 In the case of q_{f,t} > q*_{f,t}, the firm fires an adequate number of workers.
9 There is also the possibility of granting only part of the requested loan, but following a survey of the ECB, these cases are only of minor importance. The decision process used here represents over 80% of the decisions made by banks within the Euro area [ECB (2010)]. The decision process of banks concerning the granting of loans is described in detail in subsection 2.4.
The generated revenues are used to pay wages and, if any, to settle due parts of the firms' obligations from loan contracts, i.e. they make principal payments and pay interest to the bank. If a firm is not able to meet its debt obligations, it exits the market and all financial claims are cleared, in such a way that banks have to write off the outstanding loans after receiving the proceeds of the liquidation of the firm's assets, if any, and owners lose their share of the firm's equity. Assuming that the bankruptcy of a firm happened in t, a new firm enters the market in t + 24 + ε (where ε is a positive uniformly distributed integer between zero and 48), given that there exists a sufficiently large group of investors 10 . If all goes well and the firm meets its obligations until the end of the fiscal year, it determines its profit before taxation Π_{f,t} as revenues minus the cost of goods sold, where the cost of goods sold includes due interest on outstanding debt i^{debt}_f and the labor costs of the fiscal year (for a detailed description of the interest rates charged on loans, see section 2.4). In the case of Π_{f,t} > 0, firms are burdened by the government with a corporate tax, so that the profit after tax results as Π^{at}_{f,t} = (1 − τ^C) Π_{f,t}. From the remaining profit after taxation, θΠ^{at}_{f,t} serves as retained earnings to strengthen the internal financing capacity, while the residual (1 − θ)Π^{at}_{f,t} (with θ = 0.9) is distributed as dividends to equity holders.
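The end-of-year tax and distribution step can be sketched as follows (a minimal illustration; τ^C = 0.6 is taken from the government's tax parameters stated later in the text):

```python
def after_tax_profit(profit, tau_c=0.60):
    # Corporate tax applies only to positive profits; losses pass through.
    return profit * (1 - tau_c) if profit > 0 else profit

def split_profit(profit_at, theta=0.9):
    # Share theta of after-tax profit is retained to strengthen internal
    # financing; the residual (1 - theta) is distributed as dividends.
    retained = theta * profit_at
    dividends = profit_at - retained
    return retained, dividends
```

A pre-tax profit of 100 thus leaves 40 after tax, of which 36 is retained and 4 paid out as dividends.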

The Public Sector
We introduce the public sector before the financial sector because the description of bank agents is much clearer if all relevant parts of the monetary system affecting their behavior are introduced in advance. The public sector consists of a government, a central bank and a financial supervisory authority which imposes regulatory requirements on banks.

The Central Bank
In contrast to mainstream financial/credit market models building on the New Keynesian Model (NKM), the underlying monetary framework of the model follows the post-Keynesian theory of endogenous money [see Lavoie (2003) among others], i.e. the amount of money in the system is determined by the investment decisions of real sector agents (demand-driven) instead of by the supply of the CB (supply-driven). Thus, we implement a monetary system along the lines of the UK Sterling Monetary Framework of the Bank of England (BoE), using it as a template 11 . This orientation seems reasonable, since the BoE itself recently attracted attention in the field by implicitly accepting endogenous money theory in its in-house journal, the BoE Quarterly Bulletin [McLeay et al. (2014b,a)].
10 Firms that are shut down do not vanish from the economy. In order to ensure the stock-flow consistency of the model, these firms are merely inactive until a new group of HHs (investors) has enough capital to reactivate the firm [Dawid et al. (2014)].
At the heart of the UK reserve averaging scheme 12 is a real-time gross settlement (RTGS) system [Kelsey and Rickenbach (2014); Dent and Dison (2012); Nakajima (2011); Arciero et al. (2009)] which enables the CB to provide liquidity insurance to commercial banks via operational standing facilities (OSF) and, thus, to meet its lender-of-last-resort (LOLR) function. This means that the settlement of a transaction between real sector agents takes place as soon as a payment is submitted into the system (real-time) and that payments can only be settled if the paying bank has enough liquidity to deliver the full amount in central bank money (gross settlement, i.e. no netting takes place) [Galbiati and Soramäki (2011)]. Banks have to fund their reserve accounts for the current maintenance period 13 in advance by setting a target average for their reserve holdings, depending on their current interest-bearing deposits, and by pledging a suitable amount of collateral with the CB (monthly repo 14 with government bonds as collateral 15 ) [Ryan-Collins et al. (2012)]. In turn, banks' reserve holdings are remunerated at the CB's target rate i*_t on a period-average basis. For that reason, the CB defines a narrow 1%-range around the target balance of each bank. Hence, if a bank has met its reserve target range, it will be credited with the interest earned against its average balance at the end of each maintenance period.
However, through the course of the maintenance period, banks face an unpredictable stream of transactions between real sector agents, each affecting their reserve balances. Thus, economic activity usually leads banks to end up with an average reserve balance outside of their reserve target range, i.e. with either excess reserves or a reserve deficit. To ensure compliance with the target range, banks are encouraged to manage their liquidity appropriately. By charging a premium of i^{OSLF} − i* on the standing lending facility, the CB incentivizes banks to first turn to the open (interbank) money market and reallocate outstanding reserves through overnight repos with peers before turning to the CB's standing facilities [compare Lavoie (2003)]. We model the interbank market as a (decentralized) over-the-counter (OTC) market, which requires bank b (in need of reserves) to find a counterparty within the set of all other banks willing to lend reserves to b [Afonso and Lagos (2015)]. The conditions for overnight interbank repos are then based on bilateral negotiation about the volume and the interest charged (i^{MM}_{b,t}). Whereas the volume depends on the counterparty's current excess reserves, the money market rate i^{MM}_{b,t} faced by b depends on i*_t, the current financial soundness of bank b, and the current supply of excess reserves on the money market, expressed by Γ_t, which serves as a measure of how far the current aggregate average reserves (R_t) are away from the aggregate reserve target (R*_t). Hence, the prevailing incentive scheme shown in figure 4a leads to an individual money market rate for bank b, with ε(ξ_{b,t}) representing a small risk premium/discount (between −10 and +10 basis points) depending on b's financial soundness ξ_{b,t} (figure 4b shows this exemplarily for Γ_t ∈ (0, 2)). Table 1 shows the corresponding interest corridor built by the lending/deposit facility rates, which depends on the current target rate i*_t, as well as the parameter sets for σ_1, σ_2, σ_3 and σ_4.
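Since the paper's exact money-market-rate equation is not reproduced in this excerpt, the following sketch only illustrates the qualitative mechanism: the rate falls when excess reserves are abundant (Γ_t large), rises when they are scarce, carries a ±10 bp soundness premium, and is bounded by the standing-facility corridor. The linear response and the corridor width are assumptions:

```python
def money_market_rate(i_target, gamma, xi_premium_bp, corridor_bp=100):
    # Illustrative only: shift the rate down when gamma > 1 (excess
    # reserves abundant) and up when gamma < 1, add a soundness
    # premium/discount clamped to +/-10bp, then clamp the result to the
    # [deposit facility, lending facility] corridor around i_target.
    bp = 1e-4
    shift = (1.0 - gamma) * 0.5 * corridor_bp * bp   # assumed linear response
    premium = max(-10, min(10, xi_premium_bp)) * bp
    rate = i_target + shift + premium
    lower = i_target - corridor_bp * bp              # deposit facility rate
    upper = i_target + corridor_bp * bp              # lending facility rate
    return max(lower, min(upper, rate))
```

When aggregate reserves hit the aggregate target (Γ_t = 1) and the bank is of average soundness, the rate equals the target rate; no negotiated rate can leave the corridor, which is exactly what makes the standing facilities a backstop rather than the first resort.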
Since we have now described how the CB uses the target rate as the key instrument to transmit monetary policy in the model, we finally have to explain how decisions about its current level are made. The CB follows a standard Taylor rule under flexible inflation targeting in order to ensure price and output stability. Equation (19) can be considered as a benchmark representing the case of conventional monetary policy which does not target any financial stability measure, with i^r_t = π* = 0.02 and x^n_t representing the long-term trend of real GDP measured by applying the Hodrick-Prescott filter (with λ = 1600/4^4 = 6.25 for yearly data [Ravn and Uhlig (2002)]). The scheme's inherent interest incentive for banks, combined with being in full control of the target rate and, thus, of the prevailing interest corridor, enables the CB to perfectly steer interest rates, the indebtedness of the real sector and, hence, economic activity.
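The benchmark rule can be sketched as a standard textbook Taylor rule; since equation (19) and its response coefficients are not reproduced in this excerpt, φ_π = 1.5 and φ_x = 0.5 below are conventional values, not taken from the paper:

```python
def taylor_rate(inflation, output_gap, i_r=0.02, pi_star=0.02,
                phi_pi=1.5, phi_x=0.5):
    # Standard flexible-inflation-targeting Taylor rule as a stand-in for
    # the paper's equation (19): natural rate i_r = pi_star = 0.02 as in
    # the text; phi_pi and phi_x are textbook values (assumptions).
    return i_r + inflation + phi_pi * (inflation - pi_star) + phi_x * output_gap
```

At target inflation and a closed output gap the rule returns the neutral nominal rate i^r + π* = 4%, and it raises the target rate more than one-for-one with inflation (the Taylor principle).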

The Government
The government issues bonds with a face value of 1000 monetary units and a duration of 5 years. The fixed annual coupon orients itself at the interest rate on the money market at t, i.e. approximately at the target rate of the central bank, and lies slightly (15 basis points) above i*_t [Choudhry (2010)]. The present value of each bond is determined by its clean price (neglecting accrued interest) using the standard textbook formula from Bodie et al. (2010),

p^{clean}_{k,t} = Σ_{j=1}^{n_{k,t}} c_k / (1 + i*_t)^j + FV_{k,t} / (1 + i*_t)^{n_{k,t}} − c_k · Ω_{k,t} / Υ_{k,t},

where FV_{k,t} denotes the face value of bond k in t, c_k the coupon, n_{k,t} the number of remaining coupon payments at t, Ω_{k,t} the number of days since the last coupon payment, and Υ_{k,t} the total days in the coupon period. At the beginning of every simulation, the government brings money into the system by issuing bonds and selling them to the commercial banks and the CB, which pay by crediting the government's bank account (Balance Sheet 5). These deposits enable the government to spend, and every time the government runs out of deposits, it repeats this transaction in order to ensure its financial ability to act [Lavoie (2003)]. 17 Its expenditures for unemployment benefits to HHs and interest on outstanding public debt are financed by raising income taxes on wages (τ^I = 30%), a VAT on the consumption of goods (τ^VAT = 20%), a corporate tax on the profits of firms and banks (τ^C = 60%), and a tax on capital gains (τ^CG = 25%).
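The clean-price computation can be sketched as follows; the discount rate (here a flat per-period rate y, e.g. the prevailing target rate) is not pinned down in this excerpt and is therefore a parameter:

```python
def clean_price(face_value, coupon, n_remaining, days_since_coupon,
                days_in_period, y):
    # Textbook bond pricing (cf. Bodie et al.): present value of the
    # remaining coupons and the redemption payment (dirty price), minus
    # accrued interest c_k * Omega/Upsilon to obtain the clean price.
    dirty = sum(coupon / (1 + y) ** j for j in range(1, n_remaining + 1))
    dirty += face_value / (1 + y) ** n_remaining
    accrued = coupon * days_since_coupon / days_in_period
    return dirty - accrued
```

As a sanity check, a bond whose coupon rate equals the discount rate prices at par immediately after a coupon date, and accrued interest only lowers the clean price between coupon dates.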
In the case of a threatening default of a systemically important bank (SIB), i.e. of a bank that has a significant market share and, thus, a crucial role for the functioning of the payment system, the government bails out the institution in distress by waiving deposits and issuing new government bonds.

The Financial Supervisory Authority (FSA)

The financial supervisory authority agent aims to ensure the growth-supportive capacity of the financial sector by imposing micro- and macroprudential capital requirements on banks according to the current Basel III accord of the Basel Committee on Banking Supervision (BCBS) [Krug et al. (2015)] 18 . Except for the leverage ratio of 3%, all capital requirements are risk-based, i.e. they require a minimum amount of capital in relation to the riskiness of the bank's loan portfolio measured by risk-weighted assets (RWA). We calculate RWA_{b,t} by assigning risk weights to loans which depend on the current probability of default of firms and banks. Hence, the RWA are an increasing function of the client's D/E-ratio, i.e. of the client's probability of default (PD), with separate specifications for claims against firms and banks, respectively. Figure 5 shows the qualitative differences in risk weights between firms and banks based on their differing business models, leading to the fact that the latter can have a much higher D/E-ratio for the same risk weight compared to firms. Positive risk weights are assigned to assets resulting from loan contracts, whereas government bonds have a zero risk weight. The imposed requirements consist of a required core capital of 4.5%, extended by the capital conservation buffer (CConB) of 2.5% and a countercyclical buffer (CCycB) of up to 2.5%, which is set by the CB according to the rule described in Basel Committee on Banking Supervision (BCBS) (2010) and Drehmann and Tsatsaronis (2014); Agénor et al. (2013); Drehmann et al. (2010), i.e. according to the gap between the current credit-to-GDP ratio and its long-term trend determined by applying the Hodrick-Prescott filter 19 with a smoothing parameter λ = 1600 [Ravn and Uhlig (2002)]. In line with the regulatory proposal of the Bank for International Settlements (BIS), we set the lower and upper gap thresholds to N = 2 and M = 10. Finally, we impose surcharges on SIBs using the banks' market share measured by total assets as the indicator for their assignment to the buckets, i.e. if the corresponding threshold condition holds, b is assigned to bucket 6 − z, whereas an assignment to bucket 6 means no surcharge and to bucket 2 an extension of the risk-based capital requirement of 3.0% (the highest bucket, with a surcharge of 3.5%, is empty by definition).
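The BIS-style mapping from the credit-to-GDP gap to the countercyclical buffer can be sketched as follows, using the thresholds N = 2 and M = 10 stated in the text (no add-on below the lower threshold, the full 2.5% above the upper one, and linear interpolation in between):

```python
def ccycb_rate(credit_to_gdp_gap, n=2.0, m=10.0, max_buffer=0.025):
    # Countercyclical buffer add-on as a piecewise-linear function of the
    # gap between the credit-to-GDP ratio and its HP-filtered trend
    # (gap in percentage points, buffer as a fraction of RWA).
    if credit_to_gdp_gap <= n:
        return 0.0
    if credit_to_gdp_gap >= m:
        return max_buffer
    return max_buffer * (credit_to_gdp_gap - n) / (m - n)
```

A gap of 6 percentage points, halfway between the thresholds, thus triggers half the maximum buffer (1.25% of RWA).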

The Financial Sector (Commercial Banks)
The initial bilateral relationships between bank b (with b = 1, . . . , B) and real sector agents are assigned randomly, i.e. each household and firm chooses a bank where it places its deposits and requests loans. These relationships only change in the case of a default of an agent. In the case of a bank default, all clients of the insolvent bank randomly choose a new bank, and if a newly founded bank enters the market, clients of other banks switch to it with a small probability. New firms also choose their banks randomly. The same holds for the ownership relationships, since firms and banks are owned by households. Furthermore, we suppose that all transactions in the overdraft economy are conducted using only scriptural money, i.e. there exist no banknotes (cashless economy).
The endogenous provision of credit money to firms represents the heart of commercial banks' (traditional) business model. The granting of loans is based on a 3-stage decision process: first, after receiving a loan request from a firm, the bank checks whether it would still comply with the regulatory requirements if it granted the loan. Thus, the firm can only receive credit money if the bank's balance sheet provides enough regulatory scope to make more loans without violating financial regulation 20 . In case of a positive finding, banks, in a second step, set the interest on the loan i_{b,f,t} by consulting a simple internal risk model to evaluate the firm's creditworthiness. In fact, it determines the risk premium charged on the requested loan using the ratio of the firm's debt obligations and its revenues in order to measure the client's ability to generate sufficient cash flows to meet its debt obligations during the fiscal year. Thus, i_{b,f,t} moves in lock-step with the target rate i*_t and includes a basic mark-up of 2% as well as a firm-specific risk premium of 10% if firm f has generated an amount of cash flows which exactly equals its debt obligations for the upcoming fiscal year. The risk premium declines with the amount by which the cash flow exceeds the debt obligations. 21 The third and final step involves the firm's decision on the acceptance of the offered conditions.

20 A violation can have several reasons and can concern either the non-risk-based or the risk-based capital requirements or both. The granting of the requested loan can either lead to a violation of the leverage ratio due to the loan volume or to an increase in the bank's RWA which is too large because the client already exhibits a very high indebtedness. The implemented buffers do not restrict any lending activity, since a violation just leads to a (partial) payout block of dividends.
The probability of accepting the conditions is a negative function of i_{b,f,t} and, hence, of the current target rate i*_t (compare eq. (11)). So, in line with endogenous money theory, the money supply depends on the current indebtedness of the real sector (implicitly via the regulatory channel) and on the CB's current monetary policy decisions.
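The 3-stage process can be sketched as follows. The 2% mark-up and the 10% premium at exact cash-flow coverage are taken from the text above; the hyperbolic decline of the premium, the exponential acceptance function (the paper's eq. (11) is not reproduced here) and the leverage-only compliance check are simplifying assumptions.

```python
import math

def complies(equity, assets, new_loan, min_leverage=0.03):
    """Stage 1 (simplified): would the bank still meet the 3% leverage
    ratio after granting the loan?  The model additionally checks the
    risk-based requirements, which are omitted here."""
    return equity / (assets + new_loan) >= min_leverage

def loan_rate(target_rate, cash_flow, debt_obligations,
              markup=0.02, base_premium=0.10):
    """Stage 2: loan rate = target rate + 2% mark-up + risk premium.
    The premium is 10% when cash flows exactly equal debt obligations
    and declines with excess coverage (hyperbolic decline assumed)."""
    premium = base_premium * min(1.0, debt_obligations / cash_flow)
    return target_rate + markup + premium

def accept_prob(rate, sensitivity=20.0):
    """Stage 3: the firm's acceptance probability, decreasing in the
    offered rate (functional form and sensitivity are illustrative)."""
    return math.exp(-sensitivity * rate)
```

A rate offer is thus mechanically tied to the target rate: a tighter monetary policy raises i_{b,f,t}, which lowers the acceptance probability and, in aggregate, the endogenous money supply.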
In addition to their lending activity to the real sector, banks also have the opportunity to generate profits by exploiting the prevailing interest spreads.
If this condition holds, meeting the real sector's demand for credit has the highest priority, whereas lending excess reserves to peers or placing them at the CB plays a subordinated role. 22 Except for the rate charged on loans to the real sector, which moves in perfect lock-step with the target rate, we decided to widen the spread stepwise for higher levels of i*_t because this seems to be a property of Fed funds data in the past. 23 We conjecture that if monetary aggregates increase along with economic activity, the CB intends to provide more scope for banks to reallocate reserves among themselves through interbank lending before turning to the standing facilities. Therefore, the calculation of i^{MM}_{b,t} in equation (17) is carried out accordingly (compare also table 1). Moreover, the share of the profit after tax distributed to HH, i.e. the current dividend, depends on the bank's equity ratio and on whether it has to build up capital buffers or not.
In order to ensure the pure endogeneity of money in the model, bank agents have to manage their liquidity appropriately. The CB provides liquidity insurance for banks by means of standing facilities which can be used against collateral at the end of each settlement day. 24 But since the reserve target of banks only covers a fraction of their interest-bearing deposits (see eq. (15)) and the volume of payments to execute during the settlement day is unpredictable, the CB additionally provides secured short-term repos for banks in need of liquidity during the course of the settlement day. These reserves are referred to as intraday liquidity (IDL) and have to be repaid at the end of the settlement day [Bank of England (2014a); Dent and Dison (2012); Ryan-Collins et al. (2012)]. So, the provision of IDL ensures that any payment of a bank's client can be settled in real time and on a gross basis. 25 An exemplary representation of bank agents' liquidity management during the maintenance period is shown in figure 7. After settling the IDL balances with the CB, banks might have negative reserve accounts. In such a situation, banks have, as described above, the possibility to either borrow from peers or, if this is not possible for some reason, to use the OSLF and borrow the needed funds overnight from the CB. The associated costs can be seen as a penalty for a negative reserve account.

Design of Experiments (DOE)

Mishkin (2011) states that, despite the occurrence of the recent financial crisis, there is no reason to turn away from the traditional New Keynesian theory of optimal monetary policy, which is why we follow this tradition when measuring monetary policy outcomes. According to Verona et al. (2014), the assessment of the research question formulated above entails three main issues, i.e.
(i) determination of a financial stability measure, (ii) modelling of the CB's policy response, (iii) determination of a criterion for policy effectiveness.
Then policy outcomes will be compared in order to show whether crisis mitigation is better achieved with a monetary policy reaction or with financial regulation, i.e. macroprudential policy.
In this regard, the indicator used to measure financial instability, to which the CB should respond, is indeed a crucial issue. Woodford (2012) suggests that, from a theoretical point of view, using the financial sector's leverage would be the natural choice. However, Stein (2014) argues that this would be hard to measure in a comprehensive fashion and that one should better stick to a broader measure of private sector leverage. He points to the work of Drehmann et al. (2012); Borio and Drehmann (2009); Borio and Lowe (2002), which shows that the ratio of credit to the private non-financial sector relative to GDP (the credit-to-GDP ratio) has considerable predictive power for financial crises. Hence, we try to shed some light on these issues by comparing the policy outcomes of the CB's response to a measure of the financial sector's leverage, which targets a prudent balance sheet structure of the aggregate banking sector [Adrian and Shin (2008b,a)], as well as to the credit-to-GDP ratio. In order to address (ii), the following paragraphs describe the implementation in detail:

• In line with the literature on early warning indicators for financial crises [Babecký et al. (2013); Gadanecz and Jayaram (2009)], we construct a composite financial stability indicator (CFSI) and augment the standard instrument rule by the deviation of the CFSI from its target value CFSI*, with i^r_t = π* = 0.02 and x^n_t representing the long-term trend of real GDP measured by application of the Hodrick-Prescott filter (with λ = 1600/4^4 = 6.25 for yearly data [Ravn and Uhlig (2002)]). The CFSI_t consists of the average D/E-ratio of the banking sector as well as of the inverse of banks' average equity ratio. As a benchmark, we set CFSI* = 6, which corresponds to an average D/E-ratio in the banking sector of 33 (or an average leverage ratio of approx. 3%) as well as an average equity ratio of 7% core capital, both representing current thresholds of the Basel III accord.
This setup leads to an increasing (declining) CFSI if the banking sector gets more fragile (stable) over time.
• In experiments in which the CB responds to jumps in the credit-to-GDP ratio 26 , target rate decisions are guided by the analogously augmented rule, with Λ_t as defined in eq. (23). The credit-to-GDP gap Λ_t − Λ^n_t is determined by the difference between the current credit-to-GDP ratio and its long-term trend, measured by means of applying the Hodrick-Prescott filter with a smoothing parameter λ = 6.25 [Ravn and Uhlig (2002)].
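Both response variants share the same structure: the standard Taylor-type rule plus δ_s times a financial-stability gap (CFSI_t − CFSI* in the first variant, Λ_t − Λ^n_t in the second). Since the paper's exact equations are not reproduced above, the additive textbook form below is an assumption, as are the example coefficient values.

```python
def augmented_taylor(pi, output_gap, fs_gap,
                     r_star=0.02, pi_star=0.02,
                     d_pi=1.5, d_x=0.5, d_s=1.0):
    """Standard Taylor rule plus a 'leaning against the wind' term:
    the CB raises the target rate by d_s times the financial-stability
    gap (CFSI - CFSI*, or the credit-to-GDP gap).  Additive form and
    coefficients assumed; the paper's exact specification may differ."""
    return (r_star + pi_star
            + d_pi * (pi - pi_star)
            + d_x * output_gap
            + d_s * fs_gap)
```

Setting d_s = 0 recovers the conventional dual-mandate rule, which is exactly the benchmark scenario used in the experiments below.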
Concerning (iii), there are two main traditions in the literature. The first is to search for the policy that maximizes social welfare, i.e. maximizes the utility function of HH; but, according to Verona et al. (2014), this approach has some drawbacks, which is why we follow the second one, that is, to search for the policy that best achieves the objective at hand by minimizing loss functions. For the sake of clarity, we take up the approach of Gelain et al. (2012) and differentiate between a (macro)economic stability loss (L^MS_{δs,k,m}) and a financial stability loss (L^FS_{δs,k,m}). Hence, we define two loss functions in order to easily evaluate outcomes in both dimensions, whereby the former is usually defined as the weighted sum of the variances of inflation, the output gap and nominal interest rate changes 27 , i.e.
L^MS_{δs,k,m} = α_π Var(π_{δs,k,m}) + α_x Var(x_{δs,k,m}) + α_i Var(i_{δs,k,m})

with α_π = 1.0, α_x = 0.5, α_i = 0.1 [Agénor et al. (2013); Agénor and Pereira da Silva (2012)]. The latter, addressing financial stability (L^FS_{δs,k,m}), is defined in terms of the weighted sum of the volatility of the financial stability indicator in charge, the average burden of a bank bailout for the public sector, measured as the fraction of the average bailout costs for the government and the average number of bailouts, as well as the average numbers of bank and firm defaults (ζ_{δs,k,m}, µ_{δs,k,m} and γ_{δs,k,m}, respectively).
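The macroeconomic loss can be computed directly from simulated series; a minimal sketch with the weights stated above (L^FS is analogous, but its weights are not reported here, so it is omitted):

```python
def variance(xs):
    """Population variance of a simulated series."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def macro_loss(pi, x, i, a_pi=1.0, a_x=0.5, a_i=0.1):
    """Macroeconomic stability loss: weighted sum of the variances of
    inflation, the output gap and the nominal interest rate, using the
    weights from Agenor et al. (2013) cited in the text."""
    return a_pi * variance(pi) + a_x * variance(x) + a_i * variance(i)
```

In the experiments this loss is evaluated per run and per parameterization (δ_s, k, m) and then averaged over the Monte Carlo seeds.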
The technical implementation of the experiments can be outlined as follows. In order to shed light on the question whether central banks should expand their dual mandate to include financial stability issues or whether this should better be left to financial regulation, the performance of various policy rules (scenarios) is evaluated in counterfactual simulations of the underlying agent-based (disequilibrium) macroeconomic model. 28 Therefore, we conduct Monte Carlo simulations for random seeds 1 . . . 100 29 , with every run having a duration of T = 3000 periods; the chosen setup consists of 125 HH, 25 firms and 5 banks. 30 According to our setting, 31 this duration translates into approximately 60 years. Hence, for the analysis, we take the last 50 years (2400 periods) into account and use the first 600 periods as initialization phase. The analyzed scenarios differ in the implemented regulatory regime, i.e. banks have to comply either with regulatory requirements in line with the Basel III accord or with its predecessor, Basel II. A further difference consists in the CB's response to the financial stability measure, which can be either the CFSI or the credit-to-GDP gap, leading to 4 different policy scenarios to analyze.
For each of these 4 scenarios, we basically follow the idea of the recent "model-based analysis of the interaction between monetary and macroprudential policy" of the Deutsche Bundesbank [Deutsche Bundesbank (2015)], which searches for optimal values of the coefficients in the monetary policy rule using three differing DSGE models including a macroprudential rule. Hence, we conduct a grid search within the three-dimensional parameter space spanned by δ_π ∈ [1, 3], δ_x ∈ [0, 3] and δ_s ∈ [0, 2] 32 , whereby the case of m = Basel II (no macroprudential policy) and δ_s = 0.0 (no leaning against financial imbalances by the CB) represents the benchmark scenario. This benchmark is chosen because it is comparable with the situation prior to the recent financial crisis, i.e. a CB solely focusing on price and output stability through a traditional implementation of monetary policy using the standard Taylor rule (eq. (19)) as well as a rather loose regulatory environment.
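The grid search itself is straightforward; a sketch over the parameter space stated above (the step size is our choice for illustration — the paper does not report its grid resolution — and `loss_fn` stands in for the Monte Carlo average of the weighted loss):

```python
from itertools import product

def frange(lo, hi, step):
    """Inclusive evenly spaced grid points over [lo, hi]."""
    n = round((hi - lo) / step)
    return [lo + k * step for k in range(n + 1)]

def grid_search(loss_fn, step=0.25):
    """Exhaustive search over the policy-rule coefficients
    d_pi in [1, 3], d_x in [0, 3], d_s in [0, 2].
    Returns the loss-minimizing (d_pi, d_x, d_s) triple."""
    grid = product(frange(1.0, 3.0, step),
                   frange(0.0, 3.0, step),
                   frange(0.0, 2.0, step))
    return min(grid, key=lambda p: loss_fn(*p))
```

With a dummy quadratic loss centered at (1.5, 0.5, 0.0), the search recovers exactly that triple; in the actual experiments each grid point requires a full batch of Monte Carlo runs, which is what makes the exercise computationally expensive.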
So, the analysis procedure for the raw data produced by the model includes the following steps: A. A grid search to detect areas of best-performing parameterizations for all scenarios in order to find data points/parameter combinations to examine more closely (using contour plots and heat maps).
B. A closer look at the composition of losses in the best-performing areas and the way they differ from the benchmark case (using distribution box plots).
28 The ACE model is programmed in Scala 2.11.7 and the code is available upon request to s.krug@economics.uni-kiel.de. 29 We chose only 100 seeds because of the sheer amount of data points to simulate and the corresponding time restrictions. 30 We have also conducted experiments with a setup which follows Riccetti et al. (2015), implementing 500 households, 80 firms and 10 banks, but the results were qualitatively the same. 31 Within our model, every tick represents a week and every month has 4 weeks, which adds up to 48 weeks for an experimental year. 32 The monthly report of March 2015 of the Deutsche Bundesbank states this parameter space as commonly used for DSGE models and refers to Schmitt-Grohé and Uribe (2007) in this regard.
C. Analysis of micro data to figure out which micro-level processes lead to the observed performance increase relative to the benchmark.
The next section presents the results of the described experiments.

Discussion of Results
We start from a situation comparable to the pre-crisis period, i.e. without any macroprudential policy in place and without any "leaning against the wind" by the central bank (δ_s = 0). This is the benchmark case (representing 100%) and all other results (losses) are expressed relative to this case (in percent of the corresponding benchmark loss). It can be found in the picture matrix of figure 8 at coordinates (1,1). All results of the four analyzed scenarios in figures 8-14 are m × n matrices whose rows represent weighted combinations of the macroeconomic and financial stability loss functions, i.e. of eq. (29) and (30), with m_1 (α_L = 1), . . . , m_5 (α_L = 0). Moreover, the columns represent layers of the δ_s-dimension of the parameter space, with n_1 (δ_s = 0.0), . . . , n_9 (δ_s = 2.0).
Scenario 1: Response to financial sector leverage in a loose regulatory environment

Figure 8 shows the losses for the direct response to financial sector leverage in a rather loose regulatory environment (Basel II). If policy makers keep their focus on the traditional monetary policy goals of price and output stability (α_L = 1; first row), "leaning against the wind" (δ_s ≈ 1.0) has a positive effect on these goals for common values of δ_π and δ_x. In terms of financial stability (α_L = 0.0; 5th row), the results show that such an extension of the central bank's mandate only leads to minor improvements. This stems mainly from the already existing fragility of the system due to the lack of an appropriate regulatory environment. Of course, since there are no conflicting effects of δ_s > 0 on L^MS and L^FS in this scenario, i.e. no trade-off, implementing an extended monetary policy which also tries to incorporate financial stability issues (α_L = 0.5) still leads to a gain relative to the benchmark. Figure 9 shows in detail how the individual components of the loss functions react to the central bank's response. Here, the caution against the consequences of an overreacting monetary policy does not seem to be justified. Indeed, the volatility of the target rate increases significantly, but at the same time the volatility of inflation and the output gap decreases except for some tail events, which, in turn, seems to lead to lower firm and considerably lower bank default rates. Also the tail risk of extremely high fiscal costs exhibits a large decline. The blue, dashed distribution represents the benchmark scenario while the red, solid one represents the counterfactual scenario.
Scenario 2: Response to unsustainable credit growth in a loose regulatory environment

Figure 10 tells basically the same story for the response to the credit-to-GDP gap, meaning that in a poorly regulated financial system both analyzed transmission channels of monetary policy do not make much of a difference. Again, we can have a look at the composition of minimum losses. This time the volatility of the target rate is reduced tremendously, likewise that of inflation. In contrast to the direct tackling of banks' balance sheet structure, a response to jumps in the credit-to-GDP ratio only seems to have marginal effects on the resilience of the financial system. While the variance of firm and bank defaults increases, the fiscal costs of banking crises just seem to improve in the probability of extreme events. Again, there is no conflict between policy targets, meaning that also with a response to unsustainable credit growth as an indicator of financial imbalances, "leaning against the wind" can contribute to the targets of policy makers. To sum up, our results concerning a deregulated system confirm the expected proposition of the Tinbergen principle in the sense that it is not possible to improve financial stability in addition to the traditional goals of monetary policy when addressing both distinct goals (macro and financial stability) using only monetary policy as policy instrument 33 .

Figure 11: Minimum loss given a response to the credit-to-GDP gap under Basel II; δ_π = 1.5; δ_x = 2.5; δ_s = 1.0; α_L = 1; panels (e) and (f) show bank defaults (ρ_{δs,k,m}) and firm defaults (γ_{δs,k,m}).
Scenario 3: Response to financial sector leverage in a tight regulatory environment

If the supervisory authorities now decide to terminate a period of significant financial deregulation by burdening financial intermediaries with various prudential requirements, as happened in the aftermath of the recent financial crisis, the picture is somewhat different. With macroprudential policy as a separate and independent policy instrument to tackle financial instability, a supplementary action by the central bank seems to be counterproductive (cf. figure 12). Given the setting of the current scenario, the loss is minimized if central bankers use the monetary policy instrument exclusively to target the traditional goals, i.e. the common dual mandate, because the tighter financial regulation already serves as first line of defense against banking crises. Thus, any additional intervention via the target rate has a negative impact on the traditional monetary policy goals. Moreover, the results show that without an active guidance of economic activity through monetary policy, financial stability cannot be achieved, i.e. for δ_π ≈ 1.25 losses increase significantly as does the fragility of the system, which underpins the above-mentioned common view that inflation can be seen as one of the main sources of financial instability. Hence, our results confirm, in line with Adrian and Shin (2008a,b), that both policy instruments are inherently connected and complementary and thus influence each other, which emphasizes that an appropriate coordination is indispensable and that the dichotomy of the currently used linear-quadratic framework may lead to misleading results. Having a closer look at the composition of the minimum loss, figure 13 shows that even without a central bank which leans against the wind, both the traditional goals of monetary policy and the goal of a much safer banking sector seem to be achievable simultaneously, leading to positive effects on the real economy.
Put differently, the results suggest that a tightening of financial regulation comes only at marginal costs in terms of the central bank's primary goals (macroeconomic stability) but can significantly improve financial stability within the artificial economy. Under the Basel III accord, the volatility of inflation rises while the volatility of output and interest rates decreases vastly. In contrast, figures 13d-13f highlight the considerable role of an appropriate degree of financial regulation for the resilience of the financial system. The fiscal costs caused by the need to recapitalize significantly large institutions (government bailouts of banks which are "too big to fail") could be lowered tremendously. This stems mainly from the fact that the tail risk concerning the occurrence of bankruptcy cascades, which massively boost fiscal costs, could be strikingly decreased by providing an incentive scheme which is sufficiently able to control banks' risk appetite through the imposition of prudential regulatory requirements. While the number of bank defaults also decreases significantly, the more interesting part of the results is the effect of a tightened banking regulation on the real sector. The relatively stable range of firm defaults under Basel II (≈ 550 defaults per run) turns into a range with slightly increased variance but with a significantly lower mean. This stems from the fact that banks under Basel III have less lending capacity per unit of capital and also face tighter leverage restrictions. At first glance one might argue that this could lead to non-exhausted growth potential, but it rather seems to implicitly restrict lending activity to the already (unsustainably) high-leveraged part of the real sector, dampening the build-up of financial imbalances and, therefore, improving the overall sustainability of economic activity.
Hence, the implementation of macroprudential policy has the effect that banks are more cautious in their lending activity: they have to ponder whether to grant a credit to a firm, since their lending capacity is much more sensitive to a possible future non-performance of their customers.
Scenario 4: Response to unsustainable credit growth in a tight regulatory environment

For the response to the credit-to-GDP gap, the qualitative results are similar to those of a direct response to unsustainable levels of leverage in the financial sector (scenario 3). δ_s > 0 has almost the same negative impact on the traditional monetary policy goals. The major difference here is that the resilience of the financial system does improve slightly for moderate levels of δ_s, i.e. the minimum loss given an exclusive focus on L^FS (α_L = 0) is achieved for δ_s = 0.5. But since it is doubtlessly useful to search for the best compromise between both targets, δ_s = 0.0 would be appropriate due to the negative effect on the volatility of inflation rates. The composition of the minimum loss also differs from that of a response to the CFSI, mainly in the higher number of bank defaults, although fiscal costs and firm defaults decline sharply. This phenomenon seems to stem from the conflicting effects of the presence of prudential requirements (positive) and of δ_s > 0 (negative) on the financial system. Thus, there are still cases in which tax payers are burdened with high costs of banking crises, but stricter lending standards are clearly beneficial in order to prevent frequent massive public sector interventions, which is in line with the findings of Rubio and Carrasco-Gallego (2014) and Gelain et al. (2012). Also in line with Gelain et al. (2012) is the finding that a direct interest response to excessive credit growth in the central bank's interest rate rule can stabilize output but has the drawback of magnifying the volatility of inflation.

Concluding Remarks
The aim of this paper is to shed some light on the current debate on whether central banks should lean against financial imbalances and whether financial stability issues should be an explicit concern of monetary policy decisions or whether these should be left to macroprudential regulation and banking supervision. Against the background of the pre-crisis situation, in which financial regulation was far too loose and central banks just focused on their usual dual mandate, two policies have been considered adequate to increase the overall resilience of the financial system, i.e. either monetary or macroprudential policy (or a combination of both). So, we also shed some light on the nexus between financial regulation and monetary policy by considering the outcome of policy experiments in terms of macroeconomic and financial stability. As a framework for the analysis, we present an agent-based macro-model with heterogeneous interacting agents and endogenous money. The central bank agent plays a particular role here since it controls market interest rates via monetary policy decisions which, in turn, affect credit demand and overall economic activity. Therefore, we think that the presented model is well suited to analyze the question at hand.
Our simulation experiments provide three main findings. First, assigning more than one objective to the monetary policy instrument in order to achieve price, output and financial stability simultaneously confirms the expected proposition of the Tinbergen principle in the sense that it is not possible to improve financial stability in addition to the traditional goals of monetary policy. The results of our experiments show that, after a long phase of deregulation, leaning against the wind has a positive impact on price and output stability but affects the rather fragile financial system only marginally. Moreover, in a system in which banks have to comply with rather tight prudential requirements, a central bank's additional response to the build-up of financial imbalances does not lead to improved outcomes concerning both macroeconomic and financial stability. In contrast, using prudential regulation as an independent and unburdened policy instrument significantly improves the resilience of the system. Second, leaning against the wind should only serve as a first line of defense in the absence of prudential financial regulation. If the activity of the banking sector is already guided by an appropriate regulatory framework, the results are in line with Svensson (2012) who argues that "the policy rate is not the only available tool, and much better instruments are available for achieving and maintaining financial stability. Monetary policy should be the last line of defense of financial stability, not the first line". Macroprudential policy dampens the build-up of financial imbalances and contributes to the resilience of the financial system by restricting credit to the unsustainably high-leveraged part of the real economy. This strengthens the view of proponents of Tinbergen's principle who argue that both policies are designed for their specific purpose and that they should be used accordingly.
Third, our results confirm, in line with Adrian and Shin (2008b,a), that both policies are inherently connected and thus influence each other, which emphasizes that an appropriate coordination is indispensable and that the dichotomy of the currently used linear-quadratic framework may lead to misleading results.
Finally, the present paper helps to understand that the famous principle of Tinbergen has indeed its justification, since using monetary policy to achieve additional goals merely leads to an overburdened policy instrument that still produces only one-dimensional effects.
Moreover, Olsen (2015) is right when arguing that financial regulation probably cannot do it alone and that it needs support, but without overburdening monetary policy's mandate. This, however, seems to be the crux of the matter. Indeed, too much can be done in the pursuit of crisis mitigation, since additional central bank actions can also result in rather counterproductive activism, contributing to unintended volatility rather than strengthening the resilience of the system. In any case, we think that further research is needed in order to further explore the nexus between monetary policy and financial regulation and to avoid such tensions.

A Functional Model Specification
Probability that firm f accepts the loan L_{f,t} given the offered conditions [table of functional specifications not reproduced]. a In Dosi et al. (2014), aggregate R&D investments are used. We use, instead, the firm sector's requested amount of loans from banks as a proxy for their investment in the production of goods. b Described as a general characteristic of an economy, i.e. without explicit notion of empirical studies, and found in Riccetti et al. (2015).
In order to validate the output data and the results of the presented agent-based macromodel, we use this appendix to jointly replicate a wide range of common empirical regularities like it has been done for other ACE models which are already accepted in the field of policy advice. In this context, the Keynes+Schumpeter model developed in Dosi et al. (2006Dosi et al. ( , 2008Dosi et al. ( , 2010Dosi et al. ( , 2013Dosi et al. ( , 2014Dosi et al. ( , 2015 or the model described in Riccetti et al. (2015) should be mentioned since both show that (decentralized) interactions among heterogeneous agents give rise to emergent macroeconomic properties. 34 In both cases, the authors are able to validate their results by showing in detail how the model's simulated macroeconomic dynamics lead to characteristic patterns and distributions within their experimental data that coincide with real macro data. According to Fagiolo et al. (2007); Fagiolo and Roventini (2012), this is the appropriate approach to show a robust empirical validation of the model framework and, hence, of the "computational lab" leading to plausible and comparable results when testing and analyzing various policy experiments. 35 To the best of our knowledge, the list of stylized facts to be met shown in Dosi et al. (2014), is the most complete one which is why we use it as a guide for the validation process of our model. The table is only extended by some additional facts found in Riccetti et al. (2015). Furthermore, we set the number of Monte Carlo simulation to be 1000, i.e. the experiments are 34 Riccetti et al. 
(2015) state that "[i]n particular, simulations show that endogenous business cycles emerge as a consequence of the interaction between real and financial factors: when firms profits are improving, they try to expand the production and, if banks extend the required credit, this results in more employment [;] the decrease of the unemployment rate leads to the rise of wages that, on the one hand, increases the aggregate demand, while on the other hand reduces firms profits, and this may cause the inversion of the business cycle, and then the recession is amplified by the deleveraging process." 35 Dosi et al. (2014) explicitly notes that this way of model validation, i.e. matching a large number of stylized facts simultaneously, is eminently costly and time-consuming. We can confirm this view. Going through table 3 step-by-step, the first macroeconomic stylized facts (SF1) would be the ability of the model to produce endogenous and self-sustained GDP growth characterized by persistent fluctuations both in nominal and real terms. Figure 16a shows the average log of nominal GDP for simulations with random seeds 1 . . . 1000 which is steadily growing whereas figure 16b shows exemplary the dynamics of nominal GDP of a single run. The right panel exhibits moderate fluctuations at the beginning of the simulation which are increasing with economic activity and overall size of the economy leading to business cycles including booms and deep downturns. The same holds for real GDP (see figure 16c/16d). Moreover, the comparison of both time series reveals the fact that the business cycles do not vanish when building the average of various simulation runs but are much more regular. The second replicated stylized fact directly connects on the first one and follows the empirical studies of Fagiolo et al. 
(2008) and Castaldi and Dosi (2009), where the authors have shown that real data sets of GDP growth rates exhibit fat-tailed distributions compared to their Gaussian benchmarks. This also holds for our model, both in nominal (figure 17a) and real terms (figure 17b); in figure 17, the bins represent the data from the model while the blue line is the exponential fit of the data.
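As an illustration of how such a tail check can be run on a simulated growth-rate series, the following sketch compares the sample's excess kurtosis against the Gaussian benchmark. The Laplace draw is a hypothetical placeholder standing in for model output, not data from the model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-in for simulated GDP growth rates; a Laplace draw
# mimics the fat-tailed shape discussed in the text (NOT model output).
growth = rng.laplace(loc=0.005, scale=0.02, size=5000)

# Excess kurtosis is 0 for a Gaussian; positive values signal fat tails
# (the Laplace distribution has a theoretical excess kurtosis of 3).
excess_kurtosis = stats.kurtosis(growth)
print(excess_kurtosis > 1.0)
```

In practice one would feed the simulated growth rates from the Monte Carlo runs into `stats.kurtosis` instead of the synthetic sample.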
Concerning the recessions occurring during the simulations, we can confirm that the majority lasts for rather short periods of time and that their frequency declines substantially with rising duration. Empirical data shows that recession durations are approximately exponentially distributed, which is also the case in our experimental data (see figure 18). To verify whether our model can replicate SF4, we again follow Dosi et al. (2014) and bandpass filter the time series for GDP, consumption and firm investment in order to detrend the data and to analyze their behavior at business cycle frequencies. As figure 19 shows (volatility of GDP in blue, of consumption in orange, of investment in green), the data produced by our model is in line with the empirical findings, since the fluctuations of consumption are slightly smaller compared to GDP while firm investment is much more volatile than output.
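The relative-volatility comparison behind SF4 can be sketched as follows. Instead of the exact bandpass filter used above, the snippet detrends three synthetic series with a crude moving-average filter; all series and parameters are hypothetical stand-ins chosen only to reproduce the qualitative pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
trend = 0.005 * np.arange(T)                          # common growth trend
cycle = 0.02 * np.sin(2 * np.pi * np.arange(T) / 40)  # 40-period business cycle

# Hypothetical log series: consumption is smoother than GDP,
# investment is far more volatile (mirroring the SF4 pattern).
log_gdp  = trend + 1.0 * cycle + rng.normal(0, 0.005, T)
log_cons = trend + 0.7 * cycle + rng.normal(0, 0.004, T)
log_inv  = trend + 3.0 * cycle + rng.normal(0, 0.020, T)

def cyclical_component(x, window=41):
    """Crude detrending: subtract a centered moving average."""
    trend_est = np.convolve(x, np.ones(window) / window, mode="same")
    return (x - trend_est)[window:-window]            # drop edge effects

vol = {name: np.std(cyclical_component(s))
       for name, s in [("gdp", log_gdp), ("cons", log_cons), ("inv", log_inv)]}
print(vol["cons"] < vol["gdp"] < vol["inv"])
```

A proper replication would use the same bandpass filter as Dosi et al. (2014); the moving-average detrending is merely the simplest filter that isolates business cycle frequencies well enough to compare volatilities.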
While the stylized facts 1-4 have a general macroeconomic character, the following ones focus on the drivers of prevailing economic activity and, thus, the business cycle. This means that the pro- and counter-cyclicality of key variables is essential to ensure the proper functioning of the modelled monetary economy. Overall, they shed some light on the development of the lending activity and on the resulting financial stability dynamics over time. The first fact here is the pro-cyclicality of firms' aggregate investment, which tends to co-move with the business cycle (figure 20). Moreover, as Lown and Morgan (2006) have shown empirically, there exists a strong link between the total debt outstanding in the firm sector (figure 21a) and the profits of the banking sector (figure 21b), both being highly pro-cyclical. Hence, the lending activity co-moves with the business cycle, whereas the experience from past financial crises suggests that the build-up of debt imbalances leads to downturns triggered by peaks in default rates, which, in turn, results in rather counter-cyclical behavior of credit defaults. Figure 22 shows that these facts are also features of our model and can be simultaneously replicated as well.
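On detrended series, pro- and counter-cyclicality reduce to the sign of the contemporaneous correlation with output. The series below are hypothetical stand-ins with the co-movement pattern described above built in, merely to show the check itself:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400
cycle = np.sin(2 * np.pi * np.arange(T) / 50)

# Hypothetical detrended series: credit loads positively on the cycle
# (pro-cyclical), credit defaults load negatively (counter-cyclical).
gdp      =  1.0 * cycle + rng.normal(0, 0.3, T)
credit   =  0.8 * cycle + rng.normal(0, 0.3, T)
defaults = -0.6 * cycle + rng.normal(0, 0.3, T)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(corr(gdp, credit) > 0, corr(gdp, defaults) < 0)
```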
Moreover, the slightly lagged correlation between the indebtedness of the firm sector and credit default rates can be replicated just as well. Figure 23 shows very clearly that, in our experimental data, the build-up of real sector debt imbalances is accompanied by banks facing excessive risk of bad debt and is thus frequently paired with periods of financial distress, which translates into economic downturns.
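Such a lagged debt-default relationship can be made visible with a simple cross-correlogram: scan a range of lags and locate the peak correlation. The snippet builds two hypothetical series in which defaults follow debt with a known delay and then recovers that delay:

```python
import numpy as np

rng = np.random.default_rng(5)
T, true_lag = 400, 5
cycle = np.sin(2 * np.pi * np.arange(T) / 50)

# Hypothetical series: firm debt follows the cycle, credit defaults
# mirror debt `true_lag` periods later (np.roll's wrap-around at the
# start is negligible for T >> true_lag).
debt = cycle + rng.normal(0, 0.4, T)
defaults = np.roll(debt, true_lag) + rng.normal(0, 0.2, T)

def xcorr(a, b, k):
    """Correlation between a[t] and b[t + k]."""
    return np.corrcoef(a[:-k], b[k:])[0, 1] if k > 0 else np.corrcoef(a, b)[0, 1]

corrs = [xcorr(debt, defaults, k) for k in range(11)]
estimated_lag = int(np.argmax(corrs))  # lag at which the correlogram peaks
print(estimated_lag)
```

Applied to the simulated indebtedness and default-rate series, the lag at the correlogram's peak quantifies how far credit defaults trail the build-up of debt.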
In order to cope with empirical regularities of financial crises data, we define crises as periods from the first bank default until all B banks are back in business. The empirical work of Reinhart and Rogoff (2009) suggests that the distribution of the duration of these periods is positively (right) skewed. This also holds for our model. Moreover, the ratio of fiscal costs to GDP is computed for such periods of financial distress. These fiscal or restructuring costs caused by financial crises consist mainly of recapitalization costs to stabilize the banking sector and, in reality, the distribution of the ratio is characterized by excess kurtosis (here above 12), i.e. fat tails, which is also the case in our experiments (see figure 25). 36 And last but not least, our experimental data exhibits a Phillips curve (figure 26).
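Both distributional checks, right skew of crisis durations and excess kurtosis of the fiscal costs-to-GDP ratio, can be run with standard moment estimators. The samples below are hypothetical placeholders chosen only for their shape, not output of the model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical crisis durations: lognormal draws are right-skewed,
# like the crisis-length distribution in Reinhart and Rogoff (2009).
durations = rng.lognormal(mean=1.0, sigma=0.8, size=2000)

# Hypothetical fiscal costs-to-GDP ratios: Student-t draws (df=3)
# have heavy tails, i.e. large positive excess kurtosis.
cost_ratio = stats.t.rvs(df=3, size=2000, random_state=4)

print(stats.skew(durations) > 0, stats.kurtosis(cost_ratio) > 0)
```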
Finally, the replicated stylized facts shown above indicate the relevance of leverage cycles 36 and credit constraints for economic performance as well as the importance of the government in its function as a compensating and balancing institutional agent providing stability to the economy.

36 Laeven and Valencia (2013) define significant support by the government as fiscal costs exceeding 3% of GDP. This seems to be a reasonable choice for real data, but the typical real economy of interest is considerably larger and consists of more agents compared to our small-scale ACE model. In fact, this affects the fiscal costs-to-GDP ratio, since the size of our banking sector relative to GDP is much larger than in reality, as our model has fewer agents to contribute to GDP. Hence, this can lead to years in which the fiscal costs are twice or three times as high as GDP. These relatively high ratios might be comparable to the situation in small countries with large financial systems like Iceland or Ireland, where the fiscal costs have reached very high levels, amounting even to multiples of GDP.

(Figure note: ordinate scale relates to GDP (blue), whereas credit-related variables (orange) are scaled appropriately to emphasize their pro-cyclicality.)
The appendix shows that the presented macro model is generally able to serve as a framework for the analysis of research questions concerning banks' lending activity, leverage, financial crises as well as monetary and macroprudential policy.

(Figure note: indebtedness of the firm sector (blue); bad debt is measured by the loan losses of banks (orange).)