Greenspan’s adherence to the Taylor rule: examining Federal Reserve chairmen’s policy regimes and deviations from the Taylor rule

Abstract This paper examines the relationship between Federal Reserve policy and the Taylor rule, a commonly used model for guiding monetary policy. The study analyzes the deviation of the actual federal funds rate from the Taylor Rule model during distinct structural changes, using the real-time macroeconomic data available to the Fed at the time of its interest rate decisions. The research focuses on whether former Fed chair Alan Greenspan's policies from 2003 to 2006, which have been linked to the housing bubble, deviated significantly from the Taylor Rule. The findings show that there is insufficient statistical evidence to support this claim, and a machine learning text analysis of the Federal Open Market Committee transcripts confirms the presence of only one regime during this period. These results contribute to the existing literature on monetary policy and its impact on the economy, providing valuable insights into the relationship between Federal Reserve policy and the Taylor Rule.


Introduction
Since the end of the inflationary episode in the 1970s, the U.S. has experienced a significant reduction in the volatility of GDP growth (with the exception of the 2008 recession) and only a moderate amount of yearly inflation (with the exception of 2021−present). This has prompted extensive empirical research to assess why economic conditions have changed so dramatically. A possible explanation is that the U.S. Federal Reserve has changed the decision-making process that determines the federal funds rate. Consequently, there is a growing interest in modeling the process by which this decision is made and determining whether this process has changed over time.
Various efforts have been made to model the decision-making process of the Federal Reserve on the federal funds rate question, with the "Taylor Rule" emerging as the clear winner. Named after its creator, John Taylor, the rule is presented as a simple equation that specifies how the federal funds rate should be set based on three variables: the equilibrium real interest rate, the deviation of real GDP from a target, and the inflation gap. The original rule from Taylor (1993) stipulates that the federal funds rate, i_t, should be set in response to the equilibrium real interest rate, r*, the output gap, y_t (the percent difference between real output, Y_t, and trend or potential output, Y*_t), and the inflation gap (the difference between the observed inflation, π_t, and the target inflation rate, π*) according to

    i_t = r* + π_t + λ₁ y_t + λ₂ (π_t − π*).   (1)

Including Taylor's suggestion for the parameter values and targets, the rule becomes

    i_t = 2 + π_t + 0.5 y_t + 0.5 (π_t − 2).   (2)

Taylor initially presented the rule at the 1992 Carnegie-Rochester Conference as an empirical regularity, but its descriptive power has gradually transformed the formula into a policy prescription, particularly in the hands of Taylor himself. The rule has been used to evaluate the U.S. Federal Reserve's monetary policy over various periods. For instance, Taylor (2007, 2009) used the rule named after him to argue that monetary policy was "too loose" from 2003 to 2006 compared to the experience of the previous few decades and played a role in the formation of the housing bubble by making housing finance cheap and attractive, thereby contributing to the boom-bust cycle in housing starts. Taylor's arguments are based on evaluating Greenspan's adherence to a Taylor rule that uses final values of the output gap and inflation rate.
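As a concrete illustration, equations (1) and (2) can be written as a short function. This is a minimal sketch; the function name and argument conventions are ours, with Taylor's suggested values (r* = 2, π* = 2, λ₁ = λ₂ = 0.5) as defaults.

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0,
                lam_y=0.5, lam_pi=0.5):
    """Federal funds rate prescribed by the Taylor Rule, equation (1).

    Defaults follow Taylor's (1993) suggestion, which yields equation (2).
    All arguments and the return value are in percentage points.
    """
    return r_star + inflation + lam_y * output_gap + lam_pi * (inflation - pi_star)

# With 3% inflation and a 1% output gap, equation (2) prescribes
# i = 2 + 3 + 0.5*1 + 0.5*(3 - 2) = 6.0 percent.
```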
Figure 1 displays the actual federal funds rate and the one suggested by the Taylor Rule with final values of the output gap and inflation rate. Plotting the difference between the actual federal funds rate and that suggested by the Taylor Rule in Figure 2, we can see the dip from 2003 to 2006 in the series that constitutes the "loose" period in Greenspan's tenure that is contentious for Taylor. However, research in this area has been subject to some controversy due to different methods of measuring inflation. Alex et al. (2019) argue that the period from 2000 to 2007 was inconsistent with the Taylor Rule, a conclusion driven by their use of the real-time GDP deflator as the measure of inflation rather than the vintage core-PCE series the Federal Reserve was using at the time. The authors argue that this inconsistency highlights the importance of using accurate measures of inflation when implementing the Taylor Rule. Similarly, Orphanides and Wieland (2008) conclude that policy actions from 1988 to 2007 (Greenspan's tenure) were consistent with a stable Taylor rule. They confirm that many of the apparent deviations of the federal funds rate from the prescriptions of an outcome-based Taylor rule may actually be the result of policymakers' systematic responses to projections of the output gap and inflation, rather than to recent economic data. Orphanides (2001, 2002) emphasizes the necessity of evaluating monetary policy with rules based on real-time data: one should judge federal funds rate policy based on the information available to policymakers at the time of the policy decision rather than on ex-post revised data, which only becomes available much later. In fact, Mehra and Sawhney (2010) find that applying a forward-looking Taylor Rule using real-time inflation data reduces much of the gap between the federal funds rate and the Taylor Rule recommendation in 2003-2006.
Despite the existing research, questions remain about the consistency of the Taylor Rule and the Federal Reserve's adherence to it. This paper adds to this literature on two fronts. First, we perform a data-based determination of regime changes in U.S. monetary policy by employing the methods of Alex et al. (2019), but using the insight of Mehra and Sawhney (2010) that it is necessary to use real-time core-PCE as the measure of inflation in the Taylor Rule post-2004, rather than CPI or the GDP deflator, thereby reflecting the series used by the Federal Reserve at the time. We employ a standard, non-forward-looking version of Taylor's (1993, 1999) decision model of the federal funds rate, the version of the Taylor Rule suggested by the St. Louis Federal Reserve (Equation 2), where the potential GDP series comes from the Congressional Budget Office (CBO) model of potential GDP in the U.S. Second, we add to the literature by confirming our more standard time series regime-switching analysis with a machine learning text analysis approach to determining regime changes in monetary policy. In this methodology, the monetary policy regime is determined by the cluster of related transcripts to which a particular FOMC meeting is assigned. This provides an independent method of finding policy regimes. However, we can only identify monetary policy regimes over a more limited period, since machine-readable FOMC meeting transcripts are not available before 1979. Due to this limited availability of full meeting transcripts, Greenspan's era can only be compared to Volcker's chairmanship period.
The rest of the paper is organized as follows. Section 2 discusses the literature on previous regime identification as well as on the application of machine learning techniques on FOMC transcripts. Section 3 reviews the Taylor Rule and the construction of deviation series with the data available to the Fed at the time of their decisions. Section 4 discusses the empirical regime detection methodologies, Section 5 presents the empirical results, and Section 6 concludes.

Previous literature
Regime changes in U.S. monetary policy have been extensively studied using a variety of methodologies, each with its own strengths and limitations. Table 1 summarizes the most common approaches in the literature. Early approaches, such as the binary indicator variable method proposed by Hamilton (1989), were limited to only two regimes and did not account for changes in the economy over time. Sims (1992) proposes a more general approach to identifying and estimating changes in the parameters of a dynamic model. Compared to Hamilton's (1989) Markov-switching model, which assumes a fixed number of regimes, Sims' (1992) VAR approach does not require a pre-specified number of regimes. Instead, the number of regimes can be determined endogenously based on the data, making it a more data-driven approach. However, these models relied on assumptions about the structure of the economy and faced challenges in identifying relevant shocks. To address these limitations, researchers such as Fair (2001) and Judd and Rudebusch (1998) modified the SVAR model to allow for more flexible identification schemes and better incorporation of economic theory.

Figure 2. i_FedFund,t − i_Taylor,t, 1954-2008. Notes: The figure shows the difference between the quarterly average federal funds rate, i_FedFund, and the federal funds rate implied by the Taylor Rule, i_Taylor. The series is quarterly from 1954:Q3 to 2008:Q1.
A typical method is to choose regime dates based on some known features and history of the available data and then use tests of parameter constancy, e.g., Chow tests, to justify the dates chosen. However, as Hansen (2001) observes, if the breakpoints are not known a priori, then the chi-squared critical values for the Chow test are inappropriate. Using known features of the data (e.g., the Volcker policy experiment of 1979-82) to determine breakpoints can make these candidate break dates endogenously correlated with the actual data, leading to incorrect inferences about the significance of those candidate break dates. Furthermore, not all of the parameters or targets necessarily change at the same date. Fitting values to the policy parameters on the output and inflation gaps, λ₁ and λ₂ in equation (1), with an OLS model such as equation (3) provides less-than-reliable parameter estimates if the regime includes few observations, as is the case with the potential Volcker policy experiment. Boivin (2006) attempts to address some of these issues by using a time-varying parameter model that assumes the policy parameters are time series following driftless random walks. This is the Kalman filter model of Cooley and Prescott (1976), and all the parameters in the model can be estimated jointly by maximum likelihood estimation. However, when the variance of the policy parameter time series is small, the parameters can only change slowly over time, and policy regime shifts may not be visible. Boivin (2006) deals with this problem in an ad hoc manner but still fails to identify discrete regimes that align with the terms of particular Federal Reserve Chairs. He finds only a gradual shift in the Taylor Rule policy parameters until around 1982, the start of the Great Moderation.

Table 1. Common methodologies for identifying monetary policy regimes.

Methodology | Features | References
Binary variable method | Limited to only two regimes and does not account for changes in the economy over time | Hamilton (1989)
Structural VAR | Allows for multiple regimes, but still assumes a fixed and known number of regimes | Sims (1992), Judd and Rudebusch (1998), Clarida et al. (1999), Fair (2001), Sims and Zha (2006)
Time-varying parameter models | Allow for changes in the model parameters over time, but have their own limitations | Primiceri (2005), Boivin (2006)
Bayesian methods | Allow for more flexible modeling of uncertainty and can produce more accurate estimates of the model parameters, but require specifying prior distributions for the model parameters | Timothy and Sargent (2005), Murray et al. (2015), Alba and Wang (2017)
Structural change model with regime switching | Identifies regimes and fits OLS regressions for each regime | Alex et al. (2019)
Machine learning methods | Use methods such as sentiment analysis, neural networks, and topic modelling to estimate changes in monetary policy regimes, allowing for more flexible modeling of complex relationships between variables | Shapiro and Wilson (2022), Handlan (2021), Hansen and McMahon (2016)
Timothy and Sargent (2005) use a Bayesian Markov-switching VAR model to identify changes in the conduct of monetary policy and their effects on key macroeconomic variables. Their model allows for different regimes, each with its own set of parameters and error variances, to capture the possibility that the relationship between policy and macroeconomic outcomes may change over time. Sims and Zha (2006) improve upon the earlier studies by introducing a new approach to modeling regime-switching dynamics that allows for continuous and endogenous shifts in the behavior of both policymakers and the economy. Murray et al. (2015) use Markov-switching models to identify regimes from 1965 onward and find that the Taylor parameters are mostly consistent, except for 1973-1974 and 1980-1985. Alba and Wang (2017) also identify monetary regimes between 1973 and 2014 using a k-state Markov regime-switching model and find 2001Q2 to 2005Q4 to be mostly consistent with the Taylor Rule "low discretionary regime," and 2006Q1 to 2007Q4 to be completely consistent with the Taylor Rule, which is broadly consistent with other findings, despite their use of the GDP deflator instead of the CPI and core-PCE post-July 2004. These methods allow for more flexible modeling of uncertainty and can produce more accurate estimates of the model parameters, but require specifying prior distributions for those parameters. The work closest to ours, Alex et al. (2019), uses the Bai and Perron (2003a, 2003b) structural change model to identify regimes and fits OLS regressions similar to equation (3) for each regime to check for significant deviations from the expected parameters on the inflation and output gaps. However, their conclusion that the 2000-2007 period had significantly different parameters than the standard Taylor rule depends on using the GDP deflator as the measure of inflation rather than real-time core PCE, as shown in other studies.
Recently, machine learning methods, specifically text analysis of central bank communication, have increasingly been used to study monetary policy. Hansen and McMahon (2016) use topic modeling to analyze the effect of forward guidance on macroeconomic aggregates. Shapiro and Wilson (2022) use sentiment analysis on FOMC transcripts to estimate the objectives of central bank preferences. They find that the FOMC's implicit inflation target was roughly 1.5 percent, significantly below the assumed value of 2 percent. Handlan (2021) uses neural networks for text analysis on FOMC meeting statements to generate "monetary policy shock" series and finds that the wording of the statements accounted for more variation in federal funds futures (FFF) prices than target federal funds rate change announcements. She also finds that the impact of forward guidance on real interest rates is twice as large when using these text-based shock series compared to other measures, such as changes in FFF prices. To our knowledge, ours is the first paper to identify different Federal Reserve decision framework regimes using machine learning methods.

Data
The federal funds rate, inflation, unemployment, and output time series come from the U.S.

Real-time Taylor rule
The Taylor Rule assumes that policymakers know, and can agree on, the size of the output gap. However, measuring the output gap is very difficult, and FOMC members typically have different judgments. In addition, since the FOMC meets eight times per year, assessing the Taylor Rule consistency of the FOMC using quarterly data could be misleading. It is fairer to assess the consistency of the federal funds rate with the Taylor Rule using monthly data that was available to the committee at the time of its meeting. Instead of attempting to interpolate quarterly output and potential output data with a method similar to Sims (1980), we choose to approximate the output gap using Okun's law,

    y_t = −2 (U_t − U*_t),   (4)

the gap version of Okun's "rule of thumb" as presented in Abel et al. (2005). For the period 1954-2008, regressing the output gap on the unemployment gap using quarterly data yields a slope of −1.26 rather than the −2 estimated from Okun's original data (Figure 3). This suggests that the Taylor Rule at a monthly frequency is

    i_t = 2 + π_t − 0.63 (U_t − U*_t) + 0.5 (π_t − 2).   (5)

The natural rate of unemployment, U*_t, from the U.S. Congressional Budget Office must still be interpolated from quarterly to monthly frequency, producing the series shown in Figure 4. However, this version of the Taylor rule also has the advantage of being able to use the historical inflation (π_t) and unemployment (U_t) values that were the estimates at the time of the FOMC meeting, rather than the revised series. This data is available from the Federal Reserve Bank of Philadelphia's Real-Time Data Set from 1965 onward. The final version of the "real-time" Taylor rule is Equation 5 evaluated with these real-time estimates of π_t and U_t (Equation 6). Figure 5 shows the difference between the monthly average federal funds rate and the federal funds rate implied by the Taylor rule in Equation 6. The series has 507 monthly data points from 1965 to 2008.

Figure 3. Output gap regressed on the unemployment gap, using quarterly output and unemployment. The estimated slope for the period is −1.26, rather than the −2 estimated from Okun's original data.
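A minimal sketch of the monthly rule described above, assuming the output gap enters as −1.26 × (U_t − U*_t), consistent with the fitted Okun slope (function names are ours):

```python
def okun_output_gap(u, u_star, slope=-1.26):
    """Approximate the output gap from the unemployment gap via Okun's law.

    The slope of -1.26 is the value fitted over 1954-2008, rather than the
    -2 of Okun's original rule of thumb.
    """
    return slope * (u - u_star)

def monthly_taylor_rate(inflation, u, u_star):
    """Taylor rule at monthly frequency with the Okun-approximated output gap."""
    return 2.0 + inflation + 0.5 * okun_output_gap(u, u_star) + 0.5 * (inflation - 2.0)

# At 2% inflation with unemployment at its natural rate, the prescription is
# the neutral 4% rate; each point of unemployment above the natural rate
# lowers the prescription by 0.5 * 1.26 = 0.63 points.
```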
The Taylor rule residual series is still biased (mean = −1.25%) toward a higher interest rate than the Taylor rule suggests (i.e. a bias toward less permissive monetary policy). The "Real-Time" series is obviously much closer to being stationary, but is still not consistent with the single Taylor rule over the entire period.
In February 2000, CPI was replaced by the personal consumption expenditures (PCE) deflator as the preferred FOMC measure of inflation. From July 2004 onward, the Fed began targeting the core-PCE price index that excludes food and energy prices. As Mehra and Sawhney (2010) point out, these adjustments reduce much of the apparent Greenspan deviation from the Taylor Rule from 2003 to 2006.

Empirical methodology
The Tukey Honest Significant Difference Test is a single-step multiple comparison procedure that determines whether sample means are significantly different from each other simultaneously. The test assumes that the observations are independent within and among groups, and that within-group variance is homogeneous across the groups. Since we first wish to test whether Greenspan's tenure is distinguishable from the other Fed Chairmen on an aggregate basis, this is a suitable procedure to perform before attempting to identify regimes with an agnostic statistical procedure.
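Recent SciPy releases expose this procedure directly; the sketch below runs it on three simulated chairman-level deviation series (the data here are simulated placeholders, not the paper's series):

```python
import numpy as np
from scipy.stats import tukey_hsd

rng = np.random.default_rng(42)
# Simulated Taylor-rule deviation series for three hypothetical chair tenures
greenspan = rng.normal(loc=0.0, scale=1.0, size=60)   # mean deviation near zero
volcker = rng.normal(loc=3.0, scale=1.0, size=60)     # persistently tight
burns = rng.normal(loc=-2.5, scale=1.0, size=60)      # persistently loose

res = tukey_hsd(greenspan, volcker, burns)
# res.pvalue is a 3x3 matrix of pairwise p-values; off-diagonal entries
# near zero indicate significantly different mean deviations.
```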
It is easiest to judge the breakpoints, however, using the multiple-mean model, even though there is autocorrelation in the federal funds rate-Taylor rule difference series. It is also useful to think of FOMC monetary policy as having an unbiased error relative to the Taylor rule within each regime. Using this assumption and the methodology of Bai and Perron (1998, 2003a, 2003b), we fit multiple mean equations to the series and find the points in time that minimize the residual sum of squares for the chosen number of breakpoints. The optimal number of breakpoints is three, based on the Schwarz Information Criterion (SIC).
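The core of this breakpoint search can be written as a dynamic program over candidate break dates. The code below is our simplified illustration (one constant mean per segment, a basic SIC penalty), not the full Bai-Perron procedure:

```python
import numpy as np

def segment_rss(x, min_seg):
    """RSS of fitting a constant mean to each admissible segment x[i:j]."""
    n = len(x)
    c1 = np.concatenate(([0.0], np.cumsum(x)))
    c2 = np.concatenate(([0.0], np.cumsum(x ** 2)))
    rss = np.full((n + 1, n + 1), np.inf)
    for i in range(n):
        for j in range(i + min_seg, n + 1):
            s, m = c1[j] - c1[i], j - i
            rss[i, j] = (c2[j] - c2[i]) - s * s / m
    return rss

def fit_mean_breaks(x, max_breaks=3, min_seg=8):
    """Break dates minimizing the residual sum of squares, number chosen by SIC."""
    n = len(x)
    rss = segment_rss(x, min_seg)
    dp = [rss[0].copy()]          # dp[k][j]: best RSS for x[:j] with k breaks
    back = []
    for k in range(1, max_breaks + 1):
        cur, bk = np.full(n + 1, np.inf), np.zeros(n + 1, dtype=int)
        for j in range(n + 1):
            for i in range(j):
                cand = dp[k - 1][i] + rss[i, j]
                if cand < cur[j]:
                    cur[j], bk[j] = cand, i
        dp.append(cur)
        back.append(bk)
    # Schwarz criterion: n*log(RSS/n) plus a penalty per estimated parameter
    sic = [n * np.log(dp[k][n] / n) + (2 * k + 1) * np.log(n)
           for k in range(max_breaks + 1)]
    k_star = int(np.argmin(sic))
    breaks, j = [], n
    for k in range(k_star, 0, -1):   # walk the back-pointers to recover dates
        j = back[k - 1][j]
        breaks.append(j)
    return sorted(breaks), k_star
```

On a toy series with a single shift in mean, the procedure recovers the break; on the paper's deviation series, the analogous search selects three breaks.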
Another useful way to find the hidden regimes in monetary policy is with the Markov switching model of Hamilton (1989), one of the most popular nonlinear time series models in the literature. This model involves multiple structures (equations) that can characterize the time series behavior in different regimes. By permitting switching between these structures, the model is able to capture more complex dynamic patterns. A novel feature of the Markov switching model is that the switching mechanism is controlled by an unobservable state variable that follows a first-order Markov chain. In particular, the Markov property implies that the current value of the state variable depends only on its immediate past value. As such, a structure may prevail for a random period of time, and it will be replaced by another structure when a switch takes place. This is in sharp contrast with the random switching model of Quandt (1972), in which the events of switching are independent over time. The original Markov switching model focuses on the mean behavior of variables. This model and its variants have been widely applied to analyze economic and financial time series (cf. Diebold et al. (1994); Engel (1994); Engel and Hamilton (1990); Hamilton (1988, 1989); Filardo (1994); Garcia and Perron (1996); Ghysels (1994); Goodwin (1993); C.J. Kim and Nelson (1998); M.J. Kim and Yoo (1995); Lam (1990); Sola and Driffill (1994); Schaller and Van Norden (1997); Athanasios and Williams (2003); Westelius (2007)). In traditional Markov switching models, the regime probabilities are exogenous and are usually estimated using maximum likelihood methods. However, Yoosoon et al. (2017) extend the regime switching methodology by allowing the Markov chain determining regimes to be endogenous, implying that the switching probabilities depend on the state of the underlying process.
Similarly, Svensson (2017) uses a regime-switching model with an endogenous Markov chain to analyze the effectiveness of an approach that sets interest rates based on forecasts of future inflation and output rather than relying on a specific rule or model. Let s_t denote the unobservable state variable. The switching model we consider for the Taylor Rule deviation series involves three regimes,

    i_Taylor,t = μ_{s_t} + ε_t,   ε_t ~ N(0, σ²_{s_t}),   s_t ∈ {0, 1, 2}.

This model can be thought of as representing three states of monetary policy relative to the Taylor Rule, where "tight," "loose," and "other" are the three hidden states that each s_t might represent. This formulation allows for different conditional variances across regimes, and so is a less restrictive version of the methodology of Bai and Perron.
When the s_t are independent Bernoulli random variables, this is the random switching model of Quandt (1972). In the random switching model, the realization of s_t is independent of the previous and future states. This would imply that the deviation from the Taylor rule belongs to one of several regimes at random, which is not consistent with the hidden state representing the particular Fed chairman, who is unlikely to change policy stances randomly from month to month. Suppose instead that s_t follows a first-order Markov chain with transition matrix

    P = [ p_00  p_01  p_02 ]
        [ p_10  p_11  p_12 ]
        [ p_20  p_21  p_22 ],

where p_ij (i, j = 0, 1, 2) denotes the probability of s_t = j given that s_{t−1} = i, so that the transition probabilities satisfy p_i0 + p_i1 + p_i2 = 1. The transition probabilities determine the persistence of each regime.
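To make the mechanics concrete, the sketch below simulates a three-regime deviation series from this model, using the regime-conditional means and standard deviations reported in equation (9) later in the paper. The transition probabilities here are hypothetical placeholders (the persistent diagonals are all the structure needed for illustration), not the paper's estimates.

```python
import numpy as np

# Regime-conditional means and standard deviations from equation (9)
mu = np.array([-1.856, 0.358, -1.546])
sd = np.array([0.764, 0.722, 4.930])

# Hypothetical persistent transition matrix, P[i, j] = Pr(s_t = j | s_{t-1} = i);
# each row sums to one, as required.
P = np.array([[0.96, 0.03, 0.01],
              [0.03, 0.95, 0.02],
              [0.02, 0.03, 0.95]])

rng = np.random.default_rng(0)
n, s = 507, 1
states = np.empty(n, dtype=int)
devs = np.empty(n)
for t in range(n):
    s = rng.choice(3, p=P[s])                # first-order Markov switching
    states[t] = s
    devs[t] = mu[s] + sd[s] * rng.normal()   # regime-dependent mean and variance
```

With diagonals near one, regimes persist for long stretches, mimicking multi-year policy regimes; with i.i.d. states instead, the series would switch regimes almost every month, which is the Quandt (1972) case the text rules out.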

Text analytics
We also apply a machine learning procedure, specifically text clustering, to the FOMC transcripts and find evidence that Greenspan's post-2000 tenure was in large part consistent with his pre-2000 tenure. In particular, we employ k-means clustering on the FOMC transcripts to identify different monetary policy regimes. The idea of clustering is to categorize a set of texts such that texts in the same cluster are more similar to each other than to texts in other clusters, by applying machine learning and natural language processing techniques. As such, the k-means clustering algorithm takes a set of FOMC transcript texts as input and yields a list of detected clusters, where each cluster is taken to represent a distinct policy regime.
Before we describe the clustering technique, we provide a brief discussion of text pre-processing. In order to apply machine learning techniques, we need to convert the transcript texts to numerical vectors. We split the texts into single words and two-word phrases, removing numbers, punctuation, symbols, and white space. We also remove the names of the FOMC members present during a meeting to ensure that our clustering is not driven solely by the names of the members. We then count the frequency of single-word and two-word phrases within each transcript and normalize the frequencies by the size of the document. As a final step, we down-weight terms that occur in the majority of documents. We use the tf-idf scheme of Salton and McGill (1983) to obtain weights for each term.¹ Essentially, each FOMC transcript is uniquely represented by a vector of normalized frequencies of single words and two-word phrases, which can now be employed in the clustering exercise.
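The pre-processing steps above can be sketched in a few lines of pure Python. This is a simplified tf-idf variant; Salton and McGill describe several closely related weightings, and the exact normalization used in the paper may differ in detail.

```python
import math
import re
from collections import Counter

def tokenize(text, drop=frozenset()):
    """Lowercase, strip numbers/punctuation/symbols, drop given names
    (e.g., FOMC members), and return single words plus two-word phrases."""
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in drop]
    return words + [f"{a} {b}" for a, b in zip(words, words[1:])]

def tfidf_vectors(docs, drop=frozenset()):
    """Length-normalized term frequencies weighted by inverse document
    frequency, so terms common to most transcripts receive little weight."""
    tokens = [tokenize(d, drop) for d in docs]
    df = Counter()                     # document frequency of each term
    for toks in tokens:
        df.update(set(toks))
    n = len(docs)
    return [{t: (c / len(toks)) * math.log(n / df[t])
             for t, c in Counter(toks).items()}
            for toks in tokens]
```

A term appearing in every document gets an idf of log(1) = 0 and so drops out, while terms distinctive to a few transcripts are up-weighted.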
The k-means clustering algorithm requires the user to specify the number of clusters before grouping the FOMC transcripts into those clusters. We use the widely used "elbow method" to determine the number of clusters. Appendix A.2 discusses the k-means clustering algorithm in detail.
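A minimal version of the clustering step, with the within-group sum of squares that feeds the elbow plot (plain Lloyd's algorithm over the document vectors; production work would use a library implementation):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Lloyd's algorithm: returns cluster labels and the within-group
    sum of squares (WSS) used by the elbow method."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dist.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster goes empty
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    wss = sum(((X[labels == j] - centers[j]) ** 2).sum() for j in range(k))
    return labels, wss

# Elbow method: compute WSS for k = 1, 2, 3, ... and look for the k at
# which the decline in WSS flattens out.
```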

Optimal number of clusters
To apply the "elbow method" of determining the optimal number of clusters, the within-group sum of squares is plotted against the number of clusters. If the plot resembles an arm, then the "elbow" of the arm is the appropriate number of clusters, i.e., where the inflection point of the curve lies. This inflection point occurs where the marginal benefit (in terms of a lower sum of squared errors) from adding additional clusters begins to diminish, and is thus the point of balance between model parsimony and fit to the data. Figure A1 in the Appendix (Appendix A.3) plots the within-group sum of squares against the number of clusters. The appropriate number of clusters as suggested by the elbow method is between 3 and 4; beyond 4 clusters, the within-group sum of squares does not fall much. We present results for when the FOMC transcripts are split into 2, 3, 4, and 5 regimes, but the results turn out to be quite consistent across all cluster counts.

Results
Greenspan's mean deviation is the smallest, and there is a statistically significant difference between Greenspan's mean deviation from the Taylor Rule and the average deviation of all other Chairmen, except for Bernanke. Figure 6 displays the federal funds rate-Taylor Rule difference series with separate means for each regime. The first regime that the Bai and Perron statistical procedure identified covers the chairmanship tenures of William Martin (1951-1970) and Arthur Burns (1970-1978), from the start of the series until 1973. This was a period of very loose monetary policy, perhaps influenced by President Nixon's threats of taking away Federal Reserve independence. Burns' monetary policy under the Ford presidency after the breakpoint in 1973 was even looser and less consistent with the Taylor rule.

The second breakpoint in 1980 is somewhat expected and agrees with the drifting output and inflation gap evidence from Boivin (2006). The chairmanship of Paul Volcker (1979-1987) exhibits a clear breakpoint in Taylor rule consistency to a regime of tight monetary policy in November of 1980 lasting until the end of his tenure, a result that is not surprising given that the Federal Reserve targeted non-borrowed reserve levels rather than the federal funds rate during 1979-1982. The high interest rate period continued until the end of Volcker's tenure in 1987, as the Fed continued to battle stagflation by first taming inflation (an emphasis on the inflation gap over the output gap in the standard Taylor rule).
Alan Greenspan's tenure from 1987 to 2006 was remarkably consistent with the Taylor Rule, regardless of whether the shift from targeting core CPI to core PCE in 2000 is reflected in the Taylor Rule (Figure 7). The conditional mean deviation of Greenspan's tenure is approximately zero, in either case, as Figures 6 and 7 indicate. However, as Figure 8 shows, Greenspan apparently did not account for inflation expectations in his decision-making, since when his inflation gap is calculated using the University of Michigan 1-year inflation expectations survey, his conditional mean policy stance is consistently "loose". Greenspan's tenure is still quite distinct from Volcker's even when using inflation expectations in the Taylor rule.
The less restrictive Markov-switching regime structure reveals periods of greater and lesser adherence to the Taylor Rule. In some periods, Greenspan is indeed classified in the "loose" Regime 0 (conditional deviation mean less than zero), as shown in Figure 9, while the remainder of his tenure is in a "tight" Regime 1 (conditional mean greater than zero), as shown in Figure 10. However, it is worth noting that the conditional standard deviations of the "tight" and "loose" regimes are similar (0.722 vs. 0.764). This demonstrates that Greenspan was symmetric in his deviations from the Taylor Rule in addition to being cyclical.

Notes to Figure 10: The figure shows the periods corresponding to the Markov-switching regime with conditional mean 0.358 and conditional standard deviation 0.722 for the deviations from the Taylor Rule. This regime can be interpreted as the "tight" regime, where the federal funds rate is higher than the recommendation from the Taylor Rule. Periods of the Greenspan and Bernanke chairmanships correspond to this regime.

The Markov-switching model classifies Volcker and Burns in the same regime, despite the fact that Volcker was much tighter than the Taylor Rule and Burns was much looser than the rule (Figure 11). Regime 2 can thus be interpreted as a monetary policy regime that is "inconsistent" with the Taylor Rule, being either very tight or very loose. In Figure 11, Greenspan was "inconsistent" with the Taylor Rule. The estimated model is

    i_Taylor,t = −1.856 + ε_t,   ε_t ~ N(0, 0.764),   s_t = 0
                  0.358 + ε_t,   ε_t ~ N(0, 0.722),   s_t = 1
                 −1.546 + ε_t,   ε_t ~ N(0, 4.930),   s_t = 2   (9)

and the estimated transition matrix shows that the regimes identified from fed funds deviations from the Taylor Rule are very persistent. The low probabilities of transitioning between regimes imply that changes in the monetary policy framework at any point in time are, in general, very improbable.

Text analytics
The findings of our clustering analysis (where we regard each cluster as a distinct monetary policy regime) of the FOMC meeting transcripts are presented in this section. We begin by discussing the results when only two clusters are considered, and then expand to three, four, and five clusters. Greenspan's era is quite consistent in general.
Figure 12 presents the results when the FOMC transcripts are grouped into only two clusters, representing two distinct policy regimes. It can be observed that there is minimal overlap between the two regimes, indicating that the chairmanships of Volcker and Greenspan are clearly distinguishable from each other. Additionally, the mean deviation from the Taylor rule for the cluster corresponding to the Greenspan period is close to zero (−0.0211, compared to 2.93 for the other cluster), as presented in Table 4. This consistency is particularly noteworthy as it persists even when allowing for greater clustering granularity.

Notes to Figure 12: The figure shows the clustering results when the FOMC transcripts are divided into 2 clusters. We interpret each cluster of similar texts as a distinct regime of monetary policy.

Figure 13 demonstrates the results when the FOMC transcripts are divided into three clusters. The majority of the transcripts from the Volcker era (August 1979 to August 1987) belong to the same cluster, while the Greenspan meeting transcripts are split between the remaining two clusters. The post-1990 Greenspan regime looks quite consistent, as evidenced by almost all of the post-1990 documents being classified as a single regime. This consistency holds even when we allow the k-means clustering algorithm to group the transcripts into four or five clusters, as shown in Figures 14 and 15, respectively. Most of the post-1990 transcripts belong to the same cluster, and most importantly, 2003-2006 is not distinguishable as a separate policy regime during Greenspan's tenure.
In addition, the analysis reveals that there is no clear distinction between the policy regime of 2003-2006 and the rest of Greenspan's tenure. This implies that Greenspan's approach to monetary policy during his final years was consistent with the policies he implemented earlier in his tenure.
In summary, the results from our clustering analysis demonstrate that Greenspan's tenure as Federal Reserve Chair was characterized by a high degree of consistency in monetary policy. Even under a more granular clustering, most of the post-1990 transcripts remain classified as belonging to the same policy regime, and there is no discernible difference between the policy regime of 2003-2006 and the rest of Greenspan's tenure. Figures 16, 17, and 18 present the word clouds for the primary terms in clusters 1, 2, and 3, respectively; the larger and bolder a term appears, the greater its significance within the particular cluster. Although we present the terms only for the three-cluster case for clarity, cluster memberships are generally consistent: as we shift from, say, three to four clusters, just a few transcripts change their cluster memberships.

Cluster characteristics
Cluster 1, which roughly corresponds to the majority of Greenspan's tenure after 1990, features top terms such as "recovery", "stock market", and "recession". Cluster 2, covering the initial years of Greenspan's tenure, includes terms like "exports", "international trade", "dollar", and "exchange", indicating that the transcripts during this period focused on topics related to international trade and finance. In contrast, after 1990 the discussion in the transcripts, according to the clustering algorithm's findings, centered heavily on the stock market. Lastly, cluster 3, which aligns with the Volcker era, highlights top terms such as "money supply", "federal funds", "targets", and "interest rate". During the Volcker regime, which began in 1979, the discussion centered on which targets the Fed should be using: the FOMC switched to targeting the money supply from 1979 to 1981 and then reverted to interest rate targets after 1981.

Notes: Figure 16 shows the word cloud for the top single words and two-word phrases in the first cluster. The transcripts from the later era of Greenspan's tenure paid close attention to the U.S. stock market and the financial sector in general.

Conclusion
Alan Greenspan's early years as head of the Federal Reserve, spanning from 1988 to the end of 2000, were marked by remarkable consistency with the real-time Taylor Rule: the federal funds rate oscillated around the Rule's recommended value with low variance each month. While the second part of Greenspan's leadership was characterized by policy that appeared looser than the Taylor Rule suggested, the conditional mean found by the Bai and Perron structural break procedure is still consistent across his tenure. The Markov-switching model identified 2003 as a "loose" period, but one not significantly different from other "loose" periods during his tenure, or even during Martin's chairmanship in the late 1960s.
The contention by Taylor (2007, 2009) that Greenspan inflated the housing bubble is inconsistent with a historical inspection of Federal Reserve deviations from the Taylor Rule. Greenspan had a conditional mean deviation of zero throughout his tenure, assuming a constant level of variance as in Bai and Perron (2003a). A less restrictive Markov-switching model finds that some periods of Greenspan's tenure corresponded to "loose" monetary policy, but, as Equation 9 shows, the conditional variance was extremely close for both the "loose" and "tight" Markov-switching regimes. To the extent that Greenspan deviated from the Taylor Rule, he did so in a cyclical, symmetric manner: negative deviations were offset by positive deviations, which is inconsistent with Taylor's argument that the period of 2003-2006 differed from what economic agents had come to expect of monetary policy. On the whole, we find that Greenspan's interest rate policies were broadly consistent with the Taylor Rule. In addition, according to an FOMC text-based analysis of policy regimes, the policy discussions from 2003 to 2006 were no different from the vast majority of policy discussions earlier in Greenspan's tenure.

Notes: Figure 17 shows the word cloud for the top single words and two-word phrases in the second cluster. This cluster corresponds mostly to texts from early in Greenspan's tenure, when international trade and exchange rates relative to the U.S. dollar figured prominently in the conversation.

Policy Implications
The study has several policy implications worth considering. One of the main findings is that the Taylor Rule, while a useful guideline, has limitations due to data constraints, so strict adherence to it may not always be the best solution for addressing speculative bubbles. Additionally, it would be unwise for Congress to impose mechanical adherence to the Taylor Rule on the FOMC in setting interest rates. The study shows that the FOMC under Greenspan generally followed the Taylor Rule, taking into account the economic data available at the time of each policy decision. Therefore, any future attempt to bind the FOMC to the Taylor Rule should account for the real-time data available at the time of the rate-setting decision, as these can imply short-term interest rates that differ significantly from those implied by the final economic data.

Notes: Figure 18 shows the word cloud for the top single words and two-word phrases in the third cluster. This cluster corresponds mostly to texts from Volcker's tenure; the transcripts reflect the debate over using the money supply or the federal funds rate as a target, as well as the desire to reduce unemployment.
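The real-time versus final-data point can be made concrete with the Taylor (1993) formula from Equation 2, i = 2 + π + 0.5y + 0.5(π − 2). The sketch below uses hypothetical data vintages, not actual Fed figures, to show how a later upward revision of inflation and the output gap shifts the implied rate.

```python
# Taylor (1993) prescription with Taylor's suggested coefficients (0.5, 0.5),
# a 2% equilibrium real rate, and a 2% inflation target (Equation 2).
def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Return the prescribed federal funds rate, in percent."""
    return r_star + inflation + 0.5 * output_gap + 0.5 * (inflation - pi_target)

# Hypothetical vintages of the same quarter's data:
real_time = taylor_rate(inflation=2.0, output_gap=-1.0)  # known at the meeting
final = taylor_rate(inflation=2.5, output_gap=0.5)       # after later revisions

print(real_time)  # 3.5
print(final)      # 5.0
```

In this illustrative case the revisions move the prescribed rate by 1.5 percentage points, which is why judging a chair's adherence to the Rule against final data, as Taylor did, can mislead.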