A Comparison of Bond Ratings from Moody's, S&P, and Fitch IBCA

Previous research has found that the bond market values the ratings of Moody's and Standard & Poor's. This paper extends earlier research by comparing the ratings of Moody's, Standard & Poor's, and Fitch IBCA. The authors examine a very large database with monthly observations of bonds and bond ratings over a five-year period. The analysis focuses on comparing rating levels, rating changes, and the impact of ratings on bond yields. The results show that firms with publicly available Fitch IBCA ratings have higher ratings from Moody's and S&P than firms without Fitch IBCA ratings. The typical firm releasing a Fitch IBCA rating has a lower yield (controlling for the Moody's and S&P ratings), a more stable rating, and is more likely to receive an upgrade. For split-rated bonds (Moody's vs. S&P), Fitch IBCA serves as a tiebreaker. This evidence is consistent with the bond market valuing the ratings of all three raters: Moody's, Standard & Poor's, and Fitch IBCA.


OVERVIEW
Bond ratings have long been considered important by government regulators, firms, and bond investors as an indicator of the credit risk of an issuer. In the academic literature, the consensus is growing that bond ratings convey useful information to the market. 1 However, studies of bond ratings have been largely confined to the two largest raters, Moody's and Standard & Poor's (S&P). 2 To some extent this limitation in the literature is logical since Moody's and S&P are the clear leaders in the credit rating industry. However, many firms are rated not only by the two large raters, but also by one or more smaller rating agencies such as Fitch IBCA or Duff & Phelps. The purpose of this paper is to compare the ratings of one of the smaller rating agencies, Fitch IBCA, to those of Moody's and S&P. By doing this we hope to see whether the market values Fitch IBCA ratings as well as those of Moody's and S&P.
Moody's and Standard & Poor's follow a policy of rating most SEC registered, U.S. corporate debt issues. Thus, almost all large public bond issues have at least two ratings. This is true whether or not the issuing firm chooses to pay the rating agency for the rating. However, Fitch IBCA and Duff & Phelps, the other two "full service" credit rating agencies, provide ratings only when requested and paid for. Thus, an issuing firm must actively seek out these costly ratings in order to obtain them. In addition, both Fitch IBCA and Duff & Phelps allow issuing firms to stop the release of a rating before it becomes public if the firm is not satisfied for some reason. 2

Jeff Jewell and Miles Livingston
There are several possible views of the potential benefits of seeking out additional ratings. First, an additional rating may not convey any incremental information beyond the Moody's and S&P ratings. According to this view, Moody's and S&P have all the necessary information to determine ratings and to properly evaluate this information. A second view is that Moody's and/or S&P may misjudge some bond issues. For these misjudged issues, an additional rating could provide useful information that is valued by the bond issuer and the bond market. Mis-valuation can occur because Moody's and S&P overlook and/or misinterpret some information. If the additional rating conveys useful information to the issuer and the market, we would expect the rating to impact the bond yield, over and above the impact of the Moody's and S&P ratings. 3 The third view is that firms may hunt for rating agencies that provide inflated ratings (so-called "rating shopping"). 4 If the requested rating is favorable, the issuer publicizes it; if the requested rating is unfavorable, the rating is not released. Since some bond investors are constrained by regulation to purchase bonds with a particular rating or higher, an inflated rating may allow these bonds to be purchased. However, if additional ratings are consistently inflated, the market would not believe the rating and the yield on the bond should not be affected.
To provide evidence about the validity of these views, two data samples are examined. The full sample contains a very large number of bonds rated by Moody's and S&P from January 1991 through March 1995. The 3-rater sample is a subset of the full sample and includes bonds rated by Moody's, S&P, and Fitch IBCA. 5 Our findings are summarized as follows.
1. In the full sample, the average rating from Fitch IBCA is considerably higher than the average ratings from Moody's and S&P. In the 3-rater sample, the average Fitch IBCA rating is only marginally higher (.3 rating notches) than those of the other raters. This indicates that firms releasing Fitch IBCA ratings to the public have higher ratings from Moody's and S&P than firms without a Fitch IBCA rating. In addition, about 85% of the difference in mean ratings between the full and 3-rater samples is caused by this selection bias.
2. In the 3-rater sample, Fitch IBCA changes its rating less often than Moody's and S&P. When Fitch IBCA does change a rating, the changes are larger than those of the other raters.
3. Firms with public Fitch IBCA ratings (and therefore in the 3-rater sample) have more stable Moody's and S&P ratings than the other full sample firms, and are more likely to be upgraded and less likely to be downgraded by Moody's and S&P.
4. For a given rating by Moody's and S&P, firms with publicly released Fitch IBCA ratings have somewhat lower Treasury spreads than other firms.
5. In the 3-rater sample, when Moody's and S&P disagree on a rating, a public Fitch IBCA rating serves as a tiebreaker. Regression analysis shows that publicly released Fitch IBCA ratings have an impact upon yields, particularly when Moody's and S&P disagree on the rating.
This evidence is consistent with the market valuing the ratings of all three raters: Moody's, S&P, and Fitch IBCA. 6

I. INTRODUCTION
Bond ratings have long been considered an important part of the credit certification process by government regulators, firms, and the general public in the issuance of public corporate debt. The academic literature has vigorously debated the importance of bond ratings. However, a consensus appears to have been reached that ratings do convey important information to the market above and beyond that conveyed by financial information alone. 7 Recent developments in the credit rating industry have raised new questions about the role of the bond rating, particularly when multiple ratings are obtained for the same debt issue. Cantor and Packer (1994) point out that there has been a recent increase in the number of agencies rating public debt. There are currently four full-service rating agencies that rate a wide variety of debt issues: Moody's, Standard and Poor's (S&P), Fitch IBCA, and Duff & Phelps. In addition, they have been joined in the industry recently by several specialized rating agencies. 8 According to Cantor and Packer, the SEC currently designates six agencies as nationally recognized statistical rating organizations (NRSROs), and several more agencies have applications pending with the SEC. It is clear that firms seeking to issue public debt have more alternatives than ever before in obtaining a rating. In addition, it appears that more firms are seeking third and even fourth ratings for debt issues.
While the number of agencies rating debt has increased recently, our understanding of the role these agencies play has not. In fact, until recently only ratings provided by Moody's and Standard and Poor's had been studied by academics. Little is known about ratings from Fitch IBCA and Duff & Phelps except that on average their ratings appear to be higher than those issued by Moody's and S&P. 9 (Even less is known about ratings from the "niche" raters such as A.M. Best.) Due to differences in market share, reputation, and operating procedures between Moody's and S&P on the one hand and Fitch IBCA, Duff & Phelps, and other rating agencies on the other hand, it is not clear that results from research done on ratings from Moody's and S&P should generalize to ratings from the other agencies.
6 Though this paper does not directly test this, it is likely that these results would generalize to other smaller raters, such as Duff & Phelps, as well. 7 See Hand, Holthausen, and Leftwich (1992), Reiter and Zeibart (1991), Ederington, Yawitz, and Roberts (1987), and Liu and Thakor (1984), among others. 8 For example, Thompson Bankwatch and IBCA, both started in the early 1990s, rate financial institution debt exclusively. A.M. Best rates insurance companies' ability to pay claims exclusively.
Moody's and S&P both maintain a policy of rating most SEC registered, U.S. corporate debt securities, thus ensuring that these issues typically have at least two ratings. These ratings are issued regardless of whether the firm requests a rating. However, firms willing to pay a rating fee 10 gain the benefit of participating in the rating process, which allows them to put their best case before the agencies (Cantor and Packer, 1994). According to Cantor and Packer, less than 2% of domestic issuers receiving a rating from S&P fail to pay the rating fee.
Other rating agencies follow very different policies from Moody's and S&P in rating debt. For example, Fitch IBCA and Duff & Phelps only rate debt issues upon request from the issuing firm. Both of these agencies charge fees comparable to Moody's and S&P for their services.
The trend towards obtaining more than two ratings has the potential to add an additional layer of complexity to assessing a firm's true credit risk. Even when given access to the same information and hearing the firm's best case, Moody's and S&P do not always reach the same conclusion about the creditworthiness of a debt issue. Several studies in the literature document that approximately 13%-17% of U.S. corporate debt issues receive different letter ratings from Moody's and S&P (a split rating). As the number of rating agencies increases, it is logical to assume that the number of debt issues receiving split ratings will increase as well. Thus, more information is available when there are more than two agencies, but the information is not necessarily easy to interpret.
One possible explanation offered by Cantor and Packer for the increase in the use of additional raters is regulatory in nature. Many financial institutions have limits, either self-imposed or imposed by government regulators, on the amounts of debt they can hold of certain ratings. Traditionally the cutoff rating of interest was that between investment and non-investment grade securities (Baa and Ba on the Moody's scale). However, recent regulations have established other important cutoffs at the Aa and even A ratings. 11 As most of these regulations only require that the highest or second highest rating be above the cutoff point, the firm's chances of meeting the standard increase if a third or fourth rating is obtained. Therefore, firms could have a strong incentive to obtain multiple ratings in order to make it possible to sell their debt to these regulated institutions. 12 This hypothetical practice of obtaining multiple ratings in the hope of getting one rating above a regulatory cutoff is termed "rating shopping." Adding to the desirability of seeking many ratings is that once a rating has been requested from Fitch IBCA or Duff & Phelps, it is only made public if the firm is satisfied with it.
9 Cantor and Packer (1994, 1996) document this fact. They also show that part of this difference (but not all) can be explained by differences in the firms rated by Duff & Phelps and Fitch IBCA. 10 According to Cantor and Packer, typical fees on new long-term corporate debt range from 2 to 3 basis points of the principal for each year the rating is maintained. 11 For example, Congress has established the AA rating as the cutoff in determining the eligibility of mortgage-related securities and foreign bonds as collateral for margin lending. In addition, the National Association of Insurance Commissioners has adopted capital rules that give the most favorable capital charge to bonds rated A or above. (Cantor and Packer, 1994)
Thus, requesting a rating from Fitch IBCA or Duff & Phelps is similar to buying an option on a rating. This has the effect of ensuring that lower-than-expected ratings from these agencies are rarely, if ever, made public. As a result, the average observed rating from Fitch IBCA and Duff & Phelps is likely to be significantly higher than the "true" average rating from the two agencies.
Another possibility is that obtaining a better bond rating from the third or fourth rater may convey information to the market that reduces the cost of borrowing for the firm. Several academic papers have investigated the effect of split bond ratings from Moody's and S&P on bond yields. These papers have failed to reach a consensus on how the market prices bonds with split ratings. Billingsley, et al. (1988), Liu and Moore (1987), and Perry, Liu, and Evans (1988) all find that the market prices bonds with split ratings as if only the lower of the two ratings conveys information. Thus the higher of the two ratings gives no interest cost reduction to the firm. However, Hsueh and Kidwell (1988) and Reiter and Zeibart (1991) find that the market prices the bonds as if only the higher of the two ratings conveys information. These different results may be attributable to differences in the samples used by the various papers. More recently, Jewell and Livingston (1997) show that when firms receive a split rating from Moody's and S&P, the Treasury (default) spread on the bond is an average of the typical spreads on bonds with the higher of the ratings and the typical spreads on bonds with the lower of the ratings. This suggests that the market considers an average of the two ratings when determining default spreads for the bond. Thus the market places some value on both bond ratings. To date, no research has been done on the impact of Fitch IBCA or Duff & Phelps ratings on bond default spreads.
Cantor and Packer lean toward endorsing the regulatory theory. They suggest that the difference in average ratings between Moody's and S&P versus Fitch IBCA and Duff & Phelps is due to the latter group having lax rating standards. They go on to suggest that there may be a need for government regulators to impose uniform standards on all rating agencies.
This would prevent firms from obtaining artificially high ratings merely for the purpose of meeting the above-mentioned regulatory hurdles on debt ratings. However, in their 1997 paper, Cantor and Packer empirically test for the existence of "rating shopping." They find no evidence that firms obtaining Fitch IBCA ratings are doing so in order to "game" rating regulations.

The purpose of this paper is to compare the ratings of three of the major bond rating agencies (Moody's, S&P, and Fitch IBCA) 13 in an attempt to ascertain whether or not the third rating provides the market with any incremental information. If there is incremental information in the ratings, then regulation to ensure uniformity in rating processes is likely to destroy it, thus impairing the market's ability to accurately assess the credit risk of firms. In addition, the existence of incremental information in a third rating would explain why some firms seek out these additional ratings. The issue of whether or not Fitch IBCA ratings provide any incremental information can be addressed through answering three questions.
First, do all three agencies appear to have the same policies on how to grade default risk? This will primarily impact the mean rating level of each agency. Fitch IBCA ratings are found to be significantly higher than those of Moody's and S&P, even after attempting to correct for the selection bias present in the Fitch IBCA ratings. However, the magnitude of the difference in ratings is small in absolute and relative terms. In 90% of the observed cases Fitch IBCA gives the same letter rating to an issue as either Moody's or S&P (or both).
Second, do all three agencies appear to have the same policies on when to change ratings? This will impact both the frequency of rating changes and the magnitude of the change when a change occurs. Fitch IBCA is found to change its ratings far less frequently than either Moody's or S&P. However, this is somewhat offset by larger magnitudes of rating changes for Fitch IBCA. This is consistent with a policy of focusing on long-term default risk, which Fitch IBCA professes to follow. 14
Third and finally, do Treasury spreads reflect the current level of the Fitch IBCA rating for bonds rated by Fitch IBCA? This question addresses in a direct manner whether the market finds any information content in publicly released Fitch IBCA ratings. If Fitch IBCA ratings contain incremental information, there should be a statistically significant correlation between the ratings and Treasury spreads. Regression analysis shows that this is the case. Publicly released Fitch IBCA ratings are found to provide additional information over and above that provided by Moody's and S&P.
In sum, the evidence shows that Fitch IBCA appears to follow somewhat different policies on evaluating credit risk and on changing ratings than its larger competitors. However, these different policies appear to lead to incremental information that the market values.

II. THE RATING PROCESS
The purpose of bond ratings has always been to provide the public and government regulators with an estimate of the default risk associated with particular bond issues. Many studies over the years have documented the fact that bond ratings do an excellent job of rank ordering the default risk of debt issues. For example, AA-rated bonds have lower default probabilities over any time horizon than A-rated bonds, which in turn have lower default probabilities than BBB-rated bonds, etc. See Hickman (1958), Cantor and Packer (1995), and Carty and Fons (1994), among others.
Moody's and S&P follow a policy of rating most SEC registered corporate debt and of requesting the issuer to pay a fee. The fee is optional, but issuers paying the fee are able to present their case to the raters through a series of meetings and other interactions. The importance placed on this process by the issuer is evidenced by the fact that the CEO and CFO typically attend meetings with the rating agency.
The structure of the rating fees varies somewhat among the agencies and even among the firms rated by the same agency. The most common fee charged by Moody's and S&P for firms that already have outstanding debt is two to three and a quarter basis points of the par value of the bond issue for each year that the rating is maintained, though this may be modified based on issue complexity. The most common fee for Fitch IBCA is two and a half basis points. The resulting fees are $20,000 to $30,000 per year on a $100 million issue. First-time issuers are subject to higher fees due to the additional time and effort involved in a new rating.
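The basis-point arithmetic behind these dollar figures can be sketched as follows. This is only an illustration of the conversion (one basis point is one hundredth of one percent of par, per year); the function name is ours and the rates are simply the ranges quoted above.

```python
# Illustrative only: converts an annual rating fee quoted in basis points
# of par value into a dollar amount. Rates are the ranges quoted in the text.

def annual_rating_fee(par_value, basis_points):
    """Fee per year = par value * (basis points / 10,000)."""
    return par_value * basis_points / 10_000

par = 100_000_000  # a $100 million issue

low = annual_rating_fee(par, 2.0)    # 2 bp of par per year
high = annual_rating_fee(par, 3.0)   # 3 bp of par per year
fitch = annual_rating_fee(par, 2.5)  # Fitch IBCA's typical 2.5 bp

print(low, high, fitch)  # 20000.0 30000.0 25000.0
```

At the upper end of the quoted Moody's/S&P range, 3.25 basis points, the same arithmetic gives $32,500 per year, slightly above the round $30,000 figure cited in the text.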
Fitch IBCA provides ratings for individual bond issues, at prices ranging from $10,000 to $100,000 plus a smaller annual maintenance fee, depending on the size and complexity of the issue. However, Fitch IBCA encourages issuing firms to pay an annual "relationship fee" that will cover the cost of rating all preferred stock, bonds, and commercial paper issues over the course of the year. These fees range in size from $10,000 to over $1,000,000 depending on the expected market activity of the issuing firm. According to Cantor and Packer, user fees constitute approximately 80% of the revenue of the rating agencies.
Although there are some variations in the details of the rating process among the various agencies, all of the full-service agencies follow the same basic sequence of events when analyzing firms and assigning ratings. If the firm has existing public, rated debt, the issuer along with the underwriter will approach the rating agencies, which assign a rating team and support personnel. If the firm does not have publicly rated debt, the issuer may request a preliminary meeting with the rater. In some cases, the rater may issue a preliminary opinion based on public information without a preliminary meeting. After this preliminary opinion, the first time issuer must decide whether or not to proceed with the debt issue and rating process. If the firm proceeds, the rating agency will assign a team of analysts and support personnel to the project. The firm typically provides this team with five years of financial statements, forecasts of key financial performance measures, and capital spending and financing plans. Analysts are also sometimes provided with inside information, such as internal reports created for the use of senior officers and the board.
The rating team will meet with the senior officers of the firm, typically including the CEO, CFO, and Treasurer, and discuss the firm's position in depth. The rating team presents its analysis of the firm's credit position and answers questions posed by the firm's representatives. Following this meeting, the rating agency holds a meeting of its rating committee. This committee typically consists of the lead analyst from the rating team, along with other analysts familiar with the industry and several senior officers from the rating agency. The final rating is decided by a majority vote of this committee.
Fitch IBCA's rating process is different from Moody's and S&P in two key respects. First, Moody's and S&P rate most issues of sufficient size, while Fitch IBCA rates issues only on request. 16 Second, Fitch IBCA provides issuers several opportunities to decide against publicizing the rating once the process has begun. Typically, Fitch IBCA allows the issuing firm to withdraw the rating at any point prior to the meeting of the rating committee. Once the rating committee decides on a final rating, Fitch IBCA is committed to making the rating public. However, the firm can estimate its likely rating with a very high degree of accuracy following the meeting of the rating team with the issuing firm's management. Therefore, the rating process can easily be halted if the expected rating is below the desired level. When this situation occurs, the issuing firm must pay Fitch IBCA for expenses incurred up to that point, but no rating is made public. 17 Thus, the purchase of a bond rating from Fitch IBCA has option-like characteristics.
Clearly, this option-like characteristic of Fitch IBCA ratings merits more exploration. Unfortunately, there is no data available on firms that request ratings and then refuse to release them. In fact, Fitch IBCA claims to not even maintain summary data about how many firms have requested ratings but failed to release them. Until these data problems are solved, it will be difficult to conduct empirical tests on potential differences between the firms that release ratings and those that do not.

III. LITERATURE REVIEW
Bond ratings have long been an area of interest for academic researchers. Historically, there have been several major branches of research in this area. The first branch focused on attempting to determine how rating agencies arrive at their assigned rating for a particular issue. This usually involved a statistical model with rating categories as the dependent variable and various firm and issue characteristics as the independent variables. West (1970) and Kaplan and Urwitz (1979) among many others are excellent examples of this branch of the literature.
A second branch of the literature has focused on determining whether or not bond ratings have any predictive power for financial distress. In other words, whether low-rated bonds are more likely to default than high-rated bonds. Beaver (1966) and Fons and Kimball (1991) are typical of research in this area.
The current study is much more closely related to two other areas of bond research: (1) comparing ratings from different agencies; and (2) assessing the impact of bond ratings on yields. Unlike the first branch of the literature mentioned above, we are not concerned with the determinants of the bond ratings. Unlike the second branch of the literature we are not concerned with future default. Rather, we take the ratings as a given, then compare the ratings of the various agencies. In addition, we are concerned with the market perception of the ratings, hence the need for a statistical model of ratings and yields. The following is a more complete survey of the literature that closely relates to this study.

LITERATURE COMPARING RATINGS OF FITCH IBCA, MOODY'S AND S&P
To date, very few studies have acknowledged the existence of rating agencies other than Moody's and S&P. One of the first acknowledgments of "third raters" was from Cantor and Packer (1994). The authors used a large sample of bond ratings from the end of 1990 to perform various tests. The sample contains 1398 bonds jointly rated by Moody's and S&P, 524 bonds rated jointly by Moody's and Duff & Phelps, and 295 bonds rated jointly by Moody's and Fitch IBCA. Moody's ratings were used as the base case since Moody's had the most ratings in the sample. A comparison of the mean rating levels of these jointly rated bonds revealed that S&P's mean rating was .05 notches higher than Moody's, while Duff & Phelps was .38 notches higher and Fitch IBCA was .29 notches higher. Similar comparisons were also done for original issue junk bonds over the period 1989 to 1993. Again Moody's and S&P had virtually identical mean ratings, while Duff & Phelps was .97 notches higher than Moody's and Fitch IBCA was almost 1.4 notches higher than Moody's. The authors interpret these differences as evidence that Fitch IBCA and Duff & Phelps have more lenient rating scales than Moody's and S&P.
The authors next attempt to find out what types of firms are more likely to seek out a third (or fourth) bond rating. They find that 46% of firms in their sample with one investment grade rating and one non-investment grade rating from the two major agencies seek a third rating. Of these firms, approximately 85% (29 of 34) receive an investment grade rating from the third agency. As the firms' ratings from Moody's and S&P grew further from the investment grade cutoff, fewer third ratings were sought. The authors conclude that it appears third ratings are more likely if the firm is closer to an investment grade rating. Combined with the above results on differences in mean rating levels, this was very suggestive of rating shopping on the part of some firms.
Cantor and Packer (1997) revisit the issue of rating shopping by firms. More specifically, the authors test two theories on the existence of third ratings. The first theory is that third ratings are more likely when there is great uncertainty about the default risk of the firm. If this is the case, the third rating could provide valuable incremental information to the market about the default risk of the firm. There are several factors that would support this theory. First, third ratings would be more common for firms that have split ratings from Moody's and S&P. Second, the likelihood of a third rating should increase as the difference (in rating notches) between Moody's and S&P grows. Finally, the authors believe that default risk should be inherently more uncertain for small firms and firms with high leverage. Interestingly, probit regressions revealed that none of the above factors increased the likelihood of a third rating. In fact, many of the above factors significantly decreased the likelihood of a third rating.
The second theory the authors investigate is that third ratings are more likely when the debt-issuing firm is shopping for a better rating. According to this theory, a third rating should be more likely when the existing ratings of the firm are close to important regulatory cutoff ratings, such as the investment grade cutoff. However, regression analysis revealed that this also was not true. Therefore, rating shopping does not appear to explain the existence of third ratings.
The authors finally turn to explaining the difference in mean rating levels between the third rating agencies and the two major rating agencies. In a sample of year-end 1993 ratings, the mean Fitch IBCA rating was .74 notches higher than the mean Moody's rating and .56 notches higher than the mean S&P rating, while the mean Duff & Phelps rating was .57 notches higher than Moody's and .36 notches higher than S&P. Heckman's two-stage approach was used to determine how much of the difference in mean ratings was due to sample selection bias caused by differences in the firms rated by the agencies. These tests show that .31 of the .74 notch observed difference between Fitch IBCA and Moody's can be explained by sample selection bias. However, selection bias can account for none of the observed .56 notch difference between Fitch IBCA and S&P. Selection bias also accounts for .33 of the .57 notch difference between the mean ratings of Duff & Phelps and Moody's, and .16 of the observed .36 difference between Duff & Phelps and S&P. In sum, the authors find that selection bias can account for about 40%-50% of the observed difference in ratings between the major agencies and the third agencies. They infer that the remaining difference is due to more lenient rating standards used by Fitch IBCA and Duff & Phelps. The authors conclude that reputational concerns do not prevent Fitch IBCA and Duff & Phelps from giving artificially high ratings on average. The implication is that one of two actions should be taken. Either financial regulations should be redesigned to ensure equivalent rating scales (equivalent average ratings) on the part of all rating agencies, or ratings from Fitch IBCA and Duff & Phelps should not be considered for purposes of meeting regulatory rating requirements.
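The decomposition reported above can be checked with simple arithmetic: for each pair of agencies, the share of the observed notch gap attributable to selection bias is the bias component divided by the total gap. The snippet below just recomputes those ratios from the figures quoted in the text; it adds no new data.

```python
# Arithmetic check of the Heckman decomposition figures quoted in the text:
# (notches explained by selection bias, total observed notch gap) per pair.
gaps = {
    ("Fitch IBCA", "Moody's"): (0.31, 0.74),
    ("Fitch IBCA", "S&P"): (0.00, 0.56),
    ("Duff & Phelps", "Moody's"): (0.33, 0.57),
    ("Duff & Phelps", "S&P"): (0.16, 0.36),
}

# Share of each gap that selection bias explains.
shares = {pair: bias / total for pair, (bias, total) in gaps.items()}

for (third, major), s in shares.items():
    print(f"{third} vs. {major}: {s:.0%} of the gap from selection bias")
```

The individual shares range from 0% (Fitch IBCA vs. S&P) to 58% (Duff & Phelps vs. Moody's), consistent with the authors' summary that selection bias accounts for roughly 40%-50% of the difference overall.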
The two studies by Cantor and Packer make several useful contributions to the literature on rating agencies. They document the higher average ratings of the "third" rating agencies compared to the two major agencies. In addition, they find no evidence for the theories that only firms with greater default risk uncertainty or firms engaged in ratings shopping are interested in obtaining third ratings. However, there are several questions the authors leave unanswered. First, if ratings shopping and default risk uncertainty are not the major motivation for obtaining third ratings, what is? Second, how do the third rating agencies compare to Moody's and S&P in areas beside mean ratings, such as frequency of rating changes? Third, does the market value the ratings of the third rating agencies, and if so, why should they be forced to conform to the standards of Moody's and S&P?

LITERATURE ON SPLIT RATINGS AND THEIR IMPACT ON BOND YIELDS
The first real acknowledgement that split ratings might impact bond yields and underwriter spreads was from Sorensen (1979). In a study devoted primarily to comparing interest costs of bonds sold by competitive bids versus those sold by negotiation, Sorensen used control variables that identified issues with split ratings. Using a data set of 716 newly issued industrial and utility bonds issued between January 1974 and April 1978, the author attempted to find the determinants of the "true" interest cost, 18 bond yield, and underwriter spread. The independent variables included dummies representing each of the Moody's ratings, two dummies indicating whether the S&P rating was higher or lower than the Moody's rating 19 (with the case where Moody's and S&P issued the same rating being omitted), a dummy indicating whether the issue was sold by competitive bid or negotiation, and other control variables.
The test showed that when the S&P rating was higher than the Moody's rating, yield and true interest cost both fell by about 16 basis points, while underwriter spread increased by 1.7 basis points. When the S&P rating was lower than Moody's rating, true interest cost and yield both rose by about 12 basis points, while underwriter spread increased by about 5 basis points. Sorensen concluded from this that a second rating would lower the cost of borrowing if it were more favorable than the first rating. Conversely a second rating would raise the cost of borrowing if it were less favorable than the first rating. Thus the incentives to obtain a second rating were not clear. It is interesting to note that the underwriter spread always increased if a split rating occurred, regardless of whether the second rating was favorable or not.
Ederington (1986) explored three possible reasons why Moody's and S&P might disagree about the ratings on new debt issues. The first possible reason is that the two agencies agree on the probability of default for the bond, but have different standards for assigning particular ratings. The second possibility is that there may be systematic differences in the rating procedures used by the two agencies that lead to different estimates of the probability of default for certain issues. The third hypothesis is that there are no systematic differences in the agencies' standards for particular ratings or in their rating procedures. According to this third hypothesis split ratings would occur because "some nonsystematic variation in raters' judgements occurs from issue to issue and from day to day." This would cause a particular problem for issues whose "true" rating lies close to the cutoff point between adjacent ratings.
Ederington used a sample of 494 industrial bonds, 67 of which had split ratings, to test the three hypotheses. Using an ordered probit model to predict both Moody's and S&P ratings based on publicly available financial information, he found no consistent differences in the standards for particular ratings between the two agencies (thus rejecting the first hypothesis). In addition, Ederington found no evidence that the two agencies place different levels of importance on the various financial information included in the tests (thus rejecting the second hypothesis). Ederington therefore accepted the third hypothesis and concluded that split ratings must be the result of random differences in raters' judgements about the creditworthiness of particular issues. This implies that firms would have an incentive to seek additional ratings (beyond the first) if they believed that an error in judgement caused them to receive an inaccurately low rating, or conversely that an error in judgement could cause them to receive an inaccurately high rating from the next rater.

Billingsley, Lamy, Marr, and Thompson (1985), henceforth BLMT, attempted to test empirically whether or not the market prices split ratings as if they are caused by random differences in judgement. The authors examined a sample of 258 industrial nonshelf bonds rated Ba and above, 33 of which received split ratings, issued between January 1977 and June 1983. The authors assigned a bond a rating of Aaa, Aa, A, or Baa (indicated by four dummy variables) only if the equivalent rating was received from both rating agencies. In addition, four different dummy variables were used to indicate split ratings. For example, a bond which received a Aaa rating from Moody's and a AA rating from S&P would have been assigned the rating SPLIT1. Likewise, a bond receiving a AA rating from S&P and an A rating from Moody's would have been assigned the rating SPLIT2.
The authors then regressed the yield off-Treasury 20 against the eight rating dummy variables (bonds assigned a Ba from both agencies were omitted) and several control variables. The regression results showed an inverse relationship between the yield off-Treasury and the better ratings, as expected. In addition, each of the eight rating categories except for SPLIT4 was found to have a coefficient significantly different from zero. However, when the authors tested the adjacent coefficients for significant differences, an interesting pattern emerged. The coefficients on the split ratings were found to be significantly different from the coefficients on the higher of the adjacent ratings, but not significantly different from those on the lower of the adjacent ratings. The authors concluded from this that the market prices bonds with split ratings as if only the lower of the two ratings conveys information. In addition, the authors argued that split ratings are not merely a result of random differences in judgement, but that they do in fact represent a significant "divergence of opinion concerning the true default risk" of particular issues. The market notes this divergent opinion, but chooses to value only the lower (more conservative) opinion when pricing the bond.

Liu and Moore (1987) and Perry, Liu, and Evans (1988) use very different techniques 21 and different samples 22 to find essentially the same result as BLMT. In both cases the authors conclude that the market considers only the lower of the two ratings when determining bond yields. According to the results of these three studies, there is no evidence of a cost-based incentive for firms to seek additional ratings.
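The dummy-variable design in the BLMT-style regression can be sketched in code. Because the illustration below uses only an intercept and mutually exclusive rating-category dummies (none of BLMT's other controls), the OLS coefficient on each dummy reduces to that category's mean spread minus the base category's mean. The category effects and data are synthetic assumptions, not BLMT's sample.

```python
import random

random.seed(0)

# Synthetic yield spreads by rating category (base category: Ba, omitted
# from the dummies, as in the BLMT setup). The effects are invented for
# illustration; SPLIT1 sits between Aaa and Aa as the text describes.
true_effect = {"Aaa": -2.0, "SPLIT1": -1.7, "Aa": -1.5, "A": -1.0, "Ba": 0.0}
observations = [
    (cat, 3.0 + eff + random.gauss(0, 0.05))
    for cat, eff in true_effect.items()
    for _ in range(200)
]

def dummy_ols(obs, base):
    """OLS with an intercept plus mutually exclusive category dummies:
    the intercept equals the base-category mean, and each dummy
    coefficient equals that category's mean minus the base mean."""
    groups = {}
    for cat, y in obs:
        groups.setdefault(cat, []).append(y)
    means = {cat: sum(ys) / len(ys) for cat, ys in groups.items()}
    intercept = means[base]
    return intercept, {c: m - intercept for c, m in means.items() if c != base}

intercept, coefs = dummy_ols(observations, base="Ba")
```

In this sketch the estimated coefficient on SPLIT1 falls between those on Aaa and Aa, which is exactly the question the significance tests address: whether a split category is priced at the higher rating, the lower rating, or somewhere in between.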
However, three other studies find strikingly different results, which could indicate a powerful cost-based incentive to seek additional ratings. Hsueh and Kidwell (1988) used a sample of 1512 general obligation bonds issued in the state of Texas between 1976 and 1983 to test the hypothesis that there is no benefit to seeking a second rating once the first rating has been obtained. Of the 1512 bonds in the sample, 560 (37%) had two identical ratings, 135 (9%) had split ratings, and 817 (54%) had only one rating. In order to account for the possibility that the decision to obtain more than one rating is not random, the authors used a switching regression to estimate how a second rating affected the interest cost of the debt issues. They found that having two identical ratings lowered the cost of borrowing by approximately five basis points relative to the cost with only one rating. In addition, the authors found that split ratings lowered the cost of borrowing relative to the lower of the two ratings by 16 to 21 basis points. F tests of the coefficients on the ratings confirm that in general there is no significant difference between the coefficients on the split ratings and those on the adjacent higher synonymous rating. This is the opposite of the BLMT result. The only exception to this finding was in the split rating category between A and Baa. For this category, the F tests showed that its coefficient was significantly different from the coefficients on both the A and the Baa ratings. In other words, the market priced this particular split category as a unique rating, between the A and Baa categories.
Reiter and Ziebart (1991) examined a sample of 320 public utility issues sold between February 1981 and February 1984, 53 of which (16.56%) had split ratings, to test several hypotheses. The authors used a simultaneous equations model to show that bond ratings provide incremental ability to explain bond yields over and above that provided by firm financial information. In addition, they showed that when split ratings occur, on average bond yields reflect the higher of the two ratings. This result should be interpreted with caution, however, as their sample contained systematic differences between Moody's and S&P ratings. Over 50% of the split ratings in the sample were rated A by Moody's and BBB by S&P, a systematic difference not present in the rest of the literature.

Jewell and Livingston (1997) use a sample of 1277 industrial bonds issued from 1980 to 1992 to re-examine the impact of split ratings on Treasury spreads and underwriter spreads. The authors performed tests similar to those of BLMT, with the expectation that their larger sample size would yield more powerful and accurate results.

21. The basic technique used in both of these papers involves comparing the average default yield premiums of split-rated issues to those of adjacent synonymously rated issues. Since this is a nonregression procedure, it does not control for other factors which could be contributing to differences in the default yield premiums.
22. Liu and Moore (1987) used a sample of 282 corporate bonds listed in the June 1984 issue of Moody's Bond Record. All of the bonds were nonconvertible senior claims rated investment grade by both agencies. Perry, Liu, and Evans (1988) used a sample of 269 nonfinancial corporate bonds obtained from two separate periods in 1982.
The regression analysis showed that when a split rating between Moody's and S&P occurred, the Treasury spread 23 on the bond fell roughly midway between the spreads typically found on bonds with two of the higher ratings and on bonds with two of the lower ratings. Thus, the market was placing some value on both ratings, not systematically ignoring either the higher rating or the lower rating as had previously been argued in the literature. The authors also showed, in separate tests, that the ratings of both Moody's and S&P added significant explanatory power to the Treasury spread model, but neither agency's ratings were found to be more important than those of the other.
As with the other areas of the literature reviewed above, the primary shortcoming of the split rating literature is that it has focused exclusively on ratings from Moody's and S&P. The next logical question is whether ratings from a third rating agency can also impact the spread. If ratings from other agencies are shown to impact the spread, this would be compelling evidence that these ratings "matter" to the market.

IV. DESCRIPTION OF THE DATA
The purpose of this paper is to compare Fitch IBCA's ratings to those of Moody's and S&P by answering the following questions. (1) Does Fitch IBCA give higher ratings than Moody's or S&P? (2a) Are Fitch IBCA's rating changes different from those of Moody's and S&P? (2b) Are the rating changes of Moody's and S&P different for firms with publicly released Fitch IBCA ratings compared to firms without? (3) Do Fitch IBCA ratings have a measurable impact upon yields?
To answer these questions, we examine a large data set of utility and industrial bonds for the 51-month period from January 1991 through March 1995. Moody's and S&P ratings, bond yields, maturities, and various indenture provisions were taken from the Warga Fixed Income Database, which is based on data collected by Lehman Brothers. 24 The database contains information on almost all bonds with face value over one million dollars that had an investment grade rating at some point. Thus one weakness of the database is the omission of most original issue junk bonds. This weakness is offset by the very large number of monthly observations from other bonds. Fitch IBCA ratings were obtained from Fitch IBCA Insights. Interest rates on U.S. Treasury securities were obtained from the Federal Reserve Bulletin.
We denote ratings with the symbols used by S&P and Fitch IBCA, as shown in Table 1. Ratings in the database are coded on a scale of 1 to 22, with 1 representing AAA, 2 representing AA+, and so on. Each number from 1 to 22 represents a rating "notch" or subrating.
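This 1-to-22 coding can be expressed directly. The scale below is an assumption based on the standard S&P/Fitch IBCA symbols from AAA down to D, since Table 1 itself is not reproduced in this excerpt.

```python
# Assumed notch scale: AAA = 1, AA+ = 2, ..., D = 22 (a lower number
# means higher perceived credit quality).
SCALE = [
    "AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
    "BBB+", "BBB", "BBB-", "BB+", "BB", "BB-",
    "B+", "B", "B-", "CCC+", "CCC", "CCC-", "CC", "C", "D",
]
NOTCH = {rating: i + 1 for i, rating in enumerate(SCALE)}

def to_notch(rating: str) -> int:
    """Convert a letter rating to its numeric notch code."""
    return NOTCH[rating]

def letter_grade(rating: str) -> str:
    """Strip the +/- modifier, for comparisons at the letter level."""
    return rating.rstrip("+-")
```

Under this coding, a mean rating of 6.27 rounds to notch 6 (A) and 8.86 to notch 9 (BBB), matching the letter-rating correspondences quoted later in the text.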
Several different groups of data are used in the tests in this paper. The following is a brief description of each data set.

FULL SAMPLE
The full sample contains monthly information on publicly traded corporate straight debt over the 51-month period January 1991 through March 1995. This sample is limited to one senior bond and one subordinated bond per firm to reduce "double counting" of events affecting all bonds of a firm. When one bond must be selected from a firm's many issues, the bond with the longest time to maturity is selected. If more than one bond from an issuer has the same time to maturity, the bond with the largest issue size is selected. There is no restriction on the length of time a bond may be in the sample. Therefore, some bonds may appear in the sample for only a few months, while others may be present for the full 51-month sample period. The total number of bonds in the full sample is 1475 at the beginning of the sample period and 1766 at the end of the sample period. Of these, 1177 were rated by both Moody's and S&P at the beginning of the sample period, and 1555 were rated by both at the end of the period. More details of the coverage of the full sample are available in Tables 2 and 3.
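The bond-selection rule described above (longest maturity first, with issue size as the tiebreaker) can be sketched as follows; the field names are hypothetical, not those of the Warga database.

```python
def select_bond(bonds):
    """Pick one bond per firm per seniority class: longest time to
    maturity, breaking ties by largest issue size. Each bond is a dict
    with hypothetical keys 'maturity' (years) and 'issue_size'."""
    return max(bonds, key=lambda b: (b["maturity"], b["issue_size"]))

candidates = [
    {"maturity": 10.0, "issue_size": 150},
    {"maturity": 25.0, "issue_size": 100},
    {"maturity": 25.0, "issue_size": 200},  # same maturity, larger issue
]
chosen = select_bond(candidates)  # the 25-year bond with issue size 200
```

Tuple comparison handles the two-level rule in one pass: maturity is compared first, and issue size only matters when maturities are equal.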

3-RATER SAMPLE
The 3-rater sample includes only those bonds rated by all three agencies -Moody's, S&P, and Fitch IBCA. Further, bonds must be rated by all three agencies for at least 12 consecutive months to appear in the sample. The 3-rater sample contains 235 bonds at the beginning of the sample period and 267 bonds at the end of the sample period.
The purpose of the 3-rater sample is to minimize the selection bias present in the full sample due to Fitch IBCA providing solicited ratings only. Firms must make a conscious decision to retain the rating services of Fitch IBCA. In contrast, Moody's and S&P rate almost all SEC-registered public debt, whether the issuer wants the rating or not. Firms requesting a Fitch IBCA rating can decline to make the rating public. Thus, the observable Fitch IBCA ratings are from firms willing to make the rating public, possibly implying that bonds rated by Fitch IBCA have high ratings compared to the other raters. This bias can be reduced by comparing bonds with ratings from all three agencies.

MARCH SAMPLE

The March sample contains observations from the month of March in each of the years 1991, 1992, 1993, 1994, and 1995. Firms may be represented by multiple bonds in this sample. This results in a total of 24,886 observations. This sample is used exclusively for regression analysis of the determinants of bond Treasury spreads. Multiple bonds per firm should not be a problem in these tests. Although the ratings of a firm's bonds are highly correlated, there is a low degree of correlation among the other independent variables in the regressions.

MARCH 3-RATER SAMPLE
The March 3-rater sample is a subset of the March sample. It includes only bonds rated by all three rating agencies. The sample contains 8,359 observations, roughly one-third the number of observations in the March sample. This sample is used to examine the impact of Fitch IBCA ratings on market yields without the impact of selection bias. More details of the composition of the 3-rater and March 3-rater samples are available in Table 19.

Table 2 shows a summary of the number of bonds rated by each agency in the full sample at the beginning and end of the sample period. It is interesting to note that approximately three to four percent of the bonds are unrated by any of the major agencies. These issues may be rated by other, unspecified rating agencies. For purposes of this study, however, they are considered unrated, and thus excluded from any further analysis. Surprisingly, 16% of the sample in 1991 and 7.5% of the sample in 1995 have only one rating. Most of these issues have Moody's as their sole rater. The number of cases in which Moody's and/or S&P do not provide ratings is puzzling in light of their stated policies of rating virtually all SEC-registered public debt.

Table 3A summarizes ratings, issue size, and other issue characteristics by agency for each bond in the full sample. These statistics show the bonds rated by Moody's and S&P to be virtually identical. However, the smaller mean and median issue size for Moody's indicates that the approximately 150 bonds rated by Moody's but not by S&P must be far smaller issues on average than the bonds rated by both agencies. Table 3A also shows several differences in the full sample between bonds rated by Fitch IBCA and those rated by Moody's and S&P. First, Fitch IBCA has a far larger percentage of AA, A, and BBB rated bonds than the other two agencies, and a far smaller percentage of AAA and non-investment-grade bonds. Second, a very high percentage of the firms that Fitch IBCA rates are utilities.
The Moody's and S&P samples are both approximately 20% utilities, while the rate for Fitch IBCA is three times as high. Third, the mean issue size for Fitch IBCA is relatively close to that of Moody's and S&P, but the median issue size is only about 20-25% of the medians for the other two agencies. This indicates that Fitch IBCA rates relatively more small issues. Finally, the Fitch IBCA issues appear to have a slightly shorter time to maturity than those of the other agencies.

Table 3B shows summary information for the 3-rater sample at the beginning of the period. Cantor and Packer (1997) have suggested that many bonds rated by Fitch IBCA are rated as junk bonds by Moody's and S&P. From Tables 3A and 3B, however, the overwhelming majority of the bonds rated by Fitch IBCA are rated investment grade by Moody's and S&P. Fitch IBCA has somewhat more bonds rated in the AA category than the other raters. Cantor and Packer (1995, 1996a) have argued that Fitch IBCA ratings are inflated. To examine this issue more carefully, we focus on a 3-rater sample containing firms rated by all three raters: Moody's, S&P, and Fitch IBCA. In this 3-rater sample, Fitch IBCA's ratings are only slightly higher than those of the other two rating agencies.

VI. COMPARING RATING LEVELS

Table 4 compares the mean ratings of the three rating agencies at two points in time, the beginning and end of the sample period, for both the full and 3-rater samples. The intervening years of 1992-1994 are omitted for simplicity and readability since they show essentially identical patterns as the two years presented. This same convention is used in several other tables as well. Ratings are converted to numbers using the system presented in Table 1, in which 1 represents a AAA rating and 22 represents a D rating; lower numerical ratings therefore represent higher perceived credit quality. Bonds in default are excluded from this comparison. In the full sample, the mean ratings of Fitch IBCA are quite different from those of the other two agencies. At the beginning of the sample, Fitch IBCA's mean of 6.27 corresponds most closely to a letter rating of A. S&P's and Moody's mean ratings of 8.45 and 8.86 correspond most closely to letter ratings of BBB+ and BBB respectively. Therefore, the difference in the mean ratings at the beginning of the sample is almost a full letter grade. A similar pattern holds at the end of the sample period as well.
The full sample means may be misleading because Fitch IBCA may provide ratings for higher quality firms (with higher ratings). To correct for this bias, the 3-rater sample, which contains only bonds rated by all three agencies, is examined. In the 3-rater sample, Fitch IBCA continues to have a higher mean rating, but the difference is much smaller. Rather than a difference of 2 to 2.5 rating notches as in the full sample, the difference in means is approximately .3 notches (subratings) at the beginning of the sample period. By the end of the sample this difference rises to over .5 notches, but this is still a far smaller difference than in the full sample.
In Table 4, the mean ratings for Moody's (and S&P) are significantly higher (lower numerical rating codes) for the 3-rater sample than for the full sample. 25 For example, for Moody's the January 1991 mean rating for the 3-rater sample is 6.56, while the mean rating for the full sample is 8.86. For S&P the January 1991 mean rating for the 3-rater sample is 6.52, while the mean rating for the full sample is 8.45. These differences are significant at the one percent level, as are the corresponding differences for the January 1995 samples. Thus, the 3-rater sample contains firms with higher ratings than the full sample.

Table 5 tests the differences in mean ratings for statistical significance. In the full sample, every difference among the three rating agencies is statistically significant at the .01 level. However, in the 3-rater sample there is no significant difference between the mean S&P and mean Moody's ratings at either the beginning or the end of the period. Fitch IBCA's mean rating is still significantly higher than both S&P's and Moody's at the .01 level.

Table 6 addresses the same issue of comparing mean ratings from the agencies in a slightly different way. It presents mean ratings at the beginning and end of the sample period, but only for those bonds that are present in the 3-rater sample for at least 48 months. This technique not only looks at mean ratings at two different points in time, but also holds the bonds analyzed constant across the two periods. Thus, it probably gives the best method of comparing the agencies' ratings to each other over time. Table 7 tests the differences in means for the 3-rater sample.
Tables 6 and 7 show that Fitch IBCA still has the highest mean ratings throughout the sample period. The differences are slightly smaller for this group of bonds than for the 3-rater sample as a whole. Mean ratings at the beginning of the sample period are 6.16 for Fitch IBCA, 6.36 for S&P, and 6.48 for Moody's. The differences between Fitch IBCA and S&P and between Fitch IBCA and Moody's are statistically significant, while the difference between S&P and Moody's is not. The same pattern holds for the end of the sample period. Table 7 also shows clearly that the differences in mean ratings between Fitch IBCA and the other two agencies widen slightly over the sample period. The difference between Fitch IBCA and S&P increases from .2 notches to .47 notches, while the difference between Fitch IBCA and Moody's increases from .32 notches to .39 notches. The difference between Moody's and S&P ratings actually narrows slightly, from .12 notches to .08 notches.
In summary, the 3-rater sample has higher quality firms than the average of the full sample. This indicates that firms seeking out a Fitch IBCA rating have lower perceived credit risk than the average firm. When comparing ratings for firms rated by all three agencies in the 3-rater sample, the average ratings are much closer than in the full sample. For the beginning period, the difference in mean ratings drops from approximately two notches in the full sample to approximately .3 notches in the 3-rater sample. Thus, Fitch IBCA does give higher average ratings than either Moody's or S&P, but about 85% of the difference in mean ratings is caused by differences in the credit quality (as perceived by Moody's and S&P) of the firms that Fitch IBCA rates.
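The 85% figure follows from the approximate notch gaps just quoted. As a back-of-the-envelope check (using the rounded values from the text, not the exact table means):

```python
full_sample_gap = 2.0   # approximate Fitch IBCA vs. Moody's/S&P gap, full sample (notches)
three_rater_gap = 0.3   # the same gap within the 3-rater sample (notches)

# Share of the full-sample gap attributable to which firms Fitch IBCA
# rates (selection), rather than to more lenient rating standards:
selection_share = (full_sample_gap - three_rater_gap) / full_sample_gap
# selection_share ≈ 0.85
```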

VII. COMPARING SPLIT RATINGS
A split rating occurs when rating agencies assign different ratings to the same issue. Split ratings may shed special light on the value of a Fitch IBCA rating. A priori, when Moody's and S&P disagree on a rating, a Fitch IBCA rating is expected to have value as a tiebreaker.

Table 8 compares Moody's and S&P at the beginning and end of the sample period. In the full sample, the two agencies gave different ratings at the notch level 49.70% of the time at the beginning of the sample period, and 55.31% of the time by the end of the sample period. At the letter level, 15.12% of issues had split ratings at the beginning of the period and 18.14% by the end of the sample period. In the 3-rater sample, a similar pattern holds. The percentage of bonds with split ratings at the
notch level begins at 43.40% and increases to 49.63%, while the percentage split at the letter level begins at 12.77% and increases to 16.18%. These percentages of split ratings are all consistent with those reported previously in the literature. 26

If split ratings are caused by random errors on the part of one of the rating agencies, and if each agency is equally likely to commit one of these random errors, then we would expect each agency to have the higher rating approximately 50% of the time when a split rating occurs. However, if one of the agencies consistently gives the higher rating when a split occurs, a significant difference between the agencies in the evaluation of credit risk is more likely.
In Table 8, S&P gives the higher rating 49.57% of the time in the full sample when a split occurs at the notch level at the beginning of the period. This percentage increases to 55.47%, which is statistically different from 50%, by the end of the sample period. When a split occurs at the letter level, S&P gives the higher rating a surprising 69.10% of the time at the beginning of the period. This declines to 62.06% by the end of the sample period. Both of these percentages are statistically different from 50%. In the 3-rater sample a similar pattern holds: whenever the percentage differs significantly from the 50% mark, it is S&P that gives the higher rating.

Table 9 focuses on the 3-rater sample in order to include Fitch IBCA in the analysis. The technique employed is to examine cases where Moody's and S&P have identical ratings and then cases where Moody's and S&P disagree. Since there is very little difference between the results for the beginning and end of the sample period, the discussion focuses on the beginning of the period.
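The comparisons against the 50% benchmark are standard one-sample proportion tests. A minimal sketch using the normal approximation to the binomial (the counts in the usage example are hypothetical, not the paper's):

```python
from math import sqrt

def z_vs_half(higher_count: int, n_splits: int) -> float:
    """Z statistic for H0: an agency gives the higher rating exactly 50%
    of the time when a split occurs (normal approximation, p0 = 0.5)."""
    p_hat = higher_count / n_splits
    se = sqrt(0.5 * 0.5 / n_splits)
    return (p_hat - 0.5) / se
```

For example, if one agency gave the higher rating in 62 of 100 splits, `z_vs_half(62, 100)` returns 2.4, which exceeds the 1.96 cutoff for significance at the 5% level.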
When Moody's and S&P have identical ratings at the notch level, Fitch IBCA agrees with them 58.65% of the time. If Fitch IBCA disagrees with the other agencies, Fitch IBCA gives the higher rating 70.91% of the time. When Moody's and S&P have the same letter rating, Fitch IBCA agrees with them over 87% of the time. If Fitch IBCA disagrees with them, Fitch IBCA gives the higher rating 73.08% of the time. This pattern holds identically for the end of the period. Thus, when Moody's and S&P agree with each other at the notch or letter level, Fitch IBCA also agrees with them the majority of the time. However, if Fitch IBCA disagrees with the other two agencies, the Fitch IBCA rating is typically higher.
When S&P and Moody's disagree at the notch level, Fitch IBCA agrees with Moody's 33.33% of the time and with S&P 38.25% of the time. Thus, Fitch IBCA agrees with either Moody's or S&P over 71% of the time when Moody's and S&P split at the notch level. In these cases, the market may view the Fitch IBCA rating as a "tie-breaker" between the different ratings of Moody's and S&P. The remaining 29% of the time Fitch IBCA has a different rating than either Moody's or S&P, creating a three way split. When Fitch IBCA disagrees with Moody's at the notch level, Fitch IBCA is higher 60.29% of the time. When Fitch IBCA disagrees with S&P, Fitch IBCA is higher 69.84% of the time.
When Moody's and S&P disagree at the letter level, Fitch IBCA agrees with Moody's 40% of the time and with S&P 50% of the time. Thus, Fitch IBCA agrees with either Moody's or S&P 90% of the time when a split rating occurs between the two major agencies. Fitch IBCA consistently gives the higher rating when it disagrees with the other agencies. When Fitch IBCA and Moody's disagree at the letter level, Fitch IBCA gives the higher rating 72.22% of the time. When Fitch IBCA disagrees with S&P at the letter level, Fitch IBCA gives the higher rating 60.00% of the time.
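The tiebreaker classification running through this discussion can be summarized in a small helper. Ratings here are the 1-22 notch codes, and the function is an illustrative reconstruction rather than the authors' procedure:

```python
def fitch_role(fitch: int, moodys: int, sp: int) -> str:
    """Classify Fitch IBCA's position when Moody's and S&P split
    (ratings are numeric notches; a lower number means higher quality)."""
    if moodys == sp:
        return "no Moody's/S&P split"
    if fitch == moodys:
        return "tiebreak toward Moody's"
    if fitch == sp:
        return "tiebreak toward S&P"
    return "three-way split"
```

Applied bond by bond, a tally of these four labels reproduces the percentages discussed above: the first case is Table 9's agreement panel, the middle two are the tiebreaker cases, and the last is the roughly 29% (notch level) of three-way splits.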
Perhaps the most interesting fact about Table 9 is that Fitch IBCA gives the higher rating when a split occurs a majority of the time in every single set of comparisons, both at the beginning and the end of the sample period. This confirms the earlier evidence of higher ratings by Fitch IBCA.
Also shown in Table 9 are comparisons of the mean ratings of the agencies for each of the three cases. The differences are calculated as the Fitch IBCA rating minus the other agency's rating. Negative values indicate a higher rating for Fitch IBCA. Fitch IBCA has the higher mean rating in all six comparisons shown in the table, although none of the differences are significant at conventional levels.
Agreement between Moody's and S&P on a rating at the notch level indicates relatively little uncertainty about the bond's default risk. In this case, we expect
Fitch IBCA's mean rating to be closer to those of Moody's and S&P. When Moody's and S&P have a split rating, we expect Fitch IBCA's mean rating to be farther from those of the other agencies due to the greater uncertainty about the bond's default risk. This pattern does not hold true in the data, however. Fitch IBCA does have the higher mean rating in every case, but there is no significant relationship between the size of the difference in mean rating and a Moody's/S&P split. In addition, due to the reduced sample size from splitting the sample into two cases (split and nonsplit ratings), the mean rating for Fitch IBCA in each case is not significantly different from the mean ratings of the other agencies.
Taken together, Tables 8 and 9 suggest important differences among the three rating agencies. Though the agencies agree with each other a large percentage of the time, both at the notch and the letter level, S&P consistently gives higher ratings than Moody's when they disagree and Fitch IBCA consistently gives higher ratings than either Moody's or S&P when there are disagreements.

VIII. RATING CHANGES
As the financial condition of a firm changes, rating agencies adjust their ratings. In deciding upon a policy for rating changes, each rating agency must consider several factors. Frequent rating changes keep investors up to date on the current financial health of the firm, but may rely too heavily on extremely short-term fluctuations in credit quality. Infrequent changes may indicate a policy of focusing on long-term default risk, but may fail to keep investors informed about relevant changes in short-term credit quality. Firms that are dependent on frequent debt issuances for corporate funding and want to know their expected rating with a high degree of certainty may prefer a policy of fewer rating changes.
Fitch IBCA professes a policy of focusing on long-term default risk and "rates through the business cycle." Therefore, we would expect to see fewer changes in Fitch IBCA ratings than in S&P and Moody's ratings (assuming the latter are not following this same long-term policy). The evidence below confirms this expectation.

Table 10 reports the number of bonds in the full and 3-rater samples that have no rating changes from each agency. Fitch IBCA has by far the highest percentage of bonds with no rating changes in both samples, while Moody's has the lowest percentage in each sample.

Table 11 reports the probability of a rating change per bond per year for each rating agency. This technique considers all rating changes by each agency for each bond relative to the number of observations for each bond. In Table 11, both S&P and Moody's have a greater probability of a rating change than Fitch IBCA. In fact, the probability of a Fitch IBCA rating change is approximately 1/3 that of the other two agencies in the full sample and about 1/2 in the 3-rater sample. Panel B confirms that all of the differences in probabilities are statistically significant in both samples.
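The Panel B significance tests compare proportions across agencies, and the table notes state that the Z statistics use the normal approximation of the binomial. A pooled two-proportion sketch (with hypothetical counts, since the underlying sample sizes are not reproduced here):

```python
from math import sqrt

def two_prop_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Pooled two-proportion Z test, e.g. for comparing the share of
    bonds with unchanged ratings (or with a rating change) between two
    agencies or two samples."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se
```

A Z value beyond ±1.96 rejects equality of the two proportions at the 5% level, and beyond ±2.58 at the 1% level.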
The nature of the firms rated by Fitch IBCA can be shown by comparing the ratings of Moody's and S&P in the full sample and the 3-rater sample. We shall see that firms rated by Fitch IBCA (and thus in the 3-rater sample) have different Moody's and S&P ratings than firms not rated by Fitch IBCA. For example in Table  11, firms rated by Fitch IBCA (and in the 3-rater sample) have a considerably lower 28 Jeff Jewell and Miles Livingston

3-rater Sample
Fitch IBCA-Moody 24.56% 7.28 * * * SP-Moody 6.58% 1.85 * * * * Significant at 1% level * * Significant at 5% level * Significance at 10% level Note: Differences are calculated by comparing Percentages of Unchanged Ratings from Panel A. Positive differences indicate the percentage of firms with unchanged ratings is higher for the first agency in the comparison. Z Statistics are calculated using the normal approximation of the binomial distribution.
probability of a rating change by Moody's and S&P than firms in the full sample, and the differences are statistically significant at the one percent level. 27 Thus, firms rated by Fitch IBCA have more stable ratings, as indicated by the ratings of Moody's and S&P. Table 12 reports the number of rating changes by each agency over the sample period divided by the number of bonds rated over the sample period, giving the average number of rating changes per bond over the entire period. In Table 12, Fitch IBCA changes its ratings far less frequently than either S&P or Moody's in the 3-rater sample: Fitch IBCA has a rating change ratio of .357, compared to .673 for S&P and .813 for Moody's. Panel B confirms that the differences in rating change ratios are all significant at the .01 level.
In a comparison of the full and 3-rater samples, Moody's and S&P both have a lower (and significantly different) rating change ratio in the 3-rater sample than in the full sample. 28 Similarly, the mean rating change for Moody's and S&P is smaller (closer to an upgrade) in the 3-rater sample than in the full sample. These findings suggest that firms rated by Fitch IBCA have more stable Moody's and S&P ratings.
Tables 13, 14, and 15 refine the results of Table 12 by examining upgrades and downgrades separately. In Table 13, upgrades by Moody's and S&P are far more likely in the 3-rater sample than in the full sample. 29 In Table 14, downgrades by Moody's and S&P are less likely in the 3-rater sample than in the full sample. 30 Thus, when a firm has a Fitch IBCA rating, the other raters are more likely to upgrade and less likely to downgrade. This finding is consistent with the view that firms hire Fitch IBCA in the belief that the firm's bonds are undervalued by Moody's and S&P.

28 For Moody's, the difference in the rating change ratio is .10 with a Z value of 4.90, which is significant at the one percent level. For S&P, the difference is .14 with a Z value of 5.62, which is significant at the one percent level. 29 For Moody's, the difference in the percentage of changes is 20.67 with a Z value of 7.08 (significant at the 1% level). For S&P, the difference in the percentage of changes is 5.43 with a Z value of 1.67 (significant at the 5% level).

[Notes to Table 12: *** Significant at 1% level. ** Significant at 5% level. * Significant at 10% level. Differences are calculated by comparing the mean rating changes and rating change ratios from Panel A. The rating change ratio is defined as the number of rating changes divided by the number of rated bonds over the sample period. Mean and median rating changes are expressed in rating notches; negative changes represent upgrades and positive changes represent downgrades. A negative difference in the rating change ratio indicates that the first agency in the comparison had the lower ratio; a negative difference in mean rating change indicates that the first agency had the lower mean rating change, i.e., that its changes led to "better" ratings on average.]

Table 15 compares the size of the rating changes (measured in notches) for Moody's, S&P, and Fitch IBCA in the full and 3-rater samples. In the full sample, there are no significant differences in the size of the changes between Fitch IBCA and the other two rating agencies. However, in the 3-rater sample, Fitch IBCA's rating changes are larger in magnitude. Fitch IBCA is less likely to downgrade (Table 14), but this may be partially offset by the larger magnitude of Fitch IBCA's downgrades.

[Notes to Table 15: Differences are calculated by comparing the mean upgrades and downgrades from Tables 13 and 14 and are expressed in rating notches. Positive differences indicate that the first agency in the comparison had smaller mean upgrades (larger mean downgrades) than the second agency; negative differences indicate that the first agency had larger mean upgrades (smaller mean downgrades).]

Table 16 considers rating reversals, defined as a rating change in one direction followed by a change in the opposite direction that returns the rating to a value it held previously. Reversals may be evidence of a rating agency focusing heavily on short-term fluctuations in credit quality rather than on long-term default risk. Rating reversals occur only a small percentage of the time for all three raters, and the average length of time to a reversal is quite long, typically 1.5 to 2 years; the time to a Fitch IBCA reversal is somewhat longer than for the other agencies. For Moody's and S&P, a rating reversal
is less likely and the time to reversal is longer in the 3-rater sample than in the full sample. This indicates that firms rated by Fitch IBCA have more stable Moody's and S&P ratings than firms not rated by Fitch IBCA. Table 17 examines net upgrades and net downgrades over the period of the 3-rater sample. This table shows whether the differences between Fitch IBCA and the other two agencies are important over time. There are several points of interest in Table 17. First, Fitch IBCA has fewer net upgrades over the sample period than the other two agencies: 60, compared to 75 for S&P and 121 for Moody's, or 15.19%, 18.99%, and 30.63% of the sample, respectively. Second, the net changes in rating for those bonds receiving net upgrades are very close across the three agencies. Thus, Fitch IBCA has approximately the same mean net change as the other two agencies for bonds experiencing a net change, and it achieves that net change in fewer steps. But Fitch IBCA has significantly fewer net upgrades and net downgrades over the sample period than the other two agencies.
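As a sketch of how the reversal definition above might be operationalized: a rating history is encoded here as (time, notch) pairs, and the function flags each change that is later undone. The encoding and names are our own, invented for illustration:

```python
def find_reversals(history):
    """Given a chronological list of (time, rating) pairs, where rating is an
    integer notch (lower = better), return (change_time, reversal_time) pairs
    where a rating change in one direction was followed by a change in the
    opposite direction that returned the rating to its prior value."""
    reversals = []
    # collapse the history to its sequence of distinct rating levels
    changes = [history[0]]
    for obs in history[1:]:
        if obs[1] != changes[-1][1]:
            changes.append(obs)
    for i in range(2, len(changes)):
        prev_move = changes[i - 1][1] - changes[i - 2][1]
        this_move = changes[i][1] - changes[i - 1][1]
        # opposite directions, and the rating returns to its earlier value
        if prev_move * this_move < 0 and changes[i][1] == changes[i - 2][1]:
            reversals.append((changes[i - 1][0], changes[i][0]))
    return reversals

# A downgrade from notch 6 to 7 at t=3, reversed back to 6 at t=9:
print(find_reversals([(0, 6), (3, 7), (9, 6)]))  # -> [(3, 9)]
```

The gap between the two times in each pair corresponds to the "length of time for a reversal" reported in Table 16.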
The evidence in this section shows that Fitch IBCA changes ratings less frequently and has rating changes of greater magnitude. In addition, when Fitch IBCA provides a rating, the ratings of Moody's and S&P are more stable, upgrades are more likely, and downgrades less likely.

YIELD REGRESSION FOR THE FULL SAMPLE
Fitch IBCA ratings convey incremental information if the existence of a Fitch IBCA rating affects bond yields. The impact of Fitch IBCA on bond yields is analyzed in two ways. First, in the full sample, firms receiving Fitch IBCA ratings typically have lower bond yields after adjusting for rating. Second, in the 3-rater sample, Fitch IBCA ratings are found to serve as tie-breakers when Moody's and S&P disagree. This evidence is consistent with the view that the market attaches value to Fitch IBCA ratings.
A well-established technique for determining the impact of ratings is to regress bond Treasury spreads against bond ratings and a group of control variables. 31 When the raters disagree, split-rating dummy variables can be used. The splits are defined in Table 18. For example, a bond with an Aa rating from Moody's and a AA rating from S&P would be placed in the AA category. Likewise, a bond with an Aa rating from Moody's and an A rating from S&P would be placed in a split-rating category named AA/A. If the coefficients on the split-rating categories are significantly different from those of the adjacent nonsplit rating categories, then the market is pricing the split rating as a unique rating class because the market values each agency's ratings when a split occurs. In this case, the market yield on a split-rated bond is an average of the yields typical for the higher and the lower of the two ratings in the split. 32 Table 19 provides summary statistics for the two samples used in the regression analysis. The full March sample includes all publicly traded corporate straight debt for which all necessary data were available in March of 1991, 1992, 1993, 1994, and 1995. Bonds were required to have ratings from both Moody's and S&P in order to eliminate any potential selection bias in the choice of those two raters. This results in over 24,000 observations. The 3-rater March sample contains the subset of these bonds that also have Fitch IBCA ratings.

31 The Treasury spread is defined as the yield to maturity of the bond minus the yield to maturity of the same-maturity Treasury security. 32 The literature shows somewhat mixed results for this type of regression, with some studies finding that the market values only the higher of the two ratings in the case of a split and other studies finding that the market values only the lower. However, all of these studies had somewhat limited sample sizes.
Jewell and Livingston (1998) use a large sample to show that the market does indeed price split ratings as unique rating classes. The regression takes the form:

Treasury Spread = f(identical bond ratings and split bond ratings; natural log of issue size and natural log of years to maturity; dummy variables for utility issues, callability, presence of sinking funds, putability, and whether […])

33 Since 1982, subratings have been available from all three rating agencies. Subratings divide each letter rating category into three rating notches. These rating notches were used in earlier tables when reporting mean rating levels and mean and median rating changes. However, subratings are not considered in the regression analysis for several reasons. First, using subratings would reduce the number of bonds in each rating category and lower the power of the tests. Second, and more importantly, it is not clear that split ratings between subratings are as important as split ratings between letter ratings. Discussions with individuals in the ratings industry indicate that differences in letter ratings are far more important in setting yields than differences in subratings. Therefore this analysis focuses on splits in letter ratings.
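A minimal sketch of this type of specification follows, assuming a simple letter-rating scale, synthetic data, and OLS via least squares. All names, categories, and coefficient values here are illustrative, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic letter ratings, 0 = AAA ... 4 = BB (standing in for "BB/B & below")
letters = ["AAA", "AA", "A", "BBB", "BB"]
moody = rng.integers(0, 5, n)
sp = np.clip(moody + rng.integers(-1, 2, n), 0, 4)  # at most a one-letter split

def category(m, s):
    """Identical ratings keep the letter; splits get a joint label, e.g. AA/A."""
    hi, lo = min(m, s), max(m, s)
    return letters[hi] if hi == lo else letters[hi] + "/" + letters[lo]

cats = sorted({category(m, s) for m, s in zip(moody, sp)})
dummy_cats = [c for c in cats if c != "BB"]  # base case absorbed by the constant

# Design matrix: constant, one dummy per rating category, log issue size control
log_size = np.log(rng.uniform(50, 500, n))
X = np.column_stack(
    [np.ones(n)]
    + [[1.0 if category(m, s) == c else 0.0 for m, s in zip(moody, sp)]
       for c in dummy_cats]
    + [log_size]
)

# Generate Treasury spreads with a known structure, then recover it by OLS
true_beta = np.linspace(-3.0, -0.5, len(dummy_cats))
y = 4.25 + X[:, 1:-1] @ true_beta - 0.1 * log_size + rng.normal(0, 0.05, n)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[0] recovers the base-case spread (~4.25); each split category gets its
# own coefficient, which is what the adjacent-category tests then compare.
```

Testing whether a split category is priced distinctly then amounts to testing the equality of adjacent rating-category coefficients, as in Table 21.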
Size: the dollar amount of the issue, in millions.
Maturity: years to maturity of the issue.

In Table 20, the base case for the rating dummies is BB/B and below. The typical Treasury spread for bonds in these rating categories is represented by the regression constant of 4.254%. All of the rating categories have coefficients significantly different from zero, indicating that the other rating categories are priced differently than the base case. As the rating gets higher, the regression coefficient becomes more negative and the yield lower. In addition, the control variables generally have the expected signs; issue maturity and the put feature are significant. It should be noted that the AAA/AA split category is omitted because no bonds fall into it. Table 21 gives a clearer view of the significance of the rating categories. The tests between adjacent ratings show that each split category is priced as a unique rating class. Thus, the market values the ratings of both Moody's and S&P when a split rating occurs.
The Fitch IBCA dummies for each rating in Table 20 show the impact of the existence of a Fitch IBCA rating on the yield spread. A positive coefficient indicates a higher average yield when a Fitch IBCA rating exists; only two of the dummies, AAA and AA/A, have a positive sign. A negative coefficient implies a lower yield when a Fitch IBCA rating exists; this occurs for the ratings A, A/BBB, BBB, BB, and the base case. Since most Fitch IBCA ratings occur at the BBB and A levels, the yield is reduced in the majority of cases in which a Fitch IBCA rating exists. This finding is consistent with the view that firms seeking Fitch IBCA ratings believe they are under-rated by Moody's and S&P. Further, the regression indicates that on average the market agrees with this perception, as evidenced by the lower yields awarded firms with Fitch IBCA ratings.

*** Significant at .01 level. ** Significant at .05 level. * Significant at .10 level.
Note: Differences are calculated by comparing rating coefficients from the regression results in Table 20. Statistically significant differences between adjacent rating categories indicate that the bond market prices those categories distinctly. For example, the fact that the AA/A split is statistically different from both the AA and A categories means the market prices the split rating as a distinct rating class. This is true of each of the tested split-rating categories.

YIELD REGRESSION FOR THE 3-RATER SAMPLE

Table 20 suggests that bonds with a Fitch IBCA rating tend to have lower yields. To provide further insight into the value of Fitch IBCA ratings, Table 22 considers the 3-rater sample, containing only firms with Fitch IBCA ratings. In Table 22, each split category is redefined as two separate categories. The "Upper" split category contains bonds where Fitch IBCA agrees with the higher of the S&P and Moody's ratings. The "Lower" split category contains bonds where Fitch IBCA agrees with the agency giving the lower rating. In addition, two dummy variables were added to indicate whether Fitch IBCA gave the highest or the lowest of the three ratings (in other words, whether Fitch IBCA agreed with neither Moody's nor S&P).

Tables 22 and 23 consider whether Fitch IBCA ratings "matter" to the market in two possible ways. First, Fitch IBCA matters if one or both of the dummies indicating that Fitch IBCA has the highest or lowest of the three ratings is significant. The dummy variable indicating that Fitch IBCA gave the lowest of the three ratings is positive and significant, indicating that the market increases the Treasury spread by 23 basis points when Fitch IBCA gives the lowest rating. The converse does not appear to hold, however, as the Fitch IBCA highest dummy variable has the wrong sign and is insignificant.

*** Significant at 1% level. ** Significant at 5% level. * Significant at 10% level.
Note: OLS results are corrected for heteroskedasticity using White's measure. Coefficients for year dummies omitted due to space considerations.
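The White correction mentioned in the note can be sketched in a few lines. This is a generic textbook implementation of the HC0 estimator applied to toy heteroskedastic data, not the authors' code:

```python
import numpy as np

def white_se(X, y):
    """OLS coefficients with White (HC0) heteroskedasticity-consistent
    standard errors: Var(b) = (X'X)^-1 X' diag(e_i^2) X (X'X)^-1."""
    xtx_inv = np.linalg.inv(X.T @ X)
    beta = xtx_inv @ X.T @ y
    resid = y - X @ beta
    meat = X.T @ (X * resid[:, None] ** 2)   # X' diag(e^2) X
    cov = xtx_inv @ meat @ xtx_inv
    return beta, np.sqrt(np.diag(cov))

# Toy regression whose error variance grows with x (heteroskedastic by design)
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
X = np.column_stack([np.ones(200), x])
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1 + x)
beta, se = white_se(X, y)
```

The point estimates are the ordinary OLS ones; only the standard errors change, which is why the correction leaves the Table 22 coefficients themselves unaffected.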
This regression model differs from the Table 20 model in two ways. First, in order to measure the relative impact of a Fitch IBCA rating, the sample has been limited to bonds rated by all three raters: Moody's, S&P, and Fitch IBCA. Second, the model explicitly accounts for the relationship between the Fitch IBCA rating and the Moody's/S&P ratings. This is done in several ways. If all three raters give the same letter rating, then the bond is placed in the obvious letter-rating category (for example, three 'A' ratings means the bond is categorized as A). If Moody's and S&P disagree on the letter rating but Fitch IBCA agrees with one of them, then the bond is placed in one of the split-rating categories: in an "upper" category if Fitch IBCA agrees with the higher of Moody's/S&P, and in a "lower" category if Fitch IBCA agrees with the lower. For example, if Moody's gives an 'A' rating, S&P gives a 'BBB' rating, and Fitch IBCA gives an 'A' rating, then the bond is placed in the A/BBB Upper category. Finally, in cases where Fitch IBCA disagrees with both Moody's and S&P, the bond is placed in either the "Fitch IBCA Highest" or "Fitch IBCA Lowest" category, depending on whether the Fitch IBCA rating was the highest or the lowest of the three. There are no cases in the sample where Fitch IBCA gives a letter rating in between those of Moody's and S&P; therefore this situation does not arise.

*** Significant at 1% level. ** Significant at 5% level. * Significant at 10% level.
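The categorization rules just described can be written out directly. This is our own illustrative encoding; the category labels, and the handling of the case where Moody's and S&P agree but Fitch IBCA differs, are assumptions rather than the paper's exact scheme:

```python
def three_rater_category(moody, sp, fitch):
    """Assign a bond to a rating category from its three letter ratings,
    following the rules described in the text. Ratings are strings like
    'AA'; ORDER ranks them from best to worst."""
    ORDER = ["AAA", "AA", "A", "BBB", "BB", "B"]
    if moody == sp == fitch:
        return moody                          # e.g. three 'A' ratings -> 'A'
    if moody == sp:
        # Moody's and S&P agree; Fitch IBCA is the odd rating out
        return ("Fitch Highest" if ORDER.index(fitch) < ORDER.index(moody)
                else "Fitch Lowest")
    hi, lo = sorted([moody, sp], key=ORDER.index)
    if fitch == hi:
        return f"{hi}/{lo} Upper"             # Fitch agrees with the higher
    if fitch == lo:
        return f"{hi}/{lo} Lower"             # Fitch agrees with the lower
    # Fitch disagrees with both (never falls in between in this sample)
    return ("Fitch Highest" if ORDER.index(fitch) < ORDER.index(hi)
            else "Fitch Lowest")

# The text's example: Moody's 'A', S&P 'BBB', Fitch IBCA 'A'
print(three_rater_category("A", "BBB", "A"))  # -> A/BBB Upper
```

Each category would then receive its own dummy variable in the Table 22 regression.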
Note: Differences are calculated by comparing rating coefficients from the regression results in Table 22. Statistically significant differences between adjacent rating categories indicate that the bond market prices those categories distinctly. For example, the fact that BBB/BB Upper is statistically different from both the BBB and BBB/BB Lower categories means the market prices the category as a distinct rating. If both split-rating categories (upper and lower) are different from each other and from their adjacent same-letter categories, then the Fitch IBCA rating is being priced as a unique third rating. This is the case in the BBB/BB group of rating categories. In the case of the AA/A categories, AA/A Upper is statistically different from AA and from AA/A Lower. However, AA/A Lower is not different from the A category. Thus, if Fitch IBCA agrees with the higher of S&P or Moody's in this category, a new pricing category is created; if Fitch IBCA agrees with the lower, the Fitch IBCA rating is essentially ignored and no new pricing category is created. In this sense, the Fitch IBCA rating "breaks the tie" created by the split rating.
Second, Fitch IBCA matters if the existence of a Fitch IBCA rating breaks split ratings into two categories. In the case of the AA/A split, we find three distinct categories-AA, AA/A Upper, and AA/A Lower combined with A. In the case of the A/BBB split, there is no distinction between A/BBB Upper and A/BBB Lower. In the case of the BBB/BB split, the market prices the BBB/BB Upper and BBB/BB Lower ratings as unique categories; the two split categories are significantly different from each other and each is also significantly different from its adjacent identically-rated category.
In sum, these findings are consistent with the view that the market values the incremental information provided by Fitch IBCA ratings. This is particularly true in the case when the two major agencies have a differing opinion of the credit