
The Issuer-Pays Rating Model and Ratings Inflation:

Evidence from Corporate Credit Ratings

Günter Strobl† and Han Xia‡

May 2012

Abstract

This paper provides evidence that the conflict of interest caused by the issuer-pays rating model leads to inflated corporate credit ratings. Comparing the ratings issued by Standard & Poor’s Ratings Services (S&P), which follows this business model, to those issued by the Egan-Jones Rating Company (EJR), which adopts the investor-pays model, we demonstrate that the difference between the two is more pronounced when S&P’s conflict of interest is particularly severe: firms with more short-term debt, a newly appointed CEO or CFO, and a lower percentage of past bond issues rated by S&P are significantly more likely to receive a rating from S&P that exceeds their rating from EJR. However, we find no evidence that these variables are related to corporate bond yield spreads, which suggests that investors may be unaware of S&P’s incentive to issue inflated credit ratings.

JEL classification: D82, G24

Keywords: Corporate credit ratings; Issuer-pays rating model; Ratings inflation

We thank Paolo Fulghieri, Paige Ouimet, Anil Shivdasani, as well as seminar participants at Fordham University, the University of New South Wales, the University of North Carolina, the University of Texas at Dallas, Washington University, the 7th Annual Conference on Corporate Finance at Washington University in St. Louis, and the 2011 European Finance Association Annual Conference for comments on an early draft. We thank the Egan-Jones Rating Company for sharing their data with us. We particularly thank Peter Arnold and Chris Bauman for their help with data collection. Any errors belong to the authors.

Günter Strobl, Kenan-Flagler Business School, University of North Carolina at Chapel Hill, McColl Building, C.B. 3490, Chapel Hill, NC 27599-3490. Tel: 1-919-962-8399; Fax: 1-919-962-2068; Email: strobl@unc.edu

Corresponding author: Han Xia, School of Management, University of Texas at Dallas, 800 West Campbell Rd., SM31, Richardson, TX 75080-3021. Tel: 1-972-883-6385; Fax: 1-972-883-2799; Email:


1 Introduction

Many market observers have accused credit rating agencies of contributing to the recent financial crisis by being too lax in rating some structured products. The picture that has emerged in the press is one in which rating agencies compromised the quality of their activities to facilitate the selling of their services. This behavior has been attributed to an inherent conflict of interest in the agencies’ business model: rating agencies receive their principal revenue stream from the issuers whose products they rate (“issuer-pays model”). Rating agencies have responded to this accusation by arguing that such behavior would put their reputation, which is arguably their most valuable asset, at risk and would therefore irrevocably damage their business in the long run. The objective of this paper is to empirically investigate whether reputational concerns are sufficient to prevent rating agencies from issuing inflated credit ratings.

We address this question by comparing the credit ratings issued by two different rating agencies, Standard & Poor’s Ratings Services (S&P) and the Egan-Jones Rating Company (EJR). Unlike S&P (and other major rating agencies), whose ratings are paid for by issuers, EJR relies on subscription fees paid by investors as its principal source of income (“investor-pays model”). To the extent that investors value accurate ratings, this alternative business model eliminates EJR’s incentive to shade its ratings upwards so as to please its clients, making its ratings an ideal benchmark for identifying any bias in S&P’s ratings.

Although S&P and EJR use the same categories for their credit ratings (AAA to D for long-term ratings and A-1 to D for short-term ratings), one might argue that their meaning is potentially different. A higher (i.e., closer to AAA) rating from S&P may thus not be an indication of ratings inflation, but may simply reflect differences in the agencies’ rating methodologies. Rather than comparing S&P’s ratings directly to those of EJR, we therefore rely on cross-sectional analyses. Specifically, we identify circumstances under which the conflict of interest caused by the issuer-pays model is particularly severe and examine whether the difference in ratings between S&P and EJR is especially pronounced under these circumstances.

We employ several proxies to measure the severity of S&P’s conflict of interest. Our first proxy exploits the amount of the issuer’s short-term debt. The greater the firm’s short-term liquidity needs, the more likely it is to issue large amounts of debt in the near future, and thus the more business S&P can obtain from the firm in the future. The prospect of earning additional rating fees gives S&P an incentive to issue a favorable rating so as to attract the firm’s future business and forestall the firm’s taking its business to another rating agency. Our second set of results is related to the concentration of the issuer’s past business with S&P. In particular, we examine whether S&P serves as the only credit rater for an issuer’s bond issues, as well as the percentage (in numbers) of the issuer’s bond issues rated by S&P, over the past 2, 4, 6, or 8 quarters. A lower business concentration indicates the issuer’s extensive interaction with other rating agencies and hence increases its flexibility in switching to another agency for future business. This increases S&P’s incentive to produce issuer-friendly ratings so as to keep its current business and attract more future business. In contrast, when S&P serves as an exclusive rater, the switching cost for the issuer to seek a new agency is higher, mitigating the severity of S&P’s conflict of interest.1 Our final set of tests is motivated by the observation that newly appointed corporate leaders are more inclined to change the firm’s operational and financial strategy (e.g., Giambatista et al., 2005), and may therefore also be more likely to switch credit rating agencies. We hypothesize that S&P may thus have a particularly strong desire to please its customers by issuing favorable ratings for firms that recently appointed a new CEO or a new CFO.

1 In practice, the switching costs are associated with an issuer’s approaching another rating agency, disclosing its financial information, negotiating rating terms with the rating agency, and any legal expenses incurred during this process. Alternatively, switching costs also arise from potential additional rating expenses the issuer needs to pay. For example, evidence suggests that rating agencies charge extra fees for a new client. They also tend to increase the “maintenance fees” for an existing customer if it has not brought any new business or generated other payments for the agency within a certain period of time (Klein (2004)).


In each test, we find evidence that S&P is more likely to issue an inflated rating when its conflict of interest is particularly severe. The effect of the issuer-pays model on rating inflation is of significant economic magnitude. The average difference in the two agencies’ ratings is about 0.2 notches, indicating that S&P issues a rating that is one notch above EJR’s for approximately one out of every five issuers. Moreover, an increase in Ln(Short-term Debt) from the 25th to the 75th percentile is associated with an additional 0.19-notch step-up in S&P’s ratings compared to EJR’s. Similarly, going from being the only rater of an issuer’s past-4-quarter business to a non-single rater leads S&P to issue a rating about 0.236 notches higher than EJR’s, and the appointment of a new CFO increases the difference by a total of 0.29 notches in the two years following the appointment.

While our findings indicate that the conflict of interest associated with the issuer-pays model leads to a significant amount of ratings inflation, it is not clear whether it leads to any misallocation of resources. If investors anticipate the bias in S&P’s ratings, the firm’s debt will be correctly priced and inflated credit ratings may not be harmful to society. To address this issue, we examine whether our variables that predict the extent of S&P’s ratings inflation can also predict a bond’s yield spread at issuance. We find no evidence that this is the case. The amount of a firm’s short-term debt, S&P’s past revenue share, and the appointment of a new CEO/CFO have no significant effect on the bond’s yield spread. These findings are consistent with the view that investors are unaware of S&P’s incentive to issue inflated credit ratings. Hence, rating inflation can lead to misallocation of resources. These results further imply that regulators’ intervention in the credit rating industry would be beneficial to investors who use credit ratings to guide their investment decisions, and can potentially increase social welfare.

Our study adds to the literature that examines different properties of ratings from issuer-paid and investor-paid rating agencies. As one of the first studies to compare ratings issued by different types of agencies, Beaver, Shakespeare, and Soliman (2006) find that EJR changes its ratings in a more timely manner than Moody’s, and that EJR’s rating changes are followed by a stronger market reaction. These results suggest that EJR provides more informative ratings than the issuer-paid major rating agencies. The authors attribute this finding to the two agencies’ NRSRO designation; while Moody’s ratings serve regulatory and contractual purposes due to Moody’s NRSRO certification, the “non-certified” EJR only serves as an additional information provider.2 In a more recent study, Cornaggia and Cornaggia (2011) also document that Financial Health Ratings, an independent rating company compensated by investors, provides more timely information than Moody’s. They provide a market-based explanation for this result: regulatory reliance on sluggish ratings from NRSROs can benefit regulated investors by allowing them to hold riskier fixed-income securities for higher returns. In contrast to these studies, we exploit the agencies’ compensation structures as an explanation for the different properties of ratings from the two types of rating agencies.

2 The sample in Beaver, Shakespeare, and Soliman (2006) ends in 2002, while EJR was designated as one of the NRSROs only in 2007.

Our study is also related to a growing literature on the incentive problems of credit rating agencies arising from the issuer-pays model. Relying on historical rating data between 1971 and 1978, Jiang, Stanford, and Xie (2011) examine changes in bond ratings surrounding the date when S&P began to adopt the issuer-pays business model. In a difference-in-difference setting and using Moody’s rating for the same bond as a benchmark, the authors find that S&P increases its rating levels once it switches to collecting fees from issuers. While consistent with the results in our paper, their finding leaves open a question regarding the persistence of the conflict of interest resulting from the issuer-pays model. As documented in Blume, Lim, and MacKinlay (1998) and Baghai, Servaes, and Tamayo (2011), rating agencies have become more conservative and adopted a stricter rating standard over the period 1978-2006. This increased conservatism makes it unclear whether S&P’s incentive problem is merely a short-term phenomenon immediately following the change in its compensation structure. In this paper, we fill this gap by showing that this incentive problem is still present in more recent corporate credit ratings, despite the rising rating standard. Analyzing a sample of collateralized debt obligations (CDOs) issued between 1997 and 2007, Griffin and Tang (2009) report that rating agencies frequently made adjustments to their quantitative model that, on average, amounted to a 12% increase in the AAA tranche size. Ashcraft, Goldsmith-Pinkham, and Vickery (2010) document a progressive decline in rating standards for mortgage-backed securities (MBS) between the start of 2005 and mid-2007. He, Qian, and Strahan (2011) provide evidence that Moody’s and S&P rewarded large issuers of MBS by granting them unduly favorable ratings during the boom years of 2004 through 2006. Becker and Milbourn (2010) argue that the increased competition from Fitch over the past decade resulted in more issuer-friendly and less informative ratings from S&P and Moody’s. Our approach differs from these studies in an important way. Rather than relying on changes in the agencies’ incentive to issue inflated ratings caused by changes in overall market conditions or in the competitive landscape of the rating industry, we directly compare the ratings of two agencies that follow different business models (issuer-pays versus investor-pays), and relate the difference in ratings to issuer-level proxies for the severity of the conflict of interest associated with the issuer-pays model. This approach enables us to provide more direct evidence that rating inflation is driven not by changes in market and industry factors but by the inherent incentive problems resulting from rating agencies’ own compensation structure.

The remainder of this paper is organized as follows. Section 2 provides some institutional background of the credit rating industry. Section 3 describes the data and discusses our empirical methodology. Section 4 presents our empirical results and Section 5 discusses their robustness. Section 6 summarizes our contribution and concludes.


2 Institutional Background

The credit rating industry has long been dominated by a handful of companies designated as “nationally recognized statistical rating organizations” (NRSROs) by the Securities and Exchange Commission (SEC). As of 2002, Standard and Poor’s, Moody’s, and Fitch were the only rating agencies that were granted the NRSRO status. More recently, the SEC—arguably as a result of political pressure and/or concern about concentration in the industry—added another seven rating agencies to this group.3

A majority of these rating agencies follow the issuer-pays business model. An exception is the Egan-Jones Rating Company (EJR). EJR is an independent rating agency founded by Sean Egan and Bruce Jones that started issuing ratings in December 1995. Since its foundation, EJR has rated more than 1,300 companies in the industrial, financial, and service sectors. According to its rating policy, EJR “selects an issuer for a credit analysis generally based on developments within issuers and industries, market developments and requests of subscribers.” EJR uses the same credit rating scales as S&P, namely from AAA to D (including the modifiers “+” and “-”) for long-term ratings, and from A-1 to D for short-term ratings.

Relying on subscription fees paid by investors, EJR claims that it “delivers highly accurate ratings with predictive value for equity, debt, and money market portfolios and has no conflicts of interest.” With this aim, it successfully predicted the downfalls of Enron, WorldCom, and, more recently, Lehman Brothers through its credit ratings. Beaver et al. (2006) empirically compare EJR’s and Moody’s credit ratings and find that EJR provides more informative ratings than the issuer-paid major rating agencies. As an extension of their findings, we compare the ability of EJR’s and S&P’s ratings to predict defaults, the most important credit events, as the first step of our tests.

3 Dominion Bond Rating Service (a Canadian rating agency) and A.M. Best (highly regarded for its ratings of insurance companies) received their NRSRO designation in 2003 and 2005, respectively. In 2007, the SEC added two Japanese rating agencies (Japan Credit Rating Agency, Ltd. and Ratings and Investment Information, Inc.) and the Egan-Jones Rating Company (EJR). More recently, two other rating agencies, LACE Financial and Realpoint LLC, joined this group.

We obtain information on issuers’ defaults from S&P’s issuer rating database. S&P identifies an issuer as in default if it misses a payment on any debt obligation and assigns a “D” rating to this issuer to indicate its default status.4 We then calculate the 5-year default rates for each of S&P’s rating categories based on the “static pool” methodology used by S&P.5 Specifically, at the beginning of each fiscal quarter, we group issuers into different pools based on their S&P ratings (AAA, AA, A, etc.). Each pool is followed from that point forward. The five-year default rate is calculated as the percentage of issuers in each rating category that default within the next five years (20 quarters). We present the default rates next to each of S&P’s rating categories on the X-axis in Panel A of Figure 1. Consistent with S&P’s rating report (Standard & Poor’s (2008)), default is a relatively rare event. For example, the average default rate for issuers with an S&P rating of A or above is only 0.1% at the 5-year horizon. This rate goes up as ratings approach the lower end of “D” and reaches about 27% for ratings in the “CCC” category or below.
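To make the static-pool calculation concrete, the following is a minimal pandas sketch of the five-year default rate computation described above; the panel layout and column names are illustrative assumptions rather than the actual structure of S&P’s database.

```python
import pandas as pd

def static_pool_default_rates(panel: pd.DataFrame, horizon: int = 20) -> pd.Series:
    """Five-year (20-quarter) default rates by rating category using the
    static-pool approach: at the start of each quarter, pool issuers by their
    current rating and follow each pool forward for `horizon` quarters.

    `panel` is assumed to hold one row per issuer-quarter with columns
    ['issuer', 'quarter', 'rating_letter', 'default'], where `quarter` is an
    integer index and `default` equals 1 in the quarter the issuer defaults.
    """
    first_default = (panel.loc[panel['default'] == 1]
                          .groupby('issuer')['quarter'].min())

    pools = panel[['issuer', 'quarter', 'rating_letter']].copy()
    pools['first_default'] = pools['issuer'].map(first_default)
    # An issuer in the pool formed at `quarter` counts as a default if its
    # first default falls within the next `horizon` quarters.
    pools['defaulted'] = ((pools['first_default'] > pools['quarter']) &
                          (pools['first_default'] <= pools['quarter'] + horizon))
    return pools.groupby('rating_letter')['defaulted'].mean()
```

Each issuer contributes to one pool per quarter, mirroring the overlapping static pools in S&P’s methodology.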

To further compare the informativeness of EJR’s and S&P’s ratings, we divide issuers within each rating category into two subgroups: (1) those whose EJR ratings are less favorable than S&P’s, and (2) those whose EJR ratings are equal to or more favorable than S&P’s. We can see in Panel A that, conditional on a given S&P rating category, issuers with a lower EJR rating are indeed significantly more likely to default at the five-year horizon. For example, in the “BB” category, issuers with an EJR rating equal to or more favorable than S&P’s have an average default rate of 0.71%. In contrast, their counterparts whose EJR ratings are less favorable are over ten times more likely to default at the five-year horizon, with an average default rate of 10.3%.6 Similar results can be found for all other rating categories. Interestingly, when we sort issuers by their EJR ratings in Panel B, we see a different pattern.

4 More details about this database are discussed in Section 3.

5 In Figure 1, we focus on defaults at the five-year horizon, even though our results are qualitatively unchanged using the three-year or ten-year horizon.

6 In comparison, the average default rate of issuers with a “B” rating from S&P is only 8.9% (lower than 10.3%). This suggests that EJR’s lower ratings are not just capturing issuers whose creditworthiness falls into the lower end of the distribution of all issuers’ credit quality within the “BB” category.


First, within each EJR rating category, all firms have very similar default rates, even if they have different S&P ratings. This is consistent with the expectation that firms with the same rating should have similar credit quality and, hence, that their default rates should not be significantly different. Second, within each rating category, issuers whose S&P ratings are lower than EJR’s have even lower default rates than the other group, contradicting the notion that S&P’s ratings are more informative than EJR’s. These findings are consistent with Beaver, Shakespeare, and Soliman (2006). They further alleviate the concern that EJR may issue ratings that are overly conservative (and thus uninformative) in order to differentiate itself from existing issuer-paid rating agencies and attract business from investors. Overall, the above findings justify our use of EJR’s ratings as a reasonable benchmark for identifying bias in S&P’s ratings.

3 Sample Selection and Empirical Methodology

To construct our rating sample, we first merge two rating databases from EJR and S&P. EJR’s issuer credit ratings are collected from Bloomberg and EJR’s database via the company’s website. EJR keeps its historical rating records back to July 1999. This database contains EJR’s issuer ratings in a time series. Each observation is a credit rating corresponding to a certain rating action, including new rating assignments, affirmations, upgrades, and downgrades. For each observation, we also have related identification information, including company names and rating action dates, which we later use to merge EJR’s database to others. EJR’s original database covers the period from July 1999 to July 2009, with 23,223 observations representing 2,033 issuers. We eliminate issuers that only obtained a newly assigned rating but have not been followed since then. We also delete observations with a rating of “NR”, which indicates a rating withdrawal. These two steps reduce the EJR rating sample to 22,816 observations with 1,642 issuers. We obtain S&P’s issuer credit ratings from S&P’s rating Xpress data services. This database contains detailed information on S&P’s credit ratings back to the 1920s, including issuers’ long-term credit ratings and rating Watchlist and Outlook provisions. Similar to EJR’s rating database, each observation in S&P’s rating database is also a credit rating corresponding to a rating action. In the initial database, there are 127,849 observations representing 17,298 private and public issuers globally. We restrict our analyses to U.S. issuers, which leaves us with 72,641 observations from 9,100 private and public issuers.

We construct two quarterly panel datasets based on the two rating databases, respectively, covering the third quarter of 1999 to the third quarter of 2009. Following the existing literature, we assign numerical values to S&P’s and EJR’s ratings on a notch basis: AAA=1, AA+=2, AA=3, AA-=4, A+=5, A=6, A-=7, BBB+=8, BBB=9, BBB-=10, BB+=11, BB=12, BB-=13, B+=14, B=15, B-=16, CCC+=17, CCC=18, CCC-=19, CC=20, C=21, and D=22. Since both rating databases treat a credit rating with a rating action (rather than a credit rating itself) as an observation, we assign a rating in the current quarter equal to the issuer’s rating in the past quarter if no rating action happens. In addition, if two rating actions happen in the same quarter (which means there are two observations in the same quarter), we take the mean of the ratings based on the above numerical conversion. We then merge these two panel datasets by manually matching company names and year-quarter information. We successfully merge 1,574 out of 1,642 issuers from EJR’s rating database. Since we are interested in issuers’ financial activities in our analyses, we restrict our sample to non-financial and non-utility issuers.
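For concreteness, the sketch below illustrates this conversion and panel construction in pandas; the input layout (columns issuer, date, rating) is an assumption for illustration, not the actual schema of either rating database.

```python
import pandas as pd

# Notch-level numerical conversion used in the paper: AAA=1, AA+=2, ..., D=22.
RATING_SCALE = ['AAA', 'AA+', 'AA', 'AA-', 'A+', 'A', 'A-', 'BBB+', 'BBB', 'BBB-',
                'BB+', 'BB', 'BB-', 'B+', 'B', 'B-', 'CCC+', 'CCC', 'CCC-', 'CC', 'C', 'D']
NOTCH = {rating: i + 1 for i, rating in enumerate(RATING_SCALE)}

def to_quarterly_panel(actions: pd.DataFrame) -> pd.DataFrame:
    """Turn a rating-action file (one row per rating action, with columns
    ['issuer', 'date', 'rating']) into an issuer-quarter panel: average
    multiple actions within the same quarter and carry the latest rating
    forward through quarters with no rating action.
    """
    df = actions.copy()
    df['notch'] = df['rating'].map(NOTCH)
    df['quarter'] = pd.to_datetime(df['date']).dt.to_period('Q')

    wide = (df.groupby(['quarter', 'issuer'])['notch'].mean()  # average same-quarter actions
              .unstack('issuer'))
    full_index = pd.period_range(wide.index.min(), wide.index.max(), freq='Q')
    wide = wide.reindex(full_index).ffill()                    # carry ratings forward
    wide.index.name = 'quarter'
    return wide.stack().rename('notch').reset_index()
```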

Issuers’ financial information is obtained from the COMPUSTAT quarterly database. Since our tests are based on the comparison of S&P’s ratings to EJR’s, we delete any issuer-quarter observations if either of the two ratings is not available. We further delete any observations if the issuer is rated as in default by either S&P or EJR, due to these issuers’ potential abnormal financing activities and ongoing restructuring.7 Following these criteria, our primary sample consists of 25,713 issuer-quarter observations representing 964 issuers.

7 This criterion is not crucial. All our results still hold when we include issuers rated as in default by S&P or EJR.

Panel A of Table 1 presents descriptive statistics of the primary sample consisting of issuers rated by both S&P and EJR in Column (2). In comparison, Column (1) presents summary statistics for all non-financial and non-utility public U.S. issuers that are rated by S&P in the sample period. From Panel A, we can see that issuers that are rated by both rating agencies are, on average, larger than all the issuers rated by S&P. For example, issuers in our rating sample have a mean capitalization of 10,591 million U.S. dollars (median 2,798), compared to a mean of 7,541 million (median 1,516) for the other group. Similarly, the mean total assets of issuers rated by both agencies are 11,172 million (median 3,701), while the average assets of firms in the other group are 8,873 million (median 2,013). These differences between the two groups of firms are highly significant at the 1% level. In addition, issuers rated by both rating agencies have lower leverage, a higher Altman’s Z-Score, and higher ROA. This evidence suggests that these issuers are less risky and more productive than their counterparts. However, their higher Market-to-Book, lower Tangibility, and higher R&D/Sales indicate that these issuers tend to invest more heavily in R&D to accommodate their higher growth opportunities and are possibly more difficult to evaluate due to their low proportion of fixed assets. As expected, these issuers’ ratings are more likely to be requested by EJR’s client base.

To associate the difference between S&P’s and EJR’s ratings with the severity of S&P’s conflict of interest at the issuer level, we generate a left-hand-side variable to capture the rating difference. Following Jiang et al. (2011), we calculate the cardinal difference between the two rating agencies’ ratings (EJR’s rating minus S&P’s rating) based on the previous numerical conversion. For example, the rating difference equals 2 for a difference between “AA+” and “AA-”, and equals 3 for a difference between “AA+” and “A+”. A higher value of the rating difference variable indicates that, compared to EJR, S&P issues a more favorable rating to the issuer. As a robustness check, we also employ alternative approaches to define the rating difference. For example, we convert each S&P and EJR rating to the corresponding five-year default rate calculated using EJR’s ratings in our sample. We then generate the rating difference variable as the difference between the default rates corresponding to the two agencies’ ratings. Alternatively, we also define a dummy variable that equals 1 if S&P’s rating is higher than EJR’s for a given issuer at a certain point in time, and equals 0 otherwise. More discussion and the results using these alternative definitions are presented in the Robustness Section.
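A small sketch of the baseline dependent variable, repeating the notch mapping above for self-containment; positive values indicate that S&P assigned the more favorable rating:

```python
RATING_SCALE = ['AAA', 'AA+', 'AA', 'AA-', 'A+', 'A', 'A-', 'BBB+', 'BBB', 'BBB-',
                'BB+', 'BB', 'BB-', 'B+', 'B', 'B-', 'CCC+', 'CCC', 'CCC-', 'CC', 'C', 'D']
NOTCH = {rating: i + 1 for i, rating in enumerate(RATING_SCALE)}

def rating_difference(sp_rating: str, ejr_rating: str) -> int:
    """Cardinal rating difference: EJR's notch value minus S&P's. A positive
    value means S&P assigned the more favorable (closer-to-AAA) rating."""
    return NOTCH[ejr_rating] - NOTCH[sp_rating]

# Examples from the text: a two-notch gap between "AA+" and "AA-",
# and a three-notch gap between "AA+" and "A+".
assert rating_difference('AA+', 'AA-') == 2
assert rating_difference('AA+', 'A+') == 3
```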

Panel B summarizes the difference between S&P’s and EJR’s ratings. It is worth noting that this difference is significantly different from 0, lending support to the notion that, on average, S&P assigns more favorable ratings than EJR. The magnitude of the difference is also economically significant. For example, the mean of the difference is 0.195, suggesting that for an average issuer, EJR’s rating is about 0.2 notches lower (less favorable) than S&P’s. Equivalently, S&P issues a rating that is one notch above EJR’s for approximately one out of every five firms.

4 Empirical Results

In this section, we discuss our main tests on the link between rating inflation and the issuer-pays compensation model. We present our test results based on three proxies for the severity of S&P’s conflict of interest: issuers’ short-term liquidity needs, S&P’s revenue share and issuers’ management turnover.

4.1 The Issuer’s Short-term Debt

We start by examining the association between rating inflation and issuers’ amount of short-term debt. Issuers that are exposed to a large amount of short-term debt are likely to issue new debt in the future to replace their expiring debt. Hence, they are more likely to bring new rating business to the agency. In this case, the rating agency will be tempted to issue a favorable rating to obtain these clients’ lucrative future business.


Table 2 presents the results of OLS regression models. We first examine Specification (1), where we include the logarithm of the issuer’s total short-term debt amount as the independent variable. To capture any changes in rating standards over time, as suggested in Blume, Lim, and MacKinlay (1998) and Baghai, Servaes, and Tamayo (2011), we also include year dummies in this specification. The positive coefficient on Ln(Short-term Debt) confirms that S&P is more likely to issue favorable ratings when issuers have higher short-term liquidity needs. Issuers’ short-term debt volume may be correlated with the amount of their long-term debt, which, in turn, captures issuers’ past relationship with the rating agency (Covitz and Harrison (2003)). To isolate the effect of this past relationship from the short-term debt variable, in Specification (2) we include the logarithm of the issuer’s long-term debt amount. We can see that while issuers’ past relationship with the agency also plays a significant role in determining rating inflation, the effect of future business opportunities captured by the amount of short-term debt remains significant at the 1% level. To further check the robustness of our results, we include additional issuer characteristics as controls in Specification (3), including the logarithm of Sales, Tangibility, R&D Expense/Sales (and an R&D Missing Dummy), and Market-to-Book. Furthermore, one concern about our dependent variable is that, by construction, the difference between S&P’s and EJR’s ratings will be larger when S&P’s (EJR’s) ratings are closer to (further from) the higher end (AAA) of the rating spectrum. In this case, if the right-hand-side variables we have included in the models happen to capture the relative positions of issuers’ ratings along the rating spectrum rather than the true factors that affect rating inflation, our results are biased. To address this concern, we generate dummy variables for S&P’s rating categories on a letter basis (AAA, AA, A, BBB, BB, B, CCC and CC/C) and include them as additional controls. The results in Specification (3) are consistent with previous specifications, confirming the positive relation between issuers’ short-term debt and rating inflation. The economic significance of this effect is sizable. For example, based on Specification (4), an increase in Ln(Short-term Debt) from the 25th to the 75th percentile is associated with an additional 0.19-notch step-up in S&P’s ratings compared to EJR’s. This change indicates that S&P issues a rating one notch more favorable than EJR’s for an additional one out of every five firms as the issuer’s short-term debt increases.

In a study that examines the value of information provided by stock analysts and rating agencies, Ederington and Goh (1998) find that both agents provide new information to the market and that Granger causality of this information flows both ways. Inspired by this finding, we examine the relation between stock analysts’ information and rating inflation. More specifically, we include two variables, Number of Analysts and Standard Deviation of Analysts’ Reports (on EPS), in our model. We obtain this information from the I/B/E/S monthly summary database and use the values of the two variables in the last month of each quarter for our issuer-quarter sample. The estimation is presented in Specification (4). S&P tends to issue less inflated ratings if an issuer is followed by more stock analysts, but is more likely to do so if their opinions are more dispersed. This finding indicates that rating agencies’ tendency to issue inflated ratings may be constrained by other information providers. It also implies that stock analysts in the equity market may play a disciplining role in rating agencies’ behavior in the credit market.

One limitation of the models so far is that they do not control for unobservable characteristics of issuers that may be correlated with S&P’s incentives to issue high ratings. To address this concern, we augment our model with issuer fixed effects. This model is estimated in Specification (5). We can see that the effect of the issuer’s short-term debt remains highly significant at the 1% level. The economic significance is also comparable to previous specifications. This result further confirms the positive relationship between rating inflation and issuers’ short-term liquidity needs.
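As an illustration, a statsmodels sketch in the spirit of these specifications is shown below; the variable names, the exact set of controls, and the clustering of standard errors by issuer are our assumptions, not the paper’s estimation code.

```python
import statsmodels.formula.api as smf

def estimate_table2_style(df):
    """OLS of the S&P-EJR rating difference on Ln(Short-term Debt) and
    controls, with year and issuer dummies (i.e., issuer fixed effects) and
    standard errors clustered by issuer. Assumes `df` has no missing values
    in the model variables so that the cluster groups stay aligned.
    """
    formula = (
        "rating_diff ~ ln_short_term_debt + ln_long_term_debt + ln_sales"
        " + tangibility + rd_to_sales + rd_missing + market_to_book"
        " + C(sp_letter_rating) + C(year) + C(issuer)"
    )
    model = smf.ols(formula, data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["issuer"]})
```

With roughly a thousand issuers, absorbing the fixed effects through a within transformation rather than explicit dummies would be the more practical route; the dummy formulation simply keeps the sketch short.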

4.2 The Concentration of the Issuer’s Past Business with S&P

We now turn to our tests using the concentration of the issuer’s past business with S&P as the second proxy for the severity of the agency’s conflict of interest. While many issuers obtain more than one credit rating from major rating agencies, fewer than ten percent of investors are required to hold securities from issuers with two or more ratings (Baker and Mansi (2001)). This implies that issuers retain the option to choose different rating agencies for business. Indeed, Kronlund (2011) shows that firms choose which agencies to do business with based on the ratings they obtained from these agencies in the past. On the other hand, a firm’s choice of rating agencies may also depend on the extent to which it has relied on its current agencies for past business. An issuer that has done business exclusively with a certain rating agency (i.e., with a high business concentration) may find it costly to seek another agency. This can result from direct costs involving communicating with the new rating agency, disclosing required financial information, settling legal issues, and potential additional expenses related to rating fees.8 The switching costs may also arise from the information hold-up problem that results in the issuer being considered a lemon when it seeks to switch to a new financial intermediary (Rajan (1992)). An issuer’s more concentrated business relationship with S&P thereby reduces the likelihood that the agency will lose its clients, while the issuer’s more extensive interaction with all major rating agencies (S&P, Moody’s, and Fitch) provides it with more room and flexibility to seek alternative relationships with agencies other than S&P. In the latter case, S&P has a stronger incentive to issue favorable ratings so as to keep its business.

To measure the concentration of issuers’ past business with S&P, we trace each issuer’s debt-issuance activity on a quarterly basis back over the past two years at each time point. We define a dummy variable S&P Only Rater as 1 if S&P served as the only credit rater for the issuer’s bond issues during the past 2, 4, 6, or 8 quarters, respectively. Furthermore, we also calculate the number of bonds issued by an issuer that are rated by S&P as a fraction of those that are rated by the three major rating agencies (S&P, Moody’s, and Fitch) in total during the past 2, 4, 6, or 8 quarters. A value of 1 for the S&P Only Rater dummy, or a higher value of the fraction measure, indicates a more concentrated business relation with S&P (and hence higher switching costs for the issuer). We conjecture a negative relationship between rating inflation and the two concentration measures.

8 For example, based on a list of fees collected by The Washington Post, rating agencies charge extra rating fees for a new client, in addition to the regular fees applied to their existing customers. Furthermore, evidence also suggests that rating agencies tend to increase the “maintenance fees”, which are paid on an ongoing basis after the initial rating fee when a rating was first issued, for an existing customer if it has not brought any new business or generated other payments for the agency within a certain period of time (Klein (2004)).
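To make the construction of the two concentration measures concrete, here is a minimal pandas sketch; the bond-level layout and the use of rating counts (rather than bond counts) in the denominator are our assumptions, guided by the roughly 50% fraction discussed below.

```python
import pandas as pd

AGENCIES = ['rated_by_sp', 'rated_by_moodys', 'rated_by_fitch']   # 0/1 indicators

def sp_concentration(panel_keys: pd.DataFrame, bonds: pd.DataFrame,
                     quarters_back: int = 4) -> pd.DataFrame:
    """For each issuer-quarter in `panel_keys` (columns ['issuer', 'quarter'],
    with `quarter` an integer index), look back `quarters_back` quarters over
    the issuer's bond issues in `bonds` (columns ['issuer', 'quarter'] plus
    the 0/1 agency indicators) and compute the S&P Only Rater dummy and the
    fraction of ratings accounted for by S&P.
    """
    rows = []
    for issuer, q in panel_keys[['issuer', 'quarter']].itertuples(index=False):
        window = bonds[(bonds['issuer'] == issuer) &
                       (bonds['quarter'] > q - quarters_back) &
                       (bonds['quarter'] <= q)]
        counts = window[AGENCIES].sum()
        if counts.sum() == 0:          # no rated issuance in the window: drop
            continue
        rows.append({
            'issuer': issuer,
            'quarter': q,
            'sp_only_rater': int(counts['rated_by_sp'] > 0 and
                                 counts['rated_by_moodys'] == 0 and
                                 counts['rated_by_fitch'] == 0),
            'fraction_rated_by_sp': counts['rated_by_sp'] / counts.sum(),
        })
    return pd.DataFrame(rows)
```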

Issuers’ debt issuance information is obtained from the Fixed Investment Securities Database (FISD). This database provides key characteristics of almost all publicly traded bond issues. We merge this database with our primary rating sample using issuers’ 6-digit CUSIPs. Since firms only issue debt periodically, we omit an issuer-quarter observation if the issuer has not had any bond issues during the past n quarters, where n corresponds to the different revenue share windows. Figure 2 plots the average fraction of bond issues rated by S&P for each firm during the past 4 and 8 quarters on a quarterly basis from 1999 to 2007. We exclude 2008 and 2009 because of the abnormally small number of bond issues during the financial crisis. A few observations are worth noting. First, the fraction measures based on the two windows move closely with each other and stay close to 50% between 1999 and 2003. Second, consistent with the finding in Becker and Milbourn (2010), there is an apparent decline in the fraction of bonds rated by S&P starting from the second half of 2003. These features arise from the fact that many issuers obtained two ratings from both S&P and Moody’s before 2003, when competition in the rating industry was mostly limited to the two major rating agencies and Fitch’s market share was relatively small. Starting from 2005, however, Fitch has been playing a growing role in the rating industry because of its inclusion as a rater for the Lehman Brothers U.S. Aggregate Bond Index (now the Barclays Capital Aggregate Bond Index). This change shifted S&P’s market share from close to 50% to around 33% on average.

Table 3 presents the issuer-fixed-effect OLS regression analyses. In all specifications, we include year dummies to control for Fitch’s growing involvement in the rating industry over time as well as other time-related unobservable variables, as in Table 2.9 Consistent with our hypothesis, Panel A of Table 3 shows that issuers are more likely to receive a higher rating from S&P when its revenue share is lower. For example, using the past-4-quarter window, the coefficient on S&P Only Rater is -0.236 and is significant at the 1% level. This means that going from being the only rater of an issuer’s past-4-quarter business to a non-single rater will lead S&P to issue a rating about 0.236 notches higher than EJR’s. Similar results are found in Panel B using Fraction of Bond Issues Rated by S&P as the measure of business concentration. For instance, following Specification (2) of Panel B, a one-standard-deviation decrease in S&P’s revenue share over the past 4 quarters (0.27) will lead S&P to issue a rating about 0.09 notches higher than EJR’s, amounting to a 44% change relative to the sample mean of the rating difference.10 In Specification (4), the coefficient on the past-8-quarter revenue share is not significant at the 10% level, indicating that the effect of revenue share on the agency’s rating strategy may be more concentrated in the shorter term. Overall, the negative relation between S&P’s revenue share and its tendency to issue inflated ratings again reveals S&P’s conflict of interest due to the issuer-pays business model.

9 We also check the robustness of our results by explicitly controlling for Fitch’s market share in our models.

10 The coefficients of the other variables are similar to those in Table 2. One difference is that the coefficients on Ln(Short-term Debt) now become insignificant. This is because, by construction, our sample in Table 3 only consists of firms that have recently issued bonds in the past few quarters. This, in turn, may have diluted the “liquidity needs” effect that Ln(Short-term Debt) is supposed to capture and hence reduces the magnitude of its coefficient.

4.3 Issuers’ Management Turnover

As our final measure of the severity of S&P’s conflict of interest, we explore issuers’ management turnover. A firm’s CEO and CFO play an important role in the rating process (Graham and Harvey (2001), Kisgen (2006), Kisgen (2007), Kisgen (2009), and Norris (2009)). CFOs and CEOs usually determine which rating agency to request a rating from, and are also actively involved in the credit rating process (Fight (2001)). Since newly appointed corporate leaders are more inclined to change the firm’s operational and financial strategies (Giambatista, Rowe, and Riaz (2005)), we hypothesize that issuers’ new management is more likely to switch rating agencies. Hence, S&P is tempted to issue an inflated rating following the appointment of a new CFO or CEO in order to build a good relationship with the new management and generate more future business.

To examine this hypothesis, we obtain CEO and CFO information from the COMPUSTAT EXECUCOMP annual database. We identify issuers’ CEOs using data item CEOANN, where a CEO is identified if CEOANN=CEO. Following Gopalan, Song, and Yerramilli (2010), we identify CFOs based on managers’ titles from data item TITLEANN. A CFO is identified if a manager’s title contains CFO, chief financial officer, finance, treasurer, VP-finance, or a combination of two or more of them. We identify a new CFO (CEO) appointment if an issuer’s current CFO (CEO) is different from the one in the last fiscal year. To be consistent with EXECUCOMP’s annual frequency, we aggregate the difference between S&P’s and EJR’s ratings to the annual level by taking the mean of its values over the four quarters of each fiscal year. We further restrict our analysis to issuer-year observations where information on both the CEO and CFO is available.
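A minimal pandas sketch of the CFO turnover flag described above is given below; the lower-casing of titles, the handling of multiple matching executives, and the exact column spellings are illustrative assumptions.

```python
import pandas as pd

CFO_KEYWORDS = ['cfo', 'chief financial officer', 'finance', 'treasurer', 'vp-finance']

def flag_new_cfo(execucomp: pd.DataFrame) -> pd.DataFrame:
    """Flag firm-years with a newly appointed CFO: a CFO is an executive whose
    annual title (TITLEANN) contains one of the keywords, and a turnover is
    recorded when the identified CFO differs from the previous fiscal year's.
    `execucomp` is assumed to have columns ['gvkey', 'year', 'execid', 'titleann'].
    """
    pattern = '|'.join(CFO_KEYWORDS)
    is_cfo = execucomp['titleann'].str.lower().str.contains(pattern, na=False)
    cfos = (execucomp.loc[is_cfo, ['gvkey', 'year', 'execid']]
                     .sort_values(['gvkey', 'year'])
                     .drop_duplicates(['gvkey', 'year']))      # one CFO per firm-year
    prev = cfos.groupby('gvkey')['execid'].shift()
    cfos['new_cfo'] = (prev.notna() & (prev != cfos['execid'])).astype(int)
    return cfos[['gvkey', 'year', 'new_cfo']]
```

The CEO flag would follow the same pattern using CEOANN, and aggregating the quarterly rating difference to its annual mean then aligns the two datasets, as described above.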

Table 4 presents the results of the issuer-fixed-effect OLS regression models. We see in Specification (1) that there is a significant boost in rating inflation in the year when a new CFO is appointed (new CFO (t)) and in the following year (new CFO (t-1)). The appointment of a new CFO is associated with a total increase of 0.29 notches (0.18+0.11) in the rating difference during these two years. In Specification (2), while the coefficients on new CEO (t) and new CEO (t-1) are also positive, their significance levels are lower. This result indicates that CFOs seem to have a bigger impact than CEOs on the rating agency’s strategy. This evidence is consistent with prior studies that find CFOs are more influential in certain areas related to the management of an issuer’s financial system because of their ultimate responsibility in those areas (Mian (2001), Geiger and North (2006), and Jiang, Petroni, and Wang (2008)). Specification (3) includes both the CFO and CEO appointment dummies in the regression. The coefficients on the CFO dummies are comparable to those in Specification (1), suggesting that the effects of a new CFO are not likely to be driven by concurrent CEO changes. This result lends further support to the notion that issuers’ new management generates incentives for the rating agency to build a good relationship through an inflated rating.

4.4 The Information Value of Credit Ratings

The results so far indicate that the issuer-pays rating model leads to a significant amount of rating inflation. However, it is not clear whether inflated ratings have any welfare implications. If investors anticipate the bias in S&P’s ratings and correctly adjust for such bias, rating inflation is not likely to lead to a misallocation of resources. To explore investors’ knowledge about the rating bias, we examine the association between issuers’ bond yield spreads and the proxies for the severity of S&P’s conflict of interest. If investors adjust for rating inflation, we expect to see a significant relation between yield spreads and these proxies, after controlling for S&P’s contemporaneous credit ratings.

Following Beaver et al. (2006), we obtain from the Fixed Investment Securities Database (FISD) the Treasury Spread for new bond issues, defined as the difference between the issue’s offering yield and the yield on a benchmark Treasury security (a U.S. Treasury bond) with similar duration and maturity. Since major rating agencies define an issuer’s credit rating based on the issuer’s ability to pay back its senior unsecured bonds, we restrict our sample to issuances of senior unsecured bonds during the sample period. In addition, we exclude bonds that are callable, puttable, convertible, exchangeable, with a sinking fund, or with refund protection. We also generate the following financial variables for each issue as control variables: Enhancement is a dummy variable that equals 1 if the issue has credit enhancements; Covenants is a dummy variable that equals 1 if the debt issue contains covenants in the contract; Ln(Bond Issue Amount) is the logarithm of the par value of the debt issue in millions of dollars; Maturity in Years is the number of years to maturity of the debt. We then regress Treasury Spread on our three proxies for the severity of S&P’s conflict of interest, controlling for S&P’s issuer credit ratings. The results are presented in Table 5.
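For illustration, a sketch of this bond-level regression follows; as with the earlier regression sketch, the column names, controls, and error structure are assumptions rather than the paper’s exact specification.

```python
import statsmodels.formula.api as smf

def estimate_spread_test(bonds_df):
    """Regress the at-issuance Treasury spread on the conflict-of-interest
    proxies while controlling for S&P's issuer rating and bond characteristics,
    in the spirit of Table 5. Insignificant proxy coefficients are consistent
    with investors not pricing the predicted rating bias.
    """
    formula = (
        "treasury_spread ~ ln_short_term_debt + sp_only_rater_4q + new_cfo + new_ceo"
        " + enhancement + covenants + ln_bond_issue_amount + maturity_in_years"
        " + C(sp_letter_rating) + C(year)"
    )
    return smf.ols(formula, data=bonds_df).fit(cov_type="HC1")
```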

Notice that most variables that can predict rating inflation show up insignificantly (certain variables even show signs opposite to those expected). One exception is Ln(Short-term Debt). In Specification (1), its coefficient is positive and significant at the 5% level.11 However, its economic significance is relatively small. A one-standard-deviation increase in Ln(Short-term Debt) is associated with only a 7.6% increase in the yield spread for an average firm. This change is marginal compared to the impact of the issuer’s short-term debt on rating inflation. Furthermore, the effect of short-term debt becomes insignificant in other specifications where we control for additional proxies. Overall, we cannot reject the null hypothesis that investors do not adequately adjust for the potential rating bias. These results suggest that investors may not fully understand the information value of credit ratings, and hence, rating inflation can lead to a misallocation of resources. They further imply that regulators’ intervention in the credit rating industry would be beneficial to investors who use credit ratings to guide their investment decisions, and can potentially increase social welfare.

11 This result is consistent with Gopalan, Song, and Yerramilli (2010), who find that bonds issued by firms with a higher proportion of short-term debt trade at higher yield spreads, even after controlling for issuers’ credit ratings.

5 Robustness

5.1 Adjusted and Broader Rating Categories

One concern with the tests so far is that S&P’s ratings are usually “through-the-cycle” and may not necessarily reflect the agency’s opinion of issuers’ short-term creditworthiness. This implies that, compared to EJR’s ratings, S&P’s ratings tend to be more forward-looking and more stable. Our previous results may therefore capture differences in the nature of the two rating agencies’ rating technologies rather than rating inflation. To address this concern, we take into account S&P’s watchlist and outlook provisions. These two rating actions, by definition, reflect information in a more timely manner and are usually treated as a refinement of long-term credit ratings. We adjust S&P’s long-term ratings downwards (closer to the lower end of the rating spectrum) by one notch if S&P has put an issuer on negative watchlist or outlook, and upwards (closer to “AAA”) by one notch if S&P has put an issuer on positive watchlist or outlook. We then recalculate the rating difference variable based on S&P’s ratings adjusted for the watchlist and outlook provisions.
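A small sketch of this one-notch adjustment on the 1 (AAA) to 22 (D) scale; the string encoding of the watch/outlook signal is an assumption.

```python
from typing import Optional

def adjust_for_watch_outlook(notch: int, signal: Optional[str]) -> int:
    """Refine an S&P long-term rating by one notch using the watchlist/outlook
    signal: 'negative' moves the rating one notch toward D, 'positive' one
    notch toward AAA, and anything else leaves it unchanged."""
    if signal == 'negative':
        return min(notch + 1, 22)   # cannot go below D
    if signal == 'positive':
        return max(notch - 1, 1)    # cannot go above AAA
    return notch
```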

Another concern about our results is that the previous tests utilize rating categories on a notch basis that take into account rating modifiers (“+” and “-”). One may argue that this notch-based definition may capture rating differences that are too small in magnitude to be economically meaningful. As a robustness check, we suppress rating modifiers and redefine rating differences on a letter basis. More specifically, a rating of “AA+” is now considered the same as a rating of “AA” or “AA-”, and is more favorable than a rating of “A+”, “A” or “A-”, which belongs to the same “A” letter category. We then recalculate the difference between S&P’s and EJR’s ratings using a similar numerical conversion as before: AAA=1, AA=2, A=3, BBB=4, BB=5, B=6, CCC=7, CC=8, C=9 and D=10.

Using these adjusted ratings, we re-estimate selected specifications from Table 2, Table 3, and Table 4. The estimation results are presented in Table 6. We can see that the coefficients of all of the key variables remain significant and of the expected sign. This evidence confirms that our previous results are robust to watchlist/outlook-adjusted ratings and letter-based rating categories.

5.2 Endogeneity

Issuers’ amount of debt is endogenously determined. Issuers who obtain an inflated rating may want to take advantage of the lower cost of capital and issue more debt. This raises the concern that our previous tests in Table 2 may be subject to a reverse causality problem. To resolve this endogeneity problem, we replace Ln(Short-term Debt) with a new variable, Ln(Long-term Debt Due), defined as the logarithm of the amount of long-term debt that is due within one year. Similar to Ln(Short-term Debt), this variable also measures how much future business an issuer is likely to bring to the rating agency. However, since the repayment schedule of long-term debt was determined many years in the past, it is not likely to be affected by the rating agency’s current ratings. In other words, Ln(Long-term Debt Due) is less likely to be subject to the endogeneity problem. We repeat the estimations in Table 2 using Ln(Long-term Debt Due) as the main independent variable. Table 7 reports the results. We can see that the coefficient on Ln(Long-term Debt Due) is positive and significant, confirming that the endogenous choice of debt is not likely to drive the results in Table 2.

5.3 Rating Shopping

One may argue that some issuer characteristics used in our previous tests, such as the amount of short-term debt, may also capture issuers’ engagement in rating shopping. Rating shopping refers to the situation where an issuer approaches different rating agencies for preliminary ratings and then cherry-picks the most favorable one to disclose to the public, while concealing the lower ratings. Since rating agencies’ signals about issuers’ credit quality are noisy, the final observed rating tends to be higher if an issuer has shopped for ratings, even though none of the agencies has overstated the issuer’s creditworthiness. In this case, if certain issuer characteristics happen to capture issuers’ involvement in rating shopping, which in turn leads to a higher rating, our results would be biased.

To address this concern, we explicitly control for issuers’ rating shopping activities in our previous tests. Notice that in our second set of tests, where we use the concentration of the issuer’s past business with S&P as a proxy for the severity of the conflict of interest, rating shopping is not likely to explain our results. If a lower business concentration just captured firms’ intention to shop with other rating agencies for better ratings, then we would expect S&P’s ratings to be less favorable (relative to EJR’s) when its business concentration with an issuer is low, compared to the case when S&P’s concentration is high. This is because, based on the rating shopping argument, a low business concentration suggests that S&P’s ratings have not been sufficiently high to keep issuers from approaching S&P’s competitors for more favorable ratings. On the other hand, a high concentration should be associated with higher ratings from S&P, since this is the case when issuers want to stay with S&P and hence are least likely to shop for better ratings from other agencies. This intuition, however, is the opposite of what we find in Table 3. A similar argument can also be found in Becker and Milbourn (2010). Given this intuition, we focus our robustness checks on the tests based on issuers’ short-term debt volume and their management turnover.

Following the definition of rating shopping, we define a Rating Shopping dummy equal to 0 if an issuer has three published ratings from S&P, Moody’s, and Fitch, and equal to 1 if it only has one published rating, from S&P. This definition relies on the assumption that an issuer is not likely to have picked the highest rating if it discloses all the ratings from the agencies it has approached (we only consider the three major rating agencies). Using the bond-rating information from the FISD database, we identify an issuer as having a published issuer credit rating from Moody’s (Fitch) if one of this issuer’s outstanding senior unsecured bonds is rated by Moody’s (Fitch) at that time. This approach is based on the fact that major rating agencies provide an issuer credit rating for every borrower for which they rate any security. It generates statistics comparable to previous studies. For example, in our sample, over 95% of issuers obtain issuer ratings from both S&P and Moody’s, and about 60% of issuers obtain a third rating from Fitch, consistent with Bongaerts, Cremers, and Goetzmann (2010).
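The classification described above, in a minimal sketch; issuer-quarters with exactly two published ratings are left unclassified here, since the text does not specify how they are treated.

```python
from typing import Optional

def rating_shopping_dummy(rated_by_sp: bool, rated_by_moodys: bool,
                          rated_by_fitch: bool) -> Optional[int]:
    """Rating Shopping dummy: 0 if the issuer has published ratings from all
    three major agencies, 1 if it has a published rating from S&P only."""
    if rated_by_sp and rated_by_moodys and rated_by_fitch:
        return 0
    if rated_by_sp and not rated_by_moodys and not rated_by_fitch:
        return 1
    return None   # two-rating cases are not covered by the definition above
```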

The results controlling for rating shopping are presented in Table 8. As expected, the rating shopping dummy shows up positively, indicating that S&P indeed issues a more favorable rating if issuers have shopped for ratings. The coefficient on Rating Shopping, however, is not significant at the 10% level. More importantly, we can see from these specifications that all our previous results still hold with similar economic magnitudes. These results confirm that the relation between rating inflation and the issuer-pays model we uncover throughout the paper is not driven by issuers’ endogenous selection choices.


5.4 Alternative Definitions of the Dependent Variable

To further check the robustness of our results, we use alternative definitions for the dependent variable. First, even though differences between S&P’s and EJR’s ratings may involve the same number of notches, the interpretation of these differences can be different. For instance, the difference between “AA+” and “AA-” may have a different meaning from the difference between “BB+” and “BB-”, even though both differences are at the two-notch level. In our previous tests, this nonlinearity is captured by the S&P rating dummies we included as controls. In a further attempt, we convert each S&P and EJR rating to the corresponding five-year default rate calculated using EJR’s ratings in our sample. As before, we apply the “static pool” methodology to calculate the five-year default rates. Panel A reports the estimated default rates using EJR’s rating categories. As a comparison, we also present the default rates based on S&P’s rating categories in Column (2). Notice that these default rates are not monotonically increasing with ratings. This is probably due to the relatively short time horizon EJR’s ratings have covered, leading to noisy estimates. To calculate the rating difference, we assign the corresponding EJR-based five-year default rate to each S&P and EJR rating. For example, if for a certain issuer-quarter EJR issues a rating of “BB” while S&P issues a rating of “BB+”, the rating difference is 0.0149-0.0121=0.0028. Because this difference measures the extent to which S&P underreports issuers’ default probability, it captures the economic meaning incorporated in the nonlinearity of rating differences.
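A sketch of this default-rate conversion, using only the two EJR-based default rates quoted in the example above (the full mapping comes from Panel A):

```python
# Five-year default rates implied by EJR's ratings; only the two categories
# used in the worked example above are shown.
EJR_DEFAULT_RATE = {'BB+': 0.0121, 'BB': 0.0149}

def default_rate_difference(sp_rating: str, ejr_rating: str) -> float:
    """Rating difference expressed as the gap in implied five-year default
    rates (EJR-implied minus S&P-implied): positive values mean S&P's rating
    understates the default probability relative to EJR's."""
    return EJR_DEFAULT_RATE[ejr_rating] - EJR_DEFAULT_RATE[sp_rating]

# Worked example from the text: EJR rates "BB", S&P rates "BB+".
assert abs(default_rate_difference('BB+', 'BB') - 0.0028) < 1e-12
```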

In addition to the default rate conversion, we also define a dummy variable that equals 1 if S&P’s rating is higher than EJR’s for a given issuer at a certain point in time, and equals 0 otherwise. We employ a conditional logit model using this dummy as the dependent variable. It is worth noting that, by definition, this variable places all cases where S&P’s rating is higher than EJR’s in the same regime (the dependent variable equals 1), regardless of the magnitude of the rating differences within the regime. This is the most restrictive definition we have considered so far, and it is likely to reduce the number of observations in issuer-fixed-effect models (due to the lack of variation in the dependent variable within issuers). This may reduce the power of our tests and lower the significance of our results.

Table 9 presents the results using the alternative definitions of the dependent variable. We can see that most of our previous results still hold, and the coefficients on all the variables that proxy for the severity of S&P’s conflict of interest have the expected signs. Given the limitations of the two alternative dependent variables discussed above, we focus on the signs of these coefficients, without reading too much into their significance and magnitudes. Overall, these results confirm that our findings are robust to different definitions of the rating difference.

6 Conclusion

In this paper, we test whether the issuer-pays rating model adopted by major rating agencies contributes to rating agencies’ incentives to issue inflated ratings. We compare ratings from an issuer-paid rating agency with those from an investor-paid agency. We find that the issuer-paid rating agency assigns a more favorable rating to an issuer when the conflict of interest raised by the issuer-pays compensation model is more severe. Employing a number of measures to proxy for the severity of S&P’s conflict of interest, including (1) issuers’ short-term liquidity needs, (2) the rating agency’s revenue share, and (3) issuers’ management turnover, we find evidence of rating inflation that is both statistically and economically significant. Interestingly, investors do not seem to adjust for any rating bias in pricing issuers’ securities. These findings raise questions about the value of credit ratings and point to a potential misallocation of resources due to rating inflation.

Our findings shed light on the continuing debate over whether the issuer-pays compensation model distorts rating agencies’ incentives. They also suggest that regulatory intervention and efforts to promote a more transparent rating industry would benefit investors and could potentially lead to improvements in social welfare.


References

Ashcraft, A., P. Goldsmith-Pinkham, and J. Vickery, 2010, “MBS Ratings and the Mortgage Credit Boom,” Federal Reserve Bank of New York Staff Reports.

Baghai, R., H. Servaes, and A. Tamayo, 2011, “Have Rating Agencies Become More Conservative? Implications for Capital Structure and Debt Pricing,” Working Paper, London School of Economics.

Baker, H., and S. A. Mansi, 2001, “Assessing Credit Rating Agencies by Bond Issuers and Institutional Investors,” Working Paper, American University and Texas Tech University.

Beaver, W. H., C. Shakespeare, and M. T. Soliman, 2006, “Differential Properties in the Rating of Certified Versus Non-certified Bond-rating Agencies,” Journal of Accounting and Economics, 42, 303–334.

Becker, B., and T. Milbourn, 2010, “How Did Increased Competition Affect Credit Ratings?,” Journal of Financial Economics, forthcoming.

Blume, M. E., F. Lim, and A. C. MacKinlay, 1998, “The Declining Credit Quality of U.S. Corporate Debt: Myth or Reality?,” Journal of Finance, 53, 1389–1413.

Bongaerts, D., K. M. Cremers, and W. N. Goetzmann, 2010, “Tiebreaker: Certification and Multiple Credit Ratings,” Working Paper, Yale University.

Cornaggia, J., and K. Cornaggia, 2011, “Does the Bond Market Want Informative Credit Ratings?,” Working Paper, Indiana University.

Covitz, D. M., and P. Harrison, 2003, “Testing Conflicts of Interest at Bond Ratings Agencies with Market Anticipation: Evidence that Reputation Incentives Dominate,” Working Paper, Federal Reserve Board.


Ederington, L. H., and J. C. Goh, 1998, “Bond Rating Agencies and Stock Analysts: Who Knows What When?,” Journal of Financial and Quantitative Analysis, 33, 569–585.

Fight, A., 2001, The Ratings Game, John Wiley & Sons, New York.

Geiger, M. A., and D. S. North, 2006, “Does Hiring a New CFO Change Things? An Investigation of Changes in Discretionary Accruals,” The Accounting Review, 81, 781–809.

Giambatista, R. C., W. G. Rowe, and S. Riaz, 2005, “Nothing Succeeds Like Succession: A Critical Review of Leader Succession Literature Since 1994,” Leadership Quarterly, 16, 963–991.

Gopalan, R., F. Song, and V. Yerramilli, 2010, “Debt Maturity Structure and Credit Quality,” Working Paper, Washington University in St. Louis, Pennsylvania State University, and University of Houston.

Graham, J. R., and C. R. Harvey, 2001, “The Theory and Practice of Corporate Finance: Evidence from the Field,” Journal of Financial Economics, 60, 187–243.

Griffin, J. M., and D. Y. Tang, 2009, “Did Subjectivity Play a Role in CDO Credit Ratings?,” Working Paper, University of Texas at Austin and University of Hong Kong.

He, J., J. Qian, and P. E. Strahan, 2011, “Are All Ratings Created Equal? The Impact of Issuer Size on the Pricing of Mortgage-Backed Securities,” Working Paper, Boston College.

Jiang, J., K. R. Petroni, and I. Y. Wang, 2008, “CFOs and CEOs: Who Has the Most Influence on Earnings Management?,” Working Paper, Michigan State University.

Jiang, J., M. Stanford, and Y. Xie, 2011, “Does It Matter Who Pays for Bond Ratings?,” Journal of Financial Economics, Forthcoming.

Kisgen, D., 2007, “The Influence of Credit Ratings on Corporate Capital Structure Decision,” Journal of Applied Corporate Finance, 19, 56–64.


Kisgen, D. J., 2006, “Credit Ratings and Capital Structure,” Journal of Finance, 61, 1035–1072.

Kisgen, D. J., 2009, “Do Firms Target Credit Ratings or Leverage Levels?,” Journal of Financial and Quantitative Analysis, Forthcoming.

Klein, A., 2004, “Credit Raters’ Power Leads to Abuses, Some Borrowers Say,” The Washington Post, November 24, 2004.

Kronlund, M., 2011, “Best Face Forward: Does Rating Shopping Distort Observed Bond Ratings?,” Working Paper, University of Illinois at Urbana-Champaign.

Mian, S., 2001, “On the Choice and Replacement of Chief Financial Officers,” Journal of Financial Economics, 60, 143–175.

Norris, F., 2009, “New SEC Rule Heightens Risk of Insider Trading,” The New York Times, September 24, 2009.

Rajan, R., 1992, “Insiders and Outsiders: The Choice Between Informed and Arm’s Length Debt,” Journal of Finance, 47, 1367–1400.

Standard & Poor’s, 2008, 2007 Annual Global Corporate Default Study and Rating Transitions, February 5, 2008.

[Figure 1, Panel A: Five-year default rates for S&P’s rating categories (AAA-A: 0.0017; BBB: 0.0086; BB: 0.0251; B: 0.0885; CCC-C: 0.2665), split by whether EJR’s rating is less favorable than S&P’s.]

[Figure 1, Panel B: Five-year default rates for EJR’s rating categories (AAA-A: 0.0007; BBB: 0.0055; BB: 0.0134; B: 0.0785; CCC-C: 0.2005), split by whether S&P’s rating is less favorable than EJR’s.]

[Figure 2: Average fraction of bond issues rated by S&P, plotted quarterly.]

