
ABSTRACT

TANG, XIAOYAN. An Analysis of Interest Rate Forecasts from Professional Forecasters. (Under the direction of Professor Douglas Pearce).

This dissertation analyzes the interest rate forecasts provided by professional forecasters to a monthly survey compiled by the Wall Street Journal (WSJ) from three aspects: rationality evaluation, herding behavior, and forecast disagreement and uncertainty.

The rationality evaluation includes tests of unbiasedness, efficiency, and rigidity of information. The

evaluation finds that although the funds rate forecasts from the WSJ display some inefficiency and

imperfect information, the forecasts are generally unbiased and most forecasters are efficient during the

non-crisis time period. During the crisis, most of the funds rate forecasts from the WSJ tend to be

higher than the actual funds rates. The note yield forecasts from the WSJ also have some inefficiency,

and most forecasters predict higher than actual rates. This should be kept in mind when using the note

yield forecasts.

Using a regression method, the herding behavior analysis finds that overall most forecasters use the

changes in the actual interest rate and the previous mean forecasts when they revise the funds rate

forecasts and the note yield forecasts. Some forecasters change their forecast patterns between the

non-crisis time period and the financial crisis time period. Another non-parametric method is used to examine

the herding behavior in the same forecast sample. This method finds most forecasters are unbiased, which

is not consistent with the results of the regression method. Some simulation experiments raise questions

about the performance of the non-parametric method.

The analyses of two measurements of the forecast disagreement in the forecasts from the WSJ find

that: 1. the effect of model differences on the disagreements seems to be dominant in the financial crisis,

QE1, after QE1 and before QE2, and after QE2; the effect of the public information signal seems to

be dominant during the time period from mid 2004 to mid 2006 and QE2. 2. during QE1, the FOMC

announcements do not significantly change the dispersions of the funds rate forecasts, but significantly


© Copyright 2014 by Xiaoyan Tang


An Analysis of Interest Rate Forecasts from Professional Forecasters

by

Xiaoyan Tang

A dissertation submitted to the Graduate Faculty of

North Carolina State University

in partial fulfillment of the

requirements for the Degree of

Doctor of Philosophy

Economics

Raleigh, North Carolina

2014

APPROVED BY:


DEDICATION


BIOGRAPHY


ACKNOWLEDGEMENTS

I could not finish this dissertation without the guidance of my advisor and committee members, the help

from the faculty and staff in the Department of Economics, and the support from my family.

I would like to express my sincere appreciation to my advisor, Dr. Douglas Pearce. I thank him

for his excellent guidance, his patience in reading the drafts, correcting my writing errors, and making

suggestions. I also thank him for providing the data and useful papers.

I would like to thank Dr. Bloomfield, Dr. Inoue, and Dr. Pelletier for guiding me through the past

years. My special thanks goes to Dr. Mitchell for participating in my final oral exam committee on

behalf of Dr. Inoue.

I would like to thank Dr. Morant and Ms. Carpenter for the generous help since I joined the Ph.D.

program in the Department of Economics. I thank the Department of Economics for financial support.

I thank my mom, dad, husband, and daughter for their support, love and encouragement during the


TABLE OF CONTENTS

LIST OF TABLES . . . . ix

LIST OF FIGURES . . . . xii

Chapter 1 Introduction . . . . 1

Chapter 2 Rationality of the Interest Rate Forecasts from Professional Forecasters . . . 3

2.1 Introduction . . . 3

2.2 Literature review on rationality evaluation . . . 5

2.2.1 Muth’s rationality . . . 5

2.2.2 Testable implications of rationality . . . 5

2.2.3 Review of the unbiasedness test . . . 6

2.2.4 The efficiency test . . . 8

2.2.5 Forecast error decomposition and multiple dimension forecast structure . . . 8

2.2.6 Sticky information and imperfect information . . . 10

2.2.7 Other issues . . . 11

2.2.7.1 Heterogeneity . . . 11

2.2.7.2 Consensus forecasts and individual forecasts . . . 12

2.2.7.3 Alternative loss functions . . . 12

2.3 Forecast data . . . 13

2.3.1 The forecasts from the WSJ survey . . . 13

2.3.1.1 Data selection for the WSJ forecasts . . . 14

2.3.1.2 Data structure of the WSJ forecasts . . . 14

2.3.1.3 The financial crisis and non-financial crisis sections of the forecasts from the WSJ survey . . . 15


2.4.1.3 The efficiency test . . . 18

2.4.2 The unbiasedness test results for individual forecasters . . . 20

2.4.2.1 The unbiasedness test for the funds rate forecasts . . . 20

2.4.2.2 Ten-year Note yield forecast . . . 21

2.4.2.3 The twenty-two common forecasters’ unbiasedness test results . . . 22

2.4.2.4 The impact of the financial crisis on the funds rate forecasts and the note yield forecasts . . . 22

2.4.3 The efficiency test results for individual forecasters . . . 23

2.4.3.1 The results for the funds rate forecasts . . . 24

2.4.3.2 The results for the note yield forecasts . . . 24

2.4.3.3 The twenty-two common forecasters efficiency test results . . . 24

2.5 Rationality analysis for the average forecasts . . . 25

2.5.1 The unbiasedness test . . . 25

2.5.2 The unbiasedness test results . . . 25

2.6 Information rigidity . . . 26

2.6.1 Test of rigidity of information for the forecasts from the WSJ survey . . . 27

2.6.2 Test of rigidity of information for the forecasts from the SPF . . . 28

2.7 Conclusion . . . 29

Chapter 3 Herding or anti-herding . . . . 63

3.1 Introduction . . . 63

3.2 Models in past studies . . . 64

3.2.1 Traditional regression methods . . . 65

3.2.2 A nonparametric test method proposed by Bernhardt, Campello, and Kutsoati (2006) . . . 67

3.3 Data and Model . . . 70


3.4.1 Regression method . . . 72

3.4.1.1 Funds rate forecasts . . . 73

3.4.1.2 Note yield forecasts . . . 74

3.4.2 S statistic method . . . 75

3.4.2.1 Funds rate forecasts . . . 75

3.4.2.2 Note yield forecasts . . . 76

3.4.3 Comparison of regression to S test for twenty-two common forecasters . . . 76

3.5 Simulate herding forecasts for testing the reliability of the S statistic test . . . 78

3.5.1 Generate parameters needed in the simulation procedure . . . 78

3.5.2 The simulation procedure for examining the performance of the S test . . . 79

3.5.2.1 A special case: the accumulated common future shocks λ_th = 0 . . . 79

3.5.2.2 The accumulated common future shocks λ̃_th = Σ_{j=h}^{1} ũ_tj . . . 81

3.5.3 The S statistic under the error decomposition framework . . . 82

3.5.3.1 The special case λ_th = 0 and φ_i = 0 . . . 82

3.5.3.2 λ_th = Σ_{j=h}^{1} u_tj, φ_i = 0 . . . 84

3.5.4 The interpretation for the S test results in section 3.4.2 . . . 85

3.6 Conclusion . . . 85

Chapter 4 Uncertainty and dispersion of forecasts across forecasters . . . . 101

4.1 Introduction . . . 101

4.2 Literature review . . . 102

4.2.1 Proxy of uncertainty . . . 102

4.2.2 Understanding disagreement . . . 104

4.3 Forecast data and the measurement of forecast uncertainty . . . 107

4.3.1 Forecast Data . . . 107

4.3.2 Measurements of uncertainty . . . 108


4.4.1.1 Data and model 4.1 and 4.2 . . . 113

4.4.1.2 The results of model 4.1 and 4.2 for the funds rate forecasts . . . 116

4.4.1.3 The results of model 4.1 and 4.2 for the note yield forecasts . . . 117

4.4.1.4 Data and Model 4.3 . . . 118

4.4.1.5 The results of model 4.3 for the funds rate forecasts . . . 120

4.4.1.6 The results of model 4.3 for the note yield forecasts . . . 121

4.4.1.7 A summary for the results of the model 4.1, 4.2, and 4.3 . . . 122

4.4.2 The connection between forecast dispersion and the FOMC actions (treating the change of the funds rate target as a categorical variable) . . . 124

4.4.2.1 Data . . . 124

4.4.2.2 Model . . . 124

4.4.2.3 The results of model 4.4 for the funds rate forecasts . . . 125

4.4.2.4 The results of model 4.4 for the dispersion of the note yield forecasts . . . 126

4.4.2.5 A summary of the above results . . . 128

4.5 Conclusion . . . 129

Chapter 5 Summary . . . 159

References . . . . 161

Appendices . . . 167

Appendix A. Estimation of σ²_u, σ²_εi, σ²_ut and σ²_εit . . . 168

Appendix B. Estimate the variance of the coefficient in the unbiasedness test . . . 169

Appendix C. Correlation between revisions ν_th and ν_{t,h+1} . . . 173

Appendix D. The covariance matrix in the unbiasedness test for the mean forecasts . . . 174

Appendix E. The sticky information model and the test for FIRE . . . 176


LIST OF TABLES

Table 2.1 An example for explaining the structure of individual’s forecasts . . . 31

Table 2.2 The unbiasedness test for the funds rate forecasts using all individuals' forecasts in the sample . . . 32

Table 2.3 The unbiasedness test for the funds rate forecasts (in the non-crisis subset and the crisis subset) . . . 33

Table 2.4 Unbiasedness test for the note yield forecasts . . . 34

Table 2.5 Unbiasedness test for the note yield forecasts (in the non-crisis subset and the crisis subset) . . . 34

Table 2.6 Unbiasedness test of 22 forecasters in different economies for the funds rate forecasts and the note yield forecasts . . . 35

Table 2.7 Efficiency test for the funds rate forecasts (the first test) . . . 36

Table 2.8 Efficiency test for the funds rate forecasts (the second test) . . . 37

Table 2.9 Efficiency test for the funds rate forecasts (the third test) . . . 38

Table 2.10 Efficiency test for the ten-year note yield forecasts (the first test) . . . 39

Table 2.11 Efficiency test for the ten-year note yield forecasts (the second test) . . . 40

Table 2.12 Efficiency test for the 10-year Note yield forecasts (the third test) . . . 41

Table 2.13 Efficiency test in the funds rate forecasts and the note yield forecasts for the 22 common forecasters (test 3) . . . 42

Table 2.14 The unbiasedness test for average forecasts . . . 42

Table 2.15 The rigidity test for the forecasts from the WSJ survey and SPF . . . 43

Table 2.16 The sticky test for the average fund rate forecasts . . . 43

Table 2.17 The sticky test for the average 3 month T-bill rate forecasts (SPF) . . . 43

Table 2.18 The sticky test for the average 3 month T-bill rate forecasts before 2007 Q3 (SPF) . . . 44


Table 3.4 Herding test in the note yield forecasts for individual forecaster (using the whole

sample data) . . . 89

Table 3.5 Herding test in note yield forecasts for individual forecaster in the non-crisis and the crisis . . . 90

Table 3.6 Herding test in note yield forecasts for each category . . . 90

Table 3.7 S test of the funds rate forecasts for individual forecasters . . . 91

Table 3.8 S test of the funds rate forecasts for six industry categories . . . 91

Table 3.9 S test for individual forecaster in note yield forecasts . . . 92

Table 3.10 S test of the note yield forecasts for 5 industry categories . . . 92

Table 3.11 Test of herding towards the mean for 22 common forecasters (regression and S statistic methods) . . . 93

Table 3.12 Firms and their industry categories . . . 94

Table 3.13 Simulate the simple case of forecasts and take the S statistic test . . . 94

Table 3.14 Simulate forecasts with λ_th = Σ_{j=h}^{1} ũ_tj and take the S statistic test . . . 95

Table 3.15 The S test on the funds rate forecasts without biases and (or) common shocks in the future . . . 96

Table 3.16 The S test on the note yield forecasts without biases and (or) common shocks in the future . . . 97

Table 4.1 Correlation coefficients of disagreements and uncertainties for the funds rate forecasts . . . 132

Table 4.2 Correlation coefficients of disagreements and uncertainties for the note yield forecasts . . . 132

Table 4.3 The dispersion and robust dispersion of the funds rate forecasts vs the number of announcements of FOMC (model 4.1) . . . 132

Table 4.4 The dispersions and robust dispersions of the funds rate forecasts vs the changes in the funds rate target (model 4.2) . . . 133


Table 4.8 The dispersions and the robust dispersions of the note yield forecasts vs the changes in the actual funds rate and macroeconomic variables (model 4.3) . . . 136

Table 4.9 The dispersions and the robust dispersions of the funds rate forecasts vs FOMC actions (reference: “−1.25 to −0.5”) . . . 137

Table 4.10 The dispersions and the robust dispersions of the funds rate forecasts vs FOMC actions (reference: “no announcement”) . . . 138

Table 4.11 The dispersions and the robust dispersions of the note yield forecasts vs FOMC actions (reference: “−1.25 to −0.5”) . . . 139

Table 4.12 The dispersions and robust dispersions of the note yield forecasts vs FOMC actions


LIST OF FIGURES

Figure 2.1 Time line of a forecaster’s funds rate forecasts . . . 45

Figure 2.2 Cross-section structure of funds rate forecasts (target time June 2004) . . . 45

Figure 2.3 The estimated biases of forecasters on the funds rate forecasts . . . 46

Figure 2.4 The estimated biases of forecasters on the note yield forecasts . . . 47

Figure 2.5 The estimated monthly shocks for the funds rate forecasts . . . 48

Figure 2.6 The estimated monthly shocks for the note yield forecasts . . . 49

Figure 2.7 The funds rate forecast errors of Standard and Poor’s over time . . . 50

Figure 2.8 The funds rate forecast errors of Standard and Poor’s (in horizons 10, 7, 4, 1) . . 51

Figure 2.9 The individual forecasters’ forecast errors of the funds rate (1) . . . 52

Figure 2.10 The individual forecasters’ forecast errors of the funds rate (2) . . . 53

Figure 2.11 The individual forecasters’ forecast errors of the funds rate (3) . . . 54

Figure 2.12 The individual forecasters’ forecast errors of the funds rate (4) . . . 55

Figure 2.13 The individual forecasters’ forecast errors of the funds rate (5) . . . 56

Figure 2.14 The individual forecasters’ forecast errors of the funds rate (6) . . . 57

Figure 2.15 The individual forecasters’ forecast errors of the ten-year note yield (1) . . . 58

Figure 2.16 The individual forecasters’ forecast errors of the ten-year note yield (2) . . . 59

Figure 2.17 The individual forecasters’ forecast errors of the ten-year note yield (3) . . . 60

Figure 2.18 The individual forecasters’ forecast errors of the ten-year note yield (4) . . . 61

Figure 2.19 Adjustments of the Federal Funds Target from Jan 2003 to Dec 2008 . . . 62

Figure 3.1 Forecasts for funds rate at four target dates . . . 98

Figure 3.2 Actual funds rate over time . . . 99

Figure 3.3 Actual note yield over time . . . 100


Figure 4.6 Two measures of scaled dispersion for the funds rate forecasts through years (WSJ

survey) . . . 146

Figure 4.7 Two measures of scaled dispersion for the funds rate forecasts before December 2007 (WSJ survey) . . . 147

Figure 4.8 Two scaled measures of dispersion for the note yield forecasts through years (WSJ survey) . . . 148

Figure 4.9 Two measures of uncertainty for the funds rate forecasts through years (WSJ survey) . . . 149

Figure 4.10 Two measures of uncertainty for the note yield forecasts through years (WSJ survey) . . . 150

Figure 4.11 Correlation between disagreement and uncertainty for the funds rate forecasts . . 151

Figure 4.12 Correlation between disagreement and uncertainty for the note yield forecasts . . 152

Figure 4.13 Two measures of dispersion for the funds rate forecasts through years (WSJ survey) and the actual funds rate . . . 153

Figure 4.14 Two measures of dispersion for the note yield forecasts through years (WSJ survey) and the actual note yield . . . 154

Figure 4.15 Scatter plots . . . 155

Figure 4.16 The actual funds rate over time . . . 156

Figure 4.17 Changes in the actual funds rates over time . . . 156

Figure 4.18 Inflation rate from December 02 to December 11 . . . 157

Figure 4.19 Unemployment rates from December 02 to December 11 . . . 157


Chapter 1

Introduction

People need forecasts of interest rates to make financial, business, and policy decisions, so the quality of the forecasts and the process by which the forecasts are formed are important. In this dissertation, my research focuses on analyzing the federal funds rate forecasts and the U.S. ten-year note yield forecasts from the Wall Street Journal (WSJ) survey in three aspects: rationality, herding behavior, and uncertainty and disagreement.1

Two testable implications of rationality are unbiasedness and efficiency. In Chapter 2, the unbiasedness and efficiency of the funds rate forecasts and the note yield forecasts from the WSJ survey are examined. Rationality with information friction is an alternative explanation for expectation formation. A simple test is used to examine the information rigidity of the two interest rate forecasts from the WSJ.

In Chapter 3, I use two methods to test for herding behavior in the funds rate forecasts and the note yield forecasts from the WSJ survey. The first is a regression method. The second is a new, non-parametric method. Since no reports exist on the performance of the non-parametric method, I use simulation experiments to examine it.


1. Which source has a dominant impact on the disagreement of the funds rate forecasts and the note yield forecasts after a public announcement: the public information signal or model differences?

2. Did the FOMC policies reduce the disagreement of the funds rate and note yield forecasts after the financial crisis?


Chapter 2

Rationality of the Interest Rate Forecasts from Professional Forecasters

2.1 Introduction

Forecasts of interest rates are an important source of information for investment strategies, purchasing plans, and even stabilization policy. Evaluating forecasts can help forecast users better understand the quality of the forecasts they are using.


economists produce unbiased forecasts, the predictions of the change directions are not more accurate than chance. For most of the economists, the Treasury bill rate forecasts are not significantly different from the predictions of the random walk model in accuracy, and the Treasury bond rate forecasts are significantly worse than the predictions of the random walk model. They find evidence of heterogeneity across economists.

Although rational expectations is accepted as a theoretical foundation, it is criticized for the difficulties in producing some features of macroeconomic data. Economists are exploring alternative assumptions for expectation formation. Models of rational expectations with information frictions are one of them. These types of models include the sticky-information model and the imperfect information model.

Forecast data are useful in testing alternative models of expectations. Carroll (2003) uses household expectations data to show that although households’ expectations are not rational in regular rationality tests, the process of households forming their expectations by updating probabilistically toward the views of professional forecasters may be rational. Coibion and Gorodnichenko (2010) propose a simple model to test for information rigidity. They apply the test to historical inflation forecasts from the U.S. Survey of Professional Forecasters (SPF), and find that agents put a weight of 0.45 on new information and 0.55 on previous forecasts.


investigates models of information rigidity. Section 2.7 gives the conclusions.

2.2 Literature review on rationality evaluation

Researchers use different models to describe expectation formation, such as structural models, implicit expectation models and rational expectation models (Lovell, 1986, Pesaran and Weale, 2006). Among these models, the rational expectation of Muth (1961) is generally accepted as a theoretical foundation.

2.2.1 Muth’s rationality

Muth (1961) defined expectations as rational if “expectations of firms (or, more generally, the subjective probability distribution of outcomes) tend to be distributed, for the same information set, about the prediction of the theory (or the “objective” probability distributions of outcomes)” (Muth, 1961, P. 316). Rational expectations assert that “the economy generally does not waste information” (Muth, 1961, P. 315). A rational analyst would make unbiased forecasts implied by the model (Poole, Phelps, and Baily, 1976).

2.2.2 Testable implications of rationality

Expectation rationality implies some statistical properties. Survey forecasts can be evaluated by testing the statistical properties implied by expectation rationality.1 Under the assumption of a quadratic loss function, unbiasedness and efficiency are two often used properties.2

1 McNees (1978) examines the unbiasedness and efficiency properties for three commercial forecasters’ expectations


The unbiasedness of expectations means that the expected value of forecast errors is zero (Lovell, 1986). In Batchelor and Dua (1991), unbiasedness implies E(e_ith | Ω_ith) = 0, where e_ith is the forecast error of individual i, e_ith = A_t − F_ith,3 A_t is the actual value at time t, F_ith is the forecast of individual i made at time t − h, and Ω_ith is the information set available to individual i at time t − h.

The efficiency of expectations means consistent information use (Berger and Krane, 1985), or forecast revisions should not be predictable from past revisions. There are weak efficiency forecasts and strong efficiency forecasts. The strong efficiency forecasts are obtained by using all available information to minimize the loss function; and the weak efficiency forecasts efficiently use information about past forecasts (Nordhaus, 1987).

2.2.3 Review of the unbiasedness test

The unbiasedness test of Theil (1966) regresses the actual values on the survey expectations:

A_t = α + β F_th + ε_th, (2.1)

where A_t is the actual value, F_th is the expectation for A_t formed at time t − h, and ε_th is the random error term. The null hypothesis is:

H_0: (α, β) = (0, 1)
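As an illustration, the unbiasedness regression (2.1) can be sketched on simulated data; the series, seed, and variable names below are assumptions for illustration, not the WSJ sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
F = rng.uniform(1.0, 6.0, n)          # hypothetical survey forecasts F_th
A = F + rng.normal(0.0, 0.25, n)      # actuals built to satisfy H0: A_t = F_th + noise

# OLS of A_t on a constant and F_th (equation 2.1)
X = np.column_stack([np.ones(n), F])
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(X, A, rcond=None)

# Under H0: (alpha, beta) = (0, 1); the estimates should be close for this data
print(alpha_hat, beta_hat)
```

In practice the joint null would be checked with a Wald test using a covariance matrix robust to the serial correlation discussed later in this section.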

Instead of using the actual value and the forecasts in the regression, Batchelor and Dua (1991) use the forecast errors to test the unbiasedness and orthogonality conditions.

e_ith = α_i′ x_ith + ε_ith, (2.2)

where e_ith is the forecast error of forecaster i at time t − h, x_ith is any variable known to the forecasters at time t − h, ε_ith is the error term, and the testing restriction is α_i = 0.

The ordinary least squares regression (OLS) requires that the errors in the regression are serially uncorrelated. This requirement can be satisfied when the forecast interval equals the sampling interval. When the survey data have multi-period forecasts and the forecast horizon is longer than the sample frequency, the errors in the above tests (2.1) and (2.2) are serially correlated, making the variance-covariance matrix generated by OLS incorrect (McNees, 1978; Brown and Maital, 1981; Berger and Krane, 1985; Bryan and Gavin, 1986; Hansen and Hodrick, 1980). Analysts use different methods to solve this problem.

1. Using rolling-event forecasts

One way to avoid serially correlated errors is to use only fixed horizon (or rolling-event) forecast data so the forecast interval is the same as the sampling interval (Hansen and Hodrick, 1980; Bryan and Gavin, 1986). But this method reduces sample size and the power of tests.

2. Using generalized least squares (GLS) instead of OLS

The forecasts used in McNees (1978) are multi-period forecasts. He pools the fixed horizon forecasts over the sample time periods, and tests for the unbiasedness for each horizon. McNees (1978) uses GLS to estimate the coefficients and the variance-covariance matrix, since there is serial dependence in the forecasts and errors.

3. Using the method of Hansen and Hodrick (1980) and Hansen (1982)


4. Using the efficiency test of Nordhaus (1987)

The weak efficiency test of Nordhaus (1987) uses the revisions of the fixed-event forecasts. The forecast revisions avoid the above problems associated with using the multi-period forecasts. For variables like GNP, using revisions can also avoid the problem associated with using the actual values.

2.2.4 The efficiency test

The weak efficiency test of Nordhaus (1987) regresses current forecast revisions on the last period forecast revisions:

ν_th = β ν_{t,h+1} + ε_th, (2.3)

where ν_th = F_th − F_{t,h+1} is the forecast revision at time t − h and ε_th is the error term. The null hypothesis is H_0: β = 0.6
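The weak efficiency test amounts to checking that successive revisions are uncorrelated. A minimal sketch on simulated fixed-event revisions (the panel dimensions and data-generating process are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
T, H = 200, 10
# Revisions nu_{t,h} of weakly efficient forecasts are unpredictable:
# simulate them as white noise across targets t and horizons h.
rev = rng.normal(0.0, 0.1, (T, H - 1))

nu_curr = rev[:, :-1].ravel()     # nu_{t,h}
nu_prev = rev[:, 1:].ravel()      # nu_{t,h+1}

# OLS slope of nu_{t,h} on nu_{t,h+1} without an intercept (equation 2.3)
beta_hat = (nu_prev @ nu_curr) / (nu_prev @ nu_prev)
print(beta_hat)                   # near 0 for efficient (white-noise) revisions
```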

Batchelor and Dua (1991) extend the efficiency test to a more general version:

ν_ith = β_i x_{i,t,h+1} + η_ith, (2.4)

where x_{i,t,h+1} can be the value of any variable available before time t − h − 1, η_ith is the error term, and the test restriction is β_i = 0.7

2.2.5 Forecast error decomposition and multiple dimension forecast structure


components of the error covariance matrix can be estimated in an easy way.

Suppose a forecast survey has N forecasters who provide forecasts for the interest rate at target time t (A_t). The first forecast survey for A_t is one year ahead of t, and forecasters then update their forecasts for A_t monthly.8 Let F_ith represent the forecast for A_t made by forecaster i at forecast horizon h, with i ∈ [1, N], t ∈ [t_s, t_e], h ∈ [1, H], where t_s and t_e are the first and the last target times respectively.9

Davies and Lahiri (1995) set up the following decomposition model for forecast error:

A_t − F_ith = φ_i + λ_th + ε_ith (2.5)

where A_t is the actual interest rate at time t, φ_i is the bias of individual i, λ_th is the accumulated common shocks in the future (from horizon h to horizon 1), and ε_ith is the idiosyncratic error (or all other errors).

λ_th = Σ_{j=h}^{1} u_tj

where u_tj are the common shocks that occurred between t − j and t − j + 1.

This framework indicates that forecasters should be responsible for the bias φ_i and the idiosyncratic errors ε_ith, but not for the unanticipated shocks λ_th. Davies and Lahiri (1995) regress the forecast errors on individual-specific dummy variables to estimate individuals’ biases and calculate the corrected covariance matrix for the unbiasedness tests.10
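A sketch of the bias-estimation step under decomposition (2.5): regressing forecast errors on a full set of individual dummies reduces to taking each forecaster's mean error, with the common shocks averaging out across the (t, h) cells. The panel below is simulated; its dimensions and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, H = 5, 12, 10
phi = rng.normal(0.0, 0.3, N)                # true individual biases phi_i
lam = rng.normal(0.0, 0.2, (T, H))           # accumulated common shocks lambda_th
eps = rng.normal(0.0, 0.1, (N, T, H))        # idiosyncratic errors eps_ith

err = phi[:, None, None] + lam[None, :, :] + eps   # A_t - F_ith per eq. (2.5)

# OLS on individual dummies = forecaster-specific mean error
phi_hat = err.reshape(N, -1).mean(axis=1)
print(np.abs(phi_hat - phi).max())           # small estimation error
```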

Under the error decomposition model of Davies and Lahiri (1995), Clements, Joutz, and Stekler (2007) construct their efficiency test as:

ν_th = α(x_{t,h+1} − x_{t,h+2}) + u_{t,h+1} + ε_th − ε_{t,h+1}, (2.6)

where x is any variable known by forecasters at the time they make their forecasts. (x_{t,h+1} − x_{t,h+2}) can be the forecast revision, ν_{t,h+1}. But in one efficiency test of Clements, Joutz, and Stekler (2007), they


2.2.6 Sticky information and imperfect information

The rational expectation model discussed above assumes all-knowing agents with full-information. Another type of expectation formation model is rational expectations with information frictions, such as the sticky-information model and the imperfect information model.

The sticky information model is proposed by Mankiw and Reis (2002). They believe that the costs of collecting information and re-optimizing expectations prevent the spread of information. 11 In the sticky

information model, agents do not update information continuously. But when they update, they make a full information rational expectation.

The imperfect information model is different from the sticky information model. In this model, the agents update their information continuously, but can’t obtain full information each time (Coibion and Gorodnichenko, 2010).12

Coibion and Gorodnichenko (2010) propose approaches to test full information rational expectations (FIRE) and measure the degree of rigidity.

From the sticky information model, Coibion and Gorodnichenko (2010) find the following relationship,

A_t − F_th = [λ/(1 − λ)] ΔF_th + v_th (2.7)

where λ can be interpreted as the degree of rigidity (forecasters update their information with probability (1 − λ) each period), and v_th is the error term.

From the imperfect information model, Coibion and Gorodnichenko (2010) finds the following relationship,

A_t − F_th = [(1 − K)/K] ΔF_th + ε_th (2.8)

where K represents the relative weight placed on new information relative to previous forecasts. K < 1 means incomplete information, and 1 − K can be interpreted as the degree of information rigidity.


They find that both the sticky information model and the imperfect information model predict the same relationship13,

e_th = α + β ν_th + ε_th, (2.9)

where the forecast error e_th = A_t − F_th, the forecast revision ν_th = F_th − F_{t,h+1}, and ε_th is the error term. Since β = λ/(1 − λ) or β = (1 − K)/K, the degree of information rigidity can be estimated by λ̂ = β̂/(1 + β̂) or 1 − K̂ = β̂/(1 + β̂).
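A minimal numerical sketch of recovering the degree of rigidity from regression (2.9), with a hypothetical true λ of 0.55 (the simulated series are illustrative, not the SPF data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
lam = 0.55                                   # assumed true degree of rigidity
nu = rng.normal(0.0, 0.2, n)                 # forecast revisions nu_th
e = lam / (1 - lam) * nu + rng.normal(0.0, 0.1, n)   # errors per eq. (2.9)

# OLS of e_th on a constant and nu_th, then lambda-hat = beta-hat / (1 + beta-hat)
X = np.column_stack([np.ones(n), nu])
(_, beta_hat), *_ = np.linalg.lstsq(X, e, rcond=None)
lam_hat = beta_hat / (1 + beta_hat)
print(lam_hat)                               # recovers a value near the assumed 0.55
```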

They apply the test of FIRE to historical inflation forecasts from 1969 to 2010 in the U.S. Survey of Professional Forecasters (SPF). The null hypothesis of FIRE is rejected. The degree of information rigidity is found to be 0.55. After testing some testable implications of the sticky information and imperfect information models, they conclude that the imperfect information model is a reasonable description of the expectation formation process for their forecast data. In their inflation forecasts, agents put a weight of 0.45 on new information and 0.55 on previous forecasts.

They also analyze the effects of policy changes on the expectation formation process. For example, they find that information rigidity was low before the Great Moderation and high after it.

Their approach can be extended to study expectations on other variables.

2.2.7 Other issues

2.2.7.1 Heterogeneity

Mitchell and Pearce (2007) find evidence of systematic heterogeneity in the interest rate forecasts and the exchange rate forecasts from the WSJ. Davies and Lahiri (1995) find heterogeneity among the Blue Chip forecasters, and they also consider heterogeneity over time.

To avoid heterogeneity caused by different forecasters, Batchelor and Dua (1991) and Clements, Joutz, and Stekler (2007) do not pool individual forecasts. They analyze individual forecasters instead.


2.2.7.2 Consensus forecasts and individual forecasts

Consensus forecasts are the mean, median, geometric mean, or harmonic mean of the forecasts (Bonham and Cohen, 2001). Researchers use consensus forecasts in the rational expectation analyses because:

1. The consensus forecasts can avoid missing data (Bonham and Cohen, 2001);

2. The consensus forecasts are useful to consumers. Carroll (2003) finds that the typical household updates their expectations probabilistically toward the mean forecasts of professional forecasters;

3. The consensus forecasts can be treated as a proxy of market expectations (Bonham and Cohen, 2001).

But consensus forecasts can cause at least two kinds of biases (Bonham and Cohen, 2001):

1. When the expectations are made rationally but with incomplete information, the mean forecast may produce bias and inconsistency (see Figlewski and Wachtel, 1983, and Keane and Runkle, 1990);

2. The averaged forecasts may hide the individuals’ biasedness, and lead to falsely accepting the unbiasedness hypothesis.

An alternative is to use individual forecasts. There are two advantages of individual level study:

1. Individual level data avoid heterogeneity across individuals;

2. The variance-covariance matrix of the coefficient is simplified.

2.2.7.3 Alternative loss functions


2.3 Forecast data

2.3.1 The forecasts from the WSJ survey

Since 1981, the WSJ has published forecasts of economic indicators.14 The forecasts are provided by

the economists from banks, securities firms, non-financial industries, consulting and forecasting companies, universities, and professional associations. One important feature of the WSJ survey is that the names of forecasters and their firms are attached to their forecasts. Forecasters may therefore be more careful about the forecasts they publish than participants in anonymous surveys (such as the SPF).

The prime rate was the first economic indicator published in the WSJ forecast survey. In January 1982, the Treasury bill and Treasury bond interest rates were added. In December 1985, the GDP growth rate and the CPI inflation rate were added. The dollar-yen exchange rate became available after December 1988. The federal funds rate has been reported since 2002 and the ten-year note yield since 2003.

Before 2002, the WSJ survey was conducted in the first weeks of January and July. In January, economists made forecasts for the last business day of June (six-month horizon) and the last business day of December (one-year horizon). In July, economists made forecasts for the last business day of December (six-month horizon) and the last business day of the following June (one-year horizon). These forecasts could not be used in the efficiency test of Nordhaus (1987), since there was only one revision for each target.

In 2002, the structure of the forecasts changed. The target dates are still the last business day of June and the last business day of December, but the number of forecast horizons increased from two to at least ten. Starting one year before each target date, the WSJ publishes forecasts for the economic variables at the target date and updates the forecasts monthly.15 With this new structure, the efficiency test of Nordhaus (1987) can be conducted.


2.3.1.1 Data selection for the WSJ forecasts

To avoid missing data and to simplify the variance-covariance matrix, complete data sets are selected. The selected forecasters satisfy the following conditions: 1. The forecaster continuously provides forecasts for more than 10 target interest rates; in this continuous sequence, the target time starts at t_s and ends at t_e. 2. For each target interest rate at target time t (where t ∈ [t_s, t_e]), the forecaster submits forecasts for at least 10 horizons.17 The forecast sample of each selected forecaster therefore contains at least 100 forecasts (the number = (t_e − t_s + 1) × 10) for the federal funds rate (or the ten-year note yield).
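The selection rule can be sketched in pandas. The column names, toy data, and thresholds below are hypothetical (the paper requires 10 consecutive targets and 10 horizons; the continuity requirement is omitted from this sketch):

```python
import pandas as pd

# Hypothetical long-format survey data: one row per (forecaster, target, horizon)
df = pd.DataFrame({
    "forecaster": ["A"] * 4 + ["B"] * 2,
    "target":     ["2004-06", "2004-06", "2004-12", "2004-12", "2004-06", "2004-12"],
    "horizon":    [10, 9, 10, 9, 10, 10],
    "forecast":   [1.0, 1.1, 1.2, 1.3, 1.0, 1.1],
})

MIN_TARGETS, MIN_HORIZONS = 2, 2   # the paper uses 10 and 10

# Keep (forecaster, target) cells with at least MIN_HORIZONS submitted horizons...
counts = df.groupby(["forecaster", "target"])["horizon"].nunique()
complete_cells = counts[counts >= MIN_HORIZONS].reset_index()[["forecaster", "target"]]
# ...then keep forecasters covering at least MIN_TARGETS such targets
n_targets = complete_cells.groupby("forecaster")["target"].nunique()
selected = n_targets[n_targets >= MIN_TARGETS].index.tolist()
print(selected)   # only forecaster "A" has a complete panel here
```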

• Funds rate forecast

The federal funds rate forecasts run from December 2002 to December 2011. Thirty-two forecasters are selected.18

• Ten-year note yield forecast

The ten-year note yield forecasts run from December 2003 to December 2011. Twenty-two forecasters are selected, all of whom also appear in the funds rate sample, so there are twenty-two overlapping forecasters between the sample of funds rate forecasts and the sample of note yield forecasts.

2.3.1.2 Data structure of the WSJ forecasts

Figure 2.1 displays a time line of a forecaster’s funds rate forecasts. Each month, the forecaster makes two forecasts, one for the last business day of June, and another for the last business day of December. For each survey, around fifty-five forecasters submit their forecasts. The cross-section structure is displayed in figure 2.2.


2.3.1.3 The financial crisis and non-crisis sections of the forecasts from the WSJ survey

Usually, August 9, 2007 (when BNP Paribas suspended funds) is marked as the start of the 2007-2008 financial crisis (Federal Reserve Bank of St. Louis, 2009). But the time line of financial crisis events in Federal Reserve Bank of St. Louis (2009) shows that some events happened before August 2007, and the monthly shocks estimated from the funds rate forecasts of the WSJ survey also seem to reflect events around August 2007 (see Figure 2.5 and Figure 2.6). The Chow test provided by the R package “strucchange” (Zeileis, Kleiber, Krämer, and Hornik, 2003 and Zeileis, Leisch, Hornik, and Kleiber, 2002) confirms significant changes around May 2007 to August 2007. In this study, I choose June 2007 as the beginning of the financial crisis.

The Business Cycle Dating Committee of the National Bureau of Economic Research declared that the recession ended in June 2009. The estimated monthly shocks in Figure 2.5 drop around June 2009, so I use June 2009 as the end of the financial crisis in my study.

Some funds rate forecasts are deleted from the non-crisis section because their target times fall in the financial crisis period, so their forecast errors are impacted by the crisis. These are the forecasts from horizon 10 to horizon 6 for the target December 2007.19 For a similar reason, the forecasts from horizon 10 to horizon 6 for the target December 2009 are deleted from the financial crisis section, since their target time is not in the financial crisis period.

2.3.2 Forecast data from the Survey of Professional Forecasters (SPF)

Section 2.6 analyzes the rigidity of information in the expectation formation process. Since the forecast sample from the WSJ is small, a larger forecast sample from the SPF is used. The SPF is published by the Federal Reserve Bank of Philadelphia and provides forecasts for macroeconomic variables.


2.4 The rationality analysis for individual forecasters from the WSJ

This section evaluates the rationality of the forecasts of the federal funds rate and the ten-year U.S. note yield. Subsection 2.4.1 sets up the models used to test unbiasedness and weak efficiency for each forecaster. The next two subsections report the results of the unbiasedness tests and the weak efficiency tests.

2.4.1 Models

2.4.1.1 Assumptions

I adopt the forecast error decomposition model of Davies and Lahiri (1995),

A_t − F_ith = φ_i + λ_th + ε_ith,  (2.10)

where A_t is the actual interest rate, φ_i is the bias of individual i, ε_ith is the idiosyncratic error, and λ_th is the accumulation of the common shocks in the future (from horizon h to 1),

λ_th = Σ_{j=h}^{1} u_tj,

where u_tj is the common shock that occurred between horizon j and horizon j−1.

Following Davies and Lahiri (1995), I assume that the expected values of ε_ith equal 0 across i, t, and h respectively, that the expected values of the common monthly shocks u_th across t and h respectively are 0, and that

cov(ε_ith, u_th) = 0,  i ∈ [1, N], t ∈ [t_s, t_e], h ∈ [1, H]


There are ten horizons for each target interest rate, i.e., H = 10. Each forecaster gives two interest rate forecasts each time, one for June 30 and another for December 31. For example (see Table 2.1 and Figure 2.1), in June 2004 (the blue cells), forecaster i provided two forecasts, one for the interest rate on December 31, 2004 (F_i,Dec2004,5) and another for the interest rate on June 30, 2005 (F_i,Jun2005,10). u_Dec2004,5 and u_Jun2005,10 are the same shock, occurring between May 2004 and June 2004, and they produce the covariance in the second row of equation (2.11).

At the individual level, the subscript i can be dropped from equation (2.10),

A_t − F_th = φ + λ_th + ε_th  (2.12)

The structure of the individual-level data is displayed in Figures 2.1 and 2.2.

2.4.1.2 The unbiasedness test

Under the framework of Davies and Lahiri (1995), the forecast error is

e_th = φ + (λ_th + ε_th),  (2.13)

where e_th = A_t − F_th is the forecast error at horizon h for the interest rate at target time t, t ∈ [t_s, t_e], h ∈ [1, 10].

To test the unbiasedness of an individual’s forecasts, the vector e is regressed on a vector of ones (both of length (t_e − t_s + 1) × H), where

e = [e_ts,H, e_ts,H−1, . . . , e_ts,1, e_ts+1,H, e_ts+1,H−1, . . . , e_ts+1,1, . . . , e_te,H, . . . , e_te,1]′.

The intercept is the estimated bias φ̂; the null hypothesis is H0: φ = 0.


Two potential problems may exist in the forecasts. The first is heteroskedasticity across target times.20 The second is structural change during the financial crisis.21 If the heteroskedasticity or the structural change is significant, I replace the constant variances σ²_u and σ²_ε with the target-time-specific variances σ²_ut and σ²_εt in the variance-covariance matrix estimation.22 If the structural change is significant, I also analyze the forecasts separately in two subsets: the financial crisis subset and the non-crisis subset.

The individual forecasters’ funds rate forecast errors are plotted in Figures 2.9 to 2.14, and their ten-year note yield forecast errors in Figures 2.15 to 2.18. The plots show that the financial crisis seems to have had less impact on the note yield forecasts than on the funds rate forecasts.

Sections 2.4.2.1 and 2.4.2.2 present the estimated results for the funds rate forecasts and the note yield forecasts.

2.4.1.3 The efficiency test

Efficiency is a testable property of rational expectations. Nordhaus (1987) defines weak efficiency as the requirement that forecast revisions not be predictable from past revisions.

From equation (2.12), the revision of the forecast for a given end point is

F_th − F_t,h+1 = u_t,h+1 + ε_t,h+1 − ε_t,h

or

ν_th = u_t,h+1 + ε_t,h+1 − ε_th  (2.14)

where ν_th = F_th − F_t,h+1 is the forecast revision from horizon h+1 to h for target t. To test the efficiency of the forecasts, Nordhaus (1987) uses ν_th = b ν_t,h+1 + ε_t,h and tests H0: b = 0. Under my assumptions,

20 Interest rates at some target times are easier to predict than at others. The Breusch-Pagan test is used to detect the heteroskedasticity.


ν_th is correlated with ν_t,h+1:

cov(ν_th, ν_t,h+1) = cov(u_t,h+1 + ε_t,h+1 − ε_t,h, u_t,h+2 + ε_t,h+2 − ε_t,h+1)  (2.15)
                   = cov(ε_t,h+1, −ε_t,h+1)
                   = −σ²_εt
                   ≠ 0  (unless σ²_εt = 0)

For a rational forecaster, the revision ν_th is not correlated with any information before horizon h+2.23
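A quick Monte Carlo check of the implication of equation (2.15), using made-up values for σ_u and σ_ε:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_u, sigma_eps = 0.3, 0.2
n = 200_000                               # many simulated targets

# nu_th = u_{t,h+1} + eps_{t,h+1} - eps_{t,h}   (equation 2.14)
u = rng.normal(0, sigma_u, (n, 3))        # columns: u_{t,h+1}, u_{t,h+2}, spare
eps = rng.normal(0, sigma_eps, (n, 3))    # columns: eps at h, h+1, h+2
nu_h  = u[:, 1] + eps[:, 1] - eps[:, 0]   # revision at horizon h
nu_h1 = u[:, 2] + eps[:, 2] - eps[:, 1]   # revision at horizon h+1

cov = np.cov(nu_h, nu_h1)[0, 1]
print(cov, -sigma_eps**2)                 # both close to -0.04
```

The shared ε term with opposite signs produces the negative covariance even for a fully rational forecaster.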

Isiklar (2005) uses the second lag of revisions in his efficiency tests,24 and Clements, Joutz, and Stekler (2007) exploit the variable ν_t,h+2 in this fixed-target efficiency test model.25

I use the following two models to test efficiency,

ν_th = α ν_t,h+2 + η_th  (2.16)

ν_th = α Δr_t−(h+1) + η_th  (2.17)

where ν_t,h+2 is the second lag of the revision, r represents the actual interest rate, Δr_t−(h+1) = r_t−(h+1) − r_t−(h+2) is the change in the actual interest rate from time t−h−2 to t−h−1, and η_th is the error term.26

For both models, the null hypothesis is H0: α = 0. To run the regressions, I pool ν_th by the order of target and horizon,

ν = [ν_ts,H−1, ν_ts,H−2, . . . , ν_ts,1, ν_ts+1,H−1, ν_ts+1,H−2, . . . , ν_ts+1,1, . . . , ν_te,H−1, ν_te,H−2, . . . , ν_te,1]′,

23 Boero, Smith, and Wallis (2008) argue that forecast revisions have a moving average structure. Their model of the revision is ν_th = θ_h+1 u_t,h+1 + ε_t,h+1 − ε_th. The first-order coefficient is negative, and its magnitude depends on the variances of θ_h+1 u_t,h+1 and ε_th.

24 In the efficiency test of Nordhaus (1987), 50 of 51 forecast series fail the tests. Davies and Lahiri (1995) use a similar model; the only difference is that they pool forecast revisions across three dimensions: forecaster, target, and horizon. Their sample forecasts fail the efficiency test. Both used ν_t,h+1 in their efficiency test model.

25 Previous efficiency tests use the regression


and the other variables are pooled in the same way.

Based on my assumptions, for a rational forecaster, cov(ν_th, ν_t,h+1) = −σ²_εt; if σ²_εt ≠ 0, this covariance is not zero. I use another model to test whether ν_th is correlated with the revision ν_t,h+1 but not with ν_t,h+2, Δr_t−(h+1), and Δr_t−(h+2):

ν_th = β0 + β1 ν_t,h+1 + β2 ν_t,h+2 + β3 Δr_t−(h+1) + β4 Δr_t−(h+2) + ω_th,  (2.18)

where ω_th is the random error term. I expect that β1 is negative and that β2, β3, and β4 are zero.
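Model (2.18) can be sketched on simulated revisions that follow equation (2.14). Under the model, the coefficient on the first lag should come out negative, while the coefficients on the actual-rate changes (pure noise in this simulation) should be near zero. All numbers are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
# Simulated chain of revisions: adjacent revisions share an eps term with
# opposite signs, which induces the negative first-order correlation
n, sigma_u, sigma_eps = 500, 0.3, 0.2
u = rng.normal(0, sigma_u, n + 3)
eps = rng.normal(0, sigma_eps, n + 3)
nu = u[1:] + eps[1:] - eps[:-1]            # nu_h = u_{h+1} + eps_{h+1} - eps_h
dr = rng.normal(0, 0.25, n + 3)            # past actual-rate changes (noise here)

df = pd.DataFrame({
    "nu":  nu[2:n + 2],
    "nu1": nu[1:n + 1],    # first lag of the revision
    "nu2": nu[0:n],        # second lag
    "dr1": dr[2:n + 2],    # Delta r_{t-(h+1)}
    "dr2": dr[1:n + 1],    # Delta r_{t-(h+2)}
})
res = smf.ols("nu ~ nu1 + nu2 + dr1 + dr2", data=df).fit()
print(res.params["nu1"], res.params["dr1"])   # negative, and near zero
```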

Sections 2.4.3.1 and 2.4.3.2 report the efficiency test results.

2.4.2 The unbiasedness test results for individual forecasters

2.4.2.1 The unbiasedness test for the funds rate forecasts

Figure 2.3 displays the estimated individual biases φ̂_i of 94 forecasters.27 Most forecasters’ biases are negative, which means that most forecasters’ forecasts are greater than the actual values. Using the method of Davies and Lahiri (1995) to test the heterogeneity of individual forecasters, the estimated variances of the forecasters’ idiosyncratic errors σ̂²_εi are regressed on a constant and dummy variables generated by the ids of the 94 forecasters. Under the null hypothesis σ̂²_εi = σ̂²_ε for all i, R² × NTH ∼ χ²_N−1. The null hypothesis is rejected since the p value ≈ 0, so individual forecasters display significant (at the 0.05 level) heterogeneity in the funds rate forecasts.

Table 2.2 reports the results of the unbiasedness tests using all forecasts in the sample period, together with the structural change tests.28,29 Four forecasters have significant negative biases (forecasts significantly higher than the actual funds rates). Since the structural change tests are significant for all forecasters,

27 All forecasts of the 94 forecasters in the sample are used to estimate the individual biases φ̂_i, the monthly shocks û_th, the idiosyncratic errors ε̂_ith, the variance of the idiosyncratic errors σ̂²_εi, and the variance of the monthly shocks σ̂²_ut.

28 The structural change test checks for structural change between the two subsets: the financial crisis period subset and the non-crisis period subset. The test is performed by adding a dummy variable to the regression (the dummy equals 1 when the forecast time t−h is in the period from June 2007 to June 2009, and 0 otherwise). Under the null hypothesis the coefficient of the dummy variable equals zero.

29 Since multiple tests are taken simultaneously, the p values of the estimated coefficients are adjusted by the Holm-Bonferroni method (see the notes to Table 2.2).


the funds rate forecasts are analyzed separately in the financial crisis subset and the non-crisis subset. Table 2.3 reports the estimated results for the non-crisis subset and the crisis subset: column 3 gives the estimated biases in the non-crisis time period, and column 5 gives the estimated biases in the crisis time period.

Almost all forecasters have no significant biases during the non-crisis time period, while twenty-four (out of thirty-two) forecasters have significant negative biases in the financial crisis time period. During the financial crisis, the expectations of 75% of the professional forecasters missed the declines in the funds rate, which suggests that the reduction of the funds rate by the Federal Reserve was unanticipated by most professional forecasters.

The root mean square forecast error (RMSFE) of the funds rate forecasts in the non-crisis time period is 0.4537 (in percent); the RMSFE in the financial crisis time period is 1.7647, about four times as large.
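For reference, the RMSFE statistic is simply the square root of the mean squared forecast error; the numbers below are illustrative, not from the sample:

```python
import numpy as np

def rmsfe(actual, forecast):
    """Root mean square forecast error, in the same units as the rates."""
    err = np.asarray(actual, dtype=float) - np.asarray(forecast, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))

# Toy example: forecasts that miss a sharp sequence of rate cuts
actual   = [5.25, 4.50, 2.00, 0.25]
forecast = [5.20, 4.75, 3.50, 2.00]
print(rmsfe(actual, forecast))
```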

2.4.2.2 The unbiasedness test for the ten-year note yield forecasts

Figure 2.4 presents the estimated individual biases in the note yield forecasts, φ̂_i, of 94 forecasters. Almost all forecasters’ forecasts are higher than the actual note yields. Using the method of Davies and Lahiri (1995) to test the heterogeneity of individual forecasters, the null hypothesis σ̂²_εi = σ̂²_ε for all i, with R² × NTH ∼ χ²_N−1, is rejected since the p value ≈ 0.0003, so individual forecasters display significant (at the 0.05 level) heterogeneity in the note yield forecasts.

The estimated results of the unbiasedness tests and the structural change tests for the note yield forecasts are reported in Table 2.4. Eighteen out of twenty-two forecasters have significant biases.

Seventeen forecasters do not have significant structural changes between the two subsets. Among them, three forecasters do not have significant biases, while the forecasts of the other fourteen are significantly higher than the actual note yields.

The RMSFE of the note yield forecasts in the non-crisis time period is 0.7699, and the RMSFE in the financial crisis time period is 1.1119. Compared to the non-crisis time period, the RMSFE of the note yield forecasts increases by about 40% in the financial crisis, much less than for the funds rate.

2.4.2.3 The twenty-two common forecasters’ unbiasedness test results

There are twenty-two common forecasters in the samples of the funds rate forecasts and the note yield forecasts. To observe the differences between the two interest rate forecasts, the unbiasedness test results of the twenty-two overlapping forecasters are presented in Table 2.6.

Seventeen forecasters have significant structural changes in the funds rate forecasts but no significant structural changes in the note yield forecasts. Among them, fourteen forecasters have no significant biases in the funds rate forecasts during the non-crisis time period but have significant negative biases (forecasts higher than the actual funds rate) during the financial crisis; in forecasting the note yield, three of these fourteen have no significant biases while eleven have significant negative biases. One forecaster has no significant biases in forecasting the funds rate (in both the crisis and non-crisis subsets) or the note yield. One forecaster has no significant biases in forecasting the funds rate (in both subsets) but has significant biases in forecasting the note yield. One forecaster has significant biases in forecasting both the funds rate (in both subsets) and the note yield.

Five forecasters have significant structural changes in the unbiasedness test for both the funds rate forecasts and the note yield forecasts. During the non-crisis time period, all of the five have no significant biases in the funds rate forecasts, while in forecasting the note yield two of them have no significant biases and three have significant negative biases. During the financial crisis time period, the five forecasters have significant negative biases in both the funds rate forecasts and the note yield forecasts.

2.4.2.4 The impact of the financial crisis on the funds rate forecasts and the note yield forecasts

2. The RMSFE of the funds rate forecasts in the financial crisis is about four times as large as the RMSFE in the non-crisis time period, while the RMSFE of the note yield forecasts in the financial crisis is about 40% higher than in the non-crisis time period. The funds rate forecast errors seem more volatile than the note yield forecast errors;

3. In addition, during the non-crisis time period the funds rate forecasts are more accurate than the note yield forecasts, since the RMSFE of the funds rate forecasts (0.4537) is less than that of the note yield forecasts (0.7699). During the financial crisis time period, the funds rate forecasts are less accurate than the note yield forecasts, since the RMSFE of the funds rate forecasts (1.7647) is larger than that of the note yield forecasts (1.1119).

The above results may be caused by the different characteristics of the funds rate and the note yield. The federal funds rate is determined mainly by the FOMC. In the non-crisis time period, the FOMC adjusts the funds target rate in relatively small steps, and the forecasters forecast the funds rate well. In the financial crisis, however, the FOMC reduced the funds target rate in relatively large steps (see Figure 2.19), which was unanticipated by almost all forecasters. The note yield approximately equals the average of the expected one-year rates over the next ten years plus a liquidity premium, and it is mainly affected by market supply and demand.

2.4.3 The efficiency test results for individual forecasters

Three regression models are used to test the efficiency of the forecasts (see Section 2.4.1.3). The first test is

ν_th = α ν_t,h+2 + η_th  (2.19)

The second test is

ν_th = α Δr_t−(h+1) + η_th  (2.20)

The third test is equation (2.18).


2.4.3.1 The results for the funds rate forecasts

The results of the first test (equation 2.19) for the funds rate forecasts are reported in Table 2.7. Four out of thirty-two forecasters are not efficient, since their tests have significant coefficients on the second lag of forecast revisions. The other forecasters are not found to be significantly inefficient in the first test.

In Table 2.8, the second test (equation 2.20) finds that ten forecasters are significantly inefficient, since their coefficients on the last-period changes in the actual funds rate are significant. The other twenty-two forecasters do not show inefficiency in the second test.

The estimated results of the third test (equation 2.18) are reported in Table 2.9. No significant coefficients are found on the first and second lags of forecast revisions (the β̂1 and β̂2 columns). Eight forecasters are significantly inefficient in using the last-period change in the actual funds rate, Δr_t−(h+1) (the β̂3 column). Another five forecasters are significantly inefficient in using Δr_t−(h+2) (the β̂4 column). In total, thirteen out of thirty-two forecasters are found to be inefficient; more than half are not significantly inefficient in the third test.

2.4.3.2 The results for the note yield forecasts

In Table 2.10, no forecaster shows significant inefficiency in the first efficiency test (equation 2.16), since the coefficients on the second lag of forecast revisions are all insignificantly different from zero.

In Table 2.11, the second test (equation 2.17) finds that two forecasters are significantly inefficient, since their coefficients on Δr_t−(h+1) are significant; the other forecasters are not significantly inefficient.

Table 2.12 displays the results of the third test (equation 2.18). Three forecasters have significantly negative coefficients on the first lag of forecast revisions (ν_t,h+1), which is consistent with the prediction of model (2.18) for rational forecasters. Six forecasters are significantly inefficient with respect to Δr_t−(h+1). No forecaster is found inefficient with respect to ν_t,h+2 or Δr_t−(h+2).

2.4.3.3 The twenty-two common forecasters’ efficiency test results

For the twenty-two common forecasters, the significant inefficiencies come from using the information about the past changes of the actual funds rate or the actual note yield.

2.5 Rationality analysis for the average forecasts

Since Carroll (2003) presents evidence that households use the average forecasts of professionals when making their own forecasts, an analysis of the rationality of the average forecast is warranted. Section 2.5.1 presents the unbiasedness test, and Section 2.5.2 reports the results.

2.5.1 The unbiasedness test

From model (2.10), under rationality,

ē_th = φ̄ + λ_th,

where ē_th is the average forecast error for the interest rate at time t across all forecasters at horizon h, and φ̄ is the average systematic bias. I regress the vector ē on a vector of ones, where

ē = [ē_ts,H, . . . , ē_ts,1, ē_ts+1,H, . . . , ē_ts+1,1, . . . , ē_te,H, . . . , ē_te,1]′.

Based on my assumptions and the above model, the errors are likely to be serially correlated. The covariance matrix is corrected using the method in Section 2.4 (see Appendix D).

2.5.2 The unbiasedness test results

The average funds rate forecasts show no significant bias during the non-crisis time period but are significantly higher than the actual rate during the financial crisis. The average note yield forecasts are significantly higher than the actual yields in both periods.

2.6 Information rigidity

From both the sticky-information model and the imperfect-information model, Coibion and Gorodnichenko (2010) obtain the same relationship between the ex post mean forecast error and the ex ante mean forecast revision. They propose a simple method to test for information rigidity, and the degree of rigidity can be estimated from their model.30

Coibion and Gorodnichenko (2010) use the following regression to test for information rigidity:

e_th = α + β ν_th + ε_th,  (2.21)

where the forecast error e_th = A_t − F_th, the forecast revision ν_th = F_th − F_t,h+1, and ε_th is the error term. Since β = λ/(1−λ) (for the sticky-information model) or β = (1−K)/K (for the imperfect-information model), the degree of information rigidity can be estimated by λ̂ = β̂/(1+β̂) or 1−K̂ = β̂/(1+β̂). Full information rational expectations (FIRE) imply β = 0 and α = 0; expectations with information rigidities imply β > 0.

If the estimated β is significantly positive, then further analysis is needed to identify the cause of the rigidity: sticky information, imperfect information, or other reasons.

The sticky-information model implies that the degree of rigidity does not change across forecast horizons and macroeconomic variables (Coibion and Gorodnichenko, 2010). The following seemingly unrelated regressions (SUR) are used to test whether β > 0 is caused by sticky information:

e_t,horizon9 = α9 + β9 ν_t,horizon9 + ε9
e_t,horizon8 = α8 + β8 ν_t,horizon8 + ε8  (2.22)
. . .


The SUR system tests for equality of the degree of rigidity across horizons and variables. The testable implications of the imperfect-information model are that the degree of rigidity is negatively related to the persistence of the target variable and positively related to the amount of noise in the signal (Coibion and Gorodnichenko, 2010). The forecast sample of Coibion and Gorodnichenko (2010) allows them to estimate the persistence of different variables in different countries and the amount of noise in the signal. The forecast sample in this paper does not have enough variables to test the imperfect-information model.
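The cross-horizon comparison of rigidity can be sketched by estimating the horizon equations separately and testing slope equality. This is a simplification of the SUR system (2.22), which would also model cross-equation error correlation; the two horizons and their β values below are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
# Two horizons with genuinely different degrees of rigidity (different beta),
# a situation the sticky-information model rules out
frames = []
for horizon, beta in [(9, 0.2), (8, 1.0)]:
    rev = rng.normal(0, 0.5, 300)                  # mean forecast revisions
    err = beta * rev + rng.normal(0, 0.3, 300)     # mean forecast errors
    frames.append(pd.DataFrame({"err": err, "rev": rev, "h": horizon}))
df = pd.concat(frames, ignore_index=True)

# Estimate each horizon's equation separately, then compare the slopes;
# H0: beta_8 = beta_9 (equal rigidity across horizons)
res8 = smf.ols("err ~ rev", data=df[df.h == 8]).fit()
res9 = smf.ols("err ~ rev", data=df[df.h == 9]).fit()
b8, b9 = res8.params["rev"], res9.params["rev"]
se_diff = np.sqrt(res8.bse["rev"] ** 2 + res9.bse["rev"] ** 2)
z = (b8 - b9) / se_diff
print(b8, b9, z)   # slopes differ sharply, so H0 is rejected
```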

The approach of Coibion and Gorodnichenko (2010) is adopted to study information rigidity for the forecasts from the WSJ in Section 2.6.1 and for the forecasts from the SPF in Section 2.6.2.

2.6.1 Test of rigidity of information for the forecasts from the WSJ survey

The estimated results of the test from equation (2.21) for the forecasts from the WSJ survey are displayed in columns 2 and 3 of Table 2.15.

The first panel of Table 2.15 reports estimated results using all forecasts. The second panel reports the results using the forecasts from December 2002 to June 2007 (in order to avoid the financial crisis time period). In the top panel, both the funds rate forecasts (column 2) and the note yield forecasts (column 3) have significant intercepts, which means that the rationality of both sets of forecasts is rejected. No significant rigidity is found in the note yield forecasts, since the coefficient on the forecast revision is not significant; the funds rate forecasts, however, have a significant positive coefficient on the forecast revisions.

The forecast sample covers the financial crisis time period, which may affect the test results. The second panel of Table 2.15 reports the test results for the funds rate forecasts and the note yield forecasts before the financial crisis (December 2002 to June 2007). Columns 2 and 3 show that rationality is not rejected for the funds rate forecasts, while rationality is rejected for the note yield forecasts. There is no significant rigidity in either the funds rate forecasts or the note yield forecasts.


The estimated coefficients are broadly as predicted by the information rigidity models of Coibion and Gorodnichenko (2010). The hypothesis H0: β_H1 = β_H2 = · · · = β_H9 (equal coefficients on the forecast revisions across the horizons) is rejected, which implies that the rigidity found in the funds rate forecasts cannot be explained by sticky information. Imperfect information in the financial crisis time period is a possible cause of the rigidity. The degree of rigidity in the funds rate forecasts is 0.4955; in the context of the imperfect-information model, forecasters put slightly more than 50% weight on new information when they make their funds rate forecasts.

Using the methods of Coibion and Gorodnichenko (2010), I find information rigidity in the funds rate forecasts from the WSJ survey. But the WSJ forecasts have a small sample size and cover the financial crisis time period, which complicates the analysis. Therefore I repeat the same tests using a larger sample of forecasts from the SPF.

2.6.2 Test of rigidity of information for the forecasts from the SPF

The T-bill rate forecasts and the T-bond rate forecasts from the SPF are chosen to test for information rigidity. The estimated results of test (2.21) are presented in columns 4 and 5 of Table 2.15.

The tests for all forecasts in the sample are reported in the top panel of Table 2.15, and the tests for forecasts before June 2007 in the bottom panel. The tests produce similar results: both the T-bill rate forecasts and the T-bond rate forecasts are not rational, since they have significant intercepts. No significant information rigidity is found in the T-bond rate forecasts. The T-bill forecasts have positive coefficients on the forecast revisions.


2.7 Conclusion

I evaluate the unbiasedness and efficiency of the funds rate forecasts and the note yield forecasts from 2002 to 2011, and compare the forecasters’ performance in the non-crisis time period with that in the financial crisis time period. The information rigidity in the funds rate forecasts and the note yield forecasts from the WSJ is examined, and the same rigidity test is applied to a larger sample of T-bill rate and T-bond rate forecasts from the SPF.

Almost all forecasters show no significant bias in the funds rate forecasts during the non-crisis time period, but most of them make funds rate forecasts significantly higher than the actual funds rate in the financial crisis time period. During the financial crisis, most professional forecasters missed the declines in the actual funds rate, which suggests that the reduction of the funds target rate by the Federal Reserve was unanticipated.

Compared to the funds rate forecasts, the financial crisis had less impact on the note yield forecasts: only 23% of forecasters show significant structural changes in the unbiasedness test. Of those forecasters with no structural change, most make note yield forecasts higher than the actual note yield. The funds rate forecasts are more accurate than the note yield forecasts in the non-crisis time period, but less accurate in the financial crisis time period.

The average funds rate forecasts are significantly higher than the actual rate during the financial crisis, but show no significant bias during the non-crisis time period. The average note yield forecasts are significantly higher than the actual yields in both the non-crisis and the financial crisis time periods.

The efficiency tests find that inefficiencies come from using the information on past changes in the actual funds rate or the actual note yield: 40.63% of forecasters are not efficient in using Δr_t−(h+1) or Δr_t−(h+2) to forecast the funds rate, and 27.27% of forecasters are not efficient in using Δr_t−(h+1) to forecast the note yield.


Table 2.1: An example for explaining the structure of an individual’s forecasts

Target     | h=10             | h=9             | ... | h=6            | h=5             | h=4            | ... | h=1
Dec 2004   | F_i,Dec2004,10   | F_i,Dec2004,9   | ... | F_i,Dec2004,6  | F_i,Dec2004,5†  | F_i,Dec2004,4  | ... | F_i,Dec2004,1
Jun 2005   | F_i,Jun2005,10†  | F_i,Jun2005,9   | ... | F_i,Jun2005,6  | F_i,Jun2005,5   | F_i,Jun2005,4  | ... | F_i,Jun2005,1

† the two forecasts submitted in June 2004 (the blue cells in the original).

Table 2.2: The unbiasedness test for the funds rate forecasts using all individuals' forecasts in the sample

Forecaster                              Company                          Estimated intercept    N     Struct. test
Nariman Behravesh                       Global Insight                   -0.3098 (0.1484)       150   0.00012 ***
Richard Berner/David Greenlaw           Morgan Stanley                   -0.2060 (0.1805)       100   0.00090 ***
Joseph Carson                           Alliance Bernstein               -0.5819 (0.1830) .     100   0.00001 ***
Mike Cosgrove                           Econoclast                       -0.2282 (0.1428)       170   0.00001 ***
J. Dewey Daane                          Vanderbilt Univ                  -0.3590 (0.1412)       170   0.00001 ***
Richard DeKaser                         National City Corp.              -0.4676 (0.1555) .     140   0.00001 ***
Maria Fiorini Ramirez/Joshua Shapiro    MFR Inc.                         -0.2415 (0.1801)       100   0.00090 ***
Gail Fosler                             The Conference Board             -0.4410 (0.1867)       100   0.00040 ***
Ethan S. Harris                         Lehman Brothers                  -0.1635 (0.1813)       100   0.00073 ***
Maury Harris                            UBS                              -0.0760 (0.1787)       100   0.00104 **
Tracy Herrick                           The Private Bank                 -0.4684 (0.1450) *     160   0.00012 ***
Peter Hooper/Joseph A. LaVorgna         Deutsche Bank Securities Inc.    -0.3859 (0.1725)       110   0.00005 ***
William B. Hummer                       Wayne Hummer Investments LLC     -0.7114 (0.1756) **    110   0.00004 ***
Kurt Karl/Arun Raha                     Swiss Re                         -0.3646 (0.1534)       140   0.00008 ***
Bruce Kasman                            JP Morgan Chase Co.              -0.6548 (0.1841) *     100   0.00040 ***
Paul Kasriel                            The Northern Trust               -0.2006 (0.1545)       140   0.00005 ***
Daniel Laufenberg                       Ameriprise Financial             -0.5745 (0.1855) .     110   0.00000 ***
Edward Leamer                           UCLA Anderson Forecast           -0.2253 (0.1496)       150   0.00001 ***
Mickey D. Levy                          Bank of America                  -0.3260 (0.1818)       100   0.00003 ***
John Lonski                             Moody's Investors Service        -0.3729 (0.1397)       170   0.00063 ***
Jim Meil/Tianlun Jian/                  Eaton Corp.                      -0.1415 (0.1798)       100   0.00078 ***
James Meil/Richard Kaglic
Nicholas S. Perna                       Perna Associates                 -0.5092 (0.1723) .     110   0.00175 **
Ian Shepherdson                         High Frequency Economics         -0.2686 (0.1539)       140   0.00173 **
John Silvia                             Wachovia Corp.                   -0.3176 (0.1445)       160   0.00002 ***
Allen Sinai                             Decision Economics Inc.          -0.3886 (0.1409)       170   0.00010 ***
James F. Smith                          Western Carolina Univ and        -0.1823 (0.1885)       120   0.00000 ***
                                        Parsec Financial Management
Sung Won Sohn                           California State Univ            -0.3222 (0.1541)       140   0.00003 ***
Neal Soss                               Credit Suisse                    -0.3173 (0.1406)       170   0.00009 ***
Susan M. Sterne                         Economic Analysis                -0.4654 (0.1593) .     140   0.00090 ***
Diane Swonk                             Mesirow Financial                -0.4863 (0.1686)       120   0.00001 ***
Brian S. Wesbury/Robert Stein           First Trust Advisors L.P.        -0.6353 (0.1436) ***   170   0.00003 ***
David Wyss                              Standard and Poor's              -0.2981 (0.1444)       160   0.00004 ***

Significance codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1. The p-values are corrected by the Holm-Bonferroni method. Numbers within parentheses are standard errors (corrected by the covariance matrix using σ_u², σ_ε²). N: sample size.
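The Holm-Bonferroni correction mentioned in the table footnote is straightforward to compute. A minimal sketch of the standard step-down algorithm (not code from the dissertation) follows:

```python
def holm_bonferroni(pvals):
    """Holm step-down adjusted p-values: visit the raw p-values in
    ascending order, multiply the k-th smallest (0-indexed) by (m - k),
    enforce monotonicity along the way, and cap each value at 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running_max
    return adjusted

# With three tests, the smallest raw p-value is tripled, the next doubled,
# and the largest left as-is (then pushed up to preserve monotonicity):
print(holm_bonferroni([0.01, 0.04, 0.03]))  # approximately [0.03, 0.06, 0.06]
```

Because the correction only rescales p-values within each family of tests, the ordering of the forecasters by significance is unchanged; only the cutoffs become more conservative than unadjusted per-test comparisons.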


Table 2.3: The unbiasedness test for the funds rate forecasts (in the non-crisis subset and the crisis subset)

                                                                       Non-crisis                    Crisis
Forecaster                              Company                        Estimated intercept    N      Estimated intercept    N
Nariman Behravesh                       Global Insight                 0.0576 (0.0743)        105    -1.2263 (0.3840) *     35
Richard Berner/David Greenlaw           Morgan Stanley                 0.1075 (0.0907)        80     -1.2333 (0.5062)       15
Joseph Carson                           Alliance Bernstein             0.0716 (0.1026)        55     -1.5263 (0.3853) *     35
Mike Cosgrove                           Econoclast                     0.1588 (0.0788)        125    -1.3543 (0.3908) *     35
J. Dewey Daane                          Vanderbilt Univ                0.0149 (0.0731)        125    -1.4763 (0.3911) *     35
Richard DeKaser                         National City Corp             0.0002 (0.0799)        95     -1.5180 (0.3938) *     35
Maria Fiorini Ramirez/Joshua Shapiro    MFR Inc.                       0.0694 (0.0905)        80     -1.2833 (0.5105)       15
Gail Fosler                             The Conference Board           -0.0488 (0.0956)       80     -1.6333 (0.5466) .     15
Ethan S. Harris                         Lehman Brothers                0.1669 (0.0955)        80     -1.2500 (0.5207)       15
Maury Harris                            UBS                            0.1856 (0.0918)        80     -1.0667 (0.4962)       15
Tracy Herrick                           The Private Bank               -0.1011 (0.0776)       115    -1.3900 (0.3828) *     35
Peter Hooper/Joseph A. LaVorgna         Deutsche Bank Securities Inc.  0.0419 (0.0908)        80     -1.5140 (0.4472) *     25
William B. Hummer                       Wayne Hummer Investments LLC   -0.1500 (0.0969)       65     -1.5943 (0.3976) **    35
Kurt Karl/Arun Raha                     Swiss Re                       0.0431 (0.077
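The intercepts reported in Tables 2.2 and 2.3 come from regressing forecast errors on a constant, so each estimate is simply the mean forecast error in the relevant subsample. A minimal sketch follows, using classical iid standard errors for illustration; the dissertation's reported standard errors are instead corrected with a covariance matrix using σ_u² and σ_ε², and the error values below are hypothetical.

```python
import math

def unbiasedness_stats(errors):
    """Intercept-only OLS of forecast errors on a constant: the estimated
    intercept equals the mean error, and its classical standard error is
    s / sqrt(n). A negative intercept means forecasts ran above outcomes,
    as in the crisis columns of Table 2.3."""
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / (n - 1)
    return mean, math.sqrt(var / n)

# Hypothetical forecast errors for one forecaster, split as in Table 2.3:
noncrisis = [0.1, -0.1, 0.2, 0.0]   # small errors of mixed sign
crisis = [-1.0, -1.5, -1.2]         # persistently over-predicted rates

print(unbiasedness_stats(noncrisis))  # intercept near zero
print(unbiasedness_stats(crisis))     # large negative intercept
```

Splitting the sample as in Table 2.3 therefore amounts to testing whether the mean error is zero separately in each regime, which is why the crisis columns show large negative intercepts on much smaller N.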
