Also, we use the Survey of Professional Forecasters (SPF) data to understand how market participants revise their forecasts of key macroeconomic variables. The median SPF forecast data for the relevant variables were obtained from the Philadelphia Fed website for the period 1968Q4 to 2017Q3. There were 9 changes in the base year of the National Income and Product Accounts (NIPA) during this sample period. Some authors (Ramey (2011); Forni and Gambetti (2016)) used growth rates of the SPF forecasts without adjusting for changes in the base year, which generates 9 outlier observations in the data. To prevent this, we re-scaled all relevant forecast data so that they are expressed in 2009-dollar terms.
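The base-year adjustment described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function names and the conversion factors are hypothetical (in practice a regime's factor is implied by overlapping vintages of the same quarter reported in both bases). It shows why unadjusted growth rates produce an outlier at each base-year switch.

```python
def to_common_base(levels, regimes, factors):
    """Convert each level observation to a common (e.g. 2009-dollar) base by
    applying the conversion factor of the base-year regime it was reported
    under. `factors` maps a regime label to a multiplicative factor
    (hypothetical values here)."""
    return [x * factors[r] for x, r in zip(levels, regimes)]

def growth_rates(levels):
    """Quarter-on-quarter growth rates in percent."""
    return [100.0 * (b / a - 1.0) for a, b in zip(levels, levels[1:])]

# Two quarters reported under the old base, two under a rebased (new) one.
raw = [100.0, 101.0, 67.67, 68.0]
regimes = ["old", "old", "new", "new"]
adjusted = to_common_base(raw, regimes, {"old": 1.0, "new": 1.5})
```

Computing `growth_rates(raw)` yields a spurious growth rate of roughly -33% at the break, while `growth_rates(adjusted)` stays near 1% throughout.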
Given publication lags, up to the middle of the first month the releases all pertain to the previous quarter, and hence the MSFEs computed using both information sets coincide. The first current-quarter information arrives with the release of soft data (the business outlook survey of the PFED and the consumer confidence index of the University of Michigan) and produces a noticeable decrease in the MSFE of the nowcasts that include all available information. Then, as we move forward, more information on the current quarter, especially hard data, is released, and the precision of these nowcasts increases (although not monotonically) for all specifications, as does their performance relative to the nowcasts that use only previous-quarter information. This highlights the importance of incorporating the available current-quarter information both for the accuracy of the estimate and for competing with professional forecasters (SPF), who exploit timely information. Notice that in Figure 2 we did not display the MSFEs of the univariate benchmarks, i.e. the AR and RW models, as they are nearly twice those of the model and the SPF; they are shown in Table 2 of the next section.
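The MSFE comparisons discussed above reduce to a simple computation; a minimal sketch (function names are ours, not the paper's) is:

```python
def msfe(forecasts, outcomes):
    """Mean squared forecast error of a sequence of nowcasts."""
    errors = [f - y for f, y in zip(forecasts, outcomes)]
    return sum(e * e for e in errors) / len(errors)

def relative_msfe(model_fc, bench_fc, outcomes):
    """MSFE of the model relative to a benchmark (e.g. AR or RW);
    values below 1 mean the model's nowcasts are more accurate."""
    return msfe(model_fc, outcomes) / msfe(bench_fc, outcomes)
```

Tracking `msfe` for nowcasts made at successive dates within the quarter traces out the precision curves of the kind plotted in Figure 2.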
In this paper we present a novel way of assessing the validity of the SIM for professional forecasters. Specifically, the question we ask is whether professional forecasters are attentive to the latest news about the macroeconomy when they form their expectations. The answer is that professional forecasters, taken as a group, do not always update their estimates of the current state of the economy to reflect the latest releases of revised estimates of key data. As ever when testing a theory of expectations formation, a number of auxiliary assumptions need to be made. Our assessment of the SIM is not different in this respect, although the assumptions we make are arguably relatively weak, and at least in part testable. We are careful to consider other possible explanations for the empirical findings that are taken as support for the SIM. By and large, the SIM appears to be the most plausible explanation.
The rationality evaluation in chapter 2 examines the unbiasedness and efficiency of the interest rate forecasts from the WSJ survey. During the non-crisis period, almost none of the funds rate forecasts are significantly biased. But during the financial crisis, most of the professional forecasters missed the declines in the actual funds rate. The majority of the forecasters made note yield forecasts significantly higher than the actual note yield. The inefficiencies in the forecasts mainly stem from not efficiently using the past changes in the actual funds rate or the actual note yield. Some degree of rigidity is found in the funds rate forecasts from the WSJ survey and the T-bill rate forecasts from the SPF; it is possibly caused by imperfect information.
Abstract: This paper uses forecasts from the European Central Bank’s Survey of Professional Forecasters to investigate the relationship between inflation and inflation expectations in the euro area. We use theoretical structures based on the New Keynesian and Neoclassical Phillips curves to inform our empirical work. Given the relatively short data span of the Survey of Professional Forecasters and the need to control for many explanatory variables, we use dynamic model averaging in order to ensure a parsimonious econometric specification. We use both regression-based and VAR-based methods. We find no support for the backward-looking behavior embedded in the Neoclassical Phillips curve. Much more support is found for the forward-looking behavior of the New Keynesian Phillips curve, but most of this support is found after the beginning of the financial crisis.
The plan of the rest of the paper is as follows. Section 2 discusses the SPF, from which we obtain the forecast data. Section 3 reports the comparison of the histograms and point forecasts across all forecasters using the Engelberg et al. (2006) non-parametric bounds approach. Section 4 reports an extension of this approach to the comparison of the histograms and probabilities of decline. We also consider whether favourable point and probability forecasts tend to be issued simultaneously by an individual. Section 5 reports the individual-level analysis of the hypothesis that forecasters have asymmetric loss functions, as a possible explanation of the findings that individuals’ point forecasts are sometimes too favourable relative to their histograms. In section 6 we take a stand on what it is that the forecasters are trying to forecast - whether they are forecasting an early announcement of the data or a later revision. Up to this point, we have not needed to make any assumptions about the actual values of the series against which a forecaster would wish their forecast to be judged, as forecasts are not compared to outcomes at any stage. By making such an assumption we are able to assess the asymmetric loss hypothesis in a way that complements the analysis of section 5.
3.2 TYPES OF MODEL USED FOR FORECASTING
The responses indicated that the type of model preferred varies according to the forecast horizon and to the variable being forecast. A pattern emerged where the use of time series models is more common for shorter-term horizons and for inflation forecasts, whereas traditional supply and demand-based macro models are used more for longer-term horizons and slightly more for real GDP and unemployment rate forecasting (see Chart 6). Considering in more detail the types of model used for forecasting, most respondents (around 85%) reported that they use at least one type of time series model. Three of these are relatively widely used: ARIMA, single equation, and VAR or VEC models (see Chart 7). A smaller proportion uses other time series models, such as factor models. Most respondents who use time series models reported that they use two or more types of such models. Almost 70% of respondents reported that they use traditional supply and demand-based macro models, while very few forecasters indicated that they use DSGE models or some other type of model not specified in the questionnaire.
It is important to emphasize here that our analysis is based on a straightforward out-of-sample methodology: we compare the no-change forecast resulting from the driftless random walk model to real-time exchange rate forecasts coming from the Survey of Professional Forecasters carried out by the Central Bank of Chile on a monthly basis. In our exercise there is no need for parameter estimation. Inference is carried out using the traditional Diebold and Mariano (1995) and West (1996) tests. Our total number of observations is 201. Furthermore, during our sample period, Chile had a free float with only a handful of preannounced intervention periods carried out by the monetary authority. We also explore the stability of our results by analyzing a shorter subsample covering the last 74 observations of our sample period. In sum, our findings are neither the result of a novel econometric artifact nor the output of a magic black box. They are plain and strong, yet surprising given the long tradition of frustration with economic models, and some surveys too, when it comes to comparing their forecasts with those of the simple random walk.
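The equal-accuracy test mentioned above can be sketched in a few lines. This is a simplified version for a one-step horizon under squared-error loss: the long-run variance of the loss differential is replaced by its sample variance, with none of the autocovariance or small-sample corrections used in practice.

```python
import math

def diebold_mariano(e1, e2):
    """Diebold-Mariano statistic for equal predictive accuracy under
    squared-error loss (one-step horizon, no autocovariance correction).
    e1, e2: forecast errors of the two competing forecasts.
    Positive values indicate the second forecast has smaller losses."""
    d = [a * a - b * b for a, b in zip(e1, e2)]  # loss differential
    n = len(d)
    dbar = sum(d) / n
    var = sum((x - dbar) ** 2 for x in d) / (n - 1)
    return dbar / math.sqrt(var / n)
```

Under the null of equal accuracy the statistic is approximately standard normal, so values beyond roughly ±1.96 reject at the 5% level.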
In any dataset with individual forecasts of economic variables, some forecasters will perform better than others. However, it is possible that these ex post differences reflect sampling variation and thus overstate the ex ante differences between forecasters. In this paper, we present a simple test of the null hypothesis that all forecasters in the US Survey of Professional Forecasters have equal ability. We construct a test statistic that reflects both the relative and absolute performance of the forecaster and use bootstrap techniques to compare the empirical results with the equivalents obtained under the null hypothesis of equal forecaster ability. Results suggest little support for the idea that the best forecasters are actually innately better than others, though there is evidence that a relatively small group of forecasters perform very poorly.
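The logic of such a bootstrap test can be sketched as follows. This is a stylized stand-in, not the paper's procedure: the statistic here is simply the max-min spread of forecaster MSEs rather than the combined relative/absolute performance statistic the authors construct, and the resampling scheme is the simplest possible (drawing each forecaster's errors from the pooled errors under the null of one common error distribution).

```python
import random

def equal_ability_pvalue(errors_by_fc, n_boot=2000, seed=0):
    """Bootstrap p-value for the null that all forecasters draw errors
    from one common distribution. errors_by_fc: list of per-forecaster
    error lists. A stylized sketch of the equal-ability test."""
    rng = random.Random(seed)

    def spread(groups):
        mses = [sum(e * e for e in g) / len(g) for g in groups]
        return max(mses) - min(mses)

    actual = spread(errors_by_fc)
    pool = [e for g in errors_by_fc for e in g]  # pooled under the null
    hits = 0
    for _ in range(n_boot):
        sim = [[rng.choice(pool) for _ in g] for g in errors_by_fc]
        if spread(sim) >= actual:
            hits += 1
    return hits / n_boot
```

A small p-value means the observed performance dispersion is too large to be explained by sampling variation alone.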
Various survey data on expectations, from many countries over the last few decades, have produced mounting evidence of substantial inter-personal heterogeneity in how people perceive current economic conditions and form inferences about future ones. Lucas (1973) attributed the observed disagreement to individuals being exposed to different information sets, whereas in Carroll (2003) and Mankiw et al. (2003) disagreement arises because agents update to new information only occasionally. While analyzing the diverse behavior of professional forecasters, Kandel and Pearson (1995) and Kandel and Zilberfarb (1999) produced some startling evidence that agents not only have differential information sets but may also interpret the same public information differently. Heterogeneity in processing information and the lack of common beliefs is a central theme that has emerged in many areas of economics; see Acemoglu et al. (2006).
In terms of inflation targeting, we considered the anchoring heuristic for forecasting as the first type, because the work of professional forecasters requires such effort. The announced target can be interpreted as the core of a region of uncertainty for the private forecasters, because they are uncertain whether the central bank will meet the target. The forecasters start their mental predictions from the anchor represented by the announced target. Then, depending on the monetary policy moves and their effects on current inflation, the forecasters adjust their estimates (and this can be thought of as a kind of learning) by mentally moving away from the anchor. For example, if a central bank loses credibility as current inflation increases and threatens to depart from the core, the forecasters start the adjustment from the top because they are likely to expect even higher future inflation. Then, they move back toward the core within the range of uncertainty, but the adjustment is likely to end before the core is reached. As we will see, this example fits the interpretation we will provide for our results.
This paper estimates unobserved inflation expectations in India between 1993:Q1 and 2017:Q1 from the Fisher equation, using a state-space approach based on the Kalman filter. We find that the inflation forecasts obtained from the Fisher equation by applying the Kalman filter match well with the inflation forecasts made by the Survey of Professional Forecasters and the Inflation Expectations Survey of Households conducted by the Reserve Bank of India, and by the International Monetary Fund for the Indian economy.
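The filtering step can be illustrated with the simplest univariate case. This is a stylized sketch, not the paper's specification: here the latent state (expected inflation) follows a random walk and the observable (e.g. the nominal rate net of a constant real rate, per the Fisher relation) equals the state plus noise; the paper's state-space model is presumably richer.

```python
def kalman_local_level(y, q, r, x0=0.0, p0=1.0):
    """Univariate local-level Kalman filter.
    State: x_t = x_{t-1} + w_t,  Var(w_t) = q  (latent expectation)
    Obs:   y_t = x_t + v_t,      Var(v_t) = r
    Returns the sequence of filtered state estimates."""
    x, p, filtered = x0, p0, []
    for obs in y:
        p = p + q                  # predict: state variance grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (obs - x)      # update toward the observation
        p = (1.0 - k) * p          # posterior variance
        filtered.append(x)
    return filtered
```

With a constant observable the filtered estimate converges toward that constant, the speed governed by the signal-to-noise ratio q/r.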
Reported forecast probabilities may sometimes be rounded, in the sense that a 33% probability might be reported as 35%, or 30%, or 40%, or 50%, or even 0%, reflecting rounding to the nearest 5%, 10%, 20%, 50%, or to either 0 or 100%. In this paper we consider probability forecasts taken from one of the best known and longest-running surveys of the US economy, the Survey of Professional Forecasters (SPF), and assess the potential impact of rounding in terms of: i) the effect on the internal consistency of the probability forecasts of a decline in real output and the histograms for annual real output growth, and ii) the relationship between the probability forecasts and the point forecasts of quarterly output growth. The SPF is perhaps a unique resource from the perspective of allowing us to assess the impact of rounding on the internal consistency of the survey respondents, as the SPF allows us to calculate matched point, probability and probability distribution forecasts for the individual forecasters. In the case of the SPF probability forecasts we have no idea to what extent the survey participants indulge in rounding, and presumably this is true of most surveys where respondents are required to report probabilities. We do not know whether there are differences between individuals in this regard, or even whether individuals behave in the same way over time. Hence we will investigate the potential impact of rounding under a number of plausible assumptions, including the assumption that individuals adopt common response patterns, as motivated by Manski and Molinari (2008).
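One simple way to flag the response patterns described above is to find the coarsest grid that a respondent's reported probabilities are consistent with. This is an illustrative sketch (the function is ours, and grid membership is only suggestive: a value can lie on a coarse grid by coincidence, which is why the paper works with assumptions about response patterns rather than point identification).

```python
def coarsest_consistent_grid(probs, grids=(50, 25, 20, 10, 5, 1)):
    """Return the coarsest grid (in percentage points) such that every
    reported probability is a multiple of it, e.g. [30, 40, 50] -> 10.
    Returns None if some probability lies on none of the candidate grids."""
    for g in grids:
        if all(float(p).is_integer() and int(p) % g == 0 for p in probs):
            return g
    return None
```

Applied respondent by respondent, this gives a rough classification of who might be rounding to the nearest 5, 10, 20 or 50 percentage points.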
There is now a large literature addressing various aspects of the rationality of point forecasts of key macro aggregates, such as output and inflation (see Stekler (2002) for a recent review), and smaller but expanding literatures on the evaluation of probability distributions (e.g., Diebold, Gunther and Tay (1998), Diebold, Hahn and Tay (1999a)), interval and quantile forecasts (e.g., Granger, White and Kamstra (1989), Christoffersen (1998) and Giacomini and Komunjer (2005)), volatility forecasts (e.g., Andersen and Bollerslev (1998), Andersen, Bollerslev, Diebold and Labys (2003)), and probability forecasts (e.g., Clements and Harvey (2006)). Recently, a number of authors have sought to assess forecaster rationality in terms of the internal consistency of the different types of forecasts simultaneously made by individual forecasters. The key papers are Engelberg, Manski and Williams (2007), who compare the point forecasts and histograms of the respondents to the US Survey of Professional Forecasters, and Clements (2008), who in addition assesses the evidence for consistency of the SPF respondents’ histograms and forecast probabilities of declines in real output growth. Inconsistencies are found, and are generally in the direction of the point and probability forecasts indicating a rosier outlook than the histograms: the point forecasts of output growth and inflation are higher and lower, respectively, than implied by the histogram forecasts, and the histogram probabilities of declines in output tend to overstate the directly-reported probabilities that respondents assign to such an event.
revealing the underlying subjective probability distributions. Section 6 offers some concluding remarks. Finally, an appendix outlines the calculation of point predictions and histogram means for a single respondent’s returns to two adjacent surveys. These forecasts were not selected randomly, but were chosen as an example of the way in which the two types of forecasts are clearly not generated as part of an integrated forecasting system. These forecasts are not meant in any sense to be typical: our case against the use of histogram forecasts for point prediction rests on the empirical findings reported in sections 3 and 4, not on this anecdotal evidence. These detailed calculations are meant to better illuminate the nature of the calculations that underpin the results reported in those sections. However, the example does serve to show that not all professional forecasters are as careful in their deliberations as is sometimes assumed.
The best-known series of density forecasts in macroeconomics dates from 1968, when the American Statistical Association and the National Bureau of Economic Research jointly initiated a quarterly survey of macroeconomic forecasters in the United States, known as the ASA-NBER survey; Zarnowitz (1969) describes its original objectives. In 1990 the Federal Reserve Bank of Philadelphia assumed responsibility for the survey, and changed its name to the Survey of Professional Forecasters (SPF). Survey respondents are asked not only to report their point forecasts of several variables but also to attach a probability to each of a number of pre-assigned intervals, or bins, into which output growth and inflation, this year and next year, might fall. In this way, respondents provide density forecasts of these two variables, in the form of histograms. The probabilities are then averaged over respondents to obtain the mean or combined density forecasts, again in the form of histograms, and these are included in the quarterly publication of forecasts. Similar questions about inflation have been asked in the Bank of England Survey of External Forecasters (SEF) since 1996, and about GDP growth since 1998; the combined density forecasts likewise feature in the Bank’s quarterly Inflation Report.
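The averaging step described above, producing the combined density forecast from the individual histograms, is mechanically simple; a minimal sketch (function name is ours) is:

```python
def combine_histograms(histograms):
    """Equal-weight average of respondents' bin probabilities, giving the
    combined ('mean') density forecast in histogram form. All respondents
    are assumed to report over the same pre-assigned bins."""
    n = len(histograms)
    k = len(histograms[0])
    return [sum(h[j] for h in histograms) / n for j in range(k)]
```

Because each respondent's probabilities sum to one, the combined histogram does too; it is a finite mixture (equal-weight linear pool) of the individual densities.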
Comparisons of RMSE for forecasts of GDP growth show little difference between the SEF average and MPC forecasts, whether real-time or revised data are used as actual outcome data. Again we have a clear indication of the increased difficulty of forecasting the revised GDP growth data, as discussed above. For the inflation forecasts, which receive greater attention in an inflation targeting context, the RMSE comparison clearly favours the SEF average forecast, as noted in Section 2.2 above. It is also notable that the SEF average forecast RMSE is smaller than any individual regular respondent’s RMSE shown in Table 3. Although the 19 regular respondents do not always enter the published survey average, which also includes other less regular respondents, this result supports the familiar advantage to be gained by forecast pooling. The same result does not hold for the GDP growth forecasts, Table 4 showing a few individual RMSEs smaller than that of the SEF average forecast in each case, again suggesting ambiguity over the forecasters’ target measure.
The relationship meteorologists have with the weather helps in their mastery of understanding and telling its story. Despite the highly localized nature of scientific weather prediction processes across forecast offices and international borders (Daipha 2015, Fine 2007), one general commonality found in this expert group is the unique relationship each has with the weather, one that often extends far beyond their official duties and requirements as forecasters. For many, work is not only about issuing forecasts on time, or watches and warnings in enough time, but instead is centred on carrying out an organizational-turned-deeply-personal mission of protecting life and property, a role similar to that of emergency first responders, as evidenced by one severe weather meteorologist who said: “I feel like we’re firemen at a fire station -- waiting for a fire in the hole.” For these scientific experts, waiting for that fire in the hole is what weather prediction is all about. Often working 12-hour shifts and taking minimal breaks, many eat lunch at their desks, and many stay long after they are required when the weather calls for it. Members of this group describe the weather in human ways, noting a front’s beauty or a
In the UK, the Investment Property Forum (IPF), which was set up in 1998, is now recognised as one of the leading specialist property industry bodies in the UK. It comprises an influential network of senior professionals active in the property investment market. IPF real estate forecast surveys have been conducted since November 1998, and quarterly (February, May, August and November) since 1999. These IPF expert-opinion forecasting surveys collect information on future rental growth, capital growth and total returns from a range of UK real estate forecasters, including real estate advisors, fund managers and equity brokers. These rental growth, capital growth and total return forecasts are presented at the “total” UK property level, with office, retail and industrial property sub-sector forecast results not available (McAllister et al., 2008). The IPF UK real estate forecasts were then compared with the respective Investment Property Databank (IPD) actual UK annual real estate returns. The IPD real estate indices represent the commercial real estate performance benchmarks for the UK. The IPD annual database is the most reliable benchmark of direct real estate performance in the UK. Full details of the IPD UK real estate indices are available from www.ipdindex.co.uk (McAllister et al., 2008).
Capistrán and Timmermann (2009) assume that forecast uncertainty is the same across all individuals, and measure it as the one-quarter-ahead conditional variance from a GARCH model fitted to the errors of an AR model for inflation. The finding of Rich and Tracy (2003) that there is no relationship between the conditional variance of the consensus errors and an ex ante measure of uncertainty casts doubt on this step. Model-based forecasts of the conditional variance are necessarily backward-looking and will adjust slowly when the economic environment changes relative to forecasters’ perceptions. It seems likely that assuming a constant-parameter GARCH model for a period that spans the Great Moderation might lead to the degree of persistence in inflation volatility being over-estimated.
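The mechanics of the uncertainty measure discussed above can be sketched as follows. This is an illustration only: it takes the GARCH(1,1) parameters as given (estimation is not shown) and produces the conditional variance path and the one-quarter-ahead forecast from a sequence of AR residuals.

```python
def garch11_variance(resid, omega, alpha, beta, h0=None):
    """GARCH(1,1) conditional variance recursion:
        h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}.
    resid: sequence of (e.g. AR-model) forecast errors e_t.
    Returns (variance path, one-step-ahead variance forecast).
    Starts from the unconditional variance if h0 is not given."""
    h = h0 if h0 is not None else omega / (1.0 - alpha - beta)
    path = []
    for e in resid:
        path.append(h)
        h = omega + alpha * e * e + beta * h
    return path, h  # h is the one-quarter-ahead conditional variance
```

The recursion makes the backward-looking nature of the measure concrete: with persistence alpha + beta near one, the variance forecast is dominated by its own past and adjusts only slowly to a change in the economic environment.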