180. We feel that the forecast efficiency methodology used by Faust et al (2000) is a very useful one for the study of revisions, and the ONS has recently published its own paper applying this methodology to our standard revisions dataset (Richardson, 2003).

181. One issue with revisions analysis is that by looking only at the mean revision (commonly defined as the bias of a series), we ignore the possibility that revisions to different quarters (or years, for annual data) may cancel each other out. Hence it is possible to have a series with no bias, but which has large revisions in both directions. This prompted Öller and Hansson (2002) and Akritidis (2003) also to study the mean absolute revision (formerly known in ONS articles as the “mean of revisions regardless of sign”), and the variance.
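
As an illustration of these descriptive statistics, the sketch below computes the mean revision, mean absolute revision and variance for a short series of revisions. It is not ONS code; the figures and variable names are invented purely for illustration.

```python
# Minimal sketch (hypothetical data): descriptive revision statistics, where
# revision = later (more mature) estimate - initial estimate.
import numpy as np

initial = np.array([0.6, 0.9, 0.4, 0.7, 1.1, 0.3])   # initial quarterly growth estimates, %
mature  = np.array([0.8, 0.6, 0.7, 0.7, 0.8, 0.6])   # later, more mature estimates, %
revision = mature - initial

mean_revision = revision.mean()              # "bias": positive and negative revisions can cancel
mean_abs_revision = np.abs(revision).mean()  # mean absolute revision (revisions regardless of sign)
variance = revision.var(ddof=1)              # dispersion of revisions

print(f"mean revision      = {mean_revision:+.3f}")
print(f"mean abs. revision = {mean_abs_revision:.3f}")
print(f"variance           = {variance:.3f}")
```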

182. A particular strength of the methodology used in Faust et al (2000) is that the F-test for forecast rationality encompasses a number of the descriptive statistics used in the ONS approach: the variance of revisions (which is what the model tries to explain), absolute revisions (the issue of revisions cancelling out is less important in a time series framework), and mean reversion (captured by relating the revision to the initial growth rate).
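
A minimal sketch of a forecast efficiency regression in the spirit of Faust et al (2000) is given below: the revision is regressed on a constant and the initial growth estimate, and an F-test of the joint null that both coefficients are zero is carried out. The data are simulated purely for illustration and do not come from the Faust et al dataset.

```python
# Sketch of a forecast efficiency regression: revision = a + b * initial + error,
# with a joint F-test of a = b = 0 (revisions unpredictable from the initial estimate).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
initial = rng.normal(0.6, 0.5, 40)                        # simulated initial growth estimates
revision = 0.2 - 0.4 * initial + rng.normal(0, 0.3, 40)   # simulated revisions (illustrative only)

X = sm.add_constant(initial)
res = sm.OLS(revision, X).fit()

# Joint test that intercept and slope are both zero; rejection implies predictable revisions.
f_test = res.f_test(np.eye(2))
print(res.params)                  # a negative slope: high initial estimates tend to be revised down
print(f_test.fvalue, f_test.pvalue)
```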

183. The key result of Faust et al with regard to the UK is that revisions to quarterly growth are highly predictable, and that the relationship is negative: a high initial estimate is likely to be revised downwards. They find this using both a long-run sample of 1965 – 1997 and a shorter sample of 1988 – 1997.

184. ONS analysis has shown that revisions analyses are often dogged by the issue of sample dependency, i.e. the results are often highly dependent on the period chosen for analysis. Ideally, both start and end dates should be chosen objectively, and the period of study should be relevant to the production processes currently used by the statistical institute. Obviously, this complicates any international study, where the aim is often to compare results over the same period for different countries.

185. To maintain a consistent framework of production stages, recent ONS work has used 1993 as the start of its analyses, which was when the preliminary M1 estimate was introduced. This gives us an objectively set start date. As with Faust et al, the ONS acknowledges that the data need to be allowed to mature for the revisions analysis to paint a consistent picture.

186. Faust et al (2000) use two samples: one which includes the longest run of data available for each country, and a shorter sample of 1988 – 1997. The relevance of studying the revisions performance of the ONS in the 1960s is questionable, given the vast changes in data collection and production techniques that have occurred over the last four decades. However, this issue also affects Faust et al’s shorter sample. The late 1980s saw a number of methodological changes. For example, in 1989 the Pickford review (Pickford, 1989) investigated the large discrepancies that had arisen between the three (supposedly equivalent) measures of GDP since 1986. This review led to a number of changes to the way in which the National Accounts are compiled, for example improvements in the monthly and quarterly collection of data. We do not believe that it is relevant to include revisions from older systems alongside revisions arising from our current practices.

187. One way to study the impact of the choice of start date on the UK result is to run a series of regressions starting at different periods. Using the Faust et al dataset, Table 21 shows ONS-calculated F-statistics for the basic forecast efficiency regression run on different sample periods for the UK, allowing the start date to vary but keeping the end date fixed at 1997Q4. The Faust et al regression sample of 1988Q1 to 1997Q4 is marked with an asterisk.

Table 21: Various Regression samples – Faust et al model

Sample period start    F-statistic    Probability    Sample size (quarters)
1985Q1                    23.01          0.0000              52
1986Q1                    23.85          0.0000              48
1987Q1                    22.37          0.0000              44
1988Q1 *                  20.48          0.0000              40
1989Q1                     5.42          0.0090              36
1990Q1                     6.04          0.0063              32
1991Q1                     6.79          0.0043              28
1992Q1                     8.86          0.0015              24
1993Q1                     5.32          0.0153              20
1994Q1                     2.81          0.0943              16

* Faust et al (2000) sample, 1988Q1 to 1997Q4.
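
As a rough illustration of how such an exercise could be reproduced, the sketch below re-runs a forecast efficiency regression over sub-samples with a moving start date and a fixed end date of 1997Q4. It is not the ONS code: `df` is an assumed quarterly DataFrame with columns 'initial' and 'revision' indexed by a pandas PeriodIndex, and `efficiency_fstat` is a hypothetical helper.

```python
# Sketch: F-statistics of the forecast efficiency regression for different start dates,
# keeping the end of the sample fixed at 1997Q4.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def efficiency_fstat(sub):
    """Joint F-test that constant and slope are zero in revision = a + b * initial."""
    X = sm.add_constant(sub["initial"].to_numpy())
    res = sm.OLS(sub["revision"].to_numpy(), X).fit()
    test = res.f_test(np.eye(2))
    return np.asarray(test.fvalue).item(), np.asarray(test.pvalue).item()

rows = []
for start_year in range(1985, 1995):
    sub = df.loc[f"{start_year}Q1":"1997Q4"]      # expanding-start sub-sample
    f, p = efficiency_fstat(sub)
    rows.append({"start": f"{start_year}Q1", "F": f, "p": p, "n": len(sub)})

print(pd.DataFrame(rows))
```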

188. As we can see, using a sample of 1988 – 1997 results in a vastly different F-statistic from a sample of 1989 – 1997. This is mainly due to a number of large revisions in 1988, as illustrated by Chart 5 below. A recursive residuals test indicates a break in the Faust et al regression in 1990.
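
The recursive residuals test referred to above could be approximated along the following lines. This is a minimal sketch rather than the test actually run; `initial` and `revision` are the simulated arrays from the earlier sketch.

```python
# Sketch: standardised one-step-ahead recursive residuals for the forecast efficiency
# regression, refitted on an expanding window (Brown-Durbin-Evans style).
import numpy as np

def recursive_residuals(y, X, skip):
    out = []
    for t in range(skip, len(y)):
        XtX_inv = np.linalg.inv(X[:t].T @ X[:t])
        beta = XtX_inv @ X[:t].T @ y[:t]          # OLS fit on observations 0..t-1
        err = y[t] - X[t] @ beta                  # one-step-ahead prediction error
        f_t = 1.0 + X[t] @ XtX_inv @ X[t]         # prediction error variance factor
        out.append(err / np.sqrt(f_t))
    return np.array(out)

X = np.column_stack([np.ones_like(initial), initial])
w = recursive_residuals(revision, X, skip=8)
cusum = np.cumsum(w) / w.std(ddof=1)              # a CUSUM path drifting away from zero
print(cusum)                                      # suggests instability in the regression
```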

Chart 5: Total Revisions to GDP – 1985 – 1997

189. We believe that this highlights the issue of methodological improvements. ONS data show that the substantial revisions to 1988 came from the 1989 Blue Book and represent methodological improvements to the consistency of the accounts. This is a clear example of a trade-off between different aspects of quality: in the case of the revisions arising from the Pickford Review, improvements were made to the consistency of the accounts at the expense of larger revisions. Similarly, revisions arising from the adoption of ESA 1995 in the 1998 Blue Book improved the international comparability of the accounts, but again at the expense of revisions.

190. Another aspect of quality is timeliness. Faust et al (2000) acknowledge that the UK is one of the fastest countries in the world to publish its preliminary estimate, and advise readers to bear this in mind when comparing the magnitudes of revisions. However, this aspect of quality does not feature in the regressions.
