On Comparing Macroeconomic Models Using Forecast Encompassing Tests

2009 
It is clearly of interest to macroeconomists to be able to evaluate whether one large-scale macroeconometric model 'is better' than another. Although comparisons between models are sometimes invidious, because the purposes for which the models were built differ, formal comparisons of two models' statistical properties remain rare. This is in spite of considerable theoretical advances in econometric methodology, namely the development and use of non-nested and encompassing tests. Chong and Hendry (1986) advocate the use of forecast encompassing regressions, in which the outturns are regressed on competing (one-step-ahead) forecasts. This paper reports the findings of applying this relatively easy-to-use method to compare large-scale macroeconometric models. The forecast data we use are those published by three macroeconometric modelling groups: Liverpool, the National Institute, and the London Business School. Forecasts up to three years ahead for unemployment, growth, and inflation were published throughout the 1980s. Forecast encompassing tests based on one-year-ahead forecasts fail to separate one model from another: each model 'wins' once. However, the conclusions differ from those reached using root-mean-square-forecast-error criteria, illustrating Clements and Hendry's (1994) observation that minimum root-mean-square forecast error is neither necessary nor sufficient for a model to have constant parameters, to provide accurate forecasts, or to encompass its rivals.
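As a sketch of the regression involved (the notation here is illustrative rather than taken from the paper), the outturn $y_t$ is regressed on the competing one-step-ahead forecasts $\hat{y}_{1,t}$ and $\hat{y}_{2,t}$:

$$ y_t = \alpha + \beta_1 \hat{y}_{1,t} + \beta_2 \hat{y}_{2,t} + \varepsilon_t , $$

and model 1 is said to forecast-encompass model 2 when the hypothesis $\beta_2 = 0$ cannot be rejected, since the rival forecast then adds no explanatory power beyond model 1's own forecast; the symmetric test is applied with the roles of the two models reversed.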