When using GARCH to forecast volatility, keep in mind that GARCH models work well for estimating in-sample volatility, but their forecasting performance is hard to assess because conditional volatility is unobserved.
When forecasting volatilities using the GARCH model, we need to backtest the quality of the model in order to:
- Determine how successful we are in forecasting
- Compare different GARCH specifications to each other
- Compare GARCH models to other types of forecasts.
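Before any backtest can run, we need the forecasts themselves. A minimal sketch of one-step-ahead GARCH(1,1) variance forecasts, assuming the parameters (omega, alpha, beta) have already been estimated elsewhere (in practice you would fit them by maximum likelihood, e.g. with the `arch` package):

```python
import numpy as np

def garch11_forecast(returns, omega, alpha, beta):
    """One-step-ahead conditional variance forecasts from a GARCH(1,1)
    with given (assumed already-estimated) parameters:
        h_{t+1} = omega + alpha * r_t^2 + beta * h_t
    """
    r = np.asarray(returns, dtype=float)
    h = np.empty(len(r) + 1)
    h[0] = r.var()  # initialize at the unconditional sample variance
    for t in range(len(r)):
        h[t + 1] = omega + alpha * r[t] ** 2 + beta * h[t]
    return h[1:]  # forecast of the variance for periods 1..n
```

These forecasts are then compared, period by period, against a volatility proxy such as the squared return.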
White's Reality Check provides a framework for backtesting GARCH models. It is a non-parametric test of whether any of a number of competing methods yields better forecasts than a given benchmark method. For this purpose, Souza et al. carry out a Monte Carlo simulation with four data-generating processes: one Gaussian white noise model and three GARCH specifications. As benchmarks, they use a naive predictor (the in-sample variance) and the RiskMetrics method.
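The mechanics of the Reality Check can be sketched as follows: compute per-period loss differentials between the benchmark and each competing model, take the best average improvement as the test statistic, and obtain its null distribution with a stationary bootstrap. This is a simplified illustration, not the paper's implementation; the function name and the choice of mean block length are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def reality_check_pvalue(bench_loss, model_losses, n_boot=500, block=10):
    """Bootstrap p-value for White's Reality Check.
    bench_loss:   (n,) per-period losses of the benchmark forecast.
    model_losses: (n, k) per-period losses of k competing forecasts.
    Uses a Politis-Romano stationary bootstrap with mean block length `block`.
    Small p-value => some model significantly beats the benchmark."""
    d = bench_loss[:, None] - model_losses        # differentials; positive = model beats benchmark
    n, _ = d.shape
    v = np.sqrt(n) * d.mean(axis=0).max()         # statistic: best average improvement
    p_new = 1.0 / block                           # probability of starting a new block
    boot_stats = np.empty(n_boot)
    for b in range(n_boot):
        # build a stationary-bootstrap index sequence
        idx = np.empty(n, dtype=int)
        idx[0] = rng.integers(n)
        for t in range(1, n):
            idx[t] = rng.integers(n) if rng.random() < p_new else (idx[t - 1] + 1) % n
        d_star = d[idx]
        # recenter at the sample mean, per the Reality Check null
        boot_stats[b] = np.sqrt(n) * (d_star.mean(axis=0) - d.mean(axis=0)).max()
    return float((boot_stats >= v).mean())
```

The recentering step is what imposes the null hypothesis that no model improves on the benchmark; without it the bootstrap distribution would simply reproduce the observed advantage.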
Souza et al. use the following measures to test whether a practitioner can have a good assessment of the accuracy of volatility forecasts:
- The root mean squared error (RMSE),
- the heteroskedasticity-adjusted mean squared error (HMSE),
- the logarithmic loss (LL),
- and the likelihood (LKHD).
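For concreteness, here is one common way these four measures are defined for a variance forecast `h` and a volatility proxy (e.g. the squared return); these follow the definitions popularized by Bollerslev and Ghysels, and the paper's exact formulas may differ in detail:

```python
import numpy as np

def rmse(h, proxy):
    # root mean squared error between forecast variance and proxy
    return np.sqrt(np.mean((h - proxy) ** 2))

def hmse(h, proxy):
    # heteroskedasticity-adjusted MSE: errors scaled by the forecast
    return np.mean((proxy / h - 1.0) ** 2)

def log_loss(h, proxy):
    # logarithmic loss: squared error in log-variance,
    # so large and small variances are penalized symmetrically
    return np.mean((np.log(proxy) - np.log(h)) ** 2)

def lkhd(h, proxy):
    # Gaussian quasi-likelihood loss (lower is better)
    return np.mean(np.log(h) + proxy / h)
```

Note that the likelihood-based loss is minimized, in expectation, at the true conditional variance, which is one reason likelihood-type criteria are attractive when only a noisy proxy is available.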
They make the following findings:
- The choice of benchmark (naive predictor or RiskMetrics) has a large effect on the results
- They recommend the root mean squared error and likelihood tests
- The forecasting performance of GARCH models increases with the heteroskedasticity of the data
- The choice of volatility proxy when comparing models (true volatility or squared observations) is important
- Finally, "[they] find that the Reality Check may not be suitable to compare volatility forecasts within a superior predictive ability framework, and we conjecture that this is due to assumptions made on the test statistic as reported in Hansen (2001)". I'm not sure what they mean by "superior predictive ability framework" (yet).