Volatility in financial time series tends to cluster: large moves are followed by large moves, and calm periods by calm periods. Since one of the conditions for carrying out a regression analysis using OLS is homoskedasticity, this creates a problem. We can address it by modelling the changing volatility with a Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model.
As suggested on R-bloggers, I will use daily data for FTSE 100 returns. I use adjusted daily prices from Google Finance. Here is the raw data:
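For completeness, here is a minimal sketch of how the price series and returns might be constructed. The quantmod package, the ^FTSE ticker and the name ftse.return are my assumptions, not necessarily how the original data was pulled (and since the Google Finance source is no longer available in quantmod, the sketch pulls from Yahoo instead):

> library(quantmod)
> getSymbols("^FTSE", from="2004-01-01")              # adjusted daily prices, stored in FTSE
> ftse.return <- as.numeric(diff(log(Ad(FTSE)))[-1])  # daily log returns, leading NA dropped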
A generally accepted explanation of this volatility clustering is that information arrives at the market in bursts, and that this drives price reactions. It is these shocks that we seek to model using GARCH. Taking a look at the returns of the FTSE 100 since 2004, it is clear that there was a big volatility spike at the end of 2008, during the outbreak of the financial crisis.
How much data should be used for estimation? R-bloggers writes that ideally tens of thousands of observations, but that 2,000 is not unreasonable; I have 2,608 observations. Which GARCH model should be used? I will go with GARCH(1, 1), as recommended by R-bloggers. Financial time series typically do not have a normal distribution. We could hope that the fat tails are entirely due to the GARCH effect, but in practice a t-distribution for the innovations performs better.
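For reference, in a GARCH(1, 1) model the conditional variance evolves as

$$\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,$$

where $\varepsilon_{t-1}$ is yesterday's return innovation and $\sigma_{t-1}^2$ yesterday's conditional variance. A sum $\alpha + \beta$ close to one means volatility shocks decay slowly, which is exactly the clustering we see in the data.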
Following the instructions on R-bloggers, I do the following:
> library(rugarch)
> gspec.ru <- ugarchspec(mean.model=list(armaOrder=c(0,0)), distribution="std")  # default GARCH(1,1) variance, t innovations
> gfit.ru <- ugarchfit(gspec.ru, ftse.return)
> coef(gfit.ru)
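With this spec, coef() returns mu, omega, alpha1, beta1 and shape (the degrees of freedom of the t-distribution). As a quick sanity check (my addition, not part of the R-bloggers recipe), one can look at the persistence alpha1 + beta1; a value close to one means volatility shocks die out slowly:

> coef(gfit.ru)["alpha1"] + coef(gfit.ru)["beta1"]  # persistence of volatility shocks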
I then plot the annualized in-sample volatility estimate (multiplying the daily sigma by sqrt(252) scales it to annual terms) by writing:
> plot(sqrt(252) * gfit.ru@fit$sigma, type='l')