The R library "Buy 'Til You Die" (BTYD) for customer equity forecasting looked promising, but I quickly realised that the implementation relies heavily on loops and is not suited to any data set bigger than a couple of thousand rows.
To speed things up, I swapped out a couple of the loops for some data.table magic. Here it is, about 10,000× faster than the original, though there is still some room for improvement.
If you're having trouble getting the BTYD package to run, take a look at this post for fixes.
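The actual fix used R's data.table; as a hedged illustration of the same loop-to-grouped-aggregation rewrite, here is a pandas sketch that computes the per-customer frequency/recency summaries BTYD-style models consume (the toy data and column names are my own, not the package's):

```python
# Hypothetical illustration of the loop-to-vectorized rewrite.
import pandas as pd

# Toy transaction log: customer id and purchase date.
tx = pd.DataFrame({
    "cust": [1, 1, 2, 1, 2, 3],
    "date": pd.to_datetime([
        "2015-01-03", "2015-02-10", "2015-01-20",
        "2015-03-05", "2015-04-01", "2015-02-14",
    ]),
})

# Slow: one pass per customer, like a looped implementation.
def rfm_loop(tx):
    rows = []
    for cust in tx["cust"].unique():
        t = tx.loc[tx["cust"] == cust, "date"]
        rows.append({"cust": cust,
                     "frequency": len(t) - 1,                 # repeat purchases
                     "recency": (t.max() - t.min()).days})    # first-to-last gap
    return pd.DataFrame(rows)

# Fast: one grouped aggregation over the whole table.
def rfm_vectorized(tx):
    g = tx.groupby("cust")["date"]
    out = pd.DataFrame({"frequency": g.size() - 1,
                        "recency": (g.max() - g.min()).dt.days})
    return out.reset_index()

print(rfm_vectorized(tx))
```

The grouped version does one scan of the table instead of one scan per customer, which is where the orders-of-magnitude speedup comes from.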
Showing posts with label forecasting. Show all posts
Monday, October 12, 2015
Tuesday, March 24, 2015
Rovio revenue estimate 2014
Is there a correlation between Rovio's revenue and the number of Google searches for their most popular title, Angry Birds? Admittedly, we only have three data points to go on, but they do line up nicely. The upper chart plots the log change in search volume (x-axis) against revenue (y-axis). Based on that correlation, Rovio's revenue should decline somewhat in 2014, to 152 million €.
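The fit described above is a simple least-squares line through the three points. A sketch (every number below is made up for illustration; these are not Rovio's actual search or revenue figures):

```python
# Linear fit of revenue on the log change in search volume.
import numpy as np

log_change = np.array([-0.2, 0.3, 0.9])        # hypothetical yearly log changes
revenue    = np.array([150.0, 200.0, 260.0])   # hypothetical revenue, M EUR

# Ordinary least squares through the three points.
slope, intercept = np.polyfit(log_change, revenue, 1)

# Predict 2014 revenue from the (hypothetical) observed 2014 search decline.
pred_2014 = slope * (-0.5) + intercept
```

With only three observations the fit is fragile, which is why the post hedges the conclusion.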
Supercell revenue 2014 is 1.55 billion €, compared to forecasted 1.7 billion €
How powerful is Google Trends for predicting revenue of Internet companies? This is just one data point, but my previous prediction for Supercell's 2014 revenue was not far off.
Supercell's revenue for 2014 was 1.55 billion €, compared to my prediction of 1.7 billion €.

The next prediction I have my eye on is for the Apple Watch. Google Trends data suggests that the Apple Watch will sell well below what market analysts expect. While the launch of the Apple Watch did create some buzz on search engines, that quickly died out.
Another mobile games company from Finland is Rovio. It would be interesting to see if the correlation holds up for them as well. It's not looking good.
Monday, March 09, 2015
Apple Watch sales prediction based on Google Trends data
Back in September 2014, I estimated that unit sales of the Apple Watch would be 2,700,000 in the first three months of sales. That number was based on the correlation between Google searches around the announcements of the iPhone and the iPad. Later, in October, I revised the number down to 400,000 based on the low interest in the product.
When compared to the interest in the iPhone and iPad, the Apple Watch is still lagging behind. In fact, the iPod generates more Google searches than the Apple Watch.
Industry analysts expect Apple to sell between 10 and 30 million watches in the first year, or roughly 2.5 to 7.5 million per quarter. Even if 400,000 is way too low, the low search interest for the watch indicates that sales will come in below what analysts predict.
Google Trends data is always two days behind, so we will have to wait until Wednesday to see how the Apple Watch launch compares to the iPad and iPhone. So far, it doesn't look great.
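The arithmetic behind this kind of estimate is a plain ratio extrapolation from relative search interest; a sketch with invented index values (Google Trends reports relative interest on a 0-100 scale, and none of the numbers below are real):

```python
# Hypothetical ratio extrapolation from relative search interest.
iphone_peak_interest = 100   # made-up Trends index at the iPhone launch
iphone_q1_sales_m    = 1.1   # made-up iPhone units (millions), first quarter
watch_peak_interest  = 8     # made-up Trends index for the Apple Watch

# Scale the reference product's first-quarter sales by relative interest.
watch_estimate_m = iphone_q1_sales_m * watch_peak_interest / iphone_peak_interest
```

The estimate inherits all the weaknesses of the ratio: it assumes search interest converts to sales at the same rate for both products.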
More on the methodology
Tuesday, August 26, 2014
Testing GARCH forecasts with White's Reality Check
This post is a summary of the findings from the article "Evaluating the Forecasting Performance of GARCH Models Using White's Reality Check" by Souza, Veiga and Medeiros (2005).
When using GARCH to forecast volatility, keep in mind that while GARCH works well for estimating volatility, its measured forecasting ability is weak, partly because true conditional volatility is unobserved and forecasts have to be evaluated against noisy proxies.
When forecasting volatilities using the GARCH model, we need to back test the quality of the model in order to:
- Determine how successful we are in forecasting
- Compare different GARCH specifications to each other
- Compare GARCH models to other types of forecasts.
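For reference, a one-step-ahead GARCH(1,1) variance forecast comes from the recursion h[t+1] = omega + alpha * r[t]² + beta * h[t]. A minimal numpy sketch with illustrative parameter values (not fitted to any real data):

```python
import numpy as np

def garch11_forecast(returns, omega, alpha, beta):
    """One-step-ahead conditional-variance forecast from a GARCH(1,1):
    h[t+1] = omega + alpha * r[t]**2 + beta * h[t]."""
    h = np.var(returns)               # initialise at the sample variance
    for r in returns:
        h = omega + alpha * r**2 + beta * h
    return h                          # forecast for the next period

rng = np.random.default_rng(0)
r = 0.01 * rng.standard_normal(500)   # toy return series
h_next = garch11_forecast(r, omega=1e-6, alpha=0.05, beta=0.9)
```

In practice the parameters would be estimated by maximum likelihood; the recursion above only shows how a fitted model turns past returns into a variance forecast.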
White's Reality Check provides a framework for backtesting GARCH models. It is a non-parametric test of whether any of a number of competing methods yields better forecasts than a given benchmark method. For this purpose, Souza et al. carry out a Monte Carlo simulation with four different data-generating processes: one Gaussian white-noise model and three GARCH specifications. As benchmarks, they use a naive predictor (the in-sample variance) and the RiskMetrics method.
Souza et al. use the following measures to test whether a practitioner can have a good assessment of the accuracy of volatility forecasts:
- The root mean squared error (RMSE),
- the heteroskedasticity-adjusted mean squared error (HMSE),
- the logarithmic loss (LL),
- and the likelihood (LKHD).
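One common formulation of these four losses, where `proxy` is the volatility proxy (true variance or squared returns) and `h` is the model forecast. Exact definitions vary slightly across papers, so treat this as a sketch rather than exactly what Souza et al. compute:

```python
import numpy as np

def rmse(proxy, h):   # root mean squared error on the variance scale
    return np.sqrt(np.mean((proxy - h) ** 2))

def hmse(proxy, h):   # heteroskedasticity-adjusted MSE
    return np.mean((proxy / h - 1.0) ** 2)

def ll(proxy, h):     # logarithmic loss
    return np.mean((np.log(proxy) - np.log(h)) ** 2)

def lkhd(proxy, h):   # Gaussian quasi-likelihood loss (one convention; lower is better)
    return np.mean(np.log(h) + proxy / h)

# Toy values: a realised-volatility proxy and the corresponding forecasts.
proxy = np.array([1.2, 0.8, 1.5, 0.9])
h     = np.array([1.0, 1.0, 1.3, 1.1])
```

HMSE and LL scale errors relative to the level of volatility, which matters when volatility clusters.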
They make the following findings:
- The choice of the benchmark (naive predictor or RiskMetrics) has a large effect on the results
- They recommend the root mean squared error and likelihood tests
- The forecasting performance of GARCH models increases with the heteroskedasticity of the data
- The choice of volatility proxy when comparing models (true volatility or squared observations) is important
- Finally, "[they] find that the Reality Check may not be suitable to compare volatility forecasts within a superior predictive ability framework, and we conjecture that this is due to assumptions made on the test statistic as reported in Hansen (2001)". I'm not sure what they mean by "superior predictive ability framework" (yet).
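The core of the Reality Check can be sketched as follows: take each model's average loss advantage over the benchmark, keep the best one, and bootstrap a p-value for it. White's original test uses the stationary bootstrap of Politis and Romano; the i.i.d. resampling below is a simplification for illustration:

```python
import numpy as np

def reality_check(bench_loss, model_losses, n_boot=2000, seed=0):
    """Simplified White's Reality Check.
    bench_loss: (T,) losses of the benchmark forecast.
    model_losses: (K, T) losses of the K competing forecasts.
    Returns a bootstrap p-value for H0: no model beats the benchmark."""
    rng = np.random.default_rng(seed)
    d = bench_loss - model_losses             # (K, T): >0 where a model wins
    T = d.shape[1]
    v = np.sqrt(T) * d.mean(axis=1).max()     # statistic: best average gain
    v_boot = np.empty(n_boot)
    for b in range(n_boot):                   # i.i.d. bootstrap (White uses
        idx = rng.integers(0, T, size=T)      # the stationary bootstrap)
        db = d[:, idx]
        # Recentre so the bootstrap draws satisfy the null.
        v_boot[b] = np.sqrt(T) * (db.mean(axis=1) - d.mean(axis=1)).max()
    return np.mean(v_boot >= v)               # p-value
```

Searching over many candidate models inflates the chance that the best one looks good by luck; the max-over-models statistic is exactly what corrects for that data snooping.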
Labels: forecasting, garch, riskmetrics, white's reality check