Fitting time series models to the forex market: are ARIMA/GARCH predictions profitable?

Recently, I wrote about fitting mean-reversion time series models to financial data and using the models’ predictions as the basis of a trading strategy. Continuing my exploration of time series modelling, I decided to research the autoregressive and conditionally heteroskedastic family of time series models. In particular, I wanted to understand the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroskedasticity (GARCH) models, since they are referenced frequently in the quantitative finance literature, and it’s about time I got up to speed. What follows is a summary of what I learned about these models, a general fitting procedure and a simple trading strategy based on the forecasts of a fitted model.

Several definitions are necessary to set the scene. I don’t want to reproduce the theory I’ve been wading through; rather, here is my very high-level summary of what I’ve learned about time series modelling, in particular the ARIMA and GARCH models and how they are related to their component models:

At its most basic level, fitting ARIMA and GARCH models is an exercise in uncovering the way in which observations, noise and variance in a time series affect subsequent values of the time series.  Such a model, properly fitted, would have some predictive utility, assuming of course that the model remained a good fit for the underlying process for some time in the future.

An ARMA model (note: no “I”) is a linear combination of an autoregressive (AR) model and a moving average (MA) model. An AR model is one whose predictors are the previous values of the series. An MA model is structurally similar to an AR model, except the predictors are the noise terms. An autoregressive moving average model of order (p,q) – ARMA(p,q) – is a linear combination of the two and can be defined as:

(1)   \begin{equation*} X_{t} = a_{1}X_{t-1} + a_{2}X_{t-2} + \dots + a_{p}X_{t-p} + w_{t} + b_{1}w_{t-1} + b_{2}w_{t-2} + \dots + b_{q}w_{t-q} \end{equation*}

where w_{t} is white noise and a_{i} and b_{i} are coefficients of the model.
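
As a quick illustration of these definitions (separate from the trading strategy below), base R can simulate an ARMA process and then recover its coefficients; the orders and coefficient values here are arbitrary:

set.seed(42)
x <- arima.sim(model = list(ar = 0.5, ma = 0.3), n = 1000)  # simulate ARMA(1,1)
arima(x, order = c(1, 0, 1))  # estimated ar1 and ma1 should be near 0.5 and 0.3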

An ARIMA(p,d,q) model is simply an ARMA(p,q) model fitted to a series that has been differenced – or integrated, the “I” in the acronym – d times to render it stationary.

Finally, a GARCH model attempts to explain the heteroskedastic behavior of a time series (that is, the characteristic of volatility clustering), in addition to the serial influences of the previous values of the series (captured by the AR component) and the noise terms (captured by the MA component). A GARCH model uses an autoregressive process for the variance itself; that is, it uses past values of the variance to account for changes to the variance over time.
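
For reference, the GARCH(1,1) variant used later in this post models the conditional variance as:

(2)   \begin{equation*} \sigma_{t}^{2} = \omega + \alpha_{1}w_{t-1}^{2} + \beta_{1}\sigma_{t-1}^{2} \end{equation*}

where w_{t-1} is the previous period’s noise term (as in the ARMA definition above), \sigma_{t-1}^{2} is the previous period’s variance, and \omega, \alpha_{1} and \beta_{1} are coefficients of the model.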

With that context-setting out of the way, I next fit an ARIMA/GARCH model to the EUR/USD exchange rate and use it as the basis of a trading system. The model’s parameters are re-estimated each day using a fitting procedure; the fitted model is then used to predict the next day’s return, and a position is entered accordingly and held for one trading day. If the prediction is the same as for the previous day, the existing position is maintained.

A rolling window of log returns is used to fit an optimal ARIMA/GARCH model at the close of each trading day. The fitting procedure is based on a brute force search for the parameters that minimize the Akaike Information Criterion, but other methods could be used. For example, we could choose parameters that minimize the Bayesian Information Criterion, which may help to reduce overfitting by penalizing complex models (that is, models with a large number of parameters). This fitting procedure was inspired by Michael Halls-Moore’s post about an ARIMA+GARCH trading strategy for the S&P500, and I borrowed some of his code.

I chose to use a rolling window of 1000 days to fit the model, but this is a parameter for optimization. There is a case for using as much data as possible in the rolling window, but this may fail to capture the evolving model parameters quickly enough to adapt to a changing market. I won’t explore this too much here, but it would be interesting to investigate the strategy’s performance as a function of the lookback window. Here’s the code:
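
(The original script is available via the download link at the end of this post; what follows is a minimal sketch of the daily fitting loop, assuming the rugarch package and a numeric vector of log returns named returns. The window length, ARMA order search range and distribution choice are illustrative.)

library(rugarch)

windowLength <- 1000
foreLength <- length(returns) - windowLength
forecasts <- numeric(foreLength)

for (d in 0:(foreLength - 1)) {
  windowReturns <- returns[(d + 1):(d + windowLength)]

  # Brute-force search over ARMA orders, keeping the fit with the lowest AIC
  bestAIC <- Inf
  bestOrder <- c(0, 0)
  for (p in 0:5) for (q in 0:5) {
    if (p == 0 && q == 0) next
    armaFit <- tryCatch(arima(windowReturns, order = c(p, 0, q)),
                        error = function(e) NULL)
    if (!is.null(armaFit) && AIC(armaFit) < bestAIC) {
      bestAIC <- AIC(armaFit)
      bestOrder <- c(p, q)
    }
  }

  # Fit an ARMA(p,q) + GARCH(1,1) model and forecast the next day's return
  spec <- ugarchspec(
    variance.model = list(garchOrder = c(1, 1)),
    mean.model = list(armaOrder = bestOrder, include.mean = TRUE),
    distribution.model = 'sged')
  garchFit <- tryCatch(ugarchfit(spec, windowReturns, solver = 'hybrid'),
                       error = function(e) NULL, warning = function(w) NULL)

  # Record the forecast log return, or zero (no position) if the fit failed
  if (is.null(garchFit)) {
    forecasts[d + 1] <- 0
  } else {
    forecasts[d + 1] <- fitted(ugarchforecast(garchFit, n.ahead = 1))[1]
  }
}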

First, the directional predictions only: buy when a positive return is forecast and sell when a negative return is forecast. The results of this approach are shown below (no allowance for transaction costs):

[Figure: GARCH strategy returns, EUR/USD]

You might have noticed that in the model fitting procedure above, I retained the actual forecast return values as well as the direction of the forecast return. I want to investigate the predictive power of the magnitude of the forecast return value. Specifically, does filtering trades when the magnitude of the forecast return is below a certain threshold improve the performance of the strategy? The code below performs this analysis for a small return threshold. For simplicity, I converted the forecast log returns to simple returns to enable manipulation of the sign of the forecast and easy implementation.
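
(Again a sketch rather than the full script: forecasts and actualReturns are assumed to hold the forecast log returns from the loop above and the realised daily log returns respectively, and the threshold value is illustrative.)

threshold <- 0.0005

# Convert forecast log returns to simple returns for easy sign manipulation
simpleForecasts <- exp(forecasts) - 1

# Take no position when the forecast magnitude is below the threshold
signal <- ifelse(abs(simpleForecasts) < threshold, 0, sign(simpleForecasts))

# Daily strategy return: position sign times the realised log return
filteredReturns <- signal * actualReturns
plot(cumsum(filteredReturns), type = 'l',
     main = 'ARIMA/GARCH strategy, magnitude filter',
     xlab = 'Trading day', ylab = 'Cumulative log return')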

And the results overlaid with the raw strategy:

[Figure: strategy returns filtered on forecast magnitude, overlaid with the raw strategy]

It occurred to me that the ARIMA/GARCH model we fit on certain days may be a better or worse representation of the underlying process than other days. Perhaps filtering trades when we have less confidence in our model would improve performance. This approach requires that the statistical significance of each day’s model fit be evaluated, and a trade only entered when this significance exceeds a certain threshold. There are a number of ways this could be accomplished. Firstly, we could visually examine the correlogram of the model residuals and make a judgement on the goodness of fit on that basis. Ideally, the correlogram of the residuals would resemble a white noise process, showing no serial correlation. The correlogram of the residuals can be constructed in R as follows:

acf(fit@fit$residuals, main = 'ACF of Model Residuals')

[Figure: ACF of model residuals]

While this correlogram suggests a good model fit, it is obviously not a great approach, as it relies on subjective judgement, not to mention the availability of a human to review each day’s model. A better approach is to examine the Ljung-Box statistics for the model fit. The Ljung-Box test is a hypothesis test for evaluating whether the autocorrelations of the residuals of a fitted model differ significantly from zero. The null hypothesis is that the autocorrelations of the residuals are zero; the alternative is that the series possesses serial correlation. Rejection of the null in favor of the alternative would imply that the model is not a good fit, as there is unexplained structure in the residuals. The Ljung-Box statistic is calculated in R as follows (the choice of 10 lags is illustrative):
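
Box.test(fit@fit$residuals, lag = 10, type = 'Ljung-Box')  # H0: residuals are independent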

The p-value in this case provides evidence that the residuals are independent and that this particular model is a good fit. By way of explanation, the Ljung-Box test statistic (X-squared in the output of the code above) grows larger as the autocorrelation of the residuals increases. The p-value is the probability of obtaining a value as large as or larger than the test statistic under the null hypothesis. Therefore, a high p-value in this case is evidence for independence of the residuals. Note that the test applies jointly to all lags up to the one specified in the Box.test() function.

Applying the Ljung-Box test to each day’s model fit reveals very few days where the null hypothesis of independent residuals is rejected, so extending the strategy to also filter any trades triggered by a poor model fit is unlikely to add much value:

[Figure: strategy returns with both the magnitude and goodness-of-fit filters applied]
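
For completeness, here is a minimal sketch of how such a goodness-of-fit filter could be wired in, assuming a vector pValues (initialized to numeric(foreLength) before the fitting loop) records each day’s Ljung-Box p-value:

# Inside the fitting loop: record the Ljung-Box p-value for the day's fit
pValues[d + 1] <- Box.test(garchFit@fit$residuals, lag = 10,
                           type = 'Ljung-Box')$p.value

# After the loop: veto any trade on days where the null of independent
# residuals is rejected at the 5% level
signal <- ifelse(pValues < 0.05, 0, signal)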

Conclusions and future work

The ARIMA/GARCH strategy outperforms a buy and hold strategy on the EUR/USD over the backtest period; however, the performance is nothing spectacular. It seems possible to improve the strategy by filtering trades on characteristics such as the magnitude of the prediction and the goodness of fit of the model, although the latter adds little value in this particular example. Another filtering option could be to calculate the 95% confidence interval for each day’s forecast and only enter a trade when the sign of each limit is the same, although this would greatly reduce the number of trades actually taken. A sketch of this idea follows.
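
(A minimal sketch only, assuming fore is the one-step-ahead ugarchforecast object from the fitting loop, and using 1.96 standard deviations as an approximate 95% interval.)

mu  <- fitted(fore)[1]  # forecast mean return
sig <- sigma(fore)[1]   # forecast standard deviation
lower <- mu - 1.96 * sig
upper <- mu + 1.96 * sig

# Only take the trade when both interval limits share the same sign
signal <- ifelse(sign(lower) == sign(upper), sign(mu), 0)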

There are many other varieties of the GARCH model, for example exponential, integrated, quadratic, threshold, structural and switching, to name a few. These may or may not provide a better representation of the underlying process than the simple GARCH(1,1) model used in this example. For an exposition of these and other flavors of GARCH, see Bollerslev et al. (1994).

An area of research that I have found highly interesting recently is time series forecasting through the intelligent combination of disparate models, for example by taking the average of the individual predictions of several models or seeking consensus or a majority vote on the sign of the prediction. To borrow some machine learning nomenclature, this ‘ensembling’ of models can often produce more accurate forecasts than any of the constituent models. Perhaps a useful approach would be to ensemble the predictions of the ARIMA/GARCH model presented here with a suitably trained artificial neural network or other statistical learning method. We could perhaps expect the ARIMA/GARCH model to capture any linear characteristics of the time series, while the neural network may be a good fit for the non-linear characteristics. This is all pure speculation, potentially with some backing from this paper, but an interesting research avenue nevertheless.

If you have any ideas for improving the forecast accuracy of time series models, I’d love to hear about them in the comments.

Finally, credit where credit is due: although I worked my way through numerous sources of information on financial time series modelling, I found Michael Halls-Moore’s detailed posts on the subject extremely helpful. He starts from the beginning and works through various models of increasing complexity. As stated in the main post, I also borrowed from his ARIMA + GARCH trading strategy for the S&P500 in designing the EUR/USD strategy presented here, particularly the approach to determining model parameters through iterative minimization of the Akaike Information Criterion. The ideas around filtering trades on the basis of the results of the Ljung-Box test and the absolute magnitude of the forecast value were my own (although I’m sure I’m not the first to come up with them).

Other references I found particularly useful:

Bollerslev, T. (2001). Financial Econometrics: Past Developments and Future Challenges, in Journal of Econometrics, Vol. 100, 41-51.

Bollerslev, T., Engle, R.F. and Nelson, D.B. (1994). GARCH Models, in: Engle, R.F., and McFadden, D.L. (eds.) Handbook of Econometrics, Vol. 4, Elsevier, Amsterdam, 2961-3038.

Engle, R. (2002). New Frontiers for ARCH Models, in Journal of Applied Econometrics, Vol. 17, 425-466.

Qi, M. and Zhang, G.P. (2008). Trend Time Series Modeling and Forecasting with Neural Networks, in IEEE Transactions on Neural Networks, Vol. 19, No. 5, 808-816.

Tsay, R. (2010). Conditional Heteroscedastic Models, in Tsay, R. Analysis of Financial Time Series, Third Edition, Wiley, 109-174.

Here you can download the code and data used in this analysis: arima_garch

Comments

  • Matt haines

    February 4, 2016

    I was literally struggling through a bunch of ARMA ARIMA GARCH box test reading and then took a break to read your blog post. “Yes!” I shouted (in my head) when I read about you pondering all those big words and acronyms I’ve been struggling with. And then I realized you were also basing your work off Michael’s writings. Damn stuff hurts my head. But I’m slowly getting it. You’re about 4 parsecs ahead of me so I’m going to have to keep an eye on your work as well. 😃 Thanks.

    • Robot Master

      February 4, 2016

      Hey Matt, thanks for the comment! I hope my article was useful for you. Yes, I learned a lot from Michael’s posts on this subject. He’s heavy on the detail and presents it in a logical way that continuously builds on the previous information. I recently purchased the rough cut of his latest book and refer to it often. Very much looking forward to the final release. In my article, I was aiming to succinctly summarise the theory and focus on some trading ideas that seem a fairly natural extension. Hopefully it was helpful!

  • Beliavsky

    February 5, 2016

    Thanks for your post. Could you say what the Sharpe ratios of the tested strategies were?

    • Robot Master

      February 8, 2016

      I didn’t calculate Sharpe ratios when I ran these strategies. You could easily do this yourself by running the script (available via the download link, along with the data I used) and using the PerformanceAnalytics package in R.


  • KAFEBR

    July 8, 2016

    Hello, I am new to time series fitting and found your article very interesting. My question is: is there not a random value involved in the prediction of the price, per the definition of a GARCH series? If so, would it not make sense to calculate the probability for the forecast to be long or short by using the ARMA value as the mean and the GARCH volatility as the standard deviation, and maybe apply a filter then by accepting only values above a certain threshold?

    • Robot Master

      August 2, 2016

      Hey, thanks for reading my blog. I think you are referring to the noise term in the GARCH definition? You could certainly experiment with an ARMA model – I’d love to hear about the results – but I’m not sure how that relates to the noise term in the GARCH model?

  • Infinity

    August 30, 2016

    Thanks for the tutorial. What line(s) of code would we need to account for transaction costs?

    • Robot Master

      August 31, 2016

      There are a few ways to do it, depending on how accurate or complex a transaction cost model you want. If a fixed transaction cost model would suffice, you could simply subtract this fixed transaction cost from each of your returns. For example, if a round turn on the EUR/USD costs you 1.5 pips, you simply do returns <- returns - 0.00015

      Of course, in reality you would get variable spread and variable slippage depending on factors such as real-time volatility and liquidity, so this may or may not be accurate enough for your purposes.

