Back to Basics Part 3: Backtesting in Algorithmic Trading


This is the final post in our 3-part Back to Basics series. You may be interested in checking out the other posts in this series.
We’ve also compiled this series into an eBook, which you can download for free here.

Nearly all research related to algorithmic trading is empirical in nature. That is, it is based on observations and experience. Contrast this with theoretical research, which is based on assumptions, logic and a mathematical framework. Often we start with a theoretical approach (for example, a time-series model that we assume describes the process generating the market data we are interested in) and then use empirical techniques to test the validity of our assumptions and framework. But we would never commit money to a mathematical model that we merely assumed described the market without testing it against real observations, and every model is based on assumptions (to my knowledge, no one has ever come up with a comprehensive model of the markets from first-principles logic and reasoning). So empirical research will nearly always play a role in the type of work we do in developing trading systems.
So why is that important?
Empirical research is based on observations that we obtain through experimentation. Sometimes we need thousands of observations in order to carry out an experiment on market data, and since market data arrives in real time, we might have to wait a very long time to run such an experiment. If we mess up our experimental setup or think of a new idea, we would have to start the process all over again. Clearly this is a very inefficient way to conduct research.
A much more efficient way is to simulate our experiment on historical market data using computers. In the context of algorithmic trading research, such a simulation of reality is called a backtest. Backtesting allows us to test numerous variations of our ideas or models quickly and efficiently and provides immediate feedback on how they might have performed in the past. This sounds great, but in reality, backtesting is fraught with difficulties and complications, so I decided to write an article that I hope illustrates some of these issues and provides some guidance on how to deal with them.

Why Backtest?

Before I get too deeply into backtesting theory and its practical application, let’s back up and talk about why we might want to backtest at all. I’ve already said that backtesting allows us to carry out empirical research quickly and efficiently. But why do we even need to do that? Everyone knows that we should just buy when the Relative Strength Index drops below 30, right?
OK, so that was obviously a rhetorical question. But I just wanted to highlight one of the subtly dangerous modes of thinking that can creep in if we are not careful. Now, I know that for the vast majority of Robot Wealth readers I am preaching to the converted here, but over the last couple of years I’ve worked with a lot of individuals who have come to trading from non-mathematical or non-scientific backgrounds and who struggle with this very issue, sometimes unconsciously. This is a good place to address it, so here goes.
In the world of determinism (that is, well-defined cause and effect), natural phenomena can be represented by tractable mathematical equations. Engineers and scientists reading this will be well-versed in, for example, Newton’s laws of motion. These laws quantify a physical consequence given a set of initial conditions and are solvable by anyone with a working knowledge of high-school mathematics. The markets, however, are not deterministic (at least not in the sense that the information we can readily digest describes the future state of the market). That seems obvious, right? The RSI dropping below 30 does not portend an immediate price rise. And if price were to rise, it didn’t happen because the RSI dropped below 30. Sometimes prices will rise following this event, sometimes they will fall and sometimes they will do nothing. We can never tell for sure, and often we can’t describe the underlying cause beyond more people buying than selling. Most people can accept that fact. However, I have observed time and again a paradox: the same person who accepts that markets are not deterministic will believe in a set of trading rules because they read them in a book or on the internet.
I have numerous theories about why this is the case, but one that stands out is that it is simply easy to believe things that are nice to believe. Human nature is like that. This particular line of thinking is extraordinarily attractive because it implies that if you do something (simple) over and over again, you will make a lot of money. But that’s a dangerous trap to fall into, and you can fall into it even if your rational self knows that the markets are not deterministic, so long as you don’t question the assumptions underlying that trading system you read about.
I’m certainly not saying that all DIY traders fall into this trap, but I have noticed it on more than a few occasions. If you’re new to this game, or you’re struggling to be consistently profitable, maybe this is a good thing to think about.
I hope it is clear now why backtesting is important. Some trading rules will make you money; most won’t. But the ones that do make money don’t work because they accurately describe some natural system of physical laws. They work because they capture a market characteristic that over time produces more profit than loss. You never know for sure if a particular trade is going to work out, but sometimes you can conclude that in the long run, your chances of coming out in front are pretty good. Backtesting on past data is the one tool that can help provide a framework in which to conduct experiments and gather information that supports or detracts from that conclusion.

Simulation versus Reality

You might have noticed that in the descriptions of backtesting above I used the words simulation of reality and how our model might have performed in the past. These are very important points! No simulation of reality is ever exactly the same as reality itself. Statistician George Box famously said “All models are wrong, but some are useful” (Box, 1976). It is critical that our simulations be accurate enough to be useful. Or more correctly, we need our simulations to be fit for purpose. After all, a simulation of a monthly ETF rotation strategy may not need all the bells and whistles of a simulation of high-frequency statistical arbitrage trading. The point is that any simulation must be accurate enough to support the decision-making process for a particular application, and by “decision-making process” I mean the decisions around allocating to a particular trading strategy.
So how do we go about building a backtesting environment that we can use as a decision-support tool? Unfortunately, backtesting is not a trivial matter and there are a number of pitfalls and subtle biases that can creep in and send things haywire. But that’s OK, in my experience the people who are attracted to algorithmic trading are usually up for a challenge!
At its simplest level, backtesting requires that your trading algorithm’s performance be simulated using historical market data and the profit and loss of the resulting trades aggregated. This sounds simple enough, but in practice it is incredibly easy to get inaccurate results from the simulation, or to contaminate it with bias such that it provides an extremely poor basis for making decisions. Dealing with these two problems requires that we consider:

  1. The accuracy of our simulation; and
  2. Our experimental methodology and framework for drawing conclusions from its results

Both these aspects need to be considered in order to have any level of confidence in the results of a backtest. I can’t emphasize enough just how important it is to ensure these concepts are taken care of adequately; compromising them can invalidate the results of the experiment. Most algorithmic traders spend vast amounts of time and effort researching and testing ideas and it is a tragic waste of time if not done properly. The next sections explore these concepts in more detail.

Simulation Accuracy

If a simulation is not an accurate reflection of reality, what value is it? Backtests need to be a true enough reflection of real trading to be fit for their intended purpose. In theory, they should generate the very same trades, with the same results, as live trading the same system during the same time period.
In order to understand the accuracy of a backtest, you need to understand how the results are simulated and what the limitations of the simulation are. No model can precisely capture the phenomena being simulated, but it is possible to build a model that is useful for its intended purposes. It is imperative that we create models that are as accurate a reflection of reality as possible, but equally that we are aware of their limitations; even the most realistic backtesting environments have them.
Backtesting accuracy can be affected by:

  • The parameters that describe the trading conditions (spread, slippage, commission, swap) for individual brokers or execution infrastructure. Most brokers or trading setups will result in different conditions, and conditions are rarely static. For example, the spread of a market (the difference between the prices at which the asset can be bought and sold) changes as buyers and sellers submit new orders and amend old ones. Slippage (the difference between the target and actual prices of trade execution) is impacted by numerous phenomena, including market volatility, market liquidity, the order type and the latency inherent in the trade execution path. The method of accounting for these time-varying trading conditions can have a big impact on the accuracy of a simulation (a minimal cost-modelling sketch follows this list). The most appropriate method will depend on the strategy and its intended use. For example, high-frequency strategies that are intended to be pushed to their limits of capacity would benefit from modelling the order book for liquidity. That approach might be overkill for a monthly ETF rotation strategy being used to manage an individual’s retirement savings.
  • The granularity (sampling frequency) of the data used in the simulation, and its implications. Consider a simulation that relies on hourly open-high-low-close (OHLC) data. This would result in trade entry and exit parameters being evaluated on every hour using only four data points from within that hour. What happens if a take profit and a stop loss were evaluated as being hit during the same OHLC bar? It isn’t possible to know which one was hit first without looking at the data at a more granular level (see the ambiguity check sketched after this list). Whether this is a problem will depend on the strategy itself and its entry and exit parameters.
  • The accuracy of the data used in the simulation. No doubt you’ve heard the modelling adage “Garbage in, garbage out.” If a simulation runs on poor data, the accuracy of the results will obviously deteriorate. Some of the vagaries of data include the presence of outliers or bad ticks, missing records, misaligned time stamps or wrong time zones, and duplicates. Financial data can have its own unique set of issues too. For example, stock data may need to be adjusted for splits and dividends. Some data sets are contaminated with survivorship bias, containing only stocks that avoided bankruptcy and thus building an upward bias into the aggregate price evolution. Over-the-counter products, like forex and CFDs, can trade at different prices at different times depending on the broker, so a data set obtained from one source may not be representative of the trade history of another source. Again, the extent to which these issues are problems depends on the individual algorithm and its intended use.
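
The first point above is easiest to appreciate with a toy example. Below is a minimal sketch, in R, of charging spread, slippage and commission against a list of simulated trades. The trade list and all of the cost figures are made-up assumptions for illustration; real conditions vary over time and from broker to broker.

## Sketch: applying trading costs to simulated trades (illustrative figures only)
set.seed(1)
n <- 100
trades <- data.frame(
  entry = 100 + cumsum(rnorm(n)),                 # hypothetical entry prices
  exit = 100 + cumsum(rnorm(n)),                  # hypothetical exit prices
  direction = sample(c(1, -1), n, replace = TRUE) # 1 = long, -1 = short
)
spread <- 0.02      # quoted spread in price units (time-varying in reality)
slippage <- 0.01    # assumed average adverse slippage per side
commission <- 0.005 # commission per unit per side
# Gross P&L ignores all trading frictions
trades$gross <- trades$direction * (trades$exit - trades$entry)
# Net P&L pays half the spread, plus slippage and commission, on each side
cost.per.side <- spread / 2 + slippage + commission
trades$net <- trades$gross - 2 * cost.per.side
# Frictions can turn an apparently profitable backtest into a loser
c(gross = sum(trades$gross), net = sum(trades$net))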
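
And here is the ambiguity check mentioned in the second point: a sketch, on simulated hourly bars, of counting the bars in which both a stop loss and a take profit would have been touched, so that bar data alone cannot tell us which was hit first. The bar-generation process and the stop/target widths are assumptions for illustration only.

## Sketch: flagging OHLC bars in which both stop and target are touched
set.seed(2)
n <- 500
open <- 100 + cumsum(rnorm(n, 0, 0.5))
high <- open + runif(n, 0, 1)  # crude intra-bar range; real bars differ
low <- open - runif(n, 0, 1)
entry <- open          # assume a long entry at each bar's open
stop <- entry - 0.6    # illustrative stop-loss distance
target <- entry + 0.6  # illustrative take-profit distance
# A bar is ambiguous if both levels fall inside its high-low range
ambiguous <- low <= stop & high >= target
cat(sprintf("%.1f%% of bars are ambiguous at this stop/target width\n",
            100 * mean(ambiguous)))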

The point of the above is that the accuracy of a simulation is affected by many different factors. These should be understood in the context of the strategy being simulated and its intended use so that sensible decisions can be made around the design of the simulation itself. Just like any scientific endeavour, the design of the experiment is critical to the success of the research!
As a practical matter, it is usually a good idea to check the accuracy of your simulations before deploying a strategy to its final production state. This can be done quite easily by running the strategy live on a small account for some period of time, and then simulating the strategy on the actual market data on which it was run. Regardless of how accurate you thought your simulator was, you will likely see (hopefully small) deviations between live trading and simulated trading, even when the same data is being used. Ideally the deviations will be within a sensible tolerance range for your strategy, that is, a range that does not significantly alter your decisions around allocating to the strategy. If the deviations do cause you to rethink a decision, then the simulator was likely not accurate enough for its intended purpose.
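As a sketch of what that comparison might look like, the snippet below lines up hypothetical live and simulated per-trade results and checks the average deviation against a tolerance. The data frames, column names and tolerance are all assumptions, not a prescription.

## Sketch: comparing live fills against simulated fills for the same period
set.seed(3)
live <- data.frame(trade.id = 1:50, pnl = rnorm(50, 10, 5)) # stand-in live results
sim <- live
sim$pnl <- sim$pnl + rnorm(50, 0, 1) # simulation differs by execution noise
m <- merge(live, sim, by = "trade.id", suffixes = c(".live", ".sim"))
m$deviation <- m$pnl.live - m$pnl.sim
summary(m$deviation)
# A tolerance that would not change our allocation decision (hypothetical)
tolerance <- 2
if (mean(abs(m$deviation)) > tolerance)
  message("Deviations exceed tolerance - revisit the cost and slippage model")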

Development Methodology

In addition to simulation accuracy, the experimental methodology itself can compromise the results of our simulations. Many of these biases are subtle yet profound: they can and will very easily creep into a trading strategy research and development process and can have disastrous effects on live performance. Accounting for these biases is critical and needs to be considered at every stage of the development process.
Robot Wealth’s course Fundamentals of Algorithmic Trading details a workflow that you can adopt to minimize these effects as well as an effective method of maintaining records which will go a long way to helping identify and minimize bias in its various forms. For now, I will walk through and explain the various biases that can creep in and their effect on a trading strategy. For a detailed discussion of these biases and a highly interesting account of the psychological factors that make accounting for these biases difficult, refer to David Aronson’s Evidence Based Technical Analysis (2006).

Look-Ahead Bias, Also Known As Peeking Bias

This form of bias is introduced by allowing future knowledge to affect trade decisions. That is, trade decisions are affected by knowledge that would not have been available at the time the decision was taken. A good simulator will be engineered to prevent this from happening, but it is surprisingly easy to allow this bias to creep in when designing our own backtesting tools. A common example is executing an intra-day trade on the basis of the day’s closing price, when that closing price is not actually known until the end of the day.
When using custom-built simulators, you will typically need to give this bias some attention. I commonly use the statistical package R and the Python programming language for various modelling and testing purposes, and when I do, I find that I need to consider this bias in more detail. On the positive side, it is easy to identify when a simulation doesn’t properly account for it, because the resulting equity curve will typically look like the one shown below; it is easy to predict an outcome if we know about the future!
[Figure: a suspiciously smooth, exponential equity curve, typical of a look-ahead-biased backtest]
Another way this bias can creep in more subtly is when we use an entire simulation to calculate a parameter and then retrospectively apply it to the beginning of the next run of the simulation. Portfolio optimization parameters are particularly prone to this bias.
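
To make the remedy concrete, here is a minimal sketch in R of the standard fix: compute the signal on the close, but only act on it at the next bar. The moving-average rule and the simulated prices are illustrative assumptions; the key line is the one-bar lag.

## Sketch: the one-bar lag that prevents look-ahead in a vectorised backtest
set.seed(4)
price <- 100 + cumsum(rnorm(1000))                   # simulated close prices
ret <- c(0, diff(price) / head(price, -1))           # simple per-bar returns
ma <- stats::filter(price, rep(1/50, 50), sides = 1) # 50-bar moving average
signal <- ifelse(price > ma, 1, 0)                   # computed on the close...
signal[is.na(signal)] <- 0
biased <- sum(signal * ret)       # WRONG: earns the same bar's return
lagged <- c(0, head(signal, -1))  # ...so act on it one bar later
unbiased <- sum(lagged * ret)
c(biased = biased, unbiased = unbiased)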

Curve-Fitting Bias, Also Known As Over-Optimization Bias

This is the bias that allows us to create magical backtests that produce annual returns on the order of hundreds of percent. Such backtests are of course completely useless for any practical trading purpose.
The following plots show curve-fitting in action. The blue dots are an artificially generated linear function with some noise added to distort the underlying signal. It has the form \(y = mx + b + \epsilon\), where \(\epsilon\) is noise drawn from a normal distribution, and we have 20 points in our data set. Regression models of varying complexity were fit to the first 10 points of the data set (the in-sample set) and these models were tested against the unseen data consisting of the remaining 10 points not used for model building (the out-of-sample set). You can see that as we increase the complexity of the model, we get a better fit on the in-sample data, but the out-of-sample performance deteriorates drastically. Further, the more complex the model, the worse the deterioration out of sample.
[Figure: four panels showing polynomial fits of degree 1, 2, 3 and 9 to the in-sample points and their predictions over the out-of-sample points]
The more complex models are actually fitting to the noise in the data rather than the signal, and since noise is by definition random, the models that predict based on what they know about the in-sample noise perform horrendously out-of-sample. When we fit a trading strategy to an inherently noisy data set (and financial data is extremely noisy), we run the risk of fitting our strategy to the noise, and not to the underlying signal. The underlying signal is the anomaly or price effect that we believe provides profitable trading opportunities, and this signal is what we are actually trying to capture with our model. If we fit our model to the noise, we will essentially end up with a random trading model, which of course is of little use to anyone, except perhaps your broker.

Data-Mining Bias, Also Known As Selection Bias

Data-mining bias is another significant source of over-estimated model performance. It takes a number of forms and is largely unavoidable; rather than eliminating it completely, we need to be aware of it and take measures to account for it. Most commonly, you will introduce it when you select the best performer from a number of possible algorithms, algorithm variants, variables or markets to take forward in your development process. If you try enough strategies and markets, you will eventually (relatively often, actually) find one that performs well due to luck alone.
For example, say you develop a trend following strategy for the forex markets. The strategy does exceptionally well in its backtest on the EUR/USD market, but fails on the USD/JPY market. By selecting to trade the EUR/USD market, you have introduced selection bias into your process, and the estimate of the strategy’s performance on the EUR/USD market is now upwardly biased. You must either temper your expectations from the strategy, or test it on a new, unseen sample of EUR/USD data and use that result as your unbiased estimate of performance.
It is my experience that beginners to algorithmic trading typically suffer more from the effects of curve-fitting bias during the early stage of their journey, perhaps because those effects tend to be more obvious and intuitive. Selection bias can be just as severe as curve-fitting bias, but is not as intuitively easy to understand. A common mistake is to test multiple variants of a strategy on an out-of-sample data set and then select the best one based on its out-of-sample performance. The selection itself is not necessarily a mistake; the mistake is treating the performance of the selected variant in the out-of-sample period as an unbiased estimate of future performance. This may not be the end of the world if only a small number of variants were compared, but what if we looked at hundreds or even thousands of different variants, as we might do if we were using machine learning methods? Surely among hundreds of out-of-sample comparisons, at least one would show a good result by chance alone? In that case, how can we have confidence in our selected strategy?
This raises the question of how to account for the data-mining bias effect. In practical terms, you will quickly run into difficulty if you try to test your strategy on new data after each selection or decision point, since the amount of data at your disposal is finite. Other methods of accounting for data-mining bias include comparing the performance of the strategy with a distribution of random performances, White’s Reality Check and its variations, and Monte Carlo permutation tests.
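
The following sketch makes the effect concrete under stated assumptions: generate many strategies that are pure noise (zero edge by construction), select the best one on one sample of data, and observe how its apparent performance evaporates on fresh data.

## Sketch: selection bias from picking the best of many zero-edge strategies
set.seed(5)
n.strategies <- 1000
n.days <- 250
# Each column is a year of daily P&L for a strategy with no real edge
is.pnl <- matrix(rnorm(n.days * n.strategies), n.days, n.strategies)
oos.pnl <- matrix(rnorm(n.days * n.strategies), n.days, n.strategies)
sharpe <- function(x) mean(x) / sd(x) * sqrt(250) # annualised Sharpe ratio
is.sharpes <- apply(is.pnl, 2, sharpe)
best <- which.max(is.sharpes)
# The winner looks great in-sample, but only by luck
cat("Best in-sample Sharpe:", round(is.sharpes[best], 2), "\n")
cat("Same strategy out-of-sample:", round(sharpe(oos.pnl[, best]), 2), "\n")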

Conclusion

This final Back to Basics article described why we take an experimental approach to algorithmic trading research and detailed the main barriers to obtaining useful, accurate and meaningful results from our experiments. A robust strategy is one that exploits real market anomalies, inefficiencies or characteristics; however, it is surprisingly easy to find strategies that appear robust in a simulator but whose performance turns out to be due to inaccurate modelling, randomness or one of the biases described above. Clearly, we need a systematic approach to dealing with each of these barriers, and ideally a workflow with embedded tools, checks and balances that account for these issues. I would go so far as to say that in order to do any serious algorithmic trading research, or at least to do it in an efficient and meaningful way, we need a workflow and a research environment that address each of the issues detailed in this article. That’s exactly what we provide in Fundamentals of Algorithmic Trading, which was written to teach new or aspiring algorithmic traders how to operate the tools of the trade and, even more importantly, how to operate them effectively via such a workflow. If you would like to learn such a systematic approach to robust strategy development, head over to the Courses page to find out more.

References

Aronson, D. (2006). Evidence-Based Technical Analysis. Wiley, New York.
Box, G. E. P. (1976). Science and Statistics. Journal of the American Statistical Association, 71, 791-799.

Appendix – R Code for Demonstrating Overfitting

If the concept of overfitting is new to you, you might like to download the R code below that I used to generate the data and the plots from the overfitting demonstration above. It can be very useful for one’s understanding to play around with this code, perhaps generating larger in-sample/out-of-sample data sets, using a different model of the underlying generative process (in particular applying more or less noise), and experimenting with model fits of varying complexity. Enjoy!

## Demonstration of Overfitting
# Create the in-sample (IS) data: a noisy linear process y = mx + b + noise
set.seed(53)
a.is <- 1:10
b.is <- 0.8 * a.is + 1 + rnorm(length(a.is), 0, 1.5)
plot(a.is, b.is)
# Fit models of increasing complexity to the IS data
linear.mod <- lm(b.is ~ a.is)
deg2.mod <- lm(b.is ~ poly(a.is, degree = 2))
deg3.mod <- lm(b.is ~ poly(a.is, degree = 3))
deg9.mod <- lm(b.is ~ poly(a.is, degree = 9)) # interpolates all 10 IS points
# Generate the full 20-point data set; re-using the seed makes the first
# 10 points identical to the IS set, so points 11-20 are out-of-sample (OOS)
aa <- 1:20
set.seed(53)
bb <- 0.8 * aa + 1 + rnorm(length(aa), 0, 1.5)
# Plot each model's predictions over both the IS and OOS regions
par(mfrow = c(2, 2))
plot(aa, bb, col = 'blue', main = "Degree 1 Fit", xlab = "x", ylab = "y")
abline(linear.mod, col = 'red')
abline(v = length(a.is), col = 'black')
text(5, 15, "IN-SAMPLE")
text(15, 5, "OUT-OF-SAMPLE")
plot(aa, bb, col = 'blue', main = "Degree 2 Fit", xlab = "x", ylab = "y")
lines(aa, predict(deg2.mod, newdata = data.frame(a.is = aa)), col = 'red')
abline(v = length(a.is), col = 'black')
text(5, 15, "IN-SAMPLE")
text(15, 5, "OUT-OF-SAMPLE")
plot(aa, bb, col = 'blue', main = "Degree 3 Fit", xlab = "x", ylab = "y")
lines(aa, predict(deg3.mod, newdata = data.frame(a.is = aa)), col = 'red')
abline(v = length(a.is), col = 'black')
text(5, 15, "IN-SAMPLE")
text(15, 5, "OUT-OF-SAMPLE")
plot(aa, bb, col = 'blue', main = "Degree 9 Fit", xlab = "x", ylab = "y")
lines(aa, predict(deg9.mod, newdata = data.frame(a.is = aa)), col = 'red')
abline(v = length(a.is), col = 'black')
text(5, 15, "IN-SAMPLE")
text(15, 5, "OUT-OF-SAMPLE")


8 thoughts on “Back to Basics Part 3: Backtesting in Algorithmic Trading”

  1. Another detailed and informative blog post. Great work Kris. Love the ‘Awesome equity curve’.
    One thing I’m really interested in is to see how GANs might be applied to generating synthetic market data. Not sure how practical it would be to create the data, or if it would be useful even if it were practical. Are you aware of any research currently going into this field?

    • Thanks Jordan, really glad you liked it!
      While I haven’t actually used a GAN to generate synthetic market data, it does seem a sensible approach. I don’t see why you couldn’t mimic the generating process for a particular stock or a particular asset over a particular time period. This would also provide a (very good) starting point for modelling that data in a trading system since you understand the underlying process. The difficulty will be that the generating process itself is unlikely to be stationary. Still, on the surface of it, it seems an interesting and potentially useful research project.

      • Yes, totally agree with the issue of non-stationarity. Definitely looks like an interesting area of research. I think they could be particularly useful for generating more data within specific market regimes, for the purpose of feeding data-hungry models where perhaps there isn’t a large amount of historical data. Still pretty new to this stuff though, so really I have no idea. Look forward to finding out though!

  2. I just finished reading the 3 parts of this blog post (it took me an hour and a half). It’s quite a mind-boggling learning challenge if I really want to make this work, so I’m going to let your words sink in for now.
    By the way, your link to the courses does not work:
    https://www.robotwealth.net/courses

    • Hello Jaap, yes it certainly is a big challenge. There are a few reasons for that, one of them being its multi-disciplinary nature. Nearly everyone has to learn something new when they come to the markets. It’s also something that can be approached from many different perspectives and using many different techniques, which results in this web of information about trading and how to trade. Sifting through that information is a big task in itself. But, as they say, if it were easy….
      Thanks for the heads up re the broken links too.
      Cheers
      Kris

