Posted on Oct 18, 2015 by Kris Longmore

Picture this: a developer has coded up a brilliant strategy, taking great care not to over-optimize. There is no look-ahead bias, and the developer has accounted for data-mining bias. The out-of-sample backtest looks great. Is it time to go live?

I would have said yes, until I read Ernie Chan's Algorithmic Trading and realised that I hadn't adequately accounted for randomness. Whenever we compute a performance metric from a backtest, we face the problem of a finite sample size. We can't know the true value of the performance metric, and the value we computed may or may not be representative of that true value. We may simply have been fooled by randomness into thinking we had a profitable strategy. Put another way: was the strategy's performance simply due to being in the market at the right time?

There are a number of empirical methods that can be used to address this issue. Chan describes three in the book mentioned above, and there are probably others. I am going to implement the approach described by Lo, Mamaysky and Wang (2000), who simulated...
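To make the idea concrete, here is a minimal sketch of a randomization test of this flavour: hold the strategy's market exposure fixed, but randomize *which* days it was in the market, and see how often random timing matches or beats the actual result. The daily returns and the strategy's in-market days below are simulated placeholders, not output from any real backtest, and this is my own illustrative construction rather than the exact procedure of Lo, Mamaysky and Wang.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder data: in practice these would come from your backtest.
n_days = 2000
market = rng.normal(0.0003, 0.01, n_days)     # hypothetical daily market returns
in_market = rng.random(n_days) < 0.4          # days the strategy held a long position
strategy_return = market[in_market].sum()     # the strategy's total return

# Null hypothesis: the strategy has no timing skill, i.e. being long on any
# random set of days with the same total exposure would do just as well.
n_sims = 10_000
exposure = int(in_market.sum())
sim_returns = np.empty(n_sims)
for i in range(n_sims):
    random_days = rng.choice(n_days, size=exposure, replace=False)
    sim_returns[i] = market[random_days].sum()

# p-value: fraction of random-timing simulations that match or beat the strategy.
# A small p-value suggests the timing was unlikely to arise by chance alone.
p_value = (sim_returns >= strategy_return).mean()
print(f"strategy return: {strategy_return:.4f}, p-value vs random timing: {p_value:.3f}")
```

Since the "strategy" here picks its days at random, we would expect a p-value near 0.5; with a real backtest, a p-value well below conventional thresholds would be evidence that the performance is not just lucky timing.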