# Hurst Exponent for Algorithmic Trading

This is the first post in a two-part series about the Hurst Exponent. Tom and I worked on this series together and I drew on some of his previously published work as well as other sources like Quantstart.com.

UPDATE 03/01/16: Please note that the Python code below has been updated with a more accurate algorithm for calculating Hurst Exponent.

Mean-reverting time series have long been a fruitful playground for quantitative traders. In fact, some of the biggest names in quant trading allegedly made their fortunes exploiting mean reversion of financial time series such as artificially constructed spreads, which are used in pairs trading. Identifying mean reversion is therefore of significant interest to algorithmic traders. This is not as simple as it sounds, in part due to the non-stationary nature of financial data.

We both think that Ernie Chan’s book “Algorithmic Trading: Winning Strategies and Their Rationale” is one of the better introductions to mean reversion available in the public domain. In the book, Ernie discusses several tools that can be used to test whether a time series is mean reverting. One is the Augmented Dickey-Fuller test for mean reversion. Ernie also goes into some detail about the Johansen test. Both of these have previously been explored on Robot Wealth and implemented using some simple R code (here and here). Another interesting aspect of testing for mean reversion is the calculation of the Hurst Exponent.

The idea behind the Hurst Exponent H is that it can supposedly help us determine whether a time series is a random walk (H ~ 0.5), trending (H > 0.5) or mean reverting (H < 0.5) for a specific period of time. However, if you’ve ever used Hurst, you know that it can be a bit bewildering: not only does it often give unexpected results, but it also returns different results depending on the implementation used in its calculation. Further, there are a few different methods for calculating Hurst; we found that these generally agree for a randomly generated time series, but disagree when we use real data.
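To illustrate that point about implementations disagreeing, here is a sketch (the helper names `hurst_diffusion` and `hurst_rs` and the window sizes are our own, not from any library) comparing a simple lagged-variance estimator with an uncorrected rescaled-range (R/S) estimator on a randomly generated series. On a pure random walk both should come out near 0.5, although the uncorrected R/S estimate is known to sit a little high at small window sizes:

```python
import numpy as np

def hurst_diffusion(ts, lags=range(2, 20)):
    # slope of log(std of lagged differences) vs log(lag)
    ts = np.asarray(ts)
    tau = [np.std(ts[lag:] - ts[:-lag]) for lag in lags]
    return np.polyfit(np.log(list(lags)), np.log(tau), 1)[0]

def hurst_rs(increments, window_sizes):
    # uncorrected rescaled-range (R/S) estimate on the increment series
    rs_means = []
    for n in window_sizes:
        rs = []
        for i in range(len(increments) // n):
            x = increments[i * n:(i + 1) * n]
            y = np.cumsum(x - x.mean())   # mean-adjusted cumulative deviations
            if x.std() > 0:
                rs.append((y.max() - y.min()) / x.std())
        rs_means.append(np.mean(rs))
    return np.polyfit(np.log(window_sizes), np.log(rs_means), 1)[0]

np.random.seed(1)
rw = np.cumsum(np.random.randn(100000))   # a pure random walk

h_lag = hurst_diffusion(rw)
h_rs = hurst_rs(np.diff(rw), [16, 32, 64, 128, 256])
print(round(h_lag, 2), round(h_rs, 2))    # compare the two estimates
```

On real data, where the scaling behaviour differs across horizons, the gap between estimators like these tends to widen further.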

## So how can Hurst Exponent be of value to algo traders?

The remainder of this post is devoted to presenting and discussing some Python code for calculating Hurst. In the next post, we are going to delve more deeply into the calculation and work out what’s going on. Our ultimate goal is to demystify the Hurst Exponent and show how to take it beyond some nice theory to something of practical value to algo traders.

Without further ado, here is the code for calculating the Hurst Exponent in Python. We determine Hurst by first calculating the standard deviation of the difference between a series and its lagged counterpart, then repeating this calculation for a number of lags. Plotting the result against the lag on a log-log scale yields an approximately straight line, the slope of which provides an estimate for the Hurst exponent. I found this article which describes this approach to calculating Hurst, as does this one.

```python
import numpy as np
import matplotlib.pyplot as plt

# first, create an arbitrary time series, ts (a random walk)
ts = [0.0]
for i in range(1, 100000):
    ts.append(ts[i - 1] + np.random.randn())

# calculate standard deviation of differenced series using various lags
lags = range(2, 20)
tau = [np.sqrt(np.std(np.subtract(ts[lag:], ts[:-lag]))) for lag in lags]

# plot on log-log scale
plt.plot(np.log(list(lags)), np.log(tau))
plt.show()

# calculate Hurst as slope of log-log plot
m = np.polyfit(np.log(list(lags)), np.log(tau), 1)
hurst = m[0] * 2.0
print('hurst =', hurst)
```
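As a quick sanity check (not part of the original post), we can wrap the same calculation in a function, here called `hurst`, and run it on synthetic series whose behaviour we know in advance. The parameters below are illustrative:

```python
import numpy as np

def hurst(ts, lags=range(2, 20)):
    # the same estimator as above, wrapped for reuse
    ts = np.asarray(ts)
    tau = [np.sqrt(np.std(ts[lag:] - ts[:-lag])) for lag in lags]
    return np.polyfit(np.log(list(lags)), np.log(tau), 1)[0] * 2.0

np.random.seed(42)
n = 100000

# random walk: independent increments, expect H near 0.5
rw = np.cumsum(np.random.randn(n))

# mean-reverting: AR(1) with a strong pull back to zero, expect H well below 0.5
mr = np.zeros(n)
for i in range(1, n):
    mr[i] = 0.5 * mr[i - 1] + np.random.randn()

# trending: positively correlated (smoothed) increments, expect H above 0.5
trend = np.cumsum(np.convolve(np.random.randn(n), np.ones(50) / 50, mode='same'))

h_rw, h_mr, h_trend = hurst(rw), hurst(mr), hurst(trend)
print('random walk   ', round(h_rw, 2))
print('mean reverting', round(h_mr, 2))
print('trending      ', round(h_trend, 2))
```

On synthetic data like this, the estimator behaves as the theory suggests; the interesting complications arrive when we feed it real market data.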

You can see in the code that we used lags 2 through 19 (that is, range(2, 20)) for calculating H. These lags are somewhat arbitrary, but were chosen based on the best results we obtained using synthetic data with known behaviour. Specifically, we found that if we set the maximum number of lags too high, the results became quite inaccurate. These values are the defaults used in some other implementations, such as the standard Hurst function in MATLAB.
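One way to see why the lag range matters: the lags effectively set the horizon over which H is measured. A slowly mean-reverting AR(1) series looks almost exactly like a random walk over short horizons, and only a longer lag range picks up the mean reversion. Here is a sketch (the AR coefficient and the lag ranges are illustrative choices, not from the original post):

```python
import numpy as np

def hurst(ts, lags):
    # the same estimator as in the main code, parameterised by lag range
    ts = np.asarray(ts)
    tau = [np.sqrt(np.std(ts[lag:] - ts[:-lag])) for lag in lags]
    return np.polyfit(np.log(list(lags)), np.log(tau), 1)[0] * 2.0

np.random.seed(7)
n = 100000

# a slowly mean-reverting AR(1): locally it resembles a random walk, and only
# reveals its mean reversion over horizons of roughly 100+ observations
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.99 * x[i - 1] + np.random.randn()

h_short = hurst(x, range(2, 20))      # short lags: H comes out near 0.5
h_long = hurst(x, range(2, 500, 10))  # long lags: H drops well below 0.5
print('short lags:', round(h_short, 2))
print('long lags: ', round(h_long, 2))
```

Neither answer is "wrong" here; they simply measure different timescales, which is exactly the thread we pick up in Part 2.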

In Demystifying the Hurst Exponent: Part 2, we will look at these lags in more detail and show how they are actually crucial for calculating Hurst in a way that is useful and meaningful. We tweak this part of the calculation to uncover a practical application of Hurst in developing algo trading systems.

If you have used the Hurst Exponent, or indeed any of the other tests for mean reversion that we mentioned in this post, please share your experiences in the comments. Thanks!

### 15 thoughts on “Hurst Exponent for Algorithmic Trading”

1. Hi Kris,
I’ve had a brief look at Kaufman’s Efficiency Ratio. The results weren’t great, but I didn’t dig into it effectively enough to fully rule it out as a filter for the mean reversion systems I’m trading.
Happy to write it up with a proper analysis.
Nick

• Hey Nick
I’d love to see a proper analysis of Kaufman’s Efficiency Ratio! You are welcome to share it here!
Cheers

• Thanks Kris – will do – I’ll write it up and submit. Many thanks

2. Interesting post thanks… I’m currently investigating applying the Hurst exponent in machine learning to improve my trade selection. Try as I might, I can’t find a default Hurst Matlab function however!

3. Hi Kris,

Thanks for the article, incredibly interesting. I was just wondering if you could shed some light on where the sqrt of std comes from? I’ve been through both articles you linked to and find the R/S method of calculating the Hurst exponent (the one originally derived by Hurst for his work on Nile river levels) to be intuitively easier to understand. Looking at your update note, my guess would be that you were initially using that algorithm for calculating Hurst, but then moved on to the Generalized Hurst Exponent calculation that Mike from QuantStart uses on his page?
I’ve been through Mike’s explanation, but I still can’t seem to rationalize why we are using the square root of the standard deviation in this algorithm. Any light you could shed on that would be much appreciated.
Thanks again for the great article.

4. Hi Kris,

thanks for sharing.
I think we can skip the sqrt() step here: tau = [sqrt(std(subtract(ts[lag:], ts[:-lag]))) for lag in lags],
and there is no need to multiply by 2 here: hurst = m[0]*2.0.
So the computation simplifies to:

tau = [std(subtract(ts[lag:], ts[:-lag])) for lag in lags]

hurst = m[0]

which saves computation and is easier to understand from the definition of the Hurst exponent.

But still thanks a lot for sharing this, learned a lot. 🙂

Regards
Andy

• Nice one! Thanks Andy, that makes a lot of sense.
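For anyone who wants to verify it: since log √x = ½ log x, halving the exponent and doubling the slope cancel exactly, so the two formulations agree to floating-point precision. A quick sketch (series and seed are arbitrary):

```python
import numpy as np

np.random.seed(3)
ts = np.cumsum(np.random.randn(10000))
lags = range(2, 20)

# original formulation: sqrt of the std, slope doubled
tau_a = [np.sqrt(np.std(ts[lag:] - ts[:-lag])) for lag in lags]
h_a = np.polyfit(np.log(list(lags)), np.log(tau_a), 1)[0] * 2.0

# Andy's simplification: std directly, slope taken as-is
tau_b = [np.std(ts[lag:] - ts[:-lag]) for lag in lags]
h_b = np.polyfit(np.log(list(lags)), np.log(tau_b), 1)[0]

print(abs(h_a - h_b) < 1e-9)  # True
```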

5. Hello,

Noob here, why are you taking the square root of the standard deviation of the differenced lags? Thought we were just trying to find the standard deviation?

Is it just so they are on a more condensed scale?