This is the first post in our 3-part Back to Basics series. You may be interested in checking out the other posts in this series. We’ve also compiled the series into an eBook which you can download for free here.
This is the first in a series of posts in which we will change gears slightly and take a look at some of the fundamentals of algorithmic trading. So far, Robot Wealth has focused on machine learning and quantitative trading research, but I had several conversations recently that motivated me to explore some of the fundamental questions around algorithmic trading. In the next few posts, we will investigate questions such as:
- What is algorithmic trading?
- What can algorithmic trading do for me?
- What are the prerequisites? What should I think about before getting started?
- What’s all this fuss about curve fitting and robust optimisation? Why should I care?
So without further ado, let’s dive in!
What is Algorithmic Trading?
At its most basic level, algorithmic trading is simply the automated buying and selling of financial instruments, like stocks, bonds and futures. It requires a networked connection to an electronic exchange, broker or counterparty, and a means of programmatically buying, selling and performing other tasks related to trading, such as monitoring price action and market exposure.
Algorithmic trading is enabled thanks to the rise of electronic exchanges – a relatively recent phenomenon. Once upon a time, financial products were traded in the so-called ‘pit’ inside the exchange building using the ‘open outcry’ method. This consisted of brokers and traders being physically present in the pit and shouting prices at which they were willing to buy and sell. Participants even used hand signals to convey their intentions. This gradually began to give way to telephone trading and eventually to electronic trading. The shift started sometime in the 1980s and continues to this day; however, the vast majority of exchanges around the world are now electronic.
Naturally, this evolution changed the dynamics of the trading process. Anecdotally, pit traders could sometimes read each other’s intentions through the physical proximity that comes with being in the pit – something that is obviously impossible when market participants trade electronically and can be separated by vast distances. Stories of life in the pit make for interesting and often amusing reading. Some curated links:
- The Chicago Business School compiled the reflections of a number of pit traders
- This post makes for an interesting insight into life in the pits
- As do these ‘Tales of the Pit’.
- There’s even a blog about pit trading!
It is also worth noting that algorithmic trading is not just for exchange-traded markets: over-the-counter (OTC) markets are also traded algorithmically. An OTC market is one where orders are not executed through a central exchange, but rather between two parties. OTC algorithmic trading typically takes place via an Electronic Communication Network (ECN) or dark pool. The former is typically used by market makers to disseminate and match orders with their network of counterparties. The latter is more like a private execution venue where liquidity is provided by the participants of the dark pool, away from the exchange.
Advocates of electronic trading point out the attendant increased market efficiency and reduced opportunity for manipulation. Electronic trading is also typically less expensive and with the advent of cheap Internet, is accessible to anyone with a decent connection. This means that an individual can buy or sell a financial product from their living room.
It must also be pointed out that as electronic trading has taken off, the incidence of ‘flash crashes’ – huge spikes in volatility over short periods of time – has also increased. A case can be made that algorithms exacerbate such crashes because they act far faster than a human can intervene. On the other hand, exchange operators are finding ways to handle this new environment more safely, for example with electronic mechanisms to curb extreme volatility, order routing co-ordination between exchanges and a re-think of the role of market makers. Whether it is fair to blame flash crashes on electronic trading is a huge and sometimes contentious topic.
In order to execute trades algorithmically, we use a computer program connected to the exchange (either directly or via a broker) that executes our desired trading behavior on our behalf. Such a program or algorithm is simply a set of detailed instructions that a computer understands. A trivial example might go something like “read some price data, calculate its mean and standard deviation, then if the most recent value of the price data is above its average and the standard deviation is less than some threshold, send a buy order to market.” Most real trading algorithms are of course far more complex, but you get the idea.
The simple algorithm described above contains some of the common components of an algorithmic trading system:
- A method to acquire data (“read some price data”), noting that this in itself could be quite a complex standalone algorithm and requires connection to a source of market data, usually in real time.
- Some analysis of that data (“calculate its mean and standard deviation”).
- A means of checking if some condition has been fulfilled based on the previous analysis (“if the most recent price is above its mean and the standard deviation is less than some threshold”).
- Execution of the trading logic, which again can be quite a complex standalone algorithm requiring a means of communicating with a broker or exchange, managing that communication link and keeping track of orders and fills.
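The toy rule described above can be sketched in a few lines of Python. This is a minimal illustration only: `send_order` is a hypothetical stand-in for a broker or exchange API call, and the price data is made up.

```python
import statistics

def check_signal(prices, vol_threshold):
    """The toy rule from the text: latest price above the series mean,
    and standard deviation below some threshold."""
    mean = statistics.mean(prices)
    stdev = statistics.stdev(prices)
    return prices[-1] > mean and stdev < vol_threshold

def run_once(prices, vol_threshold, send_order):
    # send_order stands in for a broker/exchange API call (hypothetical)
    if check_signal(prices, vol_threshold):
        send_order("BUY")

# Simulate one pass over some made-up price data
prices = [100.0, 100.5, 100.2, 100.8, 101.0]
orders = []
run_once(prices, 2.0, orders.append)
```

In a live system, each of these pieces (data acquisition, analysis, execution) would of course be a substantial module in its own right, as the points above note.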
Other common components of such systems include:
- Risk management modules, for example position sizing calculations, exposure tracking and adjustment, and tools to track a system’s performance and behavior.
- Portfolio management tools, which are somewhat related to the above.
- Data handling and storage.
- Post-trade reconciliation and analysis.
The assumption behind the description of the algorithm above is that its purpose is the implementation of a signal-based trading system which essentially follows the fundamental logic of “if this happens, buy, and if that happens, sell.” The term algorithmic trading can actually have a slightly different nuance, particularly in an institutional setting, that it pays to be aware of. In this setting, algorithmic trading can refer to the automated splitting of a large order to get the best execution possible. Such algorithms typically split up a large order into smaller pieces and send the pieces to market in a way that optimizes the overall cost of the transaction.
Consider a large hedge fund that wants to hold a significant position in a stock, say a couple of percent of its market capitalization. Buying such an order in one transaction would have an impact on the price of that stock (depending on its liquidity) that would be disadvantageous to the fund. Entering the position gradually in line with the capacity of the stock to absorb the order often reduces the overall cost of the transaction, sometimes significantly.
Many in the institutional space will refer to algorithmic trading as the type that splits up a big order. These same folks will typically refer to the signal-based system as an automated trading system. In my experience, the terms are largely used interchangeably, and it therefore pays to understand the context when talking about algorithmic trading.
Different Strokes for Different Folks
The type of algorithmic trading that most readers of Robot Wealth are interested in is the kind that seeks to identify opportunities to profit by buying low and selling higher. This is the signal-based type of system mentioned above. Within this broad category, there are different subsets of trading algorithm. While there is no one accepted nomenclature for classifying algorithmic trading systems, they can generally be described as follows.
Technical Analysis (TA) refers broadly to the analysis of patterns of price and volume to predict future market movement. It is therefore based on the assumption that there exist repeating patterns in the price action of a market. The TA toolkit consists of a collection of indicators (price/volume transformations) like the Relative Strength Index (RSI) and Moving Average Convergence Divergence (MACD). It also includes the suite of trend lines, support and resistance lines, formations like ‘flag’ and ‘pennant’ and patterns like ‘head and shoulders.’ There is even a catalog of candlestick patterns like ‘engulfing bear’ that allegedly portend the future direction of the market. On the more esoteric end of the spectrum, we have things like Elliott Wave Theory which asserts that markets move in predictable ‘waves’.
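As a concrete example of a TA indicator, here is a minimal sketch of an RSI calculation. Note the simplifying assumption: it uses plain averages of gains and losses over the lookback window, rather than the Wilder smoothing used in many charting packages.

```python
def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price changes,
    using simple averages of gains and losses (no Wilder smoothing)."""
    if len(prices) < period + 1:
        raise ValueError("need at least period + 1 prices")
    changes = [b - a for a, b in zip(prices, prices[1:])]
    recent = changes[-period:]
    gains = sum(c for c in recent if c > 0)
    losses = sum(-c for c in recent if c < 0)
    if losses == 0:
        return 100.0  # all gains over the window: maximum RSI
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)
```

A steadily rising series produces an RSI of 100, while a series whose gains and losses balance produces an RSI of 50 – the indicator is just a bounded transformation of recent price changes.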
While some of the tools of TA are sometimes used in more scientifically rigorous quantitative trading (see below), opinion is divided on whether TA as an approach to predicting the market has any utility; indeed, there are published academic papers that support both sides of this argument (see for example Lo, Mamaysky and Wang, 2000 and Chan, Jegadeesh and Lakonishok, 1996). We will return to this question in the second part of this series.
Quantitative trading means different things to different people. For some, it may be simply another name for TA-based trading. For me, the distinguishing feature of quantitative trading is the removal of subjectivity (decisions are based on quantifiable information). This implies that quantitative trading is based on some sort of mathematical or statistical model of market behavior.
There are infinitely many models of market behavior, but finding one accurate enough to generate profits is no trivial endeavor. Sometimes a tool from the TA world might be used in a quantitative model, hence the crossover that I mentioned above. An example of a popular model is the cross-sectional momentum strategy, which essentially boils down to buying winners and selling losers. Another popular one is the mean-reversion strategy, which essentially equates to selling winners and buying losers.
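The cross-sectional momentum idea can be sketched very simply: rank a universe of assets by trailing return, go long the winners and short the losers. The tickers and returns below are entirely made up for illustration.

```python
def momentum_portfolio(trailing_returns, top_n=2):
    """Cross-sectional momentum: rank assets by trailing return,
    long the top_n winners and short the bottom top_n losers."""
    ranked = sorted(trailing_returns, key=trailing_returns.get, reverse=True)
    return ranked[:top_n], ranked[-top_n:]

# Hypothetical trailing 12-month returns for five made-up tickers
past_year = {"AAA": 0.30, "BBB": 0.12, "CCC": -0.05, "DDD": 0.02, "EEE": -0.20}
longs, shorts = momentum_portfolio(past_year, top_n=2)
```

A mean-reversion variant simply flips the ranking: short the recent winners and buy the recent losers.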
As we move along the complexity scale, we might encounter a cointegrating pairs model. In this model, we find a pair of securities that can be combined such that together they form a mean-reverting series. Such strategies are appealing because they can be engineered to keep the trader market neutral, or close to it, with the goal of minimizing market risk. Just be careful with the underlying assumptions of such a strategy – that is, that a past cointegrating relationship will continue into the future.
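A minimal sketch of the pairs idea follows, under two stated assumptions: the hedge ratio is a no-intercept least-squares slope, and entries are triggered by a z-score threshold on the spread (the threshold and data are purely illustrative).

```python
import statistics

def hedge_ratio(y, x):
    """No-intercept least-squares slope of y on x, used as the hedge ratio."""
    return sum(a * b for a, b in zip(y, x)) / sum(b * b for b in x)

def pair_signal(y, x, entry_z=2.0):
    """Form the spread y - beta*x and signal a trade when its latest
    z-score moves beyond the entry threshold."""
    beta = hedge_ratio(y, x)
    spread = [a - beta * b for a, b in zip(y, x)]
    mu = statistics.mean(spread)
    sigma = statistics.stdev(spread)
    if sigma == 0:
        return "FLAT"
    z = (spread[-1] - mu) / sigma
    if z > entry_z:
        return "SHORT_SPREAD"  # sell y, buy x
    if z < -entry_z:
        return "LONG_SPREAD"   # buy y, sell x
    return "FLAT"
```

The key fragility mentioned above lives in `beta`: it is estimated from history, and the signal is only meaningful if that relationship persists out of sample.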
We can also build quantitative models based on fundamental data. Fundamental data like earnings announcements are just numbers, and we now have the tools to efficiently and automatically process the news releases and company filings from which these numbers are taken.
There is a subset of quantitative trading that is currently undergoing a huge upsurge in interest. I am of course referring to machine learning and artificial intelligence, which seem to have captured the imagination of both technologists and lay people around the world. With good reason, I might add. At its most basic level, machine learning is simply the derivation of insights from data using statistical models. As such, linear regression can be considered a low-level machine learning algorithm. Today we have much more complex tools, such as artificial neural networks and deep learning, support vector machines and boosting algorithms. Such tools are already widely used to support business decision-making and improve the performance of complex systems. It seems a very natural and obvious step to apply these tools to the markets.
The reality is that while such tools are incredibly powerful, it is difficult (but not impossible) to use them to model the markets directly. Applying the ‘classic’ data science approach generally doesn’t work too well on financial data, at least in my experience. The more celebrated machine learning applications in finance seem to be around efficiently extracting insights from large and complex non-market data sets, like libraries of satellite images, social media feeds and other proprietary and open data sets. Entire books could be written about this topic, but if you are really interested in machine learning, there is enormous scope to apply it to financial decision making – just don’t expect an easy ride.
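To make the “linear regression as low-level machine learning” point concrete, here is a closed-form ordinary least squares fit of a predictive signal against next-period returns. The signal and return values are entirely made up for illustration.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x, via the closed-form solution."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Made-up data: a signal value and the following period's return
signal = [0.0, 1.0, 2.0, 3.0, 4.0]
next_ret = [0.1, 0.3, 0.5, 0.7, 0.9]
a, b = fit_linear(signal, next_ret)
```

The more elaborate tools mentioned above (neural networks, SVMs, boosting) are, at heart, doing the same job: learning a mapping from features to a target, just with far more flexible functional forms.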
High Frequency Trading
High Frequency Trading (HFT) is admittedly not something I have a lot of direct experience with, but I have worked with folks who are directly engaged in the activity. HFT by definition must be algorithmic since it occurs on the scale of microseconds – or less. No human could engage in HFT without the support of computers. While HFT is generally signal based – that is, something occurs to trigger a buy or sell signal – speed and latency are generally more important than the actual signal itself. The implication of this is that co-location of the algorithm in the exchange, or as close to it as possible, is a prerequisite, and code must be optimized for speed and usually written in a low-level language like C++. This results in barriers that are simply too high for DIY traders, and indeed for many trading firms. From my conversations with HFT traders, it is something of an arms race, and an expensive one at that.
Why Should You Care about Algorithmic Trading?
The trend towards algorithmic trading, and automation more generally, has been underway in the institutional trading space for some time. For instance, Bloomberg (2016) recently reported that:
“Hedge funds almost doubled their use of algorithmic trading in the forex market last year … executing 61% of their trades via automated computer systems.”
This trend can be seen in other markets too. At the London Stock Exchange, algo traders entered 40% of all orders in 2006, rising to 60% in 2007; by 2010, that estimate stood at over 80% (thefinancer.com, 2010).
So why this shift towards algorithmic trading?
Most algo traders that I speak to say that they would never trade any other way, typically quoting similar reasons that relate to overcoming human limitations:
- Computers process enormous amounts of data in the blink of an eye, enabling them to constantly scan dozens or hundreds of markets simultaneously for trading opportunities. A human trader can’t keep up with that sort of workload.
- Computers are not prone to the same execution difficulties that humans face: calculation errors, time required to manually enter order details into a broker platform, “fat fingers” errors.
- Computers have no emotional attachment to a trade or a market. Either there is a trade opportunity present or there isn’t.
- Computers can execute a system consistently and continuously. Contrast this with a human who is limited to a few hours chart time per day, suffers from fatigue and needs to pursue a social life (not that I would ever advocate leaving an algorithm to run with no human oversight).
What I also find interesting is that most algo traders that I know have an enormous respect for successful manual or discretionary traders. That’s because anyone who has made money through algo trading knows precisely how difficult it really is, even with all the advantages of algorithms and automated computation described above. These things are powerful tools for navigating the markets, and folks who can beat the market without them deserve tremendous respect.
Personally, in addition to the advantages listed above, I view algorithmic trading as a means of allocating tasks to the resource most capable of performing them. This needs more explanation:
Reading the points above, you would get the impression that humans are becoming obsolete in the world of trading. Not true! Just as there are many tasks that algos are better suited for, likewise there are certain things that humans are simply brilliant at. We have the creativity to view the markets in novel ways and come up with new ideas for trading systems. We can perform research and stay abreast of the macroeconomic environment in ways that a machine simply can’t (for now, at least).
Using algos to trade the markets frees up the trader to pursue meaningful tasks that make use of their skill set. I don’t know about you, but I don’t get much joy out of staring at charts all day. Nor am I particularly good at distilling meaning from them. My time is better spent researching and overseeing than looking for and executing trading opportunities.
Another nice by-product that arises through algorithmic trading is an automated research and development environment. If a trader writes an algorithm for executing trades in the live market, it is possible to test the algorithm on historical or synthetic data before taking it live. This provides crucial feedback about the algorithm’s past performance as well as insight into when, where and why it might fail. Such feedback is difficult, if not impossible, to get with a manually traded or discretionary system.
Obviously I am firmly in the algorithmic trading camp and have so far focused on the advantages that a trader receives by taking this approach to the markets. Of course, there is another side to every story and this one is no different. Some of the drawbacks to algorithmic trading include:
- It requires certain skills which one either needs to acquire personally or rely on others to provide. Programming is the obvious prerequisite, but it is also useful to know about market microstructure and computer hardware, software and networking. The tools that become available through computational trading, like optimisation and machine learning, are incredibly powerful and require specific knowledge to use appropriately. Algo trading is actually very difficult and requires skills from multiple disciplines.
- Algorithmic trading comes with certain infrastructure considerations, such as backup power and network connectivity. This is less of a problem with the rise of affordable managed private servers and cloud-based services, but definitely needs to be considered.
- Hardware dependency – what happens if the server hosting the algorithm goes down?
- A (perceived) lack of control over the behaviour of the algorithm.
- For some traders, the loss of discretion or ‘gut feel’ that comes with algorithmic trading is problematic.
I have also noticed from time to time a misconception that algorithmic trading systems can be simply set up and then forgotten about. This is most definitely not the case! Managing an algorithmic trading system actually represents a significant amount of work and it takes a lot of oversight. Any system that trades at the intra-day frequency would ideally be monitored in real time. Where this is not possible, my personal preference is to have alerts sent to my phone when trades are entered or closed or when my system loses its connection to my brokerage account. Trades also need to be reconciled, preferably on a daily basis. This is important to ensure that the system is behaving as expected, as well as to monitor any deviations between simulated and actual performance. Bugs can and do creep into any computer program and a trading algorithm is no exception!
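A daily trade reconciliation can be as simple as diffing the fills the system believes it made against the broker's records, keyed by order id. The data structures and field values here are hypothetical, purely to illustrate the check.

```python
def reconcile(system_fills, broker_fills):
    """Compare fills the system believes it made against the broker's
    records; return a list of discrepancies to investigate."""
    issues = []
    for oid, qty in system_fills.items():
        broker_qty = broker_fills.get(oid)
        if broker_qty is None:
            issues.append((oid, "missing at broker"))
        elif broker_qty != qty:
            issues.append((oid, f"qty mismatch: system {qty}, broker {broker_qty}"))
    for oid in broker_fills:
        if oid not in system_fills:
            issues.append((oid, "unknown order at broker"))
    return issues

# Hypothetical fill records: order id -> filled quantity
issues = reconcile({"A1": 100, "B2": 50}, {"A1": 100, "B2": 40, "C3": 10})
```

Any non-empty result is a prompt to investigate before the next session – partial fills, dropped connections and plain bugs all show up as exactly this kind of discrepancy.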
Where to Next?
I hope this article was useful for people who are new to algorithmic trading or who have questions around the fundamentals – it was certainly a shift from the usual research-focused content that I post on robotwealth.com. In the coming weeks, I’ll be posting the remaining two articles in this series, which I hope will appeal to both beginners and veterans alike, before returning to my exploration of using machine and deep learning in algorithmic trading.
If you are interested in learning the fundamental aspects of algorithmic trading – the stuff I personally wish I’d known during my early years of playing at this game – please check out our course The Fundamentals of Algorithmic Trading. It has been getting some good reviews and our community of students is steadily growing.
Chan, L.K.C.; Jegadeesh, N.; Lakonishok, J. 1996, “Momentum Strategies”, The Journal of Finance, 51(5), 1681–1713.
Lo, A.; Mamaysky, H.; Wang, J. 2000, “Foundations of Technical Analysis: Computational Algorithms, Statistical Inference, and Empirical Implementation”, The Journal of Finance, 55, 1705–1765.