# We Love Free Data: Replacing Yahoo Finance Market Data

In keeping with our recent theme of providing useful tidbits of algo trading practicalities, here’s an elegant solution to Yahoo’s unceremonious exit from the free financial data space.

Regular readers would know that I use various tools in my algo trading stack, but the one I keep coming back to, particularly when I’m ready to start running serious simulations, is Zorro. Not only is it a fast, accurate, and powerful backtester and execution engine, but the development team is also clearly committed to solving issues and adding features that really matter from a practical perspective. This is another example of the speedy and elegant resolution of a serious issue – namely, the loss of free access to good quality, properly adjusted equities data, thanks to Yahoo’s exit.

Zorro version 1.60 is currently in the final stages of beta testing and will likely be released publicly in the coming days. The latest version includes integration with Alpha Vantage’s API, providing access to free, high quality, properly adjusted stock and ETF price data. All you need to do to use it is sign up at Alpha Vantage for a free API key, then enter your key in Zorro’s initialization file. You can then use Zorro’s `assetHistory()` function with the arguments `Name` and `FROM_AV` to download data for the stock or ETF denoted by `Name` directly from the Alpha Vantage database.

From my testing, this can be used as a direct replacement in any Zorro script that previously used Yahoo data, and it seems to load even faster than Yahoo data did. Here’s a simple example that downloads daily data for a portfolio of ETFs, saves that data locally, then plots returns series for each component and a price chart for one:

And here’s the resulting plot:

Looks good! Now I just need to replace Yahoo Finance with Alpha Vantage in all those scripts…
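The Zorro script itself isn’t reproduced here, but if you’re curious what such a download involves under the hood, Alpha Vantage serves its data over a simple REST API. Below is a minimal Python sketch that builds the request URL a downloader would fetch; the endpoint and parameter names come from Alpha Vantage’s public API documentation, and "demo" is a placeholder for your own free key.

```python
# Sketch: building the Alpha Vantage request behind this kind of download.
from urllib.parse import urlencode

BASE_URL = "https://www.alphavantage.co/query"

def daily_adjusted_url(symbol: str, api_key: str) -> str:
    """Return the request URL for a symbol's full adjusted daily history (CSV)."""
    params = {
        "function": "TIME_SERIES_DAILY_ADJUSTED",
        "symbol": symbol,
        "outputsize": "full",   # full history rather than the last 100 bars
        "datatype": "csv",
        "apikey": api_key,
    }
    return BASE_URL + "?" + urlencode(params)

# e.g. fetch with urllib.request.urlopen(daily_adjusted_url("SPY", "demo"))
```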

For readers who are yet to take Zorro for a test drive, I firmly believe that it is the most useful simulation and trading tool for its price available on the market today (and the free version packs a mighty punch as well). I’ve watched the platform go from strength to strength as new and better features are added with almost clockwork regularity. It will undoubtedly become ubiquitous as more and more people find out about it.

To become a Zorro ninja and learn to leverage features like the one shown here, join the Robot Wealth community, where you can access courses that will get you up to speed with robust strategy development, even if you’re a novice coder.

# How to Run Trading Algorithms on Google Cloud Platform in 6 Easy Steps

Earlier this year, I attended the Google Next conference in San Francisco and gained some first-hand perspective on what’s possible with Google’s cloud infrastructure. Since then, I’ve been leaning on Google Cloud Platform (GCP) to run my trading algorithms (and more), and it has become an important tool in my workflow.

In this post, I’m going to show you how to set up a GCP cloud compute instance to act as a server for hosting a trading algorithm. I’ll also discuss why such a setup can be a good option and when it might pay to consider alternatives. But cloud compute instances are just a tiny fraction of the whole GCP ecosystem, so before we go any further, let’s take a high level overview of the various components that make up GCP.

# What is Google Cloud Platform?

GCP consists of a suite of cloud storage, compute, analytics and development infrastructure and services. Google says that GCP runs on the very same infrastructure that Google uses for its own products, such as Google Search. This suite of services and infrastructure goes well beyond simple cloud storage and compute resources, providing some very handy and affordable machine learning, big data, and analytics tools.

GCP consists of:

• Google Compute Engine: on-demand virtual machines and an application development platform.
• Google Storage: scalable object storage; like an (almost) infinite disk drive in the cloud.
• Cloud Bigtable and Cloud SQL: scalable NoSQL and SQL databases hosted in the cloud.
• Big Data Tools:
  • BigQuery: a big data warehouse geared up for analytics.
  • Dataflow: data processing management.
  • Dataproc: a managed Spark and Hadoop service.
  • Datalab: an analytics and visualization platform, like a Jupyter notebook in the cloud.
  • Data Studio: for turning data into nice visualizations and reports.
• Cloud Machine Learning: train your own models in the cloud, or access Google’s pre-trained neural network models for video intelligence, image classification, speech recognition, text processing and language translation.
• Cloud Pub/Sub: send and receive messages between independent applications.
• Management and Developer Tools: monitoring, logging, alerting and performance analytics, plus command line/powershell tools, hosted git repositories, and other tools for application development.
• More that I haven’t mentioned here!

The services and infrastructure generally play nicely with each other and with the standard open source tools of development and analytics. For example, DataLab integrates with BigQuery and Cloud Machine Learning and runs Python code. Google have tried to make GCP a self-contained, one-stop-shop for development, analytics, and hosting. And from what I have seen, they are succeeding.

## Introduction

Google Compute Engine (GCE) provides virtual machines (VMs) that run on hardware located in Google’s global network of data centres (a VM is simply an emulation of a computer system that provides the functionality of a physical computer). You can essentially use a VM just like you would a normal computer, without actually owning the requisite hardware. In the example below, I used a VM instance to:

• Host and run some software applications (Zorro and R) that execute the code for the trading system.
• Connect to a broker to receive market data and execute trades (in this case, using the Interactive Brokers IB Gateway software).

GCE allows you to quickly launch an instance using predefined CPU, RAM and storage specifications, as well as to create your own custom machine. You can also select from several predefined ‘images’, which consist of the operating system (both Linux and Windows options are available), its configuration and some standard software. What’s really nice is that GCE enables you to create your own custom image that includes the software and tools specific to your use case. This means that you don’t have to upload your software and trading infrastructure each time you want to launch a new instance – you can simply create an instance from an image that you saved previously.

Before we get into a walk-through of setting up an instance and running a trading algorithm, I will touch on the advantages and disadvantages of GCE for this use case, as well as the cost.

## Pros and Cons of Running Trading Algorithms on Google Compute Engine

There’s a lot to like about using GCE for managing one’s trading infrastructure. But of course, there will always be edge cases where other solutions will be more suitable. I can only think of one (see below), but if you come up with more, I’d love to hear about them in the comments.

### Pros:

• GCE abstracts the need to maintain or update infrastructure, which allows the trader to focus on high-value tasks instead, like performance monitoring and further research.
• The cost of a cloud compute instance capable of running a trading algorithm is very reasonable (I’ll go into specifics below). In addition, you only pay for what you use, but can always increase the available resources if needed.
• Imaging: it is possible to create an ‘image’ of your operating system configuration and any applications/packages necessary to run your algorithm. You can start a new compute instance with that image without having to manually install applications and configure the operating system. This is a big time-saver.
• Scalability: if you find that you need more compute resources, you can add them easily; however, doing so will interrupt your algorithm.
• Security: Google claims to have excellent security, employing a team of over 750 experts in that field and taking measures to protect the physical security of its data centres and the cybersecurity of its servers and software.
• Uptime: Google commits to providing 99.95% uptime for GCE. If that level of uptime isn’t met in any particular month, Google issues credit against future billing cycles.
• Access to other services: since the GCP services all play nicely together, you can easily access storage, data management, and analytical tools to complement or extend a compute instance, or indeed to build a bigger workflow on GCP that incorporates data management, research and analytics.

### Cons:

• If your trading algorithm is latency sensitive, GCE may not be the best solution. While you do have some choice over where your algorithm is physically hosted, this won’t be ideal if latency is a significant concern. For the vast majority of independent traders, this is unlikely to be a deal-breaker, but it is certainly worth mentioning.

I was almost going to list security as a disadvantage, since it can be easy to assume that if security is not handled in-house, then it is a potential issue. However, one would think that Google would do security much better than any individual could possibly do (at least, that’s what you’d think after reading Google’s spiel on security), and that therefore it makes sense to include the outsourcing of security as an advantage. This issue might get a little more complicated for a trading firm which may prefer to keep security in-house, but for most individuals it probably makes sense to outsource it to an expert.

## Pricing

GCE is surprisingly affordable. The cost of hosting and running my algorithm is approximately 7.6 cents per hour, which works out to around $55 per month (if I leave the instance running 24/7), including a sustained use discount that is applied automatically. Factoring in the $300 of free credit I received for signing up for GCP, the first year’s operation will cost me about $360. This price could come down significantly, depending on the infrastructure I use, as I’ll explain below.

I used an n1-standard-1 machine from GCE’s list of standard machine types. This machine type utilizes a single CPU and allocates 3.75 GB of memory, and I attached a 50 GB persistent disk. This was enough to run my trading algorithm via the Zorro trading automation software (which requires Windows), executed through Interactive Brokers via the IB Gateway. The algorithm in question generates trading signals (for a portfolio of three futures markets) by processing hourly data with callouts to a feedforward neural network written as an R script, and it monitors tick-wise price data for micro-management of individual trades.

The machine type I chose handled this job reasonably well, despite recommendations from Google’s automated monitoring that I assign some additional resources. These recommendations generally arise when I retrain my neural network, a task that proved to be more resource-intensive than the actual trading. Thankfully, this only happens periodically, and I have so far chosen to ignore Google’s recommendations without apparent negative consequence.

I used a Windows Server 2016 image (since my trading application runs on Windows only) and a 50 GB persistent disk, which is the minimum required to run such an image. The Windows Server image accounts for the lion’s share of the cost – approximately $29 per month.
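As a quick sanity check on those figures (assuming a 30-day month of 24/7 operation):

```python
# Back-of-envelope check of the quoted GCE running costs.
hourly_rate = 0.076                    # ~7.6 cents per hour
monthly_cost = hourly_rate * 24 * 30   # 24/7 over a 30-day month: ~$54.72
first_year = monthly_cost * 12 - 300   # less the $300 signup credit: ~$357

print(round(monthly_cost, 2), round(first_year, 2))
```

That lands close to the “around $55 per month” and “about $360 for the first year” quoted above.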

### Step 2: Navigate to Google Compute Engine

Firstly, from the GCP homepage, navigate to your GCP Console via one of the options shown below:

Then, navigate to the Compute Engine dashboard like so:

### Step 3: Create a new VM instance

Simply click on Create in the VM Instances screen on your GCE dashboard, like so:

Then fill out the specs for your new instance. The specs I used look like this (you can see the cost estimate on the right):

I used one of GCP’s US east-coast data centres since IB’s servers are located in the New York area. My algorithm isn’t latency sensitive, but every little bit helps.

After clicking Create, the instance will take a few moments to spin up, then it will appear in your VM dashboard like so:

### Step 4: Set up access control

Next, you need to set up a password for accessing the instance. Click the arrow next to RDP and select Set Windows password, like so:

### Step 5: Connect and test

You can connect directly from the VM dashboard using Google’s remote desktop application for Chrome by clicking RDP (ensuring the correct VM is selected), or by downloading the Windows RDP file:

Once connected to the instance, it should look like a normal (although somewhat Spartan) Windows desktop. To test that it can connect to Interactive Brokers (IB), we are going to connect to IB’s test page. But first, we have to adjust some default internet settings. To do this, open Internet Explorer. Select the Settings cog in the top right of the browser then Internet Options, then Security, then Trusted Sites. Click the Sites button and add https://www.interactivebrokers.com to the list of trusted sites. Then save the changes. Here’s a visual from my instance:

Now, connect to IB’s test page to check that your instance can communicate with IB’s servers. Simply navigate to https://www.interactivebrokers.com/cgi-bin/conn_test.pl in Internet Explorer. If the instance is connecting properly, you should see a page that looks like this:

You can now upload your trading software and algorithm to your instance by simply copying and pasting from your home computer, or download any required software from the net. Note that to copy and paste from your home computer, you will need to access the instance using Windows RDP, not Chrome RDP (this may change with future updates to Chrome RDP).

### Gotcha: changing permissions of root directory for Windows Server:

I found that I wasn’t able to install R packages from script due to restrictions on accessing certain parts of the Windows file structure. To resolve this, I followed these steps:

• In Windows Explorer, navigate to the R installation directory and right-click it, then choose Properties.
• Go to the Security tab.
• Click Advanced, then Change Permissions.
• Choose This folder, subfolders and files under Applies to:
• Choose Full Control under Basic Permissions.
• Click OK.

### Step 6: Don’t forget to stop the instance!

If you need to stop trading your algorithm, it is usually a good idea to stop the instance so that you aren’t charged for compute resources that you aren’t using. Do so from the VM dashboard:

So long as you don’t delete the instance, you can always restart it from the same state at which it was stopped, meaning you don’t have to re-upload your software and scripts. You are not billed for an instance that has been stopped.

On the other hand, if you delete your instance and later want to restart, you will have to create a whole new instance and re-upload all your trading infrastructure. That’s where images come in handy: you can save an image of your setup, and then start an identical instance from the console. I’ll show you how to do that in another post.

# Concluding Remarks

## GCP on the Command Line

In this post I’ve demonstrated how to set up and run instances using the GCP Console. The same can be achieved using the gcloud command-line tool, which is worth learning if you start using GCP extensively, thanks to the productivity boost that comes with familiarity.

## Going Further

There’s a lot that can be done on GCP, including big data analytics and machine learning. We can also apply some simpler workflows to make our lives easier, such as creating custom images as mentioned above, or integrating with cloud storage infrastructure for managing historical data and using Data Studio for monitoring performance via attractive dashboard-style interfaces. I’m in a good position to show you the ropes on how to use these tools in your trading workflow, so if there is something in particular that you would like me to showcase, let me know in the comments.

Happy Googling!

Recently, Yahoo Finance – a popular source of free end-of-day price data – made some changes to their server which wreaked a little havoc on anyone relying on it for their algos or simulations. Specifically, Yahoo Finance switched from HTTP to HTTPS and changed the data download URLs. No doubt this is a huge source of frustration, as many backtesting and trading scripts that relied on such data will no longer work.

Users of the excellent R package `quantmod`, however, are in luck! The package’s author, Joshua Ulrich, has already addressed the change in a development version of quantmod. You can update your `quantmod` package to the development version that addresses this issue using this command in R:

```r
devtools::install_github("joshuaulrich/quantmod", ref = "157_yahoo_502")
```

Of course, you need the `devtools` package, so run `install.packages("devtools")` first if you don’t already have it installed.

Once the package updates, `quantmod::getSymbols(src = "yahoo")` should work just as it did prior to the updates on the Yahoo Finance server. I can verify that this worked for me.

Of course, if you don’t want to update quantmod to a version that lives on a Git branch, you can wait until the changes are merged into master and do

```r
devtools::install_github("joshuaulrich/quantmod")
```

I don’t know when that will happen, but I have been using the branch version for a few days now, and all appears to be working as expected.

Update: a user suggested making use of the `quantmod::adjustOHLC()` function, as the adjusted close in Yahoo’s data is currently incomplete and doesn’t account for dividends. Example usage: after downloading a symbol with `getSymbols("SPY")`, run `SPY <- adjustOHLC(SPY)` to back-adjust the whole OHLC series for splits and dividends.

First time here? Check out our posts on machine learning in finance and our recent review of dual momentum as an investment strategy. Enjoy!

# Dual Momentum: A Review

I recently read Gary Antonacci’s book Dual Momentum Investing: An Innovative Strategy for Higher Returns with Lower Risk, and it was clear to me that this was an important book to share with the Robot Wealth community. It is important not only because it describes a simple approach to exploiting the “premier anomaly” (Fama and French, 2008), but because it is ultimately about approaching the markets with a critical, inquisitive mindset, while not taking oneself too seriously. I think we can all do with a dose of that sometimes.

Gary’s style is unique: this is the work of a free and critical thinker who is not afraid to question the status quo. While articulately drawing from a range of sources, from Shakespeare to Bacon and Einstein to Buffett (even Thomas Conrad’s 1970 book Hedgemanship: How to Make Money in Bear Markets, Bull Markets and Chicken Markets While Confounding Professional Money Managers and Attracting a Better Class of Women, which has got to be the greatest title in the history of trading books), Gary comes across as playful and slightly eccentric (which is wonderfully refreshing in a book about the markets). He derides economists who take themselves and their models too seriously (the line “for followers of CAPM, the real world was an annoying special case” almost made me fall off my chair), and importantly, he does this from the perspective of someone who has earned the right to do so through hard-won practical experience.

In this post I’ll describe some of the highlights from the book, including a description of Dual Momentum and Gary’s modular approach for exploiting it. I’ll also describe a variation of this approach and of course illustrate the performance of both approaches with some equity curves. Robot Wealth members have access to the code for implementing these systems and a research framework for additional experimentation.

There is a lot more to Gary’s book than the results I’m showing here, including detailed discussions around the existing momentum literature, the basis for momentum (it’s always nice to have a tangible reason for an anomaly to exist before committing capital to it) and suggestions for other Dual Momentum implementations. The book is highly recommended.

# Momentum, Relative and Absolute

The seminal work of Jegadeesh and Titman (1993) showed that relative momentum – that is, the returns of an asset in comparison to other assets – provides profitable trading opportunities that are largely robust to the parameters of the trading strategy used to exploit them. They showed that relative momentum outperformed benchmark returns; however, to harvest this outperformance, one must typically endure significant volatility, often only marginally better than that of the benchmark itself. For many active investors and managers, the reward may not justify the risk.

Antonacci (2012) published an extremely simple yet highly effective extension to relative momentum. He also looked at the absolute momentum of an asset – that is, the momentum of the asset relative to itself – and found that by combining the two types of momentum, he could reap the rewards of relative momentum investing while vastly reducing the volatility of the approach.

Antonacci’s Dual Momentum is extremely simple to implement and manage, requiring at most a few positional adjustments each month. This, combined with its history of low-volatility outperformance, makes it in my opinion the perfect place to start for people who are new to trading, prior to investigating more complex strategies. It won’t shoot the lights out and make you rich overnight, but it has proven itself over time to be a robust way to outperform the market in the long term.

So why isn’t everyone doing this? My theory is that it is too simple. Most folks who decide they want to beat the markets like an intellectual challenge. They like to apply advanced quantitative methods or perform in-depth research into the fundamentals. Of course, I really like doing this too, and these methods can be handsomely profitable. But they take a huge amount of effort and usually a lot of frustration to get them right. Dual Momentum is like low hanging fruit. When you’re starting out, it makes sense to work on something that is both simple and robust, even if it doesn’t exactly satisfy that intellectual itch, and at least start getting some results. One learns a lot in the process.

# Dual Momentum Explained

Before we go any further, I’ll explain what is meant by “Dual Momentum” and how it might be applied.

Dual Momentum is about selecting assets that have both outperformed their peers and themselves generated a positive return. The first step in applying Dual Momentum is to compare the assets of interest against one another. If an asset has a higher return than another over the time period of interest, then it has positive relative momentum. We select the assets which have positive relative momentum for further analysis. Relative momentum thus acts as the initial filter.

Next, we look at the absolute momentum of individual assets. That is, we look at the performance of individual assets compared only to themselves. In simple terms, if an asset has a positive return over the time period of interest, its absolute momentum is positive, and if its return is negative, its absolute momentum is negative. Taking our assets with positive relative momentum, we would only consider buying those assets whose absolute momentum is also positive.

It is possible for an asset to have positive relative momentum and negative absolute momentum. For example, in a broad market decline, the best performer would have positive relative momentum – it lost less than its peers – but it would likely have negative absolute momentum. The Dual Momentum approach would prevent us from buying such assets. Likewise, an asset might be rising and have positive absolute momentum, but if other assets performed better, it would have negative relative momentum. The Dual Momentum approach thus forces us into the assets that have both gone up and outperformed their peers.

In the description above, I referred to the long side only, but of course Dual Momentum could be applied to the short side in the same way. In my experience, a long-only Dual Momentum strategy seems to perform better when applied to equities (which makes sense given the long-term upward bias in that asset class), but you may be able to apply it in a way that I haven’t thought of, or apply it to a different asset class, in order to take advantage of the short side too.

# Implementing Dual Momentum

There are many ways to build a strategy that implements the Dual Momentum approach. Gary’s research shows that momentum works best when applied to geographically diversified equity indices. This research was backed up by Geczy and Samonov (2015), by way of a 215-year backtest!

We can’t easily invest directly in an index, so in this examination of Dual Momentum, we used Exchange Traded Funds (ETFs) that track the indices of interest. While this analysis is constrained by the inception dates of the individual ETFs, we found that the results are generally in line with Gary’s results over the same period. Gary’s backtests using indices date back to 1974 and demonstrate a greater degree of outperformance than is evident across the relatively short (~10 years) backtest used here.

Below are the results of an ETF-based version of the modular approach described in Antonacci (2012), as well as a sector-rotation approach. Transaction costs and ETF distributions have not been included in these simulations; however, they would likely not have a significant impact on results, given the typical holding period and trade frequency.

## Modular Dual Momentum

The modular approach to Dual Momentum is the one described in Antonacci (2012). This approach dictates that every month, we compare two related sectors or two parts of a single sector and select the better performer over the formation period (the prior twelve months). If the better performer has positive absolute momentum, we buy that asset. If the better performer has negative absolute momentum, we hold treasury bonds or investment grade bonds.
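One month’s decision for a single module can be sketched in a few lines of Python. This is my own rendering, not Gary’s code: the absolute-momentum hurdle defaults to zero here (Antonacci’s paper uses the T-bill return), and "bonds" stands in for the fixed-income fallback:

```python
def module_pick(asset_a, ret_a, asset_b, ret_b, hurdle=0.0, fallback="bonds"):
    """Dual Momentum decision for a two-asset module.
    ret_a, ret_b: formation-period (e.g. 12-month) returns of the pair."""
    # Relative momentum: take the stronger performer of the pair.
    winner, winner_ret = (asset_a, ret_a) if ret_a >= ret_b else (asset_b, ret_b)
    # Absolute momentum: only hold the winner if it clears the hurdle.
    return winner if winner_ret > hurdle else fallback

print(module_pick("US equities", 0.10, "Intl equities", 0.04))    # US equities
print(module_pick("US equities", -0.02, "Intl equities", -0.08))  # bonds
```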

Gary provides some examples of “modules” to which one might apply Dual Momentum:

• Equities (US equities – international equities)
• Bonds (credit bonds – high yield bonds)
• Economic stress (gold – treasury bonds)
• REITs (mortgage REITs – credit REITs)

We could also look within other sectors for more ideas for modularizing Dual Momentum.

Download my R code – used to generate the following three Modular Dual Momentum equity curves – here.

Here are the results for the equities and bonds modules from 2007 to March 2017. The ETFs used were SPY/CWI and CSJ/HYG respectively. We can see that over this period, Modular Dual Momentum resulted in returns that were comparable with the best performing component but with a fraction of the maximum drawdown. The results are also consistent with Gary’s finding that Dual Momentum tends to underperform during strong bull regimes or when the market rebounds, but thanks to its ability to weather bear markets, has outperformed in the long run. In his paper, Antonacci (2012) provides a backtest that extends back to 1974 and which better captures this long-term outperformance (as stated above, in this ETF implementation, we are constrained by the time that the ETFs have been in existence).

Finally, here are the performance charts of a Modular Dual Momentum portfolio consisting of a 60-40 split between equities and bonds modules.

## Dual Momentum Sector Rotation

This is a twist on Antonacci’s modular approach, and is actually the approach I prefer. We choose a universe of ETFs representing various sectors, regions and asset classes, rank them by their return over the formation period, and buy up to the best three ETFs whose absolute momentum is also positive. Essentially, this approach results in a sector rotation strategy that leverages the benefits of Dual Momentum. Robot Wealth members have access to the complete research environment for reproducing and experimenting with this Dual Momentum Sector Rotation strategy. Join up here.
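In sketch form (my own Python rendering, with made-up six-month returns), the monthly selection reduces to a rank-and-filter:

```python
def rotation_picks(formation_returns, top_n=3):
    """Rank the universe by formation-period return, then keep up to top_n
    leaders whose absolute momentum (their own return) is also positive."""
    ranked = sorted(formation_returns.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, r in ranked[:top_n] if r > 0]

universe = {  # hypothetical 6-month returns for part of the universe
    "S&P 500": 0.08,
    "Japanese equities": 0.05,
    "Gold": 0.02,
    "7-10yr Treasuries": 0.01,
    "European equities": -0.01,
}
print(rotation_picks(universe))  # ['S&P 500', 'Japanese equities', 'Gold']
```

If fewer than three of the leaders have positive returns, the strategy simply holds fewer positions that month.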

The results below are for a Dual Momentum Sector Rotation for the following sectors:

• Stocks in the Russell 1000
• Stocks in the S&P 500
• Stocks listed on the FTSE
• European equities
• Japanese equities
• Asia-Pacific (ex-Japan) equities
• 7-10 Year Treasury Bonds
• 1-3 Year Treasury Bonds
• Gold

And the results using a 6-month formation period:

No doubt you can see why I prefer this approach over the modular one! Although to be fair, the parameters that I used are among the best parameter sets for this particular universe of ETF sectors. A good question to ask now would be “how robust is the strategy to changes in the parameter space?”. After all, the goal of strategy research is to discover robust strategies that perform well in the future, not to create the best backtests. We will soon be adding our suite of robustness testing tools to the members’ area to help answer this question.

# Important Caveats

Readers please note that my implementations have some important differences from the approach that Gary describes in his book and on his website. Gary’s equities implementation is called Global Equities Momentum (GEM), and you can see its performance and read the fine print here. One of the key differences is that the trend of the US market determines the trend of all equities indices. In other words, if the return of the S&P 500 is less than the return of short-term treasuries, we hold bonds regardless of the performance of foreign stocks. The reasoning for this is research referenced in Gary’s book that shows the US stock market leading global equities markets. In the short backtest posted here, there is only one month when this makes a difference, but it may be more significant in the long term.

Also, Gary explicitly advises against a sector rotation model. In his backtests back to 1974, he found that sector rotation outperformed the S&P 500 but underperformed his GEM model. If you look at Gary’s results here, you can see that sector rotation outperformed GEM only intermittently (including over the last few years of the simulation), with GEM coming out well on top over time. However, the sector rotation Gary describes on his website here is, I think, very different from my implementation, in that it examines the individual sectors of the US market only. My implementation is more of a “global macro sector rotation”.

Gary is quite explicit that based on his research, the best application of Dual Momentum is the one presented in his book, or a similar one that focuses on equities, which have historically offered the highest risk premium. Readers should certainly familiarize themselves with Gary’s FAQ page in order to get the full story, or better yet, read the book!

# Conclusion

This article provided a description of Dual Momentum and presented results for two different implementations of Dual Momentum using ETFs. We found that, in general, the results of our approximately 10-year ETF-based simulations were in line with Gary’s much longer index-based simulations, although the latter better demonstrate the long-term outperformance of the Dual Momentum approach. It is recommended to read the FAQ page on Gary’s website, or better yet, the book, to get the full story.

Robot Wealth members have access to the code that was used to generate the backtests shown above, which forms part of a larger research environment that can be easily modified and extended, for example by varying the instruments used in each module, thinking up other modules, and varying strategy parameters like the formation period and the number of ETFs held in the sector rotation version. There is lots of other useful content in the members’ area too, including a machine learning research framework, educational material for learning algorithmic trading, and an active and exclusive forum of like-minded individuals. We would love to have you in the community – register here.

# References

Antonacci, G., 2012, Risk Premia Harvesting through Dual Momentum.

Fama, E. and French, K., 2008, Dissecting Anomalies, The Journal of Finance, 63, pp. 1653-1678.

Geczy, C. and Samonov, M., 2015, 215 Years of Global Multi-Asset Momentum: 1800-2014 (Equities, Sectors, Currencies, Bonds, Commodities and Stocks).

Jegadeesh, N. and Titman, S., 1993, Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency, Journal of Finance, 48(1), pp. 65-91.