The post Market making for beginners appeared first on Robot Wealth.

I’ve noticed that beginners are often attracted to market making. To the uninitiated, it sounds like *easy money and constant action*.

The reality, of course, is that market making isn’t the goldmine many think it is, especially not for beginners.

In this article, I’ll explore why.

Back in the early days of crypto, you could’ve made a decent living by simply quoting prices on various exchanges about 5% around whatever BitMEX was showing.

While those days are gone, there are still opportunities if you’re creative, persistent, and willing to embrace the fractured nature of the crypto market.

But ask yourself: do you *really* want to turn market-making into a full-time job? *Because it quickly becomes one.*

Let’s break down **the basic idea of market making**:

Market makers provide liquidity. They’re the folks willing to buy when you want to sell, and sell when you want to buy. In return for this service, they pocket half the spread (the difference between the bid and ask prices) on each trade.

Sounds simple, right? Well, here’s where it gets tricky:

- You need to know the “fair value” of what you’re trading.
- You can’t be wrong by more than half your spread on average.
- Learning from your mistakes is expensive.

Let’s say you think an asset is worth $100. You might offer to sell at $102 and buy at $98. In a perfect world, you’d make $2 on every trade.

What happens if you’re wrong about that $100 fair value estimate?

Let’s say it should really be $95. Suddenly, everyone’s selling to you at your bid of $98.

That means that you’re *buying above fair value*, bleeding money with every trade.
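To make that arithmetic concrete, here's a toy sketch (all numbers hypothetical) of what one fill earns when your fair-value estimate is good versus off by more than your half-spread:

```python
# Toy sketch of the numbers above (all values hypothetical): per-fill P&L
# when your fair-value estimate is good vs. off by more than the spread.

def mm_pnl_per_fill(estimated_fair, true_fair, half_spread):
    """Expected P&L on the next fill, assuming informed flow hits whichever
    quote is on the wrong side of true fair value."""
    bid = estimated_fair - half_spread
    ask = estimated_fair + half_spread
    if true_fair < bid:
        return true_fair - bid   # everyone sells to you above true value
    if true_fair > ask:
        return ask - true_fair   # everyone buys from you below true value
    return half_spread           # estimate is good: earn the half-spread

print(mm_pnl_per_fill(100, 100, 2))  # right about fair value: 2 per fill
print(mm_pnl_per_fill(100, 95, 2))   # fair is really $95: -3 per fill
```

Notice the asymmetry: your best case is the half-spread, but a bad estimate can cost you more than that on every fill until you adjust.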

“But wait,” you might say, “I’ll just adjust my prices!”

Sure, you can, and you should.

But remember, every adjustment based on market feedback comes at a cost. And when your best-case scenario is only half the spread, you can’t afford many mistakes.

This is the market maker’s dilemma: most of the time, you need to be right, or at least not wrong, by more than half your spread.

It’s a constant high-wire act with no safety net.

So, how do successful market makers do it?

They have better models for estimating fair value (and they update their estimates faster than others can pick off their stale quotes).

There are many ways to model fair value.

Often, this means looking at prices on other exchanges. If you’re making markets for Bitcoin on a smaller exchange, knowing the price on Binance would be a very good idea.

This isn’t unique to market making. Trading is largely about knowing which prices are good and which are bad, on average.

But market making differs from most other trading approaches in *one crucial aspect*.

In other forms of trading, you only get involved when there’s an obvious edge.

On the other hand, as a market maker, you’re *always* involved – you’re constantly quoting around your fair value. You can’t sit back and wait for the best opportunities.

Now, I’m not saying that you can’t make it work as a market maker.

But I very strongly think that beginners should focus on *easier games*.

Things like risk premia harvesting and crypto perpetual futures basis arbitrage require little execution skill and provide a path for building skills, experience and capital.

If you’re really interested in market making, a good approach to learning the ropes is to try to come up with ways to pick off bad quotes – act when market makers fail to update their notion of fair value fast enough.

That is, flip things around and be the person who takes advantage of *bad* market making.

You’ll learn a ton about market making without taking on the constant risk exposure by having quotes always dangling in the market.

We used to do this on DeFi.

We would create our own model of fair value based primarily on prices on centralised exchanges (and sometimes a premium or discount depending on the platform) and then pick off stale quotes on decentralised exchanges that were slow to update.
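A minimal sketch of that check, with made-up numbers and a hypothetical `stale_quote_edge` helper (a real implementation also needs order book depth, latency, and gas costs):

```python
# Hypothetical sketch of the check described above: build fair value from a
# centralised-exchange mid (plus any venue premium), then test whether a
# decentralised-exchange quote is stale enough to pick off after costs.
# The helper name and all numbers are made up for illustration.

def stale_quote_edge(cex_bid, cex_ask, venue_premium, dex_bid, dex_ask, costs):
    """Return ('buy'|'sell', expected edge) if a DEX quote is profitably
    stale relative to fair value, else None."""
    fair = (cex_bid + cex_ask) / 2 + venue_premium
    if dex_ask + costs < fair:        # DEX offer below fair: buy it
        return ("buy", fair - dex_ask - costs)
    if dex_bid - costs > fair:        # DEX bid above fair: sell into it
        return ("sell", dex_bid - fair - costs)
    return None

# CEX mid 100.0, DEX still offering at 99.2, 0.3 of round-trip costs:
print(stale_quote_edge(99.9, 100.1, 0.0, 98.8, 99.2, 0.3))
```

This is essentially the "eyeballing two order books" step, written down.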

When we started doing this, it was surprisingly easy. You could make money by eyeballing two order books and click trading.

As we went along, it became increasingly competitive. To stay ahead of the game, we automated our strategy and optimised it as best we could.

Eventually, we could no longer compete, but it sure was a good ride.

I suspect opportunities like this still exist, especially in the murkier areas of crypto. Of course, they come with their own set of risks and trade-offs.

In summary, market making tends to be harder than most beginners think.

There isn’t a great deal of room for error, especially when your primary form of feedback costs you an asymmetrical amount of money compared to your edge.

As a beginner, start simple.

Focus on easier, more forgiving strategies first. Build your skills, your experience, and your capital.

Consider doing the opposite of market making as you learn the ropes – taking advantage of bad market making by picking off stale quotes. Then, if you still feel called to try market making, you’ll be much better equipped to take it on.

The post How important is keeping up with macro news? appeared first on Robot Wealth.

Someone recently asked me if obsessing over FOMC announcements, Non-Farm Payrolls reports, geopolitics and other macro news is important for traders.

My answer might surprise you: unless you’re actually a dedicated macro trader – which is highly unlikely if you’re reading my stuff – **it almost certainly won’t help your trading**.

Let’s start with what we know about big macro events, generally.

One thing we can be quite confident in is that the market is predictably more volatile around big events.

I ran the numbers on SPY volatility during FOMC days versus regular days. It turns out that it’s about 20% more volatile on FOMC days.

This presents us with a choice: we could either do something about that (such as cut size), or we can just accept that tomorrow is going to be twenty per cent more volatile than usual.
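Running that comparison takes only a few lines. Here's a sketch on synthetic data (the ~20% figure quoted above comes from actual SPY returns, not this toy series):

```python
# Sketch of the event-day volatility comparison, on synthetic data. The ~20%
# figure in the text comes from real SPY returns, not this toy series.
import numpy as np

rng = np.random.default_rng(0)
n = 2500                                   # roughly ten years of daily returns
is_event_day = np.zeros(n, dtype=bool)
is_event_day[::31] = True                  # pretend FOMC-style event days
returns = rng.normal(0.0, 0.010, n)        # ordinary days: 1.0% daily vol
returns[is_event_day] = rng.normal(0.0, 0.012, is_event_day.sum())  # 20% hotter

vol_ratio = returns[is_event_day].std() / returns[~is_event_day].std()
print(f"event-day vol / ordinary-day vol: {vol_ratio:.2f}")
```

With real data you'd simply replace the synthetic returns with SPY daily returns and flag actual FOMC dates.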

It’s trivial to form a decent view on **volatility**.

But I don’t believe that knowing things about monetary policy or the economy could possibly lead me to predict the **market direction** around an FOMC event.

There is precisely zero chance of someone like me having any edge in predicting that based on interpreting the macro news.

Here’s how I think about the importance of keeping abreast of macro news and events in a nutshell:

- *Know when the big events are happening.* One thing you have some control over is how much risk you want to have on. And if you know that volatility is likely to spike, you might reduce your exposures a little. To be honest, I usually don’t – I just accept the extra volatility – but at least you can make a deliberate choice.
- *Don’t waste time trying to predict outcomes.* The market is way better at that than you or I. You’re much more likely to get paid by doing something useful that the market values (providing liquidity, smoothing out seasonal or structural effects, etc.) than outwitting the broader market.
- *Accept that extra volatility is a double-edged sword.* It’ll work in your favor half the time and against you the other half. That’s just how it goes.

Of course, there’s nothing wrong with staying informed.

I’m partial to certain newsletters myself!

But I would never kid myself that these make me a better trader. It’s not my edge.

So far, I’ve been a bit negative and told you some things that won’t help your trading.

So here are some things that absolutely will **help your trading** and are deserving of your attention:

- Understanding what you can and can’t control (and obsessing about the former and accepting the latter).
- Trading multiple uncorrelated edges.
- Knowing why your edge exists and why you can exploit it (if you can’t explain this in a couple of sentences, you probably don’t have an edge).
- Doing the data analysis grunt work to help you understand your edge.
- Being deliberate about risk allocation (without obsessing over precision).
- Having solid operational processes for managing your trading.

Personally, I barely read financial news. It’s not that I don’t care (OK, maybe a little because I don’t care), but because I don’t think it’s all that important for my trading.

I do keep tabs on when big events are happening. Elections, FOMC meetings, even Nvidia earnings (because it can mess with SPX volatility).

But I don’t need to know the details. I’m not trying to outsmart the market on macroeconomic trends – I just take note of how these events typically affect volatility.

The key points are:

- Know when big events are coming so that you’re prepared for volatility spikes.
- Don’t waste time trying to predict the outcomes of specific events.
- Focus on managing your exposures and risk.
- Spend your energy on things you can actually control – your processes, your risk management, understanding your edges, and finding more things to trade.

Remember, good trading isn’t about having a crystal ball. It’s about doing useful things, taking on risks that others shun, and managing those risks well.

If you find yourself worrying about the outcome of macro events, chances are you could benefit from focussing more on the things you can actually control.

The post What can you do with a small trading account? appeared first on Robot Wealth.

A lot of my advice boils down to *trading is hard, so go after the easiest stuff you can find first.*

Some call this defeatist. I call it realistic and practical.

A big part of trading is not screwing up. So, at the start, learning skills, defining processes, and building confidence are important.

A question we get a lot is *I’ve got a small account; what’s the best way to manage that?*

And the answer is *slow and steady.*

If I were trying to make friends, rather than give you good advice, I would say this:

*If you have a small trading account, you can do really high-performing, low-capacity stuff. There are trades in crypto futures, and there are opportunities in constrained equity trades, where you can get very high risk-adjusted performance. And that’s because those opportunities aren’t really big enough for more sophisticated traders to be interested in. So let’s go after these really niche, constrained things.*

But there’s a catch.

Those trades are **tough.**

They require more skill, more time, and often more technology than a beginner has at their disposal.

It’s like trying to run before you can crawl.

*So what’s the alternative?*

**Start with the easy stuff.**

I’m talking about strategies that might not set the world on fire with their returns, but are forgiving enough to let you learn the ropes without blowing up your account or destroying your mental health.

Think of it like this: on one end of the spectrum, you’ve got high-skill, high-return strategies. On the other, you’ve got lower-skill, more modest return strategies.

Where do you think you should start?

If you said “high-skill, high-return”, I admire your ambition. But let’s be real – you’re setting yourself up for a world of pain.

Instead, focus on the easier end of the spectrum.

*Why?*

Because it’s about *building confidence, not screwing up, and learning to run processes consistently*.

Here’s a real-world example: Risk premia harvesting.

It’s not sexy, but it’s a solid strategy that’s been providing a tailwind to macro traders for decades.

And guess what?

You can start doing it today.

Now, I know what you’re thinking. “But I want those juicy returns!”

I get it. But don’t worry – those high-return strategies aren’t going anywhere.

They’ll still be there when you’re ready for them. And trust me, you’ll be much better equipped to handle them after you’ve cut your teeth on the simpler stuff.

*So what kind of returns can you expect from these simpler strategies?*

Let’s talk Sharpe ratios.

For a single risk premium harvesting approach, you’re looking at about 0.5 to 1. In plain English, that means your returns will be about half your volatility, or a bit more.

If you’re smart about it and combine a few of these strategies, you could potentially bump that up to a Sharpe ratio of 1 – meaning your returns roughly equal your volatility.

Now, I know that might not sound earth-shattering. But here’s the kicker – you could potentially manage this kind of portfolio with just a monthly check-in. Try doing that with high-frequency crypto arbs!
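To make those numbers concrete: the Sharpe ratio is just annualised return divided by annualised volatility. A minimal sketch (risk-free rate ignored for simplicity):

```python
# Minimal sketch of the Sharpe arithmetic: annualised return divided by
# annualised volatility (risk-free rate ignored to keep it simple).
import numpy as np

def sharpe(daily_returns, periods_per_year=252):
    r = np.asarray(daily_returns)
    return (r.mean() * periods_per_year) / (r.std() * np.sqrt(periods_per_year))

# A strategy doing ~8%/yr at ~16% vol should come out near 0.5:
rng = np.random.default_rng(1)
r = rng.normal(0.08 / 252, 0.16 / np.sqrt(252), 100_000)
print(round(sharpe(r), 2))
```

A Sharpe of 0.5 means you earn about half a unit of return per unit of volatility; a Sharpe of 1 means roughly one for one.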

As you build your skills and confidence, you can start venturing into more complex territory. Those constrained trades with Sharpe ratios of 4 or 5? They’re not going anywhere. But by the time you get to them, you’ll have the skills to actually capitalise on them.

Remember, trading isn’t a sprint – it’s a marathon.

Start slow, build your skills, and before you know it, you’ll be tackling those high-return strategies with confidence.

The post Quant systematic trading vs discretionary appeared first on Robot Wealth.

Someone recently asked me about the difference between our approach and discretionary order flow trading and technical analysis.

Order flow trading involves sitting in front of a screen and looking at orders and trades as they come in and trying to work out if more buying or selling is happening, where those trades are occurring, and how that might predict the short-term future.

Technical analysis relies on past price history and various patterns it makes on a chart.

Both approaches involve a lot of sitting in front of a screen and using one’s discretion, skill, and experience to make trades.

*How is that different to the approach we teach in Bootcamp?*

First, let’s get clear about what we mean by “edge.”

An edge is a pricing inefficiency. A mispricing. Something that’s worth $100 that is trading at $110. Something that you expect to make money on in the long run if you do it a lot.

Clear enough.

And an edge is caused by other people creating excess supply or demand. It arises due to other people causing the market to trade at the wrong price, which you then take advantage of.

This is the fundamental cause of any market inefficiency – **excess supply or demand**.

How you discovered the inefficiency doesn’t really matter.

Whether you discovered it by talking to a drunk broker at lunch, by doing data analysis, or by taking a ton of Adderall and staring at ticks for 16 hours without a break, *it’s the same inefficiency.*

Don’t think that there’s some group of people with quantitative tools that are homing in on different effects from everyone else because they’re really smart.

All of these effects are created by people buying and selling – nothing more, nothing less.

And those effects are just as real, whether you discovered them by talking to your mate, by doing data analysis, or by racking up screen time.

An order flow guy could be looking at some effect that he learned about through months of disciplined screen time. And I could be looking at the exact same thing, but I learned about it by doing data analysis.

He used his subconscious brain; I used some data wrangling and a scatter plot.

We’re doing the same thing, we just got there via different paths.

The key point is that there’s no special set of edges for order flow traders and another set of edges for the quants. **They’re the same thing**!

The approach to discovering them is where people differ.

The defining feature of the approach we teach in Bootcamp is that **it minimises subjectivity**.

We avoid saying things like, “Hey, look at this effect, but it only plays out sometimes and under certain conditions, and you need context and nuance and experience to understand those conditions.”

We don’t have any interest in that because we can’t prove that it has validity.

And we certainly wouldn’t try to teach you that stuff. We don’t know how to teach you to acquire the sort of experience and nuance that matters for order flow traders or technical analysts or other discretionary approaches.

But we **are** interested in edges that are obvious and repeatable.

Edges that show up in the data no matter how you slice it.

Edges that are so big that you can mess up the execution and still make money trading them.

We teach how to analyse these edges and reason about them. How to build a process to extract them – a process that is repeatable and doesn’t require any skill other than following the process.

Crucially, we urge you to do things that are **easy**. At least at the start.

It takes time and effort to sit in front of a screen and train your subconscious mind to recognise intraday patterns. I know of very few traders who can make money doing this. I’m quite sure I couldn’t.

It’s exceptionally difficult and overwhelmingly unlikely to work out.

Instead, we think that you should tackle easy, obvious edges first.

Go slow. Learn where edge comes from so that you know where to look. Learn basic data analysis and portfolio construction skills so that you can analyse and trade them yourself.

These are skills that anyone can learn.

Chances are you’re not a galaxy-brained Adderall aficionado who might have a shot at learning discretionary trading.

So don’t waste time on that approach. Go easy on yourself and start with things that are much more likely to work out, and learn some useful skills along the way.

The post The Generic, No-Voodoo Trading Process appeared first on Robot Wealth.

I notice that in conversations with traders, they often think about entries, exits, and discrete trades with some sort of life-cycle.

*But your P&L does not come from a trade.*

A trade doesn’t change your P&L much, except for the commission and spread costs that you incur.

Your P&L comes from holding a risk position (a stock, crypto, etc), and then that position subsequently changes in value.

So you’re much better off focussing on **what positions you want** rather than when you enter and when you exit.

When you enter and exit doesn’t really matter. That’s just a swap between a cash balance and an asset.

What really matters is the exposures you’re holding and how they change.

Trades, at least in my mind, are not things with a life cycle that opens and closes; they’re just events that change my positions.

If I buy $100 worth of IBM stock, that simply shifts $100 out of my cash balance and into my IBM stock balance. I’ve just swapped some balances.

The P&L comes from the change in the value of the stock.

Let’s say I sell that IBM stock for $110. We can agree that I made $10 (ignoring costs).

But the P&L didn’t come from the buying and selling. It came from holding a position while the value of the stock changed.

The trade is relevant because it results in us having a position. But our P&L comes from holding that position while its value changes, not from the trade itself.
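That bookkeeping is easy to verify mechanically. A toy mark-to-market ledger for the IBM example above:

```python
# A toy mark-to-market ledger for the example above: trades only move value
# between cash and stock; the $10 appears while the position is held.

def account_value(cash, shares, price):
    return cash + shares * price

cash, shares = 100.0, 0
# Buy $100 of IBM at $100: a swap of balances, account value unchanged.
shares, cash = shares + 1, cash - 100.0
assert account_value(cash, shares, 100.0) == 100.0
# IBM marks up to $110: the P&L shows up while we simply hold.
assert account_value(cash, shares, 110.0) == 110.0
# Sell at $110: another swap of balances, no new P&L created.
cash, shares = cash + 110.0, shares - 1
assert account_value(cash, shares, 110.0) == 110.0
print(cash)  # 110.0
```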

*Our mission as traders, therefore, is to constantly (or at some acceptable frequency) look at the positions we have and compare them to the positions we want. And then to trade into those ideal positions, if they’re sufficiently different to what we currently have. *

That means that there’s never any excuse for bag-holding. There’s no excuse for saying, “I’m only in this position because I’m already in it.”

If you wouldn’t buy a thing you’re holding right now, then you shouldn’t have it on (unless it’s sufficiently similar to something you would like to hold, and the costs of switching are greater than your difference in expected return – but that’s a story for another time).
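That position-first loop is simple to express in code. A sketch, with made-up tickers and a hypothetical minimum-trade threshold standing in for "the costs of switching":

```python
# Sketch of the position-first process: diff current holdings against target
# holdings and trade the differences that clear a (hypothetical) threshold.

def rebalance_trades(current, target, min_trade=0.0):
    """Return {asset: signed trade size} taking `current` to `target`."""
    trades = {}
    for asset in set(current) | set(target):
        delta = target.get(asset, 0.0) - current.get(asset, 0.0)
        if abs(delta) > min_trade:
            trades[asset] = delta
    return trades

current = {"IBM": 100, "AAPL": 50}   # what you have
target = {"IBM": 60, "MSFT": 40}     # what you want
print(rebalance_trades(current, target, min_trade=5))
```

Sell 40 IBM, sell all 50 AAPL, buy 40 MSFT – whether you happen to already hold something never enters into it.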

**Having** a position on and **putting it on** afresh are the same decision.

This seems a subtle point, but it’s worth engraving on your brain. Because without it, you end up with all sorts of voodoo beliefs like not realising losses or bagholding.

If a stock I hold decreases in value by $10, then that’s a $10 loss whether I’ve sold that position or not.

People will often say, “I don’t want to sell that, even though I don’t like it, and even though I wouldn’t buy it today.”

But the truth is, you’ve already made that loss! You’re already holding something that was once worth $100 and is now worth $90.

Pretending that you haven’t lost that $10 because you haven’t realised it is honestly little more than some emotional cope that is not going to help you one bit.

In fact, ** it’s going to hinder you** because what remains of your capital will be tied up in positions that you don’t want!

One possible exception is when an asset is very illiquid, and selling it would mean that you realise a significant additional loss. But even then, you’d need to consider the best place to have your capital deployed.

To summarise, there’s never an excuse for holding a liquid position that you don’t like anymore.

As Euan Sinclair says, positions are not marriages. They are not things that need to be repaired. If you don’t like the one you’ve got, you can immediately change it to one that you like better.

This is the crux of trading.

You have some positions. Over time, some of them get bigger, some get smaller. Some of them you start to like more, some you like less, depending on your views of the future.

Your job as a trader is to compare these positions to the ones you ideally want and to trade towards them. You can fix things immediately and trivially by trading out of the things you don’t want and towards the ones you do want.

Sometimes, people are reluctant to do this because they want to make back losses on individual positions. But if you want to repair your P&L, putting on a better position is far more likely to help than just bag-holding an existing position that you don’t want anymore.

Let go of the mindset that has you white-knuckling and bag-holding unrealised losses. Instead, get into a more myopic, present-tense mindset where you focus on the difference between your existing and ideal positions.

No doubt this will feel jarring or annoying if I’ve labelled your thinking “voodoo”. But I hope you’ll let it percolate and think about it. Your P&L will thank you in the long run.

The post So You Want to Start a Trading Business appeared first on Robot Wealth.

I work with many traders who aspire to turn their passion for the markets into a serious trading business. And I think this aspiration is entirely appropriate – it’s critical to approach the trading problem as seriously as you would any business venture.

A question that always comes up is, *“Where do I start?”*

In this article, I’ll discuss what to focus on when you’re starting out.

It’s worth taking stock of where you are right now versus where you want to be in the future.

You have some ideas of what you want to trade and the things you need to run a well-oiled trading operation:

- Alphas/edges
- Analytics and research
- Technology – databases, broker connections, reporting, etc
- Processes and management

I’ve noticed that people tend to have a decent understanding that they’ll need this stuff.

But the mistake I see people making is that they immediately put their heads down and get to work making it.

*The problem with that is that you don’t actually know what you want!*

Things move fast and won’t look the same in the future as when you started building. And without some actual experience in the market, you don’t really know what you need.

And the whole time you’ve got your head down building stuff, you’re not interacting with the market. You’re not making trades. You’re not getting feedback.

You’re not solving *real *problems; you’re solving *future* problems that you *assume* you’ll have at some point.

And not only were you not learning market lessons, you weren’t making any money because you weren’t trading!

The problem is that if you’re focused on building for the future – akin to an entrepreneur assuming “if I build it they will come” – a whole lot of things change *while you’re building*, and you simply aren’t aware until you’ve *finished building*. The thing you thought you needed to do isn’t necessarily relevant anymore, and you find that you need to switch to something else, and you’ve wasted a bunch of time.

*So what’s the solution?*

In trading, by far the most important thing is *making money today.*

Do everything you can to make money *today*, using the tools and skills that you have *today*.

And then, iteratively, move towards improving your setup and capabilities. Work on technical problems that are as close to what you’re trading as you possibly can. That way, you’re building things that support your trading today and gradually moving yourself closer to where you want to end up.

Here’s a real-world example:

When we started trading crypto back in 2021, we started working on low-latency execution to trade basis spreads and arbitrage on the centralised exchanges, but it soon became clear that that wasn’t the best place for us to compete. So we ended up moving over to DeFi and doing something similar on Solana smart contracts, because it was much easier to compete.

We certainly didn’t envisage doing that at the outset. When we first started, I didn’t even know that Solana was a thing! But because we prioritised trading from the outset, we quickly learned enough to figure out what we actually needed, which was *completely different *to what we initially thought we needed.

In practical terms, how do you do this if you are literally starting with nothing?

Simply, you need to trade *today*, as best you can.

Ask yourself, *What can I be doing right now:*

- *with my non-existent technology*
- *that is likely to make money*
- *and won’t take up all my time, so that I can work on building the business and push myself towards where I want to be*
- *and is somewhat aligned with where I think I want to be in the future?*

It’s an iterative process, not a linear one. You’re doing the best you can to make money today (this is the highest priority) with one eye on where you think you need to be in the future.

It’s iterative because as you trade, your vision for where you think you need to be in the future will evolve. You feed your experiences and your learnings in the market into your vision of the future and gradually work towards it.

*Let’s make this a bit more real.*

At the start, you should prioritise trading simple and obvious edges. Ideally, they would be mostly systematic trades that you can execute manually because that will reduce the operational overhead required and won’t take up all of your time.

*So what are these simple and obvious edges that are forgiving to trade?*

Great question! There are surprisingly many, and we share loads of examples in Trade Like a Quant Bootcamp, but here are a few:

Many crypto perpetual futures tend to trade at a premium to spot for structural reasons and the (fluctuating) appetite for leveraged long exposure.

You’ll find that this premium is sticky enough that you can make money after execution costs by trading into a perpetual position hedged with spot.

This edge won’t shoot the lights out anymore (none of the super obvious stuff does on its own), and the trade tends to do worse when crypto fever comes off, as it has recently. But it’s an obvious trade and is relatively easy to manage.
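For a rough sense of the carry, you can annualise the funding rate. A sketch that assumes an 8-hour funding interval – common, but it varies by venue, so check yours:

```python
# Rough carry arithmetic for the spot-hedged perp trade. Assumes an 8-hour
# funding interval, which is common but venue-specific - check yours.

def annualised_funding_yield(funding_rate_8h, fee_drag_annual=0.0):
    periods_per_year = 3 * 365   # three 8-hour funding windows per day
    return funding_rate_8h * periods_per_year - fee_drag_annual

# e.g. a steady 0.01% per-8h funding rate, less 2%/yr of fee/execution drag:
print(f"{annualised_funding_yield(0.0001, 0.02):.1%}")
```

The fee-drag number is purely illustrative; in practice it depends on how well you execute the spread.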

You can potentially juice it a little by doing it on DeFi, but the trade-off is heightened counter-party risk and additional operational overhead.

To give you an idea of the size of the edge, we’ve done this trade on DeFi in 2024 for an annualised return of about 30% for not much effort.

Importantly, in addition to being likely to make money, this trade will move you towards your goal by:

- Forcing you to understand idiosyncratic supply/demand patterns for leveraged crypto exposure.
- Teaching you how to execute spreads with minimum cost and impact – which is difficult and an important lesson that you’ll use all the time.
- Building tools to support the trade:
  - Tools for risk management and exposure reporting.
  - Execution tools for executing spreads with minimum impact (getting you on the path to algorithmic trading).

Where you can find similar assets (possibly trading on different venues), you can sometimes find reasonably forgiving statistical arbitrage opportunities.

This trade still works reasonably well in crypto, particularly if you trade the legs on different venues.

An interesting place to look for this trade is in crypto perpetual futures contracts trading on different venues. You can sometimes find opportunities to short the one paying the higher funding, and long the one paying the lower funding, and you can profit on both the funding differential and the convergence of the two contracts.
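Mechanically, choosing the legs is just a comparison of funding rates. A sketch with made-up venue names (ignoring fees, margin, and basis risk):

```python
# Mechanics of the leg selection described above, with made-up venue names:
# short the perp paying the richest funding, long the one paying the least.

def funding_arb_legs(funding_by_venue):
    """Return (venue to short, venue to long, funding differential)."""
    short_venue = max(funding_by_venue, key=funding_by_venue.get)
    long_venue = min(funding_by_venue, key=funding_by_venue.get)
    diff = funding_by_venue[short_venue] - funding_by_venue[long_venue]
    return short_venue, long_venue, diff

legs = funding_arb_legs({"venue_a": 0.0003, "venue_b": 0.0001})
print(legs)  # short where funding is richest, long where it's cheapest
```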

Admittedly this one has become less lucrative as time has gone on, but it’s still likely to make money, be amenable to trading by hand, and not take up a heap of time.

It’s a little harder in US equities, but if you can come up with a universe of pairs to trade (harder than it sounds), it is still possible to make it work, even trading end of day. I haven’t tried, but it’s probably a more forgiving trade in less developed markets, although you’ll get less size on – which probably doesn’t matter when you’re starting out.

This trade moves you towards your goal by:

- Helping you understand idiosyncratic supply/demand imbalances between exchanges.
- Forcing you to learn how to manage cross-exchange risk.
- Building tools and processes to support the trade:
  - Tools for cross-venue risk management and consolidated exposure reporting.
  - Processes and contingency plans (for example, for maintaining margin as the trade moves).

This is probably the most obvious edge of them all. It’s literally just buying things that tend to be priced cheaper than their expected value because of the inherent risk in holding them.

We’ve written extensively about risk premia harvesting in the past – go here for the low down.

This one takes almost no effort to manage, is extremely forgiving to execute because you can access very liquid markets, and has been a strong performer.

Below is a simulation of the risk premia harvesting implementation we share in Trade Like a Quant Bootcamp (the coloured areas represent the exposures of the assets held in the strategy).

This trade moves you towards your goal by:

- Helping you understand the nature of risk and reward in the markets (no risk, no premia).
- Teaching you about rebalancing and position sizing.
- Building tools and processes to support the trade:
  - Tools for calculating rebalance trades.
  - Processes for rebalancing and tools for P&L attribution.

VX futures tend to trade at a different price, usually higher, than the VIX index. This discrepancy is called *the basis.*

We think that at least some of the basis represents a risk premium due to the asymmetrical nature of VIX moves, and it manifests in the observed negative returns to being long volatility.

This suggests that being short volatility is a good idea, on average (in reality, being short volatility will make a little money most of the time but lose a lot of money occasionally and must therefore be managed very carefully).

A simple trade is to assume that the volatility risk premium is always the same and always positive (or at least, very difficult to predict) and to therefore maintain a constant dollar short volatility position.
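Maintaining a constant-dollar short is just a rebalancing rule. A sketch using the standard $1000 VX futures multiplier (position sizes are purely illustrative):

```python
# Sketch of the constant-dollar rebalancing rule, using the standard $1000
# VX futures multiplier. Position sizes here are purely illustrative.

def rebalance_short_vol(target_dollars, price, current_contracts, multiplier=1000):
    """Contracts to trade to restore a constant-dollar short VX position."""
    target_contracts = -target_dollars / (price * multiplier)
    return target_contracts - current_contracts

# Short $100k of VX at 20.0 is -5 contracts. If VX drops to 16.0, staying
# at constant dollars means selling another 1.25 contracts:
pos = -100_000 / (20.0 * 1000)
trade = rebalance_short_vol(100_000, 16.0, pos)
print(pos, trade)  # -5.0 -1.25
```

Note how the rule sells more as the short wins and buys back as it loses, which is part of why the position must be watched carefully.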

This is a simple trade that can be managed end of day. You can even do it by getting long a short volatility ETF, which makes it more forgiving to manage.

Later, as you learn more about VIX dynamics, you’ll realise that the volatility risk premium is somewhat amenable to timing (at least, much more so than the equity risk premium). This will lead to more sophisticated trades.

This trade moves you towards your goal by:

- Forcing you to learn about VIX dynamics, basis effects, and the volatility risk premium.
- Forcing you to learn about position sizing and asymmetrical risk.
- Teaching you a healthy respect for volatility.
- Building tools and processes to support the trade:
- Processes for rebalancing and reporting pnl.

When you’re starting your trading business, be careful not to look to the future and start building things for a future that might never come. Your learnings and knowledge will develop so quickly that you’ll likely need entirely different things than those you originally assumed.

Instead, focus on making money right now and iteratively move towards the place you want to be long term.

Be flexible, open minded and prepared to change.

*Focus more on trading and less on building.*

The post So You Want to Start a Trading Business appeared first on Robot Wealth.

The post Intuitive Options Pricing appeared first on Robot Wealth.

In this article, we’ll use simulation and simple visualisations to build intuition around how different variables drive the price of an option.

Building this intuition is important because it helps you react quickly and make decisions without relying on complex pricing models.

Let’s get to it.

A call option gives the holder the right to buy the underlying at the strike price on or before the expiration date.

That means that the value, $V_{call}$, of the call option at expiration is described by a step-wise function of the price at expiry, $p_e$, and the strike, $s$:

$$V_{call} = \begin{cases} 0 & p_e \le s \\ p_e - s & p_e > s \end{cases}$$

That just says that the option, at expiration, is:

- worthless if the price at expiration is less than or equal to the strike
- worth something if the price at expiration is more than the strike

In precise terms, if the option expires “in the money” then it is worth the price at expiration less the strike. That’s simply because when the option expires in the money, you acquire the stock at the strike price and can immediately sell at the expiration price, pocketing the difference.

Here’s a visual example of the payoff function of a call option at expiry with a strike of 100:

```
# session options
options(repr.plot.width = 14, repr.plot.height=7, warn = -1)
library(tidyverse)
library(patchwork)
# chart options
theme_set(theme_bw())
theme_update(text = element_text(size = 20))
```

```
# "intrinsic value"
min_price <- 0
max_price <- 200
strike <- 100
call_payoffs <- tibble(price = c(min_price, strike, max_price)) %>%
mutate(value = case_when(price < strike ~ 0, TRUE ~ price - strike))
call_payoffs %>%
ggplot(aes(x = price, y = value)) +
geom_line() +
ggtitle('Value of call option at expiration')
```

A put option gives the holder the right to sell the underlying at the strike price on or before the expiration date.

That means that the value, $V_{put}$, of the put option at expiration is also described by a step-wise function:

$$V_{put} = \begin{cases} s - p_e & p_e < s \\ 0 & p_e \ge s \end{cases}$$

That just says that the option, at expiration, is:

- worth something if the price at expiration is less than the strike
- worthless if the price at expiration is greater than or equal to the strike

In precise terms, if the option expires “in the money,” it’s worth the strike less the price at expiration. That’s simply because when the option expires in the money, you acquire a short position in the stock at the (high) strike price and can immediately buy it back at the (lower) expiration price, pocketing the difference.

Here’s a visual example of the payoff function of a put option at expiry with a strike of 100:

```
put_payoffs <- tibble(price = c(min_price, strike, max_price)) %>%
mutate(value = case_when(price > strike ~ 0, TRUE ~ strike - price))
put_payoffs %>%
ggplot(aes(x = price, y = value)) +
geom_line() +
ggtitle('Value of put option at expiration')
```

So far so good. The value of an option at expiration is fairly intuitive – it’s either worth something or it isn’t, depending on the underlying’s price at expiration in relation to the strike.

What about the value of an option *prior* to its expiration?

If we ignore annoying things like interest rates and dividends, then the option’s value is related to just two fundamental properties:

- the possible values for the underlying at expiration
- the probability of the underlying closing at these values at expiration

In fact, the option’s expected value is simply the sum of these possible values weighted according to their probability of occurring.

Here’s a simple example to illustrate the point.

Say we play a game with only two outcomes: a coin toss. If heads comes up, we win $1.50. If tails comes up, we lose $1. The expected value of playing this game is the probability-weighted sum of the possible outcomes:

- 50% chance of winning $1.50
- 50% chance of losing $1.00

The expected value is $E = 0.5 \times 1.5 + 0.5 \times (-1) = 0.25$

I’m sure you intuitively knew that playing this game many times would, on average, yield $0.25 per game. We have only two possible outcomes, and the probabilities associated with them are known.
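We can check that intuition empirically by simulating the coin toss many times and confirming the average payoff converges on the analytical expected value of $0.25. A quick sketch:

```r
set.seed(42)
n <- 1e6
# each toss pays +1.50 for heads or -1.00 for tails, each with probability 0.5
payoffs <- sample(c(1.5, -1), size = n, replace = TRUE)
mean(payoffs)  # converges on 0.25
```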

But what about something more complex with a large number of possible outcomes?

Well, we do precisely the same thing. We take the possible outcomes, weight them by their probability of occurrence, and then sum those values. We just have a bit more work to do to understand those possible outcomes.

Here’s a visualisation to demonstrate.

- The blue line is the payoff function of a call option struck at 100. It shows the payoff associated with each possible expiry price between 80 and 120.
- The red line is the probability associated with each expiry price.

To calculate the expected value of the option, we multiply the blue line with the red line, and then sum the resulting set of numbers.

```
strike <- 100
payoff <- tibble(
possible_values = 80:120,
payoff = case_when(possible_values < strike ~ 0, TRUE ~ possible_values - strike),
probability = dnorm(seq(-5, 5, length.out = length(possible_values)), mean = 0, sd = 1)
)
payoff %>%
ggplot(aes(x = possible_values, y = payoff)) +
geom_line(aes(colour = 'value at expiry')) +
geom_line(aes(y = probability*20, colour = 'probability of value')) +
ggtitle('Value of call option at expiry') +
labs(colour = "") +
scale_y_continuous(sec.axis = sec_axis(~./20, name = "Probability of value occurring")) +
scale_colour_manual(values = c('red', 'blue')) +
theme(legend.position = c(0.2, 0.9))
```

The possible expiration values up to and including 100 contribute nothing to the expected value of the option, since we’re multiplying the red line by the blue line (which is zero at and below 100).

In this particular example, a lot of probability mass is associated with possible expiration values around the strike (because the red line is high around the strike). As the payoff increases (the blue line goes up), it has less probability associated with it.

Let’s plot the value of the probability-weighted payoffs to see what this looks like:

```
payoff <- payoff %>%
mutate(p_weighted_payoff = payoff * probability)
payoff %>%
ggplot(aes(x = possible_values, y = p_weighted_payoff)) +
geom_line() +
geom_area(fill = 'blue', alpha=0.6)
```

This plot shows the contribution of each of the possible expiration values to the expected value of the option given the probability distribution we saw above.

To get the expected value of the option, we simply sum these probability-weighted payoffs, which is equivalent to the area under the curve above:

```
payoff %>%
summarise(ev = sum(payoff * probability)) %>%
pull(ev) %>%
round(2)
```

6.35

The expected value of our option given the probability distribution of possible outcomes is about $6.35.

What if we had a different probability distribution?

Let’s say the distribution of possible expiration values was wider – that is, more of the probability mass was contained away from the centre of the distribution. Here’s what that might look like:

```
strike <- 100
payoff <- tibble(
possible_values = 60:140,
payoff = case_when(possible_values < strike ~ 0, TRUE ~ possible_values - strike),
probability = dnorm(seq(-7, 7, length.out = length(possible_values)), mean = 0, sd = 2)
)
payoff %>%
ggplot(aes(x = possible_values, y = payoff)) +
geom_line(aes(colour = 'value at expiry')) +
geom_line(aes(y = probability*40, colour = 'probability of value')) +
ggtitle('Value of call option at expiry') +
labs(colour = "") +
scale_y_continuous(sec.axis = sec_axis(~./40, name = "Probability of value occurring")) +
scale_colour_manual(values = c('red', 'blue')) +
theme(legend.position = c(0.2, 0.9))
```

```
payoff <- payoff %>%
mutate(p_weighted_payoff = payoff * probability)
payoff %>%
ggplot(aes(x = possible_values, y = p_weighted_payoff)) +
geom_line() +
geom_area(fill = 'blue', alpha=0.6)
```

```
payoff %>%
summarise(ev = sum(payoff * probability))%>%
pull(ev) %>%
round(2)
```

25.99

In this case, we see that having more of the probability mass away from the mean of the distribution means that the larger payoffs contribute proportionately more to the expected value of the option.

What would cause us to have a distribution like that?

Great question!

This would arise if there was a wider range of possible expiration prices. And volatility is the driver of the range of possible expiration prices.

All else being equal, higher volatility results in a higher spread of possible prices $N$ time steps in the future. That makes intuitive sense – and we’ll see it in action shortly.

What about if we had a probability distribution that was right-shifted along the payoff function? That would look like this:

```
strike <- 100
payoff <- tibble(
possible_values = 80:130,
payoff = case_when(possible_values < strike ~ 0, TRUE ~ possible_values - strike),
probability = dnorm(seq(-5, 5, length.out = length(possible_values)), mean = 0., sd = 1)
)
payoff %>%
ggplot(aes(x = possible_values, y = payoff)) +
geom_line(aes(colour = 'value at expiry')) +
geom_line(aes(y = probability*20, colour = 'probability of value')) +
ggtitle('Value of call option at expiry') +
labs(colour = "") +
scale_y_continuous(sec.axis = sec_axis(~./20, name = "Probability of value occurring")) +
scale_colour_manual(values = c('red', 'blue')) +
theme(legend.position = c(0.2, 0.9))
```

```
payoff <- payoff %>%
mutate(p_weighted_payoff = payoff * probability)
payoff %>%
ggplot(aes(x = possible_values, y = p_weighted_payoff)) +
geom_line() +
geom_area(fill = 'blue', alpha=0.6)
```

```
payoff %>%
summarise(ev = sum(payoff * probability)) %>%
pull(ev) %>%
round(2)
```

27.06

Again, this increases the expected value of the call option, this time because the higher payoff values have more probability mass assigned to them.

But what would cause the probability distribution to be right-shifted like that?

That would occur if the current price was higher than the strike price.

This also makes intuitive sense – if the current price is higher than the strike price, then the range of possible outcomes at expiry is going to be skewed in the direction of higher payoff.

The examples above are completely contrived – I literally made the probability distribution I wanted in order to make a point.

But you can see that the value of the option prior to expiration very clearly depends on that distribution of possible outcomes.

It follows that in order to price our option before expiry, we need some forecast of that distribution. That is, we need a view on the probabilities associated with each possible outcome.

How might we arrive at such a view?

In situations like this, where we have a process that is heavily driven by uncertainty and randomness, it often pays to turn to simulation. The process looks like this:

- come up with a model of reality that captures the inherent uncertainty as best we can (or as we think appropriate)
- run the model many times
- calculate the probabilities associated with each possible outcome from the number of simulation runs that produced them

For instance, if we ran a simulation 1,000 times and 100 of those simulations produced a value of 10, then we’d infer the probability associated with the value 10 to be $100/1000 = 0.1$, that is, 10%.
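That counting procedure is easy to express in code. Here’s a minimal sketch that turns a set of simulated outcomes into a probability for each value (the outcomes themselves are just illustrative random draws):

```r
set.seed(1)
# 1,000 simulated outcomes of some discrete-valued process (illustrative only)
outcomes <- sample(8:12, size = 1000, replace = TRUE)
# probability of each outcome = count of simulations producing it / total simulations
probs <- table(outcomes) / length(outcomes)
probs
sum(probs)  # probabilities sum to 1
```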

There are of course difficulties associated with models of reality:

- the model is at best a representation of reality, an approximation
- we may not be able to accurately represent all of the things (variables) that impact the thing we want to simulate

Those limitations aside, models can still be useful. In particular in this case for getting a feel for the dynamics of option pricing.

Let’s keep things simple to start with. We’ll make some simplifying assumptions:

- the price of the security is a stochastic (random) process with no drift
- interest rates don’t exist
- dividends don’t exist

Think about the variables that drive the thing we’re trying to simulate – the terminal price of a security at some date in the future.

Fundamentally, those variables are (when we make our simplifying assumptions):

- the price of the security right now
- the time until the future date of interest (more time = larger range of outcomes, all else being equal)
- the volatility of the security (higher volatility = larger range of outcomes, all else being equal)

A widely accepted model of the price process is Geometric Brownian Motion (GBM). It has some nice properties that are consistent with stock prices:

- values can not be negative
- values will be log-normally distributed (a decent approximation for our purposes)

Here’s a GBM simulator. It takes a number of parameters:

- `nsim`: the number of simulations (that is, the number of price paths to simulate)
- `t`: the number of time steps to expiration
- `mu`: the mean of the stochastic process generating prices (we set this to zero in line with our assumptions)
- `sigma`: the annualised volatility of prices
- `S0`: the starting price at time t0
- `dt`: the length of a single time step (here dt is in units of years, consistent with our annualised volatility)

`epsilon` is a matrix containing the random draws from the generating process – one draw for each day for each simulation. We set this up as a matrix prior to running our simulator in order to leverage R’s ability to do vectorised operations. Sometimes in R we can replace loops with vectorised operations and reap dramatic increases in performance. This is what we’ve done here.

We simulate 1,000 possible price paths over 25 time steps given a starting price of 100, and a volatility of 10%. In the table below the code, you can see the first 10 values of the first 5 price path simulations:

```
gbm_sim <- function(nsim = 100, t = 25, mu = 0, sigma = 0.1, S0 = 100, dt = 1./365) {
  # matrix of random draws - one for each day for each simulation
  epsilon <- matrix(rnorm(t*nsim), ncol = nsim, nrow = t)
  # get GBM paths
  gbm <- exp((mu - sigma * sigma / 2) * dt + sigma * epsilon * sqrt(dt))
  # convert to price paths
  gbm <- apply(rbind(rep(S0, nsim), gbm), 2, cumprod)
  return(gbm)
}
S0 <- 100
dt <- 1./365
sigma <- 0.1
t <- 25
mu <- 0
nsim <- 1000
gbm <- gbm_sim(nsim = nsim, t = t, mu = mu, sigma = sigma, S0 = S0, dt = dt)
gbm[1:10, 1:5] %>%
  as.data.frame()
```

| V1 | V2 | V3 | V4 | V5 |
|---|---|---|---|---|
| 100.00000 | 100.00000 | 100.00000 | 100.00000 | 100.00000 |
| 100.71094 | 99.32058 | 99.46561 | 99.29194 | 100.01378 |
| 100.57485 | 99.11775 | 99.95742 | 98.83342 | 100.10393 |
| 99.92930 | 98.75792 | 100.08205 | 98.08633 | 100.68302 |
| 99.64371 | 99.16727 | 100.43916 | 98.18958 | 100.03987 |
| 99.76616 | 99.84069 | 99.83963 | 98.04711 | 98.80163 |
| 100.12213 | 99.42458 | 99.52403 | 98.12683 | 98.87662 |
| 100.29585 | 99.09918 | 100.05517 | 97.43653 | 98.40223 |
| 99.94002 | 99.29733 | 100.94484 | 97.19316 | 98.39106 |
| 99.13084 | 100.29225 | 101.39547 | 97.37832 | 97.52889 |

Let’s plot some of these price paths:

```
gbm_df <- as.data.frame(gbm) %>%
mutate(ix = 0:(nrow(gbm)-1)) %>%
pivot_longer(-ix, names_to = 'sim', values_to = 'price')
prices_plot <- gbm_df %>%
filter(sim %in% paste0('V', c(1:200))) %>%
ggplot(aes(x=ix, y=price, colour = sim)) +
geom_line() +
xlab('time steps') +
theme(legend.position = 'none')
prices_plot
```

We see:

- the range of terminal prices getting wider as time goes on (the “cone” of terminal prices expands with time)
- terminal prices concentrated around the starting price
- less frequent terminal prices far from the starting price

Most importantly, we’ve now got some values that we can work with. Specifically, we have 1,000 prices at expiration, 25 days after we started at a price of 100.

Let’s extract those terminal prices, look at their distribution (and ultimately derive a probability distribution). First, here’s a histogram of terminal prices:

```
# distribution of terminal prices
gbm_df %>%
filter(ix == max(gbm_df$ix)) %>%
ggplot(aes(x = price)) +
geom_histogram(binwidth = 0.1) +
ggtitle('Histogram of terminal prices')
```

Next, a density plot.

A density plot is a representation of the distribution of a numeric variable – think of it as a smoothed version of the histogram. More specifically, it represents the probability per unit of something (in this case probability per unit of price), and is a good view of the relative probability of the values taken by the variable.

From the area under the density plot, we can derive probabilities of values falling within certain intervals. More on that shortly.

```
gbm_df %>%
filter(ix == max(gbm_df$ix)) %>%
ggplot(aes(x = price)) +
geom_histogram(aes(y = ..density..), binwidth = 0.1) +
geom_density() +
ggtitle('Density plot of terminal prices')
```
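Reading probabilities off that distribution is straightforward: the probability of the terminal price landing in an interval is just the fraction of simulations falling there, which corresponds to the area under the density between the bounds. A sketch, using made-up normally distributed prices as a stand-in for the simulated terminal prices:

```r
set.seed(7)
# stand-in for simulated terminal prices (hypothetical mean and sd)
terminal_prices <- rnorm(10000, mean = 100, sd = 2.6)
# P(98 <= price <= 102) ~ fraction of simulations landing in the interval
p_interval <- mean(terminal_prices >= 98 & terminal_prices <= 102)
p_interval
```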

The value of price at some time $t$ is also a log-normally distributed random variable. Its mean and standard deviation are functions of $t$.

In the next code block, I implemented the analytical solutions for the mean and standard deviation of price at time $t$ as the functions `terminal_ev` and `terminal_sd` (thanks chatGPT!).

Note in particular that the standard deviation of terminal prices – a proxy for the width of the distribution – depends on $t$ and the volatility of the generating process, `sigma`, thanks to the term `exp(sigma*sigma*t)`.

We can also derive these values empirically from our simulation results:

```
# S_t is a log-normally distributed random variable with expected value and standard deviation as functions of t:
terminal_ev <- function(S0, mu, t) {
  t_ev <- S0 * exp(mu * t)
  return(t_ev)
}
terminal_sd <- function(S0, mu, sigma, t, dt) {
  t_var <- S0*S0 * exp(2*mu*t) * (exp(sigma*sigma*t) - 1)
  t_sd <- sqrt(t_var) * sqrt(dt) # note: take care to factor the time step!
  return(t_sd)
}
glue::glue("Terminal mean: {terminal_ev(S0, mu, max(gbm_df$ix))}")
glue::glue("Terminal sd: {terminal_sd(S0, mu, sigma, max(gbm_df$ix), dt)}")
```

‘Terminal mean: 100’

‘Terminal sd: 2.78953728518147’

```
# empirical mean and standard deviation of terminal prices
gbm_df %>%
filter(ix == max(gbm_df$ix)) %>%
summarise(mean = mean(price), sd = sd(price))
```

| mean | sd |
|---|---|
| 100.1262 | 2.639653 |

The fact that the standard deviation of price at time $t$ grows with $t$ means that the further the option is from expiration, the greater the range of possible outcomes.
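You can verify this scaling directly: simulate some GBM paths and measure the cross-sectional standard deviation of prices at a few horizons. This sketch regenerates its own small set of paths (using the same parameter values as above) rather than reusing the `gbm` object:

```r
set.seed(123)
nsim <- 2000; t <- 25; sigma <- 0.1; S0 <- 100; dt <- 1/365
# daily GBM gross returns, one column per simulated path
eps <- matrix(rnorm(t * nsim), nrow = t)
paths <- apply(rbind(rep(S0, nsim),
                     exp(-sigma^2/2 * dt + sigma * sqrt(dt) * eps)), 2, cumprod)
# cross-sectional sd of prices after 5, 10 and 25 time steps (row 1 is the start)
sds <- apply(paths[c(6, 11, 26), ], 1, sd)
sds  # increases with horizon
```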

It is perhaps easier to think about the distribution of terminal prices when we plot them alongside the actual price paths:

```
terminal_density_plot <- gbm_df %>%
filter(ix == max(gbm_df$ix)) %>%
ggplot(aes(x = price)) +
geom_histogram(aes(y = ..density..), binwidth = 0.1) +
geom_density() +
ylab('terminal price distribution') +
theme(axis.title.y = element_blank(),
axis.text.y = element_blank(),
axis.ticks.y = element_blank()) +
coord_flip()
prices_plot + terminal_density_plot + plot_layout(widths = c(1, 0.4))
```

Let’s see how that distribution changes if we simulate only 10 days until expiration:

```
prices_plot_10 <- gbm_df %>%
filter(ix <= 10) %>%
filter(sim %in% paste0('V', c(1:200))) %>%
ggplot(aes(x=ix, y=price, colour = sim)) +
geom_line() +
xlab('time steps') +
theme(legend.position = 'none')
terminal_density_plot_10 <- gbm_df %>%
filter(ix == 10) %>%
ggplot(aes(x = price)) +
geom_histogram(aes(y = ..density..), binwidth = 0.1) +
geom_density() +
ylab('terminal price distribution') +
theme(axis.title.y = element_blank(),
axis.text.y = element_blank(),
axis.ticks.y = element_blank()) +
coord_flip()
prices_plot_10 + terminal_density_plot_10 + plot_layout(widths = c(1., 0.4))
```