
Risk Eats Return

How risk management can improve long-run portfolio outcomes
 

By: Verdad Research

Before Sam Bankman-Fried became famous for the collapse of FTX, he was well-known in certain circles for his utility-maximizing philosophy—a philosophy that he applied to investing as well as to charity. Bankman-Fried believed that investors should focus solely on maximizing expected value, effectively ignoring risk.

The economist Tyler Cowen once probed Bankman-Fried on this topic, asking him if he would play a double-or-nothing game with 51% odds that risked the planet Earth. When he answered that he would keep playing that game forever, Cowen pointed out that this would inevitably result in destruction. “Well, not necessarily,” replied Bankman-Fried. “Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.” In other words, Bankman-Fried would wager all of existence, again and again, so long as the expected value of each bet was positive, even though the probability of utter annihilation would then approach mathematical certainty over time.

While few people in the investment world would share Bankman-Fried’s complete dismissal of risk management, there is a widespread belief in investing that risk management and maximizing long-term CAGR are at odds.

Ed Thorp, a math professor and hedge fund manager born amid the Great Depression, was the first to formally rebut this view. In the early 1970s, he wrote a paper that applied a then-obscure formula used by professional gamblers to the problem of investing. Thorp argued that this formula, known as the Kelly criterion, offered a path to maximizing the long-run rate of compounding. The Kelly criterion says that investors should choose their net exposure to risky assets to maximize the expected value of their logarithmic wealth. For this reason, this is also known as the “growth-optimal” portfolio. In the context of continuous-time finance, expected log wealth is maximized by investing a fraction µ/σ²—the expected future excess return (µ) divided by the expected future variance (σ²)—of wealth in risky assets and the remainder in riskless bonds.
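For concreteness, here is the rule in a few lines of Python; the 5% expected excess return and 16% expected volatility are hypothetical placeholder inputs, not forecasts of ours.

```python
def kelly_fraction(mu: float, sigma: float) -> float:
    """Growth-optimal exposure to the risky asset under the continuous-time
    Kelly rule: expected excess return divided by expected variance."""
    return mu / sigma**2

# Hypothetical inputs: 5% expected excess return, 16% expected volatility.
print(kelly_fraction(0.05, 0.16))  # ~1.95, i.e., roughly 195% exposure
```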

To understand the intuition behind reducing exposure with the square of volatility, first consider an investor whose portfolio returns ten percent one month and draws down ten percent the next. The end portfolio isn’t flat: the investor has lost one percent of their starting investment, even though the average return over the two periods was zero. Now consider an investor whose portfolio rises five percent before falling by the same amount. That investor will only lose 25bps relative to their starting point. Thus, if we shrink the magnitude of the return swings—the volatility—by a factor of two, then the magnitude of the loss attributable to volatility falls by a factor of four. This is why we believe the optimal exposure to risky assets declines with the square of expected volatility.
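The arithmetic is easy to check directly:

```python
# Up 10% then down 10%: volatility drag costs 1% of starting wealth.
print(1.10 * 0.90 - 1)   # -0.0100
# Up 5% then down 5%: halving the swings cuts the drag by a factor of four.
print(1.05 * 0.95 - 1)   # -0.0025
```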

How can we apply these ideas? In our previous piece, we argued that future volatility can be forecast using trailing volatility (one-month forward S&P 500 volatility is historically 64% correlated with one-month trailing volatility, while future returns are only 2% correlated with trailing volatility). So, as a starting point, we divide S&P 500 returns into four quartiles, based on trailing volatility, and compute the fractional Kelly leverage one would want to employ if one knew forward returns and volatilities for each quartile in advance.
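A sketch of that exercise, assuming `spx` is a pandas Series of monthly S&P 500 excess returns (illustrative scaffolding, not our production code; a fractional-Kelly version would simply scale the output):

```python
import pandas as pd

def kelly_by_vol_quartile(spx: pd.Series) -> pd.Series:
    """Full-Kelly leverage (mean excess return / variance) within quartiles
    formed on lagged trailing realized volatility."""
    trailing_vol = spx.rolling(12).std().shift(1)   # crude trailing-vol proxy
    quartile = pd.qcut(trailing_vol, 4, labels=["Q1", "Q2", "Q3", "Q4"])
    grouped = spx.groupby(quartile, observed=True)
    return grouped.mean() / grouped.var()
```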

Figure 1: S&P 500 Kelly Leverage vs One-Month Trailing Volatility Quartile, 1988–2022

Source: Bloomberg Data, Verdad Analysis

The impact of volatility is not linear. In the lowest volatility quartile, the formula recommends about a 250% portfolio weight in the S&P 500, while in the highest quartile it recommends only about a 50% exposure.

The volatility of the S&P 500 varies markedly over time. We took the S&P 500’s daily returns over the past 30 years and built a simple model for estimating volatility, based primarily on the previous month’s realized volatility. In the chart below, we show the model’s month-forward volatility forecasts. Forecasted volatility has a long-run baseline of around 15% but spikes during the crises of 2008 and 2020.
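A deliberately crude version of such a forecaster is sketched below, assuming `daily` is a pandas Series of daily S&P 500 returns with a DatetimeIndex; the 0.64 shrinkage weight simply reuses the trailing/forward correlation quoted above, and 15% is the long-run baseline, so treat both as illustrative assumptions rather than our fitted model.

```python
import numpy as np
import pandas as pd

def monthly_vol_forecast(daily: pd.Series) -> pd.Series:
    """Month-ahead volatility forecast: last month's realized volatility,
    shrunk toward a long-run baseline."""
    realized = daily.rolling(21).std() * np.sqrt(252)  # annualized 1m realized vol
    month_end = realized.resample("ME").last()
    baseline = 0.15
    return 0.64 * month_end + 0.36 * baseline
```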

Figure 2: S&P 500 Monthly Volatility Forecasts, Annualized, 1988–2022

Source: Bloomberg, Verdad Analysis

We then run a simple test of the Kelly criterion logic by varying exposure to the S&P 500 based only on trailing realized volatility and the risk-free rate (see technical appendix for details). We assume that we borrow and lend at the risk-free rate, plus or minus a 50bps spread, and we pay 10bps in transaction costs on all rebalancing trades. Below, we plot cumulative performance for the dynamically levered S&P 500 strategy relative to an unleveraged static S&P 500 return series.
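A minimal version of the monthly accounting is sketched below; the input series are assumed to be aligned monthly data, and the exposure rule that generates `weight` is described in the technical appendix.

```python
import pandas as pd

def strategy_returns(excess: pd.Series, rf: pd.Series, weight: pd.Series,
                     spread: float = 0.005 / 12, tcost: float = 0.001) -> pd.Series:
    """Dynamic-leverage returns: hold `weight` in the S&P 500, finance the
    remainder at rf +/- a 50bp annual spread, pay 10bps on turnover."""
    turnover = weight.diff().abs().fillna(weight.abs())
    financing = spread * (weight - 1).abs()   # borrow above 1x, lend below 1x
    return rf + weight * excess - financing - tcost * turnover
```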

Figure 3: Log Cumulative Returns by Strategy, 1988–2022

Source: Bloomberg, Verdad Analysis

This dynamic strategy has lower volatility, a higher CAGR, a higher Sharpe ratio, and a smaller maximum drawdown than holding the S&P 500 at static exposure.

Figure 4: Strategy Summary Statistics

Source: Bloomberg, Verdad Analysis

For investors who are already comfortable with 14.9% annualized volatility and want to focus on CAGR, our research found that leveraging this dynamic strategy to match the volatility of the S&P 500 yields a CAGR of 11.6%, which solidly bests the 9.8% CAGR of the S&P 500.

We are not the only researchers to come to these conclusions. A recent paper in the Journal of Finance, “Volatility-Managed Portfolios,” by Alan Moreira (University of Rochester) and Tyler Muir (UCLA), also concluded that variance scaling could improve compound returns while reducing risk.

We believe that the best part about this strategy is that predicting volatility is significantly easier than predicting excess returns, which is nearly impossible to do for the S&P 500 on a monthly basis. And the excess returns and improved Sharpe ratio of volatility-managed portfolios don’t require a counterparty to lose money, which means these benefits are less likely to be competed away.

Nevertheless, Kelly leveraging is still ultimately about risk aversion. A highly risk-tolerant investor who has a 10-year investment horizon, or any other fixed finite period, and wants to come close to maximizing the expected value of their portfolio might choose to bet more aggressively than the Kelly criterion would advise. Paul Samuelson made this point in a 1979 paper, which we will quote in all its monosyllabic clarity: "To be clear is to be found out. Know that life is not a game with net stake of one when you beat your twin, and with net stake of nought when you do not. A win of ten is not the same as a win of two. Nor is a loss of two the same as a loss of three. How much you win by counts. How much you lose by counts.”

We believe Samuelson is right. But even for a highly risk-tolerant investor, the benefits of variance scaling endure. We believe an investor for whom Kelly is not aggressive enough will still achieve better outcomes—better CAGR, better expected return at a given level of expected volatility—by reducing their exposure when the world looks riskier.

The only investor for whom Kelly thinking has nothing to offer is the investor who is truly risk neutral: an investor who is pathologically indifferent to risk. This investor would pay any price to play the St. Petersburg paradox. He is an investor who does not exist in the real world, who cannot exist in the real world, a mere ghost dwelling only in the realm of mathematical formalisms, gambling in a casino where wealth levels involving up-arrow notation are treated as sensible and sane. He is mathematics run amok. Even Sam Bankman-Fried wasn’t this investor (he had enough risk aversion to last a few years before blowing up) and certainly neither are you. 

In the real world, for real people, investing over realistic horizons, Kelly thinking has much to offer the rational investor, even if levels of exposure must be tailored to individual risk appetites.

Technical Appendix and FAQs
Q: Tell me more about the Kelly criterion. Can you prove that µ/σ² is the ratio for maximizing expected logarithmic wealth?

A: Yes. Consider a portfolio with two assets: a risky asset that offers an expected excess return µ with volatility σ, and a riskless asset that offers r. Let the price of the risky asset at time t, S_t, be lognormally distributed and follow a geometric Brownian motion process. The price S_t is given by the following equation, where W_t is a random variable with mean zero and variance t:

$$S_t = S_0 \exp\left(\left(r + \mu - \frac{\sigma^2}{2}\right)t + \sigma W_t\right)$$

From here, we can compute the expected logarithm of the price S_t:

$$\mathbb{E}[\log S_t] = \log S_0 + \left(r + \mu - \frac{\sigma^2}{2}\right)t$$

This implies that the expected (annualized) log return on the risky asset is given by:

$$\frac{1}{t}\,\mathbb{E}\left[\log \frac{S_t}{S_0}\right] = r + \mu - \frac{\sigma^2}{2}$$

Now that we know the log return of the risky asset, we can compute the expected log return for a portfolio of the risky asset and the risk-free asset. Let f be our allocation to the risky asset, rebalanced continuously; the portfolio value V_t then follows a geometric Brownian motion with drift r + fµ and volatility fσ. The expected log return on this portfolio is given by:

$$\frac{1}{t}\,\mathbb{E}\left[\log \frac{V_t}{V_0}\right] = r + f\mu - \frac{f^2\sigma^2}{2}$$

To maximize the expected log return on our portfolio, we take the derivative of this expression with respect to f and set it equal to zero:

$$\frac{\partial}{\partial f}\left(r + f\mu - \frac{f^2\sigma^2}{2}\right) = \mu - f\sigma^2 = 0,$$

which implies that

$$f^* = \frac{\mu}{\sigma^2}.$$

This proves that µ/σ² is the allocation that maximizes expected logarithmic returns and therefore next-period log wealth.
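As a quick numerical sanity check of the closed form, with hypothetical parameters:

```python
import numpy as np

mu, sigma, r = 0.05, 0.16, 0.02               # hypothetical parameters
f = np.linspace(0, 4, 100_001)
growth = r + f * mu - 0.5 * f**2 * sigma**2   # expected log return from above
print(f[np.argmax(growth)])                   # ~1.953
print(mu / sigma**2)                          # 1.953125, matching the closed form
```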

Q: Thanks for the proof! Could you tell me a little bit more about the conceptual assumptions underpinning the Kelly criterion? What premises need to be true for it to be growth-optimal, and are they realistic?
 
A: Yes. There are three main assumptions Kelly makes that are unrealistic:

  1. Trading is frictionless and continuous: an investor incurs no costs in maintaining her desired exposure at all times.

  2. The parameters of the return distribution are known with perfect certainty in advance. There is no estimation error.

  3. Risky asset returns are approximately normally distributed.

None of these assumptions is true in the real world. And because an investor’s CAGR falls faster from overleveraging than from underleveraging, savvy practitioners of Kelly thinking take smaller positions than the naïve Kelly rule would prescribe, even if they truly want to maximize expected logarithmic wealth.
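The asymmetry is easy to see in a small simulation with synthetic, normally distributed monthly returns; the 5% excess return and 16% volatility are assumed parameters, not estimates of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.05, 0.16                        # assumed excess return and volatility
r = rng.normal(mu / 12, sigma / 12**0.5, size=12 * 200_000)  # synthetic months

f_star = mu / sigma**2                        # naive full Kelly, ~1.95
for mult in [0.5, 1.0, 1.5, 2.0]:
    growth = 12 * np.log1p(mult * f_star * r).mean()  # annualized log growth
    print(f"{mult:.1f}x Kelly: {growth:+.2%} over cash")
# Half Kelly keeps roughly three quarters of the maximum growth rate, while
# 2x Kelly earns roughly nothing over cash despite far larger drawdowns.
```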
 
Q: Tell me about the model(s) you used in your backtest. What exactly are you forecasting, and how are you converting your forecasts into leverage decisions?

A: We make use of two regression models: an excess return forecasting model and a variance forecasting model. The excess return forecasting model is just a regularized regression of S&P 500 excess returns on the risk-free rate. It has an extremely low R², and its only purpose is to ensure that the leveraging decision accounts for the cost of margin borrowing.

Our variance forecasting model is a regularized linear regression of the squared residuals of the first model on the Z-score of trailing realized volatility. We use squared residuals—and by extension variance—as our dependent variable because sample variance is an unbiased estimator of the true variance of a random variable. Due to Jensen’s inequality, sample volatility is not an unbiased estimator of the true volatility of a random variable. What this means is that, if we were to use the absolute value of monthly returns as a proxy for volatility, we would systematically underestimate future volatility.
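A sketch of the two regressions, using scikit-learn’s Ridge as a stand-in for the regularized regressions and assuming aligned monthly input series:

```python
import pandas as pd
from sklearn.linear_model import Ridge

def fit_models(excess: pd.Series, rf: pd.Series, trail_vol: pd.Series):
    """Fit the return model (excess returns on the risk-free rate) and the
    variance model (squared residuals on Z-scored trailing volatility)."""
    ret_model = Ridge(alpha=1.0).fit(rf.to_frame(), excess)
    resid = excess - ret_model.predict(rf.to_frame())
    z_vol = ((trail_vol - trail_vol.mean()) / trail_vol.std()).to_frame()
    var_model = Ridge(alpha=1.0).fit(z_vol, resid**2)
    return ret_model, var_model
```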

We then transform the variance forecasts produced by the model into volatility forecasts when plotting the outputs. This is more intuitive since the units of volatility are given in percentages, rather than percentages squared.

When determining desired leverage, we divide our S&P 500 return forecast by our variance forecast. Then we take the square root of that ratio—to adjust for the unrealistic assumptions discussed in the answer to the previous question—and multiply by a constant less than one, which reflects investor risk preferences.
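Putting the pieces together, the leverage rule looks roughly like the function below; the 0.7 risk constant is a hypothetical placeholder for the preference parameter.

```python
import numpy as np

def desired_leverage(ret_forecast: float, var_forecast: float,
                     risk_constant: float = 0.7) -> float:
    """Square root of the forecast Kelly ratio, scaled by a constant < 1."""
    kelly_ratio = max(ret_forecast / var_forecast, 0.0)
    return risk_constant * np.sqrt(kelly_ratio)
```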

As for why we use (the Z-score of) volatility as our regressor, rather than variance, this is mostly a matter of pragmatism. Empirically, we have found that Z-scored volatility leads to less dispersion in our variance forecasts than Z-scored variance, and this translates into better performance out of sample. But there is not a good mathematical reason for this decision that the author is aware of.
 
Q: In the first paragraph you quote SBF talking about something called the St. Petersburg paradox. Can you explain what that is and how it connects to risk management?

A: The St. Petersburg paradox is a thought experiment in which a gambler is given the opportunity to flip a coin until he flips tails. The player receives at least $2, doubled for each heads that he flips before flipping tails. So if he flips tails immediately, he receives $2; one heads then tails pays $4; two heads, $8; and so on. Mathematically, the payoff can be written as:

$$\text{payoff} = \$2^{\,k+1}$$

where k is the number of consecutive heads.

The astute reader will note that the expectation of this game is technically infinite! It is given by the expression:

$$\mathbb{E}[\text{payoff}] = \sum_{k=0}^{\infty} \left(\frac{1}{2}\right)^{k+1} \cdot 2^{\,k+1} = \sum_{k=0}^{\infty} 1,$$

which is an infinite sum that does not converge. You or I might say that one ought to take this kind of result with more than a grain of salt. But a mathematically idealized risk-neutral individual would plow straight ahead and price this game at infinity. Despite SBF’s claims, it’s actually a very good illustration of the fact that no real humans are “risk neutral” in the sense of mathematical economics or decision theory. Nor should they be.
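For the curious, the game is easy to simulate; note how the sample mean keeps drifting upward as the number of plays grows, which is the divergence showing up in finite samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def play_once() -> int:
    """One St. Petersburg game: $2, doubled for each heads before the first tails."""
    payoff = 2
    while rng.random() < 0.5:   # heads with probability 1/2
        payoff *= 2
    return payoff

for n in [100, 10_000, 1_000_000]:
    print(n, np.mean([play_once() for _ in range(n)]))  # sample mean grows with n
```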
