
The Elements of Quantamental Investing

The legendary psychologist Paul Meehl argued in 1954 that algorithms were superior to human judgment in predictive tasks. More than 60 years later, a large body of research shows that he was right: algorithms make better predictions than experts across a wide range of fields.

In our view, reliability is a key reason why algorithms often provide more accurate forecasts than humans. As Nobel Prize winner Daniel Kahneman notes, “if you present an algorithm the same problem twice, you’ll get the same output. That’s just not true of people.”

But how do we apply this insight to investing?

Eugene Fama and Ken French pioneered the first answer to this question. They studied decades of historical returns on individual stocks and found that certain factors predicted which stocks would do well or poorly over time. Specifically, small stocks performed better than large stocks, and cheap (value) stocks performed better than expensive (growth) stocks.

Fama and French’s research sparked a wave of new quantitative investing research as scholars looked for new predictive factors. A recent study found that over the past 30 years, researchers have identified more than 99 different factors—a “factor zoo” ranging from size and value to “industry-adjusted change in employees” and “sin stocks.” Many of the newer factors turned out to be old ones in disguise, which highlights the need for researchers to find better methods of model selection.

As factor research has exploded, so too have alternative quantitative approaches. The spectrum of quantitative methods is shown in Figure 1 below.

Figure 1: Spectrum of Quantitative Methods

From left to right along this spectrum, you move from a heavy focus on the variables you are measuring (and what real phenomena they are associated with) to a more pragmatic approach of pattern discovery that emphasizes predictive performance.

Today, Verdad synthesizes econometric methods, machine learning, and a form of “non-artificial intelligence” that we call “common sense.”

We start with econometric analysis—factor models like Fama and French’s—to identify predictive factors. We have found that within the universe of small value stocks, increasing exposure to leverage and decreasing exposure to factors that predict financial distress can significantly improve returns.
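
To make the mechanics concrete, here is a minimal sketch of a cross-sectional factor regression in the spirit of Fama and French: regress forward stock returns on size and value characteristics and check the sign of the loadings. The data, column names, and coefficients below are synthetic placeholders for illustration, not Verdad’s actual model or universe.

```python
# Minimal sketch of a cross-sectional factor regression (illustrative only).
# Synthetic data: small and cheap stocks are simulated to earn a premium.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_stocks = 500

# Hypothetical characteristics: log market cap (size) and book-to-market (value)
data = pd.DataFrame({
    "log_mktcap": rng.normal(7.0, 1.5, n_stocks),
    "book_to_market": rng.lognormal(-0.5, 0.5, n_stocks),
})

# Simulated forward returns: a small-cap premium, a value premium, plus noise
data["fwd_return"] = (
    -0.01 * data["log_mktcap"]
    + 0.03 * data["book_to_market"]
    + rng.normal(0, 0.10, n_stocks)
)

# OLS regression of forward returns on the two characteristics
X = sm.add_constant(data[["log_mktcap", "book_to_market"]])
model = sm.OLS(data["fwd_return"], X).fit()

# Expect a negative size loading and a positive value loading, by construction
print(model.params)
```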

But traditional factor models often struggle in stochastic, high-dispersion environments with significant noise and randomness, such as equity markets. And we invest in leveraged equities, where these problems are exacerbated. An algorithm better suited to this type of environment should be able to:

  1. Handle many different variables (i.e. Big Data),

  2. Automatically detect interactions among input variables, and

  3. Model non-linear relationships.

This is where machine learning steps in. Machine learning is an exercise in pattern recognition across a vast array of data in order to make accurate predictions on new (out-of-sample) information.

For example, the first machine learning algorithm we developed predicts debt paydown. We trained this algorithm on 48 years of US data since 1964. It applies decision trees to 25 input variables and produces a probability of debt paydown as an output for each stock. Our out-of-sample tests show that this algorithm can predict debt paydown with up to 70% precision over a one-year horizon. Figure 2 below presents results from an out-of-sample test of this algorithm on European leveraged small value equities between 1997 and 2017.
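
The sketch below illustrates the general shape of such a model, using a random forest (an ensemble of decision trees) trained on synthetic data with placeholder features. It is not our 25-variable model; it simply shows how a tree-based classifier produces a paydown probability for each stock and how precision is measured on held-out data.

```python
# Hedged sketch: tree-based classifier producing a probability of debt paydown.
# Features and labels are synthetic placeholders, not Verdad's actual inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical inputs, e.g. cash flow yield, leverage, interest coverage, etc.
X = rng.normal(size=(n, 5))
# Simulated label: firms with stronger cash flow and coverage tend to pay down debt
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=1.0, size=n) > 0).astype(int)

# Hold out data the model never sees during training, mimicking an out-of-sample test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Probability of debt paydown for each held-out stock
paydown_prob = clf.predict_proba(X_test)[:, 1]

# Precision: of the stocks flagged as likely payers, how many actually paid down debt
predicted = (paydown_prob > 0.5).astype(int)
print("out-of-sample precision:", precision_score(y_test, predicted))
```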

Figure 2: Out-of-Sample Test of US Debt Paydown Algorithm in Europe, Jun 1997 – Dec 2017

Sources: S&P Capital IQ and Verdad Research.

Whereas econometrics focuses on model inference—how much an output should change in response to a unit change in an input—machine learning focuses on the single goal of maximizing predictive performance. We believe combining both methods will produce more robust results in the greatest of out-of-sample tests: the future.
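
As a toy illustration of that distinction (again with synthetic data and hypothetical relationships, not our production models), the snippet below fits an ordinary least squares model and reads its coefficients (the inferential view), then scores a gradient-boosted model purely on held-out predictive accuracy (the machine learning view).

```python
# Inference vs. prediction on the same synthetic data (illustrative only).
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
# True relationship includes a non-linear (squared) term
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=1000)

# Econometric view: interpretable coefficients from a linear specification,
# which by construction misses the squared term
ols = sm.OLS(y, sm.add_constant(X)).fit()
print("OLS coefficients:", ols.params)

# Machine learning view: no interpretable coefficients, judged only on
# cross-validated (out-of-sample) predictive fit
gbm = GradientBoostingRegressor(random_state=0)
print("cross-validated R^2:", cross_val_score(gbm, X, y, cv=5, scoring="r2").mean())
```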

Even though algorithmic predictions are at the heart of our investment process, human judgment is needed to develop the algorithms, analyze their output, and improve them over time. Verdad’s investment process is a hybrid of quantitative analysis by machines and fundamental analysis by humans, an approach best described as “quantamental” investing. We can think of the combination as a flywheel, with each part of the process feeding naturally into the next stage of the cycle.

Figure 3: Quantamental Flywheel

Humans and machines complement each other in a robust process that combines the best elements of quantitative and fundamental investing to produce more accurate forecasts. At the end of our quantitative process, we look at each individual stock the model recommends. Our process allows humans to override an algorithm in cases where it did not have access to relevant information, such as a company’s recent loss of a major customer or the divestiture of a major unit.

Ultimately, our combination of humans and machines is predicated on the maxim of “using the right tool for the right job.” Algorithms are good at making reliable forecasts. Humans are good at taking complex problems and breaking them down into solvable tasks for algorithms to execute. Continuous improvement is made possible through a feedback loop between humans and machines. When we use human judgment to correct for any missing information in an algorithm’s recommendations, we search for ways to automate this adjustment going forward. In the end, we believe this hybrid process produces better portfolios than humans or machines would provide in isolation.

Graham Infinger