
AI and the Mag 7

What will be the ROI on AI spend?
 

By: Daniel Rasmussen

Taking a pessimistic view of Silicon Valley innovation has been one of the worst things an investor could do over the last decade.
 
Legendary short sellers bet against Tesla, arguing it lacked the manufacturing expertise and scale to compete with GM, Ford, and Stellantis; against Uber and DoorDash, arguing the unit economics of their gig-economy model weren’t sustainable; against Netflix, predicting it would drown in content costs; and against Facebook, warning that regulatory threats and a misguided metaverse pivot would doom the company. Even Bitcoin, dismissed as a bubble, a scam, and a tool for criminals, has defied endless obituaries to become a mainstream asset class.
 
Skepticism about AI—and the profits to be earned from its mastery—could very well suffer a similar fate…eventually. But in the short term, the fate of the US equity market depends on the fate of the Magnificent 7—Apple, Nvidia, Microsoft, Amazon, Alphabet (Google), Meta, Tesla—and, increasingly, the fate of the Mag 7 depends on the success of artificial intelligence.
 
We are at a level of market concentration not seen since just before the dot-com bubble burst in 2000, and the largest US companies by market cap are betting huge percentages of their net income on AI-related capex.
 
Figure 1: Market Cap of the 7 Largest Companies in the S&P 500

Source: Cembalest
 
Most recent tech revolutions have resembled lab experiments funded by cutting-edge VC managers. Venture bets might be expected to fail 90%+ of the time, and when they do work, they often take 10+ years before investors see big returns. That’s seed-stage economics. The current AI revolution has plenty of VC backers, but much of it is driven by the biggest publicly traded companies, which need these bets to pay off soon. If they don’t, it won’t just be the Mag 7 that suffers hits to earnings and valuations. It’ll be the market as a whole.
 
Last summer, Goldman Sachs was estimating a $1T spend on AI capex in the coming years, and the numbers have only gone up since then, with most of it concentrated in the Mag 7 that dominate the public markets, per the chart below.
 
Figure 2: 2025E Capex / 2025E Net Income

Source: Capital IQ
 
It’s necessary as an investor to at least consider how these bets might go awry and what the short sellers’ arguments would be, if there were any short sellers of Silicon Valley left, if only as a thought exercise to commemorate a moment in time when thinking was still something done primarily by Homo sapiens rather than by our robot overlords.
 
The skeptic’s case starts with the possibility that the Mag 7 is suffering from a classic case of “competition neglect,” where “subjects in competitive settings overestimate their own skill and speed in responding to common observable shocks and underestimate the skill and responsiveness of their competitors,” as Robin Greenwood and Samuel Hanson put it in their paper, "Waves in Ship Prices and Investment." When shipping prices increase, shipping companies all decide to invest in ships—after all, their models are all saying these investments will be profitable at current rates. That investment not only drives up the price of building new ships, it causes a glut of supply once they are built, resulting in poor returns on these pro-cyclical investments, as low as -36%, according to Greenwood and Hanson. Meanwhile, those who invest at the bottom of that cycle—when current shipping prices are low and there’s no one else building at the shipyards—earn returns as high as 24%.
 
Rather than ships, today’s AI capex “is a euphemism for building physical data centers with land, power, steel and industrial capacity,” as Sequoia Capital’s David Cahn puts it. AI competitors are spending this money because they believe that AI follows a scaling law: that models become predictably more capable with more data, bigger models, and enough compute (and energy) to power it all. Scaling laws, and the resultant belief that to the biggest spender go the spoils, have turned AI into a manufacturing and infrastructure problem.
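For a sense of what those scaling laws actually say, here is a minimal sketch of the Chinchilla-style fit from Hoffmann et al. (2022). The constants are the paper’s published estimates, quoted approximately; treat the whole thing as an illustration, not a production formula:

```python
# A minimal sketch of the Chinchilla scaling law (Hoffmann et al., 2022).
# Loss falls as a power law in parameters N and training tokens D, not
# exponentially, so each multiple of spend buys a smaller improvement.
# The constants are the paper's published fit, quoted approximately.

E, A, B = 1.69, 406.4, 410.7  # irreducible loss and fitted coefficients
alpha, beta = 0.34, 0.28      # power-law exponents for N and D

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model of n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# 10x-ing the model (with ~20 tokens per parameter) helps less each time:
for n in [1e9, 1e10, 1e11]:  # 1B -> 10B -> 100B parameters
    print(f"{n:.0e} params: predicted loss ~ {predicted_loss(n, 20 * n):.2f}")
```

Each tenfold increase in scale buys a smaller drop in loss, which is exactly why the playbook is to outspend everyone: under a power law, the only way to keep improving is to keep multiplying compute.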
 
OpenAI, SoftBank, and Oracle’s $500 billion Project Stargate, announced at the White House, is the culmination of this race to convert tech companies into industrial manufacturers. But even winning this race could be a Pyrrhic victory. Capex at these levels implies an asset-heavy business model, and asset-heavy business models historically earn lower returns on capital, especially when sunk costs meet increased competition.
 
In this scenario, perhaps Stargate is the AI equivalent of overinvesting in new ships at the same moment that everyone else is overinvesting in ships, leading to a supply glut, price drops, and poor investment returns. Or it’s possible that all this AI spend ends up with the same result as the so-called “bandwidth glut” of the late 1990s. Massive investment in bandwidth made pricey long-distance phone calls a thing of the past, but it also helped drive overbuilders like MCI WorldCom into bankruptcy (a feckless merger spree and accounting fraud didn’t help). Or perhaps, to take another analogy, AI chips and data centers will depreciate as fast as shale wells.
 

What’s the profit model?

It’s impossible right now to know which AI models will be more Yahoo than Google. But what is clear is that AI companies are burning cash without much revenue to show for it. Google, Microsoft, Amazon, Meta, and the other big spenders on AI capex are collectively an estimated $400-500 billion short of the revenue needed to earn traditional gross margins on their data center spending.
 
Figure 3: The Hyperscaler Revenue Gap: $400B

Source: Cembalest
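For intuition on where a number like that comes from, here is a back-of-the-envelope sketch. Both inputs are illustrative assumptions, not the actual data behind the chart above:

```python
# Back-of-the-envelope sketch of the revenue-gap logic. Both inputs are
# illustrative assumptions, not the data behind Figure 3.

annual_ai_capex = 200e9  # assumed hyperscaler AI data center spend per year
gross_margin = 0.50      # an assumed traditional software-style gross margin

# For that spend to be just the cost-of-goods share of sales at that margin,
# AI revenue has to reach capex / (1 - margin):
required_revenue = annual_ai_capex / (1 - gross_margin)
print(f"AI revenue needed: ${required_revenue / 1e9:.0f}B per year")  # $400B
```

Any shortfall against that required revenue shows up as the gap in the chart above.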
 
Burn isn’t a problem if you don’t burn out before the returns come. But we still don’t have many economical use cases for AI. Even in low-compute mode, a single prompt on OpenAI’s o3 model has been estimated to cost around $20 to perform. High-compute mode can cost much more.
 
If we think of the internet as a large digital library and Google search as a better Dewey Decimal System, then AI is a librarian who has read every book and can answer any question you ask—but burns an incredible number of calories devising their response. Google was a better file organizer. LLMs are an energy-intensive digital brain.
 
A simple math calculation is a great way to understand why LLM systems are so expensive to run. Ask Microsoft Excel what 2+2 equals and it runs a simple “deterministic” calculation. It’s the same answer—4—every time. Since it’s running a line of code that produces the same answer every time, it needs only a tiny bit of processing power from your laptop and its battery. That is traditional software code, the kind Google’s original search engine was based on. But ask ChatGPT, Claude, or any other LLM-powered chatbot what 2+2 equals, and it runs an immensely complex “probabilistic” calculation, a bit like Marvel’s Dr. Strange running through all the possible outcomes to make a series of predictions.
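The contrast is easy to see in code. A toy sketch, with the token probabilities invented purely for illustration:

```python
# A toy contrast (illustrative only): deterministic arithmetic vs. an
# LLM-style probabilistic prediction of the answer to "2+2".

import random

def deterministic(a: int, b: int) -> int:
    # Traditional software: one fixed code path, one guaranteed answer,
    # a negligible amount of compute.
    return a + b

def llm_style(prompt: str) -> str:
    # An LLM instead scores every candidate next token and samples from a
    # probability distribution. These probabilities are made up; a real
    # model computes them with billions of weights, for every token.
    next_token_probs = {"4": 0.97, "four": 0.02, "5": 0.01}
    tokens, weights = zip(*next_token_probs.items())
    return random.choices(tokens, weights=weights)[0]

print(deterministic(2, 2))        # always 4, at almost zero cost
print(llm_style("What is 2+2?"))  # almost always "4", at vastly higher cost
```

The deterministic path is a single CPU instruction; the probabilistic path is a forward pass through an enormous neural network, repeated for every token of the answer.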
 
While Anthropic CEO Dario Amodei is confident AI can beat humans at most things in 2-3 years, that doesn’t mean we will all be using AI that way. There’s a difference between what can be automated and what is cost-effective to automate. Daron Acemoglu, Institute Professor at MIT, estimates that only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years. And an MIT research paper that looked at jobs in non-farm businesses found that while 36% of the tasks in the jobs studied could be automated by AI vision models, only 8% were economically worth automating.
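The underlying break-even logic is simple enough to sketch. All of the numbers below are hypothetical; the point is only that a task can be technically automatable and still fail the economics:

```python
# A minimal break-even sketch (all numbers hypothetical): automating a task
# only pays if the all-in AI cost undercuts the human cost over a year.

def worth_automating(human_cost_per_task: float,
                     ai_cost_per_task: float,
                     fixed_deployment_cost: float,
                     tasks_per_year: int) -> bool:
    """True if AI is cheaper over a year, counting fixed deployment cost."""
    ai_total = fixed_deployment_cost + ai_cost_per_task * tasks_per_year
    human_total = human_cost_per_task * tasks_per_year
    return ai_total < human_total

# At $20 per prompt, even a fully automatable task can fail the math:
print(worth_automating(human_cost_per_task=5.00, ai_cost_per_task=20.00,
                       fixed_deployment_cost=50_000,
                       tasks_per_year=100_000))  # False
# Cheaper inference flips the answer:
print(worth_automating(human_cost_per_task=5.00, ai_cost_per_task=0.50,
                       fixed_deployment_cost=50_000,
                       tasks_per_year=100_000))  # True
```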
 
Scaling laws are a bet that brute force will get us more and more powerful AI. For AI investors, it’s a playbook: outspend the competition, win the market, and trust that, eventually, more infrastructure and better chips will bring costs down and make more tasks economical to automate. But scale and high ROI are rarely achieved at the same time.
 
Shortly after Stargate was announced, it was overshadowed by bigger news about China’s DeepSeek model. While the exact specs remain a subject of debate, DeepSeek shattered the cost-to-performance expectations that investors and the Mag 7 have been working from. At a fraction of the cost (the exact fraction is hotly debated), it performs on par with the leading US models on a range of tests, calling into question the big AI capex and high capital burn of AI model companies.

One could argue that DeepSeek is proof of the investing thesis—an efficiency leap that will make AI automation far more cost-effective and useful. But the subsequent hits to Nvidia and other Mag 7 stock prices show the market took a different view: if the Mag 7 have already invested huge sums on the assumption that compute would stay expensive, how are they supposed to recoup the expense? How will they close a $400 billion revenue gap if a new Chinese model can deliver the same performance at a tenth of the cost? Better chips and data centers would still matter in that world; they just wouldn’t be the only thing that matters, erasing much of the advantage the Mag 7 have built up at such great cost.
 
Just like the Mag 7 today, 50 years ago, companies like IBM and Xerox seemed to have all the advantages for any coming computer revolution. But it was a bunch of kids like Woz and Jobs and Gates soldering their own motherboards together in Silicon Valley garages who saw what was coming: a personal computer revolution that the mega-cap incumbents simply couldn’t imagine. Those kids saw the future because they were the customers—early adopters who wanted to write college papers and do math and play games at home.
 
Microsoft CTO Kevin Scott puts it this way: “AI is a model, not a product.” We have acted as though the models are products themselves because ChatGPT was an accidentally viral product. Virality, however, is not the same as commercial viability. ChatGPT found rapid widespread consumer adoption, but that hasn't (yet) turned into revenues that can even remotely cover the high costs associated with these models.
 
We've only just entered the true product-building era for AI. How many people today think of the internet as a product? The internet is not a single thing but a collection of services and products on common digital infrastructure (e.g., the TCP/IP protocol, which was built by DARPA with US taxpayer money and isn't a business anyone makes money on). Similarly, AI models could, like other commodities, utilities, and infrastructure projects, become a part of everything we use rather than a distinct product. Usage patterns are starting to reflect this: we are using these models less directly and more through other services built on top of them.
 
A whole new class of VCs and AI founders are betting that monolithic LLMs will be like an expensively educated jack of all trades but master of none. Who needs a billion-dollar model that can cure cancer, write a PhD-level paper, and walk, talk, and chew gum at the same time? Why not just license ChatGPT and build something custom on top of it or use free open-source LLM code to create all sorts of little purpose-built bots that solve distinct problems? In this analogy, LLMs become more like electrical utilities, which the LLM companies are literally trying to become to feed their own data centers. The electrical power game is incredibly important, but it is even less profitable than manufacturing, to say nothing of SaaS.
 
In this scenario, it would be the yet-to-emerge specialist firms who would make the big money by building products fitted to specific industries, use cases, and users and by paying much more commoditized rates for LLM processing power and chips. A bit like Netflix and Facebook benefiting from the costs sunk into internet infrastructure or Ford and General Motors cruising along on the fruits of the Federal-Aid Highway Act of 1956.
 
As we said at the top, a lot of investors have done very poorly betting against the scrappy innovators of Silicon Valley. But now that they are mega-cap behemoths run by mega-billionaires trying to outspend each other, maybe the Mag 7 will be outmaneuvered by their true heirs, another group of as-yet-unknown young innovators who are toiling away all over the world in garages far less expensive than the $1,700/square foot you have to pay to live in the cushy confines of Silicon Valley.
