Silicon Valley Is Building a $600 Billion Casino With Chips That Expire in Three Years
"In the technology industry, the gap between vision and execution is often measured in billions of dollars of destroyed capital."

— according to Marc Andreessen (ironically, given his current AI investments)

The First Tech Bubble Where the Infrastructure Rots Faster Than the Business Models

The AI infrastructure boom isn't the future—it's the dumbest rerun of every tech bubble ever, remixed into something exponentially worse


There's a particular kind of cognitive dissonance that comes from watching the world's allegedly smartest people—Stanford PhDs, ex-Google executives, billionaire venture capitalists—gleefully recreate the 2008 financial crisis, the dot-com crash, and the telecom meltdown simultaneously while insisting this time it's different because AI.

It's not different. It's worse.

Here's what's actually happening: Tech giants are spending $600 billion building GPU infrastructure to generate $40 billion in revenue. For every dollar of actual revenue AI produces, Silicon Valley is lighting fifteen dollars on fire building data centers packed with chips that will be obsolete before the loans financing them come due.

The only thing genuinely innovative about this bubble is that it might actually be more destructive than all of its predecessors combined.

The Numbers Are Absolutely Deranged

Amazon increased quarterly capital expenditure from $13.9 billion to $22.6 billion in less than a year—just for AWS AI infrastructure. Microsoft jumped from $44.5 billion annual capex to a projected $63.6 billion. Google went up 63% to $52.5 billion. Meta is guiding toward $60-65 billion for 2025.

These companies are spending money like it's 1999 and they just discovered fiber optic cables. Except in 1999, the entire telecom sector spent $500 billion over five years. The hyperscalers are spending that every 18 months.

For what, exactly?

OpenAI—the company everyone points to as proof this is real—grew revenue from $2 billion to $3.7 billion last year. Impressive, right? They also lost $5 billion. They're projected to lose $44 billion cumulatively through 2028. The unit economics are genuinely hilarious: users pay $1, app providers pay $5 to OpenAI, OpenAI pays $7 to cloud providers, cloud providers pay $13 to Nvidia.

That's a 13x cost structure. You don't need a Stanford MBA to recognize that's not a business model—it's a Ponzi scheme with extra steps and better PR.
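For the arithmetic behind that, here is a minimal sketch using only the essay's illustrative $1/$5/$7/$13 chain; these are round numbers for illustration, not any company's reported financials:

```python
# Stylized AI unit-economics chain from the essay: for every $1 the end user
# pays, each layer beneath it pays out more than it takes in. These are the
# essay's illustrative round numbers, not reported financials.
user_to_app = 1.00       # end user -> app provider
app_to_openai = 5.00     # app provider -> OpenAI
openai_to_cloud = 7.00   # OpenAI -> cloud provider
cloud_to_nvidia = 13.00  # cloud provider -> Nvidia

print(f"Hardware spend per $1 of end-user revenue: {cloud_to_nvidia / user_to_app:.0f}x")
print(f"App provider cash gap per $1 of usage:   ${app_to_openai - user_to_app:.2f}")
print(f"OpenAI cash gap per $1 of usage:         ${openai_to_cloud - app_to_openai:.2f}")
print(f"Cloud provider cash gap per $1 of usage: ${cloud_to_nvidia - openai_to_cloud:.2f}")
```

Every layer in the stack runs at a cash deficit to the layer below it; only the chip maker at the bottom is paid in full.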

Microsoft claims AI is contributing 12 percentage points to Azure's 33% growth, which sounds great until you realize they spent $63 billion to generate maybe $10 billion in incremental revenue. That's a six-year payback period if nothing changes—which in AI means if the technology doesn't improve, the hardware doesn't become obsolete, and nobody else enters the market.
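Written out, the payback arithmetic is just division, but it's worth seeing what the division quietly assumes (a minimal sketch using the essay's round figures: $63 billion of capex against roughly $10 billion of incremental AI revenue):

```python
# Simple static payback calculation using the essay's figures.
# Assumes incremental revenue stays flat and ignores depreciation,
# operating costs, and the cost of capital -- all of which make it worse.
capex_billion = 63.0
incremental_ai_revenue_billion = 10.0

payback_years = capex_billion / incremental_ai_revenue_billion
print(f"Static payback: {payback_years:.1f} years")  # roughly 6.3 years

# With a three-year useful life for the underlying GPUs, the hardware is
# fully written off roughly twice over before the spend is recouped.
gpu_useful_life_years = 3
print(f"Hardware generations consumed before payback: "
      f"{payback_years / gpu_useful_life_years:.1f}")
```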

All of those assumptions are completely @^&$% wrong.

The Kedrosky Insight: This Time It's Actually Worse

Paul Kedrosky—a tech economist who correctly called previous bubbles—published an analysis in September 2024 that should have ended this party immediately.

His central observation is devastatingly simple: The 1990s fiber optic bubble saw companies invest $500 billion building infrastructure with a 4x capacity-to-revenue ratio. The AI boom is running 6-7x ratios—50-75% worse.

But here's the killer detail everyone's missing: Fiber optic cables lasted 30 years. The "dark fiber" from the 1990s—the 95% of capacity that sat unused after the crash—eventually powered broadband, cloud computing, Netflix, everything. The infrastructure proved valuable once demand caught up. Survivors bought assets for pennies on the dollar and made fortunes.

GPUs are obsolete in three years.

You cannot wait for demand to materialize when your hardware depreciates faster than a leased BMW. Nvidia's next chip generation—already announced—will make today's $40,000 H100s about as valuable as an iPhone 6.

This means the entire financial thesis underlying AI infrastructure investment is based on a lie: that overcapacity can be carried until revenue catches up. It can't. The capacity is rotting in real time, getting less valuable every day, while companies bleed cash paying 11-14% interest on debt secured by assets depreciating at 30-50% annually.

It's like the subprime mortgage crisis, except the houses are actively catching fire while you're still signing the loan papers.
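To make that mismatch concrete, here is a minimal sketch of how collateral value and loan balance diverge over a three-year loan. The 30-50% depreciation and 11-14% interest ranges come from the figures above; the 70% advance rate and the assumption that interest accrues unpaid are mine, purely for illustration:

```python
# Illustrative divergence between GPU collateral value and debt balance.
# Rate ranges come from the essay; the model itself (straight exponential
# decay, interest compounding unpaid, 70% advance rate) is a simplification.
initial_collateral = 100.0   # GPU value at loan origination, arbitrary units
initial_debt = 70.0          # assumed 70% advance rate, for illustration only
depreciation_rate = 0.40     # midpoint of the essay's 30-50% annual range
interest_rate = 0.125        # midpoint of the essay's 11-14% range

collateral, debt = initial_collateral, initial_debt
for year in range(1, 4):
    collateral *= (1 - depreciation_rate)  # hardware loses value every year
    debt *= (1 + interest_rate)            # unpaid interest compounds
    print(f"Year {year}: collateral {collateral:5.1f}, debt {debt:5.1f}, "
          f"loan-to-value {debt / collateral:.0%}")
```

Under those assumptions the loan is underwater within the first year and the gap only widens from there.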

The Return of History's Greatest Hits

The 2008 Playbook: CoreWeave—a company that started mining crypto in 2017—raised $12.9 billion in debt and equity over 18 months. They achieved a $23 billion valuation despite losing $863 million on $1.9 billion in revenue. Sixty-seven percent of that revenue comes from one customer (Microsoft).

Their capital structure is a work of art: $8 billion in debt at 11-14% interest rates, requiring $360 million in annual interest payments against negative profitability. Debt-to-equity ratio: 1,262%.

In December 2024, they violated loan covenants by transferring funds to foreign entities—a technical default. Blackstone, their lead lender, waived it without penalties and added $500 million in new funding.

That's not credit discipline. That's extend-and-pretend, the exact strategy that turned the subprime crisis into a global catastrophe.

The Dot-Com Playbook: Lambda Labs just secured a $500 million SPV facility collateralized by Nvidia chips—described as the "first-of-its-kind GPU asset-backed security." Translation: they're securitizing hardware that will be worthless before the loan matures, and calling it innovation.

The GPU-backed debt market went from zero in 2022 to $11+ billion by mid-2024. Special Purpose Vehicles—the off-balance-sheet entities that blew up Enron—are back with a vengeance. Private credit funds are deploying an estimated $50 billion quarterly into AI infrastructure.

The Telecom Playbook: Nvidia holds equity stakes in CoreWeave, Lambda Labs, and other GPU cloud providers, which collectively have spent over $10 billion buying Nvidia hardware.

Nvidia is simultaneously:

  • The supplier (selling GPUs)
  • The investor (funding buyers)
  • The customer (renting back capacity)

This is the exact circular financing pattern that killed Lucent and Nortel in 2001.

When your customers can only buy your product because you're lending them the money, you don't have customers. You have a circular money-printing scheme that ends the moment anyone stops believing in it.

The Technology Is Real, The Investment Is Insane

Here's the uncomfortable truth that makes this harder to dismiss: AI actually works.

Productivity gains of 26-66% in specific tasks are real. GitHub Copilot users complete coding tasks 26% faster. Customer service workers resolve 26% more issues, 14% faster. The Federal Reserve estimates AI could add 1.1% to aggregate productivity—roughly the economic impact of the PC and internet.

The transformation is real. The technology matters. Long-term, AI will reshape significant parts of the economy.

None of that justifies spending $600 billion to generate $40 billion in revenue.

This is the railroad bubble, which also financed transformational infrastructure—20-30 years too early. British Railway Mania in the 1840s invested 7% of GDP building 6,220 miles of track. Share prices doubled, then crashed 50%, destroying most investors. Yet 90% of routes eventually became viable, forming the backbone of British transportation.

AI is following this script perfectly, with one critical difference: the infrastructure won't outlast the investments because it's depreciating too fast.

The Utilization Lie

Want to know the dirtiest secret in AI infrastructure? Most of it isn't being used.

Cloud infrastructure utilization averages just 11-17% for compute resources. That's not a measurement problem—that's hundreds of billions in capital sitting idle, generating zero returns while accruing debt service costs.
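A rough sense of what 11-17% utilization implies in dollar terms (the $600 billion figure is the essay's headline number; treating idle share as mapping linearly to idle capital is my simplification):

```python
# Rough idle-capital estimate: if only 11-17% of compute is doing useful work,
# then to a first approximation the rest of the capital behind it sits idle.
# This ignores legitimate slack like redundancy and burst headroom.
total_infrastructure_spend_billion = 600.0  # the essay's headline figure
for utilization in (0.11, 0.17):
    idle_capital = total_infrastructure_spend_billion * (1 - utilization)
    print(f"At {utilization:.0%} utilization: roughly ${idle_capital:.0f}B of capital idle")
```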

The "GPU shortage" narrative from 2023 has completely evaporated. Microsoft alone accounted for 22% of Nvidia's Q4 revenue—they're not scrambling for chips, they're stockpiling them. Demand risk is transferring from chip makers to cloud providers, who now must either find customers willing to rent this capacity or write down the assets.

Enterprise deployment numbers reveal why this is a problem: Goldman Sachs found only 6.1% of American companies using AI in production. McKinsey reported 71% use generative AI in at least one function, but over 80% report no material enterprise-wide EBIT impact.

AI is delivering value in specific use cases—coding assistance, customer service, IT automation. But the aggregate financial impact on most organizations is approximately zero. The progression from "innovation budget experiments" to "permanent operational budgets" is stalling as proof-of-concepts fail to scale.

Because—and this is the part nobody wants to say out loud—most business problems AI "solves" either don't exist or aren't worth solving at AI's current price point.

Goldman Sachs analyst Jim Covello asked the question that should be tattooed on every data center: "What trillion-dollar problem will AI solve? Replacing low-wage jobs with tremendously costly technology is basically the polar opposite of prior technology transitions."

Who's Holding The Bag?

When this unwinds—and it will—the question is who gets destroyed and who survives.

Pure-play GPU cloud providers like CoreWeave are functionally dead companies that don't know it yet. They're carrying unsustainable debt loads, dependent on one or two customers, secured by depreciating collateral, in a market getting more competitive daily.

Private credit funds that deployed $50 billion into this sector are facing catastrophic losses. Blackstone, BlackRock, Magnetar, Carlyle—all the usual suspects—are deeply exposed across multiple correlated positions. When defaults start, they cascade: same borrowers, same lenders, same collateral, same customers, same market. One major default reprices risk across the entire sector simultaneously.

The hyperscalers—Amazon, Microsoft, Google, Meta—survive because they have diversified revenue streams and fortress balance sheets. But they'll take massive write-downs on billions in stranded GPU capacity.

Nvidia is the most interesting case. They're the arms dealer in this war, making money regardless of who wins. But $110+ billion in vendor financing and equity stakes in customers means they're exposed to the entire ecosystem's health. If the GPU cloud providers collapse, Nvidia's receivables and investments evaporate.

The broader economy faces two risks. First, AI infrastructure spending accounted for roughly 50% of U.S. GDP growth in H1 2024. When that spending stops or reverses, it shows up in GDP numbers. Second, the private credit market's exposure to AI infrastructure creates correlation risk across portfolios.

The Correction Timeline

CoreWeave's IPO valuation dropped from $35 billion aspirations to $23 billion, with the share price cut from the $47-55 range to $40. That's market skepticism breaking through the private funding enthusiasm.

OpenAI's projected $44 billion in cumulative losses through 2028 requires AI to become 10-20x more efficient or pricing to rise dramatically—trends moving in opposite directions as competition intensifies.

Depreciation cycles are accelerating as new chip generations arrive. Nvidia's B100, B200, and future architectures will rapidly obsolete H100 investments. This creates a treadmill effect requiring constant reinvestment while stranding previous generations' capital.
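The treadmill can be stated as a simple steady-state rule: with an N-year useful life, just holding capacity flat means replacing roughly 1/N of the fleet every year. A sketch under the essay's three-year assumption and $600 billion aggregate figure:

```python
# Steady-state reinvestment needed just to stand still on the GPU treadmill.
# Assumes the fleet is replaced evenly at the end of a three-year useful life;
# real cycles are lumpier, and newer generations cost more per unit.
fleet_value_billion = 600.0  # the essay's aggregate figure, used as a proxy for fleet value
useful_life_years = 3        # the essay's GPU obsolescence assumption

annual_replacement_billion = fleet_value_billion / useful_life_years
print(f"Annual reinvestment just to hold capacity flat: ~${annual_replacement_billion:.0f}B")

# Compared against the essay's ~$40 billion of current AI revenue:
ai_revenue_billion = 40.0
print(f"Replacement capex as a multiple of current AI revenue: "
      f"{annual_replacement_billion / ai_revenue_billion:.0f}x")
```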

Kedrosky estimated collapse 2 to 2.5 years from September 2024—placing the timeline at late 2026 to early 2027.

The trigger will likely be prosaic: rental income falling below sustainable levels as overcapacity drives pricing down, or a high-profile default causing credit markets to reassess risk across the sector.

The Lesson Nobody Wants To Learn

Here's what makes this so maddening: We know how this ends because we've seen it before.

The pattern is always the same: overbuild, crash, consolidate, and let the survivors pick up the stranded assets cheap while they wait for demand. AI will follow that script, with the added bonus that the infrastructure is depreciating so fast there might not be much left to buy at pennies on the dollar.

The truly depressing part is that none of this needed to happen. A measured, capital-efficient approach to AI infrastructure buildout—scaling investment with actual revenue growth—would have delivered the same long-term transformation without the spectacular destruction of capital.

So we get $600 billion in infrastructure chasing $40 billion in revenue, financed with vendor loans, securitized through SPVs, rationalized with the same "this time it's different" rhetoric that precedes every disaster.

It's not different. It's never different.

The current investment level is absolutely, irredeemably, catastrophically insane.

And when it crashes—probably sometime between late 2026 and 2028—everyone will act shocked that the thing that looked exactly like every previous bubble turned out to be a bubble.

The only surprise will be how many people saw it coming and built it anyway.



Khayyam Wakil is a systems theorist specializing in technological transformation. His work laid foundations later built on by the creators of Google Street View and 360° live-streaming. He has zero dollars invested in AI infrastructure and owns gold, silver, and Treasury bonds, making him both the most boring person at Silicon Valley parties and the one who'll probably be right about all this.

💡
My writing emerges from watching brilliant people make predictably terrible decisions with other people's money. What compels me to document these cycles is the stubborn human belief that technological breakthrough automatically justifies financial excess—a pattern I've observed through multiple boom-bust cycles in the tech world.

This essay isn't just market commentary—it's a warning about what happens when an entire industry mistakes spending for progress. The AI infrastructure boom follows every classic bubble pattern while adding a devastating new variable: assets that depreciate faster than the debt financing them. We're not just building too much too fast; we're building with materials that rot.

The three-body problem in physics shows how even a small set of interacting forces can defy precise long-term prediction. Similarly, the interaction between AI hype, cheap capital, and rapidly depreciating hardware creates dynamics where traditional investment logic breaks down. Spending $15 for every $1 of revenue isn't visionary—it's delusional. The only question is whether we'll learn this lesson before or after the inevitable crash.