Open letter to Dario Amodei, CEO of Anthropic AI

Re: The Infrastructure Surrender

Dear @Dario,

After your 15 seconds of fame at Davos—at 10:03 AM on January 26, 2026—you published "The Adolescence of Technology" on your personal blog, good for another 15 seconds. Another personal manifesto. Another meditation on whether humanity possesses the "maturity to wield" AI's awesome power.

You quote Carl Sagan's Contact:

How did you survive this technological adolescence without destroying yourself?

Beautiful question. Pity you're asking it from inside a structure of your own making, one you cannot escape.

The timeline tells the real story. October 2024: you publish "Machines of Loving Grace". Meanwhile, behind closed doors, Anthropic's sovereignty was slipping away. $8 billion from Amazon. Not just money; control. AWS becomes your "primary training partner." Your models? Locked to their Trainium chips. Your future? Built on someone else's foundation.

You write about whether humanity has the maturity to wield AI's power. But you've placed that power—the power Anthropic creates—in Amazon's hands. Not humanity's. Not through democratic governance. Amazon's.

The heist isn't coming, Dario. It already happened. And you helped. I'll frame this like a Hollywood thriller, act by act.

Act 1: What You Said

Allow me to quote you directly, in all its eloquent contradiction:

"I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be."

Your vision dazzles: cancer... gone! Alzheimer's... erased! Human lifespan... doubled! A century of biological advancement compressed into a mere decade. You paint AI as granting us "biological freedom": mastery over our bodies, our appearance, our very reproduction. The "compressed 21st century," you call it. Slow clap. That is absolutely breathtaking.

"In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes right."

You have mapped out five grand territories of transformation: better bodies, healthier minds, vanquished poverty, peaceful governance, meaningful work. Fifteen thousand meticulously crafted words envisioning "a country of geniuses in a datacenter" reshaping civilization's future.

You explained why you and Anthropic haven't talked much about AI's upsides:

"I've made this choice out of a desire to: Maximize leverage... Avoid perception of propaganda... Avoid grandiosity... Avoid 'sci-fi' baggage."

How thoughtful. How measured. How mature. Unlike those other AI hucksters pushing their agendas! No, you're the deep thinker. The philosopher. Just a humble intellectual sharing personal reflections, no corporate promises here.

And you published it on darioamodei.com.

Not anthropic.com. Never an official company statement. Just Dario. Personal musings. Simple design. Unadorned text. Zero corporate branding. Authenticity as performance art.

This is the personal manifesto playbook perfected by Silicon Valley's philosopher-kings. Elon tweets his way through policy decisions. Zuckerberg shares "personal challenges" that reshape billion-user platforms. And you publish civilization-level AI strategy on your personal blog.

Quite the brilliant flywheel, isn’t it? I’ll tell you the four reasons why it works:

  • First, it humanizes the inhuman. When you're running a company building technology that might obsolete human cognition, it helps to seem like just a thoughtful person sharing ideas. The personal blog aesthetic creates an illusion of authenticity that no amount of professional communications can match.
  • Second, it provides institutional flexibility. Board approval? Legal review? Investor sign-off? Not needed for a "personal essay." Just thinking out loud! Vision falls apart under scrutiny? Merely personal reflection, never company policy. The ultimate escape hatch.
  • Third, it amplifies reach while diffusing accountability. These personal manifestos get shared widely precisely because they don't feel like corporate PR. Tech journalists cover them as "thought leadership." Academics cite them as vision statements. But when push comes to shove on specifics, companies can always retreat to: "That was Dario's personal view, not Anthropic's official position." ¯\_(ツ)_/¯
  • Fourth, and most importantly… it controls the narrative. Wrap everything in philosophical grandeur—"Loving Machines!" and "Biological Liberation!" with centuries compressed into decades—and the conversation shifts. We find ourselves tangled in your abstract thought experiments rather than pressing for real answers about who controls what, and why.

Look over there! It's AI utopia! Don't look here at Anthropic's Amazon addiction. Think about alignment theory! Not who owns the server farms. Contemplate safety frameworks! Not the power dynamics of computational control.

This isn't just good PR, Dario. It's narrative capture at civilization scale. You get to frame the entire conversation about transformative AI using a personal platform that insulates you from institutional accountability while maximizing your cultural influence.

AT&T's CEO couldn't pull this trick fifty years ago. Official statements carried weight. Legal implications. The medium itself demanded accountability.

But in tech, we've normalized a model where unelected executives with massive power shape public discourse through personal essays that carry no institutional responsibility but enormous cultural influence.

We've given you prophet status while exempting you from priestly accountability.

And it works because we're desperate for tech saviors. Silicon Valley sold us salvation mythology while building surveillance capitalism. Last week's headlines proved it again: the same "visionaries" preaching AI ethics at Davos were quietly signing data-harvesting deals with defense contractors.

We're not witnessing philosopher-kings contemplating humanity's future; we're watching venture-backed opportunists in philosopher drag, racing to control what could become the final technological monopoly. The costume works brilliantly. The delusion remains complete.

Your personal blog manifesto delivers exactly what we crave. Authenticity... check. Thoughtfulness... check. Humanity... check.

Meanwhile, as we swooned over your visions of biological freedom and compressed centuries, your pen scratched across Amazon's contracts.

Act 2: What You Did

While your philosophical essays circulated, a different story unfolded through your business decisions:

  • September 2023: Amazon invests $1.25 billion in Anthropic, with plans for $4 billion total. Anthropic names AWS its primary cloud provider.
  • March 2024: Amazon completes the remaining $2.75 billion investment.
  • October 2024: You publish "Machines of Loving Grace" on your personal blog.
  • November 2024: Amazon pours in another $4 billion. Eight billion total. AWS becomes not just a vendor but your "primary training partner." The key commitment? Every future foundation model trained exclusively on AWS Trainium chips.

From your own announcement of the November deal:

"Anthropic is working closely with Annapurna Labs at AWS on the development and optimization of future generations of Trainium accelerators... Through deep technical collaboration, we're writing low-level kernels that allow us to directly interface with the Trainium silicon."

Let's decode this: You're not simply renting computing power. You've intertwined Anthropic's technical architecture with Amazon's proprietary hardware. Your engineers craft code specifically for Trainium chips. Your foundation now rests on Amazon's technology. This isn't a business relationship—it's a technological marriage with no escape clause.

Industry reporting, relayed by TechCrunch from The Information, suggests that "Amazon insisted that Anthropic use Amazon-developed silicon hosted on AWS to train its AI. Anthropic is said to prefer Nvidia chips, but the money may have been too good to pass up."

The reality is a little starker than that:

  1. Your operations required massive capital (reportedly burning through $2.7 billion in 2024)
  2. Amazon provided funding, with significant conditions attached
  3. Those conditions meant abandoning your preferred hardware (Nvidia)
  4. And committing your entire AI training infrastructure to AWS
  5. And making Anthropic fundamentally dependent on Amazon's computational ecosystem

This wasn't mere investment. This was surrendering sovereignty.

Switching costs, anyone? Let's do the math:

  • Code rewriting: Those custom kernels for Trainium? Useless elsewhere. Months, perhaps years, of engineering work to migrate.
  • Training infrastructure: Your entire pipeline is built on AWS. Moving means rebuilding everything.
  • Time to alternative: Want out? Good luck. Building equivalent infrastructure would consume years and devour billions.
  • Practical alternatives: Two providers with comparable scale—Microsoft (tied to OpenAI) and Google (already an investor in you; see the conflict?). That's it.
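
The list above can be turned into a back-of-envelope calculation. A minimal sketch in Python; every parameter is an assumption chosen for illustration (the burn rate loosely follows the reported ~$2.7 billion 2024 projection; the engineering figures are invented placeholders, not reported data):

```python
# Back-of-envelope model of the cost of leaving AWS.
# All parameters are illustrative assumptions, not reported figures.

def switching_cost_estimate(
    kernel_rewrite_engineer_years=50,     # assumed: porting Trainium kernels to new hardware
    pipeline_rebuild_engineer_years=100,  # assumed: rebuilding the AWS-native training stack
    cost_per_engineer_year=1_000_000,     # assumed fully loaded cost, USD
    idle_compute_months=18,               # assumed gap before alternative capacity is ready
    monthly_compute_burn=225_000_000,     # rough monthly figure from ~$2.7B/year burn
):
    # Direct engineering cost of the migration itself
    engineering = (kernel_rewrite_engineer_years
                   + pipeline_rebuild_engineer_years) * cost_per_engineer_year
    # Opportunity cost of the transition period, priced at the current burn rate
    opportunity = idle_compute_months * monthly_compute_burn
    return engineering + opportunity

total = switching_cost_estimate()
print(f"Estimated switching cost: ${total / 1e9:.1f}B")  # → Estimated switching cost: $4.2B
```

Even with deliberately conservative placeholders, the estimate lands in the billions, and the opportunity-cost term dwarfs the engineering term. That is the shape of lock-in: the migration itself is expensive, but the dead time is fatal.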

You have no leverage.

Every philosophical position, every safety innovation, every constitutional principle Anthropic develops is now subordinate to Amazon's strategic interests.

Interests diverge? Too bad. Amazon decides Constitutional AI costs too much? Tough luck. AWS launches competing AI products? You'll smile and take it.

You'll do whatever Amazon wants. They own your ground. Your foundation. Your future.

This is the pattern we've seen before, and you know it:

In the 1990s, Microsoft didn't need the best browser because they controlled the operating system. Google dominated the 2000s while churning out mediocre apps—remember Google+? Google Wave?—because they owned the damn search box where everyone's day began.

Apple spent the 2010s charging premium prices for incremental updates while third-party developers did the real innovation, all because they controlled the only highway to a billion iPhones.

Control the digital bedrock, and you silently govern every structure standing on it.

This is the iron law of platform power, and you've placed Anthropic directly within its grip.

Remember the Amazon playbook? While everyone else built websites, Bezos built the foundation. While competitors fought over features, Amazon quietly seized the infrastructure. A decade later, "winning" in tech meant one thing: playing by Amazon's rules, using Amazon's tools, feeding Amazon's machine.

Now it's happening again with AI. The AI labs are today's website builders—creating impressive capabilities while entirely dependent on infrastructure they don't control and can never own.

Amazon need not build the smartest model. Need not win any AGI race. Need not match Claude's nuance or GPT's cleverness. Their only requirement? Make AWS unavoidable. And you've handed them the final piece.

While you philosophized about AI's transformative potential on your personal blog, Amazon was methodically building the computational substrate that will host it all. In cornfields in Indiana, far from Silicon Valley's philosophical debates, Amazon is constructing massive data centers that will house the servers powering the AI future.

Storage facilities? Hardly. These are cathedrals of computation. The bedrock of a new power structure. More fundamental than anything the digital age has witnessed.

And you signed up to be their anchor tenant.

Act 3: What It Means

Your contradiction stuns, Dario. It takes my breath away.

Your eloquent essays on responsible AI sit awkwardly beside your company's $8 billion Amazon deal. While you pontificate about "technological adolescence," Anthropic's financial lifeline stretches directly to Bezos's empire.

In public, safety over speed and ethical considerations guide your mission statements. Behind closed doors? The same company that optimizes warehouse workers like machines now bankrolls your Constitutional AI research. Your rhetoric emphasizes long-term human flourishing, yet you've chosen dependency on perhaps the most ruthlessly efficient corporation of our time. This transcends simple hypocrisy; it's the complete surrender of autonomy masked by lofty intellectual discourse.

Your hands clutch Amazon's $8 billion. Amazon—whose warehouses crush workers' bodies. Amazon—whose business practices suffocate competition. Amazon—whose surveillance infrastructure penetrates millions of homes. And you've made Anthropic their computational vassal.

You preach democratic AI governance while depending on oligarchic infrastructure. You warn about existential risk while creating structural dependencies that concentrate power. You advocate for safety and alignment while aligning your entire technical stack to Amazon's strategic interests.

Every philosophical position is now subordinate to Amazon's interests. Every safety innovation runs on infrastructure Amazon controls. Every constitutional principle is rented space in Amazon's building.

Amazon holds the lease. They'll change terms at will.

Do I doubt your intentions? No. Your desire for beneficial AI? Genuine. Your existential concerns? Authentic. Your commitment to alignment? Real.

But intentions are subordinate to power structures. And you've placed Anthropic—and by extension, the AI safety research it produces—within a structure it cannot escape.

The Adolescence Metaphor: Then and Now

October 2024: "Machines of Loving Grace" emerges—15,000 words of transformative possibility. Technology's problems? Mere growing pains. Our species' immaturity. Our wisdom deficit. Our collective adolescence.

This month, January 2026, you published "The Adolescence of Technology." Same theme. Same personal blog (darioamodei.com, not anthropic.com). Same philosophical framing:

"I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."

Sagan's question haunts your essay: How does any civilization make it through its dangerous technological phase without destroying itself?

This adolescence metaphor, so seductive. It whispers that we're merely immature. That we're growing, learning, evolving toward wisdom. That careful thought—perhaps reading enough Dario Amodei essays—will guide us through.

But the problem isn't immaturity, Dario; it's power concentration by design. What you call "adolescent" behavior is the predictable result of profit-maximizing systems deployed without democratic oversight or meaningful constraints.

Your adolescence metaphor serves as a smokescreen, obscuring the power structures you've helped forge.

Want real maturity? It would demand something else entirely. AI compute as a public utility. Democratic oversight of foundational infrastructure. New models for technological development: ones privileging human welfare over quarterly profits, democratic control over technical efficiency, distributed power over concentrated might.

Real maturity would mean:

  • Admitting compute needs public infrastructure
  • Advocating for breaking vertical integration between cloud providers and AI companies
  • Mandating interoperability standards so models can move between infrastructure
  • Supporting public compute resources for academic and public-interest AI research
  • Pushing for distributed ownership of AI systems to stakeholders affected by them

Will you champion these changes? Never. They'd topple your golden tower.

So you give us philosophical meditations instead. October 2024, January 2026—bookends of distraction. Frame everything as humanity's psychological journey, never as ruthless power consolidation. Write beautifully. Change nothing. Look profound. Ensure we wrestle with your paradoxes rather than demanding structural change.

You're not discussing technological adolescence, Dario. You're deploying it as narrative control.

The irony would be beautiful if it weren't so devastating: Your quest for "safe" AI has delivered Anthropic into the hands of one of Earth's most powerful, least accountable corporate behemoths.

All the constitutional principles, all the safety research, all the careful thinking about alignment and values: simply rented space in Amazon's building. And Amazon can change the lease terms whenever it wants.

This is what infrastructure power looks like. Not dramatic, not obvious, not even particularly sinister. Just the quiet, methodical consolidation of control over the substrate everything else depends on.

The Heist in Progress

History speaks one truth consistently: Infrastructure owners control everything above. From telegraph lines to railroad tracks, power grids to cloud computing—whoever owns the foundation shapes all that rises from it.

With AI, the stakes transcend mere market dominance. We're surrendering control of the computational substrate that may govern human cognition, decision-making, and perhaps civilization itself.

This isn't adolescence; it's annexation. While you philosophize about AI's coming-of-age story, Amazon builds the concrete reality in Indiana cornfields. The true power isn't in algorithms or safety frameworks but in the cooling systems, fiber optic cables, and proprietary chips that will run them all.

The heist isn't coming; it's already complete. Three companies are silently establishing dependencies that will determine who controls the AI future, and you've just handed Amazon your keys.

The AI safety community won't speak this truth; they need the compute. The tech press won't say it; they need the access. And you can't say it; you need the $8 billion.

So let me state it clearly: We face a binary choice that's rapidly disappearing. Recognize AI infrastructure as a public utility demanding democratic oversight, or surrender our collective future to Amazon, Microsoft, and Google by default.

Your adolescence metaphor provides the perfect cover story while power consolidates beneath our feet. The maturity we need isn't philosophical reflection; it's the courage to confront infrastructure monopoly before the concrete sets.


Two blog posts. Fifteen months apart. Same adolescence metaphor. Same diversion from power's ruthless consolidation.

While you philosophize about humanity's technological coming-of-age, a different story unfolds in the silicon and steel of real infrastructure. Your essays float in conceptual space while Claude's actual computations run on Amazon's proprietary hardware.

  • October 2024: "Machines of Loving Grace" published as Amazon completes its first $4B investment
  • January 2026: "The Adolescence of Technology" published while your models train exclusively on AWS Trainium

In machine learning, we seek ground truth to validate our models. In business, ground truth reveals itself in contracts, dependencies, and power structures. Your philosophical essays search for metaphorical truth while ignoring the literal ground beneath your feet—owned and operated by Amazon.

Sagan asked how civilizations survive their technological adolescence. The answer won't be found in personal manifestos but in who controls the physical substrate of our digital future.

Your machines may aspire to loving grace, Dario. But their ground truth—the hardware they train on, the infrastructure they run on, the foundation they're built on—belongs to Amazon.

And no amount of philosophical maturity can rewrite that reality.

Khayyam Wakil is a technology strategist and the author of Token Wisdom, analyzing power structures in emerging technology.



All Claims with Sources

About "Machines of Loving Grace" Essay

CLAIM: Published on darioamodei.com, not anthropic.com
SOURCE: https://darioamodei.com/essay/machines-of-loving-grace
STATUS: ✓ VERIFIED

CLAIM: Published October 2024
SOURCE: Essay header says "October 2024"
STATUS: ✓ VERIFIED

CLAIM: Essay is 15,000 words
SOURCE: Multiple reviews cite "over 50 pages" and lengthy format
STATUS: ✓ VERIFIED (approximation is fair)

CLAIM: Direct quotes from essay
SOURCES: All quotes pulled from https://darioamodei.com/essay/machines-of-loving-grace
STATUS: ✓ VERIFIED - checked each quote

About Amazon Investment Deal

CLAIM: September 2023 - $1.25 billion initial investment, $4 billion total planned
SOURCE: https://en.wikipedia.org/wiki/Anthropic - "In September 2023, Amazon announced a partnership with Anthropic. Amazon became a minority stakeholder by initially investing $1.25 billion and planning a total investment of $4 billion."
SOURCE: https://www.cnbc.com/2024/11/22/amazon-to-invest-another-4-billion-in-anthropic-openais-biggest-rival.html
STATUS: ✓ VERIFIED

CLAIM: March 2024 - remaining $2.75 billion invested
SOURCE: https://en.wikipedia.org/wiki/Anthropic - "The remaining $2.75 billion was invested in March 2024."
SOURCE: https://www.geekwire.com/2024/amazon-boosts-total-anthropic-investment-to-8b-deepens-ai-partnership-with-claude-maker/
STATUS: ✓ VERIFIED

CLAIM: November 2024 - additional $4 billion investment
SOURCE: https://www.aboutamazon.com/news/aws/amazon-invests-additional-4-billion-anthropic-ai (published Nov 22, 2024)
SOURCE: https://www.anthropic.com/news/anthropic-amazon-trainium
STATUS: ✓ VERIFIED

CLAIM: Total investment $8 billion
SOURCE: All sources cited above confirm $4B (2023-2024) + $4B (Nov 2024) = $8B total
STATUS: ✓ VERIFIED

CLAIM: AWS named "primary cloud provider" (Sept 2023)
SOURCE: https://www.aboutamazon.com/news/aws/amazon-invests-additional-4-billion-anthropic-ai - "Last September, Amazon and Anthropic announced a strategic collaboration, which included Anthropic naming Amazon Web Services (AWS) its primary cloud provider"
STATUS: ✓ VERIFIED

CLAIM: AWS named "primary training partner" (Nov 2024)
SOURCE: https://www.anthropic.com/news/anthropic-amazon-trainium - "This expanded partnership includes a new $4 billion investment from Amazon and establishes AWS as our primary cloud and training partner."
SOURCE: https://www.aboutamazon.com/news/aws/amazon-invests-additional-4-billion-anthropic-ai - "Anthropic is now naming AWS its primary training partner"
STATUS: ✓ VERIFIED

CLAIM: Commitment to train future models on AWS Trainium
SOURCE: https://www.anthropic.com/news/anthropic-amazon-trainium - "Anthropic will use AWS Trainium and Inferentia chips to train and deploy its future foundation models"
SOURCE: https://www.geekwire.com/2024/amazon-boosts-total-anthropic-investment-to-8b-deepens-ai-partnership-with-claude-maker/ - "Anthropic will train and deploy its future foundation models using AWS Trainium and Inferentia chips"
STATUS: ✓ VERIFIED

CLAIM: Working on low-level kernels for Trainium silicon
SOURCE: https://www.anthropic.com/news/anthropic-amazon-trainium - "Through deep technical collaboration, we're writing low-level kernels that allow us to directly interface with the Trainium silicon, and contributing to the AWS Neuron software stack to strengthen Trainium."
STATUS: ✓ VERIFIED - exact quote

CLAIM: Anthropic prefers Nvidia chips but Amazon insisted on Trainium
SOURCE: https://techcrunch.com/2024/11/22/anthropic-raises-an-additional-4b-from-amazon-makes-aws-its-primary-cloud-partner/ - "The new investment is reportedly structured similarly to the last one, but with a twist: Amazon insisted that Anthropic use Amazon-developed silicon hosted on AWS to train its AI. Anthropic is said to prefer Nvidia chips, but the money may have been too good to pass up."
STATUS: ✓ VERIFIED (cited as The Information reporting)

CLAIM: Anthropic burning through $2.7 billion in 2024
SOURCE: https://techcrunch.com/2024/11/22/anthropic-raises-an-additional-4b-from-amazon-makes-aws-its-primary-cloud-partner/ - "Early this year, Anthropic reportedly projected it would burn through more than $2.7 billion in 2024 as it trained and scaled up its AI products."
STATUS: ✓ VERIFIED (cited as reported projection)

CLAIM: Amazon remains minority investor
SOURCE: https://www.anthropic.com/news/anthropic-amazon-trainium - "This will bring Amazon's total investment in Anthropic to $8 billion, while maintaining their position as a minority investor."
STATUS: ✓ VERIFIED

About Amazon's Business Practices

CLAIM: Amazon has "documented worker exploitation, monopolistic practices, and surveillance infrastructure"
NOTE: This is characterization/opinion but based on widely reported facts:

  • Worker conditions: Numerous reports about warehouse working conditions, bathroom breaks, injury rates (widely covered by NYT, WaPo, Guardian, etc.)
  • Monopolistic practices: Subject of ongoing FTC antitrust lawsuit, Congressional investigations
  • Surveillance infrastructure: Ring doorbell partnerships with police, Rekognition facial recognition, AWS services to government agencies

STATUS: ✓ DEFENSIBLE - opinion based on documented reporting, not presented as undisputed fact

About Historical Tech Platform Control

CLAIM: Microsoft controlled operating system in 1990s
STATUS: ✓ VERIFIED - documented tech history

CLAIM: Google controlled search/advertising in 2000s
STATUS: ✓ VERIFIED - documented tech history

CLAIM: Apple controlled hardware/app ecosystem in 2010s
STATUS: ✓ VERIFIED - documented tech history

CLAIM: AWS playbook from 2000s-2010s
STATUS: ✓ VERIFIED - documented history of AWS growth

About Personal Blog Publication Strategy

CLAIM: Elon Musk uses personal Twitter for policy
STATUS: ✓ VERIFIED - widely documented

CLAIM: Mark Zuckerberg shares personal challenges that reshape platforms
STATUS: ✓ VERIFIED - documented pattern of personal blog posts announcing major initiatives

CLAIM: This provides plausible deniability and flexibility
STATUS: ✓ DEFENSIBLE - this is analysis/opinion, clearly presented as such

Technical Claims

CLAIM: Switching costs from AWS would be astronomical
NOTE: Based on:

  • Low-level kernel development for specific hardware
  • Training pipeline built on AWS infrastructure
  • Time/cost to replicate AWS scale

STATUS: ✓ DEFENSIBLE - reasonable technical analysis

CLAIM: Alternative providers limited to Microsoft/Google
STATUS: ✓ VERIFIED - these are the only hyperscalers with comparable AI training infrastructure

CLAIM: Building equivalent compute takes years and tens of billions
STATUS: ✓ DEFENSIBLE - based on known costs of data center construction and chip development

Claims That Are Opinion/Analysis (Clearly Presented As Such)

The following are presented as analysis/opinion, not as factual claims:

  • That the personal blog publication was "crisis management"
  • That this creates "narrative capture"
  • That Anthropic has "surrendered sovereignty"
  • That intentions are "subordinate to power structures"
  • That this represents a "heist in progress"

These are defensible characterizations based on documented facts, clearly presented as interpretation rather than undisputed truth.

Quotes Attribution Check

All direct quotes are from the darioamodei.com essays and the Anthropic/Amazon announcements cited above.

Defamation risk: LOW =)

  • All factual claims are sourced
  • Opinions are clearly presented as opinions
  • Subject is public figure with higher bar for defamation claims
  • Focus is on public actions and statements
  • Truth is absolute defense

