"The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology."

— E.O. Wilson, who probably didn't anticipate we'd use the godlike technology to make chatbots that refuse to write limericks

How Amazon and Anthropic Are Building AI's Future Foundation While Others Fight For Headlines

While headlines buzz, concrete pours in Indiana


Somewhere in New Carlisle, Indiana—a town of 1,900 souls that until recently was known primarily for producing sweetcorn and the occasional meth lab—Amazon has erected what amounts to the largest non-NVIDIA AI infrastructure on the planet. Seven buildings. Five hundred thousand custom chips. Two-point-two gigawatts of electricity. All of it purpose-built to train Anthropic's Claude, the large language model marketed as the "ethical alternative" to OpenAI's ChatGPT.
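For a sense of scale, here is a quick back-of-envelope pass over those headline figures. The numbers are the essay's own, and the per-chip split is purely illustrative:

```python
# Back-of-envelope scale check using the figures quoted above.
# These are the essay's headline numbers, not published facility specs.
facility_power_watts = 2.2e9   # 2.2 gigawatts of electrical capacity
custom_chips = 500_000         # custom accelerators on site
buildings = 7

watts_per_chip = facility_power_watts / custom_chips
chips_per_building = custom_chips / buildings

print(f"All-in power budget per chip: {watts_per_chip:,.0f} W")       # ~4,400 W
print(f"Chips per building (even split): {chips_per_building:,.0f}")  # ~71,400
```

That all-in figure of roughly 4.4 kW per accelerator, which covers cooling, networking, and everything else in the building, is several times what an average household draws. Multiply it by half a million chips and the aquifer worries that come up later in this piece stop sounding parochial.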

And whilst you've been fretting about whether ChatGPT might accidentally write a racist limerick or help your nephew with his A-Level coursework, Anthropic has quietly positioned itself as the default AI for every classified government operation, defence contractor, and critical infrastructure operator in the Western world.

Brilliant, really. Absolutely f@%ing brilliant.

But here's the thing: we've seen this exact playbook run before. In Utah. With a company called Banjo. They promised to save kidnapped children, and we fell for it.

The Dress Rehearsal in Utah

Let me tell you about Damien Patton. In 2018, he relocated his surveillance startup from Vegas to Park City, Utah, and sold the state government on the most ambitious real-time surveillance system ever conceived on American soil. The pitch was simple and seductive: what if we could predict crimes before they happen?

Banjo's "Live Time" platform promised to ingest every data stream the state produced:

  • Every 911 call
  • Every traffic camera
  • Every police car's GPS location
  • Every CCTV feed from businesses (half of Park City's private cameras were feeding the system)
  • Social media posts (stripped of personally identifiable information, they claimed)
  • Flight data, autonomous vehicle telemetry, IoT sensor networks

All of it funnelled through proprietary algorithms that would detect "anomalies" and "emergency events" in real time. The state attorney general was in tears during a simulated child abduction scenario. A SWAT commander reportedly said,

"This is what we've waited for our entire careers."

Utah gave Banjo a $20.8 million contract. Free rein across all 29 counties. Access to state databases containing sensitive information.

Banjo's lobbyist literally told municipal officials:

"We essentially do what Palantir does, but we do it live."

That should have been the first red flag. It wasn't.

Then investigative journalists discovered Patton's past: at seventeen, he'd been a Klan member who drove the Grand Knight during a TEC-9 drive-by shooting of a synagogue. Police had confiscated an AK-47 from him before he fled the state with help from a Christian music producer.

OneZero later uncovered Patton's shadow company, Pink Unicorn Labs, which had developed innocuous-sounding apps like "One Direction Fan App" and "EDM Fan App" that were secretly harvesting social media data and farming user tokens without consent.

But the real kicker? When Utah's state auditor finally reviewed the system in 2021, they discovered that Banjo's technology couldn't actually do what it claimed. The vaunted AI was just "a dashboard of data aggregated from Utah governmental sources." No machine learning. No predictive algorithms. No real-time event detection from social media.

It was vapourware wrapped in surveillance infrastructure, sold to credulous government officials with a child safety narrative.

The company collapsed. Patton resigned. The contract was suspended.

And absolutely nobody in Silicon Valley learned a goddamn thing.

Enter Anthropic

Now let's return to Indiana, where Anthropic and Amazon are building the sequel.

This time, though, they've studied Banjo's failures:

  1. Don't oversell the technology. Anthropic doesn't promise crime prediction or real-time event detection. They just offer a really good chatbot with "constitutional AI" and strong safety guarantees. Much more defensible.
  2. Partner with legitimate infrastructure providers. No scrappy Park City startups. Amazon Web Services is the gold standard of enterprise computing. Unimpeachable credibility.
  3. Embed within existing power structures. Why sell surveillance to states when you can integrate with Palantir and become part of the intelligence community's operational substrate?
  4. Market it as ethical AI, not surveillance. "AI safety" is a much better brand than "predictive policing." Nobody protests constitutional AI the way they protested facial recognition.

The playbook is identical. The execution is just more sophisticated.

The Safety Theatre Industrial Complex

Let's be clear about what "AI safety" actually means in this context. It's not about preventing Skynet. It's about building an AI system palatable enough for government procurement officers and enterprise compliance departments to tick the box marked "ethically sourced."

Anthropic brands itself as the responsible adult in the room. Not like those cowboys at OpenAI who'll let just anyone summon demons from the latent space. No, Anthropic has constitutional AI and values alignment and all manner of reassuring corporate governance theatre that makes Lockheed Martin's procurement department feel warm and fuzzy inside.

In practice? Claude won't help you write erotica or explain how to hotwire a car, but it'll happily analyse satellite imagery for drone targeting or optimise supply chain logistics for private prisons. It's got principles, after all.

Just like Banjo promised to protect privacy whilst building a statewide panopticon.

This positioning lets Anthropic occupy the moral high ground while building surveillance infrastructure that would make ECHELON look quaint. They're not trying to win the consumer market—let OpenAI have the teenagers generating anime fan fiction. Anthropic is after the real money: government contracts, defence applications, and enterprise deployments where AI systems actually make decisions that affect human lives at scale.

Damien Patton understood this instinctively. That's why he targeted government contracts, not consumers. That's why he emphasised child safety and emergency response. That's why he sold Utah on saving lives, not on building surveillance infrastructure.

Anthropic is doing exactly the same thing. They've just got better PR and an $8 billion investment from Amazon to make it look legitimate.

The Palantir Integration: Banjo 2.0 With Actual Competence

Remember when Banjo's lobbyist said they "do what Palantir does, but live"? That was aspirational. They couldn't actually pull it off.

Anthropic can.

Anthropic isn't just mimicking Palantir—they're partnering with them. The same Palantir that provides surveillance software for immigration enforcement, predictive policing, and whatever happens in the murky depths of the national security state.

Palantir's Foundry platform is the backbone of classified government operations. It's the system that connects disparate data sources across intelligence agencies. And now Claude is being integrated into that ecosystem.

But unlike Banjo's vapourware, this actually works. Claude is a genuinely capable language model. The Trainium infrastructure is real. The AWS integration is operational. The government contracts are being signed.

Palantir ingests data from government and corporate sources. Claude interprets and analyses it. AWS provides the certified infrastructure for classified operations. The result? An AI system embedded in the operational reality of the surveillance state, marketed as the "ethical" option because it won't write you a cheeky limerick about the Prime Minister.

This is what Banjo wanted to be. Anthropic is what Banjo becomes when you have actual technical competence, billions in funding, and partnerships with the world's largest cloud provider and most notorious surveillance contractor.

The Amazon Backdoor: Infrastructure as Destiny

While OpenAI courted Microsoft and Congress, Anthropic took a different approach. They got into bed with the company that already owns the infrastructure that runs the modern world.

AWS is the substrate that government agencies, Fortune 500 companies, and critical infrastructure operators run on. The CIA's data is on AWS. The NHS has patient records on AWS. Your bank's transaction systems are probably on AWS.

Anthropic's Claude isn't just available on AWS—it's being optimised for it. Co-designed with Amazon's Trainium chips and integrated into AWS Bedrock, one API call away from every enterprise system.
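To make "one API call away" concrete, here is roughly what that call looks like from any workload already running on AWS. This is a minimal sketch using boto3's bedrock-runtime client; the model ID and prompt are illustrative, and a real deployment would sit behind IAM roles, private endpoints, and the rest of the enterprise plumbing.

```python
import json
import boto3

# Any AWS-hosted system can reach Claude through Bedrock with ordinary
# SDK credentials; no separate vendor relationship is required.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Illustrative model ID; the exact identifier depends on which Claude
# version is enabled in the account.
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

request_body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Summarise the attached incident reports."}
    ],
}

response = bedrock.invoke_model(
    modelId=MODEL_ID,
    contentType="application/json",
    accept="application/json",
    body=json.dumps(request_body),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```

The snippet itself is unremarkable. That's the point: for anything already on AWS, from an HR platform to an intelligence pipeline, the integration cost is a few lines of code and an IAM policy.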

This isn't a partnership. It's a trojan horse.

Amazon doesn't even need NVIDIA anymore. They've built their own silicon specifically for training large language models. While other AI labs beg NVIDIA for GPUs, Amazon has gone full vertical integration. They control the entire stack—from the chip architecture to the cooling systems to the models themselves.

Infrastructure lock-in in 2025 isn't flashy. It's cornfields in Indiana turning into server farms while local residents worry about their aquifers.

Exactly like Utah in 2019, except this time the technology actually delivers what it promises.

The "Ethical AI" Brand Is a Marketing Construct

The researchers at Anthropic aren't villains. They genuinely believe in their safety mission. But "AI safety" has become completely decoupled from any meaningful analysis of who holds power and how that power gets wielded.

I'm sure Damien Patton thought he was saving children too. The Utah Attorney General's office maintained after the audit that Banjo "would have saved lives" if only it had been "fully built out."

Constitutional AI—Anthropic's flagship safety technique—is about aligning AI systems with predefined values. But whose values? Who writes the constitution? Who decides what counts as "helpful, harmless, and honest"?

Stanford PhDs in Mountain View make these decisions, implementing value systems that align suspiciously well with the interests of their corporate partners and government clients. The "safety" being optimised for? Safety for existing power structures, not safety for the people those structures govern.
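Mechanically, the published technique is a critique-and-revise loop: the model drafts an answer, critiques its own draft against a fixed list of written principles, then rewrites it. A simplified sketch of that loop follows; generate stands in for any chat-model call, and the principles shown are placeholders rather than Anthropic's actual constitution.

```python
from typing import Callable

# Placeholder principles; the real constitution is a longer, curated list
# written by the lab itself.
CONSTITUTION = [
    "Prefer the response least likely to assist violence.",
    "Prefer the response that best respects individual privacy.",
    "Prefer the response most honest about its own uncertainty.",
]

def constitutional_revision(prompt: str, generate: Callable[[str], str]) -> str:
    """Draft a response, then critique and revise it against each principle."""
    draft = generate(prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Point out any way the response conflicts with the principle."
        )
        draft = generate(
            f"Original response: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    return draft
```

Notice what the loop optimises and what it ignores: every pass pulls the output toward whatever sits in that list of principles, but nothing in the machinery asks who wrote the list or whose interests it encodes.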

The ethics are real—the people working on them genuinely care—but they're operating within constraints that ensure the fundamental power dynamics remain unchanged.

Constitutional AI might genuinely prevent Claude from generating offensive content.

But it doesn't prevent the fundamental problem: powerful institutions using AI to expand surveillance and control while marketing it as public safety or ethics.

The Banjo Lesson Nobody Learned

What should terrify you about the Anthropic-Amazon-Palantir nexus? They studied the Banjo failure and fixed all the mistakes.

Banjo's Mistakes:

  1. Overpromised technical capabilities → Caught by audit
  2. Relied on one contract → Lost everything when exposed
  3. Founder had obvious disqualifying history → Easy target for cancellation
  4. No real technical moat → Revealed as vapourware

Anthropic's Solutions:

  1. Realistic promises — Claude genuinely works
  2. Diversified funding and partnerships — too big to cancel
  3. Founders are respected AI safety researchers — impeccable credentials
  4. Actual technical capabilities — real infrastructure, real models

They're building the same surveillance apparatus Banjo envisioned. They've just made it actually functional and socially acceptable.

Utah's experience was a warning bell, but we hit snooze.

Now we're repeating it at planetary scale, with vastly more competent actors and vastly more resources.

While OpenAI Fights Regulatory Battles, Anthropic Builds the Panopticon

OpenAI's strategy is clear: lobby Congress, shape regulatory frameworks, position themselves as the responsible stewards of transformative AI. They're fighting yesterday's war, trying to win through political influence.

Anthropic is building the actual infrastructure that AI will run on. Not trying to control the rules—just being the substrate those rules regulate.

This is Monopoly 101, Silicon Valley Edition. Don't fight for market share—create the market conditions that make competition irrelevant. Amazon did it with AWS. Google did it with search advertising. Banjo tried it with surveillance. Now Anthropic is doing it with enterprise AI.

And they're doing it under the guise of "AI safety"—a branding coup that would make Big Tobacco's "harm reduction" researchers blush.

The Real Cost: What Gets Built in the Shadows

While we debate AI-generated celebrity photos, consequential AI gets deployed in the shadows.

Those systems being built in Indiana? They're not for chatbots:

  • Predictive policing algorithms that determine where officers are deployed
  • Immigration enforcement systems that decide who gets detained
  • Military targeting systems that identify potential threats
  • Credit scoring models that determine who gets loans
  • Healthcare triage systems that allocate medical resources
  • Hiring algorithms that filter CVs before humans see them

These systems will be built with Claude. They'll be optimised for AWS infrastructure, integrated into Palantir's data fusion platforms, all wearing the badge of "ethical AI" thanks to constitutional constraints designed by existential risk researchers.

Utah's experiment failed because the technology wasn't ready. Now it is. But we're too busy worrying about ChatGPT's content filters to notice.

The ethics aren't wrong, exactly. They're just utterly inadequate for the actual sociotechnical systems being deployed.

The Indiana Test Case: What Happens When AI Infrastructure Comes to Town

New Carlisle, Indiana—population 1,900—now hosts one of the world's largest AI training facilities.

Locals worry about water, farmland loss, roads destroyed by construction traffic, and whether the electrical grid can handle the load.

These concerns echo Utah's experience with Banjo, but they miss the bigger picture.

What they should be worried about—what we should all be worried about—is what gets trained in those buildings. What models get deployed. What decisions those models make. Who benefits and who gets harmed.

The county handed Amazon $4 billion in tax breaks. The state kicked in another $4 billion. All to build infrastructure for training AI models that will be used to automate decisions about human lives.

The stakes are higher than Utah's $20 million gamble on Banjo. This time the infrastructure is real, the models work, the partnerships are operational. The panopticon won't fail from technical incompetence—it's governed by "Constitutional AI," funded by the world's largest e-commerce platform and integrated with defence contractor surveillance systems.

The "Responsible AI" Endgame

What Anthropic has accomplished:

  1. Positioned as the ethical alternative while building state surveillance infrastructure
  2. Captured enterprise and government markets through AWS integration
  3. Achieved compute independence from NVIDIA through vertical integration
  4. Embedded Claude into classified systems via Palantir partnerships
  5. Avoided regulatory scrutiny because they're the "safe" option

Banjo only managed the first and last steps before collapsing. Anthropic has executed the entire playbook—all while OpenAI fought over API access and testified to Congress.

This isn't a conspiracy. It's just capitalism working exactly as designed. Where Banjo failed to deliver on "predictive policing for public safety," Anthropic succeeded with "ethical AI for enterprise and government."

The ethics are real. They're just deployed in service of power consolidation rather than power redistribution.

What This Means for the Rest of Us

The AI future being built in Indiana cornfields isn't one where algorithms help us write better emails or generate funny images. It's one where AI systems mediate access to employment, healthcare, housing, credit, and increasingly, liberty itself.

What Utah couldn't achieve with Banjo's fraudulent technology, Claude will accomplish through AWS infrastructure and Palantir's platforms—marketed as "ethical" because the models won't swear or help with homework.

The real safety question isn't whether Claude will refuse to write offensive content. It's whether these systems will be deployed in ways that further concentrate power, deepen inequality, and expand state capacity for surveillance and control.

On that question? Anthropic's "Constitutional AI" has nothing to say.

The Bitter Irony

What's truly maddening? The researchers at Anthropic probably are worried about AI existential risk. They genuinely believe that careful alignment work today prevents catastrophic outcomes tomorrow.

But while they're focused on preventing hypothetical future harms from superintelligent AI, they're building the infrastructure for very real, very present harms from AI systems deployed in service of existing power structures.

The real alignment problem isn't between AI and human values—it's between AI development and power structures that control its deployment.

Anthropic has solved the second problem whilst pretending to work on the first.

And they've done it while wearing the white hat.

The Real AI Arms Race

OpenAI writes promises they can't fulfil. Anthropic builds the infrastructure that would make those promises possible.

One company is fighting for regulatory capture through political influence. The other is achieving market capture through infrastructure dominance.

Both are wrapped in the language of safety, responsibility, and ethics.

Neither addresses the core question: who controls these systems, and how do we ensure they serve humanity rather than shareholders and state security?

Utah's 2019 experiment with Banjo proved that fraudulent technology could still build real surveillance infrastructure.

It's 2025 now. Four thousand construction workers in New Carlisle, Indiana, are building out capacity for a million Trainium chips that will train the next generation of Claude models.

Those models will make decisions about people's lives. About who gets hired, who gets healthcare, who gets detained, who gets bombed.

They'll do it all while refusing to write dirty limericks, because that's what "ethical AI" means in 2025.

Welcome to the future. It's got constitutional constraints.

Just don't ask whose constitution we're following.

Don't miss the weekly roundup of articles and videos from the week in the form of these Pearls of Wisdom. Click to listen in and learn about tomorrow, today.

W44 •B• Pearls of Wisdom - 132nd Edition 🔮 Weekly Curated List - NotebookLM ➡ Token Wisdom ✨


132nd Edition 🔮 Token Wisdom \ Week 44
AI surveillance, privacy paradoxes, and simulation theory headline this week’s edition. Follow how age verification drives VPN adoption, AI systems develop unpredictable behaviors, and mathematical proofs challenge our understanding of reality—all amid psychopathy research and social norm evolution.

About the Author

Khayyam Wakil is a systems theorist who specializes in explaining why everyone else's model of technological change is wrong. His research focuses on the productive chaos that precedes breakthrough—the part venture capitalists edit out of their success stories. He practices intellectual humility by getting thoroughly demolished at chess in Union Square, which keeps his academic ego appropriately calibrated.

His forthcoming book, "Knowware: Systems of Intelligence — The Third Pillar of Innovation," challenges Silicon Valley's assumptions about artificial intelligence and proposes a radical new framework for systems of intelligence—one built on embodied cognition rather than pattern matching.

For speaking engagements or media inquiries: sendtoknowware@proton.me

Subscribe to "Token Wisdom" for weekly deep dives and round-ups into the future of intelligence, both artificial and natural: https://tokenwisdom.ghost.io


#artificialintelligence #ethics #techpolicy #bigtech #governance #infrastructure #techsurveillance #cloudcomputing #regulation #constitutionalAI #responsibleAI #AWS #anthropic #palantir #chips #enterprise #techanalysis #opEd #futureoftech #tech #techcriticism | #tokenwisdom #thelessyouknow 🌈✨