The AI ethics war has gone hot


Steve Defendre
March 9, 2026
7 min read

I've been covering the Anthropic-Pentagon standoff since February, and every time I thought it had peaked, it got worse. This week it went somewhere I wasn't expecting. It stopped being about contracts and policy and turned into something personal, public, and genuinely ugly.

Here's where we are. Anthropic refused to strip Claude's safety guardrails for autonomous weapons systems and domestic mass surveillance. They missed the Pentagon's compliance deadline. Defense Secretary Pete Hegseth called it "arrogance and betrayal" and formally declared Anthropic a supply-chain risk to national security. That's the first time that designation has ever been applied to a US company. Ever. We use that label for Huawei. For Kaspersky. For adversaries.

Then OpenAI took the deal.

And then everything exploded.

The employee revolt nobody saw coming

Nearly 900 current and former employees from OpenAI and Google signed a joint petition opposing the use of AI in autonomous weapons systems. Nine hundred people, many of them still employed at these companies, putting their names on a document that directly contradicts their employer's newest revenue stream.


I keep coming back to the number. This isn't a handful of disgruntled engineers posting on Twitter. It's hundreds of people at two of the biggest AI companies on the planet, many of them still on payroll, saying publicly that they think their employers are making a mistake.

The petition calls for a moratorium on AI systems that can select and engage targets without meaningful human oversight. It asks for independent ethics review boards with actual authority, not the advisory kind that issues reports nobody reads. And it asks companies to walk away from contracts that require removing safety constraints for lethal applications.

Whether any of that happens is a different question. But the fact that the petition exists at all tells you something about the temperature inside these buildings right now.

Kalinowski walked. That matters.

I wrote yesterday about Caitlin Kalinowski resigning as OpenAI's head of robotics over the Pentagon deal. I want to add some context that's emerged since.

Kalinowski wasn't a policy person who disagreed with a strategy memo. She was building the physical hardware that would carry OpenAI's models into the real world. Robots. Actual machines that move and act. When the person building the body quits because she doesn't trust what the brain will be used for, that's not a PR problem. That's a structural problem.

People inside OpenAI are reportedly shaken by her departure. She was well-liked and well-respected. When someone at her level leaves over principle, it gives everyone below her permission to ask the same questions she was asking. I've seen this dynamic before in defense-adjacent organizations. One departure cracks the dam. Six months from now, we'll know if others followed.

The founders went at each other

This is the part that surprised me most. Dario Amodei, Anthropic's CEO, publicly accused Sam Altman of "dictator-style praise" of Trump. His exact words. Amodei later apologized, but the damage was done. That's not a policy disagreement between competitors. That's a personal attack from one CEO on another, broadcast to the entire industry.


Altman, for his part, hasn't directly responded to Amodei. He hasn't needed to. The Pentagon deal speaks louder than any statement.

Trump weighed in too, because of course he did. "I fired them like dogs" was the quote, referring to Anthropic's federal exile. Whether you find that alarming or funny probably depends on your politics, but from a business perspective, having the President publicly celebrate your blacklisting is about as bad as it gets for enterprise sales.

Why this week is different

I've been writing about AI safety and military AI for over a year now. Most of those posts were about hypotheticals. What might happen if a government pressured an AI company. What the tradeoffs would look like. How the industry might respond.

None of this is hypothetical anymore.

Anthropic drew a line and got punished for it. OpenAI saw the line, stepped over it, and got rewarded with billions in contracts. Their own employees revolted. Their robotics chief quit. The CEOs are trading insults. The President is talking about it on social media.

This is what it looks like when an industry's stated values collide with the actual incentive structure. For years, every major AI lab has published safety policies, hired ethics teams, written blog posts about responsible development. This week we found out which of those commitments were real and which were marketing.

Where I stand

I'm a veteran. I understand military technology. I understand that you want your own military to have the best tools available. I don't think defense AI is inherently wrong.

But autonomous kill systems without human oversight? Mass domestic surveillance powered by AI? Those aren't tools. Those are decisions that can't be undone, deployed at a scale no human can monitor. Anthropic was right to refuse. The fact that refusing cost them everything doesn't make them wrong. It makes the incentive structure broken.

The 900 employees who signed that petition know it too. So did Kalinowski.

The question I keep asking myself is simple: if this is what happens when one company says no, who's going to say no next time?

I don't have a good answer. And that bothers me more than anything else that happened this week.


Steve Defendre is the founder of Defendre Solutions, an AI consulting firm helping organizations adopt AI tools strategically. He writes about AI, veterans in tech, and the future of work.
