Should we be concerned about AI procured for defense?
If you’ve been watching the news or scrolling social media lately, you’ve probably seen tons of people talking about the Anthropic and Pete Hegseth debacle. Depending on your channel of choice or your algorithm, there’s a lot of variance in what you might have heard. My goal with this post is not to convince you of anything. To be upfront, I’m really just sharing my thoughts as someone who works in both defense and AI, and who feels pretty up to date on both defense procurement regulations and the importance of the work most frontier labs are doing.
What’s going on?
Instead of giving you my own rundown of the events, I’ll let you hear it from the man himself: Claude, the tool the Pentagon and Anthropic seem to be having a fit over.
I asked Claude:
Give a non-biased, 3-4 sentence overview that summarizes the entire conflict recently between Anthropic and the Dept of Defense. Use neutral language, and be as fact based in your summarization as you can.
Claude said:
Anthropic refused the Pentagon’s demand to remove AI safety restrictions preventing use of its models for mass surveillance and autonomous weapons. Defense Secretary Hegseth gave a Friday deadline to comply or face consequences. Trump then ordered all federal agencies to stop using Anthropic’s tech.
There are a couple of things I want to define or give you a rundown on before I share my thoughts:
Claude? Who’s Claude?
Claude is the name of an AI currently being developed by Anthropic, a frontier AI lab based in San Francisco. I could write several blogs about what Claude can do, and maybe I will now that I think about it, but all you need to know is that Claude is very good at just about anything that can be done on a computer. Reading and responding to emails, sorting and filtering through Excel sheets, writing software, the list goes on.
What does the Dept. of Defense want with Claude?
The specifics are classified, but as you can imagine, a powerful AI like Claude under the arm of the DoD could be used for all sorts of things: autonomous weapon systems, drones, surveillance systems, and so on. Basically everything up to and including a literal Terminator (provided the hardware).
Ok with that out of the way, here’s what I think:
The beautiful world of defense procurement
I have spent most of my adult life writing software for a company that sells it to the government. Selling something to the government is very different from selling it to, say, you or me, or even to another company. As soon as the government becomes a customer via contract, thousands of complex rules and stipulations immediately come into effect, regardless of how simple the product being sold is. The Federal Acquisition Regulation (FAR), a beautiful little 2,000-page document for anyone who’s bored, is basically the tip of the iceberg that then determines all of the related ethics, labor, pricing, and other rules that follow. And that’s just for coffee cups; with Claude, we are talking about one of the most powerful AI systems in the world right now.
Anthropic’s backstory
Anthropic was co-founded in 2021 by Dario Amodei, a former OpenAI researcher and vice president of research, and one of the pioneers of artificial intelligence as we know it today. As the name would suggest, they are very focused on AI safety. It’s actually kind of funny: at least to my knowledge, no other frontier lab publicly publishes safety research the way Anthropic does, and because they are the only ones that do, there are a lot of horrifying research findings they have shared, specifically on alignment. I want to be clear, I think it’s good they are sharing these, because people do need to know what these systems are capable of, and I applaud them for being open about the complexity of the AI alignment problem.
Anthropic’s core belief is that Artificial General Intelligence (AGI) will be developed by someone, and it’s paramount to the safety of the world that whoever that is, is laser focused on safety. This drives everything they do, from their published research to the guardrails they place on Claude. It’s this philosophy that puts them at direct odds with a customer like the Department of Defense, where the use cases extend well beyond what Anthropic is comfortable enabling.
Claude and War
So with such a safety-driven focus, maybe we begin to see where some of the friction with the Department of Defense comes in. I’m not saying the Department of Defense doesn’t care about safety, but there’s a fundamental, philosophical difference between what the DoD wants the AI to be used for and what Anthropic wants the AI to be used for. This is where the specifics of FAR come into question. Under FAR, if Anthropic wishes to be a government contractor, it forfeits much of its control over how Claude is used, and that understandably makes them uncomfortable given the gravity of the situation and the care they take over how Claude is used.
This core belief, I actually agree with. The day AGI is real, it will upend almost all the societal norms we know today. Far more dangerous than all nuclear warheads combined, AGI will fundamentally alter the trajectory of life in a way that is very difficult for most people to grasp. I want to clarify that I think good will prevail, that AGI will be used to elevate the quality of life for all, and that this is inevitable. It should hopefully be a little clearer now how the DoD, focused on the threats of today (Russia, Iran, etc.), differs philosophically from Anthropic, which is concerned with the threat of a misaligned AGI. Both are very real threats.
So what do you think?
I think this was to be expected by anyone paying close attention to the current frontier AI labs. I’ve seen claims circulating that this means the DoD is ‘commandeering’ Anthropic; I don’t think that can legally happen. I am actually glad on both counts: that Hegseth prioritized the sovereignty of the DoD supply chain and national security over having the latest technology, and that Amodei understands the gravity of maintaining safety standards while developing AGI and is willing to lose a lot of money over it. I believe it’s pretty likely that another frontier lab, probably Google’s Gemini or OpenAI’s ChatGPT, will take on a fuller role for the DoD, but it’s difficult to be sure. The only two AI labs that have been ‘open’ are xAI when it comes to patents and model architecture, and Anthropic when it comes to safety.
AI is and will continue to be used in DoD technologies; there is really nothing we can do about that at this point. We will likely continue the stick-swinging contest of leverage with our adversaries. Meanwhile, in the offices of a few labs in the world, and known only to a few hundred researchers, AGI is being developed. We are edging closer and closer to achieving it, and the race continues at a breakneck pace, with a new lab holding the top benchmark on ARC-AGI or HLE every month. My concern and fear is not that AGI will be used to guide missiles or control robots; I’ve already seen how that’s done, and those systems already exist. They are sticks. What I do fear is that when AGI is achieved, given the current gap between public knowledge and where AI actually is, it will be too late for the overwhelming majority of people.
Research progress and technology growth is predictable, but human reaction to it upending their way of life is not.