Pentagon Approves Seven AI Companies for Classified Networks, Blocks Anthropic
The Pentagon signed agreements with seven major AI companies—including OpenAI, Google, and Microsoft—to deploy their systems on the military's highest-security classified networks. Anthropic, an AI safety-focused company, was excluded after the Pentagon designated it a supply chain risk.

The Pentagon announced on May 1, 2026 that it had reached agreements with seven AI companies to deploy their systems on the Defense Department's most secure networks. The companies — SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, and Amazon Web Services — will now have access to Impact Levels 6 and 7 network environments. These are the highest security tiers within defense computing, used for classified and top-secret information.
Anthropic, an AI company known for emphasizing safety in its systems, was notably absent from the group. The Pentagon officially designated Anthropic as a supply chain risk earlier in 2026, which bars the Defense Department and its contractors from directly using Anthropic's Claude AI tool in military work.
Why Anthropic Was Excluded
According to Anthropic CEO Dario Amodei, the Pentagon informed the company that Anthropic and its products are considered a supply chain risk. The conflict centers on disagreements over how the military could use Claude: Anthropic sought tighter limits on military deployment of its AI than other vendors have accepted.
The supply chain risk designation carries real consequences. Under federal procurement rules, it can force government contractors to stop using Claude when they integrate it directly into military contract work. However, Amodei clarified that the Pentagon's decision applies specifically to Claude's use within military contracts, not to its broader commercial use.
What It Means for Defense Networks
The seven approved companies will now have access to classified networks where they can integrate their AI systems. Google became the third AI company to reach such an agreement, joining OpenAI and others in a push to bring commercial AI into the highest levels of defense operations.
This marks a shift in how the Pentagon approaches AI. Rather than building its own AI from scratch or relying only on traditional defense contractors, the Defense Department is directly partnering with the companies leading commercial AI development. The reasoning is straightforward: these companies move faster and have more resources than any military-specific effort could match.
The specific companies included tell a story. SpaceX's presence suggests these agreements will cover both foundational AI and specialized applications for space-based defense systems. NVIDIA's inclusion indicates that computing infrastructure and AI training capabilities will be part of what gets deployed on classified networks.
How This Fits Into Larger Patterns
We have seen this pattern before. The Pentagon went through a similar transition with cloud computing, awarding the JEDI cloud infrastructure contract in 2019. That experience established the precedent for bringing commercial technology directly into classified environments. The current AI integration is more ambitious — it represents a shift toward bringing state-of-the-art commercial AI capabilities directly into military operations.
The Anthropic exclusion highlights a genuine tension within the AI industry. Some companies, like Anthropic, prioritize AI safety research and want strict guardrails, especially around military use. Others have been more willing to work with defense applications. OpenAI itself initially resisted military partnerships but later changed its policies to allow defense work, which cleared the way for its Pentagon approval.
The broader effect of these agreements is to create a new tier of credential for AI companies. Access to classified government networks requires extensive security clearances, background checks, and technical compliance that smaller startups cannot easily achieve. This reinforces the position of large, established players while potentially reducing the diversity of AI approaches available to the military.
The Technical Challenge
Deploying commercial AI systems in Impact Level 6 and 7 networks is not straightforward. These networks operate with air-gapped isolation — meaning they are physically and logically separated from the internet — strict rules about where data can live, and detailed audit trails that track everything. Commercial AI systems built for the open internet must be rebuilt to work within these constraints.
Beyond network security, there is a deeper challenge with how military AI needs to behave. Military applications require AI systems that are explainable (you can understand why they made a decision), deterministic (they produce the same output for the same input), and equipped with fail-safes. These requirements can conflict with how modern large language models actually work — they are fundamentally statistical and probabilistic. Each of the seven companies will need to solve this problem in their own systems.
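To make the determinism tension concrete, here is a minimal sketch (hypothetical token names and probabilities; real models produce distributions over tens of thousands of tokens). It contrasts greedy decoding, which always returns the same output for the same input, with standard temperature-based sampling, which does not:

```python
import random

# Toy next-token distribution for illustration only.
next_token_probs = {"approve": 0.55, "deny": 0.30, "escalate": 0.15}

def greedy_decode(probs):
    """Deterministic: always pick the highest-probability token."""
    return max(probs, key=probs.get)

def sampled_decode(probs, rng):
    """Stochastic: sample a token proportionally to its probability,
    as standard LLM sampling (temperature > 0) does."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Greedy decoding yields the same output on every run.
assert all(greedy_decode(next_token_probs) == "approve" for _ in range(5))

# Sampling can yield different outputs across runs of the same input.
rng = random.Random()
outputs = {sampled_decode(next_token_probs, rng) for _ in range(100)}
print(outputs)  # likely more than one distinct token
```

Greedy (or fixed-seed) decoding is one way vendors can meet a same-input, same-output requirement, though it trades away the output diversity that sampling provides.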
The Competitive Strategy
The Pentagon's decision to sign parallel agreements with multiple companies is itself noteworthy. This multi-vendor approach is different from traditional defense contracts, which often have a single winner. Using multiple vendors provides backup options if one company faces problems, and it prevents any single company from becoming indispensable. It also maintains competitive pressure.
The broader context here is worth considering. These agreements, coming in the early months of 2026, appear to be part of a coordinated acceleration of military AI adoption. The exclusion of Anthropic suggests the Pentagon is prioritizing speed and operational flexibility over the more cautious AI safety approaches some companies advocate. Whether that trade-off is the right one, gaining speed and capability while accepting less scrutiny on safety, will shape how military AI systems actually perform once deployed. That is genuinely uncertain, and how events unfold will reveal what the optimal balance was.


