The U.S. Department of Defense (DoD) has reached a deal with Elon Musk’s xAI that allows its Grok model to be used inside some of the military’s most sensitive classified systems. The agreement follows a major clash with Anthropic over the Pentagon’s demand that Claude be usable for “all lawful purposes,” including mass surveillance and fully autonomous weapons.
## Core of the Grok–Pentagon deal
- xAI has signed an agreement allowing Grok to be integrated into classified U.S. military networks used for high‑end intelligence analysis, weapons development, and battlefield operations.[1][3]
- A defense official confirmed that Grok will be deployed in “classified frameworks,” effectively joining or partially replacing Anthropic’s Claude as an AI engine inside secure systems.[1][3]
- xAI accepted the Pentagon’s “all lawful purposes” standard, meaning DoD can use Grok for any application that is legal under U.S. law, without additional policy restrictions imposed by the company.[2][1]
- Previous contracts: xAI already had around a $200 million DoD contract as part of a broader AI push, via a “Grok for Government” / AI tools program aimed at mission areas including the “warfighting domain.”[5][6]
## What “classified systems” and “all lawful purposes” mean
- “Classified systems” here refers to protected networks and environments where the military handles sensitive intelligence, operational planning, weapons programs, and battlefield decision support.[1][3][6]
- The “all lawful purposes” clause is crucial: the Pentagon wants the freedom to apply the model to anything not explicitly illegal, including large‑scale surveillance and support for weapons, without vendor‑imposed guardrails blocking specific use cases.[7][2][8]
## Anthropic’s refusal and the clash with DoD
- Anthropic’s Claude was, until now, the only frontier model fully integrated into the Pentagon’s classified environment, tailored specifically for national security clients.[1][9][8]
- Claude has reportedly been used in real operations, including a U.S. mission targeting Venezuelan President Nicolás Maduro, which raised internal concern among Anthropic staff.[10][11]
- Negotiations broke down because Anthropic refused to allow:
  - Use of Claude for mass surveillance of U.S. citizens.
  - Use of Claude for fully autonomous (“no human in the loop”) weapons targeting or kinetic operations.[4][7][12][8]
- DoD insisted on being able to employ AI for “all lawful use cases,” pushing Anthropic to drop these restrictions; Anthropic declined even when the Pentagon suggested adding an internal “safety stack” instead of hard use‑case bans.[4][13][12]
- Pentagon officials have threatened to label Anthropic a “supply chain risk” if it maintains its safety guardrails, which would force defense contractors and programs to drop Claude and switch to other vendors.[7][2]
## The leverage and timing
- As the clash escalated and Anthropic’s Pentagon collaboration went “under review,” the department moved to ink the Grok deal and open the door to other providers like OpenAI and Google in classified spaces.[7][2][8]
- From DoD’s perspective, xAI’s willingness to accept “all lawful purposes” offers an immediate alternative if Anthropic is pushed out of critical programs.[2][1]
## What Grok is expected to be used for
Public reporting does not say the U.S. military will use Grok *specifically* for mass surveillance. The key point is that xAI, unlike Anthropic, has agreed not to block such uses so long as they are legal.[7][2][1]
Likely and stated mission areas include:
- **Sensitive intelligence analysis**: Assisting human analysts with large‑scale data ingestion, pattern recognition, summarization, and scenario analysis in intelligence and counterintelligence contexts.[1][6][3]
- **Weapons development and wargaming**: Supporting R&D, simulations, optimization of weapons systems, and battlefield tactics in “warfighting domain” workflows.[1][6][3]
- **Battlefield decision support**: Providing planning assistance, logistics optimization, and battle management tools for commanders, under classified conditions.[1][6]
- **Broader government use**: As part of “Grok for Government,” xAI has pitched tailored applications for national security and other public‑sector tasks beyond strictly military applications.[5]
Because the DoD has insisted on “all lawful purposes,” these same models could, in principle, be adapted for:
- Large‑scale data fusion and monitoring (a technical foundation for mass surveillance, even if not described that way in official language).[7][12][8]
- High‑autonomy targeting and command‑and‑control systems, as technical barriers fall.[12][6]
## Policy, ethics, and civil‑liberties dimension
- Negotiations with Anthropic have highlighted a core unresolved policy issue: U.S. law and military AI policy have not fully caught up with frontier models, especially around domestic surveillance and lethal autonomy.[7][12][10]
- Anthropic’s position is that without explicit legal and policy constraints, allowing “all lawful” use effectively hands the military an unrestricted tool that can be used for domestic mass surveillance in ways that could infringe civil liberties.[7][8][12]
- The Pentagon counters that it will use AI in accordance with existing law and military directives, arguing that vendors should not unilaterally dictate mission constraints beyond legal requirements.[10][12]
- The threat to classify Anthropic as a “supply chain risk” is a strong pressure tactic, signaling that maintaining strict safety and ethics guardrails may carry real commercial and strategic costs in the defense market.[7][2]
## Strategic implications
- **Vendor competition and leverage**: Anthropic, OpenAI, Google, and xAI have all landed large DoD AI contracts, each in the range of $200 million, but the Grok classified deal marks a shift in leverage away from Anthropic and toward companies willing to align more closely with DoD’s use‑case flexibility.[5][6][8]
- **Escalation of AI militarization**: Embedding Grok and similar systems into classified, warfighting‑adjacent workflows accelerates the move toward AI‑mediated decision‑making in conflict, including in areas like targeting, cyber operations, and information warfare.[6][1]
- **Geopolitical angle**: Some commentary points out the irony that the U.S. is opening highly classified access to an AI firm controlled by a single powerful billionaire, while simultaneously warning about foreign labs “stealing” U.S. AI capabilities, highlighting new governance and influence risks.[11]
## What is *not* yet clear
- Specific technical configurations (fine‑tuned versions, safety layers, human‑in‑the‑loop guarantees) for Grok in classified environments are not publicly detailed.[1][3]
- The exact scope of any surveillance‑related deployments (e.g., signals intelligence triage vs. domestic monitoring) has not been disclosed.[12][8]
- Whether Congress or courts will respond to companies’ willingness to permit “all lawful purposes” with new statutes or oversight mechanisms remains unresolved.[10][8]
In short, the key dynamics are: Anthropic drew a bright line against mass surveillance and fully autonomous weapons; the Pentagon insisted on unrestricted “all lawful purposes” use; Anthropic resisted and is being pressured; xAI agreed to the Pentagon’s terms, enabling Grok to move into highly sensitive military and intelligence roles where those contested uses are now, in principle, possible.[4][7][2][1][12]
## Citations
[1] Musk's xAI and Pentagon reach deal to use Grok in classified systems https://www.axios.com/2026/02/23/ai-defense-department-deal-musk-xai-grok
[2] xAI Lands Pentagon Deal for Grok in Classified Systems https://www.heygotrade.com/en/news/xai-lands-pentagon-deal-for-grok-in-classified-systems/
[3] Pentagon, Musk's xAI reach agreement to use Grok in classified ... https://www.aa.com.tr/en/americas/pentagon-musk-s-xai-reach-agreement-to-use-grok-in-classified-systems-report/3838508
[4] The US military will reportedly use Elon Musk's Grok AI in ... https://www.engadget.com/ai/the-us-military-will-reportedly-use-elon-musks-grok-ai-in-its-classified-systems-110049021.html
[5] xAI announces $200m US military deal after Grok chatbot had Nazi meltdown https://www.theguardian.com/technology/2025/jul/14/us-military-xai-deal-elon-musk
[6] Grok's latest gig? A $200 million Pentagon contract https://responsiblestatecraft.org/dod-ai/
[7] Inside Anthropic's existential negotiations with the Pentagon https://www.theverge.com/ai-artificial-intelligence/883456/anthropic-pentagon-department-of-defense-negotiations
[8] Anthropic is clashing with the Pentagon over AI use. Here's what each side wants https://www.cnbc.com/2026/02/18/anthropic-pentagon-ai-defense-war-surveillance.html
[9] Pentagon and Musk's xAI reach agreement to use "Grok" in ... https://telegrafi.com/en/amp/pentagoni-dhe-xai-i-musk-arrijne-marreveshje-per-te-perdorur-grok-un-ne-sistemet-e-klasifikuara-2675309225
[10] Defense Dept. and Anthropic Square Off in Dispute Over A.I. Safety https://www.nytimes.com/2026/02/18/technology/defense-department-anthropic-ai-safety.html
[11] xAI and Pentagon reach deal to use Grok in classified ... https://www.reddit.com/r/singularity/comments/1rd9mss/xai_and_pentagon_reach_deal_to_use_grok_in/
[12] Exclusive: Pentagon clashes with Anthropic over military AI use https://www.reuters.com/business/pentagon-clashes-with-anthropic-over-military-ai-use-2026-01-29/
[13] The US military will reportedly use Elon Musk's Grok AI in ... https://sg.news.yahoo.com/us-military-reportedly-elon-musks-110049372.html
[14] Grok to join US military AI systems in Pentagon deal with Elon Musk | In Full https://www.youtube.com/watch?v=3rgHNMMJz-A
[15] Musk's AI tool Grok will be integrated into Pentagon networks, Hegseth says https://www.theguardian.com/technology/2026/jan/13/elon-musk-grok-hegseth-military-pentagon