Defense Secretary Pete Hegseth criticized AI models that he says "won’t allow you to fight wars" as the Pentagon added xAI’s Grok to its roster of approved generative AI providers. The comment appeared aimed at Anthropic, whose usage policies prohibit weapons development and restrict certain other applications. Tensions have risen as the DoD pushes faster adoption of frontier AI through platforms like Genai.mil, while safety advocates warn against loosened guardrails and overreliance on unproven models.
Hegseth Criticizes Anthropic’s Safety Limits as Pentagon Adds Grok to Approved AI Tools

Defense Secretary Pete Hegseth sparked controversy when he announced the Pentagon would add xAI’s Grok to its roster of approved generative AI providers and criticized models that, in his words, "won’t allow you to fight wars." People familiar with his thinking say the remarks were directed at Anthropic, the safety-focused AI startup founded by former OpenAI employees.
In recent weeks, tensions have risen between Anthropic and U.S. military officials as the administration presses to accelerate the adoption of advanced AI tools in warfighting systems. Anthropic argues it has a responsibility to prevent its models from being used beyond their tested limits—especially in scenarios where errors could be lethal. The Pentagon counters that companies should not be the final arbiters of battlefield use; those decisions, officials say, belong to the armed forces, just as they do for other technologies and weapons the department acquires.
DoD Position and New Policy Language
A Defense Department official speaking on background told reporters the department will only deploy models that are "free from ideological constraints that limit lawful military applications. Our warfighters need to have access to the models that provide decision superiority in the battlefield." The policy language that accompanied the Pentagon's AI push includes the line: "We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment," and adds that the Department must use models "free from usage policy constraints that may limit lawful military applications."
Infrastructure and Additions
In December, the Pentagon launched Genai.mil, a portal intended to centralize and accelerate DoD access to generative AI, including a specialized version of Google’s Gemini frontier model. When announcing a broader AI acceleration strategy, Hegseth also said xAI’s Grok would be added to the list of supported models.
Concerns from Anthropic and Safety Advocates
Anthropic employees and supporters worry the Pentagon could place too much reliance on these models or trust their outputs prematurely, increasing the risk of deadly mistakes. The company’s policies explicitly prohibit using its models to develop weapons and restrict certain law-enforcement applications. Some AI safety advocates, along with Anthropic insiders, said the new Pentagon language understates the importance of guardrails.
“If we rely on models before they are ready, there is a risk of costly and even lethal errors,” Anthropic supporters say.
Others put the concern in a wider context: some safety researchers emphasize that threats vary — from misuse of conventional systems to hypothetical long-term risks such as misaligned superintelligence. As safety advocate Nate Soares observed, a sufficiently powerful misaligned AI might not need conventional weapons to cause existential harm. Conversely, proponents say greater autonomy could reduce human exposure to frontline danger in some roles.
Past Frictions and Broader Implications
This is not Anthropic’s first clash with the U.S. government: the company previously disagreed with the White House over state-level AI regulatory issues and later restricted certain law-enforcement applications, widening the rift. Observers note the new DoD strategy echoes elements of a 2023 plan that also encouraged rapid adoption of frontier AI models for military uses.
As the debate continues, the central question remains: should private companies set limits on how their models are used in warfare, or should those decisions rest solely with military authorities? The answer will shape how AI is integrated into defense systems and how risks are managed.