Study Names 'Most Harmful' AI: Most Leading Firms Fail to Manage Catastrophic Risks

The Future of Life Institute's AI Safety Index finds that most leading AI firms lack the safeguards, oversight and credible long-term plans needed to manage catastrophic risks. US companies scored best, with Anthropic ahead of OpenAI and Google DeepMind, while several firms, including Alibaba Cloud, DeepSeek, Meta, xAI and Z.ai, received failing marks for existential risk. Experts such as Stuart Russell urged proof that companies can reduce annual loss-of-control risk to extremely low levels, and the report calls for greater transparency and stronger protections against both immediate and long-term harms.

A major new evaluation from the nonprofit Future of Life Institute warns that most leading artificial intelligence companies are not adequately managing the catastrophic risks posed by advanced systems.
Key Findings
The AI Safety Index assessed eight top AI firms and found widespread gaps in safeguards, independent oversight and long-term risk-management plans. US companies scored highest overall — with Anthropic ranked ahead of ChatGPT creator OpenAI and Google DeepMind — while Chinese firms received the lowest grades, with Alibaba Cloud placed just behind DeepSeek.
Most strikingly, no company scored above a D for existential risk. Alibaba Cloud, DeepSeek, Meta, xAI and Z.ai all received an F in that category.
"Existential safety remains the sector's core structural failure, making the widening gap between accelerating AGI/superintelligence ambitions and the absence of credible control plans increasingly alarming," the report says.
"While companies accelerate their AGI and superintelligence ambitions, none has demonstrated a credible plan for preventing catastrophic misuse or loss of control," the authors add.
Expert Warnings and Industry Responses
UC Berkeley computer science professor Stuart Russell highlighted the disconnect between ambition and safety planning, saying executives can’t yet demonstrate how they would prevent loss of control — a failure that, in his view, could put humanity’s survival at stake. Russell called for concrete proof that companies can reduce the annual risk of control loss to about one in 100 million, comparable to nuclear-reactor safety expectations. By contrast, he noted some firms have estimated risks as high as one in ten, one in five or even one in three, without justification or clear mitigation plans.
Representatives for major firms responded by pointing to ongoing safety work. An OpenAI spokesperson said the company is "working with independent experts to build strong safeguards into our systems, and rigorously test our models." A Google spokesperson highlighted its "Frontier Safety Framework," saying the company continues to evolve safety and governance as model capabilities advance. The Independent has contacted Alibaba Cloud, Anthropic, DeepSeek, xAI and Z.ai for comment.
What the Report Urges
The authors call for greater transparency around internal safety assessments, independent oversight, and stronger protections against nearer-term harms — such as so-called "AI psychosis" and other misuse or reliability failures — as well as credible, long-term plans to prevent catastrophic loss of control.
The assessment underscores an urgent policy and industry challenge: as ambition for advanced AI accelerates, the sector must demonstrate concrete, independently verifiable steps to reduce both immediate harms and remote existential risk.