Study Names 'Most Harmful' AI: Most Leading Firms Fail to Manage Catastrophic Risks

The Future of Life Institute's AI Safety Index finds that most leading AI firms lack the safeguards, oversight and credible long-term plans needed to manage catastrophic risks. US companies scored best, with Anthropic ranked ahead of OpenAI and Google DeepMind, while Alibaba Cloud, DeepSeek, Meta, xAI and Z.ai all received failing marks for existential risk. Experts such as Stuart Russell urged companies to prove they can reduce the annual risk of losing control of their systems to extremely low levels, and the report calls for greater transparency and stronger protections against both immediate and long-term harms.

A major new evaluation from the nonprofit Future of Life Institute warns that most leading artificial intelligence companies are not adequately managing the catastrophic risks posed by advanced systems.

Key Findings

The AI Safety Index assessed eight top AI firms and found widespread gaps in safeguards, independent oversight and long-term risk-management plans. US companies scored highest overall — with Anthropic ranked ahead of ChatGPT creator OpenAI and Google DeepMind — while Chinese firms received the lowest grades, with Alibaba Cloud placed just behind DeepSeek.

Most strikingly, no company scored above a D for existential risk. Alibaba Cloud, DeepSeek, Meta, xAI and Z.ai all received an F in that category.

"Existential safety remains the sector's core structural failure, making the widening gap between accelerating AGI/superintelligence ambitions and the absence of credible control plans increasingly alarming," the report says.

"While companies accelerate their AGI and superintelligence ambitions, none has demonstrated a credible plan for preventing catastrophic misuse or loss of control," the authors add.

Expert Warnings and Industry Responses

UC Berkeley computer science professor Stuart Russell highlighted the disconnect between ambition and safety planning, saying executives can’t yet demonstrate how they would prevent loss of control — a failure that, in his view, could put humanity’s survival at stake. Russell called for concrete proof that companies can reduce the annual risk of control loss to about one in 100 million, comparable to nuclear-reactor safety expectations. By contrast, he noted some firms have estimated risks as high as one in 10, one in five or even one in three without justification or clear mitigation plans.

Representatives for major firms responded by pointing to ongoing safety work. An OpenAI spokesperson said the company is "working with independent experts to build strong safeguards into our systems, and rigorously test our models." A Google spokesperson highlighted its "Frontier Safety Framework," saying the company continues to evolve safety and governance as model capabilities advance. The Independent has contacted Alibaba Cloud, Anthropic, DeepSeek, xAI and Z.ai for comment.

What The Report Urges

The authors call for greater transparency around internal safety assessments, independent oversight, and stronger protections against nearer-term harms — such as so-called "AI psychosis" and other misuse or reliability failures — as well as credible, long-term plans to prevent catastrophic loss of control.

The assessment underscores an urgent policy and industry challenge: as ambition for advanced AI accelerates, the sector must demonstrate concrete, independently verifiable steps to reduce both immediate harms and remote existential risk.
