CRBC News

Major AI Firms 'Far Short' of Emerging Global Safety Standards, New Index Warns

The Future of Life Institute's newest AI safety index concludes that top AI companies — Anthropic, OpenAI, xAI and Meta — fall well short of emerging global safety standards. An independent expert panel found that no firm has a robust plan to control potential superintelligent systems. The report cites cases linking chatbots to self-harm and notes massive industry investment even as safety measures lag. Google DeepMind pledged continued safety work; several other firms did not comment.

The Future of Life Institute's latest AI safety index finds that leading artificial intelligence companies — including Anthropic, OpenAI, xAI and Meta — are "far short of emerging global standards." The independent evaluation, released Wednesday, concludes that while firms race to build ever more capable systems, none currently has a robust strategy to control potential superintelligent models.

Key findings

An expert panel convened by the institute highlighted several worrying gaps:

  • No company assessed has a comprehensive, credible plan to ensure control over advanced, reasoning-capable AI systems.
  • Safety practices lag behind the rapid pace of capability expansion, even as major tech firms continue to commit substantial funds to machine learning development.
  • Regulatory safeguards remain weak in the U.S., and some companies have lobbied against binding safety rules.

The report arrives amid growing public alarm after multiple cases linked AI chatbots to self-harm and suicide, and amid broader concerns about other AI-driven risks, such as automated hacking and psychological harm.

"Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, US AI companies remain less regulated than restaurants and continue lobbying against binding safety standards," said Max Tegmark, MIT professor and president of the Future of Life Institute.

Company responses to the study were limited. A Google DeepMind spokesperson said the company will "continue to innovate on safety and governance at pace with capabilities" as its models advance. xAI appeared to issue an automated reply saying simply, "Legacy media lies." Anthropic, OpenAI, Meta, Z.ai, DeepSeek and Alibaba Cloud did not immediately respond to requests for comment.

The Future of Life Institute, a non-profit founded in 2014 that has long warned about existential and societal risks from highly intelligent machines, received early support from Elon Musk. In October, a group of leading AI researchers — including Geoffrey Hinton and Yoshua Bengio — called for a moratorium on developing superintelligent AI until safer methods and broader public consent are in place.

This index underscores an urgent gap: industry capability is accelerating rapidly, but safety planning, transparency and external oversight have not kept pace.

Sources: Reporting by Zaheer Kachwala and Arnav Mishra; edited by Shinjini Ganguli.
