AI Safety Report Card: Major Firms Fall Short, Index Urges Binding Standards

The Future of Life Institute's AI Safety Index assessed major AI firms on 35 indicators across six categories and found industry efforts lacking. OpenAI and Anthropic led with C+; Google DeepMind earned a C; Meta, xAI and several Chinese firms scored D or lower. All companies ranked below average on existential safety. The report calls for binding safety standards and greater transparency, while California's SB 53 marks an initial regulatory step.

What the Index Measured

The Silicon Valley nonprofit Future of Life Institute published an AI Safety Index that evaluated major AI companies using 35 indicators grouped into six categories, including existential safety, risk assessment, information sharing, and internal monitoring. Evidence came from publicly available documents and responses to a company survey. Eight AI experts — a mix of academics and leaders of AI organizations — scored the results.

Key Findings and Company Grades

The highest overall grades were C+ (OpenAI and Anthropic). Google DeepMind received a C. Meta and xAI earned Ds. Chinese firms Z.ai and DeepSeek also received Ds, while Alibaba Cloud received the lowest grade, a D-. Notably, every company scored below average on the report's existential safety metric, which assesses plans for internal monitoring, control interventions, and strategies to prevent catastrophic misuse or loss of control.

'They are the only industry in the U.S. making powerful technology that's completely unregulated, so that puts them in a race to the bottom against each other where they just don't have the incentives to prioritize safety,' said Max Tegmark, president of the Future of Life Institute and a professor at MIT.

Responses From Companies

OpenAI and Google DeepMind reiterated their commitment to safety. OpenAI said, in effect, that safety is central to its work and pointed to investments in frontier safety research, internal and external testing, and sharing of safety frameworks. Google DeepMind described a 'rigorous, science-led approach' and highlighted its Frontier Safety Framework for identifying and mitigating severe risks from advanced models.

The Future of Life Institute criticized several firms for weak or missing public commitments to existential safety. It said xAI and Meta lack monitoring-and-control commitments despite having risk-management frameworks, and that DeepSeek, Z.ai and Alibaba Cloud do not have publicly available existential-safety strategies. Meta, Z.ai, DeepSeek, Alibaba and Anthropic did not respond to requests for comment. xAI dismissed the index as 'Legacy Media Lies.'

Why This Matters

The index raises concerns about both immediate harms — such as people relying on chatbots for mental-health advice or AI being used in cyberattacks — and longer-term risks like AI-enabled biological threats, weaponization, or political destabilization. The report warns that, as companies race to build more capable systems, none has demonstrated a credible plan to prevent catastrophic misuse or loss of control.

Regulatory Context and Next Steps

Lawmakers and regulators are beginning to act. In California, SB 53 requires AI developers to disclose safety and security protocols and to report incidents such as cyberattacks; the bill was signed by Gov. Gavin Newsom in September. Experts say this is a positive step but argue that stronger, enforceable national and international standards are needed to ensure compliance and to mitigate systemic risk.

Rob Enderle, principal analyst at the Enderle Group, called the index 'an interesting way to approach' the problem of insufficient regulation but warned that poorly designed rules could do more harm than good if they lack enforcement mechanisms. The Future of Life Institute and other advocates urge binding safety standards, greater transparency, and independent oversight to reduce the chance of catastrophic outcomes.

Bottom line: The AI Safety Index highlights meaningful gaps in how major AI firms manage existential and other high-consequence risks. It recommends clearer commitments, public reporting, and binding standards to steer the industry toward safer deployment of increasingly powerful AI systems.
