Evaluating AI Safety: How Top Models Score on Risk, Harms and Governance [Infographic]

Social Media Today, an Industry Dive publication

The Future of Life Institute reviewed the safety practices of major AI systems, including Meta AI, OpenAI’s ChatGPT and xAI’s Grok, after reports that Grok produced illicit content. The assessment measured six areas: risk assessment, current harms, safety frameworks, existential safety, governance and information sharing. Each project received an overall safety score summarizing how well its developer manages development and deployment risks. Visual Capitalist converted the findings into an infographic for easy comparison as U.S. policymakers debate AI regulation.

Recent reports that xAI’s Grok produced large amounts of illicit material — including some instances involving minors — make a refreshed look at AI safety practices especially timely.

The Future of Life Institute conducted a comparative safety assessment of several widely used AI systems, including Meta AI, OpenAI’s ChatGPT, xAI’s Grok and others. The review evaluated each project across six core dimensions to produce an overall safety score reflecting how well developers manage development and deployment risks.

What the Review Examined

  • Risk Assessment – Efforts to prevent manipulation, misuse, or harmful behavior from the model.
  • Current Harms – Real-world issues such as data breaches, privacy gaps, and failures in content moderation or watermarking.
  • Safety Frameworks – Formal processes and tooling teams use to identify, test, and mitigate risks during development and deployment.
  • Existential Safety – Monitoring for unexpected emergent behaviors or runaway algorithmic changes that could create systemic risk.
  • Governance – Company positions, lobbying, and participation in public policy discussions related to AI regulation and oversight.
  • Information Sharing – Transparency about model design, training data practices, red-teaming results, and external audits.

Using those six dimensions, the report assigned each AI project an overall safety score, a single indicator intended to summarize how effectively each organization manages development and deployment risk. Visual Capitalist translated the Institute’s findings into an accessible infographic that makes the comparative safety ratings easy to scan across platforms.
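The article does not spell out how the six dimension scores are combined into the overall rating, so the short Python sketch below is only an illustration of that kind of roll-up. It assumes equal weighting of the six dimensions on a 0-100 scale; the dimension names come from the list above, while the function name, weighting and example numbers are hypothetical, not the Institute's actual methodology.

```python
# Illustrative only: combine hypothetical per-dimension scores into a
# single overall safety score by simple (equal-weight) averaging.
# The equal weighting and 0-100 scale are assumptions, not the
# Future of Life Institute's published method.
from statistics import mean

DIMENSIONS = [
    "Risk Assessment",
    "Current Harms",
    "Safety Frameworks",
    "Existential Safety",
    "Governance",
    "Information Sharing",
]

def overall_score(dimension_scores: dict[str, float]) -> float:
    """Average the six dimension scores (0-100 assumed) into one number."""
    missing = [d for d in DIMENSIONS if d not in dimension_scores]
    if missing:
        raise ValueError(f"Missing dimension scores: {missing}")
    return mean(dimension_scores[d] for d in DIMENSIONS)

# Example with made-up numbers for a hypothetical AI project:
example = {
    "Risk Assessment": 70,
    "Current Harms": 55,
    "Safety Frameworks": 60,
    "Existential Safety": 40,
    "Governance": 65,
    "Information Sharing": 50,
}
print(f"Overall safety score: {overall_score(example):.1f}")
```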

This snapshot is particularly relevant as policymakers in the U.S. — including discussions at the White House level — weigh whether to ease constraints that some say could slow AI innovation. The assessment and its infographic offer a concise way to compare where major players stand on safety, transparency and governance as the industry and regulators continue to adapt.

Want to dig deeper? Review the full Future of Life Institute report and Visual Capitalist’s infographic for project-by-project scores and detailed notes on strengths and gaps.
