CRBC News

OpenAI Is Hiring a 'Head of Preparedness' — $555K+ to Lead AI Risk Forecasting and Mitigation

(Credit: Picture Alliance/Getty Images)

OpenAI has created a new senior role, Head of Preparedness, to forecast and mitigate the greatest risks from advanced AI. Announced by Sam Altman on X, the position pays about $555,000 per year plus equity and will oversee capability evaluations, threat models, and mitigation plans for high-risk domains such as cybersecurity and biosecurity. The move comes amid growing concern about mental-health impacts, AI-driven cyberattacks, and “AI psychosis,” and follows broader efforts to develop safety benchmarks for chatbots.

OpenAI has created a new executive role, Head of Preparedness, to anticipate and mitigate the most serious risks posed by future AI systems. The company says the hire will build a scalable safety pipeline and oversee plans for high-risk areas as its models grow more capable.

Role And Responsibilities

The Head of Preparedness will design and run capability evaluations, develop threat models, and craft mitigation plans for OpenAI's most advanced models. That work is intended to form a repeatable safety process that can be applied across domains. The role explicitly covers high-risk areas such as cybersecurity and biosecurity and will include building frameworks to track and prepare for emerging capabilities that could cause serious harm.

Announcement And Compensation

Sam Altman announced the position on X, noting it will pay approximately $555,000 per year plus equity and warning the job will be “stressful.” The listing highlights concerns ranging from mental-health impacts and AI-driven cyberattacks to risks posed by systems that can improve themselves.

Context And Concerns

The hire arrives amid rising scrutiny of how AI affects people and society. Lawsuits have alleged that some AI systems contributed to teenage self-harm, and researchers and regulators have raised alarms about so-called “AI psychosis,” where chatbots reinforce users’ hallucinations or delusions. Security experts also warn that advanced models could be misused to automate sophisticated cyberattacks or to generate biological threats if safeguards are insufficient.

Broader Industry Response

Researchers are increasingly developing safety benchmarks and adversarial tests to evaluate chatbot behavior and to reduce harm. OpenAI’s new senior role reflects a wider push across the AI industry to invest in governance, evaluation, and mitigation strategies as capabilities accelerate.

"This is an important time," Altman wrote in the job announcement — a signal that OpenAI seeks to centralize responsibility for anticipating and managing potentially catastrophic risks.

As AI systems advance, roles like Head of Preparedness aim to bridge technical evaluation, policy planning, and real-world mitigation so institutions can respond faster and more reliably to emerging threats.
