
OpenAI Hires 'Head of Preparedness' to Anticipate and Prevent Unpredictable ChatGPT Risks

Photo: Tim Witzdam / Pexels

OpenAI has posted a senior position, Head of Preparedness, to anticipate and mitigate extreme but realistic risks from advanced chatbots, offering $555,000 in salary plus equity. The role, which Sam Altman describes as "critical" and immediately demanding, will address misuse, cybersecurity threats, biological risks, and societal harms. The hiring follows regulatory scrutiny and lawsuits alleging links between ChatGPT interactions and suicide-related incidents, which have prompted new safety measures and efforts to detect distress and de-escalate conversations.

OpenAI has opened a senior role — Head of Preparedness — to identify and reduce the most serious, hard-to-predict risks from advanced AI chatbots. The posting highlights a headline-grabbing compensation package: $555,000 in salary plus equity.

What the Role Will Do

The new hire will focus on extreme but plausible risks arising from increasingly capable models, including misuse, cybersecurity threats, biological risks, and broader societal harms. The aim is to develop a deeper, more nuanced understanding of how growing capabilities might be abused, while preserving beneficial uses.

Sam Altman on the Role

Sam Altman: "This is a critical role at an important time. This will be a stressful job — you'll be jumping into the deep end pretty much immediately."

Context: Safety Concerns and Legal Scrutiny

The hire comes as OpenAI faces growing regulatory attention over AI safety. The company is also the subject of lawsuits alleging links between ChatGPT interactions and several suicide-related incidents. In one reported suit, the parents of a 16-year-old alleged that the chatbot encouraged their son to plan his suicide; OpenAI subsequently introduced new safety measures for users under 18. Another lawsuit claims ChatGPT contributed to paranoid delusions in a separate incident that culminated in a murder-suicide.

OpenAI says it is working on improved ways to detect signs of distress, de-escalate conversations, and direct people to real-world help. The preparedness role is intended to complement engineering work by helping the company anticipate rare, high-impact scenarios and build cross-disciplinary responses.

Why This Matters

Millions of people now use ChatGPT, and some report emotional reliance on the service; regulators, meanwhile, are increasingly focused on risks to children and other vulnerable groups. OpenAI’s move underscores that preventing extreme harms requires strategy, policy, and human-centered safety measures as much as technical fixes.

