Who Is Zico Kolter? The Carnegie Mellon Professor Leading OpenAI’s Safety Panel With Power to Block Risky AI Releases

Zico Kolter, a 42-year-old Carnegie Mellon professor, chairs OpenAI’s four-person Safety and Security Committee, which can delay or block AI releases it deems unsafe. Recent agreements with California and Delaware officials made his oversight a central condition of OpenAI’s reorganization into a public benefit corporation. Kolter will sit on the nonprofit board, hold full observation rights at for-profit board meetings, and lead reviews covering cybersecurity, model security, misuse (including bioweapon risks), and mental-health impacts. Advocates are cautiously optimistic but say the committee’s real influence depends on whether its authority is matched by resources and enforcement.

Who is Zico Kolter?

Zico Kolter, a 42-year-old professor and director of the Machine Learning Department at Carnegie Mellon University, now chairs OpenAI’s four-person Safety and Security Committee — a body with the authority to delay or block the company from releasing AI systems it deems unsafe.

Why his role matters

OpenAI appointed Kolter as chair more than a year ago, but his position gained new prominence after recent agreements with California and Delaware regulators. Those agreements made independent safety oversight a central condition of OpenAI’s plan to reorganize into a public benefit corporation that can attract outside capital. Under the terms, Kolter will sit on the nonprofit board, hold "full observation rights" to attend for-profit board meetings, and receive access to information about safety decisions, powers spelled out in a memorandum of understanding with California Attorney General Rob Bonta.

What the committee can do

Kolter leads a four-member committee that can request delays of model releases until required mitigations are met and, if necessary, halt releases it finds unsafe. The other committee members also serve on OpenAI’s board; among them is Paul Nakasone, the retired U.S. Army general who formerly commanded U.S. Cyber Command. Sam Altman stepped down from the safety panel last year, a move widely interpreted as strengthening the committee’s independence.

“Very much we’re not just talking about existential concerns here. We’re talking about the entire swath of safety and security issues and critical topics that come up when we start talking about these very widely used AI systems.” — Zico Kolter, in an interview with The Associated Press

Key risks the panel will weigh

Kolter said the committee will evaluate a broad range of concerns, including:

  • Cybersecurity risks, such as whether an AI agent exposed to malicious inputs could exfiltrate data
  • Model-security issues, such as vulnerabilities tied to model weights or parameters
  • Misuse potential, such as whether models could enable the design of biological threats or sophisticated cyberattacks
  • Human impact, such as harms to mental health or other negative effects of interactions with chatbots

He declined to say whether the committee has ever paused or altered a release, citing the confidentiality of its deliberations.

Context and scrutiny

OpenAI has emphasized safety since its nonprofit beginnings a decade ago, but its rapid commercialization after ChatGPT’s launch drew criticism that the company prioritized speed over safety. Internal tensions — including the temporary ouster of CEO Sam Altman in 2023 — intensified debate about whether OpenAI had drifted from its founding mission. The company also faced external legal challenges from co-founder Elon Musk and others as it pursued a more traditional for-profit structure.

This year, OpenAI has faced heightened scrutiny, including a wrongful-death lawsuit from California parents who say their teenage son killed himself after prolonged interactions with ChatGPT.

Kolter’s background and perspective

Kolter began studying machine learning as a freshman at Georgetown University in the early 2000s, when the field was still niche. He has followed OpenAI’s evolution closely — he attended its launch party at an AI conference in 2015 — and says even experts were surprised by the rapid growth in capabilities and attendant risks.

AI safety advocates are watching the reorganization and Kolter’s role closely. Some, like Nathan Calvin of the nonprofit Encode, say they are "cautiously optimistic" but emphasize that the committee’s effectiveness will depend on whether it is resourced and empowered to act rather than remaining a set of formal commitments on paper.

Bottom line

Kolter’s appointment and the regulators’ conditions place an independent safety checkpoint at a critical moment in OpenAI’s evolution. Whether the committee’s authority over releases is backed by staff, resources, and concrete enforcement will determine whether those commitments deliver meaningful protection across the spectrum of AI risks.