The article contrasts two forms of existential warning: the Bulletin of the Atomic Scientists’ Doomsday Clock, a symbolic, independent alarm now set at 85 seconds to midnight, and Anthropic CEO Dario Amodei’s long, insider essay on AI risk. The Bulletin retains moral authority but lacks executive power; Amodei has direct influence but faces structural conflicts of interest. The piece argues neither model suffices alone and calls for new institutions that combine independent oversight with enforceable governance.
Who Should We Trust About Doomsday? The Watchmen Outside Versus The Priests Inside

Not everyone wants to rule the world, but lately it can feel as though everyone wants to sound the alarm that civilization is approaching an inflection point.
Two recent interventions capture that anxiety: the Bulletin of the Atomic Scientists’ annual adjustment of the Doomsday Clock, now set at 85 seconds to midnight, and Anthropic CEO Dario Amodei’s 19,000-word essay, “The Adolescence of Technology.” The Bulletin cited a cascade of existential risks—including nuclear modernization, climate change, biosecurity gaps, and emerging technologies—while Amodei warned that humanity is being handed “almost unimaginable power” without clear social or political maturity to wield it.
Different Voices, Different Authority
The Doomsday Clock is a powerful symbol with roots in the immediate postwar era. Created in 1947 by scientists who had helped build the atomic bomb, the Bulletin of the Atomic Scientists carried unique moral authority: experts who had seen the destructive power of nuclear weapons urging the public and policymakers to act.
That moral clarity came from independence. The Bulletin’s scientists were often outsiders to the institutions that controlled the weapons they criticized. That independence lent credibility but left the Bulletin without executive power: its role is to warn and persuade, not to implement or enforce policy.
Inside the Temple
Dario Amodei’s voice is different because of where he speaks from. As CEO of Anthropic—one of the firms driving advanced AI—he’s not just diagnosing risks; his company’s choices help set the technology’s trajectory. That insider position gives him influence that the Bulletin lacks.
“Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political and technological systems possess the maturity to wield it.” — Dario Amodei
But influence brings an unavoidable conflict of interest. Amodei argues that stopping or dramatically slowing AI development is “fundamentally untenable” because others would proceed anyway. That may be a pragmatic argument for staying in the race to build safer systems, but it also aligns with the commercial incentives of the firms that would benefit from continued development.
When Warnings Multiply
The Doomsday Clock has widened its scope since the Cold War ended, folding climate change, biosecurity, erosion of public health infrastructure, and novel scientific risks into its calculus. Each threat is real, yet bundling many risks under a single metric can blur the precision the Clock once symbolized. The result is a powerful emblem whose specificity has been diluted by an increasingly crowded landscape of warnings.
By contrast, industry leaders like Amodei offer detailed, technically informed accounts of risk—but they speak from within the systems that can accelerate the very threats they describe. That creates a dilemma: who should we trust when the most informed voices are also the most invested?
What Comes Next?
Neither model is sufficient on its own. Independent scientific critique provides moral clarity and public pressure; insider leadership carries the practical ability to shape technologies and deployment decisions. The urgent task is to design institutions and governance mechanisms that combine both strengths: technical expertise that is transparent and accountable, together with the authority and incentives to enforce safety standards globally.
We used to trust scientists to step outside state power and sound the alarm. Today many of the people best placed to assess emerging risks are embedded in private institutions with enormous power and incentives. Recognizing that reality means rethinking how we create checks, balances, and public oversight that can keep up with technologies that may reshape civilization.
Disclosure: This piece is based on public reporting and commentary. (Original reporting referenced organizations and essays in the public domain.)