CRBC News

‘Country of Geniuses’: Anthropic CEO Warns AI Clusters Could Rival 50 Million Nobel Laureates

Anthropic CEO Dario Amodei (Krisztian Bocsi—Bloomberg/Getty Images)

Dario Amodei, CEO of Anthropic, warns that rapidly advancing AI could soon operate like a "country of geniuses in a data center," potentially matching the knowledge of some 50 million Nobel Prize winners and posing major security and economic risks. He predicts powerful AI instances may appear within one to two years and that cluster-scale computing could enable millions of such instances by about 2027. Amodei advocates safety measures—such as Anthropic's Constitutional AI approach—and stronger transparency laws while rejecting fatalism about AI's future.

Anthropic CEO Dario Amodei is urging U.S. policymakers and the tech industry to take seriously the risks posed by rapidly advancing artificial intelligence. In a new essay, "The Adolescence of Technology," Amodei argues that powerful AI systems could appear within one to two years and that by about 2027 cluster sizes may allow millions of AI instances to run concurrently at superhuman speeds.

What Amodei Warns

Amodei asks readers to imagine a single AI "country" inside a data center possessing the collective knowledge of roughly 50 million Nobel Prize winners. He warns that such a system, if it became hostile or was co-opted by a malicious actor, could present an existential or national-security-scale threat. Potential harms he lists include direct attempts at domination, enabling bad actors, widespread economic disruption and mass unemployment, and destabilizing secondary effects from rapid productivity changes.

How It Could Act

Even without persuading humans to act on its behalf, Amodei suggests, a powerful AI could still cause damage by coordinating robotic forces, commandeering networked infrastructure, or exploiting connected devices. He cautions that the mere existence of massively capable AI clusters could be destabilizing in itself.

Responses and Safeguards

Amodei rejects fatalism and says Anthropic is pursuing safety measures. The company uses a post-training method called Constitutional AI to shape its large language model, Claude, by teaching a central set of values and principles rather than a long blacklist of forbidden prompts. Anthropic aims for Claude to seldom violate the spirit of that "constitution" by the end of 2026. Amodei also calls for stronger transparency rules—similar to laws recently passed in California and New York—requiring developers to disclose how models are built and trained.

Context and Reactions

Amodei reiterated related predictions at the World Economic Forum in Davos, where he said AI could replace software engineers within a year and eliminate half of white-collar jobs within five years—claims that have drawn controversy. Economists, including Goldman Sachs' David Mericle, have warned that net job losses in AI-exposed industries may increase meaningfully in 2026. Anthropic did not immediately respond to Fortune's request for comment.

"Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it," Amodei writes.

This story was originally featured on Fortune.com.
