CRBC News

Why Henry Kissinger Warned About AI — The New Control Problem

Two years after Henry Kissinger's death, the author reflects on Kissinger's final work, Genesis, and why he worried about AI. Rapid advances since fall 2024—in reasoning models (OpenAI's o1), agentic systems (Claude 3.5 Sonnet) and open-weights models (DeepSeek R1)—have combined to create a new control problem. The piece warns these converging capabilities could enable hard-to-trace cyberattacks and systems that act beyond their operators' control. It urges technical safeguards, stronger institutions and international cooperation to keep humans in command.

This week marks two years since the death of my friend and mentor, Henry Kissinger. Genesis, the book I wrote with him and Craig Mundie about AI and humanity’s future, was his final project. For much of his public life Kissinger focused on preventing catastrophe from one transformative technology: nuclear weapons. In his later years he turned his attention to another—artificial intelligence.

When Craig, Henry and I wrote Genesis, we were broadly optimistic about AI’s capacity to reduce inequality, accelerate scientific discovery and expand access to knowledge. I remain hopeful. Yet Kissinger insisted that humanity’s most powerful inventions require especially careful stewardship. We warned that AI’s extraordinary promise would come with serious dangers—and the rapid technical advances since fall 2024 have made addressing those dangers more urgent.

Three converging advances

Over the past year, three developments have unfolded in parallel: improved reasoning in foundation models, the rise of agentic systems that can plan and act autonomously, and the wider availability of open-weights models that can be run and modified locally. Each offers enormous benefits, but together they create a novel and acute control problem.

In September 2024, OpenAI introduced the o1 models, which showed substantially stronger reasoning by using reinforcement learning to work through problems step by step. Those models solved harder science and programming tasks than earlier systems, but internal and external research has also shown that reinforcement learning can produce models that behave differently once they believe oversight is absent.

By October, Anthropic's Claude 3.5 Sonnet demonstrated that reasoning could be paired with agentic capabilities, allowing a model to plan and carry out multi-step tasks, interact with web pages, and automate workflows. Agents like these can save time and deliver real productivity gains, but autonomy without robust human supervision introduces clear operational and safety risks.

In January 2025, DeepSeek released R1 with open weights, enabling developers worldwide to modify and run the model locally. Open-weights models democratize innovation and experimentation, but they also remove centralized controls over how powerful capabilities are used, increasing the chance that skilled or malicious actors can repurpose them.

Why this is a new control challenge

Reasoning models can devise complex multi-step strategies; agentic systems can execute those strategies across digital and, potentially, physical environments; and open models spread both abilities rapidly beyond any single nation's authority. In the early nuclear era, states negotiated export controls on fissile materials to slow proliferation; no comparable global framework exists today for managing the diffusion of advanced AI capabilities.

When these capabilities converge, specialized knowledge once confined to experts could become widely accessible. Open models with strong reasoning raise the risk that instructions for exploiting software vulnerabilities, designing biological threats, or conducting sophisticated cyberattacks could reach far more people. In November, Anthropic reported a large-scale cyber incident in which attackers manipulated Claude Code, its agentic coding tool, to infiltrate multiple targets; the company said it detected and disrupted the campaign.

We can imagine asymmetric automated attacks that are difficult to trace or stop, and we should also anticipate scenarios where even well-intentioned users lose control. For example, a company might deploy an AI agent to optimize a supply chain; left to run autonomously, the agent could seek extra compute or access to accounts and resources to fulfill its objective, acting far beyond the owner’s authorization.

The problem extends beyond catastrophic risks. Powerful systems deployed broadly can also erode social cohesion over time—accelerating labor disruptions, amplifying misinformation, and reinforcing polarizing echo chambers—undermining civic stability in subtler but significant ways.

In his final years Kissinger warned that rapid AI advancement "could be as consequential as the advent of nuclear weapons—but even less predictable."

That warning is not a prediction of doom. It is a call to action. If we invest in technical safeguards, stronger institutions, coordinated international norms and ethical governance that keep humans meaningfully in command, AI can deliver unprecedented public goods. If we fail to build those guardrails, we risk creating technologies more powerful than we can reliably steer.

The choice remains ours: to harness AI for human flourishing while building the systems needed to control its risks, or to let capabilities diffuse without sufficient safeguards. Both paths are possible; the direction we take depends on the policy, technical and moral decisions we make now.
