
AI Insiders Worried: Researchers Focus On Distant AGI Risks While Present Harms Grow

Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

At NeurIPS in San Diego, many leading AI researchers voiced deep anxiety about the field's future. Writing in The Atlantic, Alex Reisner reports a recurring pattern: influential figures often emphasize speculative AGI dangers while paying less attention to urgent harms such as deepfakes, chatbot-related mental-health issues, and cultural disruption. Sociologist Zeynep Tufekci and others urge a rebalancing toward immediate, measurable risks alongside long-term safety work.

Engineers and researchers who helped build today’s powerful AI systems are increasingly uneasy about the technology’s trajectory. At NeurIPS, the leading AI research conference held this year at the San Diego Convention Center, many attendees voiced apocalyptic concerns — but a new account in The Atlantic suggests those concerns often prioritize distant, speculative threats over immediate, measurable harms.

Grand Fears, Overlooked Harms

Alex Reisner reports that conversations at NeurIPS frequently revolved around hypothetical artificial general intelligence (AGI) scenarios, meaning systems with humanlike general reasoning, while problems already unfolding received less attention. Prominent researchers, he observed, often framed risks in sweeping terms, sometimes at the expense of concrete issues such as deepfakes, chatbot-related mental-health impacts, and damage to arts and culture.

“Many AI developers are thinking about the technology’s most tangible problems while public conversations about AI — including those among the most prominent developers themselves — are dominated by imagined ones,” Reisner wrote.

Voices From The Conference

Yoshua Bengio, a University of Montreal researcher and one of the so-called "godfathers" of modern AI, has publicly urged safeguards and founded the nonprofit LawZero to promote safer development. Reisner recalls Bengio warning that powerful AI systems could deceive their creators and be misused to sway public opinion for political advantage; yet, according to Reisner, Bengio gave less emphasis to how deepfakes and other current technologies are already reshaping discourse and livelihoods.

Sociologist Zeynep Tufekci, in a keynote titled "Are We Having the Wrong Nightmares About AI?", argued that focusing on AGI risks can distract from immediate problems like chatbot addiction, misinformation, and labor disruption. When challenged that these concerns were already well known, Tufekci said she still seldom sees them take center stage in technical and policy conversations, which tend to oscillate between mass-unemployment and human-extinction scenarios.

Industry Leaders Amplify Apocalyptic Rhetoric

Apocalyptic rhetoric is also common outside academia. OpenAI CEO Sam Altman has warned that AI could eliminate entire categories of jobs and enable widespread identity fraud, and he has described taking contingency steps for extreme-risk scenarios. Geoffrey Hinton, who shared the 2018 Turing Award with Bengio and Yann LeCun, has compared his role in popularizing deep learning to Oppenheimer's role in developing nuclear weapons, expressing public regret and raising alarms about job displacement and the militarization of AI.

Why This Matters

Reisner notes an ironic historical echo in the conference's name: NeurIPS (Neural Information Processing Systems) dates to an era when researchers equated simplified computer models with biological neurons. Today, he argues, much of AI culture remains captivated by the idea that a computer can be a mind, and science-fiction framing often shapes public and technical discourse more than sober, evidence-based analysis.

Conclusion: The debate at NeurIPS, and in the broader AI community, reflects a split between those emphasizing speculative, existential scenarios and those calling for urgent attention to present, measurable harms. Many experts now advocate a more balanced approach: continuing long-term risk assessment while mitigating the real-world harms already emerging from current AI systems.
