CRBC News

Fei-Fei Li Urges End to AI Hyperbole: Call for Clear, Evidence-Based Public Messaging

Fei-Fei Li criticized polarized AI rhetoric that alternates between doomsday scenarios and utopian promises, urging clear, evidence-based public communication. She emphasized that exaggerated claims mislead people outside the tech industry and hinder informed policy and public understanding. Prominent researchers including Andrew Ng and Yann LeCun have similarly called for more measured messaging about AI's capabilities and limits.


Fei-Fei Li, a Stanford computer science professor often called the 'Godmother of AI', criticized the polarized public conversation about artificial intelligence, saying it swings between apocalyptic 'machine overlord' scenarios and utopian promises of 'post-scarcity' that mislead rather than inform.

'I like to say I'm the most boring speaker in AI these days because precisely my disappointment is the hyperbole on both sides,' Li said during a talk at Stanford. 'The world's population, especially those who are not in Silicon Valley, need to hear the facts...'

Li — who helped create ImageNet and last year co-founded World Labs to build AI models that perceive, generate and interact with three-dimensional environments — urged technologists, journalists and policymakers to present measured, evidence-based explanations of what AI can and cannot do.

She warned that extreme rhetoric inflates fears or expectations and can misinform people who lack direct exposure to technical debates, distorting public understanding and policymaking, especially in communities outside Silicon Valley.

Other prominent AI figures have echoed similar cautions. Google Brain founder Andrew Ng said AGI has been overhyped, noting many human abilities remain difficult for current AI. Yann LeCun, Meta's former chief AI scientist, described large language models as 'astonishing' but limited, and recently left Meta after 12 years to start an AI-focused company.

Li's remarks form part of a broader movement among top researchers calling for sober, transparent public education about AI's capabilities, limits, and likely social impacts. Clear, factual messaging — not sensational headlines — will better equip the public to evaluate risks, benefits and policy choices.
