
Experts Say AI Research Is Being Flooded With Low-Quality, AI-Aided Papers

Experts warn that a flood of low-quality AI papers, many aided by language models, is drowning out rigorous research and overwhelming peer review. UC Berkeley's Hany Farid calls the field a "frenzy" and cites one author who claims participation in 113 papers this year, 89 of which appear at NeurIPS. Rapidly rising submissions, hallucinated citations, and tactics to game automated reviewers are straining academic standards and risking the integrity of AI research.

Academic AI research is facing a credibility challenge as a surge of low-quality, AI-assisted papers overwhelms conferences and journals. Critics warn that the field’s rapid growth, combined with automation tools, is making it harder for careful, original work to be noticed.

What’s happening

UC Berkeley computer science professor Hany Farid described the situation to The Guardian as a “frenzy,” saying he now advises students to think twice before entering the field. Farid expressed frustration that the volume of submissions makes it nearly impossible for readers and reviewers to identify high-quality contributions.

“So many young people want to get into AI. It’s just a mess. You can’t keep up, you can’t publish, you can’t do good work, you can’t be thoughtful.” — Hany Farid

Case in point

One controversial figure is Kevin Zhu, who claims to have participated in 113 AI papers this year. According to reporting in The Guardian, 89 of those papers are listed at the major conference NeurIPS this week. Farid called the output “a disaster” and said it would be impossible for one person to meaningfully contribute to that many technical papers in a single year.

Zhu runs a program called Algoverse, an online 12-week course for high school and college students that encourages participants to submit work to AI conferences. The program reportedly charges $3,325 per participant and lists many students as coauthors on Zhu’s papers.

Why reviewers are struggling

Submissions to major AI conferences have exploded: NeurIPS received fewer than 10,000 submissions in 2020 and more than 21,500 this year. That increase has pushed conference organizers to enlist PhD students to help with reviewing, stretching peer review resources thin.

The widespread use of large language models and other generative tools compounds the problem. Language models can hallucinate citations to sources that do not exist and produce misleading figures. Reports of bizarre AI-generated images and of attempts to game automated reviewer systems have raised fresh concerns about oversight.

“You have no chance, no chance as an average reader to try to understand what is going on in the scientific literature. Your signal-to-noise ratio is basically one.” — Hany Farid

Consequences and open questions

Observers fear that prolific but questionable outputs could drown out genuine discovery and make it harder for early-career researchers to build authentic reputations. The community now faces difficult decisions about authorship norms, acceptable use of AI tools, reviewer capacity, and safeguards to protect scientific integrity.

As AI tools become more powerful and more embedded in research workflows, establishing clearer standards for transparency, citation verification, and review processes will be crucial to preserving trust in the field.
