CRBC News
Technology

AI Research Under Strain: One Mentor Credited on 100+ Papers as Conferences Overflow
Kevin Zhu, a recent UC Berkeley graduate and founder of mentoring firm Algoverse, has been credited on more than 100 AI papers this year, prompting scrutiny over authorship and research quality. A surge in conference submissions (NeurIPS and ICLR each received tens of thousands of papers), combined with rapid review timetables, AI‑assisted drafting, and a flood of arXiv preprints, has strained peer review and raised concerns about the field's signal‑to‑noise ratio. Organisers and researchers are debating reforms to restore rigor.

Summary: A recent controversy over a single mentor credited on more than 100 AI papers this year has highlighted a wider problem: a surge of submissions to major machine‑learning conferences, overwhelmed review systems, and growing concerns about the quality and provenance of research.

Who Is Involved

Kevin Zhu, a recent computer‑science graduate of the University of California, Berkeley, has been credited with authoring or supervising an unusually large number of AI papers this year. Zhu initially said he authored 113 papers, with 89 scheduled for presentation at a major conference, and later told reporters he supervised 131 team projects run through his mentoring company, Algoverse.

What Algoverse Does

Algoverse runs selective 12‑week mentoring programs for high‑school and undergraduate students. The organisation charges $3,325 for its flagship course, which helps participants develop research projects, prepare manuscripts, and submit to conferences. Zhu says his role typically includes reviewing methodology and experimental design and commenting on full drafts; he also says projects include mentors or principal investigators with domain expertise.

Concerns From Academics

“The papers he has put out are a disaster,” said Hany Farid, a Berkeley computer‑science professor. “I’m fairly convinced that the whole thing, top to bottom, is just vibe coding.”

Farid’s LinkedIn post triggered broader discussion among researchers about a flood of low‑quality submissions in AI, and whether academic incentives and the use of AI tools are driving quantity over quality.

Conference Pressure and Review Strain

Major conferences have seen explosive growth: NeurIPS received 21,575 submissions this year, up from under 10,000 in 2020. ICLR reported nearly 20,000 submissions for its 2026 meeting, a roughly 70% increase from the previous year. Reviewers, often PhD students working under tight deadlines, must evaluate dozens of papers with limited time for revision — a setup many say degrades review quality.

AI Tools, arXiv, and the Signal‑to‑Noise Problem

Authors and teams sometimes use language models and other productivity tools for editing and drafting, which has raised questions about authorship and rigor. Conferences such as ICLR have experimented with AI‑assisted reviewing, but reports indicate this can produce hallucinated citations and confusing feedback. At the same time, companies and groups increasingly post unreviewed papers to arXiv, complicating efforts to separate vetted science from preliminary or low‑quality work.

Consequences and Responses

Some senior researchers now discourage students from pursuing AI research careers because the field’s publication frenzy can reward rapid, low‑quality output over careful inquiry. Conference organisers acknowledge strain on their review systems and have begun exploring reforms — but there is no consensus yet on the best fixes.

Why It Matters

While many landmark advances have emerged from conference circuits (for example, Google’s transformer paper presented at NeurIPS in 2017), the current overload risks obscuring important work and makes it harder for journalists, policymakers, and even experts to assess the state of the field.

“You have no chance, no chance as an average reader to try to understand what is going on in the scientific literature. Your signal‑to‑noise ratio is basically one,” Farid said.