ArXiv Bans Unreviewed CS Survey and Position Papers After Surge in AI-Generated Submissions
ArXiv announced on October 31 that review and position papers in its Computer Science category must have passed peer review to be accepted. The move responds to a surge of AI-generated survey papers — now arriving in the hundreds each month — that moderators say are often superficial. Authors must provide journal references and DOIs; workshop reviews are insufficient. The change has drawn debate over impacts on early-career researchers, the limits of AI-detection tools, and wider shifts in academic publishing.

ArXiv tightens rules for Computer Science review and position papers after AI 'flood'
ArXiv, the open research repository operated by Cornell University, announced on October 31 that it will no longer accept review articles or position papers in its Computer Science category unless those works have already passed peer review at a journal or conference.
The policy change responds to what moderators described as a “flood” of AI-generated survey papers — submissions volunteers say are often little more than annotated bibliographies. According to arXiv, the site now receives hundreds of such submissions each month, a sharp rise from the occasional, high-quality reviews historically posted by senior researchers.
“In the past few years, arXiv has been flooded with papers,” the announcement reads. “Generative AI/large language models have added to this flood by making papers—especially papers not introducing new research results—fast and easy to write.”
Thomas G. Dietterich, an arXiv moderator and former president of the Association for the Advancement of Artificial Intelligence, said the spike in LLM-assisted survey papers, combined with limited moderator resources, made it infeasible to distinguish strong reviews from weak ones.
ArXiv’s volunteer moderators have always filtered submissions for scholarly value and topical relevance, but they do not conduct peer review. Review and position pieces were never a formally accepted content type; moderators sometimes made exceptions for established researchers or society reports. That informal system became unsustainable as generative AI made it trivial to produce superficial survey papers at scale.
Under the new rule, authors who want to post a review or position paper in the Computer Science section must provide documentation of successful peer review, such as journal references and DOIs. Workshop reviews and informal feedback do not meet the requirement. ArXiv said the change is limited to Computer Science for now, though other categories could adopt similar measures if they face comparable surges in AI-generated submissions.
Reaction in the academic community has been mixed. Some researchers welcomed the policy as a way to protect quality and preserve moderators’ time. Others warned it could disproportionately affect early-career scholars and those at less well-resourced institutions. AI-safety researcher Stephen Casper argued that review and position papers are often written by junior researchers, people without access to large compute resources, and those outside institutions with extensive publishing experience.
Automated detection of AI-written text has been suggested as a safeguard, but current detectors remain imperfect and prone to false positives. Research highlights the scale of the issue: a Nature Human Behaviour study found nearly a quarter of computer-science abstracts showed signs of large-language-model modification by September 2024, while a Science Advances paper documented a sharp rise in AI use in 2024 after the launch of ChatGPT. Meanwhile, the American Association for Cancer Research reported that under 25% of authors disclosed AI use even where disclosure was required.
This decision is part of a broader reassessment across scholarly publishing. Major conferences, including CVPR 2025, have tightened screening and introduced desk-rejection policies for papers linked to irresponsible conduct. Publishers and reviewers are increasingly confronting papers that contain obvious AI artifacts — for example, boilerplate introductions that begin, verbatim, “Certainly, here is a possible introduction for your topic.”
ArXiv’s change aims to preserve the repository’s value as a place for substantive, reviewable scholarship while discouraging low-effort, AI-assisted submissions. How the policy will affect publishing patterns, junior researchers, and the development of reliable AI-detection tools remains to be seen.
