AI Can Accelerate Scientific Discovery — But It Can't Replace Human Scientists

AI tools trained on scientific datasets can accelerate parts of research by analyzing massive amounts of data and surfacing patterns. However, systems like AlphaFold—which helped win a 2024 Nobel Prize—depend on decades of human-generated data and expertise rather than producing knowledge independently. Philosophers such as Emily Sullivan stress that AI must remain empirically anchored to existing science. Large initiatives like the Genesis Mission could help if scientists retain central roles; attempts to fully automate science risk undermining the social and methodological foundations of reliable discovery.

As artificial intelligence becomes embedded across disciplines, researchers and policymakers are increasingly training AI systems on scientific datasets to generate hypotheses and propose experiments. These tools can transform parts of the research workflow, but they are not a substitute for the human judgment, creativity and community practices that make science reliable.
Genesis Mission and Early Results
On Nov. 24, 2025, the Trump administration issued an executive order launching the Genesis Mission, a program intended to train AI agents on federal scientific datasets to "test new hypotheses, automate research workflows, and accelerate scientific breakthroughs." Early outcomes from these so-called "AI scientists" have been mixed: AI can parse massive datasets and reveal subtle correlations beyond human scale, yet it often lacks commonsense reasoning and can produce impractical or misguided experimental recommendations.
Why AI Alone Cannot Do Science
There are several conceptual and practical reasons AI is unlikely to fully replace human scientists. First, AI models do not learn directly from the real world; humans design the digital representations and choose the training data that embed scientific knowledge and constraints. The quality and scope of those datasets shape the model's outputs. Without expert curation, AI cannot generate reliable, novel scientific insights.
Take AlphaFold as an example. Its developers shared the 2024 Nobel Prize in Chemistry for their work on computational protein structure prediction. AlphaFold greatly accelerated structure prediction and thereby aided drug design and biological research, but it did so by learning from decades of human-generated protein data and methods. It did not independently discover new biological principles; rather, it made analysis of existing knowledge far more efficient.
As philosopher Emily Sullivan emphasizes, effective scientific AI must retain a strong empirical link to established knowledge: models' predictions must be grounded in what researchers already know about the natural world.
The Social, Creative Nature of Science
Science is fundamentally a social and creative enterprise. Discoveries typically emerge through intergenerational collaboration, debate, and methodological refinement. Historical cases—like the development and eventual confirmation of the double-helix model of DNA—show that breakthroughs often require technological advances, collective effort and sustained theoretical work over decades.
Scientists are not passive recorders of facts. They develop conceptual frameworks, design experiments with tacit expertise, adjudicate conflicting results, and assess the broader significance of findings in light of values and social context. These human practices are not readily reducible to dataset-driven pattern recognition.
How AI Can Help — And What To Avoid
AI's computational power is valuable when applied as a well-governed tool. With active participation from scientists, projects like the Genesis Mission could automate routine tasks, synthesize prior literature, propose candidate experiments informed by existing evidence, and speed data analysis. But deploying AI with the explicit aim of replacing scientists risks producing a hollow imitation of scientific practice—one that lacks the normative checks, creativity and communal validation that confer authority on scientific knowledge.
Bottom line: AI can accelerate many aspects of research, but it is best viewed as a powerful supplement to human expertise rather than a substitute for the human judgment, values and social processes that produce reliable science.