Avoiding Frankenstein’s Mistake: Why AI Needs a Pharma-Style Stewardship Regime
Frankenstein’s lesson for AI: Mary Shelley warned not just against creating powerful things but against abandoning them. Modern AI models often produce convincing falsehoods, or hallucinations, because they are trained to sound plausible rather than to verify facts; those errors can become permanent if they enter legal records, health sites, or training data. To reduce harm without killing innovation, the authors recommend a pharma-style framework: documented training provenance, structured pre-deployment testing for high-stakes uses, and continuous post-market surveillance with graduated oversight.
A modern warning from Mary Shelley: steward what you create
Audiences know the outline of Frankenstein: a scientist creates life and then abandons it. But Mary Shelley’s sharper warning is often missed. The danger is not creation itself but abandonment — the failure to take responsibility for powerful technologies after they enter the world. Today's Victors often work in artificial intelligence.
Worrying examples are already surfacing in institutions we rely on. A California appeals court fined an attorney $10,000 after finding that 21 of 23 case citations in a brief were fabrications produced by AI. Similar incidents have proliferated across the United States, rising from a few cases a month to several a day. This summer, a Georgia appeals court vacated a divorce ruling when it discovered that 11 of 15 citations were inventions of an AI system. How many false citations remain undetected, silently polluting the legal record?
The new veracity problem
For decades, computer systems were predictably correct: a pocket calculator reliably returns the right arithmetic answer, and engineers could forecast how an algorithm would behave. Modern large AI models upend that expectation. A recent study published in Science confirms what many AI specialists have warned for years: these systems commonly produce plausible falsehoods, known in the industry as hallucinations. Trained to predict what looks and sounds likely rather than to verify truth, they often supply confident answers even when that confidence is unjustified. Their training regimes favor fluency and confidence over expressions of uncertainty; as one researcher put it, fixing that tendency could “kill the product.”
Large models learn by extracting tangled patterns from enormous training corpora — patterns so numerous and interdependent that even their creators cannot fully predict every output. We are therefore forced to observe model behavior in the wild, sometimes only after harm has occurred.
Cascading, permanent harms
Unpredictability breeds cascading consequences. Errors do not simply vanish; they fossilize. A fabricated legal citation that slips into a database becomes precedent. False medical guidance seeded online is copied across health portals. AI-generated news spreads on social platforms. Worse, synthetic content can be scraped and reincorporated into future model training data, turning today’s hallucinations into tomorrow’s false facts.
A proven template: pharmaceutical regulation
We do not need to invent an approach from scratch. Pharmaceutical regulation offers a durable template for technologies whose full effects cannot be predicted in advance. Drug developers cannot foresee every biological interaction, so they test extensively; most candidates fail before reaching patients. Even approved medicines can produce unforeseen problems in the real world, which is why continuous post-market surveillance is mandatory. AI needs a comparable regulatory framework.
Three pillars of responsible AI stewardship
1) Prescribed training standards. Like drug manufacturers who control ingredients and document production, AI builders should document provenance of training data, monitor for contamination by synthetic content, prohibit certain harmful categories of material, and run systematic bias tests across demographic groups. Current disclosure by many AI firms falls far short of these expectations.
2) Structured pre-deployment testing. Just as staged clinical trials, including randomized controlled studies, are designed to expose safety problems before drugs reach patients, AI systems used in high-stakes domains such as legal research, medical advice and financial management should undergo standardized testing to document error rates and define safety thresholds. Many failures are subtle and appear only under realistic conditions; testing catches them before widespread deployment.
3) Continuous surveillance after deployment. Drug makers are required to track adverse events and report them to regulators, who can mandate warnings, restrictions or withdrawals. AI platforms need parallel post-market oversight so harms can be detected, traced and remediated promptly.
Regulation without smothering innovation
Regulation is essential because AI differs fundamentally from traditional tools. A hammer does not claim expertise; AI systems often project authority through fluent language while blending fact and fiction. Without rules, firms optimizing for engagement will trade accuracy for growth. But heavy-handed regulation can favor large incumbents and stifle small innovators — a challenge the EU’s AI Act already illustrates, and one mirrored in pharmaceuticals after high compliance costs concentrated markets.
Graduated oversight can strike a balance: scale requirements and costs with demonstrated harm. Low-risk tools with low error rates face lighter obligations and active monitoring. Systems showing higher error rates trigger mandatory fixes, tighter controls or temporary withdrawal. This creates incentives for improvement while preserving pathways for small teams to innovate responsibly.
Conclusion: stewardship, not abandonment
Responsible stewardship cannot be optional. Once creators release powerful AI, they are accountable for its consequences. The question is not whether advanced AI will be built — it already is — but whether we will insist on the careful stewardship those systems demand.
The pharmaceutical regulatory model — clear training provenance, rigorous pre-deployment testing, and ongoing surveillance — provides a practical blueprint for AI systems whose real-world effects cannot be fully predicted. Mary Shelley’s caution was not merely about invention; it was about abandonment. Two centuries later, her lesson remains urgent: with synthetic intelligence spreading rapidly, we may not get another chance to choose responsibility over neglect.
About the authors: Dov Greenbaum is professor of law and director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at Reichman University in Israel. Mark Gerstein is the Albert L. Williams Professor of Biomedical Informatics at Yale University.
Originally published in the Los Angeles Times.
