The Trouble With AI: Why Smooth, Convincing Prose Can Be Deceptive

The article warns that AI — especially large language models — can invent convincing but false information, or "hallucinate," while producing fluent text. It cites recent guidance from the Courts and Tribunals Judiciary and a tribunal dispute involving NHS Fife and Sandie Peggie, where two fabricated quotations were found in a judge’s 300-page ruling. The author shares personal examples of transcription and translation errors and argues that smooth, frictionless prose can conceal dangerous inaccuracies. The piece concludes that widespread adoption of AI risks eroding factual reliability unless outputs are carefully verified.

“There is no hierarchy of rights; all are to be treated with equal respect.”
That sentence sounds unobjectionable — so polished that the eye glides past it. That very smoothness is what should set off alarm bells.
The legal world offers a recent warning. Only two months ago the Courts and Tribunals Judiciary published guidance to reduce the risk of AI-generated errors in legal documents. Yet a lengthy tribunal case between NHS Fife and Sandie Peggie — the nurse who complained after being required to share a changing room with a transgender doctor — has been destabilised by the discovery of two fabricated, non-existent quotations in a judge’s 300-page ruling.
Those fabricated passages were presented as quotations from earlier tribunal decisions. The likeliest explanation, according to reporting, is that AI was used to assemble parts of the judge’s briefings. If confirmed, the entire judgment could be withdrawn or overturned on appeal — erasing months of careful work and adding further stress to an already fraught case.
How LLMs produce believable fabrications
Large language models (LLMs) generate text by predicting which word should come next, based on statistical patterns learned from billions of sentences. That yields fluent, forward-propelling prose that often reads better than hurried human drafts. But the same mechanism produces equally fluent inventions when the facts it needs are missing, obscured or uncertain.
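To make that mechanism concrete, here is a minimal sketch of next-word prediction using a toy bigram model. The corpus, names and output are illustrative assumptions only; real LLMs are vastly larger and more sophisticated, but the core habit is the same: when the model has nothing reliable to draw on, it still produces something.

```python
# Minimal sketch of next-word prediction with a toy bigram model.
# All data here is illustrative; nothing is drawn from the article or any real system.
import random
from collections import Counter, defaultdict

corpus = "the court ruled that the appeal was dismissed and the court adjourned".split()

# Count which word follows each word in the toy corpus.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word(prev):
    """Pick the next word in proportion to how often it followed `prev` in the corpus."""
    counts = followers.get(prev)
    if not counts:                      # no pattern learned: the model still outputs *something*
        return random.choice(corpus)    # ...so it guesses, which is where fabrication creeps in
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation: fluent-looking, but with no notion of truth.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The fallback in the sketch never leaves a gap blank; it guesses. Scaled up, that same bias toward always producing a plausible continuation is part of what lets a fluent fabrication slip past the reader.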
The danger is twofold: LLMs are both adept at inventing falsehoods (so-called “hallucinations”) and very good at hiding them behind smooth, coherent language. There is often nothing obvious for the reader to snag on, which makes these errors harder to detect than clumsy mistakes.
Firsthand examples
The author recounts several practical frustrations: children who stopped asking ChatGPT to do their homework because correcting its output took longer than doing the assignment; AI transcription that invents quotes where text is obscured by folds or shadows; and an invented medieval miracle involving a non-existent Welsh saint who re-attached a boy’s head, a charming fabrication the author almost wishes were true.
In one transcription task, the tool returned fluent prose but filled shaded areas with plausible-looking quotations invented out of whole cloth. When asked about this behaviour, the system candidly admitted its limitation: "I cannot, with my current capabilities, give you a mathematically guaranteed zero-hallucination transcription. I'm biased to fill a gap rather than leave it blank."
What this means for readers, researchers and professionals
These examples highlight an important truth: frictionless tools are irresistible to users, but the ease of producing readable output can come at the cost of accuracy. For journalists, lawyers, researchers and students, that means applying rigorous verification to any AI-assisted draft — especially where factual precision matters.
Bottom line: AI tools will continue to improve and will likely be widely adopted because they save time and effort. But until hallucinations are better understood and constrained, readers and professionals must remain alert and verify facts rather than trust the sheen of smooth prose.