
The DoorDash “Deep Throat” Hoax: How AI Made a Viral Lie — And Why Verification Matters

When Business Insider ran the image of an employee badge the Reddit poster provided through Gemini, the software said the image was likely AI-generated. Image: Reddit user Trowaway_whistleblow

The viral Reddit "Deep Throat" post alleged that a food-delivery app exploited drivers through secret scores and AI-driven pay cuts. Reporting by The Verge and Business Insider found that the author likely fabricated supporting documents using generative tools, and the thread was deleted. The episode underscores real concerns about algorithmic surveillance and past wage litigation, shows that careful verification can still unmask sophisticated fakes, and makes the case for renewed investment in rigorous fact-checking and transparent reporting in the AI era.

Over a single weekend, a viral Reddit post purporting to come from a whistleblower at a food-delivery company ignited an online firestorm. The anonymous author, calling themselves a modern "Deep Throat," alleged that managers called drivers "human assets," that workers were ranked by a so-called "desperation score," and that predictive models were used to dynamically lower base pay. The post suggested the app's paid priority-delivery feature was mostly a "psychological value-add."

Rapid Spread and High-Profile Reaction

The thread gathered roughly 87,000 upvotes on Reddit, and a screenshot of it was shared more than 40 million times on X, drawing responses from journalists, activists, and celebrities. DoorDash CEO Tony Xu publicly denied the claims late Sunday, writing on X:

"This is not DoorDash, and I would fire anyone who promoted or tolerated the kind of culture described in this Reddit post."

Reporting Uncovers a Fabrication

Investigations by The Verge and Business Insider found evidence that the poster had likely fabricated supporting materials. The person supplied an ID-badge image and a PDF that reporters judged suspicious; AI-detection tools flagged at least one image as likely generated. Business Insider reporter Alex Bitter said the poster refused to verify their identity or speak on the record, and eventually stopped responding to follow-ups. By Tuesday, the Reddit post had been deleted.

Why So Many Believed It

The hoax resonated because its claims echoed real, documented concerns about algorithmic surveillance and pay practices in the gig economy. DoorDash agreed to a $16.75 million settlement in New York over allegations that the company used customer tips to offset drivers' base pay; the settlement required changes to pay-transparency practices, though DoorDash did not admit wrongdoing. Independent researchers have also documented intrusive monitoring systems: a DAIR report described an app called Mentor used to rank Amazon delivery drivers, which sources say can penalize drivers for events beyond their control.

AI Cuts Both Ways

There is a bitter irony in this episode: the hoaxer used generative AI to fabricate evidence, while journalists used AI-detection tools to expose the fake. Generative models and image tools have made it far easier to create convincing fakes, and detection remains imperfect. Reporters relied on a tool that flagged a supplied Uber Eats badge as likely machine-generated; Uber confirmed the card did not match the company's actual IDs. But detection tools are not foolproof: they can only identify artifacts tied to particular generators, or telltale signs they have been trained to recognize.
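For readers curious what such a check involves, below is a minimal Python sketch of how a newsroom might run a suspect image through an off-the-shelf classifier. The model name and file path are placeholders rather than the tools The Verge or Business Insider actually used, and any score such a classifier produces is a reason to keep digging, not proof in itself.

    # Minimal sketch: triaging a suspect image with an off-the-shelf detector.
    # Assumes the "transformers" and "Pillow" packages are installed; the model id
    # below is a placeholder for whichever AI-image detector a newsroom chooses
    # and is not the tool cited in this story.
    from transformers import pipeline
    from PIL import Image

    detector = pipeline(
        "image-classification",
        model="umm-maybe/AI-image-detector",  # placeholder detector checkpoint
    )

    def triage(path: str) -> None:
        # Print the detector's label scores for one image.
        image = Image.open(path).convert("RGB")
        for result in detector(image):
            print(f"{result['label']:>12}: {result['score']:.2%}")

    triage("suspect_badge.png")  # hypothetical filename supplied by a source

Business Insider's check used Gemini, a general-purpose multimodal model, rather than a dedicated classifier; either way, the caveat is the same: these systems recognize the fingerprints of generators they already know, so a clean result does not certify authenticity.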

Wider Lessons

This case highlights two facts at once: (1) public concern about opaque algorithms and worker surveillance is not unfounded, and (2) the ease of creating realistic fakes makes careful verification more valuable than ever. The viral hoax did not harden into a lasting "truth" because reporters and fact-checkers pushed for evidence and used forensic tools. Still, the episode is a warning: as generative tools improve, so does the burden on newsrooms to verify sources, preserve transparency, and resist engagement-first incentives.

Conclusion

The DoorDash "Deep Throat" episode is a cautionary tale about trust in the AI era. It demonstrates how quickly credible-sounding allegations can spread and how generative tools can be abused to fabricate supporting evidence. At the same time, it shows that rigorous reporting, cross-checking, and transparency can still catch frauds before they calcify into accepted narrative. News organizations and platforms should take this moment to strengthen verification practices and public communication about how they authenticate claims.

Sources: Reporting from The Verge and Business Insider; DAIR report on delivery-driver surveillance; DoorDash public statements; public records of the New York settlement.

Tekendra Parmar is a former features editor at Business Insider and has written for Rest of World, Time, Fortune, Harper's Magazine, and The Nation.
