Ex-Hacker Warns: AI Is Turning Fraud Into an Industrial Threat — Deepfakes, Scam Farms, and Synthetic IDs
Brett Johnson, a former identity thief turned Secret Service consultant, warns AI is transforming cybercrime into industrial-scale operations. He highlights three growing threats: realistic deepfakes that shortcut trust, organized "scam farms" that run coordinated cons, and synthetic identities that are hard to detect. Johnson says synthetic fraud now represents a large share of new-account crime and offers six practical steps — from freezing household credit to enabling MFA — to reduce personal risk.

Brett Johnson, a former identity thief who later worked as a consultant to the Secret Service, warns that cybercrime is evolving into AI-powered, industrialized operations that are harder to detect and stop. After spending more than a decade stealing identities and selling credit-card data — at times earning more than $100,000 a month — Johnson now helps organizations understand how the tactics he once used are mutating.
In interviews with reporters, Johnson said the next wave of crime will be driven by artificial intelligence: machines that write scams, fabricate evidence, and even interact with victims in real time. He highlights three rising threats that are already reshaping online fraud.
Three rising threats
Deepfakes: Convincing synthetic voices, images and live-video recreations are becoming central tools for fraud. Criminals can now impersonate trusted colleagues, executives, or relatives in real time, bypassing the slow work of earning trust. Johnson recounted cases where finance staff approved large transfers after receiving video calls showing fabricated versions of actual coworkers — one reported incident involved approvals that exceeded $25 million. As AI improves at mimicking speech patterns and facial movements, we risk reaching a point where we can no longer rely on what we see or hear online.
Scam farms: Fraud has shifted from lone operators to coordinated, businesslike operations. "Scam farms" are buildings staffed with workers, sometimes trafficked or coerced, who run simultaneous cons under supervisory systems and rotating shifts. Some specialize in long-term relationship scams (often called "pig butchering") that slowly drain victims' savings. The financial and personal consequences can be devastating; one victim described how an online relationship led him to pour his wages into a fake cryptocurrency, and after the loss he had to relocate in search of better-paying work.
Synthetic identity fraud: Synthetic identities combine real and fabricated data to create new, non-existent digital people. Johnson calls this the top form of identity theft worldwide: because the identities don't correspond to a real person, they can be nearly invisible to traditional checks. He estimates synthetic fraud accounts for roughly 80% of new-account fraud, about 20% of credit-card chargebacks, and around 5% of outstanding credit-card debt. Once established, these fake identities can build credit, open accounts, apply for loans, and be used for money laundering — often only discovered after the accounts vanish.
Why this is urgent
The automation and accessibility of modern tools make fraud easier than ever. Aspiring criminals can buy tutorials, enroll in live classes, and purchase turnkey tools online, meaning they do not need deep technical knowledge to start operating. Combined with AI's ability to scale deception, this trend makes individual vigilance and institutional defenses both more important and more challenging.
Six ways to reduce your risk
- Practice situational awareness online: Treat every platform as a potential venue for predators; be skeptical of unsolicited contacts and unexpected requests.
- Freeze household credit: Place a credit freeze for everyone in your home so criminals cannot open new accounts in their names.
- Enable alerts: Turn on transaction and account alerts so you're notified of activity in real time.
- Use strong, unique passwords: Never reuse passwords; use a reputable password manager to generate and store complex credentials (a short sketch after this list shows how such passwords, and the one-time codes behind MFA, are generated).
- Enable multifactor authentication (MFA): MFA adds a critical extra layer of protection that significantly reduces account takeover risk.
- Limit social-media exposure: Avoid sharing sensitive details (birthdays, family names, location patterns) that attackers can harvest for social engineering.
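For readers curious about what sits behind the password and MFA recommendations, the minimal Python sketch below shows one way a cryptographically random password can be generated and how the time-based one-time codes used by many authenticator apps are computed (per RFC 6238). The helper names and the demo secret are illustrative assumptions; in practice a password manager and your provider's MFA enrollment handle these steps for you.

```python
import base64
import hashlib
import hmac
import secrets
import string
import struct
import time


def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation using a secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238), as MFA authenticator apps do."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # current 30-second time step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


if __name__ == "__main__":
    print("Example unique password:", generate_password())
    # Demo secret for illustration only; real MFA secrets are issued by the service you enroll with.
    demo_secret = base64.b32encode(secrets.token_bytes(20)).decode()
    print("Current one-time code:", totp(demo_secret))
```

The point is not to roll your own security tools but to show why these defenses work: the one-time code is derived from a shared secret and the current time, so a stolen or reused password alone is no longer enough to take over an account.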
Taken together, these precautions won't eliminate risk, but they will blunt many common attacks and make you a harder target. Johnson's message is clear: as AI tools make deception faster and more convincing, individual vigilance and stronger systemic defenses are essential.
