CRBC News

ChatGPT Searches Linked to Teen Arrests in Florida — Experts Warn: 'AI Is Not Your Friend'

Florida authorities have linked ChatGPT searches to investigations involving several teenagers, including a 17-year-old accused of staging an abduction and a 13-year-old who reportedly asked how to kill a classmate. Experts say searching for methods to commit crimes is risky, guardrails on AI are imperfect, and users—especially children—should understand that AI chat tools are products, not friends. The consensus: parents, schools and tech companies must act to reduce harm and improve safety.

Law-enforcement officials in Florida say ChatGPT searches have been cited in investigations involving multiple teenagers, prompting experts to warn parents, educators and young people about treating conversational AI as a private confidant.

In October, the Marion County Sheriff's Office arrested a 17-year-old who investigators say fabricated his own abduction by four Hispanic men, a hoax that triggered an Amber Alert, and shot himself to make the story more convincing. Officers reportedly found ChatGPT searches on the teen's laptop about Mexican cartels and about how to draw his own blood without causing pain.

Separately, the Volusia County Sheriff's Office says a 13-year-old in DeLand was detained after typing 'how to kill my friend in the middle of class' into ChatGPT. School officials were alerted on Sept. 26 by Gaggle, a school-safety platform that scans school-issued accounts for troubling content. When questioned, the student told investigators he was 'trolling a friend who was annoying him.' The sheriff's office urged parents to talk with their children about the risks of such searches.

Online activity surfacing in criminal probes is nothing new, but the rapid adoption of generative AI tools like ChatGPT has raised fresh concerns. Millions of people now use these services for instant answers, essay help and problem-solving, which makes misuse, and the misinterpretation of what users type, both more widespread and quicker to surface than before.

'Simply searching for information is not automatically a crime,' said Tamara Lave, a law professor at the University of Miami, while noting exceptions such as seeking or sharing child sexual-abuse material. 'What has changed is access: people can now retrieve information they could not easily find before, and that raises understandable alarm about how that information might be used.'

Catherine Crump, a clinical professor at Berkeley Law who focuses on AI and technology law, emphasized that it is unwise to use ChatGPT or search engines to plan or further criminal acts. She warned that people—especially kids—should recognize ChatGPT as a commercial product, not a friend. 'These systems are designed to be obsequious and supportive,' Crump said, 'which can create the false sense of a private, sympathetic conversation.'

AI companies have implemented guardrails intended to block instructions for committing violent or illegal acts, but experts say those defenses are imperfect. ChatGPT has also faced scrutiny over how it responds to users showing signs of suicidal ideation; companies say their models are trained to detect and respond to mental or emotional distress. OpenAI did not respond to a request for comment for this article.

Experts stress a dual approach: individual responsibility and corporate accountability. Users should avoid seeking instructions for harm and understand that chats are algorithmic responses built from patterns in training data. At the same time, companies should continue strengthening safety measures and transparency so that tools do not inadvertently enable dangerous behavior.

Takeaway: Conversations with AI are not private confessions. Parents, schools and technology providers all have roles to play in educating young people about safe and responsible use of generative AI.