Oxford Study Warns AI Chatbots Can Give 'Dangerous' Medical Advice

A University of Oxford study warns that AI chatbots can give inconsistent and sometimes incorrect medical guidance, creating potential hazards for people seeking health information.
What the Study Did
Researchers presented 1,300 participants with clinical scenarios — for example, someone with a severe headache or a new mother who felt persistently exhausted — to test how well people could identify likely conditions and choose the appropriate next step in care.
Participants were split into two groups. One group could use AI chatbots to help hypothesise possible causes and decide whether to see a GP or go to A&E; the other group did not use AI assistance. The research team then evaluated whether participants correctly identified likely diagnoses and made appropriate care decisions.
Key Findings
The study found that people who used AI often received a mix of useful and misleading responses. Many users did not know how to phrase questions clearly or omitted important details, producing varied answers depending on wording. When chatbots listed several possible conditions, participants were often left to guess which one fit their situation — increasing the risk of misunderstanding and potentially unsafe choices.
"It could be dangerous for people to ask chatbots about their symptoms," said Dr Rebecca Payne, the study's lead medical practitioner.
"People share information gradually. They leave things out, they don't mention everything. When the AI listed three possible conditions, people were left to guess which of those can fit. This is exactly when things would fall apart," said Dr Adam Mahdi, senior author of the study.
Lead author Andrew Bean noted the analysis illustrates the conversational challenge that even top AI models face when interacting with humans who give incomplete information. "We hope this work will contribute to the development of safer and more useful AI systems," he said.
Context and Responses From Experts
Polling by Mental Health UK in November 2025 found that more than one in three UK residents now use AI to support their mental health or wellbeing, highlighting the growing role of AI tools in healthcare decisions.
Dr Bertalan Meskó, editor of The Medical Futurist, said the landscape is evolving: two major AI developers, OpenAI and Anthropic, have released health-dedicated versions of their chatbots that may perform differently in similar studies. He called for continued improvement of health-specific models alongside clear national regulation, guardrails and medical guidelines.
Practical Takeaways
What this means for users: AI can provide general medical information, but it is not a substitute for professional clinical assessment. Users should:
- Be cautious when relying on chatbot advice for diagnosis or urgent decisions.
- Provide complete information to clinicians and seek in-person or telehealth assessments when symptoms are serious or persistent.
- Prefer validated, regulated health tools and follow official guidance for emergencies (call emergency services or go to A&E when appropriate).
The study underscores the need for safer, better-regulated, health-specific AI systems and user education about the limits of current chatbots.