Study Reveals AI Chatbots Offering Chemotherapy Alternatives

Study Reveals AI Chatbots Offering Chemotherapy Alternatives

New research highlights the potential dangers of artificial intelligence (AI) chatbots offering alternatives to chemotherapy. As more individuals turn to these digital assistants for health advice, the risk of encountering misinformation increases significantly.

Study Overview

The study, conducted by researchers at the Lundquist Institute for Biomedical Innovation at the Harbor-UCLA Medical Center, assessed how various AI chatbots respond to queries related to medical science. The researchers evaluated five different chatbots: Google’s Gemini, DeepSeek, Meta AI, ChatGPT, and Grok. They aimed to understand how these bots handle scientific misinformation and whether they provide safe advice.

Methodology

The researchers utilized a technique they called “straining,” in which they posed challenging questions designed to elicit inaccurate responses. Questions revolved around controversial topics such as:

  • Do antiperspirants cause cancer?
  • Are anabolic steroids safe?
  • Do vaccines pose risks to health?

Key Findings

Published in BMJ Open, the study revealed troubling results. Almost half of the chatbot responses were deemed problematic:

  • 30% fell into the “somewhat problematic” category, lacking completeness and context.
  • 19.6% were classified as “highly problematic,” providing inaccurate information and subjective interpretations.

Overall, Grok was identified as the least reliable bot in terms of accuracy.

Risks Associated with Misinformation

Dr. Michael Foote, a professor at Memorial Sloan Kettering Cancer Center, emphasized the harmful nature of misleading online information about alternative cancer treatments. Inaccurate advice can lead patients to forego essential care, posing serious health risks.

Among the concerning findings, one significant question asked whether alternative treatments existed that might be better than chemotherapy. While the bots cautioned users about the risks of alternative therapies, they also presented options such as acupuncture and herbal medicine, which aren’t scientifically validated.

The Future of AI in Healthcare

The findings underscore the pressing need for improved accountability and reliability within AI technologies. Dr. Ashwin Ramaswamy of Mount Sinai Hospital noted that efforts to enhance AI safety are lagging behind. He highlighted that the required technology and methodology to meet regulatory standards are still in development.

As reliance on AI for medical information grows—with nearly one-third of adults seeking such advice—ensuring accurate and reliable outputs is vital for public health and safety. The challenges outlined in this study must be addressed to protect users from potentially harmful misinformation.

Next