The Benefits and Risks of Medical AI Chatbots

By Michael Awood

September 7, 2023

Large Language Models (LLMs) are gaining prevalence in healthcare. OpenAI, in collaboration with Microsoft Research, is exploring use cases for this technology in healthcare and medical applications, aiming to understand the opportunities, limitations, and risks in this context. Google’s Language Model for Dialogue Applications (LaMDA), since succeeded by its Pathways Language Model (PaLM), and OpenAI’s earlier Generative Pre-trained Transformer 3.5 (GPT-3.5) have also been studied for medical applications. Interestingly, although these LLMs were not specifically trained for healthcare, they demonstrated competence in the medical field using only publicly available internet information.

These LLMs have been used to develop the next tool in a physician’s pocket: a medical AI chatbot. By integrating OpenAI’s GPT-4 with medical expertise, researchers created a chatbot that engages users conversationally. Users initiate a session by entering a query, or “prompt”, in natural language, and GPT-4 responds, producing a human-like conversation. The system’s ability to maintain the context of an ongoing conversation enhances its usability and natural feel.
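As a rough illustration of how such a session can be held together, the minimal sketch below keeps a running message history so that each new prompt is interpreted in the context of earlier turns. The client setup, model name, system prompt, and example questions are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch of a multi-turn GPT-4 session using the OpenAI Python SDK.
# The system prompt and example questions below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The running message list is what lets the model keep the conversation's context.
messages = [{"role": "system",
             "content": "You are a clinical assistant supporting a physician."}]

def ask(prompt: str) -> str:
    """Send one user turn and append both sides to the shared history."""
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("Summarise the differential for acute chest pain in a 54-year-old."))
print(ask("Which of those would you rule out first, and why?"))  # relies on the prior turn
```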

However, its responses are sensitive to the wording of the prompt, which necessitates careful development and testing of prompts. GPT-4 can accurately answer prompts with a definitive answer, but it can also engage in complex interactions where no single correct answer exists. It also provides error checking, identifying mistakes both in its own work and in human-generated content.
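One simple way to probe this prompt sensitivity is to send several phrasings of the same clinical question and compare the replies side by side. The sketch below assumes the same OpenAI client setup as above; the question variants are made up for illustration.

```python
# Hypothetical prompt-testing loop: one clinical question phrased three ways,
# so reviewers can see how wording alone shifts the answer.
from openai import OpenAI

client = OpenAI()

variants = [
    "List the first-line treatments for community-acquired pneumonia.",
    "What would you start a healthy adult on for community-acquired pneumonia?",
    "Community-acquired pneumonia in an otherwise healthy adult: initial antibiotic choice?",
]

for prompt in variants:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variation so differences reflect wording
    )
    print(f"PROMPT: {prompt}\nREPLY:  {response.choices[0].message.content}\n")
```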

GPT-4’s medical knowledge can support tasks such as consultation, diagnosis, and education. It can read medical research material and engage in an informed discussion about it. However, like human reasoning, GPT-4 is fallible: it makes mistakes, but it can also identify them. The medical AI chatbot can write medical notes based on exchanges between providers and patients, including notes in the subjective, objective, assessment, and plan (SOAP) format, and it adds billing codes where necessary. The chatbot can also handle authorisation information and prescriptions that comply with the Health Level Seven (HL7) and Fast Healthcare Interoperability Resources (FHIR) standards.
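For context on what FHIR-conformant output looks like, the sketch below builds a minimal MedicationRequest resource of the kind a prescription could be mapped to. The patient reference, drug, date, and dosage text are made-up placeholders; only the field names and structure follow the FHIR R4 specification.

```python
# Minimal FHIR R4 MedicationRequest sketch; all clinical values are placeholders.
import json

medication_request = {
    "resourceType": "MedicationRequest",
    "status": "active",
    "intent": "order",
    "medicationCodeableConcept": {"text": "Amoxicillin 500 mg oral capsule"},
    "subject": {"reference": "Patient/example-123"},  # placeholder patient id
    "authoredOn": "2023-09-07",
    "dosageInstruction": [
        {"text": "One capsule by mouth three times daily for 7 days"}
    ],
}

print(json.dumps(medication_request, indent=2))  # ready to send to a FHIR server
```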

However, problems such as false responses, or “hallucinations”, pose dangers in a medical context. In one instance, the AI chatbot produced a medical note recording a body-mass index (BMI) even though no related details had been entered into the system. In another, the chatbot reported no problems for the patient, while the clinician identified signs of medical complications.

While these tools can significantly enhance the consultation process and assist both the provider and the patient, they are not without flaws and risks. The article describes one mitigation: having the chatbot re-read its own output, which allowed it to correctly identify these errors.
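A hedged sketch of that re-read check is shown below: a second GPT-4 call is asked to compare a generated note against the source transcript and flag any statement the transcript does not support. The prompt wording and client setup are assumptions for illustration, not the article’s actual implementation.

```python
# Hypothetical second-pass audit: the model re-reads a draft note against the
# encounter transcript and lists unsupported statements.
from openai import OpenAI

client = OpenAI()

def review_note(transcript: str, draft_note: str) -> str:
    """Ask the model to audit a draft note against the original encounter."""
    audit_prompt = (
        "Compare the draft clinical note with the encounter transcript. "
        "List every statement in the note that is not supported by the transcript, "
        "or reply 'No unsupported statements found.'\n\n"
        f"TRANSCRIPT:\n{transcript}\n\nDRAFT NOTE:\n{draft_note}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": audit_prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# For example, this check would flag a BMI appearing in the note when height
# and weight were never recorded in the transcript.
```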

The article also highlights other improvements needed in LLMs and chatbots. This is just the beginning of new possibilities, and of new risks. But there is no denying that these tools hold the potential to optimise healthcare services.
