Cracking the Code: Making AI in Healthcare Reliable and Fair

By Thanusha Pillay

July 15, 2024

Introduction

Artificial Intelligence (AI) has the potential to transform healthcare by supporting more accurate and efficient patient care. However, one of the most significant challenges in implementing AI in clinical settings is generalisation: an AI system’s ability to perform well on new data that may differ from the data it was trained on. A recent comment published in npj Digital Medicine explores the challenges of generalisation in clinical AI and discusses potential solutions to ensure trustworthy patient outcomes.

Understanding Generalisation in Clinical AI

In healthcare, generalisation is crucial if AI systems are to make accurate predictions across diverse patient populations. Unfortunately, many machine learning (ML) models struggle to generalise effectively. This is particularly problematic in clinical settings, where the stakes are high. For instance, ML models trained on biased or non-representative datasets may fail to provide reliable predictions for underrepresented groups.
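To make this concrete, the short Python sketch below evaluates a simple model separately for each demographic subgroup. It is a minimal illustration, not part of the npj Digital Medicine comment: the dataset is synthetic, and the feature names, group labels, and class imbalance are all assumptions chosen so that the outcome relationship differs between the majority and minority groups.

```python
# Minimal sketch: surfacing subgroup generalisation gaps on synthetic data.
# The data are constructed so the feature-outcome relationship is reversed
# for the underrepresented group "B" (an illustrative assumption).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n = 2000
group = rng.choice(["A", "B"], size=n, p=[0.95, 0.05])   # ~95% vs ~5%
x1 = rng.normal(size=n) + (group == "B") * 1.5           # feature shifts by group
x2 = rng.normal(size=n)
signal = np.where(group == "A", x1 + 0.5 * x2, -x1 + 0.5 * x2)
y = (signal + rng.normal(size=n) > 0).astype(int)
df = pd.DataFrame({"x1": x1, "x2": x2, "group": group, "outcome": y})

train, test = train_test_split(df, test_size=0.3, random_state=0, stratify=df["group"])
model = LogisticRegression().fit(train[["x1", "x2"]], train["outcome"])

# Report discrimination separately per subgroup; in real settings a large gap
# signals poor generalisation for the underrepresented group.
for g, part in test.groupby("group"):
    scores = model.predict_proba(part[["x1", "x2"]])[:, 1]
    print(f"group {g}: n={len(part)}, AUC={roc_auc_score(part['outcome'], scores):.2f}")
```

Reporting performance per subgroup in this way, rather than as a single aggregate score, is how such generalisation gaps are typically made visible before deployment.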

One reason for this challenge is the inherent complexity and variability of clinical data. Clinical datasets are often high-dimensional, noisy, and contain numerous missing values. These factors can lead to overfitting, where the model performs well on training data but poorly on new, unseen data. Moreover, societal biases reflected in training data can exacerbate algorithmic biases, leading to poorer generalisation for certain groups.
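The toy example below illustrates this failure mode on entirely synthetic data (the cohort size, feature count, and model choice are assumptions made for illustration): a flexible model memorises a small, high-dimensional training set yet performs no better than chance on held-out data.

```python
# Minimal sketch of overfitting on a small, noisy, high-dimensional dataset.
# The tree memorises the training set but fails on held-out data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))       # 200 patients, 500 noisy features
y = rng.integers(0, 2, size=200)      # outcome unrelated to the features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

print("train accuracy:", model.score(X_tr, y_tr))   # ~1.0 (memorisation)
print("test accuracy:", model.score(X_te, y_te))    # ~0.5 (no better than chance)
```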

Selective Deployment: An Ethical Approach

To address the generalisation challenge, recent work in bioethics advocates for the selective deployment of AI in healthcare. Selective deployment suggests that algorithms should not be deployed for groups underrepresented in their training datasets due to the risks of poor or unpredictable performance. This approach aims to safeguard patients from unreliable predictions while ensuring that AI systems are used responsibly.
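In practice, selective deployment can be implemented as a simple gate in front of the model. The sketch below is a hypothetical illustration, not a method from the cited work: the adequacy threshold and group labels are assumptions, and predictions are returned only for groups with sufficient training representation, with all other cases deferred to clinician review.

```python
# Minimal sketch of a selective-deployment gate: only return a prediction for
# groups that were adequately represented in training; otherwise defer the
# case to clinician review. The threshold below is a hypothetical choice.
from collections import Counter

MIN_TRAINING_EXAMPLES = 500   # illustrative adequacy threshold

def make_gate(training_groups):
    counts = Counter(training_groups)

    def predict_or_defer(model, features, patient_group):
        n_seen = counts.get(patient_group, 0)
        if n_seen < MIN_TRAINING_EXAMPLES:
            return {"status": "deferred",
                    "reason": f"only {n_seen} training examples for group '{patient_group}'"}
        risk = float(model.predict_proba([features])[0, 1])
        return {"status": "predicted", "risk": risk}

    return predict_or_defer
```

Any fitted classifier exposing a scikit-learn-style predict_proba method could be passed in as the model; the point of the gate is simply that the decision to predict at all depends on how well the patient's group was represented in training.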

Case Study: Breast Cancer Prognostic Algorithm

Breast cancer predominantly affects biological women, occurring roughly 100 times more often than in biological men. As a result, men are underrepresented in clinical datasets and tend to experience worse health outcomes. A recent breast cancer prognostic algorithm, trained solely on female data, accurately predicts outcomes for women but is expected to underperform for men. Excluding men from the algorithm protects them from unreliable predictions, but it raises ethical concerns about fairness and equal access to advanced treatments.

Technical Solutions for Generalisation

To improve the generalisation of AI models in clinical settings, several technical solutions can be employed:

Data augmentation: adding real or synthetic data to training datasets enhances the model’s ability to learn from diverse examples.

Foundation models and low-data training: fine-tuning large-scale, generalist foundation models on limited data, or using training paradigms such as model distillation and contrastive learning, can boost generalisation in low-data scenarios.

Out-of-distribution (OOD) detection: these methods flag samples that deviate significantly from the training data, identifying cases where model predictions may be unreliable (a minimal sketch follows below).

Human-in-the-loop safeguards: involving a human reviewer in medium- and high-risk clinical applications provides an extra layer of protection, ensuring critical decisions are not solely dependent on AI models.
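As one concrete illustration of the OOD detection idea, the sketch below fits an IsolationForest on the training features and flags new samples that deviate strongly from them. The article does not prescribe a specific detector, so this choice and the synthetic data are assumptions made for illustration.

```python
# Minimal sketch of out-of-distribution (OOD) detection: fit a detector on the
# training features and flag new samples that deviate strongly from them, so
# their model predictions can be treated as unreliable. Data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(1000, 10))    # in-distribution cohort
X_new = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(5, 10)),             # resembles training data
    rng.normal(loc=5.0, scale=1.0, size=(5, 10)),             # shifted cohort (OOD)
])

detector = IsolationForest(random_state=0).fit(X_train)
flags = detector.predict(X_new)   # +1 = in-distribution, -1 = flagged as OOD

for i, flag in enumerate(flags):
    status = "OOD - route to clinician review" if flag == -1 else "in-distribution"
    print(f"sample {i}: {status}")
```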

 

Figure 1. AI generalisation challenges and solutions

Ethical Considerations and Future Directions

Ethical considerations are crucial in the deployment of AI in healthcare to ensure fairness and equal access to advanced treatments. Selective deployment, supported by robust technical solutions, can help balance these ethical concerns. Active data-centric AI techniques are essential for guiding data collection and valuation, ensuring that training data is representative and diverse, thereby reducing algorithmic biases. Additionally, synthetic data generation can enhance model generalisation by augmenting small datasets and simulating real-world distribution shifts, but it must be done using fair generation approaches to avoid propagating biases.
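As a rough illustration of synthetic augmentation, the sketch below jitters records from an underrepresented subgroup with small Gaussian noise to enlarge the training pool. This naive approach is an assumption made purely for illustration and is no substitute for the fair generation methods mentioned above, which require careful validation before clinical use.

```python
# Minimal sketch of synthetic augmentation for an underrepresented subgroup:
# jitter existing minority-group records with small Gaussian noise. Real
# synthetic-data methods (e.g. GAN- or SMOTE-style generation) need careful
# validation to avoid propagating bias; this is only a naive stand-in.
import numpy as np

def augment_minority(X, group, minority_label, factor=5, noise_scale=0.05, seed=0):
    """Return extra synthetic feature rows for the minority group.
    In practice the corresponding outcome labels would be carried alongside."""
    rng = np.random.default_rng(seed)
    X_min = X[group == minority_label]
    synthetic = []
    for _ in range(factor):
        jitter = rng.normal(scale=noise_scale * X_min.std(axis=0), size=X_min.shape)
        synthetic.append(X_min + jitter)
    return np.vstack(synthetic)

# Hypothetical usage: 950 majority vs 50 minority records with 8 features.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))
group = np.array(["majority"] * 950 + ["minority"] * 50)
X_extra = augment_minority(X, group, "minority")
print("original minority rows:", (group == "minority").sum())
print("synthetic rows added:  ", X_extra.shape[0])
```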

Conclusion

Generalisation remains a key challenge for the responsible implementation of AI in clinical settings. Selective deployment and techniques like data augmentation, model distillation, and OOD detection enhance AI model reliability. Ethical considerations must guide these efforts to ensure that all patients benefit from advanced AI-driven healthcare solutions.
