AI’s Presence and Localisation in the Global South

By Sumona Bose

January 12, 2024

The Importance of AI Presence and Localization

In recent years, there has been a surge of research focused on the socio-technical implications of AI systems in the Global North. However, the Global South has largely been overlooked in this discourse. This article aims to shed light on the importance of AI's presence and localization in the Global South, and how it can contribute to addressing pressing social issues in the region. Establishing and localizing AI in the Global South matters because it extends access to, and use of, AI-driven tools among populations that stand to benefit most, particularly in public health.

Explainability is a key aspect of AI systems, ensuring that the decisions and recommendations made by these systems are understandable to the people who interact with them. Building explainability into model design also allows developers to document decision-making and to understand which parameters drive a model's behavior. However, techniques such as feature importance and model distillation, commonly used for explaining machine learning models, are not accessible to those without specialized knowledge.
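To make the feature-importance idea above concrete, here is a minimal permutation-importance sketch in plain Python. The model, data, and scoring function are hypothetical stand-ins for illustration, not drawn from any specific system: shuffling one feature's values across rows and measuring the drop in score reveals how much the model relies on that feature.

```python
import random

# Hypothetical toy "model": scores risk from age and income, ignoring the
# third feature entirely. Stands in for any opaque predictor.
def model(row):
    age, income, noise = row
    return 0.7 * age + 0.3 * income

def score(rows, targets):
    # Negated mean absolute error, so "higher is better".
    n = len(rows)
    return -sum(abs(model(r) - t) for r, t in zip(rows, targets)) / n

def permutation_importance(rows, targets, feature_idx, seed=0):
    """Drop in score when one feature's column is shuffled across rows."""
    baseline = score(rows, targets)
    rng = random.Random(seed)
    column = [r[feature_idx] for r in rows]
    rng.shuffle(column)
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, column):
        r[feature_idx] = v
    return baseline - score(permuted, targets)
```

A feature the model ignores gets an importance of exactly zero, since shuffling it cannot change any prediction; features the model depends on show a positive drop. This is the intuition behind feature-importance reports, even though production tools compute it with more care (repeated shuffles, held-out data).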

AI Localization in the Global South

To improve the understandability of AI systems, techniques such as reporting prediction accuracy, limiting the scope of automated decision-making, and educating AI teams can be employed. This becomes particularly important for corporate adopters of AI, who need to ensure that decisions made by their AI systems can be explained and understood.

In the field of explainable AI (XAI), popular techniques include SHAP and LIME. SHAP measures how model features contribute to individual predictions, while LIME trains surrogate models to approximate a model's decision-making near a given input. These techniques have been applied to various subfields of machine learning, such as computer vision and natural language processing, resulting in the development of visual explanation maps and saliency approaches. Other methods, like Anchors, explain individual predictions of classification models for text or tabular data.
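The surrogate-model idea behind LIME can be sketched in a few lines of plain Python. This is a simplified illustration of the core mechanism, not the LIME library's actual API: perturb the input around a point of interest, weight the samples by proximity, and fit a weighted linear model whose slope serves as the local explanation. The black-box function here is a hypothetical stand-in for an opaque model.

```python
import math
import random

def black_box(x):
    # Stand-in for an opaque model; here f(x) = x^2 for illustration.
    return x * x

def lime_style_slope(f, x0, n_samples=200, width=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate around x0 (LIME's core idea)."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    # Proximity kernel: samples nearer to x0 get exponentially more weight.
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    # Weighted least squares (closed form) for the surrogate's slope.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return cov / var  # local sensitivity of f near x0
```

For this toy model, the surrogate's slope near x0 = 3 lands close to the true local derivative of 6, showing how a simple, interpretable model can faithfully describe a complex one in a small neighborhood, which is the promise these techniques hold for non-specialist audiences.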

While there is growing enthusiasm about the potential of AI in the Global South, research has shown that AI can also exacerbate systemic problems, including bias and discrimination. If AI systems are not made explainable and understandable to the people who will use them, they may end up causing more harm, particularly to marginalized communities.

Conclusion

Making AI systems explainable can be a pathway to making AI more useful in real-world environments and addressing pressing social issues in domains like agriculture, healthcare, and education. Understanding how different groups of people across various settings perceive model decision-making is crucial in developing XAI systems that are responsive to their needs. Strengthening AI's presence and localization in the Global South can, in turn, improve the explainability of these systems and broaden their use in healthcare.
