AI’s Presence and Localisation in the Global South

By Sumona Bose

January 12, 2024

The Importance of AI Presence and Localization

In recent years, there has been a surge of research focused on the socio-technical implications of AI systems in the Global North. However, the Global South has largely been overlooked in this discourse. This article aims to shed light on the importance of AI’s presence and localisation in the Global South, and how it can contribute to addressing pressing social issues in the region. Localising AI in the Global South matters because it expands access to, and uptake of, these systems among populations that stand to benefit in domains such as public health.

Explainability is a key aspect of AI systems, ensuring that the decisions and recommendations made by these systems are understandable to the people who interact with them. Explainability, built in during model design, allows developers to record decision-making and understand how a model's parameters shape its outputs. However, techniques commonly used to explain machine learning models, such as feature importance and model distillation, are not accessible to those without specialised knowledge.
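To make the feature-importance idea above concrete, here is a minimal sketch of *permutation* feature importance in plain Python: shuffle one feature's values across rows and measure how much the model's accuracy drops. The toy dataset, the stand-in `model` function, and all names are hypothetical illustrations, not part of any particular library.

```python
import random

random.seed(0)

# Toy dataset: two features per row; the label depends only on feature 0.
X = [(random.random(), random.random()) for _ in range(200)]
y = [1 if x0 > 0.5 else 0 for x0, _ in X]

def model(row):
    # Stand-in "trained" model: thresholds feature 0, ignores feature 1.
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(model(r) == t for r, t in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature_idx):
    """Drop in accuracy when one feature's values are shuffled across rows.

    A large drop means the model relied on that feature; a drop near
    zero means the feature was irrelevant to its predictions.
    """
    baseline = accuracy(rows, labels)
    col = [r[feature_idx] for r in rows]
    random.shuffle(col)
    shuffled = [
        tuple(col[k] if i == feature_idx else v for i, v in enumerate(r))
        for k, r in enumerate(rows)
    ]
    return baseline - accuracy(shuffled, labels)

imp0 = permutation_importance(X, y, 0)  # substantial drop: feature 0 matters
imp1 = permutation_importance(X, y, 1)  # no drop: feature 1 is ignored
```

The appeal of this technique is that it treats the model as a black box, but as the article notes, interpreting the resulting numbers still takes statistical literacy that many end users do not have.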

AI Localization in the Global South

To improve the understandability of AI systems, techniques such as looking at prediction accuracy, limiting the scope of decision-making, and educating AI teams can be employed. This becomes particularly important for corporate adopters of AI, who need to ensure that decisions made by their AI systems can be explained and understood.

In the field of explainable AI (XAI), popular techniques include SHAP and LIME. SHAP measures how model features contribute to individual predictions, while LIME trains surrogate models to explain the decision-making process. These techniques have been applied to various subfields of machine learning, such as computer vision and natural language processing, resulting in the development of visual explanation maps and saliency approaches. Other methods, like Anchors, explain individual predictions of classification models for text or tabular data.
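The surrogate-model idea behind LIME can be sketched in a few lines: perturb an instance, query the black-box model on the perturbations, and fit a simple linear approximation whose weights indicate each feature's local influence. The sketch below is a deliberately simplified stand-in (per-feature least-squares slopes rather than the weighted linear model the real LIME library fits); `black_box` and all other names are hypothetical.

```python
import random

random.seed(1)

def black_box(x0, x1):
    # Opaque classifier we want to explain locally (nonlinear in x1).
    return 1.0 if 2 * x0 + x1 ** 2 > 1.0 else 0.0

def explain_locally(x0, x1, n_samples=500, scale=0.1):
    """LIME-style sketch: sample perturbations around (x0, x1), query the
    black box, and estimate a linear surrogate's weight per feature via
    the least-squares slope (covariance over variance)."""
    rows, outputs = [], []
    for _ in range(n_samples):
        p = (x0 + random.gauss(0, scale), x1 + random.gauss(0, scale))
        rows.append(p)
        outputs.append(black_box(*p))
    mean_y = sum(outputs) / n_samples
    weights = []
    for i in range(2):
        col = [r[i] for r in rows]
        mean_x = sum(col) / n_samples
        cov = sum((c - mean_x) * (o - mean_y) for c, o in zip(col, outputs))
        var = sum((c - mean_x) ** 2 for c in col)
        weights.append(cov / var)
    return weights

# Near (0.4, 0.5) the boundary's local gradient is (2, 1), so the
# surrogate should weight feature 0 more heavily than feature 1.
w0, w1 = explain_locally(0.4, 0.5)
```

The design choice worth noting is locality: the surrogate is only trusted near the queried instance, which is precisely why such explanations must be presented carefully to non-specialist audiences.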

While there is growing enthusiasm about the potential of AI in the Global South, research has shown that AI can also exacerbate systemic problems, including bias and discrimination. If AI systems are not made explainable and understandable to the people who will use them, they may end up causing more harm, particularly to marginalized communities.

Conclusion

Making AI systems explainable can be a pathway to making AI more useful in real-world environments and addressing pressing social issues in domains like agriculture, healthcare, and education. Understanding how different groups of people across various settings perceive model decision-making is crucial in developing XAI systems that are responsive to their needs. Ultimately, AI’s presence and localisation in the Global South strengthens both its explainability and its usefulness in domains such as healthcare.

