AI’s Presence and Localisation in the Global South

By Sumona Bose

January 12, 2024

The Importance of AI Presence and Localisation

In recent years, there has been a surge of research focused on the socio-technical implications of AI systems in the Global North, while the Global South has largely been overlooked in this discourse. This article aims to shed light on the importance of AI's presence and localisation in the Global South, and how these can contribute to addressing pressing social issues in the region. AI's presence and localisation in the Global South matter because they expand access to, and use of, these systems among communities underserved in areas such as public health.

Explainability is a key aspect of AI systems, ensuring that the decisions and recommendations made by these systems are understandable to the people who interact with them. Explainability and careful model design allow developers to record how decisions are made and to understand the parameters that drive them. However, techniques such as feature importance and model distillation, commonly used for explaining machine learning models, are not accessible to those without specialized knowledge.
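To make the feature-importance idea mentioned above concrete, the following is a minimal sketch of permutation importance: shuffle one feature's values and measure how much the model's error grows. The toy model, data, and function names here are all illustrative, not from any particular library.

```python
import random

# Toy "model": feature 0 matters far more than feature 1 (illustrative only).
def predict(row):
    return 3.0 * row[0] + 0.1 * row[1]

def mean_abs_error(rows, targets):
    return sum(abs(predict(r) - t) for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature_idx, seed=0):
    """Importance = how much error grows when one feature's column is shuffled."""
    baseline = mean_abs_error(rows, targets)
    rng = random.Random(seed)
    column = [r[feature_idx] for r in rows]
    rng.shuffle(column)
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, column):
        r[feature_idx] = v
    return mean_abs_error(permuted, targets) - baseline

rows = [[x, 10 - x] for x in range(10)]
targets = [predict(r) for r in rows]  # model fits this toy data exactly

imp0 = permutation_importance(rows, targets, 0)
imp1 = permutation_importance(rows, targets, 1)
# Shuffling the influential feature (0) should degrade accuracy far more than feature 1.
```

The appeal of the technique is that it treats the model as a black box; the drawback, as the article notes, is that interpreting the resulting scores still requires statistical literacy.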

AI Localization in the Global South

To improve the understandability of AI systems, techniques such as looking at prediction accuracy, limiting the scope of decision-making, and educating AI teams can be employed. This becomes particularly important for corporate adopters of AI, who need to ensure that decisions made by their AI systems can be explained and understood.

In the field of explainable AI (XAI), popular techniques include SHAP and LIME. SHAP measures how model features contribute to individual predictions, while LIME trains surrogate models to explain the decision-making process. These techniques have been applied to various subfields of machine learning, such as computer vision and natural language processing, resulting in the development of visual explanation maps and saliency approaches. Other methods, like Anchors, explain individual predictions of classification models on text or tabular data.
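The surrogate-model idea behind LIME can be sketched in a few lines: perturb an input, query the black-box model, and fit a simple linear model weighted by proximity to the original point, whose slope then serves as the local explanation. This is a hand-rolled illustration of the idea, not the LIME library's API; the function and parameter names are assumptions.

```python
import math
import random

# Black-box model we want to explain locally (illustrative, nonlinear).
def black_box(x):
    return x * x

def lime_style_slope(model, x0, num_samples=200, width=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate to the model around x0.

    The returned slope is the local explanation: how the prediction
    changes per unit change of the input near x0.
    """
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(num_samples)]
    ys = [model(x) for x in xs]
    # Gaussian proximity kernel: samples near x0 count more.
    ws = [math.exp(-((x - x0) ** 2) / (width ** 2)) for x in xs]
    # Weighted least squares for y = a + b*x (closed form for the slope b).
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    num = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return num / den

slope = lime_style_slope(black_box, x0=3.0)
# For f(x) = x^2, the true local slope at x = 3 is 6; the surrogate should land nearby.
```

Real implementations extend this to many features, interpretable input representations, and regularized surrogate models, but the core mechanism is the same.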

While there is growing enthusiasm about the potential of AI in the Global South, research has shown that AI can also exacerbate systemic problems, including bias and discrimination. If AI systems are not made explainable and understandable to the people who will use them, they may end up causing more harm, particularly to marginalized communities.

Conclusion

Making AI systems explainable can be a pathway to making AI more useful in real-world environments and to addressing pressing social issues in domains like agriculture, healthcare, and education. Understanding how different groups of people across various settings perceive model decision-making is crucial to developing XAI systems that are responsive to their needs. AI's presence and localisation in the Global South, in turn, strengthen both its explainability and its uptake in healthcare.
