AI’s Presence and Localisation in the Global South

By Sumona Bose

January 12, 2024

The Importance of AI Presence and Localization

In recent years, there has been a surge of research focused on the socio-technical implications of AI systems in the Global North. However, the Global South has largely been overlooked in this discourse. This article aims to shed light on the importance of AI’s presence and localization in the Global South, and how it can contribute to addressing pressing social issues in the region. AI’s presence and localization in the Global South matter because they expand access to, and use of, these systems among populations served by public health programs.

Explainability is a key aspect of AI systems, ensuring that the decisions and recommendations made by these systems are understandable to the people who interact with them. Building explainability into model design also allows developers to document decision-making and understand how model parameters influence outputs. However, techniques such as feature importance and model distillation, commonly used for explaining machine learning models, are not accessible to those without specialized knowledge.

AI Localization in the Global South

To improve the understandability of AI systems, practitioners can examine prediction accuracy, limit the scope of automated decision-making, and educate AI teams. This becomes particularly important for corporate adopters of AI, who need to ensure that decisions made by their AI systems can be explained and understood.

In the field of explainable AI (XAI), popular techniques include SHAP and LIME. SHAP quantifies how much each feature contributes to an individual prediction, while LIME fits a local surrogate model around a single prediction to approximate the decision-making process. These techniques have been applied to various subfields of machine learning, such as computer vision and natural language processing, resulting in the development of visual explanation maps and saliency approaches. Other methods, like Anchors, explain individual predictions of classification models for text or tabular data through high-precision rules.
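The intuition behind these feature-attribution techniques can be sketched with a simpler, model-agnostic relative: permutation feature importance. The sketch below (assuming scikit-learn is available; it is an illustration of the general idea, not the SHAP or LIME implementations themselves) measures how much a model's accuracy drops when each feature's values are shuffled, breaking that feature's relationship to the target.

```python
# Minimal sketch of permutation feature importance: shuffle one feature
# at a time and measure the drop in accuracy. Uses the Iris dataset as
# a stand-in; any fitted classifier would work.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)  # accuracy with all features intact

importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    # Shuffling column j severs its link to the labels.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(baseline - model.score(X_perm, y))

for name, imp in zip(data.feature_names, importances):
    print(f"{name}: accuracy drop {imp:.3f}")
```

A larger accuracy drop signals a more influential feature. SHAP and LIME refine this basic idea with game-theoretic attributions and local surrogate models, respectively, to produce per-prediction rather than global explanations.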

While there is growing enthusiasm about the potential of AI in the Global South, research has shown that AI can also exacerbate systemic problems, including bias and discrimination. If AI systems are not made explainable and understandable to the people who will use them, they may end up causing more harm, particularly to marginalized communities.

Conclusion

Making AI systems explainable can be a pathway to making AI more useful in real-world environments and addressing pressing social issues in domains like agriculture, healthcare, and education. Understanding how different groups of people across various settings perceive model decision-making is crucial in developing XAI systems that are responsive to their needs. AI’s presence and localization in the Global South increase its explainability and its usefulness in healthcare.
