
Scoping review of 60 sources in LMICs identifies barriers to AI adoption and governance gaps.

Key Takeaway
Successful AI integration in LMICs requires context-sensitive design, participatory governance, and capacity building.

A scoping review systematically mapped 60 sources addressing AI governance, ethical, regulatory, or implementation issues in low- and middle-income countries (LMICs). The study aimed to characterize barriers to AI adoption and strategies for inclusive deployment across these diverse settings. No randomized trials or comparative effectiveness data were available; the design focused on literature synthesis rather than primary clinical outcomes.

Key results indicated that only 7.4% of LMICs have adopted national AI strategies. Furthermore, over 60% of AI models in these regions rely on non-representative datasets, a factor potentially increasing contextual bias. The distribution of study focus showed that 25 sources examined ethics, 17 addressed regulatory gaps, and 18 focused on implementation challenges. Additionally, fewer than 10% of institutions offer structured AI training, indicating a significant gap in workforce readiness.

Safety and tolerability data were not applicable, as this was a literature-mapping study rather than a clinical trial; no adverse events or discontinuations were assessed. A critical limitation was the substantial gap in empirical research on how AI is operationalized in these settings. The evidence remains descriptive rather than causal, reflecting the early stage of research in this domain.

Practice relevance suggests that stakeholders should prioritize context-sensitive design and participatory governance to overcome identified barriers. Capacity building is essential to address the scarcity of structured training and the reliance on non-representative data. Clinicians and policymakers should interpret these findings as a call for improved infrastructure and ethical frameworks rather than immediate clinical guidelines.

Study Details

Study type: Scoping review
Evidence: Level 1
Published: Apr 2026
Original Abstract
Artificial intelligence (AI) has the potential to revolutionize healthcare delivery in low- and middle-income countries (LMICs), yet its rapid adoption raises complex ethical, regulatory, and implementation challenges. This review investigates these barriers and identifies emerging strategies that support equitable and inclusive AI deployment in resource-limited settings. Following the PRISMA Extension for Scoping Reviews (PRISMA-ScR) guidelines, a systematic mapping of literature was conducted using PubMed, Scopus, and Cochrane Library (2000–2025) alongside global health policy reports. The search was framed using the Population, Concept, and Context (PCC) framework to identify studies addressing AI governance in LMICs. A total of 60 sources addressing ethical, regulatory, or implementation issues were analyzed across three domains derived from the WHO and OECD frameworks: governance, privacy, and AI applications. This study reveals that 7.4% of LMICs have adopted national AI strategies. Evidence indicates that over 60% of AI models in LMICs rely on non-representative datasets, increasing contextual bias. Of the 60 included studies, 25 focused on ethics, 17 on regulatory gaps, and 18 on implementation. Findings highlight workforce readiness gaps, with fewer than 10% of institutions offering structured AI training. Case studies from Brazil and India illustrate how these barriers are addressed through context-sensitive design. Successful AI integration requires context-sensitive design, participatory governance, and capacity building. This scoping review identifies critical gaps in empirical research on operationalization and recommends a transition from digital dependency to local innovation ecosystems.